\section{Results}
The samples studied are single-crystal GaN layers, codoped with Mn and Mg, each with concentrations lower than 1\%. Their fabrication, architecture, and preliminary characterisation are summarised in the \emph{Methods}.
Since the incorporation of interstitial hydrogen forming complexes with Mg is a recurrent challenge associated with Mg-doping of GaN, we have performed a careful chemical analysis, detailed in Supplementary Figure~S1, showing that in our case the effect of interstitial hydrogen can reasonably be neglected.
\subsection{Nature of Mn--Mg complexes}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\textwidth]{fig1-devillers.eps}
\caption{ \label{fig:complexes}\textbf{Mn--Mg$_k$ complexes predicted by theory and experimental demonstration by EXAFS.} \textbf{a}, most stable Mn--Mg$_k$ complexes ($k$\,=\,1, 2, and 3) and their pairing energies -- computed by DFT within the GGA+U approximation -- relative to the previous Mn--Mg$_{k-1}$ complex. A gray arrow indicates the [0001] direction, \emph{i.e.} the GaN \emph{c}-axis; \textbf{b}, number of Mg atoms seen by Mn in the first cation coordination sphere ($n_\mathrm{Mg}$), extracted from EXAFS measurements and DFT predictions; \textbf{c}, schematic representation of a Mn--Mg complex; \textbf{d}, detailed values of the bond lengths for different complexes, calculated by DFT; \textbf{e}, average Mn--N bond length ($d_\mathrm{Mn-N}$) as a function of the Mg/Mn ratio, from EXAFS analysis and DFT calculations.}
\end{figure*}
Having established the single crystallinity and chemical homogeneity of the samples \emph{via} a spectrum of both local and averaging characterisation techniques, we discuss the lattice positions of Mn and Mg impurities in GaN, as determined from extended x-ray absorption fine structure (EXAFS) measurements at the Mn K-edge and by {\em ab initio} computations. We find that within a confidence of 90\% the Mn ions occupy exclusively Ga-substitutional positions. This result is consistent with the {\em ab initio} computations reported in the Supplementary Table~S1 showing that the energy required to place a Mn or Mg ion in either octahedral or tetrahedral interstitial positions of GaN is more than 4\,eV higher than the one needed to incorporate it in a Ga-substitutional site.
According to our previous EXAFS and electron energy loss spectroscopy (EELS) results, Mn is randomly and homogeneously distributed in GaN:Mn, at least up to a Mn concentration of 3\%~(refs. \onlinecite{Stefanowicz:2010_PRBa,Bonanni:2011_PRB}).
While in the conventional treatment of dilute magnetic semiconductors the spatial distributions of co-dopants and TM ions are assumed to be uncorrelated, a quantitative analysis of our EXAFS data points to a substantial \emph{correlation} between the positions occupied by Mn and Mg in the host lattice. Simulations of EXAFS spectra for a large variety of relaxed defects in GaN, reported in the Supplementary Information, indicate that the combination of substitutional Mn and Mg is the most likely to account for the experimental EXAFS data. The experimental EXAFS spectra are then fitted according to the procedure described in the Supplementary Information. As shown in Fig.~\ref{fig:complexes}b, the number of substitutional Mg atoms ($n_\mathrm{Mg}$) in the first cation coordination sphere of Mn increases linearly with the ratio between the Mg and Mn concentrations, $y = x_{\mathrm{Mg}}/x_{\mathrm{Mn}}$, up to $y$\,$=$\,$3$ and then saturates at higher $y$ values. Simultaneously, the average distance between Mn and the nearest-neighbour N atoms, $d_\mathrm{Mn-N}$, decreases with increasing $y$ up to $y=3$ and then levels off, as seen in Fig.\,\ref{fig:complexes}e. The experimental EXAFS spectra from which $d_\mathrm{Mn-N}$ and $n_\mathrm{Mg}$ are extracted are plotted in Supplementary Figure~S4 for different values of $y$. The correlation between the Mn and Mg positions can be confirmed by comparing the spectra for $y>0$ (correlated) with those for $y=0$, the latter being strictly equivalent to the uncorrelated case where Mn does not interact with any Mg atom.
These results can be explained by our {\em ab initio} computations. In particular, the estimated values of the pairing energies $E_{\text{p}}$ shown in Fig.\,\ref{fig:complexes}a for up to three Mg per Mn are all negative, demonstrating the tendency of Mg to form the Mn--Mg$_{k}$ complexes sketched in Fig.\,\ref{fig:complexes}a and in Supplementary Figure\,S8. Taking into account the statistical distribution of $k$, implying that some of these complexes can coexist at a given $y$, we have obtained, with no adjustable parameters, a remarkable agreement between the experimental and computed trends describing how the number of bound Mg atoms (Fig.\,\ref{fig:complexes}b) and the shortening of the bond length (Fig.\,\ref{fig:complexes}c,d,e) depend on $y$. This agreement with the theory, based on the statistical distribution of the complex populations detailed in the \emph{Methods}, implies a comparable stability of the complexes with different $k$, up to $k=3$. According to the DFT calculations, and as reported in Fig.~\ref{fig:complexes}a, a similar binding energy (0.5\,eV) is indeed computed for Mn--Mg and Mn--Mg$_2$, but a lower value (0.2\,eV) is expected for Mn--Mg$_{3}$. If these DFT predictions are quantitatively correct, we may expect a concentration of Mn--Mg$_3$ complexes somewhat lower than the one resulting from $P_3(y)$ for $y<1$.
According to the \emph{ab initio} results in Fig.~\ref{fig:complexes}d, the bond shortening is particularly significant for the Mn--N pairs nearest to Mg. Intuitively, this effect results from a sizable charge transfer at the Mn--Mg$_k$ complexes, which leads to the onset of Coulomb attraction between the ionized Mg acceptors and the Mn donor.
\subsection{Control of charge and spin state}
\begin{figure*}[htb]
\centering
\includegraphics[width=0.87\textwidth]{fig2-devillers.eps}
\caption{\label{fig:spin} \textbf{Evolution of the Mn spin state}. \textbf{a}, non-resonant XES for GaN:Mn samples with and without Mg; \textbf{b}, evolution of the nominal spin value with the ratio between Mg and Mn concentration, extracted from the analysis of XES spectra and calculated \emph{via} DFT; \textbf{c}, computed spin polarization density [$\Delta \rho =\rho(\uparrow)-\rho(\downarrow)$] for Mn$_\mathrm{Ga}$ in wurtzite (wz)-GaN. The positive and negative spin polarizations are represented by violet and grey colours, respectively; \textbf{d--f}, difference $\rho_k-\rho_{k-1}$, between the spin polarizations of the Mn--Mg$_k$ complexes, with $k=1$, 2 and 3. The red and blue colours represent positive and negative values, respectively. The blue colour in {\bf d} indicates the enhanced delocalization of spin polarization in Mn-Mg, whereas the red colour in {\bf e} and {\bf f} points to a gradual shift of the spin density to Mn in Mn-Mg$_2$ and then in Mn-Mg$_3$. In all plots the contour value is set to 0.005 corresponding to about $2\times10^{-5} \mu_\mathrm{B}\,$\AA$^{-3}$.}
\end{figure*}
Having discussed the atomistic structure of the Mn--Mg$_{k}$ complexes, we determine their charge and spin states. According to our {\em ab initio} computations, if Mn--Mg$_k$ complexes were not formed, only a single hole would be trapped by Mn upon Mg codoping, resulting in the Mn spin state $S$\,=\,3/2. In that case, if $y$\,$>$\,$1$, extra holes would be directed to the valence band, increasing the \emph{p}-type conductivity. However, layers containing a small concentration of Mg ($y<3$) are insulating, and the onset of conductivity takes place only for $y\ge 3$, with a decrease in the resistance of the layers of more than three orders of magnitude. This indicates that at low $y$ the Mg ions are bound to Mn in complexes rather than acting as isolated acceptors, free Mg acceptors being present only for $y \ge 3$. Considering the Mn--Mg$_k$ complexes, our computations predict the presence of Mn with spin states down to $S$\,=\,1 for $k \ge 2$. Altogether, our theoretical studies suggest the possibility of controlling, with Mg codoping, the Mn$^{n+}$ charge and spin state over the ranges $3 \le n \le 5$ and $2 \ge S \ge 1$, respectively.
We now give experimental verification of these theoretical expectations by tracing the evolution of $S$ upon Mg codoping.
We exploit the K$\beta$ x-ray emission spectroscopy (XES) technique which, by probing the 3$p$ $\to$ 1$s$ transitions, is sensitive to the magnitude of the exchange interaction between the Mn 3$p$ core-hole and the net magnetic moment in the Mn 3$d$ valence shell \cite{Tsutsumi:1976_PRB}.
We note that, compared to x-ray absorption spectroscopy (XAS), where a charge-state-dependent shift of the absorption edge is also visible, the XES data are more linearly correlated with the spin state and depend less on the atomic configuration~\cite{Pizarro:2004_PCCP}.
The spectra are plotted in Fig.~\ref{fig:spin}a from which we extract, by comparison with reference oxides, as described in the Supplementary Information, the values of $S$ reported in Fig.~\ref{fig:spin}b. This analysis demonstrates that upon Mg codoping the Mn spin state first decreases linearly from 2 to 1, and then saturates for $y \gtrsim 3$. As shown in Fig.~\ref{fig:spin}b, these results are in accord with the magnitudes of $S$ determined from the {\em ab initio} computations for a unit cell containing one Mn.
\subsection{Magnetism of complexes}
Having established the TM charge and spin states, one can examine the properties of the centres and their coupling to the environment in terms of the time-honoured crystal-field theory\cite{Henderson:2006_B}, providing the expected structure and symmetries of the relevant energy levels at a given oxidation state\cite{Jansen:2008_ACh}. In order to gain more understanding at the microscopic level\cite{Raebiger:2008_N}, it is instructive to consider the actual spin distribution around the Mn ions and its evolution with Mg codoping. As shown in Fig.~\ref{fig:spin}b and in the Supplementary Table~S2, our \emph{ab initio} computations confirm that in the case of GaN:Mn the total magnetic moment $m$\,=\,4\,$\mu_{\text{B}}$ is built of $m_{\text{c}}$\,$\approx$\,4.5\,$\mu_{\text{B}}$ on the $d$ shell of the Mn ion and $m_{\text{a}}$\,$\approx$\,-0.5\,$\mu_{\text{B}}$ on the $p$ orbitals of the N ligands. This means, in agreement with the notion of strong coupling limit for GaN:Mn (ref.~\onlinecite{Dietl:2008_PRB}), that the wave function of the hole provided by the Mn is equally spread between Mn and the neighbouring N ions.
As reported in Fig.~\ref{fig:spin}d, the delocalization of holes is enhanced for $k$\,=\,1, suggesting that Mn-Mg complexes can mediate spin-dependent interactions between Mn spins. On the other hand, as evidenced in Fig.~\ref{fig:spin}e,f, for $k$\,=\,2 and 3 the delocalization is quenched, and this result is important for understanding the optical data discussed later.
The changes of the Mn spin state and Mn--N bonding with Mg codoping, evaluated by EXAFS, XES, and {\em ab initio} studies, affect the magnetic properties of the system. Quite generally, the values of the magnetization $M(H)$ and its anisotropy are determined by the relevant spin Hamiltonian, whose form, appropriate for a given $S$, is determined by crystal symmetry, including local strains, whereas the magnitudes of the spin Hamiltonian parameters provide information on the coupling to the environment (including $p$-$d$ hybridization) and on the strength of the spin-orbit interaction. In GaN:Mn, a sizable difference in the magnetization curves $M(H)$ is observed for two orientations of the magnetic field $H$ with respect to the wurtzite $c$-axis, $H \perp c$ and $H\parallel c$ (refs.~\onlinecite{Stefanowicz:2010_PRBa,Gosk:2005_PRB}). A quantitative analysis of such data allowed us to obtain the values of the spin Hamiltonian parameters for this case, with $S =2$ and wurtzite symmetry \cite{Stefanowicz:2010_PRBa,Gosk:2005_PRB}. In the case of GaN:(Mn,Mg), as seen in Fig.~\ref{fig:complexes}c,d, the distortion of the nitrogen tetrahedron and, in particular, the shortening of the Mn--N bond induced by the presence of Mg is much more pronounced than the elongation of the Mn--N bond parallel to the $c$-axis. Accordingly, the local anisotropy will now be determined by the position of one or more Mg in the first cation coordination sphere. Since there is no preferential orientation for the occupation of any of the 12 nearest-neighbour positions, the local anisotropy and, thus, the orientation of the easy axis will be randomly distributed over the Mn--Mg$_k$ complexes. Hence, the presence of Mg leads to the disappearance of the anisotropy with respect to the $c$-axis. Indeed, as shown in Fig.~\ref{fig:SQUID}, we observe that the magnitude $A$ of the magnetic anisotropy decays to zero with increasing $y$.
In agreement with the model, $A(y)$ follows the probability $P_0(y)$ that no Mg is bound by Mn at a given $y$, $A(y) = A(0)(1-y/3)^3$. At the same time, the absolute values of $M(H)$ are determined by unknown parameters of the spin Hamiltonians, appropriate for particular values of $S$, local strains, and strength of the spin-orbit interaction.
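The decay law $A(y) = A(0)(1-y/3)^3$ can be checked numerically; the following is a minimal sketch (function and variable names are illustrative), assuming the binomial occupation model with $m=3$ binding sites per Mn:

```python
m = 3  # maximum number of Mg atoms bound to one Mn

def p0(y):
    """Probability P_0(y) that a Mn ion binds no Mg at Mg/Mn ratio y."""
    return (1.0 - y / m) ** m if y <= m else 0.0

# A(y) = A(0) * P_0(y): full anisotropy at y = 0, vanishing for y >= 3
print(p0(0.0), p0(1.5), p0(3.0))  # -> 1.0 0.125 0.0
```

At $y=1.5$, for instance, half of the binding sites are statistically occupied and only $(1/2)^3 = 12.5\%$ of the Mn ions remain without bound Mg, i.e., still contribute to the uniaxial anisotropy.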
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{fig3-devillers.eps}
\caption{ \label{fig:SQUID}\textbf{Magnetism of Mn--Mg$_k$ complexes.} Magnetic anisotropy energy density as a function of $y$ measured by SQUID magnetometry, and calculated assuming that it originates from Mn without Mg in the first cation coordination sphere. Inset: normalized magnetization curves $M$--$H$ of GaN:Mn containing 0.4\% of Mn without and with Mg codoping (left and right panel, respectively) measured at 1.85~K.}
\end{figure}
\begin{figure*}[htb]
\centering
\includegraphics[width=0.98\textwidth]{fig4-devillers.eps}
\caption{ \label{fig:PL}\textbf{Infrared photoluminescence of GaN:(Mn,Mg) samples.} \textbf{a}, evolution of the PL spectra (excited with a 442 nm (2.8 eV) laser) as a function of the $y$ ratio, measured at 2~K (a multiplying factor of 1.4 has been applied between consecutive spectra for clarity); \textbf{b}, integrated PL intensity normalized by the Mn concentration and sample thickness as a function of the $y$ ratio (red circles, left scale) and evolution of the fraction of different complexes, calculated as a function of $y$ (lines, right scale); \textbf{c}, evolution of the PL spectra with temperature, for a sample with $y=4.1$; the background at 296 K is also included. A factor of 1.4 has been applied between consecutive spectra for clarity. \textbf{d}, evolution of the integrated PL intensity as a function of the inverse temperature, normalized to the value at $T$\,=\,5\,K; the line is a guide for the eye. Inset: levels relevant to PL as discussed in the Supplementary Information. }
\end{figure*}
\subsection{Effect of complexes on the luminescence of Mn}
Particularly remarkable is the influence of the Mn--Mg$_k$ complexes on the light emission of GaN. At present, TM-doping of oxides and II-VI chalcogenides allows the fabrication of optically pumped tunable lasers\cite{Sorokin:2005_IEEE,Mirov:2010_LPR}.
The PL spectra reported in Fig.~\ref{fig:PL}a for our samples extend, for high $y$ ratios, over a broad infrared band covering two of the telecommunication windows, namely 1.33\,$\mu$m and 1.55\,$\mu$m. In addition, the multi-peak structure suggests that only one transition is involved, with different environments as well as phonon replicas being responsible for the peak multiplicity. The overall shape is indicative of a Huang-Rhys factor $S_{\text{HR}}$\,$>$\,2. According to Fig.\,\ref{fig:PL}b, the PL intensity increases with the concentration of the Mn--Mg$_3$ complexes, $P_3(y) = (y/3)^3$, demonstrating that Mn$^{5+}$, in presumably two slightly different environments, accounts for the photoluminescence in GaN:(Mn,Mg).
Importantly, as one sees in Fig.~\ref{fig:PL}c,d, the spectrally broad infrared emission persists up to room temperature, which is very attractive for ultrashort pulse generation as well as for wide infrared tunability.
In order to understand the photoluminescence related to Mn--Mg$_k$ cation complexes, it is necessary to consider first the reason for the poor luminescence of Mn$^{3+}$ in GaN.
In contrast to Cr$^{2+}$ in Al$_2$O$_3$ and ZnSe, Mn$^{3+}$ in GaN does not show the strong and application-relevant red or infrared emission associated with optical transitions between the $^5$E and $^5$T$_2$ crystal field multiplets of the 3$d^4$ shell. This surprising result is however in agreement with previous extensive studies of GaN:Mn (refs~\onlinecite{Graf:2002_APL,Marcet:2006_PRB,Zenneck:2007_JAP,Malguth:2008_MRS}), in which the intra-centre photoluminescence of Mn$^{3+}$ was found to be hardly detectable. This puzzling result can be traced back to an abnormally small value of the Huang-Rhys factor $S_{\text{HR}} \lesssim 1$ implied by optical absorption\cite{Wolos:2004_PRBb,Marcet:2006_PRB}, this observation being interpreted\cite{Dietl:2008_PRB} in terms of a strong $p$-$d$ coupling in GaN:Mn, leading to a significant admixture of anion $t_2$ orbitals with the wave functions of the $^5$E and $^5$T$_2$ states. Such delocalization of the centre wave function, clearly visible in Fig.~\ref{fig:spin}c, reduces the electron-phonon coupling. As a result, the oscillator strength is shifted to the zero-phonon line at the expense of the phonon-assisted transitions that cease to constitute the channel for PL excitation.
On the other hand, highly efficient and Stokes shifted PL, similar to the one presented in Fig.~\ref{fig:PL}, was found for GaN:(Mn,Mg) and assigned to intra Mn$^{4+}$ transitions\cite{Malguth:2008_MRS,Korotkov:2001_PBb,Han:2005_APL}.
However, we have demonstrated here, by following the evolution of the PL as a function of the populations of the different complexes (Fig.~\ref{fig:PL}b), that the Mn$^{5+}$ ions present in the complexes are the likely origin of the IR PL signal.
This assignment is consistent with the shape of the PL spectrum in a 10~T magnetic field -- given in the Supplementary Figure~S7 -- which can be described by a splitting of the ground state into three components. This splitting is expected for $S$\,=\,1, as observed previously\cite{Thurian:1997_APL} for V$^{3+}$ in AlN.
In order to explain the origin of the PL activation upon codoping with Mg evidenced in Fig.~\ref{fig:PL}a, we refer to Fig.~\ref{fig:spin}d-f where the variations in the local spin densities brought about by the binding of an increasing number of Mg ions are shown. According to these data, the complexing with one Mg ion enhances the delocalization of the spin density over neighbouring N anions. However, with the binding of two and then three Mg ions, the
delocalization of the spin density decreases. Accordingly, a strong electron-phonon coupling and, thus, a large magnitude of $S_{\text{HR}}$ is restored, particularly for Mn--Mg$_3$ complexes, where two $d$ electrons reside in the $e$ orbitals of the $^3$A$_2$ ground state.
This IR broadband PL promoted by Mn--Mg$_k$ cation complexes is of high relevance in, \emph{e.g.}, laser and telecommunication technologies. Indeed, in comparison with Al$_2$O$_3$ and ZnSe, GaN has, respectively, seven and twelve times higher thermal conductivity, lessening thermal effects even at high laser powers and intensities.
\section{Discussion}
We conclude that the data presented here indicate a new way to manipulate the charge and spin state of single paramagnetic centres by complexing a magnetic impurity with electrical dopants. The demonstration of these new degrees of freedom opens wide prospects, illustrated here by the infrared emission of GaN:(Mn,Mg). Another line of research is to explore the potential of these Mn--Mg$_k$ cation complexes for mediating the coupling between localized spins in magnetic semiconductors. Furthermore, these centres may serve for storing and manipulating information in a single qubit or for single photon generation\cite{Koenraad:2011_NM}. Interestingly, unlike the case of CdTe:Mn (ref.~\onlinecite{Besombes:2004_PRL}) or InAs:Mn (ref.~\onlinecite{Krebs:2009_PRB}), it is not necessary to place GaN:(Mn,Mg) in a quantum dot, as Mn in GaN can bind the exciton\cite{Suffczynski:2011_PRB} needed to read or write information. The possibility of changing the energy level splitting, the excitation channel, and the excited state lifetime by manipulating the Mn charge and spin state through Mn--Mg$_k$ cation complexes offers an as yet unexplored spectrum of opportunities for further investigations.
\section{Methods}
\textbf{Growth and preliminary characterisation}: The samples consist of single-crystal wurtzite (wz) GaN codoped with Mn and Mg, grown by metalorganic vapor phase epitaxy (MOVPE) on a 1~$\mu$m GaN buffer layer on \emph{c}-plane sapphire, according to the procedure described elsewhere\cite{Stefanowicz:2010_PRBa,Bonanni:2011_PRB}. The doped layer is 600~nm thick. The samples are grown under H$_2$ atmosphere, at a pressure of 200~mbar and a temperature of 850$^\circ$C. The precursors used are ammonia (NH$_3$) for nitrogen, trimethylgallium (TMGa) for Ga, dicyclopentadienyl-magnesium (Cp$_2$Mg) for Mg, and dicyclopentadienyl-manganese (Cp$_2$Mn) for Mn. The source flow of ammonia was kept constant at 1500~sccm and that of TMGa at 5~sccm, while Cp$_2$Mg was varied between 150 and 450~sccm and Cp$_2$Mn between 75 and 490~sccm. The Mn and Mg concentrations considered in this work are both between 0 and 1\%, as measured by secondary ion mass spectroscopy (SIMS).
The absence of parasitic elements like hydrogen or oxygen has been carefully checked with SIMS, energy dispersive x-ray spectroscopy (EDX), Raman spectroscopy, and electron energy loss spectroscopy (EELS).
Prior to the extensive synchrotron investigations by EXAFS and XES, the structure of the layers has been characterised by high-resolution x-ray diffraction (HRXRD) on an X'Pert PRO MRD system with a dynamic range as high as $10^7$ between the GaN (002) peak and the noise. In addition, high-resolution transmission electron microscopy (HRTEM) was performed on a JEOL 2011 Fast TEM microscope operating at 200~kV, capable of an ultimate point-to-point resolution of 0.19~nm and allowing lattice fringes to be imaged with a 0.14~nm resolution. The combination of the two techniques has allowed us to rule out the presence of precipitates in the layers.
\newline
\textbf{EXAFS}: EXAFS spectroscopy has been carried out at the BM08--GILDA Italian beamline\cite{Dacapito:1998_EN} at the ESRF (Grenoble, France). The Mn K edge x-ray absorption spectra have been acquired using a monochromator
equipped with a pair of Si(311) crystals and run in dynamical focusing mode.
Harmonics rejection is achieved through a pair of Pd-coated mirrors and the
monochromator de-tuning. The data are collected in the fluorescence mode using
a 13-element hyperpure Ge detector and normalized by the incoming flux measured
with an ion chamber. The incident beam is at 55.7$^\circ$ with respect to the sample surface
to avoid dichroic effects. The samples are cooled down to liquid nitrogen temperature.
The counting time and the number of scans for each sample have been chosen in order
to collect at least 10$^6$ counts per point. The EXAFS signal, $\chi(k)$, is extracted
from the absorption data, $\mu(E)$, using a smoothing spline algorithm (as implemented
in the {\sc viper} program) and choosing the energy edge, $E_0$, at the maximum of the
derivative. The data analysis is detailed in the Supplementary Information.
\newline
\textbf{XES}: Non-resonant XES has been measured at the ID26 beamline of the ESRF\cite{Glatzel:2005_CCR}. The optics for the incoming beam consist of three coupled undulators, a double Si crystal monochromator, and three Si-coated mirrors working at 2.5~mrad incidence for harmonics rejection and beam focusing. The emission spectrometer is run in a vertical Rowland geometry with five Si(110) analyzer crystals working at the (440) reflection, that is, around a Bragg angle of 84.2$^\circ$. The spectrometer energy broadening is approximately 0.9\,eV at the Mn K-edge (6539\,eV). In the experimental geometry, the spectrometer and the incoming beam are at 90$^\circ$ in the same scattering plane (to minimize the elastic contribution), with the sample surface placed vertically with respect to this plane at a 55.7$^\circ$ incidence angle, that is, the magic angle for wurtzite symmetry, in order to avoid dichroism effects due to the linearly polarized beam\cite{Brouder:1990_JPCM}. All the samples are measured at room temperature and are tested against radiation damage.
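As a consistency check, the quoted Bragg angle for the Si(440) reflection can be reproduced from Bragg's law. The sketch below assumes a Si lattice constant of $a \approx 5.431$\,\AA\ and a Mn K$\beta$ emission energy of roughly 6490\,eV (the latter is not stated explicitly above and is introduced here only for illustration):

```python
from math import asin, degrees, sqrt

a_si = 5.431        # Si lattice constant in angstrom (assumed value)
h, k, l = 4, 4, 0   # the (440) reflection used by the analyzer crystals
d = a_si / sqrt(h**2 + k**2 + l**2)   # interplanar spacing d_440

energy_ev = 6490.0                     # approximate Mn K-beta energy (assumed)
wavelength = 12398.4 / energy_ev       # eV*angstrom conversion factor

theta = degrees(asin(wavelength / (2 * d)))  # Bragg angle in degrees
print(round(theta, 1))  # close to the quoted 84.2 degrees
```

Working near backscattering (Bragg angles close to 90$^\circ$) is what gives the spectrometer its sub-eV energy resolution.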
The emitted fluorescence is scanned around the Mn K$\beta$ main line with the incoming excitation at 6700\,eV.
The quantitative data analysis is based on the integrated absolute values of the difference spectra (IAD) and is performed as a function of the Mg/Mn concentration ratio, $y$. The data have been calibrated against the IAD values obtained from commercial Mn-oxide powders, assuming the ionic approximation and considering the high-spin scenario for both systems. The details of the analysis are reported in the Supplementary Information.
\newline
\textbf{SQUID magnetometry}: The magnetic anisotropy energy density has been assessed by integrating the area between the magnetization curves measured along the easy ($H \perp c $) and hard ($H \parallel c$) directions and is expressed as energy per Mn atom in the given layer. Magnetization curves are measured at 1.85\,K using a superconducting quantum interference device (SQUID), as described previously\cite{Stefanowicz:2010_PRBa,Bonanni:2011_PRB}. The size of the error bars for the anisotropy is determined mostly by the inaccuracy of the substrate signal compensation and does not include the uncertainty generated by the insufficient strength of the magnetic field in our SQUID magnetometer (50~kOe) to saturate $M$ along the hard direction.
\newline
\textbf{PL}:
Photoluminescence is excited with a continuous-wave 404\,nm (3.1\,eV) or 442\,nm (2.8\,eV) laser with an excitation power of up to tens of mW. The temperature has been varied in the range between 2\,K and 296\,K. An InGaAs-type CCD camera coupled to a grating monochromator (either 300 grooves/mm or 1200 grooves/mm) is used as the detector. A long-wavelength-pass filter is placed at the entrance of the monochromator to cut off the stray laser light. The detection is carried out in the range from 0.7\,eV to 1.5\,eV with a spectral resolution of 0.5\,meV. The integration of the PL signal in Fig.~\ref{fig:PL}b and \ref{fig:PL}d has been performed between 900\,meV and 1100\,meV. The magneto-optical measurements reported in the Supplementary Information are performed in the Faraday configuration ($B$\,$\parallel$\,$k$) using a cryostat equipped with a superconducting coil providing a magnetic field of up to 10\,T.
\newline
\textbf{Theory -- DFT}:
Calculations for the Mn--Mg$_k$ complexes in wz-GaN are performed within the GGA+U approximation using the Quantum Espresso code \cite{Giannozzi:2009_JPCM}. A 96-atom supercell and a $3\times3\times3$ Monkhorst-Pack grid for Brillouin-zone sampling are employed. The pairing energies are calculated from the following formula:
\begin{multline*}
\Delta E = E_{\mathrm{tot}}(\mathrm{MnGa}_{47-k}\mathrm{N}_{48}{:}\mathrm{Mg}_k + \mathrm{Ga}_{48}\mathrm{N}_{48}) \\
- E_{\mathrm{tot}}(\mathrm{MnGa}_{47-k+1}\mathrm{N}_{48}{:}\mathrm{Mg}_{k-1} + \mathrm{Ga}_{47}\mathrm{N}_{48}{:}\mathrm{Mg}),
\end{multline*}
for $k$ ranging from 1 to 5. From the computed values of the magnetic moment and of the Mn--N distance of every possible complex, one can obtain, taking into account the relative statistical weight $P_k(y)$ of the particular Mn--Mg$_k$ configurations, the average variation of these quantities as a function of the Mg/Mn ratio. The values of $P_k(y)$ are approximated by a binomial law: considering that Mn can bind up to $m$ Mg atoms ($m$\,$=$\,$3$ in our case), the occurrence probability $P_k(y)$ of a particular complex Mn--Mg$_k$, at a given ratio $y$ of the Mg to Mn concentrations, is given by the binomial distribution $P_k(y) = \binom{m}{k}(y/m)^k(1-y/m)^{m-k}$ for $y \le m$, whereas for $y > m$, $P_m(y) = 1$ and $P_k(y) = 0$ for $k < m$.
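The binomial weighting just described can be sketched as follows (illustrative Python, with $m=3$ as in the text). For $y \le m$ the mean of the binomial distribution gives an average number of bound Mg per Mn equal to $y$, consistent with the linear trend followed by saturation seen in Fig.~\ref{fig:complexes}b:

```python
from math import comb

m = 3  # maximum number of Mg atoms bound to one Mn

def P(k, y):
    """Occurrence probability P_k(y) of a Mn-Mg_k complex at Mg/Mn ratio y."""
    if y <= m:
        return comb(m, k) * (y / m) ** k * (1 - y / m) ** (m - k)
    return 1.0 if k == m else 0.0

def mean_bound_mg(y):
    """Statistically averaged number of Mg bound per Mn, sum_k k * P_k(y)."""
    return sum(k * P(k, y) for k in range(m + 1))

print(mean_bound_mg(2.0), mean_bound_mg(5.0))  # linear regime vs. saturation at 3
```

The same weights $P_k(y)$ can then be applied to the DFT magnetic moments and Mn--N distances of the individual complexes to obtain the $y$-dependent averages compared with experiment.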
The computed values of the local magnetic moment on Mn$_\mathrm{Ga}$ and its nearest neighbouring N atoms as well as its magnitude in the Mn unit cell are collected in the Supplementary Table~S2, whereas the contour plot of the spin polarization is shown in Fig.~\ref{fig:spin}c.
\section{Conclusion}\label{s.conclusion}
In this work, we proposed a novel Cascade Residual Convolutional Neural Network that integrates a multiscale processing strategy (through the developed residual processing module) with a learning-based segmentation mechanism in an attempt to solve the scene change detection problem. In tests conducted over the CD2014 dataset, the proposed MCRCNN model achieved results close to the state-of-the-art change detection techniques. The proposal was capable of outperforming three supervised learning-based change detection methods and three non-learning-based ones. Even though the MCRCNN did not outperform the FgSegNet\_v2, FgSegNet\_S, and FgSegNet\_M techniques on the CD2014 dataset, it proved to be much more compact, i.e., around $8\times$ smaller than the best-scoring FgSegNet\_v2 technique in the number of network parameters. In the test conducted over the PetrobrasROUTES dataset, the proposed MCRCNN model outperformed the top two state-of-the-art techniques, FgSegNet\_v2 and FgSegNet\_S, as well as the CRCNN method. As future work, we intend to focus our investigation on the MCRCNN false negative problem, conducting a more careful analysis of the RPM filter. We also intend to search for other possible ways to improve the residual learning process and to explore different ways of integrating the learned residual map with the second-stage MCRCNN segmentation network.
\section{Introduction}\label{s.intro}
Scene change detection is a specific kind of image processing task, which involves partitioning the digitized captured scene into foreground and background pixel regions. Such a processing strategy is frequently used in many visual knowledge-based intelligent computer systems, such as traffic monitoring~\cite{kato2002hmm}, autonomous driving~\cite{dai2019hybridnet}, object and people tracking~\cite{zhou2005real}, action recognition~\cite{feichtenhofer2019slowfast}, video surveillance~\cite{brutzer2011evaluation}, and anomaly detection~\cite{chandola2009anomaly}. Each of these systems presents its own challenges for the change detection itself, such as: (a) the shooting environment conditions, (b) the video capture device quality, and (c) the local computer memory storage capacity.
Concerning (a), one can name a few difficulties such as shadows, low light, specular reflections, and blizzards. Regarding (b), one can notice problems with the device sensors, mostly due to subtle temperature variations, and also issues related to digital noise, mainly generated during analog-to-digital signal conversion. Regarding (c), the change detection technique must be adaptable to work on mobile, reduced-memory devices such as smartphones, tablets, and drones.
In the last few decades, in an attempt to solve problems (a), (b), and (c), many scene change detection techniques have been developed. They can be classified into two main groups, i.e., the non-learning-based and the learning-based ones. Among the non-learning-based group, one can refer to the works of KaewTraKulPong and Bowden~\cite{kaewtrakulpong2002mog}, Zivkovic~\cite{zivkovic2004improvedgmm}, and Varadarajan et al.~\cite{varadarajan2013spatial}, with a strong basis on statistical parametric modeling of the scene changes. In the same statistical domain, one can also encounter the works of Bevilacqua et al.~\cite{bevilacqua2005cnm} and Lanza and Di Stefano~\cite{lanza2011statchange}, which use nonparametric statistics for the scene change modeling. Besides such techniques, it is possible to find simpler and effective methods, which include SuBSENSE from St-Charles et al.~\cite{st2014subsense}, PAWCS from St-Charles et al.~\cite{st2015pawcs}, and IUTIS-5 from Bianco et al.~\cite{bianco2017iutis5}.
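The statistical parametric modeling behind methods of this family can be illustrated with a minimal per-pixel single-Gaussian background model. The following NumPy sketch is purely illustrative (parameter values and names are our own) and is not the implementation of any of the cited techniques:

```python
import numpy as np

alpha = 0.05   # learning rate of the running statistics (illustrative value)
k_sigma = 2.5  # foreground decision threshold, in standard deviations

def update_and_segment(frame, mean, var):
    """One step of a per-pixel running-Gaussian background model."""
    diff = frame - mean
    foreground = diff**2 > (k_sigma**2) * var   # pixels far from the background model
    mean = mean + alpha * diff                  # running mean update
    var = (1 - alpha) * var + alpha * diff**2   # running variance update
    return foreground, mean, var

# Toy usage: a flat background with one changed pixel
background = np.full((4, 4), 100.0)
mean, var = background.copy(), np.full((4, 4), 4.0)
frame = background.copy()
frame[1, 2] = 160.0  # simulated moving object
fg, mean, var = update_and_segment(frame, mean, var)
print(int(fg.sum()))  # -> 1 (only the changed pixel is flagged as foreground)
```

Mixture-of-Gaussians methods such as those cited above extend this idea by maintaining several Gaussian modes per pixel, which copes with multimodal backgrounds (e.g., waving trees).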
The second group of change detection techniques includes those methods capable of learning how to differentiate between the foreground and background scene regions. When properly designed and trained, they can easily adapt to difficult change detection scenarios, as demonstrated by the works of Wang et al.~\cite{wang2017cascadecnn}, who use a multistage and multiscale network named Cascade; Babaee et al.~\cite{babaee2018deepbs}, who use a multistage convolutional neural network named DeepBS; Santos et al.~\cite{santos2019crcnn}, who use a multistage residual convolutional neural network named CRCNN; Santana et al.~\cite{santana2019novel}, who use siamese-based change detection networks named SEU-Nets; and Lim and Keles~\cite{segnet2018tripletcnns}\textendash\cite{lim2019fgsegnetv2}, who work with the autoencoder change detection convolutional neural networks FgSegNet\_M, FgSegNet\_S, and FgSegNet\_v2.
\begin{figure*}[!htb]
\centering
\includegraphics[width=4.5in,height=4.5in,keepaspectratio]{mcrcnn_arch.pdf}
\caption{Architecture of the proposed MCRCNN model, where the output of the residual processing module \textbf{RPM} is depth-wise concatenated with the 15th SCNN convolutional layer feature maps. \textbf{CONV} stands for convolutional layers, while \textbf{IBN} and \textbf{IIN} indicate, respectively, the interleaved batch normalization and instance normalization layers, which are applied every three subsequent convolutional layers.}
\label{f.mcrcnn_arch}
\end{figure*}
Although the learning-based methods present state-of-the-art results in the literature when compared against the non-learning-based techniques, they are not yet capable of solving problems (a), (b), and (c) at the same time. The CascadeCNN, DeepBS, and CRCNN techniques can be low-memory-consumption methods, but they do not achieve the FgSegNets' results. On the other hand, in the case of the FgSegNets, better detection results imply high memory consumption.
In this work, we attempt to improve the effectiveness of the CRCNN method in dealing with problems (a) and (b), while trying to maintain the technique's already good compromise with problem (c). In that sense, we propose four modifications to improve the CRCNN method: (i) multiscale residual map processing, (ii) multistage training using a high-level feature aggregation policy, (iii) interleaved and hybrid intralayer feature normalization using batch~\cite{ioffe2015batchnorm} and instance~\cite{ulyanov2016instnorm} normalization strategies, and (iv) color image processing.
The remainder of this manuscript is divided into Section~\ref{s.proposal}, which describes the theoretical basis of the proposal, Section~\ref{s.methodology}, which presents the training and evaluation methodology, Section~\ref{s.experimental_section}, which presents and discusses the quantitative and qualitative results, and Section~\ref{s.conclusion}, which concludes this work and points towards future research directions.
\section{Methodology}\label{s.methodology}
In this section, we present the methodology used to train and evaluate the proposed MCRCNN model. To simplify the explanation, we structured it into Subsection~\ref{ss.datasets}, which presents the relevant information about the datasets used in this work, Subsection~\ref{ss.sub_train_proc}, which describes the proposal training procedures, and Subsection~\ref{ss.sub_evaluation_proc}, which discusses the MCRCNN evaluation protocol.
\subsection{Datasets}
\label{ss.datasets}
\subsubsection{Change Detection Dataset 2014}
\label{sss.change_detection}
The Change Detection Dataset 2014 (CD2014) is a large and freely available dataset of videos collected by Wang et al.~\cite{wang2014cdnet} from different realistic, camera-captured, and challenging scenarios. Such a dataset contains $11$ video categories with $4$ to $6$ video sequences each, subdivided into:
\begin{itemize}
\item \textbf{Baseline}: combines mild challenges present in the Dynamic Background, Camera Jitter, Intermittent Object Motion, and Shadow categories into four different videos named highway, office, pedestrians, and PETS2006.
\item \textbf{Dynamic Background} (Dyn. Bg.): includes scenes from six different videos with strong background motion, e.g., cars and trucks passing in front of a shaken tree. Such video names are boats, canoe, fall, fountain01, fountain02, and overpass.
\item \textbf{Camera Jitter} (C. Jitter): contains four indoor and outdoor videos captured by unstable video devices, for example, vibrating cameras. Those video names are badminton, boulevard, sidewalk, and traffic.
\item \textbf{Intermittent Object Motion} (Int. Obj.): contains six videos with objects that move and then stop for a short while, producing ``ghosting'' artifacts. Such video names are abandonedBox, parking, sofa, streetLight, tramstop, and winterDriveway.
\item \textbf{Shadow}: six indoor and outdoor videos containing objects surrounded by strong shadows that could be misdetected as real moving objects. Such video names are backdoor, bungalows, busStation, copyMachine, cubicle, and peopleInShade.
\item \textbf{Thermal}: five videos that have been captured by far-infrared cameras named corridor, diningRoom, lakeSide, library, and park.
\item \textbf{Bad Weather} (B. Weat.): includes four outdoor videos captured under challenging winter weather conditions, e.g., snowstorms and fog. Such video names are blizzard, skating, snowFall, and wetSnow.
\item \textbf{Low Framerate} (L. Frame.): four videos captured at frame rates varying between $0.17$fps and $1$fps. Such video names are port\_0\_17fps, tramCrossroad\_1fps, tunnelExit\_0\_35fps, and turnpike\_0\_5fps.
\item \textbf{Night Videos} (N. Videos): six outdoor traffic videos captured at night, challenging due to low visibility and strong vehicle headlight glare.
\item \textbf{PTZ} (PanTZ): four videos captured by pan-tilt-zoom cameras and named continuousPan, IntermittentPan, twoPositionPTZCam, and zoomInZoomOut.
\item \textbf{Turbulence} (Turbul.): four outdoor videos that show air turbulence caused by rising heat, named turbulence0, turbulence1, turbulence2, and turbulence3.
\end{itemize}
\subsubsection{PetrobrasROUTES}
\label{sss.petrobras}
PetrobrasROUTES is a private dataset consisting of $281$ high-resolution color images collected from an indoor Petrobras\footnote{Petrobras is a publicly-held company operating on an integrated basis and specialized in the oil, natural gas, and energy industry~\cite{Petrobrasdataset}.} workspace. The main challenge of such a dataset regards the detection of objects obstructing escape routes.
\subsection{Training procedure}
\label{ss.sub_train_proc}
The training procedure basically follows the same protocols of~\cite{santos2019crcnn}, which for the CD2014 dataset consist of:
\begin{enumerate}
\item to select $300$ color images\footnote{We used the same set of training images from~\cite{lim2019fgsegnetv2} to train the proposed MCRCNN model.} and their $300$ corresponding binary images, which were manually annotated as ground truth.
\item to calculate the deterministic background over the first $100$ images.
\item to train the BCNN network using batches of randomly extracted patches of size $40 \times 40$, like in~\cite{zhang2017dncnn}, from the input and output background images to minimize the cost of Equation~\eqref{e.backcnn_cost}. The patches were augmented using geometric transformations, such as rotation and reflection.
\item to freeze all BCNN network trainable parameters and just train the second MCRCNN part, the SCNN network, and also the RPM module using the full-sized images to minimize the cost of Equation~\eqref{e.segcnn_cost}.
\end{enumerate}
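For illustration, the patch extraction and augmentation of step 3) can be sketched as follows (a simplified Python sketch; the function and parameter names are ours and not part of the original implementation):

```python
import numpy as np

def extract_patches(frame, background, patch_size=40, n_patches=128, rng=None):
    """Randomly crop aligned patches from an input frame and its deterministic
    background, applying rotation/reflection augmentation as described above."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = frame.shape[:2]
    xs, ys = [], []
    for _ in range(n_patches):
        i = rng.integers(0, h - patch_size + 1)
        j = rng.integers(0, w - patch_size + 1)
        fp = frame[i:i + patch_size, j:j + patch_size]
        bp = background[i:i + patch_size, j:j + patch_size]
        k = int(rng.integers(0, 4))            # random 90-degree rotation
        fp, bp = np.rot90(fp, k), np.rot90(bp, k)
        if rng.random() < 0.5:                 # random horizontal reflection
            fp, bp = fp[:, ::-1], bp[:, ::-1]
        xs.append(fp)
        ys.append(bp)
    return np.stack(xs), np.stack(ys)
```

The same geometric transformation is applied to both members of each input/output pair, so that the patches stay pixel-aligned.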
For the PetrobrasROUTES dataset, the training procedure consists of:
\begin{enumerate}
\item to select $51$ color images and their $51$ corresponding binary images, which were manually annotated as ground truth.
\item to manually select one of the $51$ color images to be the deterministic background.
\item to follow the same steps 3) and 4) from the CD2014 dataset training protocol.
\end{enumerate}
The BCNN, RPM, and SCNN parameters were trained using the Adam method~\cite{kingma2014adam} for a maximum of $100$ epochs\footnote{Depending on the training video sequence, convergence can be achieved in less than $100$ epochs.}, with $500$ gradient updates per epoch, using a learning rate\footnote{The initial value is reduced by a factor of $0.1$ every time the loss function hits a plateau.} of $0.001$ and batches of size $128$ for the BCNN training process. We trained the MCRCNN parameters with $80\%$ of the input images and used the remaining $20\%$ to evaluate the convergence of the training process.
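The plateau-based learning-rate reduction mentioned in the footnote can be sketched as a small stateful helper (an illustrative Python sketch, not the actual implementation; the patience value is an assumption of ours):

```python
class PlateauLR:
    """Reduce the learning rate by `factor` whenever the monitored loss
    fails to improve for `patience` consecutive steps."""

    def __init__(self, lr=0.001, factor=0.1, patience=5):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best, self.wait = float("inf"), 0

    def step(self, val_loss):
        if val_loss < self.best:          # improvement: reset the counter
            self.best, self.wait = val_loss, 0
        else:                             # plateau: count and possibly decay
            self.wait += 1
            if self.wait >= self.patience:
                self.lr *= self.factor
                self.wait = 0
        return self.lr
```

The scheduler is queried once per epoch with the validation loss computed on the held-out $20\%$ split.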
\begin{table*}[hbt!]
\centering
\renewcommand\arraystretch{1.5}
\setlength{\tabcolsep}{.58em}
\caption{Comparison of F-measure results of 11 categories from CD2014 dataset}
\scalebox{1.05}{
\begin{tabular}{crrrrrrrrrrrr}
\toprule
Methods & Baseline & C.Jitter & B.Weat & Dyn.Bg. & Int.Obj. & L.Frame. & N.Videos & PanTZ & Shadow & Thermal & Turbul. & Overall \\
\midrule
FgSegNet\_{v2} \cite{lim2019fgsegnetv2} & \textbf{0.9980} & \textbf{0.9961} & 0.9900 & \textbf{0.9950} & 0.9939 & \textbf{0.9579} & 0.9816 & \textbf{0.9936} & 0.9966 & 0.9942 & \textbf{0.9815} & \textbf{0.9890} \\
FgSegNet\_S \cite{segnet2018tripletcnns} & \textbf{0.9980} & 0.9951 & \textbf{0.9902} & 0.9902 & \textbf{0.9942} & 0.9511 & \textbf{0.9837} & 0.9837 & \textbf{0.9967} & \textbf{0.9945} & 0.9796 & 0.9878 \\
FgSegNet\_M \cite{segnet2018tripletcnns} & 0.9975 & 0.9945 & 0.9838 & 0.9838 & 0.9933 & 0.9558 & 0.9779 & 0.9779 & 0.9954 & 0.9923 & 0.9776 & 0.9865 \\
MCRCNN & 0.9938 & 0.9889 & 0.9632 & 0.9811 & 0.9893 & 0.8619 & 0.9428 & 0.9344 & 0.9906 & 0.9765 & 0.9635 & 0.9622 \\
CRCNN \cite{santos2019crcnn} & 0.9919 & 0.9799 & 0.9569 & 0.9687 & 0.9755 & 0.8498 & 0.9388 & 0.8967 & 0.9852 & 0.9818 & 0.9637 & 0.9535 \\
Cascade \cite{wang2017cascadecnn} & 0.9786 & 0.9758 & 0.9451 & 0.9451 & 0.8505 & 0.8804 & 0.8926 & 0.8926 & 0.9593 & 0.8958 & 0.9215 & 0.9272 \\
DeepBS \cite{babaee2018deepbs} & 0.9580 & 0.8990 & 0.8647 & 0.8647 & 0.6097 & 0.5900 & 0.6359 & 0.6359 & 0.9304 & 0.7583 & 0.8993 & 0.7593 \\
IUTIS-5 \cite{bianco2017iutis5} & 0.9567 & 0.8332 & 0.8289 & 0.8289 & 0.7296 & 0.7911 & 0.5132 & 0.5132 & 0.9084 & 0.8303 & 0.8507 & 0.7820 \\
PAWCS \cite{st2015pawcs} & 0.9397 & 0.8137 & 0.8059 & 0.8059 & 0.7764 & 0.6433 & 0.4171 & 0.4171 & 0.8934 & 0.8324 & 0.7667 & 0.7477 \\
SuBSENSE \cite{st2014subsense} & 0.9503 & 0.8152 & 0.8594 & 0.8594 & 0.6569 & 0.6594 & 0.4918 & 0.4918 & 0.8986 & 0.8171 & 0.8423 & 0.7453 \\ \bottomrule
\end{tabular}}
\label{t.fmeasures}
\end{table*}
\subsection{Evaluation procedure}
\label{ss.sub_evaluation_proc}
The evaluation process consists of applying the trained MCRCNN model to each video test image, following the protocol:
\begin{itemize}
\item \textbf{Deep Segmentation}: first forward propagating the test images through the trained BCNN model, generating the residual image counterpart, and then through the trained SCNN model. Before the last SCNN convolution, we concatenate the residual image to the 15th SCNN convolutional layer outputs. Later, we binarize\footnote{In the majority of the experiments, the best threshold value was $0.7$, except for the categories B. Weat, Dyn. Bg., Int. Obj., and N. Videos, which used values of respectively $0.8$, $0.9$, $0.6$, and $0.9$.} the SCNN probabilistic output.
\item \textbf{Misclassification Rate}: in such a step, we calculate the number of correct and incorrect detections encoded by the True Positives (TPs), i.e., the number of pixels correctly classified as foreground, the True Negatives (TNs), i.e., the number of pixels correctly classified as background, the False Positives (FPs), i.e., the number of background pixels incorrectly classified as foreground, and the False Negatives (FNs), i.e., the number of foreground pixels incorrectly classified as background.
\item \textbf{Detection Measurements}: in such a step, the TPs, TNs, FPs, and FNs are combined into four different measures used to evaluate the robustness of the proposed MCRCNN model. Those measures are computed as follows:
\begin{equation}\label{equ.precision}
Precision = \frac{TP}{TP+FP},
\end{equation}
\\*
\begin{equation}\label{equ.recall}
Recall = \frac{TP}{TP+FN},
\end{equation}
\\*
\begin{equation}\label{equ.fmeasure}
F\text{-}measure = 2.0 \times \frac{Recall \times Precision}{Recall + Precision},
\end{equation}
\\*
\noindent and
\\*
\begin{equation}\label{equ.pwc}
PWC = 100.0 \times \frac{FN + FP}{TP + FP + FN + TN} \\[15pt]
\end{equation}
\end{itemize}
where $PWC$ denotes the percentage of wrong classifications.
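The four detection measures above can be computed directly from binary masks, as in the following Python sketch (the function name is illustrative):

```python
import numpy as np

def change_detection_metrics(pred, gt):
    """Compute Precision, Recall, F-measure and PWC from two boolean masks:
    `pred` is the binarized SCNN output, `gt` the ground-truth mask."""
    tp = np.sum(pred & gt)      # foreground pixels correctly detected
    tn = np.sum(~pred & ~gt)    # background pixels correctly detected
    fp = np.sum(pred & ~gt)     # background pixels flagged as foreground
    fn = np.sum(~pred & gt)     # foreground pixels flagged as background
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    fmeasure = 2.0 * recall * precision / (recall + precision)
    pwc = 100.0 * (fn + fp) / (tp + fp + fn + tn)
    return precision, recall, fmeasure, pwc
```

In practice the counts are accumulated over all frames of a video before the ratios are taken.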
\section*{Acknowledgment}
The authors are grateful to CNPq grants 307066/2017-7 and 427968/2018-6, FAPESP grants 2013/07375-0 and 2014/12236-1, as well as Petrobras grant 2017/00285-6.
\fi
\bibliographystyle{IEEEtran}
\section{Proposed Approach}\label{s.proposal}
In this work, we propose a learning-based scene change detection technique named Multiscale Cascade Residual Convolutional Neural Network (MCRCNN). Such a proposal is based on the work of Zhang et al.~\cite{zhang2017dncnn}, concerning the usage of residual learning and on the work of Santos et al.~\cite{santos2019crcnn}, regarding the usage of a multistage cascaded convolutional neural network for scene change detection. Figure~\ref{f.mcrcnn_arch} summarizes the MCRCNN proposal, which consists of a two-stage deep convolutional neural network composed of $20$ layers and a multiscale Residual Processing Module (RPM).
The first stage of the MCRCNN model consists of learning how to generate the so-called residual map, as described in more detail in Subsection~\ref{s.bcnn_net}. In the second stage, the multiscale processed residual map, as described by Subsection~\ref{s.rpm}, is integrated into the change detection network, whose functionality is described by Subsection~\ref{s.scnn_net}.
\subsection{Background Convolutional Neural Network}\label{s.bcnn_net}
The first change detection stage of the MCRCNN model, named Background Convolutional Neural Network (BCNN), is responsible for generating a foreground highlighted image, such as the vehicles in Figure~\ref{f.mcrcnn_arch}. The BCNN architecture is very similar to the Denoising Convolutional Neural Network (DnCNN) proposed by Zhang et al.~\cite{zhang2017dncnn}. As shown by Figure~\ref{f.mcrcnn_arch}, it starts with a single convolutional layer, shown in orange color, gets deeper with the insertion of $15$ more convolutional layers, represented by the blue-colored rectangle, and ends with a single convolutional layer, shown in green color.
Blue-colored and orange-colored layers in Figure~\ref{f.mcrcnn_arch} are locally activated by Rectified Linear Unit (ReLU) functions~\cite{nair2010relu}, use kernels of size $3\times3$, and output $64$ feature maps each. The green-colored layer is linearly activated, uses kernels of size $3\times3$, and outputs the residual map color image. One particularity of the blue-colored layers is the Interleaved Batch Normalizations (IBNs), which are batch normalization~\cite{ioffe2015batchnorm} operations applied at intervals of three layers, just before the ReLU activation procedure. Such a strategy tries to distribute the normalization procedure evenly along the entire network, avoiding processing overhead and also diminishing the network memory consumption.
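The BCNN layer stack described above (one input layer, $15$ middle layers with IBNs, and one linear output layer) can be summarized programmatically; the following Python sketch only builds a layer-by-layer specification, with names of our own choosing, rather than an executable network:

```python
def build_bcnn_spec(depth=17, interval=3):
    """Describe the BCNN stack: 3x3 conv layers with 64 filters and ReLU
    activations, plus a batch norm every `interval` middle layers (the
    Interleaved Batch Normalization policy)."""
    layers = [("conv3x3", 64, "relu")]                # orange input layer
    for i in range(1, depth - 1):                     # 15 blue middle layers
        norm = "batchnorm" if i % interval == 0 else None
        layers.append(("conv3x3", 64, "relu", norm))
    layers.append(("conv3x3", 3, "linear"))           # green residual-map layer
    return layers
```

With the default arguments, only $5$ of the $17$ layers carry a normalization operation, which is the memory saving the IBN policy aims at.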
The BCNN training procedure follows the same principles of the CRCNN work~\cite{santos2019crcnn}. It consists of two phases: the first one takes an interval $I = \{S_{1}, S_{2}, ..., S_{m}\}$ of consecutive frames from the video and uses it to calculate the \emph{deterministic background image}, which stands for an image $s$ that represents the median of such an interval\footnote{The same procedure was adopted by Bevilacqua et al.~\cite{bevilacqua2005cnm}. Other alternatives would be using auxiliary non-learning-based segmentation techniques, as performed by Babaee et al.~\cite{babaee2018deepbs}, or even manual selection.}. The second phase consists of minimizing the accumulated\footnote{Minimizing the sum rather than the mean cost value imposes an even bigger penalization on the optimization.} square error between the deterministic background image and the \emph{approximated background image} $b$, which is represented as follows:
\begin{equation}
\label{e.approximated_background}
b = f - BCNN(f; \Theta_{1}),
\end{equation}
where $f$ denotes the input image normalized between $[0, 1]$, $\Theta_{1}$ refers to the BCNN trainable parameters, and $BCNN(\cdot)$ refers to the residual map learned during the training process. In light of that, the BCNN training process aims at minimizing the following equation:
\begin{equation}
\label{e.backcnn_cost}
L_{B}(b, f; \Theta_{1}) = \frac{1}{2}\sum_{i=1}^n||b_{i} - s_{i} ||_F^2,
\end{equation}
where $n$ stands for the number of training samples and $||\cdot||_F^2$ represents the Frobenius norm. Notice that we employed a patch-based methodology, where $b_i$ and $s_i$ denote the $i^{th}$ patch extracted from images $b$ and $s$, respectively.
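The two training phases can be illustrated as follows (a minimal Python sketch using NumPy; the function names are ours):

```python
import numpy as np

def deterministic_background(frames):
    """Phase one: pixel-wise median over an interval of consecutive frames,
    yielding the deterministic background image s."""
    return np.median(np.stack(frames), axis=0)

def bcnn_loss(b_patches, s_patches):
    """Phase two: accumulated (summed, not averaged) squared Frobenius error
    between approximated-background patches b_i and deterministic-background
    patches s_i, as in the cost L_B above."""
    return 0.5 * np.sum((b_patches - s_patches) ** 2)
```

Here `b_patches` stands for patches of $b = f - BCNN(f; \Theta_1)$, so the gradient of the loss flows through the residual map.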
\subsection{Residual Processing Module}\label{s.rpm}
The Residual Processing Module (RPM) design was inspired by the Feature Processing Modules FPM~\cite{segnet2018tripletcnns} and FPM\_M~\cite{lim2019fgsegnetv2}. It serves mainly to improve the BCNN residual map quality by treating undesirable spatial coherence problems. As shown by Figure~\ref{f.rpm}, RPM starts by applying, over the residual map, a Spatial Dropout (SD) pre-processing technique, which, according to Hinton et al.~\cite{hinton2012dropout}, is a very efficient strategy to prevent the network parameters from overspecialization.
\begin{figure}[!ht]
\centering
\includegraphics[width=3.5in]{rpm.pdf}
\caption{Residual Processing Module architecture.}
\label{f.rpm}
\end{figure}
After the SD regularization, the residual map is fed to the multiscale processing stage, where it is convolved by dilated\footnote{Such a strategy tries to simulate the usage of kernel sizes of respectively $7\times7$, $11\times11$, $18\times18$, and $35\times35$.} filters at rates of $4$, $8$, $16$, and $32$. The dilation results are then activated by ReLU functions, which generate the feature maps \textbf{F1} to \textbf{F4}.
The generated feature maps are next depth-wise concatenated into \textbf{F5} and convolved by a single $1\times1$ filter, producing the local linear-combined map \textbf{P}. In the last RPM processing step, \textbf{P} is smoothed by an average pooling layer with a window size of $4\times4$, generating the refined residual map \textbf{R'}, which, after being properly normalized\footnote{The values of the output of the RPM module are normalized between $[0, 1]$ using min-max normalization.}, is used in the second stage of the proposed MCRCNN model.
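Two of the RPM post-processing steps, the $4\times4$ average-pooling smoothing and the min-max normalization, can be sketched in Python as follows (an illustrative sketch; we assume stride-$1$ pooling so that \textbf{R'} keeps the spatial resolution required by the later depth-wise concatenation):

```python
import numpy as np

def min_max_normalize(x, eps=1e-8):
    """Normalize the refined residual map R' to the [0, 1] range."""
    return (x - x.min()) / (x.max() - x.min() + eps)

def average_pool_same(x, window=4):
    """Stride-1 average pooling: smooth the map P with a window x window
    box filter while preserving the spatial resolution."""
    pad = window // 2
    padded = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = padded[i:i + window, j:j + window].mean()
    return out
```

The smoothing spreads residual evidence over neighboring pixels, which is what mitigates the spatial coherence problems mentioned above.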
\subsection{Segmentation Convolutional Neural Network}\label{s.scnn_net}
The second stage of the MCRCNN model is named Segmentation Convolutional Neural Network (SCNN), and it is responsible for generating the probability map that identifies, with real values in $[0, 1]$, the image change locations, also called foreground regions. In this multiscale version of the CRCNN proposed by Santos et al.~\cite{santos2019crcnn}, the RPM output (see Subsection~\ref{s.rpm} for more details) is depth-wise concatenated with the 15th SCNN convolutional layer output\footnote{Such an output comprises a set of $64$ feature maps activated by ReLU functions.}. The resultant block of $65$ feature maps is then convolved by a single filter of size $3\times3$ and activated by a sigmoid function, such a convolutional process being represented in Figure~\ref{f.mcrcnn_arch} by the yellow rectangle.
The SCNN normalization policy follows the BCNN one, but in such a case, the IBNs are substituted\footnote{Since the SCNN optimization uses the full-sized images, IN processing adapts better than BN.} by Interleaved Instance Normalizations (IINs)~\cite{ulyanov2016instnorm}. The training process follows the work of~\cite{santos2019crcnn}, which aims at minimizing the binary cross-entropy measured between the network output and the ground-truth binary detection mask. Such a mask corresponds to the pre-annotated true foreground regions present in the input image. Therefore, the SCNN training process aims at minimizing the following equation:
\begin{equation}
\label{e.segcnn_cost}
\begin{aligned}
L_{S}(t, f; \Theta_{2})=-\sum_{i=1}^{k1}\sum_{j=1}^{k2}&[t_{i, j}\log(\hat{t}_{i, j}) \ +\\
&(1 - t_{i, j})\log(1 - \hat{t}_{i,j})],
\end{aligned}
\end{equation}
where
\begin{equation}
\label{e.segcnn_cost_p1}
\hat{t} = SCNN(f; \Theta_{2}),
\end{equation}
\\*
notice that $t$ is the ground-truth pre-annotated binary mask, $\Theta_{2}$ stands for the SCNN trainable parameters, $f$ indicates the SCNN input color image (the same BCNN input image), and $k1$ and $k2$ denote the maximum image height and width, respectively.
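The SCNN cost of Equation~\eqref{e.segcnn_cost} can be sketched in Python as follows (the clipping constant is our own addition for numerical stability and is not part of the formal definition):

```python
import numpy as np

def scnn_loss(t, t_hat, eps=1e-7):
    """Summed binary cross-entropy between the ground-truth mask t and the
    SCNN probability map t_hat, as in the cost L_S above."""
    t_hat = np.clip(t_hat, eps, 1.0 - eps)  # avoid log(0)
    return -np.sum(t * np.log(t_hat) + (1.0 - t) * np.log(1.0 - t_hat))
```

Both arrays have the full image resolution $k1 \times k2$, matching the full-sized-image training used in the second MCRCNN stage.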
\section{Experimental Results}\label{s.experimental_section}
In this section, we present the results of the proposed MCRCNN method regarding the comparison against the non-learning-based change detection techniques, IUTIS-5~\cite{bianco2017iutis5}, PAWCS~\cite{st2015pawcs}, SuBSENSE~\cite{st2014subsense}, and the learning-based ones FgSegNet\_v2~\cite{lim2019fgsegnetv2}, FgSegNet\_S~\cite{segnet2018tripletcnns}, FgSegNet\_M~\cite{segnet2018tripletcnns}, Cascade~\cite{wang2017cascadecnn}, DeepBS~\cite{babaee2018deepbs}, and CRCNN~\cite{santos2019crcnn}.
For the sake of clarity, the discussion is subdivided into Subsection~\ref{ss.cd_dataset_res}, which presents the quantitative and qualitative results related to the CD2014 dataset, and Subsection~\ref{ss.petro_dataset_res}, which presents the results regarding PetrobrasROUTES dataset.
\subsection{CD2014 Dataset Results}\label{ss.cd_dataset_res}
According to Table~\ref{t.fmeasures}, the MCRCNN proposal, in comparison with the SuBSENSE, PAWCS, and IUTIS-5 techniques, shows average overall $F\text{-}measure$ improvements of $0.2169$, $0.2145$, and $0.1802$, respectively. In the worst-case scenario, considering the comparison against the learning-based techniques, the MCRCNN average overall $F\text{-}measure$ results were lower than those of FgSegNet\_v2, FgSegNet\_S, and FgSegNet\_M by $0.0268$, $0.0256$, and $0.0243$, respectively. Table~\ref{t.fmeasures} also shows that the MCRCNN average overall $F\text{-}measure$ results surpass the DeepBS, Cascade, and CRCNN ones. In such cases, the results were improved by $0.2029$, $0.035$, and $0.0090$, respectively.
Analyzing Table~\ref{t.overall_results}, one can see that the proposed technique, in the best-case scenario, achieved improvements in $Precision$, $Recall$, and $PWC$ of $0.2196$, $0.1969$, and $1.8883$, respectively, in the comparisons against the SuBSENSE and DeepBS techniques. Table~\ref{t.overall_results} also shows that, even though MCRCNN was not able to surpass the FgSegNets, it achieves close $Precision$ results, within $0.0046$ and $0.0053$ of FgSegNet\_S and FgSegNet\_M, respectively.
\begin{table}[hbt!]
\renewcommand{\arraystretch}{1.5}
\centering
\caption{Comparison of precision, recall and PWC overall results from CD2014 dataset.}
\scalebox{1.07}{
\begin{tabular}{cccc}
\hline
Methods & Avg. Precision & Avg. Recall & Avg. PWC \\ \hline
FgSegNet\_v2 \cite{lim2019fgsegnetv2} & \textbf{0.9823} & 0.9891 & \textbf{0.0402} \\
FgSegNet\_S \cite{segnet2018tripletcnns} & 0.9751 & \textbf{0.9896} & 0.0461 \\
FgSegNet\_M \cite{segnet2018tripletcnns} & 0.9758 & 0.9836 & 0.0559 \\
MCRCNN & 0.9705 & 0.9514 & 0.1037 \\
CRCNN \cite{santos2019crcnn} & 0.9604 & 0.9602 & 0.1348 \\
Cascade \cite{wang2017cascadecnn} & 0.8997 & 0.9506 & 0.4052 \\
DeepBS \cite{babaee2018deepbs} & 0.8332 & 0.7545 & 1.9920 \\
IUTIS-5 \cite{bianco2017iutis5} & 0.8087 & 0.7849 & 1.1986 \\
PAWCS \cite{st2015pawcs} & 0.7857 & 0.7718 & 1.1992 \\
SuBSENSE \cite{st2014subsense} & 0.7509 & 0.8124 & 1.6780 \\ \hline
\end{tabular}}
\label{t.overall_results}
\end{table}
It is worth noting that, even though the FgSegNets' quantitative results presented in Tables~\ref{t.fmeasures}~and~\ref{t.overall_results} surpass those of the MCRCNN method, our proposed network architecture is much more compact. It has a total of $1,116,618$ parameters, while the top two ranked techniques, i.e., FgSegNet\_v2 and FgSegNet\_S, comprise $9,225,161$ and $7,622,465$ parameters, respectively. Besides, even considering the RPM module size, the MCRCNN almost preserves the CRCNN size of $1,112,720$ parameters.
According to Figure~\ref{f.cd2014_segmentation}, when comparing the MCRCNN foreground detection masks in row (d) with the CRCNN masks in row (e), it can be noticed that MCRCNN exhibits more problems related to false negatives, with those problems being more pronounced in the Bad Weather and Shadow category scenes. The first one regards the incomplete detection of the truck body, and the second one concerns the middle person's foot and the people's heads. Such observations corroborate the average overall MCRCNN $Recall$ results presented in Table~\ref{t.overall_results}.
According to Figure~\ref{f.cd2014_segmentation}, the images in row (f) show that the Cascade technique also has difficulties in detecting changes in the Bad Weather and Shadow categories. It presents even worse false negative issues, as can be seen from the barely detected truck body in the Bad Weather scene and the undetected person's head in the Shadow scene. Also, regarding the Shadow category, unlike the MCRCNN, CRCNN, and FgSegNet\_v2 techniques, Cascade was not able to avoid the false positive shadow regions.
Besides the foreground masks, row (b) of Figure~\ref{f.cd2014_segmentation} shows the BCNN normalized residual map. As can be noticed, the misdetected foreground regions are closely related to the dark residual map regions. In such cases, we argue that the RPM dilation processing strategy was not capable of properly filling those map holes, which could make the SCNN pay less attention to such regions during its training procedure.
\subsection{PetrobrasROUTES Dataset Results}\label{ss.petro_dataset_res}
Considering the experiments conducted over the PetrobrasROUTES dataset, Table~\ref{t.overall_petro_results} shows that the MCRCNN results surpass learning-based state-of-the-art change detection techniques such as FgSegNet\_v2, FgSegNet\_S, and CRCNN in at least three of the four used detection measurements.
\begin{table}[hbt!]
\renewcommand{\arraystretch}{1.5}
\centering
\caption{Comparison of precision, recall and PWC overall results from PetrobrasROUTES dataset.}
\scalebox{1.08}{
\begin{tabular}{ccccc}
\hline
Methods & F-measure & Precision & Recall & PWC \\ \hline
FgSegNet\_v2 \cite{lim2019fgsegnetv2} & 0.9095 & 0.9672 & 0.8583 & 0.5831 \\
FgSegNet\_S \cite{segnet2018tripletcnns} & 0.9221 & \textbf{0.9770} & 0.8732 & 0.4287 \\
MCRCNN & \textbf{0.9664} & 0.9661 & \textbf{0.9667} & 0.2296 \\
CRCNN \cite{santos2019crcnn} & 0.9619 & 0.9611 & 0.9627 & \textbf{0.2218} \\ \hline
\end{tabular}}
\label{t.overall_petro_results}
\end{table}
According to Table~\ref{t.overall_petro_results}, in the best-case scenario, regarding the comparison against the FgSegNet\_v2 technique, the MCRCNN method exhibits improvements of $0.0524$, $0.1084$, and $0.3535$ in terms of the $F\text{-}measure$, $Recall$, and $PWC$ measurements, respectively. In the worst-case scenarios, the MCRCNN comparisons against FgSegNet\_S and CRCNN exhibit worse results of $0.0109$ and $0.0078$ in terms of the $Precision$ and $PWC$ measurements, respectively.
Concerning the detection quality analysis, Figure~\ref{f.petro_segmentation}(c) shows that MCRCNN was able to produce a much more precise foreground object detection mask than FgSegNet\_v2, whose results were severely affected by false negatives, as shown in Figure~\ref{f.petro_segmentation}(e). On the other hand, even though most of the object shape was recovered in Figure~\ref{f.petro_segmentation}(c), compared to Figure~\ref{f.petro_segmentation}(d), which shows the CRCNN results, and to Figure~\ref{f.petro_segmentation}(b), which shows the reference ground-truth mask, the MCRCNN technique presents more false positive areas around the detected foreground object.
\begin{figure}[htb!]
\setlength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{0.8}
\hspace*{.05cm}
\centerline{
\begin{tabular}{ccc}
{\small Bad Weather} & {\small Low Framerate} & {\small Shadow}\\
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/inputs/badWeather/in001542.jpg}&
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/inputs/lowFramerate/in000614.jpg}&
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/inputs/shadow/in000962.jpg}\\
& (a) & \\
\end{tabular}}
\hspace*{.05cm}
\centerline{
\begin{tabular}{rccc}
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/CRCNN_rpm/badWeather/res001542.png}&
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/CRCNN_rpm/lowFramerate/res000614.png}&
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/CRCNN_rpm/shadow/res000962.png}\\
& (b) & \\
\end{tabular}}
\hspace*{.05cm}
\centerline{
\begin{tabular}{rccc}
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/groundtruths/badWeather/gt001542.png}&
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/groundtruths/lowFramerate/gt000614.png}&
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/groundtruths/shadow/gt000962.png}\\
& (c) & \\
\end{tabular}}
\hspace*{.05cm}
\centerline{
\begin{tabular}{rccc}
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/CRCNN_rpm/badWeather/bin001542.png}&
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/CRCNN_rpm/lowFramerate/bin000614.png}&
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/CRCNN_rpm/shadow/bin000962.png}\\
& (d) & \\
\end{tabular}}
\hspace*{.05cm}
\centerline{
\begin{tabular}{rccc}
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/CRCNN/badWeather/bin001542.png}&
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/CRCNN/lowFramerate/bin000614.png}&
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/CRCNN/shadow/bin000962.png}\\
& (e) & \\
\end{tabular}}
\hspace*{.05cm}
\centerline{
\begin{tabular}{rccc}
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/cascade/badWeather/bin001542.png}&
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/cascade/lowFramerate/bin000614.png}&
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/cascade/shadow/bin000962.png}\\
& (f) & \\
\end{tabular}}
\hspace*{.05cm}
\centerline{
\begin{tabular}{rccc}
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/segnetv2/badWeather/bin001542.png}&
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/segnetv2/lowFramerate/bin000614.png}&
\includegraphics[width=2.8cm, height=2.8cm]{figs/seg/segnetv2/shadow/bin000962.png}\\
& (g) & \\
\end{tabular}}
\centering
\caption{Qualitative results considering the categories ``Bad Weather'', ``Low Framerate'', and ``Shadow'' from the CD2014 dataset: (a) input RGB frame, (b) MCRCNN residual maps, (c) ground-truth detection masks, and results concerning (d) the proposed MCRCNN, (e) CRCNN, (f) Cascade, and (g) FgSegNet\_v2.}
\label{f.cd2014_segmentation}
\end{figure}
\begin{figure}[htb!]
\setlength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{1.5}
\vspace*{-.3cm}
\centerline{
\begin{tabular}{cr}
{\small Platform}\\
\includegraphics[width=8cm, height=3.5cm]{figs/seg/inputs/petrobras/in054.jpg}\\
(a)\\
\end{tabular}}
\centerline{
\begin{tabular}{cr}
\includegraphics[width=8cm, height=3.5cm]{figs/seg/groundtruths/petrobras/gt054.png}\\
(b)\\
\end{tabular}}
\hspace*{.05cm}
\centerline{
\begin{tabular}{cr}
\includegraphics[width=8cm, height=3.5cm]{figs/seg/CRCNN_rpm/petrobras/bin054dst.png}\\
(c)\\
\end{tabular}}
\hspace*{.05cm}
\centerline{
\begin{tabular}{cr}
\includegraphics[width=8cm, height=3.5cm]{figs/seg/CRCNN/petrobras/bin054dst.png}\\
(d)\\
\end{tabular}}
\hspace*{.05cm}
\centerline{
\begin{tabular}{cr}
\includegraphics[width=8cm, height=3.5cm]{figs/seg/segnetv2/petrobras/bin054dst.png}\\
(e)\\
\end{tabular}}
\centering
\caption{Qualitative results considering an obstructed route video scene from PetrobrasROUTES dataset: (a) input RGB frame, (b) ground-truth detection mask, and results concerning (c) MCRCNN, (d) CRCNN and (e) FgSegNet\_v2 techniques.}
\label{f.petro_segmentation}
\end{figure}
\section{Introduction}
Continuous frames extend the concept of frames to the setting where the index set is a measure space, see \cite{Ali1, Ra, Gb, Gk}.
Apart from expected similarities, this extension has revealed various differences between the ``discrete'' and ``continuous'' theories.
For example, continuous frames need not be norm bounded, and they may describe the states of quantum systems in a neighborhood
of a point in phase space $\mathbb{R}^{2d} $, which is a more realistic situation than the corresponding discrete case related
to some lattice in $\mathbb{R}^{2d} $, cf. \cite{deGosson2020}.
On the other hand, the tensor product of Hilbert spaces is an important topic in mathematics \cite{LOAN200085} and theoretical physics \cite{Caban_2005}. Here we combine these two approaches.
We introduce the notion of continuous frames (and Bessel mappings) for tensor products of Hilbert spaces $\h = \h_1 \otimes \h_2 $ with respect to a
(tensor product) measure space $(X,\mu)$. When the measure $\mu $ is chosen to be the counting measure, the main properties of tensor products of
(discrete) frames considered in \cite{bourou, Garcia, KhoAsg, WangLi} are recovered.
We show the expected consistency property, i.e. that the continuous frame/Bessel mapping condition is preserved by the tensor product, Theorem
\ref{thm:main}. To tackle the issue of representing vectors in tensor product Hilbert spaces, different systems can be used for analysis and
synthesis, which gives rise to the notion of dual pairs of continuous frames. We study the corresponding operators,
and give a representation of canonical dual frames for the tensor product continuous frames.
In addition, we briefly discuss the existence of non-simple tensor product (dual) frames. For that result, a full characterization of all dual continuous frames is needed. We prove the generalization of the well-known result for (discrete) frames \cite[Lemma 6.3.6]{ole} to the continuous frame setting, solving an open question. We use the powerful technique of reproducing kernel Hilbert spaces in these investigations, see Theorem \ref{olecontinuous}, and derive the corresponding property for tensor product continuous frames.
\par
Let us recall that tensor product Hilbert spaces are important in many different contexts. For example, as noted in \cite{xxlgrospeck19}, {\em ``the theory of tensor products is at the heart
of kernel theorems for operators''}. In fact, the tensor product of two Hilbert spaces can be introduced in terms of Hilbert-Schmidt operators which, in turn, can be identified with their kernels. In this paper we focus our attention on other aspects of tensor products; the
approach based on kernel theorems will be given in a separate contribution.
\par
For example, in Section \ref{sec:multipliers} we study the tensor product continuous frame multipliers and their compactness properties,
thus extending results from \cite{BBR} to the tensor product setting. In addition,
we recall the partial trace theorem which is an important tool related to applications of our results to quantum systems.
As an illustration, in Section \ref{sec:STFTwavmult0} we consider particular examples of continuous frame multipliers
in the form of familiar localization operators in the context of the short-time Fourier transform and wavelet multipliers.
Localization operators are used in the context of quantization \cite{Berezin71}, in signal analysis \cite{da88},
or as an approximation of pseudodifferential operators, cf. \cite{CRodino} and the references given there.
We recover some well-known results, but also point out some Schatten class results related to the wavelet and mixed type multipliers that so far seem to have remained unconsidered.
\par
Specific instances of our general theory, which are one of the main motivations for our study,
could be related to the states of quantum systems. More precisely, we
propose the interpretation of a family of trace class operators as density operators, also called density matrices,
for composite (bipartite) quantum systems. Recently, de Gosson \cite{deGosson2020, deGosson2021} considered Toeplitz density operators using an approach closely related
to the short-time Fourier transform multipliers of Section \ref{sec:STFTwavmult0}.
The main feature of operators considered in Section \ref{sec:quantum} is that their partial traces
(or reduced density operators) are operators of the same form.
Thus we propose the study of bilinear localization
operators which, in principle, could be used to describe the state of a subsystem in a prescribed region of the phase space.
This is analogous to the use of localization operators in extracting information about a signal
in a specific region of the time-frequency plane.
In our opinion, the results from Sections \ref{sec:STFTwavmult0} and \ref{sec:quantum}
open the perspective of using the mathematical tools developed in Sections \ref{sec:contframes} and
\ref{sec:multipliers} in the future study of bipartite quantum systems and their subsystems.
For example, Theorem \ref{thm:density} provides a description of the separable state of a composite system, and a
partial affirmative answer to the question of de Gosson \cite[Section 5]{deGosson2020} which can be roughly rephrased as follows:
can the structure of a density operator be appropriately restricted to its partial traces?
\par
\section{Preliminaries} \label{sec:preliminaries}
For the reader's convenience in this section we collect some basic facts from operator theory and tensor products of Hilbert spaces which will be
used in the sequel. We refer to \cite{conw1, Fol,
Mu} for details.
\subsection{Operator theory}
By $\h $ we denote a complex Hilbert space with the inner product $\langle x, y\rangle$
(linear in the first and conjugate linear in the second coordinate)
and norm $\|x\| = \sqrt{\langle x, x \rangle}$, $x,y \in \h$. In the sequel we consider separable Hilbert spaces.
A map $\Psi:\h \times \h\rightarrow \mathbb{C}$ is a sesquilinear form if it is linear in
the first variable and conjugate-linear in the second. The sesquilinear form
is bounded if there exists a constant $C>0$ such that $\left| \Psi(x,y) \right| \le C \cdot \| x \| \| y \|$,
$x,y \in \h$. The smallest such constant is called the bound of $\Psi$ and is denoted by $\|\Psi\|$. For every bounded sesquilinear form there is a unique bounded operator $O$ on $\h$ such that
\begin{equation} \label{murphy}
\Psi(x,y)=\langle O(x),y\rangle, \qquad x,y\in \h,
\end{equation}
and $\|O\|=\|\Psi\|$.
\par
A bounded operator $T: \h \rightarrow \h$ is positive (respectively
non-negative) if $\langle Tx,x\rangle>0$ for all $x\neq0$ (respectively $\langle Tx,x\rangle\geq0$ for all $x\in\h$).
A linear operator $T$ from the Banach space $X$ into the Banach space $Y$ is
compact if the image
of the closed unit ball in $X$ is a relatively compact subset of $Y$, or, equivalently, if the image of any bounded
sequence contains a convergent subsequence.
If $T$ is a compact operator on Hilbert space $\h$ and if $T^{\ast}$ is the adjoint of $T$ (i.e. $\langle Tx, y \rangle = \langle x, T^{\ast}y \rangle$, $ \forall x,y \in \h$) then the eigenvalues of
the unique non-negative and compact operator $S$ such that $S^2 = T^{\ast}T$ are called the
singular values of $T$.
An operator $T $ belongs to the Schatten class $\mathcal{S}_p(\h)$, $1 \leq p < \infty$,
if the sequence of its singular values $(s_n)$ belongs to $\ell^p$. In particular,
$\mathcal{S}_1(\h)$ consists of the trace class operators, and $\mathcal{S}_2(\h)$ is the class of Hilbert-Schmidt operators (see also below),
and $\mathcal{S}_p(\h) \subset \mathcal{S}_q(\h)$ when $ 1\leq p \leq q \leq \infty,$ where $ \mathcal{S}_\infty (\h) = \mathcal{B}(\h) $ denotes the set of all bounded
linear operators on $\h$.
\par
Let $\h_1$ and $\h_2$ be separable Hilbert spaces.
The set $\mathcal{B}(\h_2,\h_1)$ of all bounded linear operators from $\h_2$ to $\h_1$
is a Banach space with the usual operator norm $\|T\|=\sup_{\|x\|=1}\|Tx\|$, and $GL(\h_2,\h_1)$ denotes the set
of all bounded linear operators from $\h_2$ to $\h_1$ with bounded inverse. If $\h_1 = \h_2 = \h$, we write $\mathcal{B}(\h)$ and $GL(\h)$ for
short.
If $ T \in \mathcal{B}(\h_2,\h_1)$ and
$$
\| T \| ^2 _{\h S} := \sum_{n=1} ^\infty \| T e_n \|^2 _{\h_1} < \infty
$$
for some orthonormal basis (ONB) $ (e_n ) $ in $ \h_2 $, then $T$ is called a {\em Hilbert-Schmidt} ($ \h S $) operator from $\h_2$ to $\h_1$.
We denote the class of Hilbert-Schmidt operators by $\h S (\h_2,\h_1 )$.
If $\h_1 = \h_2 = \h$, then $\h S (\h,\h ) = \mathcal{S}_2(\h)$.
$\h S (\h_2,\h_1 )$ is a Hilbert space (of compact operators) with the inner product
$$
\langle S, T \rangle_{\h S} = \sum_{n=1} ^\infty \langle S e_n, T e_n \rangle_{\h_1}.
$$
If $ x\in \h_1$ and $y \in \h_2 $, then their {\em tensor product} $x \otimes y : \h_2 \rightarrow \h_1$ is defined by
\begin{equation} \label{outten1}
(x \otimes y ) h = \langle h, y \rangle x, \;\;\; h \in \h_2,
\end{equation}
and it belongs to $ \h S (\h_2,\h_1 )$.
\par
For $ P \in \mathcal{B}(\h_2)$ and $ Q \in \mathcal{B}(\h_1)$ we define the {\em tensor product of operators} $ Q \otimes P :
\mathcal{B}(\h_2, \h_1) \rightarrow \mathcal{B}(\h_2, \h_1) $ by
$ (Q \otimes P) T = Q \circ T \circ P^*.$ It is invertible if and only if $P$ and $Q $ are invertible, and
$ (Q \otimes P)^{-1} = Q^{-1} \otimes P^{-1} .$
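To make the two notions above concrete, here is a small finite-dimensional sketch in pure Python (our illustration only; the vectors and matrices are chosen ad hoc): it checks \eqref{outten1} for $\h_1 = \h_2 = \mathbb{R}^2$, where $(Q \otimes P)T = Q T P^{*}$ reduces to matrix products, and verifies the identity $(Q \otimes P)(x \otimes y) = Qx \otimes Py$.

```python
# Finite-dimensional sketch (real case, so P* is the transpose):
# the rank-one operator (x (x) y)h = <h, y> x, and the operator
# tensor product (Q (x) P)T = Q T P*.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def rank_one(x, y):
    """Matrix of x (x) y : h -> <h, y> x (no conjugation over the reals)."""
    return [[xi * yj for yj in y] for xi in x]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

x, y, h = [1.0, 2.0], [3.0, -1.0], [0.5, 4.0]
T = rank_one(x, y)
Th = [sum(T[i][j] * h[j] for j in range(2)) for i in range(2)]
assert Th == [dot(h, y) * xi for xi in x]        # (x (x) y)h = <h, y> x

Q = [[1.0, 1.0], [0.0, 2.0]]
P = [[2.0, 0.0], [1.0, 1.0]]
QTP = matmul(matmul(Q, T), transpose(P))         # (Q (x) P)T = Q T P*
Qx = [sum(Q[i][j] * x[j] for j in range(2)) for i in range(2)]
Py = [sum(P[i][j] * y[j] for j in range(2)) for i in range(2)]
assert QTP == rank_one(Qx, Py)                   # (Q (x) P)(x (x) y) = Qx (x) Py
```

The last assertion is the matrix form of the action of $Q \otimes P$ on simple tensors under the identification with Hilbert-Schmidt operators.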
\subsection{Tensor product of Hilbert spaces}
Let $\h_1$ and $\h_2$ be separable Hilbert spaces.
Equipping the algebraic tensor product with the (extension of) the inner product
\begin{equation}
\langle x_1 \otimes y_1, x_2 \otimes y_2 \rangle_{\otimes} =
\langle x_1, x_2 \rangle_{\h_1} \langle y_1, y_2 \rangle_{\h_2}, \;\;\; x_1, x_2 \in \h_1,\; y_1, y_2 \in \h_2, \label{eq:tens innprod1}
\end{equation}
makes it into a Hilbert space, denoted by $\h_1 \otimes \h_2$, and
$ \| \cdot \|_{\otimes} = \sqrt{ \langle \cdot, \cdot \rangle_{\otimes} }.$
The space $\h_1 \otimes \h_2$ is unitarily isomorphic to the class of Hilbert-Schmidt operators $\h S (\h_2,\h_1 )$;
the unitary operator maps $ x_1 \otimes y_1$ onto the operator given by \eqref{outten1},
cf. \cite{Heil}.
\par
Let us collect basic properties of tensor products given in the following lemma.
\begin{lem} \label{lm:tensorprodprop}
Let $\h_1$ and $\h_2$ be separable Hilbert spaces and $\h_1 \otimes \h_2$ their tensor product. Then we have:
\begin{itemize}
\item[a)] $ \| u \otimes v \| = \| u \| \| v\|,$ $u \in \h_1,$ $v \in \h_2.$
\item[b)] if $ S \in \mathcal{B} (\h_1) $ and $ T \in \mathcal{B} (\h_2), $ then
$ \| S \otimes T \| = \| S \| \| T\|,$ and
$$
(S \otimes T) (u \otimes v) = Su \otimes Tv, \qquad u \in \h_1,v \in \h_2.
$$
\item[c)] $\h_1 \otimes \h_2 = \overline{\text{span}} \; \{ u \otimes v, \;:\; u \in \h_1, v \in \h_2 \}$,
i.e. $\h_1 \otimes \h_2$ is the
closure of the set of all
finite linear combinations of elements of the form $ u \otimes v$, $u \in \h_1$, $v \in \h_2$.
\item[d)] the tensor product of two ONBs is an ONB in the tensor product space.
\item[e)] (Schmidt decomposition) for every $ x \in \h_1 \otimes \h_2$ there are non-negative numbers $ c_n$ and orthonormal bases $ (e_n) $ of $ \h_1$ and
$ (f_n) $ of $ \h_2,$ such that
$$
x = \sum_{n=1} ^\infty c_n (e_n \otimes f_n), \qquad \| x\|^2 = \sum_{n=1} ^\infty c_n ^2.
$$
\end{itemize}
\end{lem}
\begin{proof} The proofs of a)--d) are folklore, see e.g. \cite{Fol,gaal,Hall}. For the proof of the Schmidt decomposition e) we refer to \cite{BB}.
\end{proof}
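A minimal worked example of the Schmidt decomposition e), added here for illustration: for $\h_1 = \h_2 = \mathbb{C}^2$ with orthonormal bases $(e_1, e_2)$ and $(f_1, f_2)$, the vector

```latex
x = \tfrac{1}{\sqrt{2}} \, (e_1 \otimes f_1) + \tfrac{1}{\sqrt{2}} \, (e_2 \otimes f_2),
\qquad \| x \|^2 = \tfrac{1}{2} + \tfrac{1}{2} = 1,
```

is already in Schmidt form with $c_1 = c_2 = 1/\sqrt{2}$. Under the identification of $\h_1 \otimes \h_2$ with $\h S(\h_2, \h_1)$, the Schmidt decomposition is precisely the singular value decomposition, and the $c_n$ are the singular values.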
\section{Frames in tensor products of Hilbert spaces} \label{sec:contframes}
In this section we derive fundamental properties of continuous frames for tensor products of Hilbert spaces.
The usual definition of frames uses discrete index sets \cite{ole}; one can also give a definition using continuous ones \cite{Ali1, Ali2, BBR}. As an introduction we transfer the basic definitions directly to the case used in this manuscript.
\subsection{Continuous Frames in Tensor Product Hilbert spaces}
\begin{defn} \label{De:frame-Bessel}
Let $\h$ be the tensor product $\h = \h_1 \otimes \h_2 $ of separable complex Hilbert spaces, and $(X, \mu) = ( X_1 \times X_2, \mu_1 \otimes
\mu_2)$ be the product of measure spaces with $\sigma$-finite positive measures $\mu_1, \mu_2.$
The mapping $F :X\to\h$ is called a \emph{continuous frame for the tensor product Hilbert space} $\h$ with respect to $(X ,\mu)$, if
\begin{enumerate}
\item $F$ is weakly-measurable, i.e., for all $\vec{f} \in \h$,
$$
x = (x_1, x_2) \to
\langle \vec{f}, F({x})\rangle
$$
is a measurable function on $X$; \\
\item there exist constants $A>0$ and $B<\infty$ such that
\begin{equation}
\label{deframe}
A\|\vec{f} \|^{2} \leq \int_{X}|\langle \vec{f},F({x})\rangle|^{2}\,d\mu(x)
\leq B\|\vec{f}\|^{2}, \qquad \forall \vec{f}\in \h.
\end{equation}
\end{enumerate}
The constants $A$ and $B$ are called the lower and the upper \emph{continuous frame bound}, respectively.
If $A=B$, then $F$ is called a \emph{tight} continuous frame, and if $A = B = 1$, a {\em Parseval} frame.
The mapping $F$ is called the {\em Bessel mapping} if only the second inequality in
(\ref{deframe}) is considered. In this case, $B$ is called the \emph{Bessel constant} or the \emph{Bessel bound}.\footnote{Note that the notion of a Bessel mapping is unrelated to Bessel functions.}
\end{defn}
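For orientation: if $\mu$ is the counting measure on a finite set, Definition \ref{De:frame-Bessel} reduces to the usual discrete frame inequality. The following pure-Python sketch (our illustration; the particular frame is chosen ad hoc) verifies tightness for the three unit vectors at angles $2\pi k/3$ in $\mathbb{R}^2$, which form a tight frame with $A = B = 3/2$:

```python
import math

# With the counting measure on X = {0, 1, 2}, the integral in the frame
# inequality becomes a finite sum.  The three unit vectors at angles
# 2*pi*k/3 form a tight frame for R^2 with A = B = 3/2.
frame = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3))
         for k in range(3)]

def frame_sum(f):
    """sum_k |<f, F(k)>|^2, i.e. the frame 'integral' for the counting measure."""
    return sum((f[0] * u + f[1] * v) ** 2 for u, v in frame)

f = (0.3, -1.7)
norm2 = f[0] ** 2 + f[1] ** 2
assert abs(frame_sum(f) - 1.5 * norm2) < 1e-9    # tight: A = B = 3/2
```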
To each continuous frame we associate the frame-related operators as follows.
Let $(X, \mu) $ and $\h$ be as in Definition \ref{De:frame-Bessel}, and let
$ L^{2}(X, \mu)$ be the space of square-integrable functions on
$(X, \mu) $. The operator $T_{F}:L^{2}(X, \mu)\to\h$ defined by
\begin{equation} \label{eq:synthesisopr}
T_{F}\vec{\varphi} = \int_{X} \vec{\varphi}(x) F(x) \,d\mu(x) = \int_{X_1} \int_{X_2} \vec{\varphi}(x_1, x_2) F(x_1, x_2)
\,d\mu_1 (x_1) \,d\mu_2 (x_2)
\end{equation}
is called the \textit{synthesis operator}, and the operator
$ T_{F}^{*}: \h \to L^{2}(X, \mu),$ given by
\begin{equation} \label{eq:adjointsynthesisopr}
(T_{F}^{*}\vec{f})(x)=\langle \vec{f}, F(x)\rangle,\quad x\in X
\end{equation}
is called the \textit{analysis operator} of $F$.
The \textit{continuous frame operator} $S_F$ of $F$ is given by $S_{F}=T_{F}T_{F}^{*}$.
\begin{rem} \label{rem:Riesz bases}
In discrete frame theory, it is of interest to consider Riesz bases.
It does not make sense to address this question in the
context of continuous frames, since all continuous Riesz bases are actually discrete, cf. \cite{spexxl16,Ra,JAKOBSEN2016229}.
\end{rem}
The first inequality in (\ref{deframe}) shows that $F$ is complete, i.e.,
$$\overline{\textrm{span}}\{F({x})\}_{x\in X}=\h,$$
which in the continuous setting has to be understood modulo $\mu$-null sets: if $\langle \vec{f}, F(x) \rangle = 0$ for $\mu$-almost every $x \in X$, then $\vec{f} = 0$. In contrast to the discrete setting, one has to be a bit more careful with
this definition due to the null sets in $X$, cf. \cite{xxlosg21}.
The next result shows that the continuous frame condition is preserved by the tensor product, generalizing the result for discrete frames.
\begin{thm} \label{thm:main}
Let $\h_1$ and $\h_2$ be separable Hilbert spaces, $\h = \h_1 \otimes \h_2 $, and let
$(X, \mu) = ( X_1 \times X_2, \mu_1 \otimes \mu_2)$ be the product of measure spaces with $\sigma$-finite positive measures $\mu_1, \mu_2.$
The mapping $F = F_1 \otimes F_2 :X\to\h$ is a continuous frame for $\h$ with respect to $(X ,\mu)$ if and only if
$ F_1 $ is a continuous frame for $\h_1$ with respect to $(X_1, \mu_1) $, and
$F_2 $ is a continuous frame for $\h_2 $ with respect to $(X_2, \mu_2) $.
Furthermore, if $F = F_1 \otimes F_2 $ is a continuous frame for $\h$ with frame bounds $A$ and $B$,
then the continuous frame bounds for $F_1$ can be chosen as
$ A_1 = A/ C_{F_2} $ and $ B_1 = B/ D_{F_2} $, where
\begin{equation} \label{eq:lowerboundF1}
C_{F_2} = \inf_{ \| g\|_{\h_2} = 1}
\int_{X_2} |\langle g , F_2(x_2) \rangle|^{2}\, d\mu_2 (x_2),
\end{equation}
\begin{equation} \label{eq:upperboundF1}
D_{F_2} = \sup_{ \| g\|_{\h_2} = 1}
\int_{X_2} |\langle g , F_2(x_2) \rangle|^{2}\, d\mu_2 (x_2),
\end{equation}
and the continuous frame bounds for $F_2$ can be chosen as
$ A_2 = A/ C_{F_1}$ and $ B_2 = B/ D_{F_1}$, where
\begin{equation} \label{eq:lowerboundF2}
C_{F_1} = \inf_{ \| f\|_{\h_1} = 1} \int_{X_1} |\langle f , F_1(x_1) \rangle|^{2}\, d\mu_1 (x_1) ,
\end{equation}
\begin{equation} \label{eq:upperboundF2}
D_{F_1} = \sup_{ \| f\|_{\h_1} = 1}
\int_{X_1} |\langle f , F_1(x_1) \rangle|^{2}\, d\mu_1 (x_1).
\end{equation}
Vice versa, if $ F_j $ is a continuous frame for $\h_j$ with the frame bounds $A_j$ and $B_j$, $ j = 1,2$,
then the frame bounds for $ F = F_1 \otimes F_2 $ can be chosen as $ A= A_1 A_2 $ and $B= B_1 B_2$.
\end{thm}
\begin{proof}
Assume that $F = F_1 \otimes F_2$ is a continuous frame for $\h = \h_1 \otimes \h_2 $ with respect to $(X ,\mu)$. Let $f\in \h_1 \setminus \{ 0
\},$
and fix $g\in \h_2 \setminus \{ 0 \}$. Then $ f\otimes g \in \h $, and
$$
\left( T^* _{ F_1 \otimes F_2} ( f\otimes g) \right)(x) = \langle f\otimes g , F_1(x_1) \otimes F_2(x_2) \rangle =
\langle f , F_1(x_1) \rangle \langle g , F_2(x_2) \rangle
$$
implies that (by Fubini's theorem)
\begin{multline*}
\int_X |\langle f\otimes g , F_1(x_1) \otimes F_2(x_2) \rangle|^{2}\,d\mu(x) \\
= \int_{X_1} |\langle f, F_1(x_1) \rangle|^{2}\,d\mu_1(x_1)
\int_{X_2} |\langle g , F_2(x_2) \rangle|^{2}\, d\mu_2(x_2).
\end{multline*}
Now, \eqref{deframe} and
$$
\| f\otimes g \|_{\otimes} = \| f\|_{\h_1 } \| g \|_{ \h_2}
$$
imply
$$
A\|f\otimes g \|_{\otimes} ^{2} \leq \int_{X_1} |\langle f, F_1(x_1) \rangle|^{2}\,d\mu_1(x_1)
\int_{X_2} |\langle g , F_2(x_2) \rangle|^{2}\, d\mu_2(x_2)
\leq B\| f\otimes g \|_{\otimes} ^{2},
$$
so that
\begin{align*}
\frac{A \| g \|^2 _{ \h_2}}{\int_{X_2} |\langle g , F_2(x_2) \rangle|^{2}\, d\mu_2(x_2) }
\| f\|^2 _{\h_1 } & \leq \int_{X_1} |\langle f, F_1(x_1) \rangle|^{2}\,d\mu_1(x_1)
\\
& \leq \frac{B \| g \|^2 _{ \h_2}}{\int_{X_2} |\langle g , F_2(x_2) \rangle|^{2}\, d\mu_2(x_2) }
\| f\|^2 _{\h_1 }.
\end{align*}
Notice that $\int_{X_2} |\langle g , F_2(x_2) \rangle|^{2}\, d\mu_2(x_2) \neq 0$ for all $g \in \h_2 \setminus \left\{ 0 \right\}$, and choose
\begin{align*}
A_1 & := \sup_{\| g \|^2 _{ \h_2} = 1}
\frac{A }{\int_{X_2} |\langle g , F_2(x_2) \rangle|^{2}\, d\mu_2(x_2) } = \frac{A}{C_{F_2}} > 0, \\
B_1 & := \inf_{ \| g \|^2 _{ \h_2} = 1}
\frac{B}{\int_{X_2} |\langle g , F_2(x_2) \rangle|^{2}\, d\mu_2(x_2) } = \frac{B}{D_{F_2}} < \infty,
\end{align*}
with $C_{F_2}$ and $D_{F_2}$ given by \eqref{eq:lowerboundF1} and \eqref{eq:upperboundF1} respectively.
Thus we conclude that $ F_1 $ is a continuous frame for $ \h_1 $ with respect to $(X_1, \mu_1) $ with the continuous frame bounds $A_1$ and $B_1$.
By similar arguments we conclude that $F_2 $ is a continuous frame for $ \h_2 $ with respect to $(X_2, \mu_2) $ with continuous frame bounds $ 0< A_2 = A/C_{F_1}$ and $ B_2 = B/D_{F_1} < \infty,$
with $C_{F_1}$ and $D_{F_1}$ given by \eqref{eq:lowerboundF2} and \eqref{eq:upperboundF2} respectively.
\par
For the converse, by the assumptions it immediately follows that
$ F = F_1 \otimes F_2 $ is weakly measurable on $ \h $ with respect to $ (X, \mu) $, so it remains to check \eqref{deframe}.
Let $f\otimes g$ be a simple tensor. Then
\begin{multline*}
\norm{T^* _{F_1 \otimes F_2} (f \otimes g)}^2 = \int_{X_1} \int_{X_2} \left| \left< f \otimes g , F_1 (x_1) \otimes F_2 (x_2) \right> \right|^2 d
\mu_1(x_1) d \mu_2(x_2) \\
= \int_{X_1} \left| \left< f , F_1(x_1)\right> \right|^2 d \mu_1(x_1) \int_{X_2} \left| \left< g , F_2 (x_2) \right> \right|^2 d \mu_2(x_2) \\
\le B_1 B_2 \norm{f}_{\h_1} ^2 \norm{g}_{\h_2} ^2 = B_1 B_2 \norm{f \otimes g}_{\otimes} ^2,
\end{multline*}
and similarly
$$
\norm{T^* _{F_1 \otimes F_2} (f \otimes g)}^2 \geq A_1 A_2 \norm{f \otimes g}_{\otimes} ^2.
$$
These inequalities extend to the span of the simple tensors $f \otimes g$, which is dense in $\h_1 \otimes \h_2$.
By \cite[Proposition 2.5]{RaNaDe} it follows that $F_1 \otimes F_2$ is a continuous frame with the frame bounds $ A = A_1 A_2 $ and $B= B_1 B_2$.
\end{proof}
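A numerical sketch of Theorem \ref{thm:main} for counting measures (our illustration; the frames are chosen ad hoc): tensoring a tight frame for $\mathbb{R}^2$ with bound $3/2$ with an orthonormal basis of $\mathbb{R}^2$ (bound $1$) yields, via the Kronecker product, a tight frame for $\mathbb{R}^4 \cong \mathbb{R}^2 \otimes \mathbb{R}^2$ with bound $\tfrac32 \cdot 1$:

```python
import math

F1 = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3))
      for k in range(3)]                      # tight frame, A1 = B1 = 3/2
F2 = [(1.0, 0.0), (0.0, 1.0)]                 # orthonormal basis, A2 = B2 = 1

def kron(u, v):
    return [ui * vj for ui in u for vj in v]  # u (x) v as a vector in R^4

tensor_frame = [kron(u, v) for u in F1 for v in F2]

x = [0.2, -1.0, 0.7, 3.1]                     # arbitrary test vector
total = sum(sum(xi * wi for xi, wi in zip(x, w)) ** 2 for w in tensor_frame)
norm2 = sum(xi ** 2 for xi in x)
assert abs(total - 1.5 * norm2) < 1e-9        # A = A1*A2 = B = B1*B2 = 3/2
```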
From the proof of Theorem \ref{thm:main} we also have the following observation.
\begin{cor}
Let the assumptions of Theorem \ref{thm:main} hold.
Then the mapping $F = F_1 \otimes F_2 :X\to\h$ is a continuous bilinear Bessel mapping for $\h$ with respect to $(X ,\mu)$ if and only if
$ F_1 $ is a continuous Bessel mapping for $\h_1$ with respect to $(X_1, \mu_1) $ and
$F_2 $ is a continuous Bessel mapping for $\h_2 $ with respect to $(X_2, \mu_2) $.
\end{cor}
\par
\subsection{Dual pairs of continuous frames}\label{sec:dualpair0}
Next we discuss dual continuous frames. If $F_j $ are continuous frames for $\h_j $, $j=1,2$,
then we may consider dual frames $G_j$ which fulfill
$$
\langle f, g \rangle = \int_{X_j} \langle f, F_j (x_j) \rangle \langle G_j (x_j), g \rangle d\mu_j (x_j), \;\; \forall f,g \in \h_j, \; j=1,2,
$$
cf. Definition \ref{def:dualframes} for a more general situation
(see also \cite{Gb}). It follows from Theorem \ref{thm:main} that such dual frames
give rise to continuous (dual) frames for the tensor product Hilbert space $ \h = \h_1 \otimes \h_2$.
In this subsection we focus on the frame operator in the context of tensor products,
and show that it gives rise to the {\em canonical dual frame} for a given frame.
However, as we shall see, there always exist non-simple dual frames for tensor products of Hilbert spaces.
Let us note that $\langle S_{F}\vec{f},\vec{f}\rangle=\int_{X}|\langle \vec{f},F(x)\rangle |^{2}\,d\mu(x)$. Therefore \cite{Ali2,RaNaDe},
it follows that $AI\leq S_{F}\leq BI$ with $A>0$. Hence $S_{F}$ is positive and invertible, and $B^{-1} I\leq S^{-1}_{F}\leq A^{-1} I$.
Every $\vec{f}\in\h$ has (weak) representations of the form
\begin{eqnarray} \label{eq:dualframerepr}
\vec{f} & = S_{F}^{-1}S_{F}\vec{f} =\int_{X}\langle \vec{f}, F(x)\rangle
S_{F}^{-1}F(x)\,d\mu(x) \\
& =S_{F}S_{F}^{-1} \vec{f}=\int_{X}\langle
\vec{f}, S_{F}^{-1}F(x)\rangle F(x)\,d\mu(x). \nonumber
\end{eqnarray}
\par
It can be proved that the mapping $F:X\to\h$ is a continuous
frame with respect to $(X, \mu)$ for $\h$ if and only if the
frame operator $S_{F}$ is bounded and invertible.
(If $F$ is a Bessel mapping, then $S_{F}$ is bounded, self-adjoint and non-negative.)
To each continuous frame $F$ one can associate a dual continuous frame which is introduced as follows.
\begin{defn} \label{def:dualframes}
Let $F$ and $G$ be continuous frames for $\h = \h_1 \otimes \h_2$ with respect to $(X,\mu)= ( X_1 \times X_2, \mu_1 \otimes \mu_2)$.
The frame $G$ is a continuous dual frame of $F$ if
$$
\vec{f} = \int_{X} \langle \vec{f},F(x)\rangle G(x) d\mu(x), \qquad \forall \vec{f} \in\h,
$$
in the weak sense, i.e. if
\begin{equation}\label{dual} \langle \vec{f},\vec{g}\rangle=\int_{X}\langle
\vec{f},F(x)\rangle\langle G(x),\vec{g}\rangle d\mu(x), \quad \forall \vec{f}, \vec{g} \in\h.
\end{equation}
In this case the pair $(F,G)$ is called a \textit{dual pair of continuous frames}.
\end{defn}
\par
By Definition \ref{def:dualframes} and \eqref{eq:dualframerepr} it follows that for a given continuous frame $F$
there always exists an associated dual pair, i.e. $ (F, S_F ^{-1} F)$ and $ (S_F ^{-1} F, F)$ are dual pairs.
The frame $S_F ^{-1} F$ is called {\em the canonical dual frame} for $F$, denoted by $\widetilde F (x)$.
\par
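A discrete (counting-measure) sketch of the canonical dual and of the reconstruction formula \eqref{eq:dualframerepr}, with an ad hoc frame for $\mathbb{R}^2$ (our illustration only):

```python
# Discrete sketch of the canonical dual G_k = S^{-1} u_k and of the
# reconstruction formula f = sum_k <f, u_k> G_k, for the (ad hoc) frame
# {(1,0), (0,1), (1,1)} of R^2.
frame = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

# Frame operator S = sum_k u_k u_k^T = [[2, 1], [1, 2]].
S = [[sum(u[i] * u[j] for u in frame) for j in range(2)] for i in range(2)]
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
Sinv = [[S[1][1] / det, -S[0][1] / det],
        [-S[1][0] / det, S[0][0] / det]]

# Canonical dual frame vectors G_k = S^{-1} u_k.
dual = [tuple(sum(Sinv[i][j] * u[j] for j in range(2)) for i in range(2))
        for u in frame]

# Reconstruction for an arbitrary f.
f = (2.0, -5.0)
rec = [sum((f[0] * u[0] + f[1] * u[1]) * g[i] for u, g in zip(frame, dual))
       for i in range(2)]
assert all(abs(ri - fi) < 1e-12 for ri, fi in zip(rec, f))
```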
In the next theorem we establish the tensor product version of the usual identification of continuous frame operator in terms of analysis and
synthesis operators.
We refer to \cite{peter2,RaNaDe} when $\h$ is a Hilbert space.
\begin{thm}\label{TF}
Let $(X, \mu) = ( X_1 \times X_2, \mu_1 \otimes \mu_2)$ be a tensor product measure space and let $F$ be a Bessel
mapping from $X$ to $\h = \h_1 \otimes \h_2.$ Then the synthesis operator
$T_{F}:L^{2}(X, \mu)\to\h$ given by \eqref{eq:synthesisopr}
is a well defined, linear and bounded operator, and its adjoint operator
$ T_{F}^{*}: \h \to L^{2}(X, \mu)$ is given by \eqref{eq:adjointsynthesisopr}.
If $F = F_1 \otimes F_2 $ is a continuous frame for $\h$ with respect to $(X ,\mu)$,
and $\vec{f} = f_1 \otimes f_2 \in \h,$ then the analysis operator can be represented by
\begin{equation} \label{eq:analysisoprepr}
(T_{F}^{*}\vec{f})(x)=\langle f_1, F_1 (x_1) \rangle \langle f_2, F_2 (x_2) \rangle, \qquad x = (x_1, x_2) \in X.
\end{equation}
The continuous frame operator $S_F$ is given by $S_{F}=T_{F}T_{F}^{*},$
and
$$
S_{F_1 \otimes F_2} = S_{F_1} \otimes S_{F_2}.
$$
The canonical dual frame for $F$ is $ G = S_{F_1 } ^{-1} F_1 \otimes S _{F_2 } ^{-1} F_2$.
\end{thm}
\begin{proof} The first part of the claim follows immediately from the definition of $T_F$ given by \eqref{eq:synthesisopr}.
Furthermore, if $F = F_1 \otimes F_2 $ is a continuous bilinear frame for $\h$ with respect to $(X ,\mu)$, then the representation
\eqref{eq:analysisoprepr} follows directly from \eqref{eq:tens innprod1}.
\par
It remains to show the second part of Theorem \ref{TF}.
Let $ f_j \in \h_j $, $ j=1,2.$ Then
\begin{align*}
T_F T_F^*(f_1\otimes f_2) & = T_F \left( \left< f_1, F_1 (x_1) \right> \left<f_2, F_2 (x_2)\right> \right) \\
& = \int_X \left< f_1, F_1 (x_1) \right> \left<f_2, F_2 (x_2)\right> F_1(x_1) \otimes F_2 (x_2) d \mu (x) \\
& = \int_{X_1} \left< f_1, F_1 (x_1) \right> F_1(x_1) d \mu_1 (x_1) \otimes \int_{X_2} \left<f_2, F_2 (x_2)\right> F_2 (x_2) d\mu_2 (x_2) \\
& = S_{F_1} f_1 \otimes S_{F_2} f_2 = \left( S_{F_1} \otimes S_{F_2} \right) \left(f_1 \otimes f_2 \right),
\end{align*}
and
\begin{align*}
T_F T_F^*(f_1\otimes f_2) & =
\int_X \left< f_1, F_1 (x_1) \right> \left<f_2, F_2 (x_2)\right> F_1(x_1) \otimes F_2 (x_2) d \mu (x) \\
& = \int_{X} \left< f_1 \otimes f_2, F_1 (x_1) \otimes F_2 (x_2) \right> F_1(x_1) \otimes F_2 (x_2) d \mu (x) \\
& = S_{F_1 \otimes F_2} (f_1 \otimes f_2 ).
\end{align*}
Therefore on simple tensors we have that $S_F = S_{F_1} \otimes S_{F_2}$. By Lemma
\ref{lm:tensorprodprop} (see also \cite[Proposition 2.5]{RaNaDe}) this is true on all of $\h$.
Moreover, $ S_F $ is self-adjoint and we have
$$
S^{-1} _F = (S_{F_1 } \otimes S_{F_2})^{-1} = S_{F_1 } ^{-1} \otimes S_{F_2} ^{-1}
= S_{G_1 } \otimes S_{G_2} = S_{G_1 \otimes G_2},
$$
where $ G_1 $ and $ G_2 $ are canonical dual frames of $F_1 $ and $ F_2 $ respectively.
Furthermore,
$$
S^{-1} _{F_1 \otimes F_2} ( F_1 \otimes F_2 ) =
S_{F_1 } ^{-1} \otimes S_{F_2} ^{-1} (F_1 \otimes F_2) =
(S_{F_1 } ^{-1} F_1) \otimes (S_{F_2} ^{-1} F_2) = G_1 \otimes G_2,
$$
which proves the claim.
\end{proof}
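The identity $S_{F_1 \otimes F_2} = S_{F_1} \otimes S_{F_2}$ can also be checked numerically in the counting-measure case; the following pure-Python sketch (our illustration, with ad hoc frames) compares the frame operator of the Kronecker-product frame with the Kronecker product of the two individual frame operators:

```python
# Counting-measure sketch of S_{F1 (x) F2} = S_{F1} (x) S_{F2}: compare the
# frame operator of the Kronecker-product frame with the Kronecker product
# of the two individual frame operators (ad hoc frames for R^2).
frame1 = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
frame2 = [(1.0, 2.0), (3.0, -1.0)]

def frame_op(frame, d):
    """S = sum_k u_k u_k^T as a d x d matrix."""
    return [[sum(u[i] * u[j] for u in frame) for j in range(d)]
            for i in range(d)]

def kron_vec(u, v):
    return [ui * vj for ui in u for vj in v]

def kron_mat(A, B):
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)]
            for i in range(n)]

tensor_frame = [kron_vec(u, v) for u in frame1 for v in frame2]
lhs = frame_op(tensor_frame, 4)                           # S_{F1 (x) F2}
rhs = kron_mat(frame_op(frame1, 2), frame_op(frame2, 2))  # S_{F1} (x) S_{F2}
assert lhs == rhs
```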
Recall that in $ L^{2} ( X_1 \times X_2, \mu_1 \otimes \mu_2) $ a simple tensor $f\otimes g$
is just the product $f\otimes g (x) = f(x_1) g(x_2)$ which is commonly identified with an
operator with the integral kernel $f\otimes g$. Thus, in \eqref{eq:analysisoprepr} we may put
$$
T_{F}^{*} = T_{F_1}^{*} \otimes T_{F_2}^{*}.
$$
\par
Obviously, for a pair of continuous frames $F $ and $ G $ the condition (\ref{dual}) can be written as $T_G T^*_F=I$ (in the weak sense).
\subsection{Non-simple Frames}
Let us digress a bit, and ask whether ``everything is solved'' by Theorem \ref{thm:main}.
In this subsection we discuss the existence of non-simple tensor frames; hence the result mentioned above does \emph{not} cover the full tensor frame theory, since it concerns only simple tensors.
Let us stress that by Definition \ref{De:frame-Bessel}
it follows that not every frame in a tensor product Hilbert space
has to be represented as a (sequence of) simple tensor(s).
We will show that any continuous frame admits a non-simple dual frame.
Our first result shows that
tensor Bessel mappings can be constructed with rank different from $1$.
\begin{lem} Let $f_k(\omega)$ and $g_k(\nu)$ be continuous Bessel mappings in $\h_1$ and $\h_2$, respectively, with bounds $B_k$ and $B_k'$, such
that $B := \sum_k B_k \cdot \sum_l B_l' < \infty$. Then
$F (\omega,\nu) = \sum_k f_k (\omega) \otimes g_k (\nu)$ is a Bessel mapping in $\h_1 \otimes \h_2$ with the Bessel bound $B$.
\end{lem}
\begin{proof} Note that
\begin{multline*}
|\langle \psi \otimes \phi, F (x_1, x_2) \rangle |^2
= | \langle \psi \otimes \phi, \sum_k f_k (x_1) \otimes g_k (x_2) \rangle |^2 \\
= | \sum_k \langle \psi, f_k (x_1) \rangle \langle \phi, g_k (x_2) \rangle |^2.
\end{multline*}
Then we have
\begin{multline*}
\int |\langle \psi \otimes \phi, F (x_1, x_2) \rangle |^2 d\mu (x_1, x_2)
= \int | \sum_k \langle \psi , f_k (x_1) \rangle \langle \phi, g_k (x_2) \rangle |^2 d\mu (x_1, x_2) \\
\leq \sum_k \int | \langle \psi , f_k (x_1) \rangle|^2 d\mu_1 (x_1) \sum_l \int | \langle \phi, g_l (x_2) \rangle |^2 d\mu_2 ( x_2) \\
\leq \sum_k B_{k} \| \psi\|^2 \cdot \sum_l B'_{l} \| \phi\|^2
= \Big( \sum_k B_{k} \sum_l B'_{l} \Big) \| \psi\|^2 \| \phi\|^2. \\
\end{multline*}
The result now follows by extension from simple tensors to all of $\h_1 \otimes \h_2$ (cf. \cite[Proposition 2.5]{RaNaDe}).
\end{proof}
A direct converse can never be true (consider e.g. any $f_k$ and $g_k = 0$).
So, we now know that non-simple Bessel mappings exist. If we now consider a fixed frame, do dual frames exist which are not necessarily of the form given by Theorem \ref{thm:main} (see also \cite{xxlfei1})?
More precisely, if $F_1 \otimes F_2$ is a frame for $\h$ with respect to $ (X, \mu)$, then we examine the existence of a dual frame $ G $
such that $ G \neq G_1 \otimes G_2 $ for any $ G_1 \in \h_1,$ $ G_2 \in \h_2.$
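To see why such non-simple elements exist at all, recall (cf. \cite[Lemma 2.2]{WangLi}) that under the identification of $\h_1 \otimes \h_2$ with Hilbert-Schmidt operators, an element is a simple tensor if and only if the corresponding operator has rank at most $1$. A tiny Python sketch of this rank obstruction (our illustration):

```python
# Identifying R^2 (x) R^2 with 2x2 matrices (x (x) y  <->  x y^T), a simple
# tensor corresponds to a matrix of rank at most 1.  The element
# e1 (x) e1 + e2 (x) e2 corresponds to the identity matrix, which has
# rank 2, hence it is not a simple tensor.
def rank_one_matrix(x, y):
    return [[xi * yj for yj in y] for xi in x]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

e1, e2 = (1.0, 0.0), (0.0, 1.0)
M = mat_add(rank_one_matrix(e1, e1), rank_one_matrix(e2, e2))
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert det == 1.0   # nonzero determinant => rank 2 => not a simple tensor
```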
Let us briefly digress from the logical order of the results and rather use the proof of this result as a motivation for the next section.
We first recall that a continuous frame $F$ is {\em redundant} if
$$
R (F) := \text{dim} \left( (\range {T^{*} _{F}}) ^\perp \right) >0.
$$
It has been observed that $R(F)$ depends on the underlying measure space $ (X, \mu)$. For example, if $ (X, \mu)$ is non-atomic, then
$R(F) = \infty$. We refer to \cite{SpBa} for details.
\par
\begin{lem} \label{thm:nontensordual}
Let $ \text{dim} (\h_1 ), $ $ \text{dim} (\h_2 ) > 1 $, and let $ F_1 \otimes F_2 $ be a redundant frame for $\h$. Then $ F_1 \otimes F_2 $ admits at least one non-simple tensor product dual.
\end{lem}
\begin{proof}
The idea is to follow the steps of the proof of \cite[Theorem 2.3]{WangLi}, and the case study examination given there.
This proof uses the fact that for $ T \in \h_1 \otimes \h_2 ,$ $\text{dim} \left( \range{T} \right) \leq 1 $ if and only if
$ T = f \otimes g$ for some $ f \in \h_1 $ and $ g\in \h_2$ (\cite[Lemma 2.2]{WangLi}).
Replacing sums by integrations the proof of \cite[Theorem 2.3]{WangLi} can be generalized in a straightforward way, {\em if} we can show the tensor product version of \cite[Theorem 6.3.7]{ole}, i.e. a description of all dual tensor frames of a given tensor frame. This is Corollary \ref{thm:dualframes}.
\end{proof}
In order to make this proof complete we have to introduce the next section, which by itself answers an open question in frame theory.
\section{Full classification of dual continuous frames}
In this section we extend the well-known classification of dual discrete frames \cite[Theorem 6.3.7]{ole}
to the continuous frames setting. This is a new result in continuous frame theory, and we apply it to describe dual frames in the context of
tensor products.
It turns out that the theory of reproducing kernel Hilbert spaces (RKHS) provides convenient tools for the results in this section. The interplay
between RKHS and frame theory has recently been used in \cite{spexxl16} in the study of stable analysis/synthesis processes.
Recall that a Hilbert space
$\h$ is {\em a reproducing kernel Hilbert space} on the set $X$ if it is a subspace of the space of functions from $X$ to $\mathbb{C}$ such that
for every $x\in X$ the linear {\em evaluation functional} $ev_x :\h \rightarrow \mathbb{C}$ defined by
$ ev_x (f) = f(x) $ is bounded, cf. \cite{paulsen_raghupathi_2016}.
By the Riesz representation theorem there then exists
an element $k_x \in \h$, called {\em the reproducing kernel}, such that $f(x) = \left< f, k_x \right>$ for all $f \in \h$.
We will also use the following fact from the theory of RKHS:
any closed subspace of an RKHS is again a RKHS \cite[Theorem 2.5]{paulsen_raghupathi_2016}.
In addition, we need the following elementary result, which is a direct consequence of the definition:
\begin{cor} \label{cor:union1} Let $\h$ and $\g$ be subspaces of a normed space $X$, where the closures are RKHS. Then $\overline{\h \cup \g} = \overline{\h} \cup \overline{\g} $ is a RKHS.
\end{cor}
\begin{proof}
Let us denote by $k^{\h} _x$ and $k^{\g} _x$ the respective reproducing kernels for the closures.
Then $\left| f (x) \right| \le \norm{k^{\h} _x} \norm{f}$ for $f \in \overline{\h}$, and $\left| f (x) \right| \le \norm{k^{\g} _x} \norm{f}$ for $f \in \overline{\g}$, so that
$$ \left| f (x) \right| \le \max \left\{ \norm{k^{\h} _x}, \norm{k^{\g} _x} \right\} \norm{f}
\text{ for } f \in \h \cup \g,$$
and by continuity the same bound extends to $\overline{\h \cup \g}$.
\end{proof}
The following result \cite[Proposition 11]{spexxl16}, which relates frames with RKHS, is needed later:
\begin{lem} \label{lem:rkhs} If $F$ satisfies the lower frame inequality, then $ (\range {T^*_F}, \| \cdot \|) $ is a RKHS. Moreover, for any subspace $ \h_{K} $
of $ L^2 (X, \mu) $, the following are equivalent:
\begin{itemize}
\item[a)] $ \h_{K} $ is a RKHS.
\item[b)] There exists a continuous frame $ F $ such that $\range {T^*_F} = \h_{K} $.
\end{itemize}
\end{lem}
We are now ready to attack the main question in this section.
We first prove the continuous counterpart of \cite[Lemma 6.3.5]{ole}.
\begin{lem} \label{lem:dualcont1} Let $F(x)$ be a continuous frame for the Hilbert space $\h$.
Let $ e_k $ be an orthonormal basis for $\h$ and let $V:L^2(X,\mu) \rightarrow \h$ be a bounded left-inverse of $T_F ^{*} $,
such that $\left(\ker {V}\right)^\perp$ is a reproducing kernel subspace of $L^2(X,\mu)$.
Then the dual frames of $F$ are precisely the functions $G(x)=\sum_{k \in K} \overline{V^* (e_k)(x)} e_k$.
\end{lem}
\begin{proof}
The function $G(x)$ is well defined provided $\sum_{k \in K}|{V^*(e_k)(x)}|^2 < \infty$.
By \cite[Proposition 6]{spexxl16}, this holds
if $\left(V^*(e_k)\right)_{k \in K}$ is a discrete Bessel sequence in the RKHS $\left( \ker {V} \right)^\perp$,
which follows from the proof of \cite[Proposition 5.3.1]{ole}.
Next, following \cite[Proposition 21]{spexxl16}, we have
\begin{multline*}
\langle f,g \rangle_\h = \langle V T_F ^{*} f,g \rangle_\h = \langle T_F ^{*} f, V^*g \rangle_{L^2(X, \mu)} \\
= \langle T_F ^{*} f, \sum_{k\in K} \langle g,e_k \rangle V^*e_k \rangle_{L^2(X,\mu)}
= \langle T_F ^{*} f, \langle g,\sum_{k \in K} \overline{V^*(e_k)(\cdot)}\, e_k \rangle \rangle_{L^2(X, \mu)} \\
= \langle T_F ^{*} f, T_G ^{*} g \rangle_{L^2(X, \mu)}.
\end{multline*}
Hence $T_G T_F^* = \identity{\h}$, i.e. every such $G$ is a dual of $F$.
On the other hand, let $ G_0(x)$ be a dual frame of $F$ and set $V=T_{G_0}$, a bounded left-inverse of $T_F^*$. We have that $ \left(\ker {V}\right) ^\perp = \range {T_{G_0} ^{*} }$,
which is a reproducing kernel Hilbert space by Lemma \ref{lem:rkhs}. Thus
\begin{multline*}
T_G ^{*} (g) (x) = \langle g, G(x) \rangle_\h = \langle g, \sum_{k \in K} V^* e_k (x) e_k \rangle_\h \\
= \sum_{k \in K}V^* e_k(x) \langle g, e_k\rangle_\h = \left( V^*g \right) (x).
\end{multline*}
Thus $ T_G ^{*} = T_{G_0} ^{*} $ and so $ G(x) = G_0 (x)$ for a.e. $x \in X$.
\end{proof}
Note that the left-inverse $V$ in Lemma \ref{lem:dualcont1} can never be invertible when $L^2(X,\mu)$ is not itself a RKHS (e.g. for non-atomic $\mu$),
since invertibility would force $\left(\ker {V}\right)^\perp = L^2(X,\mu)$.
The continuous version of \cite[Lemma 6.3.6]{ole} can now be given as follows.
\begin{lem} \label{lem:637cont} Let $F(x)$ be a continuous frame for $\h$.
The bounded left-inverses $V$ of $T_F ^*$ (i.e. $V T_F^* = \identity{\h}$) with $\kernel{V}^\bot$ a RKHS are precisely the operators of the form
\begin{equation} \label{leftinverseV}
V = S_F ^{-1} T_F + W \left( \identity{\h} - T_F ^* S_F ^{-1} T_F \right),
\end{equation}
where $W: L^2(X,\mu) \rightarrow \h$ is a bounded operator with $\kernel{W}^\bot$ being a RKHS.
\end{lem}
\begin{proof}
The proof of \cite[Lemma 6.3.6]{ole} can be used directly in the sense that all left-inverses $V$ can be exactly represented by \eqref{leftinverseV}.
It remains to prove the transfer of the RKHS property.
\par
Consider the mapping $W_0 : \kernel{T_F} \rightarrow \h$, defined by $W_0 := W \pi_{\kernel{T_F}}$. The operator $W$ can then be written as $W = W_0 \pi_{\kernel{T_F}} + W \pi_{\range{T_F^*}}$.
By the assumptions we have that
$$ V = S_F^{-1} T_F + W \pi_{\kernel{T_F}} = S_F^{-1} T_F + W_0 \pi_{\kernel{T_F}}, $$
and
$$ V^* = T_F^* S_F^{-1} + \pi_{\kernel{T_F}} W^* = T_F^* S_F^{-1} + \pi_{\kernel{T_F}} W_0^*. $$
Clearly, $\kernel{W_0} = \kernel{W} \cap {\kernel{T_F}}$. In particular, $\kernel{W}^\bot \subseteq \kernel{W_0}^\bot$ and $\kernel{W_0}^\bot = \kernel{W}^\bot \cup \kernel{T_F}^\bot = \kernel{W}^\bot \cup \range{T_F^*}$. By Corollary \ref{cor:union1}, this tells us that $\kernel{W}^\bot$ is a RKHS if and only if $\kernel{W_0}^\bot$ is.
Additionally, by construction, $\kernel{W_0} \subseteq \kernel{V}$. Therefore, if we assume that
$\kernel{W}^\bot$ is a RKHS, then $\kernel{W_0}^\bot$ is a RKHS, and so is $\kernel{V}^\bot$. This shows the first direction.
For the opposite direction, assume that $\kernel{V}^\bot$ is a RKHS. Let $k^V_x$ be the reproducing kernel of $\kernel{V}^\bot$ and $k^F_x$ the one of $\range{T_F^*}$. Then
\begin{eqnarray*}
\left| \left( W_0^* f \right) (x) \right| & = & \left| \left( V^* f \right) (x) - \left( T_F^* S_F^{-1} f \right) (x) \right| \\
& \le & \left| \left( V^* f \right) (x) \right| + \left| \left( T_F^* S_F^{-1} f \right) (x) \right| \\
& \le & \norm{k^V_x} \norm{V^* f} + \norm{k^F_x} \norm{T_F^* S_F^{-1} f} \\
& \le & \left(\norm{k^V_x} \norm{V} + \norm{k^F_x} \frac{1}{\sqrt{A}} \right) \norm{f}, \\
\end{eqnarray*}
where $A$ is the lower frame bound of $F(x)$.
This shows the other direction.
\end{proof}
Next we prove the continuous frame counterpart of \cite[Theorem 6.3.7]{ole}.
\begin{thm} \label{olecontinuous}
Let $F$ be a continuous frame for $\h$. The dual frames of $F$ are precisely the functions
\begin{equation} \label{eq:classdual1}
G (x) = S^{-1}_{F} F(x) + \Theta (x)- \int \left< S^{-1}_{F} F(x) , F(y) \right> \Theta(y) d \mu (y) ,
\end{equation}
where $\Theta$ is a Bessel mapping.
\end{thm}
\begin{proof}
Equation \eqref{eq:classdual1} is equivalent to
$$ G (x) = {
\left( T_{\widetilde F} - T_\Theta \right) T_F^* S^{-1} F \left(x \right) + \Theta(x).
}
$$
By construction, $G$ is a Bessel mapping, being a sum of Bessel mappings; recall that a bounded operator applied to a Bessel mapping again gives a Bessel mapping.
Since $T_G T_F^* = \identity{\h}$, $G$ is a dual frame of $F$.
On the other hand, let $G_0$ be a dual frame of $F$. Then $V = T_{G_0}$ is a bounded left-inverse of $T_F^*$, and $\kernel{V}^\bot$ is a RKHS.
By Lemma \ref{lem:637cont} it follows that
$$ V = S_F^{-1}T_F + W(I - T_F^* S_F^{-1} T_F),
$$
where $W$ is a bounded operator with $\kernel{W}^\bot$ being a RKHS. By Lemma \ref{lem:dualcont1} we have
$G(x)=\sum_{k \in K} \overline{V^* (e_k)(x)} e_k$.
Therefore
\begin{eqnarray*}
G(x) & = & \sum_{k \in K} \overline{T_F^* S_F^{-1} (e_k)(x)}\, e_k + \sum_{k \in K} \overline{W^* (e_k)(x)}\, e_k \\
& & -\, \sum_{k \in K} \overline{T_F^* S_F^{-1} T_F W^* (e_k)(x)}\, e_k \\
& = & \widetilde F (x) + \Theta(x) - \int \left< S_F^{-1} F(x) , F(y) \right> \Theta(y)\, d \mu (y),
\end{eqnarray*}
where $\widetilde F (x) = S_F^{-1} F(x)$ is the canonical dual and $\Theta(x) := \sum_{k \in K} \overline{W^* (e_k)(x)}\, e_k$, in accordance with \eqref{eq:classdual1}.
The sequence $W^* (e_k)$ is a Bessel sequence in the RKHS $\range{W^*} = \kernel{W}^\bot$ and so $\Theta(x)$ is well-defined. Furthermore
\begin{multline*}
\left< f, \Theta(x) \right> = \left< f, \sum_{k \in K} \overline{W^* (e_k)(x)} e_k \right> = \sum_{k \in K} W^* (e_k)(x) \left< f, e_k \right> \\
= ev_x \left( \sum_{k \in K} W^* (e_k) \left< f, e_k \right> \right) =
ev_x \left( W^* \sum_{k \in K} \left< f, e_k \right> e_k \right) = ev_x \left(W^* f \right).
\end{multline*}
Therefore
$$
\int |\langle f, \Theta(x) \rangle |^2 d\mu (x) = \norm{W^*f}^2_{L^2(X,\mu)} \le \norm{W}_{Op}^2 \norm{f}_{\h}^2, $$
and $\Theta(x)$ is a continuous Bessel mapping.
\end{proof}
As in the discrete setting \cite{zak}, this can be reformulated as follows.
\begin{cor}
Let $F$ be a continuous frame for $\h$. The dual frames of $F$ are precisely the functions
\begin{equation*} \label{eq:classdual1cor}
G (x) = S^{-1}_{F} F(x) + \Theta (x),
\end{equation*}
where $\Theta$ is a Bessel mapping with $\range{T_\Theta^*} \subseteq \kernel{T_F}$.
\end{cor}
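In a finite-dimensional discrete analogue (counting measure on finitely many points), this parametrization can be checked numerically: any $\Theta$ whose analysis matrix maps into $\kernel{T_F}$ produces a (generally non-canonical) dual of $F$. The following minimal numpy sketch illustrates this; all matrices are random illustrative data, not objects from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete analogue: a redundant frame of m = 3 vectors f_k for C^2.
f = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))  # rows f_k
C = f.conj()                    # analysis matrix: (C @ v)_k = <v, f_k>
S = C.conj().T @ C              # frame operator S_F = T_F T_F^*
S_inv = np.linalg.inv(S)

# Orthogonal projection onto ker(T_F) = range(T_F^*)^perp in C^3.
P_ker = np.eye(3) - C @ S_inv @ C.conj().T

# Any Theta with range(T_Theta^*) inside ker(T_F): project an arbitrary matrix.
Theta = P_ker @ (rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2)))

# Synthesis of G = S^{-1} F + Theta (the corollary's parametrization).
G_syn = S_inv @ C.conj().T + Theta.conj().T

# Duality: T_G T_F^* = identity on C^2.
assert np.allclose(G_syn @ C, np.eye(2))
# The dual is genuinely non-canonical whenever Theta != 0.
assert np.linalg.norm(Theta) > 1e-8
```

The check $T_G T_F^* = \identity{\h}$ works because the projection annihilates the range of the analysis matrix, the discrete counterpart of $\range{T_\Theta^*} \subseteq \kernel{T_F}$.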
Adapting Theorem \ref{olecontinuous} to the tensor frame setting we reach the following:
\begin{cor} \label{thm:dualframes}
Let $ F_1 \otimes F_2 $ be a frame for $\h$. Then the dual frames of $ F_1 \otimes F_2 $ are precisely the families of the form
\begin{align*}
S^{-1} _{F_1} F_1 (x_1) & \otimes S^{-1} _{F_2} F_2 (x_2) + W (x_1, x_2) \\
& - \int_X \langle S^{-1} _{F_1} F_1 (x_1), F_1 (y_1)\rangle
\langle S^{-1} _{F_2} F_2 (x_2), F_2 (y_2)\rangle W(y_1, y_2) d\mu (y),
\end{align*}
where $ W $ is a Bessel mapping in $\h$.
\end{cor}
Now let us come back to Lemma \ref{thm:nontensordual}: to show that there are non-simple tensor product duals, one has to find a non-simple Bessel mapping $W$ such that the resulting dual is also non-simple. This can be done as in the case-by-case examination of the proof of \cite[Lemma 2.2]{WangLi}.
\section{Tensor Product Continuous Frame Multipliers} \label{sec:multipliers}
Gabor multipliers \cite{feic} led to the introduction of
Bessel and frame multipliers for abstract Hilbert spaces. These operators are
defined by a fixed multiplication pattern (the symbol) which is
inserted between the analysis and synthesis operators
\cite{xxlmult1,peter2,peter3}.
This section is inspired by the continuous frame multipliers studied in \cite{BBR}.
We are interested in the tensor product setting as follows.
\begin{defn}\label{definitioncontframemult}
Let $\h$ be the tensor product $\h = \h_1 \otimes \h_2 $ of complex Hilbert spaces, and let $(X, \mu) = ( X_1 \times X_2, \mu_1 \otimes \mu_2)$ be the
product of measure spaces with $\sigma$-finite positive measures $\mu_1, \mu_2.$
Also, let $F $ and $ G $ be Bessel mappings for $\h$ with respect to $(X,\mu)$ and $m:X\rightarrow \mathbb{C}$ be a
measurable function. The operator
$\textbf{M}_{m,F,G}:\h\rightarrow\h$ weakly defined by
\begin{equation}
\label{de:operatorM}
\langle \textbf{M}_{m,F,G} \vec{f}, \vec{g}\rangle =
\int_{X} m(x) \langle \vec{f}, F (x) \rangle
\langle G (x),\vec{g} \rangle d\mu(x),
\end{equation}
for all $\vec{f}, \vec{g} \in\h$, is called {\em tensor product continuous Bessel multiplier} of $F$
and $G$ with respect to the {\em symbol} $m$. If, in addition, $F$ and $G$ are continuous frames, then $\textbf{M}_{m,F,G}$
given by \eqref{de:operatorM} is called {\em tensor product continuous frame multiplier}.
\end{defn}
Eq.~\eqref{de:operatorM} is the weak formulation of
\begin{equation}
\label{de:operatorMweak}
\textbf{M}_{m,F,G} \vec{f}:=\int_{X}m(x)\langle \vec{f}, F(x)\rangle G(x)d\mu(x).
\end{equation}
\begin{rem}
If $ m \equiv 1$ and $F $ and $ G $ are Bessel mappings for $\h$ with respect to $(X,\mu)$, then \eqref{de:operatorM} defines a
bounded sesquilinear form on $\h$, and the corresponding operator $\textbf{M}_{1,F,G}$ could be called the {\em cross-frame operator}.
If, in addition, the corresponding operator given by
\eqref{de:operatorMweak} has a bounded inverse, then $(F,G)$ is a reproducing pair for $\h$ in the sense of \cite{spexxl16}
(when the definition of reproducing pairs is suitably interpreted for tensor product of Hilbert spaces).
If $(F,G)$ is a dual pair of continuous frames (cf. Definition \ref{def:dualframes}), then $\textbf{M}_{1,F,G}$ given by
\eqref{de:operatorMweak} is the identity operator (and vice-versa, as we have assumed the Bessel property).
\end{rem}
If $\textbf{M}_{m,F,G} $ is given by \eqref{de:operatorM}, then it immediately follows that $(\mathbf{M}_{m,F,G})^*=\mathbf{M}_{\overline{m},G,F}$, cf. \cite[Proposition 3.4]{BBR}.
\par
\begin{lem}\label{tar}
Let $F$ and $G$ be as in Definition \ref{definitioncontframemult}, with the Bessel bounds $B_F$ and $B_G$ respectively. If $m\in
L^{\infty}(X,\mu)$, then the continuous tensor product Bessel multiplier
$\textbf{M}_{m,F,G}$ given by \eqref{de:operatorM}
is well defined and bounded with
\[
\|\textbf{M}_{m,F,G}\|\leq \|m\|_\infty \sqrt{B_F B_G}.
\]
\end{lem}
\begin{proof} The proof is a modification of the proof of \cite[Lemma 3.3]{BBR} for the case of tensor products, and is therefore omitted.
\end{proof}
Here and in what follows the norm in Lebesgue spaces $ L^{p}(X,\mu)$, $1\leq p\leq \infty $ is denoted by
$\| \cdot \|_p $. As usual, we shorten notation by setting $ \| \cdot \| = \| \cdot \|_2.$
If $m(x)>0$ a.e., then for any Bessel
mapping $F$ the multiplier $\textbf{M}_{m,F,F}$ is a positive
operator. If, moreover, $m(x)\geq \delta > 0$ almost everywhere for some positive constant $\delta$ and
$\|m\|_{\infty}<\infty$, then $\textbf{M}_{m,F,F}$ is just the
frame operator of $\sqrt{m}F$, and so it is positive, self-adjoint
and invertible, cf. \cite{BBR}.
\par
By using analysis and synthesis operators for $F$ and $G$, it is easy to see that
\begin{equation}
\label{rep1}\textbf{M}_{m,F,G}=T_G \circ D_m \circ T^*_F
\end{equation}
where $D_m:L^{2}(X,\mu)\rightarrow L^{2}(X,\mu) $ is given by
$(D_m \varphi)(x)=m(x)\varphi(x)$. If $m\in L^{\infty}(X,\mu)$, then $D_m$ is bounded and
$\|D_m\|=\|m\|_{\infty}$, \cite{conw1}.
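In the discrete, finite-dimensional analogue the factorization \eqref{rep1} and the norm bound of Lemma \ref{tar} can be verified directly. The following numpy sketch uses arbitrary illustrative frame vectors and symbol; the optimal Bessel bounds are the top eigenvalues of the frame operators.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m_len = 4, 7  # dim of the Hilbert space, number of frame elements

F = rng.standard_normal((m_len, n)) + 1j * rng.standard_normal((m_len, n))  # rows f_k
G = rng.standard_normal((m_len, n)) + 1j * rng.standard_normal((m_len, n))  # rows g_k
m = rng.standard_normal(m_len)  # a real, bounded symbol

# Multiplier by definition: M v = sum_k m_k <v, f_k> g_k.
M_def = sum(m[k] * np.outer(G[k], F[k].conj()) for k in range(m_len))

# Factorization: M = T_G D_m T_F^* = synthesis(G) . diag(m) . analysis(F).
analysis_F = F.conj()    # (analysis_F @ v)_k = <v, f_k>
synthesis_G = G.T        # c |-> sum_k c_k g_k
M_fac = synthesis_G @ np.diag(m) @ analysis_F
assert np.allclose(M_def, M_fac)

# Norm bound: ||M|| <= ||m||_inf sqrt(B_F B_G).
B_F = np.linalg.eigvalsh(F.T @ F.conj()).max()  # top eigenvalue of S_F
B_G = np.linalg.eigvalsh(G.T @ G.conj()).max()  # top eigenvalue of S_G
op_norm = np.linalg.norm(M_fac, 2)              # largest singular value
assert op_norm <= np.max(np.abs(m)) * np.sqrt(B_F * B_G) + 1e-9
```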
\par
If $m\in L^{\infty}(\RR^d, dx) $, then \cite[Proposition 3.6]{BBR} implies that
the multiplication operator $D_m$ on $ L^2(\RR^d, dx)$ (with $dx$ denoting the Lebesgue measure)
is compact if and only if $m = 0$ a.e. This
constitutes an important difference between the discrete and the continuous case, see \cite{peter2}.
To prove sufficient conditions for compactness of tensor product
continuous frame multipliers a different approach than in the discrete setting has to be taken.
We closely follow the approach suggested in \cite{BBR}.
\vspace{5mm}
\subsection{Compact Multipliers}
Recall that a mapping $F$ is called norm bounded on $( X, \mu)$ if there exists a constant $C > 0$ such that $\| F(x) \| \le C$ for almost every $x \in X$.
Furthermore, the support of a measurable function $m:X\rightarrow \mathbb{C}$ is of finite measure if
there exists a subset $K \subseteq X$ with $\mu(K) < \infty$ such that $m(x) = 0$ for almost every $x \in X \setminus K$.
We can formulate \cite[Theorem 3.7]{BBR} in the tensor product setting:
\begin{thm} \label{Th:compact-01}
Let $F$ and $G$ be as in Definition \ref{definitioncontframemult}, and let either $F$ or $G$ be norm bounded. If $m: X\rightarrow \mathbb{C}$ is a
(essentially) bounded measurable function with support of finite measure, then $\textbf{M}_{m,F,G}$ given by \eqref{de:operatorM} is a compact
operator.
\end{thm}
The conclusion of Theorem \ref{Th:compact-01} remains the same if, instead of having support of finite measure,
we assume that $m: X \to \mathbb{C}$ vanishes at infinity, i.e. for every $\varepsilon > 0$ there is a set of finite measure $K = K(\varepsilon)
\subseteq X$, $\mu(K) < \infty$, such that $|m(x)|\le \varepsilon$ for almost every $x \in X \setminus K$ (cf. \cite[Corollary 3.8]{BBR}).
If, in addition, we assume that \emph{both} $F$ and $G$ are norm bounded, then we have the following trace-class and Schatten $p$-class result, which is a reformulation of \cite[Theorems 3.10 and 3.11]{BBR} to our setting:
\begin{thm} \label{sec:schatten1}
Let $F$ and $G$ be as in Definition \ref{definitioncontframemult}, and assume that they are norm
bounded with norm bounds $L_F$ and $L_G$, respectively. Then the following holds:
\begin{enumerate}
\item If $m \in L^1(X,\mu)$, then
$\textbf{M}_{m,F,G}$ is a trace class operator with the trace norm estimate given by
$$\| \textbf{M}_{m,F,G} \|_{\mathcal{S}_1} \le \|m\|_{1}L_F L_G.$$
\item If $m \in L^p(X,\mu)$, $1 < p < \infty$,
then $\textbf{M}_{m,F,G}$ belongs to the Schatten p-class $\mathcal{S}_p(\h)$, with norm estimate
\[
\| \textbf{M}_{m,F,G} \|_{\mathcal{S}_p} \leq \| m \|_p \left( L_F L_G \right)^{\frac{1}{p}}\left(B_F B_G \right)^{\frac{p-1}{2p}}.
\]
\end{enumerate}
\end{thm}
We omit the proof since it follows by slight modifications of the proofs of \cite[Theorems 3.10 and 3.11]{BBR}.
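The trace-norm estimate in part (1) can be sanity-checked in the finite discrete analogue, where $\textbf{M}_{m,F,G} = \sum_k m_k \langle \cdot, f_k \rangle g_k$ is a finite sum of rank-one operators and each rank-one piece has trace norm $\|f_k\| \|g_k\|$. A small numpy sketch with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m_len = 4, 7

F = rng.standard_normal((m_len, n)) + 1j * rng.standard_normal((m_len, n))
G = rng.standard_normal((m_len, n)) + 1j * rng.standard_normal((m_len, n))
m = rng.standard_normal(m_len)

# M = sum_k m_k <., f_k> g_k, a finite sum of rank-one operators.
M = sum(m[k] * np.outer(G[k], F[k].conj()) for k in range(m_len))

# Trace norm (Schatten-1 norm) = sum of singular values.
trace_norm = np.linalg.svd(M, compute_uv=False).sum()

L_F = np.linalg.norm(F, axis=1).max()  # norm bound sup_k ||f_k||
L_G = np.linalg.norm(G, axis=1).max()  # norm bound sup_k ||g_k||
assert trace_norm <= np.abs(m).sum() * L_F * L_G + 1e-9
```

The inequality follows here from the triangle inequality for the trace norm applied to the rank-one summands, mirroring the argument behind the continuous estimate.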
Recall that if $A\in \mathcal{S}_1(\h)$, then its trace is defined by
$$
\text{Tr}_{\h} (A) = \sum_n \langle A e_n, e_n \rangle,
$$
for any orthonormal basis $(e_n)$ of $\h$; the value does not depend on the choice of basis. We have $ |\text{Tr}_{\h} (A)| \leq \| A \|_{\mathcal{S}_1} $, with equality if $A$ is a positive operator.
For tensor product Hilbert spaces $ \h = \h_1 \otimes \h_2 $, the following partial trace theorem holds.
\begin{thm} \label{th:partialtrace}
Let $\h$ be a tensor product Hilbert space $ \h = \h_1 \otimes \h_2 $, and let $A\in \mathcal{S}_1(\h)$.
Then there is a continuous and linear map
\begin{equation} \label{eq:parttracemap}
T: \mathcal{S}_1(\h) \rightarrow \mathcal{S}_1(\h_1)
\end{equation}
such that the following properties hold:
\begin{equation} \label{eq:parttraceformula}
T(A_1 \otimes A_2) = A_1 \text{{\em Tr}}_{\h_2} (A_2), \qquad \forall A_j \in \mathcal{S}_1(\h_j), \;\;\; j=1,2,
\end{equation}
\begin{equation} \label{eq:parttraceproprerty}
\text{{\em Tr}}_{\h_1} (T(A)) = \text{{\em Tr}}_{\h} (A), \qquad \forall A \in \mathcal{S}_1(\h).
\end{equation}
\end{thm}
The proof of Theorem \ref{th:partialtrace} is contained in the proof of \cite[Theorem 26.7]{BB}, and is therefore omitted.
\par
If $T$ is the mapping given by \eqref{eq:parttracemap}, then
$T(A)$ is called the {\em partial trace} of $A$ with respect to $\h_1$. In a similar way we may define the partial trace of $A$ with respect to
$\h_2 $.
\par
In Section \ref{sec:quantum} we will use the following simple consequence of Definition \ref{definitioncontframemult} and Theorem \ref{th:partialtrace}.
\begin{cor}\label{cor:partialtrace}
Let $m_j $ be measurable functions on $X_j$, let $F_j$ and $G_j$ be continuous Bessel mappings (frames) for $\h_j$, $ j=1,2$, and
let $m= m_1 \otimes m_2 $, $ F = F_1 \otimes F_2 $, and $ G = G_1 \otimes G_2 $. If
$ \textbf{M}_{m,F,G} \in \mathcal{S}_1(\h_1 \otimes \h_2)$,
then its partial trace $T (\textbf{M}_{m,F,G} )$ with respect to $\h_1$ is a continuous Bessel (frame) multiplier given by
$$
T (\textbf{M}_{m_1,F_1,G_1} \otimes \textbf{M}_{m_2,F_2,G_2} ) = \textbf{M}_{m_1,F_1,G_1} \text{{\em Tr}}_{\h_2} (\textbf{M}_{m_2,F_2,G_2} ),
$$
i.e. it is a trace class operator of ``the same form'' as
$ \textbf{M}_{m,F,G} $.
A similar statement holds for the partial trace of $\textbf{M}_{m,F,G}$ with respect to $\h_2$.
\end{cor}
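In finite dimensions the defining properties \eqref{eq:parttraceformula} and \eqref{eq:parttraceproprerty} reduce to a reshape-and-contract operation on the matrix of $A$. The following numpy sketch (illustrative dimensions and data) verifies both properties for a Kronecker product:

```python
import numpy as np

rng = np.random.default_rng(3)
d1, d2 = 3, 4

A1 = rng.standard_normal((d1, d1)) + 1j * rng.standard_normal((d1, d1))
A2 = rng.standard_normal((d2, d2)) + 1j * rng.standard_normal((d2, d2))
A = np.kron(A1, A2)  # matrix of A1 (x) A2 on H1 (x) H2

def partial_trace_1(A, d1, d2):
    """Partial trace with respect to H1: contract the H2 indices."""
    return np.einsum('ijkj->ik', A.reshape(d1, d2, d1, d2))

T_A = partial_trace_1(A, d1, d2)
# Property (eq:parttraceformula): T(A1 (x) A2) = A1 * Tr_{H2}(A2).
assert np.allclose(T_A, A1 * np.trace(A2))
# Property (eq:parttraceproprerty): Tr_{H1}(T(A)) = Tr_H(A).
assert np.isclose(np.trace(T_A), np.trace(A))
```

The reshape splits each matrix index into an $\h_1$-index and an $\h_2$-index; the einsum contraction `ijkj->ik` sums the diagonal $\h_2$ pair.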
\section{Bilinear localization operators} \label{sec:STFTwavmult0}
In this section, we exhibit bilinear localization operators as examples of tensor product continuous frame multipliers.
In the case of short-time Fourier transform multipliers (STFT multipliers),
the results from Section \ref{sec:multipliers} are in line with those of \cite{CKasso, Teof2018}, while
their interpretation in the case of wavelet multipliers (Calder\'on--Toeplitz operators)
and mixed STFT/wavelet multipliers seems to be new, although their ``linear'' counterparts are
well studied, see e.g. \cite[Section 3.4]{BBR} for a brief survey.
In addition, let us mention that the continuity properties of multipliers for the ridgelet transform given in \cite{LiWong}
can be derived from the results of \cite{BBR}.
\par
STFT multipliers, also known as time-frequency localization operators,
are used in signal analysis as a mathematical tool to extract specific features
of a signal from its phase space representations, \cite{da88}. In other contexts, they have been used as a quantization procedure \cite{Berezin71}, or as an approximation of pseudodifferential operators, cf. \cite{CRodino} and the references given there.
\par
We first recall some necessary facts.
\par
Let $T_xf(\cdot):=f(\cdot -x)$, $M_\omega f(\cdot ):=e^{2\pi i\omega \cdot }f(\cdot)$, and $D_a f(\cdot) :=|a|^{-d/2} f(\frac{\cdot}{a})$, denote
translation, modulation, and dilation operators, respectively, $x,\omega \in \mathbb{R}^{d}$,
$a \in \mathbb{R}\setminus \{ 0\}. $ These operators are unitary on $ L^2 (\mathbb{R}^d)$, and we use the notation
\begin{align*}
\pi(x,\omega) & = M_\omega T_x, \qquad & \text{for} \qquad & (x,\omega)\in \mathbb{R}^{2d}, \\
\paff (b,a) & = T_b D_a, \qquad &\text{for} \qquad & (b,a) \in \mathbb{R}^{d} \times (\mathbb{R}\setminus \{ 0 \}).
\end{align*}
\par
Let $ \hat g$ denote the Fourier transform of $g\in L^1 (\mathbb{R}^d)$ given by
${\hat {g}}(\omega)$ $ = \int g(t)e^{-2\pi i t\omega}dt$. This definition extends to $ L^2 (\mathbb{R}^d)$ by density arguments. We say that
$g \in L^{2}(\mathbb{R}^d)$ is an {\em admissible wavelet} if
\begin{equation} \label{eq:admissiblewavelet}
0 < C_{g}:=\int_{\mathbb{R}^{d}} \frac{|\hat{g}(\omega)|^{2}} {|\omega|}\,d\omega< +\infty.
\end{equation}
\begin{defn}\label{Def:transforms}
Let $ g\in L^{2}(\mathbb{R}^d)\setminus\{0\}$. The
short-time Fourier transform (STFT) of a function $f\in L^{2}(\mathbb{R}^d)$ with respect to the window function $g $ is given
by
$$
V_g f(x,\omega) := \int_{\mathbb{R}^d} f(t)\, {\overline {g(t-x)}} \, e^{-2\pi i\omega t}\,dt =
\langle f,M_\omega T_x g \rangle = \langle f, \pi(x,\omega) g \rangle,
$$
$ (x,\omega)\in \mathbb{R}^{2d}$.
If, in addition, \eqref{eq:admissiblewavelet} holds, i.e. $g$ is an admissible wavelet,
then the (continuous) wavelet transform of $f\in L^{2}(\mathbb{R}^d)$ with respect $g $ is given
by
$$
W_{g}(f)(b,a):= \int_{\mathbb{R}^{d}} f(t)\frac{1}{|a|^{\frac{d}{2}}}\overline{g(a^{-1} (t-b ))}\,dt =
\langle f, T_b D_a g \rangle = \langle f, \paff (b,a) g\rangle,
$$
$ b\in \mathbb{R}^{d}, a \in \mathbb{R}\setminus \{ 0 \}.$
\end{defn}
\par
Definition \ref{Def:transforms} can be extended to various spaces of (generalized) functions, but we focus our attention here to
$ L^{2}(\mathbb{R}^d) $ to make the exposition of our main ideas more transparent.
\par
By the orthogonality relation (see e.g. \cite[Theorem 3.2.1]{Grobook})
\begin{equation} \label{eq:STFTortrel}
\langle V_{g_{1}} f_{1}, V_{g_{2}} f_{2} \rangle
=\langle f_{1}, f_{2}\rangle \overline{\langle g_{1},g_{2}\rangle}, \;\;
f_{1}, f_{2}\in L^{2}(\mathbb{R}^d), \;\;
g_{1}, g_{2} \in L^{2}(\mathbb{R}^d) \setminus \{ 0 \},
\end{equation}
if $ g_{1} = g_{2} = g$ it follows that $ \pi(x,\omega) g $ is a continuous tight frame for $L^{2}(\mathbb{R}^d)$
with respect to $ (\mathbb{R}^d \times \mathbb{R}^d, dx d\omega)$ and with bound $\|g\|^2$, for any $ g\in L^{2}(\mathbb{R}^d)\setminus\{0\}$.
If $ \| g \| = 1$, then we have a continuous Parseval frame.
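A discrete analogue of this tight-frame identity holds on the cyclic group $\mathbb{Z}_N$: the full Gabor system $\{M_\omega T_x g\}_{x,\omega \in \mathbb{Z}_N}$ satisfies $\sum_{x,\omega}|V_g f(x,\omega)|^2 = N \|f\|^2 \|g\|^2$, the factor $N$ coming from the unnormalized DFT. A numpy sketch of this check, with random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 8
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# V_g f(x, w) = sum_t f(t) conj(g(t - x)) exp(-2 pi i w t / N):
# for fixed x this is the (unnormalized) DFT of t -> f(t) conj(g(t - x)).
V = np.array([np.fft.fft(f * np.conj(np.roll(g, x))) for x in range(N)])

total = np.sum(np.abs(V) ** 2)
tight = N * np.linalg.norm(f) ** 2 * np.linalg.norm(g) ** 2
assert np.isclose(total, tight)
```

The identity follows from the DFT Parseval relation applied for each fixed time shift $x$, in parallel with the continuous orthogonality relation \eqref{eq:STFTortrel}.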
Likewise, for the wavelet transform the following orthogonality relation holds:
\begin{equation} \label{eq:WTortrel}
\int_0 ^\infty \int_{\mathbb{R}^{d}} W_{g_1}(f_1)(b,a) \overline{ W_{g_2}(f_2)(b,a)} \frac{db da}{a^{d+1}} = C_{g_1,g_2} \langle f_{1},
f_{2}\rangle,
\;\;
f_{1}, f_{2}\in L^{2}(\mathbb{R}^d),
\end{equation}
if $g_{1}, g_{2}\in L^{2}(\mathbb{R}^d)$ are such that for almost all $\omega \in \mathbb{R}^d$ with $|\omega| = 1$,
$$
\int_0 ^\infty | \hat g_1 (s \omega) | | \hat g_2 (s \omega) | \frac{ds}{s} < \infty,
$$
and the constant $ C_{g_1,g_2} $ given by
$$
C_{g_1,g_2} := \int_0 ^\infty \overline{ \hat g_1 (s \omega) } \hat g_2 (s \omega ) \frac{ds}{s}
$$
is finite, non-zero, and independent of $\omega$, cf. \cite[Theorem 10.2]{Grobook}.
If $g\in L^{2}(\mathbb{R}^d)$ is an admissible and rotation invariant function, then
the orthogonality relation holds for $g=g_1=g_2$, and
$ \paff(b,a) g $ is a continuous tight frame for $L^{2}(\mathbb{R}^d)$ with respect to
$ ( \mathbb{R}^{d} \times \mathbb{R}\setminus \{ 0 \}, \frac{db da}{a^{d+1}}) $.
The frame bound is the constant $C_{g,g}$ from the orthogonality relation, and if $g$ is normalized so that $C_{g,g} = 1$, then we have a continuous Parseval frame.
\par
Related continuous frame multipliers, called STFT and Calder\'on--Toeplitz multipliers, were discussed in \cite{BBR}. Here
we consider the tensor product space $ \h = L^{2}(\mathbb{R}^d) \otimes L^{2}(\mathbb{R}^d)$ instead.
\par
If $ \vec f, \vec \varphi \in \h$, then
$$
V_{\vec \varphi } \vec f(x,\omega)= \langle \vec f, \pi(x,\omega) \vec \varphi \rangle =
\int_{\mathbb{R} ^{2d}} \vec f(t)\, \overline{ \pi(x,\omega) \vec \varphi (t)}\,dt, \qquad x,\omega \in \mathbb{R}^{2d},
$$
and if $ \vec \varphi (t) = \varphi_1 \otimes \varphi_2 (t) = \varphi_1 (t_1) \varphi_2 (t_2), $ $ t = (t_1, t_2) \in \mathbb{R}^{d} \times
\mathbb{R}^{d},$ then $ V_{\vec \varphi } $ acts on a simple tensor $f_1 \otimes f_2 \in \h$ as
\begin{multline} \label{STFTtensor}
V_{\varphi _1 \otimes \varphi _2} (f_1 \otimes f_2) (x,\omega)
= \int_{\mathbb{R}^{2d}} (f_1 \otimes f_2) (t) \overline{ \pi(x,\omega) \varphi _1 \otimes \varphi _2 (t)} dt \\
= \iint_{\mathbb{R}^{d} \times \mathbb{R}^{d}} (f_1 \otimes f_2) (t) \overline{ \pi(x_1 ,\omega_1 ) \varphi _1 (t_1) \pi(x_2 ,\omega_2 ) \varphi _2 (t_2)} dt_1 dt_2.
\end{multline}
\begin{lem} \label{lm:STFTtensprod}
Let $ \h = L^{2}(\mathbb{R}^d) \otimes L^{2}(\mathbb{R}^d)$, and
$ \vec \varphi = \varphi_1 \otimes \varphi_2 \in \h \setminus \{ 0 \}$. Then
$$
\pi(x,\omega) \vec \varphi (t) = \pi(x_1 ,\omega_1 ) \varphi _1 (t_1) \pi(x_2 ,\omega_2 ) \varphi _2 (t_2)
$$
is a continuous tight frame for the tensor product space $\h$ with respect to
$ (\mathbb{R}^{2d} \times \mathbb{R}^{2d}, dx\, d\omega)$.
\end{lem}
\begin{proof}
From \eqref{eq:STFTortrel} and \eqref{STFTtensor} it follows that the orthogonality relation holds for simple tensors.
Now by \cite[Proposition 2.5]{RaNaDe} the orthogonality relation can be extended to $\h$, and we conclude that
$$
\pi(x,\omega) \vec \varphi (t) = \pi(x_1 ,\omega_1 ) \varphi _1 (t_1) \pi(x_2 ,\omega_2 ) \varphi _2 (t_2), \quad
x, \omega \in \mathbb{R}^{2d},
$$
with $t = (t_1, t_2) \in \mathbb{R}^{d} \times \mathbb{R}^{d},$ is a continuous tight frame for
$\h$, i.e.
$$
\int_{\mathbb{R}^{2d}} \int_{\mathbb{R}^{2d}} \left| \langle \vec f, \pi(x,\omega) \vec \varphi \rangle \right|^2 dx\, d\omega = \|\vec f \|^2 \| \vec \varphi \|^2.
$$
If, in addition, $\vec \varphi \in \h$ is chosen so that $ \| \vec \varphi \| = 1$, then $\pi(x,\omega) \vec \varphi$ is a Parseval frame.
\end{proof}
\par
Let $\vec \varphi =\varphi_1 \otimes \varphi_2, \vec \phi = \phi_1 \otimes \phi_2 \in \h,$
$ \| \vec \varphi \| = \| \vec \phi \| = 1$, and let $m : \mathbb{R}^{4d} \mapsto
\mathbb{C}$ be a measurable function.
Then the tensor product continuous frame multipliers of the form
$
M_{m, \pi(x,\omega) \vec \varphi, \pi(x,\omega) \vec \phi}
$
can be identified with bilinear localization operators considered in \cite{CorGro2003, Teof2018} (see Remark 1.2 in \cite{Teof2018}), i.e.
\begin{equation} \label{eq:Mblop}
\langle M_{m, \pi(x,\omega) \vec \varphi, \pi(x,\omega) \vec \phi} \vec{f}, \vec{g} \rangle =
\langle m V_{\varphi _1 \otimes \varphi _2} (f_1 \otimes f_2) ,
V_{\phi_1 \otimes \phi_2} (g_1 \otimes g_2) \rangle,
\end{equation}
$ f_1,f_2,g_1,g_2 \in L^{2}(\mathbb{R}^d) $.
The function $m$ is commonly called {\em the symbol} of the operator
$M_{m, \pi(x,\omega) \vec \varphi, \pi(x,\omega) \vec \phi}$.
Certain Schatten class properties of bilinear localization operators given by
\eqref{eq:Mblop} can be deduced from their linear counterparts given in e.g.
\cite{CorGro2003,CPRT2}. In these investigations, localization operators are interpreted as Weyl pseudodifferential operators.
We note that these results extend results from Section \ref{sec:multipliers} in the considered special case.
However, we present here a simple alternative proof of related particular result for the linear case given in \cite{CorGro2003}.
\begin{prop} \label{prop:STFTSp}
Let $ \h = L^{2}(\mathbb{R}^d) \otimes L^{2}(\mathbb{R}^d)$, and let
$ \varphi_1, \varphi_2, \phi_1, \phi_2 \in L^{2}(\mathbb{R}^d) \setminus \{ 0 \}$.
If $ m \in L^p (\mathbb{R}^{2d}),$ $ 1\leq p<\infty,$ then
$M_{m, \pi(x,\omega) \vec \varphi, \pi(x,\omega) \vec \phi} $ given by \eqref{eq:Mblop} belongs to
Schatten class $ \mathcal{S}_p(\h)$.
\end{prop}
\begin{proof}
By Lemma \ref{lm:STFTtensprod} it follows that $F = \pi(x,\omega) \vec \varphi$ and $G = \pi(x,\omega) \vec \phi $ are
continuous tight frames for $\h.$ Thus $M_{m, F, G} $ is a tensor product continuous frame multiplier, and by Theorem \ref{sec:schatten1}
it follows that $M_{m, F, G} \in \mathcal{S}_p(\h)$.
\end{proof}
Next we discuss bilinear Calder\'on--Toeplitz operators. To that end we consider time-scale shifts and the left Haar measure $\mu = db\, da/|a|^{d+1}$.
Let $ \varphi_1, \varphi_2 $ be admissible rotation invariant wavelets, $\vec\varphi = \varphi_1 \otimes \varphi_2 \in \h,$ and let $ \vec f \in \h$. Then the tensor product continuous wavelet transform is given by
$$
W_{\vec\varphi}( \vec f)(b,a) = \langle \vec f, \paff (b,a) \vec\varphi \rangle, \qquad
b\in \mathbb{R}^{2d},\ a \in (\mathbb{R}\setminus \{ 0 \})^2.
$$
It acts on a simple tensor $f_1 \otimes f_2 \in \h$ as
\begin{multline} \label{WTtensor}
W_{\vec\varphi} (f_1 \otimes f_2) (b,a) =
W_{\varphi_1} (f_1 ) \otimes W_{\varphi_2} (f_2) (b_1, b_2, a_1, a_2) \\
= \langle f_1, \paff (b_1,a_1) \varphi_1 \rangle \langle f_2, \paff (b_2,a_2) \varphi_2 \rangle
\end{multline}
where $b = (b_1, b_2) \in \mathbb{R}^{2d} $, $ a= (a_1, a_2 ) \in (\mathbb{R}\setminus \{ 0 \})^2$,
and
$$
\paff (b,a) \vec\varphi = \paff (b_1,a_1) \varphi_1 \otimes \paff (b_2,a_2) \varphi_2.
$$
\begin{lem} \label{lm:WTtensprod}
Let $ \h = L^{2}(\mathbb{R}^d) \otimes L^{2}(\mathbb{R}^d)$, and
$\vec\varphi = \varphi_1 \otimes \varphi_2 \in \h,$ where $ \varphi_1$ and $\varphi_2 $ are admissible rotation invariant wavelets. Then
$$
\paff (b,a) \vec \varphi (t), \quad
b\in \mathbb{R}^{2d},\ a \in (\mathbb{R}\setminus \{ 0 \})^2,
$$
is a continuous tight frame for the tensor product space $\h$ with respect to
$ \left(\mathbb{R}^{2d} \times (\mathbb{R}\setminus \{ 0 \})^2, \frac{db_1\, da_1}{|a_1|^{d+1}} \frac{db_2\, da_2}{|a_2|^{d+1}}\right)$.
\end{lem}
The proof is similar to the proof of Lemma \ref{lm:STFTtensprod}, and therefore omitted.
If $m : \mathbb{R}^{2d} \times (\mathbb{R}\setminus \{ 0 \})^2 \rightarrow \mathbb{C}$ is a measurable function, then the tensor product continuous frame multipliers of the form
\begin{equation} \label{eq:Mblopaffine}
\langle M_{m, \paff (b,a) \vec \varphi, \paff (b,a) \vec \phi} \vec{f}, \vec{g} \rangle =
\langle m W_{\varphi _1 \otimes \varphi _2} (f_1 \otimes f_2) ,
W_{\phi_1 \otimes \phi_2} (g_1 \otimes g_2) \rangle,
\end{equation}
$ f_1,f_2,g_1,g_2 \in L^{2}(\mathbb{R}^d) $,
can be interpreted as a bilinear extension of the (two-)wavelet localization operators considered in
\cite{Wong2002}.
More precisely, we have the following result, which seems to be new (see also \cite[Theorem 19.11]{Wong2002}).
\begin{prop} \label{prop:waveletSp}
Let $ \h = L^{2}(\mathbb{R}^d) \otimes L^{2}(\mathbb{R}^d)$,
and let $ \varphi_1,$
$\varphi_2 $, $ \phi_1,$ and $ \phi_2 $ be admissible rotation invariant wavelets such that
$ \| \varphi_j \| = \| \phi_j \| = 1$, $ j =1,2$.
If $ m \in L^p (\mathbb{R}^{2d} \times (\mathbb{R}\setminus \{ 0 \})^2, \frac{db_1\, da_1}{|a_1|^{d+1}} \frac{db_2\, da_2}{|a_2|^{d+1}}) $, $ 1\leq p<\infty,$ then
$ M_{m, \paff (b,a) \vec \varphi, \paff (b,a) \vec \phi} $ given by \eqref{eq:Mblopaffine}
belongs to Schatten class $ \mathcal{S}_p(\h).$
\end{prop}
\begin{proof}
By Lemma \ref{lm:WTtensprod} it follows that $F = \paff (b,a) \vec \varphi$ and $G = \paff (b,a) \vec \phi $ are
continuous tight frames for $\h.$ Thus $M_{m, F, G} $ is a tensor product continuous frame multiplier, and by Theorem \ref{sec:schatten1}
it follows that $M_{m, F, G} \in \mathcal{S}_p(\h)$.
\end{proof}
\par
Finally, we combine STFT and wavelet continuous tight frames and consider bilinear localization operators of ``mixed'' form.
\par
Consider the measure space $(X,\mu) = (\mathbb{R}^{2d} \times (\mathbb{R}^{d} \times \mathbb{R}\setminus \{0\}), \mu )$
where $\mu$ is the product of the $2d$-dimensional Lebesgue measure and the left Haar measure $ \frac{db\, da}{|a|^{d+1}}$.
If $\varphi \in L^2 (\mathbb{R}^{d})\setminus \{0\}$ and if $ \phi \in L^2 (\mathbb{R}^{d}) $ is an admissible and rotation invariant wavelet, then
we define the STFT-Wavelet transform on $ \h = L^2 (\mathbb{R}^{d})\otimes L^2 (\mathbb{R}^{d})$ as follows
\begin{equation} \label{eq:mixedtransform}
(V_\varphi \otimes W_\phi)(f_1 \otimes f_2) =
\langle f_1, \pi(x,\omega) \varphi \rangle \otimes \langle f_2, \paff (b,a) \phi \rangle.
\end{equation}
By orthogonality relations \eqref{eq:STFTortrel} and \eqref{eq:WTortrel}, it follows that
$$
\langle (V_\varphi \otimes W_\phi)(f_1 \otimes f_2), (V_\varphi \otimes W_\phi)(g_1 \otimes g_2)\rangle_{L^2 (X)}
= \langle f_1 \otimes f_2, g_1 \otimes g_2\rangle_{\h} \|\varphi \|^2 \| \phi\|^2.
$$
Thus we conclude that
$ \pi(x,\omega) \varphi \paff (b,a) \phi $
is a continuous tight frame for the tensor product space $\h$ (cf. Lemmas \ref{lm:STFTtensprod} and \ref{lm:WTtensprod}).
If $m : X \rightarrow \mathbb{C} $ is a measurable function, then the
related tensor product continuous frame multiplier is given by
\begin{multline} \label{eq:Mblopmixed}
\langle M_{m, \pi(x,\omega) \varphi \paff (b,a) \phi, \pi(x,\omega) \varphi \paff (b,a) \phi} \vec{f}, \vec{g} \rangle \\
=
\int_X m (x) \, (V_\varphi \otimes W_\phi)(f_1 \otimes f_2) (x) \,
\overline{(V_\varphi \otimes W_\phi) (g_1 \otimes g_2) (x)} \, d\mu (x),
\end{multline}
for $ f_1,f_2,g_1,g_2 \in L^{2}(\mathbb{R}^d) $.
In the same way as in Propositions \ref{prop:waveletSp} and \ref{prop:STFTSp}, we obtain the following.
\begin{prop} \label{prop:waveletSp2}
Let $ \h = L^{2}(\mathbb{R}^d) \otimes L^{2}(\mathbb{R}^d)$, and
$(X,\mu) = (\mathbb{R}^{2d} \times (\mathbb{R}^{d} \times \mathbb{R}\setminus \{0\}), dx d\omega \frac{db da}{|a|^{d+1}} )$.
Moreover, let
$\varphi \in L^2 (\mathbb{R}^{d})\setminus \{0\}$ and let $ \phi \in L^2 (\mathbb{R}^{d}) $ be an admissible and rotation invariant wavelet, such that $ \| \varphi \| = \| \phi \| = 1$.
If $ m \in L^p (X) $, and $F= \pi(x,\omega) \varphi \paff (b,a) \phi $, $ 1\leq p<\infty,$ then
$ M_{m, F, F} $ given by \eqref{eq:Mblopmixed}
belongs to Schatten class $ \mathcal{S}_p(\h).$
\end{prop}
\begin{proof} The result is a consequence of Theorem \ref{sec:schatten1} and the fact that $ F= \pi(x,\omega) \varphi \paff (b,a) \phi $ is a continuous tight frame for $\h$.
\end{proof}
\par
We refer to \cite{xxlgrospeck19} where a general approach based on the coorbit space theory is used to obtain
deep continuity results for related kernel type operators.
\section{Localization operators as density operators of quantum systems} \label{sec:quantum}
In this section we first briefly recall the notion of a density operator or a density matrix
(as presented in e.g. \cite[Section 19]{Hall}), and then identify specific tensor product continuous
frame multipliers as density operators. This opens the possibility to use more general results from
Sections \ref{sec:contframes} and
\ref{sec:multipliers} in the study of quantum systems.
\par
If $ \psi $ represents the wave function which describes the quantum system of
e.g. two spinless ``distinguishable'' particles moving in $\mathbb{R}^3 $, then
typically $ \psi = \psi (x,y) \in L^2 (\mathbb{R}^6)$, where $x$ is the position of the first particle,
and $y$ is the position of the second particle.
In general, there does not seem to be a way to associate a vector
$ \tilde \psi \in L^2 (\mathbb{R}^3) $ which could sensibly describe the state of the first (or second) particle, see \cite{BB, Hall}. To overcome
this obstacle, a more general notion of the ``state'' of a quantum system is introduced by associating
to the wave function $\psi $ the expectation values of observables on $ L^2 (\mathbb{R}^3) $.
This turned out to be the notion of {\em density operator} or {\em density matrix},
which is uniquely determined by a given family of expectation values.
A density operator on the Hilbert space $\h$ is simply a non-negative, self-adjoint operator
$ \rho \in \mathcal{S}_1(\h)$ such that
$ \text{Tr}_{\h} (\rho) = 1$.
A class of density operators, called Toeplitz operators, has recently been studied in \cite{deGosson2020, deGosson2021}. They correspond to quantum states obtained from a fixed function by position--momentum translations. This approach is closely related to STFT multipliers, and we complement the investigations from \cite{deGosson2020} by considering the corresponding partial traces (reduced density operators).
By the partial trace theorem (Theorem \ref{th:partialtrace}), the density operator of a subsystem can be related to a partial trace of the density operator for the whole system.
This procedure may give a reasonable description of a subsystem of a bipartite system given by the tensor product Hilbert space
$\h = \h_1 \otimes \h_2$.
In particular, if $\rho \in \mathcal{S}_1(\h_1 \otimes \h_2)$ is of the form $ \rho = \rho_1 \otimes \rho_2 $, then the corresponding density
operators for subsystems $\h_j$, $ j = 1,2,$ given by partial trace theorem
are exactly $\rho_j$, $ j = 1,2$, cf. \cite[Theorem 19.13]{Hall}. Then the state is said to be {\em a separable state}.
The opposite direction, i.e. the existence of a pure state $ \rho$
whose partial traces are given $\rho_j$, $ j = 1,2$, is considered in e.g. \cite{Kly}.
Recently, for given $\rho_j$, $ j = 1,2$,
necessary and sufficient conditions for the existence of $\rho$ with $ \text{supp} \rho \subset \mathcal{X} \subseteq \h $ such that
$\rho_j$, $ j = 1,2$, are its partial traces are given in \cite{FGZ}.
These investigations lead to interesting insights related to different types of operator convergence. For example, the weak convergence is not
preserved under the partial trace. We refer to \cite{FGZ} for details in that direction.
\par
It is known that the characteristic function of a suitable region in phase space gives rise to a trace class localization operator and may serve to
extract the time-frequency features of a signal restricted to that region, see \cite{da88}.
Thus, it seems plausible to identify convenient tensor product continuous frame multipliers as ``localized versions'' of density operators of
bipartite systems, and use their partial traces to study the features of a subsystem. Of course, to be an appropriate candidate for a density operator,
a multiplier has to satisfy certain conditions.
For convenience we call them {\em admissible} multipliers.
\par
\begin{defn}\label{defadmisscontframemult}
Let $\textbf{M}_{m,F,G}$ be a tensor product continuous Bessel (frame) multiplier of $F$
and $G$ with respect to the symbol $m$. Then, $\textbf{M}_{m,F,G}$ is {\em admissible} if it is a non-negative, self-adjoint, trace class operator
such that
$$
\text{Tr}_{\h} (\textbf{M}_{m,F,G}) = 1.
$$
\end{defn}
\par
Therefore, any admissible tensor product continuous Bessel (frame) multiplier
is a density operator.
As noted in Section \ref{sec:multipliers}, if $F$ is a continuous frame,
$m(x)\geq \delta > 0$ and $\|m\|_{\infty}<\infty$, then $\textbf{M}_{m,F,F}$ is positive, self-adjoint
and invertible. For a given $F$, the trace of $\textbf{M}_{m,F,F}$ depends on the symbol $m$, which can be designed in such a way to ensure that
$\textbf{M}_{m,F,F}$ is in fact an admissible multiplier.
To illustrate this idea we consider the particular case of STFT multipliers.
\begin{thm}\label{thm:example}
Let there be given $\varphi, \phi \in L^2 (\mathbb{R}^{d}) $ such that $ \langle \varphi, \phi \rangle \neq 0$.
If $m \in L^1 (\mathbb{R}^{2d}) \cap L^\infty (\mathbb{R}^{2d}) $, then
$$
\text{{\em Tr}}_{\h} ( M_{m, \pi(x,\omega) \varphi, \pi(x,\omega) \phi}) = \langle \varphi, \phi \rangle \int_{\mathbb{R}^{2d}} m (x,\omega) dx d\omega,
\;\;\;
x,\omega \in \mathbb{R}^{d},
$$
where $ M_{m, \pi(x,\omega) \varphi, \pi(x,\omega) \phi} $ is weakly given by
$$
\langle M_{m, \pi(x,\omega) \varphi, \pi(x,\omega) \phi} f ,g \rangle
= \langle m V_{\varphi} f, V_{\phi} g \rangle, \;\;\; f,g \in L^2 (\mathbb{R}^{d}).
$$
\end{thm}
\begin{proof}
By Definition \ref{definitioncontframemult} and Lemma \ref{lm:STFTtensprod}
it follows that $ M_{m, \pi(x,\omega) \varphi, \pi(x,\omega) \phi} $ is a tensor product continuous frame multiplier. Furthermore, since $m \in L^1 (\mathbb{R}^{2d}) $ by Proposition \ref{prop:STFTSp} we have that
$ M_{m, \pi(x,\omega) \varphi, \pi(x,\omega) \phi} $ is a trace class operator.
The rest of the proof is similar to the proof of \cite[Theorem 16.1]{Wong2002} which is formulated in terms of
irreducible and square-integrable representations of locally compact Hausdorff groups. We give it here for the sake of completeness.
Let $ (e_n)_{n \in \mathbb{N}} $ be an ONB in $ L^2 (\mathbb{R}^{d})$. Then, by Fubini's theorem, Parseval's equality, and since $\pi(x,\omega)$ acts unitarily on $
L^2 (\mathbb{R}^{d}) $ we obtain
\begin{multline*}
\text{Tr}_{\h} ( M_{m, \pi(x,\omega) \varphi, \pi(x,\omega) \phi}) =
\sum_{n=1} ^\infty \langle M_{m, \pi(x,\omega) \varphi, \pi(x,\omega) \phi} e_n, e_n \rangle
\\
=
\sum_{n=1} ^\infty \int_{\mathbb{R}^{2d}} m (x,\omega) \langle e_n, \pi(x,\omega) \varphi \rangle
\langle \pi(x,\omega) \phi , e_n \rangle dx d\omega
\\
= \int_{\mathbb{R}^{2d}} m (x,\omega) \sum_{n=1} ^\infty \langle e_n, \pi(x,\omega) \varphi \rangle
\langle \pi(x,\omega) \phi , e_n \rangle dx d\omega
\\
= \int_{\mathbb{R}^{2d}} m (x,\omega) \langle \pi(x,\omega) \varphi , \pi(x,\omega) \phi \rangle dx d\omega
\\
= \langle \varphi, \phi \rangle \int_{\mathbb{R}^{2d}} m (x,\omega) dx d\omega,
\end{multline*}
and the proof is finished.
\end{proof}
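The trace formula of Theorem \ref{thm:example} admits a finite-dimensional analogue that can serve as a numerical sanity check. The sketch below is illustrative only and not part of the argument: it assumes NumPy and replaces $\pi(x,\omega)$ by the discrete time-frequency shift on $\mathbb{C}^N$ (translation followed by modulation), with the phase-space integral replaced by a sum weighted by $1/N$. It then verifies that the trace of the discretized multiplier equals $\langle \varphi, \phi \rangle$ times the phase-space average of the symbol.

```python
import numpy as np

# Finite model of M_{m, pi(x,w) phi, pi(x,w) psi} on C^N.
N = 16
rng = np.random.default_rng(0)
phi = rng.standard_normal(N)
psi = rng.standard_normal(N)
m = rng.random((N, N))  # symbol on the N x N phase-space grid

def tf_shift(k, l, v):
    # discrete time-frequency shift: modulation by l after translation by k
    return np.exp(2j * np.pi * l * np.arange(N) / N) * np.roll(v, k)

M = np.zeros((N, N), dtype=complex)
for k in range(N):
    for l in range(N):
        u = tf_shift(k, l, phi)
        w = tf_shift(k, l, psi)
        M += m[k, l] * np.outer(w, u.conj())  # f -> <f, pi phi> pi psi
M /= N  # discrete analogue of the phase-space measure

# Discrete analogue of Tr M = <phi, psi> * integral of m
lhs = np.trace(M)
rhs = np.dot(psi, phi) * m.sum() / N
assert abs(lhs - rhs) < 1e-8
```

Since the time-frequency shifts are unitary, the phases cancel in the trace exactly as in the continuous computation above.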
\begin{prop}\label{prop:example}
Let there be given $\varphi_j, \phi_j \in L^2 (\mathbb{R}^{d}) \setminus \{ 0\} $, $j=1,2$, and let
$\vec \varphi = \varphi_1 \otimes \varphi_2$, $\vec \phi = \phi_1 \otimes \phi_2$.
If $m \in L^1 (\mathbb{R}^{4d}) \cap L^\infty (\mathbb{R}^{4d}) $ is chosen so that
\begin{equation} \label{eq:trace-symbol}
\int_{\mathbb{R}^{4d}} m (x,\omega) dx d\omega = \frac{1}{ \langle \vec \varphi,\vec \phi \rangle},
\end{equation}
then
$\text{{\em Tr}}_{\h} ( M_{m, \pi(x,\omega) \vec \varphi, \pi(x,\omega) \vec \phi}) = 1,$
where $ M_{m, \pi(x,\omega) \vec \varphi, \pi(x,\omega) \vec \phi}$ is
given by \eqref{eq:Mblop}.
If, in addition $\vec \varphi = \vec \phi,$ and $m > 0$, then $ M_{m, \pi(x,\omega) \vec \varphi, \pi(x,\omega) \vec \varphi}$
is an admissible multiplier.
\end{prop}
\begin{proof}
To prove the first part, it is enough to consider the extension of Theorem \ref{thm:example} to the tensor product Hilbert space $ \h = L^2 (\mathbb{R}^{d}) \otimes L^2 (\mathbb{R}^{d})$.
The second part follows from the fact that $ M_{m, \pi(x,\omega) \vec \varphi, \pi(x,\omega) \vec \varphi}$ is the frame operator of
$\sqrt{m} \vec \varphi$ and so it is positive, self-adjoint and invertible. Since by Theorem \ref{thm:example} and \eqref{eq:trace-symbol}
$$
\text{Tr}_{\h} ( M_{m, \pi(x,\omega) \vec \varphi, \pi(x,\omega) \vec \phi}) =
\langle \vec \varphi,\vec \phi \rangle \int_{\mathbb{R}^{4d}} m (x,\omega) dx d\omega = \langle \vec \varphi,\vec \phi \rangle \cdot
\frac{1}{ \langle \vec \varphi,\vec \phi \rangle}= 1,
$$
it follows that $ M_{m, \pi(x,\omega) \vec \varphi, \pi(x,\omega) \vec \varphi}$ is an admissible multiplier.
\end{proof}
By Proposition \ref{prop:example} we have the following important conclusion, which can be interpreted as a description of a separable state of a bipartite quantum system. This also gives an affirmative partial
answer to the question of de Gosson \cite[Section 5]{deGosson2020} related to the restriction of
the structure of a density operator to its partial traces.
\begin{thm} \label{thm:density}
Let $ \h = \h_1 \otimes \h_2 = L^2 (\mathbb{R}^{d}) \otimes L^2 (\mathbb{R}^{d}) $,
$\vec \varphi = \varphi_1 \otimes \varphi_2 \in L^2 (\mathbb{R}^{d}) \otimes L^2 (\mathbb{R}^{d}) \setminus \{ 0\} $,
and let $ m_j (x_j, \omega_j) $ $ \in L^1 (\mathbb{R}^{2d}) \cap L^\infty (\mathbb{R}^{2d}) $ be positive functions such that
\begin{equation} \label{eq:trace-symbol=1}
\int_{\mathbb{R}^{2d}} m_j (x_j,\omega_j) dx_j d\omega_j = \frac{1}{\| \varphi_j \|^2}, \qquad j =1,2.
\end{equation}
Put $m (x,\omega) = m_1 (x_1, \omega_1) m_2 (x_2, \omega_2)$, and
$$ F= \pi(x,\omega) \vec \varphi = \pi(x_1,\omega_1) \varphi_1 \otimes \pi(x_2,\omega_2) \varphi_2 .$$
Then the operator $ M_{m, F,F}$
given by \eqref{eq:Mblop} is a density operator, and its partial trace
$T (\textbf{M}_{m, F, F} )$ with respect to $\h_j$ is the density operator
$ \textbf{M}_{m_j,\varphi_j,\varphi_j},$ $ j =1,2.$
\end{thm}
\begin{proof}
By Proposition \ref{prop:example} it follows that $ M_{m, F, F}$ is an admissible multiplier, and therefore it is a density operator.
Next, by Corollary \ref{cor:partialtrace} it follows that
$$T (\textbf{M}_{m, \pi(x,\omega) \vec \varphi, \pi(x,\omega) \vec \varphi} )=
\textbf{M}_{m_1,\varphi_1,\varphi_1} \text{Tr} (\textbf{M}_{m_2,\varphi_2,\varphi_2}).$$
From the assumptions of the theorem it follows that
$\textbf{M}_{m_1,\varphi_1,\varphi_1}$ and $ \textbf{M}_{m_2,\varphi_2,\varphi_2}$ are both admissible multipliers,
so that
$$T (\textbf{M}_{m, \pi(x,\omega) \vec \varphi, \pi(x,\omega) \vec \varphi} )=
\textbf{M}_{m_1,\varphi_1,\varphi_1},$$
and it is a density operator. Similarly for $ \textbf{M}_{m_2,\varphi_2,\varphi_2}$.
\end{proof}
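The separable-state conclusion of Theorem \ref{thm:density} can be illustrated numerically in finite dimensions. The following toy sketch (illustrative only; it assumes NumPy and uses generic random density matrices in place of the multipliers $\textbf{M}_{m_j,\varphi_j,\varphi_j}$) checks that the partial trace of $\rho_1 \otimes \rho_2$ over the second factor returns $\rho_1 \, \text{Tr}(\rho_2) = \rho_1$, as in the proof above.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_density(n):
    # a generic density matrix: positive semidefinite with unit trace
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

d1, d2 = 3, 4
rho1, rho2 = random_density(d1), random_density(d2)
rho = np.kron(rho1, rho2)  # separable state on H_1 (x) H_2

# partial trace over the second factor
pt = rho.reshape(d1, d2, d1, d2).trace(axis1=1, axis2=3)

assert np.allclose(pt, rho1 * np.trace(rho2))
assert np.allclose(pt, rho1)  # since Tr(rho2) = 1
```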
In the same manner one can consider multipliers given by \eqref{eq:Mblopaffine} and \eqref{eq:Mblopmixed},
and use Propositions \ref{prop:waveletSp} and \ref{prop:waveletSp2} to obtain other types of density operators for which Theorem \ref{thm:density} holds as well.
These considerations can be further used in the study of different aspects of bipartite quantum systems.
\vspace{3mm}
\textbf{Acknowledgments}:
This work is supported by projects A\-NA\-C\-RES and TIFREFUS,
MPNTR of Serbia Grant No. 451--03--9/2021--14/200125,
{\em "Localization in Phase space: theoretical, numerical and practical aspects"} No. 19.032/961--103/19
MNRVOID Republic of Srpska, and
the project P 34624 {\em "Localized, Fusion and Tensors of Frames"} (LoFT)
of the Austrian Science Fund (FWF). The first author thanks Nora Simovich for help with typing.
\section{Introduction}
In the 1970s, Erd\H{o}s \cite{E71} asked how many edges are needed in a graph on $n$ vertices
in order to ensure the existence of a cycle of length exactly $n-r$.
Woodall \cite{W76} determined the Tur\'an numbers
of large cycles $C_{\ell}$ for $\ell\in [\lfloor\frac{n+3}{2}\rfloor,n]$
as follows.
\begin{thm}[Woodall \cite{W76}]\label{Thm:BW}
Let $G$ be a graph of order $n\geq 2k+3$ where $k\geq 0$ is an integer.
If $e(G)\geq \binom{n-k-1}{2}+\binom{k+2}{2}+1$, then $G$ contains
a $C_{\ell}$ for each $\ell\in [3,n-k]$.
\end{thm}
Define $\Gamma$ as the graph consisting of a clique on $n-k-1$ vertices and a clique on $k+2$
vertices sharing one common vertex. The graph $\Gamma$ shows that Woodall's theorem is sharp.
In this paper, we shall first consider stability results of Woodall's
theorem following the recent trend.
So it is natural to
recall history of the related stability results of extremal results on cycles.
For non-hamiltonian graphs of order $n$ with given minimum degree,
Erd\H{o}s \cite{E62} proved the following result in 1962.
\begin{thm}[Erd\H{o}s \cite{E62}]
Let $G$ be a graph on $n$ vertices with $\delta(G)\geq k$ where $1\leq k\leq \lfloor\frac{n-1}{2}\rfloor$.
If $G$ is non-hamiltonian then
$$
e(G)\leq\max\left\{\binom{n-k}{2}+k^2,\binom{n-\lfloor\frac{n-1}{2}\rfloor}{2}+{\left\lfloor\frac{n-1}{2}\right\rfloor}^2\right\}.
$$
\end{thm}
As a key lemma for attacking the following problem (among all non-hamiltonian
graphs of order $n$ which have minimum degree
at least $k$, characterize the class of graphs which attain the maximum
spectral radius), the authors \cite{LN16} proved a stability result of Erd\H{o}s' theorem.
This result was also proved by F\"{u}redi, Kostochka and Luo \cite{FKL17} independently.
\begin{thm}[Li and Ning \cite{LN16}, F\"{u}redi, Kostochka and Luo \cite{FKL17}]
Let $G$ be a graph of order $n\geq 6k+5$. If $\delta(G)\geq k\geq 1$
and
$$
e(G)>\binom{n-k-1}{2}+(k+1)^2,
$$
then $G$ is hamiltonian, unless $G$ is a subgraph of $K_k\vee (kK_1+K_{n-2k})$
or a subgraph of $K_{1}\vee (K_{n-k-1}+K_{k})$.
\end{thm}
In 1977, Kopylov \cite{K77} determined a sharp edge condition for the circumference
of a 2-connected graph. In 2016, F\"{u}redi, Kostochka, and
Verstra\"{e}te \cite{FKV16} proved a stability version of Erd\H{o}s-Gallai
theorem, and finally (together with Luo) \cite{FKLV18} completed
the stability version of Kopylov's theorem \cite{K77}.
In fact, Kopylov's theorem is a special case of a conjecture due to Woodall \cite{W76}, which
refers to the sharp edge condition for circumference of a 2-connected graph
with given minimum degree.
Recently, Ma and Ning \cite{MN20} proved a stability version of Woodall's conjecture.
In this paper, we shall prove a stability result of Theorem \ref{Thm:BW}.
Let us introduce some notation.
\begin{definition}
Let $k$ and $n\geq k+1$ be integers. We define $\mathcal{F}_{n,k}$
to be a family of graphs, such that a graph $G\in\mathcal{F}_{n,k}$
if and only if $G$ is a graph of order $n$ in which there is a
subgraph $K\cong K_{n-k}$, and for each component $H$ of $G-V(K)$,
$V(H)$ is a clique and all vertices in $H$ are adjacent to a same
vertex in $K$. In particular, the graph $L_{n,k}\cong
K_1\vee(K_{n-k-1}+K_k)$ is the member of $\mathcal{F}_{n,k}$ with the
maximum number of edges.
\end{definition}
\begin{thm}\label{Thm:LiNing-stability}
Let $G$ be a graph of order $n\geq \max\{6k+17,\frac{(k+4)(k+5)}{2}\}$
where $k\geq 0$. If $$e(G)\geq\binom{n-k-2}{2}+\binom{k+3}{2},$$ then
$G$ is weakly pancyclic with girth 3. Suppose that
$G$ contains no $C_{n-k}$. Then one of the following holds:\\
(a) $G\subseteq F$ for some $F\in\mathcal{F}_{n,k+1}$;\\
(b) $G=L_{n,k+2}\cong K_1\vee (K_{n-k-3}+K_{k+2})$;\\
(c) $k=0$ and $G\subseteq\varGamma_{n,2}:=K_2\vee (K_{n-4}+2K_1)$;\\
(d) $k=1$ and $G=\varGamma_{n,3}:=K_2\vee (K_{n-5}+3K_1)$.
\end{thm}
As a non-trivial byproduct, we give a solution to the following open
problems proposed in \cite{GN19}. By $\rho(G)$ and $q(G)$ we
denote the spectral radius and signless Laplacian spectral radius of
the graph $G$.
\begin{problem}[\cite{GN19}]\label{Prob1}
Let $G$ be a connected graph of order $n$ and $k\geq 1$ be an
integer, where $n$ is sufficiently large compared to $k$.\\
(a) Suppose that $\rho(G)>\rho(L_{n,k})$.
Does $G$ contain a $C_{n-k+1}$?\\
(b) Suppose that $q(G)>q(L_{n,k})$. Does $G$ contain a $C_{n-k+1}$?
\end{problem}
Our answer is the following. When $k=2$, it implies all results in \cite{GN19}.
\begin{thm}\label{Thm:Spectrallargecylce}
Let $k\geq 1$ be an integer. Let $G$ be a graph of order $n$. If either\\
(a) $\rho(G)\geq\rho(L_{n,k})$ where $n\geq \max\{6k+11,\frac{(k+3)(k+4)}{2}\}$ or,\\
(b) $q(G)\geq q(L_{n,k})$ where $n\geq \max\{6k+11,k^2+2k+3\}$,\\
then $G$ contains a $C_{\ell}$ for each $\ell\in [3,n-k+1]$, unless
$G=L_{n,k}$.
\end{thm}
Our technique combines the stability methods of extremal graph theory
with spectral techniques. Compared with the original method in \cite{LN16},
we need a stability result on the number of edges for
$\Omega(\sqrt{n})$ cycles of consecutive lengths,
which is the main new point.
Our second part is devoted to an open problem on cycles with consecutive lengths
due to Nikiforov \cite{N08}.
Bondy \cite{B71} proved that every
hamiltonian graph $G$ on $n$ vertices contains cycles of all lengths
$\ell \in [3,n]$ if $e(G)\geq \frac{n^2}{4}$, unless $n$ is even and $G$ is isomorphic to $K_{\frac{n}{2},\frac{n}{2}}$.
If one drops the condition that ``$G$ is hamiltonian'' in Bondy's theorem, a theorem in Bollob\'{a}s'
textbook \cite[Corollary~5.4]{B76} states that such a graph
$G$ contains all cycles $C_{\ell}$ for
each $\ell \in [3,\left\lfloor\frac{n+3}{2}\right\rfloor]$.
Nikiforov \cite{N08} considered cycles of consecutive lengths
from a spectral perspective.
\begin{problem}[Nikiforov \cite{N08}]\label{Prob:2}
What is the maximum $C$ such that for all positive $\varepsilon<C$ and sufficiently large $n$, every
graph $G$ of order $n$ with $\rho(G)\geq \sqrt{\lfloor\frac{n^2}{4}\rfloor}$
contains a cycle of length $\ell$ for every $\ell\leq (C-\varepsilon)n$.
\end{problem}
One may guess $C=\frac{1}{2}$. However, the class of graphs $G=K_s\vee (n-s)K_1$ where
$s=\frac{(3-\sqrt{5})n}{4}$ (see \cite{N08}) shows
$C\leq \frac{3-\sqrt{5}}{2}$.
Nikiforov \cite{N08} proved that $C\geq \frac{1}{320}$.
Ning and Peng \cite{NP20} slightly refined this as $C\geq \frac{1}{160}$.
Only very recently, Mingqing Zhai and Huiqiu Lin (private communication)
improved these results to $C\geq\frac{1}{7}$.
The second purpose of this article is to show that $C\geq \frac{1}{4}$
by completely different methods.
\begin{thm}\label{Thm:SpectraConsecutiveCycles}\footnote{If $0<\varepsilon<10^{-6}$,
then we can choose $N=2.5\times 10^{10}{\varepsilon}^{-1}$.}
Let $\varepsilon$ be real with $0<\varepsilon<\frac{1}{4}$. Then there
exists an integer $N:=N(\varepsilon)$, such that if $G$ is a graph on
$n$ vertices with $n\geq N$ and $\rho(G)>\sqrt{\lfloor\frac{n^2}{4}\rfloor}$,
then $G$ contains all cycles $C_{\ell}$ with $\ell \in [3,(\frac{1}{4}-\varepsilon)n]$.
\end{thm}
Let $G$ be a graph. We use $\omega(G)$ to denote clique number
of $G$. Let $G_1$ and $G_2$ be two vertex-disjoint graphs.
The \emph{union} of $G_1$ and $G_2$, denoted by $G_1+G_2$, is defined to be a graph
$G'$ with $V(G')=V(G_1)\cup V(G_2)$ and $E(G')=E(G_1)\cup E(G_2)$. The \emph{join} of $G_1$ and $G_2$, denoted by $G_1\vee G_2$,
is a new graph obtained from $G_1+G_2$ by adding all possible edges from
$G_1$ to $G_2$.
Let $A(G)$ be the adjacency matrix of a graph $G$ and $D(G)$ be the diagonal degree matrix
of $G$. The \emph{spectral radius} of $G$, denoted by $\rho(G)$, is the largest eigenvalue
of $A(G)$. The \emph{signless Laplacian spectral radius} of $G$, denoted by $q(G)$, is the largest eigenvalue
of the signless Laplacian matrix $Q(G):=A(G)+D(G)$.
The paper is organized as follows. In Section \ref{Sec:Woodall}, we prove
a sharp version of Woodall's theorem and also a stability version of it.
In Section \ref{Sec:Spectral}, we answer Problem \ref{Prob1}
completely.
In Section \ref{Sec:Nikiforov}, we consider Nikiforov's open problem
on cycles with consecutive lengths. In the last section,
we mention some related problems for further study.
\section{Woodall's theorem updated}\label{Sec:Woodall}
We first refine Woodall's Theorem on Tur\'an number of large cycles
as follows. We call a graph \emph{weakly pancyclic} if it contains cycles of all lengths
from its girth to its circumference.
\begin{thm}\label{Thm:RefinedBW}
Let $G$ be a graph of order $n\geq \max\{6k+11,\frac{(k+3)(k+4)}{2}\}$, where $k\geq 0$.
If $$e(G)\geq \binom{n-k-1}{2}+\binom{k+2}{2},$$ then
$G$ is weakly pancyclic with girth 3. Furthermore,
one of the following is true:\\
(a) $G$ contains a $C_{\ell}$ for each $\ell\in [3,n-k]$;\\
(b) $G=L_{n,k+1}\cong K_1\vee(K_{n-k-2}+K_{k+1})$.
\end{thm}
The proof of Theorem \ref{Thm:RefinedBW} needs the following three lemmas.
The \emph{circumference} of $G$, denoted by $c(G)$, is the length
of a longest cycle in $G$. The \emph{$n$-closure} $cl_n(G)$ is the graph obtained from $G$
by recursively joining pairs of non-adjacent vertices with degree sum
at least $n$ until no such pair remains.
\begin{lem}[Bondy and Chv\'atal \cite{BC76}]\label{Lem:BonChv}
Let $G$ be a graph of order $n$.
Then $c(G)=c(cl_n(G))$.
\end{lem}
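For small graphs, Lemma \ref{Lem:BonChv} can be checked by brute force. The following sketch is illustrative only (a naive Python implementation; the example graph, a $C_6$ with a few chords, is an arbitrary choice): it computes the $n$-closure and the circumference by exhaustive search, and confirms that taking the closure adds edges while preserving the circumference.

```python
from itertools import combinations, permutations

def closure(n, edges):
    # n-closure: repeatedly join non-adjacent pairs with degree sum >= n
    E = set(frozenset(e) for e in edges)
    changed = True
    while changed:
        changed = False
        deg = {v: sum(1 for e in E if v in e) for v in range(n)}
        for u, v in combinations(range(n), 2):
            if frozenset((u, v)) not in E and deg[u] + deg[v] >= n:
                E.add(frozenset((u, v)))
                changed = True
                break
    return E

def circumference(n, E):
    # longest cycle length by exhaustive search (fine for tiny n)
    best = 0
    for k in range(3, n + 1):
        for verts in permutations(range(n), k):
            if verts[0] != min(verts):  # fix the starting vertex
                continue
            cyc = list(verts) + [verts[0]]
            if all(frozenset((cyc[i], cyc[i + 1])) in E for i in range(k)):
                best = max(best, k)
    return best

# C_6 with chords 02, 03, 13: the closure gains edges but c(G) is unchanged
n = 6
E = set(frozenset(e) for e in
        [(0,1),(1,2),(2,3),(3,4),(4,5),(5,0),(0,2),(0,3),(1,3)])
assert len(closure(n, E)) > len(E)
assert circumference(n, E) == circumference(n, closure(n, E)) == 6
```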
\begin{lem}[\rm Bondy \cite{B71}]\label{Lem:Bondy}
Let $G$ be a graph of order $n$. If $c(G)=c$ and $e(G)>\frac{c(2n-c)}{4}$,
then $G$ is weakly pancyclic with girth 3.
\end{lem}
The original form of the last lemma
in \cite{LN16} requires the condition $k\geq 1$.
Here we also prove the case $k=0$. This lemma is the key tool for our proof.
\begin{lem}\label{Lem:LiNing}
Let $G$ be a graph of order $n\geq 6k+5$, where $k\geq 0$. If
$G=cl_n(G)$ and $e(G)>\binom{n-k-1}{2}+(k+1)^2$,
then $\omega(G)\geq n-k$.
\end{lem}
\begin{proof}
Recall that the case of $k\geq 1$ was proved in \cite{LN16}. Now set $k=0$.
Suppose that there exist two vertices $x,y\in V(G)$ such that $d(x)+d(y)\leq n-1$.
Let $H:=G-\{x,y\}$. Then $e(G)\leq e(H)+d(x)+d(y)\leq \binom{n-2}{2}+n-1=\binom{n-1}{2}+1$,
a contradiction. Thus, for any two nonadjacent vertices, the degree sum
of them is at least $n$. By the definition of $n$-closure, $G=K_n$ and so $\omega(G)=n$.
\end{proof}
We are now ready to prove Theorem \ref{Thm:RefinedBW}.
\noindent {\bf Proof of Theorem \ref{Thm:RefinedBW}.} Suppose that
$G$ is a graph satisfying the condition. We first show that $G$ is
weakly pancyclic with girth 3. Let $c:=c(G)$. By Lemma
\ref{Lem:Bondy}, we only need to show that
$\binom{n-k-1}{2}+\binom{k+2}{2}>\frac{c(2n-c)}{4}$. If not, then we
have
$$\frac{nc}{2}-\frac{c^2}{4}\geq\frac{n^2-(2k+3)n}{2}+(k+1)(k+2),$$
which implies that $$c^2-2nc+2(n^2-(2k+3)n)+4(k+2)(k+1)\leq 0.$$
However, the discriminant of this quadratic in $c$ satisfies
$$\varDelta=(2n)^2-4\left(2(n^2-(2k+3)n)+4(k+1)(k+2)\right)<0$$ for
$n\geq 2k+5$, a contradiction. This proves the first part of the
theorem.
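The sign of the discriminant used above can be double-checked mechanically. The following throwaway snippet (a sanity check, not part of the proof) verifies that $\varDelta<0$ for a range of values of $k$ and all tested $n\geq 2k+5$.

```python
# Discriminant of c^2 - 2nc + 2(n^2 - (2k+3)n) + 4(k+1)(k+2),
# viewed as a quadratic in c; the proof claims it is negative for n >= 2k+5.
def disc(n, k):
    return (2 * n) ** 2 - 4 * (2 * (n ** 2 - (2 * k + 3) * n)
                               + 4 * (k + 1) * (k + 2))

for k in range(0, 20):
    for n in range(2 * k + 5, 2 * k + 100):
        assert disc(n, k) < 0
```

At $n = 2k+5$ one gets $\varDelta = -12$ for every $k$, and $\varDelta$ is decreasing in $n$ beyond the vertex $n = 2k+3$, in agreement with the check.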
Now let $G'=cl_n(G)$. Since
$$e(G')\geq e(G)\geq\binom{n-k-1}{2}+\binom{k+2}{2}\geq
\binom{n-k-2}{2}+(k+2)^{2}+1$$ for $n\geq \max\{6k+11,
\frac{(k+3)(k+4)}{2}\}$, by Lemma \ref{Lem:LiNing}, $\omega(G')\geq
n-k-1$. This implies that $c(G')\geq n-k-1$. If $c(G')\geq n-k$,
then $c(G)=c(G')\geq n-k$ by Lemma \ref{Lem:BonChv}. Recall that $G$
is weakly pancyclic, implying that (a) holds. So assume that
$c(G')\leq n-k-1$. Since $c(G')\geq \omega(G')$, we have
$\omega(G')=n-k-1$.
Let $S$ be a clique of $G'$ with $|S|=n-k-1$, let $K=G'[S]$ and
$H=G-S$. Thus $K$ is complete. Let $H_1$ be an arbitrary component
of $H$. If $|N_{G'}(H_1)\cap S|\geq 2$, then clearly $c(G')\geq
n-k$, a contradiction. Thus we conclude that $|N_{G'}(H_1)\cap
S|\leq 1$ for every component $H_1$ of $H$. Specially, every vertex
$v\in V(H)$ has $|N_{G'}(v)\cap S|\leq 1$. Now
\begin{align*}
e(G'-S) & =e(G')-e(K)-e_{G'}(S,V(H))\geq e(G)-e(K)-e_{G'}(S,V(H))\\
& \geq\binom{n-k-1}{2}+\binom{k+2}{2}-\binom{n-k-1}{2}-(k+1)=\binom{k+1}{2}.
\end{align*}
Since $|V(H)|=k+1$, we infer that $V(H)$ is a $(k+1)$-clique and
equality holds in the above formula. This implies that $G=G'$ and
$|N_G(v)\cap S|=1$ for every $v\in V(H)$. Recall that $|N(H)\cap
S|\leq 1$; hence all vertices in $H$ have a common neighbor in $S$. We obtain
that $G=L_{n,k+1}$, and (b) holds. The proof is complete.
\hfill{\rule{4pt}{8pt}}
\medskip
We further prove a stability result of Theorem \ref{Thm:BW} as
follows.
\noindent {\bf Proof of Theorem \ref{Thm:LiNing-stability}.} The
argument used here is similar to Theorem \ref{Thm:RefinedBW}.
However, more details are needed. We first claim that $G$ is weakly
pancyclic with girth 3. By Lemma \ref{Lem:Bondy}, we shall show that
$\binom{n-k-2}{2}+\binom{k+3}{2}>\frac{c(2n-c)}{4}$. Suppose to the
contrary that $c^2-2nc+2(n^2-(2k+5)n)+4(k+2)(k+3)\leq 0$. However, the discriminant satisfies
$$(2n)^2-4\left(2(n^2-(2k+5)n)+4(k+2)(k+3)\right)<0$$ when $n\geq
2k+7$, a contradiction. This proves the first part of the theorem.
Let $G':=cl_n(G)$. If $c(G')\geq n-k$, then by Lemma
\ref{Lem:BonChv}, $c(G)=c(G')\geq n-k$. Recall that $G$ is weakly
pancyclic, implying that $G$ contains $C_{n-k}$. So we assume that
$c(G')\leq n-k-1$. Since $$e(G')\geq e(G)\geq
\binom{n-k-2}{2}+\binom{k+3}{2}\geq \binom{n-k-3}{2}+(k+3)^2+1$$ for
$n\geq \frac{(k+4)(k+5)}{2}$, by Lemma
\ref{Lem:LiNing} we have $\omega(G')\geq n-k-2$ for $n\geq 6k+17$. If $\omega(G')\geq n-k$,
then $c(G')\geq\omega(G')\geq n-k$, a contradiction. Now we assume
that $\omega(G')=n-k-2$ or $\omega(G')=n-k-1$. Let $S$ be a clique
of $G'$ with $|S|=\omega(G')$, $K=G'[S]$ and $H=G'-S$.
\underline{Case A. $\omega(G')=n-k-1$.} Let $H_1$ be an arbitrary
component of $H$. If $|N_{G'}(H_1)\cap S|\geq 2$, then $c(G')\geq
n-k$ (recall that $S$ is a clique of $G'$), a contradiction. Thus,
every component $H_1$ of $G'-S$ satisfies
|N_{G'}(H_1)\cap S|\leq 1$. It follows that $G\subseteq G'\subseteq
F$ for some $F\in\mathcal{F}_{n,k+1}$, and (a) holds.
\underline{Case B. $\omega(G')=n-k-2$.} Set $T=\{v\in V(H):
|N_{G'}(v)\cap S|\geq 2\}$. We distinguish the following subcases.
\underline{Case B.1. $|T|=0$.} In this case, every vertex $v\in
V(H)$ has $|N_{G'}(v)\cap S|\leq 1$. Now
\begin{align*}
e(G'-S) & =e(G')-e(K)-e_{G'}(S,V(H))\geq e(G)-e(K)-e_{G'}(S,V(H))\\
& \geq\binom{n-k-2}{2}+\binom{k+3}{2}-\binom{n-k-2}{2}-(k+2)=\binom{k+2}{2}.
\end{align*}
Since $|V(H)|=k+2$, we infer that $V(H)$ is a $(k+2)$-clique and
equality holds in the above formula. This implies that $G=G'$ and
$|N_G(v)\cap S|=1$ for every $v\in V(H)$. If $|N(H)\cap S|\geq 2$,
then clearly $c(G)\geq n-k$, a contradiction. This implies that all
vertices in $H$ have a common neighbor in $S$. We obtain that
$G=L_{n,k+2}$, and (b) holds.
\underline{Case B.2. $|T|=1$.} Let $v_1$ be the unique vertex in
$T$. Let $H_1$ be an arbitrary component of $H-v_1$. If $v_1\in
N_{G'}(H_1)$, then $N_{G'}(H_1)\cap S=\emptyset$; for otherwise
$c(G')\geq n-k$. Furthermore, If $|N_{G'}(H_1)\cap S|\geq 2$, then
there are two independent edges between $S$ and $V(H_1)$ (notice
that in $G'$, every vertex in $H_1$ has at most one neighbor in $S$),
implying that $c(G')\geq n-k$, a contradiction. Thus,
$|N_{G'}(H_1)\cap(S\cup\{v_1\})|\leq 1$ for every component $H_1$ of
$G'-(S\cup\{v_1\})$. This implies that $G\subseteq G'\subseteq
F$ for some $F\in\mathcal{F}_{n,k+1}$, and (a) holds.
\underline{Case B.3. $|T|\geq 2$.} Let $v_1$ be a vertex in $T$ and
$u_1,u_2$ be two vertices in $N_{G'}(v_1)\cap S$. For any other
vertex $v_2\in T$, we have that $N_{G'}(v_2)\cap S=\{u_1,u_2\}$, for
otherwise $c(G')\geq n-k$. Furthermore, $N_{G'}(v_1)\cap S=\{u_1,u_2\}$.
In brief, we have $N_{G'}(T)\cap S=\{u_1,u_2\}$. If there are
two vertices in $T$ which are adjacent in $G'$, then $c(G')\geq n-k$, a
contradiction. So $T$ is independent in $G'$. For any
vertex $v\in V(G)\backslash(S\cup T)$, we claim that
$|N_{G'}(v)\cap(S\cup T)|\leq 1$. Indeed, as $v\notin T$, $v$ cannot
have two neighbors in $S$. If $N_{G'}(v)$ contains two vertices in
$T$ or contains one vertex in $T$ and one vertex in $S$, then we
have $c(G')\geq n-k$, a contradiction. Set $t=|T|$. Notice that
$2\leq t\leq k+2$. Now
\begin{align*}
e(G') & =e(K)+e_{G'}(S,T)+e_{G'}(S\cup T,V(G)\backslash(S\cup T))+e(H-T)\\
& \leq\binom{n-k-2}{2}+2t+(k+2-t)+\binom{k+2-t}{2}\\
& =\binom{n-k-2}{2}+\binom{k+3}{2}+\frac{t^2-(2k+1)t}{2}\\
&\leq e(G)+\frac{t(t-2k-1)}{2}.
\end{align*}
This implies that $t\geq 2k+1$. Combining with $2\leq t\leq k+2$, it
can only be that $k=0$ and $t=2$, or $k=1$ and $t=3$. In each case $V(G)=S\cup
T$. For the first case, we have $G\subseteq G'=\varGamma_{n,2}$, and (c)
holds. For $k=1$ and $t=3$, $G'=\varGamma_{n,3}$. Moreover, equality
holds in the above inequalities, implying that $G=G'$ and (d) holds.
\hfill{\rule{4pt}{8pt}}
\section{Spectral results}\label{Sec:Spectral}
Let $G$ be a graph and $u,v\in V(G)$. We use $G[u\rightarrow v]$ to denote
a new graph obtained from $G$ by replacing every edge $uw$ with $vw$,
where $w\in N_G(u)\backslash (N_G(v)\cup \{v\})$.
Following Brouwer's book, we call this the ``Kelmans operation''.
In this article, we need some results on spectral properties of graphs under Kelmans operation.
These theorems will play important roles in our answers to Problem \ref{Prob1}.
\begin{thm}[Csikv\'ari \cite{C09}]\label{Thm:C09}
Let $G$ be a graph and $u,v\in V(G)$. Let $G':=G[u\rightarrow v]$.
Then $\rho(G')\geq \rho(G)$.
\end{thm}
\begin{thm}[Li and Ning \cite{LN16}]\label{Thm:LN}
Let $G$ be a graph and $u,v\in V(G)$. Let $G':=G[u\rightarrow v]$.
Then $q(G')\geq q(G)$.
\end{thm}
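Theorems \ref{Thm:C09} and \ref{Thm:LN} are easy to test numerically. The sketch below is illustrative only (it assumes NumPy; the random graph and the choice $u=0$, $v=1$ are arbitrary): it applies the Kelmans operation to a random graph and checks that neither $\rho$ nor $q$ decreases.

```python
import numpy as np

def kelmans(A, u, v):
    # Kelmans operation G[u -> v]: replace each edge uw by vw for
    # w in N(u) \ (N(v) and v); the edge uv, if present, is kept
    A = A.copy()
    for w in range(len(A)):
        if w != v and A[u, w] and not A[v, w]:
            A[u, w] = A[w, u] = 0
            A[v, w] = A[w, v] = 1
    return A

rng = np.random.default_rng(3)
n = 7
A = np.triu((rng.random((n, n)) < 0.5).astype(float), 1)
A = A + A.T  # symmetric 0/1 adjacency matrix

rho = lambda M: max(np.linalg.eigvalsh(M))                       # spectral radius
q = lambda M: max(np.linalg.eigvalsh(M + np.diag(M.sum(axis=1))))  # q(G)

B = kelmans(A, 0, 1)
assert rho(B) >= rho(A) - 1e-9  # Theorem (Csikvari)
assert q(B) >= q(A) - 1e-9      # Theorem (Li and Ning)
```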
The following spectral inequalities help us convert our problems
into extremal ones.
\begin{thm}[Hong \cite{H93}]\label{Thm:Hong}
Let $G$ be a graph on $n$ vertices and $m$ edges. If $\delta(G)\geq 1$,
then $\rho(G)\leq \sqrt{2m-n+1}$.
\end{thm}
\begin{thm}[Das \cite{Das}]\label{Thm:Das}
Let $G$ be a graph on $n$ vertices and $m$ edges.
Then
$q(G)\leq \frac{2m}{n-1}+n-2$.
\end{thm}
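Both bounds admit a quick numerical sanity check. The following sketch is illustrative only (it assumes NumPy; the random graph built on a spanning path, which guarantees $\delta(G)\geq 1$, is an arbitrary choice): it verifies Hong's and Das's inequalities on an example.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n = 8
A = np.zeros((n, n))
for i in range(n - 1):            # spanning path: G connected, delta >= 1
    A[i, i + 1] = A[i + 1, i] = 1
for i, j in combinations(range(n), 2):
    if rng.random() < 0.3:        # sprinkle extra random edges
        A[i, j] = A[j, i] = 1

m = A.sum() / 2
Q = A + np.diag(A.sum(axis=1))
rho = max(np.linalg.eigvalsh(A))
q = max(np.linalg.eigvalsh(Q))

assert rho <= np.sqrt(2 * m - n + 1) + 1e-9   # Hong's bound
assert q <= 2 * m / (n - 1) + n - 2 + 1e-9    # Das's bound
```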
The following two lemmas will be used to determine the extremal graphs.
\begin{lem}\label{Lem:extremal}
Let $G$ be a graph. Suppose that $G$ is a subgraph of
a member in $\mathcal{F}_{n,k}$, where $n\geq 2k+1$.\\
(a) If $\rho(G)\geq \rho(L_{n,k})$, then $G=L_{n,k}$.\\
(b) If $q(G)\geq q(L_{n,k})$, then $G=L_{n,k}$.
\end{lem}
\begin{proof}
(a) Let $F\in\mathcal{F}_{n,k}$ with $G\subseteq F$. Since
$G\subseteq F$, $\rho(G)\leq\rho(F)$, with equality if and only
if $G=F$ (recall that $F$ is connected). Let $K$
be the complete subgraph of $F$ with $|K|=n-k$.
Let $H_1,H_2,\ldots,H_t$ be the components of $F-K$, and let $v_i$,
$i\in[1,t]$, be the unique vertex in $N(H_i)\cap V(K)$. By
a series of Kelmans operations from $v_i$ to $v_1$ for all $v_i\neq
v_1$, we get a graph $F'$ which is a subgraph of $L_{n,k}$. By
Theorem \ref{Thm:C09},
$$\rho(G)\leq\rho(F)\leq\rho(F')\leq\rho(L_{n,k}),$$
equality holds if and only if $G=F=F'\cong L_{n,k}$. This proves the
statement (a).
(b) The proof is almost the same as that of (a). We just use Theorem
\ref{Thm:LN} instead of Theorem \ref{Thm:C09} in the whole proof.
We omit the details.
\end{proof}
\begin{lem}\label{Lem:compared}
Let $n,k$ be integers where $k\geq 1$. Then\\
(a) $\rho(L_{n,k})>\rho(L_{n,k+1})$ for $n\geq 2k+4$;
$\rho(F_{n,1})>\rho(\varGamma_{n,2})$ for $n\geq 6$;
$\rho(F_{n,2})>\rho(\varGamma_{n,3})$ for $n\geq 4$.\\
(b) $q(L_{n,k})>q(L_{n,k+1})$ for $n\geq 2k+4$;
$q(F_{n,1})>q(\varGamma_{n,2})$ for $n\geq 6$;
$q(F_{n,2})>q(\varGamma_{n,3})$ for $n\geq 1$.
\end{lem}
\begin{proof}
(a) Let $V(L_{n,k+1})=X\cup Y\cup \{z\}$, where $X\cup \{z\}$ is the $(k+2)$-clique
in $L_{n,k+1}$ and $Y\cup \{z\}$ is the $(n-k-1)$-clique in $L_{n,k+1}$.
Choose $x\in X$. $L_{n,k}$ can be obtained from $L_{n,k+1}$
by deleting all edges $xx'$ for $x'\in X$ and adding all edges $xy'$ for $y'\in Y$.
Let $M$ be the Perron vector with respect
to $\rho(L_{n,k+1})$, where $x$, $y$ and $w$ denote the eigencomponents of the vertices
in $X$, the vertices in $Y$ and the vertex $z$, respectively. Let $\rho_1:=\rho(L_{n,k+1})$.
By the eigenequation, we have $\rho_1x=kx+w$ and $\rho_1y=(n-k-3)y+w$. It follows
that $(\rho_1-k)x=(\rho_1-(n-k-3))y$. Since $n\geq 2k+4$, we have $y>x$.
Then by the Rayleigh quotient,
we have
\begin{align*}
\rho(L_{n,k})-\rho(L_{n,k+1})&\geq 2(n-k-2)xy-2kx^2=2x((n-k-2)y-kx)>0.
\end{align*}
This proves $\rho(L_{n,k})>\rho(L_{n,k+1})$ for $n\geq 2k+4$.
Let $M'$ be the Perron vector with respect
to $q(L_{n,k+1})$, where $x$, $y$ and $w$ denote the eigencomponents of the vertices
in $X$, the vertices in $Y$ and the vertex $z$, respectively. Let $q_1:=q(L_{n,k+1})$.
By the eigenequation, we have $q_1x=(2k+1)x+w$ and $q_1y=(2n-2k-5)y+w$. It follows
that $(q_1-(2k+1))x=(q_1-(2n-2k-5))y$. If $n\geq 2k+4$, then $y>x$.
Then by the Rayleigh quotient,
we have
\begin{align*}
q(L_{n,k})-q(L_{n,k+1})&\geq (n-k-2)(x+y)^2-k(x+x)^2>0.
\end{align*}
This proves $q(L_{n,k})>q(L_{n,k+1})$ for $n\geq 2k+4$.
Next, $\rho(\varGamma_{n,2})\leq \sqrt{2e(\varGamma_{n,2})-n+1}=\sqrt{n^2-6n+15}<n-2=\rho(K_{n-1})<\rho(F_{n,1})$
for $n\geq 6$.
$q(\varGamma_{n,2})\leq \frac{2e(\varGamma_{n,2})}{n-1}+n-2\leq 2(n-2)=q(K_{n-1})<q(F_{n,1})$ for $n\geq 6$.
Similarly, $\rho(\varGamma_{n,3})\leq \sqrt{2e(\varGamma_{n,3})-n+1}=\sqrt{n^2-8n+25}<n-3=\rho(K_{n-2})<\rho(F_{n,2})$
for $n\geq 4$.
$q(\varGamma_{n,3})\leq \frac{2e(\varGamma_{n,3})}{n-1}+n-2\leq 2(n-3)=q(K_{n-2})<q(F_{n,2})$.
\end{proof}
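The comparisons in the lemma can also be checked numerically. The sketch below assumes, consistently with the clique structure described in the proof above, that $L_{n,k}$ is the graph obtained from $K_{n-k}$ and $K_{k+1}$ by identifying one vertex:

```python
import numpy as np

def L(n, k):
    """K_{n-k} and K_{k+1} sharing a single vertex (vertex 0)."""
    A = np.zeros((n, n))
    for clique in (list(range(n - k)), [0] + list(range(n - k, n))):
        for i in clique:
            for j in clique:
                if i != j:
                    A[i, j] = 1.0
    return A

def rho(A):
    return np.linalg.eigvalsh(A).max()

def q(A):
    return np.linalg.eigvalsh(np.diag(A.sum(axis=1)) + A).max()

for k in range(1, 4):
    for n in range(2 * k + 4, 2 * k + 10):
        assert rho(L(n, k)) > rho(L(n, k + 1))   # part (a)
        assert q(L(n, k)) > q(L(n, k + 1))       # part (b)
```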
\noindent
{\bf Proof of Theorem \ref{Thm:Spectrallargecylce}.} If $G$ is
disconnected, then we can add some edges between
different components recursively, and get a connected graph $G'$ with
$\rho(G')>\rho(G)$ and $q(G')>q(G)$. Since the added edges are
not contained in any cycle, every cycle of $G'$ is also a cycle
of $G$. Thus we only deal with the case that $G$ is connected.
Suppose that (a) holds. Furthermore, suppose that $G$
does not contain a $C_{\ell}$ for some $\ell\in [3,n-k+1]$.
We shall show that $G=L_{n,k}$.
By Theorem \ref{Thm:Hong}, we have
$$\sqrt{2e(G)-n+1}\geq \rho(G)\geq \rho(L_{n,k+1})\geq n-k-1.$$
It follows that $2e(G)\geq(n-k-1)^2+n-1$. Note that
$$\frac{(n-k-1)^2+n-1}{2}\geq \binom{n-k-1}{2}+\binom{k+2}{2}$$
for $n\geq \frac{(k+2)^2}{2}$. By Theorem \ref{Thm:LiNing-stability},
$G$ is weakly pancyclic with girth 3 for
$n\geq \max\{6k+11,\frac{(k+3)(k+4)}{2}\}$. Furthermore, if $G$ does not
contain a $C_{n-k+1}$, then one of the following is true: (1)
$G\subseteq F$ for some $F\in \mathcal{F}_{n,k}$; (2) $G=L_{n,k+1}$;
(3) $k=1$ and $G\subseteq\varGamma_{n,2}$, or $k=2$ and
$G\subseteq\varGamma_{n,3}$.
By Lemma \ref{Lem:extremal} and Lemma \ref{Lem:compared}, $G=L_{n,k}$.
Suppose that (b) holds. By Theorem \ref{Thm:Das}, we obtain
$$\frac{2e(G)}{n-1}+n-2\geq q(G)\geq q(L_{n,k+1})\geq 2(n-k-1),$$
which implies that $e(G)\geq \frac{n^2-(2k+1)n+2k}{2}$. Note that
$\frac{n^2-(2k+1)n+2k}{2}\geq \binom{n-k-1}{2}+\binom{k+2}{2}$ for
$n\geq k^2+2k+2$. By Theorem \ref{Thm:LiNing-stability}, $G$ is weakly
pancyclic with girth 3. Furthermore, if $G$ does not contain a
$C_{n-k+1}$, then one of the following is true: (1) $G\subseteq F$
for some $F\in \mathcal{F}_{n,k}$; (2) $G=L_{n,k+1}$; (3) $k=1$ and
$G\subseteq\varGamma_{n,2}$, or $k=2$ and
$G\subseteq\varGamma_{n,3}$. By Lemma \ref{Lem:extremal}
and Lemma \ref{Lem:compared}, $G=L_{n,k}$.
The proof is complete.
\hfill{\rule{4pt}{8pt}}
\section{One open problem of Nikiforov }\label{Sec:Nikiforov}
This section is devoted to an open problem by Nikiforov \cite{N08}.
Before the proof, we collect various results that will be used in our arguments.
We first prove one edge condition for even cycles.
\begin{thm}\label{Thm:Erdos-Gall-Voss}
Let $G$ be a graph on $n$ vertices and $e(G)$ edges.
If $G$ contains no even cycle of length more than $2k$, where
$k\geq 1$ is an integer, then $e(G)\leq \frac{(2k+1)(n-1)}{2}$.
\end{thm}
\begin{thm}[Voss and Zuluaga \cite{VZ77}]\label{Thm:Voss-Zuluage}
(1) Every 2-connected graph $G$ with $\delta(G)\geq r\geq 3$ having at least $2r+1$
vertices contains an even cycle of length at least $2r$.
(2) Every 2-connected non-bipartite graph $G$ with $\delta(G)\geq r\geq 3$ having at least $2r+1$
vertices contains an odd cycle of length at least $2r-1$.
\end{thm}
\begin{thm}[Ore \cite{O61}]\label{Thm:Ore}
Let $G$ be a graph on $n$ vertices. If $G$ contains no Hamilton cycle, then
$e(G)\leq \binom{n-1}{2}+1$.
\end{thm}
A graph is called a \emph{theta graph} if it consists of three internally
disjoint paths sharing the same two end vertices. The following
lemma is elementary.
\begin{lem}\label{Lem:Theta}
Let $G$ be a graph containing no theta graphs. Then each block of $G$
is an edge or a cycle.
\end{lem}
\noindent
{\bf Proof of Theorem \ref{Thm:Erdos-Gall-Voss}.}
If $n\leq 2k+1$, then $e(G)\leq \binom{n}{2}\leq \frac{(2k+1)(n-1)}{2}$.
If $n=2k+2$, then by Theorem \ref{Thm:Ore}, we have $e(G)\leq \binom{2k+1}{2}+1\leq \frac{(2k+1)(n-1)}{2}$.
Next, we assume $n\geq 2k+3$.
Let $k=1$. We shall prove that if a graph on $n$ vertices contains
no even cycles then $e(G)\leq \frac{3(n-1)}{2}$.
By Lemma \ref{Lem:Theta}, every block of $G$ is an edge or an odd cycle. Let $c$
be the number of blocks which are odd cycles. A simple induction shows that
$e(G)\leq n+c-1\leq n-1+\frac{n-1}{2}=\frac{3(n-1)}{2}$.
In the following, we suppose $k\geq 2$.
Let $v\in V(G)$ with $d_G(v)=\delta(G)$, and $G':=G-v$. Note that $G'$ satisfies that
$v(G')\geq2k+2$ and $G'$ contains no even cycle of length more than $2k$.
By the induction hypothesis, if $\delta(G)\leq k$, then we have
$e(G)=e(G')+\delta(G)\leq \frac{(2k+1)(n-2)}{2}+k<\frac{(2k+1)(n-1)}{2}$,
as required. Thus, $\delta(G)\geq k+1\geq 3$. If $G$ is 2-connected,
then by Theorem \ref{Thm:Voss-Zuluage}, $G$ contains an even cycle
of length at least $2k+2$, a contradiction. Thus, $G$ contains a cut-vertex
or is disconnected. For each case, we use induction to each component
and compute the number of edges. The proof is complete. \hfill{\rule{4pt}{8pt}}
The following spectral inequality was originally proposed by Guo, Wang and Li \cite{GWL19}
as a conjecture and proved by Sun and Das \cite{SD20}.
\begin{thm}[Sun and Das \cite{SD20}]\label{Thm:Sun-Das}
Let $G$ be a graph with minimum degree $\delta(G)\geq 1$. For any $v\in V(G)$,
we have $\rho^2(G-v)\geq\rho^2(G)-2d(v)+1$.
\end{thm}
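The Sun--Das inequality is easy to verify numerically on small graphs; the random graph below is illustrative, and isolated vertices are patched so that the minimum-degree hypothesis holds:

```python
import numpy as np

def rho(A):
    return np.linalg.eigvalsh(A).max()

rng = np.random.default_rng(2)
n = 12
A = np.triu((rng.random((n, n)) < 0.35), 1).astype(float)
A = A + A.T
for i in np.where(A.sum(axis=1) == 0)[0]:    # ensure delta(G) >= 1
    j = (i + 1) % n
    A[i, j] = A[j, i] = 1.0
r2 = rho(A) ** 2
for v in range(n):
    Av = np.delete(np.delete(A, v, axis=0), v, axis=1)
    # rho^2(G - v) >= rho^2(G) - 2 d(v) + 1
    assert rho(Av) ** 2 >= r2 - 2 * A[v].sum() + 1 - 1e-9
```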
By Theorem \ref{Thm:Sun-Das}, we deduce a result for graphs with isolated vertices.
\begin{lem}\label{Lem:Sun-Das}
Let $G$ be a graph. For any $v\in V(G)$, we have
$$\rho^2(G)\leq \rho^2(G-v)+2d(v).$$
\end{lem}
For a graph $G$, denote by $ec(G)$ the length of a longest even cycle of $G$ and
$oc(G)$ the length of a longest odd cycle of $G$.
\begin{thm}[Gould, Haxell and Scott \cite{GHS02}]\label{Thm:GHS}
For every real number $c>0$, there exists a constant $K:=K(c)=\frac{7.5\times 10^5}{c^5}$ depending only on $c$
such that the following holds. Let $G$ be a graph with $n\geq \frac{45K}{c^4}$ vertices
and minimum degree at least $cn$. Then $G$ contains a cycle of length $t$
for every even $t\in [4,ec(G)-K]$ and every odd $t\in [K,oc(G)-K]$.
\end{thm}
Now we give the proof of Theorem \ref{Thm:SpectraConsecutiveCycles}.
\noindent
{\bf Proof of Theorem \ref{Thm:SpectraConsecutiveCycles}.}
If $G$ is disconnected, say $G$ contains
$t$ components, then we can add $t-1$ edges to
make it connected, such that each
new edge is a cut edge of the new graph $G'$.
Note that $\rho(G')\geq \rho(G)$.
For any integer $k\geq 3$, $G$ contains a cycle
of length $k$ if and only if $G'$ contains a cycle
of length $k$. Thus, we can assume that $G$ is connected.
By Theorem \ref{Thm:Hong}, we have
$$\frac{n^2-1}{4}\leq \rho^2(T_{n,2})<\rho^2(G)\leq
2m-n+1.$$ One can compute that
$2m\geq\frac{n^2+4n+3}{4}$. Thus, the average degree
$d(G):=\frac{2m}{n}>\frac{n}{4}$.
Let $H$ be a subgraph of $G$ defined by a sequence of graphs
$G_0,G_1,\ldots,G_k$ such that:\\
(1) $G=G_0$, $H=G_k$;\\
(2) for every $i\in[0,k-1]$, there is $v_i\in V(G_i)$ such that
$d_{G_i}(v_i)\leq\frac{n}{8}$ and $G_{i+1}=G_i-v_i$;\\
(3) for every $v\in V(G_k)$, $d_{G_k}(v)>\frac{n}{8}$.\\
We claim that $d(H)>\frac{n}{4}$. Suppose that this is not the case. Then there
is a smallest $i\in[1,k]$ with $d(G_i)\leq\frac{n}{4}$. This implies
that
$$d(G_{i-1})=\frac{2d(v_{i-1})+|G_i|d(G_i)}{|G_i|+1}\leq\frac{n}{4},$$
a contradiction. Thus, we conclude that $d(H)>\frac{n}{4}$ and
$\delta(H)>\frac{n}{8}$.
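The construction of $H$ is the standard peeling procedure: repeatedly delete a vertex of degree at most $\frac{n}{8}$. A minimal sketch with illustrative parameters:

```python
import numpy as np

def peel(A, thresh):
    """Repeatedly delete a vertex of degree <= thresh; the surviving
    induced subgraph (possibly empty) has minimum degree > thresh."""
    keep = list(range(len(A)))
    while keep:
        deg = A[np.ix_(keep, keep)].sum(axis=1)
        low = np.where(deg <= thresh)[0]
        if len(low) == 0:
            break
        keep.pop(int(low[0]))
    return keep

rng = np.random.default_rng(3)
n = 40
A = np.triu((rng.random((n, n)) < 0.5), 1).astype(float)
A = A + A.T
H = peel(A, n / 8)                    # threshold n/8, as in the text
sub = A[np.ix_(H, H)]
assert len(H) > 0                     # H is non-empty for this dense graph
assert (sub.sum(axis=1) > n / 8).all()   # property (3): delta(H) > n/8
```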
\underline{Case A: Even cycle.} Note that
$e(H)=\frac{d(H)|H|}{2}>\frac{\frac{n}{4}(|H|-1)}{2}$. By Theorem
\ref{Thm:Erdos-Gall-Voss}, $ec(H)>\frac{n}{4}$. Recall that
$\delta(H)>\frac{n}{8}$. By Theorem \ref{Thm:GHS}, $H$ contains all
even cycles $C_{\ell}$ with $\ell \in [4,ec(H)-K]$ if
$|H|\geq45\cdot8^4\cdot K$, where $K=K(\frac{1}{8})=\frac{7.5\times 10^{5}}{(\frac{1}{8})^5}$ is the constant in
Theorem \ref{Thm:GHS}. Clearly $|H|>\frac{n}{4}$. Let $n_1$ be an
integer satisfying
$$(i)\ \frac{n_1}{4}\geq 45\cdot8^4\cdot K;\ (ii)\ \varepsilon n_1\geq K.$$
Now if $n\geq \max\{1.9\times 10^{16},\frac{2.5\times 10^{10}}{\varepsilon}\}$,
then $G$ contains all even cycles $C_{\ell}$
with $\ell \in [4,(\frac{1}{4}-\varepsilon)n]$.
\underline{Case B: Odd cycle.} Set $h=|H|$. By Lemma
\ref{Lem:Sun-Das}, we have
$$
\rho^2(G)\leq\rho^2(H)+2\sum_{i=0}^{k-1}d_{G_i}(v_i)\leq\rho^2(H)+2k\cdot\frac{n}{8}=\rho^2(H)+\frac{kn}{4},
$$
where $G_i$, $v_i$ are those in the definition of $H$, and $k=n-h$.
This implies that $\rho(H)\geq\frac{\sqrt{nh-1}}{2}$. Since
$\rho(G)>\sqrt{\lfloor\frac{n^2}{4}\rfloor}$, by Nosal's theorem \cite{N70}
and Mantel's theorem, $G$ contains a triangle, and so is
non-bipartite. If $h<n$, then
$\rho(H)>\frac{\sqrt{nh-1}}{2}\geq\sqrt{\lfloor\frac{h^2}{4}\rfloor}$,
and $H$ is non-bipartite as well. In any case we infer $H$ is
non-bipartite.
Let $F$ be a subgraph of $H$ defined by a sequence of graphs
$H_0,H_1,\ldots,H_k$ such that:\\
(1) $H=H_0$, $F=H_k$;\\
(2) for every $i\in[0,k-1]$, there is a cut-vertex $v_i$ of $H_i$ and $H_{i+1}=H_i-v_i$;\\
(3) $H_k$ has no cut-vertex.\\
Note that the number of components satisfies $w(H_{i+1})\geq w(H_i)+1$. Clearly
$w(H)\leq 8$, for otherwise $H$ would have a vertex of degree less
than $\frac{n}{8}$. We claim that $w(F)\leq 8$. Suppose to the
contrary that there is a smallest $i$ with $w(H_i)\geq 9$. Notice that
$i\leq 8$, implying that $\delta(H_i)>\frac{n}{8}-8$. As $w(H_i)\geq
9$, $H_i$ has a vertex with degree less than
$\frac{|H_i|}{9}<\frac{n}{9}$, a contradiction when $n\geq577$. Thus
we conclude that $w(F)\leq 8$, and in particular, $v(F)\geq h-7$.
By Lemma \ref{Lem:Sun-Das}, we have
$$
\rho^2(H)\leq\rho^2(F)+2\sum_{i=0}^{k-1}d_{H_i}(v_i)\leq\rho^2(F)+2k(h-1)\leq\rho^2(F)+14(h-1).
$$
Since $d(H)>\frac{n}{4}$, we obtain $h>\frac{n}{4}+1$. Thus,
\begin{equation*}
\begin{split}
\rho(F) & \geq\sqrt{\rho^2(H)-14(h-1)}\geq\sqrt{\frac{nh-1}{4}-14(h-1)}\\
& =\sqrt{\left(\frac{n}{4}-14\right)h+\frac{55}{4}}>\sqrt{\left(\frac{n}{4}-14\right)\left(\frac{n}{4}+1\right)+\frac{55}{4}}\geq\frac{n}{4}-7\\
\end{split}
\end{equation*}
when $n\geq85$.
Recall that $F$ has no cut-vertex, i.e., every component of $F$ is
2-connected. Let $F_1$ be a component of $F$ with
$\rho(F_1)=\rho(F)$. Thus we have $\delta(F_1)\geq\frac{n}{8}-7$ and
$\rho(F_1)>\frac{n}{4}-7$. In particular, $|F_1|>\frac{n}{4}-6$.
We claim that $\delta(F_1)\geq\frac{|F_1|}{8}$. Recall that
$\delta(H)\geq\frac{n}{8}\geq\frac{|H|}{8}$, so we may assume that $F_1\neq
H$. This implies that $F$ has a second component $F_2$. Since
$\delta(F)\geq\frac{n}{8}-k$, we have $|F_2|\geq\frac{n}{8}-k+1$
(here $k$ is that in definition of $F$). This implies that
$|F_1|\leq h-k-(\frac{n}{8}-k+1)<\frac{7h}{8}$. Thus
$\delta(F_1)\geq\frac{n}{8}-7\geq\frac{7n/8}{8}\geq\frac{|F_1|}{8}$
when $n\geq 448$.
Now we show that $F_1$ is non-bipartite. Recall that $H$ is
non-bipartite. So we assume that $F_1\neq H$. By the analysis above
we have $|F_1|<\frac{7h}{8}$. Thus
$$\rho^2(F_1)=\rho^2(F)\geq\left(\frac{n}{4}-14\right)h+\frac{55}{4}\geq\left(\frac{h}{4}-14\right)h+\frac{55}{4}>\frac{(7h/8)^2}{4}>\frac{|F_1|^2}{4}$$
when $h\geq 238$. Since $h>\frac{n}{4}+1$, we have that $F_1$ is
non-bipartite when $n\geq 944$.
By Theorem \ref{Thm:Voss-Zuluage}, $oc(F_1)\geq
2\delta(F_1)-1\geq\frac{n}{4}-15$. By Theorem \ref{Thm:GHS}, $F_1$
contains all odd cycles $C_{\ell}$ for
$\ell\in[K,\frac{n}{4}-15-K]$, where $K=K(\frac{1}{8})$ is the
constant as in Theorem \ref{Thm:GHS}. A theorem of
Nikiforov \cite[Theorem~1]{N08}\footnote{By refining the proof, one can let $N=8400$.}
states that there exists a sufficiently large $N$
such that any graph of order $n\geq N$ has a cycle of length $\ell$ for every
$\ell \in[3,\frac{n}{320}]$. Let
$n_2$ be an integer such that
$$(i)\ n_2\geq\max\{944,N\};\ (ii)\ \frac{n_2}{320}\geq K;\ (iii)\
\varepsilon n_2\geq K+15.$$
We only need $n_2\geq \max\{N, 7.9\times 10^{12},\frac{2.5\times 10^{10}}{\varepsilon}\}$.
Now if $n\geq\max\{n_1, n_2\}$, then $G$
contains all cycles $C_{\ell}$ with $\ell \in
[3,(\frac{1}{4}-\varepsilon)n]$.
The proof is complete. \hfill{\rule{4pt}{8pt}}
\section{A concluding remark}
Nikiforov \cite{N10} proposed two nice conjectures on cycles of small lengths.
He conjectured that: (a) every graph on sufficiently large order $n$
contains a $C_{2k+1}$ or a $C_{2k+2}$ if $\rho(G)\geq \rho(S_{n,k})$,
unless $G=S_{n,k}$ where $S_{n,k}:=K_k\vee (n-k)K_1$; and (b) every graph on sufficiently large order $n$
contains a $C_{2k+2}$ if $\rho(G)\geq \rho(S^+_{n,k})$,
unless $G=S^+_{n,k}$ where $S^+_{n,k}$ is obtained from $S_{n,k}$
by adding an edge between two of the $n-k$ independent vertices. One can easily compute
that $\rho(S_{n,k})=\Omega (\sqrt{n})$ and $\rho(S^+_{n,k})=\Omega (\sqrt{n})$.
If these conjectures are confirmed, then one may obtain tight
spectral conditions for $C_{\ell}$ where $\ell \in [3,\Omega(\sqrt{n})]\cup [n-\Omega(\sqrt{n}),n]$.
It remains mysterious how to determine tight spectral conditions
for $C_{\ell}$, where $0<\lim\limits_{n\rightarrow \infty} \frac{\ell}{n}=c<1$,
such as $C_{\lfloor\frac{n}{2}\rfloor}$ and $C_{\lceil\frac{n}{2}\rceil}$.
\section{Introduction}
Canonical Correlation Analysis (CCA) \cite{CCA1,CCA2} is a classic statistical method for finding maximally correlated linear transformations of two modalities (or views). Given two centered modalities $\myvec{X}\in \mathbb{R}^{D^x \times N}$ and $\myvec{Y}\in \mathbb{R}^{D^y \times N}$, with $N$ samples and $D^x$ and $D^y$ features, respectively, CCA seeks canonical vectors $\myvec{a}\in \mathbb{R}^{D^x}$ and $\myvec{b}\in \mathbb{R}^{D^y}$ such that the projections $\myvec{u}=\myvec{a^T X}$ and $\myvec{v}=\myvec{b^T Y}$ maximize the sample correlation between the \textit{canonical variates}, i.e.
\begin{equation} \label{eq:cca}
\underset{\myvec{a,b}\neq 0}{\operatorname{max}}\quad{\rho(\myvec{a^T X ,b^T Y })=\frac{\myvec{a^T X }\myvec{Y^T b } }{\|\myvec{a^T X }\|_2 \|\myvec{b^T Y }\|_2} }.
\end{equation}
These canonical vectors can be identified by solving a generalized eigenvalue problem.
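As an illustration (not taken from the paper's experiments), the closed-form solution can be computed with NumPy on synthetic data sharing one latent signal; the data-generating choices below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Dx, Dy = 500, 4, 3
Z = rng.standard_normal(N)                     # shared latent signal
X = np.vstack([Z + 0.1 * rng.standard_normal(N) for _ in range(Dx)])
Y = np.vstack([Z + 0.1 * rng.standard_normal(N) for _ in range(Dy)])
X -= X.mean(axis=1, keepdims=True)
Y -= Y.mean(axis=1, keepdims=True)
Cx = X @ X.T / (N - 1)
Cy = Y @ Y.T / (N - 1)
Cxy = X @ Y.T / (N - 1)
# leading eigenvector of Cx^{-1} Cxy Cy^{-1} Cyx gives a; b follows from a
M = np.linalg.solve(Cx, Cxy) @ np.linalg.solve(Cy, Cxy.T)
w, V = np.linalg.eig(M)
a = np.real(V[:, np.argmax(np.real(w))])
b = np.linalg.solve(Cy, Cxy.T @ a)
u, v = a @ X, b @ Y
corr = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
assert corr > 0.95     # the recovered canonical correlation is near 1
```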
In order to identify non-linear relations between input variables, several extensions of CCA have been proposed. Kernel methods such as Kernel CCA \cite{bach2002kernel}, Non-parametric CCA \cite{michaeli2016nonparametric} or Multi-view Diffusion maps \cite{lindenbaum2020multi,salhov2020multi} learn the non-linear relations in reproducing kernel Hilbert spaces. These methods have several shortcomings: they are limited to a designed kernel, they require ${\cal{O}}(N^2)$ computations for training, and they have poor interpolation and extrapolation capabilities. Deep CCA \cite{DCCA} overcomes these limitations by learning two non-linear transformations parametrized using neural networks.
Canonical correlation models have been widely used in biology \cite{cca_bio}, neuroscience \cite{cca_brain}, medicine \cite{cca_eeg}, and engineering \cite{cca_fault}, for unsupervised or semi-supervised learning. By extracting meaningful dimensionality reduced representations, CCA improves downstream tasks such as clustering, classification, or manifold learning. One key limitation of these models is that they typically require more samples than features, i.e., $N>D^x, D^y$. However, if we have more variables than samples, the estimation based on the closed-form solution of the CCA problem (in Eq.~\ref{eq:cca}) breaks \cite{suo2017sparse}. Moreover, in high dimensional data, often some of the variables do not measure the phenomenon that is common to both modalities (therefore are not correlated) and thus should be omitted from the transformations. For these reasons, there has been a growing interest in studying sparse CCA models.
Sparse CCA (SCCA) models seek linear transformations which are based on a sparse subset of variables from the input modalities $\myvec{X}$ and $\myvec{Y}$. Sparsifying the feature space not only removes the degeneracies inherent to $N<D^x, D^y$, but also improves interpretability and prevents overfitting. To encourage sparsity of the canonical vectors $\myvec{a}$ and $\myvec{b}$, several authors~\cite{wiesel2008greedy,cai2019} propose an $\ell_0$ regularized variant of Eq.~\ref{eq:cca}. However, the schemes in \cite{wiesel2008greedy,cai2019} are greedy and therefore may lead to suboptimal solutions. As demonstrated by \cite{waaijenborg2008quantifying,parkhomenko2009sparse,witten2009penalized,hardoon2011sparse,suo2017sparse}, replacing the $\ell_0$ norm by the $\ell_1$ norm yields a differentiable objective that still leads to a sparse solution of Eq.~\ref{eq:cca}. However, these schemes are limited to linear transformations and may lead to shrinkage of the canonical vectors due to the $\ell_1$ regularizer.
CCA (TSKCCA) by \cite{yoshida2017sparse} and SCCA based on Hilbert-Schmidt Independence Criterion (SCCA-HSIC) by \cite{uurtio2018sparse}. However, these models suffer from the same limitations as KCCA and do not scale to a high dimensional regime.
This paper presents $\ell_0$-CCA, a simple yet effective method for learning correlated representation based on sparse subsets of the input variables by minimizing an $\ell_0$ regularized loss. Our $\ell_0$ regularization relies on a recently proposed Gaussian-based continuous relaxation of Bernoulli random variables, termed gates \cite{yamada2018feature}. The gates are applied to the input features to sparsify the canonical vectors. The parameters of the gates and of the model are trained jointly via gradient descent to maximize the correlation between the representations of $\myvec{X}$ and $\myvec{Y}$ while simultaneously selecting only the subsets of most correlated input features. By modeling the transformations using two neural networks, our method provides a natural solution to sparse non-linear correlation analysis. We apply the proposed method to synthetic data and demonstrate that our approach can improve the estimation of the canonical vectors compared with existing sparse CCA models. Then, we use the proposed scheme on several real datasets and demonstrate that it leads to more reliable and interpretable representations than other linear and non-linear data fusion schemes.
\begin{figure}[tb!]
\begin{center}
\vskip -0.in
\includegraphics[width=0.95\textwidth,height=0.3\textheight]{Figures/DGNN_arc.png}
\end{center}
\vskip -0.in
\caption{Illustration of $\ell_0$-DCCA. Data from two views propagate through stochastic gates (defined in Eq.~\ref{eq:stg}). The gates output is fed into two neural sub-nets that have a shared loss (see Eq. \ref{eq:dgcca}). We compute this shared loss based on the neural sub-nets outputs (with dimension $d=3$ in this example). Our shared loss combines a total correlation term with a differentiable regularization term which induces sparsity in the input variables.}
\vskip -0.in
\label{fig:dgnn_arc}
\end{figure}
\section{Sparse CCA}
The problem in Eq.~\ref{eq:cca} has a closed-form solution based on the eigendecomposition of $\myvec{C}^{-1}_{x}\myvec{C}_{xy}\myvec{C}^{-1}_{y}\myvec{C}_{yx}$ and $\myvec{C}^{-1}_{y}\myvec{C}_{yx}\myvec{C}^{-1}_{x}\myvec{C}_{xy}$, where $\myvec{C}_x,\myvec{C}_y$ are within view sample covariance matrices and $\myvec{C}_{xy},\myvec{C}_{yx}$ are cross-view sample covariance matrices. However, if $N$ is smaller than the number of input variables ($D^x$ or $D^y$), the sample covariance may be rank deficient, and the closed-form solution becomes meaningless. To overcome this limitation, we consider the problem of sparse CCA.
Sparse CCA deals with identifying a maximally correlated representation which is restricted to a linear combination of a sparse subset of the input variables in $\myvec{X}$ and $\myvec{Y}$. The problem can be formulated as the following regularized minimization
\begin{equation}\label{eq:scca}
\underset{\myvec{a,b}}{\operatorname{min}} \quad -\rho (\myvec{a}^{T}\myvec{X}, \myvec{b}^T\myvec{Y} ) +\lambda^x \|\myvec{a}\|_0 +\lambda^y \|\myvec{b}\|_0 ,
\end{equation}
where $\lambda^x$ and $\lambda^y$ are regularization parameters which control the sparsity of the input variables. If $\|\myvec{a}\|_0$ and $\|\myvec{b}\|_0$ are smaller than $N$, we can remove the degeneracy inherent to Eq.~\ref{eq:cca}, and identify meaningful correlated representations based on a sparse subset of input variables. Nonetheless, for a large number of variables, enumerating all the possible sparse solutions for $\myvec{a}$ and $\myvec{b}$ makes a brute-force solution of the problem stated in Eq.~\ref{eq:scca} intractable.
\section{Probabilistic Reformulation of Sparse CCA}\label{sec:prob}
The sparse CCA problem formulated in Eq.~\ref{eq:scca} becomes intractable for large $D^x$ or $D^y$, due to the $\ell_0$ constraint. Moreover, due to the discrete nature of the $\ell_0$ norm, the problem is not amenable to gradient-based optimization schemes. Fortunately, as demonstrated in sparse regression, probabilistic models such as the spike-and-slab \cite{george1993variable,kuo1998variable,zhou2009non,polson2019bayesian} provide a compelling alternative. More recently, differentiable probabilistic models such as \cite{louizos2017learning,yamada2018feature} were proposed for sparse supervised learning. Here, we adopt these ideas by rewriting the canonical vectors as $\myvec{\alpha}=\myvec{\theta}^x\odot \myvec{s}^x $ and $\myvec{\beta}=\myvec{\theta}^y\odot \myvec{s}^y$, where $\odot $ denotes the Hadamard product (element-wise multiplication), and $\myvec{\theta}^x\in\mathbb{R}^{{D}^x},\myvec{\theta}^y\in \mathbb{R}^{{D}^y}$. The vectors $\myvec{s}^x\in \{0,1\}^{D^x}$, $\myvec{s}^y\in \{0,1\}^{D^y}$ are Bernoulli random vectors with parameters $\myvec{\pi}^x=(\pi^x_1,...,\pi^x_{D^x})$ and $\myvec{\pi}^y=(\pi^y_1,...,\pi^y_{D^y})$. These Bernoulli variables act as gates and sparsify the coefficients of the canonical vectors. Now, based on the following theorem, the problem in Eq.~\ref{eq:scca} can be reformulated as an expectation minimization, parameterized by $\myvec{\pi}^x$ and $\myvec{\pi}^y$.
\begin{theorem}
The solution of the sparse CCA problem in Eq.~\ref{eq:scca} is equivalent to the solution of the following probabilistic problem.
\begin{equation}
\label{eq:pscca}
\underset{\myvec{\pi^x,\pi^y,\theta^x,\theta^y}}{\min} \mathbb{E} \big\lbrack - \rho (\myvec{\alpha}^T\myvec{X},\myvec{\beta}^T\myvec{Y} )+ \lambda^x\|\myvec{s}^x\|_0+\lambda^y\|\myvec{s}^y\|_0\big\rbrack,
\end{equation}
where $\myvec{\alpha}=\myvec{\theta}^x\odot \myvec{s}^x $ and $\myvec{\beta}=\myvec{\theta}^y\odot \myvec{s}^y$ and the expectation is taken with respect to the random Bernoulli variables $\myvec{s}^x$ and $\myvec{s}^y$ (which are parameterized by $\myvec{\pi}^x$ and $\myvec{\pi}^y$).
\end{theorem}
Note that the expected values of the $\ell_0$ norms can be expressed using the Bernoulli parameters as $\mathbb{E} \|\mathbf{s}^x\|_0=\sum \pi^x_i$, and $\mathbb{E} \|\mathbf{s}^y\|_0=\sum \pi^y_i$. The proof relies on the fact that the optimal solution to Eq.~\ref{eq:scca} is a valid solution to Eq.~\ref{eq:pscca}, and vice versa. The proof is presented in the Appendix, Section \ref{sec:proof}, and follows the same construction as the proof of Theorem 1 in \cite{yin2020probabilistic}.
\subsection{Continuous relaxation for Sparse CCA}
Due to the discrete nature of $\mathbf{s}^x$ and $\mathbf{s}^y$, differentiating the leading term in Eq.~\ref{eq:pscca} is not straightforward. Although solutions such as REINFORCE \cite{reinforce} or the straight-through estimator \cite{bengio2013representation} enable differentiating through discrete random variables, they still suffer from high variance. Furthermore, they require many Monte Carlo samples for effective training \cite{tucker2017rebar}. Recently, several authors \cite{maddison2016concrete,jang2016categorical,louizos2017learning} have demonstrated that using a continuous reparametrization of discrete random variables can reduce the variance of the gradient estimates. Here, following the reparametrization presented in \cite{yamada2018feature,lindenbaum2020let}, we use Gaussian-based relaxation for the Bernoulli random variables.
Each relaxed Bernoulli variable $\textnormal{z}_i$ is defined by drawing a sample from a centered Gaussian $\epsilon_i \sim N(0,\sigma^2)$, shifting it by $\mu_i$, and truncating the result using the following hard Sigmoid function
\begin{equation}\label{eq:stg}
\textnormal{z}_i=\max(0,\min(1,\mu_i+\epsilon_i)).
\end{equation}
Using these relaxed Bernoulli variables, we can define the gated canonical vectors as $\myvec{\alpha}=\myvec{\theta}^x\odot \mathbf{z}^x $ and $\myvec{\beta}=\myvec{\theta}^y\odot \mathbf{z}^y $.
We incorporate these vectors into the objective defined in Eq.~\ref{eq:pscca}, in which the $\ell_0$ regularization terms can be expressed as
$$\mathbb{E} \|{\mathbf{z}}^x \|_0= \sum_{i=1}^{D^x} \mathbb{P}({\textnormal{z}}^x_i \ge 0) = \sum_{i=1}^{D^x}\left(\frac{1}{2} - \frac{1}{2} \erf\left(-\frac{\mu^x_i}{\sqrt{2}\sigma}\right) \right), $$ where $\erf()$ is the Gaussian error function, and is defined similarly for $\mathbb{E} \|{\mathbf{z}}^y \|_0$.
To learn the model parameters $\myvec{\theta}^x,\myvec{\theta}^y$ and gate parameters $\myvec{\mu}^x,\myvec{\mu}^y$, we first draw realizations of the gates; then we update the parameters by applying gradient descent to minimize
$$\mathbb{E} \big\lbrack - \rho (\myvec{\alpha}^T\myvec{X},\myvec{\beta}^T\myvec{Y} )+ \lambda^x\|\mathbf{z}^x\|_0+\lambda^y\|\mathbf{z}^y\|_0\big\rbrack. $$ Post training, we remove the stochasticity from the gates and use all variables such that $\textnormal{z}^x_i=\max(0,\min(1,\mu^x_i))>0$ (defined similarly for $\mathbf{z}^y$).
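The gate relaxation and the $\erf$-based expression for the expected $\ell_0$ norm can be sketched as follows; the values of $\sigma$ and $\myvec{\mu}$ below are illustrative:

```python
import numpy as np
from math import erf, sqrt

sigma = 0.5
mu = np.array([-0.3, 0.1, 0.6, 1.2])   # illustrative gate parameters

def sample_gates(mu, rng):
    """Hard-sigmoid relaxation z_i = max(0, min(1, mu_i + eps_i))."""
    eps = rng.normal(0.0, sigma, size=mu.shape)
    return np.clip(mu + eps, 0.0, 1.0)

def expected_l0(mu):
    """E||z||_0 = sum_i (1/2 - 1/2 erf(-mu_i / (sqrt(2) sigma)))."""
    return sum(0.5 - 0.5 * erf(-m / (sqrt(2) * sigma)) for m in mu)

# Monte Carlo estimate of the probability that each gate is open
rng = np.random.default_rng(0)
open_freq = np.mean([sample_gates(mu, rng) > 0 for _ in range(20000)], axis=0)
assert abs(open_freq.sum() - expected_l0(mu)) < 0.05
```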
\subsection{Gate Initialization}
If the gates are initialized with $\mu_i=0.5$, they will approximate ``fair'' Bernoulli variables. This is a reasonable choice if no prior knowledge about the solution is available; however, we can utilize the closed-form solution of the CCA problem to derive a more suitable parameter initialization for the gate. Specifically, given the empirical covariance matrix $\myvec{C}_{xy}=\frac{\myvec{XY^T}}{(N-1)}$, we denote the thresholded covariance matrix by $\bar{\myvec{C}}_{xy}$, with values defined as follows
\[
(\bar{C}_{xy})_{ij}=
\begin{cases}
({{C}}_{xy})_{i,j},& \text{if } |({C}_{xy})_{i,j}|>\delta\\
0, & \text{otherwise},
\end{cases}
\]
where $\delta$ is the selected threshold value based on the desired sparsity for $\myvec{X}$ and $\myvec{Y}$, specifically, if we assume that $r$ percent of the values should be zeroed, then $\delta$ is set to be the $r$-th percentile of $|(\myvec{C}_{xy})|$. Then, we compute the leading singular vectors $\myvec{u}$ and $\myvec{v}$ of $\bar{\myvec{C}}_{xy}$. We further threshold the absolute values of these vectors using the same percentile used for $\bar{\myvec{C}}_{xy}$.
The initial values of the parameters of the gates are then defined by $\myvec{\mu}^x=\myvec{\bar{u}}+0.5$, and $\myvec{\mu}^y=\myvec{\bar{v}}+0.5$, where $\myvec{\bar{u}}$ and $\myvec{\bar{v}}$ are the thresholded versions of the absolute value of the singular vectors. Using this procedure, we increase the initial probability for all gates in the support of $\myvec{\bar{u}}$ and $\myvec{\bar{v}}$ based on the singular vectors of $\bar{\myvec{C}}_{xy}$.
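The initialization procedure can be sketched as follows; the data-generating process and the sparsity level $r$ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Dx, Dy, r = 200, 30, 25, 80         # r: percent of entries to zero out
Z = rng.standard_normal(N)             # shared latent signal (illustrative)
X = rng.standard_normal((Dx, N)); X[:5] += Z   # only first 5 features informative
Y = rng.standard_normal((Dy, N)); Y[:5] += Z
X -= X.mean(axis=1, keepdims=True)
Y -= Y.mean(axis=1, keepdims=True)
Cxy = X @ Y.T / (N - 1)
delta = np.percentile(np.abs(Cxy), r)
Cbar = np.where(np.abs(Cxy) > delta, Cxy, 0.0)   # thresholded covariance
U, s, Vt = np.linalg.svd(Cbar)
u = np.abs(U[:, 0]); v = np.abs(Vt[0])           # leading singular vectors
u = np.where(u > np.percentile(u, r), u, 0.0)    # threshold singular vectors
v = np.where(v > np.percentile(v, r), v, 0.0)
mu_x, mu_y = u + 0.5, v + 0.5                    # initial gate parameters
assert (mu_x[:5] > 0.5).sum() >= 4               # informative gates start open
```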
\subsection{$\ell_0$-Deep CCA}
\label{sec:l0_dcca}
To extend our model to non-linear function estimation, we can formulate the problem of sparse non-linear CCA by modeling the transformations using deep nets as in \cite{DCCA,wang2015deep}. We introduce two random Bernoulli relaxed vectors into the input layers of two neural networks trained in tandem to maximize the total correlation. Denoting the random gating vectors ${\mathbf{z}}^x$ and ${\mathbf{z}}^y$ for view ${\bm{X}}$ and ${\bm{Y}}$, respectively, our $\ell_0$-Deep CCA ($\ell_0$-DCCA) loss is defined by
\begin{equation}\label{eq:dgcca}
\mathbb{E} \big[ -\bar{\rho}(\myvec{f(\hat{\myvec{X}}}),\myvec{g(\myvec{\hat{ Y}}})) +\lambda^x \| {\mathbf{z}}^x\|_0 +\lambda^y \| {\mathbf{z}}^y\|_0 \big],
\end{equation}
where $\myvec{f(\hat{\myvec{X}}})=\myvec{f(\mathbf{z}^x\odot{{X}}|\theta^x)}\in \mathbb{R}^{d \times N}$, and $\myvec{g(\hat{\myvec{Y}}})=\myvec{g(\mathbf{z}^y\odot{{Y}}|\theta^y)}\in \mathbb{R}^{d \times N}$ are modeled as deep networks with model parameters $\myvec{\theta}=(\myvec{\theta^x},\myvec{\theta^y})$, and gate parameters $\myvec{\mu}=(\myvec{\mu^x},\myvec{\mu^y})$. The functions $\myvec{f}$ and $\myvec{g}$ embed the data into a $d$-dimensional space. The functional $\bar{\rho}(\myvec{f(\hat{\myvec{X}}}),\myvec{g(\myvec{\hat{ Y}}}))$ measures the total correlation between the two $d$ dimensional outputs of the deep nets, this is the sum over $d$ correlation values computed between pairs of coordinates. Exact details on the computation of this term appear in the following subsection. The regularization parameters $\lambda^x$, $\lambda^y$ control the sparsity of the input variables. The vectors ${\mathbf{z}}^x$ and ${\mathbf{z}}^y$ are Bernoulli relaxed vectors, with elements defined based on Eq.~\ref{eq:stg}.
Figure.~\ref{fig:dgnn_arc} highlights the proposed architecture. We first pass both observed modalities through the gates. Then, we feed these into view-specific neural sub-nets. Finally, we minimize the shared loss term in Eq.~\ref{eq:dgcca} by optimizing the parameters of the gates and the neural sub-nets.
\subsection{Algorithm details}\label{sec:alg_details}
Denoting the centered representations for $\myvec{X},\myvec{Y}$ by $\myvec{\Psi}^x, \myvec{\Psi}^y \in \mathbb{R}^{d\times N}$ (computed using the coupled neural sub-nets), respectively, the empirical covariance matrix between these representations can be expressed as $\widehat{\myvec{C}}_{xy}=\frac{1}{N-1}{\myvec{\Psi}}^x ({\myvec{\Psi}}^y)^T$. Using similar notation, we express the regularized empirical covariance matrices of $\myvec{X}$ and $\myvec{Y}$ as $\widehat{\myvec{C}}_{x}=\frac{1}{N-1}{\myvec{\Psi}}^x ({\myvec{\Psi}}^x)^T+\gamma \myvec{I}$ and $\widehat{\myvec{C}}_{y}=\frac{1}{N-1}{\myvec{\Psi}}^y ({\myvec{\Psi}}^y)^T+\gamma \myvec{I}$, where the matrix $\gamma \myvec{I}$ ($\gamma>0$) is added to stabilize the inversion of $\widehat{\myvec{C}}_{x}$ and $\widehat{\myvec{C}}_{y}$. The total correlation in Eq. \ref{eq:dgcca} (i.e. $\bar{\rho}(\myvec{f(\hat{\myvec{X}}}),\myvec{g(\myvec{\hat{ Y}}}))$) can be expressed using the trace of $\widehat{\myvec{C}}^{-1/2}_{y}\widehat{\myvec{C}}_{yx}\widehat{\myvec{C}}^{-1}_{x}\widehat{\myvec{C}}_{xy}\widehat{\myvec{C}}^{-1/2}_{y}$.
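As a sketch, the trace above can be computed with plain NumPy. By cyclicity of the trace, $\mathrm{tr}(\widehat{\myvec{C}}^{-1/2}_{y}\widehat{\myvec{C}}_{yx}\widehat{\myvec{C}}^{-1}_{x}\widehat{\myvec{C}}_{xy}\widehat{\myvec{C}}^{-1/2}_{y}) = \mathrm{tr}(\widehat{\myvec{C}}^{-1}_{y}\widehat{\myvec{C}}_{yx}\widehat{\myvec{C}}^{-1}_{x}\widehat{\myvec{C}}_{xy})$, which avoids the matrix square root; the value of $\gamma$ below is an illustrative choice.

```python
import numpy as np

def total_correlation(Psi_x, Psi_y, gamma=1e-3):
    """Trace form of the total correlation between two centered (d, N) representations.

    This trace equals the sum of squared regularized canonical correlations.
    """
    d, N = Psi_x.shape
    C_xy = Psi_x @ Psi_y.T / (N - 1)
    C_x = Psi_x @ Psi_x.T / (N - 1) + gamma * np.eye(d)
    C_y = Psi_y @ Psi_y.T / (N - 1) + gamma * np.eye(d)
    # tr(C_y^{-1/2} C_yx C_x^{-1} C_xy C_y^{-1/2}) = tr(C_y^{-1} C_yx C_x^{-1} C_xy)
    M = np.linalg.solve(C_y, C_xy.T) @ np.linalg.solve(C_x, C_xy)
    return float(np.trace(M))
```

For identical, well-conditioned views the value approaches $d$, the maximal total correlation.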
To learn the parameters of the gates $\myvec{\mu}$ and of the representations $\myvec{\theta}$, we apply full-batch gradient descent to the loss in Eq.~\ref{eq:dgcca}. Specifically, we use Monte Carlo sampling to estimate the left part of Eq.~\ref{eq:dgcca}. This is repeated for several steps (epochs), using one Monte Carlo sample between each gradient step as suggested by \cite{louizos2017learning} and \cite{yamada2018feature}; this worked well in our experiments. After training, we remove the stochastic part of the gates and use only the variables $i^x\in \{1,...,D^x\}$ and $i^y\in \{1,...,D^y\}$ such that ${\textnormal{z}}^x_{i^x} >0$ and ${\textnormal{z}}^y_{i^y} >0$. Empirically, we observe that our method also works well with stochastic gradient descent, as long as the batch size is not too small. Alternatively, for small batches we can use a variant of the total correlation loss as presented in \cite{wang2015stochastic} or \cite{chang2018scalable}. In Appendix Section \ref{sec:gcca}, we present pseudo-code for $\ell_0$-DCCA, and extend our formulation to a multi-modal setting (more than two views).
In Algorithm \ref{alg:pseudocode} we provide a pseudocode description of the proposed approach.
\begin{algorithm}[t!]
\caption{$\ell_0$-DCCA}
\label{alg:pseudocode}
\begin{algorithmic}
\STATE {\bfseries Input:} Coupled data, $\{\myvec{X},\myvec{Y}\}$, regularization parameters $\lambda^x,\lambda^y$, number of epochs $T$.
\STATE Initialize the gate parameters: $\mu^x_{i}=0.5$ for $i=1,\ldots,D^x$, and $\mu^y_{i}=0.5$ for $i=1,\ldots,D^y$.
\FOR{$t=1$ {\bfseries to} $T$}
\STATE Sample stochastic gate (STG) vectors $\mathbf{z}^x,\mathbf{z}^y$ as defined in \eqref{eq:stg}.
\STATE Apply the STG to the data ${\hat{\myvec{X}}}={\mathbf{z}^x\odot{{X}}}$ and ${\hat{\myvec{Y}}}={\mathbf{z}^y\odot{{Y}}}$.
\STATE Compute the loss $ L= \mathbb{E} \big[ -\bar{\rho}(\myvec{f(\hat{\myvec{X}}}),\myvec{g(\myvec{\hat{ Y}}})) +\lambda^x \| {\mathbf{z}}^x\|_0 +\lambda^y \| {\mathbf{z}}^y\|_0 \big]$, where the calculation of the total correlation $\bar{\rho}$ is based on Section \ref{sec:alg_details}.
\STATE Update $\myvec{\theta}^x= \myvec{\theta}^x - \gamma \nabla_{\myvec{\theta}^x} {{L}}, \quad \myvec{\theta}^y= \myvec{\theta}^y - \gamma \nabla_{\myvec{\theta}^y} {{L}}$,\\
$\quad \myvec{\mu}^x= \myvec{\mu}^x - \gamma \nabla_{\myvec{\mu}^x} {{L}}, \quad \myvec{\mu}^y= \myvec{\mu}^y - \gamma \nabla_{\myvec{\mu}^y} {{L}}$
\ENDFOR
\STATE Return the $s$ features with the largest values of $\mu_i$.
\end{algorithmic}
\end{algorithm}
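The gate-sampling and gating steps of the algorithm can be sketched as follows. The clipped-Gaussian relaxation and the value of $\sigma$ below are our assumptions, taken from the stochastic-gates literature; Eq.~\eqref{eq:stg} in the paper fixes the exact parametrization.

```python
import numpy as np

def sample_gates(mu, sigma=0.5, rng=None):
    """Sample a relaxed-Bernoulli gate vector z given gate means mu.

    A clipped-Gaussian relaxation (a common STG choice); the paper's
    Eq. (stg) may differ in detail.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.normal(0.0, sigma, size=np.shape(mu))
    return np.clip(mu + eps, 0.0, 1.0)   # hard clip to [0, 1]

def apply_gates(z, X):
    """Elementwise gating X_hat = z o X, feature-wise on a (D, N) data matrix."""
    return z[:, None] * X
```

Because the relaxation is continuous almost everywhere, gradients with respect to $\myvec{\mu}$ pass through the sampled gates, which is what makes the $\ell_0$-style penalty trainable.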
\section{Related Work}
The problem of $\ell_0$ sparse CCA was studied in \cite{wiesel2008greedy,cai2019}. However, both rely on a greedy heuristic which iteratively adds nonzero elements to the support of $\myvec{a}$ and $\myvec{b}$. We implemented \cite{wiesel2008greedy} and observed that it does not converge to the correct canonical vectors across our synthetic evaluation in Section \ref{sec_synt_exa}.
Alternatively, as proposed by \cite{suo2017sparse}, an $\ell_1$-regularized problem can be described as
\begin{align*}
\myvec{a,b} &= \operatorname{argmin} \big[- \mathrm{Cov}(\myvec{a^T X }, \myvec{b^T Y }) +\tau_1\| \myvec{a}\|_1+\tau_2\| \myvec{b}\|_1 \big], \\
&\text{subject to} \quad \| \myvec{a^T X} \|_2 \leq 1, \quad \|\myvec{b^T Y}\|_2 \leq 1,
\end{align*}
where $\tau_1$ and $\tau_2$ are regularization parameters. A number of variants have been proposed to this problem \cite{waaijenborg2008quantifying,parkhomenko2009sparse,witten2009penalized,hardoon2011sparse}, but they all suffer from parameter shrinkage and therefore lead to a solution which is less consistent with the correct canonical vectors (see Table \ref{tab:linear}).
\section{Experimental Results}
We validate the effectiveness of the proposed approach on a wide range of tasks. First, using synthetic data, we demonstrate that $\ell_0$-CCA correctly identifies the canonical vectors in a challenging regime of $N \ll D^x,D^y$. Next, using a coupled video dataset, we demonstrate that $\ell_0$-CCA can identify the common information from high dimensional data, and embed it into correlated, low-dimensional representations. Then, we use noisy images from MNIST and multi-channel seismic data to demonstrate that $\ell_0$-DCCA finds meaningful representations of the data even in a noisy regime. Finally, we use $\ell_0$-DCCA to improve cancer sub-type classification using high dimensional genetic measurements. We use NA to denote simulations which did not converge. In all experiments validation sets are used for early stopping of the training procedure and for tuning $\lambda=\lambda^x=\lambda^y$. We refer the reader to the Appendix for a complete description of the baselines, training procedure and parameters choice for all methods.
\subsection{Synthetic Example}
\label{sec_synt_exa}
To generate samples from $\myvec{X},\myvec{Y}\in \mathbb{R}^{D\times N}$, we follow the procedure described in \cite{suo2017sparse}, sampling data from the following distribution $$\begin{pmatrix} \myvec{X} \\ \myvec{Y} \end{pmatrix} \sim N \left(\begin{pmatrix} \myvec{0} \\ \myvec{0} \end{pmatrix},\begin{pmatrix} \myvec{\Sigma}_x & \myvec{\Sigma}_{xy} \\ \myvec{\Sigma}_{yx} & \myvec{\Sigma}_{y} \end{pmatrix} \right),$$ where $\myvec{\Sigma}_{xy}=\rho_0\myvec{\Sigma}_{x}(\myvec{\phi}\myvec{\eta}^T)\myvec{\Sigma}_{y}$. We study three cases for the covariance matrices $\myvec{\Sigma}=\myvec{\Sigma}_{x}=\myvec{\Sigma}_{y}$. \\
{\bf Model I.} Identity: $\myvec{\Sigma}=\myvec{I}_{D}$.\\
{\bf Model II.} Toeplitz: $\Sigma_{i,j}=\rho_0^{|i-j|},i,j=1,...,D$.\\
{\bf Model III.} Sparse inverse matrices: $\Sigma_{i,j}=\frac{\bar{\Sigma}_{i,j}}{\sqrt{\bar{\Sigma}_{i,i} \bar{\Sigma}_{j,j}}}$, \\where $\myvec{\bar{\Sigma}}=\myvec{{\Gamma}}^{-1}$, $\Gamma_{i,j}=\mathbbm{1}_{i=j}+0.5\,\mathbbm{1}_{|i-j|=1}+0.4\,\mathbbm{1}_{|i-j|=2}$.
For all three cases, the vectors $\myvec{\phi},\myvec{\eta}\in \mathbb{R}^{D},$ are sparse with $5$ nonzero elements and $\rho_0=0.9$. The indices of the active elements are chosen randomly with values equal to $1/\sqrt{5}$. In this setting, the canonical vectors $\myvec{a}$ and $\myvec{b}$ maximizing the correlation objective in Eq. \ref{eq:cca} are $\myvec{\phi}$ and $\myvec{\eta}$, respectively (see Proposition $1$ in \cite{suo2017sparse}).
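The sampling procedure for Model I (identity) and Model II (Toeplitz) can be sketched in NumPy as below; the support locations of $\myvec{\phi},\myvec{\eta}$ are drawn at random as in the text, and the function signature is ours.

```python
import numpy as np

def sample_coupled_views(N, D, rho0=0.9, k=5, model="I", seed=0):
    """Draw N coupled samples for Model I (identity) or Model II (Toeplitz).

    phi, eta are k-sparse unit vectors with nonzero entries 1/sqrt(k), and
    Sigma_xy = rho0 * Sigma (phi eta^T) Sigma, as in the text.
    """
    rng = np.random.default_rng(seed)
    if model == "I":
        S = np.eye(D)
    else:  # Model II: Toeplitz covariance rho0^{|i-j|}
        idx = np.arange(D)
        S = rho0 ** np.abs(idx[:, None] - idx[None, :])
    phi = np.zeros(D)
    eta = np.zeros(D)
    phi[rng.choice(D, size=k, replace=False)] = 1.0 / np.sqrt(k)
    eta[rng.choice(D, size=k, replace=False)] = 1.0 / np.sqrt(k)
    S_xy = rho0 * S @ np.outer(phi, eta) @ S
    Sigma = np.block([[S, S_xy], [S_xy.T, S]])
    Z = rng.multivariate_normal(np.zeros(2 * D), Sigma, size=N)  # (N, 2D)
    return Z[:, :D].T, Z[:, D:].T, phi, eta  # X, Y of shape (D, N)
```

For Model I, $\mathrm{corr}(\myvec{\phi}^T\myvec{x},\myvec{\eta}^T\myvec{y})=\rho_0$ by construction, which is the correlation the canonical vectors should recover.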
\begin{figure*}[htb!]
\begin{center}
\includegraphics[width=0.42\textwidth]{Figures/lin_combined_new.png}
\includegraphics[width=0.42\textwidth]{Figures/linear_tmp2.png}
\end{center}
\vskip -0. in
\caption{Left: Regularization path of $\ell_0$-CCA on data generated from the linear Model I (described in Section \ref{sec_synt_exa}). Values on the left $y$-axis (green) represent the number of active gates (in expectation). Values on the right $y$-axis (blue) represent the empirical correlation between the
estimated representations, i.e. $\hat{\rho}=\rho(\myvec{\hat{\phi}}^T \myvec{X}, \myvec{\hat{\eta}}^T\myvec{Y})$, where $\myvec{\hat{\phi}}$ and $\myvec{\hat{\eta}}$ are the estimated canonical vectors. Dashed lines indicate the correct number of active coefficients ($10$) and the true correlation $\rho_0$ ($0.9$). Note that for small values of $\lambda=\lambda^x=\lambda^y$, the model selects many variables and attains a higher correlation value; in this case, $\ell_0$-CCA suffers from overfitting. Right: True canonical vector $\myvec{\phi}$ along with the estimated vectors using $\ell_0$-CCA ($\myvec{\hat{\phi}}$) and CCA ($\myvec{\hat{a}}$). Due to the small sample size, CCA overfits and fails to identify the correct canonical vectors. }
\vskip -0. in
\label{fig:linear}
\end{figure*}
Using Model I, we first generate $N=400$ samples, with $D=800$, and estimate the canonical vectors based on CCA and $\ell_0$-CCA. In Fig.~\ref{fig:linear}, we present a regularization path of the proposed scheme. Specifically, we apply $\ell_0$-CCA to the data described above using various values of $\lambda=\lambda^x=\lambda^y$. We present the number of active gates (in expectation) along with the empirical correlation between the extracted representations, defined by $\hat{\rho}=\rho(\myvec{\hat{\phi}}^T \myvec{X}, \myvec{\hat{\eta}}^T\myvec{Y})$. As evident from the left panel, a wide range of $\lambda$ values leads to the correct number of active coefficients ($10$) and the correct correlation value ($\rho_0=0.9$). This is an indication of the robustness of the method to a wide range of $\lambda$ values. Next, in the right panel we present the values of $\myvec{\phi}$, the $\ell_0$-CCA estimate (using $\lambda=30$) of the canonical vector $\myvec{\hat{\phi}}$, and the CCA spectral-based estimate of the canonical vector $\myvec{\hat{a}}$. Due to the low number of samples, the CCA solution is inaccurate and not sparse, while the $\ell_0$-CCA solution correctly identifies the support of $\myvec{\phi}$.
\begin{table*}[htbp!]
\centering
\begin{tabular}{|l|l|l|l|}
\hline
&\multicolumn{3}{|c||}{Model I -- Identity covariance} \\
\hline
($N,D^x,D^y$) & (400,800,800) & (500,600,600) & (700,1200,1200) \\
\hline
PMA \cite{witten2009penalized} & (1.170,1.170) & (0.850,0.850) & (1.090,1.090) \\
IP-SCCA \cite{mai2019iterative} & (1.658,1.647) & (1.051,1.051) & (1.544,1.542) \\
SCCA-I \cite{hardoon2011sparse} & (1.602,1.140) & (1.143,0.282) & (1.160,0.181) \\
SCCA-II \cite{gao2017sparse} & (0.060,0.066) & (0.053,0.057) & (0.045,0.043) \\
mod-SCCA \cite{suo2017sparse} & (0.056,0.062) & (0.05,0.056) & (0.045,0.043) \\
$\ell_0$-CCA & (\textbf{0.003,0.009)} & (\textbf{0.002,0.002)} & (\textbf{0.001,0.002)} \\
\hline
&\multicolumn{3}{|c||}{Model II -- Toeplitz covariance} \\
\hline
PMA \cite{witten2009penalized} & (1.038,1.067) & (1.115,0.943) & (1.098,0.890) \\
IP-SCCA \cite{mai2019iterative} & (NA,NA) & (NA,NA) & (NA,NA) \\
SCCA-I \cite{hardoon2011sparse} & (1.382,1.357) & (1.351,1.299) & (1.219,1.186) \\
SCCA-II \cite{gao2017sparse} & (0.213,0.296) & (0.145,0.109) & (0.110,0.088) \\
mod-SCCA \cite{suo2017sparse} & (0.173,0.218) & (0.136,0.098) & (0.109,0.086) \\
$\ell_0$-CCA & (\textbf{0.101,0.079)} & (\textbf{0.098,0.072}) & (\bf{0.026,0.039}) \\
\hline
&\multicolumn{3}{|c||}{Model III -- Sparse inverse covariance} \\
\hline
PMA \cite{witten2009penalized} & (0.930,1.050) & (0.670,0.450) & (0.760,0.580) \\
IP-SCCA \cite{mai2019iterative} & (0.654,0.653) & (0.092,0.091) & (0.282,0.285) \\
SCCA-I \cite{hardoon2011sparse} & (1.375,0.966) & (1.041,0.502) & (0.985,0.364) \\
SCCA-II \cite{gao2017sparse} & (0.129,0.190) & (0.069,0.062) & (0.051,0.047) \\
mod-SCCA \cite{suo2017sparse} & ({\bf{0.092}},0.149) & (0.068,0.059) & (0.050,0.044) \\
$\ell_0$-CCA & (0.108,\bf{0.103}) & (\bf{0.026,0.036}) & (\bf{0.009,0.005}) \\
\hline
\end{tabular}%
\caption{Evaluating the estimation quality of the canonical vectors $\myvec{\phi}$ and $\myvec{\eta}$. Each pair indicates ($e_{\myvec{\phi}},e_{\myvec{\eta}}$), which are the estimation errors of $\myvec{\phi}$ and $\myvec{\eta}$, respectively. We compare the proposed $\ell_0$-CCA to other sparse CCA schemes, considering three types of covariance matrices for generating $\myvec{X}$ and $\myvec{Y}$, and different dimensions ($N,D^x,D^y$). The description of all three covariances appears in Section \ref{sec_synt_exa}. In each example, we highlight the smallest error obtained across all methods using boldface.}
\label{tab:linear}%
\end{table*}%
Next, we evaluate the estimation error of $\myvec{\phi}$ using $e_{\myvec{\phi}}=2(1-|\myvec{\phi}^T\hat{\myvec{\phi}}|)$; $e_{\myvec{\eta}}$ is defined similarly. In Table~\ref{tab:linear} we present the estimation errors of $\myvec{\phi}$ and $\myvec{\eta}$ (averaged over $100$ simulations) for Models I, II and III (identity, Toeplitz and sparse inverse covariance matrices). As baselines, we compare the performance to $5$ leading sparse CCA models. As evident from these experiments, $\ell_0$-CCA significantly outperforms all baselines in its ability to learn the correct canonical vectors. In Appendix Section \ref{sec:runtime}, we provide a runtime evaluation of the method for different values of $N$ and $D$.
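For reference, the error metric above is invariant to the sign of the estimate; a direct implementation (the explicit normalization is our addition, since the canonical vectors in this setup are unit-norm):

```python
import numpy as np

def estimation_error(phi, phi_hat):
    """e_phi = 2 * (1 - |phi^T phi_hat|) after normalizing both vectors."""
    phi = phi / np.linalg.norm(phi)
    phi_hat = phi_hat / np.linalg.norm(phi_hat)
    return 2.0 * (1.0 - abs(float(phi @ phi_hat)))
```

The error ranges from $0$ (perfect recovery up to sign) to $2$ (orthogonal estimate).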
\subsection{Multi View of Spinning Puppets}
As an illustrative example, we use a dataset collected by \cite{talmon} for multiview learning. The authors have generated two videos capturing rotations of $3$ desk puppets. One camera captures two puppets, while the other captures another two, where one puppet is shared across cameras. A snapshot from both cameras appears in the top row of Fig.~\ref{fig:bulldog_example}. All puppets are placed on spinning devices that rotate the dolls at different frequencies. In both videos, there is a shared underlying parameter, namely the rotation of the common bulldog.
We use a subset of the spinning puppets dataset, with $400$ images from each camera. Each image has $240 \times 320=76800$ pixels (using a grayscale version of the colored image); therefore, there are more features than samples, and a direct application of CCA would fail. We apply the proposed scheme using $\lambda^y=\lambda^x=50$, a linear activation and embedding dimension $d=2$. $\ell_0$-CCA converges to an embedding with a total correlation of $1.99$ using $372$ and $403$ pixels from views $\myvec{X}$ and $\myvec{Y}$, respectively. The active gates are presented in the right panels of Fig. \ref{fig:bulldog_example}. In this example, the active gates highlight subsets of pixels that overlap with the common spinning Bulldog. This indicates the ability of $\ell_0$-CCA to identify correlated variables in the regime of $N<D^x,D^y$.
\begin{figure*}[htb!]
\begin{center}
\vskip -0.in
\includegraphics[width=0.24\textwidth]{Figures/Buldog_new_large1.png}
\includegraphics[width=0.24\textwidth]{Figures/Buldog_new_large2.png}
\includegraphics[width=0.24\textwidth]{Figures/gates_large_1N.png}
\includegraphics[width=0.24\textwidth]{Figures/gates_large_2N.png}
\end{center}
\vskip -0. in
\caption{Left: two samples from the spinning puppets videos. The videos capture the rotation of $3$ desk puppets. Arrows indicate the spinning direction of each puppet. We use $\ell_0$-CCA to identify a sparse subset of pixels that are correlated across both videos. Right: the values of the gates for each video, $\mathbf{z}^x$ and $\mathbf{z}^y$. After training, the values of the gates are binary (i.e., $\{0,1\}$), with $372$ and $403$ active gates for the left and right videos, respectively. $\ell_0$-CCA correctly selects correlated subsets of pixels that highlight the common puppet (the Bulldog) in this example. }
\vskip -0. in
\label{fig:bulldog_example}
\end{figure*}
\begin{figure*}[htb!]
\begin{center}
\vskip -0. in
\includegraphics[width=0.4\textwidth]{Figures/puppets_embedding_large1.png}
\includegraphics[width=0.4\textwidth]{Figures/puppets_embedding_large2.png}
\end{center}
\vskip -0.0 in
\caption{The correlated $\ell_0$-CCA representations (visualized using \cite{lindenbaum2020multi}) of the Yoda+Bulldog video (left) and the Bulldog+Bunny video (right). We superimpose each embedding with $6$ images corresponding to $6$ points in the embedding spaces. The transformations are based on the information described by pixels whose gates are active (presented in Fig.~\ref{fig:bulldog_example}). The resulting embeddings are correlated with each other, with a total correlation of $\bar{\rho}=1.99$. The structure captured by the embeddings correctly represents the angular rotation of the Bulldog, which is the common latent parameter in this experiment.}
\vskip -0.0 in
\label{fig:bulldog_embedding}
\end{figure*}
In Fig.~\ref{fig:bulldog_embedding}, we present the coupled two-dimensional embeddings of both videos. The embeddings are correlated with each other and reveal the angular orientation of the Bulldog (which is the common latent parameter in this experiment). Note that adjacent images in the embedding are not necessarily contiguous in the original ambient space, because the Bunny and Yoda puppets are correctly gated out: they cannot contribute to the correlated embedding.
\begin{figure*}[htb!]
\begin{center}
\vskip -0.in
\includegraphics[width=0.4\textwidth]{Figures/MNISTV2.png}
\includegraphics[width=0.4\textwidth]{Figures/MNISTV1.png}
\end{center}
\vskip -0. in
\caption{Images from the coupled noisy MNIST dataset. In the bottom right of both panels, we present the active gates (white values within a green frame). There are $277$ and $258$ active gates for views I and II, respectively. }
\vskip -0. in
\label{fig:MNIST}
\end{figure*}
\subsection{Noisy MNIST}
MNIST~\cite{lecun2010mnist} consists of $28 \times 28$ gray-scale digit images. We use two noisy variants of MNIST as our coupled views. The first view is created by adding noise drawn uniformly from $[0,1]$ to all pixels. The second view is created by placing a random patch from a natural image in the background of the handwritten digits. Random samples from these modalities are presented in Fig.~\ref{fig:MNIST}. Each view consists of $62,000$ samples, of which we use $40,000$ for training, $12,000$ for testing and $10,000$ for validation.
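One way to construct the two coupled views is sketched below; clipping to $[0,1]$ and superimposing the background patch by addition are our choices, and the paper's exact construction may differ.

```python
import numpy as np

def make_noisy_mnist_views(digits, patches, rng=None):
    """Build two coupled views from (N, 28, 28) digit images in [0, 1].

    View I adds uniform [0, 1] pixel noise; view II places a natural-image
    patch (same shape, assumed pre-extracted) behind the digit.
    """
    rng = np.random.default_rng() if rng is None else rng
    view1 = np.clip(digits + rng.uniform(0.0, 1.0, size=digits.shape), 0.0, 1.0)
    view2 = np.clip(digits + patches, 0.0, 1.0)  # digit over a background patch
    return view1, view2
```

Both views share only the digit; the uniform noise and the background patches are view-specific nuisance structure, which is exactly what gating should suppress.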
We train $\ell_0$-DCCA to embed the data into a correlated $10$-dimensional space while selecting subsets of input pixels. Our model selects $277$ and $258$ pixels from the two modalities, respectively (see the bottom right corner of Fig.~\ref{fig:MNIST}). Next, we evaluate the quality of the learned embedding by applying $k$-means to the stacked embedding of both views. We run $k$-means (with $k=10$) using $20$ random initializations and record the run with the smallest sum of squared distances from the centroids. Given the cluster assignment, $k$-means clustering accuracy (KM) and mutual information (MI) are measured using the true labels.
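Computing the KM accuracy requires matching cluster indices to label indices; a simple reference implementation searches over all matchings (brute force over permutations, which is adequate for a small number of clusters; the Hungarian algorithm is the standard scalable alternative).

```python
import numpy as np
from itertools import permutations

def clustering_accuracy(y_true, y_pred):
    """KM accuracy: fraction correct under the best cluster-to-label matching."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1
    count = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        count[t, p] += 1            # contingency table: label x cluster
    best = max(sum(count[i, perm[i]] for i in range(k))
               for perm in permutations(range(k)))
    return best / len(y_true)
```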
Additionally, we train a Linear-SVM (SVM) model on our training and validation sets. SVM classification accuracy is measured on the held-out test set. The embedding provided by $\ell_0$-DCCA leads to higher classification and clustering accuracy compared with several linear and non-linear modality fusion models; the results appear in Table~\ref{tbl:mnist_sesmic_perf}. In the Appendix, we provide the implementation details and present experimental results demonstrating the performance of $\ell_0$-DCCA for various values of $\lambda=\lambda^x=\lambda^y$.
\subsection{Seismic Event Classification}
Next, we evaluate the method using a dataset of seismic events studied by \cite{Ofir,lindenbaum2016multi}. Here, we focus on $537$ explosions which are categorized into $3$ quarries. Each event is recorded using two directional channels facing east (E) and north (N); these comprise the coupled views for the correlation analysis. Following the analysis by \cite{Ofir}, the input features are sonogram representations of the seismic signal. Sonograms are time-frequency representations with bins equally tempered on a logarithmic scale. Each sonogram is a vector $\myvec{z} \in \mathbb{R}^{1157}$, comprising $89$ time bins and $13$ frequency bins. An example of sonograms from both channels appears in the top row of Fig. \ref{fig:sono}.
We create the noisy seismic data by adding sonograms computed from vehicle-noise recordings\footnote{https://bigsoundbank.com/search?q=car}. Examples of noisy sonograms appear in the middle row of Fig.~\ref{fig:sono}. We hold out $20 \%$ of the data as a validation set, and train $\ell_0$-DCCA to embed the data in $3$ dimensions. In Table~\ref{tbl:mnist_sesmic_perf} we present the MI, $k$-means and SVM accuracies computed based on the $\ell_0$-DCCA embedding. Furthermore, we compare the performance with several other baselines. Here, the proposed scheme improves performance in all $3$ metrics while identifying subsets of $17$ and $16$ features from channels E and N, respectively. The active gates are presented in the bottom row of Fig.~\ref{fig:sono}. Our results indicate that even in the presence of strong noise, $\ell_0$-DCCA correctly activates the gates in frequency bins that coincide with the energy stamps of the primary and secondary waves (P and S in the top left of Fig. \ref{fig:sono}).
\begin{figure*}[htb!]
\begin{center}
\vskip -0. in
\includegraphics[width=0.46\textwidth]{Figures/Clean_sono1.png}
\includegraphics[width=0.46\textwidth]{Figures/Clean_sono2.png}
\includegraphics[width=0.46\textwidth]{Figures/Noisy_sono1.png}
\includegraphics[width=0.46\textwidth]{Figures/Noisy_sono2.png}
\includegraphics[width=0.46\textwidth]{Figures/sono11-gatesN.png}
\includegraphics[width=0.46\textwidth]{Figures/sono22-gatesN.png}
\end{center}
\vskip -0. in
\caption{Top: Clean sample sonograms of an explosion based on the E and N channels (left and right, respectively). Arrows highlight the Primary (P) and Secondary (S) waves caused by the explosion. Middle: Noisy sonograms generated by adding sonograms of vehicle recordings. Bottom: the active gates for both channels. Note that the gates are active at time-frequency bins which correspond to the P and S waves (see top left figure). }
\vskip -0. in
\label{fig:sono}
\end{figure*}
\subsection{Cancer Sub-Type classification}
Accurate classification of cancer sub-types can be vital for extending the life span of patients via personalized treatments \cite{zhu2020application}. This task is challenging since the number of measured genes (features) is typically much larger than the number of observations. Here, we use multi-modal observations from the METABRIC data \cite{metabric} and attempt to find correlated representations to improve cancer sub-type classification. The data consists of $1,112$ breast cancer patients annotated with $10$ subtypes based on InClust \cite{dawson2013new}. We observe two modalities, namely RNA gene expression data and Copy Number Alteration (CNA) data. The dimensions of these modalities are $15,709$ and $47,127$, respectively. We compute the $10$-dimensional $\ell_0$-DCCA embedding (and all baseline embeddings) and demonstrate using $k$-means and SVM that the representation identified by $\ell_0$-DCCA leads to more accurate cancer sub-type classification (see Table \ref{tbl:mnist_sesmic_perf}).
\begin{table*}[]
\begin{adjustbox}{width=1 \columnwidth,center}
\begin{tabular}{|c||c|c|c||c|c|c||c|c|c|}
\hline
&\multicolumn{3}{|c||}{Noisy MNIST \cite{lecun2010mnist}}& \multicolumn{3}{c|}{Seismic \cite{Ofir}} & \multicolumn{3}{c|}{METABRIC \cite{metabric}}\\
\cline{2-10}
Method & MI & KM (\%) & SVM (\%) & MI & KM (\%) & SVM (\%) & MI & KM (\%) & SVM (\%) \\
\hline\hline
Raw Data & 0.130 & 16.6 & 86.6 & 0.001 & 35.7 & 41.3 & 0.58 & 36.5 & 63.8 \\
PCA \cite{pearson1901principal} & 0.130 & 16.6 & 89.3 & 0.002 & 38.8 & 41.3 & 0.08 & 19.2 & 23.6 \\
CCA \cite{CCA} & 1.290 & 66.4 & 75.8 & 0.003 & 38.1 & 40.4 & 0.20 & 20.7 & 24.1 \\
mod-SCCA \cite{suo2017sparse} & 0.342 & 23.9 & 63.1 & 0.610 & 71.7 & 86.9 & 0.12 & 21.0 & 26.0\\
SCCA-HSIC \cite{uurtio2018sparse} & NA & NA & NA & 0.003 & 38.7 & 49.5 & NA & NA & NA \\
KCCA \cite{bach2002kernel} & 0.943 & 50.2 & 85.3 & 0.006 & 38.4 & 92.5 & 0.35 & 30.8 & 61.3\\
grad-KCCA \cite{uurtio2019large} & NA & NA & NA & 0.005 & 40.9 & 41.4 & 0.50 & 32.6 & 47.8 \\
multiview-ICA \cite{richard2020modeling} & 1.750 & 88.0 & 90.0 & 0.748 & 90.1 & 94.2 & 0.74 & 44.7 & 62.8 \\
NCCA \cite{michaeli2016nonparametric} & 1.030 & 47.5 & 77.2 & 0.700 & 86.8 & 91.4 & 0.72 & 48.7 & 63.7 \\
DCCA \cite{DCCA} & 1.970 & 93.2 & 93.2 & 0.830 & 94.9 & 94.6 & 0.79 & 45.2 & 72.1 \\
DCCAE \cite{wang2015deep} & 1.940 & 91.8 & 94.0 & 0.92 & 97.0 & 97.0 & 0.68 & 42.9 & 69.0 \\
$\ell_0$-DCCA & \textbf{2.05} & \textbf{95.4} & \textbf{95.5} & \bf{0.97} & \bf{ 98.1} & \bf{97.2} & \bf{ 0.88} & \bf{ 50.3} & \bf{ 74.1} \\
\hline
\end{tabular}
\end{adjustbox}
\caption{Evaluation of the correlated embeddings extracted from the Noisy MNIST, Seismic, and METABRIC (cancer sub-type) datasets. The representation extracted by $\ell_0$-DCCA leads to higher clustering and classification accuracy compared with several baselines. }
\vskip -0. in
\label{tbl:mnist_sesmic_perf}
\end{table*}
\section{Conclusion}
This paper presents a method for learning sparse non-linear transformations that maximize the canonical correlations between two modalities. Our approach is realized by gating the input layers of two neural networks, which are trained to maximize the total correlation of their outputs. Input variables are gated using a regularization term which encourages sparsity. We further propose a novel scheme to initialize the gates based on a thresholded cross-covariance matrix. Our method can learn informative correlated representations even when the number of variables far exceeds the number of samples. Finally, we demonstrate that the proposed scheme outperforms existing algorithms for linear and non-linear canonical correlation analysis.
\newpage
\bibliographystyle{unsrt}
\section*{Introduction and notations}
In their ICM talk \cite{SU06}, Skinner and Urban outline a program to connect the order of vanishing of the $L$-functions of certain
polarized regular motives with the rank of the associated Bloch-Kato Selmer groups.
Their strategy is to deform the motives along certain $p$-adic eigenfamilies of Galois representations to construct the expected extensions.
They introduce the notion \emph{finite slope families} to encode the local properties of these $p$-adic families. One may view finite slope families as generalizations of the $p$-adic families arising from Coleman-Mazur eigencurve, which is formulated as weakly refined families by Bellaiche-Chenevier \cite{BC06}, in the sense that a finite slope family may have \emph{multiple} constant Hodge-Tate weights $k_1,\dots, k_r\in\mathbb{Z}$ and a Zariski dense subset of crystalline points which have prescribed crystalline periods with Hodge-Tate numbers $k_1,\dots,k_r$. Skinner and Urban then use the (unproved) analytic continuation of these crystalline periods to deduce that the expected extensions lie in the Selmer groups. Most recently, Harris, Lan, Taylor and Thorne construct Galois representations for (non-self dual) regular algebraic cuspidal automorphic representations of $\mathrm{GL}(n)$ over CM fields \cite{HLTT}. Their construction also involves $p$-adic deformations, and it turns out that these Galois representations live in certain $p$-adic families which generalize Skinner-Urban's finite slope families by replacing crystalline periods with semi-stable periods. Furthermore, to show that the Galois representations constructed by them are geometric as predicted by the philosophy of Langlands correspondence, one needs the analytic continuation of semi-stable periods for these families.
In this paper, we make use of the notion of finite slope families to encode the local properties of the $p$-adic families of Galois representations in \cite{HLTT}; this generalizes the original definition of Skinner-Urban. Our main result is then to prove the analytic continuation of semi-stable periods for such families. This will provide a necessary ingredient for Skinner-Urban's ICM program. Besides, we recently learned from Taylor that, in an ongoing project of Ila Varma, she will establish the aforementioned geometric properties of Galois representations based on the results of this paper and a previous one of ours \cite{L12}. We also note that Shah recently proved some results about interpolating Hodge-Tate and de Rham periods in families of $p$-adic Galois representations which may be applied to some related situations \cite{S}.
As the $p$-adic families over Coleman-Mazur eigencurve are special cases of finite slope families, our result generalizes the famous result of Kisin on the analytic continuation of crystalline periods for such families \cite{Ki03}. However, even in the crystalline case, our strategy and techniques are completely different from his. In fact, in Kisin's original work as well as the recent enhancement made by us \cite{L12}, one crucially relies on the fact that the families have only one constant Hodge-Tate weight, which is obviously not the case for general finite slope families. On the other hand, the work presented in this paper is inspired by the works of Berger and Colmez on families of de Rham representations \cite{BC07} and Kedlaya, Pottharst and Xiao on the cohomology of families of $\m$-modules \cite{KPX}. For a finite slope family, by adapting the techniques of \cite{KPX}, we first cut out a sub-family of $\m$-modules, which is expected to be generated by the desired semi-stable periods, after making a proper and surjective base change. We then develop a theory of families of Hodge-Tate and de Rham $\m$-modules with bounded Hodge-Tate weights. Finally we prove some analogues of Berger-Colmez for such families of $\m$-modules, and use them to conclude that the sub-family of $\m$-modules is semi-stable.
In the remainder of this introduction, we give more precise statements about our results.
We fix a finite extension $K$ of $\Q$.
Let $K_0$ be the maximal unramified sub-extension of $K$, and let $f=[K_0:\Q]$.
\begin{defn}\label{def:fs}
Let $X$ be a reduced and separated rigid analytic space over $K$. A \emph{finite slope family} of $p$-adic representations of dimension $d$ over $X$ is a locally free coherent $\OO_X$-module $V_X$ of rank $d$ equipped with a continuous $G_K$-action and together with the following data
\begin{enumerate}
\item[(1)]a positive integer $c$,
\item[(2)]a monic polynomial $Q(T)\in\OO_X(X)[T]$ of degree $m$ with unit constant term,
\item[(3)]a subset $Z$ of $X$ such that for all $z$ in $Z$, $V_z$ is semi-stable with non-positive Hodge-Tate weights, and for all $B\in\mathbb{Z}$ the set of $z$ in $Z$ such
that $V_z$ has $d-c$ Hodge-Tate weights less than $B$ is Zariski dense in $X$,
\item[(4)]for $z\in Z$, a $K_0\otimes_{\Q}k(z)$-direct summand $\mathcal{F}_{z}$ of $D^+_{\mathrm{st}}(V_z)$ which is free of rank $c$ and stable under $\varphi$ and $N$ such that $\varphi^f$ has characteristic polynomial $Q(z)(T)$ and all Hodge-Tate weights of $\mathcal{F}_z$ lie in $[-b,0]$ for some $b$ which is independent of $z$.
\end{enumerate}
\end{defn}
Our main results are as follows.
\begin{theorem}\label{thm:main}
Let $V_X$ be a finite slope family over $X$. Then there exists a surjective proper morphism $X'\ra X$ so that $(K\otimes_{K_0}D^+_{\mathrm{st}}(V_{X'}))^{Q(\varphi)=0}$ has a rank $c$ locally free coherent $K_0\otimes_{\Q}\OO_{X'}$-submodule which specializes to a rank $c$ free $K_0\otimes_{\Q}k(x)$-submodule in $\D_\rig^\dag(V_x)$ for any $x\in X'$. As a consequence, $D^+_{\mathrm{st}}(V_x)^{Q(\varphi)(x)=0}$ has a free $K_0\otimes_{\Q}k(x)$-submodule of rank $c$ for any $x\in X$.
\end{theorem}
The following corollary is clear.
\begin{cor}
Let $V_X$ be a finite slope family over $X$. If $V_z$ is crystalline for any $z\in Z$, then there exists a surjective proper morphism $X'\ra X$ so that $(K\otimes_{K_0}D^+_{\mathrm{crys}}(V_{X'}))^{Q(\varphi)=0}$ has a rank $c$ locally free coherent $K_0\otimes_{\Q}\OO_{X'}$-submodule which specializes to a rank $c$ free $K_0\otimes_{\Q}k(x)$-submodule in $\D_\rig^\dag(V_x)$ for any $x\in X'$. As a consequence, $D^+_{\mathrm{crys}}(V_x)^{Q(\varphi)(x)=0}$ has a free $K_0\otimes_{\Q}k(x)$-submodule of rank $c$ for any $x\in X$.
\end{cor}
\section*{Acknowledgements}
Thanks to Christopher Skinner, Richard Taylor and Ila Varma for useful communications. We especially thank Richard Taylor for suggesting a more concise definition of finite slope families.
\section{Families of $\m$-modules}
\begin{defn}
Let $A$ be a Banach algebra over $\Q$. For $s>0$, a \emph{$\varphi$-module} over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$ is a finite projective $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$-module $D_A^s$ equipped with an isomorphism
$$\varphi^*D_A^s\cong D_A^s\otimes_{\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A}\mathbf{B}^{\dag,ps}_{\rig,K}\widehat{\otimes}_{\Q}A.$$ A \emph{$\varphi$-module} $D_A$ over $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}A$ is the base change to $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}A$ of a $\varphi$-module $D_A^s$ over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$ for some $s>0$.
A \emph{$\m$-module} over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$ is a $\varphi$-module $D_A^s$ over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$ equipped with a commuting semilinear continuous action of $\Gamma$. A \emph{$\m$-module} $D_A$ over $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}A$ is the base change to $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}A$ of a $\m$-module $D_A^s$ over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$ for some $s>0$.
\end{defn}
\begin{notation}
For a morphism $A\ra B$ of Banach algebras over $\Q$, we denote by $D^s_B$ (resp. $D_B$) the base change of $D^s_A$ (resp. $D_A$) to $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}B$ (resp. $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}B$). In the case when $A=S$ is an affinoid algebra over $\Q$ and $x\in M(S)$, we denote $D^s_{k(x)}$ (resp. $D_{k(x)}$) by $D_x^s$ (resp. $D_x$) instead.
\end{notation}
Let $S$ be an affinoid algebra over $\Q$. Recall that for sufficiently large $s$, a vector bundle over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}S$ consists of one finite flat module $D_S^{[s_1,s_2]}$ over each ring $\mathbf{B}^{[s_1,s_2]}_K\widehat{\otimes}_{\Q}S$ with $s\leq s_1\leq s_2$, together with isomorphisms
\[
D_S^{[s_1,s_2]}\otimes_{\mathbf{B}^{[s_1,s_2]}_{K}\widehat{\otimes}_{\Q}S}
\mathbf{B}^{[s_1',s_2']}_{K}\widehat{\otimes}_{\Q}S\cong D_S^{[s'_1,s'_2]}
\]
for all $s\leq s_1'\leq s_1\leq s_2\leq s_2'$ satisfying the cocycle conditions. A $\varphi$-bundle over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}S$ is a vector bundle $(D_S^{[s_1,s_2]})_{s\leq s_1\leq s_2}$ over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}S$ equipped with isomorphisms $\varphi^*D_S^{[s_1,s_2]}\cong D_S^{[ps_1,ps_2]}$ for all $s/p\leq s_1\leq s_2$ satisfying the obvious compatibility conditions. When $s$ is sufficiently large, by \cite[Proposition 2.2.7]{KPX}, the natural functor from the category of $\varphi$-modules over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}S$ to the category of $\varphi$-bundles over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}S$ is an equivalence of categories. Note that by its definition, one can glue $\varphi$-bundles over separated rigid analytic spaces. Therefore this equivalence of categories enables us to introduce the following definition.
\begin{defn}
Let $X$ be a separated rigid analytic space over $\Q$. A family of $\m$-modules $D_X$ over $X$ is a compatible family of $\m$-modules $D_S$ over $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}S$ for each affinoid subdomain $M(S)$ of $X$.
\end{defn}
The following theorem follows from \cite{BC07}, \cite{KL10} and \cite{L12}.
\begin{theorem}
Let $A$ be a Banach algebra over $\Q$, and let $V_A$ be a finite locally free $A$-linear representation of $G_K$. Then there is a $\m$-module $\D_\rig^\dag(V_A)$ over $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}A$ functorially associated to $V_A$. The rule $V_A\mapsto \D_\rig^\dag(V_A)$ is fully faithful and exact, and it commutes with base change in $A$.
\end{theorem}
Let $A$ be a Banach algebra over $K_0$. Recall that one has a canonical decomposition
\[
A\otimes_{\Q}K_0\cong\prod_{\sigma\in\mathrm{Gal}(K_0/\Q)}A_{\sigma}
\]
where each $A_{\sigma}$ is the base change of $A$ by the automorphism $\sigma$. Furthermore, the $\mathrm{Gal}(K_0/\Q)$-action permutes all $A_\sigma$'s in the way that $\tau(A_\sigma)=A_{\tau\sigma}$. For any $a\in A^\times$, we equip $A\otimes_{\Q}{K_0}$ with a $\varphi\otimes 1$-semilinear action $\varphi$ by setting
\[
\varphi((x_1,x_{\varphi},\dots, x_{\varphi^{f-1}}))=(ax_{\varphi^{f-1}},x_1,\dots,x_{\varphi^{f-2}})
\]
where $x_{\sigma}\in A_{\sigma}$ for each $\sigma\in\mathrm{Gal}(K_0/\Q)$; we denote this $\varphi$-module by $D_a$. It is clear that the $\varphi$-action on $D_a$ satisfies $\varphi^f=1\otimes a$.
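For instance, when $f=2$, so that $A\otimes_{\Q}K_0\cong A_1\times A_{\varphi}$, the relation $\varphi^f=1\otimes a$ can be checked directly:

```latex
\varphi(x_1,x_\varphi)=(a\,x_\varphi,\;x_1),\qquad
\varphi^2(x_1,x_\varphi)=\varphi(a\,x_\varphi,\;x_1)
=(a\,x_1,\;a\,x_\varphi)=(1\otimes a)\cdot(x_1,x_\varphi).
```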
We fix a uniformizer $\pi_K$ of $K$.
\begin{defn} For any continuous character $\delta:K^\times\ra A^\times$, we associate to it a rank 1 $(\varphi,\Gamma)$-module $(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)(\delta)$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A$ as follows. If $\delta|_{\OO_K^\times}=1$, we set $(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)(\delta)=(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)
\otimes_{A\otimes_{\Q}{K_0}}D_{\delta(\pi_K)}$, where we equip $D_{\delta(\pi_K)}$ with the trivial $\Gamma$-action. For general $\delta$, we write $\delta=\delta'\delta''$ with $\delta'(\pi_K)=1$ and $\delta''|_{\OO_K^\times}=1$. We view $\delta'$ as an $A$-valued character of $W_K$, and extend it continuously to a character of $G_K$. We then set
\[
(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)(\delta)=\D_\rig^\dagger(\delta')
\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A}
(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)(\delta'').
\]
For a $(\varphi,\Gamma)$-module $D_A$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A$, we put $D_A(\delta)=D_A\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A}
(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)(\delta)$.
Let $X$ be a separated rigid analytic space over $\Q$. For a continuous character $\delta:K^\times\ra \OO(X)^\times$ and a family of $\m$-modules $D_X$ over $X$, we define the families of $\m$-modules $(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}\OO_X)(\delta)$ and $D_X(\delta)$ by gluing $(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S)(\delta)$ and $D_S(\delta)$ respectively over all affinoid subdomains $M(S)$.
\end{defn}
\section{Cohomology of families of $\m$-modules}
Let $\Delta_K$ be the $p$-torsion subgroup of $\Gamma$. Choose $\gamma_K\in\Gamma_K$ whose image in $\Gamma/\Delta_K$ is a topological generator.
\begin{defn}
Let $S$ be an affinoid algebra over $\Q$. For a $\m$-module $D_S$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$, we define the Herr complex $C^\bullet_{\varphi,\gamma_K}(D_S)$ of $D_S$, concentrated in degrees $[0,2]$, as follows:
\[
C^{\bullet}_{\varphi,\gamma_K}(D_S)=
[D_S^{\Delta_K}\stackrel{d_{1}}{\longrightarrow}D_S^{\Delta_K}\oplus D_S^{\Delta_K}
\stackrel{d_{2}}{\longrightarrow}D_S^{\Delta_K}]
\]
with $d_1(x) = ((\gamma_K - 1)x, (\varphi - 1)x)$ and $d_2(x,y) =
(\varphi - 1)x - (\gamma_K - 1)y$. One shows that this complex is independent of the choice of $\gamma_K$ up to canonical quasi-isomorphism. Its cohomology groups are denoted by $H^\bullet(D_S)$.
\end{defn}
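That $C^{\bullet}_{\varphi,\gamma_K}(D_S)$ is indeed a complex is immediate from the fact that $\varphi$ and $\gamma_K$ commute on $D_S^{\Delta_K}$:

```latex
d_2(d_1(x))=(\varphi-1)(\gamma_K-1)x-(\gamma_K-1)(\varphi-1)x
=\bigl[(\varphi-1),(\gamma_K-1)\bigr]x=0 .
```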
By the main result of \cite{KPX}, one knows that $H^i(D_S)$ is a finite $S$-module and that its formation commutes with flat base change in $S$. This enables us to define a cohomology theory for families of $\m$-modules over general rigid analytic spaces.
\begin{defn}
Let $X$ be a separated rigid analytic space over $\Q$, and let $D_X$ be a family of $\m$-modules over $X$. We define $H^\bullet(D_X)$ to be the cohomology of the complex
\[
C^{\bullet}_{\varphi,\gamma_K}(D_X)=
[D_X^{\Delta_K}\stackrel{d_{1}}{\longrightarrow}D_X^{\Delta_K}\oplus D_X^{\Delta_K}
\stackrel{d_{2}}{\longrightarrow}D_X^{\Delta_K}]
\]
with $d_1(x) = ((\gamma_K - 1)x, (\varphi - 1)x)$ and $d_2(x,y) =
(\varphi - 1)x - (\gamma_K - 1)y$. For each $0\leq i\leq 2$, $H^i(D_X)$ is therefore the coherent $\OO_X$-module obtained by gluing $H^i(D_S)$ for all affinoid subdomains $M(S)$ of $X$.
\end{defn}
As a consequence of the finiteness of the cohomology of families of $\m$-modules, a standard argument shows that locally on $X$, the complex $C^{\bullet}_{\varphi,\gamma_K}(D_X)$ is quasi-isomorphic to a complex of locally free coherent sheaves concentrated in degrees $[0,2]$. This enables us to flatten the cohomology of families of $\m$-modules by blowing up the base $X$. The following lemma is a rearrangement of some arguments in \cite[\S6]{KPX}.
\begin{lemma}\label{lem:modification}
Let $X$ be a reduced, separated and irreducible rigid analytic space over $K$, and let $D_X$ be a family of $\m$-modules of rank $d$ over $X$. Then the following are true.
\begin{enumerate}
\item[(1)]There exists a proper birational morphism $\pi:X'\ra X$ of reduced rigid analytic spaces over $K$ such that $H^0(D_{X'})$ is flat and $H^i(D_{X'})$ has Tor-dimension $\leq 1$ for each $i=1,2$.
\item[(2)]Suppose that $D'_{X}$ is a family of $\m$-modules over $X$ of rank $d'$, and that $\lambda: D'_X\ra D_X$ is a morphism between them such that for any $x\in X$, the image of $\lambda_x$ is a $\m$-submodule of rank $d$ of $D_x$. Then there exists a proper birational morphism $\pi:X'\ra X$ of reduced rigid analytic spaces over $K$ such that the cokernel of $\pi^*\lambda$ has Tor-dimension $\leq 1$.
\end{enumerate}
\end{lemma}
\begin{proof}
The upshot is that for a bounded complex $(C^\bullet,d^\bullet)$ of locally free coherent sheaves on $X$, there exists a blow up $\pi:X'\ra X$, which depends only on the quasi-isomorphism class of $(C^\bullet,d^\bullet)$, so that $\pi^*d^i$ has flat image for each $i$. Furthermore, the construction of $X'$ commutes with dominant base change in $X$ (see \cite[Corollary 6.2.5]{KPX} for more details). Thus for (1), we can construct $X'$ locally and then glue. For (2),
let $Q_X$ denote the cokernel of $\lambda$. For any $x\in X$, since the image of $\lambda_x$ is a $\m$-submodule of rank $d$, by \cite[Lemma 5.3.1]{L12}, we get that $Q_x$ is killed by a power of $t$. Now let $M(S)$ be an affinoid subdomain of $X$, and suppose that $D_S^s$ and $D'^s_S$ are defined for some suitable $s>0$. For $r>s$, set $Q_S^{[s,r]}=D^{[s,r]}_S/\lambda(D'^{[s,r]}_S)$. Since for any $x\in M(S)$, the fiber of $Q_S^{[s,r]}$ at $x$ is killed by a power of $t$, we get that $Q_S^{[s,r]}$ is killed by $t^k$ for some $k>0$. This yields that $Q_S^{[s,r]}$ is a finite $S$-module. Now we apply \cite[Corollary 6.2.5(1)]{KPX} to a finite presentation of $Q_S^{[s,ps]}$ to get a blow up $Y$ of $M(S)$ so that the pullback of $Q_S^{[s,ps]}$ has Tor-dimension $\leq1$. Using the fact $(\varphi^n)^*Q_S^{[s,ps]}\cong Q_S^{[p^ns,p^{n+1}s]}$, we see that $Y$ is also the blow up obtained by applying \cite[Corollary 6.2.5(1)]{KPX} to a finite presentation of $Q_S^{[s,p^{n+1}s]}$ for any positive integer $n$. It therefore follows that for any $r>s$, the pullback of $Q_S^{[s,r]}$ has Tor-dimension $\leq 1$; hence the pullback of $Q_S$ has Tor-dimension $\leq 1$. Furthermore, the blow ups for all affinoid subdomains $M(S)$ glue to form a blow up $X'$ of $X$ which satisfies the desired condition.
\end{proof}
\begin{lemma}\label{lem:ker-birational}
Let $X$ be a reduced, separated and irreducible rigid analytic space over $K$. Let $D'_X$ and $D_{X}$ be families of $\m$-modules over $X$ of ranks $d'$ and $d$ respectively, and let $\lambda: D'_X\ra D_X$ be a morphism between them. Suppose that for any $x\in X$, the image of $\lambda_x$ is a $\m$-submodule of rank $d$ of $D_x$. Then there exists a proper birational morphism $\pi:X'\ra X$ of reduced rigid analytic spaces over $K$ such that the kernel of $\pi^*\lambda$ is a family of $\m$-modules of rank $d'-d$ over $X'$, and there exists a Zariski open dense subset $U\subset X'$ such that $(\ker(\pi^*\lambda))_x=\ker((\pi^*\lambda)_x)$ for any $x\in U$.
\end{lemma}
\begin{proof}
Let $Q_X$ be the cokernel of $\lambda$. By the previous lemma, we may suppose, after replacing $X$ by a suitable proper birational modification, that $Q_X$ has Tor-dimension $\leq1$. Now let $P_X$ denote the kernel of $\lambda$. For any $x\in X$, the Tor spectral sequence computing the cohomology of the complex $[D'_{X}\stackrel{\lambda}{\longrightarrow}D_{X}]\otimes^{\mathbf{L}}_{\OO_{X}}k(x)$ gives rise to a short exact sequence
\[
0\longrightarrow P_x\longrightarrow\ker(\lambda_x)\longrightarrow\mathrm{Tor}_1(Q_X,k(x))\longrightarrow0.
\]
Since the image of $\lambda_x$ is a $\m$-module of rank $d$, $\ker(\lambda_x)$ is a $\m$-module of rank $d'-d$. Since $Q_X$ is killed by a power of $t$ locally on $X$, the last term of the exact sequence is killed by a power of $t$. This yields that $P_x$ is a $\m$-module of rank $d'-d$. We therefore conclude that $P_X$ is a family of $\m$-modules of rank $d'-d$ over $X$ by \cite[Corollary 2.1.9]{KPX}. Furthermore, since $Q_X$ has Tor-dimension $\leq1$, by \cite[Lemma 6.2.7]{KPX}, we get that the set of $x\in X$ for which $\mathrm{Tor}_1(Q_X,k(x))\neq0$ forms a nowhere dense Zariski closed subset of $X$; this yields the rest of the lemma.
\end{proof}
The following proposition modifies part of \cite[Theorem 6.2.9]{KPX}.
\begin{prop}\label{prop:cohomology}
Let $X$ be a reduced, separated and irreducible rigid analytic space over $K$. Let $D_X$ be a family of $\m$-modules of rank $d$ over $X$, and let $\delta:K^\times\ra \OO(X)^\times$ be a continuous character. Suppose that there exist a Zariski dense subset $Z$ of closed points of $X$ and a positive integer $c\leq d$ such that for every $z\in Z$, $H^0(D_z^{\vee}(\delta_z))$ is a
$c$-dimensional $k(z)$-vector space.
Then there exists a proper birational morphism $\pi:X'\ra X$ of reduced rigid analytic spaces over $K$ and a morphism $\lambda: D_{X'}\ra M_{X'}=(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}\OO_{X'})(\delta)\otimes_{\OO_{X'}}L$ of $\m$-modules, where $L$ is a locally free coherent $\OO_{X'}$-module of rank $c$ equipped with trivial $\varphi,\Gamma$-actions, such that
\begin{enumerate}
\item[(1)]for any $x\in X'$, the image of $\lambda_{x}$ is a $\m$-submodule of rank $c$;
\item[(2)]the kernel of $\lambda$ is a family of $\m$-modules of rank $d-c$ over $X'$, and there exists a Zariski open dense subset $U\subset X'$ such that $(\ker\lambda)_x=\ker(\lambda_x)$ for any $x\in U$.
\end{enumerate}
\end{prop}
\begin{proof}
Using Lemma \ref{lem:modification}, we first choose a proper birational morphism $\pi:X'\ra X$ with $X'$ reduced such that $N_{X'}=\pi^*(D^{\vee}_{X}(\delta))$ satisfies the conditions that $H^0(N_{X'})$ is flat and $H^i(N_{X'})$ has Tor-dimension $\leq 1$ for each $i=1,2$. Then for any $x\in X'$, the base change spectral sequence $E^{i,j}_2=\mathrm{Tor}_{-i}(H^j(N_{X'}),k(x))\Rightarrow H^{i+j}(N_x)$ gives a short exact sequence
\[
0\longrightarrow H^0(N_{X'})\otimes_{\OO_{X'}}k(x)\longrightarrow H^0(N_x)\longrightarrow \mathrm{Tor}_1(H^1(N_{X'}),k(x))\longrightarrow0.
\]
As $H^1(N_{X'})$ has Tor-dimension $\leq1$, by \cite[Lemma 6.2.7]{KPX}, the set of $x\in X'$ for which the last term of the above exact sequence does not vanish forms a nowhere dense Zariski closed subset $V$. For any $z\in\pi^{-1}(Z)\setminus V$, we deduce from the above exact sequence that $H^0(N_{X'})\otimes_{\OO_{X'}}k(z)$ is a $c$-dimensional $k(z)$-vector space. Since $H^0(N_{X'})$ is flat and $\pi^{-1}(Z)\setminus V$ is a Zariski dense subset of $X'$, we get that $H^0(N_{X'})$ is locally free of constant rank $c$. Let $L$ be its dual; then the natural map $(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}\OO_{X'})\otimes_{\OO_{X'}}H^0(N_{X'})\ra N_{X'}$
gives a map $\lambda:D_{X'}\ra M_{X'}=(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}\OO_{X'})(\delta)\otimes_{\OO_{X'}}L$. For any $x\in X'$, since the map $H^0(N_{X'})\otimes_{\OO_{X'}}k(x)\longrightarrow H^0(N_x)$ is injective, we get that the image of $\lambda_x$ is a rank $c$ $\m$-submodule of $M_x$. We thus conclude the proposition using the previous lemma.
\end{proof}
\section{Families of Hodge-Tate $\m$-modules}
From now on, let $S$ be a reduced affinoid algebra over $K$.
\begin{defn}\label{def:HT}
Let $D_S$ be a $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$. For any positive integer $n$, if $D_S^{r_n}$ is defined, we set
\[
\D^n_{\Sen}(D_S)=D_S^{r_n}\otimes_{\mathbf{B}^{\dag,r_n}_{\rig,K}\widehat{\otimes}_{\Q}S}K_n\otimes_{\Q}S.
\]
We call $D_S$ \emph{Hodge-Tate with Hodge-Tate weights in $[a,b]$} if
there exists a positive integer $n$ such that
the natural map
\begin{equation}\label{eq:def-HT}
(\oplus_{a\leq i\leq b}\D^n_\Sen(D_S(-i)))^\Gamma\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^n(D_S(-i))
\end{equation}
is an isomorphism. We denote by $h_{HT}(D_S)$ the smallest $n$ which satisfies this condition, and we define $D_{\mathrm{HT}}(D_S)=(\oplus_{a\leq i\leq b}\D^{h_{HT}(D_S)}_\Sen(D_S(-i)))^\Gamma$.
\end{defn}
\begin{lemma}\label{lem:HT-inv}
Let $D_S$ be a Hodge-Tate $\m$-module over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$. Then for any $n\geq h_{HT}(D_S)$, (\ref{eq:def-HT}) is an isomorphism and $\D_\Sen^n(D_S(-i))^{\Gamma}=\D_\Sen^{h_{HT}(D_S)}(D_S(-i))^{\Gamma}$ for any $i\in [a,b]$. As a consequence, we have
$(\oplus_{a\leq i\leq b}\D_\Sen^n(D_S(-i)))^\Gamma=D_{\mathrm{HT}}(D_S)$.
\end{lemma}
\begin{proof}
Tensoring with $(K_{n}\otimes_{\Q}S)[t,t^{-1}]$ on both sides of the map
\[
(\oplus_{a\leq i\leq b}\D^{h_{HT}(D_S)}_\Sen(D_S(-i)))^\Gamma\otimes_{K\otimes_{\Q}S}(K_{h_{HT}(D_S)}
\otimes_{\Q}S)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^{h_{HT}(D_S)}(D_S(-i)),
\]
we get that the natural map
\[
(\oplus_{a\leq i\leq b}\D^{h_{HT}(D_S)}_\Sen(D_S(-i)))^\Gamma\otimes_{K\otimes_{\Q}S}(K_{n}\otimes_{\Q}S)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^{n}(D_S(-i))
\]
is an isomorphism. Taking $\Gamma$-invariants on both sides, we get
\[
(\oplus_{a\leq i\leq b}\D^{h_{HT}(D_S)}_\Sen(D_S(-i)))^\Gamma=(\oplus_{a\leq i\leq b}\D^{n}_\Sen(D_S(-i)))^\Gamma.
\]
This yields the lemma.
\end{proof}
\begin{remark}
If $D_S$ is Hodge-Tate with weights in $[a,b]$, taking $\Gamma$-invariants on both sides of (\ref{eq:def-HT}), we see that $\D^n_\Sen(D_S(-i))^{\Gamma}=0$ for any $n\geq h_{HT}(D_S)$ and $i\notin [a,b]$.
\end{remark}
\begin{lemma}\label{lem:HT}
If $D_S$ is a Hodge-Tate $\m$-module over $\mathbf{B}^\dag_{\rig,K}\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$, then for any morphism $S\ra R$ of affinoid algebras over $K$, $D_R$ is Hodge-Tate with weights in $[a,b]$ and $h_{HT}(D_R)\leq h_{HT}(D_S)$. Furthermore, the natural map $\D^n_\Sen(D_S(i))^\Gamma\otimes_{S}R\ra\D^n_\Sen(D_R(i))^\Gamma$ is an isomorphism for any $i\in\mathbb{Z}$ and $n\geq h_{HT}(D_S)$. As a consequence, the natural map $D_{\mathrm{HT}}(D_S)\otimes_SR\ra D_{\mathrm{HT}}(D_R)$ is an isomorphism.
\end{lemma}
\begin{proof}
Let $n\geq h_{HT}(D_S)$. Tensoring with $R$ over $S$ on both sides of (\ref{eq:def-HT}), we get that the natural map
\[
(\oplus_{a\leq i\leq b}\D^n_\Sen(D_S(-i))^\Gamma\otimes_SR)\otimes_{K\otimes_{\Q}R}(K_n\otimes_{\Q}R)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^n(D_R(-i))
\]
is an isomorphism. Comparing $\Gamma$-invariants on both sides, we get that the natural map
\[
\D^n_\Sen(D_S(-i))^\Gamma\otimes_{S}R\ra\D^n_\Sen(D_R(-i))^\Gamma
\]
is an isomorphism for any $a\leq i\leq b$. This implies that the natural map
\[
(\oplus_{a\leq i\leq b}\D^n_\Sen(D_R(-i))^\Gamma)\otimes_{K\otimes_{\Q}R}(K_n\otimes_{\Q}R)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^n(D_R(-i))
\]
is an isomorphism. This proves the lemma.
\end{proof}
\begin{cor}
If $D_S$ is a Hodge-Tate $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$, then $D_{\mathrm{HT}}(D_S)$ is a locally free coherent $K\otimes_{\Q}S$-module of rank $d$.
\end{cor}
\begin{proof}
By the previous lemma, it suffices to treat the case that $S$ is a finite extension of $K$; this is clear from the isomorphism (\ref{eq:def-HT}).
\end{proof}
\begin{defn}
Let $X$ be a reduced and separated rigid analytic space over $K$, and let $D_X$ be a family of $\m$-modules of rank $d$ over $X$. We call $D_X$ \emph{Hodge-Tate} with weights in $[a,b]$ if for some (hence any) admissible cover $\{M(S_i)\}_{i\in I}$ of $X$, $D_{S_i}$ is Hodge-Tate with weights in $[a,b]$ for any $i\in I$. We define $D_{\mathrm{HT}}(D_X)$ to be the gluing of all $D_{\mathrm{HT}}(D_{S_i})$'s.
\end{defn}
\begin{lemma}\label{lem:HT-criterion}
Let $D_S$ be a $\m$-module over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$. Then $D_S$ is Hodge-Tate with weights in $[a,b]$ if and only if there exists a positive integer $n$ such that the natural map
\begin{equation}\label{eq:lem-HT}
\oplus_{a\leq i\leq b}\D_\Sen^n(D_S)^{\Gamma_n=\chi^i}\longrightarrow\D_\Sen^n(D_S)
\end{equation}
is an isomorphism. Furthermore, if this is the case, then (\ref{eq:def-HT}) holds for $n$.
\end{lemma}
\begin{proof}
For the ``$\Rightarrow$'' part, since (\ref{eq:def-HT}) is an isomorphism, we deduce that
\begin{equation}\label{eq:lem-HT-2}
\D_\Sen^n(D_S)=\oplus_{a\leq i\leq b}t^i\cdot\D^n_{\Sen}(D_S(-i))^\Gamma
\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S).
\end{equation}
Note that $t^i\cdot\D^n_{\Sen}(D_S(-i))^\Gamma\subseteq\D_\Sen^n(D_S)^{\Gamma_n=\chi^i}$. Hence (\ref{eq:lem-HT-2}) implies that (\ref{eq:lem-HT}) is surjective. On the other hand, it is clear that (\ref{eq:lem-HT}) is injective; hence it is an isomorphism. Conversely, suppose that (\ref{eq:lem-HT}) is an isomorphism. Note that
\[
\D_\Sen^n(D_S)^{\Gamma_n=\chi^i}=t^i\cdot\D_\Sen^n(D_S(-i))^{\Gamma_n}=(t^i\cdot\D_\Sen^n(D_S(-i))^\Gamma)
\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S),
\]
where the latter equality follows from \cite[Proposition 2.2.1]{BC07}. This implies that $D_S$ satisfies (\ref{eq:lem-HT-2}), yielding that $D_S$ satisfies (\ref{eq:def-HT}).
\end{proof}
\begin{prop}\label{prop:HT-family}
Let $D_S$ be a $\m$-module over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$. Suppose that there exists a Zariski dense subset $Z\subset M(S)$ such that $D_z$ is Hodge-Tate with weights in $[a,b]$ for any $z\in Z$ and $\sup_{z\in Z}\{h_{HT}(D_z)\}<\infty$. Then $D_S$ is Hodge-Tate with weights in $[a,b]$.
\end{prop}
\begin{proof}
Let $n\geq\sup_{z\in Z}\{h_{HT}(D_z)\}$ such that $D_S^n$ is defined, and let $\gamma$ be a topological generator of $\Gamma_n$. For any $a\leq i\leq b$, let $p_i$ denote the operator
$\prod_{a\leq j\leq b, j\neq i}\frac{\gamma-\chi^{j}(\gamma)}{\chi^i(\gamma)-\chi^j(\gamma)}$,
and let $M_i=p_i(\D_\Sen^n(D_S))$. It is clear that $p_i$ is the identity on $\D_{\Sen}^n(D_S)^{\Gamma_n=\chi^i}$; hence $\D_{\Sen}^n(D_S)^{\Gamma_n=\chi^i}\subseteq M_i$. On the other hand, for any $z\in Z$, since $D_z$ is Hodge-Tate with weights in $[a,b]$ and $h_{HT}(D_z)\leq n$, we deduce from Lemma \ref{lem:HT-criterion} that $p_i(\D_\Sen^n(D_z))=\D^n_\Sen(D_z)^{\Gamma_n=\chi^i}$. This implies that $M_i$ maps onto $\D^n_\Sen(D_z)^{\Gamma_n=\chi^i}$ under the specialization $\D_\Sen^n(D_S)\ra \D_\Sen^n(D_z)$. In particular, $(\gamma-\chi^i(\gamma))(M_i)$ vanishes at every $z\in Z$; since $Z$ is Zariski dense, it is zero, so that $M_i\subseteq\D^n_\Sen(D_S)^{\Gamma_n=\chi^i}$ and hence $M_i=\D^n_\Sen(D_S)^{\Gamma_n=\chi^i}$.
Let $M=\oplus_{a\leq i\leq b}M_i$. We claim that the natural inclusion $M\subseteq \D_\Sen^n(D_S)$ is an isomorphism. In fact, for any $z\in Z$, since $\D_\Sen^n(D_z)=\oplus_{a\leq i\leq b}\D_\Sen^n(D_z)^{\Gamma_n=\chi^i}$, we have that $M$ maps onto $\D_\Sen^n(D_z)$. Thus $\D^n_\Sen(D_S)/M$ vanishes at $z$. We therefore conclude $\D^n_\Sen(D_S)/M=0$ because $Z$ is Zariski dense. By Lemma \ref{lem:HT-criterion} and the claim, we conclude that $D_S$ is Hodge-Tate with weights in $[a,b]$.
\end{proof}
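The operator $p_i$ appearing in the proof above is the Lagrange interpolation idempotent attached to the eigenvalue $\chi^i(\gamma)$. In the simplest case of two weights $a=0$, $b=1$ it reads:

```latex
p_0=\frac{\gamma-\chi(\gamma)}{1-\chi(\gamma)},\qquad
p_1=\frac{\gamma-1}{\chi(\gamma)-1},
```

so that on an eigenvector $v$ with $\gamma v=\chi^{j}(\gamma)v$ one has $p_jv=v$ and $p_iv=0$ for $i\neq j$.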
\section{Families of de Rham $\m$-modules}
\begin{defn}\label{def:dR}
Let $D_S$ be a $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$. For any positive integer $n$, if $D_S^{r_n}$ is defined, we set
\[
\D^{+,n}_{\dif}(D_S)=D_S^{r_n}\otimes_{\mathbf{B}^{\dag,r_n}_{\rig,K}\widehat{\otimes}_{\Q}S}(K_n\otimes_{\Q}S)[[t]], \qquad
\D^{n}_{\dif}(D_S)=\D^{+,n}_{\dif}(D_S)[1/t].
\]
We equip $\D_\dif^n(D_S)$ with the filtration $\mathrm{Fil}^i\D_\dif^n(D_S)=t^i\D_\dif^{+,n}(D_S)$. We call $D_S$ \emph{de Rham with weights in $[a,b]$} if there exists a positive integer $n$ such that
\begin{enumerate}
\item[(1)]
the natural map
\begin{equation}\label{eq:def-de Rham}
\D^n_\dif(D_S)^\Gamma\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]][1/t]\longrightarrow \D_\dif^n(D_S)
\end{equation}
is an isomorphism;
\item[(2)]$\mathrm{Fil}^{-b}(\D^n_\dif(D_S)^\Gamma)=\D^n_\dif(D_S)^\Gamma$ and $\mathrm{Fil}^{-a+1}(\D^n_\dif(D_S)^\Gamma)=0$,
where $\mathrm{Fil}^{i}(\D^n_\dif(D_S)^\Gamma)$ is the induced filtration on $\D^n_\dif(D_S)^\Gamma$.
\end{enumerate}
We denote by $h_{dR}(D_S)$ the smallest $n$ which satisfies these conditions, and we define $D_{\mathrm{dR}}(D_S)=\D^{h_{dR}(D_S)}_\dif(D_S)^\Gamma$.
\end{defn}
\begin{lemma}\label{lem:dR-inv}
Let $D_S$ be a de Rham $\m$-module over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$. Then for any $n\geq h_{dR}(D_S)$, $\D^n_\dif(D_S)^\Gamma=D_{\mathrm{dR}}(D_S)$.
\end{lemma}
\begin{proof}
Tensoring with $(K_{n}\otimes_{\Q}S)[[t]][1/t]$ on both sides of the map
\[
\D^{h_{dR}(D_S)}_\dif(D_S)^\Gamma\otimes_{K\otimes_{\Q}S}(K_{h_{dR}(D_S)}\otimes_{\Q}S)[[t]][1/t]\longrightarrow \D_\dif^{h_{dR}(D_S)}(D_S),
\]
yielding that the map
\[
\D^{h_{dR}(D_S)}_\dif(D_S)^\Gamma\otimes_{K\otimes_{\Q}S}(K_{n}\otimes_{\Q}S)[[t]][1/t]\longrightarrow \D_\dif^{n}(D_S)
\]
is an isomorphism. Comparing $\Gamma$-invariants on both sides, we get the desired result.
\end{proof}
\begin{lemma}\label{lem:dR-HT}
If $D_S$ is a de Rham $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$, then $D_S$ is Hodge-Tate with weights in $[a,b]$ and $h_{HT}(D_S)\leq h_{dR}(D_S)$. Furthermore, we have $\mathrm{Gr}^iD_{\mathrm{dR}}(D_S)=\D_\Sen^n(D_S(i))^\Gamma$ under the identification $\mathrm{Gr}^i\D_\dif^n(D_S)=\D_\Sen^n(D_S(i))$ for any $n\geq h_{dR}(D_S)$.
\end{lemma}
\begin{proof}
Let $n\geq h_{dR}(D_S)$. Since (\ref{eq:def-de Rham}) is an isomorphism, we deduce that the natural map of graded modules
\begin{equation}\label{eq:lem-dR-HT}
\oplus_{i\in\mathbb{Z}}\mathrm{Gr}^iD_{\mathrm{dR}}(D_S)
\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^n(D_S(i))
\end{equation}
is surjective. On the other hand, since $t^i\cdot\mathrm{Gr}^{-i}D_{\mathrm{dR}}(D_S)\subset \D_{\Sen}^n(D_S)$, we have that the natural map
\[
\oplus_{a\leq i\leq b}t^i\cdot\mathrm{Gr}^{-i}D_{\mathrm{dR}}(D_S)\ra \D_\Sen^n(D_S)
\]
is injective. This implies that (\ref{eq:lem-dR-HT}) is injective; hence it is an isomorphism. Comparing the $\Gamma$-invariants on both sides, we get $\mathrm{Gr}^iD_{\mathrm{dR}}(D_S)=\D_\Sen^n(D_S(i))^\Gamma$ for each $i\in\mathbb{Z}$. This proves the lemma.
\end{proof}
\begin{lemma}\label{lem:dR}
If $D_S$ is a de Rham $\m$-module over $\mathbf{B}^\dag_{\rig,K}\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$, then for any morphism $S\ra R$ of affinoid algebras over $K$, $D_R$ is de Rham with weights in $[a,b]$ and $h_{dR}(D_R)\leq h_{dR}(D_S)$. Furthermore, the natural maps $\mathrm{Fil}^i D_{\mathrm{dR}}(D_S)\otimes_{S}R\ra \mathrm{Fil}^iD_{\mathrm{dR}}(D_R)$ are isomorphisms for all $i\in \mathbb{Z}$.
\end{lemma}
\begin{proof}
Let $n\geq h_{dR}(D_S)$. Tensoring with $(K_n\otimes_{\Q}R)[[t]][1/t]$ on both sides of (\ref{eq:def-de Rham}), we get that the natural map
\begin{equation}\label{eq:lem-dR}
(\D^n_\dif(D_S)^\Gamma\otimes_S R)\otimes_{K\otimes_{\Q}R}(K_n\otimes_{\Q}R)[[t]][1/t]\longrightarrow \D_\dif^n(D_R)
\end{equation}
is an isomorphism. Comparing $\Gamma$-invariants on both sides of (\ref{eq:lem-dR}), we get that the natural map $\D^n_\dif(D_S)^\Gamma\otimes_{S}R\ra\D^n_\dif(D_R)^\Gamma$
is an isomorphism; hence $D_R$ is de Rham. Then by Lemmas \ref{lem:HT} and \ref{lem:dR-HT}, we deduce that the natural map
$\mathrm{Gr}^i(D_{\mathrm{dR}}(D_S))\otimes_SR\ra\mathrm{Gr}^i(D_{\mathrm{dR}}(D_R))$ is an isomorphism.
This implies the rest of the lemma.
\end{proof}
\begin{cor}
If $D_S$ is a de Rham $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q} S$, then $D_{\mathrm{dR}}(D_S)$ is a locally free coherent $K\otimes_{\Q}S$-module of rank $d$.
\end{cor}
\begin{proof}
We first note that for each $i\in\mathbb{Z}$, $\mathrm{Gr}^i(D_{\mathrm{dR}}(D_S))$, which is isomorphic to $\D_\Sen^n(D_S(i))^\Gamma$ by Lemma \ref{lem:dR-HT}, is a coherent $K\otimes_{\Q}S$-module. We then deduce that $D_{\mathrm{dR}}(D_S)$ is a coherent $K\otimes_{\Q}S$-module. Using Lemma \ref{lem:dR}, it then suffices to treat the case that $S$ is a finite extension of $K$; this follows easily from the isomorphism (\ref{eq:def-de Rham}).
\end{proof}
\begin{defn}
Let $X$ be a reduced and separated rigid analytic space over $K$, and let $D_X$ be a family of $\m$-modules of rank $d$ over $X$. We call $D_X$ \emph{de Rham} with weights in $[a,b]$ if for some (hence any) admissible cover $\{M(S_i)\}_{i\in I}$ of $X$, $D_{S_i}$ is de Rham with weights in $[a,b]$ for any $i\in I$. We define $D_{\mathrm{dR}}(D_X)$ to be the gluing of all $D_{\mathrm{dR}}(D_{S_i})$'s.
\end{defn}
\begin{lemma}\label{lem:dR-weight}
If $D_S$ is a de Rham $\m$-module over $\mathbf{B}^\dag_{\rig,K}\widehat{\otimes}_{\Q}S$ of rank $d$ with weights in $[a,b]$, then $t^{-a}\D_\dif^{+,n}(D_S)\subset D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]]\subset t^{-b}\D_\dif^{+,n}(D_S)$ for any $n\geq h_{dR}(D_S)$.
\end{lemma}
\begin{proof}
Since $\mathrm{Fil}^{-b}D_{\mathrm{dR}}(D_S)=D_{\mathrm{dR}}(D_S)$, we get $D_{\mathrm{dR}}(D_S)\subset t^{-b}\D^{+,n}_\dif(D_S)$; hence $D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]]\subset t^{-b}\D_\dif^{+,n}(D_S)$. By the proof of Lemma \ref{lem:dR-HT}, we know that the natural map (\ref{eq:lem-dR-HT}) is an isomorphism of graded modules. By the facts that $\mathrm{Gr}^iD_{\mathrm{dR}}(D_S)=0$ for $i\geq -a+1$ and $\mathrm{Fil}^i\D_\dif^n(D_S)$ is $t$-adically complete, we thus deduce that $t^{-a}\D_\dif^{+,n}(D_S)\subset D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]]$.
\end{proof}
\begin{lemma}
Let $D_S$ be a Hodge-Tate $\m$-module over $\mathbf{B}^\dag_{\rig,K}\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$. Then for any $k\geq b-a+1$, $i\in[a,b]$, $n\geq h_{HT}(D_S)$ and $\gamma\in\Gamma_n$, the map $\gamma-\chi^i(\gamma):t^k\D_\dif^{+,n}(D_S)\ra t^k\D_\dif^{+,n}(D_S)$ is bijective.
\end{lemma}
\begin{proof}
Since $\D_\dif^{+,n}(D_S)$ is $t$-adically complete, it suffices to show that
\[
\gamma-\chi^i(\gamma):t^k\D_\dif^{+,n}(D_S)/t^{k+1}\D_\dif^{+,n}(D_S)\ra t^k\D_\dif^{+,n}(D_S)/t^{k+1}\D_\dif^{+,n}(D_S)
\]
is bijective for any $k\geq b-a+1$. Note that $t^k\D_\dif^{+,n}(D_S)/t^{k+1}\D_\dif^{+,n}(D_S)$ is isomorphic to $\D_\Sen^n(D_S(k))$ as a $\Gamma$-module, and that $\D^n_\Sen(D_S(k))=\oplus_{a\leq j\leq b}(\D^n_\Sen(D_S(k)))^{\Gamma_n=\chi^{j+k}}$ by Lemma \ref{lem:HT-criterion}. Since $j+k\geq b+1$ for all $j\in [a,b]$, we deduce that $\gamma-\chi^i(\gamma)$ is bijective on $\D^n_\Sen(D_S(k))$.
\end{proof}
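The reduction to the graded pieces in the proof above is the usual successive approximation argument: given $y\in t^k\D_\dif^{+,n}(D_S)$, one builds a preimage under $\gamma-\chi^i(\gamma)$ as a $t$-adic limit, constructing $x_m$ with

```latex
x_0=0,\qquad y-(\gamma-\chi^i(\gamma))x_m\in t^{k+m}\D_\dif^{+,n}(D_S),
```

where each step corrects $x_m$ by a solution on the graded quotient $t^{k+m}\D_\dif^{+,n}(D_S)/t^{k+m+1}\D_\dif^{+,n}(D_S)$; by $t$-adic completeness, $x=\lim_m x_m$ exists and satisfies $(\gamma-\chi^i(\gamma))x=y$. Injectivity follows similarly, since a nonzero kernel element would have a nonzero image in some graded quotient, contradicting bijectivity there.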
\begin{lemma}\label{lem:dR-criterion}
Let $D_S$ be a Hodge-Tate $\m$-module over $\mathbf{B}^\dag_{\rig,K}\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$. Then $D_S$ is de Rham if and only if there exists a positive integer $n\geq h_{HT}(D_S)$ such that $\prod_{i=a}^{2b-a}(\gamma-\chi^i(\gamma))\D_\dif^{+,n}(D_S)\subset t^{b-a+1}\D_\dif^{+,n}(D_S)$. Furthermore, if this is the case, then (\ref{eq:def-de Rham}) holds for $n$.
\end{lemma}
\begin{proof}
Suppose that $D_S$ is de Rham. Let $n\geq h_{dR}(D_S)$, and put
\[
N=D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]].
\]
Since $D_S$ has weights in $[a,b]$, by Lemma \ref{lem:dR-weight}, we have $t^{-a}\D_\dif^{+,n}(D_S)\subset N\subset t^{-b}\D_\dif^{+,n}(D_S)$. On the other hand, by the construction of $N$, it is clear that $(\gamma-1)N\subset tN$. It therefore follows that
\[
\Pi_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_S)\subset
\Pi_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)(t^aN)\subset t^{2b-a+1}N\subset t^{b-a+1}\D_\dif^{+,n}(D_S).
\]
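To make the middle inclusion explicit: since $\gamma(t)=\chi(\gamma)t$ and $(\gamma-1)N\subset tN$, for any $n\in N$ and any integer $k$ one computes
\[
(\gamma-\chi(\gamma)^k)(t^kn)=\chi(\gamma)^kt^k(\gamma-1)(n)\in t^{k+1}N,
\]
while every factor $\gamma-\chi(\gamma)^i$ preserves $t^kN$. Since the factors commute, applying $\gamma-\chi(\gamma)^a,\gamma-\chi(\gamma)^{a+1},\dots,\gamma-\chi(\gamma)^{2b-a}$ successively to $t^aN$ raises the power of $t$ by one at each step, which yields the exponent $2b-a+1$.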
Now suppose $\Pi_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_S)\subset t^{b-a+1}\D_\dif^{+,n}(D_S)$ for some $n\geq h_{HT}(D_S)$. We claim that for any $j\in[a,b]$ and $v\in(\D^n_\Sen(D_S))^{\Gamma_n=\chi^j}$, we can lift $v$ to an element in $(\D_{\dif}^{+,n}(D_S))^{\Gamma_n=\chi^j}$. In fact, let $\tilde{v}$ be any lift of $v$ in $\D_\dif^{+,n}(D_S)$, and let $\tilde{b}=\prod_{a\leq i\leq 2b-a, i\neq j}\frac{\gamma-\chi^i(\gamma)}{\chi^j(\gamma)-\chi^i(\gamma)}\tilde{v}$ where $\gamma$ is a topological generator of $\Gamma_n$; it is clear that $\tilde{b}$ is also a lift of $v$. Furthermore, by assumption, we have $(\gamma-\chi^j(\gamma))(\tilde{b})\in \Pi_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_S)\subset t^{b-a+1}\D^{+,n}_\dif(D_S)$. By the previous lemma, we choose some $\tilde{c}\in t^{b-a+1}\D^{+,n}_\dif(D_S)$ satisfying $(\gamma-\chi^j(\gamma))(\tilde{b})=(\gamma-\chi^j(\gamma))(\tilde{c})$. It is then clear that $\tilde{b}-\tilde{c}$ is a desired lift of $v$. Since $\D^n_\Sen(D_S)=\oplus_{a\leq i\leq b}(\D^n_\Sen(D_S))^{\Gamma_n=\chi^i}$, we have that $(\D^n_\Sen(D_S))^{\Gamma_n=\chi^i}$ is locally free for each $i\in[a,b]$. By shrinking $M(S)$, we may further suppose that each $(\D^n_\Sen(D_S))^{\Gamma_n=\chi^i}$ is free. We then deduce from the claim that there exists a free $K_n\otimes_{\Q}S$-module $M\subseteq(\D_\dif^{n}(D_S))^{\Gamma_n}$ such that the natural map
\[
M\otimes_{K_n\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]][1/t]\longrightarrow \D_\dif^n(D_S)
\]
is an isomorphism. It follows that the natural map
\[
M^\Gamma\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]][1/t]\longrightarrow \D_\dif^n(D_S)
\]
is an isomorphism because $M=M^{\Gamma}\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)$ by \cite[Proposition 2.2.1]{BC07}. Taking $\Gamma$-invariants on both sides, we get $M^{\Gamma}=(\D_\dif^n(D_S))^\Gamma$. This implies that $D_S$ is de Rham.
\end{proof}
\begin{prop}\label{prop:dR-family}
Let $D_S$ be a $\m$-module over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$. Suppose that there exists a Zariski dense subset $Z\subset M(S)$ such that $D_z$ is de Rham with weights in $[a,b]$ for any $z\in Z$ and $\sup_{z\in Z}\{h_{dR}(D_z)\}<\infty$. Then $D_S$ is de Rham with weights in $[a,b]$.
\end{prop}
\begin{proof}
By Proposition \ref{prop:HT-family}, we first have that $D_S$ is Hodge-Tate with weights in $[a,b]$. Let $n\geq \max\{h_{HT}(D_S),\sup_{z\in Z}\{h_{dR}(D_z)\}\}$. By Lemma \ref{lem:dR-criterion}, we have
\[
\Pi_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_z)\subset t^{b-a+1}\D_\dif^{+,n}(D_z)
\]
for any $z\in Z$. This implies $\Pi_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_S)\subset t^{b-a+1}\D_\dif^{+,n}(D_S)$ because $Z$ is Zariski dense. Hence $D_S$ is de Rham by Lemma \ref{lem:dR-criterion} again.
\end{proof}
\section{$p$-adic local monodromy for families of de Rham $\m$-modules}
The main goal of this section is to prove the $p$-adic local monodromy theorem for families of de Rham $\m$-modules. The proof is similar to Berger-Colmez's proof of the $p$-adic local monodromy theorem for families of de Rham representations \cite[\S6]{BC07}. Indeed, with the results we have proved in \S2 and \S3, the proof from [\emph{loc.cit.}] goes through verbatim. We therefore often sketch our proof and refer the reader to [\emph{loc.cit.}] for more details.
We fix $E$ to be a finite extension of the products of the complete residue fields of the Shilov boundary of $M(S)$.
\begin{prop}\label{prop:N_dR}
Let $D_S$ be a de Rham $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$. For any $s>0$ such that $n(s)\geq h_{dR}(D_S)$, let
\[
N_s(D_E)=\{y\in t^{-b}D^{s}_E\hspace{2mm}\text{such that}\hspace{2mm}\iota_n(y)\in D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]\hspace{1mm}\text{for each}\hspace{2mm}n\geq n(s)\}.
\]
Then the following are true.
\begin{enumerate}
\item[(1)]The $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q} E$-module $N_s(D_E)$ is free of rank $d$ and stable under $\Gamma$.
\item[(2)]We have
$N_s(D_E)\otimes_{\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}E,\iota_n}(K_n\otimes_{\Q}E)[[t]]
=D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]$ for each $n\geq n(s)$.
\end{enumerate}
Furthermore, if we put $N_{\mathrm{dR}}(D_E)=N_s(D_E)\otimes_{\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}E}
\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}E$, then the following are true.
\begin{enumerate}
\item[(3)]The $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q} E$-module $N_{\mathrm{dR}}(D_E)$ is free of rank $d$, stable under $\Gamma$, and independent of the choice of $s$.
\item[(4)]We have $\varphi^*(N_{\mathrm{dR}}(D_E))=N_{\mathrm{dR}}(D_E)$ and $\nabla(N_{\mathrm{dR}}(D_E))\subset t\cdot N_{\mathrm{dR}}(D_E)$.
\end{enumerate}
\end{prop}
\begin{proof}
Since the localization map $\iota_n$ is continuous, we first have that $N_s(D_E)$ is a closed $\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E$-submodule of $t^{-b}D_E^{s}$. It follows that
$N_s(D_E)$ is a finite locally free $\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E$-module because $\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E$ is isomorphic to a finite product of Robba rings. On the other hand, by
Lemma \ref{lem:dR-weight}, we get that $t^{-a}D_E^{s}$ is contained in $N_s(D_E)$. We thus conclude that $N_s(D_E)$ is a free $\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E$-module of rank $d$. To show (2), we proceed as in the proof of \cite[Proposition 6.1.1]{BC07}. For any $y\in D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]$ and $w\geq \max\{0,b-a\}$, since
\[
D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]\subset t^{-b}\D_\dif^{+,n}(D_E)
\]
by Lemma \ref{lem:dR-weight}, we may pick some $y_0\in t^{-b}D_E^{s}$ such that $\iota_n(y_0)-y\in t^w
D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]$. Let $t_{n,w}$ be the function defined in \cite[Lemme I.2.1]{LB04}. It follows that
\[
\iota_m(t_{n,w}y_0)\in t^{w-b}\D_\dif^{+,m}(D_E)=t^{w-b+a}(t^{-a}\D_\dif^{+,m}(D_E))\subset D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_m\otimes_{\Q}E)[[t]]
\]
for $m>n$
and
\[
\iota_n(t_{n,w}y_0)-y\in t^{w}D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]].
\]
This implies that the natural map $N_s(D_E)\ra D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]/(t^w)$ is surjective; this proves (2). We get (3) immediately from (2). The first half of (4) follows from the fact that $\iota_{n+1}\circ \varphi=\iota_n$. Note that $\iota_n(\nabla(N_s(D_E)))=\nabla(\iota_n(N_s(D_E)))\subset tD_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]$ for any $n\geq n(s)$; this proves the second half of (4).
\end{proof}
\begin{prop}\label{prop:monodromy}
Keep notations as in Proposition \ref{prop:N_dR}. Then there exists a finite extension $L$ over $K$ such that
\[
M=(N_{\mathrm{dR}}(D_E)\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E}
\mathbf{B}_{\log,L}^\dag\widehat{\otimes}_{\Q}E)^{I_L}
\]
is a free $L_0'\otimes_{\Q}E$-module of rank $d$ and the natural map
\begin{equation*}
\begin{split}
M\otimes_{L_0'\otimes_{\Q}E}
\mathbf{B}_{\log,L}^\dag\widehat{\otimes}_{\Q}E
\longrightarrow N_{\mathrm{dR}}(D_E)\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E}
\mathbf{B}_{\log,L}^\dag\widehat{\otimes}_{\Q}E
\end{split}
\end{equation*}
is an isomorphism.
\end{prop}
\begin{proof}
Let $f'=[K_0':\Q]$. Note that there is a canonical decomposition $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E\cong\prod_{i=0}^{f'-1}\r_E^{(i)}$
where each $\r_E^{(i)}$ is isomorphic to $\r_E$ and stable under $\Gamma_K$, and satisfies $\varphi(\r_E^{(i)})\subset\r_E^{(i+1)}$ ($\r_E^{(f')}=\r_E^{(0)}$). Let $N^{(i)}_{\mathrm{dR}}(D_E)=N_{\mathrm{dR}}(D_E)
\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E}\r_E^{(i)}$. It follows that each $N_{\mathrm{dR}}^{(i)}(D_E)$ is stable under $\partial=\nabla/t$ and $\varphi^{f'}$; hence it is a $p$-adic differential equation with a Frobenius structure. By the versions of the $p$-adic local monodromy theorem proved by Andr\'e \cite{An} or Mebkhout \cite{Meb}, we conclude that each $N^{(i)}_{\mathrm{dR}}(D_E)$ is potentially unipotent. This yields the proposition using the argument of \cite[Proposition 6.2.2]{BC07} and \cite[Corollaire 6.2.3]{BC07}.
\end{proof}
\begin{lemma}\label{lem:monodromy}
Keep notations as in Proposition \ref{prop:monodromy}, and let
\[
M=(N_s(D_E)\otimes_{\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E}
\mathbf{B}_{\log,K}^{\dag,s}\widehat{\otimes}_{\Q}E)^{I_L}
\]
for sufficiently large $s$. Then for any $n\geq n(s)$, we have
\begin{equation}\label{eq:lem-monodromy}
L\otimes_{L_0}\iota_n(M)=(\D_\dif(D_E\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E}
\mathbf{B}_{\rig,L}^\dag\widehat{\otimes}_{\Q}E))^{I_L}
\end{equation}
\end{lemma}
\begin{proof}
By the previous proposition, the left hand side of (\ref{eq:lem-monodromy}) is a free $L\otimes_{L_0}L_0'\otimes_{\Q}E$-module of rank $d$. On the other hand, since $((L_n\otimes_{\Q}E)[[t]][1/t])^{I_L}=L\otimes_{L_0}L_0'\otimes_{\Q}E$, we deduce that the right hand side of (\ref{eq:lem-monodromy}), which obviously contains the left hand side, is an $L\otimes_{L_0}L_0'\otimes_{\Q}E$-module generated by at most $d$ elements. This yields the desired identity.
\end{proof}
\section{Proof of the main theorem}
We start by making some preliminary reductions. After a finite surjective base change of $X$, we may assume that $Q(T)$ factors as $\prod_{i=1}^m(T-F_i)$. By reordering the $F_i$'s and throwing away some points of $Z$ we may further assume that for all $z\in Z$, $v_p(F_i(z))\geq v_p(F_j(z))$ if $i>j$ and $F_i(z)\neq F_j(z)$ if $F_i\neq F_j$. We then set $\F_{i,z}=
D_{\mathrm{st}}^+(V_z)^{(\varphi^f-F_1(z))\cdots(\varphi^f-F_{i}(z))=0}$ for all $z\in Z$ and $1\leq i\leq m$.
Using Definition \ref{def:fs}(3), we may suppose that $\F_{i,z}\subseteq \F_z$ for all $z\in Z$ and $1\leq i\leq m$ by shrinking $Z$. Furthermore, by the fact that $N\varphi=p\varphi N$ and the condition that $v_p(F_i(z))\geq v_p(F_j(z))$ if $i>j$, we see that $N=0$ on each graded piece $\F_{i,z}/\F_{i-1,z}$.
Let $c_{i,z}$ be the rank of $ \F_{i,z}/\F_{i-1,z}$ over $K_0\otimes k(z)$, and partition $Z$ into finitely many subsets according to the sequence $c_{i,z}$. One of these subsets of $Z$ must still be Zariski dense. Replace $Z$ by this subset and set $c_i = c_{i,z}$ for any $z$ in this subset.
For $z\in Z$, we will inductively set $\m$-submodules $\mathrm{Fil}_{i,z}\subset\D_\rig^\dag(V_z)$ for $1\leq i\leq m$ such that $D_{\mathrm{st}}(\mathrm{Fil}_{i,z})=\F_{i,z}$. For $i=1$, since $V_z$ has non-positive Hodge-Tate weights and $N(\F_{1,z})=0$, we have
\[
\F_{1,z}=(D^+_{\mathrm{crys}}(V_z))^{\varphi^f=F_1(z)}\subset\D_\rig^\dag(V_z)^{\Gamma}
\]
by Berger's dictionary. Let $\mathrm{Fil}_{1,z}$ be the saturation of the $\m$-submodule generated by $\mathcal{F}_{1,z}$. Now suppose we have set $\mathrm{Fil}_{i-1,z}$ for some $i\geq 2$. It follows that
\[
D_{\mathrm{st}}^+(\D_\rig^\dag(V_z)/\mathrm{Fil}_{i-1,z})=D_{\mathrm{st}}^+(V_z)/\F_{i-1,z}.
\]
Note that
\[
\F_{i,z}/\F_{i-1,z}=(D_{\mathrm{st}}^+(V_z)/\F_{i-1,z})^{\varphi^f=F_{i}(z),N=0}.
\]
Hence
\[
\F_{i,z}/\F_{i-1,z}=D^+_{\mathrm{crys}}(\D_\rig^\dag(V_z)/\mathrm{Fil}_{i-1,z})^{\varphi^f=F_{i}(z)}\subset
(\D_\rig^\dag(V_z)/\mathrm{Fil}_{i-1,z})^\Gamma.
\]
We then set $\mathrm{Fil}_{i,z}$ to be the preimage of the saturation of the $\m$-submodule of $\D_\rig^\dag(V_z)/\mathrm{Fil}_{i-1,z}$ generated by $\F_{i,z}/\F_{i-1,z}$.
Now for each $1\leq i\leq m$, we define the character $\delta_i:K^\times\ra\OO(X)^\times$ by setting $\delta_i(p)=F_i^{-1}$ and $\delta_i(\OO_K^\times)=1$. Let $D_X=\D_\rig^\dag(V_X)^{\vee}$.
\begin{lemma}\label{lem:de Rham-part}
Suppose that $X$ is irreducible. Then for each $0\leq i\leq m$, there exists a proper birational morphism $\pi:X'\ra X$ and a sub-family of $\m$-modules $D^{(i)}_{X'}\subset D_{X'}$ over $X'$ of rank $d-c_1-\dots-c_i$ such that
\begin{enumerate}
\item[(1)]
for any $x\in X'$, the natural map $D_x^{(i)}\ra D_x$ is injective;
\item[(2)]
there exists a Zariski open dense subset $U$ of $X'$ such that for any $z\in Z'=\pi^{-1}(Z)\cap U$, the natural map $D^{(i)}_z\ra D_z$ is the dual of the projection $\D_\rig^\dag(V_{\pi(z)})\ra \D_\rig^\dag(V_{\pi(z)})/\mathrm{Fil}_{i,\pi(z)}$.
\end{enumerate}
\end{lemma}
\begin{proof}
We proceed by induction on $i$. The initial case is trivial. Suppose that for some $1\leq i\leq m$, the lemma is true for $i-1$.
Note that $\mathcal{F}_{i,z}/\mathcal{F}_{i-1,z}$ maps into $\D_\rig^\dag(V_{z})/\mathrm{Fil}_{i,z}$ for any $z\in Z$. Since $\F_{i,z}/\F_{i-1,z}=(D_{\mathrm{crys}}^+(V_z)/\F_{i-1,z})^{\varphi^f=F_{i}(z)}$, we get that $(D^{(i)}_z)^{\vee}(\pi^{*}(\delta_i)(z))$ has $k(z)$-dimension $c_i$ for any $z\in Z'$. Since $Z'$ is Zariski dense in $X'$, by Proposition \ref{prop:cohomology}, after adapting $X'$ and $U$, we may find a sub-family of $\m$-modules $D^{(i)}_{X'}$ of $D^{(i-1)}_{X'}$ with rank $d-c_1-\dots-c_i$ such that
\begin{enumerate}
\item[(1')]$D_x^{(i)}\ra D_x^{(i-1)}$ is injective for any $x\in X'$;
\item[(2')]for any $z\in \pi^{-1}(Z)\cap U$, $D_z^{(i)}$ is the kernel of the dual of the map
\[
(\mathbf{B}_{\rig,K}^\dag\otimes_{\Q}k(z))\cdot(\mathcal{F}_{i,\pi(z)}/\mathcal{F}_{i-1,\pi(z)})\ra \D_\rig^\dag(V_{\pi(z)})/\mathrm{Fil}_{i,\pi(z)}.
\]
\end{enumerate}
It is clear that (1') and (2') imply (1) and (2) respectively; this finishes the inductive step.
\end{proof}
To prove Theorem \ref{thm:main}, we also need the following lemma.
\begin{lemma}
Let $V_S$ be a free $S$-linear representation of $G_K$ of rank $d$. Then there exists a positive integer $m(V_S)$ such that for any $x\in M(S)$ and $a\in\D_\dif^{+}(V_x)$, if $a$ is $\Gamma$-invariant, then $a\in\D_\dif^{+,m(V_S)}(V_x)$.
\end{lemma}
\begin{proof}
This is a consequence of the Tate-Sen method. Using \cite[Th\'eor\`{e}me 4.2.9]{BC07}, we first choose a finite extension $L$ over $K$ and some positive integer $m$ so that $\D_{\rig,L}^{\dag,r_m}(V_S)$ is a free $\mathbf{B}_{\rig,L}^{\dag,r_m}\widehat{\otimes}_{\Q}S$-module with a basis $\mathrm{e}=(e_1,\dots,e_d)$. Let $\gamma$ be a topological generator of $\Gamma_{L_m}$ and write $\gamma(\mathrm{e})=\mathrm{e}G$ for some $G\in\mathrm{GL}_d(\mathbf{B}_{\rig,L}^{\dag,r_m}\widehat{\otimes}_{\Q}S)$. Recall that by the classical work of Tate \cite{T}, we know that there exists a constant $c>0$ such that $v_p((\gamma-1)x)\leq v_p(x)+c$ for any nonzero $x\in (1-R_{L,m})\widehat{L}_\infty$, where $R_{L,m}:\widehat{L}_\infty\ra L_m$ is Tate's normalized trace map. Since the localization map $\iota_m:\mathbf{B}_{\rig,L}^{\dag,r_m}\ra L_m[[t]]$ is continuous, by enlarging $m$, we may suppose that the constant term of $\iota_m(G)-1$ has norm less than $p^{-c}$. We fix some $m_0\in\mathbb{N}$ such that $K_\infty\cap L_m=K_{m_0}\cap L_m$.
Now let $a\in\D_\dif^{+,K_n}(V_x)^\Gamma$ for some $x\in M(S)$ and $n\geq m$. We will show that $a\in\D_\dif^{+,K_{m_0}}(V_x)^\Gamma$. Since $\iota_m(\mathrm{e})$ forms a basis of $\D^{+,L_n}_{\dif}(V_S)$, we may write $a=\iota_m(\mathrm{e})(x)A$ for some
\[
A\in \mathrm{M}_{d\times1}((L_n\otimes_{\Q}k(x))[[t]]).
\]
The $\Gamma$-invariance of $a$ implies $\iota_m(G(x))\gamma(A)=A$; thus $(1-R_{L,m})\iota_m(G(x))\gamma(A)=(1-R_{L,m})A$. Note that $\iota_m(G(x))$ has entries in $(L_m\otimes_{\Q}k(x))[[t]]$. It follows that $(\iota_m(G(x))-1)B=(1-\gamma^{-1})B$ where $B=(1-R_{L,m})A$. Let $B_0$ be the constant term of $B$. If $B_0\neq0$, then the constant term of $(\iota_m(G(x))-1)B$ has valuation $\geq v(\iota_m(G(x))-1)+v(B_0)>v(B_0)+c$, whereas the constant term $(1-\gamma^{-1})B_0$ of $(1-\gamma^{-1})B$ has valuation $\leq v(B_0)+c$; this yields a contradiction. Hence $B_0=0$. Iterating this argument, we get $B=0$. Hence $a\in \D_\dif^{+,L_m}(V_x)\cap\D_\dif^{+,K_n}(V_x)\subset\D_\dif^{+,K_{m_0}}(V_x)$. Thus we may choose $m(V_S)=m_0$.
\end{proof}
\emph{Proof of Theorem 0.2}.
We retain the notations as above. By passing to irreducible components, we may suppose that $X$ is irreducible. We then apply Lemma \ref{lem:de Rham-part} to $V_X$. Note that $V_{X'}$ is again a finite slope family over $X'$ with the Zariski dense set of crystalline points $\pi^{-1}(Z)$. We may suppose that $X'=X$. Let $\lambda:\D^\dag_{\rig}(V_X)=D^{\vee}_X\ra (D_X^{(m)})^{\vee}$ be the dual of $D_X^{(m)}\ra D_X$, and let $P_X=\ker(\lambda)$. For any $x\in X$, since $D^{(m)}_x\ra D_x$ is injective, we get that the image of $\lambda_x$ is a $\m$-submodule of rank $d-c_1-\cdots-c_m$. Thus by Lemma \ref{lem:ker-birational}, after adapting $X$, we may assume that $P_X$ is a family of $\m$-modules of rank $c_1+\cdots+c_m$, and there exists a Zariski open dense subset $U\subset X$ such that $P_x=\ker(\lambda_x)$ for any $x\in U$. Note that $\ker(\lambda_z)=\mathrm{Fil}_{m,z}$ for any $z\in Z$. Thus by replacing $Z$ with $Z\cap U$, we may assume that $P_z=\mathrm{Fil}_{m,z}$ for any $z\in Z$. We claim that $P_{X}$ is de Rham with weights in $[-b,0]$. To do so, we set $Y$ to be the set of $x\in X$ for which $P_x$ is de Rham with weights in $[-b,0]$. By the previous lemma, we see that for any affinoid subdomain $M(S)\subset X$, there exists an integer $m(V_S)$ such that if $P_x$ is de Rham for some $x\in M(S)$, then $h_{dR}(P_x)\leq m(V_S)$. We then deduce from Proposition \ref{prop:dR-family} that $Y\cap M(S)$ is a Zariski closed subset of $M(S)$. Hence $Y$ is a Zariski closed subset of $X$. On the other hand, since $P_z$ is de Rham with weights in $[-b,0]$, we get $Z\subset Y$; thus $Y=X$ by the Zariski density of $Z$. Furthermore, using Proposition \ref{prop:dR-family} and the previous lemma again, we deduce that $P_X$ is de Rham with weights in $[-b,0]$. As a consequence, we obtain a locally free coherent $\OO_X\otimes_{\Q}K$-module $D_{\mathrm{dR}}(P_X)$ of rank $c_1+\cdots+c_m$.
The next step is to show that for any $x\in X$, $D_{\mathrm{dR}}(P_x)$ is contained in $D^+_{\mathrm{st}}(V_x)\otimes_{K_0}K$. Let $Y$ be the set of $x\in X$ satisfying this condition. We first show that $Y$ is a Zariski closed subset of $X$. For this, it suffices to show that $Y\cap M(S)$ is a Zariski closed subset of $M(S)$ for any affinoid subdomain $M(S)$ of $X$. To show this, we employ the $p$-adic local monodromy for families of de Rham $\m$-modules. As in \S5, let $E$ be the product of the complete residue fields of the Shilov boundary of $M(S)$. Since $P_S$ is a family of de Rham $\m$-modules with weights in $[-b,0]$, by Lemma \ref{lem:monodromy}, there exists a finite extension $L$ of $K$ such that for sufficiently large $s$ and $n\geq n(s)$, we have
\[
L\otimes_{L_0}\iota_n(M)=(\D_\dif(P_E\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E}
\mathbf{B}_{\rig,L}^\dag\widehat{\otimes}_{\Q}E))^{I_L}
\]
for
$M=(N_s(P_E)\otimes_{\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E}
\mathbf{B}_{\log,K}^{\dag,s}\widehat{\otimes}_{\Q}E)^{I_L}$; furthermore, $N_s(P_E)\subset P_E^{s}$. Thus
\[
\iota_n(M)\subset \iota_n(P_E\otimes_{\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E}
\mathbf{B}_{\log,K}^{\dag,s}\widehat{\otimes}_{\Q}E)\subset
\iota_n(\D_\rig^\dag(V_E)\otimes_{\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E}
\mathbf{B}_{\log,K}^{\dag,s}\widehat{\otimes}_{\Q}E)\subset\mathbf{B}^+_{\mathrm{st}}\widehat{\otimes}_{\Q}V_E.
\]
Note that $D_{\mathrm{dR}}(P_E)\subset \D_\dif^+(P_E)\subset\D_\dif^+(V_E)\subset\mathbf{B}_{\mathrm{dR}}^+\widehat{\otimes}_{\Q}V_E$. This yields
\[
D_{\mathrm{dR}}(P_E)\subset (\mathbf{B}^+_{\mathrm{st}}\widehat{\otimes}_{\Q}V_E)\otimes_{L_0}L\cap \mathbf{B}_{\mathrm{dR}}^+\widehat{\otimes}_{\Q}V_E=
(\mathbf{B}^+_{\mathrm{st}}\widehat{\otimes}_{\Q}V_E)\otimes_{L_0}L.
\]
We therefore deduce from \cite[Lemme 6.3.1]{BC07} that
\[
D_{\mathrm{dR}}(P_S)\subset (\mathbf{B}^+_{\mathrm{st}}\widehat{\otimes}_{\Q}V_E)\otimes_{L_0}L\cap
\mathbf{B}_{\mathrm{dR}}^+\widehat{\otimes}_{\Q}V_S=(\mathbf{B}^+_{\mathrm{st}}\widehat{\otimes}_{\Q}V_S)\otimes_{L_0}L.
\]
It follows that $Y\cap M(S)$, which is the set of $x\in M(S)$ such that $D_{\mathrm{dR}}(P_x)\subset (\mathbf{B}^+_{\mathrm{st}}\otimes_{\Q}V_x)\otimes_{K_0}K$, is Zariski closed in $M(S)$.
To conclude the theorem, it then suffices to show that $D_{\mathrm{dR}}(P_x)\subset (D^+_{\mathrm{st}}(V_x)\otimes_{K_0}K)^{Q(\varphi)(x)=0}$ for any $x\in X$; here we $K$-linearly extend the $\varphi^f$-action to $D^+_{\mathrm{st}}(V_x)\otimes_{K_0}K$. Note that $\mathrm{Fil}_{m,z}$ is semi-stable with $D_{\mathrm{st}}(\mathrm{Fil}_{m,z})=\mathcal{F}_{m,z}$. This implies that $Q(\varphi)(D_{\mathrm{dR}}(P_X))$ vanishes at $z$, yielding that $Q(\varphi)(D_{\mathrm{dR}}(P_X))=0$ by the Zariski density of $Z$.
In \cite{L}, Landsberg studied the issue of whether equilibrium is always an entropy maximum, having in view nonextensive (e.g., gravitational) systems, which has been recently revived under a different point of view in \cite{LYEM}. In the process, he arrived at a connection between the properties of homogeneity, superadditivity and concavity of a real-valued function. Unfortunately, he did not formulate this connection as a theorem. Thirring attempted to do so in his beautiful introduction to Lieb's selecta \cite{ThirrLi} as follows (we present the version used by Landsberg, which replaces subadditivity by superadditivity and convexity by concavity):
\begin{proposition}
\label{prop:T}
Let $x \to f(x)$ be a map from a convex subset of $\mathbf{R}^{d}$ into $\mathbf{R}$. Then any two of the conditions
\begin{itemize}
\item [$a.)$] (H) (Homogeneity) $f(\lambda x) = \lambda f(x) \mbox{ for all } \lambda \in \mathbf{R}_{+}$;
\item [$b.)$] (Sp) (Superadditivity) $f(x_{1}+x_{2}) \ge f(x_{1}) + f(x_{2})$;
\item [$c.)$] (Cc) (Concavity) $f[\lambda x_{1}+ (1-\lambda) x_{2}] \ge \lambda f(x_{1}) + (1-\lambda) f(x_{2})$ for $0\le \lambda \le 1$.
\end{itemize}
imply the third.
\end{proposition}
The formulae $a.)$, $b.)$, $c.)$ are equivalent to the following \cite{ThirrLi}:
\begin{itemize}
\item [$a.)$] (H) (Homogeneity) $f(\lambda x) = \lambda f(x) \mbox{ for all } \lambda \in \mathbf{R}_{+}$;
\item [$b.)$] (S) (Subadditivity) $f(x_{1}+x_{2}) \le f(x_{1}) + f(x_{2})$;
\item [$c.)$] (Cv) (Convexity) $f[\lambda x_{1}+ (1-\lambda) x_{2}] \le \lambda f(x_{1}) + (1-\lambda) f(x_{2})$ for $0\le \lambda \le 1$.
\end{itemize}
Moreover, as noted in \cite{ThirrLi}, (H) and (S) are conditions for stability against implosion and explosion, respectively, and (Cv) is the thermodynamic stability condition.
Proposition ~\ref{prop:T} cannot be true as stated, due to the counterexample given in the proof of the forthcoming theorem ~\ref{th:2.1} in section 2. The additional assumption required there (\eqref{(2.1.2)} of section 2, together with Assumption A) throws some additional light on the proposed relationship between (H), (Sp) and (Cc), because it yields a necessary and sufficient condition for the validity of a result of the type of proposition ~\ref{prop:T}. Its physical significance will be left to the conclusion in section 4, after the applications to statistical thermodynamics, due to Thirring \cite{ThirrLi} and Landsberg \cite{L}, have been briefly revisited in section 3.
The main ideas in the proof of the forthcoming theorem, whose statement and proof are provided in section 2, are due to the late Peter Landsberg and Walter Thirring, and therefore we call it the Landsberg-Thirring theorem. Our own modest contribution was to find (what we believe is) the natural framework for such a theorem, which may, however, be of general interest in real analysis, because, on the one hand, it relates three basic properties of real-valued functions, and, on the other, there seem to be few necessary and sufficient criteria for super (sub) additivity since the classic works (\cite{HP}, \cite{Ros}) -- see, however, \cite{Bru}. Possible generalizations to a noncommutative setting may also be envisaged \cite{UUG}; for an introduction see also section 8.1 of \cite{Carlen} and references given there.
\section{Main Theorem}
The function $f_{0}$ in the counterexample to proposition ~\ref{prop:T} given in the proof of theorem ~\ref{th:2.1} below is defined on a convex \emph{open} set $X$ and exhibits a singularity (of the second kind) at a point (chosen without loss of generality as the origin of the Cartesian coordinate system). This motivates the introduction of the following simple framework.
\emph{Assumption A} Let $X$ be a convex open cone in $\mathbf{R}^{d}$, $1\le d <\infty$, and $\overline{X}$ denote its closure in $\mathbf{R}^{d}$. Thus, $\overline{X}$ is a closed convex cone (see, e.g., \cite{Fenchel}), and
\begin{equation}
(0, \cdots, 0) \in \overline{X} \setminus X
\label{(2.1.1)}
\end{equation}
By definition, $X$ is closed under addition and multiplication by a scalar in $\mathbf{R}_{+}$.
\space
\begin{theorem}
\label{th:2.1}
Let $X$ be as in Assumption A, and $f$ be a real-valued function on $X$.
A necessary and sufficient condition for the statement that any two of the properties (H), (Sp) and (Cc) for $f$ imply the third is
\begin{equation}
\liminf_{(x_{1},\cdots,x_{d}) \to (0,\cdots,0)} f(x_{1}, \cdots, x_{d}) \ge 0
\label{(2.1.2)}
\end{equation}
\end{theorem}
\begin{proof}
Let Assumption A be valid. We need only show that
\begin{equation}
(Cc) \land (Sp) \Rightarrow (H)
\label{(2.1.3)}
\end{equation}
if and only if \eqref{(2.1.2)} holds. We first show necessity. Let $d=1$, $X= \mathbf{R}_{+}=(0,\infty)$ and
\begin{equation}
0<c< \infty
\label{(2.1.4)}
\end{equation}
be given and define
\begin{equation}
\label{(2.1.5)}
h(x)=\begin{cases}\log(cx),& \mbox{ if } 0<x\le\frac{2}{c}\\0, \mbox{ otherwise } \end{cases}
\end{equation}
and
\begin{equation}
\label{(2.1.6)}
g(x)=\begin{cases} \log(2)+\frac{c}{2}(x-\frac{2}{c}),& \mbox{ if } \frac{2}{c}\le x< \infty\\0, \mbox{ otherwise } \end{cases}
\end{equation}
Notice that we have chosen the angular coefficient of the straight line equal to the tangent to the logarithmic function
at the point $\frac{2}{c}$.
Further, define the function on $(0,\infty)$:
\begin{equation}
\label{(2.1.7)}
f_{0}(x) = h(x) + g(x)
\end{equation}
By \eqref{(2.1.5)} and \eqref{(2.1.6)}, $f_{0}$ is continuous, and has a continuous derivative at the point $\frac{2}{c}$.
The function $h$ is superadditive on $(0,\frac{2}{c}]$ because
\begin{equation}
\label{(2.1.8)}
\log[(c(x_{1}+x_{2})] \ge \log(cx_{1})+\log(cx_{2})=\log(c^{2}x_{1}x_{2})
\end{equation}
is true whenever
$$
x_{1}+x_{2} \ge cx_{1}x_{2}
$$
or
\begin{equation}
\label{(2.1.9)}
\frac{x_{1}+x_{2}}{x_{1}x_{2}} \ge c
\end{equation}
Superadditivity of $h$ on $(0,\frac{2}{c}]$ means, by definition \eqref{(2.1.5)}, that \eqref{(2.1.8)} holds
for all $x_{1},x_{2} \in (0,\frac{2}{c}]$ such that $x_{1}+x_{2}$ is also an element of $(0,\frac{2}{c}]$, i.e., such that
\begin{equation}
\label{(2.1.10)}
0<x_{1}+x_{2}\le \frac{2}{c}
\end{equation}
By \eqref{(2.1.9)}, \eqref{(2.1.8)} holds under \eqref{(2.1.10)} due to the elementary inequalities
\begin{equation}
\label{(2.1.11)}
\frac{1}{x_{1}} + \frac{1}{x_{2}} \ge \frac{2}{x_{1}^{1/2}x_{2}^{1/2}} \ge c
\end{equation}
We now consider the remaining case
\begin{equation}
\label{(2.1.12)}
x_{1}+x_{2} > \frac{2}{c}
\end{equation}
In case \eqref{(2.1.12)}, we may have three different cases:
\begin{itemize}
\item [$a.)$] $x_{1} \le \frac{2}{c}$ and $x_{2}>\frac{2}{c}$;
\item [$b.)$] $x_{1}>\frac{2}{c}$ and $x_{2}> \frac{2}{c}$;
\item [$c.)$] $x_{1}\le\frac{2}{c}$ and $x_{2}\le\frac{2}{c}$, with $x_{1}+x_{2}>\frac{2}{c}$
\end{itemize}
Of course, the case a.) with $x_{1}$ and $x_{2}$ exchanged is the same.
In case a.),
$$
f_{0}(x_{1}+x_{2}) = g(x_{1}+x_{2}) \ge \tilde{g}(x_{1})+g(x_{2}) \ge h(x_{1})+g(x_{2})=f_{0}(x_{1})+f_{0}(x_{2})
$$
where $\tilde{g}$ denotes the natural extension of $g$ to $(0,\infty)$, by the remark after equation \eqref{(2.1.6)}: being the tangent line at $\frac{2}{c}$ to the concave function $x \mapsto \log(cx)$, $\tilde{g}$ lies above it on all of $(0,\infty)$, which gives the second inequality, while a direct computation gives $g(x_{1}+x_{2})-\tilde{g}(x_{1})-g(x_{2})=1-\log 2>0$, which gives the first.
In case b.), the same computation gives
$$
f_{0}(x_{1}+x_{2}) = g(x_{1}+x_{2}) \ge g(x_{1})+g(x_{2})=f_{0}(x_{1})+f_{0}(x_{2})
$$
In case c.), since $\tilde{g}$ lies above $\log(cx)$ on $(0,\infty)$ and \eqref{(2.1.9)} holds (because $\frac{1}{x_{1}}+\frac{1}{x_{2}}\ge\frac{c}{2}+\frac{c}{2}=c$),
$$
f_{0}(x_{1}+x_{2}) = g(x_{1}+x_{2}) \ge \log[c(x_{1}+x_{2})] \ge \log(cx_{1})+\log(cx_{2})=f_{0}(x_{1})+f_{0}(x_{2})
$$
which completes the proof of superadditivity of the function $f_{0}$.
The function $h$, defined by \eqref{(2.1.5)}, satisfies $\frac{d^{2}h}{dx^{2}} \le 0$
under condition \eqref{(2.1.4)}, and is, therefore, concave on $(0,\frac{2}{c})$, and $g$, defined by \eqref{(2.1.6)},
being linear, is concave as well on $(\frac{2}{c},\infty)$. The function $f_{0}$, given by \eqref{(2.1.7)}, is continuous
on $(0,\infty)$ and has the property that through every point of the curve $y=f_{0}(x)$ there is at least one line
which lies wholly above the curve. Indeed, for the point $x=\frac{2}{c}$, at which the second derivative of $f_{0}$ is
discontinuous, such a line is the tangent to the curve at the point. Thus, by \cite{HLP}, p. 95, $f_{0}$ is concave on
$(0,\infty)$. This example trivially generalizes to $\mathbf{R}^{d}$, by taking
\begin{equation}
\label{(2.1.13)}
\tilde{f}_{0}(x_{1}, \cdots x_{d}) = \sum_{i=1}^{d} f_{0}(x_{i})
\end{equation}
for $(x_{1}, \cdots x_{d}) \in \mathbf{R}_{+} \times \cdots \times \mathbf{R}_{+}$.
Finally, (H) obviously fails for $f_{0}$, and consequently for $\tilde{f}_{0}$, and necessity is proved.
In order to show sufficiency, assume a real-valued function $f$ satisfies \eqref{(2.1.2)} and both (Cc) and (Sp) on a convex open $X \subset \mathbf{R}^{d}$ satisfying Assumption A. By (Cc),
\begin{equation}
f(\lambda x + (1-\lambda)y) \ge \lambda f(x) + (1-\lambda) f(y) \mbox{ with } x,y \in X \mbox{ and } 0 \le \lambda \le 1
\label{(2.2)}
\end{equation}
Since $f$ satisfies \eqref{(2.2)} on an open set, it is continuous there (see \cite{HLP}, Theorem 111, p.91), and therefore \eqref{(2.2)} yields, for all $x \in X$,
\begin{eqnarray*}
\lim_{y \to (0,\cdots 0)} f(\lambda x+ (1-\lambda)y) = f(\lambda x) \ge\\
\ge \lambda f(x) + (1-\lambda) \liminf_{y \to (0,\cdots 0)} f(y)
\end{eqnarray*}
from which, by \eqref{(2.1.2)},
\begin{equation}
\label{(2.3)}
f(\lambda x) \ge \lambda f(x)
\end{equation}
Choosing, now, $n \in \mathbf{N}$ and $\lambda = \frac{1}{n}$, we obtain from \eqref{(2.3)}
\begin{equation}
\label{(2.4)}
f(x) \le n f(\frac{x}{n}) \mbox{ for all } x \in X
\end{equation}
We further obtain from (Sp),
\begin{equation}
\label{(2.5)}
f(nx) \ge n f(x) \mbox{ for all } x \in X
\end{equation}
Let $w \in X$ and, given $n \in \mathbf{N}$, define $x \in X$ by $nx=w$. Then, by \eqref{(2.5)},
\begin{equation}
\label{(2.6)}
f(w) \ge n f(\frac{w}{n})
\end{equation}
From \eqref{(2.4)} and \eqref{(2.6)},
\begin{equation}
\label{(2.7)}
f(w) = n f(\frac{w}{n}) \mbox{ for all } w \in X
\end{equation}
By \eqref{(2.7)}, replacing $n$ by $m$, and writing $u=\frac{w}{m}$, we find $f(mu)=mf(u) \mbox{ for all } u \in X$, and, finally,
\begin{equation}
\label{(2.8)}
f(n^{-1}mu) = n^{-1}m f(u) \mbox{ for all } u \in X \mbox{ and for all } n,m \in \mathbf{N}
\end{equation}
Take, now, any irrational number $\lambda \in \mathbf{R}_{+}$, and let $\frac{p_{k}}{q_{k}}$ be the continued fraction approximants
of $\lambda$ (\cite{Khinchin}, p.18). By the continuity of $f$, $f(\frac{p_{k}}{q_{k}}u) \to f(\lambda u)$ as $k \to \infty$
and \eqref{(2.8)} finally yields (H).
\end{proof}
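As a numerical sanity check of the necessity part, the counterexample $f_{0}$ can be tested on a grid. The snippet below is illustrative only: it fixes $c=2$ (so that the linear piece has slope $1$, the tangent slope of the logarithm at $\frac{2}{c}=1$), and verifies superadditivity and midpoint concavity numerically, while exhibiting the failure of (H).

```python
import math

# Counterexample f_0 from the necessity part of Theorem 2.1, with c = 2
# (an illustrative choice): f_0(x) = log(2x) on (0, 1], and the tangent
# line f_0(x) = log(2) + (x - 1) for x >= 1.
def f0(x):
    return math.log(2.0 * x) if x <= 1.0 else math.log(2.0) + (x - 1.0)

grid = [0.05 * k for k in range(1, 61)]  # sample points in (0, 3]

# (Sp): f_0(x1 + x2) >= f_0(x1) + f_0(x2) on the grid
superadditive = all(f0(x + y) >= f0(x) + f0(y) - 1e-9
                    for x in grid for y in grid)

# (Cc), midpoint form: f_0((x + y)/2) >= (f_0(x) + f_0(y))/2
concave = all(f0((x + y) / 2.0) >= (f0(x) + f0(y)) / 2.0 - 1e-9
              for x in grid for y in grid)

# (H) fails: f_0(2 * 0.5) = log(2), whereas 2 * f_0(0.5) = 0
homogeneous = abs(f0(2.0 * 0.5) - 2.0 * f0(0.5)) < 1e-9

print(superadditive, concave, homogeneous)  # True True False
```

Such a check is of course no substitute for the proof above, but it makes the roles of the logarithmic and linear pieces of $f_{0}$ concrete.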
In the case $d=1$, that is, for functions $f$ of one real variable, the l.h.s. of \eqref{(2.1.3)}, together with \eqref{(2.1.2)}, implies that the function $f$ (which then satisfies (H)) is in fact trivial:
\begin{proposition}
\label{prop:3.1}
If $d=1$, under assumption \eqref{(2.1.2)}, \eqref{(2.1.3)} implies that the function satisfying (H) is trivial,
i.e., $f=cx$ for $c$ a given constant. The analogue of this assertion no longer holds if $d=2$.
\end{proposition}
\begin{proof}
Let $h(x)=\frac{f(x)}{x}$. By \cite{HLP}, Theorem 103, p.83, and \cite{HP}, p.239, under the assumed
concavity, $f$ is superadditive on $(0,\infty)$ iff $h$ is nondecreasing. Thus, the l.h.s. of \eqref{(2.1.3)}
implies that $h$ is nondecreasing. Let $x \in (0,\infty)$, $\lambda \in (0,1]$
be given. Then, by \eqref{(2.3)} (which rests on assumption \eqref{(2.1.2)}),
$$
f(\lambda x) \ge \lambda f(x) .
$$
Division by $\lambda x$ gives $\frac{f(\lambda x)}{\lambda x} \ge \frac{f(x)}{x}$. Thus $h$ is nonincreasing
on $(0,\infty)$, and, by the previous result, it must also be nondecreasing and thus be a constant
on $(0,\infty)$, i.e., $\frac{f(x)}{x}=c$.
For $d=2$ the example $s_{ph}(E,V)$ in subsection 3.3 provides a counterexample to the assertion of the proposition, since
$s_{ph}(E,V) \ne c_{1}E+c_{2}V$, with $c_{1},c_{2}$ given constants.
\end{proof}
\begin{remark}
\label{Remark 2.1}
If $\phi$ is a function on $[0,1]$ defined by $\phi(x)=0$ if $x \in [0,1)$, while $\phi(1)=1$, it is convex on $[0,1]$ but not continuous on $[0,1]$. Since continuity is important in the proof of theorem~\ref{th:2.1}, this is an indication that the assumption that the domain of the function $f$ is an open convex set $X$ in theorem~\ref{th:2.1} is natural even in the case that the function is bounded in the closure $\overline{X}$ of $X$.
\end{remark}
\begin{remark}
\label{Remark 2.2}
The counterexample \eqref{(2.1.13)} is suggested by the entropy of a free, classical gas.
\end{remark}
\begin{remark}
\label{Remark 2.3}
Proposition~\ref{prop:3.1} shows that theorem~\ref{th:2.1} is non-trivial only in the case $d \ge 2$, i.e., in the case
of functions of several variables. In the applications to statistical thermodynamics, to which we now turn, one is
typically concerned with at least two variables, as the forthcoming three examples demonstrate.
\end{remark}
\section{Applications to statistical thermodynamics}
In this section we briefly review three applications to statistical thermodynamics, in the light of theorem~\ref{th:2.1}. In section 3.1 we revisit non-relativistic gravitational systems, following Thirring \cite{ThirrLi}. This is the most important application, in which theorem~\ref{th:2.1} is natural, because the origin lies outside the range of physical values of the variables involved. The second one, in section 3.2, is the Kerr-Newman black-hole, and is due to Landsberg \cite{L}, but there no thermodynamic limit is involved, similarly to the third one, free photons, in section 3.3.
\subsection{Non-relativistic gravitational systems}
This application is based on the model of $N$ fermions interacting via Newtonian attractions, as represented by \eqref{(3.1.1)}. The quantum thermodynamics of this model was derived by Hertel, Narnhofer and Thirring \cite{HNT}.
In \cite{HNT}, the system of $N$ electrically neutral, massive fermions of one species, interacting by Newtonian forces, was studied as a model of a neutron star. We shall follow the excellent summary by Sewell \cite{Se} (see also \cite{NaSe} for a detailed study of the equilibrium states of the system). The Hamiltonian on a Hilbert space ${\cal H}_{N,V}$ is given by
\begin{equation}
H_{N,V} = -\frac{\hbar^{2}}{2m}\sum_{j=1}^{N} \Delta_{j}-\kappa m^{2}\sum_{j\ne k \in V;j,k=1}^{N}r_{jk}^{-1}
\label{(3.1.1)}
\end{equation}
where $\kappa$ is the gravitational constant, $\Delta_{j}$ the Laplacian for the $j$-th particle, and $r_{jk}$ the distance between the $j$-th and the $k$-th particle. Owing to the long-range, attractive nature of the interactions, as explained in \cite{Se} and derived in \cite{HNT}, the statistical thermodynamics of the model may be formulated within a framework in which, for each value of $N$, the system is confined to a spatial region $\Omega_{N}$ such that the volume $V_{N}$ of $\Omega_{N}$ is proportional to $N^{-1}$. It may then be proved that if the energy $E_{N}$ of the system is such that $N^{-7/3}E_{N} \to e$ and $NV_{N} \to v$ as $N \to \infty$, the microcanonical specific entropy $N^{-1}S_{N}$ converges to a function $s$ of $(e,v)$. In order to preserve the original \emph{extensive} variables $E,V,N$, we proceed as in (\cite{ThirrLi}, p.5), defining the function
\begin{equation}
s(E,V,N) \equiv \lim_{\lambda \to \infty} \frac{1}{\lambda} S(\lambda^{-7/3}E, \lambda^{-1}V, \lambda N)
\label{(3.1.2)}
\end{equation}
where
\begin{equation}
S(E,V,N) \equiv \log \dim ({\cal H}_{N,V}^{E})
\label{(3.1.3)}
\end{equation}
and ${\cal H}_{N,V}^{E}$ denotes the subspace of ${\cal H}_{N,V}$ satisfying the condition
\begin{equation}
Tr_{{\cal H}_{N,V}^{E}} H_{N,V} < E
\label{(3.1.4)}
\end{equation}
We shall in the following consider a \emph{fixed} number $N$ of particles (as in \cite{LYPR}), and take the limit in \eqref{(3.1.2)} along $\lambda \in \mathbf{N}$, i.e., along the positive integers: we shall denote the function so defined by $s(E,V)$. This number $N$ is assumed to be an integer, and $N \ge 1$. Note that $N=0$ is a \emph{crucial} value in \cite{L}, but would be ambiguous in \eqref{(3.1.2)}.
Our first application of theorem~\ref{th:2.1} consists in choosing there $d=2$ and $X \equiv \mathbf{R}_{-}^{E} \times \mathbf{R}_{+}^{V}$, where the superscripts refer to the variables $E$ and $V$. This choice satisfies Assumption A. Indeed, it follows from the framework just described, and from the attractive character of the interactions in \eqref{(3.1.1)}, that
\begin{equation}
-\infty < E < 0
\label{(3.2)}
\end{equation}
as well as
\begin{equation}
0 < V < \infty
\label{(3.3)}
\end{equation}
are the ranges of the variables $E$ and $V$. Further, both the point $(E,V)=(0,0)$ and the half-axes $(\mathbf{R}_{-},0)$ and $(0,\mathbf{R}_{+})$ lie in the complementary region of the physical values of the quantities $(E,V)$. The (quantum) microcanonical entropy $S(E,V,N)= \log \dim ({\cal H}_{N,V}^{E}) \ge 0$, and thus
\begin{equation}
s(E,V) \ge 0 \mbox{ if } (E,V) \in \mathbf{R}_{-} \times \mathbf{R}_{+}
\label{(3.4)}
\end{equation}
Therefore \eqref{(2.1.2)} holds for $s(E,V)$, i.e.,
\begin{equation}
\liminf_{(E,V) \to (0,0)} s(E,V) \ge 0
\label{(3.5)}
\end{equation}
By construction, $s(E,V)$ does not satisfy (H), but it does satisfy (Sp). The latter property is most easily seen to hold from the property of the inverse function $e(S,V)$, which is \emph{subadditive} (\cite{ThirrLi}, \cite{HNT}), as a consequence of the attractive nature of the interactions (see also \cite{Ru}, pp. 42,65). We thus arrive at
\begin{proposition}
\label{prop:3.2}
The function $s(E,V)$ does not satisfy (Cc), i.e., thermodynamic stability fails for model \eqref{(3.1.1)}.
\end{proposition}
(Cc) is the condition of thermodynamic stability. The standard manifestation of \emph{non}-(Cc) is a negative specific heat: $s$ becomes convex with respect to $E$, leading to a phase transition of van der Waals type. For model \eqref{(3.1.1)} this was shown in \cite{HNT} (see also \cite{HT} for a soluble model). We refer to \cite{Thirr1} for the discussion of the stage in the stellar evolution in which such behavior is expected.
\begin{remark}
\label{Remark 3.1}
When the specific entropy is regarded as a function of the state, represented by a density matrix $\rho$, then $s$ is \emph{subadditive} rather than superadditive (see, e.g., \cite{Carlen}, Theorem 6.5, p. 122, and the subsequent discussion for a comprehensive account of various related results). In this framework, instead of continuity of $s$, one obtains upper semicontinuity \cite{Sewell1}. This property is crucial in the dynamic proof of the second law in \cite{therm3}.
\end{remark}
As remarked in \cite{Se}, for very large $N$ (of the order of $10^{60}$), the nonrelativistic model \eqref{(3.1.1)} becomes unphysical, because the mean particle velocities become comparable to the velocity of light. If the star's mass exceeds the Chandrasekhar limit, rigorously analysed in \cite{LYau}, it is believed that its collapse leads to a black-hole. A very nice account of black-hole thermodynamics may be found in section 4 of \cite{Se}, see also \cite{SePLA}. A special model thereof is the Kerr-Newman black-hole, considered in \cite{L}, to which we now come.
\subsection{The Kerr-Newman black-hole}
A (classical) Kerr-Newman black-hole of charge $Q$, angular momentum $J$ and mass (energy) $M$ is assumed to be described by the Bekenstein-Hawking entropy $S_{BH}$, defined by (see \cite{Wald}):
\begin{equation}
S_{BH} = \pi(2M^{2}+2M\sqrt{M^{2}-a^{2}-Q^{2}}-Q^{2})
\label{(3.6)}
\end{equation}
where
\begin{equation}
a = \frac{J}{M}
\label{(3.7)}
\end{equation}
and the inequality
\begin{equation}
M^{2} > a^{2}+Q^{2}
\label{(3.8)}
\end{equation}
is assumed. A rigorous derivation of the second law of thermodynamics for black-holes is given in \cite{SePLA}. As remarked and explained by Sewell (\cite{Se}, \cite{SePLA}), $S_{BH}$ has, at most, an information-theoretic content, not a statistical thermodynamic one. This fact is due to the idealization involved in models such as \eqref{(3.6)}-\eqref{(3.8)}, essentially the characterization of a black-hole by only three quantities, without any microstructure (Wheeler's ``no-hair theorem''). It follows from the explicit formula \eqref{(3.6)} that $S_{BH}$ does not satisfy (H). Let, for simplicity, $Q=0$, and define the variables $x=(M,J)$, with ranges $M \in (0,\infty)$ and $J \in (0,\infty)$. It may be shown that $S_{BH}$ is strictly superadditive, i.e.,
\begin{equation}
S_{BH}(x_{1}+x_{2}) > S_{BH}(x_{1}) + S_{BH}(x_{2})
\label{(3.9)}
\end{equation}
(see \cite{L}, p. 161). Choosing $X= \mathbf{R}_{+} \times \mathbf{R}_{+}$, we see that Theorem~\ref{th:2.1} is (trivially) applicable to $S_{BH}(x)$, with $S_{BH}$ continuous at $x=(0,0)$, and $S_{BH}(0,0)=0$, asserting that thermodynamic stability (Cc) must be violated. It is straightforward to show this for the Kerr-Newman black-hole, because $S_{BH}$ is twice continuously differentiable in each of its variables, and, in this case, a necessary condition for (Cc) is (see, e.g., \cite{HLP}, p.81, (3.12.4))
\begin{equation}
\frac{\partial^{2}S_{BH}}{\partial M^{2}} \le 0
\label{(3.10)}
\end{equation}
We find
\begin{equation}
\frac{\partial^{2}S_{BH}}{\partial M^{2}} \ge 4
\label{(3.11)}
\end{equation}
and, therefore, thermodynamic stability indeed fails for the Kerr-Newman black-hole, as predicted by Theorem~\ref{th:2.1}. Thus, although $S_{BH}$ is not a true statistical thermodynamic entropy, by Sewell's previously mentioned remarks, it is, nevertheless, reassuring that it has ``inherited'' the instability properties non-(H) and non-(Cc) from the models of nonrelativistic gravitational systems.
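The computation leading to \eqref{(3.11)} is easily reproduced symbolically. The sketch below (ours, using \texttt{sympy}; the sample points are chosen arbitrarily, well inside the region \eqref{(3.8)}) verifies that $\partial^{2}S_{BH}/\partial M^{2}>0$ there, which already violates \eqref{(3.10)}, and spot-checks the strict superadditivity \eqref{(3.9)}:

```python
import sympy as sp

M, J = sp.symbols('M J', positive=True)
# Bekenstein-Hawking entropy (3.6)-(3.7) with Q = 0
S_BH = sp.pi * (2 * M ** 2 + 2 * M * sp.sqrt(M ** 2 - (J / M) ** 2))
d2S = sp.diff(S_BH, M, 2)

# a single point with positive second derivative already violates the
# concavity condition (3.10); these points satisfy M^2 > a^2 comfortably
for Mv, Jv in [(1, sp.Rational(3, 10)), (2, 1), (5, 4)]:
    assert float(d2S.subs({M: Mv, J: Jv})) > 4

# strict superadditivity (3.9), spot-checked at one pair of points
def S(Mv, Jv):
    return float(S_BH.subs({M: Mv, J: Jv}))

assert S(3, 1) > S(1, sp.Rational(1, 2)) + S(2, sp.Rational(1, 2))
```

In this numerical exploration the bound \eqref{(3.11)} holds at points well away from extremality; in any case, a single point with $\partial^{2}S_{BH}/\partial M^{2}>0$ suffices to exclude \eqref{(3.10)}.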
\subsection{The free photon gas}
The entropy of the free photon gas has curious properties from the thermodynamical standpoint (see \cite{LYPR} and references given there). It is given by
\begin{equation}
s_{ph}(E,V) = E^{3/4}V^{1/4}
\label{(3.12)}
\end{equation}
defined on $\mathbf{R}_{+} \times \mathbf{R}_{+}$, with $s_{ph}$ continuous at $(0,0)$, and $s_{ph}(0,0)=0$, thus satisfying \eqref{(2.1.2)} trivially. $s_{ph}$ clearly satisfies (H). Further, it is easily seen to be concave by (\cite{HLP}, p.81, (3.12.4)), but \emph{not} strictly concave, because, denoting partial derivatives by superscripts, $(s_{ph}^{EV})^{2}-s_{ph}^{EE}s_{ph}^{VV}=0$, and, by (3.12.5) of \cite{HLP}, p.81, $(s^{EV})^{2}-s^{EE}s^{VV}<0$ is necessary for strict concavity of a function $s$. As a consequence of theorem~\ref{th:2.1}, $s_{ph}$ satisfies (Sp), a property which is (surprisingly) cumbersome to prove directly. Strict superadditivity does not, however, follow from the theorem, because of the previous remark on strict concavity.
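These properties of $s_{ph}$ can be checked mechanically. The following sketch (ours, using \texttt{sympy}) verifies homogeneity (H), the vanishing of $(s_{ph}^{EV})^{2}-s_{ph}^{EE}s_{ph}^{VV}$, and superadditivity (Sp) at a few sample points:

```python
import itertools
import sympy as sp

E, V, lam = sp.symbols('E V lam', positive=True)
s = E ** sp.Rational(3, 4) * V ** sp.Rational(1, 4)   # photon-gas entropy (3.12)

# (H): s(lam E, lam V) = lam s(E, V)
assert sp.simplify(s.subs({E: lam * E, V: lam * V}, simultaneous=True) - lam * s) == 0

# concave, but the Hessian is degenerate: (s^{EV})^2 - s^{EE} s^{VV} = 0,
# so strict concavity fails
sEE, sVV, sEV = sp.diff(s, E, 2), sp.diff(s, V, 2), sp.diff(s, E, V)
assert sp.simplify(sEV ** 2 - sEE * sVV) == 0
assert float(sEE.subs({E: 1, V: 1})) < 0   # concavity in E at a sample point

# (Sp): s(x1 + x2) >= s(x1) + s(x2), spot-checked numerically
f = sp.lambdify((E, V), s)
pts = [(1, 2), (3, 1), (0.5, 0.5)]
for (E1, V1), (E2, V2) in itertools.combinations(pts, 2):
    assert f(E1 + E2, V1 + V2) >= f(E1, V1) + f(E2, V2)
```

The degenerate Hessian is forced by (H): every function homogeneous of degree one is linear along rays, and hence never strictly concave.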
\section{Conclusion}
As remarked by Thirring (\cite{ThirrLi}, p. 5), (H) has the interpretation of stability against implosion. On the other hand, the property of subadditivity of the inverse function $e(S,V)$ has the interpretation of stability against explosion: ``one gains energy by putting two parts together''. Due to the Newtonian attraction, strict superadditivity of $s(E,V)$ is therefore expected, which indeed holds for the model in subsection 3.1, and is even inherited by the black-hole model of section 3.2. This intuition is, of course, not applicable to the free photon gas of subsection 3.3, and, indeed, as remarked there, strict superadditivity is not a direct consequence of theorem~\ref{th:2.1}.
Another indication that subadditivity of the energy may be the general property responsible for the equivalence between stability in the sense of (H) and thermodynamic stability is, as remarked by Thirring in \cite{ThirrLi}, the universal character of van der Waals forces (which are attractive) for neutral assemblies of atoms or molecules \cite{LT}.
The main general obstruction to the validity of \eqref{(2.1.2)} in applications to statistical thermodynamics (with $f$ taken as the entropy function) is posed by \emph{classical} statistical mechanics: the classical entropy is unbounded from below \cite{RR} (see also remark~\ref{Remark 2.1}). This adds a further link to the importance of quantum theory in various stability and instability aspects of cosmic bodies.
\section{Acknowledgements}
I am very much indebted to Geoffrey Sewell for pointing out an error in a previous version of this manuscript, and thank him, as well as Pedro L. Ribeiro, for their interest. Special thanks are due to Elliott Lieb for his kind remarks on this paper, as well as a correction. I should like to thank the referee for useful remarks, as well as for his positive appraisal of the present paper.
\section{Introduction and brief review}
In the last few years an enormous effort has been made in order to establish
the
connection between
string theories \cite{green} (especially $E_8 \times E_8$ heterotic string
\cite{gross})
and low energy
physics. Different schemes for constructing classical string vacua have
arisen during this time. Using these schemes it has been possible to build up
four--dimensional
strings that resemble the Standard Model in many aspects, e.g. $SU(3) \times
SU(2) \times U(1)_Y $ gauge group and three generations of particles with the
correct
representations [3--6]. In spite of these achievements
there remain many pending
questions. In
particular there is a large number of classical vacuum states, which reduces
the
predictive
power of the theory. At present there are no dynamical criteria to prefer a
particular
vacuum, so the best we can do is to study their phenomenological
characteristics
in order to
select the viable vacua. In this respect, orbifold compactifications
\cite{dixon} have proved to be
very interesting four--dimensional string constructions since they can pass
succesfully a
certain number of low energy tests \cite{jcasas}. However, not all the
experimental constraints
have been
used in order to probe the phenomenological potential of orbifolds. The best
example of this
is the observed structure of fermion masses and mixing angles \cite{jacasas}.
Concerning the last point, a crucial ingredient in order to relate theory and
observation is
the complete knowledge of the theoretical Yukawa couplings. This knowledge
includes a certain
number of aspects:
\begin{enumerate}
\item[i)] Physical states that enter the couplings.
\item[ii)] Allowed couplings.
\item[iii)] Numerical values of the Yukawa couplings and dependence of these
values on the
physical parameters that define the string vacuum (e.g. the size of the
compactified space).
\item[iv)] Number of different Yukawa couplings and phenomenological viability
of the scheme
($i.e.$ fitting of the observed pattern of fermion masses and mixing angles by
the theoretical Yukawa couplings).
\end{enumerate}
In this sense, only for the prime Abelian orbifolds, $i.e.$ $Z_3$ and $Z_7$, are the
Yukawa couplings completely known \cite{hamidi,jacasas,gomez}. For the
other orbifolds points i) and ii) have
recently been studied in ref.\cite{kobayashi}. General expressions of
$Z_n$ Yukawa couplings have been
determined in ref.\cite{burwick}. However, although very useful, they are not
explicit enough to
elucidate points iii) and iv), especially when deformations of the compactified
space are
considered. Undoubtedly, a better knowledge of the Yukawa couplings is of
utmost importance
to select or discard explicit string constructions with a highly non--trivial
test. This is
the main purpose of this paper, $i.e.$ to answer points i), ii), iii), iv)
for {\em all} the $Z_n$
orbifolds. Besides the phenomenological motivation, there are strong
theoretical
reasons to
completely determine the Yukawa couplings. In particular it is the only way to
know the moduli
dependence of the matter Lagrangian and, in consequence, the superpotential.
This allows the
examination of the properties of the action under target-space modular
transformations (e.g. $R
\rightarrow 1/R$) \cite{ferrara}. It is also necessary in order to discuss
supersymmetry breaking dynamics \cite{lalak}
and cosmological implications (note that moduli play the role of Brans-Dicke
fields in four
dimensions) of these theories.
Let us review briefly $Z_n$ orbifold constructions. A $Z_n$ orbifold is
constructed by
dividing $R^6$ by a six--dimensional lattice $\Lambda$ modded by some $Z_n$
symmetry, called the point group P. The space group S is defined
as $S=\Lambda\times P$, $i.e.$ $S=\{(\gamma,u);\ \gamma\in P,\
u\in\Lambda\}$. The requirement of having $N=1$ supersymmetry in
four dimensions and the absence of tachyons restrict the number
of possible point groups \cite{dixon}. The complete list is given in the
first two columns of Table 1, where the so--called twist
$\theta$ ($i.e.$ the generator of P) is represented in an
orthogonal complex basis of the six--dimensional space.
$\Lambda$ must be chosen so that $\theta$ acts
crystallographically on it. If the realization of $\theta$ on
the lattice coincides with the Coxeter element of a rank--six
Lie algebra root lattice, the orbifold is of the Coxeter
type. A list of Coxeter orbifolds, taken from ref. \cite{markushevich}, is
given in
the third column of Table 1. Some additional examples of Coxeter
orbifolds can be found in ref. \cite{kobayashi}. The lattice of the
$Z_8$--II orbifold, $SO(5)\times SO(8)$, corresponds in fact to
a generalized Coxeter orbifold where the Coxeter element has
been multiplied by an outer automorphism. Non--Coxeter orbifolds
can also be constructed. An example of a non--Coxeter orbifold
(the $Z_4$ one with $[SO(4)]^3$ lattice) is studied in Section 3.
The total number of possible lattices associated with each $Z_n$
orbifold can be found in ref. \cite{ono}. As will become clear in the text,
some
properties of the Yukawa couplings for a particular $Z_n$ orbifold depend
on the lattice chosen, while others do not.
It is important to point out that in a string orbifold
construction the lattice $\Lambda$ can get deformations
compatible with the point group \cite{jacasas,gomez}. These degrees of freedom
correspond to the untwisted moduli surviving compactification.
Deformations play an important role on the value of the Yukawa couplings.
We are interested in the couplings between twisted fields (the
untwisted sector has already been studied \cite{hamidi} and is physically
less interesting \cite{jacasas}). As we will see, these couplings present a
very rich range, which is extremely attractive as the
geometrical origin of the observed variety of fermion masses
\cite{hamidi,ibanez,jacasas}. A
twisted string satisfies $x(\sigma=2\pi)=gx(\sigma=0)$ as the
boundary condition, where $g$ is an element of the space group
whose point group component is non--trivial. Owing to the boundary
condition a twisted string is attached to a fixed point of $g$.
Physical twisted fields are associated with conjugation classes
of the space group rather than with particular elements \cite{dixon}. For
example $\{hgh^{-1},\ h\in S \}$ is the conjugation class of $g$. For
prime orbifolds conjugation classes are in one--to--one
correspondence with the fixed points of $P$. For non--prime
orbifolds the situation is a bit more involved since two
different fixed points under $\theta^n$ may be connected by
$\theta^m,\ m<n$. Then both of them correspond to the same conjugation
class.
The paper is organized as follows. In Section 2 we expound the various steps
necessary to obtain the final spectrum of Yukawa couplings for each
orbifold,
taking the $Z_6$--I as a guide example. These steps include: determination of
the
geometrical structure of the orbifold and deformation parameters, physical
states,
calculation of explicit Yukawa couplings, and counting of different couplings.
Section 3
is devoted to a comparative study of the $[SO(4)]^3$ and $[SU(4)]^2$ $Z_4$
orbifolds. This
shows which properties of the couplings depend on the lattice chosen and which
do not.
Furthermore, the $[SO(4)]^3$ case provides an example of a non--Coxeter
orbifold. Besides
this, the $Z_4$ orbifold allows one to see the physical meaning of a (1,2)
modulus (absent in
the $Z_6$--I case) and its effect in the Yukawa coupling values. The complete
results for the
rest of $Z_n$ orbifolds are given in Appendix 1 and summarized in Table 1.
\section{The method}
Several steps are necessary in order to obtain the final spectrum of
Yukawa couplings for each orbifold. We explain these steps in
the present section, taking the $Z_6$--I orbifold as an illustrative example.
The
reason for this choice is that prime orbifolds ($Z_3$ and $Z_7$) have already
been studied in depth in references \cite{hamidi,jacasas,gomez}. A complete
exposition of the method followed here can be found in ref. \cite{gomez}. It
is,
however, convenient to discuss the present example in some detail, since
non--prime
orbifolds exhibit certain features which are absent in the prime ones.
\subsection{Geometrical structure}
The twist $\theta$ of the $Z_6$--I
orbifold has the form (see Table 1)
\begin{equation}
\theta={\rm diag}(e^{i\alpha},e^{i\alpha},e^{-2i\alpha}),\;\;\;\;
\alpha=\frac{2\pi}{6}
\label{diag61}
\end{equation}
in the complex orthogonal basis $\{(\tilde e_1,\tilde e_2),\
(\tilde e_3,\tilde e_4),\ (\tilde e_5,\tilde e_6)\ \}$. Very
often it is more suitable to work in the lattice basis
$\{e_1,...,e_6\}$, which in this case is simply a set of simple
roots of $G_2^2\times SU(3)$. Then $\theta$ acts as the Coxeter
element
\begin{eqnarray}
\begin{array}{lcl}
\theta e_1 = -e_1- e_2 ,& & \theta e_2 = 3e_1+2e_2 ,\\
\theta e_3 = -e_3- e_4 ,& & \theta e_4 = 3e_3+2e_4 ,\\
\theta e_5 = e_6 ,& & \theta e_6 = -e_5- e_6 .
\end{array}
\label{teta61}
\end{eqnarray}
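As a consistency check (ours, using \texttt{numpy}), one can verify that the matrix of eq.~(\ref{teta61}) is an order--6 lattice automorphism whose eigenvalue phases reproduce the diagonal form of eq.~(\ref{diag61}):

```python
import numpy as np

# Coxeter element in the lattice basis; columns are the images theta e_i
G2 = np.array([[-1, 3],
               [-1, 2]])     # theta e1 = -e1 - e2,  theta e2 = 3 e1 + 2 e2
SU3 = np.array([[0, -1],
                [1, -1]])    # theta e5 = e6,        theta e6 = -e5 - e6
theta = np.zeros((6, 6), dtype=int)
theta[0:2, 0:2] = G2
theta[2:4, 2:4] = G2
theta[4:6, 4:6] = SU3

# theta has integer entries and theta^6 = 1: it acts crystallographically
assert np.array_equal(np.linalg.matrix_power(theta, 6), np.eye(6, dtype=int))

# eigenvalue phases (+-alpha, +-alpha, +-2 alpha), alpha = 2 pi / 6, matching
# diag(e^{i alpha}, e^{i alpha}, e^{-2 i alpha}) together with the conjugates
alpha = 2 * np.pi / 6
phases = np.sort(np.angle(np.linalg.eigvals(theta)))
assert np.allclose(phases, np.sort([alpha, -alpha, alpha, -alpha,
                                    2 * alpha, -2 * alpha]))

# theta^3 is trivial on the SU(3) sublattice (e5, e6)
assert np.array_equal(np.linalg.matrix_power(SU3, 3), np.eye(2, dtype=int))
```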
Note that we have labelled as $e_5,e_6$ the simple roots of
$SU(3)$. As mentioned above $\Lambda$ can get
deformations compatible with the point group. These degrees of
freedom correspond to the Hermitian part of the five untwisted
(1,1) moduli surviving compactification $N_{1\bar 1},N_{2\bar
2},N_{3\bar 3},N_{1\bar 2},N_{2\bar 1},$ where $N_{i\bar j}=
|i>_R \otimes \alpha_{\bar j L}^{-1}|0>_L;\;\; |0>_L$ ($|0>_R$) being
the left (right) vacuum, $\alpha_L$ is an oscillator operator
and $i$ ($\bar j$) is a holomorphic (antiholomorphic) index.
Note that under a deformation the action of $\theta$ on the
lattice basis, eq. (\ref{teta61}), remains the same. Then $P$
invariance imposes the following relations:
\begin{eqnarray}
\begin{array}{ccc}
|e_2|\;=\; \sqrt{3}\;|e_1|, & |e_4|\;=\;\sqrt{3}\;|e_3|, & |e_5|\;=\;|e_6|, \\
\alpha_{12}\;=\; -\sqrt{3}/2, & \alpha_{34}\;=\;-\sqrt{3}/2, &
\alpha_{56}\;=\;-1/2, \\
\alpha_{24}\;=\;\alpha_{13},& \alpha_{23}\;=\;-\sqrt{3}\alpha_{13}-\alpha_{14},
&
\alpha_{ij}=0 \;\;\; i=1,2,3,4 \;\; j=5,6
\end{array}
\label{reldef61}
\end{eqnarray}
where $\alpha_{ij}=\cos\theta_{ij}$ and
$e_ie_j=|e_i||e_j|\cos\theta_{ij}$. Therefore we can take the
5 {\em deformation degrees of freedom}
\begin{eqnarray}
\begin{array}{c}
R_i=|e_i|;\;\;\;i=1,3,5 \\
\alpha_{13},\;\alpha_{14}
\end{array}
\label{pardef61}
\end{eqnarray}
$R_i$ are global scales of the three sublattices
($G_2,G_2,SU(3)$), and for $\alpha_{13}=\alpha_{14}=0$ we
recover the rigid $G_2\times G_2\times SU(3)$ lattice. The
connection between the lattice basis $(e_1,...,e_6)$ and the
orthogonal basis $(\tilde e_1,...,\tilde e_6)$ in which $\theta$
takes the form (\ref{diag61}) is given by
\begin{eqnarray}
e_i&=& A_i\;[\cos(\varphi_1^i) \tilde e_1 + \sin(\varphi_1^i) \tilde e_2] +
B_i\;[\cos(\varphi_2^i) \tilde e_3 + \sin(\varphi_2^i) \tilde e_4 ] \nonumber\\
e_{i+1}&=& -\sqrt{3}[A_i\;(\sin(\frac{\pi}{3}-\varphi_1^i) \tilde e_1 +
\cos(\frac{\pi}{3}-\varphi_1^i) \tilde e_2) + B_i\;(
\sin(\frac{\pi}{3}-\varphi_2^i) \tilde e_3 +
\cos(\frac{\pi}{3}-\varphi_2^i) \tilde e_4)] \nonumber\\
e_j&=& R_5\;[ \cos((j-5)\frac{2\pi}{3}+\phi)
\tilde e_5 + \sin((j-5) \frac{2\pi}{3}+\phi) \tilde e_6 ]\;;\;\;i=1,3\;\;j=5,6
\label{latort61}
\end{eqnarray}
where $\varphi_1^1,\;\varphi_1^3,\;\varphi_2^1,\;\varphi_2^3,\;\phi$ are
arbitrary angles that are
irrelevant for our results and
\[
\begin{array}{ll}
A_1 = R_1 \frac{1}{\sqrt{2}}[\Delta_1 \pm \Delta_3]^{1/2}&
B_1 = R_1 \frac{1}{\sqrt{2}}[\Delta_2 \mp \Delta_3]^{1/2}\\
A_3 = k_2 R_3 \sqrt{2}[\Delta_1 \pm \Delta_3]^{-1/2}&
B_3 = k_1 R_3 \sqrt{2}[\Delta_2 \mp \Delta_3]^{-1/2}
\end{array}
\]
with
\[
\begin{array}{c}
k_i=2 R_1R_3 (-1)^i [\alpha_{13}
\sin(\frac{\pi}{3}+\varphi_i^1-\varphi_i^3)+\alpha_{14}
\cos(\varphi_i^1-\varphi_i^3)]
[\sin(\varphi_2^1-\varphi_2^3-\varphi_1^1+\varphi_1^3)]^{-1}\;\; i=1,2 \\
\Delta_1=1+k_2^2-k_1^2\;\;\;\;\;\Delta_2=1-k_2^2+k_1^2\;\;\;\;\;\Delta_3=[(1-k_1
^2-k_2^2
)^2-4k_1^2k_2^2]
^{1/2} .
\end{array}
\]
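The count of five deformation parameters in eq.~(\ref{pardef61}) can be cross-checked by linear algebra: invariance of the lattice metric $g_{ij}=e_{i}\cdot e_{j}$ under the point group, $\theta^{T}g\,\theta=g$, is a linear system in the 21 independent components of $g$, and its solution space should be five--dimensional. A \texttt{sympy} sketch of this check (ours):

```python
import sympy as sp

# twist in the lattice basis (columns are the images of the e_i)
theta = sp.Matrix([
    [-1, 3,  0, 0,  0,  0],
    [-1, 2,  0, 0,  0,  0],
    [ 0, 0, -1, 3,  0,  0],
    [ 0, 0, -1, 2,  0,  0],
    [ 0, 0,  0, 0,  0, -1],
    [ 0, 0,  0, 0,  1, -1],
])
# generic symmetric metric g_ij = e_i . e_j (21 unknowns)
g = sp.Matrix(6, 6, lambda i, j: sp.Symbol(f'g{min(i, j)}{max(i, j)}'))
unknowns = sorted(g.free_symbols, key=str)

# point-group invariance theta^T g theta = g as a homogeneous linear system
eqs = list(theta.T * g * theta - g)
A, _ = sp.linear_eq_to_matrix(eqs, unknowns)

# dimension of the solution space = number of real metric deformations
assert len(unknowns) == 21
assert len(unknowns) - A.rank() == 5
```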
Let us consider now the fixed points under the action of the
point group. $f_n$ is a fixed point under $\theta^n$ if it
satisfies $f_n=\theta^n f_n+u,\ u\in\Lambda$. As $Z_6$--I is a
non--prime orbifold, a point fixed by $\theta^n\;(n \neq 1)$ may not be
fixed by $\theta^m\;(m \neq n)$. Consequently, the fixed
points under $\theta$, $\theta^2$ and $\theta^3$ must be
considered separately ($\theta^4$, $\theta^5$ are simply the
antitwists of $\theta^2$ and $\theta$). It is easy to check from
(\ref{teta61}) that there are three different fixed points under
$\theta$. Working in the lattice basis their coordinates (up to
lattice translations) are
\begin{eqnarray}
f_1^{(1)}&=& g_1^{(0)} \otimes g_1^{(0)} \otimes \hat g_1^{(0)}, \nonumber \\
f_1^{(2)}&=& g_1^{(0)} \otimes g_1^{(0)} \otimes \hat g_1^{(1)}, \nonumber \\
f_1^{(3)}&=& g_1^{(0)} \otimes g_1^{(0)} \otimes \hat g_1^{(2)}
\label{fix161}
\end{eqnarray}
with
\begin{eqnarray}
\begin{array}{llll}
g_1^{(0)} = (0,0), & \hat g_1^{(0)} = (0,0), &
\hat g_1^{(1)} = (\frac{1}{3},\frac{2}{3}), & \hat g_1^{(2)} =
(\frac{2}{3},\frac{1}{3}) .
\end {array}
\end {eqnarray}
Similarly, under $\theta^2$ there are 27 fixed points, 24 of which
are pairwise connected by a $\theta$ rotation
\begin{eqnarray}
\begin{array}{ll}
f_2^{(1)}= g_2^{(0)} \otimes g_2^{(0)} \otimes \hat g_2^{(0)}, & \\
f_2^{(2)}= g_2^{(0)} \otimes g_2^{(0)} \otimes \hat g_2^{(1)}, & \\
f_2^{(3)}= g_2^{(0)} \otimes g_2^{(0)} \otimes \hat g_2^{(2)}, & \\
f_2^{(4)}= g_2^{(0)} \otimes g_2^{(1)} \otimes \hat g_2^{(0)}\sim
g_2^{(0)} \otimes g_2^{(2)} \otimes \hat g_2^{(0)},&
f_2^{(10)}= g_2^{(1)} \otimes g_2^{(1)} \otimes \hat g_2^{(0)}\sim
g_2^{(2)} \otimes g_2^{(2)} \otimes \hat g_2^{(0)}, \\
f_2^{(5)}= g_2^{(0)} \otimes g_2^{(1)} \otimes \hat g_2^{(1)}\sim
g_2^{(0)} \otimes g_2^{(2)}\otimes \hat g_2^{(1)} ,&
f_2^{(11)}= g_2^{(1)} \otimes g_2^{(1)} \otimes \hat g_2^{(1)}\sim
g_2^{(2)} \otimes g_2^{(2)} \otimes \hat g_2^{(1)} , \\
f_2^{(6)}= g_2^{(0)} \otimes g_2^{(1)} \otimes \hat g_2^{(2)}\sim
g_2^{(0)} \otimes g_2^{(2)} \otimes \hat g_2^{(2)} ,&
f_2^{(12)}= g_2^{(1)} \otimes g_2^{(1)} \otimes \hat g_2^{(2)}\sim
g_2^{(2)} \otimes g_2^{(2)} \otimes \hat g_2^{(2)} , \\
f_2^{(7)}= g_2^{(1)} \otimes g_2^{(0)} \otimes \hat g_2^{(0)}\sim
g_2^{(2)} \otimes g_2^{(0)} \otimes \hat g_2^{(0)} , &
f_2^{(13)}= g_2^{(1)} \otimes g_2^{(2)} \otimes \hat g_2^{(0)}\sim
g_2^{(2)} \otimes g_2^{(1)} \otimes \hat g_2^{(0)} , \\
f_2^{(8)}= g_2^{(1)} \otimes g_2^{(0)} \otimes \hat g_2^{(1)}\sim
g_2^{(2)} \otimes g_2^{(0)} \otimes \hat g_2^{(1)} ,&
f_2^{(14)}= g_2^{(1)} \otimes g_2^{(2)} \otimes \hat g_2^{(1)}\sim
g_2^{(2)} \otimes g_2^{(1)} \otimes \hat g_2^{(1)} , \\
f_2^{(9)}= g_2^{(1)} \otimes g_2^{(0)} \otimes \hat g_2^{(2)}\sim
g_2^{(2)} \otimes g_2^{(0)} \otimes \hat g_2^{(2)} , &
f_2^{(15)}= g_2^{(1)} \otimes g_2^{(2)} \otimes \hat g_2^{(2)}\sim
g_2^{(2)} \otimes g_2^{(1)} \otimes \hat g_2^{(2)}
\end{array}
\label{fix261}
\end{eqnarray}
with
\begin {eqnarray}
\begin {array}{lll}
g_2^{(0)} = (0,0), & g_2^{(1)} = (0, \frac{1}{3}), & g_2^{(2)}
=(0,\frac{2}{3}),\\
\hat g_2^{(0)} = (0,0), & \hat g_2^{(1)} = (\frac{1}{3}, \frac{2}{3}), &
\hat g_2^{(2)} = (\frac{2}{3},\frac{1}{3}) .
\end {array}
\end {eqnarray}
Consequently, there are 15 conjugation classes under $\theta^2$.
Finally, under $\theta^3$, there are 16 fixed tori that are the
direct product of 16 fixed points in the sublattice
$(e_1,...,e_4)$ by the 2--torus defined by the sublattice
$(e_5,e_6)$. (Notice that $\theta^3$ is trivial in the $SU(3)$
root lattice.) 15 of these fixed points are connected among themselves, in
triplets, by $\theta$ rotations
\begin{eqnarray}
\begin {array}{lclcl}
f_3^{(1)}= g_3^{(0)} \otimes g_3^{(0)}, & & & & \\
f_3^{(2)}= g_3^{(1)} \otimes g_3^{(0)}&\sim& g_3^{(2)} \otimes g_3^{(0)}&\sim&
g_3^{(3)} \otimes g_3^{(0)},\\
f_3^{(3)}= g_3^{(0)} \otimes g_3^{(1)}&\sim& g_3^{(0)} \otimes g_3^{(2)}&\sim&
g_3^{(0)} \otimes g_3^{(3)},\\
f_3^{(4)}= g_3^{(1)} \otimes g_3^{(1)}&\sim& g_3^{(2)} \otimes g_3^{(2)}&\sim&
g_3^{(3)} \otimes g_3^{(3)},\\
f_3^{(5)}= g_3^{(2)} \otimes g_3^{(1)}&\sim& g_3^{(3)} \otimes g_3^{(2)}&\sim&
g_3^{(1)} \otimes g_3^{(3)},\\
f_3^{(6)}= g_3^{(3)} \otimes g_3^{(1)}&\sim& g_3^{(1)} \otimes g_3^{(2)}&\sim&
g_3^{(2)} \otimes g_3^{(3)}
\end{array}
\label{fix361}
\end{eqnarray}
with
\begin{eqnarray}
\begin{array}{llll}
g_3^{(0)} = (0,0), & g_3^{(1)}= (0,\frac{1}{2}), &
g_3^{(2)}= (\frac{1}{2},0), & g_3^{(3)}= (\frac{1}{2},\frac{1}{2}) .
\end{array}
\label{pfix361}
\end{eqnarray}
The direct product under the $(e_5,e_6)$ torus has been
understood. Consequently, there are 6 conjugation classes under $\theta^3$.
A similar analysis for other orbifolds can be found in Appendix 1.
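The fixed--point multiplicities found above (3 under $\theta$, 27 under $\theta^{2}$, and 16 fixed points in the $(e_1,...,e_4)$ sublattice under $\theta^{3}$) agree with the standard counting formula $|\det(1-\theta^{n})|$, evaluated on the lattice; a vanishing determinant signals the appearance of fixed tori. A short \texttt{numpy} cross-check (ours):

```python
import numpy as np

# twist in the lattice basis, block-diagonal over G2 x G2 x SU(3)
G2 = np.array([[-1, 3], [-1, 2]])
SU3 = np.array([[0, -1], [1, -1]])

def twist_power(n):
    """Matrix of theta^n on the full six-dimensional lattice."""
    th = np.zeros((6, 6))
    th[0:2, 0:2] = np.linalg.matrix_power(G2, n)
    th[2:4, 2:4] = np.linalg.matrix_power(G2, n)
    th[4:6, 4:6] = np.linalg.matrix_power(SU3, n)
    return th

def n_fixed(mat):
    """Fixed-point count |det(1 - theta^n)|; 0 signals fixed tori."""
    return round(abs(np.linalg.det(np.eye(len(mat)) - mat)))

assert n_fixed(twist_power(1)) == 3    # three fixed points under theta
assert n_fixed(twist_power(2)) == 27   # 27 fixed points under theta^2
# theta^3 is trivial on (e5, e6): the determinant vanishes there (fixed tori),
# and the 16 fixed points live in the (e1, ..., e4) sublattice
assert n_fixed(twist_power(3)) == 0
assert n_fixed(twist_power(3)[0:4, 0:4]) == 16
```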
\subsection{Physical states}
The next step is to determine which are the physical states.
These must be invariant under a total $Z_6$ transformation which,
besides the twist $\theta$ in the 6--dimensional space, includes
a $Z_6$ gauge transformation, usually represented by a shift
$V^I$ (the so--called embedding) on $\Lambda_{E_8 \times E_8}$ and a shift
$v^t$
on $\Lambda_{SO(10)}$. Accordingly one has to construct for each $\theta^k$
sector
linear combinations of states, associated with $\theta^k$ fixed points, that
are
eigenstates of $\theta$ \cite{ohtsubo,kobayashi}. If $f_k$ is a fixed point
of $\theta^k$ such that
$l$ is the smallest number giving $\theta^lf_k=f_k+u,\;u\in\Lambda$, then the
eigenstates of $\theta$ have the form
\begin{eqnarray}
\begin{array}{c}
|f_k>+\;e^{-i\gamma}|\theta f_k>+... +\; e^{-i(l-1)\gamma}|\theta^{l-1}
f_k>\\
\gamma=\; \frac{2\pi p}{l}\;,\;\;\; p=\;1,2,...,l
\end{array}
\label{fisstgen}
\end{eqnarray}
with eigenvalue $e^{-i\gamma}$ (obviously, if $k=1$, then $l=1$
and eq. (\ref{fisstgen}) is trivial). Under a $Z_6$ transformation
the complete state gets a phase \cite{ohtsubo,kobayashi}
\begin{eqnarray}
\Delta(k,e^{i\gamma})&=&\exp \left\{ 2\pi i [-\frac{1}{2}k(\sum_{I} (V^I)^2 -
\sum_{t} (v^t)^2 ) + \right. \nonumber\\
& & \left. +\sum_{I} (P^I+kV^I)V^I - \sum_{t} (p^t + k v^t)v^t]
\right\} \exp\{i\gamma\} ,
\label{DELTA}
\end{eqnarray}
where $p^t$ is the NSR part momentum, lying on the $SO(8)$ weight lattice, and
$P^I$ is the transverse 8--dim. momentum ($E_8 \times E_8$ root momentum)
fulfilling the right--mover and left--mover massless conditions respectively.
Then $\Delta(k,e^{i\gamma})=1$ for physical states.
Let us apply this to the $E_8\times E_8$ heterotic string
compactified on the $Z_6$--I orbifold with
$V^I=\frac{1}{6}(1,1,-2,0,...,0)(0,...,0)$, $i.e.$ the standard embedding.
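For this embedding the quadratic term in eq. (\ref{DELTA}) cancels: taking the
$Z_6$--I twist vector as $v^t=\frac{1}{6}(1,1,-2,0)$ (see Table 1),
\begin{equation}
\sum_{I} (V^I)^2 - \sum_{t} (v^t)^2 = \frac{1+1+4}{36} - \frac{1+1+4}{36} = 0 \;,
\end{equation}
so the physical--state condition $\Delta(k,e^{i\gamma})=1$ reduces to
$\sum_{I} (P^I+kV^I)V^I - \sum_{t} (p^t + k v^t)v^t + \frac{\gamma}{2\pi} \in Z$.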
The unbroken gauge group is $(E_6\times SU(2)\times U(1))\times
E_8$. In the $\theta$ sector there are three physical states
transforming as (27)'s of $E_6$ corresponding to $|f_1^{(1)}>,
|f_1^{(2)}>, |f_1^{(3)}>$ respectively (see eq. (\ref{fix161})).
In the $\theta^2$ sector we can construct 27 eigenstates of
$\theta$ (see eq. (\ref{fix261}))
\begin{eqnarray}
|f_2^{(1)}>,\;\; |f_2^{(2)}>,\;\; |f_2^{(3)}>,\;\;
\left\{|f_2^{(j)}>+ e^{-i\gamma} |\theta
f_2^{(j)}>\right\}_{j=4,...,15}\;,\;\;\;
\gamma=\pi,2\pi .
\label{fisst261}
\end{eqnarray}
After some algebra, only the symmetric
combinations survive ($i.e.$ $\Delta(k,e^{i\gamma})\;=\;1$ for them),
giving rise to 15 (27)'s under $E_6$. Similarly in the
$\theta^3$ sector we can construct 16 eigenstates of
$\theta$ (see eq. (\ref{fix361}))
\begin{eqnarray}
|f_3^{(1)}>,\;\;
\left\{|f_3^{(j)}>+ e^{-i\gamma}|\theta
f_3^{(j)}>+ e^{-i2\gamma}|\theta^2
f_3^{(j)}> \right\}_{j=2,...,6}\;,\;\;\;
\gamma=\frac{2\pi}{3},\;\frac{4\pi}{3},2\pi .
\label{fisst361}
\end{eqnarray}
In this case there survive 6 (27)'s, corresponding to the
symmetric combinations, and 5 ($\overline{27}$)'s, corresponding to
the $\gamma=2\pi/3$ combinations. We have performed a similar analysis
for each orbifold.
\subsection{Allowed Yukawa couplings}
Let us now analyse the allowed Yukawa couplings between physical
states. A twisted string associated with a fixed point $f$ and a
rotation $\theta^j$ is closed due to the action of
$g=(\theta^j,(I-\theta^j)f)$, so the corresponding conjugation
class is given by $\{
(\theta^k,u)(\theta^j,(I-\theta^j)f)(\theta^{-k},-\theta^{-k}u)
\}$, with $k\in Z,\; u\in \Lambda$. After some algebra, the general
expression of the conjugation class of $g$ is
\begin{equation}
\left( \; \theta^j,\; (I-\theta^j) \left[ (f+\Lambda)\cup(\theta
f+\Lambda)\cup\cdots\cup(\theta^{j-1} f+\Lambda) \right] \right) .
\label{conclgen}
\end{equation}
The set of translations $(I-\theta^j)\;\bigcup_{k=0}^{j-1}
(\theta^kf+\Lambda)$ is called the coset
associated with $\theta^j$ and $f$ (note that the cosets
associated with $f$ and $\theta^kf$ are obviously the same). For
a trilinear coupling of twisted fields $T_1T_2T_3$ to be allowed,
the product of the respective conjugation classes should contain
the identity. This implies, in particular, that the product of
the three point group elements
$\theta^{j_1}\theta^{j_2}\theta^{j_3}$ should be 1 (this is the
so--called point group selection rule). For the $Z_6$--I
orbifold this implies that only $\theta\theta^{2}\theta^{3}$,
$\theta^2\theta^{2}\theta^{2}$ and $\theta\theta\theta^{4}$
couplings have to be considered. A straightforward application of
the H--momentum conservation \cite{hamidi,cvetic,nilles} shows that the
$\theta\theta\theta^{4}$ couplings are also forbidden.
Furthermore for the $\theta\theta^{2}\theta^{3}$
couplings we must require
\begin{eqnarray}
(I,0) &\in& \left( \theta, \;(I-\theta)(f_1+\Lambda) \right) \;
\left( \; \theta^2,\; (I-\theta^2) \left[ (f_2 + \Lambda) \cup (\theta
f_2+\Lambda)
\right] \right) \nonumber\\
& &\left( \theta^3,\;(I-\theta^3) \left[ (f_3 + \Lambda) \cup (\theta f_3 +
\Lambda ) \cup (\theta^2 f_3 + \Lambda) \right] \right) ,
\label{selruge61}
\end{eqnarray}
which leads to the so--called space group selection rule for the
coupling $\theta\theta^{2}\theta^{3}$ in the $Z_6$--I orbifold
\begin{equation}
f_1\ +\ (I+\theta)f_2\ -\ (I+\theta+\theta^2)f_3\;\in\;\Lambda
\label{selru161} .
\end{equation}
It should be noticed that if the space group selection rule
(\ref{selru161}) is satisfied for three fixed points $f_1,f_2,f_3$,
then it is also satisfied for $\theta^{k_1}f_1$,
$\theta^{k_2}f_2$, $\theta^{k_3}f_3$, and, consequently, for all
the physical combinations
$\sum_{k_1}e^{-ik_1\gamma_1}|\theta^{k_1}f_1>$,
$\sum_{k_2}e^{-ik_2\gamma_2}|\theta^{k_2}f_2>$,
$\sum_{k_3}e^{-ik_3\gamma_3}|\theta^{k_3}f_3>$, see
eq. (\ref{fisstgen}). For the case at hand, $i.e.$ the $Z_6$--I
orbifold, one can consider 270 kinds of couplings
$f_1^{(j_1)}f_2^{(j_2)}f_3^{(j_3)}$ of the
$\theta\theta^{2}\theta^{3}$ type, of which only 90 are
allowed: those for which the $e_5,e_6$ components ($i.e.$ the
$SU(3)$ sublattice projection) of $f_1-f_2$ are vanishing. So,
if we write the fixed points as
\begin{equation}
\left.
\begin {array}{rcl}
f_1 &=& g_1^{(0)} \otimes g_1^{(0)} \otimes \hat g_1^{(k_1)}\\
f_2 &=& g_2^{(i_2)}\otimes g_2^{(j_2)}\otimes \hat g_2^{(k_2)}\\
f_3 &=& g_3^{(i_3)}\otimes g_3^{(j_3)}\otimes [\alpha(e_5)+\beta(e_6)]
\end {array}
\right\}
\begin {array} {l}
k_1,k_2,i_2,j_2=0,1,2,\\
i_3,j_3=0,1,2,3,\\
\alpha,\beta \in R .
\end {array}
\end{equation}
the selection rule is
\begin{equation}
\begin {array}{lcr}
k_1 &=& k_2 .
\end {array}
\end{equation}
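This counting can be checked directly: the 270 candidate couplings arise as
$3\times 15\times 6$, the numbers of physical combinations in the $\theta$,
$\theta^2$ and $\theta^3$ sectors. Assuming the 15 physical $\theta^2$
combinations distribute evenly over the three values of $k_2$ (five for each
value), the condition $k_1=k_2$ leaves
\begin{equation}
3\times 5\times 6\;=\;90
\end{equation}
allowed couplings, in agreement with the figure quoted above.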
At this point it is important to stress the following fact: for two given
$f_1,\;f_2$,
the third fixed point $f_3$ (corresponding to $\theta^3$) is not determined
uniquely \footnote{This also happens for given $f_1,\;f_3$ but not for
$f_2,\;f_3$.}. We say
that the space group selection rule {\em is not diagonal}. From the physical
point
of view this is extremely important, since it allows for non--diagonal fermion
mass
matrices and, hence, a non--trivial Kobayashi--Maskawa matrix. This feature is
absent for prime orbifolds. On the other hand the space selection rule for the
$\theta^2\theta^2\theta^2$ couplings simply reads
\begin{equation}
f_1\ +\ f_2\ +\ f_3\;\in\;\Lambda
\label{selru261}
\end{equation}
which is {\em diagonal}. Note, however, that in this case the selection rule
can
be satisfied by some representatives of the conjugation classes and not by
others.
In this case there are 3375 couplings to consider, of which only 369 are
allowed.
These are
\begin{equation}
\left.
\begin {array}{lcr}
i_1+i_2+i_3&=&0 \\
j_1+j_2+j_3&=&0 \\
k_1+k_2+k_3&=&0
\end {array}
\right\}
\;\;
({\rm mod}\;3) .
\end{equation}
\noindent where we have denoted
\begin{equation}
\left.
\begin {array}{rcl}
f_1 &=& g_2^{(i_1)}\otimes g_2^{(j_1)} \otimes \hat g_2^{(k_1)}\\
f_2 &=& g_2^{(i_2)}\otimes g_2^{(j_2)} \otimes \hat g_2^{(k_2)}\\
f_3 &=& g_2^{(i_3)}\otimes g_2^{(j_3)} \otimes \hat g_2^{(k_3)}
\end {array}
\right\}\;\;\;
\begin {array} {l}
i_1,i_2,i_3=0,1,2\\
j_1,j_2,j_3=0,1,2\\
k_1,k_2,k_3=0,1,2 .
\end {array}
\end{equation}
Let us finally note that the product of the $\theta$ eigenvalues
of the physical combinations of fixed points involved in the
coupling must be one; otherwise the coupling vanishes. For
example, in the $Z_6$--I orbifold the following
$\theta\theta^{2}\theta^{3}$ coupling
\begin{equation}
|f_1^{(j_1)}>\; (|f_2^{(j_2)}> + |\theta f_2^{(j_2)}>)\;
(|f_3^{(j_3)}> + e^{-\frac{2\pi i}{3}} |\theta f_3^{(j_3)}> + e^{-\frac{4 \pi
i}{3}}
|\theta^2 f_3^{(j_3)}>)
\label{excou161}
\end{equation}
is forbidden on these grounds, since, due to $\theta$
invariance, it is equal to
\begin{equation}
|f_1^{(j_1)}>\; (|f_2^{(j_2)}> + |\theta f_2^{(j_2)}>)\;
|f_3^{(j_3)}> \;\;
(1+e^{-\frac{2\pi i}{3}}+e^{-\frac{4\pi i}{3}})
= 0 .
\label{excou261}
\end{equation}
This result can also be obtained for the standard embedding case from
gauge invariance since the state considered in the $\theta^3$
sector corresponds to a ($\overline{27}$), while the others are
(27)'s. In consequence, all the couplings to be considered in the
$Z_6$--I orbifold involve symmetric combinations of fixed points ($i.e.$
$\theta$ eigenvalue $=1$) exclusively. We have performed the
previous analysis for all the $Z_n$ orbifolds. In all cases only couplings
between symmetric combinations of
fixed points survive. We do not know the
fundamental principle (if any) behind this rule, but it has
important consequences. For instance, in ref. \cite{ohtsubo} it was
suggested that the phases of the non--zero $\theta$ eigenvalue
states (see eq. (\ref{fisstgen})) could be the geometrical origin
of the phases of the Kobayashi--Maskawa matrix. Clearly, the
present rule excludes this possibility.
\subsection{Calculation of Yukawa couplings}
We are interested in couplings of the type $\psi\psi\phi$ (i.e.
fermion--fermion--boson). A trilinear string scattering
amplitude is given by the correlator
$<V_1(z_1)V_2(z_2)V_3(z_3)>$ of the vertex operators creating
the corresponding states. Complete expressions for the vertex
operators of the fields under consideration can be found in
refs. \cite{hamidi,cvetic}. As has been pointed out \cite{hamidi}
the non--vanishing
Yukawa couplings are essentially given by the bosonic twist
correlator $<\sigma_1(z_1)\sigma_2(z_2)\sigma_3(z_3)>$, where
$\sigma_i$ represents a twist field creating the appropriate
twisted ground state. According to subsection 2.2 $\sigma$ fields
for physical states are, in general, linear combinations of
$\sigma$ fields associated with specific rotations and fixed
points, say $\sigma_{\theta^j,f}$. For example, for a physical
state in the $\theta^j$ sector whose twist part is given by
$\sum_{k=0,...,l-1}e^{-ik\gamma}|\theta^{k}f>$ (the meaning of
$\gamma$ and $l$ is given in eq. (\ref{fisstgen})) the
corresponding twist field is simply
$\sum_{k=0,...,l-1}e^{-ik\gamma}\sigma_{\theta^j,\theta^{k}f}$.
According to the result of the previous subsection only
symmetric combinations ($\gamma=2\pi$) are relevant for trilinear
couplings, so the correlator
$<\sigma_1(z_1)\sigma_2(z_2)\sigma_3(z_3)>$ associated with a
$\theta^{j_1}\theta^{j_2}\theta^{j_3}$ will take the form
\begin {eqnarray}
\lefteqn{<\sigma_1(z_1)\;\sigma_2(z_2)\;\sigma_3(z_3)>=} \nonumber\\
& &\left( \sqrt{l_1l_2l_3} \right) ^{-1}\;
\sum_{k_1=0}^{l_1 -1} \;\sum_{k_2=0}^{l_2 -1} \;\sum_{k_3=0}^{l_3 -1}
<\sigma_{\theta^{j_1},\theta^{k_1}f^{(1)}}(z_1)\;
\sigma_{\theta^{j_2},\theta^{k_2}f^{(2)}}(z_2)\;
\sigma_{\theta^{j_3},\theta^{k_3}f^{(3)}}(z_3)> ,
\label{sigcorr1}
\end{eqnarray}
where the square root is a normalization factor. The correlation
functions on the right--hand side are evaluated following
standard lines \cite{hamidi}. They are defined by
\begin {eqnarray}
\lefteqn{<\sigma_{\theta^{j_1},\theta^{k_1}f^{(1)}}(z_1)
\sigma_{\theta^{j_2},\theta^{k_2}f^{(2)}}(z_2)
\sigma_{\theta^{j_3},\theta^{k_3}f^{(3)}}(z_3)>=}\nonumber\\
& & \int D X \; e^{-S} \;\sigma_{\theta^{j_1},\theta^{k_1}f^{(1)}}(z_1)
\sigma_{\theta^{j_2},\theta^{k_2}f^{(2)}}(z_2)
\sigma_{\theta^{j_3},\theta^{k_3}f^{(3)}}(z_3) .
\label{sigcorr2}
\end {eqnarray}
Owing to the Gaussian character of the action $S$
\begin{equation}
S= \frac{1}{4 \pi} \int d^2 z ( \partial{X} \; \bar{\partial} \bar {X} + \bar
{\partial}
X \partial\bar{X} ) \;,
\label{action1}
\end{equation}
where $X=X_1+iX_2$ and a sum over the three complex coordinates
is understood, the scattering amplitude can be separated into a
classical and a quantum part \cite{hamidi}
\begin{equation}
Z=Z_{qu}\sum_{<X_{cl}>}\exp(-S_{cl}) .
\label{Zcorr}
\end{equation}
The quantum contribution represents a global factor for all
the couplings with the same
$\theta^{j_1}\theta^{j_2}\theta^{j_3}$ pattern in a given
orbifold; so the physical information mostly resides in the
classical contribution. Eventually, a total normalization factor,
which depends on the size of the compactified space,
has to be determined with the help of the four--point
correlation function. The final task lies in writing the
couplings in terms of the physically significant parameters,
i.e. those that parametrize the size and shape of the orbifold.
Let us consider, for the sake of definiteness, a
$\theta\theta^2\theta^3$ coupling in our guide example, the
$Z_6$--I orbifold. The classical contribution, see
eq. (\ref{Zcorr}), to a
$<\sigma_{\theta}\sigma_{\theta^2}\sigma_{\theta^{-3}}>$
correlator has been determined in refs. \cite{gomez,burwick}, so we omit
the details of the calculation here. The result is that the
contribution of the classical (instantonic) solutions to the
classical action, eq. (\ref{action1}), for a three--point correlation
on the sphere, is
\begin{eqnarray}
S_{cl}^i&=& \frac {1}{4 \pi} \frac {|\sin (2\pi k_i/N)|\;|\sin (3\pi k_i/N)|}
{|\sin(\pi k_i/N)|}\; |v_i|^2
\nonumber \\
v&\in& (f_2-f_3+\Lambda)_{\perp}
\label{Scl1}
\end{eqnarray}
where $i=1,2,3$ denotes the corresponding complex coordinate
(and thus the projection over the associated $z$--plane), the
fields $X^i$ are twisted by $\exp(2\pi i k_i/N)$ ($k_1=1, k_2=1,
k_3=2$ and $N=6$ for the $Z_6$--I orbifold) and
$(f_2-f_3+\Lambda)_{\perp}$ selects only $(f_2-f_3+\Lambda)$
shifts that are orthogonal to the invariant plane (this means
that we can choose $(f_2-f_3)_{i=3}=0$ and
$\Lambda=\Lambda_{G_2\times G_2}$). Several comments are in order
here. First it is clear that the $i=3$ plane ($i.e.$ the invariant
plane) does not contribute to the classical action, and the
coupling for $i=3$ behaves much as an untwisted one. In fact in
the invariant plane the three strings must be attached to the
same fixed point, $i.e.$ $(f_1)_{i=3}=(f_2)_{i=3}=(f_3)_{i=3}$.
These facts are general for all the couplings, in any orbifold,
when fixed tori are involved. Second, the $v$--coset in
(\ref{Scl1}) does not depend on $f_1$ since in the calculation
of $X_{cl}(z)$, $z_1$ has been sent to infinity by using
$SL(2,C)$ invariance. We call this the 2--3 picture (for more
details see ref. \cite{gomez}). Equation (\ref{Scl1}) can be expressed in the
1--2 and 1--3 pictures as well
\begin{eqnarray}
S_{cl(1-2)}^i&=& \frac {1}{16 \pi} \frac { |\sin(\pi k_i/N)| }{ |\sin(2 \pi
k_i/N)| \;
|\sin(3 \pi k_i/N)| } \;
|v_i^{(12)}|^2 \nonumber \\
v^{(12)}&\in& (I-\theta^2)
(f_1-f_2+\Lambda_{12})_{\perp}\;,
\label{Scl2}
\end{eqnarray}
\begin{eqnarray}
S_{cl(1-3)}^i&=& \frac {1}{16 \pi} \frac {|\sin(\pi k_i/N)|}
{|\sin(2 \pi k_i/N) |\; |\sin(3 \pi k_i/N)| }\;
|v_i^{(13)}|^2 \nonumber \\
v^{(13)}&\in& (I-\theta^3)(f_1-f_3+\Lambda_{13})_{\perp}\;,
\label{Scl3}
\end{eqnarray}
where
\begin{eqnarray}
\begin{array}{c}
\Lambda_{12}= (I+\theta+\theta^2)\Lambda + \omega\;,\;\;
\Lambda_{13}= (I+\theta)\Lambda+ \omega \;, \\
\omega = (I+\theta+\theta^2)f_3 - (\theta +\theta^2)f_2 -f_1 .
\end{array}
\label{L1213}
\end{eqnarray}
We can check by using the space group selection rule
(\ref{selru161}) that there is a one--to--one correspondence
between $S_{cl(1-2)},S_{cl(1-3)},S_{cl(2-3)}$. The 2--3 picture,
eq. (\ref{Scl1}) is the most convenient one since
$\Lambda_{12},\Lambda_{13}$ are subsets of the original lattice
$\Lambda$. Furthermore $S_{cl(1-2)}$ and $S_{cl(1-3)}$ depend on
the three fixed points considered $f_1,f_2,f_3$; while
$S_{cl(2-3)}$ depends only on $f_2,f_3$. \footnote{This difference
can be understood recalling that for $f_2,f_3$ given, the space
group selection rule (\ref{selru161}) determines $f_1$ uniquely,
which does not hold for the other two possibilities.}
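For the $Z_6$--I orbifold the trigonometric prefactor of eq. (\ref{Scl1}) can
be evaluated explicitly in the two fully twisted planes ($k_i=1$, $N=6$):
\begin{equation}
\frac {1}{4 \pi}\, \frac{|\sin \frac{2\pi}{6}|\;|\sin \frac{3\pi}{6}|}
{|\sin \frac{\pi}{6}|}
= \frac {1}{4 \pi}\, \frac{\frac{\sqrt{3}}{2}\cdot 1}{\frac{1}{2}}
= \frac{\sqrt{3}}{4\pi}
= \frac{1}{2\pi}\,\sin \frac{\pi}{3} \;,
\end{equation}
which is precisely the coefficient appearing in the exponent of
eq. (\ref{sigcorr3}).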
We can now write the complete form of the correlator using
eqs. (\ref{Zcorr},\ref{Scl1})
\begin{equation}
<\sigma_{\theta}\sigma_{\theta^2}\sigma_{\theta^{3}}>=\;N\; \sqrt{l_2\; l_3}\;
\sum_{u\in\Lambda_{\perp}} \exp \left\{ -\frac{1}{2 \pi} \sin (\frac{\pi}{3})
\left[ (f_{23}+u)_1^2 + (f_{23}+u)_2^2 \right] \right\} ,
\label{sigcorr3}
\end{equation}
where $(f_{23}+u)_i$ is the $i$--plane projection of $(f_2-f_3
+u)$, $\Lambda_\perp = \Lambda_{G_2\times G_2}$, and $N$ is the
properly normalized quantum part \cite{burwick}
\begin{equation}
N=\; \sqrt{V_{\perp}} \; \frac {1}{2 \pi} \;\frac {\Gamma(\frac{5}{6})
\Gamma(\frac{2}{3})}{\Gamma(\frac {1}{6}) \Gamma(\frac {1}{3}) } ,
\label{Nz6}
\end{equation}
with $V_{\perp}$ the volume of the $G_2\times G_2$ unit cell.
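The size of this factor can be made explicit with the reflection formula
$\Gamma(x)\Gamma(1-x)=\pi/\sin(\pi x)$, which gives
$\Gamma(\frac{5}{6})=2\pi/\Gamma(\frac{1}{6})$ and
$\Gamma(\frac{2}{3})=2\pi/[\sqrt{3}\,\Gamma(\frac{1}{3})]$, so that
\begin{equation}
\frac {\Gamma(\frac{5}{6}) \Gamma(\frac{2}{3})}
{\Gamma(\frac {1}{6}) \Gamma(\frac {1}{3})}
= \frac{4\pi^2}{\sqrt{3}\;\Gamma^2(\frac{1}{6})\,\Gamma^2(\frac{1}{3})}
\simeq 0.102 \;,
\end{equation}
$i.e.$ $N\simeq 0.016\,\sqrt{V_{\perp}}$.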
General expressions for the couplings similar to
eq. (\ref{sigcorr3}) can be found in ref. \cite{burwick} for all the $Z_n$
orbifolds.
We have
performed the calculation in all the cases, checking that the results of
the mentioned reference are correct.
Expression (\ref{sigcorr3}) is not explicit enough for most purposes. For
example it does not allow
examination of the transformation properties of the Yukawa couplings under
target--space
modular transformations (e.g. $R\; \rightarrow \; 1/R$). From a
phenomenological
point of
view eq. (\ref{sigcorr3}) does not exhibit the dependence of the value of the
coupling on
physical quantities, $i.e.$ those that parametrize the size and shape of the
compactified
space. In fact eq. (\ref{sigcorr3}) is not even good enough to calculate the
final value of
the coupling numerically, especially when deformations are considered. The key
point in order to do this is to write $(f_{23}+u)_i$ in terms of
$(e_1,...,e_6)$,
$i.e.$ the lattice basis. This can be done with the help of the results of
subsection 2.1, see eq. (\ref{latort61}). Then the correlator (\ref{sigcorr3})
appears as an explicit function of the
deformation parameters of the compactified space. Substituting the resulting
expression in
(\ref{sigcorr1}) we obtain the final Yukawa coupling, which can be written in a
quite compact way
\begin{eqnarray}
C_{\theta\theta^2\theta^3}&=& N \; \sqrt{l_2\; l_3}\;
\;\sum_{\vec{u} \in Z^4} \exp \left[-\frac{\sqrt{3}}{4 \pi}
(\vec{f_{23}}+\vec{u})^{\top} M (\vec{f_{23}}+\vec{u}) \right] \nonumber\\
&=& \sqrt{V_{\perp}}\;\; \sqrt{l_2\; l_3}\; \frac {1}{2 \pi}\; \frac
{\Gamma(\frac{5}{6})
\Gamma(\frac{2}{3})}{\Gamma(\frac {1}{6}) \Gamma(\frac {1}{3}) }
\; \vartheta
\left[
\begin{array}{c}
\vec{f_{23}} \\ 0
\end {array}
\right]
[ 0,\; \Omega]
\label{C123}
\end{eqnarray}
where $\vec{f_{23}}$ represents the first four components of $(f_2-f_3)$
($i.e.$
those
corresponding to the $G_2 \times G_2$ sublattice basis $(e_1,...,e_4)$ ), and
\begin{equation}
\vartheta
\left[
\begin{array}{c}
\vec{f_{23}} \\ 0
\end {array}
\right]
[ 0,\; \Omega]
= \sum_{\vec{u} \in Z^4} \exp \left[ i \pi
(\vec{f_{23}}+\vec{u})^{\top} \Omega (\vec{f_{23}}+\vec{u}) \right]\;,\;\;\;\;
\Omega=\frac {i \sqrt{3}}{4 \pi^2} M
\label{thetajac}
\end{equation}
with
\begin {eqnarray}
\Omega= \frac {i \sqrt{3}}{4 \pi^2}
\left(
\begin{array}{cccc}
R_1^2 & -\frac{3}{2} R_1^2 & R_1 R_3 \alpha_{13} & \sqrt{3} R_1 R_3 \alpha_{14}
\\
-\frac{3}{2} R_1^2 & 3 R_1^2 & -R_1 R_3 (3\alpha_{13}+\sqrt{3}\alpha_{14}) &
3 R_1 R_3 \alpha_{13} \\
R_1 R_3 \alpha_{13} & -R_1 R_3 (3\alpha_{13}+\sqrt{3}\alpha_{14}) & R_3^2 &
-\frac{3}{2} R_3^2 \\
\sqrt{3} R_1 R_3 \alpha_{14} & 3 R_1 R_3 \alpha_{13} & -\frac{3}{2} R_3^2 &
R_3^2
\end {array}
\right)
\label{Omega61}
\end {eqnarray}
where the deformation parameters $R_i^2,\; \alpha_{ij}$ have been
defined in eqs. (\ref{reldef61}, \ref{pardef61}).
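As a check of normalizations, inserting $\Omega=\frac{i\sqrt{3}}{4\pi^2}M$
into the exponent of eq. (\ref{thetajac}) gives
\begin{equation}
i \pi\, (\vec{f_{23}}+\vec{u})^{\top} \Omega\, (\vec{f_{23}}+\vec{u})
= -\frac{\sqrt{3}}{4 \pi}\, (\vec{f_{23}}+\vec{u})^{\top} M (\vec{f_{23}}+\vec{u}) \;,
\end{equation}
which reproduces the exponent in the first line of eq. (\ref{C123}).
The series converges since $M$ is positive definite, $i.e.$ ${\rm Im}\,\Omega>0$.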
It is worthwhile to have a look at eq. (\ref{C123}) to realize
which are the physical quantities on which the value of the
coupling depends. First $C_{\theta\theta^2\theta^3}$ depends on
the relative positions in the lattice of the relevant fixed
points to which the physical fields are attached. This
information is condensed in $\vec{f_{23}}$. Second
$C_{\theta\theta^2\theta^3}$ depends on the size and shape of
the compactified space, which is reflected in the orbifold
compactification parameters $(R_i^2, \alpha_{13}, \alpha_{14})$
appearing in $\Omega$ and (implicitly) in $V_{\perp}$. Note
that these two pieces of information appear completely separated
from each other in eq. (\ref{C123}). Notice also that the deformation
parameter $R_5^2$ does not appear in $\Omega$. This is due to
the fact that $R_5$ parametrizes the size of the $i=3$
sublattice, $i.e.$ the fixed torus, and we have learnt that for
$i=3$ the coupling is equivalent to an untwisted one. This is a
general fact for all the orbifold couplings in which fixed tori
are involved (e.g. it does not occur for the
$\theta^2\theta^2\theta^2$ coupling of the $Z_6$--I orbifold, see below).
We say that $R_5$ is not an {\em effective deformation parameter} for the
$\theta\theta^2\theta^3$ couplings. The number of effective deformation
parameters (4 in this case) is
physically relevant since it is strongly related to the number of different
Yukawa couplings
and their corresponding sizes.
For the other twisted coupling $\theta^2\theta^2\theta^2$ in the $Z_6$--I
orbifold, the expression of the coupling can be
calculated in the same way as in the $\theta\theta^2\theta^3$ case, and is
given
by
\begin {eqnarray}
C_{\theta^2\theta^2\theta^2} &=& F(l_1,l_2,l_3) \;
N \sum_{v \in (f_3-f_2+\Lambda)} \exp [-\frac
{\sqrt{3}}{8\pi} \; |v|^2] \nonumber\\
&=& F(l_1,l_2,l_3) \; N \sum_{\vec{u} \in Z^6} \exp [-\frac
{\sqrt{3}}{8\pi} \; (\vec{f_{23}}+\vec{u})^{\top} M (\vec{f_{23}}+\vec{u})]
\nonumber\\
&=& F(l_1,l_2,l_3) \; N \vartheta
\left[
\begin{array}{c}
\vec{f_{23}} \\
0
\end {array}
\right]
[0, \Omega] ,
\end {eqnarray}
where $F=1$ for $l_1=1$ or $l_2=1$ or $l_3=1$ and $F=\frac{1}{\sqrt{2}}$ for
$l_1=l_2=l_3=2$. $l_i$ is the number of elements in the conjugation class
associated with $f_i$, see eq. (\ref{fisstgen}). $\vec{f_{23}}$ represents the
components
of $(f_2-f_3)$ in the lattice basis $(e_1,...,e_6)$. The global normalization
factor
and the $\Omega$ matrix are given by
\begin{equation}
\begin{array}{c}
N= \sqrt{V_{\Lambda}}\; \frac{3^{3/4}}{8 \pi^3} \;
\frac{\Gamma^6(\frac{2}{3})}{\Gamma^3(\frac{1}{3})}\\
\Omega = i\frac{\sqrt{3}}{8\pi^2}\;
\left(
\begin {array}{cccccc}
a & -\frac{3}{2}a & b & c & 0 & 0\\
-\frac{3}{2}a & 3 a & -3b-c & 3 b & 0 & 0 \\
b & -3b-c & d & -\frac{3}{2}d & 0 & 0\\
c & 3 b & -\frac{3}{2} d & 3 d & 0 & 0 \\
0 & 0 & 0 & 0 & e & -\frac{1}{2} e \\
0 & 0 & 0 & 0 & -\frac{1}{2} e & e
\end {array}
\right)\;\;\;
\begin{array}{l}
a= R_1^2 \\
b= R_1R_3 \alpha_{13} \\
c= \sqrt{3} R_1 R_3 \alpha_{14} \\
d= R_3^2 \\
e= R_5^2 .
\end{array}
\end{array}
\end{equation}
Clearly the number of effective parameters is 5.
We have performed a similar analysis for all the trilinear twisted
couplings in all the $Z_n$ orbifolds, giving the number of effective
deformation parameters in each case. The results are expounded in
Appendix 1.
\subsection{Accidental symmetries and the number of different couplings}
We are now ready to count the {\em number of different couplings} that appear
in
each
orbifold.
{}From the physical point of view this is one of the most relevant questions
about
a
string construction, since it is directly related to the possibility of
reproducing the
observed pattern of fermion masses and mixing angles. Unfortunately, this
task is probably the
most tedious part of the work presented here. Again, we expound in some detail
the analysis for the two possible couplings in the $Z_6$--I orbifold. Let us
begin
with the twisted coupling $\theta\theta^2\theta^3$. The corresponding results
for other orbifolds can be found in Appendix 1.
The first point is that the $\Omega$ matrix appearing in the Jacobi theta
function of the
coupling, eqs. (\ref{C123}--\ref{Omega61}), is universal for all the
$\theta\theta^2\theta^3$ couplings. This means that the differences between the
Yukawa
couplings come
exclusively from the sum over $(f_{23}+u)$ in the classical part of the
correlator. Hence, two
couplings
\begin {eqnarray}
C\sim\vartheta \left[ \begin{array}{c} \vec{f_{23}} \\ 0 \end{array} \right]
\left[ \;
0,\Omega \right]\;\; \mbox{ and }\;\;
C'\sim\vartheta\;\left[ \begin{array}{c} \vec{f_{23}}'\\ 0 \end{array} \right]
\; \left[
\; 0,\Omega \right]
\label{ccou61}
\end {eqnarray}
\vspace{.5 cm}
will have the same value if there exists an integer unimodular transformation
$U$ ($i.e.$ $U\in GL(4,Z)$, $\det U = \pm1$) such that
\begin {eqnarray}
U^{\top}\;\Omega\;U &=& \Omega
\label{uo61} \\
U\;\vec{f_{23}} &=& \vec{f_{23}}'\;+\;\vec{v}\;\;,\;\;\;\;\vec{v}\in Z^4 .
\label{uf61}
\end {eqnarray}
Then, if the previous equations are true for some $U$, there is a one--to--one
correspondence
between the terms of the series defining \(\vartheta \left[ \begin{array}{c}
\vec{f_{23}} \\
0 \end{array}
\right] \), see eq. (\ref{thetajac}), and those of \(\vartheta \left[
\begin{array}{c}
\vec{f_{23}}' \\ 0 \end{array} \right] \). So we have to look for $U$--matrices
satisfying
(\ref{uo61}). There is a set of $U$--matrices that always fulfil (\ref{uo61}).
These are \( \{ I,-I \} \) and \( \{ \theta^n,\; n\in Z \} \). To check the
latter, note
that
when the sum
(\ref{thetajac}) is expressed in the complex orthogonal basis, the exponent
takes a diagonal
form
\begin{eqnarray}
\sum_{u\in{\Lambda_\bot}}\; \exp{ \{ \;a_1(f_{23}+u)_1^2+a_2(f_{23}+u)_2^2\;\}
}
\label{ort61}
\end{eqnarray}
as can be seen from (\ref{sigcorr3}). Then the terms multiplying the
coefficients
$a_i$ are
unchanged under $\theta^n$ twists, since these correspond to rotations in
each $i$--plane.
This argument is always valid because the factorization (\ref{ort61}) is a
consequence of the
fact that the classical contributions can be computed in each $i$--plane
separately. In
addition to the group generated by $\{-I,\theta\}$, there can be ``accidental
$U$--symmetries'' leaving $\Omega$ unchanged in eq. (\ref{uo61}). Some of these
symmetries can
be spontaneously broken when deformations are taken into account. After
inspection it turns
out that, for the case at hand, these accidental symmetries are generated by
\begin{eqnarray}
U_1=\left(
\begin{array}{r r}
e^{i\alpha} & \\
& 1
\end{array}
\right)
\;\;\;
U_2=\left(
\begin{array}{r r}
1 & \\
& e^{i\alpha}
\end{array}
\right)
\;\;\;
U_3=\left(
\begin{array}{r r}
0 &1 \\
1 & 0
\end{array}
\right)
\label{us61}
\end{eqnarray}
(when expressed in the complex orthogonal basis) plus products of these
matrices
by $\{-I,\;
\theta \}$. $U_1,\;U_2,\;U_3$ are broken when deformations are
considered.
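For instance, $U_3$ exchanges the two twisted complex planes, which in
eq. (\ref{Omega61}) amounts to the interchange $R_1\leftrightarrow R_3$; it
therefore leaves $\Omega$ invariant on the rigid lattice,
\begin{equation}
R_1^2=R_3^2 \;,\;\;\;\; \alpha_{13}=\alpha_{14}=0 \;,
\end{equation}
but is generically broken once these restrictions are lifted.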
Now, two Yukawa couplings $C,\;C'$ are equal (in the non--deformed case)
if $\vec{f_{23}}$ and $\vec{f_{23}}'$ are
connected as in (\ref{uf61}) by one of the $U$--matrices mentioned above. The
analysis has to
be performed for the 90 $\theta\theta^2\theta^3$ allowed couplings,
see subsection 2.3.
The result is that for the rigid $G_2 \times G_2 \times SU(3)$ lattice
$(\;i.e.\; R_1^2 =
R_3^2=R_5^2\;,\;\alpha_{13}=\alpha_{14}=0\;)$ there are 10 different couplings,
corresponding
to the following set of $\vec{f_{23}}$ shifts (in $G_2 \times G_2$)
\begin{equation}
\begin{array}{l}
l_3=1\;l_2=1\;:
\begin{array}{lll}
(0,0) \otimes (0,0), & &
\end{array}\\
l_3=3\;l_2=1\;:
\begin{array}{lll}
(0,0) \otimes (0,\frac{1}{2}), &(0,\frac{1}{2}) \otimes (0,\frac{1}{2}), &
\end{array}\\
l_3=1\;l_2=2\;:
\begin{array}{lll}
(0,0) \otimes (0,\frac{1}{3}), &(0,\frac{1}{3}) \otimes (0,\frac{1}{3}), &
\end{array}\\
l_3=3\;l_2=2\; \left\{
\begin{array}{lll}
(0,0) \otimes (0,\frac{1}{6}), &(0,\frac{1}{6}) \otimes (0,\frac{1}{6}), &
(0,\frac{1}{2}) \otimes (0,\frac{1}{3}), \\
(0,\frac{1}{3}) \otimes (0,\frac{1}{6}), & (0,\frac{1}{2}) \otimes
(0,\frac{1}{6}) & .
\end{array}
\right.
\end{array}
\end{equation}
The meaning of the $l_i$ and their influence on the couplings are given
in eqs. (\ref{fisstgen}),(\ref{sigcorr1}).
With deformations the symmetry of the $\Omega$ matrix is smaller,
as explained above, and there turn out to be 30 different couplings
\begin{equation}
\begin{array}{l}
l_3=1\;l_2=1\;:
\begin{array}{lll}
(0,0) \otimes (0,0), & &
\end{array}\\
l_3=1\;l_2=2\;:
\begin{array}{llll}
(0,0) \otimes (0,\frac{1}{3}), &(0,\frac{1}{3}) \otimes (0,0), &
(0,\frac{1}{3}) \otimes (0,\frac{1}{3}), &(0,\frac{1}{3}) \otimes
(0,\frac{2}{3}),
\end{array}\\
l_3=3\;l_2=1\;
\left\{
\begin{array}{llll}
(0,0) \otimes (0,\frac{1}{2}), &(0,\frac{1}{2}) \otimes (0,0), &
(0,\frac{1}{2}) \otimes (0,\frac{1}{2}), & (0,\frac{1}{2}) \otimes
(\frac{1}{2},0), \\
(0,\frac{1}{2}) \otimes (\frac{1}{2},\frac{1}{2}),& & &
\end{array}
\right.\\
l_3=3\;l_2=2\; \left\{
\begin{array}{llll}
(0,0) \otimes (0,\frac{1}{6}) &(0,\frac{1}{6}) \otimes (0,0), &
(0,\frac{1}{6}) \otimes (0,\frac{1}{6}), & (0,\frac{1}{6}) \otimes
(0,\frac{5}{6}), \\
(0,\frac{1}{6}) \otimes (\frac{1}{2},\frac{1}{3}), &
(0,\frac{1}{6}) \otimes (\frac{1}{2},\frac{2}{3}), &
(0,\frac{1}{6}) \otimes (\frac{1}{2},\frac{1}{6}), &
(0,\frac{1}{6}) \otimes (\frac{1}{2},\frac{5}{6}), \\
(0,\frac{1}{3}) \otimes (0,\frac{1}{2}), &
(0,\frac{1}{2}) \otimes (0,\frac{1}{3}), &
(0,\frac{1}{2}) \otimes (0,\frac{1}{6}), &
(0,\frac{1}{2}) \otimes (\frac{1}{2},\frac{1}{3}), \\
(0,\frac{1}{2}) \otimes (\frac{1}{2},\frac{1}{6}), &
(0,\frac{1}{6}) \otimes (0,\frac{1}{2}), &
(\frac{1}{2},\frac{1}{3}) \otimes (0,\frac{1}{2}), &
(\frac{1}{2},\frac{1}{6}) \otimes (0,\frac{1}{2}), \\
(0,\frac{1}{6}) \otimes (0,\frac{1}{3}), &
(0,\frac{1}{6}) \otimes (0,\frac{2}{3}), &
(0,\frac{1}{3}) \otimes (0,\frac{1}{6}), &
(0,\frac{2}{3}) \otimes (0,\frac{1}{6})
\end{array}
\right.
\end{array}
\end{equation}
The absolute and relative size of these 30 couplings obviously depend on the
value of the deformation parameters, as reflected in
eqs. (\ref{C123}--\ref{Omega61}).
Performing an analysis similar to the $\theta\theta^2\theta^3$ case,
we find the number of inequivalent shifts for the
$\theta^2\theta^2\theta^2$
coupling. For the non--deformed case there are 8 different couplings, namely
\begin{equation}
\vec{f_{23}}=
\left[
\begin{array}{l}
l_1=1\;{\rm or}\;l_2=1\;{\rm or}\;l_3=1\;\;
\left\{
\begin{array}{lll}
g_2^{(0)} \otimes g_2^{(0)} \otimes \hat g_2^{(0)}, &
g_2^{(0)} \otimes g_2^{(0)} \otimes \hat g_2^{(1)},&
g_2^{(0)} \otimes g_2^{(1)} \otimes \hat g_2^{(0)}, \\
g_2^{(0)} \otimes g_2^{(1)} \otimes \hat g_2^{(1)}, &
g_2^{(1)} \otimes g_2^{(1)} \otimes \hat g_2^{(0)}, &
g_2^{(1)} \otimes g_2^{(1)} \otimes \hat g_2^{(1)}
\end{array}
\right. \\
l_1=l_2=l_3=2 \;:\;\;\;
\begin{array}{lll}
g_2^{(0)} \otimes g_2^{(0)} \otimes \hat g_2^{(0)}, &
g_2^{(0)} \otimes g_2^{(0)} \otimes \hat g_2^{(1)} & .
\end{array}
\end{array}
\right.
\end{equation}
With deformations the number increases to 12
different couplings, given by the following shifts
\begin{equation}
\vec{f_{23}}=
\left[
\begin{array}{l}
l_1=1\;{\rm or}\;l_2=1\;{\rm or}\;l_3=1\;\;
\left\{
\begin{array}{lll}
g_2^{(0)} \otimes g_2^{(0)} \otimes \hat g_2^{(0)}, &
g_2^{(0)} \otimes g_2^{(0)} \otimes \hat g_2^{(1)}, &
g_2^{(0)} \otimes g_2^{(1)} \otimes \hat g_2^{(0)}, \\
g_2^{(0)} \otimes g_2^{(1)} \otimes \hat g_2^{(1)}, &
g_2^{(1)} \otimes g_2^{(1)} \otimes \hat g_2^{(0)}, &
g_2^{(1)} \otimes g_2^{(1)} \otimes \hat g_2^{(1)}, \\
g_2^{(1)} \otimes g_2^{(0)} \otimes \hat g_2^{(0)}, &
g_2^{(1)} \otimes g_2^{(0)} \otimes \hat g_2^{(1)}, &
g_2^{(1)} \otimes g_2^{(2)} \otimes \hat g_2^{(0)}, \\
g_2^{(1)} \otimes g_2^{(2)} \otimes \hat g_2^{(1)} & &
\end{array}
\right. \\
l_1=l_2=l_3=2 \;:\;\;\;
\begin{array}{lll}
g_2^{(0)} \otimes g_2^{(0)} \otimes \hat g_2^{(0)}, &
g_2^{(0)} \otimes g_2^{(0)} \otimes \hat g_2^{(1)} & .
\end{array}
\end{array}
\right.
\end{equation}
We have performed a similar analysis for all the $Z_n$ orbifolds; the results
are given in Appendix 1. In all cases we have checked by computer that the number of
different Yukawa couplings is correct.
\section{A comparative study of the $[SO(4)]^3$ and $[SU(4)]^2$ $Z_4$
orbifolds}
Although most of the aspects of orbifold Yukawa couplings have been
adequately
illustrated in the previous section by the $Z_6$--I example, there are still
some
interesting features that can be exhibited in the framework of a $Z_4$
orbifold.
In
particular we will see the physical meaning of a (1,2) modulus (absent in the
$Z_6$--I
case) and its effect on the Yukawa coupling values. Furthermore, the comparison
of the
Yukawa couplings of a $Z_4$ orbifold, formulated in an $[SO(4)]^3$ lattice,
with
those of
a $Z_4$ orbifold, formulated in an $[SU(4)]^2$ lattice, will show us which
properties of the
couplings depend on the chosen lattice and which do not. Moreover, the
$[SO(4)]^3$
case provides an example of a non--Coxeter orbifold.
The twist of a $Z_4$ orbifold in an orthogonal complex basis has the form
(see Table 1)
\begin{eqnarray}
\theta={\rm diag}(e^{i\alpha},e^{i\alpha},e^{-2i\alpha}),\;\;\;\;
\alpha=\frac{2\pi}{4} .
\label{dz41}
\end{eqnarray}
Again, the lattice $\Lambda$ admits deformations compatible with the twist
$\theta$. These
degrees of freedom correspond to the Hermitian part of the five (1,1) moduli
surviving compactification,
\(N_{1\bar1},\;\;N_{2\bar2},\;\;N_{3\bar3},\;\;N_{1\bar2},
\;\;N_{2\bar1}\) with $N_{i\bar j}=|i>_R \otimes \alpha_{\bar j L}^{-1}|0>_L$,
and the (1,2) modulus \( N_{33}=|3>_R \otimes \alpha_{3L}^{-1} |0>_L\).
(Notice that no $N_{ij}$ moduli appeared in the $Z_6$--I case.)
Untwisted moduli can be easily expressed in terms of $g_{mn}$, $b_{mn}$ (
$m,\;n$ = 1,...,6),
$i.e.$ the internal metric and torsion respectively. It is easy to check,
however, that $N_{33}$ contains only $g_{mn}$ degrees of freedom, more
precisely $(g_{55}-g_{66})$ and $g_{56}$. Therefore both Re$(N_{33})$
and Im$(N_{33})$ correspond to deformation parameters. In order to see what
these parameters are, let us choose first an $[SO(4)]^3$ root lattice, with
basis
$(e_1,...,e_6)$, as a lattice on which the twist $\theta$, see eq.
(\ref{dz41}),
acts crystallographically as %
\begin{eqnarray}
\begin{array} {lll}
\theta e_1 = e_2 ,& \theta e_3 = e_4 ,& \theta e_5 = -e_5, \\
\theta e_2 = -e_1, & \theta e_4 = -e_3, & \theta e_6 = -e_6 .
\end {array}
\label{t41}
\end{eqnarray}
Then, as in subsection 2.1, $P$ invariance imposes the following relations
\begin{eqnarray}
\begin{array}{ll}
|e_1|=|e_2|, & |e_3|=|e_4| ,\\
\alpha_{ij}=0\;\;\; i=1,2,3,4\;\;j=5,6 & \alpha_{14}=-\alpha_{23},\\
\alpha_{12}=\alpha_{34}=0, & \alpha_{13}=\alpha_{24}
\end{array}
\label{rpd41}
\end{eqnarray}
where $\alpha_{ij}=\cos \theta_{ij}$ and $e_i e_j= |e_i| |e_j| \cos
\theta_{ij}$. Therefore we can
take the seven deformation degrees of freedom as
\begin{eqnarray}
\begin {array}{c}
R_i=|e_i|\;\;\;\; i=1,3,5,6, \\
\alpha_{13},\;\alpha_{14},\;\alpha_{56} .
\end {array}
\label{pd41}
\end{eqnarray}
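As an illustrative cross-check (a Python sketch, not part of the derivation), one can verify that a metric built from the seven parameters of eq. (\ref{pd41}), subject to the relations (\ref{rpd41}), is indeed left invariant by the twist (\ref{t41}), $\theta^{\top} G\,\theta = G$; the parameter values below are random, since the invariance holds identically in them:

```python
# Illustrative check: the metric G built from the seven deformation
# parameters of eq. (pd41), subject to the relations (rpd41), satisfies
# theta^T G theta = G for the Z_4 twist (t41).  Parameter values are
# random; the invariance holds identically in them.
import numpy as np

rng = np.random.default_rng(1)
R1, R3, R5, R6 = rng.uniform(0.5, 2.0, 4)
a13, a14, a56 = rng.uniform(-0.3, 0.3, 3)

G = np.zeros((6, 6))
G[0, 0] = G[1, 1] = R1**2                                # |e1| = |e2| = R1
G[2, 2] = G[3, 3] = R3**2                                # |e3| = |e4| = R3
G[4, 4], G[5, 5] = R5**2, R6**2
G[0, 2] = G[2, 0] = G[1, 3] = G[3, 1] = R1 * R3 * a13    # alpha_13 = alpha_24
G[0, 3] = G[3, 0] = R1 * R3 * a14                        # alpha_14
G[1, 2] = G[2, 1] = -R1 * R3 * a14                       # alpha_23 = -alpha_14
G[4, 5] = G[5, 4] = R5 * R6 * a56                        # alpha_56

# twist (t41); column i is the image of e_{i+1}
th = np.zeros((6, 6))
th[1, 0], th[0, 1] = 1, -1        # e1 -> e2,  e2 -> -e1
th[3, 2], th[2, 3] = 1, -1        # e3 -> e4,  e4 -> -e3
th[4, 4] = th[5, 5] = -1          # e5 -> -e5, e6 -> -e6

assert np.allclose(th.T @ G @ th, G)
print("G is theta-invariant")
```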
Now it is easy to see that the two deformation parameters coming from $N_{33}$
correspond to a variation of the relative size of $|e_5|$ and $|e_6|$ and to
the
$\theta_{56}$ angle, thus deforming the original third $SO(4)$ sublattice
into a rhomboid-like lattice. It is remarkable, however, that, as will be
seen shortly, the deformation parameters coming from $N_{33}$ are not
involved in the Yukawa couplings.
Let us briefly summarize the main results of the $[SO(4)]^3$ $Z_4$
orbifold. They have been obtained by performing an analysis similar to that
followed in the previous section for the $Z_6$--I case.
There are 16 fixed points under $\theta$ in this orbifold, given by
\begin{equation}
f_1^{(ijk)} = g_1^{(i)}\otimes g_1^{(j)}\otimes g_1^{(k)} \;;\;\;\;\; i,j=0,2
\;;\;\;k=0,1,2,3 \label{ft41}
\end{equation}
where
\[
\begin{array}{llll}
g_1^{(0)}=(0,0)\;, & g_1^{(1)}= (\frac{1}{2},0)\;, &
g_1^{(2)}=(\frac{1}{2},\frac{1}{2}) \;, & g_1^{(3)}= (0, \frac{1}{2}) .
\end{array}
\]
Under $\theta$ each fixed point is associated with a conjugation class in a
one--to--one correspondence. Under $\theta^2$
there are 16 fixed tori that are the product of 16 fixed points in the
sublattice
$(e_1,e_2,e_3,e_4)$ by the 2--torus defined by the sublattice $(e_5,e_6)$ (with
or without
deformations). Six pairs of these 16 fixed points are connected through
$\theta$ rotations. The fixed points are (in the first two $SO(4)$'s)
\begin{equation}
f_2^{(ij)} = g_2^{(i)} \otimes g_2^{(j)} \;;\;\;\; i,j=0,1,2,3
\label{ftt41}
\end{equation}
with $g_2^{(i)}=g_1^{(i)}$. One can see that
\[
\begin{array}{ccccccc}
g_2^{(0)}\otimes g_2^{(1)} & \sim & g_2^{(0)}\otimes g_2^{(3)} ,& &
g_2^{(1)}\otimes g_2^{(0)}&\sim
& g_2^{(3)}\otimes g_2^{(0)},\\
g_2^{(2)}\otimes g_2^{(1)} & \sim & g_2^{(2)}\otimes g_2^{(3)} ,& &
g_2^{(1)}\otimes g_2^{(2)}&\sim
& g_2^{(3)}\otimes g_2^{(2)},\\
g_2^{(1)}\otimes g_2^{(1)} & \sim & g_2^{(3)}\otimes g_2^{(3)} ,& &
g_2^{(1)}\otimes g_2^{(3)}&\sim
& g_2^{(3)}\otimes g_2^{(1)} .
\end{array}
\]
Note that in the first two $SO(4)$'s $g_2^{(0)}$ and $g_2^{(2)}$ are fixed
points under $\theta$ while $\theta g_2^{(1)} \rightarrow g_2^{(3)}$.
Consequently there are 10 $\theta^2$
are 10 $\theta^2$
conjugation classes and, as was explained in subsection 2.3, only symmetric
combinations of fixed points (for the conjugation classes with more than one
fixed point)
take part in the Yukawa couplings.
For this orbifold all the twisted couplings are of the $\theta \theta \theta^2$
type
and the selection rule reads
\begin{equation}
f_1 + f_2 - (I + \theta) f_3 \in \Lambda ,
\label{rs41}
\end{equation}
where $f_3$ is the $\theta^2$ fixed point. Denoting the fixed points by
\begin{eqnarray}
\left.
\begin {array}{rcl}
f_1 &=& g_1^{(i_1)}\otimes g_1^{(j_1)}\otimes g_1^{(k_1)}\\
f_2 &=& g_1^{(i_2)}\otimes g_1^{(j_2)}\otimes g_1^{(k_2)}\\
f_3 &=& g_2^{(i_3)}\otimes g_2^{(j_3)}\otimes [\alpha(e_5)+\beta(e_6)]
\end {array}
\right\}
\;\;
\begin {array} {l}
i_1,i_2,j_1,j_2=0,2,\\
k_1,k_2,i_3,j_3=0,1,2,3\\
\alpha,\beta \in R ,
\end {array}
\label{fp12341}
\end {eqnarray}
see eqs. (\ref{ft41}--\ref{ftt41}), the selection rule is simply
\begin{eqnarray}
\left.
\begin {array}{c}
i_1+i_2+2i_3=0 \\
j_1+j_2+2j_3=0 \\
k_1=k_2
\end {array}
\right\}\;\; mod.\;4 .
\label{rsm41}
\end{eqnarray}
The number of allowed couplings is 160.
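As a quick cross-check (illustrative Python, not part of the derivation), the 160 allowed couplings are recovered by enumerating the selection rule (\ref{rsm41}) with $f_3$ running over the 10 $\theta^2$ conjugation classes listed above:

```python
# Enumerate the selection rule (rsm41): i1+i2+2*i3 = 0 and j1+j2+2*j3 = 0
# (mod 4), k1 = k2, with i1,i2,j1,j2 in {0,2}, k1,k2 in {0,..,3} and
# (i3,j3) running over representatives of the 10 theta^2 conjugation
# classes (each theta-related pair of fixed points counted once).
from itertools import product

classes = [(0, 0), (2, 2), (0, 2), (2, 0),   # theta-fixed points
           (0, 1), (1, 0), (2, 1), (1, 2),   # representatives of the
           (1, 1), (1, 3)]                   # six paired classes

count = sum(1
            for (i3, j3) in classes
            for i1, i2, j1, j2 in product((0, 2), repeat=4)
            for k1, k2 in product(range(4), repeat=2)
            if (i1 + i2 + 2*i3) % 4 == 0
            and (j1 + j2 + 2*j3) % 4 == 0
            and k1 == k2)
print(count)  # 160
```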
It is clear now that the third $SO(4)$ lattice always enters the couplings as
the fixed torus associated with the $\theta^2$ field. Then the coupling in
this invariant plane is of the untwisted type and, consequently, the
deformation parameters
for the third $SO(4)$ sublattice ($i.e.\;\; R_5$, $R_6$, $\alpha_{56}$; see
eq. (\ref{pd41}))
do not affect the value of the coupling. Two of
these parameters are precisely those coming from $N_{33}$. Remarkably enough we
have checked that
this is a general property for all the orbifolds: (1,2) moduli are not involved
in the expressions
of the Yukawa couplings. It looks as though there is a selection rule
(unknown to us) forbidding this kind of dependence.
For the case where $f_3$ is also a $\theta$ fixed
point, the value of the coupling in the 2--3 picture is
\begin {eqnarray}
C_{\theta\theta\theta^2} &=& N \sum_{v \in (f_2-f_3+\Lambda)_\bot} \exp [-\frac
{1}{4\pi} (
|v_1|^2 + |v_2|^2 )] \nonumber\\
& = & N \sum_{v \in (f_2-f_3+\Lambda)_\bot} \exp [-\frac {1}{4\pi}
\vec{v}^{\top} M \vec{v}]
\nonumber \\
& = & N\;\; \vartheta
\left[
\begin {array}{c}
\vec{f_{23}} \\
0
\end{array}
\right]
[ 0 , \Omega ] ,
\label{ac41}
\end {eqnarray}
where $(f_2-f_3 + \Lambda)_{\bot}$ selects only $(f_2-f_3+\Lambda)$ shifts
that
are
orthogonal to the invariant sublattice, $i.e.$ the third $SO(4)$ lattice.
Thus, $( f_2 - f_3 + \Lambda)_{\bot}$ has non--zero components in the first
two $SO(4)$'s only. Similarly $\vec{f_{23}}$ represents the four components of
$(f_2-f_3)$ in the basis $(e_1,...,e_4)$ of the first two $SO(4)$ lattices. Finally
\begin{eqnarray}
\begin{array}{c}
N = \sqrt{V_{\perp}}\; \frac{1} {2\pi}\; \frac {\Gamma ^2
(\frac{3}{4})}{\Gamma^2 (\frac
{1}{4})}
\\ \\
M= (-4\pi^2i) \Omega =
\left(
\begin{array}{cccc}
R_1^2 & 0 & R_1R_3\alpha_{13} & R_1R_3\alpha_{14} \\
0 & R_1^2 & -R_1R_3\alpha_{14} & R_1R_3\alpha_{13} \\
R_1R_3\alpha_{13} & -R_1R_3\alpha_{14} & R_3^2 & 0 \\
R_1R_3\alpha_{14} & R_1R_3\alpha_{13} & 0 & R_3^2
\end{array}
\right)
\end{array}
\label{mac41}
\end{eqnarray}
where $V_{\perp}$ is the volume of the unit cell of the $SO(4) \times SO(4)$
sublattice (the first two $SO(4)$'s), which is orthogonal to the invariant
plane.
If $f_3$ is not fixed by $\theta$, the result is exactly the same except that
$C_{\theta\theta\theta^2}$, eq. (\ref{ac41}), is multiplied by $\sqrt{2}$.
Clearly the number of effective
deformation parameters is 4. The number of {\em different} Yukawa couplings,
from the
160 allowed ones, is 6 (without deformations), corresponding to
\begin{equation}
\vec{f_{23}}=
\left[
\begin{array}{l}
l_3=1\;:\;
\begin{array}{lll}
g_1^{(0)} \otimes g_1^{(0)}, & g_1^{(2)} \otimes g_1^{(0)}, & g_1^{(2)} \otimes
g_1^{(2)},
\end{array}\\
l_3=2\;:\;
\begin{array}{lll}
g_2^{(0)} \otimes g_2^{(1)}, & g_2^{(2)} \otimes g_2^{(1)}, & g_2^{(1)} \otimes
g_2^{(1)}
\end{array}
\end{array}
\right.
\end{equation}
and 10 (when deformations are considered), namely
\begin{equation}
\vec{f_{23}}=
\left[
\begin{array}{l}
l_3=1\;:\;
\begin{array}{llll}
g_1^{(0)} \otimes g_1^{(0)}, & g_1^{(2)} \otimes g_1^{(0)}, & g_1^{(0)} \otimes
g_1^{(2)},
& g_1^{(2)} \otimes g_1^{(2)},
\end{array}\\
l_3=2\;:\; \left\{
\begin{array}{lll}
g_2^{(0)} \otimes g_2^{(1)}, & g_2^{(2)} \otimes g_2^{(1)}, & g_2^{(1)} \otimes
g_2^{(1)}, \\
g_2^{(1)} \otimes g_2^{(0)}, & g_2^{(1)} \otimes g_2^{(2)}, & g_2^{(1)} \otimes
g_2^{(3)} .
\end{array}
\right.
\end{array}
\right.
\end{equation}
We would like to compare all the previous results with those of the $Z_4$
orbifold based on a Coxeter twist acting on an $[SU(4)]^2$ root lattice.
This will illustrate which aspects of the orbifold dynamics are independent
of the chosen lattice and which are not.
Furthermore, for the $[SU(4)]^2$ $Z_4$ orbifold, the lattice cannot be
decomposed as the direct product of a sublattice invariant under $\theta^2$
times an orthogonal sublattice, as happened in the $[SO(4)]^3$ case. This
peculiarity, which is shared by other orbifolds, introduces some additional
complications which we now describe.
The Coxeter
element in the $[SU(4)]^2$ root lattice is of the form
\begin{eqnarray}
\begin{array} {lll}
\theta e_1= e_2, & \theta e_2 = e_3, & \theta e_3 = -e_1-e_2-e_3, \\
\theta e_4= e_5, & \theta e_5 = e_6, & \theta e_6 = -e_4-e_5-e_6 .
\end{array}
\label{t42}
\end{eqnarray}
The 7 deformation parameters coming from
$(N_{1\bar1},\;\;N_{2\bar2},\;\;N_{3\bar3},\;\;N_{1\bar2},\;\;N_{2\bar1},\;\;N_{33})$
are
\begin{eqnarray}
\begin {array}{c}
R_i = |e_i|,\;\;i=1,4 \\
\alpha_{12},\;\; \alpha_{14},\;\;\alpha_{15},\;\;
\alpha_{16},\;\; \alpha_{45},
\end {array}
\label{pd42}
\end{eqnarray}
where $( e_1,e_2,e_3)$ is the basis of the first $SU(4)$, and $(e_4,e_5,e_6)$
the basis of
the second one. Equation (\ref{pd42}) should be compared with
eq. (\ref{pd41}), $i.e.$
its analogue
in the $[SO(4)]^3$ case. Clearly the geometrical interpretation of the
deformation parameters
is different for each one. Other parameters of the $SU(4)^2$ lattice are
related
to the previous ones by
\begin{eqnarray}
\begin{array}{ll}
|e_1| = |e_2| = |e_3|, & |e_4| = |e_5| = |e_6|, \\
\alpha_{23}=\alpha_{12}, & \alpha_{34} = \alpha_{16}, \\
\alpha_{13}=-1-2\alpha_{12}, & \alpha_{24} = \alpha_{35} =
-\alpha_{14}-\alpha_{15}-\alpha_{16},\\
\alpha_{25}=\alpha_{36}=\alpha_{14},\;\;\; & \alpha_{56}=\alpha_{45}, \\
\alpha_{26}=\alpha_{15}, & \alpha_{46}=-1-2\alpha_{45} .
\end{array}
\label{opd42}
\end{eqnarray}
It is important to point out that the $[SU(4)]^2$ lattice cannot be
consistently
deformed
into an $[SO(4)]^3$ one. To see this, note that the invariant sublattice under
the action of
the Coxeter element (\ref{t42}) is generated by $(e_1+e_3)$ and $(e_4+e_6)$. If
such a
deformation existed, these vectors could be identified with the basis of the
invariant
$SO(4)$ sublattice in the $[SO(4)]^3$ case. Now, it can be shown that we cannot
construct a
basis of $[SU(4)]^2$ with $(e_1+e_3)$ , $(e_4+e_6)$ and four additional
lattice
vectors
orthogonal to these (with or without deformations). In fact, it is easy to
check that the
$[SU(4)]^2$ Coxeter element (\ref{t42}) has the same form as the twist $\theta$
of the
$[SO(4)]^3$ case, $i.e.$ eq. (\ref{t41}), when acting on the following set of
lattice vectors
\begin{eqnarray}
\begin{array}{lll}
\tilde e_1 = e_1+e_2, & \tilde e_3 = e_1+e_3, & \tilde e_5 = e_5 + e_6 ,\\
\tilde e_2 = e_2+e_3, & \tilde e_4 = e_4+e_5, & \tilde e_6 = e_4 + e_6 .
\end{array}
\label{bo42}
\end{eqnarray}
Notice that when deformations are included, $(\tilde e_1,\tilde e_2,\tilde
e_4,\tilde e_5)$
remain orthogonal to $(\tilde e_3,\tilde e_6)$. Actually,
$(\tilde e_1,\tilde e_2,\tilde e_3,\tilde e_4,\tilde e_5,\tilde e_6)$ generate
an $[SO(4)]^3$ sublattice of the $[SU(4)]^2$ lattice, but they are not a basis
of the whole lattice. In any case the $\tilde e_i$ will be of help below.
The number of fixed points is the same in both cases. For the case at hand,
$[SU(4)]^2$,
there are 16 fixed points under $\theta$ which can be expressed as
\begin{equation}
f_1^{(ij)}= g_1^{(i)} \otimes g_1^{(j)} \;\;;\;\;\;\; i,j=0,1,2,3
\label{pft42}
\end{equation}
with
\[
\begin{array}{llll}
g_1^{(0)} = (0,0,0) , & g_1^{(1)} = (\frac{1}{4}, \frac{1}{2}, \frac{3}{4}), &
g_1^{(2)} = (\frac{1}{2}, 0 ,\frac{1}{2}), &
g_1^{(3)} = (\frac{3}{4}, \frac{1}{2}, \frac{1}{4}) .
\end{array}
\]
Under $\theta^2$ there is a fixed torus generated by $(e_1+e_3)$ and
$(e_4+e_6)$. Then we
can form 16 fixed tori as products of this fixed torus by the following 16
$\theta^2$ fixed points (six pairs of them connected by $\theta$ rotations)
\begin{eqnarray}
\begin{array}{ccccccc}
g_2^{(0)} \otimes g_2^{(0)} ,& & g_2^{(2)} \otimes g_2^{(2)}, & & g_2^{(0)}
\otimes g_2^{(2)} ,& & g_2^{(2)} \otimes g_2^{(0)} ,\\
g_2^{(0)} \otimes g_2^{(1)} &\sim & g_2^{(0)} \otimes
g_2^{(3)} ,& & g_2^{(2)} \otimes g_2^{(1)} &\sim & g_2^{(2)} \otimes g_2^{(3)},
\\
g_2^{(1)} \otimes g_2^{(0)} &\sim & g_2^{(3)} \otimes
g_2^{(0)} ,& & g_2^{(1)} \otimes g_2^{(2)} &\sim & g_2^{(3)} \otimes g_2^{(2)},
\\
g_2^{(1)} \otimes g_2^{(1)} &\sim & g_2^{(3)} \otimes
g_2^{(3)} ,& & g_2^{(1)} \otimes g_2^{(3)} &\sim & g_2^{(3)} \otimes g_2^{(1)}
\end{array}
\label{pftt42}
\end{eqnarray}
where
\[
g_2^{(0)} = (0,0,0), \;\; g_2^{(1)} = \frac {1}{2}(1,1,0), \;\; g_2^{(2)} =
\frac{1}{2}(1,0,1), \;\; g_2^{(3)} = \frac{1}{2} (0,1,1) .
\]
Note that $g_2^{(0)}$ and $g_2^{(2)}$ are fixed under $\theta$ but $g_2^{(1)}$
and $g_2^{(3)}$ are
connected by a $\theta$ rotation, $\theta g_2^{(1)} \rightarrow g_2^{(3)}$.
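This behaviour can be checked in the lattice basis, where the Coxeter element (\ref{t42}) maps coordinates $(a,b,c)$ of one $SU(4)$ factor to $(-c,\,a-c,\,b-c)$ (an illustrative Python sketch, not part of the derivation):

```python
# The Coxeter element (t42) maps lattice coordinates (a,b,c) of one
# SU(4) factor to (-c, a-c, b-c).  Modulo the lattice, g2^(0) and
# g2^(2) are theta-fixed while g2^(1) and g2^(3) are exchanged.
from fractions import Fraction as F

def twist(p):
    a, b, c = p
    return (-c, a - c, b - c)

def mod1(p):                       # reduce modulo lattice vectors
    return tuple(x % 1 for x in p)

g2 = [(F(0), F(0), F(0)),          # g2^(0)
      (F(1, 2), F(1, 2), F(0)),    # g2^(1)
      (F(1, 2), F(0), F(1, 2)),    # g2^(2)
      (F(0), F(1, 2), F(1, 2))]    # g2^(3)

assert mod1(twist(g2[0])) == mod1(g2[0])   # fixed
assert mod1(twist(g2[2])) == mod1(g2[2])   # fixed
assert mod1(twist(g2[1])) == mod1(g2[3])   # g2^(1) -> g2^(3)
assert mod1(twist(g2[3])) == mod1(g2[1])   # g2^(3) -> g2^(1)
print("theta action on theta^2 fixed points verified")
```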
As in the $[SO(4)]^3$ case there are 10 conjugation classes. The space group
selection rule also has the same form %
\begin{equation}
f_1 + f_2 - (I + \theta) f_3 \in \Lambda .
\label{rs42}
\end{equation}
Denoting the fixed points by
\begin{eqnarray}
\begin {array}{c}
f_1 = g_1^{(i_1)}\otimes g_1^{(j_1)},\;\;
f_2 = g_1^{(i_2)}\otimes g_1^{(j_2)},\;\;
f_3 = [g_2^{(i_3)}\otimes g_2^{(j_3)}]\otimes[\alpha(e_1+e_3)+\beta(e_4+e_6)],
\\
i_1,i_2,i_3,j_1,j_2,j_3=0,1,2,3,\;\;\alpha,\beta \in R
\end {array}
\end{eqnarray}
eq. (\ref{rs42}) can be expressed as
\begin{eqnarray}
\left.
\begin {array}{lcr}
i_1 + i_2 + 2i_3 &=& 0\\
j_1 + j_2 + 2j_3 &=& 0
\end {array}
\right\}
\;\; mod.\; 4 .
\label{rsm42}
\end{eqnarray}
(Note that in spite of eqs. (\ref{rs42}--\ref{rsm42}) being formally identical
with (\ref{rs41}--\ref{rsm41}), the meaning of the vectors implicitly involved
is quite different.) The number of allowed couplings is again the same, 160.
The Yukawa
coupling, if
$f_3$ is fixed by $\theta$, is given by
\begin {eqnarray}
C_{\theta\theta\theta^2} &=&
\bar N \sum_{v \in (f_2-f_3+\Lambda)_\bot} \exp [-\frac {1}{4\pi}
\vec{v}^{\top}
\bar
M \vec{v}] .
\label{ac42}
\end {eqnarray}
As usual, the arrows denote components in the lattice basis $(e_1,...,e_6)$.
The subscript $\bot$ means that only $v$ shifts orthogonal to the invariant
plane (defined
by $(e_1+e_3)$ and $(e_4+e_6)$) have to be considered. If $f_3$ is not fixed by
$\theta$ the
previous expression has to be multiplied by a $\sqrt {2}$ factor. $\bar N$ and
$\bar M$ are
given by
\begin{equation}
\begin{array}{c}
\bar N = \sqrt{V_{\perp}} \; \frac{1} {2\pi} \; \frac {\Gamma ^2
(\frac{3}{4})}{\Gamma^2
(\frac {1}{4})}
\\ \\
\bar M =
\left(
\begin {array}{cccccc}
a & b & -a-2b & e & f & g \\
b & a & b & -e-f-g & e & f \\
-a-2b & b & a & g & -e-f-g & e \\
e & -e-f-g & g & c & d & -c-2d \\
f & e & -e-f-g & d & c & d \\
g & f & e & -c-2d & d & c
\end {array}
\right) \\
\begin {array}{lllllll}
a = R_1^2 &
b = R_1^2 \alpha_{12} &
c = R_4^2 &
d = R_4^2 \alpha_{45} &
e = R_1 R_4 \alpha_{14} &
f = R_1 R_4 \alpha_{15} &
g = R_1 R_4 \alpha_{16} .
\end{array}
\end{array}
\label{mac42}
\end{equation}
where $V_{\perp}$ is the volume of the sublattice orthogonal to the invariant
plane
(see below).
By addition of lattice vectors we can always choose $f_2$ and $f_3$ in
(\ref{ac42}) such that
$f_2-f_3$ is orthogonal to the invariant plane. Then $f_2-f_3$ can be expressed
in the
"basis" (\ref{bo42}) as
\begin{equation}
f_2-f_3=x_1 \tilde e_1 +x_2 \tilde e_2 + x_4 \tilde e_4 + x_5 \tilde e_5\;.
\label{23bo}
\end{equation}
We can check that $x_i=0, \frac{1}{2}$ (up to lattice vectors) for all the
choices of
$f_2$, $f_3$. However it is amusing to see that many of the possibilities are
in
fact
equivalent. Consider, for definiteness, the case $f_2-f_3 = 0$, $i.e.\; x_i=0$
in
(\ref{23bo}). Now we can add to $f_2-f_3$ any shift contained in the invariant
plane,
\begin{equation}
f_2-f_3 = \alpha (e_1+e_3) + \beta (e_4 + e_6)\;,\;\;\; \alpha, \beta \in R .
\label{23planoinv}
\end{equation}
Demanding $v \in f_2-f_3+\Lambda$ to be orthogonal to the invariant plane we
find constraints
for $\alpha$ and $\beta$. A shift \( \sum_{i} a_i e_i \) is orthogonal to the
invariant
plane if it satisfies the condition \( a_1-a_2+a_3=0\) and \( a_4 -a_5+a_6=0 \)
(with or
without deformations). Then
\begin{equation}
v=\alpha(e_1+e_3)+\beta(e_4+e_6)+\sum_{i=1}^{6} n_i e_i \;,\;\;\;\; n_i\in Z
\label{exv42}
\end{equation}
is orthogonal if
\begin{equation}
(\alpha,\beta) =
(0,0),\;\;(0,\frac{1}{2}),\;\;(\frac{1}{2},0),\;\;(\frac{1}{2},\frac{1}{2}),
\label{vv42}
\end{equation}
up to lattice vectors. Then $v$ can be expressed as
\begin {eqnarray}
v &=& (n_1+\alpha) \tilde e_1 + (n_3+\alpha) \tilde e_2 +
(n_4+\beta) \tilde e_4 + (n_6+ \beta) \tilde e_5 .
\label{difv42}
\end{eqnarray}
Therefore we have to sum up four possibilities for $(x_1,x_2,x_4,x_5)$, namely
\begin{equation}
(0,0,0,0),\;\;(\frac{1}{2},\frac{1}{2},0,0),\;\;(0,0,\frac{1}{2},\frac{1}{2}),\;
\;
(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2}) .
\label{sv42}
\end{equation}
This is characteristic of the lattices that cannot be decomposed as the direct
product of an
invariant sublattice times an orthogonal sublattice. In particular it did not
happen in the
$[SO(4)]^3$ lattice.
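The constraint (\ref{vv42}) can also be recovered by a direct search (an illustrative Python sketch): the orthogonality conditions reduce to $2\alpha + (n_1-n_2+n_3) = 0$ and $2\beta + (n_4-n_5+n_6) = 0$, which admit integer solutions only for $\alpha,\beta \in \{0,\frac{1}{2}\}$ modulo lattice vectors.

```python
# Search for integer shifts n_i making v = alpha*(e1+e3) + beta*(e4+e6)
# + sum n_i e_i orthogonal to the invariant plane, i.e. satisfying
# a1-a2+a3 = 2*alpha + (n1-n2+n3) = 0 and the analogous condition
# 2*beta + (n4-n5+n6) = 0 for the second SU(4).
from fractions import Fraction
from itertools import product

def orthogonal_shift_exists(x):
    ns = range(-2, 3)
    return any(2*x + n1 - n2 + n3 == 0
               for n1, n2, n3 in product(ns, repeat=3))

grid = [Fraction(k, 4) for k in range(4)]        # alpha, beta in [0,1)
good = [(a, b) for a, b in product(grid, grid)
        if orthogonal_shift_exists(a) and orthogonal_shift_exists(b)]

for a, b in good:
    print(a, b)   # the four pairs of eq. (vv42): 0 or 1/2 each
```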
In order to write the coupling we have to add to each case in (\ref{sv42})
lattice vectors orthogonal to the invariant plane, $i.e.$ of the form
\begin{equation}
u_{\bot}= n_1\tilde e_1+n_2 \tilde e_2 + n_4 \tilde e_4 +n_5 \tilde e_5,
\label{vo42}
\end{equation}
as is reflected in (\ref{difv42}). Now we can express the coupling
(\ref{ac42}), which
contained a $6 \times 6\;\bar M$ matrix, as a sum of four $\vartheta$ functions
defined in the
four-dimensional lattice $(\tilde e_1, \tilde e_2, \tilde e_4, \tilde e_5)$
\begin {eqnarray}
C_{\theta\theta\theta^2} &=&
\bar N \sum_{\tilde f_{23}}\;\; \sum_{\tilde v \in (\tilde
f_{23}+\Lambda_\bot)}
\exp
[-\frac {1}{4\pi}
\vec{\tilde{v}}^{\top} \bar M' \vec{\tilde{v}}] \nonumber\\
& = &\bar N \sum_{\tilde f_{23}} \vartheta
\left[
\begin {array}{c}
\vec{\tilde f_{23}} \\
0
\end{array}
\right]
[ 0 , \Omega' ]
\label{acjab42}
\end {eqnarray}
where $\vec{\tilde{v}}$ and $\vec{\tilde f_{23}}$ are the components in
$(\tilde e_1, \tilde e_2, \tilde e_4, \tilde e_5)$ of $v$ and $(f_2-f_3)$
respectively. $\vec{\tilde f_{23}}$ runs over the possibilities displayed in
(\ref{sv42}), and
$\Omega'$ is given by
\begin{equation}
\Omega' = i \frac {1}{4 \pi^2}\bar M' = i \frac {1}{4 \pi^2}
\left(
\begin {array}{cccc}
\bar a & 0 &\bar b & \bar c \\
0 &\bar a & -\bar c & \bar b \\
\bar b & -\bar c & \bar d & 0 \\
\bar c & \bar b & 0 & \bar d
\end {array}
\right) \;\;\;
\begin{array}{rcl}
\bar a&=& 2 R_1^2 (1+\alpha_{12})\\
\bar b&=& R_1 R_4 (\alpha_{14} -\alpha_{16})\\
\bar c&=& R_1 R_4 (\alpha_{14}+2\alpha_{15}+\alpha_{16})\\
\bar d&=& 2 R_4^2 (1+\alpha_{45}) .
\end{array}
\label{omega42}
\end{equation}
Note that there are 4 effective deformation parameters, as in the $[SO(4)]^3$
case.
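As a numerical sanity check (illustrative Python, with random values for the free parameters), restricting the $6\times 6$ metric of eq. (\ref{mac42}) to the sublattice $(\tilde e_1, \tilde e_2, \tilde e_4, \tilde e_5)$ reproduces the $4\times 4$ matrix of eq. (\ref{omega42}):

```python
# Restricting the 6x6 metric (mac42) to the orthogonal sublattice
# (te1, te2, te4, te5) of eq. (bo42) reproduces the 4x4 matrix M' of
# eq. (omega42).  Random values are used for the free parameters.
import numpy as np

rng = np.random.default_rng(0)
R1, R4 = rng.uniform(0.5, 2.0, 2)
a12, a14, a15, a16, a45 = rng.uniform(-0.3, 0.3, 5)

a, b = R1**2, R1**2 * a12
c, d = R4**2, R4**2 * a45
e, f, g = R1*R4*a14, R1*R4*a15, R1*R4*a16

M = np.array([
    [a,      b,      -a-2*b, e,      f,      g     ],
    [b,      a,      b,      -e-f-g, e,      f     ],
    [-a-2*b, b,      a,      g,      -e-f-g, e     ],
    [e,      -e-f-g, g,      c,      d,      -c-2*d],
    [f,      e,      -e-f-g, d,      c,      d     ],
    [g,      f,      e,      -c-2*d, d,      c     ]])

# tilde basis te1=e1+e2, te2=e2+e3, te4=e4+e5, te5=e5+e6 (columns)
T = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [0, 1, 0, 0],
              [0, 0, 1, 0], [0, 0, 1, 1], [0, 0, 0, 1]])

ab = 2 * R1**2 * (1 + a12)
bb = R1 * R4 * (a14 - a16)
cb = R1 * R4 * (a14 + 2*a15 + a16)
db = 2 * R4**2 * (1 + a45)
Mp = np.array([[ab, 0, bb, cb], [0, ab, -cb, bb],
               [bb, -cb, db, 0], [cb, bb, 0, db]])

assert np.allclose(T.T @ M @ T, Mp)
print("restriction of (mac42) matches (omega42)")
```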
Besides (\ref{sv42}), there are three other inequivalent possibilities for
$f_2-f_3$, namely
\begin{equation}
\begin{array} {llll}
\{\;(0,0,0,\frac {1}{2}), & (0,0,\frac{1}{2},0), &
(\frac{1}{2},\frac{1}{2},0,\frac{1}{2}),&
(\frac{1}{2},\frac{1}{2},\frac{1}{2},0)\;\},\\
\{\;(0,\frac {1}{2},0,0), & (0,\frac {1}{2},\frac{1}{2},\frac {1}{2}), &
(\frac{1}{2},0,0,0),& (\frac{1}{2},0,\frac{1}{2},\frac {1}{2})\;\},\\
\{\;(0,\frac {1}{2},0,\frac {1}{2}), & (0,\frac {1}{2},\frac{1}{2},0), &
(\frac{1}{2},0,0,\frac{1}{2}),& (\frac{1}{2},0,\frac{1}{2},0)\;\} .
\end {array}
\label{osv42}
\end{equation}
Taking into account that the coupling gets a factor $\sqrt{2}$ if $f_3$ is not
fixed by $\theta$, this gives 8 different Yukawa couplings when deformations
are considered, and 6 without deformations (the first two possibilities in
(\ref{osv42}) are equal).
This differs from
the $[SO(4)]^3$ case, where there were 10 and 6 respectively. Note that the
matrix $\Omega'$,
eq. (\ref{omega42}), appearing in the coupling is formally identical to that
of $[SO(4)]^3$,
eq. (\ref{mac41}). However, as we have seen, the structure of possible shifts
is
very different. In
any case the number of effective deformation parameters is the same for both
cases.
\section{Conclusions}
We have calculated the complete twisted Yukawa couplings for all
the $Z_n$ orbifold constructions in the most general case, i.e.
when deformations of the compactified space are considered. This
involves a number of tasks, namely the determination of the allowed
couplings, the calculation of the explicit dependence of the Yukawa
coupling values on the moduli expectation values (i.e. the
parameters determining the size and shape of the compactified
space), etc. Some progress in this direction has recently been
made but without arriving at such explicit expressions as those given
in this paper. This is an essential ingredient in order to
relate theory and observation. In particular it
allows a counting of the {\em different} Yukawa couplings
for each orbifold (with and without deformations), which
is crucial to determine the phenomenological
viability of the different schemes, since it is directly related
to the fermion mass hierarchy. In this sense some orbifolds
(e.g. $Z_3$, $Z_4$, $Z_6$--I, $Z_8$--I, $Z_{12}$--I)
have much better phenomenological prospects than
others (e.g. $Z_7$, $Z_6$--II, $Z_8$--II, $Z_{12}$--II).
The results for the whole set of Coxeter
orbifolds are summarized in Table 1. Other facts concerning
the phenomenological profile of $Z_n$ orbifolds are also
discussed, e.g. the existence of non--diagonal entries in the
fermion mass matrices, which is related to a non--trivial
structure of the Kobayashi--Maskawa matrix. In this sense
non--prime orbifolds are favoured over prime
ones which do not have off--diagonal entries in
the mass matrices at this fundamental level.
The results of this paper give the precise form in which moduli
fields are coupled to twisted matter. This is essential in order
to study in detail other important issues, namely the
supersymmetry breaking mechanism by gaugino condensation (in
which the moduli develop an additional non--perturbative
superpotential) and cosmological implications (note that the
moduli are also coupled to gravity in a Jordan--Brans--Dicke--like
way). The level of explicitness given in the paper is also necessary
for more theoretical matters
(e.g. the study of the transformation properties of the Yukawa
couplings under target--space modular transformations like
$R\rightarrow 1/R$ ). Concerning the last aspect we have found some
appealing results, such as the fact that (1,2) moduli never appear
in the expressions of the Yukawa couplings. Likewise, (1,1) moduli associated
with fixed tori involved in the Yukawa coupling do not affect
the value of the coupling. It is worth noticing that the above-mentioned
moduli are precisely the only ones which contribute to the string loop
corrections to gauge coupling constants \cite{pipa}.
\vspace{.2 cm}
\noindent{\bf ACKNOWLEDGEMENTS}
\noindent The work of J.A.C. was supported in part by the C.I.C.Y.T., Spain.
The work of F.G. was supported by an F.P.I. grant, Spain. C.M. is grateful
to the members of the Departamento de F\'{\i}sica de Part\'{\i}culas,
Universidad
de Santiago de Compostela, Spain, for their kind hospitality. F.G. thanks
J. Mas for very useful discussions.
\newpage
\noindent\underline{\bf{APPENDIX 1}}
\begin{quotation}\noindent{
We follow a notation as compact as possible. The precise meaning of all the
concepts
appearing here is explained in detail in the text for the $Z_6$--I and $Z_4$
examples.
}\end{quotation}
\vspace{0.8 cm}
\noindent{\underline{\bf{ORBIFOLD $Z_3$}}}
\vspace{.5 cm}
\noindent{\underline {Twist}
$\;\; \theta={\rm
diag}(e^{i\alpha},e^{i\alpha},e^{-2i\alpha}) ,\;\;\;\;
\alpha=\frac{2\pi}{3} $}
\noindent{\underline {Lattice}
$\;\; [SU(3)]^3 $}
\noindent{\underline {Coxeter element}}
\[
\begin{array}{lll}
\theta e_i=e_{i+1}, & \theta e_{i+1}= -e_i-e_{i+1}, & i=1,3,5
\end {array}
\]
\noindent{\underline {Deformation parameters}}
\indent{Relations}
\[
\begin{array}{c}
|e_i|^2=|e_{i+1}|^2, \;\;\; \alpha_{i,i+1}=-\frac{1}{2}, \;\;\;
\alpha_{i,j}=\alpha_{i+1,j+1},\\
\alpha_{i,j}+\alpha_{i,j+1}+\alpha_{i+1,j}=0,\;\;\;\;
i,j=1,3,5\;\;\;i<j
\end{array}
\]
\[
\alpha_{ij}\equiv\cos(\theta_{ij})
\]
\indent{Degrees of freedom (9)}
\[
R_i = |e_i|, \;\; \alpha_{i,j},\;\; \alpha_{i,j+1}, \;\; i,j=1,3,5 \;\; i<j
\]
\noindent{\underline{Lattice basis ($e_i$) in terms of orthogonal basis
($\tilde e_i$)}}
\indent{Not necessary in this case.}
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta$} (27)}
\[
\begin {array}{c}
f_1^{(ijk)}= g_1^{(i)} \otimes g_1^{(j)} \otimes g_1^{(k)}\;,\;\; i,j,k=0,1,2\;,\\
g_1^{(0)} = (0,0)\;,\;\; g_1^{(1)} =(\frac{1}{3},\frac{2}{3})\;,\;\;
g_1^{(2)} = (\frac{2}{3},\frac{1}{3})
\end {array}
\]
\vspace{.5 cm}
\noindent {\underline {Coupling $\theta\theta\theta$}}
\indent {Selection rule}
\[
f_1+f_2+f_3 \in \Lambda
\]
\indent {Denoting}
\[
\begin {array}{l}
f_1 = g_1^{(i_1)}\otimes g_1^{(j_1)}\otimes g_1^{(k_1)}\\
f_2 = g_1^{(i_2)}\otimes g_1^{(j_2)}\otimes g_1^{(k_2)}\\
f_3 = g_1^{(i_3)}\otimes g_1^{(j_3)}\otimes g_1^{(k_3)}
\end {array}
\;\;\;\;
\begin {array}{c}
i_1,i_2,i_3 = 0,1,2\\
j_1,j_2,j_3 = 0,1,2\\
k_1,k_2,k_3 = 0,1,2
\end {array}
\]
\indent{the selection rule reads}
\[
\left.
\begin {array}{lcr}
i_1 + i_2 + i_3 &=& 0\\
j_1 + j_2 + j_3 &=& 0\\
k_1 + k_2 + k_3 &=& 0
\end {array}
\right\}
\;\; mod.\; 3
\]
\indent{Number of allowed couplings: 729}
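As a cross-check (illustrative Python, not part of the appendix data), the count follows from a short enumeration: each $SU(3)$ sublattice contributes $27/3 = 9$ index triples, giving $9^3 = 729$.

```python
# Count the allowed theta-theta-theta couplings of the Z_3 orbifold:
# each SU(3) sublattice contributes a triple (a1,a2,a3), a_i in {0,1,2},
# subject to a1+a2+a3 = 0 mod 3; the three sublattices are independent.
from itertools import product

per_plane = sum(1 for a in product(range(3), repeat=3) if sum(a) % 3 == 0)
print(per_plane, per_plane**3)  # 9 729
```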
\indent{Expression of the coupling}
\begin {eqnarray*}
C_{\theta\theta\theta} &=&
N \sum_{v \in (f_3-f_2+\Lambda)} \exp [-\frac {1}{4\pi} \; \sin (\frac {2
\pi} {3})\; |v|^2] \\
&=&
N \sum_{\vec{u} \in Z^6} \exp [-\frac {\sqrt{3}}{8\pi} \;
(\vec{f_{23}}+\vec{u})^{\top} M
(\vec{f_{23}}+\vec{u})] \\
&=&
N \; \vartheta
\left[
\begin {array}{c}
\vec {f_{23}} \\
\ 0 \\
\end {array}
\right]
[0,\Omega]
\end {eqnarray*}
\indent {with}
\begin {eqnarray*}
\Omega &=& i \frac {\sqrt{3}}{8 \pi^2} M ,\;\;\;\;
N= \sqrt{V_{\Lambda}}\; \frac{3^{3/4}}{8 \pi^3} \;
\frac{\Gamma^6(\frac{2}{3})}{\Gamma^3 (\frac{1}{3})}\\
\Omega &=& i \frac {\sqrt{3}}{8 \pi^2}
\left(
\begin {array}{cccccc}
R_1^2 & -\frac {R_1^2} {2} & R_1 R_3 \alpha_{13} & R_1R_3 \alpha_{14} &
R_1 R_5 \alpha_{15} & R_1 R_5 \alpha_{16} \\
- \frac{R_1^2}{2} & R_1^2 & R_1R_3 \alpha_{23} &
R_1R_3 \alpha_{13} & R_1R_5 \alpha_{25} & R_1R_5 \alpha_{15}\\
R_1R_3 \alpha_{13} & R_1R_3 \alpha_{23} & R_3^2 & -\frac
{R_3^2}{2} & R_3R_5 \alpha_{35} & R_3R_5 \alpha_{36}\\
R_1R_3 \alpha_{14} & R_1R_3 \alpha_{13} & -\frac{R_3^2}{2} & R_3^2 &
R_3R_5\alpha_{45} & R_3R_5 \alpha_{35}\\
R_1R_5 \alpha_{15} & R_1R_5 \alpha_{25} & R_3R_5 \alpha_{35} &
R_3R_5 \alpha_{45} & R_5^2 & -\frac{R_5^2}{2} \\
R_1 R_5 \alpha_{16} & R_1 R_5 \alpha_{15} & R_3R_5 \alpha_{36} &
R_3R_5 \alpha_{35} & -\frac{R_5^2}{2} & R_5^2
\end {array}
\right)
\end {eqnarray*}
\[
\alpha_{23}=-(\alpha_{13}+\alpha_{14})\;,\;\;
\alpha_{25}=-(\alpha_{15}+\alpha_{16})\;,\;\;
\alpha_{45}=-(\alpha_{35}+\alpha_{36})
\]
\indent{Number of effective parameters: 9}
\indent{Number of different couplings without deformations: 4}
\indent{corresponding to the following $\vec f_{23} $ shifts}
\[
\vec{f_{23}}=
g_1^{(0)}\otimes g_1^{(0)} \otimes g_1^{(0)} ,\;\;
g_1^{(1)}\otimes g_1^{(0)} \otimes g_1^{(0)} ,\;\;
g_1^{(1)}\otimes g_1^{(1)} \otimes g_1^{(0)} ,\;\;
g_1^{(1)}\otimes g_1^{(1)} \otimes g_1^{(1)}
\]
\indent{Number of different couplings with deformations: 14}
\indent{corresponding to the following $\vec f_{23} $ shifts}
\[
\vec{f_{23}}= \left\{
\begin{array}{l}
g_1^{(0)}\otimes g_1^{(0)} \otimes g_1^{(0)} ,\;\;
g_1^{(1)}\otimes g_1^{(0)} \otimes g_1^{(0)} ,\;\;
g_1^{(0)}\otimes g_1^{(1)} \otimes g_1^{(0)} ,\;\;
g_1^{(0)}\otimes g_1^{(0)} \otimes g_1^{(1)} ,\\
g_1^{(1)}\otimes g_1^{(1)} \otimes g_1^{(0)} ,\;\;
g_1^{(1)}\otimes g_1^{(0)} \otimes g_1^{(1)} ,\;\;
g_1^{(0)}\otimes g_1^{(1)} \otimes g_1^{(1)} ,\;\;
g_1^{(1)}\otimes g_1^{(2)} \otimes g_1^{(0)} ,\\
g_1^{(1)}\otimes g_1^{(0)} \otimes g_1^{(2)} ,\;\;
g_1^{(0)}\otimes g_1^{(1)} \otimes g_1^{(2)} ,\;\;
g_1^{(1)}\otimes g_1^{(1)} \otimes g_1^{(1)} ,\;\;
g_1^{(1)}\otimes g_1^{(1)} \otimes g_1^{(2)} ,\\
g_1^{(1)}\otimes g_1^{(2)} \otimes g_1^{(2)} ,\;\;
g_1^{(1)}\otimes g_1^{(2)} \otimes g_1^{(1)}
\end{array}
\right.
\]
\vspace {1.0 cm}
\noindent{\underline{\bf{ORBIFOLD $Z_4$}}}
\indent{See Section 3}
\vspace {1.0 cm}
\noindent{\underline{\bf{ORBIFOLD $Z_6$--I}}}
\indent{See Section 2}
\vspace {1.0 cm}
\noindent{\underline{\bf{ORBIFOLD $Z_6$--II}}}
\vspace{.5 cm}
\noindent{\underline {Twist}
$ \theta={\rm
diag}(e^{i\alpha},e^{2i\alpha},e^{-3i\alpha}) ,\;\;\;\;
\alpha=\frac{2\pi}{6} $}
\noindent{\underline {Lattice}
$ SU(6) \otimes SU(2) $}
\noindent{\underline {Coxeter element}}
\[
\begin{array}{llll}
\theta e_i=e_{i+1}, & i=1,...,4, &
\theta e_5=-e_1-e_2-e_3-e_4-e_5, & \theta e_6=-e_6
\end{array}
\]
\noindent{\underline {Deformation parameters}}
\indent{Relations}
\[
\begin{array}{ll}
|e_1|=|e_2|=|e_3|=|e_4|=|e_5|, &
\alpha_{12}=\alpha_{23}=\alpha_{34}=\alpha_{45}=
-\frac{1}{2}(1+\alpha_{14}+2\alpha_{15}),\\
\alpha_{15}=\alpha_{13}=\alpha_{24}=\alpha_{35},&
\alpha_{16}=-\alpha_{26}=\alpha_{36}=-\alpha_{46}=\alpha_{56},\\
\alpha_{14}=\alpha_{25} &
\end{array}
\]
\[
\alpha_{ij}\equiv\cos(\theta_{ij})
\]
\indent{Degrees of freedom (5)}
\[
\begin {array}{lllll}
R_1= |e_1|, & R_6= |e_6|, &
\alpha_{14},& \alpha_{15}, &
\alpha_{16}
\end {array}
\]
\vspace{.5 cm}
\noindent{\underline{Lattice basis ($e_i$) in terms of orthogonal basis
($\tilde e_i$)}}
\begin{eqnarray*}
e_i&=& \sum_{j=1,3,5} A_j[\cos(\varphi_j+(i-1) b_j \alpha) \tilde e_j
+ \sin(\varphi_j+(i-1) b_j \alpha) \tilde e_{j+1}] \;\;\;\; i=1,...,5 \\
e_6&=& R_6 [\cos(\varphi_5 + \Delta) \tilde e_5 + \sin(\varphi_5 + \Delta)
\tilde e_6 ]
\end{eqnarray*}
\indent{with $\alpha=\frac{\pi}{3}$ and $b_1=1$, $b_3=2$, $b_5=3$}
\[
\begin{array}{llll}
\cos (\Delta) = \frac{ \sqrt{3} \alpha_{16}}{\sqrt{1+2\alpha_{15}}}, &
A_1= \frac {R_1}{\sqrt{6}} \sqrt{1-3\alpha_{14}-4\alpha_{15}},&
A_3= \frac {R_1}{\sqrt{2}} \sqrt{1+\alpha_{14}},&
A_5= \frac {R_1}{\sqrt{3}} \sqrt{1+2 \alpha_{15}}
\end{array}
\]
\indent{$\varphi_1,\;\varphi_3,\;\varphi_5$ are free parameters.}
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta$} (12)}
\[
\begin {array}{c}
f_1^{(ij)}= g_1^{(i)}\otimes \hat g_1^{(j)}\;,\;\;\; i=0,1,...,5,\;\;\;j = 0,1
\end {array}
\]
\[
\begin{array}{lll}
g_1^{(0)} = (0,0,0,0,0)\;,& g_1^{(1)} =\frac{1}{6} (5,4,3,2,1)\;,&
g_1^{(2)} =\frac{1}{6} (4,8,6,4,2)\;,\\
g_1^{(3)} =\frac{1}{6} (3,6,9,6,3)\;,&
g_1^{(4)} =\frac{1}{6} (2,4,6,8,4)\;,&
g_1^{(5)} =\frac{1}{6} (1,2,3,4,5)\;,\\
\hat g_1^{(0)} =(0)\;,& \hat g_1^{(1)} =(\frac {1}{2}) &
\end {array}
\]
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^2$} (9)}
\indent{Fixed torus: $\alpha(e_1+e_3+e_5)+\beta(e_6)\;,\;\;\;\; \alpha,\;\beta
\in R$}
\[
\begin{array}{c}
f_2^{(i)}= g_2^{(i)}\otimes[\alpha(e_1+e_3+e_5)+\beta(e_6)] \;,\;\;\;
i=0,1,...,8,\;\;
\alpha,\beta \in R \\
\end {array}
\]
\[
\begin {array}{lll}
g_2^{(0)} =(0,0,0,0,0), & g_2^{(1)} =\frac{1}{3}(0,1,1,2,2), &
g_2^{(2)} =\frac{1}{3}(0,2,2,1,1), \\
g_2^{(3)}=\frac{1}{3}(1,0,-2,0,1) ,&
g_2^{(4)} =\frac{1}{3}(1,1,2,2,0) ,& g_2^{(5)}=\frac{1}{3}(2,2,1,1,0), \\
g_2^{(6)} =\frac{1}{3}(-1,0,2,0,-1), & g_2^{(7)}=\frac{1}{3}(2,-2,0,2,-2), &
g_2^{(8)} =\frac{1}{3}(1,-1,0,1,-1)
\end{array}
\]
\indent {Note that $\theta : g_2^{(1)}\rightarrow g_2^{(4)},\;
g_2^{(2)}\rightarrow g_2^{(5)},\; g_2^{(3)}\rightarrow g_2^{(6)}$}
\indent {Number of conjugation classes: 6}
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^3$} (14)}
\indent{Fixed torus: $\alpha(e_1+e_4)+\beta(e_2+e_5)\;,\;\; \alpha,\;\beta
\in R$}
\[
\begin{array}{c}
f_3^{(ij)}= g_3^{(i)} \otimes \hat g_3^{(j)} \otimes
[\alpha(e_1+e_4)+\beta(e_2+e_5)]
\;,\;\;\; i=0,1,...,7,\;\;\;j=0,1,\;\;\; \alpha,\beta \in R \\
\end {array}
\]
\[
\begin {array}{lll}
g_3^{(0)} =(0,0,0,0,0)\;, & g_3^{(1)}=\frac{1}{2} (1,1,1,0,0)\;,&
g_3^{(2)} =\frac{1}{2} (1,1,0,-1,-1)\;, \\
g_3^{(3)} =\frac{1}{2} (0,1,1,1,0)\;,&
g_3^{(4)} =\frac{1}{2} (1,0,0,1,0)\;, & g_3^{(5)} =\frac{1}{2} (0,0,1,1,1)\;,\\
g_3^{(6)} =\frac{1}{2} (0,1,0,0,1)\;, & g_3^{(7)} =\frac{1}{2} (1,0,1,0,1)\;,&
\\
\hat g_3^{(0)} = (0)\;, &
\hat g_3^{(1)} = (\frac{1}{2}) &
\end{array}
\]
\indent {Note that in the $SU(6)$ lattice $\theta : g_3^{(1)}
\rightarrow g_3^{(3)} \rightarrow g_3^{(5)}$ and $\theta : g_3^{(2)}
\rightarrow g_3^{(4)} \rightarrow g_3^{(6)}$}
\indent {Number of conjugation classes: 8}
\vspace{.5 cm}
\noindent {\underline {Coupling $\theta\theta^2\theta^3$}}
\indent {Selection rule}
\[
f_1+(I+\theta)f_2-(I+\theta+\theta^2)f_3 \in \Lambda
\]
\indent {Denoting}
\[
\left.
\begin {array}{rcl}
f_1 &=& g_1^{(i_1)}\otimes \hat g_1^{(j_1)} \\
f_2 &=& g_2^{(i_2)}\otimes [\alpha (e_1+e_3+e_5)+\beta (e_6)]\\
f_3 &=& g_3^{(i_3)}\otimes \hat g_3^{(j_3)} \otimes [\gamma (e_1+e_4)+\delta
(e_2+e_5)]
\end {array}
\right\}\;\;\;
\begin {array} {l}
i_1=0,1,...,5\;,\\
j_1,j_3=0,1\;,\\
i_2=0,1,...,8\;,\\
i_3=0,1,...,7\;,\\
\alpha,\beta,\gamma,\delta \in R
\end {array}
\]
\indent{the selection rule reads}
\[
\left.
\begin {array}{l}
i_1+2i_2+3i_3=0 \\
j_1=j_3
\end {array}
\right\}
\;\;
mod.\;6
\]
\indent{Number of allowed couplings: 48}
\indent{Expression of the coupling}
\begin {eqnarray*}
C_{\theta\theta^2\theta^3} &=& \sqrt{l_2 \; l_3}\;\;
N \; \sum_{v \in (f_3-f_2+\Lambda)_{\perp}} \exp [-\frac
{\sqrt{3}}{4\pi} \;|v_1|^2 ]
\end {eqnarray*}
\begin{quotation}
\noindent{where $l_i$ is the number of elements in the $f_i$ conjugation class
and
$(f_3-f_2+\Lambda)_{\perp}$ denotes elements orthogonal to the two
invariant planes}
\end{quotation}
\[
(f_3-f_2+\Lambda)_{\perp} = \sum_{i=1}^{6} (h_1^i+n_1^i)(\frac{1}{2}
e_1+e_2+e_3+ \frac {1}{2}
e_4)+ (h_2^i+n_2^i)(- e_1-e_2+e_4+ e_5)
\]
\indent{where, denoting $\vec{\bar{f_{23}^i}}\equiv(h_1^i,h_2^i)$, the vector
$\vec{\bar{f_{23}^i}}$ always takes one of the values}
\[
\begin {array}{lll}
\vec{\bar{f_{23}^1}}=(0,0) & \vec{\bar{f_{23}^2}}=(0,\frac{1}{2}) &
\vec{\bar{f_{23}^3}}=(\frac{1}{3},\frac{1}{3}) \\
\vec{\bar{f_{23}^4}}=(\frac{1}{3},\frac{5}{6}) &
\vec{\bar{f_{23}^5}}=(\frac{2}{3},\frac{2}{3}) &
\vec{\bar{f_{23}^6}}=(\frac{2}{3},\frac{5}{6})
\end{array}
\]
\indent{with $n_1^i,n_2^i \in Z$. The coupling takes the final form}
\begin {eqnarray*}
C_{\theta\theta^2\theta^3} &=& \sqrt{l_2 \; l_3}\;\;
N \;\sum_{i} \;\; \sum_{u \in Z^2} \exp [-\frac
{\sqrt{3}}{4\pi} \;(\vec{\bar{f_{23}^i}}+ \vec{u})^{\top} M
(\vec{\bar{f_{23}^i}}+ \vec{u})] \\
&=& \sqrt{l_2 \; l_3}\;\;
N \;\sum_{i} \vartheta
\left[
\begin{array}{c}
\vec{\bar{f_{23}^i}} \\
0
\end {array}
\right]
[0, \Omega]
\end {eqnarray*}
\indent{with}
\[
\Omega = \;i \frac{\sqrt{3}}{4\pi^2} M =\;i \frac{\sqrt{3}}{2\pi^2}
R_1^2 (1-3\alpha_{14}-4\alpha_{15})
\left(
\begin {array}{rr}
\frac{1}{4} & -\frac{1}{4} \\
-\frac{1}{4} & 1
\end {array}
\right) ,\;\;\;\;\;
N= \sqrt{V_{\perp}}\; \frac{1}{2 \pi} \;
\sqrt{ \frac{\Gamma(\frac{5}{6}) \Gamma(\frac{2}{3})}
{\Gamma(\frac{1}{3}) \Gamma(\frac{1}{6})} }
\]
\indent{with $V_{\perp}$ the volume of the unit cell generated by
$\{\frac{1}{2} e_1+e_2+e_3+ \frac {1}{2}e_4, e_1+e_2-e_4- e_5\}$}
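\indent{For reference, the theta-function convention implied by the two expressions for $C_{\theta\theta^2\theta^3}$ above is}
\[
\vartheta
\left[
\begin{array}{c}
\vec{a} \\
0
\end {array}
\right]
[0, \Omega]
= \sum_{\vec{u} \in Z^n} \exp [\,i\pi\, (\vec{a}+\vec{u})^{\top}\, \Omega\,
(\vec{a}+\vec{u})\,]
\]
\indent{with $n$ the dimension of the relevant sublattice, so that
$\Omega = i \frac{\sqrt{3}}{4\pi^2} M$ reproduces the Gaussian weight
$\exp [-\frac{\sqrt{3}}{4\pi} (\vec{a}+\vec{u})^{\top} M (\vec{a}+\vec{u})]$.}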
\indent{Number of effective parameters: 1}
\indent{Number of different couplings without deformations: 4}
\indent{Number of different couplings with deformations: 4}
\vspace{.5 cm}
\noindent {\underline {Coupling $\theta\theta\theta^4$}}
\indent {Selection rule}
\[
f_1+f_2-(I+\theta)f_3 \in \Lambda
\]
\indent {Denoting}
\[
\left.
\begin {array}{rcl}
f_1 &=& g_1^{(i_1)}\otimes \hat g_1^{(j_1)}\\
f_2 &=& g_1^{(i_2)}\otimes \hat g_1^{(j_2)}\\
f_3 &=& g_2^{(i_3)}\otimes [\alpha (e_1+e_3+e_5)+\beta (e_6)]
\end {array}
\right\}\;\;\;
\begin {array} {l}
i_1,i_2=0,1,...,5\;,\\
j_1,j_2=0,1\;,\\
i_3=0,1,...,8\;,\\
\alpha,\beta \in R
\end {array}
\]
\indent{the selection rule reads}
\[
\left.
\begin {array}{l}
i_1+i_2+4i_3=0 \\
j_1=j_2
\end {array}
\right\}
\;\;
mod.\;6
\]
\indent{Number of allowed couplings: 72}
\indent{Expression of the coupling}
\begin {eqnarray*}
C_{\theta\theta\theta^4} &=& \sqrt{l_3}\;\;
N \; \sum_{v \in (f_3-f_2+\Lambda)_{\perp}} \exp [-\frac
{\sqrt{3}}{8\pi} \; (|v_1|^2+|v_2|^2)]
\end {eqnarray*}
\begin{quotation}
\noindent{where $(f_3-f_2+\Lambda)_{\perp}$ denotes that the coset
elements must be orthogonal to the
($e_1+e_3+e_5,e_6$) plane}
\end{quotation}
\[
(f_3-f_2+\Lambda)_{\perp} = \sum_{i=1}^{3} [
(h_1^i+n_1^i)(e_1-e_3)+(h_2^i+n_2^i)(e_2+e_3)+
(h_3^i+n_3^i)(e_3+e_4)+(h_4^i+n_4^i)(e_5-e_3) ]
\]
\begin{quotation}
\noindent{where, denoting $\vec{\bar{f_{23}^i}}=(h_1^i,...,h_4^i)$,
there are two possible triplets
of values for $\vec{\bar{f_{23}^i}}$, depending on the values of $f_2,f_3$}
\end{quotation}
\[
\begin{array}{l}
\begin{array}{lll}
\vec{\bar{f_{23}^1}}=(0,0,0,0)
&\vec{\bar{f_{23}^2}}=(\frac{1}{3},0,0,\frac{1}{3}) &
\vec{\bar{f_{23}^3}}=(\frac{2}{3},0,0,\frac{2}{3})\\
\end{array}\\
{\rm and}\\
\begin{array}{lll}
\vec{\bar{f_{23}^1}}=(\frac{1}{3},\frac{2}{3},\frac{1}{3},\frac{2}{3}) &
\vec{\bar{f_{23}^2}}=(\frac{2}{3},\frac{2}{3},\frac{1}{3},0) &
\vec{\bar{f_{23}^3}}=(0,\frac{2}{3},\frac{1}{3},\frac{1}{3}) \\
\end{array}
\end{array}
\]
\indent{with $n_1^i,n_2^i,n_3^i,n_4^i\;\in\;Z$. Finally the coupling takes the
form}
\begin{eqnarray*}
C_{\theta\theta\theta^4} &=& \sqrt{l_3}\;\;
N \sum_i \;\sum_{\vec{u} \in Z^4} \exp [-\frac
{\sqrt{3}}{8\pi} \;(\vec{\bar{f_{23}^i}}+\vec{u})^{\top} M
(\vec{\bar{f_{23}^i}}+\vec{u})] \\
&=& \sqrt{l_3}\;\;
N \;\sum_{i} \vartheta
\left[
\begin{array}{c}
\vec{\bar{f_{23}^i}} \\
0
\end {array}
\right]
[0, \Omega]
\end {eqnarray*}
\indent{with}
\[
N= \sqrt{V_{\perp}}\; \frac{1}{2 \pi} \;
\sqrt{ \frac{\Gamma(\frac{5}{6}) \Gamma(\frac{2}{3})}
{\Gamma(\frac{1}{3}) \Gamma(\frac{1}{6})} }\;,\;\;\;\;
\Omega = \; i \frac{\sqrt{3}}{8\pi^2} M
\]
\begin{quotation}
\noindent{$V_{\perp}$ is the unit cell volume
of the sublattice orthogonal to the invariant plane}
\end{quotation}
\[
\begin{array}{l}
\Omega = \; i \frac{\sqrt{3}}{8\pi^2}
\left(
\begin {array}{cccc}
2a & -a & \frac{a+c-2b}{2} & -a \\
-a & b & c & \frac{a+c-2b}{2}\\
\frac{a+c-2b}{2} & c & b & -a \\
-a & \frac{a+c-2b}{2} & -a & 2a
\end {array}
\right)
\begin{array}{l}
a=R_1^2(1-\alpha_{15})\\
b=R_1^2(1-\alpha_{14}-2\alpha_{15})\\
c=R_1^2(\alpha_{14}+\alpha_{15})
\end{array}
\end{array}
\]
\indent{Number of effective parameters: 3}
\indent{Number of different couplings without deformations: 4}
\indent{Number of different couplings with deformations: 4}
\vspace {1.0 cm}
\noindent{\underline{\bf{ORBIFOLD $Z_7$}}}
\vspace{.5 cm}
\noindent{\underline {Twist}
$ \theta={\rm
diag}(e^{i\alpha},e^{2i\alpha},e^{-3i\alpha}) ,\;\;\;\;
\alpha=\frac{2\pi}{7} $}
\noindent{\underline {Lattice}
$ SU(7) $}
\noindent{\underline {Coxeter element}}
\[
\begin{array}{lll}
\theta e_i = e_{i+1}, & i=1,...,5, &
\theta e_6 = -e_1-e_2-e_3-e_4-e_5-e_6
\end{array}
\]
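As a quick sanity check (a minimal sketch, not part of the tabulated data), one can verify with integer arithmetic that the Coxeter element above indeed has order 7 on the $SU(7)$ lattice:

```python
# Minimal sketch: verify that the SU(7) Coxeter element above has order 7.
# Columns of `theta` are the images of e_1,...,e_6 in the lattice basis:
#   theta e_i = e_{i+1} (i = 1,...,5),  theta e_6 = -(e_1 + ... + e_6).

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 6
theta = [[0] * n for _ in range(n)]
for i in range(5):
    theta[i + 1][i] = 1      # column i: e_{i+1} -> e_{i+2}
for i in range(n):
    theta[i][5] = -1         # column 6: e_6 -> -(e_1 + ... + e_6)

identity = [[int(i == j) for j in range(n)] for i in range(n)]
P = identity
for _ in range(7):
    P = matmul(theta, P)     # P = theta^7

print(P == identity)  # True
```

The matrix is the companion matrix of $x^6+x^5+\dots+x+1$, whose roots are the primitive 7th roots of unity, hence $\theta^7=I$.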
\noindent{\underline {Deformation parameters}}
\indent{Relations}
\[
\begin{array}{ll}
|e_1|=|e_2|=|e_3|=|e_4|=|e_5|=|e_6|, &
\alpha_{12}=\alpha_{23}=\alpha_{34}=\alpha_{45}=\alpha_{56},\\
\alpha_{13}=\alpha_{24}=\alpha_{35}=\alpha_{46}=\alpha_{16},&
\alpha_{14}=\alpha_{25}=\alpha_{36}=\alpha_{15}=\alpha_{26}
= -\frac{1}{2}-\alpha_{12}-\alpha_{13}
\end{array}
\]
\[
\alpha_{ij}\equiv\cos(\theta_{ij})
\]
\indent{Degrees of freedom (3)}
\[
\begin {array}{lll}
R= |e_1|, & \alpha_{12}, & \alpha_{13}
\end {array}
\]
\vspace{.5 cm}
\noindent{\underline{Lattice basis ($e_i$) in terms of orthogonal basis
($\tilde
e_i$)}}
\[
e_i = \sum_{j=1,3,5} R_j[\cos((i-1)b_j\alpha + \varphi_j) \tilde e_j +
\sin((i-1)b_j\alpha + \varphi_j) \tilde e_{j+1} ] \;\;\;\;\; i=1,...,6
\]
\indent{with $\alpha=\frac{2\pi}{7}$ and $b_1=1,\; b_3=2,\; b_5=4$}
\[
\begin{array}{c}
R_1^2= R^2 [\alpha_{12} (\alpha_5^2-\alpha_1^2) +
\alpha_{13}(\alpha_5^2-\alpha_3^2)+
\frac{1}{2}\alpha_5^2] \\
R_3^2= R^2 [\alpha_{12} (\alpha_1^2-\alpha_3^2) +
\alpha_{13}(\alpha_1^2-\alpha_5^2)+
\frac{1}{2}\alpha_1^2] \\
R_5^2= R^2 [\alpha_{12} (\alpha_3^2-\alpha_5^2) +
\alpha_{13}(\alpha_3^2-\alpha_1^2)+
\frac{1}{2}\alpha_3^2] \\
\alpha_i^2= \frac{4}{7}[1-\cos(b_i \alpha)]\;,\;\;\;i=1,3,5
\end{array}
\]
\indent{$\varphi_1,\;\varphi_2,\;\varphi_3$ are free parameters.}
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta$} (7)}
\[
\begin {array}{lll}
f_1^{(0)}= (0,0,0,0,0,0), &
f_1^{(1)}= \frac{1}{7}(6,5,4,3,2,1), &
f_1^{(2)}= \frac{1}{7}(5,3,1,6,4,2), \\
f_1^{(3)}= \frac{1}{7}(4,1,5,2,6,3), &
f_1^{(4)}= \frac{1}{7}(3,6,2,5,1,4), &
f_1^{(5)}= \frac{1}{7}(2,4,6,1,3,5), \\
f_1^{(6)}= \frac{1}{7}(1,2,3,4,5,6)& &
\end {array}
\]
\vspace{.5 cm}
\noindent {\underline {Coupling $\theta\theta^2\theta^4$}}
\indent {Selection rule}
\[
f_1+2f_2-3f_3 \in \Lambda
\]
\indent {Denoting}
\[
\left.
\begin {array}{l}
f_1 = f_1^{(i_1)}\\
f_2 = f_1^{(i_2)}\\
f_3 = f_1^{(i_3)}\\
\end {array}
\right\}\;\;\;
\begin {array} {l}
i_1,i_2,i_3 =0,1,...,6\;,\\
\end {array}
\]
\indent{the selection rule reads}
\[
\begin {array}{l}
i_1+2i_2-3i_3=0
\end {array}
\;\;
mod.\;7
\]
\indent{Number of allowed couplings: 49}
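The count follows directly from the selection rule: for each of the $7\times 7$ pairs $(i_2,i_3)$ there is exactly one $i_1$ with $i_1+2i_2-3i_3\equiv 0$ (mod 7). A brief enumeration (illustrative sketch only):

```python
# Enumerate the Z_7 selection rule i1 + 2*i2 - 3*i3 = 0 (mod 7),
# with i1, i2, i3 = 0, 1, ..., 6 labelling the fixed points f_1^{(i)}.
allowed = [(i1, i2, i3)
           for i1 in range(7)
           for i2 in range(7)
           for i3 in range(7)
           if (i1 + 2 * i2 - 3 * i3) % 7 == 0]
print(len(allowed))  # 49
```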
\indent{Expression of the coupling}
\begin {eqnarray*}
C_{\theta\theta^2\theta^4} &=&
N \; \sum_{v \in (f_3-f_2+\Lambda)} \exp \left[ -\frac
{1}{4\pi}\;\sin(\alpha)\sin(2 \alpha)\sin(3 \alpha) \;
\left( \frac{|v_1|^2}{\sin^2(3\alpha)} +\frac{|v_2|^2}{\sin^2(\alpha)} +
\frac{|v_3|^2}{\sin^2(2\alpha )} \right) \right] \\
&=& N \; \sum_{\vec{u} \in Z^6} \exp [-\frac
{1}{4\pi}\; (\vec{f_{23}}+\vec{u})^{\top} M (\vec{f_{23}}+\vec{u})] \\
&=& N \; \vartheta
\left[
\begin{array}{c}
\vec{f_{23}} \\
0
\end {array}
\right]
[0, \Omega]
\end {eqnarray*}
\[
\Omega= i \frac{1}{4\pi^2} M \;\;\;\;
N= \sqrt{V_{\Lambda}}\; \left[ \frac{1}{2 \pi} \right]^{3/2} \;
\left[ \frac{\Gamma(\frac{3}{7}) \Gamma(\frac{5}{7}) \Gamma(\frac{6}{7}) }
{\Gamma(\frac{1}{7}) \Gamma(\frac{2}{7}) \Gamma(\frac{4}{7})} \right]^{3/2}
\]
\[
\begin{array}{l}
\Omega= i \frac{1}{4\pi^2} \;\sin(\alpha) \sin(2 \alpha) \sin(3 \alpha)\;
\left(
\begin {array}{rrrrrr}
a & b & c & d & d & c \\
b & a & b & c & d & d \\
c & b & a & b & c & d \\
d & c & b & a & b & c \\
d & d & c & b & a & b \\
c & d & d & c & b & a
\end {array}
\right)
\;\;\;
\begin{array}{l}
a = \frac{R_1^2}{\sin^2(3\alpha)}+ \frac{R_3^2}{\sin^2(\alpha)}+
\frac{R_5^2}{\sin^2(2\alpha)} \\
\\
b = \frac{R_1^2 \cos(\alpha) }{\sin^2(3\alpha)}+ \frac{R_3^2
\cos(2\alpha)}{\sin^2(\alpha)}+
\frac{R_5^2 \cos(3\alpha)}{\sin^2(2\alpha)} \\
\\
c = \frac{R_1^2 \cos(2\alpha)}{\sin^2(3\alpha)}+
\frac{R_3^2 \cos(3\alpha)}{\sin^2(\alpha)}+ \frac{R_5^2
\cos(\alpha)}{\sin^2(2\alpha)} \\
\\
d =\frac{R_1^2 \cos(3\alpha)}{\sin^2(3\alpha)}+ \frac{R_3^2
\cos(\alpha)}{\sin^2(\alpha)}+
\frac{R_5^2 \cos(2\alpha)}{\sin^2(2\alpha)}
\end{array}
\end{array}
\]
\indent{Number of effective parameters: 3}
\indent{Number of different couplings without deformations: 2}
\indent{corresponding to the following $\vec f_{23} $ shifts}
\[
\vec{f_{23}}=\{f_1^{(0)},f_1^{(1)}\}
\]
\indent{Number of different couplings with deformations: 4}
\indent{corresponding to the following $\vec f_{23} $ shifts}
\[
\vec{f_{23}}=\{f_1^{(0)},f_1^{(1)},f_1^{(2)},f_1^{(3)}\}
\]
\vspace {1.0 cm}
\noindent{\underline{\bf{ORBIFOLD $Z_8$--I}}}
\vspace{.5 cm}
\noindent{\underline {Twist}
$ \theta={\rm
diag}(e^{i\alpha},e^{2i\alpha},e^{-3i\alpha}) ,\;\;\;\;
\alpha=\frac{2\pi}{8} $}
\noindent{\underline {Lattice}
$ SO(5) \otimes SO(9) $}
\noindent{\underline {Coxeter element}}
\[
\begin{array}{lll}
\theta e_1 = e_1+2 e_2, & \theta e_2= -e_1-e_2, &
\theta e_3= e_4 ,\\
\theta e_4=e_5 ,&
\theta e_5 = e_3+e_4+e_5+2 e_6, & \theta e_6 = -e_3-e_4-e_5-e_6
\end{array}
\]
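A minimal sketch (for verification only, using the images listed above) confirming that this Coxeter element has order 8 on the $SO(5) \otimes SO(9)$ lattice, while its fourth power is not the identity:

```python
# Minimal sketch: check that the SO(5) x SO(9) Coxeter element has order 8.
# Columns are the images of e_1,...,e_6 read off from the table above.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def power(A, p):
    n = len(A)
    P = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(p):
        P = matmul(A, P)
    return P

cols = [(1, 2, 0, 0, 0, 0),      # theta e_1 = e_1 + 2 e_2
        (-1, -1, 0, 0, 0, 0),    # theta e_2 = -e_1 - e_2
        (0, 0, 0, 1, 0, 0),      # theta e_3 = e_4
        (0, 0, 0, 0, 1, 0),      # theta e_4 = e_5
        (0, 0, 1, 1, 1, 2),      # theta e_5 = e_3 + e_4 + e_5 + 2 e_6
        (0, 0, -1, -1, -1, -1)]  # theta e_6 = -e_3 - e_4 - e_5 - e_6
theta = [[cols[j][i] for j in range(6)] for i in range(6)]

identity = [[int(i == j) for j in range(6)] for i in range(6)]
print(power(theta, 8) == identity, power(theta, 4) == identity)  # True False
```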
\noindent{\underline {Deformation parameters}}
\indent{Relations}
\[
\begin{array}{lll}
|e_1|=\sqrt{2}|e_2|, & |e_3|=|e_4|=|e_5| , &
-2\alpha_{56} |e_6|=|e_3|, \\
\alpha_{12}= -\frac {1}{\sqrt{2}}, &
\alpha_{35}=0, & \alpha_{34}=\alpha_{45} ,\\
\alpha_{36}=\alpha_{46} ,&
\alpha_{36}= \frac {1}{2 \alpha_{56}} - \alpha_{56}, &
\alpha_{34}=\frac {1}{4 \alpha_{56}^2} - 1, \\
\alpha_{ij}=0\;\;\;\;i=1,2\;\;\;\;j=3,4,5,6 & &
\end{array}
\]
\[
\alpha_{ij}\equiv\cos(\theta_{ij})
\]
\indent{Degrees of freedom (3)}
\[
\begin {array}{lll}
R_1= |e_1|, & R_3= |e_3|, & \alpha_{56}
\end {array}
\]
\vspace{.5 cm}
\noindent{\underline{Lattice basis ($e_i$) in terms of orthogonal basis
($\tilde
e_i$)}}
\[
\begin{array}{rcl}
e_1&=& \frac {R_1}{2} \{ [(2+\sqrt{2})^{1/2} \cos (\varphi_1)
+(2-\sqrt{2})^{1/2} \sin
(\varphi_1)] \tilde e_1 +\\
& &+[-(2+\sqrt{2})^{1/2} \sin (\varphi_1) +(2-\sqrt{2})^{1/2} \cos
(\varphi_1)] \tilde e_2 \} \\
e_2&=& \frac {R_1}{2} \{ -[(2+\sqrt{2})^{1/2} \cos (\varphi_1)
+(2-\sqrt{2})^{1/2} \sin
(\varphi_1)] \tilde e_1 +\\
& &+[(2+\sqrt{2})^{1/2} \sin (\varphi_1) +(2-\sqrt{2})^{1/2} \cos
(\varphi_1)] \tilde e_2 \} \\
e_3&=&A[ \cos(\varphi_2) \tilde e_3 + \sin(\varphi_2) \tilde e_4 ]+
B[\cos( \varphi_3) \tilde e_5 + \sin(\varphi_3) \tilde e_6]\\
e_4&=&A [\cos(\alpha +\varphi_2) \tilde e_3 + \sin(\alpha +\varphi_2) \tilde
e_4]
-B[ \cos(\alpha + \varphi_3) \tilde e_5 + \sin(\alpha + \varphi_3) \tilde
e_6]\\
e_5&=&-A [\sin(\varphi_2) \tilde e_3 - \cos(\varphi_2) \tilde e_4 ]
-B[\sin( \varphi_3) \tilde e_5 - \cos( \varphi_3) \tilde e_6]\\
e_6&=&\frac{A}{2}[-\cos(\varphi_2)+ \sin(\varphi_2)- 2\cos(\varphi_2)
\cos(\alpha)] \tilde e_3 +\\
& &+\frac{A}{2}[-\cos(\varphi_2)- \sin(\varphi_2)- 2\sin(\varphi_2)
\cos(\alpha)] \tilde e_4 +\\
& &+\frac{B}{2}[-\cos(\varphi_3)+ \sin(\varphi_3)+ 2\cos(\varphi_3)
\cos(\alpha)] \tilde e_5 +\\
& &+\frac{B}{2}[-\cos(\varphi_3)- \sin(\varphi_3)+ 2\sin(\varphi_3)
\cos(\alpha)] \tilde e_6
\end{array}
\]
\indent{with $\alpha=\frac{2\pi}{8}$ and}
\[
\begin{array}{cc}
A= R_3 \left[ \frac{1+\sqrt{2}}{2} - \frac{1}{4 \sqrt{2} \alpha_{56}^2}
\right]^{1/2}, &
B= R_3 \left[ \frac{1-\sqrt{2}}{2} +\frac{1}{4 \sqrt{2} \alpha_{56}^2}
\right]^{1/2}
\end{array}
\]
\indent{$\varphi_1,\;\varphi_2,\;\varphi_3$ are free parameters.}
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta$} (4)}
\[
\begin {array}{c}
f_1^{(ij)}= g_1^{(i)} \otimes \hat g_1^{(j)} \;,\;\;\; i=0,1,\;\;\;j = 0,1
\end {array}
\]
\[
\begin{array}{llll}
g_1^{(0)} = (0,0)\;,& g_1^{(1)} =\frac{1}{2} (1,0)\;,&
\hat g_1^{(0)} =(0,0,0,0)\;,& \hat g_1^{(1)}=\frac{1}{2} (1,0,1,0)
\end {array}
\]
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^2$} (16)}
\[
\begin{array}{c}
f_2^{(ij)}= g_2^{(i)} \otimes \hat g_2^{(j)} \;,\;\;\; i,j=0,1,2,3
\end {array}
\]
\[
\begin {array}{llll}
g_2^{(0)}=(0,0), & g_2^{(1)}=\frac{1}{2}(0,1), &
g_2^{(2)}=\frac{1}{2}(1,0), & g_2^{(3)}=\frac{1}{2}(1,1),\\
\hat g_2^{(0)}=(0,0,0,0), & \hat g_2^{(1)}=\frac{1}{2}(0,1,1,0) ,&
\hat g_2^{(2)}=\frac{1}{2}(1,0,1,0), & \hat g_2^{(3)}=\frac{1}{2}(1,1,0,0)
\end{array}
\]
\indent {Note that $\theta :g_2^{(1)}\rightarrow g_2^{(3)}$ and
$\theta : \hat g_2^{(1)} \rightarrow \hat g_2^{(3)}$.}
\indent {Number of conjugation classes: 10}
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^3$} (4)}
\indent{The same as for $\theta$.}
\[
\begin {array}{c}
f_3^{(ij)}= g_3^{(i)} \otimes \hat g_3^{(j)} \;,\;\;\; i=0,1,\;\;\;j = 0,1
\end {array}
\]
\[
\begin{array}{llll}
g_3^{(0)} = (0,0)\;,& g_3^{(1)} =\frac{1}{2} (1,0)\;,&
\hat g_3^{(0)} =(0,0,0,0)\;,& \hat g_3^{(1)}=\frac{1}{2} (1,0,1,0)
\end {array}
\]
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^4$} (16)}
\indent{Fixed torus: $\alpha(e_1)+\beta(e_2)\;,\;\; \alpha,\;\beta
\in R$}
\[
\begin{array}{c}
f_4^{(i)}= [\alpha(e_1)+\beta(e_2)] \otimes \hat g_4^{(i)} \;,\;\;\;
i=0,1,...,15,\;\;\;\alpha,\;\beta \in R
\end {array}
\]
\[
\begin {array}{llll}
\hat g_4^{(0)}=(0,0,0,0) ,&
\hat g_4^{(1)}=\frac{1}{2} (1,0,1,1), &
\hat g_4^{(2)}=\frac{1}{2} (1,0,0,0), &
\hat g_4^{(3)}=\frac{1}{2} (1,0,0,1), \\
\hat g_4^{(4)}=\frac{1}{2} (1,1,0,0), &
\hat g_4^{(5)}=\frac{1}{2} (0,0,0,1), &
\hat g_4^{(6)}=\frac{1}{2} (0,1,0,0), &
\hat g_4^{(7)}=\frac{1}{2} (0,0,1,1), \\
\hat g_4^{(8)}=\frac{1}{2} (1,0,1,0), &
\hat g_4^{(9)}=\frac{1}{2} (1,1,0,1), &
\hat g_4^{(10)}=\frac{1}{2} (0,0,1,0), &
\hat g_4^{(11)}=\frac{1}{2} (0,1,0,1), \\
\hat g_4^{(12)}=\frac{1}{2} (0,1,1,0), &
\hat g_4^{(13)}=\frac{1}{2} (0,1,1,1), &
\hat g_4^{(14)}=\frac{1}{2} (1,1,1,0), &
\hat g_4^{(15)}=\frac{1}{2} (1,1,1,1)
\end{array}
\]
\indent {Note that in the $SO(9)$ lattice}
\[
\begin{array}{ll}
\theta:\hat g_4^{(4)} \rightarrow \hat g_4^{(12)}\;, &
\theta:\hat g_4^{(1)} \rightarrow \hat g_4^{(3)} \rightarrow \hat g_4^{(9)}
\rightarrow \hat g_4^{(11)}\;,\\
\theta:\hat g_4^{(2)} \rightarrow \hat g_4^{(6)} \rightarrow \hat g_4^{(10)}
\rightarrow \hat g_4^{(14)}\;,&
\theta:\hat g_4^{(5)} \rightarrow \hat g_4^{(7)} \rightarrow \hat g_4^{(13)}
\rightarrow \hat g_4^{(15)}
\end {array}
\]
\indent {Number of conjugation classes: 6}
\vspace{.5 cm}
\noindent {\underline {Coupling $\theta^2\theta^2\theta^4$}}
\indent {Selection rule}
\[
f_1+f_2-(I+\theta^2)f_3 \in \Lambda
\]
\indent {Denoting}
\[
\left.
\begin {array}{rcl}
f_1 &=& g_2^{(i_1)} \otimes \hat g_2^{(j_1)} \\
f_2 &=& g_2^{(i_2)} \otimes \hat g_2^{(j_2)} \\
f_3 &=& [\alpha(e_1)+\beta(e_2)] \otimes \hat g_4^{(j_3)}
\end {array}
\right\}\;\;\;
\begin {array} {l}
i_1,i_2,j_1,j_2=0,1,2,3,\\
j_3=0,1,...,15\;,\\
\alpha,\beta \in R
\end {array}
\]
\indent{the selection rule reads}
\[
\left.
\begin {array}{l}
i_1=i_2 \\
j_1+(-1)^{(j_3+1)} j_2 = j_3
\end {array}
\right\} \;\;\;\; mod.\;4
\]
\indent{Number of allowed couplings: 84}
\indent{Expression of the coupling}
\begin {eqnarray*}
C_{\theta^2\theta^2\theta^4} &=&\frac{F(l_1,l_2,l_3)}{2} \;\;
N \; \{ \sum_{v \in (f_3-f_2+\Lambda)_{\perp}} \exp [-\frac
{1}{4\pi} \; |v|^2] + \sum_{v \in (\theta f_3-f_2+\Lambda)_{\perp}} \exp
[-\frac
{1}{4\pi} \; |v|^2] \} \\
&=& \frac{F(l_1,l_2,l_3)}{2} \;\; N \{ \vartheta
\left[
\begin{array}{c}
\vec{f_{23}} \\
0
\end {array}
\right]
[0, \Omega]
+
\vartheta
\left[
\begin{array}{c}
\vec{f'_{23}} \\
0
\end {array}
\right]
[0, \Omega]
\}
\end {eqnarray*}
\begin{quotation}
\noindent{where $(f_3-f_2+\Lambda)_{\perp}$ denotes that only coset
elements belonging to the $SO(9)$ lattice are considered; $f_{23}=f_2-f_3$ and
$f'_{23}=\theta f_2 -f_3$, the arrows denoting components in the $SO(9)$
lattice. $l_i$
is the number of elements in the $f_i$ conjugation class (in all cases
except $l_1=l_2=l_3=2$ one has $f_{23}=f'_{23}$). Finally, the values of
$F(l_1,l_2,l_3)$ are}
\end{quotation}
\[
\begin{array}{rlrl}
l_1=l_2=l_3=1\;: & F=1 & l_1=l_2=1,\; l_3=2 \;:& F=\sqrt{2} \\
l_1=l_2=1,\; l_3=4 \;: & F=2 & l_1=l_2=2,\; l_3=1\;: & F=1 \\
l_1=l_2=l_3=2\;: & F=\sqrt{2} & l_1=l_2=2,\; l_3=4 \;:& F=1 \\
l_1=1\,(2),\; l_2=2\,(1),\; l_3=4 \;:& F=\sqrt{2} & &
\end{array}
\]
\[
N= \sqrt{V_{\perp}}\; \frac{1}{2 \pi} \;
\frac{\Gamma^2(\frac{3}{4})}
{\Gamma^2(\frac{1}{4})} \;\;\;\;\;
\Omega = \; i \frac {1}{4 \pi^2}
\left(
\begin {array}{cccc}
a & b & 0 & c \\
b & a & b & c \\
0 & b & a & d \\
c & c & d & e
\end {array}
\right)
\begin{array}{l}
a=R_3^2 \\
b=R_3^2 [\frac{1}{4\alpha_{56}^2}-1]\\
c=R_3^2 [\frac{1}{2\alpha_{56}}-\alpha_{56}]\\
d=-\frac{R_3^2}{2}\\
e=\frac{R_3^2}{4\alpha_{56}^2}
\end{array}
\]
\indent{where $V_{\perp}$ is the volume of the $SO(9)$ lattice}
\indent{Number of effective parameters: 2}
\indent{Number of different couplings without deformations: 8}
\indent{corresponding to the following $\vec{f_{23}}$ shifts}
\[
\vec{f_{23}}= \;\; \left[
\begin {array}{l}
F=1 \; \left\{
\begin{array}{lll}
(0,0,0,0), & (\frac{1}{2},0,\frac{1}{2},0), & (\frac{1}{2},\frac{1}{2},0,0),
\end {array}
\right. \\
F=\sqrt{2} \; \left\{
\begin{array}{lc}
(\frac{1}{2},\frac{1}{2},0,0), & (\frac{1}{2},0,0,\frac{1}{2}), \\
(0,0,\frac{1}{2},\frac{1}{2}), & (0,0,0,0) \cup (\frac{1}{2},0,\frac{1}{2},0),
\end{array}
\right. \\
F=2\;\; (\frac{1}{2},0,0,0)
\end{array}
\right.
\]
\indent{Number of different couplings with deformations: 9}
\indent{corresponding to the following $\vec{f_{23}}$ shifts}
\[
\vec{f_{23}}= \;\; \left[
\begin {array}{l}
F=1 \; \left\{
\begin{array}{ll}
(0,0,0,0), & (\frac{1}{2},0,\frac{1}{2},0), \\
(\frac{1}{2},\frac{1}{2},0,0), & (0,\frac{1}{2},0,0),
\end {array}
\right. \\
F=\sqrt{2} \; \left\{
\begin{array}{cc}
(\frac{1}{2},\frac{1}{2},0,0), & (\frac{1}{2},0,0,\frac{1}{2}), \\
(0,0,\frac{1}{2},\frac{1}{2}), & (0,0,0,0) \cup (\frac{1}{2},0,\frac{1}{2},0),
\end{array}
\right. \\
F=2\;\; (\frac{1}{2},0,0,0)
\end{array}
\right.
\]
\vspace{.5 cm}
\noindent {\underline {Coupling $\theta\theta^2\theta^5$}}
\indent {Selection rule}
\[
f_1+(I+\theta)f_2-(I+\theta+\theta^2)f_3 \in \Lambda
\]
\indent {Denoting}
\[
\left.
\begin {array}{rcl}
f_1 &=& g_1^{(i_1)}\otimes \hat g_1^{(j_1)}\\
f_2 &=& g_2^{(i_2)}\otimes \hat g_2^{(j_2)}\\
f_3 &=& g_1^{(i_3)}\otimes \hat g_1^{(j_3)}
\end {array}
\right\}\;\;\;
\begin {array} {l}
i_1,i_3,j_1,j_3=0,1\;,\\
i_2,j_2=0,1,2,3\;,
\end {array}
\]
\indent{the selection rule reads}
\[
\left.
\begin {array}{lcr}
i_1+i_2+i_3&=&0 \\
j_1+j_2+j_3&=&0
\end {array}
\right\}
\;\;
mod.\;2
\]
\indent{Number of allowed couplings: 40}
\indent{Expression of the coupling}
\begin {eqnarray*}
C_{\theta\theta^2\theta^5} &=& \sqrt{l_2}\;\;
N \; \sum_{v \in (f_3-f_2+\Lambda)_{\perp}} \exp [-\frac
{1}{4\pi} \; (\frac {\sqrt{2}+1}{\sqrt{2}}|v_1|^2+|v_2|^2+\frac
{\sqrt{2}-1}{\sqrt{2}}|v_3|^2)] \\
&=& \sqrt{l_2}\;\;
N \;\; \sum_{\vec{u} \in Z^6} \exp [-\frac
{1}{4\pi} \; (\vec{f_{23}}+\vec{u})^{\top} M (\vec{f_{23}}+\vec{u})] \\
&=& \sqrt{l_2}\;\; N \; \vartheta
\left[
\begin{array}{c}
\vec{f_{23}} \\
0
\end {array}
\right]
[0, \Omega]
\end {eqnarray*}
\indent{with $l_2$ the number of elements in the $f_2$ conjugation class}
\[
\Omega = \; i \frac{1}{4 \pi^2} M \;\;\;\;
N= \sqrt{V_{\Lambda}}\; \left[ \frac{1}{2 \pi} \right]^{3/2}\;
\frac{\Gamma(\frac{7}{8}) \Gamma(\frac{3}{8})}
{\Gamma(\frac{1}{8}) \Gamma(\frac{5}{8})}
\frac{\Gamma^2(\frac{3}{4})}
{\Gamma^2(\frac{1}{4})}
\]
\[
\Omega = \; i \frac{1}{4 \pi^2}
\left(
\begin {array}{cccccc}
a & -a & 0 & 0 & 0 & 0 \\
-a & 2a & 0 & 0 & 0 & 0 \\
0 & 0 & b & c & 0 & e \\
0 & 0 & c & b & c & d \\
0 & 0 & 0 & c & b & e \\
0 & 0 & e & d & e & f
\end {array}
\right)
\begin{array}{l}
a=R_1^2\\
b=\frac{1}{\sqrt{2}}[(\sqrt{2}+1) A^2 + (\sqrt{2}-1) B^2]\\
c=\frac{1}{2}[(\sqrt{2}+1) A^2 - (\sqrt{2}-1) B^2]\\
d=-\frac{1}{2}[(\sqrt{2}+1)^2 A^2 + (\sqrt{2}-1)^2 B^2]\\
e=-\frac{1}{2\sqrt{2}}[(\sqrt{2}+1)^2 A^2 - (\sqrt{2}-1)^2 B^2] \\
f=\frac{1}{2\sqrt{2}}[(\sqrt{2}+1)^3 A^2 + (\sqrt{2}-1)^3 B^2]
\end{array}
\]
\indent{Number of effective parameters: 3}
\indent{Number of different couplings without deformations: 8}
\indent{corresponding to the following $\vec f_{23} $ shifts}
\[
\vec{f_{23}}= \; \left[
\begin{array}{l}
l_2=1 \;\left\{
\begin{array}{llll}
g_2^{(0)} \otimes \hat g_2^{(0)}, & g_2^{(0)} \otimes \hat g_2^{(2)}, &
g_2^{(2)} \otimes \hat g_2^{(0)}, & g_2^{(2)} \otimes \hat g_2^{(2)},
\end {array}
\right. \\
l_2=2 \; \left\{
\begin{array}{ll}
g_2^{(0)} \otimes \hat g_2^{(1)}, & g_2^{(1)} \otimes \hat g_2^{(0)},\\
g_2^{(2)} \otimes \hat g_2^{(1)}, & g_2^{(1)} \otimes \hat g_2^{(2)}
\end{array}
\right.
\end{array}
\right.
\]
\indent{Number of different couplings with deformations: 9}
\indent{corresponding to the following $\vec f_{23} $ shifts}
\[
\vec{f_{23}}= \; \left[
\begin{array}{l}
l_2=1 \;\left\{
\begin{array}{llll}
g_2^{(0)} \otimes \hat g_2^{(0)}, & g_2^{(0)} \otimes \hat g_2^{(2)}, &
g_2^{(2)} \otimes \hat g_2^{(0)}, & g_2^{(2)} \otimes \hat g_2^{(2)},
\end {array}
\right. \\
l_2=2 \; \left\{
\begin{array}{lll}
g_2^{(0)} \otimes \hat g_2^{(1)}, & g_2^{(1)} \otimes \hat g_2^{(0)}, &
g_2^{(1)} \otimes \hat g_2^{(1)}, \\
g_2^{(2)} \otimes \hat g_2^{(1)}, & g_2^{(1)} \otimes \hat g_2^{(2)} &
\end{array}
\right.
\end{array}
\right.
\]
\vspace {1.0 cm}
\noindent{\underline{\bf{ORBIFOLD $Z_8$--II}}}
\vspace{.5 cm}
\noindent{\underline {Twist}
$ \theta={\rm
diag}(e^{i\alpha},e^{3i\alpha},e^{-4i\alpha}) ,\;\;\;\;
\alpha=\frac{2\pi}{8} $}
\noindent{\underline {Lattice}
$ SO(4) \otimes SO(8) $}
\noindent{\underline {Twist in the lattice basis}}
\[
\begin{array}{lll}
\theta e_1=-e_1, & \theta e_2=-e_2, &
\theta e_3= e_4+e_5 ,\\
\theta e_4 = e_3+e_4+e_6, &
\theta e_5=-e_3-e_4-e_5-e_6, & \theta e_6 = -e_3-e_4
\end{array}
\]
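As for the previous orbifolds, one can check numerically (a minimal sketch using the images listed above) that this twist has order 8 on the $SO(4) \otimes SO(8)$ lattice, its fourth power differing from the identity:

```python
# Minimal sketch: check that the Z_8-II twist above has order 8.
# Columns are the images of e_1,...,e_6 in the lattice basis.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def power(A, p):
    n = len(A)
    P = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(p):
        P = matmul(A, P)
    return P

cols = [(-1, 0, 0, 0, 0, 0),     # theta e_1 = -e_1
        (0, -1, 0, 0, 0, 0),     # theta e_2 = -e_2
        (0, 0, 0, 1, 1, 0),      # theta e_3 = e_4 + e_5
        (0, 0, 1, 1, 0, 1),      # theta e_4 = e_3 + e_4 + e_6
        (0, 0, -1, -1, -1, -1),  # theta e_5 = -e_3 - e_4 - e_5 - e_6
        (0, 0, -1, -1, 0, 0)]    # theta e_6 = -e_3 - e_4
theta = [[cols[j][i] for j in range(6)] for i in range(6)]

identity = [[int(i == j) for j in range(6)] for i in range(6)]
print(power(theta, 8) == identity, power(theta, 4) == identity)  # True False
```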
\noindent{\underline {Deformation parameters}}
\indent{Relations}
\[
\begin{array}{lll}
|e_3|=|e_5|, & |e_4|= \frac{1}{\sqrt{2}} [|e_3|^2+|e_6|^2]^{1/2} ,&
\alpha_{35}=0 ,\\
\alpha_{56}=0, &
\alpha_{34}= \frac {1}{\sqrt{2}} \frac{[\frac {1}{2} |e_6|^2 - \frac {3}{2}
|e_3|^2]}
{|e_3|[|e_3|^2+|e_6|^2]^{1/2}} ,&
\alpha_{36}= \frac {[|e_3|^2-|e_6|^2]}{2|e_3||e_6|} ,\\
\alpha_{45}= \frac {1}{\sqrt{2}} \frac{[\frac {1}{2} |e_3|^2 - \frac {3}{2}
|e_6|^2]}
{|e_3|[|e_3|^2+|e_6|^2]^{1/2}}, &
\alpha_{46}= -\frac {1}{\sqrt{2}} \frac {[|e_3|^2 + |e_6|^2]^{1/2}}{|e_6|}, &
\alpha_{ij}=0\;\;\;i=1,2\;\;j=3,4,5,6
\end{array}
\]
\[
\alpha_{ij}\equiv\cos(\theta_{ij})
\]
\indent{Degrees of freedom (5)}
\[
\begin {array}{lllll}
R_1= |e_1|, &
R_2= |e_2|, & R_3= |e_3|, &
R_6= |e_6|, & \alpha_{12}
\end {array}
\]
\vspace{.5 cm}
\noindent{\underline{Lattice basis ($e_i$) in terms of orthogonal basis
($\tilde
e_i$)}}
\[
\begin{array}{rcl}
e_1&=& R_1 [\sin (\varphi_1+\theta_{12}) \tilde e_1 + \cos
(\varphi_1+\theta_{12}) \tilde e_2] \\
e_2&=& R_2 [\sin (\varphi_1) \tilde e_1 + \cos (\varphi_1) \tilde e_2] \\
e_3&=& A [\cos (\varphi_2) \tilde e_3 + \sin (\varphi_2) \tilde e_4] +
\rho_2 [\cos (\varphi_3) \tilde e_5 + \sin (\varphi_3) \tilde e_6] \\
e_4&=& \frac{A}{\sqrt{2}} [(\cos (\varphi_2)+(1+\sqrt{2})\sin (\varphi_2))
\tilde e_3 +
(-(1+\sqrt{2}) \cos (\varphi_2)+ \sin (\varphi_2)) \tilde e_4]-\\
& &-\frac{\rho_2}{\sqrt{2}} [(\cos (\varphi_3)-(1-\sqrt{2})\sin (\varphi_3))
\tilde e_5 +
((1-\sqrt{2}) \cos (\varphi_3)+ \sin (\varphi_3)) \tilde e_6] \\
e_5&=& -A [\sin (\varphi_2) \tilde e_3 - \cos (\varphi_2) \tilde e_4] +
\rho_2 [\sin (\varphi_3) \tilde e_5 - \cos (\varphi_3) \tilde e_6] \\
e_6&=& -(1+\sqrt{2})A [\cos (\varphi_2) \tilde e_3 + \sin (\varphi_2) \tilde
e_4] +
(\sqrt{2}-1)\rho_2 [\cos (\varphi_3) \tilde e_5 + \sin (\varphi_3) \tilde e_6]
\end{array}
\]
\[
\begin{array}{ll}
A= \frac {R_3}{2^{5/4}} \left[ \left(\frac{R_6}{R_3} \right) ^2
-(1-\sqrt{2})^2 \right]^{1/2}, &
\rho_2= \frac {R_6}{2^{5/4}} \left[ \left(\frac{R_3}{R_6} \right) ^2
(1+\sqrt{2})^2
- 1 \right]^{1/2}
\end{array}
\]
\indent{$\varphi_1,\;\varphi_2,\;\varphi_3$ are free parameters.}
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta$} (8)}
\[
\begin {array}{c}
f_1^{(ij)}= g_1^{(i)}\otimes \hat g_1^{(j)}\;,\;\;\; i=0,1,2,3,\;\;\;j = 0,1
\end {array}
\]
\[
\begin{array}{lll}
g_1^{(0)} = (0,0)\;,& g_1^{(1)} =\frac{1}{2} (1,0)\;,&
g_1^{(2)} =\frac{1}{2} (1,1)\;, \\
g_1^{(3)} =\frac{1}{2} (0,1)\;,&
\hat g_1^{(0)} =(0,0,0,0)\;,& \hat g_1^{(1)}=\frac{1}{2} (0,0,1,1)
\end {array}
\]
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^2$} (4)}
\indent{Fixed torus: $\alpha(e_1)+\beta(e_2)\;,\;\; \alpha,\;\beta
\in R$}
\[
\begin{array}{c}
f_2^{(i)}= [\alpha(e_1)+\beta(e_2)] \otimes \hat g_2^{(i)} \;,\;\;\;
i=0,1,2,3,\;\;
\alpha,\beta \in R\\
\end {array}
\]
\[
\begin {array}{llll}
\hat g_2^{(0)}=(0,0,0,0), & \hat g_2^{(1)}=\frac{1}{2}(1,0,1,0), &
\hat g_2^{(2)}=\frac{1}{2}(0,0,1,1), & \hat g_2^{(3)}=\frac{1}{2}(1,0,0,1)
\end{array}
\]
\indent {Note that in the $SO(8)$ lattice
$\theta : \hat g_2^{(1)} \rightarrow \hat g_2^{(3)}$.}
\indent {Number of conjugation classes: 3}
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^3$} (8)}
\indent{The same as for $\theta$.}
\[
\begin {array}{c}
f_3^{(ij)}= g_3^{(i)} \otimes \hat g_3^{(j)}\;,\;\;\; i=0,1,2,3,\;\;\;j = 0,1
\end {array}
\]
\[
\begin{array}{lll}
g_3^{(0)} = (0,0)\;,& g_3^{(1)} =\frac{1}{2} (1,0)\;,&
g_3^{(2)} =\frac{1}{2} (1,1)\;, \\
g_3^{(3)} =\frac{1}{2} (0,1)\;, &
\hat g_3^{(0)} =(0,0,0,0)\;,&
\hat g_3^{(1)} =\frac{1}{2} (0,0,1,1)
\end {array}
\]
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^4$} (16)}
\indent{Fixed torus: $\alpha(e_1)+\beta(e_2)\;,\;\; \alpha,\;\beta
\in R$}
\[
\begin{array}{c}
f_4^{(i)}= [\alpha(e_1)+\beta(e_2)] \otimes \hat g_4^{(i)} \;,\;\;\;
i=0,1,...,15,\;\; \alpha,\beta \in R
\end {array}
\]
\[
\begin {array}{llll}
\hat g_4^{(0)}=(0,0,0,0) ,&
\hat g_4^{(1)}=\frac{1}{2} (1,0,0,0), &
\hat g_4^{(2)}=\frac{1}{2} (0,1,0,0) ,&
\hat g_4^{(3)}=\frac{1}{2} (0,0,0,1), \\
\hat g_4^{(4)}=\frac{1}{2} (1,0,1,0), &
\hat g_4^{(5)}=\frac{1}{2} (0,1,1,0), &
\hat g_4^{(6)}=\frac{1}{2} (1,1,0,1), &
\hat g_4^{(7)}=\frac{1}{2} (1,1,0,0), \\
\hat g_4^{(8)}=\frac{1}{2} (0,0,1,1), &
\hat g_4^{(9)}=\frac{1}{2} (0,0,1,0), &
\hat g_4^{(10)}=\frac{1}{2} (0,1,1,1), &
\hat g_4^{(11)}=\frac{1}{2} (1,0,1,1), \\
\hat g_4^{(12)}=\frac{1}{2} (1,0,0,1), &
\hat g_4^{(13)}=\frac{1}{2} (1,1,1,1), &
\hat g_4^{(14)}=\frac{1}{2} (1,1,1,0), &
\hat g_4^{(15)}=\frac{1}{2} (0,1,0,1)
\end{array}
\]
\indent {Note that in the $SO(8)$ lattice}
\[
\begin{array}{ll}
\theta:\hat g_4^{(4)} \rightarrow \hat g_4^{(12)}, &
\theta:\hat g_4^{(1)} \rightarrow \hat g_4^{(5)} \rightarrow
\hat g_4^{(9)} \rightarrow \hat g_4^{(13)},\\
\theta:\hat g_4^{(2)} \rightarrow \hat g_4^{(6)} \rightarrow
\hat g_4^{(10)} \rightarrow \hat g_4^{(14)},&
\theta:\hat g_4^{(3)} \rightarrow \hat g_4^{(7)} \rightarrow
\hat g_4^{(11)} \rightarrow \hat g_4^{(15)}
\end{array}
\]
\indent {Number of conjugation classes: 6}
\vspace{.5 cm}
\noindent {\underline {Coupling $\theta\theta\theta^6$}}
\indent {Selection rule}
\[
f_1+f_2-(I+\theta)f_3 \in \Lambda
\]
\indent {Denoting}
\[
\left.
\begin {array}{rcl}
f_1 &=& g_1^{(i_1)}\otimes \hat g_1^{(j_1)}\\
f_2 &=& g_1^{(i_2)}\otimes \hat g_1^{(j_2)}\\
f_3 &=& [\alpha(e_1)+\beta(e_2)] \otimes \hat g_2^{(j_3)}
\end {array}
\right\}\;\;\;
\begin {array} {l}
i_1,i_2,j_3=0,1,2,3\;,\\
j_1,j_2=0,1\;,\\
\alpha,\beta \in R
\end {array}
\]
\indent{the selection rule reads}
\[
\left.
\begin {array}{l}
i_1=i_2 \\
j_1+j_2+j_3=0
\end {array}
\right\}
\;\;
mod.\;2
\]
\indent{Number of allowed couplings: 24}
\indent{Expression of the coupling}
\begin {eqnarray*}
C_{\theta\theta\theta^6} &=& \sqrt{l_3}\;\;
N \; \sum_{v \in (f_3-f_2+\Lambda)_{\perp}} \exp [-\frac
{\sqrt{2}}{8\pi} \; (|v_2|^2+|v_3|^2)] \\
&=& \sqrt{l_3}\;\;
N \;\; \sum_{\vec{u} \in Z^4} \exp [-\frac
{\sqrt{2}}{8\pi} \; (\vec{\bar{f_{23}}}+\vec{u})^{\top} M
(\vec{\bar{f_{23}}}+\vec{u})] \\
&=& \sqrt{l_3}\;\; N \; \vartheta
\left[
\begin{array}{c}
\vec{\bar{f_{23}}} \\
0
\end {array}
\right]
[0, \Omega]
\end {eqnarray*}
\begin{quotation}
\noindent{where the expression $(f_3-f_2+\Lambda)_{\perp}$ indicates that the
coset
elements must belong to the $SO(8)$ lattice, $\bar{f_{23}}$ is the restriction
of $f_{23}$ to $SO(8)$,
$l_3$ denotes the number of elements in the $f_3$ conjugation class, and the
arrows
denote the components in the $SO(8)$ lattice}
\end{quotation}
\[
N= \sqrt{V_{\perp}}\; \frac{1}{2 \pi} \;
\frac{\Gamma(\frac{7}{8}) \Gamma(\frac{5}{8})}
{\Gamma(\frac{1}{8}) \Gamma(\frac{3}{8})}
\;,\;\;
\Omega = \; i \frac {\sqrt{2}}{8\pi^2} M
\]
\[
\Omega= \; i \frac {\sqrt{2}}{8\pi^2}
\left(
\begin {array}{cccc}
a & b & 0 & c \\
b & d & e & -\frac{d}{2} \\
0 & e & a & 0 \\
c & -\frac{d}{2} & 0 & f
\end {array}
\right)
\begin{array}{l}
a=R_3^2\\
b=-\frac{3}{4} R_3^2 + \frac{1}{4} R_6^2\\
c= \frac {1}{2} R_3^2 - \frac {1}{2} R_6^2\\
d= \frac {1}{2} R_3^2 + \frac {1}{2} R_6^2\\
e=-\frac{3}{4} R_6^2 + \frac{1}{4} R_3^2\\
f= R_6^2
\end{array}
\]
\indent{where $V_{\perp}$ is the volume of the $SO(8)$ lattice}
\indent{Number of effective parameters: 2}
\indent{Number of different couplings without deformations: 3}
\indent{Number of different couplings with deformations: 3}
\indent{corresponding to the following $\vec{\bar{f_{23}}}$ shifts}
\[
\vec{\bar{f_{23}}}= \; \left[
\begin {array}{l}
l_3=1 \; \left \{
\begin{array}{l}
\hat g_2^{(0)},\\
\hat g_2^{(2)},
\end{array}
\right. \\
l_3=2 \;\;\;\; \hat g_2^{(1)}
\end{array}
\right.
\]
\vspace{.5 cm}
\noindent {\underline {Coupling $\theta\theta^3\theta^4$}}
\indent {Selection rule}
\[
f_1+f_2-(I+\theta+\theta^2+\theta^3)f_3 \in \Lambda
\]
\indent {Denoting}
\[
\left.
\begin {array}{rcl}
f_1 &=& g_1^{(i_1)}\otimes \hat g_1^{(j_1)}\\
f_2 &=& g_1^{(i_2)}\otimes \hat g_1^{(j_2)}\\
f_3 &=& [\alpha(e_1)+\beta(e_2)] \otimes \hat g_4^{(j_3)}
\end {array}
\right\}\;\;\;
\begin {array} {l}
i_1,i_2=0,1,2,3\;,\\
j_1,j_2=0,1\;, \\
j_3=0,1,...,15\;, \\
\alpha,\beta \in R
\end {array}
\]
\indent{the selection rule reads}
\[
\left.
\begin {array}{l}
i_1=i_2 \\
j_1+j_2+j_3=0
\end {array}
\right\}
\;\;
mod.\;2
\]
\indent{Number of allowed couplings: 48}
\indent{Expression of the coupling}
\begin {eqnarray*}
C_{\theta\theta^3\theta^4} &=& \sqrt{l_3}\;\;
N \; \sum_{v \in (f_3-f_2+\Lambda)_{\perp}} \exp [-\frac
{1}{4\pi} \; ((\sqrt{2}+1)|v_2|^2+(\sqrt{2}-1)|v_3|^2)] \\
&=& \sqrt{l_3}\;\;
N \;\; \sum_{\vec{u} \in Z^4} \exp [-\frac
{1}{4\pi} \; (\vec{\bar{f_{23}}}+\vec{u})^{\top} M
(\vec{\bar{f_{23}}}+\vec{u})] \\
&=& \sqrt{l_3}\;\; N \; \vartheta
\left[
\begin{array}{c}
\vec{\bar{f_{23}}} \\
0
\end {array}
\right]
[0, \Omega]
\end {eqnarray*}
\begin{quotation}
\noindent{where $(f_3-f_2+\Lambda)_{\perp}$ indicates that the coset
elements must belong to the $SO(8)$ lattice, $l_3$ is the number of elements
in the $f_3$ conjugation class, and $V_{\perp}$ is the volume of the $SO(8)$
lattice unit cell}
\end{quotation}
\[
N= \sqrt{V_{\perp}}\; \frac{1}{2 \pi} \;
\frac{\Gamma(\frac{7}{8}) \Gamma(\frac{5}{8})}
{\Gamma(\frac{1}{8}) \Gamma(\frac{3}{8})} \;\;\;\;
\Omega = \; i \frac{1}{4\pi^2} M
\]
\[
\Omega=\; i \frac{1}{4\pi^2}
\left(
\begin {array}{cccc}
a & b & 0 & e \\
b & c & d & d\\
0 & d & a & 0 \\
e & d & 0 & f
\end {array}
\right)
\begin{array}{l}
a=[(\sqrt{2}+1) A^2 + (\sqrt{2}-1) B^2]\\
b=\frac{1}{\sqrt{2}}[(\sqrt{2}+1) A^2 - (\sqrt{2}-1) B^2 ]\\
c=[(\sqrt{2}+2)(\sqrt{2}+1) A^2 - (\sqrt{2}-1)(\sqrt{2}-2) B^2]\\
d=-\frac{1}{\sqrt{2}}[(\sqrt{2}+1)^2 A^2 + (\sqrt{2}-1)^2 B^2 ]\\
e=-[(\sqrt{2}+1)^2 A^2 - (\sqrt{2}-1)^2 B^2]\\
f=[(\sqrt{2}+1)^3 A^2 + (\sqrt{2}-1)^3 B^2]
\end{array}
\]
\indent{Number of effective parameters: 2}
\indent{Number of different couplings without deformations: 6}
\indent{Number of different couplings with deformations: 6}
\indent{corresponding to the following $\vec{\bar{f_{23}}}$ shifts}
\[
\vec{\bar{f_{23}}}= \; \left[
\begin{array}{l}
l_3 =1 \; \left\{
\begin{array}{l}
\hat g_4^{(0)}, \\
\hat g_4^{(8)},
\end {array}
\right.\\
l_3=2 \;\;\;\;\; \hat g_4^{(4)}, \\
l_3=4 \; \left\{
\begin {array}{l}
\hat g_4^{(1)}, \\
\hat g_4^{(2)}, \\
\hat g_4^{(3)}
\end{array}
\right.
\end {array}
\right.
\]
\vspace {1.0 cm}
\noindent{\underline{\bf{ORBIFOLD $Z_{12}$--I}}}
\vspace{.5 cm}
\noindent{\underline {Twist}
$ \theta={\rm
diag}(e^{i\alpha},e^{4i\alpha},e^{-5i\alpha}) ,\;\;\;\;
\alpha=\frac{2\pi}{12} $}
\noindent{\underline {Lattice}
$ SU(3) \otimes F_4 $}
\noindent{\underline {Coxeter element}}
\[
\begin{array}{lll}
\theta e_1= e_2, & \theta e_2 = -e_1-e_2, &
\theta e_3= e_4 ,\\
\theta e_4= e_3+e_4+2 e_5, & \theta e_5=e_6, &
\theta e_6 = -e_3-e_4-e_5-e_6
\end{array}
\]
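The Coxeter element acts on lattice coordinates by an integer matrix whose columns are the images $\theta e_i$ listed above. A short Python check (assuming the basis ordering $e_1,\dots,e_6$) confirms that it has order 12, as required for a $Z_{12}$ twist:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Coxeter element of Z12-I in the lattice basis (e1,...,e6):
# column i holds the coordinates of theta(e_i) read off from the table above.
T = [
    [0, -1, 0, 0, 0, 0],
    [1, -1, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, -1],
    [0, 0, 1, 1, 0, -1],
    [0, 0, 0, 2, 0, -1],
    [0, 0, 0, 0, 1, -1],
]
I6 = [[int(i == j) for j in range(6)] for i in range(6)]
P, order = T, 1
while P != I6:
    P = matmul(P, T)
    order += 1
```

The $SU(3)$ block has order 3 and the $F_4$ block order 12, so the smallest power returning the identity is 12.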
\noindent{\underline {Deformation parameters}}
\indent{Relations}
\[
\begin{array}{llll}
|e_1|=|e_2| , & |e_3|=|e_4|= \sqrt{2} |e_5| = \sqrt{2} |e_6|,&
\alpha_{12}=-\frac {1}{2} ,& \alpha_{45}=-\frac{1}{\sqrt{2}} ,\\
\alpha_{34}= \alpha_{56} , &
\alpha_{35}= \alpha_{46} = \alpha_{36} ,&
\alpha_{35}= -\frac{1}{2\sqrt{2}}[1+2\alpha_{34}], &
\alpha_{ij}=0\;\;\;i=1,2,\;\;j=3,4,5,6
\end{array}
\]
\[
\alpha_{ij}\equiv\cos(\theta_{ij})
\]
\indent{Degrees of freedom (3)}
\[
\begin {array}{lll}
R_1= |e_1|, & R_3= |e_3|, & \alpha_{34}
\end {array}
\]
\vspace{.5 cm}
\noindent{\underline{Lattice basis ($e_i$) in terms of orthogonal basis
($\tilde
e_i$)}}
\[
\begin{array}{l}
e_1= R_1 \cos(\phi_1) \tilde e_1 + R_1 \sin(\phi_1) \tilde e_2 \\
e_2= R_1 \cos(\phi_1+\alpha) \tilde e_1 + R_1 \sin (\phi_1 +\alpha ) \tilde e_2
\\
e_3= A \cos(\phi_2) \tilde e_3 + A \sin(\phi_2) \tilde e_4 + B \cos (\phi_3)
\tilde e_5 +
B \sin (\phi_3) \tilde e_6 \\
e_4= A \cos(\phi_2+\beta) \tilde e_3 + A \sin(\phi_2+\beta) \tilde e_4 + B \cos
(\phi_3+ 7\beta)
\tilde e_5 + B \sin (\phi_3+ 7 \beta) \tilde e_6 \\
e_5= \frac{A}{\sqrt{2}}[-\sin(\phi_2+\frac{5}{2}\beta) \tilde e_3 +\cos(\phi_2+
\frac{5}{2}\beta)\tilde e_4] +
\frac{B}{\sqrt{2}}[\sin(\phi_3+ \frac{11}{2}\beta) \tilde e_5
-\cos (\phi_3+\frac{11}{2}\beta) \tilde e_6 ]\\
e_6= \frac{A}{\sqrt{2}}[-\sin(\phi_2+\frac{7}{2}\beta) \tilde e_3 +\cos(\phi_2+
\frac{7}{2}\beta)\tilde e_4] +
\frac{B}{\sqrt{2}}[\sin(\phi_3+ \frac{1}{2}\beta) \tilde e_5
-\cos (\phi_3+\frac{1}{2}\beta) \tilde e_6 ]\\
\end{array}
\]
\[
\begin{array}{llll}
\alpha = \frac{2\pi}{3}, & \beta= \frac {\pi}{6}, &
A= R_3 \left[ \frac {1}{2} + \frac{\alpha_{34}}{\sqrt{3}} \right]^{1/2}, &
B= R_3 \left[ \frac {1}{2} - \frac{\alpha_{34}}{\sqrt{3}} \right]^{1/2}
\end{array}
\]
\indent{$\phi_1,\;\phi_2,\;\phi_3$ are free parameters}
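As a consistency check, the basis above reproduces the deformation relations for any value of the free parameters. A Python sketch (sample parameter values chosen arbitrarily; the $\cos/\sin$ pairing on each orthogonal plane is assumed):

```python
import math

def dot(u, v): return sum(a * b for a, b in zip(u, v))
def norm(u): return math.sqrt(dot(u, u))
def cosang(u, v): return dot(u, v) / (norm(u) * norm(v))

# Arbitrary sample values for the degrees of freedom and free phases.
R1, R3, a34 = 1.3, 0.9, 0.2
p1, p2, p3 = 0.4, 1.1, -0.7
alpha, beta = 2 * math.pi / 3, math.pi / 6
A = R3 * math.sqrt(0.5 + a34 / math.sqrt(3))
B = R3 * math.sqrt(0.5 - a34 / math.sqrt(3))
s2 = math.sqrt(2)

e1 = [R1 * math.cos(p1), R1 * math.sin(p1), 0, 0, 0, 0]
e2 = [R1 * math.cos(p1 + alpha), R1 * math.sin(p1 + alpha), 0, 0, 0, 0]
e3 = [0, 0, A * math.cos(p2), A * math.sin(p2),
      B * math.cos(p3), B * math.sin(p3)]
e4 = [0, 0, A * math.cos(p2 + beta), A * math.sin(p2 + beta),
      B * math.cos(p3 + 7 * beta), B * math.sin(p3 + 7 * beta)]
e5 = [0, 0, -A / s2 * math.sin(p2 + 2.5 * beta), A / s2 * math.cos(p2 + 2.5 * beta),
      B / s2 * math.sin(p3 + 5.5 * beta), -B / s2 * math.cos(p3 + 5.5 * beta)]
e6 = [0, 0, -A / s2 * math.sin(p2 + 3.5 * beta), A / s2 * math.cos(p2 + 3.5 * beta),
      B / s2 * math.sin(p3 + 0.5 * beta), -B / s2 * math.cos(p3 + 0.5 * beta)]
```

One then verifies numerically that $|e_3|=|e_4|=\sqrt{2}|e_5|=\sqrt{2}|e_6|$, $\alpha_{12}=-\frac{1}{2}$, $\alpha_{45}=-\frac{1}{\sqrt{2}}$, $\alpha_{34}=\alpha_{56}$, and $\alpha_{35}=\alpha_{46}=-\frac{1}{2\sqrt{2}}[1+2\alpha_{34}]$.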
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta$} (3)}
\[
\begin {array}{c}
f_1^{(i)}= g_1^{(i)}\otimes \hat g_1^{(0)}\;,\;\;\; i=0,1,2
\end {array}
\]
\[
\begin{array}{llll}
g_1^{(0)} = (0,0)\;,& g_1^{(1)} =\frac{1}{3} (1,2)\;,&
g_1^{(2)} =\frac{1}{3} (2,1)\;, & \hat g_1^{(0)}=(0,0,0,0)\;
\end {array}
\]
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^2$} (3)}
\indent{The same as for $\theta$}
\[
\begin {array}{c}
f_2^{(i)}= g_2^{(i)} \otimes \hat g_2^{(0)}\;,\;\;\; i=0,1,2
\end {array}
\]
\[
\begin{array}{llll}
g_2^{(0)} = (0,0)\;,& g_2^{(1)} =\frac{1}{3} (1,2)\;,&
g_2^{(2)} =\frac{1}{3} (2,1)\;, & \hat g_2^{(0)}=(0,0,0,0)\;
\end {array}
\]
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^3$} (4)}
\indent{Fixed torus: $\alpha(e_1)+\beta(e_2)\;,\;\; \alpha,\;\beta
\in R$}
\[
\begin {array}{c}
f_3^{(i)}= [\alpha(e_1)+\beta(e_2)]\otimes \hat g_3^{(i)}\;,\;\;\;
i=0,1,2,3,\;\;
\alpha,\beta \in R
\end {array}
\]
\[
\begin{array}{llll}
\hat g_3^{(0)} =(0,0,0,0)\;,& \hat g_3^{(1)} =\frac{1}{2} (1,0,0,0)\;, &
\hat g_3^{(2)} =\frac{1}{2} (0,1,0,0)\;, & \hat g_3^{(3)} =\frac{1}{2}
(1,1,0,0)\;
\end {array}
\]
\indent{Note that in the $F_4$ lattice $\theta:\hat g_3^{(1)} \rightarrow \hat
g_3^{(2)}
\rightarrow \hat g_3^{(3)}$}
\indent {Number of conjugation classes: 2}
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^4$} (27)}
\[
\begin{array}{c}
f_4^{(ij)}= g_4^{(i)} \otimes \hat g_4^{(j)} \;,\;\;\;
i=0,1,2, \;\;\;\; j=0,1,...,8
\end {array}
\]
\[
\begin {array}{llll}
g_4^{(0)} = (0,0),&
g_4^{(1)} =\frac{1}{3} (1,2),&
g_4^{(2)} =\frac{1}{3} (2,1), &
\hat g_4^{(0)}=(0,0,0,0), \\
\hat g_4^{(1)}=\frac{1}{3} (2,1,2,0), &
\hat g_4^{(2)}=\frac{1}{3} (2,2,0,2), &
\hat g_4^{(3)}=\frac{1}{3} (1,0,2,2), &
\hat g_4^{(4)}=\frac{1}{3} (0,2,2,1), \\
\hat g_4^{(5)}=\frac{1}{3} (1,2,1,0), &
\hat g_4^{(6)}=\frac{1}{3} (1,1,0,1), &
\hat g_4^{(7)}=\frac{1}{3} (2,0,1,1), &
\hat g_4^{(8)}=\frac{1}{3} (0,1,1,2)
\end{array}
\]
\begin{quotation}
\noindent{Note that in the $F_4$ lattice $\theta :\hat g_4^{(1)} \rightarrow
\hat g_4^{(3)}
\rightarrow \hat g_4^{(5)} \rightarrow \hat g_4^{(7)}$
and $\theta: \hat g_4^{(2)} \rightarrow \hat g_4^{(4)} \rightarrow \hat
g_4^{(6)}
\rightarrow \hat g_4^{(8)}$}
\end{quotation}
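The quoted $\theta$-orbits follow from the Coxeter action on the fractional shifts, taken modulo the lattice. A Python sketch (coordinates $(a,b,c,d)$ in the basis $(e_3,\dots,e_6)$, with the $F_4$ block of the Coxeter element read off above):

```python
from fractions import Fraction

# theta on F_4 coordinates, from the Coxeter element:
# theta(a e3 + b e4 + c e5 + d e6)
#   = (b-d) e3 + (a+b-d) e4 + (2b-d) e5 + (c-d) e6; shifts are taken mod 1.
def theta(g):
    a, b, c, d = g
    return tuple(x % 1 for x in (b - d, a + b - d, 2 * b - d, c - d))

g = {k: tuple(Fraction(n, 3) for n in v) for k, v in {
    1: (2, 1, 2, 0), 2: (2, 2, 0, 2), 3: (1, 0, 2, 2), 4: (0, 2, 2, 1),
    5: (1, 2, 1, 0), 6: (1, 1, 0, 1), 7: (2, 0, 1, 1), 8: (0, 1, 1, 2),
}.items()}

def orbit(start):
    seq, cur = [start], g[start]
    while True:
        cur = theta(cur)
        label = next(k for k, v in g.items() if v == cur)
        if label == start:
            return seq
        seq.append(label)

o1, o2 = orbit(1), orbit(2)
```

The two length-four orbits $\{1,3,5,7\}$ and $\{2,4,6,8\}$ come out exactly as quoted, which together with $\hat g_4^{(0)}$ accounts for the 9 conjugation classes.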
\indent {Number of conjugation classes: 9}
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^5$} (3)}
\indent{The same as for $\theta$}
\[
\begin {array}{c}
f_5^{(i)}= g_5^{(i)}\otimes \hat g_5^{(0)}\;,\;\;\; i=0,1,2
\end {array}
\]
\[
\begin{array}{llll}
g_5^{(0)} = (0,0)\;,& g_5^{(1)} =\frac{1}{3} (1,2)\;,&
g_5^{(2)} =\frac{1}{3} (2,1)\;, & \hat g_5^{(0)}=(0,0,0,0)\;
\end {array}
\]
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^6$} (16)}
\indent{Fixed torus: $\alpha(e_1)+\beta(e_2)\;,\;\; \alpha,\;\beta
\in R$}
\[
\begin {array}{c}
f_6^{(i)}= [\alpha(e_1)+\beta(e_2)] \otimes \hat g_6^{(i)}\;,\;\;\;
i=0,1,...,15,\;\;
\alpha, \beta \in R
\end {array}
\]
\[
\begin{array}{llll}
\hat g_6^{(0)}=(0,0,0,0)\;,&
\hat g_6^{(1)} =\frac{1}{2} (1,1,1,1)\;,&
\hat g_6^{(2)} =\frac{1}{2} (0,0,0,1)\;, &
\hat g_6^{(3)} =\frac{1}{2} (0,0,1,0)\;,\\
\hat g_6^{(4)} =\frac{1}{2} (1,0,0,0)\;, &
\hat g_6^{(5)} =\frac{1}{2} (0,0,1,1)\;, &
\hat g_6^{(6)} =\frac{1}{2} (0,1,0,1)\;, &
\hat g_6^{(7)} =\frac{1}{2} (1,0,1,0)\;,\\
\hat g_6^{(8)} =\frac{1}{2} (0,1,0,0)\;,&
\hat g_6^{(9)} =\frac{1}{2} (0,1,1,1)\;,&
\hat g_6^{(10)} =\frac{1}{2} (1,1,0,1)\;,&
\hat g_6^{(11)}=\frac{1}{2} (0,1,1,0)\;,\\
\hat g_6^{(12)} =\frac{1}{2} (1,1,0,0)\;, &
\hat g_6^{(13)} =\frac{1}{2} (1,0,1,1)\;,&
\hat g_6^{(14)} =\frac{1}{2} (1,0,0,1)\;, &
\hat g_6^{(15)} =\frac{1}{2} (1,1,1,0)\;
\end {array}
\]
\begin{quotation}
\noindent{Note that in the $F_4$ lattice $\theta:\hat g_6^{(3)} \rightarrow
\hat
g_6^{(2)}
\rightarrow \hat g_6^{(1)} \rightarrow \hat g_6^{(11)} \rightarrow \hat
g_6^{(10)}
\rightarrow \hat g_6^{(9)}$ , $\theta: \hat g_6^{(7)}
\rightarrow \hat g_6^{(6)} \rightarrow \hat g_6^{(5)} \rightarrow \hat
g_6^{(15)}
\rightarrow \hat g_6^{(14)}
\rightarrow \hat g_6^{(13)}$ and $\theta: \hat g_6^{(4)}
\rightarrow \hat g_6^{(8)}
\rightarrow \hat g_6^{(12)}$}
\end{quotation}
\indent {Number of conjugation classes: 4}
\vspace{.5 cm}
\noindent {\underline {Coupling $\theta\theta^2\theta^9$}}
\indent {Selection rule}
\[
f_1+(I+\theta)f_2-(I+\theta+\theta^2)f_3 \in \Lambda
\]
\indent {Denoting}
\[
\left.
\begin {array}{rcl}
f_1 &=& g_1^{(i_1)}\otimes \hat g_1^{(0)}\\
f_2 &=& g_2^{(i_2)}\otimes \hat g_2^{(0)}\\
f_3 &=& [\alpha(e_1)+\beta(e_2)] \otimes \hat g_3^{(j_3)}
\end {array}
\right\}\;\;\;
\begin {array} {l}
i_1,i_2=0,1,2\;,\\
j_3=0,1,2,3\;,\\
\alpha,\beta \in R
\end {array}
\]
\indent{the selection rule reads}
\[
\begin {array}{lcr}
i_1&=&i_2
\end {array}
\]
\indent{Number of allowed couplings: 6}
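The count of 6 follows from enumerating conjugation classes: three classes each for the $\theta$ and $\theta^2$ fixed points, and the two classes of $\theta^3$ fixed points noted above. A small Python sketch of this bookkeeping:

```python
# Class data quoted above: theta and theta^2 each have three fixed points
# (all theta-invariant), while the four theta^3 fixed points form the
# classes {g3^(0)} and {g3^(1), g3^(2), g3^(3)}.
i1_vals = [0, 1, 2]
i2_vals = [0, 1, 2]
f3_classes = [(0,), (1, 2, 3)]
allowed = [(i1, i2, c3)
           for i1 in i1_vals for i2 in i2_vals for c3 in f3_classes
           if i1 == i2]
n_allowed = len(allowed)
```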
\indent{Expression of the coupling}
\begin {eqnarray*}
C_{\theta\theta^2\theta^9} &=& \sqrt{l_3}\;\;
N \; \sum_{v \in (f_3-f_2+\Lambda)_{\perp}} \exp [-\frac
{1}{4\pi} \sin(\frac{\pi}{6}) \sin (\frac{\pi}{4}) \;
(\frac{|v_2|^2}{\sin(\frac{\pi}{12})}
+\frac{|v_3|^2}{\cos(\frac{\pi}{12})})] \\
&=& \sqrt{l_3}\;\;
N \;\; \sum_{\vec{u} \in Z^4} \exp [-\frac
{\sqrt{2}}{4\pi} \; (\vec{\bar{f_{23}}}+\vec{u})^{\top} M
(\vec{\bar{f_{23}}}+\vec{u})] \\
&=& \sqrt{l_3}\;\; N \; \vartheta
\left[
\begin{array}{c}
\vec{\bar{f_{23}}} \\
0
\end {array}
\right]
[0, \Omega]
\end {eqnarray*}
\begin{quotation}
\noindent{where $(f_3-f_2+\Lambda)_{\perp}$ indicates that the coset
elements must belong to $F_4$, $l_3$ is the number of elements in the $f_3$
conjugation class,
the arrows denote components in the $F_4$ lattice, and $V_{\perp}$ is the
volume
of the $F_4$
lattice unit cell}
\end{quotation}
\[
N= \sqrt{V_{\perp}}\; \frac{1}{2 \pi} \;
\frac{\Gamma(\frac{5}{6})}{\Gamma(\frac{1}{6})} \;
\left[ \frac{\Gamma(\frac{11}{12}) \Gamma(\frac{5}{12})}
{\Gamma(\frac{1}{12}) \Gamma(\frac{7}{12})} \right] ^{1/2}\;,\;\;\;
\Omega = \; i \frac{\sqrt{2}}{4\pi^2} M
\]
\[
\Omega = \; i \frac{\sqrt{2}}{4\pi^2}
\left(
\begin {array}{cccc}
a & \frac{b\sqrt{3}}{2} & -\frac{b}{\sqrt{2}} & -\frac{b}{\sqrt{2}} \\
\frac{b\sqrt{3}}{2} & a & -\frac{a}{2} & \frac{b\sqrt{3}}{2} \\
-\frac{b}{\sqrt{2}} & -\frac{a}{2} & \frac{a}{2} & \frac{b\sqrt{3}}{4} \\
-\frac{b}{\sqrt{2}} & \frac{b\sqrt{3}}{2} & \frac{b\sqrt{3}}{4} & \frac{a}{2}
\end {array}
\right)
\begin{array}{l}
a=[A^2 \cos (\frac{\pi}{12})+ B^2 \sin (\frac{\pi}{12})]\\
b=[A^2 \cos (\frac{\pi}{12})- B^2 \sin (\frac{\pi}{12})]
\end{array}
\]
\indent{Number of effective parameters: 2}
\indent{Number of different couplings without deformations: 2}
\indent{Number of different couplings with deformations: 2}
\indent{corresponding to the following $\vec{\bar{ f_{23}}} $ shifts}
\[
\vec{\bar{f_{23}}} = \hat g_3^{(0)}, \; \hat g_3^{(1)}
\]
\indent{Note that this coupling is the same as $\theta^2\theta^3\theta^7$}
\vspace{.5 cm}
\noindent {\underline {Coupling $\theta\theta^4\theta^7$}}
\indent {Selection rule}
\[
f_1+(I+\theta+\theta^2+\theta^3)f_2-(I+\theta+\theta^2+\theta^3+\theta^4)f_3
\in
\Lambda
\]
\indent {Denoting}
\[
\left.
\begin {array}{rcl}
f_1 &=& g_1^{(i_1)}\otimes \hat g_1^{(0)}\\
f_2 &=& g_4^{(i_2)}\otimes \hat g_4^{(j_2)}\\
f_3 &=& g_5^{(i_3)}\otimes \hat g_5^{(0)}
\end {array}
\right\}\;\;\;
\begin {array} {l}
i_1,i_2,i_3 = 0,1,2,\\
j_2= 0,1,...,8
\end {array}
\]
\indent{the selection rule reads}
\[
\begin {array}{lcr}
i_1+i_2+i_3=0\;,\;\;\; mod.\;3
\end {array}
\]
\indent{Number of allowed couplings: 6}
\indent{Expression of the coupling}
\begin {eqnarray*}
C_{\theta\theta^4\theta^7} &=& \sqrt{l_2}\;\;
N \; \sum_{v \in (f_3-f_2+\Lambda)} \exp [-\frac
{1}{4\pi} \; (\frac{\sqrt{3}}{2} |v_1|^2 +
2\sqrt{3}(\frac{|v_2|^2}{\cos^2(\frac{\pi}{12})}
+\frac{|v_3|^2}{\sin^2(\frac{\pi}{12})}))] \\
&=& \sqrt{l_2}\;\;
N \;\; \sum_{\vec{u} \in Z^6} \exp [-\frac
{\sqrt{3}}{8\pi} \; (\vec{f_{23}}+\vec{u})^{\top} M (\vec{f_{23}}+\vec{u})]
\\
&=& \sqrt{l_2}\;\; N \; \vartheta
\left[
\begin{array}{c}
\vec{{f_{23}}} \\
0
\end {array}
\right]
[0, \Omega]
\end {eqnarray*}
\begin{quotation}
\noindent{$l_2$ is the number of elements in the $f_2$ conjugation class, and
the arrows denote
components in the $SU(3) \otimes F_4$ lattice}
\end {quotation}
\[
N= \sqrt{V_{\Lambda}}\; \frac{3^{1/4}}{4 \pi^2} \;
\frac{\Gamma^3(\frac{2}{3})}{\Gamma^2(\frac{1}{3})} \;
\left[ \frac{\Gamma(\frac{11}{12}) \Gamma(\frac{5}{12})}
{\Gamma(\frac{1}{12}) \Gamma(\frac{7}{12})} \right] \;,\;\;\;\;
\Omega = \; i \frac{\sqrt{3}}{8\pi^2} M
\]
\[
\begin{array}{l}
\Omega= \;i \frac{\sqrt{3}}{8\pi^2}
\left(
\begin {array}{cccccc}
R_1^2 & -\frac{R_1^2}{2} & 0 & 0 & 0 & 0 \\
-\frac{R_1^2}{2} & R_1^2 & 0 & 0 & 0 & 0 \\
0 & 0 & a & \frac{b\sqrt{3}}{2} & -\frac{b}{\sqrt{2}} & -\frac{b}{\sqrt{2}} \\
0 & 0 & \frac{b\sqrt{3}}{2} & a & -\frac{a}{2} & \frac{b\sqrt{3}}{2} \\
0 & 0 & -\frac{b}{\sqrt{2}} & -\frac{a}{2} & \frac{a}{2} & \frac{b\sqrt{3}}{4}
\\
0 & 0 & -\frac{b}{\sqrt{2}} & \frac{b\sqrt{3}}{2} & \frac{b\sqrt{3}}{4} &
\frac{a}{2}
\end {array}
\right)
\begin{array}{l}
a=4[A^2 \cos (\frac{\pi}{12})+ B^2 \sin (\frac{\pi}{12})]\\
b=4[A^2 \cos (\frac{\pi}{12})- B^2 \sin (\frac{\pi}{12})]
\end{array}
\end{array}
\]
\indent{Number of effective parameters: 3}
\indent{Number of different couplings without deformations: 4}
\indent{Number of different couplings with deformations: 4}
\indent{corresponding to the following $\vec{f_{23}} $ shifts}
\[
\vec{f_{23}}= \;
\left[
\begin{array}{l}
l_2=1 \;
\left\{
\begin{array}{l}
g_4^{(0)} \otimes \hat g_4^{(0)}, \\
g_4^{(1)} \otimes \hat g_4^{(0)},
\end{array}
\right.\\
l_2=2 \;
\left\{
\begin{array}{l}
g_4^{(0)} \otimes \hat g_4^{(1)}, \\
g_4^{(1)} \otimes \hat g_4^{(1)}
\end{array}
\right.
\end{array}
\right.
\]
\vspace{.5 cm}
\noindent {\underline {Coupling $\theta^2\theta^4\theta^6$}}
\indent {Selection rule}
\[
f_1+(I+\theta^2)f_2-(I+\theta^2+\theta^4)f_3 \in \Lambda
\]
\indent {Denoting}
\[
\left.
\begin {array}{rcl}
f_1 &=& g_2^{(i_1)}\otimes \hat g_2^{(0)}\\
f_2 &=& g_4^{(i_2)}\otimes \hat g_4^{(j_2)}\\
f_3 &=& [\alpha(e_1)+\beta(e_2)] \otimes \hat g_6^{(j_3)}
\end {array}
\right\}\;\;\;
\begin {array} {l}
i_1,i_2= 0,1,2,\\
j_2= 0,1,...,8,\\
j_3= 0,1,...,15,\\
\alpha,\beta \in R
\end {array}
\]
\indent{the selection rule reads}
\[
\begin {array}{lcr}
i_1=i_2 \;,
\end {array}
\]
\indent{Number of allowed couplings: 36}
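The count of 36 again follows from class bookkeeping: $i_1=i_2$ leaves three choices, the $\theta^4$ fixed points contribute three $F_4$-shift classes once $i_2$ is fixed, and the $\theta^6$ fixed points contribute four classes. A Python sketch (orbit data taken from the notes above):

```python
# Class data quoted above: three theta^2 fixed points; theta^4 fixed points
# with theta-invariant SU(3) label i2 and F_4 shifts in classes {0},
# {1,3,5,7}, {2,4,6,8}; theta^6 fixed points in four classes.
i_vals = [0, 1, 2]
j2_classes = [(0,), (1, 3, 5, 7), (2, 4, 6, 8)]
j3_classes = [(0,), (1, 2, 3, 9, 10, 11), (5, 6, 7, 13, 14, 15), (4, 8, 12)]
allowed = [(i1, i2, c2, c3)
           for i1 in i_vals for i2 in i_vals
           for c2 in j2_classes for c3 in j3_classes
           if i1 == i2]
n_allowed = len(allowed)
```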
\indent{Expression of the coupling (in all cases except $l_3=6,\; l_2=4$)}
\begin {eqnarray*}
C_{\theta^2\theta^4\theta^6} &=&
N \sqrt{l_2\;l_3} \; \sum_{v \in (f_{23}+\Lambda)_{\perp}} \exp [-\frac
{1}{4\pi} \; \frac {\sin(\frac{\pi}{3})}{\sin(\frac{\pi}{6})} \; |v|^2] \\
&=&
N \sqrt{l_2\;l_3} \; \sum_{\vec{u} \in Z^4} \exp [-\frac
{\sqrt{3}}{4\pi} \; (\vec{\bar{f_{23}}}+\vec{u})^{\top} M
(\vec{\bar{f_{23}}}+\vec{u})] \\
&=& N \sqrt{l_2\;l_3} \;\vartheta
\left[
\begin{array}{c}
\vec{\bar{f_{23}}} \\
0
\end {array}
\right]
[0, \Omega]
\end {eqnarray*}
\begin{quotation}
\noindent{where $f_{23}=f_2-f_3$, $\bar f_{23}$ is the restriction of $f_{23}$
to the $F_4$
lattice; $(f_{23}+\Lambda)_{\perp}$ indicates that the coset must belong to
$F_4$
and $l_i$ is the number of elements in the $f_i$ conjugation class. $V_{\perp}$
is
the volume of the $F_4$ unit cell}
\end{quotation}
\indent{In the $l_3=6,\;l_2=4$ case}
\begin {eqnarray*}
C_{\theta^2\theta^4\theta^6} &=&
N \sqrt{6} \; \sum_{v \in (f_2-f_3+\Lambda)_{\perp}\cup (\theta
f_2-f_3+\Lambda)_{\perp}}
\exp [-\frac{1}{4\pi} \; \frac {\sin(\frac{\pi}{3})}{\sin(\frac{\pi}{6})} \;
|v|^2] \\
&=& N \sqrt{6} \; \{ \vartheta
\left[
\begin{array}{c}
\vec{\bar{f_{23}}} \\
0
\end {array}
\right]
[0, \Omega] +
\vartheta
\left[
\begin{array}{c}
\vec{\bar{f'_{23}}} \\
0
\end {array}
\right]
[0, \Omega] \}
\end {eqnarray*}
\[
N= \sqrt{V_{\perp}}\; \frac{1}{2\pi} \;
\left[ \frac{\Gamma(\frac{2}{3}) \Gamma(\frac{5}{6})}
{\Gamma(\frac{1}{6}) \Gamma(\frac{1}{3})} \right]\;,\;\;\;
\Omega = \;i \frac {\sqrt{3}}{4 \pi^2} M
\]
\[
\Omega = \;i \frac {\sqrt{3}}{4 \pi^2} R_3^2
\left(
\begin {array}{cccc}
1 & \alpha_{34} & -\frac{1}{4}[1+2\alpha_{34}] & -\frac{1}{4}[1+2\alpha_{34}]
\\
\alpha_{34} & 1 & -\frac{1}{2} & -\frac{1}{4}[1+2\alpha_{34}] \\
-\frac{1}{4}[1+2\alpha_{34}] & -\frac{1}{2} & \frac {1}{2} & \frac
{\alpha_{34}}{2} \\
-\frac{1}{4}[1+2\alpha_{34}] & -\frac{1}{4}[1+2\alpha_{34}] & \frac
{\alpha_{34}}{2} & \frac{1}{2}
\end {array}
\right)
\]
\indent{Number of effective parameters: 2}
\indent{Number of different couplings without deformations: 7 }
\indent{Number of different couplings with deformations: 7}
\indent{corresponding to the following $\vec{\bar{ f_{23}}} $ shifts}
\[
\vec{\bar{f_{23}}}=\; \left[
\begin{array}{l}
l_2=l_3=1\;\;(0,0,0,0) ,\\
l_2=1\;l_3=3\;\;(\frac{1}{2},0,0,0),\\
l_2=1\;l_3=6\;\;(0,0,\frac{1}{2},0),\\
l_2=4\;l_3=1\;\;(\frac{2}{3},\frac{1}{3},\frac{2}{3},0),\\
l_2=4\;l_3=3\;\;(\frac{1}{6},\frac{1}{3},\frac{2}{3},0),\\
l_2=4\;l_3=6\;\;\left\{
\begin{array}{l}
(\frac{2}{3},\frac{1}{3},\frac{1}{6},0)\cup
(\frac{2}{3},\frac{1}{3},\frac{2}{3},\frac{1}{2}),\\
(\frac{1}{6},\frac{1}{3},\frac{1}{6},0)\cup
(\frac{2}{3},\frac{5}{6},\frac{2}{3},\frac{1}{2})
\end{array}
\right.
\end{array}
\right.
\]
\vspace{.5 cm}
\noindent {\underline {Coupling $\theta^4\theta^4\theta^4$}}
\indent {Selection rule}
\[
f_1+f_2+f_3 \in \Lambda
\]
\indent {Denoting}
\[
\left.
\begin {array}{rcl}
f_1 &=& g_4^{(i_1)}\otimes \hat g_4^{(j_1)}\\
f_2 &=& g_4^{(i_2)}\otimes \hat g_4^{(j_2)}\\
f_3 &=& g_4^{(i_3)}\otimes \hat g_4^{(j_3)}
\end {array}
\right\}\;\;\;
\begin {array} {l}
i_1,i_2,i_3= 0,1,2,\\
j_1,j_2,j_3= 0,1,...,8,
\end {array}
\]
\indent{The selection rule reads}
\[
\begin {array}{l}
i_1+i_2+i_3=0\;\;\; mod. \; 3 \\
\left.
\begin{array}{cl}
j_{\sigma(1)}=0\;j_{\sigma(2)},j_{\sigma(3)}\neq 0 &
j_{\sigma(2)}-j_{\sigma(3)}=4 \\
j_{\sigma(1)}\;{\rm even}\;j_{\sigma(2)},j_{\sigma(3)}\; {\rm odd} &
j_{\sigma(3)}-5=j_{\sigma(2)}+1=j_{\sigma(1)}\\
j_{\sigma(1)}\;{\rm odd}\;j_{\sigma(2)},j_{\sigma(3)}\; {\rm even} &
j_{\sigma(3)}-7=j_{\sigma(2)}-5=j_{\sigma(1)}\\
j_{\sigma(1)},j_{\sigma(2)},j_{\sigma(3)}\; {\rm odd\; or \; even} &
j_{\sigma(3)}=j_{\sigma(2)}=j_{\sigma(1)}
\end {array}
\right\}\;\;\;
\begin {array}{l}
mod.\;8 \\
\sigma\;\equiv{\rm permutation\;of\; \{1,2,3\}}
\end {array}
\end {array}
\]
\indent{Number of allowed couplings: 189}
\indent{Expression of the coupling}
\begin {eqnarray*}
C_{\theta^4\theta^4\theta^4} &=&
N \;F(l_1,l_2,l_3)\; \sum_{v \in
(f_3-f_2+\Lambda)} \exp [-\frac
{\sqrt{3}}{8\pi} |v|^2] \\
&=& N \;F(l_1,l_2,l_3)\; \sum_{\vec{u}\in Z^6} \exp [-\frac
{\sqrt{3}}{8\pi} (\vec{f_{23}}+\vec{u})^{\top} M (\vec{f_{23}}+\vec{u})]\\
&=& N \; F(l_1,l_2,l_3) \; \vartheta
\left[
\begin{array}{c}
\vec{f_{23}} \\
0
\end {array}
\right]
[0, \Omega]
\end {eqnarray*}
\indent{where $F=1$ except for the case $f_1=f_2=f_3$ and $l_1=l_2=l_3=4$
in which $F=\frac{1}{2}$}
\[
\Omega = \; i \frac{\sqrt{3}}{8\pi^2}\; M \;\;\;
N= \sqrt{V_{\Lambda}}\; \frac{3^{1/4}}{4\pi^2} \;
\left[ \frac{\Gamma^5(\frac{2}{3})}
{\Gamma^4(\frac{1}{6})} \right]
\]
\[
\begin{array}{l}
\Omega =\; i \frac{\sqrt{3}}{8\pi^2}\;
\left(
\begin {array}{cccccc}
R_1^2 & -\frac{R_1^2}{2} & 0 & 0 & 0 & 0 \\
- \frac {R_1^2}{2} & R_1^2 & 0 & 0 & 0 & 0 \\
0 & 0 & R_3^2 &R_3^2 \alpha_{34} & -R_3^2 \frac{1+2\alpha_{34}}{4} &
-R_3^2 \frac{1+2\alpha_{34}}{4} \\
0 & 0 & R_3^2 \alpha_{34} & R_3^2 & -R_3^2 \frac{1}{2} & -R_3^2
\frac{1+2\alpha_{34}}{4}\\
0 & 0 & -R_3^2 \frac{1+2\alpha_{34}}{4}& -R_3^2 \frac{1}{2} & R_3^2\frac {1}{2}
& R_3^2 \frac {\alpha_{34}}{2} \\
0 & 0 & -R_3^2 \frac{1+2\alpha_{34}}{4} & -R_3^2 \frac{1+2\alpha_{34}}{4}
& R_3^2 \frac {\alpha_{34}}{2} & R_3^2 \frac{1}{2}
\end {array}
\right)
\end{array}
\]
\indent{Number of effective parameters: 3}
\indent{Number of different couplings without deformations: 6}
\indent{Number of different couplings with deformations: 6}
\indent{corresponding to the following $\vec f_{23} $ shifts}
\[
\vec{f_{23}}=\;
\left[
\begin{array}{l}
F=\frac{1}{2} \left\{
\begin{array}{l}
(0,0,\frac{2}{3},\frac{1}{3},\frac{2}{3},0),\\
(\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{3},\frac{2}{3},0),
\end{array}
\right. \\
F=1 \left\{
\begin{array}{l}
(0,0,0,0,0,0),\\
(\frac{1}{3},\frac{2}{3},0,0,0,0),\\
(0,0,\frac{2}{3},\frac{1}{3},\frac{2}{3},0),\\
(\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{3},\frac{2}{3},0)
\end{array}
\right.
\end{array}
\right.
\]
\vspace {1.0 cm}
\noindent{\underline{\bf{ORBIFOLD $Z_{12}$--II}}}
\vspace{.5 cm}
\noindent{\underline {Twist}
$ \theta={\rm
diag}(e^{i\alpha},e^{5i\alpha},e^{-6i\alpha}) ,\;\;\;\;
\alpha=\frac{2\pi}{12} $}
\noindent{\underline {Lattice}
$ SO(4) \otimes F_4 $}
\noindent{\underline {Coxeter element}}
\[
\begin{array}{lll}
\theta e_1=-e_1, & \theta e_2=-e_2, & \theta e_3=e_4, \\
\theta e_4=e_3+e_4+2 e_5 ,& \theta e_5= e_6, &
\theta e_6 = -e_3-e_4-e_5-e_6
\end{array}
\]
\noindent{\underline {Deformation parameters}}
\indent{Relations}
\[
\begin{array}{lll}
|e_3|=|e_4|= \sqrt{2} |e_5| = \sqrt{2} |e_6|,&
\alpha_{45}=-\frac{1}{\sqrt{2}}, &
\alpha_{34}= \alpha_{56}, \\
\alpha_{35}= \alpha_{46} = \alpha_{36}, &
\alpha_{35}= -\frac{1}{2\sqrt{2}}[1+2\alpha_{34}], &
\alpha_{ij}=0\;\;\;i=1,2,\;\;j=3,4,5,6
\end{array}
\]
\[
\alpha_{ij}\equiv\cos(\theta_{ij})
\]
\indent{Degrees of freedom (5)}
\[
\begin {array}{lllll}
R_1= |e_1|, & R_2= |e_2|, & R_3= |e_3|, &
\alpha_{12}, &
\alpha_{34}
\end {array}
\]
\noindent{\underline{Lattice basis ($e_i$) in terms of orthogonal basis
($\tilde e_i$)}}
\indent{Not necessary in this case.}
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta$} (4)}
\[
\begin {array}{c}
f_1^{(i)}= g_1^{(i)} \otimes \hat g_1^{(0)} \;,\;\;\; i=0,1,2,3
\end {array}
\]
\[
\begin{array}{lll}
g_1^{(0)} = (0,0)\;,& g_1^{(1)} =\frac{1}{2} (0,1)\;,&
g_1^{(2)} =\frac{1}{2} (1,1)\;, \\
g_1^{(3)} =\frac{1}{2} (1,0)\;, &
\hat g_1^{(0)}=(0,0,0,0)\;&
\end {array}
\]
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^2$} (1)}
\indent{Fixed torus: $\alpha(e_1)+\beta(e_2)\;,\;\; \alpha,\;\beta
\in R$}
\[
\begin {array}{c}
f_2^{(0)}=[\alpha(e_1)+\beta(e_2)] \otimes \hat g_2^{(0)}\;,\;\;\;
\alpha,\beta \in R
\end {array}
\]
\[
\begin{array}{l}
\hat g_2^{(0)}=(0,0,0,0)
\end {array}
\]
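The fixed torus can be verified directly from the Coxeter element: $\theta^2$ is the identity on the $(e_1,e_2)$ plane, while $I-\theta^2$ is invertible on the $F_4$ block, so only isolated fixed points arise there. A Python sketch (integer matrices in the basis $(e_1,\dots,e_6)$):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det(A):
    # Laplace expansion along the first row (fine for a 4x4 integer matrix).
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

# Z12-II Coxeter element; columns are the images theta(e_i) read off above.
T = [
    [-1, 0, 0, 0, 0, 0],
    [0, -1, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, -1],
    [0, 0, 1, 1, 0, -1],
    [0, 0, 0, 2, 0, -1],
    [0, 0, 0, 0, 1, -1],
]
T2 = matmul(T, T)
so4_block = [row[:2] for row in T2[:2]]  # theta^2 restricted to the (e1,e2) plane
f4_block = [[int(i == j) - T2[i][j] for j in range(2, 6)] for i in range(2, 6)]  # I - theta^2 on F_4
```

Here `so4_block` comes out as the identity and `det(f4_block)` is nonzero, confirming a two-dimensional fixed torus with only discrete fixed points in the $F_4$ directions.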
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^3$} (16)}
\[
\begin {array}{c}
f_3^{(ij)}= g_3^{(i)} \otimes \hat g_3^{(j)}\;,\;\;\;i,j=0,1,2,3
\end {array}
\]
\[
\begin{array}{llll}
g_3^{(0)} = (0,0)\;,&
g_3^{(1)} =\frac{1}{2} (0,1)\;,&
g_3^{(2)} =\frac{1}{2} (1,1)\;, &
g_3^{(3)} =\frac{1}{2} (1,0)\;, \\
\hat g_3^{(0)} =(0,0,0,0)\;,&
\hat g_3^{(1)} =\frac{1}{2} (1,0,0,0)\;,&
\hat g_3^{(2)} =\frac{1}{2} (1,1,0,0)\;, &
\hat g_3^{(3)} =\frac{1}{2} (0,1,0,0)
\end {array}
\]
\indent{Note that, in $F_4$, $\theta : \hat g_3^{(1)} \rightarrow
\hat g_3^{(3)} \rightarrow \hat g_3^{(2)}$}
\indent {Number of conjugation classes: 8}
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^4$} (9)}
\indent{Fixed torus: $\alpha(e_1)+\beta(e_2)\;,\;\; \alpha,\;\beta
\in R$}
\[
\begin{array}{c}
f_4^{(i)}= [\alpha(e_1)+\beta(e_2)] \otimes \hat g_4^{(i)} \;,\;\;\;
i=0,1,...,8
\end {array}
\]
\[
\begin {array}{llll}
\hat g_4^{(0)}=(0,0,0,0), &
\hat g_4^{(1)}=\frac{1}{3} (2,1,2,0), &
\hat g_4^{(2)}=\frac{1}{3} (2,2,0,2), &
\hat g_4^{(3)}=\frac{1}{3} (1,0,2,2), \\
\hat g_4^{(4)}=\frac{1}{3} (0,2,2,1), &
\hat g_4^{(5)}=\frac{1}{3} (1,2,1,0), &
\hat g_4^{(6)}=\frac{1}{3} (1,1,0,1), &
\hat g_4^{(7)}=\frac{1}{3} (2,0,1,1), \\
\hat g_4^{(8)}=\frac{1}{3} (0,1,1,2) & & &
\end{array}
\]
\indent{Note that, in $F_4$, $\theta : \hat g_4^{(1)} \rightarrow \hat
g_4^{(3)}
\rightarrow \hat g_4^{(5)} \rightarrow \hat g_4^{(7)}$
and $\theta : \hat g_4^{(2)} \rightarrow \hat g_4^{(4)} \rightarrow \hat
g_4^{(6)}
\rightarrow \hat g_4^{(8)}$}
\indent {Number of conjugation classes: 3}
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^5$} (4)}
\indent{The same as for $\theta$.}
\[
\begin {array}{c}
f_5^{(i)}= g_5^{(i)} \otimes \hat g_5^{(0)}\;,\;\;\; i=0,1,2,3
\end {array}
\]
\[
\begin{array}{lll}
g_5^{(0)} = (0,0)\;,&
g_5^{(1)} =\frac{1}{2} (0,1)\;,&
g_5^{(2)} =\frac{1}{2} (1,1)\;, \\
g_5^{(3)} =\frac{1}{2} (1,0)\;, &
\hat g_5^{(0)}=(0,0,0,0) &
\end {array}
\]
\vspace{.5 cm}
\noindent{\underline {Fixed points of $\theta^6$} (16)}
\indent{Fixed torus: $\alpha(e_1)+\beta(e_2)\;,\;\; \alpha,\;\beta
\in R$}
\[
\begin {array}{c}
f_6^{(i)}= [\alpha(e_1)+\beta(e_2)]\otimes \hat g_6^{(i)}\;,\;\;\;
i=0,1,...,15,\;\;
\alpha,\beta \in R
\end {array}
\]
\[
\begin{array}{llll}
\hat g_6^{(0)}=(0,0,0,0)\;,&
\hat g_6^{(1)} =\frac{1}{2} (1,1,1,1)\;,&
\hat g_6^{(2)} =\frac{1}{2} (0,0,0,1)\;, &
\hat g_6^{(3)} =\frac{1}{2} (0,0,1,0)\;, \\
\hat g_6^{(4)} =\frac{1}{2} (1,0,0,0)\;, &
\hat g_6^{(5)} =\frac{1}{2} (0,0,1,1)\;, &
\hat g_6^{(6)} =\frac{1}{2} (0,1,0,1)\;, &
\hat g_6^{(7)} =\frac{1}{2} (1,0,1,0)\;, \\
\hat g_6^{(8)} =\frac{1}{2} (0,1,0,0)\;, &
\hat g_6^{(9)} =\frac{1}{2} (0,1,1,1)\;, &
\hat g_6^{(10)} =\frac{1}{2} (1,1,0,1)\;,&
\hat g_6^{(11)}=\frac{1}{2} (0,1,1,0)\;, \\
\hat g_6^{(12)} =\frac{1}{2} (1,1,0,0)\;, &
\hat g_6^{(13)} =\frac{1}{2} (1,0,1,1)\;, &
\hat g_6^{(14)} =\frac{1}{2} (1,0,0,1)\;, &
\hat g_6^{(15)} =\frac{1}{2} (1,1,1,0)
\end {array}
\]
\begin{quotation}
\noindent{Note that in $F_4$ $\theta:\hat g_6^{(3)} \rightarrow \hat g_6^{(2)}
\rightarrow \hat g_6^{(1)} \rightarrow \hat g_6^{(11)} \rightarrow \hat
g_6^{(10)}
\rightarrow \hat g_6^{(9)}$ , $\theta: \hat g_6^{(7)}
\rightarrow \hat g_6^{(6)} \rightarrow \hat g_6^{(5)} \rightarrow \hat
g_6^{(15)}
\rightarrow \hat g_6^{(14)}
\rightarrow \hat g_6^{(13)}$ and $\theta: \hat g_6^{(4)}
\rightarrow \hat g_6^{(8)}
\rightarrow \hat g_6^{(12)}$}
\end{quotation}
\indent {Number of conjugation classes: 4}
\vspace{.5 cm}
\noindent {\underline {Coupling $\theta\theta\theta^{10}$}}
\indent {Selection rule}
\[
f_1+f_2-(I+\theta)f_3 \in \Lambda
\]
\indent {Denoting}
\[
\left.
\begin {array}{rcl}
f_1 &=& g_1^{(i_1)}\otimes \hat g_1^{(0)}\\
f_2 &=& g_1^{(i_2)}\otimes \hat g_1^{(0)}\\
f_3 &=& [\alpha(e_1)+\beta(e_2)] \otimes \hat g_2^{(0)}
\end {array}
\right\}\;\;\;
\begin {array} {l}
i_1,i_2=0,1,2,3\;,\\
\alpha,\beta \in R
\end {array}
\]
\indent{The selection rule reads}
\[
\begin {array}{l}
i_1=i_2 \;,
\end {array}
\]
\indent{Number of allowed couplings: 4}
\indent{Expression of the coupling}
\begin {eqnarray*}
C_{\theta\theta\theta^{10}} &=&
N \; \sum_{v \in (f_3-f_2+\Lambda)_{\perp}} \exp [-\frac
{1}{4\pi} \sin(\frac{\pi}{6}) |v|^2] \\
&=&
N \;\; \sum_{\vec{u} \in Z^4} \exp [ -\frac
{1}{4\pi} \sin(\frac{\pi}{6})\; (\vec{\bar{f_{23}}}+\vec{u})^{\top}M
(\vec{\bar{f_{23}}}+\vec{u})] \\
&=&
N\; \vartheta
\left[
\begin{array}{c}
\vec{\bar{f_{23}}} \\
0
\end {array}
\right]
[0, \Omega]
\end {eqnarray*}
\begin{quotation}
\noindent{where $\bar f_{23}$ is the restriction of $f_{23}$ to the $F_4$
lattice,
$(f_3-f_2+\Lambda)_{\perp}$ indicates that the coset
elements must belong to $F_4$ and $V_{\perp}$ is the volume
of the $F_4$ unit cell. In all cases $\bar{f_{23}}=0$. Finally}
\end{quotation}
\[
\Omega = i\frac{1}{4\pi^2} \sin(\frac{\pi}{6}) M \;,\;\;\;
N= \sqrt{V_{\perp}}\; \frac{1}{2 \pi} \;
\left[ \frac{\Gamma(\frac{11}{12}) \Gamma(\frac{7}{12})}
{\Gamma(\frac{1}{12}) \Gamma(\frac{5}{12})} \right]
\]
\[
\begin{array}{l}
\Omega =i \frac{1}{4\pi^2} \sin(\frac{\pi}{6})\; R_3^2
\left(
\begin {array}{cccc}
1 & \alpha_{34} & -\frac{1}{4}[1+2\alpha_{34}] & -\frac{1}{4}[1+2\alpha_{34}]
\\
\alpha_{34} & 1 & -\frac{1}{2} & -\frac{1}{4}[1+2\alpha_{34}] \\
-\frac{1}{4}[1+2\alpha_{34}] & -\frac{1}{2} & \frac {1}{2} & \frac
{\alpha_{34}}{2} \\
-\frac{1}{4}[1+2\alpha_{34}] & -\frac{1}{4}[1+2\alpha_{34}] & \frac
{\alpha_{34}}{2} & \frac{1}{2}
\end {array}
\right)
\end{array}
\]
\indent{Number of effective parameters: 2}
\indent{Number of different couplings without deformations: 1}
\indent{Number of different couplings with deformations: 1}
\indent{Note that this coupling is the same as $\theta^2\theta^5\theta^5$}
\vspace{.5 cm}
\noindent {\underline {Coupling $\theta\theta^3\theta^8$}}
\indent {Selection rule}
\[
f_1+(I+\theta+\theta^2)f_2-(I+\theta+\theta^2+\theta^3)f_3 \in \Lambda
\]
\indent {Denoting}
\[
\left.
\begin {array}{rcl}
f_1 &=& g_1^{(i_1)}\otimes \hat g_1^{(0)}\\
f_2 &=& g_3^{(i_2)}\otimes \hat g_3^{(j_2)}\\
f_3 &=& [\alpha(e_1)+\beta(e_2)] \otimes \hat g_4^{(j_3)}
\end {array}
\right\}\;\;\;
\begin {array} {l}
i_1,i_2,j_2=0,1,2,3\;,\\
j_3=0,1,...,8\;,\\
\alpha,\beta \in R
\end {array}
\]
\indent{the selection rule reads}
\[
\begin {array}{l}
i_1=i_2
\end {array}
\]
\indent{Number of allowed couplings: 6}
\indent{Expression of the coupling}
\begin {eqnarray*}
C_{\theta\theta^3\theta^8} &=&
N \; \sqrt{l_2 l_3} \; \sum_{v \in (f_3-f_2+\Lambda)_{\perp}} \exp [-\frac
{1}{4\pi} \frac{\sin(\frac{\pi}{3})\sin(\frac{\pi}{4})}{\sin(\frac{\pi}{12})}
|v|^2] \\
&=& N \; \sqrt{l_2 l_3}\; \sum_{\vec{u} \in Z^4} \exp [ -\frac
{1}{4\pi} \frac{\sin(\frac{\pi}{3})\sin(\frac{\pi}{4})}{\sin(\frac{\pi}{12})}
\;
(\vec{\bar{f_{23}}}+\vec{u})^{\top} M (\vec{\bar{f_{23}}}+\vec{u})] \\
&=&
N\;\sqrt{l_2 l_3}\; \vartheta
\left[
\begin{array}{c}
\vec{\bar{f_{23}}} \\
0
\end {array}
\right]
[0, \Omega]
\end {eqnarray*}
\begin{quotation}
\noindent{with the same notation as in the previous coupling,
$l_i$ is the number of elements of the
$f_i$ conjugation class and $V_{\perp}$ is the volume of the $F_4$ unit cell.
Finally}
\end{quotation}
\[
\Omega = i\frac{1}{4\pi^2}
\frac{\sin(\frac{\pi}{3})\sin(\frac{\pi}{4})}{\sin(\frac{\pi}{12})}\;M
\;,\;\;\;
N= \sqrt{V_{\perp}}\; \frac{1}{2 \pi} \; \frac
{\Gamma(\frac{3}{4})}{\Gamma(\frac{1}{4})}
\left[ \frac{\Gamma(\frac{11}{12}) \Gamma(\frac{7}{12})}
{\Gamma(\frac{1}{12}) \Gamma(\frac{5}{12})} \right]^{1/2}
\]
\[
\begin{array}{l}
\Omega= i\frac{1}{4\pi^2} \frac{\sin(\frac{\pi}{3})\sin(\frac{\pi}{4})}
{\sin(\frac{\pi}{12})}\; R_3^2 \left(
\begin {array}{cccc}
1 & \alpha_{34} & -\frac{1}{4}[1+2\alpha_{34}] & -\frac{1}{4}[1+2\alpha_{34}]
\\
\alpha_{34} & 1 & -\frac{1}{2} & -\frac{1}{4}[1+2\alpha_{34}] \\
-\frac{1}{4}[1+2\alpha_{34}] & -\frac{1}{2} & \frac {1}{2} & \frac
{\alpha_{34}}{2} \\
-\frac{1}{4}[1+2\alpha_{34}] & -\frac{1}{4}[1+2\alpha_{34}] & \frac
{\alpha_{34}}{2} & \frac{1}{2}
\end {array}
\right)
\end{array}
\]
\indent{Number of effective parameters: 2}
\indent{Number of different couplings without deformations: 4}
\indent{Number of different couplings with deformations: 4}
\indent{corresponding to the following $\vec{\bar{ f_{23}}} $ shifts}
\[
\vec{\bar{f_{23}}}= \; \left[
\begin{array}{ll}
l_2=1\;l_3=1 & (0,0,0,0), \\
l_2=3\;l_3=1 & (\frac{1}{2},0,0,0), \\
l_2=1\;l_3=4 & (\frac{2}{3},\frac{2}{3},0,\frac{2}{3}), \\
l_2=3\;l_3=4 & (\frac{1}{6},\frac{1}{6},0,\frac{2}{3})
\end{array}
\right.
\]
\vspace{.5 cm}
\noindent {\underline {Coupling $\theta^3\theta^3\theta^6$}}
\indent {Selection rule}
\[
f_1+f_2-(I+\theta^3)f_3 \in \Lambda
\]
\indent {Denoting}
\[
\left.
\begin {array}{rcl}
f_1 &=& g_3^{(i_1)}\otimes \hat g_3^{(j_1)}\\
f_2 &=& g_3^{(i_2)}\otimes \hat g_3^{(j_2)}\\
f_3 &=& [\alpha(e_1)+\beta(e_2)]\otimes \hat g_6^{(j_3)}
\end {array}
\right\}\;\;\;
\begin {array} {l}
i_1,i_2,j_1,j_2=0,1,2,3\;,\\
j_3=0,1,...,15\;,\\
\alpha, \beta \in R
\end {array}
\]
\indent{the selection rule reads}
\[
\left.
\begin {array}{l}
i_1=i_2 \\
j_1+(-1)^{(j_3+1)}j_2=j_3
\end {array}
\right\} \;\;\; mod.\; 4
\]
\indent{Number of allowed couplings: 56}
\indent{Expression of the coupling}
\indent {In all the cases, except for the case $l_1=l_2=l_3=3$}
\begin {eqnarray*}
C_{\theta^3\theta^3\theta^6} &=&
N \;F(l_1,l_2,l_3) \; \sum_{v \in (f_3- f_2+\Lambda)_{\perp}} \exp [-\frac
{1}{4\pi} \; |v|^2] \\
&=& N \; F(l_1,l_2,l_3) \; \sum_{\vec{u} \in Z^4} \exp [-\frac
{1}{4\pi} \; (\vec{\bar{f_{23}}}+\vec{u})^{\top} M
(\vec{\bar{f_{23}}}+\vec{u})] \\
&=&
N\;F(l_1,l_2,l_3) \; \vartheta
\left[
\begin{array}{c}
\vec{\bar{f_{23}}} \\
0
\end {array}
\right]
[0, \Omega]
\end {eqnarray*}
\begin{quotation}
\noindent{with the same notation as in the previous coupling.
$l_i$ is the number of elements of the
$f_i$ conjugation class. $F(l_1,l_2,l_3)$ is given by}
\end{quotation}
\[
\begin{array}{rlrl}
l_1=l_2=l_3=1 \;{\rm and}\;l_1=l_2=3 \; l_3=1 \;:& F=1 &
l_1=l_2=1 \; l_3=3 \;:& F=\sqrt{3} \\
l_1=1(3)\;l_2=3(1)\;l_3=6 \;:& F=\sqrt{2} &
l_1=l_2=3 \; l_3=6 \;:& F=2\sqrt{2}
\end{array}
\]
\indent{In the case $l_1=l_2=l_3=3$}
\begin {eqnarray*}
C_{\theta^3\theta^3\theta^6} &=&
N \;\frac{1}{\sqrt{3}}\; \sum_{v \in \cup_{p=0}^{2} (\theta^p f_3-
f_2+\Lambda)_{\perp}} \exp [-\frac
{1}{4\pi} \; |v|^2] \\
&=&
N\; \frac{1}{\sqrt{3}} \;\{ \vartheta
\left[
\begin{array}{c}
\vec{\bar{f_{23}}} \\
0
\end {array}
\right]
[0, \Omega] +
\vartheta
\left[
\begin{array}{c}
\vec{\bar{ f'_{23}}} \\
0
\end {array}
\right]
[0, \Omega] +
\vartheta
\left[
\begin{array}{c}
\vec{\bar{ f''_{23}}} \\
0
\end {array}
\right]
[0, \Omega] \}
\end {eqnarray*}
\indent{where $f'_{23}=\theta f_2 -f_3$ and $f''_{23}= \theta^2 f_2 -f_3$}
\[
\Omega = i \frac{1}{4 \pi^2} M
\;\;\;\;
N= \sqrt{V_{\perp}}\; \frac{1}{2 \pi} \; \left[ \frac {\Gamma(\frac{3}{4})}
{\Gamma(\frac{1}{4})} \right]^2
\]
\indent{$V_{\perp}$ is the volume of the $F_4$ unit cell}
\[
\begin{array}{l}
\Omega = i \frac{1}{4 \pi^2} \; R_3^2
\left(
\begin {array}{cccc}
1 & \alpha_{34} & -\frac{1}{4}[1+2\alpha_{34}] & -\frac{1}{4}[1+2\alpha_{34}]
\\
\alpha_{34} & 1 & -\frac{1}{2} & -\frac{1}{4}[1+2\alpha_{34}] \\
-\frac{1}{4}[1+2\alpha_{34}] & -\frac{1}{2} & \frac {1}{2} & \frac
{\alpha_{34}}{2} \\
-\frac{1}{4}[1+2\alpha_{34}] & -\frac{1}{4}[1+2\alpha_{34}] & \frac
{\alpha_{34}}{2} & \frac{1}{2}
\end {array}
\right)
\end{array}
\]
\indent{Number of effective parameters: 2}
\indent{Number of different couplings without deformations: 6}
\indent{Number of different couplings with deformations: 6}
\indent{corresponding to the following $\vec{\bar{ f_{23}}} $ shifts}
\[
\vec{\bar{f_{23}}}=\; \left[
\begin{array}{ll}
l_1=l_2=l_3=1 & (0,0,0,0), \\
l_1=l_2=3\; l_3=1 & (\frac{1}{2},0,0,0), \\
l_1=l_2=1\; l_3=3 & (\frac{1}{2},0,0,0), \\
l_1=l_2=l_3=3 & (0,0,0,0) \cup (\frac{1}{2},0,0,0) \cup (0,\frac{1}{2},0,0), \\
l_1=1(3)\; l_2=3(1)\; l_3=6 & (0,\frac{1}{2},\frac{1}{2},0), \\
l_1=l_2=3\; l_3=6 & (\frac{1}{2},0,\frac{1}{2},0)
\end{array}
\right.
\]
\newpage
\vspace{0.3cm}
\section{Introduction}
\label{sec1}
\setcounter{equation}{0}
The hydrodynamic description of fluids is based on the notion of local equilibrium: in a cell, containing many atoms but still very small on the macroscopic scale, the fluid is in thermal equilibrium. The local equilibrium parameters change slowly in space-time and are governed by an autonomous system of evolution equations. This gives a very powerful method to study non-equilibrium systems and large-scale response functions. To carry out such a program one first has to identify the local conservation laws. For a fluid in physical space there are five: mass, momentum, and energy. To lowest order in the spatial gradients one obtains time-reversible evolution equations (Euler equations) and to second order dissipative corrections (Navier-Stokes equations) \cite{Resibois DeLeener,S91}. Recently \cite{cdy,BCDF16} there has been a lot of progress in generalizing the hydrodynamic picture to integrable systems in one space dimension, for which the number of conserved fields is extensive. A priori, it is not so clear whether the standard heuristic survives such a drastic extension. But the recent studies are encouraging. Furthermore, one has the worked out example of a hard rod fluid \cite{S82,dobrods,BS97}, for which the number of particles at given velocity is conserved. For the hard rod fluid the first order Euler-type equations are known and also their dissipative corrections. In particular, it is proved that local equilibrium is maintained throughout space-time on the macroscopic scale.
For quantum integrable models, even to write down generalized hydrodynamics might be difficult, let alone to solve it. One has to know not only all conserved fields, which usually come together with integrability, but also their associated currents. From the conserved fields one constructs the generalized Gibbs ensemble (GGE) \cite{Eisrev,EFreview,VRreview}, which contains an infinite number of ``chemical potentials''. For the Euler-type equation the average fields and currents are required in GGEs. In principle these are available for every Bethe-ansatz integrable model or integrable quantum field theory \cite{cdy,BCDF16}, the only information necessary being the spectrum of Bethe (or asymptotic) particles, their energies, and their scattering phases. These building blocks have been explicitly studied in particular for the XXZ spin chain \cite{BCDF16}, and for the sinh-Gordon model and its non-relativistic limit the Lieb-Liniger $\delta$-Bose gas \cite{cdy}, the latter being of interest in our note. For these models the Euler-type conservation equations have been derived, including force terms produced by external fields such as those from confining potentials \cite{dynote}. Their dissipative corrections are not yet known, see the numerical study \cite{zlp17}.
The Euler-type equations can be numerically solved and compared to results on quantum evolutions in a variety of ways. There are integral equations for the general initial value problem (without external force) \cite{dsy}, which efficiently produce exact solutions by iteration. Using different methods, the collision of two clouds of particles in the Lieb-Liniger model has been simulated, finding agreement with DMRG numerics \cite{bvkm2}. In the limit of zero temperature, the equations reduce to a finite family of hydrodynamic conservation laws \cite{ddky}. Thereby the evolution of density waves in the Lieb-Liniger model, with or without confining potentials, was analyzed, observing agreement with the exact quantum evolution based on the Bethe ansatz. An efficient molecular dynamics scheme has been proposed \cite{dyc}, which also accounts for external forces in the Lieb-Liniger model. For the problem of domain wall initial states \cite{DW1,DW2,DW3}, in which initially the GGE chemical potentials are constant except for a possible jump at the origin, exact analytic solutions have been obtained using generalized hydrodynamics \cite{cdy,BCDF16,ds17}.
In this note, we concentrate on stationary, homogeneous states, hence GGEs, and we explain how to compute the exact Drude weight and related quantities for the Lieb-Liniger model in the repulsive regime in arbitrary GGEs. Since the formalism is general, the method also applies, given the particle spectrum and scattering, to other integrable models, and, conjecturally and with appropriate modifications, to classical soliton-like gases \cite{dyc} and integrable classical field theory (perhaps using the results of \cite{dLM16}). The derivation makes use of generalized hydrodynamics by combining it with hydrodynamic projection methods \cite{Forster,Zwanzig}. As a preliminary step we remind the reader in Section \ref{sec2} how, for a finite number of conservation laws, the Drude weight is computed using hydrodynamic projections. We emphasize that it is important to regard all conserved quantities on equal footing, and thus the Drude weight as a matrix. By looking at a particular matrix element, one might miss the global structure. To prepare for the general case, we then consider in Section \ref{sec3} the Drude weight for a hard rod fluid \cite{S82}, see also \cite{dobrods,BS97}. In this case the Drude weight is an infinite-dimensional matrix, or bilinear functional. All is well-known material, but arranged in such a way as to emphasize the analogy with the Lieb-Liniger model.
Our main results, concerning the Lieb-Liniger model and more general integrable models, are reported in Section \ref{sec4}, but we list already here the main identities:
\begin{eqnarray}\label{intro1}
\int \mathrm{d} x\,\langle \mathfrak q_i(x,0) \mathfrak q_j(0,0)\rangle^\mathrm{c} &=& \int \mathrm{d}\theta\, \rho_\mathrm{p}(\theta)(1-\sigma n(\theta))h_i^\mathrm{dr}(\theta) h_j^\mathrm{dr}(\theta),\\ \label{intro2}
\int \mathrm{d} x\,\langle \mathfrak j_i(x,0) \mathfrak q_j(0,0)\rangle^\mathrm{c} &=& \int \mathrm{d}\theta\, \rho_\mathrm{p}(\theta)(1-\sigma n(\theta))v^\mathrm{eff}(\theta)h_i^\mathrm{dr}(\theta) h_j^\mathrm{dr}(\theta),\\ \label{intro3}
\lim_{t\to\infty} \int \mathrm{d} x\,\langle \mathfrak j_i(x,t) \mathfrak j_j(0,0)\rangle^\mathrm{c} &=& \int \mathrm{d}\theta\, \rho_\mathrm{p}(\theta)(1-\sigma n(\theta))v^\mathrm{eff}(\theta)^2h_i^\mathrm{dr}(\theta) h_j^\mathrm{dr}(\theta),\\\label{intro4}
\int \mathrm{d} t\,\langle \mathfrak j_i(0,t)\mathfrak j_j(0,0)\rangle^\mathrm{c} &=& \int \mathrm{d}\theta\, \rho_\mathrm{p}(\theta)(1-\sigma n(\theta))|v^\mathrm{eff}(\theta)|h_i^\mathrm{dr}(\theta) h_j^\mathrm{dr}(\theta),
\end{eqnarray}
where
\begin{equation}\nonumber
\sigma = 1,\,-1,\,0 \quad\mbox{for fermionic, bosonic, and classical gases respectively}.
\end{equation}
The quantity $\rho_{\rm p}$ is the density of particles per unit distance and per unit spectral parameter $\theta$, $n(\theta)$ is the usual occupation function of the (generalized) thermodynamic Bethe ansatz \cite{tba,moscaux} (or the classical free density \cite{ds17,dyc}), and $v^{\rm eff}(\theta)$ is the effective velocity \cite{bel,cdy,BCDF16}. The superscript $^{\rm dr}$ represents the dressing operation of the thermodynamic Bethe ansatz (see \cite{tba,cdy}). On the left-hand side are GGE connected correlation functions.
In quantum systems, one takes $\mathfrak q_i(x,t) = \h Q_i(x,t)$ and $\mathfrak j_i(x,t) = \h J_i(x,t)$, respectively the $i$-th conserved charge density and its current. In this case, $h_i(\theta)$ is the one-particle eigenvalue, at spectral parameter $\theta$, of the associated conserved charge $\int \mathrm{d} x\, \h Q_i(x,0)$. For the one-dimensional classical fluid of hard rods, one identifies $\theta$ with the particle velocity $v$, and takes $\mathfrak{q}_i(x,t) = \sum_{\ell} h_i(v_\ell)\delta(x-r_\ell)$ and $\mathfrak{j}_i(x,t)=\sum_{\ell} h_i(v_\ell)\dot r_\ell \delta(x-r_\ell)$, respectively the conserved density and its associated current for a weight $h_i(v)$, where $r_\ell$ is the position and $v_\ell$ the velocity of the $\ell$-th particle in the fluid. Conjecturally, this would also hold for classical soliton-like gases.
For Bethe integrable models formula \eqref{intro1} is an immediate consequence of the (generalized) thermodynamic Bethe ansatz formalism, which provides the exact free energy. For the Lieb-Liniger model \eqref{intro1} restricted to the density has also been derived using form factors \cite{dNP16}. Formula \eqref{intro2} can be viewed as a consequence of the exact current ``potential'' obtained in \cite{cdy}. The identity \eqref{intro3} is for the conventional Drude weight. Expression \eqref{intro4} gives the scaled covariance matrix of charge transfer, which we will refer to as ``Drude self-weight" (this is called zero-frequency noise in mesoscopic physics \cite{noisereview}). Such scaled cumulants form an important part of the large-deviation theory for non-equilibrium transport \cite{LD,Es}. We also obtain expressions for dynamical charge-charge, charge-current and current-current correlation functions at small wavelengths and large times. In the particular case of the Lieb-Liniger density-density correlation, our expression agrees with the result obtained from the form factor analysis \cite{dNP16}.
The first expressions for particular components of the Drude weight at nonzero temperature in interacting integrable models were obtained in the context of spin chains. Expressions were found, by various methods, for the charge-charge Drude weight in the Hubbard model \cite{FK98}, the spin-spin Drude weight in the XXZ chain \cite{Z99}, and the energy-energy Drude weight in the XXZ chain \cite{KS02,SK03}. In fact, the exact expression for the XXZ spin-spin Drude weight has been the subject of some debate. The situation was recently settled in a series of works \cite{P1,P2,IP13,IdN17,BVKM17}. In \cite{IdN17,BVKM17}, the spin-spin Drude weight was exactly evaluated by combining the hydrodynamic techniques of \cite{cdy,BCDF16} with a formula expressing it as a linear response of the non-equilibrium current to a change of driving potential \cite{IP13,VKM15,K17}. Our method and expression are however new. Formula \eqref{intro3} confirms and generalizes the early results \cite{FK98,Z99}. As a consistency check, we show in Section \ref{sec5} that it is reproduced in complete generality by the linear response calculation, thus further confirming that the numerical analysis of \cite{IP13,VKM15,K17} agrees with these early results.
We also show in Section \ref{sec5} that our exact result for the Drude self-weight is reproduced by standard fluctuation relations \cite{Es,bd13}.\newpage
\section{Models with a finite number of conservation laws}
\label{sec2}
\setcounter{equation}{0}
Before embarking on the Lieb-Liniger model, we briefly discuss the generic structure for models with a finite number, say $m$, of locally conserved fields.
It is assumed that $m$ is already their maximal number. Thus for Galilean fluids in three dimensions $m=5$, while for generic anharmonic chains and one-dimensional fluids $m=3$.
For quantum spin chains generically only the energy is conserved, hence $m=1$.
In our context one spatial dimension is in focus, and thus we only mention \cite[App. A]{S15} and \cite{MS14}.
Microscopically we consider a one-dimensional system with $m$ locally conserved densities, $\mathfrak{q}_j(x,t), j = 1,...,m$ on space-time $(x,t) \in \mathbb{R}^2$,
and their associated currents, $\mathfrak{j}_j(x,t), j = 1,...,m$, satisfying
\begin{equation}\label{2.1}
\partial_t \mathfrak{q}_j(x,t) + \partial_x \mathfrak{j}_j(x,t) = 0.
\end{equation}
Classically $\mathfrak{q}_j$ and $\mathfrak{j}_j$ may be seen as functions on phase space. The fields may also be seen as generated by a multi-species stochastic particle system. Quantum mechanically
$\mathfrak{q}_j$ and $\mathfrak{j}_j$ would be operator fields indexed by $(x,t)$, with certain locality properties (see the brief discussion in the context of the Lieb-Liniger model). Their precise definition in terms of the underlying dynamics is not important at the present stage. Since $m$ is the maximal number of conservation laws, the microscopic system has an $m$-dimensional family of steady states, with distribution of the form $e^{-\sum_i \beta_i\int \mathrm{d} x\,{\frak q}_i(x)}$. These states may be labelled by the Lagrange parameters $\beta_i$, or by the mean value of the conserved quantities.
The time-stationary states are assumed to be invariant under spatial translations and the system is initialized in one of the time-stationary
states. Hence the underlying dynamics
is a space-time stationary random process, or a space-time invariant quantum field theory or quantum chain. We label the steady states by $\vec{u} \in \mathbb{R}^m$, and averages are denoted by $\langle \cdot \rangle_{\vec{u}}$. Since the steady states are completely specified by the averages of conserved densities $\mathfrak{q}_j(x,t)$, we set by definition
\begin{equation}\label{2.2}
\langle \vec{\mathfrak{q}}(x,t) \rangle_{\vec{u}} = \vec{u},
\end{equation}
independent of $x,t$. For connected averages we use the notation $\langle ab \rangle_{\vec{u}}^\mathrm{c} =
\langle ab\rangle_{\vec{u}}- \langle a \rangle_{\vec{u}}\langle b \rangle_{\vec{u}}$.
The average currents are denoted by
\begin{equation}\label{2.3}
\langle \vec{\mathfrak{j}}(x,t) \rangle_{\vec{u}} = \vec{\mathsf{j}}(\vec{u}).
\end{equation}
Any initial state which locally looks like one of the stationary states keeps this property under time evolution. In such situations, the state in space-time can be seen as locally stationary and homogeneous, and therefore completely characterized by a space-time function $\vec{u}(x,t)$. This is the usual hydrodynamic approximation. In this approximation, the parameters characterizing the local state are governed by the system of macroscopic conservation laws,
\begin{equation}\label{2.4}
\partial_t \vec{u}(x,t) + \partial _x \vec{\mathsf{j}}(\vec{u}(x,t)) =0.
\end{equation}
In terms of the microscopic system, \eqref{2.4} is approximately valid on suitably large scales.
Let us go back to homogeneous and stationary states. From a statistical physics perspective, of particular interest is the correlator of the conserved fields in the stationary set-up,
\begin{equation}\label{2.5}
S_{ij}(x,t) = \langle \mathfrak{q}_i(x,t)\mathfrak{q}_j(0,0) \rangle_{\vec{u}}^\mathrm{c},
\end{equation}
with the fixed parameter $\vec{u}$ characterizing the statistically space-time homogeneous state. One should think of $S(x,t)$ as an $m\times m$ matrix.
At such a level of generality nothing can be said about the correlator. But on the hydrodynamic scale, which corresponds to large $x,t$, $S$ is linked to solutions of
\eqref{2.4} linearized as $\vec{u}+\epsilon \vec{\phi}$ with constant $\vec{u}$.
First we write the linearized equation, obtained to first order in the small parameter $\epsilon$, as
\begin{equation}\label{2.6}
\partial_t \vec{\phi}(x,t) + A \partial_x \vec{\phi}(x,t) = 0,
\end{equation}
where
\begin{equation}\label{2.7}
A_{ij}(\vec{u}) = \partial_{u_j} \mathsf{j}_i(\vec{u}).
\end{equation}
The matrix $A$ depends on $\vec{u}$ and acts only in component space.
As further input we need the static covariance matrix
\begin{equation}\label{2.8}
C_{ij} = \int \mathrm{d}x\, S_{ij}(x,t) = \int \mathrm{d}x\, S_{ij}(x,0)
\end{equation}
and the field-current correlator
\begin{equation}\label{2.9}
B_{ij}(\vec{u}) = \int \mathrm{d}x\, \langle \mathfrak{j}_i(x,0)\mathfrak{q}_j(0,0) \rangle_{\vec{u}}^\mathrm{c} .
\end{equation}
Note that, as $m\times m$-matrices,
\begin{equation}\label{2.10}
B = AC.
\end{equation}
This can be derived by the chain rule. Indeed, let $\beta_i$ be the conjugate potential to the conserved quantity $\int \mathrm{d} x\,\mathfrak{q}_i(x)$ in the homogeneous stationary state. This means that
\begin{equation}\label{dbeta}
\frc{\partial}{\partial\beta_i} \langle {\frak a}(0,0)\rangle_{\vec u} = \int \mathrm{d} x\,
\langle \frak{q}_i(x,0){\frak a}(0,0)\rangle_{\vec u}^{\rm c}
\end{equation}
for any local field ${\frak a}(x,t)$, such as $\frak{q}_j(x,t)$ or $\frak{j}_j(x,t)$.
Hence, we find for instance $\partial_{\beta_i}\langle \frak{q}_j\rangle_{\vec u} = C_{ij}$. Therefore in compressed notation, we have $B = \partial_{\vec \beta}\langle \vec{\mathfrak{j}}\rangle_{\vec u} = \partial_{\vec \beta} \langle \vec{\frak{q}}\rangle_{\vec u} \cdot \partial_{\vec u} \langle\vec{\mathfrak{j}}\rangle_{\vec u}$.
Then, one solves \eqref{2.6} with random initial conditions
characterized by the static covariance $C$. This amounts to evaluating $\t S(x,t) = \lim_{\lambda\to\infty} \lambda S(\lambda x,\lambda t)$ by solving the evolution equation
\begin{equation}\label{dS}
\partial_t \t S(x,t) + \partial_x \big(A\t S(x,t)\big) = 0
\end{equation}
with initial condition $\t S(x,0) = \delta(x) C$, consistent with an exponential decay of $S(x,0)$, a general feature of models in one dimension at strictly positive temperatures. Therefore, in the hydrodynamic approximation, small $k$, large $t$, one has
\begin{equation}\label{2.11}
\int \mathrm{d}x\,\mathrm{e}^{\mathrm{i}kx}S(x,t) \simeq \mathrm{e}^{\mathrm{i}ktA}C.
\end{equation}
Note in particular that, changing variables to $x=\lambda x'$ and $k'=\lambda k$ and taking the limit $\lambda\to\infty$ at fixed $k'$, relation \eqref{2.11} holds for all values of $k'$; the inverse Fourier transform can thus be performed, reproducing the correct initial condition.
Using only the conservation laws and space-time stationarity, in general one has the relation
\begin{equation}\label{2.12}
AC = CA^\mathrm{T},
\end{equation}
where ${}^\mathrm{T}$ denotes transpose. Of course, $C = C^\mathrm{T}$ by definition. But
\eqref{2.10} together \eqref{2.12} implies the less immediate symmetry
\begin{equation}\label{2.13}
B = B^\mathrm{T},
\end{equation}
which means in particular that the vector field $\vec{\mathsf{j}}(\vec{u})$ is the gradient of a potential.
The conventional definition of the Drude weight is
\begin{equation}\label{2.14}
D_{ij} = \lim_{t \to \infty}\frac{1}{t} \int_{0}^t \mathrm{d}t' \int \mathrm{d}x\, \langle \mathfrak{j}_j(x,t')\mathfrak{j}_i(0,0) \rangle_{\vec{u}}^\mathrm{c} = \lim_{t \to \infty} \int \mathrm{d}x\, \langle \mathfrak{j}_j(x,t)\mathfrak{j}_i(0,0) \rangle_{\vec{u}}^\mathrm{c},
\end{equation}
provided the limit exists. It is convenient to view this expression as resulting from the inner product
\begin{equation}\label{2.15}
\langle a | b \rangle = \int \mathrm{d}x\, \langle a(x) b(0) \rangle_{\vec{u}}^\mathrm{c}
\end{equation}
for general random fields, $a(x),b(x)$, which are statistically translation invariant in $x$. With respect to this scalar product, the conserved fields are in the time-invariant subspace. Assuming that the list of conserved fields is complete and the dynamics is sufficiently mixing\footnote{It is hard to establish exactly the conditions in which the dynamics would be sufficiently mixing, but the assumption is expected, on physical grounds, to be of very wide validity.}, one would expect that the time-invariant subspace is spanned by all the conserved total fields and hence the $t \to \infty$ limit is given by the projection onto this subspace (of course, with respect to the inner product \eqref{2.15}).
In the statistical theory of fluids this step is called the hydrodynamic projection.
With this reasoning, the long time limit in \eqref{2.14} is given by the projection onto the time-invariant subspace, which is given by
\begin{equation}\label{2.16}
D_{ij} = \sum_{i',j'=1}^m\langle \mathfrak{j}_i | \mathfrak{q}_{i'} \rangle (C^{-1})_{i'j'} \langle \mathfrak{q}_{j'} | \mathfrak{j}_j \rangle.
\end{equation}
Here the inverse operator $C^{-1}$ is required to have a properly normalized projection. Using \eqref{2.9}, in matrix notation the Drude weight reads
\begin{equation}\label{2.17}
D = B C^{-1}B = ACA^\mathrm{T}.
\end{equation}
The well-known lower bound of Mazur follows by replacing in \eqref{2.16} the orthogonal projection by a smaller one.
For the Lieb-Liniger model the definition \eqref{2.14} seems to be inaccessible. But \eqref{2.17} involves only static expectations,
which are a priori simpler than a long-time limit. More details will be provided in Section \ref{sec4}.
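As a quick consistency check of \eqref{2.10}, \eqref{2.12}, \eqref{2.13} and \eqref{2.17}, the matrix algebra can be exercised on synthetic data: any symmetric positive-definite $C$ together with a symmetric $B$, and $A=BC^{-1}$, realizes the constraints, and the two expressions for $D$ then coincide. A minimal numpy sketch, with no physical model behind the matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5                                   # number of conservation laws

X = rng.standard_normal((m, m))
C = X @ X.T + m * np.eye(m)             # static covariance, symmetric positive definite, eq. (2.8)
B = 0.5 * (X + X.T)                     # symmetric field-current correlator, eq. (2.13)
A = B @ np.linalg.inv(C)                # linearization matrix, chosen so that B = A C

D1 = B @ np.linalg.inv(C) @ B           # Drude weight as B C^{-1} B
D2 = A @ C @ A.T                        # Drude weight as A C A^T
```

Both forms of $D$ agree, and $AC = CA^\mathrm{T}$ holds by construction, mirroring \eqref{2.12}.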
The correlator $S(x,t)$ satisfies the second moment sum rule
\begin{equation}\label{2.18}
\lim_{t \to \infty} \frac{1}{t^2} \int \mathrm{d}x\, x^2\,\tfrac{1}{2}\big(S(x,t) + S(x,t)^\mathrm{T}\big)= D
\end{equation}
as a direct consequence of the conservation law, see the discussion in \cite{MS14} for a particular model. Thus the Drude weight can be viewed as providing a quantitative measure on how much and how fast an initial localized perturbation is spreading ballistically.
For the finer structure of the ballistic component one has to use \eqref{2.11}, however.
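On the linearized level the sum rule is transparent: the solution of \eqref{dS} gives $\int \mathrm{d}x\, x^2\, \t S(x,t) = t^2 A^2 C$, and $AC = CA^\mathrm{T}$ turns $A^2C$ into $ACA^\mathrm{T} = D$. This can again be checked numerically on a synthetic pair $(A,C)$ obeying $AC = CA^\mathrm{T}$:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
X = rng.standard_normal((m, m))
C = X @ X.T + m * np.eye(m)        # symmetric positive-definite covariance
S = 0.5 * (X + X.T)                # any symmetric matrix
A = S @ np.linalg.inv(C)           # then A C = S = C A^T holds automatically

second_moment = A @ A @ C          # (1/t^2) int dx x^2 of the linearized correlator
D = A @ C @ A.T                    # Drude weight, eq. (2.17)
```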
A related quantity of interest is the time-integrated self-current correlation, where in our context ``self'' refers to identical reference points
(say $x=0$ by translation invariance):
\begin{equation}
D^\mathrm{s}_{ij} = \int \mathrm{d} t\,\langle \mathfrak{j}_i(0,t)\mathfrak{j}_j(0,0) \rangle_{\vec{u}}^\mathrm{c}.
\end{equation}
This is the long-time limit of the covariance matrix of the charges transferred from the left, $x<0$, to the right, $x>0$, halves of the system, scaled by the inverse time, which is also referred to as zero-frequency noise in mesoscopic physics \cite{noisereview}. We call $D^\mathrm{s}$ simply the Drude self-weight. The diagonal entries $D^\mathrm{s}_{ii}$, the scaled second cumulants of charge transfer, are part of the large-deviation theory for non-equilibrium transport \cite{LD,Es}. The Drude self-weight also satisfies a sum rule,
\begin{equation}\label{sumruleN}
\lim_{t \to \infty} \frac{1}{t} \int \mathrm{d}x\, |x|\,\tfrac{1}{2}\big(S(x,t) + S(x,t)^\mathrm{T}\big)= D^\mathrm{s},
\end{equation}
see \cite{MS14} for a particular model.
\section{Drude weight of the classical hard rod fluid}
\label{sec3}
\setcounter{equation}{0}
The material of this section has been reported already elsewhere \cite{ds17}. Here the known properties are rewritten in such a way as to closely parallel
our discussion of the Lieb-Liniger model. This has two advantages: The first one is more pedagogical. The underlying physics of the hard rod fluid is much simpler than the one of the $\delta$-Bose gas and it is thus easier to see how the various theory elements arise. Secondly, conjectured identities may be
readily checked by using the hard rod fluid as a test case.
The hard rod fluid consists of segments of length $a$ on the real line. The rods move according to their velocity until they collide, at which moment they simply exchange their velocities. Since the number of particles with given velocity is conserved, we now have an example with an infinite number of conservation laws, under the assumption that the velocity distribution is not concentrated on a finite set of $\delta$-functions. The precise definition of the fields and the
equilibrium measures can be found in \cite{S91}. Here we merely follow the blueprint of Section \ref{sec2}. On the hydrodynamic scale the basic object is
the density function $f(x,t;v)$, where the velocity $v\in\mathbb{R}$ denotes the label of the conserved field. The quantity $f(x,t;v)\mathrm{d}x\mathrm{d}v$ is the number of rods
in the volume element $[x,x+\mathrm{d}x]\times [v,v+\mathrm{d}v]$, assumed to be small on the macroscopic scale, but still containing many hard rods. To a good approximation, the function $f$ satisfies the system of conservation
laws
\begin{equation}\label{3.1}
\partial_t f(v) + \partial_x\big(v_{[f]}^\mathrm{eff}(v) f(v)\big) = 0,
\end{equation}
which is the analogue of \eqref{2.4}. The subscript $[f]$ recalls that the effective velocity $v_{[f]}^\mathrm{eff}(v)$ is a nonlinear functional of $f$.
Explicitly,
\begin{equation}\label{3.2}
v_{[f]}^\mathrm{eff}(v)= v + a(1 - a \rho)^{-1} \int_\mathbb{R} \mathrm{d} w\,(v-w)f(w)
= v + \frc{a\rho (v-u)}{1 - a \rho},
\end{equation}
which can also be written as
\begin{equation}\label{3.3}
v_{[f]}^\mathrm{eff}(v)= \frc{v - a \rho u}{1-a\rho}
\end{equation}
with mean density, resp. mean velocity,
\begin{equation}\label{3.4}
\rho = \int_\mathbb{R} \mathrm{d} v \,f(v), \quad u = {\rho}^{-1}\int_\mathbb{R} \mathrm{d} v\, vf(v).
\end{equation}
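The two forms \eqref{3.2} and \eqref{3.3} of the effective velocity are easily confirmed on a discretized velocity grid; the sketch below uses an arbitrary Gaussian profile for $f(v)$ and an arbitrary rod length:

```python
import numpy as np

a = 0.3                                        # rod length (arbitrary sample value)
v = np.linspace(-6.0, 6.0, 2001)               # velocity grid
dv = v[1] - v[0]
f = 0.4 * np.exp(-0.5 * (v - 0.7)**2) / np.sqrt(2 * np.pi)   # sample f(v)

rho = np.sum(f) * dv                           # mean density, eq. (3.4)
u = np.sum(v * f) * dv / rho                   # mean velocity, eq. (3.4)

# eq. (3.2): v + a (1 - a rho)^{-1} int dw (v - w) f(w) = v + a rho (v - u)/(1 - a rho)
veff_32 = v + a * rho * (v - u) / (1 - a * rho)
# eq. (3.3): (v - a rho u)/(1 - a rho)
veff_33 = (v - a * rho * u) / (1 - a * rho)
```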
A generalized Gibbs ensemble (GGE) is specified by some density function $f(v)$ independent of $x$. Microscopically this means that the hard rods have uniform density and independent velocities with probability density function $\rho^{-1}f(v)$. Such background GGE is now regarded as prescribed. Test functions on velocity space are generically denoted by $\psi(v),\phi(v)$. We introduce the convolution operator
\begin{equation}\label{3.5}
T\psi(v) = - a\int \mathrm{d} w \,\psi(w)
\end{equation}
and the multiplication operator
\begin{equation}\label{3.6}
n \psi(v) = (1 -a\rho)^{-1} f(v)\psi(v).
\end{equation}
The dressing operation is defined by
\begin{equation}\label{3.7}
\psi^\mathrm{dr} = (1-Tn)^{-1}\psi = (1 + (1-a\rho)Tn)\psi.
\end{equation}
As we will see in Section \ref{sec4}, for the $\delta$-Bose gas the dressing operator is still of the form $(1-Tn)^{-1}$, with $T$ defined through the convolution with some function $\varphi$,
$T\psi(v) = (1/2\pi)\,\varphi*\psi(v)$. Thus Eq. \eqref{3.5} should be read as convolution with the constant function $\varphi(v) = -a$. Note that in the present case, the operator $-[(1-a\rho)/(a\rho)]\,Tn$ is the projector to the constant function, and the second identity in \eqref{3.7} holds only because of this projection property.
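The collapse of the Neumann series behind the second equality in \eqref{3.7} rests on $(Tn)^2 = -[a\rho/(1-a\rho)]\,Tn$, which follows from the projection property just mentioned. A discretized check (sample $f$ chosen arbitrarily):

```python
import numpy as np

a = 0.3
v = np.linspace(-6.0, 6.0, 801)
dv = v[1] - v[0]
f = 0.4 * np.exp(-0.5 * (v - 0.7)**2) / np.sqrt(2 * np.pi)
rho = np.sum(f) * dv

N = len(v)
I = np.eye(N)
T = -a * dv * np.ones((N, N))            # integral kernel of eq. (3.5)
n = np.diag(f / (1 - a * rho))           # multiplication operator, eq. (3.6)

lhs = np.linalg.inv(I - T @ n)           # (1 - T n)^{-1}
rhs = I + (1 - a * rho) * (T @ n)        # second expression in eq. (3.7)
```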
As discussed in \cite{ds17} linearizing \eqref{3.2} as $f +\epsilon\psi$ yields the linearized operator
\begin{equation}\label{3.8}
A = (1-nT)^{-1}v^\mathrm{eff}(1-nT).
\end{equation}
Here $v^\mathrm{eff}(v)$ is viewed as a multiplication operator, where for notational simplicity we dropped the subscript $[f]$. For the static covariance one obtains
\begin{equation}\label{3.9}
C = (1-nT)^{-1}f(1-Tn)^{-1},
\end{equation}
for the current-field covariance
\begin{equation}\label{3.10}
B = (1-nT)^{-1}fv^\mathrm{eff}(1-Tn)^{-1},
\end{equation}
and for the Drude weight
\begin{equation}\label{3.11}
D = (1-nT)^{-1}f(v^\mathrm{eff})^2(1-Tn)^{-1},
\end{equation}
where $f(v)$ and $v^\mathrm{eff}(v)$ act as multiplication operators. By straightforward multiplication one notes that the relations \eqref{2.10}, \eqref{2.12}, \eqref{2.13}, and \eqref{2.17} are satisfied.
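These identities can also be confirmed numerically: on a velocity grid, $f$, $n$ and $v^\mathrm{eff}$ become diagonal matrices and $T$ the constant kernel, and the discretized versions of \eqref{3.8}--\eqref{3.11} satisfy $B=AC=CA^\mathrm{T}$, $B=B^\mathrm{T}$ and $D=ACA^\mathrm{T}$, as they must. A sketch with an arbitrary sample $f$:

```python
import numpy as np

a = 0.3
v = np.linspace(-6.0, 6.0, 601)
dv = v[1] - v[0]
fvals = 0.4 * np.exp(-0.5 * (v - 0.7)**2) / np.sqrt(2 * np.pi)
rho = np.sum(fvals) * dv
u = np.sum(v * fvals) * dv / rho

N = len(v)
I = np.eye(N)
T = -a * dv * np.ones((N, N))                      # eq. (3.5)
n = np.diag(fvals / (1 - a * rho))                 # eq. (3.6)
f = np.diag(fvals)
veff = np.diag((v - a * rho * u) / (1 - a * rho))  # eq. (3.3)

inv_nT = np.linalg.inv(I - n @ T)
inv_Tn = np.linalg.inv(I - T @ n)

A = inv_nT @ veff @ (I - n @ T)                    # eq. (3.8)
C = inv_nT @ f @ inv_Tn                            # eq. (3.9)
B = inv_nT @ f @ veff @ inv_Tn                     # eq. (3.10)
D = inv_nT @ f @ veff @ veff @ inv_Tn              # eq. (3.11)
```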
Sometimes it is convenient to rewrite these relations as quadratic forms. For example
\begin{equation}\label{3.12}
\langle \phi, C\psi\rangle = \int \mathrm{d}v \,\phi(v) f(v) \psi(v) + a (a\rho - 2) \int \mathrm{d}v\, f(v) \phi(v) \int \mathrm{d}w\, f(w) \psi(w) .
\end{equation}
Microscopically one would consider the stationary random field $a_\psi(x) = \sum_\ell\psi(v_\ell)\delta(x - r_\ell)$,
where $r_\ell$ is the position and $v_\ell$ the velocity of the $\ell$-th hard rod. Then, as in \eqref{2.15}, $C$ is the covariance
\begin{equation}\label{3.13}
\langle \phi, C\psi\rangle = \langle a_\phi | a_\psi \rangle = \int \mathrm{d}x \,\langle a_\phi(x) a_\psi(0) \rangle_f^\mathrm{c},
\end{equation}
the average being taken in the GGE defined by $f(v)$. The first term on the right of \eqref{3.12} corresponds to the ideal gas contribution, while the second term results from the hard core repulsive potential.
\section{The repulsive $\delta$-Bose gas}
\label{sec4}
\setcounter{equation}{0}
The hydrodynamic theory outlined in Section \ref{sec2} is extended to the repulsive Lieb-Liniger $\delta$-Bose gas \cite{LL66}, which has an infinite number of conserved
charges. We however keep the notation general, since with minor adaptations the main results presented are in fact valid for other integrable models of fermionic type, including the XXZ quantum spin chain and integrable relativistic quantum field theory. The corresponding results for bosonic type integrable models are also stated, see Section \ref{sec6}.
In second quantization the Lieb-Liniger hamiltonian is given by
\begin{equation}\label{4.1}
H = \int \mathrm{d}x\, \big(\tfrac{1}{2} \partial_x \hat{\psi}(x)^*\partial_x \hat{\psi}(x) +c \hat{\psi}(x)^*\hat{\psi}(x)^*\hat{\psi}(x)\hat{\psi}(x)\big)
\end{equation}
with Bose field $\hat{\psi}(x)$, $x \in \mathbb{R}$, repulsive coupling constant $c >0$, and mass of the Bose particles $m = 1$.
$H$ has an infinite number of conserved charges, labeled as $\hat{Q}_j$, $j = 0,1,...$\,. $\hat{Q}_0$ is the particle number, $\hat{Q}_1$
the total momentum, $\hat{Q}_2 = H$ the total energy, etc. The conserved charge $\hat{Q}_j$ has the density $\hat{Q}_j(x)$,
\begin{equation}\label{4.2}
\hat{Q}_j = \int \mathrm{d}x \,\hat{Q}_j(x).
\end{equation}
From the conserved charges one constructs the generalized Gibbs state through
\begin{equation}\label{4.3}
\rho_\mathrm{GG} = Z^{-1} \exp \Big[ - \sum_{j\geq0}\beta_j \hat{Q}_j\Big]
\end{equation}
with $\{\beta_j , j \geq 0\}$ the generalized inverse temperatures, equivalently chemical potentials. In the hydrodynamic approach
the Bose gas is initialized in a local equilibrium state of the form
\begin{equation}\label{4.4}
\rho_\mathrm{LE} = Z^{-1} \exp \Big[ - \sum_{j\geq0}\int \mathrm{d}x\, \beta_j (x)\hat{Q}_j(x)\Big]
\end{equation}
assuming that the chemical potentials are slowly varying on the scale of the typical interparticle and scattering distances. Generalized hydrodynamics asserts that, to a good approximation, such a structure is propagated in time according to
\begin{equation}\label{4.5}
\rho_\mathrm{LE}(t) = \mathrm{e}^{-\mathrm{i}Ht} \rho_\mathrm{LE} \mathrm{e}^{\mathrm{i}Ht} \simeq Z^{-1} \exp \Big[ - \sum_{j\geq0}\int \mathrm{d}x \,\beta_j (x,t)\hat{Q}_j(x)\Big].
\end{equation}
The slow variation in space induces a correspondingly slow variation in time. It also means that averages of local observables at $(x,t)$ with respect to $\rho_\mathrm{LE}$ can be evaluated as averages with respect to $\rho_\mathrm{GG}$ with the properly adjusted values of
the chemical potentials $\{\beta_j(x,t),j\geq 0\}$.
\medskip\\
\textit{Remark}: For integrable lattice models, the conserved charges are written as sums over translates of local and quasi-local densities \cite{quasiloc}. Their currents, as computed from the conservation law, have the same structure. However, for the $\delta$-Bose gas our formulas are tentative. The total charges $\hat{Q}_j$ are usually defined through the Bethe eigenfunctions of $\hat{Q}_2$ by replacing the $n$-particle energy $\sum_{\ell=1}^n (k_\ell)^2$ simply by $\sum_{\ell=1}^n (k_\ell)^j$. But the corresponding local charge densities are known only up to $j = 4$. We refer to \cite{DK01} for a discussion.
Nevertheless one would hope that, at least for appropriate conserved charges, GGE averaged densities and currents and GGE connected two-point correlation functions are still meaningfully defined. The set of appropriate conserved charges is a subtle point. There are {\em bona fide} GGE states for which local densities have diverging averages \cite{dNWBC14}, although this does not imply divergence of their two-point correlation functions. One may restrict to the Hilbert space of pseudolocal charges, which, by the rigorous results of \cite{D17}, at least in quantum chains would be the Hilbert space of functions $h(\theta)$ induced by the covariance inner product \eqref{2.15} or \eqref{2.13} (and thus, explicitly, \eqref{intro1}). Pseudolocal densities have finite integrated connected two-point functions by construction, and we expect all our results to hold for all such pseudolocal densities and their currents as long as the explicit formula gives a finite answer.\medskip
To lowest order in the variation, the family $\{\beta_j(x,t), j \geq 0\}$ satisfies a closed set of Euler-type equations, as explained in \cite{cdy,BCDF16}. We mostly follow the notation in \cite{cdy}. Instead of $\{\beta_j(x,t), j \geq 0\}$ it is more instructive to write down the evolution equation in terms of the quasiparticle density $\rho_\mathrm{p}(x,t;\theta)$ with $\theta \in \mathbb{R}$ the label of the conserved field. The density is governed by the system of conservation laws
\begin{equation}\label{4.6}
\partial_t \rho_\mathrm{p}(x,t;\theta) + \partial_x \big(v^\mathrm{eff}_{[\rho_p]}(x,t;\theta)\rho_\mathrm{p}(x,t;\theta)\big) = 0.
\end{equation}
Comparing with \eqref{3.1}, $\rho_\mathrm{p}(x,t;\theta)$ takes the role of the hard rod density $f(x,t;v)$. The effective velocity $v^\mathrm{eff}_{[\rho_p]}$
is a nonlinear functional of $\rho_\mathrm{p}(\cdot;\theta)$, which is local in $(x,t)$. Its precise definition will be given below. To have a more concise notation, we will mostly drop the dependence on $[\rho_p]$.
The Lieb-Liniger model has momentum $p(\theta) = \theta$ and kinetic energy $E(\theta) = \tfrac{1}{2} \theta^2$. As in \cite{cdy}, our results are valid for a general choice of $p,E$ and for future applications we retain this generality. Similarly, the higher-spin conserved charges in the Lieb-Liniger model can be chosen to have one-particle eigenvalues $h_j(\theta)=\theta^j/j!$, and our results hold for a general choice of a complete basis $h_j$ in Bethe-ansatz integrable models. In \cite{cdy}, the scattering amplitude is denoted by $\varphi(\theta)$, where
for the Lieb-Liniger model $\varphi(\theta) = 4c/(\theta^2 + 4c^2)$. Again such specific choice is not needed in the following derivation. The operator of convolution with $\varphi$ will be denoted by
\begin{equation}\label{4.7}
T\psi(\theta) = \frac{1}{2\pi} \int \mathrm{d}\alpha\, \varphi(\theta - \alpha) \psi(\alpha).
\end{equation}
As for hard rods, $\phi,\psi$ are our generic symbols for smooth test functions on label space.
Let us first explain $\rho_\mathrm{p}$ and $v^\mathrm{eff}$, for which it suffices to consider the spatially homogeneous state
$\rho_\mathrm{GG}$ with some prescribed chemical potentials $\{\beta_j, j \geq 0\}$. We define
\begin{equation}\label{4.8}
w(\theta) = \sum_{j\geq0} \beta_j h_j(\theta).
\end{equation}
The quasienergies, $\varepsilon(\theta)$, are the solutions to the integral equation
\begin{equation}\label{4.9}
\varepsilon(\theta) = w(\theta) -T\log(1+\mathrm{e}^{-\varepsilon}) (\theta).
\end{equation}
Note that
\begin{equation}\label{4.10}
\partial_{\beta_m}\varepsilon = h_m + Tn\partial_{\beta_m}\varepsilon,
\end{equation}
where
\begin{equation}\label{4.11}
n(\theta) = \frac{1}{1 + \mathrm{e}^{\varepsilon(\theta)}}
\end{equation}
and $n$ denotes multiplication by the occupation function $n(\theta)$, that is $(n\psi)(\theta) = n(\theta)\psi(\theta)$. As before we define the dressing transformation as
\begin{equation}\label{4.12}
\psi^\mathrm{dr} = (1 - Tn)^{-1}\psi.
\end{equation}
Hence
\begin{equation}\label{4.13}
\partial_{\beta_m}\varepsilon = (h_m)^\mathrm{dr}.
\end{equation}
The quasiparticle density satisfies
\begin{equation}\label{4.14}
n(\theta)^{-1} \rho_\mathrm{p}(\theta) = \tfrac{1}{2\pi}p'(\theta) + T\rho_\mathrm{p}(\theta),\quad 2\pi \rho_\mathrm{p}(\theta) = n(\theta)(p')^\mathrm{dr}(\theta).
\end{equation}
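The relations \eqref{4.9}--\eqref{4.14} also suggest a concrete numerical scheme: discretizing $\theta$ on a grid turns $T$ into a matrix, the TBA equation \eqref{4.9} can be solved by fixed-point iteration, and the dressing \eqref{4.12} becomes a linear solve. The following minimal Python sketch (not part of the original analysis; coupling, driving term and grid are arbitrary demonstration choices) implements this for the Lieb-Liniger kernel and checks the two expressions for the average charge in \eqref{4.15} against each other for $h_2(\theta)=\theta^2/2$.

```python
# Illustrative numerical sketch (not part of the original analysis): solve the
# TBA equation (4.9) for the Lieb-Liniger kernel by fixed-point iteration,
# build the occupation function (4.11), implement the dressing (4.12) as a
# linear solve, and form the quasiparticle density (4.14). Coupling, driving
# term w and grid are arbitrary demonstration choices.
import numpy as np

c = 1.0                                          # repulsive coupling
theta, dth = np.linspace(-20.0, 20.0, 801, retstep=True)
# discretized convolution operator (4.7): T_ij = phi(theta_i - theta_j) dth / 2pi
T = 4.0 * c / ((theta[:, None] - theta[None, :])**2 + 4.0 * c**2) * dth / (2.0 * np.pi)

w = 0.5 * theta**2 - 1.0                         # e.g. beta_2 = 1, beta_0 = -1
eps = w.copy()
for _ in range(300):                             # fixed-point iteration of (4.9)
    eps = w - T @ np.log1p(np.exp(-eps))
n = 1.0 / (1.0 + np.exp(eps))                    # occupation function (4.11)

def dress(psi):                                  # dressing (4.12): (1 - Tn) psi_dr = psi
    return np.linalg.solve(np.eye(theta.size) - T * n[None, :], psi)

p_prime = np.ones_like(theta)                    # p(theta) = theta
rho_p = n * dress(p_prime) / (2.0 * np.pi)       # quasiparticle density (4.14)

# the two expressions for the average charge (4.15), here for h_2 = theta^2 / 2
h2 = 0.5 * theta**2
q2_a = np.sum(rho_p * h2) * dth
q2_b = np.sum(p_prime * n * dress(h2)) * dth / (2.0 * np.pi)
```

The agreement of the two expressions reflects the symmetry of $n(1-Tn)^{-1}$ under transposition, which also underlies the identities derived below.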
Through $\rho_\mathrm{p}$ the average conserved charge per unit length can be computed as
\begin{equation}\label{4.15}
\langle \hat{Q}_j(0)\rangle= \mathsf{q}_j = \int \mathrm{d}\theta \,
\rho_\mathrm{p}(\theta) h_j(\theta) = \tfrac{1}{2\pi} \int \mathrm{d}p(\theta)n(\theta) (h_j)^\mathrm{dr}(\theta).
\end{equation}
Here $\langle \cdot \rangle$ denotes the infinite volume GGE average (and below, in expressions such as
$\langle \hat{Q}_i(x) \hat{Q}_j(0)\rangle^\mathrm{c}$, the superscript will again refer to the usual connected correlation functions).
Surprisingly this formalism extends also to average currents. The local current density of the $j$-th conserved charge is given through
\begin{equation}\label{4.16}
\mathrm{i}[H,\hat{Q}_j(x)] + \partial_x \hat{J}_j(x) = 0
\end{equation}
and its average is \cite{cdy,BCDF16}
\begin{equation}\label{4.17}
\langle \hat{J}_j(0) \rangle = \mathsf{j}_j = \int \mathrm{d}\theta
\rho_\mathrm{p}(\theta)v^\mathrm{eff}(\theta)h_j(\theta) = \tfrac{1}{2\pi} \int \mathrm{d}E(\theta)n(\theta) (h_j)^\mathrm{dr}(\theta)
\end{equation}
with the effective velocity
\begin{equation}\label{4.18}
v^\mathrm{eff}(\theta) = \frac{(E')^\mathrm{dr}(\theta)}{(p')^\mathrm{dr}(\theta)}.
\end{equation}
\medskip\\
\textit{Remark}: For a well-defined dressing transformation, the operator $1 -Tn$ has to be invertible. Also, for the linear response computation in Section \ref{sec5} we will need that
$v^\mathrm{eff}(\theta)$ is strictly increasing in $\theta$ and approximately linear for large $\theta$. Such properties can
be established for the Lieb-Liniger model, but require more technical considerations which are outside the scope of this contribution.
\medskip
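Both properties can at least be probed numerically. The sketch below (illustrative only, with arbitrary demonstration parameters; it is not a proof) computes $v^\mathrm{eff}$ from \eqref{4.18} for the Lieb-Liniger kernel and tests monotonicity, oddness, and approximate linearity at the edge of the grid.

```python
# Illustrative numerical check (arbitrary demonstration parameters, not a
# proof): compute the effective velocity (4.18) for the Lieb-Liniger kernel
# and test that it is strictly increasing, odd, and close to linear at the
# edge of the grid.
import numpy as np

c = 1.0
theta, dth = np.linspace(-20.0, 20.0, 801, retstep=True)
T = 4.0 * c / ((theta[:, None] - theta[None, :])**2 + 4.0 * c**2) * dth / (2.0 * np.pi)

w = 0.5 * theta**2 - 1.0                         # TBA driving term, demo values
eps = w.copy()
for _ in range(300):                             # TBA equation (4.9)
    eps = w - T @ np.log1p(np.exp(-eps))
n = 1.0 / (1.0 + np.exp(eps))                    # occupation function (4.11)

def dress(psi):                                  # dressing (4.12)
    return np.linalg.solve(np.eye(theta.size) - T * n[None, :], psi)

# E'(theta) = theta, p'(theta) = 1 for the Lieb-Liniger model
v_eff = dress(theta) / dress(np.ones_like(theta))   # effective velocity (4.18)
```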
We now extend the general relations from Section \ref{sec2}, valid for a finite number of conserved fields, to the Lieb-Liniger model.
The charge-charge covariance matrix $C$ has to be deduced from the GGE of the $\delta$-Bose gas through
\begin{equation}\label{4.19}
C_{ij} = \int \mathrm{d}x \,\langle \hat{Q}_i(x)\hat{Q}_{j}(0) \rangle^\mathrm{c}_{\rho_\mathrm{p}}.
\end{equation}
This quantity has been considered in \cite{moscaux}, but our expression below seems to be new. We develop a method by which one can also compute
the charge-current correlation matrix $B$,
\begin{equation}\label{4.20}
B_{ij} = \int \mathrm{d}x\, \langle \hat{Q}_i(x)\hat{J}_{j}(0) \rangle^\mathrm{c}_{\rho_\mathrm{p}}.
\end{equation}
Then the Drude weight equals $D = BC^{-1}B$ and the linearization $A = B C^{-1}$. As a consistency check, we will also show
that the so-determined $A$ agrees with linearizing \eqref{4.6} as $\rho_\mathrm{p} + \delta\psi$ with small $\delta\psi$. One can also turn the logic around:
given the charge correlator $C$ and the linearization $A$, which in addition uses only the average currents, one computes the matrices $B$ and $D$.
As our main result, the matrices \eqref{4.19} and \eqref{4.20} of the Lieb-Liniger model are written in a form which can be viewed as a sort of diagonalization. Thereby we arrive at a fairly explicit expression for the Drude weight. It is convenient to use the operators $T$, $n$ introduced above, as well as the multiplication operators $\rho_{\rm p}$ and $v^{\rm eff}$. Writing $C_{ij} = \langle h_i,Ch_j\rangle = \int \mathrm{d} \theta\,h_i(\theta) (Ch_j)(\theta)$, and similarly for $B,\,D,\,A$ and $D^{\rm s}$, the following identities hold:\medskip\\
(i) \textit{charge-charge correlator}
\begin{equation}\label{4.21}
C = (1-nT)^{-1}\rho_\mathrm{p}(1-n)(1-Tn)^{-1},
\end{equation}
(ii) \textit{charge-current correlator}
\begin{equation}\label{4.29}
B = (1-nT)^{-1}\rho_\mathrm{p}(1-n)v^\mathrm{eff}(1-Tn)^{-1},
\end{equation}
(iii) \textit{Drude weight}
\begin{equation}\label{4.37}
D = (1-nT)^{-1}\rho_\mathrm{p}(1-n)(v^\mathrm{eff})^2(1-Tn)^{-1},
\end{equation}
(iv) \textit{linearized operator}
\begin{equation}\label{4.39}
A = (1-nT)^{-1}v^\mathrm{eff}(1-nT),
\end{equation}
(v) \textit{Drude self-weight}
\begin{equation}\label{curcur}
D^\mathrm{s} = (1-nT)^{-1}\rho_\mathrm{p}(1-n)|v^\mathrm{eff}|(1-Tn)^{-1}.
\end{equation}
In terms of linear combinations of the form
\begin{equation}\label{4.39a}
a_{\psi}(x) = \sum_{j=0}^\infty c_j \hat Q_j(x),\quad \psi(\theta) = \sum_{j=0}^\infty c_j h_j(\theta)
\end{equation}
with general coefficients $c_j$, this is
\begin{equation}\label{4.22}
\hspace{-31pt}\langle\phi,C\psi\rangle = \int \mathrm{d}\theta \rho_\mathrm{p}(\theta)(1-n(\theta))\phi^\mathrm{dr}(\theta) \psi^\mathrm{dr}(\theta),
\end{equation}
\begin{equation}\label{4.30}
\langle\phi,B\psi\rangle = \int \mathrm{d}\theta \rho_\mathrm{p}(\theta)(1-n(\theta))v^\mathrm{eff}(\theta)\phi^\mathrm{dr}(\theta) \psi^\mathrm{dr}(\theta),
\end{equation}
\begin{equation}\label{4.38}
\hspace{3pt}\langle\phi,D\psi\rangle = \int \mathrm{d}\theta \rho_\mathrm{p}(\theta)(1-n(\theta))v^\mathrm{eff}(\theta)^2\phi^\mathrm{dr}(\theta) \psi^\mathrm{dr}(\theta),
\end{equation}
\begin{equation}\label{4.40}
\hspace{-42pt}\langle\phi,A\psi\rangle = \int \mathrm{d}\theta v^\mathrm{eff}(\theta)\phi^\mathrm{dr}(\theta) (1 - nT)\psi (\theta),
\end{equation}
\begin{equation}\label{curcur2}
\hspace{3pt}\langle\phi,D^\mathrm{s}\psi\rangle = \int \mathrm{d}\theta \rho_\mathrm{p}(\theta)(1-n(\theta))|v^\mathrm{eff}(\theta)|\phi^\mathrm{dr}(\theta) \psi^\mathrm{dr}(\theta).
\end{equation}
For example
\begin{equation}
\langle \phi,C\psi\rangle = \int \mathrm{d} x\langle a_\phi(x)a_\psi(0)\rangle^\mathrm{c}_{\rho_\mathrm{p}} = \int \mathrm{d}\theta \rho_\mathrm{p}(\theta)(1-n(\theta))\phi^\mathrm{dr}(\theta) \psi^\mathrm{dr}(\theta)
\end{equation}
and correspondingly for $B,D, A, D^\mathrm{s}$.
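As a numerical consistency check (an illustrative sketch with arbitrary demonstration parameters), one can verify on a $\theta$-grid that the operator form \eqref{4.21} and the integral form \eqref{4.22} give the same value of $\langle\phi, C\psi\rangle$; the agreement relies only on the symmetry of the discretized $T$.

```python
# Illustrative consistency check (arbitrary demonstration parameters): the
# operator form (4.21) of the charge-charge correlator and the integral form
# (4.22) with dressed test functions agree on a discretized theta-grid.
import numpy as np

c = 1.0
theta, dth = np.linspace(-20.0, 20.0, 801, retstep=True)
T = 4.0 * c / ((theta[:, None] - theta[None, :])**2 + 4.0 * c**2) * dth / (2.0 * np.pi)

w = 0.5 * theta**2 - 1.0
eps = w.copy()
for _ in range(300):                             # TBA equation (4.9)
    eps = w - T @ np.log1p(np.exp(-eps))
n = 1.0 / (1.0 + np.exp(eps))                    # occupation function (4.11)
Id = np.eye(theta.size)

def dress(psi):                                  # (1 - Tn)^{-1} psi, Eq. (4.12)
    return np.linalg.solve(Id - T * n[None, :], psi)

rho_p = n * dress(np.ones_like(theta)) / (2.0 * np.pi)   # (4.14), with p' = 1

phi_f = np.ones_like(theta)                      # h_0
psi_f = 0.5 * theta**2                           # h_2
# operator form (4.21): C psi = (1-nT)^{-1} [ rho_p (1-n) (1-Tn)^{-1} psi ]
C_psi = np.linalg.solve(Id - n[:, None] * T, rho_p * (1.0 - n) * dress(psi_f))
lhs = np.sum(phi_f * C_psi) * dth                # <phi, C psi>
rhs = np.sum(rho_p * (1.0 - n) * dress(phi_f) * dress(psi_f)) * dth   # (4.22)
```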
\medskip\\
\textbf{Proof} of (i)-(iv): We start from the functional
\begin{equation}\label{4.25}
F_g = - \tfrac{1}{2\pi} \int \mathrm{d}\theta\, g(\theta) \log (1 + \mathrm{e}^{-\varepsilon(\theta)})
\end{equation}
with an as yet arbitrary function $g$. Then (keeping implicit the argument $\theta$ of the integrand)
\begin{equation}\label{4.24}
\partial _{\beta_j} F_g = \tfrac{1}{2\pi} \int \mathrm{d}\theta gn \partial _{\beta_j}\varepsilon = \tfrac{1}{2\pi} \int \mathrm{d}\theta gn h_j^\mathrm{dr},
\end{equation}
where we used \eqref{4.13}. With \eqref{4.15} and \eqref{4.17}, we observe that the choices $g=p'$ and $g=E'$ give, respectively, the average densities and currents \cite{cdy},
\begin{equation}\label{Fpe}
\partial_{\beta_j} F_{p'} = \mathsf{q}_j,\quad
\partial_{\beta_j} F_{E'} = \mathsf{j}_j.
\end{equation}
Note that $F_{p'}$ is the free energy of the GGE \cite{tba,moscaux}, and $F_{E'}$ is the ``current free energy'' obtained in \cite{cdy}, where the second relation of \eqref{Fpe} was first derived.
Assume that $n$ depends smoothly on some parameter $\mu$. We take a second derivative in \eqref{4.10},
\begin{equation}\label{4.27}
\partial _{\mu}\partial _{\beta_j}\varepsilon = T \partial _{\mu}(n\partial _{\beta_j}\varepsilon) =T\big( \partial _{\mu}n\partial _{\beta_j}\varepsilon +n \partial _{\mu}\partial _{\beta_j}\varepsilon \big).
\end{equation}
Hence
\begin{equation}\label{4.23a}
\partial _{\mu}\partial _{\beta_j}\varepsilon = (1-Tn)^{-1}T(\partial _{\mu}n\partial _{\beta_j}\varepsilon).
\end{equation}
Taking a second derivative also in \eqref{4.24} and combining with \eqref{4.23a} yields the general relation
\begin{equation}\label{4.24aa}
\partial _{\mu}\partial _{\beta_j} F_g = \tfrac{1}{2\pi} \int \mathrm{d}\theta g^\mathrm{dr} \partial _{\mu}n\partial _{\beta_j}\varepsilon = \tfrac{1}{2\pi} \int \mathrm{d}\theta g^\mathrm{dr} \partial _{\mu}nh_j^\mathrm{dr}.
\end{equation}
With $\mu=\beta_i$, we find
\begin{equation}\label{4.24a}
\partial _{\beta_i}\partial _{\beta_j} F_g = - \tfrac{1}{2\pi} \int \mathrm{d}\theta g^\mathrm{dr} n(1-n) \partial _{\beta_i}\varepsilon\partial _{\beta_j}\varepsilon.
\end{equation}
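Here the minus sign comes from differentiating the occupation function \eqref{4.11}; spelled out for convenience,

```latex
\partial_{\beta_i} n
= \partial_{\beta_i}\Big(\frac{1}{1+\mathrm{e}^{\varepsilon}}\Big)
= - \frac{\mathrm{e}^{\varepsilon}}{(1+\mathrm{e}^{\varepsilon})^{2}}\,\partial_{\beta_i}\varepsilon
= - n(1-n)\,\partial_{\beta_i}\varepsilon .
```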
We now set $\phi(\theta) = \sum_{i\geq 0} c_i h_i(\theta)$ and $\psi(\theta) = \sum_{j\geq 0} \tilde{c}_j h_j(\theta)$. Using \eqref{4.13} we arrive at the basic identity
\begin{equation}\label{4.24b}
\sum_{i,j \geq 0} c_i \tilde{c}_j\partial _{\beta_i}\partial _{\beta_j} F_g = - \tfrac{1}{2\pi} \int \mathrm{d}\theta g^\mathrm{dr} n(1-n) \phi^\mathrm{dr} \psi^\mathrm{dr}.
\end{equation}
Noting that for the choice $g = p'$, \eqref{Fpe} along with \eqref{dbeta} imply $C_{ij} = -\partial _{\beta_i}\partial _{\beta_j}F_{p'}$, \eqref{4.22} follows upon using the last relation in \eqref{4.14}. To establish \eqref{4.29}, we instead choose $g = E'$; then \eqref{Fpe} and \eqref{dbeta} give
\begin{equation}\label{4.36}
B_{ij} = \int \mathrm{d}x \langle \hat{Q}_i(x)\hat{J}_{j}(0) \rangle = - \partial _{\beta_i}\partial _{\beta_j} F_{E'}.
\end{equation}
Hence our claim follows from the basic identity \eqref{4.24b} together with the relations \eqref{4.14} and \eqref{4.18}.
Finally observing that $C^{-1} = (1-Tn)(\rho_\mathrm{p}(1-n))^{-1}(1-nT)$, the claims \eqref{4.38} and \eqref{4.40} are a consequence of $D = B C^{-1}B$
and $A = BC^{-1}$.
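Explicitly, inserting \eqref{4.21} and \eqref{4.29} into $D = BC^{-1}B$, the inner factors telescope,

```latex
D = (1-nT)^{-1}\rho_\mathrm{p}(1-n)v^\mathrm{eff}
\big(\rho_\mathrm{p}(1-n)\big)^{-1}
\rho_\mathrm{p}(1-n)v^\mathrm{eff}(1-Tn)^{-1}
= (1-nT)^{-1}\rho_\mathrm{p}(1-n)\big(v^\mathrm{eff}\big)^{2}(1-Tn)^{-1},
```

since the multiplication operators commute; $A = BC^{-1}$ works in the same way.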
The missing piece is to reconfirm $A$ of \eqref{4.39}
by linearizing the Euler-type equation \eqref{4.6}.
We linearize the current in \eqref{4.6} as $\rho_\mathrm{p} + \delta\psi$,
\begin{equation}\label{4.41}
\delta(v^\mathrm{eff}\rho_\mathrm{p}) = v^\mathrm{eff}\delta\psi + \rho_\mathrm{p} \delta v^\mathrm{eff}.
\end{equation}
For $v^\mathrm{eff}$ we use the identity Eq. (29) in \cite{cdy},
\begin{equation}\label{4.34}
p'v^\mathrm{eff} = E'+ 2\pi T(\rho_\mathrm{p}v^\mathrm{eff}) - 2\pi v^\mathrm{eff} T\rho_\mathrm{p},
\end{equation}
since $\rho_\mathrm{p}$ appears linearly. Then
\begin{equation}\label{4.42}
v^\mathrm{eff} = (\tfrac{1}{2\pi}p' - T\rho_\mathrm{p} + M_{\rho_\mathrm{p}})^{-1}\tfrac{1}{2\pi}E',
\end{equation}
where $M_{\rho_\mathrm{p}}$ is the operator of multiplication by $T\rho_\mathrm{p}$, acting as
\begin{equation}\label{4.43}
M_{\rho_\mathrm{p}}\psi(\theta) = \frac1{2\pi}
\int \mathrm{d}\alpha\, \varphi(\theta - \alpha)\rho_\mathrm{p}(\alpha)\psi(\theta).
\end{equation}
Variation of $\rho_\mathrm{p}$ yields
\begin{equation}\label{4.44}
\langle \phi, v^\mathrm{eff}_{[\rho_\mathrm{p} + \delta\psi]}\rangle - \langle \phi, v^\mathrm{eff}_{[\rho_\mathrm{p}]}\rangle=
\langle \phi , \rho_\mathrm{p}(\tfrac{1}{2\pi}p' - T\rho_\mathrm{p} + M_{\rho_\mathrm{p}})^{-1}(Tv^\mathrm{eff} - v^\mathrm{eff}T)\delta\psi\rangle.
\end{equation}
Thus our task is to show that
\begin{equation}\label{4.45}
(1-nT)^{-1}v^\mathrm{eff}(1-nT) = v^\mathrm{eff} + \rho_\mathrm{p}(\tfrac{1}{2\pi}p' - T\rho_\mathrm{p} + M_{\rho_\mathrm{p}})^{-1}(Tv^\mathrm{eff} - v^\mathrm{eff}T).
\end{equation}
Multiplying Eq.~\eqref{4.45} by $(1-nT)$ from the left yields
\begin{equation}\label{4.46}
n(Tv^\mathrm{eff} - v^\mathrm{eff}T) = (1 - nT)\rho_\mathrm{p}(\tfrac{1}{2\pi}p' - T\rho_\mathrm{p} +M_{\rho_\mathrm{p}})^{-1}(Tv^\mathrm{eff} - v^\mathrm{eff}T).
\end{equation}
In order to have equality, it is sufficient to show that
\begin{equation}
n =(1 - nT)\rho_\mathrm{p}(\tfrac{1}{2\pi}p' - T\rho_\mathrm{p} +M_{\rho_\mathrm{p}})^{-1}
\end{equation}
which is equivalent to
\begin{equation}\label{4.47}
n(\tfrac{1}{2\pi}p' - T\rho_\mathrm{p} + M_{\rho_\mathrm{p}}) = (1 - nT)\rho_\mathrm{p}.
\end{equation}
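Since $M_{\rho_\mathrm{p}}$ acts as multiplication by $T\rho_\mathrm{p}$, the terms $nT\rho_\mathrm{p}$ cancel on both sides, and the identity reduces to the pointwise relation

```latex
n(\theta)\big(\tfrac{1}{2\pi}p'(\theta) + (T\rho_\mathrm{p})(\theta)\big) = \rho_\mathrm{p}(\theta).
```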
This is satisfied because of \eqref{4.14} and we have established \eqref{4.45}.\medskip\hspace*{\fill}\mbox{$\halmos$}
There is a physically interesting consequence for the time-dependent charge-charge correlator defined through
\begin{equation}\label{4.48}
\hat{S}_{ij}(k,t) = \int \mathrm{d}x\, \mathrm{e}^{\mathrm{i}kx} \langle\hat{Q}_i(x,t) \hat{Q}_j(0,0)\rangle^\mathrm{c}_{\rho_\mathrm{p}},
\end{equation}
compare with \eqref{2.11}. On the hydrodynamic scale, i.e.\ small $k$ and large $t$, $\hat{S}_{ij}(k,t)$ is approximated by
\begin{equation}\label{4.49}
\hat{S}_{ij}(k,t) \simeq \langle h_i,\mathrm{e}^{\mathrm{i}ktA}C h_j\rangle = \int \mathrm{d}\theta\, \mathrm{e}^{\mathrm{i}ktv^\mathrm{eff}(\theta)}
\rho_\mathrm{p}(\theta)(1-n(\theta))(h_i)^\mathrm{dr}(\theta) (h_j)^\mathrm{dr}(\theta).
\end{equation}
For the special case of the density, $h_i =1, h_j =1$, such asymptotic behavior has been derived in \cite{dNP16} directly from the Bethe ansatz. Here we see that the structure of the correlator
holds in much greater generality.
We return to the still missing identity \eqref{curcur}. There is an exact sum rule which states
\begin{equation}\label{4.49a}
\int \mathrm{d}x |x|\tfrac{1}{2}\big(S_{ij}(x,t) + S_{ji}(x,t)\big)= \int_0^t \mathrm{d}s \int_0^t\mathrm{d}s' \langle \mathfrak{j}_j(0,s)\mathfrak{j}_i(0,s') \rangle^\mathrm{c},
\end{equation}
see \cite{MS14}. Using time-stationarity on the right-hand side and the approximation \eqref{4.49} for $S_{ij}(x,t)$ on the left, one arrives at the claimed
\eqref{curcur}.
The hydrodynamic approximation likewise extends to the other correlation functions.
Differentiating with respect to $t$ and using the conservation equations, one obtains
\begin{equation}\label{JQ}
\int \mathrm{d} x\, \mathrm{e}^{\mathrm{i}kx}\langle \hat J_i(x,t)\hat Q_j(0,0)\rangle^\mathrm{c}
\simeq
\int \mathrm{d}\theta\, \mathrm{e}^{\mathrm{i}ktv^\mathrm{eff}(\theta)}
\rho_\mathrm{p}(\theta)(1-n(\theta))v^\mathrm{eff}(\theta)(h_i)^\mathrm{dr}(\theta) (h_j)^\mathrm{dr}(\theta).
\end{equation}
Using space-time translation invariance of the averaging and further differentiating, we get
\begin{equation}\label{JJ}
\int \mathrm{d} x\, \mathrm{e}^{\mathrm{i}kx}\langle \hat J_i(x,t)\hat J_j(0,0)\rangle^\mathrm{c}
\simeq
\int \mathrm{d}\theta\, \mathrm{e}^{\mathrm{i}ktv^\mathrm{eff}(\theta)}
\rho_\mathrm{p}(\theta)(1-n(\theta))v^\mathrm{eff}(\theta)^2(h_i)^\mathrm{dr}(\theta) (h_j)^\mathrm{dr}(\theta).
\end{equation}
At $k=0$ one recovers the Drude weight \eqref{4.37}, in agreement with its basic definition \eqref{2.14}.
Further, integrating \eqref{JJ} over $t\in{\mathbb{R}}$, the left-hand side is proportional to $\delta(k)$, since the time-integrated current is position independent because of the conservation law. Equating with the integrated right-hand side yields again \eqref{curcur}.
In our discussion, long times refers to the ballistic (Eulerian) time scale, i.e.\ $kt = \mathcal{O}(1)$. The diffusive time scale, $t$ of order $k^{-2}$, is not covered
and as an input would require some information on
\begin{equation}\label{4.50}
\int_0^\infty \mathrm{d} t \Big(\int \mathrm{d} x\langle \hat {J}_i(x,t)\hat {J}_j(0,0)\rangle^\mathrm{c} - D_{ij}\Big),
\end{equation}
which currently seems to be out of reach.
Finally, we observe that, Fourier transforming \eqref{4.49} in $k$, the space-time dependent charge-charge correlator can be written in the form
\begin{equation}
S_{ij}(x,t) \simeq \int \mathrm{d}\theta\, \delta(x-v^\mathrm{eff}(\theta)t)
\rho_\mathrm{p}(\theta)(1-n(\theta))(h_i)^\mathrm{dr}(\theta) (h_j)^\mathrm{dr}(\theta),
\end{equation}
which has a clear physical interpretation: in the hydrodynamic limit, the correlation is built from particles propagating ballistically, from the initial position $(0,0)$ to the position $(x,t)$, at the speeds $v^\mathrm{eff}(\theta)$. The equilibrium weight is encoded in $\rho_\mathrm{p}(1-n)$, while
$(h_i)^\mathrm{dr}$, resp.\ $(h_j)^\mathrm{dr}$, results from the observable at the start, resp.\ end, point. The other correlation functions, \eqref{JQ} and \eqref{JJ},
can be viewed in the corresponding way.
\medskip\\
\section{Linear response}
\label{sec5}
\setcounter{equation}{0}
\subsection{Drude weight}
The Drude weight of the Lieb-Liniger model can also be obtained from a linear response for the current. One starts from a domain wall, which means the state \eqref{4.4}
with $\beta_i(x) = \beta_i - \tfrac{1}{2}\mu_i $ for $x < 0$, $\beta_i(x) = \beta_i + \tfrac{1}{2}\mu_i $ for $x > 0$, and $\beta_j$ constant for $j\neq i$. The linear response of the $j$-th average current is defined through
\begin{equation}\label{linresp}
D_{ij} = \lim_{\mu_i\to0} \frc{\partial}{\partial\mu_i}
\lim_{t\to\infty}
\frc1t \int\mathrm{d} x\, \langle \hat J_j(x,t)\rangle_{\mu_i}.
\end{equation}
We will establish that this expression indeed agrees with \eqref{4.37}.
In the context of the XXZ and Hubbard models the prescription \eqref{linresp}, for the special case of charge, spin and energy currents with a thermal Gibbs reference state, is discussed in \cite{VKM15,K17} and used for a numerical computation of the associated components of the Drude weight. These results have been combined with generalized hydrodynamics in order to evaluate these components of the Drude weight exactly \cite{IdN17,BVKM17}. Earlier, a linear response formula for the Drude weight was proposed and proved in \cite[Sect.~6]{IP13},
for the diagonal case ($i=j$) and with thermal Gibbs as reference state. However instead of an initial domain wall the authors consider an initial spatially homogeneous equilibrium state and perturb the dynamics by a linear potential of the form $\mu_i \int \mathrm{d} x x\hat{Q}_i(x)$. While leading to the same result, a numerical implementation seems to be more difficult when compared to the initial domain wall \eqref{linresp}.
Since the right-hand side of \eqref{linresp} is evaluated at large times, we can use the asymptotic form of the resulting current, which is known to be described by a local GGE of self-similar form. We thus change the integration variable to $\xi=x/t$,
\begin{equation}
D_{ij} = \lim_{\mu_i\to0} \frc{\partial}{\partial\mu_i}\int\mathrm{d} \xi\, \langle \hat J_j\rangle_{\xi,\mu_i} = \lim_{\mu_i\to0} \int\mathrm{d} \xi\, \frc{\partial\langle \hat J_j\rangle_{\xi,\mu_i}}{\partial\mu_i}.
\end{equation}
From \cite{cdy} it is known that
\begin{equation}
\langle \hat J_j\rangle_{\xi, \mu_i} = \int \mathrm{d}\theta\, E' n h_j^{\rm dr},
\end{equation}
where
\begin{equation}
n(\theta) = n_\mathrm{L}(\theta)\chi(\theta>\theta_\star(\xi,\mu_i))
+n_\mathrm{R}(\theta)\chi(\theta<\theta_\star(\xi,\mu_i)).
\end{equation}
Here $\chi =1$ if the condition in the argument is satisfied and $\chi =0$ otherwise,
$\theta_\star(\xi,\mu_i)$ is implicitly defined by the relation $v^{\rm eff}(\theta_\star(\xi,\mu_i))=\xi$,
\begin{equation}
n_\mathrm{L,R}(\theta) = \frc1{1+\mathrm{e}^{\varepsilon_\mathrm{L,R}(\theta)}},
\end{equation}
and $\varepsilon_\mathrm{L}$ (resp. $\varepsilon_\mathrm{R}$) is determined by \eqref{4.9}, where $w(\theta)$ is given by \eqref{4.8} with the replacement $\beta_i\leadsto \beta_i-\mu_i/2$ (resp. $\beta_i\leadsto \beta_i+\mu_i/2$).
Taking the derivative, the general relation \eqref{4.24aa} gives
\begin{equation}\label{6.6}
\frc{\partial \langle \hat J_j\rangle_{\xi, \mu_i}}{\partial\mu_i}\Big|_{\mu_i=0} =
\int \mathrm{d}\theta \big((E')^{\rm dr}h_j^{\rm dr} \partial_{\mu_i} n\big)_{\mu_i=0}.
\end{equation}
Note that the term $(n_\mathrm{L}-n_\mathrm{R})\delta(\theta-\theta_\star)\partial_{\mu_i}\theta_\star|_{\mu_i=0}$ vanishes because $n_\mathrm{L}=n_\mathrm{R}$ at $\mu_i=0$. Hence we obtain
\begin{equation}
\partial_{\mu_i} n(\theta)|_{\mu_i=0}
=\partial_{\mu_i} n_\mathrm{L}(\theta)|_{\mu_i=0}\,\chi(\theta>\theta_\star(\xi))
+\partial_{\mu_i} n_\mathrm{R}(\theta)|_{\mu_i=0}\,\chi(\theta<\theta_\star(\xi))
\end{equation}
where $\theta_\star(\xi) = \theta_\star(\xi,\mu_i=0)$. From \eqref{4.11} and \eqref{4.13},
\begin{equation}\label{ee5.8}
\partial_{\mu_i} n_\mathrm{L,R}|_{\mu_i=0} =
\pm \tfrac{1}{2} h_i^{\rm dr} n(1-n),
\end{equation}
where $n$ is the equilibrium occupation function of the spatially homogeneous background state \eqref{4.3}. Thus, inserting into the integral
\eqref{6.6},
\begin{equation}\label{intermediate}
D_{ij} = \frc12 \int \mathrm{d}\xi\Big[\int_{\theta_\star(\xi)}^\infty\mathrm{d}\theta\,
h_i^{\rm dr}h_j^{\rm dr} n(1-n)(E')^{\rm dr} -
\int_{-\infty}^{\theta_\star(\xi)}\mathrm{d}\theta\,
h_i^{\rm dr}h_j^{\rm dr} n(1-n)(E')^{\rm dr}\Big].
\end{equation}
Note that the integrands do not depend on $\xi$. Let us abbreviate $g = h_i^{\rm dr}h_j^{\rm dr} n(1-n)(E')^{\rm dr}$. Then
\begin{equation}\label{6.7}
D_{ij} = \frc12 \int \mathrm{d}\xi \int\mathrm{d}\theta\,g(\theta) \big(\chi(\{\xi < v^\mathrm{eff}(\theta)\}) - \chi(\{\xi >v^\mathrm{eff}(\theta)\})\big).
\end{equation}
To a good approximation, $v^\mathrm{eff}$ is linear for large $|\theta|$. We assume that, as can be checked for the Lieb-Liniger model,
\begin{equation}\label{6.8}
\int \mathrm{d}\theta\,|g(\theta)|( 1+ |\theta|^{1+\delta}) < \infty
\end{equation}
for some $\delta > 0$.
Then, with vanishing error, one can cut off the $\xi$-integration and obtain
\begin{equation}\label{6.7a}
D_{ij} = \lim_{a \to \infty} \frc12 \int \mathrm{d}\theta\,g(\theta) \int_{-a}^a \mathrm{d}\xi\,
\big(\chi(\{\xi < v^\mathrm{eff}(\theta)\}) - \chi(\{\xi >v^\mathrm{eff}(\theta)\})\big)
=
\int \mathrm{d}\theta\, g(\theta) v^\mathrm{eff}(\theta),
\end{equation}
as claimed.
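In the last step the inner integral was evaluated explicitly: for $a > |v^\mathrm{eff}(\theta)|$,

```latex
\int_{-a}^{a} \mathrm{d}\xi\,\big(\chi(\{\xi < v^\mathrm{eff}(\theta)\}) - \chi(\{\xi > v^\mathrm{eff}(\theta)\})\big)
= \big(v^\mathrm{eff}(\theta)+a\big) - \big(a-v^\mathrm{eff}(\theta)\big)
= 2v^\mathrm{eff}(\theta),
```

so the prefactor $\tfrac12$ cancels.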
\subsection{Drude self-weight}\label{ss5.2}
A linear-response formula for $D^\mathrm{s}$ similar to that for the Drude weight is as follows. With the same protocol as in \eqref{linresp} for the quantity $\langle \hat J_j(0,t)\rangle_{\mu_i}$, one writes
\begin{equation}\label{Nj}
D^\mathrm{s}_{ij} = \lim_{\mu_i\to0} 2\frc{\partial}{\partial\mu_i}
\lim_{t\to\infty}
\langle \hat J_j(0,t)\rangle_{\mu_i}.
\end{equation}
General arguments for this relation can be given. If the GGE at $\mu_i=0$ is an equilibrium state (that is, time-reversal symmetric), then this relation follows from standard fluctuation relations of Cohen-Gallavotti type, which can be established by general principles \cite{JW04,Es,bd13} (here generalized to higher conserved charges). In general, however, a GGE state is not at equilibrium, as it may carry currents. Yet it is known that all eigenstates with real eigenvalues of $\mathcal{PT}$ symmetric hamiltonians can be chosen to be $\mathcal{PT}$ symmetric. Since the Lieb-Liniger model, as well as many other integrable models, is $\mathcal{PT}$ symmetric, so are its GGEs. A different derivation of equality \eqref{Nj} based on $\mathcal{PT}$-symmetry was presented in \cite{Nat-Phys}.
The equality \eqref{Nj} is very similar to the linear-response formula for the Drude weight, the difference being that the current is not space-integrated, it is the current across the origin. The calculation is similar to the one presented in the previous subsection, with the difference that we only need to evaluate all quantities at $\xi=0$. Therefore, in \eqref{6.7a} the integral over $\xi$ is replaced by the integrand at $\xi=0$, and thus expression \eqref{Nj} coincides with \eqref{curcur2}.
In fact, without taking the $\mu_i=0$ limit in \eqref{Nj}, the resulting more general equality was derived under a certain property of ``pure transmission'' \cite{bd13} (see also the derivation in \cite{Nat-Phys}). This is one of a family of equalities for higher cumulants referred to as ``extended fluctuation relations'' \cite{bd13}. The pure transmission property holds in free particle models and in conformal field theory \cite{DW3}, and it was conjectured in \cite{bd13} to hold as well in interacting integrable models. However, we see here that this conjecture does not hold: had we not set $\mu_i=0$ after taking the derivative, the resulting expression would not have agreed with \eqref{curcur2}. The term proportional to $\delta(\theta-\theta_\star)$ discussed just after \eqref{6.6} does not contribute at $\xi=0$ even with $\mu_i\neq0$, because at $\xi=0$ we have $v^{\rm eff}(\theta_\star)=0$, thus $(E')^{\rm dr}=0$. However, keeping $\mu_i$ nonzero, we have instead of \eqref{ee5.8} the relation $\partial_{\mu_i} n_\mathrm{L,R} = \pm \tfrac{1}{2} (h_i)^{\rm dr}_{[n_{L,R}]} n_{L,R}(1-n_{L,R})$, where the index indicates that the dressing operation is with respect to the left (right) bath $n_L(\theta)$ ($n_R(\theta)$). Therefore we obtain \eqref{intermediate}, again without $\xi$ integration and instead at $\xi=0$, but where in the first $\theta$-integral $h_i^{\rm dr}$ is replaced by $(h_i)^{\rm dr}_{[n_{L}]}$, and in the second integral, by $(h_i)^{\rm dr}_{[n_{R}]}$. The resulting expression is therefore different from \eqref{curcur2}. It would be interesting to explore in more depth the consequences of the lack of pure transmission, and the general arguments for \eqref{Nj} and related equalities for higher cumulants.
\section{Discussion}\label{sec6}
\setcounter{equation}{0}
There are a number of immediate generalizations to the above results. First, as mentioned in the introduction, the results \eqref{4.22} - \eqref{4.40} are expected to hold in Bethe-ansatz integrable models of fermionic type. In general, with multiple species of particles, $\theta$ stands for a multi-index, involving both the velocity (or the quasi-momentum) and the particle type, and integrals over $\theta$ include sums over particle types; see e.g. \cite{cdy,BCDF16,dynote}. Second, the (generalized) thermodynamic Bethe ansatz was also developed for models with bosonic statistics, see e.g. \cite{tba}. In this case, \eqref{4.9} is replaced by
\begin{equation}\label{4.9prime}
\varepsilon(\theta) = w(\theta) +T\log(1-\mathrm{e}^{-\varepsilon}) (\theta)
\end{equation}
and the occupation function is
\begin{equation}\label{4.11prime}
n(\theta) = \frac{1}{\mathrm{e}^{\varepsilon (\theta)}-1}.
\end{equation}
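Differentiating \eqref{4.11prime} yields the bosonic analogue of the fermionic relation used after \eqref{4.24a},

```latex
\partial_{\beta_j} n
= \partial_{\beta_j}\Big(\frac{1}{\mathrm{e}^{\varepsilon}-1}\Big)
= - \frac{\mathrm{e}^{\varepsilon}}{(\mathrm{e}^{\varepsilon}-1)^{2}}\,\partial_{\beta_j}\varepsilon
= - n(1+n)\,\partial_{\beta_j}\varepsilon .
```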
The dressing operation and the values for averages are otherwise of the same form. We use $-\partial_{\beta_j}n = n(1+n)\partial_{\beta_j}\varepsilon$ to repeat our computation from before and find that \eqref{4.21} - \eqref{curcur} remain valid provided $\rho_\mathrm{p}(1- n)$ is replaced by $\rho_\mathrm{p}(1 +n)$. The results are those expressed in \eqref{intro1} - \eqref{intro4} with $\sigma=-1$. Third, it should also be possible to generalize to classical soliton-like gases \cite{dyc}, taking inspiration from the hard rod model, although a precise discussion of this is beyond the scope of this paper. The expected results are those obtained using the classical (Boltzmann) occupation function $n(\theta) = \mathrm{e}^{-\varepsilon(\theta)}$, giving \eqref{intro1} - \eqref{intro4} with $\sigma=0$. The factors $1-n$ (fermions), $1+n$ (bosons) and $1$ (classical particles) represent the effect of the statistics of the fundamental components of the gas. Correlations are reduced in the fermionic case when the occupation is larger because of the Fermi exclusion principle. On the contrary, bosons display a condensation effect, increasing correlations, while classical particles are not subject to any nontrivial statistics. For classical integrable field theory, radiative components may give rise to occupation functions with Rayleigh-Jeans form. We hope to present complete derivations in a future work.
Some of the techniques introduced here should generalize to other space-time phenomena on the Euler scale. For instance, since no entropy is produced, the general rule states that correlations are governed by the linearized Euler equations. If the initial state has spatial variations, then the linearization is with respect to a space-time dependent background, and one could write down the equation \eqref{dS} with space-time dependent linearization matrix $A$. Another possibility that can be accessed similarly is to have external potentials varying on the Euler scale. We leave for future works the analysis of such equations and of their solutions.
\vspace{1cm}
\noindent
{\bf Acknowledgments}. BD is grateful to A. Bastianello, T. Prosen, T. Yoshimura and G. Watts for discussions, and especially to T. Yoshimura for pointing out an argument that is used in subsection \ref{ss5.2}. We thank X. Zotos for email discussions and comments on the first version of this paper.
\chapter{Introduction}
This is an introductory set of lectures on the basic ideas and methods of effective field theories (EFTs). Other lectures at the school will go into more details about the most commonly used effective theories in high energy physics and cosmology. Professor Neubert's lectures~\cite{Neubert:dp}, delivered concurrently with mine, provide an excellent introduction to renormalization in quantum field theory (QFT), the renormalization group equation, operator mixing, and composite operators, and this knowledge will be assumed in my lectures. I also have some 20-year-old lecture notes from the Schladming school~\cite{Manohar:1996cq} which should be read in conjunction with these lectures. Additional references are~\cite{Georgi:1985kw,Kaplan:1995uv,Rothstein:2003mp,Stewart:fv}.
The Les Houches school and these lecture notes focus on aspects of EFTs as used in high energy physics and cosmology which are relevant for making contact with experimental observations.
The intuitive idea behind effective theories is that you can calculate without knowing the exact theory. Engineers are able to design and build bridges without any knowledge of strong interactions or quantum gravity. The main inputs in the design are Newton's laws of mechanics and gravitation, the theory of elasticity, and fluid flow. The engineering design depends on parameters measured on macroscopic scales of order meters, such as the elastic modulus of steel. Short distance properties of Nature, such as the existence of weak interactions, or the mass of the Higgs boson are not needed.
In some sense, the ideas of EFT are ``obvious.'' However, implementing them in a mathematically consistent way in an interacting quantum field theory is not so obvious. These lectures provide pedagogical examples of how one actually implements EFT ideas in particle physics calculations of experimentally relevant quantities. Additional details on specific EFT applications are given in other lectures in this volume.
An EFT is a quantum theory in its own right, and like any other QFT, it comes with a regularization and renormalization scheme necessary to obtain finite matrix elements. One can compute $S$-matrix elements in an EFT from the EFT Lagrangian, with no additional external input, in the same way that one can compute in QED starting from the QED Lagrangian. In many cases, an EFT is the low-energy limit of a more fundamental theory (which might itself be an EFT), often called the ``full theory.''
Effective field theories allow you to compute an experimentally measurable quantity with some \emph{finite} error. Formally, an EFT has a small expansion parameter $\delta$, known as the power counting parameter. Calculations are done in an expansion to some order $n$ in $\delta$, so that the error is of order $\delta^{n+1}$. Determining the order in $\delta$ of a given diagram is done using what is referred to as a power counting formula.
A key aspect of EFTs is that one has a systematic expansion, with a well-defined procedure to compute higher order corrections in $\delta$. Thus one can compute to arbitrarily high order in $\delta$, and make the theoretical error as small as desired, by choosing $n$ sufficiently large. Such calculations might be extremely difficult in practice because higher order diagrams are hard to compute, but they are possible in principle. This is very different from modeling; e.g.\ the non-relativistic quark model provides a good description of hadron properties at the 25\% level, but it is \emph{not} the first term in a systematic expansion, and it is not possible to systematically improve the results.
In many examples, there are multiple expansion parameters $\delta_1$, $\delta_2$, etc.\ For example, in heavy quark effective theory (HQET)~\cite{Isgur:1989vq,Isgur:1989ed,Manohar:2000dt,Shifman:1987rj}, $b$ decay rates have an expansion in $\delta_1=\ensuremath{\Lambda_{\text{QCD}}}/m_b$ and $\delta_2=m_b/M_W$. In such cases, one has to determine which terms $\delta_1^{n_1} \delta_2^{n_2}$ must be retained to reach the desired accuracy goal. Usually, but not always, the expansion parameter is the ratio of a low-energy scale such as the external momentum $p$, or particle mass $m$, and a short-distance scale usually denoted by $\Lambda$, $\delta = p/\Lambda$. In many examples, one also has a perturbative expansion in a small coupling constant such as $\alpha_s(m_b)$ for HQET.
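For orientation, the two HQET expansion parameters for $b$ decays happen to be numerically comparable. The estimate below uses representative input values (the numbers, like $\ensuremath{\Lambda_{\text{QCD}}}=0.3$\,GeV, are illustrative, not precise):

```python
# Rough sizes of the two HQET expansion parameters for b decays,
# delta_1 = Lambda_QCD / m_b and delta_2 = m_b / M_W.
# Input values are representative, not precise.
Lambda_QCD = 0.3    # GeV, typical hadronic scale
m_b = 4.8           # GeV, bottom quark mass
M_W = 80.4          # GeV, W boson mass

delta1 = Lambda_QCD / m_b   # power corrections within HQET
delta2 = m_b / M_W          # corrections from integrating out the W
print(delta1, delta2)       # both ~ 0.06
```

Since $\delta_1 \approx \delta_2 \approx 0.06$, a calculation aiming at sub-percent accuracy must keep all second-order terms, $\delta_1^2$, $\delta_1\delta_2$ and $\delta_2^2$, on the same footing.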
EFT calculations to order $\delta^n$ depend on a finite number of Lagrangian parameters $N_n$. The number of parameters $N_n$ generally increases as $n$ increases. One gets parameter-free predictions in an EFT by calculating more experimentally measured quantities than $N_n$. For example, HQET computations to order $\ensuremath{\Lambda_{\text{QCD}}}^2/m_b^2$ depend on two parameters $\lambda_1$ and $\lambda_2$ of order $\ensuremath{\Lambda_{\text{QCD}}}^2$. There are many experimental quantities that can be computed to this order, such as the meson masses, form factors, and decay spectra~\cite{Manohar:2000dt}. Two pieces of data are used to fix $\lambda_1$ and $\lambda_2$, and then one has parameter-free predictions for all other quantities.
EFTs can be used even when the dynamics is non-perturbative. The most famous example of this type is chiral perturbation theory ($\chi$PT), which has an expansion in $p/\ensuremath{\Lambda_\chi}$, where $\ensuremath{\Lambda_\chi} \sim 1$\,GeV is the chiral symmetry breaking scale. Systematic computations in powers of $p/\ensuremath{\Lambda_\chi}$ are in excellent agreement with experiment~\cite{Gasser:1983yg,Pich:1995bw,Pich:cs,Weinberg:1978kz}.
The key ingredient used in formulating EFTs is locality, which leads to a separation of scales, i.e.\ factorization of the field theory amplitudes into short-distance Lagrangian coefficients and long-distance matrix elements. The short-distance coefficients are universal, and independent of the long-distance matrix elements computed~\cite{Wilson:1969zs}. The experimentally measured quantities $\O_i$ are then given as the product of these short-distance coefficients $C$ and long-distance matrix elements. Often, there are multiple coefficients and matrix elements, so that $\O_i = \sum_i C_{ij} M_j$. Sometimes, as in deep-inelastic scattering, $C$ and $M$ depend on a variable $x$ instead of an index $i$, and the sum becomes a convolution
\begin{align}
\label{1.1}
\O &= \int_0^1 \frac{\rd x}{x} C(x) M(x)\,.
\end{align}
The short distance coefficient $C(x)$ in this case is called the hard-scattering cross section, and can be computed in QCD perturbation theory. The long-distance matrix elements are the parton distribution functions, which are determined from experiment. The hard-scattering cross-section is universal, but the parton distribution functions depend on the hadronic target.
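As a toy illustration of the convolution in eqn~(\ref{1.1}), one can evaluate it numerically for made-up $C$ and $M$. The functions below are purely illustrative (chosen so the integrand is finite at $x=0$), not real QCD inputs:

```python
# Toy evaluation of O = \int_0^1 dx/x C(x) M(x) by the midpoint rule.
# C and M are made-up illustrative functions, not real QCD inputs.
def C(x):
    return x * (1.0 - x)   # toy "hard-scattering" coefficient

def M(x):
    return x               # toy "parton distribution"

n = 200_000  # midpoint-rule panels
O = sum(C(x) * M(x) / x for x in ((i + 0.5) / n for i in range(n))) / n
# analytically, \int_0^1 (1 - x) x dx = 1/6
print(O)
```

The same factorized structure, a perturbative kernel convolved with a non-perturbative input, recurs throughout the EFT examples below.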
EFTs allow one to organize calculations in an efficient way, and to estimate quantities using the power counting formula in combination with locality and gauge invariance. The tree-level application of EFTs is straightforward; it is simply a series expansion of the scattering amplitude in a small parameter. The true power lies in being able to compute radiative corrections. It is worth repeating that EFTs are full-fledged quantum theories, and one can compute measurable quantities such as $S$-matrix elements \emph{without any reference to or input from an underlying UV theory}. The 1933 Fermi theory of weak interactions~\cite{Fermi:1933jpa} was used long before the Standard Model was invented, or anyone knew about electroweak gauge bosons. Pion-nucleon scattering lengths \cite{Tomozawa:1966jm,Weinberg:1966kf} and $\pi\pi$ scattering lengths~\cite{Weinberg:1966kf} were computed in 1966, without any knowledge of QCD, quarks or gluons.
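As a concrete example of such a pre-QCD prediction, Weinberg's current-algebra result for the $I=0$ s-wave $\pi\pi$ scattering length, $a_0^0 = 7 m_\pi^2/(32\pi F_\pi^2)$ in units of $1/m_\pi$, can be evaluated with modern inputs. The snippet is only a numerical check, with approximate input values and the normalization $F_\pi \approx 92$\,MeV:

```python
import math

# Weinberg's 1966 current-algebra prediction for the I=0 s-wave
# pi-pi scattering length, a_0^0 = 7 m_pi^2 / (32 pi F_pi^2),
# quoted in units of 1/m_pi.  Input values are approximate.
m_pi = 139.57   # MeV, charged pion mass
F_pi = 92.1     # MeV, pion decay constant (F_pi ~ 92 MeV normalization)

a00 = 7 * m_pi**2 / (32 * math.pi * F_pi**2)
print(a00)      # ~ 0.16, Weinberg's classic value
```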
\bigskip
Here are some warm-up exercises which will be useful later.
\begin{exercisebn}
Show that for a \emph{connected} graph, $V-I+L=1$, where $V$ is the number of vertices, $I$ is the number of internal lines, and $L$ is the number of loops. What is the formula if the graph has $n$ connected components?
\end{exercisebn}
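The formula of the exercise is easy to verify numerically. The helper below is a hypothetical illustration (the names are ours): it counts independent loops as the edges left over after building a spanning forest with union-find, and returns both $V-I+L$ and the number of connected components $n$, which should agree:

```python
# Check V - I + L = n for sample graphs, where L (independent loops) is
# counted as the edges not used in a spanning forest, and n is the number
# of connected components found by union-find.
def euler_check(n_vertices, edges):
    parent = list(range(n_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    loops = 0
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            loops += 1          # edge closes an independent loop
        else:
            parent[ra] = rb     # edge extends the spanning forest
    components = sum(1 for v in range(n_vertices) if find(v) == v)
    return n_vertices - len(edges) + loops, components

# one-loop self-energy graph: 2 vertices joined by 2 internal lines
assert euler_check(2, [(0, 1), (0, 1)]) == (1, 1)
# two disconnected tadpoles: V = 2, I = 2 (self-loops), L = 2, n = 2
assert euler_check(2, [(0, 0), (1, 1)]) == (2, 2)
```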
\begin{exercisenn}
Work out the transformation of fermion bilinears $\overline \psi(\mathbf{x},t)\, \Gamma\, \chi(\mathbf{x},t)$ under $C$, $P$, $T$, where $\Gamma=P_L, P_R,\gamma^\mu P_L, \gamma^\mu P_R ,\sigma^{\mu \nu} P_L, \sigma^{\mu \nu}P_R$. Use your results to find the transformations under $CP$, $CT$, $PT$ and $CPT$.
\end{exercisenn}
\begin{exercisenn}\label{ex:nfierz}
Show that for $SU(N)$,
\begin{align}\label{sun}
[T^A]^\alpha_{\ \beta}\, [T^A]^{\lambda}_{\ \sigma} &= \frac12 \delta^\alpha_\sigma\, \delta^\lambda_\beta - \frac{1}{2N} \delta^\alpha_\beta \, \delta^\lambda_\sigma,
\end{align}
where the $SU(N)$ generators are normalized to $\text{Tr}\, T^A T^B=\delta^{AB}/2$. From this, show that
\begin{align}
\delta^\alpha_{\ \beta}\, \delta^{\lambda}_{\ \sigma} &= \frac{1}{N} \delta^\alpha_\sigma\, \delta^\lambda_\beta + 2 [T^A]^\alpha_{\ \sigma} \, [T^A]^\lambda_{\ \beta}, \nn
[T^A]^\alpha_{\ \beta}\, [T^A]^{\lambda}_{\ \sigma} &= \frac{N^2-1}{2N^2} \delta^\alpha_\sigma\, \delta^\lambda_\beta - \frac{1}{N} [T^A]^\alpha_{\ \sigma}\, [T^A]^\lambda_{\ \beta}.
\end{align}
\end{exercisenn}
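The completeness relation eqn~(\ref{sun}) is easy to verify numerically. The sketch below checks it for $N=2$, using $T^A = \sigma^A/2$ (normalized so $\text{Tr}\, T^A T^B = \delta^{AB}/2$); it assumes \texttt{numpy} is available:

```python
import numpy as np

# Numerical check of the SU(N) completeness relation for N = 2,
# [T^A]^a_b [T^A]^l_s = (1/2) d^a_s d^l_b - (1/2N) d^a_b d^l_s,
# with T^A = sigma^A / 2 (Tr T^A T^B = delta^{AB}/2).
N = 2
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s / 2 for s in sigma]
d = np.eye(N)

# lhs[a,b,l,s] = sum_A T^A[a,b] T^A[l,s]
lhs = sum(np.einsum('ab,ls->abls', t, t) for t in T)
rhs = 0.5 * np.einsum('as,lb->abls', d, d) \
      - np.einsum('ab,ls->abls', d, d) / (2 * N)
assert np.allclose(lhs, rhs)
```

The same arrays can be reused to confirm the rearranged identities of the exercise.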
\begin{exercisenb}\label{ex:spinfierz}
Spinor Fierz identities are relations of the form
\begin{align*}
(\overline A\, \Gamma_1\, B)(\overline C\, \Gamma_2\, D) = \sum_{ij} c_{ij} (\overline C\, \Gamma_i\, B)(\overline A\, \Gamma_j\, D)
\end{align*}
where $A,B,C,D$ are fermion fields, and $c_{ij}$ are numbers. They are much simpler if written in terms of chiral fields using $\Gamma_i=P_L, P_R,\gamma^\mu P_L, \gamma^\mu P_R ,\sigma^{\mu \nu} P_L, \sigma^{\mu \nu}P_R$, rather than Dirac fields. Work out the Fierz relations for
\begin{align*}
& (\overline A P_L B)(\overline C P_L D), &&
(\overline A \gamma^\mu P_L B)(\overline C \gamma_\mu P_L D), &&
(\overline A \sigma^{\mu \nu} P_L B)(\overline C \sigma_{\mu \nu} P_L D), \nn
& (\overline A P_L B)(\overline C P_R D),&&
(\overline A \gamma^\mu P_L B)(\overline C \gamma_\mu P_R D), &&
(\overline A \sigma^{\mu \nu} P_L B)(\overline C \sigma_{\mu \nu} P_R D).
\end{align*}
Do not forget the Fermi minus sign. The $P_R \otimes P_R$ identities are obtained from the $P_L \otimes P_L$ identities by using $L \leftrightarrow R$.
\end{exercisenb}
\chapter{Examples}
In this section, we discuss some qualitative examples of EFTs illustrating the use of power counting, symmetries such as gauge invariance, and dimensional analysis. Some of the examples are covered in detail in other lectures at this school.
\section{Hydrogen Atom}
A simple example that should be familiar to everyone is the computation of the hydrogen atom energy levels, as done in a quantum mechanics class. The Hamiltonian for an electron of mass $m_e$ interacting via a Coulomb potential with a proton treated as an infinitely heavy point particle is
\begin{align}
\label{1.2}
\mathscr{H} &= \frac{\mathbf{p}^2}{2m_e} - \frac{\alpha}{r}\,.
\end{align}
The binding energies, electromagnetic transition rates, etc.\ are computed from eqn~(\ref{1.2}). The fact that the proton is made up of quarks, weak interactions, neutrino masses, etc.\ are irrelevant, and we do not need any detailed input from QED or QCD. The only property of the proton we need is that its charge is $+1$; this can be measured at long distances from the Coulomb field.
Corrections to eqn~(\ref{1.2}) can be included in a systematic way. Proton recoil is included by replacing $m_e$ by the reduced mass $\mu=m_e m_p/(m_e+m_p)$, which gives corrections of order $m_e/m_p$. At this point, we have included one strong-interaction parameter, the mass $m_p$ of the proton, which can be determined from experiments done at low energies, i.e.\ at energies much below $\ensuremath{\Lambda_{\text{QCD}}}$.
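Numerically, the reduced-mass correction shifts the 1s binding energy $E_1 = \frac12 \mu c^2 \alpha^2$ from about $13.606$\,eV down to $13.598$\,eV. A minimal check, with approximate values of the constants:

```python
# Hydrogen 1s binding energy with the reduced-mass correction,
# E_1 = (1/2) mu c^2 alpha^2.  Input constants are approximate.
alpha = 1 / 137.035999      # fine-structure constant
me_c2 = 510998.95           # electron rest energy in eV
mp_over_me = 1836.15267     # proton-to-electron mass ratio

mu_c2 = me_c2 * mp_over_me / (1 + mp_over_me)   # reduced mass energy
E1 = 0.5 * mu_c2 * alpha**2
print(E1)   # ~ 13.598 eV; the infinite-proton-mass value is ~ 13.606 eV
```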
The hydrogen fine structure is calculated by including higher order (relativistic) corrections to the Hamiltonian, and gives corrections of relative order $\alpha^2$. The hydrogen hyperfine structure (the famous 21\,cm line) requires including the spin-spin interaction between the proton and electron, which depends on their magnetic moments. The proton magnetic moment $\mu_p=2.793 \,e \hbar/(2m_pc)$ is the second strong interaction parameter which now enters the calculation, and can be measured in low-energy NMR experiments. The electron magnetic moment is given by its Dirac value $-e \hbar/(2 m_e c)$.
Even more accurate calculations require additional non-perturbative parameters, as well as QED corrections. For example, the proton charge radius $r_p$, $g-2$ for the electron, and QED radiative corrections for the Lamb shift all enter to obtain the accuracy required to compare with precision experiments.
For calculations with an accuracy of $10^{-13}$\,eV $\sim 50$\,Hz, it is necessary to include the weak interactions. The weak interactions give only a tiny shift in the energy levels, but they are the leading contribution to atomic parity violation effects, because the strong and electromagnetic interactions conserve parity. Thus the relative size of various higher-order contributions depends on the quantity being computed---there is no universal rule that can be unthinkingly followed in all examples. Even in the simple hydrogen atom example, we have multiple expansion parameters $m_e/m_p$, $\alpha$, and $m_p/M_W$.
\section{Multipole Expansion in Electrostatics}\label{sec:mult}
A second familiar example is the multipole expansion from electrostatics,
\begin{align} \label{1.3}
V(\mathbf{r}) &= \frac{1}{r} \sum_{l,m} b_{lm} \frac{1}{r^l} Y_{lm}(\Omega)\,,
\end{align}
which will illustrate a number of useful points. A sample charge configuration with its electric field and equipotential lines is shown in Fig.~\ref{fig:multipole}.
\begin{figure}
\centering
\includegraphics[height=6cm]{figs/Field1}\hspace{0.2cm}
\includegraphics[height=6cm]{figs/Field2}
\caption{\label{fig:multipole} The electric field and potential lines for two point charges of the same sign. The right figure is given by zooming out the left figure.}
\end{figure}
While the discussion below is in the context of the electrostatics example, it holds equally well for other EFT examples. If the typical spacing between charges in Fig.~\ref{fig:multipole} is of order $a$, eqn~(\ref{1.3}) can be written as
\begin{align}\label{1.3a}
V(\mathbf{r}) &= \frac{1}{r} \sum_{l,m} c_{lm} \left( \frac{a}{r} \right)^l Y_{lm}(\Omega) \,,
& b_{lm} &\equiv c_{lm}a^l\,,
\end{align}
using dimensionless coefficients $c_{lm}$.
\begin{itemize}
\item As written, eqn~(\ref{1.3a}) has two scales $r$ and $a$, with $r \gg a$. $r$ is the long-distance, or infrared (IR) scale, and $a$ is the short-distance or ultraviolet (UV) scale. The small expansion parameter is the ratio of the IR and UV scales $\delta = a/r$. The expansion is useful if the two scales are widely separated, so that $\delta \ll 1$. We often work in momentum space, so that the IR scale is $p \sim 1/r$, the UV scale is $\Lambda \sim 1/a$, and $\delta = p/\Lambda$.
\item A far away (low-energy) observer measures the potential $V(r)$ as a function of $r$ and $\Omega=(\theta,\phi)$. By Fourier analysis, the observer can determine the short distance coefficients $b_{lm}=c_{lm}a^l \sim c_{lm}/\Lambda^l$. These coefficients are dimensionful, and suppressed by inverse powers of $\Lambda$ as $l$ increases.
\item More accurate values of the potential are given by including more multipoles. The terms in eqns~(\ref{1.3},\ref{1.3a}) get smaller as $l$ increases. A finite experimental resolution implies that $c_{lm}$ can only be determined up to a finite maximum value $l_\text{max}$ that depends on the resolution. More accurate experiments probe larger $l_\text{max}$.
\item One can factor out powers of $a$, as shown in eqn~(\ref{1.3a}), and use $c_{lm}$ instead of $b_{lm}$. Then $c_{lm}$ are order unity. This is dimensional analysis. There is no precise definition of $a$, and any other choice for $a$ of the same order of magnitude works equally well. The value of $a$ is inferred from observations by measuring $b_{lm}$ at large values of $r$, writing $b_{lm} = c_{lm} a^l$, and seeing whether some choice of $a$ makes all the $c_{lm}$ of order unity.
\item Some $c_{lm}$ can vanish, or be anomalously small due to an (approximate) symmetry of the underlying charge distribution. For example, cubic symmetry implies $c_{lm}=0$ unless $l \equiv 0$ (mod 2) and $m \equiv 0$ (mod 4). Measurements of $b_{lm}$ provide information about the short-distance structure of the charge distribution, and possible underlying symmetries.
\item More accurate measurements require higher order terms in the $l$ expansion. There are only a finite number of parameters, $(l_\text{max}+1)^2$, including all terms up to order $l_\text{max}$.
\item We can use the $l$ expansion without knowing the underlying short-distance scale $a$, as can be seen from the first form eqn~(\ref{1.3}). The parameters $b_{lm}$ are determined from the variation of $V(r)$ w.r.t.\ the IR scale $r$. Using $b_{lm}=c_{lm}a^l $ gives us an estimate of the size of the charge distribution. We can determine the short-distance scale $a$ by accurate measurements at the long-distance scale $r \gg a$, or by less sophisticated measurements at shorter distances $r$ comparable to $a$.
\end{itemize}
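These points can be made quantitative in a one-line example: for two equal charges at $z=\pm a/2$, the on-axis potential and its truncated multipole series differ by an amount that shrinks by roughly $(a/2r)^2$ for each multipole retained. The numbers below are an illustrative choice:

```python
# On-axis potential of two equal point charges q at z = +/- a/2
# (Gaussian units), compared with its truncated multipole series
# V(r) = sum_l q_l / r^{l+1}, q_l = sum_i q_i z_i^l.  Only even l
# survive by symmetry.
q, a, r = 1.0, 1.0, 5.0
exact = q / (r - a / 2) + q / (r + a / 2)

def multipole_sum(l_max):
    return sum((q * (a / 2)**l + q * (-a / 2)**l) / r**(l + 1)
               for l in range(l_max + 1))

errors = [abs(exact - multipole_sum(l)) for l in (0, 2, 4)]
print(errors)   # each step smaller by roughly (a/2r)^2 = 0.01
```

The geometric decrease of the truncation error is the power counting of the EFT expansion in miniature.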
The above analysis also applies to searches for BSM (beyond Standard Model) physics. Experiments are searching for new interactions at short distances $a \sim 1/\Lambda$, where $\Lambda$ is larger than the electroweak scale $v \sim 246$\,GeV. Two ways of determining the new physics scale are by making more precise measurements at low-energies, as is being done in $B$ physics experiments, or by making measurements at even higher energies, as at the LHC.
Subtleties can arise even in the simple electrostatic problem.
\begin{figure}
\centering
\includegraphics[height=7cm]{figs/Field3}
\caption{\label{fig:multipole2} A charge distribution with two intrinsic scales: $d$, the size of each clump, and $a$, the distance between clumps.}
\end{figure}
Consider the charge configuration shown in Fig.~\ref{fig:multipole2}, which is an example of a multiscale problem. The system has two intrinsic scales, a shorter scale $d$ characterizing the individual charge clumps, and a longer scale $a$ characterizing the separation between clumps. Measurements at large values of $r$ determine the scale $a$. Very accurate measurements of $c_{lm}$ can determine the shorter distance scale $d$. Discovering $d$ requires noticing patterns in the values of $c_{lm}$. It is much easier to determine $d$ if one knows ahead of time that there is a short distance scale $d$ that must be extracted from the data. $d$ can be easily determined by making measurements at shorter distances (higher energies) $d \ll r \ll a$, i.e.\ if one is allowed to measure the electrostatic potential between the two clumps of charges.
Multiscale problems are common in EFT applications. The Standard Model EFT (SMEFT) is an EFT used to characterize BSM physics. The theory has a scale $\Lambda$, of order a few TeV, which is the expected scale of BSM physics in the electroweak sector, as well as higher scales $\Lambda_{\slashed{L}}$ and $\Lambda_{\slashed{B}}$ at which lepton and baryon number are broken. $\chi$PT has the scales $m_\pi\sim 140$\,MeV, $m_K\sim 500$\, MeV and the chiral symmetry breaking scale $\ensuremath{\Lambda_\chi}\sim 1$\,GeV. HQET has the scales $m_b$, $m_c$ and $\ensuremath{\Lambda_{\text{QCD}}}$. EFT methods allow us to separate scales in a multi-scale problem, and organize the calculation in a systematic way.
\section{Fermi Theory of Weak Interactions}
The Fermi theory of weak interactions~\cite{Fermi:1933jpa} is an EFT for weak interactions at energies below the $W$ and $Z$ masses. It is a low-energy EFT constructed from the SM. The EFT power counting parameter is $\delta=p/M_W$, where $p$ is of order the momenta of particles in the weak decay. For example, in $\mu$ decay, $p$ is of order the muon mass. In hadronic weak decays, $p$ can be of order the hadron (or quark) masses, or of order $\ensuremath{\Lambda_{\text{QCD}}}$. The theory also has the usual perturbative expansions in $\alpha_s/(4\pi)$ and $\alpha/(4\pi)$. Historically, Fermi's theory was used for weak decay calculations even when the scales $M_W$ and $M_Z$ were not known. We will construct the Fermi interaction in Sec.~\ref{sec:fermi}.
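Anticipating the construction of Sec.~\ref{sec:fermi}, the size of the Fermi coupling can be estimated from the tree-level matching relation $G_F/\sqrt{2} = g^2/(8 M_W^2)$. The snippet below is only a numerical check with approximate inputs:

```python
import math

# Order-of-magnitude check of the tree-level matching relation
# G_F / sqrt(2) = g^2 / (8 M_W^2).  Inputs are approximate.
g = 0.652        # SU(2) gauge coupling
M_W = 80.38      # GeV, W boson mass

G_F = math.sqrt(2) * g**2 / (8 * M_W**2)
print(G_F)       # ~ 1.16e-5 GeV^-2; the measured value is 1.166e-5 GeV^-2
```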
\section{HQET/NRQCD}
Heavy quark effective theory (HQET) and non-relativistic QCD (NRQCD~\cite{Caswell:1985ui}) describe the low-energy dynamics of hadrons containing a heavy quark. The theories are applied to hadrons containing $b$ and $c$ quarks. In HQET, the expansion parameter is $\ensuremath{\Lambda_{\text{QCD}}}/m_Q$, where $m_Q=m_b,m_c$ is the mass of the heavy quark. The theory also has an expansion in powers of $\alpha_s(m_Q)/(4\pi)$. The matching from QCD to HQET can be done in perturbation theory, since $\alpha_s(m_Q)/(4\pi)$ is small, $\alpha_s(m_b) \sim 0.22$, $\alpha_s(m_b)/(4\pi) \sim0.02$. Calculations in HQET contain non-perturbative corrections, which can be included in a systematic way in an expansion in $\ensuremath{\Lambda_{\text{QCD}}}/m_Q$.
NRQCD is similar to HQET, but treats $Q \overline Q$ bound states such as the $\Upsilon$ meson. The heavy quarks move non-relativistically, and the expansion parameter is the velocity $v$ of the heavy quarks, which is of order $v \sim \alpha_s(m_Q)$.
HQET and NRQCD are covered in Professor T.~Mannel's lectures at this school~\cite{Mannel:fy}.
\section{Chiral Perturbation Theory}
Chiral perturbation theory describes the interactions of pions and nucleons at low momentum transfer. The theory dates from the 1960s, and the method closest to the modern way of calculating is due to Weinberg. $\chi$PT describes the low-energy dynamics of QCD. In this example, the full theory is known, but it is not possible to analytically compute the matching onto the EFT, since the matching is non-perturbative. Recent progress has been made in computing the matching numerically~\cite{Aoki:2016frl}. The two theories, QCD and $\chi$PT, are not written in terms of the same fields. The QCD Lagrangian has quark and gluon fields, whereas $\chi$PT has meson and baryon fields. The parameters of the chiral Lagrangian are usually fit to experiment.
Note that computations in $\chi$PT, such as Weinberg's calculation of $\pi\pi$ scattering, were done using $\chi$PT \emph{before} QCD was even invented. This example shows rather clearly that one can compute in an EFT without knowing the UV origin of the EFT.
The expansion parameter of $\chi$PT is $p/\ensuremath{\Lambda_\chi}$, where $\ensuremath{\Lambda_\chi} \sim 1$\,GeV is referred to as the scale of chiral symmetry breaking. $\chi$PT can be applied to baryons even though baryon masses are comparable to $\ensuremath{\Lambda_\chi}$. The reason is that baryon number is conserved, and so baryons can be treated as heavy particles analogous to heavy quarks in HQET as long as the momentum transfer is smaller than $\ensuremath{\Lambda_\chi}$. There is an interesting relation between the large-$N_c$ expansion of QCD and baryon chiral perturbation theory~\cite{Jenkins:1998wy,Manohar:1998xv}.
$\chi$PT is covered in Professor A.~Pich's lectures at this school~\cite{Pich:cs}.
\section{SCET}
Soft-collinear effective theory (SCET~\cite{Bauer:2000ew,Bauer:2000yr,Bauer:2001ct,Bauer:2001yt}) describes energetic QCD processes where the final states have small invariant mass compared to the center-of-mass energy of the collision, such as in jet production in high-energy $pp$ collisions. The underlying theory is once again QCD. The expansion parameters of SCET are $\ensuremath{\Lambda_{\text{QCD}}}/Q$, $M_J/Q$ and $\alpha_s(Q)/(4\pi)$, where $Q$ is the center-of-mass energy of the hard-scattering process, and $M_J$ is the invariant mass of the jet. SCET was originally developed for the decay of $B$ mesons to light particles, such as $B \to X_s \gamma$ and $B \to \pi \pi$.
SCET is covered in T.~Becher's lectures at this school~\cite{Becher:zt}.
\section{SMEFT}
SMEFT is the EFT constructed out of SM fields, and is used to analyze deviations from the SM, and search for BSM physics. The higher dimension operators in SMEFT are generated at a new physics scale $\Lambda$, which is not known. Nevertheless, one can still perform systematic computations in SMEFT, as should be clear from the multipole expansion example in Sec.~\ref{sec:mult}. SMEFT is discussed in Sec.~\ref{sec:smeft}.
\section{Reasons for using an EFT}
There are many reasons for using an EFT, which are summarized here. The points are treated in more detail later in these lectures, and also in the other lectures at this school.
\begin{itemize}
\item {\bf Every theory is an effective field theory.} For example, QED, the first relativistic quantum field theory developed, is an approximation to the SM. It is an EFT obtained from the SM by integrating out all particles other than the photon and electron.
\item {\bf EFTs simplify the computation by dealing with only one scale at a time:} For example the $B$ meson decay rate depends on $M_W$, $m_b$ and $\ensuremath{\Lambda_{\text{QCD}}}$, and one can get horribly complicated functions of the ratios of these scales. In an EFT, we deal with only one scale at a time, so there are no functions, only constants. This is done by using a series of theories, $\text{SM} \to \text{Fermi Theory} \to \text{HQET}$.
\item {\bf EFTs make symmetries manifest:} QCD has a spontaneously broken chiral symmetry, which is manifest in the chiral Lagrangian. Heavy quarks have an Isgur-Wise~\cite{Isgur:1989vq} spin-flavor symmetry under which $b \uparrow,\ b \downarrow, c \uparrow, c \downarrow$ transform as a four-dimensional representation of $SU(4)$. This symmetry is manifest in the HQET Lagrangian~\cite{Georgi:1990um}, which makes it easy to derive the consequences of this symmetry. Symmetries such as spin-flavor symmetry are only true for certain limits of QCD, and so are hidden in the QCD Lagrangian.
\item {\bf EFTs include only the relevant interactions:} EFTs have an explicit power counting estimate for the size of various interactions. Thus one can only include the relevant terms in the EFT Lagrangian needed to obtain the required accuracy of the final result.
\item {\bf Sum logs of the ratios of scales:} This allows one to use renormalization-group improved perturbation theory, which is more accurate, and has a larger range of validity than fixed order perturbation theory. For example, the semileptonic $B$ decay rate depends on powers
\begin{align}
\label{1.20}
\left( \frac{\alpha_s}{4\pi} \ln \frac{M_W}{m_b} \right)^n\,.
\end{align}
Even though $\alpha_s/(4\pi)$ is small, it is multiplied by a large log, and fixed order perturbation theory can break down. RG improved perturbation theory sums the corrections in eqn~(\ref{1.20}), so that the perturbation expansion is in powers of $\alpha_s/(4\pi)$, without a multiplicative log. The resummation of logs is even more important in SCET, where there are two powers of a log for each $\alpha_s$, the so-called Sudakov double logarithms.
The leading-log corrections are not small. For example, the strong interaction coupling changes by a factor of two between $M_Z$ and $m_b$,
\begin{align*}
\alpha_s(M_Z) &\sim 0.118, &
\alpha_s(m_b) &\sim 0.22.
\end{align*}
While summing logs might seem like a technical point, it is one of the main reasons why EFTs (or equivalent methods such as factorization formul\ae\ in QCD) are used in practice. In QCD collider processes, resummed cross sections can be dramatically different from fixed order ones.
\item {\bf Sum IR logs by converting them to UV logs:} This is related to the previous point. UV logs are summed by the renormalization group equations, since they are related to anomalous dimensions and renormalization counterterms. There is no such summation method for IR logs. However, IR logs in the full theory can be converted to UV logs in the EFT, which can then be summed by integrating the renormalization group equations in the EFT (see Sec.~\ref{sec:5.8}). QCD leads to a number of different effective theories, HQET, NRQCD, SCET and $\chi$PT. Each one is designed to treat a particular IR regime, and sum the corresponding IR logs.
\item {\bf Non-perturbative effects can be included in a systematic way:} In HQET, powers of $\ensuremath{\Lambda_{\text{QCD}}}$ are included through the matrix elements of higher dimension operators, giving the $(\ensuremath{\Lambda_{\text{QCD}}}/m_b)^n$ expansion.
\item {\bf Efficient method to characterize new physics:} EFTs provide an efficient way to characterize new physics, in terms of coefficients of higher dimension operators. This method includes the constraints of locality, gauge invariance and Lorentz invariance. All new physics theories can be treated in a unified framework using a few operator coefficients.
\end{itemize}
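As an illustration of the running quoted above, the factor-of-two change in $\alpha_s$ between $M_Z$ and $m_b$ already follows from the one-loop renormalization group equation. The snippet keeps $n_f=5$ throughout and ignores quark thresholds and higher loops, so the answer is only approximate:

```python
import math

# One-loop running of alpha_s from M_Z down to m_b with n_f = 5:
# 1/alpha_s(mu) = 1/alpha_s(M_Z) + (b0 / 2 pi) ln(mu / M_Z),
# b0 = 11 - 2 n_f / 3.  Thresholds and higher loops are ignored.
alpha_MZ, M_Z, m_b = 0.118, 91.19, 4.8
b0 = 11 - 2 * 5 / 3

alpha_mb = 1 / (1 / alpha_MZ + b0 / (2 * math.pi) * math.log(m_b / M_Z))
print(alpha_mb)   # ~ 0.2, roughly twice alpha_s(M_Z)
```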
\chapter{The EFT Lagrangian}
\section{Degrees of Freedom}
To write down an EFT Lagrangian, we first need to determine the dynamical degrees of freedom, and thus the field content of the Lagrangian. In cases where the EFT is a weakly coupled low-energy version of a UV theory, this is simple---just retain the light fields. However, in many cases, identifying the degrees of freedom in an EFT can be non-trivial.
NRQCD describes $Q \bar Q$ bound states, and is an EFT which follows from QCD. One formulation of NRQCD has multiple gluon modes, soft and ultrasoft gluons, which describe different momentum regions contributing to the $Q \bar Q$ interaction. SCET describes the interactions of energetic particles, and is applicable to processes such as jet production by $q \bar q \to q \bar q$ interactions. It has collinear gluon fields for each energetic particle direction, as well as ultrasoft gluon fields.
A famous example which shows that there is no unique ``correct'' choice of fields to use in an interacting quantum field theory is the sine-Gordon -- Thirring model duality in $1+1$ dimensions~\cite{Coleman:1974bu}. The sine-Gordon model is a bosonic theory of a real scalar field with Lagrangian
\begin{align}
\mathscr{L} &= \frac12 \partial_\mu \phi\, \partial^\mu \phi + \frac{\alpha}{\beta^2} \cos \beta\phi,
\end{align}
and the Thirring model is a fermionic theory of a Dirac fermion with Lagrangian
\begin{align}
\mathscr{L} &= \bar \psi \left( i \slashed{\partial} - m \right) \psi -\frac12
g \left(\bar \psi \gamma^\mu \psi \right)^2.
\end{align}
Coleman showed that the two theories were \emph{identical}; they map into each other with the couplings related by
\begin{align}\label{13.3}
\frac{\beta^2 }{ 4 \pi} &= \frac{1 }{ 1 + g/\pi}.
\end{align}
The fermion in the Thirring model is the soliton of the sine-Gordon model, and the boson of the sine-Gordon model is a fermion-antifermion bound state of the Thirring model. The duality exchanges strongly and weakly coupled theories. This example also shows that one cannot distinguish between elementary and composite fields in an interacting QFT.
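The map eqn~(\ref{13.3}) can be made concrete: $g=0$ (a free Thirring fermion) corresponds to $\beta^2=4\pi$, the free-fermion point of the sine-Gordon model, and increasing $g$ drives $\beta^2$ down, illustrating the strong-weak exchange. A minimal numerical check (function name is ours):

```python
import math

# The coupling map of eqn (13.3): beta^2 / (4 pi) = 1 / (1 + g / pi).
def beta_squared(g):
    return 4 * math.pi / (1 + g / math.pi)

# g = 0 (free Thirring fermion) maps to the free-fermion point beta^2 = 4 pi;
# larger g (strongly coupled fermions) gives smaller beta^2
# (weakly coupled bosons).
assert abs(beta_squared(0.0) - 4 * math.pi) < 1e-12
assert beta_squared(10.0) < beta_squared(1.0) < beta_squared(0.0)
```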
\section{Renormalization}
A quick summary of renormalization in QCD is presented here, to define the notation and procedure we will use for EFTs. A detailed discussion is given in Neubert's lectures~\cite{Neubert:dp}.
QCD is a quantum field theory with Lagrangian
\begin{align}
\mathscr{L} =-\frac14 F^A_{\mu \nu} F^{A\mu \nu} + \sum_{r=1}^{N_F} \left[ \overline \psi_r i \slashed{D} \psi_r - m_r \overline \psi_r \psi_r \right]
+ \frac{\theta g^2}{32 \pi^2} F^A_{\mu \nu} \widetilde F^{A\mu \nu}\,,
\label{3}
\end{align}
where $N_F$ is the number of flavors. The covariant derivative is $D_\mu = \partial_\mu + i g A_\mu$, and the $SU(3)$ gauge field is a matrix $A_\mu=T^A A^A_\mu$, where the generators are normalized to $\text{Tr}\, T^A T^B=\delta^{AB}/2$. Experimental limits on the neutron electric dipole moment give $\theta \lesssim 10^{-10}$, and we will neglect it here.
The basic observables in a QFT are $S$-matrix elements---on-shell scattering amplitudes for particles with physical polarizations. Green functions of $\psi$ and $A_\mu$, which are the correlation functions of products of fields, are gauge dependent and not experimental observables. The QCD Lagrangian eqn~(\ref{3}) is written in terms of fields, but \emph{fields are not particles}. The relation between $S$-matrix elements of particles and Green functions for fields is through the LSZ reduction formula~\cite{Lehmann:1954rq} explained in Sec.~\ref{sec:LSZ}. One can use \emph{any} field $\phi(x)$ to compute the $S$-matrix element involving a particle state $\ket{p}$ as long as
\begin{align}
\braket{p | \phi(x) | 0} \not=0\,,
\label{4}
\end{align}
i.e.\ the field can create a one-particle state from the vacuum.
Radiative corrections in QCD are infinite, and we need a regularization and renormalization scheme to compute finite $S$-matrix elements. The regularization and renormalization procedure is part of the \emph{definition} of the theory. The standard method used in modern field theory computations is to use dimensional regularization and the \ensuremath{\overline{\text{MS}}}\ subtraction scheme. We will use dimensional regularization in $d=4-2\epsilon$ dimensions. A brief summary of the procedure is given here.
The QCD Lagrangian for a single flavor in the $CP$-conserving limit (so that the $\theta$ term is omitted) that gives finite $S$-matrix elements is
\begin{subequations}
\begin{align}
\mathscr{L} &=-\frac14 F^A_{0\mu \nu} F_0^{A\mu \nu} + \overline \psi_0 i (\slashed{\partial} + i g_0 \slashed{A}_0) \psi_0 - m_0 \overline \psi_0 \psi_0
\label{5a} \\
&=-\frac14 Z_A F^A_{\mu \nu} F^{A\mu \nu} + Z_\psi \overline \psi i (\slashed{\partial} + i g \mu^\epsilon Z_g Z_A^{1/2}\slashed{A}) \psi - m Z_m Z_\psi \overline \psi \psi
\label{5}
\end{align}
\end{subequations}
where $\psi_0$, $A_0$, $g_0$ and $m_0$ are the bare fields and parameters, which are related to the renormalized fields and parameters
$\psi$, $A$, $g$ and $m$ by
\begin{align}
\psi_0 &= Z_\psi^{1/2} \psi, &
A_{0\mu} &= Z_A^{1/2} A_\mu, &
g_0 &= Z_g g \mu^\epsilon, &
m_0 &= Z_m m.
\label{6}
\end{align}
The renormalization factors $Z_a$ have an expansion in inverse powers of $\epsilon$,
\begin{align}
Z_a &= 1 + \sum_{k=1}^\infty \frac{Z^{(k)}_a}{\epsilon^k}, & a&=\psi,A,g,m,
\label{7}
\end{align}
with coefficients which have an expansion in powers of $\alpha_s=g^2/(4\pi)$,
\begin{align}
Z^{(k)}_a &= \sum_{r=1}^\infty Z^{(k,r)}_a \left( \frac{\alpha_s}{4\pi}\right)^r\,.
\label{8}
\end{align}
The renormalized parameters $g$ and $m$ are finite, and depend on $\mu$. The renormalization factors $Z_a$ are chosen to give finite $S$-matrix elements.
Separating out the $1$ from $Z_a$, the Lagrangian eqn~(\ref{5}) can be written as
\begin{align}
\mathscr{L}
&=-\frac14 F^A_{\mu \nu} F^{A\mu \nu} + \overline \psi i (\slashed{\partial} + i g \mu^\epsilon \slashed{A}) \psi - m \overline \psi \psi
+ \text{c.t.}
\label{9}
\end{align}
where $\text{c.t.}$ denotes the renormalization counterterms which are pure poles in $1/\epsilon$,
\begin{align}
\mathscr{L}_\text{c.t.}
&=-\frac14 \left(Z_A-1\right) F^A_{\mu \nu} F^{A\mu \nu} + \left(Z_\psi-1\right) \overline \psi i \slashed{\partial} \psi
+\left(Z_\psi Z_g Z_A^{1/2} -1\right) \overline \psi i g\mu^\epsilon \slashed{A} \psi \nn
& - \left(Z_\psi Z_m -1\right) m \overline \psi \psi \,.
\label{10}
\end{align}
The Lagrangian eqn~(\ref{5a}) contains 2 bare parameters, $g_0$ and $m_0$. The Lagrangian eqn~(\ref{5}) contains two renormalized parameters $g(\mu)$, $m(\mu)$ and the renormalization scale $\mu$. As discussed in Neubert's lectures~\cite{Neubert:dp}, the renormalization group equation, which follows from the condition that the theory is $\mu$-independent, implies that there are only two free parameters, for example $g(\mu_0)$ and $m(\mu_0)$ at some chosen reference scale $\mu_0$. The renormalization group equations determine how $m$ and $g$ must vary with $\mu$ to keep the observables the same. We will see later how the freedom to vary $\mu$ allows us to sum logarithms of the ratio of scales. The variation of renormalized parameters with $\mu$ is sometimes referred to as the renormalization group flow.
The bare parameters in the starting Lagrangian eqn~(\ref{5a}) are infinite. The infinities cancel with those in loop graphs, so that $S$-matrix elements computed are finite. Alternatively, one starts with the Lagrangian split up into the renormalized Lagrangian with finite parameters plus counterterms, as in eqn~(\ref{9}). The infinite parts of loop graphs computed from the renormalized Lagrangian are cancelled by the counterterm contributions, to give finite $S$-matrix elements. The two methods are equivalent, and give the usual renormalization procedure in the \ensuremath{\overline{\text{MS}}}\ scheme. Usually, one computes in perturbation theory in the coupling $g$, and determines the renormalization factors $Z_a$ order by order in $g$ to ensure finiteness of the $S$-matrix.
\begin{exercise}
Compute the mass renormalization factor $Z_m$ in QCD at one loop. Use this to determine the one-loop mass anomalous dimension $\gamma_m$,
\begin{align}
\mu \frac{\rd m}{\rd \mu} &= \gamma_m m,
\end{align}
by differentiating $m_0=Z_m m$, and noting that $m_0$ is $\mu$-independent.
\end{exercise}
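The result of the exercise can be cross-checked numerically (an illustration, not a substitute for the loop calculation). At one loop, $\gamma_m = -3 C_F \alpha_s/(2\pi)$ and $\mu\, \rd \alpha_s/\rd\mu = -b_0 \alpha_s^2/(2\pi)$ with $b_0 = 11 - 2n_f/3$, so the RGE has the closed-form solution $m(\mu) = m(\mu_0)\left[\alpha_s(\mu)/\alpha_s(\mu_0)\right]^{3C_F/b_0}$. The sketch below integrates the RGE directly and compares with this solution; the input values of $\alpha_s$ and $m$ are illustrative:

```python
import math

CF, nf = 4.0 / 3.0, 5
b0 = 11 - 2 * nf / 3          # one-loop beta coefficient, 23/3 for nf = 5

def alpha_s(mu, mu0=4.18, a0=0.22):
    """One-loop running: 1/a(mu) = 1/a(mu0) + b0/(2 pi) ln(mu/mu0)."""
    return 1.0 / (1.0 / a0 + b0 / (2 * math.pi) * math.log(mu / mu0))

def gamma_m(a):
    """One-loop MS-bar mass anomalous dimension, gamma_m = -3 CF a / (2 pi)."""
    return -3 * CF * a / (2 * math.pi)

# Integrate mu dm/dmu = gamma_m(alpha_s(mu)) m from mu0 up to mu1
# by midpoint steps in ln(mu).
mu0, mu1, steps = 4.18, 91.19, 20000
m = 4.18                       # illustrative starting value in GeV
dlog = math.log(mu1 / mu0) / steps
for i in range(steps):
    mu = mu0 * math.exp((i + 0.5) * dlog)
    m *= math.exp(gamma_m(alpha_s(mu)) * dlog)

# Closed-form one-loop solution: exponent 3 CF / b0 = 12/23 for nf = 5.
m_analytic = 4.18 * (alpha_s(mu1) / alpha_s(mu0)) ** (3 * CF / b0)

assert abs(m - m_analytic) / m_analytic < 1e-4
```

Since $\gamma_m < 0$, the running mass decreases as $\mu$ increases.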
\section{Determining the couplings}\label{sec:3.3}
How do we determine the parameters in the Lagrangian? The bare Lagrangian parameters are infinite, and cannot be measured directly. The renormalized Lagrangian parameters are finite. However, in general, they are scheme dependent, and also not directly measurable. In QCD, the \ensuremath{\overline{\text{MS}}}\ quark mass $m_b(\mu)$ is not a measurable quantity. Often, people refer to the quark pole mass $m_b^\text{pole}$ defined by the location of the pole in the quark propagator in perturbation theory. It is related to the \ensuremath{\overline{\text{MS}}}\ mass by
\begin{align}
\label{polemass}
m_b^\text{pole} &= m_b(m_b) \left[ 1 + \frac{4 \alpha_s(m_b)}{3 \pi} + \ldots \right]\,.
\end{align}
$m_b^\text{pole}$ is independent of $\mu$, and hence is renormalization-group invariant.
Nevertheless, $m_b^\text{pole}$ is not measurable---quarks are confined, and there is no pole in gauge-invariant correlation functions at $m_b^\text{pole}$. Instead, one determines the $B$ meson mass $m_B$ experimentally. The quark mass $m_b(\mu)$ or $m_b^\text{pole}$ is fixed by adjusting it till it reproduces the measured meson mass. To actually do this requires a difficult non-perturbative calculation, since $m_b^\text{pole}$ and $m_B$ differ by order $\ensuremath{\Lambda_{\text{QCD}}}$ effects. In practice, one uses observables which are easier to compute theoretically, such as the electron energy spectrum in inclusive $B$ decays, or the $e^+ e^- \to b \overline b$ cross section near threshold, to determine the quark mass. Similarly, the gauge coupling $g(\mu)$ is not an observable, and must be determined indirectly.
\begin{exercise}
Verify the one-loop relation between the \ensuremath{\overline{\text{MS}}}\ and pole masses, eqn~(\ref{polemass}).
\end{exercise}
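Numerically, the one-loop shift in eqn~(\ref{polemass}) is sizeable. A quick estimate, with illustrative input values $m_b(m_b) \approx 4.18\,$GeV and $\alpha_s(m_b) \approx 0.22$:

```python
import math

m_b_msbar = 4.18     # illustrative MS-bar mass m_b(m_b), in GeV
alpha_s_mb = 0.22    # illustrative value of alpha_s(m_b)

# One-loop conversion, pole = msbar * (1 + 4 alpha_s / (3 pi) + ...)
m_b_pole = m_b_msbar * (1 + 4 * alpha_s_mb / (3 * math.pi))

print(f"m_b_pole ~ {m_b_pole:.2f} GeV, shift ~ {m_b_pole - m_b_msbar:.2f} GeV")
```

The shift is a few hundred MeV, i.e.\ of order $\ensuremath{\Lambda_{\text{QCD}}}$, consistent with the discussion below.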
Even in QED, the Lagrangian parameters are not direct observables. QED has two Lagrangian parameters, and two experimental inputs are used to fix these parameters. One can measure the electron mass $m_e^{\text{obs}}$ (which is the pole mass, since electrons are not confined), and the electrostatic potential at large distances, $-\alpha_\text{QED}/r$. These two measurements fix the values of the Lagrangian parameters $m_e(\mu)$ and $e(\mu)$. All other observables, such as positronium energy levels, the Bhabha scattering cross section, etc.\ are then determined, since they are functions of $m_e(\mu)$ and $e(\mu)$.
The number of Lagrangian parameters $N_\mathscr{L}$ tells you how many inputs are needed to completely fix the predictions of the theory. In general, one computes a set of $N_O$ observables $\left\{O_i\right\}$, $i=1,\ldots,N_O$, in terms of the Lagrangian parameters. $N_\mathscr{L}$ of the observables are used to fix the parameters, and the remaining $N_O-N_\mathscr{L}$ observables are predictions of the theory:
\begin{align}
\label{1.12}
\underbrace{O_1,\ldots,O_{N_\mathscr{L}} }_{\text{observables}} \quad \longrightarrow \quad \underbrace{m_i(\mu), g(\mu), \ldots}_{\text{parameters}} \quad \longrightarrow \quad \underbrace{ O_{N_\mathscr{L}+1},\ldots}_{\text{predictions}} \,.
\end{align}
The Lagrangian plays the role of an intermediary, allowing one to relate observables to each other. The $S$-matrix program of the 1960's avoided any use of the Lagrangian, and related observables directly to each other using analyticity and unitarity.
Given a QFT Lagrangian $\mathscr{L}$, including a renormalization procedure, you can calculate $S$-matrix elements. No additional outside input is needed, and the calculation is often automated. For example, in QED, it is not necessary to know that the theory is the low-energy limit of the Standard Model (SM), or to consult an oracle to obtain the value of certain loop graphs. All predictions of the theory are encoded in the Lagrangian. A renormalizable theory has only a finite number of terms in the Lagrangian, and hence only a finite number of parameters. One can compute observables to arbitrary accuracy, at least in principle, and obtain parameter-free predictions.
The above discussion applies to EFTs as well, including the last bit about a finite number of parameters, provided that one works to a \emph{finite} accuracy $\delta^n$ in the power counting parameter. As an example, consider the Fermi theory of weak interactions, which we discuss in more detail in Sec.~\ref{sec:fermi}. The EFT Lagrangian in the lepton sector is
\begin{align}
\mathscr{L} &= \mathscr{L}_\text{QED} - \frac{4 G_F}{\sqrt 2} (\overline e \gamma^\mu P_L \nu_e)( \overline \nu_\mu \gamma_\mu P_L \mu) + \ldots\,,
\label{10a}
\end{align}
where $P_L=(1-\gamma_5)/2$, and $G_F=1.166 \times 10^{-5}\,\text{GeV}^{-2}$ has dimensions of inverse mass-squared. As in QCD or QED, one can calculate $\mu$-decay directly using eqn~(\ref{10a}) without using any external input, such as knowing eqn~(\ref{10a}) was obtained from the low-energy limit of the SM. The theory is renormalized as in eqn~(\ref{5a},\ref{5},\ref{6}). The main difference is that the Lagrangian eqn~(\ref{10a}) has an infinite series of operators (only one is shown explicitly), with coefficients which absorb the divergences of loop graphs. The expansion parameter of the theory is $\delta=G_F p^2$.
To a fixed order in $\delta$, the theory is just like a regular QFT. However, if one wants to work to higher accuracy, more operators must be included in $\mathscr{L}$, so that there are more parameters. If one insists on infinitely precise results, then there are an infinite number of terms and an infinite number of parameters. Thus an EFT is just like a regular QFT, supplemented by a power counting argument that tells you what terms to retain to a given order in $\delta$. The number of experimental inputs used to fix the Lagrangian parameters increases with the order in $\delta$.
In the $\mu$-decay example, $G_F$ can be fixed by the muon lifetime. The Fermi theory then gives a parameter-free prediction for the decay distributions, such as the electron energy spectrum, electron polarization, etc.
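At tree level, the muon decay rate computed from eqn~(\ref{10a}) is $\Gamma_\mu = G_F^2 m_\mu^5/(192\pi^3)$, neglecting the electron mass. A quick numerical check, with the muon mass as an illustrative input:

```python
import math

G_F = 1.166e-5       # Fermi constant in GeV^-2 (value quoted in the text)
m_mu = 0.10566       # muon mass in GeV, illustrative input
hbar = 6.582e-25     # GeV * s

# Tree-level muon decay rate: Gamma = G_F^2 m_mu^5 / (192 pi^3)
gamma = G_F**2 * m_mu**5 / (192 * math.pi**3)
tau = hbar / gamma   # lifetime in seconds

print(f"tau_mu ~ {tau:.3e} s")
```

The tree-level result is within a percent of the measured lifetime $2.197\times 10^{-6}\,$s. The EFT expansion parameter here is tiny, $\delta \sim G_F m_\mu^2 \approx 1.3\times 10^{-7}$, so the higher dimension operators in eqn~(\ref{10a}) are negligible for this observable.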
The parameters of the EFT Lagrangian eqn~(\ref{10a}) can be obtained from low-energy data. The divergence structure of the EFT is \emph{different} from that of the full theory, of which the EFT is a low-energy limit. This is not a minor technicality, but a fundamental difference. It is crucial in many practical applications, where IR logs can be summed by transitioning to an EFT.
In cases where the EFT is the low-energy limit of a weakly interacting full theory, e.g. the Fermi theory as the low-energy limit of the SM, one constructs the EFT Lagrangian to reproduce the same $S$-matrix as the original theory, a procedure known as matching. The full and effective theory are equivalent; they are different ways of computing the same observables. The change in renormalization properties means that fields in the EFT are not the same as fields in the full theory, even though they are often denoted by the same symbol. Thus the electron field $e$ in eqn~(\ref{10a}) is not the same as the field $e$ in the SM Lagrangian. The two agree at tree-level, but at higher orders, one has to explicitly compute the relation between the two. A given high-energy theory can lead to multiple EFTs, depending on the physical setting. For example, $\chi$PT, HQET, NRQCD and SCET are all EFTs based on QCD.
\section{Inputs}\label{sec:inputs}
I said at the start of the lectures that it was ``obvious'' that low-energy dynamics was insensitive to the short-distance properties of the theory. This is true provided the input parameters are obtained from low-energy processes computed using the EFT. QED plus QCD with five flavors of quarks is the low-energy theory of the SM below the electroweak scale. The input couplings can be determined from measurements below 100\,GeV.
Now suppose, instead, that the input couplings are fixed at high-energies, and their low-energy values are determined by computation. Given the QED coupling $\alpha(\mu_H)$ at a scale $\mu_H > m_t$ above the top-quark mass, for example, we can determine the low-energy value $\alpha(\mu_L)$ for $\mu_L$ smaller than $m_t$. In this case, $\alpha(\mu_L)$ is sensitive to high energy parameters, such as heavy masses including the top-quark mass. For example, if we vary the top-quark mass, then
\begin{align}
\label{3.14}
m_{t}\, \frac{{\rm d}}{ {\rm d} m_{t}} \left[ \frac{1 }{ {\alpha}(\mu_L)}\right] = - \frac{2 N_c Q_t^2 }{ 3 \pi} = -\frac{8 }{ 9 \pi}\,,
\end{align}
where $\mu_L < m_t$, and we have kept $\alpha(\mu_H)$ for $\mu_H > m_t$ fixed. Similarly, if we keep the strong coupling $\alpha_s(\mu_H)$ fixed for $\mu_H > m_t$, then the proton mass is sensitive to $m_t$,
\begin{align}
\label{3.15}
m_p \propto m_t^{2/27}.
\end{align}
The bridge-builder mentioned in the introduction would have a hard time designing a bridge if the density of steel depended on the top-quark mass via eqn~(\ref{3.15}). Luckily, knowing about the existence of top-quarks is not necessary. The density of steel is an experimental observable, and its measured value is used in the design. The density is measured in lab experiments at low-energies, on length scales of order meters, not in LHC collisions. How the density depends on $m_t$ or possible BSM physics is irrelevant. There is no sensitivity to high-scale physics if the inputs to low-energy calculations are from low-energy measurements. The short distance UV parameters are not ``more fundamental'' than the long-distance ones. They are just parameters. For example, in QED, is $\alpha(\mu > m_t)$ more fundamental than $\alpha_{\text{QED}}=1/(137.036)$ given by measuring the Coulomb potential as $r \to \infty$? It is $\alpha_{\text{QED}}$, for example, which is measured in quantum Hall effect experiments.
Combining low-energy EFTs with high-energy inputs mixes different scales, and leads to problems. The natural parameters of the EFT are those measured at low energies. Using high-energy inputs forces the EFT to use inputs that do not fit naturally into the framework of the theory. We will return to this point in Sec.~\ref{sec:naturalness}.
Symmetry restrictions from the high-energy theory feed down to the low-energy theory. QCD (with $\theta=0$) preserves $C$, $P$ and $CP$, and hence so does $\chi$PT. Causality in QFT leads to the spin-statistics theorem. This is a restriction which is imposed in quantum mechanics, and follows because the quantum theory is the non-relativistic limit of a QFT.
\begin{exercise}
Verify eqn~(\ref{3.14}) and eqn~(\ref{3.15}).
\end{exercise}
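The $2/27$ exponent in eqn~(\ref{3.15}) follows from one-loop threshold matching: crossing the top threshold shifts $b_0 = 11 - 2n_f/3$ by $-2/3$, and $m_p \propto \ensuremath{\Lambda_{\text{QCD}}}$, which is set by the three-flavor coupling through $\ln \Lambda = \ln\mu - 2\pi/[b_0 \alpha_s(\mu)]$, so $\rd \ln\Lambda/\rd \ln m_t = -\Delta b_0/b_0^{(3)}$. A bookkeeping check of this matching logic with exact rationals (a sketch, not a full RG calculation):

```python
from fractions import Fraction

def b0(nf):
    """One-loop QCD beta coefficient, b0 = 11 - 2 nf / 3."""
    return Fraction(11) - Fraction(2 * nf, 3)

# Varying m_t at fixed alpha_s(mu_H) shifts 1/alpha_s below the top
# threshold by (b0(6) - b0(5)) / (2 pi) per unit of ln(m_t).
delta = b0(6) - b0(5)

# Lambda_QCD of the 3-flavor theory then responds as
# d ln(Lambda) / d ln(m_t) = -delta / b0(3).
exponent = -delta / b0(3)

assert delta == Fraction(-2, 3)
assert exponent == Fraction(2, 27)   # m_p ~ Lambda ~ m_t^(2/27)
```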
\section{Integrating Out Degrees of Freedom}
The old-fashioned view is that EFTs are given by integrating out high momentum modes of the original theory, and thinning out degrees of freedom as one evolves from the UV to the IR~\cite{Kadanoff:1966wm,Wilson:1971dc,Wilson:1973jj}. That is not what happens in the EFTs discussed in this school, which are used to describe experimentally observable phenomena, and it is not the correct interpretation of renormalization-group evolution in these theories.
In SCET, there are different collinear sectors of the theory labelled by null vectors $n_i=(1,\mathbf{n}_i)$, $\mathbf{n}_i^2=1$. Each collinear sector of SCET is the same as the full QCD Lagrangian, so SCET has multiple copies of the original QCD theory, as well as ultrasoft modes that couple the various collinear sectors. The number of degrees of freedom in SCET is much larger than in the original QCD theory.
In $\chi$PT, the EFT is written in terms of meson and baryon fields, whereas QCD is given in terms of quarks and gluons. Mesons and baryons are created by composite operators of quarks and gluons, but there is no sense in which the EFT is given by integrating out short-distance quarks and gluons.
The renormalization group equations are a consequence of the $\mu$ independence of the theory. Thus varying $\mu$ changes nothing measurable; $S$-matrix elements are $\mu$ independent. Nothing is being integrated out as $\mu$ is varied, and the theory at different values of $\mu$ is the same. The degrees of freedom do not change with $\mu$. The main purpose of the renormalization group equations is to sum logs of ratios of scales, as we will see in Sec.~\ref{sec:rge}.
It is much better to think of EFTs in terms of the physical problem you are trying to solve, rather than as the limit of some other theory. The EFT is then constructed out of the dynamical degrees of freedom (fields) that are relevant for the problem. The focus should be on what you want, not on what you don't want.
\chapter{Power Counting}
The EFT functional integral is
\begin{align}
\int \mathcal{D}\phi \ e^{i S}\,,
\end{align}
so that the action $S$ is dimensionless. The EFT action is the integral of a local Lagrangian density
\begin{align}
S &= \int \rd^\d x\ \mathscr{L}(x)\,,
\end{align}
(neglecting topological terms), so that in $\d$ spacetime dimensions, the Lagrangian density has mass dimension
$\d$,
\begin{align}
\left[ \mathscr{L}(x) \right] &= \d\,,
\end{align}
and is the sum
\begin{align}
\mathscr{L}(x) &= \sum_i c_i \, O_i(x)\,,
\end{align}
of local, gauge invariant, and Lorentz invariant operators $O_i$ with coefficients $c_i$. The operator dimension will be denoted by $\mathscr{D}$, and its coefficient has dimension $\d-\mathscr{D}$.
The fermion and scalar kinetic terms are
\begin{align}
S &= \int \rd^\d x\ \bar \psi \ i \slashed{\partial}\ \psi, & S &= \int \rd^\d x\ \frac 12 \partial_\mu \phi\, \partial^\mu \phi,
\end{align}
so that dimensions of fermion and scalar fields are
\begin{align}
\left[ \psi \right] &= \frac12 (\d-1), &
\left[ \phi \right] &= \frac12 (\d-2).
\end{align}
The two terms in the covariant derivative $D_\mu=\partial_\mu + i g A_\mu$ have the same dimension, so
\begin{align}
\label{2.7}
\left[D_\mu \right] &=1, &
\left[ gA_\mu \right] &= 1 \,.
\end{align}
The gauge field strength $X_{\mu\nu}=\partial_\mu A_\nu - \partial_\nu A_\mu + \ldots$ has a single derivative of $A_\mu$, so $A_\mu$ has the same dimension as a scalar field. This determines, using eqn~(\ref{2.7}), the dimension of the gauge coupling $g$,
\begin{align}
\left[A_\mu \right] &= \frac12 (\d-2), &
\left[g\right]=\frac12(4-\d)\,.
\end{align}
In $\d=4$ spacetime dimensions,
\begin{align}
\left[ \phi \right] &= 1, &
\left[ \psi \right] &= 3/2, &
\left[ A_\mu \right] &= 1, &
\left[D \right] &=1, &
[g] &=0 \,.
\end{align}
In $\d=4-2\epsilon$ dimensions, $\left[g\right]=\epsilon$, so in dimensional regularization, one usually uses a dimensionless coupling $g$ and writes the coupling in the Lagrangian as $g \mu^{\epsilon}$, as in eqn~(\ref{6}).
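The dimension counting above is mechanical enough to script; the following short check reproduces the tables of field and coupling dimensions in $\d = 2, 4, 6$ (exact rationals are used to avoid float comparisons):

```python
from fractions import Fraction

def dims(d):
    """Mass dimensions of fields and couplings in d spacetime dimensions."""
    return {
        "phi": Fraction(d - 2, 2),  # from the scalar kinetic term
        "psi": Fraction(d - 1, 2),  # from the fermion kinetic term
        "A":   Fraction(d - 2, 2),  # gauge field, same dimension as a scalar
        "D":   Fraction(1),         # covariant derivative
        "g":   Fraction(4 - d, 2),  # from [g A] = 1
    }

assert dims(4) == {"phi": 1, "psi": Fraction(3, 2), "A": 1, "D": 1, "g": 0}
assert dims(2)["g"] == 1 and dims(6)["g"] == -1   # d = 2 and d = 6 tables
```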
The only gauge and Lorentz invariant operators with dimension $\mathscr{D} \le \d=4$ that can occur in the Lagrangian are
\begin{align}
\label{2.10all}
\mathscr{D}=0:\quad& 1 \nn
\mathscr{D}=1:\quad & \phi \nn
\mathscr{D}=2:\quad& \phi^2 \nn
\mathscr{D}=3:\quad & \phi^3, \bar \psi \psi \nn
\mathscr{D}=4:\quad & \phi^4,\ \phi\, \bar \psi \psi,\ D_\mu \phi\, D^\mu \phi,\ \bar \psi \ i \slashed{D}\ \psi,\ X_{\mu \nu}^2\,.
\end{align}
Other operators, such as $D^2\phi$, vanish upon integration over $\rd^\d x$, or are related to operators already included in eqn~(\ref{2.10all}) by integration by parts. In $\d=4$ spacetime dimensions, fermion fields can be split into left-chiral and right-chiral fields which transform as irreducible representations of the Lorentz group. The projection operators are $P_L=(1-\gamma_5)/2$ and $P_R=(1+\gamma_5)/2$. Left-chiral fermions will be denoted $\psi_L = P_L \psi$, etc.
Renormalizable interactions have coefficients with mass dimension $\ge 0$, and eqn~(\ref{2.10all}) lists the allowed renormalizable interactions in four spacetime dimensions. The distinction between renormalizable and non-renormalizable operators should be clear after Sec.~\ref{sec:4.2}.
In $\d=2$ spacetime dimensions
\begin{align}
\left[ \phi \right] &= 0, &
\left[ \psi \right] &= 1/2, &
\left[ A_\mu \right] &= 0, & \left[D \right] &=1, & [g]&=1,
\end{align}
so an arbitrary potential $V(\phi)$ is renormalizable, as is the $\left( \bar \psi \psi \right)^2$ interaction, so that the sine-Gordon and Thirring models are renormalizable. In $\d=6$ spacetime dimensions,
\begin{align}
\left[ \phi \right] &= 2, &
\left[ \psi \right] &= 5/2, &
\left[ A_\mu \right] &= 2, & \left[D \right] &=1, & [g]&=-1.
\end{align}
The only allowed renormalizable interaction in six dimensions is $\phi^3$. There are no renormalizable interactions above six dimensions.\footnote{There are exceptions to this in strongly coupled theories where operators can develop large anomalous dimensions.}
\begin{exercisebn}
In $\d=4$ spacetime dimensions, work out the field content of Lorentz-invariant operators with dimension $\mathscr{D}$ for $\mathscr{D}=1,\ldots,6$. At this point, do not try and work out which operators are independent, just the possible structure of allowed operators. Use the notation $\phi$ for a scalar, $\psi$ for a fermion, $X_{\mu\nu}$ for a field strength, and $D$ for a derivative. For example, an operator of type $\phi^2 D$ such as $\phi D_\mu \phi$ is not allowed because it is not Lorentz-invariant. An operator of type $\phi^2 D^2$ could be either $D_\mu \phi D^\mu \phi$ or $\phi D^2 \phi$, so a $\phi^2 D^2$ operator is allowed, and we will worry later about how many independent $\phi^2 D^2$ operators can be constructed.
\end{exercisebn}
\begin{exercisenb}
For $\d=2,3,4,5,6$ dimensions, work out the field content of operators with dimension $\mathscr{D} \le \d$, i.e. the ``renormalizable'' operators.
\end{exercisenb}
\section{EFT Expansion}
The EFT Lagrangian follows the same rules as the previous section, and has an expansion in powers of the operator dimension
\begin{align}
\label{2.17}
\mathscr{L}_\text{EFT} = \sum_{\mathscr{D} \ge 0 ,i} \frac{ c_i^{(\mathscr{D})} O_i^{(\mathscr{D})} }{ \Lambda^{\mathscr{D}-\d}} =
\sum_{\mathscr{D} \ge 0} \frac{ \mathscr{L}_\mathscr{D}}{ \Lambda^{\mathscr{D}-\d}}
\end{align}
where $O_i^{(\mathscr{D})}$ are the allowed operators of dimension $\mathscr{D}$. All operators of dimension $\mathscr{D}$ are combined into the dimension $\mathscr{D}$ Lagrangian $\mathscr{L}_\mathscr{D}$. The main difference from the previous discussion is that one does not stop at $\mathscr{D}=\d$, but includes operators of arbitrarily high dimension. A scale $\Lambda$ has been introduced so that the coefficients $c_i^{(\mathscr{D})} $ are dimensionless. $\Lambda$ is the short-distance scale at which new physics occurs, analogous to $1/a$ in the multipole expansion example in Sec.~\ref{sec:mult}. As in the multipole example, what is relevant for theoretical calculations and experimental measurements is the product $c_\mathscr{D} \Lambda^{\d-\mathscr{D}}$, not $c_\mathscr{D}$ and $\Lambda^{\d-\mathscr{D}}$ separately. $\Lambda$ is a convenient device that makes it clear how to organize the EFT expansion.
In $\d=4$,
\begin{align}
\mathscr{L}_\text{EFT} = \mathscr{L}_{\mathscr{D} \le 4} + \frac{\mathscr{L}_5 }{ \Lambda} + \frac {\mathscr{L}_6 }{ \Lambda^2} + \ldots
\end{align}
$\mathscr{L}_\text{EFT}$ is given by an infinite series of terms of increasing operator dimension. An important point is that the $\mathscr{L}_\text{EFT}$ has to be treated as an expansion in powers of $1/\Lambda$. If you try and sum terms to all orders, you violate the EFT power counting rules, and the EFT breaks down.
\section{Power Counting and Renormalizability}\label{sec:4.2}
Consider a scattering amplitude $\mathscr{A}$ in $\d$ dimensions, normalized to have mass dimension zero. If one works at some typical momentum scale $p$, then a single insertion of an operator of dimension $\mathscr{D}$ in the scattering graph gives a contribution to the amplitude of order
\begin{align}
\mathscr{A} \sim \left( \frac{p}{\Lambda} \right)^{\mathscr{D}-\d}
\end{align}
by dimensional analysis. The operator has a coefficient $1/\Lambda^{\mathscr{D}-\d}$ from eqn~(\ref{2.17}), and the remaining dimensions are produced by kinematic factors such as external momenta to make the overall amplitude dimensionless. An insertion of a set of higher dimension operators in a tree graph leads to an amplitude
\begin{align}
\label{2.20a}
\mathscr{A} &\sim \left( \frac{p}{\Lambda} \right)^n
\end{align}
with
\begin{align}
\label{2.20}
n&=\sum_i (\mathscr{D}_i-\d), & n&=\sum_i (\mathscr{D}_i-4) \ \text{in $\d=4$ dimensions},
\end{align}
where the sum on $i$ is over all the inserted operators. This follows from dimensional analysis, as for a single insertion. Equation~(\ref{2.20}) is known as the EFT power counting formula. It gives the $(p/\Lambda)$ suppression of a given graph.
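Applying eqn~(\ref{2.20}) is pure bookkeeping; for instance, in $\d=4$ a graph with one dimension-five and one dimension-six insertion is suppressed by $(p/\Lambda)^3$. A minimal helper:

```python
def suppression(op_dims, d=4):
    """EFT power counting, n = sum_i (D_i - d): the power n of
    (p/Lambda) from inserting operators of the given dimensions."""
    return sum(D - d for D in op_dims)

assert suppression([5]) == 1        # one L5 insertion: p/Lambda
assert suppression([6]) == 2        # one L6 insertion: (p/Lambda)^2
assert suppression([5, 5]) == 2     # two L5 insertions mix with one L6
assert suppression([5, 6]) == 3
assert suppression([4, 4, 4]) == 0  # renormalizable terms: no suppression
```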
The key to understanding EFTs is to understand why eqn~(\ref{2.20}) holds for \emph{any} graph, not just tree graphs. The technical difficulty for loop graphs is that the loop momentum $k$ is integrated over all values of $k$, $-\infty \le k \le \infty$, where the EFT expansion in powers of $k/\Lambda$ breaks down. Nevertheless, eqn~(\ref{2.20}) still holds. The validity of eqn~(\ref{2.20}) for any graph is explained in Sec.~\ref{sec:5.3}.
The first example of a power counting formula in an EFT was Weinberg's formula for $\chi$PT. This is covered in Pich's lectures, and is closely related to eqn~(\ref{2.20}). Weinberg counted powers of $p$ in the numerator, whereas we have counted powers of $\Lambda$ in the denominator. The two are obviously related.
The power counting formula eqn~(\ref{2.20}) tells us how to organize the calculation. If we want to compute $\mathscr{A}$ to leading order, we only use $\mathscr{L}_{\mathscr{D} \le \d}$, i.e.\ the renormalizable Lagrangian. In $\d=4$ dimensions, $p/\Lambda$ corrections are given by graphs with a single insertion of $\mathscr{L}_5$; $(p/\Lambda)^2$ corrections are given by graphs with a single insertion of $\mathscr{L}_6$, or two insertions of $\mathscr{L}_5$, and so on. As mentioned earlier, we do not need to assign a numerical value to $\Lambda$ to do a systematic calculation. All we are using is eqn~(\ref{2.20}) for a fixed power $n$.
We can now understand the difference between renormalizable theories and EFTs. In an EFT, there are higher dimension operators with dimension $\mathscr{D} > \d$. Suppose we have a single dimension five operator (using the $\d=4$ example). Graphs with two insertions of this operator produce the same amplitude as a dimension six operator. In general, loop graphs with two insertions of $\mathscr{L}_5$ are divergent, and we need a counterterm which is an $\mathscr{L}_6$ operator. Even if we set the coefficients of $\mathscr{L}_6$ to zero in the renormalized Lagrangian, we still have to add a $\mathscr{L}_6$ counterterm with a $1/\epsilon$ coefficient. Thus the Lagrangian still has a coefficient $c_6(\mu)$. $c_6(\mu)$ might vanish at one special value of $\mu$, but in general, it evolves with $\mu$ by the renormalization group equations, and so it will be non-zero at a different value of $\mu$. There is nothing special about $c_6=0$ if this condition does not follow from a symmetry. Continuing in this way, we generate the infinite series of terms in eqn~(\ref{2.17}). We can generate operators of arbitrarily high dimension by multiple insertions of operators with $\mathscr{D}-\d>0$.
On the other hand, if we start only with operators in $\mathscr{L}_{\mathscr{D} \le \d}$, we do not generate any new operators, only the ones we have already included in $\mathscr{L}_{\mathscr{D} \le \d}$. The reason is that $\mathscr{D} - \d \le 0$ in eqn~(\ref{2.20}) so we only generate operators with $\mathscr{D} \le \d$. Divergences in a QFT are absorbed by local operators, which have $\mathscr{D} \ge 0$. Thus new operators generated by loops have $0 \le \mathscr{D} \le \d$, and have already been included in $\mathscr{L}$. We do not need to add counterterms with negative dimension operators, such as $1/\phi^2(x)$, since there are no divergences of this type. In general, renormalizable terms are those with $0\le \mathscr{D} \le \d$, i.e.\ the contribution to $n$ in eqn~(\ref{2.20}) is non-positive.
Renormalizable theories are a special case of EFTs, where we formally take the limit $\Lambda \to \infty$. Then all terms in $\mathscr{L}$ have dimension $\mathscr{D} \le \d$. Scattering amplitudes can be computed to arbitrary accuracy, as there are no $p/\Lambda$ corrections. Theories with operators of dimensions $\mathscr{D} > \d$ are referred to as non-renormalizable theories, because an infinite number of higher dimension operators are needed to renormalize the theory. We have seen, however, that as long as one is interested in corrections with some maximum value of $n$ in eqn~(\ref{2.20}), there are only a finite number of operators that contribute, and non-renormalizable theories (i.e. EFTs) are just as good as renormalizable ones.
\section{Photon-Photon Scattering}
\begin{figure}
\begin{center}
\includegraphics[width=3cm]{figs/fd1}\hspace{2cm}
\raise0.5cm\hbox{\includegraphics[width=2cm]{figs/fd2}}
\caption{\label{fig:gamgam} The left figure is the QED contribution to the $\gamma\gamma$ scattering amplitude from an electron loop. The right figure is the low-energy limit of the QED amplitude treated as a local $F_{\mu \nu}^4$ operator in the Euler-Heisenberg Lagrangian.}
\end{center}
\end{figure}
We now illustrate the use of the EFT power counting formula eqn~(\ref{2.20}) with some simple examples, which show the power of eqn~(\ref{2.20}) when combined with constraints from gauge invariance and Lorentz invariance.
Consider $\gamma\gamma$ scattering at energies much lower than the electron mass, $E \ll m_e$. At these low energies, the only dynamical degrees of freedom in the EFT are photons. Classical electromagnetism without charged particles is a free theory, but in QED, photons can interact via electron loops, as shown in Fig.~\ref{fig:gamgam}. In the EFT, there are no dynamical electrons, so the $4\gamma$ interaction due to electron loops is given by a series of higher dimension operators involving photon fields. The lowest dimension interactions that preserve charge conjugation are given by dimension eight operators, so the EFT Lagrangian has the expansion
\begin{align}
\label{2.21}
\mathscr{L} &= -\frac{1}{4} F_{\mu \nu}F^{\mu \nu} + \frac {\alpha^2}{ m_e^4} \left[ c_1
\left(F_{\mu \nu}F^{\mu \nu}\right)^2 + c_2
\left(F_{\mu \nu} \tilde F^{\mu \nu}\right)^2 \right] + \ldots\,.
\end{align}
This is the Euler-Heisenberg Lagrangian~\cite{Heisenberg:1935qt}.
We can compare eqn~(\ref{2.21}) with the general form eqn~(\ref{2.17}). We have used $m_e$ for the scale $\Lambda$, since we know that the higher dimension operators are generated by the electron loop graph in QED shown in Fig.~\ref{fig:gamgam}. Since QED is perturbative, we have included a factor of $e^4$ from the vertices, and $1/16\pi^2$ from the loop, so that $c_{1,2}$ are pure numbers.
The scattering amplitude computed from eqn~(\ref{2.21}) in the center-of-mass frame is
\begin{align}
\mathscr{A} \sim \frac{ \alpha^2 \omega^4 }{ m_e^4}\,,
\end{align}
where $\omega$ is the photon energy. The $\alpha^2/m_e^4$ factor is from the Lagrangian, and the $\omega^4$ factor is because each field-strength tensor is the gradient of $A_\mu$, and produces a factor of $\omega$. The scattering cross section $\sigma$ is proportional to $\abs{\mathscr{A}}^2$, and has mass dimension $-2$. The phase space integral is thus $\propto 1/\omega^2$ to get the correct dimensions, since $\omega$ is the only dimensionful parameter in the low-energy problem. The cross section is then
\begin{align}
\label{2.23}
\sigma \sim \left( \frac{\alpha^2 \omega^4 }{ m_e^4 } \right)^2
\frac{1 }{ \omega^2} \frac {1 }{ 16 \pi} \sim \frac{\alpha^4 \omega^6 }{ 16 \pi m_e^8}\,.
\end{align}
The $1/(16\pi)$ will be explained in Sec.~\ref{sec:nda}. The $\omega^6$ dependence of the cross section follows from the lowest operator being of dimension eight, so that $\mathscr{A} \propto 1/m_e^4$, and $\sigma \propto 1/m_e^8$,
\begin{align}
\mathscr{A} \propto \frac{1}{m_e^4} \Rightarrow \sigma \propto \omega^6\,.
\end{align}
If we had assumed (incorrectly) that gauge invariance was not important and written the interaction operator generated by Fig.~\ref{fig:gamgam} as the dimension four operator
\begin{align}
\mathscr{L} &= c\, \alpha^2 (A_\mu A^\mu)^2
\end{align}
the cross section would be $\sigma \sim \alpha^4/(16 \pi \omega^2)$ instead. The ratio of the two estimates is $(\omega/m_e)^8$. For $\omega \sim 1$\,eV, the ratio is $10^{48}$!
An explicit computation~\cite{Euler:1935zz,Euler:1936aa,Heisenberg:1935qt} gives
\begin{align}
c_1 = \frac{1 }{ 90}, \qquad c_2=\frac {7 }{ 360},
\end{align}
and~\cite{Liang:2011sj}
\begin{align}
\sigma = \frac{ \alpha^4 \omega^6 }{ 16 \pi m_e^8} \frac {15568}{ 10125}\,.
\end{align}
Our estimate eqn~(\ref{2.23}) is quite good (about 50\% off), and was obtained with very little work.
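As a sanity check, both the estimate and the exact result can be evaluated numerically. This is an illustrative sketch; the numerical inputs ($\alpha$, $m_e$, and the $\hbar c$ unit conversion) are standard values supplied here, not given in the text.

```python
import math

# Standard inputs (not from the text): fine-structure constant,
# electron mass in eV, and hbar*c for the eV^-2 -> cm^2 conversion.
alpha = 1 / 137.036
m_e = 0.511e6            # eV
hbarc_cm = 1.9733e-5     # hbar*c in eV*cm, so 1 eV^-2 = (1.9733e-5)^2 cm^2

omega = 1.0              # photon energy in eV

# Dimensional estimate: sigma ~ alpha^4 omega^6 / (16 pi m_e^8)
sigma_est = alpha**4 * omega**6 / (16 * math.pi * m_e**8)   # in eV^-2

# Exact low-energy QED result carries the extra factor 15568/10125 ~ 1.54
sigma_exact = sigma_est * 15568 / 10125

sigma_est_cm2 = sigma_est * hbarc_cm**2
sigma_exact_cm2 = sigma_exact * hbarc_cm**2

print(f"estimate: {sigma_est_cm2:.2e} cm^2, exact: {sigma_exact_cm2:.2e} cm^2")
print(f"ratio exact/estimate = {15568/10125:.2f}")  # ~1.54, i.e. ~50% off
```

The cross section at $\omega \sim 1$\,eV comes out of order $10^{-65}\,\text{cm}^2$, illustrating how strongly the dimension-eight suppression acts at optical frequencies.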
For scalar field scattering, the interaction operator would be $\phi^4$, so that $\sigma \sim 1/(16\pi \omega^2)$, whereas Goldstone bosons such as pions have interactions $ \Pi^2 (\partial \Pi)^2/f^2$, so that $\sigma \sim \omega^4/(16 \pi f^4)$. Cross sections can vary by many orders of magnitude ($10^{48}$ between scalars and gauge bosons), so dimensional estimates such as this are very useful to decide whether a cross section is experimentally relevant before starting on a detailed calculation.
\section{Proton Decay}
Grand unified theories violate baryon and lepton number. The lowest dimension operators constructed from SM fields which violate baryon number are dimension six operators,
\begin{align}
\label{2.28}
\mathscr{L} \sim \frac{qqql}{M_G^2}.
\end{align}
These operators violate baryon number $B$ and lepton number $L$, but conserve $B-L$. The operator eqn~(\ref{2.28}) leads to the amplitude for proton decay, $p \to e^+ \pi^0$,
\begin{align}\label{4.26}
\mathscr{A} \sim \frac{1}{M_G^2}\,,
\end{align}
and the proton decay rate
\begin{align}
\label{2.30}
\Gamma \sim \frac{m_p^5}{16 \pi M_G^4}\,.
\end{align}
In eqn~(\ref{2.30}), we have obtained a decay rate of the correct dimensions using the only scale in the decay rate calculation, the proton mass $m_p$, and the rule of $1/(16\pi)$ for the final state phase space discussed in Sec.~\ref{sec:nda}. The proton lifetime is
\begin{align}
\tau =\frac{1}{\Gamma} \sim \left(\frac{M_G}{10^{15} \, \hbox{GeV}}\right)^4 \times 10^{30} \ \hbox{years}\,.
\end{align}
EFT power counting provides a natural explanation for baryon number conservation. In the SM, baryon number is first violated at dimension six, leading to a long proton lifetime.
If baryon number were violated at dimension five (as happens in some supersymmetric models), eqn~(\ref{4.26}) would be replaced by
$\mathscr{A} \sim 1/M_G$, and the proton decay rate would be
\begin{align}
\Gamma \sim \frac{m_p^3}{16 \pi M_G^2}\,.
\end{align}
The proton lifetime is very short,
\begin{align}
\tau =\frac{1}{\Gamma} \sim \left(\frac{M_G}{10^{15} \, \hbox{GeV}}\right)^2 \times 1 \ \hbox{year},
\end{align}
and is ruled out experimentally.
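Both lifetime estimates are easy to verify numerically. The sketch below is illustrative; the proton mass, $\hbar$, and the seconds-per-year conversion are standard values supplied here, not taken from the text.

```python
import math

# Standard inputs (not from the text): proton mass, hbar, seconds per year.
m_p = 0.938              # GeV
M_G = 1e15               # GeV
hbar_GeV_s = 6.582e-25   # hbar in GeV*s
sec_per_year = 3.156e7

# Dimension-six operator: Gamma ~ m_p^5 / (16 pi M_G^4)
Gamma6 = m_p**5 / (16 * math.pi * M_G**4)          # GeV
tau6_years = hbar_GeV_s / Gamma6 / sec_per_year

# Dimension-five operator: Gamma ~ m_p^3 / (16 pi M_G^2)
Gamma5 = m_p**3 / (16 * math.pi * M_G**2)          # GeV
tau5_years = hbar_GeV_s / Gamma5 / sec_per_year

print(f"dim-6: tau ~ {tau6_years:.1e} years")   # ~10^30 years
print(f"dim-5: tau ~ {tau5_years:.1e} years")   # ~1 year
```

The two estimates differ by a factor $(M_G/m_p)^2 \sim 10^{30}$, which is the entire content of the power counting argument.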
\section{$n-\overline n$ Oscillations}
In some theories, baryon number is violated but lepton number is not. Then proton decay is forbidden. The proton is a fermion, and so its decay products must contain a lighter fermion. But the only fermions lighter than the proton carry lepton number, so proton decay is forbidden. These theories do allow for a new experimental phenomenon, namely $n - \overline n$ oscillations, which violates only baryon number.
The lowest dimension operator that leads to $n -\overline n$ oscillations is the $\Delta B=2$ six-quark operator
\begin{align}
\mathscr{L} \sim \frac{q^6}{M_G^5},
\end{align}
which is dimension nine, and suppressed by five powers of the scale $M_G$ at which the operator is generated. This leads to an oscillation amplitude
\begin{align}
\mathscr{A} \sim \left(\frac{m_n}{M_G}\right)^5\,,
\end{align}
which is strongly suppressed.
\section{Neutrino Masses}
The lowest dimension operator in the SM which gives a neutrino mass is the $\Delta L=2$ operator of dimension five (see Sec.~\ref{sec:dim5}),
\begin{align}
\label{4.33}
\mathscr{L} \sim \frac{ (H^\dagger \ell)(H^\dagger \ell)}{M_S},
\end{align}
generated at a high scale $M_S$ usually referred to as the seesaw scale. Equation~(\ref{4.33}) gives a Majorana neutrino mass of order
\begin{align}
m_\nu \sim \frac{v^2}{M_S}
\end{align}
when $SU(2) \times U(1)$ symmetry is spontaneously broken by $v \sim 246$\,GeV. Using $m_\nu \sim 10^{-2}$ eV leads to a seesaw scale $M_S \sim 6\times 10^{15}$~GeV. Neutrinos are light if the lepton number violating scale $M_S$ is large.
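The seesaw arithmetic can be checked in one line, using the values quoted in the text:

```python
# One-line check of the seesaw estimate m_nu ~ v^2 / M_S, solved for M_S.
v = 246.0            # GeV, electroweak scale (from the text)
m_nu = 1e-2 * 1e-9   # 10^-2 eV converted to GeV

M_S = v**2 / m_nu    # seesaw scale in GeV
print(f"M_S ~ {M_S:.1e} GeV")   # ~6e15 GeV, as quoted in the text
```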
\section{Rayleigh Scattering}
The scattering of photons off atoms at low energies can also be analyzed using our power counting results. Here low energies means energies small enough that one does not excite the internal states of the atom, which have excitation energies of order an electron-volt.
The atom can be treated as a neutral particle of mass $M$, interacting with the electromagnetic field. Let $\psi(x)$ denote a field operator that creates an atom at the point $x$. Then the effective Lagrangian for the atom is
\begin{align}
\label{2.36}
\mathscr{L} = \psi^\dagger\left(i \partial_t - \frac{\partial^2 }{ 2M} \right) \psi + \mathscr{L} _{\rm
int},
\end{align}
where $\mathscr{L} _{\rm int}$ is the interaction term. From eqn~(\ref{2.36}), we see that $\left[\psi\right]=3/2$. Since the atom is neutral, covariant derivatives acting on the atom are ordinary derivatives, and do not
contain gauge fields. The gauge field interaction term is a function of the electromagnetic field strength $F_{\mu\nu}=({\bf E},{\bf B})$. Gauge invariance forbids terms which depend only on the vector potential $A_\mu$. At low energies, the dominant interaction is one which involves the lowest dimension operators,
\begin{align}
\label{2.37}
\mathscr{L} _{\rm int} = a_0^3 \ \psi^\dagger \psi \left(c_E \mathbf{E}^2 +c_B \mathbf{B}^2\right)\,.
\end{align}
An analogous $\mathbf{E\cdot B}$ term is forbidden by parity conservation. The operators in eqn~(\ref{2.37}) have $\mathscr{D}=7$, so we have written their coefficients as dimensionless constants times $a_0^3$. $a_0$ is the size of the atom, which controls the interaction of photons with the atom, and $[a_0]=-1$. The photon only interacts with the atom when it can resolve its charged constituents, the electron and nucleus, which are separated by $a_0$, so $a_0$ plays the role of $1/\Lambda$ in eqn~(\ref{2.37}).
The interaction eqn~(\ref{2.37}) gives the scattering amplitude
\begin{align}
\mathscr{A} \sim a_0^3 \omega^2\,,
\end{align}
since the electric and magnetic fields are gradients of the vector potential, so each factor of $\bf E$ or $\bf B$ produces a factor of $\omega$. The scattering cross-section is proportional to $\abs{\mathscr{A}}^2$. This has the correct dimensions to be a cross-section, so the phase-space is dimensionless, and
\begin{align}\label{4.38}
\sigma \propto a_0^6\ \omega^4.
\end{align}
Equation~(\ref{4.38}) is the famous $\omega^4$ dependence of the Rayleigh scattering cross-section, which explains why the sky is blue---blue light is scattered 16 times more strongly than red light, since it has twice the frequency.
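A quick numeric aside on the $\omega^4$ scaling. The factor-of-two frequency ratio is the text's rough value; the second estimate uses representative visible-light wavelengths, which are our choice, not from the text.

```python
# omega^4 scaling of Rayleigh scattering: blue vs red.
ratio_text = 2.0**4                      # = 16, the text's factor-of-two estimate
lam_blue, lam_red = 450.0, 700.0         # nm, representative wavelengths (our choice)
ratio_real = (lam_red / lam_blue)**4     # ~6 for these wavelengths
print(ratio_text, round(ratio_real, 1))
```

Either way, the strong enhancement of short wavelengths follows directly from the dimension of the operator in eqn~(\ref{2.37}).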
The argument above also applies to the interaction of low-energy gluons with $Q \bar Q$ bound states such as the $J/\psi$ or $\Upsilon$. The Lagrangian is eqn~(\ref{2.37}) where $\mathbf{E}^2$ and $\mathbf{B}^2$ are replaced by their QCD analogs, $\mathbf{E}^A \cdot \mathbf{E}^A$ and $\mathbf{B}^A \cdot \mathbf{B}^A$. The scale $a_0$ is now the radius of the QCD bound state. The Lagrangian can be used to find the interaction energy of the $Q\bar Q$ state in nuclear matter. The $\psi$ field is a color singlet, so the only interaction with nuclear matter is via the gluon fields. The forward scattering amplitude off a nucleon state is
\begin{align}\label{4.39}
\mathscr{A} &= a_0^3\braket{ N | c_E \mathbf{E}^A \cdot \mathbf{E}^A +c_B \mathbf{B}^A \cdot \mathbf{B}^A | N}\,.
\end{align}
Equation~(\ref{4.39}) is a non-perturbative matrix element of order $\ensuremath{\Lambda_{\text{QCD}}}^2$. It turns out that it can be evaluated in terms of the nucleon mass and the quark momentum fraction measured in DIS~\cite{Luke:1992tm}. The binding energy $U$ of the $Q\bar Q$ state is related to $\mathscr{A}$ by
\begin{align}
U &= \frac{n \mathscr{A}}{2 M_N} \,,
\end{align}
where $n$ is the number of nucleons per unit volume in nuclear matter. The $1/(2M_N)$ prefactor is because nucleon states in field theory are normalized to $2M_N$ rather than to $1$, as in quantum mechanics. Just using dimensional analysis, with $n \sim \ensuremath{\Lambda_{\text{QCD}}}^3$, $\mathscr{A} \sim a_0^3 \ensuremath{\Lambda_{\text{QCD}}}^2$, and neglecting factors of two,
\begin{align}
U &\sim \frac{a_0^3 \ensuremath{\Lambda_{\text{QCD}}}^5}{M_N}\,.
\end{align}
With $a_0 \sim 0.2 \times 10^{-15}$\,m for the $J/\psi$, and $\ensuremath{\Lambda_{\text{QCD}}} \sim 350$\,MeV, the binding energy is $U \sim 5$\,MeV.
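Checking this estimate numerically (the $\hbar c$ conversion and the nucleon mass are standard values supplied here, not given in the text):

```python
# Dimensional estimate U ~ a0^3 Lambda_QCD^5 / M_N for a J/psi in nuclear matter.
hbarc = 197.33           # MeV * fm, standard conversion (not from the text)
a0 = 0.2                 # fm   (from the text)
Lam = 350.0              # MeV  (from the text)
M_N = 939.0              # MeV, nucleon mass (standard value)

a0_inv_MeV = a0 / hbarc              # a0 converted to MeV^-1
U = a0_inv_MeV**3 * Lam**5 / M_N     # binding energy in MeV
print(f"U ~ {U:.1f} MeV")            # ~5 MeV, as quoted
```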
\section{Low energy weak interactions}\label{sec:fermi}
The classic example of an EFT is the Fermi theory of low-energy weak interactions. The full (UV) theory is the SM, and we can match onto the EFT by transitioning to a theory valid at momenta small compared to $M_{W,Z}$. Since the weak interactions are perturbative, the matching can be done order by order in perturbation theory.
The $W$ boson interacts with quarks and leptons via the weak current:
\begin{align}
j^\mu_W &= V_{ij}\ (\bar u_i\, \gamma^\mu\, P_L\, d_j) + (\bar \nu_\ell\, \gamma^\mu\, P_L\, \ell) ,
\end{align}
where $u_i=u,c,t$ are up-type quarks, $d_j=d,s,b$ are down-type quarks, and $V_{ij}$ is the CKM mixing matrix. There is no mixing matrix in the lepton sector because we are using neutrino flavor eigenstates, and neglecting neutrino masses.
\begin{figure}
\begin{center}
\includegraphics[width=3cm]{figs/fig1}
\caption{\label{fig:tree} Tree-level diagram for semileptonic $b \to c$ decay.}
\end{center}
\end{figure}
The tree-level amplitude for semileptonic $b \to c$ decay from Fig.~\ref{fig:tree} is
\begin{align}
\mathscr{A} &= \left(\frac{-ig}{\sqrt2}\right)^2 V_{cb}
\left(\bar c\, \gamma^\mu\, P_L\, b\right)
\left(\bar \ell\, \gamma^\nu\, P_L\, \nu_{\ell}\right)
\left(\frac{-ig_{\mu\nu}}{p^2-M_W^2}\right),
\end{align}
where $g/\sqrt{2}$ is the $W$ coupling constant.
For low momentum transfers, $p \ll M_W$, we can expand the $W$ propagator,
\begin{align}
\label{2.45}
\frac{1}{p^2-M_W^2} = -\frac{1}{M_W^2}\left(1+\frac{p^2}{M_W^2} + \frac{p^4}{M_W^4}
+ \ldots\right),
\end{align}
giving different orders in the EFT expansion parameter $p/M_W$. Retaining only the first term gives
\begin{figure}
\begin{center}
\includegraphics[width=3cm]{figs/fig2}
\end{center}
\caption{\label{fig:tree2} $b \to c$ vertex in the Fermi theory.}
\end{figure}
\begin{align}
\mathscr{A} &= \frac i{M_W^2}\left(\frac{-ig}{\sqrt2}\right)^2 V_{cb} \
\left(\bar c\, \gamma^\mu\, P_L\, b\right)\left(\bar \ell\, \gamma_\mu\, P_L\, \nu_\ell
\right)+\mathcal{O}\left(\frac{1}{M_W^4}\right)\,,
\end{align}
which is the same amplitude as that produced by the local Lagrangian
\begin{align}
\label{2.47}
\mathscr{L} &= -\frac{g^2}{2 M_W^2}V_{cb} \
\left(\bar c\, \gamma^\mu\, P_L\, b\right)\left(\bar \ell\, \gamma_\mu\, P_L\, \nu_\ell
\right)+\mathcal{O}\left(\frac{1}{M_W^4}\right).
\end{align}
Equation~(\ref{2.47}) is the lowest order Lagrangian for semileptonic $b \to c$ decay in the EFT, and is represented by the vertex in Fig.~\ref{fig:tree2}. It is usually written, for historical reasons, in terms of $G_F$,
\begin{align}
\label{2.48}
\frac{G_F}{\sqrt2} \equiv \frac{g^2}{8 M_W^2}=\frac{1}{2v^2},
\end{align}
where $v \sim 246$\,GeV is the scale of electroweak symmetry breaking,
\begin{align}
\label{2.49}
\mathscr{L} & = -\frac{4 G_F}{\sqrt2}\, V_{cb}\,
\left(\bar c\, \gamma^\mu\, P_L\, b\right) \left(\bar \ell\, \gamma_\mu\, P_L\, \nu_\ell\right)\,.
\end{align}
Similarly, the $\mu$ decay Lagrangian is
\begin{align}
\label{2.50}
\mathscr{L} & = -\frac{4 G_F}{\sqrt2}\,
\left(\bar \nu_\mu\, \gamma^\mu\, P_L\, \mu\right) \left(\bar e\, \gamma_\mu\, P_L\, \nu_e\right)\,.
\end{align}
The EFT Lagrangian eqn~(\ref{2.49},\ref{2.50}) is the low-energy limit of the SM. The EFT no longer has dynamical $W$ bosons, and the effect of $W$ exchange in the SM has been included via dimension-six four-fermion operators. The procedure used here is referred to as ``integrating out'' a heavy particle, the $W$ boson.
The Lagrangian eqn~(\ref{2.49},\ref{2.50}) has been obtained by expanding in $p/M_W$, i.e.\ by treating $M_W$ as large compared with the other scales in the problem. Weak decays computed using eqn~(\ref{2.49},\ref{2.50}) still retain the complete dependence on low energy scales such as $m_b$, $m_c$ and $m_\ell$. Using eqn~(\ref{2.49}) gives the $b$ lifetime,
\begin{align}
\label{4.50}
\Gamma(b \to c \ell \overline \nu_\ell) = \frac{\abs{V_{cb}}^2 G_F^2 m_b^5}{192\pi^3}\ f\left( \frac{m_c^2}{m_b^2}\right),
\end{align}
where we have neglected $m_\ell$, and
\begin{align}
f\left(\rho \right) = 1 -8 \rho+8 \rho^3-\rho^4-12 \rho^2 \ln \rho,\qquad \rho=\frac{m_c^2}{m_b^2}.
\end{align}
Equation~(\ref{4.50}) gives the full $m_c/m_b$ dependence of the decay rate, but drops terms of order $m_b/M_W$ and $m_c/M_W$. The full $m_\ell/m_b$ dependence can also be included by retaining $m_\ell$ in the decay rate calculation. The use of the EFT Lagrangian eqn~(\ref{2.49}) simplifies the calculation. We could have achieved the same simplification by computing Fig.~\ref{fig:tree} in the SM, and expanding the amplitude using eqn~(\ref{2.45}). The true advantages of EFT show up in higher order calculations including radiative corrections from loop graphs, which cannot be computed by simply expanding the SM amplitude.
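Equation~(\ref{4.50}) is straightforward to evaluate numerically. The sketch below uses rough, representative values for $m_b$, $m_c$, and $V_{cb}$, which are our choices, not given in the text, so the result is only indicative.

```python
import math

# Numerical evaluation of eqn (4.50). Quark masses and V_cb below are
# rough representative inputs (our choice, not from the text).
G_F = 1.166e-5           # GeV^-2
V_cb = 0.041
m_b, m_c = 4.8, 1.3      # GeV

rho = (m_c / m_b)**2
f = 1 - 8*rho + 8*rho**3 - rho**4 - 12*rho**2 * math.log(rho)

Gamma = V_cb**2 * G_F**2 * m_b**5 * f / (192 * math.pi**3)   # GeV
tau_sl = 6.582e-25 / Gamma                                   # partial lifetime in s

print(f"f(rho) = {f:.2f}, 1/Gamma(b -> c l nu) ~ {tau_sl:.1e} s")
```

With these inputs the phase-space factor is $f(\rho) \approx 0.6$, and multiplying the partial lifetime by a semileptonic branching ratio of roughly 10\% gives a total $b$ lifetime of order $10^{-12}$\,s, the expected picosecond scale.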
The Fermi Lagrangian can be used to compute electroweak scattering cross sections such as the neutrino cross section. Here we give a simple dimensional estimate of the cross section,
\begin{align}
\sigma &\sim \frac{1}{16 \pi} \left( \frac{4 G_F}{\sqrt 2} \right)^2 E^2_\text{CM} \sim \frac{1}{2\pi} G_F^2 E^2_\text{CM} \,,
\end{align}
where the $G_F$ factor is from the weak interaction Lagrangian, $1/(16 \pi)$ is two-body phase space, and $E_\text{CM}$ gives $\sigma$ the dimensions of a cross section. For neutrino scattering off a fixed target, $E_\text{CM}^2 = 2 E_\nu M_T$, so neutrino cross sections grow linearly with the neutrino energy. Neutrino cross sections are weak as long as $E_\nu$ is much smaller than the electroweak scale.
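Putting numbers into this estimate (the target mass and the unit conversion are standard values supplied here, not from the text):

```python
import math

# Dimensional estimate sigma ~ G_F^2 E_CM^2 / (2 pi) for neutrino-nucleon
# scattering. Target mass and GeV^-2 -> cm^2 conversion are standard inputs.
G_F = 1.166e-5                  # GeV^-2
M_T = 0.938                     # GeV, nucleon target mass
E_nu = 1.0                      # GeV, illustrative neutrino energy
GeV2_to_cm2 = 3.894e-28         # 1 GeV^-2 in cm^2

E_cm2 = 2 * E_nu * M_T          # GeV^2
sigma = G_F**2 * E_cm2 / (2 * math.pi) * GeV2_to_cm2
print(f"sigma ~ {sigma:.1e} cm^2 at E_nu = 1 GeV")
```

The result, of order $10^{-38}\,\text{cm}^2$ per GeV of neutrino energy, is within an order of magnitude of measured inclusive neutrino cross sections, which is all a dimensional estimate promises.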
\begin{exercise}
Compute the decay rate $\Gamma( b \to c e^- \overline \nu_e)$ with the interaction Lagrangian
\begin{align*}
\mathscr{L} &= -\frac{4 G_F}{\sqrt 2}V_{cb} ( \overline c \gamma^\mu P_L b)(\overline \nu_e \gamma_\mu P_L e)
\end{align*}
with $m_e \to 0$, $m_\nu \to 0$, but retaining the dependence on $\rho = m_c^2/m_b^2$. It is convenient to write the three-body phase space in terms of the variables $x_1=2E_e/m_b$ and $x_2=2 E_\nu/m_b$.
\end{exercise}
\section{$M_W$ vs $G_F$}
The weak interactions have two parameters, $g$ and $M_W$, and the Fermi Lagrangian in eqn~(\ref{2.49}) depends only on the combination $G_F$ in eqn~(\ref{2.48}). Higher order terms in the expansion eqn~(\ref{2.45}) are of the form
\begin{align}
\label{2.52}
-\frac{4G_F}{\sqrt 2} \left[ 1 + \frac{p^2}{M_W^2} + \ldots \right]
=-\frac{2}{v^2} \left[ 1 + \frac{p^2}{M_W^2} + \ldots \right]
\end{align}
so that the EFT momentum expansion is in powers of $\delta=p/M_W$, even though the first term in eqn~(\ref{2.52}) is $\propto 1/v^2$. The expansion breaks down for $p \sim M_W = g v/2$, which is smaller than $v \sim 246$\,GeV.
Despite the theory having multiple scales $M_W$ and $v$, we can still use our EFT power counting rules of Sec.~\ref{sec:4.2}. From the $\mu$ decay rate computed using eqn~(\ref{2.50})
\begin{align}
\label{2.53}
\Gamma(\mu \to e \nu_\mu \overline \nu_e) = \frac{G_F^2 m_\mu^5}{192\pi^3}\,,
\end{align}
and the experimental value of the $\mu$ lifetime $2.197 \times 10^{-6}$\,s, we obtain $G_F \sim 1.16 \times 10^{-5}\, \text{GeV}^{-2}$. Using $G_F \sim 1/\Lambda^2$ gives $\Lambda \sim 300$\,GeV. This indicates that we have an EFT with a scale of order $\Lambda$. This is similar to the multipole expansion estimate for $a$.
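The extraction of $G_F$ can be reproduced by inverting eqn~(\ref{2.53}). The muon lifetime is the value quoted in the text; the muon mass and $\hbar$ are standard inputs supplied here.

```python
import math

# Invert eqn (2.53), Gamma = G_F^2 m_mu^5 / (192 pi^3), for G_F.
tau_mu = 2.197e-6        # s (from the text)
m_mu = 0.10566           # GeV, standard value (not from the text)
hbar = 6.582e-25         # GeV*s

Gamma = hbar / tau_mu                                   # GeV
G_F = math.sqrt(Gamma * 192 * math.pi**3 / m_mu**5)     # GeV^-2
Lam = 1 / math.sqrt(G_F)                                # G_F ~ 1/Lambda^2

print(f"G_F ~ {G_F:.3e} GeV^-2, Lambda = 1/sqrt(G_F) ~ {Lam:.0f} GeV")
```

This reproduces $G_F \sim 1.16 \times 10^{-5}\,\text{GeV}^{-2}$ and a scale $\Lambda \sim 300$\,GeV, as stated in the text.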
We can then use the power counting arguments of Sec.~\ref{sec:4.2}. They show that the leading terms in the decay amplitude are single insertions of dimension-six operators, the next corrections are two insertions of dimension-six or one insertion of dimension-eight operators, etc. None of these arguments care about the precise value of $\Lambda$. They allow one to group terms in the expansion of similar size.
Dimension-eight corrections are $p^2/\Lambda^2$ suppressed. In $\mu$-decay, this implies that dimension-eight corrections are suppressed by $m_\mu^2/\Lambda^2$. The power counting estimate using either $\Lambda \sim M_W$ or $\Lambda \sim v$ shows that they are very small corrections. We can check that these corrections are small from \emph{experiment}. The Lagrangian eqn~(\ref{2.50}) predicts observables such as the phase-space distribution of $\mu$ decay events over the entire Dalitz plot, the polarization of the final $e^-$, etc.\ Comparing these predictions, which neglect dimension-eight contributions, with experiment provides a test that eqn~(\ref{2.50}) gives the correct decay amplitude. Very accurate experiments which are sensitive to deviations from the predictions of eqn~(\ref{2.50}), i.e.\ have an accuracy $m_\mu^2/M_W^2 \sim 10^{-6}$, can then be used to probe dimension-eight effects, and determine the scale $M_W$.
Historically, when the SM was developed, $G_F$ was fixed from $\mu$ decay, but the values of $M_W$ and $M_Z$ were not known. Their values \emph{were not needed} to apply the Fermi theory to low-energy weak interactions. The value of $M_Z$ was determined by studying the energy dependence of parity violation in electron scattering through $\gamma-Z$ interference effects. This fixed the size of the dimension-eight $p^2/M_Z^2$ terms in the neutral current analog of eqn~(\ref{2.52}), and determined the scale at which the EFT had to be replaced by the full SM, with dynamical gauge fields.
\chapter{Loops}
The real power of EFTs becomes apparent when computing loop corrections. There are several tricky points that must be understood before EFTs can be used at the loop level, which are explained in this section.
\begin{figure}
\begin{center}
\includegraphics[width=2cm]{figs/fd3}
\end{center}
\caption{\label{fig:loop1} One-loop correction to $\phi\phi$ scattering from a $\phi^6$ interaction.}
\end{figure}
For simplicity consider an EFT of a scalar field $\phi$, with a higher dimension operator
\begin{align}
\mathscr{L} &= \mathscr{L}_{\mathscr{D} \le 4} + \frac{c_6}{\Lambda^2} \frac{1}{6!} \phi^6 \,.
\label{3.1}
\end{align}
The dimension-six operator gives a contribution to $\phi-\phi$ scattering from the graph in Fig.~\ref{fig:loop1},
\begin{align}
\label{3.2}
\mathscr{A} &= -\frac{c_6}{2\Lambda^2} \int \frac{{ d^4 k}}{(2\pi)^4}\frac{1}{k^2-m_\phi^2}\,.
\end{align}
The EFT is valid for $k < \Lambda$, so we can use a momentum-space cutoff $\Lambda_c < \Lambda$. The scalar mass $m_\phi$ is much smaller than $\Lambda_c$, since $\phi$ is a particle in the EFT. Neglecting $m_\phi$, the integral gives
\begin{align}
\label{3.3}
\mathscr{A} &\approx -\frac{c_6}{2\Lambda^2} \frac{\Lambda_c^2}{16\pi^2}.
\end{align}
The integral eqn~(\ref{3.2}) is quadratically divergent, which gives the quadratic cutoff dependence in eqn~(\ref{3.3}). Similarly, a dimension eight operator $\phi^4 (\partial_\mu \phi)^2$ with coefficient $c_8/\Lambda^4$ has an interaction vertex $k^2/\Lambda^4$, and gives a contribution
\begin{align}
\label{3.4}
\mathscr{A} &=-\frac{c_8}{\Lambda^4} \int \frac{{ d^4 k}}{(2\pi)^4}\frac{k^2}{k^2-m_\phi^2}\approx -\frac{c_8}{\Lambda^4} \frac{\Lambda_c^4}{16\pi^2}\,,
\end{align}
since the integral is quartically divergent.
The results eqn~(\ref{3.3},\ref{3.4}) lead to a violation of the power counting formula eqn~(\ref{2.20}), and the EFT expansion in powers of $1/\Lambda$ breaks down, since $\Lambda_c$ is the same order as $\Lambda$. Loops with insertions of higher dimension operators give contributions of leading order in the $1/\Lambda$ expansion, which need to be resummed. One could try using $\Lambda_c \ll \Lambda$, but this turns out not to work. Firstly, $\Lambda_c$ is an artificial scale that has been introduced, with no connection to any physical scale. In the end, all $\Lambda_c$ dependence must cancel. For example, the weak interactions would require introducing a cutoff scale $m_b \ll \Lambda_c \ll M_W$ to keep the power divergences in eqn~(\ref{3.3},\ref{3.4}) under control, and this would be an artificial scale that cancels in final results. Furthermore, cutoffs do not allow one to sum large logarithms, which is one of the main reasons why EFTs are used in the first place, since we are restricted to $\Lambda_c \ll \Lambda$. A cutoff has other problems as well: it violates important symmetries such as gauge invariance and chiral symmetry. In fact, nobody has successfully performed higher order log-resummation in EFTs with non-Abelian gauge interactions using a cutoff. Wilson proposed a set of axioms~\cite{Wilson:1972cf} for good regulators which are discussed in Ref.~\cite[Chapter 4]{Collins:1984xc}.
Often, you will see discussions of EFTs where high momentum modes with $k > \Lambda_c$ are integrated out, and the cutoff is slowly lowered to generate an infrared theory. While ideas like this were historically useful, this is not the way to think of an EFT, and it is not the way EFTs are actually used in practice.
Let us go back to a loop graph such as eqn~(\ref{3.2}), and for now, retain the cutoff $\Lambda_c$. In addition to the contribution shown in eqn~(\ref{3.3}), the loop graph also contains non-analytic terms in $m_\phi$. In more complicated graphs, there would also be non-analytic terms in the external momentum $p$. Loop graphs have a complicated analytic structure in $p$ and $m_\phi$, with branch cuts, etc.\ The discontinuities across branch cuts from logs in loop graphs are related to the total cross section via the optical theorem. The non-analytic contributions are crucial to the EFT, and are needed to make sure the EFT respects unitarity. The non-analytic part of the integral can be probed by varying $m_\phi$ and $p$, and arises from $k \sim m_\phi, p$, i.e.\ loop momenta of order the physical scales in the EFT. For loop momenta of order $\Lambda_c$, $m_\phi,p \ll \Lambda_c$, one can expand in $m_\phi$ and $p$, and the integral gives analytic but $\Lambda_c$ dependent contributions such as eqn~(\ref{3.3}).
The high-momentum part of the integral is analytic in the IR variables, and has the same structure as amplitudes generated by local operators. This is the concept of locality mentioned in the introduction. Thus the integral has non-analytic pieces we want, plus local pieces that depend on $\Lambda_c$. The cutoff integral is an approximation to the actual integral in the full theory. Thus the local pieces computed as in eqn~(\ref{3.3}) are not the correct ones. In fact, in theories such as $\chi$PT where the UV theory is not related perturbatively to the EFT, the UV part of the integral is meaningless. Luckily, locality saves the day. The local pieces have the same structure as operators in the EFT Lagrangian, so they can be absorbed into the EFT Lagrangian coefficients. The EFT coefficients are then adjusted to make sure the EFT gives the correct $S$-matrix, a procedure referred to as ``matching.'' The difference in UV structure of the full theory and the EFT is taken care of by the matching procedure. In the end, we only need the EFT to reproduce the non-analytic dependence on IR variables; the analytic dependence is absorbed into Lagrangian coefficients.
An explicit calculation is given in Sec.~\ref{sec:5.5}.
To actually use EFTs in practice, we need a renormalization scheme that automatically implements the procedure above---i.e.\ it gives the non-analytic IR dependence without any spurious analytic contributions that depend on $\Lambda_c$. Such a scheme also maintains the EFT power counting, since no powers of a high scale $\Lambda_c$ appear in the numerator of loop integrals, and cause the EFT expansion to break down. Dimensional regularization is a regulator that satisfies the required properties. It has the additional advantage that it maintains gauge invariance and chiral symmetry.
\section{Dimensional Regularization}\label{sec:5.1}
The basic integral we need is
\begin{align}
\label{3.5}
\mu^{2\epsilon} \int \frac{\rd^\d k}{(2\pi)^\d} \frac{\left(k^2\right)^a}{\left(k^2-M^2\right)^b}
&= \frac{i \mu^{2\epsilon}}{\left(4\pi\right)^{\d/2}} \frac{(-1)^{a-b} \Gamma(\d/2+a) \Gamma(b-a-\d/2)}{\Gamma(\d/2) \Gamma(b)}
\left(M^2\right)^{\d/2+a-b}\,
\end{align}
where $\d=4-2\epsilon$. The $\mu^{2\epsilon}$ prefactor arises from $\mu^\epsilon$ factors in coupling constants, as in eqn~(\ref{6}). Equation~(\ref{3.5}) is obtained by analytically continuing the integral from values of $a$ and $b$ where it is convergent. Integrals with several denominators can be converted to eqn~(\ref{3.5}) by combining denominators using Feynman parameters.
\begin{exercise}
Verify eqn~(\ref{3.5}) by first analytically continuing to Euclidean space, and then switching to spherical polar coordinates in $\d$ dimensions.
\end{exercise}
The integral eqn~(\ref{3.5}) is then expanded in powers of $\epsilon$. As an example,
\begin{align}
\label{3.6}
I&= \mu^{2\epsilon} \int \frac{\rd^\d k}{(2\pi)^\d} \frac{1}{\left(k^2-M^2\right)^2}
= \frac{i \mu^{2\epsilon}}{\left(4\pi\right)^{2-\epsilon}} \frac{\Gamma(\epsilon)}{\Gamma(2)}
\left(M^2\right)^{-\epsilon}\,, \nn
&=
\frac{i}{16 \pi^2} \left[\frac{1}{\epsilon} -\gamma+ \log \frac{4 \pi \mu^2}{M^2} + \mathcal{O}\left(\epsilon\right) \right]\,,
\end{align}
where $\gamma=0.577$ is Euler's constant. In the \ensuremath{\overline{\text{MS}}}\ scheme, we make the replacement
\begin{align}
\label{3.7}
\mu^2 = \bar \mu^2 \frac{e^{\gamma}}{4\pi}\,,
\end{align}
so that
\begin{align}
\label{3.6a}
I &=
\frac{i}{16 \pi^2} \left[\frac{1}{\epsilon} + \log \frac{\bar\mu^2}{M^2} + \mathcal{O}\left(\epsilon\right) \right]\,.
\end{align}
The $1/\epsilon$ part, which diverges as $\epsilon \to 0$, is cancelled by a counterterm, leaving the renormalized integral
\begin{align}
\label{3.8}
I + \text{c.t.}
&=
\frac{i}{16 \pi^2} \log \frac{\bar\mu^2}{M^2} \,.
\end{align}
The replacement eqn~(\ref{3.7}) removes $\log 4\pi$ and $-\gamma$ pieces in the final result.
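A quick numerical check, ours rather than the text's, of the expansion in eqn~(\ref{3.6}): collecting the $\mu^{2\epsilon}(4\pi)^{\epsilon}(M^2)^{-\epsilon}$ factors as $x^\epsilon$ with $x = 4\pi\mu^2/M^2$, the product $\Gamma(\epsilon)\,x^\epsilon$ should reproduce $1/\epsilon - \gamma + \log x$ up to $\mathcal{O}(\epsilon)$ terms.

```python
import math

# Numerical check of the epsilon expansion in eqn (3.6):
#   Gamma(eps) * x**eps = 1/eps - EulerGamma + log(x) + O(eps),
# with x = 4*pi*mu^2/M^2. We pick mu = M, so x = 4*pi.
gamma_E = 0.5772156649
eps = 1e-4
x = 4 * math.pi

exact = math.gamma(eps) * x**eps
approx = 1 / eps - gamma_E + math.log(x)
print(exact, approx)   # agree up to O(eps) corrections
```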
There are several important features of dimensional regularization:
\begin{itemize}
\item $\bar \mu$ only appears as $\log \bar \mu$, and there are no powers of $ \bar \mu$. The only source of $\bar \mu$ in the calculation is from powers of $\mu^\epsilon$ in the coupling constants, and expanding in $\epsilon$ shows that only $\log \mu$ (and hence $\log \bar \mu$) terms occur.
\item Scaleless integrals vanish,
\begin{align}
\label{3.9}
\mu^{2\epsilon} \int \frac{\rd^\d k}{(2\pi)^\d} \frac{\left(k^2\right)^a}{\left(k^2\right)^b}
&= 0\,.
\end{align}
This follows using eqn~(\ref{3.5}) and taking the limit $M \to 0$. Since integrals in dimensional regularization are defined by analytic continuation, the limit $M \to 0$ is taken assuming $\d/2+a-b >0$ so that the limit vanishes. Analytically continuing to $\d/2+a-b \le 0$, the integral remains $0$. The vanishing of scaleless integrals plays a very important role in calculations using dimensional regularization.
\item There are no power divergences. For example, the quadratically divergent integral
\begin{align}
\label{3.10}
\mu^{2\epsilon} \int \frac{\rd^\d k}{(2\pi)^\d} \frac{1}{\left(k^2-m^2\right)}
&= -\frac{i \mu^{2\epsilon}}{\left(4\pi\right)^{\d/2}} \Gamma(-1+\epsilon) \left(m^2\right)^{1-\epsilon} \nn
&= \frac{i}{16 \pi^2} \left[\frac{m^2}{\epsilon} + m^2 \log \frac{\bar \mu^2}{m^2} + m^2 + \mathcal{O}\left(\epsilon\right) \right]\,,
\end{align}
depends only on powers of the IR scale $m$. There is no dependence on any UV scale (such as a cutoff), nor any power-law dependence on $\bar\mu$. Similarly, the integral
\begin{align}
\label{3.11}
\mu^{2\epsilon} \int \frac{\rd^\d k}{(2\pi)^\d} \frac{\left(k^2\right)}{\left(k^2-m^2\right)}
&= \frac{i \mu^{2\epsilon}}{\left(4\pi\right)^{\d/2}} \frac{\Gamma(3-\epsilon) \Gamma(-2+\epsilon)}{\Gamma(2-\epsilon) \Gamma(1)}
\left(m^2\right)^{2-\epsilon} \nn
&= \frac{i}{16 \pi^2} \left[\frac{m^4}{\epsilon} + m^4 \log \frac{\bar \mu^2}{m^2} + m^4 + \mathcal{O}\left(\epsilon\right) \right]\,,
\end{align}
so the quartic divergence of the integral turns into the IR scale $m$ to the fourth power.
\end{itemize}
The structure of the above integrals is easy to understand. Evaluating integrals using dimensional regularization is basically the same as evaluating integrals using the method of residues. Values of $\d,a,b$ are assumed such that the integrand vanishes sufficiently fast as $k \to \infty$ that the contour at infinity can be thrown away. The integral is then given by the sum of residues at the poles. The location of the poles is controlled by the denominators in the integrand, which only depend on the physical scales in the low-energy theory, such as particle masses and external momenta. Dimensional regularization automatically gives what we want---it keeps all the dependence on the physical parameters, and throws away all unphysical dependence on high-energy scales. It is the simplest physical regulator, and the one used in all higher order calculations.
\newpage
\section{No Quadratic Divergences}\label{sec:quad}
\begin{figure}
\begin{center}
\includegraphics[width=2cm]{figs/quad.pdf}
\caption{\label{fig:quad}One loop correction to the Higgs mass from the $-\lambda (H^\dagger H)^2$ interaction.}
\end{center}
\end{figure}
Let us look at the scalar graph Fig.~\ref{fig:quad} which gives a correction to the Higgs mass in the SM,
\begin{align}
\delta m_H^2 &=-12 \lambda \mu^{2\epsilon} \int \frac{\rd^\d k}{(2\pi)^\d} \frac{1}{\left(k^2-m_H^2\right)} \,,
\end{align}
where $\lambda$ is the Higgs self-coupling. You will have heard endless times that Fig.~\ref{fig:quad} gives a correction
\begin{align}
\label{3.13}
\delta m_H^2 \propto \Lambda^2\,,
\end{align}
to the Higgs mass
that depends quadratically on the cutoff. This is supposed to lead to a naturalness problem for the SM, because the Higgs is so much lighter than $\Lambda$, which is taken to be at the GUT scale or Planck scale. The naturalness problem also goes by the names of hierarchy problem or fine-tuning problem.
The above argument for the naturalness problem is \emph{completely bogus}. The regulator used for the SM is dimensional regularization, which respects gauge invariance. The actual value of the integral is eqn~(\ref{3.10}). Adding the renormalization counterterm cancels the $1/\epsilon$ piece, resulting in a correction to the Higgs mass
\begin{align}\label{5.15}
\delta m_H^2 &= -\frac{12 \lambda}{16\pi^2}\, m_H^2 \left[ \log \frac{m_H^2}{\bar \mu^2} + 1 \right]\,,
\end{align}
which is proportional to the Higgs mass. There is no quadratic mass shift proportional to the cutoff; there is no cutoff. The argument eqn~(\ref{3.13}) is based on a regulator that violates gauge invariance and the Wilson axioms, and which is never used for the SM in actual calculations. Bad regulators lead to bad conclusions.
\begin{exercise}
Compute the one-loop scalar graph Fig.~\ref{fig:quad} with a scalar of mass $m$ and interaction vertex $-\lambda \phi^4/4!$ in the \ensuremath{\overline{\text{MS}}}\ scheme. Verify the answer is of the form eqn~(\ref{5.15}). The overall normalization will be different, because this exercise uses a real scalar field, and $H$ in the SM is a complex scalar field.
\end{exercise}
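In the same spirit, one can check directly that the renormalized tadpole behind eqn~(\ref{5.15}) is proportional to $m^2$, with no additive cutoff-like term. A sympy sketch (an illustrative check, in units of $i/(16\pi^2)$, \ensuremath{\overline{\text{MS}}}\ conventions):

```python
import sympy as sp

eps, m2, mubar2 = sp.symbols('epsilon m2 mubar2', positive=True)

# Tadpole mu^(2 eps) Int d^d k/(2 pi)^d 1/(k^2 - m^2), in units of i/(16*pi^2).
# Gamma(eps-1) = Gamma(1+eps)/((eps-1)*eps) keeps sympy away from the pole;
# the exponential implements the MS-bar convention.
tadpole = (-m2 * sp.gamma(1 + eps)/((eps - 1)*eps)
           * sp.exp(eps*(sp.EulerGamma + sp.log(mubar2/m2))))

series = sp.series(tadpole, eps, 0, 1).removeO()
renorm = sp.simplify(series - m2/eps)   # MS-bar subtraction of the pole
print(renorm)   # m2*(log(mubar2/m2) + 1): proportional to m^2, no cutoff term
```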
\section{Power Counting Formula}\label{sec:5.3}
We can now extend the power counting formula eqn~(\ref{2.20}) to include loop corrections. If we consider a loop graph with an insertion of EFT vertices with coefficients of order $1/\Lambda^a$, $1/\Lambda^b$, etc. then any amplitude (including loops) will have the $\Lambda$ dependence
\begin{align}
\frac{1}{\Lambda^a} \frac{1}{\Lambda^b} \ldots = \frac{1}{\Lambda^{a+b+ \ldots} }
\end{align}
simply from the product of the vertex factors. The discussion of Sec.~\ref{sec:5.1} shows that the only scales which can occur in the numerator after doing the loop integrals are from poles in Feynman propagator denominators. These poles are at scales in the EFT, none of which is parametrically of order $\Lambda$. Thus there are no compensating factors of $\Lambda$ in the numerator, i.e.\ the power of $\Lambda$ is given by the vertex factors alone, so eqn~(\ref{2.20}) also holds for loop graphs.
Loop graphs in general are infinite, and the infinities ($1/\epsilon$ poles) are cancelled by renormalization counterterms. The EFT must include all operators necessary to absorb these divergences. From $n=\sum_i (\mathscr{D}_i-4)$, we see that if there is an operator with $\mathscr{D} >4$, we will generate operators of arbitrarily high dimension. Thus an EFT includes all possible higher dimension operators consistent with the symmetries of the theory. Dimension-six operators are needed to renormalize graphs with two insertions of dimension-five operators; dimension-eight operators are needed to renormalize graphs with two insertions of dimension-six operators (see Fig.~\ref{fig:5.3}), etc.
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=0.25]
\draw (0,-4)+(20:6) arc (20:160:6);
\draw (0,4)+(200:6) arc (200:340:6);
\filldraw (4.48,0) circle (0.15);
\filldraw (-4.48,0) circle (0.15);
\draw (4.48,-1.5) node [align=center] { $\scriptstyle \mathscr{L}_6$};
\draw (-4.48,-1.5) node [align=center] { $\scriptstyle \mathscr{L}_6$};
\end{tikzpicture}
\end{center}
\caption{\label{fig:5.3} Graph with two insertions of dimension-six operators, which requires a dimension-eight counterterm.}
\end{figure}
Thus we have to keep the entire expansion in higher dimension operators
\begin{align}
\mathscr{L}_\text{EFT} = \mathscr{L}_{\mathscr{D} \le 4} + \frac{\mathscr{L}_5 }{ \Lambda} + \frac{\mathscr{L}_6 }{ \Lambda^2} + \ldots\,.
\end{align}
Even if we focus just on the low-dimension operators, it is understood that the higher dimension operators are still present. It also makes no sense to set their coefficients to zero. Their coefficients depend on $\bar \mu$, and on other choices such as the gauge-fixing term, etc.\ and so setting them to zero is a random unmotivated choice which will no longer hold at a different value of $\bar \mu$ unless the operator is forbidden by a symmetry.
\section{An Explicit Computation}
We now analyze a simple toy example: we explicitly compute a one-loop amplitude in the full theory and in the EFT, and discuss the matching between the two. The toy example is a two-scale integral that will be evaluated using EFT methods. The entire argument applies almost without change to a practical example, the derivation of the HQET Lagrangian at one loop~\cite{Manohar:1997qy}.
Consider the integral
\begin{align}
\label{3.17}
I_F&=g^2 \mu^{2\epsilon} \int \frac{{\rm d}^\d k}{(2\pi)^\d}\ \frac{1}{(k^2-m^2)(k^2-M^2)}
\end{align}
where we will take $m \ll M$. $M$ is the UV scale, and $m$ is the IR scale. Integrals such as eqn~(\ref{3.17}) arise in loop calculations of graphs with intermediate heavy and light particles, such as in Fig.~\ref{fig:full}. In eqn~(\ref{3.17}), we have set the external momenta to zero to get a simple integral which we can analyze to all orders in $m/M$.
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=0.5]
\draw (-4,2) -- (4,2);
\draw (-4,-2) -- (4,-2);
\filldraw (0,2) circle (0.15);
\filldraw (0,-2) circle (0.15);
\draw[dashed] (0,2) .. controls (2,1) and (2,-1) .. (0,-2);
\draw[dashed,line width=1pt] (0,2) .. controls (-2,1) and (-2,-1) .. (0,-2);
\end{tikzpicture}
\end{center}
\caption{\label{fig:full} A graph that gives a loop integral of the form eqn~(\ref{3.17}). The solid lines are light external fields. The thin dashed line is a light particle with mass $m$. The thick dashed line is a heavy particle of mass $M$ that is not in the EFT.}
\end{figure}
The integral can be done exactly in $\d=4-2\epsilon$ dimensions
\begin{align}
\label{3.18}
I_F&=g^2\mu^{2\epsilon}\int \frac{{\rm d}^\d k}{(2\pi)^\d}\ \frac{1}{(k^2-m^2)(k^2-M^2)} \nn
&= \frac{ig^2}{16\pi^2}\left[\frac{1}{\epsilon} - \log \frac{M^2}{\bar\mu^2}
+ \frac{m^2}{M^2-m^2}\log\frac{m^2}{M^2} +1\right] \,,
\end{align}
where we have switched to the \ensuremath{\overline{\text{MS}}}\ scheme using eqn~(\ref{3.7}).
$I_F$ is a relatively simple integral because there are only two mass scales in the denominator. An integral with three denominators with
unequal masses gives rise to dilogarithms.
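Eqn~(\ref{3.18}) follows from partial fractions and two tadpoles. A sympy sketch checking both the pole and the finite part (an illustrative check, in units of $ig^2/(16\pi^2)$, \ensuremath{\overline{\text{MS}}}\ conventions):

```python
import sympy as sp

eps, m2, M2, mubar2 = sp.symbols('epsilon m2 M2 mubar2', positive=True)

# MS-bar tadpole with the pole kept:
#   mu^(2 eps) Int d^d k/(2 pi)^d 1/(k^2 - s2) -> s2*(1/eps + log(mubar2/s2) + 1)
def tadpole(s2):
    return s2*(1/eps + sp.log(mubar2/s2) + 1)

# Partial fractions on the integrand of eqn (3.17):
#   1/((k^2-m^2)(k^2-M^2)) = [1/(k^2-m^2) - 1/(k^2-M^2)]/(m^2 - M^2)
IF = (tadpole(m2) - tadpole(M2))/(m2 - M2)

target = 1/eps - sp.log(M2/mubar2) + m2/(M2 - m2)*sp.log(m2/M2) + 1
diff = sp.simplify(sp.expand_log(sp.expand(IF - target)))
print(diff)   # 0: reproduces eqn (3.18)
```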
The heavy particle $M$ can be integrated out, as was done for the $W$ boson. The heavy particle propagator is expanded in a power series,
\begin{align}
\label{3.19}
\frac{1}{k^2-M^2} = -\frac{1}{M^2}\left(1+\frac{k^2}{M^2} + \frac{k^4}{M^4}
+ \ldots\right).
\end{align}
The loop graph in the EFT is a series of contributions, one from each term in eqn~(\ref{3.19}),
\begin{align}
\label{3.20}
I_\text{EFT}&=g^2 \mu^{2\epsilon}\int \frac{{\rm d}^\d k}{(2\pi)^\d}\ \frac{1}{ (k^2-m^2)}\left[-\frac{1}{M^2}-\frac{k^2}{M^4}-\frac{k^4}{M^6}-\ldots\right] \nn
&= \frac{ig^2}{16\pi^2 M^2 }\left[-\frac{m^2}{\epsilon}
+ m^2 \log \frac{m^2}{\bar \mu^2}-m^2\right]
+ \frac{ig^2}{16\pi^2 M^4 }\left[-\frac{m^4}{\epsilon}
+ m^4 \log \frac{m^2}{\bar \mu^2}-m^4\right] \nn
&+ \frac{ig^2}{16\pi^2 M^6 }\left[-\frac{m^6}{\epsilon}
+ m^6 \log \frac{m^2}{\bar\mu^2}-m^6\right]
+\ldots\,.
\end{align}
The series in eqn~(\ref{3.20}) is sufficiently simple in this example that we can sum it up,
\begin{align}
\label{3.21}
I_\text{EFT}&= \frac{ig^2}{16\pi^2}\left[-\frac{1}{\epsilon}\ \frac{m^2}{M^2-m^2}
+ \frac{m^2}{M^2-m^2}\log \frac{m^2}{\bar\mu^2}-\frac{m^2}{M^2-m^2}\right]\,,
\end{align}
to compare with $I_F$. However, it is best to think of $I_\text{EFT}$ in the expanded form eqn~(\ref{3.20}), since the EFT is an expansion in powers of $1/M$.
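The resummation leading from eqn~(\ref{3.20}) to eqn~(\ref{3.21}) is just a geometric series in $m^2/M^2$; a minimal sympy check (illustrative only):

```python
import sympy as sp

m2, M2 = sp.symbols('m2 M2', positive=True)

# Each bracket of eqn (3.20) carries the same factor, weighted by (m2/M2)^r.
# Check that the closed form m2/(M2 - m2) of eqn (3.21) is the geometric sum
# sum_{r>=1} (m2/M2)^r by expanding it back out:
closed = m2/(M2 - m2)
series = sp.expand(sp.series(closed, m2, 0, 4).removeO())
print(series)   # m2/M2 + m2**2/M2**2 + m2**3/M2**3
```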
There are several important points to note:
\begin{itemize}
\item The two results $I_F$ and $I_\text{EFT}$ are different. The order of integration and expansion matters.
\item The $1/\epsilon$ terms do not agree; they are cancelled by counterterms that differ in the full theory and in the EFT. The two theories have \emph{different} counterterms and hence \emph{different} anomalous dimensions. In our example, the $1/\epsilon$ terms in eqn~(\ref{3.20}) give the anomalous dimensions of the $1/M^2$, $1/M^4$, $1/M^6$, etc.\ operators. Each operator has its own anomalous dimension.
\item The full theory and the EFT are independent theories adjusted to give the same $S$-matrix. One can use different regulators or gauge-fixing for the two theories.
\item The $\log m^2$ terms, which are non-analytic in the IR scale, agree in the two theories. This is the part of $I_F$ which \emph{must} be reproduced in the EFT.
\item The $\log M^2$ non-analytic terms in $M$ are not present in the EFT integral. This must be the case, because in the EFT calculation, we integrated an expression which was a power series in $1/M$, and had no non-analytic terms in $M$.
\item The difference between $I_F$ and $I_{\text{EFT}}$ is from the UV part of the integral, and is local in the IR mass scale $m$, so that $I_F-I_{\text{EFT}}$ is local (i.e.\ analytic) in $m$. This difference is called the matching contribution to the Lagrangian, and is included in the EFT result by absorbing it into shifts of the EFT Lagrangian coefficients.
\item $I_F$ has $\log M^2/m^2$ terms, which involve the ratio of the UV and IR scales. These logs can be summed using the RGE in the EFT.
\end{itemize}
\begin{exercise}\label{ex5.2}
Compute $I_F$ and $I_{\text{EFT}}$ given in eqns~(\ref{3.18},\ref{3.20}) in dimensional regularization in $\d=4-2\epsilon$ dimensions. Both integrals have UV divergences, and the $1/\epsilon$ pieces are cancelled by counterterms. Determine the counterterm contributions $I_{F,\text{ct}}$, $I_{\text{EFT},\text{ct}}$ to the two integrals.
\end{exercise}
\section{Matching}\label{sec:5.5}
The infinite parts of $I_F$ and $I_\text{EFT}$ are cancelled by counterterms in the full theory and the EFT, respectively. The difference of the two renormalized integrals is the matching contribution
\begin{align}
\label{3.22}
I_M &=\left[ I_F + I_{F,\text{c.t.}} \right] - \left[ I_\text{EFT} + I_{\text{EFT,c.t.}} \right] \nn
&= \frac{ig^2}{16\pi^2}\left[\left( \log \frac{\bar \mu^2}{M^2}+1\right)
+ \frac{m^2}{M^2}\left( \log \frac{\bar\mu^2}{M^2}+1\right)+\ldots\right].
\end{align}
The terms in parentheses are matching corrections to terms of order 1, order $1/M^2$, etc.\ from integrating out the heavy particle with mass $M$. They are analytic in the IR scale $m$. In our simple toy example, the $(m^2/M^2)^r$ corrections are corrections to the coefficient of the $\chi^4$ operator, where $\chi$ is the external field in Fig.~\ref{fig:full}. If the mass $m$ is generated from a $\lambda \phi^4/4!$ interaction when a light field $\phi$ gets a vacuum expectation value $\vev{\phi}=v$, $m^2=\lambda v^2/3$, then one can treat $m^2$ as $\lambda \phi^2/3$, and the series eqn~(\ref{3.22}) is an expansion in $\chi^4 (\phi^2)^r$ operators of increasing dimension. For this reason, we refer to the $1/M$ expansion as being in operators of increasing dimension.
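One can verify eqn~(\ref{3.22}) symbolically: the $\log m^2$ terms cancel in the difference of the renormalized results, leaving a series analytic in $m^2$. A sympy sketch (an illustrative check, in units of $ig^2/(16\pi^2)$):

```python
import sympy as sp

m2, M2, mubar2 = sp.symbols('m2 M2 mubar2', positive=True)

# Renormalized finite parts:
IF_fin  = -sp.log(M2/mubar2) + m2/(M2 - m2)*sp.log(m2/M2) + 1   # eqn (3.18)
EFT_fin = m2/(M2 - m2)*(sp.log(m2/mubar2) - 1)                  # eqn (3.21)

IM = sp.expand(sp.expand_log(IF_fin - EFT_fin))
assert not IM.has(sp.log(m2))        # log(m2) cancels: I_M is analytic in m2

# Leading terms of the m2/M2 expansion reproduce eqn (3.22):
series = sp.expand(sp.series(IM, m2, 0, 2).removeO())
target = sp.expand((sp.log(mubar2) - sp.log(M2) + 1)*(1 + m2/M2))
print(sp.expand(series - target))    # 0
```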
The logarithm of the ratio of IR and UV scales $m$ and $M$ can be written as
\begin{align}
\label{3.23}
\log \frac{m^2}{M^2} = \underbrace{ - \log \frac{M^2}{\bar \mu^2} }_{\text{matching}} +
\underbrace{\log \frac{m^2}{\bar \mu^2}}_{\text{EFT}}\,,
\end{align}
where the scales have been separated using $\bar \mu$. The first piece is in the matching condition eqn~(\ref{3.22}), and the second in the EFT result eqn~(\ref{3.21}). We have separated a two-scale calculation into two one-scale calculations. A single scale integral is far easier to compute than a multi-scale integral, so the two-step calculation is much easier to do in practice.
\begin{exercise}\label{ex5.4}
Compute $I_M \equiv \left( I_F + I_{F,\text{ct}} \right) - \left( I_{\text{EFT}} + I_{\text{EFT},\text{ct}} \right)$ and show that it is analytic in $m$.
\end{exercise}
\section{Summing Large Logs}
The full theory result $I_F$ has $\log M^2/m^2$ terms, which is the ratio of a UV and an IR scale. At higher orders, one gets additional powers of the log,
\begin{align}
\label{3.24}
\left[ \frac{g^2}{16\pi^2} \log \frac{M^2}{m^2} \right]^n\,.
\end{align}
If $M \gg m$, perturbation theory can break down when $g^2/(16 \pi^2) \log M^2/m^2 \sim 1$. QCD perturbation theory often breaks down because of such logs, and it is necessary to sum these corrections.
In the EFT approach, $I_F$ has been broken into two pieces, the matching $I_M$ and the EFT result $I_\text{EFT}$. $I_M$ only involves the high scale $M$, and logs in $I_M$ depend on the ratio $M/ \bar \mu$. These logs are not large if we choose $\bar \mu \sim M$. $I_M$ can be computed in perturbation theory with $\bar \mu \sim M$, and perturbation theory is valid as long as $g^2/(16\pi^2)$ is small, a much weaker condition than requiring $g^2/(16 \pi^2) \log M^2/m^2$ to be small.
Similarly, $I_\text{EFT}$ only involves the scale $m$, and logs in $I_\text{EFT}$ are logs of the ratio $m/\bar \mu$. The EFT logs are not large if we choose $\bar \mu \sim m$. Thus we can compute $I_M$ and $I_\text{EFT}$ if we use two different $\bar\mu$ values. The change in $\bar \mu$ is accomplished by using the renormalization group equations in the EFT.
\section{A Better Matching Procedure}\label{sec:5.7}
While we argued that single-scale integrals were much easier to evaluate than multi-scale ones, the way we computed $I_M$ as the difference $I_F-I_\text{EFT}$ still required first computing the multi-scale integral $I_F$. And if we know $I_F$, don't we essentially have the answer we want anyway? Why bother with constructing an EFT in the first place?
It turns out there is a much simpler way to compute the matching that does not rely on first computing $I_F$. $I_F$ and $I_\text{EFT}$ both contain terms non-analytic in the infrared scale, but the difference $I_M$ is analytic in $m$,
\begin{align}
\underbrace{ I_M(m) }_\text{analytic} &= \underbrace{ I_F(m) }_\text{non-analytic} -\underbrace{ I_\text{EFT}(m) }_\text{non-analytic} \,.
\end{align}
Therefore, we can compute $I_M$ by expanding $I_F-I_\text{EFT}$ in an expansion in the IR scale $m$. This drops the non-analytic pieces, but we know they cancel in $I_F-I_\text{EFT}$.
The expansion of $I_F$ is
\begin{align}
\label{3.25}
I_F^{(\text{exp})} &=g^2 \mu^{2\epsilon} \int \frac{{\rm d}^\d k}{(2\pi)^\d}\ \frac{1}{k^2-M^2} \left[\frac{1}{k^2} + \frac{m^2}{k^4} + \ldots \right] .
\end{align}
The expansion of $I_\text{EFT}$ is
\begin{align}
\label{3.26}
I_\text{EFT}^{(\text{exp})} &=g^2 \mu^{2\epsilon}\int \frac{{\rm d}^\d k}{(2\pi)^\d}\ \left[\frac{1}{k^2} + \frac{m^2}{k^4} + \ldots \right] \left[-\frac{1}{M^2}-\frac{k^2}{M^4}-\ldots\right].
\end{align}
Both $I_F^{(\text{exp})}$ and $I_\text{EFT}^{(\text{exp})}$ have to be integrated term by term. The expansions $I_F^{(\text{exp})}$ and
$I_\text{EFT}^{(\text{exp})}$ drop non-analytic terms in $m$, and do not sum to give $I_F$ and $I_\text{EFT}$. However, the non-analytic terms in $m$ cancel in the difference, so $I_F^{(\text{exp})}-I_\text{EFT}^{(\text{exp})}$ does sum to give $I_M$.
Non-analytic dependence on the IR scale $m$ arises from contributions of the form
\begin{align}
\frac{1}{\epsilon} m^\epsilon &= \frac{1}{\epsilon} + \log m + \ldots
\end{align}
in integrals done using dimensional regularization. In eqns~(\ref{3.25},\ref{3.26}), we first expand in the IR scale $m$, and then expand in $\epsilon$.
In this case,
\begin{align}
\frac{1}{\epsilon} m^\epsilon &= \frac{1}{\epsilon} \left[ m^\epsilon\Bigr|_{m=0} + m\, \epsilon\, m^{\epsilon-1}\Bigr|_{m=0} + \ldots \right].
\end{align}
In dimensional regularization, the $m=0$ limit of all the terms in the square brackets vanishes. Expanding in $m$ sets all non-analytic terms in $m$ to zero.
$I_\text{EFT}^{(\text{exp})}$ has to be integrated term by term. Each term is a scaleless integral, and vanishes. For example the first term in the product is
\begin{align}
\label{3.27}
&g^2 \mu^{2\epsilon}\int \frac{{\rm d}^\d k}{(2\pi)^\d}\ \left[\frac{1}{k^2} \right] \left[-\frac{1}{M^2}\right]
= -\frac{1}{M^2} g^2 \mu^{2\epsilon}\int \frac{{\rm d}^\d k}{(2\pi)^\d}\ \frac{1}{k^2} =0\,.
\end{align}
This is not an accident of our particular calculation, but completely general. $I_\text{EFT}$ was given by expanding the integrand of $I_F$ in inverse powers of the UV scale $M$. $I_\text{EFT}^{(\text{exp})}$ is given by taking the result and expanding the integrand in powers of the IR scale $m$. The resulting integrand has all scales expanded out, and so is scaleless and vanishes. $I_F^{(\text{exp})}$, on the other hand, now only depends on the UV scale $M$; the IR scale $m$ has been expanded out. Integrating term by term reproduces eqn~(\ref{3.22}) for the matching integral $I_M$. Thus the matching is given by evaluating $I_F$ with all IR scales expanded out. This is a much easier way to compute $I_M$ than computing $I_F$ and $I_{\text{EFT}}$ and taking the difference.
\begin{exercisebn}
Compute $I_F^{(\text{exp})}$, i.e.\ $I_F$ with the IR $m$ scale expanded out
\begin{align*}
I_F^{(\text{exp})} &= g^2 \mu^{2\epsilon} \int \frac{\rd^\d k}{(2\pi)^\d} \frac{1}{(k^2-M^2)}\left[\frac{1}{k^2} + \frac{m^2}{k^4} + \ldots \right] \,.
\end{align*}
Note that the first term in the expansion has a $1/\epsilon$ UV divergence, and the remaining terms have $1/\epsilon$ IR divergences.
\end{exercisebn}
\begin{exercisenb}
Compute $I_F^{(\text{exp})} + I_{F,\text{ct}}$ using $I_{F,\text{ct}}$ determined in Exercise~\ref{ex5.2}. Show that the UV divergence cancels, and the remaining $1/\epsilon$ IR divergence is the same as the UV counterterm $I_{\text{EFT},ct}$ in the EFT.
\end{exercisenb}
Something remarkable has happened. We have taken $I_F$, and expanded term by term in inverse powers of $1/M$, i.e.\ by assuming $k \ll M$, to get $I_\text{EFT}$. Then we have taken the original $I_F$ and expanded term by term in powers of $m$, i.e.\ by assuming $k \gg m$, to get $I_F^{(\text{exp})}=I_M$. The sum of the two contributions is exactly the original integral $I_F$. Adding two different expansions of the same integrand recovers the original result, not \emph{twice} the original result. The agreement is exact. One might worry that we have counted the region $m \ll k \ll M$ in both integrals. But this double-counting region is precisely $I_\text{EFT}^{(\text{exp})}$, and vanishes in dimensional regularization. It does not vanish with other regulators, such as a cutoff. One can understand why the EFT method works by using the analogy of dimensional regularization with integration using the method of residues. The $I_F$ integrand has UV poles at $M$ and IR poles at $m$. Expanding out in $1/M$ to get $I_\text{EFT}$ leaves only the IR poles. Expanding out in $m$ leaves only the UV poles in $I_M$. The sum of the two has all poles, and gives the full result.
Dimensionally regularized integrals are evaluated with $k$ set by the physical scales in the problem. There are no artificial scales, as in a cutoff regulator, that lead to spurious power divergences which have to be carefully subtracted away.
The method of regions~\cite{Beneke:1997zp} is a very clever procedure for evaluating Feynman integrals which is closely related to the above discussion. One finds momentum regions which lead to poles in the integrand, expands in a power series in each region, and integrates term-by-term using dimensional regularization. Adding up the contributions of all the momentum regions gives the original integrand. In our example, the two regions were the hard region $k \sim M$, and the soft region $k \sim m$. The method of regions provides useful information to formulate an EFT, but it is not the same as an EFT. In an EFT, one has a Lagrangian, and the EFT amplitudes are given by computing graphs using Feynman rules derived from the Lagrangian. One cannot add or subtract modes depending on which momentum region contributes to an EFT graph. For example, in HQET, graphs get contributions from momenta of order $m_b$, and of order $m_c$. Nevertheless, HQET only has a single gluon field, not separate ones for each scaling region. In the method of regions, the contribution of different regions can depend on how loop momenta are routed in a Feynman graph, though the total integral given by summing all regions remains unchanged. In an EFT, the Lagrangian and Feynman rules do not depend on the momentum routing used.
\section{UV and IR Divergences}\label{sec:5.8}
Let us look in more detail at the $1/\epsilon$ terms. The original integral $I_F$ can have both UV and IR divergences. In our example, it only has a UV divergence. The terms in $I_\text{EFT}$ are expanded in $k^2/M^2$ and become more and more UV divergent. The terms in $I_F^{(\text{exp})}$ are expanded in $m^2/k^2$ and become more and more IR divergent. Dimensional regularization regulates both the UV and IR divergences. It will be useful to separate the divergences into UV and IR, and label them by $\eUV$ or $\eIR$. In reality, there is only one $\epsilon=\eUV=\eIR$ given by $\epsilon=(4-\d)/2$. At higher loops, one has to be careful about mixed divergences which are the product of IR and UV divergences.
The log divergent (in $\d=4$) scaleless integral vanishes
\begin{align}
\int \frac{{\rm d}^\d k}{(2\pi)^\d}\ \frac{1}{k^4} &=0.
\end{align}
It is both UV and IR divergent, and can be split into UV divergent and IR divergent integrals
\begin{align}
\label{3.29}
\int \frac{{\rm d}^\d k}{(2\pi)^\d}\ \frac{1}{k^4}=\int \frac{{\rm d}^\d k}{(2\pi)^\d}\ \left[\frac{1}{k^2(k^2-m^2)} - \frac{m^2}{k^4(k^2-m^2)}\right]\,,
\end{align}
by introducing an arbitrary mass scale $m$. The first term is UV divergent, and the second is IR divergent.
Using $\eUV,\eIR$, and evaluating the pieces, eqn~(\ref{3.29}) becomes
\begin{align}
\label{3.29b}
\int \frac{{\rm d}^\d k}{(2\pi)^\d}\ \frac{1}{k^4} &= \frac{i}{16\pi^2}\left[ \frac{1}{\eUV} - \frac{1}{\eIR} \right]=0.
\end{align}
Log divergent scaleless integrals vanish because of the cancellation of $1/\eUV$ with $1/\eIR$. Power law divergent scaleless integrals simply vanish, and do not produce $1/\epsilon$ poles, e.g.\
\begin{align}
\label{3.29a}
\int \frac{{\rm d}^\d k}{(2\pi)^\d}\ \frac{1}{k^2} &= 0\,,&
\int \frac{{\rm d}^\d k}{(2\pi)^\d}\ 1 &= 0\,,
\end{align}
so there are no quadratic or quartic divergences in dimensional regularization.
Let us go back to our matching example. $I_F$ and $I_\text{EFT}$ have the same IR behavior, because the EFT reproduces the IR of the full theory. Now consider a particular term in $I_F^{(\text{exp})}$ with coefficient $m^r$,
\begin{align}
\label{3.31}
I_F^{(\text{exp})}(m) &= \sum_r m^r \ I_F^{(r)} \,.
\end{align}
We have expanded out the IR scale $m$, so there can be IR divergences which would otherwise have been regulated by $m$. The integral is a single scale integral depending only on $M$, and has the form
\begin{align}
\label{3.31a}
I_F^{(r)} &= \frac{A^{(r)}}{\eUV} + \frac{B^{(r)}}{\eIR} + C^{(r)}\,,
\end{align}
where $A^{(r)}$ is the UV divergence, $B^{(r)}$ is the IR divergence, and $C^{(r)}$ is the finite part. For example from eqn~(\ref{3.25})
\begin{align}
\label{3.32a}
I_F^{(0)} &=g^2 \mu^{2\epsilon} \int \frac{{\rm d}^\d k}{(2\pi)^\d}\ \frac{1}{k^2-M^2} \frac{1}{k^2}= \frac{ig^2}{16\pi^2} \left[
\frac{1}{\eUV}+ \log \frac{\bar\mu^2}{M^2}+1\right], \nn
I_F^{(2)} &=g^2 \mu^{2\epsilon} \int \frac{{\rm d}^\d k}{(2\pi)^\d}\ \frac{1}{k^2-M^2} \frac{1}{k^4}= \frac{ig^2}{16\pi^2}\frac{1}{M^2}\left[
\frac{1}{\eIR}+ \log \frac{\bar\mu^2}{M^2}+1\right],
\end{align}
so that
\begin{align}
\label{3.36}
A^{(0)} &= \frac{ig^2}{16\pi^2}, & A^{(2)} &= 0, \nn
B^{(0)} &= 0, & B^{(2)} &= \frac{ig^2}{16\pi^2}\frac{1}{M^2}, \nn
C^{(0)} &= \frac{ig^2}{16\pi^2} \left[ \log \frac{\bar\mu^2}{M^2}+1\right], &
C^{(2)} &= \frac{ig^2}{16\pi^2} \frac{1}{M^2} \left[ \log \frac{\bar\mu^2}{M^2}+1\right]\,.
\end{align}
Now look at the terms in $I_\text{EFT}^{(\text{exp})}$,
\begin{align}
\label{3.32b}
I_\text{EFT}^{(\text{exp})} (m) &= \sum_r m^r \ I_\text{EFT}^{(r)} \,.
\end{align}
$I_\text{EFT}^{(\text{exp})}$ is a scaleless integral, and vanishes. However, we can still pick out the log divergent terms, and write $0$ in the form
eqn~(\ref{3.29b}).
In general, we have
\begin{align}
I_\text{EFT}^{(r)} &= -\frac{B^{(r)}}{\eUV} + \frac{B^{(r)}}{\eIR} =0\,,
\end{align}
and there is no finite piece, since the integral vanishes. $B^{(r)}$ is the \emph{same} as in eqn~(\ref{3.31a}), because the two integrals have the same IR divergence, so the $1/\eIR$ terms must agree.
In our example, from eqn~(\ref{3.26}),
\begin{align}
\label{3.32}
I_\text{EFT}^{(0)} &=0\ \hbox{since there is no $m^0/k^4$ term} ,\nn
I_\text{EFT}^{(2)} &=-g^2 \frac{1}{M^2} \mu^{2\epsilon} \int \frac{{\rm d}^\d k}{(2\pi)^\d}\ \frac{1}{k^4}
= - \frac{ig^2}{16 \pi^2}\frac{1}{M^2}\left[ \frac{1}{\eUV} - \frac{1}{\eIR} \right] ,
\end{align}
so that
\begin{align}
\label{3.40}
B^{(0)} &= 0, & B^{(2)} &= \frac{ig^2}{16\pi^2}\frac{1}{M^2},
\end{align}
which agree with $B^{(0)}$ and $B^{(2)}$ in eqn~(\ref{3.36}), as expected. The renormalized expression for $I_F^{(r)}$ is given by adding the full theory counterterm $-A^{(r)} /\eUV$,
\begin{align}
\label{3.36a}
I_F^{(r)} + I_{F,\text{c.t.}}^{(r)} &= \frac{B^{(r)} }{\eIR} + C^{(r)} \,,
\end{align}
and the renormalized expression for $I_\text{EFT}^{(r)}$ by adding the EFT counterterm $B^{(r)} /\eUV$,
\begin{align}\label{3.36c}
I_\text{EFT}^{(r)} + I_\text{EFT,c.t.}^{(r)} &= \frac{B^{(r)} }{\eIR}\,.
\end{align}
Note that one does not cancel IR divergences by counterterms. The difference of eqn~(\ref{3.36a}) and eqn~(\ref{3.36c}) is
\begin{align}
I_M^{(r)} &= \left[ I_F^{(r)} + I_{F,\text{c.t.}}^{(r)} \right] - \left[ I_\text{EFT}^{(r)} + I_\text{EFT,c.t.}^{(r)} \right] = C^{(r)} .
\end{align}
The infrared divergences cancel between the two, leaving only the finite part $C^{(r)}$.
\begin{exercisebn}
Compute $I_{\text{EFT}}^{(\text{exp})}$, i.e.\ $I_{\text{EFT}}$ with the IR $m$ scale expanded out. Show that it is a scaleless integral which vanishes. Using the known UV divergence from Exercise~\ref{ex5.2}, write it in the form
\begin{align*}
I_{\text{EFT}}^{(\text{exp})} &=-B \frac{1}{16\pi^2} \left[ \frac{1}{\eUV}-\frac{1}{\eIR} \right]\,,
\end{align*}
and show that the IR divergence agrees with that in $I_F^{(\text{exp})} + I_{F,ct}$.
\end{exercisebn}
\begin{exercisenb}
Compute $\left(I_F^{(\text{exp})} + I_{F,ct}\right)-\left(I_{\text{EFT}}^{(\text{exp})} + I_{\text{EFT},ct} \right)$ and show that all the $1/\epsilon$ divergences (both UV and IR) cancel, and the result is equal to $I_M$ found in Exercise~\ref{ex5.4}.
\end{exercisenb}
This gives the prescription for the matching condition: Expand $I_F$ in IR scales, and keep only the finite part. However, we have obtained some new information. The anomalous dimension in the full theory is proportional to the UV counterterm $-A$. The anomalous dimension in the EFT is proportional to the EFT counterterm $B$, which can be different from $A$. By the argument just given, $B$ is the IR divergence of the full theory. By using an EFT, we have converted IR divergences (i.e.\ the $\log m$ terms) in the full theory into UV divergences in the EFT. This converts IR logs into UV logs, which can be summed using the renormalization group. In the EFT, $\log M/m$ terms in the full theory are converted to $\log \bar \mu/m$ terms, since $M \to \infty$ in the EFT. These are summed by the EFT renormalization group equations.
\begin{exercise}
Make sure you understand why you can compute $I_M$ simply by taking $I_F^{(\text{exp})} $ and dropping all $1/\epsilon$ terms (both UV and IR).
\end{exercise}
Finally, if we do the EFT calculation without expanding out the IR scale $m$, then the EFT calculation is no longer IR divergent and can have a finite part,
\begin{align}
I_\text{EFT}^{(r)} &= -\frac{B^{(r)}}{\eUV} + D^{(r)}\,,
\end{align}
where the UV divergence remains the same as before. The finite part of the full amplitude $I_F$ has been split into $C^{(r)}+D^{(r)}$, with $C^{(r)}$ from the matching and $D^{(r)}$ from the EFT. In our example,
\begin{align}
D^{(0)} &= 0, \nn
D^{(2)} &= \frac{ig^2}{16\pi^2} \frac{1}{M^2}\left[ \log \frac{m^2}{\bar\mu^2}-1 \right],
\end{align}
from eqn~(\ref{3.20}).
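The split of the finite part into $C^{(2)}+D^{(2)}$ can be checked against the $m^2$ term of the full-theory result. A sympy sketch (an illustrative check, in units of $ig^2/(16\pi^2)$, with all scales written in terms of the \ensuremath{\overline{\text{MS}}}\ scale $\bar\mu$):

```python
import sympy as sp

m2, M2, mubar2 = sp.symbols('m2 M2 mubar2', positive=True)

# Order-m^2 finite parts:
C2 = (sp.log(mubar2/M2) + 1)/M2          # matching coefficient, eqn (3.36)
D2 = (sp.log(m2/mubar2) - 1)/M2          # EFT finite part at this order

# Coefficient of m2 in the finite part of the full-theory result, eqn (3.18):
IF_fin = -sp.log(M2/mubar2) + m2/(M2 - m2)*sp.log(m2/M2) + 1
IF2 = sp.expand(sp.series(sp.expand_log(IF_fin), m2, 0, 2).removeO()).coeff(m2, 1)

diff = sp.simplify(sp.expand_log(IF2 - (C2 + D2)))
print(diff)   # 0: mubar drops out of the sum C2 + D2
```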
\section{Summary}
It has taken a while to get to the final answer, but we can now summarize our results. The general procedure is simple to state:
\begin{itemize}
\item Compute the full theory graphs expanding in all IR scales. The integrals are single-scale integrals involving only the high scale $M$. Drop the $1/\epsilon$ terms from both UV and IR divergences. This gives $C^{(r)}(\mu)$. To avoid large logarithms, $\mu$ should be chosen to be of order the high scale $M$. The starting values of the EFT coefficients at the high scale are $C^{(r)}(\mu\sim M)$.
\item Evolve the EFT down from $\mu \sim M$ to a low scale $\mu \sim m$ using the renormalization group equations in the EFT. This sums logs of the ratios of scales, $\ln M/m$.
\item Compute in the EFT using $\mu \sim m$. There are no large logs in the EFT calculation.
\item Combine the pieces to get the final result.
\end{itemize}
One computation has been broken up into several much simpler calculations, each of which involves a single scale.
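As a numerical illustration of this workflow in the toy model, the $\bar\mu$ dependence cancels between the matching and EFT pieces (at this order, evaluated at a common $\bar\mu$, with the matching summed to all orders in $m^2/M^2$). A short Python check with illustrative numbers $m^2=1$, $M^2=10^4$:

```python
import math

# Toy-model finite parts in units of i*g^2/(16*pi^2); a numerical check that
# the split into matching + EFT is independent of where mubar^2 is chosen.
m2, M2 = 1.0, 1.0e4

def IF_fin(mubar2):          # full theory, eqn (3.18) minus the pole
    return -math.log(M2/mubar2) + m2/(M2 - m2)*math.log(m2/M2) + 1

def IM(mubar2):              # matching, eqn (3.22) summed to all orders
    return (1 + m2/(M2 - m2))*(math.log(mubar2/M2) + 1)

def IEFT_fin(mubar2):        # EFT, eqn (3.21) minus the pole
    return m2/(M2 - m2)*(math.log(m2/mubar2) - 1)

for mubar2 in (M2, m2, 50.0):   # mubar ~ M, mubar ~ m, or anything in between
    assert abs(IF_fin(mubar2) - (IM(mubar2) + IEFT_fin(mubar2))) < 1e-9
print("mubar dependence cancels between matching and EFT pieces")
```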
\begin{exercisebn}
Compute the QED on-shell electron form factors $F_1(q^2)$ and $F_2(q^2)$ expanded to first order in $q^2/m^2$ using dimensional regularization to regulate the IR and UV divergences. This gives the one-loop matching to heavy-electron EFT. Note that it is much simpler to \emph{first} expand and then do the Feynman parameter integrals. A more difficult version of the problem is to compute the on-shell quark form factors in QCD, which gives the one-loop matching to the HQET Lagrangian. For help with the computation, see Ref.~\cite{Manohar:1997qy}. Note that in the non-Abelian case, using background field gauge is helpful because the amplitude respects gauge invariance on the external gluon fields.
\end{exercisebn}
\begin{exercisenn}
\item The SCET matching for the vector current $\overline \psi \gamma^\mu \psi$ for the Sudakov form factor is a variant of the previous problem. Compute $F_1(q^2)$ for on-shell massless quarks, in pure dimensional regularization with $Q^2=-q^2 \not =0$. Here $Q^2$ is the big scale, whereas in the previous problem $q^2$ was the small scale. The spacelike calculation $Q^2>0$ avoids having to deal with the $+i 0^+$ terms in the Feynman propagator which lead to imaginary parts. The timelike result can then be obtained by analytic continuation.
\end{exercisenn}
\begin{exercisenb}
\item Compute the SCET matching for timelike $q^2$, by analytically continuing the previous result. Be careful about the sign of the imaginary parts.
\end{exercisenb}
\section{RG Improved Perturbation Theory}\label{sec:rge}
We have mentioned several times that renormalization group improved perturbation theory is better than fixed order perturbation theory. To understand the difference, consider an example where an operator coefficient $c(\mu)$ satisfies the one-loop renormalization group equation
\begin{align}
\label{3.45}
\mu \frac{\rd}{\rd \mu}c(\mu) &=\left[ \gamma_0 \frac{g^2(\mu)}{16\pi^2} + \mathcal{O}\left(\frac{g^2(\mu)}{16\pi^2} \right)^2 \right] c(\mu) ,
\end{align}
where $\gamma_0$ is a constant. The evolution of $g(\mu)$ is given by the $\beta$-function equation
\begin{align}\label{3.46}
\mu \frac{\rd g(\mu)}{\rd \mu} &= -b_0 \frac{g^3(\mu)}{16\pi^2} + \mathcal{O}\left[\frac{g^5(\mu)}{(16\pi^2)^2} \right] .
\end{align}
As long as $g^2(\mu)/(16\pi^2)$ is small, we can integrate the ratio of eqn~(\ref{3.45}) and eqn~(\ref{3.46}) to get
\begin{align}
\label{3.47}
\frac{c(\mu_1)}{c(\mu_2)} &= \left[ \frac{\alpha_s(\mu_1)}{\alpha_s(\mu_2)} \right]^{-\gamma_0/(2b_0)}, & \alpha_s(\mu) &= \frac{g^2(\mu)}{4\pi}.
\end{align}
Integrating eqn~(\ref{3.45},\ref{3.46}) term by term, or equivalently, expanding eqn~(\ref{3.47}) gives
\begin{align}
\label{3.48}
\frac{c(\mu_1)}{c(\mu_2)} &= 1 + \gamma_0 \frac{\alpha_s(\mu_2)}{4\pi} \log \frac{\mu_1}{\mu_2}
-\frac12 \gamma_0 (2b_0 -\gamma_0) \left[\frac{\alpha_s(\mu_2)}{4\pi} \log \frac{\mu_1}{\mu_2} \right]^2 \nn
& +\frac16 \gamma_0 (2b_0 -\gamma_0) (4b_0 -\gamma_0) \left[\frac{\alpha_s(\mu_2)}{4\pi} \log \frac{\mu_1}{\mu_2} \right]^3 + \ldots
\end{align}
The renormalization group sums the leading log (LL) series $\alpha_s^n \log^n$, as can be seen from eqn~(\ref{3.48}). One can show that the higher order corrections in eqn~(\ref{3.45},\ref{3.46}) do not contribute to the leading log series, since they are suppressed by $\alpha_s/(4\pi)$ without a log. Including the two-loop terms gives the next-to-leading-log (NLL) series $\alpha_s^n \log^{n-1}$, the three-loop terms give the NNLL series $\alpha_s^n \log^{n-2}$, etc.
The change in $g(\mu)$ and $c(\mu)$ can be very large, even if $\alpha_s(\mu)$ is small. For example, in the strong interactions, $\alpha_s(M_Z) \approx 0.118$ and $\alpha_s(m_b) \approx 0.22$, a ratio of about two. Even though both values of $\alpha_s$ are small, weak decay operator coefficients also change by about a factor of two between $M_Z$ and $m_b$, as shown below.
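A short numerical sketch makes this concrete. The script below is illustrative and stdlib-only; the inputs $b_0=23/3$ (QCD with $n_f=5$), $\gamma_0=4$ (the value appearing for $c_+$ in the next subsection), and $\alpha_s(M_Z)=0.118$ are taken from the discussion, while the scales and the script itself are ours. It runs $\alpha_s$ down to $m_b$ at one loop and checks that the closed-form solution eqn (3.47) coincides with the summed series of logs:

```python
import math

# One-loop RG-improved running, eqn (3.47), viewed as a resummation of
# the log series.  Illustrative inputs: b0 = 23/3 (QCD, nf = 5),
# gamma0 = 4 (the c_+ value of the operator-mixing example).
b0, gamma0 = 23.0 / 3.0, 4.0
alpha2 = 0.118                       # alpha_s at mu2 ~ M_Z
L = math.log(4.8 / 91.2)             # log(mu1/mu2), running down to mu1 ~ m_b

# one-loop running: 1/alpha(mu1) = 1/alpha(mu2) + (b0/2pi) log(mu1/mu2)
alpha1 = 1.0 / (1.0 / alpha2 + b0 / (2 * math.pi) * L)
print(f"alpha_s(m_b) at one loop = {alpha1:.3f}")     # about 0.205

# closed-form LL solution, eqn (3.47)
exact = (alpha1 / alpha2) ** (-gamma0 / (2 * b0))

# binomial series of the closed form:
# sum_n binom(gamma0/(2 b0), n) [2 b0 (alpha_s(mu2)/4pi) L]^n
v, p = 2 * b0 * alpha2 / (4 * math.pi) * L, gamma0 / (2 * b0)
series, term = 0.0, 1.0
for n in range(30):
    series += term
    term *= (p - n) / (n + 1) * v
print(f"closed form {exact:.5f}  vs  summed log series {series:.5f}")
```

At strict one loop $\alpha_s(m_b)\approx 0.20$ rather than the quoted $0.22$ (which includes higher-order running), so the resummed ratio here comes out $\approx 0.87$ rather than the $0.85$ quoted below.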
\subsection{Operator Mixing}
Summing logs using the renormalization group equations allows us to include operator mixing effects in a systematic way. This is best illustrated by the simple example of non-leptonic weak
$b \to c$ decays via the effective Lagrangian
\begin{align}
L = -\frac{4 G_F}{\sqrt2}V_{cb} V_{ud}^*\
\left(c_1 O_1 + c_2 O_2 \right),
\end{align}
where the two operators and their tree-level coefficients at $\mu=M_W$ are
\begin{align}
O_1 &=\left(\bar c^\alpha\, \gamma^\mu\, P_L\, b_\alpha\right)\left(\bar d^\beta\, \gamma_\mu\, P_L\, u_\beta
\right), & c_1 &= 1 + \mathcal{O}\left(\alpha_s\right), \\
O_2 &=\left(\bar c^\alpha\, \gamma^\mu\, P_L\, b_\beta\right)\left(\bar d^\beta\, \gamma_\mu\, P_L\, u_\alpha\
\right),& c_2 &=0+ \mathcal{O}\left(\alpha_s\right),
\end{align}
where $\alpha$ and $\beta$ are color indices. Since the $W$ boson is color-singlet, only $O_1$ is produced by the tree-level graph. $O_2$ is generated by loop graphs involving gluons, which are suppressed by a power of $\alpha_s$.
The renormalization group equations can be computed from the one-loop graph in Fig.~\ref{fig:anomdim}~\cite{Gilman:1979bc},
\begin{figure}
\centering
\includegraphics[width=2cm]{figs/fd4}
\caption{\label{fig:anomdim} Graph contributing to the anomalous dimension of $O_1$ and $O_2$. One has to sum over gluon exchange between all possible pairs of lines, and also include wavefunction corrections.}
\end{figure}
\begin{align}
\label{5.52}
\mu \frac{\rd}{\rd \mu} \left[ \begin{array}{c} c_1 \\ c_2 \end{array} \right]
= \frac{\alpha_s}{4\pi} \left[ \begin{array}{cc} -2 & 6 \\ 6 & -2 \end{array} \right] \left[ \begin{array}{c} c_1 \\ c_2 \end{array} \right].
\end{align}
\begin{exercise}
Compute the anomalous dimension mixing matrix in eqn~(\ref{5.52}).
\end{exercise}
The anomalous dimension matrix is not diagonal, which is referred to as operator mixing. In this simple example, the equations can be integrated by taking the linear combinations $c_\pm = c_1 \pm c_2$,
\begin{align}
\mu \frac{\rd}{\rd \mu} \left[ \begin{array}{c} c_+ \\ c_- \end{array} \right]
&= \frac{\alpha_s}{4\pi} \left[ \begin{array}{cc} 4 & 0 \\ 0 & -8 \end{array} \right] \left[ \begin{array}{c} c_+ \\ c_- \end{array} \right],
\end{align}
which decouples the equations. The solution is
\begin{align}
\label{3.54}
\frac{c_+(\mu_1)}{c_+(\mu_2)} &= \left[ \frac{\alpha_s(\mu_1)}{\alpha_s(\mu_2)} \right]^{-6/23}, &
\frac{c_-(\mu_1)}{c_-(\mu_2)} &= \left[ \frac{\alpha_s(\mu_1)}{\alpha_s(\mu_2)} \right]^{12/23},
\end{align}
using eqn~(\ref{3.47}), with $b_0=11-2/3 n_f=23/3$ and $n_f=5$ dynamical quark flavors. With $\alpha_s(m_b) \sim 0.22$ and $\alpha_s(M_Z) \sim 0.118$,
\begin{align}
\label{3.55}
\frac{c_+(m_b)}{c_+(M_W)} &= 0.85, &
\frac{c_-(m_b)}{c_-(M_W)} &= 1.38,
\end{align}
so that
\begin{align}
\label{3.55a}
c_1(m_b) &\approx 1.12, & c_2(m_b) &\approx -0.27\,.
\end{align}
A substantial $c_2$ coefficient is obtained at low scales, even though the starting value is $c_2(M_W)=0$.
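These numbers can be reproduced in a few lines. The sketch below uses the $\alpha_s$ values quoted above; the helper \texttt{matvec} is ours. It verifies that $c_1\pm c_2$ diagonalize the matrix in eqn (5.52) and then runs the coefficients from $M_W$ down to $m_b$:

```python
# Operator-mixing check: c1 +- c2 diagonalize the anomalous-dimension
# matrix of eqn (5.52), and running with eqn (3.54) reproduces
# eqns (3.55) and (3.55a).
gamma = [[-2, 6], [6, -2]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

assert matvec(gamma, [1, 1]) == [4, 4]      # eigenvalue +4 for c_+
assert matvec(gamma, [1, -1]) == [-8, 8]    # eigenvalue -8 for c_-

ratio = 0.22 / 0.118                        # alpha_s(m_b)/alpha_s(M_Z)
c_plus = ratio ** (-6 / 23)                 # exponents -gamma0/(2 b0), b0 = 23/3
c_minus = ratio ** (12 / 23)
c1 = (c_plus + c_minus) / 2                 # c_+(M_W) = c_-(M_W) = 1 at tree level
c2 = (c_plus - c_minus) / 2
print(f"c_+(m_b)/c_+(M_W) = {c_plus:.2f}, c_-(m_b)/c_-(M_W) = {c_minus:.2f}")
print(f"c_1(m_b) = {c1:.2f}, c_2(m_b) = {c2:.2f}")
```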
Equation~(\ref{3.48}) for the general matrix case is
\begin{align}
\label{3.58}
\mathbf{c}(\mu_1) &= \biggl[ 1 + \bm{\gamma}_0 \frac{\alpha_s(\mu_2)}{4\pi} \log \frac{\mu_1}{\mu_2}
-\frac12 \bm{\gamma}_0 (2b_0 -\bm{\gamma}_0) \left[\frac{\alpha_s(\mu_2)}{4\pi} \log \frac{\mu_1}{\mu_2} \right]^2 \nn
& +\frac16 \bm{\gamma}_0 (2b_0 -\bm{\gamma}_0) (4b_0 - \bm{\gamma}_0) \left[\frac{\alpha_s(\mu_2)}{4\pi} \log \frac{\mu_1}{\mu_2} \right]^3 + \ldots \biggr] \mathbf{c}(\mu_2),
\end{align}
where $\bm{\gamma}_0$ is a matrix and $\mathbf{c}$ is a column vector. Equation~(\ref{3.58}) shows that $c_2(m_b)$ in eqn~(\ref{3.55a}) is a leading-log term, even though it starts at $c_2(M_W)=0$. In examples with operator mixing, it is difficult to obtain the leading-log series eqn~(\ref{3.58}) by looking at graphs in the full theory. The method used in practice to sum the leading-log series is by integrating anomalous dimensions in the EFT.
The above discussion of renormalization group equations and operator mixing also holds in general EFTs. The EFT Lagrangian is an expansion in higher dimension operators,
\begin{align}
\mathscr{L} &= \mathscr{L}_{\mathscr{D} \le 4} + \frac{1}{\Lambda} c^{(5)}_i O^{(5)}_i + \frac{1}{\Lambda^2} c^{(6)}_i O^{(6)}_i + \ldots\,.
\end{align}
The running of the coupling constants in $\mathscr{L}_{\mathscr{D} \le 4}$ is given by the usual $\beta$-functions of the low-energy theory, e.g.\ by the QCD and QED $\beta$-functions. The other terms in $\mathscr{L}$ are higher dimension operators, and their anomalous dimensions are computed in the same way as eqn~(\ref{5.52}) for the weak interactions. The additional piece of information we have is the EFT power counting formula. This leads to renormalization group equations of the form
\begin{align}
\mu \frac{\rd}{\rd \mu} c^{(5)}_i &= \gamma^{(5)}_{ij} c^{(5)}_j\,, \nn
\mu \frac{\rd}{\rd \mu} c^{(6)}_i &= \gamma^{(6)}_{ij} c^{(6)}_j + \gamma_{ijk}\, c^{(5)}_j c^{(5)}_k\,,
\end{align}
and in general
\begin{align}
\mu \frac{\rd}{\rd \mu} c^{(D)}_i &= \gamma_{i j_1 j_2 \ldots j_r} c^{(D_1)}_{j_1} \ldots c^{(D_r)}_{j_r} \,,
\end{align}
with $D-4 = \sum_r (D_r-4)$, where the anomalous dimensions $\gamma$ are functions of the coupling constants in $\mathscr{L}_{\mathscr{D} \le 4}$. The renormalization group equations are \emph{non-linear}. Graphs with two insertions of a dimension-five operator need a dimension-six counterterm leading to the $c^{(5)}_j c^{(5)}_k$ term in the anomalous dimension for $c^{(6)}_i$, etc. In the presence of mass terms such as $m_H^2$, one also gets mixing to $D-4 < \sum_r (D_r-4)$ operators, e.g.
\begin{align}
\mu \frac{\rd}{\rd \mu} c^{(4)}_i &= m_H^2 \gamma^{(6 \to 4)}_{ij} c^{(6)}_j + \ldots \,,
\end{align}
as in SMEFT~\cite{Jenkins:2013zja}.
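The non-linear structure can be illustrated by a toy integration. The constants \texttt{g5}, \texttt{g6} and the mixing coefficient \texttt{g} below are invented, not real EFT values; the point is that a dimension-six coefficient is generated purely by two insertions of the dimension-five one:

```python
import math

# Toy non-linear running (constant, invented anomalous dimensions):
#   dc5/dt = g5*c5,   dc6/dt = g6*c6 + g*c5**2,   t = log(mu),
# integrated by Euler steps and compared with the closed-form solution.
g5, g6, g = 0.3, -0.2, 0.5
c5, c6 = 1.0, 0.0                    # start with no dimension-six coefficient
steps, T = 100_000, 1.0
dt = T / steps
for _ in range(steps):
    c5, c6 = c5 + g5 * c5 * dt, c6 + (g6 * c6 + g * c5 ** 2) * dt

# closed form for constant anomalous dimensions
c5_exact = math.exp(g5 * T)
c6_exact = g * (math.exp(2 * g5 * T) - math.exp(g6 * T)) / (2 * g5 - g6)
print(f"c5(T) = {c5:.5f} (exact {c5_exact:.5f})")
print(f"c6(T) = {c6:.5f} (exact {c6_exact:.5f})")   # generated by the c5^2 term
```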
\chapter{Field Redefinitions and Equations of Motion}
\section{LSZ Reduction Formula}\label{sec:LSZ}
Experimentally observable quantities in field theory are $S$-matrix elements, whereas what one computes from the functional integral are correlation functions of quantum fields. The LSZ reduction formula relates the two. For simplicity, we discuss a theory with a scalar field $\phi(x)$. The momentum space Green's functions are defined by
\begin{figure}
\begin{center}
\includegraphics[width=4cm]{figs/fd5}
\end{center}
\caption{\label{fig:green} Green's function with 3 incoming particles and 4 outgoing particles.}
\end{figure}
\begin{align}
\label{6.1}
&G(q_1,\ldots,q_m;p_1,\ldots,p_n) \nn
&=
\prod_{i=1}^m \int \rd^4 y_i\ e^{i q_i \cdot y_i}\prod_{j=1}^n \int \rd^4 x_j\ e^{-i p_j \cdot x_j} \braket{0| T \left\{ \phi(y_1) \ldots \phi(y_m) \phi (x_1) \ldots \phi(x_n) \right\} |0}
\end{align}
where the momenta $p_i$ are incoming, and momenta $q_i$ are outgoing, as shown in Fig.~\ref{fig:green}. These Green's functions can be computed in perturbation theory using the usual Feynman diagram expansion. The $\phi$ propagator in Fig.~\ref{fig:prop} is a special case of eqn~(\ref{6.1}),
\begin{align}
\label{6.2}
D(p) &= \int \rd^4 x\ e^{i p \cdot x} \braket{0| T \left\{ \phi (x) \phi (0) \right\} |0}\,.
\end{align}
If the field $\phi(x)$ can produce a single particle state $\ket{p}$ with invariant mass $m$ from the vacuum,
\begin{figure}
\begin{center}
\includegraphics[width=2.5cm]{figs/fd6}
\caption{\label{fig:prop} Two-point function $D(p)$.}
\end{center}
\end{figure}
\begin{align}
\label{6.3}
\braket{p | \phi(x)|0} \not=0\,,
\end{align}
then the propagator $D(p)$ has a pole at $p^2=m^2$,
\begin{align}
D(p) &\sim \frac{i \, \mathcal{R}}{p^2-m^2+i \epsilon} + \text{non-pole terms}.
\end{align}
$\phi$ is called an interpolating field for $\ket{p}$.
The wavefunction factor $\mathcal{R}$ is defined by
\begin{align}
\lim_{\substack{p^2\to m^2 \\ p^0 > 0}} \left(p^2-m^2\right) D(p) & \equiv i \, \mathcal{R}\,.
\end{align}
$\mathcal{R}$ is finite, since $D(p)$, the renormalized propagator, is finite.
The $S$-matrix is computed from the Green's function by picking out the poles for each particle,
\begin{align}\label{6.7}
& \lim_{\substack{q_i^2\to m^2 \\ q_i^0 > 0} } \lim_{\substack{p_j^2\to m^2 \\ p_j^0 > 0}}\prod_{i=1}^m \left(q_i^2-m^2\right) \prod_{j=1}^n \left(p_j^2-m^2\right) G(q_1,\ldots,q_m;p_1,\ldots,p_n) \nn
&=\prod_{i=1}^m \left(i \sqrt{\mathcal{R}_i} \right) \prod_{j=1}^n \left(i\, \sqrt{\mathcal{R}_j} \right) \ \ {}_{\text{out}}\! \braket{q_1,\ldots,q_m |p_1,\ldots, p_n }_{\text{in}},
\end{align}
i.e.\ the $n+m$ particle pole of the Green's function gives the $S$-matrix up to wavefunction normalization factors. Equation~(\ref{6.7}) is called the LSZ reduction formula~\cite{Lehmann:1954rq}. The only complication for fermions and gauge bosons is that one has to contract with spinors $u(p,s), v(p,s)$ and polarization vectors $\epsilon^{(s)}_\mu(p)$.
The important feature of eqn~(\ref{6.7}) is that the derivation only depends on eqn~(\ref{6.3}), so that any interpolation field can be used. Particle states are given by the physical spectrum of the theory, and Green's functions are given by correlation functions of fields. $S$-matrix elements, which are the physical observables, depend on particle states, not fields. Fields and particles are not the same.
\section{Field Redefinitions}
It is now easy to see why field redefinitions do not change the $S$-matrix. The LSZ reduction formula does not care what field is used. To understand this in more detail, consider the functional integral
\begin{align}
\label{6.7a}
Z[J] &= \int D \phi\ e^{ i \int L[\phi] + J \phi}.
\end{align}
The Green's functions
\begin{align}
\braket{0| T \left\{ \phi(x_1) \ldots \phi(x_r)\right\} |0} &=\frac{ \int D \phi\ \phi(x_1) \ldots \phi(x_r) \ e^{ i S(\phi)} }{ \int D \phi\ e^{ i S(\phi)} } \,,
\end{align}
are given by
\begin{align}
\braket{0| T \left\{ \phi(x_1) \ldots \phi(x_r)\right\} |0} &= \left. \frac{1}{Z[J]}\ \frac{\delta}{i \, \delta J(x_1)}\ldots \frac{\delta}{i \, \delta J(x_r)}\ Z[J] \ \right|_{J=0} \,.
\end{align}
Consider a local field redefinition,
\begin{align}
\label{6.10}
\phi(x) &= F[ \phi^\prime(x) ]\,,
\end{align}
such as
\begin{align}
\phi(x) &= \phi^\prime(x)+c_1 \partial^2\phi^\prime(x) +c_2 \phi^\prime(x)^3\,.
\end{align}
The field redefinition $F[ \phi^\prime(x) ]$ can involve integer powers of $\phi^\prime$ and a finite number of derivatives.
Then $L^\prime$ defined by
\begin{align}
L[\phi(x)]=L[F[\phi^\prime(x)]]=L^\prime[\phi^\prime(x)]\,,
\end{align}
is the new Lagrangian after the field redefinition eqn~(\ref{6.10}).
The functional integral $Z^\prime$ with the new field $\phi^\prime(x)$ and
Lagrangian $L^\prime$
\begin{align}
\label{6.7aa}
Z^\prime [J] &= \int D \phi^\prime\ e^{ i \int L^\prime[\phi^\prime] + J \phi^\prime} = \int D \phi\ e^{ i \int L^\prime[\phi] + J \phi}\,,
\end{align}
gives correlation functions of $\phi^\prime$ computed using $L^\prime[\phi^\prime]$, or equivalently, correlation functions of $\phi$ computed using $L^\prime[\phi]$, since $\phi^\prime$ is a dummy integration variable and can be replaced by $\phi$.
The original functional integral eqn~(\ref{6.7a}) under the change of variables eqn~(\ref{6.10}) becomes
\begin{align}\label{6.13}
Z[J] &= \int D \phi^\prime\ \abs{\frac{\delta F}{\delta \phi^\prime}} e^{ i \int L^\prime[\phi^\prime] + J F[\phi^\prime]}\,.
\end{align}
The Jacobian $\abs{{\delta F}/{\delta \phi^\prime}}$ is unity in dimensional regularization, except for the special case of a fermionic
chiral transformation, where there is an anomaly~\cite{Fujikawa:1979ay}. Neglecting anomalies, and dropping primes on the dummy variable $\phi^\prime$ gives
\begin{align}\label{6.14}
Z[J] &= \int D \phi\ e^{ i \int L^\prime[\phi] + J F[\phi]}.
\end{align}
Thus $Z[J]$, which gives the Green's functions of $\phi$ computed using Lagrangian $L[\phi]$ by eqn~(\ref{6.7a}), also gives the Green's functions of $F[\phi]$ computed using Lagrangian $L^\prime[\phi]$. In contrast, $Z^\prime[J]$ gives the correlation functions of $\phi$ computed using the new Lagrangian $L^\prime[\phi]$. The two correlation functions are different, so Green's functions change under a field redefinition. However, the $S$-matrix remains unchanged. $Z[J]$ computes the $S$-matrix using Lagrangian $L^\prime[\phi]$ and $F[\phi]$ as the interpolating field, by eqn~(\ref{6.14}). $Z^\prime[J]$ computes
the $S$-matrix using Lagrangian $L^\prime[\phi]$ and $\phi$ as the interpolating field, by eqn~(\ref{6.7aa}). The $S$-matrix does not care about the choice of interpolating field (i.e.\ field redefinition) as long as
\begin{align}
\braket{p | F[\phi] | 0} \not= 0,
\end{align}
so a field redefinition leaves the $S$-matrix unchanged.
In field theory courses, we study renormalizable Lagrangians with terms of dimension $\leqslant 4$. The only field redefinitions allowed are linear transformations,
\begin{align}
\phi_i^\prime &= C_{ij}\ \phi_j \,.
\end{align}
These are used to put the kinetic term in canonical form,
\begin{align}
\frac12 \partial_\mu \phi_i \, \partial^\mu \phi^i.
\end{align}
In an EFT, there is much more freedom to make field redefinitions, since the Lagrangian includes higher dimensional operators. One makes field redefinitions that respect the EFT power counting, e.g.\
\begin{align}
\phi \to \phi + \frac{1}{\Lambda^2} \phi^3 + \ldots
\end{align}
and work order by order in $1/\Lambda$. Field redefinitions are often used to put EFT Lagrangians in canonical form. The EFT Lagrangian is then given by matching from the full theory, followed by a field redefinition, so fields in the EFT are not the same as in the full theory.
\section{Equations of Motion}
A special case of field redefinitions is the use of equations of motion~\cite{Georgi:1991ch,Politzer:1980me}. Let $E[\phi]$ be the \emph{classical} equation of motion
\begin{align}
E[\phi] &\equiv \frac{\delta S}{\delta \phi}.
\end{align}
For example, if
\begin{align}
\mathscr{L} &= \frac 12 \partial_\mu \phi \partial^\mu \phi - \frac 12 m^2 \phi^2 - \frac{1}{4!} \lambda \phi^4,
\end{align}
$E[\phi]$ is
\begin{align}
E[\phi] &= - \partial^2 \phi(x) - m^2 \phi(x) - \frac{1}{3!} \lambda \phi^3(x)\,.
\end{align}
Let $\theta$ be an operator with a factor of the classical equation of motion,
\begin{align}\label{6.23}
\theta[\phi] &= F[\phi] E[\phi] = F[\phi] \frac{\delta S}{\delta \phi},
\end{align}
and consider the functional integral
\begin{align}
\label{6.24}
Z[J,\widetilde J] &= \int D \phi\ e^{ i \int L[\phi] + J\, \phi +\widetilde J \theta[\phi]} .
\end{align}
The correlation function
\begin{align}
\braket{0| T \left\{ \phi(x_1) \ldots \phi(x_n) \theta (x) \right\} | 0}
\end{align}
with one insertion of the equation-of-motion operator $\theta$ is given by evaluating
\begin{align}
\braket{0| T \left\{ \phi(x_1) \ldots \phi(x_n) \theta (x) \right\} | 0} &=\left. \frac{1}{Z[J,\widetilde J]}\ \frac{\delta}{i \, \delta J(x_1)}\ldots \frac{\delta}{i \, \delta J(x_n)}
\frac{\delta}{i \, \delta \widetilde J(x)}\ Z[J,\widetilde J]\ \right|_{J=\widetilde J=0}\,.
\end{align}
Make the change of variables
\begin{align}
\phi &= \phi^\prime - \widetilde J F[\phi^\prime]
\end{align}
in the functional integral eqn~(\ref{6.24}),
\begin{align}
Z[J,\widetilde J] &=
\int D \phi^\prime \abs{\frac{\delta \phi}{\delta \phi^\prime}} \ e^{ i \int L[\phi^\prime]-\left.\frac{\delta S}{\delta \phi} \right|_{\phi^\prime}\widetilde J F[\phi^\prime] + J \phi^\prime - J \widetilde J F[\phi^\prime] + \widetilde J \theta[\phi^\prime] +
\mathcal{O}(\widetilde J)^2} , \nn
&=
\int D \phi^\prime \abs{\frac{\delta \phi}{\delta \phi^\prime}} \ e^{ i \int L[\phi^\prime] + J \phi^\prime - J \widetilde J F[\phi^\prime] +
\mathcal{O}(\widetilde J)^2} ,
\end{align}
by eqn~(\ref{6.23}).
The Jacobian
\begin{align}
\abs{\frac{\delta \phi(x)}{\delta \phi^\prime(y)}} &= \det\left[ \delta(x-y) - \widetilde J\
\frac{\delta F[\phi^\prime(x)]}{\delta \phi^\prime(y)}\right]\,,
\end{align}
is unity in dimensional regularization. Relabeling the dummy integration variable as $\phi$ gives
\begin{align}
\label{6.30}
Z[J,\widetilde J] &=
\int D \phi \ e^{ i \int L[\phi]+ J \phi - J \widetilde J F[\phi] +
\mathcal{O}(\widetilde J)^2} .
\end{align}
Taking the $\widetilde J$ derivative and setting $\widetilde J=0$ gives, by using the equality of eqn~(\ref{6.24}) and eqn~(\ref{6.30}),
\begin{align}\label{6.31}
\int D \phi\ \theta(x)\ e^{ i \int L[\phi]+ J \phi } &= -\int D \phi\ J(x) F[\phi(x)]\
e^{ i \int L[\phi]+ J \phi } \,.
\end{align}
Differentiating multiple times w.r.t.\ $J$ gives the equation-of-motion Ward identity
\begin{align}\label{6.32}
& \braket{0 | T \left\{ \phi(x_1) \ldots \phi (x_n) \theta(x) \right\} | 0} \nn
&= i \sum_r \delta(x-x_r)
\braket{0|T \left\{ \phi(x_1) \ldots \cancel{\phi(x_r)} \ldots \phi (x_n) F[\phi(x_r)] \right\} | 0} .
\end{align}
The $S$ matrix element with an insertion of $\theta$ vanishes,
\begin{align}\label{6.33}
{}_{\text{out}}\! \braket{q_1,\ldots,q_m |\theta|p_1,\ldots, p_n }_{\text{in}} = 0\,,
\end{align}
because it is given by picking out the term with $m+n$ poles on the l.h.s. of eqn~(\ref{6.32}). But the r.h.s.\ shows that the matrix element of the $r{}^{\text{th}}$ term has no pole in $p_r$, because of the $\delta$ function. Each term in the sum vanishes, leading to eqn~(\ref{6.33}). As a result, equation-of-motion operators can be dropped because they do not contribute to the $S$-matrix.
Note that eqn~(\ref{6.33}) implies that the \emph{classical} equations of motion can be dropped. The equations of motion have quantum corrections, but the Ward identity eqn~(\ref{6.33}) is for the classical equations of motion without the quantum corrections. The Ward identity holds even for insertions of the equation-of-motion operator in loop graphs, where the particles are off-shell, and do not satisfy the classical equations of motion.
Using the equations of motion is a special case of a field redefinition. Consider the field redefinition (with $\epsilon \ll 1$):
\begin{align}\label{6.34}
\phi(x) &= \phi^\prime(x)+\epsilon\, F[\phi^\prime(x)]\,.
\end{align}
The change in the Lagrangian due to eqn~(\ref{6.34}) is
\begin{align}\label{6.35}
L[\phi] &= L[\phi^\prime] +\epsilon \,F[\phi^\prime] \frac{\delta S[\phi^\prime]}{\delta \phi^\prime}+ \mathcal{O}\left(\epsilon^2\right)= L[\phi^\prime] +\epsilon \, \theta[\phi^\prime] + \mathcal{O}\left(\epsilon^2\right)\,.
\end{align}
We have already seen that a field redefinition leaves the $S$-matrix invariant. Thus the $S$-matrix computed with the new Lagrangian $L^\prime [\phi] = L[\phi] + \epsilon \theta [\phi]$ is the same as that computed with $L[\phi]$.\footnote{Remember $\phi$ is a dummy variable, so we can use $L^\prime[\phi]$ instead of $L^\prime[\phi^\prime]$.} Thus we can shift the Lagrangian by equation-of-motion terms. In practice, the equations of motion are used to eliminate operators with derivatives in the EFT Lagrangian.
\begin{exercise}
The classical equation of motion for $\lambda \phi^4$ theory,
\begin{align*}
L &= \frac12 (\partial_\mu \phi)^2 - \frac 12 m^2 \phi^2 - \frac{\lambda}{4!} \phi^4\,,
\end{align*}
is
\begin{align*}
E[\phi] &= (-\partial^2-m^2)\phi - \frac{\lambda}{3!} \phi^3\,.
\end{align*}
The EOM Ward identity for $\theta = F[\phi] E$ is eqn~(\ref{6.32}). Integrate both sides
with
\begin{align*}
\int \rd x\ e^{-i q \cdot x} \prod_i \int \rd x_i\ e^{-i p_i \cdot x_i}
\end{align*}
to get the momentum space version of the Ward identity
\begin{align*}
\braket{0 | T \left\{ \widetilde \phi(p_1) \ldots \widetilde \phi(p_n) \widetilde \theta(q) \right \} | 0} &= i \sum_{r=1}^n
\braket{0 | T \left\{ \widetilde \phi(p_1) \ldots \cancel{\widetilde \phi(p_r)} \ldots \widetilde \phi(p_n) \widetilde F(q+p_r) \right\} | 0}\,.
\end{align*}
(a) Consider the equation of motion operator
\begin{align*}
\theta_1 &= \phi \, E[\phi] = \phi (-\partial^2-m^2)\phi - \frac{\lambda}{3!} \phi^4\,,
\end{align*}
and verify the Ward identity by explicit calculation at order $\lambda$ (i.e.\ tree level) for $\phi \phi$ scattering, i.e. for $\phi \phi \to \phi \phi$. \\
(b) Take the on-shell limit $p_r^2 \to m^2$ at fixed $q \not = 0$ of
\begin{align*}
\prod_r (-i) (p_r^2-m^2) \times \text{Ward Identity}\,,
\end{align*}
and verify that both sides of the Ward identity vanish. Note that both sides do not vanish if one first takes $q=0$ and then takes the on-shell limit. \\
(c) Repeat the above calculation to order $\lambda^2$, i.e.\ one loop. \\
(d) Repeat to one loop for the equation of motion operator
\begin{align*}
\theta_2 &= \phi^3 \, E[\phi] = \phi^3 (-\partial^2-m^2)\phi - \frac{\lambda}{3!} \phi^6\,.
\end{align*}
\end{exercise}
As an example of the use of the equations-of-motion, suppose we have an EFT Lagrangian
\begin{align}\label{6.36}
\mathscr{L} &= \frac 12 \partial_\mu \phi \partial^\mu \phi - \frac 12 m^2 \phi^2 - \frac{1}{4!} \lambda \phi^4 + \frac{c_1}{\Lambda^2} \phi^3 \partial^2 \phi
+ \frac{c_6}{\Lambda^2} \phi^6 + \ldots\,.
\end{align}
Then making the field redefinition
\begin{align}\label{6.37}
\phi \to \phi + \frac{c_1}{\Lambda^2} \phi^3\,,
\end{align}
gives the new Lagrangian
\begin{align}\label{6.38}
\mathscr{L} &= \frac 12 \partial_\mu \phi \partial^\mu \phi - \frac 12 m^2 \phi^2 - \frac{1}{4!} \lambda \phi^4 + \frac{c_1}{\Lambda^2} \phi^3 \partial^2 \phi
+ \frac{c_6}{\Lambda^2} \phi^6 \nn
&+ \frac{c_1}{\Lambda^2}\phi^3\left[ -\partial^2\phi - m^2 \phi - \frac{\lambda}{3!} \phi^3 \right] + \ldots \nn
&= \frac 12 \partial_\mu \phi \partial^\mu \phi - \frac 12 m^2 \phi^2 - \left[ \frac{1}{4!} \lambda + \frac{c_1}{\Lambda^2} m^2 \right]\phi^4
+ \left[ \frac{c_6}{\Lambda^2} - \frac{c_1}{\Lambda^2} \frac{\lambda}{3!} \right] \phi^6 + \ldots\,.
\end{align}
The two Lagrangians eqn~(\ref{6.36}) and eqn~(\ref{6.38}) give the same $S$-matrix. In eqn~(\ref{6.38}), we have eliminated the $\phi^3\partial^2 \phi$ operator at the cost of redefining the coefficients of the $\phi^4$ and $\phi^6$ operators. The EFT power counting has been maintained in going from eqn~(\ref{6.36}) to eqn~(\ref{6.38}). It is easier to do computations with eqn~(\ref{6.38}) rather than eqn~(\ref{6.36}), because eqn~(\ref{6.38}) has fewer independent operators. In EFTs, one usually applies the equations of motion to eliminate as many operators with derivatives as possible.
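The coefficient shifts in eqn~(6.38) can be checked with exact rational arithmetic. Only the potential (non-derivative) terms are tracked below, since the $\phi^3\partial^2\phi$ piece cancels against the shift of the kinetic term as described above; the numerical inputs are arbitrary rational test values, not physical couplings:

```python
from fractions import Fraction as F

# Coefficient shifts of eqn (6.38) under phi -> phi + (c1/Lam2) phi^3,
# potential terms only.  Arbitrary rational test inputs.
m2, lam, c1, c6, Lam2 = F(2), F(3), F(5), F(7), F(11)   # m^2, lambda, c_1, c_6, Lambda^2

# coefficient of phi^n in the potential part of eqn (6.36)
V = {2: -m2 / 2, 4: -lam / 24, 6: c6 / Lam2}

# the shift adds (c1/Lam2) phi^3 * (-m2 phi - (lam/6) phi^3),
# the non-derivative part of E[phi], to first order in 1/Lam2
V[4] += (c1 / Lam2) * (-m2)
V[6] += (c1 / Lam2) * (-lam / 6)

assert V[4] == -(lam / 24 + c1 * m2 / Lam2)        # phi^4 coefficient of eqn (6.38)
assert V[6] == c6 / Lam2 - c1 * lam / (6 * Lam2)   # phi^6 coefficient of eqn (6.38)
print("phi^4 coefficient:", V[4], "  phi^6 coefficient:", V[6])
```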
The calculation above only retained terms up to dimension six. If one works to dimension eight, one has to retain the terms quadratic in $c_1/\Lambda^2$ in the transformed Lagrangian. These terms are second order in the equation of motion. Working to second order in the equations of motion is tricky~\cite{Jenkins:2017dyc,Manohar:1997qy,Manohar:1993qn}, and it is best to systematically use field redefinitions to eliminate operators to avoid making mistakes.
Using field redefinitions rather than the equations of motion also clears up some subtleties. For example, the fermion kinetic term is
\begin{align}
\overline \psi\, i \slashed{D}\, \psi .
\end{align}
This operator vanishes using the fermion equation of motion $i \slashed{D}\, \psi = 0$. However, it is not possible to eliminate this term by a field redefinition, so one cannot eliminate the fermion kinetic energy using the equations of motion. One can eliminate higher order terms such as $\phi^2 \overline \psi\, i \slashed{D}\, \psi $. Another interesting example is given in Ref.~\cite{Jenkins:2017dyc}.
\begin{exercisebn}
Write down all possible $C$-even dimension six terms in eqn~(\ref{2.21}), and show how they can be eliminated by field redefinitions.
\end{exercisebn}
\begin{exercisenb}
Take the heavy quark Lagrangian
\begin{align*}
{\mathcal L}_v &= \bar Q_v \left\{ i v \cdot D + i {{\,\raise.15ex\hbox{/}\mkern-13.5mu D}}_\perp \frac{1}{ 2 m + i
v \cdot D} i {{\,\raise.15ex\hbox{/}\mkern-13.5mu D}}_\perp \right\} Q_v \nonumber \\
&= \bar Q_v \left\{ i v \cdot D - \frac{1}{2 m} {{\,\raise.15ex\hbox{/}\mkern-13.5mu D}}_\perp
{{\,\raise.15ex\hbox{/}\mkern-13.5mu D}}_\perp +\frac {1}{4 m^2} {{\,\raise.15ex\hbox{/}\mkern-13.5mu D}}_\perp \left(iv\cdot D\right)
{{\,\raise.15ex\hbox{/}\mkern-13.5mu D}}_\perp + \ldots \right\} Q_v \nonumber
\end{align*}
and use a sequence of field redefinitions to eliminate the $1/m^2$ suppressed $v \cdot D$ term. The equation of motion for the heavy quark field is $(i v \cdot D)Q_v=0$, so this example shows how to eliminate equation-of-motion operators in HQET. Here $v^\mu$ is the velocity vector of the heavy quark with $v \cdot v=1$, and
\begin{align*}
D_\perp^\mu \equiv D^\mu - (v \cdot D) v^\mu\,.
\end{align*}
If you prefer, you can work in the rest frame of the heavy quark, where $v^\mu=(1,0,0,0)$, $v \cdot D=D^0$ and $ D_\perp^\mu = (0,\mathbf{D})$. See Ref.~\cite{Manohar:1997qy} for help.
\end{exercisenb}
In general, there are many equation-of-motion operators $E_i$. Under renormalization, these operators mix among themselves,
\begin{align}
\mu\frac{\rd}{\rd \mu} E_i &= \gamma_{ij} E_j\,,
\end{align}
where $\gamma_{ij}$ can be gauge dependent. The reason is that the l.h.s.\ vanishes when inserted in an $S$-matrix element, and this needs to hold for all values of $\mu$. $E_i$ are not observable quantities, and their anomalous dimensions can depend on the choice of gauge. For non-equation-of-motion operators $O_i$, the anomalous dimensions take the form
\begin{align}
\mu\frac{\rd}{\rd \mu} O_i &= \gamma_{ij} O_j + \Gamma_{ik} E_k.
\end{align}
An operator $O_i$ is not an equation-of-motion operator if $O_i$ contributes to $S$-matrix elements. Under $\mu$ evolution, these operators can mix with $\{E_i\}$, since $\{E_i\}$ have zero contributions to $S$-matrix elements. Since $O_i$ are observable, $\gamma_{ij}$ is gauge independent, but $\Gamma_{ik}$ can be gauge dependent.
A well-known example of the use of equations-of-motion is for penguin graphs in the weak interactions~\cite{Gilman:1979bc}, shown in Fig.~\ref{fig:penguin}.
\begin{figure}
\centering
\includegraphics[width=2cm]{figs/fd7}
\caption{\label{fig:penguin} Penguin graph in the weak interactions.}
\end{figure}
The penguin graph is divergent, and requires the counterterm
\begin{align}\label{6.42}
\mathscr{L}&= \frac{4G_F}{\sqrt 2} \frac{c_P}{\epsilon} g(\overline \psi \gamma^\mu T^A \psi) \left( D^\nu F_{\mu \nu}\right)^A\,.
\end{align}
The penguin counterterm is eliminated from the Lagrangian by making a field redefinition,
\begin{align}\label{6.44}
\mathscr{L}&= \frac{4G_F}{\sqrt 2}\frac{c_P}{\epsilon} g ( \overline \psi \gamma^\mu T^A \psi) \left( D^\nu F_{\mu \nu}\right)^A
\to \frac{4G_F}{\sqrt 2} \frac{c_P}{\epsilon} g (\overline \psi \gamma^\mu T^A \psi) g (\overline \psi \gamma_\mu T^A \psi)\,,
\end{align}
and replacing it by a four-quark operator.
The field redefinition needed for eqn~(\ref{6.44}) is
\begin{align}
A^A_\mu \to A^A_\mu - \frac{4G_F}{\sqrt 2}\frac{c_P}{\epsilon} g \overline \psi \gamma_\mu T^A \psi\,,
\end{align}
which is a field redefinition with an infinite coefficient. Green's functions using the redefined Lagrangian eqn~(\ref{6.44}) are infinite, but the $S$-matrix is finite. There is no counterterm to cancel the penguin graph divergence, but the on-shell four-quark amplitude gets both the penguin and counterterm contributions (Fig.~\ref{fig:penguin2}) and is finite.
\begin{figure}
\centering
\includegraphics[width=2cm]{figs/fd8}\hspace{2cm}
\raise0.375cm\hbox{\includegraphics[width=1.5cm]{figs/fd9}}
\caption{\label{fig:penguin2} Penguin and four-quark contribution to $qq \to qq$.}
\end{figure}
\chapter{Decoupling of Heavy Particles}\label{sec:decoupling}
Heavy particles do not decouple in a mass-independent subtraction scheme such as \ensuremath{\overline{\text{MS}}}. For example, the one-loop QCD $\beta$-function coefficient is $b_0=11-2/3 n_f$, where $n_f$ is the number of quark flavors. Thus $b_0$ has the same value for all $\mu$, independent of the quark masses. One expects that the top quark only contributes to the $\beta$-function for $\mu \gg m_t$, and no longer contributes when $\mu \ll m_t$, i.e.\ heavy particles decouple at low energy.
To understand the decoupling of heavy particles, consider the contribution of a charged lepton of mass $m$ to the one-loop $\beta$ function in QED. The diagram Fig.~\ref{fig:qed} in dimensional regularization gives
\begin{align}\label{7.1}
& i \frac{e^2}{2\pi^2}\left(p_\mu p_\nu -p^2 g_{\mu\nu}\right)
\left[ \frac{1}{6\epsilon} - \int_0^1 dx\ x(1-x)\
\log\frac{m^2-p^2 x(1-x)}{\overline\mu^2}\right] \nn
\equiv & \, i\left(p_\mu p_\nu -p^2 g_{\mu\nu}\right) \Pi(p^2)
\end{align}
where $p$ is the external momentum.
\begin{figure}
\begin{center}
\includegraphics[width=4cm]{figs/fig7}
\end{center}
\caption{\label{fig:qed} One loop contribution to the QED $\beta$-function
from a fermion of mass $m$.}
\end{figure}
\section{Momentum-Subtraction Scheme}
Consider a mass-dependent scheme, the momentum space subtraction scheme, where one subtracts the value of the graph at a Euclidean momentum point $p^2=-\mu_M^2$, to get the renormalized vacuum polarization function,
\begin{align}
\Pi_{\text{mom}}(p^2,m^2,\mu_M^2) &= - \frac{e^2}{2\pi^2}
\left[\int_0^1 dx\ x(1-x)\ \log\frac{m^2-p^2 x(1-x)}{m^2+\mu_M^2 x(1-x)} \right].
\end{align}
The fermion contribution to the QED $\beta$-function is obtained by acting on $\Pi$ with $(e/2)\mu_M\, \rd/\rd \mu_M$,
\begin{align}
\beta_{\text{mom}}\left(e\right)&=-\frac{e}{ 2} \mu_M \frac{\rd }{\rd \mu_M} \frac{e^2}{ 2\pi^2}
\left[\int_0^1 dx\ x(1-x)\ \log \frac{m^2-p^2 x(1-x)}{ m^2+\mu_M^2 x(1-x)}\right]\cr
&= \frac{e^3}{ 2\pi^2}\int_0^1 dx\ x(1-x)\ \frac{\mu_M^2 x (1-x)}{ m^2+\mu_M^2 x(1-x)}.
\end{align}
The fermion contribution to the $\beta$-function is plotted in Fig.~\ref{fig:82}. When the fermion mass $m$ is small compared with the renormalization point $\mu_M$, $m\ll \mu_M$, the $\beta$-function contribution is
\begin{align}
\beta\left(e\right) \approx \frac{e^3}{ 2\pi^2}\int_0^1 dx\ x(1-x) = \frac{e^3}{12\pi^2}.
\end{align}
As the renormalization point passes through $m$, the fermion decouples, and for $\mu_M\ll m$, its contribution to $\beta$ vanishes as
\begin{align}
\beta\left(e\right) \approx \frac{e^3}{2\pi^2}
\int_0^1 dx\ x(1-x) \frac{\mu_M^2 x (1-x)}{ m^2}
=\frac {e^3}{60 \pi^2} \frac{\mu_M^2}{m^2} \to 0\,.
\end{align}
Thus in the momentum space scheme, we see the expected behavior that heavy particles decouple, which is an example of the Appelquist-Carazzone decoupling theorem~\cite{Appelquist:1974tg}.
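This behaviour is easy to verify numerically. The sketch below (the function name and sample couplings are ours) evaluates the Feynman-parameter integral for $\beta_{\text{mom}}$ by Simpson's rule and checks both limits:

```python
import math

# Numerical evaluation of beta_mom above, checking the two limits:
# e^3/(12 pi^2) for mu_M >> m, and (e^3/60 pi^2)(mu_M^2/m^2) for mu_M << m.
def beta_mom(e, muM, m, steps=4000):
    # Simpson's rule for the Feynman-parameter integral
    h = 1.0 / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        u = x * (1 - x)
        f = u * muM**2 * u / (m**2 + muM**2 * u)
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        total += w * f
    return e**3 / (2 * math.pi**2) * total * h / 3

e, m = 0.3, 1.0
high = beta_mom(e, 100 * m, m) / (e**3 / (12 * math.pi**2))
low = beta_mom(e, 0.01 * m, m) / (e**3 / (60 * math.pi**2) * 0.01**2)
print(f"mu_M = 100 m : beta / [e^3/(12 pi^2)]            = {high:.4f}")
print(f"mu_M = m/100 : beta / [(e^3/60 pi^2)(muM^2/m^2)] = {low:.4f}")
```

Both ratios approach one, reproducing the smooth decoupling shown in Fig.~\ref{fig:82}.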
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{figs/smfig}
\end{center}
\caption{\label{fig:82} Contribution of a fermion of mass $m$ to the QED $\beta$-function. The result is given for the momentum-space subtraction scheme, with renormalization scale $\mu_M$. The $\beta$ function does not attain its limiting value of $e^3/12\pi^2$ until $\mu_M \gtrsim 10\, m$. The fermion decouples for $\mu_M\ll m$.}
\end{figure}
\section{The \ensuremath{\overline{\text{MS}}}\ Scheme}
In the \ensuremath{\overline{\text{MS}}}\ scheme, one subtracts only the $1/\epsilon$ pole of eqn~(\ref{7.1}), so
\begin{align}\label{8.7}
\Pi_{\ensuremath{\overline{\text{MS}}}}(p^2,m^2,\overline\mu^2) &= - \frac{e^2}{2\pi^2}
\left[\int_0^1 dx\ x(1-x)\ \log\frac{m^2-p^2 x(1-x)}{\overline \mu^2} \right].
\end{align}
The fermion contribution to the QED $\beta$-function is obtained by acting with $(e/2) \bar\mu\, \rd/\rd\bar\mu$ on $\Pi$,
\begin{align}
\beta_{\ensuremath{\overline{\text{MS}}}}\left(e\right)&=- \frac{e}{2}\bar \mu \frac{\rd}{\rd \bar \mu} \frac{e^2}{ 2\pi^2}
\left[\int_0^1 dx\ x(1-x) \log \frac{m^2-p^2 x(1-x)}{ \overline\mu^2}\right]\nn
&= \frac{e^3}{ 2\pi^2}\int_0^1 dx\ x(1-x) = \frac {e^3}{12\pi^2},
\end{align}
which is independent of the fermion mass and $\bar\mu$.
The fermion contribution to the $\beta$-function in the \ensuremath{\overline{\text{MS}}}\ scheme does not vanish for $m\gg \bar\mu$, so the fermion does not decouple as it should. There is another problem: from eqn~(\ref{8.7}), the finite part of the Feynman graph in the \ensuremath{\overline{\text{MS}}}\ scheme at low momentum is
\begin{align}
\Pi_{\ensuremath{\overline{\text{MS}}}}(0,m^2,\overline\mu^2) &= -\frac{e^2 }{ 2\pi^2}
\left[\int_0^1 dx\ x(1-x) \log \frac{m^2}{ \bar\mu^2}\right].
\end{align}
For $\bar\mu\ll m$ the logarithm becomes large, and perturbation theory breaks down. These two problems are related. The large finite part corrects for the fact that the value of the running coupling used at low energies is ``incorrect,'' because it was obtained using the ``wrong'' $\beta$-function.
The two problems can be solved at the same time by integrating out heavy particles. One uses a theory including the heavy fermion as a dynamical field when $m<\bar\mu$, and a theory without the fermion field when $m>\bar\mu$. Effects of the heavy particle in the low energy theory are included via higher dimension operators, which are suppressed by inverse powers of the heavy particle mass. The matching condition of the two theories is that $S$-matrix elements for light particle scattering in the low-energy theory must equal the $S$-matrix elements for light particle scattering in the high-energy theory. Schematically, one matches
\begin{align}
\mathscr{L}^{(n_l+1)} & \to \mathscr{L}^{(n_l)}\,,
\end{align}
from a theory with $n_l$ light particles and one heavy particle to a theory with $n_l$ light particles. The effects of the heavy particles are absorbed into changes in the coefficients of $\mathscr{L}$. These are referred to as threshold corrections. Thus at the matching scale, $\mathscr{L}$ changes, both in its field content and in the values of its coefficients. However, nothing discontinuous is going on: the physics (i.e.\ the $S$-matrix elements) is continuous across the threshold. The description changes, but the resulting $S$-matrix elements remain the same.
In our example, we can integrate out the heavy lepton at the matching scale $\bar\mu$. The effect of the one-loop heavy lepton graph Fig.~\ref{fig:qed} can be expanded for $p^2 \ll m^2$ as
\begin{align}\label{8.7exp}
\Pi_{\ensuremath{\overline{\text{MS}}}}(p^2,m^2,\overline\mu^2) &= - \frac{e^2}{2\pi^2}
\int_0^1 dx\ x(1-x)\ \left\{ \log\frac{m^2}{\overline \mu^2}+ \log \left[ 1 - \frac{p^2}{m^2} x(1-x) \right]
\right\} \nn
&= - \frac{e^2}{2\pi^2}
\int_0^1 dx\ x(1-x)\ \left\{ \log\frac{m^2}{ \overline\mu^2} - \frac{p^2}{m^2} x(1-x) + \ldots \right\} \nn
&=-\frac{e^2}{12\pi^2} \log \frac{m^2}{\overline\mu^2} + \frac{e^2}{60\pi^2}\, \frac{p^2}{m^2} + \mathcal{O}\left( \frac{p^4}{m^4} \right)\,.
\end{align}
The first term is included in $\mathscr{L}^{(n_l)}$ by a shift in the gauge kinetic term. Rescaling the gauge field to restore the kinetic term to its canonical normalization $-F_{\mu\nu}^2/4$ gives a shift in the gauge coupling constant,
\begin{align}\label{7.11}
\frac{1}{e_L^2(\overline\mu)} &= \frac{1}{e_H^2(\overline\mu)} - \frac{1}{12\pi^2} \log \frac{m^2}{\overline\mu^2} \,,
\end{align}
where $e_{L}$ is the gauge coupling in the low-energy theory, and $e_H$ is the gauge coupling in the high-energy theory. The $\overline \mu$ dependence of the threshold correction is related to the difference in $\beta$-functions of the two theories.
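This scale dependence can be made concrete with a small numerical sketch (my own helper names; it uses the one-loop running $\rd(1/e^2)/\rd\log\bar\mu = -1/(6\pi^2)$ implied by $\beta = e^3/(12\pi^2)$ above the threshold, and takes the $\beta$-function of this fermion to be absent below it). The low-energy coupling then comes out independent of where the matching is performed:

```python
# Sketch: one-loop running plus the threshold matching of eqn (7.11).
# The matched low-energy coupling should not depend on the (arbitrary)
# matching scale mu_bar, up to higher orders.
import math

def run_high(inv_e2, mu_from, mu_to):
    # d(1/e^2)/d log mu = -1/(6 pi^2) in the theory with the fermion
    return inv_e2 - math.log(mu_to / mu_from) / (6 * math.pi**2)

def match(inv_e2_high, m, mu_bar):
    # 1/e_L^2(mu) = 1/e_H^2(mu) - log(m^2/mu^2)/(12 pi^2)
    return inv_e2_high - math.log(m**2 / mu_bar**2) / (12 * math.pi**2)

m = 1.0                   # heavy fermion mass (arbitrary units)
inv_e2_0 = 1 / 0.3**2     # 1/e_H^2 at the high scale mu_0 = 100 m
for mu_bar in (0.5 * m, m, 2.0 * m):
    inv_high = run_high(inv_e2_0, 100 * m, mu_bar)
    inv_low = match(inv_high, m, mu_bar)   # no further running here: beta_L = 0
    print(mu_bar, inv_low)   # same inv_low for every mu_bar
```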
The second term in eqn~(\ref{8.7exp}) gives a dimension six operator in the low-energy theory,
\begin{align}
\mathscr{L} &= \frac{e^2}{240 \pi^2 m^2} \partial_\alpha F_{\mu \nu} \partial^\alpha F^{\mu \nu},
\end{align}
and so on. While the Lagrangian has changed at $\bar\mu$, the $S$-matrix has not. The change in the Lagrangian is exactly the same as the contribution from Fig.~\ref{fig:qed}, which is present in the high energy theory but not in the low-energy theory.
\begin{exercisebn}
Verify that the first term in eqn~(\ref{8.7exp}) leads to the threshold correction in the gauge coupling given in eqn~(\ref{7.11}). If one matches at $\bar \mu=m$, then $e_L(\bar \mu)=e_H(\bar \mu)$, and the gauge coupling is continuous at the threshold. Continuity does not hold at higher loops, or when a heavy scalar is integrated out.
\end{exercisebn}
\begin{exercisenb}
Assume the threshold correction is of the form
\begin{align*}
\frac{1}{e_L^2(\overline\mu)} &= \frac{1}{e_H^2(\overline\mu)} + c \log \frac{m^2}{\overline\mu^2} \,.
\end{align*}
Find the relation between $c$ and the difference $\beta_H-\beta_L$ of the $\beta$-functions in the two theories, and check that this agrees with eqn~(\ref{7.11}).
\end{exercisenb}
\chapter{Naive Dimensional Analysis}\label{sec:nda}
There is a slightly more sophisticated version of the EFT power counting formula which is referred to as naive dimensional analysis (NDA)~\cite{Manohar:1983md}. It is a power counting formula that keeps track of the $4\pi$ factors from loop graphs. If $\phi$, $\psi$ and $X_{\mu \nu}$, $g$, $y$, $\lambda$ denote generic scalar fields, fermion fields, gauge field-strength tensors, gauge couplings, Yukawa couplings and $\phi^4$ couplings, then the NDA formula says that an operator in the EFT should be normalized as
\begin{align}
\widehat O &= f^{2} \Lambda^{2} \left[\frac{\partial}{\Lambda}\right]^{N_p} \left[\frac{\phi}{ f } \right]^{N_\phi} \left[\frac{A}{ f } \right]^{N_A} \left[\frac{\psi}{ f \sqrt{\Lambda}} \right]^{N_\psi} \left[\frac{g}{ 4\pi } \right]^{N_g}
\left[\frac{y}{ 4\pi } \right]^{N_y} \left[\frac{ \lambda}{ 16\pi^2}\right]^{N_\lambda}\,,
\label{fnda4}
\end{align}
where $\Lambda$ and $f$ are related by
\begin{align}\label{10.2}
\Lambda = 4\pi f\,,
\end{align}
and $\Lambda$ is the scale of the EFT derivative expansion. With this normalization, EFT coefficients are expected to be of order unity,
\begin{align}
\mathscr{L} &= \sum \widehat C_i \widehat O_i\,,
\end{align}
with $\widehat C_i \sim 1$.
A generalization of NDA to $\d$ dimensions can be found in Ref.~\cite{Gavela:2016bzc}. From eqn~(\ref{fnda4}),
\begin{align}
\frac{D}{\Lambda} &= \frac{\partial + i g A}{\Lambda}= \frac{\partial}{\Lambda} +i\left[ \frac{g}{4 \pi } \right] \left[ \frac{A}{f}\right]
\label{16}
\end{align}
so that both parts of a covariant derivative have the same power counting.
Loop graphs in the EFT maintain the NDA form, i.e.\ an arbitrary graph with insertions of operators of the form eqn~(\ref{fnda4}) generates an operator of the same form. The proof, which relies on counting $1/(16\pi^2)$ factors from each loop and the topological identity for a connected graph $V-I+L=1$, where $V$ is the number of vertices, $I$ the number of internal lines, and $L$ the number of loops, is left as an exercise.
\begin{exercise}\label{ex:nda}
Show that the power counting formula eqn~(\ref{fnda4}) for an EFT Lagrangian is self-consistent, i.e.\ an arbitrary graph with insertions of vertices of this form generates an interaction which maintains
the same form. (See \cite{Gavela:2016bzc} and \cite{Manohar:1983md}). Show that eqn~(\ref{fnda4}) is equivalent to
\begin{align*}
\widehat O &\sim \frac{\Lambda^4}{16 \pi^2 } \left[\frac{\partial}{\Lambda}\right]^{N_p} \left[\frac{ 4 \pi\, \phi}{ \Lambda} \right]^{N_\phi}
\left[\frac{ 4 \pi\, A}{ \Lambda } \right]^{N_A} \left[\frac{ 4 \pi \, \psi}{\Lambda^{3/2}}\right]^{N_\psi} \left[ \frac{g}{4 \pi } \right]^{N_g}
\left[\frac{y}{4 \pi } \right]^{N_y} \left[\frac{\lambda}{16 \pi^2 }\right]^{N_\lambda} .
\end{align*}
\end{exercise}
Using the more sophisticated power counting of eqn~(\ref{fnda4}) instead of only counting factors of $\Lambda$ makes a big difference in estimating the coefficients of higher dimension terms in the Lagrangian. For example, the four-quark dimension six operator is normalized to
\begin{align*}
\widehat O &= f^2 \Lambda^2 \frac{\left( \overline \psi \gamma^\mu \psi\right)^2}{(f \sqrt{\Lambda})^4} =
\frac{1}{f^2} \left( \overline \psi \gamma^\mu \psi\right)^2 = \frac{16\pi^2}{\Lambda^2} \left( \overline \psi \gamma^\mu \psi\right)^2 \,.
\end{align*}
The extra $16\pi^2$ makes a difference of $\sim 150$ in the normalization of the operator.
In $\chi$PT, the Lagrangian is written in terms of
\begin{align}
U(x) &= e^{2 i \Pi(x)/f}\,,
\end{align}
where $\Pi(x)$ is a matrix of pion fields. $U(x)$ satisfies eqn~(\ref{fnda4}), since every $\Pi$ comes with a factor $1/f$. The normalization of the two-derivative term in the chiral Lagrangian is
\begin{align}
\widehat O &= \Lambda^2 f^2\ \frac{\partial U}{\Lambda} \frac{\partial U^\dagger}{\Lambda} = f^2\, \partial_\mu U \partial^\mu U^\dagger
\end{align}
which is the usual normalization of the kinetic term.
The four-derivative term is normalized to
\begin{align}\label{10.7}
\widehat O &=\Lambda^2 f^2\ \frac{\partial U}{\Lambda} \frac{\partial U^\dagger}{\Lambda} \frac{\partial U}{\Lambda} \frac{\partial U^\dagger}{\Lambda} =
\frac{1}{16 \pi^2}\ \partial_\mu U \partial^\mu U^\dagger \partial_\nu U \partial^\nu U^\dagger\,.
\end{align}
The four-derivative coefficients in the chiral Lagrangian are usually denoted by $L_i$, and eqn~(\ref{10.7}) shows that one expects
$L_i \sim 1/(16\pi^2) \sim 4 \times 10^{-3}$, which is true experimentally (see~\cite{Pich:cs}).
The difference between $f = 93$\,MeV and $\Lambda=4\pi f = 1.2$\,GeV is very important for $\chi$PT. The value of $f$ is fixed from the experimental value of the $\pi \to \mu \overline \nu_\mu$ decay rate. If we did not keep track of the $4\pi$ factors, we would conclude that the scale of $\chi$PT is $\Lambda \sim f$, so that $\chi$PT breaks down for momenta of order $f$. Were that the case, $\chi$PT would not be very useful, since the pion mass is around $140$\,MeV, and $\chi$PT would break down for on-shell pions. Luckily, eqn~(\ref{10.2}) says that $\Lambda_\chi$, the scale of the $\chi$PT derivative expansion, is $4\pi f$~\cite{Manohar:1983md}, which is much larger than $f$, so that $\chi$PT is valid for $\pi-\pi$ scattering at low momentum. Loop corrections in pion $\chi$PT are of order $[m_\pi/(4 \pi f)]^2 \sim 0.014$, i.e.\ a few percent. $\chi$PT for kaons has corrections of order
$[m_K/(4 \pi f)]^2 \sim 0.2$.
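The numbers quoted here follow from simple arithmetic (masses and $f$ as given in the text):

```python
# Quick arithmetic behind the chiPT estimates above.
import math

f = 0.093                         # GeV, from pi -> mu nu decay
Lambda_chi = 4 * math.pi * f
print(Lambda_chi)                 # ~1.17 GeV
m_pi, m_K = 0.140, 0.494          # GeV
print((m_pi / Lambda_chi)**2)     # ~0.014, pion loop expansion parameter
print((m_K / Lambda_chi)**2)      # ~0.18, kaon loop expansion parameter
```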
The NDA formula eqn~(\ref{fnda4}) implies that if all operators in the Lagrangian are normalized using NDA, then an arbitrary loop graph gives
\begin{align}
\delta\widehat C_i & \sim \prod_{k} \widehat C_{k}\,,
\label{46}
\end{align}
where the graph has insertions of Lagrangian terms $\widehat C_k \widehat O_k$, and produces an amplitude of the form $\widehat C_i \widehat O_i$. All the $4\pi$ factors have disappeared, and one obtains a very simple form for the amplitudes. The results are equally valid for strongly and weakly coupled theories.
The NDA formula eqn~(\ref{46}) also shows that in strongly coupled theories $\widehat C \lesssim 1$~\cite{Manohar:1983md}. The reason is that if $\widehat C \gg 1$, then the hierarchy of equations eqn~(\ref{46}) is unstable, because higher order contributions to $\widehat C_i$ are much larger than $\widehat C_i$. On the other hand, there is no inconsistency if $\widehat C_i \ll 1$, since all this implies is that higher order corrections are small, a sign of a weakly coupled theory. Eqn~(\ref{46}) shows that an interaction becomes strongly coupled when $\widehat C \sim 1$. For the dimension-four interactions, strong coupling is when gauge couplings are $g \sim 4\pi$, Yukawa couplings are $y \sim 4\pi$ and scalar self-couplings are $\lambda \sim (4\pi)^2$.
One can use NDA for cross sections as well as amplitudes. A cross section is the imaginary part of the forward scattering amplitude, so one can estimate cross sections by using NDA for the forward amplitude, and then multiplying by $\pi$, since the imaginary part comes from $\log(-1)=i \pi$. Since two-body final states give a one-loop forward scattering diagram, and $n$-body final states give a $n-1$ loop diagram, the $4\pi$ counting rules for phase space are: $1/(16 \pi)$ for the first two particles, and $1/(16\pi^2)$ for each additional particle. We used this $4\pi$ counting rule earlier in these lectures in our estimates of cross sections.
\chapter{Invariants}
EFT Lagrangians are constructed using gauge and Lorentz invariant operators which are polynomials in the basic fields. Classifying these operators is a fun topic which is extensively studied in the mathematics invariant theory literature. I discuss invariant theory briefly in this chapter. For an elementary summary, see Refs.~\cite{Hanany:2010vu,Jenkins:2009dy}.
Start with the simple example of a theory with $N_f$ fermions with mass term
\begin{align}
\mathscr{L} &= - \overline \psi_L M \psi_R + \text{h.c.}\,,
\end{align}
where $M$ is an $N_f \times N_f$ matrix.
We can make a field redefinition (ignoring anomalies),
\begin{align}\label{10.2a}
\psi_L &\to L \psi_L, &
\psi_R &\to R \psi_R,
\end{align}
under which
\begin{align}
M &\to L M R^\dagger\,.
\end{align}
Under $CP$, $M \to M^*$. The $S$-matrix is invariant under the field redefinition eqn~(\ref{10.2a}), and depends only on invariants constructed from $M$. To eliminate $R$, define
\begin{align}
X &\equiv M M^\dagger, & X &\to L X L^\dagger,
\end{align}
which transforms only under $L$. Then the invariants are
\begin{align}
I_{2n} &=\vev{X^n} \,,
\end{align}
where $2n$ is the degree of the invariant in the basic object $M$, and $\vev{\,\cdot\,}$ denotes a trace. Suppose $N_f=1$. Then $X$ is a $1\times1$ matrix, and
\begin{align}
\vev{X^2} &=I_4 = I_2^2=\vev{X}^2, & \vev{X^3}=I_6 &= I_2^3=\vev{X}^3,
\end{align}
and there is one independent invariant of every even degree, $I_{2n}=I_2^n=\vev{X}^n$.
The Hilbert series is defined as
\begin{align}
H(q) &= \sum_{n=0}^\infty N_n q^n\,,
\end{align}
where $N_n$ is the number of invariants of degree $n$, and $N_0=1$ by convention. In the $1\times 1$ matrix example,
\begin{align}\label{9.8}
H(q)&=1+q^2+q^4 +\ldots = \frac{1}{1-q^2}\,.
\end{align}
The denominator of $H(q)$ in eqn~(\ref{9.8}) tells us that there is one generator of degree two, which is $\vev{X}$, and that all invariants are given by powers of this generator. Given $I_2$, we can determine the fermion mass, $m = \sqrt{I_2}$, as a real, non-negative number. The invariant is $CP$ even, since under $CP$, $X \to X^*$, and $\vev{X} \to \vev{X^*}=\vev{X^\dagger}=\vev{X}$ since $X$ is Hermitian, and the trace is invariant under transposition of the matrix.
The next case is $N_f=2$, with invariants
\begin{align}
\vev{X},\ \vev{X^2},\ \vev{X^3}, \ldots \,.
\end{align}
These are not all independent, because the Cayley-Hamilton theorem implies
\begin{align}
\vev{X^3} &= \frac32 \vev{X}\vev{X^2}-\frac12 \vev{X}^3\,,
\end{align}
for any $2 \times 2$ matrix.
This identity eliminates all traces of $X^n$ for $n\ge 3$. There is one invariant of degree two, $\vev{X}$, and two of degree four, $\vev{X}^2$ and $\vev{X^2}$, etc.\ The Hilbert series is
\begin{align}
H(q)&=1+q^2+2q^4 +\ldots = \frac{1}{(1-q^2)(1-q^4)}\,.
\end{align}
The denominator factors imply that all invariants are generated by products of $\vev{X}$ and $\vev{X^2}$. Given $\vev{X}$ and $\vev{X^2}$, we can find the two masses by solving
\begin{align}\label{10.12}
\vev{X} &= m_1^2+m_2^2, &
\vev{X^2} &= m_1^4+m_2^4\,.
\end{align}
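Eqn~(\ref{10.12}) can be inverted in closed form: the squared masses are the roots of $t^2 - \vev{X}\, t + \frac12\left(\vev{X}^2-\vev{X^2}\right)=0$. A short sketch (the helper name is mine):

```python
# Invert eqn (10.12): given T = <X> = m1^2 + m2^2 and S = <X^2> = m1^4 + m2^4,
# the squared masses are the roots of t^2 - T t + (T^2 - S)/2 = 0.
import math

def masses_from_invariants(T, S):
    disc = 2 * S - T**2          # equals (m1^2 - m2^2)^2 >= 0
    r = math.sqrt(disc)
    return math.sqrt((T - r) / 2), math.sqrt((T + r) / 2)

m1, m2 = 0.3, 1.7                # arbitrary test masses
T = m1**2 + m2**2
S = m1**4 + m2**4
print(masses_from_invariants(T, S))   # recovers (0.3, 1.7)
```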
For $N_f=3$, the generators are $\vev{X}$, $\vev{X^2}$, $\vev{X^3}$. Higher powers are eliminated by the Cayley-Hamilton theorem,
\begin{align}
\vev{ X^4} &= \frac1{6}\vev{X}^4 - \vev {X}^2 \vev{X^2}
+ \frac 4 3 \vev {X^3} \vev X + \frac 1 2 \vev{X^2}^2\,,
\end{align}
and the Hilbert series is
\begin{align}
H(q)&=1+q^2+2q^4 +\ldots = \frac{1}{(1-q^2)(1-q^4)(1-q^6)}\,.
\end{align}
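Both Cayley-Hamilton trace identities quoted above are easy to spot-check numerically (my own helper code; a random Hermitian matrix is used in each case):

```python
# Numerical spot-check of the trace identities for 2x2 and 3x3 matrices.
import random

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tr(A):
    return sum(A[i][i] for i in range(len(A)))

def random_hermitian(n):
    M = [[complex(random.random(), random.random()) for _ in range(n)]
         for _ in range(n)]
    return [[(M[i][j] + M[j][i].conjugate()) / 2 for j in range(n)]
            for i in range(n)]

# 2x2: <X^3> = 3/2 <X><X^2> - 1/2 <X>^3
X = random_hermitian(2)
X2 = matmul(X, X); X3 = matmul(X2, X)
print(abs(tr(X3) - (1.5 * tr(X) * tr(X2) - 0.5 * tr(X)**3)))   # ~0

# 3x3: <X^4> = 1/6 <X>^4 - <X>^2 <X^2> + 4/3 <X^3><X> + 1/2 <X^2>^2
Y = random_hermitian(3)
Y2 = matmul(Y, Y); Y3 = matmul(Y2, Y); Y4 = matmul(Y3, Y)
print(abs(tr(Y4) - (tr(Y)**4 / 6 - tr(Y)**2 * tr(Y2)
                    + 4 * tr(Y3) * tr(Y) / 3 + tr(Y2)**2 / 2)))   # ~0
```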
\begin{exercise}
By explicit calculation, show that
\begin{align*}
\left[\frac 12 \vev{A}^2- \frac12 \vev{A^2}\right] \mathbf{1} - \vev{A}A +A^2 &= 0\,,\nn
\frac 16 \vev{A}^3-\frac12\vev{A}\vev{A^2}+\frac13\vev{A^3} &=0 \,,
\end{align*}
for a general $2 \times 2$ matrix $A$ and that
\begin{align*}
\vev{A}\vev{B}\vev{C}-\vev{A}\vev{BC}-\vev{B}\vev{AC}-\vev{C}\vev{AB}+\vev{ABC}+\vev{ACB}
&=0\,,
\end{align*}
for general $2 \times 2$ matrices $A,B,C$.
Identities analogous to this for $3 \times 3$ matrices are used in $\chi$PT to remove $L_0$ and to replace it by $L_{1,2,3}$, as discussed by Pich in his lectures~\cite{Pich:cs}.
\end{exercise}
Now consider the case of two quark types, $u$ and $d$, in the SM. There are two mass matrices $M_u$ and $M_d$ which transform as
\begin{align}\label{9.15}
M_u &\to L M_u R_u^\dagger\,, &
M_d &\to L M_d R_d^\dagger\,.
\end{align}
Equation~(\ref{9.15}) results because the right handed quarks $u_R$ and $d_R$ are independent fields with independent transformations $R_u$ and $R_d$ in the SM, whereas the left-handed quarks are part of a weak doublet,
\begin{align}
q_L &= \left[ \begin{array}{c} u_L \\ d_L \end{array} \right]\,,
\end{align}
so $L_u=L_d=L$. To construct invariants, we can eliminate $R_{u,d}$ by constructing
\begin{align}
X_u &= M_u M_u^\dagger, &
X_d &= M_d M_d^\dagger,
\end{align}
which transform as
\begin{align}
X_u &\to L X_u L^\dagger ,&
X_d &\to L X_d L^\dagger \,.
\end{align}
For $N_f=1$, $X_u$ and $X_d$ are numbers, and the only independent invariants are $\vev{X_u}$ and $\vev{X_d}$, and the Hilbert series is
\begin{align}
H(q) &= \frac{1}{(1-q^2)^2}\,.
\end{align}
For $N_f=2$, the independent generators are $\vev{X_u}$, $\vev{X_d}$, $\vev{X_u^2}$, $\vev{X_d^2}$ and $\vev{X_u X_d}$, and
\begin{align}\label{10.20}
H(q) &= \frac{1}{(1-q^2)^2(1-q^4)^3}\,.
\end{align}
\begin{exercise}
Show that for $N_f=2$, all invariants are generated by the independent invariants $\vev{X_u}$, $\vev{X_d}$, $\vev{X_u^2}$, $\vev{X_d^2}$ and $\vev{X_u X_d}$.
\end{exercise}
$\vev{X_u}$ and $\vev{X_u^2}$ determine the two $u$-quark masses $m_u$ and $m_c$ as in eqn~(\ref{10.12}). $\vev{X_d}$ and $\vev{X_d^2}$ determine the two $d$-quark masses $m_d$ and $m_s$. $\vev{X_u X_d}$ determines the Cabibbo angle,
\begin{align}
\vev{X_u X_d} &= (m_u^2 m_d^2+ m_c^2 m_s^2) - (m_c^2-m_u^2)(m_s^2-m_d^2)\sin^2 \theta\,.
\end{align}
If $m_u=m_c$ or if $m_d=m_s$, $\theta$ is not defined (or can be rotated away).
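The formula for $\vev{X_u X_d}$ can be checked directly (a few lines of my own; the test masses and angle are arbitrary):

```python
# Check <X_u X_d> = (mu^2 md^2 + mc^2 ms^2) - (mc^2-mu^2)(ms^2-md^2) sin^2(theta)
# with X_u diagonal and X_d rotated by the Cabibbo angle.
import math

theta = 1.1
mu2, mc2 = 0.2, 1.9          # arbitrary test values for m_u^2, m_c^2
md2, ms2 = 0.1, 0.7          # arbitrary test values for m_d^2, m_s^2
c, s = math.cos(theta), math.sin(theta)
# X_d = R diag(md2, ms2) R^T with R the 2x2 rotation by theta
Xd = [[c*c*md2 + s*s*ms2, c*s*(md2 - ms2)],
      [c*s*(md2 - ms2), s*s*md2 + c*c*ms2]]
trace = mu2 * Xd[0][0] + mc2 * Xd[1][1]        # <X_u X_d>, X_u diagonal
formula = (mu2*md2 + mc2*ms2) - (mc2 - mu2)*(ms2 - md2)*s*s
print(abs(trace - formula))   # ~0
```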
All the invariants are $CP$ even, so there is no $CP$ violation in the quark sector for two quark flavors. For example, under $CP$,
\begin{align}
\vev{X_u X_d} &\to \vev{X_u^* X_d^*} = \vev{(X_u^* X_d^*)^T} = \vev{ X_d^\dagger X_u^\dagger} = \vev{X_d X_u} = \vev{X_u X_d}
\end{align}
since $X_u$ and $X_d$ are Hermitian, and the trace is invariant under transposition and cyclic permutation.
The first non-trivial example is $N_f=3$. The $CP$ even generators are
\begin{align}
\label{9.22}
\vev{X_u},\ \vev{X_u^2},\ \vev{X_u^3},\ \vev{X_d},\ \vev{X_d^2},\ \vev{X_d^3},\ \vev{X_u X_d}, \vev{X_u^2 X_d}, \vev{X_u X_d^2},\
\vev{X_u^2 X_d^2}.
\end{align}
They determine the quark masses $m_{u,c,t}$, $m_{d,s,b}$, and the three CKM angles $\theta_{12},\theta_{13},\theta_{23}$. However, the terms in eqn~(\ref{9.22}) do not generate all the invariants. We also have the $CP$ odd invariant
\begin{align}
I_- &= \vev{X_u^2 X_d^2 X_u X_d} - \vev{X_d^2 X_u^2 X_d X_u} = \frac13 \vev{\left[X_u,X_d\right]^3}\,,
\end{align}
and the $CP$ even invariant
\begin{align}
I_+=\vev{X_u^2 X_d^2 X_u X_d} + \vev{X_d^2 X_u^2 X_d X_u}\,.
\end{align}
$I_+$ is not independent; it can be written as a linear combination of the lower order invariants in eqn~(\ref{9.22}).
While $I_-$ is not a linear combination of the invariants in eqn~(\ref{9.22}), it turns out that $I_-^2$ \emph{is} a linear combination. This is an example of a relation among the invariants. There also can be relations among relations, which are known as syzygies. Thus the independent invariants are arbitrary products of powers of eqn~(\ref{9.22}) plus $I_-$ to at most the first power. This gives the Hilbert series for $N_f=3$
\begin{align}
\label{9.24}
H(q) &= \frac{1+q^{12}}{(1-q^2)^2(1-q^4)^3(1-q^6)^4(1-q^8)}\,,
\end{align}
where the $+q^{12}$ in the numerator is the contribution from $I_-$. $I_-$ is related to the Jarlskog invariant $J$,
\begin{align}\label{9.27}
I_- &= 2 i (m_c^2-m_u^2)(m_t^2-m_c^2)(m_t^2-m_u^2)(m_s^2-m_d^2)(m_b^2-m_s^2)(m_b^2-m_d^2) J,
\end{align}
where
\begin{align}\label{9.28}
J &= \text{Im}\, \left[ V_{11} V_{12}^* V_{22} V_{21}^* \right] = c_{12} s_{12} c_{13}^2 s_{13} c_{23} s_{23} s_\delta,
\end{align}
using the CKM matrix convention of the PDG~\cite{Patrignani:2016xqp}.
The $CP$-even invariants in eqn~(\ref{9.22}) determine $J^2$, and hence $J$ up to an overall sign. The invariant $I_-$ fixes the sign. This analysis should be familiar from the study of $CP$ violation in the SM. By measuring $CP$ conserving decay rates, one can determine the lengths of the three sides of the unitarity triangle. This determines the triangle (including the area, which is a measure of $CP$ violation) up to an overall reflection, which is fixed by the sign of $J$. Thus, one can determine whether $CP$ is violated using only $CP$ conserving measurements.
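Both the commutator form of $I_-$ and the relation between $I_-$ and $J$ can be checked numerically in magnitude, which avoids convention-dependent signs. The sketch below (my own helper code, with illustrative, non-physical masses and angles) builds $V$ in the PDG convention:

```python
# Check: (i) I_- equals (1/3)<[X_u,X_d]^3>, (ii) I_- is pure imaginary,
# (iii) |I_-| = 2 |J| x (product of mass-squared differences).
import cmath, math

t12, t13, t23, delta = 0.5, 0.3, 0.7, 1.2   # illustrative, non-physical angles
c12, s12 = math.cos(t12), math.sin(t12)
c13, s13 = math.cos(t13), math.sin(t13)
c23, s23 = math.cos(t23), math.sin(t23)
ed = cmath.exp(1j * delta)
V = [[c12*c13, s12*c13, s13/ed],
     [-s12*c23 - c12*s23*s13*ed, c12*c23 - s12*s23*s13*ed, s23*c13],
     [s12*s23 - c12*c23*s13*ed, -c12*s23 - s12*c23*s13*ed, c23*c13]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(3)] for i in range(3)]

def tr(A):
    return A[0][0] + A[1][1] + A[2][2]

def chain(*Ms):
    P = Ms[0]
    for M in Ms[1:]:
        P = matmul(P, M)
    return P

mu2, mc2, mt2 = 0.1, 0.5, 2.0   # illustrative, non-physical squared masses
md2, ms2, mb2 = 0.2, 0.7, 1.5
Xu = [[mu2, 0, 0], [0, mc2, 0], [0, 0, mt2]]
Xd = chain(V, [[md2, 0, 0], [0, ms2, 0], [0, 0, mb2]], dagger(V))

Xu2, Xd2 = matmul(Xu, Xu), matmul(Xd, Xd)
I_direct = tr(chain(Xu2, Xd2, Xu, Xd)) - tr(chain(Xd2, Xu2, Xd, Xu))
A, B = matmul(Xu, Xd), matmul(Xd, Xu)
Comm = [[A[i][j] - B[i][j] for j in range(3)] for i in range(3)]
I_minus = tr(chain(Comm, Comm, Comm)) / 3

J = (V[0][0] * V[0][1].conjugate() * V[1][1] * V[1][0].conjugate()).imag
Delta = (mc2-mu2)*(mt2-mc2)*(mt2-mu2)*(ms2-md2)*(mb2-ms2)*(mb2-md2)
print(abs(I_direct - I_minus))                      # ~0: the two forms agree
print(abs(I_minus.real))                            # ~0: I_- is pure imaginary
print(abs(abs(I_minus.imag) - 2 * Delta * abs(J)))  # ~0: eqn (9.27) in magnitude
```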
\newpage
\begin{exercisebn}
Show that the invariant
\begin{align*}
I_- &= \vev{X_u^2 X_d^2 X_u X_d} - \vev{X_d^2 X_u^2 X_d X_u}\,,
\end{align*}
is the lowest order $CP$-odd invariant made of the quark mass matrices. Show that $I_-$ also can be written in the form
\begin{align*}
I_- &= \frac13 \vev{\left[X_u,X_d\right]^3}\,,
\end{align*}
and explicitly work out $I_-$ in the SM using the CKM matrix convention of the PDG~\cite{Patrignani:2016xqp}. Verify eqns~(\ref{9.27},\ref{9.28}).
\end{exercisebn}
\begin{exercisenb}
Compute the Hilbert series for the ring of invariants generated by
$x,y,z$ (each of dimension 1), and invariant under the transformation $(x,y,z) \to (-x,-y,-z)$.
\end{exercisenb}
The general structure of $H(q)$ is the ratio of a numerator $N(q)$ and a denominator $D(q)$,
\begin{align}
H(q) &= \frac{N(q)}{D(q)},
\end{align}
where the denominator $D(q)$ is a product of the form
\begin{align}
D(q) &= (1- q^{n_1})^{r_1} (1- q^{n_2})^{r_2} \ldots
\end{align}
and the numerator $N(q)$ is a polynomial with non-negative coefficients of degree $d_N$ which is palindromic, i.e.
\begin{align}
q^{d_N} N(1/q) &= N(q)\,.
\end{align}
The number of denominator factors $\sum r_i$ is the number of parameters~\cite{knop}. In eqn~(\ref{9.24}) the number of parameters is 10: the six masses, three angles, and one phase.
As a non-trivial example, the lepton sector of the seesaw theory for $n_g=2$ generations has invariants generated by the mass matrices for the charged leptons $m_E$, neutrinos $m_\nu$ and the singlet Majorana mass matrix $M$. The Hilbert series is~\cite{Jenkins:2009dy}
\begin{align}
H(q) &= \frac{1+q^6+3q^8+2q^{10}+3q^{12}+q^{14}+q^{20}}{(1-q^2)^3(1-q^4)^5(1-q^6)(1-q^{10})}\,,
\end{align}
which has a palindromic numerator. The numerator is of degree twenty, and the coefficients are $1,0,0,0,0,0,1,0,3,0,2,0,3,0,1,0,0,0,0,0,1$, which is the same string read in either direction.
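Checking the palindromic property amounts to reversing the coefficient list, as a two-line check of the numerator quoted above shows:

```python
# A palindromic polynomial satisfies q^d N(1/q) = N(q): its coefficient
# list reads the same in both directions. Check the numerator
# 1 + q^6 + 3q^8 + 2q^10 + 3q^12 + q^14 + q^20.
coeffs = [0] * 21
for power, c in [(0, 1), (6, 1), (8, 3), (10, 2), (12, 3), (14, 1), (20, 1)]:
    coeffs[power] = c
print(coeffs == coeffs[::-1])   # True
```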
To construct an EFT, we have basic fields $\psi(x)$, $\phi(x)$, etc.\ which transform under various symmetries, and we want to construct invariant Lagrangians which are polynomials in the basic fields. This is a problem in invariant theory, with a few additional requirements.
\begin{itemize}
\item We can act with covariant derivatives on fields, $D_\mu \phi(x)$, to get an object that transforms the same way as $\phi(x)$ under gauge and flavor symmetries, but adds an extra Lorentz index.
\item We can drop total derivatives since they vanish when integrated to get the action. Equivalently, we are allowed to integrate by parts.
\item We can make field redefinitions or equivalently use the equations of motion to eliminate operators.
\end{itemize}
Counting invariants including these constraints seems simple, but there is a subtlety. Terms such as
\begin{align}
\partial_\mu (\phi^\dagger \partial^\mu \phi - \partial^\mu \phi^\dagger \phi )
\end{align}
vanish because they are a total derivative, and also by using the equations of motion. We have to make sure we do not double count the terms eliminated by these two conditions. This is a non-trivial problem that was recently solved in Ref.~\cite{Henning:2015alf} using representations of the conformal group. The HQET/NRQCD dimension-eight operators were recently classified with the help of invariants~\cite{Kobach:2017xkw}.
\chapter{SMEFT}\label{sec:smeft}
\begin{table}
\renewcommand{\arraystretch}{1.5}
\setlength{\arraycolsep}{0.125cm}
\begin{align}
\begin{array}{c|c|ccc}
& \text{Lorentz} & SU(3) & SU(2) & U(1) \\
\hline
G_{\mu \nu} & (1,0)+(0,1) & 8 & 1 & 0 \\
W_{\mu \nu} & (1,0)+(0,1) & 1 & 3 & 0 \\
B_{\mu \nu} & (1,0)+(0,1) & 1 & 1 & 0 \\
H & (0,0) & 1 & 2 & \frac12 \\
q & (1/2,0) & 3 & 2 & \frac16 \\
l & (1/2,0) & 1 & 2 & -\frac 12 \\
u & (0,1/2) & 3 & 1 & \frac23 \\
d & (0,1/2) & 3 & 1 & -\frac13 \\
e & (0,1/2) & 1 & 1 & -1 \\
\end{array}
\end{align}
\caption{\label{tab:SM} Fields of the Standard Model. The Lorentz group is $SU(2)_L \times SU(2)_R$. The fermions have a generation index $n_g=1,2,3$.}
\end{table}
The SMEFT is an EFT constructed using the basic fields of the SM given in Table~\ref{tab:SM}. For an extensive recent review, see Ref.~\cite{Brivio:2017vri}. The dimension-four terms give the usual SM Lagrangian. There is only a single $U(1)$ gauge field in the SM. In theories with multiple Abelian gauge fields, the general kinetic energy for the $U(1)$ gauge fields has the form
\begin{align}
\mathscr{L} &= -\frac14 C_{ij} F^{(i)}_{\mu \nu} F^{(j)\,\mu \nu}\,,
\end{align}
where $C$ is a real symmetric matrix with positive eigenvalues, which is referred to as kinetic mixing~\cite{Galison:1983pa,Holdom:1985ag}.
Constructing the higher dimension operators in SMEFT is not easy. It is useful to note that Lorentz invariance requires that fermion fields come in pairs. The allowed fermion bilinears written in terms of chiral fields are
\begin{align}
&\overline \psi_L \gamma^\mu \psi_L, &
&\overline \psi_R \gamma^\mu \psi_R, &
&\overline \psi_L \psi_R, &
& \overline \psi_L \sigma^{\mu \nu} \psi_R, &
&\overline \psi_R \psi_L, &
& \overline \psi_R \sigma^{\mu \nu} \psi_L .
\end{align}
One can always replace a right-handed field $\psi_R$ by its charge-conjugate left-handed field $\psi^c_L$,
\begin{align}\label{11.4}
\psi_R &= C \psi_L^{c*},
\end{align}
where $C=i \gamma^2$. Thus we can use either a right-handed $e^-_R$ field, or a left-handed $e^+_L$ field. The SMEFT is usually written using left-handed $SU(2)$ doublet fields, and right-handed $SU(2)$ singlet fields, as shown in Table~\ref{tab:SM}.
Mass terms and dipole interactions are written in terms of left-handed field bilinears
\begin{align}
\overline \psi_R \psi_L &= \psi_L^{cT} C \psi_L, &
\overline \psi_R \sigma^{\mu \nu} \psi_L &= \psi_L^{cT} C \sigma^{\mu \nu} \psi_L.
\end{align}
In general, if there are multiple left-handed fields, the mass and dipole operators are
\begin{align}\label{11.6}
&\psi_{Lr}^{T} C \psi_{Ls}, &
&\psi_{Lr}^{T} C \sigma^{\mu \nu} \psi_{Ls},
\end{align}
where $r,s$ are flavor indices. The mass term is symmetric in $rs$, and the dipole term is antisymmetric in $rs$. One still has to ensure that the terms in eqn~(\ref{11.6}) respect gauge invariance, so that a mass term $e^{+T}_L C e^-_L$ is allowed, but not $e^{-T}_L C e^-_L$.
Left-handed fields transform as $(1/2,0)$ under the Lorentz group, so that the fermion bilinear $ \chi_L^T C \Gamma\psi_L$ transforms as $(1/2,0) \otimes (1/2,0) = (0,0) \oplus (1,0)$. The $(0,0)$ representation is $\chi^T_L C \psi_L$ and the $(1,0)$ representation is $\chi^T_L C \sigma^{\mu \nu}\psi_L$. The $(1,0)$ representation is self-dual because of the self-duality condition on $\sigma^{\mu \nu}P_L$,
\begin{align}\label{11.7}
\frac{i}{2} \epsilon^{\alpha \beta \mu \nu} \sigma_{\mu \nu} P_L &= \sigma^{\alpha \beta} P_L \,.
\end{align}
Similarly, the right-handed matrix satisfies the anti-self-duality condition
\begin{align}\label{11.8}
\frac{i}{2} \epsilon^{\alpha \beta \mu \nu} \sigma_{\mu \nu} P_R &= -\sigma^{\alpha \beta} P_R\,.
\end{align}
\begin{exercisebn}\label{ex:11.1}
Show that $(\psi_{Lr}^T C \psi_{Ls})$ is symmetric in $rs$ and $(\psi_{Lr}^T C \sigma^{\mu \nu} \psi_{Ls})$ is antisymmetric in $rs$.
\end{exercisebn}
\begin{exercisenb}
Prove the duality relations eqns~(\ref{11.7},\ref{11.8}). The sign convention is $\gamma_5 = i \gamma^0 \gamma^1 \gamma^2 \gamma^3$ and $\epsilon_{0123}=+1$.
\end{exercisenb}
The lowest dimension term in the SMEFT with $\mathscr{D} >4$ is the dimension-five term
\begin{align}
\label{eq:smeft5}
\mathscr{L}^{(5)} &= C_{\substack{5 \\ rs}}\epsilon^{ij} \epsilon^{kl} (l_{ i r}^T\, C\, l_{k s}) H_j H_l + \text{h.c.}\,.
\end{align}
Here $r,s$ are flavor indices, and $i,j,k,l$ are $SU(2)$ gauge indices. The coefficient $C_{\substack{5 \\ rs}}$ is symmetric in $rs$, by Exercise~\ref{ex:11.1}. $\mathscr{L}^{(5)}$ is a $\Delta L=2$ interaction, and gives a Majorana mass term to the neutrinos when $H$ gets a vacuum expectation value.
It can be shown~\cite{Kobach:2016ami} that invariant operators constructed from SM fields satisfy
\begin{align}\label{kobach}
\frac12(\Delta B-\Delta L) \equiv \mathscr{D} \quad \mod 2\,.
\end{align}
Thus a $\mathscr{D}=5$ operator cannot conserve both baryon and lepton number.
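For instance (the quantum-number assignments are the standard ones, and the check itself is trivial arithmetic):

```python
# Illustrate eqn (kobach), (Delta B - Delta L)/2 = D mod 2, for a few
# operator classes discussed in the text.
examples = [
    ("Yukawa  q H d   ", 0, 0, 4),   # dim 4, B and L conserving
    ("dim-5   l l H H ", 0, 2, 5),   # the Delta L = 2 Majorana mass operator
    ("dim-6   q q q l ", 1, 1, 6),   # baryon-number violating, Delta B = Delta L
]
for name, dB, dL, dim in examples:
    print(name, ((dB - dL) // 2) % 2 == dim % 2)   # True for each
```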
\begin{exercisebn}
Show that eqn~(\ref{eq:smeft5}) is the unique dimension-five term in the SMEFT Lagrangian.
\end{exercisebn}
\begin{exercisenb}
Show that eqn~(\ref{eq:smeft5}) generates a Majorana neutrino mass when $H$ gets a vacuum expectation value, and find the neutrino mass matrix $M_\nu$ in terms of $C_5$ and $v$.
\end{exercisenb}
At dimension-six there are eight different operator classes, $X^3$, $H^6$, $H^4D^2$, $X^2 H^2$, $\psi^2 H^3$, $\psi^2 XH$, $\psi^2 H^2 D$ and $\psi^4$, in terms of their field content. Determining the independent operators is a non-trivial task~\cite{Buchmuller:1985jz,Grzadkowski:2010es}. Here I discuss a few aspects of the analysis.
The four-quark operators $\psi^4$ can be simplified using Fierz identities. Consider invariants made from two $\overline l \,\Gamma\, l$ bilinears. Since $l$ is a left-handed field, the only gamma-matrix allowed is $\Gamma=\gamma^\mu$. Bilinears constructed from $l$ can be either $SU(2)$ singlets or $SU(2)$ triplets, so the $l^4$ invariants are
\begin{align}
Q_{\substack{ll \\ prst}} &= (\overline l_{ip} \gamma^\mu l^i{}_r )(\overline l_{js} \gamma_\mu l^j{}_t ), \nn
Q^{(3)}_{\substack{ll \\ prst}} &= (\overline l_{ip} \gamma^\mu [\tau^a]^i{}_j l^j{}_r )(\overline l_{ks} \gamma_\mu [\tau^a]^k{}_m l^m{}_t ),
\end{align}
where $p,r,s,t$ are generation (flavor) indices and $i,j,k,m$ are weak $SU(2)$ indices. Using the $SU(2)$ Fierz identity (Exercise~\ref{ex:nfierz})
\begin{align}
\label{9.39su2}
[\tau^a]^i{}_j [\tau^a]^k{}_m &= 2 \delta^i_m \delta^k_j - \delta^i_j \delta^k_m,
\end{align}
the second bilinear can be written as
\begin{align}\label{12.41}
Q^{(3)}_{\substack{ll \\ prst}} &= 2 (\overline l_{ip} \gamma^\mu l^j{}_r )(\overline l_{js} \gamma_\mu l^i{}_t )- (\overline l_{ip} \gamma^\mu l^i{}_r )(\overline l_{js} \gamma_\mu l^j{}_t ).
\end{align}
Applying the spinor Fierz identity (Exercise~\ref{ex:spinfierz})
\begin{align}
\label{9.40}
(\overline \psi_1 \gamma^\mu P_L \psi_2)
(\overline \psi_3 \gamma_\mu P_L \psi_4)
&= (\overline \psi_1 \gamma^\mu P_L \psi_4)
(\overline \psi_3 \gamma_\mu P_L \psi_2)
\end{align}
on the first term of eqn~(\ref{12.41}) gives
\begin{align}
\label{9.41}
Q^{(3)}_{\substack{ll \\ prst}} &= 2 (\overline l_{ip} \gamma^\mu l^i{}_t )(\overline l_{js} \gamma_\mu l^j{}_r )- (\overline l_{ip} \gamma^\mu l^i{}_r )(\overline l_{js} \gamma_\mu l^j{}_t ) \nn
&= 2 Q_{\substack{ll \\ ptsr}} - Q_{\substack{ll \\ prst}}\,.
\end{align}
Equation~(\ref{9.41}) implies that we do not need to include $Q^{(3)}_{\substack{ll \\ prst}}$ operators, as they are linear combinations of $Q_{ll}$ operators, so the independent $l^4$ operators are $Q_{ll}$.
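The $SU(2)$ Fierz identity eqn~(\ref{9.39su2}) underlying this rearrangement is easy to verify numerically with explicit Pauli matrices; the short sketch below is illustrative and not part of the original notes.

```python
import numpy as np

# Pauli matrices tau^a, a = 1, 2, 3
tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

# LHS: sum_a [tau^a]^i_j [tau^a]^k_m as a rank-4 array indexed (i, j, k, m)
lhs = sum(np.einsum('ij,km->ijkm', t, t) for t in tau)

# RHS: 2 delta^i_m delta^k_j - delta^i_j delta^k_m
d = np.eye(2)
rhs = 2 * np.einsum('im,kj->ijkm', d, d) - np.einsum('ij,km->ijkm', d, d)

assert np.allclose(lhs, rhs)  # the identity holds componentwise
```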
For $l q$ operators,
\begin{align}
\label{9.42}
Q^{(1)}_{\substack{lq \\ prst}} &= (\overline l_{ip} \gamma^\mu l^i{}_r )(\overline q_{\alpha js} \gamma_\mu q^{\alpha j}{}_t ), \nn
Q^{(3)}_{\substack{lq \\ prst}} &= (\overline l_{ip} \gamma^\mu [\tau^a]^i{}_j l^j{}_r )(\overline q_{\alpha ks} \gamma_\mu [\tau^a]^k{}_m q^{\alpha m}{}_t ),
\end{align}
the identity eqn~(\ref{9.40}) cannot be used since it would produce $(\overline l q)$ bilinears. Thus both $lq$ operators in eqn~(\ref{9.42}) are independent.
For four-quark operators
$(\overline q \gamma^\mu q)(\overline q \gamma_\mu q)$, there are four possible gauge invariants, written schematically as
\begin{align}
\label{9.43}
1 \otimes 1,\quad \tau^a \otimes \tau^a,\quad T^A \otimes T^A,\quad \tau^a T^A \otimes \tau^a T^A,
\end{align}
depending on what gauge generators are inserted in each bilinear. The $SU(N)$ version of eqn~(\ref{9.39su2}) from Exercise~\ref{ex:nfierz}
\begin{align}
\label{9.39}
[T^A]^\alpha{}_\beta [T^A]^\lambda{}_\sigma &=
\frac12 \delta^\alpha_\sigma \delta^\lambda_\beta - \frac{1}{2N} \delta^\alpha_\beta
\delta^\lambda_\sigma\,,
\end{align}
can be used for the color generators with $N=3$. One can view the index contractions for the $SU(2)$ and $SU(3)$ generators as either direct or swapped, i.e.\ in $(\overline q_1 \gamma^\mu q_2)(\overline q_3 \gamma_\mu q_4)$ contracted between $q_1,q_2$ and $q_3,q_4$, or between $q_1,q_4$ and $q_2,q_3$. Then the four possible terms in eqn~(\ref{9.43}) are
\begin{align}
\label{9.40a}
\text{direct},\quad SU(2)\ \text{swapped},\quad SU(3)\ \text{swapped},\quad \text{both swapped}.
\end{align}
The spinor Fierz identity eqn~(\ref{9.40}) exchanges the $q$ fields, so it swaps both the $SU(2)$ and $SU(3)$ indices, and hence converts
\begin{align}
\label{9.40b}
\text{direct} \leftrightarrow \text{both swapped} \qquad SU(2)\ \text{swapped} \leftrightarrow SU(3)\ \text{swapped}.
\end{align}
Thus there are only two independent invariants out of the four in eqn~(\ref{9.43}), which are chosen to be $1 \otimes 1$ and $\tau^a \otimes \tau^a$.
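The $SU(N)$ completeness relation eqn~(\ref{9.39}) can likewise be verified for $N=3$ with the Gell-Mann matrices, using generators normalized as $T^A = \lambda^A/2$, $\mathrm{Tr}(T^A T^B) = \tfrac12 \delta^{AB}$. The following sketch is illustrative and not part of the original notes.

```python
import numpy as np

# Gell-Mann matrices; the SU(3) generators are T^A = lambda^A / 2
lam = [np.array(m, dtype=complex) for m in [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
]]
lam.append(np.diag([1, 1, -2]).astype(complex) / np.sqrt(3))
T = [m / 2 for m in lam]

N = 3
# LHS: sum_A [T^A]^alpha_beta [T^A]^lambda_sigma, indexed (alpha, beta, lambda, sigma)
lhs = sum(np.einsum('ab,ls->abls', t, t) for t in T)
d = np.eye(N)
rhs = 0.5 * np.einsum('as,lb->abls', d, d) - np.einsum('ab,ls->abls', d, d) / (2 * N)

assert np.allclose(lhs, rhs)
```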
For $\psi^4$ operators involving $\sigma^{\mu\nu}$, the duality relations eqns~(\ref{11.7},\ref{11.8}) can be used to eliminate $\epsilon_{\mu \nu \alpha \beta}$ contracted with $\sigma$ matrices. One also has the relation
\begin{align}\label{10.21}
(\overline A \sigma^{\mu \nu} P_L B)(\overline C \sigma_{\mu \nu} P_R D) &=0\,.
\end{align}
The left-hand side is a Lorentz-invariant contraction of objects transforming in the tensor product $(1,0) \otimes (0,1)= (1,1)$, which contains no singlet, and so it must vanish.
Using the above results, one can determine the independent $\psi^4$ operators.
\begin{exercise}
Prove eqn~(\ref{10.21}).
\end{exercise}
\section{SMEFT Operators}
Since the SMEFT is playing an increasingly important role in current research, I will summarize the operators in SMEFT up to dimension six. The number of operators of each type is listed, and their $CP$ property is given as a subscript. For non-Hermitian operators $\O$, $\O+\O^\dagger$ is $CP$ even, and $\O-\O^\dagger$ is $CP$-odd. The flavor indices have not been included for notational simplicity. For example, including flavor indices, $Q_{eW}$ is $Q_{\substack{eW \\ p r}}$ and $Q_{ll}$ is $Q_{\substack{ll \\ prst}}$, etc.
Table~\ref{tab:oplist} gives a summary of the SMEFT operators up to dimension six. For $n_g=3$, there are 6 $\Delta L=2$ operators plus their Hermitian conjugates, 273 $\Delta B=\Delta L=1$ operators plus their Hermitian conjugates, and 2499 Hermitian $\Delta B=\Delta L=0$ operators~\cite{Alonso:2013hga}. For $n_g=1$, there are 76 Hermitian $\Delta B=\Delta L=0$ operators. In the literature, you will often see the statement that there are 59 $\Delta B=\Delta L=0$ operators; this counts the number of operator types listed in the tables below. Some of the operators, such as $(H^\dagger H)^3$, are Hermitian, whereas others, such as
$(H^\dag H)(\bar l e H)$, are not and count as two Hermitian operators. Hermitian operators have a real coefficient in the Lagrangian, whereas non-Hermitian operators have a complex coefficient. Counting Hermitian operators is equivalent to counting real Lagrangian parameters.
\begin{table}
\begin{align*}
\begin{array}{c|c||c|c|c||c|c|c}
\text{dim} & & \multicolumn{3}{c||}{n_g = 1} & \multicolumn{3}{c}{n_g=3} \\
\hline
& & CP\text{-even}& CP\text{-odd} & \text{Total} & CP\text{-even}& CP\text{-odd} & \text{Total} \\
\hline
5 & \Delta L = 2 & & & 1 & & & 6 \\
5 & \Delta L = -2 & & & 1 & & & 6 \\
\hline
6 & \Delta B = \Delta L = 1 & & & 4 & & & 273 \\
6 & \Delta B = \Delta L = -1 & & & 4 & & & 273 \\
\hline\hline
6 & X^3 & 2 & 2 & 4 & 2 & 2 & 4 \\
6 & H^6 & 1 & 0 & 1 & 1& 0 & 1\\
6 & H^4 D^2 & 2 & 0 & 2 & 2 & 0 & 2 \\
6 & X^2 H^2 & 4 & 4 & 8 & 4 & 4 & 8 \\
6 & \psi^2 H^3 & 3 & 3 & 6 & 27 & 27 & 54 \\
6 & \psi^2 X H & 8 & 8 & 16 & 72 & 72 & 144 \\
6 & \psi^2 H^2 D & 8 & 1 & 9 & 51 & 30 & 81 \\
6 & (\bar L L )(\bar LL ) & 5 & 0 & 5 & 171 & 126 & 297 \\
6 & (\bar RR )(\bar RR ) & 7 & 0 & 7 & 255 & 195 & 450 \\
6 & (\bar L L )(\bar RR ) & 8 & 0 & 8 & 360 & 288 & 648 \\
6 & (\bar L R )(\bar RL )+\text{h.c.} & 1 & 1 & 2 & 81 & 81 & 162 \\
6 & (\bar L R )(\bar LR) +\text{h.c.} & 4 & 4& 8 & 324 & 324 & 648\\
\hline
&\text{Total}\ \Delta B=\Delta L =0& 53 & 23 & 76 & 1350 & 1149 & 2499
\end{array}
\end{align*}
\caption{\label{tab:oplist} Number of operators of each type in the SMEFT up to dimension six.}
\end{table}
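The totals in Table~\ref{tab:oplist} can be cross-checked by summing the closed-form per-class counts collected in the subsections below. This short sketch (not part of the original notes) confirms the 76 and 2499 Hermitian $\Delta B=\Delta L=0$ operators for $n_g=1$ and $n_g=3$, as well as the $\Delta B=\Delta L=1$ counts.

```python
def count_dim6_B0L0(n):
    """Hermitian Delta B = Delta L = 0 dimension-six operators for n generations."""
    return (4 + 1 + 2 + 8                                # X^3, H^6, H^4 D^2, X^2 H^2
            + 6 * n**2                                   # psi^2 H^3 (+ h.c.)
            + 16 * n**2                                  # psi^2 X H (+ h.c.)
            + 9 * n**2                                   # psi^2 H^2 D
            + n**2 * (7 * n**2 + 13) // 4
            + 7 * n**2 * (n**2 - 1) // 4                 # (LL)(LL)
            + n * (21 * n**3 + 2 * n**2 + 31 * n + 2) // 8
            + n * (n**2 - 1) * (21 * n + 2) // 8         # (RR)(RR)
            + 8 * n**4                                   # (LL)(RR)
            + 2 * n**4                                   # (LR)(RL) + h.c.
            + 8 * n**4)                                  # (LR)(LR) + h.c.

def count_dim6_BL1(n):
    """Delta B = Delta L = 1 operators (without Hermitian conjugates)."""
    return n**2 * (19 * n**2 + 3 * n + 2) // 6

assert count_dim6_B0L0(1) == 76 and count_dim6_B0L0(3) == 2499
assert count_dim6_BL1(1) == 4 and count_dim6_BL1(3) == 273
```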
\subsection{Dimension $5$}\label{sec:dim5}
The dimension five operators $Q_5$ are $\Delta L=2$ operators.
\begin{align*}
\renewcommand{\arraystretch}{1.5}
\begin{array}[t]{c|c|c}
\multicolumn{3}{c}{\boldsymbol{(LL)HH+\text{h.c.}}} \\
\hline
Q_{5} &\frac12 n_g(n_g+1) & \epsilon^{ij} \epsilon^{k\ell} (l_{ip}^T C l_{kr} ) H_j H_\ell \\
\hline
\text{Total} & \frac12 n_g(n_g+1) + \text{h.c.}
\end{array}
\end{align*}
There are $n_g(n_g+1)/2$ $\Delta L=2$ operators, and $n_g(n_g+1)/2$ $\Delta L=-2$ Hermitian conjugate operators. $CP$ exchanges the $\Delta L= \pm 2$ operators. The $\Delta L=\pm 2$ operators give a Majorana neutrino mass when the weak interactions are spontaneously broken. Since neutrino masses are very small, the $\Delta L = \pm 2$ operators are assumed to be generated at a very high scale (which could be the GUT scale).
\subsection{Dimension $6,\ \Delta B=\Delta L=1$}
The dimension six operators can be divided into several groups. The first group are the $\Delta B=\Delta L=1$ operators and their Hermitian conjugates.
\begin{align*}
\renewcommand{\arraystretch}{1.5}
\begin{array}[t]{c|c|c}
\multicolumn{3}{c}{\boldsymbol{\Delta B = \Delta L = 1 +\text{h.c.}}} \\
\hline
Q_{duql} &n_g^4 & \epsilon^{\alpha \beta \gamma} \epsilon^{ij} (d^T_{\alpha p} C u_{\beta r} ) (q^T_{\gamma i s} C l_{jt}) \\
Q_{qque} & \frac12 n_g^3(n_g+1) & \epsilon^{\alpha \beta \gamma} \epsilon^{ij} (q^T_{\alpha i p} C q_{\beta j r} ) (u^T_{\gamma s} C e_{t}) \\
Q_{qqql} & \frac13 n_g^2(2n_g^2+1) & \epsilon^{\alpha \beta \gamma} \epsilon^{i\ell} \epsilon^{jk} (q^T_{\alpha i p} C q_{\beta j r} ) (q^T_{\gamma k s} C l_{\ell t}) \\
Q_{duue} & n_g^4 & \epsilon^{\alpha \beta \gamma} (d^T_{\alpha p} C u_{\beta r} ) (u^T_{\gamma s} C e_{t}) \\
\hline
\text{Total} & \frac 16 n_g^2 (19 n_g^2+3n_g+2) + \text{h.c.}
\end{array}
\end{align*}
The $\Delta B=\Delta L=1$ operators violate baryon number, and lead to proton decay. They are generated in unified theories, and are suppressed by two powers of the GUT scale.
\subsection{Dimension $6,\ X^3$}
There are 2 $CP$-even and $2$ $CP$-odd operators with three field-strength tensors. In this and subsequent tables, the $CP$ property is shown as a subscript.
\begin{align*}
\renewcommand{\arraystretch}{1.5}
\begin{array}[t]{c|c|c}
\multicolumn{3}{c}{\boldsymbol{X^3}} \\
\hline
Q_G & 1_+ & f^{ABC} G_\mu^{A\nu} G_\nu^{B\rho} G_\rho^{C\mu} \\
Q_{\widetilde G} & 1_- & f^{ABC} \widetilde G_\mu^{A\nu} G_\nu^{B\rho} G_\rho^{C\mu} \\
Q_W & 1_+ & \epsilon^{IJK} W_\mu^{I\nu} W_\nu^{J\rho} W_\rho^{K\mu} \\
Q_{\widetilde W} & 1_- & \epsilon^{IJK} \widetilde W_\mu^{I\nu} W_\nu^{J\rho} W_\rho^{K\mu} \\
\hline
\text{Total} & 2_+ + 2_-
\end{array}
\end{align*}
\subsection{Dimension $6,\ H^6$}
There is a single operator involving six Higgs fields. It adds an $h^6$ interaction of the physical Higgs particle to the SMEFT Lagrangian after spontaneous symmetry breaking.
\begin{align*}
\renewcommand{\arraystretch}{1.5}
\begin{array}[t]{c|c|c}
\multicolumn{3}{c}{\boldsymbol{H^6}} \\
\hline
Q_H & 1_+ & (H^\dag H)^3 \\
\hline
\text{Total} & 1_+
\end{array}
\end{align*}
\newpage
\subsection{Dimension $6,\ H^4 D^2$}
\begin{align*}
\renewcommand{\arraystretch}{1.5}
\begin{array}[t]{c|c|c}
\multicolumn{3}{c}{\boldsymbol{H^4 D^2}} \\
\hline
Q_{H\Box} & 1_+ & (H^\dag H)\Box(H^\dag H) \\
Q_{H D} & 1_+ & \ \left(H^\dag D^\mu H\right)^* \left(H^\dag D_\mu H\right) \\
\hline
\text{Total} & 2_+
\end{array}
\end{align*}
\subsection{Dimension $6,\ X^2 H^2$}
\begin{align*}
\renewcommand{\arraystretch}{1.5}
\begin{array}[t]{c|c|c}
\multicolumn{3}{c}{\boldsymbol{X^2 H^2}} \\
\hline
Q_{H G} & 1_+ & H^\dag H\, G^A_{\mu\nu} G^{A\mu\nu} \\
Q_{H\widetilde G} & 1_- & H^\dag H\, \widetilde G^A_{\mu\nu} G^{A\mu\nu} \\
Q_{H W} & 1_+ & H^\dag H\, W^I_{\mu\nu} W^{I\mu\nu} \\
Q_{H\widetilde W} & 1_- & H^\dag H\, \widetilde W^I_{\mu\nu} W^{I\mu\nu} \\
Q_{H B} & 1_+ & H^\dag H\, B_{\mu\nu} B^{\mu\nu} \\
Q_{H\widetilde B} & 1_- & H^\dag H\, \widetilde B_{\mu\nu} B^{\mu\nu} \\
Q_{H WB} & 1_+ & H^\dag \tau^I H\, W^I_{\mu\nu} B^{\mu\nu} \\
Q_{H\widetilde W B} & 1_- & H^\dag \tau^I H\, \widetilde W^I_{\mu\nu} B^{\mu\nu} \\
\hline
\text{Total} & 4_+ + 4_-
\end{array}
\end{align*}
The $X^2H^2$ operators are very important phenomenologically. They lead to $gg \to h$ and $h \to \gamma \gamma$ vertices, and contribute to Higgs production and decay. The corresponding SM amplitudes start at one loop, so LHC experiments are sensitive to $X^2H^2$ operators via interference effects with SM amplitudes~\cite{Grojean:2013kd,Manohar:2006gz}.
\subsection{Dimension $6,\ \psi^2 H^3$}
\begin{align*}
\renewcommand{\arraystretch}{1.5}
\begin{array}[t]{c|c|c}
\multicolumn{3}{c}{\boldsymbol{(\bar L R) H^3+ \text{h.c.}}} \\
\hline
Q_{eH} & n_g^2 & (H^\dag H)(\bar l_p e_r H) \\
Q_{uH} & n_g^2 & (H^\dag H)(\bar q_p u_r \widetilde H ) \\
Q_{dH} & n_g^2 & (H^\dag H)(\bar q_p d_r H)\\
\hline
\text{Total} & 3 n_g^2 + \text{h.c.}
\end{array}
\end{align*}
These operators are $H^\dagger H$ times the SM Yukawa couplings, and violate the relation that the Higgs boson coupling to fermions is proportional to their mass.
\subsection{Dimension $6,\ \psi^2 X H $}\label{sec:dipole6}
\begin{align*}
\renewcommand{\arraystretch}{1.5}
\begin{array}[t]{c|c|c}
\multicolumn{3}{c}{\boldsymbol{(\bar L R) X H + \text{h.c.}}} \\
\hline
Q_{eW} & n_g^2 & (\bar l_p \sigma^{\mu\nu} e_r) \tau^I H W_{\mu\nu}^I \\
Q_{eB} & n_g^2 & (\bar l_p \sigma^{\mu\nu} e_r) H B_{\mu\nu} \\
Q_{uG} & n_g^2 & (\bar q_p \sigma^{\mu\nu} T^A u_r) \widetilde H \, G_{\mu\nu}^A \\
Q_{uW} & n_g^2 & (\bar q_p \sigma^{\mu\nu} u_r) \tau^I \widetilde H \, W_{\mu\nu}^I \\
Q_{uB} & n_g^2 & (\bar q_p \sigma^{\mu\nu} u_r) \widetilde H \, B_{\mu\nu} \\
Q_{dG} & n_g^2 & (\bar q_p \sigma^{\mu\nu} T^A d_r) H\, G_{\mu\nu}^A \\
Q_{dW} & n_g^2 & (\bar q_p \sigma^{\mu\nu} d_r) \tau^I H\, W_{\mu\nu}^I \\
Q_{dB} & n_g^2 & (\bar q_p \sigma^{\mu\nu} d_r) H\, B_{\mu\nu} \\
\hline
\text{Total} & 8 n_g^2 + \text{h.c.}
\end{array}
\end{align*}
When $H$ gets a VEV, these operators lead to dipole operators for transitions such as $\mu \to e \gamma$, $b \to s \gamma$ and $b \to s g$.
\subsection{Dimension $6,\ \psi^2 H^2 D $}
\begin{align*}
\renewcommand{\arraystretch}{1.5}
\begin{array}[t]{c|c|c}
\multicolumn{3}{c}{\boldsymbol{\psi^2 H^2 D }} \\
\hline
Q_{H l}^{(1)} & \frac12 n_g(n_g+1)_+ + \frac12 n_g (n_g-1)_- & (H^\dag i\overleftrightarrow{D}_\mu H)(\bar l_p \gamma^\mu l_r)\\
Q_{H l}^{(3)} & \frac12 n_g(n_g+1)_+ + \frac12 n_g (n_g-1)_- & (H^\dag i\overleftrightarrow{D}^I_\mu H)(\bar l_p \tau^I \gamma^\mu l_r)\\
Q_{H e} & \frac12 n_g(n_g+1)_+ + \frac12 n_g (n_g-1)_- & (H^\dag i\overleftrightarrow{D}_\mu H)(\bar e_p \gamma^\mu e_r)\\
Q_{H q}^{(1)} & \frac12 n_g(n_g+1)_+ + \frac12 n_g (n_g-1)_- & (H^\dag i\overleftrightarrow{D}_\mu H)(\bar q_p \gamma^\mu q_r)\\
Q_{H q}^{(3)} & \frac12 n_g(n_g+1)_+ + \frac12 n_g (n_g-1)_- & (H^\dag i\overleftrightarrow{D}^I_\mu H)(\bar q_p \tau^I \gamma^\mu q_r)\\
Q_{H u} & \frac12 n_g(n_g+1)_+ + \frac12 n_g (n_g-1)_- & (H^\dag i\overleftrightarrow{D}_\mu H)(\bar u_p \gamma^\mu u_r)\\
Q_{H d} & \frac12 n_g(n_g+1)_+ + \frac12 n_g (n_g-1)_- & (H^\dag i\overleftrightarrow{D}_\mu H)(\bar d_p \gamma^\mu d_r)\\
Q_{H u d} + \text{h.c.} & n_g^2 + \text{h.c.} & i(\widetilde H ^\dag D_\mu H)(\bar u_p \gamma^\mu d_r)\\
\hline
\text{Total} & \frac12 n_g(9n_g+7)_++\frac12 n_g(9n_g-7)_-
\end{array}
\end{align*}
The $\psi^2 H^2 D$ operators modify the coupling of electroweak bosons to fermions. $(Q_{Hud} \pm Q_{Hud}^\dagger)$ are $CP$-even/odd combinations, and contribute $n_g^2$ $CP$-even and $n_g^2$ $CP$-odd operators to the total.
\subsection{Dimension $6,\ (\bar LL)(\bar LL)$}
The $\psi^4$ operators can be grouped into different sets, depending on the chirality properties of the operators. We have seen earlier why the $(\bar LL)(\bar LL)$ invariants are the ones listed in the table.
\begin{align*}
\renewcommand{\arraystretch}{1.5}
\begin{array}[t]{c|c|c}
\multicolumn{3}{c}{\boldsymbol{(\bar LL)(\bar LL) }} \\
\hline
Q_{ll} & \frac14 n_g^2(n_g^2+3)_+ + \frac14 n_g^2(n_g^2-1)_- & (\bar l_p \gamma_\mu l_r)(\bar l_s \gamma^\mu l_t) \\
Q_{qq}^{(1)} & \frac14 n_g^2(n_g^2+3)_+ + \frac14 n_g^2(n_g^2-1)_- & (\bar q_p \gamma_\mu q_r)(\bar q_s \gamma^\mu q_t) \\
Q_{qq}^{(3)} & \frac14 n_g^2(n_g^2+3)_+ + \frac14 n_g^2(n_g^2-1)_- & (\bar q_p \gamma_\mu \tau^I q_r)(\bar q_s \gamma^\mu \tau^I q_t) \\
Q_{lq}^{(1)} & \frac12 n_g^2(n_g^2+1)_+ + \frac12 n_g^2(n_g^2-1)_- & (\bar l_p \gamma_\mu l_r)(\bar q_s \gamma^\mu q_t) \\
Q_{lq}^{(3)} & \frac12 n_g^2(n_g^2+1)_+ + \frac12 n_g^2(n_g^2-1)_- & (\bar l_p \gamma_\mu \tau^I l_r)(\bar q_s \gamma^\mu \tau^I q_t) \\
\hline
\text{Total} & \frac14 n_g^2(7 n_g^2+13)_+ + \frac74 n_g^2(n_g^2-1)_-
\end{array}
\end{align*}
\subsection{Dimension $6,\ (\bar RR)(\bar RR)$}
\begin{align*}
\renewcommand{\arraystretch}{1.5}
\begin{array}[t]{c|c|c}
\multicolumn{3}{c}{\boldsymbol{(\bar RR)(\bar RR) }} \\
\hline
Q_{ee} & \frac18 n_g(n_g+1)(n_g^2+n_g+2)_+ + \frac18 (n_g-1)n_g(n_g+1)(n_g+2)_- & (\bar e_p \gamma_\mu e_r)(\bar e_s \gamma^\mu e_t) \\
Q_{uu} & \frac14 n_g^2(n_g^2+3)_+ + \frac14 n_g^2(n_g^2-1)_- & (\bar u_p \gamma_\mu u_r)(\bar u_s \gamma^\mu u_t) \\
Q_{dd} & \frac14 n_g^2(n_g^2+3)_+ + \frac14 n_g^2(n_g^2-1)_- & (\bar d_p \gamma_\mu d_r)(\bar d_s \gamma^\mu d_t) \\
Q_{eu} & \frac12 n_g^2(n_g^2+1)_+ + \frac12 n_g^2(n_g^2-1)_- & (\bar e_p \gamma_\mu e_r)(\bar u_s \gamma^\mu u_t) \\
Q_{ed} & \frac12 n_g^2(n_g^2+1)_+ + \frac12 n_g^2(n_g^2-1)_- & (\bar e_p \gamma_\mu e_r)(\bar d_s\gamma^\mu d_t) \\
Q_{ud}^{(1)} & \frac12 n_g^2(n_g^2+1)_+ + \frac12 n_g^2(n_g^2-1)_- & (\bar u_p \gamma_\mu u_r)(\bar d_s \gamma^\mu d_t) \\
Q_{ud}^{(8)} & \frac12 n_g^2(n_g^2+1)_+ + \frac12 n_g^2(n_g^2-1)_- & (\bar u_p \gamma_\mu T^A u_r)(\bar d_s \gamma^\mu T^A d_t) \\
\hline
\text{Total} & \frac18 n_g (21n_g^3+2n_g^2+31n_g+2)_+ + \frac18 n_g(n_g^2-1) (21n_g+2)_-
\end{array}
\end{align*}
\subsection{Dimension $6,\ (\bar LL)(\bar RR)$}
\begin{align*}
\renewcommand{\arraystretch}{1.5}
\begin{array}[t]{c|c|c}
\multicolumn{3}{c}{\boldsymbol{(\bar LL)(\bar RR) }} \\
\hline
Q_{le} &\frac12 n_g^2(n_g^2+1)_+ + \frac12 n_g^2(n_g^2-1)_- & (\bar l_p \gamma_\mu l_r)(\bar e_s \gamma^\mu e_t) \\
Q_{lu} &\frac12 n_g^2(n_g^2+1)_+ + \frac12 n_g^2(n_g^2-1)_- & (\bar l_p \gamma_\mu l_r)(\bar u_s \gamma^\mu u_t) \\
Q_{ld} &\frac12 n_g^2(n_g^2+1)_+ + \frac12 n_g^2(n_g^2-1)_- & (\bar l_p \gamma_\mu l_r)(\bar d_s \gamma^\mu d_t) \\
Q_{qe} & \frac12 n_g^2(n_g^2+1)_+ + \frac12 n_g^2(n_g^2-1)_- & (\bar q_p \gamma_\mu q_r)(\bar e_s \gamma^\mu e_t) \\
Q_{qu}^{(1)} & \frac12 n_g^2(n_g^2+1)_+ + \frac12 n_g^2(n_g^2-1)_- & (\bar q_p \gamma_\mu q_r)(\bar u_s \gamma^\mu u_t) \\
Q_{qu}^{(8)} & \frac12 n_g^2(n_g^2+1)_+ + \frac12 n_g^2(n_g^2-1)_- & (\bar q_p \gamma_\mu T^A q_r)(\bar u_s \gamma^\mu T^A u_t) \\
Q_{qd}^{(1)} & \frac12 n_g^2(n_g^2+1)_+ + \frac12 n_g^2(n_g^2-1)_- & (\bar q_p \gamma_\mu q_r)(\bar d_s \gamma^\mu d_t) \\
Q_{qd}^{(8)} & \frac12 n_g^2(n_g^2+1)_+ + \frac12 n_g^2(n_g^2-1)_- & (\bar q_p \gamma_\mu T^A q_r)(\bar d_s \gamma^\mu T^A d_t)\\
\hline
\text{Total} & 4 n_g^2(n_g^2+1)_+ + 4 n_g^2(n_g^2-1)_-
\end{array}
\end{align*}
\subsection{Dimension $6,\ (\bar LR)(\bar RL)$}
\begin{align*}
\renewcommand{\arraystretch}{1.5}
\begin{array}[t]{c|c|c}
\multicolumn{3}{c}{\boldsymbol{(\bar LR)(\bar RL)+\text{h.c.} }} \\
\hline
Q_{ledq} & n_g^4 & (\bar l_p^j e_r)(\bar d_s q_{tj}) \\
\hline
\text{Total} & n_g^4 + \text{h.c.}
\end{array}
\end{align*}
\subsection{Dimension $6,\ (\bar LR)(\bar LR)$}
\begin{align*}
\renewcommand{\arraystretch}{1.5}
\begin{array}[t]{c|c|c}
\multicolumn{3}{c}{\boldsymbol{(\bar LR)(\bar LR)+\text{h.c.} }} \\
\hline
Q_{quqd}^{(1)} & n_g^4 & (\bar q_p^j u_r) \epsilon_{jk} (\bar q_s^k d_t) \\
Q_{quqd}^{(8)} & n_g^4 & (\bar q_p^j T^A u_r) \epsilon_{jk} (\bar q_s^k T^A d_t) \\
Q_{lequ}^{(1)} & n_g^4 & (\bar l_p^j e_r) \epsilon_{jk} (\bar q_s^k u_t) \\
Q_{lequ}^{(3)} & n_g^4 & (\bar l_p^j \sigma_{\mu\nu} e_r) \epsilon_{jk} (\bar q_s^k \sigma^{\mu\nu} u_t) \\
\hline
\text{Total} & 4n_g^4 + \text{h.c.}
\end{array}
\end{align*}
\begin{exercise}
\item In the SMEFT for $n_g$ generations, how many operators are there of the following kind (in increasing order of difficulty): (a) $Q_{He}$ (b) $Q_{ledq}$ (c) $Q_{lq}^{(1)}$ (d) $Q_{qq}^{(1)}$ (e) $Q_{ll}$ (f) $Q_{uu}$ (g) $Q_{ee}$ \\ (h) show that there are a total of 2499 Hermitian dimension-six $\Delta B= \Delta L=0$ operators.
\end{exercise}
The NDA normalization eqn~(\ref{fnda4}) for the SMEFT leads to an interesting pattern for the operators~\cite{Gavela:2016bzc,Jenkins:2013sda},
\begin{align}\label{42}
\mathscr{L} &\sim \widehat C_H \frac{(4\pi)^4}{\Lambda^2} H^6 \nn
& + \widehat C_{\psi^2 H^3} \frac{(4\pi)^3 }{\Lambda^2} \psi^2 H^3 \nn
& + \widehat C_{H^4 D^2} \frac{(4\pi)^2 }{\Lambda^2} H^4 D^2 + \widehat C_{\psi^2H^2 D} \frac{(4\pi)^2}{\Lambda^2} \psi^2 H^2 D + \widehat C_{\psi^4} \frac{(4\pi)^2}{\Lambda^2} \psi^4 \nn
& + \widehat C_{\psi^2 X H} \frac{(4\pi) }{\Lambda^2} g \psi^2 X H \nn
& + \widehat C_{X^2 H^2} \frac{1}{\Lambda^2} g^2 X^2 H^2 \nn
& + \widehat C_{X^3} \frac{1}{(4\pi)^2\Lambda^2} g^3 X^3
\end{align}
with $4\pi$ factors ranging from $(4\pi)^4$ to $1/(4\pi)^2$, a variation of $\sim 4\times 10^6$.
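The quoted span of the $4\pi$ factors is just $(4\pi)^{4} / (4\pi)^{-2} = (4\pi)^{6}$; a one-line check (illustrative, not part of the notes):

```python
import math

# Ratio of the largest ((4*pi)^4) to smallest (1/(4*pi)^2) prefactor in the NDA pattern
span = (4 * math.pi) ** 6
assert 3.9e6 < span < 4.0e6  # consistent with the quoted ~4 x 10^6 variation
```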
The complete renormalization group equations for the SMEFT up to dimension six have been worked out~\cite{Alonso:2014zka,Alonso:2013hga,Jenkins:2013zja,Jenkins:2013wua}. A very interesting feature of these equations is that they respect holomorphy, reminiscent of what happens in a supersymmetric gauge theory~\cite{Alonso:2014rga}. The renormalization group equations take a simpler form if written using the normalization eqn~(\ref{42}).
\section{EFT below $M_W$}
Below the electroweak scale, one can write a low-energy effective theory (LEFT) with quark and lepton fields, and only QCD and QED gauge fields. The operators have been classified in Refs.~\cite{Jenkins:2017dyc,Jenkins:2017jig}. Since $SU(2)$ gauge invariance is no longer a requirement, there are several new types of operators beyond those in SMEFT.
\begin{itemize}
\item
There are dimension-three $\nu \nu$ operators which give a Majorana neutrino mass for left-handed neutrinos.
\item There are dimension-five dipole operators. These are the analog of
the $(\bar L R) XH$ operators in Sec.~\ref{sec:dipole6}, which turn into dimension-five operators when $H$ is replaced by its vacuum expectation value $v$. There are 70 Hermitian $\Delta B=\Delta L=0$ dipole operators for $n_g=3$.
\item There are $X^3$ and $\psi^4$ operators as in SMEFT, but operators containing $H$ are no longer present.
\item There are $\Delta L=4$ $\nu^4$ operators, and $\Delta L=2$ $(\bar \psi \psi)\nu \nu $ four-fermion operators, as well as four-fermion $\Delta B=-\Delta L$ operators.
\item There are 3631 Hermitian $\Delta B=\Delta L=0$ dimension-six operators for $n_g=3$.
\end{itemize}
The complete renormalization group equations up to dimension-six have been worked out for LEFT~\cite{Jenkins:2017dyc,Jenkins:2017jig}. Since the theory has dimension-five operators, there are non-linear terms from two insertions of dimension-five operators for the dimension-six running. Various pieces of the computation have been studied previously~\cite{Aebischer:2015fzz,Aebischer:2017gaw,Bhattacharya:2015rsa,Buchalla:1995vs,Celis:2017hod,Cirigliano:2017azj,Cirigliano:2012ab,Crivellin:2017rmk,Davidson:2016edt,Dekens:2013zca,Falkowski:2017pss,Gonzalez-Alonso:2017iyc}.
\acknowledgements
I would like to thank Sacha Davidson, Paolo Gambino, Mikko Laine and Matthias Neubert for organizing a very interesting school, and all the students for listening patiently to the lectures, asking lots of questions, and solving homework problems on weekends instead of hiking. Sacha Davidson, in particular, made sure all the students were well looked after. I would also like to thank Elizabeth Jenkins, Andrew Kobach, John McGreevy and Peter Stoffer for carefully reading the manuscript, and Peter Stoffer for permission to use the photographs in Fig.~\ref{fig:eclipse}. This work was supported in part by DOE Grant No.~DE-SC0009919.
\section{Introduction}
One of the main goals of today's experiments in nucleus-nucleus (A+A)
collisions is to search for
the critical point (CP) of QCD matter
\cite{Stephanov:1998dy,Stephanov:1999zu,Stephanov:2004wx,Lacey:2006bc,Aggarwal:2010cw}
(see also recent reviews
\cite{Luo:2017faz, Gazdzicki:2015ska}).
Theoretical arguments suggest the enhancement of
net baryon number fluctuations in the critical region
\cite{Stephanov:1998dy,Stephanov:1999zu,Athanasiou:2010kw,Hatta:2003wn,Stephanov:2008qz,Kitazawa:2012at}.
On the experimental side, the STAR collaboration has presented Beam Energy Scan
data on proton cumulants in Au+Au collisions at the Relativistic Heavy Ion Collider,
for center-of-mass energies per nucleon pair, $\sqrt{s_{NN}}$,
from $7.7$~GeV to $200$~GeV. At moderate collision
energies, the data \cite{Aggarwal:2010wy,Adamczyk:2013dal,Luo:2015ewa} of
skewness $S\sigma$ and kurtosis $\kappa\sigma^2$ of the net proton number
fluctuations show an interesting non-monotonic
behavior and exhibit large
deviations from the Poisson baseline.
This is considered as a possible signal for the CP
\cite{Jiang:2015hri,Ling:2015yau,Mukherjee:2015swa,Mukherjee:2016kyu,Herold:2016uvv,Bluhm:2016byc,Jiang:2017mji}.
The experimental data
is also influenced by other effects,
such as initial state fluctuations~\cite{Kapusta:2011gt},
system volume fluctuations~\cite{Gorenstein:2011vq,Sangaline:2015bma,Xu:2016skm}, stopping effects~\cite{Bzdak:2016jxo}, acceptance effects~\cite{Bzdak:2012ab,Braun-Munzinger:2016yjz},
global charge conservation~\cite{Begun:2004gs,Bzdak:2012an},
effects of the hadronic phase~\cite{Kitazawa:2012at,Steinheimer:2016cir}, etc.
Some of these effects have been studied with transport models~\cite{Xu:2016qjd,He:2016uei,He:2017zpg}.
It has also been argued that correlation functions, expressible through cumulants, may provide cleaner information about the underlying dynamics in heavy-ion collisions~\cite{Bzdak:2016sxg}.
In this paper, we focus on the interactions between baryons
within the Quantum van der Waals (QvdW) equation of state \cite%
{Vovchenko:2015xja,Vovchenko:2015vxa,Vovchenko:2015pya}.
In Ref.~\cite{Vovchenko:2016rkn}
the Hadron Resonance Gas (HRG) model with QvdW interactions
between baryons and between antibaryons was formulated and compared
with lattice QCD simulations at zero chemical potentials
in the crossover temperature region. Inclusion of the
QvdW interactions has only a minor influence on the pressure and energy density
in comparison with the ideal HRG~(IHRG) model,
but it significantly changes the structure of all higher-order fluctuations
of conserved charges, in most cases leading to a much better agreement with the lattice data,
e.g., a quantitative agreement up to $T \simeq 160$~MeV is obtained for
the net baryon number susceptibilities.
The influence of nucleon-nucleon interactions and the associated nuclear liquid-gas criticality on baryon number fluctuations had previously also been pointed out in Refs.~\cite{Fukushima:2014lfa,Mukherjee:2016nhb}.
In the
following, we calculate the baryon number fluctuations
along the chemical freeze-out line within the QvdW-HRG model and
make a comparison with the STAR data.
We employ the QvdW formalism in this paper because it is the simplest and most straightforward way
to incorporate nuclear matter physics into the hadronic equation of state.
\section{Model}
Searching for the CP signatures~(or, more generally, for any notable deviations from the Poisson baseline) within an equilibrium HRG model is only possible if appropriate hadron-hadron interactions are taken into account.
Following \cite{Vovchenko:2016rkn}
we assume that
the pressure of the QvdW-HRG system can be written as the sum of $3$ terms:
\begin{equation}
p\left( T,\mu \right) =p_{M}\left( T,\mu \right) +p_{B}\left( T,\mu \right)
+p_{\bar{B}}\left( T,\mu \right)~ ,\label{p}
\end{equation}%
where partial pressures of mesons, baryons, and antibaryons are given by
\begin{eqnarray}
p_{M}\left( T,\mu \right) &=&\sum_{j\in M}p_{j}^{id}\left( T,\mu _{j}\right)~
, \label{pM}\\
p_{B(\bar{B})}\left( T,\mu \right)& =& \sum_{j\in B(\bar{B})}
p_{j}^{id}\left( T,\mu _{j}^{B(\bar{B}) \ast }\right)
-an_{B(\bar{B}) }^{2}. \label{p-BantiB}
\end{eqnarray}%
Here
$p_{j}^{\rm id}$ are the ideal Bose-Einstein~[Eq.~(\ref{pM})] or Fermi-Dirac~[Eq.~(\ref{p-BantiB})] pressures,
$ \mu=(\mu_B,\mu_S,\mu_Q)$
are the chemical potentials which regulate the average values of the total baryonic number $B$,
strangeness $S$, and electric charge $Q$.
In Eqs.~(\ref{pM})-(\ref{p-BantiB}),
$\mu_j=b_j\mu_B+s_j\mu_S+q_j\mu_Q$ and
\eq{\label{muB*}
\mu _j^{B*} & = \mu_j - b\,p_{B} - a\,b\,n_{B}^2 + 2\,a\,n_{B}~,\\
n_{B} & = \sum_{j \in B}n_j =
\left( 1-b\,n_B\right) \sum_{j\in B} n_{j}^{id}\left( T,\mu_j^{B*}\right)~,\label{nB}
}
are, respectively, the shifted chemical potentials for different baryons
and total density of all baryons.
Note that Eq.~\eqref{p-BantiB} implicitly contains terms of arbitrarily high powers of $n_{B(\bar{B})}$, through the transcendental equations~\eqref{muB*} and \eqref{nB}.
The expressions for $\mu_j^{\bar{B}*}$ and $n_{\bar{B}}$ for the antibaryons are analogous
to (\ref{muB*}) and (\ref{nB}).
The net baryonic density is $\rho_B=n_B-n_{\bar{B}}$.
The QvdW interactions are assumed to exist between all pairs of baryons
and between all pairs of antibaryons,
including all the strange (anti)baryons,
with the same parameters
as for nucleons, $\ a=329$ MeV fm$^{3}$ and $b=3.42$ fm$^{3}$~\cite{Vovchenko:2016rkn}. These $a$ and $b$ parameters
were obtained in Ref.~\cite{Vovchenko:2015vxa} by fitting the
saturation density, $n_{0}=0.16$ fm$^{-3}$, and binding energy, $E/A=-16$ MeV,
of the ground state of nuclear matter.
Baryon-antibaryon, meson-meson, and
meson-(anti)baryon QvdW interactions are neglected. Thus, the present version
of the QvdW-HRG model is a ``minimal interaction'' extension of the IHRG model.
It should be noted that the vdW parameters $a$ and $b$ may attain different values in the meson-dominated
region of the phase diagram at small $\mu_B / T$ and high $T$
(see, e.g.,
the recent analysis of the lattice data~\cite{Vovchenko:2017drx}).
As our focus here is on the effects of the nuclear liquid-gas criticality in the baryon-rich region,
we retain the $a$ and $b$ parameter values found from nuclear ground state properties.
For the same reason we omit the possible vdW-like interactions between baryons and antibaryons.
Since we do not study strangeness related observables, we do not consider also
the possible
inclusion
of the uncharted strange baryons suggested in~\cite{Alba:2017mqu}.
The sums in Eqs.~(\ref{pM}) and (\ref{p-BantiB}) include
all stable hadrons and resonances listed in the Particle Data Tables~\cite{Patrignani:2016xqp}.
The QvdW-HRG equation of state (\ref{p})-(\ref{nB}) leads to the liquid-gas phase transition
in the symmetric nuclear matter, with a CP at
$T_c \cong 19.7$~MeV and
$\mu^c _{B} \cong 908$~MeV \cite{Vovchenko:2015vxa}, where a singular behavior of baryon number fluctuations appears~\cite{Vovchenko:2015pya}.
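For orientation (this estimate is not from the paper), the classical van der Waals relations $T_c = 8a/(27b)$ and $n_c = 1/(3b)$ applied to the quoted parameters give $T_c \approx 28.5$~MeV; the lower value $T_c \cong 19.7$~MeV cited above reflects the quantum (Fermi) statistics of nucleons retained in the QvdW treatment.

```python
a = 329.0  # MeV fm^3, QvdW attraction parameter
b = 3.42   # fm^3, QvdW excluded-volume parameter

# Classical (Boltzmann) van der Waals critical point:
Tc = 8 * a / (27 * b)  # MeV
nc = 1 / (3 * b)       # fm^-3

print(f"classical estimate: T_c = {Tc:.1f} MeV, n_c = {nc:.4f} fm^-3")
```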
\begin{figure*}[tb]
\center
\includegraphics[width=0.49\textwidth]{Tmu-skewness-BES-err.pdf}
\includegraphics[width=0.49\textwidth]{Tmu-kurtosis-BES-err.pdf}
\caption{The contour plots of (a) $S\protect\sigma$ and (b) $\protect\kappa\protect\sigma^2$ for net baryon fluctuations in the $\mu_B$-$T$ plane, as calculated within the QvdW-HRG model.
The dash-dotted line shows the IHRG model chemical freeze-out curve~[Eq.~(\ref{Tmu})] from Ref.~\cite{Cleymans:2005xv},
while the semi-transparent shaded area along it depicts the uncertainty in the parameters of this freeze-out curve. The nuclear liquid-gas phase transition is depicted by the thick black line, which ends at the CP depicted by the full circle.
The red circles with error bars correspond to the thermal fits performed within
the QvdW-HRG model to the hadron yield data at AGS~($\sqrt{s_{NN}} = 5$~GeV)
and SPS~($\sqrt{s_{NN}} = 6.3$, 7.7, and 8.8~GeV).
}
\label{figTmu}
\end{figure*}
To calculate the particle number fluctuations in A+A collisions
we adopt the thermodynamic freeze-out parameters, which were obtained in Refs.~\cite{Andronic:2005yp,Cleymans:2005xv}
by fitting the particle yields at different collision
energies within the HRG model.
The following simple functional form of the freeze-out curve was obtained~\cite{Cleymans:2005xv}:
\eq{
T =a_{1}-a_{2}\mu_B^{2}-a_{3}\mu_B^{4}~,~~~~
\mu_B=\frac{b_{1}}{1+b_{2}\sqrt{s_{NN}}}, \label{Tmu}
}
with $a_{1}=0.166 \pm 0.002$ GeV, $a_{2}=0.139 \pm 0.016$ GeV$^{-1}$, $a_{3}=0.053 \pm 0.021$ GeV$^{-3}$, $b_{1}=1.308 \pm 0.028$ GeV, $b_{2}=0.273 \pm 0.008$ GeV$^{-1}$.
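As an illustration (not part of the paper), Eq.~(\ref{Tmu}) with the central parameter values gives, e.g., $(T, \mu_B) \approx (140, 422)$~MeV at $\sqrt{s_{NN}} = 7.7$~GeV:

```python
# Central values of the freeze-out parameters, in GeV units
a1, a2, a3 = 0.166, 0.139, 0.053
b1, b2 = 1.308, 0.273

def freezeout(sqrt_s):
    """Return (T, mu_B) in GeV at collision energy sqrt(s_NN) in GeV."""
    muB = b1 / (1 + b2 * sqrt_s)
    T = a1 - a2 * muB**2 - a3 * muB**4
    return T, muB

for s in (7.7, 19.6, 62.4, 200.0):
    T, muB = freezeout(s)
    print(f"sqrt(s_NN) = {s:5.1f} GeV:  T = {1e3*T:5.1f} MeV,  mu_B = {1e3*muB:5.1f} MeV")
```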
A flatter chemical freeze-out curve,
with a possibly
lower limiting $T \sim 145$ MeV value at $\mu_B = 0$, was also suggested
in Refs.~\cite{Alba:2014eba,Vovchenko:2015idt,Bazavov:2017dus,Critelli:2017oub} based on fluctuations of conserved charges.
As our focus here is on the baryon-rich region, we retain the original parameterization of Ref.~\cite{Cleymans:2005xv}. It was obtained by analyzing hadron yield data at collision energies as low as those of the Schwerionen Synchrotron~(SIS), i.e., $\mu_B / T \simeq 15$.
The
fluctuations of conserved charges
can be calculated in the grand canonical ensemble (GCE) from the system pressure
by taking the derivatives over the corresponding chemical potentials.
The net baryon number fluctuations are given by the following normalized cumulants
(susceptibilities) $(n=1,\ldots,4)$:
\eq{
\chi_{n}& =\frac{\partial ^{n}\left( p/T^{4}\right) }{\partial \left( \mu_B
/T\right) ^{n}}= \frac{\partial ^{n}\left( p_B/T^{4}\right) }{\partial \left( \mu_B
/T\right) ^{n}}\nonumber \\
&+~(-1)^n\frac{\partial ^{n}\left( p_{\bar{B}}/T^{4}\right) }{\partial \left( \mu_{\bar{B}}
/T\right) ^{n}}~\equiv~\chi_n^{B}+(-1)^n \chi_n^{\bar{B}}~,\label{kn}
}
where $\mu_{\bar{B}} \equiv - \mu_B$
is the antibaryon chemical potential~(not to be confused with the shifted chemical potentials $\mu^{B*}_j$, which are only auxiliary quantities).
The simple decomposition of $\chi_n$ into the $\chi_n^B$ and $\chi_n^{\bar{B}}$ cumulants in (\ref{kn})
is due to the
absence of correlations between baryons and antibaryons in the QvdW-HRG model, i.e.
the probability distribution ${\cal P} (N_B,N_{\bar{B}})$ of the numbers of baryons
and antibaryons factorizes as
$ {\cal P} (N_B,N_{\bar{B}})=
{\cal P}_B(N_B)\,{\cal P}_{\bar{B}}(N_{\bar{B}})$.
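The derivative structure of Eq.~(\ref{kn}) can be illustrated numerically. The toy sketch below takes a single-species ideal Boltzmann baryon gas (with a hypothetical prefactor $z$, not the full QvdW-HRG pressure) and obtains $\chi_n$ by nested central finite differences of $p/T^4$; for this toy case one has $\chi_3/\chi_2 = \tanh(\mu_B/T)$ and $\chi_4/\chi_2 = 1$.

```python
import math

def pressure(mu_hat, z=0.05):
    """p/T^4 for an ideal Boltzmann gas of a single baryon species and its
    antibaryon; z is a mu-independent prefactor (hypothetical value)."""
    return z * (math.exp(mu_hat) + math.exp(-mu_hat))

def chi(n, mu_hat, h=1e-2):
    """chi_n = d^n(p/T^4)/d(mu_B/T)^n, Eq. (kn), via nested central
    finite differences."""
    if n == 0:
        return pressure(mu_hat)
    return (chi(n - 1, mu_hat + h) - chi(n - 1, mu_hat - h)) / (2.0 * h)

mu_hat = 1.2   # mu_B / T
print("chi_2         =", chi(2, mu_hat), " exact:", 2 * 0.05 * math.cosh(mu_hat))
print("S*sigma       =", chi(3, mu_hat) / chi(2, mu_hat), " exact:", math.tanh(mu_hat))
print("kappa*sigma^2 =", chi(4, mu_hat) / chi(2, mu_hat), " exact: 1")
```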
In the following, we consider the normalized skewness and kurtosis for the net baryonic number
fluctuations. They are defined as the corresponding ratios of cumulants:
\begin{eqnarray}
S\sigma =\frac{\chi_{3}}{\chi_{2}}~,~~~~
\kappa \sigma ^{2} =\frac{\chi_{4}}{\chi_{2}}~.
\label{kurt}
\end{eqnarray}
\section{Results and discussion}
The $\mu_B$-$T$ contour plots of $S \sigma$ and $\kappa \sigma^2$, as calculated in the QvdW-HRG model, are depicted in Fig.~\ref{figTmu}.
The IHRG model chemical freeze-out curve (\ref{Tmu}) from Ref.~\cite{Cleymans:2005xv}
is depicted in Fig.~\ref{figTmu} by the dash-dotted line.
At each $\mu_B$-$T$ point, the strangeness and electric charge chemical potentials $\mu_S$ and $\mu_Q$ are determined in the QvdW-HRG model from the condition of strangeness neutrality, $S = 0$, and fixed electric-to-baryon charge ratio, $Q / B = 0.4$.
Figure~\ref{figTmu} shows that signals from the nuclear matter CP shine brightly in net baryon $S \sigma$ and $\kappa \sigma^2$ across the whole phase diagram probed by the heavy-ion collision experiments.
\begin{figure*}[tb]
\centering
\includegraphics[width=0.49\textwidth]{skewnessBESerr.pdf}
\includegraphics[width=0.49\textwidth]{kurtosisBESerr.pdf}
\caption{The (a) $S\protect\sigma$ and (b) $\protect\kappa\protect\sigma^2$
of net baryons in full acceptance (solid lines) and
net protons in finite acceptance (dash-dotted lines)
as functions of the collision energy in the QvdW-HRG model.
The bands estimate the uncertainty coming from the chemical freeze-out curve~[Eq.~\eqref{Tmu}].
The dotted lines correspond to the Skellam distribution baseline of non-interacting hadrons.
The STAR collaboration data for the midrapidity net proton fluctuations in the $0.4 < p_T < 0.8$~GeV/$c$~\cite{Adamczyk:2013dal} and $0.4 < p_T < 2$~GeV/$c$~\cite{Luo:2015ewa} intervals are shown by full and open red circles, respectively.
}
\label{fig1}
\end{figure*}
The mean values, $\langle N_B\rangle$, and
central moments,
$m_n^B=\sum_{N_B}(N_B-\langle N_B\rangle )^n {\cal P}_B(N_B)$,
of the corresponding baryon number
distributions may be measured in A+A collisions.
From these measured values the cumulants of ${\cal P}(N_B)$ distribution are found:
\eq{\label{cum}
& K_1^B=\langle N_B\rangle, ~~~~K_2^B=m_2^B~,~~~~K_3^B=m_3^B~,\nonumber \\
& K_4^B=m_4^B-3\left(m_2^B\right)^2~.
}
Similar expressions hold for antibaryon quantities.
The cumulants $K_n$ are
connected to $\chi_n$ in Eq.~(\ref{kn}) as $K_n=VT^3\,\chi_n$, where $V$ is the system volume.
Therefore, all ratios of cumulants $\chi_n$, in particular those in Eq.~(\ref{kurt}),
are equal to the corresponding ratios of $K_n$.
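Equation~\eqref{cum} (with $K_4 = m_4 - 3m_2^2$) is straightforward to apply to event-by-event samples. The minimal sketch below applies it, for illustration, to the net number of two independent Poisson variables, i.e. the Skellam baseline shown by the dotted lines in Fig.~\ref{fig1}, for which $S\sigma = (b-\bar b)/(b+\bar b)$ and $\kappa\sigma^2 = 1$.

```python
import numpy as np

rng = np.random.default_rng(1)

def cumulants(samples):
    """K_1..K_4 from the mean and the central moments m_n, Eq. (cum),
    with K_4 = m_4 - 3*m_2^2."""
    mean = samples.mean()
    d = samples - mean
    m2, m3, m4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
    return mean, m2, m3, m4 - 3.0 * m2**2

# Skellam baseline: the net of two independent Poisson variables with means
# b and bbar has cumulants K_n = b + (-1)^n * bbar.
b, bbar, n = 2.0, 0.5, 4_000_000
net = rng.poisson(b, n).astype(float) - rng.poisson(bbar, n)
K1, K2, K3, K4 = cumulants(net)
print("S*sigma       =", K3 / K2, " (Skellam:", (b - bbar) / (b + bbar), ")")
print("kappa*sigma^2 =", K4 / K2, " (Skellam: 1)")
```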
The ``required acceptance''
\cite{Jeon:2000wg,Asakawa:2000wh,Bzdak:2012ab} for the event-by-event measurements in A+A
reactions should then satisfy two requirements:
the GCE can be used
only if the accepted phase-space region is a {\it small} part of the whole system;
on the other hand, this region should be {\it large} enough to capture
the relevant physics.
Following Refs.~\cite{Kitazawa:2012at,Bzdak:2012ab}, it is assumed
that acceptance corrections from all different sources can be
modeled by binomial distributions,
\eq{\label{binom}
P\left( n \right) =\sum_{N=n}^{\infty }{\cal P}\left( N\right)
\frac{N!}{n!\left( N-n\right)!} q^{n} \left( 1-q\right) ^{N-n},
}
where $n$ represents the \emph{measured} number of baryons (or antibaryons), and $N$ represents
their \emph{true} numbers.
Equation~\eqref{binom} includes the possible effects of isospin randomization,
which are also modeled by the binomial distribution~\cite{Kitazawa:2011wh,Kitazawa:2012at}.
The parameter $q$~($0\le q\le 1$) describes the acceptance effects.
In general, it can be different
for baryons and antibaryons.
Equation~(\ref{binom}) gives $P(n)={\cal P}(N)$ for $q=1$, whereas
$P(n)$ becomes the Poisson distribution, with $\langle n\rangle=q\langle N\rangle$ in the limit $q\rightarrow 0$.
Using Eq.~(\ref{binom}), the calculation of all moments and cumulants $c_n$ of the
accepted baryons and antibaryons
is straightforward.
They are presented as linear combinations,
$c_n=a_1K_1+\ldots+a_nK_n$, with $a_i=a_i(q)$ (see details
and explicit expressions in Refs.~\cite{Bzdak:2012ab,Kitazawa:2012at}).
The STAR data correspond to accepted, efficiency corrected protons and antiprotons at midrapidity~($|y|<0.5$),
in two different transverse momentum intervals:
$0.4 < p_T < 0.8$~GeV/$c$~\cite{Adamczyk:2013dal}
and $0.4 < p_T < 2$~GeV/$c$~\cite{Luo:2015ewa}.
It is further assumed that parameters $q$ for baryons and antibaryons attain the same value.
We take $q=1$ and $0.2$, which approximately represent two cases:
1) the ideal case of all baryons and antibaryons~($q = 1$);
2) a more realistic case of protons and antiprotons within a particular acceptance~($q = 0.2$),
including the possible isospin randomization~\cite{Kitazawa:2011wh,Kitazawa:2012at}.
Fig.~\ref{fig1} shows the skewness
and the kurtosis, as calculated in the QvdW-HRG
model, as functions of $\sqrt{s_{NN}}$
along the chemical freeze-out line (\ref{Tmu}). Solid lines represent the results
of $S\sigma$ and $\kappa\sigma^2$ of the QvdW-HRG model under the assumption of a full acceptance
for both baryons and antibaryons, $q=1$ in Eq.~(\ref{binom}).
The kurtosis $\kappa\sigma^2$ shows nonmonotonic
behavior at moderate collision energies,
similar to the STAR data
for net protons \cite{Luo:2015ewa}.
Dash-dotted lines represent the results for $q=0.2$.
Obviously, acceptance effects have a large quantitative and qualitative
influence on the behavior of both $S\sigma$ and $\kappa\sigma^2$,
and appear to bring them closer to the experimental measurements.
Note that the model does not reproduce the preliminary STAR data at the lowest collision energies.
This could signal new physics not contained in the purely hadronic QvdW-HRG model, although it may be more prudent to await the finalized data before drawing stronger conclusions.
At small $q$, the QvdW-HRG results approach the baselines
obtained from ideal gas calculations, shown in Fig.~\ref{fig1} by the dotted lines.
Note that there would be no difference between net baryon and accepted
net proton cumulant ratios in the IHRG model.
This is because the binomial filter acts as a ``Poissonizer'', and therefore it
does not introduce differences between cumulant ratios of net proton and net baryon distributions in the IHRG model,
where fluctuations correspond to Poisson statistics.
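A small Monte Carlo sketch of this ``Poissonizer'' effect, using a negative binomial distribution as a hypothetical non-Poissonian stand-in for an interacting true baryon distribution (not the actual QvdW-HRG distribution):

```python
import numpy as np

rng = np.random.default_rng(7)

def var_over_mean(x):
    """Scaled variance; equals 1 for a Poisson distribution."""
    return x.var() / x.mean()

def binomial_filter(true_counts, q):
    """Binomial acceptance filter of Eq. (binom): each of the N true
    (anti)baryons is kept independently with probability q."""
    return rng.binomial(true_counts, q)

# Hypothetical non-Poissonian "true" distribution (negative binomial,
# variance/mean = 2).
true_n = rng.negative_binomial(20, 0.5, 1_000_000)
v_full = var_over_mean(binomial_filter(true_n, 1.0))  # q = 1: unchanged
v_cut = var_over_mean(binomial_filter(true_n, 0.2))   # q = 0.2: "Poissonized"
print("var/mean at q=1.0:", v_full)  # ~ 2, clearly non-Poisson
print("var/mean at q=0.2:", v_cut)   # ~ 1.2, closer to the Poisson value 1
```

Thinning a negative binomial yields another negative binomial with scaled variance $1 + q(1-p)/p$, so small $q$ indeed drives the ratio toward the Poisson value of unity.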
More detailed IHRG model studies~\cite{Nahrgang:2014fza} show that net proton fluctuations remain very
similar to net baryon fluctuations in the IHRG model even when more effects, such as the probabilistic resonance decays, are taken into account.
In contrast to the IHRG model,
the presence of baryon-baryon interactions in the QvdW-HRG model makes the net baryon distribution quite different from the Poisson statistics.
In this case, the application of the binomial filter changes the cumulant ratios and is the reason for
the large difference between the results for net baryon~($q = 1$) and
accepted net proton~($q = 0.2$) fluctuation observables seen in Fig.~\ref{fig1}.
The binomial filter is only a schematic way to do an acceptance correction, and a more accurate analysis should take into account the correlation range relative to the acceptance.
Nevertheless, the binomial filter is fully sufficient to illustrate
that the presence of the QvdW interactions between baryons at the chemical freeze-out leads to differences between experimentally observed net proton fluctuations and net baryon fluctuations.
This difference can be quite large when baryon-baryon interactions are non-negligible, as suggested by our calculations within the QvdW-HRG model.
If the effects of baryonic interactions are indeed significant, then the justification for the direct correspondence between the net
baryon cumulant ratios, calculated either from first principles
in lattice QCD~\cite{Borsanyi:2014ewa,Bazavov:2017tot} or within
effective models for QCD equation of state~\cite{Albright:2015uua,Fu:2016tey,Almasi:2017bhq},
and the net proton cumulant ratios measured in heavy-ion collisions at RHIC, can be questioned.
Corrections for differences between net proton and net baryon fluctuations are then required.
We also stress the importance of including the full
spectrum of baryonic resonances, which is a new element compared to the earlier works~\cite{Fukushima:2014lfa,Vovchenko:2015pya}.
The resonance decay feeddown to the final proton yield could be neglected in the very vicinity of the nuclear liquid-gas transition~($T \lesssim 30$~MeV),
but it is essential at the higher temperatures probed by heavy-ion collisions: the resonance decay feeddown accounts for about 10\% of all observed protons already at the HADES energy of $\sqrt{s_{NN}} = 2.4$~GeV, and for about 50\% of all observed protons at the lowest STAR-BES energy of $\sqrt{s_{NN}} = 7.7$~GeV.
The feeddown from unstable mass fragments can also be important~\cite{Hahn:1986mb}.
In the present work we took into account the resonance decay contribution approximately, by applying the binomial filter to all baryons and antibaryons.
The full probabilistic decay treatment was considered in Ref.~\cite{Nahrgang:2014fza} for the IHRG model, and a result consistent with the binomial filter was reported: no additional significant differences between net proton and net baryon cumulant ratios.
It will be interesting to consider the full probabilistic
decay treatment in the QvdW-HRG model as well to verify the accuracy of the binomial filter.
One more comment is appropriate here: The chemical
freeze-out line (\ref{Tmu}) used in our studies is determined from the thermal fits to heavy-ion hadron yield data within the simple statistical model for non-interacting hadrons -- the IHRG model.
This may be approximately valid
in the QvdW-HRG model at large collision energies.
However, as thermal fits are affected by hadronic interactions~\cite{Vovchenko:2015cbk,Vovchenko:2016ebv},
it is not clear whether a simple IHRG model
is appropriate for determination of the chemical freeze-out conditions in the baryon-rich matter created in heavy-ion collisions at $\sqrt{s_{NN}}=7.7$~GeV and at lower collision energies.
As a cross-check, we have performed thermal fits to the hadron yield data
in central heavy ion collisions at AGS~($\sqrt{s_{NN}} = 5$~GeV)~\cite{AGSdata} and SPS~($\sqrt{s_{NN}} = 6.3$, 7.7, and 8.8~GeV)~\cite{NA49data}
within the chemical equilibrium QvdW-HRG model.
The results are depicted in Fig.~\ref{figTmu} by the red symbols.
These fits lead to increased uncertainties in
the extracted chemical freeze-out values of $T$ and $\mu_B$.
Nevertheless, the overall picture is consistent with the chemical freeze-out curve given by Eq.~\eqref{Tmu}.
In fact,
in the IHRG model itself the chemical freeze-out parameters
are also rather uncertain, as illustrated
in Fig.~\ref{figTmu}.
Further refinements will be studied in the ongoing and future heavy-ion experiments, such as the HADES experiment~\cite{Agakishiev:2009am},
the NA61/SHINE experiment~\cite{Abgrall:2014xwa,Gazdzicki:2015ska},
the STAR fixed-target program~\cite{Meehan:2016iyt},
the future CBM experiment at FAIR~\cite{Ablyazimov:2017guv}, and the future NICA project~\cite{Kekelidze:2016wkp}.
The {\it moderate} collision energies, $\sqrt{s_{NN}} \lesssim 7.7$~GeV, appear to be the most interesting region
for studies of baryon number fluctuations.
We emphasize that our results give a {\it qualitative}
description of the net proton fluctuations measured at mid-rapidity in heavy-ion collision experiments.
A complete analysis has to take into account other effects
including the initial state fluctuations and global charge conservation,
as well as possible loss of information about equilibrium fluctuations during the non-equilibrium evolution in the hadronic phase~\cite{Steinheimer:2016cir}.
A dynamical model, incorporating the above effects,
and the baryonic interactions, can be used in a {\it quantitative} study.
\begin{figure}[tb]
\centering
\includegraphics[width=0.49\textwidth]{scenarios2.pdf}
\caption{
A schematic view of the collision energy dependence of kurtosis $\kappa \sigma^2$ of net proton fluctuations in two different scenarios.
(a): The ``standard scenario''~\cite{Stephanov:2011zz}, where the energy dependence is determined by the chiral CP of QCD.
(b): The scenario where the $\kappa \sigma^2$ behavior is determined by the nuclear liquid-gas criticality.
}
\label{figscen}
\end{figure}
\section{Summary}
In summary, we have used the QvdW-HRG model, with attractive and repulsive vdW
interactions between pairs of baryons and between pairs of antibaryons, to study the
higher order cumulants of particle number fluctuations. The cumulant ratios
which define the skewness $S\sigma$ and the kurtosis $\kappa\sigma^2$
for baryonic number fluctuations are calculated. These quantities
show non-monotonic structures
along the chemical freeze-out line,
which bear similarities to the data presented by the STAR Collaboration.
These results emphasize the importance of the interactions between baryons for higher order fluctuations.
Any serious thermodynamics-based analysis of the net baryon fluctuation measurements should take into account the effects arising from the nuclear liquid-gas criticality.
The QvdW-HRG model predictions for higher-order net baryon fluctuations presented here are quantitatively reliable in the vicinity of the critical point of nuclear matter.
A more precise description away from nuclear matter will require refinements and modifications.
The present results, however, are sufficient to make an important point regarding the beam energy dependence of the baryon number fluctuations.
In the ``standard scenario'' for the QCD CP, shown in panel (a) of Fig.~\ref{figscen}, the kurtosis $\kappa\sigma^2$ is expected to {\it decrease} with decreasing
$\sqrt{s_{NN}}$ at moderate collision energies, because the chemical freeze-out
$(T,\mu_B)$-point moves away from the hypothetical QCD CP~\cite{Stephanov:2011zz}.
In contrast, here~(panel (b) of Fig.~\ref{figscen}) $\kappa\sigma^2$ keeps {\it increasing}
with decreasing $\sqrt{s_{NN}}$, as the chemical freeze-out point moves closer to the nuclear CP.
Future fluctuation measurements and their analysis at moderate collision energies should be able to distinguish these scenarios.
\begin{acknowledgments}
\emph{Acknowledgments.} We are grateful to Adam Bzdak, Volker Koch, Misha Stephanov,
Jan Steinheimer,
and Nu Xu for stimulating discussions.
The authors appreciate interesting discussions with the participants at the EMMI Workshop on Critical
Fluctuations
Near the QCD Phase Boundary in Relativistic Nuclear Collisions, 10-17 October
2017, Wuhan, China.
This work was supported by HIC for FAIR within the LOEWE program of the State of Hesse.
V.V. acknowledges the support from HGS-HIRe for FAIR.
H.St. acknowledges the support through the Judah M. Eisenberg Laureatus Chair at Goethe University.
The work of M.I.G. was supported
by the Program of Fundamental Research of the Department of
Physics and Astronomy of National Academy of Sciences of Ukraine.
\end{acknowledgments}
\section*{Acknowledgments}
\label{sec:ackno}
The authors thank Sam Fletcher and Borja Balle for comments on this material.
\section{Boosting with the M$\alpha$-loss}\label{sec-boost}
\noindent$\triangleright$ \textbf{Boosting decision trees}: We know from the last Section that the M$\alpha$-loss allows us, by
tuning $\alpha$, to continuously change the sensitivity of the
criterion between the minimal ($\alpha = 0$) and the maximal one ($\alpha = 1$). We now show that the criterion also allows us to
interpolate between an optimal boosting regime ($\alpha = 1$) and a
``minimal'' convergence guarantee ($\alpha = 0$), thereby completing the
boosting vs privacy picture for the M$\alpha$-loss. We first tackle the
induction of a single DT as in \citet{kmOT}. Boosting starts
by formulating a \textit{Weak Learning Assumption} (WLA) which gives a weak
form of correlation with labels for the elementary block of a
classifier. In the case of a DT, such a block is a split. So, consider
leaf $\leaf$ and a test $g : {\mathcal{X}} \rightarrow \{0,1\}$ that
splits the leaf in two, the examples going to the left (for which $g=0$)
and those going to the right (for which $g=1$). The relative weight of
positive examples reaching $\leaf$ is $q \in (0,1)$, where $q\neq 0,1$
ensures that the leaf is not pure. Define the balanced
weights at leaf $\leaf$ to be (a) $\tilde{w}_i = 0$ if $\ve{x}_i\not\in\leaf$, else (b) $\tilde{w}_i = 1/(2q)$ if $y_i = +1$, else (c)
$\tilde{w}_i = 1/(2(1-q))$. Let $\tilde{\ve{w}}_{\leaf}$ denote the complete
distribution, and define $g^{\nicefrac{+}{-}} \in \{-1, 1\}^{\mathcal{X}}$ by $g^{\nicefrac{+}{-}}
\defeq -1 + 2g$. We adopt
the \textit{edge}
notation $\eta(\tilde{\ve{w}}, h) \defeq \sum_i \tilde{w}_i y_i h(\ve{x}_i)$ for any $h
\in \mathbb{R}^{\mathcal{X}}$. Let $\upgamma > 0$ be a constant.
\begin{definition}(WLA for DT)\label{wlaKMDT}
Split $g$ at leaf $\leaf$ satisfies the $\upgamma$-WLA iff $|\eta(\tilde{\ve{w}}_\leaf, g^{\nicefrac{+}{-}})| \geq \upgamma$.
\end{definition}
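The balanced-weight edge of Definition \ref{wlaKMDT} is easy to compute directly; the sketch below uses a hypothetical toy leaf (weights normalized so the edge lies in $[0,1]$).

```python
import numpy as np

def balanced_edge(y, in_leaf, g):
    """|eta(w~_leaf, g^{+/-})| for the balanced weights at a leaf:
    w~_i = 1/(2q) for positives reaching the leaf, 1/(2(1-q)) for
    negatives, normalized here so that the edge lies in [0, 1]."""
    y, in_leaf, g = map(np.asarray, (y, in_leaf, g))
    y_leaf, g_leaf = y[in_leaf], g[in_leaf]
    q = (y_leaf == 1).mean()                 # fraction of positives at the leaf
    w = np.where(y_leaf == 1, 1 / (2 * q), 1 / (2 * (1 - q)))
    w = w / w.sum()
    g_pm = 2 * g_leaf - 1                    # g^{+/-} = -1 + 2g
    return abs((w * y_leaf * g_pm).sum())

# Toy leaf: 3 positives, 5 negatives, all reaching the leaf.
y = np.array([1, 1, 1, -1, -1, -1, -1, -1])
in_leaf = np.ones(8, dtype=bool)
g_perfect = (y == 1).astype(int)             # perfectly informative split
g_mixed = np.array([1, 0, 1, 0, 1, 0, 1, 0]) # weakly informative split
print(balanced_edge(y, in_leaf, g_perfect))  # maximal edge: 1.0
print(balanced_edge(y, in_leaf, g_mixed))
```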
Definition \ref{wlaKMDT} is not the same as in \citet{kmOT}, but it is
equivalent (\sm, $\S$ \ref{proof_thBoostDT1}) and in fact more
convenient for our framework. A random split would not satisfy the WLA, so the
WLA enforces the existence of splits at least moderately correlated
with the class. Top-down DT induction usually does
\textit{not} proceed by optimizing the split based on the WLA, but in
fact it can be shown that the WLA \textit{implies} good splits according to
top-down DT induction criteria \citep[Section 5.3]{kmOT}.
So we let $h_\ell$ denote the current DT with $\ell$
leaves and $\ell-1$ internal nodes. We grow it to get $h_{\ell+1}$ by minimizing
$\bayesalphariskparam{\ell}$ as:
\begin{eqnarray}
\bayesalphariskparam{\ell}(h_{\ell+1}) & \defeq & \alpha_{\ell} \cdot \bayesmatrisk(h_{\ell+1}) + (1-\alpha_{\ell}) \cdot \bayeserrrisk(h_{\ell+1}),\nonumber
\end{eqnarray}
where $h_{\ell+1}$ is $h_\ell$ with a leaf $\leaf \in \leafset(h_\ell)$
replaced by a split. With $K\geq 0$ a constant, $w \defeq \sum_i w_i$ and
\begin{eqnarray}
\tilde{w}_\ell & \defeq & \left(\frac{1}{w}\right) \cdot \sum_i w_i \iver{i\in \leaf_\ell} \quad
\in
[0,1]\label{defNW}
\end{eqnarray}
the total \textit{normalized} weight of the examples reaching
$\leaf_\ell$, we say that the sequence of $\alpha$s is \textit{$K$-monotonic}
iff $\alpha_\ell \leq \alpha_{\ell-1} \cdot \exp\left(K \tilde{w}_\ell
(1-\alpha_{\ell-1})\right)$ for any $\ell>2$ (and $\alpha_1 \in [0,1]$). Since the parameter in the
$\exp$ is $\geq 0$, $K$-monotonicity prevents the sequence from
growing too fast.
\begin{theorem}\label{thBoostDT1}
Suppose all splits satisfy the $\upgamma$-WLA and the sequence of
$\alpha$s is $(\upgamma^2/16)$-monotonic. Then $\forall \xi \in (0,1]$, the empirical risk of $h_L$ satisfies $\emprisk(h_L) \leq \xi$ as long as
\begin{eqnarray}
\sum_{\ell=1}^L \tilde{w}_\ell \alpha_\ell & \geq & \left(\frac{16}{\upgamma^2}\right) \cdot \log \left(\frac{1}{\xi}\right).\label{eqBOOST11}
\end{eqnarray}
\end{theorem}
It is worth remarking
that this is indeed a generalization of \cite{kmOT}: suppose
$\alpha_{\ell} = \alpha$ constant (which is
$(\upgamma^2/16)$-monotonic $\forall \upgamma \geq 0$) and we pick at each
iteration the heaviest leaf to split. We thus have $\tilde{w}_\ell\geq
1/\ell$, assuming further it satisfies the WLA. Since $\sum_{\ell=1}^L (1/\ell) \geq
\int_1^{L+1} \mathrm{d}z/z = \log(L+1)$, \eqref{eqBOOST11} is
guaranteed if
\begin{eqnarray}
L & \geq & \left(\frac{1}{\xi}\right)^{\frac{16}{\alpha \upgamma^2}}, \label{boostCOND}
\end{eqnarray}
which, for $\alpha=1$, is in fact the square root of the bound in \citet[Theorem
10]{kmOT} and is thus significantly better. Beyond this quantitative
improvement, we were seeking a qualitative one: \citet{kmOT}
pick the heaviest leaf to split, which means spending DP budget to find
it. To see how we can get essentially the same guarantee
\textit{without} this constraint, suppose instead that we split
\textit{all} current leaves, \textit{all} of them satisfying the
WLA\footnote{This happens to be reasonable on domains big enough, for small trees or when the
set from which $g$ is picked is rich enough.}. Since
$\sum_{\leaf} \tilde{w}(\leaf) = 1$ (those weights are normalized), once we remark that it
takes one split for the root, then two, then four and so on to fully
split the current leaves, $L$ boosting iterations guarantee a full
split up to depth $O(\log(L))$, which delivers the same condition as
\eqref{boostCOND} with an eventual change in the exponent
constant. Our result is also a generalization of \citet{kmOT} since it allows us
to \textit{tune} $\alpha$ during learning, which is important
for us ($\S$ \ref{sec-solv}).\\
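The elementary harmonic-sum bound used in the derivation above, $\sum_{\ell=1}^L 1/\ell \geq \log(L+1)$, can be checked numerically:

```python
import math

# Sanity check of the bound used in deriving (boostCOND):
# sum_{l=1}^{L} 1/l >= int_1^{L+1} dz/z = log(L+1).
for L in (1, 10, 100, 10_000):
    harmonic = sum(1.0 / l for l in range(1, L + 1))
    print(L, harmonic, math.log(L + 1))
    assert harmonic >= math.log(L + 1)
```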
\noindent$\triangleright$ \textbf{Boosting linear combinations of
classifiers}: we now consider that we build a Linear Combination
(LC) of classifiers, $H_T \defeq
\sum_{t=1}^T \beta_t h_t$, where $h_t : \mathcal{X} \rightarrow
\mathbb{R}$ is a real valued classifier --- this could be a DT or any
other applicable classifier. We tackle the problem of achieving
boosting-compliant convergence when building $H_T$, which means we
have a WLA on each $h_t$. We also assume
$\exists M>0$ such that $|h_t|\leq M$. Let $\ve{w}_t
\in [0,1]^m$ be an \textit{unnormalized} weight vector on $\mathcal{S}$, $t$ denoting the
iteration number from which $h_t$ is obtained. Denoting by $\tilde{w}_t \defeq \left(\nicefrac{1}{m}\right)\cdot \sum_i w_{ti} \in [0,1]$
the expected unnormalized weight at iteration $t$, we also let
$\tilde{\ve{w}}_t \defeq (1/(m \tilde{w}_t))\cdot \ve{w}_t$ denote the
\textit{normalized} weight vector at iteration $t$. The WLA is as follows.
\begin{definition}(WLA for LC)\label{wlaKMLC}
$h_t$ obtained at iteration $t$ satisfies the $\upgamma$-WLA iff
$|\eta(\tilde{\ve{w}}_t, h_t)| \geq \upgamma M$.
\end{definition}
Remark that this definition is similar to Definition \ref{wlaKMDT},
since $|g^{\nicefrac{+}{-}}| = 1$. The crux is now how to get the weight vectors $\ve{w}_t$ so that
we can prove a boosting-compliant convergence rate using the
M$\alpha$-loss. We do so using a standard mechanism, which consists in
initializing $\bm{w}_1 \defeq (1/2)\cdot \ve{1}$ (unnormalized) and then using the
mirror update of the M$\alpha$-loss to update weights after $h_t$ has
been received:
\begin{eqnarray}
w_{(t+1)i} \hspace{-0.3cm} & \leftarrow & \hspace{-0.3cm}{\alphalink}^{-1}\left( -\beta_t y_{i}
h_t(\bm{x}_i) + {\alphalink}
(w_{ti})\right) \label{defwunMF},
\end{eqnarray}
where $\beta_t$ is a leveraging coefficient for $h_t$ in the final
classifier, taken to be $\beta_t \leftarrow a \tilde{w}_t \cdot
\eta(\tilde{\ve{w}}_t, h_t)$, where
$a$ is a constant chosen beforehand anywhere in interval $
(\alpha/M^2)\cdot \left[ 1 - \pi, 1 + \pi\right]$, $\pi \in [0,1)$ quantifying the
freedom in choosing $a$. This is sufficient to complete the
description of the algorithm (also given \textit{in
extenso} in \sm, $\S$ \ref{proof_thBoostLC1}).
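The update \eqref{defwunMF} can be sketched as follows. Since the exact $\alpha$-dependent link ${\alphalink}$ is not reproduced here, the sketch substitutes, purely for illustration, the logistic link $\ell(w) = \log(w/(1-w))$, and fixes $\beta_t = 1$ rather than using the edge-based formula; the qualitative behaviour is the same: well-classified examples get smaller weights.

```python
import numpy as np

# Illustrative stand-in for the alpha-dependent link of the M-alpha loss
# (the true link is defined in the paper): logistic link and its inverse.
def link(w):
    return np.log(w / (1.0 - w))

def inv_link(z):
    return 1.0 / (1.0 + np.exp(-z))            # sigmoid, inverse of link

def mirror_update(w, beta, y, h_x):
    """Mirror-style update of Eq. (defwunMF):
    w_i <- link^{-1}(-beta * y_i * h(x_i) + link(w_i))."""
    return inv_link(-beta * y * h_x + link(w))

w = np.full(4, 0.5)                            # initialization w_1 = (1/2)*1
y = np.array([1.0, 1.0, -1.0, -1.0])
h_x = np.array([0.9, -0.2, -0.8, 0.6])         # weak learner's predictions
w_new = mirror_update(w, beta=1.0, y=y, h_x=h_x)
print(w_new)  # correctly classified examples (indices 0 and 2) fall below 1/2
```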
\begin{theorem}\label{thBoostLC1}
Suppose all $h_t$ satisfy the $\upgamma$-WLA. Then $\forall \xi
\in [0,1]$, we have $\emprisk(H_T)\leq
\xi$ as long as:
\begin{eqnarray}
\sum_{t=1}^T
\tilde{w}^2_t & \geq &\frac{2
(1-\xi)}{(1-\pi^2)\upgamma^2
\alpha}.\label{eqCONST111M}
\end{eqnarray}
\end{theorem}
This theorem
has a similar flavour, in terms of boosting conditions, to Theorem
\ref{thBoostDT1} for DTs, but its dependence on $\xi$ is comparatively
misleading. What Theorem \ref{thBoostLC1} really tells us is that
boosting for LC is efficient under the WLA as long as $\tilde{w}_t$ is ``large''
enough in $[0,1]$. The weight update in \eqref{defwunMF} meets the classical
boosting property that an example has its weight directly correlated
to classification: the better, the smaller its weight (\textit{Cf} the plot of ${\alphalink}^{-1}$ in Figure
\ref{f-alphaM}). Hence, as classification gets better, the sum on the
LHS of \eqref{eqCONST111M} increases at smaller rate and if
$\xi$ is too small, this means a potentially larger number of
iterations to meet \eqref{eqCONST111M}.
\section{Privacy and boosting: objective calibration}\label{sec-solv}
We have so far described the complete picture of DP for DT with any noisification mechanism that relies on the sensitivity of a Bayes risk, and the complete but noise-free boosting picture for the M$\alpha$-loss for DT and LC. We now assemble them. In an iteratively boosted combination of DTs, privacy budget can be spent at two locations to make the full classifier meet DP: (a) node splitting in trees, (b) leaf predictions in trees. The protection of the leveraging coefficients $\beta_t$ can be obtained in two ways: either we multiply each leaf prediction by $\beta_t$, then replace $\beta_t \leftarrow 1$ and then carry out (b), or we use the faster but more conservative approach of just doing (b), \textit{e.g.}, with the Laplace mechanism, from which the protection of $\beta_t$ follows ($\S$ \ref{sec-boost}). \textit{We do not carry out pruning}, as boosting alone can be sufficient for good generalization, see \textit{e.g.} \citet[Section 2.1]{sfblBT}, \citet[Theorems 16, 17]{bmRA}, and pruning also requires privacy budget \citep[$\S$ 3.5]{fiDT}. The public information is the attribute domain, which is standard \citep{fiDT}, and we consider that each continuous attribute is regularly quantized using a public number $\nvpriv$ of values. This makes sense for many common attributes like age, percentages, $\$$-value, and can contribute to ease of interpretation; it also has three technical justifications: (1) a private approach requires budget, (2) $\nvpriv$ allows us to tightly control the computational complexity of the whole DT induction, (3) boosting does not require exhaustive split search provided $\nvpriv$ is not too small (more in \sm, $\S$\ref{sub-sec-gen}).
\noindent $\triangleright$ \textbf{Private induction of a DT: objective calibration}. The overall privacy budget $\epsilon$ is split in two proportions: $\betatree$ for node splitting (a) and $\betapred\defeq 1-\betatree$ leaves' predictions (b). The basis of our approach to split nodes is the nice --- but never formally analyzed --- trick of \citet{fsDM} which consists in using the exponential mechanism to choose splits. Let $\mathcal{G}$ denote the whole set of splits. The probability to pick $g \in \mathcal{G}$ to split leaf $\leaf \in \leafset(h_\ell)$ is:
\begin{eqnarray}
\pexpm ((g, \leaf)) \hspace{-0.3cm} & \propto & \hspace{-0.3cm}
\exp\left(-\frac{\epsilon(h_\ell, \leaf) w(\mathcal{S}) F(h_\ell\oplus(g, \leaf))}{2\Delta^*_{\bayesalpharisk}(m)} \right),\label{expSPLIT}
\end{eqnarray}
where notation $h_\ell\oplus(g, \leaf)$ refers to decision tree $h_\ell$ in which leaf $\leaf$ is replaced by split $g$, $w(\mathcal{S}) \cdot F(h_\ell\oplus(g, \leaf))$ is the unnormalized Bayes risk (Section \ref{sec:priv}) and $\Delta^*_{\bayesalpharisk}(m)$ is given in Lemma \ref{sensALPHA}. $\epsilon(h_\ell, \leaf)$ is the fraction of the total privacy budget allocated to the split. So far, all recorded approaches consider uniform budget spending \citep{fiDT} but such a strategy is clearly oblivious to the accuracy vs privacy dilemma as explained in Section \ref{sec:priv}.
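A minimal sketch of split selection via \eqref{expSPLIT} (the scores and parameter values below are hypothetical): candidates with a lower unnormalized post-split Bayes risk are exponentially more likely to be picked.

```python
import numpy as np

rng = np.random.default_rng(0)

def exponential_mechanism(scores, eps_node, sensitivity):
    """Sample a split index with probability proportional to
    exp(-eps * score / (2 * sensitivity)), Eq. (expSPLIT); the score is
    the unnormalized Bayes risk after the split."""
    logits = -eps_node * np.asarray(scores) / (2.0 * sensitivity)
    logits -= logits.max()                   # numerical stabilization
    p = np.exp(logits)
    p /= p.sum()
    return rng.choice(len(scores), p=p)

# Hypothetical unnormalized post-split Bayes risks of four candidate splits.
scores = [12.0, 9.5, 3.0, 10.0]
picks = [exponential_mechanism(scores, eps_node=1.0, sensitivity=1.0)
         for _ in range(5000)]
freq_best = picks.count(2) / len(picks)
print("best split (index 2) chosen with frequency", freq_best)
```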
We now introduce a more sophisticated approach exploiting our result, allowing to bring strong probabilistic guarantees on boosting \textit{while} being private. The intuition behind is simple: the "support" (total unnormalized weight) of a node is monotonic decreasing on any root-to-leaf path. Therefore, we should typically increase the budget spent in low-depth splits because (i) it impacts more examples and (ii) it increases the likelihood of picking the splits that meet the WLA in the exponential mechanism \eqref{expSPLIT}. Consequently, we also should pick $\alpha$ larger for low-depth splits, to increase the early boosting rate and drive as fast as possible the empirical risk to the minimum, yet monitoring the dependency of the exponential mechanism in $\alpha$ to control the probability of picking the splits that meet the WLA. This may look like a quite intricate set of dependences between privacy and boosting, but here is a solution that matches all of them.
If we denote $h_1$ the tree reduced to a leaf from which $h_\ell$ was built, $\depth(.)$ as the depth of a node, $d$ the maximal depth of a tree and $T$ the number of trees in the combination, then we let:
\begin{eqnarray}
\alpha_\ell & \defeq & \frac{\emprisk(h_\ell)}{\emprisk(h_1)} \quad(\in
[0,1]),\label{fixALPHA} \\
\epsilon(h_\ell, \leaf) & = & \frac{\betatree}{Td2^{\depth(\leaf)}}\cdot
\epsilon\label{fixEPSILON}.
\end{eqnarray}
The choice of $\alpha_\ell$ makes it decreasing along every path from the root: while we split the root using Matsushita loss ($\alpha = 1$), which guarantees optimal boosting rate, we gradually move in deeper leaves to using more of the Bayes risk of the 0/1 loss, which may reduce the rate but reduces privacy budget used as well. Referring to objective perturbation which noisifies the loss \citep{cmsDP}, we call our method that \textit{tunes} the loss \textbf{objective calibration} (O.C).
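A quick exact-arithmetic check that the schedule \eqref{fixEPSILON} spends exactly $\betatree\epsilon/T$ per complete depth-$d$ tree (values of $T$, $d$, $\betatree$, $\epsilon$ are hypothetical):

```python
from fractions import Fraction

def node_budget(depth, T, d, beta_tree, eps):
    """Per-split privacy budget of Eq. (fixEPSILON)."""
    return beta_tree * eps / (T * d * 2**depth)

# A complete binary tree of depth d has 2^delta nodes to split at depth delta,
# so each depth level spends beta*eps/(T*d) and a whole tree spends exactly
# beta*eps/T; T trees then fit the global budget beta*eps.
T, d = 4, 3
beta_tree, eps = Fraction(1, 2), Fraction(2)
tree_total = sum(2**delta * node_budget(delta, T, d, beta_tree, eps)
                 for delta in range(d))
print(tree_total, "==", beta_tree * eps / T)
```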
We formally analyze O.C. First, remark that the total budget spent for one tree is $\betatree \epsilon/T$, which fits in the global budget $\varepsilon$. To develop the boosting picture, we build on the $\upgamma$-WLA. We first remark that for any $h$, leaf $\leaf \in \leafset(h)$ and split $g \in \mathcal{G}$, there exists $u\geq 0$ such that
\begin{eqnarray}
\hspace{-0.3cm}\bayesalpharisk(h) - \bayesalpharisk(h
\oplus (g, \leaf)) \hspace{-0.3cm} & = & \hspace{-0.3cm}
u \cdot \frac{\upgamma^2 \alpha \tilde{w}(\lambda) }{16} \cdot \bayesalpharisk(h).\label{condGAP}
\end{eqnarray}
This is a simple consequence of the concavity of any Bayes risk. Interestingly, for \textit{all} splits that satisfy the WLA, it can be shown that we can pick $u\geq 1$ (\sm, $\S$ \ref{proof_thBOOSTDP1}). Let us denote by $\setfat \subseteq \mathcal{G}$ the whole set of such boosting-amenable splits, and let $\setslim \defeq \mathcal{G} \backslash \setfat$ denote the remaining splits. The exponential mechanism might of course pick splits in $\setslim$, but let us assume that there is at least a small ``gap'' between those splits and those of $\setfat$, in such a way that for any split in $\setslim$, \eqref{condGAP} holds only for $u\leq \delta$ for some $\delta < 1$. This property always holds for \textit{some} $\delta<1$, but let us assume that this $\delta$ is a constant, just like the $\upgamma$ of the WLA, and call it the \textit{$\delta$-Gap assumption}. Let $\nodeset(h)$ denote the set of nodes of $h$, including
leaves in $\leafset(h)$. The \textit{tree-efficiency} of $\node \in \nodeset(h)$ in
$h$ is defined as
\begin{eqnarray}
J(\node, h) & \defeq & \frac{8\tilde{w}(\node)
\emprisk(h)^2}{2^{\depth(\node)}}
\quad \in [0,1],
\end{eqnarray}
where $\tilde{w}(\node)$ is the normalized weights of examples reaching $\node$. Let $\mathcal{L}$ be a subset of indexes of the leaves split from $h_1$ to create a depth-$d$ tree, with unnormalized weights $\ve{w}$. Each element $\ell$ refers to a couple $(\leaf_\ell, h_\ell)$ where $h_\ell$ is the tree in which $\leaf_\ell$ was replaced by a split.
\begin{theorem}\label{thBOOSTDP1}
Suppose the exponential mechanism is implemented with $\alpha_\ell$ and $\epsilon_\ell$ as in \eqref{fixALPHA}, \eqref{fixEPSILON}. Suppose $Td \leq \log m$, $m\geq 3$ and
both the $\upgamma$-WLA and $\delta$-Gap assumptions hold. Suppose that $\forall \ell \in \mathcal{L}$,
\begin{eqnarray*}
|{\setfat}_\ell| & \geq & |{\setslim}_\ell| \cdot \exp\left(- \Omega\left( J(\leaf_\ell, h_\ell) \cdot \frac{
\epsilon \sqrt{m}}{\log m}\right)\right).
\end{eqnarray*}
Then, for any $\xi>0$, if
\begin{eqnarray}
\min_{\ell \in \mathcal{L}} J(\leaf_\ell, h_\ell) & = & \Omega \left(\frac{\log m}{
\epsilon \sqrt{m}} \log \frac{|\mathcal{L}|}{\xi}\right),\label{constJJTREE}
\end{eqnarray}
then with probability $\geq 1 - \xi$, \textbf{all} splits chosen by the exponential mechanism to split the leaves indexed in $\mathcal{L}$ satisfy the WLA.
\end{theorem}
The proof (\sm, $\S$ \ref{proof_thBOOSTDP1}) makes all hidden constants explicit. We insist on the message that Theorem \ref{thBOOSTDP1} carries about the exponential mechanism: under the WLA/Gap assumptions and a size constraint on each ${\setfat}_\ell$ (which, incidentally, authorises it to be reasonably smaller than ${\setslim}_\ell$), the exponential mechanism has essentially \textit{no negative impact} on boosting, with high probability. This, we believe, is a very strong incentive in favor of the exponential mechanism as designed in \citet{fsDM}. Finally, the condition $Td \leq \log m$ could be relaxed to a low-degree polylog, but even without doing so it fits well within a series of experimental works \citep{fiDT}, for example \citet{mbacSA} ($Td = 4$), \citet{fsDM} ($Td = 5$), \citet{fiAD} ($Td = 20$).
\begin{remark} Theorem \ref{thBOOSTDP1} reveals another reason why we should indeed put emphasis on boosting on low-depth nodes: for any node of $h$, if its tree efficiency is above a threshold, then so is the case for all nodes along a shortest path from this node to the root of $h$. Hence the largest set $\mathcal{L}$ for which \eqref{constJJTREE} holds corresponds to a \textit{subtree} of $h$ with the same root.
\end{remark}
To get a simple idea of how \eqref{constJJTREE} vanishes with $m$, remark that $|\mathcal{L}| \leq 2^{d+1}-1$. Condition
\eqref{constJJTREE} is therefore satisfied if, for example, $\epsilon, \xi, d$ are related to $m$ as
\begin{eqnarray*}
\frac{\log (m)}{\sqrt{m}} & = & o(\epsilon),\\
d, \log \nicefrac{1}{\xi} & = & o\left(\frac{\sqrt{m}}{\log m}\right),
\end{eqnarray*}
and in this case the constraint on $\min_{\ell \in \mathcal{L}} J(\leaf_\ell, h_\ell)$ in
\eqref{constJJTREE} vanishes with $m$. Strong privacy regimes can therefore fit Theorem \ref{thBOOSTDP1}, \textit{e.g.} with $\epsilon = \log^{1+c}(m)/\sqrt{m}$ for a constant $c>0$.
\noindent $\triangleright$ \textbf{Private predictions at the leaves}: because our trees output real values, we use the Laplace mechanism. This fits well with the WLA using $|h_t|\leq M$ (Definition \ref{wlaKMLC}), which provides the sensitivity of the mechanism \citep{drTA}.
\section{Conclusion}\label{sec-conc}
While boosted ensembles of DTs have long shown their accuracy in international competitions, to our knowledge nothing is known about how to fit them in a differentially private framework while keeping some of the boosting guarantees, a setting in which random forests have been reigning supreme. In this paper, we first establish the existence of a nontrivial tradeoff when pushing boosting methods into a differentially private framework. To address this tradeoff, we create a tunable proper canonical loss whose boosting rate and sensitivity can be controlled, from optimal boosting rate to minimal sensitivity. We then show guaranteed boosting rates for both the induction of DTs and of ensembles using this loss, results of independent interest. We introduce objective calibration as a way to dynamically tune this loss and make the most of boosting under a given privacy budget, with high probability. Experiments reveal that our approach significantly beats random forests, that the best private models tend to be learned by objective calibration, and that our technique performs all the better in high privacy regimes.
\section*{Appendix}
\section{Table of contents}\label{tabcon}
\noindent \textbf{Appendix on proofs} \hrulefill Pg
\pageref{proof_proofs}\\
\noindent Proof of Theorem \ref{th3P}\hrulefill Pg
\pageref{proof_th3P}\\
\noindent Proof of Lemma \ref{lemcurv}\hrulefill Pg
\pageref{proof_lemcurv}\\
\noindent Proof of Lemma \ref{lemPhiProof}\hrulefill Pg
\pageref{proof_lemPhiProof}\\
\noindent Proof of Theorem \ref{theoremalpha}\hrulefill Pg
\pageref{proof_theoremalpha}\\
\noindent Proof of Theorem \ref{thBoostDT1}\hrulefill Pg
\pageref{proof_thBoostDT1}\\
\noindent Proof of Theorem \ref{thBoostLC1}\hrulefill Pg
\pageref{proof_thBoostLC1}\\
\noindent Proof of Theorem \ref{thBOOSTDP1}\hrulefill Pg
\pageref{proof_thBOOSTDP1}\\
\noindent \textbf{Appendix on experiments} \hrulefill Pg
\pageref{exp_expes}\\
\noindent Implementation\hrulefill Pg \pageref{sub-sec-sum-imp}\\
\noindent General setting\hrulefill Pg \pageref{sub-sec-gen}\\
\noindent Domain summary Table\hrulefill Pg \pageref{sub-sec-sum}\\
\noindent UCI \texttt{transfusion}\hrulefill Pg \pageref{sub-sec-transfusion}\\
\noindent UCI \texttt{banknote}\hrulefill Pg \pageref{sub-sec-banknote}\\
\noindent UCI \texttt{breastwisc}\hrulefill Pg \pageref{sub-sec-breastwisc}\\
\noindent UCI \texttt{ionosphere}\hrulefill Pg \pageref{sub-sec-ionosphere}\\
\noindent UCI \texttt{sonar}\hrulefill Pg \pageref{sub-sec-sonar}\\
\noindent UCI \texttt{yeast}\hrulefill Pg \pageref{sub-sec-yeast}\\
\noindent UCI \texttt{winered}\hrulefill Pg \pageref{sub-sec-winered}\\
\noindent UCI \texttt{cardiotocography}\hrulefill Pg \pageref{sub-sec-cardiotocography}\\
\noindent UCI \texttt{creditcardsmall}\hrulefill Pg \pageref{sub-sec-creditcardsmall}\\
\noindent UCI \texttt{abalone}\hrulefill Pg \pageref{sub-sec-abalone}\\
\noindent UCI \texttt{qsar}\hrulefill Pg \pageref{sub-sec-qsar}\\
\noindent UCI \texttt{page}\hrulefill Pg \pageref{sub-sec-page}\\
\noindent UCI \texttt{mice}\hrulefill Pg \pageref{sub-sec-mice}\\
\noindent UCI \texttt{hill+noise}\hrulefill Pg \pageref{sub-sec-hillnoise}\\
\noindent UCI \texttt{hill+nonoise}\hrulefill Pg \pageref{sub-sec-hillnonoise}\\
\noindent UCI \texttt{firmteacher}\hrulefill Pg \pageref{sub-sec-firmteacher}\\
\noindent UCI \texttt{magic}\hrulefill Pg \pageref{sub-sec-magic}\\
\noindent UCI \texttt{eeg}\hrulefill Pg \pageref{sub-sec-eeg}\\
\noindent Summary in $d, T$ for the best DP results in \alphaboost\hrulefill Pg \pageref{sub-sec-sumdT}\\
\noindent Summary of the comparison \alphaboost~vs RFs with DP\hrulefill Pg \pageref{sub-sec-AlphavsRF}\\
\noindent Summary comparison $\nvpriv=10$ vs $\nvpriv=50$ ($M = 10$)\hrulefill Pg \pageref{sub-sec-1050}
\newpage
\input{content-arxiv/appendix}
\clearpage
\newpage
\input{content-arxiv/appendix-suppexp}
\section{Definitions}\label{sec:def}
\noindent $\triangleright$ \textbf{Batch learning}: our notations mostly follow \citet{nwLO}. We use the
shorthand notations $[n]
\defeq \{1, 2, ..., n\}$ for $n \in \mathbb{N}_*$ and $z' + z \cdot [a, b]
\defeq [z' + za, z' + zb]$ for $z \geq 0, z'\in \mathbb{R}, a \leq b \in \mathbb{R}$. We also let
$\overline{\mathbb{R}} \defeq [-\infty, \infty]$.
In the batch supervised
learning setting, one is given a training set of $m$ examples
${\mathcal{S}} \defeq \{({\ve{x}}_i, y_i), i \in [m]\}$, where ${\ve{x}}_i
\in {\mathcal{X}}$ is an observation
(${\mathcal{X}}$ is called the domain: often, ${\mathcal{X}}\subseteq {\mathbb{R}}^n$) and $y_i
\in \mathcal{Y} \defeq \{-1,1\}$ is a label, or class. The objective
is to learn a \textit{classifier}, \textit{i.e.} a function $h
: \mathcal{X} \rightarrow \mathbb{R}$ which belongs to a given
set $\mathcal{H}$. The first class of models we consider are decision trees (DTs).
A (binary) DT $h$ recursively partitions the domain. There are two types of nodes: internal nodes are indexed by a binary test and leaves are indexed by a real number. The depth of a node (resp. a tree) is the minimal path length from the root to the node (resp. the maximal node depth); thus, the root has depth zero. The classification of some $\ve{x} \in \mathcal{X}$ is obtained by taking the sign of the real number indexing the leaf reached by $\ve{x}$ after traversing the tree from the root, following the path of the tests it satisfies. The other class of models we consider is linear combinations of base classifiers, now hugely popular when base classifiers are DTs, after the advent of bagging \citep{bBP} and boosting \citep{fhtAL}.
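A minimal sketch of this classification rule (ours, with hypothetical class names, not the paper's code): an internal node holds a binary test, a leaf holds a real number, and classification returns the sign at the reached leaf.

```python
class Leaf:
    def __init__(self, value):
        self.value = value          # real number indexing the leaf

class Node:
    def __init__(self, test, left, right):
        self.test = test            # binary test, a callable x -> bool
        self.left = left            # subtree followed when the test fails
        self.right = right          # subtree followed when the test succeeds

def classify(tree, x):
    """Follow the path of satisfied tests; return the sign at the leaf."""
    node = tree
    while isinstance(node, Node):
        node = node.right if node.test(x) else node.left
    return 1 if node.value >= 0 else -1

# A depth-1 tree (a stump): one internal node (depth 0) and two leaves.
stump = Node(lambda x: x[0] > 2.5, Leaf(-0.8), Leaf(1.3))
```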
\noindent $\triangleright$ \textbf{Losses}: the goodness of fit of some $h$ on $\mathcal{S}$ is
evaluated by a given \textit{loss}. There are two dual views of losses to train domain-partitioning classifiers (like DTs) and linear combinations of base classifiers \citep{nnBD}. Both views start from the definition of a \textit{loss for class probability estimation}, $\losscpe : \mathcal{Y} \times [0,1]
\rightarrow \mathbb{R}$,
\begin{eqnarray}
\losscpe(y,u) & \defeq & \iver{y=1}\cdot \partialloss{1}(u) +
\iver{y=-1}\cdot \partialloss{-1}(u), \label{eqpartialloss}
\end{eqnarray}
where $\iver{.}$ is Iverson's bracket. Functions $\partialloss{1}, \partialloss{-1}$ are called \textit{partial} losses; we refer to \citet{rwCB} for the additional background on partial losses. We consider
\textit{symmetric} losses for which $\partialloss{1}(u) = \partialloss{-1}(1-u),
\forall u \in [0,1]$ \citep{nnOT} (in particular, this assumes that there is no class-dependent misclassification loss). For example, the square loss has $\partialsqloss{1}(u) \defeq (1/2)\cdot (1-u)^2$
and $\partialsqloss{-1}(u) \defeq (1/2)\cdot u^2$. The
log loss has $\partiallogloss{1}(u) \defeq -\log u$
and $\partiallogloss{-1}(u) \defeq -\log(1-u)$. The 0/1 loss has $\partialZOloss{-1}(u) \defeq \iver{u \geq 1/2}$ and $\partialZOloss{1}(u) \defeq \iver{u \leq 1/2}$. All these losses are symmetric. The associated (pointwise) \textit{Bayes} risk is
\begin{eqnarray}
\bayesrisk(\pi) & \defeq & \inf_u \expect_{\Y \sim \mathrm{B}(\pi)} [\losscpe(\Y, u)], \label{pbr}
\end{eqnarray}
where $\mathrm{B}(\pi)$ denotes a Bernoulli distribution with $p[\Y = 1] = \pi$. Most DT induction algorithms follow the greedy minimisation of a loss which is in fact a Bayes risk \citep{kmOT}. For example, up to a multiplicative constant that plays no role in its minimisation, the square loss gives Gini criterion, $\bayessqrisk(\pi) = (1/2) \cdot \pi (1-\pi)$ \citep{bfosCA}; the log loss gives the information gain, $\bayeslogrisk(\pi) = -\pi\log(\pi)-
(1-\pi)\log(1-\pi)$ \citep{qC4}, and the 0/1 loss gives the empirical risk, $\bayesZOrisk(\pi) = \min\{\pi, 1-\pi\}$. Following \citet{kmOT}, we assume wlog that all Bayes risks are normalized so that $\bayesrisk(1/2)=1$, which is the maximum for any symmetric proper loss \citep{nnOT}, and $\bayesrisk(0) = \bayesrisk(1) = 0$ (the loss is \textit{fair}, \citet{rwCB}). Any Bayes risk is concave \citep{rwCB}. So, if $h$ is a DT, the loss minimized to greedily learn $h$, $F(h;\mathcal{S})$, can be defined in general as:
\begin{eqnarray}
F(h;\mathcal{S}) & \defeq & \expect_{\mathcal{S}}[\bayesrisk(q(\ell(\ve{x}_i)))],\label{defLOSSGENPROB}
\end{eqnarray}
where $\ell(.)$ is the leaf reached by $\ve{x}_i$ in $h$\footnote{Not to be confused with the general notation of a loss for class probability estimation, $\losscpe$.} and $q(\ell) \defeq \hat{p}[\Y = 1 | \ell]$ is the relative proportion of class $1$ among the examples reaching $\ell$. To ensure that a real-valued classification is output at each leaf of $h$, the predicted value for leaf $\leaf$ is
\begin{eqnarray}
h(\ell) & \defeq & -\bayesrisk'(q(\ell)) \in \mathbb{R}.\label{predDT}
\end{eqnarray}
Function $-\bayesrisk'$ is called the \textit{canonical link} of the loss \citep{bssLF,nwLO,rwCB}. If the loss is non differentiable, the canonical link is obtained from any selection of its subdifferential.
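To make these two ingredients concrete, the following sketch (ours, not the paper's code) implements the Bayes risks named above, normalized so that $\bayesrisk(1/2)=1$ as the text assumes, together with their canonical links $-\bayesrisk'$ giving the real-valued leaf predictions:

```python
import math

# Sketch (not the paper's code): Bayes risks normalized so that
# psi(1/2) = 1 and psi(0) = psi(1) = 0, and their canonical links
# -psi'(q), which give the real-valued leaf predictions.

def gini(p):                 # square loss -> Gini criterion, normalized
    return 4.0 * p * (1.0 - p)

def link_gini(q):            # -psi'(q) = 4(2q - 1)
    return 4.0 * (2.0 * q - 1.0)

def entropy(p):              # log loss -> information gain (base-2 logs)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def link_entropy(q):         # -psi'(q) = log2(q / (1 - q)), the logit
    return math.log2(q / (1.0 - q))

def empirical(p):            # 0/1 loss -> empirical risk, normalized
    return 2.0 * min(p, 1.0 - p)

def matsushita(p):           # Matsushita loss, used later in the paper
    return 2.0 * math.sqrt(p * (1.0 - p))

def link_matsushita(q):      # -psi'(q) = (2q - 1) / sqrt(q(1 - q))
    return (2.0 * q - 1.0) / math.sqrt(q * (1.0 - q))

# All risks are concave, maximal at 1/2 and fair at the endpoints; all
# links vanish at q = 1/2 and share the sign of 2q - 1.
for psi in (gini, entropy, empirical, matsushita):
    assert abs(psi(0.5) - 1.0) < 1e-12 and psi(0.0) == 0.0 == psi(1.0)
for link in (link_gini, link_entropy, link_matsushita):
    assert link(0.5) == 0.0 and link(0.8) > 0.0 > link(0.2)
```

Note that a leaf with majority class $1$ ($q > 1/2$) gets a positive prediction under every link, so the sign of the prediction recovers the majority class.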
If $H$ is a linear combination of base classifiers, we adopt the convex dual formulation of the (negative) Bayes risk which, by a property of Bayes risks, admits a domain that can be the full $\mathbb{R}$ \citep{bvCO}. In this case, we replace \eqref{defLOSSGENPROB} by the following loss:
\begin{eqnarray}
F(h;\mathcal{S}) & \defeq & \expect_{\mathcal{S}}[(-\bayesrisk)^\star(-y_i h(\ve{x}_i))],\label{defLOSSGENR}
\end{eqnarray}
where $\star$ denotes the Legendre conjugate
of $F$, $F^\star(z)\defeq \sup_{z' \in \mathrm{dom}(F)}\{zz' - F(z')\}$
\citep{bvCO}. Losses like \eqref{defLOSSGENR} are sometimes called balanced convex losses \citep{nnOT} and belong to a broad class of losses also known as margin losses \citep[Section 2.3]{mvAV}. The most popular losses are particular cases of \eqref{defLOSSGENR}, like the square or logistic losses \citep{mvAV}. It can be shown that if a DT has its outputs mapped to $\mathbb{R}$ following the canonical link \eqref{predDT}, then minimizing \eqref{defLOSSGENR} to learn the DT is equivalent to minimizing \eqref{defLOSSGENPROB}, which therefore makes both views equivalent \citep[Theorem 3]{nnBD}. Finally, the \textit{empirical risk} of $H$, $\emprisk(H)$, is \eqref{defLOSSGENR} in which the term inside the brackets is replaced by the predicate $\iver{y_i h(\ve{x}_i) < 0}$.
\noindent $\triangleright$ \textbf{Differential privacy} (DP) essentially relies on randomized mechanisms to guarantee that \textit{neighbor} inputs to an algorithm $\mathcal{M}$ do not change its distribution of outputs too much \citep{dmnsCN}. In our context, $\mathcal{M}$ is a learning algorithm whose input is a training sample (omitting additional inputs for simplicity); two training samples ${\mathcal{S}}$ and ${\mathcal{S}}'$ are neighbors, noted ${\mathcal{S}}\approx {\mathcal{S}}'$, iff they differ by at most one example. The output of $\mathcal{M}$ is a classifier $h$.
\begin{definition}\label{defEDPRIV} Fix $\epsilon\geq 0$. $\mathcal{M}$ gives $\epsilon$-DP if $p[\mathcal{M} ({\mathcal{S}}) = h] \leq \exp(\epsilon) \cdot p[\mathcal{M} ({\mathcal{S}}') = h], \forall {\mathcal{S}}\approx {\mathcal{S}}', \forall h$,
where the probabilities are taken over the coin flips of $\mathcal{M}$.
\end{definition}
The smaller $\epsilon$, the more private the algorithm. Privacy comes at a price, which is in general the noisification of $\mathcal{M}$. A fundamental quantity allowing to finely calibrate noise to the privacy parameters is the \textit{sensitivity} of a function $f(.)$ defined on the same inputs as $\mathcal{M}$, which is just the maximal possible difference of $f$ between two neighbor inputs. Assuming $\mathrm{Im} f\subseteq {\mathbb{R}}^n$, the global sensitivity of $f(.)$ is $\Delta_f \defeq \max_{{\mathcal{S}}\approx {\mathcal{S}}'} \|f({\mathcal{S}}) - f({\mathcal{S}}')\|_1$ \citep{dmnsCN}.
DP offers two standard tools to devise general mechanisms with $\epsilon$-DP guarantees, one to protect real values and the other to protect a choice in a fixed set \citep{drTA,mtMD}. The former, the Laplace mechanism, adds $\mathrm{Lap}(b)$ noise to a real-valued input, where $b\defeq \Delta_f /\epsilon$ is the scale parameter.
The latter is the exponential mechanism: let ${\mathcal{G}}$ denote a set of alternatives and $f: {\mathcal{G}} \rightarrow {\mathbb{R}}$ a function that scores each of them (the higher, the better), whose values depend of course on $\mathcal{S}$. The exponential mechanism outputs $g\in {\mathcal{G}}$ with probability $\propto \exp(\epsilon f(g)/(2\Delta_f))$, thus tending to favor the highest scores.
Finally, the \textit{composition theorem}, particularly useful when training $h$ is iterative like for DTs, states that the sequential application of $\epsilon_i$-DP mechanisms ($i=1, 2, ...$), provides $\left( \sum_i \epsilon_i \right)$-DP \citep{dmnsCN}.
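A minimal sketch (ours, not the paper's code) of these two mechanisms and of how they would be invoked:

```python
import math
import random

# Sketch of the two standard DP tools just described: the Laplace
# mechanism for real values and the exponential mechanism for a choice
# among a fixed set of alternatives.

def laplace_mechanism(value, sensitivity, epsilon, rng=random):
    """Release value + Lap(b) noise, with scale b = sensitivity / epsilon.
    Lap(b) is sampled as the difference of two Exp(b) variates."""
    b = sensitivity / epsilon
    e1 = -math.log(1.0 - rng.random())
    e2 = -math.log(1.0 - rng.random())
    return value + b * (e1 - e2)

def exponential_mechanism(scores, sensitivity, epsilon, rng=random):
    """Return index i with probability proportional to
    exp(epsilon * scores[i] / (2 * sensitivity)): higher scores win."""
    weights = [math.exp(epsilon * s / (2.0 * sensitivity)) for s in scores]
    r, acc = rng.random() * sum(weights), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(scores) - 1

# Composition: running k such mechanisms with budgets eps_1, ..., eps_k
# on the same data gives (eps_1 + ... + eps_k)-DP overall.
```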
\section{Introduction}\label{sec:int}
The past decade has seen considerable growth of the subfield of machine learning (ML) tackling the augmentation of the classical models with additional constraints that are now paramount in applications \citep{adwFR,kmmsDP,agltvQC,dljfTD,jkczthakQA,jkmprsuDP}.
One challenge posed by such constraints is the potentially risky design process for new approaches: it may not be hard to modify the state of the art to accommodate the new constraint(s), but if not handled carefully,
the modification may come at a hefty price in accuracy. Differential privacy (DP) is a very good example of a now-popular constraint, which essentially proceeds by randomizing parts of the whole process to reduce the output's sensitivity to local changes in the input \citep{drTA}. DP possesses a toolbox of randomisation mechanisms that allow for simple modifications of ML algorithms to make them private.
However, a careful optimization of the utility (accuracy) under the DP constraints typically requires rethinking the training process, as exemplified by the output perturbation mechanism to train kernel machines in \citet{cmsDP}.\\
There is to date no comparable achievement for decision tree (DT) induction, a crucial problem to address: decision trees have been popular in machine learning for decades \citep{bfosCA,qC4}; they are widely used, in particular for tabular data, and recognised for their accuracy, interpretability, and efficiency; they are present in virtually every Kaggle competition \citep{ahPR}, with extremely popular implementations like \citet{cgXA,kmfwcmylLA}. On the DP side, there is to our knowledge no extension of boosting properties to DP. We attribute the fact that random forests (RFs) currently ``reign supreme'' in DP \citep[Section 6]{fiDT} more to the lack of formal results for boosting than to any negative result.
\noindent\textbf{Our first contribution} exhibits a tradeoff to address in order to solve this problem. On the accuracy side, it has been known for a long time that the curvature of the Bayes risk used conditions the convergence rate in the boosting model \citep{kmOT,nnOD}. In this paper, we first investigate the privacy side and show that the \textit{sensitivity} of the splitting criterion has the \textit{same} dependence on the curvature: in a few words, a faster rate goes along with more noise in the choice of the split. Since the total privacy budget spent grows with the size of the tree, there is therefore a nontrivial tradeoff to solve between rate and noise injection to get sufficient accuracy under DP budget constraints.
\noindent\textbf{Our second contribution} brings a nail to hammer for this tradeoff: a new proper loss, properness being the minimal requirement that Bayes rule achieves the optimum of the loss. This loss, which we call the M$\alpha$-loss, admits a parameter $\alpha \in [0,1]$ which finely tunes the boosting convergence vs privacy budget tradeoff. As $\alpha \rightarrow 1$, the boosting rate converges to the optimal rate, while as $\alpha \rightarrow 0$, the sensitivity converges to the minimum. In addition, we provide the full picture of boosting rates for the M$\alpha$-loss, of independent interest since it generalizes the results of \citet{kmOT}.
\noindent\textbf{Our third contribution} brings a possible hammer for this nail. We show how to \textit{tune} the loss during induction to limit the privacy budget spent while keeping the \textit{same} boosting rates as in the noise-free case for a subtree of the tree with the same root, with a guaranteed probability. As the training sample increases in size, all else being equal, this probability converges to 1 and the subtree converges to the full boosted tree. This technique, which we nickname \textbf{objective calibration}, picks at the beginning of the induction a splitting criterion with optimal boosting convergence, thus paying significant privacy budget, and then reduces the budget spent as we split deeper nodes, thus also reducing convergence. Ultimately, the budget converges to the smallest splitting budget as the tree converges to consistency on training.
\noindent\textbf{Our fourth contribution} provides extensive experiments on 19 UCI domains \citep{dgUM}. A thorough comparison of our approach with two SOTA RFs reveals that our approach tends to very significantly beat RFs, even with ensembles more than ten times smaller. Our results display the benefits of combining boosting with DP, as well as the fact that objective calibration happens to be competitive also in the noise-free case.
\noindent The rest of this paper follows the order of contributions: after some definitions in $\S$ \ref{sec:def}, the tradeoff between privacy and accuracy is developed in $\S$ \ref{sec:priv}, the M$\alpha$-loss is presented in $\S$ \ref{sec:matalpha}, results on boosting with the M$\alpha$-loss are given in $\S$ \ref{sec-boost}, objective calibration is presented in $\S$ \ref{sec-solv}, experiments are summarized in $\S$ \ref{sec-exp} and a last section, $\S$ \ref{sec-conc}, concludes the paper. In order not to overload the main body, all proofs and considerably more detailed experiments have been pushed to an appendix (\sm), available from p. \pageref{tabcon} (proofs) and p. \pageref{exp_expes} (experiments).
\section{The M$\alpha$-loss}\label{sec:matalpha}
\begin{figure*}[t]
\begin{tabular}{cc}
\resizebox{.70\linewidth}{!}{
\hspace{-1.8cm} \begin{minipage}{\linewidth}
\begin{eqnarray*}
\alphalink(u) \hspace{-0.3cm} & \in & \hspace{-0.3cm} \alpha \cdot \frac{2u - 1}{\sqrt{u(1-u)}} -2(1-\alpha)\cdot \left\{
\begin{array}{rcl}
1 & \mbox{ if } & u < 1/2\\
\big[ -1,1 \big] & \mbox{ if } & u = 1/2\\
-1 & \mbox{ if } & u > 1/2
\end{array}
\right. ,\nonumber\\
{\alphalink}^{-1}(z) \hspace{-0.3cm} & = & \hspace{-0.3cm} \frac{1}{2} \cdot \left( 1 + \iver{z \not\in 2(1-\alpha)\cdot
[-1,1]}\cdot \frac{\frac{z}{2} - \mathrm{sign}(z) \cdot (1-\alpha)}{\sqrt{\alpha^2+\left(\frac{|z|}{2} - (1-\alpha)\right)^2} }\right) ,\\
\alphasur(z) \hspace{-0.3cm} & = & \hspace{-0.3cm} 1 - \frac{z}{2} + \iver{z \not\in 2(1-\alpha)\cdot [-1,1]}\cdot \left(\sqrt{\alpha^2+\left(\frac{|z|}{2} - (1-\alpha)\right)^2} - \alpha\right).
\end{eqnarray*}
\end{minipage}
} & \hspace{-2cm} \begin{minipage}{\linewidth}
\begin{tabular}{cc}
\includegraphics[trim=10bp 10bp 30bp
10bp,clip,width=0.18\linewidth]{Figs/plot_Bayes_risk} & \hspace{-0.5cm}\includegraphics[trim=10bp 10bp 30bp
10bp,clip,width=0.18\linewidth]{Figs/plot_Budget}\\
\includegraphics[trim=10bp 10bp 30bp
10bp,clip,width=0.18\linewidth]{Figs/plot_inverse_canonical_link} & \hspace{-0.5cm}
\includegraphics[trim=10bp 10bp 30bp
10bp,clip,width=0.18\linewidth]{Figs/plot_surrogate}
\end{tabular}
\end{minipage}
\end{tabular}
\caption{\textit{Left}: canonical link $\alphalink$, inverse canonical link
${\alphalink}^{-1}$ and convex surrogate $\alphasur$ for the
M$\alpha$-loss. \textit{Right}: plots of Bayes risk
$\bayesalpharisk(q)$, sensitivity $\Delta^{(\alpha)}\defeq
\Delta_{\bayesalpharisk}(m)$, ${\alphalink}^{-1}(z)$ and
$\alphasur(u)$ for the M$\alpha$-loss,
for various $\alpha$s (colors).}
\label{f-alphaM}
\end{figure*}
In the boosting vs DP picture, there are two extremal losses. The 0/1
loss is the one that necessitates the smallest DP budget (Lemma
\ref{lemPhiProof}) but achieves the poorest convergence guarantee
\citep[Section 5.1]{kmOT}. On the other side of the spectrum,
Matsushita loss guarantees the optimal convergence rate
\citep{kmOT,nnOD} but necessitates a considerable DP budget (Lemmata
\ref{lemcurv}, \ref{lemPhiProof}). We address
the challenge of tuning the convergence rate vs DP budget by creating a
new proper symmetric loss, allowing to stand anywhere in between these
extremes via a simple tunable parameter $\alpha$.
\begin{definition}\label{defMatsualpha}
The M$\alpha$-loss is defined for any $\alpha \in [0,1]$ by the
following partial losses, for $y \in \{-1,1\}$:
\begin{eqnarray*}
\partialalphaloss{y}(u) & \defeq & 2\alpha \cdot \left(\frac{1-u}{u}\right)^{\nicefrac{y}{2}} + 2(1-\alpha)\cdot \iver{yu\leq y/2}.
\end{eqnarray*}
\end{definition}
It is easy to check that the M$\alpha$-loss is
proper (strictly so if $\alpha > 0$) and symmetric, and that its Bayes risk is
a convex combination of those of the 0/1 and
Matsushita losses:
\begin{eqnarray*}
\bayesalpharisk(u) & = & \alpha \cdot \bayesmatrisk(u) + (1-\alpha)
\cdot \bayesZOrisk(u).
\end{eqnarray*}
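As a quick numerical sanity check (our sketch, not the paper's code), the partial losses of Definition \ref{defMatsualpha} can be implemented directly and the symmetry $\partialalphaloss{1}(u) = \partialalphaloss{-1}(1-u)$ verified on a grid:

```python
# Sketch: partial losses of the M-alpha loss (Definition above), for
# y in {-1, 1}, u in (0, 1), alpha in [0, 1].

def malpha_partial(y, u, alpha):
    iverson = 1.0 if y * u <= y / 2.0 else 0.0  # [[ y*u <= y/2 ]]
    return (2.0 * alpha * ((1.0 - u) / u) ** (y / 2.0)
            + 2.0 * (1.0 - alpha) * iverson)

# Symmetry check: loss_1(u) = loss_{-1}(1 - u) for all u.
for alpha in (0.0, 0.3, 1.0):
    for u in (0.1, 0.25, 0.49, 0.7, 0.9):
        assert abs(malpha_partial(1, u, alpha)
                   - malpha_partial(-1, 1.0 - u, alpha)) < 1e-9

# alpha = 0 recovers the (scaled) 0/1 partial losses; alpha = 1
# recovers Matsushita's.
```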
It is also not hard to show, using Lemma \ref{lemPhiProof}, that the sensitivity interpolates between both losses' sensitivities.
\begin{corollary}\label{sensALPHA}
The sensitivity \eqref{gensen} of the M$\alpha$-loss satisfies
$\Delta_{\bayesalpharisk} \leq \Delta^*_{\bayesalpharisk}(m) \defeq
3+2\alpha(\sqrt{m} - 1)$.
\end{corollary}
Because the 0/1 loss is not differentiable, getting the inverse canonical link and the
convex surrogate is trickier.
\begin{theorem}\label{theoremalpha}
The canonical link $\alphalink$, inverse canonical link
${\alphalink}^{-1}$ and surrogate $\alphasur$ of the
M$\alpha$-loss are as in Fig. \ref{f-alphaM}.
\end{theorem}
\section{The Privacy vs Boosting Dilemma for DT}\label{sec:priv}
Let $\leafset (h)$ denote the set of leaves of tree $h$. Let $\ve{w} \in (0,1]^m$ denote a set of non-normalized weights over the training sample $\mathcal{S}$. Because $h$ produces a partition of $\mathcal{S}$, we rewrite the loss \eqref{defLOSSGENPROB} as $w(\mathcal{S}) \cdot F(h;\mathcal{S}) = \sum_{\leaf \in \leafset (h)} f_{\bayesrisk}(h, \leaf, \mathcal{S})$ with\footnote{We multiply both sides by $w(\mathcal{S})$ to follow \citet{fsDM}; $w(\mathcal{S})$ is indeed constant when growing a tree and does not influence the exponential mechanism.}
\begin{eqnarray}
f_{\bayesrisk}(h, \leaf, \mathcal{S}) & \defeq & w(\leaf) \cdot \bayesrisk\left( \frac{w^1(\leaf)}{w(\leaf)} \right),\label{defMECH}
\end{eqnarray}
and $w(\mathcal{S}) \defeq 1^\top \ve{w}, w^1(\leaf) \defeq \sum_i \iver{(i \in \leaf) \wedge (y_i = 1)}\cdot w_i, w(\leaf) \defeq \sum_i \iver{i \in \leaf}\cdot w_i$ and $i \in \leaf$ is the predicate "observation $\ve{x}_i$ reaches leaf $\leaf$ in $h$". Following \citet{fsDM}, we want to compute the sensitivity of $f$,
\begin{eqnarray}
\Delta_{\bayesrisk}(h, \lambda) & \defeq & \sup_{{\mathcal{S}}' \approx {\mathcal{S}}} |f_{\bayesrisk}(h, \leaf, \mathcal{S}')-f_{\bayesrisk}(h, \leaf, \mathcal{S})|\label{gensen}
\end{eqnarray}
(we sometimes simply write $\Delta_{\bayesrisk}$ for readability). To compute it, we need a notion from convex analysis, the \textit{perspective}.
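To fix ideas, the per-leaf quantity \eqref{defMECH} is straightforward to compute; the sketch below (hypothetical helper names, not the paper's code) does so from the unnormalized weights and labels of the examples reaching a leaf:

```python
import math

# Sketch: the per-leaf quantity f_psi of the displayed definition,
# computed from the unnormalized weights w_i and labels y_i of the
# examples reaching the leaf.

def f_psi(psi, weights, labels):
    """w(leaf) * psi(w1(leaf) / w(leaf)), with w1 the weight of class 1."""
    w_leaf = sum(weights)
    w1_leaf = sum(w for w, y in zip(weights, labels) if y == 1)
    return w_leaf * psi(w1_leaf / w_leaf)

# Example with Matsushita's Bayes risk: a pure leaf scores 0 (nothing
# to gain by splitting it), a perfectly mixed leaf scores w(leaf).
matsushita = lambda p: 2.0 * math.sqrt(p * (1.0 - p))
```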
\begin{definition}\citep{mOA1,mOA2}\label{defpt}
Given closed convex function $f$, the \textbf{perspective} of $f$, noted
$\check{f}(x,y)$ is:
\begin{eqnarray}
\check{f}(x,y) & \defeq & y\cdot f (x/y)\:\:, \mbox{
if } y>0\:\:,\label{defPF}
\end{eqnarray}
and otherwise $\check{f}(x,y) \defeq f0^+(x)$ if $y=0$ and
$\check{f}(x,y) \defeq +\infty$ if $y<0$. Here, $f0^+$ is the
recession function of $f$.
\end{definition}
To lighten notations, we extend this notion to Bayes risks, which are concave, and therefore write for short $\check\bayesrisk \defeq -\left(\reallywidecheck{-\bayesrisk}\right)$.
\begin{theorem} \label{th3P}
$\Delta_{\bayesrisk}(h, \lambda) \leq \max\{3, 1 + \check{\bayesrisk}(1, m+1)\}$.
\end{theorem}
Theorem \ref{th3P} generalizes the SOTA in two ways: first, because only up to 4 Bayes risks were covered so far \citep{fsDM}; second, because classical analyses assume $\ve{w}$ uniform (which precludes boosting). We now show that the variation of a perspective transform of a Bayes risk is linked to its \textit{weight} (or curvature, \citet{rwCB}).
\begin{lemma}\label{lemcurv}
For any twice differentiable $\bayesrisk$, for any $m$, there exists $a\in [0,1/(m+1)]$ such that
\begin{eqnarray}
(\check{\bayesrisk}')(1, m+1) & = & (m+1)^{-2}\cdot(-\bayesrisk'')(a)\:\:.\label{eqCURV1}
\end{eqnarray}
\end{lemma}
The proof of Theorem \ref{th3P} includes the proof that the bound is in fact almost tight, since some neighboring samples admit $\Delta_{\bayesrisk}(h, \lambda) = \check{\bayesrisk}(1, m)$, so the variation in DP budget with $m$ is directly linked to \eqref{eqCURV1}. In other words, the larger the weight ($-\bayesrisk''$, \citet{rwCB}), the more expensive DP becomes with $m$ when relying on $\Delta_{\bayesrisk}$ as sensitivity measure --- such as in the exponential mechanism of \citet{fsDM}. It turns out that it has long been known that \textit{boosting}'s convergence works the exact \textit{same} way: the larger the weight, the better the rate guaranteed under boosting-compliant assumptions \citep{kmOT}. Since the top-down induction of a greedy tree gradually spends privacy budget to split each node, the boosting vs privacy dilemma is thus to guarantee fast enough convergence --- which also saves budget since we converge in fewer iterations --- while keeping the privacy budget within required bounds. We now give an example of the budget required for popular Bayes risks using Theorem \ref{th3P}. $\bayesmatrisk(u) \defeq 2\sqrt{u(1-u)}$ is the Bayes risk of Matsushita's loss \citep{nnOT,nnBD}, which guarantees optimal boosting convergence \citep{kmOT} and is thus, as might be expected, the most ``expensive'' DP-wise.
\begin{lemma}\label{lemPhiProof}
$\forall \bayesrisk \in \{\bayesmatrisk, \bayeslogrisk, \bayessqrisk, \bayesZOrisk\}$, we have $\Delta_{\bayesrisk} \leq \max\{3, 1 + \Delta^*_{\bayesrisk}(m)\}$ where $\Delta^*_{\bayesmatrisk} = 2\sqrt{m}$, $\Delta^*_{\bayeslogrisk} = (1+\log(m+1))\cdot \log^{-1} 2 $, $\Delta^*_{\bayessqrisk} = 4m/(m+1)$, $\Delta^*_{\bayesZOrisk} = 2$.
\end{lemma}
We note that all our bounds are within 1 of the bounds known for $\bayeslogrisk, \bayessqrisk, \bayesZOrisk$ \citep{fsDM}, so the generality of Theorem \ref{th3P} (all applicable Bayes risks, non-uniform weights over examples) comes at a reduced price.
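To illustrate the lemma numerically, a sketch (ours, not the paper's code; natural logs assumed in $\Delta^*_{\bayeslogrisk}$) tabulating the overall bound $\max\{3, 1+\Delta^*_{\bayesrisk}(m)\}$ as $m$ grows:

```python
import math

# Sketch: the sensitivity proxies Delta*(m) of the Lemma above and the
# overall bound max{3, 1 + Delta*(m)}, illustrating how the DP cost of
# a split scales with m for each Bayes risk.

DELTA_STAR = {
    "matsushita": lambda m: 2.0 * math.sqrt(m),
    "log":        lambda m: (1.0 + math.log(m + 1.0)) / math.log(2.0),
    "square":     lambda m: 4.0 * m / (m + 1.0),
    "zero-one":   lambda m: 2.0,
}

def sensitivity_bound(loss, m):
    return max(3.0, 1.0 + DELTA_STAR[loss](m))

# At m = 10^4, Matsushita (optimal boosting rate) is by far the most
# expensive, while the 0/1 loss (poorest rate) is the cheapest:
# Theta(sqrt(m)) vs Theta(log m) vs O(1) vs O(1).
m = 10_000
costs = {k: sensitivity_bound(k, m) for k in DELTA_STAR}
```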
\section{Appendix on Experiments}\label{exp_expes}
\subsection{General setting}\label{sub-sec-gen}
\noindent $\triangleright$ \textbf{Public information} is as
follows. First, the attribute domain is public, which is standard in
the field \citep{fiDT}. Several authors have tried to compute the
threshold information for continuous attributes in a private way
\citep{fiDT,fsDM}. This is not necessarily a good approach: it requires
privacy budget, it can require weakening privacy, and it does not
necessarily buy improvements \citep[Section 3.2.2]{fiDT}. Since the
attribute domain is public, there is a simple alternative that does
not suffer from most of these drawbacks: the regular quantisation of the
domain using a public number of values. This particularly makes sense
\textit{e.g.} for many commonly used attribute classes like ages, percentages, $\$$-values,
mileages, distances, or for any attribute whose key segments
are known to specialists, such as in life sciences or the medical
domain. This also has three technical justifications: (1) a private
approach requires budget, (2) $\nvpriv$ allows to tightly control
the computational complexity of the whole DT induction, and most importantly (3) boosting
does not require an exhaustive split search: it just assumes the
WLA, which essentially requires $\nvpriv$ not to be too
small, all the more so if the tree is not too deep.\\
\noindent $\triangleright$ \textbf{Parameters} for \alphaboost. We ran our approach, both private and non-private, for all combinations
of $T \in \{2, 5, 10, 20\}$, $\alpha \in \{0.1,
1.0, \mbox{O.C}\}$ (O.C = Objective Calibration), $d\in \{1, 2, 3, 4,
5, 6\}$. Finally, we have tried a quantisation in $\nvpriv\in
\{10, 50\}$ values, for all numeric attributes (Section \ref{sec-solv}
in the main file). In order not to give a potential advantage to
noise-free boosting in its tests that would not come from the absence
of noise, we also use this regular
quantisation for the noise-free boosting tests of our approach.\\
For the private version, in
addition to all these combinations, we considered $\epsilon \in \{0.01, 0.1,
1.0, 10.0, 25.0\}$ and
$\betatree \in \{0.1, 0.5, 0.9\}$. For the private trees, after having
noisified the leaf predictions, we clamp the output
values of the private trees to a maximum $M \in \{1,
10, 100\}$, which is another parameter. In the private setting, once
the depth is fixed, every tree induced has all of its leaves at the
same depth: this means that we even split leaves that are pure if they
are below the required depth, to avoid spending DP budget on testing for
purity (a test we do perform in the non-private case, where we do not
split pure leaves).\\
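The leaf noisification and clamping step can be sketched as follows; the sensitivity value and per-leaf budget passed in are placeholders, not our actual accounting.

```python
import math
import random

def sample_laplace(scale):
    """Draw from Laplace(0, scale) by inverse-CDF sampling."""
    u = random.random()
    while u == 0.0:          # guard against log(0)
        u = random.random()
    if u < 0.5:
        return scale * math.log(2.0 * u)
    return -scale * math.log(2.0 * (1.0 - u))

def private_leaf_prediction(raw, sensitivity, eps_leaf, M):
    """Noisify a leaf prediction with the Laplace mechanism, then clamp
    the output to [-M, M]. Clamping is post-processing of a private
    value, so it does not degrade the DP guarantee."""
    noisy = raw + sample_laplace(sensitivity / eps_leaf)
    return max(-M, min(M, noisy))
```

With a large per-leaf budget the noise is negligible and the clamp is inactive; with a small budget the clamp bounds the damage a large noise draw can do to the ensemble's output.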
Altogether, this represents more than 1.3 million (ensemble) models
learned using our approach. Obviously, increasing $\nvpriv$ tends to
improve accuracy but significantly increases the time complexity of
\alphaboost, in particular for splitting the nodes, a task carried out
repeatedly both in the non-private case and in the exponential
mechanism in the differential
privacy case, where it adds a further computational burden. Because of the size of the experiments, we report here
the results obtained for $M=10, \nvpriv = 10$, which seems to lead to
a good compromise between accuracy and execution time.
\subsection{Implementation}\label{sub-sec-sum-imp}
We give here a few details on the implementation.\\
\noindent $\triangleright$ \textbf{Boosting}: For boosting
algorithms, we clamp the value $q(\ell) \in [\zeta, 1 -
\zeta]$ with $\zeta = 10^{-4}$ to prevent infinite predictions and
NaNs via the link function. In the DP case, the value is then
noisified and, after that, clamped to a maximum value,
$M$. Since in theory weights cannot be 0 or 1 when $\alpha \neq 0$, but
numerical precision errors can produce 0 or 1 weights in exceptional
cases, we replace such weights by the corresponding value in $\{\zeta', 1 -
\zeta'\}$. \\
\noindent $\triangleright$ \textbf{Random forests}: A random decision
forest is an ensemble of random decision trees \citep{Fan03israndom}. A random decision tree is constructed by choosing the split features purely at random.
\citet{FLETCHER201716} showed that this independence from the training data can be favourable for learning differentially private classifiers, as the construction of the tree does not incur any privacy cost.
We implemented random decision forests based on the ideas from those
papers. However, instead of smooth sensitivity, we use global
sensitivity, and not just to rely on a single definition of
sensitivity throughout: our code was written with federated learning in mind,
and, since smooth sensitivity is data dependent, it is an open problem
whether it can be computed cooperatively over distributed
datasets without leaking information. Since privacy budget is spent on the
leaves' predictions, we implemented two mechanisms to make those
private: the exponential mechanism using the class counts, and the
Laplace mechanism, also on the class counts, splitting the
privacy budget evenly among the leaves before applying each mechanism. We
refer to the two random forest approaches as \rfexp~and \rflap,
respectively for the exponential and Laplace mechanisms.
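A minimal sketch of the data-independent construction follows (the noising mechanisms themselves are omitted; names are ours): the tree structure is drawn purely at random, so only the leaf class counts need to be made private, e.g. by splitting a budget $\epsilon$ evenly over the leaves returned by \texttt{leaves}.

```python
import random

def build_random_tree(features, depth, n_classes=2):
    """Build a random decision tree: split features are picked purely
    at random, independently of the training data, so the tree
    STRUCTURE costs no privacy budget; only the class counts stored at
    the leaves must be made private afterwards."""
    if depth == 0:
        return {"counts": [0] * n_classes}     # leaf: per-class counts
    return {"feature": random.choice(features),
            "left":  build_random_tree(features, depth - 1, n_classes),
            "right": build_random_tree(features, depth - 1, n_classes)}

def leaves(node):
    """Collect all leaves, e.g. to split the privacy budget evenly."""
    if "counts" in node:
        return [node]
    return leaves(node["left"]) + leaves(node["right"])
```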
\subsection{Additional experimental results}\label{sub-sec-exp-res}
\subsection*{Domain summary table}\label{sub-sec-sum}
\begin{table}[h]
\begin{center}
\begin{tabular}{|crr|}
\hline \hline
Domain & \multicolumn{1}{c}{$m$} & \multicolumn{1}{c|}{$n$} \\ \hline
Transfusion & 748 & 4 \\
Banknote & 1 372 & 4 \\
Breast wisc & 699 & 9 \\
Ionosphere & 351 & 33 \\
Sonar & 208 & 60 \\
Yeast & 1 484 & 7\\
Wine-red & 1 599 & 11 \\
Cardiotocography (*) & 2 126 & 9 \\
CreditCardSmall (**) & 1 000 & 23\\
Abalone & 4 177 & 8 \\
Qsar & 1 055 & 41\\
Wine-white & 4 898 & 11 \\
Page & 5 473 & 10 \\
Mice & 1 080 & 77\\
Hill+noise & 1 212 & 100\\
Hill+nonoise & 1 212 & 100\\
Firmteacher & 10 800 & 16\\
Magic & 19 020 & 10 \\
EEG & 14 980 & 14\\ \hline\hline
\end{tabular}
\end{center}
\caption{UCI domains considered in our experiments ($m=$ total number
  of examples, $n=$ number of features), ordered by
  increasing $m \times n$. (*) we used features 13-21 as descriptors; (**) we used the first 1 000 examples of the
  UCI domain.}
\label{t-s-uci}
\end{table}
\clearpage
\newpage
\section*{Results for $\nvpriv=10, M = 10$}
Owing to the large number of files/plots, we show results for a subset
of the domains only. Contact the authors for a more comprehensive
non-arXiv version of the paper.
\subsection*{$\triangleright$ UCI \texttt{transfusion}}\label{sub-sec-transfusion}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\ \hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{transfusion}: $x$ = test error values, $y$
  = cumulated expected depth (left plots) or number of leaves (right
  plots) for the models having test error $\leq x$, aggregated over
  all runs ($\pm$ standard deviation) -- the vertical
  black bar depicts the test error of the default class. \textit{Left
    panel}: without DP; \textit{Right
    panel}: with DP; values are
  aggregated over all varying parameters (left: $\alpha$; right:
  $\alpha$, $\varepsilon$, [ $\betatree|\betapred$ ]). }
\label{f-s-transfusion}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performances wrt $\alpha$s} & \multicolumn{2}{c}{performances wrt $\epsilon$s (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{transfusion}: $x$ = test error values and
  $y$ = aggregated percentage of runs having error no
  less than $x$ -- the vertical
  black bar depicts the test error of the default class;
  \textit{Left panel}: performances as a function of $\alpha$ (O.C =
  objective calibration), without (left plot) or with DP (right plot);
  \textit{Right panel}: performances as a function of $\varepsilon$,
  either displaying the full plot (left plot) or a crop over the best
  results (right plot). The crop panel is indicated in the left plot.}
\label{f-s-transfusion2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/transfusion/PostProcess10/plot_binTransfusion_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{transfusion}: $x$ = test error values and
$y$ = aggregated percentage of runs having error no
less than $x$ -- the vertical
black bar depicts the test error of the default class; \textit{top row}: performances as a function of
$\alpha$ showing the full plot for each value of $\varepsilon$;
\textit{bottom row}: crop of the best results from the top row (the crop panel is indicated in the left plot).}
\label{f-s-transfusion3}
\end{figure}
\clearpage
\newpage
\subsection*{$\triangleright$ UCI \texttt{banknote}}\label{sub-sec-banknote}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\
\hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{banknote}, conventions identical as in
Figure \ref{f-s-transfusion}. }
\label{f-s-banknote}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performances wrt $\alpha$s} & \multicolumn{2}{c}{performances wrt $\epsilon$s (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{banknote}, conventions identical as in
Figure \ref{f-s-transfusion2}.}
\label{f-s-banknote2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{banknote}, conventions identical as in
Figure \ref{f-s-transfusion3}.}
\label{f-s-banknote3}
\end{figure}
\clearpage
\newpage
\subsection*{$\triangleright$ UCI \texttt{breastwisc}}\label{sub-sec-breastwisc}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\
\hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{breastwisc}, conventions identical as in
Figure \ref{f-s-transfusion}. }
\label{f-s-breastwisc}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performances wrt $\alpha$s} & \multicolumn{2}{c}{performances wrt $\epsilon$s (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{breastwisc}, conventions identical as in
Figure \ref{f-s-transfusion2}.}
\label{f-s-breastwisc2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/breastwisc/PostProcess10/plot_binBreast_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{breastwisc}, conventions identical as in
Figure \ref{f-s-transfusion3}.}
\label{f-s-breastwisc3}
\end{figure}
\clearpage
\newpage
\subsection*{$\triangleright$ UCI \texttt{ionosphere}}\label{sub-sec-ionosphere}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\ \hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{ionosphere}, conventions identical as in
Figure \ref{f-s-transfusion}. }
\label{f-s-ionosphere}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performances wrt $\alpha$s} & \multicolumn{2}{c}{performances wrt $\epsilon$s (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{ionosphere}, conventions identical as in
Figure \ref{f-s-transfusion2}.}
\label{f-s-ionosphere2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/ionosphere/PostProcess10/plot_binIono_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{ionosphere}, conventions identical as in
Figure \ref{f-s-transfusion3}.}
\label{f-s-ionosphere3}
\end{figure}
\clearpage
\newpage
\subsection*{$\triangleright$ UCI \texttt{sonar}}\label{sub-sec-sonar}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\
\hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{sonar}, conventions identical as in
Figure \ref{f-s-transfusion}. }
\label{f-s-sonar}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performances wrt $\alpha$s} & \multicolumn{2}{c}{performances wrt $\epsilon$s (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{sonar}, conventions identical as in
Figure \ref{f-s-transfusion2}.}
\label{f-s-sonar2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/sonar/PostProcess10/plot_binSonar_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{sonar}, conventions identical as in
Figure \ref{f-s-transfusion3}.}
\label{f-s-sonar3}
\end{figure}
\clearpage
\newpage
\subsection*{$\triangleright$ UCI \texttt{yeast}}\label{sub-sec-yeast}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\ \hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{yeast}, conventions identical as in
Figure \ref{f-s-transfusion}. }
\label{f-s-yeast}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performances wrt $\alpha$s} & \multicolumn{2}{c}{performances wrt $\epsilon$s (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{yeast}, conventions identical as in
Figure \ref{f-s-transfusion2}.}
\label{f-s-yeast2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/yeast/PostProcess10/plot_binYeast_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{yeast}, conventions identical as in
Figure \ref{f-s-transfusion3}.}
\label{f-s-yeast3}
\end{figure}
\clearpage
\newpage
\subsection*{$\triangleright$ UCI \texttt{winered}}\label{sub-sec-winered}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\ \hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{winered}, conventions identical to those in
Figure \ref{f-s-transfusion}.}
\label{f-s-winered}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performance w.r.t. $\alpha$} & \multicolumn{2}{c}{performance w.r.t. $\varepsilon$ (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{winered}, conventions identical to those in
Figure \ref{f-s-transfusion2}.}
\label{f-s-winered2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{winered}, conventions identical to those in
Figure \ref{f-s-transfusion3}.}
\label{f-s-winered3}
\end{figure}
\clearpage
\subsection*{$\triangleright$ UCI \texttt{cardiotocography}}\label{sub-sec-cardiotocography}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\ \hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{cardiotocography}, conventions identical to those in
Figure \ref{f-s-transfusion}.}
\label{f-s-cardiotocography}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performance w.r.t. $\alpha$} & \multicolumn{2}{c}{performance w.r.t. $\varepsilon$ (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{cardiotocography}, conventions identical to those in
Figure \ref{f-s-transfusion2}.}
\label{f-s-cardiotocography2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/cardiotocography/PostProcess10/plot_binCardio_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{cardiotocography}, conventions identical to those in
Figure \ref{f-s-transfusion3}.}
\label{f-s-cardiotocography3}
\end{figure}
\clearpage
\subsection*{$\triangleright$ UCI \texttt{creditcardsmall}}\label{sub-sec-creditcardsmall}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\ \hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{creditcardsmall}, conventions identical to those in
Figure \ref{f-s-transfusion}.}
\label{f-s-creditcardsmall}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performance w.r.t. $\alpha$} & \multicolumn{2}{c}{performance w.r.t. $\varepsilon$ (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{creditcardsmall}, conventions identical to those in
Figure \ref{f-s-transfusion2}.}
\label{f-s-creditcardsmall2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/creditcardsmall/PostProcess10/plot_binCreditCard_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{creditcardsmall}, conventions identical to those in
Figure \ref{f-s-transfusion3}.}
\label{f-s-creditcardsmall3}
\end{figure}
\clearpage
\subsection*{$\triangleright$ UCI \texttt{abalone}}\label{sub-sec-abalone}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\ \hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{abalone}, conventions identical to those in
Figure \ref{f-s-transfusion}.}
\label{f-s-abalone}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performance w.r.t. $\alpha$} & \multicolumn{2}{c}{performance w.r.t. $\varepsilon$ (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{abalone}, conventions identical to those in
Figure \ref{f-s-transfusion2}.}
\label{f-s-abalone2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/abalone/PostProcess10/plot_binAba_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{abalone}, conventions identical to those in
Figure \ref{f-s-transfusion3}.}
\label{f-s-abalone3}
\end{figure}
\clearpage
\subsection*{$\triangleright$ UCI \texttt{qsar}}\label{sub-sec-qsar}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\ \hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{qsar}, conventions identical to those in
Figure \ref{f-s-transfusion}.}
\label{f-s-qsar}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performance w.r.t. $\alpha$} & \multicolumn{2}{c}{performance w.r.t. $\varepsilon$ (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{qsar}, conventions identical to those in
Figure \ref{f-s-transfusion2}.}
\label{f-s-qsar2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{qsar}, conventions identical to those in
Figure \ref{f-s-transfusion3}.}
\label{f-s-qsar3}
\end{figure}
\clearpage
\subsection*{$\triangleright$ UCI \texttt{page}}\label{sub-sec-page}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\ \hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/page/PostProcess10/plot_binPage_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/page/PostProcess10/plot_binPage_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/page/PostProcess10/plot_binPage_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/page/PostProcess10/plot_binPage_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{page}, conventions identical to those in
Figure \ref{f-s-transfusion}.}
\label{f-s-page}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performance w.r.t. $\alpha$} & \multicolumn{2}{c}{performance w.r.t. $\varepsilon$ (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/page/PostProcess10/plot_binPage_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/page/PostProcess10/plot_binPage_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/page/PostProcess10/plot_binPage_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/page/PostProcess10/plot_binPage_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{page}, conventions identical to those in
Figure \ref{f-s-transfusion2}.}
\label{f-s-page2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/page/PostProcess10/plot_binPage_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/page/PostProcess10/plot_binPage_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/page/PostProcess10/plot_binPage_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/page/PostProcess10/plot_binPage_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/page/PostProcess10/plot_binPage_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/page/PostProcess10/plot_binPage_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/page/PostProcess10/plot_binPage_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/page/PostProcess10/plot_binPage_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/page/PostProcess10/plot_binPage_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/page/PostProcess10/plot_binPage_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{page}, conventions identical to those in
Figure \ref{f-s-transfusion3}.}
\label{f-s-page3}
\end{figure}
\clearpage
\subsection*{$\triangleright$ UCI \texttt{mice}}\label{sub-sec-mice}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\ \hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{mice}, conventions identical to those in
Figure \ref{f-s-transfusion}.}
\label{f-s-mice}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performance w.r.t. $\alpha$} & \multicolumn{2}{c}{performance w.r.t. $\varepsilon$ (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{mice}, conventions identical to those in
Figure \ref{f-s-transfusion2}.}
\label{f-s-mice2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/mice/PostProcess10/plot_binMice_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{mice}, conventions identical to those in
Figure \ref{f-s-transfusion3}.}
\label{f-s-mice3}
\end{figure}
\clearpage
\subsection*{$\triangleright$ UCI \texttt{hill$+$noise}}\label{sub-sec-hillnoise}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\ \hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{hill$+$noise}, conventions identical to those in
Figure \ref{f-s-transfusion}.}
\label{f-s-hillnoise}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performance w.r.t. $\alpha$} & \multicolumn{2}{c}{performance w.r.t. $\varepsilon$ (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{hill$+$noise}, conventions identical to those in
Figure \ref{f-s-transfusion2}.}
\label{f-s-hillnoise2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnoise/PostProcess10/plot_binHillnoise_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{hill$+$noise}, conventions identical to those of
Figure \ref{f-s-transfusion3}.}
\label{f-s-hillnoise3}
\end{figure}
\clearpage
\newpage
\subsection*{$\triangleright$ UCI \texttt{hill$+$nonoise}}\label{sub-sec-hillnonoise}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\ \hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{hill$+$nonoise}, conventions identical to those of
Figure \ref{f-s-transfusion}.}
\label{f-s-hillnonoise}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performances wrt $\alpha$s} & \multicolumn{2}{c}{performances wrt $\epsilon$s (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{hill$+$nonoise}, conventions identical to those of
Figure \ref{f-s-transfusion2}.}
\label{f-s-hillnonoise2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/hillnonoise/PostProcess10/plot_binHillnonoise_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{hill$+$nonoise}, conventions identical to those of
Figure \ref{f-s-transfusion3}.}
\label{f-s-hillnonoise3}
\end{figure}
\clearpage
\newpage
\subsection*{$\triangleright$ UCI \texttt{firmteacher}}\label{sub-sec-firmteacher}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\ \hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{firmteacher}, conventions identical to those of
Figure \ref{f-s-transfusion}.}
\label{f-s-firmteacher2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performances wrt $\alpha$s} & \multicolumn{2}{c}{performances wrt $\epsilon$s (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{firmteacher}, conventions identical to those of
Figure \ref{f-s-transfusion2}.}
\label{f-s-firmteacher22}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/firmteacher2/PostProcess10/plot_binFirmteacher2_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{firmteacher}, conventions identical to those of
Figure \ref{f-s-transfusion3}.}
\label{f-s-firmteacher23}
\end{figure}
\clearpage
\newpage
\subsection*{$\triangleright$ UCI \texttt{magic}}\label{sub-sec-magic}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\ \hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{magic}, conventions identical to those of
Figure \ref{f-s-transfusion}.}
\label{f-s-magic}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performances wrt $\alpha$s} & \multicolumn{2}{c}{performances wrt $\epsilon$s (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{magic}, conventions identical to those of
Figure \ref{f-s-transfusion2}.}
\label{f-s-magic2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/magic/PostProcess10/plot_binMagic_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{magic}, conventions identical to those of
Figure \ref{f-s-transfusion3}.}
\label{f-s-magic3}
\end{figure}
\clearpage
\newpage
\subsection*{$\triangleright$ UCI \texttt{eeg}}\label{sub-sec-eeg}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{without DP} & \multicolumn{2}{c}{with DP}\\ \hline
\pushgraphicsLeafDepth{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_avg_dpfree_depth.png}
\hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_avg_dpfree_leaves.png} & \pushgraphicsLeafDepth{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_avg_dp_depth.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_avg_dp_leaves.png} \\ \hline
depth \hspacegss & \hspacegss $\#$leaves & depth \hspacegss & \hspacegss $\#$leaves \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{eeg}, conventions identical to those of
Figure \ref{f-s-transfusion}.}
\label{f-s-eeg}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc||cc}\hline \hline
\multicolumn{2}{c||}{performances wrt $\alpha$s} & \multicolumn{2}{c}{performances wrt $\epsilon$s (with DP)}\\ \hline
w/o DP \hspacegss & \hspacegss with DP & full
& crop \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_cumulated_dpfree_alphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_cumulated_dp_alphas_ytotal.png} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_cumulated_dp_epsilons_ytotal.png} \hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_cumulated_dp_epsilons_sub.png} \\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{eeg}, conventions identical to those of
Figure \ref{f-s-transfusion2}.}
\label{f-s-eeg2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccccc}\hline \hline
$\varepsilon=0.01$ \hspacegs & \hspacegs $\varepsilon=0.1$ \hspacegs & \hspacegs $\varepsilon=1$ \hspacegs & \hspacegs
$\varepsilon=10$ \hspacegs &
\hspacegs $\varepsilon=25$ \hspacegs \\ \hline
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_EPS0_1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_EPS1_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_EPS10_cumulated_dp_epsilonxalphas_ytotal.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_EPS25_cumulated_dp_epsilonxalphas_ytotal.png}
\\
\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_EPS0_01_cumulated_dp_epsilonxalphas_ysub.png}
\hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_EPS0_1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_EPS1_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_EPS10_cumulated_dp_epsilonxalphas_ysub.png} \hspacegs & \hspacegs \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sg\linewidth]{SubFigs/NCT_10/eeg/PostProcess10/plot_binEeg_EPS25_cumulated_dp_epsilonxalphas_ysub.png}
\\ \hline \hline
\end{tabular}
\end{center}
\caption{UCI domain \texttt{eeg}, conventions identical to those of
Figure \ref{f-s-transfusion3}.}
\label{f-s-eeg3}
\end{figure}
\clearpage
\newpage
\subsection*{Summary in $d, T$ for the best DP in \alphaboost}\label{sub-sec-sumdT}
Table \ref{t-sumdT} roughly summarizes the optimal regimes for $d$
(depth) and $T$ (number of trees) for the best DP results in \alphaboost.
\begin{table}[t]
\begin{center}
\scalebox{.83}{\begin{tabular}{c|c|c}\hline \hline
$d\downarrow$, $T\rightarrow$ & small & big\\\hline
small & \texttt{page} & \texttt{breastwisc}, \texttt{ionosphere},
\texttt{yeast}, \\
& & \texttt{abalone},
\texttt{firmteacher}\\ \hline
big & \texttt{cardiotocography}, \texttt{hillnonoise}& \texttt{transfusion},
\texttt{banknote}, \texttt{sonar},\\
& & \texttt{hillnoise},
\texttt{qsar}, \texttt{mice}, \texttt{wine}*\\ \hline\hline
\end{tabular}}
\end{center}
\caption{Localisation of each of the 19 domains in terms of the model
complexity parameters ($d,T$) that yield the best DP results,
as observed from the ``\texttt{results}*'' file (see above, Section \ref{sub-sec-sum-imp}).}
\label{t-sumdT}
\end{table}
\clearpage
\newpage
\subsection*{Summary of the comparison \alphaboost~vs RFs with DP}\label{sub-sec-AlphavsRF}
\begin{table}[t]
\begin{center}
\hspacegs \includegraphics[trim=5bp 5bp 5bp
5bp,clip,width=0.5\linewidth]{Figs/FigTable-crop.png}
\end{center}
\caption{Comparison of \alphaboost~vs two SOTA random forest (RF)
approaches, each inducing $T=21$ random trees. For each domain and
each depth value in $\{2,4,6\}$, we compute the number of runs where
one algorithm significantly beats the other (assessed with a Student's
$t$ test; all counts have p-value $p<0.01$), and then compute the
percentage of those where \alphaboost~is in the lead, for several values
of $\alpha$ and a number of trees $T\in \{2,20\}$ for \alphaboost~(left and right
tables, resp.).}
\label{t-bvsrf}
\end{table}
\subsection*{Summary comparison $\nvpriv=10$ vs $\nvpriv=50$ ($M = 10$)}\label{sub-sec-1050}
\begin{figure}[h]
\begin{center}
\begin{tabular}{c|cc||cc}\hline \hline
& \multicolumn{2}{c||}{performances wrt $\epsilon$s} &
\multicolumn{2}{c}{high
privacy
performances
($\varepsilon = 0.01$)}\\
& $\nvpriv = 10$ & $\nvpriv = 50$ & $\nvpriv = 10$ & $\nvpriv = 50$ \\ \hline
\rotatebox[origin=l]{90}{\texttt{banknote}} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_cumulated_dp_epsilons_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_50/banknote/PostProcess50/plot_binBanknote_cumulated_dp_epsilons.png}
& \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_50/banknote/PostProcess50/plot_binBanknote_EPS0_01_cumulated_dp_epsilonxalphas.png}\\ \hline
\rotatebox[origin=l]{90}{\texttt{winered}} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_cumulated_dp_epsilons_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_50/winered/PostProcess50/plot_binWinered_cumulated_dp_epsilons.png}
& \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/winered/PostProcess10/plot_binWinered_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_50/winered/PostProcess50/plot_binWinered_EPS0_01_cumulated_dp_epsilonxalphas.png}\\ \hline
\rotatebox[origin=l]{90}{\texttt{qsar}} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_cumulated_dp_epsilons_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_50/qsar/PostProcess50/plot_binQsar_cumulated_dp_epsilons.png}
& \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/qsar/PostProcess10/plot_binQsar_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_50/qsar/PostProcess50/plot_binQsar_EPS0_01_cumulated_dp_epsilonxalphas.png}\\ \hline
\rotatebox[origin=l]{90}{\texttt{winewhite}} & \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/winewhite/PostProcess10/plot_binWineWhite_cumulated_dp_epsilons_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_50/winewhite/PostProcess50/plot_binWineWhite_cumulated_dp_epsilons.png}
& \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_10/winewhite/PostProcess10/plot_binWineWhite_EPS0_01_cumulated_dp_epsilonxalphas_ytotal.png}
\hspacegss & \hspacegss \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=\sgg\linewidth]{SubFigs/NCT_50/winewhite/PostProcess50/plot_binWineWhite_EPS0_01_cumulated_dp_epsilonxalphas.png}\\ \hline
\end{tabular}
\end{center}
\caption{Extract of the comparison between quantization into
$\nvpriv=10$ vs $\nvpriv=50$ values for continuous attributes, for
both the overall privacy results (left subtable) and the results
as a function of $\alpha$ in the high privacy regime ($\varepsilon =
0.01$, right subtable). Conventions follow Figures
\ref{f-s-transfusion} and \ref{f-s-transfusion2}. The vertical black
line is the test error of the majority class.}
\label{f-summary1050}
\end{figure}
\section{Experiments}\label{sec-exp}
\begin{table}[t]
\begin{center}
\scalebox{.93}{\begin{tabular}{cccccc}\hline \hline
$\varepsilon $ & $0.01$ & $0.1$ & $1$ &
$10$
& $25$\\
(O.C, 0.1, 1.0) & (14,3,5) & (13,2,6) & (9,3,9) & (5,4,11)
& (8,6,6) \\ \hline\hline
\end{tabular}}
\end{center}
\caption{Summary, on the 19 domains, of the $\#$ of domains
for which each strategy for $\alpha$ in $\{$objective calibration
(O.C), $0.1$, $1.0$$\}$ leads to the best result (ties lead to sums $>19$).}
\label{f-DP-alpha}
\end{table}
\begin{figure*}[t]
\begin{center}
\scalebox{1.0}{
\begin{tabular}{cc?cc}\hline \hline
\hspace{-0.3cm} perf. wrt $\alpha$s, w/o DP \hspacegss & \hspacegss
perf. wrt
$\alpha$s,
with DP &
$\#$leaves,
w/o
DP
\hspacegss
& \hspacegss $\#$leaves, with DP \\ \hline
\hspace{-0.3cm} \includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=0.2\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_cumulated_dpfree_alphas_ytotal.png}
\hspacegs& \hspacegs\includegraphics[trim=90bp 5bp 75bp
15bp,clip,width=0.2\linewidth]{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_cumulated_dp_alphas_ytotal.png}
&
\pushgraphicsLeafDepth{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_avg_dpfree_leaves.png} \hspacegss & \hspacegss \pushgraphicsLeafDepth{SubFigs/NCT_10/banknote/PostProcess10/plot_binBanknote_avg_dp_leaves.png} \\ \hline\hline
\end{tabular}}
\end{center}
\caption{UCI domain \texttt{banknote}: in each plot, $x$ depicts test errors and $y$ a cumulated $\%$ of runs of
\algoname~having test error at most $x$. In each pane (left, right),
the left plot is without DP and the right plot is with
DP. \textit{Left pane}: comparison of \alphaboost~for three strategies on $\alpha$
(see text). \textit{Right pane}: mean $\pm$ stddev for the number of
leaves in the related trees (see text).}
\label{f-banknote}
\end{figure*}
We have performed 10-fold stratified CV experiments on 19 UCI domains, detailed in \sm,
Section \ref{sub-sec-exp-res}, ranging from $m\cdot n < 3\,000$ to $m\cdot n > 200\,000$. We have compared our approach,
\alphaboost, to two state-of-the-art implementations of RFs
based on \cite{FLETCHER201716}, but replacing the smooth sensitivity by
the global sensitivity (Definition \ref{defEDPRIV}). RFs have the appealing property for DP that the privacy budget need
only be spent at the leaves: we have tried both the Laplace
(\rflap) and the exponential (\rfexp) mechanisms (see \sm, $\S$
\ref{sub-sec-sum-imp}) with RFs containing $T=21$ trees to prevent ties.
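To make the leaves-only budget spending concrete, here is a minimal sketch of the Laplace mechanism applied to leaf class counts. This is our own illustration, not the implementation of \rflap; it assumes each example changes a single class count by at most 1, so the global sensitivity of each count is 1:

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling from the Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_leaf_label(class_counts, epsilon):
    # Noisy majority vote at a leaf: add Lap(1/epsilon) noise to each class
    # count (sensitivity-1 counting queries), then return the noisy argmax.
    noisy = {c: n + laplace_noise(1.0 / epsilon) for c, n in class_counts.items()}
    return max(noisy, key=noisy.get)
```

With a large count gap and a mild privacy budget, the noisy vote almost surely returns the true majority class.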
We have performed three kinds of
experiments: (i) check that \alphaboost~performs well and complies
with the boosting theory in the privacy-free case, (ii) compare the
various flavours of \alphaboost~in the private case, (iii) compare
\alphaboost~vs RFs in the private case.
We ran \algoname, both private and non-private, for all combinations
of $T \in \{2, 5, 10, 20\}$, $\alpha \in \{0.1,
1.0, \mbox{O.C}\}$, depth $\in \{1, 2, 3, 4,
5, 6\}$, $\epsilon \in \{0.01, 0.1,
1.0, 10.0, 25.0\}$, $\betatree \in \{0.1, 0.5, 0.9\}$, and even more
parameters (see \sm, $\S$ \ref{sub-sec-gen}), for a total, counting
boosting experiments alone, that far exceeds one million ensemble
models learned. When there is no DP constraint, we add in
\alphaboost~a test of whether a leaf is pure -- \textit{i.e.} is
not reached by examples of both classes -- before attempting to
split it (we do not split pure leaves further). When there is DP, however, we skip this test in order
not to spend privacy budget, so \alphaboost~
builds trees in which all leaves are at the required depth.
The \sm, $\S$ \ref{exp_expes}, gives the experiments in
greater detail; we summarize them here.\\
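The two stopping policies can be sketched as follows; this is a toy skeleton under our own simplifying assumptions (with \texttt{split\_fn} a hypothetical splitter), not the actual \alphaboost~code:

```python
def grow(examples, depth, max_depth, split_fn, private):
    # examples: list of (features, label) pairs; split_fn partitions a list
    # of examples into a (left, right) pair.
    labels = [y for _, y in examples]
    pure = len(set(labels)) <= 1
    # Privacy-free mode: stop early on pure leaves (this inspects the data).
    # Private mode: skip the purity test so no budget is spent on it, hence
    # every leaf ends up at exactly max_depth.
    if depth == max_depth or (not private and pure):
        return ("leaf", labels)
    left, right = split_fn(examples)
    return ("node",
            grow(left, depth + 1, max_depth, split_fn, private),
            grow(right, depth + 1, max_depth, split_fn, private))
```

In the private mode the recursion always bottoms out at \texttt{max\_depth}, matching the fixed-depth trees described above.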
\noindent $\triangleright$ \textbf{\alphaboost, with and without
noise}: Figure \ref{f-banknote}, left pane, displays a picture that
can be observed across essentially all domains: \alphaboost~with
$\alpha=1$ tends to obtain better results than with $\alpha=0.1$,
which complies with Theorems
\ref{thBoostDT1} and \ref{thBoostLC1}, and with the boosting theory
more generally \citep{kmOT}. Three additional observations
emerge: (a) objective calibration (O.C) is competitive without noise; (b)
this also holds \textit{with noise}, which we believe indicates a good
compromise between convergence rates and preserving the privacy budget in \alphaboost~
(Section \ref{sec:priv}) and contributes to experimentally validating
our theory in Section \ref{sec-solv}; (c) DP curves display predictable degradations due to noise,
but on many domains noisification still gives interesting results
compared to the noise-free setting: in \texttt{banknote} for example
(Figure \ref{f-banknote}), more than 2/3 of the private runs with
O.C get test error $\leq 20\%$, an
upper bound on the test error for noise-free boosting.\\
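For reference, the $y$-axis of the cumulated plots can be reproduced with a small helper; this is our own illustrative sketch, not the paper's plotting code:

```python
def cumulated_curve(test_errors, thresholds):
    # For each threshold x, return the cumulated % of runs whose test error
    # is at most x (the y-axis of the cumulated plots in the figures).
    n = len(test_errors)
    return [100.0 * sum(1 for e in test_errors if e <= x) / n for x in thresholds]
```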
\noindent $\triangleright$ \textbf{\alphaboost~in various privacy regimes}:
Table \ref{f-DP-alpha} is an extremal experiment which looks at the
best models that can be learned under DP for various
$\varepsilon$. The picture that emerges is that objective
calibration is the best technique under high privacy demand, which we
take as a good sign given our theory (Section
\ref{sec-solv}). Obviously, the experiments aggregate a number of
parameters for each $\epsilon$, such as $T, d, \betatree$, so to
really get the best regime for $\alpha$, one needs
clues on how to fix those other parameters. It turns out that the
experiments indicate that this should be possible. In particular, for
each domain, the value of $\betatree$ does not seem to matter significantly for getting the
best results, \textit{but} the model size parameters matter a
lot more: for each domain, there is a particular regime of $d, T$ that
tends to give the top DP results (like rather deep trees for
\texttt{banknote}, Figure \ref{f-banknote}). \sm, Section \ref{sub-sec-sumdT}, presents
the full details. This, we believe, is important, in
particular for domains where $T$ is small like \texttt{page}, as some RF approaches
fit huge tree sets, reducing
interpretability \citep[Table I]{fiDT}.\\
\noindent $\triangleright$ \textbf{\alphaboost~vs (\rflap~and \rfexp)}:
a Table (\ref{t-bvsrf}, given in \sm, S$\S$ \ref{sub-sec-AlphavsRF})
computes over all 19 domains the $\%$ of runs where \alphaboost~beats
RFs, among all runs for which one approach statistically significantly
($p_{\mbox{\tiny val}}<0.01$)
beats the other. The scale heavily tips in favor of \alphaboost~when
it boosts $T=20$ trees: O.C and $\alpha=1.0$ are
significantly superior than \rfexp~and
\rflap~on more than \textbf{80} $\%$ of such cases (less than 4$\%$ of
the differences are not significant). This means two things: first, for these strategies of \alphaboost, there is not much care needed
to optimize some parameters of \alphaboost~($d, \betapred$) to get to
or beat SOTA, which
is good news; second, this suggests that we can compete
with RFs on much smaller trees, which is indeed displayed in the
left pane of Table \ref{t-bvsrf} where \alphaboost~fits less than
\textbf{ten times} trees than RFs, and still beat those in
a majority of cases, which is good news for interpretability. When we drill down into the
results as a function of $\varepsilon$, we observe that
\alphaboost~tends to be especially good against RFs for high
privacy regimes (\textit{e.g.} $\varepsilon = 0.01$). \\
\noindent $\triangleright$ \textbf{\alphaboost~in the $\nvpriv=10$ vs
$\nvpriv=50$ regime}: the previous summarizes experiments for a
regular quantization with $\nvpriv=10$ of the continuous
attributes. Our experiments (\sm, Section \ref{sub-sec-1050}) also
contain a summary of the comparisons for \alphaboost~when we rather
use $\nvpriv=50$. Notice that multiplying by five the potential number
of splits significantly affects the time
complexity of the algorithm. The results show that the impact
varies as a function of the domain at hand. There can be significant
improvements: \texttt{qsar} and \texttt{winewhite} are two domains for
which $\nvpriv=50$ buys more than 2$\%$ improvement for
objective calibration, a clear winner among all tested strategies for
$\alpha$. On \texttt{banknote}, the improvement is more in favor of
$\alpha = 1.0$. On \texttt{winered}, there is no significant
improvement for the best strategy and, apart from a seemingly better ``concentration'' of
more than 3/4 of the runs of objective calibration towards its best
results with $\nvpriv = 50$, there is no apparent gain otherwise.
\section*{Appendix on Proofs}\label{proof_proofs}
\section{Proof of Theorem \ref{th3P}}
\label{proof_th3P}
The proof is split into three parts, the first two being the following
two lemmata.
\begin{lemma}\label{lemI}
Fix $u\geq 0$. $\check{\bayesrisk}(u,v)$ is non-decreasing over $v\geq
u$.
\end{lemma}
\begin{proof}
We know that $-\bayesrisk$ is convex and therefore $D_{-\bayesrisk}(a\|b)$ is non negative, $D_{-\bayesrisk}$ being the Bregman divergence with generator $-\bayesrisk$ \citep{nnBD}. We obtain, with $a=0, b=u/v$,
\begin{eqnarray}
D_{-\bayesrisk}(a\|b) = \bayesrisk\left(\frac{u}{v}\right) - \bayesrisk (0) - \left(0-\frac{u}{v}\right)\cdot (-\bayesrisk)' & \geq & 0\:\:,\nonumber
\end{eqnarray}
where $(-\bayesrisk) ' \in \partial (-\bayesrisk) (u/v)$, $\partial$
denoting the subdifferential. Simplifying ($\bayesrisk (0) = 0$) yields
\begin{eqnarray}
\bayesrisk\left(\frac{u}{v}\right) + \frac{u}{v} \cdot (-\bayesrisk) ' & \geq & 0\:\:.\label{defphi}
\end{eqnarray}
We then remark that
\begin{eqnarray}
\partial_v \check{\bayesrisk}(u,v) & = & \left\{\bayesrisk\left(\frac{u}{v}\right) + \frac{u}{v} \cdot (-\bayesrisk) '\:\:, \bayesrisk '\in \partial \bayesrisk\left(\frac{u}{v}\right)\right\}\:\:.\label{SI_1}
\end{eqnarray}
Therefore, $\check{\bayesrisk}(u,v)$ is non-decreasing when $v \geq u$, and we obtain the statement of Lemma \ref{lemI}.
\end{proof}
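As a quick numerical sanity check of Lemma \ref{lemI} (ours, not part of the proof), one can verify the monotonicity of the perspective transform $v \mapsto \check{\bayesrisk}(u,v) = v\cdot \bayesrisk(u/v)$ on a concrete concave $\bayesrisk$, here Matsushita's $\bayesrisk(p) = 2\sqrt{p(1-p)}$:

```python
import math

def bayes_risk(p):
    # Matsushita's pointwise Bayes risk, H(p) = 2*sqrt(p(1-p)).
    return 2.0 * math.sqrt(p * (1.0 - p))

def perspective(u, v):
    # check_H(u, v) = v * H(u / v), the perspective transform of Lemma lemI.
    return v * bayes_risk(u / v)

# For fixed u >= 0, v -> check_H(u, v) should be non-decreasing over v >= u.
for u in [0.1, 0.5, 1.0, 2.0]:
    values = [perspective(u, u + 0.01 * k) for k in range(1, 1000)]
    assert all(a <= b + 1e-12 for a, b in zip(values, values[1:]))
print("check_H(u, .) non-decreasing on v >= u: OK")
```

For Matsushita's Bayes risk the perspective has the closed form $\check{\bayesrisk}(u,v) = 2\sqrt{u(v-u)}$, which makes the monotonicity in $v$ transparent.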
The next Lemma shows a few more facts about $\bayesrisk$.
\begin{lemma}\label{lemI2}
The following holds true:
\begin{itemize}
\item [(A)] $\bayesrisk$ is not decreasing (resp. not increasing) over
$[0,1/2]$ (resp. $[1/2, 1]$);
\item [(B)] For any $0\leq p\leq q\leq
1/2$, or any $1/2\leq q\leq p\leq 1$, we have
\begin{eqnarray}
0\leq \bayesrisk(q) - \bayesrisk(p) \leq \bayesrisk(|q-p|)\:\:.
\end{eqnarray}
\item [(C)] Suppose $m\geq 2$. For any $0<v \leq m+1$, $0<u\leq
\min\{1, v\}$,
$\check{\bayesrisk}(u,v) \leq \check{\bayesrisk}(1,m+1)$
\item [(D)] For any $x \geq 2$
\begin{eqnarray}
\bayesrisk\left(\frac{1}{2}\right) - \bayesrisk\left(\frac{1}{2} -
\frac{1}{x}\right) &\leq & \frac{2}{x}.
\end{eqnarray}
\end{itemize}
\end{lemma}
\begin{proof}
A fact that we will use repeatedly hereafter is that a
concave function sits above all its chords. We first prove (A): if $\bayesrisk$ were decreasing somewhere on
$[0,1/2]$, there would be some
$a\geq b, a, b \in [0,1/2]$ such that $\bayesrisk(a) <
\bayesrisk(b)$. Then $(a,
\bayesrisk(a))$ would sit strictly below the chord joining $(b, \bayesrisk(b))$ and $(1/2,1)$,
which is impossible for a concave function. The case $[1/2,1]$ is obtained by symmetry.\\
\noindent We now prove (B). We prove it for the case $0\leq p\leq q\leq
1/2$, the other following from the symmetry of $\bayesrisk$.
Non-negativity follows from (A) and the fact that $\bayesrisk(0) = 0$. The right inequality follows from the concavity of $\bayesrisk$:
indeed, since $\bayesrisk(0) = 0$, this inequality is equivalent to proving
\begin{eqnarray}
\frac{\bayesrisk(q) - \bayesrisk(p)}{q-p} \leq \frac{\bayesrisk(q-p) - \bayesrisk(0)}{q-p-0}\:\:,
\end{eqnarray}
which since $p\geq 0$, is just stating that slopes of chords that
intersect $\bayesrisk$ at points of constant difference between abscissae do not
increase, \textit{i.e.} $\bayesrisk$ is concave. \\
\noindent We prove (C). To get the result, we just need to
write:
\begin{eqnarray}
\check{\bayesrisk}(u,v) & \defeq & v\cdot \bayesrisk\left(\frac{u}{v}\right)\nonumber\\
& \leq & (m +1) \cdot \bayesrisk\left(\frac{u}{m+1}\right) \label{eqaa}\\
& \leq & (m+1) \cdot \bayesrisk\left(\frac{1}{m+1}\right) \label{eqbb},
\end{eqnarray}
where Ineq. (\ref{eqaa}) follows from Lemma
\ref{lemI} and $v\leq m+1$. Ineq. \eqref{eqbb} follows from $u/(m+1)\leq 1/(m+1)
\leq 1/2$ and (A). We finally prove (D). We have
\begin{eqnarray}
\bayesrisk(x) & \geq & 2x \:\:, \forall x
\in [0,1/2]\:\:,\label{eqbinf}
\end{eqnarray}
since $y=2x$ is a chord for $\bayesrisk$ over
$[0,1/2]$ and $\bayesrisk$ is concave (it therefore sits over its
chords). We get for any $x\geq 2$,
\begin{eqnarray}
\bayesrisk\left(\frac{1}{2}\right) - \bayesrisk\left(\frac{1}{2} -
\frac{1}{x}\right) & = & 1 - \bayesrisk\left(\frac{1}{2} -
\frac{1}{x}\right)\nonumber\\
& \leq & 1 - 2\cdot \left(\frac{1}{2} -
\frac{1}{x}\right)\nonumber\\
& & = \frac{2}{x}\:\:,\nonumber
\end{eqnarray}
as claimed. We obtain the statement of Lemma \ref{lemI2}.
\end{proof}
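Parts (B) and (D) of Lemma \ref{lemI2} can also be checked numerically on a concrete $\bayesrisk$ (a sanity sketch of ours, again with Matsushita's Bayes risk):

```python
import math

def H(p):
    # Matsushita's pointwise Bayes risk (concave, symmetric, H(0)=H(1)=0, H(1/2)=1).
    return 2.0 * math.sqrt(p * (1.0 - p))

# (B): 0 <= H(q) - H(p) <= H(q - p) for 0 <= p <= q <= 1/2.
for i in range(0, 51):
    for j in range(i, 51):
        p, q = i / 100.0, j / 100.0
        diff = H(q) - H(p)
        assert -1e-12 <= diff <= H(q - p) + 1e-12

# (D): H(1/2) - H(1/2 - 1/x) <= 2/x for x >= 2.
for x in [2.0, 3.0, 5.0, 10.0, 100.0]:
    assert H(0.5) - H(0.5 - 1.0 / x) <= 2.0 / x + 1e-12
print("Lemma checks (B), (D): OK")
```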
We now embark on the proof of Theorem \ref{th3P}.
Let us fix for short
\begin{eqnarray}
\Delta & \defeq & |f_{\bayesrisk}(h, \leaf,
\mathcal{S}')-f_{\bayesrisk}(h, \leaf,
\mathcal{S})|\nonumber\\
& & = \left| w'(\leaf) \cdot \bayesrisk\left(
\frac{{w'}^1(\leaf)}{w'(\leaf)}\right) - w(\leaf) \cdot \bayesrisk\left( \frac{w^1(\leaf)}{w(\leaf)}\right)\right|,
\end{eqnarray}
and let us assume
without loss of generality that samples contain at least two examples
(otherwise $\Delta = 0$). The only possible difference between $\mathcal{S}$ and $\mathcal{S}'$
that can make $\Delta > 0$ is a weight and/or class change for the
switched example. So, for some $\delta \leq 1$, we consider the
following cases.
\noindent \textbf{Case A}: the total weight in leaf $\leaf$ changes vs
it does not change
\begin{eqnarray}
A_1 \defeq "w'(\leaf) = w(\leaf)" & ; & A_2 \defeq "w'(\leaf) = w(\leaf) + \delta".
\end{eqnarray}
\noindent \textbf{Case B}: the total weight for class 1 in leaf $\leaf$ changes vs
it does not change
\begin{eqnarray}
B_1 \defeq "{w'}^1(\leaf) = w^1(\leaf) + \delta" & ; & B_2 \defeq "{w'}^1(\leaf) = w^1(\leaf)".
\end{eqnarray}
And we also consider different cases depending on the relationship between the weight of class 1
and the total weight in leaf $\leaf$ in $\mathcal{S}$: $\exists u \in
(w(\leaf)/2) \cdot [-1,1]$ such that
\begin{eqnarray}
w^1(\leaf) & = & \frac{w(\leaf)}{2} + u.
\end{eqnarray}
We also suppose wlog that $\delta > 0$ (otherwise, we permute
${\mathcal{S}}$ and ${\mathcal{S}}'$, which does not change $\Delta$
because of $|.|$). We also remark that if we prove the result for
$u\leq 0$, then because of the symmetry of $\bayesrisk$, we get the
result for $u\geq 0$ as well -- this just amounts to reasoning on
negative examples instead of positive examples, changing notations but
not the reasoning.
\noindent $\hookrightarrow$ Case $A_1 \wedge B_1 \wedge (u \leq
-\delta/2)$. Because of the constraint on $u$, either
$w^1(\leaf)+\delta \leq w(\leaf)/2$ (if $u \leq -\delta$), or when $u\in
(-\delta, -\delta/2]$, we have both $w^1(\leaf) / w(\leaf)\leq 1/2$,
$(w^1(\leaf) + \delta) / w(\leaf) > 1/2$ and $(w^1(\leaf) + \delta) /
w(\leaf) - (1/2) \leq (1/2) - w^1(\leaf) / w(\leaf)$. So,
\begin{eqnarray}
\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)}\right) & \geq & \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right),
\end{eqnarray}
and therefore using $A_1, B_1$, we get
\begin{eqnarray}
\Delta & = & w(\leaf) \cdot \left| \bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)}\right) - \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \right|\nonumber\\
& = & w(\leaf) \cdot \left(
\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)}\right) -
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \right)
\nonumber.
\end{eqnarray}
We now have two sub-cases.\\
$\bullet$ If $(w^1(\leaf)+\delta)/ w(\leaf) \leq 1/2$, then we directly get
\begin{eqnarray}
\Delta & \leq & w(\leaf) \cdot
\bayesrisk\left(\frac{\delta}{w(\leaf)}\right) \label{eq2}\\
& & = \check{\bayesrisk}(\delta, w(\leaf))\nonumber\\
& \leq & m \cdot
\bayesrisk\left(\frac{1}{m}\right) \:\:.\label{eq4}
\end{eqnarray}
Ineq. \eqref{eq2} holds because of Lemma
\ref{lemI2} (B). Ineq. \eqref{eq4} follows from Lemma
\ref{lemI2} (C). \\
$\bullet$ If $(w^1(\leaf)+\delta)/ w(\leaf) > 1/2$, then we know that
since $w^1(\leaf) / w(\leaf) \leq 1/2$,
\begin{eqnarray}
\frac{1}{2} - \frac{w^1(\leaf)}{w(\leaf)} & \leq & \frac{\delta}{w(\leaf)},
\end{eqnarray}
and so
\begin{eqnarray}
\Delta & = & w(\leaf) \cdot \left(
\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)}\right) -
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \right) \nonumber\\
& \leq & w(\leaf) \cdot \left( \bayesrisk\left(\frac{1}{2}\right) -
\bayesrisk\left(\frac{1}{2} - v\right)
\right)\nonumber,
\end{eqnarray}
with therefore
\begin{eqnarray}
v & \defeq & \frac{w(\leaf) - 2w^1(\leaf)}{2w(\leaf)} \leq \frac{\delta}{w(\leaf)}.
\end{eqnarray}
We get
\begin{eqnarray}
\Delta & \leq & w(\leaf) \cdot \left( \bayesrisk\left(\frac{1}{2}\right) -
\bayesrisk\left(\frac{1}{2} - \frac{\delta}{w(\leaf)}\right)
\right) \nonumber\\
& \leq & w(\leaf) \cdot \left( \frac{2\delta}{w(\leaf)}
\right) \nonumber\\
& &= 2\delta \leq 2,\label{eq42}
\end{eqnarray}
because of Lemma \ref{lemI2} (D) ($x \defeq w(\leaf) / \delta \geq 2$),
$\delta \leq 1$, $\bayesrisk(1/2) = 1$ and Lemma \ref{lemI2} (A).\\
\noindent $\hookrightarrow$ Case $A_1 \wedge B_1 \wedge (u \in
(-\delta/2, \delta/2))$. We now
obtain:
\begin{eqnarray}
\Delta & = & w(\leaf) \cdot \left(
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) -
\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)}\right)
\right) \nonumber\\
& \leq & w(\leaf) \cdot \left( \bayesrisk\left(\frac{1}{2}\right) -
\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)}\right)
\right)\nonumber\\
& & = w(\leaf) \cdot \left( \bayesrisk\left(\frac{1}{2}\right) -
\bayesrisk\left(\frac{1}{2} + v\right)
\right)\nonumber
\end{eqnarray}
with
\begin{eqnarray}
v & \defeq & \frac{2w^1(\leaf)+2\delta - w(\leaf)}{2w(\leaf)}.
\end{eqnarray}
We remark that $v\geq 0$, since this is equivalent to
\begin{eqnarray}
w^1(\leaf) & \geq & \frac{w(\leaf)}{2} - \delta,
\end{eqnarray}
which indeed holds because $u > -\delta/2 \geq -\delta$ (we recall
$\delta > 0$). We also have $v \leq 1/2$, so using the
symmetry of $\bayesrisk$ around $1/2$, we get
\begin{eqnarray}
\Delta & \leq & w(\leaf) \cdot \left( \bayesrisk\left(\frac{1}{2}\right) -
\bayesrisk\left(\frac{1}{2} - v\right)
\right)\nonumber\\
& \leq & w(\leaf) \cdot \left( \frac{2w^1(\leaf)+2\delta - w(\leaf)}{w(\leaf)}
\right)\label{eqUSE1}\\
& & = 2w^1(\leaf)+2\delta - w(\leaf)\nonumber\\
& = & 2(u+\delta) \leq 3\delta \leq 3 .\label{eq43}
\end{eqnarray}
\eqref{eqUSE1} follows from Lemma \ref{lemI2} (D).\\
\noindent $\hookrightarrow$ Case $A_1 \wedge B_1 \wedge (u \geq
\delta/2)$. Since $\bayesrisk$ is symmetric around $1/2$, this boils
down to case $(u \leq
-\delta/2)$ with the negative examples.\\
\noindent $\hookrightarrow$ Case $A_1 \wedge B_2$. In this case, $\Delta = 0\leq
\check{\bayesrisk}(1,m)$.\\
\noindent $\hookrightarrow$ Case $A_2 \wedge B_1 \wedge \left( u \leq -\frac{\delta}{2} \cdot \frac{w(\leaf)}{2 w(\leaf) + \delta}\right)$. In this
case, there is no class flip, just a change in weight. We first show
that because of the constraint on $u$,
\begin{eqnarray}
\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta}\right) & \geq
& \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right).
\end{eqnarray}
A sufficient condition for this to happen is
$(w^1(\leaf)+\delta)/(w(\leaf)+\delta) \leq 1/2$, which, after
reorganising yields
\begin{eqnarray}
w^1(\leaf) & \leq & \frac{w(\leaf)}{2} - \frac{\delta}{2},
\end{eqnarray}
and so is covered by the fact that $u\leq -\delta / 2$, or, given the
symmetry of $\bayesrisk$, can also be achieved if the following
conditions are met:
\begin{eqnarray}
\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta} & \geq &
\frac{1}{2}, \label{eqBM1}\\
\frac{w^1(\leaf)}{w(\leaf)} & \leq & \frac{1}{2}, \label{eqBMZ}\\
\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta} -
\frac{1}{2} & \leq & \frac{1}{2} - \frac{w^1(\leaf)}{w(\leaf)}\label{eqB11}.
\end{eqnarray}
To satisfy all these inequalities, we need, respectively, $u \geq
-\delta/2$, $u\leq 0$ and
\begin{eqnarray}
u & \leq & -\frac{\delta}{2} \cdot \frac{w(\leaf)}{2 w(\leaf) + \delta},
\end{eqnarray}
all of which are then implied if
\begin{eqnarray}
u & \in & -\frac{\delta}{2}\left( 1, \frac{w(\leaf)}{2 w(\leaf) + \delta}\right),
\end{eqnarray}
which, together with the previous case, results in the Case condition
on $u$. We therefore have:
\begin{eqnarray}
\Delta & = & (w(\leaf)+\delta) \cdot
\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta}\right) -
w(\leaf) \cdot \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)
\end{eqnarray}
We now have two sub-cases:\\
$\bullet$ If
\begin{eqnarray}
\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta} & \leq & \frac{1}{2},
\end{eqnarray}
then since we have as well $w^1(\leaf) / w(\leaf) \leq
(w^1(\leaf)+\delta)/(w(\leaf)+\delta)$ and $\bayesrisk$ is non
decreasing over $[0,1/2]$, we get directly from Lemma \ref{lemI2} (B),
\begin{eqnarray}
\Delta & = & (w(\leaf)+\delta) \cdot \left(
\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta}\right)
-\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \right)
+ \delta \cdot \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \nonumber\\
& \leq & (w(\leaf)+\delta) \cdot \left(
\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta}\right)
- \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf) +\delta}\right)
\right) + \delta \cdot \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \nonumber\\
& \leq & (w(\leaf)+\delta) \cdot
\bayesrisk\left(\frac{\delta}{w(\leaf)+\delta} \right) + \delta \cdot \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \nonumber\\
& \leq & (w(\leaf)+\delta) \cdot
\bayesrisk\left(\frac{\delta}{w(\leaf)+\delta} \right) + \delta .\label{eqBEF1}
\end{eqnarray}
We also remark that
\begin{eqnarray}
(w(\leaf)+\delta) \cdot
\bayesrisk\left(\frac{\delta}{w(\leaf)+\delta} \right) & \leq
& (m+1) \cdot
\bayesrisk\left(\frac{\delta}{m+1} \right)\nonumber\\
& \leq & (m+1) \cdot
\bayesrisk\left(\frac{1}{m+1} \right),\nonumber
\end{eqnarray}
respectively because of Lemma \ref{lemI} and $1/(m+1) \leq 1/2$, which
is in the regime where $\bayesrisk$ is non-decreasing. We get from
\eqref{eqBEF1} that
\begin{eqnarray}
\Delta & \leq & \check{\bayesrisk}(1,m+1) + 1.\label{eq44}
\end{eqnarray}
$\bullet$ If
\begin{eqnarray}
\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta} & > & \frac{1}{2},\label{eqBM2}
\end{eqnarray}
then we can use \eqref{eqB11} and get
\begin{eqnarray}
\Delta & = & (w(\leaf)+\delta) \cdot \left(
\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta}\right)
- \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)
\right) + \delta \cdot \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \nonumber\\
& \leq & (w(\leaf)+\delta) \cdot \left(
\bayesrisk\left(\frac{1}{2}\right)
- \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)
\right) + \delta \cdot \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \nonumber\\
& & = (w(\leaf)+\delta) \cdot \left(
\bayesrisk\left(\frac{1}{2}\right)
- \bayesrisk\left(\frac{1}{2} - v\right)
\right) + \delta \cdot \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) ,\label{eqSZ1}
\end{eqnarray}
with
\begin{eqnarray}
v & \defeq & \frac{1}{2} - \frac{w^1(\leaf)}{w(\leaf)}.
\end{eqnarray}
Remark that
\begin{eqnarray}
\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta} & = &
\frac{w^1(\leaf)}{w(\leaf)} + \frac{\delta(w(\leaf)-w^1(\leaf))}{w(\leaf)(w(\leaf)+\delta)},\label{eqCOND12}
\end{eqnarray}
so to get both \eqref{eqBM2} and \eqref{eqBMZ}, we need
\begin{eqnarray}
\frac{w^1(\leaf)}{w(\leaf)} & \geq & \frac{1}{2} -
\frac{\delta(w(\leaf)-w^1(\leaf))}{w(\leaf)(w(\leaf)+\delta)}, \label{eqCOND13}
\end{eqnarray}
which implies
\begin{eqnarray}
v & \leq & \frac{\delta(w(\leaf)-w^1(\leaf))}{w(\leaf)(w(\leaf)+\delta)},
\end{eqnarray}
and so \eqref{eqSZ1} and Lemma \ref{lemI2} (D) yield
\begin{eqnarray}
\Delta & \leq & (w(\leaf)+\delta) \cdot \left(
\frac{2\delta(w(\leaf)-w^1(\leaf))}{w(\leaf)(w(\leaf)+\delta)}
\right) + \delta \cdot \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \nonumber\\
& & = 2\delta \cdot \left(1 - \frac{w^1(\leaf)}{w(\leaf)}\right) +\delta
\leq
3\delta \leq 3.\label{eq45}
\end{eqnarray}
\noindent $\hookrightarrow$ Case $A_2 \wedge B_1 \wedge \left( u \in
\left(-\frac{\delta}{2} \cdot \frac{w(\leaf)}{2 w(\leaf) + \delta},
\frac{\delta}{2} \cdot \frac{w(\leaf)}{2 w(\leaf) +
\delta}\right)\right)$.
In this case, we have
\begin{eqnarray}
\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta}\right) & \leq
& \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right),\label{eqPROP11}
\end{eqnarray}
and therefore, by virtue of the triangle inequality,
\begin{eqnarray}
\Delta & = & \left| (w(\leaf)+\delta) \cdot
\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta}\right) -
w(\leaf) \cdot \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)
\right|\nonumber\\
& = & \left| (w(\leaf)+\delta) \cdot
\left(\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta}\right) -
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \right) +
\delta\cdot
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)\right|\nonumber\\
& \leq & \left| (w(\leaf)+\delta) \cdot
\left(\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta}\right) -
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \right) \right| +
\delta\cdot
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)\nonumber\\
& = & (w(\leaf)+\delta) \cdot \left(\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)-
\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta}\right)
\right) +
\delta\cdot
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)\nonumber.
\end{eqnarray}
We have two sub-cases.\\
\noindent $\bullet$ $w^1(\leaf) / w(\leaf) \geq 1/2$. In this case, we
apply Lemma \ref{lemI2} (B) and get
\begin{eqnarray}
\Delta & \leq & (w(\leaf)+\delta) \cdot
\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta}-\frac{w^1(\leaf)}{w(\leaf)}\right)
+\delta\cdot
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)\nonumber\\
& & = (w(\leaf)+\delta) \cdot
\bayesrisk\left(\frac{\delta(w(\leaf)-w^1(\leaf))}{w(\leaf)(w(\leaf)+\delta)}\right)
+\delta\cdot
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right).
\end{eqnarray}
Fixing $u\defeq \frac{\delta(w(\leaf)-w^1(\leaf))}{w(\leaf)}\leq \delta$ and $v
\defeq w(\leaf)+\delta$, we remark that $u\leq v$ and $v\leq m+1$ so we can apply Lemma
\ref{lemI2} (C) and get
\begin{eqnarray}
\Delta & \leq & \check{\bayesrisk}(1,m+1) + \delta\cdot
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)\nonumber\\
& \leq & \check{\bayesrisk}(1,m+1) + \delta \leq \check{\bayesrisk}(1,m+1) + 1.\label{eq46}
\end{eqnarray}
\noindent $\bullet$ $w^1(\leaf) / w(\leaf) < 1/2$. In this case, we
remark that \eqref{eqPROP11} implies $(w^1(\leaf) +\delta) / (w(\leaf) +
\delta) > 1/2$, in which case since we still get \eqref{eqCOND12}, to
get $w^1(\leaf) / w(\leaf) < 1/2$, we must have
\begin{eqnarray}
\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta} & \leq & \frac{1}{2} +
\frac{\delta(w(\leaf)-w^1(\leaf))}{w(\leaf)(w(\leaf)+\delta)}, \label{eqCOND14}
\end{eqnarray}
and combining this with the facts that $\bayesrisk$ attains its maximum at
$1/2$, is non-increasing over $[1/2,1]$ and is symmetric around $1/2$, we get
\begin{eqnarray}
\Delta & \leq & (w(\leaf)+\delta) \cdot \left(\bayesrisk\left(\frac{1}{2}\right)-
\bayesrisk\left(\frac{w^1(\leaf)+\delta}{w(\leaf)+\delta}\right)
\right) +
\delta\cdot
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)\nonumber\\
& \leq & (w(\leaf)+\delta) \cdot \left(\bayesrisk\left(\frac{1}{2}\right)-
\bayesrisk\left(\frac{1}{2} +
\frac{\delta(w(\leaf)-w^1(\leaf))}{w(\leaf)(w(\leaf)+\delta)}\right)
\right) +
\delta\cdot
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)\nonumber\\
& & = (w(\leaf)+\delta) \cdot \left(\bayesrisk\left(\frac{1}{2}\right)-
\bayesrisk\left(\frac{1}{2} -
\frac{\delta(w(\leaf)-w^1(\leaf))}{w(\leaf)(w(\leaf)+\delta)}\right)
\right) +
\delta\cdot
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)\nonumber\\
& \leq & (w(\leaf)+\delta) \cdot \left(\frac{2\delta(w(\leaf)-w^1(\leaf))}{w(\leaf)(w(\leaf)+\delta)}
\right) +
\delta\cdot
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)\label{eqTHF1}\\
& & = 2 \delta \cdot \left(1 - \frac{w^1(\leaf)}{w(\leaf)}\right) +
\delta\cdot
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \nonumber\\
& \leq & 3\delta \leq 3 \label{eq47}.
\end{eqnarray}
We have used Lemma \ref{lemI2} (D) in \eqref{eqTHF1}.\\
\noindent $\hookrightarrow$ Case $A_2 \wedge B_1 \wedge \left(u \geq
\frac{\delta}{2} \cdot \frac{w(\leaf)}{2 w(\leaf) + \delta}\right)$. Since $\bayesrisk$ is symmetric around $1/2$, this boils
down to case $\left(u \leq
-\frac{\delta}{2} \cdot \frac{w(\leaf)}{2 w(\leaf) + \delta}\right)$ with the negative examples.\\
\noindent $\hookrightarrow$ Case $A_2 \wedge B_2 \wedge \left(u \leq \frac{\delta}{2} \cdot \frac{2w(\leaf)}{2w(\leaf) + \delta}
\right)$. This time, we can immediately write, independently from the
condition on $u$,
\begin{eqnarray}
\Delta & = & \left| (w(\leaf)+\delta) \cdot
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)+\delta}\right) -
w(\leaf) \cdot \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)
\right|\nonumber\\
& = & \left| (w(\leaf)+\delta) \cdot
\left(\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)+\delta}\right) -
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \right) +
\delta\cdot
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)\right|\nonumber\\
& \leq & \left| (w(\leaf)+\delta) \cdot
\left(\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)+\delta}\right) -
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \right) \right| +
\delta\cdot
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \nonumber\\
& \leq & \left| (w(\leaf)+\delta) \cdot
\left(\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)+\delta}\right) -
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) \right) \right| +
\delta\label{eqB2B}.
\end{eqnarray}
We first examine the condition under which
\begin{eqnarray}
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)+\delta}\right) & \leq & \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right).\label{const1B}
\end{eqnarray}
Again, $u\leq 0$ is a sufficient condition. Otherwise, that is, if
$w^1(\leaf)/w(\leaf) \geq 1/2$, we need
\begin{eqnarray}
\frac{w^1(\leaf)}{w(\leaf)+\delta} & < & \frac{1}{2},\nonumber\\
\frac{1}{2} - \frac{w^1(\leaf)}{w(\leaf)+\delta} & \geq &
\frac{w^1(\leaf)}{w(\leaf)}
- \frac{1}{2};
\end{eqnarray}
the latter constraint is equivalent to
\begin{eqnarray}
\frac{w^1(\leaf)}{w(\leaf)} &\leq & \frac{w(\leaf)+\delta}{2w(\leaf)+\delta},
\end{eqnarray}
and therefore
\begin{eqnarray}
\frac{u}{w(\leaf)} \defeq \frac{w^1(\leaf)}{w(\leaf)}
- \frac{1}{2}
& \leq & \frac{w(\leaf)+\delta}{2w(\leaf)+\delta} - \frac{1}{2} = \frac{\delta}{2w(\leaf)+\delta},
\end{eqnarray}
which leads to our constraint on $u$ and gives
\begin{eqnarray}
\Delta & \leq & (w(\leaf)+\delta) \cdot
\left(\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right) - \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)+\delta}\right)
\right) +
\delta\label{eqB2B2}.
\end{eqnarray}
We have two sub-cases.\\
\noindent $\bullet$ $w^1(\leaf) / w(\leaf)\leq 1/2$. In this case, we
get directly from Lemma \ref{lemI2} (B),
\begin{eqnarray}
\Delta & \leq & (w(\leaf)+\delta) \cdot
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}-\frac{w^1(\leaf)}{w(\leaf)+\delta}\right) +
\delta\nonumber\\
& & = (w(\leaf)+\delta) \cdot
\bayesrisk\left(\frac{w^1(\leaf) \delta}{w(\leaf)(w(\leaf)+\delta)}\right) +
\delta\nonumber\\
& \leq & \check{\bayesrisk}(1,m+1) + 1,\label{eq48}
\end{eqnarray}
where we have used Lemma \ref{lemI2} (C) with $u \defeq w^1(\leaf)\delta
/w(\leaf) \leq 1$ and $v \defeq w(\leaf)+\delta \leq m+1$. We also check
that $u\leq \delta\leq v$.\\
\noindent $\bullet$ $w^1(\leaf) / w(\leaf)\geq 1/2$. In this case, we
remark that
\begin{eqnarray}
\frac{w^1(\leaf)}{w(\leaf)} & = & \frac{w^1(\leaf)}{w(\leaf)+\delta} + \frac{w^1(\leaf) \delta}{w(\leaf)(w(\leaf)+\delta)},
\end{eqnarray}
and since we need $w^1(\leaf)/(w(\leaf)+\delta)\leq 1/2$ (otherwise,
\eqref{const1B} cannot hold), then it implies
\begin{eqnarray}
\frac{w^1(\leaf)}{w(\leaf)+\delta} & \geq & \frac{1}{2} - \frac{w^1(\leaf) \delta}{w(\leaf)(w(\leaf)+\delta)},
\end{eqnarray}
and so the fact that $\bayesrisk$ is non-decreasing before $1/2$ and Lemma \ref{lemI2} (D) yield
\begin{eqnarray}
\Delta & \leq & (w(\leaf)+\delta) \cdot
\left(\bayesrisk\left(\frac{1}{2}\right) - \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)+\delta}\right)
\right) +
\delta\nonumber\\
& \leq & (w(\leaf)+\delta) \cdot
\left(\bayesrisk\left(\frac{1}{2}\right) - \bayesrisk\left(\frac{1}{2} - \frac{w^1(\leaf) \delta}{w(\leaf)(w(\leaf)+\delta)}\right)
\right) +
\delta\nonumber\\
& \leq & (w(\leaf)+\delta) \cdot
\frac{2 w^1(\leaf) \delta}{w(\leaf)(w(\leaf)+\delta)} +
\delta\nonumber\\
& & =
\frac{2 w^1(\leaf) \delta}{w(\leaf)} +
\delta \leq 3\delta \leq 3\label{eq49}.
\end{eqnarray}
To complete the proof of the Case, suppose now that
\begin{eqnarray}
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)+\delta}\right) & \geq & \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right),\label{const1B3}
\end{eqnarray}
which therefore imposes
\begin{eqnarray}
\frac{w^1(\leaf)}{w(\leaf)} \geq \frac{w^1(\leaf)}{w(\leaf)+\delta} \geq \frac{1}{2},
\end{eqnarray}
so using Lemma \ref{lemI2} (B) yields
\begin{eqnarray}
\Delta & \leq & (w(\leaf)+\delta) \cdot
\left(\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)+\delta}\right) - \bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}\right)
\right) +
\delta\nonumber\\
& \leq & (w(\leaf)+\delta) \cdot
\bayesrisk\left(\frac{w^1(\leaf)}{w(\leaf)}-\frac{w^1(\leaf)}{w(\leaf)+\delta}\right) +
\delta\nonumber\\
& & = (w(\leaf)+\delta) \cdot
\bayesrisk\left(\frac{w^1(\leaf) \delta}{w(\leaf)(w(\leaf)+\delta)}\right) +
\delta\nonumber\\
& \leq & \check{\bayesrisk}(1,m+1) + 1,\label{eq410}
\end{eqnarray}
where we have used Lemma \ref{lemI2} (C) with $u \defeq w^1(\leaf)\delta
/w(\leaf) \leq 1$ and $v \defeq w(\leaf)+\delta \leq m+1$. We also check
that $u\leq \delta\leq v$.\\
\noindent $\hookrightarrow$ Case $A_2 \wedge B_2 \wedge \left(u > \frac{\delta}{2} \cdot \frac{2w(\leaf)}{2w(\leaf) + \delta}
\right)$. Since $\bayesrisk$ is symmetric around $1/2$, this boils
down to case $\left(u \leq \frac{\delta}{2} \cdot
\frac{2w(\leaf)}{2w(\leaf) + \delta}\right)$ with the negative
examples.\\
We can now finish the upper bound on $\Delta$ by taking all bounds in
\eqref{eq4}, \eqref{eq42}, \eqref{eq43}, \eqref{eq44}, \eqref{eq45},
\eqref{eq46}, \eqref{eq47}, \eqref{eq48}, \eqref{eq49} and
\eqref{eq410}:
\begin{eqnarray}
\Delta & \leq & \max\{\check{\bayesrisk}(1,m), 2, 3,
1+\check{\bayesrisk}(1,m+1)\} = \max\{3, 1+\check{\bayesrisk}(1,m+1)\},
\end{eqnarray}
as claimed, using Lemma \ref{lemI2} (C).
\noindent \textbf{Remark}: We can prove that $\Delta = \check{\bayesrisk}(1, m)$ can be
realized: consider a set $\mathcal{S}$ of $m$ examples with unit weight,
exactly one of which is from the positive
class. In ${\mathcal{S}}'$, we flip this example's class. We get:
\begin{eqnarray}
\Delta & = & m\cdot \bayesrisk\left(\frac{1}{m}\right) - m\cdot \bayesrisk\left(\frac{0}{m}\right)\nonumber\\
& = & m\cdot \bayesrisk\left(\frac{1}{m}\right) - m\cdot \bayesrisk\left(0\right)\nonumber\\
& = & m\cdot \bayesrisk\left(\frac{1}{m}\right) = \check{\bayesrisk}(1, m)\:\:,
\end{eqnarray}
as claimed (since $\bayesrisk(0) = 0$).
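To illustrate Theorem \ref{th3P}, here is a brute-force numerical check (ours; the sampling scheme is a hypothetical illustration) that the sensitivity bound $\max\{3, 1+\check{\bayesrisk}(1,m+1)\}$ holds over randomly sampled neighboring leaf statistics, with Matsushita's Bayes risk:

```python
import math, random

def H(p):
    # Matsushita's pointwise Bayes risk.
    return 2.0 * math.sqrt(p * (1.0 - p))

def f(w1, w):
    # Leaf objective w * H(w1 / w); 0 by convention when the leaf is empty.
    return w * H(w1 / w) if w > 0 else 0.0

random.seed(0)
m = 10
bound = max(3.0, 1.0 + (m + 1) * H(1.0 / (m + 1)))
for _ in range(10000):
    # Random leaf statistics: total weight w <= m, class-1 weight w1 <= w.
    w = random.uniform(0.5, m)
    w1 = random.uniform(0.0, w)
    # Neighbor: perturb the class-1 and/or total weight by delta <= 1
    # (weight and/or class change of a single example).
    delta = random.uniform(0.0, 1.0)
    for w1p, wp in [(w1 + delta, w), (w1 + delta, w + delta), (w1, w + delta)]:
        if w1p <= wp and wp <= m + 1:
            assert abs(f(w1p, wp) - f(w1, w)) <= bound + 1e-9
print("sensitivity bound holds on all sampled neighbors")
```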
\section{Proof of Lemma \ref{lemcurv}}\label{proof_lemcurv}
We perform a Taylor expansion of $\bayesrisk$ around $1/x$ up to second order and obtain:
\begin{eqnarray*}
\bayesrisk(0) & = & \underbrace{\bayesrisk\left(\frac{1}{x}\right) + \left( 0-\frac{1}{x}\right)\cdot \bayesrisk'\left(\frac{1}{x}\right) }_{\defeq J} \nonumber\\
& & + \frac{1}{2}\left( 0-\frac{1}{x}\right)^2\cdot \bayesrisk''(a)\:\:,
\end{eqnarray*}
for some $a \in [0,1/x]$. It remains to see that $J = \check{\bayesrisk}'(1, x)$ (eq. (\ref{SI_1}) in the Appendix), fix $x = m+1$ and reorder given $\bayesrisk(0) = 0$.
\section{Proof of Lemma \ref{lemPhiProof}}\label{proof_lemPhiProof}
\noindent We have
\begin{eqnarray}
\bayesmatrisk (1,m+1) & = & (m+1)\cdot 2\sqrt{\frac{1}{m+1}
\cdot \frac{m}{m+1}} \nonumber\\
& = & 2\sqrt{m}\:\:,
\end{eqnarray}
as claimed.\\
\noindent We have (we distinguish $\log$, base-2, from $\ln$,
base-$e$\footnote{In the main body, $\log$ is base-$e$ by default.})
\begin{eqnarray}
\bayeslogrisk (1,m+1) & = & (m+1)\cdot \left( -\frac{1}{m+1}\log
\frac{1}{m+1} - \frac{m}{m+1}\log \frac{m}{m+1}\right) \nonumber\\
& = & \log(m+1) + m\log\frac{m+1}{m}\nonumber\\
& \leq & \log(m+1) + \frac{1}{\ln 2}\:\:.
\end{eqnarray}
The last inequality follows from \citet[Claim 1]{fsDM}.\\
\noindent We have
\begin{eqnarray}
\bayessqrisk (1,m+1) & = & (m+1)\cdot \frac{4}{m+1} \cdot \frac{m}{m+1} \nonumber\\
& = & \frac{4m}{m+1}\:\:,
\end{eqnarray}
as claimed.\\
\noindent Finally, we have
\begin{eqnarray}
\bayesZOrisk (1,m+1) & = & (m+1)\cdot 2
\min\left\{\frac{1}{m+1}, \frac{m}{m+1}\right\}\nonumber\\
& = & 2\:\:,
\end{eqnarray}
as claimed.
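The four closed forms above can be verified numerically (a sanity script of ours, with $\log$ base 2 as in this proof):

```python
import math

m = 7
p, q = 1.0 / (m + 1), m / (m + 1.0)

mats = (m + 1) * 2.0 * math.sqrt(p * q)                  # Matsushita
logl = (m + 1) * (-p * math.log2(p) - q * math.log2(q))  # log loss (base 2)
sq   = (m + 1) * 4.0 * p * q                             # square loss
zo   = (m + 1) * 2.0 * min(p, q)                         # 0/1 loss

assert abs(mats - 2.0 * math.sqrt(m)) < 1e-12
assert abs(logl - (math.log2(m + 1) + m * math.log2((m + 1) / m))) < 1e-12
assert abs(sq - 4.0 * m / (m + 1)) < 1e-12
assert abs(zo - 2.0) < 1e-12
# Upper bound on the log term: log(m+1) + 1/ln(2).
assert logl <= math.log2(m + 1) + 1.0 / math.log(2) + 1e-12
print("closed forms verified for m =", m)
```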
\section{Proof of Theorem \ref{theoremalpha}}\label{proof_theoremalpha}
That Matsushita's $\alpha$-loss is symmetric is a direct consequence
of its definition. It is proper because it is a convex combination of
two proper losses, Matsushita loss and the 0/1-loss \citep[Table
1]{rwCB}. As a consequence, its pointwise Bayes risk is the convex
combination of the two pointwise Bayes risks:
\begin{eqnarray}
\bayesalpharisk(u) = 2 \cdot(\alpha \cdot \sqrt{u(1-u)} + (1-\alpha) \cdot\min\{u, 1-u\}).
\end{eqnarray}
We get the canonical link in the subdifferential of the negative
pointwise Bayes risk:
\begin{eqnarray}
\alphalink (u) \defeq -\partial \bayesalpharisk(u) & = & \alpha \cdot \frac{2u - 1}{\sqrt{u(1-u)}} -2(1-\alpha)\cdot \left\{
\begin{array}{rcl}
1 & \mbox{ if } & u < 1/2\\
\big[ -1,1 \big] & \mbox{ if } & u = 1/2\\
-1 & \mbox{ if } & u > 1/2
\end{array}
\right. ,
\end{eqnarray}
and we immediately get the weight function from the fact that
$\weightalphaloss \defeq - \bayesalpharisk''$ \citep[Theorem 6]{rwCB}.
We get the corresponding convex surrogate of the proper loss by taking the convex conjugate of the negative pointwise Bayes risk:
\begin{eqnarray}
\leaf_\alpha(z) & \defeq & \sup_{u \in [0,1]} \{zu + 2 \cdot(\alpha \cdot \sqrt{u(1-u)} + (1-\alpha) \cdot\min\{u, 1-u\})\}.\label{negent}
\end{eqnarray}
We remark that if $z<0$ then the $\sup$ is going to be attained for $u$ closer to $0$ than to $1$ (thus $u \leq 1/2$), and if $z>0$, it is the opposite: the $\sup$ is going to be attained for $u$ closer to $1$ than to $0$ (thus $u \geq 1/2$). If $z = 0$, the $\sup$ is trivially attained at $u=1/2$ (that is, $\leaf_\alpha(0) = \bayesalpharisk(1/2) = 1$).\\
\noindent \textbf{Case 1: $\alpha = 0$} -- when $z< -2$ (resp. $z> 2$), the $\sup$ is attained for $u=0$ (resp. $u = 1$). Otherwise, the $\sup$ is attained for $u = 1/2$. Hence
\begin{eqnarray}
\leaf_0(z) & = & \left\{
\begin{array}{lcr}
0 & \mbox{ if } & z< -2\\
1 + \frac{z}{2} & \mbox{ if } & z\in 2\cdot [-1, 1]\\
z & \mbox{ if } & z > 2
\end{array}
\right. .\label{csurzero}
\end{eqnarray}
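The piecewise form \eqref{csurzero} can be checked against a direct numerical maximisation of \eqref{negent} at $\alpha = 0$ (a sanity sketch of ours):

```python
def ell0(z):
    # Closed form of the alpha = 0 surrogate from the proof.
    if z < -2.0:
        return 0.0
    if z <= 2.0:
        return 1.0 + z / 2.0
    return z

def ell0_num(z, n=200001):
    # Direct evaluation of sup_u { z*u + 2*min(u, 1-u) } on a fine grid over [0,1].
    return max(z * (k / (n - 1)) + 2.0 * min(k / (n - 1), 1.0 - k / (n - 1))
               for k in range(n))

for z in [-5.0, -2.0, -1.0, 0.0, 1.0, 2.0, 5.0]:
    assert abs(ell0(z) - ell0_num(z)) < 1e-4
print("alpha = 0 surrogate matches the numerical sup")
```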
\noindent \textbf{Case 2: $\alpha \neq 0$} -- Let us find the values of $z$ for which the $\sup$ in \eqref{negent} is attained at $u = 1/2$, that is, we want to find $z$ such that
\begin{eqnarray}
\left\{
\begin{array}{rcl}
zu + 2\alpha\sqrt{u(1-u)} + 2(1-\alpha)u & \leq & 1 + \frac{z}{2}, \forall u \in [0,1/2]\\
zu + 2\alpha\sqrt{u(1-u)} + 2(1-\alpha)(1-u) & \leq & 1 + \frac{z}{2}, \forall u \in [1/2,1]
\end{array}\right. .\label{feq1}
\end{eqnarray}
We consider the topmost condition in \eqref{feq1}. Reorganising, we want $2\alpha \sqrt{u(1-u)} \leq 1 + (z/2) - (z+2(1-\alpha))u$ for $u \in [0,1/2]$. Fix $z \defeq -2(1-\alpha)+\delta$, which gives the condition
\begin{eqnarray}
2\sqrt{u(1-u)} & \leq & 1 + \frac{\delta}{\alpha}\cdot \left(\frac{1}{2}-u\right), \forall u \in [0,1/2].
\end{eqnarray}
This condition obviously holds when $\delta \geq 0$, and it is violated when $\delta < 0$: near $u = 1/2$, the LHS is $1 - O((1/2-u)^2)$ whereas the RHS is $1 - \Theta(1/2-u)$, so the RHS drops below the LHS. So the topmost condition holds for $z\geq -2(1-\alpha)$. Regarding the bottommost condition, we now want $2\alpha \sqrt{u(1-u)} \leq 1 - 2(1-\alpha) + (z/2) - (z-2(1-\alpha))u$ for $u \in [1/2,1]$, which, after letting $z \defeq 2(1-\alpha)+\delta$, gives equivalently
\begin{eqnarray}
2\sqrt{u(1-u)} & \leq & 1 - \frac{\delta}{\alpha}\cdot \left(u-\frac{1}{2}\right), \forall u \in [1/2,1].
\end{eqnarray}
While the condition trivially holds when $\delta \leq 0$, it is violated when $\delta > 0$: near $u = 1/2$, the LHS is $1 - O((u-1/2)^2)$ whereas the RHS is $1 - \Theta(u-1/2)$. To summarize, the $\sup$ in \eqref{negent} is attained at the trivial argument $u = 1/2$ exactly when $z \in [-2(1-\alpha), +\infty) \cap (-\infty, 2(1-\alpha)] = 2(1-\alpha)\cdot [-1,1]$, and we get
\begin{eqnarray}
\leaf_\alpha (z) & = & 1 + \frac{z}{2} \mbox{ if } z \in 2(1-\alpha)\cdot [-1,1],
\end{eqnarray}
which, we also remark, recovers the middle case of \eqref{csurzero} when $\alpha \rightarrow 0$.\\
\noindent Now, when $z \not\in 2(1-\alpha)\cdot [-1,1]$, we can differentiate \eqref{negent} to find the argument $u$ realising the max.
Let
\begin{eqnarray}
h_-(u) & \defeq & (z+2(1-\alpha)) \cdot u + 2 \alpha \cdot \sqrt{u(1-u)}\nonumber\\
& & = \alpha \cdot \underbrace{\left(Z_- u + 2 \sqrt{u(1-u)}\right)}_{\defeq g_-(u)},\nonumber\\
h_+(u) & \defeq & 2(1-\alpha) + (z-2(1-\alpha)) \cdot u + 2 \alpha \cdot \sqrt{u(1-u)}\nonumber\\
& & = 2(1-\alpha) + \alpha \cdot \underbrace{\left(Z_+ u + 2 \sqrt{u(1-u)}\right)}_{\defeq g_+(u)},
\end{eqnarray}
with $Z_- \defeq (z+2(1-\alpha))/\alpha, Z_+ \defeq (z-2(1-\alpha))/\alpha$.
We compute $\max_{[0,1/2]} h_-(u)$ and $\max_{[1/2,1]} h_+(u)$, since the larger of the two gives the convex conjugate.
\noindent Let us focus on $h_-(u)$. Differentiating $g_-(u)$, the argument $u$ we seek satisfies
\begin{eqnarray}
Z_- +\frac{1-2u}{\sqrt{u(1-u)}} & = & 0,\label{firsteqZ1}
\end{eqnarray}
\textit{i.e.} $1 - 2 u = - Z_-\sqrt{u(1-u)}$, or $1-(4+Z_-^2)u+(4+Z_-^2)u^2 = 0$, which yields the solution $u^*(z)$,
\begin{eqnarray}
u^*(z) & = & \frac{4+Z_-^2 \pm |Z_-|\sqrt{4+Z_-^2}}{2(4+Z_-^2)} = \frac{1}{2} \pm \frac{|Z_-|}{2 \sqrt{4+Z_-^2}} = \frac{1}{2} - \frac{|Z_-|}{2 \sqrt{4+Z_-^2}} ,
\end{eqnarray}
because we maximize $g_-$ in $[0,1/2]$. We get:
\begin{eqnarray}
h_-(u^*(z)) & = & \frac{\alpha Z_-}{2} - \frac{\alpha |Z_-|Z_-}{2 \sqrt{4+Z_-^2}} + 2\alpha \sqrt{\frac{1}{4} - \frac{Z_-^2}{4(4+Z_-^2)}}\nonumber\\
& = & \frac{\alpha Z_-}{2} - \frac{\alpha |Z_-|Z_-}{2 \sqrt{4+Z_-^2}} + \alpha \sqrt{1 - \frac{Z_-^2}{4+Z_-^2}}\nonumber\\
& = & \frac{\alpha Z_-}{2} - \frac{\alpha |Z_-|Z_-}{2 \sqrt{4+Z_-^2}} + \frac{2\alpha}{\sqrt{4+Z_-^2}}\nonumber\\
& = & \alpha \cdot \left(\frac{Z_-}{2} + \frac{4 - |Z_-|Z_-}{2 \sqrt{4+Z_-^2}}\right) \nonumber\\
& = & \frac{z + 2(1-\alpha)}{2} + \frac{4\alpha^2 - |z + 2(1-\alpha)|(z + 2(1-\alpha))}{2 \alpha \sqrt{4+\left(\frac{z + 2(1-\alpha)}{\alpha}\right)^2}} \nonumber\\
& = & 1 - \alpha+ \frac{z}{2} + \frac{4\alpha^2 - |z + 2(1-\alpha)|(z + 2(1-\alpha))}{2 \sqrt{4\alpha^2+(z + 2(1-\alpha))^2}} \defeq h^*_-(z). \label{eqzz1}
\end{eqnarray}
\noindent We now focus on $h_+(u)$. It is straightforward to check that \eqref{firsteqZ1} still holds but with $Z_+$ replacing $Z_-$ and
\begin{eqnarray}
u^*(z) & = & \frac{1}{2} + \frac{|Z_+|}{2 \sqrt{4+Z_+^2}} \geq 1/2,
\end{eqnarray}
leading to
\begin{eqnarray}
h_+(u^*(z)) & = & 2 (1-\alpha) + \frac{\alpha Z_+}{2} + \frac{\alpha |Z_+|Z_+}{2 \sqrt{4+Z_+^2}} + 2\alpha \sqrt{\frac{1}{4} - \frac{Z_+^2}{4(4+Z_+^2)}}\nonumber\\
& = & 2 (1-\alpha) +\frac{z - 2(1-\alpha)}{2} + \frac{4\alpha^2 + |z - 2(1-\alpha)|(z - 2(1-\alpha))}{2 \sqrt{4\alpha^2+(z - 2(1-\alpha))^2}}\nonumber\\
& = & 1 - \alpha+ \frac{z}{2} + \frac{4\alpha^2 + |z - 2(1-\alpha)|(z - 2(1-\alpha))}{2 \sqrt{4\alpha^2+(z - 2(1-\alpha))^2}} \defeq h^*_+(z).\label{eqzz2}
\end{eqnarray}
To finish up, we need to compute $\leaf_\alpha(z) = \max\{h^*_-(z), h^*_+(z)\}$ for $z \not\in 2(1-\alpha)\cdot [-1,1]$.
\noindent \textbf{Case 2.1: $z < -2(1-\alpha)$} --- In this case,
\begin{eqnarray}
h^*_-(z) & = & 1 - \alpha+ \frac{z}{2} + \frac{4\alpha^2 + (z + 2(1-\alpha))^2}{2 \sqrt{4\alpha^2+(z + 2(1-\alpha))^2}},\nonumber\\
& = & 1 - \alpha+ \frac{z}{2} + \frac{\sqrt{4\alpha^2+(z + 2(1-\alpha))^2}}{2}.\nonumber\\
h^*_+(z) & = & 1 - \alpha+ \frac{z}{2} + \frac{4\alpha^2 - (z - 2(1-\alpha))^2}{2 \sqrt{4\alpha^2+(z - 2(1-\alpha))^2}},
\end{eqnarray}
and it is easy to check that $h^*_-(z) > h^*_+(z)$: the last term of $h^*_-(z)$ is at least $\alpha$ (strictly, since $z + 2(1-\alpha) \neq 0$) while the last term of $h^*_+(z)$ is at most $\alpha$.
\noindent \textbf{Case 2.2: $z > 2(1-\alpha)$} --- In this case,
\begin{eqnarray}
h^*_-(z) & = & 1 - \alpha + \frac{z}{2} + \frac{4\alpha^2 - (z + 2(1-\alpha))^2}{2 \sqrt{4\alpha^2+(z + 2(1-\alpha))^2}},\nonumber\\
h^*_+(z) & = & 1 - \alpha + \frac{z}{2} + \frac{4\alpha^2 + (z - 2(1-\alpha))^2}{2 \sqrt{4\alpha^2+(z - 2(1-\alpha))^2}}\nonumber\\
& = & 1 - \alpha+ \frac{z}{2} + \frac{\sqrt{4\alpha^2+(z - 2(1-\alpha))^2}}{2}.
\end{eqnarray}
and, by the symmetric argument, $h^*_+(z) > h^*_-(z)$.
\noindent To summarize \textbf{Case 2}, we get the convex conjugate and surrogate loss for Matsushita $\alpha$-entropy:
\begin{eqnarray}
\leaf_\alpha (z) & = &
\left\{\begin{array}{rcl}
1 - \alpha+ \frac{z}{2} + \frac{\sqrt{4\alpha^2+(z + 2(1-\alpha))^2}}{2} & \mbox{ if } & z < -2(1-\alpha)\\
1 + \frac{z}{2} & \mbox{ if } & z \in 2(1-\alpha)\cdot [-1,1]\\
1 - \alpha+ \frac{z}{2} + \frac{\sqrt{4\alpha^2+(z - 2(1-\alpha))^2}}{2} & \mbox{ if } & z > 2(1-\alpha)
\end{array}
\right.,
\end{eqnarray}
which can be further simplified to
\begin{eqnarray}
\leaf_\alpha (z) & = & 1 + \frac{z}{2} + \iver{z \not\in 2(1-\alpha)\cdot [-1,1]}\cdot \left(\sqrt{\alpha^2+\left(\frac{|z|}{2} - (1-\alpha)\right)^2} - \alpha\right),
\end{eqnarray}
and the convex surrogate is just by definition
\begin{eqnarray}
\alphasur(z) & = &
\leaf_\alpha (-z), \label{propCSUR}
\end{eqnarray}
as claimed. We also get the inverse canonical link by
differentiating $\leaf_\alpha$, giving
\begin{eqnarray}
{\alphalink}^{-1}(z) & \defeq & \leaf'_\alpha (z) \nonumber\\
& = & \frac{1}{2} \cdot \left( 1 + \iver{z \not\in 2(1-\alpha)\cdot
[-1,1]}\cdot \mathrm{sign}(z) \cdot \frac{\frac{|z|}{2} - (1-\alpha)}{\sqrt{\alpha^2+\left(\frac{|z|}{2} - (1-\alpha)\right)^2} }\right)
\end{eqnarray}
This completes the proof of Theorem \ref{theoremalpha}.
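As a numerical sanity check of the closed forms above (not part of the proof), the simplified expression for $\leaf_\alpha$ can be compared against a brute-force evaluation of the $\sup$ in \eqref{negent}, and the link pair can be checked to be mutually inverse away from $u = 1/2$; a minimal Python sketch:

```python
import numpy as np

def leaf(z, a):
    # simplified closed form of the conjugate leaf_alpha
    out = 1.0 + z / 2.0
    if abs(z) > 2 * (1 - a):
        out += np.sqrt(a**2 + (abs(z)/2 - (1 - a))**2) - a
    return out

def leaf_sup(z, a, n=400001):
    # brute-force sup_{u in [0,1]} { z*u + 2*(a*sqrt(u(1-u)) + (1-a)*min(u, 1-u)) }
    u = np.linspace(0.0, 1.0, n)
    f = z*u + 2.0*(a*np.sqrt(u*(1.0-u)) + (1.0-a)*np.minimum(u, 1.0-u))
    return float(np.max(f))

def link(u, a):
    # canonical link (u != 1/2)
    return a*(2.0*u-1.0)/np.sqrt(u*(1.0-u)) - 2.0*(1.0-a)*np.sign(0.5-u)

def inv_link(z, a):
    # inverse canonical link = leaf'
    out = 1.0
    if abs(z) > 2.0*(1.0-a):
        s = abs(z)/2.0 - (1.0-a)
        out += np.sign(z)*s/np.sqrt(a**2 + s**2)
    return out / 2.0

for a in (0.25, 0.5, 1.0):
    for z in (-3.0, -0.4, 0.9, 4.0):
        assert abs(leaf(z, a) - leaf_sup(z, a)) < 1e-3
    for u in (0.1, 0.3, 0.8, 0.95):
        assert abs(inv_link(link(u, a), a) - u) < 1e-9
```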
\section{Proof of Theorem \ref{thBoostDT1}}\label{proof_thBoostDT1}
The proof proceeds in two steps. First, we introduce some notation and
explain why our WLA in Definition \ref{wlaKMDT} is equivalent to that of
\citet[Section 3]{kmOT}. We then proceed to the proof itself.\\
\noindent $\triangleright$ \textbf{Notations and the Weak Learning Assumption}: recall that our objective is to minimise
\begin{eqnarray}
\bayesalpharisk(h) & \defeq & \sum_{\leaf \in \leafset}
w(\leaf) \bayesalpharisk(q(\leaf)),\label{eqENTR-SM}
\end{eqnarray}
where $h$ is a tree and $\leafset$ is its set of leaves. Note also
that $\sum_{\leaf} w(\leaf) = w({\mathcal{S}})$, which is \textit{not}
normalized. Although the normalization makes no difference to the
argument, we stick to \citet{kmOT}'s setting and assume that our loss in
\eqref{eqENTR-SM} is \textit{normalized} (thus divided by
$w({\mathcal{S}})$). We shall remove this assumption at the end of the proof.
We drop the boosting iteration index from $w$, so that $w(\leaf) \defeq \sum_i w_i \cdot \iver{i \in \leaf}$, and $q(\leaf) \in [0, 1]$ denotes the relative proportion of positive examples reaching leaf $\leaf$,
\begin{eqnarray}
q(\leaf) & \defeq & (1/w(\leaf)) \cdot \sum_i \iver{(i \in \leaf) \wedge (y_i = +1)}\cdot w_i.
\end{eqnarray}
It should be clear at this stage that, because we spend part of our DP budget each time we learn a split in a tree, we need to minimise \eqref{eqENTR-SM} as fast as possible under the weakest possible assumptions. Boosting gives us a very convenient framework to do so. The notations we use are simplified as summarized in Figure \ref{f-tree-not}, so that for example $q \defeq q(\leaf)$.
\begin{figure}[t]
\begin{center}
\begin{tabular}{c}
\includegraphics[trim=50bp 580bp 680bp
10bp,clip,width=0.80\linewidth]{Figs/FigTree}
\end{tabular}
\end{center}
\caption{Notations used in our proof of Theorem \ref{thBoostDT1}: leaf
$\lambda$ in tree $h$ is replaced by a binary subtree with root test
$g : \mathbb{R} \rightarrow \{0, 1\}$ and two
new leaves $\lambda_0$ and $\lambda_1$, yielding the grown tree $h\oplus(g, \leaf)$. The total proportion of examples reaching $\lambda$ (and therefore subject to test $g$) is $w$; the relative proportion of those for which $g(.)=0$ (resp. $g(.) = 1$) is $1-\tau$ (resp. $\tau$). The relative proportion of positive examples in $\lambda$ (resp. $\lambda_0$; resp. $\lambda_1$) is $q$ (resp. $p$; resp. $r$).}
\label{f-tree-not}
\end{figure}
We first review the weak learning assumption (WLA) for decision trees as formulated in \citet{kmOT}: for a split $g$ at leaf $\leaf$ to meet the WLA, it must be (weakly) correlated with the labels of the examples reaching $\leaf$. This correlation is measured not with respect to the current weights $w$ but with respect to a distribution restricted to leaf $\leaf$ that gives equal total weight to positive and negative examples: let
\begin{eqnarray}
w_{\leaf, i} & \defeq & w_i \cdot \left\{
\begin{array}{rcl}
0 & \mbox{ if } & i \not\in \leaf\\
\frac{1}{2q} & \mbox{ if } & (i \in \leaf) \wedge (y_i = +1)\\
\frac{1}{2(1-q)} & \mbox{ if } & (i \in \leaf) \wedge (y_i = -1)
\end{array}
\right. .
\end{eqnarray}
\begin{definition}(Weak learning assumption, \citet{kmOT})\label{wlaKM}
Fix $\upgamma > 0$. Split $g$ at leaf $\leaf$ satisfies the $\upgamma$-weak learning assumption (WLA for short, omitting $\upgamma$) iff
\begin{eqnarray}
\left| \sum_i w_{\leaf, i} \cdot \iver{( (g(\ve{x}_i) = 0) \wedge (y_i = +1) ) \vee ( (g(\ve{x}_i) = 1) \wedge (y_i = -1) )} - \frac{1}{2}\right| & \geq & \upgamma. \label{wladef1}
\end{eqnarray}
\end{definition}
It is not hard to check that, provided the splits are closed under negation (that is, if $g$ is a potential split then so is $\neg g$), then Definition \ref{wlaKM} is equivalent to the weak hypothesis assumption of \citet[Lemma 2]{kmOT}. To better see the correlation, define $g^{\nicefrac{+}{-}} \defeq -1 + 2g \in \{-1, 1\}$. Then it is not hard to check that
\begin{eqnarray*}
\lefteqn{\sum_i w_{\leaf, i} \cdot \iver{( (g(\ve{x}_i) = 0) \wedge (y_i = +1) ) \vee ( (g(\ve{x}_i) = 1) \wedge (y_i = -1) )}}\\
& = & \frac{1}{2} \cdot \sum_i w_{\leaf, i} \cdot (1 - y_i g^{\nicefrac{+}{-}}(\ve{x}_i))\\
& = & \frac{1}{2} \cdot \left( 1 - \sum_i w_{\leaf, i} \cdot y_i g^{\nicefrac{+}{-}}(\ve{x}_i) \right),
\end{eqnarray*}
so the WLA is equivalent to $|\sum_i w_{\leaf, i} \cdot y_i
g^{\nicefrac{+}{-}}(\ve{x}_i)| \geq 2 \upgamma$. That is, using the edge
notation $\eta(\ve{w}, h) \defeq \sum_i w_i y_i h(\ve{x}_i)$, where $h :
\mathcal{X} \rightarrow \mathbb{R}$ and $\ve{w}$ is a discrete
distribution over the training sample $\mathcal{S}$, we can
reformulate the weak learning assumption as: split $g$ at leaf $\leaf$
satisfies the $\upgamma$-WLA iff $|\eta(\ve{w}_\leaf, g^{\nicefrac{+}{-}})| \geq
\upgamma$, which is Definition \ref{wlaKMDT} and is therefore
equivalent to Definition \ref{wlaKM} up to a factor 2 in the weak
learning guarantee.
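The rewriting above is a purely algebraic identity; a minimal Python sketch checking it on random weights, labels and splits (variable names are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 200
y = rng.choice([-1, 1], size=m)      # labels
g = rng.integers(0, 2, size=m)       # binary split g(x_i) in {0, 1}
w = rng.random(m)
# rebalance so positive and negative examples each carry total weight 1/2,
# mimicking the leaf-restricted distribution w_{leaf, i}
wp, wn = w[y == 1].sum(), w[y == -1].sum()
w_leaf = np.where(y == 1, w / (2 * wp), w / (2 * wn))

gpm = 2 * g - 1                      # g^{+/-} in {-1, +1}
lhs = np.sum(w_leaf * (((g == 0) & (y == 1)) | ((g == 1) & (y == -1))))
rhs = 0.5 * (1.0 - np.sum(w_leaf * y * gpm))
assert abs(lhs - rhs) < 1e-12
```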
\noindent $\triangleright$ \textbf{Proof of the Theorem}: we now
embark on the proof of Theorem \ref{thBoostDT1}. The proof follows the same
schema as \cite{kmOT} with some additional details to handle the
change of $\alpha$ in the course of training a DT. We first summarize
the high-level details of the proof. Denote by $h\oplus(g, \leaf)$ the tree $h$ in which leaf
$\leaf$ has been replaced by a split indexed by some $g: \mathbb{R}
\rightarrow \{0,1\}$ satisfying the weak learning assumption (Figure \ref{f-tree-not}). The
decrease in $\bayesrisk(.)$, $\Delta \defeq
\bayesrisk(h)-\bayesrisk(h\oplus(g, \leaf))$, is lower-bounded as a function of
$\upgamma$ and then used to upper-bound the number of iterations (each
of which replaces a leaf by a binary subtree) needed to reach a
given value of $\bayesrisk(.)$. Following \cite{kmOT}, we can write $\Delta = w(\leaf) \cdot \Delta_{\bayesalpharisk}(q, \tau, \delta)$, with
\begin{eqnarray}
\Delta_{\bayesalpharisk}(q, \tau, \delta) & \defeq & \bayesalpharisk(q) -
(1-\tau)
\bayesalpharisk(q-\tau\delta) -\tau \bayesalpharisk(q+(1-\tau)\delta)\label{defDELTA1}
\end{eqnarray}
where $\delta \defeq
\upgamma q(1-q)/(\tau(1-\tau))$ and $\tau$ denotes the
\textit{relative} proportion of examples in leaf
$\leaf$ for which $g = 1$, following \cite{kmOT}. We thus have
\begin{eqnarray}
\tau & \defeq & \frac{\sum_i w_i \cdot \iver{(i\in \leaf) \wedge
(g(\ve{x}_i) = 1)}}{\sum_i w_i \cdot \iver{i\in \leaf}}.
\end{eqnarray}
We also introduce normalized weights with notation
$\tilde{w}_i \defeq w_i / w(\mathcal{S})$, so the total normalized
weight of examples reaching leaf $\leaf$ can also be denoted with the
tilde: $\tilde{w}(\leaf) \defeq \sum_i \tilde{w}_i \cdot \iver{i\in
\leaf}$.
\begin{figure}[t]
\begin{center}
\begin{tabular}{c}
\includegraphics[trim=20bp 630bp 580bp
30bp,clip,width=0.80\linewidth]{Figs/FigAlgo}
\end{tabular}
\end{center}
\caption{Sequence of key parameters for the induction of a DT, which
leads to tree $h_{t+1}$ after having split leaf $\leaf_t$ in
$h_t$. $\alpha_t$ is the parameter chosen for the M$\alpha$-loss.}
\label{f-learn}
\end{figure}
\noindent We now let $h_\ell$ denote the current DT with $\ell$
leaves and $\ell-1$ internal nodes, the first tree being thus the single
root leaf $h_1$. We obtain $h_{\ell+1}$ by splitting a leaf $\leaf_\ell \in
\leafset(h_\ell)$, chosen to minimize
\begin{eqnarray}
\bayesalphariskparam{\ell}(h_{\ell+1}) & \defeq & \alpha_{\ell} \cdot \bayesmatrisk(h_{\ell+1}) + (1-\alpha_{\ell}) \cdot \bayeserrrisk(h_{\ell+1})\nonumber
\end{eqnarray}
over all possible leaf splits in $\leafset(h_\ell)$. Figure \ref{f-learn}
summarizes the whole process of getting $h_{\ell+1}$ from $h_\ell$.
\begin{lemma}\label{lemBCONV}
Suppose the sequence of $\alpha_\ell$ satisfies:
\begin{eqnarray}
\alpha_\ell & \leq & \alpha_{\ell-1} \cdot \exp\left(\frac{\upgamma^2 \tilde{w}_\ell }{16} \cdot (1-\alpha_{\ell-1})\right), \forall \ell>0,\label{condKMOT1}
\end{eqnarray}
with $\tilde{w}_\ell$ the total normalized weight of examples reaching leaf $\leaf_\ell$ split at iteration $\ell$.
Then for any $\xi \in (0,1]$, the empirical risk of $h_L$ satisfies $\emprisk(h_L) \leq \xi$ as long as
\begin{eqnarray}
\sum_{\ell=1}^L \tilde{w}_\ell \alpha_\ell & \geq & \frac{16}{\upgamma^2} \cdot \log \frac{1}{\xi}.
\end{eqnarray}
\end{lemma}
\begin{proof}
We first need a technical Lemma, in which we replace $\alpha_\ell$ by
$\alpha$ for the sake of readability.
\begin{lemma}\label{lemBINFDELTA}(Equivalent of \citet[Lemma 13]{kmOT}
for $\Delta_{\bayesalpharisk}$) Fix $\alpha \in [0,1]$.
If $\upgamma < 0.2$ and $q$ is sufficiently small, then $\Delta_{\bayesalpharisk}$ is minimized by $\tau \in [0.4, 0.6]$.
\end{lemma}
\begin{proof}
We have
\begin{eqnarray}
\Delta_{\bayesalpharisk}(q, \tau, \delta) & = & \alpha \cdot \Delta_{\bayesmatrisk}(q, \tau, \delta) + (1-\alpha) \cdot \Delta_{\bayeserrrisk}(q, \tau, \delta).
\end{eqnarray}
Suppose without loss of generality that $p \leq q \leq r$. It follows that if $r \leq 1/2$ or $p \geq 1/2$, $\Delta_{\bayeserrrisk}(q, \tau, \delta) = 0$ so we get the result directly from \citet[Lemma 13]{kmOT}. Otherwise, we have two cases.
\noindent \textbf{Case 1}: $q \leq 1/2, r>1/2$. In this case,
\begin{eqnarray*}
\Delta_{\bayeserrrisk}(q, \tau, \delta) & = & 2q -
2 (1-\tau)(q-\tau\delta) -2\tau (1-(q+(1-\tau)\delta))\\
& = & 2\tau \cdot \left( 2q + 2(1-\tau) \delta -1 \right)\\
& = & 2\tau\cdot \left( 2q + \frac{2\upgamma q(1-q)}{\tau} - 1\right)\\
& = & 2\tau (2q - 1) + 4 \upgamma q(1-q),
\end{eqnarray*}
under the additional condition (for $r > 1/2$)
\begin{eqnarray}
\tau & < & \frac{2\upgamma q(1-q)}{1-2q} \nonumber\\
& & \sim_0 2\upgamma q\label{condR}.
\end{eqnarray}
We get $\partial \Delta_{\bayeserrrisk}(q, \tau, \delta) / \partial \tau = 2 (2q-1)$ and so
\begin{eqnarray}
\frac{\partial \Delta_{\bayesalpharisk}(q, \tau, \delta)}{\partial \tau} & = & \alpha \cdot \frac{\partial \Delta_{\bayesmatrisk}(q, \tau, \delta)}{\partial \tau} + 2(1-\alpha)(2q-1)\nonumber\\
& \leq & \alpha \cdot \frac{\partial \Delta_{\bayesmatrisk}(q, \tau, \delta)}{\partial \tau}
\end{eqnarray}
since $q \leq 1/2$, and it follows from Lemma 13 in \cite{kmOT} that $\partial \Delta_{\bayesmatrisk}(q, \tau, \delta) / \partial \tau \leq 0$ for $\tau \leq 0.4$; furthermore, under the conditions of that Lemma ($q$ sufficiently small, $\upgamma < 0.2$), \eqref{condR} precludes $\tau \geq 0.6$ in Case 1.
\noindent \textbf{Case 2}: $q \geq 1/2, p<1/2$. In this case, we remark that $\Delta_{\bayesalpharisk}$ is invariant to the change
$p\mapsto 1-p$,
$q\mapsto 1-q$,
$r\mapsto 1-r$, which brings us back to Case 1.
\end{proof}
The following Lemma brings the key brick to the proof of Lemma \ref{lemBCONV}.
\begin{lemma}\label{tekLEM}
Using notations of Figure \ref{f-tree-not}, suppose the split put at leaf $\leaf_\ell$ in $h_\ell$ satisfies the $\upgamma$-Weak Learning Assumption and, furthermore, that the sequence of $\alpha$s satisfies \eqref{condKMOT1}.
Then we have
\begin{eqnarray}
\bayesalphariskparam{\ell} (h_{\ell+1}) & \leq & \left(1 - \frac{\upgamma^2 \tilde{w}_\ell \alpha_\ell}{16}\right)\cdot \bayesalphariskparam{\ell-1} (h_{\ell}).\label{bONEITER}
\end{eqnarray}
\end{lemma}
\textbf{Remark}: the key result for Matsushita's loss in \citet[Theorem 10]{kmOT} follows from the particular case of Lemma \ref{tekLEM} for $\alpha_\ell = 1, \forall \ell$ (for which condition \eqref{condKMOT1} obviously holds for any $\upgamma$ and $\tilde{w}_\ell$).
\begin{proof}
We use the notations of Figures \ref{f-tree-not} and \ref{f-learn}. As long as the split satisfies the $\upgamma$-Weak Learning Assumption, we get from the proof of \citet[Theorem 10]{kmOT}
\begin{eqnarray}
\bayesmatrisk(h_{\ell+1}) & \leq & \left(1 - \frac{\upgamma^2 \tilde{w}_\ell }{16}\right) \cdot \bayesmatrisk(h_{\ell}),\label{eqKM1}
\end{eqnarray}
further noting that the use of Lemma \ref{lemBINFDELTA} is ``hidden'' in this bound, but proceeds as in the proof of \citet[Theorem 10]{kmOT}. Recall that when we tune $\alpha$, by definition
\begin{eqnarray}
\bayesalphariskparam{\ell} (h_{\ell+1}) & \defeq & \alpha_\ell \cdot \bayesmatrisk(h_{\ell+1}) + (1-\alpha_\ell) \cdot \bayeserrrisk(h_{\ell+1}) ,\nonumber\\
\bayesalphariskparam{\ell-1} (h_{\ell}) & \defeq & \alpha_{\ell-1} \cdot \bayesmatrisk(h_{\ell}) + (1-\alpha_{\ell-1}) \cdot \bayeserrrisk(h_{\ell}) .\nonumber
\end{eqnarray}
Now we have, successively because of \eqref{eqKM1} and $\bayeserrrisk(h_{\ell+1}) \leq \bayeserrrisk(h_{\ell})$ (error cannot increase as the partition of $\mathcal{X}$ achieved by $h_{\ell+1}$ is finer than that of $h_\ell$),
\begin{eqnarray}
\bayesalphariskparam{\ell} (h_{\ell+1}) & \leq & \alpha_{\ell} \cdot \left(1 - \frac{\upgamma^2 \tilde{w}_\ell }{16}\right) \cdot \bayesmatrisk(h_{\ell}) + (1-\alpha_\ell) \cdot \bayeserrrisk(h_{\ell+1}) \nonumber\\
& \leq & \alpha_{\ell} \cdot \left(1 - \frac{\upgamma^2 \tilde{w}_\ell }{16}\right) \cdot \bayesmatrisk(h_{\ell}) + (1-\alpha_\ell) \cdot \bayeserrrisk(h_{\ell})\nonumber\\
& & = \alpha_{\ell} \cdot \left(1 - \frac{\upgamma^2 \tilde{w}_\ell }{16}\right) \cdot \bayesmatrisk(h_{\ell}) + Q \cdot \bayeserrrisk(h_{\ell}) \nonumber\\
& & + (1-\alpha_{\ell-1}) \cdot \left(1- \frac{\upgamma^2 \tilde{w}_\ell \alpha_{\ell}}{16}\right) \cdot \bayeserrrisk(h_{\ell})\label{llKM2},
\end{eqnarray}
with
\begin{eqnarray}
Q & \defeq & \alpha_{\ell-1} - \alpha_\ell + \frac{\upgamma^2 \tilde{w}_\ell }{16} \cdot \alpha_\ell(1-\alpha_{\ell-1}).
\end{eqnarray}
Now, if
\begin{eqnarray}
\alpha_\ell & \leq & \frac{\alpha_{\ell-1}}{1 - \frac{\upgamma^2 \tilde{w}_\ell }{16} \cdot (1-\alpha_{\ell-1})},\label{approxEQ1}
\end{eqnarray}
then $Q \geq 0$. Since $\bayeserrrisk(h_{\ell}) \leq \bayesmatrisk(h_{\ell})$,
\begin{eqnarray}
\lefteqn{\alpha_{\ell} \cdot \left(1 - \frac{\upgamma^2 \tilde{w}_\ell }{16}\right) \cdot \bayesmatrisk(h_{\ell}) + Q \cdot \bayeserrrisk(h_{\ell})}\nonumber \\
& \leq & \alpha_{\ell} \cdot \left(1 - \frac{\upgamma^2 \tilde{w}_\ell }{16}\right) \cdot \bayesmatrisk(h_{\ell}) + Q \cdot \bayesmatrisk(h_{\ell}) \nonumber\\
& & =\left(\alpha_\ell - \frac{\upgamma^2 \tilde{w}_\ell \alpha_\ell}{16} + \alpha_{\ell-1} - \alpha_\ell + \frac{\upgamma^2 \tilde{w}_\ell }{16} \cdot \alpha_\ell(1-\alpha_{\ell-1})\right)\cdot \bayesmatrisk(h_{\ell})\nonumber\\
& = & \alpha_{\ell-1} \cdot \left(1 - \frac{\upgamma^2 \tilde{w}_\ell \alpha_\ell}{16}\right)\cdot \bayesmatrisk(h_{\ell}),\nonumber
\end{eqnarray}
and so, assembling with \eqref{llKM2}, we get
\begin{eqnarray}
\bayesalphariskparam{\ell} (h_{\ell+1}) & \leq & \alpha_{\ell-1} \cdot \left(1 - \frac{\upgamma^2 \tilde{w}_\ell \alpha_\ell}{16}\right)\cdot \bayesmatrisk(h_{\ell}) + (1-\alpha_{\ell-1}) \cdot \left(1- \frac{\upgamma^2 \tilde{w}_\ell \alpha_{\ell}}{16}\right) \cdot \bayeserrrisk(h_{\ell})\nonumber\\
& & = \left(1 - \frac{\upgamma^2 \tilde{w}_\ell \alpha_\ell}{16}\right)\cdot (\alpha_{\ell-1} \cdot \bayesmatrisk(h_{\ell}) + (1-\alpha_{\ell-1}) \cdot \bayeserrrisk(h_{\ell}))\nonumber\\
& = & \left(1 - \frac{\upgamma^2 \tilde{w}_\ell \alpha_\ell}{16}\right)\cdot \bayesalphariskparam{\ell-1} (h_{\ell}),
\end{eqnarray}
which completes the proof of Lemma \ref{tekLEM} once we use the fact
that $1-z \leq \exp(-z)$ on the denominator of \eqref{approxEQ1}:
this lower-bounds its right-hand side, so \eqref{condKMOT1} -- which is
the definition of $\Gamma$-monotonicity in the main file -- is a
sufficient condition for \eqref{approxEQ1} to hold. Notice finally that the first
split, on $h_1$ to get $h_2$ ($\ell = 1$), introduces a dependence on
$\alpha_0 \in [0,1]$ to compute the M$\alpha_0$-loss of the root
leaf. Since $\bayesalpharisk(q)\leq \bayesmatrisk(q), \forall q \in
[0,1]$, we just pick $\alpha_0 = 1$, which leaves complete freedom to
pick $\alpha_1 \in [0,1]$ under $\Gamma$-monotonicity.
\end{proof}
To finish the proof of Lemma \ref{lemBCONV}, we use the fact that $1-z \leq \exp(-z)$ and unravel \eqref{bONEITER}: after $L$ iterations of boosting, under the conditions of Lemma \ref{tekLEM} and since $\bayesalphariskparam{0}(h_{1}) \leq 1$, we get
\begin{eqnarray}
\bayesalpharisk(h_{L}) & \leq & \exp\left(-\frac{\upgamma^2}{16}\cdot
\sum_{\ell=1}^L \tilde{w}_\ell
\alpha_\ell\right), \label{eqCONV1C}
\end{eqnarray}
from which, since $\alpha_\ell \in [0,1], \forall \ell$, the empirical risk of $h_L$ satisfies $\emprisk(h_L) = \bayeserrrisk(h_{L}) \leq \bayesalpharisk(h_{L})$, and a sufficient condition for $\emprisk(h_L) \leq \xi$ is thus
\begin{eqnarray}
\sum_{\ell=1}^L \tilde{w}_\ell \alpha_\ell & \geq & \frac{16}{\upgamma^2} \cdot \log \frac{1}{\xi},
\end{eqnarray}
which is the statement of Lemma \ref{lemBCONV}.
\end{proof}
Remark that Lemma \ref{lemBCONV} is Theorem \ref{thBoostDT1} \textit{with normalized
weights}. If we consider unnormalized weights in $\bayesalpharisk$
then we need to multiply the right hand side of \eqref{eqCONV1C} by
$w({\mathcal{S}})$, but we also have in this case $\emprisk(h_L) \leq
\bayesalpharisk(h_{L}) / w({\mathcal{S}})$, which in fact does not
change the statement for normalized weights. We also remark that the
Weak Learning Assumption is not affected by this change in
normalization, so we get the statement of Theorem \ref{thBoostDT1} for unnormalized
weights as well.
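As a sanity check of the unrolled bound \eqref{eqCONV1C} (not part of the proof): any non-increasing $\alpha$ schedule satisfies \eqref{condKMOT1}, since $\exp(\cdot) \geq 1$, and the product of the per-split factors in \eqref{bONEITER} is dominated by the exponential bound. A minimal Python sketch, with hypothetical values for $\upgamma$ and the $\tilde{w}_\ell$:

```python
import math

# any non-increasing alpha schedule satisfies condition (condKMOT1),
# since the exponential factor is >= 1
gamma, L = 0.1, 5000
w = [0.5 * (0.9 ** l) + 1e-3 for l in range(L)]        # hypothetical leaf weights
alpha = [max(1.0 - 1e-4 * l, 0.2) for l in range(L)]   # non-increasing schedule

for l in range(1, L):
    cap = alpha[l-1] * math.exp(gamma**2 * w[l] * (1 - alpha[l-1]) / 16)
    assert alpha[l] <= cap + 1e-15                     # condition (condKMOT1)

prod = 1.0
for l in range(L):
    prod *= 1 - gamma**2 * w[l] * alpha[l] / 16        # per-split factor
S = sum(wl * al for wl, al in zip(w, alpha))
assert prod <= math.exp(-gamma**2 * S / 16)            # 1 - z <= exp(-z), unrolled
```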
\section{Proof of Theorem \ref{thBoostLC1}}\label{proof_thBoostLC1}
\begin{algorithm}[t]
\caption{\maboost}\label{cboost}
\begin{algorithmic}
\STATE \textbf{Input} sample ${\mathcal{S}} = \{(\bm{x}_i, y_i), i
= 1, 2, ..., m\}$, number of iterations $T$, loss and update parameters
\begin{eqnarray}
\alpha & \in & (0, 1]\nonumber\\
\pi & \in & [0, 1)\nonumber\\
a & \in & \frac{\alpha}{M^2}\cdot \left[ 1 - \pi, 1 + \pi\right];
\end{eqnarray}
\STATE Step 1 : let $w_i = 1/2, \forall i = 1, 2, ..., m$; // initial weights
\STATE Step 2 : \textbf{for} $t = 1, 2, ..., T$
\STATE \hspace{1.1cm} Step 2.1 : let $h_t \leftarrow
\weak({\mathcal{S}}, \bm{w}_t)$\; // weak classifier
\STATE \hspace{1.1cm} Step 2.2 : let $\beta_t \leftarrow (a/m) \cdot \sum_{i}
{w_{ti} y_{i} h_t(\bm{x}_i)}$\; //
leveraging coefficient
\STATE \hspace{1.1cm} Step 2.3 : \textbf{for} $i = 1, 2, ..., m$, let
\begin{eqnarray}
w_{(t+1)i} & \leftarrow & {\alphalink}^{-1}\left( -\beta_t y_{i}
h_t(\bm{x}_i) + {\alphalink} (w_{ti})\right)
\quad(\in [0,1])\:\:; \label{defwun}
\end{eqnarray}
\STATE \textbf{Return} $H_T = \sum_t \beta_t h_t$.
\end{algorithmic}
\end{algorithm}
\begin{figure}[t]
\begin{center}
\begin{tabular}{c}
\includegraphics[trim=5bp 0bp 20bp
10bp,clip,width=0.45\linewidth]{Figs/plot_second_derivative}
\end{tabular}
\end{center}
\caption{Second derivative of the convex surrogate $\alphasur$, for
various values of $\alpha$. The color code follows Figure
\ref{f-alphaM} in the main file.}
\label{f-der2}
\end{figure}
We first display in Algorithm \maboost~the complete pseudo-code of our
approach to boosting using the M$\alpha$-loss. In stating the
algorithm, we have simplified notations; in particular we can indeed
check that the leveraging coefficient of $h_t$ satisfies:
\begin{eqnarray}
\beta_t & = & a \tilde{w}_t \eta(\tilde{\ve{w}}_t, h_t).
\end{eqnarray}
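A minimal Python sketch of Algorithm \maboost~on toy data, using an assumed coordinate weak learner ($h_t(\ve{x}) = \pm x_j \in \{-1,+1\}$, hence $M = 1$) and the link pair derived in the proof of Theorem \ref{theoremalpha}; this illustrates Steps 1--2.3 only, not the paper's experimental setup:

```python
import numpy as np

def link(u, a):
    # canonical link; sign(0) = 0 picks the middle of the subdifferential at u = 1/2
    return a * (2*u - 1) / np.sqrt(np.clip(u*(1-u), 1e-12, None)) \
        - 2*(1-a)*np.sign(0.5 - u)

def inv_link(z, a):
    s = np.abs(z)/2 - (1-a)
    out = np.where(np.abs(z) > 2*(1-a), np.sign(z)*s/np.sqrt(a**2 + s**2), 0.0)
    return (1 + out) / 2

rng = np.random.default_rng(1)
m, d, T = 500, 5, 30
X = rng.choice([-1.0, 1.0], size=(m, d))
y = np.where(rng.random(m) < 0.9, X[:, 0], -X[:, 0])   # noisy labels

alpha, pi = 0.9, 0.0
a_step = alpha * (1 - pi)            # a = alpha / M^2 with M = 1
w = np.full(m, 0.5)                  # Step 1
H = np.zeros(m)
for t in range(T):                   # Step 2
    edges = (w * y) @ X / m
    j = int(np.argmax(np.abs(edges)))        # Step 2.1: coordinate weak learner
    h = np.sign(edges[j]) * X[:, j]          # flip so the edge is non-negative
    beta = a_step * np.mean(w * y * h)       # Step 2.2: leveraging coefficient
    w = inv_link(-beta * y * h + link(w, alpha), alpha)  # Step 2.3: weight update
    H += beta * h
err = float(np.mean(np.sign(H) != y))
assert np.all((w >= 0) & (w <= 1))           # weights stay in [0, 1]
```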
We make use of the same
proof technique as in \citet[Theorem 7]{nwLO}. We
sketch here the main steps. A first quantity we define is:
\begin{eqnarray}
X & \defeq & \expect_{{\mathcal{S}}}\left[ (y_i
H_{t}(\ve{x}_i) - y_i
H_{t+1}(\ve{x}_i)) \alphasur '(y_i
H_{t}(\ve{x}_i))\right]
\nonumber\\
& = & \beta_t \expect_{{\mathcal{S}}}\left[ -y_i h_t (\ve{x}_i) \cdot \left(- {\alphalink}^{-1}\left(-y_i
H_{t}(\ve{x}_i)\right)\right)\right]\label{fEQ1}\\
& = & \beta_t \expect_{{\mathcal{S}}}\left[w_{ti} y_i h_t (\ve{x}_i) \right]\label{fEQ2}\\
& = & \beta_t \cdot \frac{1}{m}\cdot \sum_i w_{ti} y_i h_t (\ve{x}_i)\label{fEQ3}\\
& = & a \tilde{w}^2_t \eta^2(\tilde{\ve{w}}_t, h_t).\label{bX}
\end{eqnarray}
\eqref{fEQ1} holds because of \eqref{propCSUR} and the fact that
$H_{t+1}(\ve{x}_i) = H_{t}(\ve{x}_i) + \beta_t h_t (\ve{x}_i)$ by
definition. \eqref{fEQ2} holds because of the definition of $w_{ti}$
and \eqref{fEQ3} is just a rewriting using the distribution of
examples in $\mathcal{S}$. A second quantity we define is
\begin{eqnarray}
Y(\mathcal{Z}) & \defeq & \expect_{{\mathcal{S}}}\left[ (y_i
H_{t}(\ve{x}_i) - y_i
H_{t+1}(\ve{x}_i))^2
\alphasur ''(z_i)\right] \label{defYY},
\end{eqnarray}
where $\mathcal{Z} \defeq \{z_1, z_2, ..., z_m\} \subset
\mathbb{R}$. We then need to compute the second derivative of
$\alphasur$, which we find to be (Figure \ref{f-der2})
\begin{eqnarray}
\alphasur''(z) & = & \left\{
\begin{array}{ccl}
0 & \mbox{ if } & z \in 2(1-\alpha)\cdot(-1,1)\\
\frac{2\alpha^2}{\left(4\alpha^2+(|z|-2(1-\alpha))^2\right)^{\frac{3}{2}}}
& \mbox{ if } & z \not\in 2(1-\alpha)\cdot[-1,1]\\
\mbox{undefined} & \mbox{ if } & z \in 2(1-\alpha)\cdot\{-1,1\}
\end{array}
\right.,
\end{eqnarray}
from which we easily find
\begin{eqnarray}
\sup_z \alphasur'' & = & \frac{1}{4\alpha} \:\: \leq \:\: \frac{1}{2\alpha},
\end{eqnarray}
and therefore for any $\mathcal{Z} \subset
\mathbb{R}$,
\begin{eqnarray}
Y(\mathcal{Z}) & \leq & \frac{1}{2\alpha}\cdot \expect_{{\mathcal{S}}}\left[ (y_i
H_{t}(\ve{x}_i) - y_i
H_{t+1}(\ve{x}_i))^2\right]\nonumber\\
& & = \frac{1}{2\alpha}\cdot \expect_{{\mathcal{S}}}\left[ \left(a \tilde{w}_t \eta(\tilde{\ve{w}}_t, h_t)\cdot
h_t(\ve{x}_i)\right)^2\right]\nonumber\\
& \leq & \frac{a^2 \tilde{w}^2_t \eta^2(\tilde{\ve{w}}_t, h_t) M^2}{2\alpha}\label{bY}.
\end{eqnarray}
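As a quick finite-difference sanity check (not part of the proof), one can verify numerically that the second derivative of $\alphasur$ never exceeds the $\frac{1}{2\alpha}$ constant used in \eqref{bY}; a minimal Python sketch:

```python
import numpy as np

def surrogate(z, a):
    # alphasur(z) = leaf_alpha(-z), using the closed form derived earlier
    t = -z
    out = 1.0 + t / 2.0
    if abs(t) > 2 * (1 - a):
        out += np.sqrt(a**2 + (abs(t)/2 - (1-a))**2) - a
    return out

eps = 1e-4
for a in (0.3, 0.6, 1.0):
    zs = np.linspace(-6.0, 6.0, 2001)
    d2 = [(surrogate(z - eps, a) - 2*surrogate(z, a) + surrogate(z + eps, a)) / eps**2
          for z in zs]
    assert max(d2) <= 1/(2*a) + 1e-3   # second derivative stays below 1/(2*alpha)
```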
We then get from the proof of \citet[Theorem 7]{nwLO} and \eqref{bX}, \eqref{bY} that there
exists a set $\mathcal{Z} \subset \mathbb{R}$ such that
\begin{eqnarray}
\expect_{{\mathcal{S}}}\left[ \alphasur (y_i H_{t}(\ve{x}_i))\right] - \expect_{{\mathcal{S}}}\left[
\alphasur (y_i H_{t+1}(\ve{x}_i))\right] & \geq & X - Y(\mathcal{Z})
\nonumber\\
& \geq & a \tilde{w}^2_t \eta^2(\tilde{\ve{w}}_t, h_t) - \frac{a^2 \tilde{w}^2_t \eta^2(\tilde{\ve{w}}_t, h_t) M^2}{2\alpha}\nonumber\\
& & = \left(1 - \frac{aM^2}{2\alpha}\right)\cdot a \tilde{w}^2_t \eta^2(\tilde{\ve{w}}_t, h_t) .
\end{eqnarray}
Suppose
\begin{eqnarray}
a & \in & \frac{\alpha}{M^2}\cdot \left[ 1 - \pi, 1 + \pi\right]
\end{eqnarray}
for some $\pi \in [0,1)$. We then have:
\begin{eqnarray}
\expect_{{\mathcal{S}}}\left[ \alphasur (y_i H_{t}(\ve{x}_i))\right] - \expect_{{\mathcal{S}}}\left[
\alphasur (y_i H_{t+1}(\ve{x}_i))\right] & \geq &
\frac{(1-\pi^2)\alpha}{2M^2}\cdot \tilde{w}^2_t \eta^2(\tilde{\ve{w}}_t, h_t),
\end{eqnarray}
so after combining $T$ classifiers in the linear combination, we get
\begin{eqnarray}
\expect_{{\mathcal{S}}}\left[
\alphasur (y_i H_{T}(\ve{x}_i))\right] & \leq & \alphasur(0) -
\frac{(1-\pi^2)\alpha}{2M^2}\cdot
\sum_{t=1}^T
\tilde{w}^2_t \eta^2(\tilde{\ve{w}}_t, h_t)\nonumber\\
& & = 1 - \frac{(1-\pi^2)\alpha}{2M^2}\cdot
\sum_{t=1}^T
\tilde{w}^2_t \eta^2(\tilde{\ve{w}}_t, h_t).\label{eqSEQ}
\end{eqnarray}
To summarize, if the sequence of edges satisfies
\begin{eqnarray}
\frac{1}{M^2} \cdot \sum_{t=1}^T
\tilde{w}^2_t \eta^2(\tilde{\ve{w}}_t, h_t) & \geq & \frac{2
(1-\xi)}{(1-\pi^2)\alpha},\label{eqCONST1111}
\end{eqnarray}
then
\begin{eqnarray}
\expect_{{\mathcal{S}}}\left[ \alphasur (y_i H_{T}(\ve{x}_i))\right] & \leq &
\xi.\label{eqCONST1112}
\end{eqnarray}
Since for any $\alpha > 0$, $\alphasur$ is strictly decreasing and
nonnegative, for
any $\theta\geq 0$, if $\expect_{{\mathcal{S}}}\left[
\iver{y_i H_{T}(\ve{x}_i)\leq \theta}\right] > \xi$, then
\begin{eqnarray}
\expect_{{\mathcal{S}}}\left[
\alphasur (y_i H_{T}(\ve{x}_i))\right] & > & \xi
\alphasur (\theta)
+
(1-\xi)\inf_z\alphasur
(z)\nonumber\\
& & \geq \xi
\alphasur (\theta).
\end{eqnarray}
Hence, we get from \eqref{eqCONST1111} and \eqref{eqCONST1112} that if
the sequence of edges satisfies
\begin{eqnarray}
\sum_{t=1}^T
\tilde{w}^2_t \eta^2(\tilde{\ve{w}}_t, h_t) & \geq &
\frac{2M^2
(1-\xi
\alphasur
(\theta))}{(1-\pi^2)\alpha},\label{eqCONST1113}
\end{eqnarray}
then $\expect_{{\mathcal{S}}}\left[
\alphasur (y_i H_{T}(\ve{x}_i))\right] \leq \xi
\alphasur(\theta)$ and so
\begin{eqnarray}
\expect_{{\mathcal{S}}}\left[
\iver{y_i H_{T}(\ve{x}_i)\leq \theta}\right] & \leq & \xi.\label{eqBB1}
\end{eqnarray}
It remains to remark that $\emprisk(H_T) \leq \expect_{{\mathcal{S}}}\left[
\iver{y_i H_{T}(\ve{x}_i)\leq 0}\right]$, so we pick
$\theta = 0$, for which $\alphasur(\theta) = 1$. Under the
$\upgamma$-WLA, we note that
\begin{eqnarray}
\tilde{w}^2_t \eta^2(\tilde{\ve{w}}_t, h_t) & \geq &
\tilde{w}^2_t\upgamma^2 M^2,\nonumber
\end{eqnarray}
and so, to summarise, under the $\upgamma$-WLA, if the sequence of
expected weights satisfies
\begin{eqnarray}
\sum_{t=1}^T
\tilde{w}^2_t & \geq &
\frac{2
(1-\xi)}{(1-\pi^2)\upgamma^2
\alpha},\label{eqCONST111}
\end{eqnarray}
then $\emprisk(H_T)\leq \xi$. This ends the proof of Theorem \ref{thBoostLC1}.
\section{Proof of Theorem \ref{thBOOSTDP1}}
\label{proof_thBOOSTDP1}
We first prove a preliminary result used in the main file.
\begin{lemma}\label{lemPRELIM}
For any $\alpha_\ell \in [0,1]$, any split $g$ on leaf $\leaf$ that
satisfies the $\upgamma$-Weak Learning Assumption on $h_\ell$ yields
\begin{eqnarray}
\bayesalphariskparam{\ell}(h_\ell
\oplus (g, \leaf)) & \leq & \left(1 - \frac{\upgamma^2 \alpha_\ell \tilde{w}(\leaf)
}{16}\right) \cdot \bayesalphariskparam{\ell}(h_\ell).
\end{eqnarray}
\end{lemma}
\begin{proof}
As long as split $g$ on leaf $\leaf$ satisfies the $\upgamma$-Weak Learning Assumption, we get from the proof of \citet[Theorem 10]{kmOT}
\begin{eqnarray}
\bayesmatrisk(h_\ell
\oplus (g, \leaf)) & \leq & \left(1 - \frac{\upgamma^2 \tilde{w}_\ell }{16}\right) \cdot \bayesmatrisk(h_{\ell}).\label{eqKM122}
\end{eqnarray}
This yields, $\forall \alpha_\ell \in [0,1]$,
\begin{eqnarray}
\lefteqn{\bayesalphariskparam{\ell}(h_\ell
\oplus (g, \leaf)) \defeq \alpha_\ell \bayesmatrisk(h_\ell
\oplus (g, \leaf)) + (1-\alpha_\ell) \bayeserrrisk(h_\ell
\oplus (g, \leaf))}\nonumber\\
& \leq & \alpha_\ell \left(1 - \frac{\upgamma^2 \tilde{w}_\ell }{16}\right)
\cdot \bayesmatrisk(h_{\ell}) + (1-\alpha_\ell) \bayeserrrisk(h_\ell
\oplus (g, \leaf))\nonumber\\
& \leq & \alpha_\ell \left(1 - \frac{\upgamma^2 \tilde{w}_\ell }{16}\right)
\cdot \bayesmatrisk(h_{\ell}) + (1-\alpha_\ell)
\bayeserrrisk(h_\ell)\label{eq1BSUP}\\
& = & \alpha_\ell \left(1 - \frac{\upgamma^2 \alpha_\ell \tilde{w}_\ell }{16}\right)
\cdot \bayesmatrisk(h_{\ell}) + (1-\alpha_\ell) \left(1 -
\frac{\upgamma^2 \alpha_\ell \tilde{w}_\ell }{16}\right) \cdot
\bayeserrrisk(h_\ell)+ Q\nonumber\\
& = & \left(1 - \frac{\upgamma^2 \alpha_\ell \tilde{w}(\leaf)
}{16}\right) \cdot \bayesalphariskparam{\ell}(h_\ell) + Q,
\end{eqnarray}
where \eqref{eq1BSUP} holds because the partition achieved by $h_\ell
\oplus (g, \leaf)$ is finer than that achieved by $h_\ell$ (hence,
its empirical risk cannot be greater), with
\begin{eqnarray}
Q & \defeq & \left[ \alpha_\ell \left(1 - \frac{\upgamma^2 \tilde{w}_\ell
}{16}\right)- \alpha_\ell \left(1 - \frac{\upgamma^2
\alpha_\ell \tilde{w}_\ell }{16}\right)\right]\cdot
\bayesmatrisk(h_{\ell}) \nonumber\\
& & + \left[(1-\alpha_\ell) - (1-\alpha_\ell) \left(1 -
\frac{\upgamma^2 \alpha_\ell \tilde{w}_\ell }{16}\right) \right]\cdot
\bayeserrrisk(h_\ell)\nonumber\\
& = & - \frac{\upgamma^2 \alpha_\ell \tilde{w}_\ell
}{16}(1-\alpha_\ell) \cdot
\bayesmatrisk(h_{\ell}) + \frac{\upgamma^2 \alpha_\ell \tilde{w}_\ell
}{16}(1-\alpha_\ell) \cdot
\bayeserrrisk(h_{\ell}) \nonumber\\
& = & - \frac{\upgamma^2 \alpha_\ell \tilde{w}_\ell
}{16}(1-\alpha_\ell) \cdot
(\bayesmatrisk(h_{\ell}) -
\bayeserrrisk(h_{\ell})) \nonumber\\
& \leq & 0
\end{eqnarray}
because $\bayesmatrisk(h_{\ell}) \geq \bayeserrrisk(h_{\ell})$ for any
$\alpha_\ell, h_\ell$. This ends the proof of Lemma \ref{lemPRELIM}.
\end{proof}
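The algebraic steps above can be sanity-checked numerically. The sketch below (Python; variable names are ours, not the paper's notation) verifies the decomposition defining $Q$ and that $Q \leq 0$ on random instances with $\bayesmatrisk(h_{\ell}) \geq \bayeserrrisk(h_{\ell})$:

```python
import random

def check_decomposition(alpha, gamma, w, B_mat, B_err):
    """Verify the identity used in Lemma lemPRELIM, with c = gamma^2*w/16:
    alpha*(1-c)*B_mat + (1-alpha)*B_err
      == (1 - alpha*c)*(alpha*B_mat + (1-alpha)*B_err) + Q,
    where Q = -(gamma^2*alpha*w/16)*(1-alpha)*(B_mat - B_err) <= 0
    whenever B_mat >= B_err."""
    c = gamma ** 2 * w / 16.0
    lhs = alpha * (1 - c) * B_mat + (1 - alpha) * B_err
    Q = -(gamma ** 2 * alpha * w / 16.0) * (1 - alpha) * (B_mat - B_err)
    rhs = (1 - alpha * c) * (alpha * B_mat + (1 - alpha) * B_err) + Q
    assert abs(lhs - rhs) < 1e-12   # exact decomposition
    assert Q <= 1e-12               # Q <= 0, so the contraction follows
    assert lhs <= (1 - alpha * c) * (alpha * B_mat + (1 - alpha) * B_err) + 1e-12

random.seed(0)
for _ in range(1000):
    alpha, gamma, w = random.random(), random.random(), random.random()
    B_err = random.random()
    B_mat = B_err + random.random()   # enforce B_mat >= B_err
    check_decomposition(alpha, gamma, w, B_mat, B_err)
```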
\begin{figure}[t]
\begin{center}
\begin{tabular}{c}
\includegraphics[trim=20bp 600bp 630bp
40bp,clip,width=0.80\linewidth]{Figs/FigGapModel}
\end{tabular}
\end{center}
\caption{In the $\delta$-Gap model of boosting, the total set of potential
splits $\mathcal{G}$ contains two subsets with regard to the current
leaf being split, $\leaf$. A subset $\setfat$ contains all
splits that guarantee a moderate decrease in the Bayes risk -- this
set is guaranteed to be non-empty under the Weak Learning
Assumption (Lemma \ref{lemPRELIM}). Another set, $\setslim$, contains all the other splits,
assumed to yield a decrease in the Bayes risk smaller by at least a
factor $\delta < 1$. In the main file, we have assumed for
simplicity that we can fix $\delta = \upgamma$ but the proof of
Theorem \ref{thBOOSTDP1} below relaxes this assumption.}
\label{f-gapmodel}
\end{figure}
Notations are as follows: $\mathcal{G}$ denotes the complete set of possible splits and
\begin{eqnarray}
\kappa & \defeq & \frac{\epsilon}{2\Delta^*_{\bayesalpharisk}(m)},
\end{eqnarray}
which depends on $\epsilon, m, \alpha, \leaf$ (See Corollary \ref{sensALPHA} in the
main file). $\nodeset(h)$ denotes the set of nodes of $h$, including
leaves in $\leafset(h)$.
\begin{definition}
For any node $\node \in \nodeset(h)$, let $\depth(\node)$ denote its
depth in $h$ and $\tilde{w}(\node) \in [0,1]$ the normalized weight of
examples reaching $\node$. The \textbf{tree-efficiency} of $\node$ in
$h$ is:
\begin{eqnarray}
J(\node, h) & \defeq & \frac{8\tilde{w}(\node)
\emprisk(h)^2}{2^{\depth(\node)}}
\quad \in [0,1].
\end{eqnarray}
\end{definition}
\begin{figure}[t]
\begin{center}
\begin{tabular}{c}
\includegraphics[trim=50bp 600bp 740bp
30bp,clip,width=0.60\linewidth]{Figs/FigMonotonic}
\end{tabular}
\end{center}
\caption{Visualisation of Lemma \ref{lem-rtnode}: root-to-node tree
efficiency is decreasing.}
\label{f-mono}
\end{figure}
The following Lemma gives a key property of the tree efficiency of a node.
\begin{lemma}\label{lem-rtnode}
(Tree efficiency is root-to-node decreasing) For any decision tree $h$, consider any path of nodes $\node_1, \node_2,
..., \node_k \in \nodeset(h)$ where $\node_1$ is the root of $h$ and
$\depth(\node_{i+1}) = \depth(\node_{i})+1$, $\forall i$. Then the
tree efficiency is strictly decreasing along this path:
$J(\node_i, h) > J(\node_{i+1}, h), \forall i$.
\end{lemma}
The proof of this Lemma follows from the fact that along such a path,
$\tilde{w}(.)$ is non-increasing while the depth strictly
increases. Figure \ref{f-mono} gives a schematic visualisation of Lemma \ref{lem-rtnode}.\\
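A small numeric illustration of Lemma \ref{lem-rtnode} (Python sketch; the path weights below are hypothetical, only required to be non-increasing from the root down):

```python
# J(node, h) = 8 * w(node) * risk^2 / 2^depth: along a root-to-node path
# the weight is non-increasing while the depth increases by one at each
# step, so the tree efficiency at least halves, hence strictly decreases.
def tree_efficiency(w_node, risk, depth):
    return 8.0 * w_node * risk ** 2 / 2 ** depth

weights = [1.0, 0.6, 0.35, 0.35, 0.1]   # hypothetical, non-increasing
risk = 0.25                              # empirical risk (<= 1/2)
effs = [tree_efficiency(w, risk, d) for d, w in enumerate(weights)]
assert all(a > b for a, b in zip(effs, effs[1:]))  # strictly decreasing
```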
We now prove Theorem \ref{thBOOSTDP1}. We consider two cases, starting
first with the simplified case of a single split, and then investigating a sequence of splits.\\
$\triangleright$ \textbf{Single split}: notation $h\oplus(g, \leaf)$ indicates decision tree $h$
in which leaf $\leaf \in \leafset(h)$ is replaced by split $g \in \mathcal{G}$.
It follows from \cite{fsDM} that the probability of picking
split $g$ for leaf $\leaf \in h$ under the exponential mechanism, $\pexpm ((g, \leaf))$, satisfies
\begin{eqnarray}
\pexpm ((g, \leaf)) & = & \frac{1}{Z}\cdot
\exp\left(-\kappa \cdot w(\mathcal{S}) \cdot F(h\oplus(g, \leaf))\right),
\end{eqnarray}
where $Z \defeq \sum_{g'\in \mathcal{G}} \exp\left(-\kappa
w(\mathcal{S}) \cdot F(h\oplus(g', \leaf)) \right)$. Notice that the part in the sum in
$F(h\oplus(g, \leaf))$ that does not depend on $\leaf$ can be factored
thanks to the $\exp$, which allows us to simplify
\begin{eqnarray}
\pexpm ((g, \leaf)) & = & \frac{1}{Z}\cdot \exp\left(-\kappa \cdot
\left[w(\leaf\wedge g) \cdot
\bayesrisk\left( \frac{w^1(\leaf\wedge
g)}{w(\leaf\wedge g)} \right) +
w(\leaf\wedge \neg g) \cdot \bayesrisk\left(
\frac{w^1(\leaf\wedge \neg g)}{w(\leaf\wedge \neg
g)} \right)\right]\right)\nonumber\\
& \propto & \exp\left(\kappa \cdot
\left[\bayesalpharisk(h) - \bayesalpharisk(h
\oplus (g, \leaf))\right]\right)
\end{eqnarray}
and $Z$ is the normalization coefficient modified
accordingly. Suppose, with $h$ and $\leaf$ fixed, that we have two subsets,
$\setfat$ and $\setslim$ such that
\begin{eqnarray}
\bayesalpharisk(h) - \bayesalpharisk(h
\oplus (g, \leaf)) & \geq
& \frac{\upgamma^2 \alpha \tilde{w}(\leaf) }{16} \cdot \bayesalpharisk(h),
\forall g \in \setfat\label{eqKM1bb},\\
\bayesalpharisk(h) - \bayesalpharisk(h
\oplus (g, \leaf)) & \leq
& \frac{\delta^2 \upgamma^2 \alpha\tilde{w}(\leaf) }{16} \cdot \bayesalpharisk(h),
\forall g \in \setslim\label{eqKM1cc},
\end{eqnarray}
where we recall that $\tilde{w}(\leaf)$ is the total normalized weight of
examples reaching leaf $\leaf$ \eqref{defNW}.
Assuming $\mathcal{G} = \setfat\cup \setslim$ and letting $\rho \defeq
|\setfat|/|\setslim|$, we get
\begin{eqnarray}
\frac{\pexpm (g \in \setfat | \leaf)}{\pexpm (g \in \setslim | \leaf)}
& \geq &
\rho
\cdot \exp\left(\frac{(1-\delta^2)\upgamma^2 \alpha\epsilon
\tilde{w}(\leaf)
}{32}\cdot \frac{\bayesalpharisk(h)}{\Delta^*_{\bayesalpharisk}(m)}\right).\label{eqBOUND22}
\end{eqnarray}
We want $\pexpm (g \in \setfat | \leaf) \geq \exp( - \xi)$ for some $\xi>0$. From
\eqref{eqBOUND22}, this
shall be the case if
\begin{eqnarray}
\frac{(1-\delta^2)\upgamma^2 \alpha\epsilon
\tilde{w}(\leaf)
}{32}\cdot
\frac{\bayesalpharisk(h)}{\Delta^*_{\bayesalpharisk}(m)} &
\geq
& \log\left(\frac{1}{\exp(\xi)-1}\right) -\log\rho\nonumber\\
& = & \logsur^{-1}(\xi) - \log \rho,
\end{eqnarray}
where $\logsur$ is the convex surrogate of the $\log$-loss. This can
also be inverted to get all $\xi$s for which this applies using the
fact that $\logsur$ is strictly decreasing,
as
\begin{eqnarray}
\xi & \geq & \logsur\left( \frac{(1-\delta^2)\upgamma^2 \alpha\epsilon
\tilde{w}(\leaf)
}{32}\cdot
\frac{\bayesalpharisk(h)}{\Delta^*_{\bayesalpharisk}(m)} + \log \rho\right).
\end{eqnarray}
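A quick numerical check of this inversion (Python sketch), assuming the surrogate is the log-loss $\logsur(z) = \log(1+e^{-z})$, for which $\logsur^{-1}(\xi) = \log\left(1/(e^{\xi}-1)\right)$:

```python
import math

def logsur(z):
    # assumed convex surrogate of the log-loss
    return math.log1p(math.exp(-z))

def logsur_inv(xi):
    # solves log(1 + exp(-z)) = xi for xi > 0
    return math.log(1.0 / math.expm1(xi))

# logsur is strictly decreasing ...
zs = [i / 10.0 for i in range(-50, 51)]
vals = [logsur(z) for z in zs]
assert all(a > b for a, b in zip(vals, vals[1:]))

# ... and logsur_inv is indeed its inverse
for xi in [0.1, 0.5, 1.0, 2.0]:
    assert abs(logsur(logsur_inv(xi)) - xi) < 1e-10
```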
$\triangleright$ \textbf{Sequence $\mathcal{L}$ of splits}: we now index
quantities $\leaf_\ell$ (replacing notation $\tilde{w}(\leaf_\ell)$ by
$\tilde{w}_\ell$ to follow Theorem \ref{thBoostDT1}), $h_\ell, \rho_\ell,
\alpha_\ell, \xi_\ell, \epsilon_\ell$. In particular, the exponential mechanism
to pick $g \in \mathcal{G}$ to split $\leaf_\ell$ in $h_\ell$
now becomes
\begin{eqnarray}
\pexpm ((g, \leaf_\ell)) \hspace{-0.3cm} & \propto & \hspace{-0.3cm}
\exp\left(-\frac{\epsilon_\ell w(\mathcal{S}) F(h_\ell\oplus(g, \leaf_\ell))}{2\Delta^*_{\bayesalpharisk}(m)} \right).\label{expSPLIT2}
\end{eqnarray}
We constrain the analysis to indexes $\ell$
in a specific set $\mathcal{L}$ of size $|\mathcal{L}|$. We get that for any
$\xi_\ell$,
\begin{eqnarray}
\xi_\ell \geq \logsur\left( \frac{(1-\delta^2)\upgamma^2
}{32}\cdot
\frac{\alpha_\ell\epsilon_\ell \tilde{w}_\ell\bayesalphariskparam{\ell}(h_\ell)}{\Delta^*_{\bayesalphariskparam{\ell}}(m)}
+ \log \rho_\ell\right) & \Rightarrow & \pexpm (g \in ({\setfat})_\ell | \leaf_\ell) \geq \exp( - \xi_\ell),
\end{eqnarray}
with the simplifying assumption that $\forall \ell, \mathcal{G} =
({\setfat})_\ell \cup ({\setslim})_\ell$. Because of the boosting
condition \eqref{eqBOOST11}, whenever the sequence of $\alpha_\ell$ is
$\upgamma^2/16$-monotonic, letting
\begin{eqnarray}
Q \defeq \frac{(1-\delta^2)\upgamma^2
}{32} & , & A_\ell \defeq
\frac{\epsilon_\ell
\tilde{w}_\ell \alpha_\ell\bayesalphariskparam{\ell}(h_\ell)}{\Delta^*_{\bayesalphariskparam{\ell}}(m)},
\end{eqnarray}
if furthermore $\xi_\ell \geq \logsur\left(Q A_\ell + \log \rho_\ell\right), \forall \ell$,
then with probability $\geq \exp(-\sum_\ell \xi_\ell)$, \textit{all} splits
in $\mathcal{L}$
satisfy the $\upgamma$-WLA and therefore the boosting condition in
\eqref{eqBOOST11} is met. In other words, the use of the exponential
mechanism to make splits differentially private does not endanger at all convergence
with high probability. We now have two competing objectives in a
differentially private induction of a top-down decision tree:\\
\noindent (i) we need to pick the $\epsilon_\ell$s so as to match the
total privacy budget allowed for the induction of a single tree,
\begin{eqnarray}
\frac{\betatree \epsilon}{T} & \defeq & \sum_\ell
\epsilon_\ell,
\end{eqnarray}
(composition theorem).\\
\noindent (ii) we want to find $\xi_\ell, \ell = 1, 2, ..., L$ such that we have
simultaneously, for some $\xi>0$,
\begin{eqnarray}
\sum_\ell \xi_\ell & \leq & \log \frac{1}{1-\xi},\label{bb1}\\
\xi_\ell & \geq & \logsur\left(Q A_\ell + \log \rho_\ell\right), \forall \ell,\label{bb2}
\end{eqnarray}
because then we can lowerbound the probability that all splits chosen
comply with the WLA:
\begin{eqnarray}
\pexpm \left(\wedge_\ell (g \in ({\setfat})_\ell | \leaf_\ell)\right) & \geq & 1 - \xi.
\end{eqnarray}
Note that, in particular for the first tree induced, $w(\mathcal{S}) =
m/2 = \Omega(m)$ and in all cases, $w(\mathcal{S}) \leq m = O(m)$, so
suppose $w(\mathcal{S}) = \xi' m$ with $\xi' \in (0,1)$ a constant\footnote{The
boosting weight update \eqref{defwunMF} prevents zero / unit weights if the
number of boosting iterations $T\ll \infty$.}. We have
\begin{eqnarray}
\bayesalphariskparam{\ell}(h_\ell) & = & \sum_{\leaf_\ell \in \leafset(h_\ell)} w(\leaf_\ell) \cdot
\bayesalphariskparam{\ell}\left(
\frac{w^1(\leaf_\ell)}{w(\leaf_\ell)}
\right)\nonumber\\
& = & w(\mathcal{S}) \cdot \sum_{\leaf_\ell \in \leafset(h_\ell)} \frac{w(\leaf_\ell)}{w(\mathcal{S})} \cdot
\bayesalphariskparam{\ell}\left(
\frac{w^1(\leaf_\ell)}{w(\leaf_\ell)}
\right)\nonumber\\
& \geq & 2\xi' m \cdot \emprisk(h_\ell).
\end{eqnarray}
Then we can
refine and lowerbound
\begin{eqnarray}
A_\ell & = & \frac{\epsilon_\ell
\tilde{w}_\ell \alpha_\ell w(\mathcal{S}) \cdot \bayesalphariskparam{\ell}\left(h_\ell
\right)}{3+2\alpha_\ell(\sqrt{m} - 1)}\nonumber\\
& \geq & \epsilon_\ell
\tilde{w}_\ell \cdot \frac{2 \alpha_\ell m\xi' \emprisk(h_\ell) }{3+2\alpha_\ell(\sqrt{m} - 1)}.\nonumber
\end{eqnarray}
Suppose we fix\footnote{We note that $ \emprisk(h_\ell) \leq 1/2, \forall
h_\ell$.}
\begin{eqnarray}
\alpha_\ell & \defeq & \frac{\emprisk(h_\ell)}{\emprisk(h_1)} \quad(\in
[0,1]),
\end{eqnarray}
which, since $\emprisk(h_\ell)$ is
non-increasing, is therefore $\upgamma^2/16$-monotonic as a
sequence. We get
\begin{eqnarray}
A_\ell & \geq & \xi' \epsilon_\ell
\tilde{w}_\ell \cdot \frac{4 m \emprisk(h_\ell)^2 }{3
\emprisk(h_1)+4 \emprisk(h_\ell) (\sqrt{m} - 1)}.\nonumber
\end{eqnarray}
Define for $r\geq 0$
\begin{eqnarray}
t(z) & \defeq & \frac{4z^2}{3r+4z}.
\end{eqnarray}
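The linear lower bound on $t$ used in the next step can be verified numerically (Python sketch; $q \in (0,1)$): if $z \geq 3qr/(4(1-q))$, then $t(z) \geq qz$, with equality exactly at the threshold.

```python
def t(z, r):
    return 4.0 * z * z / (3.0 * r + 4.0 * z)

for q in [0.1, 0.5, 0.9]:
    for r in [0.5, 1.0, 2.0]:
        z0 = 3.0 * q * r / (4.0 * (1.0 - q))   # threshold
        assert abs(t(z0, r) - q * z0) < 1e-12  # equality at the threshold
        for z in [z0 + 0.1, 2.0 * z0 + 1.0]:
            assert t(z, r) >= q * z            # bound above the threshold
```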
We can check that if $z \geq (3qr)/(4(1-q))$ for some $q\in (0,1)$, then $t(z)
\geq q z$. Now,
\begin{eqnarray}
A_\ell & \geq & \xi' \epsilon_\ell
\tilde{w}_\ell \cdot \frac{4 m \emprisk(h_\ell)^2 }{3+4
\emprisk(h_\ell) \sqrt{m}}\nonumber\\
& = & \xi' \epsilon_\ell
\tilde{w}_\ell \cdot \frac{4 z^2 }{3+4
z}
\end{eqnarray}
for $z \defeq \emprisk(h_\ell) \sqrt{m}$. We get
\begin{eqnarray}
A_\ell & \geq & \xi' \epsilon_\ell
\tilde{w}_\ell\emprisk(h_\ell)^2 \sqrt{m},
\end{eqnarray}
provided $\emprisk(h_\ell) \sqrt{m} \geq (3
\emprisk(h_\ell) \xi')/(4(1-\emprisk(h_\ell)))$, which simplifies to
\begin{eqnarray}
m & \geq & \frac{9{\xi'}^2}{16(1-\emprisk(h_\ell))^2},
\end{eqnarray}
and since $\xi'\leq 1, \emprisk(h_\ell)\leq 1/2$, holds whenever
\begin{eqnarray}
m & \geq & \frac{9}{4}.\label{boundM1}
\end{eqnarray}
We then have
\begin{eqnarray}
\logsur\left(Q A_\ell + \log \rho_\ell\right) & \leq &
\logsur\left(\frac{(1-\delta^2)\upgamma^2 \xi' \epsilon_\ell
\tilde{w}_\ell \emprisk(h_\ell)^2
}{32} \cdot \sqrt{m} + \log \rho_\ell\right).
\end{eqnarray}
Suppose
\begin{eqnarray}
m & \geq & 3,
\end{eqnarray}
which implies \eqref{boundM1}. Fix now
\begin{eqnarray}
\epsilon_\ell & = & \frac{\betatree}{Td2^{\depth(\leaf_\ell)}}\cdot
\epsilon,\label{vllEPSILONt}\\
Td & \leq & \log m.\label{boundSIZE}
\end{eqnarray}
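The choice \eqref{vllEPSILONt} matches the per-tree budget of point (i): a tree of maximum depth $d$ splits at most $2^j$ leaves at depth $j \in \{0, \ldots, d-1\}$, so the per-split budgets sum to at most $\betatree\epsilon/T$. A numerical check (Python; the parameter values are arbitrary):

```python
# eps_l = beta * eps / (T * d * 2^depth(leaf_l)); at most 2^j splits occur
# at depth j, so the total spent on one tree is at most beta * eps / T.
beta, eps, T, d = 0.5, 1.0, 4, 6
total = sum((2 ** j) * beta * eps / (T * d * 2 ** j) for j in range(d))
assert abs(total - beta * eps / T) < 1e-12
```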
We recall that $d$ is the maximum depth of a tree and $T$ is the
number of trees in the boosted combination. $Td$ is therefore a proxy for the maximal number of tests in trees to
classify an observation. We then obtain
\begin{eqnarray}
\logsur\left(Q A_\ell + \log \rho_\ell\right) & \leq &
\logsur\left(\frac{\betatree (1-\delta^2)\upgamma^2{\xi'} \epsilon
}{32Td} \cdot \frac{\tilde{w}_\ell
\emprisk(h_\ell)^2\sqrt{m}}{2^{\depth(\leaf_\ell)}} + \log \rho_\ell\right)\nonumber\\
& \leq & \logsur\left( \frac{\betatree (1-\delta^2)\upgamma^2{\xi'} }{256}
\cdot J(\leaf_\ell, h) \cdot \frac{
\epsilon \sqrt{m}}{\log m} + \log \rho_\ell\right),
\end{eqnarray}
with
\begin{eqnarray}
J(\leaf_\ell, h) & \defeq & \frac{8\tilde{w}_\ell
\emprisk(h_\ell)^2}{2^{\depth(\leaf_\ell)}}
\quad \in [0,1].
\end{eqnarray}
Suppose now that
\begin{eqnarray}
\log \rho_\ell & \geq & - \frac{\betatree (1-\delta^2)\upgamma^2{\xi'} }{256}
\cdot J(\leaf_\ell, h) \cdot \frac{
\epsilon \sqrt{m}}{\log m},
\end{eqnarray}
which is equivalent to
\begin{eqnarray}
\frac{|\setfat|}{|\setslim|} & \geq & \exp\left(- \frac{\betatree (1-\delta^2)\upgamma^2{\xi'} }{256}
\cdot J(\leaf_\ell, h) \cdot \frac{
\epsilon \sqrt{m}}{\log m}\right),
\end{eqnarray}
or
\begin{eqnarray}
|\setfat| & \geq & \frac{|\mathcal{G}|}{1 + \exp\left(\frac{\betatree (1-\delta^2)\upgamma^2{\xi'} }{256}
\cdot J(\leaf_\ell, h) \cdot \frac{
\epsilon \sqrt{m}}{\log m}\right)}
\end{eqnarray}
and thus $\setfat$ cannot vanish (or at least not too fast as a
function of $m$) with respect to $\mathcal{G}$. This implies
\begin{eqnarray}
\logsur\left(Q A_\ell + \log \rho_\ell\right) & \leq &\logsur\left( Q'
\cdot J(\leaf_\ell, h) \cdot \frac{
\epsilon \sqrt{m}}{\log m} \right),
\end{eqnarray}
with
\begin{eqnarray}
Q' & \defeq & \frac{\betatree{\xi'} (1-\delta^2)\upgamma^2}{256}
\quad \in (0, 1/256].\nonumber
\end{eqnarray}
Notice that $Q' = \Theta(1)$, \textit{i.e.}, it is a constant. The concavity of $\log$ yields
\begin{eqnarray}
\sum_{\ell \in \mathcal{L}} \logsur\left( Q'
\cdot J(\leaf_\ell, h) \cdot \frac{
\epsilon \sqrt{m}}{\log m} \right) & \leq & |\mathcal{L}|
\log\left(1 +
\expect_{\mathcal{L}}
\exp\left(-Q'
\cdot J(\leaf_\ell, h) \cdot \frac{
\epsilon \sqrt{m}}{\log m} \right)\right),
\end{eqnarray}
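This Jensen step (concavity of $u \mapsto \log(1+u)$) can be checked numerically, again assuming $\logsur(z) = \log(1+e^{-z})$:

```python
import math
import random

def logsur(z):
    # assumed convex surrogate of the log-loss
    return math.log1p(math.exp(-z))

random.seed(1)
for _ in range(200):
    xs = [random.uniform(0.0, 5.0) for _ in range(random.randint(1, 20))]
    L = len(xs)
    lhs = sum(logsur(x) for x in xs)
    rhs = L * math.log(1.0 + sum(math.exp(-x) for x in xs) / L)
    assert lhs <= rhs + 1e-12
```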
and so if we pick
\begin{eqnarray}
\xi_\ell & \defeq & \logsur\left( Q'
\cdot J(\leaf_\ell, h) \cdot \frac{
\epsilon \sqrt{m}}{\log m} \right),
\end{eqnarray}
then a sufficient condition to have \eqref{bb1} is
\begin{eqnarray}
\expect_{\mathcal{L}}
\exp\left(-Q'
\cdot J(\leaf_\ell, h) \cdot \frac{
\epsilon \sqrt{m}}{\log m} \right) & \leq &
\left(\frac{1}{1-\xi}\right)^{\frac{1}{|\mathcal{L}|}}
-1.\label{cond1a1}
\end{eqnarray}
We also have $\forall \xi \in [0,1], |\mathcal{L}|\geq 1$,
\begin{eqnarray}
\left(\frac{1}{1-\xi}\right)^{\frac{1}{|\mathcal{L}|}}
-1 & \geq & \frac{\xi}{|\mathcal{L}|},
\end{eqnarray}
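This elementary inequality follows from $e^x \geq 1+x$ together with $\log(1/(1-\xi)) \geq \xi$; a grid check (Python):

```python
# check (1/(1-xi))**(1/L) - 1 >= xi/L for xi in [0,1) and L >= 1
for L in [1, 2, 5, 50]:
    for k in range(100):
        xi = k / 100.0
        lhs = (1.0 / (1.0 - xi)) ** (1.0 / L) - 1.0
        assert lhs >= xi / L - 1e-12
```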
so to get \eqref{cond1a1} it is sufficient that
\begin{eqnarray}
\expect_{\mathcal{L}}
\exp\left(-Q'
\cdot J(\leaf_\ell, h) \cdot \frac{
\epsilon \sqrt{m}}{\log m} \right) & \leq & \frac{\xi}{|\mathcal{L}|},
\end{eqnarray}
which is ensured if
\begin{eqnarray}
\min_{\ell \in \mathcal{L}} J(\leaf_\ell, h) & \geq & \frac{1}{Q'}\cdot \frac{\log m}{
\epsilon \sqrt{m}} \log \frac{|\mathcal{L}|}{\xi}\nonumber\\
& = & \Omega \left(\frac{\log m}{
\epsilon \sqrt{m}} \log \frac{|\mathcal{L}|}{\xi}\right) \label{condJJ}.
\end{eqnarray}
This ends the proof of Theorem \ref{thBOOSTDP1}.\\
\noindent \textbf{Remark}: Notice that $|\mathcal{L}| \leq 2^{d+1}-1$, so we get that condition
\eqref{condJJ} is satisfied if for example
\begin{eqnarray}
\min_{\ell \in \mathcal{L}} J(\leaf_\ell, h) & = & \Omega \left(\frac{\log m}{
\epsilon \sqrt{m}} \cdot \left(d + \log \frac{1}{\xi}\right)\right) \label{condJJ2}.
\end{eqnarray}
As long as for example
\begin{eqnarray}
\frac{\log m}{\sqrt{m}} & = & o(\epsilon),\\
d, \log \frac{1}{\xi} & = & o\left(\frac {\sqrt{m}}{\log m}\right),
\end{eqnarray}
then the constraint on $\min_{\ell \in \mathcal{L}} J(\leaf_\ell, h)$ in
\eqref{condJJ2} will vanish.
\section{Introduction}
It is widely believed that quantum computers, and quantum devices in general, can outperform their classical counterparts. In particular, there are problems that a quantum computer is believed to solve efficiently, while a classical computer would require a time exponential in the size of the input. In general, it is not possible to classically simulate such computations, and therefore, in order to verify that a generic quantum device functions correctly, we need to resort to different techniques. Currently, the most efficient way to verify a quantum computation is to employ cryptographic methods, where an almost classical verifier executes the computation using an untrusted but fully quantum prover. There have been a number of such verification methods \cite{fk,abe,efk,kdk2015,KKD14,tomo2014,hm2015,ruv2,mckague,gkw2015,hpf2015,DFPR13,Broadbent2015,GHZ_verification,MM2015,exp_secret,BGS13}, where generally there exists a trade-off between the practicality of the schemes and their generality, trust assumptions and security level. The target of this work is both to reduce the experimental requirements of the most general schemes and to achieve further improvements in the more restricted schemes. In order to make quantum verification schemes practical, a number of aspects can be considered:
\begin{itemize}
\item whether the verifier's devices are trusted or not;
\item what are the verifier's quantum technological requirements;
\item is the scheme suitable for any universal computation or for only a restricted class;
\item is the output of the quantum computation classical or quantum;
\item does the protocol tolerate errors due to noise;
\item how does the probability of failure scale;
\item what are the classical and quantum overheads and the round complexity of the scheme;
\item whether the quantum communication needs to be done during the computation (online) or can it be done at some earlier stage (offline);
\end{itemize}
A full review of all necessary parameters is beyond the scope of this paper, but it is worth noticing that all the above aspects have been addressed using protocols based on verification via blind quantum computing \cite{fk,efk,kdk2015,KKD14,gkw2015,hpf2015,DFPR13}. We will refer to this family of protocols collectively as VBQC schemes, where the key idea is based on hiding the underlying computation (also known as blindness). This allows the verifier to encode simple trap computations within a general computation that runs on a remote device, in such a way that the computation is not affected, while revealing no information to the device. The correctness of the general computation is then tested via the verification of the trap computation. The latter is significantly less costly and thus leads to an efficient scheme (essentially similar to an error detection code). What makes the procedure work is the blindness that hides the trap computation from the actual one. To elaborate further on the security parameter scaling, consider the following informal definition of \emph{verification} that we formalise later (for details see also \cite{fk}).
\begin{definition}\label{epsilon-verification}
A quantum computation protocol is $\epsilon$-verifiable if the probability of accepting an incorrect output (classical or quantum) for any choice of the prover's strategy is bounded by $\epsilon$.
\end{definition}
In a practical scenario, to be convinced of the correctness of the output obtained from a given quantum device, one needs a verification protocol where the security parameter ($\epsilon$) can be made arbitrarily small while keeping the cost (in terms of the experimental requirements) optimal. The standard technique for amplification when dealing with classical output is to simply repeat the protocol multiple times (say $d$); if all rounds are accepted and result in the same output, then this output is correct except with probability $\epsilon^d$. However, dealing with quantum output requires more elaborate methods (to deal with the possibility of coherent attacks)
that involve the use of full fault-tolerant computation and the presence of multiple traps in order to achieve exponential bounds.
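For classical outputs, the repetition argument is simple arithmetic: if one round accepts an incorrect output with probability at most $\epsilon$, then $d$ accepted rounds agreeing on the same output are all incorrect with probability at most $\epsilon^d$. A sketch (Python; the values of $\epsilon$ and the target failure probability are illustrative):

```python
import math

eps = 0.5       # illustrative per-round security parameter
target = 1e-9   # illustrative overall failure probability
d = math.ceil(math.log(target) / math.log(eps))
assert eps ** d <= target
print(d)  # prints 30
```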
\subsection{Our Contribution}
In this work we focus on further improving the underlying resource construction required for VBQC schemes. Our main results can be summarised as follows:
\begin{enumerate}
\item In Section \ref{Sec:DTG}, inspired by the dotted-complete graph state introduced in \cite{fk}, we give a generic construction where, for any given (universal or non-universal) graph state resource, multiple trap qubits isolated from the computation qubits
can be added. Unlike for the dotted-complete graph state, the overhead of the new construction is only linear in the size of the specific computation that will be performed. Furthermore, the traps are uniformly distributed and their positions are essentially independent of each other.
\item We use this construction to obtain a new universal VBQC protocol (Section \ref{Sec:Verification1}) that has lower cost. Since we are using a different resource, the proof technique had to be adapted accordingly. Our protocol, before embedding any boosting mechanism, already has a constant security parameter and thus allows a potentially straightforward one-shot experiment.
\item When the output of the quantum computation is classical, we use a repetition technique to boost the security of our protocol to arbitrarily small $\epsilon$ (Section \ref{Sec:Repetition}). Importantly, we can achieve this using a constant number of repetitions that is independent of the size of the computation. In previous VBQC protocols the number of repetitions required increased with the size of the computation.
\item For the general quantum output case, we use a fault-tolerant encoding of the computation, which boosts the security to arbitrarily small $\epsilon$ while at the same time requiring only linear overhead in the size of the computation (Section \ref{Sec:FT_Verification}). The overhead of most previous VBQC protocols is quadratic in the size of the computation, with the only exception of \cite{kdk2015}.
\end{enumerate}
Our generic construction could be explored to optimise various other existing VBQC protocols, and we briefly comment on that in Section \ref{Sec:Examples}.
\subsection{Related works}
There have been a number of papers on verification addressing different aspects. With no aim of giving a complete list, we give here a brief description of some related works. Aharonov, Ben-Or and Eban \cite{abe} provided the first verification protocol. It requires a linear overhead in the size of the computation, but also requires a verifier with involved quantum abilities, in particular one that can prepare entangled states whose size depends on the security parameter.
Following another approach, based on measurement-based quantum computation, Fitzsimons and Kashefi \cite{fk} obtained the optimal scheme from the point of view of the verifier's capability.
However, the overhead of the full scheme becomes quadratic. Recently, a solution addressing this issue was proposed in \cite{kdk2015}, combining the two protocols \cite{fk} and \cite{abe} in order to construct a hybrid scheme. This was the only verification protocol (before our work) that requires a linear number of qubits while at the same time requiring only the minimal quantum property of preparing single quantum systems from the verifier. However, the protocol requires the preparation of qudits (rather than qubits), where the dimension is dictated by the desired level of security. Moreover, the required resource is still constructed based upon the dotted-complete graph state, though of small constant size. Hence further investigation is required to establish the experimental simplicity of the two schemes, ours and the one in \cite{kdk2015}.
The first experimental implementation of a simplified verification protocol was presented in \cite{efk} where a repetition technique was explored as well. Other experiments on verifiable protocols include \cite{exp_secret} and an experiment based on the protocol in \cite{GHZ_verification}. However, none of these works are applicable to a full universal scheme like ours.
On the other hand, to achieve a classical verifier, new techniques have been proposed at the cost of dramatically increasing the overall overhead of the protocol \cite{ruv2} or increasing the number of provers \cite{mckague}. Other device-independent protocols \cite{gkw2015,hpf2015} used a single universal quantum prover and an untrusted qubit-measuring device; while the complexity improved, it is still far from experimentally realisable.
The VBQC protocol could generally be viewed as a prepare-and-send scheme (using the terminology from QKD). Equivalent schemes based on measurement-only could also be obtained \cite{tomo2014,hm2015}. In this scenario the prover prepares a universal resource and sends it qubit-by-qubit to the verifier, who performs different measurements in order to complete a quantum computation. These protocols are referred to as online protocols (in contrast to the offline protocols mentioned above) since the quantum operations of the verifier occur when they know what they want to compute. The online scheme can also achieve verification either by creating traps \cite{tomo2014}, or by measuring the stabilisers of the resource state \cite{hm2015}. These protocols could be improved using our techniques, as we comment in Section \ref{Sec:Examples}.
Finally a composable definition of \cite{fk} is given in \cite{DFPR13}, while a limited computational model (one-pure-qubit) is examined in \cite{KKD14}. Due to the generic nature of our construction these results would be applicable to our protocol as well.
The verification protocols in \cite{Broadbent2015,BGS13} are teleportation based. However, due to the existence of a general mapping between the teleportation (with two qubits measurement) and one-way computing (with one qubit measurement), see details in \cite{Aliferis04,mbqc}, one should be able to explore any possible improvement that our techniques could bring to these protocols, i.e. extending them to a fault-tolerant setting with quantum outputs.
\subsection{Background}
The family of VBQC protocols is conveniently presented in the measurement-based quantum computation (MBQC) model
\cite{onewaycomputer}, which is known to be equivalent to any gate teleportation model \cite{childs2005unified}. We will assume that the
reader is familiar with this model (also known as the one-way model); further details can be found in \cite{mbqc}. The general idea
behind an MBQC protocol is that one starts with a large and highly entangled multiparty state (the resource state) and the computation is
performed by carrying out single-qubit measurements. There is a (partial) order on the measurements since the basis of a measurement
could depend on the outcomes of previous measurements. The resource states used are known as \emph{graph states} as they are fully determined by a given graph; see details in \cite{hein2004multiparty}. One physical way to construct a graph state given the graph
description is to assign to each vertex of the graph a qubit initially prepared in the state $\ket{+}$ and, for each edge of the graph, to perform a $\textrm{controlled-}Z$ gate on the two adjacent vertices.
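This construction is easy to make concrete. The sketch below (Python; qubit $0$ taken as the most significant bit, an arbitrary convention) builds the state vector of $\ket{G}$ for a small graph; each edge $(a,b)$ contributes a $\textrm{controlled-}Z$ phase $(-1)^{b_a b_b}$ in the computational basis:

```python
from math import sqrt

def graph_state(n, edges):
    """State vector of |G>: every vertex starts in |+> and a
    controlled-Z is applied across every edge."""
    amp = 1.0 / sqrt(2.0) ** n
    state = []
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        sign = (-1) ** sum(bits[a] * bits[b] for (a, b) in edges)
        state.append(sign * amp)
    return state

# two-vertex example: CZ|++> = (|00> + |01> + |10> - |11>)/2
g2 = graph_state(2, [(0, 1)])
assert all(abs(x - y) < 1e-12 for x, y in zip(g2, [0.5, 0.5, 0.5, -0.5]))
```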
If one starts instead with a graph state whose qubits were prepared in a rotated basis $\ket{+_\theta}=1/\sqrt{2}(\ket{0}+e^{i\theta}\ket{1})$,
then it is possible to perform the same computation as with the non-rotated graph state by performing measurements in a similarly
rotated basis. This observation led to the formulation of the \emph{universal blind quantum computation} (UBQC) protocol \cite{bfk}.
Here a client prepares rotated qubits, where the rotation is known only to them. The client sends the qubits to the server (as soon as they
are prepared, hence there is no need for any quantum memory). Finally, the client instructs the server to perform entangling operations according to the graph and to perform single-qubit measurements at suitable angles and in a suitable order to realise the desired computation. During this protocol the client receives the outcomes of intermediate measurements, allowing them to classically evaluate the next measurement angle. Due to the unknown rotation, the server does not learn what computation they actually perform.
The UBQC protocol can be turned into a verification protocol where the client (referred to now as the verifier) can detect a cheating
server (referred to now as the prover). To do so, for certain vertices (called dummies) the verifier sends states from the set
$\{\ket{0},\ket{1}\}$, which has the same effect as a $Z$-basis measurement on that vertex. In any graph state, if a vertex is measured
in the $Z$-basis the result is a new graph state where that vertex and all its adjacent edges are removed. During the protocol the prover does not
know, for a particular vertex, whether the verifier sent a dummy qubit or not. This enables the verifier to isolate some qubits (disentangled
from the rest of the graph), and those qubits have fixed deterministic outcomes if the prover honestly followed the
instructions. The positions of those isolated qubits are unknown to the prover, and the verifier uses them as traps to test that the prover
performs the quantum operations that are given. This technique resulted in the first universal VBQC protocol \cite{fk}, which is the basis of
our paper. As we explain later, while the trapification idea is straightforward, it is challenging to find the most efficient way
of inserting trap qubits without breaking the general computation. The central focus of this paper is to introduce a generally
optimised scheme for constructing graph state resources for VBQC protocols.
\section{The dotted triple-graph construction}\label{Sec:DTG}
Our construction starts with a ``base'' graph $G$ such that the related graph state $\ket{G}$ can be used as the resource to perform a particular (or universal) quantum computation in MBQC. This graph is then ``decorated'' in a suitable way, resulting in a graph that we will call the dotted triple-graph $DT(G)$. This new graph leads to a suitable resource state $\ket{DT(G)}$ for running a verified quantum computation in an efficient way. The general idea is to construct the $DT(G)$ graph which, after some operations chosen secretly by the verifier, can be broken into three identical graphs. One will be used to perform the desired computation and the other two to insert trap computations to detect possible deviations. The way that $DT(G)$ is broken is chosen by the verifier, and thus the prover is blind as to which vertex belongs to which graph. This general idea was first introduced in \cite{fk}. The key difference of our construction is that while in \cite{fk} the breaking into subgraphs occurs in a global way, in our construction it happens locally. This difference results in a reduction in the number of vertices (and thus qubits) that are required. The precise meaning of ``locality'' will become apparent after we introduce our construction.
In this section we will only give definitions and properties of the dotted triple-graph construction when viewed purely as graph operations. Those properties will play a crucial role in the next sections where we will use as resource state the dotted triple-graph state $\ket{DT(G)}$ in order to obtain verifiable quantum computation protocols. Before going to the construction we give a definition:
\begin{definition}\label{dotting} We define the \emph{dotting} operator $D$ on graph $G$ to be the operator which transforms a graph $G$ to a new graph denoted as $D(G)$ and called \emph{dotted} graph, by replacing every edge in $G$ with a new vertex connected to the two vertices originally joined by that edge. We call the set of vertices of $D(G)$ previously inherited from $G$ as primary vertices $P(D(G))$, and the vertices added by the $D$ operation as added vertices denoted by $A(D(G))$.
\end{definition}
\noindent\textbf{Dotted triple-graph construction:}
\begin{enumerate}
\item We are given a base graph $G$ that has vertices $v\in V(G)$ and edges $e\in E(G)$, as in Figure \ref{figure1} (a). In the following steps we will give the new graph $DT(G)$, called dotted-triple graph and specify its vertices and edges.
\item For each vertex $v_i$, we define a set of three new vertices $P_{v_i}=\{p^{v_i}_1,p^{v_i}_2,p^{v_i}_3\}$.
\item Corresponding to each edge $e(v_i,v_j)\in E(G)$ of the base graph that connects the base vertices $v_i$ and $v_j$, we introduce a set of nine edges $E_{e(v_i,v_j)}$ that connect each of the vertices in the set $P_{v_i}$ with each of the vertices in the set $P_{v_j}$.
\item The graph whose vertices are $\cup_{v_i\in V(G)}P_{v_i}$ and whose edges are defined as in the previous step is called the triple-graph $T(G)$, as in Figure \ref{figure1} (b).
\item We perform the dotting operator $D$ on the triple graph $T(G)$ to obtain the dotted triple-graph $DT(G)$. An example of dotted triple-graph can be seen in Figure \ref{figure1} (c).
\end{enumerate}
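The construction steps above can be made concrete with a short Python sketch (an illustrative encoding of our own; vertex labels are arbitrary tuples). The final counts reflect the fact, derived below, that $|V(DT(G))|=3|V(G)|+9|E(G)|$:

```python
def dot(vertices, edges):
    """Dotting operator D: replace each edge with a new 'added' vertex
    connected to the two vertices originally joined by that edge."""
    new_vertices = list(vertices)
    new_edges = []
    added = []
    for (u, v) in edges:
        a = ("added", u, v)
        added.append(a)
        new_vertices.append(a)
        new_edges += [(u, a), (a, v)]
    return new_vertices, new_edges, added

def dotted_triple_graph(vertices, edges):
    """Steps 2-5: triple every base vertex, connect the triples of
    adjacent base vertices pairwise (nine edges per base edge), then dot."""
    P = {v: [("p", v, k) for k in (1, 2, 3)] for v in vertices}
    t_vertices = [p for ps in P.values() for p in ps]
    t_edges = [(pi, pj) for (u, v) in edges for pi in P[u] for pj in P[v]]
    return dot(t_vertices, t_edges)
```

For the base graph of Figure \ref{figure1} (a) (three vertices, two edges) this yields $3\cdot3+9\cdot2=27$ vertices.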
\begin{figure}
\includegraphics[width=1.0\columnwidth]{figure1.jpg}
\caption{(a) A base graph consisting of three vertices and two edges. (b) A triple-graph $T(G)$ where for each vertex $v$ there is a set of three vertices $P_v$. (c) A dotted triple-graph. For each edge of the base graph there is a set of nine added vertices $A_e$. The added vertices are denoted as squares, while the primary as circles.}
\label{figure1}
\end{figure}
Note that, according to Definition \ref{dotting} and the labelling in the above construction, the primary vertices are given as $P(DT(G)) = \cup_{v_i} P_{v_i}$. For convenience we also label the added vertices $A(DT(G))$ as follows. Corresponding to each edge $e(v_i,v_j)$ of the base graph $G$, there are now 9 added vertices, and we denote each such set of added vertices as $A_{e_{ij}}=\{a^{e_{ij}}_1,\cdots,a^{e_{ij}}_9\}$. Note that the number of vertices of the new graph is $|V(DT(G))|=3|V(G)|+9|E(G)|$.
If the maximum degree of the base graph is a constant $c$, then the number of vertices of $DT(G)$ is linear in the number of vertices of the base graph.
This property means that if we can base our verifiable quantum computation protocol on this graph, then the number of qubits we need is linear in the size of the computation.
Having defined the dotted triple-graph construction, we give some definitions before proving certain properties of $DT(G)$. Note that in what follows we present a colouring scheme that is not, strictly speaking, a graph colouring of $DT(G)$. The next definition is essentially a labelling scheme which, for convenience, we present as a colouring; therefore connected vertices may receive the same colour.
\begin{definition}[Trap-Colouring]\label{trap colouring} We define trap-colouring to be an assignment of one colour to each of the vertices of the dotted triple-graph that is consistent with the following conditions.
\begin{enumerate}
\item[(i)] Primary vertices are coloured in one of the three colours white, black or green.
\item[(ii)] Added vertices are coloured in one of the four colours white, black, green or red.
\item[(iii)] In each primary set $P_v$ there is exactly one vertex of each colour.
\item[(iv)] Colouring the primary vertices fixes the colours of the added vertices: Added vertices that connect primary vertices of different colour are red. Added vertices that connect primary vertices of the same colour get that colour.
\end{enumerate}
\end{definition}
Note that the colours for each primary set $P_v$ can be chosen randomly, independently of the choices made on other primary sets. We can also see that in each added set $A_e$ we have one white, one black, one green and six red vertices.
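Definition \ref{trap colouring} translates directly into code. The sketch below (our own encoding) picks an independent random permutation of the three colours for each primary set and then derives the added-vertex colours, after which each added set contains exactly one white, one black, one green and six red vertices:

```python
import random

COLOURS = ("white", "black", "green")

def trap_colouring(vertices, edges, rng=random):
    """Trap-colouring of DT(G), given the base graph's vertices and edges.

    Conditions (iii) and (iv) of the definition: each primary triple gets
    one vertex of each colour; added-vertex colours are then forced.
    """
    primary = {}
    for v in vertices:
        perm = list(COLOURS)
        rng.shuffle(perm)  # independent random choice for each P_v
        for k, c in zip((1, 2, 3), perm):
            primary[("p", v, k)] = c
    added = {}
    for (u, v) in edges:
        for i in (1, 2, 3):
            for j in (1, 2, 3):
                cu, cv = primary[("p", u, i)], primary[("p", v, j)]
                # same-colour endpoints pass on their colour; otherwise red
                added[(("p", u, i), ("p", v, j))] = cu if cu == cv else "red"
    return primary, added
```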
\begin{figure}
\includegraphics[width=1.0\columnwidth]{figure2.jpg}
\caption{(a) A dotted triple-graph, where only the primary vertices are coloured, and this is done randomly for each set. (b) A trap-colouring of $DT(G)$ that is fully fixed from the colouring of the primary vertices. (c) $DT(G)$ after performing break operations on all red vertices. This results in three copies of the dotted base graph. (d) $DT(G)$ after performing further break operations on the primary vertices of the black graph and added vertices of the white graph. The result is a dotted base graph (green) and isolated white traps on primary vertices and black traps on added vertices.}
\label{figure2}
\end{figure}
In Figure \ref{figure2} (a) and (b) we see an example of trap-colouring: in (a) we independently choose the colours of the primary vertices, and in (b) the colours of the added vertices are fixed following the rules for trap-colouring given above. Before proving the next property, we present a graph operation (also introduced in \cite{fk}).
\begin{definition}
We define the \emph{break} operator on a vertex $v$ of a graph $G$ to be the operator which removes vertex $v$ and any adjacent edges to $v$ from $G$.
\end{definition}
\begin{lemma}\label{break_red}
Given the dotted triple-graph $DT(G)$ and a trap-colouring, by performing break operations on the red vertices we obtain three identical copies of the dotted base graph $D(G)$ each of them consisting of a single colour.
\end{lemma}
\begin{proof}
First we note that after the break operations on the red added vertices, all vertices of different colours are disconnected. This follows since added vertices connecting primary vertices of different colours were by definition coloured red, while all added vertices that were not red are connected to vertices of the same colour. Next, we need to show that the graph of each colour is identical to $D(G)$. To see this, note that for each vertex $v_i$ of the base graph there is a white (black, green) vertex in $P_{v_i}$. Then for each edge $e(v_i,v_j)$ of the base graph $G$, there is a unique white (black, green) added vertex in $A_{e_{ij}}$ that joins the white vertex $p_w^{v_i}\in P_{v_i}$ and the white vertex $p_w^{v_j}\in P_{v_j}$ (and similarly for black and green).
\end{proof}
Figure \ref{figure2} (c) illustrates how $DT(G)$ breaks into three identical dotted base graphs after performing break operations on the red vertices.
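Lemma \ref{break_red} can also be checked numerically. The self-contained sketch below (an illustrative encoding of our own) builds $DT(G)$ for a small path graph, trap-colours it, breaks all red vertices, and confirms that exactly three connected components remain, each of size $|V(D(G))|=|V(G)|+|E(G)|$:

```python
import random

def dotted_triple_graph(vertices, edges):
    """Build DT(G): triple each vertex, connect triples of adjacent base
    vertices pairwise, then replace every edge with an added vertex."""
    P = {v: [("p", v, k) for k in (1, 2, 3)] for v in vertices}
    t_edges = [(pi, pj) for (u, v) in edges for pi in P[u] for pj in P[v]]
    added = {e: ("a",) + e for e in t_edges}
    dt_vertices = [p for ps in P.values() for p in ps] + list(added.values())
    dt_edges = ([(u, added[(u, v)]) for (u, v) in t_edges]
                + [(added[(u, v)], v) for (u, v) in t_edges])
    return P, dt_vertices, dt_edges, added

def colour_and_break_red(vertices, edges, seed=0):
    """Trap-colour DT(G), then apply break operations to all red vertices."""
    rng = random.Random(seed)
    P, dt_vertices, dt_edges, added = dotted_triple_graph(vertices, edges)
    colour = {}
    for v in vertices:
        perm = ["white", "black", "green"]
        rng.shuffle(perm)
        for p, c in zip(P[v], perm):
            colour[p] = c
    for (u, v), a in added.items():
        colour[a] = colour[u] if colour[u] == colour[v] else "red"
    red = {x for x in dt_vertices if colour[x] == "red"}
    return ([x for x in dt_vertices if x not in red],
            [(u, v) for (u, v) in dt_edges if u not in red and v not in red])

def components(vertices, edges):
    """Connected components via depth-first search."""
    adj = {v: [] for v in vertices}
    for (u, v) in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            x = stack.pop()
            if x not in comp:
                comp.add(x)
                seen.add(x)
                stack.extend(adj[x])
        comps.append(comp)
    return comps
```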
\begin{definition}
We define the \emph{base-location} of a vertex $f\in V(DT(G))$ of the dotted triple-graph to be the position in the dotted base graph $D(G)$ of the set $P_v$ or $A_e$ that includes $f$. This position is denoted either by ``$v$'', corresponding to a primary vertex of $D(G)$, or by ``$e$'', corresponding to the added vertex of $D(G)$ on the edge $e$.
\end{definition}
Given a trap-colouring, each primary vertex belongs to one of the three graphs, where the colour is determined by the trap-colouring. However, its base-location is fixed prior to the trap-colouring. Here we can see the difference between our construction and that of \cite{fk}. There a dotted-complete graph was used and the graph also broke into three identical graphs, with all primary vertices belonging to one of these graphs. However, there was no restriction on how this break happens, and any choice of three equal subsets was valid. In our construction we maintain the structure of the base-locations (reducing the number of added vertices required), while at the same time the colour choices at one primary base-location are totally independent of the colour choices at other primary base-locations.
The next property, essentially proved in \cite{fk}, is also required for the verification protocols presented in the next sections.
\begin{lemma}\label{break_traps}
Given a dotted graph $D(G)$, by applying break operators to every vertex in $P(D(G))$ or $A(D(G))$ the resulting graph is composed of the vertices of $A(D(G))$ or $P(D(G))$ respectively and contains no edges.
\end{lemma}
\begin{proof}
As the dotting operation only introduces vertices connected to vertices in $P(D(G))$, every vertex in $A(D(G))$ shares edges only with vertices in $P(D(G))$. Thus when the vertices in $P(D(G))$ and their associated edges are removed by the break operators, the vertices in $A(D(G))$ become disconnected. Similarly, since the dotting operation removes all edges between vertices in $P(D(G))$, every vertex in $P(D(G))$ shares edges only with vertices in $A(D(G))$. Thus when the vertices in $A(D(G))$ and their associated edges are removed by the break operators, the vertices in $P(D(G))$ become disconnected.
\end{proof}
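Lemma \ref{break_traps} is easy to verify computationally. In the sketch below (a minimal encoding of our own of $D(G)$ for a cycle), breaking all primary vertices leaves the added vertices edgeless, and vice versa:

```python
def dot(vertices, edges):
    """Dotting operator: one added vertex per edge, wired to both endpoints."""
    added = {e: ("a",) + e for e in edges}
    d_vertices = list(vertices) + list(added.values())
    d_edges = ([(e[0], added[e]) for e in edges]
               + [(added[e], e[1]) for e in edges])
    return d_vertices, d_edges, set(added.values())

def break_all(vertices, edges, removed):
    """Break operator applied to every vertex in `removed`."""
    removed = set(removed)
    return ([x for x in vertices if x not in removed],
            [(u, v) for (u, v) in edges if u not in removed and v not in removed])
```

Every edge of $D(G)$ joins one primary and one added vertex, so removing either class removes every edge.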
In Figure \ref{figure2} (d) we see that, after the break operations of Figure \ref{figure2} (c), further break operations are performed on all white added vertices and on all black primary vertices. We end up with a (green) copy of the dotted base graph, white isolated traps at primary vertices, and black isolated traps at added vertices.
There are common properties that we will prove for both primary and added vertices, and for ease of notation we will refer to either such set, $P_v$ or $A_e$, as $F_l$, with the convention that the subscript $l$ denotes the base-location of the set: when it takes the value $v$ (primary base-location) the set becomes $P_v$, and when it takes the value $e$ (added base-location) it becomes $A_e$.
Next we show that while the trap-colouring is a global construction, it can indeed be considered as a local scheme. This property will be exploited in our proof technique for the verification. We formalise this notion in the next set of definitions and lemmas.
\begin{definition}
We define \emph{local-colouring} of a set $F_l$ to be an assignment of colours to that set that is consistent with some global trap-colouring.
\end{definition}
This definition captures the idea of colouring a particular set $F_l$ corresponding to base-location $l$ such that it can be part of some global trap-colouring without \emph{any} further constraints from the colours of vertices at other base-locations. We can see that a local-colouring of an added set $A_{e_{ij}}$ fully determines the colours of the vertices in the two neighbouring primary base-locations $P_{v_i},P_{v_j}$, and the converse is also true: a local-colouring of the two primary sets $P_{v_i},P_{v_j}$ fully determines the colours of the added set $A_{e_{ij}}$. A local-colouring of the set $A_{e_{ij}}$ is therefore equivalent to a local-colouring of the two neighbouring primary base-locations $P_{v_i},P_{v_j}$. We can also see that a local-colouring of \emph{all} primary sets $P_v$ is compatible with a trap-colouring and fixes it uniquely.
However, if we have two general sets $F_{l_1},F_{l_2}$ it is not always possible to colour them both using a local-colouring and still be able to find a global trap-colouring. An example is a primary set $P_{v_1}$ and its neighbouring added set $A_{e_{1k}}$, where a local-colouring of the set $P_{v_1}$ imposes constraints on the colours of $A_{e_{1k}}$ beyond those required by a local-colouring. To see this, note that an added vertex connected to a white primary vertex can be either white or red, but can never be black.
One can see that an added set $A_{e_{ij}}$ can have a local-colouring if there is no constraint on the colours from the neighbouring primary sets $P_{v_i},P_{v_j}$, nor from other added sets $A_{e_{ik}},A_{e_{jk}}$ that have a common neighbouring set (either $P_{v_i}$ or $P_{v_j}$). Here we wish to make precise when there is a collection of base-locations such that one can (independently) assign local-colourings to all the related sets $F_l$ and still always be able to find a global trap-colouring.
\begin{definition}[Independently Colourable Locations (ICL)]\label{ICL}
Given a dotted triple-graph $DT(G)$ and a collection of $n$ base-locations $\mathcal{E}$ with corresponding sets $F_l$, we call the set $\mathcal{E}$ \emph{independently colourable locations} if any local-colouring of the sets $F_l$ is consistent with at least one trap-colouring.
\end{definition}
What this definition captures is that the choice of colours within each of the sets $F_l$ corresponding to a base-location in $\mathcal{E}$ is independent from the choice of colours in other sets $F_{l'}$ with base-location in $\mathcal{E}$.
For each base-location $l$ we define $\epsilon_{l}=\{l\}$ if the base-location is primary and $\epsilon_{l}=N_{D(G)}(l)$ if the base-location is added (i.e. in the latter case, it contains the two primary base-locations that are adjacent to the location $l$).
\begin{lemma} \label{ICL3}
A set of $n$ base-locations $\mathcal{E}$ is ICL if and only if for all pairs $i\neq j\in \mathcal{E}$ we have $\epsilon_{i}\cap\epsilon_{j}=\emptyset$.
\end{lemma}
\begin{proof}
First we prove that a collection of base-locations satisfying this condition is ICL. From $\epsilon_i\cap\epsilon_j=\emptyset$ we can see that (i) for all primary base-locations in $\mathcal{E}$ no neighbouring base-location is in $\mathcal{E}$, and (ii) for each added base-location, the two neighbouring primary base-locations $P_{v_i},P_{v_j}$ are not in $\mathcal{E}$, nor is any other added base-location that has either $P_{v_i}$ or $P_{v_j}$ as a neighbour. In other words, the sets of neighbours of added base-locations are disjoint. However, we already noted that a local-colouring of an added base-location is equivalent to a local-colouring of the two neighbouring primary base-locations $P_{v_i},P_{v_j}$. By replacing the local-colouring of added base-locations with that of the neighbouring primary base-locations, we reduce the local-colouring of the set $\mathcal{E}$ to a collection of local-colourings of primary base-locations. This is ICL since by the definition of trap-colouring no constraint is imposed between the colours of primary sets.
To prove the converse, consider two locations $i,j$ such that $\epsilon_i\cap\epsilon_j\neq\emptyset$. Either one is primary and the other a neighbouring added base-location, or $i$ and $j$ are added base-locations sharing a common neighbour $k$. In the first case it is clear that the choice of colour at the primary set (say $i$) imposes constraints on the colours of the added base-location $j$. In the second case, the choice of colour at the added location $i$ can determine that of the neighbouring location $k$ (for example, a white added vertex connected with a primary vertex fixes the colour of that vertex to white). But then fixing the colours of the primary base-location $k$ in turn imposes constraints on the other added neighbour $j$, and thus a local-colouring of $i$ and $j$ may not be consistent with any trap-colouring.
\end{proof}
We are now in position to prove the following property that is necessary for Section \ref{Sec:FT_Verification}.
\begin{theorem}\label{ICL2}
Consider a dotted triple-graph $DT(G)$, a set $S$ of $n$ base-locations, and assume that the base graph $G$ has maximum degree $c$. Then there exists a subset $S'\subseteq S$ of these base-locations that is ICL (independently colourable locations) and contains at least $\frac{n}{2c+1}$ locations.
\end{theorem}
\begin{proof}
The set $S$ contains $n$ locations of the graph $D(G)$. We want a subset $S'$ of these locations satisfying Lemma \ref{ICL3}. The condition of that lemma requires that if a primary base-location $v_i$ is included, then all its neighbouring base-locations must be excluded. The maximum number of neighbours is given by the maximum number of added base-locations, which is $c$. Therefore if we include the base-location $v_i$ in the set $S'$, we might need to exclude at most $c$ other base-locations from the set $S$.
To include an added base-location $e_{ij}$ in the set $S'$, Lemma \ref{ICL3} requires that its neighbours $v_i,v_j$ and the neighbours of its neighbours $e_{ik},e_{jk}$ be excluded. It has 2 neighbours, while the neighbours of its neighbours number at most $2(c-1)$. It follows that to include $e_{ij}$ in the set $S'$ we might need to exclude at most $2c$ other base-locations from the set $S$.
From the pigeonhole principle it follows that we can find a set $S'$ with at least $\frac{n}{2c+1}$ base-locations that are ICL.
\end{proof}
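The proof of Theorem \ref{ICL2} is constructive: greedily keeping every location whose $\epsilon$ set is disjoint from those already kept yields an ICL subset of the promised size, since each kept location conflicts with at most $2c$ others. A Python sketch (with our own tuple encoding of primary and added base-locations):

```python
def epsilon(loc):
    """eps_l: a primary location ("v", u) gives {u}; an added location
    ("e", u, v) gives its two neighbouring primary locations {u, v}."""
    return {loc[1]} if loc[0] == "v" else {loc[1], loc[2]}

def greedy_icl(locations):
    """Greedily keep locations whose epsilon sets stay pairwise disjoint
    (the ICL condition of Lemma ICL3)."""
    chosen, used = [], set()
    for loc in locations:
        eps = epsilon(loc)
        if used.isdisjoint(eps):
            chosen.append(loc)
            used |= eps
    return chosen
```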
\section{Verifiable quantum computation}\label{Sec:Verification1}
In this section we give a verifiable blind quantum computation protocol using the dotted triple-graph construction; otherwise, we follow similar steps to \cite{fk}.
With our construction we obtain a protocol whose probability of success is constant (independent of the size of the computation) while using a number of qubits that is only linear in the size of the computation.
\begin{algorithm}[H]
\caption{Verifiable Universal Blind Quantum Computation using dotted triple-graph}
\label{prot:AUBQC}
We assume that a standard labelling of the vertices of the dotted triple-graph $DT(G)$ used is known to both the verifier and the prover. The number of qubits is at most $3N(3c+1)$ where $c$ is the maximum degree of the base graph $G$.\\
\noindent$\bullet$ \textbf{Verifier's resources} \\
-- Verifier is given a base graph $G$ such that the dotted graph state $\ket{D(G)}$ can be used to perform the desired computation in MBQC with measurement pattern $\mathbb{M}_{\textrm{Comp}}$.\\
-- Verifier generates the dotted triple-graph $DT(G)$, and selects a trap-colouring according to definition \ref{trap colouring} which is done by choosing independently the colours for each set $P_v$.\\
-- Verifier sends dummy qubits for all red vertices, thus performing break operations on them.\\
-- Verifier chooses the green graph to perform the computation.\\
-- Verifier, for the white graph, sends dummy qubits for all added qubits $a^{e}_w$, thus generating isolated white qubits, one at each primary vertex set $P_{v}$. Similarly, for the black graph, the verifier sends dummy qubits for the primary qubits $p^v_b$, thus generating isolated black qubits, one at each added vertex set $A_{e}$.\\
\noindent -- The dummy qubits position set $D$ is chosen as defined above (fixed by the trap-colouring).\\
\noindent -- A sequence of measurement angles, $\phi=(\phi_i)_{1\leq i \leq 3N(3c+1)}$ with $\phi_i \in A=\{0,\pi/4,\cdots,7\pi/4\}$, consistent with $\mathbb{M}_{\textrm{Comp}}$, where $\phi_i = 0$ for all the trap and dummy qubits.
The verifier chooses a measurement order on the dotted base-graph $D(G)$ that is consistent with the flow of the computation (this is known to the prover). The measurements within each set $P_v,A_e$ of the $DT(G)$ graph are ordered randomly.
\\
\noindent -- $3N (3c+1)$ random variables $\theta_i$ with value taken uniformly at random from $A$.\\
\noindent -- $3N (3c+1)$ random variables $r_i$ and $|D|$ random variable $d_i$ with values taken uniformly at random from $\{0,1\}$. \\
\noindent -- A fixed function $C(i, \phi_i, \theta_i, r_i, \mathbf{s})$ that for each non-output qubit $i$ computes the angle of the measurement of qubit $i$ to be sent to the prover.\\
\noindent$\bullet$ \textbf{Initial Step} \\
-- \textbf{Verifier's move:} Verifier sets all the values in $\mathbf{s}$ to be $0$ and prepares the input qubits as
\AR{
\ket e = X^{x_1} Z(\theta_1) \otimes \ldots \otimes X^{x_l} Z(\theta_l) \ket I
}
and the remaining qubits in the following form
\AR{
\forall i\in D &\;\;\;& \ket {d_i} \\
\forall i \not \in D &\;\;\;& \prod_{j\in N_G(i) \cap D} Z^{d_j}\ket {+_{\theta_i}}
}
and sends the prover all the $3N (3c+1)$ qubits in the order of the labelling of the vertices of the graph.
-- \textbf{Prover's move:} Prover receives $3N(3c+1)$ single qubits and entangles them according to $DT(G)$.
\noindent$\bullet$ \textbf{Step $i: \; 1 \leq i \leq 3N (3c+1)$}
-- \textbf{Verifier's move:} Verifier computes the angle $\delta_i=C(i, \phi_i, \theta_i, r_i, \mathbf{s})$ and sends it to the prover.\\
-- \textbf{Prover's move:} Prover measures qubit $i$ with angle $\delta_i$ and sends the verifier the result $b_i$. \\
-- \textbf{Verifier's move:} Verifier sets the value of $s_i$ in $\mathbf{s}$ to be $b_i+r_i$. \\
\noindent$\bullet$ \label{step:Alice-prep} \textbf{Verification} \\
-- After obtaining the output qubits from the prover, the verifier measures the output trap qubits with angle $\delta_t=\theta_t+r_t\pi$ to obtain $b_t$.\\
-- Verifier accepts if $b_i = r_i$ for all the white (primary) and black (added) trap qubits $i$.
\end{algorithm}
As is evident from the protocol, the positions of the dummy qubits (i.e. those prepared in $\{\ket{0},\ket{1}\}$) are determined by the trap-colouring. It is known that sending dummy qubits has the same effect as making a $Z$ measurement in MBQC, which effectively breaks the graph state at that vertex. Therefore the properties established in Section \ref{Sec:DTG} concerning the reduction of $DT(G)$ to one dotted base graph $D(G)$ and isolated traps (Lemmas \ref{break_red} and \ref{break_traps}), as well as the properties concerning the independence of the colouring and thus the distribution of traps (Theorem \ref{ICL2}), all apply here.
\begin{theorem} (correctness)
If both the verifier and the prover follow the steps of Protocol \ref{prot:AUBQC}, then the output is correct and the computation is accepted.
\end{theorem}
\begin{proof}
If both the verifier and the prover follow the steps of Protocol \ref{prot:AUBQC}, then the prover essentially (when pre-rotations are taken into account) applies the pattern $\mathbb{M}_{\textrm{Comp}}$ to the green dotted base graph $D(G)$, which by assumption performs the desired computation (see also Theorems 1 and 3 of \cite{fk}). Moreover, the isolated white and black qubits are measured in the correct basis, and thus the verifier receives $b_i=r_i$ for the traps and accepts the computation.
\end{proof}
\begin{theorem} (Verification)\label{Verification1}
Protocol \ref{prot:AUBQC} is $\left(\frac{8}{9}\right)$-verifiable, whether the output is quantum or classical.
\end{theorem}
\begin{proof}
The proof closely follows the steps of the proof of Theorem 8 of Ref.~\cite{fk}, and thus here we only outline the steps and stress where we differ; a more detailed proof can be found in Appendix \ref{App:proof1}.
The proof consists of five steps. In \textbf{step 1} we prove that any deviation from the ideal protocol can be expressed in terms of some Kraus operators, which are then written as linear combinations of strings of Pauli matrices (denoted $\sigma_i$); the remainder of the proof determines which of those attacks maximise the probability of accepting an incorrect outcome.
In \textbf{step 2} we note that there are some strings $\sigma_i$ that for \emph{any} choice of the secret parameters (trap positions, angles, etc.) of the verifier do not corrupt the computation, and thus they do not contribute to the probability of failure. The set of all the other strings (those that could corrupt the computation for some choice of parameters) will be denoted $E_i$. It is clear that the prover, to optimise the chance of getting an incorrect outcome accepted, should only use attacks from the set $E_i$. In this section, where we consider the simplest protocol, a single non-trivial attack could corrupt the computation, and the set $E_i$ consists of all the strings $\sigma_i$ that have a non-trivial attack in at least one position. In the next section, however, this changes: the technique used there to amplify the success probability uses fault-tolerant encoding, so the computation is corrupted only if multiple errors occur, which leads to a different set $E_i$. For now we keep the description general for as long as possible, so that it also applies to the next section. Once the set $E_i$ is defined, in order to compute an upper bound on the failure probability we simply compute the probability of not triggering any trap given that the attacks used are all from the set $E_i$. This is clearly an upper bound on the failure probability (a worst-case scenario), since the existence of some choice of secret parameters for which a given $\sigma\in E_i$ corrupts the computation does not mean that it corrupts the computation in general. However, an upper bound $\epsilon$ on the failure probability suffices to prove that the protocol is $\epsilon$-verifiable\footnote{It is worth pointing out that, since this is a weak bound, a different proof technique could result in a tighter bound on the failure probability.}.
In \textbf{step 3} we exploit the blindness of the prover. The fact that the prover does not know the secret parameters restricts the attacks that contribute to the bound on the failure probability to a convex combination of Pauli attacks. This is important since it eliminates ``coherent'' types of attack, and it resembles theorems in quantum key distribution (QKD) that reduce coherent attacks to collective ones by exploiting the symmetry of the states.
In \textbf{step 4} we show that the prover maximises the value of the bound on the failure probability by performing an attack with exactly the fewest non-trivial components consistent with the set $E_i$ obtained in step 2. This is a single attack for the protocol of this section (but differs in Section \ref{Sec:FT_Verification}). Moreover, since we have a convex combination of positive numbers, the greatest value is obtained for a single $\sigma_i$. In the next steps of the proof we find the maximum value of our bound for an attack corresponding to the single optimal (for the prover) $\sigma_i$. Here our technique deviates from \cite{fk}, so we give more details later (and in Appendix \ref{App:proof1}).
Finally, in \textbf{step 5} we use the partition of the qubits into sets $P_v$ and $A_e$. It is important to note that within each of those sets there is exactly one computation qubit and exactly one trap qubit. From the previous steps we know that the bound on the failure probability is highest if the prover chooses to make a single non-trivial attack. This attack happens at a qubit that belongs to either some set $P_v$ or some set $A_e$. The probability of hitting a trap within a single set is clearly independent of the other free parameters corresponding to this qubit (though not the probability of detecting the attack in general), and it goes as $1/|P_v|$ or $1/|A_e|$. This leads us to a bound on the failure probability of $\epsilon=8/9$.
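The $8/9$ bound can be reproduced with elementary arithmetic (a sketch under the simplifying assumption, valid for measured qubits as noted later in the proof, that an attack hitting a trap is detected with certainty):

```python
from fractions import Fraction

def p_fail_bound(set_size):
    """One uniformly placed trap in a set of `set_size` qubits: a single
    non-trivial attack on one qubit of that set hits the trap with
    probability 1/set_size, giving the failure bound 1 - 1/set_size."""
    return 1 - Fraction(1, set_size)

# the prover prefers the larger set: primary sets have 3 qubits, added sets 9
bound = max(p_fail_bound(3), p_fail_bound(9))
```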
We now give some definitions taken from \cite{fk} in order to elaborate further on the above steps at the places where we significantly differ from \cite{fk}; the full proof is given in Appendix \ref{App:proof1}. The \emph{output density operator} of the protocol is $B_j(\nu)$ and is given by
\EQ{
B_j(\nu)=\Tr_B\left(\sum_b \ket{b+c_r}\bra{b}C_{\nu_C,b}\Omega\mathcal{P}\left(\otimes^B\ket{0}\bra{0}\otimes\ket{\Psi^{\nu,b}}\bra{\Psi^{\nu,b}}\right)\mathcal{P}^\dagger\Omega^\dagger C_{\nu_C, b}^\dagger\ket{b}\bra{b+c_r}\right)
}
where we have the following definitions. The subscript $j$ of the operator $B$ corresponds to the strategy/deviation that the prover makes; $j=0$ is the honest run where there is no deviation (and thus the operator $\Omega=\mathbb{I}$). The index $\nu$ collectively denotes all the random choices made by the verifier, i.e. $x,r,\theta,d$ and the positions of the traps $T$ (where the latter depend on the trap-colouring of the dotted triple-graph). The $b$'s are the outcomes of the prover's measurements, $(c_r)_i=r_i$ for $i\notin T$ and $(c_r)_t=0$ for $t\in T$, and the subscript $B$ denotes tracing out the prover's private registers. $C_{\nu_C,b}$ is the Pauli operator acting on the quantum output that maps the final outcome to the correct one, depending on the choices of random variables $\nu_C$ and the computation branch $b$. $\mathcal{P}$ is the unitary corresponding to implementing the protocol honestly. $\Omega$ is the deviation of the prover and is the identity in the honest run. $\ket{\Psi^{\nu,b}}=\ket{M^\nu}\otimes_j \ket{\delta_j^b}$ is the initial state sent by the verifier; it includes the quantum input and the $\ket{+_{\theta}}$ states, which are jointly denoted $\ket{M^\nu}$ and depend on the random choices, while the $\ket{\delta}$ registers correspond to the measurement angles (which depend on the branch of the computation $b$).
The protocol fails when it returns ``Accept'' while the output is orthogonal to the honest ideal one. This probability is given by
\EQ{
p_{\textrm{fail}}&=& \sum_\nu p(\nu)\Tr (P^\nu_{\textrm{incorrect}} B_j(\nu))
}
where
\EQ{
P^\nu_{\textrm{incorrect}}&=&(\mathbb{I}-\ket{\Psi_{\textrm{ideal}}}\bra{\Psi_{\textrm{ideal}}})\otimes_{t\in T}\ket{\eta_t^{\nu_T}}\bra{\eta_t^{\nu_T}}\nonumber\\
&=&P_\bot \otimes_{t\in T}\ket{\eta_t^{\nu_T}}\bra{\eta_t^{\nu_T}}
}
is the projection into the wrong subspace (the space orthogonal to the correct ideal state) that still remains within the accept subspace (where the traps succeed). The prover's attack can be written in terms of Kraus operators $\chi_k$ satisfying $\sum_k\chi_k^\dagger\chi_k=\mathbb{I}$, which are expanded in strings of Paulis as $\chi_k=\sum_i a_{ik}\sigma_i$. We then follow \cite{fk} up to step 3 and obtain the expression (see Appendix \ref{App:proof1}):
\EQ{
p_{\mathrm{fail}}&\leq & \sum_k\sum_{i\in E_i}|a_{ik}|^2\sum_T p(T) \prod_{t\in T}\left(\sum_{\theta_t,r_t}p(\theta_t)p(r_t)(\bra{\eta_t^{\nu_T}}\sigma_{i|t}\ket{\eta_t^{\nu_T}})^2\right)
}
Here we denote $\sigma_{i|\gamma}$ the single-qubit Pauli matrix corresponding to position $\gamma$ of the string of Pauli $\sigma_i$.
From $\sum_{ik}|a_{ik}|^2=1$ we conclude that $p_{\mathrm{fail}}$ is maximised when $|a_{ik}|=0$ for all $i\notin E_i$. Moreover, it is maximised if $|a_{ik}|=1$ for a single $\sigma_i$:
\EQ{\label{p_inc}p_{\mathrm{fail}}&\leq & \max_{i\in E_i} \sum_T p(T) \prod_{t\in T}\left(\sum_{\theta_t,r_t}p(\theta_t)p(r_t)(\bra{\eta_t^{\nu_T}}\sigma_{i|t}\ket{\eta_t^{\nu_T}})^2\right)
}
This consists of a product of non-negative terms, each of which is less than unity only if there is a non-trivial attack at that position. It follows that the expression is maximised for a $\sigma_i$ that is non-trivial in as few positions as possible. This happens when $\sigma_i$ is non-trivial at a single position $\beta$, and this position belongs either to the set $P_{v_\beta}$ or to the set $A_{e_\beta}$, depending on whether the non-trivial attack is made on a qubit that belongs to a primary set or to an added set. The set to which the attack belongs is a property that does \emph{not} depend on the secret choices of the verifier, such as the trap-colouring. We will use the notation $F_\beta$ to refer to a set that can be either $P_{v_\beta}$ or $A_{e_\beta}$.
We then decompose $p(T)$, the probability of a given trap configuration, using the structure of the subsets $P_v,A_e$, i.e. $p(T)=p(t_1\in P_{v_1},t_2\in P_{v_2}, \cdots, t_k\in A_{e_1}, \cdots)$. Therefore, given a single attack on the set $F_{\beta}$, we can sum over all the other sets (none of the other positions appears explicitly in the expression) and obtain $\sum_T p(T)=\sum_{t_\beta\in F_{\beta}}\sum_{t\notin F_\beta}p(T)=\sum_{t_\beta\in F_\beta}p(t_\beta)$. We obtain
\EQ{
p_{\mathrm{fail}}&\leq& \max_{i\in E_i} \sum_{t_\beta\in F_\beta}\sum_{\theta_{t_\beta},r_{t_\beta}} p(t_\beta) p(\theta_{t_\beta})p(r_{t_\beta})(\bra{\eta_{t_\beta}^{\nu}}\sigma_{i|t_\beta}\ket{\eta_{t_\beta}^{\nu}})^2.
}
It is important to note that $\sigma_{i|t_\beta}$ is the identity if $\beta\neq t_\beta$ and non-trivial otherwise; therefore $(|F_\beta|-1)$ terms of the sum will be unity, while one term will be less than one\footnote{It turns out that the non-unity term is zero for measured qubits, while it can be up to $1/2$ for output qubits.}. The above expression depends on whether the set $F_\beta$ is an output set or a measured set, and in the latter case on whether it is a primary or an added set. Since the prover chooses which set to attack, the bound will be the highest of these values. We consider each case separately (see Appendix \ref{App:proof1}) and obtain the highest $p_{\textrm{fail}}$ when there is a single attack on an added qubit. This happens because the added sets have nine qubits, $|A_e|=9$, so the chance of missing the trap is higher. Noting that added qubits are not output qubits, we thus obtain
\EQ{
p_{\mathrm{fail}}&\leq& \frac1{16|A_{e_\beta}|} \sum_{t_\beta\in A_{e_\beta}}\sum_{\theta_{t_\beta},r_{t_\beta}}(\bra{\eta_{t_\beta}^{\nu}}\sigma_{i|t_\beta}\ket{\eta_{t_\beta}^{\nu}})^2.\nonumber\\
& \leq & \frac1{16|A_e|} \sum_{r_t}\left(8\cdot (|A_{e_\beta}|-1)+8\cdot(\bra{r_{t_\beta}}\sigma_{i|t_\beta}\ket{r_{t_\beta}})^2\right) \nonumber\\
& \leq & \left(\frac 89\right)
}
\end{proof}
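As a quick arithmetic sanity check of the final bound (an illustration only, not part of the proof): with $|A_e|=9$, eight choices of $\theta_t$, two choices of $r_t$, the trap term vanishing for measured qubits, and the prefactor $1/(16|A_e|)$, the sum evaluates exactly to $8/9$.

```python
# Arithmetic check of the bound p_fail <= 8/9 for a single attack on an
# added qubit: |A_e| = 9; for each of the 2 values of r_t there are
# 8*(|A_e|-1) unit terms and 8 vanishing trap terms; divide by 16*|A_e|.
A_e = 9
bound = sum(8 * (A_e - 1) + 8 * 0 for _r in range(2)) / (16 * A_e)
# bound equals 128/144 = 8/9
```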
\section{Amplification of the probability of success}
In the previous section we gave a simple construction that directly yields a verification protocol with constant failure probability $\epsilon$. However, a verification protocol is successful only if this failure probability $\epsilon$ can be made arbitrarily small. Two techniques have been used to amplify the probability of success of a verification protocol. The first is simpler both conceptually and in terms of experimental requirements, but applies only in the case that the output of the quantum computation is classical. The second applies to computations with quantum output as well. We will use both techniques and show that, starting with the dotted triple-graph construction, we obtain improvements in both cases.
\subsection{Amplification for classical output}\label{Sec:Repetition}
As we mentioned in the introduction, in the case that a quantum computation has a classical output (e.g. solving classical problems, sampling, etc.) it is sufficient to have a protocol that is $\epsilon$-verifiable for \emph{any} $\epsilon<1$. The reason is that one can boost this $\epsilon$ and make it arbitrarily small by repeating the protocol sufficiently many times and accepting only when all repetitions agree. This results in an $\epsilon'=\epsilon^d$, which can be made as small as the required security level by choosing the number of repetitions $d$ suitably.
In a previous protocol for classical output in \cite{fk}, the brickwork state was used. The brickwork state can accommodate a single trap and leads to an $\epsilon=\left(1-\frac1n\right)$. By employing the repetition technique (used in \cite{efk}) this $\epsilon$ can be boosted to $\epsilon'=\left(1-\frac1n\right)^d$. However, in order to achieve a constant security level, the number of repetitions needs to increase with the size of the computation. For large enough $n$ it is easy to see that, if a single repetition requires $n$ qubits, then $O(n^2)$ qubits are needed to maintain a fixed security level. This is no better than the quantum-output protocol given in \cite{fk} that used a ``dotted-complete graph''. It is worth noting that, even though the complexity scales in the same way as the full ``dotted-complete graph'' protocol of \cite{fk}, the repetition protocol still has a number of practical advantages: it is easier to implement, has a smaller leading-order coefficient, and for simple enough computations can be experimentally realised.
By using the dotted triple-graph construction we can obtain a repetition protocol where we repeat only a constant number of times (and the number of repetitions depends \emph{only} on the security level). It follows that the dotted triple-graph repetition protocol requires only a linear number of qubits. Again, it does not give better complexity than the general protocol (which allows for quantum output) given in the next section. However, it has the same practical advantages as the repetition protocol on the brickwork state.
\begin{algorithm}[H]
\caption{Boosted Verifiable UBQC using dotted triple-graph for classical output}
\label{prot:RepUBQC}
\begin{itemize}
\item Verifier chooses a computation that has classical and deterministic output (e.g. a decision problem).
\item Verifier chooses a number $d$, where $d=\left\lceil\frac{\log \epsilon}{\log (8/9)}\right\rceil$ and $\epsilon$ is the desired security level.
\item For each $i\in\{1,\cdots,d\}$ \textbf{Verifier follows steps of Protocol \ref{prot:AUBQC}, with random different choices of secret parameters.} If the verifier accepts the computation, they register the classical output as $O_i$ and store it.
\item If the verifier rejected at any single repetition of Protocol \ref{prot:AUBQC}, they reject the overall computation. If not, they compare the classical outputs $O_i$ and if all of them are identical, they accept this output as the output of the computation.
\end{itemize}
\end{algorithm}
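For concreteness, the choice of $d$ in the protocol can be computed numerically. The following Python sketch (the function name is ours, for illustration; it is not part of the protocol) returns the smallest integer $d$ with $(8/9)^d\le\epsilon$:

```python
import math

def repetitions_for_security(eps: float, p_fail: float = 8 / 9) -> int:
    """Smallest integer d such that p_fail**d <= eps."""
    if not 0 < eps < 1:
        raise ValueError("eps must lie in (0, 1)")
    return math.ceil(math.log(eps) / math.log(p_fail))

# e.g. pushing the failure bound below 10^-6:
d = repetitions_for_security(1e-6)
```

Since the single-run bound $8/9$ is a constant, $d$ depends only on the target security level, not on the size of the computation.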
\begin{theorem} (Verification)\label{verification2}
Protocol \ref{prot:RepUBQC} is $\left(\frac{8}{9}\right)^d$-verifiable where the output is classical and $d$ is the number of repetitions.
\end{theorem}
\begin{proof}
We have multiple repetitions and, if all of them return the same output $O$, the probability that this is not the correct output is bounded by the probability that \emph{all} repetitions failed (and resulted in the same deviation). Since the different repetitions have the same outcome, if even a single one of those repetitions is successful then the output $O$ is the correct output. From Theorem \ref{Verification1} we know that the probability that a single repetition fails is at most $8/9$. Then the probability that all $d$ repetitions fail is at most $\left(\frac89\right)^d$.
\end{proof}
\noindent\textbf{Alternative construction for classical output.} In the case of classical output, there is an alternative construction to the dotted triple-graph that could further decrease the (linear) overhead. In particular, instead of the dotted triple-graph $DT(G)$ one could consider three copies of the dotted base graph $D(G)$. We will call this the \emph{three dotted copies} construction. One copy is used for the computation, while the other two are used for white traps (on primary vertices) and black traps (on added vertices). This construction is global, in the sense that the decision of which vertices are in which graph is made from the beginning and cannot be made independently per base graph vertex $v_i$. It follows that the locations of the traps are totally correlated globally, and there is no way to amplify the success probability in the quantum output case. This is the reason we focused on the dotted triple-graph construction for the quantum output case.
For the classical output however, the three dotted copies construction works.
\begin{algorithm}[H]
\caption{Boosted three dotted copies Verifiable UBQC for classical output}
\label{prot:Rep3copies}
\begin{itemize}
\item Verifier chooses a computation that has classical and deterministic output (e.g. a decision problem).
\item Verifier chooses a number $d$, where $d=\left\lceil\frac{\log \epsilon}{\log (2/3)}\right\rceil$ and $\epsilon$ is the desired security level.
\item For each $i\in\{1,\cdots,d\}$ \textbf{Verifier follows steps of Protocol \ref{prot:AUBQC}, using three dotted-copies instead of dotted triple-graph and with random different choices of secret parameters for every run.} If the verifier accepts the computation, they register the classical output as $O_i$ and store it.
\item If the verifier rejected at any single repetition of the modified Protocol \ref{prot:AUBQC}, they reject the overall computation. If not, they compare the classical outputs $O_i$ and if all of them are identical, they accept this output as the output of the computation.
\end{itemize}
\end{algorithm}
\begin{theorem} (Three dotted-copies Verification)\label{verification_3copies}
Protocol \ref{prot:Rep3copies} is $\left(\frac{2}{3}\right)^d$-verifiable where the output is classical and $d$ is the number of repetitions.
\end{theorem}
\begin{proof}
Following the proof of Theorem \ref{Verification1}, at step 2, in order to corrupt the computation the prover needs to make at least one non-trivial attack. However, the prover is blind as to which of the three graphs is the computation graph, the white trap graph and the black trap graph. Therefore, there is probability $1/3$ that the attack coincides with a trap graph of the same type as the attack location (i.e. if the attack is on a primary vertex, then with probability $1/3$ it hits a qubit belonging to the white graph, while if it is on an added vertex, then with probability $1/3$ it hits a qubit belonging to the black graph). For classical output the non-trivial attacks are $\{X,Y\}$ only (all qubits are measured), and thus an attack is deterministically detected when it hits a trap, since $\bra{\eta_{t}^{\nu}}\sigma_{X/Y}\ket{\eta_{t}^{\nu}}=\bra{r_t}\sigma_{X/Y}\ket{r_t}=0$.
Therefore the probability of failure is $p_{\textrm{fail}} \leq 2/3$.
To amplify this probability, we can simply repeat the protocol $d$ times; if the classical outputs agree in all the runs, then the probability that the computation was corrupted is bounded by $p_{\textrm{fail}}\leq \left(\frac23\right)^d$. Moreover, the number of qubits required per repetition is $3|V(G)|+3|E(G)|\leq 3(1+c)|V(G)|$. Both in terms of failure probability and in terms of the (linear in both cases) number of qubits per repetition, the three copies construction gives a better result.
\end{proof}
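To illustrate the advantage numerically (an illustrative computation, not part of either protocol): for the same target security level, a single-run failure bound of $2/3$ requires far fewer repetitions than a bound of $8/9$.

```python
import math

def reps_needed(eps: float, p_fail: float) -> int:
    """Smallest integer d with p_fail**d <= eps."""
    return math.ceil(math.log(eps) / math.log(p_fail))

eps = 1e-9
d_triple = reps_needed(eps, 8 / 9)  # dotted triple-graph repetition bound
d_copies = reps_needed(eps, 2 / 3)  # three dotted copies bound
# the three-copies construction needs far fewer repetitions for the same eps
```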
\subsection{Amplification for quantum output}\label{Sec:FT_Verification}
We now turn to the general case, where the output of the computation can be quantum. Our main result is that our dotted triple-graph construction leads to an exponentially-secure verification protocol for quantum output with only linear overhead. Similarly to \cite{fk}, we will use a technique that exploits the possibility of encoding the computation in a fault-tolerant way in order to amplify the probability of success of the protocol. The particular amount of boosting achieved will depend on the fault-tolerant code that is used. Here we treat the protocol in full generality.
The general idea is that the computation is encoded using a fault-tolerant encoding, while the traps are still single (non-encoded) qubits. Therefore, while a single error on a trap leads to a rejection of the computation, many errors on computation qubits are required to corrupt the actual output. The prover needs to avoid hitting any single trap while at the same time hitting many computation qubits in order to corrupt the output.
\begin{algorithm}[H]
\caption{Boosted Verifiable UBQC for quantum output, using dotted triple-graph and Fault-Tolerant Encoding}
\label{prot:FTUBQC}
\begin{itemize}
\item Verifier chooses a base graph $G$ and a measurement pattern $\mathbb{M}_{\mathbb{\textrm{Comp}}}$ on the dotted base graph $D(G)$ that implements the desired computation in a fault-tolerant way and can detect or correct fewer than $\delta/2$ errors.
\item \textbf{Verifier follows steps of Protocol \ref{prot:AUBQC}.}
\end{itemize}
\end{algorithm}
\begin{theorem} (Verification)\label{verification3}
Protocol \ref{prot:FTUBQC} is $\left(\frac{8}{9}\right)^d$-verifiable for quantum or classical output, where $d=\lceil\frac\delta{2(2c+1)}\rceil$, $c$ is the maximum degree of the base graph and $\delta$ is the number of errors tolerated on the base graph $G$.
\end{theorem}
\begin{proof}
We assume that there is a fault-tolerant encoding of the computation that, when implemented in MBQC on the base graph $G$, corrects or detects all deviations consisting of fewer than $\delta$ errors. Any operation on a measured qubit that is diagonal in the computational basis ($\sigma_i\in\{I,Z\}$) does not alter the computation. Therefore the errors that can contribute to corrupting a single logical qubit are $\sigma_i\in\{X,Y\}$ for measured qubits and $\sigma_i\in\{X,Y,Z\}$ for output qubits.
Considering the dotted base graph $D(G)$, one can easily see that any (non-trivial) error on an added qubit $a_{e_{ij}}$ is equivalent to a local error on each of the two neighbouring primary qubits $p_{v_i},p_{v_j}$ (see also \cite{fk}). If corrupting a computation requires $\delta$ errors on primary qubits of the base graph $G$, it follows that corrupting the computation performed on the dotted base graph $D(G)$ requires at least $\delta/2$ errors on qubits of $D(G)$.
We now return to step 2 of the proof of Theorem \ref{Verification1} and see that the set $E_i$ of attacks that could possibly corrupt the computation must include non-trivial attacks in at least $\delta/2$ \emph{different} sets $P_v,A_e$ (which we collectively call $F_\beta$). It is important to note that each set $F_\beta$ contains a single computation-graph qubit; therefore the prover not only needs to perform $\delta/2$ non-trivial attacks, but these must also be made on at least $\delta/2$ different location sets. The prover could of course choose to attack multiple qubits of the same set $F_\beta$, but in order to hit at least $\delta/2$ computation qubits, the number of sets on which non-trivial attacks are performed must also be at least $\delta/2$.
Any given attack $\sigma_i$ is characterised by the set $S_i$ of locations on the dotted base graph $D(G)$ at which it has at least one non-trivial attack; in the case $\sigma_i\in E_i$ this means that $|S_i|\geq\delta/2$.
Following steps 3 and 4 of the proof of Theorem \ref{Verification1}, we reach eq. (\ref{p_inc}). From this expression we can again see that the fewer the positions of non-trivial attack (consistent with $E_i$), the greater the value of this bound. We already know that we need at least $\delta/2$ sets $F_\beta$ with non-trivial attacks, so the maximum is achieved when there are exactly $\delta/2$ different sets $F_\beta$ with exactly a single attack in each.
To proceed further we need to decompose the probability $p(T)$ of a given trap configuration into those of the individual sets. This is not in general possible, since there are correlations between the traps of (neighbouring) sets. At this point we should note that fixing a configuration of traps is identical to giving a trap-colouring as in Definition \ref{trap colouring}.
From Theorem \ref{ICL2} we know that, given a collection $S_i$ of $\delta/2$ locations on the dotted base graph $D(G)$, there exists a sub-collection $S'_i$ of independently colourable locations with $|S_i'|=\lceil\frac\delta{2(2c+1)}\rceil$. To obtain an upper bound on the failure probability, we set $\sigma_{i|\gamma}=I$ for all $\gamma$ that belong to locations in $S_i\setminus S'_i$. This change does not decrease the bound given in eq. (\ref{p_inc}). Now the only locations with non-trivial attacks are those in $S_i'$, and we have
\EQ{\label{p_inc2}
p_{\mathrm{fail}}&\leq& \max_{i\in E_i} \prod_{\beta=1}^{|S_i'|}\sum_{t_\beta\in F_\beta}p(t_\beta)\sum_{\theta_{t_\beta},r_{t_\beta}} p(\theta_{t_\beta})p(r_{t_\beta})(\bra{\eta_{t_\beta}^{\nu}}\sigma_{i|t_\beta}\ket{\eta_{t_\beta}^{\nu}})^2
}
where we used the fact that $p(T)=p(t_1\in P_{v_1},t_2\in P_{v_2}, \cdots, t_k\in A_{e_1}, \cdots)$ and, for a collection $S'_i$ of independently colourable locations,
\EQ{
\sum_{t_\beta\in S_i'}\left(\sum_{t_\beta\notin S_i'}p(T)\right)=\sum_{t_\beta\in S_i'}\left(\prod_{\beta=1}^{|S_i'|}p(t_\beta)\right)
}
We can see this from the fact that, after summing over all locations apart from those in $S'_i$, the probability of choosing the location of the trap within each set is independent, and thus the joint probability is simply their product. Now, each term in the product of eq. (\ref{p_inc2}) is bounded by $8/9$, as proven in the previous section, and therefore we obtain
\EQ{\label{ft_bound}
p_{\mathrm{fail}}&\leq&\left(\frac{8}{9}\right)^d
}
where we define $d=|S_i'|=\lceil\frac\delta{2(2c+1)}\rceil$.
\end{proof}
As a final remark, we should note that $d=\lceil\frac\delta{2(2c+1)}\rceil$ is only the minimum number of independently colourable locations guaranteed to exist; in particular cases this number can be greater, and then the probability of success of the verification protocol becomes greater as well.
\section{Consequences for existing verification protocols}\label{Sec:Examples}
The dotted triple-graph construction can be used to improve a number of existing verification protocols, and here we indicate three of them. First we consider the specific case where the computation is done using the Raussendorf-Harrington-Goyal (RHG) \cite{rhg} encoding and the related graph is $\mathcal{G}_{\mathcal{L}}$. Following our construction, instead of the dotted-complete graph of \cite{fk} we obtain the dotted triple-RHG graph $DT\mathcal{G}_{\mathcal{L}}$. This graph state has a linear number of qubits (as the maximum degree of the graph is $4$). With the same choices of parameters as in \cite{fk}, it can detect or correct any deviation that has fewer than $\delta/2$ errors. From the results of the previous section, it follows that we obtain a linear-complexity verification protocol with an exponential security bound given by $p_{\mathrm{fail}}\leq\left(\frac89\right)^{\lceil\frac\delta{18}\rceil}$.
The second application is that it can be used to improve verifiable fault-tolerant protocols. Assuming that there are errors due to (non-adversarial) noise, the protocols given earlier in the text and other VBQC protocols could face a problem. In particular, honest errors due to noise could cause trap measurements to fail and lead us to reject the output even in honest runs where the computation is not corrupted. Here we should stress that, both in this paper and in \cite{fk}, the fault-tolerant encoding was used in order to amplify the security and \emph{not} to protect the computation from errors caused by honest noise. However, one \emph{can} construct a fault-tolerant verification protocol, at least for classical output, and one such example is presented in \cite{gkw2015}. The starting graph used to obtain the fault-tolerant protocol of \cite{gkw2015} was the brickwork graph, which has a single trap. A fault-tolerant encoding was then applied, followed by the repetition technique to amplify the success probability. However, the number of repetitions needed to maintain a constant level of security increased with the size of the computation. By using the dotted triple-brickwork instead, as the first step of the construction in \cite{gkw2015}, we can achieve exponential security with a constant number of repetitions. This would essentially bring down the number of qubits required from $O(n^2)$ to $O(n)$.
The third application is that we can directly use the dotted triple-graph construction in the verifiable measurement-only protocols \cite{tomo2014,hm2015}. In particular, \cite{tomo2014} is essentially the online version of \cite{fk}, and the technique for including traps in the graph is equivalent. It follows that if the resource used is a dotted triple-RHG graph instead of a dotted-complete graph, the number of qubits required reduces from quadratic to linear. In \cite{hm2015} the verifier, instead of including traps, uses $2k+1$ copies of a universal graph. In order to test the honesty of the prover, the verifier performs stabiliser measurements on $2k$ copies of the desired graph while performing the computation on the final copy. However, there is always at least a $1/(2k+1)$ probability that the computation is corrupted and not detected. Using the dotted triple-graph construction, modified for the measurement-only protocols, this probability can be made exponentially small while still using only a linear number of qubits.
\section*{Acknowledgements}
The authors would like to thank Vedran Dunjko, Alexandru Gheorghiu and Theodoros Kapourniotis for useful discussions. The authors are also grateful to Joe Fitzsimons for useful discussion regarding a robust fault tolerance scheme as proposed in Section 5. EK acknowledges funding through EPSRC grants EP/N003829/1 and EP/M013243/1.
\section{Introduction}\label{sec:intro}
Finding optimal solutions of combinatorial optimization problems, many of which are known to be NP-hard, is of great importance.
Among the many possible approaches to such problems, the application of Ising models to solving real social problems has been attracting attention due to its versatility (see \cite{L14}).
More precisely, a given social combinatorial optimization problem can be mapped into a Hamiltonian $H$ on a graph $G=(V,E)$, whose expression is given by
\begin{align*}
H(\sigma)=-\sum_{b=\{x,y\}\in E}J_b\sigma_x\sigma_y-\sum_{x\in V}h_x\sigma_x
\end{align*}
for every Ising spin configuration $\sigma\in\{-1,1\}^V$, where $\{J_b\}_{b \in E}$ are coupling coefficients and $\{h_x\}_{x \in V}$ are local magnetic fields.
In that approach, an optimal solution for the intended combinatorial problem corresponds to a ground state (or global minimum) $\sigma_G$ of $H$, that is, $\sigma_G \in \mathrm{arg\, min}\,H$.
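As a concrete (if naive) illustration, for very small graphs a ground state can be found by exhaustively enumerating all $2^{|V|}$ spin configurations; the graph, couplings and fields below are arbitrary toy choices.

```python
from itertools import product

def energy(sigma, J, h):
    """H(sigma) = -sum_{{x,y} in E} J_b s_x s_y - sum_{x in V} h_x s_x."""
    e = -sum(Jb * sigma[x] * sigma[y] for (x, y), Jb in J.items())
    e -= sum(hx * sigma[x] for x, hx in h.items())
    return e

def ground_state(J, h, vertices):
    """Brute-force argmin of H over {-1,1}^V (feasible only for tiny V)."""
    return min(
        (dict(zip(vertices, s)) for s in product((-1, 1), repeat=len(vertices))),
        key=lambda sigma: energy(sigma, J, h),
    )

# toy ferromagnetic triangle with a small field favouring spin up at vertex 0
V = [0, 1, 2]
J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}
h = {0: 0.1, 1: 0.0, 2: 0.0}
sigma_G = ground_state(J, h, V)
```

Here the all-up configuration is the unique ground state, with $H(\sigma_G)=-3.1$; the methods discussed next are needed precisely because this exhaustive search is infeasible for large $|V|$.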
There are some well-known methods that can be applied to obtain a ground state.
Implementing a Markov chain Monte Carlo method (such as Glauber dynamics or stochastic cellular automata) is a well-known way to find an approximation of the Gibbs distribution, whose highest peaks correspond to the ground states of $H$.
We refer to \cite{DSS12,H88,R13,HKKS19} and also \cite{HKKS21} for details.
However, as long as we use Ising machines or any computer to perform numerical simulations to find a ground state, we cannot avoid the error arising from the analog nature of the device or from the difficulty of representing real numbers (see \cite{AMH18}).
For these reasons, we should incorporate the error coming from the coupling coefficients and local magnetic fields by introducing a perturbed Hamiltonian.
Hence, our original Hamiltonian $H$ will be perturbed, giving rise to a perturbed Hamiltonian $H_{\delta}$ whose coupling coefficients and local magnetic fields have a maximal error $\delta$.
Then, the following natural questions arise:
\begin{enumerate}[label=(\arabic*),start=1]
\item
For any pair of configurations which are ordered in terms of energy with respect to the perturbed Hamiltonian $H_\delta$, is that ordering preserved in the original Hamiltonian $H$, up to a given error margin? \label{question1}
\item
Given a Hamiltonian $H$ with coupling coefficients and local magnetic fields distributed as i.i.d. Gaussian random variables, what is the probability that the energy gap in $H$ between two ground states respectively for $H$ and the corresponding perturbed Hamiltonian $H_\delta$ is sufficiently small?\label{question2}
\end{enumerate}
In addition to the above questions \ref{question1} and \ref{question2}, the following problem is also important when using Ising machines and computers.
It may be somewhat wasteful to take all coupling coefficients and local magnetic fields into account.
It may be useful to ``eliminate'' vertices of a given graph whose contribution to the total energy is relatively small, in order to save computer memory.
Hence, we also have the following natural question:
\begin{enumerate}[label=(\arabic*),start=3]
\item
Can we find a subset of a given graph such that for an arbitrary choice of configuration outside of that region, the energy variation can be controlled? \label{question3}
\end{enumerate}
In this paper, we investigate the stability of the energy landscape of a given Hamiltonian under perturbations from the viewpoint of order preservation, aiming to answer the questions addressed above.
Thanks to the order preservation property, we can obtain better estimates for the success probability of finding a ground state compared to the result given in \cite{AMH18}.
This paper is organized as follows.
In Section \ref{sec:settings}, we provide a precise formulation for the questions we just posed and raise them again.
In Section \ref{sec:solutions}, we answer the questions \ref{question1'} and \ref{question2'} from Section \ref{sec:settings}.
In Section \ref{pertgraph}, we provide an example together with a sufficient condition that guarantees a positive answer for question \ref{question3'}.
\section{Setting and the main questions}\label{sec:settings}
In this section, we introduce the definitions and terminology needed to discuss the stability of the energy landscape.
Further, we introduce the notion of order preservation for a perturbed system, which plays a central role in this paper.
Here, order preservation means, roughly speaking, that if we take a ground state of a perturbed Hamiltonian (implemented by a device), then it should be close in energy to a ground state of the original Hamiltonian (the intended mathematical problem), up to a given error margin.
Let us begin by introducing the precise setting.
Let $G=(V,E)$ be a finite simple graph with the vertex set $V$ and the edge set $E$.
The so-called {\it original Hamiltonian} $H$ with coupling coefficients $\{J_b\}_{b\in E}$ and external magnetic fields $\{h_x\}_{x\in V}$ on $G$ is defined by
\begin{align}
H(\sigma)=-\sum_{b=\{x,y\} \in E}J_b\sigma_x\sigma_y - \sum_{ x \in V}h_x\sigma_x
\end{align}
for each $\sigma=\{\sigma_x\}_{x\in V}\in\{-1,1\}^V$.
Such a function $H$ can be regarded as a cost function of an intended problem.
Given $\delta>0$, we denote by $H_\delta$ the {\it perturbed Hamiltonian} with the coupling coefficients $\{J'_b\}_{ b \in E}$ and external fields $\{h'_x\}_{x \in V}$, i.e.,
\begin{align}\lbeq{perturb}
H_\delta(\sigma)=-\sum_{ b=\{x,y\} \in E}J'_b\sigma_x\sigma_y-\sum_{ x \in V}h'_x
\sigma_x
\end{align}
where the $J'_b$'s and $h'_x$'s satisfy the bounds $\sup_b\lvert J_b-J'_b\rvert\le\delta$ and $\sup_x\lvert h_x-h'_x\rvert\le\delta$.
This perturbation will often be interpreted as a round-off in the following way.
Let $(J_b^{(1)}J_b^{(2)}\dots)$ and $(h_x^{(1)}h_x^{(2)}\dots)$ be the binary expansions of the fractional parts of $J_b$ and $h_x$, i.e.,
\begin{align}
J_b=J_b^{(0)}+\sum_{i\ge1}\frac{J_b^{(i)}}{2^i},\quad h_x=h_x^{(0)}+\sum_{i\ge1}\frac{h_x^{(i)}}{2^i}
\end{align}
where $J_b^{(0)},h_x^{(0)}\in\Z$ and $J_b^{(i)},h_x^{(i)}\in\{0,1\}$ for $i\ge1$. If we set $J_b'=J_b^{(0)}+\sum_{i=1}^{N}\frac{J_b^{(i)}}{2^i}$ and $ h_x'=h_x^{(0)}+\sum_{i=1}^N\frac{h_x^{(i)}}{2^i}$ in the equation (\ref{eq:perturb}), then the error $\delta$ can be taken as $2^{-N}$.
It means that the perturbed Hamiltonian $H_{\delta}$ is obtained by rounding off the given parameters $J_b$'s and $h_x$'s uniformly from the $(N+1)$-th digit of their binary expansions.
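A minimal Python sketch of this round-off (the helper name is ours, for illustration): truncating the fractional part to $N$ binary digits guarantees $0\le x-x'<2^{-N}$.

```python
import math

def truncate_binary(x: float, N: int) -> float:
    """Keep the integer part and the first N binary digits of the
    fractional part of x, so that 0 <= x - x' < 2**-N."""
    x0 = math.floor(x)
    return x0 + math.floor((x - x0) * 2 ** N) / 2 ** N

J, N = 0.7, 4          # illustrative coupling and truncation depth
Jp = truncate_binary(J, N)
delta = J - Jp         # here 0.7 -> 11/16 = 0.6875, so delta is about 0.0125
```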
The main purpose of this paper is to clarify the stability of the ground states for a given Hamiltonian under a perturbation in terms of order preservation.
In this paper, we will answer the following questions:
\begin{enumerate}[label=(\arabic*'),start=1]
\item
Find a $\delta>0$ corresponding to a given $\varepsilon>0$, so that, for any pair $(\sigma,\tau)$ that satisfies
$H_{\delta}(\sigma)\ge H_{\delta}(\tau)$, the ordering is preserved in $H$
up to the error margin $\varepsilon\sup_{\xi,\eta}\left\lvert H(\xi)-H(\eta)\right\rvert$,
i.e.,
\begin{align}
H(\sigma)\ge H(\tau)-\varepsilon\sup_{\xi,\eta}\left\lvert H(\xi)-H(\eta)\right\rvert.
\end{align}
Here, $\sup_{\xi,\eta}\left\lvert H(\xi)-H(\eta)\right\rvert$ is the total margin of the original Hamiltonian. \label{question1'}
\item \label{question2'}
Let $(\Omega, \mathcal{F},\mathbb{P})$ be a probability space and let $\{J_b\}_{b\in E}$ and $\{h_x\}_{x\in V}$ be mutually independent standard Gaussian random variables on this probability space.
Estimate the probability that the energy gap in $H$ between the ground states for $H$ and $H_{\delta}$, say $\sigma_G$ and $\tilde{\sigma}_G$ respectively, is controlled by the given error margin; explicitly,
\begin{align}\label{eq:quest2}
\mathbb{P}\left(0 \leq H(\tilde{\sigma}_G)-H(\sigma_G)\le\varepsilon\sup_{\xi,\eta}\left\lvert H(\xi)-H(\eta)\right\rvert\right).
\end{align}
\end{enumerate}
A different aspect of the stability of a given system is to find a nontrivial subsystem such that the energy gap between any two spin configurations whose restrictions to the subgraph coincide is bounded above by a given error margin.
Also, at the same time, we require that the number of vertices that can be disregarded is at least of order $N^{\alpha}$, where $N=\lvert V\rvert$ and $\alpha\in {[}0,1)$, so that such a number can go to infinity as $N\to\infty$.
In the later part of this paper, we answer the following question for a particular case:
\begin{enumerate}[label=(\arabic*'),start=3]
\item
Let $\lvert V\rvert =N$, and let $\{J_b\}_{b\in E}$ and $\{h_x\}_{x\in V}$ be mutually independent standard Gaussian random variables.
Find a subset $V_0\subset V$ for a given $\varepsilon>0$ and $\alpha\in {[}0,1)$ such that
\begin{align*}
{\mathbb P}\left(\sup_{\sigma,\tau\in\{-1,1\}^N} \left\lvert H(\sigma)-H(\sigma_{V_0}, \tau_{V\setminus V_0}) \right\rvert < \varepsilon\sup_{\xi,\eta}\left\lvert H(\xi)-H(\eta)\right\rvert \;\&\; CN^{\alpha} \leq \lvert V\setminus V_0\rvert < N \right)
\end{align*}
is close to $1$, where $\sigma_{V_0}\in\{-1,1\}^{V_0}$ is the spin configuration $\sigma$ restricted to $V_0$ and $\tau_{V\setminus V_0}\in\{-1,1\}^{V\setminus V_0}$ is the restriction of the spin configuration $\tau$ to $V\setminus V_0$. \label{question3'}
\end{enumerate}
Questions \ref{question1'}, \ref{question2'} and \ref{question3'} above correspond to questions \ref{question1}, \ref{question2} and \ref{question3} from Section \ref{sec:intro}, respectively.
In Section \ref{sec:solutions}, we investigate the first two questions above, adopting two different approaches for the second one. We obtain answers to question \ref{question2'} by means of a method involving the $L^\infty$-distance and a graph-structure approach, and we compare these two methods for three different graphs.
Specifically, we consider sufficient conditions on the perturbation $\delta$ to satisfy order preservation, and calculate the probability that such a sufficient condition holds.
In Section \ref{pertgraph}, we obtain an answer for the question \ref{question3'} when the graph is a one-dimensional torus $\Z/N\Z$ without external fields.
\section{Stability under a Hamiltonian perturbation} \label{sec:solutions}
This section is dedicated to providing solutions to questions \ref{question1'} and \ref{question2'}, posed at the end of the previous section. Before we proceed to the next sections, let us introduce the quantity $R_{H}$ defined by
\begin{equation}
R_{H}\coloneqq\max_{\xi,\eta}\lvert H(\xi)-H(\eta)\rvert,
\end{equation}
which is defined whenever a Hamiltonian $H$ is given. Moreover, if $G = (V,E)$ is a finite simple graph, then we define $k_{G}$ by
\begin{equation}
k_{G} \coloneqq \lvert E\rvert + \lvert V\rvert.
\end{equation}
Keeping in mind the mathematical setting introduced at the beginning of Section \ref{sec:settings}, we start by showing that the order preservation property holds; that is,
we first answer question \ref{question1'}, which consists in finding, for a given $\epsilon>0$, a $\delta>0$ such that $H_\delta(\sigma)\ge H_\delta(\tau)$ implies $H(\sigma)\ge H(\tau)-\epsilon R_{H}$. Later on, assuming some randomness on the spin-spin couplings and external fields, we adopt two different approaches to answer question \ref{question2'} and estimate the probability that the condition $H(\tilde{\sigma}_G)- H(\sigma_G)\le \epsilon R_{H}$ is satisfied.
In order to solve the second problem, we adopt two distinct approaches: a method that relies on uniform estimates and a method based on combinatorial estimates, presented in Sections \ref{sec:FA} and \ref{sec:SA}, respectively.
In the last part of this section, we compare these two methods and conclude that, depending on the underlying graph structure of the problem, one or the other gives a better lower bound for the probability in equation (\ref{eq:quest2}).
\subsection{Order preservation of energy}
The answer to question \ref{question1'} from Section \ref{sec:settings} is provided by Theorem \ref{thm:order_preservation}; first, however, let us show a preliminary result.
In \cite{HKKS19}, we already established a lower bound for the total margin $R_{H}$ of the Hamiltonian $H$, but for the reader's convenience we include its proof in the present paper.
\begin{lemma}[See \cite{HKKS19}]\label{lem:total margin}
Let us consider a finite simple graph $G=(V,E)$ and a Hamiltonian $H$ written in the form
\begin{align*}
H(\sigma)=-\sum_{b=\{x,y\}\in E} J_b \sigma_x\sigma_y -\sum_{x\in V}h_x \sigma_x
\end{align*}
for each $\sigma\in \{-1,1\}^V$. Then, we have
\begin{align*}
R_{H}\ge \sqrt{v_H}, \qquad \text{where } v_H\coloneqq\sum_b J_b^2+\sum_x h_x^2.
\end{align*}
\end{lemma}
\begin{proof}
For any probability measure $\mu$ on the configuration space $\{-1,1\}^V$, we have
\begin{align*}
R_{H}\ge \left(\mathbb{E}_\mu[H^2]-\mathbb{E}_\mu[H]^2\right)^{1/2},
\end{align*}
where $\mathbb{E}_\mu$ stands for the expectation with respect to the probability measure $\mu$.
If $\mu$ is particularly chosen as the uniform distribution on $\{-1,1\}^V$, then we have
\begin{align*}
\mathbb{E}_\mu[H]&\coloneqq\frac{1}{2^{\lvert V\rvert}} \sum_\sigma H(\sigma)=0,
\end{align*}
and
\begin{align*}
\mathbb{E}_\mu[H^2]&\coloneqq\frac{1}{2^{\lvert V\rvert}} \sum_\sigma H(\sigma)^2\\
&=\frac{1}{2^{\lvert V\rvert}} \sum_\sigma \left(\sum_{b,b'\in E} J_bJ_{b'}\sigma_x\sigma_y\sigma_{x'}\sigma_{y'}+2\sum_{b\in E}\sum_{x'\in V} J_b h_{x'}\sigma_x\sigma_y\sigma_{x'}+\sum_{x,x'\in V}h_xh_{x'}\sigma_x\sigma_{x'} \right),
\end{align*}
where $b=\{x,y\}$ and $b'=\{x',y'\}$. Averaging over $\sigma$ annihilates every product of spins in which some spin variable appears an odd number of times, so only the diagonal terms with $b=b'$ and $x=x'$ survive, and we obtain
\begin{align*}
\mathbb{E}_\mu[H^2]=\frac{1}{2^{\lvert V\rvert}} \sum_\sigma \left( \sum_{b\in E}J_b^2+\sum_{x\in V}h_x^2\right) ={v_H}.
\end{align*}
Therefore, $R_{H}\ge \left(\mathbb{E}_\mu[H^2]-\mathbb{E}_\mu[H]^2\right)^{1/2} =\sqrt{v_H}$.
\end{proof}
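The lemma can also be checked numerically on a small instance. The following Python sketch (an illustration, not part of the argument; all variable names are ours) enumerates all $2^{\lvert V\rvert}$ configurations of a small complete graph with Gaussian couplings and fields and compares $R_H$ with $\sqrt{v_H}$.

```python
import itertools
import math
import random

random.seed(0)
n = 6  # small enough to enumerate all 2^n spin configurations
V = list(range(n))
E = [(x, y) for x in V for y in V if x < y]  # complete graph
J = {b: random.gauss(0, 1) for b in E}
h = {x: random.gauss(0, 1) for x in V}

def H(sigma):
    """Ising Hamiltonian H(sigma) = -sum_b J_b s_x s_y - sum_x h_x s_x."""
    return (-sum(J[(x, y)] * sigma[x] * sigma[y] for (x, y) in E)
            - sum(h[x] * sigma[x] for x in V))

energies = [H(s) for s in itertools.product([-1, 1], repeat=n)]
R_H = max(energies) - min(energies)  # total margin R_H
v_H = sum(j * j for j in J.values()) + sum(f * f for f in h.values())
print(R_H >= math.sqrt(v_H))  # the lemma guarantees True
```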
In order to prove the next result, it is convenient to consider the following notation introduced by \cite{AMH18}.
For any Ising spin configurations $\sigma$ and $\tau$, we consider the sets $D_{\sigma,\tau} $ and $W_{\sigma,\tau} $ defined by
\[D_{\sigma,\tau} \coloneqq\{x\in V: \sigma_x\tau_x =-1\}\]
and
\[W_{\sigma,\tau} \coloneqq\{\{x,y\}\in E: \sigma_x\sigma_y\tau_x\tau_y =-1\},\]
where the products $\sigma_x\tau_x$ and $\sigma_x\sigma_y\tau_x\tau_y$ are called the {\it spin overlap} and the {\it link overlap}, respectively.
\begin{theorem}\label{thm:order_preservation}
Given $\epsilon>0$ and configurations $\sigma$ and $\tau$, if the condition
\begin{align*}
0 < \delta k_{G} \le \frac{1}{2} \epsilon \sqrt{v_{H}}
\end{align*}
is satisfied, then $H_\delta(\sigma)\ge H_\delta(\tau)$ implies $H(\sigma)\ge H(\tau)-\epsilon R_{H}$.
\end{theorem}
\begin{proof}
Suppose that $H_\delta(\sigma)\ge H_\delta(\tau)$. Then, we have
\begin{align*}
H(\tau)-H(\sigma)& = \left(H(\tau)-H_\delta(\tau)\right) + H_\delta(\tau)-H_\delta(\sigma) + \left(H_\delta(\sigma)-H(\sigma)\right)\\
&\le \left(H_\delta(\sigma)-H(\sigma)\right) - \left(H_\delta(\tau)- H(\tau)\right)\\
&\le \sum_{b=\{x,y\}\in E} \left\lvert J_b-J_b'\right\rvert \left\lvert\sigma_x\sigma_y-\tau_x\tau_y\right\rvert+ \sum_{x\in V}\left\lvert h_x-h_x'\right\rvert \left\lvert\sigma_x-\tau_x\right\rvert\\
&=2\sum_{b\in W_{\sigma,\tau}}\left\lvert J_b-J_b'\right\rvert + 2\sum_{x\in D_{\sigma,\tau}}\left\lvert h_x-h_x'\right\rvert\\
&\le 2\delta (\lvert W_{\sigma,\tau}\rvert+\lvert D_{\sigma,\tau}\rvert).
\end{align*}
Since $\lvert W_{\sigma,\tau}\rvert \leq |E|$, $\lvert D_{\sigma,\tau}\rvert \leq |V|$, $k_{G} = |E| + |V|$, and $R_{H}\ge \sqrt{v_{H}}$ by Lemma \ref{lem:total margin}, our assumption yields
\begin{align*}
H(\tau)-H(\sigma)\le 2\delta k_{G} \le \epsilon \sqrt{v_{H}} \le \epsilon R_{H}.
\end{align*}
Therefore, the conclusion of this theorem follows.
\end{proof}
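Theorem \ref{thm:order_preservation} can likewise be tested exhaustively on a small instance. In the illustrative Python sketch below (our own names throughout), the perturbed parameters are drawn within distance $\delta$ of the originals with $\delta$ chosen at the threshold $\epsilon\sqrt{v_H}/(2k_G)$, and order preservation is checked over all pairs of configurations.

```python
import itertools
import math
import random

random.seed(1)
n = 5
V = list(range(n))
E = [(x, y) for x in V for y in V if x < y]
J = {b: random.gauss(0, 1) for b in E}
h = {x: random.gauss(0, 1) for x in V}
k_G = len(E) + len(V)
v_H = sum(j * j for j in J.values()) + sum(f * f for f in h.values())

eps = 0.5
delta = eps * math.sqrt(v_H) / (2 * k_G)  # largest delta allowed by the theorem
Jp = {b: J[b] + random.uniform(-delta, delta) for b in E}  # perturbed couplings
hp = {x: h[x] + random.uniform(-delta, delta) for x in V}  # perturbed fields

def energy(sigma, Jc, hc):
    return (-sum(Jc[(x, y)] * sigma[x] * sigma[y] for (x, y) in E)
            - sum(hc[x] * sigma[x] for x in V))

configs = list(itertools.product([-1, 1], repeat=n))
H0 = [energy(s, J, h) for s in configs]   # original energies
H1 = [energy(s, Jp, hp) for s in configs]  # perturbed energies
R_H = max(H0) - min(H0)

# Count violations of: H_delta(s) >= H_delta(t)  =>  H(s) >= H(t) - eps * R_H
violations = sum(1 for i in range(len(configs)) for j in range(len(configs))
                 if H1[i] >= H1[j] and H0[i] < H0[j] - eps * R_H)
print(violations)  # the theorem guarantees 0
```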
\subsection{Stability of ground states: first approach}\label{sec:FA}
In the previous subsection, we did not assume any randomness on the spin-spin couplings $J_b$'s and local external fields $h_x$'s.
In this subsection, let us consider the same setting as stated in question \ref{question2'} from Section \ref{sec:settings}.
Precisely speaking, we assume that $\{J_b\}_{b \in E}$ and $\{h_x\}_{x \in V}$ are mutually independent random variables distributed according to a standard Gaussian distribution.
Under such assumptions, let us estimate the probability that the inequality $H(\tilde{\sigma}_G)-H(\sigma_G)\le \epsilon R_{H}$ holds,
where $\epsilon$ is a given positive constant, by using a method that relies on uniform bounds with respect to certain spin configurations.
In the following lemma, we provide an upper bound for the difference $H(\tilde{\sigma}_G)-H(\sigma_G)$.
\begin{lemma}
Given $\delta>0$, if $\sigma_{G}$ and $\tilde{\sigma}_G$ are ground states for $H$ and $H_{\delta}$, respectively, then, we have
\begin{equation}
H(\tilde{\sigma}_G)-H(\sigma_G)\le 2\delta k_{G}.
\end{equation}
\end{lemma}
\begin{proof}
It follows from the definition of a ground state that $H_\delta(\tilde{\sigma}_G)-H_\delta(\sigma_G)\le 0$. Then,
\begin{align*}
H(\tilde{\sigma}_G)-H(\sigma_G)&= \left(H(\tilde{\sigma}_G)-H_\delta(\tilde{\sigma}_G)\right) + H_\delta(\tilde{\sigma}_G)-H_\delta(\sigma_G) + \left(H_\delta(\sigma_G)-H(\sigma_G)\right)\\
&\le 2\lVert H_\delta-H\rVert_\infty,
\end{align*}
where $\|\cdot\|_{\infty}$ stands for the uniform norm, as usual. Furthermore, for any spin configuration $\sigma$, we have
\begin{align*}
\lvert H_{\delta}(\sigma)-H(\sigma)\rvert&=\left\lvert\sum_{b=\{x,y\}\in E} (J_b-J_b')\sigma_x\sigma_y+\sum_{x\in V}(h_x-h_x') \sigma_x\right\rvert\\
&\le \sum_{b\in E}\left\lvert J_b-J_b' \right\rvert+\sum_{x\in V} \left\lvert h_x-h_x'\right\rvert\\
&\le \delta (\lvert E\rvert+\lvert V\rvert)=\delta k_G.
\end{align*}
Hence $\lVert H_\delta-H\rVert_\infty \le \delta k_G$, which concludes the proof.
\end{proof}
By the lemma above, it follows that
\begin{align*}\label{ineq:total}
\mathbb{P}\left(H(\tilde{\sigma}_G)-H(\sigma_G)\le \epsilon R_{H}\right) \ge \mathbb{P}\left( \delta k_{G} \le \frac{1}{2}\epsilon R_{H}\right),
\end{align*}
and by using the fact that $R_{H}\ge \sqrt{v_H}$ (see Lemma \ref{lem:total margin}), we conclude that
\begin{equation}
\mathbb{P}\left(H(\tilde{\sigma}_G)-H(\sigma_G)\le \epsilon R_{H}\right) \ge \mathbb{P}\left( \delta k_{G} \le \frac{1}{2} \epsilon \sqrt{v_{H}}\right).
\end{equation}
Finally, we have the following estimate for the probability that $H(\tilde{\sigma}_G)-H(\sigma_G)\le \epsilon R_{H}$ holds, which constitutes one of the answers to question \ref{question2'}.
\begin{theorem}\label{thmLinfty}
Let $\{J_b\}_{b \in E}$ and $\{h_x\}_{x \in V}$ be mutually independent standard Gaussian random variables. It follows that
\begin{equation}
\mathbb{P} \left( H(\tilde{\sigma}_G)-H(\sigma_G)\le \epsilon R_{H} \right) \ge 1- \gamma\left(k_G; \left(\frac{2\delta k_G}{\epsilon}\right)^2\right),
\end{equation}
where $\gamma(s;x)$ is the distribution function of the chi-square distribution with $s>0$ degrees of freedom, that is,
\begin{align*}
\gamma(s;x)\coloneqq\frac{1}{2^{s/2}\Gamma(s/2)}\int_0^x t^{s/2-1}e^{-t/2} dt
\end{align*}
for $x \geq 0$, and $\gamma(s;x) \coloneqq 0$ for $x<0$.
\end{theorem}
\begin{proof}
It follows from the above discussion that we have
\begin{align*}
\mathbb{P}\left(H(\tilde{\sigma}_G)-H(\sigma_G)\le \epsilon R_{H}\right) &\ge \mathbb{P}\left( \delta k_{G} \le \frac{1}{2} \epsilon \sqrt{v_{H}}\right)\\
&=\mathbb{P}\left( v_{H} \ge \left(\frac{2\delta k_G}{\epsilon} \right)^2\right)\\
&=1-\mathbb{P}\left( v_{H} < \left(\frac{2\delta k_G}{\epsilon} \right)^2\right).
\end{align*}
Since $\{J_b\}_{b \in E}$ and $\{h_x\}_{x \in V}$ are mutually independent random variables distributed according to a standard Gaussian distribution, the random variable $v_{H}$ follows a chi-square distribution with $k_{G}$ degrees of freedom.
Therefore,
\begin{align*}
\mathbb{P}\left( v_{H} < \left(\frac{2\delta k_G}{\epsilon} \right)^2\right)=\gamma\left(k_G; \left(\frac{2\delta k_G}{\epsilon}\right)^2\right).
\end{align*}
Thus, we obtain the lower bound of the target probability.
\end{proof}
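The lower bound of Theorem \ref{thmLinfty} is straightforward to evaluate numerically. The sketch below (illustrative; helper names are ours) implements the chi-square distribution function $\gamma(s;x)$ via the standard series for the regularized lower incomplete gamma function, using only the Python standard library.

```python
import math

def chi2_cdf(s, x):
    """gamma(s; x): CDF of a chi-square distribution with s degrees of freedom,
    via the series gamma(a, z) = z^a e^{-z} sum_{n>=0} z^n / (a (a+1) ... (a+n))."""
    if x <= 0:
        return 0.0
    a, z = s / 2.0, x / 2.0
    term = 1.0 / a
    total = term
    n = 0
    while term > 1e-16 * total:
        n += 1
        term *= z / (a + n)
        total += term
    return total * math.exp(a * math.log(z) - z - math.lgamma(a))

def stability_bound(num_edges, num_vertices, delta, eps):
    """Right-hand side of the theorem: 1 - gamma(k_G; (2 delta k_G / eps)^2)."""
    k_G = num_edges + num_vertices
    return 1.0 - chi2_cdf(k_G, (2.0 * delta * k_G / eps) ** 2)

# Example: complete graph on 20 vertices (|E| = 190), eps = 0.01, delta = 1e-4.
print(stability_bound(190, 20, 1e-4, 0.01))
```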
\subsection{Stability of ground states: second approach}\label{sec:SA}
Before we proceed, let us point out the fundamental difference between the uniform approach and the current one for solving question \ref{question2'}.
If we repeat the computations from the proof of Theorem \ref{thm:order_preservation} in the particular case where $\tau = \tilde{\sigma}_G$ and $\sigma = \sigma_{G}$, and use the fact that
$H_\delta(\sigma_{G})\ge H_\delta(\tilde{\sigma}_G)$, then it follows that
\begin{equation}\label{eq:OPbound}
H(\tilde{\sigma}_G) - H(\sigma_G) \le 2\delta(\lvert W_{\sigma_G,\tilde{\sigma}_G}\rvert+\lvert D_{\sigma_G,\tilde{\sigma}_G}\rvert).
\end{equation}
Recall that the proof of Theorem \ref{thmLinfty} fundamentally relied on the fact that, via $L^{\infty}$-distance estimates, the left-hand side of equation (\ref{eq:OPbound}) can be bounded above by $2\delta k_{G}$. Since the right-hand side of equation (\ref{eq:OPbound}) is also bounded above by $2\delta k_{G}$, let us explore the geometry of the underlying graph
$G$ in order to see whether better bounds can be obtained.
The value of $\lvert W_{\sigma_G,\tilde{\sigma}_G}\rvert+\lvert D_{\sigma_G,\tilde{\sigma}_G}\rvert$ depends on the underlying graph structure and on the relationship between the ground states $\sigma_G$ and $\tilde{\sigma}_G$, so it should be examined for the problem at hand.
In general, since the ground states $\sigma_G$ and $\tilde{\sigma}_G$ are unknown in practice, we look for a uniform estimate of $\lvert W_{\sigma,\tau}\rvert+\lvert D_{\sigma,\tau}\rvert$ valid for any $\sigma$ and $\tau$.
First, let us show the following lemma.
\begin{lemma}\label{lem2}
For any two configurations $\sigma$ and $\tau$, we have
\begin{align*}
\lvert W_{\sigma,\tau}\rvert \leq (\deg G) \cdot \min \left\{\lvert D_{\sigma,\tau}\rvert, \lvert V\setminus D_{\sigma,\tau}\rvert\right\},
\end{align*}
where $\deg G$ stands for the maximum degree of $G$.
\end{lemma}
\begin{proof}
Let us assume that $\lvert D_{\sigma,\tau}\rvert =s$, for some $s$ such that $0\leq s \leq \lvert V\rvert$.
Then, let us enumerate $D_{\sigma,\tau}$ as $D_{\sigma,\tau}=\{x_1, \dots , x_s\}\subset V$, where $x_i\in V$ for each $i=1,\dots, s$.
Moreover, we have $\lvert V\setminus D_{\sigma,\tau}\rvert =\lvert V\rvert -s$, and therefore we can write $V\setminus D_{\sigma,\tau}=\{y_1, \dots, y_{\lvert V\rvert -s}\}\subset V$, where $y_i\in V$ for each $i=1,\dots ,\lvert V\rvert-s$.
By the definition of $D_{\sigma,\tau}$, we have $\sigma_{x_i} \tau_{x_i} =-1$ for every $i=1, \dots, s$ and $\sigma_{y_j} \tau_{y_j}=1$ for all $j=1, \dots, \lvert V\rvert -s$.
If $\{x_i,x_j\}\in E$ for distinct $i$ and $j$ in $\{1, \dots, s\}$, then
\begin{align*}
\sigma_{x_i}\sigma_{x_j} \tau_{x_i} \tau_{x_j} = (\sigma_{x_i} \tau_{x_i}) (\sigma_{x_j} \tau_{x_j})=(-1)^2=1.
\end{align*}
Thus, $\{x_i,x_j\}\notin W_{\sigma,\tau}$.
In a similar way, we conclude that in case $\{y_i,y_j\}\in E$ for distinct $i$ and $j$ in $\{1,\dots ,\lvert V\rvert -s\}$, it follows that $\{y_i,y_j\}\notin W_{\sigma,\tau}$.
If $\{x_i,y_j\}\in E$ for some $i\in\{1,\dots , s\}$ and some $j\in \{1,\cdots , \lvert V\rvert -s\}$, then
\begin{align*}
\sigma_{x_i}\sigma_{y_j} \tau_{x_i}\tau_{y_j} = (\sigma_{x_i}\tau_{x_i}) (\sigma_{y_j} \tau_{y_j}) =(-1)\times 1=-1.
\end{align*}
Hence, $\{x_i,y_j\}\in W_{\sigma,\tau}$.
It follows that
\begin{align*}
W_{\sigma,\tau}=\{\{x,y\}\in E: x=x_i, y=y_j \text{ for some } i,j\}.
\end{align*}
Therefore, we have
\begin{align*}
\lvert W_{\sigma,\tau}\rvert\leq (\deg G) \min \{ \lvert D_{\sigma,\tau}\rvert ,\lvert V\setminus D_{\sigma,\tau}\rvert\}.
\end{align*}
\end{proof}
\begin{proposition}\label{prop:unif_W+D}
For any graph $G$, let $\sigma$ and $\tau$ be two spin configurations. Then, we have
\begin{align}
\lvert W_{\sigma,\tau}\rvert +\lvert D_{\sigma,\tau}\rvert\le \frac{(\deg G+1)\lvert V\rvert}{2}.
\end{align}
\end{proposition}
\begin{proof}
Using Lemma \ref{lem2}, if $\lvert D_{\sigma,\tau}\rvert\leq \lvert V\rvert/2$ then
\begin{align*}
\lvert D_{\sigma,\tau}\rvert +\lvert W_{\sigma,\tau}\rvert\leq (\deg G+1)\lvert D_{\sigma,\tau}\rvert\leq \frac{\deg G+1}{2}\lvert V\rvert,
\end{align*}
otherwise, if $\lvert D_{\sigma,\tau}\rvert > \lvert V\rvert/2$, it follows that
\begin{align*}
\lvert D_{\sigma,\tau}\rvert +\lvert W_{\sigma,\tau}\rvert &\leq (\deg G) \lvert V\setminus D_{\sigma,\tau}\rvert +\lvert D_{\sigma,\tau}\rvert\\
&=(\deg G)\lvert V\rvert -(\deg G-1)\lvert D_{\sigma,\tau}\rvert\\
&\leq (\deg G)\lvert V\rvert -\frac{\deg G-1}{2}\lvert V\rvert\\
&=\frac{\deg G+1}{2}\lvert V\rvert.
\end{align*}
\end{proof}
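Lemma \ref{lem2} and Proposition \ref{prop:unif_W+D} can both be verified exhaustively on a small graph. The illustrative sketch below uses a 6-cycle with one chord, so that not all vertex degrees are equal.

```python
import itertools

V = list(range(6))
E = [(i, (i + 1) % 6) for i in range(6)] + [(0, 3)]  # 6-cycle plus a chord
deg = {x: sum(x in e for e in E) for x in V}
max_deg = max(deg.values())  # deg G = 3 here
bound = (max_deg + 1) * len(V) / 2  # bound of the Proposition

worst = 0
for s in itertools.product([-1, 1], repeat=len(V)):
    for t in itertools.product([-1, 1], repeat=len(V)):
        D = sum(s[x] * t[x] == -1 for x in V)                     # |D_{s,t}|
        W = sum(s[x] * s[y] * t[x] * t[y] == -1 for (x, y) in E)  # |W_{s,t}|
        assert W <= max_deg * min(D, len(V) - D)  # Lemma
        worst = max(worst, W + D)

print(worst, bound)  # the Proposition guarantees worst <= bound
```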
Thus, we have the following theorem, which provides another answer to question \ref{question2'} (see Theorem \ref{thmLinfty} for the alternative approach).
\begin{theorem}\label{thmop}
Let $\{J_b\}_{b \in E}$ and $\{h_x\}_{x \in V}$ be mutually independent standard Gaussian random variables. Then, we have
\begin{equation}
\mathbb{P}\left(H(\tilde{\sigma}_G)-H(\sigma_G)\le \epsilon R_{H}\right)\ge 1-\gamma\left( k_G; \left(\frac{\delta\lvert V\rvert(\deg G+1)}{\epsilon}\right)^2\right).
\end{equation}
\end{theorem}
\begin{proof}
Analogously to the proof of Theorem \ref{thmLinfty}, it follows from equation (\ref{eq:OPbound}), the bound $R_{H}\ge \sqrt{v_H}$, and Proposition \ref{prop:unif_W+D} that
\begin{align*}
\mathbb{P}\left(H(\tilde{\sigma}_G)-H(\sigma_G)\le \epsilon R_{H}\right)
&\ge \mathbb{P}\left(\delta\le \frac{\epsilon \sqrt{v_{H}}}{2(\lvert W_{\sigma_G,\tilde{\sigma}_G}\rvert +\lvert D_{\sigma_G,\tilde{\sigma}_G}\rvert )} \right)\\
&=\mathbb{P} \left( v_{H}\ge \left(\frac{2\delta(\lvert W_{\sigma_G,\tilde{\sigma}_G}\rvert +\lvert D_{\sigma_G,\tilde{\sigma}_G}\rvert)}{\epsilon}\right)^2\right)\\
&\ge \mathbb{P}\left( v_{H} \ge \left(\frac{\delta\lvert V\rvert(\deg G+1)}{\epsilon}\right)^2\right)\\
&=1-\gamma\left( k_G; \left(\frac{\delta\lvert V\rvert(\deg G+1)}{\epsilon}\right)^2\right),
\end{align*}
where we used the fact that $v_{H}$ is distributed according to a chi-square distribution with $k_{G}$ degrees of freedom.
\end{proof}
\subsection{Comparison between approaches}
In the rest of this section, we compare the methods presented in Sections \ref{sec:FA} and \ref{sec:SA}
passing through several examples to which we apply Proposition \ref{prop:unif_W+D}.
The first example is that of complete graphs, which includes the SK model; in this case, Theorem \ref{thmop} provides better results than Theorem \ref{thmLinfty}.
\begin{example}\label{ex:completegraph}
If $G$ is a complete graph (that is, all vertices are connected to each other) with $N$ vertices, then we have
\begin{align*}
\frac{\deg G+1}{2}\lvert V\rvert=\frac{N^2}{2}.
\end{align*}
On the other hand, the value of $k_{G}$ will be given by
\begin{align*}
k_{G} \coloneqq\lvert E\rvert+\lvert V\rvert =\frac{N(N-1)}{2}+N=\frac{N(N+1)}{2}.
\end{align*}
Therefore,
\begin{align*}
\frac{\deg G+1}{2}\lvert V\rvert< k_{G}.
\end{align*}
Hence the uniform upper bound for $\lvert W_{\sigma,\tau}\rvert+\lvert D_{\sigma,\tau}\rvert$ we obtained in Proposition \ref{prop:unif_W+D} is always better than $k_G$.
Furthermore, we can calculate the explicit value of $\lvert W_{\sigma,\tau}\rvert+\lvert D_{\sigma,\tau}\rvert$ when $G$ is a complete graph.
From the proof of Lemma \ref{lem2}, assuming that $G$ is a complete graph,
we have $\lvert W_{\sigma,\tau}\rvert =\lvert D_{\sigma,\tau}\rvert (\lvert V\rvert -\lvert D_{\sigma,\tau}\rvert)$. Therefore,
\begin{align*}
\lvert W_{\sigma,\tau}\rvert +\lvert D_{\sigma,\tau}\rvert = \lvert D_{\sigma,\tau}\rvert (N + 1 - \lvert D_{\sigma,\tau}\rvert)
\le \frac{(N+1)^2}{4},
\end{align*}
and the proof of Theorem \ref{thmop} implies
\begin{align*}
\mathbb{P}\left(H(\tilde{\sigma}_G)-H(\sigma_G)\le \epsilon R_{H}\right)
&\geq \mathbb{P} \left( v_{H}\ge \left(\frac{2\delta(\lvert W_{\sigma_G,\tilde{\sigma}_G}\rvert +\lvert D_{\sigma_G,\tilde{\sigma}_G}\rvert)}{\epsilon}\right)^2\right)\\
&\ge {\mathbb P}\left( v_{H}\ge\frac{\delta^{2} (N+1)^4}{4\epsilon^2} \right)\\
&= 1-\gamma\left( k_G; \frac{\delta^{2} (N+1)^4}{4\epsilon^2}\right).
\end{align*}
\end{example}
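The identity $\lvert W_{\sigma,\tau}\rvert = \lvert D_{\sigma,\tau}\rvert(\lvert V\rvert - \lvert D_{\sigma,\tau}\rvert)$ for complete graphs, and the resulting bound $(N+1)^2/4$, can be verified exhaustively for small $N$ (an illustrative check, not a proof):

```python
import itertools

N = 5
V = list(range(N))
E = [(x, y) for x in V for y in V if x < y]  # complete graph K_N

ok = True
for s in itertools.product([-1, 1], repeat=N):
    for t in itertools.product([-1, 1], repeat=N):
        d = sum(s[x] * t[x] == -1 for x in V)                     # |D_{s,t}|
        w = sum(s[x] * s[y] * t[x] * t[y] == -1 for (x, y) in E)  # |W_{s,t}|
        ok = ok and (w == d * (N - d)) and (w + d <= (N + 1) ** 2 / 4)
print(ok)  # True: identity and bound hold for every pair of configurations
```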
The following example considers King's graphs, for which Theorem \ref{thmop} also works better than Theorem \ref{thmLinfty}, as in the example above.
\begin{example}
Let $G$ be an $N\times M$ King's graph.
The $N\times M$ King's graph can be visualized as an $N\times M$ chessboard where each of its squares corresponds to a vertex of the graph, and each edge represents
a legal move of a king in a chess game. In that way, the inner vertices of the graph have $8$ neighbors each, while the vertices in the corners have $3$ neighbors each, and each of the remaining vertices on the
sides of the graph has $5$ neighbors.
For an $N\times M$ King's graph, we have
\begin{align*}
\frac{\deg G+1}{2}\lvert V\rvert=\frac{9}{2}MN,
\end{align*}
since $\deg G=8$. Moreover, we have
\begin{align*}
k_{G} = \lvert E\rvert+\lvert V\rvert=5MN-3(M+N)+2.
\end{align*}
If $M$ and $N$ are sufficiently large, then we have
\begin{align*}
\frac{\deg G+1}{2}\lvert V\rvert< k_{G}.
\end{align*}
\end{example}
In the following example, differently from the previous ones, the estimate provided by Theorem \ref{thmLinfty} is better suited than that of Theorem \ref{thmop}.
\begin{example}
If $G$ is a star graph with degree $k\ge 3$, that is, $G$ consists of one vertex placed in the center and other $k$ vertices connected only with the center, then
\begin{align*}
\frac{\deg G+1}{2}\lvert V\rvert=\frac{(k+1)^2}{2}.
\end{align*}
Furthermore, we have
\begin{align*}
k_{G} = \lvert E\rvert+\lvert V\rvert=2k+1.
\end{align*}
Therefore, we obtain
\begin{align*}
\frac{\deg G+1}{2}\lvert V\rvert> k_{G}.
\end{align*}
\end{example}
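The comparison between the two uniform bounds, $k_G = \lvert E\rvert + \lvert V\rvert$ and $(\deg G + 1)\lvert V\rvert/2$, for the three graph families above can be tabulated in a few lines (a sketch; the formulas are the ones derived in the examples):

```python
def bounds_complete(N):
    # complete graph: deg G = N - 1
    return N * (N - 1) // 2 + N, N * N / 2

def bounds_king(N, M):
    # N x M King's graph: deg G = 8
    return 5 * M * N - 3 * (M + N) + 2, 9 * M * N / 2

def bounds_star(k):
    # star graph with k leaves: deg G = k
    return 2 * k + 1, (k + 1) ** 2 / 2

for name, (k_G, deg_bound) in [("complete K_10", bounds_complete(10)),
                               ("King 100x100", bounds_king(100, 100)),
                               ("star k=5", bounds_star(5))]:
    winner = "second approach wins" if deg_bound < k_G else "first approach wins"
    print(name, k_G, deg_bound, winner)
```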
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.44\textwidth}
\includegraphics[width=\textwidth]{digit_CG}
\caption{Complete graph.}
\label{fig:dig1}
\end{subfigure}
\begin{subfigure}[b]{0.44\textwidth}
\includegraphics[width=\textwidth]{digit_KG}
\caption{$N \times N$ King's graph.}
\label{fig:dig2}
\end{subfigure}
\begin{subfigure}[b]{0.44\textwidth}
\includegraphics[width=\textwidth]{digit_SG}
\caption{Star graph.}
\end{subfigure}
\caption{Minimal number of digits to be considered in the binary expansions of the parameters so that with probability higher than $99\%$ the difference
$H(\tilde{\sigma}_G) - H(\sigma_G)$ represents a value smaller than $1\%$ of $R_H$, as a function of the size of the graph.}
\label{fig:digit}
\end{figure}
According to the above examples, we conclude that the uniform upper bound of $\lvert W_{\sigma,\tau}\rvert +\lvert D_{\sigma,\tau}\rvert$ provided by Proposition \ref{prop:unif_W+D} is not always guaranteed to work better than $k_G=\lvert E\rvert +\lvert V\rvert$.
Thus, we may have to consider such bounds separately for different graphs in order to obtain an optimal estimate for the probability that the inequality $H(\tilde{\sigma}_G)-H(\sigma_G)\le \epsilon R_{H}$ holds.
Let us consider again the stability problem in which only a finite number of terms in the binary expansions of the parameters $(J_b)_{b \in E}$ and $(h_x)_{x \in V}$ is taken into account, as illustrated at the beginning of Section \ref{sec:settings}. In Figure \ref{fig:digit}, we show, as a function of the size of the graph, the minimum number of digits of the binary expansions of these parameters that must be kept so that, with probability at least $99\%$, the difference $H(\tilde{\sigma}_G) - H(\sigma_G)$ is smaller than $1\%$ of $R_H$. On each plot we compare the methods developed in this paper: the first method corresponds to the estimate from
Theorem \ref{thmLinfty} and the second method to the estimate from Theorem \ref{thmop}. In Figure \ref{fig:dig1}, we also include a third estimate, from Example \ref{ex:completegraph}, which is sharper and gives better results than the other two. As expected, the second method outperforms the first one for complete graphs and for $N \times N$ King's graphs with $N$ sufficiently large. For star graphs, on the other hand, the first method is more appropriate, and the gap in performance between the two methods is clearly visible.
\section{Stability under a perturbed graph}\label{pertgraph}
In this section, we consider the stability of the energy landscape when a given spin system defined on a graph is compressed into a smaller subsystem.
Differently from the previous sections, we fix a given Hamiltonian and look for a sufficient condition that guarantees the existence of a nontrivial subset of the vertex set outside of which we can randomly assign any spin configuration while the energy of the system is kept under control, up to a certain error margin.
Let $G=(V,E)$ be a finite simple graph, and let $H$ be the Hamiltonian on $G$ given by
\begin{align*}
H(\sigma) = -\sum_{ \{x,y\} \in E} J_{x,y} \sigma_x\sigma_{y}
\end{align*}
for every configuration $\sigma\in\{-1,1\}^V$, where $\{J_{x,y}\}_{\{x,y\} \in E}$ is a collection of mutually independent standard Gaussian random variables.
What we would like to show is that we can compress the whole system into a nontrivial subsystem whose energy landscape is close to the original one up to a given error margin. More precisely, our goal is to find a class of examples for which, given a positive constant $\epsilon$, there is a positive $\delta$ such that the subset $V_0 = V_0(\delta)$ of $V$, defined through the relation
\begin{align*}
V\setminus V_0\coloneqq\left\{x\in V: \text{$\lvert J_{x,y}\rvert<\delta$ holds for every $y$ such that $\{x,y\} \in E$}\right\},
\end{align*}
is non-trivial, has size comparable to the size of $V$, and satisfies
\begin{equation}\label{cond:ignore}
\sup_{\sigma,\eta\in\{-1,1\}^V} \left\lvert H(\sigma)-H(\sigma_{V_0}, \eta_{V\setminus V_0}) \right\rvert < \varepsilon R_H
\end{equation}
with high probability, see Figure \ref{fig:V0}.
\begin{figure}[h]
\centering
\includegraphics[scale=.2]{stability_volume.pdf}
\caption{We want to approximate the energy of a configuration $\sigma$ defined in the whole vertex set $V$ by the energy of a configuration that coincides with $\sigma$ in $V_{0}$ and
whose spins $\eta_{i}$'s in the set $V\setminus V_0$ are arbitrary.}\label{fig:V0}
\end{figure}
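The defining rule for $V\setminus V_0$ is easy to apply once the couplings are given; the illustrative sketch below (our own variable names) constructs it for the one-dimensional torus studied in the next subsection.

```python
import random

random.seed(2)
N, delta = 20, 0.5
# J[x] is the coupling on the edge {x, x+1 mod N} of the torus
J = [random.gauss(0, 1) for _ in range(N)]

# x is discarded iff *every* coupling incident to x is smaller than delta
# in absolute value: on the torus these are the edges {x-1, x} and {x, x+1}.
discarded = [x for x in range(N)
             if abs(J[(x - 1) % N]) < delta and abs(J[x]) < delta]
V0 = [x for x in range(N) if x not in discarded]
print(discarded, len(V0))
```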
\subsection{One-dimensional discrete torus}\label{sec:1dtorus}
Let us solve the problem stated above in the particular case where the graph $G$ is a one-dimensional discrete torus.
\begin{theorem}\label{thm4}
Let $G=(V,E)$ be a one-dimensional discrete torus with $N$ vertices, that is, $V=\{1,2,\dots,N\}$ and $E=\{\{1,2\},\{2,3\},\dots,\{N-1,N\},\{N,1\}\}$. Given $\epsilon > 0$, let $\delta$ be a positive number such that
$\delta < \epsilon/\sqrt{2\pi}$. Then, if $A$ is a subset of the event $\{0 < |V \backslash V_{0}| < N\}$, it follows that
\begin{equation}\label{thm4a}
{\mathbb P}\left(\Bigg\{\sup_{\sigma,\tau\in\{-1,1\}^N} \left\lvert H(\sigma)-H(\sigma_{V_0}, \tau_{V\setminus V_0}) \right\rvert < \varepsilon R_H \Bigg\}\cap A \right)
\geq {\mathbb P}(A) - \frac{1 - \frac{2}{\pi}}{N \left(\sqrt{\frac{2}{\pi}} - \frac{2\delta}{\epsilon}\right)^{2}}
\end{equation}
holds for each $N \geq 3$. In particular, given constants $C > 0$ and $\alpha \in {[}0,1)$, we have
\begin{align}\label{thm4b}
{\mathbb P}\left(\Bigg\{\sup_{\sigma,\tau\in\{-1,1\}^N} \left\lvert H(\sigma)-H(\sigma_{V_0}, \tau_{V\setminus V_0}) \right\rvert < \varepsilon R_H \Bigg\} \cap \Big\{ C N^{\alpha} \leq |V \backslash V_{0}| < N \Big\}\right) &\nonumber \\
\geq \left( 1-\frac{C}{N^{1-\alpha}\theta^2} \right)^{2}\frac{1}{1+\frac{1+2\theta-3\theta^2}{N\theta^2}} - \theta^{N} -& \frac{1 - \frac{2}{\pi}}{N \left(\sqrt{\frac{2}{\pi}} - \frac{2\delta}{\epsilon}\right)^{2}}
\end{align}
for $N$ sufficiently large, where
\begin{equation}
\theta=\int_{-\delta}^{\delta}\frac{e^{-\xi^2/2}}{\sqrt{2\pi}}d\xi.
\end{equation}
\end{theorem}
Before proceeding to the proof of the result above, let us clarify the theorem with some practical consequences. Consider the particular case where $C = 1$ and $\alpha \in (0,1)$. For different values of $\epsilon$ and $\delta$, we obtain lower bounds for the probability that the size of $V \backslash V_0$ is at least $N ^ \alpha$ and that condition (\ref{cond:ignore}) holds; see the table below.
\begin{table}[h!]
\centering
\begin{tabular}{ |p{1.5cm}||p{1.5cm}|p{1.5cm}|p{1.5cm}|p{2cm}|p{3.5cm}|}
\hline
\multicolumn{6}{|c|}{Examples} \\
\hline
$N$ & $\epsilon$ & $\delta $ & $\alpha$& Minimum size of $V \backslash V_0$ & Right-hand side of (\ref{thm4b})\\
\hline
$10^8$ & $0.05$ & $0.0198$ & $0.4$ & $1584$ & $0.877$ \\
$10^8$ & $0.05$ & $0.0198$ & $0.5$ & $10^4$ & $0.361$ \\
$10^8$ & $0.1$ & $0.0398$ & $0.5$ & $10^4$ & $0.810$ \\
$10^{12}$ & $0.01$ & $0.00398$ & $0.5$ & $10^ 6$ & $0.811$\\
$10^{12}$ & $0.05 $& $0.0199$ & $0.5$ & $10^ 6$ & $0.992$\\
$10^{12}$ & $0.1$ & $0.0399$ & $0.5$& $10^ 6$ & $0.998$\\
$10^{12}$ & $0.05$ & $0.0199$ & $0.6$& $\approx 1.58 \times 10^7$ & $0.879$\\
$10^{12}$ & $0.05$ & $0.0199$ & $0.65$& $\approx 6.31 \times 10^7 $ & $0.563$\\
\hline
\end{tabular}
\caption{Applications of Theorem \ref{thm4}.}
\end{table}
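The entries in the last column of the table can be reproduced directly from the right-hand side of (\ref{thm4b}), using $\theta = \operatorname{erf}(\delta/\sqrt{2})$ and $C = 1$. A short Python sketch (function name ours):

```python
import math

def lower_bound(N, eps, delta, alpha, C=1.0):
    """Right-hand side of the bound; theta = P(|Z| < delta), Z standard normal."""
    theta = math.erf(delta / math.sqrt(2))
    main = (1 - C / (N ** (1 - alpha) * theta ** 2)) ** 2 \
        / (1 + (1 + 2 * theta - 3 * theta ** 2) / (N * theta ** 2))
    tail = (1 - 2 / math.pi) / (N * (math.sqrt(2 / math.pi) - 2 * delta / eps) ** 2)
    return main - theta ** N - tail

print(round(lower_bound(1e8, 0.05, 0.0198, 0.4), 3))  # first row of the table
print(round(lower_bound(1e8, 0.10, 0.0398, 0.5), 3))  # third row of the table
```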
Let us observe that for any pair $\sigma, \tau$ of spin configurations, we have
\begin{align*}
|H(\sigma) - H(\sigma_{V_{0}}, \tau_{V \backslash V_{0}})| &= \left|\sum_{x \in V_{0}} \sum_{\substack{y \in V \backslash V_{0} \\ \{x,y\} \in E}} J_{x,y} \sigma_{x}(\sigma_{y} - \tau_{y}) +
\sum_{\substack{\{x,y\} \subseteq V \backslash V_{0} \\ \{x,y\} \in E}} J_{x,y} (\sigma_{x} \sigma_{y} - \tau_{x} \tau_{y})\right| \\
&= \left|\sum_{x \in V_{0}} \sum_{\substack{y \in V \backslash V_{0} \\ \{x,y\} \in E}} J_{x,y} \sigma_{x}\sigma_{y}(1 - \sigma_{y}\tau_{y}) +
\sum_{\substack{\{x,y\} \subseteq V \backslash V_{0} \\ \{x,y\} \in E}} J_{x,y} \sigma_{x} \sigma_{y} (1 - \sigma_{x}\tau_{x} \sigma_{y}\tau_{y}) \right| \\
&= \left|\sum_{y \in V \backslash V_{0}} \sum_{\substack{x \in V \\ \{x,y\} \in E}} J_{x,y} \sigma_{x}\sigma_{y}\left[\mathbbm{1}_{x \in V_{0}} (1 - \sigma_{y}\tau_{y}) +
\mathbbm{1}_{x \in V \backslash V_{0}} (1 - \sigma_{x}\tau_{x} \sigma_{y}\tau_{y})/2\right] \right|\\
&\leq 2 \sum_{y \in V \backslash V_{0}} \sum_{\substack{x \in V \\ \{x,y\} \in E}} |J_{x,y}| \leq 2\delta \sum_{y \in V \backslash V_{0}} \deg(y),
\end{align*}
where the last inequality uses that, by the definition of $V \setminus V_0$, every coupling incident to a vertex $y \in V \setminus V_0$ satisfies $\lvert J_{x,y}\rvert < \delta$.
In particular, if $G$ is the one-dimensional torus as in Theorem \ref{thm4}, it follows that
\begin{equation}\label{Hamiltoniandiff}
\sup_{\sigma,\tau\in\{-1,1\}^N} \left\lvert H(\sigma)-H(\sigma_{V_0}, \tau_{V\setminus V_0}) \right\rvert \leq 4 \delta |V \backslash V_{0}|.
\end{equation}
Now, let us prepare two lemmas in order to prove Theorem \ref{thm4}.
\begin{lemma}\label{lem41}
For $R_H=\sup_{\xi,\eta}\lvert H(\xi)-H(\eta)\rvert$, we have
\begin{align}
2\sum_{x=1}^N\lvert J_{x,x+1}\rvert - 4\min_{x = 1,\dots, N}\lvert J_{x,x+1}\rvert
\le R_H
\le 2\sum_{x=1}^N\lvert J_{x,x+1}\rvert,
\end{align}
hence, with probability 1,
\begin{align*}
R_H \sim 2\sqrt{\frac{2}{\pi}}N \quad\text{as $N$ approaches infinity}.
\end{align*}
\end{lemma}
\begin{proof}
Without loss of generality, we assume $\min_x\lvert J_{x,x+1}\rvert=\lvert J_{N,1}\rvert$.
Let us fix $\sigma_{1}=1$.
Then, depending on the sign of $J_{1,2}$, we can determine $\sigma_2$ to minimize (or maximize) $H(\sigma)$.
We continue this procedure up to $\sigma_{N}$ and we have
\begin{align*}
\min_{\sigma\in\{-1,1\}^N}H(\sigma) &\leq -\sum_{x=1}^N\lvert J_{x,x+1}\rvert \ +2\min_{x = 1,\dots, N}\lvert J_{x,x+1}\rvert,\\
\max_{\sigma\in\{-1,1\}^N}H(\sigma) &\geq \sum_{x=1}^N\lvert J_{x,x+1}\rvert \ -2\min_{x = 1,\dots, N}\lvert J_{x,x+1}\rvert
\end{align*}
(the additional terms account for a possible frustration involving $\sigma_N$ and $\sigma_1$).
Hence the inequality of the lemma is proven.
To show the last statement, we divide all terms by $N$, note that $\min_{x}\lvert J_{x,x+1}\rvert/N$ tends to $0$, and use the law of large numbers for the folded normal distribution.
\end{proof}
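The two-sided bound of the lemma can be confirmed by brute force on a small torus; an illustrative sketch (we take $N$ odd so that, generically, exactly one of the two extremes of $H$ is frustrated and both inequalities are strict):

```python
import itertools
import random

random.seed(3)
N = 7
J = [random.gauss(0, 1) for _ in range(N)]  # J[x] couples x and (x+1) mod N

def H(sigma):
    return -sum(J[x] * sigma[x] * sigma[(x + 1) % N] for x in range(N))

energies = [H(s) for s in itertools.product([-1, 1], repeat=N)]
R_H = max(energies) - min(energies)
total = 2 * sum(abs(j) for j in J)
lower = total - 4 * min(abs(j) for j in J)
print(lower <= R_H <= total)  # the lemma guarantees True
```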
\begin{lemma}\label{lem42}
If we assume $N \geq 3$, then it follows that
\begin{align*}
\mathbb{E}\left[\lvert V\setminus V_0\rvert\right]=N\theta^2
\end{align*}
and
\begin{align*}
\mathbb{E}\left[\lvert V\setminus V_0\rvert^2\right]=N\theta^2\left(1+2\theta-3\theta^2+N\theta^2\right).
\end{align*}
\end{lemma}
\begin{proof}
For each $i=1,\dots,N$, let us define a random variable $X_{i}$ by letting
\begin{align*}
X_i=
\begin{cases}
1 & \text{if $\lvert J_{i-1,i}\rvert<\delta$ and $\lvert J_{i,i+1}\rvert<\delta$,}\\
0 & \text{otherwise}.
\end{cases}
\end{align*}
Then, by the definition of $V_0$, the condition $i\in V\setminus V_0$ is equivalent to $X_i = 1$.
Therefore, the expected value of the size of $V\setminus V_0$ will be given by
\begin{align*}
\mathbb{E}\left[\lvert V\setminus V_0\rvert\right]
= \mathbb{E}\left[\sum_{i=1}^N X_i \right]
= \sum_{i=1}^N\mathbb{E}\left[X_i\right]
= N\theta^2.
\end{align*}
Furthermore, we write
\begin{align*}
\lvert V\setminus V_0\rvert^2
=\sum_{i=1}^NX_i^2+2\sum_{i=1}^NX_iX_{i+1}+\sum_{i=1}^N\sum_{\substack{j\notin \{i-1,\, i,\, i+1\}}}X_iX_j.
\end{align*}
Here, the random variables $X_i$ and $X_{i+1}$ are not mutually independent but we have
\begin{align*}
X_iX_{i+1}=
\begin{cases}
1 & \text{if $\lvert J_{i-1,i}\rvert<\delta$,\ $\lvert J_{i,i+1}\rvert<\delta$ and $\lvert J_{i+1,i+2}\rvert<\delta$},\\
0 & \text{otherwise}.
\end{cases}
\end{align*}
Thus, it follows that the identity
\begin{align*}
\mathbb{E}\left[\lvert V\setminus V_0\rvert^2\right]
&=N\theta^2+2N\theta^3+N(N-3)\theta^4\\
&=N\theta^2\left(1+2\theta-3\theta^2+N\theta^2\right)
\end{align*}
holds, which completes the proof.
\end{proof}
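The two moment identities of Lemma \ref{lem42} can be checked numerically. The following sketch (an illustrative Monte Carlo check, not part of the proof; the values of $N$, $\delta$ and the number of trials are arbitrary choices) simulates the indicators $X_i$ on a ring of $N$ vertices and compares the empirical moments of $|V\setminus V_0|$ with the formulas above, using $\theta = \mathbb{P}(|J|<\delta) = \operatorname{erf}(\delta/\sqrt{2})$ for a standard Gaussian coupling.

```python
import math
import random

def sample_weak_count(N, delta, rng):
    """|V \\ V_0| for one realization: vertex i belongs to V \\ V_0
    iff both incident couplings |J_{i-1,i}| and |J_{i,i+1}| are
    below delta (periodic boundary)."""
    J = [rng.gauss(0.0, 1.0) for _ in range(N)]  # J[i] = J_{i,i+1}
    return sum(
        1 for i in range(N)
        if abs(J[(i - 1) % N]) < delta and abs(J[i]) < delta
    )

def check_lemma42(N=50, delta=0.5, trials=20000, seed=1):
    """Empirical first and second moments vs. the closed forms."""
    rng = random.Random(seed)
    theta = math.erf(delta / math.sqrt(2.0))     # P(|J| < delta)
    counts = [sample_weak_count(N, delta, rng) for _ in range(trials)]
    m1 = sum(counts) / trials
    m2 = sum(c * c for c in counts) / trials
    e1 = N * theta**2
    e2 = N * theta**2 * (1 + 2 * theta - 3 * theta**2 + N * theta**2)
    return m1, e1, m2, e2
```

For $N=50$ and $\delta=0.5$ the empirical moments agree with the closed forms up to Monte Carlo error.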
\begin{proof}[Proof of Theorem \ref{thm4}]
Let us start by splitting the probability on the left-hand side of equation (\ref{thm4a}) as
\begin{align*}
{\mathbb P}&\left(\Bigg\{\sup_{\sigma,\tau\in\{-1,1\}^N} \left\lvert H(\sigma)-H(\sigma_{V_0}, \tau_{V\setminus V_0}) \right\rvert < \varepsilon R_H \Bigg\}\cap A \right)\\
&\geq {\mathbb P}\left(\Bigg\{\sup_{\sigma,\tau\in\{-1,1\}^N} \left\lvert H(\sigma)-H(\sigma_{V_0}, \tau_{V\setminus V_0}) \right\rvert < \varepsilon R_H \Bigg\} \cap \left\{\left| \frac{1}{N} \sum_{x = 1}^{N} |J_{x,x+1}| - \sqrt{\frac{2}{\pi}} \right| < \sqrt{\frac{2}{\pi}} - \frac{2\delta}{\varepsilon} \right\} \cap A \right)\\
& = {\mathbb P}\left(\sup_{\sigma,\tau\in\{-1,1\}^N} \left\lvert H(\sigma)-H(\sigma_{V_0}, \tau_{V\setminus V_0}) \right\rvert < \varepsilon R_H \,\middle\vert\, B \right)\,
{\mathbb P}\left(B \right),
\end{align*}
where $B$ is the event given by
\begin{equation}
B = \left\{\left| \frac{1}{N} \sum_{x = 1}^{N} |J_{x,x+1}| - \sqrt{\frac{2}{\pi}} \right| < \sqrt{\frac{2}{\pi}} - \frac{2\delta}{\varepsilon} \right\} \cap A.
\end{equation}
From equation (\ref{Hamiltoniandiff}), Lemma \ref{lem41}, and the fact that, under the event $B$ (a subset of $A$), $\min_{x = 1,\dots, N}|J_{x,x+1}| \leq \delta$, it follows that the conditional probability above satisfies
\begin{align*}
{\mathbb P}\left(\sup_{\sigma,\tau\in\{-1,1\}^N} \left\lvert H(\sigma)-H(\sigma_{V_0}, \tau_{V\setminus V_0}) \right\rvert < \varepsilon R_H \,\middle\vert\, B \right)
&\ge {\mathbb P}\left( 2\delta\lvert V\setminus V_0\rvert <\varepsilon\left( \sum_{x=1}^N\lvert J_{x,x+1}\rvert - 2\min_{x = 1,\dots, N}\lvert J_{x,x+1}\rvert\right)\,\middle\vert\, B \right)\\
&\ge {\mathbb P}\left( \frac{2\delta}{\varepsilon} \lvert V\setminus V_0\rvert < \left( \sum_{x=1}^N\lvert J_{x,x+1}\rvert - 2\delta\right)\,\middle\vert\, B \right)\\
&= {\mathbb P}\left( \frac{2\delta}{\varepsilon} \frac{\lvert V\setminus V_0\rvert + \varepsilon}{N} < \frac{1}{N}\sum_{x=1}^N\lvert J_{x,x+1}\rvert \,\middle\vert\, B \right)\\
&\ge {\mathbb P}\left( \frac{2\delta}{\varepsilon} < \frac{1}{N}\sum_{x=1}^N\lvert J_{x,x+1}\rvert \,\middle\vert\, B \right) = 1,
\end{align*}
and hence
\begin{equation}\label{thm4:1}
{\mathbb P}\left(\sup_{\sigma,\tau\in\{-1,1\}^N} \left\lvert H(\sigma)-H(\sigma_{V_0}, \tau_{V\setminus V_0}) \right\rvert < \varepsilon R_H \,\middle\vert\, B \right) = 1.
\end{equation}
The rest of the proof consists of estimating the probability of the event $B$. Let us write
\begin{equation}\label{thm4:2}
{\mathbb P}(B)
\geq {\mathbb P}(A)
+ {\mathbb P}\left(\left| \frac{1}{N} \sum_{x = 1}^{N} |J_{x,x+1}| - \sqrt{\frac{2}{\pi}} \right| < \sqrt{\frac{2}{\pi}} - \frac{2\delta}{\epsilon} \right)
-1.
\end{equation}
It follows from Chebyshev's inequality that
\begin{equation}\label{thm4:3}
{\mathbb P}\left(\left| \frac{1}{N} \sum_{x = 1}^{N} |J_{x,x+1}| - \sqrt{\frac{2}{\pi}} \right| \geq \sqrt{\frac{2}{\pi}} - \frac{2\delta}{\varepsilon} \right)\leq \frac{\sigma_{FG}^{2}}{N \left(\sqrt{\frac{2}{\pi}} - \frac{2\delta}{\varepsilon}\right)^{2}},
\end{equation}
where $\sigma_{FG}^{2}$ is the variance of the folded Gaussian random variable $Y = |J_{1,2}|$ which is equal to $1 - \frac{2}{\pi}$. By using equations (\ref{thm4:1}), (\ref{thm4:2}) and (\ref{thm4:3}), equation (\ref{thm4a}) follows.
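The variance constant entering (\ref{thm4:3}) is easy to validate numerically. The sketch below (purely illustrative; the choices of $N$, the deviation $t$ and the number of trials are arbitrary) estimates the left-hand side of Chebyshev's bound for the empirical mean of $|J_{x,x+1}|$ and checks that it stays below $\sigma_{FG}^{2}/(N t^{2})$ with $\sigma_{FG}^{2} = 1-2/\pi$.

```python
import math
import random

def chebyshev_check(N=200, t=0.1, trials=5000, seed=11):
    """MC estimate of P(|mean_x |J_x| - sqrt(2/pi)| >= t) versus the
    Chebyshev bound (1 - 2/pi) / (N t^2) for folded Gaussians."""
    rng = random.Random(seed)
    mu = math.sqrt(2.0 / math.pi)          # E|J| for J ~ N(0,1)
    hits = 0
    for _ in range(trials):
        m = sum(abs(rng.gauss(0.0, 1.0)) for _ in range(N)) / N
        if abs(m - mu) >= t:
            hits += 1
    p_mc = hits / trials
    bound = (1.0 - 2.0 / math.pi) / (N * t * t)
    return p_mc, bound
```

As expected, the empirical probability sits well below the (loose) Chebyshev bound.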
In particular, suppose that $A$ is the event given by $A = \{C N^{\alpha} \leq |V \backslash V_{0}| < N\}$. Note that
\begin{equation}\label{thm4:4}
{\mathbb P}(A) = {\mathbb P}(|V \backslash V_{0}| \geq C N^{\alpha}) - {\mathbb P}(|V \backslash V_{0}| = N),
\end{equation}
where ${\mathbb P}(|V \backslash V_{0}| = N) = \theta^{N}$. By the Paley-Zygmund inequality and Lemma \ref{lem42}, we have
\begin{align*}
{\mathbb P}\left(\lvert V\setminus V_0\rvert \geq C N^{\alpha} \right)
&\ge { \left( 1 - \frac{C N^{\alpha}}{\mathbb{E}[\lvert V\setminus V_0\rvert]} \right)^2} \frac{\mathbb{E}[\lvert V\setminus V_0\rvert]^2}{\mathbb{E}[\lvert V\setminus V_0\rvert^2]}\\
&= { \left( 1 - \frac{C N^{\alpha}}{N \theta^{2}} \right)^2}\frac{1}{1+\frac{1+2\theta-3\theta^2}{N\theta^2}}
\end{align*}
for $N$ sufficiently large; therefore, equation (\ref{thm4b}) holds.
\end{proof}
\subsection{Generalizations}
The most natural next step is to extend the results obtained in Section \ref{sec:1dtorus} to the case where i.i.d. standard Gaussian external fields are included, and to a larger class of examples, such as a $d$-dimensional torus or even
finite graphs with bounded degree. Note that, assuming the absence of external fields, in the same way as we obtained inequality (\ref{Hamiltoniandiff}), one can show that
\begin{equation}\label{Hamiltoniandiff2}
|H(\sigma) - H(\sigma_{V_{0}}, \tau_{V \backslash V_{0}})| \leq 2\delta \sum_{y \in V \backslash V_{0}} \text{deg}(y)
\end{equation}
holds for any graph. So, analogously to the one-dimensional torus case, it is expected that if we find a lower bound for $R_{H}$, as we did in Lemma \ref{lem41}, that is comparable to the right-hand side of equation (\ref{Hamiltoniandiff2}), then we may derive an extension of our results to a larger class of graphs. Some numerical results suggest that, for an Ising spin system on a $d$-dimensional torus with i.i.d. standard Gaussian spin-spin couplings and without external fields, $R_{H}$ is still of order $N$, but a rigorous proof of this observation is still lacking due to the difficulty of dealing with frustrated configurations in a higher-dimensional torus.
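As a quick sanity check of the bound (\ref{Hamiltoniandiff2}), the sketch below (illustrative only; it assumes the ring Hamiltonian $H(\sigma)=-\sum_x J_{x,x+1}\sigma_x\sigma_{x+1}$ and arbitrary parameter values) resamples the spins on $V\setminus V_0$ at random and verifies that the energy change never exceeds $2\delta\sum_{y\in V\setminus V_0}\deg(y)=4\delta\,|V\setminus V_0|$ on the one-dimensional torus, where every vertex has degree two.

```python
import random

def H(sigma, J):
    """Ring Hamiltonian H(sigma) = -sum_x J_{x,x+1} s_x s_{x+1}
    (assumed sign convention; the bound is insensitive to it)."""
    N = len(sigma)
    return -sum(J[x] * sigma[x] * sigma[(x + 1) % N] for x in range(N))

def weak_vertices(J, delta):
    """V \\ V_0: vertices whose two incident couplings are both < delta."""
    N = len(J)
    return [i for i in range(N)
            if abs(J[(i - 1) % N]) < delta and abs(J[i]) < delta]

def check_bound(N=60, delta=0.6, resamples=200, seed=7):
    rng = random.Random(seed)
    J = [rng.gauss(0.0, 1.0) for _ in range(N)]
    sigma = [rng.choice((-1, 1)) for _ in range(N)]
    weak = weak_vertices(J, delta)
    bound = 2 * delta * sum(2 for _ in weak)  # deg(y) = 2 on the ring
    worst = 0.0
    for _ in range(resamples):
        tau = list(sigma)
        for y in weak:                        # resample spins on V \ V_0
            tau[y] = rng.choice((-1, 1))
        worst = max(worst, abs(H(sigma, J) - H(tau, J)))
    return worst, bound
```

Every bond touched by a resampled spin has coupling strength below $\delta$, so its contribution changes by at most $2\delta$, which is exactly the counting behind (\ref{Hamiltoniandiff2}).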
The simulations presented in this section were performed using a modified version of the stochastic cellular automata algorithm studied in \cite{HKKS19,HKKS21} to estimate the maximum and minimum values
of the Hamiltonian $H$, in order to find an approximation of $R_{H}$ for different values of $N$. In the plots below, each dot represents the value of $R_{H}$ (resp. $R_{H}/N$) corresponding to a torus with $N$ vertices for one realization of the random spin-spin couplings (i.i.d. standard Gaussian random variables). In the one-dimensional case (see Figure \ref{fig:torus1d}), we see that the value of $R_{H}/N$ approaches the value $2 \sqrt{2/\pi} \approx 1.5957$, as expected from Lemma \ref{lem41}.
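The one-dimensional value can also be obtained without any stochastic search: on a ring the extremes of $H$ are available in closed form, since at most one bond (the weakest) must be left unsatisfied when the couplings are frustrated. The sketch below (illustrative, again assuming the convention $H(\sigma)=-\sum_x J_{x,x+1}\sigma_x\sigma_{x+1}$) computes $R_H$ exactly, cross-checks the closed form against brute force for small $N$, and recovers $R_H/N \to 2\sqrt{2/\pi}$.

```python
import math
import random

def exact_extremes(J):
    """Exact min/max of H(s) = -sum_x J[x] s_x s_{x+1} on a ring.
    All bonds can be satisfied iff prod(sign J) = +1; otherwise the
    weakest bond is sacrificed (similarly, with a parity factor
    (-1)^N, for the maximum)."""
    N = len(J)
    tot = sum(abs(j) for j in J)
    weakest = min(abs(j) for j in J)
    neg = sum(1 for j in J if j < 0)
    h_min = -tot + (2 * weakest if neg % 2 == 1 else 0)
    h_max = tot - (2 * weakest if (neg + N) % 2 == 1 else 0)
    return h_min, h_max

def brute_extremes(J):
    """Exhaustive check over all 2^N configurations (small N only)."""
    N = len(J)
    vals = []
    for m in range(1 << N):
        s = [1 if (m >> x) & 1 else -1 for x in range(N)]
        vals.append(-sum(J[x] * s[x] * s[(x + 1) % N] for x in range(N)))
    return min(vals), max(vals)

def RH_over_N(N, seed=3):
    rng = random.Random(seed)
    J = [rng.gauss(0.0, 1.0) for _ in range(N)]
    h_min, h_max = exact_extremes(J)
    return (h_max - h_min) / N   # approaches 2*sqrt(2/pi) for large N
```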
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.44\textwidth}
\includegraphics[width=\textwidth]{RH1dTorus_1}
\caption{}
\label{fig:lab0}
\end{subfigure}
\begin{subfigure}[b]{0.44\textwidth}
\includegraphics[width=\textwidth]{RH1dTorus_2}
\caption{}
\label{fig:lab1}
\end{subfigure}
\caption{The dependence of $R_{H}$ on the system size $N$ in the one-dimensional case and its asymptotic behavior as $N$ grows.}
\label{fig:torus1d}
\end{figure}
Now, for the two- and three-dimensional cases (see Figure \ref{fig:torus}), when we consider larger values of $N$, the value of $R_{H}/N$ seems to approach the values $2.564$ and $3.329$, respectively. Note that these simulated values represent lower bounds for the true value of the limit of $R_{H}/N$ as $N$ approaches infinity, so the true limits are still unknown. Furthermore, we conjecture that such a limit exists in any dimension and that the random variable $R_{H}/N$ converges almost surely, since, in higher dimensions, its simulated values seem to fluctuate less around an asymptotic limit than in the one-dimensional case.
It is straightforward to show that, for the $d$-dimensional torus, we have
\begin{equation*}
R_H \leq 2 \sum_{k = 1}^{d}\sum_{i \in V} |J_{i, i+\mathbf{e_k}}|,
\end{equation*}
where $\mathbf{e_k}$ stands for the $k$-th canonical vector of the $d$-dimensional Euclidean space, and hence
\begin{equation}
\limsup_N \frac{R_H}{N} \leq 2d \sqrt{\frac{2}{\pi}}.
\end{equation}
Moreover, it follows from the fact that $R_H \geq \sqrt{\sum_b J_b^2}$ (see Lemma \ref{lem:total margin}) and the Cauchy–Schwarz inequality that
\begin{equation}
\frac{1}{\sqrt{N}} \sum_b |J_b| \leq R_H.
\end{equation}
Therefore, we see that there is still room for improvement and a need for rigorous proofs concerning the existence and determination of the limit $\lim_{N \to \infty}R_H/N$, giving rise to a mathematical problem that is interesting in its own right.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.44\textwidth}
\includegraphics[width=\textwidth]{RH2dTorus_1}
\caption{Two dimensional.}
\label{fig:lab2}
\end{subfigure}
\begin{subfigure}[b]{0.44\textwidth}
\includegraphics[width=\textwidth]{RH2dTorus_2}
\caption{Two dimensional.}
\label{fig:lab3}
\end{subfigure}
\begin{subfigure}[b]{0.44\textwidth}
\includegraphics[width=\textwidth]{RH3dTorus_1}
\caption{Three dimensional.}
\label{fig:lab4}
\end{subfigure}
\begin{subfigure}[b]{0.44\textwidth}
\includegraphics[width=\textwidth]{RH3dTorus_2}
\caption{Three dimensional.}
\label{fig:lab5}
\end{subfigure}
\caption{The dependence of $R_{H}$ on the system size $N$ in the two- and three-dimensional cases and its asymptotic behavior as $N$ grows.}\label{fig:torus}
\end{figure}
\subsection*{Acknowledgment}
This work was supported by JST CREST Grant Number JP22180021, Japan. We would like to thank Takashi Takemoto and Normann Mertig of Hitachi, Ltd., for providing us with a stimulating platform for the weekly meeting at Global Research Center for Food \& Medical Innovation (FMI) of Hokkaido University. We would also like to thank Hiroshi Teramoto of Kansai University, as well as Masamitsu Aoki, Yoshinori Kamijima, Katsuhiro Kamakura, Suguru Ishibashi and Takuka Saito of Mathematics Department, for valuable comments and encouragement at the aforementioned meetings at FMI.
\section{Introduction}
The LHCb detector is one of the four main detectors that operate at the Large Hadron Collider (LHC) at CERN.
The LHCb detector consists of a single-arm forward spectrometer operating in the region of pseudorapidity, $1.9<\eta<4.9$.
The detector was originally designed to study the production and decay of hadrons containing $b$ and $c$ quarks and indirectly probing the strength of the Standard Model (SM).
The LHCb detector is now playing a fundamental role also in other areas of research, such as direct searches of rare SM decays and exotica.
Exotica searches at LHCb primarily consist of Higgs physics and direct searches for beyond-the-SM particles.
Many theoretical models predict the existence of new particles: their existence can be detected either directly through the production of on-shell particles or indirectly through virtual contributions in loop processes.
During Run I of the LHC, LHCb recorded data at a centre-of-mass energy of $\sqrt{s}=7\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace$ (in 2010 and 2011) and $8\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace$ (in 2012), corresponding to integrated luminosities of 1.0 and $2.0\ensuremath{\mbox{\,fb}}\xspace^{-1}$, respectively.
The SM is an incomplete theory.
Not only is the SM in conflict with the observations of non-zero neutrino masses, the excess of matter over antimatter in the Universe, and the presence of non-baryonic dark matter, but it also presents a number of fine-tuning problems (such as the hierarchy and strong CP problems).
Beyond the SM (BSM) physics has been searched for at the LHC without success so far.
Nevertheless, LHCb is an ideal experiment to probe unique regions of BSM phase space, thanks to the detector's unique particle identification capabilities, precise secondary-vertex reconstruction, and accurate measurements of lifetime, momentum and invariant mass.
Throughout this document charge conjugation is implied unless explicitly stated otherwise and $c = 1$.
\section{Search for Majorana Neutrinos in $\decay{B}{\pi^{+}\mu^{-}\mu^{-}}$ Decays at LHCb}
The nature of neutrinos in the SM has not been defined yet: neutrinos could either be Dirac fermions or their own antiparticle. In the latter case they are called ``Majorana" particles~\cite{Majorana}.
Some of the most economical theories that can account simultaneously for neutrino masses and oscillations, baryogenesis, and dark matter extend the SM by requiring the existence of a fourth neutrino generation.
Since a fourth neutrino generation can couple with SM particles there exist many ways of searching for such particles, one of them being the neutrino-less double $\beta$ decay.
The approach followed by LHCb is different and complementary: a direct search is performed in heavy-flavour decays, similar to what has been done in the past~\cite{Aaij:2012zr,Aaij:2011ex,Lees:2013pxa,Seon:2011ni}.
The LHCb experiment has performed several searches for Majorana neutrinos produced in $B^{-}$ decays, probing a wide range of masses and lifetimes; these searches were performed for the lepton-number-violating decays $B^-\ensuremath{\rightarrow}\xspace h^+\mu^-\mu^-$, where $h$ is a hadron.
These types of decays are prohibited by the SM but can happen thanks to production of on-shell Majorana neutrinos.
The LHCb collaboration published three papers using different final states and different data sets:
\begin{itemize}
\item $h^+ = K^+$ or $\pi^+$, with $\sim$36 pb$^{-1}$ ($\sqrt{s}=$7 TeV) \cite{LHCb-PAPER-2011-009}.
\item $h^+ = D^+$, $D^{\ast +}$, $D^+_s$ and $D^0 \pi^+$, with $\sim$410 pb$^{-1}$ ($\sqrt{s}=$7 TeV) \cite{LHCb-PAPER-2011-038}.
\item $h^+ = \pi^+$, with 3.0 fb$^{-1}$ ($\sqrt{s}=$7 TeV + $\sqrt{s}=$8 TeV)~\cite{LHCb-PAPER-2013-064}.
\end{itemize}
This contribution concentrates on the last of these papers, the most recent of the three.
A Feynman diagram for the lepton number and flavour violating decay $B^-\ensuremath{\rightarrow}\xspace\pi^+\mu^-\mu^-$ is shown in Fig.~\ref{fig:pimumu}. This decay is forbidden in the SM but can occur through the production of on-shell Majorana neutrinos; it has been chosen because it is one of the most sensitive ways to look for Majorana neutrinos in $B$ decays~\cite{LHCb-PAPER-2013-064}.
This decay, which has been theoretically modelled in Ref.~\cite{Atre:2009rg}, is sensitive to contributions from both on- and off-shell Majorana neutrinos. More specifically, if the mass of the Majorana neutrino, $m_N$, is smaller than $m_B - m_\mu$, then it can be produced on-shell with a finite lifetime in the detector.
If, on the other hand, $m_N$ is larger, then the neutrino can still contribute to the decay as a virtual particle.
\begin{figure}
\includegraphics[height=0.2\textheight]{Majorana-feyn}
\caption{Feynman diagram for ${\ensuremath{\Bub}}\xspace\ensuremath{\rightarrow}\xspace\pi^+\mu^-\mu^-$ decay mediated by a Majorana neutrino ($N$). Reproduced from~\cite{LHCb-PAPER-2013-064}.}
\label{fig:pimumu}
\end{figure}
The selection is designed to maximise the efficiency squared divided by the background yield.
The decay products are allowed to be detached from the $B^-$ decay vertex, so $\tau_N$ can span from a few picoseconds up to $\sim 1000 \ensuremath{{\mathrm{ \,ps}}}\xspace$.
Because, for lifetimes $\sim1 \ensuremath{{\mathrm{ \,ps}}}\xspace$ and above, the $\pi^+\mu^-$ vertex can be significantly detached from the $B^-$ decay vertex, two different strategies are used: one for short $\tau_N$ ($\cal{S}$) and another for $\tau_N$ up to 1000 ps ($\cal{L}$).
In order to reduce the systematic uncertainty and to convert the yield into a branching fraction, the normalisation channel $B^- \rightarrow {\ensuremath{{J\mskip -3mu/\mskip -2mu\psi\mskip 2mu}}}\xspace K^-$ (with ${\ensuremath{{J\mskip -3mu/\mskip -2mu\psi\mskip 2mu}}}\xspace \ensuremath{\rightarrow}\xspace \mu^+\mu^-$) was chosen.
For the $\cal{S}$ category and the normalisation channel the $\mu^-\mu^-\pi^+$ candidate combinations must, when reconstructed, form a common vertex.
For the $\cal{L}$ category the $\pi^+\mu^-$ pair can be significantly displaced from the ${\ensuremath{\Bub}}\xspace$ vertex.
A {\ensuremath{\Bub}}\xspace candidate decay vertex is searched for by tracing back a neutrino $N$ candidate to another $\mu^{-}$ in the event, with which it must form a vertex.
Figure~\ref{fig:mass} shows the mass spectra for the selected candidates.
No signal is observed in either the ${\cal{S}}$ or the ${\cal{L}}$ sample.
\begin{figure}
\includegraphics[width=1.\textwidth]{hmumu-mass-all}
\caption{Invariant mass distributions with fits overlaid of candidate mass spectra for (a) ${\ensuremath{{J\mskip -3mu/\mskip -2mu\psi\mskip 2mu}}}\xspace K^-$, (b) ${\ensuremath{\pion^+}}\xspace\mun\mun$ ($\cal{S}$), and (c) ${\ensuremath{\pion^+}}\xspace\mun\mun$ ($\cal{L}$), where $\cal{S}$ and $\cal{L}$ indicate the two different data samples, one for short $\tau_N$ and another for $\tau_N$ up to 1000 ps.
Peaking backgrounds are (green) shaded.
The dotted lines show the combinatorial backgrounds only. The solid line shows the sum of both backgrounds~\cite{LHCb-PAPER-2013-064}.}
\label{fig:mass}
\end{figure}
In order to set upper limits, the $\mathrm{CL_s}$ method is used~\cite{CLS}.
The signal region is defined as the mass interval within $\pm 2\sigma$ of the {\ensuremath{\Bub}}\xspace mass where $\sigma$ is the mass resolution.
Because no evidence for a signal is found, upper limits are set by scanning across the $m_N$ window.
The efficiency is highest for $\tau_N$ of a few ps; it then decreases rapidly until $\tau_N \sim200 \ensuremath{{\mathrm{ \,ps}}}\xspace$, where it levels off until $\tau_N\sim 1000 \ensuremath{{\mathrm{ \,ps}}}\xspace$.
After this value, the efficiency decreases to $\sim 0$ because most of the decays happen outside of the vertex detector.
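To illustrate the limit-setting procedure, the toy sketch below shows how a $\mathrm{CL_s}$-style 95\% C.L. upper limit on a signal yield can be obtained for a simple Poisson counting experiment. It is purely illustrative and is not the analysis code of Ref.~\cite{LHCb-PAPER-2013-064}, which scans $m_N$ in mass-resolution windows with a full treatment of efficiencies and systematic uncertainties; all parameter values here are hypothetical.

```python
import math

def poisson_cdf(n, mean):
    """P(N <= n) for N ~ Poisson(mean)."""
    if mean == 0.0:
        return 1.0
    term = math.exp(-mean)
    total = term
    for k in range(1, n + 1):
        term *= mean / k
        total += term
    return total

def cls(s, b, n_obs):
    """CL_s = CL_{s+b} / CL_b for an observed count n_obs
    with expected background b and signal yield s."""
    return poisson_cdf(n_obs, s + b) / poisson_cdf(n_obs, b)

def upper_limit(b, n_obs, cl=0.95, step=0.001):
    """Smallest signal yield s with CL_s < 1 - cl (grid scan)."""
    s = 0.0
    while cls(s, b, n_obs) >= 1.0 - cl:
        s += step
    return s
```

For $n_{\rm obs}=0$ and negligible background the scan returns $s_{\rm up} = -\ln(0.05) \approx 3.0$, the familiar ``rule of three''; a feature of $\mathrm{CL_s}$ is that this limit does not tighten as the expected background grows.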
The multi-dimensional plot of the upper limit on ${\cal{B}}({\ensuremath{\Bub}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\pion^+}}\xspace \mun \mun)$ is shown in Fig.~\ref{fig:UL-mass-lifetime}.
\begin{figure}
\includegraphics[height=0.2\textheight]{UL-mass-lifetime}
\caption{Upper limits on ${\cal{B}}({\ensuremath{\Bub}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\pion^+}}\xspace \mun \mun)$ at 95\% C.L. as a function of $m_N$, in 5~MeV intervals, for specific values of $\tau_N$~\cite{LHCb-PAPER-2013-064}.}
\label{fig:UL-mass-lifetime}
\end{figure}
A model dependent upper limit on the coupling of a single fourth-generation Majorana neutrino to muons, $|V_{\mu 4}|$, for each value of $m_N$, is calculated using an expansion of the formula used by Ref.~\cite{Atre:2009rg}.
The resulting 95\% C.L. limit on $|V_{\mu4}|^2$ is extracted as a function of $m_N$ and is shown in Fig.~\ref{fig:Vmu4modified}.
\begin{figure}
\includegraphics[height=0.2\textheight]{V_mu4_modified}
\caption{Upper limits at 95\% C.L. on the fourth generation neutrino coupling to the muon $|V_{\mu4}|^2$ are shown as a function of the mass of the Majorana neutrino $m_N$ for the events in the displaced region~\cite{LHCb-PAPER-2013-064}.}
\label{fig:Vmu4modified}
\end{figure}
\section{A Search for the Decay of a Hidden Sector Particle $\decay{\chi}{\mu^{+}\mu^{-}}$ in $\decay{B^{0}}{K^{*0}\chi}$ at LHCb}
In particle physics, the term hidden-sector refers to the set of predicted particles that do not interact via the gauge boson forces of the SM.
Interest in hidden-sector SM extensions has increased~\cite{Essig:2013lka} due to the lack of any new TeV scale particles and missing evidence for a dark matter candidate that could solve the open questions in high energy physics~\cite{Weidenspointner:2006nua,Chang:2008aa,Adriani:2008zr,Adriani:2011xv,Adriani:2013uda,FermiLAT:2011ab,Aguilar:2014mma}.
As for the Majorana neutrino, coupling between the SM and hidden-sector particles may arise via mixing between the hidden-sector field and any SM field with an associated particle that is not charged under the electromagnetic or strong interaction.
This mixing could provide a portal through which a hidden-sector particle, $\chi$, may be produced when kinematically allowed.
This contribution concentrates on the search performed by LHCb for a hidden-sector boson produced in the decay $B^0\!\ensuremath{\rightarrow}\xspace K^{*0}\chi$, with $\chi \! \ensuremath{\rightarrow}\xspace \mu^+\mu^-$ and $K^{*0} \!\ensuremath{\rightarrow}\xspace K^+\pi^-$ (throughout these proceedings, $K^{*0} \equiv K^{*}(892)^0$)~\cite{LHCb-PAPER-2015-036}.
As shown in Fig.~\ref{fig:feyDB}, the $b\!\ensuremath{\rightarrow}\xspace s$ transition is mediated by a top quark loop at leading order.
For this reason, a $\chi$ boson with a sizeable top quark coupling, \mbox{\itshape e.g.}\xspace obtained via mixing with the Higgs sector, could be produced at a substantial rate in such decays.
The dataset used for this analysis is the same as that used for the Majorana neutrino search reported in the previous section.
\begin{figure}
\includegraphics[height=0.2\textheight]{Fig1.pdf}
\caption{
Feynman diagram for the decay \ensuremath{B^0 \!\to K^{*0}\chi}\xspace, with $\chi \! \ensuremath{\rightarrow}\xspace \mu^+\mu^-$~\cite{LHCb-PAPER-2015-036}.
\label{fig:feyDB}}
\end{figure}
The search is conducted, as outlined in Ref.~\cite{Williams:2015xfa}, by scanning the \ensuremath{m(\mu^+\mu^-)}\xspace distribution for an excess of $\chi$ signal candidates over the expected background.
The $\chi\!\ensuremath{\rightarrow}\xspace\mu^+\mu^-$ decay vertex is permitted, but not required, to be displaced from the \ensuremath{B^0 \!\to K^{*0}\chi}\xspace decay vertex. Two
regions of reconstructed dimuon lifetime, \ensuremath{\tau(\mu^+\mu^-)}\xspace, are defined for each \ensuremath{m(\chi)}\xspace considered in the search: a prompt region and a displaced region.
Narrow resonances are vetoed by excluding the regions near the $\omega$, $\phi$, $J/\psi$, $\psi(2S)$ and $\psi(3770)$ resonances. These regions are removed in both the prompt and displaced samples to avoid contamination from unassociated dimuon and $K^{*0}$ resonances.
The branching fraction product $\mathcal{B}(B^0\!\ensuremath{\rightarrow}\xspace K^{*0}\chi(\mu^+\mu^-))\equiv\mathcal{B}(\ensuremath{B^0 \!\to K^{*0}\chi}\xspace)\times\mathcal{B}(\chi\!\ensuremath{\rightarrow}\xspace\mu^+\mu^-)$ is measured relative to $\mathcal{B}(\ensuremath{B^0 \!\to K^{*0}\mu^+\mu^-}\xspace)$, where the normalisation sample is taken from the prompt region.
Figure~\ref{fig2} shows the $K^+\pi^-\mu^+\mu^-$ control channel mass distribution for all prompt candidates that satisfy the full selection.
An extended unbinned likelihood fit to the mass spectrum is performed on the control channel.
\begin{figure}
\includegraphics[height=0.2\textheight]{Fig2.pdf}
\caption{
Invariant mass spectrum with fit overlaid~\cite{LHCb-PAPER-2015-036}.
\label{fig2}}
\end{figure}
The \ensuremath{m(\mu^+\mu^-)}\xspace distributions in both the prompt and displaced regions for candidates with an invariant mass that lies in a window of $50\ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace$ around the known $B^0$ mass are shown in Fig.~\ref{fig3}.
The $p$-value of the no-signal hypothesis is 80\%, showing no evidence for a hidden-sector boson.
Because no signal events are found, Fig.~\ref{fig4} shows the upper limits on $\mathcal{B}(\ensuremath{B^0 \!\to K^{*0}\chi}\xspace(\mu^+\mu^-))$, relative to $\mathcal{B}(\ensuremath{B^0 \!\to K^{*0}\mu^+\mu^-}\xspace)$, set at the 95\% C.L. for different values of \ensuremath{\tau(\chi)}\xspace.
As the figure shows, the limits become less stringent for $\ensuremath{\tau(\chi)}\xspace \gtrsim 10\ensuremath{{\mathrm{ \,ps}}}\xspace$, as the probability of the hidden-sector boson decaying within the vertex locator decreases.
The branching fraction $\mathcal{B}(\ensuremath{B^0 \!\to K^{*0}\mu^+\mu^-}\xspace)=(1.6\pm0.3)\times10^{-7}$~\cite{LHCb-PAPER-2013-019} is used to obtain upper limits on $\mathcal{B}(\ensuremath{B^0 \!\to K^{*0}\chi}\xspace(\mu^+\mu^-))$, which are also shown in Fig.~\ref{fig4}.
\begin{figure}
\includegraphics[height=0.15\textheight]{Fig3.pdf}
\caption{
Distribution of \ensuremath{m(\mu^+\mu^-)}\xspace in the (black) prompt and (red) displaced regions. The shaded bands denote regions where no search is performed due to (possible) resonance contributions. The {\ensuremath{{J\mskip -3mu/\mskip -2mu\psi\mskip 2mu}}}\xspace, $\psi(2S)$ and $\psi(3770)$ peaks are suppressed to better display the search region~\cite{LHCb-PAPER-2015-036}.
\label{fig3}}
\end{figure}
\begin{figure}
\includegraphics[height=0.15\textheight]{Fig4.pdf}
\caption{
Upper limits at 95\% CL for (left axis)
$\mathcal{B}(\ensuremath{B^0 \!\to K^{*0}\chi}\xspace(\mu^+\mu^-))/\mathcal{B}(\ensuremath{B^0 \!\to K^{*0}\mu^+\mu^-}\xspace)$, with
\ensuremath{B^0 \!\to K^{*0}\mu^+\mu^-}\xspace in $1.1 < \ensuremath{m^2(\mu^+\mu^-)}\xspace < 6.0\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace^2$, and (right axis)
$\mathcal{B}(\ensuremath{B^0 \!\to K^{*0}\chi}\xspace(\mu^+\mu^-))$~\cite{LHCb-PAPER-2015-036}.
\label{fig4}}
\end{figure}
\section{Conclusions}
In these proceedings we have provided two examples of direct searches for light exotic particles with the LHCb detector.
This shows how LHCb can contribute to intensity-frontier searches for BSM physics, where light particles couple only feebly to the SM fields.
As an example, Fig.~\ref{fig:boundsUmu} shows the existing experimental limits on the mixing parameter $|V_{\mu 4}|$ as a function of the Majorana neutrino candidate mass.
It is striking that DELPHI was the last experiment to set a limit in the region of phase space above the charm quark mass.
This means that LHCb is one of the few experiments, to date, able to further constrain the phase space for this parameter.
\begin{figure}
\includegraphics[height=0.2\textheight]{boundsUmu.eps}
\caption{Limits on $|V_{\mu 4}|$ as a function of $m_4$ as set by previous non-LHCb results. The areas indicated by dotted lines are at 95\% confidence level. Reproduced from~\cite{Atre:2009rg}.}
\label{fig:boundsUmu}
\end{figure}
\addcontentsline{toc}{section}{References}
\setboolean{inbibliography}{true}
\bibliographystyle{LHCb}
\section{Introduction}
By extracting radio frequency (RF) energy from nearby wireless access points (APs), energy-constrained wireless devices such as wireless sensors can, in theory, possess perpetual lifetime \cite{Visser2013Rf}. Compared with traditional energy harvesting techniques (e.g. solar energy, thermal-gradient energy, and vibration or movement energy), the RF energy harvesting technique has advantages such as all-weather operation and a compact harvester \cite{paradiso2005energy}. Very recently, a commercial program named FreeVolt has been launched in London, devoted to absorbing energy from cell towers, Wi-Fi access points and TV broadcasters to charge low-energy IoT devices \cite{FreeVolt15}. As the feasibility of the RF energy harvesting technique improves, wireless powered communications (WPC) have attracted growing attention \cite{huang2015some}.
For wireless nodes without fixed power supply, a harvest-then-transmit protocol has been commonly adopted where the node first harvest energy from radio signal transmitter and then use the harvested energy to transfer information to its intended receiver \cite{yin2013throughput,ju2014throughput,Rui2014Throughput,Yue2015Spatial}. The half-duplex operation mode produces an interesting trade-off for energy harvesting communications: what is the optimal harvesting-time to transmitting-time ratio for obtaining maximum achievable throughput. In \cite{yin2013throughput}, the impact of energy harvesting rate on throughput optimization was studied. Their results indicated that the optimal harvest-ratio decreases with increasing energy harvest rate. In \cite{ju2014throughput}, the doubly near-far problem was investigated for WPC networks consisting many RF-powered nodes. To counter the unfair throughput allocation among the near and far users, a common-throughput optimization problem was formulated and solved. In similar settings, \cite{Rui2014Throughput} studied the sum-throughput maximization problem for the case that the nodes can save energy for later use. A large-scale WPC network was studied in \cite{Yue2015Spatial}, where the node's spatial throughput is maximized subject to successful information transmission probability constraint.
Since the energy harvested from RF is quite low, one can use a relay node to improve the transmission rate of the source's information \cite{ahmed2007throughput}. The outage probability of an energy harvesting relay-aided link over a fading channel was studied in \cite{li2016outage}. However, the harvesting profile of the RF energy was not considered in that paper.
In previous works, people usually assume that the wireless nodes harvest energy from dedicated sources, where the AP and the nodes can operate synchronously in a cooperative mode. In this context, the AP keeps silent when the nodes are transmitting information. However, harvesting from dedicated sources like a hybrid AP is still not practical at present because of high upgrade cost and low energy efficiency. On the contrary, harvesting from non-dedicated sources like WiFi, small base stations and TV stations is more practical and easier to implement.
In this paper, we consider the throughput optimization problem in WPCs powered by non-dedicated sources. In particular, we assume that the wireless powered user harvests energy from a nearby conventional AP, and that the AP keeps transmitting to its associated receiver even when the user starts transmitting information. Therefore, from the point of view of the wireless powered user, the AP acts first as an energy source and then as an interference source.
To the best of our knowledge, the throughput optimization problem under these settings has not been addressed so far. Interestingly, since the energy and the interference come from the same source, increasing the transmit power of the AP brings no benefit to WPC users, and the {\it harvest ratio} becomes the only parameter that affects the performance.
Since the harvested RF energy is usually very weak, outage may occur more frequently than in conventional communications. As is well known, the outage probability decreases with increasing {\it signal-to-interference} ratio (SIR). However, as will be shown later, the throughput is a quasi-concave function of the SIR, so there must be a trade-off between the throughput and the outage probability. Different from previous work, we integrate the outage probability constraint into the throughput optimization problem in this paper. Besides, the throughput of the WPC with a relay has not been extensively studied to the best of our knowledge, and we examine this problem considering the outage constraint as well as the data causality constraint.
The main contributions of this paper are listed as follows.
\begin{itemize}
\item We propose a novel non-dedicated sources powered wireless communication model. The protocols of direct transmission and decode-and-forward (DF) relay transmission are presented.
\item The maximization of the expected throughput for direct transmission subject to an outage probability constraint is formulated and solved. An upper bound on the expected throughput is given in closed form.
\item The maximization of expected throughput for DF relay transmission subject to outage and data causality constraints is formulated. We solve this problem by dividing it into two sub-problems.
\item Our results show that the optimal expected throughput for direct transmission is dominated by the outage constraint in most practical scenarios, while for DF relay transmission it is dominated by the data causality constraint.
\end{itemize}
\section{Preliminaries}
\subsection{System model}
In this paper, we consider a wireless powered communication as shown in Fig.\ref{Fig:System}. We assume AP is in full-load operation and transmits with fixed power $P_{A}$.
The energy harvesting nodes $S, R$ has no fixed power supply and extract energy from radio signal radiated by a conventional AP. Suppose the AP and energy harvesting nodes operate in the same frequency band.
We consider two WPC schemes: direct transmission (DT) and Decode and Forward(DF) relay. For the case without relay, the system follows a {\it harvest-then-transmit} MAC protocol as shown in Fig.\ref{Fig:protocol}(a): In each time slot, the source node $S$ harvests energy from the AP in the first $\alpha T$ , while employ the remaining time fraction of $(1-\alpha)T$ to directly transmit information to the destination node $D$. The symbol $\alpha$ denotes {\it harvest ratio} and $0<\alpha<1$ . We further assume that node $S$ uses up all the harvested energy to transmit information in the second phase.
For the case with a relay, we assume a relay node $R$ helps node $S$ transfer information to the destination. The system follows a {\it harvest-transmit-relay} MAC protocol as shown in Fig.\ref{Fig:protocol}(b): the source node $S$ and the relay node $R$ first harvest energy from the AP for $\alpha T$ time. Secondly, node $S$ transfers information to node $R$ with all of its harvested energy in $\beta T$ time, $0<\beta<1$. Lastly, node $R$ relays the information from $S$ to $D$ in $(1-\alpha-\beta)T$ time, using up the energy harvested in the first phase. To simplify the analysis, we assume a normalized slot duration, i.e., $T=1$, in the remainder of this paper.
\begin{figure}
\centering
\includegraphics[width=.5\linewidth]{system}
\caption{System model}
\label{Fig:System}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{mac}
\caption{MAC protocol for direct transmission and DF relay transmission}
\label{Fig:protocol}
\end{figure}
\textbf {Non-dedicated Sources Assumption}: Unlike previous works, we assume the energy harvesting nodes have no cooperation with the AP, which means the AP keeps transmitting to serve its own users regardless of whether node $S$ is transmitting or not. Under this assumption, the AP first acts as an energy source and later as an interferer, so some new challenges should be observed and formulated. This assumption is realistic for low-power sensors harvesting energy from nearby information APs, such as WiFi access points and small cell base stations.
\subsection{Energy harvesting model}
We assume that the energy transfer channels from the AP to $S$, $R$ and $D$ are subject to Rayleigh fading with unit mean and large-scale path loss. Let $r_{AS}$, $r_{AR}$ and $r_{AD}$ denote the distances from the AP to $S$, $R$ and $D$, respectively. Then in the energy harvesting phase, the energies accumulated by nodes $S$ and $R$ are:
\begin{equation}\label{eqn:es}
E_{S}=\alpha \zeta P_A h_{AS} r_{AS}^{-\mu}
\end{equation}
and
\begin{equation}\label{eqn:er}
E_{R}=\alpha \zeta P_A h_{AR} r_{AR}^{-\mu},
\end{equation}
respectively, where $0<\zeta<1$ is the energy harvesting efficiency, $h_{AS}, h_{AR}$ are independent and identically distributed (i.i.d.) exponential random variables with unit mean, and $\mu>2$ is the path-loss exponent. For simplicity, we assume $\zeta=1$ in the remainder of this paper.
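As a quick numerical illustration of (\ref{eqn:es}) and (\ref{eqn:er}) (a minimal sketch; the specific values of $P_A$, $r$, and $\mu$ below are arbitrary assumptions, not parameters from the paper), the energy harvested in one slot can be sampled as:

```python
import random

def harvested_energy(alpha, P_A, r, mu=3.0, zeta=1.0, h=None):
    """Energy accumulated in the harvesting phase of one slot (T = 1).

    h is the Rayleigh-fading power gain, exponential with unit mean;
    it is sampled when not supplied.
    """
    if h is None:
        h = random.expovariate(1.0)   # exponential(1) channel power gain
    return alpha * zeta * P_A * h * r ** (-mu)

# Deterministic check: alpha = 0.3, P_A = 1, r = 2, mu = 2, h = 1
# gives E = 0.3 * 1 * 1 * 2^(-2) = 0.075.
```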
\subsection{Information transfer model}
In transmission without a relay, only the direct link between $S$ and $D$ is available. Without loss of generality, the distance $r_{SD}$ between them is set to $1$. The channel power gain between $S$ and $D$ is assumed to be determined only by their distance: $h_{SD}=r_{SD}^{-\mu}=1$. The channel power gain between the AP and node $D$ is given by $h_{AD}r_{AD}^{-\mu}$. Let $x_A(t)$ and $x_S(t)$, with zero mean and unit power, denote the transmit signals of the AP and node $S$, respectively. For the direct transmission link, the baseband equivalent model of the channel is
\begin{equation}
y_{SD}(t)=\sqrt{P_S}x_S(t)+\sqrt{P_A h_{AD}r_{AD}^{-\mu}}x_A(t)+n_D(t)
\end{equation}
where $y_{SD}(t)$ denotes the received signal from the direct link and $n_D(t)$ is the additive white noise with power $\sigma_{n_D}^2$. Since node $S$ uses up the harvested energy to transfer information, and previous research showed that keeping a constant transmit power achieves the maximum channel capacity for such energy harvesting systems \cite{devillers2012general}, the transmit power of node $S$ is
\begin{equation}\label{eqn:ps}
P_S=\frac{E_{S}}{1-\alpha}
\end{equation}
The received SINR at node $D$ for direct transmission is
\begin{equation}\label{eqn:sinr-dt}
\gamma_{DT}=\frac{P_S}{P_Ah_{AD}r_{AD}^{-\mu}+\sigma_{n_D}^2},
\end{equation}
The throughput of the direct link $S-D$ is given by
\begin{equation}\label{eqn:th-dt}
R_{DT}=(1-\alpha)\log_2(1+\gamma_{DT}).
\end{equation}
For DF relay transmission, we assume that the distance between the source node and the relay node is $d$, and that between the relay and the destination is $1-d$ \cite{ahmed2007throughput}. Then the channel power gains of links $S-R$ and $R-D$ are $d^{-\mu}$ and $(1-d)^{-\mu}$, respectively. The information transmission phase is divided into two sub-phases. In the first $\beta$ time, the information is transmitted to the relay node $R$; the received signal at $R$ is
\begin{equation}\label{eqn:ysr}
y_{SR}(t)=\sqrt{P_S^{co}d^{-\mu}}x_S(t)+\sqrt{P_Ah_{AR}r_{AR}^{-\mu}}x_A(t)+n_R(t),
\end{equation}
where $n_R(t)$ denotes the white noise at node $R$. Accordingly, the received SINR at node $R$ and the throughput of the $S-R$ link are given as (\ref{eqn:sinr-sr}) and (\ref{eqn:th-sr}), respectively.
\begin{equation}\label{eqn:sinr-sr}
\gamma_{SR}=\frac{P_S^{co}d^{-\mu}}{P_Ah_{AR}r_{AR}^{-\mu}+\sigma_{n_R}^2},
\end{equation}
\begin{equation}\label{eqn:th-sr}
R_{SR}=\beta\log_2(1+\gamma_{SR}).
\end{equation}
Since we assume all the energy harvesting nodes use up the energy in their batteries or capacitors, the transmit power of node $S$ in the cooperative mode is $P_S^{co}=\frac{E_S}{\beta}$.
In the following $1-\alpha-\beta$ time, the relay node transfers the information to the destination node $D$ using all of its harvested energy. The received baseband-equivalent signal at node $D$ is
\begin{equation}\label{eqn:yrd}
y_{RD}=\sqrt{P_R (1-d)^{-\mu}}x_R(t)+\sqrt{P_Ah_{AD}r_{AD}^{-\mu}}x_A(t)+n_D(t),
\end{equation}
where the transmit power of node $R$ is $P_R=\frac{E_R}{1-\alpha-\beta}$, $x_R(t)$ is the relay signal with unit mean power, and $n_D(t)$ denotes the noise signal with power $\sigma_{n_D}^2$. Similar to (\ref{eqn:sinr-sr}) and (\ref{eqn:th-sr}), the received SINR at node $D$ and the throughput of the $R-D$ link are, respectively,
\begin{equation}\label{eqn:sinr-rd}
\gamma_{RD}=\frac{P_R (1-d)^{-\mu}}{P_Ah_{AD}r_{AD}^{-\mu}+\sigma_{n_D}^2},
\end{equation}
and
\begin{equation}\label{eqn:th-rd}
R_{RD}=(1-\alpha-\beta)\log_2(1+\gamma_{RD}).
\end{equation}
\subsection{Preliminary Mathematical Results}
\begin{lemma}\label{lem:cdf}
Assume $H_1$ and $H_2$ are independent exponentially distributed random variables with unit mean, and $k\in R^+$. Then for the variable $X=k\frac{H_1}{H_2}$, the probability density function (PDF) is $\frac{k}{(k+x)^2}$ and the cumulative distribution function (CDF) is $\frac{x}{k+x}$.
\end{lemma}
\begin{proof}
See appendix \ref{app:1}
\end{proof}
\begin{lemma}\label{lem:mean}
Assume $X$ is a random variable with PDF $\frac{k}{(k+x)^2}$, $k\in R^+$. The expectation of the function $f(X)=\log_2(1+X)$ is $\mathbb{E}[f(X)]=\frac{\log_2(1/k)}{1/k-1}$.
\end{lemma}
\begin{proof}
See appendix \ref{app:2}
\end{proof}
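Both lemmas can also be checked numerically. The sketch below (an illustration, not part of the formal proofs; the values of $k$ and $x_0$ in the check are arbitrary) draws samples of $X=kH_1/H_2$ and compares the empirical $P(X\le x_0)$ and $\mathbb{E}[\log_2(1+X)]$ with the closed forms:

```python
import math
import random

def mc_check(k, x0, n=400_000, seed=1):
    """Compare Monte Carlo estimates for X = k*H1/H2 with Lemmas 1 and 2."""
    rng = random.Random(seed)
    cdf_hits, log_sum = 0, 0.0
    for _ in range(n):
        x = k * rng.expovariate(1.0) / rng.expovariate(1.0)
        cdf_hits += x <= x0
        log_sum += math.log2(1.0 + x)
    cdf_theory = x0 / (k + x0)                        # CDF from Lemma 1
    mean_theory = math.log2(1.0 / k) / (1.0 / k - 1)  # Lemma 2 (k != 1)
    return cdf_hits / n, cdf_theory, log_sum / n, mean_theory
```

For $k=2$, Lemma 2 predicts $\mathbb{E}[\log_2(1+X)]=\log_2(1/2)/(1/2-1)=2$, which the Monte Carlo estimate reproduces.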
\section{Problem Formulation}\label{sec:probform}
From the previous analysis, we find that the system throughput and the received SINR are both random variables for a given time allocation strategy. In a specific time slot, the time allocation ratio affects the throughput performance. For example, in the direct transmission case, allocating more time to harvesting energy will increase the transmit power according to (\ref{eqn:ps}). Thus the received SINR is increased and a lower outage probability can be achieved. However, this strategy shortens the transmission duration and thus decreases the system throughput.
To observe how the {\it harvest ratio} $\alpha$ affects the throughput and, indirectly, how the throughput varies with increasing SIR, we consider a deterministic case in which the channel gains $h_{AS}$ and $h_{AR}$ are both assumed to equal $1$. By substituting (\ref{eqn:es}) and (\ref{eqn:ps}) into (\ref{eqn:sinr-dt}), we get the SINR at node $D$ as
\begin{equation}\label{eqn:gkk}
\gamma_{DT}=\frac{\alpha r_{AS}^{-\mu}}{(1-\alpha)r_{AD}^{-\mu}+\sigma_{n_D}^2}.
\end{equation}
Considering that RF-energy-powered communications are usually low-power and limited in communication range, the distance between the two nodes is far less than that between the AP and the nodes. Therefore, we assume $r_{AS}\sim r_{AD}$. Besides, as the noise power is far less than the power of the signal radiated from the energy source, we further neglect $\sigma_{n_D}^2$ in (\ref{eqn:gkk}). Then the SINR reduces to
\begin{equation}\label{eqn:gkk1}
\gamma_{DT}=\frac{\alpha }{1-\alpha}.
\end{equation}
The throughput of direct transmission link can be expressed as
\begin{displaymath}\label{eqn:rdt}
R_{DT}=(1-\alpha)\log_2{\frac{1}{1-\alpha}}.
\end{displaymath}
If we only aim to maximize the throughput, the optimal {\it harvest ratio} $\alpha$ can easily be derived as $1-1/e$ from Theorem 1 in \cite{yin2013throughput}. However, if we plot the throughput over the SIR, as shown in Fig.\ref{Fig:Th1}, we find that a trade-off exists between throughput maximization and QoS optimization. The trade-off comes from the fact that allocating more time to harvesting energy always increases the SIR by (\ref{eqn:gkk1}), but it increases the throughput only before the critical point $\gamma_{DT}=e-1$ and decreases it after that point. As is well known, to avoid the occurrence of outage, the SIR should exceed a certain threshold $\gamma_{o}$. Therefore, the optimal {\it harvest ratio} $\alpha$ integrating the outage constraint should be:
\begin{displaymath}
{\alpha}^* = \left\{ \begin{array}{ll}
\frac{\gamma_o}{1+\gamma_o} & \gamma_o\ge e-1\\
1-1/e & \gamma_o< e-1
\end{array} \right.
\end{displaymath}
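The deterministic trade-off can be reproduced numerically. In the sketch below (an illustration under the deterministic model above), the unconstrained maximizer of $(1-\alpha)\log_2\frac{1}{1-\alpha}$ is $\alpha=1-1/e$, and since $\gamma_{DT}=\alpha/(1-\alpha)$ by (\ref{eqn:gkk1}), the smallest $\alpha$ meeting an SIR threshold $\gamma_o$ is $\gamma_o/(1+\gamma_o)$:

```python
import math

def r_dt_det(alpha):
    # deterministic-case throughput (1 - alpha) * log2(1 / (1 - alpha))
    return (1.0 - alpha) * math.log2(1.0 / (1.0 - alpha))

ALPHA_UNC = 1.0 - 1.0 / math.e   # unconstrained optimum, achieved SIR = e - 1

def alpha_opt(gamma_o):
    # SIR constraint alpha/(1-alpha) >= gamma_o binds only when gamma_o >= e-1
    return max(ALPHA_UNC, gamma_o / (1.0 + gamma_o))
```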
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{sir-r}
\caption{Throughput versus SIR for deterministic channel.}
\label{Fig:Th1}
\end{figure}
In this paper, we focus on the stochastic case, where the channel power gains follow exponential distributions with unit mean. Our goal is to maximize the long-term expectation of the achievable throughput subject to an outage probability constraint. For the direct transmission protocol, we formulate the following optimization problem:
\begin{eqnarray}\label{eqn:p1}
\mathrm{\mathbf{P1}}:\underset{\mathit{\alpha}}{\mathbf{max}} & & \mathbb{E} [R_{DT}] \label{eq:p3} \\
\mathrm{\mathbf{s.t.}} & & P_{\gamma_o, DT}^{out}\le\theta\\
&&0<\alpha<1,
\end{eqnarray}
where $\mathbb{E}[R_{DT}]$ denotes the expectation of $R_{DT}$, $\theta$ the maximum tolerable outage probability, and $P_{\gamma_o,DT}^{out}$ the outage probability with SINR threshold $\gamma_o$.
For DF relay transmission, we formulate the following optimization problem,
\begin{eqnarray}\label{eqn:p2}
\mathrm{\mathbf{P2}}:\underset{\mathit{\alpha, \beta}}{\mathbf{max}} & & \mathbb{E} [R_{DF}] \label{eq:p3} \\
\mathrm{\mathbf{s.t.}} & & P_{\gamma_o, DF}^{out}\le\theta\\
&&\mathbb{E}[R_{SR}]\le\mathbb{E}[R_{RD}]\label{eqn:p2-3}\\
&&0<\alpha<1\\
&&0<\beta<1\\
&&\alpha+\beta<1.
\end{eqnarray}
The constraint (\ref{eqn:p2-3}) is the data causality constraint: the average throughput of the $S-R$ link cannot be larger than that of the $R-D$ link \cite{orhan2015energy}.
Note that under the interference-limited channel assumption, i.e., $\sigma_{n_D}^2=0$, the SIR and the throughput are independent of the transmit power of the AP. That is to say, increasing the AP's transmit power will not increase the achievable throughput. The only parameters that determine the system performance are the time allocation ratios $\alpha$ and $\beta$, which are to be optimized in this paper.
\section{Throughput Maximization for Direct Transmission}\label{sec:dt}
In this section, we first derive the outage probability and the average throughput for direct transmission based on the distribution of the SIR. Then we solve the optimization problem by convex optimization techniques.
\subsection{Outage Probability and Expected Throughput}
Under the same assumptions as in section \ref{sec:probform}, we obtain the received SIR at node $D$ by combining (\ref{eqn:es}), (\ref{eqn:ps}) and (\ref{eqn:sinr-dt}) as follows:
\begin{equation}\label{eqn:gdt}
\gamma_{DT}=\frac{\alpha}{1-\alpha}\cdot\frac{h_{AS}}{h_{AD}}.
\end{equation}
Since the channel gains $h_{AS}$ and $h_{AD}$ are exponential random variables with unit mean, according to Lemma \ref{lem:cdf} the PDF of $\gamma_{DT}$ is
\begin{equation}\label{eqn:pdfgdt}
f_{\Gamma_{DT}}(\gamma)= \dfrac{\alpha(1-\alpha)}{[(1-\alpha)\gamma+\alpha]^2}.
\end{equation}
For a given outage threshold $\gamma_o$, the outage probability is thus given by
\begin{align}
P_{\gamma_o,DT}^{out} &= \mathbb{P}(\gamma_{DT}\le \gamma_o) \notag \\
&=\int_{0}^{\gamma_o}f_{\Gamma_{DT}}(\gamma)d\gamma \notag\\
&=\frac{(1-\alpha)\gamma_o}{\alpha +(1-\alpha)\gamma_o}.\label{eqn:pout}
\end{align}
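The closed form (\ref{eqn:pout}) can be checked by simulation (a sketch; the parameter values in the check are arbitrary assumptions):

```python
import random

def p_out_dt(alpha, gamma_o):
    # closed-form outage probability (pout)
    return (1 - alpha) * gamma_o / (alpha + (1 - alpha) * gamma_o)

def p_out_dt_mc(alpha, gamma_o, n=300_000, seed=2):
    # empirical P(gamma_DT <= gamma_o), gamma_DT = alpha/(1-alpha) * hAS/hAD
    rng = random.Random(seed)
    k = alpha / (1 - alpha)
    hits = sum(k * rng.expovariate(1.0) / rng.expovariate(1.0) <= gamma_o
               for _ in range(n))
    return hits / n
```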
Based on the distribution of the SIR, we obtain the expected throughput of direct transmission over numerous time slots as follows:
\begin{align}\label{eqn:erdt}
\mathbb{E}[R_{DT}]=&\mathbb{E}[(1-\alpha)\log_2(1+\gamma_{DT})] \\
=&(1-\alpha)\int_{0}^{\infty}\log_2(1+\gamma_{DT})f_{\Gamma_{DT}}(\gamma)d\gamma\notag\\
\overset{(a)}{=}&\frac{\alpha(1-\alpha)}{1-2\alpha}\log_2(\alpha^{-1}-1)\notag,
\end{align}
where (a) comes from replacing $k$ with $\frac{\alpha}{1-\alpha}$ in Lemma \ref{lem:mean}. It is not difficult to prove that $\mathbb{E}[R_{DT}]$ is a continuous function for $\alpha\in(0,1)$. Substituting (\ref{eqn:pout}) and (\ref{eqn:erdt}) into problem P1 and simplifying, we obtain the following equivalent problem:
\begin{eqnarray}
\mathrm{\mathbf{P3}}:\underset{\mathit{\alpha}}{\mathbf{max}} & &\mathbb{E}[R_{DT}] \label{eqn:obj2}\\
\mathrm{\mathbf{s.t.}} & &\alpha \ge \frac{\gamma_o(1-\theta)}{\theta +\gamma_o (1-\theta)} \label{eqn:cond2}\\
&&0<\alpha<1.
\end{eqnarray}
\subsection{Solution of Problem P3}
In this part, we first prove that the objective function of problem P3 is concave and then obtain the solution by using convex optimization techniques.
\begin{lemma}\label{lem:concdt}
The objective function in P3 is concave.
\end{lemma}
\begin{proof}
See Appendix for details.
\end{proof}
\begin{proposition}\label{pro:opt}
The optimal $\alpha$ for problem P3 is ${\alpha}^*=\max \Big(0.5,\big(\frac{\theta}{(1-\theta)\gamma_o}+1\big)^{-1}\Big)$.
\end{proposition}
\begin{proof}
As problem P3 is a univariate maximization problem with a bounded constraint, we can simply maximize the objective function and compare the optimal parameter ${\alpha}^*$ with the bound to get the actual optimal value. We obtain ${\alpha}^*$ by letting the first derivative of $\mathbb{E}[R_{DT}]$ equal zero,
\begin{align}
\frac{d\mathbb{E}[R_{DT}]}{d{\alpha}}&=\frac{{\alpha}^2+(\alpha-1)^2}{(2\alpha-1)^2}\log_2(\frac{\alpha}{1-\alpha})-\frac{1}{(2\alpha-1)\ln 2}=0.\notag
\end{align}
This equation can be equivalently translated to the following one
\begin{equation}
\ln(\frac{\alpha}{1-\alpha})=\frac{2\alpha-1}{{\alpha}^2+(\alpha-1)^2} \label{eqn:diff1}
\end{equation}
The exact solution of (\ref{eqn:diff1}) is $\alpha^*=0.5$. Considering the lower bound of $\alpha$ subject to (\ref{eqn:cond2}), we directly obtain Proposition \ref{pro:opt}.
\end{proof}
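The closed-form throughput (\ref{eqn:erdt}) and Proposition \ref{pro:opt} can be evaluated directly. In the sketch below (an illustration; at $\alpha=0.5$ the expression (\ref{eqn:erdt}) has a removable singularity with limit $0.5\log_2 e$):

```python
import math

def e_r_dt(alpha):
    # expected throughput (erdt); limit 0.5*log2(e) at alpha = 0.5
    if abs(alpha - 0.5) < 1e-9:
        return 0.5 * math.log2(math.e)
    return alpha * (1 - alpha) / (1 - 2 * alpha) * math.log2(1 / alpha - 1)

def alpha_star(theta, gamma_o):
    # Proposition: max(0.5, (theta / ((1 - theta) * gamma_o) + 1)^(-1))
    return max(0.5, 1.0 / (theta / ((1.0 - theta) * gamma_o) + 1.0))
```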
From Proposition \ref{pro:opt}, we find that the expected throughput is no more than $0.5\log_2 e\approx 0.7213$ bps/Hz even if we remove the outage constraint, which implies that the spectral efficiency of non-cooperative WPCs is quite low. Nevertheless, this protocol can find broad applications in low-rate wireless sensor networks.
\subsection{Simulation Result}
In Fig.\ref{Fig:r_k}, we present simulations of the average achievable throughput to verify our analytical results. The simulation results are obtained by averaging over 10,000 independent Rayleigh channel realizations. The analytical results are plotted according to (\ref{eqn:erdt}). The two curves match well, which verifies the analytical framework.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{r_k}
\caption{Average achievable throughput versus harvesting ratio.}
\label{Fig:r_k}
\end{figure}
In Fig.\ref{Fig:r_zeta}, the optimal average throughput versus the energy harvesting efficiency $\zeta$ is depicted under outage probability thresholds of 0.05 and 0.02, respectively. The outage SIR threshold is set as $\gamma_o=-13$ dB. The curve without the outage probability constraint is also given for comparison. We can see that: 1) the throughput performance improves with a weaker outage probability constraint, as expected, and is maximized if the outage probability constraint is completely relaxed; 2) the upper bound given by Proposition \ref{pro:opt} is confirmed at the point (1, 0.73).
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{r_zeta}
\caption{Average achievable throughput versus energy harvesting efficiency.}
\label{Fig:r_zeta}
\end{figure}
To investigate how the outage SIR threshold impacts the optimal {\it harvest ratio}, we plot Fig.\ref{Fig:k_gamma}. The simulation parameters are the same as in Fig.\ref{Fig:r_k}. We present the cases with higher and lower outage probability thresholds, respectively. Our results imply that: 1) the optimal {\it harvest ratio} is determined by the outage probability constraint in most practical scenarios, i.e., $\gamma_o>-10$ dB, while in the lower SIR regime it is determined by the objective function itself; 2) the optimal $\alpha$ for the case with a stronger outage constraint ($\theta=0.02$) is always higher than that with a weaker one ($\theta=0.05$). This can be explained by the fact that, to meet the stronger outage constraint, more time should be allocated to harvesting energy so that a higher SIR is achieved; 3) the optimal $\alpha=0.5$ given by Proposition \ref{pro:opt} is confirmed.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{r_gamma}
\caption{Optimal harvesting ratio versus outage SIR threshold.}
\label{Fig:k_gamma}
\end{figure}
\section{Throughput maximization for DF relay transmission}
\subsection{Outage Probability and Expected Throughput}
For DF relay aided communication, an outage occurs when either of the two links, $S-R$ or $R-D$, is in outage. Thus the overall outage probability under the DF cooperative protocol can be expressed as
\begin{equation}\label{eqn:poutdf}
P_{\gamma_o, DF}^{out}=1-(1-P_{\gamma_o, SR}^{out})(1-P_{\gamma_o, RD}^{out})
\end{equation}
We next derive the expressions for $P_{\gamma_o, SR}^{out}$ and $P_{\gamma_o, RD}^{out}$. Following the same assumptions as in section \ref{sec:dt}, we obtain the received SIR at node $R$ from (\ref{eqn:sinr-sr}) and the received SIR at node $D$ from (\ref{eqn:sinr-rd}) as follows:
\begin{equation}\label{eqn:sir-sr}
\gamma_{SR}=\frac{\alpha d^{-\mu}}{\beta}\cdot\frac{h_{AS}}{h_{AR}},
\end{equation}
\begin{equation}\label{eqn:sir-rd}
\gamma_{RD}=\frac{\alpha (1-d)^{-\mu}}{1-\alpha-\beta}\cdot\frac{h_{AR}}{h_{AD}}.
\end{equation}
The distributions of $\gamma_{SR}$ and $\gamma_{RD}$ are similar to that of the direct transmission protocol. We obtain them by replacing $k$ with $\frac{\alpha d^{-\mu}}{\beta}$ and $\frac{\alpha(1-d)^{-\mu}}{1-\alpha-\beta}$ in Lemma \ref{lem:cdf}, respectively, as follows:
\begin{equation}\label{eqn:sir-sr-dist}
f_{\Gamma_{SR}}(\gamma)= \dfrac{\alpha\beta d^{-\mu}}{[\alpha d^{-\mu}+\gamma\beta]^2},
\end{equation}
and
\begin{equation}\label{eqn:sir-rd-dist}
f_{\Gamma_{RD}}(\gamma)= \dfrac{\alpha(1-\alpha-\beta) (1-d)^{-\mu}}{[\alpha (1-d)^{-\mu}+\gamma(1-\alpha-\beta)]^2}.
\end{equation}
The outage probability of link $S-R$ is derived by evaluating the CDF of the SIR at $\gamma_o$:
\begin{align}\label{eqn:pout-sr}
P_{\gamma_o,SR}^{out} &= \mathbb{P}(\gamma_{SR}\le \gamma_o) \notag \\
&=\int_{0}^{\gamma_o}f_{\Gamma_{SR}}(\gamma)d\gamma \notag\\
&=\frac{\beta\gamma_o}{\alpha d^{-\mu} +\beta\gamma_o}.
\end{align}
Similarly, the outage probability of link $R-D$ is derived as
\begin{align}\label{eqn:pout-rd}
P_{\gamma_o,RD}^{out} &= \mathbb{P}(\gamma_{RD}\le \gamma_o) \notag \\
&=\int_{0}^{\gamma_o}f_{\Gamma_{RD}}(\gamma)d\gamma \notag\\
&=\frac{(1-\alpha-\beta)\gamma_o}{\alpha(1-d)^{-\mu} +(1-\alpha-\beta)\gamma_o}.
\end{align}
Substituting (\ref{eqn:pout-sr}) and (\ref{eqn:pout-rd}) into (\ref{eqn:poutdf}), we obtain the overall outage probability of the DF relay transmission as
\begin{equation}\label{eqn:poutdf2}
P_{\gamma_o, DF}^{out}=1-\dfrac{\alpha^2(d(1-d))^{-\mu}}{(\alpha d^{-\mu}+\beta\gamma_o)(\alpha(1-d)^{-\mu}+(1-\alpha-\beta)\gamma_o)}
\end{equation}
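The closed form (\ref{eqn:poutdf2}) can be verified against the per-link probabilities (\ref{eqn:pout-sr}) and (\ref{eqn:pout-rd}) (a sketch; the check parameters are arbitrary assumptions):

```python
def p_out_sr(alpha, beta, d, mu, g):
    # per-link outage of S-R, Eq. (pout-sr); g is the SIR threshold
    return beta * g / (alpha * d ** (-mu) + beta * g)

def p_out_rd(alpha, beta, d, mu, g):
    # per-link outage of R-D, Eq. (pout-rd)
    w = 1 - alpha - beta
    return w * g / (alpha * (1 - d) ** (-mu) + w * g)

def p_out_df(alpha, beta, d, mu, g):
    # overall DF outage in closed form, Eq. (poutdf2)
    w = 1 - alpha - beta
    num = alpha ** 2 * (d * (1 - d)) ** (-mu)
    den = ((alpha * d ** (-mu) + beta * g)
           * (alpha * (1 - d) ** (-mu) + w * g))
    return 1 - num / den
```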
With the PDFs of $\gamma_{SR}$ and $\gamma_{RD}$, the expected throughputs of links $S-R$ and $R-D$ are derived as
\begin{align}\label{eqn:ersr}
\mathbb{E}[R_{SR}]=&\mathbb{E}[\beta\log_2(1+\gamma_{SR})] \\
=&\beta\int_{0}^{\infty}\log_2(1+\gamma_{SR})f_{\Gamma_{SR}}(\gamma)d\gamma\notag\\
=&\frac{\beta}{\beta d^{\mu}\alpha^{-1}-1}\log_2(\beta d^{\mu}\alpha^{-1})\notag
\end{align}
and
\begin{align}\label{eqn:errd}
\mathbb{E}[R_{RD}]=&\mathbb{E}[(1-\alpha-\beta)\log_2(1+\gamma_{RD})] \\
=&(1-\alpha-\beta)\int_{0}^{\infty}\log_2(1+\gamma_{RD})f_{\Gamma_{RD}}(\gamma)d\gamma\notag\\
=&\frac{(1-\alpha-\beta)}{(1-\alpha-\beta)(1- d)^{\mu}\alpha^{-1}-1}\log_2((1-\alpha-\beta) (1-d)^{\mu}\alpha^{-1})\notag,
\end{align}
respectively.
For DF relay transmission powered by RF energy, we assume that the direct link from $S$ to $D$ can be ignored due to its limited transmission ability. This assumption can also be used to obtain a lower bound for an actual relay system. Under this assumption, the overall expected throughput is determined by the smaller of the $S-R$ and $R-D$ link throughputs,
\begin{equation}\label{eqn:erdt2}
\mathbb{E}[R_{DF}]=\min\{\mathbb{E}[R_{SR}],\mathbb{E}[R_{RD}]\}
\end{equation}
Considering the data causality constraint of problem P2, the objective function can be equivalently converted, yielding
\begin{eqnarray}\label{eqn:p4}
\mathrm{\mathbf{P4}}:\underset{\mathit{\alpha, \beta}}{\mathbf{max}} & & \mathbb{E} [R_{SR}] \\
\mathrm{\mathbf{s.t.}} & & P_{\gamma_o, DF}^{out}\le\theta \label{eqn:p4-1}\\
&&\mathbb{E}[R_{SR}]\le\mathbb{E}[R_{RD}] \label{eqn:p4-2}\\
&&0<\alpha<1\\
&&0<\beta<1\\
&&\alpha+\beta<1.
\end{eqnarray}
\subsection{Solution of Problem P4}
The objective function of P4 amounts to finding the optimal \textit{harvest ratio} considering only the first two phases of the whole slot, which can be solved by a method similar to that of P3. Therefore, to simplify the analysis, we divide the original problem into two steps. In the first step, we find the optimal \textit{harvest ratio}, denoted by $\kappa=\frac{\alpha}{\beta}$. Note that we only find the optimal ratio of the two durations, not the durations themselves. In the second step, integrating the optimal ratio and the constraints in P4, we find the optimal harvesting time $\alpha$ and $S-R$ transmission time $\beta$.
First, we see that finding the optimal \textit{harvest ratio} in P4 is similar to that in P3. The only difference is that in P3 the $S-D$ distance is $1$, whereas in P4 the $S-R$ distance is shorter and denoted as $d$. Setting $\alpha=\kappa\beta$, the corresponding expected throughput over the fading power transfer channel is
\begin{equation}\label{eqn:ersr-p5}
\mathbb{E}[R_{SR}]=\frac{z}{(1+\kappa)}\frac{\log_2(\kappa^{-1}d^{\mu})}{\kappa^{-1}d^{\mu}-1},
\end{equation}
where $z=\alpha+\beta$ is assumed to be a constant in the first step. We call $z$ the \textit{harvest-and-first-hop} sum time, which will be optimized in the second step. The quasi-concavity of the function (\ref{eqn:ersr-p5}) can be verified by investigating its second derivative. However, the proof is tedious and we omit it due to the space limit. We will also show the curve of $\mathbb{E}[R_{SR}]$ as evidence in the simulation part. The optimal $\kappa$ is given by solving
\begin{equation}\label{eqn:optk}
\frac{d\mathbb{E}[R_{SR}]}{d\kappa}=0.
\end{equation}
However, there is no closed-form solution of (\ref{eqn:optk}), so we calculate the optimal $\kappa^*$ numerically.
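For instance, $\kappa^*$ can be obtained by a one-dimensional search over the quasi-concave objective (a numerical sketch; note that $z$ cancels out of the maximization, and the grid range and resolution below are arbitrary choices):

```python
import math

def g(t):
    # log2(t)/(t - 1), extended by its limit log2(e) at t = 1
    return math.log2(math.e) if abs(t - 1.0) < 1e-12 else math.log2(t) / (t - 1.0)

def e_r_sr_per_z(kappa, d, mu):
    # E[R_SR]/z from (ersr-p5): g(d^mu / kappa) / (1 + kappa)
    return g(d ** mu / kappa) / (1.0 + kappa)

def kappa_star(d, mu, grid=20_000, k_max=2.0):
    # grid search; quasi-concavity in kappa makes this reliable
    return max((e_r_sr_per_z(k_max * i / grid, d, mu), k_max * i / grid)
               for i in range(1, grid))[1]
```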
So far we have obtained the optimal time allocation ratio under a constant sum time $z$. Next, we find the optimal sum time $z$ that meets the data causality constraint and the outage probability constraint. Substituting $\alpha=\frac{\kappa z}{1+\kappa}$ and $\beta=\frac{z}{\kappa +1}$ into (\ref{eqn:p4-1}) and simplifying, we obtain the equivalent form of constraint (\ref{eqn:p4-1}) as
\begin{equation}\label{eqn:cp51}
z\ge\dfrac{\gamma_o(1+\kappa)}{\gamma_o(1+\kappa)+\kappa^2[d(1-d)]^{-\mu}\left[(1-\theta)(\kappa d^{-\mu}+\gamma_o)\right]^{-1}-\kappa(1-d)^{-\mu}}.
\end{equation}
We denote the right-hand side of (\ref{eqn:cp51}) as $\widetilde{z_1}(\kappa, \theta, \gamma_o)$ for convenience of description. Similarly, we substitute $\alpha=\frac{\kappa z}{1+\kappa}, \beta=\frac{z}{\kappa +1}$ into (\ref{eqn:p4-2}) and obtain its equivalent form as
\begin{equation}\label{eqn:p5c2}
\frac{\tau}{\tau-1}\log_2\tau\ge \Psi ,
\end{equation}
where
\begin{equation}\label{eqn:tau}
\tau=(z^{-1}-1)(\kappa^{-1}+1)(1-d)^{\mu}
\end{equation}
and
\begin{equation}\label{eqn:psi}
\Psi=\frac{(1-d)^{\mu}}{d^{\mu}-\kappa}\log_2(\kappa^{-1}d^{\mu}).
\end{equation}
We define the function $f(\tau)=\frac{\tau}{\tau-1}\log_2\tau$. Obviously it is a monotonically increasing function of $\tau$ for $\tau>0$. Letting $\tau^*=f^{-1}(\Psi)$, we obtain the equivalent form of constraint (\ref{eqn:p4-2}) as
\begin{equation}\label{eqn:p5c21}
z\le \left[\frac{\kappa\tau^*}{(1+\kappa)(1-d)^{\mu}}+1\right]^{-1},
\end{equation}
the right-hand side of which is denoted as $\widetilde{z_2}(\kappa)$ for brevity.
In conclusion, the solving process of problem P4 is as follows.
\begin{enumerate}
\item Numerically solve equation (\ref{eqn:optk}) to obtain the optimal \textit{harvest ratio} $\kappa^*$.
\item Solve the following problem
\begin{eqnarray}\label{eqn:p5}
\underset{\mathit{z}}{\mathbf{max}} & & z\dfrac{\log_2(\kappa^{*-1}d^{\mu})}{(1+\kappa^*)(\kappa^{*-1}d^{\mu}-1)} \\
\mathrm{\mathbf{s.t.}} & & \widetilde{z_1}\le z\le\widetilde{z_2}.
\end{eqnarray}
\item Obtain the optimal time allocation ratio by $\alpha=\frac{\kappa^*z}{1+\kappa^*}$ and $\beta=\frac{z}{1+\kappa^*}$.
\end{enumerate}
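The whole procedure can be sketched numerically. Instead of the algebraic forms of $\widetilde{z_1}$ and $\widetilde{z_2}$, the sketch below recovers both bounds by bisection directly on the outage probability (\ref{eqn:poutdf2}), which decreases in $z$, and on the throughput balance $\mathbb{E}[R_{SR}]=\mathbb{E}[R_{RD}]$ from (\ref{eqn:ersr-p5}) and (\ref{eqn:errd}); since the step-2 objective is linear in $z$, the optimum sits at the upper end of the feasible interval. All numeric parameters in the check are arbitrary assumptions.

```python
import math

def g(t):
    # log2(t)/(t - 1) with its limit log2(e) at t = 1
    return math.log2(math.e) if abs(t - 1.0) < 1e-12 else math.log2(t) / (t - 1.0)

def p_out_df(z, kappa, d, mu, gamma_o):
    # overall DF outage (poutdf2), alpha = kappa*z/(1+kappa), beta = z/(1+kappa)
    a, b = kappa * z / (1 + kappa), z / (1 + kappa)
    num = a ** 2 * (d * (1 - d)) ** (-mu)
    den = ((a * d ** (-mu) + b * gamma_o)
           * (a * (1 - d) ** (-mu) + (1 - z) * gamma_o))
    return 1 - num / den

def e_r_sr(z, kappa, d, mu):
    return z / (1 + kappa) * g(d ** mu / kappa)          # (ersr-p5)

def e_r_rd(z, kappa, d, mu):
    a = kappa * z / (1 + kappa)
    return (1 - z) * g((1 - z) * (1 - d) ** mu / a)      # (errd)

def bisect(f, lo, hi, it=100):
    # root of a sign-changing f on [lo, hi]
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def solve_p4(kappa, d, mu, gamma_o, theta, eps=1e-9):
    # step 2: z1 from the outage constraint (outage decreases in z),
    #         z2 from data causality (E[R_SR] - E[R_RD] increases through 0)
    z1 = bisect(lambda z: p_out_df(z, kappa, d, mu, gamma_o) - theta, eps, 1 - eps)
    z2 = bisect(lambda z: e_r_sr(z, kappa, d, mu) - e_r_rd(z, kappa, d, mu), eps, 1 - eps)
    if z1 > z2:
        return None                                       # infeasible for this kappa
    z = z2                                                # objective is linear in z
    return z, kappa * z / (1 + kappa), z / (1 + kappa)    # step 3: (z, alpha, beta)
```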
\subsection{Simulation Result}
In this part, we first verify the quasi-concavity of the expected throughput function. Then we investigate the feasible region of the \textit{harvest-and-first-hop} sum time for varying SIR threshold values. Finally, we compare the optimal throughput performance of direct transmission and DF relay transmission.
In Fig.\ref{fig:r_k_df}, the expected throughput of the DF relay system over the \textit{harvest ratio} $\kappa$ is depicted. In this simulation, we set $z=1$, $d=0.5$ and $\mu=2$. The simulation results are obtained through 10,000 independent Rayleigh fading channel realizations, while the analytical results are given by (\ref{eqn:ersr-p5}). The two curves match well, which verifies the quasi-concavity of the expected throughput over the \textit{harvest ratio}.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{r_k_df}
\caption{Expected throughput versus harvesting ratio $\kappa$ for DF relay transmission.}
\label{fig:r_k_df}
\end{figure}
In Fig.\ref{fig:z_sir}, we demonstrate the feasible region of the \textit{harvest-and-first-hop} sum time versus the outage SIR threshold. In this simulation, we set $d=0$, $\mu=2$, and $\gamma_o$ varies from $-20$ dB to $0$ dB. Since $\widetilde{z_1}\le z\le\widetilde{z_2}$, we find that the feasible region of $z$ is narrow, mainly due to data causality. However, this region for the looser outage constraint ($\theta=0.05$) is much wider than that for the tighter one ($\theta=0.02$).
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{z_sir}
\caption{Optimal \textit{harvest-and-first-hop} sum time $z$ versus outage SIR threshold $\gamma_o$ for DF relay transmission.}
\label{fig:z_sir}
\end{figure}
In Fig.\ref{fig:th_d}, we depict the expected throughput performance of the two protocols over the source-relay distance. We set the SIR threshold $\gamma_o=-18$ dB, and the source-relay distance $d$ varies from 0 to 1. The figure shows that when the $S-R$ distance is small, the direct transmission protocol possesses better performance, while for a larger $S-R$ distance (i.e., $d>0.5$) the cooperative protocol is better. In particular, the performance gain is greatly improved for a larger path-loss exponent. This observation coincides with the fact that links with larger path loss gain more benefit from cooperative communications.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{th_d}
\caption{Optimal expected throughput versus source-relay distance for DF relay transmission and direct transmission.}
\label{fig:th_d}
\end{figure}
\section{Conclusion}
This paper studied the expected throughput optimization for wireless communications powered by non-dedicated sources. We formulated optimization problems to maximize the expected throughput for direct transmission and DF relay transmission, respectively, subject to outage probability constraints. The optimal {\it harvest ratio} is derived by convex optimization techniques. We find that the optimal throughput is independent of the transmit power of the energy source in an interference-limited environment and is upper bounded by the constant $0.5\log_2{e}$ bps/Hz. For most practical settings, we conclude that the optimal {\it harvest ratio} is dominated by the outage probability constraint for direct transmission, and by data causality for DF relay transmission.
\section{Introduction}
Water-vapor (H$_2$O) maser emission ($J_\mathrm{K_{a}K_{c}} = 6_{16}$--$5_{23}$ rotational transition at 22.23508 GHz) is a unique probe to directly investigate the structure and dynamics of active galactic nuclei (AGN) on the (sub-)parsec scale.
Since the brightness temperature of H$_2$O maser emission is extremely high ($T_{\rm B} \sim 10^{10}$ K), it is one of a few emission lines that can be observed with very long baseline interferometry (VLBI).
By using VLBI, the distribution and motion of dense gas in AGN can be measured on scales of (sub-)milliarcseconds (mas), where 1 mas corresponds to 0.05 pc at a distance of 10 Mpc.
Presently, more than 80 galaxies, many of which are type-2 Seyfert or LINER systems, have been known to radiate H$_2$O maser emission from their nuclei.
In the active galaxy NGC 4258, VLBI observations of the maser emission have revealed the existence of a compact disk in Keplerian rotation and of a massive black hole at its nucleus (\cite{miyo95}).
Another evidence for a compact edge-on disk rapidly rotating was obtained from a secular velocity drift of the systemic maser features.
The systemic features of NGC 4258 showed an increase of velocities with about 9 km s$^{-1}$ yr$^{-1}$ by monitoring observations over periods of several years (e.g., \cite{has94}, \cite{green95}, \cite{nakai95}, \cite{herr99}).
In addition, measurements of the distribution and velocity drifts of the masers led to a direct determination of the distance of the galaxy (\cite{miyo95}; \cite{nakai96}; \cite{herr99}; \cite{argon07}).
Applying this method to more distant galaxies would make an evaluation of the cosmological constants possible.
Other galaxies also show evidences of circumnuclear disks.
Position measurements of maser spots and position-velocity diagram analysis by VLBI observations have found rotating edge-on disks at the nuclei of NGC 1068 (\cite{GG97}), NGC 3079 (\cite{T98}; \cite{S00}; \cite{yama04}; \cite{kon05}), NGC 4945 (\cite{green97}), Circinus Galaxy (\cite{green03a}), NGC 3393 (\cite{kon08}), UGC 3789 (\cite{reid09}), IC 1481 (\cite{mamyo09}), and other six active galaxies (\cite{kuo11}).
In addition, some galaxies show secular velocity drifts of the systemic features; NGC 2639 (\cite{wil95}), IC 2560 (\cite{ishi01}), Mrk 1419 (\cite{hen02}), and UGC 3789 (\cite{braatz10}).
Water-vapor masers in IC 2560 have been analyzed by \citet{ishi01}, and here we will re-analyze them more accurately, using new VLBI observations.
IC 2560 is an SB(r)b galaxy (\cite{RC3}) located in the southern sky with a declination of $-33^{\circ}$.
The galaxy is in the Antlia cluster at a distance of 26 Mpc (\cite{aaron89}) and receding with a velocity of $V_{\rm sys} = 2876 \pm 20$ km s$^{-1}$ (\cite{strau92}), which has been converted into the local standard of rest (LSR) frame (from the heliocentric frame; $V_{\rm LSR} = V_{\rm hel} - 12.1$ km s$^{-1}$) and also into the radio definition (velocities will be in the same definition throughout this paper).
The adopted parameters for IC 2560 in this paper are listed in table \ref{i2para}.
A bar structure with a full length of $120''$ (or 15 kpc), two clear spiral arms, and a box-shaped bulge can be seen in the optical image (see section 4.5).
The galaxy was classified as a Seyfert 2 from optical line observations (\cite{fair86}).
The nucleus has a 2--10 keV luminosity of $\sim 1.0 \times 10^{41}$ erg s$^{-1}$ (\cite{ishi01}; \cite{iwa02}; \cite{mad06}; \cite{tilak08}).
\citet{braatz96} detected H$_2$O maser emission from IC 2560, during their survey for H$_2$O masers in nearby active galaxies.
The detected emission was near the systemic velocity of the galaxy, and its peak flux density was 0.19 Jy.
\citet{ishi01} detected blue- and red-shifted high velocity features, offset from the systemic velocity by $\Delta V = 213$--418 km s$^{-1}$.
They have also measured velocities of the systemic features in 1996--2000 and detected a secular velocity drift of $2.62 \pm 0.09$ km s$^{-1}$ yr$^{-1}$.
For the red-shifted feature, no significant velocity drift was seen, with an upper limit of 0.5 km s$^{-1}$ yr$^{-1}$ (1$\sigma$) in 1999--2000.
The blue-shifted features were too weak to measure a velocity drift.
They observed this source with the VLBA in 1996 and 1998, and detected the systemic maser features and a continuum component, but did not detect the high velocity features.
Assuming a compact Keplerian edge-on disk for the maser features, the radius was $r = 0.068$--0.26 pc.
The binding mass and the mass density within 0.068 pc were estimated to be $2.8 \times 10^6 \MO$ and $2.1 \times 10^9 \MO$ pc$^{-3}$, respectively.
In this paper, we present results of new VLBA observations concerning all the maser features, and discuss the nuclear structure of IC 2560 based on the results.
\section{Observations}
\subsection{The NRO 45-m Telescope}
We observed H$_2$O maser emission from 1995 June through 2006 April using the 45-m telescope of the Nobeyama Radio Observatory\footnotemark (NRO).
The half-power beam width and the aperture efficiency of the telescope were $\mathrm{HPBW} = \timeform{73''} \pm \timeform{1''}$ and $\eta_\mathrm{a} = 0.63 \pm 0.02$ at 22 GHz, respectively.
\footnotetext{Nobeyama Radio Observatory is a branch of the National Astronomical Observatory of Japan, National Institutes of Natural Sciences.}
The front-end receivers utilized HEMT amplifiers cooled to 20 K, equipped with two polarized feeds that received right- and left-circular polarization simultaneously.
The system noise temperature, $T_\mathrm{sys}$, including the atmospheric effect and the antenna ohmic loss, was 110--640 K, depending on the weather conditions and the observing elevations of $\timeform{12D}$--$\timeform{21D}$.
The receiver back-ends were 2048-channel high-resolution Acousto-Optical Spectrometers (AOS).
Each AOS provided a frequency resolution of 37 kHz and a bandwidth of 40 MHz, which corresponded to a velocity resolution of 0.50 km s$^{-1}$ and a velocity width of 540 km s$^{-1}$ at 22 GHz, respectively.
We used eight AOS; each one overlapped its neighbors by 5 MHz, resulting in a total velocity coverage of $V_\mathrm{LSR} = 1015$--4430 km s$^{-1}$.
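For concreteness, the conversion from the AOS frequency grid to velocity units can be sketched as follows (a hypothetical helper of our own, not observatory code, using the radio definition $v = c\,\Delta\nu/\nu_0$ at the 22.235 GHz H$_2$O line):

```python
# Hypothetical helper (ours, not observatory code): frequency width ->
# velocity width at the 22 GHz H2O line, radio definition v = c * dnu/nu0.
C_KMS = 2.99792458e5   # speed of light [km/s]
NU0_HZ = 22.235e9      # H2O maser rest frequency [Hz]

def freq_to_velocity_width(dnu_hz, nu0_hz=NU0_HZ):
    return C_KMS * dnu_hz / nu0_hz

print(freq_to_velocity_width(37e3))   # ~0.50 km/s per 37 kHz channel
print(freq_to_velocity_width(40e6))   # ~540 km/s per 40 MHz AOS band
```

This reproduces the quoted resolution of 0.50 km s$^{-1}$ and band width of 540 km s$^{-1}$.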
The observations were made in the position-switching mode with an off-source position of $\timeform{5'}$ in the azimuth direction.
The pointing accuracy was about $\timeform{10''}$.
The calibration of the line intensity was performed by chopping the sky and a reference load at room temperature, yielding an antenna temperature, $T^{*}_\mathrm{A}$, corrected for the atmospheric attenuation.
The measured antenna temperature, $T^{*}_\mathrm{A}$, was converted into the flux density, $S$, using the sensitivity, $S/T^{*}_\mathrm{A} = 2.76 \pm 0.09$ Jy K$^{-1}$, calculated from the aperture efficiency.
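The quoted sensitivity follows from the aperture efficiency via $S/T^{*}_\mathrm{A} = 2k_\mathrm{B}/(\eta_\mathrm{a} A_\mathrm{geom})$; a minimal numerical check (our own sketch, not the observatory's calibration code) for the 45-m dish:

```python
import math

# Back-of-the-envelope check of the Jy/K sensitivity from the aperture
# efficiency: S/T_A = 2 k_B / (eta_a * A_geom), with 1 Jy = 1e-26 W m^-2 Hz^-1.
K_B = 1.380649e-23  # Boltzmann constant [J/K]

def jy_per_k(eta_a, dish_diameter_m):
    a_geom = math.pi * (dish_diameter_m / 2.0) ** 2   # geometric area [m^2]
    return 2.0 * K_B / (eta_a * a_geom) / 1e-26       # [Jy/K]

print(round(jy_per_k(0.63, 45.0), 2))  # ~2.76 Jy/K, matching the quoted value
```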
Figure \ref{each} shows the maser spectra of IC 2560 measured with the 45-m telescope from 1995 June to 2006 April, but those from 1996 January to 2000 June, which have been shown in \citet{ishi01}, are omitted.
Figure \ref{sp} shows a spectrum of the maser averaged over all observing epochs from 1995 June to 2006 April.
\subsection{VLBA}
The observations of this research were made using the Very Long Baseline Array (VLBA) (except an antenna at Hancock) and the phased Very Large Array (VLA) of the National Radio Astronomy Observatory\footnotemark (NRAO), USA.
\footnotetext{The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.}
IC 2560 was observed on 2000 April 10 for systemic and high velocity maser features.
Four IFs were recorded, each with a bandwidth of 8 MHz divided into 128 channels (0.84 km s$^{-1}$ velocity resolution).
The LSR velocities at the band centers of the IFs were 3185, 2880, 2635, and 2490 km s$^{-1}$ (see figure \ref{sp}).
The data were processed on the VLBA correlator at NRAO, and after the correlation, data reduction, including calibration and imaging, was performed using the Astronomical Image Processing System (AIPS) package.
The bandpass response was calibrated by observing 4C39.25, and the residual delays and fringe rates were estimated using 1037-295.
An amplitude calibration was performed on the basis of measured system temperature.
Self-calibration was applied using the strongest maser feature at $V_{\rm LSR} = 2876$ km s$^{-1}$ as a reference.
The imaging was performed with the CLEAN method.
For all maser features, CLEAN maps were made using an intermediate weighting between natural and uniform weighting.
The maser features were imaged by averaging 4 channels (3.36 km s$^{-1}$) for IF 2, or by averaging 6 channels (5.04 km s$^{-1}$) for the other IFs.
The synthesized beams and the image noise are given in table \ref{beam}.
\section{Results}
\subsection{Velocity Drifts}
Figure \ref{drift} shows the velocity variations of the red-shifted, systemic, and blue-shifted features at $V_{\rm LSR} = 3050$--3350 km s$^{-1}$, 2860--2910 km s$^{-1}$, and 2400--2800 km s$^{-1}$, respectively, measured with the Nobeyama 45-m telescope.
All velocity peaks are listed in table \ref{vel}, including the data used in \citet{ishi01}.
For the systemic features, \citet{ishi01} reported those velocities from 1996 January to 2000 May (the open circles in figure \ref{drift}); we additionally plot the features with peak intensities of $\geq 5 \sigma$ for each observation in 1995 June and from 2000 December to 2006 April.
The vertical error bars show the range having an intensity of $2 \sigma$ down from each maser peak.
The best-fit drift rates for 6 systemic features are listed in table \ref{rate}.
The rates of the velocity drifts are $a = 2.03$--2.69 km s$^{-1}$ yr$^{-1}$, and the weighted average is $\bar{a} = 2.57 \pm 0.04\ \mathrm{km\ s}^{-1}\ \mathrm{yr}^{-1}$, where the error is one sigma. The averaged rate is consistent with and more accurate than the result of \citet{ishi01}.
For the red- and blue-shifted features, we plot the features with peak intensities of $\geq 4 \sigma$.
We fitted two red-shifted features and one blue-shifted feature (see table \ref{rate}), obtaining negative rates and a positive rate, respectively.
The spiral shock model (\cite{maoz98}) predicted that red-shifted features have negative rates and blue-shifted features have positive rates.
Actually, \citet{yama05} detected such rates in NGC 4258.
Our results for IC 2560 may be consistent with the theoretical prediction, but the measured rates are smaller than $3 \sigma$ of their fitting errors, and we need additional monitoring observations to confirm them.
\subsection{Distribution of the Maser Features}
Table \ref{position} lists the measured positions of maser spots, and figure \ref{m+c} shows the distribution of the maser spots in the coordinate system relative to the position of the strongest maser feature at $V_{\rm LSR} = 2876$ km s$^{-1}$.
The circles indicate the positions of the maser peaks with peak intensities of $\geq 5 \sigma$, where $1 \sigma \simeq 11$ or 6 mJy beam$^{-1}$ for the maps averaged over 4 or 6 channels, respectively.
The position errors were $\Delta \theta ({\rm rms}) = 0.5 \theta_{\rm beam} / {\it SNR} = 0.030$--0.099 mas and 0.010--0.036 mas in the major and minor axis directions of the synthesized beam, respectively, where $\theta_{\rm beam}$ is the synthesized beam size and {\it SNR} the signal-to-noise ratio of the peak emission.
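The astrometric rule of thumb above can be sketched as a one-line helper (the function name and the example numbers are ours and purely illustrative, not values from table \ref{beam}):

```python
# Illustrative sketch of the astrometric rule of thumb
# delta_theta(rms) = 0.5 * theta_beam / SNR used in the text.
def position_error_mas(theta_beam_mas, snr):
    return 0.5 * theta_beam_mas / snr

print(position_error_mas(2.0, 10.0))  # 0.1 mas for a 2 mas beam at SNR = 10
```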
Systemic features, which have a velocity range of 2867--2890 km s$^{-1}$, were located around $(\Delta {\it RA}, \Delta {\it Decl}) \approx (0\ {\rm mas}, 0\ {\rm mas})$, within 0.4 mas (see also figure \ref{radec}).
The peak flux density was 0.145 Jy beam$^{-1}$.
The isotropic luminosity of the systemic features was $\sim 45 \LO$ at 26 Mpc.
A red-shifted feature with the center velocity of $V_{\rm LSR} = 3201$ km s$^{-1}$ was detected at $(\Delta {\it RA}, \Delta {\it Decl}) = (-0.837\ {\rm mas}, 0.774\ {\rm mas})$.
The spatial separation from the strongest systemic feature was $1.140 \pm 0.030$ mas, corresponding to $0.144 \pm 0.004$ pc.
The peak flux density was $0.062 \pm 0.007$ Jy beam$^{-1}$, and the isotropic luminosity was $\sim 1 \LO$ at 26 Mpc.
A blue-shifted feature with the center velocity of $V_{\rm LSR} = 2656$ km s$^{-1}$ was detected at $(\Delta {\it RA}, \Delta {\it Decl}) = (-1.573\ {\rm mas}, 0.131\ {\rm mas})$.
The spatial separation from the strongest systemic feature was $1.578 \pm 0.047$ mas, corresponding to $0.199 \pm 0.006$ pc.
The peak flux density was $0.030 \pm 0.006$ Jy beam$^{-1}$, and the isotropic luminosity was $\sim 0.5 \LO$ at 26 Mpc.
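The angular-to-linear conversions above follow from the small-angle relation; a short sketch (the helper name is ours) with the adopted distance of 26 Mpc:

```python
ARCSEC_PER_RAD = 206264.806  # arcseconds per radian

# Hypothetical helper (names ours, not from the paper): convert an angular
# separation in mas to a projected linear size at distance D (small angle).
def mas_to_pc(theta_mas, distance_pc):
    return theta_mas * 1e-3 / ARCSEC_PER_RAD * distance_pc

D_IC2560 = 26e6  # adopted distance [pc]
print(round(mas_to_pc(1.140, D_IC2560), 3))  # ~0.144 pc (red-shifted feature)
print(round(mas_to_pc(1.578, D_IC2560), 3))  # ~0.199 pc (blue-shifted feature)
```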
Figure \ref{radec} shows an enlarged map of the systemic features in figure \ref{m+c}, together with the systemic features detected on 1998 January 2 by \citet{ishi01}.
Assuming that the position of the feature with $V_{\rm LSR} = 2876$ km s$^{-1}$ (without channel average) observed in 1998 was the same as observed in 2000, we plot the maser features; filled and open circles indicate the peak positions of the maser spots detected at the $\geq 5 \sigma$ level in 2000 and in 1998, respectively.
The position errors of the spots detected in 1998 were $\Delta \theta ({\rm rms}) = 0.012$--0.076 mas and 0.004--0.024 mas in the major and minor axis directions of the synthesized beam, respectively.
The results of least-squares fittings of the systemic features observed in 1998 and 2000 are ${\it PA} = \timeform{-8 \pm 4D}$ and ${\it PA} = \timeform{-19D}^{+12}_{-10}$, respectively, but both are affected by the synthesized beam elongated toward the north-south direction (table \ref{beam}).
\subsection{Distribution of Continuum Emission}
A 22 GHz continuum component was detected in a map using the data of observations in 1998 (as for the observations, see \cite{ishi01}).
Its position referred to that of the maser feature at $V_{\rm LSR} = 2876$ km s$^{-1}$.
We averaged over the channels at a velocity range of 2646--3024 km s$^{-1}$, avoiding maser features.
To improve the signal-to-noise ratio, a Gaussian taper of 100 M$\lambda$ was applied for the $v$ direction.
A CLEAN map was made using NATURAL weighting.
The peak flux density and the integrated flux density were $1.8 \pm 0.3$ mJy beam$^{-1}$ and $1.7 \pm 0.5$ mJy, respectively, and the continuum emission did not extend more than the synthesized beam ($2.17 \times 1.18$ mas).
Assuming that the peak position of the strongest maser feature ($V_{\rm LSR} = 2876$ km s$^{-1}$) observed in 1998 was the same as that observed in 2000, it is possible to overlay the continuum map on the maser map, as in figure \ref{m+c}.
The peak position of the continuum component was at $(\Delta {\it RA}, \Delta {\it Decl}) = (-1.400\ {\rm mas}, -0.722\ {\rm mas})$.
The spatial separation from the strongest systemic feature was $1.575 \pm 0.095$ mas, corresponding to $0.199 \pm 0.012$ pc.
The blue-shifted maser feature detected was located within the 3$\sigma$ contour of the continuum component.
\section{Discussion}
\subsection{Rotating Edge-on Disk}
The H$_2$O megamasers in IC 2560 show characteristics seen in other megamasers that have an edge-on maser disk rotating around the galactic center:
(1) the maser spectrum of figure \ref{sp} is composed of the systemic, red- and blue-shifted features,
(2) most of the maser spots, except a maser at $(\Delta {\it RA}, \Delta {\it Decl}) = (-1.573\ {\rm mas}, 0.131\ {\rm mas})$, align on the sky (figure \ref{m+c}),
(3) the velocities of the systemic features show a secular drift but those of the high-velocity features do not (figure \ref{drift}),
and (4) there is a jet-like continuum component nearly perpendicular to the aligned maser distribution.
From these results, we propose an edge-on disk with the position angle of ${\it PA} = -46 \pm \timeform{1D}$ which is the result of a least-squares fitting of the systemic and red-shifted features (figure \ref{m+c}), rotating around $(\Delta {\it RA}, \Delta {\it Decl}) \approx (0\ {\rm mas}, 0\ {\rm mas})$ which is the position of the systemic velocity of the galaxy.
The inclination angle of the disk is assumed to be $i \approx \timeform{90D}$.
[The position angle of ${\it PA} = \timeform{-46D}$ differs from the position angles obtained from the systemic features alone in section 3.2.
The discrepancy arises mainly because ${\it PA}$ of the systemic features is affected by the synthesized beam elongated toward the north-south direction (table \ref{beam}).]
A red-shifted feature rotates at the radius of $r = 0.144 \pm 0.021$ pc ($1.14 \pm 0.17$ mas) with the velocity of $V_{\rm rot} = |V_{\rm LSR} - V_{\rm sys}| = |3201 - 2876| = 325$ km s$^{-1}$.
The distribution of the systemic features perpendicular to ${\it PA} = \timeform{-46D}$ gives the thickness of the disk of $2H \leq 0.025$ pc.
[If the maser disk in IC 2560 has warps, such as NGC 4258 (e.g., \cite{miyo95}) and Circinus Galaxy (\cite{green03a}), its position angle may be different.
A least-squares fitting of only the systemic features in 1998 and 2000 (see figure \ref{radec}) gives the position angles of ${\it PA} = \timeform{-8 \pm 4D}$ and ${\it PA} = \timeform{-19D}^{+12}_{-10}$, respectively, which are much smaller than ${\it PA} = \timeform{-46D}$, but may be affected by the synthesized beam elongated toward the north-east direction (see table \ref{beam}).
It is difficult, however, to argue for a warped disk, because the position angles of the inner and outer disks are indeterminate.]
The blue-shifted feature at $(\Delta {\it RA}, \Delta {\it Decl}) = (-1.573\ {\rm mas}, 0.131\ {\rm mas})$, located within the 3$\sigma$ contour of the 22 GHz continuum component, may not belong to the maser disk but may emit due to stimulation by continuum radiation from the background source (i.e., a jet, see section 4.3).
NGC 1068 has maser features associated with continuum components (jets), which are called ``the jet masers", apart from ``the disk masers" (\cite{galli01}).
The Circinus Galaxy showed some maser features associated with a wide-angle outflow from the nucleus with different velocities from the disk masers (\cite{green03a}), and IC 1481 also showed a jet maser as well as disk masers (\cite{mamyo09}).
The blue-shifted features in the maser disk were not detected with the VLBA because of their weakness (figure \ref{sp}), and the red-shifted features around 3090 km s$^{-1}$ were not observed with the VLBA.
To confirm our edge-on disk model, we are conducting further sensitive VLBI observations of these features.
Continuum emission at the position of the systemic features in figure \ref{m+c} (i.e., the galactic nucleus) was not detected ($< 0.93$ mJy beam$^{-1} = 3 \sigma$).
This is probably because the continuum emission is attenuated by thermal absorption in a layer of ionized gas above the disk.
\subsection{Super-Massive Black Hole}
The mass inside the radius of the rotating disk, $r = 0.144 \pm 0.021$ pc,
is given by
\begin{eqnarray}
M &=& \frac{V_{\rm rot}^2 r}{G} = (3.5 \pm 0.5) \times 10^6 \MO,
\end{eqnarray}
where $V_{\rm rot}$ ($= 325$ km s$^{-1}$) is the rotation velocity at $r$, $G$ the gravitational constant, and a spherical distribution of the central matter is assumed.
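Equation (1) can be checked numerically with standard constants (the helper name is ours; the values of $G$, the parsec, and the solar mass are the usual ones):

```python
G_SI = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
PC_M = 3.0857e16        # parsec in metres
MSUN_KG = 1.989e30      # solar mass in kg

# Sketch of the enclosed-mass estimate M = V_rot^2 r / G for the mapped
# red-shifted feature (values taken from the text above).
def enclosed_mass_msun(v_rot_kms, r_pc):
    v = v_rot_kms * 1e3      # km/s -> m/s
    r = r_pc * PC_M          # pc -> m
    return v * v * r / G_SI / MSUN_KG

print(f"{enclosed_mass_msun(325.0, 0.144):.2e}")  # ~3.5e6 Msun
```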
This value revises a binding mass of $M = 2.8 \times 10^6 \MO$ estimated by \citet{ishi01} using only the velocities in the maser spectrum detected with the Nobeyama 45-m telescope and the drift rates of the velocities of the systemic features.
This mass is smaller than those of NGC 4258 ($3.9 \times 10^7 \MO$, \cite{miyo95}; \cite{herr99}) by an order of magnitude, but is the same order as those of the Circinus Galaxy ($1.7 \times 10^6 \MO$, \cite{green03a}), NGC 4945 ($1.4 \times 10^6 \MO$, \cite{green97}), and NGC 3079 [(2--3)$\times 10^6 \MO$, \cite{yama04}] .
The mean volume density inside the radius of $r = 0.144 \pm 0.021$ pc is
\begin{eqnarray}
\rho &=& \frac{3 M}{4 \pi r^3} = (2.8 \pm 1.3) \times 10^8 \MO\ {\rm pc}^{-3}.
\end{eqnarray}
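Equation (2) follows directly; a quick numerical check (our own sketch) with $M$ in solar masses and $r$ in parsecs, so that the density comes out in $\MO$ pc$^{-3}$:

```python
import math

# Mean density inside radius r for the mass just derived; a consistency
# check of equation (2) using the quoted numbers.
def mean_density(mass_msun, r_pc):
    return mass_msun / (4.0 / 3.0 * math.pi * r_pc ** 3)  # [Msun pc^-3]

print(f"{mean_density(3.5e6, 0.144):.1e}")  # ~2.8e8 Msun pc^-3
```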
Blue- and red-shifted features in figure \ref{sp} were detected at $V_{\rm LSR} = 2458$--2661 km s$^{-1}$ and 3089--3222 km s$^{-1}$, respectively.
If the high-velocity features except the blue-shifted features around $V_{\rm LSR} = 2656$ km s$^{-1}$ (jet masers) are circularly rotating, the rotation velocities of the edge-on disk are $V_{\rm rot} = |V_{\rm LSR} - V_{\rm sys}| = 215$--418 km s$^{-1}$ for the blue-shifted ($V_{\rm LSR} = 2458$--2556 km s$^{-1}$) and $V_{\rm rot} = 213$--346 km s$^{-1}$ for the red-shifted ($V_{\rm LSR} = 3089$--3222 km s$^{-1}$).
If the rotation is Keplerian ($V_{\rm rot} \propto r^{-1/2}$) like NGC 4258 (\cite{miyo95}), we can estimate the radii of the maser disk to be $r = (325/V_{\rm rot})^2 \times 0.144 = 0.087$--0.329 pc (0.69--2.60 mas) for the blue-shifted and $r = 0.127$--0.335 pc (1.01--2.65 mas) for the red-shifted, using the radius (0.144 pc) and velocity (325 km s$^{-1}$) of the red-shifted feature at $(\Delta {\it RA}, \Delta {\it Decl}) = (-0.837\ {\rm mas}, 0.774\ {\rm mas})$.
The blue-shifted features of $V_{\rm LSR} = 2458$ km s$^{-1}$ and the red-shifted features of $V_{\rm LSR} = 3089$ km s$^{-1}$ have the minimum and maximum radii, respectively.
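The Keplerian scaling $r \propto V_{\rm rot}^{-2}$ used here can be sketched as follows (the function name is ours; the anchor values are those of the mapped red-shifted feature):

```python
# Keplerian scaling r ∝ V^-2, anchored on the mapped red-shifted feature
# (r0 = 0.144 pc at V0 = 325 km/s), as assumed in the text.
def kepler_radius_pc(v_rot_kms, r0_pc=0.144, v0_kms=325.0):
    return (v0_kms / v_rot_kms) ** 2 * r0_pc

print(round(kepler_radius_pc(418.0), 3))  # ~0.087 pc: inner edge (blue-shifted)
print(round(kepler_radius_pc(213.0), 3))  # ~0.335 pc: outer edge (red-shifted)
```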
Figure \ref{disk} shows the proposed maser disk.
The mean volume density inside the radius of $r_{\rm in} = 0.087$ pc is
\begin{eqnarray}
\rho &=& (1.3 \pm 0.6) \times 10^9 \MO\ {\rm pc}^{-3},
\end{eqnarray}
which is comparable to that of NGC 4258 [$3.4 \times 10^9 \MO$ pc$^{-3}$, \cite{miyo95}; $4.9 \times 10^{12} \MO$ pc$^{-3}$, \cite{maoz95}; both rescaled to the new distance of 7.2 Mpc determined by \citet{herr99}].
The extremely high density of IC 2560 strongly suggests that this central mass is a super-massive black hole like NGC 4258 [see detailed discussion in \citet{ishi01}].
For IC 2560, hard X-ray emission from the nucleus has been observed.
The 2--10 keV luminosity estimated by correcting for absorption was $L = 1.0^{+1.0}_{-0.4} \times 10^{41}$ erg s$^{-1}$ (\cite{ishi01}; ASCA), $L \lesssim 3 \times 10^{42}$ erg s$^{-1}$ (\cite{iwa02}; {\it Chandra}), $L \gtrsim 3 \times 10^{41}$ erg s$^{-1}$ (\cite{mad06}; {\it Chandra}), and $L = 6.3 \times 10^{41}$ erg s$^{-1}$ (\cite{tilak08}; {\it XMM-Newton}).
To see how luminous the galaxy is, we compare these X-ray luminosities with the Eddington luminosity.
The Eddington luminosity $L_{\rm E}$ is the maximum luminosity for a spherically symmetric object with a given mass.
If the object radiates a luminosity larger than $L_{\rm E}$, the radiation pressure force exceeds gravity, and thus gas cannot accrete onto the central object (black hole) but is blown away from the surface.
The Eddington luminosity for the central mass of $M = (3.5 \pm 0.5) \times 10^{6} M_{\odot}$ in IC 2560 is
\begin{eqnarray}
L_{\rm E} &=& 4 \pi \frac{GMm_{\rm H}c}{\sigma_{\rm T}} = (4.4 \pm 0.7) \times 10^{44}\ {\rm erg\ s^{-1}},
\end{eqnarray}
where $c$ is the speed of light, $m_{\rm H}$ the mass of a hydrogen atom, and $\sigma_{\rm T}$ the Thomson cross section ($\sigma_{\rm T} = 6.65 \times 10^{-25}$ cm$^{2}$).
Thus the normalized luminosity is $L / L_{\rm E} \sim 10^{-4}$--$10^{-3}$ for the above luminosities, indicating that IC 2560 is indeed a low-luminosity AGN.
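Equation (4) and the Eddington ratio can be verified numerically (our own sketch; the constants are the usual SI values, with the paper's $\sigma_{\rm T}$):

```python
import math

# Eddington luminosity L_E = 4*pi*G*M*m_H*c / sigma_T, equation (4),
# for the derived central mass.
G_SI = 6.674e-11         # gravitational constant [m^3 kg^-1 s^-2]
M_H = 1.6735e-27         # hydrogen atom mass [kg]
C_MS = 2.99792458e8      # speed of light [m/s]
SIGMA_T = 6.65e-29       # Thomson cross section [m^2] (= 6.65e-25 cm^2)
MSUN_KG = 1.989e30       # solar mass [kg]

def eddington_erg_s(mass_msun):
    m = mass_msun * MSUN_KG
    l_watt = 4.0 * math.pi * G_SI * m * M_H * C_MS / SIGMA_T
    return l_watt * 1e7  # W -> erg/s

l_e = eddington_erg_s(3.5e6)
print(f"{l_e:.1e}")            # ~4.4e44 erg/s
print(f"{1.0e41 / l_e:.1e}")   # L/L_E ~ 2e-4: a low-luminosity AGN
```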
\subsection{Jet}
A continuum component is located southwest of the rotational center of the maser disk, inclined by $\sim \timeform{20D}$ from the rotation axis.
Thus we consider the continuum component a jet ejected from the nucleus.
Jets nearly perpendicular to maser disks have been detected also at the nuclei of NGC 4258 (\cite{herr97}, 1998), NGC 1068 (\cite{galli96}), NGC 3079 (\cite{yama04}), and IC 1481 (\cite{mamyo09}).
A counter jet on the northeastern side of the maser disk was not detected, with an upper limit of 0.93 mJy beam$^{-1}$ ($3 \sigma$), which is half the peak flux density of the southwestern jet.
The absence of the counter jet may be due to attenuation by thermal absorption in a layer of ionized gas above the maser disk, if the disk is slightly inclined.
The absence of central continuum emission may also be caused by the same thermal absorption.
In this case, the maser disk would be inclined as in figure \ref{disk}, i.e., with the northeastern side nearer to the observer.
Such relation between continuum emission and a maser disk was seen in NGC 4258 (\cite{herr97}, 1998).
For IC 2560, there have been almost no other VLBI continuum observations, either at 22 GHz or at other frequencies.
More sensitive observations are needed to investigate the jets and expected continuum emission at the nucleus.
\subsection{Distance to IC 2560}
We can estimate the distance to IC 2560, assuming a circular and Keplerian rotation of the maser disk like NGC 4258.
Figure \ref{pv} shows position-velocity diagrams along the maser disk with ${\it PA} = \timeform{-46D}$.
Filled circles indicate the peak positions and velocities of the systemic and the red-shifted features detected in 2000.
Open circles indicate the peak positions and velocities of the systemic features detected in 1998.
Thick dashed lines are the result of a least-squares fitting of all the circles (i.e., both of 1998 and 2000).
Thin dashed lines show one sigma error of the fitting result.
A solid curve in (b) indicates an assumed Keplerian curve through a red-shifted feature at $(r, V_{\rm LSR}) = ($1.165 mas, 3201 km s$^{-1}$).
Intersections of the dashed lines and the Keplerian curve give the apparent distance (radius) of the systemic features from the galactic center, $r_{\rm app} = 0.46^{+0.17}_{-0.20}$ mas.
For least-squares fittings of only the open circles (1998 data) and only the filled circles (2000 data), $r_{\rm app} = 0.46^{+0.18}_{-0.22}$ mas and $r_{\rm app} = 0.33^{+0.47}_{-0.33}$ mas, respectively.
The three values of $r_{\rm app}$ are consistent with each other within the errors.
For the radius obtained from all the circles ($r_{\rm app} = 0.46^{+0.17}_{-0.20}$ mas), the corresponding rotation velocity is $V_{\rm rot(in)} = |V_{\rm LSR} - V_{\rm sys}| = 515^{+138}_{-59}$ km s$^{-1}$.
The absolute distance of the systemic features can be estimated using the velocity drift rate $dV_{||}/dt = 2.57 \pm 0.04$ km s$^{-1}$ yr$^{-1}$ (see section 3.1) as the following (e.g., \cite{ishi01}),
\begin{eqnarray}
r &=& V_{\rm rot(in)}^2 \sin i \left| \frac{dV_{||}}{dt} \right|^{-1} \nonumber \\
&=& 0.070 \pm 0.003\ {\rm pc},
\end{eqnarray}
where $i$ is the inclination angle of the rotating disk and assumed to be $i \approx \timeform{90D}$, i.e., an edge-on disk.
Equation (5) is valid under a circular rotation of the maser disk.
By comparing the apparent (measured using all the circles) and absolute radii, the distance to the host galaxy can be determined to be $D = r/r_{\rm app} = 31^{+12}_{-14}$ Mpc, which is consistent, within the errors, with the distance of 26 Mpc estimated using the infrared Tully-Fisher relation (\cite{aaron89}), although the errors are large.
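The final step is purely geometric; a minimal sketch (the helper name is ours), taking the quoted absolute radius $r = 0.070$ pc and apparent radius $r_{\rm app} = 0.46$ mas:

```python
ARCSEC_PER_RAD = 206264.806  # arcseconds per radian

# Geometric distance D = r / r_app: the absolute radius from the drift
# rate, equation (5), over the apparent angular radius from the p-v diagram.
def distance_mpc(r_pc, r_app_mas):
    theta_rad = r_app_mas * 1e-3 / ARCSEC_PER_RAD
    return r_pc / theta_rad / 1e6

print(round(distance_mpc(0.070, 0.46)))  # ~31 (Mpc)
```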
To improve accuracy of the distance and to check the Keplerian rotation, we need VLBI observations with higher sensitivity to detect more masers of both systemic and high velocity features.
\subsection{Independent Rotation of the Nuclear Maser Disk from the Galactic Disk}
The right panel of figure \ref{disk} is an optical image of IC 2560.
The inclination and position angles of the large-scale galactic disk
have been measured to be $i = \timeform{63D}$ and ${\it PA} = \timeform{45D}$, respectively,
from an {\it I}-band image (\cite{mathew92}).
A kilo-pc-scale velocity curve along ${\it PA} = \timeform{90D}$ showed that radial velocities on the west side were larger than those on the east side (\cite{schu03}); thus the large-scale disk rotates with the northeastern side approaching and the southwestern side receding.
Assuming that galactic spiral arms are trailing, the large-scale disk rotates as shown with a thin arrow in the right panel.
On the other hand, the rotation axis of the compact maser disk (${\it PA} = \timeform{-46D}$) is almost perpendicular to that of the galactic disk, as shown in the left panel of figure \ref{disk}.
This large difference between the rotation axes of the nuclear maser disk and the galactic disk of IC 2560 is reminiscent of NGC 4258, in which the position angles of the rotation axes of the galactic and maser disks differ by $\sim \timeform{110D}$ (\cite{miyo95}), indicating that the two disks rotate in opposite senses.
UGC 3789 (\cite{reid09}) also shows a large difference between the position angles of the rotation axes of the galactic and maser disks.
A detailed statistical analysis of the relation between galactic and maser disk rotation among AGN megamasers will be presented in a forthcoming paper.
\section{Summary}
H$_2$O maser emission from the nucleus of IC 2560 was observed with a single-dish telescope and VLBI.
The results are summarized as follows:
\begin{enumerate}
\item The velocity drift rates were measured to be $\bar{a} = +2.57 \pm
0.04$ km s$^{-1}$ yr$^{-1}$ and $-0.09 \pm 0.15$ km s$^{-1}$ yr$^{-1}$
for six systemic features and two red-shifted features, respectively,
and $a = +0.28 \pm 0.23$ km s$^{-1}$ yr$^{-1}$ for a blue-shifted feature.
\item Not only the systemic features but also the red- and blue-shifted
features were detected with VLBI.
The velocities for the red- and blue-shifted features are
3201 km s$^{-1}$ and 2656 km s$^{-1}$, respectively.
The velocities relative to the galactic systemic velocity were
325 km s$^{-1}$ for a red-shifted feature, and 220 km s$^{-1}$
for a blue-shifted feature.
The isotropic luminosities of the blue-shifted, red-shifted, and
systemic features were $L = 0.5 L_{\odot}$, $1L_{\odot}$,
and $45 L_{\odot}$, respectively, at the distance of 26 Mpc.
\item The systemic and red-shifted features were emitted from a nearly edge-on
disk with the position angle of ${\it PA} \approx \timeform{-46D}$.
The thickness of the maser disk is $2H \leq 0.025$ pc.
The binding mass is $(3.5 \pm 0.5) \times 10^6 \MO$.
Assuming Keplerian rotation and combining with the single-dish
spectrum, the radii are estimated to be $r = 0.087$--0.335 pc.
The mean volume density within the inner radius is $(1.3 \pm 0.6) \times
10^9 \MO$ pc$^{-3}$, strongly suggesting the existence of a super-massive
black hole.
\item The rotation axis of the maser disk of IC 2560 is nearly perpendicular
to that of the galactic disk.
\item A 22 GHz continuum component was detected southwest of the disk
center, offset by $0.199 \pm 0.012$ pc,
and is considered to be a jet ejected from the nucleus.
The position angle of the component relative to the maser disk was
${\it PA} \sim \timeform{70D}$.
The blue-shifted maser feature was located on the continuum component,
and is thus interpreted as a ``jet maser''.
\item The distance to IC 2560 is estimated to be $D = 31^{+12}_{-14}$ Mpc
from geometry of the maser disk and the velocity drift rate,
assuming the circular and Keplerian rotation of the disk.
\end{enumerate}
In our observations with the VLBA, only a limited number of high-velocity features in the maser disk were detected. We are conducting more sensitive VLBI observations of other high-velocity features, both red- and blue-shifted, as well as of the systemic features, to obtain the rotation curve without the assumption of Keplerian rotation, and thus to determine the distance of IC 2560 exactly.
\bigskip
We thank NISHIYAMA Kohta, and HIROTA Akihiko for helping with the maser observations using the Nobeyama 45-m telescope.
We also thank members of NRO for their continuous support.
\section{Introduction and motivation}
\label{sec:intro}
\nocite{AGS13}
This article is about the well-known statement: `The heat kernel is
local'. We will discuss what this means exactly and when it
holds. This statement first appeared around 1950, for example in the
works of~\cite{MinakshisundaramPleijel49}. They studied the short time
expansion of the heat kernel of the Laplace-Beltrami operator on
Riemannian manifolds. In order to go from boundaryless manifolds to
manifolds with boundary one uses the following trick. A small
neighbourhood of a point on the boundary looks like a half space where
the heat kernel is known explicitly, a small neighbourhood of a point
away from the boundary looks like a neighbourhood of a point in a
boundaryless manifold, where the expansion of the heat kernel is also
known. One can get an expansion of the heat kernel of a manifold with
boundary by combining these two situations. The fact that this
strategy works is the essence of what is meant by saying the heat
kernel is local. For that reason, the same fact is also
referred to as `the heat kernel does not feel the boundary', see \cite{Kac51}.
A first naive understanding of `the heat kernel is local' would be the
following. Given a space $M$ with some operator $D$ on it and some
nice set $U \subset M$, one could think that locality means the heat
kernel on $U$ is determined by knowing $U$ and $D|_U$. Unfortunately
this is not true. Consider a simple example, the unit interval with
the standard Laplace operator, once with Neumann boundary conditions
at both ends $\laplacianN$ and once with Dirichlet boundary conditions
$\laplacianD$. Let $U = (a, 1-a)$ for some $a>0$. One can compute both
heat kernels $p_{\laplacianN}$ and $p_{\laplacianD}$ explicitly and
$p_{\laplacianN} \restr U \neq p_{\laplacianD} \restr U$. For large times one should not expect this to be true anyway: the Neumann Laplace operator preserves energy, whereas in the Dirichlet case all heat eventually leaks out of the system.
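Both kernels can be written down by the method of images, $p^{\mathrm{N/D}}_t(x,y) = \sum_{n \in \mathbb{Z}} \big(g_t(x-y-2n) \pm g_t(x+y-2n)\big)$ with $g_t$ the Gauss kernel; the following numerical sketch (our own construction, not part of the proofs below) illustrates that the difference at an interior point is exponentially small for small $t$:

```python
import math

# Numerical illustration of the unit interval example: Neumann (+) and
# Dirichlet (-) heat kernels via the method of images,
# p_t(x,y) = sum_n [ g_t(x-y-2n) ± g_t(x+y-2n) ], g_t the Gauss kernel.
def gauss(z, t):
    return math.exp(-z * z / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def heat_kernel(x, y, t, sign, n_images=20):
    return sum(gauss(x - y - 2 * n, t) + sign * gauss(x + y - 2 * n, t)
               for n in range(-n_images, n_images + 1))

t = 0.01
diff = heat_kernel(0.5, 0.5, t, +1) - heat_kernel(0.5, 0.5, t, -1)
print(diff)  # ~1.6e-10: exponentially small, consistent with exp(-eps/t)
```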
What is true however, is that the difference between the two heat
kernels is quite small for small times $t$, namely it can be bounded
by $\e^{-\eps/t}$ for some explicit constant $\eps>0$ depending
on $a$. In particular, such a bound implies that the asymptotic
expansion for $t \to 0$ of these two heat kernels is equal on
$U$. Hence, getting a bound of this form in a quite general setting is
our primary goal.
Locality in this sense holds for Riemannian manifolds with boundary
and the Laplace-Beltrami operator. It still holds if we add lower
order terms to the operator. It is also known for quantum graphs with
the Laplacian and some suitable boundary conditions at the
vertices. Some works on orbifolds also make indirect use of it.
A slightly weaker notion of locality is proved in \cite{Hsu95} for Riemannian manifolds and in \cite{Gueneysu17} for general metric spaces. Their definition of the principle of not feeling the boundary only makes a statement about the behaviour in the $t \rightarrow 0^+$ limit. Hence our locality definition implies theirs but not vice versa.
While in the original works from the 50s the locality statement is
first rigorously proven before it is used, in some of the more modern
works it seems to be the other way round. Some asymptotic expansion of
the heat kernel is derived, and the derivation clearly makes use of some version of the locality of the heat kernel. Sometimes this locality is just referred to as a well-known fact, and sometimes not even that; little consideration is given to the question of whether it
actually holds in the given setting and how this might be proven. Here
we are going to show that locality does indeed hold in a very general
setting.
Our strategy is based on the Wiener measure. One can represent the heat
kernel at time $t$ between two points $x$ and $y$ as the Wiener
measure of the set of paths from $x$ to $y$ in time $t$. In the unit
interval example the Neumann and the Dirichlet Laplacian give rise to
Wiener measures $W^\Neu$ and $W^\Dir$. These Wiener measures satisfy
locality in the literal sense of the word, that is $W^\Neu \restr U =
W^\Dir \restr U$, where the restriction means the restriction to paths
that stay inside of $U$ during the entire time interval $[0,t]$. Our
first \Mthm{locality_wiener_measure} states that locality of the
Wiener measure holds for fairly general metric measure spaces.
Our next \Mthm{locality_heat_kernel} states that if the heat kernels
satisfy a suitable decay bound (see \Def{heat_kernel_decay}), then locality of the Wiener measure
implies locality of the heat kernel. Decay bounds for general heat kernels on metric spaces are an area of active research, see for example \cite{Sturm95, GrigoryanTelcs12, GrigoryanKajino17}.
To show that this heat kernel decay and hence locality is satisfied on
a wide class of examples, we define a class of metric spaces we call
manifold-like. They are an extension of spaces that satisfy the
measure contraction property introduced by Sturm in~\cite{Sturm98}
and~\cite{Sturm06b}. This class includes Riemannian manifolds with or
without boundary, quantum graphs, products and certain quotients of
these and a number of other spaces. On these spaces there exists a
natural Dirichlet form, and the associated operator corresponds to the
Laplace operator in the case of Riemannian manifolds. The associated
heat kernel satisfies a bound of the form $p_t(x,y) \le
C t^{-n/2}\,\e^{-d(x,y)^2/(c\cdot t)}$ for some constants $C,c>0$ and
$n \ge 0$, that is, the heat kernel decays exponentially fast with the
distance between the points. Our third
\Mthm{manifold_like_spaces} shows that locality of the heat kernel
holds for manifold-like spaces.
This paper is organised as follows. \Sec{gen.heat.kernels} defines
heat kernels and the Wiener measure in a metric space setting and
collects some facts on how the operator, the Dirichlet form, the
semigroup, the heat kernel and the Wiener measure all relate to each
other and imply each others existence. \Sec{locality_wiener_measure} contains our
\Mthms{locality_wiener_measure}{locality_heat_kernel},
locality of the Wiener measure and locality of the heat kernel on metric measure spaces.
In \Sec{the.space} we define
our class of manifold-like metric spaces and in \Sec{dir.forms} we
define Laplace-type operators through Dirichlet forms on them. We then show that these operators satisfy the conditions for locality of the Wiener measure and the heat kernel in \Mthm{manifold_like_spaces}.
Finally, \Sec{example} contains an example computation of the heat
kernel asymptotics of a two particle system on a metric graph through
decomposing the state space into simple pieces.
\section{General heat kernels}
\label{sec:gen.heat.kernels}
A heat kernel can arise in a variety of ways: one can define a heat
kernel directly, it can arise as the fundamental solution of a
suitable operator, which itself can be defined through a Dirichlet
form, it can be seen as the transition function of a Markov process,
or as the integral kernel of a semigroup. In full generality not all
of these interpretations need exist, but we are interested in
situations where they do. We will first collect some well-known facts
that hold in great generality.
To introduce a heat kernel and a Markov process we need an underlying
space $M$ with Hilbert space $\Lsqr M$. Throughout the paper we let
\emph{$(M,d)$ be a \textbf{compact} metric space} ($M$ is then automatically
complete and separable). The metric is a map $\map d {M \times
M}{[0,\infty]}$, in particular we do not assume that $M$ is
connected. Let $\mu$ be a Radon measure on $\mathcal{B}(M)$, the Borel
sets on $M$. We assume $\mu$ has full support on $M$. The triple
$(M,d,\mu)$ is called a \emph{metric measure space}.
A semigroup $P_t$ is called \emph{Markovian} if for any $f$ satisfying
$0 \le f \le 1$ $\mu$-almost everywhere we also have $0 \le P_tf \le
1$ $\mu$-almost everywhere. It is \emph{contracting} if $\norm {P_t}
\le 1$.
A closed, densely defined linear operator $D$ is called a
\emph{Dirichlet operator} if for all $f \in \dom D$ we have
$\iprod{Df}{\max \{f-1, 0\}} \ge 0$.
As usual, we use the same symbol for a \emph{symmetric form} $\qf E$
and the associated \emph{quadratic form} given by $\qf E(f):= \qf
E(f,f)$. We assume that $\qf E$ is non-negative, i.e., $\qf E(f) \ge
0$ for all $f \in \dom \qf E$. Such a form is \emph{closed} if $\dom
\qf E$ equipped with the norm given by $\normsqr[\qf E] f :=
\normsqr[\Lsqr M] f + \qf E(f)$ is complete, i.e., itself a Hilbert
space. A closed symmetric form $\qf E$ is a \emph{Dirichlet form} if
the unit contraction operates on it, i.e., if $f \in \dom \qf E$
implies $f^\# \in \dom \qf E$ and $\qf E(f) \ge \qf E(f^\#)$, where
$f^\#:= \min( \max (f, 0), 1)$ denotes the \emph{unit contraction}
of $f$.
A Dirichlet form is called \emph{regular} if $\Cont M \cap \dom \qf E$
is dense in $\Cont M$ (with respect to the supremum norm) and also
dense in $\dom \qf E$ with respect to $\norm[\qf E]\cdot$. A
Dirichlet form is called \emph{local} if $\qf E(f,g)=0$ whenever $f,g
\in \dom \qf E$ have disjoint support. If $\qf E(f,g)=0$ whenever
$f$ is constant on a neighbourhood of the support $\supp g$ of $g$,
then $\qf E$ is called \emph{strongly local}.
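As a standard illustration: the classical Dirichlet energy on a
Riemannian manifold,
\begin{align*}
  \qf E(f,g) = \int_M \langle \nabla f, \nabla g \rangle \dd\mu,
\end{align*}
is strongly local, since $\nabla f$ vanishes wherever $f$ is locally
constant. By contrast, a jump form such as
\begin{align*}
  \qf E_s(f,g) = \iint_{\mathbb{R}^n \times \mathbb{R}^n}
    \frac{(f(x)-f(y))(g(x)-g(y))}{|x-y|^{n+2s}} \dd x \dd y,
\end{align*}
the form associated with the fractional Laplacian, is a regular
Dirichlet form that is not local: disjointly supported $f$ and $g$
still interact through the double integral.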
The following is well known, see for example~\cite{Fukushima80},
\cite{Kato80},~\cite{Davies80},~\cite{FOT11},~\cite{MaRoeckner92}
or~\cite{BouleauHirsch91}.
\begin{theorem}
\label{thm:semigroup_form_operator}
Let $H$ be a Hilbert space. Then the existence of any of the
following implies the existence of the others.
\begin{itemize}
\item a linear non-negative self-adjoint and densely defined
  operator $D$
\item a closed symmetric form $\qf E$
\item a strongly continuous contraction semigroup $\{P_t\}_{t \ge
    0}$ of self-adjoint operators
\end{itemize}
If $H=\Lsqr M$ we can additionally say the following. The semigroup
satisfies the Markov property if and only if the operator is a
Dirichlet operator if and only if the symmetric form is a Dirichlet
form.
\end{theorem}
\begin{proof}
  The equivalence of operator and form is given by $\iprod {Df} g =
  \qf E(f,g) $ and $\dom \qf E = \dom \sqrt D$. The operator $D$ is
  the generator of the semigroup $P_t:= \e^{-tD}$.
\end{proof}
\begin{definition}
A family $\{p_t\}_{t>0}$ of functions $\map {p_t}{M \times M} \mathbb{R}$ is
called a \emph{heat kernel} if it satisfies the following conditions
for all $t>0$.
\begin{enumerate}
\item Measurability: $p_t(\cdot,\cdot)$ is $\mu \times \mu$ measurable
\item Markov property: $p_t(x,y) \ge 0$ for $\mu$-almost all $x,y$
and $\int_M p_t(x,y) \dd \mu(x) = 1$ for $\mu$-almost all $y$
\item Symmetry: $p_t(x,y) = p_t(y,x)$ for $\mu$-almost all $x,y$
\item Semigroup property: for any $s,t>0$ and $\mu$-almost all $x,y$
we have
\begin{align*}
p_{s+t}(x,y) = \int_M p_s(x,z)p_t(z,y) \dd\mu(z)
\end{align*}
\item Approximation of identity: for any $f \in \Lsqr M$ we have
  \begin{align*}
    \int_M p_t(x,y)f(y) \dd\mu(y) \to f(x)
    \quad \text{in } \Lsqr M \text{ as } t \to 0^+
  \end{align*}
\end{enumerate}
\end{definition}
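The model example is the Gauss--Weierstrass kernel on $\mathbb{R}^n$
(a non-compact space, so it is stated here for orientation only),
\begin{align*}
  p_t(x,y) = (4\pi t)^{-n/2} \exp\left(-\frac{|x-y|^2}{4t}\right).
\end{align*}
Measurability, positivity and symmetry are evident, the normalisation
$\int_{\mathbb{R}^n} p_t(x,y)\dd y = 1$ is the Gaussian integral, and
the semigroup property is the convolution identity for Gaussians:
convolving Gaussians of variances $2s$ and $2t$ yields one of variance
$2(s+t)$.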
\begin{remark}
\label{rem:compactification}
One can consider more general heat kernels that satisfy only the
sub-Markov property $\int_{M} p_t(x,y) \dd\mu(x) \le 1$. In this
case one usually expands the space $M$ by a cemetery point $\Delta$
and then extends the heat kernel so that it satisfies the Markov
property on this larger space. In a similar way one can generalise
the definition of a Markov process below. This construction is well
known and explained for example in~\cite{FOT11}. However, it makes
various formulations considerably more clunky and somewhat
technical. In the interest of clear exposition we restrict to the
strict Markov property here, but remark that the generalisation to
the sub-Markov case is straightforward.
\end{remark}
\begin{remark}
Heat kernels are only defined up to behaviour on a set of
$\mu$-measure zero. We will thus identify heat kernels that agree
$\mu$-almost everywhere.
If the heat kernel is continuous, this distinguishes this particular
heat kernel, so that all the $\mu$-almost everywhere statements can
be replaced by everywhere statements.
\end{remark}
\begin{definition}
\label{def:hunt-process}
A \emph{Markov process} on the set of continuous paths $\mathcal{P}(M)$
consists of a family of probability measures $\{\Proba_x\}_{x \in
M}$ on $\mathcal{P}(M)$ such that $\Proba_x\left( \omega(0)=x
\right) =1$ and a stochastic process $X_t(\omega):=\omega(t)$ on
$\mathcal{P}(M)$ with values in $M$. It additionally satisfies the
Markov property. See the appendix for more details.
If it satisfies the \emph{strong Markov property},
that is
\begin{align*}
\mathbb{P}_x(X_{\zeta+s} \in U | \mathcal{F}_{\zeta}) =
\mathbb{P}_{X_{\zeta}}(X_s \in U)
\end{align*}
holds for any stopping time function $\zeta$, then it is called a
\emph{continuous Hunt process} or a \emph{diffusion}.
\end{definition}
By convention one refers to $X_t$ as the Hunt process; the family of
probability measures is left implicit.
\begin{theorem}[\cite{Grigoryan03}]
A heat kernel $p_t$ defines a semigroup via
\begin{align*}
P_t f(x) := \int_M p_t(x,y)f(y) \dd\mu(y)
\end{align*}
This semigroup is strongly continuous, self-adjoint, contracting and
Markov.
\end{theorem}
The Markov property is not mentioned in~\cite{Grigoryan03}, but it
follows trivially from the definition.
\begin{theorem}[\cite{BlumenthalGetoor68}]
\label{thm:markov_transition}
A continuous Hunt process $X_t$ on $M$ defines a heat kernel via
\begin{align*}
\int_U p_t(x,y)\dd\mu(y) = \Proba_x(X_t \in U)
\end{align*}
The heat kernel is the density function of the transition function
$p_t(x,U):=\Proba_x(X_t \in U)$. See the appendix for details.
\end{theorem}
Two stochastic processes are called \emph{equivalent} if their
transition functions agree outside of a properly exceptional
set. Note that all properly exceptional sets have $\mu$-measure zero.
\begin{theorem}[{\cite[Thm 7.2.1 and Thm 4.2.8]{FOT11}}]
\label{Dirichlet_Hunt}
Given a regular local Dirichlet form $\qf E$ on $\Lsqr M$, there
exists a continuous Hunt process $X_t$ on $M$ whose Dirichlet form
is the given one $\qf E$.
This Hunt process is unique up to equivalence.
\end{theorem}
\begin{remark}
This theorem is by far the most difficult part in the
equivalence. The full proof goes over several dozen pages. The basic
construction is as follows. Given a family of probability measures
$p_t(x,U)$, for $x$ fixed and $t \in [0,T]$, the Kolmogorov
extension theorem guarantees the existence of a stochastic process
$X_t$ and a probability measure $\Proba_x$ such that
\begin{align*}
p_t(x,U) = \Proba_x(X_t \in U)
\end{align*}
but proving that this process has the claimed regularity is very
hard.
\end{remark}
\section{Locality of the Wiener measure and the heat kernel}
\label{sec:locality_wiener_measure}
In this section we will show that the Wiener measure is local and then that the heat
kernel is local provided it satisfies a suitable decay bound. We will first introduce the
notion of martingales and then quote a uniqueness and existence
theorem for the Wiener measure. Next, if two spaces are identical on
some subset, we can define a new measure on the set of paths of one of the spaces by using
one of the measures inside the subset and the other one outside. This
is called splicing and will be explained in further detail. One can
then show that this spliced Wiener measure is also compatible with the
operator; by uniqueness this implies that the spliced measure is
identical to the original measure. In other words, on the subset where
the spaces and operators agree, so do the Wiener measures. Combined
with a decay bound (see \Def{heat_kernel_decay}) this implies that
the heat kernel is local as well.
\subsection{Local isometries}
\label{ssec:local.isom}
Let $(M,d,\mu)$ and $(\oS M,\oS d, \oS \mu)$ be two metric measure
spaces with energy forms $\qf E$ and $\oS{\qf E}$ and associated
operators $D$ and $\oS D$, respectively.
Assume that $U \subset M$ and $\oS U \subset \oS M$ are open and that there exists a
local isometry $\map \psi U {\oS U=\psi(U)}$.
For a function $\map f M \mathbb{R}$ with
$\supp f \subset U$ we denote by $\psi_*f$ the function $f\restr U
\circ \psi^{-1}$ extended by $0$ onto $\oS M$. Note that
$\map{\psi_*}{\Lsqr U}{\Lsqr{\oS U}}$ is unitary.
\begin{definition}
\label{def:forms.ops.agree}
\indent
\begin{enumerate}
\item
\label{agree.a}
We say that $\qf E$ and $\oS{\qf E}$ \emph{agree on $U$ and $\oS
U$} if there is a measure preserving isometry $\map \psi U {\oS
U=\psi(U) \subset \oS M}$ such that for any $f \in \dom \qf E$
with $\supp f \subset U$ we have $\psi_*f \in \dom \oS{\qf E}$ and
$\qf E(f) = \oS{\qf E}(\psi_* f)$.
\item
\label{agree.b}
Similarly, we say that $D$ and $\oS D$ \emph{agree on $U$ and $\oS
U$} if there is a measure preserving isometry $\map \psi U {\oS
U=\psi(U) \subset \oS M}$ such that for any $f \in \dom D$ with
$\supp f \subset U$ we have $\psi_*f \in \dom \oS D$ and
$\psi_*(Df)=\oS D(\psi_*f)$.
\end{enumerate}
\end{definition}
\begin{lemma}
\label{lem:forms.ops.agree}
Assume that $\qf E$ and $\oS {\qf E}$ are local Dirichlet forms,
then $\qf E$ and $\oS {\qf E}$ agree on $U$ and $\oS U$ if and only
if $D$ and $\oS D$ agree on $U$ and $\oS U$.
\end{lemma}
\begin{proof}
Note first that $\qf E$ is local (i.e., $\qf E(f,g)=0$ for all $f,g
\in \dom \qf E$ with $\supp f \cap \supp g = \emptyset$) if and only
if $D$ is local (i.e., $\supp Df \subset \supp f$ for all $f \in
\dom D$).
Let $f \in \dom D$ with $\supp f \subset U$. Then there is an
open set $V$ such that $\supp f \subset V \subset \clo V \subset U$.
If $g \in \dom \qf E$ with $\supp g \subset U$, then
\begin{equation*}
\iprod[\Lsqr {\oS M}]{\psi_*(Df)} {\psi_*g}
= \iprod[\Lsqr M]{Df} g
= \qf E(f,g)
= \oS{\qf E}(\psi_*f,\psi_*g)
\end{equation*}
as $\psi_*$ is an isometry for functions with support in $U$ and
$\oS U$ (we used the locality of $D$ here) and as $\qf E$, $\oS{\qf
E}$ agree on $U$ and $\oS U$.
For $\oS g \in \dom \oS{\qf E}$ with $\supp \oS g \cap V =
\emptyset$, we have $\iprod[\Lsqr {\oS M}]{\psi_*(Df)} {\oS g}=0$
  (again by locality of $D$) and $\oS{\qf E}(\psi_*f,\oS g)=0$. Since
  all $\psi_*g$ with $g \in \dom \qf E$, $\supp g \subset U$, together
  with all $\oS g \in \dom \oS{\qf E}$ with $\supp \oS g \cap V =
  \emptyset$, span $\dom \oS{\qf E}$, we have shown that $\psi_* f \in
  \dom \oS D$ and $\oS D(\psi_*f)=\psi_*(Df)$.
The opposite implication can be seen similarly.
\end{proof}
\subsection{Martingales}
Recall that $\mathcal P(M)$ denotes the set of continuous paths on a
metric measure space $(M,d,\mu)$.
\begin{definition}
A stochastic process $\map Y {[0, \infty) \times \mathcal{P}(M)} \mathbb{R}$
is called a \emph{martingale} with respect to the family of
probability measures $\Proba=\{\Proba_x\}_{x\in M}$ and the
increasing sequence of $\sigma$-algebras $\mathcal{F}_t$ if the
following conditions are fulfilled:
\begin{enumerate}
\item Measurability: $Y(t, \cdot)$ is $\mathcal{F}_t$ measurable.
\item Right continuity: for every $\omega \in \mathcal{P}(M)$ the
map $t \mapsto Y(t, \omega)$ is right continuous.
\item Conditional constancy: for $0 \le s < t$ we have
\begin{align*}
Y(s, \cdot) = \mathbb E^{\Proba_x}[Y(t, \cdot) | \mathcal{F}_{s}]
\end{align*}
holds $\Proba_x$-almost surely.
\end{enumerate}
\end{definition}
In most of our applications the probability measures and the
$\sigma$-algebras will come from a Markov process $X_t$, with
$\mathcal{F}_t=\sigma(X_s| s\le t)$, and we will simply say that $Y$
is a martingale with respect to $X_t$.
\begin{definition}
Let $X_t$ be a Markov process on $\mathcal{P}(M)$ and let $D$ be a
non-negative self-adjoint operator on $M$. For each $f \in \dom D$,
we define a stochastic process $\map{M_f}{[0,\infty) \times
\mathcal{P}(M)} \mathbb{R}$ by setting
\begin{align*}
M_f (t, X) := f(X_t) - \int_0^t (D f)(X_s) \dd s.
\end{align*}
We say that the Markov process $X_t$ \emph{solves the martingale
problem} for $(M,D)$ if $M_f$ is a martingale with respect to $X_t$
for each $f \in \dom D$.
\end{definition}
\begin{theorem}[\cite{EthierKurtz86}]
\label{thm:uniqueness_martingales}
Let $(M,d,\mu)$ be a compact metric measure space and $D$ a
non-negative self-adjoint operator on it.
Then the continuous Hunt process $X_t$ associated to $D$ is the
unique solution of the martingale problem for $(M,D)$.
\end{theorem}
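As a classical illustration (stated for orientation only: $\mathbb{R}$
is not compact, and $D$ is here read as the generator
$\frac{1}{2}\frac{\dd^2}{\dd x^2}$ of standard Brownian motion rather
than as a non-negative operator), take $f(x) = x^2$, so that $Df
\equiv 1$ and
\begin{align*}
  M_f(t,X) = X_t^2 - \int_0^t 1 \dd s = X_t^2 - t,
\end{align*}
the familiar quadratic martingale of Brownian motion: conditional
constancy is the statement $\mathbb{E}[X_t^2 - t \,|\,
\mathcal{F}_s] = X_s^2 - s$ for $s \le t$.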
\subsection{Splicing measures}
We will follow the construction of~\cite{Stroock05} for splicing
measures on $\mathbb{R}^n$ and extend it to the more general setting of metric
measure spaces.
\begin{definition}
\label{def:stopping-time}
A measurable function $\map \zeta {\mathcal{P}(M)}{[0,\infty]}$ such
that for all $t \ge 0$, $\{\zeta \le t\} \in \mathcal{F}_t^0$, is
called a \emph{stopping time function}.
Let $\omega \in \mathcal{P}(M)$ and $U \subset M$ be open, let
\begin{align*}
\zeta_U(\omega) := \inf \{t \ge 0 | \omega(t) \notin U\}
\end{align*}
be the first exit time from $U$ of the path $\omega$. The function
$\zeta_U$ is an example of a stopping time function. All stopping
time functions we are going to use are of this form.
\end{definition}
\begin{definition}[Splicing measures on the same space]
\label{def:spliced_measures}
  Let $\oS \Proba=\{\oS \Proba_x\}_{x \in M}$ be a family of Borel
  probability measures on $\mathcal{P}(M)$ with $\oS
\Proba_x(\omega(0)=x)=1$. Let $U \subset M$ be open and let $\chi_U$
denote the characteristic function of $U$. Define a family of Borel
probability measures $\delta_{\hat\omega} \otimes_t \oS \Proba$ on
$\mathcal{P}(M)$ indexed by $t \in [0,\infty)$ and $\hat\omega \in
\mathcal{P}(M)$ by setting
\begin{align*}
\left( \delta_{\hat\omega} \otimes_t \oS \Proba\right)
\left(\omega(s) \in U \right)
:= \begin{cases}
\chi_U(\hat\omega(s)) & s < t \\
\oS \Proba_{\hat\omega(t)}(\omega(s-t) \in U) & s \ge t
\end{cases}
\end{align*}
Next, given another family of Borel probability measures $\Proba$ on
$\mathcal{P}(M)$ and a stopping time function $\map \zeta
{\mathcal{P}(M)} {[0,\infty]}$ define a new family of spliced
measures $\Proba \otimes_{\zeta} \oS \Proba$ by setting
\begin{align*}
& (\Proba \otimes_{\zeta} \oS \Proba)_x(\omega(t) \in U) \\
:=& \int_{\{\hat\omega \in \mathcal{P}(M) |\zeta(\hat\omega) < \infty\}}
\delta_{\hat\omega} \otimes_{\zeta(\hat\omega)}
\oS \Proba(\omega(t) \in U) \dd\Proba_x(\hat\omega)
+ \Proba_x(\omega(t) \in U | \{ \zeta(\omega) = \infty \}).
\end{align*}
\end{definition}
This can be interpreted as follows. Each path $\omega$ is measured
with $\Proba$ until time $\zeta(\omega)$. After time $\zeta(\omega)$
it is measured by $\oS \Proba$ shifted back in time by
$\zeta(\omega)$.
\begin{remark}
This spliced measure $\Proba \otimes_{\zeta} \oS \Proba$ is also
completely determined by stating that $\Proba \otimes_{\zeta} \oS
\Proba|_{\mathcal{F}_{\zeta}}= \Proba|_{\mathcal{F}_{\zeta}}$ and
that the conditional distribution of shifted paths $\omega(\tau)
\mapsto \omega(\tau+\zeta(\omega))$ under $\Proba \otimes_{\zeta}
\oS \Proba$ with $\mathcal{F}_{\zeta}$ given, is just $\oS
\Proba_{\omega(\zeta)}$.
\end{remark}
\begin{remark}
Note that if the stopping time function is of the form $\zeta_U$
defined above, paths that leave $U$ but reenter it at a later point
would be measured with $\oS \Proba$ upon reentering. Hence the
spliced measure is not just using one measure inside the set $U$ and
the other one outside of it.
\end{remark}
\begin{definition}[Splicing measures on different spaces]
Let $(M,d,\mu)$ and $(\oS{M},\oS{d},\oS{\mu})$ be two metric measure spaces. Assume there
exists an open set $U \subset M$ and a measure preserving isometry
$\map \psi U {\psi(U)} \subset \oS{M}$. Let $\Proba$ and $\oS
\Proba$ be two families of Borel probability measures on
$\mathcal{P}(M)$ and
$\mathcal{P}(\oS{M})$.
For $A \subset U$ and $x \in U$ we let
\begin{align*}
\Proba^U_x(\omega(t) \in A) := \oS \Proba_{\psi(x)}(\psi(\omega(t))
\in \psi(A))
\end{align*}
This defines a family of Borel measures on $\mathcal{P}(U)$.
\end{definition}
We can now define the spliced measure $\Proba^U \otimes_{\zeta_U}
\Proba$ which is a family of Borel probability measures on
$\mathcal{P}(M)$ as in \Def{spliced_measures}.
\subsection{Locality of the Wiener measure}
For $a,b \in \mathbb{R}$ let $a \wedge b:= \min(a,b)$.
\begin{theorem}[{Doob's time stopping theorem~\cite{Stroock11}}]
\label{thm:doob_time_stopping}
  If $Y(t,\omega)$ is a martingale with respect to a Markov process
  $X_t$, then for any stopping time function $\zeta$, $Y(t \wedge
  \zeta(\omega),\omega)$ is also a martingale with respect to the
  Markov process $X_t$.
\end{theorem}
\begin{lemma}[{\cite{StroockVaradhan79}}]
\label{lem:glueing_martingales}
Let $\zeta$ be a stopping time and $X_t$ a Markov process. Recall
that $\mathcal{F}_t^0=\sigma(X_s| s\le t)$ and let
$\{Q_{\hat\omega}\}_{\hat\omega \in \mathcal{P}(M)}$ be the
conditional probability distribution of $\Proba$ with respect to
$\mathcal{F}_t^0$ (see \Def{cond-proba}). Let $\map M {[0,\infty)
\times \mathcal{P}(M)} \mathbb{R}$ be a stochastic process. Assume $M$ is
$\Proba$-integrable, $M(t, \cdot)$ is $\mathcal{F}_t^0$-measurable
and $M(\cdot, \omega)$ is continuous. Then the following two
statements are equivalent:
\begin{enumiii}
\item $M(t,\omega)$ is a martingale with respect to $X_t$.
\item $M(t \wedge \zeta(\omega), \omega)$ is a martingale with
respect to $X_t$ and $M(t,\omega)-M(t \wedge
\zeta(\omega),\omega)$ is a martingale with respect to the
measures $Q_{\hat\omega}$ and the $\sigma$-algebra
$\mathcal{F}_t^0$ for all $\hat\omega \in \mathcal{P}(M)$ outside
of a $\Proba$-null-set.
\end{enumiii}
\end{lemma}
Recall that two (local) operators $D$ and $\oS D$ agree on some
subsets if there is a measure preserving local isometry intertwining
$D$ and $\oS D$ (see \Defenum{forms.ops.agree}{agree.b}).
We now formulate our first main theorem:
\begin{maintheorem}[Locality of the Wiener measure]
\label{mthm:locality_wiener_measure}
Let $(M,d,\mu)$ and $(\oS{M},\oS{d},\oS{\mu})$ be two metric measure spaces with non-negative
self-adjoint operators $D$ and $\oS D$.
Let $\Proba$ and $\oS \Proba$ be the associated Wiener measures.
Assume that $D$ and $\oS D$ agree on some open subsets $U \subset M$
and $\oS U \subset \oS M$, then
\begin{align*}
\Proba = \Proba^U \otimes_{\zeta_U} \Proba.
\end{align*}
In words, the spliced measure that uses the Wiener measure from $\oS
M$ until the first exit from $U$ and the Wiener measure from $M$
after the first exit time is identical to the original Wiener
measure on $M$.
\end{maintheorem}
\begin{corollary}
\label{cor:main1}
Under the assumptions of the previous theorem we have
\begin{align*}
\Proba\restr{\mathcal{F}(U)} = \oS \Proba \restr{\mathcal{F}(\oS U)},
\end{align*}
i.e., when restricted to paths that stay inside $U$, the two Wiener
measures are identical.
\end{corollary}
\begin{proof}[Proof of \Mthm{locality_wiener_measure}]
This proof is a generalization of a proof in~\cite{Stroock11} where Stroock
shows the above theorem for $\mathbb{R}^n$ instead of metric spaces.
We are going to show that the Markov process
$X_t(\omega):=\omega(t)$ with the family of measures $\Proba^U
\otimes_{\zeta_U} \Proba$ solves the martingale problem for
$D$. Then uniqueness of the solution (\Thm{uniqueness_martingales})
shows the equality of the measures. Thus we need to check that for
all $f \in \dom D$ the map
\begin{align*}
M_f (t, \omega) = f(\omega(t)) - \int_0^t (D f)(\omega(s)) \dd s
\end{align*}
is a martingale with respect to $\Proba^U \otimes_{\zeta_U}
\Proba$.
We let $\oS f = f \circ \psi^{-1}$ and define
$\oS M_{\oS f}$ analogously to $M_f$, hence
$\oS M_{\oS f}(t,\oS \omega)$ is a
martingale with respect to $\oS{X}_t$. Through the isometry
$\psi$ we get $M_f(t \wedge \zeta_U(\omega), \omega)=
\oS M_{\oS f}(t \wedge
\zeta_{\psi(U)}(\oS \omega), \oS \omega)$ which
is a martingale with respect to $\oS{X}_t$ by Doob's time
stopping \Thm{doob_time_stopping}.
Note that up to time $\zeta_U(\omega)$ the measures
$\oS \Proba$ and $\Proba^U \otimes_{\zeta_U}
\Proba$ are identical, so this implies that $M_f(t \wedge
\zeta_U(\omega), \omega)$ is a martingale with respect to
$\Proba^U \otimes_{\zeta_U} \Proba$ as well.
$M_f(t,\omega) - M_f(t \wedge \zeta_U(\omega), \omega)$ is just the
function $M_f(t,\omega)$ starting at time $\zeta_U(\omega)$.
Hence $M_f(t,\omega) - M_f(t \wedge
\zeta_U(\omega), \omega)$ is a martingale for a shifted version of
some $\overline{\Proba}$ for $t \ge 0$ if and only if
$M_f(t,\omega)$ is a martingale for $\overline{\Proba}$ for $t
\ge \zeta_U(\omega)$. Here being a martingale for $t \ge
\zeta_U(\omega)$ means that the conditional constancy property only
holds for these $t$ and not for $t \ge 0$ as in the original
definition. For $t \ge \zeta_U(\omega)$, the measure
$\delta_\omega \otimes_{\zeta_U(\omega)}\Proba$ is the time
shifted version of $\Proba$. Thus $M_f(t,\omega) - M_f(t \wedge
\zeta_U(\omega), \omega)$ is a martingale with respect to
$\delta_\omega \otimes_{\zeta_U(\omega)}\Proba$ if and only if
$M_f(t,\omega)$ is a martingale with respect to $\Proba$. The
latter is true by assumption.
Next we have that $\delta_\omega
\otimes_{\zeta_U(\omega)}\Proba$ is the conditional probability
distribution of $\Proba^U \otimes_{\zeta_U} \Proba$ with
respect to $\mathcal{F}_{\zeta_U(\omega)}$, which by definition
means
\begin{align*}
(\Proba^U \otimes_{\zeta_U} \Proba)(\omega(s) \in A \cap
B)
= \int_A (\delta_\omega \otimes_{\zeta_U(\omega)}\Proba)(\omega(s) \in B)
\dd\Proba(\omega)
\end{align*}
for $A \in \mathcal{F}_t$ and $B \in \mathcal{F}$. This is just the
definition of the spliced measure \Def{spliced_measures}.
Hence we can apply \Lem{glueing_martingales} and conclude that
$M_f$ is a martingale with respect to the measures $\Proba^U
\otimes_{\zeta_U} \Proba$.
\end{proof}
\subsection{Locality of the heat kernel}
\label{sec:locality.heat.kernel}
Here we are going to prove that exponential decay of the heat kernel together with
the locality of the Wiener measure imply locality of the heat
kernel.
\begin{definition}
\label{def:heat_kernel_decay}
Let $(M,d,\mu)$ be a metric measure space. Let $\qf E$ be a strongly
  local regular Dirichlet form on it. Then we say the heat kernel
  satisfies an \emph{exponential decay bound} if there exist constants
  $C,c>0$, $n \ge 0$ and $T>0$ such that
  \begin{align*}
    p_t(x,y) \le C t^{-n/2} \exp\left(-\frac{d^2(x,y)}{ct}\right)
  \end{align*}
  holds for all $x,y \in M$ and all $t \in (0,T)$.
\end{definition}
In most applications $n$ is the dimension of $M$.
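For orientation, the Gauss--Weierstrass kernel
$p_t(x,y) = (4\pi t)^{-n/2}\exp\left(-|x-y|^2/(4t)\right)$ on
$\mathbb{R}^n$ satisfies this bound with equality, so that one may
take $C = (4\pi)^{-n/2}$ and $c = 4$.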
\begin{theorem}[Existence of conditional Wiener
measure~\cite{BaerPfaeffle11}]
\label{thm:bp11}
Let $(M,d,\mu)$ be a metric measure space. Assume the heat kernel
$p$ satisfies an exponential decay bound as in \Def{heat_kernel_decay}.
Then there exists a unique point-to-point Wiener measure
$\Proba_x^y$ on the set $\Contsymb_x^y([0,T],M)$, that is the set of
continuous paths with start point $x$ and end point $y$ at time $T$.
Moreover, it satisfies
\begin{align*}
\Proba_x^y(\omega(t) \in U) = \int_U p_t(z,y)p_{T-t}(x,z)
\dd\mu(z)
\end{align*}
for $t \in (0,T)$ and $U \in \mathcal{B}(M)$. It is compatible with
the Wiener measure $\Proba_x$ in the sense that
\begin{align*}
\int_{\Contsymb_{x}([0,t],M)} f(\omega) \dd\Proba_{x}(\omega) =
\int_{M} \int_{\Contsymb_{x}^{y}([0,t],M)} f(\omega)
\dd\Proba_{x}^{y}(\omega) \dd\mu(y)
\end{align*}
for any function $\map f {\Contsymb_{x}([0,t],M)} \mathbb{R}$ that is
integrable with respect to $\Proba_{x}$.
\end{theorem}
\begin{remark}
Note that the definition of $\Proba_x^y$ immediately implies
\begin{align*}
\Proba_x^y(\Contsymb_x^y([0,t],M)) = p_t(x,y).
\end{align*}
\Thm{bp11} cited above from~\cite{BaerPfaeffle11} requires a decay
bound of the heat kernel that is much weaker than the one we use
here.
\end{remark}
\begin{definition}
  For $U \subseteq M$ open and $p_t$ a heat kernel on $M$, let
\begin{align*}
p_t^U(x,y):= \Proba_x^y(\Contsymb_x^y([0,t],U)).
\end{align*}
This means $p_t^U$ kills off all paths that leave the set $U$ and
corresponds to the heat kernel on $U$ with Dirichlet boundary
conditions. In particular $p_t^U(x,y) \le p_t(x,y)$.
\end{definition}
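On the unit interval with $U = (0,1)$ the kernel $p_t^U$ can be
written down explicitly by the method of images (a standard
computation, included for illustration): with the free kernel
$q_t(x,y) := (4\pi t)^{-1/2}\,\e^{-(x-y)^2/(4t)}$ one has
\begin{align*}
  p_t^{(0,1)}(x,y) = \sum_{k \in \mathbb{Z}}
    \bigl( q_t(x, y+2k) - q_t(x, -y+2k) \bigr),
\end{align*}
which vanishes whenever $x$ or $y$ lies on the boundary, consistent
with the killing of all paths that leave $U$.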
\begin{proposition}
Let $(M,d,\mu)$ and $(\oS M, \oS d, \oS \mu)$ be two metric measure
spaces with non-negative self-adjoint operators $D$ and $\oS D$ and let $p_t$ and $\oS p_t$ be the associated heat kernels.
Assume $D$ and $\oS D$ agree on some
open subsets $U$ and $\oS U$ via a measure preserving isometry $\map
\psi U {\oS U}$.
Then
\begin{align*}
    p_t^U(x,y) = \oS p_t^{\,\psi(U)}(\psi(x),\psi(y)).
\end{align*}
\end{proposition}
\begin{proof}
Using \Thm{bp11} this is exactly the statement of \Cor{main1}.
\end{proof}
The following lemma is based on an argument of~\cite{Hsu95}. The authors are
indebted to Batu G\"uneysu for providing us with this reference and
useful comments on the proof.
\begin{lemma}[{\cite{Hsu95}}]
\label{lem:heat_kernel_decomposition}
Let $\zeta := \zeta_U$ be the time stopping function for the first
exit time from the open set $U \subseteq M$. Let $x,y \in U$. Then
the following decomposition of the heat kernel
\begin{align*}
p_t(x,y)
= p_t^U(x,y) + \int_{\{ \omega \in \mathcal{P}_x(M) | \zeta(\omega) \le t\}}
p_{t- \zeta(\omega)}(\omega(\zeta(\omega)), y) \dd\mathbb{P}_x(\omega)
\end{align*}
holds for $\mu$-almost all $x,y \in U$.
\end{lemma}
This can be interpreted as follows. The set of all paths from $x$ to
$y$ in time $t$ is decomposed into the set of paths that stay inside
$U$ and those that do not. The paths that leave the set $U$ can be
represented as an integral using the time $\zeta(\omega)$ and place
$\omega(\zeta(\omega))$ where they leave the set $U$ for the first
time.
If the heat kernel is continuous as a function of $x$ and $y$ one can replace the $\mu$-almost all $x,y \in U$ by all $x,y \in U$.
\begin{proof}
Let $f \in \Cont M$ be such that $\supp f \subset U$. Then we have
\begin{align*}
& \int_U f(y) p_t(x,y) \dd \mu(y)\\
=& \int_{\mathcal{P}_x(M)} f(\omega(t)) \dd\mathbb{P}_x(\omega) \\
=& \int_{\zeta > t} f(\omega(t)) \dd\mathbb{P}_x(\omega)
+ \int_{\zeta \le t} f(\omega(t)) \dd\mathbb{P}_x(\omega) \\
=& \int_U f(y) p_t^U(x,y) \dd\mu(y)
+ \mathbb{E}^x[ f(\omega(t))\chi_{\zeta \le t}(\omega) ]
\end{align*}
by the definition of $p^U$. Here $\chi_{\zeta \le t}$ denotes the
characteristic function of the set $\{\zeta \le t\}$ in
$\mathcal{P}_x(M)$.
Using the substitution $\tau:=t-\zeta$ we can write
\begin{align*}
& \mathbb{E}^x[ f(\omega(t))\chi_{\zeta \le t}(\omega) ] \\
=& \mathbb{E}^x[ f(\omega(\tau+\zeta))\chi_{\tau \ge 0}(\omega) ] \\
    =& \mathbb{E}^x\left[\chi_{\tau\ge 0}(\omega)
       \mathbb{E}^{\mathcal{F}_{\zeta}}\left[f(\omega(\tau+\zeta))\right]\right] \\
    =& \mathbb{E}^x\left[\chi_{\tau\ge 0}(\omega) \mathbb{E}^{\zeta}\left[f(\omega(\tau))\right]\right] \\
    =& \int_{\mathcal{P}_x(M)}\chi_{\tau\ge 0}(\omega)
       \mathbb{E}^{\zeta}[f(\omega(\tau))] \dd\mathbb{P}_x(\omega)
\end{align*}
  where we first used the fact that the conditional expectation
  $\mathbb{E}^{\mathcal{F}_{\zeta}}$ is just the identity projection
in this case and then applied the strong Markov property (see
\Def{strong-markov}).
We have
  \begin{align*}
    & \int_{\mathcal{P}_x(M)} \mathbb{E}^{\zeta}[f(\omega(\tau))]
      \chi_{\tau \ge 0}(\omega) \dd \mathbb{P}_x(\omega) \\
    =& \int_{\mathcal{P}_x(M)} \mathbb{E}^{\zeta}[f(\omega(t-\zeta))]
       \chi_{\zeta \le t}(\omega) \dd\mathbb{P}_x(\omega) \\
    =& \int_{\mathcal{P}_x(M)} \int_{\mathcal{P}_x(M)}
       f(\tilde{\omega}(t-\zeta)) \dd\mathbb{P}_{\omega(\zeta)}(\tilde{\omega})
       \chi_{\zeta \le t}(\omega) \dd\mathbb{P}_x(\omega) \\
    =& \int_{\mathcal{P}_x(M)} \int_M p_{t-\zeta}(\omega(\zeta),y)f(y)\dd\mu(y)
       \chi_{\zeta \le t}(\omega) \dd\mathbb{P}_x(\omega) \\
    =&\int_U \int_{\{ \omega \in \mathcal{P}_x(M) | \zeta(\omega) \le t\}}
      p_{t- \zeta}(\omega(\zeta), y) \dd\mathbb{P}_x(\omega)f(y) \dd\mu(y)
  \end{align*}
where we used Fubini's theorem in the last step.
This holds for all $f \in \Cont M$ with $\supp f \subset U$, hence
the claimed identity holds for $\mu$-almost all $y \in U$.
\end{proof}
\begin{lemma}
\label{lem:nonlocal_wiener_measure_bound}
Let $(M,d,\mu)$ be a metric measure space. Let $p_t$ be a heat kernel
on it. Assume the heat kernel satisfies a decay bound as in
\Def{heat_kernel_decay}. Let $U \subset M$ be open.
Then for $\mu$-almost all $x,y \in U$ we have
\begin{align*}
\Proba_x^{y}\left(\Contsymb^{y}_x\left([0,t],M\right)
\setminus \Contsymb^{y}_x\left([0,t],U\right)\right)
< C t^{-\frac{n}{2}} \e^{-\rho^2/(ct)}
\end{align*}
where
\begin{align*}
\rho:= \min\{d(x,\bd U),\, d(y,\bd U)\}
\end{align*}
is the smaller of the two distances of $x$ and $y$ from the boundary of $U$, and
$C,c> 0$ are constants that can be chosen uniformly in $x,y$ as long as $\rho$ stays bounded away from zero.
\end{lemma}
This is a bound on the set of paths from $x$ to $y$ in time $t$ that
leave the set $U$. If the set $U$ is geodesically convex, these paths
are longer than the distance realizing paths.
\begin{proof}
We have
\begin{align*}
& \Proba_x^{y}\left(\Contsymb^{y}_x\left([0,t],M\right)
\setminus \Contsymb^{y}_x\left([0,t],U\right)\right) \\
=& p_t(x,y) - p_t^U(x,y) \\
=& \int_{\{ \omega \in \mathcal{P}_x(M) | \zeta(\omega) \le t\}}
p_{t- \zeta}(\omega(\zeta(\omega)), y) \dd\mathbb{P}_x(\omega)\\
\le& C\int_{\{ \omega \in \mathcal{P}_x(M) | \zeta(\omega) \le t\}}
(t- \zeta(\omega))^{-\frac{n}{2}} \e^{-d(\omega(\zeta(\omega)), y)^2/(c(t-\zeta(\omega)))} \dd\mathbb{P}_x(\omega)\\
\le& C\int_{\{ \omega \in \mathcal{P}_x(M) | \zeta(\omega) \le t\}}
(t- \zeta(\omega))^{-\frac{n}{2}} \e^{-d(\partial U, y)^2/(c(t-\zeta(\omega)))} \dd\mathbb{P}_x(\omega)
\end{align*}
where we used the heat kernel decomposition from
\Lem{heat_kernel_decomposition} and then the heat kernel decay bound.
Let $f(t) := t^{-n/2}\e^{-\alpha/t}$ with $\alpha>0$ and let $T>0$ be fixed. Then for any $0 < s < t < T$ we have
\begin{align*}
f(s) < \frac{f_{\max}}{f(T)} f(t)
\end{align*}
where $f_{\max}$ denotes the maximum value of $f$ (attained at the unique point $t=2\alpha/n$). Indeed, $f$ is increasing on $(0,2\alpha/n]$ and decreasing on $[2\alpha/n,\infty)$: if $t \le 2\alpha/n$ then $f(s)<f(t)\le \frac{f_{\max}}{f(T)}f(t)$, and if $t > 2\alpha/n$ then $f(s)\le f_{\max} = \frac{f_{\max}}{f(T)}f(T) < \frac{f_{\max}}{f(T)}f(t)$.
Plugging in this estimate with $s= t-\zeta(\omega)$ we get
\begin{align*}
& \Proba_x^{y}\left(\Contsymb^{y}_x\left([0,t],M\right)
\setminus \Contsymb^{y}_x\left([0,t],U\right)\right) \\
\le & C \frac{f_{\max}}{f(T)} t^{-\frac{n}{2}} \e^{-d(\partial U,y)^2/(ct)} \mathbb{P}_x(\zeta \le t) \\
\le & C' t^{-\frac{n}{2}} \e^{-d(\partial U,y)^2/(ct)}.
\end{align*}
The constant $f_{\max}/f(T)$ depends on $y$, but if one restricts to values of $y$ for which $d(\partial U, y)$ is bounded away from zero, one can pick a universal constant $C'$ that works for all such $y$.
By symmetry we can get the same estimate with $d(\partial U,x)$.
\end{proof}
Recall again that two (local) operators $D$ and $\oS D$ agree on some
subsets if there is a measure preserving local isometry intertwining
$D$ and $\oS D$ (see \Defenum{forms.ops.agree}{agree.b}).
We now state our second main result:
\begin{maintheorem}[Locality of the heat kernel]
\label{mthm:locality_heat_kernel}
Let $(M,d,\mu)$ and $(\oS M, \oS d, \oS \mu)$ be two metric measure
spaces with non-negative self-adjoint operators $D$ and $\oS D$.
Assume $D$ and $\oS D$ agree on some open subsets $U$ and $\oS U$
via a measure preserving isometry $\map \psi U {\oS U}$. Assume in
addition that the associated heat kernels $p_t$ and $\oS p_t$ each
satisfy an exponential decay bound as stated in \Def{heat_kernel_decay}.
Let $V$ be open with $\clo V \subset U$. Then
\begin{align*}
\abs{p_t(x,y) - \oS p_t(\psi(x), \psi(y))}
\le C\e^{- \eps/t}
\end{align*}
for $\mu$-almost all $x,y \in V$ and all $t\in (0,T]$, where the constants $C$
and $\eps$ depend only on $U,V$ and $T$, but not on $x$, $y$ or $t$.
\end{maintheorem}
\begin{proof}
We can write the heat kernel $p_t(x,y)$ with the help of the Wiener
measure and separate the set of paths from $x$ to $y$ into the local
part that stays in $U$ and the part that leaves $U$ as in
\Lem{heat_kernel_decomposition}. As $\psi$ is an isometry, the set
of local paths in $U$ is the same as the local paths in $\oS{U}$. By
\Mthm{locality_wiener_measure} the Wiener measures on these sets are
also identical. Hence the heat kernels $p_t$ and $\oS{p}_t$ differ
only by the Wiener measures of the non-local paths. Let $\bar \rho:=
d(\clo V, \bd U)$. Then $\bar\rho>0$ because we assumed that
the closure of $V$ is contained in the open set $U$, and $\bar \rho$ is the
infimum of the $(x,y)$-dependent $\rho$ from
\Lem{nonlocal_wiener_measure_bound} taken over all $x,y \in
V$. Hence applying \Lem{nonlocal_wiener_measure_bound} we get the
estimate
\begin{align*}
\abs{p_t(x,y) - \oS p_t(\psi(x), \psi(y))}
\le 2Ct^{-\frac{n}{2}}\e^{- \bar{\rho}^2/(ct)}
\end{align*}
One can remove the $t^{-n/2}$ prefactor by using the following elementary
estimate. For all $\alpha >0$ and all $0 < b <a$ there exists a
$C>0$ such that
\begin{align*}
t^{-\alpha}\e^{-a/t} < C \e^{-b/t}
\end{align*}
holds for all $t>0$.
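This estimate can be verified directly (a one-line computation, assuming $a>b$): write $t^{-\alpha}\e^{-a/t}=\bigl(t^{-\alpha}\e^{-(a-b)/t}\bigr)\e^{-b/t}$ and maximize the first factor over $t>0$; the maximum is attained at $t=(a-b)/\alpha$, which gives
\begin{align*}
t^{-\alpha}\e^{-a/t}
\le \Bigl(\frac{\alpha}{\e(a-b)}\Bigr)^{\alpha} \e^{-b/t}
\qquad \text{for all } t>0.
\end{align*}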
\end{proof}
\begin{corollary}
\label{cor:main2}
Under the assumptions of \Mthm{locality_heat_kernel}, the local heat
trace contributions of $p_t$ and $\oS p_t$ over $V$ agree up to an
exponentially small error, i.e.,
\begin{align*}
\Bigabs{
\int_V p_t(x,x) \dd \mu(x)
- \int_{\psi(V)}\oS p_t(\oS x,\oS x)\dd \oS \mu(\oS x)}
\le \tilde{C}\e^{-\eps / t}
\end{align*}
for all $t \in (0,T]$. In particular, the asymptotic expansions of the
two integrals as $t \to 0^+$ coincide to all polynomial orders.
\end{corollary}
\begin{remark}
We believe that \Mthm{locality_heat_kernel} is sharp in the sense
that some exponential decay bound on the heat kernel is needed for
locality of the heat kernel to hold. Grigoryan~\cite{Grigoryan03}
considers heat kernel estimates for very general metric spaces. He
shows that some fractals satisfy heat kernel estimates of the form
\begin{equation*}
p_t(x,y)
\le Ct^{-c_1}\exp\Bigl(-\frac{d(x,y)^{c_2}}{t^{c_3}}\Bigr)
\end{equation*}
for some suitable constants. One can probably extend
\Lem{nonlocal_wiener_measure_bound} and \Mthm{locality_heat_kernel}
to this setting.
However, he also shows that the heat kernel for the
operator $(-\partial_x^2-\partial_y^2)^{\frac{1}{2}}$ on subsets of
$\mathbb{R}^2$ with reasonably nice boundary satisfies a non-exponential
decay bound of the form $1/C(t^2+td(x,y)) \le p_t(x,y) \le C/(t^2+td(x,y))$ and one can easily show that these heat kernels do not satisfy locality in the sense of
\Mthm{locality_heat_kernel}.
\end{remark}
\section{Manifold-like spaces}
\label{sec:the.space}
In this section, we will define manifold-like spaces. They provide a rich class of examples where the conditions for locality can be explicitly checked and proven. We start with
metric measure spaces which satisfy the measure contraction property
(MCP), a concept first introduced in~\cite{Sturm98}. Then we define a
manifold-like space as a quotient of an MCP space with only a finite
number of points in each equivalence class being identified (and some
other conditions).
\subsection{The measure contraction property}
We need a few more notions from metric geometry. For details we refer
to the book~\cite{BBI01}. Let $(M,d)$ be a metric space and $\gamma$
a path in $M$, i.e., a continuous map $\map \gamma {[a,b]} M$ with
$a<b$. For a finite number of points $T:=\{t_0,\dots,t_N\}$ with
$t_0=a<t_1<\dots<t_N=b$ let
\begin{equation*}
L_d(\gamma,T):=\sum_{j=1}^N d(\gamma(t_{j-1}),\gamma(t_j)).
\end{equation*}
The \emph{length} $L_d(\gamma)$ of $\gamma$ is defined as the supremum
of $L_d(\gamma,T)$ over all partitions $T$ of $[a,b]$. The path $\gamma$
is called \emph{rectifiable} if $L_d(\gamma)$ is finite.
For a
subset $M_0 \subseteq M$ we define the \emph{intrinsic metric
$d_{M_0}(x,y)$ of $M_0$ in $M$} as the infimum of $L_d(\gamma)$ over
all rectifiable paths $\gamma$ from $x$ to $y$ which stay entirely in
$M_0$. We say that $M_0$ is \emph{geodesically complete} if for all
points $x,y \in M_0$, the intrinsic metric $d_{M_0}(x,y)$ is achieved
by a shortest path $\gamma$ joining $x$ and $y$ in $M_0$, i.e., if
$d_{M_0}(x,y)=L_d(\gamma)$. We say that $M_0$ is \emph{(geodesically)
convex in $M$} if $M_0$ is geodesically complete and if $d_{M_0} =d
\restr{M_0 \times M_0}$, i.e., all pairs of points $(x,y) \in M_0
\times M_0$ are joined by a geodesic $\gamma$ in $M_0$ with length
given by the original metric $d$, i.e., with $L_d(\gamma)=d(x,y)$. If
$M$ is geodesically convex in itself, we say $M$ is a \emph{geodesic
space}. We say that $M_0$ is \emph{(geodesically) strictly convex
(in $M$)} if the geodesic joining any pair of points is unique.
Let $B_r(x):=\set{y \in M}{d(x,y) \le r} \subset M$ denote the
(closed) ball of radius $r$ around $x$ and let $B_r^*(x)$ denote the
ball without the point $x$. Let $\LipCont M$ denote the Lipschitz
continuous functions on $M$. For $t \in (0,1)$, a point $z$ is
\emph{$t$-intermediate} between $x$ and $y$ if $d(x,z)=td(x,y)$ and
$d(y,z)=(1-t)d(x,y)$. If $M$ is geodesically strictly convex, the
$t$-intermediate point between $x$ and $y$ is unique but in general
there can be multiple $t$-intermediate points between $x$ and $y$.
For $N=1$ set $\zeta_{K,1}^{(t)}(\theta)=t$. For $N>1$ and $K<0$ define
\begin{equation*}
\zeta_{K,N}^{(t)}(\theta)
:= t \left(\frac {\sinh(t\theta\sqrt{-K/(N-1)})}
{\sinh(\theta\sqrt{-K/(N-1)})}
\right)^{N-1}
\end{equation*}
This function defines a reference constant which represents the ratio
of volumes of the radius $t\theta$ ball to the radius $\theta$ ball in
the constant curvature $K$ space of dimension $N$. One can make
suitable adjustments for $K=0$ or $K>0$.
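For orientation, consider the flat case: letting $K \to 0^-$ in the
expression above and using $\sinh(x) \approx x$ for small $x$ gives
\begin{align*}
\zeta_{0,N}^{(t)}(\theta) = t \cdot t^{N-1} = t^N,
\end{align*}
which is exactly the ratio of Lebesgue volumes of concentric balls of
radii $t\theta$ and $\theta$ in $\mathbb{R}^N$.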
\begin{definition}
A \emph{Markov kernel} from $(\Omega_1, \mathcal{F}_1)$ to
$(\Omega_2, \mathcal{F}_2)$ with $\Omega_i$ being measurable spaces
and $\mathcal{F}_i$ the $\sigma$-algebras of measurable sets, is a
map $P$ that associates to each $x \in \Omega_1$ a probability
measure $P(x,\cdot)$ on $\Omega_2$ such that for any $B \in
\mathcal{F}_2$ the map $x \mapsto P(x,B)$ is
$\mathcal{F}_1$-measurable.
\end{definition}
\begin{definition}[\cite{Sturm06b}]
Let $N \ge 1$ and $K \in \mathbb{R}$. A metric measure space $(M,d,\mu)$
satisfies the \emph{$(K,N)$ measure contraction property} or
\emph{$(K,N)$-MCP} for short if for every $t \in (0,1)$ there exists
a Markov kernel $P_t$ from $M \times M$ to $M$ such that
\begin{enumerate}
\item $P_t(x,y;dz)= \delta_{\gamma_t(x,y)}(dz)$ with $\gamma_t(x,y)$
a $t$-intermediate point between $x$ and $y$ holds for
$\mu^2$-almost all $(x,y) \in M \times M$.
\item for $\mu$-almost every $x \in M$ and every measurable $B
\subseteq M$ we have
\begin{align*}
\int_M \zeta_{K,N}^{(t)}(d(x,y)) P_t(x,y;B) \dd\mu(y) \le \mu(B) \\
\int_M \zeta_{K,N}^{(1-t)}(d(x,y)) P_t(y,x;B) \dd\mu(y) \le \mu(B)
\end{align*}
\end{enumerate}
As written, this definition implies that $M$ is connected. By a slight abuse of notation we will also include disconnected spaces provided they have at most finitely many connected components and satisfy the measure contraction property on each component.
\end{definition}
\begin{remark}
This definition can be interpreted as a way to generalize the notion
of a lower Ricci curvature bound and an upper dimensional bound on a
Riemannian manifold. A Riemannian manifold with Ricci curvature at
least $K$ and dimension $N$ satisfies the $(K,N)$-MCP.
\end{remark}
For a list of classes of spaces that satisfy this property and a few
more explicit examples see \Subsec{examples} below.
\begin{lemma}
If $(M,d,\mu)$ satisfies the $(K,N)$-MCP then (each connected component of) $M$ is a geodesic
space.
\end{lemma}
\begin{proof}
The definition of the $(K,N)$-MCP implies that for $\mu \otimes
\mu$-almost all points $(x,y) \in M \times M$ and all $t \in (0,1)$
a $t$-intermediate point exists. As we have assumed that $M$ is
complete, we can replace `$\mu \otimes \mu$-almost all $(x,y)$' by
`all $(x,y)$'. Existence of $t$-intermediate points for all $t$
and all $x,y$ is equivalent to being a geodesic space
by~\cite{Sturm06a}.
\end{proof}
\begin{theorem}[see~\cite{Sturm06b}]
\label{thm:mcp2}
Assume the metric measure space $(M,d,\mu)$ satisfies the
$(K,N)$-MCP for some $K \in \mathbb{R}$ and some $N\ge 1$.
Then
\begin{enumerate}
\item
\label{mcp2.d}
$(M,d,\mu)$ also satisfies the $(K',N')$-MCP for
any $K'\le K$ and any $N'\ge N$.
\item
\label{mcp2.e}
If $M' \subseteq M$ is convex, then $(M', d \restr{M' \times
M'}, \mu\restr{M'})$ also satisfies the $(K,N)$-MCP.
\item
\label{mcp2.a}
$(M,d,\mu)$ has Hausdorff dimension at most $N$.
\item
\label{mcp2.b}
For every $x \in M$ the function $r \mapsto \mu(B_r(x))/r^N$ is
bounded away from zero for $r \in (0,1]$.
\item
\label{mcp2.c}
$M$ satisfies the volume doubling property, that is there exists a
constant $v_M$ such that for all $r>0$ and all $x \in M$ we have
\begin{align*}
\mu(B_{2r}(x)) \le v_M \mu(B_r(x))
\end{align*}
\end{enumerate}
Note that property~\itemref{mcp2.c} follows from property~\itemref{mcp2.b}.
\end{theorem}
\begin{assumption}
\label{ass:uniform_dimension}
We assume from now on the following:
\begin{enumerate}
\item
\label{unif_dim.a}
The number $N$ is the exact Hausdorff dimension of $M$, that
is the $N$ in the $(K,N)$-MCP is sharp.
\item
\label{unif_dim.b}
The space $M$ is \emph{$N$-Ahlfors-regular}, i.e., there exists a constant
$c>0$ such that
\begin{equation}
\label{eq:alfohrs-reg}
\frac1c r^N \le \mu(B_r(x)) \le c r^N
\end{equation}
for all $x \in M$ and all $r \le 1$.
\end{enumerate}
\end{assumption}
\begin{lemma}
Assume the metric measure space $(M,d,\mu)$ satisfies the
$(K,N)$-MCP for some $K \in \mathbb{R}$ and some $N\ge 1$.
Then the limit
\begin{equation*}
\tau(x):=\lim_{r\to 0} \frac{\mu(B_r(x))}{r^{N}}
\end{equation*}
exists for all $x \in M$.
Note that the limit function $\map \tau M {(0,\infty)}$ is in general not continuous, but globally bounded.
\end{lemma}
\begin{proof}
MCP spaces satisfy a Bishop-Gromov volume comparison inequality by \cite{Sturm06b}, so the quotient $\frac{\mu(B_r(x))}{r^{N}}$ is monotone as $r \to 0$. Since it is also bounded by \Ass{uniform_dimension}, the limit exists.
\end{proof}
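For illustration, consider a flat two-dimensional cone $M$ of total
angle $\beta \in (0,2\pi]$ (a direct computation in this model): at the
cone point $x_0$ one has $\mu(B_r(x_0)) = \beta r^2/2$, while at any
smooth point $x$ one has $\mu(B_r(x)) = \pi r^2$ for small $r$, so
\begin{align*}
\tau(x_0) = \frac{\beta}{2},
\qquad
\tau(x) = \pi \quad \text{for } x \ne x_0.
\end{align*}
For $\beta < 2\pi$ this shows explicitly that $\tau$ need not be
continuous.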
\begin{remark}
These assumptions restrict the class of examples compared to~\cite{Sturm06b}, but in our view they exclude mostly pathological cases. We will call a space that satisfies the $(K,N)$-MCP \emph{and} these assumptions an \emph{MCP space}.
\end{remark}
\subsection{Glueing and manifold-like spaces}
The class of MCP spaces is already fairly large but it does not
contain some of the examples we want to study. We will extend this
class by introducing a glueing
operation.
\begin{definition}
\label{def:identif}
Let $(M,d,\mu)$ be a metric measure space. We say that $\bar M$ is
obtained from $M$ by \emph{glueing}
\begin{itemize}
\item if there are closed subsets $F_1$ and $F_2$ of $M$ such that
there is a measure preserving isometry $\map \phi {F_1} {F_2}$
and
\item if $\bar M:=M/{\sim}$, where $\sim$ is the equivalence
relation defined by $x \sim \phi(x)$;
\item we assume that there exists a $k \in \mathbb{N}$ such that each
equivalence class contains at most $k$ elements.
\end{itemize}
\end{definition}
Denote the natural projection map by $\map \pi M {\bar M}$. This
projection defines a metric measure space $(\bar M, \bar
d,\bar \mu)$ as follows: The induced distance is the quotient metric
\begin{equation*}
\bar d(\bar{x},\bar{y})
:= \inf \sum_{i=1}^m d(x_i,y_i),
\end{equation*}
the infimum being taken over all finite chains $x_1,y_1,\dots,x_m,y_m$
in $M$ with $\pi(x_1)=\bar x$, $\pi(y_m)=\bar y$ and $y_i \sim x_{i+1}$
for $i=1,\dots,m-1$, and the induced measure is the push-forward
measure $\bar \mu:=\pi_*\mu$ (i.e., $\bar \mu(\bar{B}):=
\mu(\pi^{-1}(\bar B))$).
Note that this construction includes both the possibility of glueing a metric space to itself as well as the possibility of glueing together two components of a disconnected metric space.
\begin{definition}
\label{def:mfd-like}
A \emph{manifold-like space} is a connected metric measure space $(\bar M,
\bar d,\bar \mu)$ that is obtained from a (possibly not connected) MCP space $(M,d,\mu)$
through a finite number of glueings.
\end{definition}
Note that $\bar M=M/{\sim}$ where $x \sim y$ if and only if there is a finite
sequence of isometries $\phi_1,\dots,\phi_r$ defining the glueing such
that $x=x_0$, $x_1=\phi_1(x_0)$, \dots, $y=x_r=\phi_r(x_{r-1})$. We
still write $\map \pi M {\bar M}$ for a manifold-like space.
\begin{remark}
Note that glueing does \emph{not} preserve the $(K,N)$-MCP property,
as we will see in the example \Subsec{examples}.
\end{remark}
\begin{theorem}
\label{thm:prop.identif.space}
Let $(\bar M, \bar d,\bar \mu)$ be a manifold-like space obtained
from the $(K,N)$-MCP space $(M,d,\mu)$. Then $\bar M$ inherits the
following properties.
\begin{enumerate}
\item
\label{prop.identif.space.a}
$(\bar M, \bar d,\bar \mu)$ has Hausdorff dimension $N$ and is
$N$-Ahlfors-regular, see~\eqref{eq:alfohrs-reg}.
\item
\label{prop.identif.space.b}
$(\bar M, \bar d,\bar \mu)$ satisfies the volume doubling
property. There exists a constant $v_{\bar M}$ such that for all
$r>0$ and all $\bar{x} \in \bar M$ we have
\begin{align*}
\bar \mu(B_{2r}(\bar{x})) \le v_{\bar M} \bar \mu(B_r(\bar{x}))
\end{align*}
\item
\label{prop.identif.space.c}
The limit
\begin{align*}
\bar{\tau}(\bar{x}) := \lim_{r\to 0} \frac{\bar \mu(B_r(\bar{x}))}{r^{N}}
\end{align*}
exists for every $\bar{x} \in \bar{M}$. The limit function
$\bar{x} \mapsto \bar{\tau}(\bar{x})$ is globally bounded on $\bar
M$.
\end{enumerate}
\end{theorem}
\begin{proof}
It is clearly sufficient to prove this for one glueing. By
definition each point in $\bar M$ has only finitely many preimages
in $M$ under the projection map $\pi$. This shows that $\bar M$
still has Hausdorff dimension $N$ and is $N$-Ahlfors-regular.
As $\bar \mu$ is just the push-forward measure of $\mu$,
properties~\itemref{prop.identif.space.b}
and~\itemref{prop.identif.space.c} are directly inherited from $M$.
\end{proof}
\subsection{Examples}
\label{ssec:examples}
In this section we will show that various classes of spaces are MCP
spaces or manifold-like. We will also exhibit a few concrete examples
and counter examples.
\begin{lemma}[\cite{Sturm06b}]
If $(M,d,\mu)$ is a metric measure space with Hausdorff dimension $N$
and with Alexandrov curvature bounded from below by $\kappa$, then
$M$ satisfies the $((N-1)\kappa, N)$-MCP.
\end{lemma}
\begin{corollary}
Compact Riemannian manifolds without boundary or with smooth
boundary are MCP spaces.
\end{corollary}
\begin{proof}
Compact $N$-dimensional manifolds have Alexandrov curvature bounded
from below by the Cartan-Alexandrov-Toponogov triangle comparison
theorem and have Hausdorff dimension $N$.
\end{proof}
A closed subset $D \subset \mathbb{R}^n$ is called a \emph{special Lip\-schitz
domain} if there is a Lip\-schitz-continuous function $\map
\psi{\mathbb{R}^{n-1}}\mathbb{R}$ such that
\begin{equation*}
D=\set{(x',x_n) \in \mathbb{R}^n}{\psi(x') \le x_n}.
\end{equation*}
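For concreteness (a standard example): $\map \psi {\mathbb{R}^{n-1}} \mathbb{R}$,
$\psi(x')=\abs{x'}$, is Lipschitz with constant $1$, and the
corresponding special Lipschitz domain is the solid cone
\begin{equation*}
D=\set{(x',x_n) \in \mathbb{R}^n}{\abs{x'} \le x_n}.
\end{equation*}
More generally, a Lipschitz constant $L$ for $\psi$ guarantees that $D$
contains, at each of its boundary points, a translated copy of the cone
$\set{(x',x_n)}{L\abs{x'} \le x_n}$; it is precisely this uniform cone
condition that fails at cusps.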
\begin{definition}[cf.~\cite{MitreaTaylor99}]
\label{def:lip.mfd}
A pair of a compact metric measure space $(M,d,\mu)$ and a smooth
Riemannian manifold without boundary $(\wt M, \wt g)$ is a
\emph{smooth manifold with Lip\-schitz boundary} if the following
holds.
\begin{itemize}
\item $M \subseteq \wt M$;
\item the metric $d$ is the metric defined via the Riemannian metric
$\wt g$ restricted to $M$;
\item the measure $\mu$ is the Riemannian measure defined via $\wt
g$ restricted to $M$;
\item $M$ and $\wt M$ have the same dimension;
\item in the charts of $\wt M$, the boundary of $M$ in $\wt M$ is
Lip\-schitz, i.e., for each point $p \in \bd M$ there is an open
neighbourhood $U$ and a (smooth) chart $\map \phi U {V \subset
\mathbb{R}^n}$ of the manifold $\wt M$ and a special Lip\-schitz domain
$D \subset \mathbb{R}^n$ such that $M \cap U=\phi^{-1}(V \cap D)$.
\end{itemize}
\end{definition}
\begin{corollary}
If $(M,\wt M)$ is a manifold with Lipschitz boundary and $M$ is
convex in $\wt M$, then $M$ is an MCP space.
\end{corollary}
\begin{proof}
The $(K,N)$-MCP property is inherited on convex subsets by
\Thmenum{mcp2}{mcp2.e}. The dimension \Ass{uniform_dimension} is
inherited for subsets with Lipschitz boundary. Note that Lipschitz
continuity is crucial here. If there are cusps, this assumption may
fail, see \Exenum{mcp}{mcp.e} below.
\end{proof}
\begin{lemma}
Compact metric graphs are manifold-like spaces.
\end{lemma}
\begin{proof}
A compact metric graph can be obtained from a finite number of
finite intervals, that is manifolds with boundary, through glueing
of the end points.
\end{proof}
Note that any vertex of degree at least $3$ has
Alexandrov curvature $-\infty$.
\begin{example}
A compact good orbifold is a manifold-like space. See~\cite{DGGW08}
for the exact definition and a general introduction to orbifolds. A
good orbifold is the orbit space of an isometric action by a
discrete group on a manifold. In other words it can be obtained
through glueing from a manifold.
\end{example}
\begin{lemma}[\cite{Ohta07}]
If $(M_1,d_1,\mu_1)$ and $(M_2,d_2,\mu_2)$ satisfy the
$(K_1,N_1)$-MCP and $(K_2,N_2)$-MCP respectively, then $(M_1\times
M_2, d_1+d_2, \mu_1 \times \mu_2)$ satisfies the $(\min(K_1,K_2),
N_1+N_2)$-MCP. In other words, the MCP property is preserved under
products.
\end{lemma}
\begin{example}
Let $(M,d,\mu)$ be a $(K,N)$-MCP space. Then $M \times M$ is a
$(K,2N)$-MCP space. This can be seen as a physical model of the
state space of two distinguishable particles.
Let $\map \phi {M \times M}{M \times M}$ be the isometry given by
$\phi(x,y)=(y,x)$. Then $(M \times M)/{\sim}$ with ${(x,y) \sim
\phi(x,y)}$ is a manifold-like space. This corresponds to the
state space of two indistinguishable particles. The same
construction applies to multi-particle systems.
\end{example}
\begin{examples}
\label{ex:mcp}
Some concrete examples and counter-examples:
\begin{enumerate}
\item
\label{mcp.a}
If $M$ is a flat cone (i.e., a wedge-like sector of the unit
disk in $\mathbb{R}^2$ with the two straight boundary segments identified) it satisfies the
$(0,2)$-MCP and is an MCP space. The Alexandrov curvature is
$+\infty$ at the cone point and zero elsewhere.
\item
\label{mcp.b}
Let $M$ be constructed as follows. Cut open the unit disk in
$\mathbb{R}^2$ along the negative $x$-axis and glue in another quarter of
the unit disk. $M$ is a pseudo cone with angle $5\pi/2$. It has
Alexandrov curvature $-\infty$ at the cone point and does
\emph{not} satisfy the $(K,N)$-MCP for any $K,N$
(see~\cite{Sturm06b}) but $M$ is a manifold-like space (glued out
of $3$ pieces to make the Lipschitz domains convex).
\item
\label{mcp.c}
Let $M$ consist of two copies of the unit disk in $\mathbb{R}^2$ glued
together at the origin. This is a manifold-like space but does not
satisfy the $(K,N)$-MCP.
\item
\label{mcp.d}
Let $M$ be the set of points in $\mathbb{R}^3$ that is the union of
$\set{(x,y,z)}{x^2+y^2 \le 1, z=0}$ and $\set{(x,y,z)}{x^2+z^2 \le
1, x,z \ge 0, y=0}$. This is a manifold-like space.
\item
\label{mcp.e}
Let $M:=\set{(x,y) \in [0,1]^2}{y \le x^2}$. Then the $\eps$-balls
around $(0,0)$ have volume proportional to $\eps^3$. Hence the
dimension \Ass{uniform_dimension} is \emph{not} satisfied and this is neither
an MCP space nor a manifold-like space.
\item
\label{mcp.f}
Let $M = [0,1]\times [0,1]/{\sim}$ where $(x,0) \sim (0,0)$ for
all $x \in [0,1]$. Then the $\eps$-balls around $(0,0)$ have
volume proportional to $\eps$. Hence the dimension \Ass{uniform_dimension} is
\emph{not} satisfied and this is neither an MCP space nor a
manifold-like space.
\end{enumerate}
\end{examples}
\section{The natural Dirichlet forms}
\label{sec:dir.forms}
\subsection{Definition of the natural Dirichlet form}
\label{ssec:dir.mcp}
There exists a natural Dirichlet form on MCP spaces and it induces a
Dirichlet form on manifold-like spaces.
\begin{definition}[\cite{Sturm06b}]
\label{def:dir.form1}
Let $(M,d,\mu)$ be a metric measure space. Let
\begin{align*}
\qf E_r(f) := \int_M \frac{N}{r^N} \int_{B_r^*(x)}
\left(\frac{f(y)-f(x)}{d(y,x)}\right)^2 \dd \mu(y) \dd\mu(x)
\end{align*}
for all $f \in \LipCont M$.
\end{definition}
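To see what $\qf E_r$ measures, consider the model case $M=\mathbb{R}^N$ with
the Lebesgue measure (a heuristic first-order computation, not needed in
the sequel): for smooth $f$ one has $f(y)-f(x) \approx \nabla f(x)
\cdot (y-x)$, and averaging $(\nabla f(x) \cdot \theta)^2$ over unit
directions $\theta$ gives $\abssqr{\nabla f(x)}/N$. Hence the inner
integral is approximately $\abssqr{\nabla f(x)}\, \omega_N r^N/N$,
where $\omega_N$ is the volume of the unit ball, and therefore
\begin{align*}
\qf E_r(f) \approx \omega_N \int_{\mathbb{R}^N} \abssqr{\nabla f(x)} \dd x
\end{align*}
for small $r$, i.e., up to a normalizing constant depending only on
$N$, the expression $\qf E_r$ recovers the classical Dirichlet energy.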
\begin{theorem}[see~\cite{Sturm06b}]
\label{thm:dirichlet_form_existence}
Assume $(M,d,\mu)$ is an MCP space. Then the limit
\begin{align*}
\qf E(f):= \lim_{r \to 0} \qf E_r(f)
\end{align*}
exists for all $f \in \LipCont
M$. Furthermore the closure of $\qf E$ is a regular strongly local
Dirichlet form on $(M,d,\mu)$ with core $\LipCont M$.
\end{theorem}
\begin{remark}
This and other theorems quoted from~\cite{Sturm06b} also hold for
non-compact MCP spaces. In this case one needs to replace the
function spaces with the compactly supported versions.
\end{remark}
We now define a Dirichlet form $\bar{\qf E}$ on the quotient $\bar M$
from our Dirichlet form $\qf E$ on the original space $M$ via $\map
\pi M {\bar M=M/{\sim}}$. We can see $\bar {\qf E}$ as a
restriction of the form $\qf E$ (see remark below):
\begin{proposition}
\label{prp:dirichlet_form_on_manifold-like}
Let $(\bar M, \bar d,\bar \mu)$ be a manifold-like space obtained
from the MCP space $(M,d,\mu)$. Then the Dirichlet form $\qf E$ on
$M$ induces a Dirichlet form $\bar{\qf E}$ on $\bar M$ as a pull
back. This form $\bar{\qf E}$ is also regular, strongly local and
has core $\LipCont {\bar M}$.
\end{proposition}
\begin{proof}
As $\bar \mu$ is the push forward measure of $\mu$, the map
$\map{\pi^*}{\Lsqr {\bar M}}{\Lsqr M}$, $\pi^* \bar f
:= \bar f \circ \pi$, is an isometry onto its image.
The image of $\LipCont{\bar M} \subset \Lsqr{\bar M}$ under $\pi^*$
is given by
\begin{equation*}
\pi^*(\LipCont {\bar M})
= \bigset{f \in \LipCont M}{\text{$f(x)=f(y)$ whenever $x \sim y$}}
\subset \dom \qf E
\end{equation*}
Hence for $\bar f \in \LipCont{\bar M}$ we define $\bar{\qf E}(\bar
f):= \qf E (\pi^*\bar f)$. We then define $\bar{\qf E}$ to be the
closure of this form with respect to the norm given by $\normsqr[\qf
{\bar E}]\cdot =\normsqr[\Lsqr{\bar M}] \cdot + \bar{\qf E}(\cdot)$.
The unit contraction property is inherited from $\qf E$, i.e., $\bar
f \in \dom \bar{\qf E}$ implies that $\bar f^\# \in \dom \bar{\qf
E}$ and $\bar{\qf E}(\bar f^{\#}) \le \bar{\qf E} (\bar
f)$. Similarly, locality is inherited from $\qf E$.
For the regularity of $\bar{\qf E}$, we note first that
$\LipCont{\bar M} \subset \Cont {\bar M} \cap \dom \bar{\qf E}$ by
definition. Hence $\Cont {\bar M} \cap \dom \bar{\qf E}$ is dense in
$\dom \bar{\qf E}$. By Stone-Weierstrass, $\LipCont{\bar M}$ is
dense in $\Cont{\bar M}$ in the supremum norm. Thus $\Cont {\bar M}
\cap \dom \bar{\qf E}$ is also dense in $\Cont {\bar M}$ in the
supremum norm.
\end{proof}
\begin{remark}
By definition, $\map{\pi^*}{\dom \bar {\qf E}} {\dom \qf E}$
(endowed with the natural norms $\norm[\bar {\qf E}] \cdot$ and
$\norm[\qf E] \cdot$) is also an isometry onto its image (as
\begin{equation*}
\normsqr[\qf E] {\pi^* \bar f}
= \normsqr[\Lsqr M]{\pi^*\bar f} + \qf E(\pi^* \bar f)
= \normsqr[\Lsqr{\bar M}] {\bar f} + \bar{\qf E}(\bar f)
\end{equation*}
for $\bar f$ in the core $\LipCont{\bar M}$). Hence, we can also
work with the corresponding image form $\dbar{\qf E}$ on $\Lsqr M$,
which is the restriction $\dbar {\qf E} := \qf E \restr{\dom
\dbar{\qf E}}$ of $\qf E$ with domain given by
\begin{equation*}
\dom \dbar{\qf E}
= \clo[{\norm[\qf E]\cdot}]{\bigset{f \in \LipCont M}
{\text{$f(x)=f(y)$ whenever $x \sim y$}}}
\subset \dom \qf E.
\end{equation*}
Note that it can happen that $\dom \dbar{\qf E} = \dom \qf E$
although $\pi^* \LipCont{\bar M} \subsetneq \LipCont M$. This
happens because the $\norm[\qf E]\cdot$-norm cannot see subsets of
codimension at least two (see e.g.~\cite{ChavelFeldman78}). This
effect can be seen in \Exenum{mcp}{mcp.c}: the Dirichlet form of
two copies of the unit disk identified at a point is the same as the
Dirichlet form on two disjoint copies.
\end{remark}
\subsection{Local isometries on manifold-like spaces}
\label{ssec:local.isom.manifold_like_spaces}
\begin{proposition}
\label{prp:isomorphism_induces_form_equivalence}
\indent
\begin{enumiii}
\item
\label{iso_ind.a}
Let $(M,d,\mu)$ and $(\oS M, \oS d, \oS \mu)$ be two MCP
spaces with associated Dirichlet forms $\qf E$ and $\oS{\qf E}$ as
constructed in \Thm{dirichlet_form_existence}. If there is a
measure preserving isometry $\map \psi U {\oS U=\psi(U)}$ for open
subsets $U \subset M$ and $\oS U \subset \oS M$, then $\qf E$ and
$\oS{\qf E}$ agree on $U$ and $\oS U$.
\item
\label{iso_ind.b}
Let $(\bar M,\bar d, \bar \mu)$ and $(\bar{\oS M},\bar{\oS d},
\bar{\oS \mu})$ be two manifold-like spaces with associated
Dirichlet forms $\bar{\qf E}$ and $\bar{\oS{\qf E}}$ as constructed in
\Prp{dirichlet_form_on_manifold-like}. Assume that $\map \pi M
{\bar M}$ and $\map{\oS \pi}{\oS M}{\oS {\bar M}}$ are the
corresponding projections from MCP spaces $M$ and $\oS M$,
respectively (see \Def{mfd-like}).
If there is a measure preserving isometry $\map {\bar \psi} {\bar
U} {\bar{\oS U}=\bar \psi(\bar U)}$ for open subsets $\bar U
\subset \bar M$ and $\bar{\oS U} \subset \bar{\oS M}$ that lifts
to a measure preserving isometry $\map \psi U {\oS U}$ with
$U=\pi^{-1}(\bar U)$ and $\oS U=(\oS \pi)^{-1}(\bar{\oS U})$
(i.e., $\bar \psi \circ \pi = \oS \pi \circ \psi$), then $\bar{\qf
E}$ and $\bar{\oS{\qf E}}$ agree on $\bar U$ and $\bar{\oS U}$.
\end{enumiii}
\end{proposition}
\begin{proof}
\itemref{iso_ind.a}~The Dirichlet form on the MCP space $(M,d,\mu)$
is defined by $\qf E(f)= \lim_{r \to 0}\qf E_r(f)$ and similarly for
$(\oS M, \oS d, \oS \mu)$. As $\qf E_r$ and $\oS{\qf E}_r$ are
expressed entirely in terms of the metric $d$ and the measure $\mu$,
we have $\qf E_r(f)=\oS {\qf E}_r(\psi_* f)$ for $f \in \LipCont M$
with $\supp f \subset U$ and $0<r < d(\supp f, M \setminus
U)$.
Passing to the limit $r \to 0$ yields the
first result.
\itemref{iso_ind.b}~By part~\itemref{iso_ind.a}, the lifted forms
$\qf E$ and $\oS{\qf E}$ agree on $U$ and $\oS U$, i.e., $\qf
E(f)=\oS{\qf E}(\psi_* f)$. Moreover,
\begin{equation*}
\bar{\qf E}(\bar f)
=\qf E(\pi^*\bar f)
=\oS{\qf E}(\psi_* \pi^* \bar f)
=\oS {\qf E}((\oS \pi)^*\bar \psi_* \bar f)
=\bar{\oS {\qf E}}(\bar\psi_* \bar f)
\end{equation*}
using the lift property of $\bar \psi$
and $\psi$.
\end{proof}
\begin{corollary}
\label{cor:isomorphism_induces_form_equivalence}
Under the assumptions of the previous proposition, the associated
operators on MCP resp.\ manifold-like spaces also agree.
\end{corollary}
\begin{proof}
This follows directly from \Lem{forms.ops.agree}.
\end{proof}
\subsection{The boundary conditions for the operator}
\label{ssec:bd_cond}
We defined a Dirichlet form and the associated operator in a quite general
setting. In this section we are going to show that for nice spaces
the operator and the Dirichlet form are very natural and familiar.
The main motivation for the definition of the Dirichlet form in
\Def{dir.form1}, which follows~\cite{Sturm06b}, is the fact that if $M$ is a
Riemannian manifold, the corresponding form is $\qf E(f) = \int_M
\abssqr{\nabla f} \dd\mu$. At the same time, the definition makes sense in
a much broader metric space setting. This statement is also true in a
local version:
\begin{proposition}
\label{prp:mcp-mfd}
Assume that $(M,d,\mu)$ is a manifold-like space, and $\oS M$ a boundaryless
Riemannian manifold with its natural metric $\oS d$ and Riemannian
measure $\oS \mu$. If there is a measure preserving isometry $\map
\psi U {\oS U}$ with $U \subset M$ and $\oS U \subset \oS M$ open,
then on $U$, the form $\qf E$ acts as $\int_{\oS U}\abssqr{\nabla
f}\dd \oS \mu$ and the operator $D$ acts as the Laplace-Beltrami operator
on $\oS U$.
\end{proposition}
\begin{proof}
This follows directly from
\Prp{isomorphism_induces_form_equivalence}.
\end{proof}
\begin{definition}
\label{def:r-fold}
Assume that $(\bar M,\bar d,\bar \mu)$ is a manifold-like space with
MCP lift $(M,d,\mu)$ and projection $\map \pi M {\bar M}$. We say
that $\bar U \subset \bar M$ is an \emph{$r$-fold smooth fibration}
glued at a closed subset $\bar F \subset \bar M$ if the following
holds:
\begin{enumerate}
\item $\bar U$ is open and connected and $\pi^{-1}(\bar
U)=U=\bigdcup_{j=1}^r U_j$, where each $U_j$ is connected and the
closure of each $U_j$ is isometric to a subset of a Riemannian
manifold with smooth boundary. The sets $U_j$ are called
\emph{leaves}.
\item $\bar F$ is connected and $\pi^{-1}(\bar F)=\bigdcup_{j=1}^r
F_j$ with $F_j$ connected and $F_j \subset \bd U_j$, hence $F_j$
is isometric to a subset of the boundary of the Riemannian
manifold.
\end{enumerate}
\end{definition}
To simplify notation, we assume that $U_j$ and $F_j$ are already
subsets of a Riemannian manifold (the former open in the interior, the
latter a closed subset of the boundary). For $\map{\bar f}{\bar M}\mathbb{R}$, we
denote by $\map f M \mathbb{R}$ the lift of $\bar f$ onto $M$, i.e., $f \circ
\pi=\bar f$. If $f$ is smooth enough on each $U_j$, we define
$\normder f_j$ as the normal (outward) derivative of $f$ on $\hat U_j
:= F_j \cup U_j$, and we pull back all functions $\normder f_j
\restr{F_j}$ formally defined on $F_j \subset M$ onto $\bar F$ via the
isometries and denote them by $\map {\normder \bar f_j} {\bar F}\mathbb{R}$.
Note that $U \setminus \bigcup_{j=1}^r F_j$ is naturally the same as
$\bar U \setminus \bar F$, as $\pi$ does not identify any points here.
Moreover, these two
sets also have the same measure, and integrals over them agree. Therefore, we consider these sets as the same.
\begin{proposition}
\label{prp:kirchhoff}
Let $(\bar M,\bar d,\bar \mu)$ be a manifold-like space obtained
from the MCP space $(M,d,\mu)$. Let $\bar U \subset \bar M$ be an
$r$-fold smooth fibration with leaves $U_j$ glued at $F_j$.
If $\bar f$ is in the domain of the associated operator $\bar D$ on
$\bar M$ with $\supp \bar f \subset \bar U$, then on $\bar U$ the
operator $\bar D$ acts as the usual Laplacian ($\bar D \bar f = -\Delta \bar f$).
Moreover, the normal derivatives on the leaves satisfy the so-called
\emph{Kirchhoff} condition on the glued part $\bar F$. This means that
\begin{equation*}
\sum_{j=1}^r \normder \bar f_j =0 \qquad \textit{on } \bar F
\end{equation*}
and that $\bar f$ is continuous on $\bar F$.
\end{proposition}
Note that the derivatives are only weak derivatives. This theorem does
not make any statements on the regularity of $\dom \bar D$. On parts
which are $r$-fold smooth fibrations, however, the solutions are in
$\Sobspace[2]$.
\begin{proof}
Let $\bar g \in \dom \bar{\qf E}$ with $\supp \bar g \subset \bar U$
and $g$ its lift. By the identifications made above, we have
\begin{align*}
\int_{\bar U} \bar D \bar f \cdot \bar g \dd \bar \mu
=& \bar{\qf E}(\bar f, \bar g)\\
=& \sum_{j=1}^r \int_{U_j} \nabla f \cdot \nabla g \dd \mu\\
=&\sum_{j=1}^r
\Bigl(
\int_{U_j} (-\Delta f) \cdot g \dd \mu
+\int_{F_j} \normder f_j \cdot g \dd \sigma
\Bigr)\\
=& \int_{\bar U \setminus \bar F} (-\Delta \bar f) \cdot g \dd \mu
+ \int_{\bar F} \Bigl(\frac 1 r\sum_{j=1}^r\normder \bar f_j\Bigr)
\cdot \bar g \dd \bar \sigma
\end{align*}
using Green's formula on the Riemannian manifold (third equality).
Here, $\sigma$ denotes the canonical measure on the boundary of the
Riemannian manifold and $\bar \sigma$ the push forward measure on
$\bar F$ (counting the measure from each leaf's boundary, hence
$\bar \sigma (\bar F)=r\sigma(F_j)$). We first see that $\bar D \bar
f= -\Delta \bar f$ (choose $\bar g$ with support away from $\bar
F$). Then we let $\bar g \in \dom \bar{\qf E}$ with $\supp \bar g
\subset \bar U$; as $\bar g \restr {\bar F}$ runs through a dense
subspace of $\Lsqr{\bar F,\bar \sigma}$, the result follows.
\end{proof}
If $r=1$, this reduces to the manifold case with Neumann boundary
conditions.
\begin{corollary}
With the same notation as above and the additional assumption that
$r=1$, functions $\bar f \in \dom \bar D$ with $\supp \bar f \subset
\bar U$ satisfy Neumann boundary conditions $\normder \bar f=0$ on
$\bar F$.
\end{corollary}
\begin{examples}
The simplest example of the situation in \Prp{kirchhoff} is a metric
graph. The MCP space consists of a collection of disjoint intervals,
one for each edge of the metric graph. The glueing then identifies
the end points of the intervals that correspond to adjacent edges in
the metric graph.
\Exenum{mcp}{mcp.d} is a higher dimensional version.
\end{examples}
\subsection{Heat kernel estimates}
\begin{theorem}[\cite{CKS87}]
\label{thm:existence_measure}
Let $(M,d,\mu)$ be a compact metric measure space and $\qf E$ a
regular Dirichlet form on it. Then there exists a measure
$\Gamma(f)$ such that
\begin{align*}
\qf E(f) = \int_M \dd\Gamma(f)(x)
\end{align*}
for any $f \in \Cont M \cap \dom \qf E$.
\end{theorem}
\begin{lemma}[Subpartitioning lemma]
\label{lem:subpartitioning_lemma_proven}
Let $(M,d,\mu)$ be a $(K,N)$-MCP space and let $U \subset M$ be open and convex. Then
\begin{align*}
\int_U \frac{N}{r^N} \int_{B_r^*(x) \cap U}
\left(\frac{f(y)-f(x)}{d(y,x)}\right)^2\dd\mu(y)\dd\mu(x)
\le \int_U \dd\Gamma(f)(x)
\end{align*}
\end{lemma}
\begin{proof}
For any MCP space and $0 = t_0 < t_1 < \dots < t_{n-1} < t_n=1$ a
partition of the unit interval we have the estimate
\begin{equation*}
\qf E_r(f) \le \sum_{i=1}^{n}(t_i-t_{i-1}) \qf E_{(t_i-t_{i-1})r}(f)
\end{equation*}
by~\cite{Sturm06b}. We will apply this directly to $U$, which is
also a $(K,N)$-MCP space by \Thmenum{mcp2}{mcp2.e}. Let $r_n :=
2^{-n}r$; then $\qf E(f)= \lim_{n \to \infty} \qf E_{r_n}(f)$. Using
the partition $0,\frac12,1$, this is an increasing sequence. Hence
$\qf E_r(f) \le \qf E(f)$.
\end{proof}
\begin{definition}
\label{def:e-metric}
Let $(M,d,\mu)$ be a metric measure space and $\qf E$ a regular
Dirichlet form on it. Then we define the \emph{energy metric}
$\rho$ on $M$ as follows
\begin{align*}
\rho(x,y) :=
\sup \Bigset{f(x) - f(y)}
{f \in \Cont M \cap \dom \qf E,
\frac{\dd \Gamma(f)}{\dd\mu} \le 1 \; \text{on}\; M}
\end{align*}
where $\dd\Gamma(f)/\dd \mu$ represents the Radon-Nikod\'ym
derivative. Note that this includes the implicit assumption that
$\dd\Gamma(f)$ is absolutely continuous with respect to $\dd\mu$.
\end{definition}
This metric is often called the intrinsic metric, especially when $M$
is only a (sufficiently nice) topological space. To avoid confusion
with the distance induced via the length of paths, we use the term energy metric. A priori, the energy metric need not be a proper metric; it can be degenerate.
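As a standard orientation example (ours, not from the text): for the classical Dirichlet form on $\mathbb{R}^n$ the energy metric recovers the Euclidean distance.

```latex
% Standard example (not from the text): M = \mathbb{R}^n with Lebesgue
% measure and \qf E(f) = \int \abssqr{\nabla f}\dd\mu, so that
% \dd\Gamma(f)/\dd\mu = \abssqr{\nabla f}. The constraint in the
% definition then says that f is 1-Lipschitz, and
\rho(x,y)
  = \sup \set{f(x)-f(y)}{\abs{\nabla f}\le 1 \text{ a.e.}}
  = \abs{x-y},
% with the supremum attained by f(z) := \abs{z-y}.
```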
\begin{remark}
\label{rem:absolute_continuity}
Let $(M,d,\mu)$ be an MCP space and let $\qf E$ be the associated
Dirichlet form. Then $\dd\Gamma(f)$ is absolutely continuous with
respect to $\dd\mu$ for all $f \in \Cont M \cap \dom \qf E$
by~\cite[Cor.~6.6~(iii)]{Sturm06b}.
\end{remark}
\begin{lemma}
\label{lem:metric_equivalence}
Let $(M,d,\mu)$ be a $(K,N)$-MCP space. Let $\qf E$ be the
associated Dirichlet form from \Thm{dirichlet_form_existence}. Then
the energy metric is equivalent to the metric $d$. In particular
they induce the same topology on $M$.
\end{lemma}
\begin{proof}
This is proven in~\cite{Sturm98} for a different version of the
MCP; the following proof is an adaptation of his.
We have $\qf E (f) = \int_M \dd\Gamma(f) = \int_M
\frac{\dd\Gamma(f)}{\dd\mu}(x) \dd\mu(x)$ by \Thm{existence_measure}
and \Rem{absolute_continuity}.
For $z \in M$ and $C>0$ fixed, let $f(x):=Cd(x,z)$. Then we have
\begin{align*}
\qf E_r(f)
&= \int_M \frac{N}{r^N} \int_{B_r^*(x)}C^2
\left(\frac{d(x,z)-d(y,z)}{d(x,y)}\right)^2
\dd\mu(y)\dd\mu(x) \\
&\le \int_M \frac{N}{r^N} \int_{B_r^*(x)}C^2 \dd\mu(y)\dd\mu(x) .
\end{align*}
We assumed in~\eqref{eq:alfohrs-reg} that $\mu(B_r(x))/r^N$ is
globally bounded by some constant $c$. Hence we can apply the
dominated convergence theorem and get
\begin{align*}
\frac{\dd\Gamma(f)}{\dd\mu}(x)
= \lim_{r \rightarrow 0} \frac{N}{r^N}
\int_{B_r^*(x)}C^2 \left(\frac{d(x,z)-d(y,z)}{d(x,y)}\right)^2
\dd\mu(y)
\end{align*}
This shows $\dd \Gamma(f)/\dd \mu\le 1$ for $C \le (cN)^{-1/2}$.
Plugging $f$ into the definition of the energy metric, we obtain
$\rho(x,y) \ge C(d(x,z)-d(y,z))$ valid for any $C \le (cN)^{-1/2}$
and any $z \in M$. In particular, for $z:=y$ we get the lower bound
$\rho(x,y) \ge Cd(x,y)$.
Let $f \in \LipCont M$ with $\dd \Gamma(f)/\dd \mu \le 1$ and let
$L_f$ denote the sharp Lipschitz constant of $f$. Then $f(x)-f(y)
\le L_fd(x,y)$. Hence if we show that there exists a global
Lipschitz constant $L$ for all functions $f \in \LipCont M$ that
satisfy $\dd\Gamma(f)/\dd\mu \le 1$ we get the estimate $\rho(x,y)
\le Ld(x,y)$.
Let $x_0,y_0 \in M$ be such that $f(x_0)-f(y_0)\ge
\frac{L_f}{2}d(x_0,y_0)$. We can assume without loss of generality
that $d_0:=d(x_0,y_0)$ is arbitrarily small by repeatedly taking
midpoints. Hence $d_0$ can be bounded from above independently of
$f$. Let $x \in B_{d_0/6}(x_0)$ and $y \in B_{d_0/6}(y_0)$. Then
\begin{align*}
|f(x)-f(y)|
\ge |f(x_0)-f(y_0)| - |f(x)-f(x_0)| - |f(y)-f(y_0)| \\
\ge \left( \frac{L_f}{2} - \frac{L_f}{6} - \frac{L_f}{6}\right)d_0
= \frac{L_fd_0}{6}
\ge \frac{L_f}{12}d(x,y)
\end{align*}
Let $U:= B_{2d_0}(x_0)$ and $r=2d_0$ in
\Lem{subpartitioning_lemma_proven}, then
\begin{align*}
\mu(B_{2d_0}(x_0))
\ge& \int_{B_{2d_0}(x_0)} \dd\Gamma(f) \\
\ge& \int_{B_{2d_0}(x_0)} \frac{N}{(2d_0)^N}
\int_{B_{2d_0}^*(x)\cap B_{2d_0}(x_0)}
\left(\frac{f(y)-f(x)}{d(y,x)}\right)^2 \dd\mu(y)\dd\mu(x) \\
\ge& \int_{B_{d_0/6}(x_0)} \frac{N}{(2d_0)^N} \int_{B_{d_0/6}^*(y_0)}
\left(\frac{f(y)-f(x)}{d(y,x)}\right)^2 \dd\mu(y)\dd\mu(x) \\
\ge & \frac{L_f^2}{144}\frac{N}{(2d_0)^N}\mu(B_{d_0/6}(x_0)) \mu(B_{d_0/6}(y_0))
\end{align*}
By~\eqref{eq:alfohrs-reg} we have uniform global bounds for the
volumes of balls. All the $d_0$ cancel out, so this proves an upper
bound for $L_f$ that is independent of $f$, completing the proof.
\end{proof}
\begin{lemma}
Let $(\bar M,\bar d,\bar \mu)$ be a manifold-like space with induced
Dirichlet form $\bar{\qf E}$. Then the energy metric is
equivalent to the $\bar d$ metric.
\end{lemma}
\begin{proof}
It is sufficient to prove this for one glueing with glueing map
$\phi$ and projection $\pi$. We can write $\bar d(\bar x,\bar y)=
\min \set{d(x,y)}{\pi(x)=\bar x,\pi(y)=\bar y}$ and similarly for
the energy metric. Hence the equivalence of metrics is
inherited through glueing.
\end{proof}
\begin{definition}
Let $(M,d,\mu)$ be a metric measure space that is $N$-Ahlfors
regular and let $\qf E$ be a regular Dirichlet form on $M$. Let
$N^*:= \max \{3,N\}$. Then we say $\qf E$ satisfies the
\emph{Sobolev inequality} if there exists a $C>0$ such that for all
$f \in \dom{\qf E} \cap \Contc{B_r(x)}$ we have
\begin{align*}
\left(\int_{B_r(x)}\abs f^{2N^*/(N^*-2)} \dd\mu\right)^{(N^*-2)/N^*}
\le C \frac{r^2}{\mu(B_r(x))^{2/N^*}}
\left(\int_{B_r(x)} \dd\Gamma(f)
+ \frac 1{r^2} \int_{B_r(x)}\abssqr f \dd\mu \right)
\end{align*}
for all $r>0$.
\end{definition}
\begin{theorem}[\cite{Sturm95}]
\label{thm:heat_kernel_original}
Let $(M,d,\mu)$ be a metric measure space and $\qf E$ a strongly
local regular Dirichlet form on it.
Assume $M$ satisfies the volume doubling property and the Sobolev
inequality, and that the topology induced by $\rho$ is the same as the
one induced by $d$. Then for any $T>0$ and any $\eps>0$ there exists a
$C>0$ such that the heat kernel estimate
\begin{align*}
p_t(x,y)
\le C \mu(B_{\sqrt{t}}(x))^{-1/2}\mu(B_{\sqrt{t}}(y))^{-1/2}
\exp\left(-\frac{\rho^2(x,y)}{(4+\eps)t}\right)
\end{align*}
is valid for all $x,y \in M$ and all $0<t<T$.
\end{theorem}
\begin{corollary}
\label{cor:heat_kernel_decay}
Let $(M,d,\mu)$ be a $(K,N)$-MCP space. Let $\qf E$ be the strongly
local regular Dirichlet form from
\Thm{dirichlet_form_existence}. Then the heat kernel satisfies an
exponential decay bound as in \Def{heat_kernel_decay}. That is
\begin{align*}
p_t(x,y) \le C t^{-N/2} \exp\left(-\frac{d^2(x,y)}{ct}\right)
\end{align*}
holds for some $C,c>0$ independent of $x,y \in M$ and of $t$ as long as
$0<t<T$ for some $T$.
\end{corollary}
\begin{proof}
$(M,d,\mu)$ satisfies a parabolic Harnack inequality
by~\cite{Sturm06b}. This also implies that $\qf E$ satisfies the
Sobolev inequality~\cite{Sturm06b}. \Lem{metric_equivalence}
gives the equivalence of the energy metric and $d$. Hence the
assumptions of \Thm{heat_kernel_original} are satisfied.
As the volume of balls of radius $\sqrt{t}$ is controlled
by~\eqref{eq:alfohrs-reg} and the metrics $d$ and $\rho$ are
equivalent, we can rewrite the bound from~\cite{Sturm95} as stated.
\end{proof}
\begin{corollary}
Let $(\bar M,\bar d,\bar \mu)$ be a manifold-like space with induced
Dirichlet form $\bar{\qf E}$. Then the heat kernel satisfies the
exponential decay bound in \Def{heat_kernel_decay}.
\end{corollary}
\begin{proof}
We are going to use \Thm{heat_kernel_original} again. The only
assumption that is missing is the Sobolev inequality.
Let $(M,d,\mu)$ be the $(K,N)$-MCP space that $\bar M$ was obtained
from. The Sobolev inequality holds on $M$ by~\cite{Sturm06b}. As
$\bar{\qf E}$ is defined as a restriction of $\qf E$ and the measure
$\bar \mu$ on $\bar M$ is just the push forward measure, the Sobolev
inequality also holds on $\bar M$.
\end{proof}
The manifold-like spaces we defined in \Sec{the.space} provide a large
class of examples where locality holds:
\begin{maintheorem}
\label{mthm:manifold_like_spaces}
Let $(M,d,\mu)$ and $(\oS M, \oS d, \oS \mu)$ be two manifold-like
spaces. Let $\qf E$ and $\qf {\oS E}$ be the natural Dirichlet forms
on $M$ and $\oS M$ from \Prp{isomorphism_induces_form_equivalence}.
Let $p_t$ and $\oS p_t$ be the associated heat kernels and let
$\Proba$ and $\oS \Proba$ be the associated Wiener measures.
Let $U \subset M$ be open and assume there exists a measure
preserving isometry $\map \psi U {\oS U \subset \oS M}$.
Then the Wiener measures $\Proba$ and $\oS \Proba$ are identical on
$U$.
Let $V$ be open with $\clo V \subset U$ and let $x,y \in V$.
Then the difference of the heat kernels is
exponentially small, that is
\begin{align*}
\abs{p_t(x,y) - \oS p_t(\psi(x), \psi(y))}
\le C\e^{- \eps/t}
\end{align*}
for $\mu$-almost all $x,y \in V$ and all $t \in (0,T]$. The asymptotic expansions
of $p_t$ and $\oS p_t$ agree on $V$.
\end{maintheorem}
\begin{proof}
If $\psi$ is a measure preserving isometry, then the Dirichlet forms
and the operators are equivalent by
\Prp{isomorphism_induces_form_equivalence}. Hence we get equivalence
of the Wiener measures by \Mthm{locality_wiener_measure}. The heat
kernels associated to these Dirichlet forms satisfy the heat kernel
decay bound by \Cor{heat_kernel_decay}. Thus we can apply
\Mthm{locality_heat_kernel} and get locality of the heat kernel.
\end{proof}
\section{Example application: a two particle system on a metric graph}
\label{sec:example}
Let $G=(V,E)$ be a compact metric graph with vertex set $V$ and edge
set $E$. A metric graph is a combinatorial graph together with an
assignment of edge lengths. The operator is the Laplacian, that is,
the second derivative on the edges seen as intervals. We impose the
standard Kirchhoff boundary conditions at all vertices. This means
functions are continuous and the sum of the first derivatives on all
edges adjacent to a vertex is zero (the derivatives are oriented away
from the vertex). A metric graph together with the operator is called
a quantum graph. See for example~\cite{BerkolaikoKuchment13} for an
introduction and a survey of quantum graphs.
The manifold-like space we will look at is $M:= G \times G /{\sim}$
with $(x,y) \sim (y,x)$. This is a model from physics: it corresponds
to two particles moving freely on a metric graph. The particles do not
interact and they are indistinguishable, hence we factor out by the
symmetry.
This is a $2$-dimensional space which is neither a manifold nor an orbifold. To the best of our knowledge, the results in this paper are the first to explicitly show that these kinds of spaces do have `well-behaved' heat kernels.
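As a toy illustration (the representation and all names here are our own, not from the text), a configuration in $M$ can be stored as an unordered pair of edge coordinates, and the symmetrisation halves the volume of $G \times G$:

```python
from math import isclose

# Hypothetical minimal model of M = G x G / ((x,y) ~ (y,x)).
# A point on G is a pair (edge index, coordinate along that edge);
# a configuration of the two indistinguishable particles is the
# unordered pair {p, q}, stored via a canonical representative.

def canonical(p, q):
    """Canonical representative of the orbit {(p, q), (q, p)}."""
    return (p, q) if p <= q else (q, p)

def volume_of_M(edge_lengths):
    """vol(M) = L(G)^2 / 2: the diagonal is a null set and the
    Z_2-symmetry identifies (x, y) with (y, x)."""
    total_length = sum(edge_lengths)
    return total_length ** 2 / 2

# A 3-star with unit edge lengths: L(G) = 3, so vol(M) = 9/2.
```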
In order to compute the short-time asymptotics of the heat kernel for this
system, we will decompose the state space of the two particles into
various pieces. This is where the locality of the heat kernel comes
in. For each piece we will explicitly compute the heat kernel of a
different space which is locally isometric to the piece of $M$ but
globally a much simpler space. For these much simpler spaces one can
write down an explicit expression of the heat kernel and use it to
compute the asymptotics.
Pick a universal $\eps >0$ much smaller than any edge length. We say
that a particle is in the neighbourhood of a vertex if it is less than
$\eps$ away from it. We decompose $M$ into the following types of
pieces.
\begin{enumABC}
\item both particles are away from vertices and on distinct edges
\item both particles are away from vertices and on the same edge
\item one particle is in a neighbourhood of a vertex, the other one is
away from the vertices on an edge
\item the particles are in neighbourhoods of two distinct vertices
\item both particles are in the neighbourhood of a vertex
\end{enumABC}
Note that the cutoffs between the pieces need to be made in a way that
intersects the singular pieces of $M$ orthogonally; otherwise these
cutoffs will produce additional terms in the asymptotics.
For the pieces of type $A$, $C$ and $D$ the particles cannot run into
each other. So the heat kernel is just the product of the heat kernels
of the two pieces. For pieces of type $B$ and $E$ the heat kernel is
the product of the individual heat kernels modded out by the symmetry.
For a single particle away from vertices we can just use the real line
as a comparison space, the heat kernel is
$p_{\mathbb{R}}(t,x,y)=\frac{1}{\sqrt{4\pi t}}\e^{-(x-y)^2/4t}$. For a
particle in the neighbourhood of a vertex we use the star-shaped
metric graph consisting of $\deg(v)$ half-infinite edges all meeting in a
single central vertex as a comparison space. Its heat kernel can be
written down explicitly. For $\alpha, \beta \in \{1, \dots, \deg(v)\}$,
we write $x^{\alpha}$ when $x$ is on the edge $\alpha$. The heat
kernel is then given by
\begin{align*}
p_{S}(t,x^{\alpha},y^{\beta})
= \frac{1}{\sqrt{4\pi t}}\left(\delta_{\alpha \beta}\e^{-(x^{\alpha}-y^{\beta})^2/4t}
+ \sigma_{\alpha \beta}^v\e^{-(x^{\alpha}+y^{\beta})^2/4t}\right)
\end{align*}
(see e.g.~\cite{Roth83} or \cite{KPS07} for a more general version), where $\delta_{\alpha \beta}$ is the
Kronecker-$\delta$ and $\sigma_{\alpha \beta}^v$ is the matrix of the
boundary conditions at the vertex. For Kirchhoff conditions we have
$\sigma_{\alpha \beta}^v=-\delta_{\alpha \beta} + \frac{2}{\deg(v)}$.
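The stated properties of $\sigma^v$ and of $p_S$ can be checked numerically; the following sketch (our own, for a vertex of degree $3$) verifies that $\sigma^v$ is an involution, that $p_S$ is continuous at the vertex, and that the outward derivatives there sum to zero:

```python
import numpy as np

deg, t, y = 3, 0.01, 0.5          # degree, time, position of the source
sigma = -np.eye(deg) + 2.0 / deg  # Kirchhoff vertex matrix

def p_star(x, a, b):
    """Heat kernel p_S(t, x^a, y^b) of the star graph."""
    g = lambda s: np.exp(-s**2 / (4 * t)) / np.sqrt(4 * np.pi * t)
    return (1.0 if a == b else 0.0) * g(x - y) + sigma[a, b] * g(x + y)

# sigma^2 = I, as expected for a vertex scattering matrix
assert np.allclose(sigma @ sigma, np.eye(deg))

# continuity at the vertex: p_S(t, 0^a, y^0) does not depend on a
at_vertex = [p_star(0.0, a, 0) for a in range(deg)]
assert max(at_vertex) - min(at_vertex) < 1e-12

# Kirchhoff condition: outward derivatives at x = 0 sum to zero
h = 1e-6
flux = sum((p_star(h, a, 0) - p_star(0.0, a, 0)) / h for a in range(deg))
assert abs(flux) < 1e-4
```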
We will just carry out the computations for pieces of type $C$ and
$E$. The other types work in exactly the same way.
Let $N(v)$ denote the $\eps$-neighbourhood of the vertex $v$ and
$l_{\gamma}$ the length of the edge where the other particle is
located. The particles do not interact, hence we just multiply a heat
kernel of the star graph with a heat kernel on the real line.
We can now apply \Mthm{manifold_like_spaces}, which says that the heat kernel on $M$ in the region $C$ and the heat kernel on the comparison space differ only by an exponentially small error term. Hence,
on the diagonal the heat kernel of the region $C$ is given by
\begin{align*}
p_C(t,(x_1^{\alpha},x_2),(x_1^{\alpha},x_2)) =\frac{1}{4\pi
t}\left(1 + \sigma^v_{\alpha \alpha}\e^{-(x_1^{\alpha})^2/t}
\right) +O(t^\infty)
\end{align*}
Here and in further computations we use the notation $O(t^{\infty})$ to mean an error term that can be bounded as $O(t^k)$ for any $k$. Such an error will make no contribution to the asymptotics we are interested in.
Thus the contribution to the asymptotics is
\begin{align*}
&\int_C p_C(t,(x_1,x_2),(x_1,x_2))\dd x_1\dd x_2 \\
=&\frac{1}{4\pi t}(l_{\gamma}-2\eps)\eps \deg(v)
+ \frac{1}{4\pi t} (l_{\gamma}-2\eps)
\sum_{\alpha \sim v}\sigma^v_{\alpha \alpha}
\int_{0}^{\eps} \e^{-(x_1^{\alpha})^2/t}\dd x_1^{\alpha}\\
=&\frac{1}{4\pi t}\vol(C) + \frac{1}{8\sqrt{\pi
t}}(l_{\gamma}-2\eps)\sum_{\alpha \sim v}\sigma^v_{\alpha
\alpha} +O(t^\infty)
\end{align*}
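The $t^{-1/2}$ coefficient comes from the Gaussian integral $\int_{0}^{\eps}\e^{-x^2/t}\dd x = \frac{\sqrt{\pi t}}{2} + O(t^\infty)$; a quick numerical confirmation (our own, via the error function):

```python
from math import erf, sqrt, pi

# int_0^eps exp(-x^2/t) dx = (sqrt(pi t)/2) * erf(eps/sqrt(t)),
# and erf(eps/sqrt(t)) = 1 - O(t^infinity) once eps >> sqrt(t).
eps, t = 0.1, 1e-4
exact = sqrt(pi * t) / 2 * erf(eps / sqrt(t))
leading = sqrt(pi * t) / 2
relative_remainder = abs(exact - leading) / leading
# here eps/sqrt(t) = 10, so the remainder is far below machine precision
```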
To get an expression for the heat kernel on the diagonal of the region
$E$, we also need to know it in a neighbourhood of the diagonal. In a
neighbourhood of the diagonal we can just assume that both points on
$M$ are in the region $E$.
For two distinguishable particles, the heat kernel on the domain $N(v)
\times N(v)$ is just the product, that is
\begin{align*}
&p_{N(v) \times N(v)}(t,(x_1^{\alpha},x_2^{\beta}),(y_1^{\gamma},y_2^{\delta})) \\
=&\frac{1}{4\pi t} \left(\delta_{\alpha
\gamma}\e^{-(x_1^{\alpha}-y_1^{\gamma})^2/4t} + \sigma^v_{\alpha
\gamma}\e^{-(x_1^{\alpha}+y_1^{\gamma})^2/4t}\right)\cdot
\left(\delta_{\beta \delta}\e^{-(x_2^{\beta}-y_2^{\delta})^2/4t} +
\sigma^v_{\beta
\delta}\e^{-(x_2^{\beta}+y_2^{\delta})^2/4t}\right)
\end{align*}
We now factor out by the isometry $\phi: (x_1,x_2) \mapsto
(x_2,x_1)$. To get the heat kernel on $\oS{E}:=N(v) \times N(v)
\slash_{(x_1,x_2) \sim \phi(x_1,x_2)}$ we use a result
of~\cite{Donnelly79} which says that in our setting
$p_{\oS{E}}(t,x,y)=p_{N(v)\times N(v)}(t,x,y)+ p_{N(v)\times
N(v)}(t,\phi(x),y)$. We parametrise $\oS{E}$ as follows,
$(x_1^{\alpha},x_2^{\beta}) \in \oS{E}$ where $(x_1^{\alpha},x_2^{\beta})
\in N(v)\times N(v)$ and either $\alpha < \beta$ or ($\alpha=\beta$
and $x_1^{\alpha} \le x_2^{\alpha}$). Now by \Mthm{manifold_like_spaces} the heat kernel on $M$ in the region $E$ is equal to the one on $\oS{E}$ up to an exponentially small error term. Hence
\begin{align*}
&p_{E}(t,(x_1^{\alpha},x_2^{\beta}),(y_1^{\gamma},y_2^{\delta})) \\
=&\frac{1}{4\pi t} \left(\delta_{\alpha
\gamma}\e^{-(x_1^{\alpha}-y_1^{\gamma})^2/4t} + \sigma^v_{\alpha
\gamma}\e^{-(x_1^{\alpha}+y_1^{\gamma})^2/4t}\right)\cdot
\left(\delta_{\beta \delta}\e^{-(x_2^{\beta}-y_2^{\delta})^2/4t}
+ \sigma^v_{\beta \delta}\e^{-(x_2^{\beta}+y_2^{\delta})^2/4t}\right) \\
&+ \frac{1}{4\pi t} \left(\delta_{\beta
\gamma}\e^{-(x_2^{\beta}-y_1^{\gamma})^2/4t} + \sigma^v_{\beta
\gamma}\e^{-(x_2^{\beta}+y_1^{\gamma})^2/4t}\right)\cdot
\left(\delta_{\alpha \delta}\e^{-(x_1^{\alpha}-y_2^{\delta})^2/4t} +
\sigma^v_{\alpha
\delta}\e^{-(x_1^{\alpha}+y_2^{\delta})^2/4t}\right)
+O(t^\infty)
\end{align*}
On the diagonal this gives
\begin{align*}
&p_{E}(t,(x_1^{\alpha},x_2^{\beta}),(x_1^{\alpha},x_2^{\beta})) \\
=&\frac{1}{4\pi t} \left(1 + \sigma^v_{\alpha
\alpha}\e^{-(x_1^{\alpha})^2/t}\right)\cdot
\left(1 + \sigma^v_{\beta \beta}\e^{-(x_2^{\beta})^2/t}\right) \\
& + \frac{1}{4\pi t} \left(\delta_{\alpha
\beta}\e^{-(x_1^{\alpha}-x_2^{\beta})^2/4t} + \sigma^v_{\alpha
\beta}\e^{-(x_1^{\alpha}+x_2^{\beta})^2/4t}\right)^2
+O(t^\infty)
\end{align*}
We will separate the integration into two parts, first the region
where $\alpha < \beta$ and second the region where $\alpha=\beta$ and
$x_1^{\alpha}\le x_2^{\alpha}$. In the first region we can integrate
over the rectangle $[0,\eps]\times [0,\eps]$ for each pair of edges.
\begin{align*}
&\sum_{\alpha < \beta}\int_0^{\eps}\int_0^{\eps}
p_{E}(t,(x_1^{\alpha},x_2^{\beta}),(x_1^{\alpha},x_2^{\beta}))
\dd x_1^{\alpha}\dd x_2^{\beta}\\
=&\frac{1}{4\pi t}\sum_{\alpha < \beta}\int_0^{\eps}\Bigl(1 +
\sigma^v_{\alpha \alpha}\e^{-(x_1^{\alpha})^2/t}\Bigr)\dd x_1^{\alpha}
\int_0^{\eps} \Bigl(1 + \sigma^v_{\beta \beta}\e^{-(x_2^{\beta})^2/t}\Bigr)\dd x_2^{\beta}\\
& + \frac{1}{4\pi t}\sum_{\alpha < \beta}(\sigma^v_{\alpha
\beta})^2\int_0^{\eps}\int_0^{\eps}
\e^{-(x_1^{\alpha}+x_2^{\beta})^2/2t}\dd x_1^{\alpha}\dd x_2^{\beta} +O(t^\infty)\\
=&\frac{1}{4\pi t}\sum_{\alpha < \beta} \left( \eps +
\sigma^v_{\alpha \alpha}\frac{\sqrt{\pi}}{2}t^{1/2}\right)\cdot
\left( \eps + \sigma^v_{\beta
\beta}\frac{\sqrt{\pi}}{2}t^{1/2}\right)
+\frac{1}{4\pi t}\sum_{\alpha < \beta} (\sigma^v_{\alpha \beta})^2 t + O(t^\infty)\\
=&\frac{1}{4\pi t}\frac{1}{2}\deg(v)(\deg(v)-1)\eps^2
+ \frac{1}{8\sqrt{\pi t}}\sum_{\alpha < \beta}
(\sigma^v_{\alpha \alpha}+\sigma^v_{\beta \beta})\eps \\
& + \frac{1}{16}\sum_{\alpha < \beta}\sigma^v_{\alpha \alpha}
\sigma^v_{\beta \beta}
+\frac{1}{4\pi}\sum_{\alpha < \beta}(\sigma^v_{\alpha \beta})^2 + O(t^\infty)\\
\end{align*}
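The step $\int_0^{\eps}\int_0^{\eps}\e^{-(x_1+x_2)^2/2t}\dd x_1\dd x_2 = t + O(t^\infty)$ used above can be confirmed numerically; a midpoint-rule sketch (our own):

```python
from math import exp, isclose

# Midpoint-rule evaluation of the cross term over the square [0, eps]^2.
# Substituting u = x1 + x2 reduces it to int_0^infty u e^{-u^2/2t} du = t
# up to an O(t^infinity) tail, once eps >> sqrt(t).
eps, t, n = 0.1, 4e-4, 400
h = eps / n
total = 0.0
for i in range(n):
    for j in range(n):
        x1, x2 = (i + 0.5) * h, (j + 0.5) * h
        total += exp(-((x1 + x2) ** 2) / (2 * t)) * h * h
assert isclose(total, t, rel_tol=1e-3)
```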
For the second part, where $\alpha=\beta$, we will drop the superscript
and write $x_1$ and $x_2$ for $x_1^{\alpha}$ and $x_2^{\alpha}$ to
simplify notation.
\begin{align*}
&p_{E}(t,(x_1,x_2),(x_1,x_2)) \\
=&\frac{1}{4\pi t} \left(1 + \sigma^v_{\alpha
\alpha}\e^{-x_1^2/t}\right)\cdot \left(1 + \sigma^v_{\alpha
\alpha}\e^{-x_2^2/t}\right) + \frac{1}{4\pi t}
\left(\e^{-(x_1-x_2)^2/4t}
+ \sigma^v_{\alpha \alpha}\e^{-(x_1+x_2)^2/4t}\right)^2 +O(t^\infty)\\
=& \frac{1}{4\pi t} + \frac{1}{4\pi t}\sigma^v_{\alpha
\alpha}\e^{-x_1^2/t}+\frac{1}{4\pi t}\sigma^v_{\alpha
\alpha}\e^{-x_2^2/t}
+\frac{1}{4\pi t}(\sigma^v_{\alpha \alpha})^2\e^{-(x_1^2+x_2^2)/t}\\
&+\frac{1}{4\pi t} \e^{-(x_1-x_2)^2/2t} +\frac{1}{2\pi
t}\sigma^v_{\alpha \alpha}\e^{-(x_1^2+x_2^2)/2t}
+\frac{1}{4\pi t}(\sigma^v_{\alpha \alpha})^2\e^{-(x_1+x_2)^2/2t} +O(t^\infty)\\
\end{align*}
In the second region we will integrate over a difference of two
triangles to ensure that the region meets the boundary orthogonally.
The first triangle is described by $0 \le x_2 \le x_1 \le \eps$ with
area $\frac{1}{2}\eps^2$. The second triangle is the region where
$2^{-1/2}\eps \le x_1 \le \eps$ and $2^{1/2}\eps-x_1 \le x_2 \le x_1$
with area
$\int_{2^{-1/2}\eps}^{\eps}(2x_1-2^{1/2}\eps)\dd x_1=(\frac{3}{2}-\sqrt{2})\eps^2$.
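As an independent recomputation of the area of this second triangle (our own check): the width of the region at $x_1$ is $2x_1 - 2^{1/2}\eps$, and integrating it over $[2^{-1/2}\eps, \eps]$ gives $(\frac{3}{2}-\sqrt{2})\eps^2$.

```python
from math import sqrt, isclose

# Width of the second triangle at x1 is 2*x1 - sqrt(2)*eps for
# x1 in [eps/sqrt(2), eps]; its integral is the area.
eps = 1.0
lo, hi = eps / sqrt(2), eps
antiderivative = lambda x: x**2 - sqrt(2) * eps * x
area = antiderivative(hi) - antiderivative(lo)
assert isclose(area, (1.5 - sqrt(2)) * eps**2)
```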
Therefore
\begin{align*}
& 4\pi t\int_0^{\eps}\int_0^{x_1}p_{E}(t,(x_1,x_2),(x_1,x_2)) \dd x_2\dd x_1 \\
& - 4\pi t\int_{2^{-1/2}\eps}^{\eps}\int_{2^{1/2}\eps-x_1}^{x_1}
p_{E}(t,(x_1,x_2),(x_1,x_2)) \dd x_2\dd x_1 \\
=& \frac{1}{2}\eps^2 + \sigma^v_{\alpha
\alpha}\int_0^{\eps}x_1\e^{-x_1^2/t}\dd x_1
+\sigma^v_{\alpha \alpha}\int_0^{\eps}\int_0^{x_1}\e^{-x_2^2/t}\dd x_2\dd x_1 \\
& + (\sigma^v_{\alpha
\alpha})^2\int_0^{\eps}\int_0^{x_1}\e^{-(x_1^2+x_2^2)/t}\dd
x_2\dd x_1
+\int_0^{\eps}\int_0^{x_1} \e^{-(x_1-x_2)^2/2t} \dd x_2\dd x_1\\
&+2\sigma^v_{\alpha
\alpha}\int_0^{\eps}\int_0^{x_1}\e^{-(x_1^2+x_2^2)/2t} \dd x_2\dd
x_1
+(\sigma^v_{\alpha \alpha})^2\int_0^{\eps}\int_0^{x_1}\e^{-(x_1+x_2)^2/2t}
\dd x_2\dd x_1 \\
& - (\frac{3}{8}-2^{-1/2})\eps^2
- \sigma^v_{\alpha \alpha}\int_{2^{-1/2}\eps}^{\eps}(2x_1-2^{1/2}\eps)
\e^{-x_1^2/t} \dd x_1\\
& -\sigma^v_{\alpha
\alpha}\int_{2^{-1/2}\eps}^{\eps}\int_{2^{1/2}\eps-x_1}^{x_1}\e^{-x_2^2/t}
\dd x_2\dd x_1
-(\sigma^v_{\alpha \alpha})^2\int_{2^{-1/2}\eps}^{\eps}
\int_{2^{1/2}\eps-x_1}^{x_1}\e^{-(x_1^2+x_2^2)/t} \dd x_2\dd x_1\\
&-\int_{2^{-1/2}\eps}^{\eps}\int_{2^{1/2}\eps-x_1}^{x_1}
\e^{-(x_1-x_2)^2/2t} \dd x_2\dd x_1
-2\sigma^v_{\alpha \alpha}\int_{2^{-1/2}\eps}^{\eps}
\int_{2^{1/2}\eps-x_1}^{x_1}\e^{-(x_1^2+x_2^2)/2t} \dd x_2\dd x_1\\
&-(\sigma^v_{\alpha
\alpha})^2\int_{2^{-1/2}\eps}^{\eps}\int_{2^{1/2}\eps-x_1}^{x_1}\e^{-(x_1+x_2)^2/2t}
\dd x_2\dd x_1
+O(t^\infty)\\
=& \frac{1}{2}\eps^2 + \sigma^v_{\alpha \alpha}\frac{t}{2}
+\sigma^v_{\alpha \alpha}(-\frac{1}{2}t +
\frac{\sqrt{\pi}}{2}\eps t^{1/2}) + (\sigma^v_{\alpha \alpha})^2\frac{1}{8}\pi t\\
& - t + 2^{-1/2}\sqrt{\pi}\eps t^{1/2}
+2\sigma^v_{\alpha \alpha}\frac{1}{4}\pi t +
(\sigma^v_{\alpha \alpha})^2\frac{1}{2}t \\
&- (\frac{3}{2}-\sqrt{2})\eps^2 - 0 -0 - 0 -
2^{-1/2}\sqrt{\pi}(1-2^{-1/2})\eps t^{1/2}+\frac{t}{2}\\
& - 0 - 0 + O(t^\infty) \\
=& \left(\frac{1}{2}- \frac{3}{2}+\sqrt{2}\right)\eps^2 +
\sigma^v_{\alpha \alpha}\frac{\sqrt{\pi}}{2}\eps t^{1/2} +
\frac{\sqrt{\pi}}{2}\eps t^{1/2}\\
&+ (\sigma^v_{\alpha \alpha})^2\frac{1}{8}\pi t - t +
\sigma^v_{\alpha \alpha}\frac{1}{2}\pi t +(\sigma^v_{\alpha
\alpha})^2\frac{1}{2}t +\frac{t}{2} + O(t^\infty)
\end{align*}
The various terms that give zero contribution can all be bounded by
noticing that the integrand can be bounded by $\e^{-C/t}$
for some constant $C>0$ over the entire domain of integration.
Let $\Omega$ denote the region we integrated over, then this gives
the following contribution
\begin{align*}
& \int_\Omega p_{E}(t,(x_1,x_2),(x_1,x_2)) \dd\vol \\
=& \frac{1}{4\pi t}\vol(\Omega)
+ \frac{1}{8\sqrt{\pi t}}\left( \sigma^v_{\alpha \alpha}+1\right)\eps \\
& -\frac{1}{8\pi} + \frac{1}{8}\sigma^v_{\alpha \alpha} +
\left(\frac{1}{32}+\frac{1}{8\pi}\right) (\sigma^v_{\alpha
\alpha})^2 + O(t^\infty)
\end{align*}
so for the entire region $E$ we get
\begin{align*}
& \int_{E}p_{E}(t,(x_1,x_2),(x_1,x_2)) \dd\vol \\
=& \frac{1}{4\pi t}\vol(E) + \frac{1}{8\sqrt{\pi
t}}\sum_{\alpha}\left( \sigma^v_{\alpha \alpha}+1\right)\eps +
\frac{1}{8\sqrt{\pi t}}\sum_{\alpha < \beta}
(\sigma^v_{\alpha \alpha}+\sigma^v_{\beta \beta})\eps \\
& -\frac{\deg(v)}{8\pi} + \frac{1}{8}\sum_{\alpha}\sigma^v_{\alpha
\alpha} + \left(\frac{1}{32}+\frac{1}{8\pi}\right)\sum_{\alpha}
(\sigma^v_{\alpha \alpha})^2 \\
& + \frac{1}{16}\sum_{\alpha < \beta}\sigma^v_{\alpha \alpha}
\sigma^v_{\beta \beta}
+\frac{1}{4\pi}\sum_{\alpha < \beta}(\sigma^v_{\alpha \beta})^2 + O(t^\infty)\\
\end{align*}
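The bookkeeping of the constant term for the region near the diagonal can be double-checked: dividing the collected $t$-coefficients $\frac{\pi}{8}s^2 - 1 + \frac{\pi}{2}s + \frac{1}{2}s^2 + \frac{1}{2}$ by $4\pi$ must reproduce $-\frac{1}{8\pi} + \frac{s}{8} + (\frac{1}{32}+\frac{1}{8\pi})s^2$, where $s$ stands for $\sigma^v_{\alpha\alpha}$. A numerical sketch (our own):

```python
from math import pi, isclose

# Collected t-coefficients from the computation above, divided by 4*pi,
# versus the stated constant term -1/(8 pi) + s/8 + (1/32 + 1/(8 pi)) s^2.
for s in (-1.0, -0.5, 0.3, 1.0):
    lhs = (pi / 8 * s**2 - 1 + pi / 2 * s + s**2 / 2 + 0.5) / (4 * pi)
    rhs = -1 / (8 * pi) + s / 8 + (1 / 32 + 1 / (8 * pi)) * s**2
    assert isclose(lhs, rhs)
```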
After doing a similar computation for the pieces of type $A$, $B$ and
$D$ we get the following heat asymptotics.
\begin{theorem}
\label{thm:main-example}
Let $M:= G \times G / ((x,y)\sim(y,x))$ where $G=(V,E)$ is a metric
graph with the standard Kirchhoff boundary conditions at all
vertices. Then, as $t \to 0^+$, the heat kernel asymptotics of $M$ are
\begin{align*}
& \int_{M} p(t,(x_1,x_2),(x_1,x_2)) \dd x_1\dd x_2 \\
=& \frac{1}{4\pi t}\vol(M)
+ \frac{1}{8\sqrt{\pi t}}\left(2L(G)(|V|-|E|) + \sqrt{2}L(G) \right) \\
&+\frac{1}{16} \sum_{v \neq v'}(2-\deg(v))(2-\deg(v'))
+ \sum_{v \in V}\left(\frac{3}{8}-\frac{\deg(v)}{4}
+\frac{\deg(v)^2}{32} \right)
+ O(t^\infty)\\
\end{align*}
\end{theorem}
The first two terms of the asymptotic expansion are the volume and the
length of the boundary (the boundary here consists of the terms from
the product and the symmetrization). The constant term describes the
corners and again has contributions from the product and the
symmetrization.
A vertex of degree $2$ in a metric graph with Kirchhoff boundary
conditions imposes no conditions on the functions. Hence such a vertex
should be invisible to the heat kernel, and one can easily check that
degree-two vertices give no contribution to the asymptotics above. A
$\frac{\pi}{4}$ in $M$. This can be compared to the known heat
asymptotics of planar polygons~\cite{BergSrisatkunarajah88}. The
constant term contribution matches the expected $\frac{5}{32}$.
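The comparison claimed here is easy to verify (our own check): the per-vertex constant term from \Thm{main-example} is $\frac38 - \frac{\deg(v)}{4} + \frac{\deg(v)^2}{32}$, and the corner contribution of a planar wedge of opening angle $\gamma$ is the classical $\frac{\pi^2-\gamma^2}{24\pi\gamma}$:

```python
from fractions import Fraction
from math import pi, isclose

def vertex_constant(deg):
    """Per-vertex constant term 3/8 - deg/4 + deg^2/32 from the theorem."""
    d = Fraction(deg)
    return Fraction(3, 8) - d / 4 + d * d / 32

# A degree-one vertex produces a wedge of opening angle pi/4 in M;
# the classical polygon corner term is (pi^2 - gamma^2)/(24 pi gamma).
gamma = pi / 4
corner = (pi**2 - gamma**2) / (24 * pi * gamma)

assert vertex_constant(1) == Fraction(5, 32)
assert isclose(corner, 5 / 32)
# degree-two vertices are invisible, as discussed above:
assert vertex_constant(2) == 0
```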
\newcommand{\Figs}[2]{Figures~\ref{fig:#1} and~\ref{fig:#2}}
\newcommand{\Footnote}[1]{Footnote~\ref{fn:#1}}
\newcommand{\Footnotes}[2]{Footnotes~\ref{fn:#1} and~\ref{fn:#2}}
\newcommand{\FootnoteS}[2]{Footnotes~\ref{fn:#1}--\ref{fn:#2}}
\newcommand{\Thm}[1]{Theorem~\ref{thm:#1}}
\newcommand{\Thms}[2]{Theorems~\ref{thm:#1} and~\ref{thm:#2}}
\newcommand{\ThmS}[2]{Theorems~\ref{thm:#1}--\ref{thm:#2}}
\newcommand{\Thmenum}[2]{Theorem~\ref{thm:#1}~(\ref{#2})}
\newcommand{\Thmenums}[3]{Theorem~\ref{thm:#1}~(\ref{#2}) and~(\ref{#3})}
\newcommand{\ThmenumS}[3]{Theorem~\ref{thm:#1}~(\ref{#2})--(\ref{#3})}
\newcommand{\Mthm}[1]{Main Theorem~\ref{mthm:#1}}
\newcommand{\Mthms}[2]{Main Theorems~\ref{mthm:#1} and~\ref{mthm:#2}}
\newcommand{\Mthmss}[3]{Main Theorems~\ref{mthm:#1},~\ref{mthm:#2}
and~\ref{mthm:#3}}
\newcommand{\Ex}[1]{Example~\ref{ex:#1}}
\newcommand{\Exs}[2]{Examples~\ref{ex:#1} and~\ref{ex:#2}}
\newcommand{\ExS}[2]{Examples~\ref{ex:#1}--\ref{ex:#2}}
\newcommand{\Exenum}[2]{Example~\ref{ex:#1}~(\ref{#2})}
\newcommand{\Exenums}[3]{Example~\ref{ex:#1}~(\ref{#2}) and~(\ref{#3})}
\newcommand{\ExenumS}[3]{Example~\ref{ex:#1}~(\ref{#2})--(\ref{#3})}
\newcommand{\Lem}[1]{Lemma~\ref{lem:#1}}
\newcommand{\Lems}[2]{Lemmata~\ref{lem:#1} and~\ref{lem:#2}}
\newcommand{\LemS}[2]{Lemmata~\ref{lem:#1}--\ref{lem:#2}}
\newcommand{\Lemenum}[2]{Lemma~\ref{lem:#1}~(\ref{#2})}
\newcommand{\Lemenums}[3]{Lemma~\ref{lem:#1}~(\ref{#2}) and~(\ref{#3})}
\newcommand{\LemenumS}[3]{Lemma~\ref{lem:#1}~(\ref{#2})--(\ref{#3})}
\newcommand{\Cor}[1]{Corollary~\ref{cor:#1}}
\newcommand{\Cors}[2]{Corollaries~\ref{cor:#1} and~\ref{cor:#2}}
\newcommand{\CorS}[2]{Corollaries~\ref{cor:#1}--\ref{cor:#2}}
\newcommand{\Corenum}[2]{Corollary~\ref{cor:#1}~(\ref{#2})}
\newcommand{\Corenums}[3]{Corollary~\ref{cor:#1}~(\ref{#2}) and~(\ref{#3})}
\newcommand{\CorenumS}[3]{Corollary~\ref{cor:#1}~(\ref{#2})--(\ref{#3})}
\newcommand{\Prp}[1]{Proposition~\ref{prp:#1}}
\newcommand{\Prps}[2]{Propositions~\ref{prp:#1} and~\ref{prp:#2}}
\newcommand{\PrpS}[2]{Propositions~\ref{prp:#1}--\ref{prp:#2}}
\newcommand{\Prpenum}[2]{Proposition~\ref{prp:#1}~(\ref{#2})}
\newcommand{\Prpenums}[3]{Proposition~\ref{prp:#1}~(\ref{#2}) and~(\ref{#3})}
\newcommand{\PrpenumS}[3]{Proposition~\ref{prp:#1}~(\ref{#2})--(\ref{#3})}
\newcommand{\Rem}[1]{Remark~\ref{rem:#1}}
\newcommand{\Rems}[2]{Remarks~\ref{rem:#1} and~\ref{rem:#2}}
\newcommand{\RemS}[2]{Remarks~\ref{rem:#1}--\ref{rem:#2}}
\newcommand{\Remenum}[2]{Remark~\ref{rem:#1}~(\ref{#2})}
\newcommand{\Remenums}[3]{Remark~\ref{rem:#1}~(\ref{#2}) and~(\ref{#3})}
\newcommand{\RemenumS}[3]{Remark~\ref{rem:#1}~(\ref{#2})--(\ref{#3})}
\newcommand{\Def}[1]{Definition~\ref{def:#1}}
\newcommand{\Defs}[2]{Definitions~\ref{def:#1} and~\ref{def:#2}}
\newcommand{\DefS}[2]{Definitions~\ref{def:#1}--\ref{def:#2}}
\newcommand{\Defenum}[2]{Definition~\ref{def:#1}~(\ref{#2})}
\newcommand{\Defenums}[3]{Definition~\ref{def:#1}~(\ref{#2}) and~(\ref{#3})}
\newcommand{\DefenumS}[3]{Definition~\ref{def:#1}~(\ref{#2})--(\ref{#3})}
\newcommand{\Ass}[1]{Assumption~\ref{ass:#1}}
\newcommand{\Asss}[2]{Assumptions~\ref{ass:#1} and~\ref{ass:#2}}
\newcommand{\AssS}[2]{Assumptions~\ref{ass:#1}--\ref{ass:#2}}
\newcommand{\Assenum}[2]{Assumption~\ref{ass:#1}~(\ref{#2})}
\newcommand{\Assenums}[3]{Assumption~\ref{ass:#1}~(\ref{#2}) and~(\ref{#3})}
\newcommand{\AssenumS}[3]{Assumption~\ref{ass:#1}~(\ref{#2})--(\ref{#3})}
\newcommand{\Not}[1]{Notation~\ref{not:#1}}
\newcommand{\Nots}[2]{Notations~\ref{not:#1} and~\ref{not:#2}}
\newcommand{\NotS}[2]{Notations~\ref{not:#1}--\ref{not:#2}}
\newcommand{\Notenum}[2]{Notation~\ref{not:#1}~(\ref{#2})}
\newcommand{\Notenums}[3]{Notation~\ref{not:#1}~(\ref{#2}) and~(\ref{#3})}
\newcommand{\NotenumS}[3]{Notation~\ref{not:#1}~(\ref{#2})--(\ref{#3})}
\newcommand{\abs}[2][{}]{\lvert{#2}\rvert_{{#1}}}
\newcommand{\abssqr}[2][{}]{\lvert{#2}\rvert^2_{#1}}
\newcommand{\bigabs}[2][{}]{\bigl\lvert{#2}\bigr\rvert_{#1}}
\newcommand{\bigabssqr}[2][{}]{\bigl\lvert{#2}\bigr\rvert^2_{#1}
\newcommand{\Bigabs}[2][{}]{\Bigl\lvert{#2}\Bigr\rvert_{#1}}
\newcommand{\Bigabssqr}[2][{}]{\Bigl\lvert{#2}\Bigr\rvert^2_{#1}
\newcommand{\BIGabs}[2][{}]{\left\lvert{#2}\right\rvert_{#1}}
\newcommand{\BIGabssqr}[2][{}]{\left\lvert{#2}\right\rvert^2_{#1}
\newcommand{\|}{\|}
\newcommand{\bignormsymb}[1]{#1\|}
\newcommand{\norm}[2][{}]{\|{#2}\|_{{#1}}}
\newcommand{\normsqr}[2][{}]{\|{#2}\|^2_{#1}}
\newcommand{\bignorm}[2][{}]{\bignormsymb{\bigl}{#2}\bignormsymb{\bigr}_{#1}}
\newcommand{\bignormsqr}[2][{}]{\bignormsymb{\bigl}{#2}%
\bignormsymb{\bigr}^2_{#1}
\newcommand{\Bignorm}[2][{}]{\bignormsymb{\Bigl}{#2}\bignormsymb{\Bigr}_{#1}}
\newcommand{\Bignormsqr}[2][{}]{\bignormsymb{\Bigl}{#2}%
\bignormsymb{\Bigr}^2_{#1}
\newcommand{\BIGnorm}[2][{}]{\bignormsymb{\left}{#2}\Bignormsymb{\right}_{#1}}
\newcommand{\BIGnormsqr}[2][{}]{\bignormsymb{\left}{#2}%
\bignormsymb{\right}^2_{#1}
\newcommand{\iprod}[3][{}]{\langle{#2},{#3}\rangle_{#1}}
\newcommand{\bigiprod}[3][{}]{\bigl\langle{#2},{#3}\bigr\rangle_{#1}}
\newcommand{\Bigiprod}[3][{}]{\Bigl\langle{#2},{#3}\Bigr\rangle_{#1}}
\newcommand{\set}[2]{\{ \, #1 \, | \, #2 \, \} }
\newcommand{\bigset}[2]{\bigl\{ \, #1 \, \bigl|\bigr. \, #2 \, \bigr\} }
\newcommand{\Bigset}[2]{\Bigl\{ \, #1 \, \Bigl|\Bigr. \, #2 \, \Bigr\} }
\newcommand{\compl}[1]{#1^{\mathrm c}}
\DeclareMathOperator*{\bigdcup}{\mathaccent\cdot{\bigcup}}
\DeclareMathOperator*{\dcup} {\mathaccent\cdot\cup}
\newcommand{\map}[3]{ #1 \colon #2 \longrightarrow #3}
\newcommand{\embmap}[3]{ #1 \colon #2 \hookrightarrow #3}
\newcommand{\bd} {\partial}
\newcommand{\clo}[2][]{\overline{{#2}}^{#1}}
\newcommand{\intr}[1]{\ring{{#1}}}
\newcommand{\restr}[1]{{\restriction}_{#1}}
\def\Xint#1{\mathchoice
{\XXint\displaystyle\textstyle{#1}}%
{\XXint\textstyle\scriptstyle{#1}}%
{\XXint\scriptstyle\scriptscriptstyle{#1}}%
{\XXint\scriptscriptstyle\scriptscriptstyle{#1}}%
\!\int}
\def\XXint#1#2#3{{\setbox0=\hbox{$#1{#2#3}{\int}$}
\vcenter{\hbox{$#2#3$}}\kern-.5\wd0}}
\def\XXsum#1#2#3{{\setbox0=\hbox{$#1{#2#3}{\int}$}
\vcenter{\hbox{$#2#3$}}\kern-.60\wd0}}
\def\Xsum#1{\mathchoice
{\XXsum\displaystyle\textstyle{#1}}%
{\XXsum\textstyle\scriptstyle{#1}}%
{\XXsum\scriptstyle\scriptscriptstyle{#1}}%
{\XXsum\scriptscriptstyle\scriptscriptstyle{#1}}%
\!\sum}
\newcommand{\Xint=}{\Xint=}
\newcommand{\dashint}{\Xint-}
\newcommand{\dashsum}{\Xsum-}
\newcommand{\avint}{{\textstyle\dashint}}
\newcommand{\avsum}[1][{}]{{}^{#1}\text-\hspace*{-1ex}\Sigma}
\newcommand{\card}[1]{\lvert#1\rvert}
\DeclareMathOperator{\const} {const}
\DeclareMathOperator{\dd} {d\!}
\DeclareMathOperator{\dilog} {dilog}
\DeclareMathOperator{\dom} {dom}
\DeclareMathOperator{\ran} {ran}
\DeclareMathOperator{\id} {id}
\DeclareMathOperator{\ind} {ind}
\DeclareMathOperator{\injrad} {inj\,rad}
\DeclareMathOperator{\Ric} {Ric}
\DeclareMathOperator{\Scal} {Scal}
\DeclareMathOperator{\sgn} {sgn}
\DeclareMathOperator{\supp} {supp}
\DeclareMathOperator{\vol} {vol}
\DeclareMathOperator{\dvol} {d\, vol}
\DeclareMathOperator{\tr} {tr}
\DeclareMathOperator{\Hom} {Hom}
\newcommand{\de} {\mathord{\mathrm d}}
\newcommand{\specsymb} {\sigma}
\newcommand{\spec}[2][{}] {\specsymb_{\mathrm{#1}}(#2)}
\newcommand{\bigspec}[2][{}] {\specsymb_{\mathrm{#1}}\bigl(#2\bigr)}
\newcommand{\Bigspec}[2][{}] {\specsymb_{\mathrm{#1}}\Bigl(#2\Bigr)}
\newcommand{\essspec}[1]{\spec[ess] {#1}}
\newcommand{\disspec}[1]{\spec[disc]{#1}}
\newcommand{\eps}{\varepsilon}
\renewcommand{\phi}{\varphi}
\renewcommand{\rho}{\varrho}
\DeclareMathOperator{\myRe} {Re}
\renewcommand{\Re} {\myRe}
\DeclareMathOperator{\myIm} {Im}
\renewcommand{\Im} {\myIm}
\newcommand{\conj}[1]{\overline {#1}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbb{C}}{\mathbb{C}}
\newcommand{\mathbb{N}}{\mathbb{N}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\mathbb{Q}}{\mathbb{Q}}
\newcommand{\Sphere}{\mathbb{S}}
\newcommand{\Torus}{\mathbb{T}}
\newcommand{\1}{\mathbbm 1}
\newcommand{\e}{\mathrm e}
\newcommand{\im}{\mathrm i}
\newcommand{\wt}{\widetilde}
\newcommand {\qf}[1]{\mathcal{#1}}
\newcommand{\HS}{\mathcal H}
\newcommand{\HSaux}{\mathcal G}
\newcommand{\ring \HS}{\ring \HS}
\newcommand{\WS}{\mathcal W}
\newcommand{\Sobsymb} {\mathsf H}
\newcommand{\Sobnsymb} {\ring{\mathsf H}}
\newcommand{\SobWsymb}{\mathsf W}
\newcommand{\SobnWsymb}{\ring{\mathsf W}}
\newcommand{\Contsymb} {\mathsf C}
\newcommand{\Lsymb} {\mathsf L}
\newcommand{\lsymb} {\ell}
\newcommand{\TRspace}{\BdOpsymb_1
\newcommand{\HSspace}{\BdOpsymb_2
\newcommand{\Sobspace}[1][1]{\Sobsymb^{#1}}
\newcommand{\Sobnspace}[1][1]{\Sobnsymb^{#1}}
\newcommand{\SobWspace}[2][p]{\SobWsymb_{#1}^{#2}}
\newcommand{\SobnWspace}[2][p]{\SobnWsymb_{#1}^{#2}}
\newcommand{\Contspace}[1][{}]{\Contsymb^{#1}}
\newcommand{\Contlocspace}[1][{}]{\Contsymb^{#1}_\loc}
\newcommand{\Lpspace}[1][p] {\Lsymb_{#1}}
\newcommand{\lpspace}[1][p] {\lsymb_{#1}}
\newcommand{\Lsqrspace} {\Lpspace[2]}
\newcommand{\lsqrspace} {\lpspace[2]}
\newcommand{\BdOpsymb} {\mathcal B}
\newcommand{\BdOp}[2][{}]{\BdOpsymb_{#1}({#2})}
\newcommand{\Unitary}[1]{\mathcal U({#1})}
\newcommand{\Ci} [2][{}]{\Contspace [\infty]_{#1} ({#2})}
\newcommand{\Cci}[1]{\Ci[\mathrm c]{#1}
\newcommand{\Cont}[2][{}]{\Contspace[#1]({#2})}
\newcommand{\Contb}[2][{}]{\Contspace[#1]_{\mathrm b}({#2})}
\newcommand{\Contc}[2][{}]{\Contspace[#1]_{\mathrm c}({#2})}
\newcommand{\Contloc}[2][{}]{\Contlocspace[#1]({#2})}
\newcommand{\Schwartz}[1]{\mathcal S(#1)}
\newcommand{\Lp}[2][p]{\Lpspace [#1]({#2})}
\newcommand{\Lploc}[2][p]{\Lpspace [#1,\mathrm{loc}]({#2})}
\newcommand{\lp}[2][p]{\lpspace [#1]({#2})}
\newcommand{\Lsqr}[2][{}]{\Lsqrspace^{#1}({#2})}
\newcommand{\Lsqrx}[2][{}]{\Lpspace [2,\mathrm{#1}]({#2})} %
\newcommand{\Lsqrloc}[2][{}]{\Lpspace [2,\mathrm{loc}]^{#1}({#2})}
\newcommand{\lsqr}[2][{}]{\lsqrspace^{#1}({#2})}
\newcommand{\Linfty}[2][{}]{\Lpspace [\infty] ^{#1}({#2})}
\newcommand{\Linftyloc}[2][{}]{\Lpspace [\infty,\mathrm{loc}]^{#1} ({#2})}
\newcommand{\linfty}[1]{\lpspace [\infty] ({#1})}
\newcommand{\Sob}[2][1]{\Sobspace [#1]({#2})}
\newcommand{\Sobn}[2][1]{\Sobnspace [#1]({#2})}
\newcommand{\Sobnx}[3][1]{\Sobnspace [#1]_{{#2}}({#3})}
\newcommand{\Sobx}[3][1]{\Sobspace [#1]_{{#2}}({#3})}
\newcommand{\SobW}[3][p]{\SobWspace[#1]{#2}(#3)}
\newcommand{\SobnW}[3][p]{\SobnWspace[#1]{#2}(#3)}
\newcommand{\Neu}{\mathrm {Neu}}
\newcommand{\Dir}{\mathrm {Dir}}
\newcommand{\NeuDir}{{\Neu \Dir}}
\newcommand{\DirNeu}{{\Dir \Neu}}
\newcommand{\Delta}{\Delta}
\newcommand{\laplacianD}{\Delta^{\!\Dir}}
\newcommand{\laplacianN}{\Delta^{\!\Neu}}
\newcommand{\laplacianND}{\Delta^{\!\NeuDir}}
\newcommand{\err}{\mathrm o}
\newcommand{\mathrm O}{\mathrm O}
\newcommand {\loc}{\mathrm{loc}}
\newcommand{\spacetext}[2]{\hspace*{#1}\text{#2}\hspace*{#1}}
\newcommand{\quadtext}[1]{\spacetext{1em}{#1}
\newcommand{\qquadtext}[1]{\spacetext{2em}{#1}}
|
1,314,259,994,181 | arxiv |
\section{Introduction}
\label{sec:intro}
\vspace{0.5em}
\emph{``To measure is to know.''}\hfill--- Lord Kelvin
\vspace{0.5em}
Internode communication is the crux of supercomputing.
We can classify the various components involved in sending a message
into one of three categories: CPU, I/O, or network fabric,
as shown in \figref{fig:components}. Software stacks on the CPU
include the Message Passing Interface (MPI) and the communication
protocol processing in the underlying communication frameworks.
I/O encompasses subsystems on the processor chip such as PCI Express (PCIe).
Network components are the high-performance interconnect's switches and physical wire. Each of
these components on the critical path of communication poses an
opportunity for optimization. However, blindly optimizing all of the
components is impractical considering the technical challenges
associated with each and the wide variety of use cases. For example,
the latency of sending a large message is driven by the time spent
in the network components. Hence, optimizing the software stack for
this case would be a futile effort. On the other hand, the time spent
in the software stack during the propagation of a small message is a
considerable portion of the overall latency and, hence, optimizing the
time spent in the CPU would be beneficial. Therefore, it is
important to understand where to focus our optimization efforts.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/components.pdf}
\end{center}
\caption{Components involved in the transmission of a message (on one end).}
\vspace{-1em}
\label{fig:components}
\vspace{-1em}
\end{figure}
With the apparent end of Moore's law, the architectures of recent servers
now feature a large number of cores per node~\cite{tx2,xeon}, a trend
that is likely to continue~\cite{thakur2010mpi}. Furthermore,
other on-node resources such as memory, translation lookaside buffers, and
network-hardware registers are not growing at the same rate. Since developers
desire to solve the same problem faster on newer machines, they must rely
on strong scaling as the amount of memory per core decreases (assuming a
static split of one process per core). At the limits of strong scaling lies
fine-grained communication: each core participates in communication directly,
eliminating the need to synchronize with the other cores on a node. Since each core
communicates independently of the others, the messages involved are small.
Hence, we focus our analysis on the communication performance of small
messages since it is a critical factor in overall performance.
The CPU, I/O, and network categories contribute comparably to the
communication performance of small messages: the times spent in each are on
the same order of magnitude on state-of-the-art systems (we demonstrate
this in \secref{sec:complete}). Hence, optimizations of each category's constituents
would be beneficial. This raises the question: \emph{how much will optimizing
component X improve the overall communication performance?} The answer
to this question can guide the research and engineering efforts of software
developers, system architects, and the HPC community at large. Typically, one
measures the limits of a system's communication performance using
injection-rate and latency tests. But such measurements do not
inform the researcher where time is being spent or why the performance of one
version of the system varies from that of another.
In this paper, we answer the posed question by analyzing the
time spent in state-of-the-art software and system components during
the transmission of messages. We classify the components into
two levels: low and high. Low-level components include those
that are not exposed to a typical end-user of an HPC system. These include
the low-level communication framework ({\it e.g.}\ Verbs), the I/O subsystem ({\it e.g.}\ PCIe),
and network components ({\it e.g.}\ Mellanox InfiniBand fiber). High-level components
include programming model frameworks such as MPI.
\minititle{Contributions and findings}. This paper makes the following
contributions.
\begin{enumerate}[leftmargin=*]
\item \textbf{Detailed breakdown}. As a first step to answer the posed
question, we construct analytical models of the overall injection overhead and
end-to-end latency of a system. Our models explain where and why time is
spent during the transmission of a message. By attributing times to the models'
constituents using precise CPU timers and traces from a PCIe analyzer, we show
how much time is spent in low-level (\secref{sec:lowlevel}) and high-level
(\secref{sec:highlevel}) components, and thus present a detailed breakdown of
high-performance communication. Our analytical models estimate the observed
performance within a 5\% margin of error on Arm ThunderX2. This work is the
first of its kind on Arm. We use the breakdown to provide key insights in
\secref{sec:complete}.
\item \textbf{Measurement methodology}. We present a detailed methodology to
measure the overhead of each component such as the PCIe wire, the interconnect's
wire, etc. Researchers with access to a similar analysis infrastructure described in
\secref{sec:setup} can then measure overheads for components of their interest
using our methodology.
\item \textbf{Simulated optimizations}. Finally, we answer the aforementioned
question in \secref{sec:simulated} through a what-if analysis. We discuss the
impact and likelihood of a set of optimizations that target the CPU, I/O, and
network components of high-performance communication.
\end{enumerate}
\section{Breakdown of the Higher Level}
\label{sec:highlevel}
In this section, we present a breakdown of the time
spent in the high-level components of high-performance
communication. These consist of the high-level
communication protocols (HLP). The most commonly
used programming model for large-scale parallel
systems today is MPI~\cite{thakur2010mpi}. Hence, at
the highest level of the software stack sits an MPI
library that implements the MPI standard. Modern
implementations, such as the CH4 device of MPICH,
rely on abstract communication frameworks, such
as UCX, so that the MPI libraries do not need to maintain
separate critical paths for all interconnects.
UCX in turn is composed of multiple components such
as UC-Transports (UCT) and UC-Protocols (UCP). UCT
is the LLP that we analyze in \secref{sec:llpbreakdown}.
UCP implements high-level communication protocols
such as collectives, message fragmentation, etc. using
the low transport-level capabilities exposed through
UCT. MPI libraries then use UCP to implement the
specifications of the MPI standard.
\begin{sloppypar}
We present a breakdown of time spent in
the HLP for a communication-initiation operation such as
\texttt{MPI\_Isend}, and a communication-progress operation
such as a successful ({\it i.e.}\ no busy waiting) \texttt{MPI\_Wait}
corresponding to an \texttt{MPI\_Irecv}.
\end{sloppypar}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/mpich_ucp_breakdown.pdf}
\end{center}
\vspace{-1.5em}
\caption{Breakdown of time in HLP.}
\vspace{-2em}
\label{fig:mpichucpbreakdown}
\end{figure}
\begin{sloppypar}
\textbf{Measuring HLP and its breakdown.} In an
\texttt{MPI\_Isend}, the MPI library first decides how to best execute the
operation by checking if the data is contiguous, computing
which communication interface to use, etc. Ultimately it will call into
the UCP layer (\texttt{ucp\_tag\_send\_nb}) which will eventually execute
the LLP in the UCT layer (\texttt{uct\_ep\_am\_short}). To
measure the time spent in MPICH and UCP for an
\texttt{MPI\_Isend}, we first measure the total time of \texttt{MPI\_Isend},
the total time of \texttt{ucp\_tag\_send\_nb} inside MPICH, and the total
time of \texttt{uct\_ep\_am\_short} inside UCP by wrapping them with
the UCS profiling infrastructure. We can then measure the time spent
in MPICH and UCP by taking the differences of times between the
upper and lower layers. For example, subtracting the total time of
\texttt{ucp\_tag\_send\_nb} from that of \texttt{MPI\_Isend} gives us
the time spent in MPICH.
\end{sloppypar}
\begin{sloppypar}
Similarly, in an \texttt{MPI\_Wait}, the MPI library executes its progress
engine, which ultimately calls into the UCP layer (\texttt{ucp\_worker\_progress}).
UCP then ensures progress on all outstanding operations that have been
posted by progressing the low-level UCT layer (\texttt{uct\_worker\_progress}).
When an operation completes, UCT executes a registered callback into the
upper UCP layer to update data structures that indicate the completion
of the operation. The UCP callback, in turn, executes a registered callback
into the upper MPICH layer to indicate that the operation has completed. Note
that these callbacks are executed before returning from \texttt{uct\_worker\_progress}.
To measure the time spent in MPICH and UCP for an
\texttt{MPI\_Wait}, we measure the times spent in the registered MPICH and UCP
callbacks in addition to measuring the total times of \texttt{MPI\_Wait},
\texttt{ucp\_worker\_progress}, and \texttt{uct\_worker\_progress}. Since
the UCP callback entails the MPICH callback, we can measure the time
spent in the UCP callback alone by taking the difference between the total
times spent in the callbacks. We can then measure the time spent in MPICH
and UCP by taking the differences of times between the upper and lower layers
and adding in the time for the upper layer's registered callback. For example,
subtracting the total time of \texttt{ucp\_worker\_progress} from that of
\texttt{MPI\_Wait} and adding in the time of the MPICH callback gives us the time
spent in MPICH.
\end{sloppypar}
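The subtraction scheme described above can be written out explicitly. The following sketch uses hypothetical nanosecond totals rather than our measured values, and the function and key names are ours; it shows how the per-layer times are recovered from the nested totals and the callback times:

```python
def isend_layer_times(t_isend, t_ucp_send, t_uct_send):
    """Per-layer overheads on the send path: each measured total
    includes the totals of the layers beneath it."""
    return {"mpich": t_isend - t_ucp_send,
            "ucp": t_ucp_send - t_uct_send,
            "uct": t_uct_send}

def wait_layer_times(t_wait, t_ucp_prog, t_uct_prog, t_ucp_cb, t_mpich_cb):
    """Per-layer overheads on the progress path: the completion
    callbacks run inside uct_worker_progress, so each layer's own
    callback time is added back to it (the UCP callback total
    includes the nested MPICH callback)."""
    return {"mpich": t_wait - t_ucp_prog + t_mpich_cb,
            "ucp": t_ucp_prog - t_uct_prog + (t_ucp_cb - t_mpich_cb),
            "uct": t_uct_prog - t_ucp_cb}

# Hypothetical totals (ns); the per-layer times add back up to the total.
times = wait_layer_times(440.0, 300.0, 250.0, 50.0, 20.0)
assert abs(sum(times.values()) - 440.0) < 1e-9
```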
\tabref{tab:measuredtimes} reports the time spent in MPICH and UCP on top
of the LLP's HW/SW interface for an \texttt{MPI\_Isend} (26.56 nanoseconds in total),
and a successful \texttt{MPI\_Wait} for an \texttt{MPI\_Irecv} (443.8 nanoseconds in total).
\figref{fig:mpichucpbreakdown} shows their percentage breakdown.
\section{Background}
\label{sec:background}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.5\textwidth]{pics/ib_pcie.pdf}
\end{center}
\vspace{-1em}
\caption{PCIe transactions and mechanisms on sender node to transmit
data over wire.}
\label{fig:ibmech}
\vspace{-1em}
\end{figure}
The Network Interface Cards (NICs) of modern interconnects
are typically connected to the processor chip on the node
as a PCI Express (PCIe) device. In this section, we
delineate the transmission and completion of messages in the
context of the PCIe fabric.
\minititle{PCI Express}. The main conductor of the PCIe subsystem is the Root Complex
(RC). It connects the processor and memory to the PCIe fabric.
The peripherals connected to the PCIe fabric are called PCIe
endpoints. The PCIe protocol consists of three layers: the
Transaction layer, the Data Link layer, and the Physical layer.
The Transaction layer, the uppermost of the three, describes the type
of transaction taking place. In this paper, two types of
Transaction Layer Packets (TLPs) are relevant: Memory
Write (MWr), and Memory Read (MRd). Unlike the standalone MWr
TLP, the MRd TLP is coupled with a Completion with Data (CplD)
transaction from the target PCIe endpoint, which carries the data
requested by the initiator.
The Data
Link layer ensures the successful execution of all transactions
using Data Link Layer Packet (DLLP) acknowledgements
(ACK/NACK) and a credit-based flow-control mechanism. An initiator
can issue a transaction as long as it has enough credits for that
transaction. Its credits are replenished when it receives Update Flow
Control (UpdateFC) DLLPs from its neighbors. Such a flow-control
mechanism allows the PCIe protocol to have multiple outstanding
transactions.
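The credit mechanism just described can be sketched as a toy model; the class and its methods are illustrative constructs of ours, not part of any PCIe implementation:

```python
# Toy model of PCIe credit-based flow control: an initiator may issue a
# TLP only if it holds enough credits; UpdateFC DLLPs from its neighbor
# replenish them, allowing multiple outstanding transactions.
class Port:
    def __init__(self, credits):
        self.credits = credits

    def try_issue(self, cost):
        """Issue a TLP costing `cost` credits; return False to stall."""
        if self.credits >= cost:
            self.credits -= cost
            return True
        return False

    def update_fc(self, returned):
        """Process an UpdateFC DLLP returning `returned` credits."""
        self.credits += returned

p = Port(credits=2)
assert p.try_issue(1) and p.try_issue(1)  # two outstanding transactions
assert not p.try_issue(1)                 # out of credits: must stall
p.update_fc(1)                            # neighbor frees buffer space
assert p.try_issue(1)
```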
\minititle{Mechanisms of a high-performance interconnect}.
From a CPU programmer's perspective, there exists a \emph{transmit
queue} (TxQ) and a \emph{completion queue} (CQ). The user posts their
message descriptor (MD) to the transmit queue, after which they
poll on the CQ to confirm the completion of the
posted message. The user could also request to be notified with
an interrupt upon completion. However, the polling approach
is latency-oriented since it avoids a context switch into the kernel on
the critical path. The actual transmission of a message over the
network occurs through coordination between the processor chip
and the NIC using memory mapped I/O (MMIO) and direct memory
access (DMA) reads and writes. We describe these steps below using
\figref{fig:ibmech}.
\begin{enumerate}[leftmargin=*]
\setcounter{enumi}{-1}
\item The user first enqueues an MD into the TxQ.
The network driver then prepares the device-specific MD that
contains headers for the NIC, and a pointer to the payload.
\item Using an 8-byte atomic write to a memory-mapped location,
the CPU (the network driver) notifies the NIC that a message is ready
to be sent. This is called \emph{ringing the} \emph{DoorBell}. The RC executes
the \emph{DoorBell}\ using an MWr PCIe transaction.
\item After the \emph{DoorBell}\ ring, the NIC fetches the MD
using a DMA read. An MRd PCIe transaction conducts the DMA read.
\item The NIC will then fetch the payload from a registered memory
region using another DMA read (another MRd TLP). Note
that the virtual address has to be translated to its physical address
before the NIC can perform DMA-reads.
\item Once the NIC receives the payload, it transmits the
read data over the network. Upon a successful transmission, the NIC
receives an acknowledgment (ACK) from the target-NIC.
\item Upon the reception of the ACK, the NIC will DMA-write (using
an MWr TLP) a completion (64 bytes in Mellanox InfiniBand) to the CQ
associated with the TxQ. The CPU will then poll for this
completion to make progress.
\end{enumerate}
In summary, the critical data path of each post entails one MMIO write,
two DMA reads, and one DMA write. The DMA-reads translate to
round-trip PCIe latencies which are expensive.
A faster way to send a message that eliminates the PCIe round-trip
latencies is \emph{Programmed I/O (PIO)}. With PIO, the CPU copies the MD
as part of the \emph{DoorBell}. Thus, the NIC does not need
to DMA-read the MD. Another feature for small payloads is
\emph{inlining}, which means that the payload is part of the MD.
Hence, when the NIC receives the MD, it does not need to
DMA-read the payload. Typically, communication frameworks, such as UCX, combine PIO
with inlining. This eliminates both the DMA-reads (steps (2) and (3)).
In Mellanox InfiniBand, the PIO occurs in 64-byte chunks. Note that the
CPU does more work in PIO (a 64-byte copy instead of an 8-byte write)
and inlining (a \texttt{memcpy}). However, this increase in the CPU's work
is minimal compared to the benefit gained from eliminating the PCIe
round-trip latencies.
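The difference between the DMA-based post and PIO with inlining can be summarized as a toy model of the TLPs on the critical path; the function and the transaction labels are illustrative constructs of ours, not vendor code:

```python
# Illustrative model of the PCIe transactions on the critical path of
# one message post, for the DMA path vs. PIO and inlining.
def post_transactions(pio=False, inline=False):
    """Return the ordered TLP types issued for one send."""
    tlps = ["MWr-doorbell"]              # step 1: CPU rings the DoorBell
    if not pio:
        tlps.append("MRd-descriptor")    # step 2: NIC DMA-reads the MD
    if not inline:
        tlps.append("MRd-payload")       # step 3: NIC DMA-reads the payload
    tlps.append("MWr-completion")        # step 5: NIC DMA-writes the CQ entry
    return tlps

# DMA path: one MMIO write, two DMA reads, one DMA write.
assert len(post_transactions()) == 4
# PIO with inlining eliminates both round-trip DMA reads.
assert post_transactions(pio=True, inline=True) == ["MWr-doorbell",
                                                   "MWr-completion"]
```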
\section{Conclusion}
\label{sec:conclusion}
Our analytical models of the injection overhead and latency of high-performance
communication on state-of-the-art components explain observed performance
with a 5\% margin of error. The models and their resulting breakdown give the
reader insights into where, why, and how much time is spent during the
transfer of small messages. As the importance of small, fine-grained communication
is rising, we believe that such a breakdown can guide the efforts of software
developers and system architects alike to address the bottlenecks present today.
More importantly, researchers and engineers can identify bottlenecks on their
own systems using our detailed methodology described in this paper.
\subsection{Definitions}
\label{sec:definitions}
Below, we define keywords to denote the time spent in each of
the components involved in initiating and transmitting a
message.
\begin{itemize}
\item \emph{Post} -- time spent by the CPU to perform a PIO post. This
includes the time to prepare the message descriptor and to copy
it into the memory-mapped location of the NIC's memory.
\item \poll -- time spent by the CPU to perform one poll operation
on the completion queue.
\item \emph{PCIe} -- time spent on the PCIe subsystem when the message
payload traverses from the node to the NIC, that is, from the RC
to the NIC.
\item \emph{Wire} -- time spent on the interconnect's wire between two
NICs.
\item \rctomem[x] -- time spent in writing an \emph{x} byte payload
from the RC to memory.
\end{itemize}
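As a minimal sketch, assuming the components above are serial and non-overlapping (and ignoring any terms the full model may add), the end-to-end latency of a small message is simply their sum; the values below are placeholders, not measurements:

```python
def end_to_end_latency(post, pcie, wire, rc_to_mem, poll):
    """Small-message latency as a serial sum of the component times
    defined in the text (all in nanoseconds)."""
    return post + pcie + wire + rc_to_mem + poll

# Placeholder component times, not measured numbers:
assert end_to_end_latency(100, 400, 200, 150, 50) == 900
```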
\section{Simulated Optimizations}
\label{sec:simulated}
In this section, we use the insights gained from the breakdown of
the complete picture in \secref{sec:complete} to
study the effects of optimizing the CPU, I/O, and network fabric
components on the injection and latency of small message transfers.
In the figures that follow, we aim to answer the following question:
if we optimize component X by Y\%, what is the corresponding reduction
in injection overhead and latency? The horizontal axis of \figref{fig:whatif}
represents the degree of optimization for the component of interest.
It consists of five evenly spaced reductions in overhead, starting
from 10\% ($1.1\times$ faster) to 90\% ($10\times$ faster). The vertical axis represents
the speedup in the overall injection or end-to-end latency
as a result of reducing the component's overhead. Note that the components of our models
are not concurrent, that is, their executions do not overlap. Hence,
evaluating the impacts of reductions
in overheads on benchmarks such as an MPI stencil kernel through
a distributed system simulator (such as SimGrid~\cite{casanova:hal-01017319})
results in exactly the same linear speedups that we generate through a
manual what-if analysis in \figref{fig:whatif}.
We organize our
discussion into a set of relevant optimizations that target the different
components. For each optimization, we discuss its likelihood and evaluate
its impact. We consider speedups of more than 5\% to be substantial.
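Because the components of our models do not overlap, each what-if speedup reduces to an Amdahl-style computation; the shares used below are illustrative, not our measured breakdown:

```python
def overall_speedup(share, reduction):
    """Speedup of the end-to-end metric when a component accounting for
    `share` of the total (0..1) has its time reduced by `reduction`
    (0.1 = 10% less time, 0.9 = 10x faster), with no overlap between
    components."""
    return 1.0 / (1.0 - share * reduction)

# E.g., a component that is 30% of the latency, optimized by 50%,
# yields roughly an 18% overall improvement.
assert abs(overall_speedup(0.30, 0.50) - 1.0 / 0.85) < 1e-12
```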
\begin{figure*}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{pics/software_injection_what_if.pdf}
\caption{CPU }
\label{fig:softwareinjwhatif}
\end{subfigure}%
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{pics/software_latency_what_if.pdf}
\caption{CPU}
\label{fig:softwarewhatif}
\end{subfigure}%
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{pics/io_what_if.pdf}
\caption{I/O}
\label{fig:iowhatif}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{pics/network_what_if.pdf}
\caption{Network}
\label{fig:networkwhatif}
\end{subfigure}
\vspace{-1em}
\caption{Simulated speedups in overall injection (a) and end-to-end latency (b, c, d)
by reducing overheads of CPU, I/O, and network components (note the differences
in y-axis scales).}
\vspace{-1em}
\label{fig:whatif}
\end{figure*}
\vspace{-1em}
\subsection{On-node optimizations}
\label{sec:onnodeopt}
In \secref{sec:complete} we learn that most of the time in the
transmission of a small message is spent on the node. CPU
and I/O components make up the on-node time. Below,
we discuss three relevant optimizations.
\minititle{NIC integrated into a System-on-Chip (SoC).}
The idea of this optimization is that the NIC sits on the same die
as that of the processor. The deployment of such a solution
would be in the form of an SoC so that instead of interfacing with
the CPU through the PCIe subsystem, the NIC would connect to
the network-on-chip (NoC). Such a tight integration of the NIC
and the CPU would eliminate most of the I/O subsystem's
overhead, which accounts for the largest share of the
latency of a small message. While integrated NICs are not commonplace
in today's HPC systems, they are likely to become
ubiquitous in the future given their potential impact. There
have been multiple works~\cite{binkert2006integrated, liao2009performance}
that argue for and evaluate the performance of SoC-integrated NICs
showing their benefits in terms of better performance and higher CPU
availability for all message sizes. More recently, Arm-based
supercomputers are on the rise~\cite{jackson2019evaluating} since
they allow HPC vendors to integrate their custom solutions
(such as an integrated NIC) with Arm IP on SoCs. The Tofu
interconnect D~\cite{ajima2018tofu} on Fujitsu's post-K machine is a
prominent example of this optimization. With Tofu's NIC integrated into a post-K node, RDMA-write latency improved by nearly 400 nanoseconds.
\textbf{\emph{Impact}}. ``Integrated NIC'' in \figref{fig:iowhatif}
shows the impact of a solution that simply brings the
NIC closer to the TX2-based SoC. While one can expect such a
solution to eliminate most of the I/O overhead, we can observe over
a 15\% improvement in overall latency even with a modest
50\% reduction in I/O time. In fact, a tightly integrated NIC
allows for opportunities to reduce the involvement of the CPU
in the LLP's HW/SW interface and thereby increase its availability
for computational tasks. Recall that the reason for the use of
PIO for small messages is expensive PCIe round-trip latencies
with the communication-offloading approach (see \secsref{sec:intro}{sec:background}).
Since an integrated NIC would sit close to memory, round
trips performed by the hardware logic of the NIC would most
likely be faster than involving the CPU in PIO.
\minititle{Improving the initiation of a message in the LLP.} This optimization
deals with how writes to device memory occur in the microarchitecture
of a processor. Ideally, writes to \texttt{aarch64}'s \emph{Device memory}~\cite{armmemory} should be as fast
as writes to its \emph{Normal memory}~\cite{armmemory}. Such an optimization is likely since
the current difference between 64-byte writes to Normal and Device
memory is more than 90\%, hinting at ample room for optimization.
It would reduce the time spent in the PIO copy, which accounts for
more than 50\% of the time in \emph{LLP\_post}\ (see \figref{fig:llppostbreakdown}).
\textbf{\emph{Impact}}. ``PIO'' in \figref{fig:softwareinjwhatif} and
\figref{fig:softwarewhatif} shows the impact of improving the
64-byte PIO copy on the overall injection and end-to-end latency,
respectively. A regular 64-byte \texttt{memcpy} on the TX2-based server
takes less than a nanosecond as expected. If we modestly project the
overhead of PIO to reduce to 15 nanoseconds (84\% reduction),
overall injection can improve by more than 25\% and end-to-end
latency can improve by more than 5\%.
\minititle{Reducing software overheads.} This optimization
deals with software engineering targeted to reduce overheads in the
HLP. However, unlike the previous optimizations, it is unlikely that this
optimization would reduce overheads by more than 50\%. For example, the current implementation
of MPICH is highly optimized~\cite{raffenetti2017mpi}, reducing the
number of instructions by 76\% from its previous implementation for
an \texttt{MPI\_Isend}. We conjecture that software optimizations
would reduce overheads by less than 20\%.
\textbf{\emph{Impact}}. \figref{fig:softwareinjwhatif} and
\figref{fig:softwarewhatif} show the what-if analysis for the different
components in the HLP and LLP. The ``HLP'' and ``LLP'' lines in the
figures reflect the upper bound on speedups that would result from
optimizing the components that constitute the HLP and LLP, respectively.
For both injection and latency, optimizing the progress of operations in
the HLP (\emph{HLP\_tx\_prog}\ and \emph{HLP\_rx\_prog}) can achieve speedups close to
HLP's upper bound. Similarly, optimizing \emph{LLP\_post}\ can achieve
speedups close to LLP's upper bound. If we assume software overheads
can be reduced by at most 20\%, the upper bounds
reflect a less than 5\% speedup in the end-to-end latency. On
the other hand, a 20\% reduction in HLP overhead can speed up injection
by up to 6.44\% while the same reduction in the LLP can do so by up to 13.33\%.
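These percentages can be reproduced approximately from the component times reported in \tabref{tab:measuredtimes} and \secref{sec:complete}. The sketch below is an approximation, not the exact computation behind \figref{fig:softwareinjwhatif}: it assumes the HLP's per-message share is \emph{HLP\_post}\ plus roughly 58.86 ns of \emph{Post\_prog}\ (all but the ${\sim}1$ ns spent in the LLP), and the LLP's share is \emph{LLP\_post}\ plus the amortized \emph{LLP\_prog}.

```python
inj_overhead = 264.97            # modeled overall injection overhead (ns)
hlp = 26.56 + 58.86              # HLP_post + assumed HLP share of Post_prog
llp = 175.42 + 61.63 / 64        # LLP_post + amortized LLP_prog (c = 64)
for name, t in (("HLP", hlp), ("LLP", llp)):
    saved = 0.20 * t             # a 20% software-overhead reduction
    print(name, round(100 * saved / inj_overhead, 2))
```

The result is roughly 6.4\% for the HLP and 13.3\% for the LLP, in line with the reported 6.44\% and 13.33\%.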
\vspace{-1em}
\subsection{Off-node optimizations}
\label{sec:offnodeopt}
\figref{fig:breakdown} shows that 27.6\% of the end-to-end latency
is spent on the interconnect's \emph{Wire}\ and in the \emph{Switch}. We anticipate
that reductions in off-node overheads are unlikely and
that the resulting speedups from off-node
optimizations alone would not be substantial. We explain
why below.
A reduction in \emph{Wire}'s overhead is unlikely due to engineering
complexities at the physical layer. In fact, it is possible that the latency
will increase in future interconnects in order to accommodate higher
throughput. The conversion between the parallel PCIe signals and the
serial signals on the interconnect's fiber transmission link occurs
through SerDes (serializer/deserializer) integrated circuits. For throughputs
higher than 100 Gb/s, the SerDes unit needs to be able to deliver higher
throughput. While higher degrees of pulse amplitude modulation (PAM)
deliver higher signal rates, they require more complex forward error
correction (FEC), which increases the latency of the transmission in some
cases by 300 nanoseconds~\cite{sun2017serdes,ghiasi2012investigation,bhoja2014fec}.
The current latency of a high-performance interconnect's switch is already
an order of magnitude lower than that of an Ethernet switch~\cite{rumble2011s}.
New technologies like GenZ forecast their switch latencies to be 30-50
nanoseconds~\cite{genz}. However, such low latencies are
yet to be demonstrated. Only an optimistic reduction to 30 nanoseconds (72\%
overhead reduction) would correspond to a substantial speedup (5.45\%) in
end-to-end latency according to \figref{fig:networkwhatif}.
\section{The Complete Picture}
\label{sec:complete}
In this section, we first present a breakdown of the overall injection
overhead and end-to-end latency including all the software, I/O,
and network components for send-receive communication. Then,
analyzing the breakdown, we note a number of insightful findings.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/overall_inj_breakdown.pdf}
\end{center}
\vspace{-1em}
\caption{Breakdown of the overall injection overhead.}
\vspace{-1em}
\label{fig:compinjbreakdown}
\end{figure}
\minititle{Overall injection overhead}. \secref{sec:injection} shows that
the injection overhead observed by the NIC for a single core is
governed by the rate at which the CPU can send messages to
the RC. \eqref{eq:llinjection} defines this injection overhead.
\emph{CPU\_time}\ in \secref{sec:injection} involved only the overhead
of the LLP; in this section, we add in the overheads of the HLP to
complete the picture. We redefine \emph{CPU\_time}\ as follows.
\begin{dmath}
\emph{CPU\_time} = \emph{Post} + \emph{Post\_prog} + \emph{Misc}
\label{eq:compinjection}
\end{dmath}
where \emph{Post}\ is the total time taken by the HLP and LLP to initiate
an operation, and \emph{Post\_prog}\ is the total overhead imposed by
both the HLP and LLP for the progress of a send-operation.
\emph{Post}\ is the sum of \emph{LLP\_post}\ and \emph{HLP\_post}, the time spent in the
HLP during the initiation of a message. For our setup, \emph{HLP\_post}\
equals 26.56 nanoseconds, which implies \emph{Post}\ equals 201.98
nanoseconds. Before attributing times to \emph{Post\_prog}\ and \emph{Misc},
we delineate certain caveats.
First, UCP schedules the successful execution of \emph{LLP\_post}\ for busy
posts (see \secref{sec:injection}) during the progress of operations. Second, progress
for a set of initiated operations is typically
conducted with a batch-progress operation in the HLP such as
\texttt{MPI\_Waitall}. MPICH executes its progress engine until
all the operations listed in \texttt{MPI\_Waitall} complete. More
importantly, UCP reduces the overhead of progress using
unsignaled completions~\cite{kalia2016design}, which means the
NIC DMA-writes a completion only every $c$ operations to
indicate the completion of all $c$ operations ($c = 64$ in UCX).
Hence, the overhead of progress is amortized
over $c$ operations.
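The effect of this amortization can be checked against the measured \emph{LLP\_prog}\ from \tabref{tab:measuredtimes}: with $c = 64$, the per-message progress cost in the LLP drops below a nanosecond, consistent with the sub-nanosecond LLP share of \emph{Post\_prog}\ noted in this section.

```python
llp_prog = 61.63   # ns to dequeue one completion-queue entry
c = 64             # completion interval with unsignaled completions (UCX)
print(round(llp_prog / c, 2))   # amortized per-message progress cost (ns)
```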
The first caveat implies that the progress of some operations
includes the overhead of initiation in the LLP. Since we
already account for the successful posts of
busy posts in \emph{Post}, we deduct the cumulative \emph{LLP\_post} s
corresponding to the busy posts from the total time of
\texttt{MPI\_Waitall} for analytical purposes. We do so by keeping track of the
number of busy posts that occurred before \texttt{MPI\_Waitall}.
Dividing the resulting total time by the number of operations
progressed, we measure \emph{Post\_prog}\ to be 59.82 nanoseconds.
Less than a nanosecond of \emph{Post\_prog}\ (due to the aforementioned
amortization) occurs in the LLP; the rest occurs in the HLP (\emph{HLP\_tx\_prog}).
We include the time incurred in busy posts under \emph{Misc}.
Using the tracked number of busy posts we can compute
the total time spent in busy posts during an
\texttt{MPI\_Isend}-\texttt{MPI\_Waitall} window. Dividing
this total time by the number of operations in the window
gives us an average of 3.17 nanoseconds per operation in \emph{Misc}.
We use OSU Micro-Benchmark's~\cite{osu} message rate
test\footnote{We remove the send-receive sync after every window
of posts for a clear analysis.} to measure the observed injection
overhead. By taking the inverse of the message rate, we measure
the mean injection overhead to be \textbf{263.91} nanoseconds. The
injection overhead computed with \eqref{eq:compinjection} is \textbf{264.97}
nanoseconds which is within \textbf{1\%} of the observed overhead.
\figref{fig:compinjbreakdown} shows the breakdown of the overall
injection overhead.
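As a sanity check, \eqref{eq:compinjection} can be evaluated directly from the values above (a short Python sketch):

```python
hlp_post, llp_post = 26.56, 175.42     # ns
post = hlp_post + llp_post             # Post: total initiation time
post_prog, misc = 59.82, 3.17          # progress and miscellaneous shares
cpu_time = post + post_prog + misc     # modeled overall injection overhead
print(round(post, 2), round(cpu_time, 2))
```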
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/fine_level_latency_breakdown.pdf}
\end{center}
\vspace{-1.5em}
\caption{Breakdown of the end-to-end latency.}
\vspace{-1.5em}
\label{fig:fine}
\end{figure}
\minititle{End-to-end latency}. \secref{sec:latency} describes the
constituents of \emph{Latency}\ with minimal software involvement.
To complete the picture, we add in the latencies of the HLP as follows.
\begin{dmath*}
\emph{Latency} = \emph{HLP\_post} + \emph{LLP\_post} + 2(\emph{PCIe}) + \emph{Network} + \rctomem[x] + \emph{LLP\_prog} + \emph{HLP\_rx\_prog}
\end{dmath*}
\emph{HLP\_rx\_prog}\ refers to the overhead of progressing the reception of an
incoming message with MPI (after it has been written to memory by the RC).
We assume the initiation of the receive (such as \texttt{MPI\_Irecv})
overlaps with the rest of the constituents and, hence, do not account
for its time in the end-to-end latency.
\emph{HLP\_rx\_prog}\ is the sum of the times spent in the registered callbacks
of MPICH and UCP along with the remaining time spent in MPICH
after \texttt{ucp\_worker\_progress} returns. Note that the latter is not
the equivalent of the total time spent in MPICH for a successful
\texttt{MPI\_Wait} minus the time spent in the MPICH callback. \texttt{MPI\_Wait}
is a blocking call and incurs a portion of the 293.29 nanoseconds before even
progressing UCP. MPICH internally loops on \texttt{ucp\_worker\_progress}
until the operation is complete. Hence, we specifically measure the time
spent in MPICH after a successful \texttt{ucp\_worker\_progress} and
observe this time to be 36.89 nanoseconds. The value of
\emph{HLP\_rx\_prog}\ then is 224.66 nanoseconds. Adding in the values of \emph{LLP\_prog},
\emph{HLP\_post}, and \emph{HLP\_rx\_prog}\ to the modeled latency in \secref{sec:latency},
the end-to-end latency is \textbf{1387.02} nanoseconds. This is within \textbf{4\%}
of the observed latency of \textbf{1336} nanoseconds measured by OSU Micro-Benchmark's point-to-point latency test. \figref{fig:fine}
shows a detailed breakdown of this latency.
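Putting the numbers together reproduces the modeled end-to-end latency. All values below are taken from \tabref{tab:measuredtimes} and the text above; the sketch is a consistency check, not new data.

```python
llp_post, pcie, network = 175.42, 137.49, 382.81   # ns
rc_to_mem, llp_prog = 240.96, 61.63                # 8-byte RC write, CQ dequeue
hlp_post = 26.56
hlp_rx_prog = 47.99 + 139.78 + 36.89               # MPICH cb + UCP cb + post-progress
latency = (hlp_post + llp_post + 2 * pcie + network
           + rc_to_mem + llp_prog + hlp_rx_prog)
print(round(latency, 2))                           # modeled end-to-end latency (ns)
```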
\minititle{Insight 1}. \secref{sec:injection} describes that the programmer
cannot indefinitely initiate messages. Hence, the progress of a send
operation serves as a ``semantic bottleneck''. Once the performance
overheads imposed by this bottleneck are minimized through optimizations
like unsignaled completions, \figref{fig:compinjbreakdown} shows that \emph{Post}\
dominates (more than 70\% of total) the overall injection overhead.
Within \emph{Post}, the LLP dominates as seen in "Initiation" of \figref{fig:comm}.
\minititle{Insight 2}. \figref{fig:breakdown} presents the overall percentage
breakdown of the end-to-end latency of a small message in the three categories:
CPU, I/O, and network. The constituents of the CPU
and I/O categories contribute almost equally (within 4\% of each other)
to their respective total times. In the case of \emph{Network}, the latency of \emph{Wire}\
dominates the overall off-node time. Note that none of the three
categories dominates the overall latency. However, we observe that
the network fabric constitutes less than a third of the overall latency
while the CPU and I/O components together contribute 72.4\%
of the latency. Hence, most of the overhead in the transmission of a
small message is incurred on the node.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/comm_protocol_level_breakdown.pdf}
\end{center}
\vspace{-1em}
\caption{Breakdown of time in HLP and LLP during the initiation and progress
of communication.}
\vspace{-1em}
\label{fig:comm}
\end{figure}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/high_level_latency_breakdown.pdf}
\end{center}
\vspace{-1em}
\caption{High-level breakdown of the end-to-end latency.}
\vspace{-1.25em}
\label{fig:breakdown}
\end{figure}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/on_node_breakdown.pdf}
\end{center}
\vspace{-1em}
\caption{Breakdown of time spent on node.}
\vspace{-1.5em}
\label{fig:onnode}
\end{figure}
\minititle{Insight 3}. \figref{fig:onnode} shows a high-level breakdown of the time spent on
the node during the transmission of the message. The majority of this time
occurs on the target node. Of the time on the target
node, the majority occurs during I/O, most of which is
the RC writing the payload to memory. In
contrast, software accounts for the majority of the time spent on the
initiator node. This is due to the use of Programmed I/O
(see \secref{sec:background}) for short messages. Consequently, I/O
on the initiator node consists only of a PCIe transaction, unlike that on
the target node.
\minititle{Insight 4}. \figref{fig:comm} shows
that the HLP dominates the progress of both send and receive operations.
The progress of a receive operation takes $4.78\times$ longer than that of a send
operation.
\section{Breakdown of the Lower Level}
\label{sec:lowlevel}
In this section, we present a detailed breakdown of time spent in the
low-level components. These include the low-level
communication protocol (LLP), the I/O subsystem, and network
components. The LLP software drives the I/O and network hardware.
We first define terminology for time spent in each of the low-level
components.
\begin{itemize}[leftmargin=*]
\item \emph{LLP\_post}\ -- LLP performing a PIO
post of one 8-byte message.
\item \emph{LLP\_prog}\ -- LLP dequeuing one entry of
the completion queue during the progress of an operation.
\item \emph{PCIe}\ -- payload traversing PCIe between RC and NIC.
\item \emph{Wire}\ -- payload traversing the physical wire of the interconnect.
\item \emph{Switch}\ -- overhead added by a network switch.
\item \emph{Network}\ -- the total time in the interconnect (\emph{Wire}\ + \emph{Switch}).
\item \rctomem[x] -- RC writing an x-byte payload to memory.
\end{itemize}
We use UCX's low-level transport API, UC-Transports (UCT) for our
LLP driver. It abstracts the capabilities of the various hardware architectures with minimal software overhead. The UCT driver runs
the UCX \texttt{perftest}'s injection-rate and ping-pong style latency
microbenchmarks, namely the \texttt{put\_bw}
and \texttt{am\_lat} tests, with a single thread. The \texttt{put}
test corresponds to RDMA-writes while the \texttt{am}\footnote{\texttt{am}
is short for active messages, terminology that describes
send-receive style messaging in UCX.} test corresponds to
send-receive semantics. Each message is 8 bytes, the size of
a \texttt{double}.
\subsection{Breakdown of the LLP}
\label{sec:llpbreakdown}
The LLP implements the HW/SW interface required to transmit
a message and confirm its completion. The network driver
(software) invokes the NIC (hardware) directly after correctly
preparing resources and registers needed by the NIC during an
\emph{LLP\_post}. The following details the steps involved in an \emph{LLP\_post}.
\begin{enumerate}[leftmargin=*]
\item Prepare MD -- this involves the time taken to
write the control segment of the descriptor. It also involves a \texttt{memcpy} of the
small payload when inlining is used.
\item A store memory barrier -- this ensures that the MD
is completely written before the CPU signals the NIC. This barrier is
relevant only for a weak memory model (\texttt{dmb st} on \texttt{aarch64}).
\item \emph{DoorBell}\ counter increment -- the NIC reads a
\emph{DoorBell}\ counter to perform speculative reads. The CPU updates this
counter before writing to the NIC.
\item A store memory barrier -- this ensures that the NIC sees the update
to the \emph{DoorBell}\ counter before any subsequent write to its device memory.
\item PIO copy -- this is the CPU's write to the memory-mapped device memory
instructing the NIC to transmit the message. Device memory is
typically an uncached, buffered memory region that supports out-of-order
writes. For the TX2-based server in our setup, we use Device-GRE memory
for the memory-mapped location. Though there would be a store memory
barrier (\texttt{dsb st}) after the PIO copy to flush the data to the NIC, we
observed experimentally that this flush is not necessary on the
microarchitecture of the TX2-based server. The PIO copy
of an 8-byte message is one 64-byte chunk in Mellanox InfiniBand (see \secref{sec:background}).
\end{enumerate}
Similarly, the LLP reads the designated memory location (where
the NIC DMA-writes its completions) during an \emph{LLP\_prog}, the
progress of an operation. This progress operation includes a
load memory barrier for \texttt{aarch64}'s weak memory model to ensure that
the read for a completion queue entry occurs before subsequent
updates to data structures.
\minititle{Measuring LLP and its breakdown.} We measure \emph{LLP\_post}\ and
\emph{LLP\_prog}\ by wrapping the UCS profiling infrastructure around the calls
to \texttt{uct\_ep\_put\_short} and \texttt{uct\_worker\_progress}.
We use the same technique around the relevant regions of code in
the implementation of \texttt{uct\_ep\_put\_short} to measure the time
in each of the categories of an \emph{LLP\_post}. While these categories are critical
components of an \emph{LLP\_post}, they do not account for other miscellaneous time
such as the function call overhead, branches to decide code path, etc. We
compute this time by taking the difference of \emph{LLP\_post}\ and
the sum of the times spent in the categories. \tabref{tab:measuredtimes}
reports the times for \emph{LLP\_post}, \emph{LLP\_prog}, and each category of \emph{LLP\_post}.
\figref{fig:llppostbreakdown} shows the breakdown
of \emph{LLP\_post}. Since \emph{LLP\_prog}\ contains only one critical category (the load
memory barrier), we do not show its breakdown.
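The per-category times in \tabref{tab:measuredtimes} add up to the measured \emph{LLP\_post}\ total, which serves as a quick consistency check on the instrumentation:

```python
categories = {
    "MD setup": 27.78,                 # ns, message-descriptor write
    "MD barrier": 17.33,               # dmb st after the descriptor
    "DoorBell-counter barrier": 21.07, # dmb st after the counter update
    "PIO copy (64 B)": 94.25,          # write to memory-mapped device memory
    "miscellaneous": 14.99,            # function calls, branches, etc.
}
print(round(sum(categories.values()), 2))   # LLP_post total (ns)
```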
\subsection{Injection overhead}
\label{sec:injection}
Injection is the insertion of a message into the network.
The message is injected when the payload reaches the NIC.
We study the case when the user is transmitting messages
continuously since this represents a system's injection
limit. Then, the system's injection overhead,
\emph{Inj\_overhead}, is the time difference between messages arriving
at the NIC. This \emph{Inj\_overhead}\ explains why all the messages
in a burst do not reach the NIC at time zero. We first model the injection
overhead of PIO posts for a small message, then measure the overhead
according to the model, and finally validate it.
\minititle{Modeling injection overhead.} Since the depth of the
transmit queue (TxQ) is finite, the user cannot post indefinitely.
Polling the completion queue (CQ) serves as the dequeue semantic
for the TxQ. Hence, the user must poll in between
their posts to inject messages into the NIC. Say, the user polls
after every $p$ posts. If $p=1$, the depth of the TxQ is
not utilized and the post translates to a synchronous post, that
is, the user will be able to post the next message only after
the previous message has reached the target node (since the
completion is generated only when the host NIC receives an ACK
from the target NIC (see \secref{sec:background})).
To remove the overhead of waiting for a previous message
to complete, the user must choose a value of $p$ such
that the completion for an earlier message is available during
a poll. Such a value of $p$ depends on the value of \emph{LLP\_post}\ and
the time taken to generate a completion, $gen\_completion$.
From \secref{sec:background}, we can deduce that
\begin{dmath*}
gen\_completion = 2 \times (\emph{PCIe} + \emph{Network}) + \rctomem[64]
\end{dmath*}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/post_breakdown.pdf}
\end{center}
\vspace{-1em}
\caption{Breakdown of time in an \emph{LLP\_post}\ (MD: message descriptor;
DBC: \emph{DoorBell}\ counter).}
\label{fig:llppostbreakdown}
\vspace{-1.5em}
\end{figure}
since the PCIe wire and the interconnect's network fabric are
traversed twice: first while transmitting the message to the target
NIC, and second while receiving the ACK from the target NIC
and writing the corresponding completion. A completion in
InfiniBand is 64 bytes and hence, the RC conducts a 64-byte
write to memory on behalf of the host NIC. Then, to remove
the overhead of waiting for a previous message, the lower
bound on $p$ is
\begin{dmath*}
p \ge gen\_completion / \emph{LLP\_post}
\end{dmath*}
In our modeling
of the injection overhead, we assume that the user meets this
lower bound on $p$.
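With the component times measured later in this section, the bound works out to a small constant. Note one assumption: \tabref{tab:measuredtimes} reports the RC-to-memory time only for an 8-byte write, so the sketch below uses that value as a stand-in for the 64-byte completion write, making the result an estimate.

```python
import math

pcie, network = 137.49, 382.81   # ns
rc_to_mem = 240.96               # assumption: 8-byte RC write as a stand-in
llp_post = 175.42
gen_completion = 2 * (pcie + network) + rc_to_mem
print(math.ceil(gen_completion / llp_post))   # smallest p meeting the bound
```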
Typically, the API of the network driver allows the user to
poll a batch of completions, reducing the overhead of
expensive memory barriers and function calls~\cite{kalia2016design}.
Say the user polls $b$ completions
in each batch. This means that the
user can post only $b$ posts in the next round of posts since
only $b$ entries have been dequeued from the TxQ.
Note that $b$ meets the lower bound mentioned
above. Additionally, the user could perform some miscellaneous
operations during the window of $b$ posts or $b$ polls. Let
$tot\_misc$ denote the cumulative time spent in these
other operations. Then, the overhead of the CPU to post a
message is
\begin{dmath*}
\emph{CPU\_time} = \frac{b\times\emph{LLP\_post}+ b\times\emph{LLP\_prog} + tot\_misc}{b} = \emph{LLP\_post} + \emph{LLP\_prog} + \emph{Misc}
\label{eq:cputime}
\end{dmath*}
where $\emph{Misc} = tot\_misc/b$ is the miscellaneous
overhead amortized for each message.
Hence, on average, messages arrive at the RC every \emph{CPU\_time}.
Since PCIe supports multiple outstanding requests
(see \secref{sec:background}), the RC initiates MWr PCIe
transactions targeting the NIC as soon as it receives
messages from the CPU. Considering that the RC is
implemented with hardware logic, the time it takes to
generate a transaction would be in the order of a few cycles.
Hence, we ignore its contribution to the injection overhead.
Note that the RC can generate transactions only if it has
enough credits. Otherwise, it needs to wait for
an UpdateFC DLLP from the NIC which would incur the
overhead of the PCIe wire between the NIC
and the RC (\emph{PCIe}). Experimentally, we observe that a single
core does not exhaust the credits for MWr transactions.
Hence, we do not model for the overheads imposed with
exhausted credits in this paper.
Once the message leaves the RC, it incurs \emph{PCIe}\ before arriving
at the NIC. Hence, the injection overhead of a \emph{single}
message is
\begin{dmath*}
\emph{Msg\_inj\_overhead} = \emph{CPU\_time} + \emph{PCIe}
\end{dmath*}
While \emph{Msg\_inj\_overhead}\ describes the time taken by each message to
reach the NIC, it is not the same as the injection overhead
observed by the NIC, \emph{Inj\_overhead}, as we shall see next. When the system is
issuing messages continuously, the \emph{CPU\_time}\ of the next
message overlaps with the \emph{PCIe}\ of the previous one
(see \figref{fig:nicinjection}). Hence, the time difference between
the initiation of messages is \emph{CPU\_time}. This holds true for any
relation of \emph{PCIe}\ with \emph{CPU\_time}\ (assuming that \emph{PCIe}\ is not long enough to
exhaust the RC's credits). When $\emph{PCIe} > \emph{CPU\_time}$,
\emph{PCIe}\ of the next message can also overlap with \emph{PCIe}\ of the
previous one. Hence, from the perspective of the
NIC, the time difference between the arrival of messages is the
same as that between the initiation of messages, that is,
\begin{dmath}
\emph{Inj\_overhead} = \emph{CPU\_time} = \emph{LLP\_post} + \emph{LLP\_prog} + \emph{Misc}
\label{eq:llinjection}
\end{dmath}
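The overlap argument can be illustrated with a toy timeline: whether \emph{PCIe}\ is shorter or longer than \emph{CPU\_time}, the inter-arrival gap at the NIC equals \emph{CPU\_time}\ (assuming, as above, that RC credits are never exhausted). The values below are illustrative.

```python
def nic_arrivals(n, cpu_time, pcie):
    # Message i finishes CPU initiation at (i + 1) * cpu_time and then
    # spends pcie in flight; PCIe traversals of consecutive messages
    # may overlap, but CPU initiations may not.
    return [(i + 1) * cpu_time + pcie for i in range(n)]

for pcie in (50.0, 500.0):   # both PCIe < CPU_time and PCIe > CPU_time
    arr = nic_arrivals(4, 295.73, pcie)
    print([round(b - a, 2) for a, b in zip(arr, arr[1:])])
```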
Next, we measure the constituents of \emph{Inj\_overhead}. In \secref{sec:llpbreakdown},
we reported the times measured for \emph{LLP\_post}\ and \emph{LLP\_prog}. To
account for \emph{Misc}, we first explain what occurs between
consecutive posts in UCX's \texttt{put\_bw} benchmark.
Every message in the benchmark generates a completion.
However, the benchmark polls for one completion every
16 posts. Hence, eventually the finite depth of the TxQ is fully
utilized, after which an \emph{LLP\_post}\ results in a ``busy'' post, that is,
an \emph{LLP\_post}\ fails since an \emph{LLP\_prog}\ must occur before the next
successful \emph{LLP\_post}. Thus, in the average case, after every
successful \emph{LLP\_post}, there occurs a busy post. Additionally, the
benchmark records a timestamp and updates its injection-rate
measurements after every \emph{LLP\_post}. \tabref{tab:measuredtimes}
reports the times for a ``Busy post'' and a ``Measurement update''
measured using the UCS profiling infrastructure wrapped around
the relevant code paths; $\emph{Misc} = 58.68$ nanoseconds.
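Summing the constituents per \eqref{eq:llinjection}, with \emph{Misc}\ taken as the ``total of above'' row in \tabref{tab:measuredtimes}, gives the modeled injection overhead:

```python
llp_post, llp_prog = 175.42, 61.63   # ns
misc = 8.99 + 49.69                  # busy post + measurement update
print(round(llp_post + llp_prog + misc, 2))   # modeled Inj_overhead (ns)
```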
\begin{table}
\centering
\caption{Measured times of various components.}
\label{tab:measuredtimes}
\vspace{-1em}
\begin{tabular}{ | r | l |}
\hline
\textbf{Component} & \textbf{Time (ns)} \\ \hline
Message descriptor setup & 27.78 \\ \hline
Barrier for message descriptor & 17.33 \\ \hline
Barrier for \emph{DoorBell}\ counter & 21.07 \\ \hline
PIO copy (64 bytes) & 94.25 \\ \hline
Miscellaneous in \emph{LLP\_post} & 14.99 \\ \hline
\emph{LLP\_post}\ (total of above) & 175.42 \\ \hline \hline
\emph{LLP\_prog} & 61.63 \\ \hline \hline
Busy post & 8.99 \\ \hline
Measurement update & 49.69 \\ \hline
\emph{Misc}\ in $Inj\_overhead$ (total of above) & 58.68 \\ \hline \hline
\emph{PCIe}\ for a 64-byte payload & 137.49 \\ \hline \hline
\emph{Wire}\ & 274.81 \\ \hline
\emph{Switch}\ & 108 \\ \hline
\emph{Network}\ (total of above) & 382.81 \\ \hline \hline
\rctomem[8] & 240.96 \\ \hline \hline
\texttt{MPI\_Isend} in MPICH & 24.37 \\ \hline \hline
\texttt{MPI\_Isend} in UCP & 2.19 \\ \hline \hline
Callback for a completed \texttt{MPI\_Irecv} in MPICH & 47.99 \\ \hline
Successful \texttt{MPI\_Wait} for \texttt{MPI\_Irecv} in MPICH & 293.29 \\ \hline \hline
Callback for a completed \texttt{MPI\_Irecv} in UCP & 139.78 \\ \hline
Successful \texttt{MPI\_Wait} for \texttt{MPI\_Irecv} in UCP & 150.51 \\ \hline
\end{tabular}
\vspace{-1.5em}
\end{table}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/nic_injection_overhead.pdf}
\end{center}
\vspace{-1em}
\caption{Injection overhead observed by the NIC.}
\label{fig:nicinjection}
\vspace{-1.5em}
\end{figure}
\begin{sloppypar}
\minititle{Breakdown of injection overhead.} The PCIe trace of the
\texttt{put\_bw} test shows the observed injection overhead of
the system. \figref{fig:pcietrace} shows a snippet of the
PCIe trace after filtering for downstream (RC to NIC)
transactions. The data in each downstream transaction is 64 bytes
corresponding to the PIO post of an 8-byte payload. Every transaction
is associated with a timestamp. This timestamp corresponds to the time
when the PCIe analyzer observes the transaction. Since the PCIe analyzer
is sitting just before the NIC, these timestamps correspond to the times
at which the messages reach the NIC. Hence, calculating the delta
of the timestamp of consecutive transactions would result in the observed
\emph{Inj\_overhead}. \figref{fig:observedir} shows the distribution of this
overhead. The modeled injection overhead of \textbf{295.73} nanoseconds is within \textbf{5\%}
of \textbf{282.33} nanoseconds, the mean observed injection overhead. \figref{fig:injection_breakdown}
shows a percentage breakdown of \emph{Inj\_overhead}.
\end{sloppypar}
\begin{figure*}[tbp]
\begin{center}
\includegraphics[width=0.99\textwidth]{pics/pcie_trace.pdf}
\end{center}
\vspace{-1em}
\caption{PCIe trace of downstream PCIe transactions for UCX's RDMA-write injection-rate benchmark (\texttt{put\_bw}).}
\label{fig:pcietrace}
\vspace{-1em}
\end{figure*}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/observed_injection_overhead.pdf}
\end{center}
\vspace{-1em}
\caption[Injection overhead distribution]{Distribution\footnotemark of the observed injection overhead.}
\label{fig:observedir}
\end{figure}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/injection_breakdown.pdf}
\end{center}
\vspace{-1em}
\caption{Breakdown of injection overhead with the LLP.}
\label{fig:injection_breakdown}
\vspace{-1em}
\end{figure}
\subsection{Latency}
\label{sec:latency}
Latency is the total time incurred by a message starting from
the time the host node initiates the transfer to the time of writing
the payload in the destination buffer on the target node.
\minititle{Modeling latency.} We study the latency of a short
message transmitted using send-receive semantics. The
initiation of the transmission begins with an \emph{LLP\_post}, after which
the message traverses the PCIe fabric and reaches the NIC.
The NIC then transmits the message over the network fabric
to reach the target node. On the target node, the NIC performs
a MWr PCIe transaction, which traverses the PCIe wire and
instructs the RC to write the payload into the target node's
memory. Meanwhile the CPU on the target node has been
polling for its posted receive to complete. The user can use its
receive buffer only after a successful poll. Thus, for a payload
of size \emph{x}, the latency is derived as follows:
\begin{dmath*}
\emph{Latency} = \emph{LLP\_post} + 2(\emph{PCIe}) + \emph{Network} + \rctomem[x] + \emph{LLP\_prog}
\end{dmath*}
\begin{sloppypar}
Now, we measure the individual components that contribute to \emph{Latency}. The value of an
\emph{LLP\_post}\ is the same as the one measured in \secref{sec:injection},
that is, 175.42 nanoseconds.
\end{sloppypar}
\minititle{Measuring \emph{PCIe}.} To measure \emph{PCIe}, we first measure the
round-trip latency of the PCIe wire between the NIC and the RC.
Since the PCIe analyzer sits just before the NIC, any transaction
initiated by the NIC and the corresponding ACK DLLP from the RC
will give us the start and end time of the required round-trip.
For this purpose, we use the MWr transactions initiated
by the NIC during the DMA-write of completions. The
timestamp in the MWr transaction is the start time of the round
trip and that in the corresponding ACK DLLP is the end time.
Dividing this round-trip value by two gives \emph{PCIe}\ (the size of this MWr
transaction is the same as that of the PIO copy: 64 bytes).
We measure \emph{PCIe}\ to be 137.49 nanoseconds.
\minititle{Measuring \emph{Network}.} One way to measure \emph{Network}\ would be to
first measure the time difference between when a PIO post reaches
the NIC and when the NIC receives an ACK from the target node
for that PIO post. Then, dividing that difference by two would
correspond to \emph{Network}\ since the difference entails a round-trip
\emph{Network}\ latency. The timestamps on the PCIe trace of the
ping-pong style \texttt{am\_lat} benchmark allow us to employ this method. A
downstream 64-byte PCIe transaction corresponds to a ping
and the next upstream 64-byte PCIe transaction corresponds
to the ping's completion which is generated upon reception of
the ACK. Doing so, we measured the value of \emph{Wire}\ (the NIC-to-NIC
component of \emph{Network}) to be 274.81 nanoseconds
for a direct NIC-to-NIC connection. If the NICs are connected via
a switch, the additional overhead of \emph{Switch}\ is 108 nanoseconds. We measured
this by taking the difference between two latency
measurements: one with a switch involved and one without.
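Both one-way values above come from halving a round trip observed at the analyzer. A minimal sketch of that reduction, with hypothetical timestamps:

```python
# Deriving one-way times from round-trip timestamp pairs on the PCIe
# trace. All timestamps are hypothetical, in nanoseconds.

def one_way_ns(start_ts, end_ts):
    """Half of the start -> end round trip observed at the analyzer."""
    return (end_ts - start_ts) / 2.0

# PCIe: MWr timestamp -> timestamp of the corresponding ACK DLLP.
pcie = one_way_ns(1000.00, 1274.98)   # ~137.49 ns

# Network: downstream ping -> upstream completion generated on ACK receipt.
wire = one_way_ns(5000.00, 5549.62)   # ~274.81 ns

# Switch overhead: difference of two latency measurements.
switch = 1323.0 - 1215.0              # 108.0 ns, hypothetical inputs
```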
\footnotetext{Max is not shown in the figure due to the large value.}
\minititle{Measuring \rctomem[8].} To measure \rctomem[8], we utilize
the timestamps on the PCIe trace data of the \texttt{am\_lat}
ping-pong benchmark. As shown in \figref{fig:rctomemcalc}, the
time difference between an incoming pong and outgoing ping
entails an \rctomem[8], two \emph{PCIe}s (one for the inbound
pong and the other for the outbound ping), an \emph{LLP\_prog}\ (successful poll),
and an \emph{LLP\_post}\ (the ping). Once we measure the pong-ping difference
from the PCIe trace, we can compute the value of \rctomem[8] since
we have measured the values of the other components. This way, we
measured the value of \rctomem[8] to be 240.96 nanoseconds.
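The algebra of this extraction can be sketched directly. PCIe and LLP_post below are the measured values quoted above; the LLP_prog value and the pong-to-ping delta are hypothetical placeholders.

```python
# Solving for RCToMem(8) from the pong-to-ping delta on the PCIe trace:
#   delta = RCToMem + 2*PCIe + LLP_prog + LLP_post
# PCIE and LLP_POST are the measured values quoted in the text;
# LLP_PROG and the delta are hypothetical placeholders.

PCIE = 137.49      # ns, one-way PCIe
LLP_POST = 175.42  # ns, from the injection section
LLP_PROG = 60.0    # ns, placeholder for the successful-poll cost

def rc_to_mem_ns(pong_ping_delta):
    return pong_ping_delta - 2 * PCIE - LLP_PROG - LLP_POST

rtm = rc_to_mem_ns(751.36)  # hypothetical delta
```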
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/pong_ping_delta.pdf}
\end{center}
\vspace{-1em}
\caption{Measuring \rctomem[x] using the time delta between an inbound
pong and outbound ping on node 1.}
\vspace{-1em}
\label{fig:rctomemcalc}
\end{figure}
Plugging in our measured values (reported in \tabref{tab:measuredtimes})
into the latency model of a short message transmitted with send-receive
semantics, we have $\emph{Latency} = \textbf{1135.8}$ nanoseconds.
\begin{sloppypar}
\minititle{Breakdown of latency.} The observed latency from
UCX's \texttt{am\_lat} test is 1215 nanoseconds. The benchmark
measures a round-trip latency and then divides the
measurement by two to report the latency. Since a
measurement update occurs before the target responds with a pong,
we need to deduct half of ``Measurement update'' from \tabref{tab:measuredtimes}
from the observed latency, which results in \textbf{1190.25} nanoseconds. The
modeled latency is within \textbf{5\%} of this observed latency. \figref{fig:latencybreakdown}
shows a percentage breakdown of latency.
\end{sloppypar}
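The arithmetic of this comparison can be checked directly; the measurement-update cost below (49.5 ns) is inferred from the quoted numbers rather than read from the table.

```python
# Model-vs-observed check for the am_lat latency (all times in ns).
# measurement_update is inferred from 1215 - x/2 = 1190.25, not taken
# from the table.

modeled = 1135.8
observed = 1215.0
measurement_update = 49.5

corrected = observed - measurement_update / 2     # 1190.25 ns
rel_error = abs(modeled - corrected) / corrected  # ~0.046, within 5%
```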
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/latency_breakdown.pdf}
\end{center}
\vspace{-1em}
\caption{Breakdown of latency with the LLP.}
\vspace{-1em}
\label{fig:latencybreakdown}
\end{figure}
\section{Related Work}
\label{sec:related}
Prior research (described below) shows the effects of optimizing
certain components on the overall communication
performance. We take an inverse approach that first
explains the observed performance and then showcases the
potential of optimizations. To the best of our
knowledge, this paper's detailed breakdown
encompassing all CPU, I/O, and network components is the
first of its kind. Additionally, it is the first
such study for an Arm-based server.
\minititle{Communication breakdown.} Papadopoulou et al.~\cite{papadopoulou2017performance}
present a detailed instruction breakdown of initiation and
progress functions between UCP and UCT to identify
engineering and abstraction overheads. Similarly,
Raffenetti et al.~\cite{raffenetti2017mpi} analyze the
overheads in the MPICH library using instruction analysis.
Both reduce the number of instructions
used in commonly used functions, resulting in higher
communication performance. However, they only focus on
one level of the stack. Our work spans both the MPICH and
UCX stacks in addition to I/O and network components.
Ajima et al.~\cite{ajima2018tofu} present a breakdown of
an RDMA-write latency on the post-K system using
simulation waveforms of hardware emulators. Our work
presents a breakdown with measured times using our described
methodology as opposed to instructions or simulations to
explain the observed communication performance.
\minititle{Relevance of I/O.} Like us, several others also mention the bottlenecks
imposed by PCIe in datacenter networking systems.
Kalia et al.~\cite{kalia2016design} emphasize the need to
consider the low-level details and features of Verbs and the
PCIe subsystem while designing RDMA-based systems. In fact,
Neugebauer et al.~\cite{neugebauer2018understanding}
and Alian et al.~\cite{alian2018simulating} contribute PCIe models to
evaluate the impact of improvements to current I/O subsystems.
We quantitatively compare the I/O overheads against those
of CPU and network components. In addition to PCIe, we
profile the time spent by the RC to write to memory.
Unlike prior work, we use PCIe traces to validate software measurements.
\section{Evaluation setup}
\label{sec:setup}
To measure the breakdown of time spent in components we use a
system of two nodes, node 1 and node 2, that are connected
to each other using a high-performance interconnect. Node 1 plays
the role of the initiator in our following experiments. We use the
CPU's timers to measure the time spent in software. To measure the
time spent in other components, we use traces from a PCIe analyzer.
Note that one can use this analysis infrastructure for any CPU or
interconnect of interest.
We choose a state-of-the-art ThunderX2-based (TX2) server (running at 2 GHz)
for the nodes and TOP500-popular Mellanox InfiniBand~\cite{ibtop500} as the high-speed interconnect.
Specifically, we use ConnectX-4, a recent Mellanox InfiniBand adapter,
and attach it to the node through a PCIe slot. A Lecroy PCIe analyzer
sits just before the NIC on node 1, as shown in \figref{fig:pcietestbed}.
The overhead of the PCIe analyzer is negligible as we did not observe
any difference in performance with and without it. Larsen et al.~\cite{larsen2015reevaluation}
observe the same. The analyzer is a passive instrument
that allows data to pass through fully unaltered~\cite{lecroy}.
For our software stack, we use the CH4 device of MPICH~\cite{raffenetti2017mpi}
with Unified Communication X (UCX)~\cite{shamis2015ucx} as the
underlying communication framework. Specifically, we use UCX's
\emph{rc\_mlx5} transport, which is UCX's implementation of the
data-path operations, such as posting to the transmit queue and
polling from the completion queue, for modern Mellanox InfiniBand
adapters.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.49\textwidth]{pics/testbed.pdf}
\end{center}
\vspace{-1em}
\caption{Two-node setup with PCIe analyzer on node 1.}
\label{fig:pcietestbed}
\vspace{-1.5em}
\end{figure}
To measure time spent in the CPU, we instrument relevant code
with UCX's UCS profiling infrastructure~\cite{ucs_profiling}, which
internally reads the \texttt{cntvct\_el0} register timer preceded by an \texttt{isb}
for \texttt{aarch64}. The mean overhead of this infrastructure is 49.69
nanoseconds (a standard deviation of 1.48 for 1000 samples); we report
software measurements in the rest of the paper after removing this overhead.
Each reported CPU or PCIe analyzer measurement is a mean of at least
100 samples. While measuring the time of a component, we do not
simultaneously measure the time in any other component, to minimize any
effects of artificial slowdowns caused by the timer infrastructure. Hence, we do not require
synchronized timers.
\section{Introduction}
\gro, an X-ray transient bursting pulsar,
was discovered near the Galactic center on 1995 Dec.\ 2
with the Burst And Transient Source Experiment (BATSE)
on board the Compton Gamma-Ray Observatory (CGRO)
(Fishman \etal\ 1995; Kouveliotou \etal\ 1996).
Using the BATSE data, Finger \etal\ (1996) concluded that
\gro\ is a low mass X-ray binary system including a neutron
star with pulse frequency of 2.14~Hz and a binary
orbital period of 11.83 days; the mass function of the
system is 1.36 $\times$ 10$^{-4}$ M$_{\odot}$.
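As a rough illustration, the minimum companion mass implied by this mass function can be obtained by fixed-point iteration; the 1.4 M$_{\odot}$ neutron-star mass and edge-on inclination assumed below are illustrative, not values from the references.

```python
# Minimum companion mass from the mass function
#   f = (M2 sin i)^3 / (M1 + M2)^2,
# assuming i = 90 deg and M1 = 1.4 M_sun (illustrative assumptions).

F_MASS = 1.36e-4  # mass function, M_sun
M1 = 1.4          # assumed neutron-star mass, M_sun

m2 = 0.1  # initial guess, M_sun
for _ in range(50):
    m2 = (F_MASS * (M1 + m2) ** 2) ** (1.0 / 3.0)
# converges to roughly 0.066 M_sun
```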
Finger \etal\ (1996) estimated
the magnetic field to be $<6\times10^{11}$ G\@.
Later, Cui \etal\ (1997) analyzed observations with
the Proportional Counter Array (PCA) on Rossi X-ray
Timing Explorer (RXTE)
and estimated the magnetic field strength to be
$\sim~2.4\times10^{11}$~G,
assuming that the system was in the propeller state.
Zhang \etal\ (1996) discovered Quasi-Periodic
Oscillations (QPO) of the X-ray intensity at 20~Hz, 40~Hz and
60~Hz in the PCA data;
QPO features at 20~Hz and 60~Hz were
weak, while the 40~Hz QPO was quite prominent.
Major outbursts have been detected twice so far from \gro.
The first outburst started in December 1995, which led to
the discovery of the source.
The burst activity was terminated in early May 1996
(Kouveliotou \etal\ 1996b).
The second outburst began about a year later in December 1996
(Kouveliotou \etal\ 1997) and ended in April 1997.
Both outbursts decayed with a time scale of a few tens of days.
In addition to these major outbursts \gro\ seems to show
small outbursts with shorter
durations (Jahoda \etal\ 1996), detectable only below 50 keV.
Since the discovery of \gro, several thousand bursts have been observed.
Following some bursts, the flux decreases below the persistent level for
a few seconds to a few minutes (hereafter referred to as a dip),
depending on the burst fluence (Swank \etal\ 1996).
Kouveliotou \etal\ (1996) attributed these bursts to accretion
instabilities; similarly, Lewin \etal\ (1996) remarked that
these bursts were probably due to
spasmodic release of gravitational potential energy (Type II) rather
than thermonuclear burning (Type I), comparing the light curves
measured by the RXTE with those of the Rapid Burster.
Bildsten and Brown (1997), however, suggested that the bursts from
\gro\ during its low persistent flux level could be due to the
thermonuclear burning, because the neutron star in \gro\ has
only a weak magnetic field.
A shift in the arrival time of the pulse during the bursts was
observed by the Oriented Scintillation Spectrometer Experiment
(OSSE; Blanco \etal\ 1996, Strickman \etal\ 1996)
and BATSE (Koshut \etal\ 1998) on board the CGRO and PCA on
board RXTE (Stark \etal\ 1996).
Strickman \etal\ (1996) reported on the phase shift observed during
the bursts, while Stark \etal\ (1996) reported on the pulse arrival
time lag which
persisted longer than the burst emission.
Koshut \etal\ (1998) reported on the energy and intensity dependence
of the phase lag, or the lack thereof.
In spite of the stimulating results on the timing analysis
summarized above, only limited spectral information has been
reported so far (Swank \etal\ 1996).
We therefore performed ASCA observations with a better spectral
resolution than so far reported.
This paper presents the results putting particular emphasis on the
spectral features, and discusses the nature of this enigmatic X-ray object.
Preliminary analysis of a part of the ASCA observations has been
reported by Dotani \etal\ (1998).
\section{Observations and Data Reductions}
The ASCA satellite carries two Gas Imaging Spectrometers
(GIS2 and GIS3) and two
Solid state Imaging Spectrometers (SIS0 and SIS1) at the foci of the
nested thin foil mirrors (Tanaka \etal\ 1994).
The GIS covers a wider field of view (50' in diameter)
with high time resolution in
the energy range of 0.7--10 keV (Ohashi \etal\ 1996, Makishima \etal\ 1996),
while the SIS achieves high energy resolution (2~\% at 6~keV),
but the field of view is smaller (11' $\times$ 11' in 1CCD mode,
Burke \etal\ 1994).
These two types of detectors have complementary characteristics.
ASCA observations of \gro\ have been made twice so far.
The first was a TOO
(target of opportunity observation) made on 1996 Feb.\ 26--27
(hereafter observation I), and the second observation was made
on 1997 Mar.\ 16 (observation II)\@.
Serendipitously, \gro\ was in the ASCA field
of view while in outburst (observation II), as the primary
target of the observation was the Galactic center region.
In addition, a sky region
including \gro\ was observed in September 1995,
two months before the onset of the first outburst.
However, the source was not detected;
a $3\sigma$ upper limit of the source flux is listed in Table~1.
In observation I, SIS was operated in 1-CCD
bright mode and GIS in PH mode.
The bit assignment of GIS telemetry was changed to achieve higher
time resolution (2~msec in high bit rate) at the sacrifice of the rise
time information of the detector signals.
In observation II, SIS was operated in 4-CCD
faint mode, and GIS in PH mode with the same
bit assignment as in observation I.
In observation II, \gro\ was not the primary target,
hence was located at the edge of the SIS fields of view.
This makes it very difficult to analyze the SIS data of \gro. Furthermore, the SIS performance exhibited large degradation in 4-CCD
mode due to radiation damage. Therefore, we analyzed
only the GIS data in observation II.
We screened the raw data using the following criteria.
The criteria reject (1) data obtained at low elevation angles
($<\!5^\circ$) from the earth limb, (2) those affected by the
South Atlantic Anomaly, (3) SIS data obtained at low elevation
angle ($<\!20^\circ$) from the day earth, and (4) data obtained
at medium and low telemetry bit rate. The 4th criterion, which
is usually not applied, was necessary because \gro\
was very bright during the observations.
The count rate of the source greatly exceeded the telemetry
capacity in medium and low bit rates.
Hence, we used only the data in high bit rate, where the telemetry
capacities of 128 cnts/sec/GIS and 256 cnts/sec/SIS were large
enough to transfer all the event data of the persistent emission.
Although the data thus screened were enough for most of the
subsequent analysis, SIS data during the bursts could not be
used in observation I, because the large increase of the count rate
caused telemetry saturation. Thus SIS data were used only for the analysis of the
persistent and dip emission in observation I\@.
The journal of the observations is given in Table 1.
The source photons (event data) were extracted from a $3^\prime$ radius region centered on the position of \gro\ for each detector.
Background data are accumulated from an annulus of $3^\prime-5^\prime$ radius centered
on the source. Figure 1 shows the GIS images of \gro\ in observations I and II, together with the regions of source and background extraction.
Since \gro\ was very bright during the ASCA observations, SIS suffered heavily from photon pile-up (more than 2 photons falling in a pixel within a readout cycle) near the image
center, where the flux density is large. Thus the SIS source data in the inner region of a radius of 1.5~arcmin are excluded from the extraction radius of $3^\prime$. We further corrected the remaining data for photon pile-up using the method described in Ebisawa \etal\ (1996), although the correction factor was found to be less than 1 \%.
The large flux of \gro\ also caused significant dead time on the on-board processing of GIS source data (event data). Using the method described in Makishima \etal\ (1996), the dead time fractions are estimated to be $\sim\! 30$~\% in observation I and $\sim\! 2$~\% in observation II in high bit rate.
The monitor data provide further information in addition to the event data. The most important monitor data used in this paper are L1 data (Ohashi \etal\ 1996); it is the total number of X-ray event data
(0.7--10 keV) after on-board screening.
The L1 data are essentially free from the telemetry limit
and suffer from only very little dead time (25 $\mu$s). Although the L1 data have no spatial resolution, this is not a problem for the \gro\ case, because the source was the single dominant X-ray source in the GIS field of view (see Figure~1). Accordingly, the L1 data provide an accurate flux of \gro,
hence can be used for the correction of the dead time on the event data.
The minimum time resolutions for the event data and L1 data are
2~msec and 125~msec, respectively, in high bit rate mode
in both of the observations.
Further details of ASCA and its instrumentation can be found in
Tanaka \etal\ (1994), Burke \etal\ (1991), Serlemitsos \etal\ (1995),
Ohashi \etal\ (1996) and Makishima \etal\ (1996).
\section{Analysis and Results}
\subsection{Burst and Dip profiles}
We first made the X-ray light curves using the GIS L1 data;
they are given in Figure~2.
After the correction for the vignetting, the persistent
flux was found to be on average $\sim\! 200$ counts/sec
and $\sim$ 60 counts/sec for observations I and II, respectively.
We detected 12 bursts, 10 giant and 2 small bursts, during $1.6\times10^4$~sec
exposure in observation I, and 17 giant bursts during $3.8\times10^4$~sec
exposure in observation II\@.
Thus the mean burst intervals (for giant bursts) are 27~min for
observation I and 37~min for observation II\@.
Although the flux level of the persistent emission differed by a
factor of 3.3 between
the two observations, the burst rate decreased only by a factor of $\sim1.4$.
The so-called $\alpha$ value, the ratio of the average luminosity
emitted in the persistent emission to that emitted in a burst
(the average is taken over the time interval since the previous
burst), is roughly the same, $\sim\! 100$, for both observations.
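The $\alpha$ arithmetic can be sketched with the numbers above; the burst fluence used here is a hypothetical placeholder chosen only to illustrate the ratio.

```python
# alpha = (persistent counts accumulated since the previous burst)
#         / (counts in the burst), using count rate as a luminosity proxy.
# The burst fluence below is a hypothetical placeholder.

persistent_rate = 200.0     # counts/s, observation I
burst_interval = 27 * 60.0  # s, mean giant-burst interval
burst_counts = 3240.0       # hypothetical burst fluence, counts

alpha = persistent_rate * burst_interval / burst_counts  # 100.0
```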
In the light curve of observation I, we also found small bursts with a peak flux of about 10~\% of that of the giant bursts.
Typical profiles of the giant and small bursts are plotted in Figure~3.
We see a clear intensity dip after both types of bursts,
whose duration depends on the burst fluence.
We estimated the burst and dip total counts as follows.
Because the persistent flux level just before the burst was found to be
constant, we could define the pre-burst flux level for each burst.
Then, the total burst/dip counts are calculated as the sum of the
deviations of the count rate from the pre-burst level.
Not all of the burst-dip pairs were fully observed until the flux level of
the dip recovered to the persistent level, hence
we were able to estimate the total counts of bursts and dips for
7 (observation I) and 8 (observation II) burst-dip pairs.
The correlation between the total counts of burst and dip is
plotted in Figure~4 for each burst-dip pair.
We see that the ratios of the burst fluence to the flux deficiency
in the following dip are about 1/2 and 1/4 for observations I and II, respectively.
\subsection{Pulse Profiles}
Using the L1 light curve, pulse periods in the persistent phase of each observation were determined.
The barycentric pulse periods (corrected for the binary motion) in the
persistent phase are found to be 0.467059(2) sec and 0.4670469(8) sec,
respectively, for observations I and II\@. The value in the parenthesis
represents error in the last digit of the pulse period.
The pulse profile in the persistent phase is sinusoidal, with
a systematically larger pulse fraction in observation I than in observation II: $7.8\pm0.1$~\% and $5.7\pm0.2$~\% in the
0.7--10~keV band for observations I and II, respectively.
During the dip and burst phases, the pulse profiles have essentially
the same shape (sinusoidal) as those of the persistent phase.
The pulse amplitude sometimes becomes very large (a few tens of \%)
during the bursts, although the average amplitude is only slightly
larger than that of the persistent phase.
An example of the large pulse amplitude during bursts is shown in Figure~5.
Next, we investigated the energy dependence of the pulse profile using the X-ray event data in observation I.
Although the event data suffered from a significant dead time, we did not apply the dead time correction to the raw event data, because the time resolution of the L1 data, which is necessary for the dead time correction, is not good enough (125 msec) compared with the pulse period of 467 msec. The pulse profiles are given in Figure 6 (upper panels) for three energy bands. The pulse profiles are almost sinusoidal in every energy band.
The pulse profile of observation I is fitted with a sinusoidal function plus a constant, and pulse fractions, defined as the ratio of the pulse amplitude to the non-pulsed component, are calculated and given in Figure 7 with cross symbols. We estimated the dead time effect on the pulse fractions using the method given in the Appendix.
The solid line in Figure 7 is the dead-time-corrected
pulse fraction.
We see that the large difference of the pulse fraction between
higher and lower energy bands in the raw event data (crosses)
is mainly due to the dead time effect and that
the difference of the pulse fraction is much reduced
after the dead time correction (solid line).
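The direction of this dead-time effect can be illustrated with a simple nonparalyzable dead-time model; this is only a toy sketch, not the correction of the Appendix, and the rates and dead time below are invented.

```python
# Toy model: nonparalyzable dead time compresses an observed pulse
# fraction, because the observed rate r/(1 + r*tau) saturates and
# suppresses pulse maxima more than minima. Parameters are invented.

def observed_rate(true_rate, tau):
    return true_rate / (1.0 + true_rate * tau)

r0, frac, tau = 400.0, 0.08, 1.0e-3  # mean rate (c/s), true fraction, dead time (s)

r_max = observed_rate(r0 * (1 + frac), tau)
r_min = observed_rate(r0 * (1 - frac), tau)
frac_obs = (r_max - r_min) / (r_max + r_min)
# frac_obs < frac: the raw pulse fraction underestimates the true one
```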
The pulse fraction in observation II, where the dead time is negligibly small, shows the same dependence on the energy band as in observation I, as is seen in the lower panel of Figure 6.
\subsection{Spectral Analysis}
The spectral analysis of \gro\ is performed separately
for the persistent, dip and burst emissions.
As explained in section 2, both sets of SIS and GIS data are used for
observation I (except for
the bursts, for which only the GIS data are used), but only GIS data
are used for observation II\@.
For each GIS spectrum the dead time was corrected according to the method given in Makishima \etal\ (1996). We also corrected for the GIS gain shift, because the GIS gain is known to increase slightly under a high count rate (Makishima \etal\ 1996). The gain correction factors we adopted for the burst and persistent/dip
phases were 0.6~\% and 0.2~\% for observations I and II, respectively. Note that the persistent emission is not subtracted from the burst energy spectra.
We found that the energy spectra between observations I and II
are very similar for the persistent, burst, and dip phases.
Therefore, we first try to reproduce the energy spectra of the
persistent emission, which have the best statistics, and then
applied the same model to the energy spectra of the
burst and dip phases.
Among the simple models (such as a power law, a thermal Bremsstrahlung,
and a blackbody), a power law ($\Gamma \sim 1.0$) modified by the cold
matter absorption could reproduce the overall shape of the energy spectra
relatively well.
However, the model is rejected because of the large residual structure
around 6--7 keV as shown in the middle panel of Figure 8.
As a working hypothesis, we assume that it corresponds to
an iron emission line and we added a model of a gaussian line
to the power law continuum.
As shown in the lower panel of Figure 8, the residual structure at 6--7 keV is
greatly reduced.
From Table 2, we see that the model parameters obtained
from the different sensors are not consistent with each other; the parameters of all 4 sensors show
systematic differences which are larger than the statistical errors.
Differences of the best-fit parameters are significant not only
between GIS and SIS, but also between sensors of the same type.
This may be partly due to a problem of the gain uncertainty of the sensor,
because the discrepancy between the data and the model is especially
large at the energies where the detection efficiency changes
rapidly (eg.\ energies at gold M-edge, silicon K-edge, xenon L-edge).
However, the discrepancy
at these energies cannot be removed by simply adjusting the gain.
Therefore the systematic errors are attributed not only to
a possible gain uncertainty, but also to other calibration errors of
the sensors.
Because \gro\ is a highly absorbed source having a significant
flux only in the higher energy part of the ASCA band,
calibration errors of the sensors could be larger than usual.
Since we have no data to estimate the calibration errors
quantitatively, we regard the parameter
differences among the sensors as the practical range of the systematic errors.
We also fit the energy spectra of the burst and the dip phases with
the same model; results are summarized in Table 2.
Because the values of photon index and $N_{\rm H}$ couple together,
we investigate the confidence contours between these two values and find
a tendency for the energy spectrum to become slightly hard
during the bursts and slightly soft during the dips in
observation I\@.
Although such a tendency is not clear in observation II, this may
be due to the poorer statistics of the data.
If we compare observations I and II, the photon index
of observation I seems to be systematically larger than
that of observation II\@.
\subsection{Iron feature}
We found that the 6--7 keV structure is relatively well reproduced by a phenomenological model of a broad Gaussian line with the center energy at the iron K-shell transition;
the structure is consistent with a broad line ($\sigma \sim (6-8)\times10^2$ eV) centered at 6.6--6.8 keV\@.
In this subsection, we further apply a more physical model.
The large line width cannot be due to thermal broadening,
because the corresponding temperature would exceed 1 MeV, but
most probably reflects the bulk motion of the line emitting gas.
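This can be checked with a back-of-the-envelope estimate: for thermal Doppler broadening, $kT \approx m_{\rm Fe}c^2(\sigma/E)^2$. The width and line energy below are representative values from the fits.

```python
# Temperature required for the ~700 eV Gaussian width at ~6.7 keV to be
# thermal Doppler broadening: kT ~ m_Fe c^2 * (sigma/E)^2.

M_FE_C2 = 55.85 * 931.494e6    # iron rest-mass energy, eV
sigma, energy = 700.0, 6700.0  # line width and center, eV

kT = M_FE_C2 * (sigma / energy) ** 2  # ~5.7e8 eV, i.e. hundreds of MeV
```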
If the mass accretion occurs via an accretion disk, the
line width may arise from the Doppler broadening due to the
Kepler motion of the accreting matter.
To check this possibility, we try to fit the iron feature
using the so-called disk line model (the model ``diskline'' in XSPEC)\@.
In the fit, we fixed the outer disk radius, which is hardly
constrained by the data, to 1000 $R_g$ ($R_g$: gravitational radius, GM/c$^2$).
The results are summarized in Table 3.
The disk line model is found to reproduce the iron structure well.
The ratio of the observed energy spectra to the best-fit model
function is plotted in Figure 9.
The line energy ranged over 6.2--6.6 keV in observation I.
Therefore, we conclude
that the iron should be in a low ionization state, although the large
systematic errors do not exclude a possibility of He-like iron.
The inner radius of the accretion disk is located around
$10-30 \; R_g$, i.e.\ $(4-12) \times 10^6$ cm, and the
inclination angle of the accretion disk is $>\!40$ degrees.
In observation II, we could not constrain the disk inclination.
If we fixed it to $15^\circ$,
a high line energy, 6.7--7.0 keV, seems to be preferred.
However, if the inclination is larger, lower line energy is still
accepted.
In a preliminary analysis, Dotani \etal\ (1998) reported that
the broad emission line structure was reproduced by a partial covering model. Thus we investigate this model using the more elaborate spectra. The results of the fitting are summarized in Table 3 and the ratio to the best-fit model is shown in Figure~10.
From the shape of the residual structures in Figure~10,
the profile of the partial covering model seems not
to match the observed shape of the iron structure very well.
Residual structures are prominent around 7~keV\@.
The fits are also generally not very good (reduced $\chi^2$ of 2--3 is
obtained in observation I)\@.
Thus we conclude that the partial covering model is not preferred
over the disk line model.
\section{Discussion}
\subsection{Interstellar Absorption and Source Distance}
We found that N$_{\rm H}$ values are constant at $(5-6)\times10^{22}$
H~cm$^{-2}$
regardless of the large luminosity changes between observations I and II.
This value is slightly larger than the one reported by Giles \etal\ (1996)
with the RXTE PCA data; their analysis, however, was preliminary.
The stable N$_{\rm H}$ implies that the column density can be attributed to
the interstellar absorption.
Sakano \etal\ (1997) obtained N$_{\rm H}$ values from many X-ray sources
near the Galactic center direction and found that most of them show
systematic dependence on the Galactic latitude, which supports that
they really lie near the Galactic center.
The N$_{\rm H}$ value of \gro\ determined by the present work, lies
at the edge of the above column density distribution, hence we conclude that
\gro\ is really located near the Galactic center at 8.5~kpc distance.
\subsection{The Pulsed Emission}
X-ray pulsations of \gro\ are found to be very different from those of typical binary pulsars.
The pulse amplitude is small, 6--8~\% in the 2--10 keV range, and
the profile is almost perfectly sinusoidal.
The pulse profile of binary pulsars is usually energy dependent and
sometimes has complex features in a lower energy band.
Such energy dependence is not recognized in \gro,
except for the increase of pulse fraction toward higher energies.
The differences in the characteristics of the X-ray pulsation
of \gro\ may result from parameters of the system, e.g.\
mass accretion rate, accretion geometry, parameters of the
neutron star, etc., which differ from those of typical binary pulsars.
X-ray pulsation is produced by the beamed X-ray radiation
from the polar cap regions of the neutron star, onto which
the accreting matter is channeled by the strong magnetic field.
The sinusoidal pulse profile of \gro\ indicates that either the emitting regions near the polar caps are large, or the system has a low inclination, as argued by
Finger \etal\ (1996), both of which would produce only weakly beamed radiation.
A small pulsation amplitude is also expected from
weakly beamed radiation.
The size of the polar cap emission region is considered to reflect the size of the Alfv\'{e}n radius.
Accreting matter, either in a disk or in wind, is frozen in the
magnetic field at the Alfv\'{e}n radius and drifts inward along the
magnetic fields lines.
If we assume a dipole magnetic field, a smaller Alfv\'{e}n radius
results in a larger polar cap emission region.
This means that the surface magnetic field of \gro\ may be
smaller than that of typical binary pulsars.
The large mass accretion rate of \gro\ also makes the
Alfv\'{e}n radius smaller.
Several estimates of the magnetic field of \gro\ in fact
prefer a smaller value.
Finger \etal\ (1996) estimated the magnetic field to be
$<\!6\times10^{11}$ G from the assumption that the Alfv\'{e}n radius
is smaller than the co-rotation radius when the pulsar spins up and
shows pulsation.
Bildsten and Brown (1997) obtained a tighter constraint of
$<\!3\times10^{11}$ G from the same assumption, using a
different set of data.
Cui (1997) observed evidence of the propeller effect
when the source flux decreased and estimated the magnetic field
to be $2.4 \times 10^{11}$ G\@.
Stark \etal\ (1998) used the same data and obtained $1.5\times10^{11}$ G\@,
assuming the presence of a contaminating source nearly at the position of GRO J1744-28.
All these estimates favor a weak magnetic field.
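The common argument behind these limits can be sketched numerically: require the Alfv\'{e}n radius to lie inside the corotation radius and solve for the field. The neutron-star mass, radius, and accretion rate below are assumed illustrative values, not quantities from the text.

```python
import math

# Spin-up argument: r_Alfven < r_corotation bounds the surface field.
# M, R_NS and MDOT are assumed illustrative values (cgs units).

G = 6.674e-8          # gravitational constant
M = 1.4 * 1.989e33    # g, assumed neutron-star mass
P = 0.467             # s, pulse period
R_NS = 1.0e6          # cm, assumed neutron-star radius
MDOT = 1.0e18         # g/s, assumed accretion rate

# Corotation radius: Keplerian orbit with the spin period.
r_co = (G * M * P ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)  # ~1e8 cm

# Alfven radius r_A = (mu^4 / (2 G M MDOT^2))^(1/7); set r_A = r_co and
# solve for the magnetic moment mu, then B = mu / R_NS^3.
mu_max = (2 * G * M * MDOT ** 2 * r_co ** 7) ** 0.25
b_max = mu_max / R_NS ** 3  # a few times 1e11 G
```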
We found that the pulse fraction increases with energy, which indicates that the energy spectrum of the pulsed component is harder than that of the non-pulsed component, and that
the pulsed emission comes from a different region than the non-pulsed component. Since the pulsed emission is most likely from the polar caps of the neutron star, the non-pulsed emission may be attributed to a larger emission region, e.g.\ the whole neutron star surface.
The presence of two emission regions indicates
two accretion paths, as suggested in Cui \etal\ (1997).
Some part of the accreting matter may not follow the magnetic
field and may accrete spherically onto the neutron star.
This may be related to the weak magnetic field.
The presence of two accretion paths is also suspected
in the other type II burster, the Rapid Burster,
based on the observations of quasi-periodic oscillations
(Dotani \etal\ 1990).
Two accretion paths may, therefore, be common to neutron stars
which produce type II bursts.
We found that the pulse fraction during the bursts can be
very large compared to that in the persistent emission.
The light curves of individual bursts indicate a pulse
fraction occasionally as large as 50\%.
On the other hand, the pulse fraction during the dips is almost
the same as that in the persistent emission.
The small pulse fraction in the persistent emission means that
most of the matter accretes spherically onto the
neutron star and only a small fraction of the matter is channeled
to the polar caps.
The large pulse amplitude during the bursts means that the excess mass tends to accrete onto the polar
caps, and may indicate the location of the reservoir responsible
for the bursts.
\subsection{Bursts and Dips}
We find that the burst fluence and the integrated flux deficiency
in the subsequent dip show a good correlation.
The burst fluence was approximately 1/2 (observation I) to 1/4 (observation II) of the flux deficiency. We interpret this correlation to mean that the burst luminosity is compensated
by the following dip, and that the long-term energy release
rate, or accretion rate, is in fact nearly constant.
Therefore the burst activity may not be due to a global increase of the
mass supply from the companion, but due to a sudden infall of
matter which has been accumulated in a reservoir
near the neutron star.
After a burst, a fraction of the accreting matter is accumulated in the
reservoir, hence creating the flux dips.
The fact that the burst fluence and the flux deficiency differ by a
factor of 2 (observation I) to 4 (observation II) suggests the possibility that the X-ray radiation from \gro\ is anisotropic,
and that the anisotropy may change between observations I and II, as well as from the persistent emission to the bursts. This possibility is also discussed in the next subsection, in the context of super-Eddington luminosity.
The $\alpha$-value is found to be $\sim$100 in both observations I and II
in spite of a factor of 3--4 change in the mass accretion rate.
This constancy is maintained by the proportionality of the luminosity of
individual bursts to the persistent luminosity, rather than by the duty ratio
(burst interval) of the bursts, as indicated in Jahoda \etal\ (1998).
\subsection{X-ray Luminosity}
Soon after the discovery of \gro, it was noticed that its X-ray
luminosity can largely exceed the Eddington limit of
a neutron star (Giles \etal\ 1996; Jahoda \etal\ 1998).
The luminosity estimate of course depends on the source distance
and might be reduced if \gro\ were much closer.
However, we have shown that the source column density,
$\sim\! 5.5 \times 10^{22}$ cm$^{-2}$,
is fully consistent with a location near the Galactic center when
compared to the previous ASCA results (Sakano \etal\ 1997).
The super-Eddington luminosity is, therefore, also confirmed by the
ASCA observations.
During observation I, the luminosity (2--10~keV) of the persistent emission was
$(1.6-2.0)\times10^{38}$ \ergs,
which is comparable to the Eddington limit of a neutron star.
Because the burst peak flux was more than 10 times the
persistent flux, the X-ray emission of \gro\ during the bursts
by far exceeds the Eddington luminosity.
In the early phase of the outburst, the X-ray flux was 10 times larger
than that of observation I; hence the luminosity exceeded the Eddington limit
by two orders of magnitude during the bursts, as already noted by many authors.
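As a cross-check of the comparison above, the Eddington limit itself follows from the standard formula $L_{\rm Edd}=4\pi G M m_p c/\sigma_T$. The following sketch is illustrative only (not part of the original analysis); it assumes a canonical 1.4 $M_\odot$ neutron star and Thomson scattering opacity for pure hydrogen:

```python
# Eddington luminosity L_Edd = 4*pi*G*M*m_p*c / sigma_T (CGS units).
# Assumptions: canonical 1.4 M_sun neutron star, pure-hydrogen
# (Thomson scattering) opacity.
import math

G = 6.674e-8         # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10         # speed of light [cm/s]
m_p = 1.6726e-24     # proton mass [g]
sigma_T = 6.652e-25  # Thomson cross section [cm^2]
M_sun = 1.989e33     # solar mass [g]

M = 1.4 * M_sun
L_edd = 4 * math.pi * G * M * m_p * c / sigma_T
print(f"L_Edd = {L_edd:.2e} erg/s")  # ~1.8e38 erg/s
```

This reproduces the $\sim\!1.8\times10^{38}$ \ergs\ scale against which the persistent and burst luminosities are compared.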
Since a super-Eddington luminosity is unlikely to be sustained over the rather long
duration of the burst peaks, as well as in the persistent flux in the early phase of the outburst, one may argue that the apparent super-Eddington luminosity by two orders of magnitude can be explained
by highly anisotropic radiation from \gro. However, as we have already suggested, most of the emission is the non-pulsed component,
which likely comes from the whole neutron star surface; highly anisotropic radiation is therefore unlikely, although a small anisotropy would be possible (see section 4.3). Thus we argue that the super-Eddington luminosity cannot simply be explained by anisotropic radiation, and it remains an important issue for further study.
\subsection{Iron Feature}
We find a significant iron feature in the energy spectra of
\gro\ in both observations I and II\@.
Although it is known that the diffuse Galactic ridge emission
contains an iron emission line, its contamination of the energy
spectra of \gro\ is negligible.
The Fe-line intensity of the ridge emission at the location of \gro\
is $\sim\!20$ photons/sec/cm$^2$/sr (Maeda 1998).
This produces a contamination of $5\times10^{-5}$ photons/sec/cm$^2$
during the observations of \gro.
Because the line flux of \gro\ was $4\times10^{-2}$ photons/sec/cm$^2$
and $7\times10^{-3}$ photons/sec/cm$^2$ for observations I and II,
respectively, contamination from the ridge emission is at least two orders
of magnitude smaller and is completely negligible.
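The order of magnitude of this contamination estimate can be verified with a short calculation. The extraction-region radius is not stated explicitly here, so the value below (about 3 arcmin) is an assumption chosen for illustration; with it, the quoted contamination level follows from the ridge surface brightness:

```python
# Order-of-magnitude check of the ridge-emission contamination.
# Ridge Fe-line surface brightness ~20 photons/s/cm^2/sr (Maeda 1998).
# ASSUMPTION: a ~3 arcmin source extraction radius (not stated in the
# text); the quoted ~5e-5 photons/s/cm^2 follows for that choice.
import math

brightness = 20.0                    # photons/s/cm^2/sr
r_arcmin = 3.0                       # assumed extraction radius
r_rad = math.radians(r_arcmin / 60)  # radius in radians
omega = math.pi * r_rad**2           # solid angle [sr], small-angle approx.
contamination = brightness * omega
print(f"contamination ~ {contamination:.1e} photons/s/cm^2")

# Compare with the observation I line flux of 4e-2 photons/s/cm^2.
ratio = 4e-2 / contamination
print(f"source line / ridge contamination ~ {ratio:.0f}")
```

The source line flux exceeds the contamination by well over two orders of magnitude, consistent with the statement above.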
When fitted with a gaussian line model, the structure in observation I
is consistent with a broad line ($\sigma \sim (6-8)\times10^2$ eV) centered
at 6.6--6.8 keV\@.
The equivalent widths are approximately 200--400 eV\@.
A disk line model also represents relatively well the iron structure.
A problem with the disk line model, however, is that
the best-fit parameters in observation I, especially the inner disk
radius, $(4-12) \times 10^6$ cm, and the disk inclination,
$>\!40^\circ$, do not seem to fit the \gro\ system parameters.
A magnetic field strength of 2$\times$10$^{11}$ G and an X-ray luminosity
of $\sim$ 10$^{38}$ erg/s indicate an Alfv\'{e}n radius of $\sim$ 5 $\times$
10$^{7}$ cm, which is an order of magnitude larger than the above estimate.
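This Alfv\'{e}n radius estimate can be sketched numerically. Note that the numerical prefactor and the definition of the magnetic moment differ between authors; the choice $\mu = BR^3/2$ below, together with a canonical 1.4 $M_\odot$, 10 km neutron star and a mass accretion rate inferred from $L = GM\dot{M}/R$, is an assumption made for illustration:

```python
# Alfven radius for B = 2e11 G and L_x ~ 1e38 erg/s (CGS units).
# Sketch: r_A = (mu^4 / (2*G*M*Mdot^2))**(1/7).
# ASSUMPTIONS: magnetic moment mu = B*R^3/2 (conventions vary),
# canonical M = 1.4 M_sun, R = 10 km, and Mdot from L = G*M*Mdot/R.
G = 6.674e-8            # cm^3 g^-1 s^-2
M = 1.4 * 1.989e33      # g
R = 1.0e6               # cm
B = 2.0e11              # G
L = 1.0e38              # erg/s

mu = B * R**3 / 2.0                            # magnetic moment [G cm^3]
Mdot = L * R / (G * M)                         # accretion rate [g/s], ~5e17
r_A = (mu**4 / (2 * G * M * Mdot**2)) ** (1.0 / 7.0)
print(f"Mdot = {Mdot:.1e} g/s,  r_A = {r_A:.1e} cm")  # r_A ~ 5e7 cm
```

With these assumptions the result is $\sim\!5\times10^{7}$ cm, the value quoted above.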
Furthermore, we find that the inclination of the accretion disk is much larger
than previously estimated (Finger \etal\ 1996).
Sturner and Dermer (1996) discussed the possible nature of the
companion star and estimated the orbital inclination to be $<18^\circ$.
Rappaport and Joss (1997) conducted a series of binary evolution
simulations to constrain the orbital inclination between $7^\circ-22^\circ$.
These estimates are much smaller than the present results.
If we force a larger inner disk radius and a lower inclination
angle on the disk line model, the model line profile becomes
much sharper than the observed profile and does not fit the data.
Therefore we are forced to consider an additional broadening mechanism beyond the conventional disk line model. Are the iron line photons Compton scattered by a surrounding high-temperature plasma?
Does the accretion disk extend to a region inside the Alfv\'{e}n radius estimated for spherical mass accretion?
At this moment we do not have enough data to judge which mechanism is really at work; hence this issue is open for future study.
We find that the energy dependence of the pulse amplitude
shows a local minimum at 6--7~keV\@.
This local minimum may be related to the iron structure
of the energy spectrum.
If we assume that the iron structure is due to the broad
emission line and the line shows no pulsation, a local
minimum in the pulse amplitude is expected just as
observed in Figure~7.
Using the equivalent width of the line, 200~eV, we estimate
the contribution of the line flux to the total flux at 6--7~keV
bin (bin width is 1~keV) to be roughly 20~\%.
Thus the fractional pulse amplitude may decrease by 20~\%
in 6--7 keV\@.
In fact, the observed amplitude was 6~\% in 6--7~keV, whereas
the interpolated amplitude is 8~\%.
Thus the decrease of the pulse amplitude in 6--7~keV may be
due to the presence of the iron line, which is not pulsed.
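The dilution argument above is simple arithmetic, made explicit in the following sketch (using the interpolated amplitude and line contribution quoted in the text):

```python
# If a non-pulsed line contributes ~20% of the flux in the 6--7 keV bin,
# the fractional pulse amplitude in that bin is diluted by the same factor.
interp_amplitude = 0.08  # amplitude interpolated from neighboring bins
line_fraction = 0.20     # estimated line contribution in the 6--7 keV bin

diluted = interp_amplitude * (1.0 - line_fraction)
print(f"expected amplitude: {diluted:.3f}")  # 0.064, close to the observed 0.06
```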
\section{Summary}
We summarize the results obtained from the ASCA observations
of \gro\ as follows:
\begin{enumerate}
\item
We detected 10 and 17 giant Type II bursts during observations I and II, with
mean burst intervals of about 27 min and 37 min, respectively.
The burst fluence is found to have a good correlation with the
total flux deficiency in the following dip.
This correlation is interpreted to mean that the average mass accretion rate is
constant, and that an accretion instability produces the bursts and dips.
The burst intervals do not change very much in spite of a factor of 4
decrease in X-ray luminosity between observation I and II\@.
Thus the burst fluence also decreases in accordance with the
persistent flux.
\item
The absorption column is found to be constant at
$N_{\rm H} = (5-6) \times 10^{22}$ H~cm$^{-2}$ regardless of the observation date
and the source status (persistent, burst and dip).
This strongly indicates that the column density corresponds to the interstellar
absorption, and the source is actually located near the Galactic center,
at a distance of 8.5~kpc.
\item
The persistent X-ray luminosities in the first and the second observations are
$(1.6-2.0) \times 10^{38}$ erg s$^{-1}$ and $(4.1-4.2) \times
10^{37}$ erg s$^{-1}$, respectively. The burst peak fluxes at
the first observation exceed the Eddington limit of a neutron star
by a factor of 10, if the radiation is spherically symmetric.
\item
The energy spectra of \gro\ are represented approximately by an absorbed
power law with a broad line at $\sim$6.7 keV\@.
The shape of the spectra is almost the same for the different observation
dates and types (burst, dip and persistent emission).
However, the spectrum is slightly harder in observation II ($\Gamma \sim 1.0$)
than in observation I ($\Gamma \sim 1.2$), and is harder during bursts
than in the persistent phase in each set of observations.
\item
The energy spectrum of the pulsed component is harder than
that of the non-pulsed component.
We consider that the different spectral hardness indicates
different emission regions, e.g.\ pulsed component from the
polar caps and non-pulsed component from the whole neutron star
surface.
\item
The presence of the iron feature is clearly seen in all energy spectra and
is also indicated by the decrease of pulse fraction at 6--7 keV\@.
The feature is well reproduced by a disk line model.
However, some line broadening mechanism is needed
to make the disk line parameters consistent with the system
parameters of \gro.
\end{enumerate}
\vspace{2cm}
\acknowledgments
We express our thanks to all the ASCA team members for their efforts in the
fabrication of the satellite, launch, daily operation, software development,
and calibration. M.N. and Y.M. are financially supported by the Japan
Society for the Promotion of Science.
\newpage
\section{Appendix}
The dead time of the ASCA GIS is energy independent; hence it does not change the
shape of the energy spectrum but only decreases the normalization factor. Therefore
the dead time effects can simply be corrected using the L1 counts.
However, if the flux is time variable, the dead time effects on the time variable component become energy dependent due to ``cross-talk'' between different energy bands. This appendix
investigates the effect on the pulse fraction in different energy bands.
Makishima \etal\ (1996) reported that GIS has three kinds of dead time;
dead time caused by the
hard-wired electronics, that due to the on-board CPU processing,
and that due to a limitation in the telemetry capacity.
These dead times depend on the GIS mode and telemetry bit rate.
In the case of PH mode with high bit rate, the same mode as in the analysis of
the present paper, the on-board CPU processing time is the dominant source of the dead time.
Therefore, in this appendix, we consider only the CPU-induced dead time; the dead time fraction, $f_{\rm CPU}$, can be expressed as
$f_{\rm CPU} = 1-(y+{\rm e}^{-y})^{-1}$ with $y = \tau \cdot X$,
where $\tau $ is the average CPU time per event
($\tau = 8.1$ msec; Makishima et al.\ 1996), and $X$
is the incident photon flux in the total energy band.
When the incident flux exceeds a few tens of photons per second,
the dead time effects of the GIS become significant.
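The behavior of this dead-time fraction with incident flux can be illustrated directly from the formula above:

```python
# CPU-induced dead-time fraction of the GIS (Makishima et al. 1996):
# f_CPU = 1 - (y + exp(-y))**-1, with y = tau * X and tau = 8.1 ms/event.
import math

def f_cpu(x, tau=8.1e-3):
    """Dead-time fraction for incident count rate x [counts/s]."""
    y = tau * x
    return 1.0 - 1.0 / (y + math.exp(-y))

for x in (10, 30, 160):
    print(f"X = {x:4d} c/s -> f_CPU = {f_cpu(x):.3f}")
```

At a few tens of counts per second the fraction is at the percent level, while at the $\sim\!160$ c/s relevant to observation I it exceeds 30 per cent.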
We consider a simple case that the time variation (pulsation) of the incident
photon flux $x(t;E)$ is represented by a sinusoidal form:
\begin{equation}
\label{eq:append1}
x(t;E) = f_0(E) + f_1(E) \sin(\omega t),
\end{equation}
where $f_0(E)$ is a constant flux and $f_1(E)$ is the amplitude of
sinusoidal variation.
Since any time variations of X-ray flux can be expressed by Fourier series, results obtained in this appendix can be applied to general X-ray variabilities.
Since the dead time fraction of the GIS does not depend on energy,
the detected (dead time included) count rate $x_d(t;E)$ at energy $E$ is given as:
\begin{equation}
\label{eq:append2}
x_d(t;E) = \frac{f_0(E) + f_1(E) \sin(\omega t)}
{X(t)\tau + \exp(-X(t)\tau)},
\end{equation}
Then the time-averaged count rate is calculated as
\begin{equation}
\overline{x_d}(E) = \frac{1}{T} \int_0^T x_d(t;E) \, dt,
\end{equation}
where $T$ ($=2\pi/\omega$) is the period of sinusoidal variation.
Note that the time variation of $x_d(t;E)$ is no longer sinusoidal,
because the dead time fraction, which corresponds to the denominator
of eq.~(\ref{eq:append2}), is flux dependent; in other words, the dead time
fraction varies from the pulse peak to the valley.
The rms fractional amplitude $r(E)$ of the variation is:
\begin{equation}
r(E) = \frac{1}{\overline{x_d}(E)} \left\{ \frac{1}{T}
\int_0^T (x_d(t;E) - \overline{x_d}(E))^2 \,dt \right\}^{1/2}.
\end{equation}
When there is no dead time (i.e.\ $\tau = 0$), $\overline{x_d}(E) = f_0(E)$
and $r(E) = f_1(E)/(\sqrt{2} \, f_0(E))$.
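The time average and rms fractional amplitude defined above can be sketched numerically. The band fluxes below are illustrative values only, not taken from the observations; the point is that dividing a sinusoidally pulsed flux by the flux-dependent dead-time factor suppresses the rms fractional amplitude relative to the ideal $f_1/(\sqrt{2}\,f_0)$:

```python
# Numerical sketch of the time average and rms amplitude r(E) with and
# without the GIS dead-time factor. Band fluxes are ILLUSTRATIVE only.
import math

tau = 8.1e-3                          # CPU time per event [s]
f0 = {"soft": 120.0, "hard": 60.0}    # constant flux per band [c/s]
f1 = {"soft": 12.0, "hard": 8.0}      # pulsed amplitude per band [c/s]
nstep = 20000                         # samples over one pulse period

def rms_amplitude(include_deadtime):
    out = {}
    for band in f0:
        vals = []
        for i in range(nstep):
            s = math.sin(2 * math.pi * i / nstep)
            x_tot = sum(f0[b] + f1[b] * s for b in f0)  # total incident rate
            x_e = f0[band] + f1[band] * s               # band incident rate
            if include_deadtime:
                y = tau * x_tot
                x_e /= (y + math.exp(-y))  # flux-dependent dead-time factor
            vals.append(x_e)
        mean = sum(vals) / nstep
        var = sum((v - mean) ** 2 for v in vals) / nstep
        out[band] = math.sqrt(var) / mean  # rms fractional amplitude
    return out

r_ideal = rms_amplitude(False)
r_dead = rms_amplitude(True)
print("ideal:", r_ideal)
print("with dead time:", r_dead)
```

With both bands pulsing in phase, the dead-time factor anti-correlates with the flux and both $r(E)$ values are suppressed, reproducing the overall decrease seen in Figure 11.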
To demonstrate how $r(E)$ changes with the incident flux, we show the results
of numerical calculation for three cases.
We assume power law energy spectra,
$f_0(E) = A_0 E^{-\Gamma_0}$ and $f_1(E) = A_1 E^{-\Gamma_1}$,
and an rms fractional variation of 0.1 in the total band.
The photon index of the constant (non-pulsed) component is fixed at
$\Gamma_0 = 1$, but three different indices are assumed for the
time variable (pulsed) component: $\Gamma_1 = 0.9$ (case 1), $\Gamma_1 = 1.0$ (case 2),
and $\Gamma_1 = 1.1$ (case 3).
For simplicity, we consider two energy bands:
1--5 keV and 5--10 keV\@.
The results are shown in Figure 11.
The ratio of the rms amplitudes (i.e.\ the hardness ratio of the amplitude of
the pulsed component) is also plotted in the figure.
As seen from the figure, the rms fractional variations generally decrease
with increasing incident flux.
When the constant and the pulsed components have the
same spectral shape (center panels of Figure 11), the reduction of the
amplitude is exactly the same in the two energy bands.
Thus the dead time does not alter the hardness of the pulsed component. However, when the pulsed component is harder (softer) than the
constant emission, a local minimum of the fractional variation appears
in the soft (hard) band,
and the fractional variation turns to increase at higher fluxes.
This is considered to be due to ``cross-talk'' between the
energy bands through the dead time.
When the incident flux is much larger than $1/\tau$, which corresponds to
the maximum detectable count rate, the detected count rates in the two energy
bands tend to show an anti-correlation, because the total detectable count
rate is limited by $1/\tau$.
This anti-correlation tends to cancel out the original variation in each
energy bin.
The local minimum of $r(E)$ in the figure indicates that the original
time variation of the energy bin is almost completely canceled out
by the anti-correlation from the other energy bin.
Note that this cancellation is not perfect, because the time variation
is not sinusoidal under the effect of dead time.
For larger incident fluxes, the variation due to the anti-correlation overcomes
the original variation, and the total variation increases with the incident flux.
In this extreme flux range, time variations in the two energy bins are anti-correlated.
Conversely, with the assumptions that the pulse profiles are sinusoidal and that both the constant (non-pulsed) and pulsed components have power law energy spectra, we can calculate the (dead time corrected) pulse fractions from the detected (dead time included) pulse fractions. Using the average flux of about 160 c/s in the L1 data and a photon index for the constant component of about $\Gamma = 1.0$ (see section 3.3), we found that the best-fit photon index of the pulsed component is $0.3\pm0.1$. The energy-dependent pulse fraction with no dead time calculated in this way is given in Figure 7 by the solid line.
\newpage
\vspace{2cm}
\begin{description}
\item[Fig.~1]
Images of \gro\ in observations I (left panel) and
II (right panel) in Galactic coordinates.
The contours are plotted with logarithmic spacing.
In observation I, \gro\ was detected at the center of the GIS field
of view, but in observation II near the edge of the field of view.
Because of the distortion of the point spread function of XRT near
the edge of the field of view (Serlemitsos \etal\ 1995),
the source image in observation II has an elongated morphology.
\item[Fig.~2]
X-ray light curves of \gro\ during observations I (upper
panel) and II (lower panels).
The light curves are calculated using the monitor data (L1),
which cover 0.7--10 keV, and are corrected for the vignetting.
Horizontal axis indicates the ASCA time (elapsed seconds since
1993/1/1 0:00:00 (UT)).
Data gaps are due to the earth occultation of the source, satellite
passage of the South Atlantic Anomaly or the lower telemetry bit rate.
\item[Fig.~3]
An example of the X-ray light curve in 0.7--10 keV which contains both
a giant and a small burst followed by an intensity dip.
The light curve is GIS2
L1 data in the observation I, and is plotted with 1 sec time resolution.
Horizontal axis indicates the time since the beginning of the observation.
\item[Fig.~4]
Correlation between the total burst counts and the total flux deficiency
in the subsequent dip for observations I and II\@.
Data from different detectors and observations are indicated by
different symbols.
\item[Fig.~5]
Fine time profile of a giant burst in observation I. The pulse fraction during the burst is larger than that in the persistent emission.
\item[Fig.~6]
Folded light curves of the persistent emission in observation I
(upper panel) and II (lower panel). The dead time effects are not corrected. The pulse profile is sinusoidal with
larger fractional amplitudes at higher energies.
The pulse fraction in observation I becomes smaller at lower energies
because of the dead time effects (see Appendix).
\item[Fig.~7]
Energy dependence of the pulse fraction (dead time is not corrected) for the persistent emission in observation I with GIS2.
The local minimum of the pulse fraction at 6--7 keV is possibly related to the iron feature.
The solid line is the pulse fraction after the dead time correction (see Appendix).
\item[Fig.~8]
Energy spectra of the persistent emission in observations I (left)
and II (right) calculated from the GIS data with the best-fit
model functions.
In the upper panels, observed energy spectra are plotted by crosses, while the best-fit model function (an absorbed power law with a broad
gaussian line) is given with the solid histograms.
The middle and lower panels show the fit residuals: middle panels used a model
of an absorbed power law and lower panels an absorbed power law plus a
broad gaussian line (same model in the upper panels).
Note the large residual structures around 7 keV in the middle panels.
\item[Fig.~9]
Ratio of the persistent energy spectra to the
best-fit model of a power law plus a disk line.
The ratios are plotted for each sensor.
\item[Fig.~10]
Same as Figure 9, but for partial covering of the
power law continuum by cold matter.
\item[Fig.~11]
The rms pulse amplitude fraction (upper panel) in the 1--5 keV band (solid line) and the 5--10 keV band (broken line), and the hardness ratio of the pulsed component (lower panel) as a function of the incident flux in the 1--10 keV band. The pulse profile is assumed to be sinusoidal with a fraction of 0.1 in the 1--10 keV band. The spectrum of the constant (non-pulsed) component is assumed to be a power law with a photon index of 1.0, while those of the pulsed components are assumed to be 1.1 (left), 1.0 (center: the solid and broken lines overlap), and 0.9 (right).
\end{description}
\newpage
\scriptsize
\begin{deluxetable}{lcccccc}
\tablecaption{Journal of Observations}
\tablehead{
\multicolumn{1}{l}{} &\colhead{Start} &\colhead{End} &\colhead{Exposure\tablenotemark{a}}&\colhead{GIS} &\colhead{SIS} & \colhead{Flux\tablenotemark{d}}\\
\multicolumn{1}{l}{} &\colhead{(yy/mm/dd hh:mm)} &\colhead{(yy/mm/dd hh:mm)} &\colhead{(sec)} & & & \colhead{($10^{-8}$ erg/sec/cm$^2$)}\\
}
\startdata
Pre-outburst & 95/09/19 08:06 &95/09/21 21:20 & 5.6 $\times$ 10$^{4}$ & PH mode & (F/B mode for H/M bit)\tablenotemark{b} & $<1\times10^{-4}$\tablenotemark{e}\\
Observation I & 96/02/26 10:08 &96/02/27 02:05 &1.6 $\times$ 10$^{4}$ & PH mode & Bright mode & 2.0\\
Observation II& 97/03/16 15:56 &97/03/18 07:41 &3.8 $\times$ 10$^{4}$ & PH mode & (Faint mode)\tablenotemark{c} & 0.50\\
\enddata
\tablecomments{
}
\tablenotetext{a}{Calculated from the GIS data.}
\tablenotetext{b}{The X-ray source was outside the fov of SIS.}
\tablenotetext{c}{The X-ray source was located at the very edge of the fov of
SIS, and data were not used.}
\tablenotetext{d}{Calculated for 2--10 keV.}
\tablenotetext{e}{Absorbed power law with $N_H = 5\times10^{22}$ cm$^{-2}$ and $\Gamma = 1.0$ is assumed.}
\end{deluxetable}
\scriptsize
\begin{deluxetable}{lcccccc}
\tablecaption{Best-fit Parameters for a Power-law + Gaussian Line Model}
\tablehead{
\multicolumn{1}{l}{}&\multicolumn{4}{c}{Observation I} &\multicolumn{2}{c}{Observation II\tablenotemark{a}}\\
\multicolumn{1}{l}{} &\multicolumn{4}{c}{-----------------------------------------------------------------}
&\multicolumn{2}{c}{---------------------------------}\\
\multicolumn{1}{l}{} &\colhead{SIS0} &\colhead{SIS1} &\colhead{GIS2} &\colhead{GIS3}&\colhead{GIS2}
&\colhead{GIS3} \\
}
\startdata
\multicolumn{5}{l}{}\\
\multicolumn{5}{l}{\ \ \ \ \ \ -------- Persistent -------}\\
\multicolumn{5}{l}{}\\
$N_{\rm H}$ ($10^{22}$ cm$^{-2}$) & $5.8\pm0.1$ & $5.6\pm0.1$ & $5.4\pm0.1$ & $6.2\pm0.1$ & $5.2\pm0.1$ & $5.3\pm0.1$ \\
Photon index ($\Gamma$) & $1.34\pm0.03$ & $1.22\pm0.02$ & $1.17\pm0.03$ & $1.38\pm0.03$ & $1.03\pm0.03$ & $0.93\pm0.03$ \\
Line center (keV) & $6.66\pm0.05$ & $6.72\pm0.05$ & $6.75\pm0.08$ & $6.60\pm0.08$ & $6.63\pm0.07$ & $6.74\pm0.06$ \\
Line width ($\sigma$; keV) & $0.79\pm0.10$ & (0.7) & $0.59\pm0.13$ & $0.68^{+0.05}_{-0.10}$ & $0.34\pm0.09$ & $<0.26$ \\
Equivalent width (keV) & $0.40\pm0.06$ & $0.39\pm0.03$ & $0.19\pm0.04$ & $0.30^{+0.06}_{-0.15}$ & $0.16\pm0.04$ & $0.13\pm0.03$ \\
Reduced-$\chi^2$ (d.o.f.) & 1.62 (142) & 2.27 (143) &0.87 (89) & 1.20 (89) & 1.67 (89) & 1.93 (89) \\
\multicolumn{5}{l}{}\\
\multicolumn{5}{l}{\ \ \ \ \ \ -------- Burst ------------}\\
\multicolumn{5}{l}{}\\
$N_{\rm H}$ ($10^{22}$ cm$^{-2}$) & \multicolumn{2}{c}{------\tablenotemark{a}} & $6.0^{+0.5}_{-0.1}$ & $6.5\pm0.5$ & $6.3^{+0.7}_{-0.4}$ & $5.9\pm0.5$ \\
Photon index ($\Gamma$) & \multicolumn{2}{c}{------} & $1.07^{+0.26}_{-0.12}$ & $1.31\pm0.12$ & $1.06\pm0.13$ & $0.95^{+0.16}_{-0.04}$ \\
Line center (keV) & \multicolumn{2}{c}{------} & $6.5^{+0.6}_{-0.4}$ & $6.6^{+0.3}_{-0.2}$ & $6.57\pm0.12$ & $6.8\pm0.2$\\
Line width ($\sigma$; keV) & \multicolumn{2}{c}{------} & $<1.7$ & $<0.8$ & (0.0) & $<0.5$ \\
Equivalent width (keV) & \multicolumn{2}{c}{------} & $0.11^{+0.50}_{-0.07}$ & $0.19^{+0.25}_{-0.11}$ & $0.14\pm0.07$ & $0.24^{+0.15}_{-0.12}$ \\
Reduced-$\chi^2$ (d.o.f.) & \multicolumn{2}{c}{------} &1.24 (89) &1.01 (89) &1.04 (90) &0.85 (89) \\
\multicolumn{5}{l}{}\\
\multicolumn{5}{l}{\ \ \ \ \ \ -------- Dip --------------}\\
\multicolumn{5}{l}{}\\
$N_{\rm H}$ ($10^{22}$ cm$^{-2}$) & $5.5\pm0.2$ & $5.5\pm0.1$ & $5.8^{+0.7}_{-0.2}$ & $6.3\pm0.3$ & $5.3\pm0.2$ & $5.3\pm0.2$ \\
Photon index ($\Gamma$) & $1.22^{+0.12}_{-0.04}$ & $1.21\pm0.04$ & $1.30\pm0.08$ & $1.52^{+0.12}_{-0.08}$ & $1.06\pm0.05$ & $1.02\pm0.06$ \\
Line center (keV) & $6.80\pm0.10$ & $6.73\pm0.10$ & $6.78\pm0.22$ & $6.62\pm0.18$ & $6.59\pm0.11$ & $6.79\pm0.11$ \\
Line width ($\sigma$; keV) & $0.4^{+0.7}_{-0.1}$ & (0.7) & $0.7^{+0.4}_{-0.3}$ & $0.8\pm0.3$ & $0.4\pm0.2$ & $0.44\pm0.20$ \\
Equivalent width (keV) & $0.19^{+0.33}_{-0.05}$ & $0.37\pm0.06$ & $0.28^{+0.21}_{-0.12}$ & $0.45^{+0.21}_{-0.16}$ & $0.19^{+0.08}_{-0.06}$ & $0.27^{+0.11}_{-0.08}$ \\
Reduced-$\chi^2$ (d.o.f.) & 1.71 (93) & 1.77 (94) & 1.09 (89) & 1.13 (89) & 1.15 (89) & 1.15 (89) \\
\enddata
\tablecomments{The errors (upper limits) quoted are at the 90\% confidence
limit for a single parameter.
Parameters in parentheses were fixed in the fitting.}
\tablenotetext{a}{No SIS data are available.}
\end{deluxetable}
\scriptsize
\begin{deluxetable}{lcccccc}
\tablecaption{Fitting Results of the Persistent Emission}
\tablehead{
\multicolumn{1}{l}{}&\multicolumn{4}{c}{Observation I} &\multicolumn{2}{c}{Observation II\tablenotemark{a}}\\
\multicolumn{1}{l}{} &\multicolumn{4}{c}{-----------------------------------------------------}
&\multicolumn{2}{c}{---------------------------------}\\
\multicolumn{1}{l}{} &\colhead{SIS0} &\colhead{SIS1} &\colhead{GIS2} &\colhead{GIS3}&\colhead{GIS2}
&\colhead{GIS3} \\
}
\startdata
\multicolumn{5}{l}{}\\
\multicolumn{5}{l}{\ \ \ \ \ \ -------- Disk line model -------}\\
\multicolumn{5}{l}{}\\
$N_{\rm H}$ ($10^{22}$ cm$^{-2}$) & $5.7\pm0.1$ & $5.6\pm0.1$ & $5.0\pm0.1$ & $5.2\pm0.1$ & $5.2\pm0.1$ & $5.2\pm0.2$ \\
Photon index ($\Gamma$) & $1.30\pm0.02$ & $1.24\pm0.03$ & $1.20\pm0.02$ & $1.28\pm0.02$ & $1.02\pm0.03$ & $0.87\pm0.04$ \\
Line center (keV) & $6.56\pm0.07$ & $6.51\pm0.07$ & $6.45\pm0.06$ & $6.27\pm0.04$ & $6.92\pm0.07$ & $6.79\pm0.10$ \\
R$_{\rm in}$ (GM/c$^2$) & $13\pm3$ & $15\pm3$ & $22^{+9}_{-5}$ & $29\pm4$ & $11\pm5$ & $>30$\\
R$_{\rm out}$ (GM/c$^2$) & (1000) & (1000) & (1000) & (1000) & (1000) & (1000)
\\
Inclination (deg) & $48\pm6$ & $>65$ & $50^{+13}_{-8}$ & $>75$ & (15)\tablenotemark{a} & (15)\tablenotemark{a} \\
Equivalent width (keV) & $0.31\pm0.03$ & $0.46\pm0.05$ & $0.20\pm0.01$ & $0.21\pm0.01$ & $0.16\pm0.03$ & $0.11\pm0.02$ \\
Reduced-$\chi^2$ (d.o.f.) & 1.56 (141) & 2.32 (141) & 1.82 (90) & 1.97 (90) & 1.62 (92) & 1.90 (92) \\
\multicolumn{5}{l}{}\\
\multicolumn{5}{l}{\ \ \ \ \ \ ----- Partial covering model -----}\\
\multicolumn{5}{l}{}\\
$N_{\rm H}$ ($10^{22}$ cm$^{-2}$) & $6.1\pm0.1$ & $6.0\pm0.1$ & $5.6\pm0.1$ & $6.5\pm0.1$ & $5.8\pm0.2$ & $6.0\pm0.2$ \\
Photon index ($\Gamma$) & $1.54\pm0.04$ & $1.49\pm0.04$ & $1.53\pm0.03$ & $1.65\pm0.05$ & $1.35\pm0.07$ & $1.28\pm0.08$ \\
$N_{\rm H}^{\rm Partial\;Covering}$ ($10^{22}$ cm$^{-2}$) & $52\pm4$ & $59\pm4$ & $54\pm3$ & $46\pm3$ & $77\pm10$ & $90\pm15$\\
Covering fraction & $0.44\pm0.03$ & $0.52\pm0.03$ & $0.46\pm0.02$ & $0.46\pm0.03$ & $0.55\pm0.05$ & $0.62\pm0.06$ \\
Reduced-$\chi^2$ (d.o.f.) & 2.78 (143) & 2.25 (143) & 2.19 (92) & 2.29 (92) & 1.58 (93) & 1.73 (93) \\
\enddata
\tablecomments{The errors (upper limits) quoted are at the 90\% confidence
limit for a single parameter.
Parameters in parentheses were fixed in the fitting.}
\tablenotetext{a}{The parameter was fixed because the data could not constrain it.}
\end{deluxetable}
\end{document}
\section{Introduction}
\label{sec:Introduction}
Spectrum sensing is a key step in effectively realizing cognitive radio networks (CRN). In the CR access paradigm, secondary users (SU) in a CRN are allowed to access spectrum reserved for use by licensed or primary users (PU) given that either 1) those resources are currently unoccupied or 2) interference to the primary network is kept under an acceptable level \cite{Haykin2005}. The main goal of spectrum sensing is to accurately and efficiently detect the presence or absence of a PU in a given band.
Several spectrum sensing methods have been proposed in the literature \cite{Yucek2009}. In general, these methods can be categorized as being based on either energy detection, spectral correlation (cyclostationarity), or matched filtering. Energy detection requires the least prior knowledge about the signal, while matched filtering requires the most. Spectral correlation-based techniques lie in between, requiring either prior knowledge or accurate estimation of the cyclic frequencies present in the PU transmission signal. Although energy detection offers the lowest computational complexity and is the optimal blind detector in the presence of i.i.d. noise, its performance relies on accurate knowledge of noise power due to the SNR wall phenomenon \cite{Tandra2008}. The detection performance of energy detection also degrades in a temporally correlated noise environment.
In some scenarios, spectral correlation-based methods offer several advantages over other spectrum sensing approaches. Unlike energy detection, they do not suffer from the SNR wall issue. These methods are also resilient to temporally correlated noise and enable signal-selective spectrum sensing, where the presence of signals-of-interest (SOI) can be detected based on their unique cyclic features arising from their modulation type, symbol rate, and carrier frequency \cite{Gardner1987a}.
One issue encountered with all spectrum sensing methods is the effect of fading in the channel between the PU and SU. There is some probability of low detection performance whenever the channel is in a deep fade. This can be alleviated by exploiting spatial diversity, either through the use of cooperative spectrum sensing \cite{Quan2008} or, if available, the use of multiple antennas. As a result, spectrum sensing algorithms exploiting multiple antennas have received considerable interest \cite{Taherpour2010,Tugnait2012}.
Algorithms that leverage the cyclostationarity property have been applied in the past for multiple antenna receivers. In \cite{Sadeghi2008}, the sum of the individual cyclic correlation for each antenna was proposed. Such methods are considered \textit{post-combining} techniques since knowledge of the channel state information (CSI) is not exploited. On the other hand, \textit{pre-combining} techniques which utilize an estimate of the CSI to varying degrees have been shown to have better performance. A method based on equal gain combining (EGC), was investigated in \cite{Chen2008}. This method uses phase offset estimates to align the raw samples from each antenna. The aligned signals are then summed before finding the cyclic correlation. Finally, a Blind Maximal Ratio Combining (BMRC) scheme was evaluated in \cite{Jitvanichphaibool2010a} which utilized the singular value decomposition (SVD) to find an estimate of the CSI and applied MRC on the raw samples.
In this paper we propose a spectrum sensing algorithm designed for use in a multiple antenna system based on the cyclic correlation significance test (CCST). The CCST was used in \cite{Schell1990a} to perform cyclostationary source enumeration using an information-theoretic criterion. However, the performance of this statistic in the context of multiple-antenna cyclostationary spectrum sensing has not been investigated in prior work. The performance of this method in fading channels has also not been evaluated. In this paper, we show that the proposed method has better detection performance than existing methods and has less computational complexity than BMRC from \cite{Jitvanichphaibool2010a}. We investigate the algorithm's performance in both uncorrelated and correlated noise environments. The scheme's robustness to interference is also shown.
The rest of the paper is organized as follows. The system model is introduced in Section \ref{sec:Model} including a brief discussion of cyclostationarity. The proposed algorithm is detailed in Section \ref{sec:Proposed}. Numerical results for various scenarios are presented in Section \ref{sec:Results}. Finally, the paper is concluded in Section \ref{sec:Conclusion}.
\section{Background and System Model}
\label{sec:Model}
\subsection{Background on Cyclostationarity}
A signal is considered to be cyclostationary if its statistical properties are periodic. Equivalently, if the cyclic autocorrelation function, defined as:
\begin{equation}
\label{eqn:CyclicACF}
R_{x}^{\alpha}(\tau)=\!\!\!\lim_{\Delta t\rightarrow\infty}\frac{1}{\Delta t}\!\int_{-\frac{\Delta t}{2}}^{\frac{\Delta t}{2}}\!\!x\!\left(t+\frac{\tau}{2}\right)\!x^{*}\!\left(t-\frac{\tau}{2}\right)\!e^{-j2\pi\alpha t}dt,
\end{equation}
is non-zero at some $\tau$ for at least one $\alpha \neq 0$, the signal is said to exhibit second-order cyclostationarity, with $\alpha$ referred to as the cyclic frequency.
For example, in BPSK signals, cyclostationary features exist at $\alpha=\frac{k}{T_b}$ and at $\alpha=\pm 2 f_c + \frac{k}{T_b}$, where $T_b$ is the symbol period, $f_c$ is the carrier frequency, and $k\in\{-1,0,1\}$. Detailed analysis of the cyclostationary features for various digital modulations can be found in \cite{Gardner1987a}.
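As an illustration, the $\alpha=2f_c$ feature of BPSK can be verified numerically. The sketch below is our own (all parameter values are illustrative, not from the paper); it estimates the cyclic autocorrelation at $\tau=0$ on and off the feature for a rectangular-pulse BPSK waveform.

```python
import numpy as np

# Illustrative check (our own, not from the paper): a rectangular-pulse BPSK
# waveform shows a strong cyclic feature at alpha = 2*f_c, tau = 0.
rng = np.random.default_rng(0)
sps = 8                                   # samples per symbol (assumption)
fc_norm = 0.25                            # f_c / f_s (assumption)
bits = rng.choice([-1.0, 1.0], size=500)
a = np.repeat(bits, sps)                  # rectangular pulse shaping
n = np.arange(a.size)
x = a * np.cos(2 * np.pi * fc_norm * n)   # real BPSK waveform

def cyclic_corr(x, alpha_norm, tau=0):
    """Estimate R_x^alpha(tau); alpha_norm = alpha / f_s."""
    N = x.size - tau
    m = np.arange(N)
    return np.mean(x[tau:] * x[:N] * np.exp(-2j * np.pi * alpha_norm * m))

feature = abs(cyclic_corr(x, 2 * fc_norm))      # on the cyclic feature
off = abs(cyclic_corr(x, 2 * fc_norm + 0.013))  # away from any feature
print(feature > 10 * off)  # True: the feature dominates
```

For a real waveform the conjugate and non-conjugate cyclic correlations coincide; the distinction only matters for complex baseband signals.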
\subsection{Signal Model and Assumptions}
We adopt a signal model similar to that used in \cite{Jitvanichphaibool2010a}. The spectrum sensing problem is to decide between two hypotheses: $\mathcal{H}_0$, where the signal is absent; and $\mathcal{H}_1$, where it is present. The received signal samples under the two hypotheses are given respectively as follows:
\begin{equation}
x(n)=\begin{cases}
\eta(n), & \mathcal{H}_0\\
s(n) + \eta(n), & \mathcal{H}_1.
\end{cases}
\end{equation}
The received signal, sampled at a rate of $1/T_s$, forms $M$ streams, one from each antenna, with $N$ samples each. This received signal is defined as $
\mathbf{x}(n)\triangleq[x_{1}(n),x_{2}(n),\ldots,x_{M}(n)]^{T}$. The received signal is the superposition of $P$ signal sources (including both the signal of interest (SOI) and any interferers) and can be expressed in vector form as
\begin{equation}
\label{eqn:ReceivedDecompose}
\mathbf{x}\left(n\right)=\sum_{j=1}^{P}\mathbf{h}_{j}\left(n\right)\otimes s_{j}\left(n\right)+\boldsymbol{\eta}\left(n\right),
\end{equation}
where $\otimes$ is the convolution operation over $n$ and $\boldsymbol{\eta}(n)$ is the receiver noise denoted by $\boldsymbol{\eta}(n)
\triangleq[\eta_{1}(n),\eta_{2}(n),\ldots,\eta_{M}(n)]^{T}$, where every $\eta_i$ is a purely stationary Gaussian random process ($R_{\eta}^{\alpha}(\tau)=0$ for any $\alpha\neq 0$) with variance $\sigma^2_\eta$. For simplicity, we assume that only one primary user (PU) transmission, $s_1(n)$, is the SOI and that it is cyclostationary with a unique cyclic frequency at $\alpha=\alpha_0$; the other sources are considered interferers. The channel experienced by each of the $P$ sources is given by $\mathbf{h}_{j}(n)\triangleq[h_{j1}(n),h_{j2}(n),\ldots,h_{jM}(n)]^{T}$, where $h_{jk}(n)$ is the channel between the $j$th source and the $k$th antenna. We assume that the channel, although unknown to the receiver, stays constant over the frame of observation.
\subsection{Spatially Correlated Noise Environments}
In the case of spatially correlated noise, which can happen when there is substantial ambient noise in the band, we model $\boldsymbol{\eta}(n)$ to have a covariance matrix given by $\{\mathbf{R}_{\boldsymbol{\eta\eta}}\}_{ij}=\sigma^2_{\eta}\,\rho^{\left|i-j\right|}$. Thus, with $\rho=0$ the covariance matrix reduces to a scaled identity matrix, giving spatially white noise, while $\rho=1$ gives fully correlated noise across all antennas. Varying degrees of partial correlation can be achieved by setting $0<\rho<1$.
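For simulation purposes, noise with this covariance can be drawn via a Cholesky factor of the Toeplitz matrix. The sketch below is our own (function names and parameters are illustrative) and empirically recovers the $\rho^{|i-j|}$ pattern.

```python
import numpy as np

def correlated_noise(M, N, rho, sigma2=1.0, rng=None):
    """Draw M x N circularly-symmetric Gaussian noise with covariance
    {R}_ij = sigma2 * rho^{|i-j|} (our own helper, not from the paper)."""
    rng = rng or np.random.default_rng()
    idx = np.arange(M)
    R = sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])
    L = np.linalg.cholesky(R)             # requires 0 <= rho < 1
    w = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
    return L @ w

eta = correlated_noise(4, 200000, rho=0.5, rng=np.random.default_rng(1))
R_hat = (eta @ eta.conj().T / eta.shape[1]).real
print(np.round(R_hat[0], 2))  # approximately rho^{|i-j|}: 1, 0.5, 0.25, 0.125
```

The fully correlated case $\rho=1$ makes the covariance singular, so the Cholesky route applies only to $\rho<1$.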
\section{Proposed Algorithm}
\label{sec:Proposed}
In this section, we describe the proposed method. We focus on a single cycle frequency detection, but this approach could be generalized to multi-cycle detection. The key idea of this detection algorithm is based on the theory of canonical variates or theory of common factors. As discussed in \cite{Lawley1959}, and subsequently utilized in \cite{Schell1990a}, the number of common factors between two $M\times1$ time-series vectors $\mathbf{u}(n)$ and $\mathbf{v}(n)$ can be inferred from the rank of the matrix
\begin{equation}
\label{eqn:CommonFactors}
\mathbf{R}=\mathbf{R}_{\mathbf{vv}}^{-1}\mathbf{R}_{\mathbf{vu}}\mathbf{R}_{\mathbf{uu}}^{-1}\mathbf{R}_{\mathbf{uv}},
\end{equation}
where the covariance matrices are defined via the time average $\mathbf{R}_{\mathbf{uv}}=\frac{1}{N}\sum_{n=0}^{N-1}\mathbf{u}(n)\mathbf{v}^H(n)$. In the non-asymptotic case, this matrix is generically full rank, so a threshold on its eigenvalues can be applied to determine the asymptotic rank with a given confidence level. The same criterion, referred to as the Cyclic Correlation Significance Test (CCST), was applied to cyclostationary source enumeration in \cite{Schell1990a} by taking $\mathbf{u}(n)=\mathbf{x}(n)$ and $\mathbf{v}(n)=\mathbf{x}(n-\tau)e^{-j2\pi\alpha nT_s}$ for a given lag $\tau$ and cyclic frequency $\alpha$. Additionally, some cyclic features, such as those located at $\alpha=\pm 2f_c$ for BPSK, only appear in the conjugate cyclic correlation matrix, obtained by instead taking $\mathbf{v}(n)=\mathbf{x}^*(n-\tau)e^{-j2\pi\alpha nT_s}$.
We further adapt the CCST into the binary hypothesis test required for signal-selective spectrum sensing. Under $\mathcal{H}_0$, (\ref{eqn:CommonFactors}) will have zero rank as $N\rightarrow\infty$. Thus, by applying a threshold to all the eigenvalues we can infer the presence or absence of the PU of interest using an $M$-antenna receiver. Prior to performing the detection, we pick the lag $\tau$ that provides the best detection performance based on the modulation format used by the PU. This can be done off-line by performing the maximization $\tau_0=\arg\max_\tau |R_s^{\alpha_0}(\tau)|$.
The steps of the algorithm are summarized as follows:
\begin{enumerate}
\item Estimate the covariance matrix of size $M\times M$
%
\begin{equation}
\label{eqn:Covariance}
\hat{\mathbf{R}}_{\mathbf{xx}}(\tau_0)=\frac{1}{N}\sum_{n=0}^{N-1-\tau_0}\mathbf{x}\left(n\right)\mathbf{x}^{H}\left(n-\tau_0\right).
\end{equation}
%
\item Estimate the cyclic correlation matrix using a cyclic cross-correlogram at cyclic frequency $\alpha_0$ and lag $\tau_0$, defined as
%
\begin{equation}
\label{eqn:CyclicCovariance}
\hat{\mathbf{R}}_{\mathbf{xx}}^{\alpha_0}(\tau_0)=\frac{1}{N}\!\!\sum_{n=0}^{N-1-\tau_0}\!\!\!\!\!\mathbf{x}\left(n\right)\mathbf{x}^{H}\!\!\left(n-\tau_0\right)e^{-j2\pi\alpha_0 nT_s}.
\end{equation}
%
We will refer to the $\tau_0$-lag covariance matrices for both the conventional and cyclic autocorrelation functions simply as $\hat{\mathbf{R}}_{\mathbf{xx}}$ and $\hat{\mathbf{R}}_{\mathbf{xx}}^{\alpha_0}$ from this point on for the sake of brevity, since other values of $\tau$ are not utilized by the proposed algorithm. The dependence on $\tau$ will be indicated explicitly whenever necessary. The CCST is then calculated by finding the matrix
%
\begin{equation}
\label{eqn:Metric}
\hat{\mathbf{R}}=\hat{\mathbf{R}}_{\mathbf{xx}}^{-1}\hat{\mathbf{R}}_{\mathbf{xx}}^{\alpha_0} \hat{\mathbf{R}}_{\mathbf{xx}}^{-1}\hat{\mathbf{R}}_{\mathbf{xx}}^{\alpha_0 H}.
\end{equation}
%
\item Calculate its singular value decomposition (SVD)
\begin{equation}
\label{eqn:SVD}
\hat{\mathbf{R}} = \mathbf{U \Sigma V}^H,
\end{equation}
where $\mathbf{U}$ and $\mathbf{V}$ are unitary matrices whose columns are the left and right singular vectors, respectively, and $\mathbf{\Sigma}=\mathrm{diag}\left(\mu_1,\mu_2,\ldots,\mu_M\right)$ is the $M \times M$ diagonal matrix of real, non-negative singular values $\mu_i$.
%
\item Compute the test statistic by taking
%
\begin{equation}
\label{eqn:TestStatistic}
\mathcal{T^{\alpha}_{\mathbf{xx}}}=-N\ln\prod_{i=1}^{M}\left(1-\mu_{i}\right).
\end{equation}
The factor $N$ is used to scale the test statistic so that its distribution is independent of the number of samples used.
%
\item Decision:
$\mathcal{T^{\alpha}_{\mathbf{xx}}}\mathop{\gtrless}_{\mathcal{H}_0}^{\mathcal{H}_1}\gamma$,
%
where $\gamma>0$ is a threshold chosen to achieve a constant false alarm rate (CFAR), as discussed in the following section.
\end{enumerate}
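The five steps can be condensed into a short NumPy sketch. This is our own illustrative implementation (names are not from the paper), shown for the conjugate variant with $\tau_0=0$; on synthetic data the statistic stays small under $\mathcal{H}_0$ and grows sharply under $\mathcal{H}_1$.

```python
import numpy as np

def ev_css_statistic(X, alpha_norm, tau0=0, conjugate=True):
    """Steps 1-4 for an M x N sample matrix X; alpha_norm = alpha_0 * T_s."""
    M, N = X.shape
    A, B = X[:, tau0:], X[:, :N - tau0]
    ph = np.exp(-2j * np.pi * alpha_norm * np.arange(N - tau0))
    Rxx = A @ B.conj().T / N                                # step 1
    Rc = (A * ph) @ (B.T if conjugate else B.conj().T) / N  # step 2
    Rinv = np.linalg.inv(Rxx)
    Rhat = Rinv @ Rc @ Rinv @ Rc.conj().T                   # CCST matrix
    mu = np.linalg.svd(Rhat, compute_uv=False)              # step 3
    return float(-N * np.sum(np.log(1.0 - mu)))             # step 4

rng = np.random.default_rng(0)
M, N, fc_norm = 2, 4000, 0.25
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
bits = rng.choice([-1.0, 1.0], size=N // 8)
s = np.repeat(bits, 8) * np.cos(2 * np.pi * fc_norm * np.arange(N))
h = np.array([[1.0], [0.8j]])             # illustrative quasi-static channel
t_h0 = ev_css_statistic(noise, 2 * fc_norm)
t_h1 = ev_css_statistic(h * s + noise, 2 * fc_norm)
print(t_h1 > t_h0)  # True: signal present gives a much larger statistic
```

Step 5 is then a simple comparison of the returned statistic against the CFAR threshold $\gamma$.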
Note that all $\mathbf{x}^H(n)$ can be replaced with $\mathbf{x}^T(n)$ if the conjugate cyclic correlation matrix is needed. We refer to these two metrics as the non-conjugate cyclic correlation significance test (NC-CCST) and the conjugate cyclic correlation significance test (C-CCST) respectively.
\subsection{Probability of False Alarm and Threshold Computation}
Two key parameters are used to evaluate the performance of spectrum sensing algorithms. The detection probability $P_d$ is the probability of correctly detecting the PU under $\mathcal{H}_1$ ($P_d=\Pr(\mathcal{T^{\alpha}_{\mathbf{xx}}}>\gamma \mid \mathcal{H}_1)$). Conversely, the false alarm probability $P_{fa}$ is the probability of mistakenly declaring the PU present under $\mathcal{H}_0$ ($P_{fa}=\Pr(\mathcal{T^{\alpha}_{\mathbf{xx}}}>\gamma \mid \mathcal{H}_0)$).
It has been shown in \cite{Lawley1959} that the limiting distribution ($N\rightarrow\infty$) of the NC-CCST test statistic in (\ref{eqn:TestStatistic}) approaches a $\chi^2$ distribution with $M^2$ degrees of freedom. Following a similar proof, it can be shown that the distribution for the C-CCST is also $\chi^2$ but with $M(M+1)$ degrees of freedom.
Based on these asymptotic distributions of $\mathcal{T^{\alpha}_{\mathbf{xx}}}$ under $\mathcal{H}_0$, the detection threshold $\gamma$ can be set to achieve a desired $P_{fa}$ by satisfying
\begin{equation}
\label{eqn:threshold}
\int_{\gamma}^{\infty}f_{\chi_{k}^{2}}\left(x\right)dx=P_{fa},
\end{equation}
where
\begin{equation}
k=\begin{cases}
M^{2} & \text{NC-CCST}\\
M(M+1) & \text{C-CCST}
\end{cases},
\end{equation}
and $f_{\chi_{k}^{2}}(\cdot)$ is the probability density function (pdf) of a $\chi^2$ random variable with $k$ degrees of freedom.
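Since $M(M+1)$ is always even (and $M^2$ is even whenever $M$ is even), the chi-square survival function has a closed form that avoids special-function libraries, and the CFAR threshold above can be found by bisection. The sketch below is our own; for odd $M$ with the NC-CCST a generic chi-square quantile routine would be needed instead.

```python
import math

def chi2_sf_even(g, k):
    """P(X > g) for X ~ chi-square with an even number k of degrees of freedom."""
    assert k % 2 == 0
    t = g / 2.0
    return math.exp(-t) * sum(t ** i / math.factorial(i) for i in range(k // 2))

def cfar_threshold(pfa, M, conjugate=True):
    """Bisect for gamma with P(T > gamma | H0) = pfa (our own helper)."""
    k = M * (M + 1) if conjugate else M * M   # degrees of freedom
    lo, hi = 0.0, 1e3
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if chi2_sf_even(mid, k) > pfa:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

gamma = cfar_threshold(0.1, M=2)  # C-CCST, k = 6
print(round(gamma, 2))            # 10.64 (chi-square 90th percentile, 6 dof)
```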
These asymptotic distributions are verified to closely match simulation in Fig.~\ref{fig:pdf} for $N=4000$. Due to the scaling factor in (\ref{eqn:TestStatistic}), the distribution is independent of $N$ if the number of samples is large enough. The empirical pdfs for two different $\sigma^2_\eta$ values are also shown to demonstrate how the test statistic's distribution under $\mathcal{H}_0$ is independent of noise power.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{pdf/pdfs.pdf}
\caption{Verification of the asymptotic distribution of the proposed test statistic (C-CCST) under $\mathcal{H}_0$ for different numbers of antennas ($M=\{2,4\}$). Simulated pdfs for different noise variances ($\sigma^2_\eta=\{1,10\}$) show both the accuracy of the approximation and the test statistic's independence of noise power. The same number of samples $N=4000$ per antenna is used.}
\label{fig:pdf}
\end{figure}
\subsection{Comparison With Existing Approaches}
\label{subsec:others}
Multiple-antenna spectrum sensing algorithms that exploit cyclostationarity can generally be classified into two categories. Techniques that do not exploit any knowledge of the CSI are referred to as \textit{post-combining} methods. The simplest such method is to sum the spectral correlation test statistics estimated individually from each antenna \cite{Sadeghi2008}. We refer to this approach as SUM-MSDF (where MSDF means Modified Spectral Density Function). The MSDF is defined as the spectral correlation function normalized by signal energy as discussed in \cite{Jitvanichphaibool2010a}.
Another existing approach is to sum the raw signals from each antenna and then perform a single spectral correlation test. However, a problem arises when the channel is not simply AWGN but instead exhibits random fading: each antenna then sees some unknown phase offset and attenuation, so simply adding the raw signals non-coherently would decrease the probability of detection. This problem is addressed in \cite{Chen2008} by first aligning the phase of each antenna. An estimate of the relative phase difference between antennas is calculated by finding both the cyclic correlation of one antenna chosen as reference (auto-SCF) and the cross-cyclic correlation of every other antenna with the reference antenna. The phase difference can then be extracted from these two quantities. We refer to this method in our comparisons as EGC.
Finally, MRC is used in \cite{Jitvanichphaibool2010a}. Blind channel estimation is achieved by taking the vector corresponding to the highest singular value of (\ref{eqn:CyclicCovariance}) as an estimate of the channel, $\hat{\mathbf{h}}$. The raw samples from each antenna are combined using
\begin{equation}
y(n) = \frac{\hat{\mathbf{h}}^H\mathbf{x}(n)}{\Vert \hat{\mathbf{h}}\Vert } .
\end{equation}
The cyclic correlation test is then performed on the combined samples $y(n)$. This method is called MSDF with Blind Maximal Ratio Combining, or BMRC-MSDF. It was shown to outperform the other techniques, but at the cost of additional complexity due to the channel estimation and combining. One issue with this approach is that the cyclic correlation is calculated twice: first to blindly estimate the channel, and a second time to perform the detection on the combined samples. In contrast, the method proposed in this paper, which we refer to as eigenvalue-based cyclostationary spectrum sensing or EV-CSS, only needs to perform the first part of BMRC-MSDF, the SVD, and uses the singular values themselves to infer the presence or absence of the PU.
\subsection{Advantages of the Proposed Algorithm}
As with other cyclostationarity-based spectrum sensing algorithms, one major advantage of the proposed method is its robustness to the noise uncertainty problem. Since the noise is assumed to be stationary and does not exhibit cyclostationarity at any $\alpha \neq 0$, its cyclic correlation approaches zero as $N\rightarrow\infty$. Thus, the effect of any error in the noise power estimate on the detection probability can be eliminated by taking more samples. However, in the interest of conserving power and arriving at a timely decision, both of which are high priorities in cognitive radio (CR) applications, we aim to minimize the $N$ needed to achieve a target $P_d$. This presents another, more subtle, issue related to noise uncertainty.
In the non-asymptotic scenario, the SCF under $\mathcal{H}_0$ has been shown to depend on both $N$ and the noise power $\sigma_\eta$ \cite{Rebeiz2011}. Therefore, the proper detection threshold is still a function of the noise variance. By incorrectly specifying this threshold, the detector could operate at the wrong point on the receiver operating characteristic (ROC) curve; equivalently, the target CFAR cannot be achieved. However, as previously discussed and demonstrated in Fig.~\ref{fig:pdf}, the proposed test statistic is independent of both $\sigma^2_\eta$ and $N$. Consequently, the threshold $\gamma$ only needs to be chosen once for a given number of antennas $M$ to guarantee CFAR. This property has been shown for other eigenvalue-based approaches \cite{Zeng2009}. It derives from the fact that noise power estimation is built into the algorithm.
\subsection{A Note on Complexity}
\label{subsec:complexity}
We can make an approximate complexity comparison of the proposed algorithm with the best performing existing algorithm (BMRC-MSDF) by counting the number of complex multiplications required by each for the same number of samples $N$. Since the cyclic covariance operation and the SVD are common to both algorithms, they are not included in the analysis.
Assuming the MSDF is calculated using an $N_S$-point Fast Fourier Transform (FFT), it requires on the order of $N\log_2(N_S)$ multiplications. In addition, $(M+1)N$ multiplications are needed to perform the MRC and normalization. Finally, the correlation in frequency uses $NN_S/2$ multiplications. Thus, the BMRC-MSDF approach performs on the order of $N(\log_2(N_S)+N_S/2+M+1)$ multiplications, excluding the SVD and the cyclic covariance.
In comparison, the proposed EV-CSS method finds the conventional covariance in addition to the same SVD and cyclic covariance as BMRC-MSDF, requiring on the order of $NM^2$ multiplications. The operation $\hat{\mathbf{R}}_{\mathbf{xx}}^{-1}\hat{\mathbf{R}}_{\mathbf{xx}}^{\alpha}$ in (\ref{eqn:Metric}) is essentially the solution to a linear system, which can be carried out via an LU decomposition requiring approximately $2M^3/3$ multiplications. Therefore, the EV-CSS approach requires on the order of $NM^2+2M^3/3$ multiplications in addition to the operations it shares with BMRC-MSDF. Since $M$ is typically much smaller than both $N$ and $N_S$, the proposed algorithm yields a significant overall decrease in complexity. For example, taking $N=4000$, $N_S=128$, and $M=2$ (the same parameters used in \cite{Jitvanichphaibool2010a}), BMRC-MSDF requires $\sim$296K multiplications while EV-CSS needs only $\sim$16K, not counting the common operations.
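The quoted counts can be reproduced directly from the order-of-magnitude expressions above (our own arithmetic sketch; function names are ours):

```python
import math

def bmrc_msdf_mults(N, Ns, M):
    # order-of-magnitude count, excluding the shared SVD and cyclic covariance
    return N * (math.log2(Ns) + Ns / 2 + M + 1)

def ev_css_mults(N, M):
    # covariance estimate plus LU solve for the linear system
    return N * M ** 2 + 2 * M ** 3 / 3

N, Ns, M = 4000, 128, 2
print(int(bmrc_msdf_mults(N, Ns, M)))  # 296000  (~296K)
print(int(ev_css_mults(N, M)))         # 16005   (~16K)
```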
\section{Numerical Results and Discussion}
\label{sec:Results}
In this section, simulation results are presented in order to compare the proposed algorithm with the various existing techniques discussed in Section~\ref{subsec:others}. The PU is assumed to be transmitting a BPSK signal at a carrier frequency $f_c=80$ kHz with a symbol period of 25 $\mu$s. The secondary user (SU) is assumed to have $M=2$ antennas unless otherwise specified. Each antenna is sampled at a rate $f_s=320$ kHz. For CFAR experiments we set $P_{fa}=0.1$.
In all experiments, the channel between the PU and each antenna of the SU, $\mathbf{h}$, is modeled as a quasi-static Rayleigh fading channel, where the fading remains constant during the whole frame of $N=4000$ samples per antenna used for detection. This can be described using a channel vector for the $i$th frame as
$\mathbf{h}_i=[r_1e^{j\theta_1},r_2e^{j\theta_2},\ldots,r_Me^{j\theta_M}]^{T}$,
where $r_n$ is a Rayleigh distributed random variable of unit variance and $\theta_n$ is a uniformly distributed random variable in $[0,2\pi]$. We ignore dispersive channels where each element of $\mathbf{h}_i$ is modeled as a multi-tap filter, $\mathbf{h}_i(n)$, since cyclostationary spectrum sensing has already been shown to be robust to the effects of temporal correlation in prior work \cite{Jitvanichphaibool2010a}.
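The quasi-static channel can be sampled per frame as below (our own sketch; we read "unit variance" as unit average power, $E[r_n^2]=1$, so each antenna sees unit average channel gain):

```python
import numpy as np

def rayleigh_channels(M, n_frames, rng):
    """One quasi-static channel vector per frame; E[r_n^2] = 1 (assumption)."""
    r = rng.rayleigh(scale=1.0 / np.sqrt(2), size=(n_frames, M))  # 2*scale^2 = 1
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(n_frames, M))
    return r * np.exp(1j * theta)

H = rayleigh_channels(M=2, n_frames=100000, rng=np.random.default_rng(2))
print(float(np.mean(np.abs(H) ** 2)))  # close to 1.0 average channel gain
```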
For all algorithms, the same cyclic frequency located at $\alpha_0=2f_c$ is used. This cyclic feature has been shown to only be present in the conjugate cyclic autocorrelation which means the C-CCST statistic is used for the proposed algorithm. This feature is chosen because of its relative strength compared to other cyclic frequencies. The maximum cyclic autocorrelation at this cyclic frequency is observed at $\tau_0=0$. As for the algorithm that computes the MSDF, the frequency resolution is chosen to be $f_s/100$.
\subsection{Spatially Uncorrelated Noise}
In the case where the noise at each antenna is uncorrelated ($\mathbf{R}_{\boldsymbol{\eta\eta}}=\sigma^2_\eta\mathbf{I}_M$) and only the SOI occupies the observed band, the ROC curve completely determines the performance of each detection algorithm for a given SNR and $N$. A comparison of the ROC curves for the algorithms discussed is shown in Fig.~\ref{fig:ROC}. For these simulations the SNR is fixed at -14 dB, although the trends are consistent for other SNRs in the range $[-20,0]$ dB where the experiments were conducted. As seen from this figure, the proposed method has better detection performance than any of the other algorithms.
Interestingly, the method also outperforms BMRC-MSDF, which as discussed in Section~\ref{subsec:complexity} has significantly higher computational complexity. Although this result initially appears counter-intuitive, further experiments show that in the case of perfect CSI (not shown in the figure) MRC-MSDF has performance comparable to EV-CSS. Therefore, we can conclude that at very low SNR the blind channel estimates based on the SVD have large errors and the full benefit of MRC is not achieved. In contrast, EV-CSS is able to fully take advantage of the information from all antennas because it does not directly use an estimate of the CSI.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{pdf/roc_14.pdf}
\caption{ROC curves comparing the proposed EV-CSS to BMRC-MSDF and to a post-combining method using EGC. The same number of samples $N=4000$ and $\text{SNR}=-14$ dB are used for all techniques.}
\label{fig:ROC}
\end{figure}
The effect of varying SNR on probability of detection is shown in Fig.~\ref{fig:pd}. The $P_{fa}$ is kept constant at 0.1. It can be seen that all the methods except EGC reach $P_d=1$ at 0 dB SNR. Again, we see that the proposed method consistently has better detection performance at all SNRs compared to the rest of the algorithms.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{pdf/pd_uncorrelated.pdf}
\caption{The effect of varying SNR on detection probability ($P_d$) for different multiple-antenna cyclostationary spectrum sensing techniques, including the proposed EV-CSS, with $P_{fa}=0.1$ under uncorrelated noise. The same number of samples $N=4000$ is used.}
\label{fig:pd}
\end{figure}
\subsection{Spatially Correlated Noise}
The effect of varying the number of samples, $N$, is shown in Fig.~\ref{fig:pdcorr}. The second set of plots also shows a spatially correlated noise environment with $\rho=0.5$. All methods, with the exception of EGC, are robust to spatial correlation; in fact, the proposed method shows a slight improvement in spatially correlated noise. We can also see that the proposed method performs well even with a small number of samples, while the MSDF-based methods require more samples to achieve comparable performance.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{pdf/pd_samprho.pdf}
\caption{The effect of varying the number of samples, $N$, on detection probability ($P_d$) for the various techniques. Solid lines are for spatially uncorrelated noise environments, while dashed lines are for $\rho=0.5$.}
\label{fig:pdcorr}
\end{figure}
\subsection{Effect of Interfering Signal}
We test the robustness of these algorithms in the presence of a strong co-channel interferer by introducing another BPSK signal with the same symbol rate and 30\% spectral overlap. The effect of the interferer on the detection performance is shown in Fig.~\ref{fig:interfer} as the signal-to-interferer ratio (SIR), defined as the ratio of the SOI power to the interferer power, is varied from -20 dB to 0 dB. The noise is kept constant at $\sigma_\eta=1$. The proposed algorithm shows very good signal selectivity, giving $P_d$ close to 1 even in the presence of a co-channel interferer with 100 times the SOI's power. By performing the correlation entirely in the time domain, the proposed method is able to suppress the interferer much better than the MSDF.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{pdf/pd_interferer.pdf}
\caption{The effect of a co-channel BPSK interferer with 30\% spectral overlap on detection probability. The number of samples used for all antennas is $N=4000$ and the noise level is kept constant at $\sigma_\eta=1$.}
\label{fig:interfer}
\end{figure}
\subsection{Effect of Number of Antennas}
The effect of the number of antennas, $M$, on detection accuracy is studied in Fig.~\ref{fig:varym}. Note that for EV-CSS, to keep $P_{fa}$ constant at 0.1 the threshold only needs to be updated for each $M$ based on (\ref{eqn:threshold}). On the other hand, for the rest of the algorithms, the threshold must be set separately for each $N$, $M$, and SNR. The SNR across all antennas is assumed to be the same. Since the total number of samples increases with more antennas ($N_T=MN$), we expect all algorithms to perform better with higher $M$. In addition, more antennas also introduce spatial diversity, which reduces the probability of all antennas being in a deep fade during the sensing period. Similar to previous results, EV-CSS has better performance than BMRC-MSDF for all values of $M$.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{pdf/antenna.pdf}
\caption{The effect of the number of antennas on the detection probability. The same number of samples per antenna, $N=4000$, is used.}
\label{fig:varym}
\end{figure}
\section{Conclusion}
\label{sec:Conclusion}
A multi-antenna spectrum sensing algorithm based on the cyclic correlation significance test was proposed. The method was shown to outperform existing multiple-antenna signal-selective spectrum sensing methods in the literature. The computational complexity of the algorithm was compared with that of the best performing existing algorithm, which uses MRC by blindly estimating the CSI, and was shown to require substantially fewer multiplications. The detection threshold for CFAR was also determined, both theoretically and via simulation, to be independent of the noise variance and the number of samples. This means that a single threshold suffices for a given number of antennas, eliminating the need for separate noise estimation. Future work includes theoretical performance analysis of the proposed algorithm.
\vspace{-3mm}
\section{Introduction}
Recently, the old subject of quantum field theories in a rigid AdS background
\cite{Callan:1989em}
has turned out to be conveniently viewed as a non-gravitational instance of AdS/CFT, suggesting new ideas and methods.
For example, flat space scattering amplitudes
of a massive theory may be obtained at large curvature radius by studying
the large scaling dimension regime of the boundary conformal correlators
\cite{Paulos:2016fap,Paulos:2016but,Carmi:2018qzm}.
The specific AdS$_{2}$ case
attracted much interest from the very beginning
\cite{DHoker:1983zwg,DHoker:1983msr,Inami:1985di} and from the AdS/CFT perspective it has very
special features, like the conjectured duality between a gravitational theory in AdS$_2$ and a
chiral half of a 2d CFT \cite{Strominger:1998yg}.
\footnote{
Recently, the rigid AdS$_{2}$ background
played a key role in the analysis of correlators
of $\mathcal N=4$ SYM local operators inserted on a straight or circular Wilson line
\cite{Drukker:2000ep,Alday:2007he,Polyakov:2000ti,Polchinski:2011im,
Drukker:2006xg,Giombi:2017cqn,Beccaria:2018ocq,Beccaria:2019dws}.
At strong coupling, the AdS$_{5}\times S^5$ string action is expanded
near the minimal surface associated with the
1d defect and leads to a 2d field theory action in AdS$_2$ background
\cite{Giombi:2017cqn,Beccaria:2019dws}.
}
In the gravitational
context, 2d diffeomorphisms are a gauge symmetry and Virasoro symmetry
may appear as an asymptotic symmetry whose boundary manifestation
are 1d time reparametrizations \cite{Hotta:1998iq,Cadoni:1999ja,NavarroSalas:1999up}, possibly
spontaneously broken to $SL(2,\mathbb R)$
\cite{Almheiri:2014cka,Maldacena:2016upp,Engelsoy:2016xyb}.
Instead, for a rigid AdS$_{2}$ background, the natural counterpart of this setup is to consider
a theory that is locally conformal in the bulk and to explore the occurrence of
enhanced boundary conformal symmetry.
As a first step in this direction, the analysis in \cite{Beccaria:2019stp} examined the case of
Liouville theory \cite{Polyakov:1981rd,Teschner:2001rv,Nakayama:2004vk} with curved space action
\begin{equation}
\label{1.1}
\mathcal S = \frac{1}{4\pi}\int d^{2}x\,\sqrt{g}\, \big(\partial^{a}\varphi\partial_{a}\varphi
+\mu\,e^{2\,\beta\,\varphi}+Q\,R\,\varphi\big),\qquad\qquad Q=\beta+\beta^{-1} .
\end{equation}
It is Weyl-covariant on a fixed curved 2d background with the central charge $c=1+6\,Q^{2}$.
In particular, on Euclidean AdS$_{2}$ background with metric $ds^{2} = \frac{1}{\mathsf{z}^{2}}(dt^{2}+d\mathsf{z}^{2})$
the Liouville field $\varphi$ can be expanded near its constant vacuum expectation value and its fluctuations have classical
mass $m^2= 2$. Bulk properties of Liouville theory on AdS$_{2}$ have been discussed previously
\cite{DHoker:1983msr,
Zamolodchikov:2001ah,Menotti:2004uq}, while the recent study in \cite{Beccaria:2019stp} focused
on boundary correlators. With Dirichlet boundary condition on the AdS boundary $\mathsf{z}=0$ the field $\varphi$ has
asymptotics $\varphi(t, \mathsf{z})\big|_{\mathsf{z}\to 0} =\mathsf{z}^2 \Phi (t) + ...$, and is dual to the 1d
CFT operator $\Phi(t)$ with conformal dimension $\Delta=2$ obeying the AdS$_{2}$ relation $m^{2}=\Delta(\Delta-1)$.
The associated boundary correlators are defined as usual by
\begin{equation}
\label{1.2}
\llangle \Phi(t_{1})\cdots \Phi(t_{n})\rrangle
\equiv \lim_{\mathsf{z}_1,...,\mathsf{z}_n\to 0} \mathsf{z}^{-2}_1 \cdots \mathsf{z}^{-2}_n
\, \langle\varphi(t_{1}, \mathsf{z}_{1})\cdots\varphi(t_{n}, \mathsf{z}_{n})\rangle,
\end{equation}
and can be computed perturbatively at weak coupling
by expanding in Witten diagrams. Since we start from a 2d conformal theory
in AdS$_{2}$, we can expect a correspondence between
the boundary correlators and standard two-dimensional Virasoro correlators with the same
central charge. Indeed, as noticed in \cite{Ouyang:2019xdd}, this is true at tree level, {\em i.e.}
$\beta\ll 1$ or $c\gg 1$. The 2-, 3- and 4-point boundary correlators (\ref{1.2}) match the correlators of the
holomorphic stress tensor $T(z)$ according to
\begin{equation}
\label{1.3}
\llangle \Phi(t_{1})\cdots \Phi(t_{n})\rrangle = \kappa^{n}\, \langle T(z_{1})\cdots T(z_{n})\rangle\Big|_{z_{i}\to t_{i}} \ ,
\end{equation}
where $\kappa=\kappa(\beta) $ is a proportionality coefficient appearing in the
formal identification $\Phi(t) \to \kappa\, T(t)$ upon restriction of the 2d chiral stress tensor
to the real boundary $z_i=t_i + i y_i \to t_i$. \footnote{The identification can be explained at semiclassical level
by identifying $\Phi$ as the surviving piece in the boundary limit of the Toda stress tensor. This simple
reasoning requires quantum refinements as discussed for the Liouville theory in \cite{Beccaria:2019stp}.}
In \cite{Beccaria:2019stp}, the relation
(\ref{1.3}) has been tested beyond the leading tree level
approximation by computing the one-loop corrections to various correlators
$\llangle \Phi(t_{1})\cdots \Phi(t_{n})\rrangle$.
One of the outcomes of the analysis is the following proposal for the all-order expression of the intertwining coefficient $\kappa(\beta)$
\begin{equation}
\label{1.4}
\kappa(\beta) = -\frac{4Q}{c} = - \frac{ 4\,\beta (1 + \beta^2)}{(3+2\,\beta^{2})(2+3\,\beta^{2})} =
-\frac{2}{3}\,\beta+\frac{7}{9}\, \beta^{3}+\cdots.
\end{equation}
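The algebra behind (\ref{1.4}) is easy to cross-check numerically. The following is our own sketch, verifying that $-4Q/c$ with $Q=\beta+\beta^{-1}$ and $c=1+6Q^2$ reduces to the closed form above and matches the small-$\beta$ expansion:

```python
from fractions import Fraction

def kappa_def(b):
    # kappa = -4Q/c with Q = b + 1/b and c = 1 + 6 Q^2
    Q = b + 1.0 / b
    return -4.0 * Q / (1.0 + 6.0 * Q ** 2)

def kappa_closed(b):
    # closed form: -4 b (1 + b^2) / ((3 + 2 b^2)(2 + 3 b^2))
    return -4.0 * b * (1.0 + b ** 2) / ((3.0 + 2.0 * b ** 2) * (2.0 + 3.0 * b ** 2))

def kappa_series(b):
    # small-beta expansion: -(2/3) b + (7/9) b^3 + O(b^5)
    return -Fraction(2, 3) * b + Fraction(7, 9) * b ** 3

b = 0.01
print(abs(kappa_def(b) - kappa_closed(b)) < 1e-12)                          # True
print(abs(kappa_closed(b) - float(kappa_series(Fraction(1, 100)))) < 1e-9)  # True
```

In particular $\kappa\to -\tfrac{2}{3}\beta$ as $\beta\to 0$, the tree-level value quoted in (\ref{1.4}).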
A natural generalization
of the Liouville correspondence (\ref{1.3}) consists in its extension to
conformal Toda theories of non-affine type \cite{Gervais:1983am,Mansfield:1982sq,Braaten:1983pz}
on the AdS$_{2}$ background.
In the $A_{n}$ case, expanding near the minimum of the Toda potential,
one finds $n$ scalar fields $\varphi_{\Delta}$ with masses $m^{2}=\Delta(\Delta-1)$ corresponding to
$\Delta=2, \dots, n+1$ \cite{Ouyang:2019xdd}.
The expected generalization of the duality relation
(\ref{1.3}) reads then
\begin{equation}
\label{1.5}
\llangle \Phi_{\Delta_{1}}(t_{1})\cdots \Phi_{\Delta_{n}}(t_{n})\rrangle =\big( \prod_{i=1}^{n}\kappa_{\Delta_{i}}\big) \
\langle Q_{\Delta_{1}}(z_{1})\cdots Q_{\Delta_{n}}(z_{n})\rangle\Big|_{z_{i}\to t_{i}} \ ,
\end{equation}
where $\llangle \Phi_{\Delta_{1}}(t_{1})\cdots \Phi_{\Delta_{n}}(t_{n})\rrangle=
\lim_{\mathsf{z}_1,...,\mathsf{z}_n\to 0} \mathsf{z}^{-\Delta_{1}}_1 \cdots \mathsf{z}^{-\Delta_{n}}_n
\, \langle\varphi_{\Delta_{1}}(t_{1}, \mathsf{z}_{1})\cdots\varphi_{\Delta_{n}}(t_{n}, \mathsf{z}_{n})\rangle$,
$Q_\Delta= \{Q_{2}\equiv T, Q_{3}, \dots, Q_{n+1}\}$ are the generators of the chiral $\mathcal W_{n+1}$ algebra
replacing and extending the Virasoro symmetry
and with the same central charge as the Toda theory. The coefficients $\kappa_{\Delta_{i}}$
are functions of the Toda coupling entering the correspondence $\Phi_{\Delta}\to \kappa_{\Delta}Q_{\Delta}$.
The relation (\ref{1.5}) was noticed at tree level in \cite{Ouyang:2019xdd}
in a few sample 4-point functions
in the Toda theories associated to some rank-2 algebras
with two scalar fields (one dual to the
stress tensor $T$ and the other dual to a higher spin chiral field $Q_{s}$).
In this paper, we discuss the relation (\ref{1.5})
for the four-point functions
involving the two fields with next-to-minimal higher ``spin'' $\Delta=3,4$ in the general $A_{n}$ Toda theory.
This analysis aims to exclude accidental simplifications specific to low rank. Despite being a leading-order
analysis, not involving loops in AdS,\footnote{
It is natural to expect (\ref{1.5})
to hold also at the quantum level as
should be possible to check by the methods used in \cite{Beccaria:2019stp}, {\em cf.} \cite{BHT}.}
the AdS/CFT matching of the full dependence on the rank $n$ proves to be a quite stringent constraint.
Technically, the $A_{n}$ case is feasible and rather straightforward on the AdS side due to some peculiar regularities of
the cubic and quartic couplings with respect to the rank $n$. At leading non-trivial order,
selection rules reduce the calculation to a finite sum of
Witten diagrams that can be exactly computed. On the CFT side, the task is in principle harder and amounts to
the calculation of 4-point correlators of spin-3 and spin-4 generators of the Casimir W-algebra
$\mathcal W_{n+1}$ \cite{Bouwknegt:1992wg}. The structure of such algebras depends non-trivially on the rank $n$
and the fusion structure constants are functions of $n$ and the central charge that are not known
in general. Nevertheless, we shall be only interested in the leading and sub-leading correlators at large central charge.
Known results about semiclassical Virasoro blocks together with a careful use of crossing symmetry and the meromorphic
properties of the correlators will allow a simple determination of the desired correlators for generic rank. \footnote{
Clearly, at given rank, one can use the explicit fusion algebra of $\mathcal WA_{n}$ or, what is the same, compute the four-point
functions using free-field representations, {\em cf.} \cite{Fateev:1987zh}. However, as we pointed out,
our interest is in the subleading corrections to the large central charge limit at generic rank,
and a more direct approach is better suited to this aim.
}
Our analysis confirms the validity of (\ref{1.5}) for the considered 4-point functions
in the $A_{n}$ Toda theory, at least at the classical level. This lends
further support to that relation and, in principle, makes it possible to test higher-loop AdS calculations
by $\mathcal W$-algebraic methods.
The structure of the paper is as follows. In Section \ref{sec:toda} we illustrate the tools needed to compute
tree level boundary correlators in the $A_{n}$ Toda theory on AdS$_{2}$ and present explicit results
for the 4-point functions of scalars with $\Delta=3,4$. Section \ref{sec:w-corr} presents the associated results
for the dual CFT fields in the $\mathcal W_{n}$ Virasoro extension. The correlators of generators with spin 3, 4
are computed for generic rank at subleading order in the large central charge expansion. Finally,
Section \ref{sec:final} concludes the analysis by checking agreement with the universal relation (\ref{1.5}).
Some technical tools are briefly collected in two appendices.
\section{Tree-level 4-point functions in $A_{n}$ Toda theory in AdS$_{2}$}
\label{sec:toda}
We shall consider the $A_{n}$ Toda theory in AdS$_{2}$ with classical action,
see for instance \cite{Fateev:2007ab},
\begin{equation}
\label{2.1}
\mathcal S_{n} = \int d^{2}x\,\sqrt{g}\,\bigg[\tfrac{1}{2}\partial^{\mu}\bm\phi\cdot\partial_{\mu}\bm\phi+
V_{n}(\bm \phi)\bigg],\qquad
V_{n}(\bm \phi) = \frac{1}{\beta^{2}}\sum_{i=1}^{n}q_{i}\,e^{\beta\,\bm\alpha_{i}\cdot\bm\phi}
+\frac{1}{\beta}\,R\,\bm \rho^{\vee}\cdot \bm \phi,
\end{equation}
where $\beta$ is a coupling and $V_{n}$ will be referred to as the {\em potential}.
In the special case of $A_{1}$ the action reduces to the Liouville action, {\em cf.} (\ref{1.1}).
The field $\bm\phi$ is an $n$-component multiplet of scalar fields, $\bm\alpha_{i}$ are the simple roots of the Lie algebra
$A_{n}$ and the Weyl vector $\bm\rho^{\vee}$ satisfies $\bm\alpha_{i}
\cdot\bm\rho^{\vee}=1$ for all $i=1,\dots, n$. The numbers $q_{i}$ are taken to be the unique solution to the condition
$\sum_{i=1}^{n}q_{i}\ \bm\alpha_{i}\cdot \bm\alpha_{j} = 2$, for $j=1, \dots, n$.
\footnote{The numbers $q_{i}$ are not restricted to be
integer since a shift of the scalar fields is understood and such that linear terms are removed from the action.}
The action (\ref{2.1}) is Weyl invariant \footnote{This requires a quantum shift $\frac{1}{\beta}\to
\frac{1}{\beta}+\beta$ in the last term of the potential in (\ref{2.1}). This correction will have no effects
at the perturbative order of the calculations in this paper. Instead, it is an important ingredient in the
loop corrections discussed in \cite{Beccaria:2019stp}.}
and flat space integrability carries over to any
conformally flat background. In particular we are interested in AdS$_{2}$ with unit radius and Poincar\'e coordinates
\begin{equation}
ds^{2} = \tfrac{1}{z^{2}}\,(dz^{2}+dt^{2}), \qquad (t,z)\in\mathbb{R}\times \mathbb{R}^{+}.
\end{equation}
The kinetic part of the action (including mass terms) is diagonalized by going to a basis of normalized eigenvectors
of the matrix $\mathcal A_{ij} = q_{k} (A^{1/2})_{ki}\,(A^{1/2})_{kj}$,
where $A$ is the (symmetric) Cartan matrix
of the $A_{n}$ algebra
\begin{equation}
\label{2.3}
A_{ij} = \frac{2\,\bm\alpha_{i}\cdot \bm\alpha_{j}}{|\bm\alpha_{i}|^{2}} = \tfrac{1}{2}\,\bm\alpha_{i}\cdot \bm \alpha_{j},
\qquad
A = {\footnotesize \begin{pmatrix}
2 & -1 & 0 & \cdots & 0 \\
-1 & 2 & -1 & \cdots & 0 \\
& \cdots & \cdots \\
0 & \cdots & -1 & 2 & -1 \\
0 & \cdots & 0 & -1 & 2
\end{pmatrix}}.
\end{equation}
The mass matrix $\mathcal A_{ij}$ can be put in a simple tridiagonal form. To this aim, one
writes the $A_{n}$ simple roots in the form
$\bm\alpha_{1} = \sqrt{2}\,\bm e_{1}$, $\bm \alpha_{2} = \beta_{21}\,\bm e_{1}+\beta_{22}\,\bm e_{2}$, $\dots$,
$\bm \alpha_{p} = \sum_{q=1}^{p}\beta_{pq}\bm e_{q}$,
where $\bm e_{1}, \dots, \bm e_{n}$ are orthonormal vectors in $\mathbb R^{n}$.
This gives the only non-zero elements
\begin{equation}
\mathcal A_{p, p} = \frac{n+1}{2}+n\,p-p^{2},\qquad
\mathcal A_{p, p+1}^{2} = \mathcal A_{p+1, p}^{2} = \frac{1}{4}\,p\,(p+2)\,(p-n)^{2}.\notag
\end{equation}
In terms of the rotated fields $\bm\phi\to \bm\phi'$, the Lagrangian reads
\begin{equation}
\mathcal L = \tfrac{1}{2}\,\partial^{\mu}\bm\phi'\cdot\partial_{\mu}\bm\phi'+\tfrac{1}{2}\,\sum_{i=1}^{n}\,
m_{i}^{2}\,(\phi_{i}')^{2}+V_{n}(\bm\phi'),
\end{equation}
where the masses can be written $m^{2}_{i} = \Delta_{i}\,(\Delta_{i}-1)$ with the simple pattern
\begin{equation}
\{\Delta_{i}\} = 2, 3, 4, \dots, n+1.
\end{equation}
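The spectrum can be checked numerically from the tridiagonal entries of $\mathcal A$ given above. In this normalization the eigenvalues of $\mathcal A$ come out as $m_{i}^{2}/2=\Delta_{i}(\Delta_{i}-1)/2$, consistently with the quadratic terms of the potentials in (\ref{2.6}) (e.g.\ $V_{1}\supset\varphi_{2}^{2}=\tfrac{1}{2}m^{2}\varphi_{2}^{2}$ with $m^{2}=2$). A numerical sketch (the function name is ours):

```python
import numpy as np

def calA(n):
    """Tridiagonal form of the matrix A_ij quoted in the text."""
    M = np.zeros((n, n))
    for p in range(1, n + 1):          # diagonal entries
        M[p - 1, p - 1] = (n + 1) / 2 + n * p - p**2
    for p in range(1, n):              # off-diagonal entries (sign irrelevant for eigenvalues)
        M[p - 1, p] = M[p, p - 1] = 0.5 * np.sqrt(p * (p + 2)) * abs(p - n)
    return M

for n in range(1, 9):
    eig = np.sort(np.linalg.eigvalsh(calA(n)))
    # eigenvalues = Delta(Delta-1)/2 with Delta = 2, ..., n+1
    expected = np.array([d * (d - 1) / 2 for d in range(2, n + 2)])
    assert np.allclose(eig, expected)
```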
According to the AdS/CFT dictionary, $\Delta_{i}$ are the conformal dimensions of the dual boundary fields.
Since these will turn out to be chiral fields, we shall often refer to $\Delta_{i}$ as the {\em spin} quantum number.
The non-polynomial potentials $V_{n}(\bm\phi')$ can be expanded in powers of $\beta$ producing cubic, quartic and higher order
couplings.
It is convenient to relabel the fields by using their would-be conformal dimension
$\phi_{i}'\to \varphi_{\Delta_{i}}$. Moreover, for the following discussion,
it will be convenient to perform the $n$-dependent rescaling $\beta\to \sqrt{\frac{n(n+1)(n+2)}{6}}\,\beta$
so that the $\varphi_{2}^{3}$ coupling is the same for all $n$. With these
conventions, in the first three cases $n=1,2,3$, the explicit potentials read
\begin{align}
\label{2.6}
V_{1}(\varphi_{2}) &= \varphi_{2}^2+\tfrac{2}{3}\,\beta\,\varphi_2^3+\tfrac{1}{3} \beta ^2 \varphi_{2}^4+\cdots\,, \\
V_{2}(\varphi_{2}, \varphi_{3}) &= \varphi _2^2+3 \varphi _3^2+\beta (\tfrac{2}{3} \varphi _2^3+6 \varphi _3^2
\varphi _2) +\beta ^2 (\tfrac{1}{3} \varphi _2^4+6 \varphi _3^2 \varphi _2^2+3 \varphi_3^4)+\cdots\,, \notag \\
V_{3}(\varphi_{2}, \varphi_{3}, \varphi_{4}) &=
\varphi _2^2+3 \varphi _3^2+6 \varphi _4^2+\beta
(\tfrac{2}{3} \varphi _2^3+6
\varphi _3^2 \varphi _2+12 \varphi _4^2 \varphi _2-4 \varphi _4^3+12 \varphi _3^2 \varphi_4)\notag \\
& +\beta ^2 (\tfrac{1}{3}\,\varphi _2^4+6 \varphi _3^2 \varphi _2^2+12 \varphi _4^2
\varphi _2^2-8 \varphi _4^3 \varphi _2+24 \varphi _3^2 \varphi _4 \varphi _2+5 \varphi_3^4
+14 \varphi _4^4+24 \varphi _3^2 \varphi _4^2)
+\cdots\, .\notag
\end{align}
\bigskip\noindent
We want to discuss the 4-point function of the scalars dual to the first two
higher spin fields, those with spin 3 and 4. The non-zero cases are the two diagonal
and the one mixed 4-point functions
\begin{equation}
\label{2.7}
\llangle\varphi_{3}\varphi_{3}\varphi_{3}\varphi_{3}\rrangle,\qquad
\llangle\varphi_{3}\varphi_{3}\varphi_{4}\varphi_{4}\rrangle,\qquad
\llangle\varphi_{4}\varphi_{4}\varphi_{4}\varphi_{4}\rrangle.
\end{equation}
To compute them in the generic $A_{n}$ Toda theory, we need some of the cubic and quartic couplings at
generic $n$. Not surprisingly, the non-zero couplings we shall need form only a finite set. Apart from the
$3333$, $3344$, and $4444$ contact Witten diagrams, we can have an exchange in one of three kinematical channels
mediated by two cubic interactions. The $33s$ couplings are non-zero only for $s=2,4$ while
the $44s$ couplings are non-zero only for $s=2,4,6$. Finally, for the $3344$ four-point function
we also need the non-zero couplings $34s$. Apart from $s=3$, there is only $s=5$.
A systematic analysis of the potentials at increasing rank shows that we can write
\begin{align}
\label{2.8}
V_{n}(\bm\varphi) &= \cdots+\beta\,(\tfrac{2}{3}\,\varphi_{2}^{3}+6\,\varphi_{3}^{2}\,\varphi_{2}
+12\,\varphi_{4}^{2}\,\varphi_{2}
+c_{334}\,\varphi_{3}^{2}\,\varphi_{4}
+c_{444}\,\varphi_{4}^{3}
+c_{446}\,\varphi_{4}^{2}\,\varphi_{6}
+c_{345}\,\varphi_{3}\,\varphi_{4}\,\varphi_{5}
)\notag \\
& +
\beta^{2}\,(c_{3333}\,\varphi_{3}^{4}+c_{3344}\,\varphi_{3}^{2}\,\varphi_{4}^{2}+
c_{4444}\,\varphi_{4}^{4})
+\cdots,
\end{align}
where the couplings are expressed by the following remarkably simple rational expressions
\footnote{The formulas for couplings involving the field $\varphi_{k}$
are valid if $n\ge k-1$, otherwise that field is simply absent.}
\begin{align}
\label{2.9}
(c_{334})^{2} &= \frac{1728}{7}\,\frac{(n+4)(n-2)}{(n+3)(n-1)}, &
c_{3333} &= \frac{15}{7}\,\frac{3n^{2}+6n-17}{(n+3)(n-1)}, \notag \\
(c_{444})^{2} &= \frac{448 (n^2+2 n-18)^2}{3 (n-2) (n-1) (n+3) (n+4)} , &
(c_{446})^{2} &= \frac{30000 (n-4) (n-3) (n+5) (n+6)}{11 (n-2) (n-1) (n+3) (n+4)} , \notag \\
c_{3344} &= \frac{12 (7 n^2+14 n-81)}{(n-1) (n+3)}, &
c_{4444} &= \frac{56 (9 n^4+36 n^3-238 n^2-548 n+2316)}{11 (n-2) (n-1) (n+3) (n+4)},\notag \\
(c_{345})^{2} &= \frac{24000 (n-3) (n+5)}{7 (n-1) (n+3)}.
\end{align}
Notice that in the case of affine Toda theories, the coupling structure is somewhat simpler and there are
known selection rules for the cubic couplings, as well as (recursion) relations for their values
\cite{Christe:1989ah,Christe:1989my,Braden:1989bu,Braden:1989bg,Gabai:2018tmm}. Expressions (\ref{2.9})
are valid for the Toda theories based on finite Lie algebras and are, to our knowledge, novel ingredients needed in our analysis.
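As a sanity check, the general formulas (\ref{2.9}) can be evaluated at $n=3$ and compared with the coefficients appearing explicitly in $V_{3}$ of (\ref{2.6}); only the squares of the cubic couplings are constrained, since their signs can be flipped by field redefinitions. A sympy sketch:

```python
import sympy as sp

n = sp.Integer(3)  # rank at which V_3 in (2.6) is written out explicitly
D = (n - 2) * (n - 1) * (n + 3) * (n + 4)

c334_sq = sp.Rational(1728, 7) * (n + 4) * (n - 2) / ((n + 3) * (n - 1))
c444_sq = 448 * (n**2 + 2*n - 18)**2 / (3 * D)
c3333   = sp.Rational(15, 7) * (3*n**2 + 6*n - 17) / ((n + 3) * (n - 1))
c3344   = 12 * (7*n**2 + 14*n - 81) / ((n - 1) * (n + 3))
c4444   = 56 * (9*n**4 + 36*n**3 - 238*n**2 - 548*n + 2316) / (11 * D)

# Coefficients read off from V_3: 12 phi3^2 phi4, -4 phi4^3,
# 5 phi3^4, 24 phi3^2 phi4^2, 14 phi4^4.
assert c334_sq == 12**2
assert c444_sq == (-4)**2
assert c3333 == 5
assert c3344 == 24
assert c4444 == 14
```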
\medskip
\noindent
To compute 4-point functions like (\ref{2.7}) at leading order, we need the
bulk-to-bulk propagator for a scalar field in AdS$_{2}$ with mass term $m^{2}=\Delta(\Delta-1)$. It is
\begin{align}
\label{2.10}
& G_{\Delta}(t_{1}, \mathsf{z}_{1}; t_{2}, \mathsf{z}_{2}) = \mathcal C_{\Delta}\,u^{-\Delta}\,_{2}F_{1}(\Delta, \Delta, 2\Delta; -\tfrac{4}{u}),\quad
\mathcal C_{\Delta} = \tfrac{\Gamma(\Delta)}{2\,\sqrt\pi\,\Gamma(\Delta+\frac{1}{2})},
\end{align}
where the chordal distance is $u = \frac{(t_{1}-t_{2})^{2}+(\mathsf{z}_{1}-\mathsf{z}_{2})^{2}}{\mathsf{z}_{1}\,\mathsf{z}_{2}}$.
The bulk-to-boundary propagator is, after suitable field rescaling,
\begin{equation}
K_{\Delta}(t_{0}; t, \mathsf{z}) = \mathcal C_{\Delta}\left[\frac{\mathsf{z}}{\mathsf{z}^{2}+(t-t_{0})^{2}}\right]^{\Delta},
\end{equation}
and is associated with fields whose 2-point function has coefficient $\mathcal C_{\Delta}$. Since we shall be interested in
fields with unit normalized 2-point function, one factor $\mathcal C_{\Delta_{i}}^{-1/2}$ will be attached to any boundary-to-bulk
line. Some technical tools for the evaluation of AdS integrals are collected in App. \ref{app-Dfun}.
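As a consistency check of the two propagators, $K_{\Delta}$ is recovered as the boundary limit $\lim_{\mathsf{z}_{2}\to 0}\mathsf{z}_{2}^{-\Delta}\,G_{\Delta}(t_{1},\mathsf{z}_{1};t_{2},\mathsf{z}_{2})$, since $u\to\infty$ and the hypergeometric factor tends to $1$ in this limit. A numerical sketch with mpmath (function names and sample points are ours):

```python
from mpmath import mp, hyp2f1, gamma, sqrt, pi

mp.dps = 30

def C(Delta):
    # normalization C_Delta = Gamma(Delta) / (2 sqrt(pi) Gamma(Delta + 1/2))
    return gamma(Delta) / (2 * sqrt(pi) * gamma(Delta + mp.mpf('0.5')))

def G_bulk(Delta, t1, z1, t2, z2):
    # bulk-to-bulk propagator (2.10) with chordal distance u
    u = ((t1 - t2)**2 + (z1 - z2)**2) / (z1 * z2)
    return C(Delta) * u**(-Delta) * hyp2f1(Delta, Delta, 2*Delta, -4/u)

def K_bndry(Delta, t0, t, z):
    # bulk-to-boundary propagator
    return C(Delta) * (z / (z**2 + (t - t0)**2))**Delta

Delta, t1, z1, t2 = 3, mp.mpf('0.7'), mp.mpf('1.3'), mp.mpf('-0.4')
z2 = mp.mpf('1e-8')
lhs = G_bulk(Delta, t1, z1, t2, z2) / z2**Delta
rhs = K_bndry(Delta, t2, t1, z1)
assert abs(lhs / rhs - 1) < 1e-6
```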
\subsection{Four point functions involving $\Delta=3,4$ fields in the $A_{n}$ Toda theory}
We now compute the 4-point functions (\ref{2.7}) by evaluating the associated Witten diagrams and using
the general couplings in (\ref{2.9}). The leading order is given by two boundary-to-boundary propagators
and is a disconnected contribution independent of $\beta$. This part is almost trivial and will be discussed at the end.
Instead, we focus on the non-trivial connected contribution, which starts at quadratic order $\mathcal O(\beta^{2})$.
\subsubsection{$\Delta=3$ boundary correlator $\llangle \varphi_{3}\varphi_{3}\varphi_{3}\varphi_{3}\rrangle$}
From the couplings in (\ref{2.8}), we have the following (Witten) Feynman rules for the relevant cubic vertices
and quartic coupling
\begin{center}
\begin{tikzpicture}[line width=1 pt, scale=0.6]
\draw (0:1)--(0,0); \node[right] at (0:1) {2};
\draw (120:1)--(0,0); \node[left] at (120:1) {3};
\draw (-120:1)--(0,0); \node[left] at (-120:1) {3};
\node[right=0.3cm] at (0:1) {$= -12\,\beta,$};
\end{tikzpicture}
\hskip 1.5cm
\begin{tikzpicture}[line width=1 pt, scale=0.6]
\draw (0:1)--(0,0); \node[right] at (0:1) {4};
\draw (120:1)--(0,0); \node[left] at (120:1) {3};
\draw (-120:1)--(0,0); \node[left] at (-120:1) {3};
\node[right=0.3cm] at (0:1) {$= -2\,\beta\,c_{334},$};
\end{tikzpicture}
\hskip 1.5cm
\begin{tikzpicture}[line width=1 pt, scale=0.6]
\draw (45:1)--(0,0); \node[right] at (45:1) {3};
\draw (135:1)--(0,0); \node[left] at (135:1) {3};
\draw (225:1)--(0,0); \node[left] at (225:1) {3};
\draw (315:1)--(0,0); \node[right] at (315:1) {3};
\node[right=0.2cm] at (0:1) {$= -4!\,\beta^{2}\,c_{3333},$};
\end{tikzpicture}
\end{center}
where the value of the couplings $c_{334}$ and $c_{3333}$ has been given in (\ref{2.9}).
The four-point function is then given by the sum of the diagrams in Fig.~\ref{fig:3333}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[line width=1 pt, scale=0.5, rotate=0]
\coordinate (A1) at (140:2);
\coordinate (A2) at (-140:2);
\coordinate (A3) at (40:2);
\coordinate (A4) at (-40:2);
\coordinate (M1) at (-1,0); \coordinate (M2) at (1,0);
\draw[densely dashed,line width=0.5pt] (0,0) circle (2);
\draw (A1)--(M1)--(A2); \draw (A3)--(M2)--(A4); \draw (M1)--(M2);
\draw[fill=black] (A1) circle (0.12); \node[left] at (A1) {$3$};
\draw[fill=black] (A2) circle (0.12); \node[left] at (A2) {$3$};
\draw[fill=black] (A3) circle (0.12); \node[right] at (A3) {$3$};
\draw[fill=black] (A4) circle (0.12); \node[right] at (A4) {$3$};
\node[above] at (0,0) {$2, 4$};
\node at (3,0) {$+$};
\end{tikzpicture}
\begin{tikzpicture}[line width=1 pt, scale=0.5, rotate=0]
\coordinate (A1) at (140:2);
\coordinate (A2) at (-140:2);
\coordinate (A3) at (40:2);
\coordinate (A4) at (-40:2);
\coordinate (M1) at (0,0.8); \coordinate (M2) at (0,-0.8);
\draw[densely dashed,line width=0.5pt] (0,0) circle (2);
\draw (A1)--(M1)--(A3); \draw (A2)--(M2)--(A4); \draw (M1)--(M2);
\draw[fill=black] (A1) circle (0.12); \node[left] at (A1) {$3$};
\draw[fill=black] (A2) circle (0.12); \node[left] at (A2) {$3$};
\draw[fill=black] (A3) circle (0.12); \node[right] at (A3) {$3$};
\draw[fill=black] (A4) circle (0.12); \node[right] at (A4) {$3$};
\node[left] at (0,0) {$2, 4$};
\node at (4,0) {+ \scriptsize u-channel};
\end{tikzpicture}
\begin{tikzpicture}[line width=1 pt, scale=0.5, rotate=0]
\coordinate (A1) at (140:2);
\coordinate (A2) at (-140:2);
\coordinate (A3) at (40:2);
\coordinate (A4) at (-40:2);
\coordinate (M1) at (0,0.8); \coordinate (M2) at (0,-0.8);
\draw[densely dashed,line width=0.5pt] (0,0) circle (2);
\draw (A1)--(0,0)--(A3); \draw (A2)--(0,0)--(A4);
\draw[fill=black] (A1) circle (0.12); \node[left] at (A1) {$3$};
\draw[fill=black] (A2) circle (0.12); \node[left] at (A2) {$3$};
\draw[fill=black] (A3) circle (0.12); \node[right] at (A3) {$3$};
\draw[fill=black] (A4) circle (0.12); \node[right] at (A4) {$3$};
\end{tikzpicture}
\caption{Tree diagrams contributing to $\llangle\varphi_{3}\varphi_{3}\varphi_{3}\varphi_{3}\rrangle$.
}\label{fig:3333}
\end{figure}
It can be computed in terms of the $\overline{D}$ functions defined in App.~\ref{app-Dfun}. We find
the connected contribution
\begin{align}
\mathcal C_{3}^{-2}\,\llangle & \varphi_{3}(t_{1})\cdots\varphi_{3}(t_{4})\rrangle_{\rm conn} =
(-12\beta)^{2}(W^{s}_{3333; 2}+W^{t}_{3333; 2}+W^{u}_{3333; 2}) \notag \\
& +(-2\,\beta\,c_{334})^{2}\,(W^{s}_{3333; 4}+W^{t}_{3333; 4}+W^{u}_{3333; 4}) -4!\,
\beta^{2}\,c_{3333}\,D_{3333}\notag \\
&= \frac{15\,\pi\,\beta^{2}}{512}\,\bigg[
\frac{7 (c_{334}^2+36)
\overline{D}_{2,2,3,3}}{t_{12}^2 t_{13}^4
t_{24}^4 t_{34}^2}+\frac{7 (c_{334}^2+36)
\overline{D}_{2,3,2,3}}{t_{13}^6 t_{24}^6}+\frac{7 (c_{334}^2+36)
\overline{D}_{2,3,3,2}}{t_{13}^6 t_{24}^6}-\frac{756 c_{3333}
\overline{D}_{3,3,3,3}}{t_{13}^6 t_{24}^6}\notag \\
& +\frac{180
\overline{D}_{1,1,3,3}}{t_{12}^4 t_{13}^2 t_{24}^2 t_{34}^4}+\frac{180
\overline{D}_{1,3,1,3}}{t_{13}^6 t_{24}^6}+\frac{180
\overline{D}_{1,3,3,1}}{t_{13}^6 t_{24}^6}
\bigg].
\end{align}
Using the explicit values of the relevant $\overline{D}$ functions, {\em cf.} (\ref{B.4}), we may write
\begin{equation}
\mathcal C_{3}^{-2}\,
\llangle \varphi_{3}(t_{1})\cdots\varphi_{3}(t_{4})\rrangle_{\rm conn} = \frac{\beta^{2}}{t_{12}^{6}\,t_{34}^{6}}\,
G_{3333}^{\rm AdS}(\chi),\qquad \chi =\frac{t_{12}t_{34}}{t_{13}t_{24}},
\end{equation}
with
\begin{align}
G_{3333}^{\rm AdS}(\chi) &= -(c_{334}^2-72 c_{3333}+216)\,\frac{3 \pi \chi ^6 (\chi ^2-\chi +1)
(2 \chi ^2-7 \chi +7)}{256 (1-\chi)^5}\,\log\chi\notag \\
&-\frac{3}{256} \pi (c_{334}^2-72 c_{3333}+216) \chi (\chi ^2-\chi +1) (2 \chi ^2+3 \chi +2)\,\log(1-\chi)\notag \\
& -\frac{3 \pi \chi ^2}{1024 (1-\chi )^4} \bigg[c_{334}^2 (8 \chi ^6-24 \chi ^5+13 \chi ^4+14
\chi ^3+13 \chi ^2-24 \chi +8)\notag \\
& -48\, c_{3333} (12 \chi ^6-36 \chi ^5+37
\chi ^4-14 \chi ^3+37 \chi ^2-36 \chi +12)\notag \\
& -36 (2 \chi ^6-6 \chi ^5-3
\chi ^4+16 \chi ^3-3 \chi ^2-6 \chi +2)\bigg].
\end{align}
The logarithmic terms $\sim\log\chi, \log(1-\chi)$
vanish using the explicit form of $c_{334}$ and $c_{3333}$, see (\ref{2.9}).
This non-trivial fact will have an explanation in terms
of the boundary CFT and will be associated with the absence of anomalous dimensions of the various dual fields.
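The cancellation hinges on the identity $c_{334}^{2}-72\,c_{3333}+216=0$ for the combination multiplying the logarithms, which can be checked symbolically for generic $n$; a sympy sketch:

```python
import sympy as sp

n = sp.symbols('n')

# couplings from (2.9)
c334_sq = sp.Rational(1728, 7) * (n + 4) * (n - 2) / ((n + 3) * (n - 1))
c3333   = sp.Rational(15, 7) * (3*n**2 + 6*n - 17) / ((n + 3) * (n - 1))

# coefficient of the log terms in G_3333^AdS vanishes identically in n
assert sp.simplify(c334_sq - 72*c3333 + 216) == 0
```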
The remaining expression for the 4-point function takes then the following compact form
\begin{align}
G_{3333}^{\rm AdS}(\chi) &= \frac{675\,\pi}{256\,(n-1)(n+3)}
\,\frac{\chi^{2}}{(1-\chi)^{4}}\,\bigg[
2 (n-1) (n+3)(1-3\chi-3\chi^{5}+\chi^{6})\notag \\
& +(9 n^2+18 n-43) \chi ^2(1+\chi^{2}) -8 (n^2+2
n-7) \chi ^3
\bigg],
\end{align}
that obeys the correct crossing relations
\begin{equation}
G_{3333}^{\rm AdS}(\chi) = \tfrac{\chi^{6}}{(1-\chi)^{6}}\,G_{3333}^{\rm AdS}(1-\chi), \qquad
G_{3333}^{\rm AdS}(\chi) = G_{3333}^{\rm AdS}(\tfrac{\chi}{\chi-1}).
\end{equation}
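These relations can be verified symbolically for arbitrary rank; a sympy sketch (the overall constant of $G_{3333}^{\rm AdS}$ is dropped since it cancels out of the crossing relations):

```python
import sympy as sp

chi, n = sp.symbols('chi n')

# G_3333^AdS up to the overall constant 675 pi / (256 (n-1)(n+3))
G = chi**2 / (1 - chi)**4 * (
    2*(n - 1)*(n + 3)*(1 - 3*chi - 3*chi**5 + chi**6)
    + (9*n**2 + 18*n - 43)*chi**2*(1 + chi**2)
    - 8*(n**2 + 2*n - 7)*chi**3
)

# G(chi) = chi^6/(1-chi)^6 G(1-chi)  and  G(chi) = G(chi/(chi-1))
assert sp.cancel(G - chi**6/(1 - chi)**6 * G.subs(chi, 1 - chi)) == 0
assert sp.cancel(G - G.subs(chi, chi/(chi - 1))) == 0
```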
\subsubsection{$\Delta=4$ boundary correlator $\llangle \varphi_{4}\varphi_{4}\varphi_{4}\varphi_{4}\rrangle$}
In this case, again from (\ref{2.8}), we have the following cubic vertices
and quartic coupling
\begin{center}
\begin{tikzpicture}[line width=1 pt, scale=0.6]
\draw (0:1)--(0,0); \node[right] at (0:1) {2};
\draw (120:1)--(0,0); \node[left] at (120:1) {4};
\draw (-120:1)--(0,0); \node[left] at (-120:1) {4};
\node[right=0.3cm] at (0:1) {$= -24\,\beta,$};
\end{tikzpicture}
\hskip 0.1cm
\begin{tikzpicture}[line width=1 pt, scale=0.6]
\draw (0:1)--(0,0); \node[right] at (0:1) {4};
\draw (120:1)--(0,0); \node[left] at (120:1) {4};
\draw (-120:1)--(0,0); \node[left] at (-120:1) {4};
\node[right=0.3cm] at (0:1) {$= -3!\,\beta\,c_{444},$};
\end{tikzpicture}
\hskip 0.1cm
\begin{tikzpicture}[line width=1 pt, scale=0.6]
\draw (0:1)--(0,0); \node[right] at (0:1) {6};
\draw (120:1)--(0,0); \node[left] at (120:1) {4};
\draw (-120:1)--(0,0); \node[left] at (-120:1) {4};
\node[right=0.3cm] at (0:1) {$= -2\,\beta\,c_{446},$};
\end{tikzpicture}
\hskip 0.1cm
\begin{tikzpicture}[line width=1 pt, scale=0.6]
\draw (45:1)--(0,0); \node[right] at (45:1) {4};
\draw (135:1)--(0,0); \node[left] at (135:1) {4};
\draw (225:1)--(0,0); \node[left] at (225:1) {4};
\draw (315:1)--(0,0); \node[right] at (315:1) {4};
\node[right=0.2cm] at (0:1) {$= -4!\,\beta^{2}\,c_{4444},$};
\end{tikzpicture}
\end{center}
where the couplings $c_{444}$, $c_{446}$, and $c_{4444}$ have been given in (\ref{2.9}).
The four-point (connected) function is then given by the diagrams in Fig.~\ref{fig:4444} and reads
\begin{figure}[ht]
\centering
\begin{tikzpicture}[line width=1 pt, scale=0.5, rotate=0]
\coordinate (A1) at (140:2);
\coordinate (A2) at (-140:2);
\coordinate (A3) at (40:2);
\coordinate (A4) at (-40:2);
\coordinate (M1) at (-1,0); \coordinate (M2) at (1,0);
\draw[densely dashed,line width=0.5pt] (0,0) circle (2);
\draw (A1)--(M1)--(A2); \draw (A3)--(M2)--(A4); \draw (M1)--(M2);
\draw[fill=black] (A1) circle (0.12); \node[left] at (A1) {$4$};
\draw[fill=black] (A2) circle (0.12); \node[left] at (A2) {$4$};
\draw[fill=black] (A3) circle (0.12); \node[right] at (A3) {$4$};
\draw[fill=black] (A4) circle (0.12); \node[right] at (A4) {$4$};
\node[above] at (0,0) {$2, 4, 6$};
\node at (3,0) {$+$};
\end{tikzpicture}
\begin{tikzpicture}[line width=1 pt, scale=0.5, rotate=0]
\coordinate (A1) at (140:2);
\coordinate (A2) at (-140:2);
\coordinate (A3) at (40:2);
\coordinate (A4) at (-40:2);
\coordinate (M1) at (0,0.8); \coordinate (M2) at (0,-0.8);
\draw[densely dashed,line width=0.5pt] (0,0) circle (2);
\draw (A1)--(M1)--(A3); \draw (A2)--(M2)--(A4); \draw (M1)--(M2);
\draw[fill=black] (A1) circle (0.12); \node[left] at (A1) {$4$};
\draw[fill=black] (A2) circle (0.12); \node[left] at (A2) {$4$};
\draw[fill=black] (A3) circle (0.12); \node[right] at (A3) {$4$};
\draw[fill=black] (A4) circle (0.12); \node[right] at (A4) {$4$};
\node[left=-0.05cm] at (0,0) {$2, 4, 6$};
\node at (4,0) {+ \scriptsize u-channel};
\end{tikzpicture}
\begin{tikzpicture}[line width=1 pt, scale=0.5, rotate=0]
\coordinate (A1) at (140:2);
\coordinate (A2) at (-140:2);
\coordinate (A3) at (40:2);
\coordinate (A4) at (-40:2);
\coordinate (M1) at (0,0.8); \coordinate (M2) at (0,-0.8);
\draw[densely dashed,line width=0.5pt] (0,0) circle (2);
\draw (A1)--(0,0)--(A3); \draw (A2)--(0,0)--(A4);
\draw[fill=black] (A1) circle (0.12); \node[left] at (A1) {$4$};
\draw[fill=black] (A2) circle (0.12); \node[left] at (A2) {$4$};
\draw[fill=black] (A3) circle (0.12); \node[right] at (A3) {$4$};
\draw[fill=black] (A4) circle (0.12); \node[right] at (A4) {$4$};
\end{tikzpicture}
\caption{Tree diagrams contributing to $\llangle\varphi_{4}\varphi_{4}\varphi_{4}\varphi_{4}\rrangle$.
}\label{fig:4444}
\end{figure}
\begin{align}
\mathcal C_{4}^{-2}\,\llangle & \varphi_{4}(t_{1})\cdots\varphi_{4}(t_{4})\rrangle_{\rm conn} =
(-24\beta)^{2}(W^{s}_{4444; 2}+W^{t}_{4444; 2}+W^{u}_{4444; 2}) \notag \\
& +(-3!\,\beta\,c_{444})^{2}\,(W^{s}_{4444; 4}+W^{t}_{4444; 4}+W^{u}_{4444; 4}) \notag \\
& +(-2\,\beta\,c_{446})^{2}\,(W^{s}_{4444; 6}+W^{t}_{4444; 6}+W^{u}_{4444; 6})
-4!\,\beta^{2}\,c_{4444}\,D_{4444}\notag \\
&= \frac{\pi\,\beta^{2}}{6144}\,\bigg[
\frac{315 (9 c_{444}^2+224) \overline{D}_{2,2,4,4}}{t_{12}^4 t_{13}^4
t_{24}^4 t_{34}^4}+\frac{315 (9 c_{444}^2+224)
\overline{D}_{2,4,2,4}}{t_{13}^8 t_{24}^8}+\frac{315 (9 c_{444}^2+224)
\overline{D}_{2,4,4,2}}{t_{13}^8 t_{24}^8}\notag \\
& +\frac{385 (9
c_{444}^2+c_{446}^2+144) \overline{D}_{3,3,4,4}}{t_{12}^2 t_{13}^6
t_{24}^6 t_{34}^2}+\frac{385 (9 c_{444}^2+c_{446}^2+144)
\overline{D}_{3,4,3,4}}{t_{13}^8 t_{24}^8}+\frac{385 (9
c_{444}^2+c_{446}^2+144) \overline{D}_{3,4,4,3}}{t_{13}^8
t_{24}^8}\notag \\
& -\frac{60060 c_{4444} \overline{D}_{4,4,4,4}}{t_{13}^8
t_{24}^8}+\frac{39200 \overline{D}_{1,1,4,4}}{t_{12}^6 t_{13}^2
t_{24}^2 t_{34}^6}+\frac{39200 \overline{D}_{1,4,1,4}}{t_{13}^8
t_{24}^8}+\frac{39200 \overline{D}_{1,4,4,1}}{t_{13}^8 t_{24}^8}
\bigg].
\end{align}
This may be written
\begin{equation}
\mathcal C_{4}^{-2}\,
\llangle \varphi_{4}(t_{1})\cdots\varphi_{4}(t_{4})\rrangle_{\rm conn} = \frac{\beta^{2}}{t_{12}^{8}\,t_{34}^{8}}\,
G_{4444}^{\rm AdS}(\chi),
\end{equation}
where
\begin{align}
& G_{4444}^{\rm AdS}(\chi) = \frac{\pi\,\chi^{8}}{6144\,(1-\chi)^{7}}\,
\bigg[
-5121 c_{444}^2-110 c_{446}^2+30888 c_{4444}-350496\notag \\
& +3 (5121
c_{444}^2+110 c_{446}^2-30888 c_{4444}+350496) \chi \notag \\
& +(-26019
c_{444}^2-770 c_{446}^2+175032 c_{4444}-2034144) \chi ^2\notag \\
& +99 (267
c_{444}^2+10 c_{446}^2-1976 c_{4444}+23392) \chi ^3-2 (8253
c_{444}^2+350 c_{446}^2-64584 c_{4444}+772128) \chi ^4\notag \\
& +130 (45
c_{444}^2+2 c_{446}^2-360 c_{4444}+4320) \chi ^5-20 (45 c_{444}^2+2
c_{446}^2-360 c_{4444}+4320) \chi ^6
\bigg]\,\log\chi\notag \\
&
+\frac{\pi\chi}{6144}\bigg[
-20 (45 c_{444}^2+2 c_{446}^2-360 c_{4444}+4320)\notag \\
& -10 (45 c_{444}^2+2 c_{446}^2-360 c_{4444}+4320) \chi -36 (21 c_{444}^2-88 c_{4444}+896)
\chi ^2\notag \\
& +(-909 c_{444}^2+10 c_{446}^2+2952 c_{4444}-26784) \chi ^3-36
(21 c_{444}^2-88 c_{4444}+896) \chi ^4\notag \\
& -10 (45 c_{444}^2+2
c_{446}^2-360 c_{4444}+4320) \chi ^5-20 (45 c_{444}^2+2 c_{446}^2-360
c_{4444}+4320) \chi ^6
\bigg]\,\log(1-\chi)\notag \\
&+\frac{\pi\chi^{2}(1-\chi+\chi^{2})^{2}}{36864(1-\chi)^{6}}\,\bigg[
-120 (45 c_{444}^2+2 c_{446}^2-360 c_{4444}+400)\notag \\
& +360 (45 c_{444}^2+2
c_{446}^2-360 c_{4444}+400) \chi +(-4851 c_{444}^2-140
c_{446}^2+44208 c_{4444}-72576) \chi ^2\notag \\
& -2 (8649 c_{444}^2+460
c_{446}^2-63792 c_{4444}+47424) \chi ^3+(-4851 c_{444}^2-140
c_{446}^2+44208 c_{4444}-72576) \chi ^4\notag \\
& +360 (45 c_{444}^2+2
c_{446}^2-360 c_{4444}+400) \chi ^5-120 (45 c_{444}^2+2 c_{446}^2-360
c_{4444}+400) \chi ^6\bigg].
\end{align}
Again, the logarithmic terms $\sim \log\chi, \log(1-\chi)$ vanish using the explicit form of the couplings. This is due to the
{\em strange} identities
\begin{equation}
(c_{444})^{2} = \frac{8}{21}\,(-112+11\,c_{4444}),\qquad
(c_{446})^{2} = \frac{600}{7}\,(-14+c_{4444}),
\end{equation}
that can be easily checked using (\ref{2.9}). In conclusion, we find
\begin{align}
& G_{4444}^{\rm AdS}(\chi) = \notag \\
& \frac{245\,\pi}
{192\,(n-1)(n-2)(n+3)(n+4)}\,\frac{\,\chi^{2}\,(1-\chi+\chi^{2})^{2}}{(1-\chi)^{6}}\,\bigg[
10 (n-2) (n-1) (n+3) (n+4)\,(1-3\chi-3\chi^{5}+\chi^{6})\notag \\
&
+9 (2 n^4+8 n^3-39 n^2-94 n+348) \chi ^2(1+\chi^{2}) +2 (7 n^4+28 n^3+176 n^2+296
n-2532) \chi ^3
\bigg],
\end{align}
that obeys the correct crossing relations
\begin{equation}
G_{4444}^{\rm AdS}(\chi) = \tfrac{\chi^{8}}{(1-\chi)^{8}}\,G_{4444}^{\rm AdS}(1-\chi), \qquad
G_{4444}^{\rm AdS}(\chi) = G_{4444}^{\rm AdS}(\tfrac{\chi}{\chi-1}).
\end{equation}
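Both the {\em strange} identities and the crossing relations above can be checked symbolically for generic $n$; a sympy sketch (the overall constant of $G_{4444}^{\rm AdS}$ is dropped since it cancels out of the crossing relations):

```python
import sympy as sp

chi, n = sp.symbols('chi n')
D = (n - 2) * (n - 1) * (n + 3) * (n + 4)

# couplings from (2.9)
c444_sq = 448 * (n**2 + 2*n - 18)**2 / (3 * D)
c446_sq = sp.Rational(30000, 11) * (n - 4) * (n - 3) * (n + 5) * (n + 6) / D
c4444   = 56 * (9*n**4 + 36*n**3 - 238*n**2 - 548*n + 2316) / (11 * D)

# the "strange" identities behind the cancellation of the log terms
assert sp.simplify(c444_sq - sp.Rational(8, 21) * (-112 + 11*c4444)) == 0
assert sp.simplify(c446_sq - sp.Rational(600, 7) * (-14 + c4444)) == 0

# crossing symmetry of G_4444^AdS
G = chi**2 * (1 - chi + chi**2)**2 / (1 - chi)**6 * (
    10*D*(1 - 3*chi - 3*chi**5 + chi**6)
    + 9*(2*n**4 + 8*n**3 - 39*n**2 - 94*n + 348)*chi**2*(1 + chi**2)
    + 2*(7*n**4 + 28*n**3 + 176*n**2 + 296*n - 2532)*chi**3
)
assert sp.cancel(G - chi**8/(1 - chi)**8 * G.subs(chi, 1 - chi)) == 0
assert sp.cancel(G - G.subs(chi, chi/(chi - 1))) == 0
```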
\subsubsection{Mixed boundary correlator $\llangle \varphi_{3}\varphi_{3}\varphi_{4}\varphi_{4}\rrangle$}
Finally, we can evaluate the mixed 4-point function $\llangle \varphi_{3}\varphi_{3}\varphi_{4}\varphi_{4}\rrangle$
by the same methods. We do not repeat all the steps leading to the final result, since they are entirely analogous to those
for the diagonal 4-point functions.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[line width=1 pt, scale=0.5, rotate=0]
\coordinate (A1) at (140:2);
\coordinate (A2) at (-140:2);
\coordinate (A3) at (40:2);
\coordinate (A4) at (-40:2);
\coordinate (M1) at (-1,0); \coordinate (M2) at (1,0);
\draw[densely dashed,line width=0.5pt] (0,0) circle (2);
\draw (A1)--(M1)--(A2); \draw (A3)--(M2)--(A4); \draw (M1)--(M2);
\draw[fill=black] (A1) circle (0.12); \node[left] at (A1) {$3$};
\draw[fill=black] (A2) circle (0.12); \node[left] at (A2) {$3$};
\draw[fill=black] (A3) circle (0.12); \node[right] at (A3) {$4$};
\draw[fill=black] (A4) circle (0.12); \node[right] at (A4) {$4$};
\node[above] at (0,0) {$2, 4$};
\node at (3,0) {$+$};
\end{tikzpicture}
\begin{tikzpicture}[line width=1 pt, scale=0.5, rotate=0]
\coordinate (A1) at (140:2);
\coordinate (A2) at (-140:2);
\coordinate (A3) at (40:2);
\coordinate (A4) at (-40:2);
\coordinate (M1) at (0,0.8); \coordinate (M2) at (0,-0.8);
\draw[densely dashed,line width=0.5pt] (0,0) circle (2);
\draw (A1)--(M1)--(A3); \draw (A2)--(M2)--(A4); \draw (M1)--(M2);
\draw[fill=black] (A1) circle (0.12); \node[left] at (A1) {$3$};
\draw[fill=black] (A2) circle (0.12); \node[left] at (A2) {$3$};
\draw[fill=black] (A3) circle (0.12); \node[right] at (A3) {$4$};
\draw[fill=black] (A4) circle (0.12); \node[right] at (A4) {$4$};
\node[left=-0.05cm] at (0,0) {$3, 5$};
\node at (4,0) {+ \scriptsize u-channel};
\end{tikzpicture}
\begin{tikzpicture}[line width=1 pt, scale=0.5, rotate=0]
\coordinate (A1) at (140:2);
\coordinate (A2) at (-140:2);
\coordinate (A3) at (40:2);
\coordinate (A4) at (-40:2);
\coordinate (M1) at (0,0.8); \coordinate (M2) at (0,-0.8);
\draw[densely dashed,line width=0.5pt] (0,0) circle (2);
\draw (A1)--(0,0)--(A3); \draw (A2)--(0,0)--(A4);
\draw[fill=black] (A1) circle (0.12); \node[left] at (A1) {$3$};
\draw[fill=black] (A2) circle (0.12); \node[left] at (A2) {$3$};
\draw[fill=black] (A3) circle (0.12); \node[right] at (A3) {$4$};
\draw[fill=black] (A4) circle (0.12); \node[right] at (A4) {$4$};
\end{tikzpicture}
\caption{Tree diagrams contributing to the mixed boundary correlator
$\llangle\varphi_{3}\varphi_{3}\varphi_{4}\varphi_{4}\rrangle$.
}\label{fig:3344}
\end{figure}
From the connected diagrams in Fig.~\ref{fig:3344} we can write
\begin{equation}
\mathcal C_{3}^{-1}\,\mathcal C_{4}^{-1}\,
\llangle \varphi_{3}(t_{1}) \varphi_{3}(t_{2}) \varphi_{4}(t_{3}) \varphi_{4}(t_{4})\rrangle_{\rm conn}=
\frac{\beta^{2}}{t_{12}^{6}\,t_{34}^{8}}\,G_{3344}^{\rm AdS}(\chi),
\end{equation}
with
\begin{align}
G_{3344}^{\rm AdS}(\chi) &=
\frac{105\,\pi}
{128\,(n-1)(n+3)}\,\frac{\chi^{2}}{(1-\chi)^{4}}\,\bigg[
10 (n-1) (n+3) (1-3\chi)+4(n-2)(n+4)\chi^{5}(-8+3\chi)\notag \\
& +(-303+82 n+41 n^{2})\chi^{2}
-8(-57+8n+4n^{2})\,\chi^{3}+(-469+86n+43n^{2})\,\chi^{4}
\bigg],
\end{align}
obeying the single crossing relation
\begin{equation}
G_{3344}^{\rm AdS}(\chi) = G_{3344}^{\rm AdS}(\tfrac{\chi}{\chi-1}).
\end{equation}
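This relation, too, can be checked symbolically for generic $n$; a sympy sketch (overall constant dropped):

```python
import sympy as sp

chi, n = sp.symbols('chi n')

# G_3344^AdS up to the overall constant 105 pi / (128 (n-1)(n+3))
G = chi**2 / (1 - chi)**4 * (
    10*(n - 1)*(n + 3)*(1 - 3*chi)
    + 4*(n - 2)*(n + 4)*chi**5*(-8 + 3*chi)
    + (-303 + 82*n + 41*n**2)*chi**2
    - 8*(-57 + 8*n + 4*n**2)*chi**3
    + (-469 + 86*n + 43*n**2)*chi**4
)

# single crossing relation G(chi) = G(chi/(chi-1))
assert sp.cancel(G - G.subs(chi, chi/(chi - 1))) == 0
```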
\section{Chiral 4-point functions in $\mathcal W_{n}$ Virasoro extensions}
\label{sec:w-corr}
We now turn to the CFT side and discuss how to compute the relevant (chiral) 4-point functions
in the $\mathcal W_{3}$ and $\mathcal W_{4}$ extended Virasoro algebras. In particular, we analyze the structure of their large $c$ expansion
and show how to compute the subleading $\mathcal O(c)$ contribution to the spin 3 and spin 4 correlators
in the generic $\mathcal W_{n}$ algebra.
\subsection{General structure of 4-point functions in ${\rm 2d}$ CFT}
Let us review some basic facts about the conformal block decomposition of 4-point functions
of primary fields, see for instance \cite{Dolan:2000ut,Dolan:2003hv,Perlmutter:2015iya}.
Let us consider four chiral primaries with
dimensions $\Delta_{i}$ and define $G(z)$ by
\begin{equation}
\label{3.1}
\langle \mathcal O_{\Delta_{1}}(\infty)\,\mathcal O_{\Delta_{2}}(1)\,\mathcal O_{\Delta_{3}}(z)\,\mathcal O_{\Delta_{4}}(0)\rangle =
\lim_{w\to\infty}w^{2\,(\Delta_{1}+\Delta_{2})} \langle \mathcal O_{\Delta_{1}}(w)\,\mathcal O_{\Delta_{2}}(1)\,
\mathcal O_{\Delta_{3}}(z)\,\mathcal O_{\Delta_{4}}(0)\rangle = \frac{1}{z^{2\,(\Delta_{3}+\Delta_{4})}}\,G(z).
\end{equation}
The variable $z$ is again the cross ratio $\chi=\frac{(z_{1}-z_{2})(z_{3}-z_{4})}{(z_{1}-z_{3})(z_{2}-z_{4})}$ for the
configuration $z_{i}=(\infty, 1, z, 0)$. As we discussed in the Introduction,
the reason why we consider chiral primaries is that we want to compare
with the 1d boundary of AdS$_{2}$ that will be parametrized by $z$ restricted to be real $z=\overline z$.
The function $G(z)$ in (\ref{3.1}) may be expanded in the $s$-channel by summing over the
exchanged primaries $\mathcal O_{p}$
\begin{equation}
\label{3.2}
G(z) = \sum_{p}C_{12p}C_{34p}\,\mathcal F(\bm{\Delta}, \Delta_{p}; z),
\end{equation}
where $C_{abc}$ are the three-point function coefficients for primaries with
unit-normalized 2-point functions, and an obvious dependence on the central charge is understood.
The Virasoro conformal block $\mathcal F(\bm{\Delta}, \Delta_{p}; z)$ is fully determined by Virasoro symmetry.
It is convenient to present it as a sum over contributions from the level $q$ quasi-primaries appearing in the
Verma module $M(\Delta_{p})$ built upon $\mathcal O_{p}$. One has
\begin{equation}
\label{3.3}
\mathcal F(\bm{\Delta}, \Delta_{p}; z) = z^{\Delta_{p}}\,\sum_{q=0}^{\infty}\chi_{q}(\bm{\Delta}, \Delta_{p})\,z^{q}\,
_{2}F_{1}(\Delta_{p}+q+\Delta_{12}, \Delta_{p}+q+\Delta_{34}, 2\,(\Delta_{p}+q); z),\quad \Delta_{ij}=\Delta_{i}-\Delta_{j},
\end{equation}
where the expansion coefficients $\chi_{q}(\bm{\Delta}, \Delta_{p})$
are fully determined by the Virasoro algebra, {\em i.e.} they can be computed by summing over
the explicit quasi-primaries that appear in the Virasoro Verma module built on $\mathcal O_{-\Delta_{p}}|0\rangle$. One
important special case is the contribution from the identity $\mathcal O_{p}=\mathbb{I}$. In this case,
we have $\Delta_{p}\to 0$ together with the constraints $\Delta_{1}=\Delta_{2}$ and $\Delta_{3}=\Delta_{4}$. Then,
\begin{equation}
\mathcal F(\bm{\Delta},0; z) = \sum_{q=0}^{\infty}\chi_{q}(\bm{\Delta}, 0)\,z^{q}\,
_{2}F_{1}(q, q, 2q; z),
\end{equation}
with
\begin{align}
\label{3.5}
\chi_{0}(\bm{\Delta}, 0) &=1, \quad
\chi_{2}(\bm{\Delta}, 0)=\frac{2\,\Delta_{1}\,\Delta_{3}}{c}, \quad
\chi_{4}(\bm{\Delta}, 0)=\frac{10\,(\Delta_{1}^{2}+\frac{\Delta_{1}}{5})
\,(\Delta_{3}^{2}+\frac{\Delta_{3}}{5})}{c\,(5c+22)}, \notag \\
\chi_{6}(\bm{\Delta},0) &= \frac{(14\Delta_{1}^{2}+\Delta_{1})(14\Delta_{3}^{2}+\Delta_{3})}{63\,c\,(70c+29)}\notag \\
&+\frac{4\Delta_{1}\Delta_{3}[c(70\Delta_{1}^{2}+42\Delta_{1}+8)+29\Delta_{1}^{2}-57\Delta_{1}-2]
[c(70\Delta_{3}^{2}+42\Delta_{3}+8)+29\Delta_{3}^{2}-57\Delta_{3}-2]}
{3\,c\,(2c-1)\,(5c+22)\,(7c+68)\,(70c+29)},
\end{align}
and so forth.
Remarkably, one has the general result
\begin{equation}
\lim_{c\to\infty} \mathcal F(\bm{\Delta}, \Delta_{p}; z) = z^{\Delta_{p}}\,_{2}F_{1}(\Delta_{p}+\Delta_{12}, \Delta_{p}
+\Delta_{34}, 2\Delta_{p}; z),
\end{equation}
{\em i.e.} the Virasoro block reduces to the global block. This fact will play a role in the following
discussion of the 4-point function as a boundary 1d CFT, although we shall need some refinement, see
(\ref{3.23}) below.
\subsection{Virasoro extensions and $\mathcal W_{n}$}
We are interested in CFTs with extended Virasoro symmetry associated with
additional chiral generators $Q_{s}$ of integer spin $s\ge 3$
\cite{Zamolodchikov:1985wn}. This means that we have the singular operator product expansions (OPE)
\begin{equation}
T(z) T(0) \sim \frac{c}{2\,z^{4}}+\frac{2\,T(0)}{z^{2}}+\frac{T'(0)}{z},\qquad
T(z)Q_{s}(0) \sim \frac{s}{z^{2}}\,Q_{s}(0)+\frac{1}{z}\,Q'_{s}(0),
\end{equation}
where $T$ is the stress-energy tensor, and $c$ the central charge. The conformal Ward identities
read
{\small
\begin{align}
\label{3.8}
& \langle T(z_{1})T(z_{2})\cdots T(z_{N})\,Q_{s_{1}}(w_{1})\cdots Q_{s_{M}}(w_{M})\rangle \notag \\
& = \sum_{i=2}^{N}\frac{c}{2\,(z_{1}-z_{i})^{4}}\,\langle
T(z_{2})\cdots T(z_{i-1})\,T(z_{i+1})\cdots T(z_{N})\,Q_{s_{1}}(w_{1})\cdots Q_{s_{M}}(w_{M})\rangle\notag \\
&+\bigg\{
\sum_{i=2}^{N}\bigg[
\frac{2}{(z_{1}-z_{i})^{2}}+\frac{1}{z_{1}-z_{i}}\frac{\partial}{\partial z_{i}}
\bigg]+\sum_{j=1}^{M}\bigg[
\frac{s_{j}}{(z_{1}-w_{j})^{2}}+\frac{1}{z_{1}-w_{j}}\frac{\partial}{\partial w_{j}}
\bigg]
\bigg\}\,\langle T(z_{2})\cdots T(z_{N})\,Q_{s_{1}}(w_{1})\cdots Q_{s_{M}}(w_{M})\rangle.
\end{align}
}
Without higher spin fields, they imply the well known $T$ correlators
\begin{align}
\label{3.9}
\langle T(z_{1})T(z_{2})\rangle &= \frac{c}{2\,z_{12}^{4}}, \\
\label{3.10}
\langle T(z_{1})T(z_{2})T(z_{3})\rangle &= \sum_{i=2}^{3}\bigg[
\frac{2}{(z_{1}-z_{i})^{2}}+\frac{1}{z_{1}-z_{i}}\frac{\partial}{\partial z_{i}}
\bigg]\,\langle T(z_{2})T(z_{3})\rangle = \frac{c}{z_{12}^{2}\,z_{13}^{2}\,z_{23}^{2}},\notag \\
\langle T(z_{1}) T(z_{2}) T(z_{3}) T(z_{4})\rangle &= \frac{c^{2}}{4}\,\bigg(\frac{1}{z_{12}^{4}\,z_{34}^{4}}
+\frac{1}{z_{13}^{4}\,z_{24}^{4}}+\frac{1}{z_{14}^{4}\,z_{23}^{4}}\bigg)\notag \\
&+c\,\bigg(
\frac{1}{z_{12}^{2}\,z_{23}^{2}\,z_{34}^{2}\,z_{14}^{2}}
+\frac{1}{z_{13}^{2}\,z_{24}^{2}\,z_{14}^{2}\,z_{23}^{2}}
+\frac{1}{z_{12}^{2}\,z_{24}^{2}\,z_{34}^{2}\,z_{13}^{2}}
\bigg).
\end{align}
The simplest case with higher spin fields is $\langle T(z_{1}) Q_{s}(z_{2}) Q_{s}(z_{3})\rangle$.
Assuming the standard (Zamolodchikov) normalization $\langle Q_{s}(z) Q_{s}(0)\rangle = \frac{c}{s}\,\frac{1}{z^{2s}}$,
we obtain, using (\ref{3.8}), an expression similar to (\ref{3.10}):
\begin{equation}
\label{3.11}
\langle T(z_{1})Q_{s}(z_{2})Q_{s}(z_{3})\rangle =
\frac{c}{z_{12}^{2}\,z_{13}^{2}\,z_{23}^{2s-2}}.
\end{equation}
For an extension with spins $s', s'', \dots$, usually denoted by $\mathcal W(2,s',s'',\dots)$, one can postulate
a set of OPEs for all the generators (including the stress tensor). To be consistent, these should be
equivalent to associativity of the correlators, or to the Jacobi identities. This is a highly non-trivial constraint; for a review of some
basic constructions see for instance \cite{Bowcock:1991zk,Bouwknegt:1992wg}. Generally speaking, there are
classes of solutions valid for generic central charge (apart from isolated special singular values) as well
as specific solutions that are valid only at certain central charges. Here we consider the simplest case of the former type,
{\em i.e.} the so-called quantum $\mathcal W_{n}\equiv \mathcal WA_{n-1}$ algebra that is
the simplest example of a Casimir algebra
\cite{Fateev:1987vh,Fateev:1987zh,Bais:1987dc,Bais:1987zk} and
contains higher spin generators $Q_{s}$ with spin $s=3, 4, \dots, n$. As a preliminary step,
we now discuss in some detail the 4-point functions for spin 3 and spin 4 generators
in $\mathcal W_{3}$ and $\mathcal W_{4}$. Then, we discuss their large $c$ limit, and its
generalization to all $\mathcal W_{n}$.
\subsubsection{4-point functions in the $\mathcal W_{3}$ algebra}
The simplest Virasoro extension is $\mathcal W_{3}\equiv\mathcal W(2,3)$ first discussed in
\cite{Zamolodchikov:1985wn}. Denoting by $Q_{3}$
the spin-3 primary, we have the fusion data (singular OPE between local conformal families)
\begin{equation}
\label{3.12}
Q_{3}\,Q_{3} = \frac{c}{3}\, [\mathbb{I}],
\end{equation}
where $[\mathbb{I}]$ is the conformal family of the identity operator.
Making the descendants explicit, this means the following singular OPE
\begin{equation}
Q_{3}(z)\, Q_{3}(0) = \frac{c}{3\,z^{6}}+\frac{2T(0)}{z^{4}}+\frac{T'(0)}{z^{3}}
+\frac{1}{z^{2}}\bigg[\tfrac{3}{10}T''(0)+\frac{32}{22+5c}\,\Lambda(0)\bigg]+
\frac{1}{z}\bigg[\tfrac{1}{15}\,T'''(0)+\frac{16}{22+5c}\,\Lambda'(0)\bigg]+\cdots\,,
\end{equation}
where $\Lambda(z)$ is the quasi-primary
composite operator $\Lambda = (TT)-\frac{3}{10}T''$. \footnote{Here and later,
we shall denote by $(\cdots)$ the (conformally) normal ordered composite operators, see for instance
\cite{Bouwknegt:1992wg}. There should not be confusion with ordinary brackets.} Isolating the poles from the OPE
in the 4-point function $\langle Q_{3}Q_{3}Q_{3}Q_{3}\rangle$, we get the exact result
for the $G$-function in (\ref{3.1})
\begin{align}
\label{3.14}
G_{3333}(z) &= \frac{c^{2}}{9}\bigg[1+z^{6}+\frac{z^{6}}{(1-z)^{6}}\bigg]+c\,\bigg[
2 z^4+2 z^3+\frac{9 z^2}{5}+\frac{8 z}{5}-\frac{96}{5 (1-z)}+\frac{99}{5
(1-z)^2}\notag \\
& -\frac{10}{(1-z)^3}+\frac{2}{(1-z)^4}+\frac{37}{5}\bigg]
+\frac{512\,c}{5\,(22+5\,c)}\,\frac{z^{4}}{(1-z)^{2}}.
\end{align}
As a check, one can verify the correct crossing relations
\begin{equation}
G_{3333}(z) = G_{3333}(\tfrac{z}{z-1}) = \tfrac{z^{6}}{(1-z)^{6}}\,G_{3333}(1-z).
\end{equation}
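Both crossing relations can be verified numerically. The following sketch (plain Python with exact rational arithmetic, not part of the original derivation; the function name is ours) encodes (3.14) and checks the two relations at a generic rational point:

```python
from fractions import Fraction as F

def G3333_W3(z, c):
    """Exact W_3 function G_3333(z) of eq. (3.14), in rational arithmetic."""
    u = 1 / (1 - z)   # shorthand for 1/(1-z)
    return (c**2 / 9 * (1 + z**6 + z**6 * u**6)
            + c * (2*z**4 + 2*z**3 + F(9, 5)*z**2 + F(8, 5)*z
                   - F(96, 5)*u + F(99, 5)*u**2 - 10*u**3 + 2*u**4 + F(37, 5))
            + F(512, 5) * c / (22 + 5*c) * z**4 * u**2)

z, c = F(1, 3), F(11, 2)   # generic rational sample point
assert G3333_W3(z, c) == G3333_W3(z / (z - 1), c)                 # first relation
assert G3333_W3(z, c) == z**6 / (1 - z)**6 * G3333_W3(1 - z, c)   # second relation
```

Since every term is a rational function of $z$ and $c$, exact agreement at a generic rational point is a stringent check of the crossing equations.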
\subsubsection{4-point functions in the $\mathcal W_{4}$ algebra}
The $\mathcal W_{4}\equiv\mathcal W(2,3,4)$ algebra fusion rules are discussed, {\em e.g.} in \cite{Kausch:1990bn}.
Denoting by $Q_{3}$ and $Q_{4}$ the spin-3 and 4 primaries, we have the OPEs
\begin{align}
\label{3.16}
Q_{3}\,Q_{3} &= \frac{c}{3}\,[\mathbb{I}]+\gamma\,[Q_{4}], \notag \\
Q_{3}\,Q_{4} &= \frac{3}{4}\,\gamma\,[Q_{3}],\notag \\
Q_{4}\,Q_{4} &= \frac{c}{4}\,[\mathbb{I}]+\mu\,[Q_{4}]+\lambda\,[\Phi_{6}],
\end{align}
where $\Phi_{6} = (Q_{3}Q_{3})+\cdots$ is the dimension 6 (composite) primary appearing in (\ref{A.4}). Up to automorphisms
changing the sign of $\gamma$,
the constants in (\ref{3.16}) are
\begin{equation}
\gamma=\pm\frac{4}{3}\,\sqrt\frac{3\,(7c+114)(c+2)}{(5c+22)(c+7)},\ \ \ \ \
\mu = -12\,\frac{c^{2}+c+218}{(5c+22)(c+7)\,\gamma},\ \ \ \ \
\lambda =\frac{45\,(5c+22)}{2\,(7c+114)(c+2)}.
\end{equation}
The explicit form of $\Phi_{6}$ is
\begin{align}
\Phi_{6} &= (Q_{3}Q_{3}) +\frac{(5 c+76) \sqrt{\frac{(c+2) (7
c+114)}{(c+7) (5 c+22)}}}{9 \sqrt{3} (c+24)} \, Q_{4}''
+\frac{88 \sqrt{\frac{(c+2) (7 c+114)}{(c+7) (5
c+22)}} }{3 \sqrt{3} (c+24)}\,(T\,Q_{4})\notag\\
& +\frac{1504-2 c (67 c+178)}{(2 c-1) (5 c+22) (7
c+68)}\, (T''\,T)
-\frac{c (225 c+1978)+776}{2 (2 c-1) (5
c+22) (7 c+68)}\,(T' \,T')\notag \\
& -\frac{16 (191
c+22) }{3 (2 c-1) (5 c+22) (7 c+68)}\,(TTT) -\frac{(c-8) [5 c (c+12)+4]}{6 (2
c-1) (5 c+22) (7 c+68)}\, T^{(4)},
\end{align}
with squared norm
\begin{equation}
\langle \Phi_{6}(z)\Phi_{6}(0)\rangle = \frac{4 (c-1) c (c+2) (c+13) (3 c+116) (7 c+114)}{27 (c+7) (c+24) (2 c-1) (7 c+68)}\,
\frac{1}{z^{12}}.
\end{equation}
We want to compute the $G$-functions associated with the correlators
\begin{equation}
\langle Q_{3}Q_{3}Q_{3}Q_{3}\rangle,\qquad
\langle Q_{3}Q_{3}Q_{4}Q_{4}\rangle,\qquad
\langle Q_{4}Q_{4}Q_{4}Q_{4}\rangle.
\end{equation}
A tedious but straightforward calculation gives \footnote{The package \cite{Thielemans:1991uw} is useful for such
computations.}
\begin{align}
\label{3.21}
G_{3333}(z) &= \frac{c^{2}}{9}\bigg[1+z^{6}+\frac{z^{6}}{(1-z)^{6}}\bigg]+c\,\bigg[
2z^{4}+2z^{3}+\frac{11}{3}z^{2}+\frac{16}{3}z-\frac{80}{3\,(1-z)}+\frac{65}{3\,(1-z)^{2}}\notag \\
& -\frac{10}{(1-z)^{3}}+\frac{2}{(1-z)^{4}}+13\bigg]
+\frac{100\,c}{3\,(c+7)}\,\frac{z^{4}}{(1-z)^{2}},\notag \\
G_{3344}(z) &= \frac{c^{2}}{12}+c\,\bigg[
\frac{7z^{4}}{5}+\frac{28z^{3}}{15}+2z^{2}+2z+\frac{7}{5(1-z)^{4}}-\frac{112}{15(1-z)^{3}}
+\frac{16}{(1-z)^{2}}-\frac{86}{5(1-z)}+\frac{109}{15}
\bigg]\notag \\
&+\frac{c}{15(7+c)(22+5c)}\,\frac{z^{4}}{(1-z)^{4}}\,[2\,(2844-5688 z+3092 z^{2}-248 z^{3}+93 z^{4})
\notag \\
&+c\,(2484-4968 z+4412 z^{2}-1928 z^{3}+723 z^{4})],\notag \\
G_{4444}(z) &= \frac{c^{2}}{16}\bigg[1+z^{8}+\frac{z^{8}}{(1-z)^{8}}\bigg]+c\,\bigg[
2z^{6}+2z^{5}+\frac{279z^{4}}{140}+\frac{139z^{3}}{70}+\frac{55z^{2}}{28}+\frac{27z}{14}\notag \\
&+\frac{2}{(1-z)^{6}}-\frac{14}{(1-z)^{5}}+\frac{5879}{140(1-z)^{4}}-\frac{4897}{70(1-z)^{3}}
+\frac{9783}{140(1-z)^{2}}-\frac{585}{14(1-z)}+\frac{831}{70}\bigg]\notag \\
&+
\frac{9\,c\,(9264108+3031912 c+503031 c^{2}+16301 c^{3})}{140(2+c)(7+c)(22+5c)(114+7c)}
\frac{z^{4}(1-z+z^{2})^{2}}{(1-z)^{4}}.
\end{align}
These obey the exact crossing relations
\begin{align}
G_{3333}(z) &= G_{3333}(\tfrac{z}{z-1}) = \tfrac{z^{6}}{(1-z)^{6}}\,G_{3333}(1-z), &
G_{4444}(z) = G_{4444}(\tfrac{z}{z-1}) = \tfrac{z^{8}}{(1-z)^{8}}\,G_{4444}(1-z),\notag \\
G_{3344}(z) &= G_{3344}(\tfrac{z}{z-1}).
\end{align}
\subsection{Large $c$ analysis}
The $G$-functions in (\ref{3.14}) and (\ref{3.21}) may be expanded at large central charge
\begin{equation}
\label{3.23}
G(z) = c^{2}\,G_{0}(z)+c\,G_{1}(z) + \mathcal O(c^{0}).
\end{equation}
The $\mathcal O(c^{2})$ contributions are obvious: they come from {\em disconnected} contributions where two pairs of fields
fuse into the identity. The next-to-leading $\mathcal O(c)$ terms, {\em i.e.} $G_{1}(z)$,
display a certain regularity and structural similarity. Finally, the NNLO contributions $\mathcal O(c^{0})$
appear to be more involved, but for our present purposes we are interested in the leading and next-to-leading terms.
\medskip
\noindent
It is useful to analyze the small $z$ expansion of the $G$ functions in terms of conformal blocks
at large $c$. The data we want to reproduce are summarized by the following expansions,
where we add an algebra superscript for clarity:
\begin{align}
\label{3.24}
G_{3333}^{\mathcal W_{3}}(z) &= c^2\, (\tfrac{1}{9}+\tfrac{2}{9}z^{6}+\tfrac{2}{3}z^{7}
+\tfrac{7}{3}z^{8}+\cdots)+c\, (2 z^2+2 z^3+\tfrac{9}{5}z^4+\tfrac{8}{5} z^5
+\tfrac{37}{5} z^6+\tfrac{96}{5} z^7+39 z^8+\cdots)+\mathcal O(c^{0}), \notag \\
G_{3333}^{\mathcal W_{4}}(z) &= c^2 \, (\tfrac{1}{9}+\tfrac{2}{9}z^{6}+\tfrac{2}{3}z^{7}
+\tfrac{7}{3}z^{8}+\cdots)+c\, (2 z^2+2 z^3+\tfrac{11}{3}z^4+\tfrac{16}{3} z^5+13 z^6
+\tfrac{80}{3}z^{7}+\tfrac{145}{3}z^{8}+\cdots)+\mathcal O(c^{0}), \notag \\
G_{3344}^{\mathcal W_{4}}(z) &= \tfrac{c^2}{12}+c (2 z^2+2 z^3+\tfrac{6}{5} z^4+\tfrac{2}{5} z^5
+\tfrac{10}{3} z^6+10 z^7+\tfrac{109}{5} z^8+\cdots)+\mathcal O(c^{0}), \notag \\
G_{4444}^{\mathcal W_{4}}(z) &= c^2\, (\tfrac{1}{16}+\tfrac{1}{8}z^{8}+\cdots)+c\,
(2 z^2+2 z^3+\tfrac{279}{140}z^{4}+\tfrac{139}{70}z^5+\tfrac{55}{28}z^6+\tfrac{27}{14} z^7
+\tfrac{831}{70}z^8+\cdots)+\mathcal O(c^{0}).
\end{align}
In order to match (\ref{3.24}) and the general representation (\ref{3.2}), we need
the large $c$ expansion of Virasoro blocks. Let us focus on the case
\begin{equation}
\Delta_{1}=\Delta_{2}=\Delta,\qquad \Delta_{3}=\Delta_{4}=\Delta'.
\end{equation}
A systematic expansion of the Virasoro conformal block at large $c$ and fixed dimensions $\bm{\Delta}$, $\Delta_{p}$
has been computed in \cite{Fitzpatrick:2016mtp,Hikida:2017ehf,Hikida:2018dxe}. The conformal block can be written
\begin{align}
\label{3.26}
\mathcal F(\Delta, \Delta', \Delta_{p}; z) &= z^{\Delta_{p}}\,\bigg[\, F_{0}(\Delta_{p})
+\frac{1}{c}\,F_{1}(\Delta, \Delta' ; \Delta_{p})
+\cdots\bigg],\notag \\
F_{0}(\Delta_{p}) &= {_{2}}F_{1}(\Delta_{p}, \Delta_{p}, 2\,\Delta_{p}; z), \notag \\
F_{1}(\Delta, \Delta'; \Delta_{p}) &= 12\,\bigg[\mathsf{f}_{a}(\Delta_{p})\,\Delta\,\Delta'
+\mathsf{f}_{b}(\Delta_{p})\,(\Delta+\Delta')+\mathsf{f}_{c}(\Delta_{p})
\bigg],
\end{align}
where the explicit functions $\mathsf{f}_{a,b,c}(\Delta_{p})$ may be found in convenient form in \cite{Bombini:2018jrg}.
In particular, for the vacuum block $\Delta_{p}=0$ one has simply
\begin{equation}
\label{3.27}
F_{1}(\Delta, \Delta'; 0) = -12\,\Delta\,\Delta'\,\bigg[2+\frac{2-z}{z}\log(1-z)\bigg].
\end{equation}
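The small-$z$ coefficients of this vacuum correction are used repeatedly below. As an independent check (a Python sketch with exact rationals, not part of the derivation), one can expand (3.27) for $\Delta=\Delta'=3$ and recover the coefficients quoted later in (3.33):

```python
from fractions import Fraction as F

N = 9  # keep coefficients of z^0 .. z^8

# log(1-z) = -sum_{k>=1} z^k/k, kept one order beyond N for the 1/z shift
log1mz = [F(0)] + [F(-1, k) for k in range(1, N + 1)]
shifted = log1mz[1:N + 1]                    # series of log(1-z)/z

# bracket = 2 + (2 - z)/z * log(1-z)
bracket = [2 * shifted[i] - (shifted[i - 1] if i else F(0)) for i in range(N)]
bracket[0] += 2
F1_330 = [-108 * b for b in bracket]         # F_1(3,3;0) = -12*3*3*[...]

assert F1_330 == [0, 0, 18, 18, F(81, 5), F(72, 5), F(90, 7), F(81, 7), F(21, 2)]
```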
Let us now discuss the various cases in (\ref{3.24}) from this perspective and by means of these tools.
\subsubsection{The $\mathcal W_{3}$ case}
We begin with a finite $c$ analysis. The simple fusion algebra (\ref{3.12}) implies that $G^{\mathcal W_{3}}_{3333}(z)$
starts with the vacuum
block, {\em cf.} (\ref{3.2}), and continues with other primary contributions that belong to the
regular part of the OPE. So we expect
\begin{equation}
\label{3.28}
G^{\mathcal W_{3}}_{3333}(z) = \frac{c^{2}}{9}\,\mathcal F(\{3,3,3,3\}, 0; z)+\text{other primary contributions}.
\end{equation}
The first primary is $\Phi_{6}= (Q_{3}Q_{3})+\cdots$ and has dimension 6, {\em cf.} (\ref{A.3}).
The explicit form of this primary, normalized in order to have unit 2-point
function, is
\begin{align}
\Phi_{6} &= \frac{3}{2} \sqrt{\frac{(2 c-1) (5 c+22) (7 c+68)}{c (c+2) (c+23) (5
c-4) (7 c+114)}}\,\bigg[
(Q_{3}Q_{3})
+\frac{1504-2 c (67 c+178)}{(2 c-1) (5 c+22) (7 c+68)}\, (T''\,T)\notag \\
& -\frac{c (225 c+1978)+776 }{2 (2 c-1) (5 c+22) (7 c+68)}\,(T'\,T')
-\frac{16 (191 c+22) }{3 (2 c-1) (5 c+22) (7 c+68)}\,(TTT)\notag \\
& -\frac{(c-8) [5 c (c+12)+4]}{6 (2 c-1) (5 c+22) (7 c+68)}\,T^{(4)}
\bigg].
\end{align}
This is fully consistent with (\ref{3.2}). Indeed, from (\ref{3.14}) we can write
\begin{equation}
G^{\mathcal W_{3}}_{3333}(z) = \frac{c^{2}}{9}\bigg[\mathsf{F}_{0}(z)+\frac{18}{c}\,\mathsf{F}_{2}(z)
+\frac{4608}{5\,c\,(22+5\,c)}\,\mathsf{F}_{4}(z)+\frac{9710+2189\,c+70\,c^{2}}{7\,c\,(22+5\,c)}\,
\mathsf{F}_{6}(z)+\mathcal O(z^{8})\bigg],
\end{equation}
where $\mathsf{F}_{q}(z)=z^{q}\,_{2}F_{1}(q,q,2q; z)$. Comparing
the coefficients of the hypergeometric functions with (\ref{3.5}) at $\Delta_{1}=\Delta_{3}=3$
we see that we can continue (\ref{3.28}) as
\begin{equation}
\label{3.31}
G^{\mathcal W_{3}}_{3333}(z) = \frac{c^{2}}{9}\,\mathcal F(\{3,3,3,3\}, 0; z)+
\frac{4 \,c\,(c+2) (c+23) (5 c-4) (7 c+114)}{9(2 c-1) (5 c+22) (7 c+68)}\,\mathcal F(\{3,3,3,3\}, 6; z)+\mathcal O(z^{8}).
\end{equation}
The coefficients are in agreement with (\ref{3.2}) taking into account that
$\langle Q_{3}(z_{1})Q_{3}(z_{2})\rangle = \frac{c}{3\,z_{12}^{6}}$, and that the regular part
of the OPE $Q_{3}(z)Q_{3}(0)$ starts with $(Q_{3}Q_{3})+\cdots$ .
\medskip
\noindent
The large $c$ limit of (\ref{3.31}) may be computed by expanding both the
coefficients and the conformal blocks. This gives
\begin{equation}
\label{3.32}
G^{\mathcal W_{3}}_{3333}(z) = \frac{c^{2}}{9}\,\bigg[1+\frac{1}{c}\,F_{1}(3,3; 0)+\cdots\bigg]
+\bigg[\frac{2c^{2}}{9}+\frac{209\,c}{35}+\cdots\bigg]\,z^{6}\,
\bigg[F_{0}(6)+\frac{1}{c}\,F_{1}(3,3; 6)+\cdots\bigg]+\mathcal O(z^{8}).
\end{equation}
Using the explicit expressions
\begin{align}
\label{3.33}
F_{1}&(3,3; 0) = 108 \left[-\frac{(2-z) \log (1-z)}{z}-2\right] =
18 z^2+18 z^3+\frac{81 z^4}{5}+\frac{72 z^5}{5}+\frac{90
z^6}{7}+\frac{81 z^7}{7}+\frac{21 z^8}{2}+\cdots, \notag \\
F_{1}&(3,3; 6) = \frac{997920}{z^{11}} (z-2) (z^4-28 z^3+154 z^2-252 z+126)
\text{Li}_2(z)\notag \\
&-\frac{36}{5 z^{11}} (z-2) (113207 z^4-2634800 z^3+13715240
z^2-22160880 z+11080440) \log (1-z)\notag \\
& +\frac{99792}{z^{12}}
(8 z^6-306 z^5+2835 z^4-10640 z^3+18900 z^2-15876 z+5082) \log
^2(1-z)\notag \\
& -\frac{12}{z^{10}} (157999
z^4-2511894 z^3+10520958 z^2-16018128 z+8009064) \notag \\
&= \frac{5832 z^2}{169}+\frac{23328 z^3}{169}+\frac{556767
z^4}{1690}+\frac{206847 z^5}{338}+\frac{955529751 \
z^6}{976820}+\frac{345575934 z^7}{244205}+\cdots\ ,
\end{align}
one indeed checks that (\ref{3.32}) reads
\begin{equation}
G^{\mathcal W_{3}}_{3333}(z) = c^2 [\tfrac{1}{9}+\tfrac{2}{9} z^6+\tfrac{2}{3} z^7+\mathcal O(z^8)]
+c \,[2 z^2+2
z^3+\tfrac{9}{5} z^4+\tfrac{8}{5} z^5+\tfrac{37}{5} z^6+\tfrac{96}{5} z^7+\mathcal O(z^8)]+\mathcal O(c^{0}),
\end{equation}
in agreement with (\ref{3.24}). The $\mathcal O(c)$ contributions have a very non-trivial
origin: they depend on the $\mathcal O(c)$ term of the complicated coefficient in (\ref{3.31}). Besides, comparing the
$\mathcal O(z^{8})$ term in $G(z)$ with the representation truncated to the two terms in (\ref{3.31}), one sees that there are
contributions associated with the dimension 8 primary built from $Q_{3}$ and $T$.
\paragraph{Exploiting the analytic structure} At this point, let us make a simple but useful remark. At order
$\mathcal O(z^{5})$, the full contribution to $G^{\mathcal W_{3}}_{3333, 1}(z)$
comes from, {\em cf.} (\ref{3.32}),
\begin{equation}
\label{3.35}
G^{\mathcal W_{3}}_{3333, 1}(z) = \frac{1}{9}F_{1}(3,3; 0) +\mathcal O(z^{6}) =
2z^{2}+2z^{3}+\tfrac{9}{5}z^{4}+\tfrac{8}{5}z^{5}+\mathcal O(z^{6}).
\end{equation}
On the other hand, from crossing symmetry and the meromorphic structure of correlators, we can
write
\begin{equation}
\label{3.36}
G^{\mathcal W_{3}}_{3333, 1}(z) = P^{\mathcal W_{3}}(z)+P^{\mathcal W_{3}}(\tfrac{z}{z-1}),
\qquad P^{\mathcal W_{3}}(z) = 2\,z^{4}+2\,z^{3}+k_{1}\,z^{2}+k_{2}\,z.
\end{equation}
The second crossing condition $G^{\mathcal W_{3}}_{3333, 1}(z)=(\tfrac{z}{1-z})^{6}\,
G^{\mathcal W_{3}}_{3333, 1}(1-z)$ determines $k_{2} = 2\,(k_{1}-1)$. Thus, $P^{\mathcal W_{3}}(z)$ has only one free parameter. This
appears in the small $z$ expansion of (\ref{3.36})
\begin{equation}
G^{\mathcal W_{3}}_{3333, 1}(z) = 2\,z^{2}+2\,z^{3}+k_{1}\,z^{4}+2\,(k_{1}-1)\,z^{5}+\mathcal O(z^{6}).
\end{equation}
Comparing with (\ref{3.35}) we fix $k_{1}=\frac{9}{5}$ and $P^{\mathcal W_{3}}(z)$ is determined.
\medskip
\noindent
In summary, it has been possible to compute $G^{\mathcal W_{3}}_{3333, 1}(z)$ by using only the expression for
$F_{1}(3,3; 0)$ and some analytic constraints from the meromorphic structure of the correlators. The representation
(\ref{3.36}) fully captures the exact $\mathcal O(c)$ term in (\ref{3.14}).
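The trick is easily automated. The sketch below (exact rational arithmetic in Python; the helper `crossing_sum` is ours, not from the text) expands $P(z)+P(\tfrac{z}{z-1})$ and reproduces the series (3.35), including the higher $\mathcal W_{3}$ coefficients listed in (3.24):

```python
from fractions import Fraction as F
from math import comb

N = 8  # keep powers z^0 .. z^7

def crossing_sum(P):
    """Series of P(z) + P(z/(z-1)) for P(z) = sum_k P[k] z^k (P[0] = 0)."""
    out = [F(0)] * N
    for k in range(1, len(P)):
        if k < N:
            out[k] += P[k]
        for m in range(k, N):
            # z/(z-1) = -z/(1-z), so [z^m] (z/(z-1))^k = (-1)^k binom(m-1, k-1)
            out[m] += P[k] * (-1)**k * comb(m - 1, k - 1)
    return out

k1 = F(9, 5)
P = [F(0), 2*(k1 - 1), k1, F(2), F(2)]       # eq. (3.36) with k2 = 2(k1 - 1)
assert crossing_sum(P) == [0, 0, 2, 2, k1, 2*(k1 - 1), F(37, 5), F(96, 5)]
```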
\subsubsection{The $\mathcal W_{4}$ case}
Let us begin with $G^{\mathcal W_{4}}_{3333,1}$. Now, the starting point (\ref{3.35}) is not enough because
the fusion (\ref{3.16}) implies a contribution to (\ref{3.2}) from $Q_{3}\times Q_{3}\to Q_{4}$ at order $\mathcal O(z^{4})$.
However, this is governed by $\gamma^{2}= \frac{112}{15}+ \mathcal O(c^{-1})$ at large $c$. This means
\begin{equation}
\label{3.38}
G^{\mathcal W_{4}}_{3333, 1}(z) = \tfrac{1}{9}F_{1}(3,3; 0) +\tfrac{1}{4}\,\tfrac{112}{15}F_{0}(4)+\mathcal O(z^{6}) =
2z^{2}+2z^{3}+\tfrac{11}{3}z^{4}+\tfrac{16}{3}z^{5}+\mathcal O(z^{6}).
\end{equation}
Matching this to the representation (\ref{3.36}) and imposing crossing under $z\to 1-z$
is enough to completely determine
\begin{equation}
\label{3.39}
G^{\mathcal W_{4}}_{3333, 1}(z) = P^{\mathcal W_{4}}_{3333}(z)
+P^{\mathcal W_{4}}_{3333}(\tfrac{z}{z-1}),\qquad P^{\mathcal W_{4}}_{3333}(z) = 2\,z^{4}+2\,z^{3}+\tfrac{11}{3}\,z^{2}+
\tfrac{16}{3}\,z.
\end{equation}
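Again this can be checked by series expansion; the following sketch (same exact-arithmetic approach, assuming nothing beyond the Python standard library) confirms that (3.39) reproduces the $\mathcal W_{4}$ expansion in (3.24) through order $z^{7}$:

```python
from fractions import Fraction as F
from math import comb

N = 8

def crossing_sum(P):
    """Series of P(z) + P(z/(z-1)), with P(z) = sum_k P[k] z^k."""
    out = [F(0)] * N
    for k in range(1, len(P)):
        if k < N:
            out[k] += P[k]
        for m in range(k, N):
            out[m] += P[k] * (-1)**k * comb(m - 1, k - 1)
    return out

P = [F(0), F(16, 3), F(11, 3), F(2), F(2)]   # eq. (3.39)
assert crossing_sum(P) == [0, 0, 2, 2, F(11, 3), F(16, 3), 13, F(80, 3)]
```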
The same strategy may be applied to $\langle Q_{3}Q_{3}Q_{4}Q_{4}\rangle$.
The identity exchange now requires, {\em cf.} (\ref{3.27}),
\begin{equation}
F_{1}(3,4; 0) = \tfrac{4}{3}\,F_{1}(3,3; 0).
\end{equation}
Besides, the s-channel exchange of $Q_{4}$
gets a contribution proportional to
\begin{equation}
\gamma\mu = -\frac{12}{5}+\mathcal O(c^{-1}).
\end{equation}
Thus we predict (the factor $\frac{1}{s}=\frac{1}{4}$ is due to the normalization of the 2-point function of
$Q_{4}$)
\begin{equation}
\label{3.42}
G^{\mathcal W_{4}}_{3344, 1}(z) = \tfrac{1}{12}F_{1}(3,4; 0) -\tfrac{1}{4}\,\tfrac{12}{5}F_{0}(4)+\mathcal O(z^{6}) =
2z^{2}+2z^{3}+\tfrac{6}{5}z^{4}+\tfrac{2}{5}z^{5}+\mathcal O(z^{6}).
\end{equation}
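This block sum is easy to verify order by order. A hedged sketch (the closed-form coefficient of $F_{1}$ used here is derived from (3.27); the helper names are ours):

```python
from fractions import Fraction as F

N = 6  # keep powers z^0 .. z^5

def hyp(a, b, c):
    """Truncated Taylor series of 2F1(a,b;c;z)."""
    out, term = [], F(1)
    for k in range(N):
        out.append(term)
        term = term * (a + k) * (b + k)
        term /= (c + k) * (k + 1)
    return out

def F1_vac(d1, d2):
    """Series of F_1(d1,d2;0): coefficient of z^i is 12 d1 d2 (i-1)/(i(i+1))."""
    return [F(12 * d1 * d2 * (i - 1), i * (i + 1)) if i >= 2 else F(0)
            for i in range(N)]

G = [x / 12 for x in F1_vac(3, 4)]       # identity exchange
F04 = hyp(4, 4, 8)                       # F_0(4) = z^4 2F1(4,4;8;z)
for i in range(4, N):
    G[i] -= F(1, 4) * F(12, 5) * F04[i - 4]   # Q_4 exchange, mu*gamma -> -12/5

assert G == [0, 0, 2, 2, F(6, 5), F(2, 5)]
```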
This is not enough to determine the manifestly crossing invariant polynomial representation because we have
less symmetry than in the previous case of 4-point functions with equal $\Delta$'s. However, the fusion $Q_{4}\times Q_{4}\to \Phi_{6}$
has subleading coefficient $\lambda = \frac{225}{14c}+\cdots$ . Taking into account the normalization from the
three point function $\langle Q_{3}Q_{3}(Q_{3}Q_{3})\rangle$, this gives the improved version of (\ref{3.42})
\begin{align}
G^{\mathcal W_{4}}_{3344, 1}(z) &= \tfrac{1}{12}F_{1}(3,4; 0) -\tfrac{1}{4}\,\tfrac{12}{5}F_{0}(4)
+2\times (\tfrac{1}{3})^{2}\,\tfrac{225}{14}\,F_{0}(6)
+\mathcal O(z^{8}) \notag \\
& = 2z^{2}+2z^{3}+\tfrac{6}{5}z^{4}+\tfrac{2}{5}z^{5}+\tfrac{10}{3}z^{6}+10z^{7}+\mathcal O(z^{8}),
\end{align}
and this is enough to obtain the representation
\begin{equation}
\label{3.44}
G^{\mathcal W_{4}}_{3344, 1}(z) =
P^{\mathcal W_{4}}_{3344}(z)+P^{\mathcal W_{4}}_{3344}(\tfrac{z}{z-1}),\qquad
P^{\mathcal W_{4}}_{3344}(z) = 2\,z+2\,z^{2}+
\tfrac{28}{15}\,z^{3}+
\tfrac{7}{5}\,z^{4}.
\end{equation}
Alternatively, one can combine the s-channel expansion (\ref{3.42}), with the t-channel expansion $3\times 4\to 3 + \cdots$.
This gives \footnote{In (\ref{3.45}), the factor $\frac{3}{4}\gamma$ is the $Q_{3}Q_{4}\to Q_{3}$ fusion coefficient, see
(\ref{3.16}). The factor $\frac{1}{3}$ is the inverse spin of the exchanged field, again due to normalization of the two
point functions. Finally, the $\pm 1$ shift in the $_{2}F_{1}$ arguments are the conformal dimension difference, see (\ref{3.3}).}
\begin{equation}
\label{3.45}
z^{7}\,G^{\mathcal W_{4}}_{3344, 1}(z^{-1}) = \left(\tfrac{3}{4}\right)^{2}\,
\tfrac{112}{15}\,\tfrac{1}{3}\,z^{3}\,_{2}F_{1}(3+1,3-1,6,z)+\mathcal O(z^{5})
= \tfrac{7}{5}\,z^{3}+\tfrac{28}{15}\,z^{4}+\mathcal O(z^{5}),
\end{equation}
and the combination of (\ref{3.42}) and (\ref{3.45}) fully determines the polynomial $P^{\mathcal W_{4}}_{3344}(z)$, in agreement with (\ref{3.44}).
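One can also check directly that (3.44) reproduces the full $\mathcal W_{4}$ series for $G_{3344}$ in (3.24) through order $z^{8}$; a short exact-arithmetic sketch (helper names ours):

```python
from fractions import Fraction as F
from math import comb

N = 9

def crossing_sum(P):
    """Series of P(z) + P(z/(z-1)), with P(z) = sum_k P[k] z^k."""
    out = [F(0)] * N
    for k in range(1, len(P)):
        if k < N:
            out[k] += P[k]
        for m in range(k, N):
            out[m] += P[k] * (-1)**k * comb(m - 1, k - 1)
    return out

P = [F(0), F(2), F(2), F(28, 15), F(7, 5)]   # eq. (3.44)
assert crossing_sum(P) == [0, 0, 2, 2, F(6, 5), F(2, 5), F(10, 3), 10, F(109, 5)]
```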
Finally, the $\langle Q_{4}Q_{4}Q_{4}Q_{4}\rangle$ 4-point function is rather simple. Its polynomial
representation obeying all crossing constraints is
\begin{align}
G^{\mathcal W_{4}}_{4444, 1}(z) &= P^{\mathcal W_{4}}_{4444}(z)+P^{\mathcal W_{4}}_{4444}(\tfrac{z}{z-1}), \notag \\
P^{\mathcal W_{4}}_{4444}(z) &= 2\,z^{6}+2\,z^{5}+ k\,z^{4}+2\,(k-1)\,z^{3}+(5\,k-8)\,z^{2}+2\,(5k-9)\,z.
\end{align}
Thus, we just need to determine $k$, which appears already at order $\mathcal O(z^{4})$
\begin{equation}
G^{\mathcal W_{4}}_{4444, 1}(z) = 2z^{2}+2z^{3}+k\,z^{4}+2(k-1)\,z^{5}+(5k-8)\,z^{6}+\mathcal O(z^{7}).
\end{equation}
On the other hand, using $\mu^{2} = \frac{27}{35}+\mathcal O(c^{-1})$, we can certainly write
\begin{align}
\label{3.48}
G^{\mathcal W_{4}}_{4444, 1}(z) &= \tfrac{1}{16}F_{1}(4,4; 0) +\tfrac{1}{4}\,\tfrac{27}{35}F_{0}(4)+
\mathcal O(z^{5}) = 2z^{2}+2z^{3}+\tfrac{279}{140}z^{4}+\cdots.
\end{align}
This fixes $k=\frac{279}{140}$ and determines
\begin{equation}
P^{\mathcal W_{4}}_{4444}(z) = 2 z^6+2 z^5+\tfrac{279}{140} z^4+\tfrac{139}{70}z^{3}+\tfrac{55}{28} z^2+\tfrac{27}{14} z.
\end{equation}
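As a final consistency check (exact rational arithmetic, helper names ours), the polynomial with $k=\frac{279}{140}$ reproduces the $\mathcal W_{4}$ series for $G_{4444}$ quoted in (3.24) through order $z^{8}$:

```python
from fractions import Fraction as F
from math import comb

N = 9

def crossing_sum(P):
    """Series of P(z) + P(z/(z-1)), with P(z) = sum_k P[k] z^k."""
    out = [F(0)] * N
    for k in range(1, len(P)):
        if k < N:
            out[k] += P[k]
        for m in range(k, N):
            out[m] += P[k] * (-1)**k * comb(m - 1, k - 1)
    return out

k = F(279, 140)
P = [F(0), 2*(5*k - 9), 5*k - 8, 2*(k - 1), k, F(2), F(2)]   # eq. (3.46)
assert crossing_sum(P) == [0, 0, 2, 2, k, 2*(k - 1),
                           F(55, 28), F(27, 14), F(831, 70)]
```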
\subsection{Computing the 4-point functions in $\mathcal W_{n}$}
We have analyzed the $\mathcal W_{3}$ and $\mathcal W_{4}$ cases to understand
what is the origin of the $c\to \infty$ subleading contribution to the 4-point functions of spin 3 and 4 generators.
This is important to generalize the derivation to $\mathcal W_{n}$.
We have shown that the diagonal 4-point functions $\langle Q_{3}Q_{3}Q_{3}Q_{3}\rangle$ and
$\langle Q_{4}Q_{4}Q_{4}Q_{4}\rangle$
may be computed at order $\mathcal O(c)$
in terms of the large $c$ expansion of the couplings $\gamma, \mu$
in the $Q_{3}Q_{3}$ and $Q_{4}Q_{4}$ OPEs. Other primaries may be present in the
$Q_{4}Q_{4}$ OPE, but they do not enter our method of calculation.
Instead, in the mixed 4-point function $\langle Q_{3}Q_{3}Q_{4}Q_{4}\rangle$ we needed more information, and in particular
the primary structure at dimension 6, including the coupling $\lambda$. Nevertheless we have seen that
by combining the conformal block expansions in the $s$- and $t$-channels, these problems
can be overcome.
\medskip
\noindent
The above considerations are enough to compute the 4-point functions of spin 3 and 4 in the
extended $\mathcal W_{n}$ algebra. To this aim, we
just require the $n$-dependent values of the couplings $\gamma\to \gamma_{n}$ and $\mu\to \mu_{n}$.
These have been computed
in \cite{Hornfeck:1992he} (see also \cite{Hornfeck:1993kp,Blumenhagen:1994wg})
based on the free field representation derived in \cite{Fateev:1987zh}.
In our notation, we have the following couplings in $\mathcal W_{n}$ \footnote{
In a more modern perspective, the couplings in (\ref{3.50}) should be thought of as a special limit
of the structure constants of the quantum algebra $\mathcal W_{\infty}[\nu]$ when $\nu=n$, see also
\cite{Linshaw:2017tvv}.
They
are known to obey a remarkable triality symmetry with respect to the $\nu$ parameter \cite{Gaberdiel:2012ku}.
}
\begin{align}
\label{3.50}
(\gamma_{n})^{2} &= 64\,\frac{n-3}{n-2}\,\frac{c+2}{5c+22}\,
\frac{c\,(n+3)+2\,(4n+3)\,(n-1)}{c\,(n+2)+(3n+2)\,(n-1)},\notag \\
\mu_{n}\,\gamma_{n} &= \frac{48}{n-2}\,
\frac{c^{2}(n^{2}-19)+3c(6n^{3}-25n^{2}+15)+2(n-1)(6n^{2}-41n-41)}
{(5c+22)[c(n+2)+(3n+2)(n-1)]}.
\end{align}
In particular, expanding at large central charge,
\begin{equation}
(\gamma_{n})^{2} = \frac{64}{5}\,\frac{n^{2}-9}{n^{2}-4}+
\mathcal O(c^{-1}),\qquad
(\mu_{n})^{2} = \frac{36}{5}\,\frac{(n^{2}-19)^{2}}{(n^{2}-4)(n^{2}-9)}+\mathcal O(c^{-1}).
\end{equation}
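These limits follow from the leading large-$c$ behavior of (3.50). The sketch below (exact rationals; function names are ours) encodes the limits and verifies the consistency relation $(\mu_{n})^{2}=(\mu_{n}\gamma_{n})^{2}/(\gamma_{n})^{2}$ for several $n$, together with the $\mathcal W_{4}$ values used above:

```python
from fractions import Fraction as F

def gamma2_inf(n):
    # large-c limit of (gamma_n)^2 from eq. (3.50)
    return F(64 * (n - 3) * (n + 3), 5 * (n - 2) * (n + 2))

def mugamma_inf(n):
    # large-c limit of mu_n * gamma_n from eq. (3.50)
    return F(48 * (n * n - 19), 5 * (n - 2) * (n + 2))

for n in range(4, 8):
    # consistency with eq. (3.51): mu^2 = (mu*gamma)^2 / gamma^2
    assert mugamma_inf(n) ** 2 / gamma2_inf(n) == \
        F(36 * (n * n - 19) ** 2, 5 * (n * n - 4) * (n * n - 9))

# W_4 values used in the text
assert gamma2_inf(4) == F(112, 15)
assert mugamma_inf(4) == F(-12, 5)
assert mugamma_inf(4) ** 2 / gamma2_inf(4) == F(27, 35)
```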
The same calculation we did in the $\mathcal W_{4}$ case, {\em cf.} (\ref{3.38}) and (\ref{3.48}),
gives now the general $\langle Q_{3}Q_{3}Q_{3}Q_{3}\rangle$ 4-point function (at order $\mathcal O(c)$) in terms of the
polynomial
\begin{align}
\label{3.52}
& P^{\mathcal W_{n}}_{3333}(z) = 2z^{4}+2z^{3}+
\frac{5 n^2-36}{(n-2) (n+2)} z^2+\frac{8 (n^2-8)}{(n-2) (n+2)}z.
\end{align}
It reads
\begin{align}
\label{3.53}
G_{3333,1}^{\mathcal W_{n}}(z) &= P^{\mathcal W_{n}}_{3333}(z)+P^{\mathcal W_{n}}_{3333}(\tfrac{z}{z-1}) =
\frac{1}{n^{2}-4}\frac{z^{2}}{(1-z)^{4}}\,\bigg[
2\,(n^{2}-4)\,(1-3z-3z^{5}+z^{6})\notag \\
& +(9n^{2}-52)\,z^{2}(1+z^{2})-8\,(n^{2}-8)\,z^{3}\bigg].
\end{align}
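The equality between the polynomial representation (3.52) and the closed rational form (3.53) can be tested point-wise in exact arithmetic; a short sketch (function names are ours):

```python
from fractions import Fraction as F

def G3333_poly(z, n):
    """P(z) + P(z/(z-1)) with P from eq. (3.52)."""
    def P(t):
        return (2*t**4 + 2*t**3
                + F(5*n*n - 36, (n - 2)*(n + 2)) * t**2
                + F(8*(n*n - 8), (n - 2)*(n + 2)) * t)
    return P(z) + P(z / (z - 1))

def G3333_closed(z, n):
    """Closed rational form of eq. (3.53)."""
    m = n * n - 4
    return (F(1, m) * z**2 / (1 - z)**4
            * (2*m*(1 - 3*z - 3*z**5 + z**6)
               + (9*n*n - 52) * z**2 * (1 + z**2)
               - 8*(n*n - 8) * z**3))

for n in (3, 4, 5, 6):
    for z in (F(1, 3), F(2, 7)):
        assert G3333_poly(z, n) == G3333_closed(z, n)
```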
The mixed 4-point function $\langle Q_{3}Q_{3}Q_{4}Q_{4}\rangle$
is determined by the polynomial
\begin{equation}
P^{\mathcal W_{n}}_{\rm mix}(z) =
\frac{12}{5}\,\frac{n^{2}-9}{n^{2}-4} z^{4}
+\frac{16}{5}\,\frac{n^{2}-9}{n^{2}-4} z^{3}
+\frac{7n^{2}-88}{n^{2}-4} z^{2}+
12\,\frac{n^{2}-14}{n^{2}-4} z,
\end{equation}
and reads
\begin{align}
\label{3.55}
& G_{3344,1}^{\mathcal W_{n}}(z) = P^{\mathcal W_{n}}_{\rm mix}(z)+P^{\mathcal W_{n}}_{\rm mix}(\tfrac{z}{z-1}) =
\frac{1}{5\,(n^{2}-4)}\frac{z^{2}}{(1-z)^{4}}\, \\
&\qquad \bigg[10\,(n^{2}-4)(1-3z) +(41n^{2}-344)z^{2}-8(4n^{2}-61)z^{3}
+(43n^{2}-512)z^{4}+4(n^{2}-9)z^{5}(-8+3z)
\bigg].\notag
\end{align}
The general $\langle Q_{4}Q_{4}Q_{4}Q_{4}\rangle$ 4-point function turns out to be expressed in terms
of the polynomial
\begin{align}
P^{\mathcal W_{n}}_{4444}(z) &=
2 z^6
+2 z^5
+\frac{9 (2 n^4-51 n^2+397)}{5 (n^2-9) (n^2-4)} z^4
+\frac{2 (13 n^4-394 n^2+3393) }{5 (n^2-9) (n^2-4)} z^{3}\notag \\
& \ \ \
+\frac{5 (2 n^4-71 n^2+657) }{(n^2-9) (n^2-4)} z^{2}
+\frac{18 (n^2-19)^2 }{(n^2-9) (n^2-4)} z,
\end{align}
and reads \footnote{
We remark that the final expressions (\ref{3.53}, \ref{3.55}, \ref{3.57}) have a finite limit for $n\to \infty$.}
\begin{align}
\label{3.57}
G_{4444,1}^{\mathcal W_{n}}(z) &= P^{\mathcal W_{n}}_{4444}(z)+P^{\mathcal W_{n}}_{4444}(\tfrac{z}{z-1}) =
\frac{1}{5\,(n^{2}-4)(n^{2}-9)}\frac{z^{2}(1-z+z^{2})^{2}}{(1-z)^{6}}\,\notag \\
&\bigg[10\,(n^{2}-4)(n^{2}-9)(1-3z-3z^{5}+z^{6}) +9\,(397-51n^{2}+2n^{4})\,z^{2}\,(1+z^{2})
\notag \\
& +2\,(-2673+134n^{2}+7n^{4})\,z^{3}\bigg].
\end{align}
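At $n=4$ the coefficients of $P^{\mathcal W_{n}}_{4444}$ reduce to the $\mathcal W_{4}$ values found above, and the one-parameter crossing pattern of (3.46) persists for every $n$; a quick exact-arithmetic check (function name ours):

```python
from fractions import Fraction as F

def P4444_coeffs(n):
    """(z^4, z^3, z^2, z) coefficients of P_4444 in the W_n result."""
    d = (n*n - 9) * (n*n - 4)
    return (F(9 * (2*n**4 - 51*n*n + 397), 5*d),
            F(2 * (13*n**4 - 394*n*n + 3393), 5*d),
            F(5 * (2*n**4 - 71*n*n + 657), d),
            F(18 * (n*n - 19)**2, d))

# n = 4 reduces to the W_4 polynomial found earlier
assert P4444_coeffs(4) == (F(279, 140), F(139, 70), F(55, 28), F(27, 14))

# the one-parameter pattern of eq. (3.46) holds for all n
for n in range(4, 9):
    k, a3, a2, a1 = P4444_coeffs(n)
    assert (a3, a2, a1) == (2*(k - 1), 5*k - 8, 2*(5*k - 9))
```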
\section{Matching the two sides of the correspondence}
\label{sec:final}
We now have all the ingredients needed to compare the small $\beta$ limit on AdS with the
large $c$ limit on the CFT side.
\subsection{Field/generators normalization}
The matching is based on the correspondence
\begin{equation}
\varphi_{s} \to \kappa_{s}\,Q_{s}.
\end{equation}
The constant $\kappa_{2}$ is somewhat special since $Q_{2}\equiv T$ is the stress-energy tensor. We can fix
$\kappa_{2}$ as in \cite{Ouyang:2019xdd} by considering the $A_{1}$ Toda theory, {\em i.e.} Liouville theory.
The Lagrangian for the $\Delta=2$ field $\varphi_{2}$ is
\begin{equation}
\mathcal L= \frac{1}{2}\partial^{\mu}\varphi_{2}\partial_{\mu}\varphi_{2}+ \varphi_{2}^2+\frac{2}{3}\,\beta\,
\varphi_{2}^3+\frac{1}{3} \beta ^2 \varphi_{2}^4+\cdots\, .
\end{equation}
At leading order, the constant $\kappa_{2}$ is fixed by looking at the two point function. On AdS, we have -- for our
normalization of the bulk-to-boundary propagator --
\begin{equation}
\llangle \varphi_{2}(t)\varphi_{2}(0)\rrangle = \frac{1+\mathcal O(\beta^{2})}{t^{4}},
\end{equation}
and also
\begin{equation}
\llangle \varphi_{2}(t)\varphi_{2}(0)\rrangle = \kappa_{2}^{2}\,\langle T(t) T(0)\rangle = \frac{\kappa_{2}^{2}\,c}{2}
\,\frac{1}{t^{4}}.
\end{equation}
Hence (see later for the sign),
\begin{equation}
\label{4.5}
\kappa_{2} = -\sqrt\frac{2}{c}\,[1+\mathcal O(\beta^{2})].
\end{equation}
To find a relation connecting $\beta$ with $c$, we need connected diagrams.
The associated (Witten) Feynman rules are (the minus signs come from $e^{-S}$)
\begin{center}
\begin{tikzpicture}[line width=1 pt, scale=0.6]
\draw (0:1)--(0,0);
\draw (120:1)--(0,0);
\draw (-120:1)--(0,0);
\node[right] at (0:1) {$= -4\,\beta,$};
\end{tikzpicture}
\hskip 2cm
\begin{tikzpicture}[line width=1 pt, scale=0.6]
\draw (45:1)--(0,0);
\draw (135:1)--(0,0);
\draw (225:1)--(0,0);
\draw (315:1)--(0,0);
\node[right] at (0:1) {$= -8\,\beta^{2}.$};
\end{tikzpicture}
\end{center}
Using (\ref{2.13}) we can compute the 3-point function
\begin{equation}
\mathcal C_{2}^{-3/2}\,\llangle \varphi_{2}(t_{1})\varphi_{2}(t_{2})\varphi_{2}(t_{3})\rrangle = (-4\beta)\times \frac{3\,\pi}{8}\,
\frac{1}{t_{12}^{2}\,t_{13}^{2}\,t_{23}^{2}}\,[1+\mathcal O(\beta^{2})].
\end{equation}
From (\ref{3.10}) we have \footnote{This is the standard $6/\beta^{2}$ multiplied by $2\pi$, due to the
missing $1/(2\pi)$ in the action compared with the standard Liouville action.}
\begin{equation}
\label{4.7}
-\frac{3\pi}{2}\,\beta\,[1+\mathcal O(\beta^{2})] = c\,\kappa_{2}^{3}\,\mathcal C_{2}^{-3/2}\ \to \ c = \frac{12\pi}{\beta^{2}}
+\mathcal O(\beta^{0}).
\end{equation}
As a further check, we move to the connected 4-point function. This is given by the diagrams in Fig.~\ref{fig:A1}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[line width=1 pt, scale=0.5, rotate=0]
\coordinate (A1) at (140:2);
\coordinate (A2) at (-140:2);
\coordinate (A3) at (40:2);
\coordinate (A4) at (-40:2);
\coordinate (M1) at (-1,0); \coordinate (M2) at (1,0);
\draw[densely dashed,line width=0.5pt] (0,0) circle (2);
\draw (A1)--(M1)--(A2); \draw (A3)--(M2)--(A4); \draw (M1)--(M2);
\draw[fill=black] (A1) circle (0.12); \node[left] at (A1) {$2$};
\draw[fill=black] (A2) circle (0.12); \node[left] at (A2) {$2$};
\draw[fill=black] (A3) circle (0.12); \node[right] at (A3) {$2$};
\draw[fill=black] (A4) circle (0.12); \node[right] at (A4) {$2$};
\node[above] at (0,0) {$2$};
\node at (3,0) {$+$};
\end{tikzpicture}
\begin{tikzpicture}[line width=1 pt, scale=0.5, rotate=0]
\coordinate (A1) at (140:2);
\coordinate (A2) at (-140:2);
\coordinate (A3) at (40:2);
\coordinate (A4) at (-40:2);
\coordinate (M1) at (0,0.8); \coordinate (M2) at (0,-0.8);
\draw[densely dashed,line width=0.5pt] (0,0) circle (2);
\draw (A1)--(M1)--(A3); \draw (A2)--(M2)--(A4); \draw (M1)--(M2);
\draw[fill=black] (A1) circle (0.12); \node[left] at (A1) {$2$};
\draw[fill=black] (A2) circle (0.12); \node[left] at (A2) {$2$};
\draw[fill=black] (A3) circle (0.12); \node[right] at (A3) {$2$};
\draw[fill=black] (A4) circle (0.12); \node[right] at (A4) {$2$};
\node[left] at (0,0) {$2$};
\node at (4,0) {+ \scriptsize u-channel};
\end{tikzpicture}
\begin{tikzpicture}[line width=1 pt, scale=0.5, rotate=0]
\coordinate (A1) at (140:2);
\coordinate (A2) at (-140:2);
\coordinate (A3) at (40:2);
\coordinate (A4) at (-40:2);
\coordinate (M1) at (0,0.8); \coordinate (M2) at (0,-0.8);
\draw[densely dashed,line width=0.5pt] (0,0) circle (2);
\draw (A1)--(0,0)--(A3); \draw (A2)--(0,0)--(A4);
\draw[fill=black] (A1) circle (0.12); \node[left] at (A1) {$2$};
\draw[fill=black] (A2) circle (0.12); \node[left] at (A2) {$2$};
\draw[fill=black] (A3) circle (0.12); \node[right] at (A3) {$2$};
\draw[fill=black] (A4) circle (0.12); \node[right] at (A4) {$2$};
\end{tikzpicture}
\caption{Tree diagrams contributing to $\llangle\varphi_{2}\varphi_{2}\varphi_{2}\varphi_{2}\rrangle$.
The external points and the internal exchanges are labeled by their $\Delta$.
}\label{fig:A1}
\end{figure}
Their sum is
\begin{align}
\label{4.8}
\mathcal C_{2}^{-2}\,\llangle & \varphi_{2}(t_{1})\cdots\varphi_{2}(t_{4})\rrangle_{\rm conn} = 16\,\beta^{2}\,(
W^{s}_{2222; 2}+W^{t}_{2222; 2}+W^{u}_{2222; 2} -\tfrac{1}{2}\,D_{2222}) \notag \\
&= 16\,\beta^{2}\,(
\tfrac{1}{4\,t_{12}^{2}}D_{1122}+\tfrac{1}{4\,t_{13}^{2}}D_{1212}+
\tfrac{1}{4\,t_{14}^{2}}D_{1221}-\tfrac{1}{2}\,D_{2222}) \notag \\
&= \tfrac{1}{t_{12}^{4}t_{34}^{4}}\tfrac{3\pi\beta^{2}}{2}\,\chi^{2}\,[\overline{D}_{1122}+\chi^{2}\,(
\overline{D}_{1212}+\overline{D}_{1221}-5\overline{D}_{2222})]
= \frac{1}{t_{12}^{4}t_{34}^{4}}\frac{3\,\pi\,\beta^{2}\,\chi^{2}\,(1-\chi+\chi^{2})}{2\,(1-\chi)^{2}}.
\end{align}
This can be written -- let us add explicitly the higher order corrections --
\begin{equation}
\label{4.9}
\mathcal C_{2}^{-2}\,\llangle \varphi_{2}(t_{1})\cdots\varphi_{2}(t_{4})\rrangle_{\rm conn} = \frac{3\pi\,\beta^{2}}{4}\,\bigg(
\frac{1}{t_{12}^{2}\,t_{23}^{2}\,t_{34}^{2}\,t_{14}^{2}}
+\frac{1}{t_{13}^{2}\,t_{24}^{2}\,t_{14}^{2}\,t_{23}^{2}}
+\frac{1}{t_{12}^{2}\,t_{24}^{2}\,t_{34}^{2}\,t_{13}^{2}}
\bigg)+\mathcal O(\beta^{4}),
\end{equation}
and is consistent with the above identifications because the coefficient of the $\langle TTTT\rangle$ correlator is then
predicted to be
\begin{equation}
\kappa_{2}^{-4}\mathcal C_{2}^{2}\,\frac{3\pi\,\beta^{2}}{4}\,[1+\mathcal O(\beta^{2})] = c,
\end{equation}
in agreement with (\ref{3.10}). \footnote{We choose $\kappa_{2}<0$ in order to have $\beta>0$.
Notice also that the disconnected diagrams give the $\mathcal O(c^{2})$ first term in (\ref{3.9}).}
Of course, the relation found in (\ref{4.7}) between $c$ and $\beta$, {\em i.e.}
\begin{equation}
\label{4.12}
c = \frac{12\,\pi}{\beta^{2}}+\mathcal O(\beta^{0}),
\end{equation}
should be considered as the leading order term at small $\beta$. The central charge of the $A_{n}$ Toda theory
is $c_{n}=n[1+(n+1)(n+2)(b+b^{-1})^{2}]$ where $b$ is proportional to $\beta$. With our conventions, {\em i.e.} requiring (\ref{4.12})
to hold at leading order for all $n$, this means that we could expect the following exact AdS/CFT map
between the coupling $\beta$ and the central charge $c$
\begin{equation}
c = n+12\,\pi\,\bigg[\frac{1}{\beta}+\frac{n(n+1)(n+2)}{12\,\pi}\,\beta\bigg]^{2} =
\frac{12\pi}{\beta^{2}}+n\,(2n^{2}+6n+5)+\frac{n^{2}(n+1)^{2}(n+2)^{2}}{12\,\pi}\,\beta^{2}+\cdots.
\end{equation}
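The quoted expansion of the proposed map can be checked symbolically; the short \texttt{sympy} sketch below verifies that expanding the square reproduces the series term by term (the square terminates, so the agreement is exact):

```python
import sympy as sp

beta, n = sp.symbols('beta n', positive=True)

# Proposed exact AdS/CFT map between the coupling beta and the central charge c
k = n*(n + 1)*(n + 2)/(12*sp.pi)
c_exact = n + 12*sp.pi*(1/beta + k*beta)**2

# Series quoted in the text
c_series = 12*sp.pi/beta**2 + n*(2*n**2 + 6*n + 5) \
    + n**2*(n + 1)**2*(n + 2)**2*beta**2/(12*sp.pi)

# The two expressions agree identically
assert sp.simplify(sp.expand(c_exact - c_series)) == 0
```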
As recalled in the Introduction, the subleading $\mathcal O(\beta^{0})$ term has been tested for the Liouville $A_{1}$
case in \cite{Beccaria:2019stp}; see also \cite{BHT} for the $A_{2}$ theory and other generalizations complementary to
this analysis.
\paragraph{Normalization of higher spin $s\ge 3$ duals}
If we assume that $\varphi_{s}=\kappa_{s}\,Q_{s}$ where $Q_{s}$ is a spin $s$ generator in a certain
Virasoro extension $\mathcal W(2, \dots, s, \dots)$, then the same analysis of the 2-point function and the
relation $\langle Q_{s}Q_{s}\rangle = \frac{c}{s}\frac{1}{t^{2s}}$, gives
\begin{equation}
\label{4.14}
\kappa_{s} = -\sqrt\frac{s}{c}+\mathcal O(\beta^{2}).
\end{equation}
This implies a constraint on the vertices of the form $V=\beta\,g_{2ss}\,\varphi_{2}\varphi_{s}^{2}$.
The associated Feynman rule is
\begin{center}
\begin{tikzpicture}[line width=1 pt, scale=0.6]
\draw (0:1)--(0,0); \node[right] at (0:1) {2};
\draw (120:1)--(0,0); \node[left] at (120:1) {s};
\draw (-120:1)--(0,0); \node[left] at (-120:1) {s};
\node[right=0.3cm] at (0:1) {$= -2\,\beta\,g_{2ss},$};
\end{tikzpicture}
\end{center}
Then, from (\ref{2.13}), we obtain
\begin{equation}
\mathcal C_{2}^{-1/2}\,\mathcal C_{s}^{-1}\,
\llangle \varphi_{2}(t_{1})\varphi_{s}(t_{2})\varphi_{s}(t_{3})\rrangle = (-2\beta g_{2ss})\times
\frac{\sqrt\pi\,\Gamma(s+\frac{1}{2})}{2\,(s-1)\,\Gamma(s)}\,
\frac{1}{t_{12}^{2}\,t_{13}^{2}\,t_{23}^{2s-2}}\,(1+\mathcal O(\beta^{2})).
\end{equation}
On the other hand,
\begin{equation}
\mathcal C_{2}^{-1/2}\,\mathcal C_{s}^{-1}\,\llangle \varphi_{2}(t_{1})\varphi_{s}(t_{2})\varphi_{s}(t_{3})\rrangle =
\mathcal C_{2}^{-1/2}\,\mathcal C_{s}^{-1}\,
\kappa_{2}\kappa_{s}^{2}\,\langle T\,Q_{s}\,Q_{s}\rangle = \mathcal C_{2}^{-1/2}\,\mathcal C_{s}^{-1}\,
\frac{\kappa_{2}\kappa_{s}^{2}\,c}
{t_{12}^{2}\,t_{13}^{2}\,t_{23}^{2s-2}}.
\end{equation}
From (\ref{4.5}) this gives for $s>2$ \footnote{The case $s=2$ is special because of the extra permutation
symmetry between the three $\varphi_{2}$ fields. In this case, the coefficient $\kappa$ is that in (\ref{4.5}).}
\begin{equation}
\label{4.17}
\kappa_{s}^{2} =\frac{g_{2ss}}{c} \frac{2\,\sqrt\pi\,\Gamma(s+\frac{1}{2})}{(s-1)\,\Gamma(s)}\,\mathcal C_{s}
= \frac{g_{2ss}}{c}\,\frac{1}{s-1}\,(1+\mathcal O(\beta^{2})).
\end{equation}
Using (\ref{4.14}), we find
\begin{equation}
g_{2ss} = s\,(s-1),
\end{equation}
consistently with the explicit values in (\ref{2.8}),
$g_{233}=6$ and $g_{244}=12$.
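As a sanity check, the closed form above reproduces the two explicit couplings just quoted (a trivial numerical sketch):

```python
def g_2ss(s):
    # cubic coupling g_{2ss} = s(s-1), fixed by matching the 2- and 3-point functions
    return s * (s - 1)

assert g_2ss(3) == 6    # matches g_{233} = 6
assert g_2ss(4) == 12   # matches g_{244} = 12
```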
\subsection{Matching the 4-point functions involving $\Delta=3,4$}
First of all, let us notice that the $\mathcal O(c^{2})$ terms in the CFT results (\ref{3.21}) immediately match
the disconnected Witten diagrams where the four points on the boundary are connected with two boundary-to-boundary
propagators. The next correction is $\mathcal O(c)$ on the CFT side and should match the
$\mathcal O(\beta^{2}/\kappa^{4})$ connected 4-point functions in AdS. \footnote{This is correct using (\ref{4.12}) and taking
into account the $\kappa^{4}\sim 1/c^{2}$ normalizations.}
A comparison of $G_{3333}^{\rm AdS}(\chi)$ with the CFT result (\ref{3.53}) valid for the $\mathcal W_{n}$
theory, shows that we have indeed
\begin{equation}
\frac{\beta^{2}\,G_{3333}^{\rm AdS}(\chi)}{c\,G_{3333,1}^{\mathcal W_{n+1}}(\chi)} =
\frac{675\,\pi\,\beta^{2}}{256\,c}
=\mathcal C_{3}^{-2}\, (\kappa_{3})^{4},
\end{equation}
where we used (\ref{4.12}) and (\ref{4.14}), and identified $z=\chi$. Similarly,
comparing $G_{4444}^{\rm AdS}(\chi)$ with the CFT result (\ref{3.57}) we find
\begin{equation}
\frac{\beta^{2}\,G_{4444}^{\rm AdS}(\chi)}{c\,G_{4444, 1}^{\mathcal W_{n+1}}(\chi)} =
\frac{1225\,\pi\,\beta^{2}}{192\,c}=\mathcal C_{4}^{-2}\, (\kappa_{4})^{4},
\end{equation}
where the relations (\ref{4.12}) and (\ref{4.14}) were again used. Finally, for the mixed correlator,
we compare $G_{3344}^{\rm AdS}(\chi)$ with (\ref{3.55}) and have again
\begin{equation}
\frac{\beta^{2}\, G_{3344}^{\rm AdS}(\chi)}
{c\,G_{3344, 1}^{\mathcal W_{n+1}}(\chi)} = \frac{525\,\pi\,\beta^{2}}{128\,c}
= \mathcal C_{3}^{-1}\,\mathcal C_{4}^{-1}\,(\kappa_{3}\,\kappa_{4})^{2}.
\end{equation}
These relations complete the proof that the four-point functions of the $\Delta=3,4$ fields in the
general $A_{n}$ Toda theory obey the relation (\ref{1.5}).
\section*{Acknowledgments}
We are very grateful to A. A. Tseytlin, S. Giombi, and H. Jiang
for many useful discussions related to the subject of this paper.
\label{sec::Introduction}
\PARstart{A}{} standard technique for estimating sparse signals is through the formulation of an inverse problem with the $\ell_1$ norm as convex proxy for sparsity. In particular, consider the problem of estimating a signal $\vec{x} \in \mathbb{R}^n$ from a noisy observation $\vec{y} \in \mathbb{R}^n$,
\begin{align}
\vec{y} = \vec{x} + \vec{w},
\end{align}where $\vec{w}$ represents AWGN. We assume the underlying signal to be sparse with respect to an overcomplete tight frame $\vec{A} \in \mathbb{R}^{m \times n}$, $m \geq n$, which satisfies the tight frame condition, i.e.,
\begin{align}
\label{eq::Parseval frame}
\vec{A}^{T}\vec{A} = r\vec{I}, \quad r > 0.
\end{align}Using an analysis-prior, we formulate the signal denoising problem as
\begin{align}
\label{eq::Cost function}
\arg\min_{\vec{x}} \Biggl \lbrace F(\vec{x}) := \dfrac{1}{2}\|\vec{y}-\vec{x}\|_2^2 + \sum_{i = 1}^{m}\lambda_i\phi\left([\vec{Ax}]_i; a_i \right)\Biggr\rbrace,
\end{align} where $\lambda_i > 0$ are the regularization parameters, and $\phi\colon\mathbb{R}\to\mathbb{R}$ is a non-smooth sparsity inducing penalty function. The parameters $a_i$ control the non-convexity of $\phi$ in case it is non-convex. The analysis prior is used in image processing and computer vision applications \cite{Selesnick2009,Elad2007, Cai2014,Cai2010,Xie2012, Turek2014,Portilla2009}.
Commonly, the $\ell_1$ norm is used to induce sparsity, i.e., $\phi(x) = |x|$ \cite{Tropp_2006_tinfo,Chen1998}. In that case, problem \eqref{eq::Cost function} is strictly convex and the global optimum can be reliably obtained.
The $\ell_1$ norm is not the tightest envelope of sparsity \cite{Jojic2011}, and it under-estimates the non-zero values of the underlying signal \cite{Nikolova1998, Candes2008}. Non-zero values can be estimated more accurately using suitable non-convex regularizers. Non-convex regularization in an analysis model has been used for MRI reconstruction \cite{Chartrand2009}, EEG signal reconstruction \cite{Majumdar2014}, and for computer vision problems \cite{Ochs2015}. However, the use of non-convex regularizers comes at a price: the objective function is generally non-convex. Consequently, several issues arise: spurious local minima may appear, a perturbation of the input data can change the solution unpredictably, and convergence is guaranteed only to a local minimum.
In order to maintain convexity of the objective function while using non-convex regularizers, we propose to restrict the parameter $a_i$ of the non-convex regularizer $\phi$. By controlling the degree of non-convexity of the regularizer we guarantee that the total objective function $F$ is convex. This idea which dates to Blake and Zisserman \cite{Blake1987} and Nikolova \cite{Nikolova1998}, has been applied to image restoration and reconstruction \cite{Nikolova1999,Nikolova2010}, total variation denoising \cite{Selesnick2015, Lanza2015}, and wavelet denoising \cite{Selesnick2015_2}.
In this letter we provide a critical value of the parameter $a$ that ensures $F$ in \eqref{eq::Cost function} is strictly convex (even though $\phi$ is non-convex). In contrast to the above works, we consider transform-domain regularization and prove that ADMM \cite{Boyd2010a} applied to the problem \eqref{eq::Cost function} converges to the global optimum. The convergence of ADMM is guaranteed provided the augmented Lagrangian parameter $\mu$ satisfies $\mu > 1/r$.
\section{Sparse signal estimation}
\label{sec::Sparse Signal estimation}
\subsection{Non-convex Penalty Functions}
\label{subsec::Penalty Functions}
In order to induce sparsity more strongly than the $\ell_1$ norm, we use non-convex penalty functions $\phi\colon\mathbb{R}\to\mathbb{R}$ parameterized by the parameter $a \geq 0$. We make the following assumption of such penalty functions.
\newtheorem{assumption}{\bf Assumption}
\begin{assumption}
\label{theorem::assumption 1}
The non-convex penalty function $\phi\colon\mathbb{R} \to \mathbb{R}$ satisfies the following
\begin{enumerate}
\item $\phi$ is continuous on $\mathbb{R}$, twice differentiable on $\mathbb{R}\!\setminus\! \lbrace 0\rbrace$ and symmetric, i.e., $\phi(-x; a) = \phi(x; a)$
\item $\phi'(x) > 0, \forall x > 0$
\item $\phi''(x) \leq 0, \forall x > 0$
\item $\phi'(0^{+}) = 1$
\item $\inf\limits_{x\neq0}\phi''(x;a) = \phi''(0^+;a) = -a$
\item $\phi(x;0) = |x|$.
\end{enumerate}
\end{assumption}
\begin{figure}
\centering
\includegraphics[]{fig1}
\caption{ The non-differentiable rational penalty function $\phi(x;a)$ and the function $s(x;a) = \phi(x;a) - |x|$, $a = 0.4$.}
\label{fig::phi minus absolute}
\end{figure}
Since $\phi(x;0) = |x|$, the $\ell_1$ norm is recovered as a special case of the penalty function $\phi$. The parameter $a$ controls the degree of non-convexity of $\phi$. Note that the $\ell_p$ norm does not satisfy Assumption \ref{theorem::assumption 1}. The rational penalty function \cite{Geman1992},
\begin{align}
\phi(x;a) = \dfrac{|x|}{1+a|x|/2},
\end{align}the logarithmic, and the arctangent penalty functions \cite{Selesnick_2014_MSC,Candes2008} are examples that satisfy Assumption \ref{theorem::assumption 1}. The rational penalty $\phi$ for $a=0.4$ is shown in Fig.~\ref{fig::phi minus absolute}.
The proximity operator of $\phi$ \cite{Combettes2007}, $\mbox{prox}_{\phi}:\mathbb{R}\to\mathbb{R}$, is defined as
\begin{align}
\label{eq::proximal of phi}
\mbox{prox}_{\phi}(y; \lambda,a) := \arg\min_{x \in \mathbb{R}} \left\lbrace \dfrac{1}{2}(y-x)^2 + \lambda\phi(x;a) \right\rbrace.
\end{align} For $\phi(x;a)$ satisfying Assumption \ref{theorem::assumption 1}, with $a < 1/\lambda$, the proximity operator is a continuous non-linear threshold function with $\lambda$ as the threshold value, i.e., $\mbox{prox}_{\phi}(y; \lambda,a) = 0, \forall |y| < \lambda$. The proximity operator of the absolute value function is the soft-thresholding function, which maintains a constant gap from the identity function; due to this gap, the non-zero values are underestimated \cite{Fan2001}. On the other hand, non-convex penalty functions satisfying Assumption \ref{theorem::assumption 1} are specifically designed so that the corresponding threshold function approaches the identity asymptotically. Such non-convex penalty functions do not underestimate large values.
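The behaviour of the threshold function described above can be illustrated numerically. The sketch below evaluates $\mbox{prox}_{\phi}$ for the rational penalty by brute-force minimization on a dense grid rather than via a closed-form threshold function; the grid resolution and parameter values are illustrative:

```python
import numpy as np

def phi_rat(x, a):
    # rational penalty: phi(x; a) = |x| / (1 + a|x|/2)
    return np.abs(x) / (1.0 + a * np.abs(x) / 2.0)

def prox_phi(y, lam, a, grid=np.linspace(-20.0, 20.0, 400001)):
    # brute-force prox: argmin_x 0.5*(y - x)^2 + lam*phi(x; a)
    cost = 0.5 * (y - grid) ** 2 + lam * phi_rat(grid, a)
    return grid[np.argmin(cost)]

lam, a = 1.0, 0.5                          # a < 1/lam, so the prox is single-valued
assert abs(prox_phi(0.5, lam, a)) < 1e-3   # |y| < lam is thresholded to zero
x_big = prox_phi(10.0, lam, a)
assert abs(x_big - 10.0) < 0.2             # for large y the prox approaches identity
assert 10.0 - x_big < lam                  # bias smaller than the constant soft-threshold gap
```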
\subsection{Convexity Condition}
\par In order to benefit from convex optimization principles in solving \eqref{eq::Cost function}, we seek to ensure $F$ in \eqref{eq::Cost function} is convex by controlling the parameter $a_i$. For later, we note the following lemma.
\newtheorem{lemma}{\bf Lemma}
\begin{lemma}
\label{theorem::Lemma 1}
Let $\phi\colon\mathbb{R}\to\mathbb{R}$ satisfy Assumption \ref{theorem::assumption 1}. The function $s\colon\mathbb{R}\to\mathbb{R}$ defined as
\begin{align}
s(x;a):= \phi(x;a) - |x|,
\end{align}is twice continuously differentiable and concave with
\begin{align}
-a \leq s''(x;a) \leq 0.
\end{align}
\end{lemma}
\begin{IEEEproof}
Since $\phi$ and the absolute value function are twice continuously differentiable on $\mathbb{R} \setminus \lbrace 0\rbrace$, we need only show $s'(0^+) = s'(0^-)$ and $s''(0^+) = s''(0^-)$. From Assumption \ref{theorem::assumption 1}, we have $\phi'(0^+) = 1$, hence $s'(0^+) = \phi'(0^+) - 1 = 0$. Again by Assumption \ref{theorem::assumption 1} we have $\phi'(0^-) = -\phi'(0^+) = -1$, hence $s'(0^-) = \phi'(0^-) + 1 = 0$. Further, $s''(0^+) = \phi''(0^+)$ and $s''(0^-) = \phi''(0^-) = \phi''(0^+) = s''(0^+)$. Thus the function $s$ is twice continuously differentiable. The function $s$ is concave since $s''(x) = \phi''(x) \leq 0, \forall x \neq 0$. Using Assumption \ref{theorem::assumption 1} it follows that $-a \leq s''(x;a) \leq 0$.
\end{IEEEproof}
Figure~\ref{fig::phi minus absolute} displays the function $s(x;a)$, which is twice continuously differentiable even though the penalty function $\phi$ is not differentiable. The following theorem states the critical value of parameter $a_i$ to ensure the convexity of $F$ in \eqref{eq::Cost function}.
\newtheorem{corollary}{\bf Corollary}
\newtheorem{theorem}{\bf Theorem}
\begin{theorem}
\label{theorem::Theorem 1}
Let $\phi(x; a)$ be a non-convex penalty function satisfying Assumption \ref{theorem::assumption 1} and $\vec{A}$ be a transform satisfying $\vec{A}^{T}\vec{A} = r\vec{I}$, $r>0$. The function $F:\mathbb{R}^n \to \mathbb{R}$ defined in \eqref{eq::Cost function} is strictly convex if
\begin{align}
\label{eq::convexity condition}
0 \leq a_i < \dfrac{1}{r\lambda_i}.
\end{align}
\end{theorem}
\begin{IEEEproof}
Consider the function $G: \mathbb{R}^n \to \mathbb{R}$ defined as
\begin{align}
\label{eq::G}
G(\vec{x}) := \dfrac{1}{2}\|\vec{y}-\vec{x}\|_2^2 + \sum_{i=1}^{m}\lambda_i s([\vec{Ax}]_i; a_i).
\end{align} Since $G$ is twice continuously differentiable (using Lemma \ref{theorem::Lemma 1}), the Hessian of $G$ is given by
\begin{align}
\nabla^2 G(\vec{x}) = \vec{I} + \vec{A}^{T}\mbox{diag}\left(\lambda_1d_1, \hdots, \lambda_md_m \right)\vec{A},
\end{align}where $d_i = s''\left( [\vec{Ax}]_i;a_i\right)$. Using \eqref{eq::Parseval frame}, we write the Hessian as
\begin{align}
\nabla^2 G(\vec{x}) &= \vec{A}^{T}\left(\dfrac{1}{r}\vec{I} + \mbox{diag}(\lambda_1d_1,\hdots,\lambda_md_m) \right) \vec{A} \\
&= \vec{A}^{T}\mbox{diag}\left(\dfrac{1}{r} + \lambda_1d_1,\hdots, \dfrac{1}{r}+\lambda_md_m \right) \vec{A}.
\end{align}The transform $\vec{A}$ has full column rank, from \eqref{eq::Parseval frame}, hence $\nabla^2 G(\vec{x})$ is positive definite if
\begin{align}
\label{eq::condition diagonal}
\dfrac{1}{r} + \lambda_id_i > 0, \quad i = 1,\hdots,m.
\end{align} Thus, $\nabla^2 G(\vec{x})$ is positive definite if
\begin{align}
s''([\vec{Ax}]_i;a_i) > -\dfrac{1}{r\lambda_i}.
\end{align} Using Lemma \ref{theorem::Lemma 1}, we obtain the critical value of $a_i$ to ensure the convexity of $G$, i.e.,
\begin{align}
0 \leq a_i < \dfrac{1}{r\lambda_i}.
\end{align}It is straightforward that
\begin{align}
F(\vec{x}) &= G(\vec{x}) + \sum_{i=1}^{m}\lambda_i|[\vec{Ax}]_i|.
\end{align} Thus, being a sum of a strictly convex function and a convex function, $F$ is strictly convex.
\end{IEEEproof}
Note that if $a_i > 1/(r\lambda_i)$, then the function $G(\vec{x})$ is not convex, as the Hessian of $G(\vec{x})$ is not positive definite. As a result, $1/(r\lambda_i)$ is the critical value of $a_i$ to ensure the convexity of the function $F$. The following corollary provides a convexity condition for the situation where the same regularization parameter is applied to all coefficients.
\begin{corollary}
\label{theorem::corollary 1}
For $\lambda_i = \lambda, i = 1,\hdots,m$, the function $F$ in \eqref{eq::Cost function} is strictly convex if $0 \leq a_i < 1/(r\lambda).$ \hfill $\square$
\end{corollary}
\par We illustrate the convexity condition using a simple example with $n=2$. We set
\begin{align}
\vec{A}^{T} = \left[\begin{array}{cccc}
1 & 1 & 1 & 1\\
1 & 1 & -1 &-1
\end{array} \right], \quad \vec{A}^{T}\vec{A} = 4\vec{I},
\end{align}and $\lambda_1=\lambda_2=1$. Theorem \ref{theorem::Theorem 1} states that the function $G$ defined in \eqref{eq::G} is strictly convex for $a_i < 1/4$; it is convex for $a_i \leq 1/4$ and non-convex for $a_i > 1/4$.
\begin{figure}
\centering
\includegraphics[scale = 0.95]{fig2}
\caption{Surface plots of the rational penalty function and the function $G$, for two different values of $a$. }
\label{fig::convexity condition}
\end{figure}
\noindent It can be seen in Fig.~\ref{fig::convexity condition} that the function $G$ is convex for $a_i = 0.25$, even though the penalty function is not convex. However, when $a_i > 0.25$, the function $G$ (hence $F$) is non-convex.
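The transition can also be verified numerically. At $\vec{x} = 0$, where $s''(0;a) = -a$ attains its infimum, the Hessian of $G$ for the example above reduces to $(1 - ra)\vec{I}$; the sketch below checks the sign of its smallest eigenvalue on either side of the critical value $a = 1/4$ (the matrix and parameter values follow the example):

```python
import numpy as np

A = np.array([[1, 1],
              [1, 1],
              [1, -1],
              [1, -1]], dtype=float)   # satisfies A^T A = 4 I, so r = 4
lam = np.ones(4)                       # lambda_1 = ... = lambda_4 = 1

def hessian_G_at_zero(a):
    # Hessian of G at x = 0, the worst case, where s''(0; a) = -a
    D = np.diag(lam * (-a))
    return np.eye(2) + A.T @ D @ A

assert np.allclose(A.T @ A, 4.0 * np.eye(2))
# a = 0.2 < 1/4: positive definite, G strictly convex
assert np.linalg.eigvalsh(hessian_G_at_zero(0.2)).min() > 0
# a = 0.3 > 1/4: a negative eigenvalue appears, G non-convex
assert np.linalg.eigvalsh(hessian_G_at_zero(0.3)).min() < 0
```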
\section{Algorithm}
\label{sec::Algorithm}
\par A benefit of ensuring convexity of the objective function is that we can utilize convex optimization approaches to obtain the solution. In particular, for $\phi(x) = |x|$, the widely used methods for solving \eqref{eq::Cost function} are proximal methods \cite{Combettes2007, Combettes2010} and ADMM \cite{Boyd2010a,Goldstein2014}.
\par The convergence of ADMM to the optimum solution is guaranteed when the functions appearing in the objective function are convex \cite{Eckstein1992}. The following theorem states that ADMM can be used to solve \eqref{eq::Cost function} with guaranteed convergence, provided the augmented Lagrangian parameter $\mu$ is appropriately set. Such a condition on $\mu$ was also given in \cite{Lanza2015}. Note that $\mu$ does not affect the solution to which ADMM converges, rather the speed at which it converges.
\begin{table}
\centering
\caption{Iterative algorithm for the solution to \eqref{eq::Cost function}.}
\begin{tabular}{@{}l@{}}
\toprule
Input: $\vec{y}$, $\lambda_i$, $r$, $a_i$, $\mu$ \\
Initialization: $\vec{u} = 0$, $\vec{d} = 0$ \\ Repeat: \\
$\qquad$ $\vec{x} \gets \dfrac{1}{1+\mu r}\left(\vec{y} + \mu\vec{A}^{T}(\vec{u}-\vec{d}) \right)$ \\
$\qquad$ $u_i \gets \mbox{prox}_{\phi}([\vec{Ax} + \vec{d}]_i; \lambda_i/\mu, a_i)$ \\
$\qquad$ $\vec{d} \gets \vec{d} - (\vec{u} - \vec{Ax})$ \\
Until convergence\\
\bottomrule
\end{tabular}
\label{table::Algorithm}
\end{table}
\begin{theorem}
Let $\phi$ satisfy Assumption \ref{theorem::assumption 1} and the transform $\vec{A}$ satisfy the Parseval frame condition \eqref{eq::Parseval frame}. Let $a_i < 1/(r\lambda_i)$. The iterative
algorithm \ref{table::Algorithm} converges to the global minimum of the function $F$ in \eqref{eq::Cost function} if
\begin{align}
\label{eq::mu condition}
\mu > 1/r.
\end{align}
\end{theorem}
\begin{IEEEproof}
We re-write the problem \eqref{eq::Cost function} using variable splitting \cite{Afonso2010} as
\begin{subequations}
\label{eq::variable split}
\begin{align}
\arg\min_\vec{u,x} &\left\lbrace \dfrac{1}{2}\|\vec{y}-\vec{x}\|_2^2 + \sum_{i=1}^{m}\lambda_i\phi\left(u_i;a_i \right) \right\rbrace \\
\mbox{s.t.} &\quad \vec{u} = \vec{Ax}.
\end{align}
\end{subequations} The minimization is separable in $\vec{x}$ and $\vec{u}$.
Applying ADMM to \eqref{eq::variable split} yields the following iterative procedure with the augmented Lagrangian parameter $\mu$.
\begin{subequations}
\label{eq::iterative procedure}
\begin{align}
\label{eq::subproblem in x}
\vec{x} &\gets \arg\min_{\vec{x}}\Biggl\lbrace \dfrac{1}{2}\|\vec{y} - \vec{x}\|_2^2 + \dfrac{\mu}{2}\|\vec{u}-\vec{Ax}-\vec{d}\|_2^2\Biggr\rbrace \\
\label{eq::subproblem in u}
\vec{u} &\gets \arg\min_{\vec{u}}\Biggl\lbrace \underbrace{\sum_{i=1}^{m}\lambda_i\phi\left(u_i; a_i \right) + \dfrac{\mu}{2}\|\vec{u}-\vec{Ax}-\vec{d}\|_2^2}_{R(u)} \Biggr\rbrace \\
\vec{d} &\gets \vec{d} - \left(\vec{u}-\vec{Ax} \right)
\end{align}
\end{subequations}
The sub-problem \eqref{eq::subproblem in x} for $\vec{x}$ can be solved explicitly as
\begin{align}
\label{eq::solution for x}
\vec{x} &= \left(\vec{I} + \mu\vec{A}^{T}\vec{A} \right)^{-1}\left(\vec{y} + \mu\vec{A}^{T}(\vec{u}-\vec{d})\right) \\
&= \dfrac{1}{1 + \mu r}\left(\vec{y} + \mu\vec{A}^{T}(\vec{u}-\vec{d}) \right),
\end{align}using \eqref{eq::Parseval frame}. The sub-problem \eqref{eq::subproblem in u} for $u$ can be solved using $\mbox{prox}_{\phi}$, provided the function $R$ is convex. Consider the function $Q\colon\mathbb{R}^m\to\mathbb{R}$ defined as
\begin{align}
\label{eq::Q}
Q(\vec{u}) := \sum_{i=1}^{m}\lambda_is(u_i;a_i) + \dfrac{\mu}{2}\|\vec{u-Ax-d}\|_2^2.
\end{align} From Lemma \ref{theorem::Lemma 1} and the proof of Theorem \ref{theorem::Theorem 1}, $\nabla^2 Q(\vec{u})$ is positive definite if
\begin{align}
s''(u_i;a_i) > \dfrac{-\mu}{\lambda_i} \quad \Rightarrow \quad \mu > a_i\lambda_i.
\end{align} Since $a_i < 1/(r\lambda_i)$, it follows that $\nabla^2Q(\vec{u})$ is positive definite if $\mu > 1/r$. Hence $Q$ is strictly convex for $\mu > 1/r$. Note that $R(\vec{u}) = Q(\vec{u}) + \sum_{i=1}^{m}\lambda_i|u_i|$. Thus, the function $R$, being the sum of a convex and a strictly convex function, is strictly convex. As such, the minimization problem in \eqref{eq::subproblem in u} is well-defined and its solution can be efficiently computed using the proximity operator of $\phi$ \eqref{eq::proximal of phi}, i.e.,
\begin{align}
\label{eq::solution for u}
u_i \gets \mbox{prox}_{\phi}\Bigl([\vec{Ax} + \vec{d}]_i; \lambda_i/\mu, a_i \Bigr).
\end{align}
Since $\vec{A}$ has full column rank, ADMM converges to a stationary point of the objective function (despite having a non-convex function in the objective) \cite{Magnusson2014, Wang2014a}; see also \cite{Li2014, Bolte2014, Hong2015}. Moreover, the function $F$ is strictly convex (by Theorem \ref{theorem::Theorem 1}) and the sub-problems of the ADMM are strictly convex for $\mu > 1/r$. As a result, the iterative procedure \eqref{eq::iterative procedure} converges to the global minimum of $F$.
\end{IEEEproof}
A globally convergent algorithm based on a different splitting is presented in \cite{Ilker2015}. In that approach, the objective function is split into two functions, both of which are convex regardless of the auxiliary parameter value. Hence, no parameter constraint is required to ensure convergence.
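As an illustration, the iterative algorithm of Table~\ref{table::Algorithm} can be implemented directly for the small tight frame of the earlier 2D example. This is a sketch, not the experimental setup of the next section: the prox step uses brute-force grid minimization instead of a closed-form threshold function, and all parameter values ($\lambda$, $a$, $\mu$, the input $\vec{y}$) are illustrative:

```python
import numpy as np

A = np.array([[1, 1], [1, 1], [1, -1], [1, -1]], dtype=float)  # A^T A = 4 I
r, lam, mu = 4.0, 0.5, 1.0    # mu > 1/r = 0.25, as required for convergence
a = 0.4                       # a < 1/(r*lam) = 0.5, so F is strictly convex
y = np.array([2.0, -1.0])     # illustrative noisy observation

def phi_rat(x):
    # rational penalty phi(x; a) = |x| / (1 + a|x|/2)
    return np.abs(x) / (1.0 + a * np.abs(x) / 2.0)

grid = np.linspace(-10.0, 10.0, 20001)
def prox(v, t):
    # brute-force prox of t*phi at v (a closed-form threshold also exists)
    return grid[np.argmin(0.5 * (v - grid) ** 2 + t * phi_rat(grid))]

def F(x):
    return 0.5 * np.sum((y - x) ** 2) + lam * np.sum(phi_rat(A @ x))

u, d = np.zeros(4), np.zeros(4)
for _ in range(200):                        # the three updates of Table I
    x = (y + mu * A.T @ (u - d)) / (1.0 + mu * r)
    u = np.array([prox(v, lam / mu) for v in A @ x + d])
    d = d - (u - A @ x)

# sanity check against a brute-force 2-D grid minimization of F
gx = np.linspace(-3.0, 3.0, 401)
F_grid = min(F(np.array([p, q])) for p in gx for q in gx)
assert F(x) <= F_grid + 1e-3
```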
\section{Examples}
\label{sec::Examples}
\subsection{1D Signal Denoising}
We consider the problem of denoising a 1D signal that is sparse with respect to the undecimated wavelet transform (UDWT) \cite{Coifman1995}, which satisfies the condition \eqref{eq::Parseval frame} with $r=1$. In particular, we use a 4-scale UDWT with three vanishing moments. The noisy signal is generated using Wavelab (http://www-stat.stanford.edu/\%7Ewavelab/) with AWGN of $\sigma = 4.0$. We set the regularization parameters $
\lambda_j = \beta\sigma 2^{-j/2}, 1 \leqslant j \leqslant 4
$. We use the same $\lambda_j$ for all the coefficients in scale $j$. The value of $\beta$ is chosen to obtain the lowest RMSE for convex and non-convex regularization respectively. To maximally induce sparsity we set $a_i = 1/\lambda_i$. For the 1D signal denoising example, we use the non-convex arctangent penalty and its corresponding threshold function \cite{Selesnick_2014_MSC}. For comparison we use reweighted $\ell_1$ minimization \cite{Candes2008}, with $\beta$ chosen in order to obtain the lowest RMSE.
\begin{figure}[t!]
\centering
\includegraphics[]{fig3}
\caption{1D denoising example. Non-convex regularization yields lower RMSE than convex regularization.}
\label{fig::1D example}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[]{fig4}
\caption{RMSE values as a function of the noise level $\sigma$ for the 1D signal denoising example.}
\label{fig::RMSE comparison}
\end{figure}
Figure~\ref{fig::1D example} shows that the denoised signal obtained using non-convex regularization has the lowest RMSE and preserves the discontinuities. Further, the peaks are less attenuated using non-convex regularization in comparison with $\ell_1$ norm regularization.
For further comparison, we generate the noisy signal in Fig.~\ref{fig::1D example} for $1 \leqslant \sigma \leqslant 4$, and denoise it with non-convex and convex regularization. We also denoise the noisy signal by direct non-linear thresholding of the noisy wavelet coefficients and by reweighted $\ell_1$ minimization. We use the same $\beta$ values as in Fig.~\ref{fig::1D example}. The value of $\beta$ for direct non-linear thresholding is also chosen to obtain the lowest RMSE. As seen in Fig.~\ref{fig::RMSE comparison}, the non-convex regularization outperforms the three methods by giving the lowest RMSE. The RMSE values are obtained by averaging over 15 realizations for each $\sigma$.
\subsection{2D Image Denoising}
We consider the problem of denoising a 2D image corrupted with AWGN. We use the 2D dual-tree complex wavelet transform (DT-CWT) \cite{Selesnick2005}, which is 4-times expansive and satisfies \eqref{eq::Parseval frame} with $r = 1$. The noisy `peppers' image has peak signal-to-noise ratio (PSNR) value of 14.6 dB. We use the same $\lambda$ for all the sub-bands. As in the previous example, we set the value of $\lambda$ for each case (convex and non-convex) as a constant multiple of $\sigma$ that gives the highest PSNR.
\begin{figure}[t!]
\centering
\includegraphics[]{fig5}
\caption{Image denoising. Wavelet artifacts are more prominent when using $\ell_1$ norm regularization.}
\label{fig::2D example}
\end{figure}
\begin{figure}
\centering
\includegraphics[]{fig6}
\caption{Relative performance of convex and non-convex regularization for image denoising. (a) PSNR as a function of $\lambda$. (b) PSNR as a function of $\sigma$.}
\label{fig::PSNR values}
\end{figure}
Figure~\ref{fig::2D example} shows that the denoised image (non-convex case) contains fewer wavelet artifacts and has a higher PSNR. Figure~\ref{fig::PSNR values}(a) shows the PSNR values (convex and non-convex) for different values of $\lambda$. To further assess the performance of tight-frame non-convex regularization, we realize several noisy `peppers' images with $10\leqslant \sigma \leqslant 100$. As in the case of the 1D signal denoising, Fig.~\ref{fig::PSNR values}(b) shows that non-convex regularization offers higher PSNR across different noise levels.
\section{Conclusion}
This letter considers the problem of signal denoising using a sparse tight-frame analysis prior. We propose the use of parameterized non-convex regularizers to maximally induce sparsity while maintaining the convexity of the total problem. The convexity of the objective function is ensured by restricting the parameter $a$ of the non-convex regularizer. We use ADMM to obtain the solution to the convex objective function (consisting of a non-convex regularizer), and guarantee its convergence to the global optimum, provided the augmented Lagrangian parameter $\mu$ satisfies $\mu > 1/r$. The proposed method outperforms the $\ell_1$ norm regularization and reweighted $\ell_1$ minimization methods for signal denoising.
The correspondence between three-dimensional topological field
theories and topological invariants of knots and links is well known
after the seminal paper of \cite{witten}.
From the amplitudes of Chern--Simons field theories it is for instance
possible to isolate invariants which are in the form of a sum of
multiple contour
integrals \cite{GMM,Labastida}. Each contour integral appearing in
invariants of this kind
may be explicitly represented as follows:
\begin{equation}
{\cal I}=\int_{a_1}^{b_1}ds_1\cdots\int_{a_n}^{b_n}ds_nf(s_1,\ldots,s_n)
\end{equation}
where $s_1,\ldots,s_n$ represent
the variables that parametrize the closed contours.
Often the integration is path-ordered, meaning that for some of the
pairs $s_i,s_j$ of variables it is required that $s_i\le s_j$.
In this work the attention will be focused on invariants of this type,
which are particular cases of the
so-called numerical knot and link invariants. The problem that will
be addressed here, formulated in much more detail in
Ref.~\cite{FFNOVA}, can be summarized
by the following fundamental question:
{\it Given a particular numerical knot or link invariant expressed as a
sum of multiple contour integrals, is it possible to find a solvable
and local topological field theory, characterized by an amplitude that
is proportional to that invariant or to a function of it?}
The question posed above is not merely theoretical. Topological field
theories
that can be associated with one particular invariant only are among the
main tools in studying the physics of knotted and linked polymer rings
\cite{edwards}.
Applications can also be found in other systems in which quasi
one-dimensional ring-shaped objects play a relevant role. This is the
case of magnetic lines on the surface of the Sun, which are heavily
entangled and give rise to complicated topological configurations
\cite{PriestForbes}.
As the observations point out, the probability of a coronal mass
ejection grows
with increasing topological complexity \cite{Cranmer,TitovDemoulin}.
An invariant associated to a topological field theory with a
particular non-semisimple group of local symmetries derived in
\cite{FFJMP} was independently studied in connection with the solar
magnetic fields in \cite{HornigMayer}. Many other
invariants can be obtained simply by choosing different
non-semisimple groups. An example of this strategy can be found in
Ref.~\cite{FFNOVA}, in which an invariant
describing the topological states of
a link composed of four knots has been derived.
While topological field theories with non-semisimple gauge groups have
been very successful in establishing a correspondence between
numerical link invariants and topological field theories, an important
issue is still left unsolved. Up to now, in fact, it has been possible to
obtain only link invariants
containing contour integrals that are not path-ordered.
Unfortunately, all the known knot invariants and most of the link
invariants that can be cast in the form of multiple contour integrals
require path-ordering.
A breakthrough toward the solution of this issue is the work of
Ref.~\cite{LealPineda}, in which the case of the triple Milnor
linking coefficient $\bar\mu(1,2,3)$ has been treated.
The only drawback of the topological field theory constructed in
\cite{LealPineda} is that some of its observables are non-local,
because they contain a
bilocal vector density. For this reason,
the theory cannot be easily applied to polymer physics and its full
gauge symmetry remains hidden.
A full gauge symmetry is however required in order to deal with the
spurious degrees of freedom due to gauge invariance.
To make the model of \cite{LealPineda}
local, we start from
a simple observation. Let us consider the path-ordered double integral
$A=\int_a^bds\int_a^sdt\,f(s,t)$. With the help of
a Heaviside $\theta$-function, $A$
can be rewritten in the form
$A=\int_a^bds\int_a^bdt\,\theta(s-t)f(s,t)$. The crucial point is that the
$\theta$-function is the propagator of the ``topological''
one-dimensional field theory
$S_\alpha=\int_{-\infty}^{+\infty}d\eta\,\alpha(\eta)\frac{d\alpha(\eta)}{d\eta}$. This
theory is topological in the sense that it is invariant under
reparametrizations
of the infinite line $\mathbb R$.
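The equivalence between the path-ordered integral and the $\theta$-weighted integral over the full square can be checked numerically. The snippet below is a quick sketch with an arbitrary test function $f(s,t)=st$ on $[0,1]^2$, whose path-ordered integral equals $1/8$:

```python
import numpy as np

# midpoint grid on [0,1]^2
n = 2000
s = (np.arange(n) + 0.5) / n
S, T = np.meshgrid(s, s, indexing="ij")

f = S * T                       # arbitrary test integrand f(s, t) = s*t
theta = (S >= T).astype(float)  # Heaviside theta(s - t)

# integral over the full square weighted by theta(s - t)
A_theta = np.sum(f * theta) / n**2
# exact path-ordered value: int_0^1 ds int_0^s dt s*t = 1/8
```

The Riemann sum over the full square, multiplied by $\theta(s-t)$, reproduces the path-ordered value up to discretization error from the cells straddling the diagonal.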
We show here that the topological field theory with non-local observables of
\cite{LealPineda} may be converted into a local one thanks to the
introduction of a suitable set of $\alpha$-fields. It has
been possible to prove that this local version is invariant under a
non-semisimple gauge group of symmetry like the theories discussed in
\cite{FFJMP, FFNOVA}. Using the results of these previous works,
in particular the fact that for gauge transformations like those
considered here the Faddeev-Popov determinant is trivial,
the
partition function of the topological field theory associated to the
triple Milnor linking coefficient is explicitly computed.
\section{Conventions}
In the following, Greek letters $\mu,\nu,\rho,\dots=1,2,3$ will be used
to denote the spatial indices on the flat three-dimensional space
${\mathbb R}^3$.
The position of a point $x$ on
${\mathbb R}^3$ will be given by specifying its Cartesian coordinates
$x^\mu$.
Latin letters $i,j,k,\ldots=1,2,3$ will be reserved
for the indices of the internal symmetries and for labeling the three
closed trajectories $P_1,P_2,P_3$.
Throughout this paper the convention of summing over repeated indices
will be followed. In cases where this is not possible,
barred indices $\bar \imath,\bar \jmath,\bar k,\ldots=1,2,3$ will be adopted.
For instance, $A^{\bar \imath}B_{\bar \imath}$ is the product of the $\bar\imath$-th
components of the vectors $\boldsymbol A$ and $\boldsymbol B$, while
their scalar product is defined as $\boldsymbol A\cdot \boldsymbol
B=A^iB_i\equiv \sum\limits_{\bar \imath=1}^3A^{\bar \imath}B_{\bar \imath}$,
where the symbol $\equiv$ denotes equivalence.
The trajectories $P_i$, $i=1,2,3$, will be represented by curves
$x^\mu_i(s)$ parametrized by means of their arc-lengths. It will be
assumed that all three loops have the same length $L$, so that $0\le
s\le L$.
For the formulation of the topological field theory presented here the two
triplets of vector fields
$A^i_\mu(x)$ and $a_{i\mu}(x)$
will be needed. In addition, we introduce the set of one-dimensional
scalar fields
$\alpha_{ij}(\eta)$, where $-\infty<\eta<+\infty$.
For future purposes we define also the following three external sources:
\begin{eqnarray}
T_i^{\mu x}&=&\oint_{P_i}dx_i^\mu\delta^{(3)}(x-x_i)=
\int_0^Lds{\dot x}_i^\mu(s)\delta^{(3)}(x-x_i(s))
\label{dist1}\\
T_i^{\{\mu x,\nu y\}}&=&\oint_{P_i}
dx_i^\mu\int_0^xdx_i^{\prime\nu}\delta^{(3)}(x-x_i)
\delta^{(3)}(y-x_i')\label{dist2}
\end{eqnarray}
and
\begin{equation}
\xi_{ij}(\eta)=\int_0^Lds\delta(\eta -s){\dot
x}_i^\mu(s)a_{j\mu}(x_i(s))\label{dist3}
\end{equation}
Finally, the expression of the triple Milnor linking coefficient $\bar
\mu(1,2,3)$~\cite{Milnor}, which is able to distinguish the
topological states of a
link composed of three loops, is provided below \cite{Leal}:
\begin{equation}
\bar \mu(1,2,3)=-\frac12\int d^3x\epsilon^{\mu\nu\rho}\tilde
a_{1\mu}\tilde a_{2\nu} \tilde
a_{3\rho}+\frac12\epsilon^{ijk}\int d^3x\int d^3y T_i^{\{\mu x,\nu y\}}
\tilde a_{j\mu}(x)\tilde a_{k\nu}(y)\label{mti}
\end{equation}
where $\epsilon^{\mu\nu\rho}$ and $\epsilon^{ijk}$ are completely
antisymmetric tensors satisfying the convention $\epsilon^{123}=1$ and
\begin{equation}
\tilde a_{i\mu}(x)=\frac {\epsilon_{\mu\nu\rho}}{4\pi}\int_0^Lds{\dot
y}_i^\rho(s) \frac{(x-y_i(s))^\nu}{|x-y_i(s)|^3}\label{fieldstilda}
\end{equation}
Using the definition of the bilocal density $T_i^{\{\mu x,\nu y\}}$ of
Eq.~(\ref{dist2}), the Milnor linking coefficient may be
explicitly expressed in terms of contour integrals over the loops $P_i$:
\begin{equation}
\bar \mu(1,2,3)=-\frac12\int d^3x\epsilon^{\mu\nu\rho}\tilde
a_{1\mu}\tilde a_{2\nu} \tilde a_{3\rho}+\frac12\sum_{\bar
\imath=1}^3\epsilon^{\bar \imath jk}\int_0^Lds{\dot x}^\mu_{\bar
\imath}(s)\int_0^sdt{\dot x}^\nu_{\bar
\imath}(t)
\tilde a_{j\mu}(x_{\bar \imath}(s))\tilde a_{k\nu}(x_{\bar
\imath}(t))\label{Milnortripleinv}
\end{equation}
It is easy to check that the quantity $\bar \mu(1,2,3)$ in
Eq.~(\ref{Milnortripleinv}) coincides up to an overall constant factor
with the quantity $S^1(1,2,3)$ appearing in Eq.~(16) of
Ref. \cite{LealPineda}.
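The fields $\tilde a_{i\mu}$ of Eq.~(\ref{fieldstilda}) are the familiar Gauss kernels: contracting $\tilde a_{j\mu}$ with the tangent of a second loop and integrating along it yields the ordinary Gauss linking number. As an aside, this can be checked numerically; the sketch below uses an arbitrary illustrative choice of two linked circles forming a Hopf link, for which the Gauss integral gives $\pm 1$.

```python
import numpy as np

n = 400
phi = 2 * np.pi * (np.arange(n) + 0.5) / n

# two circles forming a Hopf link (an arbitrary illustrative choice)
c1 = np.stack([np.cos(phi), np.sin(phi), np.zeros(n)], axis=1)
c2 = np.stack([1.0 + np.cos(phi), np.zeros(n), np.sin(phi)], axis=1)
dc1 = np.stack([-np.sin(phi), np.cos(phi), np.zeros(n)], axis=1) * (2 * np.pi / n)
dc2 = np.stack([-np.sin(phi), np.zeros(n), np.cos(phi)], axis=1) * (2 * np.pi / n)

# Gauss linking integral: (1/4pi) oint oint (x - y) . (dx x dy) / |x - y|^3
lk = 0.0
for i in range(n):
    r = c1[i] - c2                    # separations x - y, shape (n, 3)
    cross = np.cross(dc1[i], dc2)     # dx x dy against all segments of c2
    lk += np.sum(np.einsum("ij,ij->i", r, cross) / np.linalg.norm(r, axis=1) ** 3)
lk /= 4 * np.pi
# |lk| is (numerically) 1 for the Hopf link
```

Since the two curves never intersect, the integrand is smooth and periodic, and the trapezoidal discretization converges rapidly.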
\section{The topological field theory}\label{sec2}
Let us consider the topological field theory defined by the action
\begin{eqnarray}
S&=&\int d^3x
\epsilon^{\mu\nu\rho}
\left\{
4A^i_\mu\partial_\nu a_{i\rho}+\frac23\lambda\epsilon^{ijk} a_{i\mu}
a_{j\nu} a_{k\rho}
\right\}
-2\int d^3xT_i^{\mu x} A_\mu^i(x)\nonumber\\
&+&2\lambda\sum_{\bar\imath=1}^3\epsilon^{\bar \imath
jk}\int_{-\infty}^{+\infty}d\eta\left[
\frac{\alpha_{\bar \imath j}{\dot \alpha}_{\bar\imath
k}}{2}-\xi_{\bar\imath j}\alpha_{\bar\imath k}
\right]\label{action}
\end{eqnarray}
The above action is manifestly invariant under diffeomorphisms in the
${\mathbb R}^3$ space and under reparametrizations of the $\eta$
variable.
In addition, it is possible to show that $S$ is invariant under the
set of
gauge transformations:
\begin{eqnarray}
A_\mu^i(x)&\longrightarrow& A_\mu^i(x)+\partial_\mu\Omega^i(x) +\lambda\left(
\frac 12\omega_j(x)\partial_\mu\omega_k(x)+\omega_j(x) a_{k\mu}(x)
\right)\epsilon^{ijk}\label{gtone}\\
a_{i\mu}(x)&\longrightarrow&a_{i\mu}(x)+\partial_\mu\omega_i(x)\label{gttwo}\\
\alpha_{ij}(\eta)&\longrightarrow&\alpha_{ij}(\eta)-
\int_0^Lds\theta(\eta-s)\frac{d\omega_j(x_i(s))} {ds} \label{gtthree}
\end{eqnarray}
Here $\theta(\eta-s)$ is the Heaviside theta function and
the $\omega_i(x)$'s are arbitrary functions of the
point $x\in{\mathbb R}^3$. The functions $\Omega^i(x)$ may be
split into a single-valued and a multi-valued contribution as follows:
\begin{equation}
\Omega^i(x)=\Omega^i_s(x)+\Omega^i_{m1}(x)+\Omega^i_{m2}(x)\label{thetavars}
\end{equation}
The $\Omega^i_s(x)$'s, $i=1,2,3$, are single-valued
functions, while
the $\Omega^i_{m1}(x)$'s and the $\Omega^i_{m2}(x)$'s have a special
form dictated by the requirement of gauge invariance:
\begin{eqnarray}
\Omega^{\bar\imath}_{m1}(x)&=&2\lambda\epsilon^{\bar \imath
jk}\int_{x_{\bar\imath}(0)}^x dz_{\bar
\imath}^\mu(t)\left[
a_{j\mu}(z_{\bar\imath}(t))
+\frac 12\frac{\partial\omega_j(z_{\bar\imath}(t))}{\partial
z_{\bar\imath}^\mu(t)}
\right]\left[
\omega_k(z_{\bar\imath}(t))-\omega_k(z_{\bar\imath}(0))
\right]\label{theta1}\\
\Omega^{\bar\imath}_{m2}(x)&=&-\lambda\epsilon^{\bar \imath
jk}\int_{x_{\bar\imath}(0)}^x dz_{\bar
\imath}^\mu(t)\left[
a_{\mu k}(z_{\bar
\imath}(t))+\frac 12\frac{\partial\omega_k(z_{\bar\imath}(t))}{ \partial
z_{\bar\imath}(t)}
\right]\omega_j(z_{\bar\imath}(t))\label{theta2}
\end{eqnarray}
In Eqs.~(\ref{theta1}) and (\ref{theta2}) $z_i^\mu(t)$ is an
arbitrary curve joining the point $x_i(0)$ belonging to the loop $P_i$
to the generic point $x\in{\mathbb R}^3$. It is easy to check that
$\Omega^i_{m1}(x)$ and $\Omega^i_{m2}(x)$ are multi-valued
functions depending on the choice of the trajectory $z_i^\mu(t)$. We
note also that $\Omega^i_{m2}(x)$ satisfies the
identity:
\begin{equation}
\oint_{P_i}dx_i^\mu\left[
\partial_\mu\Omega^{\bar\imath}_{m2}+\lambda\epsilon^{\bar\imath
jk}\left(
\frac 12\omega_j\partial_\mu\omega_k+\omega_ja_{\mu k}
\right)
\right]=0\label{impid}
\end{equation}
In words, the above relation means that the non-linear contributions
in the $\omega_i$'s appearing in the transformation (\ref{gtone}) of
the fields $A^i_\mu(x)$ are exactly canceled by the contribution due to
$\Omega_{m2}^i(x)$ when the fields $A^i_\mu$ are integrated along the
loops $P_i$.
To prove the invariance of the action (\ref{action}) under the gauge
transformations (\ref{gtone})--(\ref{gtthree}) it is convenient to
split $S$ into four contributions:
\begin{equation}
S=S_{top}+S_\alpha+S_{source1}+S_{source2}\label{actionnewdef}
\end{equation}
where $S_{top}$ is the ``bulk action'' on ${\mathbb R}^3$
\begin{equation}
S_{top}=\int d^3x
\epsilon^{\mu\nu\rho}
\left\{
4A^i_\mu\partial_\nu a_{i\rho}+\frac23\lambda\epsilon^{ijk} a_{i\mu}
a_{j\nu} a_{k\rho}
\right\}
\end{equation}
and $S_\alpha$ is the action of the one-dimensional fields
$\alpha_{ij}(\eta)$:
\begin{equation}
S_\alpha=\lambda\sum_{\bar\imath=1}^3\epsilon^{\bar\imath
jk}\int_{-\infty}^{+\infty} d\eta \alpha_{\bar\imath
j}{\dot\alpha}_{\bar \imath k}
\end{equation}
Finally, $S_{source1}$ and $S_{source2}$ take into account the source
terms in Eq.~(\ref{action}) and may be explicitly written as follows:
\begin{eqnarray}
S_{source1}&=&-2\sum_{\bar\imath=1}^3\int_0^Lds{\dot
x}_{\bar\imath}^\mu(s)A_\mu^{\bar\imath }(x_{\bar\imath }(s)) \\
S_{source2}&=&-2\lambda\sum_{\bar\imath=1}^3\epsilon^{\bar\imath jk}\int_0^Lds{\dot
x}_{\bar\imath}^\mu(s) a_{j\mu}(x_{\bar\imath}(s))\alpha_{\bar\imath
k}(s)
\end{eqnarray}
It is possible to check that $S_{top}$ is
fully gauge invariant, i.e.:
\begin{equation}
S_{top}\longrightarrow S_{top}\label{stopinv}
\end{equation}
To prove Eq.~(\ref{stopinv})
we have used
the fact that the gauge
fields $a_{i\mu}$ and $A^i_\mu$ vanish at infinity sufficiently fast
and the identity $\epsilon^{\mu\nu\rho}\int
d^3x\Omega^i\partial_\mu\partial_\nu a_{i\rho}=0$.
This identity is verified because the spatial components of $a_{i\mu}$
are not multi-valued.
Next, we consider the combination $S_\alpha+S_{source2}$ which is not fully
invariant and
transforms as shown below:
\begin{eqnarray}
S_\alpha+S_{source2}&\longrightarrow& S_\alpha+S_{source2}+
2\lambda\sum_{\bar\imath=1}^3\epsilon^{\bar\imath jk}\int_0^Lds\Bigg[
{\dot
x}_{\bar\imath}^\mu(s)a_{j\mu}(x_{\bar\imath}(s))\nonumber\\
&+&
\frac12\frac{d\omega_j(x_{\bar\imath}(s))
}{ds}\Bigg]
\left[\omega_k(x_{\bar\imath}(s))-
\omega_k(x_{\bar\imath}(0))\right]
\label{salphasst}
\end{eqnarray}
The unwanted terms violating gauge invariance are canceled exactly by
the transformation of the term $S_{source1}$ if the functions
$\Omega^i(x)$ appearing in Eq.~(\ref{gtone})
are chosen as in
Eqs.~(\ref{thetavars})-(\ref{theta2}). In this case, in fact, a
straightforward calculation shows that:
\begin{eqnarray}
S_{source1}&\longrightarrow& S_{source1}-
2\lambda\sum_{\bar\imath=1}^3\epsilon^{\bar\imath jk}\int_0^Lds\Bigg[
{\dot
x}_{\bar\imath}^\mu(s)a_{j\mu}(x_{\bar\imath}(s))\nonumber\\
&+&
\frac12\frac{d\omega_j(x_{\bar\imath}(s))
}{ds}
\Bigg]\left[\omega_k(x_{\bar\imath}(s))-
\omega_k(x_{\bar\imath}(0))\right]\label{source1transf}
\end{eqnarray}
Eq.~(\ref{source1transf}) has been obtained by taking into account
Eq.~(\ref{impid}). Clearly, in the action $S$ of
Eq.~(\ref{actionnewdef}) the non-invariant
contributions appearing after a gauge transformation on the right-hand
side of equations (\ref{salphasst}) and (\ref{source1transf}) vanish
identically.
In conclusion, the action $S$ is gauge invariant.
\section{The Milnor invariant}
In the rest of this section we will concentrate our attention
on the partition function of the theory, which is given by:
\begin{equation}
{\cal Z}=
\int{\cal D}\alpha_{ij}{\cal D}a_{i\mu}{\cal D} A^i_\mu e^{-iS}\label{pf}
\end{equation}
The above gauge field theory requires the introduction of
a gauge fixing, like for instance the Lorentz gauge. It has been shown
in \cite{FFJMP} that nonlinear gauge transformations like those of
Eq.~(\ref{gtone}) give rise to trivial Faddeev--Popov determinants, in
which the ghosts are decoupled from the gauge fields. For this reason,
the ghost sector can be ignored.
The connection with Ref.~\cite{LealPineda} is obtained after
eliminating the one-dimensional fields $\alpha_{ik}$. After performing a
Gaussian integration, it is possible to prove the
following identity:
\begin{equation}
\int {\cal
D}\alpha_{ij}\exp\left\{-2\lambda
i\epsilon^{\bar\imath jk}\sum_{\bar\imath=1}^3
\int_{-\infty}^{+\infty}d\eta\left[
\frac{\alpha_{\bar\imath j}{\dot \alpha}_{\bar\imath
k}}2-\xi_{\bar\imath j}\alpha_{\bar \imath k}
\right] \right\}=e^{2i\lambda I}
\end{equation}
where
\begin{equation}
I=\sum_{\bar\imath=1}^3\epsilon^{\bar \imath jk}\int_0^Lds\int_0^Ldt
\theta(s-t) {\dot x}_{\bar\imath}^\mu(s)
{\dot x}_{\bar\imath}^\nu(t) a_{j\mu}(x_{\bar\imath}(s))
a_{k\nu}(x_{\bar\imath}(t) )
\end{equation}
and the Heaviside function $\theta(s-t)$ is nothing but the propagator
of the fields $\alpha_{ij}$.
The quantity $I$ can be cast in the form of a double volume integral
in ${\mathbb R}^3$:
\begin{equation}
I=\epsilon^{ijk}
\int d^3x\int d^3y\, T_i^{\{\mu x,\nu y\}}a_{j\mu}(x)a_{k\nu}(y)
\end{equation}
Thus,
the partition
function in Eq.~(\ref{pf}) may be rewritten as follows:
\begin{equation}
{\cal Z}=\int{\cal D}A_\mu^i{\cal D}a_{i\mu} e^{-iS'}\label{pfprime}
\end{equation}
where
\begin{eqnarray}
S'&=&\int d^3x\epsilon^{\mu\nu\rho}\left\{
4 A_\mu^i\partial_\nu a_{i\rho}+\frac 23\lambda\epsilon^{ijk}a_{i\mu}a_{j\nu}a_{k\rho}
\right\}\nonumber\\
&-&2\int d^3x T_i^{\mu x}A_\mu^i(x)+2\lambda\epsilon^{ijk}\int d^3x\int d^3y T_i^{\{
\mu x,\nu y\} } a_{j\mu}(x)a_{k\nu}(y)
\end{eqnarray}
In this way the topological field theory discussed in
\cite{LealPineda} has been recovered.
In the partition function (\ref{pfprime}) the gauge fields $A_\mu^i$
play the role of Lagrange multipliers imposing the condition:
\begin{equation}
\epsilon^{\mu\nu\rho}\partial_\nu a_{i\rho}(x)=\frac 12 T_i^{\mu x}\label{staeq}
\end{equation}
These fields can be integrated out, giving:
\begin{equation}
{\cal Z}=\int {\cal D}a_{i\mu}(x)e^{-iS^{\prime\prime}}
\delta\left(
4\epsilon^{\mu\nu\rho}\partial_\nu a_{i\rho}(x)-2T_i^{\mu x}
\right)
\end{equation}
where
\begin{equation}
S^{\prime\prime}=\int d^3x\frac
23\lambda\epsilon^{\mu\nu\rho}\epsilon^{ijk} a_{i\mu}
a_{j\nu}a_{k\rho}
+2\lambda\int d^3x\int d^3y \epsilon^{ijk}T_i^{\{\mu x,\nu
y\}}a_{j\mu}(x)a_{k\nu}(y)
\end{equation}
After a further integration over the fields $a_{i\mu}$, the partition
function becomes ${\cal Z}=e^{iS^{\prime\prime}}$, where
$S^{\prime\prime}$ is computed at the solutions of
Eq.~(\ref{staeq}). These solutions
are nothing but the
$\tilde a_{i\mu}$'s
provided in
Eq.~(\ref{fieldstilda}), apart from a proportionality factor $-2$.
Comparing the form of $S^{\prime\prime}$ with that of the Milnor
linking coefficient $\bar\mu(1,2,3)$ of Eq.~(\ref{mti}), it is clear
that they coincide and it is possible to conclude that:
\begin{equation}
{\cal Z}=e^{2i\lambda\bar\mu(1,2,3)}
\end{equation}
\section{Conclusions}
The topological field theory defined by the action (\ref{action}) is
invariant under diffeomorphisms on ${\mathbb R}^3$ and
reparametrizations of the variable $\eta\in\mathbb R$. As it has been
shown in Section~\ref{sec2}, it is also gauge invariant. Finally, it
is exactly solvable. Its partition function is essentially given by
the Milnor linking coefficient $\bar\mu(1,2,3)$.
This is the first
time that a local topological field theory has been constructed whose
partition function can be computed in closed form and is associated to a
topological invariant of the complexity of the Milnor linking
coefficient.
Applications of this topological field theory to polymer physics are
currently work in progress.
\section{Acknowledgments}
The support of the Polish National Center of Science,
scientific project No. N~N202~326240, is gratefully acknowledged.
\section{Introduction}
\label{intro}
The formulae by Erlang provided explicit expressions for percentages
of lost customers in certain queueing systems in the stationary regime
\cite{Erlang1917}.
Erlang models remain highly important in the modern world. However,
what is crucial for applications and what is lacking in Erlang's old
results and some further studies is knowledge of the rate of convergence
to the stationary regime. In fact, this ``extended'' Erlang problem -- with
estimated convergence rates -- is not fully solved even nowadays. For a
long time, estimates of the convergence rate (mostly of exponential
decay) were known only for the cases where service times
have exponential distributions, and under some additional assumptions,
cf. \cite{Asmussen}, \cite{Zei}, et al.
Bounds on the rates of convergence to stationary regimes for
closely related systems -- but not precisely Erlang's ones -- were a subject of study in many papers,
see below.
It is widely accepted that in practice any important characteristic of the quality of
a queueing system is computed in the stationary regime, and
it is, of course, a rare case where this characteristic is available
in a more or less explicit form, cf., for example, \cite{Pechinkin1979}.
However, if the rate of convergence
is unknown, then the error is unknown as well.
Simulation may be an alternative
to theoretical bounds; nevertheless, it cannot fully replace a
rigorous theoretical analysis.
Our main goal is to attack the general non-Markov case with
non-exponential service times for classical telephone systems.
The key system to be studied is similar to the one investigated in the 1950s by
Sevastyanov and in the following three decades by other
researchers.
This system consists of a finite (as in
Sevastyanov's works), or infinite
(as in some other works) number of servers;
the incoming flow of customers is a conditional Poisson process with
intensity that may depend on the number of customers in the system;
in particular, it may increase linearly
if this number is large. Upon arrival, each customer
goes to one of the free servers, or -- in the finite
case -- may be lost if all servers are busy. All service times are
random variables independent of each other and all have the same
distribution function. Such models, even with a finite number of servers,
usually do not satisfy the conditions of the Doeblin--Doob ergodic theorem about
{\em uniform convergence}
\cite{Doeblin}, \cite[Ch.5-6]{Doob}.
The celebrated ergodic theorem of B. A. Sevastyanov \cite{Seva} (see also \cite{Seva2}) for Markov processes in general state
spaces provided for the first time not only existence and
uniqueness of the stationary distribution for ``telephone systems'', but also
convergence in total variation.
convergence in total variation. This was a pioneering result where such
convergence is {\em non-uniform} with respect to the initial data or
distribution and does not follow directly from Doeblin--Doob's ``uniform''
ergodic theory. The corollary of Sevastyanov's ergodic
theorem for the queueing (``telephone'') model
will be briefly recalled below.
Practically simultaneously with \cite{Seva2}--\cite{Seva}, T. E. Harris \cite{Harris1956} suggested his
method to study stationary measures of recurrent Markov processes; a
presentation of his results and ideas, as well as of their further
development -- including studies of convergence rates --
may be
found in \cite{Baxendale2011}. It may be noted that one of the basic ideas of
this theory -- to exploit moments of ``regeneration'' of the process -- was
proposed in the forties in \cite{Doeblin} and further developed in
\cite{Kolmogorov} in relation to a very close issue of local limit theorems,
which may serve as a background for coupling.
A few years earlier than Sevastyanov and shortly after \cite{Kolmogorov},
Fortet proved \cite{Fortet50} that a
stationary distribution exists in ``Sevastyanov's case'' under slightly stronger
assumptions than those eventually used in \cite{Seva} (existence of a density was assumed),
along with the form of this distribution; however, he studied neither uniqueness
nor convergence. An important special part of the main result of \cite{Seva}
-- related to the property of ``insensitivity'' (see below) -- was
also rigorously obtained in \cite{Khinchin62}. The latter paper was
published only in 1963,
however, as quite reasonably suggested by the Editor
(B. V. Gnedenko)
of the volume of A. Ya. Khinchin's works
in \cite[The Editor's Introduction]{Khinchin63},
the paper was, in fact, fully prepared for publication in 1954--1955.
Earlier, the original Erlang formulae with
exponential service time distribution were extended
to systems with an infinite number of
servers \cite{Khinchin55}, and later (1965) this result was tackled by a
different method in \cite{Mejzler1965}. Among all these results, \cite{Seva}
remains the most advanced achievement
in that period.
More general systems -- with infinitely many
servers and/or with more involved disciplines of serving -- were
studied further in \cite{Falin88}, \cite{Falin89}, \cite{Ivnickii1982},
\cite{Koenig1975},
\cite{Kov1962}, \cite{Matthes1962}, \cite{Schassberger1977},
\cite{Ve1977}, \cite{Ve1981},
et al. Even quite recently, results in
this direction were still under investigation under the name of
``insensitivity'' of a stationary regime (i.e., where there are some
general invariants of a stationary distribution, which depend
on the service time distribution only through its
mean value) for advanced versions of Erlang type models in
\cite{AdanWeiss2011},
\cite{Bonald2002},
\cite{Massoulie2007},
\cite{Walton2011},
\cite{Zachary2007}.
Note that most of these papers -- with the exception of
\cite{Ivnickii1982} and \cite{Ve1981} --
do not cite two other pioneering
publications \cite{Kov1962}--\cite{Kov1962b} and none of them except
\cite{Falin88} tackles convergence rates; in the latter paper, the result
about convergence rate bounds could be called partial in comparison to our
Theorem \ref{Thm1} below.
Sevastyanov's version of ergodic theorem
\cite{Seva2}--\cite{Seva} also proved to be
useful in some extensions, in particular, in the case $N=\infty$, see
\cite{Ve1977}. For other versions of such extensions see
\cite{Matthes1962},
\cite{Schassberger1977},
et al. (regrettably, the former publication \cite{Matthes1962}
is still hardly available even nowadays).
{\em Exponential} convergence rate for infinite server
systems of Sevastyanov's type (and a little more general)
with {\em non--exponential} service time distributions may be found in
\cite{KV1997}; however, the method used there was not suitable for
weaker sub--exponential rates under weaker assumptions.
Establishing such weaker convergence rates for a wider class of
queueing systems of Sevastyanov's type is the main
goal of the paper.
It would be an extremely hard task to mention all important publications
where convergence rates for general Markov processes -- or, indeed, just for
applications to queueing models -- were studied; some of them may be found
among the references below, or in the literature provided in these references. A
very incomplete list of names of major contributors includes Kalashnikov
\cite{Kalashnikov}, \cite{Kalashnikov93}, Borovkov
\cite{Borovkov84}, \cite{Borovkov},
Tuominen and
Tweedie \cite{TT79}, \cite{Tweedie1983}, Thorisson \cite{Thor85}, \cite{Thor00}, et
al.
Results about convergence rates
close to Theorem \ref{Thm1} below for similar but slightly
different systems may be found in the fundamental monograph \cite{Thor00},
where, in particular, Theorem 7.2 establishes convergence in total
variation,
\begin{equation}\label{noest}
\varphi(t)\; \|\mu_t - \mu\|_{TV} \to 0, \quad t\to \infty.
\end{equation}
Recall that the total variation distance between two
probability measures on a measurable space $(\Omega, {\mathcal F})$ is
defined as
\[
\|\mu - \nu\|_{TV} : = 2\,\sup_{A\in {\mathcal F}}\, |\mu(A) - \nu(A)|.
\]
For certain more particular models see also \cite{Sigman92} and
\cite{TT79}.
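With the factor-2 normalization above, for measures on a countable state space the total variation distance reduces to the $\ell_1$ distance between the probability vectors (the supremum is attained on $A=\{\omega:\mu(\omega)>\nu(\omega)\}$). A minimal sketch, with Python dictionaries as an assumed representation of discrete measures:

```python
def tv_distance(mu, nu):
    # 2 * sup_A |mu(A) - nu(A)|; for discrete measures this equals
    # the l1 distance between the point masses
    states = set(mu) | set(nu)
    return sum(abs(mu.get(s, 0.0) - nu.get(s, 0.0)) for s in states)

d = tv_distance({"a": 1.0}, {"a": 0.5, "b": 0.5})  # = 1.0
```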
The background idea of the approach in \cite{Thor00}
is to use estimates of the
rate of convergence in the law of large numbers (LLN); its
implementation is involved and uses the regeneration
technique. In our model with an infinite number of servers ($N=\infty$)
it is unclear how to use the LLN directly, and we use another method (eventually
leading to the LLN, too) based on a ``markovisation'' of the
system -- due to Sevastyanov -- and on a local ``infinitesimal'' condition
in terms of the {\em service intensity} $h(t)$, which allows us to construct
Lyapunov functions. Regeneration is also used in this paper, which gives
more precise {\em bounds} for the distance in
(\ref{noest}) and continues the study of various rates of convergence and mixing for a variety of
Erlang--Sevastyanov type models commenced recently in
\cite{V2009} -- \cite{V2013}. The model in this paper is {\em
non--Markov}.
The paper is arranged as follows. Section one is the introduction.
Section two contains the setting and a brief reminder of
Sevastyanov's result. Section three is devoted to the main result
of this paper -- polynomial convergence -- Section
four to some auxiliaries, and Section five to the
proof of the main result.
\section{The setting: Erlang -- Sevastyanov system}
We consider the model with
$1\le N\le \infty$ (including $\infty$) identical servers working independently,
with a distribution function $G$ of service time. The incoming flow of
customers is conditionally Poisson with intensity
$\lambda_n$ given that $n$ servers are busy at the
moment ($0\le n\le N$). All service times on all servers are independent
of each other and of the incoming flow.
A newly arrived customer
chooses any server which is not busy and its serving immediately starts.
If $N<\infty$ and all servers are busy, then a new customer is lost or
blocked; if $N=\infty$, then under reasonable assumptions the number of
customers is finite at all times and no loss is possible.
A customer whose service is completed immediately quits the system. We assume that at any
moment $t$ the elapsed service times of all customers in the system, say,
$X^1_t,
\ldots, X^n_t$, are known; the process $X_t = (X^1_t, \ldots, X^n_t), \, t\ge 0$,
is Markov (cf. Lemma \ref{Le1} below); if there are no customers at $t$, then we denote
$X_t=\Delta_0$ (note that $X_t=0$ and $X_t=\Delta_0$ have different meanings).
At $t=0$, only finitely many servers may be busy.
Following \cite{Seva}, we assume that
a newly arrived customer is assigned a coordinate $X^k=0$
with any $k=1, \ldots, n+1$ with equal probabilities $(n+1)^{-1}$ if at his
arrival $n$ servers are busy.
The ``non-Markov property'' of this system signifies that the number $n=n_t \in
{\mathbb Z}_+$ of
customers at any time $t$ is, generally speaking, {\em not} a Markov process (of course, unless the {\em intensity} of serving depends only on $n_t$).
However, we make it Markov by considering it as $(X_t)$ in the
following extended state space ${\mathcal X}$ of a variable
dimension, as in \cite{Seva}:
${\mathcal X}$ is a union of finitely many (if $N<\infty$), or countably many
(if $N=\infty$) subspaces,
$$
{\mathcal X}_0 = \Delta_0; \qquad
{\mathcal X}_1 = R_+, \quad \ldots,
\qquad
{\mathcal X}_n = R^n_+, \quad \ldots, 0\le n\le N.
$$
To any $x\in {\mathcal X}_n$ with $n>0$ there correspond $n$ non-negative
coordinates $(x^1, \ldots, x^n)$, which signify the elapsed time
of service of any of existing $n$ customers in the system.
If there is at least one customer in the system and $x=(x^1,
\ldots, x^n)$ is a vector of the elapsed service times,
then by $n(x)$ we denote this number $n$; if $x = \Delta_0$,
then $n(x):=0$.
~
\noindent
In \cite{Seva} (see also \cite{Seva2}) it is proved that for $N<\infty$ under
the only condition
$$
q^{-1} := \int_0^\infty x \,dG(x) < \infty,
$$
there is a (unique) stationary distribution $\mu$ with a
density ($1\le k\le N$),
$$
p_k(x^1, \ldots, x^k) = p_0\frac{\prod_{i=0}^{k-1}\lambda_i}
{k!} \, \prod_{j=1}^{k} (1-G(x^j)), \quad
p_0^{-1} = \sum_{j=0}^{N}\frac{\prod_{i=0}^{j-1}\lambda_i}
{q^j\, j!},
$$
and, moreover, for any initial distribution the following convergence holds true,
\begin{equation}\label{to}
\|\mu_t - \mu\|_{TV} \to 0, \qquad t\to\infty,
\end{equation}
where $\mu_t$ is a marginal distribution of the (Markov) process at $t$;
below $\mu_t^x$ will stand for the marginal distribution given initial state $x$.
~
\noindent
{\it Remark 1.}
In \cite{Ve1977} this result was extended to $N=\infty$
under the condition $q^{-1} \limsup_{n\to\infty}\lambda_n/(n+1)<1$.
By integrating $\int p_k(x)\,dx$, we obtain the
stationary
probabilities $p_k$ of $k$ customers in the system, which depend on $G$
only through its mean value; this property is called ``insensitivity'',
$$
p_k = p_0\frac{\prod_{i=0}^{k-1}\lambda_i}
{q^{k} k!}, \quad 1\le k\le N,
$$
and it remains an object of study for various queueing models to this day.
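The insensitivity property can be checked directly: the weights $\prod_{i}\lambda_i\,q^{-k}/k!$ depend on $G$ only through its mean $q^{-1}$. A minimal numerical sketch follows; the function name, the intensities, and the mean below are arbitrary illustrative choices, and for constant $\lambda_n=\lambda$ one recovers the classical Erlang distribution.

```python
from math import factorial

def stationary_probs(lams, mean_service, N):
    """Stationary probabilities p_k, k = 0..N, for the Erlang--Sevastyanov
    loss system: p_k is proportional to (lambda_0 ... lambda_{k-1}) *
    mean_service**k / k!  (only the mean of G enters: insensitivity)."""
    w = []
    for k in range(N + 1):
        prod = 1.0
        for i in range(k):
            prod *= lams[i]
        w.append(prod * mean_service ** k / factorial(k))
    total = sum(w)
    return [wk / total for wk in w]

# constant intensities reproduce the classical Erlang formulae
p = stationary_probs([2.0] * 5, 0.5, 5)
```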
\section{Convergence rate bounds: Main result ($N=\infty$) }
We consider Erlang--Sevastyanov's system with $N=\infty$.
The {\em service intensity} $h(t)$, which
we will assume to be {\em bounded} ($h\in B(R_+)$), is defined as
$$
h(t) : = \frac{g(t)}{1-G(t)}, \quad t \ge 0, \quad \mbox{where}\;\;
g(t) = G'(t).
$$
Notice that $h \equiv \mathop{\mbox{const}}$ means that the service time
has an exponential distribution; in this case
(and in a slightly more general one) a sufficient condition for exponentially fast
convergence to the stationary distribution was established in
\cite{KV1997}.
In all cases, $\displaystyle 1-G(t)=\exp\left(-\int_0^t h(s)\,ds\right)$.
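As a hedged numerical sketch (the distributions and parameters here are our illustrative choices, not the paper's): for the exponential distribution the intensity $h$ is constant, while the Pareto-type tail $1-G(t)=(1+t)^{-c}$ gives $h(t)=c/(1+t)$, a borderline example for lower bounds of the form $h(t)\ge C_0/(1+t)$ appearing below. The identity $1-G(t)=\exp(-\int_0^t h(s)\,ds)$ can be verified numerically:

```python
import math

def tail_from_h(h, t, steps=20000):
    # compute exp(-int_0^t h(s) ds) by the trapezoidal rule
    dt = t / steps
    integral = sum(0.5 * (h(i * dt) + h((i + 1) * dt)) * dt for i in range(steps))
    return math.exp(-integral)

mu = 1.3
h_exp = lambda s: mu                 # exponential service: h is constant
c = 5.0
h_par = lambda s: c / (1.0 + s)      # Pareto-type tail: h(t) = c/(1+t)

t = 2.0
tail_exp = tail_from_h(h_exp, t)     # should match exp(-mu*t)
tail_par = tail_from_h(h_par, t)     # should match (1+t)**(-c)
```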
Let for $a,m>0$,
$$
L_{m,a}(x):=
\left(\sum_{j=1}^{n(x)}(1+x^j)^{m}\right)^{a} \;\; (x\not= \Delta_0),
\quad \& \quad L_{m,a}(\Delta_0):= 0.
$$
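A small computational sketch of $L_{m,a}$ (illustrative only; the sample state is ours) makes explicit two identities used repeatedly below, $L_{m,a}L_{m,b}=L_{m,a+b}$ and $L_{m,1}^{\,a}=L_{m,a}$, as well as the bound $n(x)\le L_{m-1,1}(x)$:

```python
def L(x, m, a):
    # L_{m,a}(x) = (sum_j (1 + x^j)^m)^a, with L_{m,a}(Delta_0) := 0
    if len(x) == 0:
        return 0.0
    return sum((1.0 + xj) ** m for xj in x) ** a

x = (0.3, 2.0, 5.5)                  # elapsed service times of n(x) = 3 customers
m, a, b = 3.0, 2.0, 1.5
prod_ok = abs(L(x, m, a) * L(x, m, b) - L(x, m, a + b)) < 1e-6 * L(x, m, a + b)
pow_ok = abs(L(x, m, 1.0) ** a - L(x, m, a)) < 1e-6 * L(x, m, a)
count_ok = len(x) <= L(x, m - 1.0, 1.0)   # each summand (1+x^j)^{m-1} >= 1
```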
To avoid triviality due to a degeneracy, we impose a condition
\begin{equation}\label{nondeg}
\lambda_0>0.
\end{equation}
\begin{Theorem}\label{Thm1}
Assume (\ref{nondeg}),
$h\in B(R_+)$
and existence of $C_0 >0$
such that
\begin{equation}\label{ash}
h(t) \ge \frac{C_0}{1+t}, \quad t\ge 0,
\end{equation}
and
\begin{equation}\label{aslambda}
C_0 - 2(1 + 2\Lambda) > 0,
\end{equation}
where
\begin{equation}\label{Lambda}
\Lambda : = \sup_{n\ge 1} \left(\frac{\lambda_n}{n}\right) < \infty.
\end{equation}
Then for any $k>0$ small enough
there exist real values $C>0$, $a>1$ and $m > 1$ such that
for any $X_0=x$,
\begin{equation}\label{main}
\|\mu^x_t - \mu \|_{TV} \le
C \,\frac{1+L_{m,a}(x)}{(1+t)^{k+1}}.
\end{equation}
For $\Lambda$ fixed, $k$ may be chosen large if $C_0$ is large enough, see
(\ref{am}) below.
\end{Theorem}
\noindent
{\em Remark 2.} Without (\ref{nondeg}) -- i.e., for $\lambda_0=0$ -- Theorem \ref{Thm1} remains valid
with the trivial stationary distribution $\delta_{\Delta_0}$, and this follows
from the proof below with minimal changes.
~
\noindent
{\em Remark 3.}
As we shall see, a substantial part of the proof requires only the slightly weaker assumption
\begin{equation}\label{aslambda0}
C_0 - (1 + \Lambda) > 0.
\end{equation}
However, at the end of the calculation the full
version of (\ref{aslambda}) will be needed. More precisely,
we will actually use
\begin{equation}\label{am}
C_0 >
(a+(k\vee 1)/m)(m + \Lambda
\,2^{a-1+(k\vee 1)/m}).
\end{equation}
The latter bound is available with {\em some} $a,m>1$, at least, for small $k>0$ -- in fact, for $k\le m$ --
under the assumption (\ref{aslambda}).
As one more example, with $m=k$, the latter sufficient condition only in terms
of $C_0, \Lambda$ and $k$ reads,
\[
C_0> 2(k+2\Lambda),
\]
as in this case there exists $a>1$ for which (\ref{am}) holds.
Also notice that large values of $k$ in (\ref{am}) are available for
$C_0$ large
enough, or for $C_0>1$ with $\Lambda$ small enough, which agrees with the intuitive
idea that stability is stronger if the service intensity is, in some
sense,
significantly greater than the arrival intensity. However, we emphasize that $C_0$
is {\em not} the service intensity itself, but only a multiplier in the lower bound
(\ref{ash}) for this function.
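The interplay between $C_0$, $\Lambda$ and $k$ in (\ref{am}) can be explored with a crude grid search (an illustrative sketch; the grid and the sample values are ours, not the paper's):

```python
def am_condition(C0, Lam, k, a, m):
    # condition (am): C0 > (a + max(k,1)/m) * (m + Lam * 2**(a - 1 + max(k,1)/m))
    e = max(k, 1.0) / m
    return C0 > (a + e) * (m + Lam * 2.0 ** (a - 1.0 + e))

def feasible(C0, Lam, k):
    # search for admissible a, m > 1 on a crude grid
    grid = [1.0 + 0.05 * i for i in range(1, 200)]
    return any(am_condition(C0, Lam, k, a, m) for a in grid for m in grid)

# a large C0 admits even a sizeable k, while a small C0 may admit no a, m > 1
big_ok = feasible(30.0, 1.0, 5.0)
small_ok = feasible(3.0, 1.0, 1.0)
```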
~
\noindent
{\em Remark 4.} Of course, the greater $C_0$, the more finite moments the service time distribution $G$ possesses. However, the method requires the existence of the intensity $h$. It would be interesting to relax the latter assumption.
\section{Construction, martingales, estimates, strong Markov property}
We will use the notation $x=(x_1, \ldots, x_n)$ and $X=(n,x)$ -- and also $E_X\equiv E_x$ -- and for any such
$x\in {\mathcal X}_n$ with $n\ge 1$ define with any $1\le j\le n$,
\[
x^{(+j)}:=(x_1, \ldots, x_j, 0, x_{j+1}, \ldots, x_n), \;\;
\& \;\;
x^{(-j)} := (x_1, \ldots, x_{j-1}, x_{j+1}, \ldots, x_n).
\]
To work with Lyapunov functions, it is very useful -- if not indispensable -- to know
that the process is {\em strong Markov}. In continuous time this is not
automatic and should be justified. For the {\em Markov} property with finite $N$, cf.
\cite{Fortet50} and \cite{Seva}.
\noindent
{\em Some preliminaries: generators and martingales.} Suppose for a while that $h\in C_b(R_+)$; a bit later we will show how to relax this assumption.
Before the next lemmas
we recall some well-known links between Markov processes and martingales,
a language which seems to be somewhat less common in queueing theory (e.g., cf. \cite{Borovkov84}).
The generator (infinitesimal operator) ${\mathcal G}$ of the process $X$ in
the space ${\mathcal X}$ with a Borel topology ${\mathcal B}$ on all
subspaces ${\mathcal X}_n$ (with a convention that ${\mathcal X}_n$ is open and closed
for each $n$) and $\sup$-norm for $C({\mathcal X}, {\mathcal B})$
is an operator ${\mathcal G}$ such that (see \cite{Dynkin})
\begin{equation}\label{gen}
\sup_{X\in {\mathcal X}}\left|\frac{E_X f(X_t)-f(X)}{t}
- {\mathcal G}f(X)\right| \to 0, \quad t\to 0,
\end{equation}
for all $f$ from the {\em domain} ${\mathcal D}_{\mathcal G}$ of ${\mathcal G}$;
determining this domain precisely is usually a hard task, and it usually suffices to
have a sufficiently wide sub-class of it.
In our case,
it follows from (9)--(10) and continuity of $h$ that for
$f\in C^1_0({\mathcal X})$ -- with one continuous derivative and compact support
-- i.e., $f(X)$ vanishes if $n\ge N$ or if $\sup_i x^i\ge N$ for some $N$
-- (\ref{gen}) holds with
\begin{eqnarray}\label{ext-gen}
{\mathcal G}f(X) =
\sum_{i=1}^{n(X)}\left(\frac{\lambda_{n(X)}}{n(X)}(f(X^{(+i)}) - f(X))
+ h(X^i) (f(X^{(-i)}) - f(X)) \right.
\nonumber \\ \\ \nonumber
\left.
+ \frac{\partial}{\partial x^i}f(X)\right)
\times 1(n(X)>0) + \lambda_0 (f(0) - f(X))\, 1(n(X)=0). \hspace{1cm}
\end{eqnarray}
By Dynkin's formula \cite[see, e.g., corollary from the formula (1.36) as $\lambda\to\infty$]{Dynkin},
\begin{equation}\label{dynkin1}
E_{X_0} f(X_{t}) - f(X_0) = E \int_0^{t}
{\mathcal G}f(X_s)\,ds
\end{equation}
for any $f\in C^1_0$. For functions $(f(t,X), \, t\ge 0, \, X\in {\mathcal X})$
of class $C^1_0$ with respect to $(t,X)$ -- which vanish for large $n(x)$
and for large $X\in {\mathcal X}_n$ for any fixed $n$ -- Dynkin's formula for the process
$(t,X_t)$ reads,
\begin{equation}\label{dynkin2}
E_{X_0} f(t, X_{t}) - f(0,X_0) = E \int_0^{t}
\left(\frac{\partial}{\partial s}\, f(s,X_s) + {\mathcal
G}f(s,X_s)\right)\,ds.
\end{equation}
Note that, at least intuitively, this equality as well as (\ref{dynkin1}) may be regarded as a
total probability formula,
since the right hand side accounts for all possible developments of the trajectory from $0$ to $t$.
In terms of martingales (cf. \cite{LSh}), (\ref{dynkin2}) is equivalent to
saying that the process
\begin{equation}\label{mart1}
M_{t}=f(t, X_{t}) - f(0,X_0) - \int_0^{t}
\left(\frac{\partial}{\partial s}\, f(s,X_s) + {\mathcal
G}f(s,X_s)\right)\,ds,\; \;t\ge 0
\end{equation}
is a martingale.
Recall for the sequel that a process is called a {\em local} martingale if there exists a sequence of
stopping times $\tau_n\to\infty$ a.s. such that the stopped process
\(
M_{t\wedge\tau_n}
\)
is a martingale for each $n$ (cf. \cite{LSh}). In turn, the statement
(\ref{mart1}) may be
equivalently (by definition) rewritten in the differential form as
\begin{equation}\label{difform}
df(t,X_t) = \left(\frac{\partial}{\partial t}\, f(t,X_t) + {\mathcal
G}f(t,X_t)\right)\,dt + dM_t
\end{equation}
for any $f\in C^1_0$ (${\mathcal G}$ is defined above). The latter formula itself
is {\em local} -- i.e., written at
any given $(t, X_t)$ and its small neighbourhood -- so it may be extended from $C^1_0$ to $C^1$,
under a natural convention that the process is stopped on the exit
from some neighbourhood of the state $X_t$; of course, this may require
a localizing stopping time procedure if using the integral form (\ref{mart1}) and possibly some justification that $M$ is a martingale (and not just a local martingale).
All of the above, starting from formula (\ref{gen}),
works well if the intensities are continuous. If continuity fails, the limit in (\ref{gen}) may simply not exist. Nevertheless, following \cite{Davis} or \cite{GK}
it is possible to define the process by using a notion of {\em extended generator}, that is, an operator for which Dynkin's formulae (\ref{dynkin1}) and (\ref{dynkin2}) hold. The action of extended generator on functions is given by the same expression in (\ref{ext-gen}).
\begin{Lemma}\label{Le1}
Under the assumptions (\ref{Lambda}) and $h\in B(R_+)$
the process $X$ exists, has a unique distribution and is Markov and strong Markov. The Dynkin's formulae (\ref{dynkin1}) and (\ref{dynkin2}) hold for any
$f(x)\in C^1_0$ and $f(t,x)\in C^1_0$.
\end{Lemma}
{\bf Proof.} Existence (for possibly discontinuous $h$) follows from the results on piecewise linear or piecewise deterministic Markov processes in \cite{GK}, \cite{Kalashnikov77}, \cite{Davis}, as do uniqueness and the Markov and strong Markov properties. Non-explosion is implied by the condition (\ref{Lambda}), for example, due to
\cite[Ch. 1.3.3]{GK}. Both Dynkin's formulae follow from \cite{Davis}. Another way to establish Dynkin's formula for a slightly different model was suggested in \cite{VZ1}.
Lemma \ref{Le1} is proved.
~
We admit the following convention for stochastic
differentials:
\[
A_t\,dt + dM_t \le B_t\,dt + dM_t \qquad \mbox{iff} \qquad
A_t \le B_t, \quad \forall \; t\ge 0, \;\;\mbox{a.s.}
\]
Recall that the process is called {\em cadlag} iff it is right
continuous with left limits at any $t$.
Note that $M_t$ in (\ref{mart1}) is cadlag because the right hand side is.
Below we use the convention
$n^{-1}\sum_{i=1}^na_i \equiv 1$ for any real values $(a_i)$ if $n=0$.
\begin{Lemma}\label{Le3}
Under the assumptions (\ref{Lambda}) and $h\in C_b(R_+)$,
\begin{eqnarray}\label{M2}
L_{m,a}(X_t) - L_{m,a}(X_0)
= \int_0^t \lambda_{n(X_s)} \left(\frac1{n(X_s)} \sum_{i=1}^{n(X_s)} (L_{m,a}(X^{(+i)}_s) -
L_{m,a}(X_s))\right) \,ds
\nonumber \\ \\ \nonumber
+ \int_0^t \left[\sum_{i=1}^{n(X_s)} h(X^i_s) \left(L_{m,a}(X^{(-i)}_s) - L_{m,a}(X_s)\right)
+ \sum_{i=1}^{n(X_s)} \frac{\partial}{\partial x^i}L_{m,a}(X_s)\right]\,ds +M_t,
\end{eqnarray}
with some martingale $M_t$.
\end{Lemma}
{\bf Proof}
follows from (\ref{difform}) with $f(t,X)\equiv L_{m,a}(X)$ (see the remark above
about extension of (\ref{difform}) to $C^1$), due to
\begin{eqnarray}\label{MM}
dL_{m,a}(X_t)
= {\mathcal G}L_{m,a}(X_{t})\,dt + dM_t
= \lambda_n \left(\frac1n \sum_{i=1}^{n} L_{m,a}(X^{(+i)}_t) -
L_{m,a}(X_t)\right) \,dt
\nonumber \\ \\ \nonumber
+ \sum_{i=1}^n h(X^i_t) \left(L_{m,a}(X^{(-i)}_t) - L_{m,a}(X_t)\right) \,dt
+ \sum_{i=1}^{n} \frac{\partial}{\partial x^i}L_{m,a}(X_t)\,dt +dM_t
\nonumber \\ \nonumber \\ \nonumber
\equiv (I_1 - I_2 + I_3)dt+ dM_t,
\hspace{3cm}
\end{eqnarray}
with $n=n(X_t)$. We shall see below in Lemma \ref{Le4} (with no circular reasoning) that no localization is needed
here, since all terms in (\ref{M2}) turn out to be integrable and $M$ is, indeed, a martingale.
Lemma \ref{Le3} is proved.
\begin{Lemma}\label{Le3a}
Under the assumptions (\ref{Lambda}), $h\in B(R_+)$
and $m,a>1$,
the following bounds or equalities hold true:
\begin{equation}
I_1 \le
\Lambda \, a \,2^{a-1} \, L_{m-1,1}(X_{t}) L_{m,a-1}(X_{t})
1(X_t \not = \Delta_0) + \lambda_0 1(X_t = \Delta_0);
\end{equation}
\begin{eqnarray}\label{query}
I_2
\le \,a\, \|h\|_{B}\,L_{m,a+1}(X_t) 1(X_t\not = \Delta_0);
\end{eqnarray}
\begin{eqnarray}\label{i33}
I_3 = a m \, L_{m-1,1}(X_{t}) L_{m,a-1}(X_{t}) 1(X_t \not= \Delta_0);
\end{eqnarray}
\begin{eqnarray}\label{momm}
E_x L_{m,a} (X_{t})
\le (L_{m,a} (x)+\lambda_0 t) \exp((\Lambda a 2^{a-1} + m a)t).
\end{eqnarray}
If, in addition, $h$ satisfies (\ref{ash}) and
\begin{equation}\label{asam0}
C_0 > a(m + \Lambda \,2^{a-1}),
\end{equation}
then also
\begin{eqnarray}\label{query2}
I_2\ge 1(X_t\not = \Delta_0) \,C_0 L_{m-1,1}(X_{t})
L_{m,a-1}(X_{t}).
\end{eqnarray}
\end{Lemma}
\noindent
{\bf Proof.}
Let us establish the bound for $I_1$.
Notice that for $y = x +1 \ge 2$ and $a>1$
we have $y^{a-1}= (x+1)^{a-1}\le (2x)^{a-1}$, and, hence,
\(
y^a-x^a \le a(y-x)y^{a-1} \le a\,2^{a-1}\,x^{a-1}.
\)
Indeed, the first bound here follows for $y\ge x>0$ and $a>0$ from
\[
\frac{d}{dx} (y^a-x^a) = -ax^{a-1} \ge -ay^{a-1} =
\frac{d}{dx} (a(y-x)y^{a-1}).
\]
So, we estimate,
\begin{eqnarray*}
\frac1n \sum_{i=1}^{n} L_{m,a}(X^{(+i)}_t) -
L_{m,a}(X_t) \hspace{2cm}
\\\\
= \left(
\left((1+0)^m +\sum_{j=1}^{n}(1+X_t^j)^{m}\right)^{a}
- \left(\sum_{j=1}^{n}(1+X_t^j)^{m}\right)^{a}
\right)
\\\\
\le a\,2^{a-1}\, \left(\sum_{j=1}^{n}(1+X_t^j)^{m}\right)^{a-1}
= a\,2^{a-1}\,L_{m,a-1}(X_t).
\end{eqnarray*}
Hence, due to the inequality $n(X_t) \le L_{m-1,1}(X_{t})$, we get,
\begin{eqnarray}\label{i1}
I_1
= \lambda_n \left(\frac1n \sum_{i=1}^{n} L_{m,a}(X^{(+i)}_t) - L_{m,a}(X_t)\right)
\le \lambda_n \, a \,2^{a-1} \, L_{m,a-1}(X_{t})
\nonumber \\ \\ \nonumber
\le \Lambda \, n \, a \,2^{a-1} \, L_{m,a-1}(X_{t})
\le \Lambda \, a \,2^{a-1} \, L_{m-1,1}(X_{t}) L_{m,a-1}(X_{t}).
\end{eqnarray}
Further, by taking derivatives, we find,
\begin{eqnarray}\label{i3}
I_3 = \sum_{i=1}^{n} \frac{\partial}{\partial x^i}L_{m,a}(X_t)
= a \left(\sum_{\ell=1}^{n}(1+X_t^\ell)^{m}\right)^{a-1}
\times \sum_{j=1}^n m \, (1+X_t^j)^{m-1}
\nonumber \\ \\ \nonumber
= a m \, L_{m-1,1}(X_{t}) L_{m,a-1}(X_{t}). \hspace{3cm}
\end{eqnarray}
The lower bound for $I_2$ under the additional assumption (\ref{ash}) reads,
\begin{eqnarray}\label{query1}
I_2 \ge C_0 \sum_{i=1}^n
(1+X^i_t)^{m-1}
L_{m,a-1}(X_{t})
= C_0 L_{m-1,1}(X_{t})
L_{m,a-1}(X_{t}).
\end{eqnarray}
We emphasize that it is $L_{m,a-1}(X_{t})$ that stands in the middle term,
and not $L_{m,a-1}(X^{(-i)}_{t})$ -- the latter would be slightly
insufficient for our aims -- which is justified in the next few lines.
We used here the elementary inequality for real values $0<x\le y$ and $a>1$,
\begin{equation}\label{elem}
y^a-x^a \ge (y-x)y^{a-1}
\end{equation}
(instead of also correct $y^a-x^a \ge a(y-x)x^{a-1}$),
for $y=\sum_{j=1}^{n}(1+X_t^j)^{m}$ and $x =
\sum_{1\le j\le n, \, j\not=i}^{}(1+X_t^j)^{m}$.
The bound (\ref{elem}) follows from the observation that both sides in
(\ref{elem}) vanish at $y=x$ and the derivative function of the
right hand side is less than that of the left hand side for $y>x\, (>0)$:
$$
\frac{d}{dy} \,(y-x)y^{a-1} = ay^{a-1} - (a-1)xy^{a-2} < ay^{a-1}
= \frac{d}{dy} \,(y^a-x^a).
$$
Hence,
\begin{eqnarray*}
I_2 = \sum_{i=1}^n h(X^i_t)
\left(
\left(\sum_{j=1}^{n}(1+X_t^j)^{m}\right)^{a}
- \left(\sum_{1\le j\le n, \, j\not=i}^{}(1+X_t^j)^{m}\right)^{a}
\right)
\\\\
\ge \sum_{i=1}^n
\frac{C_0}{(1+X^i_t)}
\left(\sum_{j=1}^{n}(1+X_t^j)^{m} -
\sum_{1\le j\le n, \, j\not=i}^{}(1+X_t^j)^{m}\right)
\left(\sum_{j=1}^{n}(1+X_t^j)^{m}\right)^{a-1}
\\\\
= C_0\, \sum_{i=1}^n (1+X^i_t)^{m-1}
\left(\sum_{j=1}^{n}(1+X_t^j)^{m}\right)^{a-1}
= C_0 L_{m-1,1}(X_{t})
L_{m,a-1}(X_{t}).
\end{eqnarray*}
The upper bound for $I_2$ follows from its definition and from the
remark that $n(x) \le L_{m,1}(x)$ and $L_{m,1}L_{m,a}=L_{m,a+1}$.
~
\noindent
Further, for any $t\ge 0$,
\begin{eqnarray*}
I_1 -I_2+ I_3 \le
(\Lambda a 2^{a-1} + m a) \, L_{m-1,1}(X_{t}) L_{m,a-1}(X_{t}) + \lambda_0.
\end{eqnarray*}
So, from (\ref{MM}) and by virtue of Fatou's lemma -- using a localization
for $M$ if needed, so that the expectation of the (local) martingale term
vanishes -- we get,
\begin{eqnarray*}
E_x L_{m,a} (X_{t\wedge \tau_n})
\le L_{m,a} (x) + \lambda_0 t \hspace{2cm}
\\\\
+ (\Lambda a 2^{a-1} + m a) E_x \int_0^{t \wedge \tau_n}
L_{m-1,1}(X_{s}) L_{m,a-1}(X_{s}) \, ds
\\\\
\le L_{m,a} (x) + \lambda_0 t + (\Lambda a 2^{a-1} + m a)
E_x \int_0^{t} L_{m,a}(X_{s\wedge \tau_n}) \, ds.
\end{eqnarray*}
By Gronwall's inequality (note that $E_x L_{m,a} (X_{t \wedge \tau_n})$ is bounded),
\begin{eqnarray*}
E_x L_{m,a} (X_{t \wedge \tau_n})
\le (L_{m,a} (x)+\lambda_0 t) \exp((\Lambda a 2^{a-1} + m a)t).
\end{eqnarray*}
and, as $\tau_n\to \infty$, by Fatou's Lemma, also
\begin{eqnarray*}
E_x L_{m,a} (X_{t})
\le (L_{m,a} (x)+\lambda_0 t) \exp((\Lambda a 2^{a-1} + m a)t).
\end{eqnarray*}
Lemma \ref{Le3a} is proved.
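The elementary inequalities underlying the bounds for $I_1$ and $I_2$ in the proof above admit a direct numerical spot-check (a sketch over an arbitrary grid of values of our choosing):

```python
import itertools

ok = True
for x, a in itertools.product([1.0, 2.5, 7.0], [1.1, 2.0, 3.7]):
    y = x + 1.0                                   # so y >= 2, as in the I_1 bound
    # upper bounds used for I_1: y^a - x^a <= a(y-x) y^(a-1) <= a 2^(a-1) x^(a-1)
    ok &= y ** a - x ** a <= a * (y - x) * y ** (a - 1) + 1e-9
    ok &= a * (y - x) * y ** (a - 1) <= a * 2.0 ** (a - 1) * x ** (a - 1) + 1e-9
    # lower bound used for I_2 (0 < x <= y): y^a - x^a >= (y - x) y^(a-1)
    ok &= y ** a - x ** a >= (y - x) * y ** (a - 1) - 1e-9
```

Note that the second upper bound uses $x\ge 1$, exactly as in the case $y=x+1\ge 2$ of the proof.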
\begin{Lemma}\label{Le4}
Under the assumptions (\ref{Lambda}), $h\in C_b(R_+)$,
$m,a>1$,
for any $t>0$,
\begin{equation}\label{supl}
E_x\sup_{0\le s\le t} L_{m,a}(X_s)
<\infty,
\end{equation}
and the local martingale $M$ in (\ref{M2}) is, in fact, a martingale.
\end{Lemma}
{\bf Proof.}
We estimate, {\em for any $a,m>0$},
\[
E_x\sup_{0\le s\le t} L_{m,a}(X_s) \le L_{m,a}(x)
+ \int_0^{t} E_x L_{m,a}(X_s)ds + E_x\sup_{0\le s\le t} |M_s|.
\]
In turn, $E_x\sup_{0\le s\le t} |M_s| \le C_p (E_x |M_{t}|^p)^{1/p}$
for any $p>1$ by Doob's inequality (recall that $M$ is cadlag) and further,
\begin{eqnarray}\label{modm}
|M_t| \le L_{m,a}(X_t) + L_{m,a}(x)
+ \left|\int_0^t {\mathcal G}L_{m,a}(X_{s})\,ds\right|
\hspace{1.4cm}
\nonumber \\ \\ \nonumber
\le L_{m,a}(X_t) + L_{m,a}(x)
+\left|\int_0^t I_1\,ds\right|
+\left|\int_0^t I_2\,ds\right|
+\left|\int_0^t I_3\,ds\right|.
\end{eqnarray}
So, due to (\ref{momm}), which is valid for any $m, \,a>0$,
\begin{eqnarray*}
E_x |M_{t}|^p \le C_p L_{m,a}(x)^p + C_p E_x L_{m,a}(X_{t})^p +C_{p}
\int_0^{t} E_x L_{m,a+1}(X_s)^p ds
\\\\
= C_p L_{m,ap}(x) + C_p E_x L_{m,ap}(X_{t}) +C_{p} \int_0^{t} E_x L_{m,ap+p}(X_s)\,ds \hspace{0.5cm}
\\\\
\le C'(L_{m,ap+p}(x)+\lambda_0 t)\exp(Ct)<\infty.
\hspace{1.5cm}
\end{eqnarray*}
By virtue of H\"older's inequality, this implies (\ref{supl}). The Lemma \ref{Le4} is proved.
\section{Proof of Theorem \ref{Thm1}}
\noindent
{\bf 1.}
Consider a Lyapunov function $L_{m,a}$ at
any $X_t\not=\Delta_0$ and $m, a > 1$, satisfying also
(\ref{asam0})
(compare with (\ref{aslambda}) and (\ref{am})).
The idea of Lyapunov functions in a stochastic
context is to verify that the `main' negative term prevails
and that `on average' the process $L_{m,a}(X_t)$ decreases, as long as
$X_t \not=\Delta_0$.
From the bounds on $I_1$, $I_2$ and $I_3$ of Lemma \ref{Le3a} it follows
that $I_1$ and $I_3$ are {\em dominated} by
$I_2$. This implies that the stationary measure
integrates some polynomial. In turn, this allows us to
extend our Lyapunov function so as to include a
multiplier depending on time. The latter helps to obtain
the crucial bound
$
E_x \tau_0^{k+1} < \infty
$
for some $k>0$ (see (\ref{simply}) below).
Finally, this bound implies ``coupling''
between the original process and its stationary version
with a certain rate of convergence.
\noindent
For any $t<\tau_0$ we have,
\begin{eqnarray*}
I_1 - I_2 + I_3 \le -
(C_0 - \Lambda a 2^{a-1} - m a) \, L_{m-1,1}(X_{t}) L_{m,a-1}(X_{t}) < 0.
\end{eqnarray*}
Denote
$$
C_{m,\Lambda,a}:= C_0 - \Lambda a 2^{a-1} - m a >0.
$$
By Fatou's lemma
we get,
\begin{eqnarray}\label{ineqlt}
E_x L_{m,a} (X_{t \wedge \tau_0}) \hspace{3cm} \nonumber \\\\ \nonumber
+ C_{m,\Lambda,a} E_x \int_0^{t \wedge \tau_0}
L_{m-1,1}(X_{s}) L_{m,a-1}(X_{s}) \, ds
\le L_{m,a} (x),
\end{eqnarray}
and, as $t\to \infty$,
\begin{eqnarray*}
E_x L_{m,a} (X_{\tau_0})
+ C_{m,\Lambda,a} E_x \int_0^{\tau_0}
L_{m-1,1}(X_{s}) L_{m,a-1}(X_{s}) \, ds
\le L_{m,a} (x).
\hspace{4cm}
\end{eqnarray*}
In particular, $E_x\tau_0 < \infty$ for any $x$ and also
$E_0\hat\tau_0 < \infty$ with $\hat\tau_0 :=
\inf(t > 0: \, X_t=\Delta_0; \; \exists\, s\in (0,t): \,
X_s \not=\Delta_0)$. In other words, the process $X$ is positive
recurrent.
According to the Harris--Khasminskii principle about invariant measures
(cf., for example, \cite{Ve00}),
there exists a (unique in our model) invariant measure $\mu$,
$\mu(A) = c E_0 \int_0^{\hat\tau_0} 1(X_s\in A)\,ds$, which integrates
the function $L_{m-1,1}(x) L_{m,a-1}(x)$. As noticed by the Referee, under the accepted assumptions both existence and
uniqueness of this measure also follow directly from \cite{Ve1977}.
~
\noindent
In a moment, we will show
one more elementary inequality
\begin{equation}\label{gm}
L_{m,1}(x)^{(m-1)/m} \le L_{m-1, 1}(x),
\end{equation}
so that (notice that $L_{m,a} (x) L_{m,b} (x) = L_{m,a+b} (x)$ and $L_{m,1}(x)^{a} = L_{m,a}(x)$)
$$
L_{m-1,1}(x) L_{m,a-1}(x) \ge L_{m,a-1+\frac{m-1}{m}}(x) = L_{m,a - 1/m}(x),
$$
and
\begin{eqnarray}\label{HH}
E_x L_{m,a} (X_{t\wedge \tau_0})
+ C_{m,\Lambda,a} E_x \int\limits_0^{t\wedge \tau_0}
L_{m,a-1/m}(X_{s}) \, ds
\le L_{m,a} (x).
\end{eqnarray}
So, due to Fatou's lemma,
\begin{eqnarray}\label{HH2}
E_x L_{m,a} (X_{\tau_0})
+ C_{m,\Lambda,a} E_x \int_0^{\tau_0}
L_{m,a-1/m}(X_{s}) \, ds
\le L_{m,a} (x).
\end{eqnarray}
We emphasize that both inequalities
(\ref{HH}) and (\ref{HH2}) have been established under the assumption
$
C_{m,\Lambda,a} >0,
$
that is,
\begin{equation}\label{encore1}
C_0 - \Lambda a 2^{a-1} - m a >0.
\end{equation}
~
\noindent
{\bf 2.}
The inequality (\ref{gm})
follows
from the inequalities, valid for $a,b>0$, $\alpha\in (0,1)$ and $c = b/a$,
$$
(a+b)^\alpha \le a^\alpha +b^\alpha \quad \Longleftrightarrow \quad
(1+c)^\alpha \le 1 +c^\alpha,
$$
where the latter, in turn, follows from the
inequality for the derivatives in $c$,
$$
\alpha(1+c)^{\alpha-1} \le \alpha c^{\alpha-1}.
$$
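Inequality (\ref{gm}) and the subadditivity bound behind it can likewise be spot-checked numerically (an illustrative sketch; the sample states and exponents are ours):

```python
def L(x, m, a):
    # L_{m,a}(x) = (sum_j (1 + x^j)^m)^a
    return sum((1.0 + xj) ** m for xj in x) ** a

gm_ok = True
for x in [(0.0,), (1.0, 2.0), (0.5, 3.0, 10.0, 0.1)]:
    for m in [1.5, 2.0, 4.0]:
        # (gm): L_{m,1}(x)^((m-1)/m) <= L_{m-1,1}(x)
        gm_ok &= L(x, m, 1.0) ** ((m - 1.0) / m) <= L(x, m - 1.0, 1.0) + 1e-9

# its source: (a+b)^alpha <= a^alpha + b^alpha for alpha in (0,1)
alpha, a_, b_ = 0.6, 2.0, 5.0
sub_ok = (a_ + b_) ** alpha <= a_ ** alpha + b_ ** alpha
```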
\noindent
{\bf 3.}
We are now prepared for considering a Lyapunov function
which depends on time. Let $k>0$ ({\em not} necessarily $k>1$),
$a, m>0$ and $L_{m,a,k}(t,x): = (1+t)^k L_{m,a}(x)$.
Similarly to the above and choosing $a, m>1$, we have,
\begin{eqnarray*}
dL_{m,a,k}(t,X_t)= L_{m,a,k}(t+dt,X_{t+dt}) - L_{m,a,k}(t,X_{t})
\\\\
= (1+t)^{k}\left[I_1 - I_2 +I_3\right]dt
+ k(1+t)^{k-1} \, L_{m,a}(X_{t})\, dt
+ d\tilde M_t
\\\\
\le -(1+t)^k (C_0 - \Lambda a 2^{a-1} - m a)
L_{m,a-\frac1{m}}(X_{t}) \, dt \hspace{0.5cm}
\\ \\
+ k(1+t)^{k-1} \, L_{m,a}(X_{t})\, dt + d\tilde M_t, \hspace{1.5cm}
\end{eqnarray*}
with some new local martingale $\tilde M$.
Now the task is again to ensure that the negative part in the
right hand side of the last expression prevails.
We will be using the
inequality established in the step 1 above. The second term may
be split into two parts,
\begin{eqnarray}\label{twoterms}
I:= k(1+t)^{k-1} \, L_{m,a}(X_{t}) \hspace{2cm}
\\\nonumber
= I\times 1(k(1+t)^{k-1} \, L_{m,a}(X_{t})
\le \epsilon (1+t)^k L_{m,a-1/m}(X_{t}))
\\\nonumber
+ I\times 1(k(1+t)^{k-1} \, L_{m,a}(X_{t})
> \epsilon (1+t)^k L_{m,a-1/m}(X_{t})).
\end{eqnarray}
The first term here, with ``$\le \epsilon$'', is clearly dominated by
the main negative expression if $\epsilon>0$ is small enough,
$
\epsilon < C_0 - a(m + \Lambda \,2^{a-1}).
$
The set of such values $\epsilon$ is non-empty
as long as $a$ and $m$ are chosen so as to satisfy
(\ref{asam0}).
Let us estimate the second term in (\ref{twoterms}).
We have, for any $\ell>0$ (later we will choose $\ell = k+\delta$ with small
$\delta>0$),
\begin{eqnarray*}
I\times 1(k(1+t)^{k-1} \, L_{ m,a}(X_{t})
> \epsilon (1+t)^k L_{m,a-1/m}(X_{t}))
\\\\
\le I \times \frac{(k \, L_{m,a}(X_{t}))^\ell}
{(\epsilon (1+t) L_{m,a-1/m}(X_{t}))^\ell}
= I \times \frac{k^\ell}{(\epsilon (1+t))^\ell}
L_{m,\ell/m}(X_{t}).
\end{eqnarray*}
Therefore, the second term in (\ref{twoterms}) does not exceed
\begin{eqnarray*}
k(1+t)^{k-1} \,
\times \frac{k^\ell}{(\epsilon (1+t))^\ell}
L_{m,a+\ell/m}(X_t)^{}.
\end{eqnarray*}
Now let us impose conditions on $\ell$:
let $a': =a+\ell/m$ and assume
\begin{equation}\label{encore2}
C_0 - \epsilon > a'(m + \Lambda 2^{a'-1}),
\end{equation}
in order to use inequalities similar to (\ref{HH})--(\ref{HH2}) with $a'$ instead
of $a$.
Note that, at least, for $\ell>0$ -- and automatically $k$ -- small enough, the
latter inequality holds true due to
(\ref{encore1}).
Now, let us collect all terms and their bounds,
integrate and take expectations (exploiting
an appropriate localization for $\tilde M$ if necessary),
\begin{eqnarray*}
E_x L_{m,a,k}(t\wedge \tau_0,X_{t\wedge \tau_0})
+ (C_{m,\Lambda,a}-\epsilon) E_x \int_0^{t\wedge \tau_0}
(1+s)^{k} L_{m,a-1/m}(X_s) \,ds
\\\\
\le L_{m,a}(x)
+ C' \int_{0}^{\infty}
(1+s)^{k-1-\ell} \,E_x \left[1(s\le t\wedge\tau_0)
L_{m,a+\ell/m}(X_s)\right]ds
\\\\
\le L_{m,a}(x) + C'' L_{m,a+(\ell-1)/m}(x). \hspace{2cm}
\end{eqnarray*}
(This notation does not necessarily mean that $\ell \ge 1$.) Due to Fatou's lemma, with
$\ell = k + \delta$ and $\delta>0$ (i.e., $\ell > k$), this implies,
\begin{eqnarray*}
E_x L_{m,a,k}(\tau_0,X_{\tau_0})
+ C' E_x \int_0^{\tau_0}
(1+s)^{k} L_{m,a-1/m}(X_s) \,ds
\\\\
\le L_{m,a}(x)
+ C'' L_{m,a+(\ell-1)/m}(x). \hspace{2cm}
\end{eqnarray*}
Since $L_{m,a-1/m}(X_s)\ge 1$ for $s<\tau_0$
(notice that $a+(\ell-1)/m>0$), we get,
\begin{eqnarray*}
\displaystyle E_x \tau_0^{k+1}
\le C L_{m,a}(x) + C L_{m,a+(\ell-1)/m}(x),
\end{eqnarray*}
or just
\begin{eqnarray}\label{simply}
E_x \tau_0^{k+1}
\le C L_{m,a+(\ell-1)_+/m}(x).
\end{eqnarray}
Notice that for $x=\Delta_0$, the inequality (\ref{simply}) also trivially
holds true.
~
\noindent
{\bf 4.}
The bound (\ref{simply}) along with moment inequalities (\ref{HH}--\ref{HH2})
for Markov models ``usually''
already suffice for establishing the desired rate of convergence and there are
several standard ways to accomplish the proof. So, in
principle, we may claim our result at this point. However, we
give a
sketch of the remaining proof
for completeness of the presentation and to address some specifics of the
model. It is due to these specifics that, while considering a couple of processes,
we need to take some additional care so as to tackle the hitting time of the
``origin'' for this couple, whereas ``usually'' it is enough to estimate
moments of the hitting time of some neighbourhood of the origin.
In this {\em second part of the proof,}
we consider two independent
versions $X$ and $\tilde X$ of our
Markov process, one starting at $x$ and another at the
stationary measure $\mu$. We are going to show
how to arrange the coupling. Recall that the stationary version
exists due to the Harris--Khasminskii principle, see the remark above.
Denote $\bar \tau_0 : = \inf(t\ge 0: X_t = \tilde X_t = \Delta_0)$.
Given $\tilde X_0 = y$, it may be proved that also
\begin{eqnarray}\label{taubar}
E_{x,y} \bar \tau_0^{k+1}
\le C \bar L_{m,a+\ell/m}(x,y).
\end{eqnarray}
Indeed, let $R>0$ and for given $m$, $a$ and $\ell$, denote
\begin{eqnarray*}
\bar\tau_{0,R}: = \inf(t\ge 0: \; X_t=\Delta_0 \;
\mbox{and}\; L_{m,a-1/m}(Y_t)\le R,
\\
\;\;
\mbox{or}\;\; Y_t=\Delta_0 \; \mbox{and}\; L_{m,a-1/m}(X_t)\le R).
\end{eqnarray*}
The idea for evaluating $\bar \tau_0$ is to establish a bound for
$\bar\tau_{0,R}$ and then to use it for controlling $\bar \tau_0$
with the help of (\ref{simply}). To this end, consider the Lyapunov
functions
$$
\bar L_{m,a}(X_{t},Y_t):= L_{m,a}(X_t) + L_{m,a}(Y_t), \;\;
\bar L_{m,a,k}(t,X_t,Y_t)
:= (1+t)^k \bar L_{m,a}(X_{t},Y_t),
$$
with the same values
of $m,a,k$ as above for a single component. We notice that
at any moment $t$ when $(X_t, Y_t) \not = (\Delta_0, \Delta_0)$,
the Lyapunov function $\bar L_{m,a,k}$
serves well in the sense that it decreases on average at least
as fast as a single component one, $(1+t)^k L_{m,a}(X_t)$, say
(if $X_t\not=\Delta_0$).
If it occurs for the first time that $X_t=Y_t=\Delta_0$, then
$t=\bar\tau_0$. So, we have to inspect what happens at $t$ when
$X_t=\Delta_0$ and $Y_t \not= \Delta_0$, but $L_{m,a-1/m}(Y_t)\ge R$, say.
In this case the idea is that the average increment of
$L_{m,a-1/m}(X_t)$ is, of course, positive but equals just $\lambda_0 dt$ and, hence,
may be easily compensated by a large negative on average increment
of the other component $L_{m,a-1/m}(Y_t)$. In this way we will establish below
the bound
\begin{eqnarray}\label{tau0r}
E_{x,y}\bar\tau_{0,R}^{k+1} \le C \bar L_{m,a+\ell/m}(x,y),
\end{eqnarray}
under the condition (\ref{encore2}).
Then, once $\bar\tau_{0,R}$ has occurred, we may wait some fixed time
$t_1$ sufficient for $Y$ to reach $\Delta_0$ with a large probability,
say, at least $1/2$, while $X$ remains at $\Delta_0$ during all that time with probability $\exp(-\lambda_0 \, t_1)$.
If this scenario is {\em not} realised -- the probability of which
does not exceed some constant $\nu<1$
-- then we stop at $\bar\tau_{0,R} + t_1$, or a bit earlier
if either $X$ exits from $\Delta_0$, or $L_{m,a-1/m}(Y)$ exceeds the level $R+1$ (say).
Then we wait again until the ``next'' moment $\bar\tau_{0,R}$
and repeat the whole procedure of the ``attempt'' to meet
both components at $\Delta_0$. Thus, we will evaluate $\bar\tau_{0}$
by means of some geometric series, which would guarantee the
desired inequality (\ref{taubar}). Hence, let us show the bound
(\ref{tau0r}) first. Recall that we have $C_{m,\Lambda,a}>0$ and even $C_{m,\Lambda,a'}>0$ ($a'=a+\ell/m$) due to (\ref{encore1}) and (\ref{encore2}), and choose
$\epsilon$ and $R$ so that
\begin{equation}\label{eqR}
(C_{m,\Lambda,a} - \epsilon) R > \lambda_0.
\end{equation}
Then there exists $C'>0$ such that
$(C_{m,\Lambda,a} - \epsilon- C') R \ge \lambda_0$.
Denote
\begin{eqnarray*}
e^1_t:= 1(X_t\not=\Delta_0, Y_t\not=\Delta_0), \quad
e^2_t:= 1(X_t=\Delta_0, L_{m,a-1/m}(Y_t)\ge R),
\\
e^3_t:= 1(Y_t=\Delta_0, L_{m,a-1/m}(X_t)\ge R). \hspace{2cm}
\end{eqnarray*}
~
\noindent
{\bf 5.} We start with the function $\bar L_{m,a}(X_t,Y_t)$
on $t<\bar\tau_{0,R}$. Repeating the calculation of step 1,
we obtain the following bounds,
\begin{eqnarray}\label{HH3}
E_{x,y} \bar L_{m,a} (X_{t\wedge \bar\tau_{0,R}},
Y_{t\wedge \bar\tau_{0,R}})
+ E_x \int\limits_0^{t\wedge \bar\tau_{0,R}}
\{[(e^1_s + e^3_s)C_{m,\Lambda,a} L_{m,a-1/m}(X_{s})
-e^3_s \lambda_0]
\nonumber \\ \\ \nonumber
+ [(e^1_s + e^2_s)C_{m,\Lambda,a} L_{m,a-1/m}(Y_{s}))
- e^2_s\lambda_0]\}\, ds
\le \bar L_{m,a} (x,y),
\hspace{1cm}
\end{eqnarray}
and, due to Fatou's lemma, also
\begin{eqnarray}\label{HH4}
E_{x,y} \bar L_{m,a} (X_{\bar\tau_{0,R}},
Y_{\bar\tau_{0,R}})
+ E_x \int\limits_0^{\bar\tau_{0,R}}
\{[(e^1_s + e^3_s)C_{m,\Lambda,a} L_{m,a-1/m}(X_{s})
-e^3_s \lambda_0]
\nonumber \\ \\ \nonumber
+ [(e^1_s + e^2_s)C_{m,\Lambda,a} L_{m,a-1/m}(Y_{s}))
- e^2_s\lambda_0]\}\, ds
\le \bar L_{m,a} (x,y).
\end{eqnarray}
Due to the condition (\ref{eqR}), all integrands ``$[\ldots]$'' in
(\ref{HH3}) and (\ref{HH4})
are non-negative, so, in particular, for any $t\ge 0$,
\begin{eqnarray}\label{eleqe}
E_{x,y} \bar L_{m,a} (X_{t\wedge \bar\tau_{0,R}},
Y_{t\wedge \bar\tau_{0,R}}) \vee
E_{x,y} \bar L_{m,a} (X_{\bar\tau_{0,R}},
Y_{\bar\tau_{0,R}}) \le \bar L_{m,a} (x,y).
\end{eqnarray}
~
\noindent
{\bf 6.} Now we are ready to consider the Lyapunov function $\bar L_{m,a,k}(t,X_t,Y_t)$ depending also on time.
Similarly to the step 3 -- see the formula (\ref{twoterms}) --
we have on $t<\bar \tau_{0,R}$ with some new local martingale $\hat M_t$,
\begin{eqnarray*}
d\bar L_{m,a,k}(t,X_t,Y_t)
= \bar L_{m,a,k}(t+dt, X_{t+dt}, Y_{t+dt}) -
\bar L_{m,a,k}(t,X_{t},Y_t)
\\\\
\le
e^1_t (1+t)^{k}
\left[(I^Y_1 - I^Y_2 + I^Y_3 + \frac{k L_{m,a}(Y_t)}{1+t} )\,dt \hspace{2cm}
\right.\\\\\left.
+ (I^X_1 - I^X_2 + I^X_3 + \frac{k L_{m,a}(X_t)}{1+t})\,dt
+ d\hat M_t\right] \hspace{2.5cm}
\\\\
+ e^2_t (1+t)^{k}
\left[(I^Y_1 - I^Y_2 + I^Y_3
+ \lambda_0 + \frac{k L_{m,a}(Y_t)}{1+t}) \,dt
+ d\hat M_t\right] \hspace{1cm}
\\\\
+ e^3_t (1+t)^{k}
\left[(I^X_1 - I^X_2 + I^X_3
+ \lambda_0 + \frac{k L_{m,a}(X_t)}{1+t})\,dt + d\hat M_t\right]
\\\\
=: (J_1 + J_2 + J_3)dt + d\tilde M_t, \hspace{3cm}
\end{eqnarray*}
again with a new local martingale $\tilde M$ and with
\[
J_1 :=
e^1_t (1+t)^{k}
\left[I^Y_1 - I^Y_2 + I^Y_3 + \frac{k L_{m,a}(Y_t)}{1+t}
+ I^X_1 - I^X_2 + I^X_3 + \frac{k L_{m,a}(X_t)}{1+t}\right],
\[
J_2 :=
e^2_t (1+t)^{k}
\left[I^Y_1 - I^Y_2 + I^Y_3
+ \lambda_0 + \frac{k L_{m,a}(Y_t)}{1+t}
\right],
\]
\[
J_3 :=
e^3_t (1+t)^{k}
\left[I^X_1 - I^X_2 + I^X_3
+ \lambda_0 + \frac{k L_{m,a}(X_t)}{1+t}\right].
\]
Here the first term $J_1$ is estimated identically to what was
done at the step 3 for the only component $X$, and this gives
\begin{eqnarray*}
J_1
\le e^1_t\left(-(1+t)^k (C_{m,\Lambda,a} - \epsilon)
\bar L_{m,a-1/m}(X_{t},Y_t)\right)
\\
+ e^1_t k(1+t)^{k-1} \,
\times \frac{k^\ell}{(\epsilon (1+t))^\ell}
\bar L_{m,a+\ell/m}(X_t, Y_t)^{}.
\end{eqnarray*}
The second and the third terms admit the bounds,
\begin{eqnarray*}
J_2
\le e^2_t\left(-(1+t)^k \left(C_{m,\Lambda,a}-\epsilon-\frac{\lambda_0}{R}\right)
L_{m,a-1/m}(Y_{t})\right)
\\%\\
+ e^2_t k(1+t)^{k-1} \,
\times \frac{k^\ell}{(\epsilon (1+t))^\ell}
L_{m,a+\ell/m}(Y_t),
\end{eqnarray*}
\begin{eqnarray*}
J_3
\le e^3_t\left(-(1+t)^k \left(C_{m,\Lambda,a}-\epsilon-\frac{\lambda_0}{R}\right)
L_{m,a-1/m}(X_{t})\right)
\\
+ e^3_t k(1+t)^{k-1} \,
\times \frac{k^\ell}{(\epsilon (1+t))^\ell}
L_{m,a+\ell/m}(X_t),
\end{eqnarray*}
where $\lambda_0$ has been absorbed using $\lambda_0 \le (\lambda_0/R)\, L_{m,a-1/m}(Y_t)$ on the set $\{e^2_t=1\}$, and similarly on $\{e^3_t=1\}$.
Now, let us collect all terms and their bounds,
integrate and take expectation, also using
localization for the martingale term if necessary. Notice that
$1(s<\bar\tau_0)(e^1_s+e^2_s+e^3_s)=1(s<\bar\tau_0)$
and
\begin{eqnarray*}
1(s<\bar\tau_0)[(e^1_s + e^3_s) L_{m,a-1/m}(X_s)
+ (e^1_s + e^2_s) L_{m,a-1/m}(Y_s)]
\\%\\
= 1(s<\bar\tau_0) \bar L_{m,a-1/m}(X_s,Y_s). \hspace{2cm}
\end{eqnarray*}
So, we have,
\begin{eqnarray*}
E_{x,y} \bar L_{m,a,k}(t\wedge \bar\tau_{0,R},
X_{t\wedge \bar\tau_{0,R}}, Y_{t\wedge \bar\tau_{0,R}}) \hspace{2.5cm}
\\\\
+ (C_{m,\Lambda,a}-\epsilon-\frac{\lambda_0}{R})
E_{x,y} \int_0^{t\wedge \bar\tau_{0,R}}
(1+s)^{k} \bar L_{m,a-1/m}(X_s,Y_s) \,ds
\\\\
\le \bar L_{m,a}(x,y)
+ C' \int_{0}^{\infty}
E_{x,y} 1(s\le t\wedge\bar\tau_{0,R})
(1+s)^{k-1-\ell} \bar L_{m,a+\ell/m}(X_s,Y_s)\,ds.
\end{eqnarray*}
Further, recall that $a'=a+\ell/m$ and (\ref{encore2}) holds true, whence,
\begin{eqnarray*}
E_{x,y} 1(s\le t\wedge\bar\tau_{0,R})
\bar L_{m,a+\ell/m}(X_s,Y_s)
\le \bar L_{m,a+\ell/m}(x,y).
\end{eqnarray*}
From here we conclude,
\begin{eqnarray*}
E_{x,y} \bar L_{m,a,k}(t\wedge \bar\tau_{0,R},
X_{t\wedge \bar\tau_{0,R}}, Y_{t\wedge \bar\tau_{0,R}})
+ C' E_{x,y} \int_0^{t\wedge\bar\tau_{0,R}}
(1+s)^{k} \bar L_{m,a-1/m}(X_s,Y_s) \,ds
\\\\
\le C \bar L_{m,a+\ell/m}(x,y).\hspace{3.5cm}
\end{eqnarray*}
By Fatou's lemma, with
$\ell = k + \delta$ (i.e., $\ell > k$), this implies,
\begin{eqnarray*}
E_{x,y} \bar L_{m,a,k}(\bar\tau_{0,R},X_{\bar\tau_{0,R}},Y_{\bar\tau_{0,R}})
+ C' E_{x,y} \int_0^{\bar\tau_{0,R}}
(1+s)^{k} \bar L_{m,a-1/m}(X_s,Y_s) \,ds
\\\\
\le C \bar L_{m,a+\ell/m}(x,y). \hspace{2.5cm}
\end{eqnarray*}
Since $\bar L_{m,a-1/m}(X_s,Y_s)\ge 1$ on $s<\bar \tau_0$ (and on $s<\bar \tau_{0,R}$),
we get
\begin{eqnarray*}
\displaystyle E_{x,y} \bar\tau_{0,R}^{k+1}
\le C \bar L_{m,a+\ell/m}(x,y),
\end{eqnarray*}
so that (\ref{tau0r}) is established.
~
\noindent
{\bf 7.}
Now let us show (\ref{taubar}).
As explained above, to this aim we choose $t_1$ so that
$$
\sup_{u: \, L_{m,a-1/m}(u)\le R+1} \,
P_{u}(\tau_0>t_1) \le
t_1^{-(k+1)} \sup_{u: \, L_{m,a-1/m}(u)\le R+1}\, E_{u}\tau_0^{k+1} \le \frac12.
$$
Recall that $\nu<1$ is defined above in the step 4 as follows,
\begin{eqnarray*}
1-\nu:= \inf_{L_{m,a-1/m}(u)\le R+1} P_{\Delta_0, u}\left(X_s\equiv \Delta_0, \,
0\le s\le t_1, \, \& \,
\exists \, t \in [0, t_1]: \, Y_t = \Delta_0\right)
\\\\
\ge \frac12\, \exp(-\lambda_0 \, t_1) > 0. \hspace{3.5cm}
\end{eqnarray*}
Let
\(
\bar\tau(1):= \bar\tau_{0,R}, \;\;
\bar\tau(n+1):= \theta_{\bar\tau(n)+t_1}\bar\tau_{0,R},
\;\; n=1,2,\ldots,
\)
where $\theta_t$ is a shift operator for the process
$((X_t,Y_t), \, t\ge 0)$
(see \cite{Dynkin}).
Then we estimate,
\begin{eqnarray}\label{sumtau0}
E_{x,y}\bar \tau_{0}^{k+1} \le \sum_{n\ge 1}
E_{x,y} (\bar\tau(n) + t_1)^{k+1}\, \nu^{n-1}(1-\nu).
\end{eqnarray}
Whence,
\begin{eqnarray*}
E_{x,y} (\bar\tau(1) + t_1)^{k+1}
\le 2^k \,(E_{x,y} \bar\tau(1)^{k+1} + t_1^{k+1})
\le C \bar L_{m,a+\ell/m}(x,y) + C.
\end{eqnarray*}
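The first inequality above is the elementary convexity bound $(a+b)^{k+1}\le 2^k(a^{k+1}+b^{k+1})$ for $a,b\ge 0$, which follows from Jensen's inequality applied to $x\mapsto x^{k+1}$. A quick numerical sanity check (illustrative only, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)

# Check (a + b)^(k+1) <= 2^k * (a^(k+1) + b^(k+1)) for a, b >= 0, k >= 0.
# This follows from convexity of x -> x^(k+1):
# ((a + b)/2)^(k+1) <= (a^(k+1) + b^(k+1)) / 2.
for _ in range(10_000):
    a, b = rng.uniform(0.0, 10.0, size=2)
    k = rng.uniform(0.0, 5.0)
    lhs = (a + b) ** (k + 1)
    rhs = 2 ** k * (a ** (k + 1) + b ** (k + 1))
    assert lhs <= rhs * (1 + 1e-12)
print("convexity inequality verified on 10000 random triples")
```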
By induction and using
$\bar\tau(n)
= \sum_{j=1}^{n}(\bar\tau(j) - \bar\tau(j-1))$
with $\bar\tau(0):=0$, we have,
\begin{eqnarray*}
E_{x,y} (\bar\tau(n) + t_1)^{k+1}
\le C n^{k}((n-1) +
\bar L_{m,a+\ell/m}(x,y) + 1)
\\%\\
\le Cn^{k+1}(\bar
L_{m,a+\ell/m}(x,y) + 1). \hspace{2cm}
\end{eqnarray*}
Substitution of this into (\ref{sumtau0}) shows that,
indeed, (\ref{tau0r}) implies (\ref{taubar}), under the assumption (\ref{eqR}), the latter being guaranteed by (\ref{encore2}).
~
\noindent
{\bf 8.}
From (\ref{taubar}) we conclude,
with invariant distribution $\mu$,
\begin{eqnarray}\label{taubar3}
E_{x,\mu} \bar \tau_0^{k+1}
\le C L_{m,a+\ell/m}(x)
+ C \int L_{m,a+\ell/m}(y)\,\mu(dy).
\end{eqnarray}
Recall that from (\ref{HH}) it follows that $\mu$ integrates the function
$L_{m,a-1/m}(x)$ for any couple $(a,m)$ satisfying
$a, m > 1$ and (\ref{am}): $C_0> a(m + \Lambda 2^{a-1})$.
Hence, for the integral in (\ref{taubar3}) to converge, it suffices
\(
C_0> (a + \ell/m)(m + \Lambda 2^{a -1 + \ell/m}).
\)
The latter is guaranteed by (\ref{encore2}), that is, by (\ref{encore1}) and by the choice of $\ell$ close enough to $k$.
In turn, (\ref{encore1}) is guaranteed by the assumption
(\ref{aslambda}) if $k>0$ and $\ell>0$ are sufficiently small. Hence, for any
sufficiently small $k>0$ there exist $a>1$, $m>1$ and $\ell > k$ such that
the integral in (\ref{taubar3}) converges and we get
\begin{eqnarray}\label{taubar4}
E_{x,\mu} \bar \tau_0^{k+1}
\le C L_{m,a+\ell/m}(x)
+ C.
\end{eqnarray}
~
\noindent
{\bf 9.}
Now, we may estimate
the right hand side in the coupling inequality,
$$
\|\mu^x_t - \mu\|_{TV} = 2\sup_A(\mu^x_t - \mu)(A) \le
2 P_{x,\mu}(T>t),
$$
where $T:= \inf(t\ge0: \, X_t=\tilde X_t = 0)$. It follows from
(\ref{taubar4}) in a standard way (cf. \cite{Kalashnikov93}, \cite{Ve00}, et al.)
that for any $\nu>0$ there exists $C>0$ such that
\begin{equation}\label{lastineq}
P_{x,\mu}(T>t) \le
C (1+L_{m,a+\ell/m}(x))(1+t)^{-(k+1)+\nu}.
\end{equation}
This is equivalent to (\ref{main}).
Theorem \ref{Thm1} is proved.
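The ``standard way'' of passing from the moment bound (\ref{taubar4}) to the polynomial tail estimate (\ref{lastineq}) rests on Markov's inequality, $P(T>t)\le E\,T^{k+1}/t^{k+1}$. As a sketch (with an exponential variable standing in for the coupling time, chosen here only because both sides are available in closed form):

```python
import math

# For T ~ Exp(1): P(T > t) = exp(-t) and E[T^(k+1)] = Gamma(k+2).
# Markov's inequality gives P(T > t) <= E[T^(k+1)] / t^(k+1).
k = 2.0
for t in [1.0, 2.0, 5.0, 10.0, 20.0]:
    tail = math.exp(-t)
    markov = math.gamma(k + 2) / t ** (k + 1)
    assert tail <= markov
print("Markov polynomial tail bound holds for Exp(1)")
```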
~
\noindent
{\it Remark 5.} The main result may be extended to
the case where both $\lambda$
and $h$ depend
on the whole state of the process $X$ satisfying
the same generic assumptions (\ref{ash}), (\ref{aslambda})
and (\ref{Lambda}), with $h(t)$ replaced by
$h(t,x)$ and $\Lambda:=\sup_{n\ge 1}(\lambda_n/n)$ by
$\Lambda:=\sup_{n\ge 1,\,x}(\lambda_{n, x}/n)$.
A similar convergence rate {\em independent of $N$} may be proved
in the same way for the model with any $N<\infty$; ``usual'' bounds could
easily depend on this parameter.
Similar bounds may be established for
{\em mixing rates}
by using the approach from \cite{Ve00}.
For a {\em random} initial distribution $\mu_0$,
similar or weaker bounds may be proved depending on moments of $\mu_0$.
\section*{Acknowledgements}
The author is grateful to G. A. Zverkina and to two anonymous Referees for many useful remarks.
\section{Introduction}
The question addressed in this review is: What are the properties of the unusual two-sphere singularity discovered by B\"ohmer and Lobo? The answer is given for quantum as well as classical singularity structure. This conference proceeding is based on articles by B\"ohmer and Lobo \cite{BL} and by the authors \cite{HK1, HK2}.
\section{Types of Singularities}
\subsection{Classical Singularities}
A classical singularity is indicated by incomplete geodesics or incomplete paths of bounded acceleration \cite{HE, Geroch} in a maximal spacetime. Since, by definition, a spacetime is smooth, all irregular points (singularities) have been excised; a singular point is a boundary point of the spacetime. There are three different types of singularity \cite{ES}: quasi-regular, non-scalar curvature and scalar curvature. Whereas quasi-regular singularities are topological, curvature singularities are indicated by diverging components of the Riemann tensor when it is evaluated in a parallel-propagated orthonormal frame carried along a causal curve ending at the singularity.
\subsection{Quantum Singularities}
A spacetime is QM (quantum-mechanically) nonsingular if the evolution of a test scalar wave packet, representing the quantum particle, is uniquely determined by the initial wave packet, manifold and metric, without having to put boundary conditions at the singularity \cite{HM}. Technically, a static ST (spacetime) is QM-singular if the spatial portion of the Klein-Gordon operator is not essentially self-adjoint on $C_{0}^{\infty}(\Sigma)$ in $L^2(\Sigma)$, where $\Sigma$ is a spatial slice. This is tested (see, e.g., Konkowski and Helliwell \cite{HK1, HK2, KH}) using Weyl's limit point--limit circle criterion \cite{RS, Weyl}, which involves examining an effective potential asymptotically at the location of the singularity. Here a limit-circle potential is quantum mechanically singular, while a limit-point potential is quantum mechanically nonsingular.
\section{2-Sphere Singularity -- B\"ohmer-Lobo Space-time}
\par The B\"ohmer and Lobo metric \cite{BL} is
\begin{equation}
ds^2 = -\frac{dt^2}{\cos\alpha} + R^2 d\alpha^2 + R^2 \sin^2\alpha \ d\Omega^2.
\end{equation}
\noindent where $R = \sqrt{3/8 \pi \rho_0}$ in terms of the constant energy density $\rho_0$, and $d\Omega^2 = d\theta^2 + \sin^2 \theta d\phi^2$. The coordinate ranges are $- \infty < t < \infty $, $0 \le \theta \le \pi$, and $0 \le \phi < 2\pi$. The radial coordinate $\alpha$ can either take the values $0 < \alpha \le \pi/2$ (half a three-sphere) or $- \pi/2 \le \alpha \le \pi/2$ (two half three-spheres joined at $\alpha = 0$, with $\alpha = - \pi /2$ identified with $\alpha = + \pi/2$).
\par The B\"ohmer-Lobo spacetime is static, spherically symmetric, regular at $\alpha = 0$, and it has vanishing radial stresses \cite{BL}. It is also Petrov Type D and Segre Type A1 ([(11) 1, 1]), and it satisfies the strong energy condition automatically and the dominate energy condition with certain more stringent requirements \cite{BL}. Vertical cuts through the three-sphere define latitudinal two-spheres; in particular, the equatorial cut at $\alpha = \pi/2$ is a two-sphere on which scalar polynomial invariants diverge and the tangential pressure diverges as well.
\subsection{Classical singularity structure}
\par One can show that the B\"ohmer-Lobo spacetime is timelike geodesically complete but null geodesically incomplete. The equatorial two-sphere is a weak, timelike, scalar curvature singularity.
\subsection{Quantum singularity structure}
The Klein-Gordon equation
\begin{equation}
|g|^{-1/2}\left(|g|^{1/2}g^{\mu \nu} \Phi,_{\nu}\right),_{\mu} = M^2 \Phi
\end{equation}
\noindent for a scalar function $\Phi$ has mode solutions of the form
\begin{equation}
\Phi \sim e^{- i \omega t} F(\alpha) Y_{\ell m}(\theta, \phi)
\end{equation}
\noindent for spherically symmetric metrics, where the $Y_{\ell m}$ are spherical harmonics and $\alpha$ is the radial coordinate. The radial function $F(\alpha)$ for the B\"ohmer-Lobo metric obeys
\begin{equation}
F'' + \left(2\cot\alpha + \frac{1}{2}\tan\alpha\right) F' + \left[R^2 \omega^2 \cos\alpha - \frac{\ell(\ell + 1)}{\sin^2\alpha} - R^2 M^2\right] F = 0,
\end{equation}
\noindent and square integrability is judged by finiteness of the integral
\begin{equation}
I = \int d\alpha d\theta d\phi \sqrt{\frac{g_3}{g_{00}}} \Phi^* \Phi ,
\end{equation}
\noindent where $g_3$ is the determinant of the spatial metric. A change of coordinates puts the singularity at $x=0$ and converts the integral and differential equation to the one-dimensional Schr\"odinger forms $\int dx \psi^* \psi$ and
\begin{equation}
\frac{d^2 \psi}{dx^2} + (E - V)\psi = 0,
\end{equation}
\noindent where $E = R^2 \omega^2$ with a potential that is asymptotically
\begin{equation}
V(x) \sim \frac{R^2 M^2 + \ell (\ell + 1)}{x^{2/3}} < \frac{3}{4x^2}.
\end{equation}
\noindent It follows from Theorem X.10 in Reed and Simon \cite{RS} that $V(x)$ is in the limit circle case, so $x = 0$ is a quantum singularity. The Klein-Gordon operator is therefore not essentially self-adjoint. Quantum mechanics fails to heal the singularity.
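The classification hinges on the asymptotic comparison $V(x) \sim c/x^{2/3} < 3/(4x^2)$ near $x=0$, the limit-circle condition of Theorem X.10 in Reed and Simon \cite{RS}. A numerical sketch of this comparison, with illustrative values of $R$, $M$ and $\ell$ chosen only for the check (not taken from the text):

```python
import numpy as np

# Illustrative parameter values (assumptions for this check only):
R, M, ell = 1.0, 1.0, 2
c = R**2 * M**2 + ell * (ell + 1)   # coefficient of x^(-2/3) in V(x)

# c / x^(2/3) < 3 / (4 x^2)  iff  x^(4/3) < 3 / (4 c),
# i.e., for all x below the threshold x* = (3/(4c))^(3/4).
x_star = (3.0 / (4.0 * c)) ** 0.75
for x in np.geomspace(1e-8, 0.999 * x_star, 50):
    assert c / x ** (2.0 / 3.0) < 3.0 / (4.0 * x ** 2)
print(f"V(x) < 3/(4 x^2) for all x < x* = {x_star:.4f}")
```

So the potential falls below the $3/(4x^2)$ barrier on a whole neighborhood of $x=0$, which is exactly the limit-circle regime.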
\section{Introduction}
\input{sec_Introduction.tex}
\section{Problem Formulation and Notation} \label{ProblemSection}
\input{sec_ProblemFormulation.tex}
\section{Main Results} \label{MainResults}
\input{sec_MainResults.tex}
\section{Proofs} \label{Proofs}
\input{sec_Proofs.tex}
\section{Implementation and Numerical Experiments} \label{Numerics}
\input{sec_Numerics.tex}
\section{Discussion and Open Questions} \label{Conclusion}
\input{sec_Discussion.tex}
\section{Appendix}
\input{sec_Appendix.tex}
\section*{Acknowledgments}
MF and JM acknowledge the support of the DFG project “Information Theory and Recovery Algorithms for Quantized and Distributed Compressed Sensing”. MF and VN acknowledge the support of the DFG-FWF project “Multipenalty Regularization for High-Dimensional Learning”. VN acknowledges the support of the project No 251149/O70 “Function-driven Data Learning in High Dimension” (FunDaHD) funded by the Research Council of Norway. The authors thank Dominik Stöger for providing the very helpful counterexample in Remark \ref{DominiksRemark}.
\bibliographystyle{IEEEtranS}
\subsection{Proofs of \Cref{Proofs}}
\paragraph{} \purple{Proofs of} two technical \purple{results} used in \Cref{Proofs} are provided here. The first \purple{result estimates} possible coverings of {$S_{s_1,s_2}^{R,\Gamma}$ defined in \eqref{S}} \purple{as stated in \Cref{CoveringNumber}}, while the second \purple{result} contains two integral estimates \purple{used in the proof of \Cref{GaussianRIP}}.
\begin{Proof}[of \Cref{CoveringNumber}]
Recall that each $Z \in S_{s_1,s_2}^{R,\Gamma}$ can be represented as $Z = U\Sigma V^T$ with $U = (u^1,...,u^R)$, $V = (v^1,...,v^R)$ where all unit norm columns $u^r \in \mathbb{R}^{n_1}$ are $s_1$-sparse, all unit norm columns $v^r \in \mathbb{R}^{n_2}$ are $s_2$-sparse, and $\Vert \Sigma \Vert_F \le \Gamma$. Let us first consider the larger set $S = \{ Z = U\Sigma V^T \colon U \in Q_{n_1,s_1}^R, \Sigma \in D_\Gamma, \text{ and } V \in Q_{n_2,s_2}^R \}$ where $D_\Gamma$ is the set of $R\times R$ diagonal matrices with Frobenius norm less than or equal to $\Gamma$ and $Q_{n,s}^R = \{ W \in \mathbb{R}^{n\times R} \colon \Vert W \Vert_F \le \sqrt{R} \text{ and all columns } w^r \text{ are } s\text{-sparse} \}$. Then, we know that $S_{s_1,s_2}^{R,\Gamma} \subset S$. We construct an $(\varepsilon/2)$-net $\tilde{S}$ of $S$ by covering the sets of permissible $U$, $\Sigma$, and $V$ and conclude the proof by applying the well-known relation $N(K,\Vert \cdot \Vert, \varepsilon) \le N(K',\Vert \cdot \Vert, \varepsilon/2)$, which holds whenever $K \subset K'$.
%
\paragraph{} First note that if $B$ is a unit ball in $D$ dimensions (with respect to some norm $\Vert \cdot \Vert_B$) there exists an $\varepsilon$-net $\tilde{B}$ (i.e., for all $b \in B$ there is some $\tilde{b} \in \tilde{B}$ with $\Vert b - \tilde{b} \Vert_B \le \varepsilon$) with $\tilde{B} \subset B$ and $|\tilde{B}| \le (3/\varepsilon)^D$. See, for example, \cite[beginning of Section~3]{candes2011tight}. Moreover, note that $N(K,\Vert \cdot \Vert,\varepsilon) = N(cK,\Vert \cdot \Vert,c\varepsilon)$ for any set $K$ and $c > 0$. Hence, for any scaled unit ball $cB$ there exists an $\varepsilon$-net $\tilde{B} \subset cB$ and $|\tilde{B}| \le (3c/\varepsilon)^D$.
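The covering bound $|\tilde{B}| \le (3/\varepsilon)^D$ can be sanity-checked numerically in the simplest case $D=1$, where the unit ball is the interval $[-1,1]$; the grid construction below is ours, for illustration only:

```python
import numpy as np

eps = 0.1
# Greedy net for the 1-D unit ball [-1, 1]: grid points at spacing 2*eps.
net = np.arange(-1.0 + eps, 1.0 + eps, 2.0 * eps)

# Every point of the ball is within eps of the net ...
rng = np.random.default_rng(1)
for x in rng.uniform(-1.0, 1.0, size=10_000):
    assert np.min(np.abs(net - x)) <= eps + 1e-9
# ... and the net size respects the (3/eps)^D bound with D = 1.
assert len(net) <= 3.0 / eps
print(f"|net| = {len(net)} <= 3/eps = {3.0/eps:.0f}")
```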
%
\paragraph{} Let $\tilde{D}_\Gamma$ be an $(\varepsilon/(6R))$-net of $D_\Gamma$ which is of size $|\tilde{D}_\Gamma| \le (18\Gamma R/\varepsilon)^R$. For $W \in \mathbb{R}^{n\times R}$ denote by $\mathrm{supp}(W) = \{ \mathrm{supp}(w^1),...,\mathrm{supp}(w^R) \}$ and by $\mathrm{supp}(W) \Subset \mathrm{supp}(W')$ that $\mathrm{supp}(w^r) \subset \mathrm{supp}((w')^r)$, for all $r \in [R]$. Define the set of all possible supports of maximal size
\begin{align*}
T_{n,s}^R = \{ \mathrm{supp}(W) \colon W \in \mathbb{R}^{n \times R} \text{ and all columns } w^r \text{ have exactly } s \text{ non-zero entries} \}.
\end{align*}
For any fixed $\theta \in T_{n,s}^R$ the set $\{ W \in Q_{n,s}^R \colon \mathrm{supp}(W) \Subset \theta \}$ is an $\mathbb{R}^{s\times R}$ Frobenius ball of radius $\sqrt{R}$ embedded into $\mathbb{R}^{n\times R}$ and $Q_{n,s}^R = \bigcup_{\theta \in T_{n,s}^R} \{ W \in Q_{n,s}^R \colon \mathrm{supp}(W) \Subset \theta \}$. Hence, there is an $(\varepsilon/(6\Gamma \sqrt{R}))$-net $\tilde{Q}_{n,s}^R$ of $Q_{n,s}^R$ with
\begin{align*}
| \tilde{Q}_{n,s}^R | \le | T_{n,s}^R | \left( \frac{18\Gamma R}{\varepsilon} \right)^{Rs} \le \binom{n}{s}^R \left( \frac{18\Gamma R}{\varepsilon} \right)^{Rs} \le \left( \frac{en}{s} \right)^{Rs} \left( \frac{18\Gamma R}{\varepsilon} \right)^{Rs}.
\end{align*}
We now define $\tilde{S} = \{ \tilde{Z} = \tilde{U} \tilde{\Sigma} \tilde{V}^T \colon \tilde{U} \in \tilde{Q}_{n_1,s_1}^R, \tilde{\Sigma} \in \tilde{D}_\Gamma, \text{ and } \tilde{V} \in \tilde{Q}_{n_2,s_2}^R \}$. It is clear that
\begin{align*}
|\tilde{S}| \le | \tilde{Q}_{n_1,s_1}^R | \cdot | \tilde{D}_\Gamma | \cdot | \tilde{Q}_{n_2,s_2}^R | \le \left( \frac{18\Gamma R}{\varepsilon} \right)^{R(s_1+s_2+1)} \left( \frac{e n_1}{s_1} \right)^{Rs_1} \left( \frac{e n_2}{s_2} \right)^{Rs_2}.
\end{align*}
Let us conclude by showing $\tilde{S}$ is indeed an $(\varepsilon/2)$-net for $S$. Given any $Z = U\Sigma V^T \in S$, there exists ${\tilde{Z} = \tilde{U} \tilde{\Sigma} \tilde{V}^T} \in \tilde{S}$ with $\Vert U - {\tilde{U}} \Vert_F \le \varepsilon/(6\Gamma \sqrt{R})$, $\Vert \Sigma - \tilde{\Sigma} \Vert_F \le \varepsilon/(6R)$, and $\Vert V - {\tilde{V}} \Vert_F \le \varepsilon/(6\Gamma \sqrt{R})$. We can estimate
\begin{align*}
\Vert Z - {\tilde{Z}} \Vert_F &\le \Vert (U-{\tilde{U}})\Sigma V^T \Vert_F + \Vert \tilde{U} (\Sigma - \tilde{\Sigma}) V^T \Vert_F + \Vert {\tilde{U}} \tilde{\Sigma} (V - {\tilde{V}})^T \Vert_F \\
&\le \frac{\varepsilon}{6\Gamma \sqrt{R}} \Gamma \sqrt{R} + \sqrt{R} \frac{\varepsilon}{6R} \sqrt{R} + \sqrt{R} \Gamma \frac{\varepsilon}{6\Gamma \sqrt{R}} \\
&\le \frac{\varepsilon}{2}
\end{align*}
where we used the triangle inequality in the first line and $\Vert AB \Vert_F \le \Vert A \Vert_F \Vert B \Vert_F$ in the second.
\end{Proof}
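The last step of the proof above uses the submultiplicativity of the Frobenius norm, $\Vert AB \Vert_F \le \Vert A \Vert_F \Vert B \Vert_F$; an illustrative numerical check (not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(2)
# Frobenius norm is submultiplicative: ||AB||_F <= ||A||_F * ||B||_F,
# a consequence of Cauchy-Schwarz applied row-by-column.
for _ in range(1000):
    A = rng.standard_normal((5, 7))
    B = rng.standard_normal((7, 4))
    lhs = np.linalg.norm(A @ B, "fro")
    rhs = np.linalg.norm(A, "fro") * np.linalg.norm(B, "fro")
    assert lhs <= rhs * (1 + 1e-12)
print("||AB||_F <= ||A||_F ||B||_F verified on 1000 random pairs")
```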
\begin{lemma} \label{GammaBounds}
If $\Gamma \ge 1$, we have for the sets $S_{s_1,s_2}^{R,\Gamma}$ and $K_{s_1,s_2}^{R,\Gamma}$ defined in \eqref{S} and \eqref{K} that
\begin{align*}
\int_{0}^{\frac{\Gamma \sqrt{R}}{\sqrt{m}}} \sqrt{ \log N \left( S_{s_1,s_2}^{R,\Gamma} , \Vert \cdot \Vert_F ,\sqrt{m} \varepsilon \right) } d\varepsilon
&\le \sqrt{\frac{C_S \Gamma^2 R^2 (s_1+s_2) \log \left( \max \left\{ n_1,n_2 \right\} \right) }{m}} \\
\int_{0}^{\frac{\Gamma \sqrt{R}}{\sqrt{m}}} \sqrt{\log N(K_{s_1,s_2}^{R,\Gamma},\Vert \cdot \Vert_F,\sqrt{m}\varepsilon)} \;d\varepsilon &\le \sqrt{\frac{C_K \Gamma^2 R^2 (s_1 + s_2) \log^3(\max\{n_1,n_2\})}{m}}
\end{align*}
where $C_S,C_K > 0$ are constants.
\end{lemma}
\begin{Proof}
For the first estimate apply \Cref{CoveringNumber} to obtain
\begin{align*}
\int_{0}^{\frac{\Gamma \sqrt{R}}{\sqrt{m}}} &\sqrt{ \log N \left( S_{s_1,s_2}^{R,\Gamma} , \Vert \cdot \Vert_F ,\sqrt{m} \varepsilon \right) } d\varepsilon
\le \sqrt{ \int_{0}^{\frac{\Gamma \sqrt{R}}{\sqrt{m}}} 1\; d\varepsilon \int_{0}^{\frac{\Gamma \sqrt{R}}{\sqrt{m}}}\log N \left( S_{s_1,s_2}^{R,\Gamma} , \Vert \cdot \Vert_F ,\sqrt{m} \varepsilon \right) d\varepsilon } \\
&\le \sqrt{ \frac{\Gamma^2 R^2 (s_1+s_2+1) {\left( 1 +\log\left( 18\sqrt{R} \right) \right)} + \Gamma^2 R^2 s_1 \log \left( \frac{en_1}{s_1} \right) + \Gamma^2 R^2 s_2 \log \left( \frac{en_2}{s_2} \right)}{m} } \\
&\le \sqrt{\frac{C_S \Gamma^2 R^2 (s_1+s_2) \log \left( \max \left\{ n_1,n_2 \right\} \right)}{m}},
\end{align*}
where we used the Cauchy--Schwarz inequality in the first step and the fact that $\sqrt{R} \le \max\{ n_1,n_2 \}$ in the last inequality. $C_S > 0$ is an appropriate constant.
%
\paragraph{} To obtain the second estimate let us first assume $s_1/n_1 \le s_2/n_2$. We apply \Cref{CoveringNumber2} and find
\begin{align*}
&\int_{0}^{\frac{\Gamma \sqrt{R}}{\sqrt{m}}} \sqrt{\log N(K_{s_1,s_2}^{R,\Gamma},\Vert \cdot \Vert_F,\sqrt{m}\varepsilon)} \;d\varepsilon \\
&\le \int_{0}^{12\Gamma \sqrt{\frac{Rs_1}{mn_1}}} \sqrt{R(n_1+n_2+1) \log\left( \frac{36\Gamma R}{\sqrt{m}\varepsilon} \right)} \; d\eps + \int_{12\Gamma \sqrt{\frac{Rs_1}{mn_1}}}^{12\Gamma \sqrt{\frac{Rs_2}{mn_2}}} \sqrt{\frac{144\Gamma^2R^2s_1}{m\varepsilon^2} \log\left( \frac{9\sqrt{m}\varepsilon n_1}{6\Gamma \sqrt{R} s_1} \right)} \; d\eps \\
&+ \int_{12\Gamma \sqrt{\frac{Rs_1}{mn_1}}}^{12\Gamma \sqrt{\frac{Rs_2}{mn_2}}} \sqrt{R(n_2+1) \log\left( \frac{36\Gamma R}{\sqrt{m}\varepsilon} \right)} \; d\eps + \int_{12\Gamma \sqrt{\frac{Rs_2}{mn_2}}}^{\frac{\Gamma \sqrt{R}}{\sqrt{m}}} \sqrt{\frac{144\Gamma^2 R^2(s_1 + s_2)}{m\varepsilon^2} \log\left( \frac{9\sqrt{m}\varepsilon n_1}{6\Gamma \sqrt{R} s_1} \right)} \; d\eps \\
&+ \int_{12\Gamma \sqrt{\frac{Rs_2}{mn_2}}}^{\frac{\Gamma \sqrt{R}}{\sqrt{m}}} \sqrt{R \log\left( \frac{18\Gamma R}{\sqrt{m} \varepsilon} \right)} \; d\eps \\
&= I_1 + I_2 + I_3 + I_4 + I_5.
\end{align*}
We now estimate the five integrals. We use the short notation $a_i = 12\Gamma \sqrt{\frac{Rs_i}{mn_i}}$ for $i = 1,2$ and $b = \frac{\Gamma \sqrt{R}}{\sqrt{m}}$. The first integral can be bounded by
\begin{align*}
I_1 &\le \left( \int_0^{a_1} 1 \; d\eps \int_0^{a_1} R(n_1+n_2+1) \log\left( \frac{36\Gamma R}{\sqrt{m}\varepsilon} \right) \; d\eps \right)^{\frac{1}{2}} \\
&\le \left( a_1 R (n_1+n_2+1) \left[ \varepsilon \left(1+\log \left( \frac{36\Gamma R}{\sqrt{m}\varepsilon} \right)\right) \right]_{\varepsilon = 0}^{a_1} \right)^{\frac{1}{2}} \\
&= \left( \frac{144 \Gamma^2 R^2 s_1 (n_1 + n_2+1)}{mn_1} \left(1+\log \left( 3 \sqrt{\frac{Rn_1}{s_1}} \right)\right) \right)^\frac{1}{2} \le \begin{cases}
\left( \frac{432 \Gamma^2 R^2 s_1}{m} \left(1+\log \left( 3 \sqrt{\frac{Rn_1}{s_1}} \right)\right) \right)^\frac{1}{2} & n_1 \ge n_2 \\
\left( \frac{432 \Gamma^2 R^2 s_2}{m} \left(1+\log \left( 3 \sqrt{\frac{Rn_1}{s_1}} \right)\right) \right)^\frac{1}{2} & \text{else}
\end{cases}
\intertext{where we used in the last step the assumption $s_1/n_1 \le s_2/n_2$. As can be seen later, the case distinction is irrelevant in the final estimate. Let us now turn to the second integral.}
I_2 &= \sqrt{\frac{144\Gamma^2 R^2 s_1}{m}} \int_{a_1}^{a_2} \frac{1}{\varepsilon} \sqrt{\log\left( \frac{9\sqrt{m}\varepsilon n_1}{6\Gamma \sqrt{R} s_1} \right)} \; d\eps = \sqrt{\frac{144\Gamma^2 R^2 s_1}{m}} \left[ \frac{2}{3} \log^\frac{3}{2} \left( \frac{9\sqrt{m}\varepsilon n_1}{6\Gamma \sqrt{R} s_1} \right) \right]_{\varepsilon = a_1}^{a_2} \\
&{= \left( \frac{64\Gamma^2 R^2 s_1}{m} \right)^{\frac{1}{2}} \left( \log^\frac{3}{2} \left( \frac{18\sqrt{s_2}n_1}{\sqrt{n_2}s_1} \right) - \log^\frac{3}{2} \left( \frac{18\sqrt{n_1}}{\sqrt{s_1}} \right) \right) \le \left( \frac{64\Gamma^2 R^2 s_1}{m} \log^3 (18 n_1) \right)^{\frac{1}{2}}}
\intertext{The third integral is similar to the first. Again the case distinction does not play a major role in the end.}
I_3 &\le \left( (a_2-a_1) R (n_2+1) \left[ \varepsilon \left(1+\log \left( \frac{36\Gamma R}{\sqrt{m}\varepsilon} \right)\right) \right]_{\varepsilon = a_1}^{a_2} \right)^{\frac{1}{2}} \\
&= \left( (a_2-a_1) R (n_2+1) \left[ a_2 \left(1+\log \left( 3 \sqrt{\frac{Rn_2}{s_2}} \right) \right) - a_1 \left(1+\log \left( 3 \sqrt{\frac{Rn_1}{s_1}} \right)\right) \right] \right)^{\frac{1}{2}} \\
&\le \left( (a_2-a_1)^2 R (n_2+1) \left(1+\log \left( 3 \sqrt{\frac{Rn_1}{s_1}} \right) \right) \right)^{\frac{1}{2}} \le \left( (a_2^2+a_1^2) R (n_2+1) \left(1+\log \left( 3 \sqrt{\frac{Rn_1}{s_1}} \right) \right) \right)^{\frac{1}{2}} \\
&= \left( \frac{144 \Gamma^2 R^2}{m} \left( \frac{s_2 (n_2+1)}{n_2} + \frac{s_1 (n_2+1)}{n_1} \right) \left(1+\log \left( 3 \sqrt{\frac{Rn_1}{s_1}} \right) \right) \right)^{\frac{1}{2}} \\
&\le \begin{cases}
\left( \frac{432 \Gamma^2 R^2 (s_1+s_2)}{m} \left(1+\log \left( 3 \sqrt{\frac{Rn_1}{s_1}} \right) \right) \right)^\frac{1}{2} & n_1 \ge n_2, \\
\left( \frac{432 \Gamma^2 R^2 s_2}{m} \left(1+\log \left( 3 \sqrt{\frac{Rn_1}{s_1}} \right) \right) \right)^\frac{1}{2} & \text{else.}
\end{cases}
\intertext{In the third and the last line we again used $s_1/n_1 \le s_2/n_2$. The fourth integral is similar to the second.}
I_4 &= \sqrt{\frac{144 \Gamma^2 R^2 (s_1 + s_2)}{m}} \int_{a_2}^b \frac{1}{\varepsilon} \sqrt{\log\left( \frac{9\sqrt{m}\varepsilon n_1}{6\Gamma \sqrt{R} s_1} \right)} \; d\eps = \sqrt{\frac{144 \Gamma^2 R^2 (s_1 + s_2)}{m}} \left[ \frac{2}{3} \log^\frac{3}{2} \left( \frac{9\sqrt{m}\varepsilon n_1}{6\Gamma \sqrt{R} s_1} \right) \right]_{\varepsilon = a_2}^b \\
&{= \left( \frac{64 \Gamma^2 R^2 (s_1 + s_2)}{m} \right)^{\frac{1}{2}} \left( \log^\frac{3}{2} \left( \frac{3n_1}{2s_1} \right) - \log^\frac{3}{2} \left( \frac{18\sqrt{s_2}n_1}{\sqrt{n_2}s_1} \right) \right) \le \left( \frac{64 \Gamma^2 R^2 (s_1 + s_2)}{m} \log^3 (18 n_1) \right)^{\frac{1}{2}}}
\intertext{The last integral is similar to the third.}
I_5 &\le \left( (b-a_2) R \left[ \varepsilon \left(1 + \log\left( \frac{18\Gamma R}{\sqrt{m}\varepsilon} \right)\right) \right]_{\varepsilon = a_2}^b \right)^\frac{1}{2} \le \left( (b-a_2)^2 R \left( 1 + \log\left( 18 \sqrt{\frac{Rn_2}{s_2}} \right) \right) \right)^\frac{1}{2} \\
&\le \left( (b^2+a_2^2) R \left( 1 + \log\left( 18 \sqrt{\frac{Rn_2}{s_2}} \right) \right) \right)^\frac{1}{2} = \left( \left( \frac{\Gamma^2 R^2}{m} + \frac{144 \Gamma^2 R^2 s_2}{m n_2} \right) \left( 1 + \log\left( 18 \sqrt{\frac{Rn_2}{s_2}} \right) \right) \right)^\frac{1}{2} \\
&\le \left( \frac{145 \Gamma^2 R^2}{m} \left( 1 + \log\left( 18 \sqrt{\frac{Rn_2}{s_2}} \right) \right) \right)^\frac{1}{2}
\end{align*}
Let us now put all estimates together. If $s_1/n_1 \ge s_2/n_2$, the involved quantities simply switch roles. Hence, we obtain
\begin{align*}
\int_{0}^{\frac{\Gamma \sqrt{R}}{\sqrt{m}}} \sqrt{\log N(K_{s_1,s_2}^{R,\Gamma},\Vert \cdot \Vert_F,\sqrt{m}\varepsilon)} \;d\varepsilon \le \sqrt{\frac{C_K \Gamma^2 R^2 (s_1 + s_2) \log^3(\max\{n_1,n_2\})}{m}}
\end{align*}
for some constant $C_K > 0$.
\end{Proof}
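Both integral estimates open with the Cauchy--Schwarz step $\int_0^b \sqrt{g(\varepsilon)}\,d\varepsilon \le \sqrt{b \int_0^b g(\varepsilon)\,d\varepsilon}$. A numerical sketch on a Riemann-sum discretization, where the discrete Cauchy--Schwarz inequality holds exactly (the logarithmic integrand is a generic stand-in, not the paper's exact covering bound):

```python
import numpy as np

# Cauchy-Schwarz: int_0^b sqrt(g(e)) de <= sqrt( b * int_0^b g(e) de ),
# checked here via midpoint Riemann sums (discrete Cauchy-Schwarz).
b, n = 0.5, 100_000
dx = b / n
e = (np.arange(n) + 0.5) * dx          # midpoints of [0, b]
g = np.log(1.0 + 1.0 / e)              # a generic positive integrand
lhs = np.sum(np.sqrt(g)) * dx
rhs = np.sqrt((n * dx) * np.sum(g) * dx)
assert lhs <= rhs
print(f"{lhs:.6f} <= {rhs:.6f}")
```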
\subsection{Proof of \Cref{LocalConvergence}}
\purple{In this subsection we show the convergence of ATLAS to global minimizers as presented in \Cref{LocalConvergence}. To do so, we make use of the results from \cite{attouch2010proximal}. In particular, we first present two technical lemmas (Lemma \ref{Lemma 5} and Lemma \ref{proposition6}), which are essentially generalizations of results from \cite{attouch2010proximal}. These lemmas are then used to prove the central theorem of Attouch et al.\ (here \Cref{theorem8}) in our general setting. Finally, the theorem on local convergence, \Cref{LocalConvergence}, can essentially be derived from \Cref{theorem8} and combines the two statements \cite[Theorem 9]{attouch2010proximal} and \cite[Theorem 10]{attouch2010proximal}.
We refer the interested reader to \cite{attouch2010proximal} for further details. For each statement we also provide a reference to the original result in brackets.
}
\begin{lemma}[{\cite[Lemma 5]{attouch2010proximal}}] \label{Lemma 5}
Under assumptions $(H)$ and $(H1)$ the sequences $u_k^1,\dots ,v_k^R$ are well-posed
in the sense that all minimizations in (\ref{ProxAlgo0}) have unique and finite solutions.
Moreover, \smallskip \\
$(i)$
\begin{align*}
L(u_k^1,\dots ,v_k^R) + \sum_{r=1}^R \frac{1}{2\lambda_{k-1}^r} \Vert u_k^r -
u_{k-1}^r \Vert_2^2 + \sum_{r=1}^R \frac{1}{2\mu_{k-1}^r} \Vert v_k^r - v_{k-1}^r
\Vert_2^2 \leq L(u_{k-1}^1,\dots ,v_{k-1}^R),
\end{align*}
for all $k \geq 1$, hence $L(u_k^1,\dots ,v_k^R)$ is non-increasing. \smallskip \\
$(ii)$
\begin{align*}
\sum_{k=1}^\infty \left( \Vert u_k^1 - u_{k-1}^1 \Vert_2^2 + \cdots +
\Vert v_k^R - v_{k-1}^R \Vert_2^2 \right) < \infty,
\end{align*}
hence $\lim_{k \rightarrow \infty} \left( \Vert u_k^1 - u_{k-1}^1 \Vert_2 + \cdots +
\Vert v_k^R - v_{k-1}^R \Vert_2 \right) = 0$. \smallskip \\
$(iii)$ For $k \geq 1$, define
\begin{align*}
(\tilde{u}_k^1,\dots,\tilde{v}_k^R) :=
\begin{pmatrix}
\nabla_{u^1} Q(u_k^1,\dots,v_k^R) - \nabla_{u^1} Q(u_k^1,u_{k-1}^2,\dots,
v_{k-1}^R) \\
\vdots \\
0
\end{pmatrix} -
\begin{pmatrix}
\frac{1}{\lambda_{k-1}^1} (u_k^1 - u_{k-1}^1) \\
\vdots \phantom{\frac{1}{2}} \\
\frac{1}{\mu_{k-1}^R} (v_k^R - v_{k-1}^R)
\end{pmatrix}.
\end{align*}
Then $(\tilde{u}_k^1,\dots,\tilde{v}_k^R) \in \partial L(u_k^1,\dots,v_k^R)$ and for all
bounded subsequences $(u_{k'}^1,\dots,v_{k'}^R)$ we have $(\tilde{u}_{k'}^1,\dots,
\tilde{v}_{k'}^R) \rightarrow 0$, hence $\mathrm{dist} (0,\partial L(u_{k'}^1,\dots,v_{k'}^R))
\rightarrow 0$, for $k' \rightarrow \infty$.
\end{lemma}
\begin{Proof}
From $\inf L > -\infty$ and $(H)$ it follows that the functions to be minimized in
(\ref{ProxAlgo0}) are bounded below, coercive and lower semicontinuous and, therefore,
the sequence $(u_k^1,\dots,v_k^R)$ is well-posed.
\paragraph{$(i)$} \purple{Using the} minimizing properties of $u_k^1,\dots,v_k^R$ from
(\ref{ProxAlgo0}), \purple{we obtain}
\begin{align*}
L(u_{k-1}^1,\dots,v_{k-1}^R) &\geq L(u_{k}^1,u_{k-1}^2,\dots,v_{k-1}^1, \dots, v_{k-1}^R) + \frac{1}{2\lambda_{k-1}^1}
\Vert u_{k}^1 - u_{k-1}^1 \Vert_2^2 \\
&\geq \left( L(u_{k}^1,u_{k-1}^2,\dots,u_{k-1}^R,v_{k}^1,v_{k-1}^2,\dots,v_{k-1}^R) +
\frac{1}{2\mu_{k-1}^1} \Vert v_{k}^1 - v_{k-1}^1 \Vert_2^2 \right) + \frac{1}{2\lambda_{k-1}^1}
\Vert u_{k}^1 - u_{k-1}^1 \Vert_2^2 \\
&\phantom{\;\;\;\; \;\;\;\; \;\;\;\; \;\;\;\; \;\;\;\; \;\;\;\;} \vdots \\
&\geq L(u_{k}^1,\dots,v_{k}^R) + \sum_{r=1}^R \frac{1}{2\lambda_{k-1}^r} \Vert
u_{k}^r - u_{k-1}^r \Vert_2^2 + \sum_{r=1}^R \frac{1}{2\mu_{k-1}^r} \Vert v_{k}^r - v_{k-1}^r
\Vert_2^2.
\end{align*}
\paragraph{$(ii)$} From $(i)$ and $(H1)$ one has, for every $K \in \mathbb{N}$,
\begin{align*}
\frac{1}{2r_+} \sum_{k=1}^K \left( \Vert u_k^1 - u_{k-1}^1 \Vert_2^2 + \cdots +
\Vert v_k^R - v_{k-1}^R \Vert_2^2 \right) &\leq
\sum_{k=1}^K \left( L(u_{k-1}^1,\dots,v_{k-1}^R) - L(u_k^1,\dots,v_k^R) \right) \\
&= L(u_0^1,\dots,v_0^R) - L(u_K^1,\dots,v_K^R) \\
&< L(u_0^1,\dots,v_0^R) - \inf L < \infty.
\end{align*}
By letting $K \rightarrow \infty$ we get the claim.
\paragraph{$(iii)$} By definition of $u_k^1$, $0$ must lie in the subdifferential of $\xi
\mapsto L(\xi,u_{k-1}^2,\dots,v_{k-1}^R) + \frac{1}{2\lambda_{k-1}^1} \Vert \xi - u_{k-1}^1
\Vert_2^2$ at $u_k^1$. As a similar fact holds true for the other sequences, one gets,
for all $1 \leq r \leq R$
\begin{align*}
0 &\in \frac{1}{\lambda_{k-1}^r} (u_k^r - u_{k-1}^r) + \partial_{u^r} L(u_k^1,\dots,
u_k^{r},u_{k-1}^{r+1},\dots,u_{k-1}^R,v_k^1,\dots,v_k^{r-1},v_{k-1}^r,\dots,v_{k-1}^R), \\
0 &\in \frac{1}{\mu_{k-1}^r} (v_k^r - v_{k-1}^r) + \partial_{v^r} L(u_k^1,\dots,
u_k^{r},u_{k-1}^{r+1},\dots,u_{k-1}^R,v_k^1,\dots,v_k^{r},v_{k-1}^{r+1},\dots,
v_{k-1}^R).
\end{align*}
The structure of $L$ implies $\partial_{u^r} L(u_k^1,\dots,u_k^{r},u_{k-1}^{r+1}
\dots,u_{k-1}^R,v_k^1,\dots,v_k^{r-1},v_{k-1}^r,\dots,v_{k-1}^R) = \partial f_r(u_k^r) +
\nabla_{u^r} Q(u_k^1,\dots,u_k^{r},u_{k-1}^{r+1} \dots,u_{k-1}^R,v_k^1,
\dots,v_k^{r-1},v_{k-1}^r,\dots,v_{k-1}^R)$ and a similar equation for the $v$-components.
Hence, one may rewrite the inclusions above:
\begin{align*}
-\frac{1}{\lambda_{k-1}^1} (u_k^1 - u_{k-1}^1) - ( \nabla_{u^1} Q(u_k^1,u_{k-1}^2,\dots,
v_{k-1}^R) - \nabla_{u^1} Q(u_k^1,\dots,v_k^R) ) &\in \partial f_1(u_k^1) + \nabla_{u^1}
Q(u_k^1,\dots,v_k^R) , \\
\vdots \\
-\frac{1}{\mu_{k-1}^R} (v_k^R - v_{k-1}^R) &\in \partial g_R(v_k^R) + \nabla_{v^R}
Q(u_k^1,\dots,v_k^R).
\end{align*}
This, together with Proposition 3 in the paper, yields the claim.
\end{Proof}
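A toy numerical illustration of properties $(i)$ and $(ii)$, using a hypothetical scalar objective $L(u,v)=\tfrac12(y-uv)^2+\lambda(|u|+|v|)$; this bilinear model, the closed-form soft-thresholding updates, and all parameter values are our own choices for the sketch and are not the ATLAS functional:

```python
import math

def soft(z, t):
    # soft-thresholding: proximal map of t*|.|
    return math.copysign(max(abs(z) - t, 0.0), z)

# Toy objective L(u, v) = 0.5*(y - u*v)^2 + lam*(|u| + |v|)
# with proximal alternating minimization (prox weight c ~ 1/lambda_k).
y, lam, c = 2.0, 0.1, 1.0
L = lambda u, v: 0.5 * (y - u * v) ** 2 + lam * (abs(u) + abs(v))

u, v = 1.5, 0.5                      # initial point
vals, incs = [L(u, v)], []
for _ in range(1000):
    u0, v0 = u, v
    # u-step: argmin_u 0.5*(y - u*v)^2 + lam*|u| + (c/2)*(u - u0)^2
    u = soft(y * v + c * u0, lam) / (v * v + c)
    # v-step: argmin_v 0.5*(y - u*v)^2 + lam*|v| + (c/2)*(v - v0)^2
    v = soft(y * u + c * v0, lam) / (u * u + c)
    vals.append(L(u, v))
    incs.append((u - u0) ** 2 + (v - v0) ** 2)

assert all(b <= a + 1e-12 for a, b in zip(vals, vals[1:]))  # (i): L nonincreasing
assert incs[-1] < 1e-10                                     # (ii): increments vanish
print(f"L: {vals[0]:.4f} -> {vals[-1]:.4f}, last squared increment {incs[-1]:.2e}")
```

Since each partial update is an exact minimizer of the objective plus its proximal term, the objective value cannot increase, and the accumulated proximal penalties force the squared increments to be summable, mirroring $(i)$ and $(ii)$.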
\begin{lemma}[{\cite[Proposition 6]{attouch2010proximal}}] \label{proposition6}
Assume $(H)$ and $(H1)$ hold. Let $(u_k^1,\dots,v_k^R)$ be a sequence defined by
(\ref{ProxAlgo0}) and $\omega (u_0^1,\dots,v_0^R)$ be \purple{the} (possibly empty) set of limit
points. Then,
\begin{enumerate}
\item[(i)] if $(u_k^1,\dots,v_k^R)$ is bounded, \purple{then} $\omega (u_0^1,\dots,v_0^R)$ is nonempty,
compact and connected\\
and $\mathrm{dist} ((u_k^1,\dots,v_k^R),\omega (u_0^1,\dots,v_0^R))
\rightarrow 0$ \purple{as $k \rightarrow \infty$},
\item[(ii)] $\omega (u_0^1,\dots,v_0^R) \subset \mathrm{crit} L$, \purple{where $\mathrm{crit} L$ denotes a set of critical points of $L$,}
\item[(iii)] $L$ is finite and constant on $\omega (u_0^1,\dots,v_0^R)$, equal to
$\purple{\inf_{k \in \mathbb N}} L(u_k^1,\dots,v_k^R) = \purple{\lim_{k \rightarrow \infty}} L(u_k^1,\dots,v_k^R)$.
\end{enumerate}
\end{lemma}
\begin{Proof}
$(i)$ If $(u_k^1,\dots,v_k^R)$ is bounded, there exists a convergent subsequence, which
implies $\omega (u_0^1,\dots,v_0^R)$ is nonempty. It also follows $\omega
(u_0^1,\dots,v_0^R)$ is bounded. \\
Let now $(\hat{u}^1,\dots,\hat{v}^R) \notin \omega (u_0^1,\dots,v_0^R)$ be given. There
must exist some $\varepsilon > 0$ with $(u_k^1,\dots,v_k^R) \notin B((\hat{u}^1,\dots,
\hat{v}^R),\varepsilon)$, for all $k \in \mathbb{N}$. But then $\omega (u_0^1,\dots,v_0^R)
\cap B((\hat{u}^1,\dots,\hat{v}^R), \varepsilon) = \emptyset$. This proves $\omega (u_0^1,
\dots,v_0^R)$ is closed and, hence, compact. \\
{Let us assume $\omega (u_0^1,\dots,v_0^R)$ is not connected and let $\omega_c (u_0^1,\dots,v_0^R) \subset \omega (u_0^1,\dots,v_0^R)$ be a connected component. Then, $\omega (u_0^1,\dots,v_0^R) \setminus \omega_c (u_0^1,\dots,v_0^R) \neq \emptyset$ and there exists some
$\varepsilon > 0$ such that
$$\omega_c^\varepsilon (u_0^1,\dots,v_0^R) \cap \omega (u_0^1,\dots,v_0^R) \setminus \omega_c (u_0^1,\dots,v_0^R) = \emptyset,$$
where $\omega_c^\varepsilon (u_0^1,\dots,v_0^R)$ is an $\varepsilon$-neighborhood of $\omega_c (u_0^1,\dots,v_0^R)$. We know from \Cref{Lemma 5} $(ii)$ that
$$\lim_{k \rightarrow \infty} \left( \Vert u_k^1 - u_{k-1}^1 \Vert_2 + \cdots + \Vert v_k^R -
v_{k-1}^R \Vert_2 \right) = 0.$$
Combined with the fact that $\omega_c (u_0^1,\dots,v_0^R)$ and $\omega (u_0^1,\dots,v_0^R) \setminus \omega_c (u_0^1,\dots,v_0^R)$ are both
sets of limit points of $(u_k^1,\dots,v_k^R)$, this implies the existence of a subsequence
$(u_{k'}^1,\dots,v_{k'}^R) \subset \omega_c^\varepsilon (u_0^1,\dots,v_0^R)
\setminus \omega_c^{\frac{\varepsilon}{2}} (u_0^1,\dots,v_0^R)$. As this subsequence
is bounded, it must have a limit point and $\omega (u_0^1,\dots,v_0^R) \cap
\omega_c^\varepsilon (u_0^1,\dots,v_0^R)
\setminus \omega_c^{\frac{\varepsilon}{2}} (u_0^1,\dots,v_0^R) \neq \emptyset$. Contradiction.} \\
The last part of $(i)$ can be proven in a similar way. If $\mathrm{dist} ((u_k^1,\dots,v_k^R),\omega
(u_0^1,\dots,v_0^R)) \nrightarrow 0$, there must exist a subsequence that stays at a positive distance
from $\omega (u_0^1,\dots,v_0^R)$. But this subsequence again must have a limit point, which
by definition lies in $\omega (u_0^1,\dots,v_0^R)$. Contradiction.
\paragraph{$(ii)$} We have, for all $k \geq 1$ and all $\xi^r \in \mathbb{R}^{n_1}$, $\eta^r \in
\mathbb{R}^{n_2}$,
\begin{align*}
L(u_k^1,u_{k-1}^2,\dots,v_{k-1}^R) + \frac{1}{2\lambda_{k-1}^1} \Vert u_k^1 - u_{k-1}^1
\Vert_2^2 &\leq L(\xi^1,u_{k-1}^2,\dots,v_{k-1}^R) + \frac{1}{2\lambda_{k-1}^1} \Vert
\xi^1 - u_{k-1}^1 \Vert_2^2 \\
\vdots \\
L(u_k^1,\dots,v_k^R) + \frac{1}{2\mu_{k-1}^R} \Vert v_k^R - v_{k-1}^R \Vert_2^2 &\leq
L(u_k^1,\dots,v_k^{R-1},\eta^R) + \frac{1}{2\mu_{k-1}^R} \Vert \eta^R - v_{k-1}^R
\Vert_2^2
\end{align*}
Using the bounds on $\lambda_k^r$ and $\mu_k^r$ and the special form of $L$, one
gets
\begin{align*}
f_1(u_k^1) + Q(u_k^1,u_{k-1}^2,\dots,v_{k-1}^R) + \frac{1}{2r_+} \Vert u_k^1 - u_{k-1}^1
\Vert_2^2 &\leq f_1(\xi^1) + Q(\xi^1,u_{k-1}^2,\dots,v_{k-1}^R) + \frac{1}{2r_-} \Vert \xi^1
- u_{k-1}^1 \Vert_2^2 \\
\vdots \\
g_R(v_k^R) + Q(u_k^1,\dots,v_k^R) + \frac{1}{2r_+} \Vert v_k^R - v_{k-1}^R \Vert_2^2 &\leq
g_R(\eta^R) + Q(u_k^1,\dots,v_k^{R-1},\eta^R) + \frac{1}{2r_-} \Vert \eta^R - v_{k-1}^R
\Vert_2^2
\end{align*}
Let $(\overline{u}^1,\dots,\overline{v}^R) \in \omega (u_0^1,\dots,v_0^R)$. There exists a
subsequence $(u_{k'}^1,\dots,v_{k'}^R)$ of $(u_k^1,\dots,v_k^R)$ with $(u_{k'}^1,
\dots,v_{k'}^R) \rightarrow (\overline{u}^1,\dots,\overline{v}^R)$. Together with Lemma
\ref{Lemma 5}.$(ii)$ this gives
\begin{align*}
\liminf_{k' \rightarrow \infty} f_r(u_{k'}^r) + Q(\overline{u}^1,\dots,\overline{v}^R) \leq
f_r(\xi^r) +
Q(\overline{u}^1,\dots,\xi^r,\dots,\overline{v}^R) + \frac{1}{2r_-} \Vert \xi^r -
\overline{u}^r \Vert_2^2,
\end{align*}
for all $1 \leq r \leq R$. We can now set $\xi^r = \overline{u}^r$ to obtain
\begin{align*}
\liminf_{k' \rightarrow \infty} f_r(u_{k'}^r) \leq f_r(\overline{u}^r).
\end{align*}
Together with the lower semicontinuity of $f_r$, this yields
\begin{align*}
\lim_{k' \rightarrow \infty} f_r(u_{k'}^r) = f_r(\overline{u}^r).
\end{align*}
Repeating this for $g_r$, $1 \leq r \leq R$, and recalling the continuity of $Q$, we obtain
$L(u_{k'}^1,\dots,v_{k'}^R) \rightarrow L(\overline{u}^1,\dots,\overline{v}^R)$. Combined with
Lemma \ref{Lemma 5}.$(iii)$ and the closedness properties of $\partial L$ (see Remark 1(b) {in
\cite{attouch2010proximal}}), this proves $0 \in \partial L(\overline{u}^1,\dots,\overline{v}^R)$.
\paragraph{$(iii)$} \purple{As we have just seen, for any point $(\overline{u}^1,\dots,\overline{v}^R) \in \omega (u_0^1,\dots,v_0^R)$ there exists a subsequence $(u_{k'}^1,\dots,v_{k'}^R)$ of $(u_k^1,\dots,v_k^R)$ with $L(u_{k'}^1,
\dots,v_{k'}^R) \rightarrow L(\overline{u}^1,\dots,\overline{v}^R)$. Then} $L(\overline{u}^1,\dots,\overline{v}^R) = \inf_{k \in \mathbb N} L(u_k^1,\dots,v_k^R)$, as
$L(u_k^1,\dots,v_k^R)$ is non-increasing. This holds for every limit point. Hence, $L$ is finite and constant on the set of limit points.
\end{Proof}
\paragraph{} As in \cite{attouch2010proximal} we use the notation
\begin{align*}
z_k := (u_k^1,\dots,v_k^R), \;\;\;\; \;\;\;\; &l_k := L(z_k), \\
\overline{z} := (\overline{u}^1,\dots, \overline{v}^R), \;\;\;\; \;\;\;\; &\overline{l} := L(\overline{z}).
\end{align*}
\purple{The next theorem essentially says that a sequence $z_k$ that starts in a neighborhood of a point $\overline{z}$, as quantified in \eqref{rho}, and whose objective values stay above $L(\overline{z})$ but within the KL-range, as in \eqref{eta}, converges to a critical point of $L$ near $\overline{z}$.}
\begin{theorem}[{\cite[Theorem 8]{attouch2010proximal}}] \label{theorem8}
Let $L$ satisfy $(H)$, $(H1)$ and have the KL-property at some $\overline{z}$. Denote by
$U$, $\eta$ and $\varphi : \left[ 0,\eta \right) \rightarrow \mathbb{R}$ the objects
connected to the KL-property of $L$ at $\overline{z}$. Let $\rho > 0$ be chosen such that
$B(\overline{z}, \rho) \subset U$.
Let $z_k$ be generated by (\ref{ProxAlgo0}) with $z_0$ as initial point. Let us assume that
\begin{align} \label{eta}
\overline{l} < l_k < \overline{l} + \eta,
\end{align}
for all $k \geq 0$, and
\begin{align} \label{rho}
M \varphi(l_0-\overline{l}) + 2 \sqrt{2r_+} \sqrt{l_0 - \overline{l}} + \Vert z_0 -
\overline{z} \Vert_2 < \rho,
\end{align}
where $M = 2r_+ (C{\sqrt{2R}} + \frac{1}{r_-})$ and $C$ is a Lipschitz constant for $\nabla Q$ on
$B(\overline{z}, \sqrt{2R} \rho)$. Then, the sequence $z_k$ converges to a critical point of $L$
and the following holds, for all $k \geq 0$:
\begin{enumerate}
\item[$(i)$] $z_k \in B(\overline{z}, \rho)$
\item[$(ii)$] $\sum_{i = k+1}^\infty \Vert z_{i+1} - z_i \Vert_2 \leq M \varphi(l_k -
\overline{l}) + \sqrt{2r_+} \sqrt{l_k-\overline{l}}$.
\end{enumerate}
\end{theorem}
\begin{Proof}
We may without loss of generality assume $L(\overline{z}) = 0$ (just replace $L$ by $L - L(\overline{z})$). With Lemma \ref{Lemma 5}.$(i)$ we have
\begin{align} \label{19}
l_i - l_{i+1} \geq \frac{1}{2r_+} \Vert z_{i+1} - z_i \Vert_2^2,
\end{align}
for all $i \geq 0$. Moreover, $\varphi'(l_i)$ makes sense in view of (\ref{eta}) and
$\varphi'(l_i) > 0$. Hence,
\begin{align*}
\varphi'(l_i) (l_i - l_{i+1}) \geq \frac{\varphi'(l_i)}{2r_+} \Vert z_{i+1} - z_i \Vert_2^2.
\end{align*}
Owing to $\varphi$ being concave, we obtain
\begin{align} \label{20}
\varphi(l_i) - \varphi(l_{i+1}) \geq \frac{\varphi'(l_i)}{2r_+} \Vert z_{i+1} - z_i \Vert_2^2,
\end{align}
for all $i \geq 0$. Let us first check $(i)$ for $k = 0$ and $k = 1$. We know from (\ref{rho})
that $z_0$ lies in $B(\overline{z},\rho)$. Furthermore, (\ref{19}) yields
\begin{align*}
\frac{1}{2r_+} \Vert z_1 - z_0 \Vert_2^2 \leq l_0 - l_1 \leq l_0
\end{align*}
which gives
\begin{align*}
\Vert z_1 - \overline{z} \Vert_2 \leq \Vert z_1 - z_0 \Vert_2 + \Vert z_0 - \overline{z} \Vert_2
\leq \sqrt{2r_+} \sqrt{l_0} + \Vert z_0 - \overline{z} \Vert_2 < \rho.
\end{align*}
\paragraph{} Let us now prove by induction that $z_k \in B(\overline{z},\rho)$, for all
$k \geq 0$. We assume this holds true up to some $k \geq 0$. Hence, for $0 \leq i \leq k$,
using $z_i \in B(\overline{z},\rho)$ and $0 < l_i < \eta$ we can write the KL-inequality
\begin{align*}
\varphi'(l_i) \mathrm{dist}(0,\partial L(z_i)) \geq 1.
\end{align*}
Lemma \ref{Lemma 5}.$(iii)$ says
\begin{align*}
z_i^\ast :=
\begin{pmatrix}
\nabla_{u^1} Q(u_i^1,\dots,v_i^R) - \nabla_{u^1} Q(u_i^1,u_{i-1}^2,\dots,
v_{i-1}^R) \\
\vdots \\
0
\end{pmatrix} -
\begin{pmatrix}
\frac{1}{\lambda_{i-1}^1} (u_i^1 - u_{i-1}^1) \\
\vdots \phantom{\frac{1}{2}} \\
\frac{1}{\mu_{i-1}^R} (v_i^R - v_{i-1}^R)
\end{pmatrix}
\end{align*}
is an element of $\partial L(z_i)$. So, we have
\begin{align} \label{KL}
\varphi'(l_i) \Vert z_i^\ast \Vert_2 \geq 1,
\end{align}
for all $1 \leq i \leq k$. \purple{Let us now examine} $\Vert z_i^\ast \Vert_2$, for $1 \leq i \leq k$. On the one
hand,
\begin{align*}
\left\Vert \left( \frac{1}{\lambda_{i-1}^1} (u_i^1 - u_{i-1}^1), \dots ,\frac{1}{\mu_{i-1}^R}
(v_i^R - v_{i-1}^R) \right) \right\Vert_2 \leq \frac{1}{r_-} \Vert z_i - z_{i-1} \Vert_2.
\end{align*}
On the other hand, for arbitrary $s_t \in \{ i-1,i \}$, $t \in \{ 1,\dots,2R \}$,
\begin{align*}
\Vert (u_{s_1}^1,\dots,v_{s_{2R}}^R) - (\overline{u}^1,\dots,\overline{v}^R) \Vert_2^2 &=
\Vert u_{s_1}^1 - \overline{u}^1 \Vert_2^2 + \cdots + \Vert v_{s_{2R}}^R - \overline{v}^R
\Vert_2^2 \\
&\leq \Vert z_{s_1} - \overline{z} \Vert_2^2 + \cdots + \Vert z_{s_{2R}} - \overline{z} \Vert_2^2
\leq 2R\rho^2.
\end{align*}
Hence, $(u_{s_1}^1,\dots,v_{s_{2R}}^R)$ and $z_i$ lie in $B(\overline{z}, \sqrt{2R} \rho)$. We
can use the Lipschitz continuity of $\nabla Q$ to obtain
\begin{align*}
\Vert \nabla_\theta Q(u_{s_1}^1,\dots,v_{s_{2R}}^R) - \nabla_\theta Q(u_i^1,\dots,v_i^R)
\Vert_2 \leq C \Vert z_i - z_{i-1} \Vert_2,
\end{align*}
for any $\theta \in \{ u^1,\dots,v^R \}$, which implies
\begin{align*}
\left\Vert
\begin{pmatrix}
\nabla_{u^1} Q(u_i^1,\dots,v_i^R) - \nabla_{u^1} Q(u_i^1,u_{i-1}^2,\dots,
v_{i-1}^R) \\
\vdots \\
0
\end{pmatrix}
\right\Vert_2
\leq C{\sqrt{2R}} \Vert z_i - z_{i-1} \Vert_2.
\end{align*}
We get
\begin{align*}
\Vert z_i^\ast \Vert_2 \leq (C{\sqrt{2R}} + \frac{1}{r_-}) \Vert z_i - z_{i-1} \Vert_2,
\end{align*}
for all $1 \leq i \leq k$. Now (\ref{KL}) yields
\begin{align*}
\varphi'(l_i) \geq \frac{1}{C{\sqrt{2R}} + \frac{1}{r_-}} \Vert z_i - z_{i-1} \Vert_2^{-1}, \;\;\;\;
1 \leq i \leq k,
\end{align*}
and combined with (\ref{20})
\begin{align*}
\varphi(l_i) - \varphi(l_{i+1}) \geq \frac{1}{M} \frac{\Vert z_{i+1} - z_i \Vert_2^2}{\Vert z_i
- z_{i-1} \Vert_2}, \;\;\;\; 1 \leq i \leq k.
\end{align*}
This is equivalent to
\begin{align*}
\Vert z_i - z_{i-1} \Vert_2^\frac{1}{2} (M(\varphi(l_i) - \varphi(l_{i+1})))^\frac{1}{2}
\geq \Vert z_{i+1} - z_i \Vert_2
\end{align*}
and, using $ab \leq (a^2 + b^2)/2$, this gives
\begin{align} \label{phi}
\Vert z_i - z_{i-1} \Vert_2 + M(\varphi(l_i) - \varphi(l_{i+1}))
\geq 2\Vert z_{i+1} - z_i \Vert_2, \;\;\;\; 1 \leq i \leq k.
\end{align}
Summation over $i$ leads to
\begin{align*}
\Vert z_1 - z_0 \Vert_2 + M(\varphi(l_1) - \varphi(l_{k+1})) \geq
\sum_{i=1}^k \Vert z_{i+1} - z_i \Vert_2 + \Vert z_{k+1} - z_k \Vert_2.
\end{align*}
Therefore, using the monotonicity of $\varphi$ and of $l_k$,
\begin{align*}
\Vert z_1 - z_0 \Vert_2 + M\varphi(l_0) \geq \sum_{i = 1}^k \Vert z_{i+1} - z_i \Vert_2.
\end{align*}
Finally,
\begin{align*}
\Vert z_{k+1} - \overline{z} \Vert_2 \leq \sum_{i = 1}^k \Vert z_{i+1} - z_i \Vert_2
+ \Vert z_1 - \overline{z} \Vert_2 \leq M\varphi(l_0) + 2 \sqrt{2r_+} \sqrt{l_0} +
\Vert z_0 - \overline{z} \Vert_2 < \rho
\end{align*}
which closes the induction and proves $(i)$. Moreover, (\ref{phi}) holds for all $i \geq 1$. We
can sum from $k$ to $K$ and get
\begin{align*}
\Vert z_k - z_{k-1} \Vert_2 + M(\varphi(l_k) - \varphi(l_{K+1})) \geq \sum_{i = k}^K
\Vert z_{i+1} - z_i \Vert_2 + \Vert z_{K+1} - z_K \Vert_2.
\end{align*}
For $K \rightarrow \infty$, this becomes
\begin{align*}
\sum_{i = k}^\infty \Vert z_{i+1} - z_i \Vert_2 \leq \Vert z_k - z_{k-1} \Vert_2 + M\varphi(l_k).
\end{align*}
We conclude with (\ref{19}), which proves $(ii)$:
\begin{align*}
\sum_{i = k}^\infty \Vert z_{i+1} - z_i \Vert_2 \leq M\varphi(l_k) + \sqrt{2r_+} \sqrt{l_{k-1}}
\leq M\varphi(l_{k-1}) + \sqrt{2r_+} \sqrt{l_{k-1}}.
\end{align*}
This implies that $z_k$ is convergent and, therefore, that its limit is a critical point, as
guaranteed by Lemma \ref{proposition6}.
\end{Proof}
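Before turning to the examples, it may help to see the alternating proximal scheme (\ref{ProxAlgo0}) in action on a toy instance. The following sketch is purely illustrative and not the implementation used in the paper: we take $R=1$, $f_1 \equiv 0$, $g_1 = \lambda \Vert \cdot \Vert_1$ and $Q(u,v) = \frac{1}{2}\Vert Y - uv^T \Vert_F^2$, for which both partial proximal problems have closed-form solutions (the smooth part is quadratic and coordinatewise separable); the coercivity requirements of $(H)$ are ignored here, and all parameter values are hypothetical.

```python
import numpy as np

def soft(x, t):
    # soft-thresholding, the proximal map of t*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def alternating_prox(Y, lam=0.1, step=1.0, iters=300, seed=0):
    """Toy instance of the alternating proximal scheme: minimize
    L(u, v) = lam*||v||_1 + 0.5*||Y - u v^T||_F^2  (R = 1, f_1 = 0).
    Each partial problem is solved exactly in closed form, so the
    objective values l_k are non-increasing, as in Lemma 5.(i)."""
    rng = np.random.default_rng(seed)
    n1, n2 = Y.shape
    u, v = rng.standard_normal(n1), rng.standard_normal(n2)
    hist = []
    for _ in range(iters):
        # u-step: quadratic plus proximal term, coordinatewise closed form
        u = (Y @ v + u / step) / (v @ v + 1.0 / step)
        # v-step: quadratic plus l1 plus proximal term -> soft-thresholding
        d = u @ u + 1.0 / step
        v = soft((Y.T @ u + v / step) / d, lam / d)
        hist.append(lam * np.abs(v).sum()
                    + 0.5 * np.linalg.norm(Y - np.outer(u, v)) ** 2)
    return u, v, np.array(hist)
```

On a rank-one matrix $Y$ with a sparse right factor, the recorded objective values decrease monotonically and the iterates settle near a critical point, in line with the convergence statements above.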
\subsection{Some Motivating Examples} \label{sec:Examples}
\red{Before moving to the main part of the paper, let us consider
a couple of motivating examples and applications of the considered model. \\
The first example views low-rank matrices with sparsity constraints from a machine learning perspective and extends the classical setting of sparse PCA to incomplete linear observations of the data matrix. \\
The second example is a classical problem in signal processing, namely blind deconvolution. By now this problem has been widely explored in the literature \cite{ahmed2014blind,li2018rapid,lee2016blind}. \\
In particular, in both these examples we do not necessarily expect that the measurements fulfill an RIP condition.
}
\paragraph{Example 1: Sparse Principal Component Analysis from inaccurate and incomplete linear measurements.} Principal Component Analysis (PCA) \cite{jolliffe2011principal} is a classical tool for processing large amounts of data and performing data analysis tasks such as dimensionality reduction and factor extraction. Its scope of application ranges from engineering and technology to social sciences and biology.\\
PCA and, more generally, matrix completion \cite{Cand2009} have been widely used for recommendation systems, as popularised by the so-called Netflix prize problem \cite{Bennett07thenetflix}. We illustrate PCA by considering a simple example of such a recommendation system for a grocery store, which has $n_1$ regular customers and $n_2$ products. Let $X \in \mathbb{R}^{n_1\times n_2}$ be a matrix whose components $X_{i,j}$ encode the probability of customer $i$ buying product $j$.
It is reasonable to assume that there are only $R \ll \min\{n_1,n_2\}$ underlying basic factors like age, income, family size, etc.\ which govern the customers' purchase behavior. For each basic factor $r \in [R] \coloneqq \{1,...,R\}$ one defines two vectors: a vector $u^r \in \mathbb{R}^{n_1}$ whose components $u_i^r$ encode for each customer $i \in [n_1]$ how much they are affected by the factor $r$, and a vector $v^r \in \mathbb{R}^{n_2}$ whose components encode the probability of buying product $j$ given factor $r$. Then, one can decompose
\begin{align} \label{eq:PCA}
X \approx UV^T = \sum_{r=1}^{R} u^r (v^r)^T
\end{align}
as the product of two matrices $U \in \mathbb{R}^{n_1\times R}$ and $V \in \mathbb{R}^{n_2\times R}$ with columns $u^r$ and $v^r$. Even if the product $UV^T$ only approximates $X$, the decomposition into orthogonal principal components $U$ and loadings $V$ is appealing, as it improves interpretability and reduces the amount of data to store ($\mathcal{O}(\max \{n_1, n_2\}R)$ instead of $\mathcal{O}(n_1n_2)$).\\
However, if we want to understand which factors mostly affect customers' behaviour, PCA might not be the best option, since principal components are usually a linear combination of all original variables. To further improve interpretability and reduce the number of explicitly used variables, sparse PCA \cite{zou2006sparse,d2005direct}, which promotes sparsity of the loadings $v^r$ in \eqref{eq:PCA}, has been proposed. Sparse PCA trades orthogonality of the principal components for sparse solutions. In the aforementioned example of the grocery store, it is quite reasonable to assume sparsity of the probability distributions $v^r$, as certain factors are normally more correlated with the probability of purchase of a few specific items.
For some applications one may not have access to the complete matrix $X$ but only to partial, indirect information, i.e., one has only $m \ll n_1n_2$ scalars encoding information about $X$. In the example of the grocery store this may model the situation where not all customers possess a fidelity card, which would allow to identify them individually, and the grocery store still wishes to learn the matrix $X$ from aggregated revenues. Each day $d \in [D]$ the store cashes in a certain amount of money $y^\ell_d$ corresponding to purchases of a {\it random} subset $T_d \subset [n_1]$ of its customers ($\ell \in \mathbb{N}$ is a fixed index, whose role will soon become clear). If $P^\ell \in \mathbb{R}^{n_2}$ is a vector encoding the prices $P_j^\ell$ of each product $j$ and $\mathcal{P}_{i,d} \subset [n_2]$ is the {\it random} set of products purchased by customer $i$ on day $d$, we can express the takings as
\begin{align*}
y^\ell_d = \sum_{i \in T_d} \sum_{j \in \mathcal{P}_{i,d}} P_j^\ell.
\end{align*}
If we assume that each customer $i$ visits the grocery store with probability $q_i$, we can compute the expected takings as
\begin{align*}
\mathbb E_{T_d,\mathcal{P}_{\cdot,d}} \left [ {\sum_{i \in T_d} \sum_{j \in \mathcal{P}_{i,d}} P_j^\ell} \right ]= \sum_{i=1}^{n_1} q_i \sum_{j=1}^{n_2} X_{i,j} P_j^\ell.
\end{align*}
Choosing $D$ sufficiently large, the law of large numbers guarantees that
\begin{align*}
\lim_{D\to \infty} \frac{1}{D} \sum_{d=1}^{D} y^\ell_d =\mathbb E_{T_d,\mathcal{P}_{\cdot,d}} \left [ {\sum_{i \in T_d}\sum_{j \in \mathcal{P}_{i,d}} P_j^\ell} \right ],
\end{align*}
in probability and almost surely. Moreover, by the Central Limit Theorem, we may model the average takings of $D$ days as
\begin{align*}
\frac{1}{D} \sum_{d=1}^{D} y^\ell_d= \sum_{i=1}^{n_1} q_i \sum_{j=1}^{n_2} X_{i,j} P_j^\ell + \eta_{D}^\ell,
\end{align*}
for a suitable Gaussian noise $\eta_{D}^\ell$. By defining $y^\ell = \frac{1}{D} \sum_{d=1}^{D} y^\ell_d$, we can rewrite the above equation as
\begin{align*}
y^\ell= \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} (q_i P_j^\ell) X_{i,j} + \eta_{D}^\ell = \langle A_\ell, X \rangle_F + \eta_{D}^\ell
\end{align*}
where the matrix $A_\ell \in \mathbb{R}^{n_1\times n_2}$ has entries $(q_i P_j^\ell)_{i,j}$, and $\langle \cdot, \cdot \rangle_F$ is the Frobenius scalar product.\\
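The law-of-large-numbers argument above is easy to simulate. The following sketch (toy sizes and hypothetical values for $q$, $X$ and one fixed price vector $P^\ell$; not part of the paper's experiments) draws a random subset $T_d$ of visiting customers and random purchase sets $\mathcal{P}_{i,d}$ each day, and compares the average takings with the expected value $\sum_i q_i \sum_j X_{i,j} P_j^\ell$:

```python
import numpy as np

rng = np.random.default_rng(6)
n1, n2, D = 15, 8, 20000             # customers, products, days (toy sizes)
q = rng.uniform(0.2, 0.8, n1)        # visit probabilities q_i (hypothetical)
X = rng.uniform(0.0, 0.6, (n1, n2))  # purchase probabilities X_{i,j}
P = rng.uniform(1.0, 5.0, n2)        # one fixed price vector P^l

def daily_takings():
    visits = rng.random(n1) < q                 # random subset T_d of customers
    buys = rng.random((n1, n2)) < X             # random purchase sets P_{i,d}
    return (visits[:, None] * buys * P).sum()   # prices of all realized purchases

y_bar = np.mean([daily_takings() for _ in range(D)])
expected = q @ X @ P                 # sum_i q_i sum_j X_ij P_j^l
```

For $D$ large the empirical average concentrates around the expectation, which is exactly the quantity the measurement model extracts from $X$.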
Tracking the daily sales over a time period of $m\cdot D$ days and perturbing the prices in each subperiod $\ell \in [m]$ randomly\footnote{Random price fluctuations are also applied by grocery stores for rotating promotions on products. Periodic price reductions, or sales, constitute a widely observed phenomenon in retailing. Sales occur on a regular basis, which suggests that they are not entirely due to random variations such as shocks to inventory holdings or demand.} would result in $m$ inaccurate linear measurements, where each single measurement is a random average over the entries of $X$ with an ineliminable additive noise $\eta_{D}^\ell$. The whole measurement process can be written as
\begin{align} \label{eq:measModel}
y = \mathcal{A} (X) + \eta
\end{align}
where $\mathcal{A} \colon \mathbb{R}^{n_1\times n_2} \rightarrow \mathbb{R}^m$ is a linear operator defined by the matrices $A_1,...,A_m$ and $\eta =(\eta_{D}^1,\dots,\eta_{D}^m)^T \in \mathbb{R}^m$ models the noise.
As we demonstrate in this paper, it is possible by means of our resource efficient algorithm to recover a rank-$R$ matrix $X$ with an effectively $(s_1,s_2)$-sparse non-orthogonal rank-$1$ decomposition from a number $m \approx R(s_1+ s_2)$ of random noisy measurements. This would offer a plausible solution to the grocery store problem. (We should stress that in general our algorithm will converge to a data-fitting solution with low-rank and sparse components for arbitrary linear measurement operators $\mathcal{A}$.)
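A minimal sketch of the measurement model \eqref{eq:measModel} for this example, with toy dimensions and hypothetical values for $q$, $X$ and the price vectors: each measurement matrix $A_\ell = q (P^\ell)^T$ is rank one, so $\langle A_\ell, X\rangle_F = q^T X P^\ell$. Such highly structured measurements are consistent with the remark above that an RIP need not hold in these applications.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, m = 30, 20, 15               # customers, products, price periods (toy sizes)
q = rng.uniform(0.1, 0.9, n1)        # visit probabilities q_i (hypothetical)
X = rng.uniform(0.0, 1.0, (n1, n2))  # purchase-probability matrix (ground truth)
P = rng.uniform(1.0, 10.0, (m, n2))  # randomly perturbed price vectors P^l

# Each measurement matrix A_l = q (P^l)^T has entries q_i * P_j^l, so
# <A_l, X>_F = sum_{i,j} q_i P_j^l X_{i,j} = q^T X P^l.
A = [np.outer(q, P[l]) for l in range(m)]
y_clean = np.array([np.vdot(A_l, X) for A_l in A])
y = y_clean + 0.01 * rng.standard_normal(m)   # additive Gaussian noise eta_D^l
```

The operator $\mathcal{A}$ is linear in $X$, and every $A_\ell$ has rank one, so the $m$ measurements probe $X$ only through weighted row/column averages.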
\paragraph{Example 2: Blind deconvolution in signal processing.} In \emph{blind deconvolution} \cite{haykin1994blind} one is interested in recovering two unknown vectors $w$ and $s$ solely from their (cyclic) convolutional product
\begin{align} \label{eq:BlindDeconvolution}
y = w \ast s + \eta= \left( \sum_{i=1}^{m} w_i s_{(k-i)\, \mathrm{mod}\, m} \right)_{k = 1}^m + \eta,
\end{align}
where $\eta$ is again measurement noise.
In imaging applications, $s$ represents the picture and $w$ an unknown blurring kernel \cite{stockham1975blind}. In signal transmission, $s$ is a coded message and $w$ models the properties of the transmission channel \cite{godard1980self}. Independently of the concrete application, problem \eqref{eq:BlindDeconvolution} is highly under-determined and contains ambiguities.\\
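As a quick sanity check on the displayed formula, the cyclic convolution in \eqref{eq:BlindDeconvolution} can be evaluated directly from the sum or, equivalently, via the discrete Fourier transform (circular convolution theorem). The sketch below uses 0-based indices instead of the $1,\dots,m$ convention of the text:

```python
import numpy as np

def cyclic_conv(w, s):
    """Direct evaluation of (w * s)_k = sum_i w_i s_{(k-i) mod m},
    matching the displayed formula (0-based indices here)."""
    m = len(w)
    return np.array([sum(w[i] * s[(k - i) % m] for i in range(m))
                     for k in range(m)])

rng = np.random.default_rng(2)
m = 16
w, s = rng.standard_normal(m), rng.standard_normal(m)

# Circular convolution theorem: w * s = IDFT(DFT(w) .* DFT(s)).
y_fft = np.fft.ifft(np.fft.fft(w) * np.fft.fft(s)).real
```

The agreement of the two evaluations, and the linearity in $w$ for fixed $s$, is precisely the bilinearity exploited by the lifting technique discussed next.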
In \cite{ahmed2014blind} the authors exploited the bilinearity of the convolution: \eqref{eq:BlindDeconvolution} can be represented as a linear map acting on the tensor product $ws^T$, a technique commonly known as \emph{lifting}. They assumed in addition that the channel properties $w$ and the message $s$ are drawn from lower dimensional subspaces and are of the form $w = Bh$ and $s = Cx$, with $h \in \mathbb{C}^{n_1}$ and $x \in \mathbb{C}^{n_2}$ being coefficient vectors encoding channel and message ($B$ and $C$ are suitable transformation matrices). Accordingly, they rewrite \eqref{eq:BlindDeconvolution} as
\begin{align*}
y = \mathcal{A} (X)+ \eta,
\end{align*}
where the rank-1 matrix $X=hx^T \in \mathbb{C}^{n_1\times n_2}$ has to be recovered from $m$ linear measurements, a quite popular model in compressed sensing literature \cite{Recht:2010fk}. Under suitable assumptions on $\mathcal{A}$ the recovery of $X$ is solved by (convex) nuclear norm minimization.\\
In \emph{blind demixing} \cite{ling2017blind,ling2017regularized,JungKrahmerStoeger2017} or MIMO channel identification \cite{dili01} a receiver gets the overlay of $R$ different convolutions, which translates the above-mentioned formulation into the recovery of rank-$R$ matrices from linear measurements of the type
\begin{align*}
y = \mathcal{A} \left( \sum_{r=1}^{R} h_r x_r^T \right)+ \eta.
\end{align*}
As already mentioned in \cite{JungKrahmerStoeger2017}, one can typically impose extra structure like sparsity on the channel impulse responses $h$ to further reduce the number of measurements $m$. In this case one wants to benefit from exploiting two different structures at the same time: low-rankness and sparsity.
\subsection{Properties of Minimizers of $J_{\alpha, \beta}^R$} \label{sec:Main1}
Let us begin with some basic properties that minimizers of $J_{\alpha, \beta}^R$ have under very general assumptions. For a given minimizer $(u_{\alpha, \beta}^1,...,u_{\alpha, \beta}^R,v_{\alpha, \beta}^1,...,v_{\alpha, \beta}^R)$ of $J_{\alpha, \beta}^R$ we denote
\begin{align} \label{Xab}
X_{\alpha, \beta} = U_{\alpha, \beta} \Sigma_{\alpha, \beta} V_{\alpha, \beta}^T = \sum_{r=1}^{R} (\sigma_{\alpha, \beta})_r \frac{u_{\alpha, \beta}^r}{\Vert u_{\alpha, \beta}^r \Vert_2} \left( \frac{v_{\alpha, \beta}^r}{\Vert v_{\alpha, \beta}^r \Vert_2} \right)^T
\end{align}
where $(\sigma_{\alpha, \beta})_r = \Vert u_{\alpha, \beta}^r \Vert_2 \Vert v_{\alpha, \beta}^r \Vert_2$, for all $r \in [R]$, and $\Sigma_{\alpha, \beta}$ is the diagonal matrix defined by the vector $\sigma_{\alpha, \beta}$. The first result bounds the measurement misfit of $X_{\alpha, \beta}$.
\begin{proposition}[Measurement misfit] \label{Bound-y2}
Assume $(u_{\alpha, \beta}^1,...,v_{\alpha, \beta}^R)$ is a global minimizer of $J_{\alpha, \beta}^R$ and $\hat{X}$ fulfills the noisy measurements $y~=~\mathcal{A}(\hat{X})~+~\eta$. Then,
\begin{align}\label{measfit}
\Vert y - \mathcal{A}(X_{\alpha, \beta}) \Vert_2^2 \le \Vert \eta \Vert_2^2 + C_{2,1} \sqrt[3]{\alpha \beta^2} \sum_{r=1}^{R} \left( \Vert \hat{u}^r \Vert_2 \Vert \hat{v}^r \Vert_1 \right)^{\frac{2}{3}},
\end{align}
where $C_{2,1}$ is the constant from Lemma \ref{fpq} below.
\end{proposition}
\begin{lemma}[Boundedness] \label{Bound-uv2}
Assume $(u_{\alpha, \beta}^1,...,v_{\alpha, \beta}^R)$ is a global minimizer of $J_{\alpha, \beta}^R$ and $\hat{X}$ fulfills the noisy measurements $y~=~\mathcal{A}(\hat{X})~+~\eta$. If $\| y - \mathcal{A}(X_{\alpha, \beta}) \|_2 \ge \| \eta \|_2$, we have
\begin{align} \label{eq:Bounds}
\begin{split}
\sum_{r=1}^{R} \Vert u_{\alpha, \beta}^r \Vert_2^2 &\le C_{2,1} \sqrt[3]{ \frac{\beta^2}{\alpha^2}} \sum_{r=1}^{R} \left( \Vert \hat{u}^r \Vert_2 \Vert \hat{v}^r \Vert_1 \right)^\frac{2}{3}, \\
\sum_{r=1}^{R} \Vert v_{\alpha, \beta}^r \Vert_1 &\le C_{2,1} \sqrt[3]{ \frac{\alpha}{\beta}} \sum_{r=1}^{R} \left( \Vert \hat{u}^r \Vert_2 \Vert \hat{v}^r \Vert_1 \right)^\frac{2}{3},
\end{split}
\end{align}
and
\begin{align*}
\sum_{r=1}^{R} \left( \Vert u_{\alpha, \beta}^r \Vert_2 \Vert v_{\alpha, \beta}^r \Vert_1 \right)^\frac{2}{3} \le \sum_{r=1}^{R} \left( \Vert \hat{u}^r \Vert_2 \Vert \hat{v}^r \Vert_1 \right)^\frac{2}{3}
\end{align*}
where $C_{2,1}$ is the constant from Lemma \ref{fpq}.
\end{lemma}
The two estimates in \eqref{eq:Bounds} point out an interesting property of $J_{\alpha, \beta}^R$. If one chooses the parameters $\alpha$ and $\beta$ of different magnitude, either the left or the right components of a minimizer $(u_{\alpha, \beta}^1,...,v_{\alpha, \beta}^R)$ can be forced to become smaller in norm, while control of the other components is lost. If $\alpha$ and $\beta$ are chosen to be equal, the norm bounds are balanced and one obtains
\begin{align*}
\sum_{r=1}^{R} \left( \Vert u_{\alpha, \beta}^r \Vert_2^2 + \Vert v_{\alpha, \beta}^r \Vert_1 \right) \le C_{2,1} \sum_{r=1}^{R} \left( \Vert \hat{u}^r \Vert_2 \Vert \hat{v}^r \Vert_1 \right)^\frac{2}{3}.
\end{align*}
The assumption $\| y - \mathcal{A}(X_{\alpha, \beta}) \|_2 \ge \| \eta \|_2$ is not restrictive. As soon as $\| y - \mathcal{A}(X_{\alpha, \beta}) \|_2 = \| \eta \|_2$ one does not have to decrease $\alpha$ and $\beta$ any further, \red{as doing so would lead to overfitting}.
\red{We can further control the effective sparsity of the minimizer's right components.}
\begin{lemma}[Sparsity control] \label{SparsityControl}
Assume $\mathcal{A} \colon \mathbb{R}^{n_1\times n_2} \rightarrow \mathbb{R}^m$ is a linear operator and $y \in \mathbb{R}^m$. Let $(u_{\alpha, \beta}^1,...,v_{\alpha, \beta}^R)$ be a minimizer of $J_{\alpha, \beta}^R$. For all $r \in [R]$ we have that if $\Vert v_{\alpha, \beta}^r \Vert_2 \ge \Vert y \Vert_2^2/\gamma$ for some $\gamma > 0$, then
\begin{align*}
\frac{\Vert v_{\alpha, \beta}^r \Vert_1}{\Vert v_{\alpha, \beta}^r \Vert_2} < \frac{\gamma}{\beta}.
\end{align*}
\end{lemma}
The above lemma states that those components $v_{\alpha, \beta}^r$ which do not lie too close to zero are effectively sparse. Numerical experiments suggest that if $\hat{X}$ has $s$-sparse right components $\hat{v}^r$, ATLAS yields solutions with exactly sparse right components $v_{\alpha, \beta}^r$. The theoretical necessity of considering
{\it effective} sparsity also when $\hat{X}$ has $s$-sparse right components is caused by the difficulty of obtaining better bounds on the support size of the vectors $v_{\alpha, \beta}^r$.\\
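The effective sparsity controlled by Lemma \ref{SparsityControl} is conveniently measured by the ratio $(\Vert v \Vert_1/\Vert v \Vert_2)^2$: by the Cauchy-Schwarz inequality it is at most $s$ for an exactly $s$-sparse vector, while it equals $n$ for a flat vector of length $n$. A minimal numerical illustration (toy vectors, not tied to the minimizers above):

```python
import numpy as np

def effective_sparsity(v):
    """(||v||_1 / ||v||_2)^2: at most s for an exactly s-sparse vector
    (Cauchy-Schwarz) and equal to n for a 'flat' vector of length n."""
    return (np.abs(v).sum() / np.linalg.norm(v)) ** 2

n, s = 1000, 10
v_sparse = np.zeros(n)
v_sparse[:s] = np.random.default_rng(3).standard_normal(s)  # exactly s-sparse
v_flat = np.ones(n)                                         # maximally spread
```

In this language, the lemma bounds $\Vert v_{\alpha,\beta}^r \Vert_1 / \Vert v_{\alpha,\beta}^r \Vert_2$ by $\gamma/\beta$, i.e., the effective sparsity level by $(\gamma/\beta)^2$.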
To conclude,
we can state that $X_{\alpha, \beta}$ is, even without any more specific requirements on $\mathcal{A}$, a reasonable approximation of $\hat{X}$, in the sense that it is of rank $R$, fulfills the measurements up to the noise level, and has effectively sparse right components. However, the parameters $\alpha$ and $\beta$ have to be chosen with care, neither too small nor too large. Moreover, Lemma \ref{Bound-uv2} shows that $\alpha$ and $\beta$ have to be chosen of similar magnitude, as otherwise either the left or the right components of $X_{\alpha, \beta}$ cannot be controlled.
\subsection{Recovery Properties of Minimizers of $J_{\alpha, \beta}^R$ with RIP}
\label{sec:RecoveryPropertiesRIP}
To explain the performance of ATLAS illustrated in Figure \ref{fig:Greyscale} we introduce two sets of matrices, which are sums of few rank-one matrices with sparse singular vectors.
\red{We stress here that we are not requiring the orthogonality of the components.}
We also define corresponding additive RIPs, which are useful for proving the approximation result and can be seen as a generalization of the rank-$R$ and $(s_1,s_2)$-sparse RIP of Lee et al.\ in \cite{lee2013near}.
\paragraph{Matrix models.} The first matrix set is, for $\Gamma \ge 1$,
\begin{align} \label{S}
\begin{split}
S_{s_1,s_2}^{R,\Gamma} = \{ Z \in \mathbb{R}^{n_1\times n_2} \colon \exists\; u^1,...,u^R \in &\mathbb{R}^{n_1},\; v^1,...,v^R \in \mathbb{R}^{n_2}, \text{ and } \sigma = (\sigma_1,\dots,\sigma_R)^T \in \mathbb{R}^R, \text{ s.t. } \\
Z &= \sum_{r=1}^{R} \sigma_r u^r (v^r)^T, \\
\text{where } |\mathrm{supp}(u^r)| \le s_1,\; |\mathrm{supp}(v^r)| &\le s_2,\; \Vert u^r \Vert_2 = \Vert v^r \Vert_2 = 1, \text{ for all } r \in [R], \text{ and } \Vert \sigma \Vert_2 \le \Gamma \}.
\end{split}
\end{align}
It contains all matrices $Z$ which can be decomposed into three matrices $U\Sigma V^T$ such that $U \in \mathbb{R}^{n_1\times R}$ and $V \in \mathbb{R}^{n_2 \times R}$ have $s_1$-sparse (resp. $s_2$-sparse) unit norm columns and $\Sigma \in \mathbb{R}^{R\times R}$ is the diagonal matrix defined by $\sigma$. The set is restricted to decompositions with $\Vert \Sigma \Vert_F \le \Gamma$, and $\Gamma\geq 1$ is a natural condition as explained in Remark \ref{Gammacond} below.
The important difference w.r.t.\ \cite{lee2013near} is that the columns do not need to share a common support. Moreover, we do not require $U$ and $V$ to be orthogonal matrices. Nevertheless, all matrices $X$ with rank at most $R$, $s_1$-sparse (resp.\ $s_2$-sparse) left and right singular vectors, and $\Vert X \Vert_F \le \Gamma$ are in $S_{s_1,s_2}^{R,\Gamma}$. In this case $\Vert \Sigma \Vert_F = \Vert X \Vert_F$. We call such an admissible decomposition $U\Sigma V^T$ in \eqref{S} a Sparse Decomposition (SD) of $Z$. Note that the SD is not unique and that the SVD of $Z$ is not necessarily an SD of $Z$.\\
\red{We further generalize $S_{s_1,s_2}^{R,\Gamma}$ to effectively sparse vectors. Recall the definition of $K_{n,s}$ in Definition \ref{def:EffectivelySparse}.}
\red{For $\Gamma \ge 1$, we define}
\begin{align}\label{K}
\begin{split}
K_{s_1,s_2}^{R,\Gamma} = \{ Z \in \mathbb{R}^{n_1\times n_2} \colon \exists\; u^1,...,u^R \in K_{n_1,s_1},\; v^1,...,v^R &\in
K_{n_2,s_2}, \text{ and } \sigma = (\sigma_1,\dots,\sigma_R)^T \in \mathbb{R}^R, \text{ s.t. } \\
Z &= \sum_{r=1}^{R} \sigma_r u^r (v^r)^T, \\
\text{where } \Vert u^r \Vert_2 = \Vert v^r \Vert_2 &= 1, \text{ for all } r \in [R], \text{ and } \Vert \sigma \Vert_2 \le \Gamma \}
\end{split}
\end{align}
which is a relaxed version of $S_{s_1,s_2}^{R,\Gamma}$ as $S_{s_1,s_2}^{R,\Gamma} \subset K_{s_1,s_2}^{R,\Gamma}$.
One of the most important features of the class $K_{s_1,s_2}^{R,\Gamma}$ is that it is, to a certain extent, closed under summation: if $Z \in K_{s_1,s_2}^{R,\Gamma}$ and $\hat Z \in K_{\hat s_1,\hat s_2}^{R,\hat\Gamma}$, then
\begin{equation}\label{sumrule}
Z - \hat Z \in K_{\max\{s_1,\hat s_1\},\max\{s_2,\hat s_2\}}^{2R,\sqrt{\Gamma^2+\hat{\Gamma}^2}}.
\end{equation}
We call such an admissible decomposition $Z = U\Sigma V^T$ in \eqref{K} an effectively Sparse Decomposition of $Z$ and use the same shorthand notation, i.e., SD. The context makes clear which decomposition is meant. Any $\hat{X}$ decomposed as in \eqref{xSVD} belongs to $K_{n_1,s}^{R,\Gamma}$ if $\sum_{r=1}^{R} \Vert \hat{u}^r \Vert_2^2 \Vert \hat{v}^r \Vert_2^2 \le \Gamma^2$. Having the sets $S_{s_1,s_2}^{R,\Gamma}$ and $K_{s_1,s_2}^{R,\Gamma}$ at hand we now define corresponding RIPs.
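The closedness rule \eqref{sumrule} is simply the observation that stacking the two SDs, with the weights of $\hat Z$ negated, yields an admissible decomposition of $Z - \hat Z$ with $2R$ terms and weight norm $\sqrt{\Gamma^2 + \hat\Gamma^2}$. A small numerical check (random dense components, so effective sparsity is not enforced in this toy setup):

```python
import numpy as np

rng = np.random.default_rng(4)
n1, n2, R = 12, 10, 3

def random_sd(R, gamma):
    """A random decomposition with unit-norm components u^r, v^r and a
    weight vector sigma normalized so that ||sigma||_2 = gamma."""
    U = rng.standard_normal((n1, R)); U /= np.linalg.norm(U, axis=0)
    V = rng.standard_normal((n2, R)); V /= np.linalg.norm(V, axis=0)
    sigma = rng.standard_normal(R)
    sigma *= gamma / np.linalg.norm(sigma)
    return U, sigma, V

U, sig, V = random_sd(R, 2.0)
Uh, sigh, Vh = random_sd(R, 1.5)
Z  = (U * sig) @ V.T      # sum_r sigma_r u^r (v^r)^T
Zh = (Uh * sigh) @ Vh.T

# Z - Zh admits the stacked decomposition with 2R terms and weight
# vector (sigma, -sigma_hat) of norm sqrt(Gamma^2 + Gamma_hat^2).
U2 = np.hstack([U, Uh]); V2 = np.hstack([V, Vh])
sig2 = np.concatenate([sig, -sigh])
```

The stacked components keep their unit norms and their (effective) sparsity, which is why only the rank budget and the weight norm grow in \eqref{sumrule}.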
\begin{definition}[Additive Rank-$R$ and (effectively) $(s_1,s_2)$-sparse RIP$_\Gamma$] \label{apprRIPDef}
A linear operator $\mathcal{A} : \mathbb{R}^{n_1 \times n_2} \rightarrow \mathbb{R}^m$ satisfies the additive rank-$R$ and $(s_1,s_2)$-sparse RIP$_\Gamma$ with isometry constant $\delta > 0$ if
\begin{align} \label{apprRIP}
\left| \Vert \mathcal{A}(Z) \Vert_2^2 - \Vert Z \Vert_F^2 \right| \le \delta,
\end{align}
for all $Z \in S_{s_1,s_2}^{R,\Gamma}$. \\
If \eqref{apprRIP} holds for all $Z \in K_{s_1,s_2}^{R,\Gamma}$, we say $\mathcal{A}$ has the additive rank-$R$ and effectively $(s_1,s_2)$-sparse RIP$_\Gamma$. Note that the rank-$R$ and effectively $(s_1,s_2)$-sparse RIP$_\Gamma$ implies the rank-$R$ and $(s_1,s_2)$-sparse RIP$_\Gamma$ as $S_{s_1,s_2}^{R,\Gamma} \subset K_{s_1,s_2}^{R,\Gamma}$.
\end{definition}
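For intuition, the additive-RIP defect in \eqref{apprRIP} can be probed empirically. The sketch below uses a hypothetical Gaussian operator, normalized so that $\mathbb{E}\Vert \mathcal{A}(Z)\Vert_2^2 = \Vert Z \Vert_F^2$, and evaluates the defect for a single random $Z \in S_{s_1,s_2}^{R,\Gamma}$; the actual RIP of course requires a uniform bound over the whole set, which a single sample cannot certify:

```python
import numpy as np

rng = np.random.default_rng(5)
n1, n2, m = 20, 20, 4000   # heavily oversampled toy regime
# Gaussian measurement operator, scaled so that E||A(Z)||_2^2 = ||Z||_F^2.
A = rng.standard_normal((m, n1 * n2)) / np.sqrt(m)

def meas(Z):
    return A @ Z.ravel()

def rip_defect(Z):
    """The additive-RIP quantity | ||A(Z)||_2^2 - ||Z||_F^2 |."""
    return abs(np.linalg.norm(meas(Z)) ** 2 - np.linalg.norm(Z, "fro") ** 2)

# A random matrix from S: R = 2 rank-one terms with sparse unit components
# (implicit weights sigma = (1, 1), so ||sigma||_2 <= Gamma for Gamma >= sqrt(2)).
R, s1, s2 = 2, 3, 3
Z = np.zeros((n1, n2))
for _ in range(R):
    u = np.zeros(n1); u[rng.choice(n1, s1, replace=False)] = rng.standard_normal(s1)
    v = np.zeros(n2); v[rng.choice(n2, s2, replace=False)] = rng.standard_normal(s2)
    Z += np.outer(u / np.linalg.norm(u), v / np.linalg.norm(v))

defect = rip_defect(Z)
```

In this oversampled regime the defect is small; shrinking $m$ toward the order of $R(s_1+s_2)$ makes the concentration, and hence a small isometry constant $\delta$, increasingly delicate.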
\paragraph{Recovery results.} We are ready now to state the main recovery result: if one assumes the RIP, any appropriate global minimizer of $J_{\alpha, \beta}^R$ provides an approximation to $\hat{X}$, with an error bound depending on the magnitude of $\alpha$ and $\beta$, the sparsity $s$, the RIP constant $\delta$, and the magnitude of $\hat{X}$ measured in an appropriate Schatten quasi-norm. The approximation is worsened in an additive way by the noise level.
\begin{theorem}[Approximation of $\hat{X}$] \label{ApproxX}
Fix the positive constants $\alpha, \beta > 0$, $\Gamma \ge 1$, and the effective sparsity indicator level $1\leq s \leq n_2$. Let $\mathcal{A}$ have the additive rank-$2R$ effectively $(n_1,{\max\{s,(\gamma/\beta)^2\}})$-sparse RIP$_{(c+1)\Gamma}$ with RIP-constant $0 < \delta < 1$, for a fixed choice of $\gamma > 0$ and $c \ge 1$.\\
%
If $\hat{X} \in K_{n_1,s}^{R,\Gamma}$ has rank $R$ and $y = \mathcal{A}(\hat{X}) + \eta \in \mathbb{R}^m$, then
{\begin{align} \label{Approximation}
\Vert \hat{X} - X_{\alpha, \beta} \Vert_F \le \sqrt{s^{\frac{1}{3}} R^{\frac{2}{3}} C_{2,1}c_{\hat U}} \sqrt[6]{\alpha \beta^2} \Vert \hat{X} \Vert_{\frac{2}{3}}^\frac{1}{3} + 2\Vert \eta \Vert_2 + \sqrt{\delta},
\end{align}}
for any global minimizer $(u_{\alpha, \beta}^1,...,v_{\alpha, \beta}^R)$ of $J_{\alpha, \beta}^R$ that fulfills $\Vert v_{\alpha, \beta}^r \Vert_2 \ge (\| \hat{X} \|_{F} + \| \eta \|_2 + \sqrt{\delta})^2/\gamma$ for all $r \in [R]$ and $\Vert \sigma_{\alpha, \beta} \Vert_F \le c\Gamma$ in \eqref{Xab}. In this case, in particular, $X_{\alpha, \beta} \in K_{n_1,(\gamma/\beta)^2}^{R,c\Gamma}$ with the SD in \eqref{Xab}.
\end{theorem}
\paragraph{} There are some aspects of this result we would like to discuss before we proceed:\\
$(a)$ If we could take the limits $\alpha \rightarrow 0$ and $\beta \rightarrow 0$, the error in \eqref{Approximation} would vanish up to noise level and RIP-constant. However, this limit cannot be performed, as there are important restrictions dictated by the need to fulfill simultaneously the RIP and the assumptions on $X_{\alpha, \beta}$. As $\beta$ becomes small, the conditions for having the RIP degenerate, i.e., reconstruction for a fixed number of measurements only works down to a minimal $\beta$. Letting $\alpha$ tend to zero while keeping $\beta$ fixed leads to minimizers which violate the lower bound $\Vert v_{\alpha, \beta}^r \Vert_2 \ge (\| \hat{X} \|_{F} + \| \eta \|_2 + \sqrt{\delta})^2/\gamma$ or the upper bound $\Vert \sigma_{\alpha, \beta} \Vert_F \le c\Gamma$. To see this, note that by Lemma \ref{Bound-uv2} small $\alpha$ leads to strict bounds on $\| v_{\alpha, \beta}^r \|_2$ and weak bounds on $\| u_{\alpha, \beta}^r \|_2$.\\
$(b)$ Let us mention that in case $\hat{X} \in K_{n_1,s}^{R,\Gamma}$ and the SD of $\hat{X}$ coincides with its SVD, then in view of the identity \eqref{qneq0} the factor $c_{\hat U} R^{2/3}$ in the error estimates \eqref{Approximation} and \eqref{err1eq} can be substituted by $1$, hence there would be no dependence on the rank $R$.\\
$(c)$ In order to clarify how $(\gamma/\beta)^2$ and $s$ are related in the RIP in Theorem \ref{ApproxX} (and Corollary \ref{ApproxXcor} below), let us assume for simplicity that the SD of $\hat{X}$ coincides with its SVD and $\alpha = \beta$. Consequently, to get an error bound independent of $s$ in \eqref{Approximation}, $\alpha$ and $\beta$ have to be chosen of order $\mathcal{O}(s^{-\frac{1}{3}})$, i.e., $(\gamma/\beta)^2$ is of order $\mathcal{O}(s^{\frac{2}{3}})$ which means that an $(n_1,\gamma^2s)$-sparse RIP$_{(c+1)\Gamma}$ is sufficient for recovery.\\
$(d)$ The result only applies to minimizers whose scaling matrix $\Sigma_{\alpha, \beta}$ is bounded in Frobenius norm and whose right components $v_{\alpha, \beta}^r$ are not too close to zero. The first requirement is necessary as the RIP is restricted to SDs with scaling matrices within a ball around zero. The second one is needed to show some level of effective sparsity of the minimizers $X_{\alpha, \beta}$ (see also the discussion in Section \ref{sec:Main1}).
While effective sparsity of (right) component vectors of $X_{\alpha, \beta}$ is naturally desired and expected if $\hat X \in K_{n_1,s}^{R,\Gamma}$, we were not able in all cases to show {\it exact} sparsity of (right) component vectors of $X_{\alpha, \beta}$ if $\hat X \in S_{n_1,s}^{R,\Gamma}$, but again only their effective sparsity. Hence, as an artifact of the proof, we are bound to use the stronger effectively $(s_1,s_2)$-sparse RIP$_\Gamma$ for the theoretical analysis also in this case. In numerical experiments, however, for $\hat X \in S_{n_1,s}^{R,\Gamma}$ the obtained minimizers $X_{\alpha, \beta}$ are empirically {\it exactly} sparse (not just effectively sparse) and, hence, the weaker rank-$2R$ $(s_1,s_2)$-sparse RIP$_\Gamma$ might suffice in practice. The latter can already be guaranteed for a smaller number of measurements.\\
$(e)$ As argued in \Cref{MainProofSection} the above theorem can be straightforwardly extended to sparsity on left component vectors. In this case $J_{\alpha, \beta}^R$ has to be adapted by considering $\ell_1$-norm penalties on the $u$-components.\\
$(f)$ It is important to require $\mathrm{rank}(\hat{X}) = R$, as otherwise the equivalence of Schatten norm and normed SD in \eqref{qneq} cannot be guaranteed. If the SD of $\hat{X}$ coincides with its SVD, though, the rank condition may be dropped.
\paragraph{} By choosing $\alpha$ and $\beta$ in relation to the noise-to-signal ratio $\Vert \eta \Vert_2^2/\Vert \hat{X} \Vert_{\frac{2}{3}}^{\frac{2}{3}}$ we obtain the following version of Theorem \ref{ApproxX}, which has the form of a typical compressed sensing recovery bound. Assuming the RIP, the approximation error is linear in the noise level, while the slope of the linear function depends on the sparsity level and possibly the rank. Peculiarly, however, for a fixed number of measurements the RIP fails for exceedingly small noise. Hence, the result is valid only for sufficiently small signal-to-noise ratio. As we will show in \Cref{Numerics} with numerical experiments, this apparently counterintuitive result is factual and not an artifact of the proof technique. A possible intuitive explanation is that $J_{\alpha, \beta}^R$ becomes a mere least-squares fit without sparsifying effect for $\alpha$ and $\beta$ close to zero, which is caused by vanishing noise.
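To make the noise-adapted parameter choice concrete, the following sketch (ours, for illustration only; helper names are hypothetical) computes the Schatten-$q$ quasi-norm via singular values and then the choice $\alpha = \beta = \Vert \eta \Vert_2^2/\Vert \hat{X} \Vert_{2/3}^{2/3}$ used in the corollary below:

```python
import numpy as np

def schatten(X, q):
    # Schatten-q (quasi-)norm: (sum_i sigma_i(X)^q)^(1/q), via singular values.
    sig = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(sig ** q) ** (1.0 / q))

def noise_adapted_params(X_hat, eta):
    # Parameter choice alpha = beta = ||eta||_2^2 / ||X_hat||_{2/3}^{2/3}.
    ab = np.linalg.norm(eta) ** 2 / schatten(X_hat, 2 / 3) ** (2 / 3)
    return ab, ab
```

For instance, for $X = \mathrm{diag}(1,8)$ one gets $\Vert X \Vert_{2/3} = (8^{2/3}+1)^{3/2} = 5^{3/2}$, so $\Vert X \Vert_{2/3}^{2/3} = 5$.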
\begin{corollary} \label{ApproxXcor}
Let {$\hat{X} \in K_{n_1,s}^{R,\Gamma}$ with $\mathrm{rank}(\hat{X}) = R$ fulfill the noisy measurements $y = \mathcal{A}(\hat{X}) + \eta$} and let $\alpha = \beta = \Vert \eta \Vert_2^2/\Vert \hat{X} \Vert_{\frac{2}{3}}^{\frac{2}{3}} < 1$. Assume $\mathcal{A}$ has for some $\gamma>0$ and $c \ge 1$ the additive rank-$2R$ effectively $\left(n_1,{\max}\{s,\gamma^2 (\Vert \hat{X} \Vert_{\frac{2}{3}}^{\frac{2}{3}}/\Vert \eta \Vert_2^2)^2 \}\right)$-sparse RIP$_{(c+1)\Gamma}$ with RIP-constant $0 < \delta < 1$. Then, for $X_{\alpha, \beta}$ with $\Vert \Sigma_{\alpha, \beta} \Vert_F \le c\Gamma$ and $\Vert v_{\alpha, \beta}^r \Vert_2 \ge (\| \hat{X} \|_{F} + \| \eta \|_2 + \sqrt{\delta})^2/\gamma$, $r \in [R]$, we have
\begin{align}\label{err1eq}
\Vert \hat{X} - X_{\alpha, \beta} \Vert_F \le \left( 2 \sqrt{c_{\hat U} R^{2/3} s^{1/3} } + 2 \right) \Vert \eta \Vert_2 + \sqrt{\delta}.
\end{align}
\end{corollary}
\begin{remark} \label{ApproxXcorRemark}
One could object that the simple zero solution $\bar X=0$ is already a competitor in the case of large noise $\|\eta \|_2 \geq \Xi(m)\| \hat X\|_F$, i.e.,
\begin{equation}\label{naive}
\Vert \hat{X} - \bar X \Vert_F \le \Xi(m)^{-1} \|\eta \|_2.
\end{equation}
However, for a larger number $m$ of measurements we can consider a lower noise level, i.e., $ \Xi(m) \to 0$, and the bound \eqref{naive} would explode, while \eqref{err1eq} would remain effective. Moreover, our numerical experiments show empirically that also for larger noise levels, computing $X_{\alpha, \beta}$ yields a solution which outperforms not only trivial competitors such as $\bar X$, but also state-of-the-art methods such as SPF.
\end{remark}
\subsection{RIP Results for Subgaussian Operators} \label{SectionRIP}
As already mentioned above, a linear operator $\mathcal{A}$ of the form \eqref{A} which is drawn from a subgaussian distribution fulfills the RIPs introduced above with high probability. This is stated in the following lemma. We first recall the definition of subgaussian random variables (for further details see \cite{vershynin2010introduction}).
\begin{definition}[Subgaussian Random Variable]
A random variable $\xi \in \mathbb{R}$ is called $\mathcal{K}$-subgaussian if the tail bound $\Pr[]{|\xi| > t} \le C\exp(-ct^2/\mathcal{K}^2)$ holds for all $t \ge 0$, where $c,C > 0$ are absolute constants. The smallest such $\mathcal{K} > 0$ is called the subgaussian norm of $\xi$ and is denoted by $\Vert \xi \Vert_{\psi_2}$.
\end{definition}
\begin{remark}
The class of subgaussian random variables covers important special cases such as Gaussian, Bernoulli, and, more generally, all bounded random variables (see \cite{vershynin2010introduction}).
\end{remark}
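The tail definition is equivalent, up to absolute constants, to the moment characterization $\Vert \xi \Vert_{\psi_2} \asymp \sup_{p \ge 1} p^{-1/2}(\mathbb{E}|\xi|^p)^{1/p}$ (see \cite{vershynin2010introduction}). A small Monte-Carlo sketch of this proxy (ours, illustrative only) for the Gaussian and Bernoulli cases:

```python
import numpy as np

def psi2_proxy(samples, pmax=10):
    # Monte-Carlo proxy for the subgaussian norm via the (equivalent, up to
    # absolute constants) moment characterization sup_p p^{-1/2} (E|xi|^p)^{1/p}.
    return max(np.mean(np.abs(samples) ** p) ** (1.0 / p) / np.sqrt(p)
               for p in range(1, pmax + 1))

rng = np.random.default_rng(1)
gauss = psi2_proxy(rng.standard_normal(200_000))     # Gaussian
bern = psi2_proxy(rng.choice([-1.0, 1.0], 200_000))  # Bernoulli: |xi| = 1, proxy = 1
```

Both values are finite and of order one, as expected for subgaussian variables; for Bernoulli the maximum is attained at $p=1$ with value exactly $1$.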
\begin{lemma}[RIP for Subgaussian Operators] \label{GaussianRIP}
Let $\Gamma \ge 1$ and let $\mathcal{A} \colon \mathbb{R}^{n_1\times n_2} \rightarrow \mathbb{R}^m$ be the linear measurement operator of the form \eqref{A}. Assume all $A_i$, $1 \le i \le m$, have i.i.d.\ $\mathcal{K}$-subgaussian entries $a_{i,j,k}$ with mean $0$ and variance $1$. If
\begin{align} \label{meas1}
m \gtrsim \left( \frac{\delta}{\Gamma^2 R} \right)^{-2} R (s_1+s_2) \log \left( \max \{n_1,n_2\} \right)
\end{align}
then ${\mathcal{A}}$ has the additive rank-$R$ and $(s_1,s_2)$-sparse RIP$_\Gamma$ with isometry constant $\delta \in (0,\Gamma^2 R)$ with probability at least $1-2\exp(-C (\delta/\Gamma^2 R) m)$ where $C > 0$ is a constant depending on $\mathcal{K}$. If
\begin{align} \label{meas2}
m \gtrsim \left( \frac{\delta}{\Gamma^2 R} \right)^{-2} R (s_1+s_2) \log^3 \left( \max \{n_1,n_2\} \right)
\end{align}
then ${\mathcal{A}}$ has the additive rank-$R$ and effectively $(s_1,s_2)$-sparse RIP$_\Gamma$ with isometry constant $\delta \in (0,\Gamma^2 R)$ with probability at least $1-2\exp(-C'(\delta/\Gamma^2 R) m)$ where $C' > 0$ is a constant depending on $\mathcal{K}$.
\end{lemma}
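The additive RIP \eqref{apprRIP} is easy to probe empirically. The sketch below (ours; it assumes the normalization $\mathcal{A}(Z)_i = \langle A_i, Z\rangle_F/\sqrt{m}$ for the operator in \eqref{A}, so that $\mathbb{E}\Vert \mathcal{A}(Z)\Vert_2^2 = \Vert Z \Vert_F^2$) draws a Gaussian operator and evaluates the RIP deviation on a unit-norm, sparse, rank-one test matrix:

```python
import numpy as np

def make_operator(m, n1, n2, rng):
    # m i.i.d. standard Gaussian measurement matrices, stored as rows and
    # normalized by 1/sqrt(m) (assumed normalization: E||A(Z)||_2^2 = ||Z||_F^2).
    M = rng.standard_normal((m, n1 * n2)) / np.sqrt(m)
    return lambda Z: M @ Z.ravel()

def rip_deviation(A, Z):
    # Additive RIP deviation | ||A(Z)||_2^2 - ||Z||_F^2 |.
    return abs(np.linalg.norm(A(Z)) ** 2 - np.linalg.norm(Z, "fro") ** 2)
```

For $m$ large relative to $R(s_1+s_2)\log\max\{n_1,n_2\}$ the deviation concentrates around zero, in line with the lemma.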
\begin{remark}\label{rem:dim}
Lemma \ref{GaussianRIP} states, for $\delta = \Delta (\Gamma^2 R)$, $\Delta \in (0,1)$, that, up to log-factors, $m \approx \mathcal{O} \left( \Delta^{-2} R(s_1+s_2) \right)$ subgaussian measurements are sufficient to have $\delta$-stable embeddings of $S_{s_1,s_2}^{R,\Gamma}$ and $K_{s_1,s_2}^{R,\Gamma}$ (cf.\ \cite[Def. 1.1 \& Thm. 1.5]{Plan2014}). Note that $\Gamma^2 R$ is the squared Frobenius diameter of $S_{s_1,s_2}^{R,\Gamma}$ and $K_{s_1,s_2}^{R,\Gamma}$.
As we restrict ourselves below to $s$-effectively sparse right component vectors of $\hat{X}$, we only use the rank-$R$ and (effectively) $(n_1,s)$-sparse RIP$_\Gamma$.
For the presented results to be meaningful, a typical dimensional setting is $R \ll s \approx n_1 \ll n_2$. In fact, if $n_1$ were close to $n_2$ in magnitude, the sparsity $s$ of the right component vectors would not help to reduce the order of the number of measurements, $m\approx \mathcal O( R(n_1+s)) \approx \mathcal O( Rn_1) \approx \mathcal O( Rn_2)$. Moreover, if $R$ were close to $n_1$, the matrix would not be low-rank, as $n_1$ is the maximal possible rank.\\
In \cite{lee2013near} the authors give information-theoretic lower bounds on the necessary number of measurements for reconstructing low-rank matrices with sparse singular vectors sharing a common support, namely $m \gtrsim R(s_1+s_2)$. As we do not require orthogonality of SDs in $S_{s_1,s_2}^{R,\Gamma}$ resp.\ $K_{s_1,s_2}^{R,\Gamma}$ (excluding a scaling invariant RIP which is independent of the set diameter, see Remark \ref{DominiksRemark}), the bounds in \eqref{meas1} and \eqref{meas2} are up to $\log$-factors at the information-theoretic limit for the class of matrices in \cite{lee2013near}. We are not aware of any information-theoretic lower bounds for the more general class of matrices considered in the present paper.
\end{remark}
\begin{remark} \label{DominiksRemark}
The additive RIP in \eqref{apprRIP} differs from the commonly used multiplicative RIPs of the form
\begin{align} \label{multRIP}
(1-\delta) \Vert Z \Vert_F^2 \le \Vert \mathcal{A}(Z) \Vert_2^2 \le (1+\delta) \Vert Z \Vert_F^2
\end{align}
as it is not scaling invariant and $\mathcal{A}(Z) = \mathcal{A}(Z')$ does not imply $Z = Z'$ but only $\Vert Z - Z' \Vert_2^2 \le \delta$. In fact, it is not possible to derive a classical scaling invariant RIP like \eqref{multRIP} on $K_{s_1,s_2}^{R,\Gamma}$ under conditions similar to \eqref{meas2}. The main problem is the non-orthogonality of the SD. A simple example illustrates this point: Assume $R=2$, $m \simeq 2 (n_1+s) \log^3 \left( \max \{n_1,n_2\} \right)$, and that the linear operator $\mathcal{A}$ fulfills \eqref{multRIP} for all $Z \in K_{n_1,s}^{2,1}$. Choose some $u \in \mathbb{R}^{n_1}, v_1 \in \mathbb{R}^{n_2}$ of unit norm with $\Vert v_1 \Vert_1 \le \sqrt{s}/2$. Define $v_2 := -v_1 + \varepsilon w$ for any $w \in \mathbb{R}^{n_2}$ and choose $\varepsilon > 0$ sufficiently small to ensure $\Vert v_2 \Vert_1 \le \sqrt{s}$ and $\Vert v_2 \Vert_2 \approx 1$. Then $Z := (1/2)uv_1^T + (1/2)uv_2^T \in K_{n_1,s}^{2,1}$ and \eqref{multRIP} holds. But this implies by definition of $Z$ and scaling invariance of \eqref{multRIP} that
\begin{align*}
(1-\delta) \Vert uw^T \Vert_F^2 \le \Vert \mathcal{A}(uw^T) \Vert_2^2 \le (1+\delta) \Vert uw^T \Vert_F^2
\end{align*}
which means the RIP directly extends to all rank-$1$ matrices (not only those with a sparse right component). If $n_1,s \ll n_2$, this is a clear contradiction to information-theoretic lower bounds, as corresponding RIPs would require at least $m \simeq \max\{n_1,n_2\}$ (see \cite[Section 2.1]{candes2011tight}).
\end{remark}
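The algebra behind this counterexample is quickly verified numerically: with $v_2 = -v_1 + \varepsilon w$ the two rank-one terms cancel up to $\tfrac12 u v_1^T + \tfrac12 u v_2^T = \tfrac{\varepsilon}{2} u w^T$, so a scaling invariant RIP on $Z$ would rescale to a RIP on the arbitrary rank-one matrix $uw^T$. A sketch (ours, with illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2, s, eps = 6, 40, 16, 1e-3
u = rng.standard_normal(n1); u /= np.linalg.norm(u)
v1 = np.zeros(n2); v1[:4] = 0.5        # unit norm, ||v1||_1 = 2 = sqrt(s)/2
w = rng.standard_normal(n2)
v2 = -v1 + eps * w                     # ||v2||_1 <= sqrt(s) for small eps
Z = 0.5 * np.outer(u, v1) + 0.5 * np.outer(u, v2)
# After cancellation, the "sparse-looking" SD of Z is (eps/2) u w^T:
assert np.allclose(Z, 0.5 * eps * np.outer(u, w))
```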
\subsection{Convergence of ATLAS} \label{LocalConvergenceSection}
In the following, by adapting results of Attouch et al.\ \cite{attouch2010proximal}, we show convergence of ATLAS.
Specifically, there is a neighborhood $\mathcal{U}_{(u_{\alpha, \beta}^1,...,v_{\alpha, \beta}^R)}$ of a global minimizer $(u_{\alpha, \beta}^1,...,v_{\alpha, \beta}^R)$ such that the sequence $(u_k^1,...,v_k^R)$ defined by \eqref{ATLAS0} converges to $(u_{\alpha, \beta}^1,...,v_{\alpha, \beta}^R)$ if the initialization lies within $\mathcal{U}_{(u_{\alpha, \beta}^1,...,v_{\alpha, \beta}^R)}$. However, we do not prove that any particular initialization fulfills this requirement and leave this open issue for future research, cf.\ Remark \ref{rem:Radius} below. The techniques in \cite{attouch2010proximal} might also be adjusted to analyze the rate of convergence of ATLAS, but this would go beyond the scope of this work and is a topic for future investigation.
We begin with a generalization of the basic conditions of \cite{attouch2010proximal}. Let $L$ be a functional of the following form:
\begin{align*}
(H) \;\;\;\;
&\begin{cases}
L(u^1,\dots,u^R,v^1,\dots,v^R) = \sum_{r=1}^R f_r(u^r) + Q(u^1,\dots ,v^R) + \sum_{r=1}^R
g_r(v^r), \\
f_r: \mathbb{R}^{n_1} \rightarrow {\mathbb{R} \cup \{ \infty \}}, \; g_r:\mathbb{R}^{n_2} \rightarrow {\mathbb{R} \cup \{ \infty \}}
\text{ are proper lower semicontinuous, for } 1 \leq r
\leq R, \\
Q: \mathbb{R}^{n_1} \times \cdots \times \mathbb{R}^{n_1} \times \mathbb{R}^{n_2} \times
\cdots \times \mathbb{R}^{n_2} \rightarrow \mathbb{R} \text{ is a }
C^1 \text{ function}, \\
\nabla Q \text{ is Lipschitz continuous on bounded subsets of } \mathbb{R}^{n_1} \times
\cdots \times \mathbb{R}^{n_1} \times \mathbb{R}^{n_2} \times \cdots \times
\mathbb{R}^{n_2}.
\end{cases}
\intertext{For given $(u_0^1,\dots ,v_0^R) \in (\mathbb{R}^{n_1})^R \times (\mathbb{R}^{n_2})^R$ and fixed sequences $(\lambda_k^1)_{k \in \mathbb{N}},\dots ,(\lambda_k^R)_{k \in \mathbb{N}},(\mu_k^1)_{k \in \mathbb{N}},\dots ,(\mu_k^R)_{k \in \mathbb{N}}$ assume that}
(H1) \;\;\;\;
&\begin{cases}
\inf L > -\infty, \\
L(\cdot ,u_0^2,\dots ,v_0^R) \text{ is proper}, \\
\text{for some positive } r_- < r_+ \text{ the sequences } \lambda_k^1,\dots,\mu_k^R
\text{ belong to } (r_- , r_+).
\end{cases}
\end{align*}
The adapted main result of \cite{attouch2010proximal} now guarantees convergence of the so-called Proximal Alternating Minimization
\begin{align} \label{ProxAlgo0}
(\text{PAM}) \;\;\;\;
&\begin{cases}
u_{k+1}^1 = \argmin_{u \in \mathbb{R}^{n_1}} L(u,u_k^2,\dots,u_k^R,v_k^1,\dots,v_k^R) + \frac{1}{2\lambda_k^1} \Vert u - u_k^1 \Vert_2^2,\\
v_{k+1}^1 = \argmin_{v \in \mathbb{R}^{n_2}} L(u_{k+1}^1,u_k^2,\dots,u_k^R,v,v_k^2,\dots,v_k^R) + \frac{1}{2\mu_k^1} \Vert v - v_k^1 \Vert_2^2,\\
\vdots \\
u_{k+1}^R = \argmin_{u \in \mathbb{R}^{n_1}} L(u_{k+1}^1,\dots,u_{k+1}^{R-1},u,v_{k+1}^1,\dots,v_{k+1}^{R-1},v_k^R) + \frac{1}{2\lambda_k^R} \Vert u - u_k^R \Vert_2^2,\\
v_{k+1}^R = \argmin_{v \in \mathbb{R}^{n_2}} L(u_{k+1}^1,\dots,u_{k+1}^R,v_{k+1}^1,\dots,v_{k+1}^{R-1},v) + \frac{1}{2\mu_k^R} \Vert v - v_k^R \Vert_2^2,
\end{cases}
\end{align}
to a stationary point of $L$ (resp.\ convergence to a global minimizer $(u_\ast^1,\dots,v_\ast^R)$ of $L$ if the initialization $(u_0^1,\dots,v_0^R)$ of (PAM) lies sufficiently close to $(u_\ast^1,\dots,v_\ast^R)$) if $L$ fulfills $(H)$, $(H1)$ and the so-called Kurdyka-Lojasiewicz property, which requires $L$ to behave well around stationary points.
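For the rank-one instance of $J_{\alpha,\beta}^R$ the PAM scheme is particularly transparent: the proximal $u$-step is a ridge problem with a closed-form solution, and the proximal $v$-step is a lasso, which we approximate here by a short inner ISTA loop (soft thresholding). The following sketch is ours and purely illustrative, not the authors' reference implementation of ATLAS; helper names and the inner solver are our choices.

```python
import numpy as np

def soft(x, t):
    # Soft thresholding = prox of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def J(A, y, u, v, alpha, beta):
    # J(u, v) = ||y - A(u v^T)||_2^2 + alpha ||u||_2^2 + beta ||v||_1.
    r = y - np.einsum("mij,ij->m", A, np.outer(u, v))
    return r @ r + alpha * (u @ u) + beta * np.abs(v).sum()

def pam_rank1(A, y, alpha, beta, lam=1.0, mu=1.0, iters=30, ista=60, seed=0):
    m, n1, n2 = A.shape
    rng = np.random.default_rng(seed)
    u, v = rng.standard_normal(n1), rng.standard_normal(n2)
    hist = [J(A, y, u, v, alpha, beta)]
    for _ in range(iters):
        # u-step: ridge with proximal term, solved exactly.
        B = A @ v                                  # (m, n1), rows A_i v
        G = B.T @ B + (alpha + 0.5 / lam) * np.eye(n1)
        u = np.linalg.solve(G, B.T @ y + (0.5 / lam) * u)
        # v-step: lasso with proximal term, a few ISTA iterations from v_k.
        C = np.einsum("i,mij->mj", u, A)           # (m, n2), rows A_i^T u
        vk = v.copy()
        L = 2 * np.linalg.norm(C, 2) ** 2 + 1.0 / mu
        for _ in range(ista):
            grad = 2 * C.T @ (C @ v - y) + (v - vk) / mu
            v = soft(v - grad / L, beta / L)
        hist.append(J(A, y, u, v, alpha, beta))
    return u, v, hist
```

The descent property of PAM (each proximal step cannot increase $L$) translates into a monotonically non-increasing objective history, which is easy to check numerically.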
\begin{definition}[Kurdyka-Lojasiewicz Property]
A proper lower semicontinuous function $f:\mathbb{R}^n \rightarrow {\mathbb{R} \cup \{ \infty \}}$
is said to have the KL-property at $\ol{x} \in \mathrm{dom} \, \partial f$\footnote{Here $\partial f$ denotes the subdifferential of $f$ and $\mathrm{dom} \, \partial f$ the domain on which $\partial f$ takes finite values.} if there exist $\eta \in
\left( 0,\infty \right]$, a neighborhood $U$ of $\overline{x}$ and a continuous concave
function $\varphi : \left[ 0,\infty \right) \rightarrow \mathbb{R}_+$ such that
\begin{enumerate}
\item[-] $\varphi(0) = 0$,
\item[-] $\varphi$ is $C^1$ on $(0,\eta)$,
\item[-] $\varphi'(t) > 0$, for all $t \in (0,\eta)$,
\item[-] and, for all $x \in U \cap \{ x \in \mathbb{R}^n : f(\overline{x}) < f(x) <
f(\overline{x}) + \eta \}$, the KL-inequality holds:
\begin{align*}
\varphi'(f(x)-f(\overline{x}))\; \mathrm{dist}(0,\partial f(x)) \geq 1.
\end{align*}
\end{enumerate}
\end{definition}
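As a minimal illustration (ours, not from \cite{attouch2010proximal}): $f(x) = x^2$ has the KL-property at $\overline{x} = 0$ with $\varphi(t) = \sqrt{t}$, i.e., with exponent $\theta = \tfrac12$ in the form $\varphi(t) = ct^{1-\theta}$ appearing below, since for every $x \neq 0$

```latex
\varphi'\bigl(f(x) - f(\overline{x})\bigr)\,
\operatorname{dist}\bigl(0, \partial f(x)\bigr)
  = \frac{1}{2\sqrt{x^{2}}} \cdot \lvert 2x \rvert
  = 1 \;\ge\; 1 .
```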
\begin{theorem}[Local Convergence to Global {Minimizers}] \label{LocalConvergence}
Assume that $L$ satisfies $(H)$, $(H1)$. If $L$ has the Kurdyka-Lojasiewicz property at its global minimizer $(u_\ast^1,\dots,v_\ast^R)$, then there exist $\varepsilon, \eta > 0$ such that the initial conditions
\begin{align*}
\Vert (u_0^1,\dots,v_0^R) - (u_\ast^1,\dots,v_\ast^R) \Vert_2 < \varepsilon, \;\;\;\; \min L < L(u_0^1,\dots,v_0^R) < \min L + \eta,
\end{align*}
imply that the iterations $(u_k^1,\dots,v_k^R)$ generated by (PAM) converge to $(u_*^1,\dots,v_*^R) \in \argmin L$. If $L$ has the Kurdyka-Lojasiewicz property at each point of its domain, then either $\Vert (u_k^1,\dots,v_k^R) \Vert_2 \rightarrow \infty$ or $(u_k^1,\dots,v_k^R)$ converges to a stationary point of $L$.
\end{theorem}
\begin{remark}{} \label{rem:Radius}
(i) The main difficulty in characterizing the convergence radius in Theorem \ref{LocalConvergence} is to determine the KL-parameters $U$ and $\eta$ of $L$. Doing so for a non-convex functional like $J_{\alpha, \beta}^R$ is a challenging task in its own right and thus the main reason for us to defer the treatment of initialization to future work.\\
(ii) We will see below that $J_{\alpha, \beta}^R$ has the KL-property with $\varphi(t) = ct^{1-\theta}$, for $c > 0$ and $\theta \in [0,1)$. As \cite{attouch2010proximal} shows, a characterization of $\theta$ would determine the convergence speed of the alternating minimization of $L$. While \cite{li2013global} can be used to compute $\theta$ for piecewise convex polynomials, it is unclear how to do the same for non-convex polynomials. Addressing this more general issue would in particular provide a convergence speed analysis of ATLAS.
\end{remark}
By applying \Cref{LocalConvergence} to $L=J_{\alpha, \beta}^R$ and ATLAS we obtain convergence to stationary points and local convergence to global minimizers, as the sequence $(u_k^1,\dots,v_k^R)$ is bounded by coercivity of $J_{\alpha, \beta}^R$. One can check that the conditions $(H)$, $(H1)$ are fulfilled by $J_{\alpha, \beta}^R$ and ATLAS for a suitable choice of the sequences $(\lambda_k^1)_{k \in \mathbb{N}},\dots ,(\lambda_k^R)_{k \in \mathbb{N}}$, $(\mu_k^1)_{k \in \mathbb{N}},\dots ,(\mu_k^R)_{k \in \mathbb{N}}$. It remains to validate the KL-property. As mentioned in \cite[Section 4.3]{attouch2010proximal}, all semialgebraic functions satisfy the KL-property at each point with $\varphi(t) = ct^{1-\theta}$ for some $\theta \in [0,1) \cap \mathbb{Q}$ and $c > 0$. Hence, by showing that $J_{\alpha, \beta}^R$ is semialgebraic, we get the KL-property for free. But we pay the price of having no better knowledge of the parameters $\varepsilon$ and $\eta$ in \Cref{LocalConvergence}, which characterize the convergence radius. Therefore, let us conclude by showing that $J_{\alpha, \beta}^R$ is semialgebraic, i.e., that $\mathrm{graph}(J_{\alpha, \beta}^R) \subset \mathbb{R}^{Rn_1+Rn_2} \times \mathbb{R}$ is a semialgebraic set.\\
A set in $\mathbb{R}^d$ is called semialgebraic if it can be written as a finite union of sets of the form
\begin{align*}
\{ x \in \mathbb{R}^d \; : \; p_i(x) = 0, \; q_i(x) > 0, \; i = 1,\dots,p \},
\end{align*}
where $p_i,q_i$ are real polynomials.
First, the absolute value of one component of a vector, $h(x) := \vert x_i \vert$, is a semialgebraic function as
\begin{align*}
\mathrm{graph} (h) = \{ (x,r) \in \mathbb{R}^d \times \mathbb{R} : x_i + r = 0, \; x_i < 0 \}
\cup \{ (x,r) \in \mathbb{R}^d \times \mathbb{R} : x_i = 0, \; r = 0 \} \\
\cup \{ (x,r) \in \mathbb{R}^d \times \mathbb{R} : x_i - r = 0, \; -x_i < 0 \}.
\end{align*}
Second, it is clear that polynomials $p$ are semialgebraic as $\mathrm{graph} (p) = \{ (x,r) \in \mathbb{R}^d \times \mathbb{R} : p(x) - r = 0 \}$ and, third, composition, finite sums and finite products of semialgebraic functions are semialgebraic. The semialgebraicity of $J_{\alpha, \beta}^R$ follows as
\begin{align*}
J_{\alpha, \beta}^R(u^1,\dots ,v^R) = \sum_{l = 1}^m \vert y_l - \sum_{r = 1}^R \langle A_l , u^r {v^r}^T
\rangle_F \vert^2 + \alpha \sum_{r = 1}^R \sum_{l = 1}^{n_1} \vert u_l^r \vert^2 + \beta
\sum_{r=1}^R \sum_{l = 1}^{n_2} \vert v_l^r \vert
\end{align*}
is just a finite composition of semialgebraic basic units.
\subsection{Validation of Corollary \ref{ApproxXcor}} \label{Numerics1}
\paragraph{} Figure \ref{Fig:NoiseTest} shows the average approximation error of $100$ randomly drawn $\hat{X} \in \mathbb{R}^{16 \times 100}$, $\Vert \hat{X} \Vert_F = 10$, with $\mathrm{rank}(\hat{X}) = 1$ (resp.\ $\mathrm{rank}(\hat{X}) = 5$) and $10$-sparse right singular vector(s) from $m = 90$ (resp.\ $m = 400$) noisy measurements $y = \mathcal{A}(\hat{X})+\eta$. The parameters have been chosen exemplarily for the purpose of illustration. The operator $\mathcal{A}$ is drawn once at random. The error bound from Corollary \ref{ApproxXcor} is plotted as a dashed red line, whereas the average approximation errors are shown in blue. Though not tight, the theoretical bound seems to describe the linear dependence of the approximation error on the noise level appropriately. In addition, Figure \ref{Fig:NoiseTest} (b) shows a breakdown of the approximation for noise-to-signal ratios below $\approx 0.25$. This is not surprising, as the assumptions of \Cref{ApproxXcor} include a lower bound on the noise-to-signal ratio for a fixed number of measurements. Below a certain value the RIP requirements become too strong for $\mathcal{A}$ to fulfill, the RIP breaks down, and the recovery guarantees fail.
\begin{figure}[]
\centering
\captionsetup{width=.8\linewidth}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{Figure3_Rank1.png}
\caption{$R=1$}
\label{FIG:a}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{Figure3_Rank5.png}
\caption{$R=5$}
\label{FIG:b}
\end{subfigure}
\caption{Approximation quality depending on the noise level (see \Cref{Numerics1}). The $x$-axis shows the noise-to-signal ratio $\Vert \eta \Vert_2/\Vert \hat{X} \Vert_F$ while the $y$-axis presents the approximation error relative to $\Vert \hat{X} \Vert_F$. One can see the comparison of the approximation results (solid blue) and the theoretical bound (dashed red).}\label{Fig:NoiseTest}
\end{figure}
\subsection{Validation of Theorem \ref{ApproxX}} \label{Numerics2}
\paragraph{} In the second experiment, we study the influence of the parameters $\alpha$ and $\beta$ on the reconstruction accuracy. In particular, we vary $\alpha$ and $\beta$ when reconstructing one randomly drawn $\hat{X} \in \mathbb{R}^{16 \times 100}$, $\Vert \hat{X} \Vert_F = 10$, with $\mathrm{rank}(\hat{X}) = 1$ and $10$-sparse right singular vector from $90$ measurements without noise. Again the parameter choice is exemplary. We compare the three settings (a) $\alpha = \beta$, (b) $\alpha = 0.01\beta$ and (c) $\alpha = 100\beta$ in \Cref{Fig:Parameter}. One can observe a decrease of the approximation error for $\alpha,\beta \rightarrow 0$ up to a certain threshold, below which the approximation seemingly fails. While this threshold lies at $\beta \approx 0.15$ in (a) and (b), it is hardly recognizable in (c). At the same time, (a) and (b) show a much smaller approximation error. These observations suggest that the choice of $\alpha$ strongly influences the approximation quality of ATLAS. This is consistent with \Cref{ApproxX}, as a smaller $\alpha$ leads to a smaller theoretical approximation error bound.\\
Even though (a) and (b) show a linear decrease of the approximation error, which is in contrast to the square-root behavior of the theoretical bound, (c) suggests that the error indeed behaves similarly to the theoretical bound.\\
Figure \ref{Fig:Parameter} also shows that the sparsity level remains stable for sufficiently large $\beta$ and breaks down precisely at the same threshold as the approximation error, coinciding with the violation of the RIP conditions.
\begin{figure}[]
\centering
\captionsetup{width=.8\linewidth}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{Figure4_alpha=beta.png}
\caption{$\alpha = \beta$}
\end{subfigure}
\\
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{Figure4_alpha=0,01beta.png}
\caption{$\alpha = 0.01\beta$}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{Figure4_alpha=100beta.png}
\caption{$\alpha = 100\beta$}
\end{subfigure}
\caption{Approximation quality and sparsity depending on parameter size (see \Cref{Numerics2}). The approximation error (solid blue) and the theoretical bound (dashed red) are measured relative to $\Vert \hat{X} \Vert_F$ while sparsity of the right singular vector (dotted yellow) is relative to $n_2$.}\label{Fig:Parameter}
\end{figure}
\begin{figure}[]
\centering
\captionsetup{width=.8\linewidth}
\includegraphics[width=0.45\textwidth]{Figure5.png}
\caption{{Approximation error depending on the magnitude of $\hat{X}$ {in Frobenius norm} (see \Cref{Numerics2}). Approximation error (solid blue) and theoretical bound (dashed red) are relative to $\Vert \hat{X} \Vert_F$.}}\label{Fig:NormTest}
\end{figure}
\begin{figure}[]
\centering
\captionsetup{width=.8\linewidth}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{Figure6_NoNoise.png}
\caption{No noise}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{Figure6_Noise.png}
\caption{Noise}
\end{subfigure}
\caption{Recovery probability comparison of SPF (dashed) and ATLAS (solid). Plotted are the thresholds for $90\%$ (red), $70\%$ (blue) and $30\%$ (yellow) successful recoveries. A recovery was counted successful if $\Vert \hat{X} - X_\text{appr} \Vert_F/\Vert \hat{X} \Vert_F \le 0.2$ (resp. $0.4$)}\label{Fig:Greyscale}
\end{figure}
\paragraph{} For a better understanding of ATLAS we performed a third experiment, reconstructing one randomly drawn $\hat{X} \in \mathbb{R}^{16 \times 100}$ with $\mathrm{rank}(\hat{X}) = 1$ and $10$-sparse right singular vector for different values of $\Vert \hat{X} \Vert_F$ from $90$ measurements. The noise level was set to $0$ and the parameters to $\alpha = \beta = 0.5$. The outcome is depicted in \Cref{Fig:NormTest}. One can see that the relative approximation error decreases with the magnitude of $\hat{X}$, as expected from the bound of \Cref{ApproxX}. This seemingly confirms the theoretical dependence of the reconstruction error on $\Vert \hat{X} \Vert_\frac{2}{3}^\frac{1}{3}$.
\subsection{ATLAS vs SPF} \label{Numerics3}
\paragraph{} After confirming the theoretical results numerically, we now turn to the comparison of ATLAS with its state-of-the-art counterpart SPF \cite{lee2013near}. To our knowledge, SPF is the only algorithm available so far in matrix sensing, which exploits low-rankness and sparsity constraints together and comes with near-optimal recovery guarantees (not relying on a special structure of $\mathcal{A}$ as in \cite{bahmani2016near}). As \cite{lee2013near} contains exhaustive numerical comparisons of SPF and {low-rank (resp. sparse) recovery strategies based on convex relaxation}, SPF suffices for numerical benchmark tests. From the structure of the algorithms and their respective theoretical analysis one would expect SPF to yield more accurate reconstruction in the noiseless-to-low-noise setting, while ATLAS should prove to be more reliable if noise becomes large. This theoretical expectation is confirmed by the following experiments.
\paragraph{} In Figure \ref{Fig:Greyscale} we compare, for $s/n_2 \in [0,1]$ and varying $m/(n_1n_2)$, the number of successful recoveries of $30$ randomly drawn $\hat{X} \in \mathbb{R}^{4 \times 128}$, $\Vert \hat{X} \Vert_F = 10$, with $\mathrm{rank}(\hat{X}) = 1$ and $s$-sparse right singular vectors from $m$ measurements. The dimensions of $\hat{X}$ were chosen according to similar experiments in \cite{lee2013near}. We set the noise level to $0$ (resp.\ $0.3\Vert \hat{X} \Vert_F$) and counted a recovery as successful if $\Vert \hat{X} - X_\text{appr} \Vert_F/\Vert \hat{X} \Vert_F \le 0.2$ (resp.\ $0.4$). In order to compare the noisy and noiseless cases, we fix $\alpha = \beta = 0.5$ for both, which is a reasonable choice for a high noise level, but perhaps sub-optimal if the noise level is low. Selected quantiles are directly compared in Figure \ref{Fig:Greyscale} for convenience.\\
As expected, SPF outperforms ATLAS if there is no noise. In the case of strong noise on the measurements the situation changes: ATLAS improves, whereas the performance of SPF deteriorates remarkably.
\paragraph{} To further quantify this effect, we perform the experiments shown in Figure \ref{result}. For a varying number of measurements we compared the average approximation error and the recovery probability of SPF and ATLAS for $30$ randomly chosen $\hat{X} \in \mathbb{R}^{16 \times 100}$, $\Vert \hat{X} \Vert_F = 10$, with $\mathrm{rank}(\hat{X}) = 5$ and $10$-sparse right singular vectors which either share a common support or may have varying support sets. The parameters are chosen as $\alpha = \beta = 0.5$. One can clearly see that SPF outperforms ATLAS even in the noisy case for common support sets of the singular vectors. This is not surprising, as ATLAS makes no use of the additional information provided by shared support sets. If the singular vectors, however, do not share a common support set, ATLAS shows its strength in the noisy setting. SPF, which needs prior information on the row-/column-sparsity $\tilde{s}$ of $\hat{X}$, has to be initialized with $\tilde{s} = Rs$, as in the general case all support sets may differ.
\subsection{{Initialization}} \label{Numerics4}
\paragraph{} We also perform a simple test on the influence of the initialization. The plots in Figure \ref{Fig:InitPlot} compare, for $s/n_2 \in [0,0.5]$ and $m/(n_1n_2) \in [0,1]$, the number of successful recoveries of $20$ randomly drawn $\hat{X} \in \mathbb{R}^{8 \times 128}$, $\Vert \hat{X} \Vert_F = 10$, with $\mathrm{rank}(\hat{X}) \in \{1,3\}$ and $s$-sparse right singular vectors from $m$ measurements. The noise level was set to $0.3\Vert \hat{X} \Vert_F$ and a recovery was counted as successful if $\Vert \hat{X} - X_\text{appr} \Vert_F/\Vert \hat{X} \Vert_F \le 0.4$. We compare the initialization by the leading singular vectors of $\mathcal{A}^*(y)$ with the initialization by the leading singular vectors of $\hat{X} + Z$, where $Z$ was drawn at random and scaled to $\Vert Z \Vert_F = 100$ (strong perturbation) resp.\ $\Vert Z \Vert_F = 0.2$ (mild perturbation).
\\
{For $\mathrm{rank}(\hat{X})=1$ we note remarkably that the convergence radius of ATLAS is seemingly very large (yet not global), as the phase transition diagrams in Figure \ref{Fig:InitPlot} do not show significant variations from choosing as initialization the leading singular vectors of $\mathcal{A}^*(y)$ and those of small random perturbation. Instead for $\mathrm{rank}(\hat{X})=3$, initialization plays a more important} role in performance and the \purple{initialization by leading singular vectors of $\mathcal{A}^*(y)$} does not yield optimal performance.
\begin{figure}[]
\centering
\captionsetup{width=.8\linewidth}
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=\textwidth]{Figure7_Rank1.png}
\caption{$R = 1$}
\end{subfigure}
\quad
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=\textwidth]{Figure7_Rank3.png}
\caption{$R = 3$}
\end{subfigure}
\caption{Comparison of different initializations for ATLAS for \eqref{A} with noise $\eta \neq 0$ on the measurments (see \Cref{Numerics4}), namely, initialization with a strongly perturbed approximation $X_0 \approx \hat X$ (left), initialization by the leading singular vectors of $\mathcal{A}^*(y)$ (middle), and initialization with a mildly perturbed approximation $X_0 \approx \hat X$ (right). Empirical recovery probability is depicted by color from zero (blue) to one (yellow).}\label{Fig:InitPlot}
\end{figure}
\subsection{Bounds on Minimizers} \label{BoundSection}
Recall the SD related representation $\hat{X} = \sum_{r=1}^R \hat{u}^r (\hat{v}^r)^T$ in \eqref{xSVD} where $\hat{\sigma}_r = \Vert \hat{u^r} \Vert_2 \Vert \hat{v}^r \Vert_2$ and the notation $X_{\alpha, \beta} = \sum_{r=1}^{R} u_{\alpha, \beta}^r (v_{\alpha, \beta}^r)^T$. For proving Proposition \ref{Bound-y2} and Lemma \ref{Bound-uv2} we need following technical lemma.
\begin{lemma} \label{fpq}
Let $\alpha,\beta,a,b,p,q > 0$. Then
\begin{align*}
f: \mathbb{R}^+ \rightarrow \mathbb{R}, \;\;\;\; f(\lambda) := \lambda^p \alpha a + \frac{1}{\lambda^q} \beta b,
\end{align*}
attains its minimum at $\tilde{\lambda} = \left( \frac{q}{p} \frac{\beta b}{\alpha a} \right)^\frac{1}{p+q}$ and has the minimal value
\begin{align*}
\min f = f(\tilde{\lambda}) = C_{p,q} (\alpha a)^\frac{q}{p+q} (\beta b)^\frac{p}{p+q},
\end{align*}
where $C_{p,q} = \left( \frac{q}{p} \right)^\frac{p}{p+q} + \left( \frac{p}{q} \right)^\frac{q}{p+q}$.
\end{lemma}
\begin{Proof}[of Lemma \ref{fpq}]
The result is obtained by differentiation of $f$ and by searching for its derivative's zeros.
\end{Proof}
\begin{Proof}[of Proposition \ref{Bound-y2}]
By applying Lemma \ref{fpq} $R$ times using $p=2, q=1, a=\Vert \hat{u}^r \Vert_2^2, b = \Vert \hat{v}^r \Vert_1$ we get $\tilde{\lambda}_1,...,\tilde{\lambda}_R$, such that
\begin{align}
\begin{split} \label{boundEstimate2}
J_{\alpha, \beta}^R (\tilde{\lambda}_1 \hat{u}^1,...,\tilde{\lambda}_R \hat{u}^R, \frac{1}{\tilde{\lambda}_1} \hat{v}^1,..., \frac{1}{\tilde{\lambda}_R} \hat{v}^R) &= \Vert y - \mathcal{A}(\hat{X}) \Vert_2^2 + \sum_{r=1}^{R} C_{2,1} \sqrt[3]{\alpha \beta^2} \sqrt[3]{\Vert \hat{u}^r \Vert_2^2 \Vert \hat{v}^r \Vert_1^2} \\
&= \Vert \eta \Vert_2^2 + C_{2,1} \sqrt[3]{\alpha \beta^2} \sum_{r=1}^{R} \left( \Vert \hat{u}^r \Vert_2 \Vert \hat{v}^r \Vert_1 \right)^\frac{2}{3}.
\end{split}
\end{align}
Note that, although not explicitly labeled, each $\tilde{\lambda}_r$ depends on the choice of $\alpha$ and $\beta$ as well as on $a,b,p$, and $q$. The minimality of $(u_{\alpha, \beta}^1,...,v_{\alpha, \beta}^R)$ implies
\begin{align*}
\Vert y - \mathcal{A}(X_{\alpha, \beta}) \Vert_2^2 &\le J_{\alpha, \beta}^R(u_{\alpha, \beta}^1,...,v_{\alpha, \beta}^R) \le J_{\alpha, \beta}^R(\tilde{\lambda}_1 \hat{u}^1,...,\tilde{\lambda}_R \hat{u}^R, \frac{1}{\tilde{\lambda}_1} \hat{v}^1,..., \frac{1}{\tilde{\lambda}_R} \hat{v}^R) \\
&= \Vert \eta \Vert_2^2 + C_{2,1} \sqrt[3]{\alpha \beta^2} \sum_{r=1}^{R} \left( \Vert \hat{u}^r \Vert_2 \Vert \hat{v}^r \Vert_1 \right)^\frac{2}{3}
\end{align*}
which is the claim.
\end{Proof}
The proof of Lemma \ref{Bound-uv2} works in a similar way.
\begin{Proof}[of Lemma \ref{Bound-uv2}]
From \eqref{boundEstimate2} in the proof of Proposition \ref{Bound-y2} we obtain
\begin{align*}
\| y - \mathcal{A}(X_{\alpha, \beta}) \|_2^2 + \sum_{r=1}^{R} \left( \alpha \Vert u_{\alpha, \beta}^r \Vert_2^2 + \beta \Vert v_{\alpha, \beta}^r \Vert_1 \right) &= J_{\alpha, \beta}^R(u_{\alpha, \beta}^1,...,v_{\alpha, \beta}^R) \\
&\le J_{\alpha, \beta}^R(\tilde{\lambda}_1 \hat{u}^1,...,\tilde{\lambda}_R \hat{u}^R, \frac{1}{\tilde{\lambda}_1} \hat{v}^1,..., \frac{1}{\tilde{\lambda}_R} \hat{v}^R) \\
&= \Vert \eta \Vert_2^2 + C_{2,1} \sqrt[3]{\alpha \beta^2} \sum_{r=1}^{R} \left( \Vert \hat{u}^r \Vert_2 \Vert \hat{v}^r \Vert_1 \right)^\frac{2}{3}
\end{align*}
The first part of the claim follows by subtracting $\| y - \mathcal{A}(X_{\alpha, \beta}) \|_2^2$ on both sides, leaving out half of the terms on the left-hand side, and dividing by $\alpha$ (resp.~$\beta$). To show the second part, note that by minimality of $(u_{\alpha, \beta}^1,...,v_{\alpha, \beta}^R)$ and Lemma \ref{fpq}
\begin{align*}
\sum_{r=1}^{R} \left( \alpha \Vert u_{\alpha, \beta}^r \Vert_2^2 + \beta \Vert v_{\alpha, \beta}^r \Vert_1 \right) = C_{2,1} \sqrt[3]{\alpha \beta^2} \sum_{r=1}^{R} \left( \Vert u_{\alpha, \beta}^r \Vert_2 \Vert v_{\alpha, \beta}^r \Vert_1 \right)^\frac{2}{3}
\end{align*}
and hence
\begin{align*}
\| y - \mathcal{A}(X_{\alpha, \beta}) \|_2^2 + C_{2,1} \sqrt[3]{\alpha \beta^2} \sum_{r=1}^{R} \left( \Vert u_{\alpha, \beta}^r \Vert_2 \Vert v_{\alpha, \beta}^r \Vert_1 \right)^\frac{2}{3} &= J_{\alpha, \beta}^R(u_{\alpha, \beta}^1,...,v_{\alpha, \beta}^R) \\
&\le J_{\alpha, \beta}^R(\tilde{\lambda}_1 \hat{u}^1,...,\tilde{\lambda}_R \hat{u}^R, \frac{1}{\tilde{\lambda}_1} \hat{v}^1,..., \frac{1}{\tilde{\lambda}_R} \hat{v}^R) \\
&= \Vert \eta \Vert_2^2 + C_{2,1} \sqrt[3]{\alpha \beta^2} \sum_{r=1}^{R} \left( \Vert \hat{u}^r \Vert_2 \Vert \hat{v}^r \Vert_1 \right)^\frac{2}{3}.
\end{align*}
Subtracting $\| y - \mathcal{A}(X_{\alpha, \beta}) \|_2^2$ on both sides and dividing by $C_{2,1} \sqrt[3]{\alpha \beta^2}$ concludes the proof.
\end{Proof}
To show the effective sparsity as in Lemma \ref{SparsityControl}, we combine the fact that $X_{\alpha, \beta}$ is a minimizer with the assumed lower bound on $v_{\alpha, \beta}^r$.
\begin{Proof}[of Lemma \ref{SparsityControl}]
\newcommand{{r}}{{r}}
By comparing $J_{\alpha, \beta}^R (u_{\alpha, \beta}^1,...,v_{\alpha, \beta}^r)$ to $J_{\alpha, \beta}^R (0,...,0)$, we get
\begin{align*}
\sum_{r=1}^{R} \left( \alpha \Vert u_{\alpha, \beta}^r \Vert_2^2 + \beta \Vert v_{\alpha, \beta}^r \Vert_1 \right) &\le J_{\alpha, \beta}^R(u_{\alpha, \beta}^1,...,v_{\alpha, \beta}^R) \le J_{\alpha, \beta}^R(0,...,0) = \Vert y \Vert_2^2.
\end{align*}
This implies $\Vert v_{\alpha, \beta}^r \Vert_1 < \Vert y \Vert_2^2/\beta$. As by assumption $\Vert v_{\alpha, \beta}^r \Vert_2 \ge \Vert y \Vert_2^2/\gamma$, we conclude
\begin{align*}
\frac{\Vert v_{\alpha, \beta}^{r} \Vert_1}{\Vert v_{\alpha, \beta}^{r} \Vert_2} < \frac{\Vert y \Vert_2^2}{\beta} \frac{\gamma}{\Vert y \Vert_2^2} = \frac{\gamma}{\beta}.
\end{align*}
\end{Proof}
\subsection{Proof of Theorem \ref{ApproxX}} \label{MainProofSection}
We have now all necessary tools at hand to prove our main approximation result. Most of the \purple{technical} work has been \purple{already presented} in {Proposition \ref{Bound-y2} and Lemma \ref{SparsityControl}}. By combining the RIP with the above bounds on norms and sparsity of minimizers, we can estimate the worst-case distance between $\hat{X}$ and $X_{\alpha, \beta}$ depending on the size of $\alpha$ and $\beta$, the sparsity $s$, the RIP constant $\delta$, and the size of $\hat{X}$ measured in a Schatten quasi-norm.
As the reader may notice, all technical results of \Cref{BoundSection} can be adapted to effective sparsity of the left components $(u_{\alpha, \beta}^1,...,u_{\alpha, \beta}^R)$ as well. \purple{This can be done by replacing $\ell_2$-norms by corresponding $\ell_1$-norms in $J_{\alpha, \beta}^R$}. The proof Lemma \ref{SparsityControl}, which guarantees effective sparsity of the right components, is independent of the minimization of the left components. Therefore, \Cref{SparsityControl} applies also to the left components if $\ell_2$-norms are replaced by $\ell_1$-norms in $J_{\alpha, \beta}^R$. \Cref{ApproxX} then can be adapted to this setting in a straightforward way.
\begin{Proof}[of Theorem \ref{ApproxX}]
As $\Vert y \Vert_2 \le \Vert \mathcal{A}(\hat{X}) \Vert_2 + \| \eta \|_2 \le (\| X \|_F + \sqrt{\delta}) + \| \eta \|_2$, \Cref{SparsityControl} applies and yields that $X_{\alpha, \beta}$ is in $K_{n_1,(\gamma/\beta)^2}^{R,c\Gamma}$. Combined with {$\hat{X} \in K_{n_1,s}^{R,\Gamma}$}, we know from {\eqref{sumrule}} that the difference $\hat{X} - X_{\alpha, \beta} \in K_{n_1,{\max} \{s,(\gamma/\beta)^2\} }^{2R,(c+1)\Gamma}$. Hence, we apply the rank-$2R$ and effectively $(n_1,{\max} \{ s,(\gamma/\beta)^2 \})$-sparse RIP$_{(c+1)\Gamma}$ of $\mathcal{A}$ to obtain (note that $|a^2 - b^2| \le \delta$ implies $|a - b| \le \sqrt{\delta}$, for $a,b > 0$)
\begin{align*}
\Vert \hat{X} - X_{\alpha, \beta} \Vert_F &\le \Vert \mathcal{A}(\hat{X}) - \mathcal{A}(X_{\alpha, \beta}) \Vert_2 + \sqrt{\delta} \le \left( \Vert y - \mathcal{A}(X_{\alpha, \beta}) \Vert_2 + \Vert \eta \Vert_2 \right) + \sqrt{\delta} \\
&\le \sqrt{s^{\frac{1}{3}} R^{\frac{2}{3}}C_{2,1}c_{\hat U} \sqrt[3]{\alpha \beta^2} \Vert \hat{X} \Vert_{\frac{2}{3}}^\frac{2}{3} + \Vert \eta \Vert_2^2 } + \Vert \eta \Vert_2 + \sqrt{\delta} \\
&\le \sqrt{s^{\frac{1}{3}} R^{\frac{2}{3}}C_{2,1}c_{\hat U}} \sqrt[6]{\alpha \beta^2} \Vert \hat{X} \Vert_{\frac{2}{3}}^\frac{1}{3} + 2 \Vert \eta \Vert_2 + \sqrt{\delta}.
\end{align*}
In the third inequality we used Proposition \ref{Bound-y2} in combination with $\Vert \hat{v}^r\Vert_1 \leq \sqrt s \Vert \hat{v}^r \Vert_2$ and
\begin{align*}
\sum_{r=1}^{R} \left( \Vert \hat{u}^r \Vert_2 \Vert \hat{v}^r \Vert_1 \right)^\frac{2}{3} \le s^\frac{1}{3} \sum_{r=1}^{R} \left( \Vert \hat{u}^r \Vert_2 \Vert \hat{v}^r \Vert_2 \right)^\frac{2}{3} \leq c_{\hat U} R^{\frac{2}{3}}s^\frac{1}{3} \Vert \hat{X} \Vert_\frac{2}{3}^\frac{2}{3},
\end{align*}
where we used again \eqref{qneq} for $p=2/3$.
\end{Proof}
\subsection{Proof of Lemma \ref{GaussianRIP}}\label{sec:coveringnumbers}
For proving \Cref{GaussianRIP} we need bounds on the covering numbers of $S_{s_1,s_2}^{R,\Gamma}$ and $K_{s_1,s_2}^{R,\Gamma}$. The covering number $N(M,\Vert \cdot \Vert,\varepsilon)$ of a set $M$ is the minimal number of $\Vert \cdot \Vert$-balls of radius $\varepsilon$ that are needed to cover the set $M$ completely. The cardinality of any $\varepsilon$-net $\tilde{M}$ of $M$, i.e., for all $z \in M$ there is $\tilde{z} \in \tilde{M}$ with $\Vert z - \tilde{z} \Vert < \varepsilon$, yields an upper bound for $N(M,\Vert \cdot \Vert,\varepsilon)$. The bound for $N(S_{s_1,s_2}^{R,\Gamma},\Vert \cdot \Vert_F,\varepsilon)$ below is an adaption of Lemma 3.1 in \cite{candes2011tight} and its proof can be found in the Appendix.
\begin{lemma}[Covering Number for Low-Rank Matrices with Sparse Rank-$R$ Decomposition] \label{CoveringNumber}
Let $S_{s_1,s_2}^{R,\Gamma}$ be the set defined in \eqref{S}. Then, for all $0 < \varepsilon < 1$, one has
\begin{align} \label{CoveringCardinality}
\log(N(S_{s_1,s_2}^{R,\Gamma},\Vert \cdot \Vert_F,\varepsilon)) \le R(s_1+s_2+1) \log\left( \frac{18\Gamma R}{\varepsilon} \right) + Rs_1 \log\left( \frac{e n_1}{s_1} \right) + Rs_2 \log\left( \frac{e n_2}{s_2} \right).
\end{align}
\end{lemma}
\paragraph{} To derive a similar bound on $N(K_{s_1,s_2}^{R,\Gamma},\Vert \cdot \Vert_F,\varepsilon)$ we need information on the covering number of the set of effectively $s$-sparse vectors $K_{n,s} \subset \mathbb{R}^n$. Plan and Vershynin derived several interesting properties of $K_{n,s}$ in \cite{Plan2013LP}. Among those \cite[Lemma 3.4]{Plan2013LP} gives the following bound for $N(K_{n,s},\Vert \cdot \Vert_2,\varepsilon)$.
\begin{lemma}\label{effectivelySparse}
For $0 < \varepsilon < 1$ the covering number of $K_{n,s}$ is bounded by
\begin{align*}
\log N(K_{n,s},\Vert \cdot \Vert_2,\varepsilon) \le
\begin{cases}
n \log \left(\frac{6}{\varepsilon}\right) & 0 < \varepsilon < 2\sqrt{\frac{s}{n}},\\
\frac{4s}{\varepsilon^2} \log \left( \frac{9\varepsilon n}{s} \right) & 2\sqrt{\frac{s}{n}} {\leq} \varepsilon < 1.
\end{cases}
\end{align*}
\end{lemma}
\begin{lemma}[Covering Number for Matrices with effectively Sparse Decomposition] \label{CoveringNumber2}
Let $K_{s_1,s_2}^{R,\Gamma}$ be the set defined in \eqref{K}. Assume w.l.o.g.\ that $s_1/n_1 \le s_2/n_2$. \red{Then, for all $0 < \varepsilon < 6\Gamma \sqrt{R}$, one has}
\begin{align} \label{CoveringCardinality2}
\log(N(K_{s_1,s_2}^{R,\Gamma}, \Vert \cdot \Vert_F, \varepsilon)) \le \begin{cases}
R(n_1+n_2+1) \log\left( \frac{36\Gamma R}{\varepsilon} \right) & 0 < \varepsilon < 12\Gamma\sqrt{\frac{Rs_1}{n_1}},\\
%
\frac{144\Gamma^2 R^2s_1}{\varepsilon^2} \log\left( \frac{9\varepsilon n_1}{6\Gamma \sqrt{R} s_1} \right) + R(n_2+1) \log\left( \frac{36\Gamma R}{\varepsilon} \right) & 12\Gamma\sqrt{\frac{Rs_1}{n_1}} {\le} \varepsilon < 12\Gamma\sqrt{\frac{Rs_2}{n_2}},\\
%
\frac{144\Gamma^2 R^2 (s_1 + s_2)}{\varepsilon^2} \log\left( \frac{9\varepsilon n_1}{6\Gamma \sqrt{R} s_1} \right) + R \log\left( \frac{18\Gamma R}{\varepsilon} \right) & \red{12\Gamma\sqrt{\frac{Rs_2}{n_2}} {\le} \varepsilon < 6\Gamma \sqrt{R}.}
\end{cases}
\end{align}
\end{lemma}
\begin{Proof}
Let $\tilde{K}_{n,s}$ be a minimal $\varepsilon/(6\Gamma \sqrt{R})$-net for $K_{n,s}$ in Euclidean norm. Let $D_\Gamma$ be the set of $R\times R$ diagonal matrices with Frobenius-norm less or equal $\Gamma$. It is well known that $N(D_\Gamma, \Vert \cdot \Vert_F, \varepsilon) \le (3\Gamma/\varepsilon)^R$. Denote by $\tilde{D}_\Gamma$ a minimal $(\varepsilon/(6R))$-net of $D_\Gamma$ and define the sets
\begin{align*}
K &= \{ Z \in \mathbb{R}^{n_1\times n_2} \colon Z = U\Sigma V^T \text{ with } u^r \in K_{n_1,s_1}, \; v^r \in K_{n_2,s_2} \text{ for all } r \in [R], \text{ and } \Vert \Sigma \Vert_F \le \Gamma \} \\
\tilde{K} &= \{ \tilde{Z} \in \mathbb{R}^{n_1\times n_2} \colon \tilde{Z} = \tilde{U}\tilde{\Sigma} \tilde{V}^T \text{ with } \tilde{u}^r \in \tilde{K}_{n_1,s_1}, \; \tilde{v}^r \in \tilde{K}_{n_2,s_2} \text{ for all } r \in [R], \text{ and } \tilde{\Sigma} \in \tilde{D}_\Gamma \}.
\end{align*}
We first show that $\tilde{K}$ is an $(\varepsilon/2)$-net of $K$. Let $Z = U\Sigma V^T \in K$ be given. There exists $\tilde{Z} = \tilde{U} \tilde{\Sigma} \tilde{V}^T \in \tilde{K}$ with $\Vert u^r - \tilde{u}^r \Vert_2 \le \varepsilon/(6\Gamma \sqrt{R})$, $\Vert v^r - \tilde{v}^r \Vert_2 \le \varepsilon/(6\Gamma \sqrt{R})$, for all $r \in [R]$, and $\Vert \Sigma - \tilde{\Sigma} \Vert_F \le \varepsilon/(6R)$. Therefore, $\Vert U - \tilde{U} \Vert_F^2 = \sum_{r=1}^R \Vert u^r - \tilde{u}^r \Vert_2^2 \le (\varepsilon/(6\Gamma))^2$ and $\Vert V - \tilde{V} \Vert_F^2 \le (\varepsilon/(6\Gamma))^2$. Moreover, $\Vert U \Vert_F^2 = \sum_{r=1}^{R} \Vert u^r \Vert_2^2 \le R$ (the same holds for $V,\tilde{U},\tilde{V}$) and $\Vert U\Sigma \Vert_F \le \Vert \Sigma \Vert_F$ (the same holds for $\Sigma V^T,\tilde{U} \Sigma,\Sigma \tilde{V}^T$). We now obtain by triangle inequality and the fact that $\Vert AB \Vert_F \le \Vert A \Vert_F \Vert B \Vert_F$
\begin{align*}
\Vert Z - \tilde{Z} \Vert_F &\le \Vert (U-\tilde{U})\Sigma V^T \Vert_F + \Vert \tilde{U} (\Sigma - \tilde{\Sigma}) V^T \Vert_F + \Vert \tilde{U} \tilde{\Sigma} (V - \tilde{V})^T \Vert_F \\
%
&\le \frac{\varepsilon}{6\Gamma} \Gamma + \sqrt{R} \frac{\varepsilon}{6R} \sqrt{R} + \Gamma \frac{\varepsilon}{6\Gamma} \le \frac{\varepsilon}{2}.
\end{align*}
Since $K_{s_1,s_2}^{R,\Gamma} \subset K$ one has $N(K_{s_1,s_2}^{R,\Gamma},\Vert \cdot \Vert_F, \varepsilon) \le N(K,\Vert \cdot \Vert_F, \varepsilon/2)$. Hence,
\begin{align*}
N(K_{s_1,s_2}^{R,\Gamma},\Vert \cdot \Vert_F, \varepsilon) \le |\tilde{K}| \le |\tilde{K}_{n_1,s_1}|^R |\tilde{D}_\Gamma| |\tilde{K}_{n_2,s_2}|^R
\end{align*}
which yields the claim by applying \Cref{effectivelySparse}.
\end{Proof}
Lemma \ref{GaussianRIP} can be proven by applying the following bound on suprema of chaos processes \cite[Theorems 1.4 \& 3.1]{krahmer2014suprema} in combination with the {bounds on the covering numbers} $N(S,\Vert \cdot \Vert_F,\varepsilon)$ and $N(K,\Vert \cdot \Vert_F,\varepsilon)$ of $S$ and $K$ of \Cref{CoveringNumber} and \Cref{CoveringNumber2}. {We recall below the relevant result in the form presented in \cite{JungKrahmerStoeger2017}.} The appearing $\gamma_2$-functional is defined in \cite{krahmer2014suprema} and can be bounded by
\begin{equation}\label{dudley}
\gamma_2 \left( \H, \Vert \cdot \Vert_{2 \rightarrow 2} \right) \lesssim \int_{0}^{d_{2 \rightarrow 2} \left( \H \right) } \sqrt{ \log N \left( \H , \Vert \cdot \Vert_{2\rightarrow 2} ,\varepsilon \right) } d\varepsilon,
\end{equation}
in the case of a set of matrices $\H$ equipped with the operator norm. {Here and below $d_\boxdot(\H) = \sup_{H \in \H} \Vert H \Vert_\boxdot$,
where $\boxdot$ is a generic norm.}
\begin{theorem} \label{KMR}
Let $ \mathcal{H} $ be a symmetric set of matrices, i.e., $ \mathcal{H} = - \mathcal{H} $, and let $ \xi $ be a random vector whose entries $\xi_i$ are independent $\mathcal{K}$-subgaussian random variables with mean $0$ and variance $1$. Set
\begin{align*}
E&= \gamma_2 \left( \mathcal{H}, \Vert \cdot \Vert_{2\rightarrow 2} \right) \left( \gamma_2 \left( \mathcal{H}, \Vert \cdot \Vert_{2\rightarrow 2} \right) + d_F (\mathcal{H}) \right) \\
V&= d_{2 \rightarrow 2} \left( \mathcal{H} \right) \left( \gamma_2 \left( \mathcal{H}, \Vert \cdot \Vert_{2\rightarrow 2} \right) + d_F (\mathcal{H}) \right)\\
U&= d^2_{2 \rightarrow 2} \left( \mathcal{H} \right)
\end{align*}
Then, for $t>0$,
\begin{equation*}
\Pr[]{\underset{H \in \mathcal{H}}{\sup} \big\vert \Vert H \xi \Vert_{\ell_2}^2 - \E[]{\Vert H\xi \Vert_{2}^2} \big\vert \ge c_1 E + t } \le 2 \exp \left( -c_2 \min \left( \frac{t^2}{V^2}, \frac{t}{U} \right) \right).
\end{equation*}
The constants $c_1$ and $c_2 $ are universal and only depend on $\mathcal{K}$.
\end{theorem}
{We refer the reader to \cite{krahmer2014suprema} and \cite{JungKrahmerStoeger2017} for further details.}
\begin{Proof}[of Lemma \ref{GaussianRIP}]
The proof consists of three main parts. We start in \textbf{(I)} by fitting our setting into the one of Theorem \ref{KMR}. In \textbf{(IIa)} resp. \textbf{(IIb)} the $\gamma_2$-functional gets bounded for $S_{s_1,s_2}^{R,\Gamma}$ and $K_{s_1,s_2}^{R,\Gamma}$, and in \textbf{(III)} we conclude by applying Theorem \ref{KMR}.
%
\paragraph{(I)} We first switch the roles of our random measurement operator ${\mathcal{A}}$ applied to the fixed matrices $Z$ to have fixed operators $H_Z$ applied to a random vector $\xi$. Denote by $\mathrm{vec}(Z) \in \mathbb{R}^{n_1n_2}$ the vectorization of $Z$. Observe, for all $Z\in \mathbb{R}^{n_1\times n_2}$,
\begin{align*}
{\mathcal{A}}(Z) = \frac{1}{\sqrt{m}} \begin{pmatrix}
\langle \mathrm{vec}(A_1),\mathrm{vec}(Z) \rangle \\
\vdots \\
\langle \mathrm{vec}(A_m),\mathrm{vec}(Z) \rangle
\end{pmatrix} = \frac{1}{\sqrt{m}} \begin{pmatrix}
\mathrm{vec}(Z)^T & 0 & \cdots \\
&\ddots & \\
\cdots & 0 & \mathrm{vec}(Z)^T
\end{pmatrix} \cdot \begin{pmatrix}
\mathrm{vec}(A_1) \\
\vdots \\
\mathrm{vec}(A_m)
\end{pmatrix} = H_Z \cdot \xi
\end{align*}
where $H_Z \in \mathbb{R}^{m\times mn_1n_2}$ is a matrix depending on $Z$ and $\xi \in \mathbb{R}^{mn_1n_2}$ has i.i.d.\ $\mathcal{K}$-subgaussian entries $\xi_l$ of mean $0$ and variance $1$. We define $\H_S = \{ H_Z \colon Z \in S_{s_1,s_2}^{R,\Gamma} \}$. Note that the mapping $Z \mapsto H_Z$ is an isometric linear bijection. In particular, we have $\Vert H_Z \Vert_F = \Vert Z \Vert_F$ and $\Vert H_Z \Vert_{2\rightarrow 2} = \Vert Z \Vert_F / \sqrt{m}$. For $Z \in S_{s_1,s_2}^{R,\Gamma}$ it holds that $\Vert Z \Vert_F \le \Vert U \Vert_F \Vert \Sigma V^T \Vert_F \le \Gamma \sqrt{R}$. Hence, $d_F(\H_S) \le \Gamma \sqrt{R}$ and $d_{2\rightarrow 2}(\H_S) \le \Gamma \sqrt{R}/\sqrt{m}$.
%
\paragraph{(IIa)} Since $\Vert H_Z \Vert_{2\rightarrow 2} = \Vert Z \Vert_F / \sqrt{m}$ and $Z \mapsto H_Z$ is a linear bijection, it follows that $N(\H_S,\Vert \cdot \Vert_{2\rightarrow 2},\varepsilon) = N(S,\Vert \cdot \Vert_F,\sqrt{m} \varepsilon)$. We can estimate by \eqref{dudley} and \Cref{GammaBounds}
\begin{align*}
\gamma_2 \left( \H_S, \Vert \cdot \Vert_{2 \rightarrow 2} \right) &\lesssim \int_{0}^{\frac{\Gamma\sqrt{R}}{\sqrt{m}}} \sqrt{ \log N \left( \H_S, \Vert \cdot \Vert_{2\rightarrow 2} ,\varepsilon \right) } d\varepsilon = \int_{0}^{\frac{\Gamma \sqrt{R}}{\sqrt{m}}} \sqrt{ \log N \left( S_{s_1,s_2}^{R,\Gamma} , \Vert \cdot \Vert_F ,\sqrt{m} \varepsilon \right) } d\varepsilon \\
&\le \sqrt{\frac{C_S \Gamma^2 R^2 (s_1+s_2) \log \left( \max \left\{ n_1,n_2 \right\} \right) }{m}} =: \mathcal{L}_S.
\end{align*}
for some constant $C_S > 0$.
%
\paragraph{(IIb)} In the same manner we obtain a bound on $\gamma_2 (\H_K,\Vert \cdot \Vert_{2\rightarrow 2})$ where $\H_K = \{ H_Z \colon Z \in K_{s_1,s_2}^{R,\Gamma} \}$. Recall that $\Vert H_Z \Vert_F = \Vert Z \Vert_F$, $\Vert H_Z \Vert_{2\rightarrow 2} = \Vert Z \Vert_F/\sqrt{m}$ and $Z \mapsto H_Z$ is an linear bijection. This implies $N(\H_K,\Vert \cdot \Vert_{2\rightarrow 2},\varepsilon) = N(K_{s_1,s_2}^{R,\Gamma},\Vert \cdot \Vert_F,\sqrt{m}\varepsilon)$. Note that $d_F(\H_K) \le \Gamma \sqrt{R}$ and $d_{2\rightarrow 2}(\H_K) \le \Gamma \sqrt{R}/\sqrt{m}$. We obtain by \eqref{dudley} and \Cref{GammaBounds}
\begin{align*}
\gamma_2(\H_K,\Vert \cdot \Vert_{2\rightarrow 2}) &\lesssim \int_{0}^{\frac{\Gamma \sqrt{R}}{\sqrt{m}}} \sqrt{\log N(\H_K,\Vert \cdot \Vert_{2\rightarrow 2},\varepsilon)} \;d\varepsilon = \int_{0}^{\frac{\Gamma \sqrt{R}}{\sqrt{m}}} \sqrt{\log N(K_{s_1,s_2}^{R,\Gamma},\Vert \cdot \Vert_F,\sqrt{m}\varepsilon)} \;d\varepsilon \\
&\le \sqrt{\frac{C_K \Gamma^2 R^2 (s_1 + s_2) \log^3(\max\{n_1,n_2\})}{m}} =: \mathcal{L}_K
\end{align*}
for some constant $C_K > 0$.
%
\paragraph{(III)} The final part of the proof is now equal for both sets $S_{s_1,s_2}^{R,\Gamma}$ and $K_{s_1,s_2}^{R,\Gamma}$. We write $\mathcal{L}$ for $\mathcal{L}_S$ resp. $\mathcal{L}_K$ and assume $m \gtrsim C_S \Delta^{-2} R (s_1+s_2) \log \left( \max \left\{ n_1,n_2 \right\} \right)$ resp. $m \gtrsim C_K \Delta^{-2} R (s_1 + s_2) \log^3(\max\{n_1,n_2\})$, for some $0 < \Delta < 1$. Then, $\mathcal{L} \le \Gamma \sqrt{R}$ and
\begin{align} \label{eq:2}
\mathcal{L}^2 + \Gamma \sqrt{R} \mathcal{L} \le \Gamma^2 R (\Delta^2 + \Delta) \le 2\Gamma^2 R \Delta.
\end{align}
We obtain the following bounds on the quantities (cf. \Cref{KMR}):
\begin{align} \label{ParameterBound}
E \le \mathcal{L}^2 + \Gamma \sqrt{R} \mathcal{L}, \;\;\;\;\tab V \le \frac{\Gamma \sqrt{R} \mathcal{L} + \Gamma^2 R}{\sqrt{m}}, \;\;\;\;\tab U \le \frac{\Gamma^2 R}{m}.
\end{align}
Observing now that $\E[]{\Vert H_Z \xi \Vert_2^2} = \Vert H_Z \Vert_F^2 = \Vert Z \Vert_F^2$ and recalling $\Gamma \ge 1$ we finally get, for $\delta \ge 3 c_1 \Gamma^2 R \Delta$ (which implies by \eqref{eq:2} that $\delta \ge c_1 E + c_1 \Gamma^2 R \Delta$),
\begin{align*}
\Pr[]{\sup_{Z \in S} \left| \Vert \mathcal{A}(Z) \Vert_2^2 - \Vert Z \Vert_F^2 \right| \ge \delta} &\le \Pr[]{\sup_{H_Z \in \H} \left| \Vert H_Z \xi \Vert_2^2 - \E[]{\Vert H_Z \xi \Vert_2^2} \right| \ge c_1 E + c_1 \Gamma^2 R \Delta } \\
&\le 2\exp \left( -c_2 \min \left\{ m \frac{c_1^2 \Gamma^4 R^2 \Delta^2}{\Gamma^2R(\mathcal{L} + \Gamma \sqrt{R})^2}, m \frac{c_1 \Gamma^2 R \Delta}{\Gamma^2 R} \right\} \right) \\
&\le 2\exp \left( -C \Delta^2 m \right)
\end{align*}
where $C > 0$ is a positive constant which depends on $\mathcal{K}$. In the last step we used that $\mathcal{L} + \Gamma \sqrt{R} \in [ \Gamma\sqrt{R},2\Gamma\sqrt{R} ]$ (because $0 < \mathcal{L} < \Gamma \sqrt{R}$).
\end{Proof}
\begin{remark}\label{Gammacond}
The condition $1 \leq \Gamma \leq \Gamma^2$ is used in a crucial way in part (III) of the proof above. Additionally, without condition $\Gamma\geq 1$, any $\hat X \in S_{s_1,s_2}^{R,\Gamma}$ could be decomposed also as follows
$$
\hat X= \sum_{r=1}^{R} \sigma_r u^r (v^r)^T =\sum_{r=1}^{R} \sum_{j=1}^K \frac{\sigma_r}{K} u^r (v^r)^T, \quad \left ( \sum_{r=1}^{R} \sum_{j=1}^K \frac{\sigma_r^2}{K^2} \right)^{1/2} \leq \frac{\Gamma}{\sqrt K},
$$
for any $K \in \mathbb N$, implying $\hat X \in S_{s_1,s_2}^{R K,\Gamma/\sqrt{K}}$ as well, which would result in a larger number of necessary measurements $m \gtrsim \left( \frac{\delta}{\Gamma^2 R} \right)^{-2} R K (s_1+s_2) \log^3 \left( \max \{n_1,n_2\} \right)$. Hence, $\Gamma \geq 1$ emerges as a natural condition, in order to have a correct proof of part (III) and to avoid ambiguities on the necessary measurements.
\end{remark} |
The study of the interfacial region formed in diffusion limited \mbox{A $+$ B $\to$ 0}\
type reactions between domains of unlike species has attracted much
current interest \cite{Overview}-\cite{KozaHaim}. A natural way to
examine this problem is to prepare a system with the components
initially segregated along the plane $x=0$, and then investigate the
spatio-temporal evolution of their concentrations $\rho_{A}$ and $\rho_{B}$, and
the reaction rate $R$. Such geometry, first studied by G\'alfi and R\'acz\
\cite{G-R}, was already investigated by means of various methods,
including experiments \cite{Experiment,HaimExperiment,HaimKudowa},
numerical simulations \cite{J-E,C95,CDC92,CD-mn}, analytical
computations \cite{Haim91,Bstatic,Linear}, scaling
\cite{G-R,CD-mn,CD-Steady,Cornell} and dimensional
\cite{CD-Steady,hep} analysis.
A standard way to treat the initially separated problem analytically
is to solve the following partial differential equations \cite{G-R}
\begin{equation}
\label{GR}
\left.
\begin{array}{rcl}
\PT{\rho_{A}} &=& D_A \PXX{\rho_{A}} - R\\[2ex]
\PT{\rho_{B}} &=& D_B \PXX{\rho_{B}} - R
\end{array}
\right\}
\end{equation}
with the initial state given by
\begin{equation}
\label{IniCond}
\left.
\begin{array}{l}
\rho_{A}(x,t=0) \;=\; a_0 H(-x) \\[1ex]
\rho_{B}(x,t=0) \;=\; b_0 H(x)
\end{array}
\right\}
\end{equation}
where $\rho_{A}(x,t)$ and $\rho_{B}(x,t)$ are the local concentrations of A's
and B's, $R$ is the reaction rate, $H(x)$ denotes the Heaviside step
function, and $a_0$, $b_0$, $D_A$ and $D_B$ are some positive
constants related to the initial concentrations of species A and B and
their diffusion coefficients respectively. It is customary
\cite{G-R,CDC92,CD-mn,Linear,CD-Steady,Cornell,hep,BenRedner,Kudowa-H}
to assume $D_A = D_B \equiv D$, which leads to the conclusion that
$u(x,t) \equiv \rho_{A} - \rho_{B}$ obeys the readily solvable diffusion
equation $\partial_t u = D \partial^{2}_{x}u$ irrespective of $R$.
Finally some form of $R$ must be assumed, and in most cases either the
mean field approximation $R \propto \rho_{A}\rho_{B}$
\cite{G-R,Haim91,Linear,BenRedner}, or its generalization $R \propto
\rho_{A}^{m}\rho_{B}^{n}$ \cite{CDC92,CD-mn,CD-Steady,Cornell} was adopted.
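As a concrete (purely illustrative) check of this setup, equations (\ref{GR}) with the mean field choice $R = k\rho_{A}\rho_{B}$ and the step initial data (\ref{IniCond}) can be integrated with a simple explicit finite-difference scheme; all numerical parameters below are arbitrary and are not taken from the cited works.

```python
import numpy as np

def evolve(D_A=1.0, D_B=1.0, k=1.0, a0=1.0, b0=1.0,
           L=20.0, N=201, t_max=5.0):
    """Explicit Euler integration of the mean-field equations with
    step-function initial data; all parameters are illustrative."""
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]
    dt = 0.2 * dx**2 / max(D_A, D_B)        # diffusive stability limit
    a = np.where(x < 0, a0, 0.0)            # A's initially occupy x < 0
    b = np.where(x >= 0, b0, 0.0)           # B's initially occupy x >= 0
    for _ in range(int(t_max / dt)):
        lap_a = (np.roll(a, 1) - 2 * a + np.roll(a, -1)) / dx**2
        lap_b = (np.roll(b, 1) - 2 * b + np.roll(b, -1)) / dx**2
        lap_a[0] = lap_a[-1] = 0.0          # ends held fixed, far from the front
        lap_b[0] = lap_b[-1] = 0.0
        R = k * a * b                       # mean-field reaction term
        a = a + dt * (D_A * lap_a - R)
        b = b + dt * (D_B * lap_b - R)
    return x, a, b, k * a * b

x, a, b, R = evolve()                       # symmetric case D_A = D_B
```

For equal diffusion constants and $a_0 = b_0$ the computed reaction zone stays centered on the initial contact plane, while the difference $\rho_{A} - \rho_{B}$ evolves by pure diffusion, as stated above.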
With these assumptions, two fundamental concepts were developed, both
referring to the long time limit. According to the first one
\cite{G-R}, the long time behavior of the system inside the reaction
layer can be described with the help of some scaling functions $S_A$,
$S_B$ and $S_R$ through
\begin{eqnarray}
\label{4}
\rho_{A}(x,t) &\propto&
t^{-\gamma}S_A\BRA{x - x_f(t) \over t^\alpha}\;,\\[1ex]
\label{5}
\rho_{B}(x,t) &\propto&
t^{-\gamma}S_B\BRA{x - x_f(t) \over t^\alpha}\;,\\[1ex]
\label{6}
R(x,t) &\propto&
t^{-\beta} S_R\BRA{x - x_f(t) \over t^\alpha}\;,
\end{eqnarray}
where $x_f(t)$ denotes the point at which the reaction rate $R$
attains its maximal value, and exponents $\alpha$, $\beta$ and
$\gamma$ are some positive constants given, for $R \propto
\rho_{A}^{m}\rho_{B}^{n}$, by $\gamma = 1/(m+n+1)$, $\alpha = \frac{1}{2} -
\gamma$ and $\beta = 1-\gamma$ \cite{CD-mn}. The scaling ansatz is
based on the assumption that the width $w(t)$ of the reaction layer
grows with time as $t^\alpha$ with $\alpha < 1/2$, so that in addition
to the diffusion length scale $\lambda_D \sim \sqrt{Dt}$, the problem
also possesses another relevant length scale $w \propto t^\alpha$.
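The exponent relations just quoted are straightforward to evaluate; the trivial snippet below (an illustration only) tabulates them for arbitrary $m$ and $n$ and recovers the classical values $\alpha = 1/6$, $\beta = 2/3$, $\gamma = 1/3$ for second-order kinetics $m = n = 1$.

```python
def scaling_exponents(m, n):
    """Mean-field exponents for R ~ rho_A^m * rho_B^n, using
    gamma = 1/(m+n+1), alpha = 1/2 - gamma, beta = 1 - gamma."""
    gamma = 1.0 / (m + n + 1)
    return 0.5 - gamma, 1.0 - gamma, gamma  # (alpha, beta, gamma)

alpha, beta, gamma = scaling_exponents(1, 1)
```

Note that $\alpha < 1/2$ for all $m, n \ge 1$, consistent with the assumption underlying the scaling ansatz.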
According to the second theory, called the quasistationary
approximation \cite{CD-Steady,BenRedner}, the currents $J_{A}(t)$ and
$J_{B}(t)$ of particles A and B arriving at the interface layer from
the two densely occupied domains change so slowly that the
relatively narrow interface has enough time to equilibrate. To
`equilibrate' means here to reach a state completely determined by the
current boundary conditions, i.e.\ by $J_A$ and $J_B$.
Mathematically this is equivalent to the assumption that the state of
the reaction zone is entirely given by equations obtained from
(\ref{GR}) by replacing their left sides, or the time derivatives,
with zero. This leads to much simpler equations
\begin{equation}
\label{STAC}
\left.
\begin{array}{rcl}
D_A \PXX{\rho_{A}} &=& R\\[2ex]
D_B \PXX{\rho_{B}} &=& R
\end{array}
\right\}
\end{equation}
which are to be solved with the boundary conditions
$\partial\rho_{A}/\partial x \to -J_A(t)$ and $ \rho_{B} \to 0$ as $x \to
-\infty$, and $\rho_{A} \to 0$, $\partial\rho_{B}/\partial x \to J_B(t)$ as $x
\to +\infty$. The most important feature of the quasistationary
equations (\ref{STAC}) is that they depend only on $x$, with time $t$
being a parameter entering their solutions $\rho_{A}(x,t)$ and $\rho_{B}(x,t)$
only through the time dependent boundary currents $J_A$ and $J_B$.
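As a numerical illustration of this point (our own sketch, not part of the cited analyses), the quasistationary equations (\ref{STAC}) with the mean field choice $R = k\rho_{A}\rho_{B}$ can be relaxed to a stationary profile on a finite interval; equal incoming currents $J_A = J_B = J$ are imposed so that a steady state exists, and all parameter values are arbitrary.

```python
import numpy as np

def quasistatic_profile(D_A=1.0, D_B=1.0, k=1.0, J=0.05,
                        L=8.0, N=81, n_iter=100000):
    """Pseudo-time relaxation of the quasistationary equations with
    R = k*a*b; species A enters with flux J from the left and is
    absorbed on the right, and symmetrically for B."""
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]
    dt = 0.2 * dx**2 / max(D_A, D_B)
    a = np.maximum(-x, 0.0) * J / D_A       # rough initial guess
    b = np.maximum(x, 0.0) * J / D_B
    for _ in range(n_iter):
        lap_a = (np.roll(a, 1) - 2 * a + np.roll(a, -1)) / dx**2
        lap_b = (np.roll(b, 1) - 2 * b + np.roll(b, -1)) / dx**2
        R = k * a * b
        a = a + dt * (D_A * lap_a - R)
        b = b + dt * (D_B * lap_b - R)
        a[0] = a[1] + J * dx / D_A          # incoming flux J on the left
        a[-1] = 0.0                         # A vanishes deep in the B domain
        b[-1] = b[-2] + J * dx / D_B        # incoming flux J on the right
        b[0] = 0.0                          # B vanishes deep in the A domain
    return x, a, b, k * a * b

x, a, b, R = quasistatic_profile()
```

At the fixed point the combination $D_A \rho_{A} - D_B \rho_{B}$ is linear in $x$, since subtracting the two equations in (\ref{STAC}) eliminates $R$.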
It was conjectured by G\'alfi and R\'acz\ \cite{G-R} that the first of the above
assumptions, $D_A = D_B$, is irrelevant with regard to the long time
behavior of the system, the ratio $D_A/D_B$ affecting perhaps the form
of the scaling functions $S_A$, $S_B$ and $S_R$, but not the values of
exponents $\alpha$, $\beta$ and $\gamma$. This hypothesis was
generally accepted after numerical \cite{J-E} and experimental
\cite{Experiment} verification. However, there is still no analytical
theory referring to the general case $D_A \neq D_B$. For two reasons
this situation is a cause for concern. First, it is practically
impossible to find in Nature two species with exactly the same
diffusion constants. Second, the above-mentioned verification
encompassed only the case where the ratio $D_A/D_B$ was of order 1,
whereas it is known \cite{J-E,Kudowa-H} that if one of the diffusion
constants equals zero, the mean-field exponents assume values
entirely different from those predicted by G\'alfi and R\'acz, namely $\alpha = 0$,
$\beta = 1/2$ and $\gamma = 1/4$. The aim of this paper is to present
such a general theory, covering the case of arbitrary positive
diffusion constants $D_A$ and $D_B$.
Unfortunately, we know of only one successful attempt to derive the
macroscopic form of $R$ from the microscopic properties of the system
\cite{Bstatic}. Dimensional analysis leads to another important
conclusion, namely that the mean field approximation should be valid only in
spaces of dimension higher than $d_c = 2$ \cite{CD-mn,CD-Steady,hep}.
Therefore our basic equation (\ref{GR}) might seem useful only for
these two sorts of systems for which the form of $R$ is known. In our
approach, however, we will not need to impose any special restriction
on the form of $R$. Instead, we will require that the solutions of
(\ref{GR}) satisfy a few physically justifiable relations. Therefore
our theory can be applied even to systems for which the form of
$R$ remains unknown, including experiments and microscopic models. In
such cases verification of our postulates should be far easier
than the task of finding the exact form of $R$, let alone solving
(\ref{GR}) afterwards.
The paper is organized as follows. In the next section we will present
the assumptions our theory is founded on, together with their brief
physical justification. The general theory is formulated in the third
section. In the next section we will use it to derive and discuss the
scaling ansatz in the mean field approximation. The final, fifth
section is devoted to conclusions.
\section{Assumptions}
We will consider systems which can be described by the G\'alfi and R\'acz\
equations (\ref{GR}) and the boundary conditions (\ref{IniCond}). We
will assume that $D_A$, $D_B$, $a_0$ and $b_0$ are some known positive
constants. Our analysis will be based on a few physical assumptions.
\renewcommand{\theenumi}{\roman{enumi}}
\begin{enumerate}
\item At any time $t > 0 $ there exists a unique point $x_f(t)$ at
which the reaction term $R$ attains its maximal value, and a
unique point $x_0(t)$ at which $D_A\rho_{A}(x_0,t) - D_B\rho_{B}(x_0,t) =
0$.
\item The reaction is concentrated in a region $|x - x_f| \sim w(t)
\sim t^{\alpha}$ with $0 < \alpha < 1/2$. Outside this region,
for $x \ll x_f - w$, we have $\rho_{A} \gg \rho_{B}$, and for $x \gg x_f
+ w$ we have $\rho_{A} \ll \rho_{B}$.
\item The evolution of $\rho_A$ in the region $x \ll x_f - w$
can be approximated by
\begin{equation}
\label{ra}
\rho_{A}(x,t) \; = \; a_0 - C_A\SQR{ \erf{x / \sqrt{4D_At}} + 1}\;,
\end{equation}
where $C_A$ is a constant, and \hspace{0.3ex} $\erf{x} \equiv
2\pi^{-1/2}\int_{0}^{x} \exp(-\eta^2)\!\;\mbox{d}\eta$
\hspace{0.3ex} is the error function \cite{Luke}.
Similarly, for $x \gg x_f + w$, the evolution of $\rho_{B}$ can
be estimated by
\begin{equation}
\label{rb}
\rho_{B}(x,t) \;=\; b_0 + C_B \SQR{\erf{ x / \sqrt{4D_B t}} - 1}\;,
\end{equation}
where $C_B$ denotes another constant. Both $C_A$ and $C_B$ depend on
the initial parameters $a_0$, $b_0$, $D_A$ and $D_B$.
\item The quasistatic approximation is valid in the region
$-(D_A t)^{1/2} \ll x \ll (D_B t)^{1/2}$.
\end{enumerate}
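As a quick numerical sanity check of postulate {\sf iii)}, one can verify that the profile (\ref{ra}) indeed satisfies the free diffusion equation $\partial_t\rho_{A} = D_A\partial^2_x\rho_{A}$ together with the boundary condition $\lim_{x\to-\infty}\rho_{A} = a_0$. The sketch below does so with central finite differences; the parameter values ($a_0$, $C_A$, $D_A$) are purely illustrative and not fixed by the theory.

```python
import math

# Illustrative parameter values (assumptions, not fixed by the theory)
A0, CA, DA = 1.0, 0.4, 0.5

def rho_A(x, t):
    # Free-diffusion profile (ra) of the majority species A
    return A0 - CA * (math.erf(x / math.sqrt(4.0 * DA * t)) + 1.0)

def residual(x, t, h=1e-4):
    # Central-difference estimates of d(rho_A)/dt and D_A d^2(rho_A)/dx^2;
    # their difference should vanish if (ra) solves the diffusion equation.
    dt = (rho_A(x, t + h) - rho_A(x, t - h)) / (2.0 * h)
    dxx = (rho_A(x + h, t) - 2.0 * rho_A(x, t) + rho_A(x - h, t)) / h**2
    return dt - DA * dxx
```

The residual is zero up to finite-difference error, and far to the left the profile tends to $a_0$, as required by the initial conditions.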
The first assumption introduces two functions $x_f(t)$ and $x_0(t)$,
restricting the considerations to the cases where they are uniquely
defined. The function $x_f$ directly identifies the location of the
reaction layer at time $t$, while $x_0$ is an auxiliary mathematical
object helpful in examining the behavior of $x_f$. That $x_0$ exists
for any $t>0$ stems from the initial conditions (\ref{IniCond}). As
for the second postulate, it was satisfied by all the \mbox{A $+$ B $\to$ 0}\
interfacial systems examined so far. The third assumption comes from
the observation that, due to postulate {\sf ii)}, in the region $x \ll x_f -
w$ the concentration of A particles is expected to be much larger than
that of the B's, the latter having to cross the whole reaction layer to
get there. Therefore, the evolution of the A's is practically unaffected
by the B's, and so it should be governed by the standard diffusion
equation $\partial_t\rho_{A} = D_A\partial^2_x{\rho_{A}}$. The particular,
error-function form (\ref{ra}) of its solution was predicted
and experimentally confirmed by Koo and Kopelman \cite{Experiment}.
Notice also that for any time $t$ this form of $\rho_{A}$ guarantees that
the relation $\lim_{x\to-\infty}\rho_{A} = a_0$ implied by the initial
conditions (\ref{IniCond}) is also fulfilled. A similar argument leads
to (\ref{rb}). As for the last postulate, the quasistationary
approximation is based on the following observation \cite{CD-Steady}.
The diffusion current of particles arriving at the reaction layer is
$J \propto t^{-1/2}$, so the characteristic time scale on which this
current changes is $\tau_J \sim (\mbox{d}\log J/ \mbox{d} t)^{-1}
\propto t$, whereas the equilibration time of the reaction front is
$\tau_F \sim w^2 \propto t^{2\alpha}$; therefore $\alpha < 1/2$
implies that as time goes to infinity, the ratio $\tau_F/\tau_J$ goes
to $0$, validating the quasistatic approximation.
As we mentioned above, we will not impose any explicit restrictions on
the form of the macroscopic reaction rate $R$, requiring only that it
be consistent with the above postulates.
However, to investigate the behavior of the \mbox{A $+$ B $\to$ 0}\ system inside
the reaction zone we will need more detailed information about $R$.
Therefore in Section 4 we will concentrate on the mean-field
approximation $R \propto \rho_{A}\rho_{B}$.
\section{Analysis}
The following observation constitutes the basis of the analysis of our
model. For sufficiently long times $t$, at any point $x$ we can employ
either assumption {\sf iii)}\ or {\sf iv)}\ or both of them -- see Fig.\
\ref{fig1}. Therefore we can divide the $x$ axis into several regions,
and in each of them the initial problem of solving (\ref{GR}) can be
reduced to a much simpler one. Then, the overlapping of the domains of
applicability of {\sf iii)}\ and {\sf iv)}\ will enable us to merge the solutions.
\begin{figure}[htpb]
\thicklines
\begin{center}
\unitlength1mm
\begin{picture}(120,32)(0,12)
\put(9,20){\line(1,0){28}}
\put(40,20){\makebox(0,0){\ldots}}
\put(43,20){\line(1,0){39}}
\put(85,20){\makebox(0,0){\ldots}}
\put(88,20){\line(1,0){24}}
\put(4,20){\makebox{\ldots}}
\put(4,17){\makebox(0,0){$-\infty$}}
\put(118,20){\makebox(0,0)[r]{\ldots}}
\put(118,17){\makebox(0,0)[r]{$\infty$}}
\put(60,18){\line(0,1){2}}
\put(60,17){\makebox(0,0)[t]{$x_f$}}
\put(50,18){\line(0,1){2}}
\put(50,17){\makebox(0,0)[t]{$x_f-w$}}
\put(70,18){\line(0,1){2}}
\put(70,17){\makebox(0,0)[t]{$x_f+w$}}
\put(30,18){\line(0,1){2}}
\put(30,17){\makebox(0,0)[t]{$-\sqrt{D_A t}$}}
\put(100,18){\line(0,1){2}}
\put(100,17){\makebox(0,0)[t]{$\sqrt{D_B t}$}}
\put(27,24){\makebox(0,0)[b]{\shortstack{
\small \bf iii \\
\small free diffusion of A's
}}}
\put(27,22){\vector(-1,0){23}}
\put(27,22){\vector(1,0){23}}
\put(94,24){\makebox(0,0)[b]{\shortstack{
\small \bf iii \\
\small free diffusion of B's
}}}
\put(94,22){\vector(-1,0){24}}
\put(94,22){\vector(1,0){24}}
\put(65,35){\makebox(0,0)[b]{\shortstack{
\small \bf iv \\
\small quasistationary approximation
}}}
\put(65,33){\vector(-1,0){35}}
\put(65,33){\vector(1,0){35}}
\end{picture}
\end{center}
\caption{\sf Schematic diagram of the regions of applicability of
postulates {\sf iii)}\ and {\sf iv)}.
Asymptotically $w(t) \propto t^{\alpha} \ll t^{1/2}$.
}
\label{fig1}
\end{figure}
Consider first the region $-\sqrt{D_A t} \ll x \ll \sqrt{D_B t}$. By
assumption {\sf iv)}\ the system is governed here by quasistationary
equations (\ref{STAC}). They imply that $\Psi (x,t) \equiv D_B\rho_{B} -
D_A\rho_{A} $ satisfies $\partial^2\Psi/\partial x^2 = 0$. Therefore $\Psi$
is linear in $x$. Let $J(t)$ denote its slope. By definition of $x_0$
we have $\Psi(x_0,t) = 0$. Thus we arrive at the conclusion that at
sufficiently long times $t$, for $-\sqrt{D_A t} \ll x \ll \sqrt{D_B
t}$, we have
\begin{equation}
\label{defj}
D_B\rho_{B} - D_A\rho_{A} \;\approx\; J(t) (x-x_0(t)) \;,
\end{equation}
and so $J_A(t) = J_B(t) = J(t)$. The notation
$f(t) \approx g(t)$ means $\lim_{t\to\infty} f(t)/g(t) = 1$.
Consider now the region $-\sqrt{D_A t} \ll x \ll x_f - w$, so that
$\epsilon \equiv x_f - x$ fulfils $t^{\alpha} \ll \epsilon \ll
t^{1/2}$.
Applying assumption {\sf ii)}\ to (\ref{defj}) we can approximate the
form of $\rho_{A}$ by
\begin{equation}
\label{raj}
\rho_{A}(x,t) \;\approx\; -D_A^{-1}J(t) (x-x_0(t)) \;.
\end{equation}
On the other hand, by assumption {\sf iii)}, $\rho_{A}$ can here just as
well be expressed by equation (\ref{ra}). So we have
\begin{equation}
\label{eps1}
a_0 - C_A \SQR{\erf{\frac{x_f(t)-\epsilon}{\sqrt{4D_At}}} + 1}
\;\approx\; -D_A^{-1}J(t)(x_f(t) - x_0(t) - \epsilon)
\end{equation}
and
\begin{equation}
\label{dereps}
\left.
\PX{}\BRA{a_0 - C_A\SQR{\erf{\frac{x}{\sqrt{4D_At}}}+1}}
\right|_{x_f - \epsilon}
\;\approx\;
-D_A^{-1}J(t) \;.
\end{equation}
By assumption {\sf ii)}, for any $x$ located outside the reaction layer, the
ratio $\rho_{A}/\rho_{B}$ will either converge to zero, or diverge to infinity
as $t\to\infty$. However, by definition of $x_0$, this ratio assumes
the constant value $D_B/D_A$ at $x =x_0$. So $x_0$ must lie inside the
reaction layer. As its width grows as $t^{\alpha}$, we conclude that
there must exist a number $\theta$ such that $|x_f(t) - x_0(t)| \le
\theta t^{\alpha}$. We can see now that in the long time limit $|x_f -
x_0|$ becomes negligibly small compared to $\epsilon$ which, in turn,
gets negligibly small compared to $t^{1/2}$. Therefore we can drop
$\epsilon$ on the left hand side of (\ref{eps1}) and (\ref{dereps}),
and $x_f - x_0$ on the r.\ h.\ s.\ of (\ref{eps1}). After these
transformations the asymptotic value of the l.\ h.\ s.\ of
(\ref{eps1}) turns out to be independent of $\epsilon$, whereas the r.\ h.\
s.\ of (\ref{eps1}) becomes proportional to $\epsilon J(t)$. As
$\epsilon$ can vary between $t^\alpha$ and $t^{1/2}$, we conclude that
$J(t)\epsilon(t)$ goes either to 0 or to $\infty$. The latter case is
impossible because (\ref{eps1}) approximates the value of $\rho_{A}$, which
must be finite. In the long time limit we therefore have
\begin{equation}
\label{epsj0}
J(t) \epsilon(t) \;\to\; 0 \;,
\end{equation}
\begin{equation}
\label{aat0}
a_0 - C_A \SQR{\erf{\frac{x_f(t)}{\sqrt{4D_At}}} + 1} \;\to\; 0 \;,
\end{equation}
and
\begin{equation}
\label{der2a}
J(t)\sqrt{t} \;\to\;
C_A\sqrt{ D_A/\pi} \exp\BRA{-\frac{x_f^2(t)}{4D_At}} \;.
\end{equation}
Similar arguments applied to the region $x_f + w \ll x \ll \sqrt{D_B
t}$ lead to
\begin{equation}
\label{bat0}
b_0 + C_B \SQR{\erf{\frac{x_f(t)}{\sqrt{4D_B t}}} - 1} \;\to\; 0 \;,
\end{equation}
and
\begin{equation}
\label{der2b}
J(t)\sqrt{t} \;\to\; C_B\sqrt{D_B/\pi}
\exp\BRA{-\frac{x_f^2(t)}{4D_B t}} \;.
\end{equation}
It follows from (\ref{aat0}) and (\ref{bat0}) that in the
long time limit
\begin{equation}
\label{defC_f}
x_f(t)/\sqrt{t} \;\to\; C_f \;,
\end{equation}
where $C_f$ is a constant given either by
\begin{equation}
\label{Cf1}
C_f \;=\; 2\sqrt{D_A}\ierfs{(a_0 - C_A)/C_A}
\end{equation}
or
\begin{equation}
\label{Cf2}
C_f \;=\; 2\sqrt{D_B}\ierfs{(C_B-b_0)/C_B} \;.
\end{equation}
Now (\ref{der2a}), (\ref{der2b}) and (\ref{defC_f}) imply that as
time goes to infinity we have
\begin{equation}
\label{defC_J}
J(t)\sqrt{t} \;\to\; C_J \;,
\end{equation}
where $C_J$ is another constant given either by
\begin{equation}
\label{CJ1}
C_J \;=\; C_A\sqrt{D_A/\pi}\exp\BRA{-\frac{C_f^2}{4D_A}}
\end{equation}
or
\begin{equation}
\label{CJ2}
C_J \;=\; C_B\sqrt{D_B/\pi}\exp\BRA{-\frac{C_f^2}{4D_B}} \;.
\end{equation}
Notice that (\ref{defC_J}) is consistent with (\ref{epsj0}).
So far we have introduced four constants $C_A$, $C_B$, $C_f$ and
$C_J$. The first two of them, $C_A$ and $C_B$, control the asymptotic
profile of the majority species outside the reaction layer. The third
constant, $C_f$, governs the location of the reaction layer center.
Finally, through the formula $J(t) \approx \int\! R(x,t) \mbox{d}x
\approx C_J/t^{1/2}$, parameter $C_J$ is related to the magnitude of
the current $J(t)$ of particles entering the reaction layer, or,
equivalently, the total reaction rate at time $t$. Due to the form of
the initial state (\ref{IniCond}) we expect $\partial_x \rho_{A} \le 0$ and
$\partial_x \rho_{B} \ge 0$, which implies $C_A > 0$, $C_B > 0$ and $C_J >
0$.
Equations (\ref{Cf1}), (\ref{Cf2}), (\ref{CJ1}) and (\ref{CJ2})
can be reduced to
\begin{equation}
\label{1eq}
\Phi\!\left(
\frac{-C_f}{2\sqrt{D_A}}
\right)
\;=\;
\frac{a_0\sqrt{D_A}}{b_0\sqrt{D_B}} \:
\Phi\!\left(
\frac{C_f}{2\sqrt{D_B}}
\right) \;,
\end{equation}
where
\begin{equation}
\label{defphi}
\Phi (x) \;\equiv\; \SQR{1 - \erf{x}}\exp(x^{2}) \;.
\end{equation}
An important feature of $\Phi(x)$ is that it decreases monotonically
from $\infty$ to $0$ as $x$ grows from $-\infty$ to $\infty$. This
property guarantees that equation (\ref{1eq}) always has a unique
solution $C_f = C_f(a_0/b_0, D_A, D_B)$ which, moreover, can be
readily found numerically. The only problem that can appear while
solving (\ref{1eq}) numerically is that for positive $x$,
$\Phi(x)$ is a product of a very small and a very large number. For
this reason, for $x$ greater than 5 we suggest using the
asymptotic form $\Phi(x) \approx 1/(\sqrt{\pi}x)$, which follows from the
asymptotic properties of the error function \mbox{erf} \cite{Luke}.
With $C_f$ computed from (\ref{1eq}), the values of $C_A$, $C_B$ and
$C_J$ can now be calculated from (\ref{Cf1}), (\ref{Cf2}) and
(\ref{CJ1}). The converse is also true: if we know (e.\ g.\
from an experiment) the values of $C_A$, $C_B$, $C_f$ and $C_J$, our
equations uniquely determine the values of $a_0$, $b_0$, $D_A$ and
$D_B$.
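For illustration, the unique root of (\ref{1eq}) can be found by bisection, since the difference of its two sides is monotone in $C_f$. The sketch below is ours (the function names, bracket and tolerance are arbitrary choices); it uses the asymptotic form $\Phi(x)\approx 1/(\sqrt{\pi}x)$ for $x>5$ as suggested above, and caps the exponent for very negative arguments to avoid floating-point overflow.

```python
import math

SQRT_PI = math.sqrt(math.pi)

def Phi(x):
    # Phi(x) = [1 - erf(x)] exp(x^2); for x > 5 use the asymptotic form
    # 1/(sqrt(pi) x) to avoid multiplying a tiny number by a huge one.
    if x > 5.0:
        return 1.0 / (SQRT_PI * x)
    if x < -6.0:
        # Phi blows up as ~2 exp(x^2); cap the exponent against overflow
        return 2.0 * math.exp(min(x * x, 700.0))
    return (1.0 - math.erf(x)) * math.exp(x * x)

def solve_Cf(a0, b0, DA, DB, lo=-50.0, hi=50.0, tol=1e-12):
    # g(Cf) = lhs - rhs of eq. (1eq) is monotonically increasing in Cf,
    # so bisection finds its unique root.
    r = (a0 * math.sqrt(DA)) / (b0 * math.sqrt(DB))
    def g(cf):
        return Phi(-cf / (2.0 * math.sqrt(DA))) - r * Phi(cf / (2.0 * math.sqrt(DB)))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With $C_f$ in hand, $C_A$, $C_B$ and $C_J$ follow from (\ref{Cf1}), (\ref{Cf2}) and (\ref{CJ1}); one can check numerically that the two expressions (\ref{CJ1}) and (\ref{CJ2}) for $C_J$ then agree, and that $C_f = 0$ precisely when $a_0\sqrt{D_A} = b_0\sqrt{D_B}$.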
An immediate consequence of (\ref{1eq}) is that the sign of $C_f$ is
determined by the sign of $a_0\sqrt{D_A}/(b_0\sqrt{D_B}) - 1$. In
particular we conclude that
\begin{equation}
\label{Cf=0}
C_f \;=\; 0 \;\; \iff \;\;
a_0\sqrt{D_A} \;=\; b_0\sqrt{D_B} \;.
\end{equation}
This formula is important for planning experiments, as it clarifies
the way the initial concentrations of the species should be chosen in
order to have the reaction layer move asymptotically as slowly as
possible. Condition (\ref{Cf=0}) is consistent with that of Jiang and
Ebner \cite{J-E} who, by numerical examination of the mean-field
approximation $R\propto \rho_{A}\rho_{B}$, found a stronger relation $x_f \,=\,
0 \;\iff\; a_0\sqrt{D_A} \,=\, b_0\sqrt{D_B}$. Our general formula,
derived for an arbitrary reaction term $R$, implies only that with this
particular choice of the initial parameters the function $x_f$ cannot
change as fast as $t^{1/2}$. An example of a system where $C_f =
0$ and $x_f(t) \propto t^{\alpha}$ was investigated in \cite{Cornell}.
Equation (\ref{1eq}) also enables us to observe a striking similarity
between the long and short time behavior of $x_f$. According to
\cite{Haim91}, in the short time limit the reaction term does not
affect the solutions of (\ref{GR}), and so $\rho_{A}$ and $\rho_{B}$ assume the
same forms as in the readily solvable case $R = 0$. The point $x_f$
can then be found as the point at which $\partial R/\partial x = 0$.
For $R \propto \rho_A^m \rho_B^n$ such a procedure yields $\lim_{t\to 0}
\, x_f/\sqrt{t} = C_0$, where $C_0$ can be found from a relation
very similar to (\ref{1eq})
\begin{equation}
\label{phi2}
\Phi\!\left(
\frac{C_0}{2\sqrt{D_A}}
\right)
\;=\;
\frac{m\sqrt{D_B}}{n\sqrt{D_A}} \:
\Phi\!\left(
\frac{-C_0}{2\sqrt{D_B}}
\right) \;.
\end{equation}
\section{The reaction layer}
In the previous section we carried out our analysis without imposing
any restrictions on the form of the macroscopic reaction term $R$. As
we now proceed to examine the asymptotic properties of the reaction
layer, we will obviously need more specific information about $R$.
Therefore we will concentrate on the mean field approximation $R =
k\rho_{A}\rho_{B}$, $k= \mbox{const.}$, still allowing $a_0$, $b_0$,
$D_A$ and $D_B$ to take any positive values.
By assumption {\sf iv)}\ we expect that in the region $-(D_A t)^{1/2} \ll x
\ll (D_B t)^{1/2}$ we can apply the quasistatic approximation
equations (\ref{STAC}). Let $\rho_{A}(x,t)$ and $\rho_{B}(x,t)$ denote their
solutions for some values of $D_A$, $D_B$, $x_0(t)$ and $J(t)$. By the
following linear transformation we introduce two new functions of a
single variable $\tilde\rho_{A}(x)$ and $\tilde\rho_{B}(x)$
\begin{equation}
\label{Solutions}
\left.
\begin{array}{rcl}
\rho_{A}(x,t) &=& \eta_A (t)\tilde\rho_{A}\SQR{(x-x_0(t))/w(t)}\\[1.5ex]
\rho_{B}(x,t) &=& \eta_B (t)\tilde\rho_{B}\SQR{(x-x_0(t))/w(t)}
\end{array}
\right\}
\end{equation}
where
\begin{eqnarray}
\label{defw}
w(t) &\equiv& \sqrt[3]{\frac{D_A D_B}{kJ(t)}}
\;=\; \sqrt[3]{\frac{D_A D_B}{kC_J}}\,t^{1/6} \;, \\[1ex]
\label{etaa}
\eta_A(t) &\equiv& J(t)w(t)/D_A
\;=\;
\BRA{\frac{D_B}{k}}^{1/3}\BRA{\frac{C_J}{D_A}}^{2/3}t^{-1/3}
\;,\\[1ex]
\label{etab}
\eta_B(t) &\equiv& J(t)w(t)/D_B
\;=\;
\BRA{\frac{D_A}{k}}^{1/3}\BRA{\frac{C_J}{D_B}}^{2/3}t^{-1/3}
\;.
\end{eqnarray}
Denoting $\tilde{R}(x) \equiv \tilde\rho_{A}(x)\tilde\rho_{B}(x)$ we have also
\begin{equation}
\label{R}
R(x,t) \;\equiv\; k\rho_{A}\rho_{B} \;=\; C_J^{4/3}(D_A D_B)^{-1/3} k^{1/3}
t^{-2/3} \tilde{R}\SQR{(x-x_0)/w(t)} \;.
\end{equation}
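The closed-form expressions on the right-hand sides of (\ref{defw})--(\ref{etab}), and the prefactor of $\tilde{R}$ in (\ref{R}), follow from $J(t) \approx C_J t^{-1/2}$ by direct substitution; the short sketch below confirms this numerically. All parameter values are arbitrary illustrative choices.

```python
# Arbitrary illustrative values of the external parameters and of C_J
DA, DB, k, CJ, t = 0.7, 1.3, 2.0, 0.45, 1.0e6

J = CJ / t**0.5                         # asymptotic current, eq. (defC_J)
w = (DA * DB / (k * J)) ** (1.0 / 3.0)  # reaction-layer width, eq. (defw)
eta_A = J * w / DA                      # amplitude of rho_A, eq. (etaa)
eta_B = J * w / DB                      # amplitude of rho_B, eq. (etab)
prefactor = k * eta_A * eta_B           # prefactor of R-tilde in eq. (R)
```

The definitional forms above agree with the closed forms $w \propto t^{1/6}$, $\eta_{A,B} \propto t^{-1/3}$ and $k\eta_A\eta_B \propto t^{-2/3}$ to machine precision.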
The essential property of $\tilde\rho_{A}(x)$ and $\tilde\rho_{B}(x)$ is that they
constitute the particular solution to equations (\ref{STAC}) with $D_A
= D_B = J = k = 1$ and $x_0 = 0$. Therefore, by symmetry, $\tilde{R}(x)$
assumes its maximal value for $x = 0$, and so equation (\ref{R})
implies that $R$ attains the maximal value at $x = x_0$. In the long
time limit we can therefore identify $x_f$ with $x_0$. Comparing now
(\ref{Solutions}) and (\ref{R}) with the scaling ansatz (\ref{4}) -
(\ref{6}) we see that we can also identify $\tilde\rho_{A}$ with $S_A$, $\tilde\rho_{B}$
with $S_B$, and $\tilde{R}$ with $S_R$. Therefore not only are the above
formulae consistent with G\'alfi and R\'acz's scaling ansatz, but through $C_J(a_0,
b_0, D_A, D_B)$ they also {\em exactly} relate the quantities of
physical importance (e.\ g.\ $w(t)$) to the parameters of the system
($D_A$, $D_B$, $a_0$, $b_0$ and $k$).
Because (\ref{Solutions}) can be applied to systems with any positive
values of the `external' parameters $a_0$, $b_0$, $D_A$, $D_B$ and $k$, we
arrive at the conclusion that the long time evolution of initially
segregated \mbox{A $+$ B $\to$ 0}\ systems is even more universal than was predicted
by G\'alfi and R\'acz; namely, not only the scaling exponents, but also the forms of
the scaling functions, do not depend on the external parameters.
Therefore, to find the scaling properties of the reaction layer it is
sufficient to concentrate on the simplest, symmetric case $D_A = D_B$
and $a_0 = b_0$.
Notice that we have achieved these results by means of a simple,
linear transformation (\ref{Solutions}). In this way we took advantage
of the very feature of equations (\ref{GR}) and (\ref{STAC}) that
prevents them from being solved analytically -- nonlinearity.
The above analysis is straightforward and can easily be generalized
to many other reaction terms $R$. In particular, for $R =
k\rho_{A}^m\rho_{B}^n$, with $k = \mbox{const}$ and $m$, $n$ being arbitrary
positive real numbers, the following relation should be used instead
of (\ref{defw})
\begin{equation}
\label{defwQ}
w^{m+n+1} \;\equiv\; D_A^m D_B^n k^{-1} J^{1-m-n}
\;.
\end{equation}
This formula, together with (\ref{defC_J}), (\ref{etaa}) and
(\ref{etab}), generalizes the scaling theory of Cornell {\em et al}
\cite{CD-mn} to the case of arbitrary positive $a_0$, $b_0$, $D_A$, $D_B$,
$m$ and $n$.
\section{Conclusions}
We have investigated the long time behavior of the concentrations
$\rho_{A}$ and $\rho_{B}$ of phases A and B in G\'alfi and R\'acz's problem. Our analysis
is the first analytical attempt to treat it in the general case of
arbitrary positive initial concentrations $a_0$ and $b_0$, and
diffusion constants $D_A$ and $D_B$, of the A's and B's.
Our approach is very general, as it does not impose any restrictions
on the form of the macroscopic reaction rate $R$. Instead, it is
based on the assumption that in the long time limit $\rho_{A}$ and $\rho_{B}$
satisfy a few physically justifiable relations. Therefore our theory
can be applied to various systems, including those for which the form
of the macroscopic reaction rate $R$ remains unknown. Another
distinctive feature of our theory is that, unlike most previous studies,
it does not concentrate on the investigation of the reaction layer
alone, but takes into account the properties of the whole, infinite
system.
In this way we managed to derive general formulae for the
concentration profiles of the majority species outside the reaction
layer, the location of the layer, and the total reaction rate. It is
interesting to notice that these quantities turned out to be independent of
$R$. We also derived analytically Jiang and Ebner's condition for the
reaction front to be asymptotically stationary. This relation also
turned out to be independent of $R$. These results correspond to the recent
findings based on dimensional analysis \cite{CD-mn,CD-Steady,hep},
according to which the scaling properties of the reaction layer are
independent of the form of $R$.
Next we derived the general scaling ansatz for the mean field
approximation. We gave the formulae which exactly relate some
quantities of physical importance (e.\ g.\ the width $w$ of the
reaction layer) to the external parameters of the system $a_0$, $b_0$,
$D_A$, $D_B$ and $k$. It turned out that not only the scaling
exponents, but also the forms of the scaling functions are independent
of the values of these parameters. This justifies the customary
approach of examining the properties of the reaction layer only in the
simplest, symmetric case $a_0 = b_0$ and $D_A = D_B$.
Our work also suggests that the behavior of the reaction-diffusion
system can be understood as a subtle interplay between two scaling
regimes. The first one is valid far from the reaction zone, where the
densities of particles A and B assume the scaling forms typical of
purely diffusive systems: $\rho_{A}(x,t) \approx \Psi_A(x/t^{1/2})$ and
$\rho_{B}(x,t) \approx \Psi_B(x/t^{1/2})$. These scaling laws determine
also the location of the point $x_f(t)$ of the maximal reaction, and
the magnitude of the current $J(t)$ of the particles entering the
reaction zone. However, at $x_f$ the spatial derivatives of $\Psi_A$
and $\Psi_B$ are discontinuous. Therefore in the vicinity of $x_f$
a new form of scaling develops, and $\rho_{A}$ and $\rho_{B}$ assume the forms
$\rho_{A}(x,t) = S_A(x/t^\alpha)$ and $\rho_{B}(x,t) = S_B(x/t^\alpha)$ with
$\alpha < 1/2$.
Although we confined our considerations only to the long time limit,
it would be interesting to combine our results with those of
Taitelbaum {\em et al} \cite{Haim91} for the short and intermediate
times. We believe that the striking similarity between equations
(\ref{1eq}) and (\ref{phi2}) is not accidental and should lead to a
general theory comprising the short, intermediate and long time limits.
The first attempt in this direction has already been made
\cite{KozaHaim}.
Notice also that the quasistationary approximation leads to new
definitions of the `short', `intermediate' and `long' time regimes.
Namely, we can define them as the time interval in which the reaction
term, in the vicinity of $x_f$, is vanishingly small compared to the
time derivative (`short time'); the interval in which they are of
similar magnitude (`intermediate time'); and the interval in which it
is the time derivative that can be neglected (`long time').
Another interesting problem concerns the limit $D_A \to 0$ with
other external parameters fixed. In this limit the scaling exponents
(in the mean-field approximation) are expected to change from $\alpha
= 1/6$, $\beta = 2/3$ to $\alpha = 0$, $\beta = 1/2$. A paper in
which this problem is examined within the framework of the theory
presented here is in preparation. We will mention here only that as
$D_A$ goes to 0, the time at which the system reaches the long time
regime goes to infinity, so the case $D_A = 0$ can be regarded as one
in which the system always remains in the `intermediate' time regime.
\vspace{3ex}
{\bf Acknowledgments}
I am grateful to prof.\ {\L}.\ A.\ Turski for drawing my attention to
the case of unequal diffusion constants, and to dr.\ H.\ Taitelbaum for
many inspiring discussions and hospitality at Bar-Ilan University.
This work was supported by the Polish KBN Grant nr 2 P 302 181 07.
\section{Introduction}
\label{s:IRS}
For almost a quarter of a century, a specter has been haunting string theory: the accelerated expansion of our universe implies an asymptotically and approximately de~Sitter (dS) geometry with a small but positive cosmological constant~\cite{Riess:1998cb, Perlmutter:1998np}. Constructing solutions in string theory with these features has been argued to be notoriously hard if not impossible: see~\cite{Danielsson:2018ztv,Cicoli:2018kdo,BHM:2021review} for recent comprehensive reviews.
However, the recent work of~\cite{Banerjee:2018qey,Banerjee:2019fzz} and~\cite{Bento:2021nbb}, as well as~\cite{Blaback:2019zig}, turns out to correspond naturally to a stringy cosmic dS-brane toy-model~\cite{rBHM1,rBHM4,rBHM5,rBHM7,rBHM10}, which we furthermore connect with a generalization of the proposal that allows for more general spacetime-varying string vacua~\cite{rHitch}.
The main ideas are the following:
({\small\bf1})~Both the \text{AdS}-decaying~\cite{Banerjee:2018qey,Banerjee:2019fzz}
and the so-called ``axilaton''~\cite{rBHM1,rBHM4,rBHM5,rBHM7,rBHM10} scenarios are {\em\/near\/} a singularity: they are ``resolutions/smoothings'' of a singular/critical model, and so are inherently non-perturbative~\cite{Bento:2021nbb}.
As such, a closer examination can (and does) find special configurations in which the various competing contributions do allow for a four-dimensional (sub-)spacetime of the desired dS geometry.
%
({\small\bf2})~Just as in the \text{AdS}-decaying scenario, the axilaton models can be made dynamical by choosing their anisotropy, $\w$, differently on the two sides of the candidate for the observed universe, $\text{dS}^{1,3}_{z=0}$.
%
({\small\bf3})~Calabi-Yau 5-folds, $\mathfrak{X}_5$, provide an auxiliary Euclidean rendition of the total Lorentzian spacetime, $\mathscr{X}^{1,9}$, and a natural ``home'' for such phase-transition nucleating ``bubble-worlds'': Just as in the \text{AdS}-decaying scenario, the ``exceptional'' (real codimension-six) sub-spacetimes $W^{1,3}$ naturally have positive curvature, while their complement and local neighborhood in $\mathscr{X}^{1,9}$ is naturally hyperbolic.
The key effects of considering the theory near a singularity are discussed in Section~\ref{s:NS}, especially focusing on the conifold in~\ref{s:wdC}, which leads us to review the \text{AdS}-decaying scenario in~\ref{s:dSB}. In Section~\ref{s:axilaton} we recall the salient features of the axilaton toy models and highlight the similarities in the overall spacetime geometry with the AdS-decaying solutions. This in turn lets us modify the former so as to afford a dynamical scenario akin to the latter.
The global geometry of the ten-dimensional spacetime is reviewed in Section~\ref{s:CY5}, where we present evidence that the most generic of spacetime geometries in string theory necessarily include exceptional four-dimensional sub-spacetimes of positive curvature, the simplest of which being $\text{dS}^{1,3}$.
Finally, section~\ref{s:Coda} summarizes our key points and conclusions, while Appendix~\ref{s:ED} collects a few additional details about (spacetime) defects and their role in understanding dS solutions in string theory.
\section{Nearly Singular Spacetimes}
\label{s:NS}
The early discovery that certain singular features in spacetime are innocuous to string dynamics~\cite{Dixon:1985jw,Dixon:1986jc} shows that the choice of available geometries is, in this respect, considerably more general than in conventional quantum field theory. A large class of such singular geometries affords topology change~\cite{rGHC,Green:1988uw,Candelas:1989ug,Partouche:2000uq}, in a milder sense also~\cite{Aspinwall:1993nu,Aspinwall:1994zd}, and so involves some form of phase transition. This tends to harbor radical effects both in physics and in geometry: massless black holes~\cite{Strominger:1995cz}, exoflops~\cite{Aspinwall:1993nu,Hubsch:2002st}, $D$-branes~\cite{Polchinski:1996fm}, and orientifolds~\cite{Sagnotti:1987tw,Horava:1989vt,Bianchi:1990tb}, and often involves strong-coupling effects.
\subsection{Warped Deformed Conifolds and Alike}
\label{s:wdC}
Models that are even just {\em\/near\/} being singular then invariably involve multiple competing contributions to the effective (target-spacetime) action, some of which are often difficult to calculate. Nevertheless, suitable approximations have been carefully considered and well justified, as is the case of the warped deformations of conifold singularities~\cite{Klebanov:2000hb,DeWolfe:2002nn,Douglas:2007tu}; see also~\cite{Douglas:2009zn,Bena:2009xk}. Here, the complex structure deformation of the conifold singularity and the relative size of associated vanishing cycles, the $\overline{\text{D3}}$-brane contributions, and the metric warping all affect the target spacetime physics. A detailed examination of such a nearly singular warped direct product compactification scenario finds a fine-tuned regime within the parameter space in which the multiple competing contributions can be shown to induce a metastable dS metric in the non-compact spacetime factor~\cite{Bento:2021nbb}; see however also the review~\cite{Bena:2017uuz} and the references therein.
Note that the vanishing cycles in the deformation-singularizing complex, compact Calabi-Yau 3-folds are real three-dimensional subspaces, matching in dimension the ``probing'' $\overline{\text{D3}}$-branes. Thus, this example indicates that vacua with a stringy dS spacetime could require a highly specialized set of circumstances and qualities.
Mirror symmetry guarantees that the same type of phenomenon occurs near the conifold singularity on the K{\"a}hler (or symplectic) structure side, as is the case with very small relative sizes of so-called exceptional sets\footnote{In a standard ``blowup surgery,'' these exceptional sets have long since been familiar in physics as gravitational instantons~\cite{Eguchi:1978gw,Gibbons:1978tx}, while their higher codimension ``small resolution'' analogues show up as the so-called worldsheet instanton ``$\mathscr{O}(-1,-1)$-curves'' in Calabi-Yau 3-folds~\cite{rBeast}.} in a Calabi-Yau 3-fold compactification. As with the near singularization by complex structure deformation~\cite{Bento:2021nbb}, one again expects a detailed balance of various contributing factors.
While exact calculations on this, K{\"a}hler, side are much harder in general and not amenable to explicit probing and direct computation, such mirror-symmetric configurations are qualitatively identical. In turn, however, Calabi-Yau 3-folds are known to generically have many such exceptional sets, real two- and four-dimensional subspaces, each one of which ``resolves'' a singularity by replacing it with a complex subvariety while preserving the complex structure outside the surgery locus~\cite{rH-AG,rReidK0,rBeast}; by mirror symmetry, the (nearly) vanishing 3-cycles are then as abundant.
This implies that the pool of such nearly singular models, on both mirror-symmetry sides, is populous albeit {\em\/sub-generic,} as the individual such models involve highly specialized arrangements.
Generally, the examples discussed so far are static background configurations, wherein the observable four-dimensional spacetime\footnote{A candidate for the observable four-dimensional world with its geometry unspecified is denoted $W^{1,3}$, while $M^{1,3}$,
$\text{dS}^{1,3}$ and $\text{AdS}^{1,3}$ specify Minkowski, de~Sitter and anti de~Sitter geometries, respectively.}, $W^{1,3}$, and the extra dimensions ($\mathscr{Y}^6$) form a rigid direct product, as sketched at Figure~\ref{f:4Cases}\,(a), where the geometry of $\mathscr{Y}^6$ and of the observable four-dimensional spacetime are independent of each other.
\begin{figure}[tbp]
\begin{center}
\begin{picture}(160,35)%
\put(0,0){\includegraphics[width=160mm]{4Cases.pdf}}
\put(19,-3){\makebox[0pt][c]{\footnotesize(a)\,rigid product}}
\put(58,-3){\makebox[0pt][c]{\footnotesize(b)\,fibration (w/singularity)}}
\put(100,-3){\makebox[0pt][c]{\footnotesize(c)\,non-analyticity}}
\put(142,-3){\makebox[0pt][c]{\footnotesize(d)\,exceptional subset}}
\end{picture}
\end{center}
\caption{Some possible geometries in target spacetime; these simplified illustrations hint at the variations in any of the relevant structures (complex, K{\"a}hler, symplectic, supersymmetry), and may be combined in various ways}
\label{f:4Cases}
\end{figure}
Allowing for $\mathscr{Y}^6$ to vary in some observable {\em\/spatial\/} directions was shown to necessarily include singularizations of $\mathscr{Y}^6$ that manifest as stringy cosmic strings ('branes)~\cite{Greene:1989ya,rCYCY}, as sketched in Figure~\ref{f:4Cases}\,(b). Allowing furthermore for {\em\/non-analytic\/} variations of the geometry of some of the extra dimensions~\cite{Randall:1999ee,Randall:1999vf} warps the overall geometry and can allow for a {\em\/simultaneous\/} emergence of exponential mass-hierarchy, localization of gravity, matter, and an exponentially suppressed cosmological constant in $\text{dS}^{1,3}$~\cite{rBHM1,rBHM4,rBHM5,rBHM7}; see the sketch in Figure~\ref{f:4Cases}\,(c). We will return to these constructions in Section~\ref{s:axilaton}.
\subsection{de~Sitter Bubble-Worlds}
\label{s:dSB}
Consider now, in turn, the recent {\em\/dynamical\/} proposal with the spacetime geometry consisting of two copies of $\text{AdS}^{1,4}$, glued together non-analytically across a 3-brane-world, $\text{dS}^{1,3}_{z=0}$~\cite{Banerjee:2018qey,Banerjee:2019fzz}; see also~\cite{Hawking_2000, Karch_2001,Danielsson:2021tyb}; herein, we relabel coordinates so the shell is located at $z\!=\!0$. The remaining five dimensions compactified on a suitable space of positive curvature, such as $S^5$, complete a direct product ten-dimensional spacetime, as expected in string theory.
The hallmark novelty in the new ``\text{AdS}-decaying'' proposal~\cite{Banerjee:2018qey,Banerjee:2019fzz} as compared with earlier literature is that the key subspace, $\text{dS}^{1,3}_{z=0}$, emerges as a ``bubble,'' a domain wall-like interface between a metastable, false-vacuum $\text{AdS}^{1,4}_{\rm out}$ and the nucleated, expanding true-vacuum $\text{AdS}^{1,4}_{\rm in}$. The discontinuity at $z\!=\!0$ in the total space must be sourced (via standard matching conditions) by a suitably chosen but otherwise unspecified matter distribution, which then also provides the interfacing shell, $\text{dS}^{1,3}_{z=0}$, with a constant tension. It is then further verified that neither bulk nor shell-localized matter invalidate the main result, that the ``shellworld,'' $\text{dS}^{1,3}_{z=0}$, is indeed a dS universe with a cosmological constant that can take phenomenologically acceptable values.
As in~\cite{Randall:1999vf}, the graviton is shown to obey a Schr{\"o}dinger-like equation, which guarantees the existence of a graviton mode duly localized to the shell, $\text{dS}^{1,3}_{z=0}$, although here this is {\em\/not\/} the mode trapped by the $\d(z)$-function well~\cite{Banerjee:2018qey,Karch_2001}. Note that the so-obtained spacetime geometry is a combination of the sketches in Figure~\ref{f:4Cases}\,(a) and Figure~\ref{f:4Cases}\,(d): $\text{dS}^{1,3}_{z=0}$ appears as a (non-factor) subspace {\em\/within\/} the first five-dimensional spacetime factor of
$(\text{AdS}^{1,4}_{\rm in}\mathop{\texttt{\#}}\limits\text{AdS}^{1,4}_{\rm out})\times S^5$.
Studied ever since the original proposal~\cite{Coleman:1980aw} (see~\cite{Banks:2021wqu} for a recent update), such vacuum decay phenomena have been estimated to be generic in string theory~\cite{Ooguri:2016pdq,Freivogel:2016qwc}\footnote{Refs.~\cite{Ooguri:2016pdq,Freivogel:2016qwc} in fact conjecture that all de~Sitter vacua with small cosmological constant decay faster than their horizon size; the latter of these two notes the contradiction with the results of Ref.~\cite{Danielsson:2016rmq}.}, implying then the same also for nucleation, a subset of which results in de~Sitter bubble-worlds. These phenomena are again ``near'' phase transitions, for which no perturbative description can be expected to be complete. Notably, the analysis of Refs.~\cite{Banerjee:2018qey,Banerjee:2019fzz} focuses on the dynamically unfolding aftermath of such a phase transition, and so again pertains to nearly singular configurations. Thus, both in the claimed ubiquity and in this inherently non-perturbative nature, this is reminiscent of the string theory realization~\cite{Green:1988uw,Candelas:1989ug} of ``Reid's phantasy''~\cite{rReidK0}.\footnote{There is also a natural connection to the more recent and rather vast cobordism generalization~\cite{McNamara:2019rup}.} In fact, as we will argue shortly, this can be made more specific in the context of spacetime varying string vacua generalizing the early work by one of the authors~\cite{rHitch}. To see why this is the case, we will however next turn to a brief review of the axilaton toy model~\cite{rBHM5,rBHM7,rBHM10} in order to showcase the similarities with the $\text{AdS}^{1,4}$-decay model~\cite{Banerjee:2018qey,Banerjee:2019fzz}.
\section{A Discretuum of Toy Models}
\label{s:axilaton}
Generally accommodating the overall geometry types sketched in Figure~\ref{f:4Cases}\,(a--c) and aiming for a (de~Sitter) candidate four-dimensional spacetime, the metric Ansatz has a direct sum format,
$\mathrm{d} s^2 = A^2\,\mathrm{d} s^2_{\sss1,3} +B^2\,\mathrm{d} s^2_{\sss6}$,
where $\mathrm{d} s^2_{\sss1,3}$ and $\mathrm{d} s^2_{\sss6}$ are appropriate line elements for the candidate observable spacetime, $W^{1,3}$, and its co-factor, $\mathscr{Y}^6$, while $A,B$ are ``warp factors'' that are most often allowed to vary over at least some directions in $\mathscr{Y}^6$. The particular class of toy models~\cite{rBHM1,rBHM4,rBHM5,rBHM7} further specializes to
\begin{equation}
\mathrm{d} s^2 = A^2(z)\,\mathrm{d} s^2_{1,3} +\ell^2 B^2(z)(\mathrm{d} z^2{+}\mathrm{d}\q^2)_{\mathscr{Y}^2_\perp}
+\mathrm{d} s^2_{\mathscr{Y}^4},
\label{e:metric}
\end{equation}
where $\ell$ sets the length-scale in $\mathscr{Y}^2_\perp$, so the warp factors $A(z),B(z)$ and the coordinates $(z,\q)\in\mathscr{Y}^2_\perp$ are dimensionless. The observable spacetime ($\mathrm{d} s^2_{1,3}$) is expected to have de~Sitter geometry, and the warp factors are chosen to depend only on the log-radial coordinate, $z$; also, $\q\simeq\q+2\p$ is the standard azimuthal angle in $\mathscr{Y}^2_\perp$. The last summand, $\mathrm{d} s^2_{\mathscr{Y}^4}$, could also be warped but is for now assumed to be independent of the other six coordinates; see however below.
This class of toy models also neglects all other matter fields, but explicitly includes the ``universal'' axion-dilaton (``axilaton'') field,
$\t\mathrel{\buildrel{\rm def}\over=} B+ig_s^{-1}e^{-\F}$,
in string theory, which exhibits an $\mathop{\textrm{SL}}(2;\mathbb{Z})$-monodromy owing to modular invariance~\cite{Polchinski:1998rq}.
Assuming then that $\t=\t(\q)$, the variables separate in an evidently non-holomorphic (and so non-supersymmetric) way, and this Ansatz admits two 3-parameter classes of solutions, which the $\mathop{\textrm{SL}}(2;\mathbb{Z})$-monodromy restricts to a {\em\/discretuum.} Notably, $\t=\t(\w\q)$, and the $\mathscr{Y}^2_\perp$-anisotropy, $\w$, appears in the Einstein-Friedan equation~\cite{rF79a,rF79b},
\begin{alignat}9
R_{\mu\nu} &={\cal G}_{\t\bar\t}\,(\vd_{\mu}\t)(\vd_{\nu}\bar\t)
\mathrel{\buildrel{\rm def}\over=}\widetilde{T}_{\mu\nu}
=\mathrm{diag}[\>\underbrace{0,0,0,0}_{\text{dS}^{1,3}},\,\underset{z}{0\strut},
\underset{\q}{(\w/2\ell)^2\strut},\,\underbrace{0,0,0,0}_{\mathscr{Y}^4}\>]~,
\label{e:EinStein}\\
{\cal G}_{\t \bar\t} &=\frac{-1~~}{(\t{-}\bar\t)^2} ={\cal G}_{\bar\t\t},\quad
{\cal G}_{\t\t}=0={\cal G}_{\bar\t\,\bar\t},
\end{alignat}
and sources the warped geometry~\eqref{e:metric}~\cite{rBHM5,rBHM7}:
\begin{subequations}
\label{e:newAB}
\begin{alignat}9
A(z) &= Z_\pm(z) \Big(1-\frac{\w^2 z_0^2}{160} Z_\pm(z)^2 +\dots\Big),
\label{e:newA}\\
B(z) &= \frac{1}{\ell z_0\sqrt{\Lambda_b}}
\Big(1-\frac{\w^2z_0^2}{40} Z_\pm(z)^2 +\dots\Big)~,
\label{e:newB}\\
\intertext{with the harmonic functions}
Z_\pm(z) &\mathrel{\buildrel{\rm def}\over=} 1\pm|z|/z_0,\quad z_0>0,
\label{e:Zpm}
\intertext{and the cosmological constant}
\Lambda_b&=\frac{\omega^2 - (2\xi/3z_0)A^2(0)}{48\,\ell^2},
\label{e:Lb}
\end{alignat}
\end{subequations}
where $0\leqslant\xi\leqslant12$ counts the source-branes\footnote{The stringy cosmic string-like~\cite{Shapere:1989kp} limit includes a total of $12{+}|\xi|$ supersymmetric 7-branes.}~\cite{rBHM2}. In~\cite{rBHM5,rBHM7,rBHM10}, the maximal, $\xi\!=\!12$, was used for simplicity. Thus, the $\mathscr{Y}^2_\perp$-anisotropy of the axilaton, $\t(\w\q)$, drives the cosmological constant~\eqref{e:Lb}. Furthermore, the inequalities $\w^2\!\geqslant\!(2\xi/3z_0)A^2(0)$ and $\Lambda_b\!\geqslant\!0$ are saturated only in the supersymmetric limit, where $\w,(\xi/z_0),\Lambda_b\!\to\!0$~\cite{rBHM5,rBHM7}. Finally, the length-scale $\ell$ emerges in~\eqref{e:EinStein} via dimensional transmutation (akin to $\L_{\rm QCD}$), as this equation is the condition (the vanishing of the worldsheet QFT beta-function, to the lowest order) for the target-spacetime metric to not renormalize.
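The interplay between these parameters is easy to tabulate numerically. The following minimal sketch evaluates~\eqref{e:Lb}, with $A(0)$ truncated to the order displayed in~\eqref{e:newA}; all parameter values are illustrative assumptions, not fits.

```python
# Minimal numerical sketch of the cosmological constant of Eq. (e:Lb);
# all parameter values below are illustrative assumptions.

def A0(omega, z0):
    """A(0) truncated to the order displayed in Eq. (e:newA); Z_pm(0) = 1."""
    return 1.0 - (omega**2 * z0**2) / 160.0

def Lambda_b(omega, xi, z0, ell=1.0):
    """Lambda_b = (omega^2 - (2 xi / 3 z0) A(0)^2) / (48 ell^2)."""
    return (omega**2 - (2.0 * xi / (3.0 * z0)) * A0(omega, z0)**2) / (48.0 * ell**2)

# The supersymmetric limit omega -> 0, xi/z0 -> 0 saturates Lambda_b -> 0:
print(Lambda_b(omega=0.0, xi=0.0, z0=1.0))          # 0.0
# Away from it, the sign of Lambda_b tracks omega^2 vs. (2 xi / 3 z0) A(0)^2:
print(Lambda_b(omega=2.0, xi=12.0, z0=5.0) > 0)     # True
print(Lambda_b(omega=0.1, xi=12.0, z0=5.0) > 0)     # False
```

Note that the truncation of $A(0)$ is only reliable for small $\w z_0$, so these numbers are qualitative at best.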
Notably, the metric~\eqref{e:metric}--\eqref{e:newAB} is continuous at $z\to\pm z_0$ with both sign-choices in~\eqref{e:newA}, where $A(z)$ with $Z_-(z)$ merely ``bounces'' as does the flat-space line element in spherical coordinates at the origin. In stark contrast, assuming $\mathrm{d} s^2_{1,3}$ to be the Minkowski line element forces the solution with $Z_-(z)$ to exhibit a naked singularity at $|z|\!=\!z_0$~\cite{rBHM1,rBHM2}, beyond which the metric becomes complex. Thereby, the de~Sitter metric~\eqref{e:metric} explicitly desingularizes the total spacetime in this class of toy models~\cite{rBHM5}. This is corroborated by verifying that the standard curvature invariants for~\eqref{e:metric}--\eqref{e:newAB} are all finite, whereas the Kretschmann (Riemann-squared) invariant diverges at $z_0$ in the Minkowski solution. Finally,~\eqref{e:newB} implies that the cosmological constant effectively specifies the ``distance'' (in the parameter space) to the so-resolved singularity.
In this sense, this class of toy models also describes nearly singular spacetimes, much as those studied in Refs.~\cite{Bento:2021nbb,Douglas:2009zn,Bena:2009xk,Bena:2017uuz} --- except that the background configurations are here restricted to a discretuum by the $\mathop{\textrm{SL}}(2;\mathbb{Z})$-monodromy. Indeed, the axilaton, $\t(\w\q)$, varies over $\mathscr{Y}^2_\perp$, with $g_se^{\F}$ oscillating about $O(1)$ by even a few orders of magnitude! For the whole model, this not only implies strong coupling, but is (owing to the $\mathop{\textrm{SL}}(2;\mathbb{Z})$-monodromy of $\t(\w\q)$) also discontinuous along a branch-cut, e.g., along $\q\!=\!\pm\p$. Across this branch-cut, the model non-perturbatively patches a stringy weak-coupling regime with a reciprocally strong-coupling regime --- akin to ``$S$-folds''~\cite{rHIS-nGeoCY}. Nevertheless, the effective observable string coupling,
$\vev{g_se^{\F}}$, remains perturbatively small in the sub-spacetime
$\text{dS}^{1,3}_{z=0}$, preserving the impression of a weakly coupled effective field theory within this toy-model candidate for the observable universe~\cite{rBHM7,rBHM10}.
For future reference, we note that the effective Planck mass-scale, $M_4$, in $\text{dS}^{1,3}$ is exponentially boosted~\cite{rBHM4,rBHM10},
\begin{equation}
M_4\!^2=M_6\!^4\,\ell^2\,z_0\!^{5/8}\,\xi^{-3/8}\,e^{+\xi z_0}\,
2\p\Gamma\!_\pm\big(\frc38;\xi z_0\big),
\label{e:M4M6}
\end{equation}
as compared with the six-dimensional Planck mass-scale resulting after compactifying on $\mathscr{Y}^4$.
Here, $\Gamma\!_-$ is the ``lower'' incomplete Gamma function, and $\Gamma\!_+$ its ``upper'' complement, and
$0\!\leqslant\!2\p\Gamma\!_\pm\big(\frc38;\xi z_0\big)\!\leqslant\!2\p\Gamma\!_\pm\big(\frc38\big)\!\approx\!14.89$. This in turn exponentially suppresses the cosmological constant,
$\Lambda_b\propto z_0\!^{-5/4}\,\xi^{3/4}\,e^{-2\xi z_0}M_4\!^2/\ell^2$.
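The two numerical statements above are straightforward to verify; the following sketch checks the quoted bound $2\p\,\Gamma(3/8)\approx14.89$ and the size of the suppression factor (the sample value of $\xi z_0$ is an assumption chosen only to exhibit the exponential hierarchy):

```python
import math

# Check the quoted bound: 2*pi*Gamma(3/8) ~ 14.89. Only the complete Gamma
# function is evaluated here; the incomplete pieces satisfy
# Gamma_-(s; x) + Gamma_+(s; x) = Gamma(s), so each lies between 0 and Gamma(s).
bound = 2.0 * math.pi * math.gamma(3.0 / 8.0)
print(round(bound, 2))    # 14.89

# The same combination xi*z0 that boosts M_4^2 by e^{+xi z0} suppresses
# Lambda_b by e^{-2 xi z0}; the sample values are illustrative assumptions.
xi, z0 = 12, 5.0
print(math.exp(-2.0 * xi * z0))
```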
As is by now standard~\cite{Randall:1999vf}, the graviton modes are governed by an effective potential that here depends on the source-counting parameter, $\xi$, that also appears in~\eqref{e:M4M6}~\cite{rBHM4}. This potential also has a superposed $\d(z)$-function well induced from the $|z|$-dependence of the metric~\eqref{e:newA}, which guarantees the existence of a $\text{dS}^{1,3}_{z=0}$-localized graviton mode. A more detailed computation shows that the Newton potential is corrected by $O(\ell^3/r^3)$ terms~\cite{rBHM5}. This is suppressed as compared to the na{\"\i}ve dimensional analysis estimate, $O(\ell^2/r^2)$, based on $\dim(\mathscr{Y}^2_\perp)\!=\!2$, and indicates that the effect depends on the (hyperbolic) curvature of $\mathscr{Y}^2_\perp$ near the mid-radius, $z\!=\!0$.
Also, this non-analyticity induces a $\d(z)$-contribution in the Ricci tensor, which must be matched by matter localized to the exceptional spacetime factor $\text{dS}^{1,3}_{z=0}$ by the same $\d(z)$-function well in the effective potential. This populates the candidate observable spacetime with universally $\d(z)$-localized modes of any matter added to this toy model.
With $Z_+(z)$ in~\eqref{e:newA}, the ``transverse'' annular region,
$\mathscr{Y}^2_\perp\approx(\mathscr{Z}\!\times\!S^1)$, has $\mathscr{Z}_+\!=\!(-\infty,\infty)$ so that
$\mathscr{Y}^2_\perp\approx\mathbb{C}^*$ is non-compact, as depicted in Figure~\ref{f:WxY}\,(left).
In turn, with $Z_-(z)$ in~\eqref{e:newA},
$\mathscr{Z}_-\!=\![-z_0,z_0]$,
so that $\mathscr{Y}^2_\perp$ includes its two circular boundaries; see Figure~\ref{f:WxY}\,(right).
In both of these solutions, the proper (log-radial) length of $\mathscr{Z}$ is infinite, although the surface area of $\mathscr{Y}^2_\perp$ is finite.
The 2-point compactification/completion\footnote{To be precise: for $Z_-(z)$ in~\eqref{e:newA}, each of the two circular boundaries of $\mathscr{Y}^2_\perp$ shrinks to a point as $z_0\to\infty$, thus rendering $\mathscr{Z}_-$ compact. For $Z_+(z)$ in~\eqref{e:newA}, $\mathscr{Z}\!\approx\!\mathbb{C}^*$ is non-compact and two points must be added to compactify $\mathscr{Z}_+\to\mathbb{P}^1$.} $\mathscr{Y}^2_\perp\!\hookrightarrow\!S^2\!=\!\mathbb{P}^1$ then allows us to regard $\mathscr{Y}^2_\perp$ as a 2-point {\em\/puncturing\/} of $\mathbb{P}^1$. Fibering the remaining factor, $\mathscr{Y}^4$ (e.g., K3), over $\mathscr{Y}^2_\perp$ shows that the unobservable space, $\mathscr{Y}^2_\perp\!\ltimes\!\mathscr{Y}^4$, in the axilaton models~\cite{rBHM5,rBHM7,rBHM10} is also obtainable as a 2-fibre puncturing of K3-fibrations over $\mathbb{P}^1$ --- the latter of which are well understood Calabi-Yau 3-folds, as they are used in static, Calabi-Yau compactifications to Minkowski spacetime~\cite{rBeast}.\footnote{We will return to this non-trivial K3-fibration in Section~\ref{s:CY5}.}
\begin{figure}[tbp]
\begin{center}
\begin{picture}(130,55)(-5,0)
\put(-20,20){\includegraphics[height=30mm]{LanternR.pdf}}
\put(7,0){\includegraphics[height=50mm]{CylBWorldOutIn2.pdf}}
\put(57,0){\includegraphics[height=50mm]{CylBWorldInOut2.pdf}}
\put(93,23){\includegraphics[height=25mm]{FunnelsR.pdf}}
\put(50,20){\TikZ{\path[use as bounding box](0,0);
\draw[brown!75!black,ultra thick,<->](-.3,1.5)--++(0,1);
\draw[brown!75!black,ultra thick,<->](1.2,1.5)--++(0,1);
\path[brown!75!black](.45,2)node{\Large$W^{1,3}_{z=0}$};
\draw[orange,thick,->](-5,.5)to[out=-60,in=-90]++(1.5,-.4);
\draw[blue,thick,->](5,.5)to[out=-120,in=-30]++(-1.5,-.3);
\draw[blue,thick,->](-5,2.5)to[out=30,in=90]++(3.5,-.1);
\draw[orange,thick,->](5,2.2)to[out=120,in=90]++(-2.5,.2);
\path(.45,.25)node{\Large$\mathscr{Y}^2_\perp$};
\path(.45,-.25)node{(horizontally)};
\path(-5,-1)node{\shortstack{$\mathscr{Y}^2_\perp$ with\\[2pt]
$Z_+(z)=1+|z|/z_0$}};
\path(6,-1)node{\shortstack{$\mathscr{Y}^2_\perp$ with\\[2pt]
$Z_-(z)=1-|z|/z_0$}};
}}
\end{picture}
\end{center}
\caption{A depiction of the $W^{1,3}_z\rtimes\mathscr{Y}^2_\perp$ fibration with $Z_+(z)$
(left) and $W^{1,3}_z\rtimes\mathscr{Y}^2_\perp$ with $Z_-(z)$ (right). Far left and far right: the proper distance plotted vertically in $\mathscr{Y}^2_\perp$, indicating the radial dependence of the circumference --- which is obscured in the two central depictions. Only the mid-radius fiber, $W^{1,3}_{z=0}$, of the fibration $W^{1,3}_z\rtimes\mathscr{Y}^2_\perp$ is depicted as a vertical cylinder, $W^{1,3}_{z=0}\times S^1$; it defines the ``inside'' and ``outside'' parts of the six-dimensional spacetime; see~\eqref{e:Geometries}, below.}
\label{f:WxY}
\end{figure}
The 2-fibre compactification/completion process
\begin{equation}
\text{dS}^{1,3}_z\!\rtimes\!\left(\mathscr{Z}\!\times\!S^1\right)\ltimes\text{K3}
~\xrightarrow[~z_0\to\infty~]{~\w\to 0~}~
M^{1,3}\!\times\!\left(\mathscr{Y}^6 =
(\mathbb{P}^1\ltimes\text{K3})\right)
\label{e:pDefo}
\end{equation}
may be understood as the right-hand side depicting a static,
Minkowski/CY geometry being identified as the (double, supersymmetry-restoring) limit, $\w,1/z_0\to0$, of the axilaton geometry on the left-hand side. Intermediate in this process, the Minkowski/CY geometry is ``decompactified'': $\mathscr{Y}^2_\perp\!\approx\!\mathbb{C}^*$ is a non-compact, unbounded cylinder, or a cylinder with boundaries when $\mathscr{Y}^2_\perp\!\approx\![-z_0,z_0]\!\times\!S^1$.
In this sense, the geometry given on the far left-hand side in~\eqref{e:pDefo} is the (supersymmetry-breaking, non-holomorphic) deformation of a punctured version of the geometry depicted on the far right-hand side. In turn, the $\text{dS}^{1,3}_{z=0}\!\times\!S^1\!\times\!\text{K3}$ configuration in the log-radial middle, $z\!=\!0$, of the annulus $\mathscr{Y}^2_\perp\!=\!S^1\!\times\!\mathscr{Z}$ serves as a ``cylindrical'' interface between the ``inside'' and ``outside'' regions; see Figure~\ref{f:WxY}.
Although somewhat different in the technical details, the overall spacetime geometry in the axilaton toy model~\eqref{e:metric}--\eqref{e:M4M6}~\cite{rBHM5,rBHM7,rBHM10} and in the $\text{AdS}^{1,4}$-decay model~\cite{Banerjee:2018qey,Banerjee:2019fzz} are remarkably similar: The Schr{\"o}dinger-like potential governing the graviton exhibits a discontinuity at $\text{dS}^{1,3}_{z=0}$, which then properly localizes a graviton mode.
Just as in the axilaton toy model, the $\text{AdS}^{1,4}$-decay model also shows the metric within the so embedded $\text{dS}^{1,3}_{z=0}$ to be of the de~Sitter type, with the cosmological constant in the correct ballpark. The respective Ricci tensors also require a balancing source by matter modes, again localized to the same locus.
The two spacetime geometries are shown side-by-side, with the interfacing $\text{dS}^{1,3}_{z=0}$ underscoring the joining ($\mathop{\texttt{\#}}\limits$) symbol\footnote{The product $\mathscr{A}\!\rtimes\!\mathscr{B}$ denotes that the warp-factors in the block-diagonal metric vary over $\mathscr{B}$.}:
\begin{equation}
\Big( \text{AdS}^{1,4}_{\rm in}\!\mathop{\texttt{\#}}\limits_{\text{dS}^{1,3}_{z=0}}\!\text{AdS}^{1,4}_{\rm out} \Big)
\times S^5
\quad\text{vs.}\quad
\Big( \big( \underbrace{\text{dS}^{1,3}_z\!\rtimes\!\mathscr{Y}^2_{\perp,\rm in}}_{z<0} \big)
\!\!\mathop{\texttt{\#}}\limits_{\text{dS}^{1,3}_{z=0}\times S^1}\!\!
\big( \underbrace{\text{dS}^{1,3}_z\!\rtimes\!\mathscr{Y}^2_{\perp,\rm out}}_{z>0} \big) \Big)
\times\mathscr{Y}^4.
\label{e:Geometries}
\end{equation}
The left-hand side scenario is rendered dynamical by choosing
$\text{AdS}^{1,4}_{\rm out}$ to be the false vacuum, with higher energy than the true-vacuum
$\text{AdS}^{1,4}_{\rm in}$.
This inequality provides the interfacing $\text{dS}^{1,3}_{z=0}$ with tension and drives its spatial expansion. The left-hand side scenario~\eqref{e:Geometries} thus depicts a point-nucleation of the ``true'' $\text{AdS}^{1,4}_{\rm in}$ within the ``false'' $\text{AdS}^{1,4}_{\rm out}$, with the
four-dimensional boundary serving as the candidate observable spacetime, $\text{dS}^{1,3}_{z=0}$~\cite{Banerjee:2018qey,Banerjee:2019fzz}.
The axilaton configuration on the right-hand side of~\eqref{e:Geometries} has been presumed so far to be static and in/out-symmetric~\cite{rBHM5,rBHM7,rBHM10} --- but this need not at all be the case. Since all physical features in this model depend on the anisotropy of the axilaton, $\t(\w\q)$, it suffices to choose $\w_{\rm in}\neq\w_{\rm out}$. The two sides of the interfacing $\text{dS}^{1,3}_{z=0}\!\times\!S^1$ in~\eqref{e:Geometries} are
\begin{equation}
\big(\text{dS}^{1,3}_z\!\rtimes\!\mathscr{Y}^2_{\perp,\rm in}\big)_{z<0}
\qquad\text{and}\qquad
\big(\text{dS}^{1,3}_z\!\rtimes\!\mathscr{Y}^2_{\perp,\rm out}\big)_{z>0},
\end{equation}
which are analogous to $\text{AdS}^{1,4}_{\rm in}$ and $\text{AdS}^{1,4}_{\rm out}$, as depicted in Figure~\ref{f:WxY}. Indeed, the log-radial $z$-dependence renders the proper distance between $z\!=\!0$ and $z_0$ infinite while keeping the surface area of $\mathscr{Y}^2_\perp$ finite, implying a hyperbolic geometry. This geometry is driven by the axilaton anisotropy, $\w>0$ in $\t=\t(\w\q)$, and differs from the well-known Ricci-flat geometry of the annular $\mathscr{Y}^2_\perp$ that one might have expected: the negative curvature of $\mathscr{Y}^2_\perp$ is forced by balancing the anisotropy-driven ``pressure,''
$\widetilde{T}_{\q\q}=(\w/2\ell)^2$, in~\eqref{e:EinStein}.
With different values of the axilaton anisotropy, the regions
$\text{dS}^{1,3}_{z<0}\rtimes\mathscr{Y}^2_{\perp,\rm in}$ and
$\text{dS}^{1,3}_{z>0}\rtimes\mathscr{Y}^2_{\perp,\rm out}$ harbor different ``pressures,'' which drives the expansion of one region into the other.
By analogy then, the right-hand side configuration~\eqref{e:Geometries} depicts a cosmic string-sourced nucleation of an analogous phase transition with a $\text{dS}^{1,3}_{z=0}\times S^1$-shaped ``cylindrical'' $(3{+}1{+}1)$-dimensional interface.
However, unlike in the \text{AdS}-decay model~\cite{Banerjee:2018qey,Banerjee:2019fzz}, the
$\mathop{\textrm{SL}}(2;\mathbb{Z})$-monodromy of the axilaton models~\cite{rBHM5,rBHM7,rBHM10} restricts the anisotropy $\w$ to discrete values. This allows only discrete variations, i.e., tunneling from one configuration to another.
\section{Calabi-Yau 5-Folds}
\label{s:CY5}
Let us reconsider then the overall, total spacetime in string theory,
relying only on the most general of restrictions while keeping the models of the foregoing discussion in mind. To this end, first generalize the geometry of the axilaton model by allowing the K3 in~\eqref{e:pDefo} to be fibered over $\mathscr{Z}\!\times\!S^1$, leading to a K3-fibered Calabi-Yau 3-fold in the $z_0\to\infty,~\w\to 0$ limit.
In addition, we consider an auxiliary, ``Euclideanized'' rendition, which affords relating algebro-geometric features of this depiction to the Lorentzian features of the original. This affords an old idea~\cite{rHitch} a fresh look, and will turn out to improve this proposal.
To start, string dynamics is generally understood to require the overall target spacetime, $\mathscr{X}^{1,9}$, to be Ricci-flat --- i.e., to {\em\/admit\/} a metric the Ricci tensor of which is a total derivative. There are in general stringy corrections, but the metric is generally expected to be perturbatively ``close'' to Ricci-flat.
This understanding follows from several related, but rather distinct vantage points:
\begin{enumerate}
\item the inherent semi-infinite cohomology in string theory~\cite{rFrGaZu86},
\item the free loop space as the configuration space of (closed) strings~\cite{Bowick:1990wt,Bowick:1988nj,rBowRaj87b,rBowRaj88,rBowRaj87a,rBowRaj87,rAlGoRe87,Oh:1987sq,rHHRR-sDiffS1},
\item the worldsheet quantum field theory approach~\cite{Pilch:1987eb}.
\end{enumerate}
The orientability of closed strings~\cite{Polchinski:1998rq} (including zero-modes~\cite{Freidel:2015pka,Freidel:2017wst,Freidel:2017nhg}!) implies that the second-listed analysis should be extended to the orientation-doubled loop-space and (Super)Diff$(S^1)/S^1$. In lieu of this extension, we note that it will perforce include the results of the original loop-space approach. (Owing to the still very much developing nature of string theory, a technically precise meaning of this notion is still unclear, and the conceptual understanding itself may require some adjustments.)
The above studies show that the free loop-space of $\mathscr{X}^{1,9}$ must be Ricci-flat, which then determines the geometry of $\mathscr{X}^{1,9}$ in turn~\cite{rBeast}.
Consider then a suitable Euclidean Wick-rotation of $\mathscr{X}^{1,9}$ that is, at least in Type-II theories (owing to their $N\!=\!2$ supersymmetry), guaranteed to admit both a complex structure and a K{\"a}hler metric. This assigns to the
ten-dimensional spacetime, $\mathscr{X}^{1,9}$, a corresponding auxiliary Calabi-Yau 5-fold, $\mathfrak{X}_5$; various dualities extend this argument throughout other types of string vacua.
Just as Calabi-Yau 3-folds generically contain numerous exceptional, isolated so-called ``$\mathscr{O}(-1,-1)$ curves''~\cite{rReidK0}, Calabi-Yau 5-folds generically contain numerous exceptional, isolated Fano ($c_1,R_{\mu\bar\nu}>0$) compact complex {\em\/surfaces,} $\mathfrak{S}_2$, i.e., real four-dimensional subspaces~\cite{rHitch}. In the Lorentzian original, (the preimage of) at least some of these isolated subspaces are soliton-like (rather than instanton-like), and so admit a metric of the $(1,3)$-signature, perhaps singular at some (sub)locus. Owing to their positive scalar curvature\footnote{Being Fano, $c_1(\mathfrak{S})\!>\!0$, implies the scalar curvature invariant to be positive, $R(\mathfrak{S})>0$.}, the simplest of these will then admit a de~Sitter metric and may serve as candidates for the observable spacetime.
Recall that for all Einstein spacetimes, $R_{\mu\nu} = \L g_{\mu\nu}$ defines $1/\sqrt{\L}$ as the characteristic length scale, relating it to the curvature, and via $\int\!\sqrt{\det(g_{\mu\nu})}$ also to the volume. Wick rotation changes the overall coefficient in this tensorial equation, and so preserves the relation.
More precisely, it suffices for $\mathfrak{S}_2$ to admit a metric that equals the de~Sitter metric within the past light-cone of a typical/current observer; see Appendix~\ref{s:ED} for more detail. For such exceptional sets at least, $\Lambda_b>0$ parametrizes the relative size of such an exceptional, isolated real 4-manifold, $(\text{dS}^{1,3}\mapsto\mathfrak{S}_2)\subset\mathscr{X}^{1,9}$, --- very much akin to the de~Sitter desingularizing deformation~\eqref{e:metric}--\eqref{e:M4M6}.
Reid's argument for 3-folds~\cite{rReidK0} generalizes straightforwardly to higher dimensions, so Calabi-Yau 5-folds are expected to abound in such exceptional real four-dimensional subspaces, making the pool of candidates abundant.
Being the exceptional sets of small resolutions of conifold singularities, the Lorentzian preimages of these four-dimensional subspaces, $\text{dS}^{1,3}\mapsto\mathfrak{S}_2$, may well evolve dynamically akin to the $\text{dS}^{1,3}_{z=0}$ bubble-worlds discussed above~\eqref{e:Geometries}. In the coarse classification discussed in Section~\ref{s:NS} and referring to Figure~\ref{f:4Cases}, the scenario $\text{dS}^{1,3}_{z=0}\subset\mathscr{X}^{1,9}$ that is the preimage of the ``Euclideanizing'' Wick rotation assignment
\begin{equation}
\text{dS}^{1,3}_{z=0}\subset\mathscr{X}^{1,9}
~\mapsto~
\mathfrak{S}_2\subset\mathfrak{X}_5
\end{equation}
is depicted in Figure~\ref{f:4Cases}\,(d): the total spacetime, $\mathscr{X}^{1,9}$, is in no way assumed to factorize, nor even foliate\footnote{For our present purposes, a foliation $X\!\divideontimes\!Y$ means that the total space looks locally at every point as a direct product of local portions of the two factors, $X$ and $Y$.}, and $\text{dS}^{1,3}\mapsto\mathfrak{S}_2$ is an isolated, exceptional sub-spacetime. On general grounds, such non-factorizing spacetimes ought to be the generic case in string theory.
Now consider the auxiliary Calabi-Yau 5-fold, $\mathfrak{X}_5$, a particular exceptional Fano surface in it, $\mathfrak{S}_2\subset\mathfrak{X}_5$, and its non-compact complement, $\mathfrak{X}_5\smallsetminus\mathfrak{S}_2$, as well as their respective Lorentzian counterparts: $\mathscr{X}^{1,9}$, $\text{dS}^{1,3}$ and $\mathscr{X}^{1,9}\smallsetminus\text{dS}^{1,3}$. Note that the Ricci curvature is not additive for such triples (nor is then the scalar curvature), but we have the Tian-Yau theorem~\cite{rTY1,rTY2}:
\begin{alignat}9
\underbrace{\text{compact Fano$_{n{+}1}$}}_{c_1>0} &~\smallsetminus~
\underbrace{\text{compact CY$_n$-fold}}_{c_1=0} &~=~
\underbrace{\text{non-compact CY$_{n{+}1}$-fold}}_{c_1=0}. \label{e:F-CY=CY}
\intertext{Here, the non-compact space on the right-hand side is known to admit a suitable K{\"a}hler metric/form, $J$, and a holomorphic volume-form, $\Omega$ (which in turn implies the existence of a covariantly constant spinor, i.e., global supersymmetry), that satisfy the standard relationship $J^n=\bar\Omega\!\wedge\!\Omega$ asymptotically, near the locus where the compact Calabi-Yau $n$-fold was excised.
It would then seem reasonable to expect that, analogously:}
\underbrace{\text{compact CY$_{n{+}1}$}}_{c_1=0} &~\smallsetminus~
\underbrace{\text{compact Fano$_n$-fold}}_{c_1>0} &~\overset?=~
\underbrace{\text{non-compact Hyp$_{n{+}1}$-fold}}_{c_1<0},
\label{e:CY-F=AdS}
\end{alignat}
where Hyp$_{n+1}$ denotes a ``hyperbolic'' complex $n{+}1$-fold with negative Ricci curvature, again with analogous asymptotics near the locus where the compact Fano $n$-fold was excised.
Both of these relations are well-nigh trivial and easily seen for $n\!=\!0$:
\begin{enumerate}
\item $\mathbb{P}^1\smallsetminus\{2\,\text{pts.}\}\approx\mathbb{C}^*$ (a non-compact cylinder),
which is well known to be Ricci-flat.
\item $T^2\smallsetminus\{\text{pt.}\}$ is a 1-handled disc, and so a hyperbolic non-compact surface.
\end{enumerate}
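Both $n\!=\!0$ statements amount to Euler-number bookkeeping: each puncture of a compact surface lowers $\chi$ by one, and (by Gauss-Bonnet) the sign of $\chi$ matches the sign of the admissible constant-curvature metric. A quick check:

```python
# Gauss-Bonnet bookkeeping for the two n = 0 examples: each puncture of a
# compact surface lowers the Euler number by one, and sgn(chi) matches the
# sign of the admissible constant-curvature metric.
def chi_punctured(chi_compact, punctures):
    return chi_compact - punctures

chi_P1, chi_T2 = 2, 0    # Euler numbers of P^1 ~ S^2 and of T^2

print(chi_punctured(chi_P1, 2))    # 0  -> flat: the cylinder C*
print(chi_punctured(chi_T2, 1))    # -1 -> hyperbolic: the 1-handled disc
```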
Furthermore, we will need the following specializing generalization:
\begin{equation}
\underbrace{\text{compact CY$_{n{+}1}$}}_{c_1=0} ~\smallsetminus~
\underbrace{\text{isolated comp.\ Fano$_{n-k}$-fold}}_{c_1>0} ~\overset?=~
\underbrace{\text{non-compact Hyp$_{n{+}1}$-fold}}_{c_1<0}.
\label{e:CY-kF=AdS}
\end{equation}
Locally, this result follows from the so-called adjunction theorem~\cite{rBeast}: For example, $c_1(\mathbb{P}^1)\!=\!2\!>\!0$ and $c_1(\text{CY}_3)\!=\!0$ imply that the local neighborhood of $\mathbb{P}^1\subset\text{CY}_3$ is any one of the rank-2 bundles $\mathscr{O}_{\mathbb{P}^1}(\ell{-}2,{-}\ell)$, with arbitrary $\ell\!\in\!\mathbb{Z}$. Depending on the choice of $\ell$, the curvature of this normal bundle will be negative along some fibers but positive along others.
However, we seek an {\em\/isolated\/} $\mathbb{P}^1$, one with no local holomorphic deformations, which happens precisely when $\ell\!=\!1$. In that exceptional case, the local neighborhood of that $\mathbb{P}^1\subset\text{CY}_3$ has a uniformly negative Ricci curvature. The claim~\eqref{e:CY-kF=AdS} then aims to generalize this, albeit perhaps in an appropriate average sense, throughout the complement $\text{CY}_3\!\smallsetminus\!\mathbb{P}^1$ --- and more generally, throughout $\mathfrak{X}_5\smallsetminus\mathfrak{S}_2$; we are not aware of a rigorous global result either way.
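For completeness, the adjunction bookkeeping behind this local statement is the standard one-line computation, restricting the vanishing first Chern class of the 3-fold to the curve:

```latex
0 = c_1(\text{CY}_3)\big|_{\mathbb{P}^1}
  = \underbrace{c_1(T\mathbb{P}^1)}_{=\,2}
  + \underbrace{c_1\big(\mathscr{O}_{\mathbb{P}^1}(\ell{-}2)\oplus
                        \mathscr{O}_{\mathbb{P}^1}({-}\ell)\big)}_{=\,(\ell-2)+(-\ell)\,=\,-2},
\qquad \ell\in\mathbb{Z}.
```

For $\ell\!=\!1$, the normal bundle $\mathscr{O}_{\mathbb{P}^1}(-1)\oplus\mathscr{O}_{\mathbb{P}^1}(-1)$ has no holomorphic sections, so the curve admits no holomorphic deformations, i.e., is indeed isolated.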
The so-generalized claim~\eqref{e:CY-kF=AdS} then implies that the non-compact complex 5-fold $\mathfrak{X}_5\smallsetminus\mathfrak{S}_2$ admits a metric of negative Ricci curvature. (Again, {\em\/locally,} near an isolated $\mathfrak{S}_2\subset\mathfrak{X}_5$, this is a consequence of the adjunction theorem.)
With suitable boundary conditions to match the excised subspace $W^{1,3}\mapsto\mathfrak{S}_2$, the Lorentzian counterpart, $\mathscr{X}^{1,9}\smallsetminus W^{1,3}$, should then also admit a metric with negative curvature
--- analogous to $\text{AdS}^{1,4}_{\rm out}$ in the \text{AdS}-decaying scenario~\eqref{e:Geometries}. However, these exceptional 4-manifolds, $W^{1,3}\subset\mathscr{X}^{1,9}$, have {\em\/nothing\/} ``inside,'' much as a circle inside $\mathbb{R}^3$ does not carve it into two separate parts: The complement, $\mathscr{X}^{1,9}\!\smallsetminus\!W^{1,3}$ is a single-component connected space since $W^{1,3}\subset\mathscr{X}^{1,9}$ has real codimension~6. (This is unlike the real codimension-1 subspace in the \text{AdS}-decay scenario that does carve the total spacetime, $\text{AdS}^{1,5}\!\times\!S^5$, into two disconnected parts, an ``outside'' and an ``inside.'') Having only the ``outside,''
$\mathscr{X}^{1,9}\smallsetminus W^{1,3}$, these exceptional sub-spacetimes $W^{1,3}\subset\mathscr{X}^{1,9}$ are rather literally akin to ``bubbles of nothing''; see~\cite{Witten:1981gj} and also~\cite{Dibitetto:2020csn}.
In the Euclidean (and holomorphic) rendition, the standard K{\"a}hler class, $J$, (every metric in that cohomology class) of any ``bubble'' (desingularizing exceptional set) $\mathfrak{S}_2$ is both positive over every complex submanifold of $\mathfrak{S}_2$ and has a positive square (``volume''): $\int_{C\subset\mathfrak{S}}J>0$ and $\int_{\mathfrak{S}}J^2>0$ are the standard conditions on the K{\"a}hler cone.
In the simplest case, take the bubble to be $\mathbb{P}^2$. Then:
\begin{enumerate}
\item The K{\"a}hler class of $\mathbb{P}^2$ is positive over ``all complex submanifolds,''
all of which are equivalent to the $\mathbb{P}^1$ at the North Pole ``infinity.'' (This refers to the standard cell decomposition $\mathbb{P}^2\approx\mathbb{C}^2\cup\mathbb{P}^1$.)
\item There is therefore a Riemannian metric that differs from the above
K{\"a}hler metric only in being null over the $\mathbb{P}^1$ at the North pole,
\item which is therefore a valid metric on $(S^4 \smallsetminus \{\text{North pole}\})\approx\mathbb{C}^2$,
and fails in those positivity requirements only at the North pole,
where it vanishes --- and so is nowhere negative.
\end{enumerate}
Together, these imply that $W^{1,3}$ has positive curvature, the simplest (most symmetric) of which is $\text{dS}^{1,3}$; see also Appendix~\ref{s:ED}.
Other Fano complex surface candidates for the bubble $\mathfrak{S}_2$ will contain more than one exceptional (complex) curve, each isomorphic to $\mathbb{P}^1\!\approx\!S^2$~\cite{rBeast}. Their Lorentzian preimages, $W^{1,3}$, will contain more than one corresponding exceptional real two-dimensional sub-spacetime, with a mutual intersection pattern characteristic of the Euclidean (and complex) $\mathfrak{S}_2$. With a Lorentzian metric on $W^{1,3}$, these sub-spacetimes can serve as specific loci of interest, such as space-like horizons at infinite past/future. A further exploration of such correlations between the Lorentzian metric structure of topologically nontrivial spacetimes and the (almost) complex structure of their auxiliary Euclidean renditions is beyond our present scope.
As it is, (almost) Fano complex surfaces are $\mathbb{P}^1\times\mathbb{P}^1$, or a blow-up of
$\mathbb{P}^2$ at $0,{\cdots},9$ points.
Their Euler numbers ($\int\!c_2$) are, in the given order: 4, or $3,4,{\cdots},12$~\cite{rGHSAR,rBeast}.
(The projective space, $\mathbb{P}^2$, itself may be regarded as a blowup of $S^4$, by replacing a point in $S^4$ by the exceptional hyperplane $\mathbb{P}^1\subset\mathbb{P}^2$.)
By the Todd-Chern-Hirzebruch theorem for Fano surfaces,
$\int(c_1\!^2{+}c_2)=12$, so that $\int\!c_1^2 = 12{-}\!\int\!c_2$,
and all characteristic classes (and their integrals) are controlled by $\int\!c_2$, i.e., the Euler number. For completeness, we note that $p_1 = c_1\!^2{-}2c_2 = 3(4{-}c_2)$, so the {\em\/signature\/} of these (real) four-dimensional spaces is $\s = 4{-}\chi_E$.
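This bookkeeping is easy to verify symbolically; the following sketch (ours, not part of the original text) checks that $\int\!c_1^2+\int\!c_2=12$ together with $p_1=c_1^2-2c_2$ and the Hirzebruch signature theorem indeed give $\s=4-\chi_E$:

```python
import sympy as sp

# chi_E denotes the Euler number, i.e. the integral of c2.
chi = sp.Symbol('chi_E')
c1sq = 12 - chi            # from the Todd-Chern-Hirzebruch relation int(c1^2 + c2) = 12
p1 = c1sq - 2*chi          # first Pontryagin number, p1 = c1^2 - 2 c2
assert sp.expand(p1 - 3*(4 - chi)) == 0   # p1 = 3(4 - c2), as stated
sigma = p1/3               # Hirzebruch signature theorem in real dimension 4
assert sp.expand(sigma - (4 - chi)) == 0  # hence sigma = 4 - chi_E

# Euler numbers quoted in the text: 4 for P1 x P1, and 3,...,12 for the
# blowups of P2 at 0,...,9 points; the corresponding signatures:
print({x: 4 - x for x in [4] + list(range(3, 13))})
```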
For all (almost) Fano surfaces except $\mathbb{P}^1\times\mathbb{P}^1$, the above argument shows that they admit a metric that is nowhere negative, and merely becomes null at select $1,{\cdots},10$ exceptional locations, which in such a metric become ``points''; the total space of these algebraic surfaces is then diffeomorphic to $S^4 \smallsetminus \{\text{points}\}$.
In the ``ruled (complex) surface,'' $\mathbb{P}^1\times\mathbb{P}^1$, the meridians of either $\mathbb{P}^1$ may be identified as ``time,'' which identifies the ``Southern'' and ``Northern'' instance of the other $\mathbb{P}^1$ as the infinite past and future horizon, respectively.
It seems plausible to expect a Lorentzian metric to exist wherein those exceptional loci are at infinite past/future, and whereby the positive curvature of these surfaces (just as of $S^4$) should translate into a positive cosmological constant, for the most part of $W^{1,3}\mapsto\mathfrak{S}_2$; see Appendix~\ref{s:ED}.
Even without computational details about the embedding $W^{1,3}\subset\mathscr{X}^{1,9}$, the fact that this is (a Lorentzian preimage of) a small resolution smoothing has strong consequences:
Considering $W^{1,3}$ from ``outside,'' however (metrically) isolated $W^{1,3}\subset\mathscr{X}^{1,9}$ may be, at least locally near its locus within $\mathscr{X}^{1,9}$ the whole spacetime $\mathscr{X}^{1,9}$ must itself admit a Lorentzian metric with a {\em\/co-laminar\/} class of time coordinates. That is, any particular time-like geodesic, $\mathscr{C}_t\subset W^{1,3}\subset\mathscr{X}^{1,9}$, will have infinitesimally near time-like geodesics, $\mathscr{C}_{t,\e}\subset(\mathscr{X}^{1,9}\!\smallsetminus\!W^{1,3})$, such that $\lim_{\e\to0}\mathscr{C}_{t,\e}=\mathscr{C}_t\subset W^{1,3}$. (This may well become ill-defined at certain special locations of $ W^{1,3}\subset\mathscr{X}^{1,9}$, such as past/future horizons.)
That is, the Lorentzian metric of $ W^{1,3}$ must extend almost everywhere smoothly into the local neighborhood of $ W^{1,3}\subset\mathscr{X}^{1,9}$ and then $W^{1,3}$ itself. Thereby, the above-implied class of ``time-sliced, static'' snapshots and associated phase transition nucleation interpretation also extend to the ten-dimensional spacetime
$\mathscr{X}^{1,9}$, at least in the local neighborhood of $ W^{1,3}\subset\mathscr{X}^{1,9}$.
Finally, let us note that in this (putatively de~Sitter) ``exceptional sub-spacetime, $\text{dS}^{1,3}_{\rm us}\subset\mathscr{X}^{1,9}$,'' scenario $\mathscr{X}^{1,9}$ is Ricci-flat, and so admits a standard, supersymmetry-preserving, spacetime metric. The putatively negative-curvature complement, $\mathscr{X}^{1,9}\!\smallsetminus\!\text{dS}^{1,3}_{\rm us}$, may well admit even multiple covariantly-constant spinors {\em\/away from\/} $\text{dS}^{1,3}_{\rm us}$ but their restriction to $\text{dS}^{1,3}_{\rm us}$ will fail to be covariantly constant {\em\/within\/} $\text{dS}^{1,3}_{\rm us}$.
Now, in the Euclideanized complex rendition, an isolated $\mathfrak{S}_2\subset\mathfrak{X}_5$ is metrically null in the ``bulk'' K{\"a}hler metric; it needs an $\mathfrak{S}_2$-localized ``correction'' to be positive everywhere on $\mathfrak{X}_5$; see~\cite{rGHSAR} for a lower-dimensional explicit construction to this end.
This isolation then implies that $\mathscr{X}^{1,9}$ admits a metric in which $\text{dS}^{1,3}_{\rm us}$ is null, i.e., a point, so that for a typical ``bulk''-$\mathscr{X}^{1,9}$ observer, supersymmetry is broken only at a point --- a singularity of the supersymmetry structure\footnote{Supersymmetry is in string theory largely correlated with complex structure, and as mentioned above, $(S^4 \smallsetminus \{\text{pt.}\})\approx\mathbb{C}^2$ of course admits a complex structure, for which the excised point is an obstruction.}.
Thus, these metrically isolated, codimension-six sub-spacetimes, $\text{dS}^{1,3}_{\rm us}\subset\mathscr{X}^{1,9}$, are exceptional sets of local small resolution smoothings of the ostensibly most generic rendition of ten-dimensional string vacua; in these ``measure-zero,'' but generically abundant sub-spacetimes, supersymmetry is broken.
\section{Summary, Outlook and Conclusions}
\label{s:Coda}
In this paper, we have briefly reviewed some recent work on constructing concrete superstring models that do exhibit a sub-spacetime with a phenomenologically acceptable de~Sitter geometry and cosmological constant,~\cite{Banerjee:2018qey,Banerjee:2019fzz,Bento:2021nbb}. Notably, these indicate two separate aspects that render such work more fruitful:
({\small\bf1})~a careful analysis of multi-parameter and near-singular configurations,
and
({\small\bf2})~focusing on exceptional sub-spacetimes within larger-dimensional spacetimes with non-factoring geometry.
These two ideas in fact naturally resonate closely with the discretuum of ``axilaton'' models~\cite{rBHM1,rBHM4,rBHM5,rBHM7,rBHM10} that may be viewed as a non-holomorphic and non-analytic deformation of the stringy cosmic string ('brane) scenario~\cite{Greene:1989ya,rCYCY}.
In particular, we have shown herein that this class of models exhibits both of these hallmark characteristics, and can also be adapted so as to model the dynamical scenario of~\cite{Banerjee:2018qey,Banerjee:2019fzz}. In this variant, the model represents a stringy cosmic string-sourced nucleation in a phase transition, where the candidate observable world, $\text{dS}^{1,3}_{z=0}$, is in the interfacing boundary.
Inspired by these resonances, we have reexamined the global geometry of the ten-dimensional spacetime in string theory, $\mathscr{X}^{1,9}$, following~\cite{rHitch} and~\cite{rBHM7}. This finds plenty of exceptional sub-spacetimes that map to exceptional sets, $\mathfrak{S}_2\subset\mathfrak{X}_5$, of small resolution desingularizations of the auxiliary Euclidean, Calabi-Yau 5-fold re-rendering of the total spacetime, $\mathscr{X}^{1,9}\mapsto\mathfrak{X}_5$. Given their positive Ricci curvature, $c_1(\mathfrak{S}_2)\!>\!0$,
the simplest of these have a de~Sitter geometry and so can serve as candidates for the observed four-dimensional spacetime. Furthermore, the four-dimensional bubble-worlds, $\text{dS}^{1,3}_{z=0}\mapsto\mathfrak{S}_2$ are metrically isolated, codimension-six sub-spacetimes, that have no ``inside,'' and their local neighborhood in $\mathscr{X}^{1,9}$ has a negative Ricci curvature --- on par with
$\text{AdS}^{1,4}_{\rm in},\text{AdS}^{1,4}_{\rm out}$ in~\eqref{e:Geometries} in the \text{AdS}-decaying scenario of Section~\ref{s:dSB}.
It is striking to note that the above features pertain to the generic Ricci-flat ten-dimensional stringy spacetimes, each of which in turn contains a very large number of such sub-spacetimes. That is, no special requirement or restriction on the total ten-dimensional (super)stringy spacetime was needed for the above conclusions to hold, and only the general characteristics of (super)string spacetimes and their Euclideanized Calabi-Yau counterpart have been presumed.
\noindent
{\bf Acknowledgments:}
We thank Giuseppe Dibitetto and Ivonne Zavala for their kind invitation to contribute to this special issue.
PB would like to thank the CERN Theory Group for their hospitality over the past several years.
TH is grateful to the Department of Mathematics, University of Maryland, College Park MD, and the Physics Department of the Faculty of Natural Sciences of the University of Novi Sad, Serbia, for the recurring hospitality and resources.
DM is grateful to Perimeter Institute for hospitality and support.
The work of PB is supported in part by the Department of Energy
grant DE-SC0020220.
The work of DM is supported in part by Department of Energy
(under DOE grant number DE-SC0020262) and the Julian Schwinger Foundation.
\section{Introduction}
The overall energy budget of the universe is dictated by dark matter and dark energy,
with a minor contamination from baryonic matter; dark energy is supposed to dominate
the cosmic landscape and to cause the acceleration of the universe. Although astrophysical
observations have accumulated since 1998, we are still struggling to find suitable
candidates for dark energy from fundamental physics. Many cosmologists believe that
the simplest candidate for the dark energy is the cosmological constant $(\Lambda)$, since it fits well
with observational data. During the cosmological evolution, the $\Lambda$-term has constant energy density,
with pressure $p^{(de)} = -\rho^{(de)}$. However, one has reason to
dislike the $\Lambda$-term because, on the theoretical ground, it suffers from the ``fine-tuning'' problem and the ``cosmic
coincidence'' puzzle \cite{cope2006}.\\
In this regard we note that the universe is mostly observed to be flat and isotropic, supporting the
predictions of the $\Lambda$CDM model. However, observations of high-resolution CMB radiation data from the
Wilkinson Microwave Anisotropy Probe (WMAP) show some large-angle anomalies
of the large-scale structure of the universe with an asymmetric expansion \cite{hinshaw2009, watanabe2009}.
Planck data also show a slight red tilt of the primordial power spectrum of curvature perturbations away from exact scale
invariance \cite{ade2014}. These observations hint towards the presence of some anisotropic energy source
in the universe with anisotropic pressures. The issue of global anisotropy can be settled if anisotropy can be
incorporated into the FRW models as a sort of small perturbation. In order to address the issue of smallness in the
angular power spectrum, some anisotropic models have been proposed in recent times \cite{campanelli2007,campanelli2009}.
These models bear a similarity to the Bianchi morphology \cite{jaffe2006}. The spatially homogeneous Bianchi I model is
more general than the FRW model and has anisotropic spatial sections. The Bianchi type models provide an
opportunity to consider asymmetric
expansion along different spatial directions. Very recently a study has been carried out \cite{yadav2012} in Bianchi I
space-time with dominance of dark energy. Some other recent works on anisotropic dark energy are also available in the
literature \cite{yadav2011}$-$\cite{mishra2015}.\\
Lie groups of transformations have been extensively applied to linear and nonlinear differential equations in
areas of theoretical physics such as general relativity, particle physics and cosmology \cite{baum1, ibra1}.
The Lie symmetry group method is one of the most useful tools for finding exact solutions of the
Einstein field equations, which are described by a system of NLPDEs \cite{blum1, blum2, olve1, ovsi1, step1}.
Recently we have developed a formalism to solve the nonlinear Einstein field equations in general relativity
\cite{ali2014, ali2014a, anil2014, ali2014b}. In our earlier work \cite{anil2014}, we proposed the invariant solution of a
dark energy (DE) model in cylindrically symmetric space-time, while in this paper we confine ourselves to
investigating the
similarity solution of an anisotropic DE model in Bianchi-I space-time, which is entirely different from \cite{anil2014}.
In the astrophysical community,
inhomogeneous cosmological models have gained interest as exact perturbations of the FRW model and are often
employed to study cosmological phenomena. That is why we consider the inhomogeneous Bianchi-I space-time here.
The paper is organized as follows: In section 2, the basic formalism for anisotropic DE is
discussed for an anisotropic and inhomogeneous Bianchi I space-time; a similar formalism has already been developed
in our earlier work \cite{anil2014}. In section 3, the Lie group analysis method is developed for the Bianchi I space-time.
Sections 4 and 5 deal with the optimal system and the similarity solutions of the models. The physical viability of
the discussed dark energy model is tested with a graphical analysis of cosmological parameters in section 6.
Finally, the conclusions are summarized in section 7.
\section{The metric and field equations}
The Bianchi type-I space-time is given by
\begin{equation}
\label{spacetime}
ds^{2}=A^{2}\,dx^{2}+B^{2}\,dy^{2}+C^2\,dz^{2}-dt^{2},
\end{equation}
where the metric potentials $A$, $B$ and $C$ are functions of $x$
and $t$. Einstein's field equations in the case of a mixture of
perfect fluid and anisotropic dark energy are given by
\begin{equation}
\label{efe}
G^{i}_{j}=R^{i}_{j}-\frac{1}{2}R\,g^{i}_{j}=-T^{(pf)i}_{j}-T^{(de)i}_{j},
\end{equation}
with
\begin{equation}
\label{pf}
T^{(pf)i}_{j} = diag[-\rho^{(pf)}, p^{(pf)}, p^{(pf)},
p^{(pf)}]=diag[-1, \omega^{(pf)}, \omega^{(pf)},
\omega^{(pf)}]\rho^{(pf)}
\end{equation}
and
\begin{equation}
\label{de}
T^{(de)i}_{j}=diag[-\rho^{(de)}, p^{(de)}_{x}, p^{(de)}_{y},
p^{(de)}_{z}] =diag[-1, \omega_x^{(de)}, \omega_y^{(de)},
\omega_z^{(de)}]\rho^{(de)}
\end{equation}
where $g_{ij}$ is the metric tensor with $g_{ij}u^iu^j=-1$;
$u^i$ is the flow vector; $R^{i}_{j}$ is the Ricci tensor;
$R=R^{i}_{i}$ is the Ricci scalar; $p^{(pf)}$, $\rho^{(pf)}$ and
$\rho^{(de)}$ are, respectively the pressure and energy density of
the perfect fluid and dark energy components; $\omega^{(pf)}$ is the
EoS parameter of perfect fluid with $\omega^{(pf)}\geq0$;
$\omega_x^{(de)}$, $\omega_y^{(de)}$ and $\omega_z^{(de)}$ are the
deviation-free EoS parameters of dark energy, respectively, on the
$x$, $y$ and $z$ axis.\\
In co-moving coordinate system, the field equation (\ref{efe}), for the inhomogeneous
space-time (\ref{spacetime}), read as
\begin{equation}
\label{efe1}
p^{(pf)}+\omega_x^{(de)}\rho^{(de)}=\frac{B'C'}{A^{2}BC}-
\frac{\dot{B}\dot{C}}{BC}-\frac{\ddot{B}}{B}-\frac{\ddot{C}}{C},
\end{equation}
\begin{equation}
\label{efe2}
p^{(pf)}+\omega_y^{(de)}\rho^{(de)}=\frac{1}{A^{2}}\left[\frac{C''}{C}-
\frac{A'C'}{AC}\right]-\frac{\ddot{A}}{A}-\frac{\dot{A}\dot{C}}{AC}-\frac{\ddot{C}}{C},
\end{equation}
\begin{equation}
\label{efe3}
p^{(pf)}+\omega_z^{(de)}\rho^{(de)}=\frac{1}{A^{2}}\left[\frac{B''}{B}-
\frac{A'B'}{AB}\right]-\frac{\ddot{A}}{A}-\frac{\dot{A}\dot{B}}{AB}-\frac{\ddot{B}}{B},
\end{equation}
\begin{equation}
\label{efe4}
\rho^{(pf)}+\rho^{(de)}=\frac{1}{A^{2}}\left[\frac{B''}{B}+
\frac{B'C'}{BC}+\frac{C''}{C}-\frac{A'B'}{AB}-\frac{A'C'}{AC}\right]-\frac{\dot{A}\dot{B}}{AB}-\frac{\dot{A}\dot{C}}{AC}-\frac{\dot{B}\dot{C}}{BC},
\end{equation}
\begin{equation}
\label{efe5}
\frac{\dot{C}^{\prime}}{C}+\frac{\dot{B}^{\prime}}{B}=\frac{\dot{A}}{A}\Big[\frac{C^{\prime}}{C}+\frac{B^{\prime}}{B}\Big].
\end{equation}
Here $A^{\prime}=\frac{dA}{dx}$, $\dot{A} = \frac{dA}{dt}$ and so on.\\
The velocity field $u^i$ is irrotational. The scalar expansion
$\Theta$, the shear scalar $\sigma^2$ and the volume scalar $V$ have, respectively, the forms \cite{fein1, rayc1}:
\begin{equation} \label{u215}
\Theta\,=\,u_{;i}^{i}=\dfrac{\dot{A}}{A}+\dfrac{\dot{B}}{B}+\dfrac{\dot{C}}{C},
\end{equation}
\begin{equation} \label{u216}
\begin{array}{ll}
\sigma^2\,=\,\dfrac{1}{2}\,\sigma_{ij}\,\sigma^{ij}=\dfrac{\Theta^2}{3}-\dfrac{\dot{A}\dot{B}}{AB}-\dfrac{\dot{A}\dot{C}}{AC}-\dfrac{\dot{B}\dot{C}}{BC},
\end{array}
\end{equation}
\begin{equation} \label{u218}
V=\sqrt{-g}=A\,B\,C,
\end{equation}
where $g$ is the determinant of the metric (\ref{spacetime}). The shear tensor is
\begin{equation} \label{u219}
\begin{array}{ll}
\sigma_{ij}\,=\,u_{(i;j)}+\dot{u}_{(i}\,u_{j)}-\frac{1}{3}\,\Theta\,(g_{ij}+u_i\,u_j).
\end{array}
\end{equation}
and the non-vanishing components of the $\sigma_i^j$ are
\begin{equation} \label{u220}
\left\{
\begin{array}{ll}
\sigma_1^1\,&=\,\dfrac{1}{3}\Big(\dfrac{2\dot{A}}{A}-\dfrac{\dot{B}}{B}-\dfrac{\dot{C}}{C}\Big),\,\,\,\,\,\,\,\,\,\,
\sigma_2^2\,=\,\dfrac{1}{3}\Big(\dfrac{2\dot{B}}{B}-\dfrac{\dot{A}}{A}-\dfrac{\dot{C}}{C}\Big),\\
\\
\sigma_3^3\,&=\,\dfrac{1}{3}\Big(\dfrac{2\dot{C}}{C}-\dfrac{\dot{B}}{B}-\dfrac{\dot{A}}{A}\Big),\,\,\,\,\,\,\,\,\,\,
\sigma_4^4\,=0.
\end{array}
\right.
\end{equation}
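As a consistency check (ours, not part of the original text), the quoted expression (\ref{u216}) for $\sigma^2$ can be recovered from the diagonal components (\ref{u220}) via $\sigma^2=\frac{1}{2}\,\sigma_{ij}\,\sigma^{ij}$; a minimal sympy sketch:

```python
import sympy as sp

t = sp.Symbol('t')
A, B, C = (sp.Function(n)(t) for n in ('A', 'B', 'C'))
a, b, c = sp.diff(A, t)/A, sp.diff(B, t)/B, sp.diff(C, t)/C  # Hubble-like rates

Theta = a + b + c                     # expansion scalar, eq. (u215)
s1 = (2*a - b - c)/3                  # diagonal shear components, eq. (u220)
s2 = (2*b - a - c)/3
s3 = (2*c - b - a)/3
# sigma^2 from the definition (1/2) sigma_ij sigma^ij (only diagonal terms):
sigma2_from_components = sp.Rational(1, 2)*(s1**2 + s2**2 + s3**2)
# sigma^2 as quoted in eq. (u216):
sigma2_quoted = Theta**2/3 - a*b - a*c - b*c
assert sp.simplify(sigma2_from_components - sigma2_quoted) == 0
print("shear-scalar identity verified")
```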
The Einstein field equations (\ref{efe1})-(\ref{efe5}) constitute
a system of five highly nonlinear PDEs with six unknown variables, $A$,
$B$, $C$, $p^{(pf)}$, $\rho^{(pf)}$ and $\rho^{(de)}$. Therefore,
one physically reasonable condition among these parameters is
required to obtain explicit solutions of the field equations. Let us
assume that the metric potential $A$ is a function of
time only, i.e., $A(x,t)=A(t)$. If we substitute
$A(x,t)=A(t)$ in the Einstein field equations, the equations
(\ref{efe1})-(\ref{efe5}) transform to the following NLPDEs in the
coefficients $B$ and $C$ only:
\begin{equation} \label{u210-1}
\begin{array}{ll}
E_1=\dfrac{\dot{A}\dot{B}}{AB}-\dfrac{\ddot{C}}{C}-\dfrac{B^{\prime\prime}}{A^2B^2}
+\left(\dfrac{\omega_x^{(de)}-\omega_y^{(de)}}{\omega_z^{(de)}-\omega_y^{(de)}}\right)\left[\dfrac{\ddot{A}}{A}-\dfrac{\dot{B}\dot{C}}{BC}+\dfrac{B'C'}{A^2BC}\right]\\
\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
+\left(\dfrac{\omega_x^{(de)}-\omega_z^{(de)}}{\omega_z^{(de)}-\omega_y^{(de)}}\right)\left[\dfrac{\ddot{B}}{B}-\dfrac{\dot{A}\dot{C}}{AC}+\dfrac{C''}{A^2C}\right]
=0,
\end{array}
\end{equation}
\begin{equation} \label{u210-2}
\begin{array}{ll}
E_2=\dfrac{\dot{C}^{\prime}}{C}+\dfrac{\dot{B}^{\prime}}{B}-\dfrac{\dot{A}}{A}\left(\dfrac{C^{\prime}}{C}+\dfrac{B^{\prime}}{B}\right)=0,
\end{array}
\end{equation}
where
\begin{equation} \label{u210-3}
\begin{array}{ll}
p^{(pf)}(x,t)\,=\,\dfrac{\omega_x^{(de)}}{\omega_x^{(de)}-\omega_y^{(de)}}\left(\dfrac{C''}{A^2C}-\dfrac{\dot{A}\dot{C}}{AC}-\dfrac{\ddot{A}}{A}\right)
-\dfrac{\omega_y^{(de)}}{\omega_x^{(de)}-\omega_y^{(de)}}\left(\dfrac{B'C'}{BC}-\dfrac{\dot{B}\dot{C}}{BC}-\dfrac{\ddot{B}}{B}\right)-\dfrac{\ddot{C}}{C},
\end{array}
\end{equation}
\begin{equation} \label{u210-4}
\begin{array}{ll}
\rho^{(pf)}(x,t)\,=\,\dfrac{1}{\omega_x^{(de)}-\omega_y^{(de)}}\left(\dfrac{\dot{B}\dot{C}}{BC}+\dfrac{\ddot{B}}{B}+\dfrac{C''}{A^2C}
-\dfrac{\ddot{A}}{A}-\dfrac{\dot{A}\dot{C}}{AC}-\dfrac{B'C'}{A^2BC}\right)\\
\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
+\dfrac{1}{A^2}\left(\dfrac{B'C'}{BC}+\dfrac{B''}{B}+\dfrac{C''}{C}\right)
-\dfrac{\dot{A}\dot{B}}{AB}-\dfrac{\dot{A}\dot{C}}{AC}-\dfrac{\dot{B}\dot{C}}{BC},
\end{array}
\end{equation}
\begin{equation} \label{u210-5}
\begin{array}{ll}
\rho^{(de)}(x,t)\,=\,\,\dfrac{1}{\omega_x^{(de)}-\omega_y^{(de)}}\left(\dfrac{B'C'}{A^2BC}+\dfrac{\ddot{A}}{A}+\dfrac{\dot{A}\dot{C}}{AC}
-\dfrac{\dot{B}\dot{C}}{BC}-\dfrac{\ddot{B}}{B}-\dfrac{C''}{A^2C}\right).
\end{array}
\end{equation}
\section{Lie point symmetry}
We consider a one-parameter Lie group of transformations
\begin{equation}\label{u31}
\left\{
\begin{array}{ll}
x_i^{*}=x_i+\epsilon\,\xi_{i}(x_j,u_{\beta})+\bold{o}(\epsilon^2),\\
u_{\alpha}^{*}=u_{\alpha}+\epsilon\,\eta_{\alpha}(x_j,u_{\beta})+\bold{o}(\epsilon^2),
\end{array}
\right.
\,\,\,i,j,\alpha,\beta=1,2,
\end{equation}
with a small parameter $\epsilon\,\ll\,1$, where $x_1=x$, $x_2=t$, $u_1=B$ and $u_2=C$. The coefficients $\xi_{1}$, $\xi_{2}$, $\eta_{1}$ and $\eta_{2}$ are functions of $x$, $t$, $B$ and $C$. The system (\ref{u210-1})-(\ref{u210-2}) is invariant under the transformations given in
Eq. (\ref{u31}) and the corresponding infinitesimal generator of Lie groups
(symmetries)
\begin{equation}\label{u32}
X=\sum_{i=1}^{2}\xi_{i}\dfrac{\partial}{\partial x_{i}}+\sum_{\alpha=1}^{2}\eta_{\alpha}
\dfrac{\partial}{\partial u_{\alpha}},
\end{equation}
must satisfy the invariance conditions:
\begin{equation}\label{u33}
{\text{Pr}}^{(2)}\,X\Big(E_m\Big)|_{E_m=0}=0,
\end{equation}
where $E_m=0,\,m=1,2$, is the system (\ref{u210-1})-(\ref{u210-2}) under study and ${\text{Pr}}^{(2)}$ is the second
prolongation of the symmetry $X$. The details of the Lie
point symmetry method have already been
given in \cite{anil2014}. Finally, the characteristic equations are given by
\begin{equation}\label{u41}
\dfrac{dx}{a_1\,x+a_2}=\dfrac{dt}{a_3\,t+a_4}=\dfrac{dB}{a_5\,B}=\dfrac{dC}{a_6\,C}.
\end{equation}
\section{Optimal system of subalgebras}
The general Lie point symmetry (\ref{u32}) becomes
\begin{equation}\label{u32-1}
X=\sum_{i=1}^{6}\,a_i\,X_i,
\end{equation}
where the nonlinear Einstein field equations
(\ref{u210-1})-(\ref{u210-2}) admit the 6-dimensional Lie algebra
spanned by the independent symmetries shown below:
\begin{equation}\label{u32-2}
X_1=x\,\dfrac{\partial}{\partial x},\,\,\,\,
X_2=\dfrac{\partial}{\partial x},\,\,\,\,
X_3=t\,\dfrac{\partial}{\partial t},\,\,\,\,
X_4=\dfrac{\partial}{\partial t},\,\,\,\,
X_5=B\,\dfrac{\partial}{\partial
B},\,\,\,\,
X_6=C\,\dfrac{\partial}{\partial C}.
\end{equation}
The forms of the symmetries $X_i$, $i=1,...,6$ suggest their
significance: $X_2$ and $X_4$ generate translations in space and
time, while $X_1$, $X_3$, $X_5$ and $X_6$ are associated with the
scaling transformations. When the Lie algebra of these symmetries is
computed, the only non-vanishing commutation relations are:
\begin{equation}\label{u32-3}
[X_1,X_2]\,=-X_2,\,\,\,\,\,\,\,\,\,\,[X_3,X_4]=-X_4.
\end{equation}
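These commutators (and the vanishing of all the others) can be checked directly by letting the generators act on a test function; a small sympy sketch (ours, for illustration):

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.Function('f')(x, t)

# The generators of eq. (u32-2), realized as first-order operators:
X1 = lambda g: x*sp.diff(g, x)
X2 = lambda g: sp.diff(g, x)
X3 = lambda g: t*sp.diff(g, t)
X4 = lambda g: sp.diff(g, t)

def bracket(Xa, Xb, g):
    """Lie bracket [Xa, Xb] applied to the test function g."""
    return sp.simplify(Xa(Xb(g)) - Xb(Xa(g)))

assert bracket(X1, X2, f) == -X2(f)   # [X1, X2] = -X2
assert bracket(X3, X4, f) == -X4(f)   # [X3, X4] = -X4
assert bracket(X1, X3, f) == 0        # mixed space/time brackets vanish
assert bracket(X2, X4, f) == 0
```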
Following Yadav and Ali \cite{anil2014}, we obtain an
optimal system of one-dimensional subalgebras spanned
by:
\begin{equation}\label{u32-5}
\begin{array}{ll}
\{X^{(1)}=X_1+a_3\,X_3+a_5\,X_5+a_6\,X_6,\,\,\,\,\,\,X^{(2)}=X_1+a_4\,X_4+a_5\,X_5+a_6\,X_6,\,\\
\,\,\,X^{(3)}=X_2+a_3\,X_3+a_5\,X_5+a_6\,X_6,\,\,\,\,\,\,X^{(4)}=X_2+a_4\,X_4+a_5\,X_5+a_6\,X_6,\,\\
\,\,\,X^{(5)}=X_3+a_5\,X_5+a_6\,X_6,\,\,\,\,\,X^{(6)}=X_4+a_5\,X_5+a_6\,X_6,\,\,\,\,\,
X^{(7)}=X_5+a_6\,X_6,\,\,\,\,\,X^{(8)}=X_6\}.
\end{array}
\end{equation}
\section{Similarity solutions}
For the symmetries $X^{(5)}$, $X^{(6)}$, $X^{(7)}$ and $X^{(8)}$ we have $a_1=a_2=0$;
we shall therefore analyze the similarity solutions associated with the optimal-system
symmetries $X^{(1)}$, $X^{(2)}$, $X^{(3)}$ and $X^{(4)}$ only, as follows:\\
\textbf{Solution (I):} The symmetry $X^{(1)}$ has the characteristic equations:
\begin{equation}\label{u41-1}
\dfrac{dx}{x}=\dfrac{dt}{a_3\,t}=\dfrac{dB}{a_5\,B}=\dfrac{dC}{a_6\,C}.
\end{equation}
Then the similarity variable and the similarity transformations take the form:
\begin{equation}\label{u42-1}
\begin{array}{ll}
\xi=\dfrac{x^a}{t},\,\,\,\,\,\,B(x,t)=\,x^{b}\,\Psi(\xi),\,\,\,\,\,\,C(x,t)=\,x^{c}\,\Phi(\xi),
\end{array}
\end{equation}
where $a=a_3$, $b=a_5$ and $c=a_6$ are arbitrary constants. In
this case, we have
\begin{equation}\label{u42-2}
\begin{array}{ll}
A(t)=d\,t^{1-\frac{1}{a}}, \,\,\,\,\,\,\,
\omega_y^{(de)}(t)=\omega_x^{(de)}(t)+q\,t^{\frac{2}{a}-2},
\,\,\,\,\,\,\,
\omega_z^{(de)}(t)=\omega_x^{(de)}(t)+r\,t^{\frac{2}{a}-2},
\end{array}
\end{equation}
where $d=a_7\,a_3^{1-\frac{1}{a}}$, $q=a_8\,a_3^{\frac{2}{a_3}-2}$
and $r=a_9\,a_3^{\frac{2}{a_3}-2}$ are arbitrary constants.
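One can verify directly that the similarity variable in (\ref{u42-1}) is invariant under the base-space part of the generator $X^{(1)}=X_1+a_3\,X_3+a_5\,X_5+a_6\,X_6$, and that $B=x^{b}\,\Psi(\xi)$ scales consistently with the characteristic equations; a sympy sketch (ours, for illustration):

```python
import sympy as sp

x, t, a, a5 = sp.symbols('x t a a_5', positive=True)

# Base-space part of X^{(1)}: x d/dx + a t d/dt, with a = a_3.
def X(g):
    return x*sp.diff(g, x) + a*t*sp.diff(g, t)

xi = x**a / t                       # similarity variable of eq. (u42-1)
assert sp.simplify(X(xi)) == 0      # xi is invariant under X^{(1)}

# B = x^{a5} Psi(xi) satisfies X(B) = a5 B, matching dB/(a5 B) in the
# characteristic equations (u41-1):
Psi = sp.Function('Psi')
B = x**a5 * Psi(xi)
assert sp.simplify(X(B) - a5*B) == 0
print("similarity transformation for X^(1) verified")
```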
Substituting the transformations (\ref{u42-1}) in the field Eqs.
(\ref{u210-1})-(\ref{u210-2}) leads to the following system of
ordinary differential equations:
\begin{equation}\label{u43-1}
\begin{array}{ll}
\big(2\,a+b-1\big)\,\dfrac{\Psi'}{\Psi}+\big(2\,a+c-1\big)\,\dfrac{\Phi'}{\Phi}+a\,\xi\,\left[\dfrac{\Psi''}{\Psi}+\dfrac{\Phi''}{\Phi}\right]
\,=\,\dfrac{(a-1)(b+c)}{a\,\xi},
\end{array}
\end{equation}
\begin{equation}\label{u43-2}
\begin{array}{ll}
a^3\,\xi^{1-\frac{2}{a}}\Bigg[\Big(\alpha_1+a\,(r-q)\,\xi\dfrac{\Phi'}{\Phi}\Big)\dfrac{\Psi'}{\Psi}+a\,q\,\xi\,\dfrac{\Psi''}{\Psi}\Bigg]
-a\,d^2\,\xi\Bigg[\Big(\alpha_2+a\,(r-q)\,\xi\dfrac{\Phi'}{\Phi}\Big)\dfrac{\Psi'}{\Psi}+a\,r\,\xi\,\dfrac{\Psi''}{\Psi}\Bigg]
\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
+d^2\Bigg[(a-1)(q-r)+a\,\xi\,\Big(\alpha_3\,\dfrac{\Phi'}{\Phi}+a\,q\,\xi\,\dfrac{\Phi''}{\Phi}\Big)\Bigg]
-a^2\,\xi^{-\frac{2}{a}}\Bigg[\alpha_4+a\,\xi\,\Big(\alpha_5\,\dfrac{\Phi'}{\Phi}+a\,r\,\xi\,\dfrac{\Phi''}{\Phi}\Big)\Bigg]=0,
\end{array}
\end{equation}
where
\begin{equation}\label{u43-3}
\left\{
\begin{array}{ll}
\alpha_1=c\,r+q\,\big(a+2\,b-c-1\big),\\
\alpha_2=q\,(1-a)+2\,a\,r,\\
\alpha_3=2\,a\,q+r\,(1-a),\\
\alpha_4=b\,q\,(c-b+1)+c\,r\,(c-b-1),\\
\alpha_5=b\,q+r\,(a-b+2\,c-1).
\end{array}
\right.
\end{equation}
The equations (\ref{u43-1}) and (\ref{u43-2}) are nonlinear
ordinary differential equations that cannot be solved in general. In
special cases, however, one can find a solution. Now, we
propose the following ansatz:
\begin{equation}\label{u43-4}
\begin{array}{ll}
\Psi(\xi)=\beta_1\,\xi^{\beta_2},\,\,\,\,\,\,\,\,\,\,\Phi(\xi)=\beta_5+\beta_3\,\xi^{\beta_4},\,\,\,\,\,a=-\dfrac{2}{\beta_4},
\end{array}
\end{equation}
where $\beta_1$, $\beta_2$, $\beta_3$, $\beta_4$ and $\beta_5$ are
arbitrary constants. Substituting (\ref{u43-4}) in (\ref{u43-1}),
we obtain the following equation:
\begin{equation}\label{u43-7}
\begin{array}{ll}
\beta_5\,\left[2\,\beta_2\,\big(2\,\beta_2+2+\beta_4-b\,\beta_4\big)-\beta_4\,(2+\beta_4)(b+c)\right]\\
\,\,\,\,\,\,\,\,\,\,
+\beta_3\,\left[4\,\beta_2\,(1+\beta_2)-2\,\beta_4\,\big(b+c-2+b\,\beta_2-\beta_2\big)-\beta_4^2\,(b+3\,c-6)\right]\,\xi^{\beta_4}=0.
\end{array}
\end{equation}
The coefficient of $\xi^{\beta_4}$ and the constant term must both
equal zero. Solving the two resulting conditions with respect to $b$
and $c$, we have:
\begin{equation}\label{u43-8}
\begin{array}{ll}
b=\dfrac{2\,\beta_2}{\beta_4}-\dfrac{(2+\beta_4)(2+3\,\beta_4)}{\beta_4\,(2+2\,\beta_2+\beta_4)},\,\,\,\,\,\,\,\,\,\,
c=3+\dfrac{2}{\beta_4}.
\end{array}
\end{equation}
Therefore equation (\ref{u43-2}) becomes
\begin{equation}\label{u43-9}
\begin{array}{ll}
\gamma_0+\gamma_1\,\xi^{\beta_4}+\gamma_2\,\xi^{2\,\beta_4}\,=\,0,
\end{array}
\end{equation}
where
\begin{equation}\label{u43-10}
\left\{
\begin{array}{ll}
\gamma_0=d^2\,\beta_4^2\,\beta_5\,(2\,\beta_2-\beta_4)\,(2+2\,\beta_2+\beta_4)^2\big[
r\,(2+2\,\beta_2+\beta_4)-q\,(2+\beta_4)\big],\\
\\
\gamma_1=d^2\,\beta_3\,\beta_4^2\,(2\,\beta_2+\beta_4)\,(2+2\,\beta_2+\beta_4)^2\Big[
r\,(2+2\,\beta_2+\beta_4)-q\,(2+3\,\beta_4)\big]\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,+4\,\beta_5\,(2+3\,\beta_4)\,\Big(
r\,(2+2\,\beta_2+\beta_4)\,\big[4\,\beta_2\,(1+\beta_4)+(2+\beta_4)\,(4+5\,\beta_4)\big]\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
-q\,(2+\beta_4)\,\big[(2+\beta_4)\,(4+7\,\beta_4)+4\,\beta_2\,(1+2\,\beta_4)\big]\Big),\\
\\
\gamma_2=4\,\beta_3\,(2+\beta_4)\,\Big(
r\,(2+2\,\beta_2+\beta_4)\big[4\,\beta_2+(2+\beta_4)\,(4+3\,\beta_4)\big]\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
-q\,(2+3\,\beta_4)\big[4\,\beta_2\,(1+\beta_4)+(2+\beta_4)\,(4+5\,\beta_4)\big]
\Big).
\end{array}
\right.
\end{equation}
Solving the three equations $\gamma_0=0$, $\gamma_1=0$ and
$\gamma_2=0$ with respect to $r$, $\beta_2$ and $\beta_5$, we
obtain the following solutions:
\begin{equation}\label{u43-11}
\left\{
\begin{array}{ll}
(1):
\,\,r=\dfrac{q\,(16+48\,\beta_4+42\,\beta_4^2+9\,\beta_4^3)}{2\,(8+8\,\beta_4+3\,\beta_4^2)},\,\,\,\,\,\,\,\,\,
\beta_2=-\dfrac{\beta_4}{2},\,\,\,\,\,\,\,\,\,\beta_5=0,\\
\\
(2):
\,\,r=-\dfrac{q\,(4+3\,\beta_4)}{2+3\,\beta_4},\,\,\,\,\,\,\,\,\,
\beta_2=-\dfrac{3\,(1+\beta_4)\,(2+\beta_4)}{4+3\,\beta_4},\,\,\,\,\,\,\,\,\,\beta_5=-\dfrac{d^2\,\beta_3\,\beta_4^2\,(12+14\,\beta_4+3\,\beta_4^2)}{4\,(4+3\,\beta_4)^2},\\
\\
(3):
\,\,r=\dfrac{q\,(16+56\,\beta_4+62\,\beta_4^2+21\,\beta_4^3)}{16+40\,\beta_4+30\,\beta_4^2+6\,\beta_4^3},\,\,\,\,
\beta_2=\dfrac{\beta_4}{2},\,\,\,\,\beta_5=-\dfrac{4\,d^2\,\beta_3\,\beta_4^3\,(1+\beta_4)^3}{64+272\,\beta_4+408\,\beta_4^2+256\,\beta_4^3+57\,\beta_4^4}.
\end{array}
\right.
\end{equation}
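As a spot check (ours, not in the original), one can substitute solution (1) back into the printed expressions for $\gamma_0$ and $\gamma_2$ in (\ref{u43-10}) and confirm that they vanish; with $\beta_5=0$ and $2\beta_2+\beta_4=0$, the long $\gamma_1$ vanishes through its overall factors. A sympy sketch:

```python
import sympy as sp

q, d, b3, b4, b2, b5, r = sp.symbols('q d beta_3 beta_4 beta_2 beta_5 r')

# gamma_0 and gamma_2 transcribed from eq. (u43-10):
g0 = d**2*b4**2*b5*(2*b2 - b4)*(2 + 2*b2 + b4)**2*(r*(2 + 2*b2 + b4) - q*(2 + b4))
g2 = 4*b3*(2 + b4)*(r*(2 + 2*b2 + b4)*(4*b2 + (2 + b4)*(4 + 3*b4))
                    - q*(2 + 3*b4)*(4*b2*(1 + b4) + (2 + b4)*(4 + 5*b4)))

# Solution (1) of eq. (u43-11); gamma_1 is omitted here since with beta_5 = 0
# it carries the overall factor (2 beta_2 + beta_4), which vanishes.
sol1 = {b5: 0, b2: -b4/2,
        r: q*(16 + 48*b4 + 42*b4**2 + 9*b4**3)/(2*(8 + 8*b4 + 3*b4**2))}
assert sp.simplify(g0.subs(sol1)) == 0
assert sp.simplify(g2.subs(sol1)) == 0
print("solution (1) verified against gamma_0 and gamma_2")
```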
Here, we consider only one of the above solutions.\\
By using the
solution (3) in (\ref{u43-11}), (\ref{u43-8}), (\ref{u43-4}),
(\ref{u42-2}) and (\ref{u42-1}), we obtain the solution of the
Einstein field equations as follows:
\begin{equation}\label{uu1}
\left\{
\begin{array}{ll}
A(t)=d\,t^{1+\frac{\beta_4}{2}},\,\,\,\,\,
B(x,t)=\beta_1\,x^{-\frac{(2+\beta_4)(2+3\,\beta_4)}{2\,\beta_4\,(1+\beta_4)}}\,t^{1+\frac{2}{\beta_4}},\,\,\,\,\,
C(x,t)=\beta_3\,x^{1+\frac{2}{\beta_4}}\,\Big(t^{-\beta_4}-d_0^2\,x^2\Big),\\
\\
\omega_y(t)=\omega_x(t)+q\,t^{-2-\beta_4},\,\,\,\,\,\,\,\,\omega_z(t)=\omega_x(t)+r\,t^{-2-\beta_4},\,\,\,\,\,\,\,\,
d^2=\dfrac{d_0^2\,(64+272\,\beta_4+408\,\beta_4^2+256\,\beta_4^3+57\,\beta_4^4)}{4\,\beta_4^3\,(1+\beta_4)^3},
\end{array}
\right.
\end{equation}
where $d_0$, $q$ and $\beta_4$ are arbitrary constants, while
$\omega_x$ is an arbitrary function of $t$.\\
It is observed from
equations (\ref{uu1}) that the line element (\ref{spacetime}) can be
written in the following form:
\begin{equation} \label{s1}
\begin{array}{ll}
ds_{1}^2=d_{0}^2\,t^{2+\beta_4}\,dx^2+\beta_1^2\,x^{-\frac{(2+\beta_4)(2+3\,\beta_4)}{\beta_4\,(1+\beta_4)}}\,t^{2+\frac{4}{\beta_4}}\,dy^2
+\beta_3^2\,x^{2+\frac{4}{\beta_4}}\,\Big(t^{-\beta_4}-d_0^2\,x^2\Big)^2\,dz^2-dt^2.
\end{array}
\end{equation}
\textbf{Remark:} In the above solution, we can replace $t$ by $t+\zeta_1$ and $x$ by $x+\zeta_2$ without loss of
generality, where $\zeta_1$ and $\zeta_2$ are arbitrary constants.\\
\textbf{Solution (II):} The symmetry $X^{(2)}$ has the characteristic equations:
\begin{equation}\label{u51-1}
\dfrac{dx}{x}=\dfrac{dt}{a_4}=\dfrac{dB}{a_5\,B}=\dfrac{dC}{a_6\,C}.
\end{equation}
Then the similarity variable and the similarity transformations take the form:
\begin{equation}\label{u52-1}
\begin{array}{ll}
\xi=x\,\exp\big[a\,t\big],\,\,\,\,\,\,B(x,t)=\Psi(\xi)\,\exp\big[b\,t\big],\,\,\,\,\,\,C(x,t)=\Phi(\xi)\,\exp\big[c\,t\big],
\end{array}
\end{equation}
where $a=-\frac{1}{a_4}$, $b=a_5$ and $c=a_6$ are arbitrary
constants. In this case, we have
\begin{equation}\label{u52-2}
\begin{array}{ll}
A(t)=d\,\exp\big[a\,t\big],\,\,\,\,\,\,\,
\omega_y^{(de)}(t)=\omega_x^{(de)}(t)+q\,\exp\big[-2\,a\,t\big],\,\,\,\,\,\,\,
\omega_z^{(de)}(t)=\omega_x^{(de)}(t)+r\,\exp\big[-2\,a\,t\big],
\end{array}
\end{equation}
where $d=a_7$, $q=a_8$ and $r=a_9$ are arbitrary constants.
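As a quick numerical sanity check (with sample values of the group constants chosen purely for illustration), the similarity variable $\xi=x\,\exp\big[a\,t\big]$ in (\ref{u52-1}) is indeed constant along the characteristic curves of (\ref{u51-1}) when $a=-1/a_4$:

```python
import math

a4 = 2.0           # sample value for the group parameter a_4 (assumption)
a = -1.0 / a4      # a = -1/a_4, as in the text
x0 = 3.0           # starting point of the characteristic curve

# Integrating dx/x = dt/a_4 gives x(t) = x0 * exp(t/a_4).
def x_of_t(t):
    return x0 * math.exp(t / a4)

# The similarity variable xi = x * exp(a*t) must stay equal to x0.
for t in (0.0, 0.5, 1.0, 2.5):
    xi = x_of_t(t) * math.exp(a * t)
    assert abs(xi - x0) < 1e-12
```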
Substituting the transformations (\ref{u52-1}) in the field Eqs.
(\ref{u210-1})-(\ref{u210-2}), we can get the following system of
ordinary differential equations:
\begin{equation}\label{u53-1}
\begin{array}{ll}
\dfrac{b\,\Psi'}{\Psi}+\dfrac{c\,\Phi'}{\Phi}+a\,\xi\,\left(\dfrac{\Psi''}{\Psi}+\dfrac{\Phi''}{\Phi}\right)=0,
\end{array}
\end{equation}
\begin{equation}\label{u53-2}
\begin{array}{ll}
\alpha_1-\alpha_2\,\xi\,\dfrac{\Phi'}{\Phi}+\dfrac{\Psi'}{\Psi}\left[\alpha_3\,\xi-(q-r)\,\big(a^2\,d^2\,\xi^2-1\big)\,\dfrac{\Phi'}{\Phi}\right]
+\big(a^2\,d^2\,r\,\xi^2-q\big)\dfrac{\Psi''}{\Psi}-\big(a^2\,d^2\,q\,\xi^2-r\big)\dfrac{\Phi''}{\Phi}=0,
\end{array}
\end{equation}
where
\begin{equation}\label{u53-3}
\left\{
\begin{array}{ll}
\alpha_1=d^2\,(a+b+c)\,\big[(a-c)\,q+(b-a)\,r\big],\\
\alpha_2=a\,d^2\,\big[(a+b+2\,c)\,q+(a-b)\,r\big],\\
\alpha_3=a\,d^2\,\big[(a-c)\,q+(a+2\,b+c)\,r\big].
\end{array}
\right.
\end{equation}
\textbf{Solution (III):} The symmetry $X^{(3)}$ has the
characteristic equations:
\begin{equation}\label{u61-1}
\dfrac{dx}{1}=\dfrac{dt}{a_3\,t}=\dfrac{dB}{a_5\,B}=\dfrac{dC}{a_6\,C}.
\end{equation}
Then the similarity variable and the similarity transformations
take the form:
\begin{equation}\label{u62-1}
\begin{array}{ll}
\xi=t\,\exp\big[a\,x\big],\,\,\,\,\,\,B(x,t)=\Psi(\xi)\,\exp\big[b\,x\big],\,\,\,\,\,\,C(x,t)=\Phi(\xi)\,\exp\big[c\,x\big],
\end{array}
\end{equation}
where $a=-\frac{1}{a_3}$, $b=a_5$ and $c=a_6$ are arbitrary
constants. In this case, we have
\begin{equation}\label{u62-2}
\begin{array}{ll}
A(t)=d\,t,\,\,\,\,\,\,\,
\omega_y^{(de)}(t)=\omega_x^{(de)}(t)+\frac{q}{t^2},\,\,\,\,\,\,\,
\omega_z^{(de)}(t)=\omega_x^{(de)}(t)+\frac{r}{t^2},
\end{array}
\end{equation}
where $d=a_7$, $q=a_8$ and $r=a_9$ are arbitrary constants.
Substituting the transformations (\ref{u62-1}) in the field Eqs.
(\ref{u210-1})-(\ref{u210-2}), we obtain the following system of
ordinary differential equations:
\begin{equation}\label{u63-1}
\begin{array}{ll}
\dfrac{b\,\Psi'}{\Psi}+\dfrac{c\,\Phi'}{\Phi}+a\,\xi\,\left(\dfrac{\Psi''}{\Psi}+\dfrac{\Phi''}{\Phi}\right)=\dfrac{b+c}{\xi},
\end{array}
\end{equation}
\begin{equation}\label{u63-2}
\begin{array}{ll}
\alpha_4+\xi\,\left(\dfrac{\alpha_1\,\Psi'}{\Psi}+\dfrac{\alpha_5\,\Phi'}{\Phi}\right)+\xi^2\,\left(\dfrac{\alpha_3\,\Psi''}{\Psi}
+\dfrac{\alpha_2\,\Psi'\Phi'}{\Psi\Phi}+\dfrac{\alpha_6\,\Phi''}{\Phi}\right)=0,
\end{array}
\end{equation}
where
\begin{equation}\label{u63-3}
\left\{
\begin{array}{ll}
\alpha_1=q\,\big[d^2-2\,a\,b-1\big]+a\,c\,(q-r),\\
\alpha_2=(a-d)\,(a+d)\,(q-r),\\
\alpha_3=r\,d^2-q\,a^2,\\
\alpha_4=(c-b)\,\big[b\,q+c\,r\big],\\
\alpha_5=a\,b\,q-r\,d^2+a\,r\,(a-b+2\,c),\\
\alpha_6=a^2\,r-d^2\,q.
\end{array}
\right.
\end{equation}
Equation (\ref{u63-1}) can be written in the following form:
\begin{equation}\label{u64}
\begin{array}{ll}
\Psi''=\Psi\,\left[\dfrac{b+c}{a\,\xi^2}-\dfrac{\Phi''}{\Phi}-\dfrac{1}{a\,\xi}\Big(\dfrac{b\,\Psi'}{\Psi}+\dfrac{c\,\Phi'}{\Phi}\Big)\right],
\end{array}
\end{equation}
Substituting $\Psi''$ into (\ref{u63-2}), we obtain the following
equation:
\begin{equation}\label{u64-2}
\begin{array}{ll}
a\,(a^2-d^2)\,\xi\,\left[\dfrac{(q+r)\,\Phi''}{\Phi}+\dfrac{(q-r)\,\Psi'\,\Phi'}{\Psi\,\Phi}\right]=\dfrac{\alpha_6}{\xi}+\dfrac{\alpha_7\,\Psi'}{\Psi}
+\dfrac{\alpha_8\,\Phi'}{\Phi},
\end{array}
\end{equation}
where
\begin{equation}\label{u64-3}
\left\{
\begin{array}{ll}
\alpha_6=(b+c)\,(a^2\,q-d^2\,r)+a\,(b-c)\,(b\,q+c\,r),\\
\alpha_7=a\,q\,(a^2-d^2)+d^2\,b\,r+a^2\,(b\,q-c\,q+c\,r),\\
\alpha_8=a\,r\,(d^2-a^2)+c\,r\,(d^2-2\,a^2)-a^2\,(b\,q+c\,q-b\,r).
\end{array}
\right.
\end{equation}
It is easy to solve the above equation when $d=a$ or
$\dfrac{(q+r)\,\Phi''}{\Phi}+\dfrac{(q-r)\,\Psi'\,\Phi'}{\Psi\,\Phi}=0$. When $d=a$, the equation (\ref{u64-2}) takes the following form:
\begin{equation}\label{u65}
\begin{array}{ll}
\dfrac{a\,(b+c)\,(q-r)+(b-c)\,(b\,q+c\,r)}{a\,\xi}=\Big[c\,(r-q)+b\,(r+q)\Big]\,\dfrac{\Psi'}{\Psi}
+\Big[b\,(q-r)+c(q+r)\Big]\,\dfrac{\Phi'}{\Phi}.
\end{array}
\end{equation}
The solution of the above equation is
\begin{equation}\label{u66}
\begin{array}{ll}
\Psi(\xi)=\beta_1\,\xi^{\frac{a\,(b+c)\,(r-q)-(b-c)\,(b\,q+c\,r)}{a\,c\,(r-q)+a\,b\,(r+q)}}\,\Phi^{\frac{b\,(q-r)+c(q+r)}{c\,(r-q)+b\,(r+q)}}(\xi),
\end{array}
\end{equation}
where $\beta_1$ is an arbitrary constant of integration.\\
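The power-law ansatz used here, and again in (\ref{u68-1}) and (\ref{u69-1}), rests on the elementary logarithmic-derivative identity $\big(\ln(\xi^{p}\,\Phi^{m})\big)'=p/\xi+m\,\Phi'/\Phi$, which converts such first-order relations into power laws. The following numerical sketch checks this identity for an arbitrary test function and exponents (all values are assumptions for illustration only):

```python
import math

p, m = 0.7, 1.3                       # arbitrary test exponents (assumptions)
Phi = lambda xi: 1.0 + xi**2          # arbitrary smooth positive test function
dPhi = lambda xi: 2.0 * xi            # its exact derivative
Psi = lambda xi: xi**p * Phi(xi)**m   # the power-law ansatz

xi, h = 1.5, 1e-6
# Central-difference approximation of (ln Psi)'(xi).
num = (math.log(Psi(xi + h)) - math.log(Psi(xi - h))) / (2 * h)
# Closed form p/xi + m*Phi'/Phi from the identity.
exact = p / xi + m * dPhi(xi) / Phi(xi)
assert abs(num - exact) < 1e-6
```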
Substituting the above equation into (\ref{u63-1}), we get the
following equation
\begin{equation}\label{u67-1}
\begin{array}{ll}
\gamma_1+\dfrac{a\,\gamma_2\,\xi\,\Phi'}{\Phi}+2\,a^2\,\xi^2\,\left[\dfrac{\gamma_3\,\Phi^{\prime2}}{\Phi^2}+\dfrac{\gamma_4\,\Phi''}{\Phi}\right]=0,
\end{array}
\end{equation}
where
\begin{equation}\label{u67-2}
\left\{
\begin{array}{ll}
\gamma_1=2\,a^2\,b\,q\,(b+c)\,(q-r)-r\,(b-c)\,(b^2+c^2)\,(b\,q+c\,r)+a\,
\Big[b^3\,q\,(q-3\,r)\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
+c^3\,q\,(r-q)+b\,c^2\,(q^2-q\,r-2\,r^2)-b^2\,c\,(q^2+q\,r+2\,r^2)\Big],\\
\\
\gamma_2=2\,a\,(b+c)\,(r-q)\,\big[b\,(q-r)+c\,(q+r)\big]-(b^2+c^2)\,\big[b\,(q-r)^2-c\,(q^2+3\,r^2)\big],\\
\\
\gamma_3=b^2\,r\,(r-q)+c^2\,q\,(q+r)+b\,c\,(q^2-2\,q\,r-r^2),\\
\\
\gamma_4=c^2\,r\,(r-q)+b^2\,q\,(q+r)+b\,c\,(r^2+2\,q\,r-q^2).
\end{array}
\right.
\end{equation}
The general solution of equation (\ref{u67-1}) reads
\begin{equation}\label{u68-1}
\begin{array}{ll}
\Phi(\xi)=\beta_2\,\xi^{\gamma_5}\,\Big(\beta_3+\xi^{\gamma_6}\Big)^{\gamma_7},
\end{array}
\end{equation}
where $\beta_2$ and $\beta_3$ are arbitrary constants of
integration and
\begin{equation}\label{u68-2}
\left\{
\begin{array}{ll}
\gamma_5=\dfrac{a\,q\,(q-r)-r\,(b\,q+c\,r)}{a\,(q^2+r^2)},\\
\\
\gamma_6=\dfrac{2\,a\,(b\,r+c\,q)+(b^2+c^2)\,(q+r)}{2\,a\,(b\,q+c\,r)},\\
\\
\gamma_7=\dfrac{(b\,q+c\,r)\big[c\,(r-q)+b\,(r+q)\big]}{(b^2+c^2)\,(q^2+r^2)}.
\end{array}
\right.
\end{equation}
Substituting (\ref{u68-1}) in (\ref{u66}), we have
\begin{equation}\label{u69-1}
\begin{array}{ll}
\Psi(\xi)=\beta_4\,\xi^{\gamma_8}\,\Big(\beta_3+\xi^{\gamma_6}\Big)^{\gamma_9},
\end{array}
\end{equation}
where
$\beta_4=\beta_1\,\beta_2^{\frac{b\,(q-r)+c\,(q+r)}{c\,(r-q)+b\,(r+q)}}$
is a new arbitrary constant and
\begin{equation}\label{u69-2}
\left\{
\begin{array}{ll}
\gamma_8=\dfrac{a\,r\,(r-q)-q\,(b\,q+c\,r)}{a\,(q^2+r^2)},\\
\\
\gamma_9=\dfrac{(b\,q+c\,r)\big[b\,(q-r)+c\,(q+r)\big]}{(b^2+c^2)\,(q^2+r^2)}.
\end{array}
\right.
\end{equation}
Without loss of generality, we can take $r=0$. Now using the
solutions (\ref{u68-1}) and (\ref{u69-1}) in (\ref{u43-11}),
(\ref{u43-8}), (\ref{u43-4}), (\ref{u42-2}) and (\ref{u42-1}), we
can find the solution of the Einstein field equations as the
following:
\begin{equation}\label{uu2}
\left\{
\begin{array}{ll}
A(t)=a\,t,\,\,\,\,\,\,\,\,\omega_y^{(de)}(t)=\omega_x^{(de)}(t)+\dfrac{q}{t^2},\,\,\,\,\,\,\,\,
\omega_z^{(de)}(t)=\omega_x^{(de)}(t),\\
\\
B(x,t)=\beta_4\,t^{-b/a}\,\left[\beta_3+\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}\right]^{\frac{b\,(b+c)}{b^2+c^2}},\\
\\
C(x,t)=\beta_2\,t\,e^{(a+c)\,x}\,\left[\beta_3+\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}\right]^{\frac{b\,(b-c)}{b^2+c^2}},\\
\end{array}
\right.
\end{equation}
where $a$, $b$, $c$, $\beta_2$, $\beta_3$, $\beta_4$ and $q$ are
arbitrary constants, while $\omega_x$ is an arbitrary function of
$t$. It is observed from equations (\ref{uu2}) that the line element
(\ref{spacetime}) can be written in the following form:
\begin{equation} \label{s2}
\begin{array}{ll}
ds_{2}^2=a^2\,t^2\,dx^2+\beta_4^2\,t^{-2\,b/a}\,\left[\beta_3+\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}\right]^{\frac{2\,b\,(b+c)}{b^2+c^2}}\,dy^2\\
\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
+\beta_2^2\,t^2\,e^{2\,(a+c)\,x}\,\left[\beta_3+\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}\right]^{\frac{2\,b\,(b-c)}{b^2+c^2}}\,dz^2-dt^2.
\end{array}
\end{equation}
\textbf{Solution (IV):} The symmetry $X^{(4)}$ has the
characteristic equations:
\begin{equation}\label{u71-1}
\dfrac{dx}{1}=\dfrac{dt}{a_4}=\dfrac{dB}{a_5\,B}=\dfrac{dC}{a_6\,C}.
\end{equation}
Then the similarity variable and the similarity transformations
take the form:
\begin{equation}\label{u72-1}
\begin{array}{ll}
\xi=t-a\,x,\,\,\,\,\,\,B(x,t)=\Psi(\xi)\,\exp\big[b\,t\big],\,\,\,\,\,\,C(x,t)=\Phi(\xi)\,\exp\big[c\,t\big],
\end{array}
\end{equation}
where $a=a_4$, $b=\frac{a_5}{a_4}$ and $c=\frac{a_6}{a_4}$ are
arbitrary constants. In this case, we have
\begin{equation}\label{u72-2}
\begin{array}{ll}
A(t)=d,\,\,\,\,\,\,\,
\omega_y^{(de)}(t)=\omega_x^{(de)}(t)+q,\,\,\,\,\,\,\,
\omega_z^{(de)}(t)=\omega_x^{(de)}(t)+r,
\end{array}
\end{equation}
where $d=a_7$, $q=a_8$ and $r=a_9$ are arbitrary constants.\\
Substituting the transformations (\ref{u72-1}) in the field Eqs.
(\ref{u210-1})-(\ref{u210-2}), we can get the following system of
ordinary differential equations:
\begin{equation}\label{u73-1}
\begin{array}{ll}
\dfrac{b\,\Psi'}{\Psi}+\dfrac{c\,\Phi'}{\Phi}+\dfrac{\Psi''}{\Psi}+\dfrac{\Phi''}{\Phi}=0,
\end{array}
\end{equation}
\begin{equation}\label{u73-2}
\begin{array}{ll}
d^2\,(b+c)\big[b\,r-c\,q\big]+d^2\big[2\,b\,r+c\,(r-q)\big]\,\dfrac{\Psi'}{\Psi}+d^2\big[b\,(r-q)-2\,c\,r\big]\,\dfrac{\Phi'}{\Phi}\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, +
(a^2-d^2)\,(q-r)\,\dfrac{\Psi'\Phi'}{\Psi\Phi}+(d^2\,r-a^2\,q)\dfrac{\Psi''}{\Psi}+(a^2\,r-d^2\,q)\dfrac{\Phi''}{\Phi}=0.
\end{array}
\end{equation}
\section{Physical and geometrical properties of the models}
\textbf{For the Model (\ref{s1}):}\\
The expressions of $p^{(pf)}$, $\rho^{(pf)}$ and $\rho^{(de)}$ for
the model (\ref{s1}), are given by:
\begin{equation}\label{uu1-1}
\begin{array}{ll}
p^{(pf)}(x,t)=\dfrac{\beta_4}{4\,f(x,t)}\,\Bigg[
8\,q\,(1+\beta_4)^2\,(2+\beta_4)^2\,(2+3\,\beta_4)\,t^{-2-\beta_4}\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
+q\,d_0^2\,\Big[320+1728\,\beta_4+3600\,\beta_4^2+3616\,\beta_4^3+1750\,\beta_4^4+327\,\beta_4^5\Big]\,x^2\,t^{-2}\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
-q\,d_0^4\,(2+\beta_4)\,(4+3\,\beta_4)\,\Big[16+56\,\beta_4+60\,\beta_4^2+19\,\beta_4^3\Big]\,x^2\,t^{-2+\beta_4}\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
+4\,(1+\beta_4)\,(8+12\,\beta_4+3\,\beta_4^2)\Big[2\,(1+\beta_4)\,(2+\beta_4)+d_0^2\,(4+10\,\beta_4+5\,\beta_4^2)\,x^2\,t^{\beta_4}\Big]\omega_x^{(pf)}(t)
\Bigg],
\end{array}
\end{equation}
\begin{equation}\label{uu1-2}
\begin{array}{ll}
\rho^{(pf)}(x,t)=\dfrac{\beta_4}{4\,f(x,t)}\,\Bigg[
4\,(1+\beta_4)\,(8+12\,\beta_4+3\,\beta_4^2)\Big[2\,(1+\beta_4)\,(2+\beta_4)+d_0^2\,(4+10\,\beta_4+5\,\beta_4^2)\,x^2\,t^{\beta_4}\Big]\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
-4\,q\,(1+\beta_4)\,(2+\beta_4)\,(8+24\,\beta_4+26\,\beta_4^2+9\,\beta_4^3)\,t^{-2-\beta_4}\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
-q\,d_0^2\,\Big[320+1344\,\beta_4+1968\,\beta_4^2+1160\,\beta_4^3+194\,\beta_4^4-27\,\beta_4^5\Big]\,x^2\,t^{-2}\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
+q\,d_0^4\,(2+\beta_4)\,(4+3\,\beta_4)\,\Big[16+56\,\beta_4+60\,\beta_4^2+19\,\beta_4^3\Big]\,x^2\,t^{-2+\beta_4}
\Bigg],
\end{array}
\end{equation}
\begin{equation}\label{uu1-3}
\begin{array}{ll}
\rho^{(de)}(x,t)=-\dfrac{\beta_4\,(1+\beta_4)\,(8+12\,\beta_4+3\,\beta_4^2)\,\Big[
2\,(1+\beta_4)\,(2+\beta_4)+d_0^2\,(4+10\,\beta_4+5\,\beta_4^2)\,x^2\,t^{\beta_4}\Big]}{f(x,t)},
\end{array}
\end{equation}
where
$$
f(x,t)=q\,d_0^2\,(4+3\,\beta_4)\,\Big[16+56\,\beta_4+60\,\beta_4^2+19\,\beta_4^3\Big]\Big(d_0^2\,x^4\,t^{\beta_4}-x^2\Big).
$$
The volume element is
\begin{equation} \label{uu1-4}
V=d\,\beta_1\,\beta_3\,x^{-\frac{2+\beta_4}{2+2\,\beta_4}}\,t^{1-\beta_4}\Big(d_0^2\,x^2\,t^{\beta_4}-1\Big).
\end{equation}
The expansion scalar is given by:
\begin{equation}\label{uu1-5}
\begin{array}{ll}
\Theta=\dfrac{1}{t}\,\Big(1+\dfrac{\beta_4}{d_0^2\,x^2\,t^{\beta_4}-1}\Big).
\end{array}
\end{equation}
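The expression (\ref{uu1-5}) is precisely the logarithmic time-derivative of the volume element (\ref{uu1-4}), $\Theta=\partial_t\ln V$. A numerical spot check of this internal consistency, at sample parameter values chosen for illustration only (with $d_0^2\,x^2\,t^{\beta_4}>1$ so that the $t$-dependent part of $V$ is positive):

```python
import math

beta4, d0, x = 0.5, 1.0, 2.0   # sample parameters (assumptions for illustration)

# t-dependent part of the volume element (uu1-4); the constant prefactors
# and the x-powers drop out of the logarithmic t-derivative.
def V(t):
    return t**(1.0 - beta4) * (d0**2 * x**2 * t**beta4 - 1.0)

# Expansion scalar from (uu1-5).
def Theta(t):
    return (1.0 / t) * (1.0 + beta4 / (d0**2 * x**2 * t**beta4 - 1.0))

t, h = 1.3, 1e-6
num = (math.log(V(t + h)) - math.log(V(t - h))) / (2 * h)
assert abs(num - Theta(t)) < 1e-6
```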
The non-vanishing components of the shear tensor, $\sigma_i^j$, are:
\begin{equation}\label{uu1-6}
\begin{array}{ll}
\dfrac{\sigma_1^1}{\Theta}\,=\,\dfrac{d_0^2\,(4+3\,\beta_4)\,x^2\,t^{\beta_4}-4-5\,\beta_4}{6\,\Big[\beta_4-1+d_0^2\,x^2\,t^{\beta_4}\Big]},
\end{array}
\end{equation}
\begin{equation}\label{uu1-7}
\begin{array}{ll}
\dfrac{\sigma_2^2}{\Theta}\,=\,\dfrac{2+\beta_4-d_0^2\,(4+3\,\beta_4)\,x^2\,t^{\beta_4}}{6\,\Big[\beta_4-1+d_0^2\,x^2\,t^{\beta_4}\Big]},
\end{array}
\end{equation}
\begin{equation}\label{uu1-8}
\begin{array}{ll}
\dfrac{\sigma_3^3}{\Theta}\,=\,\dfrac{1+2\,\beta_4}{3\,\Big[\beta_4-1+d_0^2\,x^2\,t^{\beta_4}\Big]}.
\end{array}
\end{equation}
The shear scalar is:
\begin{equation}\label{uu1-9}
\begin{array}{ll}
\dfrac{\sigma^2}{\Theta^2}\,=\,\dfrac{4+10\,\beta_4+7\,\beta_4^2-2\,d_0^2\,(4+8\,\beta_4+3\,\beta_4^2)\,x^2\,t^{\beta_4}
+d_0^4\,(4+6\,\beta_4+3\,\beta_4^2)\,x^4\,t^{2\,\beta_4}}{12\,\Big[\beta_4-1+d_0^2\,x^2\,t^{\beta_4}\Big]^2}.
\end{array}
\end{equation}
The deceleration parameter is given by \cite{fein1, rayc1}
\begin{equation}\label{uu1-11}
\begin{array}{ll}
\mathbf{q}=\dfrac{\Big(\beta_4-1+d_0^2\,x^2\,t^{\beta_4}\Big)\Big[(1-\beta_4)\,(2+\beta_4)-d_0^2\,(4-\beta_4-3\,\beta_4^2)\,x^2\,t^{\beta_4}
+2\,d_0^4\,x^4\,t^{2\,\beta_4}\Big]}{t^4\,\Big[d_0^2\,x^2\,t^{\beta_4}-1\Big]^4}.
\end{array}
\end{equation}
\begin{figure*}[thbp]
\begin{center}
\begin{tabular}{rl}
\includegraphics[width=6.5cm]{ali01F1.eps}
\includegraphics[width=6.5cm]{ali01F2.eps}\\
\includegraphics[width=6.5cm]{ali01F3.eps}
\includegraphics[width=6.5cm]{ali01F4.eps}
\end{tabular}
\end{center}
\caption{Variation of energy density (upper left panel), dark energy density (upper right panel),
volume (lower left panel) and DP (lower right panel) versus time for model (\ref{s1})}
\end{figure*}
\begin{figure*}[thbp]
\begin{center}
\begin{tabular}{rl}
\includegraphics[width=6.5cm]{ali01F5.eps}
\includegraphics[width=6.5cm]{ali01F6.eps}\\
\includegraphics[width=6.5cm]{ali01F7.eps}
\includegraphics[width=6.6cm]{ali01F8.eps}
\end{tabular}
\end{center}
\caption{Variation of energy density (upper left panel), dark energy density (upper right panel),
volume (lower left panel) and DP (lower right panel) versus time for model (\ref{s2})}
\end{figure*}
\textbf{For the Model (\ref{s2}):}\\
The expressions of $p^{(pf)}$, $\rho^{(pf)}$ and $\rho^{(de)}$ for
the model (\ref{s2}), are given by:
\begin{equation}\label{uu2-1}
\begin{array}{ll}
p^{(pf)}(x,t)=\dfrac{b\,c\,q\,\big[b^2+c^2+2\,a\,(b+c)\big]\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}
-\beta_3\,(b^2+c^2)\,\Big[b^2\,q+(b^2+2\,a\,c+c^2)\,t^2\,\omega_x^{(de)}(t)\Big]}{g(x,t)},
\end{array}
\end{equation}
\begin{equation}\label{uu2-2}
\begin{array}{ll}
\rho^{(pf)}(x,t)=\dfrac{1}{g(x,t)}\,\Bigg(q\,(b^2+b\,c+c^2)\,\big[b^2+c^2+2\,a\,(b+c)\big]\,\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}\\
\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
+\beta_3\,(b^2+c^2)\,\Big[q\,\big[c^2+2\,a\,(b+c)\big]-(b^2+2\,a\,c+c^2)\,t^2\Big]\Bigg),
\end{array}
\end{equation}
\begin{equation}\label{uu2-3}
\begin{array}{ll}
\rho^{(de)}(x,t)=\dfrac{\beta_3\,(b^2+c^2)\,(b^2+2\,a\,c+c^2)\,t^2}{g(x,t)},
\end{array}
\end{equation}
where
$g(x,t)=a^2\,q\,(b^2+c^2)\,t^2\,\Big[\beta_3+\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}\Big]$.\\
The volume element is
\begin{equation} \label{uu2-4}
V=a\,\beta_2\,\beta_4\,t^{2-b/a}\,e^{(a+c)\,x}\,\Big[\beta_3+\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}\Big]^{\frac{2\,b^2}{b^2+c^2}}.
\end{equation}
The expansion scalar is given by:
\begin{equation}\label{uu2-5}
\begin{array}{ll}
\Theta=\dfrac{1}{(b^2+c^2)\,t}\Bigg(2\,(b^2+b\,c+c^2)-\dfrac{b\,\beta_3\,(b^2+2\,a\,c+c^2)}{a\,\Big[\beta_3+\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}\Big]}\Bigg).
\end{array}
\end{equation}
The non-vanishing components of the shear tensor, $\sigma_i^j$, are:
\begin{equation}\label{uu2-6}
\begin{array}{ll}
\dfrac{\sigma_1^1}{\Theta}\,=\,\dfrac{\beta_3\,(a+b)\,(b^2+c^2)+a\,(b-c)^2\,\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}}{
3\,\beta_3\,(2\,a-b)\,(b^2+c^2)+6\,a\,(b^2+b\,c+c^2)^2\,\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}},
\end{array}
\end{equation}
\begin{equation}\label{uu2-7}
\begin{array}{ll}
\dfrac{\sigma_2^2}{\Theta}\,=\,\dfrac{4\,\beta_3\,(a+b)\,(b^2+c^2)+(b-c)\,\big[2\,a\,(2\,b+c)+3\,(b^2+c^2)\big]\,\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}}{
6\,\beta_3\,(b-2\,a)\,(b^2+c^2)-12\,a\,(b^2+b\,c+c^2)^2\,\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}},
\end{array}
\end{equation}
\begin{equation}\label{uu2-8}
\begin{array}{ll}
\dfrac{\sigma_3^3}{\Theta}\,=\,\dfrac{2\,\beta_3\,(a+b)\,(b^2+c^2)+(b-c)\,\big[2\,a\,(b+2\,c)+3\,(b^2+c^2)\big]\,\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}}{
6\,\beta_3\,(2\,a-b)\,(b^2+c^2)+12\,a\,(b^2+b\,c+c^2)^2\,\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}}.
\end{array}
\end{equation}
The shear scalar is:
\begin{equation}\label{uu2-9}
\begin{array}{ll}
\dfrac{\sigma^2}{\Theta^2}\,=\,\dfrac{1}{h(x,t)}\,\Bigg(
4\,\beta_3^2\,(a+b)^2\,(b^2+c^2)^2\\
\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
+2\beta_3\,(a+b)\,(b-c)\,(b^2+c^2)\,\big[2\,a\,(2\,b+c)+3\,(b^2+c^2)\big]\,\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}\\
\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
+(b-c)^2\,\big[6\,a\,(b+c)\,(b^2+c^2)+3\,(b^2+c^2)^2+4\,a\,(b^2+b\,c
+c^2)\big]\,\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{a\,b}}\Bigg),
\end{array}
\end{equation}
where
\begin{equation}\label{uu2-9-2}
\begin{array}{ll}
h(x,t)=12\Bigg[\beta_3\,(b-2\,a)\,(b^2+c^2)-2\,a\,(b^2+b\,c+c^2)^2\,\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}\Bigg]^2.
\end{array}
\end{equation}
The deceleration parameter is given by:
\begin{equation}\label{uu2-11}
\begin{array}{ll}
\mathbf{q}=\dfrac{1}{k(x,t)}\,\Bigg(
2\,\beta_3^2\,(a+b)\,(2\,a-b)\,(b^2+c^2)^2-\beta_3\,(b^2+c^2)\,\Big[3\,(b^2+c^2)^2\\
\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
+4\,a^2\,(c^2+b\,c-2\,b^2)-2\,a\,(b^3-2\,b^2\,c+b\,c^2-6\,c^3)\Big]\,\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}\\
\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
+4\,a^2(b-c)^2\,(b^2+b\,c+c^2)\,\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{a\,b}}\Bigg)\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
\times\,\Bigg(b\,\beta_3\,(b^2+2\,a\,c+c^2)+2\,a\,(b^2+b\,c+c^2)^2\,\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}\Bigg)^2,
\end{array}
\end{equation}
where
\begin{equation}\label{uu2-11-2}
\begin{array}{ll}
k(x,t)=2\,a^4\,(b^2+c^2)^4\,t^4\,\Bigg[\beta_3+\Big(t\,e^{a\,x}\Big)^{\frac{b^2+2\,a\,c+c^2}{2\,a\,b}}\Bigg]^4.
\end{array}
\end{equation}
\section{Conclusion}
In this paper, we have investigated some models of an accelerating universe with minimal interaction between normal matter
and anisotropic DE in anisotropic and inhomogeneous Bianchi I space-time.
Generally, the models behave like an expanding, shearing and non-rotating universe in which
the flow vector is geodesic. On the basis of the optimal system of symmetries, $X^{(1)}$ and $X^{(3)}$, we obtain the two
models (\ref{s1}) and (\ref{s2}) respectively. The main features of the work are as follows:
\begin{itemize}
\item The models are based on similarity solutions of the field equations, and we have obtained a new class of
exact solutions.
\item We have discussed several physical features and geometrical
properties of the models in general. As a special case, the most notable aspect of the solutions,
namely their non-singular nature, has been studied. All figures depict interesting
features of the present cosmological models in terms of the DP and other physical parameters.
\item In the derived models, the matter energy density and dark energy density remain positive.
Therefore, the WEC and NEC are satisfied, which in turn implies that the derived models are
physically realistic.
\item As $t \rightarrow \infty$, $V \rightarrow \infty$ but $\rho^{(de)} \rightarrow 0$; hence
the volume increases with the passage of time while the DE density decreases.
\item The derived models seem to describe the dynamics of the universe from the big bang to the present epoch, and
DE dominates the universe at the present time, which may be attributed to the current accelerated expansion of the
universe.
\item Hypothetical DE is the most widely accepted explanation for the ever-increasing rate of expansion of the
universe. DE plays a major part in shaping our universe; however, its physical nature
remains unknown. Future space missions are expected to resolve this mystery and may reshape our current understanding
of the universe. To our knowledge, this work is the first study of minimally interacting anisotropic DE
with normal matter in
the inhomogeneous Bianchi I space-time in its general form.
\end{itemize}
It is important to note that $x$ = constant removes the inhomogeneity from the derived models, and they
then simply represent models of the universe based on power-law cosmology. Numerous cosmological models with
power-law expansion exist in the literature. Thus the solutions presented in this paper generalize the
solutions obtained by numerous authors in spatially homogeneous and anisotropic Bianchi I space-time,
in particular Yadav and Saha \cite{yadav2012}. \\
\textbf{Acknowledgment:} This paper was funded by the Deanship of
Scientific Research (DSR), King Abdulaziz University, Jeddah, under
grant No. (130--682--D1435). The authors, therefore, acknowledge
with thanks the DSR technical and financial support.
\section{Introduction}
Let ${\mathbb B}_n$ be the open unit ball in $\C^n$ and $H({\mathbb B}_n)$ be the space of all holomorphic
functions on ${\mathbb B}_n$. For $f\in H({\mathbb B}_n)$ we use
$$Rf(z)=z_1\frac{\partial f}{\partial z_1}(z)+\cdots+z_n\frac{\partial f}{\partial z_n}(z)$$
to denote the radial derivative of $f$ at $z$. If
$$f(z)=\sum_{k=0}^\infty f_k(z)$$
is the homogeneous expansion of $f$, then it is easy to see that
$$Rf(z)=\sum_{k=0}^\infty kf_k(z)=\sum_{k=1}^\infty kf_k(z).$$
More generally, for any real $\beta$ and any $f\in H({\mathbb B}_n)$ with the homogeneous expansion
above, we define
$$R^\beta f(z)=\sum_{k=1}^\infty k^\beta f_k(z)$$
and call it the radial derivative of $f$ of order $\beta$.
It is clear that these fractional radial differential operators satisfy $R^\alpha
R^\beta=R^{\alpha+\beta}$. When $\beta<0$, the effect of $R^\beta$ on $f$ is
actually ``integration'' instead of ``differentiation''. For example, radial differentiation
of order $-3$ is actually radial integration of order $3$.
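On the level of homogeneous (Taylor) coefficients, $R^\beta$ multiplies the degree-$k$ term by $k^\beta$ and annihilates the constant term. The following one-variable sketch, a toy model included only for illustration, demonstrates this together with the semigroup property $R^\alpha R^\beta=R^{\alpha+\beta}$ on the non-constant terms:

```python
def radial(coeffs, beta):
    """Apply R^beta to a power series sum_k c_k z^k, coefficient-wise:
    the constant term is annihilated and c_k -> k**beta * c_k for k >= 1."""
    return [0.0] + [k**beta * c for k, c in enumerate(coeffs) if k >= 1]

f = [5.0, 1.0, -2.0, 0.5, 3.0]        # coefficients of a sample polynomial

# Semigroup property: R^alpha R^beta f = R^(alpha+beta) f.
lhs = radial(radial(f, 0.5), -1.5)
rhs = radial(f, -1.0)
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))

# beta = -1 acts as "integration": the k-th coefficient is divided by k.
assert rhs[2] == -2.0 / 2
```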
For $\beta\in{\mathbb R}$ the Hardy-Sobolev space $H^2_\beta$ consists of all holomorphic
functions $f$ on ${\mathbb B}_n$ such that $R^\beta f$ belongs to the classical
Hardy space $H^2$. It is clear that $H^2_\beta$ is a Hilbert space with the inner
product
$$\langle f,g\rangle_\beta=f(0)\overline{g(0)}+\langle R^\beta f, R^\beta g\rangle_{H^2}.$$
The induced norm in $H^2_\beta$ is then given by
$$\|f\|^2_\beta=|f(0)|^2+\|R^\beta f\|^2_{H^2}.$$
The multiplier algebra of $H^2_\beta$, denoted by ${\mathcal M}_\beta$, consists of all functions
$\varphi\in H({\mathbb B}_n)$ such that $\varphi f\in H^2_\beta$ for every $f\in H^2_\beta$.
A standard application of the closed-graph theorem shows that every $\varphi\in{\mathcal M}_\beta$
induces a bounded linear operator $M_\varphi: H^2_\beta\to H^2_\beta$. The
purpose of this paper is to study the spectral properties of these multiplication operators.
Our main results are the following.
\begin{thma}
Suppose $\beta\in{\mathbb R}$ and $\varphi\in{\mathcal M}_\beta$. Then the spectrum of
$M_\varphi: H^2_\beta\to H^2_\beta$ is the closure of $\varphi({\mathbb B}_n)$ in the
complex plane.
\end{thma}
Note that the theorems above and below may look like simple extensions of
known results on the Hardy and Bergman spaces to the setting of Hardy-Sobolev
spaces. This is far from the truth. In fact, these results are surprising when we realize
that, for general $\beta$, the norm of $M_\varphi: H^2_\beta\to H^2_\beta$ is usually
much bigger than $\|\varphi\|_\infty$!
\begin{thmb}
Suppose $\beta\in{\mathbb R}$ and $\varphi\in{\mathcal M}_\beta$. Then the essential spectrum
of $M_\varphi:H^2_\beta\to H^2_\beta$ is given by
$$\sigma_e(M_\varphi)=\bigcap_{r\in(0,1)}\overline{\varphi({\mathbb B}_n-r{\mathbb B}_n)},$$
where $r{\mathbb B}_n=\{z\in\C^n: |z|<r\}$.
\end{thmb}
\begin{thmc}
Suppose $\beta\in{\mathbb R}$ and $\varphi\in{\mathcal M}_\beta$. Then $M_\varphi: H^2_\beta\to H^2_\beta$
is Fredholm if and only if there exist $r\in(0,1)$ and $\delta>0$ such that
$|\varphi(z)|\ge\delta$ for all $z\in{\mathbb B}_n-r\bn$. Moreover, when $M_\varphi$ is Fredholm,
its Fredholm index is always $0$ for $n>1$ and is equal to minus the winding number of
the mapping $e^{it}\mapsto\varphi(re^{it})$, where $r\in(0,1)$ is sufficiently close to $1$.
\end{thmc}
Hardy-Sobolev spaces have been studied in various papers in the literature, including
\cite{AB1, AB2, BO, CO1, CO2, CZ, CV, OF}. However, several different names have
appeared for such spaces. For example, they were called weighted Bergman spaces
in \cite{ZZ}, they were called holomorphic Sobolev spaces in \cite{BeBu}, and they
were called Besov-Sobolev spaces in \cite{VW}.
It is well known that the Hardy-Sobolev spaces $H^2_\beta$ include several important
spaces as special cases: the Hardy space ($\beta=0$), the Bergman space ($\beta=-1/2$),
the Dirichlet space ($\beta=n/2$), and the Drury-Arveson space ($\beta=(n-1)/2$).
Our main results are certainly well known in the case of Hardy and Bergman spaces.
However, these results are highly nontrivial in the general case. In particular, we mention
that our results are new for the case of the Drury-Arveson space in higher dimensions
and the case of the Dirichlet space even in dimension $1$. In fact, these two cases are the
main motivation for our general theory here. See \cite{Ar, Dr} for the original introduction
of the Drury-Arveson space and \cite{BBF1, BBF2, FX1, FX2, RS, S} for some recent work
about operator theory and function theory for the Drury-Arveson space.
The difficulty in the general case stems from the fact that the underlying
space is defined by properties of a certain derivative. This makes some problems that are
obvious for Bergman and Hardy spaces very difficult for the general case. For example, in
the computation of spectrum, we need to show that if $\varphi$ is a multiplier of the
space $H^2_\beta$ and $\lambda$ is a constant such that $|\lambda-\varphi(z)|\ge
\delta$ for some positive $\delta$ and all $z\in{\mathbb B}_n$, then the function
$1/(\lambda-\varphi)$ is also a multiplier of $H^2_\beta$. If the space is defined in terms of
the integrability of the function itself (such as the Hardy space and the Bergman space), this
desired property is obvious. However, if the space is defined in terms of the integrability of a
certain fractional derivative of $f$, then the problem becomes challenging.
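One way to see the difficulty concretely is through the Leibniz rule: the first-order radial derivative is a derivation, $R(fg)=fRg+gRf$, but the fractional operators $R^\beta$ satisfy no such product rule. A one-variable coefficient computation, included here only as an illustration, makes this explicit:

```python
def radial(coeffs, beta):
    # R^beta on sum_k c_k z^k: kill the constant term, scale c_k by k**beta.
    return [0.0] + [k**beta * c for k, c in enumerate(coeffs) if k >= 1]

def mult(f, g):
    # Cauchy product of two polynomials given by coefficient lists.
    out = [0.0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

f = g = [1.0, 1.0]                   # f(z) = g(z) = 1 + z

# First order: R(fg) = f*Rg + g*Rf holds exactly.
lhs = radial(mult(f, g), 1)
rhs = [a + b for a, b in zip(mult(f, radial(g, 1)), mult(g, radial(f, 1)))]
assert lhs == rhs                    # both equal [0.0, 2.0, 2.0]

# Order 1/2: the naive product rule fails.
lhs2 = radial(mult(f, g), 0.5)       # coefficient of z^2 is 2**0.5, not 2
rhs2 = [a + b for a, b in zip(mult(f, radial(g, 0.5)), mult(g, radial(f, 0.5)))]
assert lhs2 != rhs2
```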
\section{Some characterizations of $H^2_\beta$}
Recall that $H^2$ is the space of holomorphic functions $f$ on ${\mathbb B}_n$ such that
$$\|f\|^2_{H^2}=\sup_{0<r<1}\int_{\sn}|f(r\zeta)|^2\,d\sigma(\zeta)<\infty,$$
where $d\sigma$ is the normalized Lebesgue measure on the unit sphere
${\mathbb S}_n=\partial{\mathbb B}_n$. It is well known that functions $f\in H^2$ have radial limits
$$f(\zeta)=\lim_{r\to1^-}f(r\zeta)$$
for almost all $\zeta\in{\mathbb S}_n$. Moreover, the radial limit function $f(\zeta)$ above belongs
to $L^2({\mathbb S}_n,d\sigma)$. The inner product in $H^2$ can then be written as
$$\langle f,g\rangle_0=\langle f,g\rangle_{H^2}=\int_{\sn} f(\zeta)\overline{g(\zeta)}
\,d\sigma(\zeta),$$
and its induced norm on $H^2$ is given by
$$\|f\|^2_0=\|f\|^2_{H^2}=\int_{\sn}|f(\zeta)|^2\,d\sigma(\zeta).$$
It is well known that a function $f\in H({\mathbb B}_n)$ belongs to $H^2$ if and only if
$$\int_{\bn}|Rf(z)|^2(1-|z|^2)\,dv(z)<\infty,$$
where $dv$ is the normalized volume measure on ${\mathbb B}_n$. See \cite{Ru, ZZ, Z}. More
generally, for any $t>-1$, we consider the weighted volume measure
$$dv_t(z)=c_t(1-|z|^2)^t\,dv(z),$$
where $c_t$ is a positive normalizing constant such that $v_t({\mathbb B}_n)=1$. The spaces
$$A^2_t=L^2({\mathbb B}_n,dv_t)\cap H({\mathbb B}_n)$$
are called weighted Bergman spaces (with standard weights).
We begin with the following well-known Hardy-Littlewood type theorem for weighted
Bergman spaces.
\begin{lem}
Suppose $f\in H({\mathbb B}_n)$, $p>0$, $t>-1$, and $\beta$ is real. If $p\beta+t>-1$, then
$$\int_{\bn}|f(z)|^p\,dv_t(z)<\infty$$
if and only if
$$\int_{\bn}(1-|z|^2)^{p\beta}|R^\beta f(z)|^p\,dv_t(z)<\infty.$$
Moreover, the two integrals above are comparable when $f(0)=0$; namely, each
one is bounded by a positive constant multiple of the other, with the constants independent of $f$.
\label{1}
\end{lem}
\begin{proof}
The case $\beta\ge0$ is well known to experts in the field. See \cite{Gr, ZZ, Z}. In
particular, it was shown in Theorem 4.2 of \cite{CZ} that the fractional derivative
$R^\beta$ can be replaced by another fractional derivative $R^{s,\beta}$, and in
Theorem 2.19 of \cite{Z}, our desired result here was proved in terms of $R^{s,\beta}$.
See Theorem 14 of \cite{ZZ} as well.
If $\beta<0$, we let $\alpha=-\beta>0$ and let $g=R^\beta f$. Then the condition
$$\int_{\bn}(1-|z|^2)^{p\beta}|R^\beta f(z)|^p\,dv_t(z)<\infty$$
can be written as
$$\int_{\bn}|g(z)|^p\,dv_{p\beta+t}(z)<\infty,$$
which, according to the nonnegative case in the previous paragraph, is equivalent to
$$\int_{\bn}(1-|z|^2)^{p\alpha}|R^\alpha g(z)|^p\,dv_{p\beta+t}(z)<\infty,$$
or
$$\int_{\bn}|f(z)|^p\,dv_t(z)<\infty.$$
This proves the desired result.
\end{proof}
The following result characterizes Hardy-Sobolev spaces in terms of integrability with
respect to weighted volume measures.
\begin{prop}
Suppose $\beta\in{\mathbb R}$ and $f\in H({\mathbb B}_n)$. Then the following conditions are equivalent.
\begin{enumerate}
\item[(a)] $f\in H^2_\beta$.
\item[(b)] $R^{\beta+1}f\in A^2_1$.
\end{enumerate}
If $N$ is a nonnegative integer with $N>\beta$, then the conditions above are
also equivalent to
\begin{enumerate}
\item[(c)] $R^Nf\in A^2_{2(N-\beta)-1}$.
\end{enumerate}
\label{2}
\end{prop}
\begin{proof}
The equivalence of (a) and (b) follows from the definition of $H^2_\beta$, the fact that
a function $g\in H({\mathbb B}_n)$ belongs to $H^2$ if and only if $Rg\in A^2_1$, and the fact that
$RR^\beta=R^{\beta+1}$.
If $N$ is a nonnegative integer such that $\beta<N$, we have $2(N-\beta)-1>-1$.
It follows from the equivalence of (a) and (b) that $f\in H^2_\beta$ if and only if
$$\int_{\bn}|R^{\beta+1}f(z)|^2(1-|z|^2)\,dv(z)<\infty.$$
By Lemma \ref{1}, the condition above is equivalent to
$$\int_{\bn}|R^{N-\beta-1}R^{\beta+1}f(z)|^2(1-|z|^2)^{1+2(N-\beta-1)}\,dv(z)<\infty,$$
or
$$\int_{\bn}|R^Nf(z)|^2(1-|z|^2)^{2(N-\beta)-1}\,dv(z)<\infty.$$
This proves the equivalence of (a) and (c).
\end{proof}
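As a one-variable illustration of the norm comparability behind condition (c) (a toy computation with sample values, not part of the proof): in dimension one, $\|z^k\|^2_{A^2_t}=k!\,\Gamma(t+2)/\Gamma(k+t+2)$, so with $t=2(N-\beta)-1$ the quantity $k^{2N}\|z^k\|^2_{A^2_t}\big/k^{2\beta}$ should approach a positive constant as $k\to\infty$:

```python
import math

beta, N = 0.5, 2                     # sample order and integer N > beta (assumptions)
t = 2 * (N - beta) - 1               # weight from condition (c)

def ratio(k):
    # k^{2N} * ||z^k||^2_{A^2_t} / k^{2 beta}, with
    # ||z^k||^2_{A^2_t} = k! Gamma(t+2) / Gamma(k+t+2) in dimension one,
    # computed via log-gamma to avoid overflow.
    log_norm = math.lgamma(k + 1) + math.lgamma(t + 2) - math.lgamma(k + t + 2)
    return k**(2 * N) * math.exp(log_norm) / k**(2 * beta)

# The ratio settles near the constant Gamma(t+2) = Gamma(4) = 6.
r1, r2 = ratio(1000), ratio(2000)
assert abs(r2 - 6.0) < 0.1
assert abs(r2 / r1 - 1.0) < 0.01
```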
Write the integral condition above as
\begin{equation}
\int_{\bn}(1-|z|^2)^{2N}|R^Nf(z)|^2(1-|z|^2)^{-(2\beta+1)}\,dv(z)<\infty.
\label{eq1}
\end{equation}
With the language of generalized Bergman spaces from \cite{ZZ}, the condition in
(\ref{eq1}) above tells us the space $H^2_\beta$ is the generalized Bergman space
$A^2_{-(2\beta+1)}$ that was defined in \cite{ZZ}. We single out a few special cases
that are of particular importance in complex analysis and functional analysis.
First, if $\beta=0$, $H^2_\beta$ of course becomes the classical
Hardy space $H^2$, whose reproducing kernel is
$$K(z,w)=\frac1{(1-\langle z,w\rangle)^n}.$$
If $-(2\beta+1)=0$, or $\beta=-1/2$, it follows from \cite{ZZ, Z} that $H^2_\beta$ becomes
the ordinary (unweighted) Bergman space $A^2$, whose reproducing kernel is
$$K(z,w)=\frac1{(1-\langle z,w\rangle)^{n+1}}.$$
If $-(2\beta+1)=-n$, or $\beta=(n-1)/2$, it follows from \cite{ZZ} that $H^2_\beta$ is
the so-called Drury-Arveson space, whose reproducing kernel is
$$K(z,w)=\frac1{1-\langle z,w\rangle}.$$
Finally, if $-(2\beta+1)=-(n+1)$, or $\beta=n/2$, it follows from \cite{ZZ} that
$H^2_\beta$ is the Dirichlet space of the unit ball, whose reproducing kernel is
$$1+\log\frac1{1-\langle z,w\rangle}.$$
Note that, in order to obtain the specific form of the various reproducing kernels
above, it may be necessary to adjust the inner product in $H^2_\beta$ to a slightly
different (but equivalent) form. Details are left to the interested reader.
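As a purely illustrative arithmetic check (not part of the text), the correspondence between $\beta$ and the kernel exponent can be tabulated: writing $H^2_\beta=A^2_t$ with $t=-(2\beta+1)$ and assuming, consistently with the four cases listed above, that the kernel of $A^2_t$ is $(1-\langle z,w\rangle)^{-(n+1+t)}$, the exponent simplifies to $n-2\beta$.

```python
from fractions import Fraction

def kernel_exponent(n, beta):
    """Exponent s in the kernel (1 - <z,w>)^(-s) for H^2_beta = A^2_t on B_n,
    where t = -(2*beta + 1); the exponent n + 1 + t simplifies to n - 2*beta."""
    t = -(2 * beta + 1)
    return n + 1 + t

n = 5  # the relations below hold in every dimension
assert kernel_exponent(n, 0) == n                    # Hardy space H^2
assert kernel_exponent(n, Fraction(-1, 2)) == n + 1  # Bergman space A^2
assert kernel_exponent(n, Fraction(n - 1, 2)) == 1   # Drury-Arveson space
assert kernel_exponent(n, Fraction(n, 2)) == 0       # Dirichlet space (log kernel)
print("kernel exponents match the four special cases")
```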
\section{Multipliers of $H^2_\beta$}
Although our main focus here is not on characterizations of pointwise multipliers (which
is a notoriously difficult problem in general), we still want to look at a couple of special
cases in which the multipliers are relatively easy to determine.
It is well known that every pointwise multiplier $\varphi$ of $H^2_\beta$ must be in
$H^\infty$, the space of all bounded holomorphic functions. Also, the size of the space
$H^2_\beta$ decreases as $\beta$ increases. Thus
$$H^\infty\subset H^2\subset H^2_\beta$$
for $\beta\le0$.
\begin{prop}
For $\beta\le0$ we have ${\mathcal M}_\beta=H^\infty$.
\label{3}
\end{prop}
\begin{proof}
If $\beta=0$, $H^2_\beta$ is the Hardy space $H^2$, whose multiplier algebra is
of course $H^\infty$. If $\beta<0$, it follows from (\ref{eq1}) that $H^2_\beta$ is
the weighted Bergman space $A^2_t$ with $t=-(2\beta+1)>-1$. The multiplier algebra
for such a Bergman space is clearly $H^\infty$.
\end{proof}
\begin{prop}
If $\beta>n/2$, then every function in $H^2_\beta$ is continuous up to the boundary,
$H^2_\beta$ is an algebra, and ${\mathcal M}_\beta=H^2_\beta$.
\label{4}
\end{prop}
\begin{proof}
Since each space $H^2_\beta$ contains the constant function $1$, we always have
${\mathcal M}_\beta\subset H^2_\beta$.
Suppose $\beta>n/2$ and $f\in H^2_\beta$. Then $R^\beta f\in H^2$, so there exists
a positive constant $C$ such that
$$|R^{\beta}f(z)|\le\frac C{(1-|z|^2)^{n/2}},\qquad z\in{\mathbb B}_n.$$
See Theorem 4.17 of \cite{Z}. It follows that
$$(1-|z|^2)^{\frac n2}|R^{\frac n2}R^{\beta-\frac n2}f(z)|\le C,\qquad z\in{\mathbb B}_n.$$
This shows that the function $R^{\beta-\frac n2}f(z)$ belongs to the Bloch space
(see \cite{ZZ, Z} for example). It is well known that the fractional integral of any positive
order of a Bloch function is in the ball algebra. Thus the function
$$f(z)=R^{\frac n2-\beta}R^{\beta-\frac n2}f(z)$$
is continuous up to the boundary.
Finally, suppose $\beta>n/2$ and $f,g\in H^2_\beta$. Let $N$ be a positive integer
greater than $\beta$. By Proposition~\ref{2}, $R^Nf$ and $R^Ng$ both belong to
$A^2_{2(N-\beta)-1}$. We proceed to show that $R^N(fg)$ also belongs to
$A^2_{2(N-\beta)-1}$, which will prove that $H^2_\beta$ is an algebra and
${\mathcal M}_\beta=H^2_\beta$.
If $N=1$, then the desired result follows from
$$R(fg)=fRg+gRf$$
and the fact that both $f$ and $g$ are in $H^\infty$.
If $N=2$, we have
$$R^2(fg)=fR^2g+2RfRg+gR^2f.$$
The first and third term on the right-hand side both belong to $A^2_{2(N-\beta)-1}$,
because $R^2f$ and $R^2g$ both belong to $A^2_{2(N-\beta)-1}$ and $f$ and $g$ are
both bounded. To estimate the middle term, let
$$I=\int_{\bn}|Rf(z)Rg(z)|^2(1-|z|^2)^{2(N-\beta)-1}\,dv(z).$$
By H\"older's inequality, we have $I^2\le I(f)I(g)$, where
$$I(f)=\int_{\bn}|Rf(z)|^4(1-|z|^2)^{2(N-\beta)-1}\,dv(z).$$
By Lemma \ref{1}, there is a positive constant $C$ such that
$$I(f)\le C\int_{\bn}|R^2f(z)|^4(1-|z|^2)^{4+2(N-\beta)-1}\,dv(z).$$
Since $f$ belongs to the Bloch space, the function $(1-|z|^2)^2R^2f(z)$ is bounded.
Thus there exists another positive constant $C$ such that
$$I(f)\le C\int_{\bn}|R^2f(z)|^2(1-|z|^2)^{2(N-\beta)-1}\,dv(z)<\infty.$$
Similarly, $I(g)<\infty$. Thus $I<\infty$.
The case of more general $N$ is proved in the same manner. More specifically, by
the binomial formula, we have
$$R^N(fg)=\sum_{k=0}^N\binom{N}{k}R^kfR^{N-k}g.$$
The two terms corresponding to $k=0$ and $k=N$ are disposed of easily, as both $f$
and $g$ are bounded. Fix $0<k<N$ and consider the integral
$$I=\int_{\bn}|R^kf(z)R^{N-k}g(z)|^2(1-|z|^2)^{2(N-\beta)-1}\,dv(z).$$
For any $1<p<\infty$ with $1/p+1/q=1$, we use H\"older's inequality to obtain
$$I\le I_1^{1/p}I_2^{1/q},$$
where
$$I_1=\int_{\bn}|R^kf(z)|^{2p}(1-|z|^2)^{2(N-\beta)-1}\,dv(z),$$
and
$$I_2=\int_{\bn}|R^{N-k}g(z)|^{2q}(1-|z|^2)^{2(N-\beta)-1}\,dv(z).$$
By Lemma~\ref{1}, there exists a positive constant $C$ such that
\begin{eqnarray*}
I_1&\le& C\int_{\bn}|R^{N-k}R^kf(z)|^{2p}(1-|z|^2)^{2p(N-k)+2(N-\beta)-1}\,dv(z)\\
&=&C\int_{\bn}|R^Nf(z)|^{2p}(1-|z|^2)^{2p(N-k)+2(N-\beta)-1}\,dv(z).
\end{eqnarray*}
Let us now choose $p=N/k$ and $q=N/(N-k)$. Write $2p=2+2(p-1)$, observe that
$$2p(N-k)=2(p-1)N,$$
and use the boundedness of the function $(1-|z|^2)^NR^Nf(z)$ (which follows from
$f\in H^\infty$ and the fact that $H^\infty$ is contained in the Bloch space). We obtain
another positive constant $C$ such that
$$I_1\le C\int_{\bn}|R^Nf(z)|^2(1-|z|^2)^{2(N-\beta)-1}\,dv(z)<\infty.$$
The proof for $I_2<\infty$ is the same. This completes the proof of the proposition.
\end{proof}
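The binomial expansion of $R^N(fg)$ used in the proof above depends only on $R$ being a derivation. As an illustrative check (in one complex dimension, where $Rf=zf'$; the fractional operators $R^\beta$ are of course not iterates of $R$, but only integer orders enter here), the expansion can be verified symbolically:

```python
import sympy as sp
from math import comb

z = sp.symbols('z')

def R(h, times=1):
    """Iterate the radial derivative Rh = z*h' (one complex variable)."""
    for _ in range(times):
        h = sp.expand(z * sp.diff(h, z))
    return h

f = 1 + 2*z + z**3   # sample polynomials; any holomorphic f, g would do
g = 3 - z + z**2

for N in range(1, 5):
    lhs = R(f * g, N)
    rhs = sum(comb(N, k) * R(f, k) * R(g, N - k) for k in range(N + 1))
    assert sp.expand(lhs - rhs) == 0
print("R^N(fg) = sum_k C(N,k) R^k f R^{N-k} g holds for N = 1..4")
```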
The determination of the multiplier algebra ${\mathcal M}_\beta$ for $0<\beta\le n/2$ is a much
more challenging (open) problem. Our focus here is not to characterize the multiplier
algebra. Instead, we will assume that $\varphi\in{\mathcal M}_\beta$ and consider the spectral
properties of the bounded operator $M_\varphi: H^2_\beta\to H^2_\beta$.
\section{A differentiation formula}
A key step in the computation of spectrum for holomorphic multiplication operators
on $H^2_\beta$ is the following: if $\varphi$ is a multiplier of $H^2_\beta$ and if
$|\varphi(z)|\ge\delta$ for some positive constant $\delta$ and all $z\in{\mathbb B}_n$, then
$1/\varphi$ is a multiplier of $H^2_\beta$ as well. This is obvious in the case of Hardy
and Bergman spaces. But when the space $H^2_\beta$ has to be defined in terms of
derivatives, the desired result becomes highly nontrivial.
In this section we prove a general formula of differentiation that will be critical to our
spectral analysis of multiplication operators on $H^2_\beta$. The formula is clearly of
independent interest, but unfortunately, its proof is not easy.
\begin{thm}
Suppose $N$ is a positive integer, $I$ is an open interval, and $f$ and $g$ are functions
on $I$ that are differentiable up to order $N$. Then
\begin{equation}
\sum_{k=0}^N(-1)^k\binom{N}{k}g^kD^{N-1}(g^{N-k}f)=0,
\label{eq2}
\end{equation}
where
$$D^n=\frac{d^n}{dx^n},\qquad n\ge0,$$
is the $n$-th order differential operator.
\label{5}
\end{thm}
\begin{proof}
For $n=1$ we simply write
$$D=D^1=\frac{d}{dx}.$$
We will prove the result by mathematical induction on $N$.
When $N=1$, the desired formula takes the form $gf-gf=0$, which is trivial.
Now assume that the formula in (\ref{eq2}) holds for some positive integer $N$
and all functions $f$ and $g$ on $I$ that are differentiable up to order $N-1$. We
will show that the same formula also holds when $N$ is replaced by $N+1$. To this end,
we assume that $f$ and $g$ are arbitrary functions on $I$ that are differentiable up to
order $N$. We apply $D$ to (\ref{eq2}) to obtain
\begin{align}
\sum_{k=0}^N&(-1)^k\binom{N}{k}kg^{k-1}D(g)D^{N-1}(g^{N-k}f)\label{eq3}\\
&=-\sum_{k=0}^N(-1)^k\binom{N}{k}g^kD^N(g^{N-k}f).\nonumber
\end{align}
Since this formula holds for all $f$ and $g$ differentiable up to order $N$, we can
replace $f$ by $gf$ in (\ref{eq3}) to obtain
\begin{align}
\sum_{k=0}^N&(-1)^k\binom{N}{k}kg^{k-1}D(g)D^{N-1}(g^{N+1-k}f)\label{eq4}\\
&=-\sum_{k=0}^N(-1)^k\binom{N}{k}g^kD^N(g^{N+1-k}f).\nonumber
\end{align}
Multiply equation (\ref{eq2}) by $g$. We obtain
$$\sum_{k=0}^N(-1)^k\binom{N}{k}g^{k+1}D^{N-1}(g^{N-k}f)=0.$$
Applying $D$ to this equation, we get
\begin{align}
\sum_{k=0}^N&(-1)^k\binom{N}{k}(k+1)g^kD(g)D^{N-1}(g^{N-k}f)\label{eq5}\\
&+\sum_{k=0}^N(-1)^k\binom{N}{k}g^{k+1}D^N(g^{N-k}f)=0.\nonumber
\end{align}
It is elementary to check that the first term above is equal to
\begin{align}
-\frac1N\sum_{k=0}^N&(-1)^k\binom{N}{k}kg^{k-1}D(g)D^{N-1}(g^{N+1-k}f)\label{eq6}\\
&+\frac{N+1}N g\sum_{k=0}^N(-1)^k\binom{N}{k}kg^{k-1}D(g)D^{N-1}(g^{N-k}f).\nonumber
\end{align}
In fact, if we denote by $S$ the sum of the two terms above, then
\begin{align*}
S&=-\frac1N\sum_{k=1}^{N}(-1)^k\binom{N}{k}kg^{k-1}D(g)D^{N-1}(g^{N+1-k}f)\\
&\qquad+\frac{N+1}N\sum_{k=0}^N(-1)^k\binom{N}{k}kg^kD(g)D^{N-1}(g^{N-k}f)\\
&=\frac1N\sum_{k=0}^{N-1}(-1)^k\binom{N}{k+1}(k+1)g^kD(g)D^{N-1}(g^{N-k}f)\\
&\qquad+\frac{N+1}N\sum_{k=0}^N(-1)^k\binom{N}{k}kg^kD(g)D^{N-1}(g^{N-k}f)\\
&=\sum_{k=0}^{N-1}(-1)^k\left[\frac{k+1}N\binom{N}{k+1}+
\frac{k(N+1)}{N}\binom{N}{k}\right]g^kD(g)D^{N-1}(g^{N-k}f)\\
&\qquad+(-1)^N(N+1)g^ND(g)D^{N-1}(f)\\
&=\sum_{k=0}^N(-1)^k\binom{N}{k}(k+1)g^kD(g)D^{N-1}(g^{N-k}f).
\end{align*}
Combining (\ref{eq5}) and (\ref{eq6}), we see that equation (\ref{eq5}) becomes
\begin{align*}
\frac1N\sum_{k=0}^N&(-1)^k\binom{N}{k}g^kD^N(g^{N+1-k}f)\\
&-\frac{N+1}N g\sum_{k=0}^N(-1)^k\binom{N}{k}g^kD^N(g^{N-k}f)\\
&+\sum_{k=0}^N(-1)^k\binom{N}{k}g^{k+1}D^N(g^{N-k}f)=0.
\end{align*}
Cancel the third term against part of the second term above. We obtain
\begin{align*}
\frac1N\sum_{k=0}^N&(-1)^k\binom{N}{k}g^kD^N(g^{N+1-k}f)\\
&-\frac1N\sum_{k=0}^N(-1)^k\binom{N}{k}g^{k+1}D^N(g^{N-k}f)=0.
\end{align*}
Multiply both sides by $N$ and shift the index of summation in the second term. Then
$$\sum_{k=0}^N(-1)^k\binom{N}{k}g^kD^N(g^{N+1-k}f)+\sum_{k=1}^{N+1}
(-1)^k\binom{N}{k-1}g^kD^N(g^{N+1-k}f)=0.$$
Rewrite this as
\begin{align*}
&D^N(g^{N+1}f)+(-1)^{N+1}g^{N+1}D^N(f)\\
&+\sum_{k=1}^N(-1)^k\left[\binom{N}{k}+\binom{N}{k-1}\right]g^kD^N(g^{N+1-k}f)=0,
\end{align*}
and observe that
$$\binom{N}{k}+\binom{N}{k-1}=\binom{N+1}{k}.$$
We conclude that
$$\sum_{k=0}^{N+1}(-1)^k\binom{N+1}{k}g^kD^N(g^{N+1-k}f)=0.$$
This completes the induction step and finishes the proof of the theorem.
\end{proof}
Once again, we have tried hard to find a simpler proof of the seemingly
elementary formula above, but without success. We certainly realize that the
formula may have appeared in the literature before; however, our search has not
turned up a reference, so the formula appears to be new.
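As a purely symbolic sanity check (not part of the proof), the identity (\ref{eq2}) can be confirmed for small $N$ with arbitrary polynomial test functions:

```python
import sympy as sp
from math import comb

x = sp.symbols('x')

def D(h, n):
    """n-th derivative in x; n = 0 returns h itself."""
    for _ in range(n):
        h = sp.diff(h, x)
    return h

f = x**3 + 2*x       # arbitrary test functions (polynomials keep the check exact)
g = x**2 - x + 1

for N in range(1, 5):
    total = sum((-1)**k * comb(N, k) * g**k * D(g**(N - k) * f, N - 1)
                for k in range(N + 1))
    assert sp.expand(total) == 0
print("identity (2) verified symbolically for N = 1..4")
```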
We restate Theorem~\ref{5} as follows, which will be more directly related to our
spectral analysis for multiplication operators later on.
\begin{cor}
Suppose $N$ is a positive integer, $f$ and $g$ are $N$ times differentiable on the open
interval $I$, and $g$ is non-vanishing on $I$. Then
$$D^N\left(\frac fg\right)=\frac{(-1)^{N}}{g^{N+1}}
\sum_{k=0}^N(-1)^k\binom{N+1}{k}g^kD^N(g^{N-k}f).$$
\label{6}
\end{cor}
\begin{proof}
By Theorem~\ref{5}, we have
$$D^N(f)=-\frac{(-1)^{N+1}}{g^{N+1}}\sum_{k=0}^N(-1)^k\binom{N+1}{k}
g^kD^N(g^{N+1-k}f).$$
Replace $f$ by $f/g$. We obtain the desired result.
\end{proof}
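Corollary~\ref{6} can likewise be spot-checked symbolically; the sketch below, with an arbitrary non-vanishing polynomial $g$, is illustrative only:

```python
import sympy as sp
from math import comb

x = sp.symbols('x')
f = x**3 + 2*x + 1
g = x**2 + 1         # non-vanishing on the real line

for N in range(1, 4):
    lhs = sp.diff(f / g, x, N)
    rhs = (-1)**N / g**(N + 1) * sum(
        (-1)**k * comb(N + 1, k) * g**k * sp.diff(g**(N - k) * f, x, N)
        for k in range(N + 1))
    assert sp.simplify(lhs - rhs) == 0
print("Corollary 6 verified symbolically for N = 1..3")
```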
\section{The spectrum of $M_\varphi$}
In this section we determine the spectrum of $M_\varphi: H^2_\beta\to H^2_\beta$
for $\varphi\in{\mathcal M}_\beta$ and $\beta\in{\mathbb R}$. Note that the case of Hardy and Bergman spaces
is well known; see \cite{Z2} for example. Some preliminary work about the spectrum
of multiplication operators on the Dirichlet space can be found in \cite{CH}.
The key here is the following result about reciprocals of pointwise multipliers. See
\cite{RS} for related partial results in the case of the Drury-Arveson space.
\begin{prop}
Suppose $\varphi\in{\mathcal M}_\beta$ and $|\varphi(z)|\ge\delta$ for some positive constant
$\delta$ and all $z\in{\mathbb B}_n$. Then $1/\varphi$ belongs to ${\mathcal M}_\beta$ as well.
\label{7}
\end{prop}
\begin{proof}
Choose a positive integer $N$ such that $N>\beta$. By Proposition~\ref{2}, a function
$f\in H({\mathbb B}_n)$ belongs to $H^2_\beta$ if and only if $R^N(f)\in A^2_{2(N-\beta)-1}$.
Since $2(N-\beta)-1>-1$, the multiplier algebra of $A^2_{2(N-\beta)-1}$ is $H^\infty$.
The radial derivative $R$ obeys the same differentiation rules as the ordinary derivative
$D$. Thus by Corollary~\ref{6}, we have
\begin{equation}
R^N\left(\frac f\varphi\right)=\frac{(-1)^{N}}{\varphi^{N+1}}
\sum_{k=0}^N(-1)^k\binom{N+1}{k}\varphi^kR^N(\varphi^{N-k}f)
\label{eq7}
\end{equation}
for all $f\in H^2_\beta$. Since $\varphi\in{\mathcal M}_\beta$, each function $\varphi^{N-k}f$ belongs
to $H^2_\beta$, or equivalently, $R^N(\varphi^{N-k}f)\in A^2_{2(N-\beta)-1}$. Since
$\varphi$ and $1/\varphi$ are both bounded holomorphic functions on ${\mathbb B}_n$, it
follows from (\ref{eq7}) that $R^N(f/\varphi)\in A^2_{2(N-\beta)-1}$, or equivalently,
$f/\varphi\in H^2_\beta$ whenever $f\in H^2_\beta$.
\end{proof}
We can now prove the first main result of the paper.
\begin{thm}
For any real $\beta$ and any $\varphi\in{\mathcal M}_\beta$ the spectrum of
the operator $M_\varphi: H^2_\beta\to H^2_\beta$ is given by
$\sigma(M_\varphi)=\overline{\varphi({\mathbb B}_n)}$, which is the closure of the range of
$\varphi$ in the complex plane.
\label{8}
\end{thm}
\begin{proof}
It is easy to see that the point-evaluation at any $a\in{\mathbb B}_n$ is a bounded linear
functional on $H^2_\beta$ (by Taylor expansion for example). Thus each $H^2_\beta$
is a reproducing kernel Hilbert space. For any $a\in{\mathbb B}_n$ let $K(z,a)=K_a(z)$ denote
the reproducing kernel of $H^2_\beta$ at $a$. Then
\begin{eqnarray*}
M_\varphi^*K_a(z)&=&\langle M_\varphi^*K_a, K_z\rangle\\
&=&\langle K_a, M_\varphi K_z\rangle=\langle K_a, \varphi K_z\rangle\\
&=&\overline{\varphi(a)}\overline{K_z(a)}=\overline{\varphi(a)}K_a(z).
\end{eqnarray*}
This shows that $M_\varphi^*K_a=\overline{\varphi(a)}K_a$, or $\overline{\varphi(a)}$
is an eigenvalue of $M_\varphi^*$. Thus $\overline{\varphi(a)}\in\sigma(M_\varphi^*)$,
and so $\varphi(a)\in\sigma(M_\varphi)$. Since the spectrum of any bounded linear
operator is closed, we must have $\overline{\varphi({\mathbb B}_n)}\subset\sigma(M_\varphi)$.
On the other hand, if $\lambda\in{\mathbb C}-\overline{\varphi({\mathbb B}_n)}$, then there exists a
positive number $\delta$ such that $|\lambda-\varphi(z)|\ge\delta$ for all $z\in{\mathbb B}_n$.
Since $\varphi\in{\mathcal M}_\beta$, we also have $\lambda-\varphi\in{\mathcal M}_\beta$. By Proposition~\ref{7},
the function $\psi=1/(\lambda-\varphi)$ is also a pointwise multiplier of
$H^2_\beta$. It is clear that
$$M_\psi(\lambda I-M_\varphi)=(\lambda I-M_\varphi)M_\psi=I.$$
Thus $\lambda I-M_\varphi$ is invertible, or $\lambda\not\in\sigma(M_\varphi)$.
Combining this with what we proved in the previous paragraph, we conclude that
$\sigma(M_\varphi)=\overline{\varphi({\mathbb B}_n)}$.
\end{proof}
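The eigenvalue relation $M_\varphi^*K_a=\overline{\varphi(a)}K_a$ derived in the proof can be illustrated numerically in the simplest setting: the Hardy space $H^2$ of the unit disk with $\varphi(z)=z$, where $K_a(z)=\sum_k\bar a^kz^k$ and the adjoint of $M_z$ acts on Taylor coefficients as the backward shift. The truncation below is for illustration only.

```python
import numpy as np

M = 200                               # truncation order
a = 0.4 + 0.3j                        # any point of the unit disk
K_a = np.conj(a) ** np.arange(M)      # Taylor coefficients of K_a

backward_shift = K_a[1:]              # coefficients of M_z^* K_a
expected = np.conj(a) * K_a[:-1]      # conj(phi(a)) * K_a with phi(z) = z

assert np.allclose(backward_shift, expected)
print("M_z^* K_a = conj(a) K_a on the first", M - 1, "coefficients")
```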
The theorem above is somewhat surprising, because it shows that the spectral
radius of the operator $M_\varphi: H^2_\beta\to H^2_\beta$ is always equal to
$\|\varphi\|_\infty$, while the norm of $M_\varphi$ can be strictly larger than
$\|\varphi\|_\infty$! For example, the norm of the operator $M_z$ on the classical
Dirichlet space on the unit disk is clearly greater than one.
More generally, in dimension one, the operator $M_z: H^2_\beta\to H^2_\beta$ is
a weighted shift whose spectrum is always the closed unit disk, although the weight
sequence can vary greatly as $\beta$ changes over ${\mathbb R}$.
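To make the remark concrete: taking the Dirichlet-space norm induced by the kernel $1+\log\frac1{1-\langle z,w\rangle}$ (so that $\|z^k\|^2=k$ for $k\ge1$ and $\|1\|=1$, an assumption made here purely for illustration), $M_z$ is the weighted shift with weights $w_k=\|z^{k+1}\|/\|z^k\|$. Its operator norm is $\sqrt2>1$, while $\|M_z^m\|^{1/m}\to1$, consistent with spectral radius $1$:

```python
import math

def nrm2(k):
    """Squared Dirichlet norm of z^k: ||1||^2 = 1 and ||z^k||^2 = k for k >= 1."""
    return 1 if k == 0 else k

weights = [math.sqrt(nrm2(k + 1) / nrm2(k)) for k in range(5000)]
assert abs(max(weights) - math.sqrt(2)) < 1e-12      # ||M_z|| = sqrt(2) > 1

# ||M_z^m|| = sup_k sqrt(nrm2(k+m)/nrm2(k)) = sqrt(m+1), attained at k = 1,
# so ||M_z^m||^(1/m) = (m+1)^(1/(2m)) -> 1 as m -> infinity.
for m in (10, 100, 10000):
    print(m, math.sqrt(m + 1) ** (1 / m))
assert math.sqrt(10001) ** (1 / 10000) < 1.001
```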
\section{The Fredholm theory for $M_\varphi$}
In this section we compute the essential spectrum of $M_\varphi: H^2_\beta\to
H^2_\beta$ when $\varphi$ is a multiplier of $H^2_\beta$.
Recall that a bounded linear operator $T$ on $H^2_\beta$ is called a Fredholm operator
if it has closed range, has finite dimensional kernel, and has finite dimensional co-kernel.
When $T$ is Fredholm, the integer
$${\rm Ind}\,(T)=\dim\ker(T)-\dim\ker(T^*)$$
is called the Fredholm index of $T$.
The essential spectrum of a bounded linear operator
$T$ on $H^2_\beta$, denoted by $\sigma_e(T)$, is the set of complex numbers
$\lambda$ such that $\lambda I-T$ is not Fredholm. See \cite{D, Z2} for basic
information about Fredholm operators, the Fredholm index, and the essential spectrum.
\begin{lem}
Let $T$ be a bounded linear operator on $H^2_\beta$. Then the following conditions
are equivalent.
\begin{enumerate}
\item[(a)] $T$ is not Fredholm.
\item[(b)] There exists a sequence $\{f_n\}$ of unit vectors in $H^2_\beta$ such that
$f_n\to0$ weakly and $\|Tf_n\|\to0$ or $\|T^*f_n\|\to0$.
\end{enumerate}
\label{9}
\end{lem}
\begin{proof}
This is well known. See \cite{CH} for example.
\end{proof}
Our determination of the essential spectrum for $M_\varphi: H^2_\beta\to H^2_\beta$
depends on several different techniques that are valid in different situations. We begin
with the high dimensional case.
\begin{lem}
Suppose $n>1$, $\beta$ is real, and $\varphi\in{\mathcal M}_\beta$. Then
$$\sigma_e(M_\varphi)=\bigcap_{0<r<1}\overline{\varphi({\mathbb B}_n-r{\mathbb B}_n)}
=\overline{\varphi({\mathbb B}_n)}=\sigma(M_\varphi).$$
\label{10}
\end{lem}
\begin{proof}
If $\lambda\in\varphi({\mathbb B}_n)$, then the function $\lambda-\varphi$ has a zero inside
${\mathbb B}_n$. Since $n>1$, the zero set of $\lambda-\varphi$ has no isolated points. Thus
$\lambda-\varphi$ has infinitely many distinct zeros inside ${\mathbb B}_n$. If $\lambda-\varphi(a_k)
=0$ for infinitely many distinct points $a_k$ in ${\mathbb B}_n$, then by the proof of
Theorem~\ref{8}, $\ker(M^*_{\lambda-\varphi})$ contains all the kernel functions
$K_{a_k}$, which span an infinite dimensional subspace in $H^2_\beta$. Thus $M^*_{\lambda-\varphi}$ is not Fredholm. This shows that $\varphi({\mathbb B}_n)\subset\sigma_e(M_\varphi)$. Taking the
closure, we obtain
$$\bigcap_{0<r<1}\overline{\varphi({\mathbb B}_n-r{\mathbb B}_n)}\subset\overline{\varphi({\mathbb B}_n)}
\subset\sigma_e(M_\varphi)\subset\sigma(M_\varphi).$$
On the other hand, if
$$\lambda\not\in\bigcap_{0<r<1}\overline{\varphi({\mathbb B}_n-r{\mathbb B}_n)},$$
then there exist $r\in(0,1)$ and $\delta>0$ such that $|\lambda-\varphi(z)|\ge\delta$
for all $z\in{\mathbb B}_n-r{\mathbb B}_n$. In particular, the function $\psi=1/(\lambda-\varphi)$ is
holomorphic on the shell ${\mathbb B}_n-r{\mathbb B}_n$. Since $n>1$, it follows from the Hartogs
extension theorem (see \cite{K}) and the maximum modulus principle that $\psi$ can
be extended to a bounded holomorphic function on the whole unit ball ${\mathbb B}_n$. Now
the function $\psi(z)(\lambda-\varphi(z))$ is holomorphic on ${\mathbb B}_n$ and equals $1$ on
the shell ${\mathbb B}_n-r{\mathbb B}_n$. By the identity theorem, we have
$$\psi(z)(\lambda-\varphi(z))=1,\qquad z\in{\mathbb B}_n.$$
This shows that $\lambda-\varphi$ is non-vanishing on ${\mathbb B}_n$ and
$$\psi(z)=\frac1{\lambda-\varphi(z)},\qquad z\in{\mathbb B}_n.$$
Since $\lambda-\varphi$ is bounded below on the shell ${\mathbb B}_n-r{\mathbb B}_n$ and is non-vanishing
on $r{\mathbb B}_n$, it follows that $\lambda-\varphi$ is bounded below on the whole unit ball.
By Proposition~\ref{7}, the function $\psi$ is also a multiplier of $H^2_\beta$. Since
$$M_{\lambda-\varphi}M_\psi=M_\psi M_{\lambda-\varphi}=I,$$
we conclude that $\lambda I-M_\varphi=M_{\lambda-\varphi}$ is invertible, or
$\lambda\not\in\sigma(M_\varphi)$. This shows
$$\sigma(M_\varphi)\subset\bigcap_{0<r<1}\overline{\varphi({\mathbb B}_n-r{\mathbb B}_n)}$$
and completes the proof of the lemma.
\end{proof}
It is easy to see that the result above does not hold for $n=1$. In fact, if $n=1$ and
$\varphi$ is continuous up to the boundary, then in general,
$$\overline{\varphi({\mathbb D})}=\varphi(\overline{\mathbb D})\not=\varphi({\mathbb T})
=\bigcap_{0<r<1}\overline{\varphi({\mathbb D}-r{\mathbb D})},$$
where ${\mathbb D}={\mathbb B}_1$ is the open unit disk and ${\mathbb T}$ is the unit circle in the complex plane.
\begin{lem}
Suppose $n=1$, $\beta\le1/2$, and $\varphi\in{\mathcal M}_\beta$. Then
$$\sigma_e(M_\varphi)=\bigcap_{0<r<1}\overline{\varphi({\mathbb D}-r{\mathbb D})}.$$
\label{11}
\end{lem}
\begin{proof}
Let $K_a(z)=K(z,a)$ be the reproducing kernel of $H^2_\beta$ at $a\in{\mathbb B}_n$. Let
$k_a=K_a/\|K_a\|$ be the normalized reproducing kernel of $H^2_\beta$ at $a\in{\mathbb D}$.
When $n=1$, it is easy to see that $k_a\to0$ weakly as $|a|\to1^-$ if and only if
$\beta\le 1/2$.
First assume that
$$\lambda\in\bigcap_{0<r<1}\overline{\varphi({\mathbb D}-r{\mathbb D})}.$$
Then we can find a sequence $\{a_k\}\subset{\mathbb D}$ such that $|a_k|\to1$ and
$\varphi(a_k)\to\lambda$ as $k\to\infty$. Let
$$T=\lambda I-M_\varphi=M_{\lambda-\varphi}: H^2_\beta\to H^2_\beta.$$
It follows from the proof of Theorem~\ref{8} that
$$T^*k_{a_k}=(\overline\lambda-\overline{\varphi(a_k)})k_{a_k},\qquad k\ge1.$$
Therefore,
$$\lim_{k\to\infty}\|T^*k_{a_k}\|=\lim_{k\to\infty}|\varphi(a_k)-\lambda|=0.$$
By Lemma~\ref{9}, the operator $T$ is not Fredholm, or
$\lambda\in\sigma_e(M_\varphi)$.
On the other hand, if
$$\lambda\not\in\bigcap_{0<r<1}\overline{\varphi({\mathbb D}-r{\mathbb D})},$$
then there exist $r\in(0,1)$ and $\delta>0$ such that $|\lambda-\varphi(z)|\ge\delta$
for all $z\in{\mathbb D}-r{\mathbb D}$. Let $a_1,\cdots, a_N$ denote the zeros of $\lambda-\varphi$ in
$|z|\le r$, with multiple zeros repeated according to their multiplicities. Then
$$\lambda-\varphi(z)=\psi(z)p(z),$$
where
$$p(z)=(z-a_1)\cdots(z-a_N)$$
and $\psi$ is an invertible element of $H^\infty({\mathbb D})$. It follows from the proof of
Proposition~\ref{7} that $\psi\in{\mathcal M}_\beta$ as well (which then also implies that
$1/\psi\in{\mathcal M}_\beta$), because the estimates there only involve membership in the
Bergman spaces $A^2_t$ with $t>-1$, and such membership of non-vanishing
functions is determined by the behavior of the functions near the boundary.
For each $k$ the operator $M_{z-a_k}$ is Fredholm on $H^2_\beta$. In fact, the range
of $M_{z-a_k}$ is the closed subspace
$$I_k=(z-a_k)H^2_\beta=\{f\in H^2_\beta: f(a_k)=0\},$$
the kernel of $M_{z-a_k}$ is trivial (the operator is one-to-one), and the co-kernel of
$M_{z-a_k}$ is equal to the one dimensional space spanned by $K_{a_k}$. Since
$M_\psi$ is invertible on $H^2_\beta$ and $M_p=M_{z-a_1}\cdots M_{z-a_N}$ is
Fredholm (the product of Fredholm operators is still Fredholm), we conclude that
$$\lambda I-M_\varphi=M_{\lambda-\varphi}=M_\psi M_p$$
is Fredholm. This completes the proof of the lemma.
\end{proof}
It remains for us to tackle the case $n=1$ and $\beta>1/2$. We need to come up with
a new proof, because for $\beta>1/2$, the normalized reproducing kernels $k_a$ no
longer converge to $0$ weakly as $|a|\to1^-$. Our new proof for this case will
be based on the notion of peak functions.
Fix some $\zeta\in{\mathbb T}$ and consider the functions
$$f_k(z)=\left(\frac{1+\overline\zeta z}2\right)^k,\qquad k=1,2,3,\cdots.$$
The function $f(z)=(1+\overline\zeta z)/2$ is traditionally called the peak function at
the boundary point $\zeta$.
\begin{lem}
There exists a positive constant $c$ such that
$$\|f_k\|^2\ge c(k+1)^{2\beta-1}$$
for all $k\ge1$, where the norm is taken in $H^2_\beta$.
\label{12}
\end{lem}
\begin{proof}
Let $N$ be the smallest positive integer greater than $\beta$. Let us first consider
the integrals
$$I_k=\int_\D|f_k(z)|^2\,dA_{2(N-\beta)-1}(z),\qquad k\ge1.$$
By the binomial formula and Stirling's formula,
\begin{eqnarray*}
I_k&=&\frac1{4^k}\sum_{j=0}^k\binom{k}{j}^2\int_\D|z|^{2j}(1-|z|^2)^{2(N-\beta)-1}\,dA(z)\\
&\sim&\frac1{4^k}\sum_{j=0}^k\binom{k}{j}^2\int_0^1r^j(1-r)^{2(N-\beta)-1}\,dr\\
&\sim&\frac1{4^k}\sum_{j=0}^k\binom{k}{j}^2\frac1{(j+1)^{2(N-\beta)}}\\
&\ge&\frac1{4^k(k+1)^{2(N-\beta)}}\sum_{j=0}^k\binom{k}{j}^2.
\end{eqnarray*}
By Cauchy-Schwarz inequality,
$$[2^k]^2=\left[\sum_{j=0}^k\binom{k}{j}\right]^2\le(k+1)\sum_{j=0}^k\binom{k}{j}^2.$$
It follows that
$$\sum_{j=0}^k\binom{k}{j}^2\ge\frac{4^k}{k+1},\qquad k\ge1.$$
Therefore, there exists a positive constant $c$, independent of $k$, such that
$$I_k\ge\frac c{(k+1)^{2(N-\beta)+1}}$$
for all $k\ge1$. If $k>N$, then by Proposition~\ref{2},
\begin{eqnarray*}
\|f_k\|^2&\sim&|f_k(0)|^2+\int_\D|R^Nf_k(z)|^2\,dA_{2(N-\beta)-1}(z)\\
&=&\frac1{4^k}+\int_\D|R^Nf_k(z)|^2\,dA_{2(N-\beta)-1}(z)\\
&\sim&\frac1{4^k}+(k+1)^{2N}I_{k-N}.
\end{eqnarray*}
Combining this with our earlier estimates for $I_k$, we obtain another positive
constant $c$ such that
$$\|f_k\|^2\ge\frac{c(k+1)^{2N}}{(k+1)^{2(N-\beta)+1}}=c(k+1)^{2\beta-1}$$
for all $k\ge1$.
\end{proof}
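The combinatorial estimate used above, namely the Cauchy-Schwarz bound $\sum_j\binom kj^2\ge4^k/(k+1)$ (the sum equals $\binom{2k}k$ by the well-known Vandermonde identity), is easy to confirm numerically; the check below is illustrative only:

```python
from math import comb

for k in range(1, 60):
    s = sum(comb(k, j) ** 2 for j in range(k + 1))
    assert s == comb(2 * k, k)        # Vandermonde: sum_j C(k,j)^2 = C(2k,k)
    assert (k + 1) * s >= 4 ** k      # the Cauchy-Schwarz bound used above
print("binomial bounds verified for k = 1..59")
```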
\begin{cor}
Let $g_k=f_k/\|f_k\|$ for $k\ge1$. Then $g_k\to0$ weakly in $H^2_\beta$ as
$k\to\infty$.
\label{13}
\end{cor}
\begin{proof}
Each $g_k$ is a unit vector, and it follows from Lemma~\ref{12} above that $g_k(z)
\to0$ pointwise in ${\mathbb D}$ as $k\to\infty$. Thus for every $a\in{\mathbb D}$ we have
$\langle g_k, K_a\rangle\to0$ as $k\to\infty$. Since the set of finite linear combinations
of kernel functions is dense in $H^2_\beta$, we conclude that for every $f\in H^2_\beta$
we have $\langle g_k,f\rangle\to0$ as $k\to\infty$. Consequently, $g_k\to0$ weakly in
$H^2_\beta$ as $k\to\infty$.
\end{proof}
We are now ready to finish the last case in the determination of essential spectrum for
the multiplication operators $M_\varphi$ on $H^2_\beta$.
\begin{lem}
Suppose $n=1$, $\beta>1/2$, and $\varphi\in{\mathcal M}_\beta$. Then
$$\sigma_e(M_\varphi)=\bigcap_{0<r<1}\overline{\varphi({\mathbb D}-r{\mathbb D})}.$$
\label{14}
\end{lem}
\begin{proof}
The proof for
$$\sigma_e(M_\varphi)\subset\bigcap_{0<r<1}\overline{\varphi({\mathbb D}-r{\mathbb D})}$$
is the same as the case $\beta\le1/2$.
On the other hand, if
$$\lambda\in\bigcap_{0<r<1}\overline{\varphi({\mathbb D}-r{\mathbb D})},$$
then there exists a sequence $\{z_k\}$ in ${\mathbb D}$ such that $|z_k|\to1$ and
$\varphi(z_k)\to\lambda$ as $k\to\infty$. Since $\beta>1/2$, it follows from
Proposition~\ref{4} that $\varphi$ belongs to the disk algebra. Going down to a
subsequence if necessary, we may assume that $z_k\to\zeta$ for some $\zeta\in{\mathbb T}$.
Thus $\varphi(\zeta)=\lambda$ for some boundary point $\zeta$.
Let $\psi(z)=\lambda-\varphi(z)$. Then $\psi\in{\mathcal M}_\beta$ and $\psi(\zeta)=0$. We will show
that $M_\psi=\lambda I-M_\varphi$ cannot be Fredholm, or $\lambda\in
\sigma_e(M_\varphi) $.
By Lemma~\ref{9} and Proposition~\ref{2}, it suffices for us to show that the integrals
$$\int_\D|R^N(\psi g_k)(z)|^2\,dA_{2(N-\beta)-1}(z)$$
converge to $0$ as $k\to\infty$, where $N$ is the smallest positive integer greater
than $\beta$ and $\{g_k\}$ is the sequence defined in Corollary~\ref{13}.
Let us write
$$R^N(\psi g_k)=\sum_{j=0}^N\binom{N}{j}R^j\psi R^{N-j}g_k$$
and consider the integrals
\begin{eqnarray*}
I_{k,j}&=&\int_\D|R^j\psi(z)R^{N-j}g_k(z)|^2\,dA_{2(N-\beta)-1}(z)\\
&=&\frac1{\|f_k\|^2}\int_\D|R^j\psi(z)R^{N-j}f_k(z)|^2\,dA_{2(N-\beta)-1}(z)\\
&\lesssim&\frac{(k+1)^{2(N-j)}}{\|f_k\|^2}\int_\D|R^j\psi(z)f_{k-(N-j)}(z)|^2\,
dA_{2(N-\beta)-1}(z),
\end{eqnarray*}
where $k\ge1$ and $0\le j\le N$. Since $N$ is fixed and we are considering the limit
as $k\to\infty$, we may assume that $k$ is much larger than $N$. In this case, the
denominator above can be estimated by Lemma~\ref{12}, namely, we can find a positive
constant $C$ such that
\begin{equation}
I_{k,j}\le C(k+1)^{2(N-\beta)-2j+1}\int_\D|R^j\psi(z) f_{k-(N-j)}(z)|^2\,dA_{2(N-\beta)-1}(z)
\label{eq8}
\end{equation}
for all $k$ and $j$. Our goal is to show that $I_{k,j}\to0$ for all $0\le j\le N$ as
$k\to\infty$.
The case $j=0$ calls for special attention, and this is the case where we critically use
the condition that $\psi(\zeta)=0$. Recall that
$$I_{k,0}=\int_\D|\psi(z)R^Ng_k(z)|^2\,dA_{2(N-\beta)-1}(z).$$
Given $\varepsilon>0$ we break the unit disk into two parts, ${\mathbb D}=D_1\cup D_2$, where
$$D_1=\{z\in{\mathbb D}:|z-\zeta|<\delta\},\qquad D_2=\{z\in{\mathbb D}:|z-\zeta|\ge\delta\},$$
and $\delta$ is chosen so that $|\psi(z)|<\varepsilon$ for $z\in D_1$. Then
\begin{eqnarray*}
I_{k,0}&=&\int_{D_1}|\psi(z) R^Ng_k|^2\,dA_{2(N-\beta)-1}(z)\\
&&\qquad+\int_{D_2}|\psi(z)R^Ng_k(z)|^2\,dA_{2(N-\beta)-1}(z)\\
&<&\varepsilon^2\int_\D|R^Ng_k(z)|^2\,dA_{2(N-\beta)-1}(z)\\
&&\qquad+\|\psi\|_\infty^2\int_{D_2}|R^Ng_k(z)|^2\,dA_{2(N-\beta)-1}(z).
\end{eqnarray*}
Since $\{g_k\}$ is a sequence of unit vectors in $H^2_\beta$, it follows from
Proposition~\ref{2} that there exists a positive constant $C_1$, independent of $k$,
such that
$$\int_\D|R^Ng_k(z)|^2\,dA_{2(N-\beta)-1}(z)\le C_1,\qquad k\ge1.$$
Since
$$R^Ng_k(z)\sim\frac{k^N}{\|f_k\|}f_{k-N}(z),$$
it follows from Lemma~\ref{12} that $R^Ng_k(z)\to0$ uniformly on $D_2$ as $k\to\infty$.
This shows that $I_{k,0}\to0$ as $k\to\infty$.
The case $j\ge2$ (which forces $N\ge2$) is the simplest. In fact, since $0<N-\beta\le1$,
we have
$$2(N-\beta)-2j+1\le-1$$
for $j\ge2$. It follows from (\ref{eq8}) that
$$I_{k,j}\le\frac C{k+1}\int_\D|R^j\psi|^2\,dA_{2(N-\beta)-1}(z)\to0,\quad k\to\infty,$$
because $\psi\in H^2_\beta$ together with Proposition~\ref{2} implies that
\begin{align*}
\int_\D|R^j\psi(z)|^2&\,dA_{2(N-\beta)-1}(z)\\
&\sim\int_\D|R^N\psi(z)|^2(1-|z|^2)^{2(N-j)+2(N-\beta)-1}\,dA(z)\\
&\le\int_\D|R^N\psi(z)|^2\,dA_{2(N-\beta)-1}(z)<\infty.
\end{align*}
If $j=1$ and $0<N-\beta<1/2$, then
$$2(N-\beta)-2j+1<0.$$
It follows from (\ref{eq8}) and the argument above again that $I_{k,1}\to0$ as $k\to\infty$.
If $j=1$ and $N-\beta=1/2$, then
$$I_{k,1}\le C\int_\D|R\psi(z)f_{k+1-N}(z)|^2\,dA_{2(N-\beta)-1}(z)\to0, \quad k\to\infty,$$
by dominated convergence.
Finally, if $j=1$ and $N-\beta>1/2$ (which forces $N\ge2$), then
$$2(N-\beta)-2j+1>0,$$
or $2(N-\beta)-1>0$. In this case,
we recall from the remarks following Theorem 2.1 of \cite{Z} that
$R^N\psi\in A^2_{2(N-\beta)-1}$ implies
$$\lim_{|z|\to1}|R^N\psi(z)|(1-|z|^2)^{N-\beta+\frac12}=0,$$
which is equivalent to
$$\lim_{|z|\to1}|R^{N-1}\psi(z)|(1-|z|^2)^{N-\beta-\frac12}=0.$$
Since $N\ge2$, we have $N-1\ge1$. Thus
$$\lim_{|z|\to1}|R\psi(z)|^2(1-|z|^2)^{2(N-\beta)-1}=0.$$
Recall that $\beta<N\le\beta+1$, so $0<2(N-\beta)\le2$. It follows from (\ref{eq8}) that
$$I_{k,1}\le C(k+1)\int_\D|R\psi(z)f_{k+1-N}(z)|^2\,dA_{2(N-\beta)-1}(z).$$
Given any $\varepsilon>0$, we choose $\delta\in(0,1)$ such that
$$|R\psi(z)|^2(1-|z|^2)^{2(N-\beta)-1}<\varepsilon,\qquad \delta<|z|<1.$$
Then by the change of variables $w=(1+\overline\zeta z)/2$ we have
\begin{eqnarray*}
I(\delta)&:=&(k+1)\int_{\delta<|z|<1}|R\psi(z) f_{k+1-N}(z)|^2\,dA_{2(N-\beta)-1}(z)\\
&\le&\varepsilon(k+1)\int_\D|f_{k+1-N}(z)|^2\,dA(z)\\
&=&\varepsilon(k+1)\int_\D\left|\frac{1+\overline\zeta z}{2}\right|^{2(k+1-N)}\,dA(z)\\
&\le&4\varepsilon(k+1)\int_\D|w|^{2(k+1-N)}\,dA(w)\\
&=&4\pi\varepsilon(k+1)/(k+2-N).
\end{eqnarray*}
On the other hand, it follows from uniform convergence that
$$\lim_{k\to\infty}(k+1)\int_{|z|\le\delta}|R\psi(z)f_{k+1-N}(z)|^2\,dA_{2(N-\beta)-1}(z)
=0.$$
This shows that $I_{k,1}\to0$ as $k\to\infty$. The proof of the lemma is now complete.
\end{proof}
Combining the last few lemmas, we obtain the following result about the essential
spectrum of multiplication operators on $H^2_\beta$.
\begin{thm}
Suppose $\beta$ is real and $\varphi\in{\mathcal M}_\beta$. Then we always have
$$\sigma_e(M_\varphi)=\bigcap_{0<r<1}\overline{\varphi({\mathbb B}_n-r{\mathbb B}_n)}.$$
If $n>1$, then
$$\sigma_e(M_\varphi)=\bigcap_{0<r<1}\overline{\varphi({\mathbb B}_n-r{\mathbb B}_n)}
=\overline{\varphi({\mathbb B}_n)}=\sigma(M_\varphi).$$
\label{15}
\end{thm}
As a consequence of the theorem above and its proof, we obtain the following index
formulas for $M_\varphi$.
\begin{thm}
Suppose $\beta$ is real and $\varphi\in{\mathcal M}_\beta$. Then $M_\varphi: H^2_\beta\to
H^2_\beta$ is Fredholm if and only if there exist $r\in(0,1)$ and $\delta>0$ such that
$|\varphi(z)|\ge\delta$ for all $r\le|z|<1$. When $M_\varphi$ is Fredholm,
${\rm Ind}\,(M_\varphi)=0$ for $n>1$, and for $n=1$, ${\rm Ind}\,(M_\varphi)$ is equal to the
winding number of the mapping $e^{it}\mapsto\varphi(re^{it})$ from the unit circle
into ${\mathbb C}-\{0\}$.
\label{16}
\end{thm}
\begin{proof}
The desired characterization of Fredholm multiplication operators $M_\varphi$ is a
direct consequence of Theorem~\ref{15}. If $n>1$ and $M_\varphi$ is Fredholm, it follows
from the proof of Lemma~\ref{10} that $M_\varphi$ is actually invertible, so
${\rm Ind}\,(M_\varphi)=0$.
If $n=1$ and $M_\varphi$ is Fredholm, it follows from the proof of Lemma~\ref{11} that
$\varphi=\psi p$, where both $\psi$ and $1/\psi$ are multipliers of $H^2_\beta$ and
$p$ is a polynomial. Thus $M_\psi$ is invertible on $H^2_\beta$ and ${\rm Ind}\,(M_\varphi)
={\rm Ind}\,(M_p)$, which is equal to the number of zeros of $p$ inside ${\mathbb D}$, with multiple
zeros counted according to multiplicity. This shows that ${\rm Ind}\,(M_\varphi)$ is equal to
the winding number of $\varphi$ restricted to the circle $|z|=r$.
\end{proof}
If $n>1$ and $\varphi\in{\mathcal M}_\beta$ is continuous up to the boundary, then it is clear that
$$\varphi(\overline{{\mathbb B}_n})=\overline{\varphi({\mathbb B}_n)}=
\bigcap_{0<r<1}\overline{\varphi({\mathbb B}_n-r{\mathbb B}_n)}=\varphi(\partial{\mathbb B}_n).$$
This is certainly a purely high dimensional phenomenon.
\section{Automatic Calibration System}
\label{sec:AutomaticExtrinsicCalibrationSystem}
The optimal shape determined in Sec.~\ref{sec:ShapeSensitivity} is used as the target
for both intrinsic and extrinsic calibration. Target-based LiDAR-camera extrinsic
calibration methods often suffer from manual target extraction and feature
association. Because comparing and validating the intrinsic calibration methods of
Sec.~\ref{sec:ProposedIntrinsicModel}
and~\ref{sec:BaselineIntrinsicCalibrationModels} would require significant data
collection, we decided to create a fully automatic pipeline for both intrinsic and
extrinsic calibration as shown in Fig~\ref{fig:AutomaticPipeline}. We demonstrate the
capability of this pipeline by collecting a significant amount of data allowing us to
check the quality of the calibration of our sensor suite (a LiDAR-camera pair) at different
distances; see Sec.~\ref{sec:ExpExtrinsicCalibration}.
\begin{figure*}[t]
\centering
\includegraphics[width=1.9\columnwidth]{automatic-calibration-system/AutomaticPipeline.png}
\caption{A system diagram for
automatic intrinsic and extrinsic calibration. The top shows the front-end of
the pipeline. Its input is raw camera and LiDAR~ data, which are subsequently
synchronized. The AprilTag and LiDARTag~ packages are used to extract the target
information, which is then examined to remove outliers and ensure that
targets are still properly synchronized. Each run of the front-end saves all
related information into a ROS bagfile, which can then be sent to the
back-end for further processing. The back-end takes in (possibly many) ROS
bagfiles and does the following: (i) refines the image corners; and (ii)
extracts vertices of the LiDAR~ targets. The correspondences are established
between corners and vertices, and an extrinsic transformation is determined via
PnP as in \cite{lepetit2009epnp}. For intrinsic calibration, the resulting vertices of
the LiDAR~ targets are used to extract normal vectors and a point on the
plane. The calibration parameters are determined to minimize the P2P distance
from the plane to the target points provided by the LiDARTag~ package.}
\label{fig:AutomaticPipeline}
\end{figure*}
\subsection{Front-End: Data Pre-processing}
\label{sec:FrontEnd}
Fiducial markers, such as AprilTags~\cite{olson2011apriltag, wang2016apriltag,
krogiusflexible}, are widely used in vision-based perception tasks because they can
be easily identified and automatically extracted from the background.
In \cite{huang2019lidartag}, we introduced a similar concept for LiDAR~ point clouds,
called the LiDARTag. Inspired by AprilTags, LiDARTags consist of a payload and a pattern
that make each marker uniquely distinguishable and identifiable in real time by both a
LiDAR~ and a camera. An open-source package for processing LiDARTags is available
\cite{githubFileLiDARTag}, while both \atagN2 and \atagN3 are available as a Robot
Operating System (ROS) package \cite{githubFileAprilTagROS}.
\begin{remark}
A payload is roughly\footnote{No measurement is required when placing the payload.}
placed at the center of the optimal target, as shown in
Fig.~\ref{fig:CalibrationTargets}. A further step refines the image
corners; see Sec.~\ref{sec:ImageCorners}.
\end{remark}
Each step in the front-end of Fig.~\ref{fig:AutomaticPipeline} is a node in ROS
\cite{quigley2009ros}. As the LiDAR~ and camera data streams arrive at the
frame-synchronization node, they are synchronized by the approximate-time policy.
Each synchronized frame is assigned a frame id, defined by the order of
arrival, and is published at the same frequency. The LiDARTag~ and AprilTag nodes process
the synchronized data and separately publish arrays of detected fiducial markers. To
ensure the two streams remain synchronized, a message is published regardless of
whether a fiducial marker is detected (i.e., an empty message is published if no tag
is detected).
The three types of messages\footnote{Each message of each type is
labeled with a frame id.} (LiDARTags, AprilTags, images) arrive at the tag-pairing node
at different rates because each upstream node has a different working frequency.
The same approximate-time synchronization policy cannot be reused here because the
working frequency of each node fluctuates. Instead, a queue is created for each
type of message. To further guard against unexpected packet drops between nodes in
the pipeline, a master clock groups messages by frame id, as shown in
Fig.~\ref{fig:udpDropping}.
False positives and false negatives are examined after the messages (LiDARTags, AprilTags,
images) with the same frame id are grouped together. A grouped message is discarded if the
number of detected markers in the LiDARTag~ and AprilTag messages differs, or if the
detected ids in the two messages do not match. Empty messages are also dropped at this
node.
The synchronized, examined data (images, AprilTag corners, points on LiDARTags) are saved
as ROS bagfiles for post-processing in MATLAB.
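The frame-grouping and outlier-rejection logic above can be sketched as follows. This is a minimal Python illustration, not the released ROS implementation; the message layout (frame id, type, payload) and the helper names are hypothetical:

```python
from collections import defaultdict

def group_by_frame_id(messages):
    """Group LiDARTag, AprilTag, and image messages that share a frame id.

    `messages` is an iterable of (frame_id, msg_type, payload) tuples, where
    msg_type is one of 'lidartag', 'apriltag', or 'image'.  A master clock
    keyed on the frame id groups the streams even when they arrive at
    different rates."""
    groups = defaultdict(dict)
    for frame_id, msg_type, payload in messages:
        groups[frame_id][msg_type] = payload
    # keep only frames for which all three message types arrived
    return {fid: g for fid, g in groups.items()
            if {'lidartag', 'apriltag', 'image'} <= g.keys()}

def reject_mismatched(group):
    """Accept a grouped frame only if the LiDARTag and AprilTag detections
    agree; empty detections and frames with unequal id sets are discarded
    as false positives/negatives."""
    lidar_ids, april_ids = group['lidartag'], group['apriltag']
    return bool(lidar_ids) and lidar_ids == april_ids
```

Frames that survive both checks are the ones written to the ROS bagfiles for the back-end.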
\subsubsection{Extrinsic Calibration}
For LiDAR-camera extrinsic calibration, corresponding features from camera data and LiDAR~
data must be identified. We use target vertices as our correspondences in this
pipeline. The target's LiDAR~ returns (LiDARTag~ points) are extracted from a full scan
of LiDAR~ point cloud in the front end by the LiDARTag~ package. In the image plane,
the target vertices are typically called corners and are extracted in the front end
using the AprilTag ROS package.
\subsubsection{Intrinsic Calibration}
For intrinsic calibration of a spinning LiDAR, points on a target are split by ring, and
points on the same ring but on different targets are gathered into a collection of
points used to compute the P2P cost \eqref{eq:OptSolution}. A target's normal vector and a
point on the target are computed by the estimated target vertices; see
Sec.~\ref{sec:TargetVertices}.
\subsection{Back-End: LiDAR~ Target Vertices Determination}
\label{sec:TargetVertices}
At this stage, extracted features (AprilTags corners and LiDARTag~ points) have been
properly synchronized and examined automatically for false positives. The next step
is to estimate the LiDARTag~ vertices, which, due to sparsity of the LiDAR~ point
cloud, are not directly observable. We use the method developed in
\cite{huang2019lidartag} to determine the vertices. However, instead of a
diamond-shaped target stated in our previous paper \cite{huang2019lidartag}, we
change the shape to the optimal shape determined in Sec.~\ref{sec:ShapeSensitivity}.
From the vertices, the target normal $\N[t]$ and point on the target, $p_{0,t}$, can
be easily determined for evaluating the P2P cost function \eqref{eq:OptSolution} for
intrinsic calibration, while the vertices in combination with the camera corners are
used to determine the extrinsic calibration via a PnP problem or an IoU problem; see
Sec.~\ref{sec:ExtrisicProblem}.
The following is partially taken from \cite{huang2019improvements} and provided for
clarity. Let $\mathcal{PC} := \{\chi_i\}_{i=1}^{M}$ be a collection of 3D points on a LiDAR~
target, where $M$ is the number of points. Given the target geometry (here, the
optimal shape determined in Sec.~\ref{sec:ShapeSensitivity}), we define an
ideal target with vertices $\{ \bar{X}_i \}_{i=1}^4$ located at the origin of the
LiDAR~ as defined in Fig.~\ref{fig:IdealFrame}. We seek a rigid body transformation
from LiDAR~ to the target, $H_L^T$, that ``best fits'' the ideal target onto the point
cloud.
In practice, it is actually easier to pull the $\mathcal{PC}$ back to the origin of the LiDAR~
through the inverse of the current estimate of transformation $H_T^L:=(H_L^T)^{-1}$
and measure the error there. For $a \ge 0$ and $\lambda \in \mathbb{R}$, an $L_1$-inspired
cost is defined as
\begin{equation}
\label{eq:L1cost}
c(\lambda,a):=\begin{cases}
\min\{ |\lambda-a|, |\lambda + a| \} & \text{if}~|\lambda| >a \\
0 & \text{otherwise}
\end{cases}.
\end{equation}
Let $\{\tilde{\chi}_i\}_{i=1}^M:=H_T^L(\mathcal{PC}):=\{ H_T^L(\chi_i) \}_{i=1}^M$ denote
the pullback of the point cloud by $H_T^L$, and denote a point's $(x,y,z)$-entries by
$(\tilde{x}_i,\tilde{y}_i,\tilde{z}_i)$.
The total fitting error of the point cloud is defined as
\begin{equation}
\label{eq:JKHcost}
C(H_T^L(\mathcal{PC})) :=\sum_{i=1}^{M} c(\tilde{x}_i,\epsilon) +
c(\tilde{y}_i,d/2) + c(\tilde{z}_i,d/2),
\end{equation}
where $\epsilon \ge 0$ is a tuning parameter to account for uncertainty in the depth
measurement of the planar target. For more detail about this method and target
placement for extrinsic calibration, see~\cite{huang2019improvements}.
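The per-point cost \eqref{eq:L1cost} and the total fitting error \eqref{eq:JKHcost} can be sketched in Python as follows. The $d$-by-$d$ cross-section used here is an illustrative simplification standing in for the optimal target's exact geometry:

```python
def c(lam, a):
    """L1-inspired cost of eq. (eq:L1cost): zero on [-a, a], otherwise the
    distance to the nearer endpoint of the interval."""
    if abs(lam) > a:
        return min(abs(lam - a), abs(lam + a))
    return 0.0

def fitting_error(pulled_back_pc, eps, d):
    """Total fitting error of eq. (eq:JKHcost) for the pulled-back point
    cloud, given as a list of (x, y, z) tuples.  eps absorbs uncertainty in
    the depth measurement; the target is modeled as a d-by-d bounding box
    in the (y, z) plane for illustration."""
    return sum(c(x, eps) + c(y, d / 2) + c(z, d / 2)
               for x, y, z in pulled_back_pc)
```

A point inside the box contributes nothing; points outside are penalized linearly, which keeps the cost robust to outliers.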
\subsection{Back-End: Camera Corners Detection, Refinement and Correspondences Association}
\label{sec:ImageCorners}
The AprilTag package detects inner (black) corners of an AprilTag, while correspondences
for extrinsic calibration are target vertices as shown in
Fig.~\ref{fig:CalibrationTargets}. Given the dimensions of an AprilTag, simple linear
interpolation is used to estimate the target vertices in the AprilTag frame. The target
vertices in image coordinates are then computed by projecting them with a
homography matrix.
The corners obtained from the AprilTag package are automatically refined, as shown in
Fig.~\ref{refineImageCorners}, using a process detailed in \cite{huang2019improvements}.
Given the camera corners ($\{ Y_i\}_{i=1}^4$) and LiDAR~ vertices $\{ X_i\}_{i=1}^4$,
we use the vertical and horizontal positions in their own coordinates to sort them
and establish the correspondences.
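The sorting rule can be sketched as follows; `sort_vertices` is a hypothetical helper, shown here in image conventions (vertical coordinate increasing downward). Applying the same rule to the camera corners and to the projected LiDAR~ vertices yields matching correspondences:

```python
def sort_vertices(pts):
    """Order four 2-D vertices as top-left, top-right, bottom-right,
    bottom-left.  `pts` is a list of (u, v) pairs with v increasing
    downward.  Sorting by the vertical coordinate splits top from bottom;
    the horizontal coordinate then orders each pair."""
    by_v = sorted(pts, key=lambda p: p[1])
    top, bottom = by_v[:2], by_v[2:]
    tl, tr = sorted(top, key=lambda p: p[0])
    bl, br = sorted(bottom, key=lambda p: p[0])
    return [tl, tr, br, bl]
```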
\subsection{Optimizations}
\subsubsection{Extrinsic Calibration}
Once ``enough'' independent target vertices are estimated from both the camera data and
the LiDAR~ point cloud, the optimization for the extrinsic transformation can be
formulated as a standard Perspective-n-Point (PnP) problem \cite{lepetit2009epnp}:
minimize the Euclidean distance between corresponding corners. In
\cite{huang2019improvements}, we also proposed maximizing the intersection over union
(IoU) of the corresponding projected polygons. More detail for both methods is provided in
\cite{huang2019improvements}.
Let $X_i$ be the (homogeneous) coordinates of the LiDAR features, $Y_i$ be the
coordinates of the camera features. We denote by $\Pi$ the often-called ``projection map''
\cite[(8)]{huang2019improvements} and let $H_{L}^{C}$ be the (homogeneous representation of)
the LiDAR-frame to camera-frame transformation with rotation matrix $R_{L}^{C}$ and
translation $T_{L}^{C}$.
\textbf{i) PnP Formulation} Given $n$ targets (i.e., $4n$ features), the PnP problem
is
\begin{equation}
\label{eq:PnP}
\left({R_L^C}^*, {T_L^C}^*\right) := \argmin_{R,
T}\sum_{i=1}^{4n}\norm{\Pi\left(X_i; R, T\right)-\pre[C]Y_i}_2^2.
\end{equation}
\textbf{ii) IoU Formulation}
Let $\pre[L]\mathcal{V}:=\{\pre[L]Y_i\}_{i=1}^{4n}$ be the vertices of the estimated target
polygons projected from the LiDAR~ frame to the camera frame as in
\cite[(8)]{huang2019improvements}, and let $\pre[C]\mathcal{V}:=\{\pre[C]Y_i\}_{i=1}^{4n}$ be
the vertices of the corresponding camera images of the targets. The corresponding
areas of the polygons are $\A(\pre[L]\mathcal{V}), \A(\pre[C]\mathcal{V})$ and let $\A(\pre[I]\mathcal{V})$
denote the intersection of the two polygons. The IoU of the two polygons is then
\begin{equation}
\label{eq:IoUCost}
IoU(\pre[L]\mathcal{V},\pre[C]\mathcal{V},\pre[I]\mathcal{V}) = \frac{\A(\pre[I]\mathcal{V})}{\A(\pre[L]\mathcal{V}) +
\A(\pre[C]\mathcal{V}) - \A(\pre[I]\mathcal{V})}.
\end{equation}
The resulting optimization problem is
\begin{align}
\label{eq:IoUOptimization}
\left({R_L^C}^*, {T_L^C}^*\right) &:= \argmax_{R,T}IoU(\pre[L]\mathcal{V},\pre[C]\mathcal{V})\\
&= \argmax_{R,T}IoU(\Pi\left(\{X_i\}_{i=1}^{4n}; R, T\right),\pre[C]\mathcal{V}), \nonumber
\end{align}
where the projection map in \cite[(8)]{huang2019improvements} has been used to make
the dependence on the rigid-body transformation explicit.
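The IoU of \eqref{eq:IoUCost} can be evaluated with standard computational-geometry tools. A self-contained sketch for convex polygons, using Sutherland-Hodgman clipping and the shoelace formula (an illustration, not the released implementation), is:

```python
def shoelace(poly):
    """Signed area of a polygon given as a list of (x, y) vertices."""
    return 0.5 * sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))

def clip(subject, clipper):
    """Sutherland-Hodgman clipping of a convex `subject` polygon by a
    convex, counter-clockwise `clipper` polygon."""
    out = subject
    for (cx1, cy1), (cx2, cy2) in zip(clipper, clipper[1:] + clipper[:1]):
        inp, out = out, []
        if not inp:
            break
        def inside(p):
            # left of (or on) the directed clipper edge
            return (cx2 - cx1) * (p[1] - cy1) - (cy2 - cy1) * (p[0] - cx1) >= 0
        def intersect(p, q):
            # intersection of subject edge p-q with the clipper edge line
            dcx, dcy = cx1 - cx2, cy1 - cy2
            dpx, dpy = p[0] - q[0], p[1] - q[1]
            n1 = cx1 * cy2 - cy1 * cx2
            n2 = p[0] * q[1] - p[1] * q[0]
            den = dcx * dpy - dcy * dpx
            return ((n1 * dpx - n2 * dcx) / den, (n1 * dpy - n2 * dcy) / den)
        for p, q in zip(inp, inp[1:] + inp[:1]):
            if inside(q):
                if not inside(p):
                    out.append(intersect(p, q))
                out.append(q)
            elif inside(p):
                out.append(intersect(p, q))
    return out

def polygon_iou(poly_a, poly_b):
    """IoU of eq. (eq:IoUCost) for two convex CCW polygons."""
    inter = abs(shoelace(clip(poly_a, poly_b)))
    a, b = abs(shoelace(poly_a)), abs(shoelace(poly_b))
    return inter / (a + b - inter)
```

In the calibration setting, `poly_a` would hold the projected LiDAR~ vertices and `poly_b` the camera corners of the same target.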
\subsubsection{Intrinsic Calibration}
With the method in Sec.~\ref{sec:TargetVertices} and \cite{huang2019lidartag}, the
target vertices are automatically co-planar and hence uniquely define a unit normal
vector and a point on the target. The P2P distance is
minimized for each of the three calibration models, $\textbf{BL}_1$, $\textbf{BL}_2$, and $\mathrm{Sim}(3)$,
for a collection of targets, yielding $\alpha^*$ (for $\textbf{BL}_1$ and $\textbf{BL}_2$) and
$\Sigma^*$ for $\mathrm{Sim}(3)$.
The process of intrinsic calibration iterates between estimating the
target vertices and the parameters of the LiDAR~ model until a given stopping
criterion is met\footnote{In simulation, no iteration is required because the target
information is known from ``ground truth''.}. Here, the iteration is stopped when the
maximum change in a target vertex is less than $10^{-5}$.
\begin{algorithmic}
\While{$\delta_m>10^{-5}$}
\State \textbf{Step 1:} Re-estimate the target vertices $\{ X_i\}_{i=1}^4$ after
applying the current intrinsic calibration parameters to the point cloud. This
gives updated values for the target normals and points on the targets.
\State \noindent \textbf{Step 2:} Using the updated target vertices, re-estimate
the intrinsic calibration parameters for the three calibration models. \EndWhile
\end{algorithmic}
\section{Baseline Intrinsic Calibration Models}
\label{sec:BaselineIntrinsicCalibrationModels}
There are two standard calibration models for spinning LiDARs. Both are derived by
postulating the working mechanism of a spinning LiDAR, which is based on spherical
coordinates; consequently, these models are not applicable to sensors that do not
operate in this manner. As shown in
Fig.~\ref{fig:spherical-coordinate}, the spherical coordinates $(\rho, \theta, \phi)$
denote range, elevation, and azimuth, respectively, with
\begin{equation}
\label{eq:spherical2Cartesian}
\begin{bmatrix}
x\\y\\z
\end{bmatrix} = f(\rho, \theta, \phi)
=\begin{bmatrix}
\rho\cos\theta\sin\phi\\
\rho\cos\theta\cos\phi\\
\rho\sin\theta
\end{bmatrix},
\end{equation}
and the inverse function,
\begin{equation}
\label{eq:Cartesian2spherical2}
\begin{bmatrix}
\rho\\\theta\\\phi
\end{bmatrix}
= f^{-1}(x,y,z)
=\begin{bmatrix}
\sqrt{x^2+y^2+z^2}\\
\asin[\frac{z}{\sqrt{x^2+y^2+z^2}}]\\
{\rm atan2}(x,y)
\end{bmatrix}.
\end{equation}
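A direct transcription of \eqref{eq:spherical2Cartesian} and \eqref{eq:Cartesian2spherical2}, with $\asin$ and ${\rm atan2}$ realized by NumPy's \texttt{arcsin} and \texttt{arctan2}:

```python
import numpy as np

def spherical_to_cartesian(rho, theta, phi):
    """Eq. (eq:spherical2Cartesian): (range, elevation, azimuth) -> (x, y, z).
    Note the azimuth convention: x pairs with sin(phi), y with cos(phi)."""
    return np.array([rho * np.cos(theta) * np.sin(phi),
                     rho * np.cos(theta) * np.cos(phi),
                     rho * np.sin(theta)])

def cartesian_to_spherical(x, y, z):
    """Eq. (eq:Cartesian2spherical2): (x, y, z) -> (rho, theta, phi)."""
    rho = np.sqrt(x * x + y * y + z * z)
    return rho, np.arcsin(z / rho), np.arctan2(x, y)
```

The two maps are inverses of each other on the sensor's field of view, which the round-trip check below confirms.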
\subsection{3-Parameter Model}
The most basic model~\cite{pandey2010extrinsic} is shown in
Fig.~\ref{fig:basic-model} and is denoted $\textbf{BL}_1$. It assumes the
LiDAR~ measurements are made in spherical coordinates, $(\rho, \theta,\phi)$.
Corrections to a measurement are given by a collection of additive offsets
\mbox{$\alpha:=(\delta_\rho, \delta_\theta, \delta_\phi).$} Expressing the calibrated
measurement in Cartesian coordinates gives
\begin{equation}
\label{eq:SimpleCalibration}
\Gamma_{\alpha}(\rho, \theta, \phi) :=
\begin{bmatrix}
(\rho+\delta_\rho)\cos(\delta_\theta)\sin(\phi-\delta_\phi)\\
(\rho+\delta_\rho)\cos(\delta_\theta)\cos(\phi-\delta_\phi) \\
(\rho+\delta_\rho)\sin(\delta_\theta)
\end{bmatrix}
\end{equation}
\begin{remark}
\textbf{(a)} The nominal elevation for each ring is taken as zero, i.e.,
$\theta=0.$ \textbf{(b)} While it is not necessarily a drawback, most LiDAR~
interfaces only return a Cartesian representation of a measured point. Hence, the
user must first transform to spherical coordinates, apply the
measurement correction coming from the calibration, and then transform back to
Cartesian coordinates.
\end{remark}
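Following Remark (b), applying the 3-parameter correction to a raw Cartesian return can be sketched as below (`bl1_correct` is a hypothetical helper; per Remark (a) the nominal elevation is zero, so the sketch assumes returns with $\theta = 0$ and only the calibrated offset $\delta_\theta$ sets the elevation of the output):

```python
import numpy as np

def bl1_correct(point, d_rho, d_theta, d_phi):
    """Apply the 3-parameter correction (eq:SimpleCalibration) to a raw
    Cartesian return: convert to spherical coordinates, correct, and
    convert back.  The measured elevation is nominally zero, so only the
    range and azimuth of the input are used."""
    x, y, z = point
    rho = np.sqrt(x * x + y * y + z * z)
    phi = np.arctan2(x, y)
    return np.array([(rho + d_rho) * np.cos(d_theta) * np.sin(phi - d_phi),
                     (rho + d_rho) * np.cos(d_theta) * np.cos(phi - d_phi),
                     (rho + d_rho) * np.sin(d_theta)])
```

With all offsets zero, a return in the $\theta=0$ plane is reproduced exactly; a nonzero $\delta_\rho$ rescales the point along its ray.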
\subsection{6-Parameter Model}
Fig.~\ref{fig:complex-model} shows a more complex model \cite{glennie2010static,
nouira2016point} for a spinning LiDAR, denoted $\textbf{BL}_2$. This model also works in
spherical coordinates. In addition to the three offsets above, it includes $s$, a
scale factor of $\rho$, $h$, a horizontal offset of the origin, and $v$, a vertical
offset for the origin. The correction model becomes
\begin{equation}
\label{eq:ComplexCalibration}
\bar{\Gamma}_{\alpha}:=
\left[
\begin{array}{l}
(s\rho+\delta_\rho)\cos\delta_\theta\sin(\phi-\delta_\phi) - h\cos(\phi-\delta_\phi)\\
(s\rho+\delta_\rho)\cos\delta_\theta\cos(\phi-\delta_\phi) + h\sin(\phi-\delta_\phi)\\
(s\rho+\delta_\rho)\sin\delta_\theta + v\\
\end{array}\right].
\end{equation}
and $\alpha:=(\delta_\rho, \delta_\theta,
\delta_\phi, s, h, v).$
\subsection{Can these be expressed as transformations by similar matrices?}
The short answer is no. This section decomposes the 6-parameter model
\eqref{eq:ComplexCalibration}, viewed in Cartesian coordinates, into cascaded
transformations. The model is fundamentally nonlinear and can be expressed as
$\R[1](\R[2]\t[1] + \t[2])$,
where $\R[1], \R[2], \t[1] \text{ and } \t[2]$ depend on the measured point,
$$\x[][\mathsf{T}] = [x,y,z] \xrightarrow[]{f^{-1}} (\rho, \theta, \phi),$$
\begin{align}
\label{eq:ExpandComplexModelMatrixForm}
\R[1]&=
\begin{bmatrix}
\sin(\phi-\delta_\phi) & -\cos(\phi-\delta_\phi) & 0\\
\cos(\phi-\delta_\phi) & \sin(\phi-\delta_\phi) & 0\\
0 &0 &1
\end{bmatrix},\\ \nonumber
\R[2]\t[1]+\t[2] &=
\begin{bmatrix}
\cos(\delta_\theta) & 0 & -\sin(\delta_\theta)\\
0 &1 &0\\
\sin(\delta_\theta) & 0 & \cos(\delta_\theta)
\end{bmatrix}
\begin{bmatrix}
s\rho+\delta_\rho\\
0
\\0
\end{bmatrix} +
\begin{bmatrix}
0\\h\\
v
\end{bmatrix}.
\end{align}
The 6-parameter model first applies a corrected elevation angle ($\R[2]$) to a range
value that has been offset and scaled ($\t[1]$). Next, the $(y,z)$ position of the
origin is offset ($\t[2]$), and lastly, a corrected rotation about the $z$-axis is
applied ($\R[1]$). Because the resulting cascaded transformation cannot be written as a single
$\mathrm{Sim}(3)$ transformation, the proposed model is not the same as either the
3-parameter model or the 6-parameter model.
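The equivalence between \eqref{eq:ComplexCalibration} and the cascaded form \eqref{eq:ExpandComplexModelMatrixForm} can be checked numerically; a sketch (helper names are illustrative):

```python
import numpy as np

def bl2_direct(rho, phi, a):
    """6-parameter model (eq:ComplexCalibration);
    a = (d_rho, d_theta, d_phi, s, h, v)."""
    d_rho, d_theta, d_phi, s, h, v = a
    r = s * rho + d_rho
    return np.array(
        [r * np.cos(d_theta) * np.sin(phi - d_phi) - h * np.cos(phi - d_phi),
         r * np.cos(d_theta) * np.cos(phi - d_phi) + h * np.sin(phi - d_phi),
         r * np.sin(d_theta) + v])

def bl2_cascaded(rho, phi, a):
    """Same model via the cascaded form R1 (R2 t1 + t2) of
    eq. (eq:ExpandComplexModelMatrixForm)."""
    d_rho, d_theta, d_phi, s, h, v = a
    cp, sp = np.cos(phi - d_phi), np.sin(phi - d_phi)
    R1 = np.array([[sp, -cp, 0.0], [cp, sp, 0.0], [0.0, 0.0, 1.0]])
    R2 = np.array([[np.cos(d_theta), 0.0, -np.sin(d_theta)],
                   [0.0, 1.0, 0.0],
                   [np.sin(d_theta), 0.0, np.cos(d_theta)]])
    t1 = np.array([s * rho + d_rho, 0.0, 0.0])
    t2 = np.array([0.0, h, v])
    return R1 @ (R2 @ t1 + t2)
```

The two functions agree for any parameter set, confirming the decomposition; the key obstruction to a $\mathrm{Sim}(3)$ form is that $\R[1]$ and $\t[1]$ depend on the measured point itself.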
\subsection{Cost function for baseline models}
\label{sec:BaselineCostFunctions}
For a given collection of points $\mathcal{PC}_t = \{\x[i]\}_{i=1}^{M_t}$, possibly from
multiple targets $t\in\{1, \ldots, T\}$, we seek a set of calibration parameters
$\alpha$ that solve
\begin{equation}
\label{eq:OptSolutionBaseline}
\min_{\alpha}
\sum_{t=1}^T\sum_{i=1}^{M_t}
| \n[t][\mathsf{T}] \left(F(\x[i], \alpha) - \p[0,t] \right) |,
\end{equation}
where
\begin{enumerate}
\item for baseline model $\textbf{BL}_1$~\cite{pandey2010extrinsic} in
\eqref{eq:SimpleCalibration}, $\alpha =(\delta_\rho, \delta_\theta, \delta_\phi)$ and
$$F({\x}, \alpha):= \Gamma_{\alpha} \circ f^{-1}({\x});$$
\item for baseline model $\textbf{BL}_2$~\cite{glennie2010static, nouira2016point} in
\eqref{eq:ComplexCalibration}, $\alpha =(\delta_\rho, \delta_\theta, \delta_\phi, s, h, v)$
and
$$F({\x}, \alpha):= \bar{\Gamma}_{\alpha} \circ f^{-1}({\x}).$$
\end{enumerate}
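Evaluating the objective of \eqref{eq:OptSolutionBaseline} for a candidate calibration map can be sketched as follows; the data layout (a list of per-target triples) is a hypothetical illustration:

```python
import numpy as np

def p2p_cost(targets, F, alpha):
    """Point-to-plane cost of eq. (eq:OptSolutionBaseline).

    `targets` is a list of (points, normal, p0) triples: an M_t x 3 array of
    raw LiDAR points on target t, the target's unit normal n_t, and a point
    p_{0,t} on the target plane.  `F` is the calibration map, e.g.
    Gamma_alpha o f^{-1} for BL1."""
    cost = 0.0
    for points, normal, p0 in targets:
        for x in points:
            cost += abs(np.dot(normal, F(x, alpha) - p0))
    return cost
```

An optimizer then searches over $\alpha$ for the parameters that drive this cost toward zero, i.e., that push the calibrated points onto their target planes.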
\begin{remark}
For the 3-parameter model $\textbf{BL}_1$, a single planar target is sufficient to
uniquely determine a set of parameters $\alpha =(\delta_\rho, \delta_\theta, \delta_\phi)$. For the
6-parameter model $\textbf{BL}_2$, non-uniqueness can occur as outlined in
Proposition~\ref{thm:HopefullyTrue} and Fig.~\ref{fig:PropositionIllustration}.
\end{remark}
\section{Conclusion}
\label{sec:OptimalShapeConclusion}
We presented the concept of optimizing target shape to enhance pose estimation for LiDAR~ point clouds.
We formulated the problem in terms of choosing a target shape that induces large gradients at edge points under translation and
rotation so as to mitigate the effects of quantization uncertainty associated with point cloud
sparseness. For additional robustness, the cost function or score for a candidate target was defined to be the minimum score under a set of relative rotations of the edge points; this had the effect of breaking symmetry in the candidate target, which also removes pose ambiguity.
For a given target, we
used the target's geometry to jointly estimate the target's
pose and its vertices. The estimation problem was formulated so
that an existing semi-definite programming global solver could be modified to
globally and efficiently compute the pose and vertices of the target. A LiDAR~
simulator generated synthetic ground truth of the target pose and vertices. We validated that the combination of the
optimal shape with the global solver achieved centimeter error in vertex estimation,
centimeter error in translation, and a few degrees of error in rotation
when a partially illuminated target was placed 30 meters from the LiDAR.
In experiments, when compared to
ground truth data collected by a motion capture system with 33 cameras, we achieved results similar to those of the simulations.
In the future, we shall establish a system to automatically detect the shape in both
images and LiDAR~ point clouds. If a dataset has been collected and labeled,
automatic detection using deep-learning architectures is also an exciting future
direction. Currently, the proposed algorithm assumes the point cloud has been motion
compensated; how to adopt motion distortion into the algorithm is another direction
for future work. Applying it as a fiducial marker system or as an automatic
calibration system also offers another interesting area for further research. Furthermore, applying the proposed algorithm to 3D target shape
fitting and generating shapes with more sides provide interesting research
directions.
\section{Experimental Results}
\label{sec:OptimalShapeExperiment}
We now present experimental evaluations of the pose and vertex estimation of the
optimal shape. All the experiments are conducted with a \textit{32-Beam Velodyne ULTRA Puck LiDAR~} and an Intel
RealSense camera rigidly attached to the torso of a Cassie-series bipedal robot, as shown in Fig.~\ref{fig:torso}. We use the Robot Operating System
(ROS)~\cite{ros} to communicate and synchronize between the sensors. Datasets are
collected in the atrium of the Ford Robotics Building at the University of Michigan,
and a spacious outdoor facility, M-Air~\cite{MAir}, equipped with a motion capture
system.
\begin{figure}[t]%
\centering
\includegraphics[trim=0 10 0 0,clip,width=0.8\columnwidth]{torso2.png}%
\caption[]{
The sensor suite consists of a LiDAR, a camera and several motion capture markers.
}%
\label{fig:torso}%
\end{figure}
\subsection{Quantitative Experimental Results in M-Air}
The Qualisys motion capture system in M-Air is used as a proxy for ground truth
poses and vertices. The setup consists of 33 motion capture cameras with passive markers
attached to the target, the LiDAR~ and the camera, as shown in Fig.~\ref{fig:torso} and Fig.~\ref{fig:MAirSetup}. Datasets
are collected at various distances and angles. Each of the datasets contains images
(20 Hz) and scans of point clouds (10 Hz). Similar to the simulation environment, the
optimal-shape target is placed at distances from 2 to 16 meters (the maximum possible in M-Air) in 2-meter increments. At each distance, data is collected with a
target face-on to the LiDAR~ and another dataset with the target roughly rotated by
the Euler angles (roll = $20^\circ$, pitch = $30^\circ$, yaw = $30^\circ$) as a
challenging case. The results are shown in Table~\ref{tab:ExpResults}. As expected,
the results are slightly worse (approximately one degree) than the simulator's due to the white noise of the
LiDAR~ and many missing returns on the targets, as shown in Fig~\ref{fig:EXPResults}.
\begin{figure}[t]%
\centering
\includegraphics[trim=0 0 0 0,clip,width=1\columnwidth]{ExpSetup2.png}%
\caption[]{
The experimental setup. The left image shows passive markers attached to the four corners of the optimized target shape; the right image shows a LiDAR~ scan in M-Air.
}%
\label{fig:MAirSetup}%
\end{figure}
\begin{remark}
A Velodyne LiDAR~
return consists of the point's Cartesian coordinates, intensity, and ring number. For each ring, the first and the last
point are the edge points of the ring. Since we define a template at the LiDAR~
origin, we first center the target points by subtracting their centroid. Each
centered edge point is associated with the closest edge line. Given the current
association, we estimate the pose and then redo the edge-point-to-edge-line
association; the optimization thus alternates between the two steps. It is
terminated when $\|\mathrm{Log}(H_{k-1}^{-1}H_k)\|$ is smaller than $10^{-5}$,
where $\mathrm{Log}(\cdot)$ is the logarithm map.
\end{remark}
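The stopping test can be sketched as follows. We assume the update is measured on the relative transform between consecutive pose iterates, and use the rotation angle plus the translation norm as a simple surrogate for the norm of the logarithm map (the helper names and the weighting are illustrative):

```python
import numpy as np

def rotation_angle(R):
    """Rotation angle of R in SO(3), i.e. the norm of its logarithm,
    via acos((trace(R) - 1) / 2); clipping guards against round-off."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def converged(H_prev, H_curr, tol=1e-5, rot_weight=1.0):
    """Stop the alternating pose estimation when the relative update
    H_prev^{-1} H_curr (4x4 homogeneous matrices) is close to identity."""
    H_rel = np.linalg.inv(H_prev) @ H_curr
    dtheta = rotation_angle(H_rel[:3, :3])
    dt = np.linalg.norm(H_rel[:3, 3])
    return rot_weight * dtheta + dt < tol
```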
\subsection{Qualitative Experimental Results and Target Partial Illumination}
For distances beyond 16 meters (the distance limit in M-Air), we present qualitative results from the atrium in Fig.~\ref{fig:EXPResults} to support the simulation-based analysis. The blue dots are LiDAR~ measurements, and the red
frame is the fitting result. Figure~\ref{fig:OccludedEXPResults} illustrates a partially illuminated target (the green dots are assumed missing and only the blue
dots are used for pose estimation); the resulting fit is again shown as a red frame.
\input{table-sim-short}
\input{table-exp}
\begin{figure*}[!t]%
\centering
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=0.7\columnwidth, trim={0 0 0 0},clip]{EXP-20m-front.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=0.7\columnwidth, trim={0 0 0 0},clip]{EXP-24m-front.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=0.7\columnwidth, trim={0 0 0 0},clip]{EXP-28m-front.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=0.7\columnwidth, trim={0 0 0 0},clip]{EXP-32m-front.png}
\end{subfigure}
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=1.1\columnwidth, trim={0 0 0 0},clip]{EXP-20m-side.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=1.1\columnwidth, trim={0 0 0 0},clip]{EXP-24m-side.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=1.1\columnwidth, trim={0 0 0 0},clip]{EXP-28m-side.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=1.1\columnwidth, trim={0 0 0 0},clip]{EXP-32m-side.png}
\end{subfigure}%
\caption[]{Fitting results of the optimal shape at various distances (20, 24, 28,
32 meters) in the atrium of the Ford Robotics Building at the University of
Michigan. The blue dots are the LiDAR~ returns on the target and the red
frame are the fitting results. The top and bottom show the front view and a
side view of the results, respectively.}
\label{fig:EXPResults}%
\end{figure*}
\begin{figure*}[!t]%
\centering
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=0.7\columnwidth, trim={0 0 0 0},clip]{OccludedEXP-4m-front.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=0.7\columnwidth, trim={0 0 0 0},clip]{OccludedEXP-10m-front.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=0.7\columnwidth, trim={0 0 0 0},clip]{OccludedEXP-22m-front.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=0.7\columnwidth, trim={0 0 0 0},clip]{OccludedEXP-30m-front.png}
\end{subfigure}
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=1.1\columnwidth, trim={0 0 0 0},clip]{OccludedEXP-4m-side.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=1.1\columnwidth, trim={0 0 0 0},clip]{OccludedEXP-10m-side.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=1.1\columnwidth, trim={0 0 0 0},clip]{OccludedEXP-22m-side.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=1.1\columnwidth, trim={0 0 0 0},clip]{OccludedEXP-30m-side.png}
\end{subfigure}%
\caption[]{Fitting results of the partially illuminated target at various distances (4, 10, 22,
30 meters) in the atrium of the Ford Robotics Building at the University of
Michigan. The selected distances are different from Fig.~\ref{fig:EXPResults}
to show more results. The red frames are the fitting results. The blue dots
are the LiDAR~ returns on the targets while the green dots are considered
missing. The top and bottom show the front view and a side view of the
fitting results, respectively.}
\label{fig:OccludedEXPResults}%
\end{figure*}
\section{Extrinsic LiDAR-Camera Calibration}
\label{sec:ExtrinsicCalibration}
In our previous work \cite{huang2019improvements}, we assumed each target is planar,
square, and rotated in the frame of the LiDAR~ by roughly 45$^\circ$ to form
a diamond. We also noted that placing the targets so that the rings of the LiDAR~
run parallel to the targets' edges leads to ambiguity in the vertical position due to the
spacing of the rings, as indicated in \cite{liao2018extrinsic, zhou2018automatic}.
By using the optimal shape of Sec.~\ref{sec:ShapeSensitivity}, however, the target can
simply be placed as is; none of the above requirements is needed. In this section,
we assume that standard methods have been used to automatically isolate the target's
point cloud \cite{huang2019lidartag}; see Sec.~\ref{sec:FrontEnd} and
Fig.~\ref{fig:AutomaticPipeline}.
In \cite{huang2019improvements}, the points on a LiDAR~ target were isolated from
a full scan of the point cloud, the corresponding camera corners were given by
manually clicking on an image, and the work concentrated on accurately estimating
the vertices of a LiDAR~ target and the extrinsic calibration. The only manual
process in that software release \cite{huang2019improvements}, determining the
image corners, was made automatic in \bh{cite:AutomaticCalibration}.
\section{Introduction}
\label{sec:OptimalIntro}
Targets have been widely employed as fiducial markers \cite{huang2021lidartag,
huang2020lidartaglonger, olson2011apriltag, wagner2003artoolkit,fiala2005artag,
degol2017chromatag, atcheson2010caltag, fiala2005comparing, wang2020lftag} and for
target-based sensor calibration \cite{huang2020intinsic, huang2020improvements,
liao2018extrinsic, zhou2018automatic, gong20133d, dhall2017lidar,
verma2019automatic, jiao2019novel, kim2019extrinsic, guindel2017automatic,
mishra2020extrinsic, xue2019automatic}. Fiducial markers
(artificial landmarks or targets) help robots estimate their pose by estimating
the target pose and are applied to simultaneous localization and mapping (SLAM)
systems for robot state estimation and loop closures. Additionally, they can
facilitate human-robot interaction, allowing humans to give commands to a robot by
showing an appropriate marker. Extrinsic target-based calibration between sensors
(cameras, Light Detection and Ranging (LiDAR) sensors, Inertial Measurement Units
(IMUs), etc.) is crucial for modern autonomous navigation~\cite{huang2021efficient}. Particularly in
target-based LiDAR-camera calibration, one seeks to estimate a set of corresponding
features of a target (e.g., edge lines, normal vectors, vertices, or plane
equations) in the LiDAR's point cloud and the camera's image plane.
\begin{figure}[!t]%
\centering
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{FirstImage.png}
\end{subfigure}\vspace{15pt}
\begin{subfigure}{0.55\columnwidth}
\centering
\includegraphics[height=0.8\columnwidth, trim={0 0 0 0},clip]{FirstimgOccludedEXP-30m-front.png}
\end{subfigure}%
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[height=0.95\columnwidth, trim={0 0 0 0},clip]{FirstimgOccludedEXP-30m-side.png}
\end{subfigure}
\caption[]{Illustration of the vertex and pose estimation using the proposed optimal
target shape placed 30 meters away from a LiDAR~
in the atrium of the Ford Robotics Building at the University of Michigan.
The red frame is the proposed optimal shape that induces large gradients at
edge points under translation and rotation. The pose and vertices of the target
are jointly estimated by a global solver that uses known target geometry. The bottom two figures
show the front view and a side view of the fitting results, respectively.
The blue and the green dots are the LiDAR~ returns on the target. Only the
blue dots are used for pose estimation; to demonstrate the robustness of the approach, the green dots are treated as missing. If one were to apply \textit{RANSAC} --- a commonly used method --- to regress the target boundaries and subsequently estimate the vertices by line
intersections, the method would fail due to the sparsity of inliers.}
\label{fig:OptimalFirstImg}%
\vspace{-4mm}
\end{figure}
The targets applied in these critical tasks are typically symmetric (square, diamond,
rectangle, or circle). A single symmetric target, such as a square\cite{huang2021lidartag,
olson2011apriltag}, leads to an ambiguous pose. This can be solved by adding an observable pattern to the target or by ensuring that several asymmetrically-placed symmetric targets can be observed in a single scene. Furthermore,
estimating the pose or features of a target of injudicious design suffers from the
quantization uncertainty of a sensor, especially for LiDAR~ sensors. A high-end
LiDAR, such as \textit{32-Beam Velodyne ULTRA Puck LiDAR}, still has roughly six centimeters of quantization error
at 10 meters, and 18 centimeters at 30 meters. The quantization uncertainty in the
LiDAR~ point cloud leads to rotation errors greater than 15 degrees for
targets farther away than 15 meters.
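The quoted quantization errors follow from simple arc-length arithmetic. A minimal sketch (assuming a horizontal angular resolution of roughly $0.33^\circ$, an assumption consistent with the numbers above rather than a quoted specification):

```python
import math

def azimuth_quantization_error(distance_m, azimuth_res_deg=0.33):
    """Worst-case arc-length gap (meters) between adjacent returns on the
    same ring at a given range, for a given angular resolution."""
    return distance_m * math.radians(azimuth_res_deg)

# The gap grows linearly with range: ~6 cm at 10 m, ~17 cm at 30 m.
print(round(azimuth_quantization_error(10), 3))  # ~0.058
print(round(azimuth_quantization_error(30), 3))  # ~0.173
```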
In this paper, we propose the concept of optimizing target shape to ameliorate
problems caused by quantization uncertainty and sparsity of the LiDAR~ image of a
target. Specifically, we propose that a ``good target shape'' is one that possesses
large gradients at edge points when the target undergoes rotations or translations.
Moreover, we present a means that exploits target geometry to extract target vertices
while estimating pose. The pose estimation problem is formulated so that an existing
Semidefinite programming (SDP) global solver can be modified to globally and
efficiently compute the target's pose. Figure~\ref{fig:OptimalFirstImg} shows the
obtained pose estimation of a partially illuminated target placed 30 meters away and
having only nine returns (blue dots) from a \textit{32-Beam Velodyne ULTRA Puck LiDAR}, and three LiDAR~ rings on the
target after the green dots are considered missing.
\vspace{-2mm}
\subsection{Contributions}
In particular, this work presents the following contributions:
\begin{enumerate}
\item We propose the concept of target shape optimization for estimating pose and vertices from LiDAR~
point clouds. Specifically, we design a target so that its edge points induced by LiDAR~ rings are ``highly'' sensitive to translation and rotation. This attenuates the effects of quantization uncertainty and sparsity of a target's
LiDAR~ image. The resulting shape is asymmetric to
remove pose ambiguity.
\item We present a means that uses target shape to jointly estimate target vertices
and pose. Because the cost function of the proposed method can be formulated
as an SDP, the target's pose and vertices can be globally estimated with an open-source solver \cite{briales2017convex}.
\item We utilize an open-source LiDAR~ simulator \cite{githubFileLiDARSimulator} to provide ground truth of the poses and
vertices. In the simulation, we validate that the optimal shape with the
global solver achieves centimeter error in translation and a few degrees of
error in rotation when the targets are at a distance of 30 meters and
partially illuminated. In addition, we conduct experimental evaluations where
the ground truth data are provided by a motion capture system, and achieve results similar to the simulation.
\item We open-source all the related software for this work, including the
generation of the optimal shape, our means for pose estimation, and the simulated/experimental datasets; see
\cite{githubFileOptimalShapeGeneration,
githubFilePoseEstimationForOptimalShape}.
\end{enumerate}
\section{3D LiDAR~ Intrinsic Calibration}
\section{Proposed LiDAR~ Intrinsic Calibration Model}
\label{sec:ProposedIntrinsicModel}
The proposed calibration parameters are modeled as an element of the similarity Lie
group $\mathrm{Sim}(3).$ An element of the group is an isometry composed with an additional
isotropic scaling, i.e., a rigid-body transformation with scaling. We note that this
is the most general form of transformation in 3D space that preserves shape
(i.e., preserving ratios of lengths and angles); therefore, it is suitable for
intrinsic calibration of LiDARs. The proposed calibration model works for both
spinning LiDARs~ and solid-state LiDARs. An element of the group is given by
\begin{equation}
\label{eq:sim3tf}
\H = \begin{bmatrix}
s \R & \t \\
\zeros & 1
\end{bmatrix} \in \mathrm{Sim}(3),
\end{equation}
where $\R \in \mathrm{SO}(3)$ is the 3D rotation matrix, $\t \in \mathbb{R}^3$ is the
translation vector, and $s \in \mathbb{R}^+$ is the scale parameter. In particular,
an element has seven degrees of freedom and the action of $\mathrm{Sim}(3)$ on $\mathbb{R}^3$
is $\H \cdot \x = s\R \x + \t$, where $\x \in \mathbb{R}^3$.
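As a minimal numerical sketch of \eqref{eq:sim3tf} and the group action (function names hypothetical, not from the released software):

```python
import numpy as np

def sim3(R, t, s):
    """Assemble a Sim(3) element as a 4x4 homogeneous matrix [sR t; 0 1]."""
    H = np.eye(4)
    H[:3, :3] = s * np.asarray(R, dtype=float)
    H[:3, 3] = t
    return H

def act(H, x):
    """Action of Sim(3) on R^3: H . x = s R x + t."""
    return H[:3, :3] @ np.asarray(x, dtype=float) + H[:3, 3]

# Scale by 2, shift 1 m along x: (1,1,1) -> (3,2,2).
H = sim3(np.eye(3), t=[1.0, 0.0, 0.0], s=2.0)
print(act(H, [1.0, 1.0, 1.0]))  # [3. 2. 2.]
```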
Let $\n[t]$ be a unit normal vector of a planar target and let $\p[0, t]$ be a fixed
point on the target. Let $\mathcal{PC} = \{ \x[i] | i=1,\dots,M~\text{and}~\x[i] \in
\mathbb{R}^3 \}$ denote a collection of LiDAR~ returns on the target, where $M$ is the
number of points in the collection. Wherever convenient, we abuse notation by passing
from Cartesian coordinates to homogeneous coordinates without noting the
distinction. Given the collection of LiDAR~ returns $\mathcal{PC}$, the cost is defined by the
point-to-plane (P2P) distance:
\begin{equation}
J := \sum_{i=1}^M
\lvert \n[t][\mathsf{T}] \left(\x[i] - \p[0,t] \right) \rvert,
\end{equation}
where $\n[t][\mathsf{T}] \left(\x[i] - \p[0,t] \right)$ is the orthogonal projection
of the measurement onto the normal vector and $\lvert\,\cdot\,\rvert$ is the absolute
value. Then the calibration problem can be formulated as follows
\begin{problem}
\label{prob:intrinsic_calibration}
For a given collection of points $\mathcal{PC}_t = \{\x[i]\}_{i=1}^{M_t}$, possibly
from multiple targets $t\in\{1, \ldots, T\}$, we seek a similarity transformation
$\H[][\star]$ that solves
\begin{equation}
\label{eq:OptSolution}
\min_{\H \in \mathrm{Sim}(3)}
\sum_{t=1}^T\sum_{i=1}^{M_t}
\lvert \n[t][\mathsf{T}] \left( \H \cdot \x[i] - \p[0,t] \right) \rvert.
\end{equation}
\end{problem}
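A direct transcription of the cost in Problem~\ref{prob:intrinsic_calibration} for a single target might look as follows (a sketch; in practice $\n[t]$ and $\p[0,t]$ are themselves estimated from data):

```python
import numpy as np

def p2p_cost(H, points, normal, p0):
    """Sum of point-to-plane distances |n^T (H.x - p0)| over LiDAR returns,
    with H a 4x4 Sim(3) matrix [sR t; 0 1] and points given row-wise."""
    sR, t = H[:3, :3], H[:3, 3]
    transformed = points @ sR.T + t          # row-wise s R x + t
    residuals = (transformed - p0) @ normal  # orthogonal projections onto n
    return np.abs(residuals).sum()

# Points already on the plane z = 0 incur zero cost under the identity.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.0], [-3.0, 0.5, 0.0]])
print(p2p_cost(np.eye(4), pts, normal=np.array([0.0, 0.0, 1.0]),
               p0=np.zeros(3)))  # 0.0
```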
\begin{remark}
In practice each target's normal vector and a point on the target must be estimated
from data; see Sec.~\bh{sec:L1-optimization}.
\end{remark}
\subsection{Parsing points on a target to collections of points}
\label{sec:CollectionOfPoints}
The specific method for parsing a target’s point cloud and the number of calibration
transformations depend on the nature of the sensor. For a spinning LiDAR, the points $\mathcal{PC}$
on a target are typically separated by beam number, so each beam has its own
optimization problem~\eqref{eq:OptSolution}. For example, for a $K$-beam spinning LiDAR,
the set of calibration parameters is $\{\H[k] | k=1,\dots,K~\text{and}~\H[k] \in
\mathrm{Sim}(3)\}$.
For an OPA-based solid-state LiDAR, we form an $m\times n$ grid over the planar target and
then parse the point cloud based on each point's projection onto the target along the
normal of each grid. This way, the set of calibration parameters becomes
$\{\H[k] | k=1,\dots,m\times n~\text{and}~\H[k] \in \mathrm{Sim}(3)\}$.
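In either case, the parsing step amounts to bucketing returns by a key (beam number for a spinning LiDAR, grid-cell index for a solid-state one), with each bucket receiving its own copy of Problem~\ref{prob:intrinsic_calibration}; a sketch (field names hypothetical):

```python
from collections import defaultdict

def group_by_ring(points, ring_ids):
    """Bucket a target's returns by beam (ring) index; each bucket is then
    calibrated with its own Sim(3) transform H_k."""
    buckets = defaultdict(list)
    for point, ring in zip(points, ring_ids):
        buckets[ring].append(point)
    return dict(buckets)

pc = [(0.1, 2.0, 0.3), (0.2, 2.1, 0.3), (0.1, 2.0, 0.5)]
print(group_by_ring(pc, ring_ids=[7, 7, 8]))
# {7: [(0.1, 2.0, 0.3), (0.2, 2.1, 0.3)], 8: [(0.1, 2.0, 0.5)]}
```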
\subsection{Over-parameterization Analysis for a Spinning LiDAR}
\label{sec:TargetPlacementGuideline}
The proposed calibration model is over-parameterized (i.e., a unique $\H[][*]$ does
not exist) when fewer than four targets are used to calibrate a spinning LiDAR. In
particular, for a single planar target, the rotation about the target normal $\n[t]$,
two components of the translation vector $\t$ (translation in the plane of the
target) and the scale $s$ are unconstrained as shown in Fig.~\bh{fig:one-target}. For
two planar targets, the translation along the line orthogonal to the targets' normal
vectors and the scale are unconstrained as shown in Fig.~\bh{fig:two-targets}.
For $\mathcal{S}\subseteq\mathbb{R}^3$, let $[\mathcal{S}] \text{ and } [\mathcal{S}]^\perp$ denote its span and
its orthogonal complement. Let $\{\e[1], \e[2], \e[3]\}$ be the canonical basis for
$\mathbb{R}^3$. We also denote by $\P\subset\mathbb{R}^3$ the set traced out by a single ring of
a perfectly calibrated spinning LiDAR. Without loss of generality, we assume
$\P=[\e[3]]^\perp$.
Consider four targets with unit normal vectors $\n[i]$ and let $\p[0,i]$ be a point
on the $i$-th target. For $1\le i \le 4$, the plane defined by the $i$-th target is
$\V[i]:= \p[0,i] + [\n[i]]^\perp$. If the following two assumptions are satisfied,
Theorem~\ref{thm:uniqueness} states that the answer to
Problem~\ref{prob:intrinsic_calibration} is unique; the proof is given in
Appendix~\ref{sec:Uniqueness}.
\begin{assumption}[Assumption N]
\label{asm:assumtionN}
All sets of three distinct vectors from $\{ \n[1], \n[2], \n[3], \n[4], \e[3]\}$
are linearly independent.
\end{assumption}
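Assumption~\ref{asm:assumtionN} is easy to verify numerically for a planned target placement by checking the rank of every triple; a sketch with illustrative (not measured) normals:

```python
import itertools
import numpy as np

def assumption_n_holds(normals, e3=(0.0, 0.0, 1.0)):
    """Check that every triple drawn from the four target normals plus e3
    is linearly independent (i.e., has rank 3). There are C(5,3)=10 triples."""
    vecs = [np.asarray(n, dtype=float) for n in normals] + [np.asarray(e3)]
    return all(np.linalg.matrix_rank(np.stack(triple)) == 3
               for triple in itertools.combinations(vecs, 3))

print(assumption_n_holds([(1, 0, 0), (0, 1, 0), (1, 1, 1), (1, 2, 3)]))  # True
# Fails when one normal is a combination of two others:
print(assumption_n_holds([(1, 0, 0), (0, 1, 0), (1, 1, 0), (1, 2, 3)]))  # False
```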
\begin{assumption}[Assumption B]
\label{asm:assumtionB}
Given the six intersection points, any two of them can serve as a basis for the ring
plane; there are $\binom{6}{2}$ such pairs. The following pairs are assumed to be linearly independent:
\begin{enumerate}[(a)]
\item $\{ \p[12], \p[13] \}$, $\{ \p[13], \p[14] \}$, $\{ \p[14], \p[12] \}$,
\item $\{ \p[12], \p[23] \}$, $\{ \p[23], \p[24] \}$, $\{ \p[24], \p[12] \}$,
\item $\{ \p[13], \p[23] \}$, $\{ \p[23], \p[34] \}$, $\{ \p[34], \p[13] \}$,
\item $\{ \p[14], \p[24] \}$, $\{ \p[24], \p[34] \}$, $\{ \p[34], \p[14] \}$,
\item $\{ \p[14], \p[23] \}$
\end{enumerate}
\end{assumption}
\begin{remark}
Assumption~\ref{asm:assumtionN} concerns the condition numbers of all
($\binom{5}{3}=10$) triples of the 3D normal vectors, while
Assumption~\ref{asm:assumtionB} concerns the condition numbers of
the chosen 2D bases. If both sets are well conditioned,
there exists a unique answer to Problem~\ref{prob:intrinsic_calibration}, as
proved in Appendix~\ref{sec:Uniqueness}.
\end{remark}
\begin{theorem}
\label{thm:uniqueness}
Assume that Assumptions N (Assumption~\ref{asm:assumtionN}) and B
(Assumption~\ref{asm:assumtionB}) hold and let $\H \in \mathrm{Sim}(3)$. If for each $i
\neq j \in \{1, 2, 3, 4 \}$, $\H\cdot\p[ij] \in \V[i] \cap
\V[j]$, then
\begin{equation}
\H =
\begin{bmatrix}
\I & \zeros \\
\zeros & 1
\end{bmatrix}.
\end{equation}
\end{theorem}
The proof is given in Appendix~\ref{sec:Uniqueness}. The resulting target placement
for a spinning LiDAR~ is an oriented tetrahedron\footnote{Each of the four targets extends to an
infinite plane. The four extended planes create a tetrahedron.} whose
face normals\footnote{The faces of the tetrahedron are the extended target planes;
therefore, the normal vectors of the faces are the normal vectors of the targets.}
satisfy Assumption~\ref{asm:assumtionN} and whose orientation relative to the ring
plane fulfills Assumption~\ref{asm:assumtionB}, as shown in Fig.~\ref{fig:oriented-tetrahedron} and
Fig.~\ref{fig:tetrahedron-Cassie}.
\begin{figure}[t]%
\centering
\subfloat[]{%
\label{fig:oriented-tetrahedron}%
\centering
\includegraphics[width=1\columnwidth, trim={0.00cm 0cm 0cm 0.cm},clip]{tetrahedron-with-ring-plane.png}}
\hspace{5pt}%
\subfloat[]{%
\label{fig:tetrahedron-Cassie}%
\centering
\includegraphics[width=1\columnwidth, trim={0.5cm 3.5cm 1.8cm 3.5cm},clip]{Cassie-with-targets.png}}
\caption[]{\protect\subref{fig:oriented-tetrahedron} An oriented tetrahedron formed by the four extended target planes, intersecting the ring plane. \protect\subref{fig:tetrahedron-Cassie} The target placement used in the experiments.}%
\end{figure}
\section{Related Work}
\label{sec:OptimalRelatedWork}
To the best of our knowledge, there is no existing work on target shape design for LiDAR~ point clouds. The closest publication on target shape design is \cite{muralikrishnan2017relative}, which evaluated the relative range error
of dense terrestrial laser scanners using a plate, a sphere, and a novel dual-sphere-plate target. We therefore review instead some techniques to improve the pose estimation of fiducial markers and to assist in extracting features of calibration targets.
\vspace{-2mm}
\subsection{Fiducial Markers}
\label{sec:RelatedFiducialMarkers}
Fiducial markers for cameras were originally developed and used for augmented reality applications
\cite{wagner2003artoolkit,fiala2005artag} and have been widely used for object
detection and tracking, and pose estimation \cite{klopschitz2007automatic}. Due to
their uniqueness and fast detection rate, they are also often used to improve
Simultaneous Localization And Mapping (SLAM) systems \cite{degol2018improved}. CCTag
\cite{calvet2016detection} adopts a set of rings (circular target) to enhance pose
estimation from blurry images. ChromaTag \cite{degol2017chromatag} proposes color
gradients on a squared target to speed up the detection process and obtain more
accurate pose estimation. More recently, LFTag \cite{wang2020lftag} has taken
advantage of topological markers, a kind of uncommon topological pattern, on a
squared target to improve pose estimation at a longer distance. However, all the
mentioned fiducial markers only work on cameras.
In our prior work on LiDAR~
\cite{huang2021lidartag}, we proposed the first fiducial marker for LiDAR~ point
clouds, which can be perceived by both LiDARs~ and cameras.
We achieved millimeter-level accuracy in translation and a few degrees of error in rotation.
However, due to the quantization error of the LiDAR, the performance of the
pose estimation (especially in-plane rotation) was noticeably degraded when the target was farther than 12 meters.
Thus, this work proposes the concept of target shape design to specifically
address the quantization uncertainty present in LiDAR~ returns and push the range of pose estimation
to more than 30 meters. In passing, we note that symmetric targets, such as a square or hexagon, suffer from rotational ambiguity. Hence, our designed target will not be symmetric.
\vspace{-2mm}
\subsection{Target-Based LiDAR-camera Calibration}
\label{sec:RelatedCalibration}
LiDAR-camera calibration \cite{huang2020improvements, liao2018extrinsic, zhou2018automatic,
gong20133d, dhall2017lidar, verma2019automatic, jiao2019novel, kim2019extrinsic,
guindel2017automatic, mishra2020extrinsic, xue2019automatic} requires feature
correspondences from the image pixels and the LiDAR~ point cloud. However, the
representations and inherent properties of camera images and LiDAR~ point clouds are
distinct. An image (pixel arrays) is dense and very structured, with the pixels
arranged in a uniform (planar) grid, and each image has a fixed number of data
points. On the other hand, each scan of a LiDAR~ returns a 3D point cloud consisting of a sparse set of
$(x,y,z)$ coordinates with associated intensities. In particular, LiDAR~
returns are not uniformly distributed in angle or distance
\cite[III-A]{huang2020lidartaglonger}. Target-based LiDAR-camera calibration utilizes targets
to identify and estimate the corresponding features, such as vertices, 2D/3D edge
lines, normal vectors, or the plane equations of the targets. References
\cite{huang2020improvements, huang2020intinsic, liao2018extrinsic,
zhou2018automatic} have noted that placing the targets so that the rings of the
LiDAR~ ran parallel to its edges led to ambiguity in the vertical position due to the
spacing of the rings and thus was detrimental to vertex or feature estimation.
References \cite{verma2019automatic, mishra2020extrinsic} utilize \textit{RANSAC}
\cite{fischler1981random} and plane fitting to remove the outliers of the LiDAR~
returns, while \cite{zhou2018automatic} proposes a ``denoising process'' for LiDAR~ returns
around the target boundaries before applying \textit{RANSAC} to extract features.
When estimating the target vertices, the latter references separate the edge points into groups and
then apply the \textit{RANSAC} algorithm. However, regressing the line equation of
edge points will fail when there are not enough edge points or inliers, as shown in
Fig.~\ref{fig:OptimalFirstImg}. Additionally, no target geometry information is used
while estimating the features.
The remainder of this paper is organized as follows.
Section~\ref{sec:ShapeSensitivity} formulates the design of an optimal target shape
for LiDAR~ point clouds. The extraction of the target vertices while
globally estimating the pose is discussed in Sec.~\ref{sec:GlobalPoseForOptimal}.
The simulation and experimental results are presented in
Sec.~\ref{sec:OptimalShapeSimulation} and Sec.~\ref{sec:OptimalShapeExperiment}.
Finally, Sec.~\ref{sec:OptimalShapeConclusion} concludes the paper and provides
suggestions for further work.
\section{Proof of Uniqueness of the Affine Transformation}
If we relax the problem to an affine transformation, the problem becomes
linear and we only have to check the invertibility of a matrix. This also
allows us to compute the condition number to get an instant measure of how robust
the setup is.
\begin{proposition}
Assume that the matrix $A$ in \eqref{eq:18_matrix} has rank 15. If
for each $i \neq j \in \{1, 2, 3, 4 \}$, $\H\cdot\p[ij] \in \V[i] \cap \V[j]$, then
\begin{equation}
\H = \begin{bmatrix}
\I & \zeros \\
\zeros & 1
\end{bmatrix}.
\end{equation}
\end{proposition}
\begin{proof}
We reformulate \eqref{eq:NecssSuff},
\begin{equation}
\label{eq:rewrite-necssStuff}
\H\cdot\p[ij] \in \V[i] \cap \V[j] \iff (s\R-\I) \p[ij] + \t = \alpha_{ij}(\n[i]\times \n[j]),
\end{equation}
where the $\alpha_{ij}\in \mathbb{R}$ are unknown coefficients. By relaxing to affine
transformation, rotations become arbitrary matrices. Let superscripts denote
components of vectors/matrices and let subscripts be indices. We further rewrite
\eqref{eq:rewrite-necssStuff}:
\begin{equation}
\label{eq:linear_sys_part2}
\M[3\times3]\p[ij] + \t = \alpha_{ij} \cdot v_{ij},
\end{equation}
where $\M[3\times3]$ is an arbitrary matrix and $v_{ij}=\n[i]\times \n[j]$. If we
write \eqref{eq:linear_sys_part2} as a summation:
\begin{equation}
\label{eq:linear_sys_part3}
\sum_{\ell=1}^3 \, M^{k\ell}\cdot p_{ij}^\ell + t^k = \alpha_{ij}\cdot v_{ij}^k,
\end{equation}
this system has $3n$ equations and $9+n$ unknowns, where $n$ is the number of
$p_{ij}$; since $p_{ij}^3 = 0$ (the points lie in the ring plane), the third column of
$\M[3\times3]$ is dropped. In our case, $n=6$ and \eqref{eq:linear_sys_part2} forms an
$18$ by $15$ system. Therefore, if its matrix has full column rank, there exists a
unique solution. The matrix is given by
\begin{equation}\label{eq:18_matrix}
A = \begin{bmatrix}
X & 0 & 0 & 1_{6\times 1} & 0 & 0 & -\Lambda_1 \\
0 & X & 0 & 0 & 1_{6\times 1} & 0 & -\Lambda_2 \\
0 & 0 & X & 0 & 0 & 1_{6\times 1} & -\Lambda_3
\end{bmatrix},
\end{equation}
where
\begin{equation*}
\begin{split}
X &= \left[ p_{12},p_{13},p_{14},p_{23},p_{24},p_{34}\right]^\intercal \\
\Lambda_k &= \mathrm{diag}\left[ v_{12}^k, v_{13}^k, v_{14}^k, v_{23}^k, v_{24}^k, v_{34}^k \right].
\end{split}
\end{equation*}
The condition matrix $A$ is given in Appendix~\ref{sec:condition-matrix}.
\end{proof}
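For a planned placement, the matrix $A$ of \eqref{eq:18_matrix} can be assembled numerically: intersect each pair of target planes with the ring plane $z=0$ to obtain the $\p[ij]$, compute $v_{ij} = \n[i] \times \n[j]$, and fill the rows as in Appendix~\ref{sec:condition-matrix}. The sketch below does exactly this; the plane parameters are illustrative, not from the experimental setup:

```python
import itertools
import numpy as np

def intersect_with_ring(n_i, p_i, n_j, p_j):
    """Point p_ij on the intersection of planes V_i, V_j, and {z = 0}."""
    A = np.stack([n_i, n_j, np.array([0.0, 0.0, 1.0])])
    b = np.array([n_i @ p_i, n_j @ p_j, 0.0])
    return np.linalg.solve(A, b)

def condition_matrix(normals, p0s):
    """Assemble the 18x15 condition matrix, one 6-row block per component."""
    pairs = list(itertools.combinations(range(4), 2))  # 12,13,14,23,24,34
    P = [intersect_with_ring(normals[i], p0s[i], normals[j], p0s[j])
         for i, j in pairs]
    V = [np.cross(normals[i], normals[j]) for i, j in pairs]
    A = np.zeros((18, 15))
    for k in range(3):                                 # row block for component k
        for r, (p, v) in enumerate(zip(P, V)):
            A[6 * k + r, 2 * k:2 * k + 2] = p[:2]      # columns for M^{k1}, M^{k2}
            A[6 * k + r, 6 + k] = 1.0                  # column for t^k
            A[6 * k + r, 9 + r] = -v[k]                # column for alpha_{ij}
    return A

# Illustrative unit normals and a point on each target plane.
normals = [np.array(n, float) / np.linalg.norm(n)
           for n in [(1, 0, 0.2), (0, 1, 0.2), (-1, 1, 0.3), (1, 1, 0.3)]]
p0s = [np.array(p, float) for p in [(4, 0, 0), (0, 5, 0), (-6, 0, 0), (7, 0, 0)]]
A = condition_matrix(normals, p0s)
print(A.shape)  # (18, 15)
```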
\begin{remark}
In the full parameterization of $\M[3\times3]$, the three columns multiplying
$p_{ij}^3 = 0$ are trivially zero, so the corresponding entries of the matrix are
left free and uniqueness of the full $3\times3$ matrix fails. However, a (3-dimensional)
rotation matrix is uniquely determined by its first two columns (the third is taken
to complete the orthonormal basis). Therefore, we obtain a unique rotation matrix and
the result holds.
\end{remark}
\section{Condition Matrix}
\label{sec:condition-matrix}
The condition matrix in \eqref{eq:18_matrix} is an $18$ by $15$ matrix and is given by
\begin{figure*}[h]
\centering
\begin{equation*}
A = \left(\begin{array}{ccccccccccccccc}
p_{12}^1 & p_{12}^2 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & -v_{12}^1 & 0 & 0 & 0 & 0 & 0 \\
p_{13}^1 & p_{13}^2 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & -v_{13}^1 & 0 & 0 & 0 & 0 \\
p_{14}^1 & p_{14}^2 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & -v_{14}^1 & 0 & 0 & 0 \\
p_{23}^1 & p_{23}^2 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & -v_{23}^1 & 0 & 0 \\
p_{24}^1 & p_{24}^2 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & -v_{24}^1 & 0 \\
p_{34}^1 & p_{34}^2 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -v_{34}^1 \\
0 & 0 & p_{12}^1 & p_{12}^2 & 0 & 0 & 0 & 1 & 0 & -v_{12}^2 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & p_{13}^1 & p_{13}^2 & 0 & 0 & 0 & 1 & 0 & 0 & -v_{13}^2 & 0 & 0 & 0 & 0 \\
0 & 0 & p_{14}^1 & p_{14}^2 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & -v_{14}^2 & 0 & 0 & 0 \\
0 & 0 & p_{23}^1 & p_{23}^2 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & -v_{23}^2 & 0 & 0 \\
0 & 0 & p_{24}^1 & p_{24}^2 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & -v_{24}^2 & 0 \\
0 & 0 & p_{34}^1 & p_{34}^2 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & -v_{34}^2 \\
0 & 0 & 0 & 0 & p_{12}^1 & p_{12}^2 & 0 & 0 & 1 & -v_{12}^3 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & p_{13}^1 & p_{13}^2 & 0 & 0 & 1 & 0 & -v_{13}^3 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & p_{14}^1 & p_{14}^2 & 0 & 0 & 1 & 0 & 0 & -v_{14}^3 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & p_{23}^1 & p_{23}^2 & 0 & 0 & 1 & 0 & 0 & 0 & -v_{23}^3 & 0 & 0 \\
0 & 0 & 0 & 0 & p_{24}^1 & p_{24}^2 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & -v_{24}^3 & 0 \\
0 & 0 & 0 & 0 & p_{34}^1 & p_{34}^2 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & -v_{34}^3
\end{array}\right)_{18\times 15}
\end{equation*}
\end{figure*}
\section{Robust Shape for Real LiDAR Sensors}
In the previous section (Sec.~\ref{sec:InitialCost}), we added an extra term to
\eqref{eq:final-cost} to account for the discrete measurements in the azimuth direction of
the LiDAR.
Additionally, LiDARs~ have different ring densities at different elevation angles.
For example, \textit{32-Beam Velodyne ULTRA Puck LiDAR~} has dense ring density between $-5^\circ$ and $3^\circ$, and
has sparse ring density from $-25^\circ$ to $-5^\circ$ and from $3^\circ$ to
$15^\circ$ \cite{velodyneUltraPuck}. A target could be partially illuminated in the
sparse region, as shown in Fig.~\ref{fig:partially-illuminated}. Therefore, assuming
that edge points are uniformly distributed is not practical, and using a distribution of edge
points similar to reality is critical when maximizing
\eqref{eq:final-cost}. Additionally, LiDAR~ rings from mobile robots are not always parallel to
the ground plane. We account for non-horizontal LiDAR~
rings by rotating the candidate target.
\vspace{-2mm}
\subsection{Partial Illumination of Target}
\label{sec:OptimalOcclusion}
To make the shape robust to the illuminated area and to the angle of the rings with respect to the target, we
first rotate the generated polygon $n$ times, and then divide each rotated polygon
into $m$ areas. Only one area is illuminated by LiDAR~ rings at a time to determine
edge points and to compute the score \eqref{eq:final-cost}, $\Psi_{ij}$, for $1 \le i \le n$ and $1 \le j \le m$.
Figure~\ref{fig:partially-illuminated-edge-points} shows the edge points being determined for a partially illuminated target. Equation \eqref{eq:final-cost} is consequently evaluated
$n\times m$ times for illumination of the target and the lowest among the $n\times m$ scores is the final score of the candidate target shape.
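The procedure above is a plain worst-case loop over the $n\times m$ rotation/illumination pairs; a sketch in which the scoring function standing in for \eqref{eq:final-cost} is hypothetical:

```python
def robust_score(score_fn, target, n_rotations=6, m_areas=5):
    """Evaluate the target under every (rotation, illuminated-area) pair and
    keep the worst (lowest) score, as in the n x m procedure above."""
    return min(score_fn(target, rot=i, area=j)
               for i in range(n_rotations)
               for j in range(m_areas))

# Dummy score: pretend some (rotation, area) pairs see fewer edge points.
dummy = lambda target, rot, area: 10.0 - rot - 2.0 * area
print(robust_score(dummy, target=None))  # worst case: 10 - 5 - 8 = -3.0
```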
\begin{figure}[!t]%
\centering
\begin{subfigure}{0.49\columnwidth}
\centering
\includegraphics[height=0.65\columnwidth, trim={0cm 0cm 0cm 0.cm},clip]{OccludedArea.jpg}
\caption[]{}%
\label{fig:partially-illuminated}
\end{subfigure}%
\begin{subfigure}{0.49\columnwidth}
\centering
\includegraphics[height=0.65\columnwidth, trim={0cm 0cm 0cm 0.cm},clip]{IlluminatedAreas.png}
\caption[]{}%
\label{fig:partially-illuminated-edge-points}
\end{subfigure}%
\caption[]{The left shows a partially illuminated candidate shape. Because we also rotate the target when computing a score, we can without loss of generality use horizontal strips to partially occlude the target. The right shows
that the target is divided into $m$-areas of partial illumination, and that for each of $n$-rotations of the candidate target, a score is assigned to each subarea based on \eqref{eq:final-cost}. The final score of
the shape is the lowest among the $n\times m$ scores.
}%
\label{fig:stepsOfCostDecomposition}%
\vspace{-4mm}
\end{figure}
\vspace{-2mm}
\subsection{Optimization for the Optimal Shape}
To summarize, the resulting optimization problem depends on the projective
transformation parameters used to generate a convex polygon, the edge points
illuminated by horizontal LiDAR~ rings lying on the rotated quadrilateral, the
transformation of the edge points in $\mathfrak{se}_2$, and the distances between two edge points
on the corresponding LiDAR~ rings. Thus, the optimization problem is defined as:
\begin{equation}
\label{eq:OptimalShapeOptimization}
\P[][*] = \argmin_{\P}\min_{\omega, u, v}\max_{i, j}\{-\Psi_{ij}\}.
\end{equation}
The optimization problem \eqref{eq:OptimalShapeOptimization} was (locally) solved by
\texttt{fmincon} in MATLAB, after the optimization parameters were randomly
initialized. We rotated the generated polygon six times. Each rotated polygon was
divided into five areas, and four LiDAR~ rings were used to illuminate one area at a
time. The unit sphere of unit vectors in $\mathfrak{se}(2)$, mentioned in
Sec.~\ref{sec:ShapeSensitivityComputation}, was discretized into $25\times25$ faces (by normalizing the vectors to have unit length, we can reduce the dimension from three to two).
Once the generated polygon was rotated and illuminated, the sensitivity of the
resulting edge points was evaluated at each face on the unit sphere. The resulting
optimal shape is shown in Fig.~\ref{fig:optimal-shape}. One can observe that the resulting shape satisfies:
\textbf{1)} it has sufficient area so as to collect LiDAR~ returns; \textbf{2)} the length of the shortest side is still long enough to be identified through edge points; \textbf{3)} its asymmetric shape avoids the issue of pose ambiguity.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth, trim={0cm 0cm 0cm 0.cm},clip]{OptimalShape.png}
\caption{The resulting optimal shape from \eqref{eq:OptimalShapeOptimization} in arbitrary units.
} \label{fig:optimal-shape}
\vspace{-4mm}
\end{figure}
\begin{remark}
The main point of this paper is that target shape can be used to enhance the estimation of target vertices and relative pose between a target and a LiDAR. We have proposed one algorithmic means to produce an ``optimal target shape''.
Different notions of cost will result in different shapes.
\end{remark}
\section{Optimal Shape For Sparse LiDAR Point Clouds}
\label{sec:ShapeSensitivity}
In this section, we propose a mathematical formulation of target shape design. The
main idea is for target translation and rotation to result in large gradients at edge
points defined by the LiDAR~ returns. Fig.~\ref{fig:sensitivity-process} summarizes a
high-level optimization process.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{sensitivity-process2.png}
\caption{Optimization process for determining an optimal target shape. Projective transformations
applied to a nominal quadrilateral generate candidate convex quadrilaterals (targets). Edge points are intersections of LiDAR~ rings with the target boundaries. The objective is to maximize the gradient of edge points under actions of $\mathrm{SE}(2)$ applied to the target. To enhance robustness, the gradients are computed for $n$-discrete rotations of the quadrilateral under partial illumination, and the score is the worst case.}
\label{fig:sensitivity-process}
\end{figure}
\subsection{Convex Shape Generation}
We apply projective transformations on a nominal convex quadrilateral in 3D to
generate planar candidate targets. We will see that applying a projective transformation rather than working with a collection of vertices makes it easier to ensure convexity of the target and to generate a cost function that is invariant under scaling and translations.
Let $\mathcal{V} := \{X_i \,|\, X_i := (x_i, y_i, 1),\; x_i,
y_i\in\mathbb{R}\}_{i=1}^4$ denote the 3D vertices $X_i$ of a nominal convex
quadrilateral, such as a square. Given $\P$, a projective transformation defined by a non-singular $3\times3$ matrix \cite[p.33]{hartley2000multiple}, let
$\widetilde{X}_i$ denote the new vertices transformed by $\P$:
\begin{equation}
\label{eq:projective}
\widetilde{X}_i =
\begin{bmatrix}
x_i^\prime\\y_i^\prime\\\lambda_i^\prime
\end{bmatrix} =
\P X_i =
\begin{bmatrix}
p_{11} & p_{12} & p_{13} \\
p_{21} & p_{22} & p_{23} \\
p_{31} & p_{32} & \upsilon
\end{bmatrix}
\begin{bmatrix}
x_i\\y_i\\1
\end{bmatrix}.
\end{equation}
The resulting vertices $\widetilde{\mathcal{V}}:=\{\widetilde{X}_i\}_{i=1}^4$
lie in the projective space $\mathbb{P}^2$ \cite[p.26]{hartley2000multiple}. Let
$\mathcal{V}^\prime$ be the corresponding transformed vertices in the Cartesian space
$(\mathbb{R}^2)$ \cite[p.27]{hartley2000multiple} and $\Pi:\mathbb{P}^2\rightarrow
\mathbb{R}^2$ be the mapping function
\begin{equation}
\label{eq:CandidateTargetFamily}
\Pi(\widetilde{\mathcal{V}}):=\Pi(\P(\mathcal{V})):=\left\{X_i^\prime\bigg|X_i^\prime :=
\left(\frac{x_i^\prime}{\lambda_i^\prime}, \frac{y_i^\prime}{\lambda_i^\prime}\right)\right\}_{i=1}^4.
\end{equation}
To summarize, given a nominal convex quadrilateral, $\mathcal{V}$, and a projective transformation, $\P$, we construct a new quadrilateral via
\begin{equation}
\mathcal{V} \longmapsto \P\mathcal{V}=:\widetilde{\mathcal{V}} \text{ and } \mathcal{V}^\prime:= \Pi(\widetilde{\mathcal{V}}).
\end{equation}
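Concretely, the pipeline \eqref{eq:projective}--\eqref{eq:CandidateTargetFamily} is a multiplication by $\P$ followed by dehomogenization; a sketch:

```python
import numpy as np

def transform_quad(P, vertices):
    """Apply a 3x3 projective transformation to 2D vertices given as rows
    (x, y), returning the dehomogenized vertices Pi(P(V))."""
    V = np.hstack([vertices, np.ones((len(vertices), 1))])  # (x_i, y_i, 1)
    W = V @ np.asarray(P, dtype=float).T                    # tilde X_i = P X_i
    return W[:, :2] / W[:, 2:3]                             # divide by lambda_i

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.2, 0.1, 1.0]])  # last row bends the square projectively
print(transform_quad(P, square))
```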
\begin{remark}
From here on, we will abuse notation
and pass from Cartesian coordinates to homogeneous coordinates without noting the
distinction.
\end{remark}
It is important to note that for any desired quadrilateral, there exists a projective transformation yielding $\mathcal{V}^\prime$ from $\mathcal{V}$. Hence, our procedure for generating candidate targets is without loss of generality.
\begin{theorem}[\hspace{1sp}{\cite[p.274]{cederberg2013course}}]
\label{thm:uniquessOfProjective}
There exists a unique projective transformation that maps any four points, no
three of which are collinear, to any four points, no three of which are collinear.
\end{theorem}
While Theorem~\ref{thm:uniquessOfProjective} can be used to construct an arbitrary quadrilateral, convexity need not be conserved as shown in Fig.~\ref{fig:convexity-comparison}. We need the following condition to ensure that the resulting polygon is convex:
\begin{theorem}[\hspace{1sp}{\cite[p.39]{boyd2004convex}}]
\label{thm:convexity}
Let $\Omega:\mathbb{R}^{3}\rightarrow\mathbb{R}^2$ be given by $\Omega(x,y,\lambda) = (x/\lambda,y/\lambda)$. If $\mathrm{dom} \Omega =
\mathbb{R}^2 \times \mathbb{R}_{++},$ where $\mathbb{R}_{++} =
\{x\in\mathbb{R}|x>0\}$ and the set $C\subseteq\mathrm{dom}\Omega$ is convex, then its image
\begin{equation}
\Omega(C) = \{\Omega(x)|x\in C\}
\end{equation}
is also convex.
\end{theorem}
For the domain of our projective transformation to be $\mathbb{R}^2 \times \mathbb{R}_{++}$, and hence for the candidate target to be automatically convex, the following linear inequality should be imposed for each vertex in \eqref{eq:projective},
\begin{equation}
\label{eq:convexityConstraints}
p_{31}x_i + p_{32}y_i + \upsilon > 0.
\end{equation}
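In a sampler over candidate transformations, \eqref{eq:convexityConstraints} amounts to checking the sign of the third homogeneous coordinate at every nominal vertex; a small sketch (helper name and matrices are illustrative):

```python
import numpy as np

def satisfies_convexity_constraint(P, V):
    """Check p31*x_i + p32*y_i + upsilon > 0 for every nominal vertex
    (x_i, y_i, 1), i.e. Eq. (convexityConstraints)."""
    lam = V @ P[2, :]   # third homogeneous coordinate of each transformed vertex
    return bool(np.all(lam > 0))

V = np.array([[0., 0., 1.], [1., 0., 1.], [1., 1., 1.], [0., 1., 1.]])
P_ok = np.array([[1., 0., 0.], [0., 1., 0.], [0.1, 0.1, 1.]])
P_bad = np.array([[1., 0., 0.], [0., 1., 0.], [-2., 0., 1.]])  # negative at x=1
```

Rejecting any sampled $\P$ that fails this check keeps every generated quadrilateral convex by Theorem~\ref{thm:convexity}.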
Because we consider two candidate targets to be equivalent if one can be obtained from the other by translation and
scaling, we are led to decompose the projective transformation as follows~\cite{hartley2003multiple,
decomposition},
\begin{align}
\P &= \H[S]\H[SH]\H[SC]\H[E] \nonumber\\
\label{eq:decomposition}
&=
\begin{bmatrix}s\mathbf{R} & \mathbf{t} \\ \mathbf{0}^\top & \mathbf{1}\end{bmatrix}
\begin{bmatrix} 1 & k & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix}
\begin{bmatrix} \lambda & 0 & 0\\ 0 & 1/\lambda & 0 \\ 0 & 0 & 1\end{bmatrix}
\begin{bmatrix}\mathbf{I} & \mathbf{0} \\ \mathbf{v}^\top & \mathbf{\upsilon}\end{bmatrix},
\end{align}
where $\H[S], \H[SH], \H[SC], \H[E]$ are similarity, shearing, scaling and elation
transformations, respectively; see~\cite{hartley2003multiple} for more details. By
setting $s=1, \t =(0,0)$ in \eqref{eq:decomposition}, the number of degrees of freedom (DoF) of the projective transformation drops from eight to five. Our family of candidate targets is now given by \eqref{eq:CandidateTargetFamily} with $\P$
satisfying \eqref{eq:convexityConstraints} and \eqref{eq:decomposition}.
In summary, we can describe candidate convex target shapes via projective transformations while reducing the number of degrees of freedom to five.
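A five-parameter construction of $\P$ following \eqref{eq:decomposition}, with $s=1$, $\t=(0,0)$, and $\upsilon$ normalized to $1$, can be sketched as follows (the function name and sample parameter values are ours):

```python
import numpy as np

def build_projective(theta, k, lam, v1, v2):
    """Compose P = H_S H_SH H_SC H_E of Eq. (decomposition), with the
    similarity restricted to s = 1, t = (0, 0) and upsilon normalized to 1,
    leaving five free parameters (theta, k, lambda, v1, v2)."""
    c, s = np.cos(theta), np.sin(theta)
    H_S = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])             # similarity
    H_SH = np.array([[1., k, 0.], [0., 1., 0.], [0., 0., 1.]])          # shearing
    H_SC = np.array([[lam, 0., 0.], [0., 1. / lam, 0.], [0., 0., 1.]])  # scaling
    H_E = np.array([[1., 0., 0.], [0., 1., 0.], [v1, v2, 1.]])          # elation
    return H_S @ H_SH @ H_SC @ H_E

P = build_projective(theta=0.1, k=0.2, lam=1.1, v1=0.05, v2=-0.03)
```

Each factor has unit determinant here, so the composed $\P$ is always non-singular.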
\begin{figure}[t]%
\centering
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{convex.png}
\caption{}
\label{fig:convex}%
\end{subfigure}%
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{non-convex.png}
\caption{}
\label{fig:non-convex}%
\end{subfigure}%
\caption[]{Equation \eqref{eq:convexityConstraints} versus convexity.
The red indicates the resulting shapes transformed by projective transformations
from the same nominal shape (black). The left shows a case where
\eqref{eq:convexityConstraints} is satisfied and thus the transformed shape is
convex; otherwise, it is non-convex, as shown on the right, where
$ p_{31}x_3 + p_{32}y_3 + \upsilon =-0.5$.}%
\label{fig:convexity-comparison}%
\end{figure}
\begin{remark}
The same design process could be run with an $N$-gon, for $N \ge 3$. If $N$ is too large, the target will have at least one very short edge that is impossible to discern in a point cloud. In between, there is a tradeoff between having adequate area to collect LiDAR~ returns, non-parallel edges to minimize pose ambiguity, and edges long enough to pick out of a point cloud. We used a $4$-gon as a reasonable starting point. Investigating $N=3$ and $N=5$ would be interesting as well.
\end{remark}
\subsection{Edge Points Determination}
\label{sec:EdgePointsDetermination}
As mentioned in \eqref{eq:CandidateTargetFamily}, $\mathcal{V}^\prime$ are the 2D vertices of a candidate target.
An edge point $E_i$ is defined by the intersection point of a LiDAR~ ring and the
line connecting two vertices of the polygon as shown in Fig.~\ref{fig:edge-points}.
If we let $\mathcal{S}$ be the boundary of the quadrilateral with vertices $\mathcal{V}'$, the collection of edge points detected by the LiDAR~ is the set $\mathcal{EP}:=\{E_i\}_{i=1}^M$, where $M$ is the number of edge points; the set is given by the intersection of $\mathcal{S}$ with the LiDAR~ rings $\mathcal{L}{\mathcal{R}}$, i.e.
\begin{equation}
\mathcal{S} = \partial\mathrm{conv}(\mathcal{V}'), \quad
\left\{ E_i\right\}_{i=1}^M = \mathcal{S}\cap \mathcal{L}{\mathcal{R}}.
\end{equation}
When the LiDAR~ rings are horizontal $(y=y_r)$,
an edge point $E_i = (\widecheck{x}_i, \widecheck{y}_i)$ can always be computed in closed form,
\begin{equation}
\label{eq:edge-points}
\widecheck{x}_i = x_i + \frac{x_{i+1}-x_i}{y_{i+1}-y_i}(y_r-y_i)\text{~~and~~} \widecheck{y}_i=y_r.
\end{equation}
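The closed form \eqref{eq:edge-points} applied around the whole boundary gives a simple edge-point extractor for horizontal rings; a sketch with our own helper names:

```python
import numpy as np

def edge_points(vertices, ring_ys):
    """Intersect the closed polygon boundary with horizontal rings y = y_r,
    using the closed form of Eq. (edge-points) on each non-horizontal edge."""
    pts = []
    n = len(vertices)
    for y_r in ring_ys:
        for i in range(n):
            (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
            # The ring crosses this edge iff y_r lies between y1 and y2.
            if y1 != y2 and min(y1, y2) <= y_r <= max(y1, y2):
                x = x1 + (x2 - x1) / (y2 - y1) * (y_r - y1)
                pts.append((x, y_r))
    return pts

# Unit square crossed by two rings gives two edge points per ring.
square = [(0., 0.), (1., 0.), (1., 1.), (0., 1.)]
eps = edge_points(square, ring_ys=[0.25, 0.75])
```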
\begin{figure}[t]
\centering
\includegraphics[width=0.85\columnwidth]{EdgePointsAdditionalCost.png}
\caption{The pink dots are the edge points determined by
a LiDAR~ ring (orange line) intersecting with the edge line connecting two vertices
($X_i$ and $X_{i+1}$). The distance between the two edge points on the same LiDAR~ ring is
$d_i$.
The width and height of the shape are $w$ and $h$, respectively.
}
\label{fig:edge-points}
\end{figure}
\subsection{Shape Sensitivity}
\label{sec:ShapeSensitivityComputation}
From experience gained in LiDARTag~[1], we observed that the pose estimation suffers the most from in-plane
rotation. Therefore, we compute the shape sensitivity in $\mathrm{SE}(2)$.
The sensitivity of a polygon is defined as the gradient of the edge points with respect to rigid-body transformations of the polygon, with the LiDAR~ rings held constant, as
shown in Fig.~\ref{fig:sensitivity-comparison}. Hence, the sensitivity captures
the horizontal movement of an edge point after the shape is rotated and translated.
For a transformation in the Special Euclidean group, $\H\in \mathrm{SE}(2)$, let $E_i^\prime$ denote the transformed edge point, given by
\begin{equation}
E_i^\prime := \H\circ E_i =
\begin{bmatrix}
\cos\theta &-\sin\theta &t_x\\
\sin\theta & \cos\theta &t_y\\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
\widecheck{x}_i\\\widecheck{y}_i\\1
\end{bmatrix},
\label{eq:SE2action}
\end{equation}
where $\theta, t_x, t_y$ are the rotation angle and the translations along the
$x$- and $y$-axes, respectively. Let $\mathrm{Exp}(\kappa,\omega, u, v)$ denote the
exponential map that maps from the Lie algebra $\mathfrak{se}(2)$ to the continuous
Lie group $\mathrm{SE}(2)$,
\begin{align}
\label{eq:lie-algebra}
&\mathrm{Exp}(\kappa,\omega,u,v)=\mathrm{expm}\left(\kappa \begin{bmatrix}
0 & -\omega & u\\
\omega & 0 & v\\
0&0&0
\end{bmatrix}\right) \nonumber\\
&=
\begin{bmatrix}
\cos(\omega \kappa) & -\sin(\omega \kappa) & \frac{1}{\omega}(v\cos(\omega \kappa)+u\sin(\omega \kappa)-v)\\
\sin(\omega \kappa) & \cos(\omega \kappa) & \frac{1}{\omega}(v\sin(\omega \kappa)-u\cos(\omega \kappa)+u)\\
0&0&1
\end{bmatrix},
\end{align}
where $(\omega, u, v)$ is constrained to the unit sphere $S^2\subset\mathbb{R}^3$, $\mathrm{expm}$ is the usual matrix exponential, and $\kappa$ is a dummy variable
for differentiation. Comparing $\H$ in \eqref{eq:SE2action} and
\eqref{eq:lie-algebra} leads to
\begin{equation}
\begin{cases}
\label{eq:coefficients}
\theta = \omega \kappa\\
t_x = \frac{1}{\omega}(v\cos(\omega \kappa)+u\sin(\omega \kappa)-v)\\
t_y = \frac{1}{\omega}(v\sin(\omega \kappa)-u\cos(\omega \kappa)+u).
\end{cases}
\end{equation}
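The closed form \eqref{eq:lie-algebra} can be sanity-checked against a truncated Taylor series of the matrix exponential (a sketch; it assumes $\omega \neq 0$, and the helper names are ours):

```python
import numpy as np

def exp_se2_closed(kappa, w, u, v):
    """Closed-form Exp(kappa, omega, u, v) from Eq. (lie-algebra); assumes w != 0."""
    c, s = np.cos(w * kappa), np.sin(w * kappa)
    return np.array([[c, -s, (v * c + u * s - v) / w],
                     [s,  c, (v * s - u * c + u) / w],
                     [0., 0., 1.]])

def exp_se2_series(kappa, w, u, v, terms=30):
    """Matrix exponential of the se(2) generator via its Taylor series."""
    A = kappa * np.array([[0., -w, u], [w, 0., v], [0., 0., 0.]])
    out, term = np.eye(3), np.eye(3)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out
```

For $u=v=0$ the translation entries vanish and the map reduces to a pure rotation by $\omega\kappa$.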
For each triple of values $(\omega, u, v)$, the action of \eqref{eq:lie-algebra} on a candidate target quadrilateral results in a path $p_i(\kappa):=p_i(\mathrm{Exp}(\kappa,\omega, u, v))$ traced out by the $i$-th edge point along a LiDAR~ ring. Using
\eqref{eq:coefficients} to differentiate the path
$p_i$ at the identity of $\mathrm{SE}(2)$
produces an action of
$\mathfrak{se}(2)$,
\begin{equation}
\label{eq:differentiated-path}
v_i(\omega,u,v) := \left.\frac{d}{d\kappa}\right\vert_{\kappa=0}p_i(\mathrm{Exp}(\kappa,\omega, u,
v)), ~(\omega, u, v)\in \mathfrak{se}(2).
\end{equation}
From \eqref{eq:edge-points}, \eqref{eq:lie-algebra}, \eqref{eq:coefficients}, and \eqref{eq:differentiated-path}, the gradient of the edge point with respect to the
LiDAR~ ring is
\begin{equation}
\label{eq:gradient_x}
\begin{aligned}
v_{i_x} = &\omega\left( \frac{(x_i-x_{i+1})(x_iy_{i+1}-y_ix_{i+1}+x_{i+1}y_r-x_iy_r)}{(y_i-y_{i+1})^2}-y_r\right)\\
&~+ u - \left(\frac{x_{i+1}-x_i}{y_{i+1}-y_i}\right)v.
\end{aligned}
\end{equation}
Notice that $v_{i_y} = 0$ because the $y$-coordinates of $\H(\mathcal{V}^\prime)\cap \mathcal{L}{\mathcal{R}}$ remain fixed.
Finally, we define the sensitivity $\mathcal{M}$ of the polygon
\begin{equation}
\label{eq:sensitivity-cost}
\mathcal{M}(\mathcal{V}, \mathcal{L}{\mathcal{R}}, \omega, u, v) :=\frac{1}{h} \sum_{i=1}^M v_{i_x}^2,
\end{equation}
where $M$ is the number of edge points, and $h$, defined in Fig.~\ref{fig:edge-points}, is included because the gradients in \eqref{eq:gradient_x} scale with the vertical height of the target.
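The derivative defining the sensitivity can also be evaluated by finite differences of the ring intersection, a useful numerical check on \eqref{eq:differentiated-path}; the helper names are ours, and the pure-translation branch uses the $\omega\to 0$ limit of \eqref{eq:coefficients}:

```python
import numpy as np

def ring_x(edge, y_r):
    """x-coordinate where the line through the edge's two vertices meets y = y_r."""
    (x1, y1), (x2, y2) = edge
    return x1 + (x2 - x1) / (y2 - y1) * (y_r - y1)

def v_ix_numeric(edge, y_r, w, u, v, h=1e-6):
    """Central finite difference of the ring intersection along the path
    kappa -> Exp(kappa, omega, u, v) applied to the edge, at kappa = 0."""
    def x_of(kappa):
        th = w * kappa
        c, s = np.cos(th), np.sin(th)
        if abs(w) > 1e-12:
            tx = (v * c + u * s - v) / w
            ty = (v * s - u * c + u) / w
        else:                      # omega -> 0 limit of Eq. (coefficients)
            tx, ty = u * kappa, v * kappa
        moved = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in edge]
        return ring_x(moved, y_r)
    return (x_of(h) - x_of(-h)) / (2 * h)

# Slope-one edge, ring at y = 0.5, pure upward translation (w=0, u=0, v=1):
g = v_ix_numeric([(0., 0.), (1., 1.)], 0.5, w=0.0, u=0.0, v=1.0)
```

Translating this edge upward slides its intersection with the fixed ring to the left, so `g` is $-1$ here, while a pure $x$-translation ($u=1$) gives $+1$.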
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{sensitivity-comparison.png}
\caption{Shape sensitivity under rotation. The sensitivity of a shape is defined
by the gradient ($v_{ix}$) of each edge point as it moves along the ring lines (orange) under rotations and translations of the shape.
The green dots are the original edge points and the pink dots are the edge
points on the rotated shape. The same process can be done for translation.
The left shows a gradient $v_{ix}$
of a square where the dotted square has been rotated from the nominal positioned square. The right
shows that the gradient of an edge point on a circle is zero under rotation about its center. The gradient would be non-zero for translations.
}
\label{fig:sensitivity-comparison}
\end{figure}
\subsection{Initial Cost Function}
\label{sec:InitialCost}
The candidate target's sensitivity defined in \eqref{eq:sensitivity-cost} does not take into account the discrete nature of the LiDAR~ returns on a given ring. Let $d_i$ denote the distance between the two edge points on the $i$-th LiDAR~ ring. We want to encourage targets whose $d_i$ is larger than the spatial quantization of the LiDAR~ returns. We can do this in two ways: by scaling \eqref{eq:sensitivity-cost} by $w$, and by including a term of the form $\sum_{i=1}^K d_i$ in the cost ($d_i$ is larger for wider targets). The resulting shape score ($\Psi$)
becomes
\begin{equation}
\label{eq:final-cost}
\Psi = w \mathcal{M} + \mu\sum_{i=1}^K d_i,
\end{equation}
where $w$ is the width of the polygon, $K$ is the number of rings illuminating the polygon, and $\mu$ is a weight trading off the two terms in the cost function.
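Given precomputed per-point gradients and ring widths, the score \eqref{eq:final-cost} is a one-liner; the numeric values below are illustrative only:

```python
def shape_score(v_ix_list, d_list, w, h, mu):
    """Psi = w * M + mu * sum(d_i), with the sensitivity
    M = (1/h) * sum(v_ix^2) of Eq. (sensitivity-cost)."""
    M = sum(g * g for g in v_ix_list) / h
    return w * M + mu * sum(d_list)

# Illustrative gradients and ring widths for a hypothetical candidate target.
psi = shape_score([0.5, -0.2, 0.1], [0.8, 0.6], w=1.0, h=1.0, mu=0.1)
```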
\section{Simulation Results}
\label{sec:OptimalShapeSimulation}
Before carrying out experiments with the new target shape, we used a MATLAB-based
LiDAR~ simulator introduced in \cite{huang2020intinsic} to extensively evaluate the
pose and vertex estimation of the optimal shape. Both quantitative and qualitative
results are provided. We do not compare against standard targets, such as unpatterned rectangles, diamonds, or circles, because their symmetry properties result in pose ambiguity. At large distances, a pattern would not be discernible.
We simulate a \textit{32-Beam Velodyne ULTRA Puck LiDAR}, whose data sheet can be found
at~\cite{velodyneUltraPuck}. A target is placed at distances from 2 to 40 meters in 2
m increments. At each distance, simulation data is collected with a target face-on to
the LiDAR~ as an easy case, and another dataset with the target rotated by the Euler
angles (roll = $20^\circ$, pitch = $30^\circ$, yaw = $30^\circ$) under the ${XYZ}$
convention as a challenging case. In addition, we add two different levels
of noise to each dataset to examine the robustness of the algorithm.
The results of vertex estimation are reported as the root-mean-square error (RMSE):
\begin{equation}
\label{eq:e_RMSE}
\text{RMSE} = \sqrt{\frac{1}{4}\sum_{i=1}^4\|\widetilde{X}_i - X_i\|_2^2},
\end{equation}
where $\widetilde{X}_i$ is the estimated vertex and $X_i$ is the ground truth vertex
from the simulator. The pose on $\mathrm{SE}(3)$ is evaluated on translation $e_t$ in
$\mathbb{R}^3$ and rotation $e_r$ on $\mathrm{SO}(3)$, separately. In particular, $e_t$ and $e_r$
are computed by
\begin{equation}
\label{eq:PoseError}
e_t := \|\t - \widetilde{\t}\| \text{~~and~~}
e_r := \|\mathrm{Log}(\R\widetilde{\R}^\mathsf{T})\|,
\end{equation}
where $\|\cdot\|$ is the Euclidean norm, $\widetilde{\cdot}$ is the estimated
quantity, $\R$ and $\t$ are the ground truth rotation and translation, respectively,
and $\mathrm{Log}(\cdot)$ is the logarithm map of the Lie group $\mathrm{SO}(3)$. Additionally, we
report the RMSE and translation error as percentages, computed by dividing
each quantity by the distance to the centroid of the target. The quantization error $e_q$
is the distance between two adjacent points on the same ring and can be approximated
by the azimuth resolution ($0.4^\circ$) of the LiDAR~ times the target distance.
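The error metrics \eqref{eq:e_RMSE} and \eqref{eq:PoseError} are straightforward to implement; here the rotation error uses the fact that $\|\mathrm{Log}(\R\widetilde{\R}^\mathsf{T})\|$ equals the rotation angle of $\R\widetilde{\R}^\mathsf{T}$ (helper names and the example pose are ours):

```python
import numpy as np

def pose_errors(R, t, R_est, t_est):
    """e_t = ||t - t_est|| and e_r = ||Log(R R_est^T)|| of Eq. (PoseError);
    the norm of the SO(3) logarithm is the rotation angle, recovered here
    from the trace of the relative rotation."""
    e_t = np.linalg.norm(t - t_est)
    cos_angle = (np.trace(R @ R_est.T) - 1.0) / 2.0
    e_r = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return e_t, e_r

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

# Ground truth vs. a slightly wrong estimate: 0.2 rad and 0.5 m of error.
e_t, e_r = pose_errors(rot_z(0.3), np.array([1., 2., 3.]),
                       rot_z(0.1), np.array([1., 2., 3.5]))
```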
Qualitative results are shown in Figure~\ref{fig:simResults}. The complete
quantitative results of the distances and noise levels are shown as line charts in
Fig.~\ref{fig:simChart}. Table~\ref{tab:SimNoiseFreeResults} shows a subset of
quantitative results of the pose and vertex estimation using the noise-free dataset. For vertex estimation, we achieve
less than 1\% error in most cases. The
translation errors are less than the quantization error $e_q$, and the rotation errors are only a few degrees. It can be seen that the estimation limit of the optimal target of width 0.96 meter with our \textit{32-Beam Velodyne ULTRA Puck LiDAR~} is 30 meters. However, for a LiDAR~ with
a different number of beams or points, the estimation limit may be different.
Based on these results, we were motivated to build the target and run physical
experiments.
\section{Finding the LiDAR~ Vertices}
\label{sec:LiDARVertices}
\section{Proof on Uniqueness of Similarity Transformation}
\label{sec:Uniqueness}
This section completes the proof in Sec.~\ref{sec:TargetPlacementGuideline} and
\cite[Proposition 1]{AutomaticCalibration}. It provides a guideline for placing
targets such that the solution of the optimization problem is unique. In particular,
under Assumption~\ref{eq:assumtionN} and Assumption~\ref{eq:assumtionB}, the
optimization problem has a unique solution. As stated in
Sec.~\ref{sec:ProposedIntrinsicModel}, the intrinsic calibration parameters are
modeled as an element of the similarity Lie group ($\mathrm{Sim}(3)$). An element of this
group in matrix form is
\begin{equation}
\H =
\begin{bmatrix}
s\R & \t \\
\zeros & 1
\end{bmatrix} \in \mathrm{Sim}(3),
\end{equation}
where $\R\in\mathrm{SO}(3), \t \in \mathbb{R}^3$ and $s \in \mathbb{R}^+$.
\subsection{Mathematical Definitions and Preliminaries}
Let $[\mathcal{S}]$ and $[\mathcal{S}]^\perp$ be the span and its orthogonal complement of
$\mathcal{S}\subset \mathbb{R}^3$, respectively\footnote{$\because \mathcal{S} \subset
\mathbb{R}^3$, $[\mathcal{S}]=\mathbb{R}^3 \iff [\mathcal{S}]^\perp =0$}. We will use the fact that
the orthogonal complement of a sum of $K$ spans is the intersection of the
complements, $\left( [\mathcal{S}_1] + [\mathcal{S}_2] + \cdots +
[\mathcal{S}_K]\right)^\perp = \cap_{i=1}^K[\mathcal{S}_i]^\perp$. Let $\P \subset \mathbb{R}^3$ be a plane.
$\P$ models the set traced out by a single ring of a perfectly calibrated
spinning LiDAR. Let $\{\e[1], \e[2], \e[3] \}$ denote the canonical basis for
$\mathbb{R}^3$. Without loss of generality, we assume $\P=[\e[3]]^\perp = [\e[1],
\e[2]]$. Therefore, $\forall s>0, \R\in\mathrm{SO}(3)$,
$$ (s\R -\I)(\P)=0 \iff s=1, \R=\I.\footnote{$(s\R-\I)(\P) = [(s\R-\I)\e[1]] +
[(s\R-\I)\e[2])]$}$$
\subsection{Assumptions}
Consider four targets with unit normal vectors $\n[i]$ and let $\p[0, i]$ be a point
on the $i$-th target.
\begin{assumption}[Assumption N]
\label{eq:assumtionN}
All sets of three distinct vectors from $\{ \n[1], \n[2], \n[3], \n[4], \e[3]\}$
are linearly independent.
\end{assumption}
For $1\le i \le 4$, the plane defined by the $i$-th target is $\V[i]:= \p[0,i] +
[\n[i]]^\perp.$ Under Assumption~\ref{eq:assumtionN}, some facts are listed below
without proofs:
\begin{enumerate}[(a)]
\item For each $i \neq j \in \{1, 2, 3, 4 \}$, $\p[ij]:=\P \cap \V[i] \cap \V[j]$
exists and is unique. There are $\binom{4}{2}$ intersection points.
\item $\V[i] \cap \V[j]=\p[ij] + [\n[i], \n[j]]^\perp$.
\item $\H\cdot \p[ij] \in \V[i] \cap \V[j]$ if, and only if, $$(s \R-\I)\p[ij]
+\t \in [\n[i], \n[j]]^\perp. $$
\item Because $\dim [\n[i], \n[j]]^\perp=1$, $(s \R-\I)\p[ij] +\t \neq 0$ if,
and only if, $ [(s \R-\I)\p[ij] +\t] = [\n[i], \n[j]]^\perp. $
\item Let $\{\p[a], \p[b] \}$ be a basis for $\P$. Then $(s \R-\I)(\P)=0$ if,
and only if, $(s \R-\I)(\p[a])=0$ and $(s \R-\I)(\p[b])=0$.
\end{enumerate}
\begin{assumption}[Assumption B]
\label{eq:assumtionB}
Each of the following pairs of the six intersection points is linearly
independent, and hence forms a basis for the ring plane:
\begin{enumerate}[(a)]
\item $\{ \p[12], \p[13] \}$, $\{ \p[13], \p[14] \}$, $\{ \p[14], \p[12] \}$,
\item $\{ \p[12], \p[23] \}$, $\{ \p[23], \p[24] \}$, $\{ \p[24], \p[12] \}$,
\item $\{ \p[13], \p[23] \}$, $\{ \p[23], \p[34] \}$, $\{ \p[34], \p[13] \}$,
\item $\{ \p[14], \p[24] \}$, $\{ \p[24], \p[34] \}$, $\{ \p[34], \p[14] \}$,
\item $\{ \p[14], \p[23] \}$
\end{enumerate}
\end{assumption}
\subsection{A Complete Proof}
\begin{theorem}
Assume that Assumptions N (Assumption~\ref{eq:assumtionN}) and B
(Assumption~\ref{eq:assumtionB}) hold and let $\H \in \mathrm{Sim}(3)$. If for each $i
\neq j \in \{1, 2, 3, 4 \}$, $\H\cdot\p[ij] \in \V[i] \cap
\V[j]$, then
\begin{equation}
\H =
\begin{bmatrix}
\I & \zeros \\
\zeros & 1
\end{bmatrix}.
\end{equation}
\end{theorem}
\begin{proof}
The proof is by exhaustion on the dimension of $(s\R-\I)(\P)$. We have that
$\H\cdot\p[ij] = s\R \p[ij] + \t$, and therefore
\begin{equation}
\label{eq:NecssSuff}
\H\cdot\p[ij] \in \V[i] \cap \V[j] \iff (s\R-\I) \p[ij] + \t \in [\n[i], \n[j]]^\perp.
\end{equation}
\begin{itemize}
\item \textbf{Case 1:} $\dim (s\R-\I)(\P) =0$. \par
Then $\t \in [\n[i],
\n[j]]^\perp$ for all $i\neq j$. Hence, $\t \in [\n[1], \n[2]]^\perp \cap
[\n[2], \n[3]]^\perp =0$, which implies that $\t=0$. Therefore, by the fact
established in the preliminaries, $s=1$, $\R=\I$, and we are done. We next show that $\dim (s\R-\I)(\P) > 0$
and $\H\cdot\p[ij] \in \V[i] \cap \V[j]$ for all $i \ne j$ lead to
contradictions.
\item \textbf{Case 2:} $\dim (s\R-\I)(\P) =1$ \par
(a) Suppose $\t \not \in (s\R-\I)(\P)$. Then, for any $\p[ij]$, $(s\R-\I)\p[ij] +
\t \neq 0$. Hence,
\begin{align*}
[\n[1], \n[2]]^\perp &= [(s\R-\I)(\p[12]) + \t] \\
[\n[2], \n[3]]^\perp &= [(s\R-\I)(\p[23]) +\t] \\
[\n[3], \n[4]]^\perp &= [(s\R-\I)(\p[34]) +\t].
\end{align*}
Because $[\n[1], \n[2]] \cap [\n[2], \n[3]]\cap [\n[3], \n[4]] =0$, we have
that $[\n[1], \n[2]]^\perp + [\n[2], \n[3]]^\perp + [\n[3], \n[4]]^\perp =
\mathbb{R}^3$. We deduce that $\dim (s\R-\I)(\P) =2$, which is a contradiction. \\
(b) Suppose $\t \in (s\R-\I)(\P)$ and thus there exists $\p[t]\in \P$ such that
$\t=(s\R-\I)(\p[t])$. The condition \eqref{eq:NecssSuff} can therefore be written as
\begin{equation}
\label{eq:NecssSuffWithT}
\H\cdot\p[ij] \in \V[i] \cap \V[j] \iff (s\R-\I)( \p[ij] + \p[t]) \perp \{\n[i],\n[j] \}.
\end{equation}
Because $\{ \p[12], \p[13] \}$, $\{ \p[13], \p[14] \}$, $\{ \p[14], \p[12] \}$
are bases for $\P$, we conclude that
\begin{equation}
[\p[12]+\p[t], \p[13]+\p[t], \p[14]+\p[t] ]=\P.
\end{equation}
Applying \eqref{eq:NecssSuffWithT} to this set of vectors, we deduce that
$$(s\R-\I)(\P) \perp \n[1].$$
Applying the same reasoning to $\{ \p[12], \p[23] \}$,
$\{ \p[23], \p[24] \}$, $\{ \p[24], \p[12] \}$ , we deduce that $(s\R-\I)(\P) \perp
\n[2]$. Repeating again, we have $(s\R-\I)(\P) \perp \n[3]$ and thus $\dim
(s\R-\I)(\P)=0$.
\item \textbf{Case 3:} $\dim (s\R-\I)(\P) =2$ \par
We rewrite \eqref{eq:NecssSuff} as
\begin{equation}
\label{eq:NecssSuffv02}
\H\cdot\p[ij] \in \V[i] \cap \V[j] \iff -\t \in (s\R-\I) \p[ij] +[\n[i], \n[j]]^\perp.
\end{equation}
Hence, if for all $i \neq j \in \{1, 2, 3, 4 \}$, $\H\cdot\p[ij] \in \V[i] \cap
\V[j]$, then the lines $(s\R-\I) \p[ij] +[\n[i], \n[j]]^\perp$ in $\mathbb{R}^3$ must have
a common point of intersection, namely $-\t$ and hence
\begin{equation}
\label{eq:MasternIntersection}
\bigcap \limits_{i \neq j \in \{1, 2, 3, 4 \}} \{ (s\R-\I) \p[ij] + [\n[i], \n[j]]^\perp \} \neq \emptyset.
\end{equation}
\begin{remark}
When $(s\R-\I)(\P)=0$, the intersections in \eqref{eq:MasternIntersection} are
non-empty; indeed, the equation reduces to
$$\bigcap \limits_{i \neq j \in \{1,
2, 3, 4 \}} [\n[i], \n[j]]^\perp = \left(~ \sum \limits_{i \neq j} [\n[i], \n[j]]
~\right)^\perp = (\mathbb{R}^3)^\perp =0,$$
and hence $\t=0$.
\end{remark}
In the remainder of the proof, we show the intersection being non-empty
contradicts ${\rm dim}~(s\R-\I)(\P) = 2$. We do this by examining the
intersections in \eqref{eq:MasternIntersection} pairwise to arrive at a set of
necessary conditions for $\H \cdot \p[ij] \in \V[i] \cap \V[j]$ for $i \neq
j$, and then use the necessary conditions to complete the proof. We note that for $ij
\neq kl$,
\begin{align*}
&\{ (s\R-\I) \p[ij] + [\n[i], \n[j]]^\perp \} \cap \{ (s\R-\I) \p[kl] +
[\n[k], \n[l]]^\perp \} \neq \emptyset \\
&\iff (s\R-\I)(\p[ij] - \p[kl]) \in
[\n[i], \n[j]]^\perp + [\n[k], \n[l]]^\perp.
\end{align*}
The indices are a bit easier to keep track of if we set
\begin{align}
&\q[1]:=(s\R-\I)(\p[12]), ~\U[1]:=[\n[1], \n[2]]^\perp \nonumber\\
&\q[2]:=(s\R-\I)(\p[13]), ~\U[2]:=[\n[1], \n[3]]^\perp \nonumber\\
&\q[3]:=(s\R-\I)(\p[23]), ~\U[3]:=[\n[2], \n[3]]^\perp \nonumber\\
&\q[4]:=(s\R-\I)(\p[14]), ~\U[4]:=[\n[1], \n[4]]^\perp \nonumber,
\end{align}
where $\{\U[k]|k=1,\cdots,4\}$ denote the indicated one-dimensional subspaces. Then,
for each $i \neq j \in \{1, 2, 3, 4 \}$, $\U[i] \cap \U[j] =0$, and we have $\U[1]
\oplus \U[2] \oplus \U[3]=\mathbb{R}^3$. Let $\u[i]$ be a basis for $\U[i]$, so that
$\U[i] = [\u[i]]$, and write $$\u[4] =\alpha \u[1] + \beta \u[2] + \gamma \u[3].$$
\begin{claim}
\label{claim:abg}
Each of the coefficients $\alpha, \beta, \gamma$ is non-zero.
\end{claim}
\begin{proof} Suppose $\alpha=0$. Then $\U[4] \subset \U[2] + \U[3]$, that is,
$$[\n[1], \n[4]]^\perp \subset [\n[1], \n[3]]^\perp +[\n[2], \n[3]]^\perp. $$ But this is equivalent to
$$ [\n[3]] = [\n[1], \n[3]] \cap [\n[2], \n[3]] \subset [\n[1], \n[4]], $$ and hence $\{
\n[1], \n[3], \n[4]\}$ is not linearly independent, contradicting Assumption N. The same argument holds for the other coefficients.
\end{proof}
\begin{claim} A necessary condition for
\begin{equation}
\label{eq:MasternIntersectionQ}
\bigcap \limits_{i=1}^{4} \{\q[i] + \U[i] \} \neq \emptyset
\end{equation}
is that there exist real numbers $c_1, c_2, c_3, c_4$ such that
\begin{equation}
\label{eq:SimulEqns}
\begin{aligned}
\Delta \q[12]:=\q[1]-\q[2]&=c_1 \u[1] + c_2\u[2] \\
\Delta \q[13]:=\q[1]-\q[3]&=c_1 \u[1] + c_3\u[3] \\
\Delta \q[14]:=\q[1]-\q[4]&=c_1 \u[1] + c_4 \u[4] \\
\Delta \q[23]:=\q[2]-\q[3]&=c_3 \u[3] - c_2\u[2] \\
\Delta \q[24]:=\q[2]-\q[4]&=c_4 \u[4] - c_2\u[2] \\
\Delta \q[34]:=\q[3]-\q[4]&=c_3 \u[3] - c_4\u[4]
\end{aligned}
\end{equation}
\end{claim}
\begin{proof}
Each row of \eqref{eq:SimulEqns} corresponds to a condition of the form $\q[i]-\q[j] \in \U[i] \oplus \U[j]$.
The proof proceeds by expressing each of the six rows in \eqref{eq:SimulEqns} with distinct coefficients (12 in total), and then writing down three necessary compatibility conditions,
\begin{equation}
\label{eq:CompatibilityEqns}
\begin{aligned}
\Delta \q[12]-\Delta \q[13]+\Delta \q[23]&=0 \\
\Delta \q[12]-\Delta \q[14]+\Delta \q[24]&=0\\
\Delta \q[14]-\Delta \q[13]+\Delta \q[34]&=0.
\end{aligned}
\end{equation}
Because $-\t$ is in the intersection of the lines in \eqref{eq:MasternIntersection},
the resulting linear equations must have a solution, and indeed direct computation
shows that the set of solutions can be parameterized as given in the claim.
\end{proof}
The next step is to note that $[\Delta \q[12], \cdots, \Delta \q[34] ] \subset
(s\R-\I)(\P)$, and hence its dimension must be less than three. Additional
straightforward calculations show that
$$\dim ~[\Delta \q[12], \Delta \q[13], \Delta \q[24] ] = {\rm rank~}
\begin{bmatrix}
c_1 & c_1 & -\alpha c_4 \\ c_2 & 0 & c_2 - \beta c_4 \\ 0 & c_3 &-\gamma c_4
\end{bmatrix}.
$$
In light of Claim~\ref{claim:abg}, the rank is less than three if, and only if, any
two coefficients of $\{ c_1, c_2, c_3, c_4\}$ are zero. But if this is the case, then
at least one row of \eqref{eq:SimulEqns} must be zero. Each row of
\eqref{eq:SimulEqns}, however, has the form $(s\R-\I)(\p[a] - \p[b])$, where
$\{\p[a], \p[b] \}$ is a basis for $\P$, and thus it cannot be the case that $\dim
(s\R-\I)(\P) = 2.$ This completes the proof.
\end{itemize}
\end{proof}
\section{Global Pose and Feature Estimation}
\label{sec:GlobalPoseForOptimal}
In this section, we propose a means to use known target geometry to extract target
vertices while globally estimating the relative pose between target and LiDAR. For a collection of LiDAR~ returns $\mathcal{TP} := \{\mathcal{X}_i\}_{i=1}^N$, let
$\mathcal{EP}:=\{E_i\}_{i=1}^M\subseteq\mathcal{TP}$ be the $M$ target edge points. Given the target geometry, we define a template with vertices
$\{\bar{X}_i\}_{i=1}^4$ located at the origin of the LiDAR~ and aligned with the
$y$-$z$ plane as defined in Fig.~\ref{fig:PoseDefinition}. We also denote
$\overbar{\mathcal{L}}:=\{\overbar{\ell}_i\}_{i=1}^4$ as the equations of the lines through adjacent
vertices of the template. We seek a rigid-body transformation from the template to
the target, $\H[L][T] \in \mathrm{SE}(3)$, that ``best fits'' the template onto the edge points. In practice, it is actually easier to project the
edge points $\mathcal{EP}$ back to the origin of the LiDAR~ through the inverse of the
current estimate of transformation $\H[T][L]:=\inv{\left(\H[L][T]\right)}$ and
measure the error there. The action of $\H\in\mathrm{SE}(3)$ on $\mathbb{R}^3$ is $\H\cdot\mathcal{X}_i
= \R\mathcal{X}_i+\t$, where $\R\in\mathrm{SO}(3)$ and $\t\in\mathbb{R}^3$.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth, trim={0cm 0cm 0cm 0.cm},clip]{PoseIllustration_exp2m.png}
\caption{Pose definition and illustration of template fitting.
A coordinate frame for the template (target shown in black) is defined by aligning the plane of the template with the $y$-$z$ plane of the LiDAR~ frame and also aligning the mean of its vertices with the origin of the LiDAR.
Let $H_T^L$ (blue arrow) be an estimate of the rigid-body transformation from target to LiDAR, projecting the edge points of the target back to the template. The
estimated pose of the target is then given by the
inverse transformation, $H_L^T$ (green arrow). The optimal $H_T^L$ is obtained by minimizing \eqref{eq:FittingError} (based on point-to-line distance).
This figure also shows a fitting result of a target at 2 meters in the Ford
Robotics Building. The red frame is the template re-projected onto the LiDAR~ point cloud by $H_L^T$.
} \label{fig:PoseDefinition}
\vspace{-4mm}
\end{figure}
\begin{figure*}[!b]%
\centering
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{Sim-10m-front.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{Sim-20m-front.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{Sim-30m-front.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{Sim-40m-front.png}
\end{subfigure}
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{Sim-10m-side.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{Sim-20m-side.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{Sim-30m-side.png}
\end{subfigure}%
\vspace{2pt}
\begin{subfigure}{0.48\columnwidth}
\centering
\includegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{Sim-40m-side.png}
\end{subfigure}%
\caption[]{Simulation results of the noise-free dataset of the pose estimation at various distances (10, 20, 30, 40 m).
LiDAR~ returns (blue dots) on the target are provided by the LiDAR~ simulator.
Black indicates the ground truth pose from the simulator, and red is the estimated pose and vertices. The top
and bottom show the front view and a side view of the fitting results,
respectively.
}%
\label{fig:simResults}%
\end{figure*}
The cost $j_i$ of edge point $E_i\in\mathcal{EP}$ is defined as the point-to-line distance,
\begin{equation}
j_i(E_i; \overbar{\mathcal{L}}) =
\min_{\overbar{\ell}_j\in\overbar{\mathcal{L}}}\mathrm{dist}\left(E_i, \overbar{\ell}_j\right)^2,
\end{equation}
where $\overbar{\mathcal{L}}$ is the set of line equations for the target. Let
$\{\bar{E}_i\}_{i=1}^{M} := H_T^L(\mathcal{EP})=\{H_T^L\cdot E_i\}_{i=1}^M$ denote the
points projected by $H_T^L$. The total fitting error is defined as
\begin{equation}
\label{eq:FittingError}
J(H_T^L(\mathcal{EP})) := \sum_{i=1}^M j_i(\bar{E}_i; \overbar{\mathcal{L}}).
\end{equation}
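A direct evaluation of the fitting error for points already projected into the template frame can be sketched as follows (restricted to 2D for brevity; helper names and the example points are ours):

```python
import numpy as np

def point_to_line_dist(p, a, b):
    """Distance from 2D point p to the infinite line through a and b."""
    d = b - a
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal of the line
    return abs(np.dot(p - a, n))

def fitting_error(points, template_vertices):
    """Sum over points of the minimum squared point-to-line distance to the
    template edges, mirroring the cost J of Eq. (FittingError)."""
    m = len(template_vertices)
    total = 0.0
    for p in points:
        dists = [point_to_line_dist(p, template_vertices[i],
                                    template_vertices[(i + 1) % m])
                 for i in range(m)]
        total += min(dists) ** 2
    return total

square = [np.array(v) for v in [(0., 0.), (1., 0.), (1., 1.), (0., 1.)]]
pts = [np.array([0.5, -0.1]), np.array([1.05, 0.5])]
J = fitting_error(pts, square)   # 0.1^2 + 0.05^2
```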
To minimize \eqref{eq:FittingError}, we adopt techniques that were used to globally
solve 3D registration or 3D SLAM~\cite{tron2015inclusion, carlone2015lagrangian,
olsson2008solving, briales2017convex}, where the problem is formulated as a
quadratically constrained quadratic program (QCQP) and a Lagrangian dual
relaxation is applied. The relaxed problem is a semidefinite program (SDP) and
hence convex, so it can be solved globally and efficiently by off-the-shelf
solvers~\cite{grant2014cvx}. As shown
in~\cite{briales2017convex}, the dual relaxation is empirically always tight (the
duality gap is zero).
Once we (globally) obtain $\H[T][L]$, the pose of the target is $\H[L][T] =
\inv{(\H[T][L])}$, and the estimated vertices are $\{\widetilde{X}_i\}_{i=1}^4 :=
\{\H[L][T]\cdot\bar{X}_i\}_{i=1}^4$. The edge-line equations, the normal vector, and
the plane equation of the target can be readily obtained from the vertices.
\begin{figure*}[!t]%
\centering
\begin{subfigure}{0.68\columnwidth}
\centering
\includegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{RMSE-all.png}
\end{subfigure}%
\begin{subfigure}{0.68\columnwidth}
\centering
\includegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{Translation-all.png}
\end{subfigure}%
\begin{subfigure}{0.68\columnwidth}
\centering
\includegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{Rotation-all.png}
\end{subfigure}
\begin{subfigure}{0.68\columnwidth}
\centering
\includegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{RMSE-percentage-all.png}
\end{subfigure}%
\begin{subfigure}{0.68\columnwidth}
\centering
\includegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{Translation-percentage-all.png}
\end{subfigure}%
\caption[]{Simulation results with a target placed at distances from 2
to 40 meters in 2 m increments in the LiDAR~ simulator. At each distance, the simulation data are collected
with the target face-on to the LiDAR~ as an easy case (solid line), and for the
other, the target is
rotated by the Euler angle (roll = $20^\circ$, pitch = $30^\circ$, yaw = $30^\circ$)
as a challenging case (dashed line). In addition, we add two different levels of
noise to each dataset, as indicated by the different colors.}%
\label{fig:simChart}%
\end{figure*}
\section{Vertices Determination From LiDAR~ Measurement}
The vertices of a target are important features for target-based extrinsic calibration,
target plane estimation, target pose estimation, etc. However, vertices are not
directly observable in sparse LiDAR~ point clouds. We proposed an $L_1$-inspired
norm to estimate vertices in \cite[(2)]{huang2019improvements}. In that
paper, we seek a rigid-body transformation from LiDAR~ to the target, $H_L^T$, that
``best fits'' the ideal target onto the point cloud. In practice, it is actually
easier to pull the $\mathcal{PC}$ back to the origin of the LiDAR~ through the inverse of the
current estimate of transformation $H_T^L:=(H_L^T)^{-1}$ and measure the error there.
For more details about this method, see \cite[Sec.~II]{huang2019improvements}.
However, instead of the diamond-shaped target used in \cite{huang2019improvements},
we use the optimal shape from Sec.~\ref{sec:ShapeSensitivity}. The cost function
\cite[(3)]{huang2019improvements} has to be modified because the target is no longer
a square. This section provides a generic cost function for any polygonal
shape, its gradient, and a convexity analysis of the cost function.
\subsection{$L_1$-inspired cost function for a polygonal shape}
Let $\mathcal{PC} := \{\mathcal{X}_i\}_{i=1}^{M}$ be a collection of 3D points on a LiDAR~ target,
where $M$ is the number of points. Given the target geometry (here, the optimal
shape determined in Sec.~\ref{sec:ShapeSensitivity}), we define an ideal target with
$m$ vertices $\bar{\mathcal{V}} := \{ \bar{X}_i| \bar{X}_i=(y_i, z_i)\}_{i=1}^m$ located at
the origin of the LiDAR~ with thickness $\epsilon$ as defined in
Fig.~\bh{fig:IdealFrame}. Without loss of generality, we assume the centroid of the
ideal target is at the origin of the LiDAR\footnote{The ideal target is a 2D shape
placed on the $y$-$z$ plane with thickness $\epsilon$ along the $x$-axis.}.
\subsubsection{Region of interest}
Let $\mathcal{L} :=\{l_i\}_{i=1}^m$ be the line equations\footnote{All the lines lie on
$y$-$z$ plane.} for the edges of the ideal target and are computed as
\begin{align}
l_i(\bar{X}_i, \bar{X}_{i+1}) := &(y_{i+1} - y_i)z + (z_i-z_{i+1})y \nonumber\\
&-z_iy_{i+1} + z_{i+1}y_i = 0,
\end{align}
where $\bar{X}_{m+1} = \bar{X}_1$, $l_i$ is the line equation for the $i$-th edge of the ideal
target, and the $i$-th edge connects $\bar{X}_i$ and $\bar{X}_{i+1}$.
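The edge-line coefficients above can be computed mechanically; a small sketch (with hypothetical names, writing each line as $a\,y + b\,z + c = 0$ to match the equation):

```python
def edge_lines(verts):
    """Coefficients (a, b, c) with a*y + b*z + c = 0 for each edge of a
    polygon with vertices verts = [(y_i, z_i)], closed cyclically."""
    lines = []
    m = len(verts)
    for i in range(m):
        (y1, z1), (y2, z2) = verts[i], verts[(i + 1) % m]
        a = z1 - z2            # coefficient of y
        b = y2 - y1            # coefficient of z
        c = z2 * y1 - z1 * y2  # constant term
        lines.append((a, b, c))
    return lines

# Usage: a unit square on the y-z plane.
square = [(0., 0.), (1., 0.), (1., 1.), (0., 1.)]
lines = edge_lines(square)
```

Each returned triple vanishes at both endpoints of its edge, as required.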
Given a line $l_i$, the $y$-$z$ plane is split into two areas: $l_i>0$ and
$l_i<0$\footnote{$l_i=0$ is the line itself.}. We denote $\RI[i][+]$ and $\RI[i][-]$
as the area of $l_i>0$ and the area of $l_i<0$, respectively. Therefore, the ideal
target located at the origin can be represented as an intersection of
$\{\RI[i][*]\}_{i=1}^m$, called the ``region of interest'' (RoI):
\begin{equation}
\RI[yz][*] = \bigcap_{i=1}^m \RI[i][*],
\end{equation}
where $\RI[i][*]$ is the region containing the ideal target. Specifically,
$\RI[yz][*]$ for the optimal shape in Fig.~\bh{fig:regionOfInterest} is
\begin{equation}
\RI[yz][*] = \RI[1][+] \cap \RI[2][-] \cap \RI[3][-]\cap \RI[4][+].
\end{equation}
Similarly, $\RI[xz][*]$ can be expressed as:
\begin{equation}
\RI[xz][*] = \bigcap_{i=1}^2 \RI[i][*] = \RI[1][-] \cap \RI[2][+],
\end{equation}
where again $\RI[i][*]$ is the region containing the ideal target.
\begin{remark}
$\RI[yz][*]$ is a polygonal region with $m$ vertices and $\RI[xz][*]$ is the slab
between two vertical lines separated by $\epsilon$. Thus, $\RI[yz][*]$ is the
intersection of $m$ half-planes and $\RI[xz][*]$ of two.
\end{remark}
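Membership in the RoI reduces to checking the sign of each edge-line equation. A minimal sketch (hypothetical names; `signs` encodes whether the target lies on the $l_i>0$ or $l_i<0$ side of each edge):

```python
def in_roi(p, lines, signs):
    """Check whether 2D point p = (y, z) lies in the region of interest,
    i.e., on the prescribed side of every edge line.
    lines : [(a, b, c)] with a*y + b*z + c = 0
    signs : [+1 or -1], the side of each line containing the target."""
    y, z = p
    return all(s * (a * y + b * z + c) >= 0
               for (a, b, c), s in zip(lines, signs))

# Usage: RoI of a unit square; all four constraints are satisfied
# with positive sign for these line coefficients.
square_lines = [(0., 1., 0.), (-1., 0., 1.), (0., -1., 1.), (1., 0., 0.)]
signs = [1, 1, 1, 1]
```

The same pattern applies to $\RI[xz][*]$ with the two lines $x = \pm\epsilon/2$.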
\subsubsection{Cost function of a polygonal shape}
Let $\{\tilde{\mathcal{X}}_i\}_{i=1}^M:=H_T^L(\mathcal{PC}):=\{
H_T^L(\mathcal{X}_i) \}_{i=1}^M$ denote the pullback of the point cloud by $H_T^L$, and
denote a point's Cartesian coordinates by $(\td[x]_i,\td[y]_i,\td[z]_i)$. Let
$\td[p]_{ixz}$ and $\td[p]_{iyz}$ be the orthogonal projections of $\td[\mathcal{X}_i]$
onto the $x$-$z$ plane and the $y$-$z$ plane, respectively, as shown in
Fig.~\bh{fig:costFunction}. Because the 2D optimal shape is placed on the $y$-$z$ plane with thickness $\epsilon$,
the cost function is computed on the $y$-$z$ plane and the $x$-$z$ plane.
Let $\mathds{1}(\td[p]_i, l_i)$ be the indicator function determining if $\td[p]_i$ is within
the $\RI[i][*]$ of $l_i$ and let $\mathcal{M}(\td[p]_i, l_i)$ denote the Manhattan distance between
$\td[p]_i$ and $l_i$. The cost for a pullback point $\td[\mathcal{X}_i]$ is defined as
\begin{align}
f(\td[\mathcal{X}_i]; \epsilon, \mathcal{L}) &= f(H_T^L\mathcal{X}_i; \epsilon, \mathcal{L}) \nonumber\\
&:= \mathcal{M}_{x}\mathds{1}_{x}(\td[x]_i, \epsilon) +
\mathcal{M}_{yz}\mathds{1}_{yz}(\td[y]_i,\td[z]_i, \mathcal{L}),
\end{align}
where $\mathds{1}_x(\td[x]_i, \epsilon)$ and $\mathcal{M}_x\mathds{1}_x(\td[x]_i, \epsilon)$ are defined as
\begin{align}
\mathds{1}_{x}(\td[x]_i, \epsilon) &=
\begin{cases}
1 & \text{if } \td[x]_i \notin \RI[xz][*]\\
0 & \text{if } \td[x]_i \in \RI[xz][*]\\
\end{cases}, \\
\mathcal{M}_x\mathds{1}_x(\td[x]_i, \epsilon) &=
\begin{cases}
~~(\td[x]_i - \frac{\epsilon}{2}) \cdot \mathds{1}_x(\td[x]_i, \epsilon) & \text{if } \td[x]_i>0\\
- (\td[x]_i + \frac{\epsilon}{2}) \cdot \mathds{1}_x(\td[x]_i, \epsilon) & \text{if } \td[x]_i\leq0\\
\end{cases}
\end{align}
and $\mathds{1}_{yz}(\td[y]_i,\td[z]_i, l_i)$ and $\mathcal{M}_{yz}\mathds{1}_{yz}(\td[y]_i,\td[z]_i, \mathcal{L})$ are defined as
\begin{align}
\mathds{1}_{yz}(\td[y]_i,\td[z]_i, l_i) &=
\begin{cases}
1 & \text{if } (\td[y]_i,\td[z]_i) \notin \RI[i][*]\\
0 & \text{if } (\td[y]_i,\td[z]_i) \in \RI[i][*]\\
\end{cases},\\
\mathcal{M}_{yz}\mathds{1}_{yz}(\td[y]_i,\td[z]_i, \mathcal{L}) &=
\sum_{j=1}^m \frac{|a_j\td[y]_i + b_j\td[z]_i + c_j|}{\sqrt{a_j^2+b_j^2}} \mathds{1}_{yz}(\td[y]_i,\td[z]_i, l_j),
\end{align}
where $a_j$, $b_j$, and $c_j$ are the coefficients of $y$ and $z$ and the constant
term of the edge line $l_j$, respectively.
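Putting the two terms together, the per-point cost can be sketched as follows (a minimal illustration with hypothetical names, assuming the target lies on the positive side of every edge line; the per-edge penalty is the point-to-line distance appearing in the displayed sum, and the $x$-term is the distance to the slab of thickness $\epsilon$):

```python
import math

def point_cost(p, lines, signs, eps):
    """L1-inspired cost for one pulled-back point p = (x, y, z):
    zero inside the ideal target volume, otherwise the distance to the
    x-slab of thickness eps plus the point-to-line distance for every
    violated edge constraint in the y-z plane."""
    x, y, z = p
    cost = max(abs(x) - eps / 2.0, 0.0)           # x-direction term
    for (a, b, c), s in zip(lines, signs):
        v = s * (a * y + b * z + c)
        if v < 0:                                  # outside this half-plane
            cost += abs(a * y + b * z + c) / math.hypot(a, b)
    return cost

# Usage: unit-square edge lines (target on the positive side of each).
square_lines = [(0., 1., 0.), (-1., 0., 1.), (0., -1., 1.), (1., 0., 0.)]
signs = [1, 1, 1, 1]
```

Points inside the ideal target volume incur zero cost, so the optimization over $H_T^L$ is driven only by points landing outside it.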
\subsubsection{Gradient of the cost function}
\subsection{Convexity Analysis}
\section{Vertices Determination From the $L_1$-Inspired Cost}
\label{sec:VertexFromL1}
\section*{Acknowledgment}
\small{
Toyota Research Institute provided funds to support this
work. Funding for J. Grizzle was in part provided by NSF
Awards No.~1808051 and 2118818. The first author thanks Wonhui Kim for
useful conversations.}
\bibliographystyle{bib/IEEEtran}
\section{Conclusions}
\label{sec:Conclusions}
We proposed a new method to determine the extrinsic calibration of a LiDAR-camera pair. When evaluated against a state-of-the-art baseline, it resulted in, on average, more than a 50\% reduction in LiDAR-to-camera projection error and a 70\% reduction in its variance. These results were established through a round-robin validation study in which two targets were used for training, and further buttressed with results for training on four, six, and eight targets.
Two other benefits of our $L_1$-based method are: (1) it does not require the estimation of a target normal vector from an inherently noisy point cloud; and (2) it also obviates the identification of edge points and their association with specific sides of a target. In combination with lower RMS error and variance, we believe our results may provide an attractive alternative to current target-based methods for extrinsic calibration.
\section{Experimental Results}
\label{sec:ExperimentsAndResults}
In this section, we extensively evaluate our proposed method on seven different
scenes through a form of ``cross-validation'': in a round-robin fashion, we train on
one or more scenes and then evaluate on the remaining scenes. The quantitative
evaluation consists of computing pixel error per corner, where we take the image
corners as ground truth. We also show qualitative validation results by projecting
the LiDAR scans onto camera images; we include here as many as space allows, with
more scenes and larger images available at \cite{githubFile}.
\subsection{Data Collection}
\label{sec:DataGathering}
The scenes include both outdoor and indoor settings. Each scene includes two targets,
one approximately 80.5~cm square and the other approximately 15.8~cm square, with the
smaller target placed closer to the camera-LiDAR pair. We use an \textit{Intel
RealSense Depth Camera D435} and a \textit{32-Beam Velodyne ULTRA Puck LiDAR},
mounted on an in-house designed torso for a Cassie-series bipedal robot
\cite{CassieAutonomy2019ExtendedEdition}. From the CAD file, the camera is roughly
20~cm below the LiDAR and 10~cm in front of it. The angle of the camera is
adjustable. Here, its ``pitch'', in the LiDAR frame, is approximately zero.
A scan consists of the points collected in one revolution of the LiDAR's 32 beams.
The data corresponding to a single beam is also called a ring. For each scene, we
collect approximately 10~s of synchronized data, resulting in approximately 100 pairs
of scans and images.
For each target, five consecutive pairs of LiDAR scans and camera images are selected
as a data set. For each data set, we apply two methods to estimate the vertices of
the targets, a baseline and the method in Sect.~\ref{sec:NewMethod}.
\subsection{Baseline Implementation for LiDAR Vertices}
\label{sec:Baseline}
As a baseline, we use the method in \cite{zhou2018automatic}. Because an open-source
implementation was not released, we built our own, attempting to be as faithful as
possible to the described method.
For each scan, the large and small targets are individually extracted from background
\cite{huang2019lidartag}. For each target and group of five scans, we compute the
extracted point cloud's centroid and center the point cloud about the origin. SVD is
then used to find the target's normal and to orthogonally project the point cloud
onto the plane defined by the normal and the centroid. For each scan, and for each
ring hitting the target, the left and right end points of the ring are selected and
then associated with one of the four edges of the target. Lines are fitted to each
collection of edge points using least-squares and \textit{ransac}. The vertices are
obtained as the intersections of the lines in the plane defined by the target's
normal and centroid. The four vertices are then re-projected to 3D space, as in
Fig.~\ref{fig:payload_LN2D}.
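The plane-fitting step of this baseline can be sketched compactly (assuming numpy; function and variable names are illustrative, not the baseline's actual code):

```python
import numpy as np

def fit_plane_and_project(pc):
    """Baseline step: fit a plane to a target point cloud by SVD and
    orthogonally project the points onto it.
    pc : (N, 3) array of points on the target."""
    centroid = pc.mean(axis=0)
    # The right-singular vector with the smallest singular value is the
    # direction of least variance, i.e., the plane normal.
    _, _, Vt = np.linalg.svd(pc - centroid)
    n = Vt[-1]
    d = (pc - centroid) @ n                 # signed point-to-plane distances
    return pc - np.outer(d, n), n, centroid

# Usage: a nearly planar cloud; the projected points lie exactly on the
# fitted plane through the centroid.
pc = np.array([[0., 0., 1.], [1., 0., 1.], [0., 1., 1.], [1., 1., 1.2]])
proj, n, c = fit_plane_and_project(pc)
```

Edge extraction, line fitting, and vertex intersection then operate in the 2D coordinates of this plane.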
\subsection{Camera Corners and Associations}
\label{sec:CameraCorners}
The process of determining the target corners begins by clicking near them in the
image. This is the only manual intervention required in the paper and even this
process will soon be automated. As shown in Fig.~\ref{fig:Cannyedges},
between any given two clicked points, i.e., an edge of the target, a bounding box is
drawn around the two points. Once we have roughly located the corners of the target,
we process the image using the \textit{edge} command in MATLAB with the `Canny'
option to detect the edge points within the bounding box. A line is then fit through
each edge using \textit{ransac}, and the intersections of the resulting lines define
the target's corners, as shown in Fig.~\ref{fig:CannyLineFitted}. A video and an
implementation of this process is released along with the code.
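The final intersection step is conveniently done in homogeneous coordinates, where the intersection of two image lines is their cross product. A minimal sketch (assuming numpy; hypothetical names):

```python
import numpy as np

def corner_from_lines(l1, l2):
    """Intersect two image lines given in homogeneous form
    l = (a, b, c) with a*u + b*v + c = 0; the corner is their
    cross product, dehomogenized."""
    p = np.cross(np.asarray(l1, float), np.asarray(l2, float))
    return p[:2] / p[2]

# Usage: the vertical line u = 2 and the horizontal line v = 3
# intersect at pixel (2, 3).
corner = corner_from_lines([1., 0., -2.], [0., 1., -3.])
```

The same construction gives the LiDAR-side vertices in the baseline once its edge lines have been fitted in the target plane.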
\begin{figure}[t]%
\centering
\subfloat[]{%
\label{fig:Cannyedges}%
\includegraphics[width=0.48\columnwidth, trim={3.5cm 12.6cm 3cm 10cm},clip]{CannyLineFitted.png}}~
\hspace{2pt}%
\subfloat[]{%
\label{fig:CannyLineFitted}%
\includegraphics[width=0.5\columnwidth, trim={2cm 1.7cm 2cm 0.7cm},clip]{CannyLineFittedEnlarge-eps-converted-to.pdf}}%
\caption[]{\subref{fig:Cannyedges} shows the result of edge detection.
\subref{fig:CannyLineFitted} shows the interior pixels (marked in green) of a
bounding box given two clicked corners. The edge points and the edge line (as found by
\textit{ransac}) are marked in magenta and cyan, respectively. The corners are defined
by the intersections of the resulting lines.}
\vspace{-4mm}
\end{figure}
\subsection{Extrinsic Calibration}
\label{sec:Extrinsic}
On the basis of the associated target vertices and camera-image corners, the PnP
method of \eqref{eq:PnP} is used to find the rigid-body transformation from LiDAR to
camera. We also checked using the IoU (see Sec.~\ref{sec:IoU}), but because the
results are similar, we are not reporting them here. The interested reader can
consult \cite{githubFile}. SVD and \textit{ransac} were computed using corresponding
MATLAB commands, while the optimizations in \eqref{eq:OurNewCost} and \eqref{eq:PnP}
were done with \texttt{fmincon}.
\subsection{Quantitative Results and Round-robin Analysis}
\label{sec:QuantitativeResultS}
In Table~\ref{tab:cross-v1}, we present the RMS error of the LiDAR vertices projected
into the camera's image plane for the baseline\footnote{SVD to extract the normal and
\textit{ransac} for individual line fitting, with the vertices obtained as the
intersections of the lines.}, labeled \textbf{RN}, and our method in
\eqref{eq:JKHcost} and \eqref{eq:OurNewCost}, labeled \textbf{GL$_1$}. In the case of
two targets, a full round-robin study is performed: the rigid-body transformation
from LiDAR to camera is ``trained'' on the combined set of eight vertices from both
targets and then ``validated'' on the eight vertices of each of the remaining six
scenes.
A complete round-robin study for four targets from two scenes would require 21
validations, while for six and eight targets, 35 validations each would be required.
For space reasons, we report only a subset of these possibilities.
To be precise, we report the error in units of pixels per corner:
\begin{equation}
\label{eq:ErrorMeasure}
\resizebox{0.4\columnwidth}{!}{%
$\sqrt{\frac{1}{4n}\sum_{i=1}^{4n}\norm{\pre[L]Y_i-\pre[C]Y_i}_2^2},$
}
\end{equation}
where $4n$ is the total number of target corners. In the case of two targets and one
scene, $n=2$. In the case of six targets from three scenes, $n=6$.
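The error measure in \eqref{eq:ErrorMeasure} can be sketched directly (assuming numpy; `LY` and `CY` stand for the projected LiDAR vertices and the camera corners):

```python
import numpy as np

def rms_corner_error(LY, CY):
    """RMS reprojection error per corner, in pixels, between projected
    LiDAR vertices LY and camera corners CY, both (4n, 2) arrays."""
    d2 = np.sum((np.asarray(LY) - np.asarray(CY)) ** 2, axis=1)
    return np.sqrt(d2.mean())

# Usage: one perfect corner and one corner off by a 3-4-5 pixel offset.
err = rms_corner_error([[0., 0.], [3., 4.]], [[0., 0.], [0., 0.]])
```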
Figure~\ref{fig:quantitative_results} illustrates several point clouds projected to
the corresponding camera images. Summary data is presented in Table~\ref{tab:SummaryTable}.
\input{ExperimentTables.tex}
\begin{figure*}[b!]%
\centering
\subfloat{%
\includegraphics[width=0.69\columnwidth, trim={0.5cm 1.0cm 0cm 0.5cm},clip]{v1-2.png}}~
\hspace{2pt}%
\subfloat{%
\includegraphics[width=0.69\columnwidth, trim={0.5cm 1.0cm 0cm 0.5cm},clip]{v3-2.png}}~
\hspace{2pt}%
\subfloat{%
\includegraphics[width=0.69\columnwidth, trim={0.5cm 1.0cm 0cm 0.5cm},clip]{v4-2.png}}\\
\hspace{2pt}%
\subfloat{%
\includegraphics[width=0.69\columnwidth, trim={0.5cm 1.0cm 0cm 0.5cm},clip]{v5-2.png}}~
\hspace{2pt}%
\subfloat{%
\includegraphics[width=0.69\columnwidth, trim={0.5cm 1.0cm 0cm 0.5cm},clip]{v6-2.png}}~
\hspace{2pt}%
\subfloat{%
\includegraphics[width=0.68\columnwidth, trim={0.5cm 1.0cm 0cm 0.5cm},clip]{v7-2.png}}%
\caption[]{Visual depiction of the validation data in the last row of
Table~\ref{tab:cross-v1}. For the method \textbf{GL$_1$}, five sets of estimated
LiDAR vertices for each target have been projected into the image plane and marked in
green, while the target's point cloud has been marked in red. Enlarging an image
allows the numbers reported in the table to be visualized; the estimated vertices are the key features.}%
\label{fig:quantitative_results}%
\end{figure*}
\section{Qualitative Results and Discussion}
\label{sec:QualitativeResults}
In LiDAR-to-camera calibration, due to the lack of ground truth, it is traditional to
show projections of LiDAR point clouds onto camera images. Often it is unclear if one
is viewing training data or validation data. In Figure~\ref{fig:TestingDataSet}, we
show a set of projections of LiDAR point clouds onto camera images for validation
data. An additional set of images can be found in~\cite{githubFile}. All of them show
that the key geometric features in the image and point cloud are well aligned. The
quality of the alignment has allowed our laboratory to build a high-quality
(real-time) 3D semantic map with our bipedal robot Cassie Blue \cite{gan2019bayesian}. The
map fuses LiDAR, camera, and IMU data; with the addition of a simple planner, it led
to autonomous navigation \cite{CassieSemanticMappingYT}.
Tables~\ref{tab:cross-v1} and~\ref{tab:SummaryTable} show that \textbf{GL$_1$}
outperforms the baseline: on average, there is more than a 50\% reduction in projection error and a 70\% reduction in its variance. As for the sources of this improvement, we highlight the following points:
\begin{enumerate}
\renewcommand{\labelenumi}{(\alph{enumi})}
\setlength{\itemsep}{.2cm}
\item Least-squares-based methods estimate a normal vector for the target's point
cloud. As shown in Fig.~\ref{fig:FigureLiDARScanExample} and
Fig.~\ref{fig:FigureLiDARScanExample3Rings}, even though the target is flat and
has a well-defined normal, the returns in the LiDAR point cloud do not lie on a
plane. Hence, a global normal vector does not exist.
\item Least-squares-based methods extract the target edge points from the point
cloud for use in line fitting. The line fitting must be done in the plane defined
by the estimated normal because, in 3D, non-parallel lines do not necessarily
intersect to define a vertex.
\item \textbf{GL$_1$} explicitly uses the target geometry in formulating the cost
function. By assigning zero cost to interior points in the ``ideal target
volume'' and non-zero otherwise, the optimization problem is focused on orienting
the boundary of the target volume within the 3D point cloud. This perspective
seems not to have been used before.
\item Hence, our approach does not require the (tedious and error-prone)
      \textit{explicit} extraction and assignment of points to target edges. The
      determination of \textit{boundary points} in 3D is \textit{implicitly} being done
      with the cost function.
\end{enumerate}
At the present time, our extrinsic calibration method is not ``automatic''. The one manual intervention, namely clicking on the approximate target corners in the camera image, will be automated soon.
We have not conducted a study on how to place the targets. This is an interesting piece of future work because of the nonlinear operation required when projecting LiDAR points to the camera plane; see \eqref{eq:projection_linear} and \eqref{eq:projection_nonlinear}.
\begin{figure*}[t!]%
\centering
\subfloat{%
\includegraphics[width=0.65\columnwidth, trim={0.5cm 1.0cm 0cm 0.5cm},clip]{test1_3.png}}~
\hspace{3pt}%
\subfloat{%
\includegraphics[width=0.65\columnwidth, trim={0.5cm 1.0cm 0cm 0.5cm},clip]{test2_3.png}}~
\hspace{3pt}%
\subfloat{%
\includegraphics[width=0.65\columnwidth, trim={0.5cm 1.0cm 0cm 0.5cm},clip]{test3_3.png}}\\
\hspace{3pt}%
\subfloat{%
\includegraphics[width=0.65\columnwidth, trim={0.5cm 1.0cm 0cm 0.5cm},clip]{test5_3.png}}~
\hspace{3pt}%
\subfloat{%
\includegraphics[width=0.65\columnwidth, trim={0.5cm 1.0cm 0cm 0.5cm},clip]{test6_3.png}}~
\hspace{3pt}%
\subfloat{%
\includegraphics[width=0.65\columnwidth, trim={0.5cm 1.0cm 0cm 0.5cm},clip]{test7_3.png}}%
\caption[]{Qualitative validation results. For the method \textbf{GL$_1$} trained on S$_1$, the LiDAR point cloud has been projected into the image plane for the other data sets and marked in green. The red circles highlight various poles, door edges, desk legs, monitors, and sidewalk curbs where the quality of the alignment can be best judged. The reader may find other areas of interest. Enlarge in your browser for best viewing.}
\label{fig:TestingDataSet}
\vspace{-4mm}
\end{figure*}
\section{Extrinsic Transformation Optimization}
\label{sec:extrinsic_opt}
This section assumes the vertices of the target in the LiDAR frame and in the
camera's image plane have been determined, along with their correspondences. The
optimization of the extrinsic transformation can be formulated as a standard PnP
problem: minimizing the Euclidean distance between corresponding corners. We also propose
maximizing the intersection over union (IoU) of the corresponding projected polygons.
\subsection{Euclidean distance}
The standard PnP formulation is
\begin{align}
\label{eq:PnP}
\left({R_L^C}^*, {T_L^C}^*\right) &:= \argmin_{R,
T}\sum_{i=1}^{4n}\norm{\Pi\left(X_i; R, T\right)-\pre[C]Y_i}_2^2\\
&=\argmin_{R,
T}\sum_{i=1}^{4n}\norm{\pre[L]Y_i-\pre[C]Y_i}_2^2,
\end{align}
where $\pre[C]Y_i\in \mathbb{R}^2$ are the camera corners, $\pre[L]Y_i\in
\mathbb{R}^2$ are defined in \eqref{eq:Projection}, and $n$ is the number of target
poses.
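A minimal numerical sketch of solving \eqref{eq:PnP} (assuming scipy and numpy; the rotation is parameterized by a rotation vector, and all names and the synthetic data are illustrative, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reproj_residual(params, X, Y, K):
    """Stacked pixel residuals for the PnP cost.
    params = (rotvec(3), t(3)); X: (n, 3) LiDAR vertices; Y: (n, 2) pixels."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    P = (K @ (X @ R.T + t).T).T          # homogeneous pixel coordinates
    return (P[:, :2] / P[:, 2:3] - Y).ravel()

def solve_pnp(X, Y, K, x0):
    return least_squares(reproj_residual, x0, args=(X, Y, K)).x

# Usage: synthetic vertices viewed under a known pose, recovered from a
# rough initial guess.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
X = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.],
              [0.5, 0.2, 0.3], [0.2, 0.8, 0.1]])
t_true = np.array([0.1, -0.2, 5.0])
Y = (K @ (X + t_true).T).T
Y = Y[:, :2] / Y[:, 2:3]
sol = solve_pnp(X, Y, K, x0=np.array([0., 0., 0., 0., 0., 4.]))
```

In practice, a CAD-derived guess plays the role of `x0`, as noted in the introduction.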
\subsection{IoU optimization}
\label{sec:IoU}
To compute the IoU of two projected polygons of the targets in an image plane, we
define the intersection as a polygon with known coordinates\footnote{The vertices of
the polygon consist of the 2D corners and the 2D line intersections and thus can be
computed efficiently.}. We sort the vertices of the polygon counterclockwise using
Graham scan, a method to find the convex hull of a finite set of points on a plane
\cite{graham1972efficient, andrew1979another}. The intersection area of the polygon ($\A$)
can be calculated via the Shoelace algorithm \cite{braden1986surveyor}:
\begin{equation}
\label{eq:Shoelace}
\A = \frac{1}{2}\left|\sum_{i=1}^{N} \det\begin{bmatrix} x_i & x_{i+1} \\ y_i &
y_{i+1}\end{bmatrix}\right|,
\end{equation}
where $x_{N+1} = x_1$ and $y_{N+1} = y_1$.
The union of the two projected polygons then follows from inclusion-exclusion,
given the intersection area calculated above and the area of each projected polygon.
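The three ingredients above can be sketched compactly (hypothetical names; the counterclockwise sort around the centroid is valid for the convex polygons arising here, which is the role Graham scan plays in the text):

```python
import math

def sort_ccw(pts):
    """Sort the vertices of a convex polygon counterclockwise
    around their centroid."""
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    return sorted(pts, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))

def shoelace(pts):
    """Polygon area via the Shoelace formula, vertices in order."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1]
            - pts[(i + 1) % n][0] * pts[i][1] for i in range(n))
    return abs(s) / 2.0

def union_area(a1, a2, inter):
    """Inclusion-exclusion: |A u B| = |A| + |B| - |A n B|."""
    return a1 + a2 - inter
```

The IoU is then `inter / union_area(a1, a2, inter)`.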
\section{Image Plane Corners and Correspondences with the LiDAR Vertices}
\label{sec:CameraStuff}
For a given camera image of a LiDAR target, let $\{ \pre[C]Y_i\}_{i=1}^4$ denote the
vertices of the camera image. We assume that these have been obtained through the
user's preferred method, such as corner detection \cite{harris1988combined,
rosten2006machine, rosten2008faster}, edge detection \cite{canny1987computational,
lim1990two, vincent2009descriptive}, or even manual selection. This is not the hard
part of the calibration problem. To achieve simple correspondences $X_i
\leftrightarrow \pre[C]Y_i$, the order of the indices on $\{ X_i\}_{i=1}^4$ may need
to be permuted; we use the vertical and horizontal positions to sort them.
Once we have the correspondences, the next step is to project the vertices of the
LiDAR~ target, $\begin{bmatrix} x_i & y_i & z_i & 1 \end{bmatrix}^\mathsf{T} = X_i$,
into the image coordinates. The standard relations are \cite{hartley2000multiple,
forsyth2002computer}
\begin{align}
\label{eq:projection_linear}
\begin{bmatrix}
u^\prime\\
v^\prime\\
w^\prime
\end{bmatrix} &=
\begin{bmatrix}
f_x & s & c_x \\
0 & f_y & c_y \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
\mathbb{1}_{3\times3} \\ \zeros[1\times3]
\end{bmatrix}^\mathsf{T}
\begin{bmatrix}
R_L^C & T_L^C \\
\zeros[1\times 3] & 1 \\
\end{bmatrix}
\begin{bmatrix}
x_i \\
y_i \\
z_i \\
1
\end{bmatrix}\\
\label{eq:projection_nonlinear}
\pre[L]Y_i &= \begin{bmatrix}
u & v & 1
\end{bmatrix}^{\mathsf{T}} =
\begin{bmatrix}
\frac{u^\prime}{w^\prime} & \frac{v^\prime}{w^\prime} & 1
\end{bmatrix}^{\mathsf{T}},
\end{align}
where \eqref{eq:projection_linear} includes the camera's intrinsic parameters and the
extrinsic parameters $(R_L^C, T_L^C)$ that we seek.
For later use, we combine \eqref{eq:projection_linear} and \eqref{eq:projection_nonlinear} to define
\begin{equation}
\label{eq:Projection}
\Pi\left(X_i; R_L^C, T_L^C \right):=\pre[L]Y_i,
\end{equation}
the projection map from LiDAR coordinates to image coordinates. Note that it is
a function of both the extrinsic variables and the LiDAR point cloud.
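The projection map $\Pi$ of \eqref{eq:projection_linear}--\eqref{eq:projection_nonlinear} can be sketched in a few lines (assuming numpy and zero skew in the example intrinsics; names are illustrative):

```python
import numpy as np

def project(X, K, R, t):
    """Pinhole projection of LiDAR points into the image plane:
    apply the extrinsics (R, t), then the intrinsics K, then divide
    by the third homogeneous coordinate.
    X: (n, 3) points; K: (3, 3) intrinsics; R: (3, 3); t: (3,)."""
    P = (K @ (X @ R.T + t).T).T      # rows are (u', v', w')
    return P[:, :2] / P[:, 2:3]      # (u, v) after perspective division

# Usage: focal length 100, principal point (50, 50), identity extrinsics;
# the point (1, 1, 2) lands at pixel (100, 100).
K = np.array([[100., 0., 50.], [0., 100., 50.], [0., 0., 1.]])
uv = project(np.array([[1., 1., 2.]]), K, np.eye(3), np.zeros(3))
```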
\section{Introduction and Related Work}
\label{sec:intro}
The desire to produce 3D-semantic maps \cite{Lu2020BKI} with our Cassie-series bipedal robot~\cite{gong2019feedback} has motivated us to fuse 3D-LiDAR and RGB-D
monocular camera data for autonomous navigation
\cite{CassieAutonomy2019ExtendedEdition}. Indeed, by mapping spatial LiDAR points
onto a segmented and labeled camera image, one can associate the label of a pixel (or
a region about it) to the LiDAR point as shown in Fig.~\ref{fig:semantic}. An error of a few degrees in rotation or a few percent in translation in the estimated rigid-body transformation between LiDAR and camera can lead to 20 cm reprojection errors at a distance of 5 m when overlaying a LiDAR point cloud on a camera image. Such errors will lead to navigation errors.
In this paper, we assume that the intrinsic calibration of the two sensors has
already been done~\cite{mirzaei20123d} and focus on obtaining the rigid-body
transformation, i.e., the rotation matrix and translation, between a LiDAR and a camera.
This is a well studied problem with a rich literature that can be roughly divided
into methods that require targets:
\cite{gong20133d,dhall2017lidar,verma2019automatic,jiao2019novel,kim2019extrinsic,
guindel2017automatic, garcia2013lidar} and those that do not:
\cite{pandey2012automatic,taylor2013automatic,jeong2019road, jiang2018line,
zhen2019joint}. Our modest contributions are applicable to methods that use planar
polygonal targets, such as
checkerboards, triangles, and diamonds.
In target-based methods, one seeks to estimate a set of target features (e.g., edge
lines, normal vectors, vertices) in the LiDAR's point cloud and the camera's image
plane. If ``enough'' independent correspondences can be made, the LiDAR to camera
transformation can be found by Perspective-n-Point (PnP) as in
\cite{lepetit2009epnp}, that is, through an optimization problem of the form
\begin{equation}
\label{eq:ConceptualProblem}
(R_{L}^{C*}, T_{L}^{C*}) = \argmin_{(R,T)}\,\mathrm{dist}\left(P(H_{L}^{C}(X_i)),Y_i\right),
\end{equation}
where $X_i$ are
the (homogeneous) coordinates of the LiDAR features, $Y_i$ are the coordinates of the
camera features, $P$ is the often-called ``projection map'', $H_{L}^{C}$ is the
(homogeneous representation of) the LiDAR-frame to camera-frame transformation with
rotation matrix $R_{L}^{C}$ and translation $T_{L}^{C}$, and $\mathrm{dist}$ is a
distance or error measure.
\subsection{Rough Overview of the Most Common Target-based Approaches}
The works closest to ours are \cite{liao2018extrinsic,zhou2018automatic}. Each of
these works has noted that rotating a square target so that it presents itself as a
diamond can help to remove pose ambiguity due to the spacing of the ring lines; in
particular, see Fig.~2 in \cite{liao2018extrinsic} and Fig.~1 in
\cite{zhou2018automatic}. More generally, we recommend the literature overview in
\cite{liao2018extrinsic} for a recent, succinct survey of LiDAR to camera
calibration.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth, bb=0 0 1595 618]{graphics/FirstImage.jpg}
\caption{Building a semantic map for autonomous navigation. Once the transformation from LiDAR to camera is known, it is possible to fuse the LiDAR and RGB-camera information to build a 3D-map that is annotated with semantic information. The lower right shows a single camera image; its segmentation with semantic labels is shown in the upper right. Synchronized LiDAR points are mapped onto the camera image, associated to a semantic label, and then re-projected to a local frame \cite{hartley2019contact} along with the semantic labels to create a 3D semantically-labeled map \cite{Lu2020BKI}, shown in the upper left. The sidewalk is marked in white, the grass in yellow-green, and trees in dark green. In the lower left, Cassie is using the map to plan a path around the North Campus Wave Field, while staying on the sidewalk.}
\label{fig:semantic}
\vspace{-4mm}
\end{figure}
The two most common sets of features in the area of target-based calibration are (a)
the 3D-coordinates of the vertices of a rectangular or triangular planar target, and
(b) the normal vector to the plane of the target and the lines connecting the
vertices in the plane of the target. Mathematically, these two sets of data are
equivalent: knowing one of them allows the determination of the other. In practice,
focusing on (b) leads to use of the SVD to find the normal vector and more broadly to
least squares line fitting problems \cite{zhou2018automatic}, while (a) opens up
other perspectives, as highlighted in \cite{liao2018extrinsic}.
\begin{figure}[t]%
\centering
\subfloat[]{%
\label{fig:FigureLiDARScanExample}%
\includegraphics[width=0.49\columnwidth, trim={10cm 0cm 8cm 1cm},clip]{FigureLiDARScanExample2.png}}%
\hspace{3pt}%
\subfloat[]{%
\label{fig:FigureLiDARScanExample3Rings}%
\includegraphics[width=0.47\columnwidth, trim={10cm 0cm 10cm 0cm},clip]{payload_3d_selected.png}}\\
\hspace{3pt}%
\subfloat[]{%
\label{fig:payload2D}%
\includegraphics[width=0.46\columnwidth, trim={1.5cm 0.5cm 3.5cm 1cm},clip]{payload_2d.png}}%
\hspace{3pt}%
\subfloat[]{%
\label{fig:payload_LN2D}%
\includegraphics[width=0.43\columnwidth, trim={.5cm 0 3cm 0},clip]{payload_LN2D.png}}\\
\hspace{3pt}%
\subfloat[]{%
\label{fig:reference_frame}%
\includegraphics[width=0.45\columnwidth, trim={0.05cm 0cm 0cm 1cm},clip]{reference_frame2.png}}%
\hspace{3pt}%
\subfloat[]{%
\label{fig:reference_frame_side}%
\includegraphics[width=0.45\columnwidth, trim={0.05cm 0cm 0cm 0.77cm},clip]{reference_frame_side.png}}%
\caption[]{\subref{fig:FigureLiDARScanExample} Twenty-five LiDAR scans of a planar
target. The point cloud is roughly 7~cm thick.
\subref{fig:FigureLiDARScanExample3Rings} The ``noise'' in the point cloud is not
random. A zoom for four of the rings (13, 15, 17, 19) is typical and shows
systematic errors in distance. \subref{fig:payload2D} LiDAR~ points orthogonally
projected to the plane defined by the normal. \subref{fig:payload_LN2D} Example
edge points selected to regress a line via \textit{ransac}.
\subref{fig:reference_frame} Target reference frame and real point cloud data.
The dotted blue square is the reference target; its vertices are denoted
$\{\bar{X}_i\}_{i=1}^4$. \subref{fig:reference_frame_side} A side-$(x-z)$-view
highlighting the non-zero thickness of a typical point cloud. These figures and all
others in the paper are of sufficient resolution that they can be blown up to see
detail. }%
\vspace{-4mm}
\end{figure}
Figure~\ref{fig:FigureLiDARScanExample} shows a 3D view of 25 scans from a
factory-calibrated 32-Beam Velodyne ULTRA Puck LiDAR on a diamond shaped planar
target, with a zoom provided in Fig.~\ref{fig:FigureLiDARScanExample3Rings}. There
are clearly systematic errors in the distance (or depth) measurement of the target,
and these errors affect the estimation of the target's ``centroid'', which is
commonly used to determine the translation of the target from the LiDAR and the
target's ``normal vector'', which is used to define the plane of the target as in
Fig.~\ref{fig:payload2D}. Subsequently, the point cloud is orthogonally projected to
this plane and the line boundaries of the target are found by performing
\textit{ransac} on the appropriate set of ring edges; see
Fig.~\ref{fig:payload_LN2D}. The lines along the target's boundaries then define its
vertices in the plane, which for later reference, we note are not constrained to be
compatible with the target's geometry.
Once the vertices in the plane of the target have been determined, then knowledge of
the target's normal allows the vertices to be lifted back to the coordinates of the
point cloud. This process may be repeated for multiple scans of a target, aggregates
of multiple scans of a target, or several targets, leading to a list of target
vertices $\{ X_i \}_{i=1}^{4n} $, where $n$ is the number of target poses.
The target is typically designed so that the vertices are easily distinguishable in
the camera image. Denoting their corresponding coordinates in the image plane by $\{
Y_i \}_{i=1}^{4n}$ completes the data required for the conceptual fitting problem in
\eqref{eq:ConceptualProblem}. While the problem is nonlinear and non-convex, this is
typically not a problem because CAD data can provide an adequate initial guess for
local solvers, such as Levenberg-Marquardt; see \cite{zhou2018automatic} for example.
\subsection{Our Contributions}
\label{sec:Contributions}
Our contributions can be summarized as follows.
\begin{enumerate}
\setlength{\itemsep}{.15in}
\renewcommand{\labelenumi}{(\alph{enumi})}
\item We introduce a novel method for estimating the rigid-body transform from target
to LiDAR, $H_{T}^{L}$. For the point cloud pulled back to the origin of the LiDAR
frame via the current estimate of the transformation, the cost is defined as the
sum of the distance of a point to a 3D-reference target of known
geometry\footnote{It has been given non-zero volume.} in the LiDAR frame, for
those points landing outside of the reference target, and zero otherwise. We use
an $L_1$-inspired norm in this work to define distance. Consistent with
\cite{liao2018extrinsic}, we find that this is less sensitive to outliers than
$L_2$-line fitting to edge points\footnote{Defined as the left-right end points
of each LiDAR ring landing on the target.} using \textit{ransac}
\cite{zhou2018automatic}.
\item In the above, by using the entire target point cloud when estimating the
    vertices, we avoid altogether the delicate task of identifying the edge points
and parsing them into individual edges of the target, where small numbers of
points on some of the edges accentuate the quantization effects due to sparsity
in a LiDAR point cloud.
\item We provide a round-robin validation study to compare our approach to a
state-of-the-art method from 2018, namely \cite{zhou2018automatic}. A novel
feature is the use of the camera image as a proxy for ground truth; in the
context of 3D semantic mapping, this makes sense. A standard PnP method~\cite{lepetit2009epnp} is
used to estimate the rigid-body transformation
from LiDAR to camera. We also validate our algorithm
on different numbers of targets.
\item Code for our method, our implementation of the baseline, and all the
data sets used in this paper are released as
\href{https://github.com/UMich-BipedLab/extrinsic_lidar_camera_calibration}{open
source}; see \cite{githubFile}.
\end{enumerate}
\begin{comment}
\subsection{More on Related Work}
\label{sec:RelatedWork}
Papers to talk about
\begin{itemize}
\item Simultaneous intrinsic and extrinsic
\item When there is a large separation between lidar and camera
\item SVD prevleance
\item Ransac
\item \textbf{Most closely related to our work:} Extrinsic calibration of lidar and camera with polygon
\end{itemize}
\end{comment}
\section{Problem Statement and Target Pose}
\label{sec:lidar}
This section describes the elements needed to solve the LiDAR-camera~calibration
problem: the analysis of LiDAR~point clouds, how to find 3D-2D correspondences, and
important caveats to keep in mind when doing LiDAR-camera~calibration, especially the
LiDAR~'s irregular returns.
\subsection{Problem Statement}
The core of the problem is to
find a rigid-body transform $H_L^C$ between a LiDAR~~and a camera, which
consists of 6 parameters: a rotation
matrix $\pre[L]R_L^C$ in $\mathrm{SO}(3)$ and
a translation $\pre[L]T_L^C$ in $\mathbb{R}^3$, where $\pre[L]R_L^C$ and
$\pre[L]T_L^C$ stand for the rotation and translation from the LiDAR~~to the camera
represented in the LiDAR~~frame, respectively. In target-based calibration, we need
correspondences, that is, features to match between the LiDAR~~and the camera. There
are several ways to represent features, such as plane-line, plane-point, or 3D
line-line correspondences, or to formulate the task as a Perspective-n-Point (PnP)
problem.
In this paper, we use fiducial tags to find correspondences in both the 3D LiDAR~~point
cloud and the 2D images. Fiducial tags detected from the LiDAR~~are called
LiDARTags, and the same fiducial tags detected from the camera are called AprilTags. The
pose of a LiDARTag~~is provided by \red{cite}; the original method to obtain a
LiDARTag~'s pose, a transformation from the LiDAR~~frame to the LiDARTag~~frame $(H_L^T)$,
runs in real time at 30 Hz; however, it is not accurate enough for calibration due
to the noise level of the LiDAR~~point cloud and thus has to be re-optimized, as
described in Sec.~\ref{sec:lidartag_opt}. The noise level of the LiDAR~~point cloud is
discussed in Sec.~\ref{sec:lidar_analysis}. Once the pose of the LiDARTag~~is accurately
obtained, we have the 4 corners of the LiDARTag~~in 3D. For the same target in an image,
the 4 corners of the AprilTag~can be detected by the AprilTag~detector in 2D. This
gives us a set of 3D-2D correspondences on a target. The problem then
becomes a PnP optimization problem to compute a projection matrix, as described in
Sec.~\ref{sec:extrinsic_opt}. However, this is not the end of the calibration. The
calibration up to this point assumes the corners given by the LiDARTags~are absolutely
correct. Given the noise level of the LiDAR~~point cloud, however, we still cannot fully
trust the optimization result for the LiDARTag~. We therefore apply another refinement
to the LiDARTags' poses given the projection matrix along with a regularizer, as
described in Sec.~\ref{sec:refinement}.
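To make the correspondence check concrete, the reprojection at the heart of the PnP step can be sketched as follows. This is a minimal illustration with our own helper name, assuming a standard pinhole model; it is not the paper's implementation:

```python
import numpy as np

def project_to_image(K, H_L_C, X_lidar):
    """Project LiDAR-frame 3D points into pixel coordinates.

    K: 3x3 camera intrinsics; H_L_C: 4x4 rigid-body transform from the
    LiDAR frame to the camera frame; X_lidar: (N, 3) points.
    """
    hom = np.hstack([X_lidar, np.ones((len(X_lidar), 1))])
    Xc = (H_L_C @ hom.T)[:3]   # points expressed in the camera frame
    uv = K @ Xc                # pinhole projection
    return (uv[:2] / uv[2]).T  # (N, 2) pixel coordinates
```

In a PnP formulation, $H_L^C$ is then chosen to minimize the reprojection error between these projected LiDARTag corners and the detected AprilTag corners.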
\begin{remark}{
\normalfont
One may ask why we do not directly use the optimized poses to do the calibration, since
the two poses from the two different sensors are already given. It could be done easily
by:
\begin{equation}H_L^C = H_L^{T_L} H_{T_L}^{T_C}\inv{\left(H_C^{T_{C}}\right)},
\end{equation}
where $H_{T_L}^{T_C}$ is the
transformation from the LiDARTag~~frame $(T_L)$ to the AprilTag~frame $(T_C)$. However,
this does not yet give a good enough calibration result, due to the difficulty of
estimating an AprilTag~pose from a monocular camera.}
\end{remark}
\subsection{LiDAR~~Point Cloud Analysis}
\label{sec:lidar_analysis}
LiDARs~~have dense and sparse regions along the z-axis, as shown in Fig.~\red{dense
region}. We use a 32-beam Velodyne Ultra Puck; the resolution along the z-axis is
\red{number} and \red{number} in the dense and sparse regions, respectively.
The point resolution along the y-axis is \red{number}. The quantization error in both
the y-axis and z-axis directions can be roughly computed from the angular resolution
and the distance to the target. As a result, the quantization error can get quite
large if one places the tag at a far distance.
Fig.~\red{Calibration error} shows that with 1 degree of rotation error on each axis
and 5\% of translation error, the calibration result is not good enough for our
application. Thus, it is essential to use geometric constraints to estimate a
LiDARTag~~pose and to place the LiDARTag~~at a tilted angle so that the geometric
constraints can pick up the quantization error on the y-axis and z-axis. To overcome
quantization error on the x-axis, we accumulate a few scans to estimate one
\lidrt~pose and condition on the current estimate of the projection matrix to refine
the LiDARTag~'s poses, as described in Sec.~\ref{sec:refinement}.
\subsection{Correspondences and Automatic Detection}
\label{sec:correspondences}
\subsubsection{LiDARTag~~Poses Optimization and Automatic Detection}
\label{sec:lidartag_opt}
To find 3D correspondences in the LiDAR~~point cloud, we use the corners of a
LiDARTag~~as our LiDAR~~target. Patches of the LiDARTags' point clouds are
automatically detected from a full scan of the point cloud by \red{cite}, and
therefore we can assume they are given in this paper. A LiDARTag~'s pose can be
estimated using its centroid and a normal vector computed by taking the SVD of the
LiDARTag~'s points (for more detail, please refer to \red{cite}), which is fast and
suitable for real-time applications; however, we notice observable noise in the
normal vector and centroid. Looking further into the LiDARTag~~points, we observe
that the points on the LiDARTag~, a planar object, appear as a cube, as shown in
Fig.~\red{lidartag cube}. The min/max of the depth difference can be 5 cm to 15 cm
in the x-axis direction; the standard deviation is also noticeable, as shown in
Table~\red{payload point analysis}. The orientation also has a few degrees of
variance. As shown in Fig.~\red{calibration error}, this error is large enough to
ruin the calibration. To more accurately estimate a LiDARTag~~pose, we formulate
another optimization problem that uses neither its centroid nor its normal vector
but takes into account the underlying noise.
We place an ideal reference target at the center of the LiDAR~~frame as shown in
Fig.~\red{reference frame}. We want to find the best homogeneous transformation
$H_T^L$ that takes the LiDARTag~~``back'' to the ideal reference target. To do so,
we define a cost function based upon the known geometry (diamond), size ($d$), and
the noise level on the x-axis ($k$) of the LiDARTag~. Given a set of LiDARTag~~points
$\pre[L]X_i(x_i, y_i, z_i)$, the 8 corners of the ideal reference frame
$\pre[L]Y_i^\prime$, and the current estimates of $\pre[L]\tilde{R}_T^L$ and
$\pre[L]\tilde{T}_T^L$, the cost for the LiDARTag~'s point cloud is defined as:
\begin{equation}
\pre[L]\tilde{X_i} = \left(\pre[L]\tilde{R}_T^L\right)\left(\pre[L]X_i\right) +
\pre[L]\tilde{T}_T^L
\end{equation}
\begin{equation}
C(\pre[L]\tilde{X}) = \sum_{i=1}^{N} c_x(\tilde{x}_i) +
c_y(\tilde{y}_i) +
c_z(\tilde{z}_i),
\end{equation}
where $c_x(\tilde{x}), c_y(\tilde{y})$ and $c_z(\tilde{z})$ are:
\begin{equation}
c_x(\tilde{x}_i) =
\begin{cases}
\min\left(||\tilde{x}_i - \frac{k}{2}||^p, ||\tilde{x}_i + \frac{k}{2}||^p\right), &
\text{if } |\tilde{x}_i| > \frac{k}{2}\\
0, & \text{otherwise}.
\end{cases}
\end{equation}
\begin{equation}
c_y(\tilde{y}_i) =
\begin{cases}
\min\left(||\tilde{y}_i - \frac{d}{2}||^p, ||\tilde{y}_i + \frac{d}{2}||^p\right), &
\text{if } |\tilde{y}_i| > \frac{d}{2}\\
0, & \text{otherwise}.
\end{cases}
\end{equation}
\begin{equation}
c_z(\tilde{z}_i) =
\begin{cases}
\min\left(||\tilde{z}_i - \frac{d}{2}||^p, ||\tilde{z}_i + \frac{d}{2}||^p\right), &
\text{if } |\tilde{z}_i| > \frac{d}{2}\\
0, & \text{otherwise}.
\end{cases}
\end{equation}
The optimal $H_T^{L^*}$ can be determined by:
\begin{align}
H_T^{L^*} &= \argmin_{\pre[L]\tilde{R}_T^L, \pre[L]\tilde{T}_T^L}
C(\pre[L]\tilde{X}, \pre[L]\tilde{R}_T^L, \pre[L]\tilde{T}_T^L)\nonumber \\ &=
\argmin_{\pre[L]\tilde{R}_T^L, \pre[L]\tilde{T}_T^L} \sum_{i=1}^{N} c_x(\tilde{x}_i) +
c_y(\tilde{y}_i) + c_z(\tilde{z}_i).
\end{align}
Conceptually, if a transformed point is inside the ideal reference frame, the cost is zero;
otherwise, we penalize each of its axes with a $p$-norm distance to the closest boundary of
the ideal reference frame along the same axis. In this work, we choose the $L_1$-norm, and
we find this cost is also robust to outliers, as shown in Fig.~\red{robust to outliers}.
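The per-axis cost above can be sketched in a few lines. This is a hedged illustration with our own function names; the input array holds the transformed returns $\pre[L]\tilde{X}_i$:

```python
import numpy as np

def axis_cost(v, half_width, p=1):
    # Zero for coordinates inside [-half_width, half_width]; otherwise the
    # p-th power of the distance to the nearest boundary along this axis.
    dist = np.minimum(np.abs(v - half_width), np.abs(v + half_width))
    return np.where(np.abs(v) > half_width, dist ** p, 0.0)

def lidartag_cost(points, k, d, p=1):
    # points: (N, 3) transformed LiDARTag returns; k: depth-noise width
    # along x; d: target side length along y and z.
    x, y, z = points.T
    return float(np.sum(axis_cost(x, k / 2, p)
                        + axis_cost(y, d / 2, p)
                        + axis_cost(z, d / 2, p)))
```

With $p=1$ this reproduces the $L_1$-like behaviour: a single far outlier contributes linearly rather than quadratically to the total cost.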
We now have an optimized $H_T^L$, a transformation from the LiDARTag~~frame to the ideal
reference frame, and we also have the 8 corners of the ideal reference target
$\pre[L]Y_i^\prime$. We can then easily compute the 8 corners\footnote{These 8 corners
define two planes related by the geometric constraints; we can take the average of
the two planes to find the neutral plane with 4 corners.} of the LiDARTag~~($\pre[L]Y_i$) by:
\begin{equation}
\pre[L]Y_i = \inv{\left(H_T^L\right)} \pre[L]Y_i^\prime,
\end{equation}
where these corners will be refined again after the projection matrix is estimated.
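The corner recovery above is a single inverse homogeneous transform; a minimal sketch with our own helper name:

```python
import numpy as np

def recover_corners(H_T_L, corners_ref):
    # H_T_L: 4x4 transform taking the LiDARTag to the ideal reference
    # target; corners_ref: (8, 3) reference corners. Returns the tag's
    # corners in the LiDAR frame, Y_i = inv(H_T_L) @ Y_i'.
    H_inv = np.linalg.inv(H_T_L)
    hom = np.hstack([corners_ref, np.ones((len(corners_ref), 1))])
    return (H_inv @ hom.T).T[:, :3]
```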
\subsubsection{Image correspondences from AprilTags}
\label{sec:apriltag_correspondences}
Getting corner correspondences from the same fiducial tag in an image is fairly
straightforward. The corners of the fiducial tag in an image, namely an
AprilTag, can be automatically detected by the AprilTag detector. In the original
papers \cite{olson2011apriltag}, \cite{wang2016apriltag}, \cite{krogiusflexible}, the corners are defined at the payload corners\footnote{The
detector finds four-sided regions that have a darker interior than their exterior in
the whole image.}, shown in Fig.~\red{payload explain}. However, we can use the detected
corners and tag-relative coordinates to backstep to the outer 4 corners $\pre{C}Y_i$.
\section{Related Work}
\label{sec:related_work}
\section{Finding the LiDAR Target Vertices}
\label{sec:LiDARTargetVertices}
In this conference version of the work, we will assume that each target is planar,
square, and rotated in the frame of the LiDAR by roughly 45$^\circ$ to form a diamond
as in Fig.~\ref{fig:FigureLiDARScanExample}. As indicated in
\cite{liao2018extrinsic,zhou2018automatic}, placing the targets so that the rings of
the LiDAR run parallel to its edges leads to ambiguity in the vertical position due
to the spacing of the rings. We assume that standard methods have been used to
automatically isolate the target's point cloud \cite{huang2019lidartag} and speak no
further about it.
We take as a target's features its four vertices, with their coordinates in the LiDAR
frame denoted $\{ X_i \}_{i=1}^{4};$ when useful, we abuse notation and pass from
ordinary coordinates to homogeneous coordinates without noting the distinction. The
vertices $\{ X_i \}_{i=1}^{4} $ are of course not directly observable in the point
cloud. This section will provide a new method for estimating their coordinates in the
frame of the LiDAR using an $L_1$-like norm.
\begin{figure}[t]%
\centering
\subfloat[]{%
\label{fig:disturbance}%
\centering
\includegraphics[width=0.42\columnwidth, trim={3cm 0.56cm 4.5cm 3cm},clip]{disturbance.png}}%
\hspace{10pt}%
\subfloat[]{%
\label{fig:undisturbance}%
\centering
\includegraphics[width=0.41\columnwidth, trim={3cm 0.6cm 4.5cm 3cm},clip]{undisturbance.png}}%
\caption[]{\subref{fig:disturbance} shows that a calibration result is not usable if
it has a few degrees of rotation error and a few percent of translation error.
\subref{fig:undisturbance} shows good alignment of a LiDAR~~point cloud projected
onto a camera image.}
\vspace{-2mm}
\end{figure}
\subsection{Remarks on LiDAR Point Clouds}
\label{sec:lidar_analysis}
LiDARs~ have dense regions and sparse regions along the z-axis, as shown in
Fig.~\ref{fig:payload2D}. For a 32-beam Velodyne Ultra Puck, we estimate the
resolution along the z-axis is $0.33^\circ$ and $1.36^\circ$ in the dense and sparse
regions, respectively. A point's angular resolution along the y-axis is roughly
$0.1^\circ$.
The quantization error can be roughly computed as $d\sin{\theta}$
when a point is $d$ meters away. As a result, the quantization error can get
quite large if one places the tag at a far distance. Figure~\ref{fig:disturbance}
shows that $1^\circ$ of rotation error on each axis and 5\% error in translation
can significantly degrade a calibration result. As noted in
\cite{liao2018extrinsic,zhou2018automatic}, it is essential to place a target at an
appropriate angle so that known geometry can mitigate quantization error in the $y$-
and $z$-axes.
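The $d\sin\theta$ estimate is easy to evaluate with the resolutions quoted above:

```python
import math

def quantization_gap(distance_m, angular_res_deg):
    # Spacing between adjacent returns at a given range, d * sin(theta).
    return distance_m * math.sin(math.radians(angular_res_deg))

# At 10 m, vertical gaps are about 6 cm in the dense region but about
# 24 cm in the sparse region, so a far target is sampled very coarsely.
dense, sparse = quantization_gap(10, 0.33), quantization_gap(10, 1.36)
```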
\subsection{New Method for Determining Target Vertices}
\label{sec:NewMethod}
Let ${\cal PC}$ denote the LiDAR target point cloud and let the 3D points be denoted
$\chi_i$ so that ${\cal PC} = \{ \chi_i\}_{i=1}^N$. The objective is to fit a
reference target with vertices $\{ \bar{X}_i \}_{i=1}^4$, defined as in
Fig.~\ref{fig:reference_frame}, onto the point cloud with ``minimum error''. It is
actually easier to pull the $\mathcal{PC}$ to the origin of the LiDAR through the current
estimate of the inverse transformation $H_T^L:=(H_L^T)^{-1}$ and measure the error
there.
\begin{remark}
\label{remark:Xis}
To be clear, what we seek is a ``best estimate'' of the target vertices in the LiDAR
frame and not the transformation itself. Our method is indirect because from the
point cloud, we estimate a ``best'' LiDAR to target transformation, call it
$H_L^{T^*}$, and use it to define the vertices
\begin{equation}
\label{eq:vertices}
X_i^*:=H_L^{T^*}(\bar{X}_i), ~~1 \le i \le 4.
\end{equation}
The correspondences of the estimated vertices with the physical top, bottom, and left
or right sides of the target are not needed at this point. $\blacksquare$
\end{remark}
For $a \ge 0$ and $\lambda \in \mathbb{R}$, define
\begin{equation}
\label{eq:L1cost}
c(\lambda,a):=\begin{cases}
\min\{ |\lambda-a|, |\lambda + a| \} & \text{if}~|\lambda| >a \\
0 & \text{otherwise}
\end{cases};
\end{equation}
see also the ``soft $L_1$ norm'' in \cite{liao2018extrinsic}. Let
$\{\tilde{\chi}_i\}_{i=1}^N:=H_T^L(\mathcal{PC}):=\{ H_T^L(\chi_i) \}_{i=1}^N$ denote
the pullback of the point cloud by $H_T^L$, and denote a point's $(x,y,z)$-entries by
$(\tilde{x}_i,\tilde{y}_i,\tilde{z}_i)$. The cost is defined as
\begin{equation}
\label{eq:JKHcost}
C(H_T^L(\mathcal{PC})):=\sum_{i=1}^{N} c(\tilde{x}_i,\epsilon) +
c(\tilde{y}_i,d/2) + c(\tilde{z}_i,d/2),
\end{equation}
where $\epsilon \ge 0$ is a tuning parameter to account for uncertainty in the depth
measurement of the planar target; see Fig.~\ref{fig:reference_frame}.
We propose to determine the optimal $H_T^L$ by
\begin{equation}
\label{eq:OurNewCost}
H_T^{L^*} := \argmin_{\pre[L]\tilde{R}_T^L, \pre[L]\tilde{T}_T^L}
C(H_T^L(\mathcal{PC})).
\end{equation}
Once the transformation is determined, the target vertices are defined by
$X_i := (H_T^{L^*})^{-1}(\bar{X}_i), i=1,\cdots,4,$ as in Remark~\ref{remark:Xis}.
\begin{remark} We emphasize that the target vertices are determined without
extraction of edge points and their assignment to a side of the target. The cost in
\eqref{eq:JKHcost} treats the target as a rectangular volume.
\end{remark}
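A minimal sketch of solving \eqref{eq:OurNewCost} with a generic local solver follows. The rotation-vector parameterization and the Nelder--Mead choice are our own illustrative assumptions, not necessarily what was used in practice:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def soft_l1(v, a):
    # c(lambda, a) from the text: zero inside [-a, a], distance to the
    # nearest endpoint outside.
    dist = np.minimum(np.abs(v - a), np.abs(v + a))
    return np.where(np.abs(v) > a, dist, 0.0)

def fit_target_pose(pc, d, eps):
    # pc: (N, 3) target point cloud; d: target side length; eps: depth slack.
    def cost(params):
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        p = pc @ R.T + params[3:]
        return (soft_l1(p[:, 0], eps).sum()
                + soft_l1(p[:, 1], d / 2).sum()
                + soft_l1(p[:, 2], d / 2).sum())
    return minimize(cost, np.zeros(6), method="Nelder-Mead")
```

The optimized transform pulls the cloud onto the reference volume; the vertices then follow as in Remark~\ref{remark:Xis}.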
\section*{Acknowledgment}
\small{
Funding for this work was provided by the Toyota Research Institute (TRI) under award number N021515. Funding for J. Grizzle was in part provided by TRI and in part by NSF Award No.~1808051. This article solely reflects the opinions and conclusions of its authors and not the funding entities. Dr. M Ghaffari offered advice during the course of this project. The first author thanks Wonhui Kim for useful conversations and Ray Zhang for generating the semantic image.}
\balance
\bibliographystyle{bib/IEEEtran}
The azimuthal anisotropy of hadron transverse momentum
distributions in heavy ion collisions at RHIC is sensitive to the
properties of the produced matter. In the framework of the transport
approach, it was shown in Ref.~\cite{Zhang99} using the ZPC
model \cite{Zhang:1997ej} that the value of the second harmonic,
i.e., the elliptic flow, depends sensitively on the magnitude
of parton scattering cross sections. With a more realistic
collision dynamics via the AMPT model \cite{Zhang:2000bd,Lin:2001cx},
it was further shown that the observed large elliptic flow and
its ordering according to hadron masses \cite{Lin:2001zk} as well
as high-order anisotropic flows \cite{chen1} could be explained if
partons scatter with cross sections much larger than those given
by the perturbative QCD. Including also charm quarks in the AMPT model
\cite{zhang}, the observed large elliptic flow of electrons from
the decay of charmed mesons is again consistent with a large charm
scattering cross section. In the present contribution, recent
results from the AMPT model on the rapidity and collision system
dependence of anisotropic flows are presented
\cite{chen2,chen3,chen4}.
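For orientation, the anisotropic flows $v_n$ discussed below are the Fourier coefficients of the azimuthal distribution, $v_n = \langle \cos n(\phi - \Psi_{RP})\rangle$. A minimal numerical sketch of this definition (our own illustration, not part of the AMPT code):

```python
import numpy as np

def flow_vn(phi, n, psi_rp=0.0):
    # v_n = <cos(n (phi - Psi_RP))>, averaged over particles.
    return float(np.mean(np.cos(n * (np.asarray(phi) - psi_rp))))
```

An isotropic sample gives $v_n \approx 0$, while a sample enhanced in the reaction plane gives $v_2 > 0$.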
\section{The AMPT model}
The AMPT model is a hybrid model that uses minijet partons from
hard processes and strings from soft processes in the HIJING model
\cite{Wang:1991ht} as the initial conditions for modelling heavy ion
collisions at ultra-relativistic energies. In the default version,
time evolution of resulting minijet partons is described by the ZPC
model \cite{Zhang:1997ej} with an in-medium cross section derived
from the lowest-order Born diagram with an effective gluon screening
mass taken as a parameter for fixing the magnitude and angular
distribution of parton scattering cross section. After minijet
partons stop interacting, they are combined with their parent
strings, as in the HIJING model with jet quenching, to fragment
into hadrons using the Lund string fragmentation model as implemented
in the PYTHIA program \cite{Sjostrand:1994yb}. The final-state
hadronic scatterings are then modelled by the ART model
\cite{Li:1995pr}. In an extended string melting version of the
AMPT model \cite{Lin:2001zk}, hadrons that would have been
produced from string fragmentation are converted to valence quarks
and/or antiquarks in order to model the initially formed partonic
matter. Interactions among these partons are again described by
the ZPC parton cascade model. The transition from the partonic matter
to the hadronic matter is achieved using a simple coalescence model,
which combines two nearest quark and antiquark into mesons and
three nearest quarks or antiquarks into baryons or anti-baryons that
are close to the invariant mass of these partons.
\section{Pseudorapidity dependence of anisotropic flows}
\vspace{-0.5cm}
\begin{figure}[htb]
\begin{minipage}{18pc}
\includegraphics[scale=0.75]{fig1a.eps}
\end{minipage}
\begin{minipage}{18pc}
\includegraphics[scale=0.75]{fig1b.eps}
\end{minipage}
\vspace{-0.5cm}
\caption{Pseudorapidity dependence of $v_{1}$ (left panel) and $v_2$
(right panel) in minimum bias events of Au + Au collisions at
$\sqrt{s}=200$ AGeV.}
\label{rapidity}
\end{figure}
Results from the AMPT model on the pseudorapidity ($\eta$) dependence
of the directed ($v_{1}$) and elliptic ($v_2$) flows of charged hadrons
in minimum bias events of Au + Au collisions at $\sqrt{s}=200$ AGeV
are shown, respectively, in the left and right panels of
Fig. \ref{rapidity} \cite{chen2}. For $v_1$, both the default version
and the version with string melting can reproduce approximately the
STAR data (solid circles) \cite{STAR03} around the mid-pseudorapidity
region, while only the default version can describe the data at
large $\left\vert \eta \right\vert$. For $v_2$, the string melting
scenario with a parton scattering cross section of
$\sigma _{p}=$ $10$ mb (solid squares) describes very
well the PHOBOS data (solid stars) \cite{manly03} around mid-$\eta $
($\left\vert \eta \right\vert \leq 1.5$) but a smaller
$\sigma_{p}=3$ mb (open squares) or the default version (solid
triangles) gives a better description of both PHOBOS and STAR
\cite{oldenburg04} data at large pseudorapidity
($\left\vert \eta \right\vert \geq 3$). These interesting features
may imply that initially the matter produced at large pseudorapidity
is dominated by strings while that produced around mid-rapidity
mainly consists of partons. This is a reasonable picture as particles
at large rapidity are produced later in time when the volume of the
system is large and the energy density is small.
\section{System size dependence of anisotropic flows}
\vspace{-0.5cm}
\begin{figure}[htb]
\begin{minipage}{18pc}
\includegraphics[scale=0.65]{fig2a.eps}
\end{minipage}
\begin{minipage}{18pc}
\includegraphics[scale=0.65]{fig2b.eps}
\end{minipage}
\vspace{-0.5cm} \caption{Transverse momentum dependence of the
$v_{2}$ of mid-rapidity charged hadrons in minimum bias events
of Au+Au (solid squares) and Cu+Cu (open squares) collisions at
$\sqrt{s}=200$ AGeV.}\label{cucu}
\end{figure}
In Fig. \ref{cucu}, results from the AMPT model with string melting
for charged hadron elliptic flows in minimum bias Cu+Cu and Au+Au
collisions at $\sqrt{s}=200$ AGeV are shown for parton scattering
cross sections of $\sigma _{p}=3$ mb (left panel) and 10 mb
(right panel) \cite{chen3}. It is seen that the elliptic flow
in the lighter Cu+Cu collisions is about a factor of 3 smaller than
that in the heavier Au+Au collisions at the same energy, as shown by
solid lines. This is consistent with the linear scaling of
the system size as well as the combined effect of the initial
energy density and spatial eccentricity.
\section{Anisotropic flows in collisions of asymmetric systems}
\vspace{-0.4cm}
\begin{figure}[htb]
\begin{minipage}{18pc}
\includegraphics[scale=0.65]{fig3a.eps}
\end{minipage}
\begin{minipage}{18pc}
\includegraphics[scale=0.65]{fig3b.eps}
\end{minipage}
\vspace{-0.5cm}
\caption{Pseudorapidity dependence of $v_1$ (left panel) and
$v_{2}$ (right panel) for charged hadrons in minimum bias events
of Cu+Au collisions at $\sqrt{s}=200$ AGeV.}
\label{cuau}
\end{figure}
In Fig. \ref{cuau}, the pseudorapidity dependence of $v_1$ (left
panel) and $v_{2}$ (right panel) for charged hadrons in
minimum bias events of Cu+Au collisions at $\sqrt{s}=200$ AGeV are
shown for the string melting scenario with parton scattering cross
sections $\sigma _{p}=3$ (open squares) and $10$ mb (solid squares)
as well as the default AMPT model without string melting
(solid triangles) \cite{chen4}. Comparing with results
in symmetric Au+Au collisions, we find that charged hadrons produced around
mid-rapidity in asymmetric Cu+Au collisions display a stronger $v_1$
and their $v_2$ is also more sensitive to the parton cross
section used in the parton cascade. Furthermore, both $v_{1}$ and $v_{2}$
are appreciable and show an asymmetry in the forward and backward
rapidities.
\section{Summary}
Using the \textrm{AMPT} model, we have studied the rapidity and
colliding system size dependence of anisotropic flows in
heavy ion collisions at \textrm{RHIC}. We find that results on
the rapidity dependence of anisotropic flows suggest that a partonic
matter is formed during early stage of relativistic heavy ion
collisions only around mid-rapidity and that strings remain
dominant at large rapidities. Furthermore, to reproduce the
experimental data requires a parton cross section that is larger
than that given by perturbative QCD, indicating that
nonperturbative effects are important in the produced partonic
matter at RHIC. Also, a linear scaling with the
colliding system size is observed for the elliptic flow of
charged hadrons in minimum bias collisions. For collisions of
asymmetric systems, there is a strong $v_1$ around mid-rapidity,
and both $v_1$ and $v_2$ are asymmetric in the forward and backward
rapidities. Experimental verification of the latter predictions will be
very useful in testing the AMPT model as well as in understanding
the dynamics of the partonic matter produced in the collisions.
We acknowledge V. Kaiser for his help with the Thomas--Fermi
model and computation time through CIMENT infrastructure (Rhône-Alpes
CPER07\_13 CIRA) and Equip@Meso project (ANR-10-EQPX-29-01). We also
acknowledge funding from the ANR project TAMTAM (ANR-15-CE08-0008-01).
AS acknowledges funding from the DFG under Germany's Excellence Strategy ---
EXC 2075 --- 390740016 and SFB 1313 (Project Number 327154368) and support by
the
Stuttgart Center for Simulation Science (SimTech).
\section*{Author contributions}
B.C.\, L.B.\ and A.S.\ conceived the
research. A.S.\ carried out the molecular simulations with support from
D.J.
A.S., B.C.\ and L.B.\ analyzed the data. A.S.\ and B.C.\ wrote the
paper with inputs from all authors.
\section*{Competing Interests}
The authors declare no competing interests.
\begin{methods}
\subsection{Molecular Dynamics simulations.}
All simulations are carried out using LAMMPS simulation
package\cite{plimpton_1995_fast} (stable release 7 Aug 2019).
Electrostatic interactions are calculated using the PPPM
method with an accuracy of at least
$10^{-4}$ and a real-space cut-off $r_\mathrm{c}=12.5\,\textrm{\AA}$.
Periodic boundary conditions are used in all dimensions with
the non-electrostatic interactions being cut and shifted to zero at
$r_\mathrm{c}$.
For the simulations of the TF fluid and the salt in contact with an
insulating/vacuum interface, the interactions between periodic images
are not screened so that we employ the slab correction method proposed
by Yeh and
Berkowitz\cite{yeh_1999_ewald} with a vacuum layer of three times the
simulation cell height.
The non-electrostatic part of the salt--salt interactions is
described using the Born--Meyer--Huggins
potential, which accurately reproduces the properties of NaCl (either
as a crystal or a molten salt),\cite{anwar_2002_calculation}
\begin{equation}
U_\mathrm{BMH}(r) = A \exp\left(\frac{\sigma-r}{B}\right) -
\frac{C}{r^6} -
\frac{D}{r^8},
\end{equation}
The corresponding force field parameters are
given in \textit{Supplementary Table 1}.
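As a sketch, the pair energy above is straightforward to evaluate; the parameters in a real run come from Supplementary Table 1, while any numbers used in a quick check are arbitrary:

```python
import math

def bmh_energy(r, A, sigma, B, C, D):
    # Born--Meyer--Huggins non-electrostatic pair energy at separation r:
    # A * exp((sigma - r) / B) - C / r^6 - D / r^8.
    return A * math.exp((sigma - r) / B) - C / r**6 - D / r**8
```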
Reflective walls of width $\xi=0.2\,\mathrm{nm}$ are used at each
metal/dielectric
interface to prevent the Thomas--Fermi fluid/charged system from migrating
into the pore space/confining media.
The latter implies that, if an atom moves through the wall by a
distance $\delta$ in a timestep, its position is set back to $-\delta$
away from the wall and the sign of the corresponding velocity component
is flipped.
In all simulations presented in the main text, the confining
media filled with the Thomas--Fermi fluid are chosen to have a length
$d_\mathrm{TF} = 10\,\mathrm{nm}$. Increasing $d_\mathrm{TF}$ increases
the agreement between theory and simulations in Fig.\ 3 due to
the decay in the disjoining energy between the two TF surfaces but at
the price of enhanced numerical cost (\textit{Supplementary Fig.~S10}).
For the TF--TF interaction, a purely repulsive power
law of the form $U(r) = E/r^n$ is added to avoid numerical infinities
when particles overlap.
We use $n=8$ and $E=10^3 \,\textrm{kcal/mol/\AA}^8$ but we checked that
the detailed form of the interaction potential does not qualitatively
influence our simulation results as shown in \textit{Supplementary
Fig.~S9}.
The positive and negative TF particles differ only in their
partial charge $\pm q_\mathrm{TF}$ and only interact through
electrostatic interactions
with the salt.
The density of the TF fluid is fixed at $\rho_\mathrm{TF} =
57.5\,\mathrm{nm^{-3}}$
at a temperature $T_\mathrm{TF} = 12000\,\mathrm{K}$ and mass
$m_\mathrm{TF} = 1 \,\mathrm{amu}$ to ensure fast relaxation.
The mass of the Na and Cl atoms is set to 22.9898 and 35.446 amu,
respectively.
Time integration is performed using a Verlet scheme with a timestep of
$0.1\,\mathrm{fs}$ to allow for fast relaxation of the TF liquid.
The molten salt is simulated at 2000~K, and temperature coupling
is performed using separate Nose--Hoover thermostats for the salt and
TF fluids with a characteristic time of 100 timesteps.
\subsection{Capacitance determination.}
The capacitive behavior of our virtual Thomas--Fermi fluid was checked
as this provides an important benchmark to assess its physical
validity. Using a direct measurement approach, the capacitance was
estimated using MD simulations in which the system is sandwiched
between two electrodes having an overall charge $+Q$ and $-Q$. As
discussed in the main text, two systems were considered to verify the
consistency of the obtained results: the virtual Thomas--Fermi alone
and a composite system made up of a dielectric layer confined by the
virtual Thomas--Fermi fluid (for the latter, two dielectric materials
were considered: either a vacuum layer or a molten salt). The
electrodes used for the capacitance measurements consist of point
charges $q_\mathrm{w} = 0.01\,e$ arranged on a 1\,\AA\ 2D grid (see
\textit{Supplementary Fig.~S2} for a molecular simulation snapshot),
resulting in
a surface charge density of $Q=\pm 0.166\,\mathrm{C/m^2}$. It was checked that
this value is low enough to ensure that the capacitance response of the
system is in the linear response regime, so that the capacitance $C$
is readily obtained from the electrostatic potential drop $\Delta
\Psi$ between the two electrodes.
The TF fluid is separated from the point charges by
1\AA\ via a reflecting wall denoted by the gray shaded areas in
Fig.\ 4(a).
The potential drop is obtained from Poisson equation by
integrating twice the charge density profile, $\Delta \Psi(z) =
- \int_{-\infty}^z {\mathrm{d}} z^\prime \int_{-\infty}^{z\prime} {\mathrm{d}}
z^{\prime\prime} \,e(\rho_+ - \rho_-)/\varepsilon_0$ as
shown in \textit{Supplementary Fig.~S2(b)}.
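A minimal numerical version of this double integration (a sketch assuming the charge density profile is already expressed in SI units):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def potential_profile(z, rho_e, eps0=8.8541878128e-12):
    # Integrate Poisson's equation twice along z:
    # Psi(z) = -(1/eps0) * int dz' int dz'' rho_e(z'').
    inner = cumulative_trapezoid(rho_e, z, initial=0.0)
    return -cumulative_trapezoid(inner, z, initial=0.0) / eps0
```

For a uniform slab of charge density $\rho$ over a width $L$, this recovers the parabolic profile with $\Psi(L) = -\rho L^2/(2\varepsilon_0)$.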
\section*{Data availability}
All relevant simulation input scripts are available in this repository:
Schlaich, Alexander, 2021, "Simulation input scripts for 'Electronic
screening using a virtual Thomas–Fermi fluid for predicting wetting and
phase transitions of ionic liquids at metal surfaces'",
https://doi.org/10.18419/darus-2115, DaRUS.
\section*{Code availability}
Molecular simulations were performed using the open source package
LAMMPS, stable release 7 Aug 2019, available under
{https://www.lammps.org/}.
Post-processing has been performed in Python using our open source
toolbox MAICoS ({https://gitlab.com/maicos-devel/maicos/}).
\end{methods}
\section{Introduction}
A Pick function of one variable is a holomorphic map from the upper half-plane, which we shall denote
by $\Pi$, into $\overline{\Pi}$.
A Pick function of two variables is a holomorphic map from $\pitwo$ to $\overline{\Pi}$.
The purpose of this note
is to extend to two variables certain well-known results about the asymptotic analysis of
Pick functions in one variable.
\subsection{One variable results}
\label{subsec1.1}
In 1922, R.~Nevanlinna showed that
a Pick class function of one variable
that decays at infinity is the Cauchy transform of a finite measure on $\mathbb R$.
\begin{theorem}
\label{thma2}
\cite{nev22}
If $F: \Pi \to \Pi$ is analytic and satisfies
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqa111}
\limsup_{y \to \infty} \, | y F(iy)| < \infty ,
\end{equation} \addtocounter{theorem}{1}
then there exists a unique finite positive Borel measure $\mu$ on $\mathbb R$ so that
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eq1.1.2}
F(z) \=
\int \frac{d\mu(t)}{t-z} .
\end{equation} \addtocounter{theorem}{1}
\end{theorem}
We shall say that a set $S$ in $\Pi$ approaches $\infty$ non-tangentially,
$S \stackrel{nt}{\to}\infty$, if
$\infty$ is in the closure of $S$, and
there is a constant $c$ such that
$ |z| \le c\ \impart{z}$
for all $z \in S$. If $F$ has a representation as in (\ref{eq1.1.2}), then
\[
F(z) \ = \ \frac{\rho}{z} + o(1/|z|)
\]
as $z \stackrel{nt}{\to}\infty$, where $\rho = - \| \mu \|$.
If $\mu$ has more moments, then there is a higher order asymptotic expansion
at $\infty$.
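As a quick numerical aside (not part of the original argument), the leading asymptotics can be checked for $\mu$ equal to Lebesgue measure on $[0,1]$, where $\| \mu \| = 1$:

```python
import numpy as np

# Cauchy transform F(z) = int_0^1 dmu(t)/(t - z) for mu = Lebesgue measure
# on [0,1]; along z = iy one should have iy * F(iy) -> -||mu|| = -1.
t = np.linspace(0.0, 1.0, 20001)
mid = (t[:-1] + t[1:]) / 2
dt = t[1] - t[0]

def F(z):
    return np.sum(dt / (mid - z))   # midpoint-rule quadrature

vals = {y: 1j * y * F(1j * y) for y in (1e2, 1e4, 1e6)}
for y, v in vals.items():
    print(f"y = {y:.0e}:  iy*F(iy) = {v:.6f}")
```

The printed values approach $-1$ as $y$ grows, in agreement with $\rho = -\|\mu\|$.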
H. Hamburger proved the following two theorems \cite{ham20, ham21}.
For a proof of Theorem~\ref{thma3} as stated, see \cite[Thm. 2.2]{shota43}
or \cite[Thm 3.2.1]{akh65}.
\begin{theorem}
\label{thma3}
Let real constants $\rho_1, \dots, \rho_{2N-1}$ be given. There exists a Pick function $F$
satisfying
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqa112}
F(z) \= \frac{\rho_1}{z} + \frac{\rho_2}{z^2} + \dots + \frac{\rho_{2N-1}}{z^{2N-1}}
+ o(|z|^{-(2N-1)})
\end{equation} \addtocounter{theorem}{1}
as $z \stackrel{nt}{\to}\infty$ if and only if
there is a positive measure $\mu$ on $\mathbb R$ whose moments of order up to $2(N-1)$ are finite and satisfy
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqa.1.4}
\int t^k d\mu(t) \ = \ - \rho_{k+1}, \qquad 0 \leq k \leq 2(N-1).
\end{equation} \addtocounter{theorem}{1}
Moreover, in this case
$F$ has a representation as in (\ref{eq1.1.2}) for some measure $\mu$ satisfying
(\ref{eqa.1.4}).
\end{theorem}
Hamburger gave an alternate equivalent condition.
There is also a proof in \cite[Thm. 1.2]{shota43};
and see \cite[Thm. 3.3]{las10} for an alternative formulation (but without a proof).
\begin{theorem}
\label{thma4}
Let $\rho_1, \dots, \rho_{2N-1}$ be given real numbers. There exists a Pick function
$F$
satisfying (\ref{eqa112}) if and only if the
$N$-by-$N$ Hankel matrix
\[
H \ = \
- \
\left(
\begin{array}{cccc}
\rho_1 & \rho_2 & \dots & \rho_{N} \\
\rho_2 & \rho_3 & \dots & \rho_{N+1} \\
\vdots & \vdots && \vdots \\
\rho_{N} & \rho_{N+1} & \dots & \rho_{2N-1}
\end{array}
\right)
\]
is
positive semi-definite and has the property
that whenever $(c_1, c_2, \dots, c_{N-1}, 0)^t$ is in the kernel of $H$,
then $ (0,c_1, c_2, \dots, c_{N-1})^t$ is also in the kernel.
\end{theorem}
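As a concrete illustration of the Hankel condition (an aside, not from the text): for $\mu$ equal to Lebesgue measure on $[0,1]$ one has $\rho_{k+1} = -1/(k+1)$, so $H$ is the $N$-by-$N$ Hilbert matrix, which is positive definite; the kernel condition then holds vacuously.

```python
import numpy as np

# Moments of mu = Lebesgue measure on [0,1]: rho_k = -int t^{k-1} dt = -1/k.
# The Hankel matrix H_{ij} = -rho_{i+j-1} is then the Hilbert matrix.
N = 5
rho = {k: -1.0 / k for k in range(1, 2 * N)}
H = np.array([[-rho[i + j + 1] for j in range(N)] for i in range(N)])

eigs = np.linalg.eigvalsh(H)
print("smallest eigenvalue:", eigs.min())   # positive: H is positive definite
```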
In 1881, L. Kronecker proved the following theorem
\cite{kro81} (see \cite[Thm. I.3.1]{pel02} for a modern treatment).
\begin{theorem}
\label{thmkro1}
The infinite Hankel form
\[
\left(
\begin{array}{cccc}
\rho_1 & \rho_2 & \rho_3 & \dots \\
\rho_2 & \rho_3 & \rho_4 &\dots \\
\rho_3 & \rho_4 & \rho_5 &\dots \\
\vdots & \vdots & \vdots & \vdots
\end{array}
\right)
\]
is finite rank if and only if
\[
F(z) \ = \ \sum_{n=1}^\infty \frac{\rho_n}{z^n}
\]
is a rational function.
\end{theorem}
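A minimal instance of Kronecker's theorem (an aside): the constant sequence $\rho_n = 1$ gives $F(z) = \sum_n z^{-n} = 1/(z-1)$, a rational function, and every truncation of the corresponding Hankel matrix has rank one.

```python
import numpy as np

# Constant moments rho_n = 1: the Hankel matrix is the all-ones matrix
# (rank 1), and the series sums to the rational function 1/(z - 1).
M = 8
H = np.ones((M, M))
print("rank of truncated Hankel matrix:", np.linalg.matrix_rank(H))

z = 3.7
series = sum(1.0 / z ** n for n in range(1, 200))
print(series, "vs", 1.0 / (z - 1))
```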
\subsection{Two variable results}
A two variable version of Theorem~\ref{thma2}
was proved in \cite{aty11}; see also Theorem~\ref{thm2b1} below.
Before stating it, let us introduce some notation.
If $Y$ is an operator on a Hilbert space, and $z = (z_1, z_2)$ is a point in
${\mathbb C}^2$, we shall use $z_Y$ to denote the operator
\[
z_Y \ = \ z_1 Y + z_2 (I - Y).
\]
\begin{thm}
\label{thma5}
\cite{aty11}
Let $h : \pitwo \to {\Pi}$ be a Pick function of two variables. Then
\[
\limsup_{s \to \infty} | s \, h(is,is) | \ < \ \infty
\]
if and only if there is a Hilbert space $\H$, a self-adjoint densely defined operator $A$ on $\H$,
a positive contraction $Y$ on $\H$, and a vector $\alpha$ in $\H$,
such that
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqa1.2.1}
h(z)\ =\ <{(A-z_Y)}^{-1}\alpha,\alpha>, \ \ \ z \in \pitwo.
\end{equation} \addtocounter{theorem}{1}
\end{thm}
We shall say that $h$ has a {\em type I Nevanlinna representation} if it has a representation
as in (\ref{eqa1.2.1}).
In one variable, the Poisson integral of any finite positive measure on $\mathbb R$ is the real part of
a Pick function that decays like (\ref{eqa111}), so the study of asymptotic expansions (\ref{eqa112})
and solutions to the moment problem (\ref{eqa.1.4}) for arbitrary measures are tightly bound.
In two variables, their study diverges. The infinite Hamburger moment problem in several variables
is studied in \cite{puva99} and \cite{vas02}; for an algorithm for solving the problem in two variables, see \cite{zag10}. For the truncated problem, see for example the memoir
\cite{curfi98} and subsequent papers.
Our objective is to study the two variable analogue of (\ref{eqa112}).
If one restricts $z$ to the diagonal $\{ z_1 = z_2 \}$, then (\ref{eqa1.2.1})
becomes (\ref{eq1.1.2}), where $\mu$ is the scalar spectral measure of $A$ for the vector
$\alpha$. Saying that an even moment $\gamma_{2k}$ exists in this case is the assertion that
$t^{k-1}$ is in the domain of $A$. We shall generalize this idea to two variables.
We shall let $m$ and $n$ denote ordered pairs of nonnegative integers.
We set $e_1=(1,0)$ and $e_2=(0,1)$. If $n=(n_1,n_2)$, we set $\abs{n}=n_1+n_2$, and for a pair $z=(z_1,z_2)$ we follow the usual convention of letting $z^n=z_1^{n_1}z_2^{n_2}$. For $N$ a positive integer we set $I_N=\set{n}{1 \le \abs{n} \le N}$.
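These conventions are easy to mirror in a few lines (the helper names below are hypothetical, included only for concreteness):

```python
# Multi-index conventions: |n| = n1 + n2, z^n = z1^{n1} * z2^{n2},
# and I_N = {n : 1 <= |n| <= N}.  Helper names are illustrative only.
def index_set(N):
    return [(n1, s - n1) for s in range(1, N + 1) for n1 in range(s + 1)]

def zpow(z, n):
    return z[0] ** n[0] * z[1] ** n[1]

print(index_set(2))        # [(0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
print(zpow((2.0, 3.0), (1, 2)))
```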
We now define an object that we shall call a \emph{finite Hankel vector moment sequence}, or for short, a \emph{finite HVMS}. For simplicity, we take $N \geq 2$; see (\ref{defhvms}) for general $N$.
\begin{defin}
\label{defhvms2}
For a fixed positive integer $N \geq 2$, a finite Hankel vector moment sequence is a 3-tuple, $(\alphanin{I_N},Y,A)$ where:
$\alphanin{I_N}$ is a sequence of vectors in some Hilbert space $\H$;
$Y$ is a positive contraction acting on $\H$, satisfying for each $l=1,\ldots,N$
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.1A}
Y\alpha_{(0,l)}=0=(1-Y)\alpha_{(l,0)};
\end{equation} \addtocounter{theorem}{1}
$A$ is a partially defined symmetric operator on $\H$ with the property that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.1.5A}
\set{\alpha_n}{1 \le \ \abs{n} \ \le N-1} \subset \dom{A};
\end{equation} \addtocounter{theorem}{1}
for each $n \in I_{N-1}$,
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.2A}
A\alpha_n=Y\alpha_{n+e_1}+(1-Y)\alpha_{n+e_2}.
\end{equation} \addtocounter{theorem}{1}
\end{defin}
Here is the main result of this paper.
\begin{theorem}
\label{thmaf}
A Pick function $h$ of two variables
satisfies
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqa.2.1}
h(z) \ = \ \sum_{n \in I_{2N-1}} \frac{\rho_n}{z^n} \ + \ o(\|z \|^{-(2N-1)})
\end{equation} \addtocounter{theorem}{1}
as $z \stackrel{nt}{\to}\infty$, for some real numbers $\rho_n$,
if and only if it has a representation
as in (\ref{eqa1.2.1}) and there is a finite HVMS
$(\alphanin{I_N},Y,A)$
with $\alpha = \alpha_{(1,0)} + \alpha_{(0,1)}$.
Moreover,
$\rho_k$ is given by the formula:
\[
\rho_k \ = \
- \ \sum \{ \langle \alpha_n , A \alpha_m \rangle \ : \
m_1 + n_1 = k_1,\ m_2 + n_2 = k_2,\
m_1 + m_2 = \lfloor |k|/2 \rfloor \, \} .
\]
\end{theorem}
When $|k| = 1$, one interprets the right-hand side of the inner product as $\alpha$
(so $\rho_{(1,0)} = - \langle \alpha_{(1,0)}, \alpha \rangle$ and
$\rho_{(0,1)} = - \langle \alpha_{(0,1)}, \alpha \rangle$ ).
By $z \stackrel{nt}{\to}\infty$ we mean that $\| z \| \to \infty$ while
$z$ stays in an approach region
$$
\{z \in \pitwo \ : \ \| z \| \leq c \ \min{\Im z_1}{\Im z_2} \}$$ for some $c$.
The notation $\lfloor M/2 \rfloor $ stands for the
greatest integer less than or equal to $M/2$.
The forward implication of Theorem~\ref{thmaf} is Theorem~\ref{thm4.1}; the converse is
Theorem~\ref{prop3.10}.
To relate Theorem~\ref{thmaf} to Theorems~\ref{thma3} and \ref{thma4},
think in one variable of
$\alpha_n$ as $t^{n-1}$ in $L^2(\mu)$, and $A$ as multiplication by $t$ on $L^2(\mu)$.
Then $\rho_k$ is given by a single term, $- \langle t^{\lceil k/2 \rceil -1}, t^{\lfloor k/2 \rfloor} \rangle$.
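This reduction is a matter of exponent bookkeeping: since $\lceil k/2 \rceil + \lfloor k/2 \rfloor = k$, the single term equals $-\int t^{k-1}\,d\mu$. A quick check (an aside), with $\mu$ Lebesgue measure on $[0,1]$:

```python
import math

# With mu = Lebesgue measure on [0,1], <t^{ceil(k/2)-1}, t^{floor(k/2)}>
# in L^2(mu) is int_0^1 t^{k-1} dt = 1/k, since ceil(k/2)+floor(k/2) = k.
for k in range(1, 10):
    a = math.ceil(k / 2) - 1
    b = k // 2
    inner = 1.0 / (a + b + 1)          # int_0^1 t^{a+b} dt
    assert abs(inner - 1.0 / k) < 1e-15
print("single-term formula matches -rho_k = 1/k for k = 1,...,9")
```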
Theorem~\ref{thma4} also has a two variable analogue, which we give in Theorem~\ref{thm5.1}.
This justifies our nomenclature of Hankel vector moment sequence.
The last condition in Theorem~\ref{thm5.1} is an analogue of the last condition in Theorem~\ref{thma4};
for an explanation of it, see Section~\ref{sec5}.
\vskip 10pt
{\bf Theorem \ref{thm5.1}}
{\em
Let $a = (a^1,a^2)$ be a pair of matrices on $\isubn$.
Then there is a finite HVMS $(\alphanin{I_N},Y,A)$
such that
\begin{eqnarray*}
a^1_{mn} & \ = \ &\langle Y \alpha_n, \alpha_m \rangle
\\
a^2_{mn} & \ = \ &\langle (1- Y) \alpha_n, \alpha_m \rangle
\end{eqnarray*}
if and only if the following four conditions obtain:
\[
a^1 \text{ and } a^2 \text{ are positive semi-definite.}
\]
\[
a^1_{m+e_1,n}+a^2_{m+e_2,n}=a^1_{m,n+e_1}+a^2_{m,n+e_2} \ \text{ whenever }\ m,n \in \isub{N-1}.
\]
\[
a^1_{(0,l),(0,l)} = a^2_{(l,0),(l,0)}=0\ \text{ for }\ l=1,\ldots,N.
\]
\[
\supp{f} \subseteq \isub{N-1}\text{ and } (a^1+a^2)f=0 \Rightarrow (a^1S_1+a^2S_2)f=0.
\]
}
In Section~\ref{secinf}, we discuss infinite sequences.
One multi-variable generalization of Kronecker's Theorem \ref{thmkro1} was proved by S. C. Power \cite{po82}. In Theorem~\ref{thmf3}, we prove another.
{\bf Theorem \ref{thmf3}:}
{\em
Let $h$ have non-tangential asymptotic expansions of all orders at infinity.
Then there is an infinite HVMS
$(\{ \alpha_n \},Y,A)$
with $\alpha = \alpha_{(1,0)} + \alpha_{(0,1)}$,
and $
h(z) = < (A -z_Y)^{-1} \alpha, \alpha >
$.
The sequence can be chosen with
${\rm rank} \langle \alpha_n, \alpha_m \rangle < \infty $
if and only if $h$ is a rational function.
}
\vskip 10pt
In Section~\ref{secex}, we give an example of a construction of functions in the Pick class that
have asymptotic expansions. In Section~\ref{secm}, we give some technical results on models.
\section {Finite Hankel Vector Moment Sequences}
\label{secb}
\begin{defin}
\label{defhvms}
For a fixed positive integer $N$, a finite Hankel vector moment sequence is a 3-tuple, $(\alphanin{I_N},Y,A)$ where:
$\alphanin{I_N}$ is a sequence of vectors in some Hilbert space $\H$;
$Y$ is a positive contraction acting on $\H$, satisfying for each $l=1,\ldots,N$
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.1}
Y\alpha_{(0,l)}=0=(1-Y)\alpha_{(l,0)};
\end{equation} \addtocounter{theorem}{1}
$A$ is a partially defined symmetric operator on $\H$ with the properties that, if $N \geq 2$
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.1.5}
\set{\alpha_n}{1 \le \ \abs{n} \ \le N-1} \subset \dom{A};
\end{equation} \addtocounter{theorem}{1}
and, for each $n \in I_{N-1}$,
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.2}
A\alpha_n=Y\alpha_{n+e_1}+(1-Y)\alpha_{n+e_2}.
\end{equation} \addtocounter{theorem}{1}
\blue
When $N=1$ conditions (\ref{eq2.1.5}) and (\ref{eq2.2}) are dropped.
\black
\end{defin}
Every symmetric operator has a self-adjoint extension on a possibly larger Hilbert space;
so there is no loss in generality in assuming $A$ is self-adjoint.
If $(\alphanin{I_N},Y,A)$ is a finite HVMS, we frequently shall abuse the notation somewhat and refer to the entire tuple by simply $\alphan$. If $\alphan$ is an HVMS as above, we refer to $N$ as the \emph{size} of $\alphan$, $Y$ as the \emph{Hankel weight} of $\alphan$, $A$ as the \emph{Hankel shift} of $\alphan$, and finally the vectors, $\alpha_n$ are called the \emph{vector moments} of $\alphan$.
Our first proposition gives a simple yet fundamental property of HVMS's. If $z \in \ctwo$ and $Y$ is a positive contraction on a Hilbert space $\mathcal{H}$, we define
$z_Y=z_1Y+z_2(1-Y).$ As $Y$ is a positive contraction, the spectral theorem implies that $\zyinv$ is a well-defined analytic operator-valued function on the set $ \set{z \in \ctwo}{z_2 \ne 0, z_1/z_2\notin (-\infty, 0]}.$ If $\alphan$ is an HVMS with shift $A$ and weight $Y$, and $l$ is a positive integer, we shall adopt the notation, $$R_l(z)=\zyinv(A\zyinv)^{l-1}.$$
Note that if $z \in \set{z \in \ctwo}{z_2 \ne 0, z_1/z_2\notin (-\infty, 0]}$, then the domain of $R_l(z)$ is all of ${\cal H}$ if $l=1$ and for $l \ge2$ is inductively defined by
\begin{equation*}
\dom{(R_l(z))}=\set{\alpha \in {\cal H}}{{(\zyinv A)}^i \zyinv \alpha \in \dom{A}, \ i=0,\ldots ,l-2}.
\end{equation*}
Note also that
\[
R_l(\bar z) \ \subseteq \ R_l(z)^*.
\]
\begin{prop}\label{prop2.1}
Let $\{\alpha_n\}$ be an HVMS of size $N$ and let
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.3}
\alpha=\alpha_{(1,0)}+\alpha_{(0,1)}.
\end{equation} \addtocounter{theorem}{1}
If $1 \le l \le N$, then
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.3.5}
\alpha \in \dom{R_l(z)}
\end{equation} \addtocounter{theorem}{1}
and
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.4}
R_l(z)\alpha=\sum_{\mid n\mid=l}\frac{1}{z^n}\alpha_n
\end{equation} \addtocounter{theorem}{1}
\blue
for all $z$ in $\set{z \in \ctwo}{z_2 \ne 0, z_1/z_2\notin (-\infty, 0]}$.
\black
\end{prop}
\begin{proof}
We induct on $N$. If $N=1$ and $l=1$, then trivially \ref{eq2.3.5} holds. Also, by \ref{eq2.1}, $Y\alpha_{(0,1)}=0=(1-Y)\alpha_{(1,0)}$. Hence,
\begin{align*}
R_1(z)\alpha & =\zyinv \alpha \\&= \zyinv\alpha_{(1,0)}+\zyinv\alpha_{(0,1)} \\& =
\frac{1}{z_1}\alpha_{(1,0)}+\frac{1}{z_2}\alpha_{(0,1)} \\& =\sum_{\mid n\mid=1}\frac{1}{z^n}\alpha_n.
\end{align*}
Now assume that the proposition holds for HVMS's of size $N$. Fix an HVMS, $\alphanin{I_{N+1}}$, of size $N+1$. The case when $l=1$ is handled as in the previous paragraph. If $2\le l\le N+1$, as $\alphanin{I_{N}}$ is an HVMS of size $N$, the inductive hypothesis implies that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.4.5}
R_l(z)\alpha=\zyinv A R_{l-1}(z)\alpha=\zyinv A \sum_{\mid n\mid=l-1}\frac{1}{z^n}\alpha_n.
\end{equation} \addtocounter{theorem}{1}
As $\alphanin{I_{N+1}}$ is of size $N+1$ and $l-1 \le N$, \ref{eq2.1.5} implies that $\alpha_n \in \dom{A}$ whenever $\abs{n} = l-1$. Hence, \ref{eq2.4.5} implies that $\alpha \in \dom{R_l(z)}$. Also, using \ref{eq2.1} and \ref{eq2.2} we see via \ref{eq2.4.5} that
\begin{align*}
R_l(z)\, \alpha &= \zyinv A \sum_{\mid n\mid=l-1}\frac{1}{z^n}\alpha_n
\\
& = \zyinv \sum_{\mid n\mid=l-1}\frac{1}{z^n} Y \alpha_{n+e_1} +
\zyinv \sum_{\mid n\mid=l-1}\frac{1}{z^n} (1-Y) \alpha_{n+e_2}
\\&= \zyinv \frac{1}{z_1^{l-1}} Y \alpha_{(l,0)}
+ \zyinv \sum_{\substack{\mid m\mid=l \\ m \ne (l,0),(0,l)}}\frac{z_1}{z^m} Y \alpha_m +
\\
& \qquad
\zyinv \sum_{\substack{\mid m\mid=l \\ m \ne (l,0),(0,l)}}\frac{z_2}{z^m} (1-Y) \alpha_m
+ \zyinv \frac{1}{z_2^{l-1}} (1-Y) \alpha_{(0,l)}
\\& =\frac{1}{z_1^l}\alpha_{(l,0)}
+\zyinv \sum_{\substack{\mid m\mid=l \\ m \ne (l,0),(0,l)}}\frac{z_1}{z^m} Y \alpha_m +
\\& \qquad
\zyinv \sum_{\substack{\mid m\mid=l \\ m \ne (l,0),(0,l)}}\frac{z_2}{z^m} (1-Y) \alpha_m +\frac{1}{z_2^l}\alpha_{(0,l)}
\\& = \frac{1}{z_1^l}\alpha_{(l,0)}+
\zyinv \sum_{\substack{\mid m\mid=l \\ m \ne (l,0),(0,l)}}(z_1 Y + z_2 (1-Y)) \frac{1}{z^m}\alpha_m +\frac{1}{z_2^l}\alpha_{(0,l)}
\\& = \frac{1}{z_1^l}\alpha_{(l,0)}+
\sum_{\substack{\mid m\mid=l \\ m \ne (l,0),(0,l)}} \frac{1}{z^m}\alpha_m +\frac{1}{z_2^l}\alpha_{(0,l)}
\\& =\sum_{\mid n\mid=l}\frac{1}{z^n}\alpha_n.
\end{align*}
\end{proof}
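A finite-dimensional sanity check of the base case of Proposition \ref{prop2.1} (a sketch, with an orthogonal projection standing in for the positive contraction $Y$):

```python
import numpy as np

# Take Y an orthogonal projection, alpha_{(1,0)} in ran(Y), alpha_{(0,1)}
# in ker(Y), and verify z_Y^{-1} alpha = alpha_{(1,0)}/z_1 + alpha_{(0,1)}/z_2.
Y = np.diag([1.0, 1.0, 0.0, 0.0])
a10 = np.array([2.0, -1.0, 0.0, 0.0])   # satisfies (1 - Y) a10 = 0
a01 = np.array([0.0, 0.0, 3.0, 0.5])    # satisfies Y a01 = 0
alpha = a10 + a01

z1, z2 = 1.5 + 2.0j, -0.7 + 1.0j
zY = z1 * Y + z2 * (np.eye(4) - Y)
lhs = np.linalg.solve(zY, alpha)        # R_1(z) alpha = z_Y^{-1} alpha
rhs = a10 / z1 + a01 / z2
print(np.allclose(lhs, rhs))            # True
```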
The property described by \ref{eq2.3.5} in Proposition \ref{prop2.1} arises as an issue in many of the applications of HVMS's that we have in mind. Accordingly, we introduce the following definition.
\begin{defin}\label{def2.1}
Let ${\cal H}$ be a Hilbert space, $\alpha \in {\cal H}$, and assume that $Y$ is a positive contraction on ${\cal H}$. If $A$ is a symmetric operator on ${\cal H}$, we say that $A$ has finite complex vector $(Y,\alpha)$-moments to order $N$ if for each $z \in
\blue
\set{z}{z_2 \ne 0, z_1/z_2\notin (-\infty, 0]}
\black
$, $\alpha \in \dom{(Az_Y^{-1})^l}$ for $l=1,\ldots,N$. We say that $A$ has finite real vector $(Y,\alpha)$-moments to order $N$ if for each $b \in {\rplus}^2$, $\alpha \in \dom{(Ab_Y^{-1})^l}$ for $l=1,\ldots,N$.
\end{defin}
The following converse to Proposition \ref{prop2.1} provides a useful criterion to verify that a given symmetric operator and positive operator are associated with an HVMS.
\begin{prop}\label{prop2.2}
Let ${\cal H}$ be a Hilbert space, let $\alpha \in {\cal H}$ and assume that $A$ and $Y$ are operators acting on ${\cal H}$, with $A$ symmetric and $Y$ a positive contraction. The following conditions are equivalent.\\ \\
(i)\ \ \ There exists a sequence $\alphanin{I_N}$ in ${\cal H}$ such that \\
\hspace*{1mm}$ \quad \quad \; \alpha=\alpha_{(1,0)}+\alpha_{(0,1)}$ and $(\alphan,A,Y)$ is an HVMS.\\ \\
(ii)\ \ \ $A$ has finite complex vector $(Y,\alpha)$-moments to order $N-1$ and \\
\hspace*{10mm} for each $l=1,\ldots,N$ there exist vectors $\alpha_n$, $\abs{n}=l$ such that
\setcounter{equation}{\value{theorem}} \begin{equation}
R_l(z)\alpha=\sum_{\mid n\mid=l}\frac{1}{z^n}\alpha_n \notag
\end{equation} \addtocounter{theorem}{1}
\ \ \ \ whenever $z \in \blue \set{z}{z_2 \ne 0, z_1/z_2\notin (-\infty, 0]}$.
\black
\\\\
(iii)\ \ \ $A$ has finite real vector $(Y,\alpha)$-moments to order $N-1$ and\\
\hspace*{10mm} for each $l=1,\ldots,N$ there exist vectors $\alpha_n$, $\abs{n}=l$ such that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.4.6}
R_l(b)\alpha=\sum_{\mid n\mid=l}\frac{1}{b^n}\alpha_n
\end{equation} \addtocounter{theorem}{1}
\ \ \ \ whenever $b \in {\rplus}^2$.
\end{prop}
\begin{proof}
That (i) implies (ii) follows from Proposition \ref{prop2.1}. Obviously, (ii) implies (iii).
Assume that (iii) holds.
\blue
To show that \ref{eq2.1} holds when $l=1$ and that $\alpha=\alpha_{(1,0)}+\alpha_{(0,1)}$,
equate coefficients in the following equation obtained from \ref{eq2.4.6} when $l=1$.
\begin{align*}
\alpha &=\ b_Y \byinv \alpha \\
&=\ b_Y R_1(b) \alpha\\
&=\ (b_1Y+b_2(1-Y))(\frac{1}{b_1}\alpha_{(1,0)}+\frac{1}{b_2}\alpha_{(0,1)}).
\end{align*}
Now assume $N \geq 2$.
\black
Note that the moment condition implies that for $1 \le l\le N-1$, $R_l(b)\alpha \in \dom{A}$. Hence by \ref{eq2.4.6},
$$\sum_{\mid n\mid=l}\frac{1}{b^n}\alpha_n \in \dom{A},$$
for all $b \in {\rplus}^2$. As
$$\sp{\set{\sum_{\mid n\mid=l}\frac{1}{b^n}\alpha_n}{b \in {\rplus}^2}}=
\sp{\set{\alpha_n}{\abs{n}=l}},$$
it follows that \ref{eq2.1.5} holds.
Now fix $l$ with $1 \le l \le N-1$. Noting that $b_Y R_{l+1} (b) = A R_l (b)$, we compute using \ref{eq2.4.6} that
\begin{align*}
\sum_{\mid m\mid=l}\frac{1}{b^m}A\alpha_m &= A\sum_{\mid m\mid=l}\frac{1}{b^m}\alpha_m \\&=b_Y \sum_{\mid n\mid=l+1}\frac{1}{b^n}\alpha_n
\\&=\frac{b_1}{b_2^{l+1}}Y\alpha_{(0,l+1)}+\sum_{\mid m\mid=l}\frac{1}{b^m}(Y\alpha_{m+e_1}+(1-Y)\alpha_{m+e_2})
\\& \ \ +
\frac{b_2}{b_1^{l+1}} (1-Y) \alpha_{(l+1,0)}.
\end{align*}
Equating terms in this formula yields that \ref{eq2.2} holds for $1 \le l \le N-1$ and that \ref{eq2.1} holds for $2 \le l \le N$.
\end{proof}
We now turn to a much more subtle characterization of HVMS's given in Theorem \ref{thm2.1} below. Suppose that $\alphan$ is an HVMS of size $N$ with weight $Y$ and shift $A$ and let $\alpha$ be as in \ref{eq2.3}. Let $\rplus=\set{t \in \mathbb{R}}{t>0}$. For $1 \le k \le 2N-1$ define functions $r_k:{\rplus}^2 \to \mathbb{R}$ by the formulas,
\begin{align}
\label{eq2.4.9}
r_1(b)\ &=\ < R_1(b) \alpha, \alpha > \ = \ < b_Y^{-1} \alpha, \alpha > &\text{ if } k = 1 \\
\label{eq2.5}
r_k(b)\ &=\ <R_l(b)\alpha,A R_{l-1}(b)\alpha> &\text{ if } 3 \leq k=2l-1
\\
\label{eq2.6}
r_k(b)\ &=\ <R_l(b)\alpha,A R_l(b)\alpha> &\text{ if }2 \leq k=2l,
\end{align}
where the expressions $R_l(b)\alpha$ make sense by Proposition \ref{prop2.1}.
Computing $r_k(b)$ using \ref{eq2.4} yields the qualitative information that for each $k$ with $1 \le k \le 2N-1$, $r_k(b)$ is a homogeneous polynomial in $\frac{1}{b} =(\frac{1}{b_1},\frac{1}{b_2})$ of degree $k$. To formalize these properties of $\alpha$, $Y$, and $A$ we introduce the following definition.
\begin{defin}\label{def2.2}
Let ${\cal H}$ be a Hilbert space, $\alpha \in {\cal H}$, and assume that $Y$ is a positive contraction on ${\cal H}$. Assume that $A$ is a symmetric operator on ${\cal H}$ with finite real vector $(Y,\alpha)$-moments to order $N-1$. For $1 \le k \le 2N-1$ we define the $k^{\text{th}}$ scalar $(Y,\alpha)$-moment of $A$ by equations (\ref{eq2.4.9}) to (\ref{eq2.6}).
\end{defin}
Before continuing, we remark that ontologically the scalar $(Y,\alpha)$-moments of $A$ are functions on $\rtwoplus.$ However, if these functions happen to be given by homogeneous polynomials (as e.g. occurs in the case of an HVMS), then there is an obvious way to extend the moment functions to all of $\ctwo.$ Concrete formulas for this case would be
\begin{align}
\label{eq2.5.01}
r_1(z)\ &=\ < R_1(z) \alpha, \alpha > \ = \ < z_Y^{-1} \alpha, \alpha > &\text{ if } k = 1 \\
\label{eq2.5.1}
r_k(z)\ &=\ <R_l(z)\alpha,A R_{l-1}(z)^*\alpha> &\text{ if } 3 \leq k=2l-1
\\
\label{eq2.5.2}
r_k(z)\ &=\ <R_l(z)\alpha,A R_l(z)^*\alpha> &\text{ if } 2 \leq k=2l .
\end{align}
\begin{rem}
\label{rem2.1}
{\rm
If $(\alphan,A,Y)$ is a finite HVMS, then by Proposition~\ref{prop2.2} the $k^{\rm th}$
scalar $(Y,\alpha)$-moments of $A$
are given by
\begin{eqnarray*}
r_1(b) & = \frac{1}{b_1} \langle \alpha_{(1,0)} , \alpha \rangle
+ \frac{1}{b_2} \langle \alpha_{(0,1)} , \alpha \rangle &\text{ if } k = 1 \\
r_k(b) & =
\sum_{|m|=l - 1, |n| = l} \frac{1}{b^{m+n}} \langle \alpha_n ,
Y \alpha_{m+e_1} + (1-Y) \alpha_{m+e_2} \rangle &\text{ if } 3 \leq k=2l-1
\\
r_k(b) & =
\sum_{|m|=l, |n| = l} \frac{1}{b^{m+n}} \langle \alpha_n ,
Y \alpha_{m+e_1} + (1-Y) \alpha_{m+e_2} \rangle &\text{ if } 2 \leq k=2l .
\end{eqnarray*}
In particular, they only depend on the Gram matrices $ a^1 = \langle Y \alpha_n, \alpha_m \rangle$
and $ a^2 = \langle (1- Y) \alpha_n, \alpha_m \rangle$.
}
\end{rem}
\begin{thm}\label{thm2.1}
Let ${\cal H}$ be a Hilbert space, $\alpha \in {\cal H}$, and $N \ge 1$. Assume that $Y$ is a positive contraction on ${\cal H}$ and $A$ is a symmetric operator on ${\cal H}$. There exists an indexed sequence $\alphanin{I_N}$ of vectors in ${\cal H}$ such that $(\alphan,A,Y)$ is an HVMS of size $N$ and
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.6.5}
\alpha=\alpha_{(1,0)}+\alpha_{(0,1)}
\end{equation} \addtocounter{theorem}{1}
if and only if $A$ has finite real vector $(Y,\alpha)$-moments to order $N-1$ and for each $k \le 2N-1$, the $k^{th}$ scalar $(Y,\alpha)$-moment of $A$ is a homogeneous polynomial in $\frac{1}{b}$ of order $k$.
\end{thm}
\begin{proof}
The necessity of the homogeneity condition follows by the discussion leading up to Definition \ref{def2.2}. To prove the sufficiency we proceed by induction on $N$.
When $N=1$, there is only one scalar moment given by $$r_1(b)=<\byinv\alpha,\alpha>.$$ If $r_1$ is homogeneous of degree one, then there exist constants $a_1$ and $a_2$ such that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.7}
<\byinv\alpha,\alpha>=a_1\frac{1}{b_1}+a_2\frac{1}{b_2}.
\end{equation} \addtocounter{theorem}{1}
We analyze \ref{eq2.7} by making the substitutions,
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.8}
b_1=x \ \ \ \ \ \text{and} \ \ \ \ \ b_2=\frac{t}{t-1}x.
\end{equation} \addtocounter{theorem}{1}
Noting that in the new variables $x$ and $t$,
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.9}
\byinv=\frac{t-1}{x}\tyinv,
\end{equation} \addtocounter{theorem}{1}
one computes that \ref{eq2.7} becomes
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.10}
<\tyinv \alpha,\alpha>=\frac{a_1}{t-1}+\frac{a_2}{t}.
\end{equation} \addtocounter{theorem}{1}
Now, \ref{eq2.10} implies that the scalar spectral measure of $Y$ w.r.t. $\alpha$ is supported in the set $\{0,1\}$, which in turn implies that $Y(1-Y)\alpha=0$. Letting $\alpha_{(1,0)}=Y\alpha$ and $\alpha_{(0,1)}=(1-Y)\alpha$, we see immediately that \ref{eq2.6.5} holds. As
$$(1-Y)\alpha_{(1,0)}=(1-Y)Y\alpha=0$$
and
$$Y\alpha_{(0,1)}=Y(1-Y)\alpha=0,$$
we see that \ref{eq2.1} holds. Finally, as \ref{eq2.1.5} and \ref{eq2.2} are both vacuous when $N=1$, the theorem is proved for the special case when $N=1$.
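The substitutions \ref{eq2.8} and the resulting identity \ref{eq2.9} can be checked numerically for a generic positive contraction (a sketch, not part of the proof):

```python
import numpy as np

# Check of the substitution b_1 = x, b_2 = t x/(t-1): one computes
# b_Y = (x/(t-1)) (t - Y), hence b_Y^{-1} = ((t-1)/x) (t - Y)^{-1}.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
Y = M @ M.T
Y = Y / (np.linalg.norm(Y, 2) + 1.0)     # a positive contraction, norm < 1

x, t = 0.8, 3.2                          # t > 1 so that b_2 > 0
b1, b2 = x, t * x / (t - 1)
bY = b1 * Y + b2 * (np.eye(4) - Y)
lhs = np.linalg.inv(bY)
rhs = ((t - 1) / x) * np.linalg.inv(t * np.eye(4) - Y)
print(np.allclose(lhs, rhs))             # True
```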
Now suppose that the sufficiency of the homogeneity conditions has been established whenever $A$ has finite vector $(Y,\alpha)$-moments to order $N-1$ and $2N-1$ homogeneous scalar $(Y,\alpha)$-moments. Fix $A,Y$, and $\alpha$ with the properties that $A$ has finite real vector $(Y,\alpha)$-moments to order $N$ and $2N+1$ homogeneous real scalar $(Y,\alpha)$-moments, $r_k(b)$. We need to show that there exists an indexed sequence $\alphanin{I_{N+1}}$ in ${\cal H}$ such that $(\alphan,A,Y)$ is an HVMS of size $N+1$ and such that $\alpha=\alpha_{(1,0)}+\alpha_{(0,1)}$. By Proposition \ref{prop2.2} this will be accomplished if we can construct an indexed set $\alphanin{I_{N+1}}$ in ${\cal H}$ such that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.10.1}
\alphanin{I_{N}} \subset \dom{A}
\end{equation} \addtocounter{theorem}{1}
and
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.10.2}
R_l(b)\alpha=\sum_{\mid n\mid=l}\frac{1}{b^n}\alpha_n \text{ for } l=1,\ldots,N+1.
\end{equation} \addtocounter{theorem}{1}
By the induction hypothesis, there exists an indexed set of vectors in ${\cal H}$, $\alphanin{I_N}$ such that $\ref{eq2.6.5}$ holds and such that $(\alphanin{I_N},A,Y)$ is an HVMS of size $N$.
By the homogeneity of $r_{2N+1}(b)$, there exist scalars $\rho_n, \abs{n}=2N+1,$ such that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.11}
r_{2N+1}(b)=\sum_{\mid n\mid=2N+1}\frac{\rho_n}{b^n}
\end{equation} \addtocounter{theorem}{1}
On the other hand, by the definition of the odd scalar moments, \ref{eq2.5},
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.12}
r_{2N+1}(b)=<\byinv A R_N(b)\alpha,A R_{N}(b)\alpha>.
\end{equation} \addtocounter{theorem}{1}
Finally, Proposition \ref{prop2.1} implies that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.13}
R_N(b)\alpha=\sum_{\mid n\mid=N}\frac{1}{b^n}\alpha_n.
\end{equation} \addtocounter{theorem}{1}
The remainder of the proof consists of employing the substitutions, \ref{eq2.8}, to make various deductions from \ref{eq2.11}, \ref{eq2.12}, and \ref{eq2.13} pertinent to establishing \ref{eq2.10.1} and \ref{eq2.10.2}. To facilitate our calculations we shall employ the notation,
$$t^n=t^{n_1}{(t-1)}^{n_2}.$$
Making the substitutions, \ref{eq2.8}, in \ref{eq2.13}, we obtain that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.14}
R_N(b)\alpha={(tx)}^{-N}\sum_{\mid n\mid=N}t^n \alpha_n.
\end{equation} \addtocounter{theorem}{1}
As $A$ is assumed to have finite real vector $(Y,\alpha)$-moments to order $N$, the left side of \ref{eq2.14} is in the domain of $A$ for all $b \in \rtwoplus$. Hence, the right side of \ref{eq2.14} is in the domain of $A$ for all $t \in \rplus$. Noting that the set $\set{t^n}{\abs{n} = N}$ is a basis for the polynomials of degree less than or equal to $N$, it follows that $\alpha_n \in \dom{A}$ whenever $\abs{n} = N$. On the other hand, as $(\alphanin{I_N},A,Y)$ is an HVMS of size $N$, it follows from \ref{eq2.1.5} that $\alpha_n \in \dom{A}$ whenever $\abs{n} < N$. Thus, we have shown that
\ref{eq2.10.1} holds.
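Behind \ref{eq2.14} is the scalar identity $1/b^n = (tx)^{-N}\, t^{n_1}(t-1)^{n_2}$ for $\abs{n} = N$, which a few lines verify numerically (an aside):

```python
# Check of the change of variables behind (2.14): with b_1 = x,
# b_2 = t x/(t-1) and the convention t^n = t^{n_1} (t-1)^{n_2},
# one has 1/b^n = (t x)^{-N} t^n whenever |n| = N.
x, t = 0.9, 2.6
b1, b2 = x, t * x / (t - 1)

N = 4
for n1 in range(N + 1):
    n2 = N - n1
    lhs = 1.0 / (b1 ** n1 * b2 ** n2)
    rhs = (t * x) ** (-N) * t ** n1 * (t - 1) ** n2
    assert abs(lhs - rhs) < 1e-12 * abs(lhs)
print("identity verified for all |n| =", N)
```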
In order to verify \ref{eq2.10.2} we must first explain how $\alpha_n$ is defined when $\abs{n}=N+1$. Substitute \ref{eq2.13} into \ref{eq2.12} and then equate the right hand sides of \ref{eq2.11} and \ref{eq2.12} to obtain,
\begin{align*}
\lefteqn{{(tx)}^{-(2N+1)}\sum_{\mid n\mid=2N+1}\rho_n t^n} \\
&\quad = \frac{t-1}{x}<{(t-Y)}^{-1}{(tx)}^{-N}\sum_{\mid m\mid=N}t^m A\alpha_m,{(tx)}^{-N}\sum_{\mid m\mid=N}t^m A\alpha_m>
\end{align*}
which simplifies to
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.16}
\frac{p(t)}{t(t-1)}=<{(t-Y)}^{-1}\sum_{\mid m\mid=N}t^m A\alpha_m,\sum_{\mid m\mid=N}t^m A\alpha_m>,
\end{equation} \addtocounter{theorem}{1}
where $p$ is the polynomial of degree less than or equal to $2N+1$ defined by
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.17}
p(t)=\sum_{\mid n\mid=2N+1}\rho_n t^n.
\end{equation} \addtocounter{theorem}{1}
For $m$ a multi-index, we define $Q_m(t)$, an operator-valued polynomial of degree $\abs{m}-1$, by the formula,
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.18}
Q_m(t)=\frac{t^m - Y^m}{t-Y}. \notag
\end{equation} \addtocounter{theorem}{1}
Computing with the right side of \ref{eq2.16} yields that
\begin{align*}
\lefteqn{ <{(t-Y)}^{-1}\sum_{\mid m\mid=N}t^m A\alpha_m,\sum_{\mid m\mid=N}t^m A\alpha_m>} \\
&\ =\ <{(t-Y)}^{-1}\sum_{\mid m\mid=N}[(t-Y)Q_m(t)+Y^m] A\alpha_m,\sum_{\mid m\mid=N}t^m A\alpha_m>\\
&\ =\ <\sum_{\mid m\mid=N}Q_m(t) A\alpha_m,\sum_{\mid m\mid=N}t^m A\alpha_m>\\
&\qquad\qquad + <{(t-Y)}^{-1}\sum_{\mid m\mid=N}Y^m A\alpha_m,\sum_{\mid m\mid=N}t^m A\alpha_m>\\
&\ =\ <\sum_{\mid m\mid=N}Q_m(t) A\alpha_m,\sum_{\mid m\mid=N}t^m A\alpha_m>\\
&\qquad\qquad + <\sum_{\mid m\mid=N}Y^m A\alpha_m,{(t-Y)}^{-1}\sum_{\mid m\mid=N}t^m A\alpha_m>\\
&\ =\ <\sum_{\mid m\mid=N}Q_m(t) A\alpha_m,\sum_{\mid m\mid=N}t^m A\alpha_m> + <\sum_{\mid m\mid=N}Y^m A\alpha_m,\sum_{\mid m\mid=N} Q_m(t) A\alpha_m>\\
&\qquad\qquad +<\sum_{\mid m\mid=N}Y^m A\alpha_m,{(t-Y)}^{-1}\sum_{\mid m\mid=N}Y^m A\alpha_m>.
\end{align*}
As the first two terms of this last expression are polynomials of degree less than or equal to $2N-1$ and $N-1$ respectively, recalling that $p$ has degree less than or equal to $2N+1$, we see that the third term in the above expression must have the form,
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.19}
<\sum_{\mid m\mid=N}Y^m A\alpha_m,{(t-Y)}^{-1}\sum_{\mid m\mid=N}Y^m A\alpha_m>
=\frac{c_1}{t}+\frac{c_2}{t-1}+q(t),
\end{equation} \addtocounter{theorem}{1}
where $c_1$ and $c_2$ are scalars and $q$ is a polynomial of degree less than or equal to $2N-1$. \ref{eq2.19} implies that if we set $$\beta=\sum_{\mid m\mid=N}Y^m A\alpha_m$$ and $E$ is the spectral measure for $Y$, then $dE_{\beta,\beta}$ is supported in $\{0,1\}$, which in turn implies that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.20}
Y(Y-1)\beta=0.
\end{equation} \addtocounter{theorem}{1}
Now observe in light of \ref{eq2.20}, that
$$t(t-1){(t-Y)}^{-1}\beta=(t+Y-1)\beta$$
Hence,
\begin{align*}\label{eq2.21}
\lefteqn{t(t-1){(t-Y)}^{-1}\sum_{\mid m\mid=N}t^m A\alpha_m}\\
&\ = \ t(t-1){(t-Y)}^{-1}\sum_{\mid m\mid=N}[(t-Y)Q_m(t)+Y^m] A\alpha_m\\
&\ =\ t(t-1)\sum_{\mid m\mid=N}Q_m(t) A\alpha_m + t(t-1){(t-Y)}^{-1}\sum_{\mid m\mid=N}Y^m A\alpha_m\\
&\ =\ t(t-1)\sum_{\mid m\mid=N}Q_m(t) A\alpha_m + t(t-1){(t-Y)}^{-1}\beta\\
&\ =\ t(t-1)\sum_{\mid m\mid=N}Q_m(t) A\alpha_m + (t+Y-1)\beta.
\end{align*}
As $Q_m(t)$ has degree $N-1$, this implies that
$$t(t-1){(t-Y)}^{-1}\sum_{\mid m\mid=N}t^m A\alpha_m$$
is a vector valued polynomial of degree $N+1$. But the set $\set{t^n}{\abs{n}=N+1}$ forms a basis for the polynomials of degree less than or equal to $N+1$. Hence, there exist vectors
$\alpha_n \in {\cal H},\ \abs{n}=N+1$, such that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.22}
t(t-1){(t-Y)}^{-1}\sum_{\mid m\mid=N}t^m A\alpha_m = \sum_{\mid n\mid=N+1}t^n \alpha_n.
\end{equation} \addtocounter{theorem}{1}
Unraveling the substitutions \ref{eq2.8}, \ref{eq2.22} becomes
$$\byinv A R_{N}(b)\alpha=\sum_{\mid n\mid=N+1}\frac{1}{b^n}\alpha_n,$$
or,
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.23}
R_{N+1}(b)\alpha=\sum_{\mid n\mid=N+1}\frac{1}{b^n}\alpha_n.
\end{equation} \addtocounter{theorem}{1}
In addition, recalling that $(\alphanin{I_N},A,Y)$ is an HVMS of size $N$, we see from Proposition \ref{prop2.1} that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq2.24}
R_{l}(b)\alpha=\sum_{\mid n\mid=l}\frac{1}{b^n}\alpha_n \text{ for } l=1,\ldots,N.
\end{equation} \addtocounter{theorem}{1}
Taken together, \ref{eq2.23} and \ref{eq2.24} imply \ref{eq2.10.2}.
\end{proof}
\section{From HVMSs to Loewner Functions}
We let $\pitwo$ denote the set $\set{z \in \ctwo}{\impart{z_1}\ge 0,\impart{z_2}\ge 0 }$. We let $\p$ denote the \emph{Pick class} on $\pitwo$, i.e. the set of holomorphic functions on $\pitwo$ that have nonnegative imaginary part. If $D \subseteq \rtwo$, we define the Loewner class, $\lofd$, by
$$\lofd=\set{h \in \p}{h \text{ is analytic and real valued on } D }.$$
$\lofd$, which captures a semi-local version of the notion of inner, arises in a variety of problems involving interpolation, the real edge of the wedge theorem, and the analysis of operator monotone functions --- see {\em e.g.} \cite{amy11c}. In this section we wish to consider a fully local version of $\lofd$. To that end we shall require a number of definitions.
Let $J_k = I_k \cup \{ (0,0) \}$.
\begin{defin}\label{def3.05}
For $x \in \rtwo$ and $S \subseteq \pitwo$ let us agree to say that $S$ approaches $x$ non-tangentially, $S \stackrel{nt}{\to}x$, if $x \in S^-$ and there exists a constant $c$ such that
$$\norm{z-x} \le c\ \min{\impart{z_1}}{\impart{z_2}}$$
for all $z \in S$.
\end{defin}
\begin{defin}\label{def3.10}
Let $h \in \p$ and $x \in \rtwo$. We say that $x$ is a $\cpoint{k}$ of $h$ if $h$ is ``non-tangentially $C^k$ at $x$'', i.e. there exists an indexed set of scalars, $\delta = \deltanin{J_k}$, such that if $S \subset \pitwo$ and $S \stackrel{nt}{\to} x$, then
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq3.010}
\lim_{\substack{z \to x \\ z \in S}}\frac{h(z)-\sum_{n \in J_k} \delta_n z^n}{{\norm{z-x}}^k}=0.
\end{equation} \addtocounter{theorem}{1}
\end{defin}
Evidently, if $h \in \lofd$, $x \in D$, and $x$ is a $\cpoint{k}$ of $h$, then $\delta$, as uniquely determined by \ref{eq3.010}, has the property that $\delta_n$ is real whenever $n \in J_k$. This suggests the following definition as a reasonable localization of the Loewner class.
\begin{defin}\label{def3.20}
Let $k \ge 0$ and $x \in \rtwo$. If $h \in \p$, we say that $h$ is Loewner to order $k$ at $x$ if $x$ is a $\cpoint{k}$ of $h$ and if $\delta$, as uniquely determined by
\ref{eq3.010}, has the property that $\delta_n$ is real for all $n \in J_k$.
\end{defin}
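For example, the function $h(z)=z_1+z_2$ belongs to $\p$, and at any point $x \in \rtwo$ the choice $\delta_{(1,0)}=\delta_{(0,1)}=1$, $\delta_n=0$ otherwise, makes the numerator in \ref{eq3.010} vanish identically. Hence every $x \in \rtwo$ is a $\cpoint{k}$ of $h$ for every $k$, and as these $\delta_n$ are real, $h$ is Loewner to order $k$ at every $x \in \rtwo$.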
We introduce in Definition \ref{def3.40} below a class of functions, $\mathcal{L}^N$, obtained by adding three extra minor provisos to the notion in Definition \ref{def3.20}.
First we shall assume that $k=2N-1$ is odd. Secondly, we wish to consider regularity as $z$
approaches infinity non-tangentially rather than as $z$ approaches a finite point $x \in \rtwo$.
Finally, we shall normalize $h$ to have the value zero at infinity.
To formalize regularity at $\infty$, we introduce the following two definitions.
\begin{defin}\label{def3.25}
If $\{z_n\}$ is a sequence in $\pitwo$, we say $z_n\to \infty$ if $z_n=(\lambda_n,\mu_n)$ and both $\lambda_n \to \infty$ and $\mu_n \to \infty$.
For $S \subseteq \pitwo$ we say that $S$ approaches $\infty$ non-tangentially, $S \stackrel{nt}{\to}\infty$, if there is a sequence $\{z_n\}$ in $S$ such that $z_n\to \infty$ and a constant $c$ such that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq3.015}
\norm{z} \le c\ \min{\impart{z_1}}{\impart{z_2}}
\end{equation} \addtocounter{theorem}{1}
for all $z \in S$. If $S \stackrel{nt}{\to}\infty$, we let $\adj{S}$ denote the smallest constant such that \ref{eq3.015} holds for all $z \in S$.
\end{defin}
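For example, the diagonal ray $S=\set{(is,is)}{s \ge 1}$ satisfies $S \stackrel{nt}{\to}\infty$: for $z=(is,is)$ we have $\norm{z}=\sqrt{2}\,s$ and $\min{\impart{z_1}}{\impart{z_2}}=s$, so that \ref{eq3.015} holds with $c=\sqrt{2}$, and indeed $\adj{S}=\sqrt{2}$.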
\begin{defin}\label{def3.26}
Let $\Omega$ be a metric space, $\omega \in \Omega$, and $F:\pitwo \to \Omega$ a map. We say
$$F(z) \to \omega \text{ as } z \stackrel{nt}{\to} \infty$$
if for each $S \subset \pitwo$ such that $S \stackrel{nt}{\to}\infty$,
$$\lim_{\substack{z \to \infty \\ z\in S}}F(z)=\omega.$$
\end{defin}
We now can extend the notion of $\cpoint{k}$ to $\infty.$
\begin{defin}\label{def3.30}
If $h \in \p$ we say $\infty$ is a $\cpoint{k}$ of $h$ if there exists an indexed set of scalars, $\rho = \rhonin{J_k}$, referred to as residues, such that
$$ {\norm{z}}^k(h(z) - \sum_{n \in J_k} \frac{\rho_n}{z^n}) \to 0 \text{ as }z \stackrel{nt}{\to} \infty.$$
\end{defin}
Finally, notice that the residue, $\rho_{(0,0)}$, when it exists, is the limit of $h(z)$ as $z \to \infty$ non-tangentially, and hence we denote it by $h(\infty)$. Our third proviso is to normalize $h$ by requiring that $h(\infty) = 0$.
\begin{defin}\label{def3.40}
For $N$ a positive integer, let $\ln$ denote the set of all $h \in \p$ such that $\infty$ is a $\cpoint{2N-1}$ for $h$ with real residues and $h(\infty)=0$.
\end{defin}
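For example, the function $h(z)=-\frac{1}{z_1}-\frac{1}{z_2}$ lies in $\p$, and with $\rho_{(1,0)}=\rho_{(0,1)}=-1$ and all other residues zero, the quantity $h(z)-\sum_{n \in J_{2N-1}}\frac{\rho_n}{z^n}$ vanishes identically. Hence $\infty$ is a $\cpoint{2N-1}$ of $h$ with real residues and $h(\infty)=0$, so that $h \in \ln$ for every $N$.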
Let us note that Theorem \ref{thma5} implies that any function in
${\mathcal L}^1$ must have a representation as in (\ref{eqa1.2.1}).
In previous work \cite{amy11c}, we required $Y$ to be a projection. This was inspired
by representations on the bidisk, as in \cite{baltre98} and \cite{agmc_bid}.
Here, we do not require $Y$ to be a projection. However, in order for
$(\alphanin{I_N},Y,A)$ to be an HVMS of size $N$, the operator $Y(1-Y)$ must annihilate
$\alpha_{(l,0)}$ and $\alpha_{(0,l)}$ for $1 \leq l \leq N$. So these vectors ``think''
$Y$ is a projection.
We now can formulate the main result of this section.
\begin{thm}\label{prop3.10}
Suppose that ${\cal H}$ is a Hilbert space, $A$ is a densely defined self-adjoint operator on ${\cal H}$, $\alpha \in {\cal H}$, and $h \in \p$ is defined by the type I Nevanlinna representation,
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq3.020}
h(z)=<{(A-z_Y)}^{-1}\alpha,\alpha>, \ \ \ z \in \pitwo.
\end{equation} \addtocounter{theorem}{1}
If $(\alphan,A,Y)$ is an HVMS
of size $N$ and $\alpha = \alpha_{(1,0)}+\alpha_{(0,1)}$, then $h \in \ln$. Furthermore, if $r_l$ are the scalar $(Y,\alpha)$-moments of $A$ (as given by the formulas \ref{eq2.5.1} and \ref{eq2.5.2}) and $\rho_n$ are the residues of $h$ (as given in Definition \ref{def3.30}), then
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq3.025}
\sum_{|n|=l} \frac{\rho_n}{b^n}=-r_l(b)
\end{equation} \addtocounter{theorem}{1}
for $l=1,\ldots,2N-1$.
\end{thm}
The remainder of the section will be devoted to the proof of Theorem \ref{prop3.10}. Accordingly, fix an HVMS of size $N$, $(\alphan,A,Y)$, with the property that $A$ is densely defined and self-adjoint, set $\alpha = \alpha_{(1,0)}+\alpha_{(0,1)}$, and assume that $h$ is given by \ref{eq3.020}.
The point $z$ will always lie in $\Pi^2$, so $(A - z_Y)$ is invertible.
Observe that, as
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eq3.02.9}
\ares=(A-\zy+\zy)\res=1+\zy\res,
\end{equation} \addtocounter{theorem}{1}
the operator $\ares$ is bounded. Likewise, the operator $\resa$ is bounded. Also, we have the following simple identities involving these operators:
\begin{align}
\label{eq3.030}
\zy \resa &= \ares \zy
\\
\label{eq3.040}
\res &= -\zyinv + \zyinv \ares
\\
\label{eq3.050}
\res &= -\zyinv + \resa \zyinv.
\end{align}
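Each of these follows readily from \ref{eq3.02.9}; for example, multiplying \ref{eq3.02.9} on the left by $\zyinv$ gives
$$\zyinv\ares=\zyinv(1+\zy\res)=\zyinv+\res,$$
which rearranges to \ref{eq3.040}.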
\begin{claim}\label{cl3.10}
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq3.060}
<\res\alpha,\alpha>=-\sum_{k=1}^{2N-1}r_k(z) + <\ares \zy R_N(z)\alpha,R_N(z)^*\alpha>.
\end{equation} \addtocounter{theorem}{1}
\end{claim}
Note that as $(\alphan,A,Y)$ is an HVMS, Condition (ii) of Proposition \ref{prop2.2} guarantees that $\alpha \in \dom{R_N(z)}$ and in addition, that the residues, $r_k(z)$, $k=1,\ldots, 2N-1$, are well defined by equations \ref{eq2.5.01} to \ref{eq2.5.2}.
Thus, the expression that appears on the right side of the claim is well defined.
To prove Claim \ref{cl3.10} we proceed by induction. Note that when $N=1$ the claim follows immediately from \ref{eq3.040}. Suppose the claim holds for HVMSs of size $N$. If $(\alphan,A,Y)$ is an HVMS of size $N+1$, then as $(\alphan,A,Y)$ is also an HVMS of size $N$, the inductive hypothesis yields that
\begin{equation}\label{eq3.070}
<\res\alpha,\alpha> = -\sum_{k=1}^{2N-1}r_k(z) + <\ares \zy R_N(z)\alpha,R_N(z)^*\alpha>.
\end{equation}
But,
\begin{align*}
&<\ares \zy R_N(z)\alpha,R_N(z)^*\alpha>\\
(i)\ \ \ \ \ &=\ <\res \zy R_N(z)\alpha,AR_N(z)^*\alpha>\\
(ii)\ \ \ \ &=\ <(-\zyinv+\resa\zyinv) \zy R_N(z)\alpha,AR_N(z)^*\alpha>\\
&=\ -<R_N(z)\alpha,AR_N(z)^*\alpha>
+<\resa R_N(z)\alpha,AR_N(z)^*\alpha>\\
(iii)\ \ \ &=\ -<R_N(z)\alpha,AR_N(z)^*\alpha>\\
&\qquad +<(-\zyinv + \zyinv \ares)AR_N(z)\alpha,AR_N(z)^*\alpha>\\
&=\ -<R_N(z)\alpha,AR_N(z)^*\alpha>
-<\zyinv AR_N(z)\alpha,AR_N(z)^*\alpha>\\
&\qquad +<\zyinv \ares AR_N(z)\alpha,AR_N(z)^*\alpha>\\
&=\ -<R_N(z)\alpha,AR_N(z)^*\alpha>-<\zyinv AR_N(z)\alpha,AR_N(z)^*\alpha>\\
&\qquad +<\ares \zy \zyinv AR_N(z)\alpha,{\zyinv}^*AR_N(z)^*\alpha>\\
(iv)\ \ \ \ &=\ -<R_N(z)\alpha,AR_N(z)^*\alpha>-<R_{N+1}(z)\alpha,AR_N(z)^*\alpha>\\
&\qquad +<\ares \zy R_{N+1}(z)\alpha,R_{N+1}(z)^*\alpha>\\
(v)\ \ \ \ \ &=\ -r_{2N}(z)-r_{2N+1}(z)\\
&\qquad +<\ares \zy R_{N+1}(z)\alpha,R_{N+1}(z)^*\alpha>.
\end{align*}
Here, the following facts were used.
\begin{align*}
(i)&\ \ \text{as }(\alphan,A,Y) \text{ is an HVMS of size }N+1, \ R_N(z)^*\alpha = R_N(\bar z) \alpha \in \dom{A}\\
(ii)&\ \ \ref{eq3.050}\\
(iii)&\ \ \ref{eq3.040}\\
(iv)&\ \ R_{N+1}(z)\alpha=\zyinv AR_N(z)\alpha\\
(v)&\ \ \ref{eq2.5.1} \text{ and }\ref{eq2.5.2}
\end{align*}
Combining the result of this calculation with \ref{eq3.070}, we deduce that
\begin{equation}\label{eq3.080}
<\res\alpha,\alpha> = -\sum_{k=1}^{2N+1}r_k(z) + <\ares \zy R_{N+1}(z)\alpha,R_{N+1}(z)^*\alpha>, \notag
\end{equation}
which is \ref{eq3.060} with $N$ replaced with $N+1$. This concludes the proof of Claim \ref{cl3.10}.
Now observe that both the facts we need to prove to establish Theorem \ref{prop3.10}, that $h \in \ln$ and \ref{eq3.025}, will follow from Claim \ref{cl3.10} if we can show that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq3.090}
{\norm{z}}^{2N-1}<\ares \zy R_N(z)\alpha,R_N(z)^*\alpha> \to 0 \ \ \text{ as }\ \ z \stackrel{nt}{\to} \infty.
\end{equation} \addtocounter{theorem}{1}
On the other hand, we claim that \ref{eq3.090} will follow if we can show
\begin{claim}\label{cl3.20}
If $\beta,\gamma \in {\cal H}$, then
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq3.100}
<\ares \beta,\gamma> \to 0 \ \ \text{ as }\ \ z \stackrel{nt}{\to} \infty.
\end{equation} \addtocounter{theorem}{1}
\end{claim}
To see how Claim \ref{cl3.20} implies \ref{eq3.090} we use the following simple property of sets that approach $\infty$ non-tangentially.
\begin{lem}\label{lem3.10}
If $n$ is a multi-index, $S\subset \pitwo$ and $S \stackrel{nt}{\to} \infty$, then
$$|\frac{1}{z^n}| \le {(\adj{S})}^{|n|} \norm{z}^{-|n|}$$
for all $z \in S$.
\end{lem}
\begin{proof}
If $z \in S$, then Definition \ref{def3.25} implies that
\begin{align*}
\norm{z} &\le \adj{S}\ \min{\impart{z_1}}{\impart{z_2}}\\
&\le \adj{S}\ \impart{z_1}\\
&\le \adj{S}\ |z_1|.
\end{align*}
Hence, $$|z_1|^{-n_1} \le \adj{S}^{n_1}\ \norm{z}^{-n_1}.$$
Likewise, $$|z_2|^{-n_2} \le \adj{S}^{n_2}\ \norm{z}^{-n_2}.$$
The lemma follows by multiplying these last two inequalities together.
\end{proof}
Now, using Proposition \ref{prop2.1}, if $N \geq 2$,
\begin{align*}
&<\ares \zy R_N(z)\alpha,R_N(z)^*\alpha>\\
&=\ <\ares A R_{N-1}(z)\alpha,R_N(z)^*\alpha>\\
&=\ <\ares A\sum_{\mid m\mid=N-1}\frac{1}{z^m}\alpha_m,\sum_{\mid n\mid=N}\frac{1}{z^n}\alpha_n>\\
&=\ \sum_{\substack{\mid m\mid=N-1\\ \mid n\mid=N}} \frac{1}{z^{m+n}}<\ares A \alpha_m,\alpha_n>.
\end{align*}
Thus, using Lemma \ref{lem3.10}, we see that if $S\stackrel{nt}{\to} \infty$ and $z \in S$, then
\begin{align*}
&|<\ares \zy R_N(z)\alpha,R_N(z)^*\alpha>|\\
&\le \sum_{\substack{\mid m\mid=N-1\\ \mid n\mid=N}} |\frac{1}{z^{m+n}}|\ |<\ares A \alpha_m,\alpha_n>|\\
& \le \adj{S}^{2N-1}\norm{z}^{-(2N-1)} \sum_{\substack{\mid m\mid=N-1\\ \mid n\mid=N}} |<\ares A \alpha_m,\alpha_n>|.
\end{align*}
When $N = 1$, we get
\begin{align*}
&| <\ares \zy R_1(z)\alpha,R_1(z)^*\alpha>| \\
&=\ |\sum_{\substack{ \mid n\mid=1}} \frac{1}{z^{n}}<\ares \alpha,\alpha_n>| \\
& \leq \ \adj{S}\norm{z}^{-1} \sum_{\substack{ \mid n\mid=1}} |<\ares \alpha,\alpha_n>|.
\end{align*}
So we see that Claim \ref{cl3.20} does indeed imply \ref{eq3.090}.
It remains to prove Claim \ref{cl3.20}. For this we shall require three lemmas.
These lemmas involve the notion of a \emph{proximity estimate}, an idea which we make precise in the following definition.
\begin{defin}\label{def3.50}
Let $\Omega$ be a metric space and $F:\pitwo \to \Omega$ a map. We say that $F$ is proximal (or more precisely, proximal at $\infty$) if for each $S\subset \pitwo$ such that $S \stackrel{nt}{\to} \infty$, there exists a constant $c$ such that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq3.110}
d(F(z),F(w)) \le c\frac{\norm{z-w}}{\norm{z}}
\end{equation} \addtocounter{theorem}{1}
for all $z,w \in S$. We refer to the inequality \ref{eq3.110} as a proximity estimate.
\end{defin}
It turns out that frequently, as a consequence of various forms of the Schwarz Lemma, quantities that are formed from holomorphic functions satisfy proximity estimates. In such cases, the following lemma greatly simplifies the analysis of non-tangential regularity.
\begin{lem}\label{lem3.20}
Let $\Omega$ be a metric space, let $\omega \in \Omega$ and let $F:\pitwo \to \Omega$ be a proximal map. Then $F(z) \to \omega$ as $z \stackrel{nt}{\to} \infty$ if and only if for each $\delta \in \pitwo$,
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq3.120}
\lim_{s\to \infty}F(s \delta)=\omega.
\end{equation} \addtocounter{theorem}{1}
\end{lem}
\begin{proof}
Clearly, if $F(z) \to \omega$ as $z \stackrel{nt}{\to} \infty$, then \ref{eq3.120} holds. To prove the converse we argue by contradiction. Suppose \ref{eq3.120} holds. If it is false that $F(z) \to \omega$ as $z \stackrel{nt}{\to} \infty$, then there exist $\epsilon>0$, $S\subset \pitwo$, and a sequence $\{z_l\}$ in $S$ such that $S \stackrel{nt}{\to} \infty$, $z_l \to \infty$, and
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq3.130}
d(F(z_l),\omega)\ge \epsilon
\end{equation} \addtocounter{theorem}{1}
for all positive $l$. By compactness, there exist $\delta \in {\mathbb{C}}^2$ and a subsequence $z_{l_j}$, such that $\norm{z_{l_j}}^{-1} z_{l_j} \to \delta$ as $j \to \infty.$
In fact, $\delta \in \pitwo$. To see this, let $\delta=(\delta_1,\delta_2)$ and $z_{l_j}=(\lambda_{l_j},\mu_{l_j})$ and observe that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq3.140}
\impart{\delta_1}=\lim_{j \to \infty}\frac{\impart{\lambda_{l_j}}}{\norm{z_{l_j}}}\ge \lim_{j \to \infty}\frac{\min{\impart{\lambda_{l_j}}}{\impart{\mu_{l_j}}}}{\norm{z_{l_j}}} \ge\frac{1}{\adj{S}} > 0.
\end{equation} \addtocounter{theorem}{1}
Likewise, $\impart{\delta_2} >0$ and we conclude that $\delta \in \pitwo$.
Now, let $w_j=z_{l_j}$ and $s_j=\norm{z_{l_j}}$. By construction we have that $w_j-s_j\delta=o(s_j)$ so that the proximity estimate gives that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq3.150}
d(F(w_j),F(s_j\delta))\le c\frac{\norm{w_j-s_j\delta}}{\norm{s_j\delta}}\to 0.
\end{equation} \addtocounter{theorem}{1}
Also, \ref{eq3.120} implies that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq3.160}
d(F(s_j\delta),\omega)\to 0.
\end{equation} \addtocounter{theorem}{1}
Given \ref{eq3.150} and \ref{eq3.160}, the triangle inequality gives that
$$d(F(w_j),\omega)\to 0,$$
contradicting \ref{eq3.130}.
\end{proof}
\begin{lem}\label{lem3.30}
Let $\mathcal{L}({\cal H})$ denote the algebra of bounded operators on ${\cal H}$ equipped with the operator norm. $F:\pitwo \to \mathcal{L}({\cal H})$, defined by
$$F(z)=\ares, \ \ \ \ z \in \pitwo,$$
is proximal.
\end{lem}
\begin{proof}
Fix $S\subset \pitwo$ with $S \stackrel{nt}{\to} \infty$. For $z \in \pitwo$ we have that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq3.170}
\norm{z_Y} \le \max{\abs{z_1}}{\abs{z_2}}\le \sqrt{2}\norm{z}.
\end{equation} \addtocounter{theorem}{1}
Also, as $\impart{A-\zy} = -\impart{\zy} \le -\min{\impart{z_1}}{\impart{z_2}}$, we have that if in addition, $z \in S$, then
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq3.180}
\norm{\res} \le \frac{1}{\min{\impart{z_1}}{\impart{z_2}}} \le \frac{\adj{S}}{\norm{z}}.
\end{equation} \addtocounter{theorem}{1}
Now, using (\ref{eq3.02.9}), we get
\begin{align*}
\lefteqn{F(z)-F(w)}\\
&={A(A-z_Y)}^{-1}-{A(A-w_Y)}^{-1}\\
&=(1+ z_Y{(A-z_Y)}^{-1})-(1 + w_Y{(A-w_Y)}^{-1})\\
&=z_Y{(A-z_Y)}^{-1}-w_Y{(A-w_Y)}^{-1}\\
&=(z_Y-w_Y){(A-z_Y)}^{-1}+w_Y({(A-z_Y)}^{-1}-{(A-w_Y)}^{-1})\\
&=(z_Y-w_Y){(A-z_Y)}^{-1}+w_Y{(A-w_Y)}^{-1}(z_Y-w_Y){(A-z_Y)}^{-1}.
\end{align*}
Hence using \ref{eq3.170} and \ref{eq3.180},
\begin{align*}
\lefteqn{\norm{F(z)-F(w)}}\\
&\leq \ \norm{z_Y-w_Y}\ \norm{{(A-z_Y)}^{-1}}+\norm{w_Y}\ \norm{{(A-w_Y)}^{-1}}\ \norm{z_Y-w_Y}\ \norm{{(A-z_Y)}^{-1}}\\
&\le\ \sqrt{2}\ \norm{z-w}\ \frac{\adj{S}} {\norm{z}}+\sqrt{2}\ \norm{w}\ \frac{\adj{S}}{\norm{w}}\ \sqrt{2}\ \norm{z-w}\ \frac{\adj{S}}{\norm{z}}\\
&=\ (\sqrt{2}\ \adj{S}+ 2\ \adj{S}^2)\frac{\norm{z-w}}{\norm{z}},
\end{align*}
which is \ref{eq3.110} with $c=\sqrt{2}\adj{S}+ 2\adj{S}^2.$
\end{proof}
\begin{lem}\label{lem3.40}
If $\beta,\gamma \in {\cal H}$ and $\delta \in \pitwo$, then
$$\lim_{s \to \infty}<A{(A-s\delta_Y)}^{-1}\beta,\gamma>=0.$$
\end{lem}
\begin{proof}
We claim that for each vector $u\in {\cal H}$,
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq3.190}
\delta_Y(\epsilon A-\delta_Y)^{-1} u \to -u\ \ \ \text{weakly in }{\cal H}
\end{equation} \addtocounter{theorem}{1}
as $\epsilon \to 0$. To prove this claim, first notice that as $\impart{\delta_Y} \ge \min{\impart{\delta_1}}{\impart{\delta_2}}$, we have both that $\delta_Y$ is invertible and
that $(\epsilon A-\delta_Y)^{-1}$ is uniformly bounded. In particular, as $A$ is densely defined, $Mult = \delta_Y \dom{A}$ is dense in ${\cal H}$. If $u=\delta_Y v \in Mult$, then as $v \in \dom{A}$ and $(\epsilon A-\delta_Y)^{-1}$ is uniformly bounded,
$$(\epsilon A-\delta_Y)^{-1}\epsilon Av \to 0$$
as $\epsilon \to 0$. Hence,
\begin{align*}
(\epsilon A-\delta_Y)^{-1}u&=(\epsilon A-\delta_Y)^{-1}\delta_Y v\\
&=(\epsilon A-\delta_Y)^{-1}((\delta_Y-\epsilon A)+\epsilon A) v\\
&=-v+(\epsilon A-\delta_Y)^{-1}\epsilon A v\\
&\to -v\\
&=-\delta_Y^{-1} u.
\end{align*}
Applying the bounded operator $\delta_Y$, yields that \ref{eq3.190} holds whenever $u\in Mult$. As, $Mult$ is dense and $\delta_Y(\epsilon A-\delta_Y)^{-1}$ is uniformly bounded, it follows that \ref{eq3.190} holds for all $u\in {\cal H}$. This proves the claim.
Now notice that if in the claim, we substitute $\epsilon=s^{-1}$, we deduce that for all $u\in {\cal H},$
$$s\delta_Y(A-s\delta_Y)^{-1} u \to -u\ \ \ \text{weakly in }{\cal H}$$
as $s\to \infty.$ Hence, for all $u\in {\cal H},$
$$(1+s\delta_Y(A-s\delta_Y)^{-1}) u \to 0\ \ \ \text{weakly in }{\cal H}$$
as $s\to \infty$. The lemma now follows by observing that from (\ref{eq3.02.9})
$$1+s\delta_Y(A-s\delta_Y)^{-1} = A{(A- s \delta_Y)}^{-1}.$$
\end{proof}
Armed with the above lemmas it is a simple matter to prove Claim \ref{cl3.20} and thereby complete the proof of Theorem \ref{prop3.10}. If $\beta,\gamma \in {\cal H}$, then by Lemma \ref{lem3.30} $F(z)=<\ares \beta,\gamma>$ is proximal. As Lemma \ref{lem3.40} gives that $\lim_{s\to \infty}F(s \delta)=0$ whenever $\delta \in \pitwo$, Lemma \ref{lem3.20} yields that $F(z) \to 0$ as $z \stackrel{nt}{\to} \infty$ as was to be proved.
\section{From Loewner Functions to HVMSs}
In this section we shall formulate and then prove a converse to Theorem \ref{prop3.10},
using Theorem~\ref{thm2.1}.
If $h \in \ln$, then it is easy to check that $h$ is type I and accordingly has a Nevanlinna representation of the form,
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.20}
h(z)=<{(A-z_Y)}^{-1}\alpha,\alpha>, \ \ \ z \in \pitwo,
\end{equation} \addtocounter{theorem}{1}
where $A$ and $Y$ are operators acting on a Hilbert space ${\cal H}$, $A$ is densely defined and self-adjoint, $Y$ is a positive contraction, and $\alpha \in {\cal H}$.
\begin{thm}
\label{thm4.1}
If $h \in \ln$ and $A$, $Y$, and $\alpha$ are such that \ref{eq4.20} holds, then $A$ has real vector $(Y,\alpha)$-moments to order $N-1$ and homogeneous scalar $(Y,\alpha)$-moments to order $2N-1$. Furthermore,
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.30}
\sum_{|n|=l} \frac{\rho_n}{b^n}=-r_l(b)
\end{equation} \addtocounter{theorem}{1}
whenever $1 \le l \le 2N-1$ and $b \in \rtwoplus$, where $\rho_n$ are the residues of $h$.
\end{thm}
\begin{proof}
We proceed by induction. Let $N=1$ and assume that $h \in \ln$ has a Nevanlinna representation as in \ref{eq4.20}. As $N=1$, the assertion that $A$ have real vector $(Y,\alpha)$-moments to order $N-1$ is vacuous. To see that $A$ has homogeneous scalar $(Y,\alpha)$-moments to order $2N-1$, first note that since $\infty$ is a $\cpoint{1}$ for $h$ with real residues, we have that there exist $\rho_{(1,0)},\rho_{(0,1)} \in \mathbb{R}$ such that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.40}
<{(A-z_Y)}^{-1}\alpha,\alpha> = h(z)
=\frac{\rho_{(1,0)}}{z_1} + \frac{\rho_{(0,1)}}{z_2} + o({\norm{z}}^{-1}),
\end{equation} \addtocounter{theorem}{1}
non-tangentially at $\infty$. Fixing $b \in \rtwoplus$ and setting $z = isb$ in
\ref{eq4.40} gives that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.41}
is<{(A-isb_Y)}^{-1}\alpha,\alpha>\ \rightarrow\ \frac{\rho_{(1,0)}}{b_1} + \frac{\rho_{(0,1)}}{b_2}
\end{equation} \addtocounter{theorem}{1}
as $s \to \infty$ in $\rplus$. Noting that for $b \in \rtwoplus$, $b_Y$ is strictly positive definite and hence, invertible, we define a self-adjoint operator, $X_b$, by the formula,
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.42}
X_b = b_Y^{-\frac{1}{2}}A b_Y^{-\frac{1}{2}}.
\end{equation} \addtocounter{theorem}{1}
Noting that
\begin{align}
is<{(A-isb_Y)}^{-1}\alpha,\alpha> &= is<{(\byhalf (X_b - is) \byhalf)}^{-1} \alpha,\alpha> \notag \\
&= is<b_Y^{-\frac{1}{2}} {(X_b - is)}^{-1} b_Y^{-\frac{1}{2}} \alpha,\alpha> \notag\\
&= is<{(X_b - is)}^{-1} b_Y^{-\frac{1}{2}} \alpha,b_Y^{-\frac{1}{2}} \alpha> \notag\\
&= is<\frac{X_b + is}{X_b^2 + s^2} b_Y^{-\frac{1}{2}} \alpha,b_Y^{-\frac{1}{2}} \alpha> \notag\\
&= <\frac{-s^2 + isX_b}{X_b^2 + s^2} b_Y^{-\frac{1}{2}} \alpha,b_Y^{-\frac{1}{2}} \alpha>, \notag
\end{align}
we see upon taking real parts in \ref{eq4.41} that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.44}
-<\frac{s^2}{X_b^2 + s^2} b_Y^{-\frac{1}{2}} \alpha,b_Y^{-\frac{1}{2}} \alpha>\ \rightarrow\ \frac{\rho_{(1,0)}}{b_1} + \frac{\rho_{(0,1)}}{b_2}
\end{equation} \addtocounter{theorem}{1}
as $s \to \infty$ in $\rplus$. Now, the Lebesgue Dominated Convergence Theorem guarantees that
$$\frac{s^2}{X_b^2 + s^2} b_Y^{-\frac{1}{2}} \alpha \to b_Y^{-\frac{1}{2}} \alpha$$
as $s \to \infty$ in $\rplus$. Hence,
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.45}
-<\byinv \alpha,\alpha> = -<b_Y^{-\frac{1}{2}} \alpha,b_Y^{-\frac{1}{2}} \alpha> = \frac{\rho_{(1,0)}}{b_1} + \frac{\rho_{(0,1)}}{b_2}.
\end{equation} \addtocounter{theorem}{1}
As \ref{eq4.45} holds for all $b \in \rtwoplus$, we conclude that $A$ has homogeneous scalar $(Y,\alpha)$-moments to order 1 as was to be shown. Also note that \ref{eq4.45} implies that \ref{eq4.30} holds.
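To illustrate the base case, take ${\cal H}={\mathbb{C}}^2$, $A=0$, $Y=\begin{pmatrix}1&0\\0&0\end{pmatrix}$, and $\alpha=(1,1)$, so that $z_Y=z_1Y+z_2(1-Y)=\begin{pmatrix}z_1&0\\0&z_2\end{pmatrix}$. Then \ref{eq4.20} gives $h(z)=-\frac{1}{z_1}-\frac{1}{z_2}$, whose residues are $\rho_{(1,0)}=\rho_{(0,1)}=-1$, while $<\byinv\alpha,\alpha>=\frac{1}{b_1}+\frac{1}{b_2}$, so that both sides of \ref{eq4.45} equal $-\frac{1}{b_1}-\frac{1}{b_2}$.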
We now turn to the inductive step of the proof. Accordingly, assume that
\begin{align}
&A \text{ has real vector } (Y,\alpha)\text{-moments to order } N-1, \label{eq4.46}\\
&A \text{ has homogeneous scalar } (Y,\alpha)\text{-moments to order } 2N-1, \text{ and} \label{eq4.47}\\
&1 \le l \le 2N-1, b \in \rtwoplus \implies \sum_{|n|=l} \frac{\rho_n}{b^n}=-r_l(b)\label{eq4.48}
\end{align}
whenever $h \in \ln$ and has a representation as in \ref{eq4.20}. Fix $h$ with a representation as in \ref{eq4.20} and assume that $h \in \lnplus$. We need to show that
\ref{eq4.46}, \ref{eq4.47}, and \ref{eq4.48} hold with $N$ replaced with $N+1$. However, as $h \in \lnplus \subset \ln$, the inductive hypothesis implies that \ref{eq4.46}, \ref{eq4.47}, and \ref{eq4.48} hold for $N$. Therefore, the induction will be complete if we can show the following three conditions:
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.49}
\alpha \in \dom{{(A\byinv)}^N},
\end{equation} \addtocounter{theorem}{1}
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.50}
r_{2N+1}(b)=-\sum_{|n|=2N+1} \frac{\rho_n}{b^n},\ \ b \in \rtwoplus,
\end{equation} \addtocounter{theorem}{1}
and
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.51}
r_{2N}(b)=-\sum_{|n|=2N} \frac{\rho_n}{b^n},\ \ b \in \rtwoplus.
\end{equation} \addtocounter{theorem}{1}
First note that as $h \in \lnplus$ and \ref{eq4.20} holds, there exist scalar residues, $\rho_n,$ $n \in \isub{2N+1}$, such that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.52}
<{(A-z_Y)}^{-1}\alpha,\alpha> = \sum_{n \in I_{2N+1}} \frac{\rho_n}{z^n} + o({\norm{z}}^{-(2N+1)})
\end{equation} \addtocounter{theorem}{1}
as $z \to \infty$ non-tangentially in $\pitwo$. Fixing $b \in \rtwoplus$ and setting $z = isb$ in \ref{eq4.52} we deduce that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.53}
<{(A-isb_Y)}^{-1}\alpha,\alpha> = \sum_{l = 1}^{2N+1} {(is)}^{-l} \sum_{|n|=l}\frac{\rho_n}{b^n} + o(s^{-(2N+1)}), \notag
\end{equation} \addtocounter{theorem}{1}
as $s \to \infty$ in $\rplus$,
which, upon taking imaginary parts, yields that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.54}
\impart{<{(A-isb_Y)}^{-1}\alpha,\alpha>} = \sum_{k = 1}^{N+1} \frac{{(-1)}^k}{s^{2k-1}} \sum_{|n|=2k-1}\frac{\rho_n}{b^n} + o(s^{-(2N+1)})
\end{equation} \addtocounter{theorem}{1}
as $s \to \infty$ in $\rplus$. Finally, upon multiplying \ref{eq4.54} by the factor $s^{2N+1}$, we deduce the limit,
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.55}
\lim_{s \to \infty}G_b(s)
= {(-1)}^{N+1}\sum_{|n|=2N+1}\frac{\rho_n}{b^n}
\end{equation} \addtocounter{theorem}{1}
where for $s \in \rplus$ and $b \in \rtwoplus$, $G_b(s)$ is defined by
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.56}
G_b(s) = s^{2N+1}\impart{<{(A-isb_Y)}^{-1}\alpha,\alpha>} - \sum_{k = 1}^{N} {(-1)}^k s^{2(N-k+1)} \sum_{|n|=2k-1}\frac{\rho_n}{b^n}.
\end{equation} \addtocounter{theorem}{1}
We now compute $G_b(s)$ using the substitution \ref{eq4.42}. We set
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.57}
\gamma_b = b_Y^{-\frac{1}{2}}\alpha.
\end{equation} \addtocounter{theorem}{1}
Note that \ref{eq4.46} implies that $\gb \in \dom{X_b^l}$ for $l=1, \ldots ,N-1$. Using \ref{eq4.48} and \ref{eq2.5} we see for $k=1,\ldots,N$, that
\begin{align*}
\sum_{|n|=2k-1} \frac{\rho_n}{b^n}&=-r_{2k-1}(b)\\
&=-<\byinv {(A\byinv)}^{k-1}\alpha,{(A\byinv)}^{k-1}\alpha>\\
&=-<{(X_b)}^{k-1}\gb,{(X_b)}^{k-1}\gb>.
\end{align*}
Also, just as in the calculation leading up to \ref{eq4.44} we compute that
$$<{(A-isb_Y)}^{-1}\alpha,\alpha> = <\frac{X_b + is}{X_b^2 + s^2} b_Y^{-\frac{1}{2}} \alpha,b_Y^{-\frac{1}{2}} \alpha>,$$
so that
\begin{align*}
\impart{<{(A-isb_Y)}^{-1}\alpha,\alpha>} &= <\frac{s}{X_b^2 + s^2} b_Y^{-\frac{1}{2}} \alpha,b_Y^{-\frac{1}{2}} \alpha>\\
&=<\frac{s}{X_b^2 + s^2} \gb,\gb>.
\end{align*}
Hence we have that
\setcounter{equation}{\value{theorem}} \begin{equation}
G_b(s)=<\frac{s^{2N+2}}{X_b^2 + s^2} \gb,\gb>+\sum_{k = 1}^{N} {(-1)}^k s^{2(N-k+1)} <{(X_b)}^{k-1}\gb,{(X_b)}^{k-1}\gb>. \notag
\end{equation} \addtocounter{theorem}{1}
We claim that the above sum telescopes. Indeed, using the fact that
\begin{align*}
&<{(X_b)}^{k-1}\gb,{(X_b)}^{k-1}\gb>\\
=&<(\frac{X_b^2}{X_b^2 + s^2}+\frac{s^2}{X_b^2 + s^2}) {(X_b)}^{k-1}\gb,{(X_b)}^{k-1}\gb>\\
=&<\frac{X_b^2}{X_b^2 + s^2}{(X_b)}^{k-1}\gb,{(X_b)}^{k-1}\gb>+<\frac{s^2}{X_b^2 + s^2} {(X_b)}^{k-1}\gb,{(X_b)}^{k-1}\gb>
\end{align*}
we compute that
\begin{align*}
G_b(s)&=\ <\frac{s^{2N+2}}{X_b^2 + s^2} \gb,\gb>+\sum_{k = 1}^{N} {(-1)}^k s^{2(N-k+1)} <{(X_b)}^{k-1}\gb,{(X_b)}^{k-1}\gb>\\
&=\ <\frac{s^{2N+2}}{X_b^2 + s^2} \gb,\gb>\\
&\qquad -s^{2N}(<\frac{X_b^2}{X_b^2 + s^2} \gb,\gb>+<\frac{s^2}{X_b^2 + s^2} \gb,\gb>)\\
&\qquad +s^{2N-2}(<\frac{X_b^2}{X_b^2 + s^2} X_b\gb,X_b\gb>+<\frac{s^2}{X_b^2 + s^2} X_b\gb,X_b\gb>)\\
&\vdots\\
&\qquad +{(-1)}^N s^2(<\frac{X_b^2}{X_b^2 + s^2}{(X_b)}^{N-1}\gb,{(X_b)}^{N-1}\gb>
\\
&\qquad \qquad \qquad +<\frac{s^2}{X_b^2 + s^2} {(X_b)}^{N-1}\gb,{(X_b)}^{N-1}\gb>)\\
&=\ {(-1)}^N s^2<\frac{X_b^2}{X_b^2 + s^2}{(X_b)}^{N-1}\gb,{(X_b)}^{N-1}\gb>.
\end{align*}
This last calculation makes sense since
$$\frac{X_b^2}{X_b^2 + s^2}$$
is a bounded operator and $\gb \in \dom{{X_b}^l}$ for $l=1,\ldots,N-1.$
Now recall \ref{eq4.55}. From the formula for $G_b(s)$ just derived, we see that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.60}
\lim_{s \to \infty} <\frac{s^2 X_b^2}{X_b^2 + s^2}{(X_b)}^{N-1}\gb,{(X_b)}^{N-1}\gb>
= -\sum_{|n|=2N+1}\frac{\rho_n}{b^n}.
\end{equation} \addtocounter{theorem}{1}
As $X_b$ is self-adjoint and $\gb \in \dom{{X_b}^{N-1}}$, we can apply the spectral theorem to $X_b$ and thereby obtain the scalar spectral measure of $\gb$, $\mu$. Analyzing the very existence of the limit on the left side of \ref{eq4.60} in the space $L^2(\mu)$ yields via the Lebesgue Dominated Convergence Theorem that
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.62}
\gb \in \dom{{X_b}^{N}}.
\end{equation} \addtocounter{theorem}{1}
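In more detail, in terms of $\mu$ the left side of \ref{eq4.60} takes the form
$$\lim_{s \to \infty}\int \frac{s^2t^2}{t^2+s^2}\,t^{2(N-1)}\,d\mu(t),$$
and as the integrand increases pointwise to $t^{2N}$ as $s \to \infty$, the existence of a finite limit forces $\int t^{2N}\,d\mu(t)<\infty$, which is \ref{eq4.62}.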
Unraveling \ref{eq4.62} via \ref{eq4.42} and \ref{eq4.57} gives that,
$$\alpha \in \dom{(A\byinv)^N},$$
which is \ref{eq4.49}. Note also from \ref{eq4.60} we have that
$$<{(X_b)}^2{(X_b)}^{N-1}\gb,{(X_b)}^{N-1}\gb>
= -\sum_{|n|=2N+1}\frac{\rho_n}{b^n},$$
which unravels to
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.63}
r_{2N+1}(b)=-\sum_{|n|=2N+1}\frac{\rho_n}{b^n},
\end{equation} \addtocounter{theorem}{1}
which is \ref{eq4.50}.
It remains to check \ref{eq4.51}. This is done by following the same line of reasoning that led from \ref{eq4.52} to \ref{eq4.60}. One starts with \ref{eq4.52} but with $2N+1$ replaced with $2N$:
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.64}
<{(A-z_Y)}^{-1}\alpha,\alpha> = \sum_{n \in I_{2N}} \frac{\rho_n}{z^n} + o({\norm{z}}^{-(2N)})
\end{equation} \addtocounter{theorem}{1}
Proceeding as before, for a fixed $b \in \rtwoplus$ and $s \in \rplus$ we set $z=isb$ in \ref{eq4.64}. However, unlike before, where we took imaginary parts to obtain \ref{eq4.54}, we now take real parts. This results in
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.65}
\repart{<{(A-isb_Y)}^{-1}\alpha,\alpha>} = \sum_{k = 1}^{N} \frac{{(-1)}^k}{s^{2k}} \sum_{|n|=2k}\frac{\rho_n}{b^n} + o(s^{-2N})
\end{equation} \addtocounter{theorem}{1}
as $s \to \infty$ in $\rplus$. Finally, upon multiplying \ref{eq4.65} by the factor $s^{2N}$ (rather than $s^{2N+1}$ as before), we deduce the limit,
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.66}
\lim_{s \to \infty}F_b(s)
= {(-1)}^{N}\sum_{|n|=2N}\frac{\rho_n}{b^n}
\end{equation} \addtocounter{theorem}{1}
where for $s \in \rplus$ and $b \in \rtwoplus$, $F_b(s)$ is defined by
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.67}
F_b(s) = s^{2N}\repart{<{(A-isb_Y)}^{-1}\alpha,\alpha>} - \sum_{k = 1}^{N-1} {(-1)}^k s^{2(N-k)} \sum_{|n|=2k}\frac{\rho_n}{b^n}.
\end{equation} \addtocounter{theorem}{1}
Carrying out the telescoping argument, one computes that
$$F_b(s)={(-1)}^{N-1} s^2
<\frac{X_b^2}{X_b^2 + s^2}{(X_b)}^{N-2}\gb,{(X_b)}^{N-1}\gb>,$$
which implies via \ref{eq4.66} the existence of the limit
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq4.68}
\lim_{s \to \infty} <\frac{s^2 X_b^2}{X_b^2 + s^2}{(X_b)}^{N-2}\gb,{(X_b)}^{N-1}\gb>
= -\sum_{|n|=2N}\frac{\rho_n}{b^n}.
\end{equation} \addtocounter{theorem}{1}
As \ref{eq4.62} holds, \ref{eq4.68} implies that
$$<{(X_b)}^2{(X_b)}^{N-2}\gb,{(X_b)}^{N-1}\gb>
= -\sum_{|n|=2N}\frac{\rho_n}{b^n}.$$
As this last equation unravels via \ref{eq4.42} and \ref{eq4.57} to
$$r_{2N}(b)=-\sum_{|n|=2N}\frac{\rho_n}{b^n},$$
the proof that \ref{eq4.51} holds is complete.
\end{proof}
\section{Finite Hankel Pairs}
\label{sec5}
In this section we give an alternate matrix theoretic treatment of HVMS's based on the fact that it is possible to cleanly characterize the Gram matrix formed from the moment vectors of an HVMS.
For $X$ a set, we let $\ltwoof{X}$ denote the Hilbert space of square summable complex valued functions on $X$. If $f \in \ltwoof{X}$, we let $\supp{f}$, \emph{the support of $f$}, denote the subset of $X$ defined by
$$\supp{f}=\set{x\in X}{f(x) \ne 0}.$$
By a \emph{matrix on $X$} we mean a square array of scalars, doubly indexed by the elements of $X$. If $a=[a_{x,y}]$ is a matrix on $X$, then $a$
induces a densely defined linear operator, also denoted by $a$, on the finitely supported functions in
$\ltwoof{X}$ by the formula
$$(af)(x)=\sum_{y \in \supp{f}}a_{x,y}f(y).$$
If $a=[a_{x,y}]$ is a matrix on $X$, then we say that \emph{$a$ is symmetric} if
$$a_{x,y}=\overline{a}_{y,x} \text{ for all } x,y \in X,$$
and we say that \emph{$a$ is positive semi-definite} if for each (finite)
choice of elements, $x_1,x_2,\ldots,x_l \in X$, and each choice of scalars, $c_1,c_2,\ldots,c_l \in \mathbb{C}$,
$$\sum_{i,j=1}^l a_{x_i,x_j}c_j\overline{c}_i \ge 0.$$
In this section we shall be exclusively interested in the case where $X=I_N$, for $N$ a positive integer. Note that naturally, if $M \le N$, then $\ltwoof{I_{M}} \subseteq \ltwoof{I_N}$, and in addition, that there is a pair of shift operators, $S_1,S_2:\ltwoof{I_{N-1}} \to \ltwoof{I_{N}}$ defined by
\begin{eqnarray*}
(S_1\, f)(n) &\ =\
&\left\{
\begin{array}{ll}
f(n-e_1) &\qquad n - e_1 \in I_{N-1} \\
0 & \qquad \text{else}
\end{array}
\right. \\
(S_2 \, f)(n) &\ =\
&\left\{
\begin{array}{ll}
f(n-e_2) &\qquad n - e_2 \in I_{N-1} \\
0 & \qquad \text{else}
\end{array}
\right.
\end{eqnarray*}
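Concretely, the shifts simply translate the support of a finitely supported function by $e_1$ or $e_2$. A minimal sketch, assuming the convention $I_N=\{n\in\mathbb{Z}_{\ge 0}^2 : 1\le |n|\le N\}$ (which matches the index ranges used below):

```python
# Shift operators S1, S2 : l^2(I_{N-1}) -> l^2(I_N), with I_N the set of
# pairs n = (n1, n2) of non-negative integers satisfying 1 <= n1 + n2 <= N.
# A finitely supported f is stored as a dict {n: f(n)}.

def index_set(N):
    """The index set I_N, as a list of pairs."""
    return [(i, s - i) for s in range(1, N + 1) for i in range(s + 1)]

def shift(f, e):
    """(S f)(n) = f(n - e) when n - e is in the domain index set, 0 else;
    equivalently, translate the support of f by e."""
    return {(n[0] + e[0], n[1] + e[1]): v for n, v in f.items()}

N = 3
f = {(1, 0): 2.0, (1, 1): -1.0}      # supported in I_{N-1}
S1f = shift(f, (1, 0))               # lands in I_N
S2f = shift(f, (0, 1))
```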
If $(\alphan,Y,A)$ is an HVMS of size $N$, then we may define a pair of matrices $a=(a^1,a^2)$ on $I_N$ by
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq5.010}
a_{m,n}^1=<Y\alpha_n,\alpha_m>\text{ and }a_{m,n}^2=<(1-Y)\alpha_n,\alpha_m> \text{ for } m,n \in \isubn.
\end{equation} \addtocounter{theorem}{1}
\begin{defin}\label{def5.1}
We say that $a=(a^1,a^2)$ is a finite Hankel pair of size $N$ if $a^1$ and $a^2$ are matrices on $I_N$ and there exists an HVMS of size $N$ such that (\ref{eq5.010}) holds.
\end{defin}
In Theorem~\ref{thm5.1}, we give a characterization of when a pair of matrices is
a finite Hankel pair. To see how this is a two variable version of
Theorem~\ref{thma4}, let us restate that theorem more abstractly.
Let $S : \ltwoof{ \{ 0, 1, \dots, N-2\}} \to \ltwoof{ \{ 0, 1, \dots, N-1 \} }$ be the shift
defined by
$Sf (j) = f(j-1), j > 0$, and $Sf(0) = 0$.
\begin{thm}\label{thm5.0}
Let $H$ be an $N$-by-$N$ matrix. There is a self-adjoint operator $A$ and a vector
$\alpha$ with $\alpha \in \dom{A^k}$ for $ 1 \leq k \leq N-1$ such that
\[
H_{ij} \ = \ < A^j \alpha, A^i \alpha > \qquad 0 \leq i,j \leq N-1
\]
if and only if the following three conditions obtain.
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq5.0020}
H \text{ is positive semi-definite.}
\end{equation} \addtocounter{theorem}{1}
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq5.0030}
H_{i+1,j} \ = \ H_{i,j+1} \qquad 0 \leq i, j \leq N- 2 .
\end{equation} \addtocounter{theorem}{1}
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq5.0040}
\supp{f} \subseteq \{ 0, \dots, N-2 \} \text{ and } Hf=0 \Rightarrow HS f=0.
\end{equation} \addtocounter{theorem}{1}
\end{thm}
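For a quick numerical illustration of these conditions, take $A$ to be multiplication by $x$ on $L^2(\mu)$ with $\mu$ the standard Gaussian and $\alpha=1$, so that $H_{ij}=m_{i+j}$ is built from the Gaussian moments $m_k$ (zero for odd $k$, $(k-1)!!$ for even $k$); the positivity and Hankel conditions can then be checked directly, while the kernel condition is vacuous here since $H$ is nonsingular:

```python
import numpy as np

def moment(k):
    """k-th moment of the standard Gaussian: 0 for odd k, (k-1)!! for even k."""
    if k % 2 == 1:
        return 0.0
    r = 1.0
    for j in range(k - 1, 0, -2):
        r *= j
    return r

N = 4
H = np.array([[moment(i + j) for j in range(N)] for i in range(N)])

is_psd = bool(np.all(np.linalg.eigvalsh(H) > 0))              # positivity
is_hankel = all(H[i + 1, j] == H[i, j + 1]                    # Hankel structure
                for i in range(N - 1) for j in range(N - 1))
# Third condition: H is nonsingular here, so Hf = 0 forces f = 0.
```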
Here is our two variable version of Hamburger's Theorem~\ref{thma4}.
\begin{thm}\label{thm5.1}
Let $a$ be a pair of matrices on $\isubn$. Then $a$ is a finite Hankel pair of size $N$ if and only if the following four conditions obtain.
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq5.020}
a^1 \text{ and } a^2 \text{ are positive semi-definite.}
\end{equation} \addtocounter{theorem}{1}
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq5.030}
a^1_{m+e_1,n}+a^2_{m+e_2,n}=a^1_{m,n+e_1}+a^2_{m,n+e_2} \ \text{ whenever }\ m,n \in \isub{N-1}.
\end{equation} \addtocounter{theorem}{1}
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq5.040}
a^1_{(0,l),(0,l)} = a^2_{(l,0),(l,0)}=0\ \text{ for }\ l=1,\ldots,N.
\end{equation} \addtocounter{theorem}{1}
\setcounter{equation}{\value{theorem}} \begin{equation}\label{eq5.050}
\supp{f} \subseteq \isub{N-1}\text{ and } (a^1+a^2)f=0 \Rightarrow (a^1S_1+a^2S_2)f=0.
\end{equation} \addtocounter{theorem}{1}
\end{thm}
\begin{proof}
(Necessity) Assume that $( \alphan_{n \in I_N} , Y , A)$ is an HVMS and (\ref{eq5.010})
holds. Then (\ref{eq5.020}) holds because $Y$ and $1-Y$ are positive operators.
(\ref{eq5.030}) holds because the left-hand side
is
\[
\langle Y \alpha_n, \alpha_{m+e_1} \rangle \ + \
\langle (1-Y) \alpha_n, \alpha_{m+e_2} \rangle \ = \ \langle \alpha_n , A \alpha_m \rangle,
\]
by (\ref{eq2.2}). But the right-hand side
of (\ref{eq5.030}) by a similar calculation is $\langle A \alpha_n, \alpha_m \rangle$,
which is equal to $\langle \alpha_n, A \alpha_m \rangle$
because $A$ is self-adjoint and $\alpha_m, \alpha_n$ are in its domain
for $m,n \in I_{N-1}$.
Condition (\ref{eq5.040}) follows from (\ref{eq2.1}).
Finally, if $\supp{f} \subseteq \isub{N-1}$ and $(a^1+a^2)f=0$, this says that
\[
\langle \sum_{n \in I_{N-1}} f(n) \alpha_n , \alpha_m \rangle \ = \ 0
\]
for all $m \in I_N$.
But
\begin{eqnarray*}
(a^1S_1+a^2S_2)f (m) & \ = \ &
\sum_{n \in I_{N-1}} f(n) \langle Y \alpha_{n+ e_1} + (1-Y) \alpha_{n+e_2} , \alpha_m \rangle \\
&=& \langle A ( \sum_{n \in I_{N-1}} f(n) \alpha_n) , \alpha_m \rangle \\
&=& 0,
\end{eqnarray*}
so (\ref{eq5.050}) holds.
(Sufficiency).
Assume (\ref{eq5.020})--(\ref{eq5.050}) hold. Choose vectors $\alpha_n$ in a
Hilbert space $\H$ so that their Gram matrix equals the matrix $a^1 + a^2$:
\[
\langle \alpha_n, \alpha_m \rangle \ = \ a^1_{m,n} + a^2_{m,n} .
\]
Since $0 \le a^1 \le a^1 + a^2$ as matrices,
there is a positive contraction $Y$ satisfying (\ref{eq5.010}).
Equation (\ref{eq2.1}) follows from (\ref{eq5.040}).
If $N = 1$, we can define $A$ arbitrarily, {\em e.g.} by $A=0$.
If $N \geq 2$,
we define $A$ on the span of $\{ \alpha_n \}_{n \in I_{N-1}}$ by
\[
A \alpha_n \ = \
Y \alpha_{n+e_1} + (1-Y) \alpha_{n+e_2} .
\]
To check that this is a well-defined linear operator, we need to know that if
\[
\sum_{n \in I_{N-1}} c_n \alpha_n \ = \ 0 ,\]
then
\[
\sum_{n \in I_{N-1}} c_n ( Y \alpha_{n+e_1} + (1-Y) \alpha_{n+e_2}) \ = \ 0 .
\]
This follows from (\ref{eq5.050}).
It follows from (\ref{eq5.030}) that $A$ is symmetric.
\end{proof}
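The first step of the sufficiency argument, realizing a given positive semi-definite matrix as a Gram matrix, can be carried out by an eigendecomposition; a sketch, with a generic positive semi-definite matrix standing in for $a^1+a^2$:

```python
import numpy as np

def gram_vectors(G):
    """Return a matrix whose columns alpha_n satisfy
    <alpha_n, alpha_m> = G[m, n], for G real positive semi-definite."""
    d, V = np.linalg.eigh(G)
    d = np.clip(d, 0.0, None)            # clip round-off negatives
    return np.diag(np.sqrt(d)) @ V.T     # G = (D^{1/2} V^T)^T (D^{1/2} V^T)

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
G = M.T @ M                              # generic PSD stand-in for a^1 + a^2
alpha = gram_vectors(G)
err = np.max(np.abs(alpha.T @ alpha - G))   # Gram matrix of the columns
```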
\section{Infinite sequences}
\label{secinf}
As in one variable, passage from the finite to the infinite case is
straightforward and leads to some simplifications. Let ${\cal I}$ denote the set
of pairs of non-negative integers, excluding $(0,0)$.
\begin{defin}
\label{defihvms}
An infinite Hankel vector moment sequence is a 3-tuple, $(\{ \alpha_n \}_{n \in {\cal I}},Y,A)$ where:
$\{ \alpha_n \}_{n \in {\cal I}}$ is a sequence of vectors in some Hilbert space $\H$;
$Y$ is a positive contraction acting on $\H$, satisfying for each $l \geq 1$
\[
Y\alpha_{(0,l)}=0=(1-Y)\alpha_{(l,0)};
\]
$A$ is a densely defined self-adjoint operator on $\H$ with the property that
\[
\set{\alpha_n}{ n \in {\cal I}} \subset \dom{A};
\]
for each $n \in {\cal I}$,
\[
A\alpha_n=Y\alpha_{n+e_1}+(1-Y)\alpha_{n+e_2}.
\]
\end{defin}
Theorem~\ref{thmaf} becomes a description of functions in ${\cal L}^\infty$.
\begin{theorem}
\label{thmfa}
A Pick function $h$ of two variables
has an asymptotic expansion
\[
h(z) \ = \ \sum_{n \in {\cal I}} \frac{\rho_n}{z^n} \
\]
as $z \stackrel{nt}{\to}\infty$, for some real numbers $\rho_n$,
if and only if it has a representation
as in (\ref{eqa1.2.1}), and for every such representation there is an infinite HVMS
$(\{ \alpha_n \}_{n \in {\cal I}},Y,A)$
with $\alpha = \alpha_{(1,0)} + \alpha_{(0,1)}$.
Moreover, $\{ \rho_n \}$ are given by
\[
\sum_{|n|=l} \frac{\rho_n}{b^n}=-r_l(b)
\]
whenever $l \geq 1$ and $b \in \rtwoplus$.
\end{theorem}
Sufficiency of the condition follows from Theorem~\ref{prop3.10}; necessity
follows from the constructive proof of Theorem~\ref{thm4.1}.
We define an infinite Hankel pair by
\begin{defin}\label{defin2}
We say that $a=(a^1,a^2)$ is an infinite Hankel pair if $a^1$ and $a^2$ are matrices
on ${\cal I}$ and there exists an infinite HVMS such that
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqf3}
a_{m,n}^1=\langle Y\alpha_n,\alpha_m\rangle \text{ and }a_{m,n}^2=\langle (1-Y)\alpha_n,\alpha_m\rangle \text{ for } m,n \in {\cal I}.
\end{equation} \addtocounter{theorem}{1}
\end{defin}
If (\ref{eq5.030}) holds for all $N$, then (\ref{eq5.050}) holds automatically.
So the infinite Hamburger theorem becomes
\begin{thm}\label{thmf2}
Let $a$ be a pair of matrices on ${\cal I}$. Then $a$ is an infinite Hankel pair if and only if the following three conditions obtain.
\[
a^1 \text{ and } a^2 \text{ are positive semi-definite.}
\]
\[
a^1_{m+e_1,n}+a^2_{m+e_2,n}=a^1_{m,n+e_1}+a^2_{m,n+e_2} \ \text{ whenever }\ m,n \in {\cal I}.
\]
\[
a^1_{(0,l),(0,l)} = a^2_{(l,0),(l,0)}=0\ \text{ for }\ l \geq 1.
\]
\end{thm}
Here is a two variable version of Kronecker's theorem.
\begin{theorem}
\label{thmf3}
Let $h \in {\mathcal L}^\infty$.
Then there is an infinite HVMS
$(\{ \alpha_n \}_{n \in {\cal I}},Y,A)$
with $\alpha = \alpha_{(1,0)} + \alpha_{(0,1)}$, satisfying
${\rm rank}(a^1 + a^2) < \infty $ and
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqf5}
h(z) = \langle (A -z_Y)^{-1} \alpha, \alpha \rangle
\end{equation} \addtocounter{theorem}{1}
if and only if $h$ is a rational function.
\end{theorem}
\begin{proof}
If $h$ is rational of degree $(d_1,d_2)$, then by Theorem~\ref{thm2b2}
$h$ has a representation (\ref{eqf5}) on a Hilbert space $\H$ of dimension at most $d = d_1 + d_2$.
Since $h \in {\mathcal L}^\infty$, by Theorem~\ref{thmfa} there is an infinite
HVMS $(\{ \alpha_n \}_{n \in {\cal I}},Y,A)$ on $\H$.
So
\[
(a^1 + a^2)_{m,n} \ = \
\langle \alpha_n, \alpha_m \rangle_{\H}
\]
has rank at most $d$.
Conversely, suppose
there is an infinite HVMS
$(\{ \alpha_n \}_{n \in {\cal I}},Y,A)$
with $\alpha = \alpha_{(1,0)} + \alpha_{(0,1)}$, satisfying
${\rm rank}(a^1 + a^2) = d < \infty$ and (\ref{eqf5}).
Then one can choose vectors
$\beta_n$ in a space $\H$ of dimension $d$ such that
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqf6}
\langle \beta_n , \beta_m \rangle_\H = \langle \alpha_n, \alpha_m \rangle
\end{equation} \addtocounter{theorem}{1}
and so that the vectors $\{ \beta_n \}$ span $\H$.
Define a positive contraction $X$ on $\H$ by
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqf7}
\langle X \beta_n, \beta_m \rangle \ = \
\langle Y \alpha_n, \alpha_m \rangle .
\end{equation} \addtocounter{theorem}{1}
Define $B$ by
\[
B \beta_n \= X \beta_{n+e_1} + (1-X) \beta_{n+e_2}.
\]
We claim that $B$ extends by linearity to a well-defined linear operator on $\H$.
Indeed, suppose $\sum c_n \beta_n = 0$. Then by (\ref{eqf6}),
\[
\langle \sum c_n \alpha_n , \sum c_m \alpha_m \rangle
\ = \
\langle \sum c_n \beta_n , \sum c_m \beta_m \rangle
\ = \ 0.
\]
So $\sum c_n \alpha_n = 0$, and therefore by (\ref{eqf7})
\begin{eqnarray*}
\langle \sum c_n [ X \beta_{n+e_1} + (1-X) \beta_{n+e_2} ], \beta_m \rangle
&\ = \ &
\langle \sum c_n [ Y \alpha_{n+e_1} + (1-Y) \alpha_{n+e_2} ], \alpha_m \rangle\\
&=&
\langle A \sum c_n \alpha_n , \alpha_m \rangle \\
&=&
0.
\end{eqnarray*}
As $\{ \beta_m \}$ span $\H$, this means
$
\sum c_n [ X \beta_{n+e_1} + (1-X) \beta_{n+e_2} ] = 0 $;
so $B$ is well-defined,
and hence
$(\{ \beta_n \}_{n \in {\cal I}},X,B)$ is an infinite HVMS on $\H$.
Let $\beta = \beta_{(1,0)} + \beta_{(0,1)}$.
By Remark~\ref{rem2.1}
the scalar $(X,\beta)$
moments of $B$ agree with the scalar $(Y,\alpha)$ moments of $A$ to all orders.
Therefore by Theorem~\ref{thmfa}, the rational function $g$ of degree at most $d$ in each variable
given by
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqf9}
g(z) = \langle (B -z_X)^{-1} \beta, \beta \rangle_\H
\end{equation} \addtocounter{theorem}{1}
has the same asymptotic expansion at $\infty$ as $h$.
By Lemma~\ref{lemf4}, we are done.
\end{proof}
\begin{lem}
\label{lemf4}
Let $g, h$ be in ${\mathcal L}^\infty$ and have the same asymptotic expansion at $\infty$.
Assume in addition that $g$ is rational.
Then $g$ and $h$ are equal.
\end{lem}
\begin{proof}
For each fixed $w$ in $\mathbb R$, the functions $g(z,z+w)$ and $h(z,z+w)$ are in the one-variable
Pick class and have the same asymptotic expansions at $\infty$. By Theorem~\ref{thma3},
they must be Cauchy transforms of measures with the same moments.
Moreover, $g(z,z+w)$ is rational. Therefore by \cite[Thm. 1.2]{shota43}, the one-variable
moment problem is in this case determinate, so the two measures must be equal.
Therefore $g(z,z+w) = h(z,z+w)$ for all $z \in \Pi, \ w \in \mathbb R$, and so the
two functions are identically equal.
\end{proof}
\begin{cor}
\label{corf8}
Let $h \in \ln$ have an asymptotic expansion
\[
h(z) \ = \
\sum_{|n| \leq 2N-1} \frac{\rho_n}{z^n} \ + \
o(\| z \|^{-(2N-1)})
\]
as $z \stackrel{nt}{\to} \infty.$ Then there is a rational function $g$ in ${\mathcal L}^\infty$
that has the same asymptotic expansion to order $2N-1$.
\end{cor}
\begin{proof}
Let $(\alphanin{I_N},Y,A)$ be a finite HVMS corresponding to $h$ as in Theorem~\ref{thm4.1}.
Choose vectors $\{ \beta_n \}_{n \in I_N} $ in a finite dimensional space $\H$ so that
(\ref{eqf6}) holds, and define $X$ and $B$ as in the proof of Theorem~\ref{thmf3}.
Then $g$ given by (\ref{eqf9}) has the same asymptotic expansion.
\end{proof}
\section{An example}
\label{secex}
Let $\{ w_j \}_{j=1}^\infty$ be a summable sequence of non-negative numbers, and
let $\{ \lambda_j \}_{j=1}^\infty$ be a sequence of real numbers. Let $t_j$ be numbers
in the interval $[0,1]$. Define
\begin{eqnarray*}
A \ &=& \ \bigoplus \left(
\begin{array}{cc}
\lambda_j & 0 \\
0 & - \lambda_j
\end{array}
\right)\\
Y \ &=& \ \bigoplus \left(
\begin{array}{cc}
t_j^2 & t_j \sqrt{1 - t_j^2} \\
t_j \sqrt{1 - t_j^2} & 1 - t_j^2
\end{array}
\right)\\
\alpha_{(1,0)}
\ &=& \ \bigoplus \sqrt{w_j} \left(
\begin{array}{c}
t_j \\ \\
\sqrt{1 - t_j^2}
\end{array}
\right)\\
\alpha_{(0,1)}
\ &=& \ \bigoplus \sqrt{w_j} \left(
\begin{array}{c}
\sqrt{1 - t_j^2} \\
- t_j
\end{array}
\right)
\end{eqnarray*}
If $\alpha = \alpha_{(1,0)} + \alpha_{(0,1)}$ and
$h(z) = \langle (A-z_Y)^{-1} \alpha, \alpha \rangle$, then $h(z)$ is given by
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqex2}
h(z) \ = \
\sum_{j=1}^\infty w_j \frac{ 4 t_j \sqrt{1 - t_j^2} \ \lj + z_1 + z_2}{ \lj^2 -
\lj (2 t_j^2 - 1) (z_1 - z_2)
- z_1 z_2 } .
\end{equation} \addtocounter{theorem}{1}
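For a single $2\times 2$ block, the closed form above can be checked directly against the defining resolvent formula $h(z) = \langle (A-z_Y)^{-1}\alpha,\alpha\rangle$; a numerical spot check, with arbitrarily chosen parameter values:

```python
import numpy as np

w, lam, t = 0.7, 1.3, 0.4                 # one block; arbitrary test values
s = np.sqrt(1 - t**2)
A = np.diag([lam, -lam])
Y = np.array([[t**2, t * s], [t * s, s**2]])
alpha = np.sqrt(w) * np.array([t + s, s - t])   # alpha_(1,0) + alpha_(0,1)

z1, z2 = 2.0 + 1.0j, -1.0 + 2.0j                # a point of Pi^2
zY = z1 * Y + z2 * (np.eye(2) - Y)
h_resolvent = alpha @ np.linalg.solve(A - zY, alpha)

h_closed = w * (4 * t * s * lam + z1 + z2) / (
    lam**2 - lam * (2 * t**2 - 1) * (z1 - z2) - z1 * z2)
```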
If $\sum w_j \lj^2 < \infty$, then one can extend the HVMS by
\begin{eqnarray*}
\alpha_{(2,0)} &\ = \ &
\bigoplus \sqrt{w_j}\, \lj \ (2t_j^2 -1)
\left(
\begin{array}{c}
t_j
\\ \\
\sqrt{1-t_j^2}
\end{array} \right) \\
\alpha_{(1,1)} &\ = \ &
\bigoplus 2\, \sqrt{w_j}\, \lj \
\left(
\begin{array}{c}
t_j - t_j^3 + t_j^2 \sqrt{1-t_j^2} \\ \\
t_j -t_j^3 -t_j^2 \sqrt{1-t_j^2}
\end{array}
\right)\\
\alpha_{(0,2)} &\ = \ &
\bigoplus \sqrt{w_j}\, \lj \ (1 - 2t_j^2 )
\left(
\begin{array}{c}
\sqrt{1-t_j^2}
\\
-t_j
\end{array} \right)
.
\end{eqnarray*}
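That these vectors do extend the HVMS can be verified numerically for a single block: the recursion $A\alpha_n = Y\alpha_{n+e_1}+(1-Y)\alpha_{n+e_2}$ should hold for $|n|=1$, together with the boundary conditions $Y\alpha_{(0,l)}=0=(1-Y)\alpha_{(l,0)}$ (parameter values arbitrary):

```python
import numpy as np

w, lam, t = 0.7, 1.3, 0.4
s = np.sqrt(1 - t**2)
A = np.diag([lam, -lam])
Y = np.array([[t**2, t * s], [t * s, s**2]])
P = np.eye(2) - Y                                   # 1 - Y
sw = np.sqrt(w)

a = {                                               # the vectors alpha_n above
    (1, 0): sw * np.array([t, s]),
    (0, 1): sw * np.array([s, -t]),
    (2, 0): sw * lam * (2 * t**2 - 1) * np.array([t, s]),
    (1, 1): 2 * sw * lam * np.array([t - t**3 + t**2 * s,
                                     t - t**3 - t**2 * s]),
    (0, 2): sw * lam * (1 - 2 * t**2) * np.array([s, -t]),
}

# HVMS recursion A alpha_n = Y alpha_{n+e1} + (1-Y) alpha_{n+e2}, |n| = 1:
err_rec = max(np.max(np.abs(A @ a[n]
                            - Y @ a[(n[0] + 1, n[1])]
                            - P @ a[(n[0], n[1] + 1)]))
              for n in [(1, 0), (0, 1)])

# boundary conditions Y alpha_{(0,l)} = 0 = (1-Y) alpha_{(l,0)}, l = 1, 2:
err_bdy = max(np.max(np.abs(x)) for x in
              (Y @ a[(0, 1)], Y @ a[(0, 2)], P @ a[(1, 0)], P @ a[(2, 0)]))
```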
Calculating, one gets that
\begin{eqnarray*}
r_1(z) & \ = \ &
\left( \sum w_j \right) \left[ \frac{1}{z_1} + \frac{1}{z_2} \right] \\
r_2(z) & \ = \ &
\sum w_j \lj \
\left[
\frac{2t_j^2 -1}{z_1^2} + \frac{4 t_j \sqrt{1-t_j^2}}{z_1 z_2}
+\frac{1 - 2t_j^2 }{z_2^2}
\right] \\
r_3(z) & \ = \ &
\sum w_j \lj^2 \
\left[
\frac{(2t_j^2 -1)^2}{z_1^3}\ + \ \frac{4(t_j^2 - t_j^4) + 4t_j (2t_j^2 -1) \sqrt{1-t_j^2}}{z_1^2 z_2} \right.\\
&&\left. \qquad +\ \frac{4(t_j^2 - t_j^4) + 4t_j (1-2t_j^2 ) \sqrt{1-t_j^2}}{z_1 z_2^2}\
+\ \frac{(2t_j^2 -1)^2}{z_2^3}
\right].
\end{eqnarray*}
These are (up to a minus sign) the first three terms in the asymptotic expansion of (\ref{eqex2}) at infinity.
If one assumes that $\sum w_j \lj^4 < \infty$, then one gets two more terms, and so on.
In the special case that every $t_j = 1/\sqrt{2}$, the formulas simplify.
Then
\begin{eqnarray*}
h(z) & \ = \ & \sum w_j \frac{2 \lambda_j + z_1 + z_2}{\lambda_j^2 - z_1 z_2} \\
r_1(z) & \ = \ &
\left( \sum w_j \right) \frac{z_1 + z_2}{z_1 z_2} \\
r_2(z) & \ = \ &
\left( \sum w_j \lj \right) \
\frac{2}{z_1 z_2} \\
r_3(z) & \ = \ &
\left( \sum w_j \lj^2 \right) \
\frac{z_1 + z_2}{z_1^2 z_2^2} .
\end{eqnarray*}
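Since $\sum_{|n|=l}\rho_n/b^n = -r_l(b)$, the remainder $h(z)+r_1(z)+r_2(z)+r_3(z)$ should be $O(\|z\|^{-4})$ along a ray $z = s(ib_1, ib_2)$. A numerical spot check of the simplified formulas, with a finite sum of blocks:

```python
import numpy as np

w = np.array([0.5, 0.3, 0.2])            # weights w_j
lam = np.array([1.0, -2.0, 0.5])         # lambda_j

def h(z1, z2):                           # special case t_j = 1/sqrt(2)
    return np.sum(w * (2 * lam + z1 + z2) / (lam**2 - z1 * z2))

def r123(z1, z2):                        # r_1 + r_2 + r_3 from the formulas
    return (np.sum(w) * (z1 + z2) / (z1 * z2)
            + np.sum(w * lam) * 2 / (z1 * z2)
            + np.sum(w * lam**2) * (z1 + z2) / (z1 * z2)**2)

b1, b2 = 1.0, 2.0                        # a direction b in R^2_+
scaled = [abs(h(1j * s * b1, 1j * s * b2)
              + r123(1j * s * b1, 1j * s * b2)) * s**4
          for s in (10.0, 100.0, 1000.0)]
# the scaled remainders stay bounded: h + r1 + r2 + r3 = O(s^{-4})
```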
\section{Models}
\label{secm}
A {\em model} for $h$ is a reproducing kernel space ${\cal M}$ on $\Pi^2$,
and a positive contraction $Y$ on ${\cal M}$ so that, if the reproducing kernel $K$ for ${\cal M}$ is
written as
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eq2a1}
K(z,w) \= \langle v_z, v_w \rangle_{\cal M}
\end{equation} \addtocounter{theorem}{1}
with $v_z$ analytic in $z$, then
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eq2a2}
h(z) - \overline{h(w)} \= (z_1 - \bar w_1) \langle Y v_z, v_w \rangle \, + \,
(z_2 - \bar w_2) \langle (I-Y) v_z, v_w \rangle.
\end{equation} \addtocounter{theorem}{1}
Using our earlier notation $z_Y = z_1 Y + z_2 (I-Y)$, (\ref{eq2a2}) becomes
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eq2a3}
h(z) - \overline{h(w)} \= \langle (z_Y - w_Y^*) v_z, v_w \rangle .
\end{equation} \addtocounter{theorem}{1}
The existence of models for functions in the Pick class was proved in \cite{ag90}.
Indeed, it was shown there that for every $h$ in the Pick class,
there
are analytic functions $v^1(z)$ and $v^2(z)$ taking values in Hilbert spaces
${\cal M}^1$ and ${\cal M}^2$ so that
\[
h(z) - \overline{h(w)} \= (z_1 - \bar w_1) \langle v^1(z), v^1(w) \rangle_{{\cal M}^1} \, + \,
(z_2 - \bar w_2) \langle v^2(z), v^2(w) \rangle_{{\cal M}^2} .
\]
Let
\[
K(z,w) \= \langle v^1(z), v^1(w) \rangle_{{\cal M}^1} \, + \,
\langle v^2(z), v^2(w) \rangle_{{\cal M}^2}.
\]
This is a kernel, so can be written as in (\ref{eq2a1}) for some other Hilbert space ${\cal M}$, and there is a positive contraction $Y$ on ${\cal M}$
so that
\[
\langle v^1(z), v^1(w) \rangle_{{\cal M}^1} \ = \
\langle Y v_z, v_w \rangle .
\]
This yields (\ref{eq2a3}).
Write $\ii$ for the point $(i,i)$ in $\mathbb C^2$.
The equivalence of {\em (ii)--(iv)} in the following theorem was first proved in
\cite{aty11}.
\begin{theorem}
\label{thm2b1} Let $h: \Pi^2 \to \overline{\Pi}$ be in the Pick class, and not identically zero.
The following are equivalent.
(i) For some/every model with reproducing kernel as in (\ref{eq2a1}), there is a vector
$\alpha$ in ${\cal M}$ such that
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqb1}
h(z) \= \langle v_z, \alpha \rangle .
\end{equation} \addtocounter{theorem}{1}
(ii) There exists a self-adjoint operator $A$ on a Hilbert space ${\cal H}$ and a vector $\alpha$ in
${\cal H}$ such that
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqb2}
h(z) \= \langle (A - z_Y)^{-1} \alpha, \al \rangle .
\end{equation} \addtocounter{theorem}{1}
(iii) There exists $c > 0$ such that
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqb25}
\lim_{s \to \infty} s h(s \ii) \= ic .
\end{equation} \addtocounter{theorem}{1}
(iv) We have
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqb3}
\liminf_{s \to \infty} | s h(s \ii) | \ < \ \infty .
\end{equation} \addtocounter{theorem}{1}
\end{theorem}
\begin{proof} $(i) \Rightarrow (ii)$:
Define $B$ by
\[
B : v_z \mapsto z_Y v_z + \al .
\]
Equations (\ref{eq2a3}) and (\ref{eqb1}) imply that
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqb4}
\langle B v_z , v_w \rangle \=
\langle v_z, B v_w \rangle .
\end{equation} \addtocounter{theorem}{1}
Extend $B$ to finite linear combinations of vectors $v_{z_j}$ by linearity, and
(\ref{eqb4}) says that $B$ is well-defined and symmetric.
Indeed, if some linear combination $\sum c_j v_{z_j} = 0$, then
for every $w$ we have
\[
\langle \sum c_j ( (z_j)_Y v_{z_j} + \al ), v_w \rangle \=
\langle \sum c_j v_{z_j} , w_Y v_w + \al \rangle \= 0 ,
\]
so $B(\sum c_j v_{z_j}) = 0$.
If the defect indices of the closure of $B$ match, then $B$ can be extended to a self-adjoint
operator on ${\cal M}$. If not, $B$ can be extended to a self-adjoint operator on a superspace of ${\cal M}$.
In either event, we can assume that there is a self-adjoint $A$ on ${\cal H} \supseteq {\cal M}$
such that
\[
A : v_z \mapsto z_Y v_z + \al .
\]
Therefore $v_z = (A - z_Y)^{-1} \al$, and (\ref{eqb2}) follows from (\ref{eqb1}).
$(ii) \Rightarrow (iii)$: By the spectral theorem,
\[
s \, h(s \ii) \= \int \frac{s}{t-is} d\mu(t)
\]
where $\mu$ is the finite measure that is the scalar spectral measure of $A$ for $\alpha$.
As the integrand is bounded by $1$ in modulus and tends pointwise to $i$, the dominated convergence theorem
implies
\[
\lim_{s \to \infty} s h( s\ii) \ =\ i \| \al \|^2 .
\]
$(iii) \Rightarrow (iv)$: Obvious.
$(iv) \Rightarrow (i)$: By (\ref{eq2a3}),
\setcounter{equation}{\value{theorem}} \begin{equation}
2i \Im h(s\ii) \= 2 is \langle v_{s \ii} , v_{s \ii} \rangle .
\label{eqb5}
\end{equation} \addtocounter{theorem}{1}
By (\ref{eqb3}) and (\ref{eqb5}), there is
a sequence $s_n$ such that
$-i s_n v_{s_n \ii} $ has a weak limit. Call this limit $\al$.
By (\ref{eq2a3}) we have
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eqb6}
h(z) - \overline{h(s_n \ii)} \= \langle z_Y v_z , v_{s_n \ii}\rangle \, + \,
\langle v_z, -is_n v_{s_n \ii} \rangle .
\end{equation} \addtocounter{theorem}{1}
Take the limit in (\ref{eqb6}) as $s_n \to \infty$ to get (\ref{eqb1}).
\end{proof}
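The limit in step $(ii) \Rightarrow (iii)$ can also be observed numerically for a finite-dimensional pair $(A,Y)$ and vector $\alpha$ (arbitrary test values): $s\,h(s\ii)$ approaches $i\|\alpha\|^2$.

```python
import numpy as np

w, lam, t = 0.7, 1.3, 0.4                 # arbitrary single-block data
s_ = np.sqrt(1 - t**2)
A = np.diag([lam, -lam])
Y = np.array([[t**2, t * s_], [t * s_, s_**2]])
alpha = np.sqrt(w) * np.array([t + s_, s_ - t])

def sh(s):
    """s * h(s i, s i) with h(z) = <(A - z_Y)^{-1} alpha, alpha>."""
    zY = 1j * s * np.eye(2)               # z = (is, is) gives z_Y = is * I
    return s * (alpha @ np.linalg.solve(A - zY, alpha))

limit = 1j * np.dot(alpha, alpha)         # predicted limit i * ||alpha||^2
gap = abs(sh(1e6) - limit)                # small for large s
```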
\begin{theorem}
\label{thm2b2}
Let $h$ be in the Pick class of two variables, and assume $h$ satisfies (\ref{eqb3}).
There exists a representation
as in (\ref{eqb2}) with $\H$ finite dimensional
if and only if $h$ is rational and real-valued on the complement in $\mathbb R^2$
of its polar set.
\end{theorem}
\begin{proof}
If $h$ has a representation as in (\ref{eqb2}) with $\H$ $d$-dimensional,
it is clear that $h$ is rational of degree at most $d$ in each variable,
and that $h$ is real on $\mathbb R^2$ off its polar set.
For the converse, let
$$
\alpha(\lambda) \= i \frac{1+\lambda}{1-\lambda}
$$
be a linear fractional map that maps the unit disk ${\mathbb D}$ to $\Pi$, and
$$
\beta(z) \= \frac{z-i}{z+i}
$$
be its inverse. Let
\[
\phi(\lambda_1, \lambda_2) \ = \
\beta \circ h(\alpha(\lambda_1), \alpha(\lambda_2)) .
\]
This is a function in the unit ball of $H^\infty({\mathbb D}^2)$, the space of bounded analytic
functions on the bidisk. Moreover, $\phi$ is rational if and only if $h$ is, in which case they have the same bidegree, and $\phi$ is inner if and only if $h$ is real-valued a.e.\ on $\mathbb R^2$.
Assume $h$ is rational and non-constant
of bidegree $(d_1,d_2)$. By a result of G. Knese \cite{kn08ub},
there are Hilbert spaces ${\cal M}^1$ and ${\cal M}^2$ of dimension $d_1$ and $d_2$ respectively,
and analytic functions $u^1 : \mathbb D^2 \to {\cal M}^1$ and $u^2: \mathbb D^2 \to {\cal M}^2$
so that
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eq2b4}
1 - \phi(\lambda) \overline{\phi(\zeta)} \ = \
(1 - \lambda_1 \overline{\zeta_1}) \langle u^1(\lambda) , u^1(\zeta) \rangle \ + \
(1 - \lambda_2 \overline{\zeta_2}) \langle u^2(\lambda) , u^2(\zeta) \rangle.
\end{equation} \addtocounter{theorem}{1}
Define functions $v^r : \Pi^2 \to {\cal M}^r$ for $r=1,2$ by
\[
v^r(z) \ = \
\frac{h(z) + i}{z_r + i}\, u^r(\beta(z)).
\]
Then an algebraic manipulation transforms (\ref{eq2b4}) into
\setcounter{equation}{\value{theorem}} \begin{equation}
\label{eq2b5}
h(z) - \overline{h(w)} \ = \
(z_1 - \bar w_1) \langle v^1(z) , v^1(w) \rangle \ + \
(z_2 - \bar w_2)
\langle v^2(z) , v^2(w) \rangle.
\end{equation} \addtocounter{theorem}{1}
Let
\[
K(z,w) \= \langle v^1(z), v^1(w) \rangle_{{\cal M}^1} \, + \,
\langle v^2(z), v^2(w) \rangle_{{\cal M}^2}.
\]
This has rank less than or equal to $d = d_1 + d_2$, so it is the reproducing kernel for some
Hilbert function space
${\cal M}$ on $\Pi^2$ of dimension less than or equal to $d$.
By $ (iv) \Rightarrow (i)$ of Theorem~\ref{thm2b1}, we have a vector $\alpha$ such that
(\ref{eqb1}) holds. Now follow the proof of $(i) \Rightarrow (ii)$, and observe that since
$B$ is defined on a finite dimensional space, its defect indices must match, and so it can be extended to a self-adjoint operator $A$ on ${\cal M}$.
\end{proof}
\section{Introduction}
\label{sec:intro}
Being relevant to a wide range of practical scenarios, the behavior of
colloid suspensions near solid surfaces has been thoroughly studied
over the years. This research effort consists of several bodies of
work, for each of which we can give only a few representative
references. The first category of papers concerns the disruption of the
structural isotropy of a three-dimensional (3D) fluid suspension by
the surface, {e.g., } the formation of a layered structure decaying away
from the surface under equilibrium
\cite{VanWinkle1988,GonzalezMozuelos1991} and nonequilibrium
\cite{Zurita_Gotor-Blawzdziewicz-Wajnryb:2012} conditions. Another
category addresses the effect of the anisotropic geometry on particle
dynamics near a single planar surface\,---\,for isolated particles
\cite{Happel,Perkins1992,%
Cichocki-Jones:1998,%
Walz-Suresh:1995,%
Prieve1999,%
Prieve2000,CarbajalTinoco2007,%
Blawzdziewicz2010%
},
particle pairs
\cite{Happel,Dufresne2000,Cichocki2007,%
Zurita_Gotor-Blawzdziewicz-Wajnryb:2007b},
and a 3D suspension adjacent to a surface
\cite{Anekal2006,Michailidou2009,%
Cichocki-Wajnryb-Blawzdziewicz-Dhont-Lang:2010,%
Dhont2012,%
Michailidou2013}.
Regarding
quasi-two-dimensional (quasi-2D) layers of particles, most studies have
considered the confinement of suspensions between two rigid
surfaces. This research addressed structural properties of such
confined suspensions
\cite{CarbajalTinoco1996,Schmidt1997,Zangi2000,Frydel2003,Han2008},
and the dynamics of single particles
\cite{Lin2000,Dufresne2001,Ekiel_Jezewska-Wajnryb-Blawzdziewicz-Feuillebois:2008},
particle pairs \cite{Cui2004,Bhattacharya2005b}, and concentrated quasi-2D suspensions
\cite{Diamant2005A,%
Blawzdziewicz-Wajnryb:2012,%
Baron-Blawzdziewicz-Wajnryb:2008}.
Another type of quasi-2D suspensions has also been studied, where a
particle layer is confined to a fluid interface
\cite{Lin1995,Cichocki2004,Peng2009,Zhang2014}.
In cases where the surface attracts the particles and the suspension
is sufficiently dilute, the system can contain a single layer of
surface-associated particles in contact with a practically
particle-free solvent \cite{GonzalezMozuelos1991}. A single layer can
also form as a result of gravitational settling of particles toward a
horizontal wall. This scenario is studied in the present work.
Sedimented colloidal particles undergo random Brownian displacements,
which results in diffusive broadening of the fluctuating particle
layer. The width of the particle height distribution above the bottom
surface is characterized by the sedimentation length $l$, i.e., the
height at which the gravitational energy of a particle equals its
thermal energy. The dynamics and height distribution of individual
sedimented particles above the bottom surface were studied in
Refs.\ \citenum{Walz-Suresh:1995,Prieve1999,Prieve2000} using total
internal reflection microscopy. Particle monolayers at higher
densities were investigated experimentally for a system in which the
sedimentation length is much smaller than the particle
diameter \cite{Skinner2010}. It was shown that at high area fractions
the suspension can assemble into quasi-2D colloidal crystals, but
formation of a nonuniform vertical microstructure was not observed,
because of the small sedimentation length.
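For orientation, the gravitational length $l = k_BT/(\Delta\rho V g)$ of the particles used below ($d=1.5\,\mu$m silica spheres in water; see Sec.~2) is a few tenths of a micron. A back-of-the-envelope estimate (the actual height distribution is further broadened by electrostatic repulsion from the wall):

```python
import math

kB = 1.380649e-23        # Boltzmann constant, J/K
T = 297.0                # ~24 C, in K
g = 9.81                 # m/s^2
d = 1.5e-6               # particle diameter, m
drho = 2000.0 - 1000.0   # silica minus water density, kg/m^3

V = math.pi * d**3 / 6.0            # particle volume
l = kB * T / (drho * V * g)         # sedimentation length, ~0.24 microns
```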
Here we are interested in the structure and dynamics of a
surface-associated layer for which the sedimentation length is
comparable to the particle diameter. We focus on the effects of the
suspension concentration on the statistical height distribution of
particles and their diffusion coefficient. Unlike quasi-2D
suspensions confined between two surfaces or adsorbed at a fluid
interface, where particle configurations and motions are restricted in
two directions, in the present system no constraints are imposed on the
distance between the particles and the single wall. Thus, at
sufficiently high area fractions, particles form a nontrivial
stratified microstructure. This microstructure and its effect on
particle dynamics are analyzed in our paper.
The article is organized as follows. Section~\ref{sec:exp_met}
describes the experimental methods used to prepare the system, image
the particles, and analyze the extracted data. In
Sec.\ \ref{sec:num_met} we describe the theoretical background and
numerical methods used to perform the simulations. In
Sec.\ \ref{sec:structure} we present the results concerning the
equilibrium structure of the quasi-2D suspension observed in planes
parallel to the bottom surface (the quasi-2D radial distribution) and
in the direction perpendicular to it (the height distribution).
Section~\ref{sec:dynamics} addresses the diffusion of particles
parallel to the surface, as affected by the surface proximity. We
discuss our findings in Sec.\ \ref{sec:discuss}.
\section{Experimental methods}
\label{sec:exp_met}
\subsection{Quasi-2D system of sedimented Brownian spheres}
\label{Quasi-2D system of sedimented Brownian spheres}
Quasi-2D colloidal layers are created by placing a suspension of
colloidal silica spheres in a glass sample cell $\sim 150\,\mu$m
high. The particles are then allowed to sediment and equilibrate for
30 minutes at a temperature of approximately $24^\circ$C before
measurements start (Fig.~\ref{fig:setup}). We use green fluorescent
monodisperse, negatively charged silica particles (Kisker Biotech,
PSI-G1.5 Lot \#GK0090642T) with diameter $d=1.50\pm0.15\,\mu$m, and
mass density $\massDensity=2.0$ g/cm$^3$. Monolayers of area
fraction $0<\phi\le0.62$ are prepared by diluting the original
suspension with double distilled water (DDW, 18~M$\Omega\,$cm), without
and with the addition of salt at a concentration $\KCl=0.01\Mol$. The
sample walls are cleaned and slightly charged by plasma etching to
avoid particle attachment to the bottom wall of the cell. We observe
that the aqueous medium above the colloidal monolayer is free of
colloids. Since the particles are floating right above the bottom
wall, we can treat the upper wall as a distant boundary.
\begin{figure}
\centering
\includegraphics[clip=true,trim=0 0 0 0,scale=0.4]{setup1}
\caption{(a) Images of fluorescent $1.5~\mu$m-diameter silica spheres
suspended in water, taken after the particles sedimented to create a
quasi-2D suspension at area fraction $\phi=0.49$. Large (small)
image corresponds to a typical image of the first (second)
layer. Scale bar = $5~\mu$m. (b) Schematic view of the system and
its parameters. }
\label{fig:setup}
\end{figure}
\subsection{Imaging techniques}
Particle position and motion in the $x$--$y$ plane, perpendicular to the
optical axis, are observed using epifluorescence microscopy (Olympus
IX71). Images are captured at a rate of $70$~fps by a CMOS camera
(Gazelle, Point Grey Research). We use in-line holographic microscopy
to image the dynamics of particles in three dimensions in dilute
samples \cite{Kapfenberger2013}. This imaging technique uses a collimated coherent light
source (DPSS, Coherent, $\lambda=532$~nm) to illuminate a sample
mounted on a microscope. The light scattered from the sample
interferes with the light passing through it, to form a hologram in the
image plane. We reconstruct the light field passing through the sample
by Rayleigh-Sommerfeld back-propagation and extract from it the particle
location in three
dimensions \cite{Lee07a,Cheong2010b,Kapfenberger2013}. For holographic
imaging measurements we use non-fluorescent silica particles with the
same diameter ($d=1.50~\pm~0.08~\mu$m, Polysciences Inc.).
Additional details of the setup and measurement methods can be found
elsewhere \cite{Kapfenberger2013}.
\begin{figure}
\centering
\includegraphics[clip=true,scale=0.5]{figure2ja}
\caption{Holographic imaging. Height probability distribution of a
single sphere (in salt-free water) shifted to the maximum value and
fitted to the Boltzmann probability distribution \eqref{low-density
probability distribution} with the particle--wall potential
\eqref{Eq:boltzmann}.}
\label{fig:boltzman}
\end{figure}
We use confocal imaging to monitor particle positions in a dense layer
in three dimensions. Our spinning disc confocal imaging system (Andor,
Revolution XD) includes a Yokogawa (CSU-X1) spinning disc, and an
Andor (iXon 897) EM-CCD camera. An objective lens (Olympus, $\times$60,
NA=1.1, water immersion) mounted on a piezoelectric scanner (Physik
Instrumente, Pifoc P-721.LLQ) is used to scan the sample in the $z$
axis, with a step size of 100 nm.
\subsection{Height calibration}
\label{Height calibration}
A suspended tracer particle
is subject to electrostatic and gravitational forces in addition to
thermal fluctuations, affecting its height distribution
\cite{Prieve1999}.
The particle potential energy can be described as
\begin{equation}
U~= ~mgz+B\mathrm{e}^{-(z-d/2)/\lambda},
\label{Eq:boltzmann}
\end{equation}
where $z$ is the vertical position of the tracer, $g$ is the
gravitational acceleration,
\begin{equation}
\label{m}
m=\textstyle\frac{\pi}{6}\Delta\massDensity d^3
\end{equation}
is the buoyant mass of the tracer ($\Delta \massDensity$ is the mass
density difference between silica and water), $\lambda$ is the Debye
screening length, and the amplitude $B$ depends on $\lambda$ and the
surface charges of both particle and glass surfaces. The
corresponding probability distribution of the particle height $z$ is
\begin{equation}
\label{low-density probability distribution}
\particleDistribution(z)=\particleDistributionNorm\mathrm{e}^{-U(z)/k_{\rm B}T},
\end{equation}
where $k_{\rm B}T$ is the thermal energy and $\particleDistributionNorm$ is
the normalization constant.
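The normalization in \eqref{low-density probability distribution} is straightforward to carry out numerically. The following sketch (not part of the original analysis; the sedimentation length, Debye length, and amplitude $B$ are representative values quoted in the text) evaluates the normalized height distribution on a grid:

```python
import math

# Representative parameter values (assumed for illustration):
# d = 1.5 um, sedimentation length l ~ 0.24 um, lambda ~ 50 nm, B ~ 10 kT
d   = 1.5      # particle diameter [um]
l   = 0.237    # sedimentation length kT/(mg) [um]
lam = 0.05     # Debye screening length [um]
B   = 10.0     # particle-wall repulsion amplitude [kT]

def U(z):
    """Potential energy U(z) in units of kT; z is the height of the center."""
    return z / l + B * math.exp(-(z - d / 2) / lam)

# Normalize the Boltzmann factor exp(-U/kT) on a grid z in [d/2, d/2 + 12 l]
n  = 4000
dz = 12 * l / n
zs = [d / 2 + i * dz for i in range(n + 1)]
w  = [math.exp(-U(z)) for z in zs]
Z  = sum(w) * dz                     # inverse of the normalization constant
rho = [wi / Z for wi in w]           # normalized height distribution rho(z)

z_peak = zs[max(range(len(rho)), key=rho.__getitem__)]
```

The peak sits a fraction of a micron above contact, where gravity and the screened wall repulsion balance.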
The height distribution of a single particle above the sample's bottom
was obtained from very dilute suspensions, using in-line holographic
imaging \cite{Lee07a,Cheong2010b,Kapfenberger2013} (see
Fig.~\ref{fig:boltzman}). Our holographic measurements provide values
of relative particle positions, but not the absolute particle heights
with respect to the bottom wall. We thus set the peak position to
$z=0$ and focus on the height relative to this reference plane. The
exponential decay on the right side of the probability-density peak is
governed by a decay length,
\begin{equation}
l=\frac{k_{\rm B}T}{mg}\label{ldef}
\end{equation}
(the sedimentation length), resulting from the competition between
gravity and thermal forces. The exponential-decay length determined
from the holographic measurements agrees well with the calculated
sedimentation length \eqref{ldef}, without any fitting parameters (see
Fig.~\ref{fig:boltzman}).
The electrostatic term, which controls the steep rise of the probability density, mostly affects the peak position rather than its shape. Since we shifted the peak position to $z=0$, the fitting of the entire probability density using Eqs.\ \eqref{Eq:boltzmann} and \eqref{low-density probability
distribution} was insensitive to the value of $B$. Reasonable fits were obtained for $\lambda$ in the range 40--70~nm. Better estimates of $B$ and $\lambda$ are given in Sec.~\ref{Mean particle height at low area fractions}, using mobility measurements.
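The decay length \eqref{ldef} follows directly from the quoted particle size and mass densities. A quick numerical check (a sketch, using only constants stated in the text):

```python
import math

kB   = 1.380649e-23     # Boltzmann constant [J/K]
T    = 297.0            # ~24 C [K]
g    = 9.81             # gravitational acceleration [m/s^2]
d    = 1.5e-6           # particle diameter [m]
drho = 2.0e3 - 1.0e3    # silica minus water mass density [kg/m^3]

m = math.pi / 6 * drho * d**3    # buoyant mass of the tracer
l = kB * T / (m * g)             # sedimentation length
```

This gives $l \approx 0.24\,\mu$m, i.e.\ $l/d \approx 0.16$, the dimensionless value used in the simulations below.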
\begin{figure}
\centering \includegraphics[clip=true,scale=0.5]{figure3ja}
\caption{(a) Confocal imaging. Height probability distribution of
silica particles at $\phi<0.003$ with diameters of $d_0=1.5~\mu$m
(blue) and $d_0=1.0~\mu$m (black). Inset: Logarithm of the
probability distributions scaled by $d_0^3$ in units of
$10^2~\mu$m$^{-3}$; as expected, the two curves have approximately
the same slope, which is used to calibrate the confocal height
measurements. (b) Height probability distribution of silica
particles with diameter $d_0=1.5~\mu$m extracted from holographic
imaging (green circles, see Fig.~\ref{fig:boltzman}) and confocal
imaging (blue line). The suspensions in both figures were with no added salt, $\KCl=0 \Mol$. }
\label{fig:pvsz}
\end{figure}
The applicability of the holographic imaging is limited to low-density
suspensions, whereas the confocal imaging can also be used at higher
concentrations. On the other hand, confocal height measurements suffer
from spherical aberrations due to multiple changes in refractive index
in the imaging path. This leads to a systematic error in measuring
$z$, which can be eliminated by proper calibration. We calibrate the
confocal measurement of the relative vertical particle positions by
requiring the exponential decay of the height distribution to agree
with the known, and verified, value of $l$.
In Fig.~\ref{fig:pvsz}\subfig{a} we show the particle-height
distribution $\particleDistribution(z)$ at $\phi<0.003$ for two
different particle sizes ($d_0=1.0,1.5~\mu$m). The distributions are
shifted so that the highest probability is located at $z=0$. Scaling
the logarithm of the distributions by $d_0^3$ [inset of
Fig.~\ref{fig:pvsz}\subfig{a}] shows that the normalized decay
constants for the two particle sizes have approximately the same
value, from which we calibrate the confocal microscope's height
measurements. In Fig.~\ref{fig:pvsz}\subfig{b}, the height
distributions extracted by the two methods (holographic and confocal
imaging) are overlaid. This figure emphasizes the higher accuracy of
holographic imaging over confocal imaging, especially around $z=0$,
where the rise of the distribution should be very steep
\cite{Prieve1999, Kapfenberger2013}. The difference between the
curves can also be attributed to polydispersity, since the holographic
imaging is a single-particle measurement while the confocal imaging is
a multiple-particle measurement, and its corresponding curve
represents an average over $\sim$40 particles.
\section{Numerical methods}
\label{sec:num_met}
\subsection{The system}
\label{The system - numerical}
\subsubsection{Particles and their interactions}
\label{Particles and their interactions}
Silica particles are modeled as Brownian hard spheres with or without
electrostatic repulsion (depending on the salt concentration), immersed in
a fluid of viscosity $\eta$. The bottom wall is treated as an
infinite hard planar surface. Creeping-flow conditions and no slip
boundary conditions at the particle surfaces and at the wall are
assumed.
In a salt solution with $\KCl=0.01\Mol$, the Debye length is only
about 5 nm, and therefore electrostatic interactions are screened out.
The particles thus interact only via infinite hard-core
particle--particle and particle--wall potentials and the gravity
potential $mgz$, and no other potential forces are involved. The
strength of the gravity force is described by the sedimentation length
\eqref{ldef}.
In addition to the hard-core repulsion, in DDW with no added salt
($\KCl=0\Mol$) particles are assumed to also interact via particle--wall and
particle--particle Debye--H\"uckel potentials,
\begin{equation}
V(z)=Be^{-(z-d/2)/\lambda},\label{V}
\end{equation}
and
\begin{equation}
V'(r)=B'e^{-(r-d)/\lambda},\label{V'}
\end{equation}
where $\lambda$ is the Debye screening length, $B$ and $B'$ are the
potential amplitudes, and $r$ is the distance between the particle
centers. The consideration of Debye--H\"uckel potentials in the
salt-free case is based on our experimental measurement $\lambda \sim
60$ nm. A finite Debye screening length in DDW stems from the
presence of residual ions in the solution \cite{Behrens2001}.
\subsubsection{Suspension polydispersity}
\label{Suspension polydispersity}
To determine the effects of the suspension polydispersity on the
near-wall microstructure and dynamics, we have performed numerical
simulations for a hard-sphere (HS) system with a Gaussian
distribution of particle diameters,
\begin{equation}
\label{Particle size distribution}
p(d)=
\frac{1}{(2\pi\sigma^2)^{1/2}}\exp\left[-\frac{(d-\daver)^2}{2\sigma^2}\right],
\end{equation}
where $d$ and $\daver$ are the actual and average particle diameters,
and $\sigma$ is the standard deviation. All the particles have the
same mass density $\massDensity$; hence, particles of different sizes
have different buoyant masses and different sedimentation lengths
\eqref{ldef}. The dimensionless sedimentation length based on the
average particle diameter $\daver$ is defined as
\begin{equation}
\label{dimensionless sedimentation constant}
\frac{l_0}{\daver}=\frac{k_{\rm B}T}{\maver g\daver},
\end{equation}
where
\begin{equation}
\maver=\frac{\pi}{6} \daver^3\,\Delta\massDensity.
\end{equation}
The area fraction $\phi$ based on the average particle diameter
$\daver$ is
\begin{equation}
\phi=\textstyle\frac{1}{4}\pi n\daver^2,\label{phi}
\end{equation}
where $n$ is the number of particles per unit area. Since the
particles are free to move in the $z$ direction, the area fraction
$\phi$ can exceed 1.
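Because $l \propto d^{-3}$, even a moderate size spread translates into a wide spread of sedimentation lengths. A short sketch (illustrative, with $\sigma/\daver=0.15$ from the range considered below):

```python
import random

random.seed(0)
d0    = 1.5                 # average diameter [um]
sigma = 0.15 * d0           # standard deviation of the diameter [um]
l0    = 0.158 * d0          # sedimentation length at d = d0 [um]

def sed_length(d):
    # l = kT/(mg) with m ~ d^3, hence l(d) = l0 * (d0/d)^3
    return l0 * (d0 / d) ** 3

diam   = [random.gauss(d0, sigma) for _ in range(10000)]
ls     = [sed_length(d) for d in diam]
l_mean = sum(ls) / len(ls)
```

The sample-averaged sedimentation length exceeds $l_0$ by more than 10\% for this spread, since the small-diameter tail of the distribution dominates the average of $d^{-3}$.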
\subsubsection{System parameters}
The simulations were carried out for the following system parameters.
For the dimensionless sedimentation length \eqref{dimensionless
sedimentation constant} we use the value
\begin{equation}
\label{value of dimensionless sedimentation constant}
\frac{l_0}{\daver}=0.158,
\end{equation}
calculated from the particle size and density. Based on the
comparison between the calculated and measured values of the
equilibrium average of the lateral self-diffusion coefficient for
isolated particles in DDW, we estimate that
the Debye length and the amplitude of particle--wall electrostatic
repulsion are
\begin{equation}
\label{Debye parameters}
\lambda/d=0.03,\qquad \frac{B}{k_{\rm B}T}=10.
\end{equation}
These values are used for salt-free suspensions at all suspension
concentrations. Assuming that the charge densities of the particle and
wall surfaces are similar, we take
\begin{equation}
\label{Debye B particle}
B'=B/2,
\end{equation}
for the interparticle-potential amplitude, as follows from the Derjaguin approximation \cite{Israelachvili}.
The simulations were performed in the range of area fractions
$\phi\le1.2$. For polydisperse HS systems the calculations
were carried out for $\sigma/d_0=0.10$, 0.15, 0.20, and 0.25 (we
estimate that $0.10 < \sigma/d_0 < 0.15$ for the silica particles used
in the experiments). For particles interacting via the Debye--H\"uckel
potentials \eqref{V} and \eqref{V'} only monodisperse suspensions were
considered.
\subsection{Evaluation of the equilibrium distribution}
\label{Equilibrium distributions}
\subsubsection{Low density limit}
\label{Structure - low density limit}
For monodisperse suspensions at low particle concentrations, the
equilibrium particle distribution $\particleDistribution(z)$ is given
by the normalized Boltzmann factor \eqref{low-density probability
distribution}. To determine the particle distribution for a dilute
polydisperse suspension, the particle-size-dependent Boltzmann factor
for individual particles, $\particleDistribution_1(z;d)$, is
convoluted with the particle-size distribution \eqref{Particle size
distribution},
\begin{equation}
\label{dilute polydisperse equilibrium distribution}
\particleDistribution(z)=\int_0^{\infty} \textrm{d}d\,
p(d)\particleDistribution_1(z;d).
\end{equation}
For a HS system
\begin{equation}
\label{individual Boltzmann factor for hard spheres}
\particleDistribution_1(z;d)=l^{-1} \textrm{e}^{-(z-d/2)/l} \theta(z-d/2),
\end{equation}
according to equations \eqref{Eq:boltzmann}--\eqref{ldef}, where
$\theta(x)$ is the Heaviside step function, and the sedimentation
length $l$ is particle-size dependent due to the variation of particle
mass.
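The convolution \eqref{dilute polydisperse equilibrium distribution} is easy to evaluate numerically. A minimal sketch, assuming the Gaussian size distribution with $\sigma/\daver=0.15$ and the size-dependent sedimentation length $l(d)=l_0(\daver/d)^3$:

```python
import math

d0, sigma = 1.5, 0.15 * 1.5   # average diameter and standard deviation [um]
l0 = 0.237                    # sedimentation length at d = d0 [um]

def p(d):
    """Gaussian particle-size distribution."""
    return (math.exp(-(d - d0)**2 / (2 * sigma**2))
            / math.sqrt(2 * math.pi * sigma**2))

def rho1(z, d):
    """Single-particle hard-sphere Boltzmann factor with theta(z - d/2)."""
    if z < d / 2:
        return 0.0
    l = l0 * (d0 / d) ** 3    # size-dependent sedimentation length
    return math.exp(-(z - d / 2) / l) / l

def rho(z, nd=200):
    """Dilute polydisperse distribution: convolution of p(d) with rho1."""
    dd = 8 * sigma / nd       # midpoint rule over d0 +/- 4 sigma
    ds = [d0 - 4 * sigma + (i + 0.5) * dd for i in range(nd)]
    return sum(p(d) * rho1(z, d) for d in ds) * dd
```

The resulting profile is normalized and vanishes below the smallest contact height $d/2$ in the size distribution.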
\subsubsection{Monte--Carlo simulations}
\label{Monte-Carlo simulations}
To determine the equilibrium microstructure of a sedimented suspension
at finite particle area fractions, equilibrium Monte--Carlo (MC)
simulations were performed for 2D-periodic arrays of spherical
particles in 3D space (with periodicity in the horizontal directions
$x$ and $y$ and the box size $L$). The particles interact via
infinite hard-core repulsion and the pair-additive potential
\begin{equation}
U(\boldsymbol{X})=\sum_{i=1}^{N}m_{i}gz_{i}+
\sum_{i=1}^{N}V(z_{i})+\frac{1}{2}\sum_{i=1}^{N}\sum_{j\ne i}^{N}V^{\prime
}(r_{ij}),
\end{equation}
which includes the gravity term and particle--wall and
particle--particle screened electrostatic potentials \eqref{V} and
\eqref{V'}. Here $\boldsymbol{X}=(\mathbf{r}_{1},\ldots
,\mathbf{r}_{N})$ is the particle configuration (with $\mathbf{r}_i$
denoting the position of particle $i$), $z_i$ is the vertical
coordinate of particle $i$, and $r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|$
is the relative particle distance.
A purely HS system with $V=V'=0$ was modeled for monodisperse
particles and for polydisperse particles with the Gaussian size
distribution \eqref{Particle size distribution}. For systems with
nonzero electrostatic repulsion only monodisperse particles were
considered.
The initial configuration was prepared by placing $N=400$ particles
randomly in a vertical cuboid box with the square base $L$ and the
height $10L$. The size $L$ of the 2D-periodic cell was determined to
obtain the required area fraction $\phi$ of the sedimented particle
layer. The suspension was allowed to sediment by following the MC
random-walk dynamics in the configurational space $\boldsymbol{X}$
\cite{Frenkel-Smit:2002} (as described below). After the equilibrium
state was reached, suspension properties were obtained by averaging
the quantities of interest over at least 200 independent
configurations.
Our adaptive simulation procedure was performed by repeating the MC
steps defined as follows:
\begin{itemize}
\item[\subfig{a}] A randomly selected particle $i$ is given a small
random displacement, $\boldsymbol{r} _{i}\rightarrow
\boldsymbol{r}_{i}^{\prime }=\boldsymbol{r}_{i}+\boldsymbol{ \Delta
}$, where $\boldsymbol{\Delta }$ is chosen from a 3D Gaussian
distribution with the standard deviation adaptively adjusted to the
current mean gap between particles. This displacement results in
the change of the configuration from $\boldsymbol{X}$ to
$\boldsymbol{X}^{\prime }$.
\item[\subfig{b}] According to the Metropolis detailed balance
condition, the new configuration is accepted with the probability
\begin{equation}
\min \left( 1,\exp \left\{ -\left[ U(\boldsymbol{X}^{\prime })
-U(\boldsymbol{X})\right] /k_{\rm B}T\right\} \right),
\end{equation}
provided that there is no particle--particle or particle--wall overlap.
\end{itemize}
To let the system reach an equilibrium state $\boldsymbol{X}_{1}$, the
MC steps \subfig{a} and \subfig{b} are repeated $10^{5}N$ times. The
next independent equilibrium configuration $\boldsymbol{X}_{n+1}$ is
obtained from the previous configuration $\boldsymbol{X}_{n}$ by
performing $10^{4}N$ MC steps. The particle height distribution
$\particleDistribution(z)$ and other equilibrium quantities are
obtained by averaging over 200 independent configurations
$\boldsymbol{X}_{i}$.
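The procedure above can be condensed into a minimal sketch (hard spheres with gravity only, i.e.\ $V=V'=0$; a fixed trial-move scale replaces the adaptive adjustment, and the small system size is illustrative rather than a production value):

```python
import math, random

random.seed(1)
d, l  = 1.0, 0.158        # diameter and sedimentation length (units of d)
N, L  = 12, 12.0          # particles and periodic box side (dilute layer)
steps = 100000

# Initial configuration: a loose grid just above the wall
pos = [[(i % 4 + 0.5) * L / 4, (i // 4 + 0.5) * L / 4,
        d / 2 + random.random() * l] for i in range(N)]

def overlap(i):
    """Hard-core particle--wall and particle--particle overlap test."""
    xi, yi, zi = pos[i]
    if zi < d / 2:
        return True
    for j in range(N):
        if j == i:
            continue
        dx = (xi - pos[j][0] + L / 2) % L - L / 2   # minimum image in x
        dy = (yi - pos[j][1] + L / 2) % L - L / 2   # minimum image in y
        dz = zi - pos[j][2]
        if dx * dx + dy * dy + dz * dz < d * d:
            return True
    return False

zsum, nsamp, s = 0.0, 0, 0.2
for step in range(steps):
    i = random.randrange(N)
    old = pos[i][:]
    pos[i] = [(old[0] + random.gauss(0, s)) % L,
              (old[1] + random.gauss(0, s)) % L,
              old[2] + random.gauss(0, s)]
    dU = (pos[i][2] - old[2]) / l    # gravity term, in units of kT
    if overlap(i) or (dU > 0 and random.random() >= math.exp(-dU)):
        pos[i] = old                 # Metropolis rejection
    if step > steps // 2:            # sample after equilibration
        zsum += sum(p[2] for p in pos)
        nsamp += N

z_mean = zsum / nsamp
```

At this low area fraction the sampled mean height approaches the dilute-limit value $\langle z\rangle = d/2 + l$.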
\subsection{Hydrodynamics and self-diffusion}
\label{Hydrodynamics and self-diffusion}
\subsubsection{Low density limit}
In the absence of a wall, the self-diffusion coefficient of an
isolated solid sphere with diameter $d_0$ is given by the
Stokes--Einstein expression
\begin{equation}
D_0=\frac{k_{\rm B}T}{3\pi \eta d_0}.
\end{equation}
The self-diffusion coefficient $D(z)$ of a sphere with diameter $d$ at a
distance $z$ from the wall is smaller by a factor
\begin{equation}
\label{self diffusivity for single sphere in wall presence}
\frac{D(z)}{D_0}=\frac{d_0}{d}\,\mu_{\parallel}(z/d),
\end{equation}
where the normalized mobility coefficient $\mu_{\parallel}$ depends on
the dimensionless particle position $z/d$ and no other parameters.
Relation \eqref{self diffusivity for single sphere in wall presence}
refers to the lateral component of the self-diffusion coefficient
(parallel to the wall), which was measured in our experiments.
However, an analogous expression also holds for the normal component.
For monodisperse particles in the dilute-suspension limit, the
effective self-diffusion coefficient $\Dself$ averaged across the
suspension layer is obtained by integrating \eqref{self diffusivity
for single sphere in wall presence} with the Boltzmann distribution
\eqref{low-density probability distribution},
\begin{equation}
\label{monodisperse effective self-diffusivity}
\frac{\Dself}{D_0}=\int_{d/2}^\infty\textrm{d}z\, \particleDistribution(z)\mu_{\parallel}(z/d).
\end{equation}
For a polydisperse suspension, an additional average over the
particle-size distribution \eqref{Particle size distribution} is
needed,
\begin{equation}
\label{polydisperse effective self-diffusivity}
\frac{\Dself}{D_0}=
\int_0^\infty\textrm{d}d\,p(d)
\int_{d/2}^\infty\textrm{d}z\,\particleDistribution_1(z;d)\mu_{\parallel}(z/d).
\end{equation}
The mobility coefficient $\mu_{\parallel}(z/d)$ was evaluated with
high accuracy using the \textsc{Hydromultipole} algorithm for a
particle near a single wall \cite{Cichocki-Jones:1998}. The
integrals in Eqs.\ \eqref{monodisperse effective self-diffusivity}
and \eqref{polydisperse effective self-diffusivity} were performed
numerically using the Gauss method, with $\mu_{\parallel}(z/d)$
calculated by a series expansion.
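As a rough cross-check of \eqref{monodisperse effective self-diffusivity} (a sketch only: a far-field wall-mobility series replaces the accurate \textsc{Hydromultipole} values, and the electrostatic term of the distribution is omitted):

```python
import math

d, l = 1.5, 0.237   # diameter and sedimentation length [um]

def mu_par(z_over_d):
    """Far-field lateral mobility of a sphere near a wall (series in d/z)."""
    u = 1.0 / z_over_d
    return 1 - 9*u/32 + u**3/64 - 45*u**4/4096 - u**5/512

def rho(z):
    """Hard-sphere Boltzmann height distribution (no electrostatics)."""
    return math.exp(-(z - d / 2) / l) / l if z >= d / 2 else 0.0

# Ds/D0 = integral over z of rho(z) * mu_par(z/d), by the midpoint rule
n, zmax = 20000, d / 2 + 15 * l
dz = (zmax - d / 2) / n
Ds = sum(rho(d / 2 + (i + 0.5) * dz) * mu_par((d / 2 + (i + 0.5) * dz) / d)
         for i in range(n)) * dz
```

Even in the dilute limit, the wall roughly halves the Boltzmann-averaged lateral diffusivity relative to the bulk value.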
\subsubsection{Computations for larger densities}
The effective self-diffusion coefficient for suspensions at higher
concentrations was evaluated using a periodic
version~\cite{Blawzdziewicz-Wajnryb:2008} of the
Cartesian-representation algorithm for a system of particles in a
parallel-wall channel
\cite{Bhattacharya-Blawzdziewicz-Wajnryb:2005a,Bhattacharya-Blawzdziewicz-Wajnryb:2005}.
In our approach, periodic boundary conditions in the lateral directions
are incorporated by splitting the flow reflected by the particles into
a short-range near-field contribution and a long-range asymptotic
Hele--Shaw component. The near-field contribution is summed explicitly
over neighboring periodic cells, and the Hele--Shaw component is
evaluated using the Ewald summation method for a 2D harmonic
potential \cite{Cichocki-Felderhof:1989,Blawzdziewicz-Wajnryb:2008}.
The one-wall results were derived from the two-wall calculations using
an asymptotic procedure based on the observation that in the
particle-free part of the channel the velocity field tends to a
combination of a plug flow and a shear flow. All other flow components
decay exponentially with the distance from the particle layer. The
one-wall results are obtained by eliminating the shear flow and
retaining only the plug flow generated by hydrodynamic forces induced
on the particles
\cite{Sadlej-Wajnryb-Blawzdziewicz-Ekiel_Jezewska-Adamczyk:2009}. The
calculations were performed for the distance to the upper virtual wall
$H=10d_0$, which is sufficient to obtain highly accurate one-wall
results.
The self-diffusion coefficient is determined by averaging the trace of
the lateral translational--translational $N$-particle mobility,
evaluated using \textsc{Hydromultipole} codes based on the above
algorithm, with the multipole truncation order $L=2$
\cite{Abade:2010}. The averaging was performed over equilibrium
configurations of $N=400$ particles in a 2D-periodically replicated
simulation cell. Independent equilibrium configurations were
constructed using the MC technique described in Sec.\ \ref{Monte-Carlo
simulations}.
\section{Structure of the quasi-2D suspensions}
\label{sec:structure}
\subsection{Experimental results}
\label{Structure - experimental results}
A typical image of our quasi-2D colloidal suspension is
shown in Fig.~\ref{fig:setup}. For each area fraction $\phi$ and salt
concentration, the suspension can be characterized by the structure in
the $x$--$y$ plane (parallel to the cell floor) and the density profile in
the $z$ direction (perpendicular to the floor). In this section we
discuss results of our measurements of the microstructure of a
sedimented particle layer.
\subsubsection{Mean particle height at low area fractions}
\label{Mean particle height at low area fractions}
As mentioned in Sec.\ \ref{Height calibration}, our imaging techniques
do not yield absolute particle heights. To estimate the mean particle
distance $z$ from the bottom wall (the mean height) in a dilute
suspension layer, we observe particle dynamics in the horizontal
directions, and compare measurement results with theoretical
calculations of the effect of the wall on the lateral particle
diffusion. Using fluorescence imaging, we determine the projection of
particle trajectories onto the $x$--$y$ plane,
$\mathbf{r}_\parallel(t)$, and extract the effective self-diffusion
coefficient,
\begin{equation}
\label{self diffusivity from mean-square displacement}
\Dself=\langle \Delta \textbf{r}_\parallel^2
(\tau)\rangle/(4 \tau),
\end{equation}
where $\tau$ is the time interval. The position-dependent diffusivity
$D(z)$ in the $x$--$y$ plane of a single particle near a planar wall
is given by the following far-field expansion in $d/z$ \cite{Happel,Perkins1992},
\begin{eqnarray}
\frac{D(z)}{D_0} = 1 &-& \frac{9}{32}\frac{d}{z}
+\frac{1}{64}\left( \frac{d}{z}\right)^3 \label{Eq:ds_zero}
\\ \nonumber &-& \frac{45}{4096}\left( \frac{d}{z}\right)^4
-\frac{1}{512}\left( \frac{d}{z}\right)^5,
\end{eqnarray}
where $z=0$ is the wall position. The expansion \eqref{Eq:ds_zero} is accurate to within 5\% to 1\% as $z$ increases from $0.51d$ up to $d/2+2l$, the range where sedimented particles in a low-density suspension spend most of their time under equilibrium conditions.
Here $l\approx0.16d$ is the sedimentation length \eqref{ldef}.
From expression \eqref{Eq:ds_zero} and $\Dself$ extracted according to
\eqref{self diffusivity from mean-square displacement}, we can
calculate the suspension's mean distance from the wall (where $z$ in
\eqref{Eq:ds_zero} is replaced by a mean value $\langle z
\rangle$). This calculation holds in the limit $\phi \to
0$, where there are no particle--particle interactions. We measure
$\Dself$ from the particle trajectories, $\mathbf{r}_\parallel(t)$, at
extremely low area fraction, $\phi<0.003$ (in salt-free
water), and obtain a mean distance from the wall $\langle z \rangle=
1.1 \pm 0.1\,\mu$m, corresponding to a mean gap $\epsilon=z-d/2$ of
0.3--0.4\,$\mu$m between the particle surface and the wall. We also
extract $\langle z\rangle$ for different salt concentrations by
extrapolating $\Dself$ (measured at various area fractions) to
$\phi=0$ (see Sec.~\ref{Dynamics - experimental results}), obtaining
$\langle z\rangle =0.95 \pm 0.05\,\mu$m for $\KCl=0.01\Mol$ and
$\langle z\rangle =1.11 \pm 0.05\,\mu$m for $\KCl=0\Mol$. The latter
matches the average height extracted from the diffusion of tracers in
the extremely low density suspension.
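Inverting the expansion numerically is straightforward. In the sketch below, the value of $\Dself/D_0$ is illustrative (chosen so that the recovered height is close to the quoted $\langle z\rangle \approx 1.1\,\mu$m); since $D(z)$ is monotonic in $z$, simple bisection suffices:

```python
def D_over_D0(z, d=1.5):
    """Far-field lateral diffusivity ratio for a sphere at height z [um]."""
    u = d / z
    return 1 - 9*u/32 + u**3/64 - 45*u**4/4096 - u**5/512

def mean_height(D_ratio, d=1.5):
    """Solve D(<z>)/D0 = D_ratio by bisection (D is increasing in z)."""
    lo, hi = 0.51 * d, 20.0 * d
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if D_over_D0(mid, d) < D_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, an illustrative $\Dself/D_0 = 0.61$ yields $\langle z\rangle \approx 1.1\,\mu$m.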
For $\lambda=5$ nm (added salt), the mean height calculated from the
Boltzmann distribution \eqref{low-density probability distribution} is
dominated by the exponential decay due to gravity and is practically
independent of $B$ in the particle--wall potential
\eqref{Eq:boltzmann}. For $B=0$, using the particle mass as
determined from Eq.\ \eqref{m} with no fitting parameters, we get
$\langle z\rangle = 0.99$ $\mu$m, in agreement with the
diffusivity-based measurements of $\langle z\rangle$. This result
confirms that in the added-salt case we can neglect the electrostatic
repulsion from the wall. For the salt-free case, taking $\lambda=50$
nm, we obtain $\langle z\rangle = 1.11$ $\mu$m for $B$ in the range
5--15 $k_{\rm B}T$. These values are consistent with those obtained by
fitting the measured height distribution to the theoretical expression
\eqref{Eq:boltzmann} (see Sec.~\ref{Height calibration}).
Since Eq.\ \eqref{Eq:ds_zero} does not include lubrication correction
for small particle--wall gaps $\epsilon$, it overpredicts $D(z)$
for $z<0.51d$; however, the accuracy of the approximation is sufficient
for the purpose of the present estimates. In our calculations
discussed in Secs.\ \ref{sec:num_met} and \ref{Dynamics - numerical
simulations}, highly accurate \textsc{Hydromultipole} results were
used instead of the far-field approximation \eqref{Eq:ds_zero}.
\begin{figure}
\centering
\includegraphics[clip=true,scale=0.5]{figure4ja}
\caption{Radial distribution function $g(r)$ in the $x$--$y$ plane for
experiments in (a) salt-free and (b) salt-added ($\KCl=0.01\Mol$)
water. The distribution $g(r)$ was calculated separately in the
first and second layers (see Sec.~\ref{Occupation of particle layers}
for the layer definition) and combined with appropriate weights. }
\label{fig:gofr}
\end{figure}
\subsubsection{Radial distribution in the horizontal plane}
To verify that no crystalline or hexatic structures are formed at
higher values of the area fraction, we evaluate from the experiment
the radial distribution function $g(r)$ and the full 2D pair
distribution $g(r,\theta)$ in the $x$--$y$ plane, for both the
salt-free and salt-added suspensions. No dependence on $\theta$ was
found. The radial distribution $g(r)$ for several values of the area
fraction $\phi$ is shown in Fig.\ \ref{fig:gofr}\subfig{a} for the
salt-free system and in \ref{fig:gofr}\subfig{b} for the salt-added
system.
For monodisperse hard spheres the first peak of $g(r)$ should
correspond to the diameter of the sphere. Our measurements show that
the first peak is at $r = 1.68~\mu$m for suspensions without salt and
at $r = 1.60~\mu$m for suspensions with $\KCl=0.01\Mol$. The
difference between these two numbers implies that the effective shell
around the particles in the salt-free samples is roughly 40--50~nm,
which provides an estimation for the screening length in DDW without
the addition of salt. This estimate of $\lambda$ is consistent with
the other two mentioned above.
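A minimal estimator of $g(r)$ from particle coordinates in the $x$--$y$ plane is sketched below (for simplicity it assumes a periodic $L\times L$ field of view; the analysis of the experimental images treats the boundaries differently):

```python
import math

def g_of_r(xy, L, nbins=50):
    """Radial distribution function of 2D points in a periodic L x L box."""
    rmax = L / 2
    dr = rmax / nbins
    N = len(xy)
    hist = [0] * nbins
    for i in range(N):
        for j in range(i + 1, N):
            dx = (xy[i][0] - xy[j][0] + L / 2) % L - L / 2   # minimum image
            dy = (xy[i][1] - xy[j][1] + L / 2) % L - L / 2
            r = math.hypot(dx, dy)
            if r < rmax:
                hist[int(r / dr)] += 2    # count the pair for both particles
    dens = N / L**2                        # mean number density
    # normalize by the ideal-gas expectation in each annulus
    return [hist[k] / (N * dens * math.pi * (2 * k + 1) * dr**2)
            for k in range(nbins)]
```

For an uncorrelated (ideal-gas) configuration the estimator returns $g(r)\approx 1$ at all $r$; the first peak at the effective particle diameter appears only for interacting suspensions.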
\subsubsection{Vertical density profile}
The height distributions $\particleDistribution(z)$ of the silica
particles at different area fractions of the sedimented particle layer
were acquired using confocal imaging and conventional image analysis
\cite{Crocker1996}. These distributions for salt-added suspensions
with $\KCl=0.01\Mol$ are plotted in Fig.~\ref{fig:zhist}\subfig{a} for
several values of the area fraction $\phi$. Since we cannot precisely
measure the position of the wall, the distributions are shifted so
that their first peak (close to the wall) is located at $z=0$. These
distributions indicate the formation of a second layer of particles
for area fractions $\phi \gtrsim 0.26$. The observed center of the
second layer is located $\Delta z\approx 0.75\,\mu$m above the center
of the first layer. The layer separation is thus significantly
smaller than the expected separation $\Delta z\approx d=1.5\,\mu$m
(which is similar to the peak separation for the radial
distribution). See further discussion in Sec.\ \ref{sec:discuss} and
Appendix~\ref{appendix}.
\begin{figure}[t]
\centering
\includegraphics[clip=true,scale=0.5]{figure5sja}
\caption{(a) Height probability distribution of the silica colloids
(in $\KCl=0.01\Mol$) for increasing area fraction reveals the
formation of a second layer. Colors correspond to different area
fractions (as labeled). (b) For the most dense suspensions
[$\phi=0.54$ and $\phi=0.48$ (inset)], the height distribution
(black solid line) around the two peaks can be fitted to two
Gaussian functions (blue lines). The intersection of the two
Gaussians defines an effective boundary (red broken line) between
the first and second layers; occupation percentages are indicated. }
\label{fig:zhist}
\end{figure}
To highlight the onset of the formation of the second layer, we look
at the subtraction of the height probability distribution of the
lowest area fraction from the distribution of all area fractions,
$\Delta \rho \equiv \rho-\rho_{\,\phi=0.054}$
[Fig.~\ref{fig:onset}\subfig{a}]. Two phenomena are expected when a
second layer is formed: (i) negative values at $z=0~\mu$m,
corresponding to a reduction in the fraction of particles populating
the first layer, (ii) positive and increasing values at $z=0.75~\mu$m,
corresponding to the formation and increasing population of the second
layer. The values of $\Delta\rho$ at $z=0$ and $0.75~\mu$m are plotted
in Fig.~\ref{fig:onset}\subfig{b}. The two expected phenomena are
observed at $\phi \approx 0.3$, indicating the area
fraction above which a second layer becomes occupied. At area
fractions smaller than 0.3 we still obtain negative values of $\Delta
\rho$ at $z=0~\mu$m and positive values at $z=0.75~\mu$m; however,
these values are relatively small and may simply reflect the
broadening of the exponential distribution as $\phi$ increases.
\begin{figure}
\centering
\includegraphics[clip=true,scale=0.5]{figure5s_extra4}
\caption{(a) The difference between the height probability distribution at
increasing area fractions and the distribution at the lowest area
fraction $\phi =0.054$, $\Delta \rho \equiv \rho-\rho_{\,
\phi=0.054}$ (in $\KCl=0.01\Mol$). Colors are as in
Fig~\ref{fig:zhist}\subfig{a}. Gray (black) dashed line corresponds
to $z=0.75~\mu$m ($z=0~\mu$m). (b) Values of $\Delta \rho$ at $z=0~\mu$m (red
squares) and $z=0.75~\mu$m (blue circles) for all area
fractions. Both plots exhibit a change in trend at area fraction
$\phi \sim 0.3$.}
\label{fig:onset}
\end{figure}
\subsubsection{Particle-layer occupation fractions}
\label{Occupation of particle layers}
For the area fractions at which a clear second peak in the particle
distribution $\rho$ is seen in Fig.\ \ref{fig:zhist}\subfig{a}
(i.e., for $\phi=0.48$ and $0.54$), we fit the area around
each peak to a Gaussian function and define the point of intersection
between the two Gaussians as the effective boundary between the two
layers. Figure~\ref{fig:zhist}\subfig{b} shows the two distributions
with the Gaussian fits and our definition of that boundary, which
turns out to be at a distance of $0.39 \pm 0.04~\mu$m above the peak
of the first layer at both area fractions.
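The intersection of two Gaussians can be found in closed form by equating their logarithms, which gives a quadratic equation. A small sketch (a hypothetical helper, not the fitting code used for the figure):

```python
import math

def gaussian_intersection(a1, m1, s1, a2, m2, s2):
    """Intersection points of a1*exp(-(z-m1)^2/(2 s1^2)) and the a2 analog."""
    # Equating logarithms gives A z^2 + B z + C = 0
    A = 1 / (2 * s2**2) - 1 / (2 * s1**2)
    B = m1 / s1**2 - m2 / s2**2
    C = m2**2 / (2 * s2**2) - m1**2 / (2 * s1**2) + math.log(a1 / a2)
    if abs(A) < 1e-12:           # equal widths: the equation is linear
        return [-C / B]
    disc = B * B - 4 * A * C
    if disc < 0:
        return []                # the curves do not cross
    r = math.sqrt(disc)
    return sorted([(-B - r) / (2 * A), (-B + r) / (2 * A)])
```

With the fitted peak parameters, the root lying between the two peak centers defines the layer boundary.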
Using this boundary, we evaluated the occupation fractions
$\occupationFraction{i}=\phi_i/\phi$ of the bottom ($i=1$) and top
layer ($i=2$), where $\phi_i$ is the area fraction of particles in
layer~$i$. The results are shown in Fig.~\ref{fig:occ}\subfig{a} for
a suspension in $\KCl=0.01\Mol$ solution as a function of the total
area fraction $\phi$. As expected, the fraction of particles
populating the second layer grows as the total area fraction of
the suspension is increased.
An additional independent measurement of the layer occupation
fractions is done using epifluorescence microscopy, which enables us
to image the different layers separately [see
Fig.~\ref{fig:setup}\subfig{a}]. The occupation fraction of each
layer is determined by counting the number of particles observed
therein. The occupation fractions measured using the epifluorescence
imaging technique are plotted in Fig.\ \ref{fig:occ}\subfig{a} along
with the results obtained from the confocal microscopy. The two
methods yield similar results.
\begin{figure}[t]
\centering
\includegraphics[clip=true,scale=0.5]{figure7sja}
\caption{Occupation fractions $f_i$ and area fractions $\phi_i$ of
suspension layers as a function of the total area fraction $\phi$;
circles (squares) correspond to the first (second) layer. (a) Occupation
fractions for experiments in $\KCl=0.01\Mol$ extracted from the
confocal height distribution curves (yellow) and from the 2D images
(green), showing good agreement between the two methods. (b) The
same data replotted for the area fractions $\phi_1$ and $\phi_2$
of the first and second layers. }
\label{fig:occ}
\end{figure}
Alternatively, we can represent the layer-occupation results in terms
of the area fractions $\phi_1$ and $\phi_2$ of the first and second
layers [see Fig.~\ref{fig:occ}\subfig{b}]. Both $\phi_1$ and $\phi_2$
increase as $\phi$ is increased, and $\phi_1$ seems to
saturate at $\phi>0.45$.
\subsection{Numerical simulations}
\label{Structure - numerical simulations}
\begin{figure}
\includegraphics[width=\ffraction\textwidth]{equilibrium_distributions_B=0_00.eps}
\includegraphics[width=\ffraction\textwidth]{equilibrium_distributions_B=0_15.eps}
\caption{Particle--wall distribution function for \subfig{a}
monodisperse suspension; \subfig{b} polydisperse suspension with
standard deviation of particle diameter
$\sigma/d_0=0.15$. Simulation results for area fraction $\phi=0$
(solid line), 0.3 (dashed), 0.6 (dotted), 0.9 (dot--dashed), 1.2
(long-dashed). The insets show the deviation
$\Delta\rho=\rho-\rho_{\phi=0}$ from the low-density distribution.
\label{particle-wall distributions simulations}}
\end{figure}
\begin{figure}
\includegraphics[width=\ffraction\textwidth]{equilibrium_distributions_phi=0_054.eps}
\includegraphics[width=\ffraction\textwidth]{equilibrium_distributions_phi=0_540.eps}
\caption{Particle--wall distribution function for area fractions
\subfig{a} $\phi=0.054$ and \subfig{b} $0.54$. Experimental results
(solid circles); simulation results for standard deviation of
particle diameter $\sigma/d_0=0$ (solid line), $0.1$ (dashed),
$0.15$ (dotted), $0.20$ (dashed--dotted), and $0.25$
(long-dashed). \label{particle-wall distributions: comparison with
experiments}}
\end{figure}
\begin{figure}
\includegraphics[width=\ffraction\textwidth]{layer_occupancy.eps}
\includegraphics[width=\ffraction\textwidth]{layer_diam.eps}
\caption{\subfig{a} Occupation fraction $\occupationFraction{i}$ and
\subfig{b} normalized average particle diameter $d_i$ in the first
particle layer (black), second layer (blue), and third layer
(green), vs the total area fraction $\phi$. The results for a
monodisperse system (solid lines) and polydisperse systems with
standard deviation of particle diameter $\sigma/d_0=0.1$ (dashed),
$0.15$ (dotted), $0.20$ (dash--dotted), and $0.25$ (long-dashed).
The symbols represent experimental results from confocal imaging
(circles) and 2D images (squares) for a suspension with salt
concentration $\KCl=0.01\Mol$. Note that the experimental second
layer corresponds to the sum of the second and third layers in the
MC simulations. The layer boundaries in the numerical calculations
are set at $z_1=0.9d_0$ and $z_2=1.8d_0$, and in the experiments they are
obtained from Gaussian fitting (see Fig.\ \ref{fig:zhist}).
\label{occupation numbers}}
\end{figure}
\begin{figure}
\includegraphics[width=\ffraction\textwidth]{rhoLowDensLog.eps}
\caption{Low-density limit of the near-wall particle distribution for
a monodisperse system (solid line) and polydisperse systems with
standard deviation of particle diameter $\sigma/d_0=0.1$ (dashed),
$0.15$ (dotted), $0.20$ (dash--dotted), and $0.25$ (long-dashed).
\label{low-density distribution}}
\end{figure}
Here we present results of MC simulations of the equilibrium
microstructure of a HS suspension in the near wall region. The HS
potential corresponds to the system with $\KCl=0.01\Mol$, for which
the electrostatic repulsion is negligible. Since the suspension used
in the experiments is polydisperse, we consider both monodisperse and
polydisperse systems.
\subsubsection{Near-wall particle distribution}
Figure \ref{particle-wall distributions simulations} shows MC results
for the suspension density profile $\particleDistribution(z)$ for a
monodisperse suspension and polydisperse suspensions at several area
fractions. Similarly to the experimental results, the simulations
show that there is a single layer of sedimented particles at low area
fractions $\phi$, and a two-layer microstructure at higher area
fractions. (Development of a third layer for $\phi\gtrsim0.9$ is
also noticeable in the region $z/d_0 \gtrsim 2$.) Suspension
polydispersity results in broadening of the peaks of the particle
distribution.
A direct comparison between the experimental and simulation results is
presented in Fig.\ \ref{particle-wall distributions: comparison with
experiments} for two values of the area fraction $\phi$. At low
area fractions [Fig.\ \ref{particle-wall distributions: comparison
with experiments}\subfig{a}] the agreement between the experiments
and simulations is good. (The standard deviation of the particle-size
distribution for which the simulations match the experimental data,
$\sigma/d_0\approx 0.25$, is larger than the estimated standard
deviation $0.1<\sigma/d_0<0.15$ based on the manufacturer's
specifications; the additional spread of the experimentally observed
peak can be attributed to random errors of the particle height
evaluation from the confocal-microscopy images.)
A comparison of the numerical and experimental results at a higher
area fraction, as shown in Fig.\ \ref{particle-wall distributions:
comparison with experiments}\subfig{b} [also see
Figs.\ \ref{fig:zhist} and \ref{particle-wall distributions
simulations}], reveals that (\textit{i}) the experimentally
observed second maximum of the density distribution develops at lower
area fractions than the corresponding maximum in the numerical
simulations; (\textit{ii}) the experimental second peak is narrower,
and its position is shifted towards the wall. In contrast, the plots
of the excess distribution $\Delta\rho$ with respect to the
low-density limit, shown in Fig.\ \ref{fig:onset}\subfig{a} and the
insets of Fig.\ \ref{particle-wall distributions simulations},
indicate that the onset of the formation of the second layer occurs at
approximately the same area fraction according to the simulations and
experiments. Moreover, the measured and calculated occupation
fractions of the layers are similar for all area fractions, as
depicted in Fig.\ \ref{occupation numbers}\subfig{a}. A possible
source of the observed discrepancies between the experimental and
numerical results for the particle distribution $\rho(z)$ is described
in Appendix \ref{appendix}. It also provides a plausible explanation
of the fact that the agreement between the experiments and MC
simulations for the layer occupation fractions $f_i$ is quite good in
spite of the discrepancies for $\particleDistribution(z)$.
\subsubsection{Polydispersity effects}
The results in Fig.\ \ref{occupation numbers}\subfig{a} show that the
occupation fraction of the first two layers is relatively insensitive
to the suspension polydispersity; in contrast, the occupation fraction
of the third layer strongly increases with the standard deviation of
particle diameter. This increase stems from the presence of smaller
(lighter) particles in polydisperse systems: smaller particles tend to
migrate into the top layer, as evident from Fig.\ \ref{occupation
numbers}\subfig{b}. For dilute suspensions, the particle-size
segregation results in variation of the slope of $\log
\particleDistribution(z)$ with the distance from the wall, as
illustrated in Fig.\ \ref{low-density distribution}. We estimate that
this variation causes an approximately 20\,\% uncertainty of the
calibration of the confocal height measurements described in
Sec.\ \ref{Height calibration}.
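The dilute-limit size segregation can be sketched in a few lines of Python. The sketch is purely illustrative: a Gaussian diameter distribution is assumed, the sedimentation length is taken to scale as $l\propto d^{-3}$ (the buoyant mass grows with the particle volume), and all parameter values are arbitrary.

```python
import numpy as np

# Dilute-limit profile of a polydisperse suspension:
#   rho(z) = integral P(d) exp(-z / l(d)) dd,  with l(d) = l0 (d0/d)^3,
# i.e. larger (heavier) particles have a shorter sedimentation length.
l0, d0, sigma = 1.0, 1.0, 0.15          # arbitrary units
d = np.linspace(d0 - 4 * sigma, d0 + 4 * sigma, 400)
P = np.exp(-(d - d0) ** 2 / (2 * sigma ** 2))
z = np.linspace(0.0, 5.0, 500)
boltz = np.exp(-np.outer(z, (d / d0) ** 3 / l0))
rho = (P[None, :] * boltz).sum(axis=1) * (d[1] - d[0])

# Local inverse decay length -d(log rho)/dz: it decreases with height
# because the upper part of the profile is dominated by small particles.
slope = -np.gradient(np.log(rho), z)
```

The decreasing slope of $\log\rho(z)$ with height is the behavior illustrated in Fig.\ \ref{low-density distribution}.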
\subsection{A quasi-2D model of the equilibrium layered microstructure}
\label{A quasi-2D model of equilibrium layered microstructure}
Here we present a semi-quantitative theoretical model for evaluating
the occupation fractions $\occupationFraction{i}$ of the particle layers
in a sedimented colloidal suspension. Our theory is based on the
assumption that the suspension microstructure can be approximated as a
collection of weakly coupled quasi-2D layers in
thermodynamic equilibrium with respect to particle exchange.
The equilibrium condition for layers $i$ and $i+1$ is
\begin{equation}
\label{equilibrium between layers}
\mu_i+mgz_i=\mu_{i+1}+mgz_{i+1},
\end{equation}
where $\mu_i$ is the chemical potential of layer $i$, and $z_i$ is its
position. In our model, $\mu_i$ is approximated as the chemical
potential of a 2D hard-disk fluid of area fraction $\phi_i$. All disk
diameters are equal to the sphere diameter $d$, which corresponds to a
layer of spheres with the same vertical position $z$.
In the low area-fraction limit, the chemical potential of a hard-disk
fluid is
\begin{equation}
\label{chemical potential at low densities}
\mu_i=k_{\rm B}T\ln\phi_i+C(T),
\end{equation}
where $C(T)$ depends only on the temperature $T$. According to the
equilibrium condition \eqref{equilibrium between layers} and equation
of state \eqref{chemical potential at low densities}, we thus have
\begin{equation}
\label{area fraction in the next layer}
\phi_{i+1}=r\phi_i,\qquad i=1,2,\ldots
\end{equation}
with the ratio $r$ given by the Boltzmann factor
\begin{equation}
\label{occupancy ratio}
r=\textrm{e}^{-\Deltaz/l},
\end{equation}
where $l$ is defined by Eq.\ \eqref{ldef} and $\Deltaz=z_{i+1}-z_i$. We
assume that the layer separation $\Deltaz$ is independent of $i$.
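Since $r$ is constant in this limit, the layer populations form a geometric sequence, and the occupation fractions follow in closed form:

```latex
\phi_i=\phi_1\,r^{i-1},\qquad
\phi=\sum_{i=1}^{\infty}\phi_i=\frac{\phi_1}{1-r},\qquad
\occupationFraction{i}=\frac{\phi_i}{\phi}=(1-r)\,r^{i-1}.
```

For instance, a layer spacing equal to the sedimentation length ($\Deltaz=l$) gives $r=\mathrm{e}^{-1}\approx0.37$, so about $63\%$ of the particles reside in the bottom layer in the dilute limit.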
For finite area fractions, relation \eqref{area fraction in the next
layer} is replaced with
\begin{equation}
\label{iteration for phi_i}
\phi_{i+1}=r(\phi_i)\phi_i,\qquad i=1,2\ldots,
\end{equation}
where the layer occupation ratio $r$ depends on the area fraction in
the adjacent layers. The factor $r(\phi)$ is determined from the
equilibrium condition \eqref{equilibrium between layers} with the help
of the Gibbs--Duhem relation
\begin{equation}
\label{Gibbs-Duhem}
\textrm{d}\mu=\frac{\pi d^2}{4}\phi^{-1}\textrm{d} p,
\end{equation}
where $p$ is the lateral 2D pressure within the layer. Combining
\eqref{equilibrium between layers}, \eqref{iteration for phi_i}, and
\eqref{Gibbs-Duhem} yields
\begin{equation}
\label{differential equation for r}
\frac{\textrm{d}r}{\textrm{d}\phi}=
r\left[\frac{p'(\phi)}{p'(r\phi)}-1\right],
\end{equation}
where
\begin{equation}
\label{p'}
p'=\left(\frac{\partial p}{\partial\phi}\right)_T.
\end{equation}
The differential equation \eqref{differential equation for r} is
solved for $r=r(\phi)$ with the boundary condition \eqref{occupancy
ratio} at $\phi=0$. Occupation fractions
$\occupationFraction{i}=\phi_i/\phi$ are then determined by iteration,
applying Eq.\ \eqref{iteration for phi_i} and the relation
\begin{equation}
\label{total area fraction on terms of partial fractions}
\phi=\sum_{i=1}^\infty\phi_i.
\end{equation}
\begin{figure}
\includegraphics[width=\ffraction\textwidth]{layer_occupancy_theory.eps}
\caption{Occupation fraction $f_i$ for the first (black), second
(blue) and third (green) layer vs the total area fraction $\phi$ for
a monodisperse HS suspension. Theoretical results (solid
lines); MC simulations (symbols).
\label{quasi 2D model}}
\end{figure}
We have solved Eq.\ \eqref{differential equation for r} and determined
the occupation fractions $\occupationFraction{i}$ using the
scaled-particle-theory equation of state for hard disks
\cite{Helfand-Frisch-Lebowitz:1961},
\begin{equation}
\label{SPT pressure}
\frac{\pi d^2 p}{4k_{\rm B}T}=\frac{\phi}{(1-\phi)^2}.
\end{equation}
The results of our calculations are presented in Fig.\ \ref{quasi 2D
model} for a HS system with the same value of the sedimentation
length \eqref{value of dimensionless sedimentation constant} as in our
MC simulations. Based on the separation between the first and second
peak of the suspension density profile shown in
Fig.\ \ref{particle-wall distributions simulations}\subfig{a}, the
calculations were performed for $\Deltaz/d=1$.
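The numerical procedure lends itself to a compact implementation. The Python sketch below is illustrative rather than the production code: a simple Euler integration and bisection are used, $\Deltaz/l=1$ is an arbitrary stand-in for the actual parameter values, and the cascade is truncated at six layers.

```python
import numpy as np

def dp(phi):
    # d p / d phi for the scaled-particle hard-disk pressure,
    # pi d^2 p / (4 kT) = phi / (1 - phi)^2  =>  p'(phi) ~ (1+phi)/(1-phi)^3.
    return (1.0 + phi) / (1.0 - phi) ** 3

# Integrate dr/dphi = r [p'(phi)/p'(r*phi) - 1] with r(0) = exp(-dz/l).
n = 20000
phis = np.linspace(0.0, 0.95, n)
h = phis[1] - phis[0]
r = np.empty(n)
r[0] = np.exp(-1.0)                     # dz/l = 1, illustrative choice
for k in range(n - 1):
    r[k + 1] = r[k] + h * r[k] * (dp(phis[k]) / dp(r[k] * phis[k]) - 1.0)

def rfun(x):
    return np.interp(x, phis, r)

def occupation_fractions(phi_total, nlayers=6):
    # Find phi_1 by bisection so that sum_i phi_i = phi_total,
    # with the layer cascade phi_{i+1} = r(phi_i) * phi_i.
    def total(phi1):
        s, p = 0.0, phi1
        for _ in range(nlayers):
            s, p = s + p, rfun(p) * p
        return s
    lo, hi = 0.0, phi_total
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total(mid) < phi_total else (lo, mid)
    p, f = 0.5 * (lo + hi), []
    for _ in range(nlayers):
        f.append(p / phi_total)
        p = rfun(p) * p
    return f

f = occupation_fractions(0.5)
```

Other values of $\Deltaz/l$ require only changing the boundary condition $r(0)$.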
The theoretical results in Fig.\ \ref{quasi 2D model} are compared with
the MC simulations of a monodisperse HS suspension with the boundaries
between the layers set to $z_1=d$ and $z_2=2d$, consistent with the
peak positions. The agreement between our simple theory and
simulations is quite good. A similar agreement was obtained for other
values of the dimensionless parameter $l/d$ (results not shown).
The layer boundaries used in Sec.\ \ref{Structure - numerical
simulations} and \ref{Dynamics - numerical simulations} to compare
the MC results with experiments differ from the boundaries used in the
above model by approximately 10\,\%. Due to the observed deviation
between the measured and simulated particle distributions (see
Fig.\ \ref{particle-wall distributions: comparison with experiments}),
it is not possible to define the layer boundaries in a unique,
equivalent way for the experimental and simulated systems. Therefore,
the layer boundaries $z_1=0.9d$ and $z_2=1.8d$ used in
Sec.\ \ref{Structure - numerical simulations} and \ref{Dynamics -
numerical simulations} were chosen based on the comparison between
the experimental and numerical results for the occupation fractions
and self-diffusivities in particle layers.
\section{Particle dynamics}
\label{sec:dynamics}
\begin{figure}
\centering
\includegraphics[clip=true,scale=0.5]{figure8ja}
\caption{Short-time self-diffusion coefficient $\Dself$, normalized by
the Stokes-Einstein diffusion coefficient $D_0$, as a
function of the total area fraction $\phi$; circles (squares) correspond
to the first (second) layer, and triangles to the effective $\Dself$
calculated by the weighted average of the self-diffusion coefficients for the two individual layers. (a) Suspension with no added salt. (b)
Suspension with salt concentration $\KCl=0.01\Mol$. }
\label{fig:def_gaus}
\end{figure}
\subsection{Experimental results}
\label{Dynamics - experimental results}
The short-time self-diffusion coefficient in the $x$--$y$ plane,
$\Dself$, is determined for different total area fractions of the
sedimented particles by extracting the mean square displacement
\eqref{self diffusivity from mean-square displacement} from 2D
epifluorescent images of the first and second particle layer. The
mean-square displacement is measured over a time interval $\tau$ that
is small compared to the structural relaxation time of the suspension,
to ensure that the measurements yield the short-time self-diffusion
coefficient.
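The extraction step can be illustrated with synthetic data. In the hypothetical sketch below, free Brownian trajectories stand in for the tracked particle positions, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
D0, dt = 0.2, 1e-3                      # arbitrary units
npart, nsteps = 500, 1000

# Free 2D Brownian trajectories: each step has variance 2*D0*dt per axis.
steps = rng.normal(0.0, np.sqrt(2.0 * D0 * dt), size=(npart, nsteps, 2))
traj = np.cumsum(steps, axis=1)

def msd(traj, lag):
    # Mean-square displacement over a lag of `lag` frames.
    d = traj[:, lag:, :] - traj[:, :-lag, :]
    return np.mean(np.sum(d ** 2, axis=-1))

# Short-time self-diffusion coefficient from <dr^2(tau)> = 4 D tau,
# with tau small compared to the structural relaxation time.
lag = 10
D_self = msd(traj, lag) / (4.0 * lag * dt)
```

For interacting particles the same estimator applies, provided the lag $\tau$ stays in the short-time regime.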
The results are shown in Fig.~\ref{fig:def_gaus} for suspensions with
salt concentration $\KCl=0.01\Mol$ and salt-free suspensions with
$\KCl=0\Mol$. The self-diffusion coefficient is expected to decrease
as the particle concentration increases; indeed, we observe this
decrease for both salt concentrations and in both layers, for
$\phi<0.4$. In the case of $\KCl=0.01\Mol$, corresponding to $\lambda
= 5$ nm, the particles can get much closer to the cell floor, which in
turn results in lower values of the self-diffusion coefficient
compared to suspensions with $\KCl=0\Mol$.
Using a linear fit to the values of $\Dself/D_0$ for the low area
fractions, where there is no observable second layer, we can
extrapolate to $\phi=0$ and extract the self-diffusivity of a single
particle. The extrapolated results agree well with the measurements
at very low concentrations $\phi<0.003$, as discussed in Sec.\ \ref{Mean
particle height at low area fractions}.
From the known occupation fractions $\occupationFraction{1}$ and
$\occupationFraction{2}$ for each $\phi$ we can weigh the contribution
of each layer to the total self-diffusivity, and construct an
effective $\Dself$ of the whole suspension
(Fig.~\ref{fig:def_gaus}). As expected, for $\phi<0.4$ the effective
self-diffusion coefficient $\Dself$ decreases as $\phi$ is increased
for both salt concentrations. For larger $\phi$ we observe a flattening
of $\Dself$, which clearly indicates that the second layer becomes
dominant at those area fractions. This observation is also supported
by the saturation of $\phi_1$ at $\phi>0.45$
[Fig.~\ref{fig:occ}\subfig{b}].
\subsection{Numerical simulations}
\label{Dynamics - numerical simulations}
\begin{figure}[b]
\includegraphics[width=\ffraction\textwidth]{mobilities_with_inset.eps}
\caption{Normalized short-time self-diffusion coefficient
$\Dself/D_0$, as a function of the area fraction $\phi$ for a
suspension with salt concentration $\KCl=0.01\Mol$. The main panel
shows $\Dself$ averaged over the whole system, and the inset shows
$\Dself$ for the first (bottom, red) and second (top, blue) particle
layer. Experimental results (solid circles); simulation results
(open symbols) for a monodisperse system (circles) and polydisperse
systems with the standard deviation of the particle diameter
$\sigma=0.1d_0$ (triangles) and $0.15d_0$ (squares). Note that at
low area fractions the triangles overlap with the solid circles.
The lines are a guide for the eye. }\label{self-diffusion with salt}
\end{figure}
\begin{figure}
\includegraphics[width=\ffraction\textwidth]{no_salt_mobilities_with_inset.eps}
\caption{Normalized short-time self-diffusion coefficient
$\Dself/D_0$, as a function of the area fraction $\phi$ for a
suspension with no salt. Symbols are the same as in
Fig.\ \ref{self-diffusion with salt}. Results are shown only for monodisperse suspensions. \label{self-diffusion with no salt}}
\end{figure}
\begin{figure}
\includegraphics[width=\ffraction\textwidth]{zero_dens_mob_poly_hs.eps}
\caption{Normalized short-time self-diffusion coefficient $\Dself/D_0$
in the low-area-fraction limit $\phi=0$ (for the system with salt) as a function of the
suspension polydispersity. \label{self-diffusion low area
fraction}}
\end{figure}
The results of our numerical simulations for the short-time lateral
self-diffusion coefficient $\Dself$ in a HS system are
presented in Fig.\ \ref{self-diffusion with salt} for a monodisperse
suspension and for polydisperse suspensions with $\sigma/d_0=0.1$ and
$0.15$. Figure \ref{self-diffusion with no salt} shows the
corresponding results for a system of monodisperse hard spheres with
particle--wall and particle--particle electrostatic repulsion
\eqref{V} and \eqref{V'}.
The results depicted in Fig.\ \ref{self-diffusion with salt}
indicate that for moderately polydisperse suspensions (in the range
corresponding to the polydispersity of silica particles used in the
experiments), the self-diffusion coefficient depends only weakly
on $\sigma/d_0$. For larger standard deviations of the
particle diameter, the normalized self-diffusion coefficient
$\Dself/D_0$ increases significantly with the degree of
polydispersity, because the mobility is dominated by small particles.
This increase is illustrated in Fig.\ \ref{self-diffusion low area
fraction} for a suspension in the low-area-fraction limit $\phi=0$.
The results of our hydrodynamic calculations for a HS suspension and
for a suspension with screened electrostatic repulsion are compared
with experimental results for suspensions with $\KCl=0.01\Mol$
(Fig.\ \ref{self-diffusion with salt}) and $\KCl=0\Mol$
(Fig.\ \ref{self-diffusion with no salt}). For the system with salt,
the measured values are slightly closer to their numerical analogs
than in the absence of salt. Summarizing, the experimental and
numerical results agree well both for the overall self-diffusivity and
for the self-diffusivity in individual particle layers.
\section{Discussion}
\label{sec:discuss}
In this paper we have studied in detail the structure and dynamics of
quasi-2D colloidal suspensions near a wall, comparing experiment and
theory. Our central result is a rather sharp formation of a distinct
second layer at an area fraction of $\phi\sim0.3$. This value is much
lower than the area fraction required for close-packing or other 2D
structural changes such as the formation of hexatic or crystalline
order. One important consequence of this result concerns the apparent
self-diffusion of the particles in the suspension and its dependence
on particle density. Due to the higher mobility of the particles in
the elevated layer, the effective diffusivity is higher and levels off
as particle density increases.
The experimentally observed behavior could be interpreted incorrectly if one is unaware of the layering (or stratifying) effect.
We find good agreement between experimental and simulation results for
the occupation fractions of the first and second layers and for the
lateral self-diffusivity (both for the entire suspension and in the
individual layers). However, we also find an unexpected discrepancy
in the position and the height of the second peak in the near-wall
particle distribution. While the source of this discrepancy is
unknown, one possibility, related to optical aberrations, is suggested
in Appendix \ref{appendix}. On the other hand, the difference between
theory and experiment might also be a result of an actual physical
effect, such as more complicated electrostatic interactions setting in
at higher layer densities.
Another new insight put forth in this study is the significant effect
that polydispersity has on the occupation and composition of layers
close to the bottom wall, even in the case of a relatively small
dispersion of particle sizes. The effect of polydispersity is evident
already at low densities, since the smaller and larger particles
segregate into the upper and lower layers, respectively. We expect
the phenomena described here to be quite general and to be manifested
in any such system where the sedimentation length $l$ is of the order
of the particle diameter. This conclusion is supported by the
appearance of the phenomena both in the experiments and in the Monte
Carlo simulations.
An important outcome of this paper is the construction of a very
simple theoretical quasi-2D model of the layered microstructure in
thermodynamic equilibrium. Such systems have been analyzed earlier
using density-functional theory \cite{Chen2006}, but our theoretical
model is much simpler and easier to apply. We have demonstrated that
the model approximates well the experimental and numerical results for
the system studied in this work.
We conclude with three open issues. Layering phenomena near a wall are
well documented in 3D suspensions as well
\cite{VanWinkle1988,GonzalezMozuelos1991,Zurita_Gotor-Blawzdziewicz-Wajnryb:2012}. An
interesting question is whether this perturbation to the 3D pair
correlation function could be fundamentally related to the sequential
layering reported here. The structural features near the wall should
also affect two- and many-particle dynamics in the quasi-2D
suspensions, which can be characterized by two-point
microrheology. Finally, taking a more detailed account of
interparticle forces such as strong electrostatic interactions may
hopefully provide deeper understanding of the effects observed in this
work.
\acknowledgments H.D. wishes to thank the Polish Academy of Sciences
for its hospitality. This research has been supported by the Israel
Science Foundation (Grants No.\ 8/10 and No.\ 164/14) and by the Marie
Curie Reintegration Grant (PIRG04-GA-2008-239378). A.S.--S acknowledges
funding from the Tel-Aviv University Center for Nanoscience and
Nanotechnology. M.L.E.--J. and E.W. were supported in part by
Narodowe Centrum Nauki (National Science Centre) under grant
No. 2012/05/B/ST8/03010. J.B. would like to acknowledge the financial
support from National Science Foundation (NSF) Grant No. CBET 1059745.
\section{Introduction and Statement of Results}
\subsection{Inhomogeneous approximation in the plane}
Throughout $\psi:\N\to\Rp$ is a monotonic function such that
$\psi(t)\to0$ as $t\to\infty$ and will be referred to as an
\emph{approximating function}. Given $\psi$ and a point
$\bm\theta:=(\theta_1,\theta_2) \in \R^2$, let ${\cal S}(\psi,
\bm\theta)$ denote the set of points $\vv x :=(x_1,x_2) \in\R^2$ for
which there exists infinitely many positive integers $q $ such that
\begin{equation}\label{e:001}
\max_{1\leq i \leq2} \|qx_i-\theta_i\|<\psi(q) \ .
\end{equation}
Here and throughout $\|\cdot\|$ denotes the distance to the
nearest integer. In the case that the inhomogeneous factor
$\bm\theta $ is the origin, the corresponding set ${\cal S}(\psi)$ is
the usual homogeneous set of simultaneously $\psi$-approximable
points in the plane. In the case $\psi:t\to t^{-v}$ with $v>0$,
let us write ${\cal S}(v, \bm\theta)$ for ${\cal S}(\psi, \bm\theta)$. The
following statement provides a beautiful and simple criterion for
the `size' of ${\cal S}(\psi, \bm\theta)$ expressed in terms of
$s$--dimensional Hausdorff measure ${\mathcal H}^s$.
\begin{thkh}
Let $s \in (0,2]$, $\bm\theta \in \R^2$ and $\psi$ be an
approximating function. Then
$$ {\mathcal H}^s\left({\cal S}(\psi, \bm\theta)\right)=\left\{\begin{array}{ll} 0
& {\rm when} \;\;\; \sum \; t^{2-s} \, \psi(t)^s \;\;
<\infty\\ &
\\ {\mathcal H}^s(\R^2) & {\rm when} \;\;\; \sum \;
t^{2-s} \, \psi(t)^s \;\; =\infty
\end{array}\right..$$
\end{thkh}
\noindent This result generalizes and unifies the classical
theorems of Khintchine (1924) and Jarn\'{\i}k (1931) and will be
referred to as the Khintchine-Jarn\'{\i}k theorem. When $s=2$, the
measure ${\cal H}^2$ is equivalent to two-dimensional Lebesgue
measure in the plane and, loosely speaking, the theorem
corresponds to Khintchine's Theorem.
Actually, the stronger statement that ${\cal H}^2(\R^2 \setminus
{\cal S}(\psi, \bm\theta))=0$ if $\sum \psi(t)^2 = \infty$ is true and
the homogeneous case of this statement is due to Khintchine. When
$s < 2$, the homogeneous case of the theorem corresponds to
Jarn\'{\i}k's Theorem and can be regarded as a Hausdorff measure
version of Khintchine's Theorem. For further details see
\cite[Section 12.1]{Beresnevich-Dickinson-Velani-06:MR2184760} and
references within.
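\noindent For example, with $\psi:t\mapsto t^{-v}$ and $v>1/2$ the criterion reduces to a comparison of exponents:

```latex
\sum_t \; t^{2-s} \, \psi(t)^s \;=\; \sum_t \; t^{2-s(1+v)}
\;\left\{\begin{array}{ll}
<\infty & {\rm when} \;\; s>3/(1+v)\\[.5ex]
=\infty & {\rm when} \;\; s\le 3/(1+v)
\end{array}\right.,
```

so that ${\mathcal H}^s\left({\cal S}(v,\bm\theta)\right)$ vanishes in the first case and is full in the second, whence $\dim {\cal S}(v, \bm\theta)=3/(1+v)$.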
\subsection{Inhomogeneous approximation restricted to curves} Let ${\cal C}$ be a planar curve.
In short, the goal is to obtain an analogue of
the above Khintchine-Jarn\'{\i}k theorem for ${\cal C} \cap {\cal S}(\psi, \bm\theta) $. The fact
that the points $\vv x :=(x_1,x_2) \in\R^2$ of interest are
restricted to ${\cal C}$ and therefore are of dependent
variables, introduces major difficulties in attempting to
describe
the measure theoretic structure of ${\cal C} \cap {\cal S}(\psi, \bm\theta)$.
In 1998, Kleinbock $\&$ Margulis
\cite{Kleinbock-Margulis-98:MR1652916} established the fundamental
Baker-Sprindzuk conjecture concerning homogeneous Diophantine
approximation on manifolds. As a consequence, for non-degenerate
planar curves\footnote{A planar curve ${\cal C}$ is non-degenerate
if the set of points on ${\cal C}$ at which the curvature
vanishes is a set of one--dimensional Lebesgue measure zero.
Moreover, it is not difficult to show that the set of points on a
planar curve at which the curvature vanishes but the curve is
non-degenerate is at most countable. In view of this, the
curvature completely describes the non-degeneracy of planar
curves.} the one--dimensional Lebesgue measure ${\cal H}^1$ of the
set ${\cal C} \cap {\cal S}(v)$ is zero whenever $v > 1/2 $ -- see also \cite{schmidt}. Subsequently,
staying within the homogeneous setup, the significantly stronger
Khintchine-Jarn\'{\i}k type theorem for ${\cal C} \cap {\cal S}(\psi) $ has
been established -- see
\cite{Beresnevich-Dickinson-Velani-07:MR2373145} for the
convergence part and \cite{Vaughan-Velani-2007} for the
divergence part.
Until the recent proof of the inhomogeneous Baker-Sprindzuk
conjecture
\cite{Beresnevich-Velani-Moscow,Beresnevich-Velani-08-Inhom}, the
theory of inhomogeneous Diophantine approximation on planar curves
(let alone manifolds) had remained essentially non-existent and
ad-hoc. As a consequence of the measure results in
\cite{Beresnevich-Velani-Moscow,Beresnevich-Velani-08-Inhom} or alternatively the even more recent dimension results in \cite{dmitry}, we
now know that for any non-degenerate planar curve ${\cal C}$ and
$\bm\theta \in \R^2$,
$$ {\cal H}^1 ({\cal C} \cap {\cal S}(v, \bm\theta)) \, = \, 0 \qquad {\rm when} \quad v > 1/2 \ . $$
Clearly, this statement is far from the desirable
Khintchine-Jarn\'{\i}k type theorem for ${\cal C} \cap {\cal S}(\psi,
\bm\theta) $. As mentioned above, such a statement exists within
the homogeneous setup. This paper constitutes part of a programme
to develop a coherent inhomogeneous theory for curves, and indeed
manifolds in line with the homogeneous theory.
Without loss of generality, we will assume that ${\cal C}={\cal C}_f : =
\{(x,f(x)):x\in I \} $ is given as the graph of a function
$f:I\to\R$, where $I$ is some interval of $\R$. As usual,
$C^{(n)}(I)$ will denote the set of $n$-times continuously
differentiable functions defined on some interval $I$ of $\R$.
In this paper we establish the inhomogeneous analogues of the main
theorems in \cite{Beresnevich-Dickinson-Velani-07:MR2373145} and
\cite{Vaughan-Velani-2007}; that is, we obtain the following
complete Khintchine-Jarn\'{\i}k type theorem for planar curves.
\begin{thm}\label{thm1}
Let $s \in (1/2,1]$, $\bm\theta \in \R^2 $ and $\psi$ be an
approximating function. Let $f\in C^{(3)}(I)$ and assume that $
{\mathcal H}^s(\{x\in I: f''(x)=0\}) = 0$. Then
$$
{\mathcal H}^s\left({{\cal C}}_f\cap {\cal S}(\psi, \bm\theta) \right)=\left\{
\begin{array}{ll} 0 & {\rm when} \;\;\; \sum
\; t^{1-s} \, \psi(t)^{s+1} \;\;
<\infty\\[1ex] &
\\ {\cal H}^s({\cal C}_f) & {\rm when} \;\;\; \sum \;
t^{1-s} \, \psi(t)^{s+1} \;\; =\infty
\end{array}
\right..$$
\end{thm}
\medskip
\noindent Note that a planar curve is one dimensional and so
${\mathcal H}^s\left({{\cal C}}_f\cap {\cal S}(\psi, \bm\theta) \right) \leq {\mathcal H}^s({{\cal C}}_f)
= 0 $ for any $s> 1$ irrespective of the approximating function
$\psi$. Thus the hypothesis that $ s \leq 1 $ is essential and
obvious. In the case $s=1$, the theorem is a statement concerning
the one-dimensional Lebesgue measure of ${{\cal C}}_f\cap {\cal S}(\psi,
\bm\theta)$ and the convergence part actually only requires that
$f\in C^{(2)}(I)$. Also, as one would expect, the measure zero
assumption on the set $\{x\in I: f''(x)=0\}$ coincides with the
definition of non-degeneracy. In the case $s<1$, we have that
${\cal H}^s({\cal C}_f)=\infty$ and the theorem provides an elegant
zero-infinity law for the Hausdorff measure of
${{\cal C}}_f\cap {\cal S}(\psi, \bm\theta)$. In particular, this
law implies the following corollary on the Hausdorff dimension of
${{\cal C}}_f\cap {\cal S}(\psi, \bm\theta)$ expressed in terms of the lower
order $ \lambda_\psi$ of $1/\psi$. Recall, $$ \lambda_\psi \, :=
\, \liminf_{t\to\infty}\frac{-\log \psi(t)}{\log t}
$$ and indicates the growth of the function $1/\psi$ `near'
infinity. Note that $\lambda_\psi$ is non-negative since
$\psi(t)\to0$ as $t\to\infty$.
\begin{cor}
Let $\bm\theta \in \R^2 $ and $\psi$ be an approximating function
such that $\lambda_\psi\in [1/2,1) $. Let $f \in C^{(3)}(I)$ be such
that $f''(x)$ is not identically zero and assume that
$$
\dim \left\{ x \in I : f''(x) = 0 \right\} \ \leq \
\frac{2-\lambda_\psi}{1+\lambda_\psi} \ .
$$
Then,
$$ \dim \left( {{\cal C}}_f\cap {\cal S}(\psi, \bm\theta) \right) = \frac{2-\lambda_\psi}{1+\lambda_\psi} \ .
$$
\end{cor}
\noindent This generalizes Theorem~4 of
\cite{Beresnevich-Dickinson-Velani-07:MR2373145} to the inhomogeneous
setting. Note that when $\lambda_{\psi}>1/2$, the condition
that $f''(x)$ is not identically zero is automatic: the assumption on
$\dim \left\{ x \in I : f''(x) = 0 \right\}$ then forces this set to have
dimension strictly less than $1$. We take this opportunity to mention that this necessary condition should also be present
in \cite[Theorem~4]{Beresnevich-Dickinson-Velani-07:MR2373145},
where it is missing.
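\noindent By way of example, for $\psi:t\mapsto t^{-v}$ with $v\in[1/2,1)$ one has $\lambda_\psi=v$, and the sum in Theorem~\ref{thm1} satisfies

```latex
\sum_t \; t^{1-s} \, \psi(t)^{s+1} \;=\; \sum_t \; t^{1-s-v(s+1)}
\;\left\{\begin{array}{ll}
<\infty & {\rm when} \;\; s>(2-v)/(1+v)\\[.5ex]
=\infty & {\rm when} \;\; s\le (2-v)/(1+v)
\end{array}\right.,
```

so for a curve with non-vanishing curvature the zero-infinity law gives $\dim \left({{\cal C}}_f\cap {\cal S}(v, \bm\theta)\right) = (2-v)/(1+v)$, in agreement with the corollary.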
\vspace*{4ex}
\subsection{The inhomogeneous counting results}
\noindent The proof of Theorem~\ref{thm1} rests on understanding the
distribution of `shifted' rational points `near' planar curves.
In view of the metrical nature of Theorem~\ref{thm1}, there is no harm in
assuming that the function $f :I\to\R$ is defined on a closed interval $I$
and that $f''$ is continuous and non-vanishing on $I$. By the compactness of $I$, there exist
positive and finite
constants $c_1,c_2$ such that
\begin{equation}\label{e:002}
c_1 \ \le \ |f''(x)| \ \le \ c_2\qquad\forall \ x\in I.
\end{equation}
Let $I$ and $f$ be as above. Furthermore, given $\bm\theta =(\theta_1,\theta_2)\in \R^2$,
$\delta > 0 $ and $Q\ge 1$, consider the counting function
\begin{equation}\label{e:006}
N_{\bm\theta}(Q,\delta) \ := \ {\rm{card}}\left\{ (p_1,q) \in \Z \times {\mathbb N} \, : \,
\begin{array}{l}
Q < q\le 2Q,\ (p_1+\theta_1)/q\in I\\[.5ex]
\|qf(\,(p_1+\theta_1)/q\,)-\theta_2\|<\delta
\end{array}
\right\} \ .
\end{equation}
In short, the
function $N_{\bm\theta}(Q,\delta)$ counts the number of rational
points $(p_1/q,p_2/q)$ with bounded denominator $q$ such that the
shifted points $((p_1+\theta_1)/q,(p_2+\theta_2)/q)$ lie
within the $\delta/Q$-neighborhood of the
curve $\mathcal{C}_f$. The following result generalizes Theorem~1 of
\cite{Vaughan-Velani-2007} to the inhomogeneous setting.
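Although the results below are purely asymptotic, the counting function $N_{\bm\theta}(Q,\delta)$ of (\ref{e:006}) can be evaluated by brute force for any concrete $f$, which gives a quick numerical sanity check on bounds of this type. The following sketch is illustrative only -- the function names are ours and the interval $I=[a,b]$ is passed explicitly -- and simply enumerates the admissible pairs $(p_1,q)$:

```python
from math import floor

def dist_to_nearest_int(x):
    # ||x||: the distance from the real number x to the nearest integer.
    return abs(x - round(x))

def count_shifted_rationals(f, Q, delta, theta=(0.0, 0.0), interval=(0.0, 1.0)):
    """Brute-force evaluation of N_theta(Q, delta): the number of pairs
    (p1, q) with Q < q <= 2Q, (p1 + theta1)/q in I = [a, b] and
    ||q * f((p1 + theta1)/q) - theta2|| < delta."""
    theta1, theta2 = theta
    a, b = interval
    count = 0
    for q in range(int(Q) + 1, 2 * int(Q) + 1):
        # (p1 + theta1)/q in [a, b]  <=>  q*a - theta1 <= p1 <= q*b - theta1
        for p1 in range(int(floor(q * a - theta1)), int(floor(q * b - theta1)) + 2):
            x = (p1 + theta1) / q
            if a <= x <= b and dist_to_nearest_int(q * f(x) - theta2) < delta:
                count += 1
    return count
```

For instance, with $f(x)=x^2$, $I=[0,1]$ and $\bm\theta=(0,0)$, the endpoint pairs $p_1=0$ and $p_1=q$ are counted for every $q$, so the count is always at least $2Q$.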
\begin{thm}
\label{t:thm1}
Let $f\in C^{(2)}(I)$. Suppose that $Q\ge 1$ and $0<\delta\le\frac12$. Then
\begin{equation*}
N_{\bm\theta}(Q,\delta) \ll \delta Q^2 + \delta^{-\frac12}Q \ .
\end{equation*}
\end{thm}
With a mild additional condition on $f$ we are able to extend
the validity of the bound in Theorem \ref{t:thm1}. The following statement
is the inhomogeneous analogue of Theorem~3 in
\cite{Vaughan-Velani-2007}.
\begin{thm}
\label{t:thm4}
Let $f''\in{\rm{Lip}}_{\phi}(I)$, where $0<\phi<1$.
Suppose that $Q\ge 1$ and
$0<\delta\le \frac12$. Then, for any $\varepsilon > 0$
\begin{equation*}
N_{\bm\theta}(Q,\delta) \ll \delta Q^2 +
\delta^{-\frac12}Q^{\frac12+\varepsilon} +
\delta^{\frac{\phi-1}2}Q^{\frac{3-\phi}2} \ .
\end{equation*}
\end{thm}
\medskip
\noindent {\em Remark.} When $\phi=1$ the proof gives the above
theorem with the term $\delta^{\frac{\phi-1}2}Q^{\frac{3-\phi}2}$
replaced by $Q\log(Q/\delta)$, and this is then always bounded by
one of the other two terms.
\medskip
Armed with Theorems \ref{t:thm1} and \ref{t:thm4}, the convergent
part of Theorem \ref{thm1} is established by following the
arguments set out in Sections 6 and 7 of
\cite{Vaughan-Velani-2007}. The modifications are essentially
obvious and the details are omitted. It is worth mentioning that
when $s=1$, we only need to appeal to Theorem \ref{t:thm1} and
thus we only require that $f\in C^{(2)}(I)$ when proving the
convergent part of Theorem \ref{thm1}.
\medskip
The key to establishing the divergence part
of Theorem~\ref{thm1} is the following covering result that also
yields a sharp lower bound for the counting function
$N_{\bm\theta}(Q,\delta)$. Throughout, $|X|$ will denote the
one-dimensional Lebesgue measure of a set $X$ in $\R$.
\begin{thm}\label{thm6}
Let $f\in C^{(3)}(I)$.
Then for any interval $J\subseteq I$ there are constants
$k_1,k_2,C_1,Q_0>0$ such that for any choice of $\delta$ and
$Q>Q_0$ subject to
\begin{equation}\label{e:007}
\frac{k_1}{Q}\le\delta\le k_2
\end{equation}
one has
\begin{equation}\label{vb+}
\left|\bigcup_{(p_1,q)\in A_{\bm\theta}(Q,\delta,J)}
\left(B\left(\frac{p_1+\theta_1}{q},\frac{C_1}{Q^2
\delta}\right)\cap J\right)\right| \ \ge \ \frac{1}{2} \,
|J|\qquad\text{$\forall$ $\bm\theta\in\R^2$,}
\end{equation}
where
$$
A_{\bm\theta}(Q,\delta,J)\ :=\ \left\{(p_1,q)\in\Z\times \N \, : \,
\begin{array}{l}
Q<q\le 2Q, \ (p_1+\theta_1)/q\in J \\[.5ex]
\|qf(\,(p_1+\theta_1)/q \,) - \theta_2 \|< \delta \end{array}
\right\} \ \ .
$$
\end{thm}
\noindent Theorem \ref{thm6} is the inhomogeneous generalization of
Theorem~7 in \cite{Beresnevich-Dickinson-Velani-07:MR2373145}. Armed
with Theorem \ref{thm6}, the arguments set out in Section 7 of
\cite{Beresnevich-Dickinson-Velani-07:MR2373145} are easily adapted
to prove the divergence part of Theorem \ref{thm1}. The minor
modifications are essentially obvious and the details are omitted.
Note that $N_{\bm\theta}(Q,\delta)$ is by definition the
cardinality of $A_{\bm\theta}(Q,\delta,I)$. With this in mind, it
trivially follows that
\begin{eqnarray*}
N_{\bm\theta}(Q,\delta) \cdot \frac{2C_1}{Q^2 \delta} \ \ &
\geq & \ \sum_{ (p_1,q) \in A_{\bm\theta} (Q,\delta,I) } \left|
B\left(\frac{p_1+\theta_1}{q},\frac{C_1}{Q^2 \delta}\right)
\right| \\[2ex]
& \geq & \left|\bigcup_{(p_1,q)\in A_{\bm\theta}(Q,\delta,I)}
\left(B\left(\frac{p_1+\theta_1}{q},\frac{C_1}{Q^2
\delta}\right)\cap I\right)\right| \ \
\stackrel{(\ref{vb+})}{\ge} \ \ \frac{1}{2} \, |I| \ .
\end{eqnarray*}
\noindent In other words, Theorem~\ref{thm6} implies the
following statement which is a generalisation of Theorem~6 in
\cite{Beresnevich-Dickinson-Velani-07:MR2373145}.
\begin{thm}\label{thm3}
Let $f\in C^{(3)}(I)$. There are constants $k_1,k_2,c,Q_0>0$ such
that for any choice of $\delta$ and $Q>Q_0$ satisfying
(\ref{e:007}) we have
$$
N_{\bm\theta}(Q,\delta) \ \ge \ c \, \delta\, Q^2 \qquad\text{$\forall$ $\bm\theta\in\R^2$}.
$$
\end{thm}
\vspace*{2ex}
\section{The proof of Theorems \ref{t:thm1} and \ref{t:thm4} }
\noindent Without loss of generality, assume that
$\bm\theta=(\theta_1,\theta_2)$ satisfies $0\le \theta_1, \theta_2
<1$. Let
\begin{equation*}\label{e:008}
J:=\left\lfloor \frac1{2\delta} \right\rfloor
\end{equation*}
and consider the Fej\'er kernel
\begin{equation*}
\mathcal K_J(x) := J^{-2}\left| \sum_{h=1}^Je(hx) \right|^2 =
\left( \frac{\sin\pi Jx}{J\sin\pi x} \right)^2.
\end{equation*}
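The equality of the defining sum and the closed form above is the standard Fej\'er kernel identity; as a quick illustrative check (the code and names are ours, not part of the argument), one can compare the two numerically at non-integer points, with $e(t)=\exp(2\pi i t)$:

```python
import cmath
import math

def fejer_direct(J, x):
    # K_J(x) via the defining sum: J^{-2} |sum_{h=1}^{J} e(hx)|^2.
    s = sum(cmath.exp(2j * math.pi * h * x) for h in range(1, J + 1))
    return abs(s) ** 2 / J ** 2

def fejer_closed(J, x):
    # Closed form (sin(pi J x) / (J sin(pi x)))^2, valid for non-integer x.
    return (math.sin(math.pi * J * x) / (J * math.sin(math.pi * x))) ** 2
```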
When $\|x\|\le \delta$ we have $|\sin\pi Jx| = \sin\pi\|Jx\|\ge
2\| Jx\| = 2\|\, J\|x\|\, \|=2J\|x\|$, since $J\|x\| \le \delta
\left\lfloor \frac1{2\delta} \right\rfloor \le \frac12$. Also,
$|J\sin\pi x| = J\sin\pi\|x\| \le \pi J\|x\|$. Hence, when
$\|x\|\le\delta$, we have
\begin{equation*}
\mathcal K_J(x) \ge \left(\frac{2J\|x\|}{\pi J\|x\|}\right)^{2}=\frac4{\pi^2} \, .
\end{equation*}
Thus
\begin{equation*}
N_{\bm\theta}(Q,\delta)\le \frac{\pi^2}4 \sum_{Q<q\le 2Q}
\sum_{p_1+\theta_1\in qI} \mathcal K_J \big(
qf((p_1+\theta_1)/q)-\theta_2 \big).
\end{equation*}
Since
\begin{equation*}
\mathcal K_J(x) = \sum_{j=-J}^J \frac{J-|j|}{J^2} \, e(jx)
\end{equation*}
we have
\begin{equation*}
N_{\bm\theta}(Q,\delta)\le N_1 +O(\delta Q^2)
\end{equation*}
where $$N_1:= \frac{\pi^2}4 \sum_{0<|j|\le J} \frac{J-|j|}{J^2}
\sum_{Q<q\le 2Q} \ \sum_{p_1+\theta_1\in qI}
e\big(jqf((p_1+\theta_1)/q)-j\theta_2\big).$$ We observe that the
function $F(x):= jqf(x/q)$ has derivative $jf'(x/q)$. Given $j$
with $0<|j|\le J$ we define
$$H_-:=\lfloor\inf jf'(x)\rfloor-1,\ \
H_+:=\lceil\sup jf'(x)\rceil+1,$$
$$h_-:=\lceil\inf
jf'(x)\rceil+1,\ \ h_+:=\lfloor\sup jf'(x)\rfloor-1$$ where the
extrema are over $x$ in the interval $I$. Then, by Lemma 4.2 of
\cite{Vaughan-97:MR1435742},
\begin{eqnarray*}
\sum_{p_1+\theta_1 \in qI} e\big(jqf((p_1+\theta_1)/q)-j\theta_2\big)
& = & \sum_{H_-\le h\le H_+} \int_{qI-\theta_1}
e\big(jqf((x+\theta_1)/q)-j\theta_2-hx\big)dx \\[1ex]
& & {} + \ O\big(\log(2+H)\big)
\end{eqnarray*}
where $H=\max(|H_-|,|H_+|)$. Clearly $H\ll|j|\le J$ and so
$$N_1=N_2 + O\big(Q\log\textstyle{\frac1\delta}\big)$$
where
$$N_2
:= \frac{\pi^2}4 \sum_{0<|j|\le J} \frac{J-|j|}{J^2} \sum_{Q<q\le
2Q} \sum_{H_-\le h\le H_+} \int_{qI-\theta_1}
e\big(jqf((x+\theta_1)/q)-j\theta_2-h x\big)dx.$$
\noindent The integral here is
$$qe(h\theta_1-j\theta_2)\int_I e\big(q(jf(y)-hy)\big)dy.$$
As in Section 2 of \cite{Vaughan-Velani-2007}, we obtain
$$N_2=N_3+O\left(
\delta^{\frac12}Q^{\frac32} \right)$$ where
\begin{equation*}\label{e:009}
N_3:= \frac{\pi^2}4 \sum_{0<|j|\le J} \frac{J-|j|}{J^2} \sum_{Q<q\le
2Q} q \sum_{h_-< h < h_+} e(h\theta_1-j\theta_2) \int_I
e\big(q(jf(y)-hy)\big)dy
\end{equation*}
and the sum over $h$ is taken to be empty when $h_+\le h_-+1$.
Apart from the twisting factor $e(h\theta_1-j\theta_2)$ this
expression is identical to (2.3) of \cite{Vaughan-Velani-2007},
with identical properties of $f$. The analysis of Sections 2 and
4 of \cite{Vaughan-Velani-2007} can be applied without further
change to obtain the concomitant conclusions.
\section{The proof of Theorem~\ref{thm6}}
We will make use of the following result which appears as Lemma 6 in
\cite{Beresnevich-Dickinson-Velani-07:MR2373145}.
\begin{lemmabdv}\label{BKM0}
Let $\vv g :=(g_1,g_2):I\to\R^2$ be a $C^{(2)}$ map defined on a
compact interval $I$ such that $(g_1'g_2''-g_2'g_1'')(x)\neq0$ for
all $x\in I$. Given positive real numbers $\lambda,K,T$ and an
interval $J\subseteq I$, let $B(J,\lambda,K,T)$ denote the set of
$x\in J$ for which there exists
$(q,p_1,p_2)\in\Z^3\smallsetminus\{0\}$ satisfying the following
system of inequalities:
$$
\left\{
\begin{array}{l}
|q \, g_1(x) \, + \, p_1 \, g_2(x)+ p_2| \ \le \ \lambda
\\[2ex]
|q \, g_1'(x) \, + \, p_1 \, g_2'(x)| \ \le \ K
\\[2ex]
|q| \ \le \ T \ \ .
\end{array}
\right.
$$
Then for any interval $J\subset I$ there is $C>0$ such that for any
choice of numbers $\lambda,K,T$ satisfying
\begin{equation}\label{e:010}
0<\lambda\le 1,\quad T\ge1, \quad K>0\quad \text{and} \quad
\lambda KT\le1
\end{equation}
one has
\begin{equation}\label{e:011}
|{B(J,\lambda,K,T)}|\le C \max\left(\lambda^{1/3}, \left(\lambda K T
\right)^{1/9}\right)|J| \ .
\end{equation}
\end{lemmabdv}
\vspace*{2ex}
To begin the proof of Theorem~\ref{thm6}, define $\vv g(x) :=(g_1(x),g_2(x))$ by setting
$$
g_1(x):=x f'(x)-f(x)\qquad\text{and}\qquad g_2(x):=-f'(x).
$$
Then $\vv g\in C^{(2)}(I)$. Also, note that
\begin{equation}\label{e:012}
\vv g'(x)=(xf''(x),\,-f''(x)) \; ,\qquad \vv
g''(x)=(f''(x)+xf'''(x),\,-f'''(x))
\end{equation}
and $$ (g_1'g_2''-g_2'g_1'')(x)=f''(x)^2 \ . $$ As $f''(x)\neq0$
everywhere, Lemma~BDV is applicable to this $\vv g$. In view
of (\ref{e:002}) and the fact that $g_2'(x)=-f''(x)$, it follows that
\begin{equation}\label{e:013}
c_1\le |g_2'(x)|\le c_2 \qquad \forall x\in I \, .
\end{equation}
For a fixed $x\in I$, consider the following
system of inequalities:
\begin{equation}\label{e:014}
\left\{
\begin{array}{l}
|q^{}g_1(x)+p_1g_2(x)+p_2|\le c_0^3\delta
\\[2ex]
|q^{}g_1'(x)+p_1g_2'(x)|\le c_2(c_0^6Q\delta)^{-1}
\\[2ex]
|q|\le c_0^3Q \ .
\end{array}
\right.
\end{equation}
Here $c_0 < 1 $ is a real parameter to be determined later. Note that
with $q,p_1,p_2$ regarded as real variables, the system defines a
convex body ${\cal D}$ in $\R^3$ symmetric about the origin.
Next, fix an interval $J \subseteq I $. By definition, the set $B(J,\lambda,K,T)$ with
\begin{equation}\label{e:015}
\lambda :=c_0^3\delta,\ \ \ K:=c_2(c_0^6Q\delta)^{-1},\ \ \
T:=c_0^4 Q
\end{equation}
consists of points $x\in J$ such that there exists a non-zero
integer solution $(q,p_1,p_2)$ to the system (\ref{e:014}) with
$|q|\le c_0^4 Q$. By Lemma~BDV, for sufficiently large
$Q$ we have that
\begin{eqnarray*}
|B(J,\lambda,K,T)| & \le & C\, |J|\
\max\big\{(c_0^3\delta)^{1/3} \, , \ (c_2c_0)^{1/9}\big\} \\
& = & C\, (c_2c_0)^{1/9}|J| \ \le \ |J|/4 \ \
\end{eqnarray*}
provided that $c_0\le c_2^{-1}(4C)^{-9}$. Therefore, with
$\lambda,K,T$ given by (\ref{e:015}) and $Q$ sufficiently large
\begin{equation}\label{e:016}
|{\textstyle\frac34}J\setminus B(J,\lambda,K,T)|\ge|J|/2 \ ,
\end{equation}
where $\frac34J$ is the interval $J$ scaled by $\frac34$.
From this point onwards, fix $x \in \frac34J\setminus B(J,\lambda,K,T)$. Then,
\begin{equation}\label{e:017} |q|>c_0^4 Q
\end{equation}
for any non-zero integer solution $(q,p_1,p_2)$ to the system (\ref{e:014}). In other words, the first consecutive minimum of the body (\ref{e:014})
is at least $c_0$. Let $\lambda_1\le\lambda_2\le\lambda_3$ be the
consecutive minima of the convex body ${\cal D}$ given by (\ref{e:014}). Thus, $\lambda_1\ge
c_0$. By Minkowski's theorem on consecutive minima \cite{Cassels-97:MR1434478}, we have that
\begin{equation}\label{e:018}
\lambda_1\lambda_2\lambda_3V\le 2^3,
\end{equation}
where $V$ is the volume of ${\cal D}$. It is readily
verified that
$$V \, = \, 8|g_2'(x)|^{-1}c_2 \, \stackrel{(\ref{e:013})}{\ge} \, 8c_2^{-1}c_2=8 \ . $$
Therefore,
$$
\lambda_3\stackrel{(\ref{e:018})}{\le} 8\lambda_1^{-2}V^{-1}\le
c_0^{-2}
$$
\noindent and it follows that there are three linearly independent
integer vectors
$$\vv a^{(i)}\, := \, (q^{(i)},p_1^{(i)},p_2^{(i)}) \qquad (1\le
i\le 3) $$ satisfying the system of inequalities
\begin{equation}\label{e:019}
\left\{
\begin{array}{l}
|q^{(i)}g_1(x)+p_1^{(i)}g_2(x)+p_2^{(i)}|\le c_0\delta
\\[2ex]
|q^{(i)}g_1'(x)+p_1^{(i)}g_2'(x)|\le c_2(c_0^8Q\delta)^{-1}
\\[2ex]
0\le q^{(i)}\le c_0Q \ .
\end{array}
\right.
\end{equation}
For each $i$, define
$$
G_i(x):=q^{(i)}g_1(x)+p_1^{(i)}g_2(x)+p_2^{(i)} \ .
$$
Now with $\bm\theta =(\theta_1,\theta_2)\in \R^2 $ fixed, consider the following system of linear equations with respect to
the real variables $\eta_1,\eta_2,\eta_3$:
\begin{equation}\label{e:020}
\left\{
\begin{array}{rcl}
\eta_1G_1(x)+\eta_2G_2(x)+\eta_3G_3(x) & = & \theta_1 f'(x)-\theta_2\\[2ex]
\eta_1G'_1(x)+\eta_2G'_2(x)+\eta_3G'_3(x) & = & \theta_1 f''(x)\\[2ex]
\eta_1q^{(1)}+\eta_2q^{(2)}+\eta_3q^{(3)} & = & 2Q \ . \\[2ex]
\end{array}
\right.
\end{equation}
The determinant of this system is equal to
$$
-f''(x)\left|\begin{array}{ccc}
p_2^{(1)} & p_2^{(2)} & p_2^{(3)} \\[1ex]
p_1^{(1)} & p_1^{(2)} & p_1^{(3)} \\[1ex]
q^{(1)} & q^{(2)} & q^{(3)}
\end{array}
\right|
$$
and so is non-zero. Therefore, there is a unique solution
$\eta_1,\eta_2,\eta_3$ to the system (\ref{e:020}). Let
$t_i:=\lfloor\eta_i\rfloor$ when $q^{(i)}\ge0$ and
$t_i:=\lceil\eta_i\rceil$ when $q^{(i)}<0$. Then,
\begin{equation}\label{sv+}
|\eta_i-t_i|<1 \qquad (1\le i\le 3) \ .
\end{equation}
Also, define $\vv
a=(q,p_1,p_2)\in\Z^3\setminus\{0\}$ by setting
\begin{equation}\label{e:021}
\vv a=(q,p_1,p_2):=\sum_{i=1}^3t_i\vv a^{(i)}.
\end{equation}
\noindent In view of the last equation of (\ref{e:020}) and the
definition of $t_i$, it follows that $q\le 2Q$. Furthermore, using
the fact that $|q^{(i)}|\le c_0Q$ for each $i$ (this follows from
the last inequality of (\ref{e:019})) we get that $q\ge 2Q-3c_0Q\ge Q$
provided that $c_0\le 1/3$. Thus, we have that
\begin{equation}\label{e:022}
Q\le q\le 2Q.
\end{equation}
Further,
\begin{equation}\label{e:023}
\begin{array}[b]{rcl}
|qg'_1(x)+p_1g'_2(x)-\theta_1 f''(x)| & \stackrel{(\ref{e:021})}{=} &
\left|\sum_{i=1}^3t_iG'_i(x)-\theta_1 f''(x)\right|\\[2ex]
& \stackrel{(\ref{e:020})}{=} &
\left|\sum_{i=1}^3(t_i-\eta_i)G'_i(x)\right|\\[2ex]
& \stackrel{(\ref{sv+})}{\le} &
\sum_{i=1}^3|G'_i(x)|\\[2ex]
& \stackrel{(\ref{e:019})}{\le} &
3c_2(c_0^8Q\delta)^{-1}.
\end{array}
\end{equation}
In view of (\ref{e:012}) and (\ref{e:023}), we have that
$$
|q x f''(x)-p_1f''(x)-\theta_1 f''(x)|<3c_2(c_0^8 Q\delta)^{-1}.
$$
The latter combined with (\ref{e:002}) gives that
\begin{equation}\label{e:024}
\left|x-\frac{p_1+\theta_1}{q}\right|\ \le \
\frac{3c_2}{q|f''(x)|c_0^8 Q\delta} \ \stackrel{(\ref{e:022})}{\le}
\ \frac{3c_2}{c_1c_0^8 Q^2\delta}=\frac{C_1}{Q^2\delta}\ ,
\end{equation}
where $C_1:=\frac{3c_2}{c_1c_0^8}$. For $Q$ sufficiently large, the
right hand side of (\ref{e:024}) can be made arbitrarily small, which
together with the fact that $x\in \frac34J$ ensures that
\begin{equation}\label{e:024+}
\textstyle\frac{p_1+\theta_1}{q}\in J \ .
\end{equation}
\noindent Also,
\begin{equation}\label{e:025}
\begin{array}[b]{rcl}
|q^{}g_1(x)+p_1g_2(x)+p_2-(\theta_1 f'(x)-\theta_2)| & \stackrel{(\ref{e:021})}{=} &
\left|\sum_{i=1}^3t_iG_i(x)-(\theta_1 f'(x)-\theta_2)\right|\\[2ex]
& \stackrel{(\ref{e:020})}{=} &
\left|\sum_{i=1}^3(t_i-\eta_i)G_i(x)\right|\\[2ex]
&\stackrel{(\ref{sv+})}{\le} &
\sum_{i=1}^3|G_i(x)|\\[2ex]
& \stackrel{(\ref{e:019})}{\le} &
3c_0\delta.
\end{array}
\end{equation}
By Taylor's formula,
\begin{equation}\label{e:026}
\textstyle
f\big(\frac{p_1+\theta_1}{q}\big)=f(x)+f'(x)\big(\frac{p_1+\theta_1}{q}-x\big)+
\frac12f''(\tilde x)\big(\frac{p_1+\theta_1}{q}-x\big)^2
\end{equation}
for some $\tilde x$ between $x$ and $(p_1+\theta_1)/q$. Thus $\tilde x
\in J$. By (\ref{e:012}), the left hand side of (\ref{e:025}) equals
$|q(xf'(x)-f(x))-p_1f'(x)+p_2-(\theta_1 f'(x)-\theta_2)|$. Hence,
\begin{equation}\label{e:027}
\begin{array}[b]{rcl}
3c_0\delta & \stackrel{(\ref{e:025})}{\ge}
& |q(xf'(x)-f(x))-p_1f'(x)+p_2-(\theta_1 f'(x)-\theta_2)| \\[2ex]
&=&|(qx-p_1-\theta_1)f'(x)+p_2+\theta_2-qf(x)|\\[2ex]
&\stackrel{(\ref{e:026})}{=}&
\big|p_2+\theta_2-qf\big(\frac{p_1+\theta_1}{q}\big)+\frac{q}{2}f''(\tilde
x)(x-\frac{p_1+\theta_1}{q})^2\big| \, .
\end{array}
\end{equation}
Therefore, for $Q$ sufficiently large
\begin{equation}\label{e:028}
\begin{array}[b]{rcl}
\big|qf\big(\frac{p_1+\theta_1}{q}\big)-p_2-\theta_2\big|&\le&
\big| \, p_2+\theta_2-qf\big(\frac{p_1+\theta_1}{q}\big)
+ \frac{q}{2}f''(\tilde
x)\big(x-\frac{p_1+\theta_1}{q}\big)^2\big|\\[2ex]
&~ & \hspace*{5ex} + \ \ \big|\frac{q}{2}f''(\tilde
x)\big(x-\frac{p_1+\theta_1}{q}\big)^2\big|\\[3ex]
& \le & 3c_0\delta+\frac{Q}{2}c_2\left(\frac{C_1}{Q^2\delta}\right)^2
\\[3ex]
&\stackrel{(\ref{e:007})}{\le}&
3c_0\delta+\frac{Q}{2}c_2\left(\frac{C_1}{k_1Q}\right)^2
\, = \, 3c_0\delta+\frac{c_2C_1^2}{2k_1^2} \, Q^{-1}\\[3ex]
&\stackrel{(\ref{e:007})}{\le}&
3c_0\delta+\frac{c_2C_1^2}{2k_1^3} \, \delta \ \ < \ \ \delta
\end{array}
\end{equation}
provided that $c_0<1/6$ and $\frac{c_2C_1^2}{2k_1^3}<1/2$. Thus, for
any $x \in \frac34J\setminus B(J,\lambda,K,T)$ there exists some
$(q,p_1,p_2)$ such that (\ref{e:022}), (\ref{e:024+}) and
(\ref{e:028}) are satisfied. In other words, $(q,p_1) \in
A_{\bm\theta}(Q,\delta,J)$ and moreover, in view of (\ref{e:016}) we
have that (\ref{vb+}) is satisfied for all $Q$ sufficiently large.
This completes the proof of Theorem~\ref{thm6}.
\vspace{1ex}
\noindent {\bf Acknowledgements.} SV would like to thank Resh Khodabocus for
his wonderful friendship and expert support during the `broken
ankle' episode. Also, many thanks to the fab three -- Bridget,
Iona and Ayesha -- for putting up with an immobile and often bad
tempered man. Finally, SV would like to thank Victor Beresnevich
for his immense generosity (personally and professionally) and
Steve Donkin for being an extremely supportive chief during broken
times!
\section{Introduction}
\label{sec:Introduction}
In linguistics, \textit{code-switching} describes using two (or more) languages (or language varieties) within a single text, conversation, or utterance.%
\footnote{The term `code-switching' (sometimes referred to as `codemixing', `codeshifting', `language alternation', `language mixture', or `language switching') appears in the 1950s; however, observations of these linguistic phenomena in academic writing predate this baptism by several decades; see \citet{Benson-2001} for an historical discussion. Code-switching, like many other communicative phenomena, appears to be governed by several linguistic and extra-linguistic features; see discussion in, e.g., \citet{Gumperz-1977, Pfaff-1979, Poplack-1980, Benson-2001}.}
This purely linguistic sense of code-switching has been widely discussed in the context of artificial intelligence (AI)---especially for natural language processing (NLP) models.%
\footnote{See \citet{Jose-et-al-2020} for a recent survey.}
AI/NLP research on code-switching typically concentrates on the morphosyntactic features of code-switched linguistic data at the sentential (or sub-sentential) level. However, there is a broader characterisation of code-switching phenomena that is primarily {\it social}, namely, {\it cultural} code-switching \citep{Falbo-2021}. Cultural code-switching relates to how we adapt our overall behaviour, manner of speaking, and appearance in response to a perceived change in our social environment. Unlike the merely linguistic sense of code-switching, cultural code-switching involves `a much more profound shift than toggling between languages or dialects does' \citep[76]{Morton-2019}.\footnote{See also the definitions and related discussions found in \citet{Morton-2014, McCluney-et-al-1979}.} Cultural code-switching is more intimately connected to the self. It is also a mechanism by which we can conform to dominant cultural norms, and, by the same token, it is a means by which we can defect and resist the status quo.
As AI-based language technologies continue to improve and become more pervasive in society, it will be increasingly essential that they fluidly interact with humans---i.e., in the way that humans typically interact with one another. However, a significant part of such interactions involves non-linguistic commun\-ication---i.e., the additional features of communication that are relevant for cultural code-switching. Engagement in cultural code-switching is especially common among minority communities, and it may be done for a variety of reasons---conforming to majority-group norms, gaining social acceptance, avoiding hostility or discrimination, or achieving social mobility, to name a few. Accordingly, the extant biases of emerging technologies, which typically arise from dominant-group conventions, may be further exacerbated.\footnote{It is, by now, well-documented that many instances of machine bias result from training, where the technological artefact mirrors extant biases in datasets. See discussion in, e.g., \citet{Friedman-Nissenbaum-1996, Angwin-et-al-2016, O'Neil-2016, Caliskan-et-al-2017, Cave-Dihal-2020, Johnson-2021}.} To mitigate, and not further contribute to, these biases, it is important for researchers in AI---especially those working in the machine learning paradigm and other emerging methods---to take cultural code-switching into account in models of NLP. This is particularly true when these systems are deployed, especially in high-stakes social environments (e.g., legal settings, job interviews, educational institutions) or when they are otherwise integrated into standard consumer-facing technologies or applications.
The goals of this paper are two-fold, in light of these considerations. First, we motivate the importance of cultural code-switching in research on the ethics of AI. Despite its ubiquity and, more importantly, the unique value and costs that cultural code-switching has for those from marginalised backgrounds, surprisingly little attention has been given to these phenomena, either in the technical literature on NLP models or in the philosophical literature on AI more broadly. This paper bridges this gap by defending the need to investigate cultural code-switching capacities in AI systems (AIS).
Second, drawing upon the growing literature on epistemic oppression and related work on structural oppression, we canvas the potential moral and epistemic risks involved in implementing---or, more importantly, \textit{failing} to implement---code-switching capacities in AIS. By leaving the socio-dynamic features of cultural code-switching unaddressed, AIS risk negatively impacting already marginalised social groups by widening opportunity gaps and further entrenching social inequalities.%
\footnote{See \citet{LaCroix-OConnor-2021, Bruner-2017, Bruner-Oconnor-2017, Oconnor-Bruner-2019} for discussion on how the constitution of groups and group dynamics alone may give rise to inequitable conventions.}
More generally, reflecting upon the need for cultural code-switching in AIS allows us to enrich our understanding of these phenomena beyond the context of human-AI interaction. Cultural code-switching influences not only first-personal identity---particularly what \cite{Dembroff-Saint-Croix-2019} have called \textit{agential identities}---but also broader systems of social coordination within and between groups.
The paper proceeds as follows. Section~\ref{sec:Background} provides a brief overview of present-day artificial intelligence and machine learning methods, highlighting extant biases that have surfaced. This section also clarifies that the AI research area that is most relevant to code-switching is {\it natural language processing}. However, as mentioned, the code-switching phenomena we are primarily concerned with are broader than the linguistic context alone. Accordingly, section~\ref{sec:Cultural-Code-Switching} makes clear what {\it cultural} code-switching is and some of the reasons we do it, using specific examples. Section~\ref{sec:Cultural-Smothering} draws upon the literature on epistemic oppression and, in particular, on \citepos{Dotson2014} analysis of \textit{testimonial smothering}. In so doing, we broaden Dotson's framework to include a species of self-silencing, which, following \citet{Falbo-2021}, we call \textit{cultural smothering}. This occurs when cultural code-switching behaviours manifest as a form of self-censoring: one alters aspects of their cultural identity in response to an unwelcoming or hostile social atmosphere. We also discuss how pressures to code-switch in this sense, and thus to engage in or succumb to cultural smothering, often have the structure of a \textit{double bind} choice situation \citep{Frye-1983}.
Section~\ref{sec:Conclusion} draws some tentative conclusions concerning the relationship between cultural code-switching and emerging technologies. We must be sensitive to how practices of silencing are sustained and reinforced in society and how this maintains unjust social arrangements, thereby limiting access to social goods needed to improve the material conditions of marginalised communities (e.g., safety, education, job opportunities). A natural extension of this carries over to the domain of emerging technologies, particularly AI and NLP. We should be critical of how these same patterns of silencing and epistemic oppression are reproduced and potentially rendered more potent and efficacious in emerging technologies---especially as these technologies become increasingly widespread and seamlessly integrated into everyday life.
To be clear: it is not our intention to defend any concrete solutions to the problems we identify. We don't offer any optimistic plans for developing or training AI to better avoid or ameliorate epistemic oppression. We are not confident that any such solution ultimately exists. Instead, we offer these arguments as a serious caution, as a lesson concerning the limitations and harms of AI technologies when they are adopted and implemented against the backdrop of a non-ideal, hierarchically structured social world, filled with unjust divisions on the bases of race, gender, sexual orientation, ability, religion, and class, among others. Unless and until the social world changes for the better, we should expect these problems to persist.
\section{Artificial Intelligence, Machine Learning, and Natural Language Processing}
\label{sec:Background}
{\it Artificial intelligence} (AI) describes both a property of computer systems---i.e., displaying intelligent behaviour\footnote{Where `intelligence' is understood as `an agent's ability to achieve goals in a wide range of environments' \citep{Legg-Hutter-2007}.}---and a set of techniques that researchers use to achieve this property \citep{Gabriel-2020}. Machine learning (ML) is a branch of artificial intelligence wherein algorithms learn gradually from data. The three main approaches to machine learning are supervised learning, unsupervised learning, and reinforcement learning.
In supervised learning, the algorithm is trained on labelled examples; this is the {\it training data}. The model is evaluated on how well it generalises what it learns from the training data to previously unseen examples (called test data). For example, one might train an image-recognition model on millions of labelled images; if it performs well, it should correctly label novel images not in the training set. In unsupervised learning, an algorithm learns underlying patterns or correlations from unlabelled data. For example, recommendation systems group users together based on their viewing patterns to recommend similar content. Reinforcement learning depends upon sparse rewards for actions. For example, a chess game has a reward of $+1$ if a player wins, $-1$ if a player loses, and $0$ if a player draws. A reinforcement learning model can learn to play chess merely by playing hundreds of thousands of games and receiving a reward at the end of each game.
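To make the supervised train/generalise pattern described above concrete, the toy sketch below---our own illustration, not drawn from any system discussed in this paper---`trains' a nearest-neighbour classifier on a handful of labelled points and then scores it on held-out test points:

```python
def nearest_neighbour_predict(train, x):
    """Predict the label of x as the label of the closest training example.
    `train` is a list of ((feature, ...), label) pairs."""
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(train, key=lambda example: sq_dist(example[0], x))[1]

def accuracy(train, test_set):
    """Fraction of held-out examples whose predicted label matches the true one."""
    hits = sum(1 for x, y in test_set if nearest_neighbour_predict(train, x) == y)
    return hits / len(test_set)

# Toy training data: two well-separated clusters, labelled 'A' and 'B'.
train = [((0, 0), "A"), ((1, 0), "A"), ((10, 10), "B"), ((11, 10), "B")]
# Held-out test data the model never saw during 'training'.
test_set = [((0.5, 0.2), "A"), ((10.5, 9.8), "B")]
```

Evaluating `accuracy(train, test_set)` is the miniature analogue of measuring how well a trained model generalises to previously unseen examples.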
Machine learning techniques stand in contrast to {\it symbolic systems} (or `good old-fashioned AI'), where explicit, hand-crafted rules had to be hard-coded into the machine---typically in the form of complex, nested if-then statements. The current driver of AI research is so-called {\it deep learning}, an approach to machine learning that utilises deep neural networks modelled (roughly) after neurons in the human brain. Deep learning uses layers of algorithms to process data---information is passed through each subsequent layer in a neural network, with the previous layer's output providing input for the subsequent layer. One of the key advantages of deep learning techniques is that they do not require the heavily hand-crafted features used by traditional machine learning methods. Deep convolutional neural networks are differentiated from their shallow counterparts primarily in terms of their depth, heterogeneity, and sparse connectivity \citep{Buckner-2019}.
In the early days of machine learning, researchers could focus on the fundamental aspects of their work without much concern for the social or ethical consequences of these systems because they were relatively encapsulated in the confines of the research lab. However, \citet{Luccioni-Bengio-2019} highlight that these algorithms are increasingly being deployed in society due in part to the promise of the unprecedented economic impacts of ML applications \citep{Bughin-et-al-2018, Szczepanski-2019, Russell-2019}. Consequently, these systems' ethical and social impacts have started to be examined by researchers from a wide range of disciplines.
Since 2016, the ethical and social consequences of emerging technologies have begun to play a more prominent role in AI research more generally \citep{Christian-2020}. One worry is that algorithms may latch on to spurious correlations (overfitting the training data), which can lead to behaviour that we would call, e.g., sexist or racist (even if implicitly so) when performed by a human. Another slightly different worry is that the data used to train an algorithm are inherently biased.\footnote{See discussion in \citet{Green-2019, Johnson-2021}.} Hence an optimally-trained model will perform exactly how it should, given the data it has received, again leading to potentially problematic patterns of `behaviour'. A final worry is that the data are simply incomplete or unrepresentative, meaning that the trained system may be `oblivious' to certain (potentially huge) swaths of society.
For example, a health algorithm that used health {\it costs} as a proxy for health {\it needs} learned to be biased against Black patients in the United States \citep{Obermeyer-2019}. Similarly, a recidivism-prediction algorithm is twice as likely to have false positives (labelling individuals as high risk for re-offence when they are in fact low risk) for Black individuals and twice as likely to have false negatives (labelling individuals as low risk for re-offence when they are in fact high risk) for white individuals \citep{Angwin-et-al-2016}.
To put it another way: at {\it best}, a sophisticated ML system can be an efficient tool that reflects exactly what the data tell it about the real world, meaning that if the data are biased---which they typically are---then the ML system will perpetuate the existing biases or inequalities in the social system that gave rise to those data.
As mentioned in Section~\ref{sec:Introduction}, cultural code-switching is especially common among minority communities, implying differential values and risks for individuals in minority groups compared with those in majority groups. Thus, biases in AI technologies can come to bear in significant ways on cultural code-switching (which we discuss in more detail in the next section). Owing to the communicative nature of code-switching, the field of artificial intelligence in which an analysis of code-switching will be most relevant is {\it natural language processing}, which studies natural language interactions between computers and humans.
Some of the key areas of study in NLP research include speech recognition, which may involve recognising spoken language and translating it into text format; natural language understanding or interpretation, which is necessary for, e.g., question-and-answer interactions or machine translation; and natural language generation, which outputs text. NLP has many important applications, including virtual assistants, spam detection, speech recognition, named entity recognition, sentiment analysis, question answering, automatic text summary, autocomplete, predictive typing, relationship extraction, and machine translation, to name a few.
For many years, researchers have utilised shallow models (such as support vector machines or logistic regression) or explicit symbolic representations (such as first-order logic) to solve particular NLP tasks. However, the advent of {\it deep learning} techniques has allowed for significant progress in NLP research in the last decade. You are probably familiar with NLP, even if you have not heard of it---NLP models are necessary for consumer-facing applications like Google Assistant, Siri, or Alexa to work.
Consider the last of these. Amazon's Alexa is a virtual assistant AI system that makes significant use of NLP models. It constantly monitors signals in the environment and processes them to minimise ambient noise (such as the conversation on the television) that is not relevant to its task. This pre-processing is always going on in the background. For the software to turn on, it requires a `wake word'---e.g., {\it hey Alexa!} Thus, before Alexa even does anything, it must be capable of differentiating speech from background noise, detecting very specific signals, and responding to them. Once Alexa wakes, it converts the recorded audio to text by analysing certain features like frequency and pitch---this is {\it speech recognition}. Once the speech has been converted to text, Alexa must interpret the meaning of (at least parts of) the text to respond appropriately---this is natural language {\it understanding}. Once the request is processed and understood, Alexa may need to produce a `voiced' response.
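The stages just described can be sketched as a pipeline. Everything below is schematic: the function names and the toy string-based `audio' are invented for illustration and bear no relation to Alexa's actual (proprietary) implementation:

```python
# Purely schematic voice-assistant pipeline; every function below is a toy
# stand-in invented for illustration (real systems are vastly more complex).

def suppress_background_noise(frames):
    # Always-on pre-processing: discard signals judged irrelevant.
    return [f for f in frames if f != "tv-noise"]

def detect_wake_word(frames):
    # The software only "turns on" after a wake word, e.g. "hey Alexa!".
    return "hey-alexa" in frames

def speech_to_text(frames):
    # Speech recognition: audio features -> text.
    return " ".join(f for f in frames if f != "hey-alexa")

def interpret(text):
    # Natural language understanding: text -> a (crude) intent representation.
    return {"intent": "weather"} if "weather" in text else {"intent": "unknown"}

def generate_response(intent):
    # Natural language generation.
    return "It is sunny." if intent["intent"] == "weather" else "Sorry?"

def synthesise_voice(reply):
    # Text-to-speech for the 'voiced' response.
    return f"<audio:{reply}>"

def assistant_pipeline(audio_frames):
    speech = suppress_background_noise(audio_frames)
    if not detect_wake_word(speech):
        return None  # stay asleep
    text = speech_to_text(speech)
    intent = interpret(text)
    return synthesise_voice(generate_response(intent))
```

Each stage discards information the previous one carried---which is precisely where the extra-linguistic signals discussed below can get lost.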
However, NLP research generally starts with the assumption that language is solely about the transfer of information---i.e., the {\it content} of the message. As mentioned in the introduction, there are many extra-linguistic factors involved in cultural code-switching, so this emphasis on content is myopic. Even granting the view that language is primarily a way of communicating or transferring information, language is but one way of doing so.
Two of the leading language models in machine learning are {\it Bidirectional Encoder Representations from Transformers}---also known as BERT \citep{devlin2019bert}---and {\it Generative Pre-trained Transformer 3}---also known as GPT-3 \citep{brown2020language}---which is the third iteration of an autoregressive language model, produced by OpenAI. Let's consider the latter of these.
The full version of GPT-3 makes use of around 175 billion parameters---two orders of magnitude larger than its predecessor, GPT-2, and one order of magnitude larger than the next-largest NLP model, Microsoft's {\it Turing NLG}.\footnote{Since the release of GPT-3, this model has been surpassed in size by Google's 1.7 trillion parameter language model. This show of one-upmanship was criticised by \citet{bender2021dangers}, who highlight, among other things, the environmental costs of training large language models.} The majority of training data for this model (around 60\%) comes from a dataset called {\it Common Crawl}---a corpus generated from crawling the Internet---which, as of April 2021, contains around 320 {\it tebibytes} (TiB) of data taken from 3.1 billion web pages. GPT-3 used a filtered version of the Common Crawl dataset, resulting in approximately 410 billion tokens (though OpenAI does not provide details of how the Common Crawl dataset was curated).
To give a sense of this dataset's scale, GPT-3 was also trained, in part, on the {\it entire} English-language Wikipedia, which consists of 3 billion tokens, and accounted for only around 3\% of the training data for the model. Additional datasets used to train GPT-3 include WebText2---consisting of the text of web pages from all outbound Reddit links from posts with three or more `upvotes'---and Books1 and Books2---which are two internet-based books corpora.
Note, however, that the Internet---and therefore, the data used to train GPT-3---is predominantly {\it English}: around 45\% of HTML pages in the Common Crawl dataset have English as their primary language. It is estimated that English is used by around 62\% of all the websites whose content language we know, with the next most predominant language being Russian, at 8\% \citep{W3TechS}.
Thus, there is likely an anglocentric bias in the data used to train large language models. However, it is perhaps relevant to note that it is very difficult to make claims about the training of these models with any degree of certainty. For example, OpenAI does not describe the filtering process for the Common Crawl dataset, and the WebText2 dataset and the books corpora are not publicly available and lack official documentation \citep{bandy2021addressing}.
Furthermore, as others have pointed out, the Common Crawl dataset will overrepresent those populations with higher internet usage rates \citep{luccioni2021whats}. And, those countries with the highest internet penetration rates are European (Northern, Western, and Southern) and north-American, with the lowest rates coming from Africa and South Asia \citep{Statista-2021-Internet-Penetration}. This discrepancy leads to a `digital language divide' on the Internet---despite there being around $7000$ languages spoken worldwide, only a few hundred of these are represented on the Internet \citep{Young-nd}. Thus the datasets being used to train these language models are inherently biased, and there has (to date) been relatively little analysis of these biases.
In some ways, the simple case is the one where an algorithm is trained on a specific language; yet, the technologies are highly biased in favour of anglophones. More difficult cases involve particular accents or dialects, and there is already overwhelming evidence that NLP systems are biased---this time in favour of a particular {\it type} of anglophone: one whose idiolect resembles something like `broadcast English' \citep{Harwell-2018}. Thus, the ubiquity of code-switching highlights a further difficulty that these already inherently flawed systems will face.
The biases that we have been discussing in NLP models have been primarily linguistic in nature. However, as mentioned in the introduction, there is a broader sense of code-switching that we take to be pertinent here: namely, cultural code-switching. In the next section, we discuss how the pressures to code-switch---to adapt, minimise, suppress, mask, hide, or conform---are especially pressing among already disadvantaged populations. Often, code-switching is a skill that is required to gain access to safety and opportunities for improving one's material conditions. But, at what cost?
\section{Cultural Code-Switching}
\label{sec:Cultural-Code-Switching}
Cultural code-switching is ubiquitous in human interactions: it is used to switch between formal and informal contexts; to wield power and to signal asymmetric authority relations \citep{Popa-Wyatt-Wyatt2018, Tirrell2012}; and can express a sense of agency over one's social identity, and serve to build and reinforce a sense of community and solidarity by signalling in-group affiliation \citep{Dembroff-Saint-Croix-2019}. Since cultural code-switching takes account of the socio-pragmatic aspects of communication, it includes extra-linguistic cultural artefacts (in addition to linguistic ones). For example, consider how a student may modulate their tone of voice or change the register of their speech by using honorifics like `ma'am', `sir', `doctor', or `professor' when interacting with faculty. Thus, instead of the narrow sense of altering some linguistic features of communication, we take code-switching more broadly to include non-linguistic communicative signals which convey information---e.g., about oneself and one's relation to others in a given social context.
Why do we code-switch? The reasons for code-switching are vast and complex. One important use of code-switching is to signal in-group membership and affiliation. Consider how teenagers (in a broadly North American context) will often use various slang terms and expressions when talking with each other. A quick search through Urban Dictionary, a crowd-sourced online lexicon used to archive slang words and phrases, reveals a host of in-group expressions of this sort---e.g., `lit', `salty', `bae', `simp', `noob', `ghosting', `dank', `taking an L', and much more. If you don't know what these expressions mean, that is part of the point. When young people use these expressions among friends, they are code-switching (whether they realise it or not): they are signalling to their audience that they are `in the know', `one of them', or, as they might colloquially put it, `cool'.
Consider another example from the LGBTQ2+ community, particularly the sub-culture of drag queens \citep{Mattel-2018}. If you go to a drag show, there are a range of unspoken cultural norms and expectations that are operative. If it is your first time, you might be taken aback by the playful, mock-impolite, bantering style that drag performers often use. In some cases drag queens might be `throwing shade': a kind of indirect insult toward other drag performers or audience members. This phenomenon is discussed in Jennie Livingston's \textit{Paris Is Burning} (1991), a documentary on drag culture in New York City. Dorian Corey, a drag queen interviewed in the film, gives the following example of throwing shade. %
%
\begin{quote}
If I were to say in a terribly condescending voice, `Oh honey, I'm so glad you saved up to buy those glasses', that's blatant shade. I didn't insult the glasses, or you, directly. It's implied by my voice and the context of what I said. You know they're ugly.
\end{quote}%
%
It is important to note the central role that tone of voice and context play in this example. This is a key issue which we will return to in discussing the relationship between the phenomena of cultural code-switching and human-to-AI interactions. More broadly, this style of communicative engagement has been identified as a kind of pro-social in-group communication, which builds community and solidarity within LGBTQ2+ communities. Additionally, mock impoliteness is also a rhetorical strategy that can be deployed as a form of self-protection; it is a way of preemptively developing a thick skin against bigotry and discrimination \citep{McKinnon-2017, Olivia-et-al-2021}.
As another example, cultural code-switching in the context of race plays a major role in the plot of Boots Riley's black-comedy film, {\it Sorry to Bother You} (2018). The film centres on a young, Black man named Cassius Green (played by Lakeith Stanfield) who gets a job as a telemarketer and finds major success after he is coached by an older (also Black) co-worker (played by Danny Glover) to use his `white voice'. White actors voice the `white voices' for these characters. One of the major themes of this film is how being successful in a predominantly white culture requires conforming to the expectations of the dominant group---in this case, a marginalised Black man `sounding' white to achieve success at work. According to \citet{Toure-2018}, `putting on a white voice' means to embrace `the ease that white privilege brings. It means sounding as if you’re entitled to the good life. It means feeling calm way down in your soul. It means never having to be afraid someone will call the police on you just because you’re breathing.' However, the film also explores the social consequences of code-switching with respect to one's in-group: Green ends up alienating (or being alienated from) his friends, family, and social in-group.
This resonates with a broader experience of \textit{double alienation} that individuals from marginalised groups often confront as they find themselves existing between disparate social worlds with distinct and often incompatible cultural norms, values, and expectations. \cite{Morton-2019} describes how first-generation scholars often confront ethical compromises in the process of trying to gain upward social mobility through pursuing a degree in higher education. In conforming to the dominant norms operative in institutes of higher education (especially within elite institutions with predominately upper-class and white student and faculty populations), students from first-generation or low socioeconomic backgrounds often feel isolated and less able to connect with family and loved ones. This is perhaps especially true within fields like philosophy, where first-generation and low-income academics may be unable to fully share their academic life with family and loved ones, who may not understand the point or value in pursuing a career as an academic philosopher. In a recent paper, Morton, who is a first-generation scholar, says of her family that: `We love each other, but I am now part of a world whose logic is mysterious to them' \citeyearpar[10]{Morton-2021}. Thus, the process of code-switching can give rise to serious ethical trade-offs and compromises.\footnote{For further discussions of related issues see \cite{Morton-2019}, and for discussion on the case of academic philosophy specifically, see the recent collection of papers in \cite{Falbo-Stewart-2021}.}
More generally, one key use of code-switching is cultivating a sense of belonging and community with those in one's social group. One code-switches to express a shared social identity with others; to make others feel comfortable, at ease, or `at home' in a given social environment. This is just one---distinctively \textit{positive} and \textit{self-affirming}---use of code-switching. Other uses of code-switching are starkly different.
In some cases, one might code-switch not as a means to increase a sense of community with others but rather to navigate an unfamiliar and potentially unwelcoming (or even hostile) social environment. In this sense, the `masking' behaviours that autistic individuals may perform---faking eye contact, mirroring gestures and expressions, scripting conversations, disguising `stimming' behaviour, enduring sensory discomfort, etc.---are a type of cultural code-switching \citep{Hull-et-al-2017}, which may be employed in a range of social situations---including feeling safe in a culture that has, historically, highly stigmatised autistic individuals \citep{Silberman-2015, Donvan-Zucker-2016}.
Importantly, for our purposes, each of the behaviours just described is {\it non}-linguistic. In this case, regular practice of cultural code-switching can have inherently negative side effects, including increased stress, anxiety, depression, and exhaustion or `autistic burnout' \citep{Bargiela-et-al-2016, Cage-et-al-2018, Cage-Troxell-Whitman-2019, Raymaker-et-al-2020}, or loss of identity and increased risk of suicidal thoughts \citep{Cassidy-et-al-2020}. Here, too, certain intersectional groups may feel differential social pressure to code-switch---several studies have suggested that people who identify as women are more prone to code-switching than those who identify as men, leading women to be misdiagnosed and partially causing the gender gap in autism diagnoses \citep{Gould-Ashton-Smith-2011, Dworzynski-et-al-2012, Lai-et-al-2015, Hull-et-al-2019}.
Thus, code-switching is a crucially important skill for negotiating the social dynamics of high-stakes environments---e.g., interactions with the police, or in courtrooms, educational institutions, job interviews, and more. Moreover, the pressures to code-switch are not felt uniformly but are disproportionately experienced by members of historically marginalised groups, who often must assimilate to dominant cultural norms to gain social acceptance and access certain resources or benefits or avoid potential harms and mistreatment.
For example, empirical studies and first-person testimonies have shown that Black women with natural hairstyles (e.g., Afros, braids, twists) are less likely to get job interviews than Black women with straightened hair, and especially compared to white women \citep{Koval-et-al-2020}. Black women with natural hair received lower scores on assessments of `professionalism' and `competence', and they were less likely to be invited for job interviews as a result. Most notably, this occurred when the norms of the job required a more conservative and formal dress code. One striking example is the 2017 case of Destiny Tompkins, a young Black woman who was told by her manager at Banana Republic (a white woman) that her braided hair was `unkempt' and `too urban' for the store's image. The manager told her to take out her braids or else she would stop scheduling her for shifts \citep{Samotin-2017}. It is reasonable that Black women might decide to code-switch---e.g., by straightening their hair for job interviews---in order to assimilate to norms of `professionalism' in white-dominated spaces. This is often needed to ensure job security and social acceptance.
Of course, code-switching in high-stakes settings, especially where particular opportunities for social mobility and benefits reside, involves more than changes in one's physical appearance. \cite{Dembroff-Saint-Croix-2019} emphasise the relationship between cultural code-switching and what they call \textit{agential identities}. They define agential identities as `the self-identities we make available to others---they bridge what we take ourselves to be with what others take us to be' \citep[572]{Dembroff-Saint-Croix-2019}. They discuss how code-switching is an effective way to negotiate and switch between \textit{entire social identities}, in some cases moving between genuine identities and merely apparent or superficial social identities.
%
At the most general level, cultural code-switching influences one's overall mode of existence in social space---how one chooses (or is forced) to exist in the social world. Particular social environments and cultural contexts that incentivise cultural code-switching may be sites of epistemic oppression where silencing thrives.
\section{Cultural Smothering and Double Binds}
\label{sec:Cultural-Smothering}
When one is forced to code-switch in the ways that we discussed in the previous section---to mask, blend in, or otherwise express a superficial identity or persona to be accepted, make a living, or avoid harms---it results in a form of self-censorship or self-silencing. This phenomenon is akin to what \cite{Dotson2011} has analysed as \textit{testimonial smothering}. On Dotson's analysis, testimonial smothering is a form of self-silencing that occurs when a speaker truncates or self-censors their testimony because their audience is taken to lack the required competence needed to properly understand what one is saying. This amounts to a kind of \textit{epistemic violence} when the testimonial recipient exhibits \textit{pernicious ignorance}. Pernicious ignorance, according to Dotson, is a form of reliable ignorance, or reliable insensitivity to the truth, that results in a harm.\footnote{Also see, for example, \cite{Dotson2014, Medina2013, Pohlhaus2012} and \cite{Fricker2007} for related discussion.}
Dotson's analysis of testimonial smothering highlights the importance of testimonial competence---the ability of an interlocutor to \textit{get what you mean} (in the way you mean it)---for successful communicative exchanges. Furthermore, this analysis describes how particular speakers, especially those who are testifying from disadvantaged positions, are prone to vulnerabilities and harms when their audience lacks such a competence. \citet{Dotson2011} writes:
\begin{quote}
Speakers are vulnerable in linguistic exchanges because an audience may or may not meet the linguistic needs of a given speaker in a given exchange. \ldots [T]o communicate we all need an audience willing and capable of hearing us. (238)
\end{quote} %
Moreover, engaging in communicative exchanges where one's audience lacks testimonial competence (or where it is unclear if one's interlocutor is competent) can be very risky and potentially unsafe \citep[244--245]{Dotson2011}. For example, drawing upon the work of \cite{Crenshaw-1991}, Dotson explains how Black women's testimony about sexual violence perpetrated by Black men is potentially unsafe or risky in social contexts where it may be interpreted as reinforcing racist stereotypes.
Dotson’s analysis focuses on \textit{testimonial} exchanges---how testifiers from oppressed groups are often forced to capitulate to the testimonial incompetency of their audiences by altering their testimony to include only that which is likely to be given proper uptake. However, with cultural code-switching in view, we can broaden Dotson's analysis to apply not only to the truncating or smothering of testimony but also the smothering of aspects of one's cultural identity more broadly. This is salient in cases of forced code-switching, where one masks aspects of their culture in response to dominant norms and pressures within the social environment, especially where code-switching is required to gain some benefit or to avoid harms. In such cases, one may not only smother their testimony, but also engage in a broader form of what we will refer to as \textit{cultural smothering} \citep{Falbo-2021}. When cultural code-switching behaviours manifest as cultural smothering, they take on a form of self-silencing.
The pressures to engage in cultural smothering through code-switching arise in non-ideal choice situations. Such situations often have the structure of a \textit{double bind}, which is described as a `situation in which options are reduced to a very few and all of them expose one to penalty, censure or deprivation' \citep[2]{Frye-1983}. For example, when deciding whether to have (heterosexual) sex, Frye discusses how women are often susceptible to social criticism and scrutiny no matter what they do. If they have sex, they are prone to be called `easy' or a `slut', but if they do not, they are seen as `prudish', `uptight', or `frigid'. No matter what a woman chooses, she loses. The disadvantage that double binds give rise to is a function of their structural properties; therefore, they are sites of systemic oppression. They result from, reinforce, and sustain oppressive systems and unjust social arrangements.
Recently, \cite{Hirji-forth} has argued that what makes double binds unique and effective vehicles of oppression, compared to other difficult choice situations, is that `whatever an agent does necessarily undermines their own objective interests' (3). Hirji develops the notion of an `imperfect choice', which is a kind of choice situation where it's impossible to advance one's interests and where one will inevitably be worse off as a result. Imperfect choices constrain one's agency while nonetheless leaving aspects of one's autonomy intact. One is still free to choose, but this freedom is hollow and illusory because all available options undermine one's interests. Moreover, in so choosing, one is forced to be complicit in their own oppression. Double binds present a series of options, all of which are non-ideal and further contribute to systems of oppression that function to make one, and members of one's social group, worse off.
How does this relate to cultural code-switching? In cases where one must code-switch to gain some advantage or to avoid some harm in the social environment, one typically faces an imperfect choice. They are in a double bind. On the one hand, one might choose to code-switch. If so, one engages in a form of self-silencing or self-censorship: they culturally smother in response to (likely or actual) pernicious ignorance. They conceal, truncate, mask, or alter aspects of their social identity, presenting an edited version of themselves that accords with dominant cultural standards. On the other hand, if one decides {\it not} to code-switch, they resist dominant cultural norms and expectations within the social environment. But, by not code-switching, one risks social exclusion, unemployment, forms of material deprivation, and even safety (as was discussed in Section~\ref{sec:Cultural-Code-Switching}).
No matter the choice, one's interests are undermined. By code-switching, one is forced to be complicit in reinforcing dominant cultural norms, further entrenching broader patterns of inequality. But, by not code-switching one risks losing important material goods and opportunities for social advancement.
The pressure to code-switch highlights how members of marginalised groups must navigate unfamiliar and potentially hostile social environments to pursue upward mobility. It is also important to remember that code-switching can be \textit{exhausting}, especially when it manifests as cultural smothering. It involves a great deal of psychologically taxing work in which one's comparatively privileged counterparts simply need not engage.
\section{Tentative Lessons and Concluding Remarks}
\label{sec:Conclusion}
AI systems are increasingly ubiquitous and increasingly complex. So too are the social issues to which they give rise. This includes not only more direct human-to-AI interactions via smartphones, chatbots, or digital voice assistants but also how these technologies are implicated `behind the scenes' through social media platforms, streaming services, predictive search algorithms, video games, apps for rideshare programs and online dating, email communications, banking and finance, e-commerce, and more. These advancements have undoubtedly led to many positive changes, making previously arduous tasks much more streamlined and efficient. But, at the same time, these technologies can hinder social progress.
As is now well known, AI systems encode biases. This is unsurprising. These systems are not getting their training data in a vacuum. Instead, they are sourcing it from places like the Internet, which reflects society's myriad biases and prejudices, and produces selection effects favouring certain privileged populations (e.g., English-speakers, those who have Internet access, etc.) over others. As these technologies become increasingly advanced and indispensable to everyday tasks, researchers must be cognisant of how social structures that maintain unjust social arrangements are reproduced, reflected, or potentially made more potent and harmful within these technologies and their implementation.\footnote{This discussion also raises the question of whether we should really want AI technologies, for instance, voice assistants like Alexa and Siri, to be very human-like. One interesting development is the creation of Q, a genderless voice assistant. You can try it out yourself: \href{https://www.genderlessvoice.com}{www.genderlessvoice.com}.}
These insights underscore the need to attend to broader structural inequalities and mechanisms of oppression that reach well beyond AI technologies. This suggests that the solution to these issues does not rest within some manipulation or re-configuration of training data or some alternative algorithm. The problems we identify are not fundamentally problems resulting from AI technologies and, as a result, won't give way to technological solutions. As \citet{O'Neil-2016} has argued, `Big Data processes codify the past. They do not invent the future' (204). Importantly, more data and more accurate or efficient algorithms will not solve any of the problems that we have described. Even if an algorithm had a perfect model of the world, the structures that disadvantage certain groups over others would be reflected in the data. This is because the problems that arise when training and implementing AI technologies in a non-ideal and unjust social world are due to the very structure of that world. As a result, the issue is not whether the data are incomplete or unreflective of the real world. Even (perhaps especially) if they are complete and perfectly reflective, these problems will persist. Moreover, these technologies feed back into society, creating more biased data as input, thus giving rise to a pernicious, self-reinforcing feedback loop of the type described by \citet{O'Neil-2016}. Accordingly, the solution cannot be merely technological because the problem is not merely technological. A responsible first step is to be aware of structural injustice in the world. The point of intervention is not our data: our data will be better when the world is better, and so long as inequalities persist, inequalities in the data will emerge to reflect them. All technological progress does is instantiate these inequalities more efficiently.
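The feedback dynamic can be made vivid with a deliberately minimal simulation, in the spirit of the `runaway feedback' models discussed in the predictive-policing literature. All numbers and the allocation rule below are invented for illustration: two districts have the {\it same} true incident rate, but one starts with more recorded incidents, and enforcement is always sent wherever the record looks worst:

```python
import random

def simulate(initial_counts, steps=1000, true_rate=0.1, seed=0):
    """Toy feedback loop: recorded incidents drive patrol allocation,
    and patrols are the only way new incidents get recorded."""
    rng = random.Random(seed)
    counts = list(initial_counts)
    for _ in range(steps):
        # The "model": patrol the district with the worse record so far.
        district = 0 if counts[0] >= counts[1] else 1
        # Ground truth: both districts have the SAME incident rate, but
        # incidents are only recorded where the patrol happens to be.
        if rng.random() < true_rate:
            counts[district] += 1
    return counts

# District 0 starts with a slightly worse record (10 vs 5 recorded incidents).
final = simulate([10, 5])
# District 0's record keeps growing; district 1's never changes, so the
# initial disparity is amplified rather than corrected.
```

No amount of extra data fixes this: every new observation is generated under the biased allocation, so the record converges on the allocation rule rather than on the world.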
We have highlighted one important way these technologies can potentially be sites for epistemic oppression: by imposing imperfect choice situations or double binds onto non-dominant social groups. Just as in the case of human-to-human interactions, where patterns of silencing emerge in response to the pernicious ignorance of one's audience, so too can pernicious ignorance be encoded into AI technologies, further deepening patterns of silencing on a potentially grander scale and at a more rapid pace. If technologies involving NLP and human-to-AI-interactions require speakers to conform to dominant expectations and norms---or, in other words, if such technologies encode pernicious ignorance---then non-dominant groups will need to engage in cultural smothering in order to interact with these technologies.
\iffalse
\newpage
\section{NOTES (Travis)}
\begin{itemize}
\item Say something about default-white (male, {\it anglo}) assumptions in ML research; the last of these is particularly important for present purposes.
\item Emphasis in NLP is on syntax and semantics (cite examples or survey); pragmatics are difficult but important for the context-sensitive nature of language which we discuss here.
\item Mention NLP programmes that flag drag queen speech (and the speech of other queer communities) as `toxic' (i.e., insensitive to context).
\item Cite Gabbrielle Johnson's paper on value-free machine learning.
\item Cite DeepMind paper on `fairness for unobserved characteristics'.
\item PAPER STRUCTURE:
\begin{itemize}
\item Introduction - essentially the extended abstract
\item Discussion of Cultural Code-Switching
\item Background on SOTA NLP research
\item Importance of code switching in the context of AI ethics research
\item Value-ladenness of science
\item Conclusion
\end{itemize}
\end{itemize}
\fi
\newpage
\singlespacing
\bibliographystyle{apalikelike}
\section{Introduction}
Interest in the nonlinear dynamics of microelectromechanical and
nanoelectromechanical systems (MEMS \& NEMS) has grown rapidly over
the last few years, driven by a combination of practical needs as well
as fundamental questions~\cite{LCreview}. Lithographic fabrication
technology allows the construction of large arrays of MEMS \& NEMS
devices (as many as 2800 to date~\cite{Bargatin08}), coupled by
electric, magnetic, or elastic forces. In addition, nonlinear behavior
is readily observed in these devices at relatively small amplitudes
of motion~\cite{turner98,C00,BR01,blick02,turner02,turner03,yu02,cleland04,%
erbe00,kozinsky07,demartini07,masmanidis07}. Limitations in the
fabrication technology mean that individual devices will usually have
slightly different resonant frequencies, and nonlinear collective
effects, such as synchronization (all devices oscillating in
phase)~\cite{sync1,sync2} and pattern formation~\cite{BR,sato06,LC,BCL}
(coherent response with a more complex spatial structure), have been
proposed as ways of achieving useful coherent responses.
Consequently, for many technological applications, there exists a
practical need to understand the collective nonlinear behavior of MEMS
\& NEMS devices.
At the same time, the advances in the fabrication, transduction, and
detection of MEMS \& NEMS resonators open up an exciting new
experimental window into the study of fundamental questions in
collective nonlinear dynamics. Typical nonlinear MEMS \& NEMS
resonators are characterized by extremely high frequencies---recently
going beyond 1 GHz~\cite{HZMR03,cleland:070501}---and relatively weak
dissipation, with quality factors in the range of $10^2-10^5$. For
such devices, transients die out rapidly, so that it is easy to attain
the long-time asymptotic states, be they steady, periodic, or chaotic,
and to acquire sufficient data to characterize these states well. From
the theoretical point of view, the systems have the advantage that the
basic physics of the individual elements is simple, and the parameters
can be measured or calculated, so that the equations of motion
describing the system can be established with confidence. This, and
the fact that weak dissipation can be treated as a small perturbation,
provide a great advantage for quantitative theoretical study.
Moreover, the ability to fabricate arrays of thousands of coupled
resonators opens new possibilities in the study of nonlinear dynamics
of intermediate numbers of degrees of freedom---much larger than one
can study in macroscopic or table-top experiments, yet much smaller
than one studies when considering nonlinear aspects of phonon dynamics
in a crystal.
Our current studies are motivated by the experimental work of Buks and
Roukes~\cite{BR}, who fabricated an array of nonlinear micromechanical
doubly-clamped gold beams, and excited them parametrically by
modulating the strength of an externally-controlled electrostatic
coupling between neighboring beams. The Buks and Roukes experiment was
modeled by Lifshitz and Cross~\cite{LC} using a set of coupled
nonlinear equations of motion. They used secular perturbation theory
to convert these equations of motion into a set of coupled nonlinear
\textit{algebraic\/} equations for the normal mode amplitudes of the
system, enabling them to obtain exact results for small arrays, but
only a qualitative understanding of the dynamics of large arrays. In
order to obtain analytical results for large arrays, Bromberg, Cross,
and Lifshitz~\cite[henceforth BCL]{BCL} studied the same system of
equations, approaching it from the continuous limit of infinitely-many
degrees of freedom, and obtaining a description of the slow
spatiotemporal dynamics of the array of resonators in terms of an
amplitude equation. BCL showed that this amplitude equation could
predict the initial mode that develops at the onset of parametric
oscillations as the driving amplitude is gradually increased from
zero, as well as a sequence of subsequent transitions to other
single-mode oscillations.
The combination of many degrees of freedom and nonlinearity in the
equations of motion typically leads to a large multiplicity of
physically realizable solutions for fixed system parameters. This is
illustrated for the particular case of two and three parametrically
driven oscillators by the explicit results of Lifshitz and
Cross~\cite{LC}. The richness of possible solutions leads to
opportunities for diverse functionality of the system, in nature or
technology. On the other hand we need to be able to predict which out
of the possible solutions will be seen for a given experimental
protocol, or design particular protocols such that the desired
solution is the one that is formed. This is the general question of
\emph{pattern selection}~\cite{reviewcross}. A common experimental
protocol is to vary one or more system control parameters, usually
either slowly compared with the intrinsic time scales of the
dynamics, or in an abrupt step. A particular solution will usually
survive (evolving adiabatically in the former case of slow parameter
variation) until it becomes unstable to small perturbations, and a
sequence of patterns can be predicted by analyzing these
instabilities.
In this paper we investigate the sequence of single mode standing
wave patterns to be expected in parametrically driven oscillator
arrays, in cases where many such modes are simultaneously stable,
when the strength of the driving is varied. Although the quantitative
analysis could be done directly from the basic oscillator equations
of motion, it is advantageous to formulate the analysis in terms of
the BCL amplitude equation. This allows us to display the range of
stable patterns on a reduced plot involving just two dimensionless
variables (a scaled measure of the driving strength, and a scaled
mode wave number), so that it is easy to deduce the general
qualitative behavior on varying parameters. The specific quantitative
behavior for a physical system is also easy to obtain by evaluating
the corresponding scaled quantities. A change of pattern occurs when
parameters vary so that the mode moves outside of the region of
stable patterns on this plot, and the new pattern is predicted by
analyzing the result of the instability using the BCL amplitude
equation. This type of approach has been used in other pattern
forming systems~\cite{ksz}. A novel feature of the present system is
that the difference in the instabilities encountered on increasing
and decreasing the (scaled) driving strength leads to the prediction
of quite different-sized mode jumps for the up and down sweeps.
The outline of the paper is as follows. In Sec.~\ref{sec:bcl} we
review the derivation of the amplitude equation of BCL, and in
Sec.~\ref{sec:single} use this equation to discuss the stability of
single-mode oscillating patterns. We then study the sequence of
patterns observed for a variety of time dependent sweeps of the
driving strength: quasistatic variation in Sec.~\ref{quasi_rev};
abrupt step jumps in Sec.~\ref{const_g}; and a control parameter ramp
varying linearly in time in Sec.~\ref{time_dependent_g}. Finally, we
conclude with some remarks connecting our results to those of Buks
and Roukes~\cite{BR} who swept the frequency rather than the driving
strength.
\section{BCL amplitude equation}
\label{sec:bcl}
Lifshitz and Cross~\cite{LC} modeled the array of coupled nonlinear
resonators that was studied by Buks and Roukes~\cite{BR} using the
equations of motion
\begin{align}
\ddot{u}_{n} & + u_{n} + u_{n}^{3} -
\frac{1}{2}\epsilon(\dot{u}_{n+1} - 2\dot{u}_{n} +
\dot{u}_{n-1})\nonumber\label{eom}\\
& +\frac{1}{2}\bigl[\Delta^{2}+\epsilon h\cos(2\omega_{p}t)\bigr](u_{n+1}%
-2u_{n}+u_{n-1})\nonumber\\
& -\frac{1}{2}\delta^{1/2}\bigl[(u_{n+1}-u_{n})^{2}(\dot{u}_{n+1}-\dot{u}%
_{n})\nonumber\\
& -(u_{n}-u_{n-1})^{2}(\dot{u}_{n}-\dot{u}_{n-1})\bigr]=0,
\end{align}
where $u_n(t)$ describes the deviation of the $n^{th}$ resonator from
its equilibrium, with $n=1\ldots N$, and fixed boundary conditions
$u_{0}=u_{N+1}=0$. Detailed arguments for the choice of terms
introduced into the equations of motion are discussed in
Ref.~\cite{LC}. The terms include an elastic restoring force with
both linear and cubic contributions (whose coefficients are both
scaled to 1), a dc electrostatic nearest-neighbor coupling term with a
small ac component responsible for the parametric excitation (with
coefficients $\Delta^{2}$ and $\epsilon h$ respectively), and linear
as well as cubic nonlinear dissipation terms. The dissipation in the
system is assumed to be weak, which is used to define two small
expansion parameters $\epsilon\ll1$ and $\delta\ll1$ by setting the
linear damping rate to $\epsilon$ and the nonlinear damping
coefficient to $\delta^{1/2}$, with a square root for later
convenience. The driving amplitude is then expressed as $\epsilon h$,
with $h$ of order one, in anticipation of the fact that parametric
oscillations at half the driving frequency require a driving amplitude
which is of the same order as the linear damping rate~\cite{Landau}.
Both dissipation terms are taken to be of a nearest neighbor form,
motivated by the experimental indication that most of the dissipation
comes from the electrostatic interaction between neighboring beams.
In order to treat the system of equations (\ref{eom}) analytically,
BCL introduced a continuous displacement field $u(x,t)$, and slow
spatial and temporal scales, $X=\epsilon x$ and $T=\epsilon t$. They
tried a solution in terms of a pair of counter-propagating plane
waves, oscillating at half the drive frequency,
\begin{eqnarray} \label{uAnsatz}\nonumber
u(x,t)&=& \epsilon^{1/2}
\bigl[\left(A_+(X,T)e^{-iq_px}+A_-^*(X,T)e^{iq_px}\right)e^{i\omega_p
t} \\
&+& c.c.\bigr] + \epsilon^{3/2}u^{(1)}(x,t,X,T)+\ldots,
\end{eqnarray}
where the asterisk and $c.c.$ denote complex conjugation, and $q_p$
and $\omega_p$ are related through the dispersion relation
\begin{equation}
\label{eq:dispersion}
\omega_p^2 = 1 - 2\Delta^2\sin^2\left(\frac{q_p}{2}\right).
\end{equation}
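As a quick numerical check of the dispersion relation~(\ref{eq:dispersion}) (our own sketch, not part of the original analysis), one can integrate the conservative limit of the equations of motion~(\ref{eom}), obtained by setting $\epsilon=\delta=0$ so that damping and drive drop out, and verify that a small-amplitude standing wave $u_n=a\sin(qn)$ oscillates at the frequency $\omega(q)$. The array size $N=16$, mode index $m=5$, and $\Delta=0.5$ below are illustrative choices.

```python
import numpy as np

# Illustrative choices (not from the paper): small array, conservative limit.
N, m, Delta = 16, 5, 0.5
q = m * np.pi / (N + 1)                                # quantized wave number
omega = np.sqrt(1 - 2 * Delta**2 * np.sin(q / 2)**2)   # dispersion relation

def accel(u):
    """ddot{u} from the equations of motion with epsilon = delta = 0.
    u holds N+2 entries including the fixed ends u[0] = u[N+1] = 0."""
    a = np.zeros_like(u)
    lap = u[2:] - 2 * u[1:-1] + u[:-2]                 # discrete Laplacian
    a[1:-1] = -u[1:-1] - u[1:-1]**3 - 0.5 * Delta**2 * lap
    return a

a0 = 1e-3                                # small amplitude: cubic term negligible
n = np.arange(N + 2)
u = a0 * np.sin(q * n)                   # standing-wave initial condition
v = np.zeros_like(u)

T = 2 * np.pi / omega
dt = T / 2000
for _ in range(2000):                    # one full period with classical RK4
    k1u, k1v = v, accel(u)
    k2u, k2v = v + 0.5 * dt * k1v, accel(u + 0.5 * dt * k1u)
    k3u, k3v = v + 0.5 * dt * k2v, accel(u + 0.5 * dt * k2u)
    k4u, k4v = v + dt * k3v, accel(u + dt * k3u)
    u = u + dt / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
    v = v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)

# deviation from the initial profile after one period (tiny)
print(np.max(np.abs(u - a0 * np.sin(q * n))))
```

After one period the standing wave returns to its initial profile to within integration error, confirming the linear dispersion relation in the small-amplitude limit.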
By substituting this ansatz (\ref{uAnsatz}) into the equations of
motion (\ref{eom}) and applying a \emph{solvability condition} on the
terms of order $\epsilon^{3/2}$, BCL obtained a pair of coupled
amplitude equations for the counter-propagating wave amplitudes
$A_\pm$. A linear analysis of these equations shows that at the
critical drive amplitude $h_{c}=2\omega_p$ a particular linear
combination of the two counter-propagating waves obtains a positive
growth rate, forming a standing wave pattern, while the growth rate of
the orthogonal linear combination remains negative. This implies that
a single amplitude equation should suffice at onset, describing this
standing wave pattern.
At this point it is natural to define a reduced driving amplitude $g$
with respect to the critical drive $h_c$ at onset by letting
$(h-h_{c})/h_{c}\equiv g\delta$, and to introduce a second ansatz,
\begin{eqnarray}\label{Bansatz}
\left(%
\begin{array}{c}
A_+ \\
A_- \\
\end{array}%
\right)&=&\delta^{1/4}
\left(%
\begin{array}{c}
1 \\
i \\
\end{array}%
\right) \hat{B}(\hat\xi,\hat\tau)+
\delta^{3/4}\left(%
\begin{array}{c}
w^{(1)}(X,T,\hat\xi,\hat\tau) \\
v^{(1)}(X,T,\hat\xi,\hat\tau) \\
\end{array}%
\right)\nonumber\\&+&
\delta^{5/4}\left(%
\begin{array}{c}
w^{(2)}(X,T,\hat\xi,\hat\tau) \\
v^{(2)}(X,T,\hat\xi,\hat\tau) \\
\end{array}%
\right),
\end{eqnarray}
where $\hat\xi=\delta^{1/2} X$ and $\hat\tau=\delta T$. Substitution
of this ansatz allows one to obtain the correction of the solution at
order~$\delta^{3/4}$
\begin{equation}\label{w1v1Sol}
\begin{split}
&\left(\begin{array}{c}
w^{(1)} \\
v^{(1)} \\
\end{array}%
\right)=\frac1{4\omega_p\sin^2(q_p/2)}\\
&\times\left(\Delta^2\sin\left(q_p\right)\frac{\partial \hat{B}}{\partial
\hat\xi} + 9i|\hat{B}|^2\hat{B}\right) \left(
\begin{array}{c}
1 \\
-i \\
\end{array}%
\right),
\end{split}
\end{equation}
after which a solvability condition applied to the terms of order
$\delta^{5/4}$, and a rescaling of all the physical quantities, yield
an equation for the scaled field $B(\xi,\tau)$ of the form
\begin{eqnarray} \label{BampEq}\nonumber
\frac{\partial B}{\partial\tau} &=& gB +
\frac{\partial^{2}B}{\partial\xi^{2}} + i\frac{2}{3}
\left(4|B|^{2}\frac{\partial B}{\partial\xi} +
B^{2}\frac{\partial B^{*}}{\partial\xi}\right)\\
&- &2|B|^{2}B - |B|^{4}B.
\end{eqnarray}
This is the BCL amplitude equation. It is governed by a single control
parameter, the reduced drive amplitude $g$, and captures the slow
dynamics of the coupled resonators just above the onset of parametric
oscillations. The reader is encouraged to consult Ref.~\cite{BCL} for
a more detailed account of the derivation of the BCL equation, as well
as a detailed list of all the scale factors leading to the final form of
the equation.
\section{Single-mode solutions of the BCL amplitude equation}
\label{sec:single}
The simplest nontrivial solutions of the BCL amplitude equation are
steady-state single-mode extended patterns, given by
\begin{equation}\label{SingleMode}
B(\xi,\tau) = b_k e^{i(\varphi-k\xi)},
\end{equation}
with $b_k$ and $\varphi$ both real. This solution, when substituted
back into (\ref{w1v1Sol}) and (\ref{Bansatz}), and then into
(\ref{uAnsatz}), yields single-mode standing-wave parametric
oscillations at half the drive frequency, whose explicit form is given
in Appendix A. The original boundary conditions $u(0,t)=u(N+1,t)=0$
constrain the phase $\varphi$ to be $\pi/4$ or $5\pi/4$, and constrain
the wave numbers of the spatial pattern to have the quantized values
of $q_m = m\pi/(N+1)$, with $m=1,\ldots, N$.
BCL showed that the first single-mode pattern to emerge as the
zero-state becomes unstable is that whose wave number $q_m$ is closest
to the wave number $q_p$ that is determined by the drive frequency
$\omega_p$ through the dispersion relation~(\ref{eq:dispersion}). This
determines the value of the scaled wave number in the single-mode
solution~(\ref{SingleMode}) to be
\begin{equation}\label{eq:k-zero}
k_0 = \left(m - q_p \frac{N+1}{\pi}\right) \Delta Q_N,
\end{equation}
where $m$ is the integer closest to $q_p(N+1)/\pi$, and $\Delta Q_N$,
whose explicit value is given in Appendix A, tends to zero as the size
$N$ of the array of resonators tends to infinity. In this paper we
are interested in secondary transitions as the initial single-mode
state of wave number $k_0$ becomes unstable with respect to the growth
of other single-mode states, whose wave numbers we label as
\begin{equation}
\label{eq:k-n}
k_n \equiv k_0 + n\Delta Q_N.
\end{equation}
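For concreteness, these wave numbers can be evaluated for the parameters quoted in the caption of Fig.~\ref{StbBalloon} ($N=92$, $q_p=73\pi/101$). The closed form of $\Delta Q_N$ is given in Appendix A, outside this section, so the sketch below (ours) simply takes the quoted value $\Delta Q_N\simeq3.70$ as given.

```python
import numpy as np

# Parameters from the caption of Fig. 1: N = 92, q_p = 73*pi/101.
# The closed form of Delta Q_N is derived in Appendix A; here we simply
# use the value Delta Q_N ~= 3.70 quoted in the caption.
N = 92
q_p = 73 * np.pi / 101
dQ = 3.70

m = round(q_p * (N + 1) / np.pi)         # integer closest to q_p (N+1)/pi
k0 = (m - q_p * (N + 1) / np.pi) * dQ    # scaled wave number of the onset mode
k = k0 + np.arange(-3, 4) * dQ           # k_{-3} ... k_3, i.e. k_n = k_0 + n*dQ

print(m, round(k0, 2))                   # -> 67 -0.81, the m = 67 mode of Sec. IV
```

The result reproduces the values quoted in the text: the onset pattern is the $m=67$ mode with $k_0\simeq-0.81$, and the neighboring mode has $k_1\simeq2.90$.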
In steady state, the relation between the magnitude $b_k$ and the
wave number $k$ is found by substituting (\ref{SingleMode}) into
(\ref{BampEq}) and setting the time derivative to zero, to give
\begin{equation}\label{Bsteady}
b_k^2 = (k-1) + \sqrt{(k-1)^2 + (g-k^2)}\geq 0,
\end{equation}
along with a negative square-root branch which is always unstable
against small perturbations~\cite{BCL}, as can be verified by the
analysis below. Linearization of the BCL amplitude equation
(\ref{BampEq}) shows that the zero state with $B(\xi,\tau)=0$---which
is a solution of~(\ref{BampEq}) for any value of $g$---is stable
against the formation of single-mode patterns with wave number $k$ as
long as $g<k^2$. The neutral stability curve $g=k^2$ is plotted as a
dashed parabola in Fig.~\ref{StbBalloon}. Furthermore, for $k<1$ the
bifurcation from the zero state to that of single-mode oscillations is
supercritical, occurring on the neutral stability curve, while for
$k>1$ it is subcritical, with a locus of saddle-node bifurcations
located along the line $g=2k-1$ (shown in Fig.~\ref{StbBalloon} as a
solid green line), where the square root in (\ref{Bsteady}) is exactly
zero.
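Since the derivatives of the single-mode ansatz~(\ref{SingleMode}) can be taken analytically ($\partial B/\partial\xi=-ikB$), it is straightforward to verify numerically that the amplitude~(\ref{Bsteady}) makes the right-hand side of the BCL equation~(\ref{BampEq}) vanish. The sketch below (ours; the point $g=11$, $k=2.90$ is an arbitrary choice inside the stability balloon) performs this check term by term.

```python
import numpy as np

# Illustrative point inside the stability balloon: g = 11, k = 2.90.
g, k = 11.0, 2.90
bsq = (k - 1) + np.sqrt((k - 1)**2 + (g - k**2))   # positive-branch steady amplitude
b = np.sqrt(bsq)

# Single-mode solution B = b exp(i(pi/4 - k xi)) and its analytic derivatives
xi = np.linspace(0.0, 1.0, 7)
B = b * np.exp(1j * (np.pi / 4 - k * xi))
dB = -1j * k * B                                   # dB/dxi
d2B = -k**2 * B                                    # d2B/dxi2

# Right-hand side of the BCL amplitude equation; should vanish in steady state
rhs = (g * B + d2B
       + 1j * (2 / 3) * (4 * np.abs(B)**2 * dB + B**2 * np.conj(dB))
       - 2 * np.abs(B)**2 * B - np.abs(B)**4 * B)
print(np.max(np.abs(rhs)))                         # ~ machine precision
```

Note also that $(k-1)^2+(g-k^2)=g-(2k-1)$, so the square root in the steady-state amplitude vanishes exactly on the saddle-node line $g=2k-1$, consistent with the discussion above.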
The stability of a single-mode solution (\ref{SingleMode}) of wave
number $k$ against an Eckhaus transition to a different single-mode
solution of wave number $k\pm Q$ is found by performing a linear
stability analysis of solutions of the form
\begin{equation}\label{BStability}
B(\xi,\tau)=b_ke^{-ik\xi}+\left(\beta_+(\tau)
e^{-i(k+Q)\xi}+\beta_-^*(\tau)e^{-i(k-Q)\xi}\right),
\end{equation}
with $|\beta_\pm|\ll1$. When the larger of the two eigenvalues
describing the growth of such a perturbation, which is given
by~\cite{BCL}
\begin{eqnarray}\label{lambda}\nonumber
\lambda_{g,k}(Q) & = & 2b_{k}^{2}(k-1-b_{k}^{2})-Q^{2}\\\nonumber
&+ &\frac{2}{3} \left[3Q^{2}(k - b_{k}^{2})(3k - 5b_{k}^{2})\right.\\
&&+ \left.9b_{k}^{4}(k - 1 - b_{k}^{2})^{2}\right]^{1/2},
\end{eqnarray}
becomes positive the single-mode solution of wave number $k$ undergoes
an Eckhaus instability with respect to different single-mode solutions
of wave numbers $k\pm Q$\footnote{Note that for the positive square-root
branch (\ref{Bsteady}) $b_{k}^{2}(k-1-b_{k}^{2})<0$, implying that
$\lambda_{g,k}(0)=0$, while for the negative square-root solution
$b_{k}^{2}(k-1-b_{k}^{2})>0$, implying that $\lambda_{g,k}(0)>0$. As a
consequence the positive square-root solution (\ref{Bsteady}) is
stable with respect to small perturbations with the same wave number
$k$, while the negative square-root solution is unstable.}.
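The eigenvalue~(\ref{lambda}) is easily evaluated numerically. As an illustration (our own sketch), the following bisects for the drive $g$ at which $\lambda_{g,k_0}(\Delta Q_N)$ first turns positive, using the values $k_0\simeq-0.075$ and $\Delta Q_N\simeq0.69$ quoted in the caption of Fig.~\ref{four_lambda}; the result reproduces the Eckhaus threshold $g\simeq5.13$ quoted there.

```python
import numpy as np

def bsq(g, k):
    """Steady-state |B|^2 of the single-mode state (positive branch)."""
    return (k - 1) + np.sqrt((k - 1)**2 + (g - k**2))

def lam(g, k, Q):
    """Larger eigenvalue of the Eckhaus perturbation (k -> k +/- Q)."""
    b2 = bsq(g, k)
    return (2 * b2 * (k - 1 - b2) - Q**2
            + (2 / 3) * np.sqrt(3 * Q**2 * (k - b2) * (3 * k - 5 * b2)
                                + 9 * b2**2 * (k - 1 - b2)**2))

# Values quoted in the caption of Fig. 5: k0 ~= -0.075, Delta Q_N ~= 0.69.
k0, dQ = -0.075, 0.69

# Bisect for the drive g at which lam(g, k0, dQ) first turns positive.
glo, ghi = 1.0, 10.0
for _ in range(60):
    gmid = 0.5 * (glo + ghi)
    if lam(gmid, k0, dQ) < 0:
        glo = gmid
    else:
        ghi = gmid
print(round(gmid, 2))   # ~ 5.13, the Eckhaus threshold quoted in Fig. 5
```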
For an infinite number of oscillators the Eckhaus instability forms
the upper boundary of the stability balloon of the single-mode
solutions, and also the lower boundary for $k<5/2$. For $k>5/2$ the
lower boundary is the saddle node bifurcation line. For a finite
number of oscillators, restricting $Q$ to be an integer multiple of
$\Delta Q_N$ in (\ref{lambda}) slightly shifts the Eckhaus
instability lines. The upper Eckhaus boundary is shifted to larger
values of $g$. The nature of the lower instability
boundary now depends on the number of resonators in the array through
$\Delta Q_{N}$, as well as on the wave number $k$ \cite{BCL}. For
$k<1$ the lower boundary will be the Eckhaus instability curve if
$|k|>\Delta Q_{N}/2$, and the neutral stability curve otherwise. From
(\ref{eq:k-zero}) and (\ref{eq:k-n}) we find that the only wave
number to satisfy $|k|<\Delta Q_{N}/2$ is $k_{0}$, which means that
upon decreasing $g$ the $k_{0}$ solution undergoes a continuous
transition to the zero state. For $k>1$ the lower boundary will be
the Eckhaus instability curve if $1<k<(5-3(\Delta Q_{N}/2)^{2})/2$,
and the line of saddle node bifurcations otherwise. For $\Delta Q_{N}>2$
(for the parameters used throughout this paper this corresponds to
$N<172$) there is no portion of Eckhaus instability on the lower
boundary, which is the neutral stability curve if $k<1$ and the
saddle node bifurcation curve if $k>1$. These stability boundaries
are shown in Fig.~\ref{StbBalloon} for an infinite system and for a
system of $N=92$ resonators, which is discussed next.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig1}%
\caption{\label{StbBalloon}
(Color online) Stability boundaries of the single-mode solution
(\ref{SingleMode}) of the BCL amplitude equation (\ref{BampEq}) in
the $g$~{\it vs.}~$k$ plane. Dashed line: neutral stability curve
$g=k^{2}$. Dotted line: stability boundary of the single-mode
solution~(\ref{SingleMode}) for a continuous spectrum
($Q\rightarrow0$). Solid lines: stability boundary of the
single-mode solution for $N=92$ and the parameters $\Delta=0.5$,
$q_{p}=73\pi/101$, and $\epsilon=\delta = 0.01$ (giving $k_0\simeq
-0.81$ and $\Delta Q_N \simeq 3.70$). Black line: the value of $g$
for which the eigenvalue $\lambda_{g,k}(\Delta Q_N)$ turns positive.
Red line: the lower bound for $k<1$, $g=k^{2}$. Green line: the
lower bound for $k>1$, the locus of saddle-node bifurcations
$g=2k-1$. Vertical and horizontal arrows mark the secondary
instability transitions shown in Fig.~\ref{three_type_sweep} and
discussed in Sec.~\ref{quasi_rev}.}
\end{figure}
\section{Quasistatic sweeps of the control parameter}
\label{quasi_rev}
\begin{figure}
\includegraphics[width=1\columnwidth]{fig2}%
\caption{\label{three_type_sweep}
(Color online) The amplitude of single-mode oscillations as a
function of the reduced drive amplitude $g$. The parameters are the
same as in Fig.~\ref{StbBalloon}. Solid lines show the analytical
values~(\ref{Bsteady}) of the amplitudes $b^{2}_{k}$ of the modes
$k_{n}$ (for $n=0,\ldots, 3$). Note that the $k_0$ mode bifurcates
supercritically, whereas all the other modes start at saddle-node
bifurcations. All modes terminate at the values of $g$ for which
they become Eckhaus unstable. Symbols show numerical calculations,
where blue is used for upward sweeps of $g$ and red is used for
downward sweeps, as follows: (a) $+$s and $\Box$s show upward and
downward sweeps, respectively, of the original equations of
motion~(\ref{eom}) of the coupled resonators. (b) $\bigtriangleup$s
and $\bigtriangledown$s show upward and downward sweeps,
respectively, of the BCL amplitude equation~(\ref{BampEq}). (c) $*$s
and $\circ$s show upward and downward sweeps, respectively, of the
truncated mode expansion equations~(\ref{a-eqn}) for the seven modes
$b_{-3}$ to $b_{3}$. }
\end{figure}
We begin by taking a close look at the switching that occurs between
single-mode patterns~(\ref{SingleMode}) of different wave numbers
$k_{n}$ as the control parameter---the reduced drive amplitude
$g$---is varied quasistatically. We examine a typical situation,
which is depicted within the stability balloon of single-mode
solutions, shown in Fig.~\ref{StbBalloon}. Parameters are chosen such
that the initial pattern happens to have a wave number
$k_0\simeq-0.81$, which corresponds to the array of $N=92$ nonlinear
resonators oscillating at its $m=67$ mode. Because $k_0<1$ we expect
the pattern to grow supercritically from the zero state as the
control parameter is gradually increased from $g=0$. The sequence of
expected secondary transitions to single-mode patterns of wave
numbers $k_n$ can be understood with the help of the vertical and
horizontal lines drawn within the stability balloon. As $g$ reaches a
value of about 10, the initial $k_0$ pattern undergoes an Eckhaus
instability to a pattern of wave number $k_1\simeq 2.90$. As this
occurs in the solution~(\ref{SingleMode}) of the amplitude
equation~(\ref{BampEq}) the pattern of the array of nonlinear
resonators~(\ref{eom}) switches from the $67^{th}$ mode to the
$68^{th}$ mode via a single phase slip, in which the number of nodes
in the standing-wave pattern increases exactly by one. With the
continuing increase of the control parameter $g$ the secondary
pattern eventually undergoes another Eckhaus transition to $k_2\simeq
6.60$ ($m=69$), followed by a further Eckhaus transition to
$k_3\simeq 10.30$ ($m=70$).
Upon decreasing the value of the control parameter $g$ back to zero,
the $k_3$ pattern remains stable down to its saddle-node bifurcation
at a value of $g$ just below 20. As we further decrease $g$, the
$k_{2}$ wave number is skipped and the $k_{1}$ wave number appears,
even though the control parameter is varied quasistatically. This
transition is discussed in detail below. The single-mode pattern of
wave number $k_1$ eventually reaches its saddle-node bifurcation value
and is replaced by the $k_0$ pattern.
This sequence of secondary transitions, which is expected for such a
quasistatic upward sweep of the control parameter followed by a
quasistatic downward sweep, is verified numerically in
Fig.~\ref{three_type_sweep}. Solid curves show the analytical
values~(\ref{Bsteady}) of the amplitudes $b_{k}^{2}$ of the modes
$k_{n}$ ($n=0\ldots 3$), plotted in the region in which the
corresponding single-mode solutions~(\ref{SingleMode}) are stable.
Superimposed symbols show the numerical solution of both the original
equations of motion~(\ref{eom}) for $N=92$ resonators, and the BCL
amplitude equation~(\ref{BampEq}), for a quasistatic sweep of the
control parameter from $g=0$ up to $g=85$ and back down to $g=0$. We
note that in order to satisfy the boundary conditions when integrating
the BCL amplitude equation, both the $\delta^{1/4}$ and $\delta^{3/4}$
terms of Eq.~(\ref{Bansatz}), when substituted into the expression
$A_{+}e^{-iq_{p}x}+A_{-}^{*}e^{iq_{p}x}$ in Eq.~(\ref{uAnsatz}), must
be set to zero separately at the boundaries. This yields a pair of
conditions on $B$ and its derivative, at the boundaries, of the form
\begin{eqnarray}
&&Be^{-iq_{p}x}-iB^{*}e^{iq_{p}x} = 0, \\
&&\frac{\partial B}{\partial \xi}e^{-iq_{p}x}+i\frac{\partial
B^{*}}{\partial \xi}e^{iq_{p}x} = 0.
\end{eqnarray}
To study the actual process of an Eckhaus transition as it takes
place, we expand the general solution of the BCL amplitude equation in
the linear modes of the array
\begin{equation}\label{multimode}
B(\xi,\tau)=\sum_n b_{n}(\tau)e^{i(\varphi_{n}-k_{n}\xi)},
\end{equation}
where $k_{n}$ is defined in~(\ref{eq:k-n}), as was done, for example,
in a similar situation by Kramer et al.~\cite{ksz}. Substituting a
truncated mode expansion~(\ref{multimode}) containing a finite number
of modes around $k_0$ into the BCL amplitude equation~(\ref{BampEq})
allows us to replace this partial differential equation with a finite
number of ordinary differential equations for the coupled mode
amplitudes,
\begin{eqnarray}\label{a-eqn}\nonumber
\frac{\partial b_{n}}{\partial \tau}&= &\left(g - k_{n}^{2}\right)
b_{n}\\\nonumber
&+ &2\sum_{m,p}\left(k_p - 1 - \frac{m-n}{3} \Delta Q_N \right)
b_{m}b_{p}b_{m+p-n}^{*}\\
&- &\sum_{m,l,p,r}b_{m}b_{l}^{*}b_{p}b_{r}b_{m-l+p+r-n}^{*}.
\end{eqnarray}
To satisfy the boundary conditions, as mentioned above for the
single-mode solution~(\ref{SingleMode}), we take each mode amplitude
to be zero at the boundaries by setting all the phases $\varphi_{n}$
in Eq.~(\ref{multimode}) to $\pi/4$, and take the amplitudes $b_{n}$
to be real, keeping in mind that they can be either positive or
negative. Note that if all mode amplitudes except $b_0$ are set to
zero we obtain a single equation with $n=m=p=l=r=0$, whose
steady-state solution is the same as the single-mode solution of
BCL~(\ref{Bsteady}).
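As an illustration (our sketch, with the Fig.~\ref{StbBalloon} values $k_0\simeq-0.81$ and $\Delta Q_N\simeq3.70$, and drive $g=11$ just above the first Eckhaus instability), one can integrate Eqs.~(\ref{a-eqn}) truncated to the three modes $b_{-1}$, $b_0$, $b_1$, seed the side modes with a small perturbation on top of the $k_0$ steady state, and compare the measured exponential growth rate of $b_1$ with the eigenvalue $\lambda_{g,k_0}(\Delta Q_N)$ of Eq.~(\ref{lambda}).

```python
import numpy as np

k0, dQ, g = -0.81, 3.70, 11.0    # Fig. 1 values; g just above the first instability
ns = [-1, 0, 1]                  # three-mode truncation around k_0
kv = [k0 + n * dQ for n in ns]

def rhs(b):
    """Right-hand side of the truncated mode equations for real amplitudes."""
    db = np.array([(g - kv[i]**2) * b[i] for i in range(3)])
    for i, n in enumerate(ns):
        cub = qui = 0.0
        for im, mm in enumerate(ns):
            for ip, pp in enumerate(ns):
                qq = mm + pp - n                       # index of the conjugated factor
                if abs(qq) <= 1:
                    cub += (kv[ip] - 1 - (mm - n) * dQ / 3) * b[im] * b[ip] * b[qq + 1]
                for il, ll in enumerate(ns):
                    for ir, rr in enumerate(ns):
                        ss = mm - ll + pp + rr - n
                        if abs(ss) <= 1:
                            qui += b[im] * b[il] * b[ip] * b[ir] * b[ss + 1]
        db[i] += 2 * cub - qui
    return db

b0sq = (k0 - 1) + np.sqrt((k0 - 1)**2 + (g - k0**2))   # steady k_0 amplitude
b = np.array([1e-6, np.sqrt(b0sq), 2e-6])              # tiny seeds in the side modes

dt, rec = 0.02, {}
for step in range(1501):                               # integrate to tau = 30 (RK4)
    if step in (500, 1500):
        rec[step] = abs(b[2])                          # record |b_1| at tau = 10, 30
    r1 = rhs(b); r2 = rhs(b + 0.5 * dt * r1)
    r3 = rhs(b + 0.5 * dt * r2); r4 = rhs(b + dt * r3)
    b = b + dt / 6 * (r1 + 2 * r2 + 2 * r3 + r4)

rate = np.log(rec[1500] / rec[500]) / 20               # measured growth rate of b_1
lam = (2 * b0sq * (k0 - 1 - b0sq) - dQ**2
       + (2 / 3) * np.sqrt(3 * dQ**2 * (k0 - b0sq) * (3 * k0 - 5 * b0sq)
                           + 9 * b0sq**2 * (k0 - 1 - b0sq)**2))
print(round(rate, 3), round(lam, 3))  # the two rates should agree to a few percent
```

The measured growth rate of the seeded side mode matches the Eckhaus eigenvalue, confirming that the truncated equations reproduce the linear stability analysis; three modes suffice here because the Eckhaus perturbation couples exactly the pair $k_0\pm\Delta Q_N$.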
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig3}%
\caption{\label{phase_slip} (Color online) Time evolution of the
amplitudes of the four largest modes that participate in the Eckhaus
transition from the initial $k_0$ pattern to the $k_1$ pattern,
obtained by a numerical integration of the seven truncated mode
equations~(\ref{a-eqn}), for modes $b_{-3}$ to $b_{3}$, using the
same parameters as in Fig.~\ref{three_type_sweep}. The value of the
control parameter is changed from $g=10$ to $g=11$ at $\tau=0$,
causing the initial $k_0$ pattern to become unstable. The decay of
the amplitude $b_{0}$ is followed by the rise of $b_{1}$ to its
expected steady-state value~(\ref{Bsteady}), where it is clearly
seen that during the transition other modes---including the unstable
$k_{-1}$ mode---have a non-zero amplitude.}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig4}%
\caption{\label{saddle_node} (Color online) Time evolution of the
amplitudes of the four largest modes as the control parameter is
changed from $g=20$ to $g=19$, obtained by a numerical integration
of the seven truncated mode equations~(\ref{a-eqn}), for the
amplitudes $b_{-3}$ to $b_{3}$, using the same parameters as in
Fig.~\ref{StbBalloon}. As the value of $g$ drops below the
saddle node value of the $k_{3}$ wave number, its amplitude drops
abruptly to zero. Then $k_{0}$, the wave number of smallest magnitude
and hence the one with the largest linear growth rate about the zero
solution, grows to its steady-state value~(\ref{Bsteady}). Nevertheless,
after a short transient the $k_0$ pattern decays through an Eckhaus
instability and the $k_1$ pattern grows to its steady-state value.}
\end{figure}
As shown in Fig.~\ref{three_type_sweep}, we can capture the sequence
of Eckhaus transitions from the $k_0$ pattern up to the $k_3$ pattern,
and back down to the $k_0$ pattern through the saddle-nodes, by
integrating seven coupled ordinary differential
equations~(\ref{a-eqn}) (calculated symbolically using MATLAB) that
correspond to a truncated mode expansion~(\ref{multimode}) containing
the seven modes from $n=-3$ to $n=3$. The results agree very well
with those obtained by integrating the full BCL amplitude equation as
well as the original equations of motions for the resonators, yet the
solution in terms of a truncated mode expansion allows us to inspect
the transitions between patterns in greater detail.
We take a closer look at the transient behavior during the first
Eckhaus transition from the initial $k_0$ pattern to the $k_1$ pattern
by plotting the time evolution of the four largest modes during this
Eckhaus transition, as shown in Fig.~\ref{phase_slip}. One can observe
the decay of the unstable mode amplitude $b_{0}$ followed by the
growth of $b_{1}$ to its steady-state value. One can also see that
during the transient the amplitude of the unstable mode $b_{-1}$
becomes non-zero. Its participation in the Eckhaus transition from the
$k_0$ pattern to the $k_1$ pattern is essential, as can be verified by
considering these two modes alone in a truncated expansion. Limiting
the expansion to $b_0$ and $b_1$ suppresses the Eckhaus transition,
and the $k_0$ pattern remains stable as $g$ exceeds its expected value
for the Eckhaus instability. The Eckhaus transition is observed only
when the $k_{-1}$ mode is included as well, in agreement with the
stability calculation performed earlier for the state given by
Eq.~(\ref{BStability}).
One might naively expect that the same mechanism causes the transition
from the $k_{3}$ pattern to the $k_{1}$ pattern at $g=19$ through a
double phase slip; however, this is not the case.
Fig.~\ref{saddle_node} reveals the transient processes on a downward
sweep of $g$ just below the saddle node at $g=19$. As $g$ crosses the
saddle node bifurcation point, the amplitude $b_{3}$ drops abruptly to
zero. As can be seen from Eq.~(\ref{a-eqn}), in the zero displacement
state the linear growth rates of the solutions (\ref{multimode}) are
$g-k_{n}^{2}$, so the $k_{0}$ pattern has the largest possible growth
rate and it out-grows the other modes until its amplitude approaches
the steady state value (\ref{Bsteady}). However, according to the
eigenvalue (\ref{lambda}), at this value of $g$ the $k_{0}$ pattern is
Eckhaus unstable with respect to the $k_{1}$ pattern---notice the
characteristic evolution of the modes around $\tau=3$ in
Fig.~\ref{saddle_node} corresponding to the Eckhaus instability (cf.\
around $\tau=25$ in Fig.~\ref{phase_slip}). Thus the $k_{1}$ mode is
ultimately the selected pattern.
\section{Abrupt change of the control parameter}
\label{const_g}
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig5}%
\caption{\label{four_lambda} (Color online) The linear growth rate
$\lambda_{g,k_0}(Q)$ as a function of the wave number shift $Q$,
plotted for different values of the control parameter $g$. The
horizontal axis labels $Q$ in units of $\Delta Q_N$, which for a
finite system of $N=500$ resonators, with $k_0\simeq-0.075$, has the
value of $\Delta Q_N\simeq0.69$ (all other parameters are the same
as in Fig.~\ref{StbBalloon}). The Eckhaus instability of the initial
$k_0$ pattern occurs for these parameters at $g\simeq5.13$.}
\end{figure}
For a quasistatic increase of $g$ the Eckhaus instability leads to a
single phase slip event and a jump of the mode number by one. This is
no longer always the case for more rapid variations of $g$
\footnote{The behavior when the saddle node instability is encountered
on decreasing $g$ is expected to be similar for slow or rapid
variation of $g$, since the dynamics is simply the decay of the mode
to zero, with the subsequent growth of other modes.}. In this
section we consider the question of pattern selection after an abrupt
jump in $g$, so that single-mode states of different wave numbers
compete with each other after the system is initiated in an Eckhaus
unstable state. We consider a scenario in which the system is
initiated in the $k_0$ single-mode state~(\ref{SingleMode}), after
which the control parameter $g$ is abruptly increased so that the
$k_0$ wave number is no longer stable, while many other wave numbers
become simultaneously stable. In order to predict the single-mode
pattern that is selected we use our previous expression~(\ref{lambda})
for the eigenvalue $\lambda_{g,k_0}(Q)$ to calculate the linear growth
rate of perturbations of single-mode patterns of wave number $k_0+Q$.
In Fig.~\ref{four_lambda} we plot $\lambda_{g,k_0}(Q)$ as a
function of $Q$ for four different values of $g$, illustrating the
dependence of the fastest growing wave number $k_{0}+Q_{max}$ on $g$.
The wave number $k_0+Q_{max}$ with the largest linear growth rate
$\lambda_{max}$, which is expected to overcome all other modes, is
obtained by finding the maximum of $\lambda_{g,k_0}(Q)$ as a function
of $Q$, yielding
\begin{eqnarray}\label{Qmax}\nonumber
Q_{max}^{2} & = &(3k_0^2 - 5b_{k_0}^{2}k_0 - 3b_{k_0}^{2} +
2b_{k_0}^{4})\\
&\times &\frac{(3k_0^{2} - 11b_{k_0}^{2}k_0 + 3b_{k_0}^{2} +
8b_{k_0}^{4})}{3(k_0 -
b_{k_0}^{2})(3k_0 - 5b_{k_0}^{2})},
\end{eqnarray}
and
\begin{equation}\label{lambda_max}
\lambda_{max}=\frac{(3k_0^{2} - 5b_{k_0}^{2}k_0 - 3b_{k_0}^{2} +
2b_{k_0}^{4})^{2}}{3(k_0 - b_{k_0}^{2})(3k_0 - 5b_{k_0}^{2})},
\end{equation}
where $b_{k_0}$ is the steady-state amplitude of the unstable $k_0$
mode, as given by Eq.~(\ref{Bsteady}), that depends on the actual
value of $g$.
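As a concrete illustration (ours, not part of any released analysis code), the two expressions above can be evaluated directly, taking $b_{k_0}^2$ from the quasistatic steady-state form $(k_0-1)+\sqrt{(k_0-1)^2+g-k_0^2}$ quoted later in Eq.~(\ref{t_d_steady}); all function names are assumptions of this sketch:

```python
import math

def b_squared(g, k0):
    """Steady-state squared amplitude of the k0 single-mode pattern
    (quasistatic form, assumed here from Eq. (t_d_steady))."""
    return (k0 - 1.0) + math.sqrt((k0 - 1.0)**2 + (g - k0**2))

def q_max_squared(g, k0):
    """Squared wave-number shift with the maximal growth rate, Eq. (Qmax)."""
    b2 = b_squared(g, k0)
    num1 = 3*k0**2 - 5*b2*k0 - 3*b2 + 2*b2**2
    num2 = 3*k0**2 - 11*b2*k0 + 3*b2 + 8*b2**2
    return num1 * num2 / (3*(k0 - b2)*(3*k0 - 5*b2))

def lambda_max(g, k0):
    """Largest linear growth rate, Eq. (lambda_max)."""
    b2 = b_squared(g, k0)
    num = (3*k0**2 - 5*b2*k0 - 3*b2 + 2*b2**2)**2
    return num / (3*(k0 - b2)*(3*k0 - 5*b2))
```

Well above the Eckhaus threshold both quantities are positive, consistent with a band of growing perturbations around $k_0+Q_{max}$.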
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig6}%
\caption{\label{Eckhaus_decay} The wave number shift $Q_{max}$ with the
maximal growth rate (in units of $\Delta Q_N$) as a function of $g$.
The blue solid line shows $Q_{max}$ for the same parameters used in
Fig.~\ref{four_lambda}, and the green dashed line gives $Q_{max}$
for an infinite system~(\ref{Qmax}). The open red circles are the
wave number shifts that are observed numerically by integrating the
BCL amplitude equation~(\ref{BampEq}). The full black circles are
wave number shifts that are obtained by a numerical integration of
the original equations of motion~(\ref{eom}), calculated for select
values of $g$. The numerical calculations are initialized with the
$k_{0}$ solution and random small-amplitude noise, to initiate the
growth of competing patterns. We emphasize that the results of the
numerical solution of the BCL amplitude equation (\ref{BampEq}) are
not sensitive to the noise amplitude as long as it is sufficiently
small.}
\end{figure}
For a finite system the selected wave number is expected to be the
$k_{n}$---defined in Eq.~(\ref{eq:k-n})---which has the largest linear
growth rate. The Eckhaus instability is triggered by random
small-amplitude noise. In our finite system the difference between
growth rates of different modes is expected to be sufficiently large
so that by the time nonlinear effects are important, the amplitude of
the fastest growing mode far exceeds those of other destabilizing
Eckhaus modes, and it will reach its steady state value.
Fig.~\ref{Eckhaus_decay} shows $Q_{max}$ for an infinite system and
for a finite system of $N=500$ resonators, where the two curves should
tend to one another as $N\rightarrow\infty$. These predictions for the
selected wave numbers are verified numerically by integrating the BCL
amplitude equation~(\ref{BampEq}), as well as the original equations
of motion~(\ref{eom}) of the coupled resonators. We note that for the
parameters used the stability balloon contains about $10$ modes for
each of the values taken for the control parameter. For $g=21$, for
example, all modes with wave numbers from $k_{3}$ to $k_{16}$ are
stable. Finally, by following the amplitude of the growing mode as a
function of time in the numerical solution of the BCL amplitude
equation~(\ref{BampEq}), it is possible to extract the linear growth
rate of the mode numerically. We have done so and found that the
numerically calculated growth rates agree to within $2\%$ with the
analytical values of $\lambda_{g,k_0}(Q_{max})$.
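The extraction of the growth rate just described amounts to fitting the slope of $\ln|{\rm amplitude}|$ against time during the linear-growth stage; a minimal least-squares sketch (the helper name is ours, not the actual analysis routine):

```python
import math

def growth_rate(times, amplitudes):
    """Least-squares slope of ln(amplitude) versus time, i.e. the linear
    growth rate of an exponentially growing mode before saturation."""
    logs = [math.log(a) for a in amplitudes]
    n = len(times)
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    num = sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
    den = sum((t - t_mean)**2 for t in times)
    return num / den
```

Applied to the Fourier amplitude of the fastest-growing mode before nonlinear effects set in, this kind of fit yields the numerical growth rates compared with $\lambda_{g,k_0}(Q_{max})$ above.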
\section{Ramps of the control parameter}
\label{time_dependent_g}
We finish by considering a scenario in which the control parameter $g$
varies smoothly with time---this is often called a control parameter
ramp. To simplify the analysis we consider slow variation in time,
where $dg/d\tau \ll 1$, so that we can use the expressions
(\ref{SingleMode}) and (\ref{Bsteady}), obtained earlier for the
steady-state single-mode solutions of the BCL amplitude equation, with
a simple replacement of the previously constant $g$ by a time
dependent $g(\tau)$,
\begin{eqnarray}\label{t_d_singleMode}
&B(\xi, \tau)=a_n(\tau)e^{i(\varphi-k_{n}\xi)},\\
\label{t_d_steady} &a_n(\tau)^2 = (k_{n}-1) + \sqrt{(k_{n}-1)^2
+ (g(\tau)-k_{n}^2)}.
\end{eqnarray}
Thus, $a_n(\tau)$ would be the steady-state amplitude of the pattern
with wave number $k_n$ if the drive were varied quasistatically to its
instantaneous value at time $\tau$. For ramps that are not quasistatic
we expect the actual amplitude, which we denote as $\bar{a}_n(\tau)$,
to lag behind its expected value for a quasistatic ramp. This time
lag phenomenon is known from experiments measuring the heat flow in a
Rayleigh-B\'{e}nard cell \cite{Guenter01}.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig7}
\caption{(Color online) Stability balloon for an array of
$N=1230$ resonators and the same parameters as in
Fig.~\ref{StbBalloon}, which yields $k_{0}\simeq0.075$ and $\Delta
Q_{N}\simeq0.28$. The values of $k_{n}$ for $n=-1,\ldots,10$ are marked
with vertical dotted lines. The thick dashed line is the neutral
stability curve $g=k^{2}$, and the segment in which it is the
lower boundary $|k|<\Delta Q_{N}/2$ is marked by a solid red line.
The black dotted and solid curves (which are almost
indistinguishable) are the Eckhaus boundaries for an infinite and
a finite system respectively. The solid green line is the saddle
node $g=2k-1$ which is the lower boundary of oscillations for
$k>2.47$.}\label{StbBalloon2}
\end{figure}
The specific scenario we examine is one in which the control parameter
increases linearly in time from zero, $g=\alpha\tau$ with
$\alpha\ll1$. Initially, the system is expected to evolve to the
single-mode state~(\ref{t_d_singleMode}) with wave number $k_0$. As
$g$ increases, this solution becomes Eckhaus unstable and a transition
is expected to a different pattern, which eventually becomes Eckhaus
unstable as well. It is the first of these Eckhaus transitions that we
treat analytically below, as well as test numerically using the BCL
amplitude equation. In order to obtain interesting mode competition,
even for $\alpha\ll 1$, we increase the number of resonators to
$N=1230$, thus increasing the number of stable single-mode solutions
for any particular value of $g$. For such a number of resonators we no
longer perform numerical calculations on the original coupled
equations of motion (\ref{eom}). The stability balloon for $N=1230$
resonators is shown in Fig.~\ref{StbBalloon2}. Due to the large
number of resonators, the Eckhaus boundaries for infinite and finite
systems are almost the same. The dashed vertical lines mark the
values of possible wave numbers $k_{n}$ for $n=-1,\ldots,10$. For $n=0$
($k\simeq0.075$) the lower boundary of oscillations is the neutral
stability curve, for $n=1,\ldots,8$ the lower boundary is the Eckhaus
instability curve, and for $n\geq9$ ($k\simeq 2.5$) it is the saddle
node line.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig8}
\caption{(Color online) The three relevant Fourier amplitudes
of the numerical solution of the amplitude equation (\ref{BampEq})
for $g=10^{-4}\tau$ and the same parameters as in
Fig.~\ref{StbBalloon2}. The quasistatic values of the
amplitudes~(\ref{t_d_steady}) are plotted in thin black lines. For
$\alpha=10^{-4}$ we expect a double phase slip from the $k_{0}$
pattern to the $k_{2}$ pattern as can be inferred from
Fig.~\ref{time_dependent}. The inset demonstrates the time lag at
early times between the actual amplitude of the $k_{0}$ mode and
its quasistatic value.\label{t_r_phase_slip}}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig9}
\caption{(Color online) The real part of $B(\xi,\tau)$, obtained by
a numerical integration of the BCL amplitude
equation~(\ref{BampEq}) for $g=10^{-5}\tau$, using the same
parameters as in Fig.~\ref{StbBalloon2}. The initial zero-state
$B(\xi,0)=0$ evolves into the $k_0$ state, which then undergoes a
sequence of Eckhaus transitions as $g$ increases in time---the
first two transitions involve single phase slips, while the third
involves a double phase slip.}\label{cs_time_ramp}
\end{figure}
A typical response of the system to a linear ramp of the drive
amplitude is shown in Fig.~\ref{t_r_phase_slip} for a ramp rate of
$\alpha=10^{-4}$. One clearly sees the amplitude of the $k_0$ mode,
which forms initially from the zero-state, becoming Eckhaus unstable
around $g\simeq7$ and undergoing a double phase slip to the $k_2$
mode. Thin black lines show the quasistatic values (\ref{t_d_steady})
of the amplitudes of these two modes as a function of $g(\tau)$, while
the blue dot-dashed curve and the green dashed curve show the actual
values of these two amplitudes as obtained by Fourier transforming the
numerical solution of the BCL equation. The curves are
distinguishable from each other only at very early times, shown in the
inset of Fig.~\ref{t_r_phase_slip}, clearly demonstrating the time it
takes the actual amplitude $\bar{a}_0(\tau)$ to ``catch up'' with the
quasistatic value $a_0(\tau)$, from the zero displacement state. After
this initial time lag the system responds sufficiently quickly so that
the ramp becomes effectively quasistatic. The only points where the
time dependence of the ramp is still evident are the Eckhaus
instability points at which different ramp rates are expected to lead
to different transitions. A typical sequence of such transitions is
shown in Fig.~\ref{cs_time_ramp} for a ramp rate of $\alpha=10^{-5}$.
To analytically predict the first Eckhaus transition, given the ramp
rate $\alpha$, it is useful to introduce a more compact notation for
the eigenvalues (\ref{lambda}), which now depend on time \footnote{For
slow ramps the instantaneous rate of growth of perturbations is well
approximated by ignoring the time dependence of the parameter.},
denoting $\lambda_{g(\tau),k_{0}}(n\Delta
Q_{N})\equiv\lambda_{n}[g(\tau)]$. The first five of these eigenvalues
are plotted in Fig.~\ref{time_ramp_lambda_n_zoom} as a function of
$g(\tau)$. Note that as $n$ increases, the corresponding eigenvalue
$\lambda_n$ becomes positive at a later point in time, which we denote
as $\tau_n$, but grows more rapidly than the smaller-$n$ eigenvalues.
At time $\tau_n$ the amplitude of the $k_n$ mode is expected to start
growing from its initial value $\bar{a}_n(\tau_n)$, which in a real
physical system is set by the noise floor. In our analysis below we
take this initial value to be the same as the accuracy of the
numerical routine that is used for integrating the BCL equation. Once
the $k_n$ pattern starts growing it competes with all the other
single-mode patterns with positive growth rates. We expect the
pattern that is eventually selected to be the one whose amplitude is
first to reach the quasistatic value $a_n(\tau)$, given by
Eq.~(\ref{t_d_steady}). Thus, lower-$n$ modes have an advantage for
small ramp rates $\alpha$ because their growth rates become positive
earlier. At higher ramp rates, due to the time-lag phenomenon shown
above, the higher-$n$ modes have an advantage because their
eigenvalues increase more rapidly in time. This gives rise to an
interesting competition between the possible stable patterns. A
similar situation was observed in a system described by the stochastic
time-dependent Ginzburg-Landau equation~\cite{TE}.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig10}
\caption{The first five eigenvalues $\lambda_{n}$, plotted as
a function of $g=\alpha\tau$ using the parameters of
Fig.~\ref{StbBalloon2}. As $\lambda_{1}$ turns positive the
$k_{0}$ solution becomes Eckhaus unstable, but the selected
pattern depends on the ramp rate $\alpha$ as explained in the
text.\label{time_ramp_lambda_n_zoom}}
\end{figure}
Owing to the slow ramp rates, and the fact that the second eigenvalue
associated with each mode remains negative, we can estimate the growth
of the $n^{\mathrm{th}}$ amplitude, in the linear regime, from its initial value
at $\tau_n$ to be
\begin{equation}
\label{time_ramp_amplitude}
\bar{a}_{n}(\tau)=\bar{a}_{n}(\tau_n)e^{\sigma_{n}(\tau,\tau_n)},
\end{equation}
where
\begin{equation}\label{Sigma}
\sigma_{n}(\tau,\tau_n)=\int_{\tau_n}^{\tau}\lambda_{n}[g(\tau')]d\tau'.
\end{equation}
A comparison of these expressions for $\bar{a}_{n}(\tau)$ for the
different patterns allows us to determine which is the first to reach
its quasistatic value ${a}_{n}(\tau)$, and provides a simple scheme
for predicting the selected pattern following the Eckhaus instability
of the initial $k_0$ pattern. These analytical predictions for the
selected pattern $k_n$ are shown as a function of the ramp rate
$\alpha$ in Fig.~\ref{time_dependent}, and are nicely verified by
numerical integration of the BCL amplitude equation (\ref{BampEq}).
As expected, for small values of $\alpha$ there is a single phase slip
to the pattern with wave number $k_1$. As $\alpha$ is further
increased this changes to a double phase slip to the $k_2$ pattern
(as demonstrated earlier in Fig.~\ref{t_r_phase_slip}), followed by a
transition to the $k_3$ pattern, and so on.
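The prediction scheme is straightforward to implement: Eq.~(\ref{Sigma}) is a one-dimensional quadrature, after which one compares when each $\bar{a}_n(\tau)$ of Eq.~(\ref{time_ramp_amplitude}) first reaches its quasistatic value. A trapezoidal sketch, with $\lambda_n$ and the ramp $g(\tau)$ supplied as callables (placeholders for the actual expressions, which follow from Eq.~(\ref{lambda})):

```python
import math

def sigma_n(lambda_n, g, tau_n, tau, steps=1000):
    """Trapezoidal estimate of Eq. (Sigma): the integral of
    lambda_n[g(tau')] dtau' from tau_n to tau."""
    h = (tau - tau_n) / steps
    total = 0.5 * (lambda_n(g(tau_n)) + lambda_n(g(tau)))
    total += sum(lambda_n(g(tau_n + i*h)) for i in range(1, steps))
    return total * h

def linear_amplitude(a_init, lambda_n, g, tau_n, tau):
    """Linear-regime amplitude of the n-th mode, Eq. (time_ramp_amplitude)."""
    return a_init * math.exp(sigma_n(lambda_n, g, tau_n, tau))
```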
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig11}
\caption{(Color online) The number of phase slips $Q/\Delta Q_{N}$,
that are observed following the Eckhaus instability of the initial
$k_0$ pattern, plotted as a function of the ramp rate $\alpha$ for
a linear ramp of the drive $g=\alpha \tau$. Parameters are the
same as in Fig.~\ref{StbBalloon2}. Red circles are the actual
values observed in the numerical integration of the BCL amplitude
equation (\ref{BampEq}). The blue line shows the predicted values
from the linear analysis described in the text, where the initial
amplitude of each mode, when its eigenvalue becomes positive, is
taken to be $\bar{a}_{n}(\tau_{n})=10^{-12}$, which is the
accuracy of the time integration in the numerical solution of
(\ref{BampEq}).\label{time_dependent}}
\end{figure}
\section{Conclusions}
We have investigated the sequence of single-mode standing-wave
patterns to be expected in one-dimensional arrays of
parametrically-driven oscillators for time-varying drive strengths. An
amplitude equation approach provides a general treatment in terms of a
universal stability diagram on a plot with scaled versions of the
driving strength and wave number as axes. This immediately shows the
type of instability that will be encountered on varying parameters,
and gives qualitative insights on the mode jumps to be expected. For
example, for quasistatic parameter variations, we find that the jump
in the mode number is always unity if the control parameter is
increased so that the Eckhaus instability operates, but larger jumps
are often seen if the control parameter is decreased so that a saddle
node bifurcation occurs. For more rapid increases in the control
parameter larger jumps in the mode number may also occur, and can be
predicted simply from the eigenvalue equation for the Eckhaus
instability. We give explicit results for examples of an abrupt jump
and a slow temporal ramp in the control parameter. In all cases we
checked, simulations of the original oscillator equations of motion
confirm the results based on the amplitude equation.
In the Buks and Roukes experiments on parametrically-driven oscillator
arrays~\cite{BR} which motivated this study the frequency of the drive
was swept, rather than the strength of the driving. Since the drive
frequency is involved in setting the wave number of the resonant mode
that goes unstable, and these two parameters are involved in a
complicated way in the expressions for the scaled drive and wave
number variables (see BCL), it is more difficult to display the
variation on the scaled stability plot of Fig.~\ref{StbBalloon}. For
quasistatic or abrupt variations this is immaterial, since the
behavior is determined by where the stability boundary is crossed in
the former case, or the relationship of the final point on the
stability plot to the stability boundaries in the latter case. These
can be estimated quite easily, so that the behavior for quasistatic or
abrupt jumps can be readily predicted. In particular we expect single
mode jumps for quasistatic variations of the drive frequency in the
sense leading to a crossing of the Eckhaus boundary, and the
possibility of larger mode jumps for quasistatic sweeps in the reverse
direction. This is qualitatively consistent with the numerical results
of Lifshitz and Cross~\cite{LC} for a numerical model of 67
parametrically driven resonators, who found more jumps in the solution
on decreasing the frequency than on increasing it. (In the experiments
of Buks and Roukes only upward frequency sweeps were performed.) A
more detailed comparison with experiment would require a better
knowledge of the parameters of the MEMS or NEMS devices so that the
frequency variation could be mapped onto the reduced stability diagram
of the amplitude equation. In future experiments, upward and downward
sweeps of the strength of the driving would provide a more direct
comparison with the theory we have developed.
\section*{Acknowledgments}
This work was funded by the U.S.-Israel Binational Science Foundation
(BSF) through Grant No.~2004339, the U.S. National Science Foundation
under Grant No.~DMR-0314069, and the Israeli Ministry of Science and
Technology.
\section{Introduction}
The magnetic-field structure in the solar corona is primarily governed by the evolution of the photospheric magnetic field. Large-scale systematic flows and random convective motion of plasma on the solar surface redistribute the magnetic field; consequently, the coronal magnetic field evolves in a time-dependent manner. However, most global computational models assume quasi-static evolution of the coronal magnetic field, such that they produce a sequence of independent ``single-time'' extrapolations from the photospheric magnetic fields at discrete time intervals \citep{2012LRSP....9....6M}. The premise is motivated by the relatively fast response time for any disturbance originating in the photosphere to propagate through the corona. The surface magnetic field encounters plasma flows of speed $1$\,--\,$2$\,km\,s$^{-1}$ (e.g. differential rotation; \citeauthor{2009LRSP....6....1H}, \citeyear{2009LRSP....6....1H}), whereas any perturbation in the corona propagates at the Alfv\'{e}n speed (a few $1000$\,km\,s$^{-1}$ in coronal loops; \citeauthor{2007Sci...317.1192T}, \citeyear{2007Sci...317.1192T}). Thus, the assumption of a quasi-static corona is reasonably justified. Such models have been successful in reproducing various observed coronal magnetic-field features (\citeauthor{2012LRSP....9....6M}, \citeyear{2012LRSP....9....6M}, and references therein). Models based on nonlinear force-free extrapolations perform well for active-region associated structures, where reliable vector magnetic-field input data are available. However, these static models cannot capture the build-up of free magnetic energy outside of strong active regions, such as in filament channels \citep{2018SSRv..214...99Y}, because static coronal models -- even those solving the full magnetohydrodynamic (MHD) equations -- ignore any previous magnetic connectivity that existed in the corona.
In contrast, the modelling approach used in this study preserves the ``memory'' imparted by the slowly evolving photospheric-field distribution over time. The technique was initially introduced by \cite{2000ApJ...539..983V} and was later utilised (with necessary modifications and improvements) in many studies on the evolution of the non-potential coronal magnetic field. The methodology is based on a magnetofrictional approach where the velocity within the corona is proportional to the Lorentz force, such that coronal magnetic field relaxes toward a force-free equilibrium \citep{1986ApJ...309..383Y}. Note that the magnetofriction model is not a dynamic model like a full MHD model, and indeed it acts like a quasi-static model. However, it can capture the effect of magnetic reconnection through the diffusion term present in the magnetic-induction equation (used in this model), which again dictates the dynamic evolution of the magnetic field. In this sense, the magnetofriction model is more advanced than other static non-linear force-free coronal models (for detailed comparison, refer to \citeauthor{2018SSRv..214...99Y}, \citeyear{2018SSRv..214...99Y}). Magnetofriction simulations were successful in explaining the observed hemispheric pattern in solar filaments \citep{2000ApJ...544.1122M,2005ApJ...621L..77M,2006ApJ...641..577M,2008SoPh..247..103Y,2012ApJ...753L..34Y}. These models were also applied to successfully model the formation of non-potential magnetic structures, and associated eruptive phenomena in the global solar corona \citep{2006ApJ...641..577M,2006ApJ...642.1193M,2009ApJ...699.1024Y,2012ApJ...757..147C,2014SoPh..289..631Y,2016ApJ...828...83G,2017ApJ...846..106L}. \cite{2016MNRAS.456.3624G} used a magnetofriction model with different parameter combinations to explore coronal activity in other stars. However, the magnetofrictional model has some limitations too. 
For example, unlike MHD simulations, it cannot provide information about the plasma properties in the solar corona. Also, the solar wind is implemented in a simplified way: the model uses a radial outflow boundary condition to mimic the effect of the solar wind. Using a full MHD model, \cite{2018NatAs...2..913M} presented a prediction of the global coronal magnetic-field distribution for the August 2017 solar eclipse, which again used a magnetofriction model to inform the energisation of filament channels in the MHD model. From the same MHD model, they were able to deliver the associated coronal density and brightness profiles, which cannot be derived using a magnetofriction model only.
Most earlier research with the magnetofriction model aimed to study the formation and evolution of non-potential structures such as flux ropes in the solar corona, and their sudden eruption. The same is the objective of our work presented here, but we focus in detail on the evolution of such structures during the solar minimum period. Non-potential structures are generated in the corona by the surface evolution, and form, in general, over polarity-inversion lines as a result of flux cancellation and magnetic reconnection \citep{1989ApJ...343..971V}. They may take the form of sheared arcades or flux ropes where a bundle of magnetic-field lines are twisted around a common axis. Corresponding free magnetic energy typically becomes concentrated above polarity inversion lines, in so-called filament channels \citep{2010SSRv..151..333M}. Naturally, magnetic helicity is also prone to accumulate in these structures \citep{2015ApJ...809..137K}. Magnetic helicity is an important topological quantity with numerous applications in many astrophysical systems \citep{1999PPCF...41B.167B}. In the case of solar corona, it serves as a quantitative mathematical measure of the chiral properties of the flux ropes or sheared-arcade structures. Over the course of evolution, reconnection as well as continuous shearing and twisting of magnetic field in such structures can drive them toward instability \citep{2008A&A...480..255R,2010SSRv..151..333M}. Upon successful eruption, newly reconnected field moves outward and is ejected to the outer corona \citep{2006ApJ...641..590G}.
Observationally, pre-eruption flux ropes are linked with solar filaments or prominences -- one of the most common large-scale magnetic features observed in the solar corona \citep{2014LRSP...11....1P}. Within a filament, dense chromospheric plasma, which is much cooler than the $1$\,MK hot corona, stays confined by the helical magnetic-field lines of the associated flux rope. Thus they appear as dark elongated structures against the bright solar disk when observed in the H$\alpha$ absorption line. The same structures look brighter on the solar limb in the H$\alpha$ emission line and are known as prominences. Instabilities in the flux rope trigger eruption of the filament, causing the confined plasma to be released in an explosion resulting in a coronal mass ejection (CME) \citep{2004ApJ...614.1054J,2013AdSpR..51.1967S}. Highly energetic events such as CMEs can have a significant impact on space weather by changing its radiative, electromagnetic, and particulate environment drastically \citep{2015AdSpR..55.2745S}. Thus, with the increasing importance of predicting space weather, understanding the evolution of magnetic-flux ropes is increasingly relevant.
Since the coronal magnetic field is strongly affected by the distribution and complexities of the surface magnetic field, the number of filaments forming in the corona, in general, follows the sunspot cycle \citep{2015ApJS..221...33H}. However, during solar minimum, when there is little new sunspot emergence, a significant number of filaments (as high as $200$ in a year: \citeauthor{2015ApJS..221...33H}, \citeyear{2015ApJS..221...33H}) can be observed at higher latitudes ($> 30 ^{\circ}$). These are known as polar-crown filaments \citep{2014LRSP...11....1P}, and they comprise quiescent filaments forming over long neutral lines passing across the diffused and weak magnetic-field distribution. The eruption of such filaments is known to result in the occasional CMEs recorded during solar minimum \citep{2012LRSP....9....3W}.
Using a magnetofrictional approach, this current study focuses on how eruptions are generated due to the gradual injection of non-potentiality through surface motion and magnetic reconnection in the corona during a very low activity period of Sunspot Cycle 24. In the following, we first provide a brief description of the computational model, the details of the period of our study, and different analysis tools utilised in this work in Sections \ref{sec2}. Section \ref{sec3} comprises the results obtained, which we further divide into several subsections to address different aspects of our findings. Finally, we summarise and interpret our results in Section \ref{sec4}.
\section{Computational Model and Analysis Tools}
\label{sec2}
\subsection{Coronal Magnetic-Field Model}
\label{sec2.1}
The computational model used in this study is a combination of a surface-flux transport model and a non-potential coronal model, where the magnetic field [$\vec{B}$] within the corona evolves in response to the large-scale shearing velocity on the solar surface. The approach was introduced by \cite{2000ApJ...539..983V}, and extended to the global corona by \cite{2008ApJ...680L.165Y}. For the coronal part, we employ
a magnetofrictional modelling approach and solve the non-ideal form of the induction equation in terms of magnetic vector potential [$\vec{B} = {\bf \nabla} \times \vec{A}$],
\begin{equation}
\frac{\partial \vec{A}}{\partial t} = -\vec{E}
\label{eq1}
\end{equation}
\noindent where $\vec{E} = - \vec{v} \times \vec{B} + \vec{N}$. Here, $\vec{E}$ represents the electric field, and $\vec{N}$ corresponds to the non-ideal part of Ohm's law. This non-potential coronal model is a simplified version of full-scale magnetohydrodynamic (MHD) models: we retain the induction Equation [\ref{eq1}] but use a ``frictional'' velocity rather than coupling it to the full momentum equation. The computational domain covers $\rm{R_{\odot} \le r \le 2.5\,R_{\odot}}$ and the full extent of co-latitudes and longitudes, varying from 0\,--\,180 and 0\,--\,360 degrees respectively. We solve Equation [\ref{eq1}] using a finite-difference method on an equally-spaced grid of $360 \times 180 \times 60$ cells in longitude, sine(co-latitude), and $\log(r/\rm{R_{\odot}})$.
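For illustration, the cell-centre coordinates of such a grid can be built as below. We take the latitudinal coordinate to be $\cos\theta$ (the sine of latitude), which makes the mapping one-to-one; this interpretation, like the variable names, is an assumption of the sketch rather than a statement about the actual code:

```python
import math

N_PHI, N_S, N_RHO = 360, 180, 60   # cells in longitude, sine-latitude, log-radius

# Cell-centre coordinates (hypothetical variable names).
phi = [(i + 0.5) * 2.0*math.pi/N_PHI for i in range(N_PHI)]    # longitude, 0..2*pi
s   = [-1.0 + (j + 0.5) * 2.0/N_S for j in range(N_S)]         # cos(co-latitude), -1..1
rho = [(k + 0.5) * math.log(2.5)/N_RHO for k in range(N_RHO)]  # log(r/Rsun)
r   = [math.exp(x) for x in rho]                               # radius, 1..2.5 Rsun
```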
The frictional velocity [$\vec{v}$] is proportional to the Lorentz force that drives the magnetic field to relax toward a force-free equilibrium, [$\vec{j} \times \vec{B} = 0$], with $\vec{j}$ and $\vec{B}$ being the current density and the magnetic field. Accordingly, the velocity field within the corona is modelled through the equation
\begin{equation}
\vec{v} = \frac{\vec{j} \times \vec{B}}{\nu |\vec{B}|^2} + v_{\mathrm{out}}(r)\hat{\vec{e_r}}.
\label{eq2}
\end{equation}
\noindent In the above equation, $\nu$ is the friction coefficient and has the functional form, $\nu = \nu_0 r^2 \sin^2 \theta$ ($\theta$: co-latitude), with $\nu_0 = 3.6 \times 10^{-6}$\,s$^{-1}$. The spatial dependency of $\nu$ facilitates reduced computational time. However, at the inner boundary, the frictional velocity is set to be zero. The second term in Equation [\ref{eq2}], $v_{\rm out}(r) = v_0(r/\rm{R_\odot})^{15}$ mimics the presence of solar wind in the corona and ensures that magnetic-field lines become radial beyond $2.5\,\rm{R_{\odot}}$. For most of our simulations, we consider a solar wind with the maximum speed $v_0 = 100$\,km\,s$^{-1}$.
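A point-wise sketch of Eq.~(\ref{eq2}) follows; the 3-tuple representation, the schematic units, and the identification of the third component with the radial direction $\hat{\vec{e_r}}$ are simplifications of this illustration, not of the model itself:

```python
import math

V0 = 100.0      # maximum outflow speed v_0 (km/s, from the text)
NU0 = 3.6e-6    # friction amplitude nu_0 (1/s, from the text)

def cross(a, b):
    """Cross product of two 3-component tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def frictional_velocity(j, B, r, theta):
    """Point-wise evaluation of Eq. (eq2):
    v = (j x B) / (nu |B|^2) + v_out(r) e_r, with nu = nu_0 r^2 sin^2(theta)
    and v_out(r) = v_0 (r/Rsun)^15.  Units are schematic in this sketch, and
    the radial direction is taken as the third component (an assumption)."""
    nu = NU0 * r**2 * math.sin(theta)**2
    B2 = sum(c*c for c in B)
    jxB = cross(j, B)
    v_out = V0 * r**15
    return tuple(c / (nu * B2) + (v_out if i == 2 else 0.0)
                 for i, c in enumerate(jxB))
```

For a force-free configuration ($\vec{j}\parallel\vec{B}$) the frictional term vanishes and only the radial outflow remains, as the relaxation picture requires.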
At the inner boundary, however, the velocity field is replaced by two large-scale plasma flows, differential rotation and meridional circulation, along with a supergranular diffusion term modelling the net effect of unresolved small-scale convection. These flows are considered to be time-independent. The time-averaged observed values on the photosphere determine their spatial profiles and amplitudes. These flows are part of the surface-flux-transport model functioning at the inner boundary ($r = \rm{R_{\odot}}$), which controls the evolution of the radial component of magnetic field at $r = \rm{R_{\odot}}$ (for more details, see Section 2.2 of \citeauthor{2014SoPh..289..631Y}, \citeyear{2014SoPh..289..631Y}). The parameters involved in the surface-flux-transport model are chosen according to the standard values suggested by \cite{2017A&A...607A..76W}.
We try two different forms of the non-ideal term [$\vec{N}$] in Ohm's law: ohmic diffusion or fourth-order hyperdiffusion. In the case of ohmic diffusion,
\begin{equation}
\vec{N} = \eta_0 \left( 1 + c \frac{|\vec{j}|}{\mathrm{max}|\vec{B}|}\right) \vec{j}
\label{eq3}
\end{equation}
\noindent where $c > 0$ ensures enhanced diffusivity in strong current sheets. The constant $\eta_0$ (we choose a value of $6 \times 10^{11}$\,cm$^2$\,s$^{-1}$) determines the effectiveness of the ohmic diffusion. The term $\mathrm{max}|\vec{B}|$ denotes the maximum amplitude of $|\vec{B}|$. However, recent works with coronal magnetofriction models consider unresolved small-scale fluctuations as the major contributor to $\vec{N}$ and utilise hyperdiffusion such that
\begin{equation}
\vec{N} = \frac{- \vec{B}}{|\vec{B}|^2} \nabla (\eta_{\mathrm{h}} |\vec{B}|^2 \nabla \alpha).
\label{eq4}
\end{equation}
\noindent In the above equation, $\alpha = \vec{j}\cdot\vec{B}/|\vec{B}|^2$ is the current-helicity density, and the chosen value of $\eta_{\mathrm{h}}$ is $10^{31}$\,cm$^4$\,s$^{-1}$. This form of hyperdiffusion preserves the magnetic-helicity density [$\vec{A}\cdot\vec{B}$] in the volume \citep{2008ApJ...682..644V}. It reduces gradients in $\alpha$ so that the coronal magnetic field evolves towards a linear force-free configuration. However, due to the large-scale shearing flows on the surface, that force-free state is never achieved in full-Sun simulations. For the majority of the current study, we use hyperdiffusion to model $\vec{N}$.
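Point-wise, the current-helicity density driving the hyperdiffusion is simple to evaluate; a sketch (for a linear force-free field, $\vec{j}=\alpha\vec{B}$ with uniform $\alpha$, the gradients of $\alpha$ vanish and so does the hyperdiffusive $\vec{N}$):

```python
def current_helicity_density(j, B):
    """alpha = (j . B) / |B|^2 at a single point, with j and B as 3-tuples."""
    B2 = sum(c*c for c in B)
    return sum(jc*bc for jc, bc in zip(j, B)) / B2
```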
\subsection{Period of Study}
\label{sec2.2}
We simulate continuously the period between 26 August 2018 and 22 February 2019 -- covering 180 days -- outputting the 3D coronal magnetic field with a daily cadence. However, we note that the dates do not hold much importance in our current study. The main objective is to analyse how non-potentiality builds up within six months through the magnetofrictional evolution of the coronal magnetic field, and not to draw a comparison between the simulated and observed corona during this period. To initiate the simulation, we require three-dimensional magnetic-field information in the corona. Accordingly, we perform a potential-field source-surface (PFSS) extrapolation \citep{1969SoPh....6..442S,1969SoPh....9..131A} on the radial component of the observed surface magnetic-field distribution corresponding to Carrington rotation 2207 (which began on 6 August 2018 and ended on 2 September 2018). This was taken from the radial-component, pole-filled map derived by \cite{2011SoPh..270....9S} using data from the Helioseismic and Magnetic Imager (\textit{HMI}) of the Solar Dynamics Observatory (\textit{SDO}). This pole-filled map is corrected for the erroneous data near the poles, which, in general, appear due to projection effects. The PFSS extrapolation used our finite-difference code \citep{anthony_yeates_2018_1472183}. Additionally, we assume no new active regions emerged on the surface during these six months. Our selected period was quite close to the minimum of Sunspot Cycle 24; thus, only thirteen sunspots were discarded due to this assumption. Moreover, during this 180-day-long simulation, we choose certain epochs to perform additional simulations with an hourly cadence to study the evolution of particular non-potential structures of interest in more detail.
\subsection{Different Measures of Non-Potentiality}
\label{sec2.3}
\subsubsection{Free Magnetic Energy and Current Density}
\label{sec2.3.1}
To understand the dynamics of the coronal magnetic field, we study the temporal evolution of various quantities, especially those reflecting the build-up of non-potentiality within the corona. One such measure is the free energy, which is an upper limit on the expendable energy available for eruptions of non-potential structures such as flux ropes. It is computed as the difference between the magnetic energy of the non-potential solution and that of the corresponding PFSS solution at the same point in time. Another important measure is the mean current density per unit volume within the corona. However, one must note that both of these measures are inadequate to represent the spatial distribution of non-potentiality in the coronal magnetic field. Free energy only makes sense as a global measure and cannot be defined locally. Although the current density can be defined locally, it is not an ideal invariant and can change even if the magnetic field on the boundaries is fixed. Thus a more robust local measure, which is also an ideal invariant, is required (see Section \ref{sec2.3.2}).
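These two global measures can be sketched as follows (a hypothetical Python illustration, not the simulation code; the array shapes and uniform cell-volume weighting are assumptions):

```python
import numpy as np

# Hypothetical sketch: free magnetic energy as the difference between the
# energies of the non-potential and PFSS solutions, in units where mu_0 = 1.
# Field arrays have an assumed shape (3, nr, ntheta, nphi).

def magnetic_energy(b, dV):
    """E = (1/2) * integral of |B|^2 dV over the coronal volume."""
    return 0.5*np.sum(np.sum(b**2, axis=0)*dV)

def free_energy(b_nonpot, b_pfss, dV):
    """Upper limit on the energy expendable in eruptions (a global scalar)."""
    return magnetic_energy(b_nonpot, dV) - magnetic_energy(b_pfss, dV)

def mean_unsigned_current(j, dV):
    """Volume-averaged |j| -- the second (also global) non-potential measure."""
    return np.sum(np.linalg.norm(j, axis=0)*dV)/np.sum(dV)

# Toy check: uniform potential field (0,0,1) vs a sheared field (0,0.5,1)
shape = (4, 4, 4)
dV = np.full(shape, 1.0/np.prod(shape))      # unit total volume
b_pot = np.zeros((3,) + shape); b_pot[2] = 1.0
b_np = b_pot.copy(); b_np[1] = 0.5
fe = free_energy(b_np, b_pot, dV)            # = 0.5 * 0.25 * V = 0.125
```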
\subsubsection{Magnetic Helicity}
\label{sec2.3.2}
Another means of assessing non-potentiality within the corona is to calculate the relative magnetic helicity \citep{1984JFM...147..133B}. However, this too cannot provide any local-helicity information associated with non-potential magnetic structures. Recent studies \citep{2016A&A...594A..98Y,2017ApJ...846..106L} have demonstrated that evaluating the field-line helicity is an excellent way to identify the spatial distribution of magnetic helicity within the solar corona. In particular, magnetic-field lines that are significantly twisted and sheared always have strong field-line helicity, and such field lines are often spatially concentrated to form structures such as flux ropes.
Field-line helicity is defined as the normalised magnetic helicity within an infinitesimally thin tube around a field line and is calculated through the line integral
\begin{equation}
\mathcal{A} = \int_{L(x)} \frac{\vec{A}\cdot\vec{B}}{|\vec{B}|} \,\mathrm{d}l.
\label{eq5}
\end{equation}
\noindent Here $l$ represents arc length along the field line $L(x)$ through the point $x$, and $\vec{A}$ is a vector potential for the magnetic field $\vec{B}$. The quantity $\mathcal{A}$ can also be thought of as the flux linked with the field line, with contributions from the twisting of magnetic-field lines with height and their winding around centres of strong flux on the boundary. A more detailed theoretical basis for this quantity is given by \cite{2016A&A...594A..98Y} as well as \cite{2018JPlPh..84f7702Y}. If the field-line foot-points on the solar surface remained fixed in time, $\mathcal{A}$ would be an ideal invariant. However, helicity is continuously injected into the global corona through the surface motions; thereby, we can expect a continuous evolution of field-line helicity and its preferential accumulation near non-potential structures, such as flux ropes generated through magnetic reconnection.
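Equation [\ref{eq5}] lends itself to a simple discretisation along a traced field line. The following sketch is hypothetical (the sample points and the $\vec{A}$, $\vec{B}$ values along the line are assumed inputs from a separate field-line tracer) and applies the trapezoidal rule:

```python
import numpy as np

# Illustrative discretisation of the field-line helicity integral:
# accumulate (A.B/|B|) dl over the segments of a traced field line.

def field_line_helicity(points, a_vals, b_vals):
    """points: (N,3) positions along the line; a_vals, b_vals: (N,3)."""
    integrand = np.einsum('ij,ij->i', a_vals, b_vals)/np.linalg.norm(b_vals, axis=1)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)   # segment lengths dl
    return np.sum(0.5*(integrand[:-1] + integrand[1:])*seg) # trapezoidal rule

# Toy check: straight line of unit length with A.B/|B| = 2 everywhere -> 2
pts = np.column_stack([np.zeros(11), np.zeros(11), np.linspace(0, 1, 11)])
b = np.tile([0.0, 0.0, 1.0], (11, 1))
a = np.tile([0.0, 0.0, 2.0], (11, 1))
flh = field_line_helicity(pts, a, b)
```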
Since the coronal volume is a magnetically open domain with non-zero $B_r$ on both the inner and outer boundaries, $\mathcal{A}$ depends on our choice of gauge for $\vec{A}$. In particular, we recompute $\vec{A}$ in a gauge more appropriate (with respect to relative magnetic helicity) than that given by the computation with Equation [\ref{eq1}]. The previous studies of \cite{2016A&A...594A..98Y} and \cite{2017ApJ...846..106L} computed $\mathcal{A}$ in the DeVore\,--\,Coulomb gauge $\vec{A}^{\rm DC}$, which has $A^{\rm DC}_r=0$ throughout the domain and $\nabla_h\cdot\vec{A}^{\rm DC}=0$ on the inner boundary $r=\rm{R_\odot}$. Here we use an alternative gauge $\vec{A}^*=\vec{A}^{\rm DC} + \nabla\chi$ that satisfies $\nabla_h\cdot\vec{A}^*=0$ on both the inner and outer boundaries. On closed field lines (connecting to the Sun at both ends), this gives the same $\mathcal{A}$-values as $\vec{A}^{\rm DC}$, but on open field lines the $\mathcal{A}$-values can differ slightly. This difference is small because open field lines do not tend to store field-line helicity, but the change makes the calculations consistent with the poloidal-toroidal gauge of \cite{2018JPhA...51W5501B} as well as with the minimal gauge condition of \cite{2018JPlPh..84f7702Y}. Moreover, unlike $\vec{A}^{\rm DC}$, this modified gauge has the property that integrating $\mathcal{A}|\vec{B}|$ over all field lines gives the relative magnetic helicity. Using this formulation, we calculate the field-line helicity for a set of coronal magnetic-field lines (with footpoints equally distributed on the photospheric boundary) from the simulation-generated daily data. We subsequently utilise it to identify magnetic structures with high non-potentiality by applying a threshold to the field-line helicity (details are provided in Section \ref{sec3.1.2}). Moreover, the variation in the mean field-line helicity reflects the temporal evolution of magnetic helicity in the global corona in our simulation.
Although helicity is continuously being injected into the coronal magnetic field through the shearing motion on the surface, any depletion of the mean unsigned field-line helicity in the corona is related to two factors. To understand this, consider the evolution equation for the relative helicity [$H$], which is the signed integral of field-line helicity. This may be written (cf. \citeauthor{2016A&A...594A..98Y}, \citeyear{2016A&A...594A..98Y}) as
\begin{equation}
\frac{\rm{d}H}{\rm{d}t} = -2 \int_V \vec{N}\cdot\vec{B} \,\rm{d}V + \oint_S \left( \vec{A} \times \big[ 2 \vec{E} + \frac{\partial \vec{A}}{\partial t}\big] \right) \cdot \rm{d}\vec{S}.
\label{eq6}
\end{equation}
\noindent First, there is an overall volume dissipation of helicity, represented by the first term on the right-hand side of Equation [\ref{eq6}]. Second, there are sudden decreases due to the ejection of unstable non-potential structures with high helicity content through the outer boundary, which we measure by integrating the second term on the right-hand side of Equation [\ref{eq6}] over a closed surface at $r = 2.5\,\rm{R_{\odot}}$. The temporal evolution of these two quantities in our simulation is discussed in Section \ref{sec3}.
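The volume-dissipation term of Equation [\ref{eq6}] reduces to a weighted sum on the grid. A minimal sketch (hypothetical, assuming $\mu_0 = 1$ and arrays shaped as (component, $r$, $\theta$, $\phi$)):

```python
import numpy as np

# Hypothetical sketch of the first term of the helicity budget:
# -2 * integral of N.B over the coronal volume.

def helicity_dissipation(n_field, b_field, dV):
    """n_field, b_field: shape (3, nr, ntheta, nphi); dV: cell volumes."""
    return -2.0*np.sum(np.sum(n_field*b_field, axis=0)*dV)

# Toy check: N = B = (0,0,1) over a unit total volume gives -2
shape = (2, 3, 4)
dV = np.full(shape, 1.0/np.prod(shape))
b = np.zeros((3,) + shape); b[2] = 1.0
dHdt_vol = helicity_dissipation(b, b, dV)
```

The surface term is accumulated analogously, as a sum over cells of the closed $r = 2.5\,\rm{R_{\odot}}$ boundary.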
\subsubsection{Change in Magnetic-Field Distribution}
\label{sec2.3.3}
The destabilisation of a non-potential structure can lead to an eruption, which is always associated with significant changes in the coronal magnetic-field structure. Therefore, it should be reflected in the distribution of open magnetic field passing through $2.5\,\rm{R_{\odot}}$, as well as in the horizontal component of the magnetic field associated with the non-potential structure itself. Open magnetic-field lines are those with one end at the photospheric boundary and the other at the $2.5\,\rm{R_{\odot}}$ boundary, and extended regions of such open field lines are regarded as coronal holes. In fact, one of the significant precursors of the onset of a CME is dimming in coronal emission in the regions surrounding the unstable flux rope associated with the CME. Several computational and observational studies have suggested that coronal dimming is linked with the formation of transient coronal holes around the non-potential structure \citep{1998ApJ...498L.179G,2000JGR...10518187G,2001JGR...10629239K,2006SoPh..238..117A,2011LRSP....8....1C}. Thus, we inspect how the coronal-hole area changes in the vicinity of the evolving non-potential structures in our simulation. These localised changes in coronal-hole area also cause a significant amplitude variation of the open magnetic flux over the period of the simulation.
At the outer boundary ($r = 2.5\,\rm{R_{\odot}}$), the magnetic field is primarily radial because of the solar wind. However, as an erupting non-potential structure migrates upwards and eventually passes through the outer boundary, the horizontal component of magnetic field [$B_{\perp} = (B_{\theta}^2 + B_{\phi}^2)^{1/2}$] at $r = 2.5\,\rm{R_{\odot}}$ becomes significantly enhanced \citep{2016A&A...594A..98Y,2017ApJ...846..106L}. Thus we also compute the variation of $B_{\perp}$ at $2.5\,\rm{R_{\odot}}$ in simulation-generated data.
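As a small illustrative sketch (hypothetical, not the analysis code), the $B_{\perp}$ map on the outer boundary and the count of cells exceeding a threshold (0.02\,G is used later, in Section \ref{sec3.1.1}) can be computed as:

```python
import numpy as np

# Hypothetical sketch of the outer-boundary diagnostic:
# B_perp = sqrt(Btheta^2 + Bphi^2) at r = 2.5 Rsun, plus a threshold count.

def bperp_map(b_theta, b_phi):
    return np.hypot(b_theta, b_phi)

def count_above(bperp, threshold=0.02):
    """Number of boundary grid points with B_perp above the threshold [G]."""
    return int(np.sum(bperp > threshold))

# Toy check on a 2x2 boundary patch (values in G)
bt = np.array([[0.03, 0.0], [0.0, 0.006]])
bp = np.array([[0.04, 0.0], [0.0, 0.008]])
bperp = bperp_map(bt, bp)        # [[0.05, 0], [0, 0.01]]
n_active = count_above(bperp)    # only the 0.05 G cell exceeds 0.02 G
```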
\subsubsection{Magnetic Pressure and Tension Forces}
\label{sec2.3.4}
We inspect the evolution of radial magnetic-tension force and pressure gradient across individual non-potential structures of interest. Within the twisted and helical structure of a flux rope, magnetic pressure acts outward from the flux-rope core axis, whereas the tension force acts inward (see Figure 4 of \citeauthor{2014SoPh..289..631Y}, \citeyear{2014SoPh..289..631Y}). Previous magnetofriction simulations \citep{2009ApJ...699.1024Y,2014SoPh..289..631Y} have utilised this concept to identify flux ropes. Similar to \cite{2014SoPh..289..631Y}, at each point on the computational grid, we calculate the normalised radial magnetic-tension force and magnetic-pressure gradient determined by the following equations (in our simulation $\mu_0$ is equal to one),
\begin{equation}
T_r = \frac{\rm{R_{\odot}}}{B^2} \left(\vec{B}\cdot \nabla B_{r} - \frac{B_{\theta}^2}{r} - \frac{B_{\phi}^2}{r} \right), \hspace{1cm} P_r = - \frac{\rm{R_{\odot}}}{B^2} \frac{\partial}{\partial r} \left(\frac{B^2}{2}\right).
\label{eq7}
\end{equation}
\noindent Then to locate flux-rope structures we identify those grid points ($r_i,\theta_j,\phi_k$) where the following four conditions are satisfied:
\begin{align}
P_r (r_{i-1}, \theta_j , \phi_k)& < - 0.4 \nonumber \\
P_r (r_{i+1}, \theta_j , \phi_k)& > 0.4 \nonumber \\
T_r (r_{i-1}, \theta_j , \phi_k)& > 0.4 \nonumber \\
T_r (r_{i+1}, \theta_j , \phi_k)& < - 0.4 .
\label{eq8}
\end{align}
This analysis adds to our understanding of the distinctive evolving nature of different non-potential structures, which are primarily selected based on their field-line helicity content and the enhancement in $B_{\perp}$ at the outer boundary. Moreover, we particularly focus on detecting positive tension force at 2.0\,$\rm{R_{\odot}}$, which is a signature of helical magnetic-field lines associated with an unstable flux-rope-like structure passing through the coronal height of 2.0\,$\rm{R_{\odot}}$ (see Sections \ref{sec3.1} and \ref{sec3.2} for more details).
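The four inequalities of Equation [\ref{eq8}] translate directly into an array mask. A hypothetical sketch, assuming precomputed $P_r$ and $T_r$ arrays with radius along the first axis:

```python
import numpy as np

def flux_rope_mask(P_r, T_r, thresh=0.4):
    """Boolean mask over interior radial indices i = 1..nr-2 of grid points
    (i, j, k) satisfying the four flux-rope conditions: the pressure gradient
    pushes outward from, and the tension force pulls inward towards, the axis."""
    return ((P_r[:-2] < -thresh) & (P_r[2:] > thresh) &
            (T_r[:-2] > thresh) & (T_r[2:] < -thresh))

# Toy check: a single "axis" cell at (i, j, k) = (1, 0, 0) on a 3x2x2 grid
P = np.zeros((3, 2, 2)); T = np.zeros((3, 2, 2))
P[0, 0, 0], P[2, 0, 0] = -1.0, 1.0   # pressure gradient reverses across axis
T[0, 0, 0], T[2, 0, 0] = 1.0, -1.0   # tension force does the opposite
mask = flux_rope_mask(P, T)          # shape (1, 2, 2); True only at (0, 0, 0)
```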
\subsubsection{Quantitative Analysis of Individual Non-Potential Structures}
\label{sec2.3.5}
We intend to quantify different physical properties, such as the total area, total magnetic flux, and total field-line helicity content, associated with individual non-potential structures formed during our coronal magnetic-field simulation. As these structures sometimes undergo drastic reconfiguration or disappear entirely due to instability, we are also interested in the corresponding changes in these physical quantities during those epochs. An analysis of all of the erupting structures in our simulation is provided in Section \ref{sec3.3}. The structures are selected based on a field-line helicity threshold, i.e. determined from the magnetic field. They are not selected by physical size, which in any case might appear different in observations, where structures would be delineated primarily by plasma density rather than the magnetic field.
\subsubsection{Dependency on Model Parameters}
\label{sec2.3.6}
To test the robustness of our results, we perform additional 180-day magnetofriction simulations, starting with the same initial surface magnetic-field map. However, this time we model $\vec{N}$ with ohmic diffusion (using Equation [\ref{eq3}]). We again evaluate the different measures of non-potentiality over 180 days, similarly to our analyses with hyperdiffusion. Another critical factor that may influence our results is the speed of the solar wind used in the magnetofriction simulations ($v_{\mathrm{out}}$ in Equation [\ref{eq2}]). Thus we execute additional 180-day simulations with a slower solar wind (maximum speed 50\,km\,s$^{-1}$, with $\vec{N}$ modelled by hyperdiffusion). The corresponding results are briefly discussed in Section \ref{sec3.4}, with further details in the Appendix.
\begin{figure}[!ht]
\centerline{\includegraphics[width=0.7\textwidth,clip=]{fig1.jpg}}
\caption{Global measures. Temporal evolution of free energy and current density are depicted by the black and red curves, respectively, in the top row. The second row shows the temporal evolution of mean field-line helicity, over-plotted with the total number of grid points on the outer boundary where $B_{\perp} > 0.02$\,G. The third row shows the evolution of open magnetic flux and the maximum amplitude of positive radial-tension force. In the last row, the helicity volume dissipation and helicity flux through the outer coronal boundary, from Equation [\ref{eq6}], are presented. The vertical lines correspond to the 19 identified epochs of sudden but significant changes of non-potential measures associated with eruptive ``events'' of different classes: flux-rope eruptions (cyan lines) and overlying-arcade eruptions (magenta lines). Two of them (marked by the solid cyan and magenta vertical lines) are discussed in more detail.}
\label{fig1}
\end{figure}
\section{Results}
\label{sec3}
\subsection{Generation and Evolution of Non-Potentiality}
\label{sec3.1}
\subsubsection{Global Measures}
\label{sec3.1.1}
First, we focus on the temporal evolution of global measures such as the free magnetic energy and current associated with the build-up of non-potentiality within the corona. The first row in Figure \ref{fig1} depicts the variation of free magnetic energy and volume-averaged unsigned current density in the corona. Both quantities are strongly correlated and have a general decreasing trend beyond the initial rising phase. This initial phase arises because the initial potential coronal magnetic field requires a certain amount of ``ramp time'' to form non-potential structures. In our magnetofriction simulation, this ramp time is about 50 days. After this, and since our simulation does not include emergence of new sunspots, the total photospheric magnetic-field strength decreases monotonically over time, thereby resulting in the observed decay of free magnetic energy and current. However, several sharp decrements at certain epochs are noteworthy, which we inspect thoroughly in the following section.
We calculate the field-line helicity for a set of magnetic-field lines in the corona from the daily simulated coronal magnetic field according to the method described in Section \ref{sec2.3.2}. The second row in Figure \ref{fig1} represents the evolution of the average (unsigned) field-line helicity in the global corona, which again shows sharp changes similar to the current and free-energy evolution at the same epochs. On each day of the 180-day simulation, from the $B_{\perp}$ maps generated on the outer boundary ($r = 2.5\,\rm{R_{\odot}}$), we record the number of grid points with $B_{\perp} > 0.02$\,G, the temporal evolution of which is depicted by the red curve on the second row of Figure \ref{fig1}. This shows strong peaks around the time when the mean helicity drops. There are also peaks at similar times in the open magnetic flux (see the third row in Figure \ref{fig1}).
As mentioned in Section \ref{sec2.3.4}, we calculate the radial magnetic-tension force in the coronal volume and primarily search for positive values of $T_r$ at 2.0\,$\rm{R_{\odot}}$. The reasoning is as follows: a flux-rope structure comprises helical magnetic-field lines, and the lower part (with respect to the flux-rope axis) of these field lines must contribute to positive $T_r$. It should be detected in the corona as an unstable flux rope erupts and rises. In our $T_r (\theta, \phi)$ analysis at a fixed radius, we choose the coronal height at 2.0\,$\rm{R_\odot}$, which is somewhere between the inner and outer boundaries (closer to the outer boundary). In the third row of Figure \ref{fig1}, the black curve with distinct peaks represents the maximum amplitude of positive $T_r (r=2.0\,\rm{R_{\odot}}, \theta, \phi)$. We find that for particular cases (indicated by cyan lines), the epochs according to the maximum $T_r$ match well with those obtained from other non-potential measures, such as $B_{\perp}$, mean field-line helicity, etc.
\begin{figure}[ht!]
\centerline{\hspace*{0.015\textwidth}
\includegraphics[width=0.5\textwidth,clip=]{fig2a.jpg}
\hspace*{-0.03\textwidth}
\includegraphics[width=0.5\textwidth,clip=]{fig2b.jpg}
}
\vspace{-0.35\textwidth}
\centerline{\Large \bf
\hfill}
\vspace{0.31\textwidth}
\centerline{\hspace*{0.015\textwidth}
\includegraphics[width=0.5\textwidth,clip=]{fig2c.jpg}
\hspace*{-0.03\textwidth}
\includegraphics[width=0.5\textwidth,clip=]{fig2d.jpg}
}
\vspace{-0.35\textwidth}
\centerline{\Large \bf
\hfill}
\vspace{0.32\textwidth}
\caption{Spatial distributions on two nearby dates: the first row depicts a top view of the magnetic-field lines colour-coded according to their field-line helicity (positive helicity in red and negative in blue, units in Mx). The grey background corresponds to the radial component of the photospheric magnetic field (within $\pm$ 10\,G). The second row shows the photospheric mapping of field-line helicity. The third row represents the footprint of selected non-potential structures, marking cores and their extension with darker and lighter shades, respectively. The foot-points of the open-field lines with upward (green) and downward (violet) directions are shown on the fourth row. The fifth row depicts the change in coronal-hole area compared to the previous day; dark green and dark purple suggest opening up and closing down of field lines, i.e. increase and decrease in coronal-hole area, respectively. The distribution of the horizontal component of the magnetic field [G] at the outer boundary is depicted in the last row.}
\label{fig2}
\end{figure}
Lastly, we study the global quantities associated with the change in magnetic helicity, as discussed in Section \ref{sec2.3.2}. The last row in Figure \ref{fig1} shows the temporal evolution of the dissipation term and the helicity flux through the outer boundary; variation in the former is negligible compared to the latter, owing to the nearly ideal nature of the simulation. The helicity flux shows a mean background value, due to the differential rotation of open field lines \citep{2016A&A...594A..98Y}, superimposed with significant fluctuations. These fluctuations are associated with those already noted in the other global quantities.
\subsubsection{Local Measures: Spatial Distribution of Non-Potentiality}
\label{sec3.1.2}
Among the different global measures of non-potentiality, magnetic-field-line helicity, coronal-hole area, and $B_{\perp}$ maps generated on the outer boundary ($r = 2.5\,\rm{R_{\odot}}$) can reveal how the non-potential magnetic field is spatially distributed within the corona. Thus we inspect the spatial and temporal evolution of these quantities during our 180-day simulation with daily cadence. Note that radial magnetic pressure gradient and tension force maps can also provide similar information, which we have utilised later in Section \ref{sec3.2}.
The first row of Figure \ref{fig2} shows the two-dimensional projection of magnetic-field lines on the solar surface with colours (red-white-blue) assigned according to their field-line helicity. This figure's left and right columns correspond to the corona on 21 and 25 January 2019 (Days 148 and 152), respectively. We further map the field-line helicity at the field-line footpoints, using an equally spaced grid in sine-latitude and longitude on the photospheric boundary, following the same technique utilised by \cite{2017ApJ...846..106L}. The results are shown in the second row of Figure \ref{fig2}. Notably, the distribution of $\mathcal{A}$ on the surface is such that distinct domains are formed around the footpoints of more complex and twisted field structures. These maps are then used to detect footpoints of non-potential structures by implementing a threshold technique based on the intensity of field-line helicity. Following the same method described by \cite{2017ApJ...846..106L}, we apply two thresholds, [$\tau_{\rm{c}}$] and [$\tau_{\rm{e}}$], to identify the strong core and the outer extension of a structure, respectively. The amplitudes of $\tau_{\rm{c}}$ and $\tau_{\rm{e}}$ are chosen according to the following relations,
\begin{equation}
\tau_{\mathrm{c}} = \frac{\overline{\mathcal{A}(t)}}{\overline{\mathcal{A}}_\mathrm{ref}} \tau_{\mathrm{c,ref}} \quad \mathrm{and} \quad \tau_{\mathrm{e}} = \frac{\overline{\mathcal{A}(t)}}{\overline{\mathcal{A}}_\mathrm{ref}} \tau_{\mathrm{e,ref}},
\label{eq9}
\end{equation}
\noindent where $\overline{\mathcal{A}(t)}$ is the mean unsigned field-line helicity, $\overline{\mathcal{A}}_\mathrm{ref} = 1.29 \times 10^{21}$\,Mx, $\tau_{\mathrm{c,ref}} = 4.84 \times 10^{21}$\,Mx, and $\tau_{\mathrm{e,ref}} = 3.39 \times 10^{21}$\,Mx. These particular values were suggested by \cite{2017ApJ...846..106L} based on their careful and thorough calibration for detecting flux ropes in a similar magnetofriction simulation covering a period of eighteen years. The mean magnetic flux of erupting structures in their simulation was about $10^{21}$\,Mx, close to the observational estimates of between $10^{19}$ and $10^{22}$\,Mx \citep{2005JGRA..110.8107L}. The helicity of the detected structures can likewise be compared with estimates from observations of interplanetary magnetic clouds: although the helicity of magnetic clouds is not well constrained, owing to the uncertainty in estimating the length of the associated flux rope in the heliosphere, \cite{2017ApJ...846..106L} found that the mean helicity of a typical erupting structure in their simulation was reasonable and consistent with magnetic-cloud observations (more details are provided in Section \ref{sec3.3}). The third row in Figure \ref{fig2} depicts the detected structures where the helicity magnitude exceeds these thresholds in the projected-helicity maps on 21 and 25 January 2019. These identified non-potential structures, selected based on the field-line-helicity threshold, are likely to have twisted magnetic-field lines similar to flux ropes (for an example, see our first case study in Section \ref{sec3.2.1} below). However, we have found some instances where the detected structure has a strongly sheared arcade rather than helically twisted field lines -- we will see an example in our second case study below (Section \ref{sec3.2.2}).
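This thresholding step can be sketched as follows (hypothetical Python, using the reference values quoted above; the surface helicity map is an assumed input):

```python
import numpy as np

# Reference values quoted in the text (from Lowder and Yeates, 2017), all in Mx
A_REF, TAU_C_REF, TAU_E_REF = 1.29e21, 4.84e21, 3.39e21

def helicity_thresholds(mean_unsigned_flh):
    """Rescale the reference core/extension thresholds by the current mean."""
    scale = mean_unsigned_flh/A_REF
    return scale*TAU_C_REF, scale*TAU_E_REF

def footprint_masks(flh_map, tau_c, tau_e):
    """Core and extension footprints from a surface field-line-helicity map."""
    return np.abs(flh_map) > tau_c, np.abs(flh_map) > tau_e

# Toy check: when the mean equals the reference, thresholds equal the refs
tau_c, tau_e = helicity_thresholds(1.29e21)
core, ext = footprint_masks(np.array([[5.0e21, -4.0e21], [1.0e21, 0.0]]),
                            tau_c, tau_e)
```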
Note that the sign of helicity is primarily positive in the northern and negative in the southern hemisphere, which is precisely opposite to the predominant hemispheric pattern of magnetic helicity observed in solar filaments. This disparity in our simulation arises because differential rotation is the sole process determining the sign of helicity \citep{1997SoPh..175...27Z}. \cite{2012ApJ...753L..34Y} have shown that, in the absence of helicity transport from emerging active regions, high-latitude flux ropes are bound to have oppositely signed helicity, especially in the declining phase of a solar cycle. Thus the opposite helicity pattern is quite expected from our simulation, which excluded the emergence of new sunspots and was performed for a period very close to the cycle minimum. Nevertheless, our simulation clearly predicts the growth of non-potentiality in the coronal magnetic field and the eventual formation of flux ropes through reconnection. Thus we expect our analyses and findings to remain relevant for a longer simulation including active regions, albeit likely with the opposite sign of helicity at some locations.
A closer look at the first three rows of Figure \ref{fig2} reveals a noteworthy change of coronal structures from 21 to 25 January 2019, highlighted by a rectangular box in each of the figures. Primarily, in the third row, we notice the disappearance of two blue structures near $200$ and $320$ degrees longitude in the southern hemisphere (initially extending from zero to $-50$ degrees latitude on 21 January 2019). Simultaneously, the field-line distribution (see the first row) changed significantly around the same locations, which correspond to the two footpoints of a coronal non-potential structure (see the set of blue field lines disappearing in the top row of Figure \ref{fig2}). Furthermore, the disappearance of that particular structure is also reflected in the temporal evolution of the mean field-line helicity (see the second row of Figure \ref{fig1}): on Day 151 (24 January 2019), we observe a sudden decrease in the blue curve (indicated by a cyan vertical line). This indicates that the recurrent decreases in mean field-line helicity are likely to be associated with the disappearance, and eventual eruption, of non-potential structures.
As discussed in Section \ref{sec2.3.3}, an eruption causes a significant change in coronal magnetic-field structures; therefore, it should also be reflected in the distribution of open magnetic field. In the fourth row of Figure \ref{fig2}, we present coronal-hole maps of 21 and 25 January 2019 (Days 148 and 152), based on the distribution of open-field lines. A notable change is visible near $300$ degrees longitude in the southern hemisphere, almost at the same location where the non-potential structure disappeared. We further investigate the difference in coronal-hole area between two consecutive days. The dark-green patch at about $300$ degrees longitude and $-50$ degrees latitude in the first figure of the fifth row indicates the opening up of new field lines. As an eruptive non-potential structure becomes unstable and rises through the corona, the overlying field lines open up; therefore, the coronal-hole area in the vicinity of the structure grows. This causes an increase in the overall amplitude of open magnetic flux, and indeed we notice a peak on Day 151 (see the third row of Figure \ref{fig1}, indicated by the solid-cyan line). In the post-eruption configuration, field lines close down, causing a significant decrease in coronal-hole area, consistent with the purple patch in the second image of the fifth row of Figure \ref{fig2}. Such transient changes in coronal-hole area are also reported in observational studies of the evolution of coronal dimming associated with erupting CMEs \citep{2007ApJ...660.1653M,2017MNRAS.471.4776G}.
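The day-to-day coronal-hole comparison described above amounts to differencing two boolean open-field masks. A hypothetical sketch (the mask construction and cell areas are assumed inputs):

```python
import numpy as np

# Hypothetical sketch: change in coronal-hole area between consecutive days.
# +1 marks cells where field lines newly opened (cf. the dark-green patches),
# -1 marks cells where they closed down (cf. the purple patches).

def coronal_hole_change(open_today, open_prev, cell_area):
    diff = open_today.astype(int) - open_prev.astype(int)
    opened = float(np.sum((diff == 1)*cell_area))
    closed = float(np.sum((diff == -1)*cell_area))
    return diff, opened, closed

# Toy check on a 2x2 patch of unit-area cells
prev = np.array([[True, False], [False, False]])
today = np.array([[False, True], [True, False]])
diff, opened, closed = coronal_hole_change(today, prev, np.ones((2, 2)))
```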
Additionally, in the left-hand figure of the last row of Figure \ref{fig2}, corresponding to 21 January 2019 (Day 148), we observe the strongest values of $B_{\perp}$ along the heliospheric current sheet between $240$ and $330$ degrees longitude, associated with the outward-moving non-potential structure. By 25 January 2019 (Day 152), the strength of $B_{\perp}$ along the heliospheric current sheet had decreased, and it disappeared entirely on the following day (Day 153). In agreement, the global evolution of $B_{\perp}$ (see the second row of Figure \ref{fig1}) shows a peak on Day 151 (24 January 2019), coinciding with the timing of the probable eruption of the non-potential structure under consideration.
Lastly, in the evolution of the helicity flux through the outer boundary (see the last row in Figure \ref{fig1}, indicated by a solid-cyan line), we notice a significant enhancement around Day 151. This increment is causally connected with the disappearance of the negative-helicity flux rope between 21 January (Day 148) and 25 January 2019 (Day 152), which removes negative helicity from the coronal volume (see the third row of Figure \ref{fig2}).
\begin{figure}[ht!]
\centerline{\includegraphics[width=0.50\textwidth,clip=]{fig3.jpg}}
\caption{Evolution of the first case study (an erupting flux rope) viewed from two different viewing angles, where the field lines are colour-coded according to the field-line helicity with the maximum amplitude $2.5 \times 10^{21}$\,Mx. The colour blue represents negative field-line helicity with the darker shades corresponding to increasing amplitude. The magnetic-field distribution on the solar disk is depicted in shades of grey (within $\pm 5$\,G). Left images show field lines traced from the flux-rope footprint on the inner boundary, and right images those traced from the outer boundary.}
\label{fig3}
\end{figure}
Our analyses so far indicate that the disappearance of a non-potential structure is characterised by simultaneous model signatures, including a drop in the amplitude of mean field-line helicity, a significant change in coronal-hole area, a peak in the strength of $B_{\perp}(r = 2.5\,\rm{R_{\odot}})$ as well as in the open magnetic flux, and an increase in the helicity flux through the outer boundary. Globally, these changes are accompanied by decreases in free energy and current during the same epoch (as indicated in Figure \ref{fig1}). In our 180-day-long simulation, there are multiple instances where all of these signatures are visible concurrently, and we label each such instance as an individual \textit{``event''}. Taking individual peaks in $B_{\perp}$ (with a threshold of $0.02$\,G) and significant simultaneous changes in the field-line distribution as indicators of separate events, we identify 19 significant events, which are marked by 19 vertical lines in Figure \ref{fig1}. Note that the events associated with the first four peaks in $B_\perp$ were excluded from our analyses because they occurred during the initial (model-induced) ramp phase. The following section describes the evolution of the coronal magnetic field during each of these events, to determine whether all of them correspond to eruptions of non-potential structures.
\subsection{Classes of Eruption}
\label{sec3.2}
To determine the nature of the 19 events, we generate additional snapshots from the simulation with hourly cadence for a period of three to five days around the time of each event. Analysing the hourly data helps us to understand the field-line dynamics during these events in more detail and to determine their physical nature.
\subsubsection{Case Study: Flux-Rope Eruption Event}
\label{sec3.2.1}
First, we examine further the event on Day 151 that was illustrated in Section \ref{sec3.1}, as an example of a clear flux-rope eruption. In Figure \ref{fig3}, we present snapshots with three-dimensional views of the flux rope. In the figures on the left side, the plotted field lines are selected based on the flux-rope footprint maps (shown in the third row of Figure \ref{fig2}). On the right, the figures show only those field lines passing through the outer boundary where $B_{\perp}(r = 2.5\,\rm{R_{\odot}}) > 0.01$\,G (according to the last row of Figure \ref{fig2}). Thus an increasing number of field lines suggests a proportional growth in the strength of $B_{\perp}$ on the outer boundary. Although the selection processes for the field lines differ between the left and right columns, the similarity of the field-line footpoints clearly demonstrates that they all belong to the same structure.
As time advances, the field lines become more twisted and form the helical structure associated with a flux rope, as can be seen in the first two rows. Eventually, the structure becomes unstable, and the ejection process initiates. In the third row of Figure \ref{fig3}, we can see a few ``U''-shaped field lines with low helicity (indicated by a yellow arrow). The creation of such U-loops occurs through reconnection of field lines at the quasi-separatrix layer over the polarity-inversion line \citep{2006ApJ...642.1193M}. The U-loops are then pushed through the outer boundary by the high-speed solar wind. As the reconnection process continues, we observe the strongly sheared field lines associated with the flux rope being ejected, and new field lines with significantly less helicity forming below the flux rope (primarily seen in the right-side figure in the third row of Figure \ref{fig3}, marked by ``C''). In the following hours, the number of field lines passing through the outer boundary decreases drastically as low-lying field lines form at lower heights (see the last two rows in Figure \ref{fig3}). Thus the disappearance of the blue patch in the third row of Figure \ref{fig2}, as well as the reduction of coronal-hole area (the fifth row in the same figure) on 25 January 2019 (Day 152), are both consistent with the hourly evolution of the non-potential structure and suggest that this event represents the full eruption of a pre-existing flux rope.
\begin{figure}[ht!]
\centerline{\includegraphics[width=1.0\textwidth,clip=]{fig4.jpg}}
\caption{Radial pressure gradient and the radial component of magnetic-tension force across the flux rope in case study 1 are presented. The forces are shown by the red/blue colour map, while grid points satisfying the conditions in Equation [\ref{eq8}] are shown by dots. In (a) and (b), these are coloured by radius according to the black/white colour scale while the viewing angle is perpendicular to the surface. In (c) and (d), the forces are plotted as functions of radius and cos(latitude) across the dotted vertical cut at $250^{\circ}$ longitude.}
\label{fig4}
\end{figure}
We also analyse the evolution of the radial magnetic-tension force and pressure gradient across the flux-rope structure, based on the method described in Section \ref{sec2.3.4}. The first and second columns in Figure \ref{fig4} show the initial distributions of the pressure gradient and radial magnetic-tension force. On the first row, the forces are radially integrated, as if looking down from the outer boundary. The colours of the identified flux-rope points in Figure \ref{fig4}(a) and (b) indicate their height above the photosphere. On the bottom row, the figures depict the forces in a single meridional cut, chosen at longitude $250^{\circ}$. The black dots in these panels show the identified flux-rope points at that particular longitude.
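The construction of these maps can be outlined schematically. The exact inequalities of Equation [\ref{eq8}] are not reproduced here; as a stand-in, the sketch flags grid points where the radial pressure-gradient force points outward while the radial tension force points inward, i.e. the two forces oppose each other as they do within the rope. All field arrays below are synthetic placeholders.

```python
import numpy as np

# Schematic of the force diagnostic: identify candidate flux-rope
# points and build the radially integrated (top-view) maps and a
# single meridional cut.  The "opposing forces" condition is an
# assumed stand-in for the paper's Equation (8).
nr, n_theta, n_phi = 30, 45, 90
rng = np.random.default_rng(1)
r = np.linspace(1.0, 2.5, nr)                     # radius [R_sun]

F_p = rng.standard_normal((nr, n_theta, n_phi))   # radial pressure-gradient force
T_r = rng.standard_normal((nr, n_theta, n_phi))   # radial magnetic-tension force

# Candidate flux-rope points: opposing radial forces (assumed criterion)
rope_mask = (F_p > 0.0) & (T_r < 0.0)

# Top-view maps (panels a, b): each force integrated along the radius
dr = r[1] - r[0]
F_p_column = F_p.sum(axis=0) * dr
T_r_column = T_r.sum(axis=0) * dr

# Meridional cut (panels c, d): one longitude slice, with the
# identified points at that longitude (cf. the 250-degree cut)
i_phi = 62                                        # ~250 deg on this 4-degree grid
rope_points_in_cut = np.argwhere(rope_mask[:, :, i_phi])
```

In the actual analysis, the identified points would additionally be coloured by their radius for the top-view panels.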
\begin{figure}[ht!]
\centerline{\includegraphics[width=1.0\textwidth,clip=]{fig5.jpg}}
\caption{Next stages of evolution of pressure-gradient and radial-tension force across the flux rope (at $250^{\circ}$ longitude) with the values varying between $\pm 5.0$ for both the pressure-gradient and radial-tension force (shown by the colour bar on the right).}
\label{fig5}
\end{figure}
The initial position (as evaluated on 24 January 2019, Day 151, at 02:00) of the erupting flux rope is highlighted by rectangular boxes in each of the maps in Figure \ref{fig4}. We can clearly see that the pressure and tension forces are oppositely directed within the flux rope. In the later stage of evolution, the flux rope starts to migrate through the corona as visible in the top row of Figure \ref{fig5} and eventually is ejected through the outer boundary. On the last row of Figure \ref{fig5}, we find no trace of the flux rope, indicating a full-scale eruption, which is quite consistent with the earlier analyses of the same event. Moreover, focusing on the maps of the tension force only, we can see significant positive values of $T_r$ at the coronal height, 2.0\,$\rm{R_{\odot}}$, which corresponds to the peak on Day 151 in maximum positive $T_r (r = 2.0\,\rm{R_{\odot}}, \theta, \phi)$ (see the black curve in the third row of Figure \ref{fig1}, indicated by the solid-cyan line).
\subsubsection{Case Study: Overlying-Arcade Eruption Event}
\label{sec3.2.2}
We perform a qualitative examination of the 3D magnetic-field-line distribution during the hourly evolution of each of the 19 events and search for helical structures similar to the event discussed in Section \ref{sec3.2.1}. We also check whether $T_r$ has positive values near $2.0\,\rm{R_{\odot}}$. Our analysis shows that, unlike the event discussed in Section \ref{sec3.2.1}, many of the 19 events do not involve the ejection of a helical flux rope, although each of them coincides with significant drops in the mean field-line helicity and increases in open flux and $B_{\perp}(r = 2.5\,\rm{R_{\odot}})$. Here, we present an analysis similar to that of Section \ref{sec3.2.1} for one such event, before studying its hourly evolution.
\begin{figure}[ht!]
\centerline{\hspace*{0.015\textwidth}
\includegraphics[width=0.5\textwidth,clip=]{fig6a.jpg}
\hspace*{-0.03\textwidth}
\includegraphics[width=0.5\textwidth,clip=]{fig6b.jpg}
}
\vspace{-0.35\textwidth}
\centerline{\Large \bf
\hfill}
\vspace{0.31\textwidth}
\centerline{\hspace*{0.015\textwidth}
\includegraphics[width=0.5\textwidth,clip=]{fig6c.jpg}
\hspace*{-0.03\textwidth}
\includegraphics[width=0.5\textwidth,clip=]{fig6d.jpg}
}
\vspace{-0.35\textwidth}
\centerline{\Large \bf
\hfill}
\vspace{0.32\textwidth}
\caption{Spatial distributions on two different dates corresponding to the second case study: an overlying-arcade eruption event on Day 76 (10 November 2018). The first row: a top view of the magnetic-field lines colour-coded according to their field-line helicity (positive in red and negative in blue, units in Mx). The grey background refers to the radial component of the photospheric magnetic field (within $\pm$ 10\,G). The second row: photospheric mapping of field-line helicity. The third row: footprint of selected non-potential structures, marking cores and their extension with darker and lighter shades, respectively. The fourth row: foot-points of the open-field lines with upward (green) and downward (violet) directions. The fifth row: change in coronal-hole area compared to the previous day; dark green and dark purple suggest increase and decrease in coronal-hole area, respectively. The distribution of the horizontal component of the magnetic field [G] at the outer boundary is depicted in the last row.}
\label{fig6}
\end{figure}
In the first three rows of Figure \ref{fig6}, we present field-line helicity maps on 8 and 11 November 2018 (Days 74 and 77), generated in the same way as Figure \ref{fig2}. A notable change in field-line distribution is visible on the second date (within the rectangular box in the northern hemisphere). Likewise, in the third row, we notice that the footprint area of the structure under consideration has shrunk over the same interval. Moreover, the fifth row in Figure \ref{fig6} shows an increase in the coronal-hole area in the initial stage of the event, followed by a significant decrease on 11 November 2018, indicating opening up and closing down of magnetic-field lines, respectively. Additionally, in the last row of Figure \ref{fig6} we observe expected signatures in the $B_{\perp}(r = 2.5\,\rm{R_{\odot}})$ maps, quite similar to the full flux-rope eruption case. All these changes are reflected in the overall temporal evolution of different measures of non-potentiality, as depicted in Figure \ref{fig1} on Day 76 (refer to the solid-magenta-vertical line).
\begin{figure}[ht!]
\centerline{\includegraphics[width=0.6\textwidth,clip=]{fig7a.jpg}}\vspace{-0.02\textwidth}
\centerline{\includegraphics[width=0.6\textwidth,clip=]{fig7b.jpg}}\vspace{-0.02\textwidth}
\centerline{\includegraphics[width=0.6\textwidth,clip=]{fig7c.jpg}}\vspace{-0.02\textwidth}
\centerline{\includegraphics[width=0.6\textwidth,clip=]{fig7d.jpg}}\vspace{-0.02\textwidth}
\centerline{\includegraphics[width=0.6\textwidth,clip=]{fig7e.jpg}}\vspace{-0.02\textwidth}
\centerline{\includegraphics[width=0.6\textwidth,clip=]{fig7f.jpg}}
\caption{Evolution of the second case study (an overlying-arcade eruption) viewed from two different viewing angles, where the field lines are colour-coded according to the field-line helicity with the maximum amplitude $4.5 \times 10^{21}$\,Mx. The colour red represents positive field-line helicity with the darker shades corresponding to increasing amplitude. The magnetic-field distribution on the solar disk is represented in shades of grey (within $\pm 5$\,G). Left images show field lines traced from the footprint maps, and right images those traced from the outer boundary.}
\label{fig7}
\end{figure}
However, the hourly cadence snapshots during this event reveal quite different dynamics of field lines associated with the non-potential structure, as presented in Figure \ref{fig7}. Similar to the previous case study, field lines shown in the left column are chosen based on the footprint maps. Whereas on the right, field lines passing through the outer boundary with $B_{\perp}(r = 2.5\,\rm{R_{\odot}}) > 0.01$\,G are shown.
The first salient difference from the previous event is the absence of a clear helical structure like a flux rope in the low-coronal non-potential structure detected by the field-line helicity threshold. Instead, we find highly sheared magnetic-field lines with strong field-line helicity extended in the east--west direction. In the first figure on the top row of Figure \ref{fig7}, we also observe some overlying arcades with weaker but still significant field-line helicity encompassing the low-lying highly sheared field lines. As time advances, these overlying arcades undergo reconnection with the surrounding field lines and open up while losing field-line helicity. The same is reflected in the expansion of the coronal-hole area on the fourth row of Figure \ref{fig6}. During the initial stages of this dynamic evolution, we also notice an increase in the magnetic flux through the outer boundary (see the figures on the right side of the first three rows of Figure \ref{fig7}). In the last row, we find some overlying arcades that are almost potential with negligible helicity starting to close down on the existing east--west directed low-lying field lines with much stronger helicity. On the following days, the same process continues (see the last two rows in Figure \ref{fig7}), and the overlying arcade relaxes down by 13 November (Day 79) to a more potential-like structure containing less helicity than the initial arcade on 8 November (Day 74). Less open field remains in this lower-energy arcade, manifested through the decrease in the coronal-hole area (the fourth row of Figure \ref{fig6}). However, unlike the flux-rope eruption event analysed in Section \ref{sec3.2.1}, the low-lying field lines constituting the core of the structure have not changed significantly and are still present. It is the overlying arcade that has erupted, not the sheared field lines within the arcade. These remain almost unaltered throughout the whole event.
\begin{figure}[ht!]
\centerline{\includegraphics[width=1.0\textwidth,clip=]{fig8.jpg}}
\caption{Pressure-gradient force and the radial component of magnetic-tension force for the second case study. Same as in Figure \ref{fig4}, the forces are shown by the red/blue colour map. In (a) and (b), these are coloured by radius according to the black/white colour scale while the viewing angle is perpendicular to the surface. In (c) and (d), the forces are plotted as functions of radius and cos(latitude) across the dotted vertical cut at $58^{\circ}$ longitude.}
\label{fig8}
\end{figure}
The evolution of the radial magnetic-tension force and pressure-gradient force across this structure also shows different behaviour than the case with flux-rope eruption. Its hourly evolution suggested that the non-potential structure did not erupt; rather, field lines in the overlying arcade shed helicity through reconnection and are redistributed to a comparatively stable configuration. In the pressure-gradient and tension-force maps in Figure \ref{fig8}, we notice that there is a structure satisfying the criteria according to Equation [\ref{eq8}], but it is situated close to the photosphere on 10 November 2018 (Day 76) at 10:00. The size of the structure is notably smaller than the previous one. Moreover, its evolution is completely different, as depicted in Figure \ref{fig9}. We notice that a new structure that is initially visible in the upper corona (indicated by black dots and a black arrow) appears and moves downwards while the original structure stays rooted at the bottom layers. At the end of the evolution, the downward-moving structure disappears, but the low-lying part remains almost the same. Clearly, the nature of evolution suggests a redistribution of magnetic structures over the original sheared structure rather than a full-scale eruption of a flux rope. Additionally, from the map sequence of tension force, we do not find any positive values of $T_r (\theta, \phi)$ at 2.0\,$\rm{R_{\odot}}$. Consequently, on Day 76 (indicated by the solid-magenta line in Figure \ref{fig1}), there is no peak visible in the maximum positive $T_r (2.0\,\rm{R_{\odot}})$ evolution. This analysis thereby further supports our understanding that this particular event is not associated with the ejection of a pre-existing helical magnetic structure or flux rope.
\begin{figure}[ht!]
\centerline{\includegraphics[width=1.0\textwidth,clip=]{fig9.jpg}}
\caption{Next stages of evolution of pressure-gradient and radial-tension force across the non-potential structure shown in Figure \ref{fig8} (at $58^{\circ}$ longitude). The values are according to colour bar shown on the right.}
\label{fig9}
\end{figure}
We note that there is a small peak in the maximum positive $T_r (2.0\,\rm{R_{\odot}}, \theta, \phi)$ on Day 122 (see the black dots in the third row of Figure \ref{fig1}). It may seem to be associated with an overlying-arcade eruption occurring on Day 119, as indicated by the dashed-magenta line. However, a careful investigation showed that the peak was linked with the eruption of a small flux rope on Day 122, which was too weak to satisfy the threshold condition of $B_{\perp}$. Thus we do not notice any corresponding peak in the evolution of the strength of $B_{\perp}$ (see the red curve in the second row of Figure \ref{fig1}).
\subsection{Statistical Properties}
\label{sec3.3}
We performed similar detailed analyses for the remaining 17 events and found them distributed between two classes: full-scale flux-rope eruptions and overlying-arcade eruptions. Based on this classification, these events are separately identified with a set of magenta (arcade eruption) and cyan lines (flux-rope eruption) in Figure \ref{fig1}. Note that we classify an event as a case of overlying-arcade eruption when we cannot find any helical magnetic field associated with a well-defined underlying flux-rope structure. Rather, these non-potential structures always had highly sheared underlying arcades that remained almost unaltered throughout the evolution of the overlying arcades. It is noteworthy that among the cases with flux-rope eruptions, we encounter two events where the flux rope started forming high up in the corona just before becoming unstable and resulting in a full-scale eruption. In other cases, we see the rise and ejection of pre-existing flux ropes through the outer boundary. To go beyond the qualitative distinction in Section \ref{sec3.2}, we seek quantitative measures to differentiate between the two types of event: flux-rope eruption or overlying-arcade eruption. Although our sample of 19 events is small, the statistical analyses can shed more light on these different classes of events, and will also be useful in future studies.
As discussed in Section \ref{sec3.1.2}, each event is accompanied by either partial or complete removal of pre-eruption helicity during its course of evolution, which is clearly visible in the foot-point maps (third row in Figure \ref{fig2}). For each of the 19 events, we select the respective non-potential structure based on a field-line helicity threshold and evaluate three quantities from the footprint maps (e.g. the third row of Figure \ref{fig2}): total area, total magnetic flux, and total field-line helicity content. These quantities are measured at the beginning and at the end of each event, so that we can also quantify their changes. In general, the most dynamic part of the evolution, during which the structure disappears, happens quite rapidly, within one day. Thus, to evaluate the changes in area, magnetic flux, and helicity content, we compare the day before and the day after the event. The results are summarised in Table \ref{table1}, where we have classified each event into one of the two categories through manual inspection of the hourly cadence magnetic-field lines.
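The per-event bookkeeping just described can be sketched as follows. The maps, the cell area `dA`, and the helicity weighting in this sketch are illustrative assumptions, not the paper's exact formulae.

```python
import numpy as np

# Sketch: from the footprint mask of a non-potential structure, sum
# the area, unsigned magnetic flux and field-line helicity content,
# one day before and one day after the event, and take differences.
rng = np.random.default_rng(2)
n_theta, n_phi = 180, 360
dA = 1.5e19                                  # cell area [cm^2] (placeholder)

def structure_totals(footprint, B_n, flh):
    """Area, unsigned flux and helicity summed over a footprint mask."""
    area = footprint.sum() * dA                                      # [cm^2]
    flux = np.abs(B_n[footprint]).sum() * dA                         # [Mx]
    helicity = (flh[footprint] * np.abs(B_n[footprint])).sum() * dA  # [Mx^2], assumed weighting
    return area, flux, helicity

B_n = 0.5 * rng.standard_normal((n_theta, n_phi))    # photospheric B_r [G] (synthetic)
flh = 1e21 * rng.standard_normal((n_theta, n_phi))   # field-line helicity [Mx] (synthetic)

# Footprint the day before the event, and a (smaller) remnant after it
footprint_pre = rng.random((n_theta, n_phi)) < 0.01
footprint_post = footprint_pre & (rng.random((n_theta, n_phi)) < 0.3)

pre = structure_totals(footprint_pre, B_n, flh)
post = structure_totals(footprint_post, B_n, flh)
d_area, d_flux, d_helicity = (p - q for p, q in zip(pre, post))
```

In the actual analysis, the footprint masks come from the field-line helicity threshold applied the day before and the day after each event.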
The majority of the non-potential structures (14 out of 19) have footprint areas of less than $5 \times 10^{20}$\,cm$^{2}$. Various observational studies (\citeauthor{2014LRSP...11....1P}, \citeyear{2014LRSP...11....1P}, and references therein) found that the length of filaments varies from about $3 \times 10^{9}$ to about $1.1 \times 10^{10}$\,cm, with widths ranging from a few $10^{7}$ up to a few $10^{9}$\,cm. Based on these statistics, the maximum area of a filament would be a few $10^{19}$\,cm$^{2}$, which is an order of magnitude smaller than the size of the simulated flux ropes. We speculate that the difference is caused by two factors: firstly, in this simulation, most of the flux ropes are large structures forming over large-scale neutral lines, more akin to polar-crown filaments than to those within active regions. Secondly, dense filaments themselves sit within wider filament channels of sheared magnetic field \citep{2010SSRv..151..333M}, and it is the latter that would correspond to the flux ropes considered here. For the two different types of events, full-scale flux-rope eruption and overlying-arcade eruption, we did not find any significant difference in their respective footprint areas (see Table \ref{table1}).
However, when we measure the unsigned magnetic flux associated with these structures on the surface at the beginning of the events, there is a significant distinction between the two classes of events. The average total magnetic flux for a flux-rope eruption is almost three times higher than that for an arcade-eruption event: $4.83 \times 10^{20}$\,Mx and $1.66 \times 10^{20}$\,Mx, respectively. It is hard to evaluate how these values agree with observations, since we did not find a published study on observed magnetic clouds covering the same period as our simulation. However, we can compare our results with those presented by \cite{2017ApJ...846..106L}. They used a similar magnetofriction simulation covering a period spanning from June 1996 to February 2014. Their reported mean unsigned magnetic flux for erupting flux ropes was $4.04 (\pm 6.17) \times 10^{21}$\,Mx -- thus their mean is about one order of magnitude higher than our calculated average. Note that their study included the whole of Solar Cycle 23 and the major portion of Cycle 24, with observed sunspots. In contrast, our study focuses on an interval very close to Cycle 24 minimum and does not include sunspot emergence. Thus, in our simulations, the overall average magnetic-field strength on the surface is relatively small even though the same magnetofriction model has been used. Comparing the average unsigned magnetic field associated with the structures, we find values of $0.77$\,G and $0.20$\,G for the flux-rope eruption and overlying-arcade eruption events, respectively; thus, the difference persists between these two classes.
The total helicity contained within a non-potential structure has been computed from the field-line-helicity map on the photospheric boundary using Equation [6] of \cite{2017ApJ...846..106L}. We find the average helicity content of the erupting non-potential structures is $3.17 \times 10^{42}$\,Mx$^2$, which is an order of magnitude smaller than the value reported by \cite{2017ApJ...846..106L}. Again, this is due to our choice of a significantly less active period near cycle minimum, whereas their average was over the whole solar cycle. Nonetheless, the average is twice as high for the flux-rope eruption events compared to the overlying-arcade eruption events. This difference remains even after considering the helicity magnitude normalised by the associated area. If we calculate the average helicity-ejection rate per day over our simulation, we find $2.32 \times 10^{41}$\,Mx$^2$\,day$^{-1}$. This compares well with the (approximately) $3 \times 10^{41}$\,Mx$^2$\,day$^{-1}$ during the (previous) solar minimum seen in the \citet{2017ApJ...846..106L} simulation (see their Figure 13). The latter simulation produced ejection rates of magnetic helicity and flux over the solar cycle that were in agreement with observational estimates from magnetic clouds \citep{2016SoPh..291..531D}, so we have reason to believe that the helicity ejection in our simulation is also broadly consistent with observations.
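The helicity budget quoted above admits a quick arithmetic check, using only the numbers stated in the text (no simulation data are involved; the derived quantities are illustrative only).

```python
# Arithmetic check of the quoted helicity budget.
mean_helicity_content = 3.17e42   # Mx^2, average per erupting structure
ejection_rate = 2.32e41           # Mx^2/day, average over our simulation
reference_rate = 3.0e41           # Mx^2/day, previous-minimum rate from
                                  # Lowder and Yeates (2017), their Figure 13

# The simulated rate is within about 25 per cent of the reference rate
ratio = ejection_rate / reference_rate            # ~0.77

# At this rate, one average structure's helicity content corresponds
# to roughly two weeks of ejection
days_per_structure = mean_helicity_content / ejection_rate   # ~13.7 days
```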
The distinction between the two classes of eruptions becomes more prominent when evaluating the change in magnetic flux and helicity magnitude. The average changes in surface magnetic flux and in total-helicity magnitude are significantly higher for the flux-rope eruption events (see Table \ref{table1}). The contrast between these categories remains the same even when we consider the change in the average magnetic field or in the normalised helicity magnitude.
\begin{table}
\caption{Comparison between the two classes of events in the hyperdiffusion simulation}
\label{table1}
\begin{tabular}{lcc}
\hline
Quantity & Flux-rope eruption & Overlying-arcade eruption\\
\hline
number of events & 8 & 11 \\
footprint area [$10^{20}$\,cm$^2$] & 7.03 & 7.42 \\
total magnetic flux [$10^{20}$\,Mx] & 4.83 & 1.66 \\
average magnetic field [G] & 0.77 & 0.20 \\
total helicity magnitude [$10^{42}$\,Mx$^2$] & 3.17 & 1.49 \\
normalised helicity magnitude [$10^{21}$\,Mx] & 3.42 & 1.83 \\
change in magnetic flux [$10^{20}$\,Mx] & 1.44 & 0.12 \\
change in helicity magnitude [$10^{41}$\,Mx$^2$] & 9.23 & 3.28 \\
helicity flux [$10^{42}$\,Mx$^2$] & 5.08 & 2.33 \\
\hline
\end{tabular}
\end{table}
Finally, we estimate the helicity flux through the outer boundary of the corona during each of the 19 events (from the last row in Figure \ref{fig1}). Note that, unlike the helicity magnitude calculated for a given non-potential structure, the helicity flux is a global quantity. The estimated value therefore contains contributions from all coronal structures, with both positive and negative helicity, and these contributions can cancel to leave a negligible net helicity flux. Even so, we find that, for flux-rope eruption events, the average magnitude of the helicity flux is about two times higher ($5.08 \times 10^{42}$\,Mx$^2$) than for arcade-eruption events ($2.33 \times 10^{42}$\,Mx$^2$). Moreover, assuming the helicity flux through the outer boundary to originate primarily from the changing helicity content of the particular structure in consideration (at that same epoch), we further perform a linear correlation analysis between the change in helicity magnitude and the respective helicity flux. The evaluated Pearson linear correlation coefficient is $0.71$ (with p-value $0.001$). Interestingly, when the same analysis is performed separately for the two distinct classes of events, we find an even higher correlation (coefficient $= 0.85$) for the flux-rope eruptive events. However, the correlation decreases remarkably for cases with overlying-arcade eruption (coefficient $= 0.47$). This lower value indicates that, for the second type of evolution, the helicity flux is linked with the overlying sheared magnetic-field lines exterior to the selected non-potential structure, whose own helicity remains largely unchanged.
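The correlation analysis itself is a standard Pearson computation over the 19 event pairs. The sketch below uses synthetic stand-in values (the measured pairs are not reproduced here); in the study they give $r = 0.71$ overall, $0.85$ for flux-rope eruptions and $0.47$ for arcade eruptions.

```python
import numpy as np

# Sketch of the Pearson correlation between the change in a structure's
# helicity magnitude and the helicity flux through the outer boundary.
rng = np.random.default_rng(3)
n_events = 19

delta_H = np.abs(rng.standard_normal(n_events)) * 1e42         # |change in helicity| [Mx^2] (synthetic)
H_flux = 1.5 * delta_H + 3e41 * rng.standard_normal(n_events)  # helicity flux [Mx^2] (synthetic)

def pearson_r(x, y):
    """Pearson linear correlation coefficient."""
    return float(np.corrcoef(x, y)[0, 1])

r_all = pearson_r(delta_H, H_flux)

# Repeat per class (here: first 8 events flux-rope, remaining 11 arcade)
is_rope = np.arange(n_events) < 8
r_rope = pearson_r(delta_H[is_rope], H_flux[is_rope])
r_arcade = pearson_r(delta_H[~is_rope], H_flux[~is_rope])
```

With only 19 events, the class-wise coefficients (8 and 11 points) carry wide uncertainties, which is why the qualitative difference between the classes matters more than the exact values.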
\subsection{Parameter Dependence}
\label{sec3.4}
It is only logical to examine whether the existence of two distinct classes of events is an artefact of the way that we model the non-ideal part in the induction Equation [\ref{eq1}] or any other model parameters. Thus we perform two additional sets of magnetofriction simulations of 180 days starting with the same initial surface magnetic-field map. In one set, we consider ohmic diffusion to model $\vec{N}$ (instead of hyperdiffusion), and in the other we use a slower solar wind (maximum speed 50\,km\,s$^{-1}$). We again evaluate different measures of non-potentiality over 180 days, similar to our previous analyses with hyperdiffusion.
For both cases, we find a similar increasing trend in non-potentiality with episodic decreases, which are linked to the significant reshaping of coronal magnetic field associated with non-potential structures. A corresponding figure comparing the mean field-line helicity among the different simulations is provided in the Appendix. We found that, for roughly the first 90 days, the epochs of drastic changes in mean field-line helicity coincide among the different magnetofriction simulations with hyperdiffusion, ohmic diffusion, or slower outflow speed. We verified that these simultaneous drops do originate from non-potential structures at the same locations in the corona. After 90 days, the simulations with ohmic diffusivity or slower outflow start to diverge notably, such that the temporal and spatial changes in the non-potentiality associated with the structures no longer match those occurring in the hyperdiffusion simulation.
Following the same analyses described in Section \ref{sec3.1}, we identify 17 and 15 events from the simulations with ohmic diffusion and slower outflow, respectively, and then perform additional simulations with hourly cadence for the individual events (as in Section \ref{sec3.2}). The slightly later eruptions (in the initial 90 days) and the lower overall number of events likely result from the dissipation of stored helicity by the ohmic-diffusion term, in contrast to hyperdiffusion where there is no volume dissipation of helicity. The slower outflow has the effect of allowing the corona to store more helicity, thus delaying eruptions. We found that about 41\,$\%$ of the 17 ohmic-diffusion events and 47\,$\%$ of the 15 slower-outflow events were related to the complete eruption of flux ropes, while the rest were overlying-arcade eruption events.
We again perform the same statistical analysis (as in Section \ref{sec3.3}) for these two new sets of simulations. The distinction in the amplitude of the associated magnetic flux and helicity magnitude and their respective changes persist for the events with full-scale flux-rope eruption and overlying-arcade eruption. The corresponding tables (\ref{table2} and \ref{table3}) are provided in the Appendix.
To summarise, altering different elements in the magnetofriction simulation has a finite effect on how the non-potentiality builds up in the coronal magnetic field. However, the existence of two disparate classes of evolving magnetic field associated with non-potential structures is independent of our choices of model parameters.
\section{Concluding Discussion}
\label{sec4}
This work investigates the evolution of coronal magnetic field of the full Sun over six months -- a period chosen very close to the Solar Cycle 24 minimum (26 August 2018\,--\,22 February 2019). Although we did not include any sunspot emergence in the magnetofriction simulation, the slow but continuous building up of non-potentiality during this period leads to a moderately dynamic solar corona. The shearing motion associated with the differential rotation on the solar surface along with magnetic reconnection plays the most vital role in generating complexities in coronal magnetic-field distribution. These complexities are often localised, forming non-potential structures. These structures eventually become unstable and instigate abrupt and drastic changes in the surrounding magnetic field. Through such changes, the coronal magnetic field sheds some of its non-potentiality associated with these structures. Thus we notice sudden decreases in free energy and mean field-line helicity at the same epochs, which we label as individual events.
Each event is driven by the destabilisation of a non-potential arcade whose photospheric footprint is seen in field-line helicity maps. As the structure becomes unstable, it is pushed through the corona, causing opening up of overlying field lines. Simultaneously we observe significant changes in coronal-hole area and the horizontal component of the magnetic field in the vicinity of the original structure. The eruption of a structure with high field-line helicity also causes notable variation in the open magnetic flux and helicity flux through the corona's outer boundary.
However, we found that these events comprise two distinct classes, even though their associated signatures of non-potentiality are quite comparable. The difference became apparent mainly when we studied individual events at hourly cadence. In one set of events, a low-lying magnetic flux rope rises and disappears entirely, indicating a full-scale eruption. This could explain the origin of occasional large CMEs during solar minimum \citep{2012LRSP....9....3W,2019SSRv..215...39L}. There exist numerous computational and observational studies on CMEs originating from the eruption of flux ropes \citep{2011LRSP....8....1C,2014LRSP...11....1P}, but in this model, they form and erupt self-consistently without the need for an artificial driver such as a localised sheared flow. We speculate that in our model, the initial magnetic reconnection occurs underneath the flux rope forming a ``U''-shaped loop. As this U-loop moves radially outward, it imparts excessive stress to the flux-rope structure, which in turn causes the overlying field lines to break via tether-cutting. The whole dynamics resembles the class of CME models with the ``tether-straining'' mechanism \citep{2001GMS...125..143K}, although of course our magnetofrictional model is only quasi-static. Nevertheless, \cite{2018JSWSC...8A..26P} have shown that configurations that are unstable in the magnetofrictional model are likely to be unstable in full MHD. Indeed, \cite{1986ApJ...311..451C} showed that force-free equilibria have the same linear stability properties in magnetofriction as in ideal MHD.
In the other class of events, reconnection with the surrounding magnetic-field lines allows the localised non-potential system to partially shed some of its helicity content before settling down to a relatively more stable structure. However, the original sheared structure at lower height remains almost unaltered during the event. Thus any visible changes in mean field-line helicity or any other global measures of non-potentiality are mainly linked with the evolution of the overlying highly sheared arcades and not with the structure in the lower corona. The absence of a positive radial tension force at mid-coronal height (2.0\,$\rm{R_{\odot}}$) also suggests that no helical magnetic structures are associated with these events. Such evolution resembles the observed phenomenon known as streamer blowout \citep{2007ApJ...671..926S}, where a slow CME is associated with the gradual swelling then sudden reconnection of a coronal streamer. In a detailed study of CME observations during 1996\,--\,2012 with the Large Angle and Spectrometric Coronagraph (\textit{LASCO}) on board the Solar and Heliospheric Observatory (\textit{SOHO}), assisted by 3D MHD simulations and EUV and coronagraphic observations from the Solar Terrestrial Relations Observatory (\textit{STEREO}) and \textit{SDO}, \cite{2013SoPh..284..179V} investigated how many observed CMEs had flux-rope structures. Their statistical results based on Cycle 23 CMEs show that 40\,$\%$ of the total of 2403 CMEs can be regarded as standard three-part CMEs (including loop CMEs) with flux ropes at their cores. However, a significant number of events (40\,$\%$) did not have any detectable flux-rope cavity and were named ``outflow'' CMEs, which can be as large as regular CMEs. These outflow CMEs lack a three-part morphology but are too wide to be categorised as jet CMEs, and their front parts are not sharp enough to be classified as loop CMEs.
Although \cite{2013SoPh..284..179V} mentioned that the undecided physical nature of these outflow CMEs could partially be an artefact of the lack of good observational data, their study leaves open the possibility of CMEs without pre-existing flux ropes.
\cite{2018ApJ...861..103V} also performed an extensive study with more than 900 streamer-blowout events in the \textit{LASCO}-C2 observations between 1996 and 2015. They found streamer-blowout events, particularly during cycle minimum, to be associated with large-scale neutral lines, and some without signatures of helical flux ropes. Moreover, using a full-MHD numerical simulation, \cite{2016JGRA..12110677L} modelled the evolution of the slow streamer-blowout event of 1\,--\,2 June 2008, which was also the origin of a stealth CME. The primary characteristic of stealth CMEs is the virtual absence of any identifiable surface or low-corona signatures indicating that an eruption has occurred. Thus, we speculate that the second kind of event could also be a manifestation of stealth CMEs in terms of the magnetic field.
For further observational support for our work, we compare the number of CMEs arising from our simulations of the solar-minimum corona with the actual number recorded by \textit{SOHO}/\textit{LASCO} \citep{2009EM&P..104..295G} during the period when the events start to occur in our simulation. According to the \textit{LASCO} catalogue of CMEs (\url{cdaw.gsfc.nasa.gov/CME_list}), there were 20 ``poor'' CME events detected by both C2 and C3 between 15 October 2018 and 22 February 2019 (equivalent to days 50\,--\,180 in our simulation, excluding the initial ramp time). In addition, there were another 17 CMEs detected by either \textit{LASCO}-C2 or -C3 alone. All of them were marked as ``poor'' events, and for many of them estimating the associated mass and kinetic energy was impossible. No halo CME was observed during this period \citep{2020ApJ...903..118D}. Thus the number of events in our simulation (19) appears comparable with the number of recorded CMEs.
We also performed a quantitative analysis of different measures, such as the area, magnetic flux, and helicity content of the non-potential structures associated with selected events, and compared them with observed interplanetary magnetic-cloud statistics. Although a direct comparison was not possible owing to the lack of suitable observations, we were able to validate our range of values -- especially the rates of erupting magnetic flux and magnetic helicity -- by drawing a comparison with a similar long-term simulation of the coronal magnetic field performed by \cite{2017ApJ...846..106L}, whose results matched well with observed estimates for magnetic clouds. We therefore anticipate that our simulated values would also be close to the observed quantities associated with magnetic clouds during the Cycle 24 minimum.
Finally, to investigate the robustness of our findings, we changed the non-ideal term from hyperdiffusion to ohmic diffusion and varied the solar-wind speed. In both scenarios, we obtain the same two distinct classes of eruptive events. From a comparative point of view, eruptions of flux ropes are less frequent than overlying-arcade eruptions in all of the simulations. We think that this is consistent with the observational finding that most slow CMEs are not associated with erupting filaments \citep{2008JGRA..113.1104H}.
In conclusion, our study demonstrates that during a low-activity phase of the sunspot cycle, the large-scale shearing plasma flow on the surface can lead to a slow but significant build-up of non-potentiality in the coronal magnetic field. Highly sheared arcades and flux ropes with strong field-line helicity store this excess non-potential energy, which is eventually released into the heliosphere through two distinct classes of eruptive events. This work, based on magnetofrictional simulations that exclude the intricacies of emerging sunspots, can thus be regarded as a first step towards understanding the complex global evolution of the coronal magnetic field.
\begin{acks}
This work was supported by STFC (UK) consortium grant ST/S000321/1. The \textit{SDO} data are courtesy of NASA and the \textit{SDO}/\textit{HMI} science team. The CME catalogue is generated and maintained at the CDAW Data Center by NASA and The Catholic University of America in cooperation with the Naval Research Laboratory. \textit{SOHO} is a project of international cooperation between ESA and NASA. We are also thankful to the reviewer for the useful suggestions, which have helped to improve the quality of this manuscript.
\end{acks}
{\footnotesize\paragraph*{Disclosure of Potential Conflicts of Interest}
The authors declare that they have no conflicts of interest.}
\newcommand{\mysection}[2]{\section[#1]{#2}\setcounter{equation}
{0}}
\newcommand{\small\begin{equation}}{\small\begin{equation}}
\newcommand{\end{equation}\normalsize\vspace*{-0.1ex}}{\end{equation}\normalsize\vspace*{-0.1ex}}
\newcommand{\small\begin{eqnarray}}{\small\begin{eqnarray}}
\renewcommand{\arraystretch}{1.2}
\newcommand{\end{eqnarray}\normalsize\vspace*{-0.1ex}}{\end{eqnarray}\normalsize\vspace*{-0.1ex}}
\newcommand{\small\begin{displaymath}}{\small\begin{displaymath}}
\newcommand{\end{displaymath}\normalsize\vspace*{-0.1ex}}{\end{displaymath}\normalsize\vspace*{-0.1ex}}
\newcommand{\small\begin{eqnarray*}}{\small\begin{eqnarray*}}
\renewcommand{\arraystretch}{1.2}
\newcommand{\end{eqnarray*}\normalsize\vspace*{-0.1ex}}{\end{eqnarray*}\normalsize\vspace*{-0.1ex}}
\newcommand{\noindent}{\noindent}
\newcommand{\epsilon}{\epsilon}
\newcommand{\int\limits}{\int\limits}
\newcommand{\overline{\rm MS}}{\overline{\rm MS}}
\newcommand{\frac{Q^2}{\mu^2}}{\frac{Q^2}{\mu^2}}
\newcommand{\frac{2 vk}{\mu}}{\frac{2 vk}{\mu}}
\newcommand{\frac{m^2}{\mu^2}}{\frac{m^2}{\mu^2}}
\newcommand{\frac{m^2-p^2}{m^2}}{\frac{m^2-p^2}{m^2}}
\newcommand{\not\! v}{\not\! v}
\newcommand{\frac{1+\!\vslash}{2}}{\frac{1+\!\not\! v}{2}}
\newcommand{{\mbox d}}{{\mbox d}}
\newcommand{\frac{C_F}{4\pi N_f}}{\frac{C_F}{4\pi N_f}}
\newcommand{\left(-\frac{2\omega}{\mu}\right)^{-2 u}}{\left(-\frac{2\omega}{\mu}\right)^{-2 u}}
\begin{document}
\begin{titlepage}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\makebox[2cm]{}\\[-1in]
\begin{flushright}
\begin{tabular}{l}
CERN-TH/95-26\\
UM-TH-95-3\\
hep-ph/9502300\\
February 1995
\end{tabular}
\end{flushright}
\vskip1.1cm
\begin{center}
{\Large\bf
Resummation of $(\beta_0 \alpha_s)^n$ Corrections in QCD:
Techniques and Applications to the $\tau$ Hadronic Width and the
Heavy Quark Pole Mass
\\[4pt]
}
\vspace{1.4cm}
Patricia Ball$^1$,
M. Beneke$^2$ and
V.M.\ Braun$^3$\footnote{On leave of absence from St.\ Petersburg
Nuclear Physics Institute, 188350 Gatchina, Russia.}
\vspace{1.4cm}
$^1${\em CERN, Theory Division, CH--1211 Gen\`{e}ve 23, Switzerland}\\[0.5cm]
$^2${\em Randall Laboratory of Physics,
University of Michigan, Ann Arbor, Michigan 48109, USA}\\[0.5cm]
$^3${\em DESY, Notkestra\ss\/e 85, D--22603 Hamburg, Germany}
\vspace{2cm}
{\bf Abstract:\\[5pt]}
\parbox[t]{\textwidth}{
We propose to resum exactly any number of one-loop vacuum polarization
insertions into the scale of the coupling of
lowest order radiative corrections. This makes maximal
use of the information contained in one-loop perturbative corrections
combined with the one-loop running of the effective coupling and
provides a natural extension of the familiar BLM scale-fixing
prescription to all orders in perturbation theory. It is suggested
that the remaining radiative corrections should be reduced after
resummation. In this paper we implement this resummation
by a dispersion technique and indicate a possible generalization
to incorporate two-loop evolution. We investigate in some detail
higher order perturbative corrections to the $\tau$ decay width and
the pole mass of a heavy quark. We find that these corrections tend
to reduce $\alpha_s(m_\tau)$ determined from $\tau$ decays by
approximately 10\% and increase the difference between the bottom pole
and $\overline{\rm MS}$-renormalized mass by 30\%.}
\end{center}
\end{titlepage}
\newpage
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\mysection{Introduction}{Introduction}
During the past decade QCD has turned from qualitative descriptions
to quantitative predictions for the strong interactions. The influx
of more accurate experimental data represents a constant stimulus to
improve theoretical predictions. While an understanding of
confinement remains as evasive as ever, these predictions
rely on the validity of
nonperturbative factorization in processes governed by a large momentum
scale. This makes it possible to isolate the presently uncalculable infrared
dynamics in a set of universal parameters or functions. One is then led to
the conclusion that relations of physical quantities can be calculated
in perturbation theory. For certain observables, such as those
derived from totally inclusive processes with no hadrons in the initial
state, one obtains parameter-free predictions (except for the
strong coupling) to logarithmic accuracy
in the hard scale.
Much effort has been invested in the computation of higher order QCD
radiative corrections. For a few observables for which quark masses
are unimportant, second order corrections in the strong coupling
$\alpha_s$ are known. Third order expressions
exist for hadroproduction in $e^+ e^-$-annihilation
and $\tau$-lepton decays and for the Gross-Llewellyn-Smith and Bjorken
sum rule in deep inelastic scattering. On the other hand, observables
involving quark masses, for example decay widths of hadrons containing
a heavy quark, are known only to first order. In most cases, the extension
of present results is not simply a question of algebraic complexity and
algorithms for handling it. The existing calculations have exploited
presently known techniques to the frontier beyond which new
methods need to be designed, a
task which might not be completed soon. In this situation it is
interesting to explore possible sources of systematically
large corrections in higher orders, which, once identified, might then be
taken into account exactly to all orders. In doing so, one
may hope to reduce
the uncertainty due to ignorance of exact higher order coefficients.
Resummations of this type are familiar and necessary for many
problems involving disparate mass scales. Large perturbative corrections
in higher orders are associated with a logarithm of the ratio
of these scales and can often be summed with renormalization
group techniques, which allow one to obtain the dominant power of that
logarithm to all orders by one-loop calculations. Although in practice
it is not always clear whether the logarithms dominate the constant
pieces -- in particular, since the coefficients of logarithms grow
geometrically with the order of $\alpha_s$, whereas the constants grow
factorially~--, such a resummation is controlled by an ``external''
parameter (the logarithm of two scales) that can be varied, at least
in fictitious limits. For the problem at hand, we assume that such
renormalization group improvement has already been done or consider
observables that depend only on a single scale. We are thus interested
in systematically large contributions to constant terms with no external
scale at our disposal.
\phantom{\ref{ex}}
\begin{figure}[t]
\vspace{-2cm}
\epsfysize=28cm
\epsfxsize=20cm
\centerline{\epsffile{ex.eps}}
\vspace*{-20cm}
\caption{\label{ex} A typical diagram with multiple fermion loop
insertions into the lowest order correction to a generic physical
quantity.}
\end{figure}
The estimation of constant terms of yet uncalculated coefficients, which is
closely related to the problem of scale- and scheme-setting for truncated
perturbative expansions, is often thought to attempt
the impossible. While indeed for any particular observable it is
impossible to assess with rigour the quality of a certain scale-setting
prescription, some prescriptions may be supported by general physical
arguments and turn out to be closer to exact results {\em a posteriori}.
The most distinguished prescription of this kind has been formulated
by Brodsky, Lepage and Mackenzie \cite{BRO83}. Observing that in QED
all scale-dependence of the coupling results from photon vacuum
polarization, they suggested that the effect of fermion loop insertions
into a photon line in higher orders be absorbed into the scale of the
coupling of a given order and proposed to apply this criterion to
QCD as well. In practical applications, this suggestion has mainly
been realized in second order in $\alpha_s$. To this accuracy,
only a single
fermion loop insertion needs to be calculated and the relevant contribution
can be traced by its dependence on the number of light fermions $N_f$.
In general, at order $\alpha_s^{n+1}$, the effect of one-loop evolution
of the coupling can still be obtained by the highest power of $N_f$ in
the flavour-dependence of coefficients. The highest power of $N_f$
originates solely from diagrams with $n$ fermion loop insertions, such
as in Fig.~\ref{ex}, which are much easier to calculate than
flavour-independent terms. In QCD, the identification of contributions
related to one-loop
renormalization of the coupling implies that the coefficient
of the highest power in $N_f$ should in fact not multiply $N_f$
to some power,
but the combination $N_f-33/2$, as it appears in the Gell-Mann-Low
function to leading order. Computing diagrams as in Fig.~\ref{ex} and
then replacing $N_f$ by $N_f-33/2$, one takes into account
partial contributions
from other diagrams, which are much harder to evaluate exactly (the
precise identification of these contributions with particular
diagrams is not straightforward and gauge-dependent). It turns out
that in all cases where comparison with exact second order results is
possible, this replacement approximates the exact coefficient
amazingly well in the $\overline{\rm MS}$-scheme \cite{BROT94}. In what follows we
shall use the term ``Naive Nonabelianization''
(NNA) \cite{BG94} for the hypothesis that a substantial part of
higher order
radiative corrections can be accounted for by running of the coupling,
in the sense of substitution of $N_f\rightarrow N_f-33/2$ in the term
with the highest power of $N_f$. We should mention that
BLM scale-setting and NNA
are technically identical and the distinction is rather a matter of
interpretation: The BLM scale-setting by its physical motivation
does not necessarily claim that genuine higher order corrections
missed by the above substitution should be small, whereas NNA assumes
this stronger assertion.
This paper is concerned with an exposition of details of the recent
proposal \cite{BB94b} of two of us
to extend the BLM procedure to higher orders and, in particular,
to sum all contributions associated with one-loop running of the
coupling. We emphasize that this procedure has repeatedly been
suggested in the past \cite{BRO83,LEP93}, but has apparently not
been pursued to the point of practical implementation. A summation
similar to, but technically different from \cite{BB94b} was also
presented in \cite{Nnew}. We believe that this resummation
is useful, because the effect of evolution of the coupling is
a systematic source of potentially large perturbative coefficients.
In such a situation it is advantageous to incorporate this effect
in theoretical predictions even if it transcends a fixed-order
perturbative approximation.
To become definite, we use the following convention for the QCD
$\beta$-function\footnote{Since we discuss corrections proportional
to $(-\beta_0\alpha_s)^n$, the reader accustomed to the normalization
$11-(2/3) N_f$ may think of an expansion in terms of $\alpha_s/(4\pi)$.}:
\small\begin{eqnarray}
\mu^2\frac{d\alpha_s}{d\mu^2}&=&\beta(\alpha_s)=\beta_0\alpha_s^2
+\beta_1\alpha_s^3 +\ldots\,;
\nonumber\\
\beta_0 &=& -\frac{1}{4\pi}\left(11-\frac{2}{3} N_f\right)\,,
\hspace{0.5cm}
\beta_1 = -\frac{1}{(4\pi)^2}\left(102-\frac{38}{3} N_f\right)\,.
\end{eqnarray}\normalsize\vspace*{-0.1ex}
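The one-loop running implied by this convention has the familiar closed-form solution $1/\alpha_s(\mu) = 1/\alpha_s(Q) - \beta_0\ln(\mu^2/Q^2)$. The following Python sketch (an illustrative helper with invented names, not part of the original calculation) evolves the coupling between two scales with exactly this formula:

```python
import math

def beta0(nf):
    """Leading beta-function coefficient in the convention above."""
    return -(11.0 - 2.0 / 3.0 * nf) / (4.0 * math.pi)

def alpha_s_one_loop(mu, alpha_ref, q_ref, nf):
    """Closed-form solution of mu^2 d(alpha_s)/d(mu^2) = beta0*alpha_s^2:
    1/alpha_s(mu) = 1/alpha_s(q_ref) - beta0 * ln(mu^2/q_ref^2)."""
    log = math.log(mu * mu / (q_ref * q_ref))
    return alpha_ref / (1.0 - beta0(nf) * alpha_ref * log)
```

One-loop evolution is exactly transitive (running down and then back up reproduces the starting value), which is what makes the reshuffling of the $N_f$-dependence into $\beta_0$ unambiguous at this order.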
\noindent For a generic physical observable $R$ (which for the sake of illustration
we assume to depend only on a single, large scale $Q$), we may eliminate
the $N_f$-dependence in radiative corrections in favour
of the $\beta$-function
coefficients $\beta_0$, $\beta_1$ etc. For now (and most parts of this
paper) we restrict ourselves to one-loop running, induced by $\beta_0$.
The (truncated) perturbative expansion of $R$ is written as
\small\begin{eqnarray}
R-R_{tree} &=& \sum_{n=0}^N[r_{n0}+r_{n1}N_f
+\ldots + r_{nn}N_f^n]\alpha_s(Q)^{n+1}
\nonumber\\&=&
r_0 \alpha_s(Q)\sum_{n=0}^N\left[\delta_n+d_n (-\beta_0)^n\right]
\alpha_s(Q)^n\,,
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent where
\small\begin{equation}\label{dn}
d_n = (-6\pi)^n \frac{r_{nn}}{r_0}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
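In practice Eq.~(\ref{dn}) is the entire content of the substitution underlying NNA: since $N_f = 6\pi\beta_0 + 33/2$ at one loop, the factor $(-6\pi)^n$ converts the coefficient of the highest power of $N_f$ into the coefficient of $(-\beta_0)^n$. A one-line Python sketch (function name invented for illustration):

```python
import math

def d_coefficient(n, r_nn, r0):
    """d_n = (-6*pi)**n * r_nn / r0: trades the highest power of N_f
    in r_n for a power of (-beta0), using N_f = 6*pi*beta0 + 33/2
    at one-loop accuracy."""
    return (-6.0 * math.pi) ** n * r_nn / r0
```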
\noindent Let us stress again that $d_n$ is unambiguously determined by diagrams
with $n$ insertions of fermion bubbles as in Fig.~\ref{ex} and only
$\delta_n$ requires a true $n+1$-loop calculation. The $\delta_n$
($n\ge 2$) could
be further separated into contributions from two-loop evolution and
the effect of one-loop evolution on the genuine two-loop corrections
(see \cite{BRO94} for $\delta_2$). We introduce
\small\begin{equation}\label{M}
M_N(-\beta_0\alpha(Q)) = 1+\sum_{n=1}^N d_n (-\beta_0\alpha(Q))^n
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent as a measure of how much the lowest order correction is modified by
summing $N$ one-loop vacuum polarization insertions.
To make an explicit connection to BLM scale-setting, we define
\small\begin{equation}\label{blmscalesdef}
\label{scale} \alpha_s(Q^*_N) = \alpha_s(Q)\,M_N(-\beta_0 \alpha_s(Q))\,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent absorbing the effect of vacuum polarization in the scale $Q_N^*$ of the
coupling. For $N=\infty$,
$\alpha_s(Q^*)$ ($Q^*\equiv Q^*_\infty$) is transparently interpreted as the
running coupling averaged over the lowest order corrections. Isolating the
integration over gluon virtuality in the lowest-order diagram, we may write
\small\begin{equation} r_0\alpha_s(Q)=\alpha_s(Q)\int\mbox{d}^4 k\,F(k,Q)\,\frac{1}{k^2} .
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Then
\small\begin{equation} \label{average}
r_0\alpha_s(Q^*) = \frac{r_0\alpha_s(Q)}
{1-\beta_0\alpha_s(Q)\ln({Q^*}^2/Q^2)}
= \int\mbox{d}^4 k\,F(k,Q)\,\frac{\alpha_s\left(
k\exp[C/2]\right)}{k^2},
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where $C$ is the scheme-dependent subtraction constant for the
fermion loop. The difference between the
leading-order BLM scale $Q_1^*$ and $Q^*$ is
precisely the difference between averaging the running coupling itself,
or the logarithm of its argument, over the lowest-order diagram.
The values of $Q^*$ and $Q^*_N$ for $N>1$
depend on the value of $\alpha_s(Q)$. Such a dependence is intrinsic to any
generalization of the BLM prescription beyond
leading-order and has previously
been noted in \cite{GRU92,BRO94}. As a result we may express any physical
quantity as
\small\begin{equation} R-R_{tree} = r_0\left(\alpha(Q)
M_\infty(-\beta_0\alpha_s(Q))+\sum_{n=1}^\infty\delta_n\alpha_s(Q)^
{n+1}\right)\,.
\end{equation}\normalsize\vspace*{-0.1ex}
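To make Eq.~(\ref{blmscalesdef}) concrete, one can invert the one-loop relation for the coupling and solve for the scale $Q^*_N$ that absorbs a given enhancement factor $M_N$. The following Python sketch (an illustrative helper, using one-loop running throughout) does this:

```python
import math

def blm_scale(alpha_q, m_n, q, beta0):
    """Solve alpha_s(Q*_N) = alpha_s(Q) * M_N for Q*_N, using
    1/alpha_s(Q*) = 1/alpha_s(Q) - beta0 * ln(Q*^2/Q^2), so that
    ln(Q*^2/Q^2) = (1/alpha_q) * (1 - 1/M_N) / beta0."""
    log = (1.0 / alpha_q) * (1.0 - 1.0 / m_n) / beta0
    return q * math.exp(0.5 * log)
```

For $M_N>1$ and $\beta_0<0$ this gives $Q^*_N<Q$: an enhancement of the lowest-order correction is traded for a lower, and hence stronger, effective scale of the coupling.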
Separating radiative corrections in this way and summing a partial set
of them into $M_\infty(-\beta_0\alpha_s)$
can be motivated in different ways. First of all, as already
mentioned, this procedure is supported empirically at the two-loop
level.
Provided the three-loop coefficient is sizeable, one might again
expect that a significant portion of it can already be accounted for
by $M_2(-\beta_0\alpha_s)$ and, in higher orders, by
$M_N(-\beta_0\alpha_s)$.
It must be noted that
such a statement depends on the choice of scheme and, since the $\delta_n$
depend on $N_f$ for $n>1$, also on $N_f$ for a given scheme. Although
$Q^*$ is defined such that $\alpha_s(Q^*)$ is scheme-independent if
one consistently uses the one-loop $\beta$-function, an arbitrary
$N_f$-independent finite renormalization can always ruin the
supposed dominance of $\beta_0^n\alpha_s^{n+1}$-terms by implying
a definition of the coupling that leads to anomalously large values
of the remaining terms $\delta_n$. This remark applies of course to any
partial resummation, whether it is based on a systematic (external)
parameter or not as in the case at hand\footnote{We do not consider
$\beta_0$ on the same footing as, say, $\ln Q^2/m^2$, since it is
an intrinsic parameter of QCD. However, we shall somewhat loosely refer
to the approximation of ignoring the $\delta_n$ as the ``large-$\beta_0$
limit''.}. In this respect a resummation utilizing one-loop running
is similar to resummation of $\pi^2$'s that arise from analytic
continuation from the space-like to the time-like region of, for
instance, the Sudakov form factor \cite{STE87} or the coupling as
in $e^+ e^-$-annihilation or $\tau$ decays \cite{DIB92},
and, in fact, comprises the resummation for $\tau$ decays as a
special case. Since empirically
the $\beta_0$-term dominates second order radiative corrections for
many observables in the $\overline{\rm MS}$ scheme,
one would not expect the resummation of $\beta_0\alpha_s^{n+1}$-terms to
be numerically useful in schemes that
differ from $\overline{\rm MS}$ by a redefinition of the coupling that changes
significantly the relative importance of flavour-dependent and
flavour-independent terms.
The dominance of
$\beta_0^n\alpha_s^{n+1}$-terms is lent further
support by the behaviour of
perturbative coefficients in large orders (renormalons). It is
precisely the effect of vacuum polarization, which leads to the
expectation that in sufficiently large orders, the coefficients should
indeed have the form
\small\begin{equation} r_n\sim K\,(a\beta_0)^n n! n^b\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent The series is thus divergent and, provided it is asymptotic at all,
cannot approximate the true result to any desired accuracy, even if the
exact coefficients could be computed to arbitrary order. The
evaluation of diagrams as in Fig.~\ref{ex} provides some insight into
this ultimate limitation of perturbative QCD and into the approach to
the asymptotic limit. In this paper, however,
we are not so much
interested in large orders and effects suppressed by powers
of the hard scale $Q$. Summation of effects associated with one-loop
running of the
coupling should prove useful in intermediate orders (say,
$n=2 - 6$), when perturbation theory is still reliable. It is amusing
to speculate whether the empirical dominance of $\beta_0$ at the
two-loop
level indicates that one is already close to the asymptotic behaviour.
The latter is derived from a saddle point
expansion and the large contribution
arises from internal momenta either much smaller or much larger than
the external scale $Q$. Proximity to the asymptotic behaviour -- if true --
would indicate that the distribution of internal momenta is
already well-approximated by a Gaussian even though the main contribution
to the Feynman integral is not from small or large momenta.
The consideration of large orders also provides useful guidance on
the limitations that arise from the restriction to one-loop evolution
effects. As a matter of fact, diagrams associated with one-loop
evolution do not provide the correct constants $b$ and $K$ in the
large-$n$ behaviour. At large $n$, the effect of two-loop evolution
on a single gluon line
and one-loop evolution on two gluon lines becomes equally important
and should eventually be taken into account\footnote{
For example, an infinite number of two-loop insertions is necessary
to obtain the correct value of $b$ \cite{ZAK92}.}.
In practical applications,
we will often find that the series has to be truncated due to its
divergence at rather low $n$. Thus, many diagrams which are required
to establish the formal large-$n$ behaviour never become relevant.
Moreover, provided $\delta_1$
is already not large, one should also expect that vacuum polarization
insertions into two gluon lines will remain comparatively small.
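The truncation point forced by the divergence is easy to locate numerically: for coefficients growing like $n!\,a^n$ the terms $d_n a_s^n$ first shrink and then blow up, and the best one can do is stop at the smallest term. A small, purely illustrative Python sketch (names and the toy growth law $d_n = n!\,a^n$ are our own, not taken from the text):

```python
import math

def minimal_term_order(a_s, a=1.0, nmax=40):
    """Return the order n (starting at 1) at which the magnitude of
    the term n! * (a * a_s)**n is smallest; beyond this order the
    factorially divergent series stops improving."""
    terms = [math.factorial(n) * (a * a_s) ** n for n in range(1, nmax)]
    return 1 + min(range(len(terms)), key=terms.__getitem__)
```

Since the minimal term sits near $n\sim 1/(a\,a_s)$, halving the coupling roughly doubles the number of usable orders, consistent with the statement that truncation occurs at rather low $n$ for realistic couplings.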
Let us repeat that the resummation of $\beta_0^n\alpha_s^{n+1}$-terms
can only be judged {\em a posteriori} and can fail to provide a
good estimate of higher order corrections for any particular quantity.
Even in this case, we believe that this resummation is physically
motivated and that it is appropriate to absorb this particular class
of higher order corrections into the scale of the coupling in lowest
order. Application of BLM scale-setting to leading order (absorbing
the contribution from one fermion loop) yields $Q_1^*=Q e^{-d_1/2}$.
Upon re-expansion of $\alpha_s(Q_1^*)$ to one-loop accuracy for the
$\beta$-function, this suggests a geometric
growth $d_n^{\rm BLM}=d_1^n$, to be compared with the factorial growth
of the exact coefficients. Resummation to all orders corrects for this
discrepancy by adjusting $Q^*_\infty$ such that the expansion of
$\alpha_s(Q^*_\infty)$ gives the correct values of $d_n$. Thus to
the accuracy of one-loop evolution the result of resummation is
equivalently expressed as $\alpha_s(Q) M_\infty(-\beta_0\alpha_s)$
and we often prefer this form of presentation. It is no longer
equivalent to $\alpha(Q^*_\infty)$, if one adopts two-loop accuracy
for $\alpha_s$. Since for large $n$ the true $d_n$ will always
outgrow $d_1^n$, one might conclude that the usual BLM scale-setting
underestimates higher order radiative corrections. In practice,
the effect is often just the opposite: The most
important contributions come from intermediate $n$ and it turns out that
in many cases $d_1$ is comparatively large ($Q_1^*$ is small), so
that $d_1^n > d_n$ in this region. The usual BLM scale-setting therefore
typically overestimates the size of radiative corrections associated
with one-loop running.
The remainder of the paper is organized as follows: In Sect.~2 we
develop in detail the technique to implement the resummation.
We use a dispersion technique to reduce the problem to a one-dimensional
integral over lowest order radiative corrections computed with
finite gluon mass, which is suitable for numerical evaluation.
Thus compared to the complications of a complete higher
order calculation, this resummation can be performed with little
computational expense. Compared to a similar implementation of the
standard BLM scale-setting \cite{SV94}, the computation of higher
order coefficients $d_n$ comes with no additional expense at all. It is
convenient to introduce the Borel transform as a generating function
of higher order radiative corrections. The principal value definition
of the Borel integral serves as a starting point for the summation of
the series within a certain accuracy, indicated by the presence of
infrared renormalons. We shall also see that this definition requires
all kinematic invariants to be large compared to $\Lambda_{\rm QCD}$,
reflecting the inapplicability of perturbation theory as a starting
point for summation, if this requirement is not met. In this Section
we also generalize the results of \cite{BB94b} to quantities with
anomalous dimension and include quarks with finite masses
in the loops.
In Sect.~3 we apply the resummation to the
hadronic width of the $\tau$
lepton and find a 10\% decrease of $\alpha_s(m_\tau)$ due to
four- and higher loop corrections.
We discuss the possibility of $1/m_\tau^2$-corrections in the
light of our resummation. In general we prefer to be agnostic about
power corrections and stick to perturbation theory. An exception to
this rule is that we do not want to introduce power corrections in
conflict with the operator product expansion in euclidian space
without
good reasons (which we do not have). Under this assumption we show that
principal value resummation does not introduce $1/m_\tau^2$-terms to the
decay width, provided the coupling is chosen appropriately. Sect.~4
contains a detailed discussion of the resummation of
$(-\beta_0)^n\alpha_s^{n+1}$-terms for the pole mass
of a heavy quark. We combine
the resummation with the exact two-loop result to give an estimate
of the difference between the pole mass and the $\overline{\rm MS}$-renormalized
mass. We keep finite quark masses inside loops which allows to
trace the origin of large coefficients to the relevant regions of
internal momentum.
In Sect.~5 we formulate one possible extension of
our resummation to incorporate partially the effect of two-loop
running resulting in $\beta_1 \beta_0^{n-2}$-corrections.
This extension is again guided by the flavour-dependence
of coefficients and can be considered as an extension of recent work
by Brodsky and Lu \cite{BRO94}. The size of corrections is
illustrated for the
vector correlation function relevant to $\tau$ decays and for the
pole mass of a heavy quark. Sect.~6 contains conclusions. Three Appendices
deal with technical issues: In Appendix A we derive a simple expression for
subtractions needed for logarithmically ultraviolet divergent quantities
and collect the details of results presented in Sect.~2. Appendix B
contains analytic formulae
for the lowest order radiative corrections to $R_{e^+ e^-}$ and
hadronic $\tau$ decays with finite gluon mass. In Appendix
C we list exact results for some abelian five-loop diagrams to the
vector correlation function.
In a companion paper \cite{BBB} we shall discuss the implications
of resummation for heavy quark decays and the determination of
$|V_{bc}|$.
\mysection{Techniques}{Techniques}
\setcounter{equation}{0}
The aim of this Section is to develop a systematic approach to
the calculation of diagrams with an
arbitrary number of fermion loop insertions,
such as in Fig.~1.
We assume a generic physical (short-distance)
quantity $R$ and a renormalization
scheme that does not introduce an artificial $N_f$ dependence, such
as $\overline{\rm MS}$. It is also assumed that lowest order radiative corrections
to $R$ do not involve the gluon self-coupling.
Subtracting the tree-level contribution,
we are left with the (truncated) perturbative expansion
\small\begin{equation}\label{rtree}
R-R_{tree} = \sum_{n=0}^N r_n\alpha_s^{n+1}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent The coefficients $r_n$ are polynomials in $N_f$\,,
\small\begin{equation}\label{flavourseries}
r_n=r_{n0}+r_{n1} N_f+\ldots+r_{nn} N_f^n,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent and we will calculate the coefficients $r_{nn}$, which
originate from $n$ fermion loop insertions into the lowest-order
diagram. We then write
\small\begin{equation}\label{defd_n}
r_n= r_0\left[\delta_n+(-\beta_0)^n d_n\right],
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where $d_n=(-6\pi)^n r_{nn}/r_0$ absorbs the term with the largest power
of $N_f$. The effect of one-loop evolution of the coupling on lowest order
radiative corrections is then entirely contained in the $d_n$ and in the
remainder of the paper we will consider the $\delta_n$ as
corrections to the approximation of ``Naive Nonabelianization''. As a
measure of how much the lowest order radiative correction
is modified by
including $N$ vacuum polarization insertions, we define
\small\begin{eqnarray}\label{defM}
M_N(-\beta_0\alpha_s) &\equiv& 1+\sum_{n=1}^N d_n
(-\beta_0\alpha_s)^n\,,
\nonumber\\
M_\infty(-\beta_0\alpha_s)&\equiv& M_{N\to\infty}(-\beta_0\alpha_s)\,.
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent Taken literally, the limit $N\rightarrow\infty$ does not exist, because
the series diverges. It will be interpreted in the sense of
Eq.~(\ref{borelintegral}) below.
For further use, we introduce the shorthand notation
\small\begin{equation}
a_s(\mu) = -\beta_0 \alpha_s(\mu)\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where $\mu$ is the normalization scale, which often will
be suppressed for brevity. In this Section
we are concerned only
with technical aspects of the calculation of $M_N(a_s)$ and
$M_\infty(a_s)$.
\subsection{Borel transform vs. finite gluon mass}
A convenient way to deal with diagrams with multiple loop
insertions is to calculate the Borel transform
\small\begin{equation}\label{borelsum}
B[R](u) \equiv \sum_n \frac{r_n}{n!}(-\beta_0)^{-n} u^n\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent which can be used as a generating function for the fixed-order
coefficients \cite{BEN93}:
\small\begin{equation}\label{genfunction}
r_n = (-\beta_0)^n \frac{d^n}{du^n} B[R](u)_{|_{u=0}}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
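Eq.~(\ref{genfunction}) simply says that the Taylor coefficients of the Borel transform encode the fixed-order $r_n$. A minimal Python sketch (the helper name is ours; the single-pole transform used in the test is only a toy example):

```python
import math

def coefficients_from_borel(taylor_coeffs, beta0):
    """Given the Taylor coefficients c_n of B[R](u) = sum_n c_n u^n,
    return r_n = (-beta0)**n * n! * c_n, i.e. the generating-function
    relation evaluated term by term."""
    return [(-beta0) ** n * math.factorial(n) * c
            for n, c in enumerate(taylor_coeffs)]
```

For $B[R](u)=1/(1-u)$, whose Taylor coefficients are all 1, this reproduces the factorial growth $r_n=(-\beta_0)^n\,n!$ characteristic of a renormalon singularity at $u=1$.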
\noindent Another advantage of presenting the results in form of the
Borel transform is that the result for the sum of all diagrams
can easily be recovered by the integral representation
\small\begin{equation}\label{borelintegral}
r_0 M_\infty(a_s) = \frac{1}{a_s}
\int_0^\infty du\, e^{-u/a_s} B[R](u)\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where the integration goes over positive values of the Borel
parameter $u$. Note that we define the Borel parameter $u$
with an additional factor $(-\beta_0)$ compared to the conventional
definition. In fact, Eq.~(\ref{borelintegral})
requires some explanation: As it stands the integral
is not defined, because the Borel transform generally
has singularities on the integration path, known as infrared
renormalons. We shall adopt a definition of the integral based
on deforming the contour above or below the real axis or on
a principal value prescription. These prescriptions are not unique
and their difference, which is exponentially small in the
coupling, must be considered as an uncertainty which
cannot be removed within perturbation theory. This will
be discussed in more detail. A second question
concerns the existence of the principal value
integral and the behaviour of the
Borel transform at $u=\infty$. If we consider a physical quantity
that depends only on a single scale $Q$, then, to one-loop running
accuracy, renormalization group invariance entails that the Borel
transform can be written as $(\mu^2/Q^2)^u$ times a $Q$- and
$\mu$-independent function $F(u)$. Combining the factor
$(\mu^2/Q^2)^u$ with $e^{-u/a_s(\mu)}$ in Eq.~(\ref{borelintegral}),
we deduce that the principal value integral exists, provided
that $Q^2$ is sufficiently large compared to $\Lambda_{\rm QCD}^2$
and $F(u)$ does not increase faster than any exponential as
$u$ approaches
positive infinity. Since the second property is satisfied in all
examples which we shall meet (and we may conjecture that this
is generally true for the Borel transform computed from higher
order corrections due to vacuum polarization), we shall assume
in general that all kinematic invariants on which $R$ depends
explicitly are sufficiently large compared to $\Lambda_{\rm QCD}^2$
(to be precise, the difference of two invariants is also an
invariant).
This is not an additional assumption needed to take the Borel
integral. Without it perturbative methods are not applicable
and there is no perturbation theory to start with. In particular,
we do not know whether the Borel transform, which is useful in
connection with short-distance expansions, can serve as a
{\em bona fide} starting point to summation in the strong coupling
regime\footnote{If some kinematic invariants are small,
one might still be able to define the Borel
integral as an analytic
continuation. However, in this regime all power corrections
in a short-distance expansion are of the same importance and
the analytic continuation is useless unless the summation of
the short-distance expansion is understood.}.
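The interplay between Eq.~(\ref{borelintegral}) and the divergent fixed-order series can be illustrated numerically. The sketch below uses the toy Borel transform $B[R](u)=1/(1+u)$ with $r_0=1$ (an assumption for illustration; its only singularity sits at $u=-1$, off the integration path, so the Borel integral here is an ordinary convergent integral), which generates the series $\sum_n n!\,(-a_s)^n$:

```python
import math

# Toy Borel sum: (1/a) int_0^infty du exp(-u/a) / (1+u), cf. Eq. (borelintegral),
# compared with the divergent series sum_n n! (-a)^n that it generates.
a_s = 0.2  # assumed value

def borel_sum(a, cut=12.0, n=20000):
    """Simpson-rule evaluation; the tail beyond 'cut' is exponentially small."""
    h = cut / n
    s = 0.0
    for i in range(n + 1):
        u = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * math.exp(-u / a) / (1.0 + u)
    return s * h / 3.0 / a

def partial_sum(N, a):
    return sum(math.factorial(n) * (-a)**n for n in range(N + 1))

M = borel_sum(a_s)
# Partial sums first approach the Borel sum, then run away from it.
print(M, partial_sum(5, a_s), partial_sum(15, a_s))
```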
In simple cases the Borel transform can be calculated directly.
This is due to the fact that the evaluation of diagrams with multiple
fermion bubble insertions in Landau gauge corresponds to the evaluation
of the lowest-order diagram with the effective propagator
\small\begin{equation}\label{propagator}
D^{AB}(k) = i\delta^{AB}\frac{k_\mu k_\nu-k^2 g_{\mu\nu}}{k^4}
\frac{1}{1+\Pi(k^2)}\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where
\small\begin{equation}
\Pi(k^2) = a_s \ln\left(-\frac{k^2}{\mu^2}e^C\right)
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent and $C$ is a scheme-dependent finite renormalization constant.
In the $\overline{\rm MS}$-scheme $C=-5/3$, in the V-scheme \cite{BRO83}
$C=0$.
For a chain of fermion loops contributing to a physical amplitude with
Euclidean external momenta, one can separate the integration over the
gluon momenta to write it as
\small\begin{equation}\label{bs}
r_0 \alpha_s(\mu) M_\infty(a_s)
=\int d^4k\, F(k,Q)\frac{1}{k^2}\frac{\alpha_s(\mu)}{1+\Pi(k^2)}\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where the transverse projector that appears in the gluon propagator
in Landau gauge is assumed to be included in the function $F(k,Q)$ and
$Q$ stands for a collection of kinematic invariants.
A crucial simplification arises since for diagrams with only one
fermion bubble
chain the Borel transformation applies to the expansion in $\alpha_s$
of the propagator in Eq.~(\ref{propagator}) itself, rather than to
the set of diagrams as a whole. The effective
(Borel-transformed) gluon propagator
is \cite{BEN93}:
\small\begin{equation}\label{gluonprop}
B[\alpha_s D_{\mu\nu}^{AB}(k)](u)=
i\delta^{AB} \left(\frac{e^C}{\mu^2}\right)^{-u} \frac{
k_\mu k_\nu - k^2 g_{\mu\nu}}{(-k^2)^{2+u}}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Thus, the task of calculating the Borel transform of Feynman
diagrams with bubble insertions reduces to the calculation of the
leading-order diagram with the usual gluon propagator raised to
an arbitrary
power, which is familiar from analytic regularization \cite{SPE68}.
This trick suffices to derive an expression for the Borel
transform of the polarization operator with light quarks
\cite{BEN93,BRO93,BB94}, of the
heavy quark self-energy \cite{BB94} and several more complicated cases in
connection with heavy quark expansions, which can be
found in \cite{BB94,NS94,LMS94,BG94}.
However, in most phenomenologically interesting cases an analytic
expression for the Borel transform is difficult to obtain, especially
for observables involving more than one scale. Even if the
exact Borel transform is obtainable, taking $n$ derivatives
to evaluate fixed-order coefficients (see Eq.~(\ref{genfunction}))
may turn out
to be a complicated task. In this paper we shall work out a different
technique, which extracts the desired information on higher orders from the
lowest-order diagrams, calculated with a finite
gluon mass \cite{BB94b,SV94}.
Calculations of one-loop diagrams with finite gauge boson mass have
become routine for electroweak radiative corrections, which allows
one to hope that this technique is generally applicable to a wide range
of observables in QCD. In this way one obtains a concise integral
representation for the Borel transform.
For the moment let us restrict our discussion to Euclidean quantities,
which are not sensitive to the gluon self-coupling to leading
order.\footnote{With the latter restriction, we do not have problems
with gauge invariance, which otherwise prohibits introduction of a finite
gluon mass, unless QCD is embedded, for example, in an $SU(3)$ Higgs
model.}
Call $r_0(\lambda^2)$ the leading-order radiative correction calculated
with finite gluon mass $\lambda$ and $r_0\equiv r_0(0)$.
To be precise, we define $r_0(\lambda^2)$ as the sum of all Feynman
diagrams to leading order, which in general may be ultraviolet
divergent and need to be renormalized.
In this Section we restrict our discussion to
cases where no explicit renormalization is needed, which is the case,
e.g. for the derivative of the polarization
operator in Eq.~(\ref{polaroper}),
or transition amplitudes related to heavy quark decays with on-shell
mass renormalization.
It is easy to show that this assumption is equivalent to the
requirement that
$r_0(\lambda^2)$ vanishes as $\lambda^2\to\infty$.
Equally, in the case of Borel representation for the diagrams with
fermion bubbles, we assume that fermion loops are renormalized, and
no additional explicit renormalization is necessary.
This assumption can easily be relaxed.
A detailed discussion of
renormalization is given in Appendix A, where we work out
the missing overall subtractions for the general case.
We keep the standard gauge-fixing and work with the propagator
\small\begin{equation}\label{massgluonprop}
-i\delta^{AB}\frac{1}{k^2-\lambda^2+i\epsilon}
\left[g_{\mu\nu} - (1-\xi) \frac{k_\mu k_\nu}{k^2-\xi\lambda^2
+i\epsilon}\right] \,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent The relation to the Borel transform of the massless propagator
in Eq.~(\ref{gluonprop}) (in Landau gauge, $\xi=0$, or Feynman gauge,
$\xi=1$) is established
by an (inverse) Mellin representation
\small\begin{equation}
\frac{1}{k^2-\lambda^2} = \frac{1}{2\pi i}\,\frac{1}{k^2}
\!\!\!\int\limits_{-1/2-i\infty}^{-1/2+i\infty}
\!\!\!\!\!\mbox{d} u\,\Gamma(-u)
\Gamma(1+u)\left(-\frac{\lambda^2}{k^2}\right)^u\, .
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Writing down the leading-order radiative correction
with non-vanishing gluon mass $\lambda$ (in Landau gauge) as
\small\begin{equation}\label{finitemass}
r_0(\lambda^2)= \int d^4k\, F(k,Q)\frac{1}{k^2-\lambda^2}\,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent and comparing to Eq.~(\ref{bs}), one finds the identity \cite{BBZ94}
\small\begin{equation}\label{relation}
r_0(\lambda)\,=\,\frac{1}{2\pi i}
\!\!\!\int\limits_{-1/2-i\infty}^{-1/2+
i\infty}\!\!\!\!\!\mbox{d} u \,\Gamma(-u)\Gamma(1+u)\left(\frac{
\lambda^2}{\mu^2} e^C\right)^u\,B[R](u)\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Taking the inverse, we get \cite{BB94b}
\small\begin{eqnarray}\label{final1}
B[R](u) &=&
-\frac{\sin(\pi u)}{\pi }\int_0^\infty \frac{d\lambda^2}{\lambda^2}\,
\left(\frac{\lambda^2}{\mu^2}e^C\right)^{-u}
\left[r_0(\lambda^2)-r_0(0)\right]
\nonumber\\&=&
-\frac{\sin(\pi u)}{\pi u }\int_0^\infty d\lambda^2\,
\left(\frac{\lambda^2}{\mu^2}e^C\right)^{-u}
r'_0(\lambda^2)
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent where $ r'_0(\lambda^2) =(d/d \lambda^2)r_0(\lambda^2)$. If $r_0(\lambda^2)
\sim \lambda^{2 a}$ for small $\lambda^2$, the first line exists for
$0 < u < a$. The second line provides the analytic continuation to an interval
about $u=0$, so that derivatives at zero can be taken. Thus fixed-order
coefficients $r_n$ can be
expressed in terms of the integrals
\small\begin{equation}
J_k\equiv\int_0^\infty d\lambda^2\,\ln^k(\lambda^2/\mu^2)\,r'_0(\lambda^2)
\qquad k\le n\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent For instance (cf.~Eqs.~(\ref{defd_n}) and (\ref{genfunction})),
\small\begin{eqnarray}
d_0 &=& \frac{1}{r_0}\left[-J_0\right] = \,1\nonumber\\
d_1 &=& \frac{1}{r_0}\left[J_1+C J_0\right]\nonumber\\
d_2 &=& \frac{1}{r_0}\left[-J_2-2 C J_1-\left(C^2-\frac{\pi^2}{3}
\right) J_0\right]\\
d_3 &=& \frac{1}{r_0}\left[J_3+3 C J_2+(3 C^2-\pi^2) J_1+
(C^3-C\pi^2) J_0\right]\nonumber
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent etc. and $r_{n}=r_0\,(-\beta_0)^n d_n$.
$r_{1}$ coincides with the result of Smith and Voloshin \cite{SV94}
(who use a scheme with $C=0$), after integrating their
expression by parts. \\
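As a concrete (invented) example of this machinery, take $r_0(\lambda^2)=1/(1+\lambda^2)$ with $\mu=1$: then $J_0=-1$ and $J_1=0$ exactly, so $d_1=-C$. A short numerical check, with the $\overline{\rm MS}$ value $C=-5/3$:

```python
import math

# Numerical check of the J_k integrals for the illustrative (made-up)
# model r_0(lambda^2) = 1/(1 + lambda^2), mu = 1, MSbar value C = -5/3.
# The substitution lambda^2 = e^x maps J_k onto the whole real axis,
# where Simpson's rule converges quickly.
C = -5.0 / 3.0

def r0p(t):                      # r_0'(lambda^2) for r_0 = 1/(1+lambda^2)
    return -1.0 / (1.0 + t)**2

def J(k, lo=-40.0, hi=40.0, n=40000):
    """J_k = int_0^inf dl2 ln^k(l2) r_0'(l2), via l2 = e^x."""
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * x**k * r0p(math.exp(x)) * math.exp(x)
    return s * h / 3.0

J0, J1 = J(0), J(1)
d0 = -J0            # r_0 = r_0(0) = 1 in this model
d1 = J1 + C * J0
print(d0, d1)       # expect d0 = 1 and, since J1 = 0 here, d1 = -C = 5/3
```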
Next, we evaluate the Borel integral in Eq.~(\ref{borelintegral}) to
obtain a compact answer for the sum of diagrams with fermion bubble
insertions to arbitrary order, up to the ambiguities caused by renormalons
as mentioned below Eq.~(\ref{borelintegral}). For the following
derivation we assume that the Borel integral is defined with a contour
slightly above the positive real axis. We also recall that this integral
exists, if all kinematic invariants are large compared to
$\Lambda_{\rm QCD}^2$, which we assume. To find a
representation for the so-defined sum in terms of an integral over
$\lambda^2$, one needs to insert
Eq.~(\ref{final1}) in Eq.~(\ref{borelintegral}) and take the $u$-integral
explicitly. The $u$-integration is elementary, but the interchange of
orders of integration in $u$ and $\lambda^2$ can not be done
immediately, because the $\lambda^2$-integral in Eq.~(\ref{final1})
is not defined for all $u$ on the integration path.
It is convenient to use the first representation
in Eq.~(\ref{final1}) and write it as\footnote{The separation of the
two terms in the following equation reintroduces the pole at $u=0$
in each of the two $\lambda^2$-integrals. The pole cancels in the sum
of both terms, and both terms can be manipulated separately
regardless of this pole. This can be justified as follows:
One splits the $\lambda^2$-integral before
separation into the two terms at some sufficiently large
$\lambda_T^2$. The contribution from $\lambda^2_T$ to infinity
can be handled without difficulty. For the integral from 0
to $\lambda_T^2$ one can proceed as in the derivation following
Eq.~(\ref{split}), and both terms have no singularity at $u=0$.}
\small\begin{eqnarray}\label{split}
r_0 M_\infty(a_s) &=&
-\frac{1}{2\pi i}\int_0^\infty du\, e^{-u/a_s}
\int_0^\infty \frac{d\lambda^2}{\lambda^2}\,
\left(\frac{\lambda^2e^{-i\pi}}{\mu^2}e^C\right)^{-u}
\left[r_0(\lambda^2)-r_0(0)\right]
\nonumber\\&&\hspace{-1.5cm}+\,
\frac{1}{2\pi i}\int_0^\infty du\, e^{-u/a_s}
\int_0^\infty \frac{d\lambda^2}{\lambda^2}\,
\left(\frac{\lambda^2e^{i\pi}}{\mu^2}e^C\right)^{-u}
\left[r_0(\lambda^2)-r_0(0)\right]
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent The integration
over $u$ in the first of the two terms above can easily be taken
by rotating the contour to the positive imaginary axis. The quarter-circle
at infinity does not contribute, because of the behaviour of the
Borel-transform at infinity as discussed above.
The $\lambda^2$-integral exists for positive imaginary $u$ and
the order of integrations
can be interchanged. The $u$-integral in the second
term would equally easily be taken by rotating
the contour to the negative imaginary axis, but
the infrared renormalon singularities on the
real axis obstruct this deformation.
It can only be done at the price of picking
up the residues from the infinite
number of poles. A more elegant solution is to rotate first the
integration contour over $\lambda^2$ to the second
sheet: $\lambda^2\to\lambda^2 e^{-i(\pi+\epsilon)}$. Here we note that
for Euclidean quantities, there are no singularities of
the $\lambda^2$-integrand in the lower complex plane.
Next, the $u$-integration can again be performed by rotation
to the positive imaginary axis, as above. This integration gives
\small\begin{equation}\label{howpoleappears}
\int_0^{i\infty} du\, \exp\{-u [1/a_s+\ln(|\lambda^2|/\mu^2e^C)]
-i\epsilon\} =
\frac{a_s}{1+a_s\ln(|\lambda^2|/\mu^2e^C)-i\epsilon}
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent and introduces a pole singularity,
located at
\small\begin{equation}\label{Landau}
\lambda_L^2 =- \mu^2\exp[-1/a_s-C]\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent which is simply the
position of the Landau pole in the running coupling (in the
V-scheme). Now one can rotate the $\lambda^2$-integral back
from the second sheet to the real positive
axis. In this way one encounters the pole in Eq.~(\ref{howpoleappears}),
whose residue has to be added.
Collecting all the terms, we get after some algebra:
\small\begin{equation}\label{firstversion}
r_0 M_\infty(a_s) = -\!\int_0^\infty \frac{d\lambda^2}{\lambda^2}
\frac{a_s}{|1+a_s\ln(-\lambda^2/\mu^2 e^C)|^2}
\left[r_0(\lambda^2)\!-\!r_0(0)\right] +
\frac{1}{a_s}\left[r_0(\lambda_L^2\!-i\epsilon)\!-\!r_0(0)\right]
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where the $i\epsilon$ prescription corresponds to defining the
Borel integral above the real axis. If the Borel integral
is defined below the real axis, the contours are rotated
in the opposite directions, and the only modification is that the
sign of the $i\epsilon$-prescription is reversed in the result.
Finally, integrating by parts in the first term, we obtain
\small\begin{equation}\label{rBS}
r_0 a_s(\mu) M_\infty(a_s(\mu)) =
\int_0^\infty d\lambda^2\, \Phi(\lambda^2)\,r'_0(\lambda^2)
+[r_0(\lambda_L^2-i\epsilon)-r_0(0)]
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where
\small\begin{equation}\label{Phi}
\Phi(\lambda^2)= -\frac{1}{\pi}\arctan\left[\frac{a_s(\mu)\pi}
{1+a_s(\mu)\ln(\lambda^2/\mu^2 e^C)}\right] -
\,\Theta(-\lambda_L^2-\lambda^2)\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent which coincides with the result of \cite{BB94b}, obtained by
a different method.
Note that the term with the $\Theta$-function exactly cancels the
jump of the $\arctan$ at $\lambda^2=-\lambda_L^2$.
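This continuity is easy to verify numerically. The sketch below evaluates the effective charge $-\Phi(\lambda^2)$ of Eq.~(\ref{Phi}) across $\lambda^2=-\lambda_L^2$, for the illustrative values $\mu=1$, $C=0$, $a_s=0.2$ (assumptions, chosen only to make the Landau-pole region visible):

```python
import math

# Sketch of the effective charge -Phi(lambda^2) of Eq. (Phi), with the
# assumed values mu = 1, C = 0, a_s = 0.2.  The Theta-term removes the
# jump of the arctan, so -Phi stays finite and continuous where the
# one-loop coupling has its Landau pole.
a_s, C = 0.2, 0.0
l2_pole = math.exp(-1.0 / a_s - C)       # = -lambda_L^2 for mu = 1

def Phi(l2):
    den = 1.0 + a_s * (math.log(l2) + C)
    theta = 1.0 if l2 < l2_pole else 0.0
    return -math.atan(math.pi * a_s / den) / math.pi - theta

above = -Phi(l2_pole * (1 + 1e-7))
below = -Phi(l2_pole * (1 - 1e-7))
print(above, below)   # both ~ 1/2: no Landau pole in the effective charge

# At large lambda^2, -Phi approaches the one-loop running coupling
# (up to corrections of order a_run^3 from expanding the arctan).
big = 100.0
a_run = a_s / (1.0 + a_s * (math.log(big) + C))
print(-Phi(big), a_run)
```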
Eq.~(\ref{rBS}) presents the desired answer for the sum of
diagrams with any number of fermion bubbles in terms of an integral
over the gluon mass. This relation is
one of the main technical tools which
we suggest in this paper, and we discuss its structure
in detail.
\begin{figure}[t]
\vspace{-3cm}
\epsfysize=14cm
\epsfxsize=10cm
\centerline{\epsffile{coupfig.eps}}
\vspace*{-3cm}
\caption{\label{coupfig} One-loop running coupling in the V-scheme,
$C=0$, (broken line) and effective
coupling $-\Phi(\lambda^2)$ (solid line) as functions
of $\lambda^2/\Lambda_V^2$.}
\end{figure}
First, notice that Eq.~(\ref{rBS}) has a very intuitive
and simple interpretation: The
quantity $r'_0(\lambda^2)$ (with certain reservations) can be considered
as the contribution
to the integral from gluons of virtuality of order $\lambda^2$, and
the function $-\Phi(\lambda^2)$ can be understood as an effective charge.
At large scales, $-\Phi(\lambda^2)$ essentially coincides with
$\alpha_V(\lambda)$, the QCD coupling in the V-scheme
\cite{BRO83}, but, in contrast to it,
remains finite at small $\lambda^2$, see Fig.~\ref{coupfig}.
The absence of a Landau pole
in this effective coupling implies that the integral is a
well-defined number; the fact that we started from the expression in
Eq.~(\ref{borelintegral}), which is ill-defined
due to infrared renormalon singularities
(equivalently, that we attempted to sum a non-Borel
summable series), is isolated in the Landau pole contribution
$r_0(\lambda_L^2)$. Whenever infrared
renormalons are present, $r_0(\lambda^2)$
develops a cut at negative $\lambda^2$. The
real part of the above expression for the bubble sum coincides
with the principal value of the Borel integral and,
in particular, coincides with the Borel sum, when it exists. The
imaginary part provided by $r_0(\lambda_L^2)$ coincides with the
imaginary part of the Borel integral, when the contour is deformed
above (or below) the positive real axis.
We therefore adopt the
real part of Eq.~(\ref{rBS}) as a definition of $M_\infty(a_s)$, and
consider the imaginary part (divided by $\pi$)
as an estimate of the intrinsic
uncertainty from summing a (non-Borel summable) divergent series
without any additional nonperturbative input. Note that this
imaginary part is proportional to a power of $\lambda^2_L/Q^2\sim
\exp(-1/a_s(Q))$ and therefore is suppressed
by powers of $\Lambda_{\rm QCD}/Q$.
Second, notice that the result in Eq.~(\ref{rBS}) is manifestly
scale- and scheme-invariant, provided the running of the coupling
is consistently implemented with the one-loop $\beta$-function:
$a_s(\mu_1)=a_s(\mu_2)/(1+a_s(\mu_2)\ln(\mu_1^2/\mu_2^2))$.
In particular, $\alpha_s(Q^*)$
is formally independent of the finite renormalization $C$ for the
fermion loop.
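Both the group property of the one-loop running and this scale invariance can be checked in a few lines (toy numbers, assumed for illustration); the point is that $\Phi$ depends on $\mu$ only through the invariant combination $1/a_s(\mu)-\ln\mu^2$:

```python
import math

# One-loop running as in the text:
#   a_s(mu1) = a_s(mu2) / (1 + a_s(mu2) ln(mu1^2/mu2^2)).
# Check (i) the group law of the running and (ii) that the combination
# 1/a_s(mu) - ln(mu^2), which is all that enters Phi, is mu-independent.
def run(a, mu2_from, mu2_to):
    return a / (1.0 + a * math.log(mu2_to / mu2_from))

a1, mu2_1 = 0.2, 1.0           # assumed starting values
a2 = run(a1, mu2_1, 25.0)      # run to mu^2 = 25
a3 = run(a2, 25.0, 400.0)      # then on to mu^2 = 400
print(a3, run(a1, mu2_1, 400.0))   # same number: group property

inv1 = 1.0 / a1 - math.log(mu2_1)
inv2 = 1.0 / a2 - math.log(25.0)
print(inv1, inv2)              # mu-independent (= -ln Lambda^2)
```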
To be precise, in schemes
that do not introduce artificial flavour dependence, the coefficients of
the expansion that relates the couplings in two different schemes
themselves have an expansion of the type of Eq.~(\ref{flavourseries}).
Within the restriction to fermion loops with subsequent restoration
of $\beta_0$ one must again keep only the highest power of $N_f$
in these coefficients. If couplings are related in this way, the numerical
value for $\alpha_s(Q^*)=\alpha_s(Q) M_\infty(a_s(Q))$
is scheme-independent.
In practice, working with the full QCD $\beta$-function one introduces
a scheme dependence in higher orders in $1/N_f$, just as one
usually does in higher orders in $\alpha_s(Q)$.
The quality of the NNA as compared to exact coefficients
depends on the numerical magnitude of
terms that are formally suppressed by powers of $N_f$ (or $\beta_0$)
and therefore
depends on the scheme. Since empirically the success of NNA in low orders
is observed in the $\overline{\rm MS}$ scheme, we cannot
expect it to hold in schemes that differ from $\overline{\rm MS}$ by
terms that are formally of higher order but numerically large. Thus,
although the resummation accounts exactly
for the contribution of fermion loops
in any scheme, we believe that its phenomenological relevance
is tied to the $\overline{\rm MS}$ scheme, or others that
are ``reasonable'' in the above sense.
Third, we remark that Eq.~(\ref{rBS}) applies without
modification to quantities like inclusive decay rates, which can be
obtained starting from a suitable amplitude in Euclidean space and
taking the total imaginary part upon analytic continuation
to Minkowski space. The structure of the $\lambda^2$-integral remains
unaffected, and it is only the quantity $r'_0(\lambda^2)$ that should be
substituted by the corresponding decay rate calculated with finite
gluon mass (for heavy quark decays,
no explicit renormalization is needed when
the decay rates are expressed in terms of pole masses).
Similarly, Eq.~(\ref{rBS}) is applicable to various semi-inclusive
quantities, with the only formal restriction that a weight-function
chosen to specify the final state does not resolve quark-antiquark
pairs in fermion bubbles, that is, their phase space must be
completely integrated. It is not applicable to quantities
like hadronic event shapes.
The derivation of Eq.~(\ref{rBS}) is only slightly modified, when
$r_0(\lambda^2)$ represents a physical cross section. In this case,
$r_0(\lambda^2)$ is written as a sum of virtual and real gluon
emission, where real gluon emission occurs only when $\lambda^2$ is
below a certain threshold $\lambda_T(Q)^2$,
\small\begin{equation} \label{cross}
r_0(\lambda^2) = r_{virt}(\lambda^2) + r_{real}(\lambda^2)\,
\Theta(\lambda_T(Q)^2-\lambda^2)
\,.\end{equation}\normalsize\vspace*{-0.1ex}
\noindent When all kinematic invariants, represented by $Q$, are large
compared to $\Lambda_{\rm QCD}^2$, the same is true for $\lambda_T(Q)^2$.
It is simple to show that Eq.~(\ref{final1}) holds unmodified
for the Borel transform of a cross section with Eq.~(\ref{cross}).
When deriving Eq.~(\ref{rBS}) one splits the $\lambda^2$-integral
in Eq.~(\ref{final1})
at $\lambda_T(Q)^2$. The contribution from 0 to $\lambda_T(Q)^2$
can be treated in the same way as before (adding and
subtracting a contribution from a semicircle with radius
$\lambda_T(Q)^2$). For the contribution
from $\lambda_T(Q)^2$ to infinity, one may interchange the
order of the $\lambda^2$- and $u$-integrals directly, since the former
exists for all $u$. Then, the $u$-integral can be taken, since
$\lambda_T(Q)^2$ is large compared to
$\Lambda_{\rm QCD}^2$ (and therefore larger than $-\lambda_L^2$).
After the $u$-integral is taken, we combine both pieces. The final
result is identical to
Eq.~(\ref{rBS}).
We should also mention that the Borel transform of physical cross
sections generically contains a factor $\sin\pi u$, which seems
to obstruct the rotation of the $u$-integral to the imaginary
axis. Closer inspection of Eq.~(\ref{split}) shows that the two
separate $\lambda^2$-integrals there correspond in fact to the
Borel transform
of
\small\begin{equation}
e^{\pm i \pi u}\,\frac{B[R](u)}{\sin\pi u}\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent which is sufficient to guarantee convergence at the
imaginary $u$ relevant to the two terms in Eq.~(\ref{split}).
\subsection{Quantities involving renormalization}
So far the derivation was restricted to quantities where the lowest
order radiative correction $r_0$ is ultraviolet (and infrared) finite.
Then, in diagrams involving insertion of fermion loops into the gluon
line, counterterms are needed only for the fermion loop subdiagrams, but
no overall subtraction for the diagram itself
(after summation of all diagrams that contribute to $r_n$) is
required. We consider now the more general case, that the quantity
$R(Q)$ has an anomalous dimension. One can still use Eq.~(\ref{propagator}),
but the resulting Borel transform is singular at $u=0$. This singularity
is compensated by adding a function $S_R(u)$, which accounts for the
missing counterterms. The function $S_R(u)$ in
the $\overline{\rm MS}$ scheme is given in Eq.~(\ref{olres}).
When expressed in terms of the lowest order radiative correction with
non-zero gluon mass, the necessity of additional subtractions is reflected
in a divergence of the $\lambda^2$-integral at large $\lambda^2$ for $u=0$
in the second line of Eq.~(\ref{final1}), since $r_0(\lambda^2)$ grows
logarithmically at large $\lambda^2$ (we assume a logarithmic
ultraviolet divergence). The additional counterterms not associated
with fermion loop insertions amount to the subtraction
of the leading term in
the large-$\lambda^2$ expansion of $r_0(\lambda^2)$
and to the addition of some scheme specific
contributions. In the $\overline{\rm MS}$ scheme, Eqs.~(\ref{final1}) and (\ref{rBS})
are replaced by ($C=-5/3$)
\small\begin{eqnarray}\label{bofin}
B[R](u) &=& -\frac{\sin\pi u}{\pi u} \int\limits_0^\infty
d \lambda^2 \left(\frac{\lambda^2}{\mu^2} e^C
\right)^{-u} \left[r_0^\prime(\lambda^2)-\frac{r_\infty}{\lambda^2}
\,\Theta(\lambda^2-\mu^2 e^{-C})\right]\nonumber\\
&&\,+ \frac{1}{u}\left(\tilde{G}_0(u)-r_\infty\,\frac{\sin\pi u}
{\pi u}\right)\,,
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent and
\small\begin{eqnarray}\label{rBSren}
r_0 a_s M_\infty(a_s) &=&
\int_0^\infty d\lambda^2\, \Phi(\lambda^2)\,\left(r'_0(\lambda^2) -
\frac{r_\infty}{\lambda^2}\,\Theta(\lambda^2-\mu^2 e^{-C})\right)
+[r_0(\lambda_L^2)-r_0(0)]\nonumber\\
&&\hspace*{-2cm}
\,+ \int\limits_0^{\,a_s}\frac{d u}{u}\left(G_0(u)-r_\infty\right) + r_\infty
\left[\frac{\arctan(\pi a_s)}{\pi a_s}+\frac{1}{2} \ln\left(
1+\pi^2 a_s^2\right) - 1\right]\,.
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent Here, as always, $a_s=a_s(\mu)$. The derivation of this result
together with the definition of all new quantities that appear in the
above equation is given in Appendix A. In particular, $G_0(u)$
is essentially the anomalous dimension of $R$
(to leading order in $1/N_f$).
\subsection{Quarks with finite masses}
In practical applications it can be important to trace the number of
active fermion flavours, which may effectively decrease in high orders
because important integration regions shift towards decreasing momenta
if the series is dominated by infrared renormalons in large or
intermediate orders. Thus, it is worthwhile to generalize the above
technique to include
massive quarks in fermion loops. For definiteness, we shall consider
here the case of one massive, and $N_f-1$ exactly massless flavours.
The generalization to several massive flavours is then obvious.
$\beta_0$ is
to be taken with $N_f$ flavours, including the massive one, since we
have in mind a situation where the quark mass is non-zero but smaller
than the external momenta $Q$. We
study the decoupling
of quarks with finite masses in higher orders on a particular
example in Sect.~4.
The vacuum polarization $\Pi(k^2)$ in Eq.~(\ref{propagator}) includes
summation over flavours. With one massive quark of mass $m$,
it is modified
to
\small\begin{eqnarray}
\Pi(k^2) &=& a_s\Big[ \ln(-k^2/\mu^2)+C -\Delta(k^2,m^2)\Big]\,,
\nonumber\\
\Delta(k^2,m^2)&=& \frac{1}{6\pi\beta_0}\int_0^\infty
\frac{ds}{s-k^2}\big[\rho(s,m^2)-1\big]\,,
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent where
\small\begin{equation}
\rho(s,m^2) = \left(1+\frac{2m^2}{s}\right)\sqrt{1-\frac{4m^2}{s}}
\,\Theta(s-4m^2)\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Performing the integral gives
\small\begin{eqnarray}
\Delta(k^2,m^2)&=& \frac{1}{6\pi\beta_0}\left\{
\ln\left(\frac{-k^2}{m^2}\right)
+\frac{4m^2}{k^2}
-\sqrt{1-\frac{4m^2}{k^2}}
\left(1+\frac{2m^2}{k^2}\right)
\right.\nonumber\\&&{}\left.\times
\left[\ln\left(\frac{-k^2}{m^2}\right)
+\ln\left[\frac{1}{2}\left(1-\frac{2m^2}{k^2}+\sqrt{1-\frac{4m^2}{k^2}}
\right)\right]\right]\right\}\,.
\end{eqnarray}\normalsize\vspace*{-0.1ex}
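As a consistency check (with the invented kinematics $m^2=1$, $Q^2=10$; the overall factor $1/(6\pi\beta_0)$ divided out on both sides), the closed form above can be compared numerically with the dispersive definition of $\Delta(k^2,m^2)$ at the spacelike (Euclidean) point $k^2=-Q^2$, where all logarithms are real:

```python
import math

# Cross-check of the closed-form Delta(k^2, m^2) against its dispersive
# definition at the spacelike point k^2 = -Q^2; assumed values m^2 = 1,
# Q^2 = 10, with the common factor 1/(6 pi beta_0) stripped off.
m2, Q2 = 1.0, 10.0

def rho(s):
    if s <= 4.0 * m2:
        return 0.0
    return (1.0 + 2.0 * m2 / s) * math.sqrt(1.0 - 4.0 * m2 / s)

# Dispersive side: int_0^inf ds (rho - 1)/(s + Q^2).  Below threshold
# rho = 0 and the integral is elementary; above threshold substitute
# s = 4 m^2 cosh^2(theta) to smooth out the square-root behaviour.
disp = -math.log((4.0 * m2 + Q2) / Q2)
lo, hi, n = 0.0, 25.0, 40000
h = (hi - lo) / n
acc = 0.0
for i in range(n + 1):
    th = lo + i * h
    s = 4.0 * m2 * math.cosh(th)**2
    jac = 4.0 * m2 * math.sinh(2.0 * th)          # ds/dtheta
    w = 1 if i in (0, n) else (4 if i % 2 else 2)
    acc += w * (rho(s) - 1.0) / (s + Q2) * jac
disp += acc * h / 3.0

# Closed form, continued to k^2 = -Q^2 (all logarithms real)
root = math.sqrt(1.0 + 4.0 * m2 / Q2)
closed = (math.log(Q2 / m2) - 4.0 * m2 / Q2
          - root * (1.0 - 2.0 * m2 / Q2)
          * (math.log(Q2 / m2)
             + math.log(0.5 * (1.0 + 2.0 * m2 / Q2 + root))))
print(disp, closed)
```

In the limit $m^2\to 0$ the closed form vanishes, as it must, since $\Delta$ is defined as the mass correction to the massless polarization.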
\noindent To derive the expression for the sum of fermion loop insertions,
we follow the method of \cite{BB94b,SV94} and substitute the
effective propagator $(1+\Pi(k^2))^{-1}$
in Eq.~(\ref{bs}) by the dispersion relation
\small\begin{equation}\label{disprel}
\frac{1}{1+\Pi(k^2)} = \frac{1}{\pi}\int_0^\infty d\lambda^2\,
\frac{1}{k^2-\lambda^2} \frac{{\rm Im}\,
\Pi(\lambda^2)}{|1+\Pi(\lambda^2)|^2}
+\frac{1}{k^2-\lambda_L^2}
\frac{1}{\Pi'(\lambda_L^2)}
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where $\Pi'(\lambda^2) \equiv (d/d\lambda^2)\,\Pi(\lambda^2)$ and
$\lambda_L^2<0$ is the solution of
\small\begin{equation}
1+\Pi(\lambda^2_L)=0
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent provided it exists. If no solution exists, the second term in
Eq.~(\ref{disprel}) is absent. We now use
\small\begin{equation}
\frac{1}{\pi}\mbox{\rm Im}\, \Pi(\lambda^2) =
-a_s\left[
1+\frac{1}{6\pi\beta_0}(\rho(\lambda^2,m^2)-1)\right]
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent and write
\small\begin{equation}
\frac{1}{k^2-\lambda^2} =\frac{1}{\lambda^2}\left(
\frac{k^2}{k^2-\lambda^2} -1\right)\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Interchanging the order of integrations in $k$ and $\lambda^2$,
we arrive at
\small\begin{equation}\label{rBSmass}
r_0 M_\infty(a_s) = -a_s\int_{-\infty}^\infty\frac{d\lambda^2}{\lambda^2}
\frac{r_0(\lambda^2)-r_0(0)}{|1+\Pi(\lambda^2)|^2}
\left[1+\frac{1}{6\pi\beta_0}(\rho(\lambda^2,m^2)-1)\right]
+\frac{r_0(\lambda^2_L)-r_0(0)}{\lambda_L^2\Pi'(\lambda_L^2)}
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent which coincides with Eq.~(\ref{firstversion}) in the limit $m\to 0$.\\
A representation of the Borel transform which allows the calculation of
fixed-order perturbative coefficients is obtained in a similar way.
Starting from Eq.~(\ref{bs}), one only needs
\small\begin{equation}
\mbox{Im}\, B\left[\frac{\alpha_s}{1+\Pi(\lambda^2)}\right](u) =
\mbox{Im}\,e^{-u \Pi(\lambda^2)/a_s} = e^{-u \mbox{\scriptsize\rm Re}
\Pi(\lambda^2)/a_s}
\sin\left[-u \mbox{Im}\Pi(\lambda^2)/a_s\right] \,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Then we proceed as in the massless case and get
\small\begin{eqnarray}
B[R](u) &=&
-\frac{1}{\pi }\int_0^\infty \frac{d\lambda^2}{\lambda^2}\,
\left[r_0(\lambda^2)-r_0(0)\right]
\sin\Big\{ u\pi\Big[1+ \frac{1}{6\pi\beta_0}(\rho(\lambda^2,m^2)-1)\Big]\Big\}
\nonumber\\&&{}\times
\left(\frac{\lambda^2}{\mu^2}e^C\right)^{-u}
\exp\left\{u \mbox{Re}\Delta(\lambda^2,m^2)\right\}\,.
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent For $m^2=0$ we
recover the old result in Eq.~(\ref{final1}). To avoid
possible bad behaviour at $\lambda^2\to\infty$
and to separate the mass dependence, we add and subtract the
expression for $m^2=0$. Defining
\small\begin{equation}
T(u,\lambda^2,m^2) =
\sin\Big\{ u\pi\Big[1+ \frac{1}{6\pi\beta_0}
(\rho(\lambda^2,m^2)-1)\Big]\Big\}
\left(\frac{\lambda^2}{\mu^2}e^C\right)^{-u}
\exp\left\{u \mbox{Re}\Delta(\lambda^2,m^2)\right\}\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent we obtain, finally
\small\begin{eqnarray}\label{final2}
B[R](u) &=&
-\frac{\sin(\pi u)}{\pi u }\int_0^\infty d\lambda^2\,
\left(\frac{\lambda^2}{\mu^2}e^C\right)^{-u}
r'_0(\lambda^2)
\nonumber\\&&{}\hspace*{-1.5cm}
-\frac{1}{\pi }\int_0^\infty \frac{d\lambda^2}{\lambda^2}\,
\left[r_0(\lambda^2)-r_0(0)\right]
\Big[T(u,\lambda^2,m^2) -T(u,\lambda^2,0) \Big]\,.
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent The second line gives the correction due to finite quark masses
inside loops. We derived this result for quantities that do not
need renormalization.
However, in the $\overline{\rm MS}$ scheme subtractions are mass-independent.
Therefore, only the first term in Eq.~(\ref{final2}) is affected by
subtractions, and in precisely the same way as for $m^2=0$
(cf. Sect.~2.2).
We should point out that Eq.~(\ref{rBSmass}) for the sum of fermion
loops has been derived through a dispersion relation. For massless
quarks, comparison of the derivation of Eq.~(\ref{rBS}) with the one
in \cite{BB94b} shows that the result obtained by a dispersion relation
coincides with the principal value
prescription of the Borel integral. We do not offer such an equivalence for
massive quarks and there is reason to doubt that it is true in minimal
subtraction schemes for the fermion loops. To illustrate this, suppose
there were only a single massive particle inside loops, which produces
a negative contribution to the $\beta$-function. For any finite value
of the mass of this particle, the vacuum polarization behaves as $k^2/m^2$
at very small virtuality $k$. Therefore the factorial growth of coefficients
from the infrared region of integration is eliminated at very large
order. There are no infrared renormalon poles and therefore no ambiguity
in the Borel transform (though it may be sharply peaked at those values,
where singularities appear as $m$ approaches zero). On the other
hand $1/(1+\Pi(k^2))$ does have a pole in the euclidian, when $m$ is
smaller than a critical value and $\Pi(k^2)$ is defined by minimal
subtraction. This leads to an ambiguity in the second
term in Eq.~(\ref{rBSmass}). The critical value is of order $\Lambda_{\rm
QCD}$ and the discrepancy will be noticeable only for such small
quark masses. However, for quark masses of order $\Lambda_{\rm QCD}$,
the quark mass must be considered as an infrared regulator and
one can no longer straightforwardly identify an infrared parameter,
such as the gluon condensate, with non-analytic terms in a finite
gluon mass. As is well-known, the gluon condensate must be redefined
to absorb the non-analyticities in light quark masses as well.
This is indicated by the highly singular behaviour of the Borel
transform as the quark mass goes to zero.
In our practical application the mentioned
discrepancy is numerically irrelevant,
and we
do not pursue this point further. Notice that such a difficulty does
not appear in finite order perturbative coefficients, derived from
Eq.~(\ref{final2}), and affects only the Landau pole term, in which
we are mainly interested as a measure of intrinsic uncertainty.
\subsection{Renormalon ambiguities and extended Bloch-Nordsieck
cancellations}
We want to emphasize the close relation of renormalon singularities
to non-analytic terms in the expansion of lowest-order radiative
corrections in powers of the gluon mass \cite{BBZ94}.
A formal relation between singularities in the Borel plane and
non-analytic terms in $\lambda^2$ is established by Eq.~(\ref{relation}).
Each non-analytic term proportional to $(\sqrt{\lambda^2})^{2n+1}$
in the expansion
of $r_0(\lambda^2)$ at small $\lambda^2$ is in one-to-one correspondence
to a single-pole singularity of $B[R](u)$ at positive half-integers
$u=n+1/2$. Each non-analytic term proportional to
$\lambda^{2n} \ln\lambda^2$
corresponds to a single pole at positive integer $u=n$.
Likewise, non-analytic terms in the expansion at large $\lambda^2$ of
type
$\sqrt{\lambda^2}^{(-2n-1)}$ or $\lambda^{-2n} \ln\lambda^2$
correspond to single-pole singularities
of the Borel transform at {\em negative}
$u=-n-1/2$ and $u=-n$, respectively. For double or higher poles, higher
powers of $\ln\lambda^2$ appear.
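The mechanism behind this correspondence can be sketched by inserting a single small-$\lambda^2$ term into a representation of the type of Eq.~(\ref{final2}) and keeping only the region $\lambda^2<Q^2$ (normalizations ignored):

```latex
B[R](u) \;\supset\;
-\frac{\sin(\pi u)}{\pi u}\int_0^{Q^2}\! d\lambda^2
\left(\frac{\lambda^2}{\mu^2}\right)^{-u}
\frac{d}{d\lambda^2}\left(\frac{\lambda^2}{Q^2}\right)^{n+1/2}
\;=\;
\frac{\sin(\pi u)}{\pi u}\,
\frac{n+1/2}{u-n-1/2}
\left(\frac{Q^2}{\mu^2}\right)^{-u} .
```

Since $\sin(\pi u)$ does not vanish at half-integers, a single pole at $u=n+1/2$ survives. For an analytic term $(\lambda^2)^n$ the would-be pole at $u=n$ is cancelled by the zero of $\sin(\pi u)$, while an additional factor $\ln\lambda^2$ raises the Mellin pole to second order, so that $\lambda^{2n}\ln\lambda^2$ leaves a single pole at $u=n$.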
We recall that through the presence of singularities for real positive
values of the Borel parameter the perturbation series signals its
deficiency: Explicit non-perturbative corrections must be added to
make the full answer unambiguous. In fact, the infrared
renormalon problem is
just one manifestation of a generally accepted wisdom: Perturbative
calculations are only reliable if the
essential integration regions include
momenta much larger than $\Lambda_{\rm QCD}$. The above relation between
infrared renormalons and non-analytic terms in
the expansion in powers of the
infrared regulator like the gluon mass is thus natural and expected.
A comment is necessary, however, to explain why only {\em non-analytic}
terms in the expansion at small $\lambda^2$ are related to the
infrared behaviour, and simple power-like terms, $\lambda^{2n}$, are not.
A small gluon mass $\lambda^2\sim \Lambda_{\rm QCD}^2$ not only eliminates
contributions of small momenta $k^2 \sim \Lambda_{\rm QCD}^2$, but also
modifies the gluon propagator at virtualities of order $k^2\sim Q^2$.
In this region of momenta $1/(k^2-\lambda^2)$ can be expanded and produces
(infrared insensitive) corrections of the form $\lambda^2/k^2\sim
\lambda^2/Q^2$. In perturbative calculations these terms are
unimportant, since the corrections
are suppressed by powers of $Q^2$.
Hence the common practice of using a finite gluon
mass as an IR regulator in calculations of various QCD observables.
The famous Bloch-Nordsieck cancellations guarantee that
terms proportional to $\ln \lambda^2$,
which appear at intermediate steps of the calculation, cancel
in final answers for inclusive quantities.
Calculations aiming at {\em power-like} accuracy must trace
accurately the fate of power-like terms in the gluon mass.
Since analytic terms $\lambda^{2n}$ come entirely
from the expansion of the gluon propagator at large
virtualities of order
$Q^2$, they disappear when the regulator is removed. Only non-analytic terms
are related to the infrared behaviour, and indicate the failure
of perturbation theory to account for this region properly.
Thus, tracing non-analytic terms in the expansion of leading-order
radiative correction at small gluon masses, one can trace the
sensitivity of particular quantities to the IR region, and, in particular,
judge upon the existence of non-perturbative
corrections suppressed by particular powers of $Q^2$.
Stated otherwise, the absence of particular power-suppressed corrections
in physical quantities can be understood as an extension of the
Bloch-Nordsieck cancellations. For example, the fact that in
Wilson's operator product expansion for the correlation function
$\Pi(Q^2)$ the $1/Q^2$-corrections are
absent, corresponds in this language to the cancellation of
corrections of order $\lambda^2\ln \lambda^2$ in the correlation
function.
Note once more that analytic terms
proportional to $\lambda^2$ do not necessarily
cancel, and in fact they are present in a quantity closely related
to $\Pi(Q^2)$, the $\tau$-lepton hadronic width, see below.
In turn, the existence of a gluon condensate contribution,
$\langle G^2\rangle/Q^4$, implies that contributions of order
$\lambda^4 \ln \lambda^2$ do not cancel.
\mysection{Hadronic $\tau$ decays}{Hadronic $\tau$ decays}
In this Section we apply the summation of one-loop running coupling
effects developed above to observables
related by analyticity to the correlation function of vector
(axial-vector) currents
\small\begin{equation}\label{polaroper}
\Pi_{\mu\nu}(q) = (q_\mu q_\nu-q^2 g_{\mu\nu})\,\Pi(Q^2) =
i\int\mbox{d}^4 x\,e^{i q x}\,\langle 0|T\{j_\mu^\dagger(x) j_\nu(0)\}| 0
\rangle\,, \qquad Q^2=-q^2\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Quantities of prime physical interest related to
$\Pi(Q^2)$ are the cross section of
$e^+e^-$ annihilation
\small\begin{equation}
R_{e^+e^-}(s)\equiv \frac{\Gamma(e^+e^-\to {\rm hadrons})}
{\Gamma(e^+e^-\to \mu^+\mu^-)} =12\pi
\mbox{Im}\,\Pi_V(s)
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent (neglecting $Z$-boson exchange) and the $\tau$-lepton total
hadronic width
\small\begin{eqnarray}\label{rep1}
R_{\tau} &\equiv& \frac{\Gamma(\tau^-\to \nu_\tau+{\rm hadrons})}
{\Gamma(\tau^-\to \nu_\tau e^-\bar\nu_e)}
\\
&=&\nonumber 12\pi\int\limits_0^{m_\tau^2} \frac{d s}{m_\tau^2} \left(
1-\frac{s}{m_\tau^2}\right)^2\left[\left(1+2\frac{s}{m_\tau^2}
\right) \mbox{Im}\,\Pi^{(1)}(s) + \mbox{Im}\,\Pi^{(0)}(s)
\right]\,,
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent where $s=q^2$ and we introduced the decomposition
\small\begin{equation}
\Pi^{\mu\nu}_{V/A}(q) = \left(q_\mu q_\nu-g_{\mu\nu} q^2\right)
\Pi^{(1)}_{V/A}(q^2) +q_\mu q_\nu \Pi^{(0)}_{V/A}(q^2)
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent for the vector and axial-vector correlation function
and defined
$\Pi^{(i)}(s)=\Pi^{(i)}_V(s)+\Pi^{(i)}_A(s)$. For the purpose of
our discussion we neglect the strange quark mass and omit
the overall CKM factor $|V_{ud}|^2+|V_{us}|^2\approx 1$.
The {\em exact} (nonperturbative) correlation functions should be
analytic in the complex $s$-plane cut along the positive axis.
Exploiting this property, we may transform Eq.~(\ref{rep1}) into
\cite{BRA92}
\small\begin{equation}\label{rep2}
R_\tau= 6\pi i \int\limits_{|s|=m_\tau^2} \frac{d s}{m_\tau^2} \left(
1-\frac{s}{m_\tau^2}\right)^2\left[\left(1+2\frac{s}{m_\tau^2}
\right) \Pi^{(1)}(s) + \Pi^{(0)}(s)
\right]\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent The same analytic property holds to any {\em finite} order in
perturbation theory in $\alpha_s(\mu)$
(although the discontinuity is arbitrarily
wrong at small $s$). Eqs.~(\ref{rep1}) and (\ref{rep2}) are
equivalent if the correlation functions are substituted either
by the exact values or finite order perturbative expansions. In
addition, in perturbation theory, the vector and axial-vector
contributions coincide. The equivalence of
Eqs.~(\ref{rep1}) and (\ref{rep2}) does not hold in
renormalization group improved perturbation theory or after fermion loop
summation. This will be discussed extensively below.
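For finite-order (real-analytic) approximations to the correlation functions, the equivalence of Eqs.~(\ref{rep1}) and (\ref{rep2}) can be checked explicitly on a toy example. The sketch below (plain Python) uses an illustrative logarithmic correlator with $\Pi^{(0)}=0$, $m_\tau^2$ set to 1, an arbitrary normalization, and a counterclockwise circle with ${\rm Im}\,\Pi$ evaluated at $s+i0$; none of these choices is the physical QCD input.

```python
import cmath
import math

M2 = 1.0  # m_tau^2 set to 1 for illustration

def weight(s):
    # kinematic weight (1 - s)^2 (1 + 2 s) of Eqs. (rep1)/(rep2)
    return (1 - s) ** 2 * (1 + 2 * s)

def Pi(s):
    # toy real-analytic correlator with a cut on the positive real axis;
    # the logarithm and its normalization are illustrative only
    return -cmath.log(-s / M2) / (4 * math.pi ** 2)

# Eq. (rep2): 6 pi i times the counterclockwise contour integral over |s| = m_tau^2
N = 50_000
acc = 0j
for j in range(N):
    theta = (j + 0.5) * 2 * math.pi / N   # midpoints avoid the cut at s = m_tau^2
    s = M2 * cmath.exp(1j * theta)
    acc += 1j * s * weight(s) * Pi(s)     # ds = i s dtheta
R_contour = 6 * math.pi * 1j * acc * (2 * math.pi / N) / M2

# Eq. (rep1): Im Pi(s + i0) = 1/(4 pi) on the cut, and int_0^1 dx weight(x) = 1/2
R_cut = 12 * math.pi * (1 / (4 * math.pi)) * (1 / 2)

print(R_contour.real, R_cut)   # both equal 3/2 up to quadrature error
```

The agreement rests only on $\Pi$ being analytic in the cut plane; replacing $\Pi$ by any truncated perturbative expansion leaves the two representations equal, as stated in the text.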
In what follows we concentrate on $\tau$ decays, which, despite
the low energy scale involved, are considered to provide
one of the most reliable determinations of the QCD coupling
\cite{Revs}.
The state-of-the-art perturbative calculations yield \cite{GOR91}
in the $\overline{\rm MS}$ scheme:
\begin{eqnarray}\label{tauexact}
R_\tau &=&
3 \Bigg[1+
\left(\frac{\alpha_s(m_\tau^2)}{\pi}\right)
+\left(\frac{\alpha_s(m_\tau^2)}{\pi}\right)^2(6.3399-0.3792 N_f)
\nonumber\\&&{}
+\left(\frac{\alpha_s(m_\tau^2)}{\pi}\right)^3
(48.5832-7.8795N_f+0.1579N_f^2)\Bigg]
+O(\alpha_s^4)\,.
\end{eqnarray}
\noindent The leading non-perturbative corrections due to contributions of local
operators of dimension 6 have been estimated
\cite{BRA92,Revs} and turn out to be small, below 1\%
compared to the tree-level unity in brackets above. Thus, the principal
uncertainty of the determination of $\alpha_s$ from
the $\tau$ hadronic width
comes from unknown higher-order terms in the perturbative expansion.
The purpose of this Section is twofold:
We analyze summation of $(-\beta_0\alpha_s)^n$
perturbative corrections and conclude -- with certain {\em caveats} --
that the cumulative effect of higher order corrections beyond
order $\alpha_s^3$ is somewhat
larger than the exact order $\alpha_s^3$-correction. Resummation
of vacuum polarization reduces the value of $\alpha_s(m_\tau)$
extracted from $\tau$ decays by about 10\%.
Second, we endeavour to
clarify and speculate on a conceptual point:
Several authors \cite{ALT92,DOM94,Revs} considered
(with different conclusions) the possibility that
power corrections proportional to
$1/m_\tau^2$ may creep into $R_\tau$ from
various sources (summation of large orders in perturbation
theory -- infrared and ultraviolet renormalons~--, freezing of a
physical
coupling in the infrared, violations of duality), whose absence in
the approach reviewed above is crucial to ascertain its power
for the determination of $\alpha_s$. We elaborate on two aspects,
which are sometimes omitted from the discussion: The necessity to
define a coupling parameter to power-like (in $1/m_\tau$) accuracy
and the distinction between hypothetical $1/m_\tau^2$-corrections
to the OPE of correlation functions in the euclidian and to the
$\tau$ decay width itself.
\subsection{Higher orders in $(-\beta_0)\,\alpha_s$}
To start with, let us demonstrate that Naive
Nonabelianization indeed provides
an excellent approximation in low orders. The NNA
approximation is obtained from Eq.~(\ref{tauexact})
keeping the term with the
leading power of $N_f$ only, and restoring the full $\beta_0$ by the
substitution $N_f\to N_f-33/2$. The result is
\small\begin{eqnarray}\label{tauNNA}
R_\tau^{NNA} &=&
3 \Bigg[1+
\left(\frac{\alpha_s(m_\tau^2)}{\pi}\right)
+\left(\frac{\alpha_s(m_\tau^2)}{\pi}\right)^2(6.2568-0.3792 N_f)
\nonumber\\&&{}
+\left(\frac{\alpha_s(m_\tau^2)}{\pi}\right)^3
(42.9883-5.2107N_f+0.1579N_f^2)\Bigg]
+O(\alpha_s^4)\,.
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent The accuracy is impressive: For the practical case $N_f=3$ NNA
predicts a coefficient $28.7773$ for the $(\alpha_s/\pi)^3$
correction, to be compared to the exact coefficient $26.3658$. We
should mention, however, that this coincidence is also slightly
misleading, since a substantial part of the exact coefficient is
from a combination of $\beta$-function coefficients and lower order
coefficients of the correlation function $\Pi(s)$ and results from
contour integration in Eq.~(\ref{rep2}).
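The substitution is easily checked by arithmetic; the sketch below (plain Python, with the coefficients copied from Eqs.~(\ref{tauexact}) and (\ref{tauNNA})) reproduces the two numbers quoted above.

```python
# NNA estimate: keep only the highest power of N_f at each order of
# Eq. (tauexact) and substitute N_f -> N_f - 33/2.
Nf = 3

c2_NNA = -0.3792 * (Nf - 33 / 2)                      # order alpha_s^2
c3_NNA = 0.1579 * (Nf - 33 / 2) ** 2                  # order alpha_s^3

c2_exact = 6.3399 - 0.3792 * Nf                       # Eq. (tauexact)
c3_exact = 48.5832 - 7.8795 * Nf + 0.1579 * Nf ** 2

print(round(c3_NNA, 4), round(c3_exact, 4))           # 28.7773 26.3658
```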
To estimate higher orders, we recall that for the
polarization operator, Eq.~(\ref{polaroper}), an analytic expression is
available for the Borel transform of the
sum of diagrams with fermion loop insertions
\cite{BEN93,BRO93,BENTH,BB94}. To get rid of
an (irrelevant) overall subtraction, it is convenient to take one
derivative with respect to $Q^2$ and to consider
\small\begin{equation}\label{defD}
D(Q^2)\equiv Q^2 \frac{d\Pi(Q^2)}{d Q^2}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Then (the simple representation quoted here is adapted
from \cite{BRO93}, see \cite{BENTH,BB94})
\small\begin{equation}\label{borelpolaroper}
B\big[D(Q^2)\big](u)\,=\,\frac{8}{3\pi^3}
\left(\frac{Q^2}{\mu^2} e^C\right)^{-u} \frac{u}{1-(1-u)^2} \sum_{k=2}^
\infty \frac{(-1)^k k}{(k^2-(1-u)^2)^2}
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent and \cite{BENTH}
\small\begin{equation}\label{taubtexact}
B\big[R_\tau\big](u)\,=\, 12 \pi^2 B\big[D(Q^2)\big](u)\sin(\pi u)
\left[\frac{1}{\pi u}+\frac{2}{\pi(1-u)}-\frac{2}{\pi(3-u)}+
\frac{1}{\pi(4-u)}\right]
\end{equation}\normalsize\vspace*{-0.1ex}
\begin{table}[t]
\addtolength{\arraycolsep}{0.2cm}
$$
\begin{array}{|l||c|c|c||c|c|}
\hline
n & d_n^{\tau,\overline{\rm MS}} &d_n^{\tau,V} & M_n^{\tau,\overline{\rm MS}} & d_n^{D,\overline{\rm MS}} &
M_n^{\tau,LDP} \\ \hline\hline
0 & 1 & 1 & 1 & 1 & 1.329 \\
1 & 2.2751 & 0.6084 &1.521 & 0.6918 & 1.578 \\
2 & 5.6848 & 0.8788 &1.819 & 3.1035 & 1.855 \\
3 & 13.754 & -0.3395 &1.984 & 2.1800 & 1.898 \\
4 & 35.147 & 3.7796 &2.081 & 30.740 & 2.027 \\
5 & 84.407 & -14.680 &2.134 & -34.534 & 2.000 \\
6 & 248.83 & 99.483 &2.170 & 759.74 & 2.094 \\
7 & 525.38 & -664.00 &2.187 & -3691.4 & 2.041 \\
8 & 3036.0 & 5400.06 &2.210 & 42251 & 2.042 \\ \hline
\end{array}
$$
\caption{
Coefficients for $n$ fermion loop insertions into one-loop
radiative corrections for the $\tau$-lepton hadronic width
and partial sums of the perturbation theory for
$\alpha_s(m_\tau)=0.32$ [$a_s(m_\tau)=0.229$]. See text.
}
\label{tab1}
\end{table}
\noindent Taking derivatives of $B\big[R_\tau\big](u)$ (see
Eq.~(\ref{genfunction}))
it is easy to evaluate
fixed-order perturbative coefficients. In Table~\ref{tab1},
second to fourth column,
we give values for the
coefficients $d_n$ defined in Eq.~(\ref{defd_n}) for $n\le8$
in the $\overline{\rm MS}$- and V-scheme\footnote{
Note that both series are far from the expected asymptotic behaviour.
There are two reasons for this: First, for the current
correlation function, the formal large-$n$ behaviour is dominated
by ultraviolet renormalons, whose suppression in the $\overline{\rm MS}$ scheme
with $C=-5/3$ is significant in low orders. Therefore, one does not
expect to see sign-alternating behaviour in low orders. Second,
the contour integration rearranges coefficients and
postpones the onset of the asymptotic regime, see
below.} ($C=-5/3$ and $C=0$),
and partial sums
of the perturbative series $M_n(a_s(m_\tau))$ in the
$\overline{\rm MS}$-scheme, defined in Eq.~(\ref{defM}), for
$\alpha_s^{\overline{\rm MS}}(m_\tau) = 0.32$ \cite{Revs} and
taking $N_f=3$ massless flavours.
\phantom{\ref{taufig}}
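The low-order entries of Table~\ref{tab1} can be generated directly from Eqs.~(\ref{borelpolaroper}) and (\ref{taubtexact}). The sketch below (plain Python) truncates the Taylor expansion at order $u^2$; it assumes $\mu=Q=m_\tau$, $C=-5/3$, and the generating-function convention of Eq.~(\ref{genfunction}) that the Borel transform, normalized to $d_0=1$, equals $\sum_n d_n u^n/n!$.

```python
import math

# Low-order coefficients from the Borel transforms, Eqs. (borelpolaroper)
# and (taubtexact), truncated at order u^2.  Assumptions: mu = Q = m_tau,
# C = -5/3, normalized generating-function convention with d_0 = 1.

def mul(p, q):
    # product of two series truncated at order u^2
    return [p[0] * q[0],
            p[0] * q[1] + p[1] * q[0],
            p[0] * q[2] + p[1] * q[1] + p[2] * q[0]]

# s(u) = sum_{k>=2} (-1)^k k / (k^2 - (1-u)^2)^2; with a = k^2 - 1 each term is
# k (a + 2u - u^2)^(-2) = k a^(-2) [1 - 4u/a + (2/a + 12/a^2) u^2 + ...]
s = [0.0, 0.0, 0.0]
for k in range(2, 20000):
    sign = -1 if k % 2 else 1
    a = float(k * k - 1)
    s[0] += sign * k / a ** 2
    s[1] += sign * k * (-4.0 / a ** 3)
    s[2] += sign * k * (2.0 / a ** 3 + 12.0 / a ** 4)

inv_2mu = [0.5, 0.25, 0.125]            # u/(1-(1-u)^2) = 1/(2-u)
exp53 = [1.0, 5.0 / 3.0, 25.0 / 18.0]   # (e^C)^(-u) = e^(5u/3) for C = -5/3
BD = mul(exp53, mul(inv_2mu, s))

d1_D, d2_D = BD[1] / BD[0], 2.0 * BD[2] / BD[0]

# extra factor of Eq. (taubtexact):
# sin(pi u) [1/(pi u) + 2/(pi(1-u)) - 2/(pi(3-u)) + 1/(pi(4-u))]
sinc = [1.0, 0.0, -math.pi ** 2 / 6.0]                 # sin(pi u)/(pi u)
bracket = [1.0, 2 - 2 / 3 + 1 / 4, 2 - 2 / 9 + 1 / 16]
Btau = mul(BD, mul(sinc, bracket))

d1_tau, d2_tau = Btau[1] / Btau[0], 2.0 * Btau[2] / Btau[0]
print(round(d1_D, 4), round(d2_D, 4))      # cf. Table 1: 0.6918 and 3.1035
print(round(d1_tau, 4), round(d2_tau, 4))  # cf. Table 1: 2.2751 and 5.6848
```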
Truncating the expansion of $R_\tau$ at its minimal term,
one deduces from the fourth column of
Table~\ref{tab1} that the effect of summation of
$(-\beta_0\alpha_s(m_\tau))^n$-corrections increases the leading order
correction by a factor $M_{7}^{\tau,\overline{\rm MS}}(0.229)\simeq 2.19$, which is to be
compared with 1.803, obtained from the exact coefficients up to
order $\alpha_s^3$. The value of $M_{7}^{\tau,\overline{\rm MS}}(a_s)$ from the truncated
series is in good
agreement with the result of resummation\footnote{We have obtained
this value in three different ways: (1) Starting from Eq.~(\ref{rBS})
with the leading order corrections to hadronic $\tau$ decays with finite
gluon mass collected in Appendix B; (2) Taking the principal value
Borel integral of Eq.~(\ref{taubtexact}) and (3) Computing the
principal value integral of Eq.~(\ref{borelpolaroper}) along a
circle of radius $m_\tau^2$ in the complex $s$-plane and then
taking the contour integral along the circle according to
Eq.~(\ref{rep2}). The intrinsic uncertainty due to IR renormalons
is small, $\pm 0.02$, since the leading pole at $u=2$ disappears
in $R_\tau$ in the large-$\beta_0$ limit.},
\small\begin{equation}\label{sumtau} M_\infty^{\tau,\overline{\rm MS}}(0.229) = 2.233\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Taken at face value, higher order vacuum polarization effects lead
to a significant increase of radiative corrections beyond the
$\alpha_s^3$-approximation. The cumulative effect amounts to somewhat
more than the $\alpha_s^3$-correction itself.
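The fourth column of Table~\ref{tab1} follows from the second by simple arithmetic. The sketch below (plain Python; the convention $M_N=\sum_{n\le N}d_n\,a_s^n$ of Eq.~(\ref{defM}) is assumed, and the $d_n$ are copied from the table) reproduces it and locates the minimal term of the series.

```python
# Partial sums M_N = sum_{n<=N} d_n a_s^n (convention of Eq. (defM) assumed),
# with the d_n^{tau,MSbar} column of Table 1 and a_s(m_tau) = 0.229.
d = [1.0, 2.2751, 5.6848, 13.754, 35.147, 84.407, 248.83, 525.38, 3036.0]
a = 0.229

terms = [dn * a ** n for n, dn in enumerate(d)]
M, tot = [], 0.0
for t in terms:
    tot += t
    M.append(round(tot, 3))
print(M)   # [1.0, 1.521, 1.819, 1.984, 2.081, 2.134, 2.17, 2.187, 2.21]

# the minimal term of the series: truncation there keeps M_7 ~ 2.19
n_min = min(range(1, len(terms)), key=lambda n: terms[n])
print(n_min)   # 7
```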
Before we discuss its impact on the determination of $\alpha_s$, we
note that it has become customary not to use a fixed order approximation
for $R_\tau$, but to employ the approach of Le~Diberder and Pich (LDP)
\cite{DIB92}, who resum exactly the effect of the running coupling
along the circle in the complex plane, but not on $Q^2 d\Pi/dQ^2$
itself. This resummation is motivated by the observation that evolution
along the circle generates a series of large higher order corrections,
which is convergent, but only barely so at the actual value of
$\alpha_s(m_\tau)$. The resummation of Le~Diberder and Pich (restricted
to one-loop running of the coupling)
is automatically included in our resummation, since the
running coupling is not expanded in $\alpha_s(m_\tau)$ in the
derivation of Eq.~(\ref{rBS}). It is nevertheless instructive
to apply the procedure of \cite{DIB92} to fixed order
approximations for $Q^2 d\Pi/dQ^2$. The successive approximations for
$R_\tau$ are then given by
\small\begin{eqnarray}\label{mlp}
M_N^{\tau,LDP}(a_s(m_\tau))&\equiv& \sum_{n=0}^N d_n^D\,
A_n(a_s(m_\tau))\,a_s(m_\tau)^n \\
\nonumber A_n(a_s(m_\tau)) &=&
\int\limits_{|s|=m_\tau^2}\frac{d s}{s}\left(1-2\frac{s}{m_\tau^2}+
2\frac{s^3}{m_\tau^6}-\frac{s^4}{m_\tau^8}\right)\frac{1}{[1+a_s(m_\tau)
\ln(-s/m_\tau^2)]^{n+1}}\,,
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent where $d_n^D$ are the expansion coefficients of $Q^2 d\Pi/dQ^2$,
defined as in Eq.~(\ref{defd_n}). $M_N^{\tau,LDP}(a_s(m_\tau))$ and
$d_n^{D,\overline{\rm MS}}$ are given in the last two columns of Table~\ref{tab1}.
Truncating this series, we would conclude that
$M_\infty^{\tau,\overline{\rm MS}}$ is about 2.05 or even less, since the
coefficient $d_2^{D,\overline{\rm MS}}=3.10$ is almost a factor three larger
than the exact $\alpha_s^3$-coefficient 1.26 for $Q^2 d\Pi/dQ^2$,
which indicates that keeping vacuum polarization alone is not a
good approximation\footnote{The discrepancy is partially removed,
when two-loop running is incorporated, see Sect.~5.2.}
for $Q^2 d\Pi/dQ^2$ beyond order $\alpha_s^2$.
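The Le~Diberder--Pich partial sums can be reproduced numerically. The sketch below (plain Python) evaluates the contour integrals $A_n$ of Eq.~(\ref{mlp}) on the counterclockwise unit circle ($m_\tau^2=1$), assuming an implicit normalization $1/(2\pi i)$ (so that $A_n\to1$ as $a_s\to0$) and taking the $d_n^{D,\overline{\rm MS}}$ entries from Table~\ref{tab1}.

```python
import cmath
import math

# Numerical version of Eq. (mlp), with d_n^D taken from Table 1.  Assumption:
# the contour integral carries an implicit normalization 1/(2 pi i), so that
# A_n -> 1 when a_s -> 0 and M_0 reproduces the table entry.
a_s = 0.229
dD = [1.0, 0.6918, 3.1035]

def A(n, a, N=40_000):
    acc = 0j
    for j in range(N):
        theta = (j + 0.5) * 2 * math.pi / N
        s = cmath.exp(1j * theta)              # m_tau^2 = 1, counterclockwise
        w = 1 - 2 * s + 2 * s ** 3 - s ** 4
        acc += w / (1 + a * cmath.log(-s)) ** (n + 1)
    return acc / N                             # ds/(2 pi i s) = dtheta/(2 pi)

M, tot = [], 0.0
for n, d in enumerate(dD):
    tot += d * A(n, a_s).real * a_s ** n
    M.append(tot)
print([round(m, 3) for m in M])   # cf. the M_n^{tau,LDP} column: 1.329, 1.578, 1.855
```

On the circle $|s|=1$ one has $\ln(-s)=i(\theta-\pi)$, so the factors $A_n$ are real and of order one, as used in the discussion below.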
The difference between $M_\infty^{\tau,\overline{\rm MS}}(0.229) = 2.233$ and
2.05 can be explained by the different weight of infrared and
ultraviolet regions of internal momenta for $R_\tau$ as compared
to $Q^2 d\Pi/dQ^2$. Since the $A_n$ in Eq.~(\ref{mlp}) are positive
numbers of order one, the behaviour of $M_N^{\tau,LDP}(a_s(m_\tau))$
as a function of $N$
is controlled by the series for $Q^2 d\Pi/dQ^2$. The point is
that the series for this quantity becomes dominated by ultraviolet
regions of integration much earlier than $R_\tau$ itself, for
which the contour integral suppresses the ultraviolet renormalon
singularities as well as their residues. For instance, the ratio
of the contribution of the leading infrared renormalon pole and the
leading ultraviolet renormalon pole to the coefficients of the
expansion for $R_\tau$ at order $n+1$ is given by
\small\begin{equation}
e^{20/3}\,\frac{20}{9}\left(\frac{1}{3}\right)^n n
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent and the crossover to the asymptotically expected ultraviolet
renormalon dominance takes place at $n\approx 8-9$. For
$Q^2 d\Pi/dQ^2$ we obtain instead
\small\begin{equation}
e^{5}\,\frac{2}{3}\left(\frac{1}{2}\right)^n \frac{1}{n}
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent with crossover at $n\approx 4-5$, which is indeed confirmed by the
coefficients for $d_n^{D,\overline{\rm MS}}$ given in Table~\ref{tab1}. The problem
with the onset of ultraviolet renormalon dominance is that then
one is forced to construct the analytic continuation of the Borel
transform in order to overcome the formal $1/m_\tau^2$-uncertainty,
associated with the truncation of the series
due to ultraviolet renormalons. Since for $\alpha_s=0.32$
this onset of divergence is expected at $n\sim 1/a_s\sim 4-5$, we
see that, in the $\overline{\rm MS}$ scheme,
the resummation of Le~Diberder and Pich cannot be continued
beyond $n\approx 5$ without running into this $1/m_\tau^2$-uncertainty.
It is the suppression of ultraviolet regions of integration that
allows a fixed-order approximation of $R_\tau$ itself to higher
$n$ than five, which explains the proximity of $M_7^{\tau,\overline{\rm MS}}(0.229)$
from truncation to 2.233 in this case. Recall that after resummation,
there is no uncertainty left due to ultraviolet renormalons,
because the sign-alternating behaviour is summable.
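Both crossover estimates quoted above can be checked mechanically (plain Python; the two ratio formulas are copied verbatim from the text):

```python
import math

# Crossover between infrared- and ultraviolet-renormalon dominance,
# from the two pole-contribution ratios quoted in the text.
def ratio_Rtau(n):
    return math.exp(20 / 3) * (20 / 9) * (1 / 3) ** n * n

def ratio_D(n):
    return math.exp(5) * (2 / 3) * (1 / 2) ** n / n

cross_Rtau = next(n for n in range(1, 30) if ratio_Rtau(n) < 1)
cross_D = next(n for n in range(1, 30) if ratio_D(n) < 1)
print(cross_Rtau, cross_D)   # 9 5 -- UV dominance sets in at n ~ 8-9 and n ~ 4-5
```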
Let us turn to the determination of $\alpha_s$. We may write the
(normalized) hadronic $\tau$ decay width as
\small\begin{equation}
R_\tau = 3\,(|V_{ud}|^2+|V_{us}|^2)\, S_{EW} \left\{
1+\delta^{(0)}+\delta_{EW}+\delta_{p}\right\}\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where $S_{EW}$ and $\delta_{EW}$ are electroweak corrections,
$\delta^{(0)}$ is the perturbative QCD correction (with quark masses
set to zero) and $\delta_p$ denotes power suppressed corrections
in $1/m_\tau^2$, including quark mass and condensate terms. We have
borrowed the values for $S_{EW}$, $\delta_{EW}$ and $\delta_p$
from \cite{BRA92,Revs}. To a very good approximation we can neglect
the variation of $\delta_p$ with $\alpha_s$ and we have evaluated
it at $\alpha_s(m_\tau)=0.32$. Then the experimental value
$R_\tau=3.56\pm 0.03$ (quoted from \cite{Revs}),
obtained from an average of branching ratio and
lifetime measurements translates into a constant experimental
value\footnote{The latest experimental data, which we have not
taken into account, seem to indicate a
larger value for $R_\tau$. In this case, too,
resummation reduces $\alpha_s(m_\tau)$ obtained
without resummation by $10\%-15\%\,$.}
\small\begin{equation}
\delta^{(0)}_{exp}=0.183\pm 0.010
\end{equation}\normalsize\vspace*{-0.1ex}
\begin{figure}[t]
\vspace{-4cm}
\epsfysize=16.8cm
\epsfxsize=12cm
\centerline{\epsffile{tau.eps}}
\vspace*{-4cm}
\caption{Perturbative corrections to the $\tau$ hadronic width. I:
After resummation of one-loop running effects; II: Exact order
$\alpha_s^3$-approximation; III: Exact order
$\alpha_s^3$-approximation including the resummation of running
coupling effects along a circle in the complex plane (taken from
Pich); IV: Exact order $\alpha_s^2$-approximation for
comparison. The
shaded bar gives the experimental value with experimental errors
only.}
\label{taufig}
\end{figure}
\noindent for the perturbative QCD corrections. In Fig.~\ref{taufig} we
have plotted the prediction for $\delta^{(0)}$ as a function
of $\alpha_s(m_\tau)$ after resummation of one-loop running
effects (recall that for $R_\tau$ the exact $\alpha_s^2$- and
$\alpha_s^3$-coefficients are very well approximated by one-loop
running) as compared to the $\alpha_s^2$- and
$\alpha_s^3$-approximation as well as the $\alpha_s^3$-approximation,
including the resummation of \cite{DIB92}. We conclude that
resummation of one-loop running reduces the
central value for $\alpha_s(m_\tau)$ by
approximately 10\% to
\small\begin{equation} \label{estimate}
\alpha_s(m_\tau) = 0.29\,.
\end{equation}\normalsize\vspace*{-0.1ex}
There is some difficulty in assigning an error to this value,
which is related to the extent to which one trusts
the restriction to one-loop running effects as a good estimate
of higher order coefficients, not known exactly at present.
In view of the fact that one-loop
running overestimates the coefficient $d_2$ of $Q^2 d\Pi/dQ^2$,
one might consider $M_\infty^{\tau,\overline{\rm MS}}(a_s)$ as an upper
estimate for the effect of higher order perturbative
corrections and the quoted value for
$\alpha_s(m_\tau)$ as a lower estimate for the central value.
On the other hand, we shall see in Sect.~5.2, that a partial
inclusion of two-loop running points towards even a lower value
of $\alpha_s(m_\tau)$. It seems safe to conclude that higher
order corrections add up constructively and that their cumulative effect
is most likely to shift $\alpha_s(m_\tau)$ towards the above value.
Optimistically, one could even hope for a reduction of the theoretical
uncertainty from the unknown residual perturbative corrections. A
conservative evaluation would not
exclude the entire region from 0.27 to 0.34 for $\alpha_s$ at the scale
$m_\tau$. The lower bound reflects experimental and other than
perturbative theoretical uncertainties and the upper bound follows
from an analysis that takes into account only the completely
known perturbative corrections up to cubic order.
The implications of the expected ultraviolet renormalon dominance
for $R_\tau$ in large orders, which we have briefly touched upon
above, have recently been investigated in Ref.~\cite{ALT95}, whose
authors use conformal mapping techniques to construct the analytic
continuation of the Borel transform beyond its radius of convergence
set by the first ultraviolet renormalon. As noted in \cite{ALT95},
such a
technique is not particularly successful, when the series is not already
close to the asymptotic ultraviolet
renormalon behaviour (which is not the
case for $R_\tau$ and $Q^2 d\Pi/dQ^2$ up to
cubic order), because then the intricate
cancellations necessary to push the ultraviolet
renormalon further away from
the origin of the (conformally mapped) Borel plane than the first
IR renormalon do not take place. The variation of results obtained
from different mapping functions can then be considered analogous
to the variations induced by different choices of
renormalization schemes and as
in the latter case it is difficult to decide to what extent such a
(in principle arbitrary) variation should be considered as a
theoretical error. Based on the evidence presented above, we believe
that ultraviolet renormalons are sufficiently suppressed in fixed-order
perturbative approximations to $R_\tau$ (but potentially not to
$Q^2 d\Pi/dQ^2$) to be safely ignored at present.
\subsection{$(\Lambda_{\rm QCD}/m_\tau)^2$-corrections}
In this Subsection we investigate whether resummation of perturbative
corrections can introduce power corrections, which elude the
operator product expansion. We will exclude from the discussion
the effect of renormalons, which have received main attention
in this context \cite{BRO92,ZAK92,B92,BEN93,DUN94,ALT95,Revs}.
As far as evidence
from explicit calculations is available, infrared renormalons are
in correspondence with condensates in the OPE as expected. There
is no indication for explicit power corrections of dimension two
from this source, which could turn out to be numerically
significant. The effect
of the dominant ultraviolet renormalon divergence is taken into
account automatically by the error estimate due to unknown higher
order perturbative corrections. After resummation it disappears
completely in principle\footnote{This is seen explicitly in the
representation Eq.~(\ref{rBS}) for the Borel integral, which avoids
construction of the
analytic continuation of the Borel transform beyond its radius of
convergence set by the first ultraviolet renormalon.}, although
in practice this is not simple to implement \cite{ALT95} without
approximations (like large $\beta_0$).
It is also conceptually important
to distinguish the statement that $1/Q^2$-corrections are absent in
the OPE of correlation functions at euclidian momenta
from the statement that $1/m_\tau^2$-corrections are absent in $R_\tau$.
Validity of the first need
not imply the second, though the second will hardly be true without
the first.
\subsubsection{Definition of the coupling}
Our first point of concern is the definition of the coupling parameter
inherent to perturbative expansions and therefore to their resummations.
To emphasize that this question is not connected with the minkowskian
nature of $R_\tau$, we work with the derivative of the correlation
function $D(Q^2)$ rather than with $R_\tau$ itself.
Let us first take a closer look at the Borel sum (to be definite,
let us assume a principal value prescription throughout this
subsection, which corresponds to
taking the real part of $r_0(\lambda_L^2)$)
in the representation as an integral over finite gluon mass in
the large-$\beta_0$ (NNA) approximation. Let us denote the two terms
on the right hand side of Eq.~(\ref{rBS}) by $I(\alpha_s)$ for the
integral and $L(\alpha_s)$ for the contribution from the Landau
pole in the dispersion relation for the running coupling (using the
technique of Sect.~2.3 or \cite{BB94b}). The leading (two-loop)
radiative correction with finite $\lambda$ can
be obtained from the Borel transform by a Mellin transformation, see
Eq.~(\ref{relation}). Taking
the integral analytically can be difficult, but for our present purpose
we are interested only in the coefficients in the small-$\lambda^2$
expansion. This expansion can be obtained easily, since particular
terms proportional to $(\lambda^2/m_\tau^2)^n$ (modulo logarithms)
correspond to contributions from
singularities of the integrand in the right $u$-plane at $u=n$.
For example,
to pick up the term of order $\lambda^2/m_\tau^2$ one can evaluate the
integrand in Eq.~(\ref{relation}) at $u=1$, except for $\Gamma(-u)$.
For the $D$-function defined in Eq.~(\ref{defD}) we obtain
\begin{eqnarray}\label{Dexpand}
4 \pi^2 D(Q^2,\lambda^2) &=&1+\frac{\alpha_s}{\pi}\Bigg\{
1-\left[\frac{32}{3}-8\zeta(3)\right]\frac{\lambda^2}{Q^2}-
\left[2 \ln (Q^2/\lambda^2)+\frac{20}{3}-8\zeta(3)\right]
\frac{\lambda^4}{Q^4}
\nonumber\\&&{}+O\left(\lambda^6\ln^2 \lambda^2/Q^2\right)\Bigg\}\,.
\end{eqnarray}
\noindent The presence of quadratic terms,
$\lambda^2/Q^2$, is not in conflict with the operator product expansion,
since such terms come from the region of large momenta: As emphasized
in Sect.~2.4, only non-analytic terms in the small $\lambda^2$-expansion
can unambiguously be identified with infrared contributions. The leading
non-analytic term is proportional to $\lambda^4\ln\lambda^2$
and produces a correction proportional to $1/Q^4$, which can
be related to the
contribution of the gluon condensate \cite{SVZ} in the OPE.
This contribution agrees with the calculation
of the gluon condensate with finite gluon mass in \cite{CHE88}, after the
corresponding Wilson coefficient is extracted\footnote{The full contribution
of order $\lambda^4/Q^4$, which is multiplied by $\ln(Q^2/\lambda^2)+
const$, should be schematically decomposed as $\ln(\mu^2/\lambda^2)$,
contributing to the gluon condensate, and $\ln(Q^2/\mu^2)+const$, contributing
to the coefficient function to $\alpha_s$ accuracy. Here $\mu$ is the
scale separating small and large distances. If one considers $\lambda$
as an infrared cutoff, which is natural, then the full
contribution should be ascribed
to the coefficient function. Hence the rationale of ascribing the constant
term to contributions of large momenta (small distances). In general,
the separation of matrix elements and coefficient functions is of course
factorization scheme-dependent and constant terms
can be reshuffled. We see once more
that only non-analytic terms (logarithmic in this example) can be
used to trace infrared contributions.}.
(The difference in the constant $20/3$ arises because we consider the
derivative of $\Pi(Q^2)$.)
The presence of a $\lambda^2/Q^2$-correction in Eq.~(\ref{Dexpand})
implies that the Landau-pole contribution in Eq.~(\ref{rBS}) is
\small\begin{equation}
L(\alpha_s(Q)) \propto \exp\left(\frac{1}{\beta_0\alpha_s(Q)}
\right) \propto \frac{\Lambda_{\rm QCD}^2}{Q^2} \,.
\end{equation}\normalsize\vspace*{-0.1ex}
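The exponential suppression quoted here is just the one-loop relation between the coupling and $\Lambda_{\rm QCD}$. In the conventions used here ($\beta_0<0$ for $N_f<33/2$, expansion parameter $-\beta_0\alpha_s$), the Landau pole of the running coupling $\alpha_s(k) = \alpha_s(Q)/(1-\beta_0\alpha_s(Q)\ln(k^2/Q^2))$ sits at

```latex
\lambda_L^2 = Q^2\, e^{1/(\beta_0\alpha_s(Q))}\sim\Lambda_{\rm QCD}^2
\qquad\Longrightarrow\qquad
\exp\left(\frac{1}{\beta_0\alpha_s(Q)}\right) \sim \frac{\Lambda_{\rm QCD}^2}{Q^2}\,,
```

so the Landau-pole term is indeed power-suppressed for $Q\gg\Lambda_{\rm QCD}$.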
\noindent Numerically, this term is quite substantial. Taking $\alpha_s(Q)=
0.32$ (which corresponds to $Q\simeq m_\tau$), separation of the
two terms in Eq.~(\ref{rBS}) for $D(Q^2)$ amounts to
\small\begin{equation}
M_\infty^D(0.229) = 1.48 = \mbox{integral over gluon mass} + 0.31\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Note that $L(\alpha_s)$ has identically vanishing perturbative
expansion. Thus, without any additional information, keeping
$I(\alpha_s)$ alone in Eq.~(\ref{rBS}) provides an equally
legitimate summation of the original series, which differs from
Borel summation by
terms of order $1/Q^2$, which are not related to renormalons or
any particular regime of small or large momenta. We conclude,
at this stage, that statements about power corrections are
meaningful only with respect to particular summation prescriptions.
Physically, dropping the contribution $L(\alpha_s)$ is equivalent
to dropping the Landau pole contribution to the dispersion relation
for the running
coupling which can be interpreted
as a redefinition of the coupling, such that the new coupling
has no Landau pole. Since Borel summation
in our limit of large $\beta_0$ coincides
(in the V-scheme, $C$=0) with averaging
the running coupling $\alpha_s(k) = \alpha_s(Q)/(1-\beta_0
\alpha_s(Q) \ln(k^2/Q^2))$, it is readily seen from
Eq.~(\ref{disprel}) that neglecting $L(\alpha_s)$ corresponds to
averaging with a coupling $\alpha^{\rm eff}_s(Q)$, related
to $\alpha_s(Q)$ by
\small\begin{equation} \label{effcoup}
\alpha^{\rm eff}_s(Q) = \alpha_s(Q)-\frac{\lambda_L^2}{\beta_0
(Q^2+\lambda_L^2)} = \alpha_s(Q)-\frac{1}{\beta_0}\,e^{1/(\beta_0
\alpha_s(Q))} + O\left(e^{2/(\beta_0
\alpha_s(Q))}\right)\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent This coupling has no Landau pole and freezes to a finite value as
$Q^2$ approaches zero. However, it has $1/Q^2$-corrections to its
evolution and correspondingly to the $\beta$-function:
\small\begin{equation}
\beta^{\rm eff}(\alpha_s^{\rm eff}) = \beta_0(\alpha_s^{\rm eff})^2
+ \left(\frac{1}{\beta_0}+2\alpha_s^{\rm eff}\right) e^{1/(\beta_0
\alpha_s^{\rm eff})} + O\left(e^{2/(\beta_0
\alpha_s^{\rm eff})}\right)
\end{equation}\normalsize\vspace*{-0.1ex}
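The exponentially small relation between the two couplings in Eq.~(\ref{effcoup}) follows from expanding $\lambda_L^2/(\beta_0(Q^2+\lambda_L^2))$ in $\lambda_L^2/Q^2 = e^{1/(\beta_0\alpha_s(Q))}$. A short numerical sketch, with illustrative values of $\beta_0$ and $\lambda_L^2$ (the identification of $\lambda_L^2$ with the position of the Landau pole of the one-loop coupling is assumed here):

```python
import math

beta0 = -9 / (4 * math.pi)   # beta_0 < 0, as in the text's conventions
lamL2 = 0.25                 # lambda_L^2: Landau-pole position (illustrative, GeV^2)

def alpha_s(Q2):
    return 1.0 / (-beta0 * math.log(Q2 / lamL2))

for Q2 in (25.0, 400.0):
    a = alpha_s(Q2)
    exact = a - lamL2 / (beta0 * (Q2 + lamL2))         # alpha_eff of Eq. (effcoup)
    leading = a - math.exp(1.0 / (beta0 * a)) / beta0  # keep only the first exponential
    # the difference is O(exp(2/(beta0*a))) = O((lamL2/Q2)^2)
    assert abs(exact - leading) < 2 * (lamL2 / Q2) ** 2
```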
\noindent Note that the absence of a Landau pole in $\alpha_s^{\rm eff}$
is seen only after summation of an {\em infinite} number of power
corrections in $1/Q^2$ (exponentially small terms in $\alpha_s(Q)$)
in Eq.~(\ref{effcoup}). Further, the coefficients of the perturbative
expansion of any quantity are the same whether one uses $\alpha_s(Q)$
or $\alpha_s^{\rm eff}(Q)$, and they diverge, although the average
\small\begin{equation} \label{aveff}
\int d^4 k \,F(k,Q) \frac{\alpha^{\rm eff}_s(k)}{k^2} \sim
I(\alpha_s(Q))
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent has no Landau pole ambiguities. We emphasize
once more that averaging
one-loop radiative corrections with
this freezing coupling differs
from Borel summation of the series in $\alpha_s(Q)$
(which gives $I(\alpha_s(Q))+L(\alpha_s(Q))$)
by $1/Q^2$-terms. So does Borel summation of
the identical series in $\alpha^{\rm eff}_s(Q)$,
because the couplings differ by such terms. We shall argue that
couplings like
$\alpha_s^{\rm eff}(Q)$ obscure the relation to the operator product
expansion in the sense that explicit $1/Q^2$-terms must be added
to the resummed result (such as $L(\alpha_s(Q))$, had one used
$\alpha_s^{\rm eff}(Q)$) in order to cancel spurious $1/Q^2$-effects
in the large $Q^2$-expansion of the resummed result.
This remark applies identically to a freezing coupling of
type $1/(-\beta_0 \ln(c+Q^2/\Lambda^2))$ and thus to the procedure
of \cite{Nnew}, where such a coupling has been used to
estimate the size of infrared contributions for quantities related
to heavy quark expansions.
We will now try to make precise the statement that $1/Q^2$-terms
should be absent in the OPE. In order to talk about power corrections
we have to attach some meaning to the divergent perturbative
expansion at leading order in $1/Q^2$. A natural definition
(though one might still ask whether it is justified to prefer
Borel summation-type schemes to any other) apparently is
\small\begin{equation} \label{interpretation1}
\left| D(Q^2)_{\rm exact} - BS[D(\alpha_s(Q))_{\rm pert}]\right|
\sim O\left(\frac{1}{Q^4}\right)\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where $BS[D(\alpha_s(Q))_{\rm pert}]$ denotes the Borel
integral (with principal value prescription) of the perturbative
expansion of $D$ in some $\alpha_s(Q)$. However, this statement
is still ambiguous, because it implies knowledge of the
$Q^2$-dependence in the coupling, which is arbitrary to a large
extent.
Moreover, it is not sufficient to appeal to the usual ambiguities
in the choice of perturbative renormalization schemes, since the
coupling and its evolution must be specified to power-like
accuracy. We can bypass this point, noting that QCD with massless
fermions has only a single free parameter. Since we are discussing
asymptotic expansions in a dimensionful parameter $1/Q^2$, it seems
most natural to choose a physical mass scale as this parameter,
say $m_\rho^2$. This is especially natural in the context of lattice
definitions of QCD, where one would trade the bare coupling for
$m_\rho^2$. Then, if the operator product expansion exists
nonperturbatively, this suggests the existence of a double expansion
in $1/\ln(Q^2/m_\rho^2)$ and $m_\rho^2/Q^2$ at
large $Q^2$ and the interpretation
of the statement that there are no $1/Q^2$ terms could be\footnote{
This is still a simplification. In fact, $\ln\ln (Q^2/m_\rho^2)$ is
also expected to appear. We will restrict the discussion to the
large-$\beta_0$ limit, which might be considered as an analytic
continuation to a large negative number of massless fermion flavours.
In this limit, only $\ln(Q^2/m_\rho^2)$ appears and Borel transformation
with respect to $\ln(c Q^2/m_\rho^2)$ has a unique meaning.}
\small\begin{equation} \label{interpretation2}
\left| D(Q^2)_{\rm exact} - BS[D(Q^2)_{\rm pert}]\right|
\le K(\delta,R) \left(\frac{m_\rho^2}{c Q^2}\right)^2\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Here $BS[D(Q^2)_{\rm pert}]$ denotes the Borel sum (with
principal value prescription) of the leading term in the
expansion of $D(Q^2)$ at large $Q^2$, which is an infinite series
in $1/\ln(c Q^2/m_\rho^2)$, where
$c$ is a constant to be specified
later. The constant $K(\delta,R)$ depends on
the opening angle $\delta$ and radius $R$ of the sector in the
complex $1/Q^2$-plane, where
Eq.~(\ref{interpretation2})
is supposed to hold. In general, $K(\delta,R)$ will neither be
continuous nor bounded as a function of these two parameters.
In particular, one can not expect a uniform bound in the
entire cut plane.
In the limit $\delta\rightarrow \pi$,
this is related to violations of duality. We stress that as far
as mathematical rigour is concerned, the validity of
Eq.~(\ref{interpretation2}) must be regarded as purely hypothetical.
We wish to present it as a mathematical formulation of the
{\em assumption} that the operator product expansion holds
at {\em euclidian} momenta (i.e. $\delta > 0$)
and no $1/Q^2$-terms are present in the
asymptotic expansion. We note that the
condition Eq.~(\ref{interpretation2})
is stronger than the condition that long-distance contributions
can be factorized into condensates, which does not exclude the
presence of power-like corrections, in particular $1/Q^2$, to
coefficient functions, in particular of the unit operator, from
short distances.
After we have chosen $m_\rho$ as the fundamental parameter of
QCD, we may define the coupling by its beta-function and an
overall scale. We define (again, to leading-$\beta_0$ accuracy)
\small\begin{equation} \label{defcoup2}
\alpha_s(Q)=\frac{1}{-\beta_0 \ln(c Q^2/m_\rho^2)} \equiv
\frac{1}{-\beta_0 \ln(Q^2/\Lambda_V^2)}\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where we have fixed $c$ by matching the large $Q^2$ behaviour
with the $V$-scheme\footnote{This is a matter of
convenience, since it
eliminates writing $C$ in the large-$\beta_0$ approximation. We
could also have matched to the perturbative $\overline{\rm MS}$ coupling. We
also note that the change of variables from $m_\rho^2$ to
$\alpha_s(Q)$ is singular due to the Landau pole at $\Lambda_V^2$.
However, this is not a restriction, since Eq.~(\ref{interpretation2})
is limited to finite $R$ anyway. The position of the pole
may be varied by the choice of $c$, which implies a reorganization
of powers in inverse logarithms, but the pole occurs always at
a finite value of $Q^2$.}.
It is easy to see that Eq.~(\ref{interpretation2}) implies
Eq.~(\ref{interpretation1}) with this coupling, which by
definition has no power-like evolution. This implication is
not valid for couplings that incorporate $1/Q^2$-dependence
in their running. It is in this sense that we believe
the use of freezing couplings
is hazardous. {\em If} Eq.~(\ref{interpretation2}) is correct,
then the use of Borel summation for perturbative series expressed
in terms of such a coupling, or averaging lowest order
radiative corrections with such a running coupling, necessitates
the addition of explicit $1/Q^2$-corrections simply to
cancel such corrections hidden in the
definition of the coupling.
In practice, in one way or another, one relates physical quantities
and an unphysical coupling like Eq.~(\ref{defcoup2}) might be
considered as an intermediate concept only. However, the importance
of being definite with the evolution of the coupling to power-like
accuracy is not diminished by the use of physical couplings. To give
a somewhat contrived example: If one expressed $R_{e^+ e^-}$ as
an expansion in the effective coupling, defined by QCD corrections
to the Gross-Llewellyn-Smith
sum rule, one would expect $1/Q^2$-corrections to this
perturbative relation, which are imported from the definition of
the coupling.
\subsubsection{Analyticity and the Landau pole}
After renormalization group improvement, perturbative expansions are
plagued by the Landau pole. This unphysical singularity is endemic
not only to perturbative expansions, but to the operator product
expansion, truncated to any finite order. It requires some care
in defining resummations for quantities like $R_\tau$, which are
related to euclidian quantities by analyticity. For the purpose of
illustration, we shall restrict ourselves to the approximation where
$\beta(\alpha_s)=\beta_0\alpha_s^2$ and adopt the V-scheme, $C=0$,
in all explicit formulas. Then, ignoring an irrelevant $Q^2$-independent
subtraction, we can write the Borel sum (with principal value
prescription, as usual) as
\small\begin{eqnarray}
BS[\Pi](Q^2) &\equiv& \frac{1}{-\beta_0}\int\limits_0^\infty d u \,e^{-u/(
-\beta_0\alpha_s(\mu))} B[\Pi](u,Q^2/\mu^2)
= \frac{1}{-\beta_0}\int\limits_0^\infty d u \left(\frac{\Lambda_V^2}{Q^2}
\right)^u F(u)\nonumber\\
&=& BS[\Pi]_<(Q^2,u_0) + BS[\Pi]_>(Q^2,u_0)\,,
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent where we use $\alpha_s(Q)=1/(-\beta_0 \ln(Q^2/\Lambda^2_V))$ and
$F(u)$ is $Q^2$-independent. In the second line, we have defined two
new functions by splitting the Borel integral into two regions from
0 to $u_0$ and $u_0$ to $\infty$. From the explicit form in
Eq.~(\ref{borelpolaroper}), we deduce that, for {\em any} (positive)
$u_0$, $BS[\Pi]_<(Q^2,u_0)$ is analytic in the $Q^2$-plane cut
along the negative axis. On the other hand, the $u$-integral in
$BS[\Pi](Q^2)$ diverges when $Q^2 < \Lambda_V^2$, and $Q^2=\Lambda^2_V$
is a singular point. We say that $BS[\Pi](Q^2)$ has a Landau pole,
though, in general, $Q^2=\Lambda^2_V$ will rather be a branch point.
Notice that, since in practice one truncates the operator product expansion
at operators of some dimension $d$, it is perfectly consistent to replace
$BS[\Pi](Q^2)$ by $BS[\Pi]_<(Q^2,u_0)$, provided we choose $u_0 >
d/2+1$, since the difference is bounded by
\small\begin{equation}\label{bound2}
\left| BS[\Pi](Q^2) - BS[\Pi]_<(Q^2,u_0)\right| < \tilde{K}(R)
\left(\frac{\Lambda_V^2}{Q^2}
\right)^{u_0}
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent and vanishes faster than other corrections neglected in the truncation
of the OPE. In this case, the bound can be established in a cut
circle of radius $R$ in the $1/Q^2$-plane. Again, it will not be
possible to establish uniform bounds, since typically $\tilde{K}(R)
\sim 1/\ln(R \Lambda_V^2)$. Thus, although for fixed $Q^2>1/R$,
the difference can be made to vanish faster than any desired power
of $\Lambda_V^2/Q^2$ by increasing $u_0$, at fixed $u_0$ the bound
may become arbitrarily weak as $Q^2$ approaches $\Lambda_V^2$.
Still, we conclude that the presence or absence of a Landau pole
in resummed results is related to the behaviour of the Borel integral
at infinity
and is thus an effect that formally vanishes faster than any power
of $1/Q^2$.
Consider now the two different representations for
the tau decay width, Eqs.~(\ref{rep1}) and
(\ref{rep2}), in this light. We use equality of
vector and axial-vector correlators in perturbation theory and
abbreviate the weight function by $w(s/m_\tau^2)$. Then
\small\begin{equation}\label{sum1}
R_\tau^{
\mbox{{\tiny eq.(\ref{rep1})}}}
= 24\pi\int\limits_0^{m_\tau^2} \frac{d s}{m_\tau^2}
\,w\!\left(\frac{s}{m_\tau^2}\right)\,\frac{1}{2 i}\mbox{disc} \,
BS[\Pi](s)\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\small\begin{equation}\label{sum2}
R_\tau^{
\mbox{{\tiny eq.(\ref{rep2})}}} = 12\pi i\int\limits_{|s|=m_\tau^2} \frac{d
s}{m_\tau^2}
\,w\!\left(\frac{s}{m_\tau^2}\right)\,BS[\Pi](s)\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Note that in both cases summation is carried out inside the
$s$-integral. The following considerations can
easily be extended to the situation where summation is
taken after $s$-integration (and do not lead to any of
the differences observed below). We define $R_{\tau,<}^{
\mbox{{\tiny eq.(\ref{rep1})}}}(u_0)$ and $R_{\tau,<}^{
\mbox{{\tiny eq.(\ref{rep2})}}}(u_0)$ by the replacement of
$BS[\Pi](s)$ by $BS[\Pi]_<(s,u_0)$. Using the analyticity properties
discussed above as well as $m_\tau^2 > \Lambda_V^2$, it is
straightforward to find
\small\begin{equation}
R_\tau^{\mbox{{\tiny eq.(\ref{rep2})}}} -
R_{\tau,<}^{\mbox{{\tiny eq.(\ref{rep2})}}}(u_0)
\sim \left(\frac{\Lambda_V^2}{m_\tau^2}\right)^{u_0}
\qquad R_{\tau,<}^{\mbox{{\tiny eq.(\ref{rep1})}}}(u_0) =
R_{\tau,<}^{\mbox{{\tiny eq.(\ref{rep2})}}}(u_0)\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent but
\small\begin{equation}
R_\tau^{\mbox{{\tiny eq.(\ref{rep1})}}} -
R_\tau^{\mbox{{\tiny eq.(\ref{rep2})}}} =
-12 \pi i \int\limits_C \frac{d s}{m_\tau^2}
\,w\!\left(\frac{s}{m_\tau^2}\right)\,BS[\Pi]_>(s,u_0)\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where the contour $C$ runs along a circle of radius
$\Lambda_V^2$ around $s=-\Lambda_V^2$. Despite appearances, the
right hand side is independent of $u_0$. Of course, one can not take
$u_0$ to infinity inside the integral and conclude that it is
zero. $BS[\Pi]_>(s,u_0)$ has a pole or branch point at
$s=-\Lambda_V^2$ and we find
\small\begin{equation}\label{differ}
R_\tau^{\mbox{{\tiny eq.(\ref{rep1})}}} -
R_\tau^{\mbox{{\tiny eq.(\ref{rep2})}}} \sim \frac{\Lambda_V^2}{m_\tau^2}
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent and similarly
\small\begin{equation}
R_\tau^{\mbox{{\tiny eq.(\ref{rep1})}}} -
R_{\tau,<}^{\mbox{{\tiny eq.(\ref{rep1})}}}(u_0)
\sim \frac{\Lambda_V^2}
{m_\tau^2}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Since Eq.~(\ref{bound2}) is not valid for $s<\Lambda_V^2$,
this result should come as no surprise. The difference in Eq.~(\ref{differ})
arises because the resummation introduces (or preserves) the
Landau pole singularity, which is in conflict with the analytic
properties of the exact correlation function that were
assumed in deriving Eq.~(\ref{rep2}) from Eq.~(\ref{rep1}).
Should one conclude then that resummations of perturbative corrections
are ambiguous by terms of order $1/m_\tau^2$, even if there are
no $1/Q^2$ terms in the OPE at euclidian momenta in the strong sense of
Eq.~(\ref{interpretation2})? Do we have evidence for power corrections
not captured by the OPE, since they originate from $u=\infty$ in
the Borel integral? The answer is no. A positive answer would be
warranted if there were no reason to prefer the prescription
Eq.~(\ref{sum1}) to Eq.~(\ref{sum2}) or vice versa, while only
the second can be used. The reason is that the region $|s| <
\Lambda_V^2$ can not be penetrated to any finite order in the
short distance expansion ($1/Q^2$-expansion). Since all summation
prescriptions of perturbative expansions are formulated within the
context of the short-distance expansion\footnote{
See the discussion below Eq.~(\ref{borelintegral}).},
they can not be applied
to $|s| < \Lambda_V^2$, unless the summation of this expansion itself
is understood. This discards Eq.~(\ref{sum1}) as a legitimate summation.
One must first use the analyticity properties of the exact correlation
functions to deform the contour outside the region $|s| < \Lambda_V^2$,
before an attempt at summing perturbative expansions can be made,
which privileges Eq.~(\ref{rep2}) as the starting point. A different
way to express this fact is to observe that the principal value Borel
integral is defined for $|s| < \Lambda_V^2$ only in the sense of
analytic continuation, but cannot be used as a numerical approximation
since all power corrections in the operator product expansion are
of the same order of magnitude in this region. It is only when
all these are taken into account that the Landau pole vanishes in
physical observables.
To conclude this Section, let us mention that a relation between
analyticity in $s$ and behaviour of the Borel integral at infinity
has been noted in a very different and much more physical
context in \cite{tHO77}. The presence of
resonances and multiple thresholds on the physical axis was observed
to be in conflict with Borel summation\footnote{
Historically, it is interesting to note that the
horn-shaped analyticity region in the coupling, which leads to
this conclusion, was
discovered for the photon propagator in \cite{AZI70}.}.
In this case, the restoration
of the correct analytic properties should also be understood
in connection
with summing the OPE itself \cite{BEN93b}. A much more elaborate
argument has been presented in \cite{SHI94}, where the presence
of resonances was connected with the divergence of the OPE, which also offers
a way to understand the concept of duality and its limitations. If
the OPE is by itself only an asymptotic expansion, it is quite possible
that its application is limited to a finite phase range in the complex
$s$-plane around the negative $s$-axis. We
believe that this possibility
should be taken seriously, since it might imply the presence of
$1/s_0$-corrections in finite averages
up to $s_0$ of the discontinuity along the
physical axis (for $R_\tau$, $s_0=m_\tau^2$). Unfortunately, we
do not know how to substantiate or disprove such
a statement theoretically.
\mysection{The pole mass of a heavy quark}
{The pole mass of a heavy quark}
Although on-shell quarks do not exist and there is presumably no
natural nonperturbative definition of a pole
mass of a quark, the pole
mass has proven useful as an auxiliary concept in applications
of perturbative QCD to heavy quark physics, where physically
the quark is expected to be close to its would-be mass-shell.
Still, the relation between the pole mass and an off-shell
renormalized mass is known to have large perturbative coefficients
from small loop momenta, at least in high orders of
perturbation theory \cite{BB94,BIG94}. Since neither quark
mass definition is physical, this might appear as an irrelevant
problem. However, the behaviour of perturbative expansions of
quantities involving quark masses changes with the
quark mass definition used and in
general one expects coefficients to be
significantly reduced, when the pole mass is abandoned in favour
of an off-shell renormalized mass \cite{BIG94,BBZ94}, such as $\overline{\rm MS}$.
In the latter case, it is quite well-known that the exact two-loop
coefficient \cite{GRA90} in the relation to the pole mass
is substantial and one might wonder
whether this is coincidental.
In this Section we apply resummation and NNA to the difference
between the pole and $\overline{\rm MS}$ mass.
Apart from its practical interest,
we also use this quantity to illustrate certain features of higher
order perturbative corrections, when the masses of fermions in
loops are finite. The reader interested only in
results may jump directly to Sect.~4.3, where our best estimate
is presented. In this Section $\alpha_s(\mu)$ always refers
to the $\overline{\rm MS}$ coupling.
\subsection{Preliminaries}
To begin with, let us quote the exact two-loop
result from \cite{GRA90}:
\small\begin{equation}\label{m1}
\frac{\delta m}{m}
\equiv\frac{m_{\rm pole}-m_{\overline{\rm MS}}(m_{\overline{\rm MS}})}{m_{\overline{\rm MS}}(m_{\overline{\rm MS}})}
= \frac{4}{3}\frac{\alpha_s(m_{\overline{\rm MS}})}{\pi} \left[1 +
\left(4.68\,(-\beta_0^{N_f}) - \left\{\begin{array}{c} 0.89\\0.91\end{array}
\right\}\right) \alpha_s(m_{\overline{\rm MS}})\right]
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent The upper entry in curly brackets refers to the calculation with
$N_f$ massless quarks and a quark of mass $m$ in loops, the
lower entry to the situation, where the quark flavour of mass $m$ is
excluded from loops\footnote{To obtain the lower value,
it is necessary to insert a missing $C_F$ in front of the mass
correction in Eq.~(17) of \cite{GRA90}, see also \cite{BG94}.}.
The superscript on $\beta_0$ indicates the value
of $N_f$ to be taken for the $\beta$-function. Since $(-\beta_0^{N_f})
\sim 2/3$ for the cases of interest, $N_f=3,4$, we note that keeping
the term proportional to $\beta_0^{N_f}$ alone indeed provides a
reasonable approximation, within $30$--$40\%$ of the exact coefficient.
Note that above we have normalized $m_{\overline{\rm MS}}$ at the scale $m_{\overline{\rm MS}}$,
since we prefer to eliminate $m_{\rm pole}$ in all places. When
Eq.~(\ref{m1}) is expressed in terms of the $\overline{\rm MS}$ mass normalized at
$m_{\rm pole}$,
\small\begin{equation}\label{m2}
\frac{m_{\rm pole}-m_{\overline{\rm MS}}(m_{\rm pole})}{m_{\overline{\rm MS}}(m_{\rm
pole})}
= \frac{4}{3}\frac{\alpha_s(m_{\rm pole})}{\pi} \left[1 +
\left(4.68\,(-\beta_0^{N_f}) - \left\{\begin{array}{c} 0.25\\0.28\end{array}
\right\}\right) \alpha_s(m_{\rm pole})\right]\,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent the approximation of keeping only the term proportional to
$\beta_0^{N_f}$ is significantly improved. This ambiguity is in fact a
source of trouble within the BLM and our extended prescription. For
$\delta m$ additional renormalization scheme and scale dependence
is present from the definition of the quark mass parameter, whereas
the BLM method by construction deals only with the scale-dependence of
the coupling. Although the calculation of fermion loop insertions
and subsequent restoration of $\beta_0$ also provides the anomalous
dimension of the quark mass within the same approximation, the scheme
ambiguity in the size of neglected genuine two-loop corrections is
amplified by this additional source of scheme-dependence. If
$\mu_1^2-\mu_2^2\sim O(\alpha_s)$, then the difference
$m_{\overline{\rm MS}}(\mu_1)-m_{\overline{\rm MS}}(\mu_2)$ must be neglected in the
approximation of large $\beta_0$, although it can be sizeable
if the coefficient of $\alpha_s$ in $\mu_1^2-\mu_2^2$ is large. In the
following, we will work with Eq.~(\ref{m1}), whenever relevant.
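The quality of the large-$\beta_0$ approximation to the two-loop bracket of Eq.~(\ref{m1}) can be checked in a few lines; a sketch, using the one-loop $\beta_0=-(11-2N_f/3)/(4\pi)$ (negative in the conventions of this text):

```python
import math

def beta0(nf):
    # one-loop beta_0, negative in the sign convention of the text
    return -(11 - 2 * nf / 3) / (4 * math.pi)

# two-loop bracket of Eq. (m1): 4.68*(-beta0^{N_f}) - {0.89, 0.91}
for nf in (3, 4):
    for const in (0.89, 0.91):
        large_b0 = 4.68 * (-beta0(nf))
        exact = large_b0 - const
        deviation = large_b0 / exact - 1.0
        assert 0.30 < deviation < 0.45   # roughly a 30-40% overestimate
```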
Let us first consider the (excellent) approximation that the quark
flavour of mass $m$ is neglected in loops. All other $N_f$
quarks are massless. In this case the relevant $\beta$-function coefficient
is $\beta_0^{N_f}$ and an exact expression for the Borel transform
of the mass shift exists \cite{BB94}:
\small\begin{equation} \label{polebt}
B\left[\delta m/m\right](u) = \frac{1}{3\pi}\left[e^{5 u/3}\,6 (1-u)
\frac{\Gamma(u)\Gamma(1-2 u)}{\Gamma(3-u)} + \frac{\tilde{G}_0(u)}{u}
\right]
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent $\tilde{G}_0(u)$ is defined in the following way (see Appendix~A): If
$g_n$ are the expansion coefficients of $G_0(u)$ in $u$, then
$\tilde{G}_0(u)$ has expansion coefficients $g_n/n!$. $G_0(u)$ can
be calculated with the methods of Appendix~A and is found to be
\small\begin{equation}\label{subfunction}
G_0(u) = -\frac{1}{3} (3+2 u)\frac{\Gamma(4+2 u)}{\Gamma(1-u)
\Gamma^2(2+u) \Gamma(3+u)}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Higher order perturbative corrections
with fermion loop insertions
into the one-loop diagram can then be computed according to
Eq.~(\ref{genfunction}). Equivalently, one can start from Eq.~(\ref{bofin})
with the input of the one-loop mass shift with finite gluon mass,
given by ($x\equiv \lambda^2/m^2$)
\small\begin{eqnarray}\label{polefg}
r_0(\lambda^2) &=&\nonumber \frac{1}{3\pi}\Bigg[4+x-\frac{x^2}{2}\ln x-
\frac{\sqrt{x}(8+2x-x^2)}{\sqrt{4-x}}
\Bigg\{\arctan\left[\frac{2-x}{\sqrt{x(4-x)}}\right]+\\
&&\,\arctan\left[\frac{\sqrt{x}}{\sqrt{4-x}}\right]\Bigg\}\Bigg]
= \frac{4}{3\pi}\left[1+\frac{\pi}{2} \sqrt{x} + \frac{3}{4} x + O\left(
x^{3/2}\right)\right]\,.
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent We define coefficients $d_n$ as in Sect.~2 by
\small\begin{equation}
\frac{\delta m}{m}
= \frac{4}{3}\frac{\alpha_s(m)}{\pi} \left[1+\sum_{n=1}^\infty
d_n \,(-\beta_0^{N_f} \alpha_s(m))^n\right]\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent They are listed in Table~\ref{tabpole}. Higher order perturbative
corrections grow very rapidly, as expected from the dominant asymptotic
behaviour from the pole at $u=1/2$ in Eq.~(\ref{polebt}),
\small\begin{equation}\label{aspole}
d_n \stackrel{n\gg 1}{=} e^{5/6}\, 2^n n!\,.
\end{equation}\normalsize\vspace*{-0.1ex}
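The asymptotic formula can be compared directly with the exact coefficients $d_n$ listed in Table~\ref{tabpole}; a minimal numerical check of the agreement:

```python
import math

# exact coefficients d_1..d_7 from Table \ref{tabpole}
d = [4.6861511, 17.622650, 109.85885, 873.92393,
     8839.6860, 105814.28, 1484968.4]

for n, dn in enumerate(d, start=1):
    asym = math.exp(5.0 / 6.0) * 2 ** n * math.factorial(n)
    assert abs(asym / dn - 1.0) < 0.05   # agreement within 5% for all n >= 1
```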
\begin{table}[t]
$$
\begin{array}{|c||c|c|c|}
\hline
n & d_n & M_n\,[\mbox{c-quark}] & M_n\,[\mbox{b-quark}] \\ \hline\hline
0 & 1 & 1 & 1 \\
1 & 4.6861511 & 2.176 & 1.623 \\
2 & 17.622650 & 3.286 & 1.935 \\
3 & 109.85885 & 5.024 & 2.193 \\
4 & 873.92393 & 8.492 & 2.467 \\
5 & 8839.6860 & 17.30 & 2.835 \\
6 & 105814.28 & 43.76 & 3.420 \\
7 & 1484968.4 & 137.0 & 4.514 \\
8 & 23740736.5 & 511.0 & 6.838 \\ \hline
\infty & - & 1.712\pm 0.608 & 2.041\pm 0.201 \\ \hline
m^*_1 & - & 0.096 \,m_c & 0.096 \,m_b \\
m^*_\infty & - & 0.437 \,m_c & 0.147 \,m_b \\
\hline
\end{array}
$$
\caption{Higher order perturbative corrections to the difference between
the pole and $\overline{\rm MS}$ mass from $n$ fermion loop insertions. The second
and third columns give partial sums and summations for the charm
quark ($a_s=-\beta_0^{(3)}\alpha_s(m_c)=0.251$) and bottom quark
($a_s=-\beta_0^{(4)}\alpha_s(m_b)=0.133$). In the latter case, the c-quark
is treated as massless inside loops.}
\label{tabpole}
\end{table}
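The partial sums $M_n$ quoted in Table~\ref{tabpole} can be reproduced directly from the coefficients $d_n$ and the quoted values of $a_s$; a minimal sketch (we read the last table entry as $d_8=23740736.5$, i.e.\ with its decimal point):

```python
# coefficients d_1..d_8 from Table \ref{tabpole}
# (last entry read as d_8 = 23740736.5)
d = [4.6861511, 17.622650, 109.85885, 873.92393,
     8839.6860, 105814.28, 1484968.4, 23740736.5]

def partial_sums(a):
    """M_n = 1 + sum_{k<=n} d_k a^k; returns [M_0, ..., M_8]."""
    M, s = [1.0], 1.0
    for n, dn in enumerate(d, start=1):
        s += dn * a ** n
        M.append(s)
    return M

charm = partial_sums(0.251)   # a_s = -beta0^(3) alpha_s(m_c)
bottom = partial_sums(0.133)  # a_s = -beta0^(4) alpha_s(m_b)
assert abs(charm[1] - 2.176) < 1e-3 and abs(charm[4] - 8.492) < 1e-3
assert abs(bottom[6] - 3.420) < 1e-3 and abs(bottom[8] - 6.838) < 1e-3
```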
\noindent This asymptotic formula is in fact a very good approximation
(within 5\%) to the exact coefficient $d_n$ for all $n\ge 1$, which
seems to imply that the saddle point approximation of the loop
momentum distribution inherent to deriving the asymptotic
behaviour is a very good substitute for the exact
distribution, even if the width of the Gaussian is not small.
Given that genuine two-loop corrections are rather small compared
to $d_1$, it can be asserted that the exact two-loop coefficient
is already dominated by the first infrared renormalon at $u=1/2$.
Evidently, this observation is scheme-dependent and we do not
know why the $\overline{\rm MS}$ scheme is preferred. Let us note, however, that
a similar coincidence can not be expected and indeed does not
happen (see Sect.~3) for quantities, whose asymptotic behaviour
is dominated by ultraviolet renormalons, like $R_{e^+ e^-}$ or
$R_\tau$, since with $C=-5/3$ in the $\overline{\rm MS}$ scheme, the
leading ultraviolet singularity at $u=-1$ is suppressed compared
to the leading infrared singularity at $u=2$ by a factor
$e^{-5}\approx 7\cdot 10^{-3}$. Therefore the latter is expected
to dominate in intermediate orders \cite{BEN93b} (if there is
any regularity at all) and it might not be an accident that
exact low-order coefficients are indeed of the same sign for these
quantities.
Table~\ref{tabpole} also shows how the one-loop radiative correction
to the pole mass is modified by inclusion of a finite number
of fermion loops and by summation according to Eq.~(\ref{rBSren}).
We have taken $\alpha_s(m_b)=0.2$ and $\alpha_s(m_c)=0.35$.
Recall that the sum corresponds to a principal value prescription for
the Borel integral. The errors quoted correspond to the imaginary
part of the Borel integral, when it is defined by deforming the
contour into the complex plane. The
imaginary part can be taken as an estimate
of the inherent uncertainty of perturbative relations. We have actually
divided this imaginary part by $\pi$, which upon inspection we
find closer to, but somewhat smaller than
the naive estimate of uncertainty by the minimal
term of the series. A more conservative estimate would be to enlarge
these errors by a factor of two. The BLM scales in Table~\ref{tabpole}
are defined by (cf.~Eq.(\ref{blmscalesdef}))
\small\begin{equation}\label{compare}
m^*_1 = m\,\exp\left[-\frac{1}{2a_s}(M_1(a_s)-1)\right]\,, \qquad
m^*_\infty = m\,\exp\left[-\frac{1}{2a_s}
\left(1-\frac{1}{M(a_s)}\right)\right]\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent The uncertainty in $M_\infty(a_s)$ translates into an
uncertainty in the BLM scale $m^*_\infty$ through the previous
equation.
We want to point out that the usual BLM scale $m^*_1$, which
uses only $d_1$ as input, is smaller than $m^*_\infty$, although
all higher orders $d_n$ add up positively. The reason is that
$\alpha_s(m^*_1)$ upon re-expanding in terms of $\alpha_s(m)$
implies $d_n^{\rm BLM}=d_1^n$. For the charm and bottom quark mass
the most important effect comes from $n=1,2$. But since $d_1$
is rather large, $d^{\rm BLM}_2=d_1^2\approx 21.9 > 17.6 = d_2$, and
the usual BLM prescription overestimates the size of
radiative corrections. This behaviour is quite generic to quantities
dominated by scales below a few GeV and with a leading infrared
renormalon at
$u=1/2$.
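The BLM scales of Table~\ref{tabpole} and the overestimate $d_2^{\rm BLM}=d_1^2 > d_2$ follow directly from Eq.~(\ref{compare}); a short numerical check:

```python
import math

d1, d2 = 4.6861511, 17.622650

# m*_1/m = exp(-(M_1 - 1)/(2 a_s)) = exp(-d_1/2) is independent of a_s,
# which is why charm and bottom share the value 0.096 in Table \ref{tabpole}
assert abs(math.exp(-d1 / 2) - 0.096) < 1e-3

# m*_inf/m = exp(-(1 - 1/M_inf)/(2 a_s)), with the summed M_inf from the table
for a, Minf, expected in ((0.251, 1.712, 0.437), (0.133, 2.041, 0.147)):
    assert abs(math.exp(-(1 - 1 / Minf) / (2 * a)) - expected) < 1e-3

# the usual BLM re-expansion implies d_2^BLM = d_1^2, which overshoots d_2
assert d1 ** 2 > d2   # 21.96 > 17.62
```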
\subsection{Effect of internal quark masses}
Before proceeding to realistic charm and bottom quarks we want
to illustrate the effect of finite quark masses inside loops
on the coefficients of diagrams with quark loop insertions in
the toy example of the mass shift due to a single quark flavour
with mass $m_i$ in loops. The ``heavy'' quark of mass $m$
is again excluded from loops.
The factorially large contribution from small and large loop
momenta can be traced to the logarithmic behaviour of the
vacuum polarization for a massless particle at very small and
very large virtuality. In the case of small virtuality, the
large coefficient arises from momenta $k\sim m e^{-n}\ll m$.
For a massive quark the logarithmic behaviour is cut off, because
at very small momenta its vacuum polarization is proportional
to $k^2/m_i^2$. Thus, no matter how small the quark mass, the
factorially large contribution should be eliminated when
$n$ is such that $m e^{-n} < m_i$. There are no infrared renormalon
singularities in the Borel transform in the absence of massless
particles. This expectation is verified in Fig.~\ref{masses1},
where we have plotted the ratio of the coefficient $d_n(m_i)$,
computed with internal mass $m_i$, and $d_n(0)$ for the massless
case as in Table~\ref{tabpole} as a function of the number of loops
$n$ for various ratios of $m_i/m$. Since $m_i$ serves
effectively as an infrared cutoff, Fig.~\ref{masses1} provides
some information on what proportion of the coefficient $d_n(0)$
originates from low momentum regions. Note that low momentum
means low momentum compared to $m$ and not $\Lambda_{\rm QCD}$.
Eventually, as $n$ becomes
large, coefficients will
be dominated by large momentum for any $m_i$ and the asymptotic
behaviour of Eq.~(\ref{aspole}) is replaced by a sign-alternating
behaviour due to the leading
ultraviolet renormalon
\begin{figure}[t]
\vspace{-4.5cm}
\epsfysize=16.8cm
\epsfxsize=12cm
\centerline{\epsffile{masses1.eps}}
\vspace*{-3.5cm}
\caption{\label{masses1} Ratio $d_n(m_i)/d_n(0)$ for different values
of internal quark masses $m_i$ as a function of the number of fermion
loop insertions. The value of $m_i^2/m^2$ is indicated to the right
of each
curve.}
\end{figure}
\small\begin{equation}\label{aspolem}
d_n(m_i) \stackrel{n\gg 1}{=} \left[-1+\frac{9}{2}
\frac{m_i^2}{m^2}\right] e^{-5/3}\, (-1)^n n!\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent We find that for $m_i^2/m^2 \ge 0.1$, this asymptotic behaviour
practically coincides with the exact coefficient for $n>6$.
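Dividing Eq.~(\ref{aspolem}) by its massless limit ($m_i=0$), the common factor $e^{-5/3}(-1)^n n!$ cancels, so at large $n$ the curves of Fig.~\ref{masses1} should approach the plateau $1-(9/2)\,m_i^2/m^2$. A minimal numerical sketch of these plateau values:

```python
# Large-n limit of d_n(m_i)/d_n(0) implied by Eq. (aspolem):
# d_n(m_i) ~ [-1 + (9/2) m_i^2/m^2] e^(-5/3) (-1)^n n!,
# so the factor e^(-5/3) (-1)^n n! drops out of the ratio.
def ratio_asymptotic(r):
    """Plateau value of d_n(m_i)/d_n(0) for r = m_i^2/m^2."""
    return (-1.0 + 4.5 * r) / (-1.0)

for r in (0.01, 0.1, 0.2):
    print(f"m_i^2/m^2 = {r}: d_n(m_i)/d_n(0) -> {ratio_asymptotic(r):.3f}")
# e.g. m_i^2/m^2 = 0.1 gives a plateau at 0.55
```

For $m_i^2/m^2\ge 0.1$ these plateaus are reached quickly, in accordance with the statement above that the asymptotics sets in for $n>6$.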
The presence of mass dependence in the coefficient of ultraviolet
renormalons requires some explanation. The singularity of the
Borel transform at $u=-1$, which is responsible for Eq.~(\ref{aspolem}),
is due to the presence of a $1/k^6$-term in the expansion of the
Feynman integrand for the one-loop mass-shift at large $k$:
\small\begin{equation} \delta m \sim \alpha_s \,m \int d^4 k \left(\frac{a}{k^4}+
b\frac{m^2}{k^6} + \ldots\right)
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent The $1/k^4$-term creates a logarithmic ultraviolet divergence,
which is subtracted by minimal subtraction. The effect of the Borel
transformed gluon propagator (with fermion loop insertions) amounts
to insertion of (see Sect.~2.3)
\small\begin{equation} \label{massbtprop}
\exp(-u \Pi(k^2)/a_s) = \left(-\frac{m^2}{k^2} e^{-C}\right)^{\!u}
\left[
1+6 u \frac{m_i^2}{k^2} +O\left(\frac{m_i^4}{k^4}\right)\right]
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent into the integrand. The pole at $u=-1$ arises, because close
to $u=-1$ the $1/k^6$-term is converted into a logarithmically
divergent term. But when $m_i$ is not zero a second contribution
of order $1/k^6$ arises, when $a/k^4$ combines with the first
mass correction in Eq.~(\ref{massbtprop}). This term accounts for
the mass-dependence of ultraviolet renormalons
in minimal subtraction schemes. Had one chosen a
renormalization prescription, which subtracts the $a/k^4$-term
inside the integrand, that mass-dependence would be absent for
the first ultraviolet renormalon at $u=-1$, but still present
in coefficients of singularities at $u=-2,-3\ldots$.
\subsection{Charm and bottom pole masses}
For realistic charm and bottom quarks, the numerical results of
Sect.~4.1 need to be amended in two respects: The quark whose
mass shift is considered should also be taken into account in
loops. This effect is tiny, but we include it for completeness.
More interesting is the effect of finite charm mass on the bottom
pole mass, because the charm mass might be larger than
the typical loop momentum already in low orders. If so, it would seem
more appropriate to use $\beta_0^{(3)}$ rather than $\beta_0^{(4)}$,
when restoring the QCD $\beta$-function coefficient from the
calculation of fermion loops. Strictly speaking, changing the
mass of one flavour is a
negligible effect in the formal large-$\beta_0$ limit (i.e.
large-$N_f$), and
should be discarded in a consistent approximation.
In reality,
the effect is numerically noticeable and we prefer to include it
as a calculable correction, keeping in mind its size as an
uncertainty in our approach.
To gain some understanding of the numerical results to follow, we
trace the decoupling of internal charm loops for the bottom pole mass
in more detail. For this purpose, we ignore again the bottom quark in
loops. When the order of perturbation theory is sufficiently large,
internal integrations are dominated by momenta much smaller
than $m_c$. Let us denote coefficients including charm loops by
$d_n^{[3+1]}(m_c)\,(-\beta_0^{(4)})^n$ and
those without charm loops
by $d_n^{[3]}\,(-\beta_0^{(3)})^n$. Then one might expect
\small\begin{equation} \label{decoup1}
d_n^{[3+1]}(m_c)\,\left(-\beta_0^{(4)}\right)^n \alpha_s(m_b)^{n+1}
\approx
d_n^{[3]}\,\left(-\beta_0^{(3)}\right)^n \alpha_s(m_b)^{n+1}
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent if $n$ is sufficiently large up to corrections suppressed by
a power of the typical internal momentum divided by $m_c$. This is
not quite correct, because there is no manifest decoupling in the $\overline{\rm MS}$
scheme. In the limit that we consider only the contribution
from small $\lambda^2$, appropriate for sufficiently large $n$, the
Borel transform including the massive charm in Eq.~(\ref{rBSmass})
reduces to
\small\begin{equation}\label{decoupborel}
B\left[\frac{\delta m}{m}\right](u)
= \exp\!\left(\frac{u}{6\pi\beta_0^{(4)}}\ln
\frac{\mu^2}{m_c^2}\right)\!\left(-\frac{1}{\pi}\right)
\sin\!\left(\frac{\pi u\beta_0^{(3)}}{\beta_0^{(4)}}\right)\!\!\int\limits_0^\infty
\frac{d\lambda^2}{\lambda^2}\,(r_0(\lambda^2)-r_0(0)) \!\left(
\frac{\lambda^2}{\mu^2 e^{5/3}}\right)^{\!\!u \beta_0^{(3)}/
\beta_0^{(4)}}
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent up to power corrections. If the first factor
were absent, this Borel transform would
coincide with the one for purely massless flavours up to a rescaling
of $u$ which is just what is necessary to obtain Eq.~(\ref{decoup1})
from Eq.~(\ref{genfunction}). The first factor is present, because
the vacuum polarization for charm has been renormalized in the
$\overline{\rm MS}$ scheme
and not by zero momentum subtraction, which would lead to manifest
decoupling. The difference can be accommodated by a change of the
coupling
constant. This is most easily seen by combining the extra exponential
factor with the exponential in the Borel integral, Eq.~(\ref{borelintegral}).
We deduce that (up to power corrections in the typical loop
momentum divided by $m_c$)
\small\begin{equation} \label{decoup2}
d_n^{[3+1]}(m_c) \,\left(-\beta_0^{(4)}\right)^n
\alpha_s^{(4)}(m_b)^{n+1} \approx
d_n^{[3]} \,\left(-\beta_0^{(3)}\right)^n \alpha_s^{(3)}(m_b)^{n+1}\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where the two couplings are related by
\small\begin{equation}
\frac{1}{\alpha_s^{(3)}(\mu)} = \frac{1}{\alpha_s^{(4)}(\mu)} +
\frac{1}{6\pi} \,\ln\frac{\mu^2}{m_c^2}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
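For orientation, the matching relation above is easy to evaluate; the sketch below uses the illustrative inputs $\alpha_s^{(4)}(m_b)=0.2$ and $m_c^2/m_b^2=0.1$ employed later in this section:

```python
import math

def alpha3_from_alpha4(alpha4, mu2_over_mc2):
    """One-loop matching: 1/alpha^(3)(mu) = 1/alpha^(4)(mu) + ln(mu^2/m_c^2)/(6 pi)."""
    return 1.0 / (1.0 / alpha4 + math.log(mu2_over_mc2) / (6.0 * math.pi))

alpha4 = 0.2           # alpha_s^(4)(m_b), illustrative value used in the text
mu2_over_mc2 = 10.0    # mu = m_b with m_c^2/m_b^2 = 0.1
alpha3 = alpha3_from_alpha4(alpha4, mu2_over_mc2)
print(f"alpha_s^(3)(m_b) = {alpha3:.5f}")  # ~ 0.1952, slightly below alpha_s^(4)(m_b)
```

The three-flavour coupling at $\mu=m_b$ is only a few per cent smaller than the four-flavour one, which is why charm decoupling is a noticeable but not dramatic effect in low orders.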
\noindent Alternatively, we can simply replace $u$ in the
extra exponential factor in Eq.~(\ref{decoupborel})
by the location of the closest infrared
pole of the Borel transform, $u=1/2$, and obtain
\phantom{\ref{tabmc},\ref{tabmb}}
\small\begin{equation} \label{decoup3}
d_n^{[3+1]}(m_c) \,\left(-\beta_0^{(4)}\right)^n
\alpha_s(m_b)^{n+1} \approx
\exp\left(\frac{1}{12\pi\beta_0^{(4)}}\ln
\frac{m_b^2}{m_c^2}\right)
d_n^{[3]} \,\left(-\beta_0^{(3)}\right)^n \alpha_s(m_b)^{n+1}\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent which is equivalent to Eq.~(\ref{decoup2}) for large $n$.
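Equation~(\ref{decoup3}) can be checked against the numbers collected below in Table~\ref{tabmb}, where bottom loops are also included, so the prefactor involves $\beta_0^{(5)}$ as in the table caption. The sketch below (with $\beta_0^{(n_f)}=-(11-2n_f/3)/(4\pi)$ and the $d_n^{[3]}$ values from the tables) reproduces the factor $K$ and column 4 of Table~\ref{tabmb}:

```python
import math

def beta0(nf):
    """One-loop coefficient in the convention beta(alpha) = beta0*alpha^2 (negative in QCD)."""
    return -(11.0 - 2.0 * nf / 3.0) / (4.0 * math.pi)

d3 = [1.0, 4.6862, 17.623, 109.86, 873.92]                 # d_n^[3] from the tables
K = math.exp(math.log(10.0) / (12.0 * math.pi * beta0(5)))  # m_b^2/m_c^2 = 10
vals = [K * d3[n] * (beta0(3) / beta0(5)) ** n for n in range(1, 5)]

print(f"K = {K:.4f}")             # ~ 0.90, as quoted in the table caption
for n, v in zip(range(1, 5), vals):
    print(f"n = {n}: {v:.3f}")
# reproduces column 4 of Table [tabmb]; the agreement with the exact
# d_n^[3+2] in column 3 improves with n, as expected for large n
```

As the text states, the relation becomes accurate only at large $n$: at $n=1$ the approximation is off by several per cent, while at $n=3,4$ it agrees with the exact massive coefficients at the per-cent level.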
\begin{table}[t]
$$
\begin{array}{|c||c|c|c|c|}
\hline
n & d_n^{[3]} & d_n^{[3+1]} & d_n^{[3]}\,
(\beta_0^{(3)}/\beta_0^{(4)})^n & M_n^{[3+1]}\\ \hline\hline
0 & 1 & 1 & 1 & 1\\
1 & 4.6862 & 5.0984 & 5.0613 & 2.183 \\
2 & 17.623 & 20.514 & 20.555 & 3.291 \\
3 & 109.86 & 138.58 & 138.38 & 5.021 \\
4 & 873.92 & 1188.3 & 1189.0 & 8.470 \\ \hline
\infty & - & - & - & 1.720 \pm 0.608 \\ \hline
m^*_1 & - & - & - & 0.078 \, m_c \\
m^*_\infty & - & - & - & 0.406 \, m_c \\
\hline
\end{array}
$$
\caption{Perturbative corrections to the pole charm quark mass with
(column 3) and without (column 2) charm inside loops. For comparison,
column 4 gives coefficients in the limit that all internal
masses are large compared to the typical loop momentum. The last
column gives the modification of the one-loop correction due to the
coefficients $d_n^{[3+1]}$.}
\label{tabmc}
\end{table}
\begin{table}[t]
$$
\begin{array}{|c||c|c|c|c|}
\hline
n & d_n^{[3]} & d_n^{[3+2]} & K\,d_n^{[3]}\,
(\beta_0^{(3)}/\beta_0^{(5)})^n & M_n^{[3+2]}\\ \hline\hline
0 & 1 & 1 & 1 & 1\\
1 & 4.6862 & 5.3093 & 4.9774 & 1.648 \\
2 & 17.623 & 22.720 & 21.975 & 1.986 \\
3 & 109.86 & 159.69 & 160.82 & 2.276 \\
4 & 873.92 & 1.5\cdot 10^3 & 1502.0 & 2.601 \\ \hline
\infty & - & - & - & 2.099 \pm 0.224 \\ \hline
m^*_1 & - & - & - & 0.064\, m_b \\
m^*_\infty & - & - & - & 0.117\, m_b \\
\hline
\end{array}
$$
\caption{Coefficients as
in the previous table, where $d_n^{[3+2]}$ now includes charm
and bottom masses with $m_c^2/m_b^2=0.1$. $K=\exp(1/(12\pi\beta_0^{(5)})
\ln(m_b^2/m_c^2))\approx 0.9$.}
\label{tabmb}
\end{table}
Numerical values for corrections to the charm pole mass and the bottom
pole mass, including the charm and bottom masses in loops, are
given in Tables~\ref{tabmc} and \ref{tabmb}. For the bottom pole mass,
the result depends only on the ratio $m_c/m_b$ and we used
$m_c^2/m_b^2=0.1$. For comparison coefficients
according to Eq.~(\ref{decoup3}) are given. The last column in both
tables displays the modification of the one-loop mass shift by
running coupling effects. Note that at the charm scale, the perturbative
series has to be truncated already at second order. The lower rows
give the result of summation according to Eq.~(\ref{rBSmass}) and the
corresponding BLM scales, Eq.~(\ref{compare}). Two technical observations
are in order: First, as mentioned in Section 2, the sum as defined by
Eq.~(\ref{rBSmass}) does not exactly coincide with the principal
value of the Borel integral. The difference is tiny for the value of
masses considered here and does not affect our conclusions which are
based on the behaviour of perturbation theory in low and intermediate
orders. Second, for the bottom quark the
BLM scale is defined without taking into account the charm threshold
in the running coupling.
The difference between the pole mass and the $\overline{\rm MS}$-renormalized
mass (normalized at $m_{\overline{\rm MS}}$) for bottom quarks can now be
written as
\small\begin{equation}
\frac{\delta m_b}{m_b} = \frac{4}{3}\frac{\alpha_s(m_b)}{\pi} \left[
M_\infty^{[3+2]}(-\beta_0^{(5)} \alpha_s(m_b))-0.91 \alpha_s(m_b)
+\mbox{higher orders}\right]\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where the second term in square brackets accounts for the genuine
gluonic two-loop corrections, cf.~Eq.~(\ref{m1}).
Unknown higher order corrections
include genuine three- and higher loop corrections, vacuum polarization
insertions into two-loop corrections as well as effects of two-loop
running on lowest order radiative corrections.
Numerically, for $\alpha_s(m_b)=0.2$,
we find
\small\begin{equation}
\frac{\delta m_b}{m_b} = (16.3\pm 2.9\pm 1.5)\%\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Compared to the two-loop expression, the estimate obtained from
NNA increases the mass shift from about $12\%$ to $16\%$.
The second error reflects our estimate
of unknown higher order corrections, which we allow to be as large
as the genuine two-loop corrections. The first error, which
dominates the total uncertainty, represents an estimate of the
ultimate accuracy of $\delta m_b/m_b$ due to the divergence of the
perturbative series. It cannot be reduced by calculating higher
orders (but of course refined -- the numerical value we quote
has been obtained by increasing
the uncertainty of $M_\infty^{[3+2]}(a_s)$
in Table~\ref{tabmb} by 50\%, upon which it is close to the
minimal term of the series at $n\approx 2 - 3$). In absolute values,
the intrinsic uncertainty of $\delta m_b$ and therefore of
the pole mass of
the bottom quark ranges between $100$ and $150\,$MeV. The above
errors do not include an uncertainty in $\alpha_s(m_b)$, which
has been fixed to 0.2.
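The central value quoted above follows directly from the table entries. The sketch below evaluates the bracket with $M_\infty^{[3+2]}=2.099$ from Table~\ref{tabmb}, and compares with a two-loop truncation built from $d_1^{[3+2]}=5.3093$ (our reading of the ``two-loop expression'' referred to above):

```python
import math

alpha_s = 0.2                                       # alpha_s(m_b), fixed as in the text
beta0_5 = -(11.0 - 10.0 / 3.0) / (4.0 * math.pi)    # five-flavour beta0
prefac = 4.0 / 3.0 * alpha_s / math.pi

# resummed estimate: bracket = M_infty^[3+2] - 0.91 alpha_s
M_inf = 2.099                                       # central value from Table [tabmb]
full = prefac * (M_inf - 0.91 * alpha_s)

# two-loop truncation for comparison: 1 + d_1^[3+2] (-beta0^(5)) alpha_s - 0.91 alpha_s
d1 = 5.3093
two_loop = prefac * (1.0 + d1 * (-beta0_5) * alpha_s - 0.91 * alpha_s)

print(f"resummed : delta m_b/m_b = {100 * full:.1f}%")      # ~ 16.3%
print(f"two-loop : delta m_b/m_b = {100 * two_loop:.1f}%")  # ~ 12%
```

This reproduces both the quoted central value of $16.3\%$ and the increase from about $12\%$ mentioned in the comparison with the two-loop expression.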
All results were given in terms of $m_{\overline{\rm MS}}(m_{\overline{\rm MS}})$. To run
to a different renormalization point $\mu$, one
should use the anomalous dimension to the same large-$\beta_0$
approximation. In the $\overline{\rm MS}$-scheme it is given by
\small\begin{equation}
\gamma_m(\alpha_s)\equiv -\frac{\mu^2}{m}\frac{d m(\mu)}{d\mu^2}
= -\frac{\alpha_s}{3\pi}\,G_0(-\beta_0\alpha_s)
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent with $G_0$ as in Eq.~(\ref{subfunction}) in agreement with
\cite{PAL84}. Thus, in this approximation
\small\begin{equation}
m(\mu)=m(m)\,\exp\left(\,\,\int\limits_{\alpha_s(m)}^{\alpha_s(\mu)}
\frac{d\alpha_s}{3\pi\beta_0\alpha_s}\, G_0(-\beta_0\alpha_s)
\right)\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent To the approximation considered, one may replace the
exponential by the first two terms of its expansion. In practice,
it might be better to consider the approximation as an approximation
to the exponent and to keep the exponential.
\renewcommand{\topfraction}{1}
\renewcommand{\textfraction}{0}
\mysection{Scale-setting at next-to-leading order}
{Scale-setting at next-to-leading order}
So far we have been discussing a scale-setting procedure that utilizes only
one-loop evolution of the strong coupling. At the same number of loops,
where this procedure extends the familiar scheme of Brodsky, Lepage
and Mackenzie, one also encounters diagrams associated with two-loop
evolution, which are not suppressed compared to insertions of single
fermion loops by any small parameter. This Section is devoted to the
possibility of and difficulty in extending the scale-setting to
next-to-leading order (NLO). The precise meaning of NLO in this context
is illustrated in Fig.~\ref{nlodiag}: For any process, we will calculate
the class of higher order corrections, generated by substituting the
gluon (photon) propagator in lowest order by the
chains of Fig.~\ref{nlodiag}a
(this was done in previous Sections) and Fig.~\ref{nlodiag}b.
In the abelian theory,
the results are exact, while in the nonabelian theory, one must again
define how to restore the coefficients $\beta_0$ and $\beta_1$ of
the nonabelian $\beta$-function. Let us note that as in the case of
one-loop running this extension is heuristically motivated by the
behaviour of perturbation theory in large orders. Two-loop running
is known \cite{MUE85,ZAK92} to modify the
strength of renormalon singularities from poles to branch points
involving the ratio $\beta_1/\beta_0^2$. The class of diagrams in
Fig.~\ref{nlodiag}b is expected to dominate at large $n$ over one-loop
running by a factor $\beta_1/\beta_0^2 \ln n$ and is the first in
a series of multiple two-loop insertions that exponentiate to produce
an enhancement factor $n^{-\beta_1/\beta_0^2}$. We are thus again led
by the expectation that a class of systematically large corrections
can be identified and calculated.
Consider a physical quantity,
with perturbative expansion written as
\small\begin{equation} R-R_{tree} = \sum_{n=0} r_n \alpha_s(Q)^{n+1}\,,\qquad
r_n=r_{n0}+r_{n1} N_f+\ldots+r_{nn} N_f^n\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent assuming for simplicity that $R$ depends only on a single scale
$Q$, which is equal to the renormalization scale, such that the
$r_n$ are numerical coefficients.
Since the scale-setting prescription is derived from QED (that is,
its fictitious version with $N_f$ massless fermions), it is useful
to consider first the abelian analogue of $R$. Due to the Ward
identity, the evolution of the coupling in the abelian theory is
generated by radiative corrections to the photon propagator. It is
then natural to define the scale of the coupling at each order in
perturbation theory by replacing each photon propagator $1/k^2$ by
the full propagator $1/(k^2 (1+\Pi(k^2)))$ and absorbing the effect
of integrating loops with full propagators into the normalization
of the coupling constant to the order where the corresponding skeleton
diagram appears. In this way, one is led to a modified expansion
\cite{BRO83}
\begin{figure}[t]
\vspace{-1cm}
\epsfysize=28cm
\epsfxsize=20cm
\centerline{\epsffile{nlodiag.eps}}
\vspace*{-21.5cm}
\caption{\label{nlodiag} QED-like diagrams incorporating evolution of
the coupling to leading (a) and next-to-leading order (b). A circle with
letter $m$ denotes a chain of $m$ fermion loops. At order $\alpha^{k+1}$,
the relevant diagrams are specified by $n=k$ and $n_1+n_2+n_3=k-2$. At NLO,
the diagrams, where the $n_2$-chain forms a self-energy-type
insertion, are
not depicted.}
\end{figure}
\small\begin{equation} R-R_{tree} = r_0\left[\alpha_s(Q^*) + \delta_1 \alpha_s(Q^{**})^2
+\ldots\right]\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Specifically, the scale of the coupling at order $\alpha_s$ is
determined by the replacement
\small\begin{equation} \label{mom}
\alpha_R(Q) \rightarrow \alpha_V(k)\equiv
\frac{\alpha_R(Q)}{1+\Pi_R(k^2/Q^2,\alpha_R(Q))}
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent inside the loop-integration over $k$, where $k$ is the
momentum of the virtual photon. We have attached a subscript $R$ to
quantities that depend on the choice of the
renormalization scheme $R$. On
the right hand side we notice that $\alpha_V(Q)$ is the effective
coupling, defined by the potential between two static sources in momentum
space,
\small\begin{equation}
V(Q)=-\frac{4\pi\alpha_V(Q)}{Q^2}\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent which defines the V-scheme \cite{BRO83}. Thus, in {\em any} scheme
$R$, the scale $Q_R^*$ is given by averaging the lowest order radiative
corrections to the quantity $R$ with the running coupling in the
V-scheme, $\alpha_V(k)$, and not\footnote{At the level of
one-loop running, the relevant transformation of schemes is
simply a shift of scale, since, to this approximation,
\small\begin{displaymath}
\alpha_V(k)=\frac{\alpha_R(k)}{1-\beta_0 C_R\alpha_R(k)} = \alpha_R
\left(k e^{C_R/2}\right)\,.
\end{displaymath}\normalsize\vspace*{-0.1ex}} with $\alpha_R(k)$. By construction
$\alpha_R(Q_R^*)$ is scheme-independent.
The choice of $\alpha_V(k)$ to average the loop momentum distribution
is physically appropriate, because the photon exchanged between
static sources at distance $r$ has momentum $k\sim 1/r$. Therefore
it is in this scheme that $\alpha(k)$ can be interpreted as the
effective coupling of a virtual photon of momentum $k$.
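At the one-loop level described in the footnote, the scheme change is literally a scale shift. The following sketch verifies numerically that $\alpha_R(k)/(1-\beta_0 C_R\,\alpha_R(k))$ equals the one-loop coupling at the shifted scale $k\,e^{C_R/2}$; the choice $C_R=-5/3$ (the $\overline{\rm MS}$ constant appearing elsewhere in this paper) and the reference values of scale and coupling are illustrative:

```python
import math

beta0 = -(11.0 - 2.0 * 5 / 3.0) / (4.0 * math.pi)  # five-flavour QCD beta0, illustrative

def alpha_1loop(k, Q, alpha_Q):
    """One-loop running in the convention beta(alpha) = beta0*alpha^2:
    1/alpha(k) = 1/alpha(Q) - beta0 * ln(k^2/Q^2)."""
    return 1.0 / (1.0 / alpha_Q - beta0 * math.log(k ** 2 / Q ** 2))

Q, alpha_Q = 5.0, 0.2      # illustrative reference scale and coupling
C = -5.0 / 3.0             # MS-bar scheme constant
k = 2.0

a_k = alpha_1loop(k, Q, alpha_Q)
lhs = a_k / (1.0 - beta0 * C * a_k)                  # footnote formula
rhs = alpha_1loop(k * math.exp(C / 2.0), Q, alpha_Q)  # pure scale shift
print(abs(lhs - rhs) < 1e-12)
```

The identity is exact at one loop, since $1/\alpha_R(k e^{C/2}) = 1/\alpha_R(k) - \beta_0 C$; at two loops and beyond, changing schemes is no longer a pure rescaling.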
With this remark one is prepared to adopt the same definition in
the nonabelian theory. $Q^*$ is defined as \cite{LEP93}
\small\begin{equation}\label{defofq}
r_0\alpha_R(Q^*) \equiv \int d^4 k\,F(k,Q)\,\frac{\alpha_V(k)}{k^2}
\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where $F(k,Q)$ is the integrand of the lowest order radiative correction
and $C_F 4\pi \alpha_V(k)/k^2$ the potential between two static
colour sources in momentum space, given by a Wilson loop. The Wilson
loop is nonperturbatively well-defined (up to a distance-independent
contribution) and the above integral could be evaluated without any
ambiguity. In the following, we will use it only perturbatively, in the
sense of an expansion of $\alpha_V(k)$ in $\alpha_V(Q)$. This restriction
is not only self-imposed. Power corrections to the static potential might
be different and in particular larger than those to the quantity $R$,
in which case one would not like to use Eq.~(\ref{defofq}) with power-like
accuracy. Let us also note that one-loop gauge boson exchange exponentiates
exactly in the abelian Wilson loop but not in the nonabelian case.
Therefore re-expressing $\alpha_V(Q)$ in terms of some other $\alpha_R(Q)$
inevitably involves contributions to the Wilson loop with more than one
gluon exchanged.
The exact (perturbative) evaluation of Eq.~(\ref{defofq}) is impossible
even in the abelian theory. One has to resort to some truncation of
(a) the $\beta$-function in the $V$-scheme, needed to relate $\alpha_V(k)$
to $\alpha_V(Q)$ and (b) the relation between the coupling in the V-scheme
and some other scheme $R$, if one wishes to express the result in
terms of $\alpha_R$. To leading order, the $\beta$-function has been
replaced by $\beta_0 \alpha^2$. After expansion of $\alpha(k)$ in
$\alpha(Q)$, this corresponds to the diagrams of Fig.~\ref{nlodiag}a
in QED. In this section we discuss a truncation, which is
guided by the subleading $N_f$-dependence of coefficients and incorporates
the set of diagrams depicted in Fig.~\ref{nlodiag}b in addition to those
in Fig.~\ref{nlodiag}a.
Let us first consider $n_1=n_3=0$ in Fig.~\ref{nlodiag}b.
The diagram with $n_2=0$ contributes $\beta_1\alpha^3$ to the abelian
$\beta$-function. For $n_2>0$, these diagrams contribute
to higher order coefficients of the $\beta$-function. Again in QED,
this contribution can be written as
$b_{n_2+1} (-\beta_1) (-\beta_0)^{n_2}$
with an $N_f$-independent number $b_{n_2+1}$. When expanding
$\alpha_V(k)$ as
\small\begin{equation} \label{expexp}
\alpha_V(k)= \exp\left[\ln\frac{k^2}{Q^2}\,\beta_V(\alpha^\prime)
\frac{d}{d\alpha^\prime}\right]\alpha^\prime_{\big|\alpha^\prime=
\alpha_V(Q)}\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent the diagrams of Fig.~\ref{nlodiag} are recovered by
using
\small\begin{equation}\label{betav}
\beta_V(\alpha)=\beta_0\alpha^2+\beta_1 \sum_{n=1}^\infty b_n
(-\beta_0)^{n-1} \alpha^{n+2}\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent but keeping only terms with at most one power of $\beta_1$. A
different truncation could be imagined, where the $\beta$-function
is kept only up to $\beta_1\alpha^3$ but all powers of $\beta_1$ are
taken into account in the expansion of Eq.~(\ref{expexp}).
The difference from the previous
truncation first appears at order $\alpha^5$.
The transition to the nonabelian theory is arguably uniquely
achieved at leading order by replacing the abelian $\beta_0$ by
its nonabelian value. At NLO one has to make a choice that is
less automatic: Calculate the insertions of the
diagrams of Fig.~\ref{nlodiag}b
and express them as $c_n (-\beta_1)(-\beta_0)^{n-2}\alpha(Q)^{n+1}$
in QED. Then replace $\beta_0$ and $\beta_1$ by their nonabelian
values. In effect this implies substitution of the true $\beta$-function
in the V-scheme by Eq.~(\ref{betav}). We can now write the perturbative
coefficients of the quantity $R$ as
\small\begin{equation}\label{conlo}
r_n = r_{n0}+\ldots+r_{n,n-1}N_f^{n-1}+r_{nn} N_f^n
\equiv r_0\left[\delta_n -\beta_1 (-\beta_0)^{n-2}
c_n+(-\beta_0)^n d_n
\right]\qquad n\ge2\,,\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where, according to the prescription, the last two
terms are absorbed into
$Q^*$. Both sets of coefficients $c_n$ and $d_n$ are computed
from abelian diagrams. Note that $d_n$ is constructed so as to eliminate
the highest power of $N_f$. On the other hand, to NLO, the
remainder $\delta_n$ still contains subleading flavour dependence,
$N_f^{n-1}$, which arises from insertion of fermion loops into
diagrams with two gluon lines as well as the effective three-gluon
coupling generated by attaching three gluons to a fermion loop.
Before turning to the calculation of $c_n$, we want to illustrate the
prescription at order $\alpha_s^3$. Up to this order, the expansion
of $R$ in a certain scheme with coupling $\alpha_s$
can be written as
\small\begin{equation}\label{rr}
R-R_{tree} = r_0\left[\alpha_s(Q)+\left\{\delta_1+(-\beta_0) d_1\right\}
\alpha_s(Q)^2+\left\{\delta_2 -\beta_1 c_1+(-\beta_0)^2 d_2\right\}
\alpha_s(Q)^3\right]\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent From Eq.~(\ref{defofq}) we obtain
\small\begin{equation}
\alpha_s(Q^*)=\alpha_V(Q)+(-\beta_0) d_1 \alpha_V(Q)^2 + \left\{
(-\beta_1) d_1+(-\beta_0)^2 d_2\right\}
\alpha_V(Q)^3
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where the (scheme-dependent) $d_n$ are computed as in Section~2
from diagrams with fermion loop insertions.
By comparison we find that in the V-scheme ($\alpha_s(Q)=\alpha_V(Q)$)
$c_1=d_1$. The remaining
linear $N_f$-dependence in $\delta_2$ can then be absorbed into
a scale $Q^{**}$ of the $\alpha_s^2$-correction \cite{BRO94}.
Notice, however, that
while $Q^*$ is defined without knowledge of the
exact $\alpha_s^3$-coefficient,
the definition of $Q^{**}$ in the nonabelian theory requires
the exact $\alpha_s^3$-result.
To obtain the scale $Q^*$ as an
expansion in the coupling
different from the V-scheme, one has to relate the couplings to the same
accuracy and expand $\alpha_V(Q)$ in, say, $\alpha_{\overline{\rm MS}}(Q)$ in the
same form as Eq.~(\ref{rr}):
\small\begin{eqnarray}\label{rel}
\alpha_V(Q) &=& \alpha_{\overline{\rm MS}}(Q)+\left\{\gamma_1+(-\beta_0)
\left(-\frac{5}{3}
\right)\right\}\alpha_{\overline{\rm MS}}(Q)^2\nonumber\\
&&\,+\left\{\gamma_2 -\beta_1
\left(-\frac{55}{12}+4\zeta(3)\right)+(-\beta_0)^2 \frac{25}{9}
\right\}\alpha_{\overline{\rm MS}}(Q)^3
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent One can then determine $c_1$ in the $\overline{\rm MS}$ scheme. Once $d_1$,
$d_2$ and $c_1$ are determined, $Q^*_2$ to $\alpha_s^3$-accuracy
is given by
\small\begin{equation} Q^*_2=Q\,e^{-d_1/2}\left(1+\left\{(-\beta_0) b_1+\tilde{b}_1
\frac{\beta_1}{\beta_0}\right\}
\alpha_s(Q)\right)\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent $b_1$ and $\tilde{b}_1$ are uniquely determined by the
condition
\small\begin{eqnarray} \alpha_s(Q^*_2) &=& \alpha_s(Q)+(-\beta_0) d_1 \alpha(Q)^2
+\left\{(-\beta_0)^2 (d_1^2-2 b_1)-(d_1-2 \tilde{b}_1)\beta_1
\right\} \alpha_s(Q)^3+\ldots\nonumber\\
&\equiv& \alpha_s(Q)+(-\beta_0) d_1 \alpha(Q)^2
+\left\{(-\beta_0)^2 d_2-\beta_1 c_1\right\}\alpha_s(Q)^3\,,
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent which specifies that the diagrams of Fig.~\ref{nlodiag}
relevant at order $\alpha_s^3$ are absorbed into $Q^*_2$ in
any scheme. In the V-scheme $\tilde{b}_1=0$, but in general
$\tilde{b}_1$ is non-zero.
In the V-scheme the prescription outlined here
coincides exactly with the one by Brodsky and Lu \cite{BRO94}.
When $Q^*_2$ is defined in other schemes, we suspect that
$Q^*$ defined in \cite{BRO94} does not absorb the QED diagrams of
Fig.~\ref{nlodiag}b and part of the flavour-dependence
from these diagrams is hidden in $Q^{**}$, since $\tilde{b}_1$
is always zero in \cite{BRO94}.
When we compute $c_n$ in the following Section, rather than using the
V-scheme in intermediate steps, we will perform the average of
Eq.~(\ref{defofq}) directly in an arbitrary scheme by replacing
$\alpha_V(k)$ with the appropriately truncated form of Eq.~(\ref{mom}).
\subsection{Abelian diagrams at NLO}
We now evaluate the perturbative coefficients generated by inserting
the chains of Fig.~\ref{nlodiag} into the gluon (photon) line which
appears in lowest order radiative corrections to an observable $R$.
In this subsection $\beta_0$ and $\beta_1$ refer to their abelian values:
$\beta_0=N_f/(3 \pi)$ and $\beta_1=N_f/(4\pi^2)$. These chains have
previously been considered for particular quantities: The muon anomalous
magnetic moment \cite{BRO93} and the self-energy of a static quark
\cite{BEN94}. Because vacuum polarization insertions are universal, the
results can easily be adapted to the general case. The vacuum polarization
to the present approximation is given by the diagrams with $n=1$ and
$n_1,n_3=0$ in Fig.~\ref{nlodiag} ($Q^2$ is Euclidean):
\small\begin{equation} \Pi\left(\frac{Q^2}{\mu^2},\alpha(\mu)\right) = -\beta_0\alpha(\mu)
\left[\ln\frac{Q^2}{\mu^2}+C\right] + \beta_1\Pi_1\left(
\frac{Q^2}{\mu^2},\alpha(\mu)\right)
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent When $\alpha_V(k)$ is expanded inside the
integrand of Eq.~(\ref{defofq}),
the result is expressed in terms of coefficients of the vacuum
polarization and averages of $\ln^n(k^2/Q^2)$ in the lowest
order radiative correction $F(k,Q)$, which have already been
evaluated through the insertion of the diagrams of
Fig.~\ref{nlodiag}a. Therefore, since
$\Pi_1$ is known, the extension to incorporate Fig.~\ref{nlodiag}b is
merely a combinatorial problem. To organize the combinatorics, it is
convenient to introduce the Borel transform. First, we define the
leading order and next-to-leading order Borel transforms as the
results of inserting the diagrams
with a single chain (Fig.~\ref{nlodiag}a) and
the sum of both sets of diagrams in Fig.~\ref{nlodiag}, respectively.
{}From Eq.~(\ref{borelsum})
and Eq.~(\ref{conlo}):
\small\begin{eqnarray}
B_{\rm LO}[R](u) &\equiv& r_0 \sum_{n=0}^\infty\frac{d_n}{n!} u^n
\nonumber\\
B_{\rm NLO}[R](u) &\equiv& r_0 \sum_{n=0}^\infty\frac{d_n^{\rm NLO}}
{n!} u^n\qquad d_n^{\rm NLO}=d_n-\frac{\beta_1}{\beta_0^2} c_n \quad
(n\ge 2)
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent and $d_n^{\rm NLO}=d_n$ for $n=0,1$. After Borel transformation of
Eq.~(\ref{defofq}), the truncation of $\alpha_V(k)$ discussed above
corresponds to insertion of
\small\begin{equation}\label{vacnlo}
B\left[\frac{\alpha(Q)}{1+\Pi(k^2/Q^2,\alpha(Q))}\right](u)
= \left(\frac{k^2}{Q^2} e^C\right)^{-u} - \frac{\beta_1}{\beta_0^2}
\int\limits_0^u d v\, v\left(\frac{k^2}{Q^2} e^C\right)^{-v}
B\left[\frac{\Pi_1}{\alpha}\right](u-v)
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent into Eq.~(\ref{defofq}) instead of the complete $\alpha_V(k)$.
On the right hand side, we have neglected
multiple insertions of $\Pi_1$. The Borel transform of $\Pi_1/\alpha$
can be represented as
\small\begin{equation}
B\left[\frac{\Pi_1}{\alpha}\right](u) =
\left(\frac{k^2}{Q^2}\right)^{-u}
F(u) - G(u)\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where $F(u)$ is a scheme-independent function that can be obtained
from Eq.~(\ref{borelpolaroper}) by integration with respect to $Q^2$:
\small\begin{equation} F(u) = \frac{32}{3} \frac{1}{1-(1-u)^2} \sum_{k=2}^\infty
\frac{(-1)^k k}{(k^2-(1-u)^2)^2} \equiv \sum_{n=-1}^\infty f_n u^n
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent $G(u)$ is a scheme-dependent integration constant. One can use the
renormalization group equation obeyed by the photon
vacuum polarization to
relate the expansion coefficients of $G(u)$ to those of the $\beta$-function
in the chosen scheme. The precise relation is as follows: Let us
write the highest power of $\beta_n$ as
\small\begin{equation}
\beta_{n\big|N_f^n} \equiv b_n \beta_1 (-\beta_0)^{n-1}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Then
\small\begin{equation} G(u) = \sum_{n=0}^\infty \frac{b_{n+1}}{n!} u^{n-1}\,,\qquad
g_n = \frac{b_{n+2}}{(n+1)!}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent With these preliminaries, we insert Eq.~(\ref{vacnlo}) into
Eq.~(\ref{defofq}) and obtain
\small\begin{eqnarray}\label{rnlo}
B_{\rm NLO}[R](u) &=& B_{\rm LO}[R](u) - \frac{\beta_1}{\beta_0^2}
\int\limits_0^u\frac{d v\, v}{u-v} \left(B_{\rm LO}[R](u)-B_{\rm LO}[R](v)
\right)\nonumber\\
&&\hspace*{-2cm}
\, - \frac{\beta_1}{\beta_0^2} \int\limits_0^u d v\, v\left(B_{\rm LO}[R](u)
F_{\rm reg}(u-v) - B_{\rm LO}[R](v) G_{\rm reg}(u-v)
\right)\,,
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent where $F_{\rm reg}(u)$ and $G_{\rm reg}(u)$ are defined by
removing the pole
at $u=0$: $F_{\rm reg}(u)\equiv F(u)-1/u$, $G_{\rm reg}(u)\equiv G(u)-1/u$.
The coefficients $d_n^{\rm NLO}$ are given by
\small\begin{equation} d_n^{\rm NLO}=\frac{1}{r_0}\frac{d^n}{du^n}
B_{\rm NLO}[R](u)_{|_{u=0}}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Taking the derivatives of Eq.~(\ref{rnlo}) we obtain (for $n\ge 2$)
\small\begin{eqnarray}\label{dnnlo}
d^{\rm NLO}_n &=& d_n-\frac{\beta_1}{\beta_0^2}
n\,(\psi(n+1)-\psi(2))\,d_{n-1}
\nonumber\\
&&\,-\frac{\beta_1}{\beta_0^2} \sum_{k=0}^{n-2}
\left[\left(\begin{array}{c}
n\\k\end{array}\right)
f_{n-2-k}-(k+1) g_{n-2-k}\right] (n-2-k)!\, d_k
\,,\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent where $\psi(x)$ is the logarithmic derivative of the $\Gamma$-function.
Note that for large $n$ the second term on the right side indeed dominates
the first one by a factor $\beta_1/\beta_0^2\ln n$.
With correspondingly changed conventions, this equation
agrees with the corresponding one in \cite{BRO93}. The expansion
coefficients of $F(u)$ are given by \cite{BRO93}
\small\begin{equation} f_n = -\frac{2}{3} (n+2) \left[-2 n-2-\frac{n+7}{2^{n+3}}+\frac{16}{n+2}
\sum_{k=1}^{\left[\frac{n+3}{2}\right]} k (1-2^{-2 k}) (1-2^{2 k-n-3})
\zeta(2 k+1)\right]\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where $\zeta(k)=\sum_{n=1}^\infty n^{-k}$ and $[..]$ denotes the integer
part of the number in brackets. The coefficients $g_n$ depend on the
scheme employed for the definition of $\alpha$. In the $\overline{\rm MS}$ scheme,
we can use the $\beta$-function to the approximation required here from
\cite{PAL84,BRO93}. We then find
\small\begin{equation}
g_n^{\overline{\rm MS}} = \frac{1}{(n+1)! (n+2)!} \frac{d^{n+1}}{du^{n+1}} \left[
\frac{(1-u) (1+2 u) (3+2 u) \Gamma(4+2 u)}{9\Gamma(2+u)^2 \Gamma(3+u)
\Gamma(1-u)}\right]_{\big|u=0}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
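The derivatives at $u=0$ can be taken symbolically. A short sketch with sympy (assuming the library is available) reads:

```python
import sympy as sp

u = sp.symbols('u')
# the bracket of the formula above for g_n in the MS-bar scheme
bracket = ((1 - u) * (1 + 2*u) * (3 + 2*u) * sp.gamma(4 + 2*u)
           / (9 * sp.gamma(2 + u)**2 * sp.gamma(3 + u) * sp.gamma(1 - u)))

def g_msbar(n):
    """g_n^MSbar via the (n+1)-th derivative of the bracket at u = 0."""
    deriv = sp.diff(bracket, u, n + 1).subs(u, 0)
    return sp.simplify(deriv / (sp.factorial(n + 1) * sp.factorial(n + 2)))
```

As a check, the bracket equals $1$ at $u=0$ and its first derivative there is $11/6$ (from the digamma values $\psi(1),\dots,\psi(4)$), so $g_0^{\overline{\rm MS}}=11/12$.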
\noindent In the $V$-scheme, we have
\small\begin{equation} g_n^V = f_n\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\phantom{\ref{tt1},\ref{tt2},\ref{tt3}}
\noindent The simplest way to see this is to note that in this scheme, by definition,
$\Pi(1,\alpha_V(Q))=0$. Then the expansion of the
integrand in Eq.~(\ref{defofq}) has only non-zero powers of $\ln(k^2/Q^2)$
and $d_0$ cannot appear in Eq.~(\ref{dnnlo}).
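The closed form for $f_n$ is easy to evaluate numerically. A minimal stdlib-only Python sketch, using a plain partial sum for $\zeta(2k+1)$ in place of a library zeta function:

```python
def zeta(s, terms=200000):
    """Crude Riemann zeta for integer s >= 3 via a partial sum
    (ample accuracy for the odd zeta values needed here)."""
    return sum(n**(-s) for n in range(1, terms + 1))

def f(n):
    """Expansion coefficient f_n of F(u), from the closed form above."""
    kmax = (n + 3) // 2                       # integer part [(n+3)/2]
    s = sum(k * (1 - 2**(-2*k)) * (1 - 2**(2*k - n - 3)) * zeta(2*k + 1)
            for k in range(1, kmax + 1))
    return -2.0/3.0 * (n + 2) * (-2*n - 2 - (n + 7)/2**(n + 3)
                                 + 16.0/(n + 2) * s)
```

For $n=0$ the formula collapses to $f_0=-\tfrac{4}{3}\left(-\tfrac{23}{8}+3\zeta(3)\right)$, a convenient check of the indexing.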
\subsection{Numerical analysis}
The transition to QCD according to the prescription formulated above
is performed by replacing $\beta_1/\beta_0^2$ in Eq.~(\ref{dnnlo})
by its QCD value. Note that, contrary to $d_n$, the NLO coefficient
depends explicitly on the number of flavours. In this
Subsection, rather than presenting
values for the BLM scale $Q^*$ to NLO accuracy, we give the perturbative
coefficients in low orders generated by the expansion of $\alpha_s(Q^*)$
in the $\overline{\rm MS}$-coupling, that is the coefficients $d_n^{\rm NLO}$ and
$M^{\rm NLO}_n(-\beta_0\alpha_s)$, defined in complete analogy with
$M_n(-\beta_0\alpha_s)$. To compare the coefficients obtained from
keeping only vacuum polarization with the exact ones, if available,
we shall also define $d_n^{\rm exact}$ as the exact coefficient,
divided by $(-\beta_0)^n$. Numerical values for the difference between
pole and $\overline{\rm MS}$ mass are shown in Table~\ref{tt1}, and for the
derivative of the hadronic vacuum polarization $Q^2 d\Pi/dQ^2$ and
the $\tau$ decay width in Tables~\ref{tt2} and \ref{tt3}.
\begin{table}[th]
$$
\begin{array}{|c||c|c|c|c|}
\hline
n & d_n & d_n^{\rm NLO} & M_n\,[\mbox{b-quark}] &
M_n^{\rm NLO}\,[\mbox{b-quark}]\\ \hline\hline
0 & 1 & 1 & 1 & 1 \\
1 & 4.68615 & 4.68615 & 1.623 & 1.623 \\
2 & 17.6227 & 19.6884 & 1.935 & 1.972 \\
3 & 109.859 & 127.529 & 2.193 & 2.272 \\
4 & 873.924 & 1138.28 & 2.467 & 2.628 \\
5 & 8839.69 & 12085.6 & 2.835 & 3.131 \\
\hline
\end{array}
$$
\caption{\label{tt1}
Comparison of leading order and next-to-leading order
coefficients for the difference of pole and $\overline{\rm MS}$
mass, where quark masses inside loops
are neglected and loops containing the heavy quark are excluded
(cf. Sect.~4). The comparison of the partial sums $M_n$, which modify the
lowest order radiative correction, is for $\alpha_s(m_b)=0.2$, as relevant for
bottom quarks.}
\vspace*{0.4cm}
$$
\begin{array}{|c||c|c|c||c|c|c|}
\hline
n & d_n & d^{\rm NLO}_n & d_n^{\rm exact} & M_n & M^{\rm NLO}_n
& M^{\rm exact}_n\\ \hline\hline
0 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 0.691772 & 0.691772 & 0.728809 & 1.158 & 1.158 & 1.167\\
2 & 3.10345 & 2.31417 & 1.25847 & 1.321 & 1.271 & 1.233\\
3 & 2.18004 & 7.40378 & - & 1.347 & 1.360 & - \\
4 & 30.7398 & 18.4580 & - & 1.432 & 1.411 & - \\
5 & -34.5336 & 146.293 & - & 1.410 & 1.503 & - \\
\hline
\end{array}
$$
\caption{\label{tt2}
Comparison of leading order, next-to-leading order and
exact coefficients of $Q^2 d\Pi(Q)/dQ^2$. The NLO and
exact values are given for
$N_f=3$ and the partial sums $M_n$ for $\alpha_s=0.32$. Exact values,
when available, include genuine higher order corrections.}
\vspace*{0.6cm}
$$
\begin{array}{|c||c|c|c||c|c|c|}
\hline
n & d_n & d^{\rm NLO}_n & d_n^{\rm exact} & M_n & M^{\rm NLO}_n
& M^{\rm exact}_n\\ \hline\hline
0 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 2.27511 & 2.27511 & 2.31213 & 1.521 & 1.521 & 1.529\\
2 & 5.68475 & 5.98780 & 5.20806 & 1.819 & 1.835 & 1.803\\
3 & 13.7536 & 18.1248 & - & 1.984 & 2.053 & - \\
4 & 35.1470 & 54.7939 & - & 2.081 & 2.203 & - \\
5 & 84.4066 & 178.897 & - & 2.134 & 2.316 & - \\
\hline
\end{array}
$$
\caption{\label{tt3}
Same as in the
previous table for the perturbative expansion of the $\tau$
hadronic width ($\alpha_s(m_\tau)=0.32$).}
\end{table}
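The $M_n$ column of Table~\ref{tt1} can be reproduced from the tabulated $d_n$ under the reading that $M_n$ is the partial sum $\sum_{k\le n} d_k(-\beta_0\alpha_s)^k$ with $-\beta_0=(11-2N_f/3)/(4\pi)$ and $N_f=4$ light flavours; both the form of $M_n$ (defined earlier in the paper) and the flavour number are assumptions of this sketch, reconstructed from the caption.

```python
import math

# LO coefficients d_n for the pole/MS-bar mass difference (table above)
d = [1, 4.68615, 17.6227, 109.859, 873.924, 8839.69]

# Assumptions (not fixed in this excerpt): M_n = sum_{k<=n} d_k x^k
# with x = -beta_0 * alpha_s, -beta_0 = (11 - 2*nf/3)/(4*pi), nf = 4
# light flavours, heavy-quark loops being excluded.
nf, alpha_s = 4, 0.2
x = (11 - 2 * nf / 3) / (4 * math.pi) * alpha_s

M, total = [], 0.0
for k, dk in enumerate(d):
    total += dk * x**k
    M.append(total)
```

With these assumptions the partial sums come out close to the tabulated column (about 1.62, 1.93, 2.19, 2.46, 2.82 for $n=1,\dots,5$), supporting this reading of $M_n$.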
The main conclusion for the radiative mass shift is that the effect
of two-loop running remains small up to the order, at which the series
has to be truncated. It leads to a less than 5\% additional increase
of the one-loop radiative correction, which can be neglected in view
of the uncertainties that have been discussed in Sect.~4. This is
reassuring, because it supports the suggestion that the mass shift is
to a large extent given by one-loop running effects, although it must
be borne in mind that by incorporating two-loop running one does
not gain control over genuine two- and higher loop corrections.
On the other hand, the effect of two-loop running is very large
for the derivative of the vacuum polarization and the efficiency of
extending the BLM prescription beyond leading order can be doubted
in this case. Although the inclusion of two-loop running improves
the estimate of $d_2$ as compared to the exact coefficient
(Table~\ref{tt2}), for $n>3$ the effect of two-loop running is
exceedingly large and this improvement might as well be accidental.
It is interesting to observe that for the $\tau$ hadronic width
itself (Table~\ref{tt3}),
such irregularities do not occur, which is in qualitative agreement
with the conclusion of Sect.~3.1 that the perturbative
expansion of $R_\tau$ has a smoother behaviour in intermediate
orders of perturbation theory than $Q^2 d\Pi/dQ^2$. The relative
importance of two-loop running increases with $n$ as expected from the
asymptotic $\ln n$-enhancement. Though smooth, the effect of
two-loop running is significant for $R_\tau$ and points towards
an even further increase of the cumulative effect of higher order
perturbative corrections and consequently a further decrease
of $\alpha_s(m_\tau)$ from $R_\tau$.
In Appendix C, we give exact results for the five-loop diagrams that
enter the calculation of $d_3^{\rm NLO}$ for $Q^2 d\Pi/dQ^2$.
\mysection{Conclusions}{Conclusions}
In this paper we have shown how to deal with
higher order vacuum polarization
insertions into radiative corrections to observables
in QCD. This enterprise
has been motivated by the fact that these higher order
corrections often
lead to disturbingly large coefficients in the perturbative
expansion. This
trend is systematic, but since it can be identified, we suggest
to calculate
these corrections and separate them from the remaining ones.
The behaviour
of the remaining series should then be improved. Even
if the remaining corrections
turn out not to be small, the summation of vacuum polarization
insertions
can be motivated as a physical way of scale-setting.
In this respect it
can be considered as an extension of the scale-setting proposed
by Brodsky, Lepage and Mackenzie \cite{BRO83} and Brodsky
and Lu \cite{BRO94},
as far as two-loop running is concerned. It is worth
noting that the familiar
BLM scale typically overestimates the size of higher
order vacuum polarization
insertions. The smallness of this scale need not
necessarily indicate
a failure of perturbation theory, unless it is
indicative of its divergence
already at two-loop order.
We would like to stress the computational ease with
which the summation of
effects of one-loop running can be performed compared
to a genuine higher
order calculation. We thus have devoted considerable space
to technical aspects,
which should allow routine implementation of this
summation. The two existing
techniques can be
summarized as follows:
The first one \cite{BEN93,BRO93,BB94,BG94} amounts to
direct evaluation of the relevant
higher order Feynman
diagrams. A convenient tool to organize such a calculation
is the Borel
transform. The result can be obtained as a certain
analytically regularized
Feynman integral plus -- if necessary -- a subtraction
function that is
easily computed directly from a large-mass expansion.
This technique
has its limits, when applied to processes with several
scales or to physical
cross sections that are more directly calculated as a
sum of real and
virtual corrections. In these cases, we have applied a
dispersion technique \cite{BBZ94,SV94,BB94b}
(or, if one starts from the Borel transform, a Mellin
transformation) to
reduce the problem to calculation of lowest order
corrections with finite
gluon mass. It is in this representation that summation
(up to an
accuracy set by renormalons) is most easily performed.
This summation is
usually complicated by the need to analytically
continue
the Borel transform beyond its radius of convergence
and to take
the (principal value) integral
over the Borel parameter. Our Eqs.~(\ref{rBS}) and
(\ref{rBSren}) are
immediately suited to numerical evaluation. Given this
simplicity, the
computational expense seems worth the investment even
if one could only
hope to absorb a higher order radiative correction that
has the correct sign compared to the exact one.
In the present paper, we have investigated two
quantities in more detail.
Higher order $\beta_0^n\alpha_s^{n+1}$-corrections
are indeed sizeable
for the hadronic decay width of the $\tau$ lepton,
when expressed as
a series in $\alpha_s$. We compared fixed order perturbative
approximations to $R_\tau$ with those from a partial resummation
of running coupling effects due to contour integration \cite{DIB92}
and found that in intermediate orders the first one has a smoother
behaviour, since potentially large corrections from contour
integration conspire with the divergent coefficients of
$Q^2d\Pi/dQ^2$ to produce an effective suppression of ultraviolet
renormalons (Table~\ref{tab1}). Resummation of all one-loop
running coupling effects leads to an approximate 10\% decrease
of $\alpha_s(m_\tau)$ determined from hadronic $\tau$ decays to
a central value of\footnote{As an input we have taken
$R_\tau=3.56\pm 0.03$ from \cite{Revs}, see Sect.~3.1.}
\small\begin{equation}
\alpha_s(m_\tau)\simeq 0.29\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent This shift of about
one error margin of previous analyses is not caused by a single
large next-order coefficient, but by the constructive addition
of several higher order coefficients beyond the exactly known
coefficient at order $\alpha_s^3$.
The accurate extraction of $\alpha_s$ from $\tau$-decays
relies on the absence of
$\Lambda_{\rm QCD}^2/m_\tau^2$-corrections.
Although from a purely theoretical point of view,
the situation with
respect to such terms is not conclusive, given
that the limitations
of duality are only poorly understood, we conclude
that effects
associated with summation of the divergent perturbative
expansion
should be excluded as a source of such terms,
provided one accepts
their absence in the relevant current correlation
functions in Euclidean
space.
For the difference between the pole and $\overline{\rm MS}$-renormalized
mass of a
heavy quark, we conclude that the large two-loop
correction found in
\cite{GRA90} is probably not accidental but the first one
in a rapidly
divergent series of higher order corrections dominated
indeed by one-loop
running of the coupling. For the charm quark, the
divergence is such that
the perturbative series can certainly not be improved
beyond two-loop
order. For the bottom pole mass, our estimate incorporating
one-loop
running reads
\small\begin{equation}
\frac{\delta M_b}{M_b} = (16.3\pm 2.9\pm 1.5)\%\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent which is about 30\% larger than the two-loop result
without resummation.
We expect this difference to be significant in phenomenological
applications. The first error quoted is associated with the divergence
of the series that relates the two mass definitions and is irreducible.
Numerically this error amounts to an uncertainty somewhat
larger than 100 MeV
for the mass difference, a value in between the estimates initially
reported in \cite{BIG94,BB94}. Short-distance observables containing
quark masses, when expressed in terms of pole masses, will generally
exhibit large coefficients implanted by the use of this parameter. Thus,
although both mass definitions are ultimately unphysical, we expect that
the $\overline{\rm MS}$ mass can be determined to better accuracy through perturbative
relations\footnote{This argument does not apply to pole mass differences.
The large coefficients associated with the pole mass can be understood
as a universal (flavour-independent) additive mass renormalization,
which cancels in the difference.}.
Finally, let us mention that the calculation of vacuum polarization
can never replace an exact calculation. Until this is completed, we
find it worthwhile to incorporate the systematic effects exhibited
by one-loop running. Surely, the authors are among those who await
further exact results with suspense.\\
{\bf Note added.} While this paper was being written,
we were informed by M.~Neubert of work of his which partially
overlaps with the present one. We acknowledge the exchange of manuscripts
and are very grateful for the ensuing discussions which
helped to clarify
our presentation. We disagree with the conclusion of \cite{NEU95}
that the difference of various summation prescriptions for cross
sections should be interpreted as a sign and quantification of
the failure of the operator product expansion in the physical
region. As emphasized in Sect.~3.2, once one abandons Borel-summation
type prescriptions, which are distinguished by their relation to
the OPE, one can introduce $1/Q^2$-differences in Euclidean
space just as well and there is no discrimination between the
Euclidean and Minkowskian region from the point of view of
summation of perturbation theory, given the present state of
knowledge.\\
{\bf Acknowledgements.} M.~B. would like to thank D.~Broadhurst and
A.~Pich for stimulating discussions. M.~B. is supported by the
Alexander von Humboldt-foundation.
\newpage
\begin{appendix}
\noindent {\Large\bf Appendices}
\mysection{Subtractions}
{Subtractions}
In general, we will also be interested in ultraviolet divergent
quantities. Examples include the difference between the pole mass
and the $\overline{\rm MS}$-renormalized quark mass and the correlation function
of heavy-light currents in heavy quark effective theory. Denoting
a generic quantity by $R(\alpha)$, the renormalized Borel transform
has the form
\small\begin{equation}
B[R](u) = B[R]_0(u) + S_R(u)\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where $B[R]_0(u)$ is the bare Borel transform, calculated from
diagrams without overall subtraction,
using the Borel parameter itself as a
regulator. This corresponds to analytic regularization \cite{SPE68}.
Consequently, the bare Borel transform is
singular at the origin (recall
that the perturbative series is generated by derivatives of the
Borel transform at the origin).
This singularity is subtracted by the function $S_R(u)$, which
depends on the renormalization scheme. In minimal subtraction schemes
$u S_R(u)$ is an entire function (at least for one chain of fermion
loops) and -- when combined with the appropriate singular piece of
$B[R]_0(u)$ to be defined below, Eq.~(\ref{defsub}) -- $S_R(u)$ yields
an unambiguous
contribution to the bubble sum
\small\begin{equation}
S_R(\alpha) \equiv \left(-\frac{1}{\beta_0}\right)
\int\limits_0^\infty d u \,e^{-u/(-\beta_0\alpha)}\,
\Big(S_R(u) + \mbox{singular term in } B[R]_0(u)
\Big) \,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent General expressions for $S_R(u)$ in minimal subtraction schemes
have been derived in Appendix A
of \cite{BB94}. In this Appendix we outline a simple method
to obtain $S_R(u)$ and calculate $S_R(\alpha)$ in minimal subtraction
schemes.
An essential simplification comes from the
fact that one needs only the dominant large-$\lambda$ behaviour
of dimensionally regularized Feynman amplitudes, where $\lambda$
is a mass for the gluon. As an illustration of the technique we
calculate the subtractions for the correlation function of two
heavy-light currents in heavy quark effective theory.
\subsection{Calculation of $S_R(\alpha)$}
As always we assume that the quantity $R$ has been
made dimensionless, is infrared finite and
such that the
order-$\alpha$ radiative correction comes from gluon exchange. We
shall also assume that the corresponding diagrams have one loop
and are at most logarithmically ultraviolet divergent. The last
assumption is actually unnecessary, see the explicit example in
the subsequent subsection. The
dimensionally regularized order-$\alpha$ correction can then be
represented as
\small\begin{equation} \label{co}
r_0^{\rm bare}(\epsilon) \,\alpha = \alpha\mu^{2\epsilon}\int d^d k\,
F(k,Q)\frac{1}{k^2}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent We use $d=4-2\epsilon$ and denote by $Q$ a set of external momenta.
For gauge-dependent quantities, we take Landau gauge. The factor
$(k_\mu k_\nu/k^2-g_{\mu\nu})$ from the gluon propagator is
included in the integrand $F(k,Q)$ and we do not write Lorentz
indices explicitly.
We define the coefficient that is both dimensionally and analytically
regularized by
\small\begin{equation}\label{dimanalytic}
\hat{r}_0^{\rm bare}(s,\epsilon) \alpha = \alpha\mu^{2\epsilon}\int d^d k\,
F(k,Q)\frac{1}{k^2}\left(-\frac{\mu^2}{k^2}\right)^s\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where $\mu$ is a subtraction scale. Note that $B[R]_0(u)=
\hat{r}_0^{\rm bare}(u,\epsilon=0)\,e^{-u C}$, see Eq.~(\ref{gluonprop}).
Insert $n$ fermion loops into the gluon line of all diagrams
that generate $r_0$. The fermion loop integrations can be done.
Each loop gives a factor
\small\begin{displaymath}
-\frac{\beta_0^f}{\epsilon} \frac{6\Gamma(1+\epsilon)\Gamma(2-\epsilon)^2}
{\Gamma(4-2\epsilon)} \left(-\frac{k^2}{4\pi\mu^2}\right)^{-\epsilon}\,,
\end{displaymath}\normalsize\vspace*{-0.1ex}
\noindent where $\beta_0^f=T/(3\pi)$ is the fermionic contribution to the
beta-function ($T=1$ in QED and $T=1/2$ in QCD). Performing the
final integration over gluon momentum $k$, the result for the
coefficient of order $\alpha^{n+1}$ can be written as
\small\begin{equation}
r_n^{\rm bare}(\epsilon) = \frac{\left(\beta_0^f\right)^n}{(n+1)
(-\epsilon)^{n+1}}\,G(-\epsilon,-(n+1)\epsilon)
\end{equation}\normalsize\vspace*{-0.1ex}
\small\begin{equation}
G(-\epsilon,-(n+1)\epsilon) = \left((4\pi)^\epsilon \frac{6\Gamma(1+\epsilon)
\Gamma(2-\epsilon)^2}{\Gamma(4-2\epsilon)}\right)^n (n+1) (-\epsilon)\,
\hat{r}_0^{\rm bare}(s=n\epsilon,\epsilon)\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where Eq.~(\ref{dimanalytic}) has been used. The function $G$
introduced above coincides with the one of Appendix A of
\cite{BB94} (note that $d=4+2\epsilon$ has been used in
\cite{BB94}, which motivates the signs in the arguments
of $G$ above). To obtain the subtraction function $S_R(u)$ in
the $\overline{\rm MS}$ scheme,
one has to add to $r_n^{\rm bare}(\epsilon)$ the diagrams with
fermion loops replaced by their $\overline{\rm MS}$ counterterms
and then compute the finite part of the sum. For any quantity $R$,
the steps
of Appendix A in \cite{BB94} for the self-energy of a
heavy quark can be repeated with the general result
\small\begin{equation}\label{olres}
S_R(u) = \frac{\tilde{G}_0(u)}{u}\qquad \tilde{G}_0(u) =
\sum_{n=0}^\infty \frac{g_n}{n!} u^n\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where $g_n$ are the expansion coefficients of
\small\begin{equation} G_0(-\epsilon) \equiv G(-\epsilon,-(n+1)\epsilon=0) = \sum_{m=0}^\infty
g_m (-\epsilon)^m\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\vspace{0.2cm}
Since $G_0$ is related to subtractions, it should be unnecessary
to calculate the full diagram, encoded in the function $G$, in
order to obtain $G_0$. We prove
that $G_0$ can be deduced from the large-mass expansion
of the dimensionally regularized one-loop coefficient
$r_0^{\rm bare}(\lambda^2,\epsilon)$ computed
with a gluon mass $\lambda$. That is, $1/k^2$ in Eq.~(\ref{co})
is replaced by
$1/(k^2-\lambda^2)$ (in Landau or Feynman gauge). Since the
original four-dimensional integral was logarithmically
ultraviolet divergent, the asymptotic behaviour
at large $\lambda$ is given by
\small\begin{equation} \label{as}
r_0^{\rm bare}(\lambda^2,\epsilon)\stackrel{\lambda^2\rightarrow\infty}{=}
-\frac{1}{\epsilon}\,r_\infty(\epsilon) \left(\frac{\mu^2}{\lambda^2}\right)^
\epsilon\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent This equation defines the $\lambda^2$-independent function
$r_\infty(\epsilon)$. Its value at $\epsilon=0$ is denoted by $r_\infty$. By
definition
\small\begin{equation}
G_0(-\epsilon) = -\frac{1}{(4\pi)^\epsilon} \frac{\Gamma(4-2\epsilon)}{6\Gamma(
1+\epsilon)\Gamma(2-\epsilon)^2}\,\lim_{s\rightarrow -\epsilon}\,(s+\epsilon)\,
\hat{r}_0^{\rm bare}(s,\epsilon)\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent To evaluate the limit, we use that $\hat{r}_0^{\rm bare}(s,\epsilon)$
is related to the one-loop coefficient with finite gluon mass
$r_0^{\rm bare}(\lambda^2,\epsilon)$ by a Mellin transform:
\small\begin{equation}
\hat{r}_0^{\rm bare}(s,\epsilon) = -\frac{\sin\pi s}{\pi} \int\limits_0^\infty
\frac{d \lambda^2}{\lambda^2} \left(\frac{\lambda^2}{\mu^2}
\right)^{-s} r_0^{\rm bare}(\lambda^2,\epsilon)
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent This represents the analytic continuation of the analogue of
Eq.~(\ref{final1}) into the $u$-interval $]-\epsilon,0[$. Integrating by
parts, we get
\small\begin{equation}
\hat{r}_0^{\rm bare}(s,\epsilon) = -\frac{\sin\pi s}{\pi\,(s+\epsilon)}
\int\limits_0^\infty d\lambda^2 \left(\frac{\lambda^2}{\mu^2}
\right)^{-(s+\epsilon)} \! f^\prime(\lambda^2,\epsilon)\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where $f(\lambda^2,\epsilon)$ is defined by dividing
$r_0^{\rm bare}(\lambda^2,\epsilon)$ by $(\mu^2/\lambda^2)^\epsilon$.
The limit is now easily taken with the result
\small\begin{equation}
\lim_{s\rightarrow -\epsilon}\,(s+\epsilon)\,
\hat{r}_0^{\rm bare}(s,\epsilon) = -\frac{\sin\pi\epsilon}{\pi\epsilon}\,r_\infty
(\epsilon)\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent This yields the final result
\small\begin{equation}\label{gzero}
G_0(u) = \frac{1}{(4\pi)^{-u}} \frac{\Gamma(4+2 u)}{6\Gamma(
1-u)\Gamma(2+u)^2}\,\frac{\sin\pi u}{\pi u}\,r_\infty
(-u)\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent The computation of the subtraction function has been reduced to
calculation of the large mass limit of one-loop corrections with
non-zero gluon mass.
\vspace*{0.2cm}
Combining Eq.~(\ref{final1}) for the bare Borel transform with
Eq.~(\ref{olres}), the renormalized Borel transform is given by
\small\begin{equation}
B[R](u) = -\frac{\sin\pi u}{\pi u} \int\limits_0^\infty
d \lambda^2 \left(\frac{\lambda^2}{\mu^2} e^C
\right)^{-u} r_0^\prime(\lambda^2) + \frac{\tilde{G}_0(u)}{u}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent The derivative $r_0^\prime(\lambda^2)$ is ultraviolet finite, which
allows us to put $\epsilon=0$. However, the integral is not yet finite at
$u=0$ and the above expression is not suited to take derivatives at
$u=0$. Since, by Eq.~(\ref{as}), $r_0^\prime(\lambda^2)=r_\infty/\lambda^2$
for large $\lambda^2$, we obtain
\small\begin{eqnarray}\label{bofin2}
B[R](u) &=& -\frac{\sin\pi u}{\pi u} \int\limits_0^\infty
d \lambda^2 \left(\frac{\lambda^2}{\mu^2} e^C
\right)^{-u} \left[r_0^\prime(\lambda^2)-\frac{r_\infty}{\lambda^2}
\,\Theta(\lambda^2-\mu^2 e^{-C})\right]\nonumber\\
&&\,+ \frac{1}{u}\left(\tilde{G}_0(u)-r_\infty\,\frac{\sin\pi u}
{\pi u}\right)\,.
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent In this form the integral exists for $u=0$ and the second line is
finite at $u=0$. We note that this expression is equivalent to, but
slightly different from the one given in \cite{BB94b}. In the
present form $\mu^2$ and $C$ appear only in their natural combination
$\mu^2 e^{-C}$.
To obtain the bubble sum, all steps that lead from Eq.~(\ref{final1})
to Eq.~(\ref{rBS}) can now be repeated on the first line of
Eq.~(\ref{bofin2}). The Borel integral of the second line is given by
\small\begin{eqnarray} \label{defsub}
S_R(\alpha) &\equiv& \left(-\frac{1}{\beta_0}\right)
\int\limits_0^\infty d u \,e^{-u/(-\beta_0\alpha)}\,\frac{1}{u}
\left(\tilde{G}_0(u)-r_\infty\,\frac{\sin\pi u}
{\pi u}\,e^{-u C}\right)\\
&=& \left(-\frac{1}{\beta_0}\right)\int\limits_0^{-\beta_0\alpha}
\frac{d u}{u}\left(G_0(u)-r_\infty\right)-\frac{r_\infty}{\beta_0}
\left[\frac{\arctan(\pi\beta_0\alpha)}{\pi\beta_0\alpha}+\frac{1}{2}
\ln\left(1+\pi^2\beta_0^2\alpha^2\right) - 1\right]\nonumber
\,.
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent Thus, in case subtractions are required beyond coupling renormalization,
Eq.~(\ref{rBS}) is replaced by
\small\begin{eqnarray}\label{rBSren2}
r_0 a_s M_\infty(a_s) &=&
\int_0^\infty d\lambda^2\, \Phi(\lambda^2)\,\left(r'_0(\lambda^2) -
\frac{r_\infty}{\lambda^2}\,\Theta(\lambda^2-\mu^2 e^{-C})\right)
+[r_0(\lambda_L^2)-r_0(0)]\nonumber\\
&&\hspace*{-2cm}
\,+ \int\limits_0^{\,a_s}\frac{d u}{u}\left(G_0(u)-r_\infty\right) + r_\infty
\left[\frac{\arctan(\pi a_s)}{\pi a_s}+\frac{1}{2} \ln\left(
1+\pi^2 a_s^2\right) - 1\right]\,,
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent with $G_0(u)$ given by Eq.~(\ref{gzero}).
\subsection{A sample calculation}
As a non-trivial example, we calculate the subtraction function for the
correlation function of heavy-light currents in heavy quark effective
theory,
\small\begin{equation}\label{CF2}
\Pi_5(\omega)\,=\,i\int{\mbox d}^4 x\,e^{i\omega (v\cdot x)}\,
\langle 0|T\{j_5^\dagger(x) j_5(0)\}|0\rangle\qquad
j_5(x) = \bar{h}_v(x) i\gamma_5 q(x)\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent The corresponding bare Borel transform has been given in \cite{BB94}.
At first sight the method exposed in the previous subsection appears
inapplicable, because the diagrams to be considered have two loops, see
Fig.~\ref{hqetI}, and, because the correlation function is quadratically
divergent, one also needs subtractions for the two-loop diagram itself.
To eliminate these, we shall consider the third derivative
\small\begin{equation}
D(\omega)\equiv \omega\,\frac{{\mbox d}^3\Pi_5(\omega)}{{\mbox d}\omega^3}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\begin{figure}[t]
\vspace{-1cm}
\epsfysize=23.8cm
\epsfxsize=17cm
\centerline{\epsffile{hqetI.eps}}
\vspace*{-14cm}
\caption{\label{hqetI} Radiative corrections to the correlation
function of heavy-light currents. The shaded circle represents a
current insertion with momentum $q$ and the double line denotes
a heavy quark propagator.}
\end{figure}
\noindent Then all subtractions originate from divergent one-loop subdiagrams,
which are subsequently inserted in the lowest order one-loop diagram
for $\Pi_5(\omega)$.
We remind the reader that the heavy-light current in the effective theory
is not conserved. We treat the three diagrams in turn and take Landau
gauge.\\
Let us start with diagram (a) and investigate the one-loop correction to
the heavy-light vertex with non-zero gluon mass. The numerator of the
corresponding Feynman integrand is proportional to
\small\begin{equation} \gamma^\rho (\!\not\! q-\!\not\! k) \gamma_5 v^\tau (k^2 g_{\rho\tau}
- k_\rho k_\tau)\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where $v$ is the velocity of the heavy quark and the momentum
assignments to the lines of the diagram are evident. Since the contribution
proportional to $\lambda^{-2\epsilon}$ required by Eq.~(\ref{as}) comes
from $k\sim\lambda\gg q$, we can drop $\not\!\! q$ in the above
expression and the integral is simplified to
\small\begin{equation}
\int\frac{d^d k}{(2\pi)^d} \frac{\!\not\! v\!\not\! k-v\cdot k}{
(q-k)^2 \,v\cdot(q^\prime-k) \,(k^2-\lambda^2)}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent The corresponding integral with the numerator replaced by $k_\mu$ has the
structure $F_1 v_\mu+F_2 q_\mu+F_3 q_\mu^\prime$. By dimensional counting,
only $F_1$ can behave as $\lambda^{-2\epsilon}$ for large $\lambda^2$,
the other structures being suppressed by powers of $\lambda$. But then
the potential term $\lambda^{-2\epsilon}$ is proportional to
$\!\not\! v\!\not\! v-v^2 = 0$ and therefore $r_\infty(\epsilon)$ vanishes
identically for this diagram (this property is specific to Landau gauge).
For diagram (b) we need the self-energy insertion of a light (massless)
quark, given by
\small\begin{equation}
-C_F g^2\mu^{2\epsilon} \int\frac{d^d k}{(2\pi)^d} \frac{\gamma^\rho
(\!\not\! p-\!\not\! k)\gamma^\tau}{(p-k)^2 (k^2-\lambda^2)}\,\left[
g_{\rho\tau}-\frac{k_\rho k_\tau}{k^2}\right]\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where $p$ is the external momentum for the subdiagram. Again the
term involving $\lambda^{-2\epsilon}$ in the large $\lambda^2$-expansion
comes from $k\sim\lambda\gg p$, which allows us to expand the propagator
$1/(p-k)^2$. The integrand simplifies to
\small\begin{equation}
\frac{\gamma^\rho
(\!\not\! p-\!\not\! k)\gamma^\tau}{(p-k)^2 (k^2-\lambda^2)}\,\left[
g_{\rho\tau}-\frac{k_\rho k_\tau}{k^2}\right]\longrightarrow
\frac{\epsilon \,(3-2 \epsilon)}{2-\epsilon} \frac{\!\not\! p}{k^2 (k^2-\lambda^2)}
+ O\left(\frac{1}{k^5}\right)\,,
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent where the neglected terms do not produce a $\lambda^{-2\epsilon}$
contribution. The result is
\small\begin{equation}
(-i)\!\not\! p \left(\frac{4\pi\mu^2}{\lambda^2}\right)^\epsilon
\frac{C_F\alpha}{
4\pi} \frac{(3-2 \epsilon)\Gamma(1+\epsilon)}{(1-\epsilon)(2-\epsilon)}\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Integration of the final quark loop is straightforward, and taking
three
subtractions we obtain
\small\begin{equation}
G_0^{(b)}(u) = \frac{C_F N_c}{4\pi^3}\left[-\frac{u}{6}(3+2 u) \frac{
\Gamma(4+2 u)}{\Gamma(1-u)\Gamma(2+u)^2\Gamma(3+u)}\right]\,.
\end{equation}\normalsize\vspace*{-0.1ex}
Turning to the self-energy of a heavy quark in heavy quark effective
theory in diagram (c), we encounter another
apparent obstacle to applying the
technique of the previous subsection: The subdiagram is linearly
ultraviolet divergent, and -- at large $\lambda^2$ -- behaves like
$\lambda^{1-2\epsilon}$. The remedy is simply to ignore this term and
pick up the term $\lambda^{-2\epsilon}$, since power divergences are
discarded in minimal subtraction schemes. The large mass expansion of
the heavy quark self-energy is
\small\begin{equation}
(-i)\frac{1+\!\not\! v}{2}\left(\frac{4\pi\mu^2}{\lambda^2}\right)^\epsilon
\frac{C_F\alpha}{4\pi} \left[\Gamma\left(\frac{1}{2}\right)\Gamma\left(
\epsilon-\frac{1}{2}\right)\,\lambda+\frac{(3-2\epsilon)\Gamma(\epsilon)}{1-\epsilon}\,
v\cdot p + O\left(\frac{v\cdot p}{\lambda}\right)\right]\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Discarding the first term and inserting the remaining one into
the quark loop, we get
\small\begin{equation}
G_0^{(c)}(u) = \frac{C_F N_c}{4\pi^3}\left[\frac{1}{6} (3+2 u) \frac{
\Gamma(4+2 u)}{\Gamma(1-u)\Gamma(2+u)^3}\right]\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent Adding the contributions from all diagrams, the result is
\small\begin{equation}
G_0(u) = \frac{C_F N_c}{4\pi^3}\left[\frac{1}{3} (3+2 u) \frac{
\Gamma(4+2 u)}{\Gamma(1-u)\Gamma(2+u)^2\Gamma(3+u)}\right]\,.
\end{equation}\normalsize\vspace*{-0.1ex}
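Since $\Gamma(3+u)=(2+u)\,\Gamma(2+u)$, the two non-vanishing diagrams indeed add up to this total: the $-u$ from diagram (b) and the $2+u$ from diagram (c) combine to $2$. A small sympy check (assuming the library is available; the common factor $C_FN_c/(4\pi^3)$ is dropped):

```python
import sympy as sp

u = sp.symbols('u')
# diagram (b): vertex with light-quark self-energy insertion
G_b = -u / 6 * (3 + 2*u) * sp.gamma(4 + 2*u) / (
    sp.gamma(1 - u) * sp.gamma(2 + u)**2 * sp.gamma(3 + u))
# diagram (c): heavy-quark self-energy insertion
G_c = sp.Rational(1, 6) * (3 + 2*u) * sp.gamma(4 + 2*u) / (
    sp.gamma(1 - u) * sp.gamma(2 + u)**3)
# quoted total
G_tot = sp.Rational(1, 3) * (3 + 2*u) * sp.gamma(4 + 2*u) / (
    sp.gamma(1 - u) * sp.gamma(2 + u)**2 * sp.gamma(3 + u))

# Gamma(3+u) = (2+u) Gamma(2+u) makes G_b + G_c collapse to G_tot
diff = sp.gammasimp(G_b + G_c - G_tot)
```

Evaluating the difference at generic numerical points confirms that it vanishes identically.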
\mysection{Radiative corrections to $R_{e^+e^-}$ and $R_\tau$ with
finite gluon mass}
{Radiative corrections to $R_{e^+e^-}$ and $R_\tau$ with
finite gluon mass}
We write the lowest order radiative corrections to the
total cross section of $e^+e^-$ annihilation and the hadronic $\tau$
decay width with finite gluon mass $\lambda$ as
\small\begin{eqnarray}
R_{e^+e^-} &=& 3\left[1+\alpha_s(s)\left\{r_{e^+e^-}^{virt}(y)+\Theta(1-y)\,
r_{e^+e^-}^{real}(y)\right\}\right]\,,\\
R_\tau &=& 3\left[1+\alpha_s(m_\tau)\left\{r_\tau^{virt}(y)+\Theta(1-y)\,
r_\tau^{real}(y)\right\}\right]\,,
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent where $y=\lambda^2/s$ or $y=\lambda^2/m_\tau^2$, respectively.
Quarks are taken to be massless.
$r^{virt}(y)$ denotes the virtual and $r^{real}(y)$ the real gluon emission
corrections:
\small\begin{eqnarray}
r_{e^+e^-}^{virt}(y) &=&\frac{2}{3\pi}\Bigg[2(1+y)^2\Bigg\{-\frac{1}{2}
\ln^2 y +\ln y \ln(1+y) +\frac{\pi^2}{6}+ \mbox{Li}_2(-y)\Bigg\}
\nonumber\\
&&{} -\frac{7}{2}-2y-3\ln y -2y \ln y \Bigg]\\
r_{e^+e^-}^{real}(y) &=&\frac{2}{3\pi}\Bigg[2(1+y)^2\Bigg\{\frac{1}{2}
\ln^2 y -2\ln y \ln(1+y) -\frac{\pi^2}{6}-2 \mbox{Li}_2(-y)\Bigg\}
\nonumber\\
&&{} +5-5y^2+3\ln y +4y \ln y +3 y^2\ln y\Bigg]\,,
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent and
\small\begin{eqnarray}
r_\tau^{virt}(y)&=&\frac{1}{324\pi}\Bigg[-2577+72 \pi^2+(2392-240 \pi^2)\,
y+(828-432 \pi^2)\,y^2+144 (1-\pi^2)\,y^3\nonumber\\
&&\hspace*{-1.5cm} -24 \pi^2 y^4-1332\ln y+
(2208+288\pi^2)\,y\ln y+792 y^2\ln y+144 y^3\ln y-216 \ln^2 y
\nonumber\\[0.3cm]
&&\hspace*{-1.5cm}
+720 y \ln^2 y+1296 y^2\ln^2 y+432 y^3\ln^2 y+72 y^4\ln^2 y+864 y\ln^3 y
\nonumber\\[0.3cm]
&&\hspace*{-1.5cm} +
(432-1440 y-2592 y^2-864 y^3-144 y^4)\,(\ln y\ln(1+y)+\mbox{Li}_2(-y))
\nonumber\\
&&\hspace*{-1.5cm}
+
1728 y \ln y\,\mbox{Li}_2(-y)-3456 y\,\mbox{Li}_3\left(-\frac{1}{y}\right)
\Bigg]\\
r_\tau^{real}(y)&=&\frac{1}{324\pi}\Bigg[2901-72 \pi^2+(-9736+240 \pi^2+5184
\zeta(3))\,
y+(6120+432 \pi^2)\,y^2\nonumber\\
&&\hspace*{-1.5cm} +(456+144\pi^2)\,y^3+(259+24 \pi^2)\, y^4+1332\ln y-
(1776+864\pi^2)\,y\ln y-4176 y^2\ln y\nonumber\\[0.3cm]
&&\hspace*{-1.5cm} -288 y^3\ln y+216 \ln^2 y
-720 y \ln^2 y-1296 y^2\ln^2 y-864 y^3\ln^2 y-144 y^4\ln^2 y
\nonumber\\[0.3cm]
&&\hspace*{-1.5cm} -1440 y\ln^3 y+
(-864+2880 y+5184 y^2+1728 y^3+288 y^4)\,(\ln y\ln(1+y)+\mbox{Li}_2(-y))
\nonumber\\
&&\hspace*{-1.5cm}
-
3456 y \ln y \,\mbox{Li}_2(-y)+6912 y\,\mbox{Li}_3\left(-\frac{1}{y}\right)
\Bigg]
\end{eqnarray}\normalsize\vspace*{-0.1ex}
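As a numerical sanity check, in the massless-gluon limit $y\to 0$ the sum $r_{e^+e^-}^{virt}+r_{e^+e^-}^{real}$ must reduce to $1/\pi$, recovering the familiar $R_{e^+e^-}=3(1+\alpha_s/\pi)$; the infrared $\ln^2 y$ and $\ln y$ terms cancel between the two pieces. A minimal sketch transcribing the two $e^+e^-$ expressions (the dilogarithm is summed from its power series, adequate for small $|y|$):

```python
import math

def li2(z, terms=200):
    """Dilogarithm Li_2(z) from its power series (valid for |z| <= 1)."""
    return sum(z**k / k**2 for k in range(1, terms + 1))

def r_virt(y):
    L = math.log(y)
    return (2/(3*math.pi)) * (
        2*(1 + y)**2 * (-0.5*L**2 + L*math.log(1 + y) + math.pi**2/6 + li2(-y))
        - 3.5 - 2*y - 3*L - 2*y*L)

def r_real(y):
    L = math.log(y)
    return (2/(3*math.pi)) * (
        2*(1 + y)**2 * (0.5*L**2 - 2*L*math.log(1 + y) - math.pi**2/6 - 2*li2(-y))
        + 5 - 5*y**2 + 3*L + 4*y*L + 3*y**2*L)

y = 1e-10
total = r_virt(y) + r_real(y)   # tends to 1/pi as y -> 0
```

The individual contributions are separately infrared divergent as the gluon mass is removed, but their sum stays finite.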
\mysection{Abelian five-loop diagrams to the hadronic vacuum polarization}
{Abelian five-loop diagrams to the hadronic vacuum polarization}
\phantom{\ref{fiveloop}}
\begin{figure}[t]
\vspace{-1cm}
\epsfysize=24cm
\epsfxsize=20cm
\centerline{\epsffile{fiveloop.eps}}
\vspace*{-13.8cm}
\caption{Representative abelian four- and five-loop
diagrams for the hadronic vacuum polarization. It is understood that
all self-energy type diagrams are added to each class of diagrams.}
\label{fiveloop}
\end{figure}
In this Appendix we collect the abelian five-loop diagrams to
the hadronic vacuum polarization that are
included in the scale-setting in next-to-leading order as described in
Sect.~5. For completeness, we also list the four-loop diagrams. Numerical
values are given for
\small\begin{equation} Q^2\frac{d\Pi(Q^2)}{dQ^2} = \frac{1}{4\pi^2}\sum_{n=0}^\infty
D_{n+1}\left(\frac{\alpha(Q)}{\pi}\right)^n\,.
\end{equation}\normalsize\vspace*{-0.1ex}
\noindent The diagrams are shown in Fig.~\ref{fiveloop}. Each diagram shown
stands for the class of diagrams obtained when one adds the diagrams
where gluons form self-energy type insertions rather than exchange type
topologies. Diagram (V2) also includes the symmetric diagram, where
the two inner loops are interchanged. We obtain:
\small\begin{eqnarray}
D_4^{\mbox{\tiny (IV1)}} &=& C_F (T N_f)^2\left[\frac{151}{54}-
\frac{19}{9} \zeta(3)
\right]\\
D_4^{\mbox{\tiny (IV2)}} &=& C_F (C_F T N_f)\left[-\frac{101}{64}+
\frac{3}{2} \zeta(3)
\right]\\[0.2cm]
D_5^{\mbox{\tiny (V1)}} &=& C_F (T N_f)^3
\left[-\frac{6131}{972}+\frac{203}{54}\zeta(3)
+\frac{5}{3} \zeta(5)\right]\\
D_5^{\mbox{\tiny (V2)}} &=& C_F (T N_f) (C_F T N_f)
\left[\frac{3571}{576}-\frac{59}{8}
\zeta(3)+ 2 \zeta(3)^2\right]\\
D_5^{\mbox{\tiny (V3)}} &=& C_F (T N_f) (C_F T N_f)
\left[\frac{10199}{3456}-\frac{7}
{2}\zeta(3)+\zeta(3)^2\right]
\end{eqnarray}\normalsize\vspace*{-0.1ex}
\noindent Here $T=1/2$,
$C_F=4/3$ for $SU(3)$ with fermions in the fundamental
representation and $T=1$, $C_F=1$ for $U(1)$.
The sum of these terms yields $d_2^{\rm NLO}$ and $d_3^{\rm NLO}$
in QED (after proper normalization).
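For orientation, the bracketed rational-$\zeta$ combinations evaluate numerically as follows (a quick sketch using the standard values $\zeta(3)\approx 1.2020569$ and $\zeta(5)\approx 1.0369278$):

```python
zeta3, zeta5 = 1.2020569031595943, 1.0369277551433699

b_IV1 = 151/54 - 19/9*zeta3                     # approx  0.2586
b_IV2 = -101/64 + 3/2*zeta3                     # approx  0.2250
b_V1 = -6131/972 + 203/54*zeta3 + 5/3*zeta5     # approx -0.0606
b_V2 = 3571/576 - 59/8*zeta3 + 2*zeta3**2       # approx  0.2244
b_V3 = 10199/3456 - 7/2*zeta3 + zeta3**2        # approx  0.1888
```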
\end{appendix}
\newpage
\small
\section{Introduction}
Methods and techniques from nonlinear systems analysis have the potential to greatly enhance the surface analysis capabilities of the atomic force microscope (AFM). The nonlinearity of interest in AFM is the minute force between a very sharp tip and a surface, which depends on the material composition and geometry of the tip and surface at the nanometer scale. This nonlinear tip-surface force perturbs the linear dynamics of the freely oscillating AFM cantilever, giving rise to intermodulation, or mixing of different drive frequencies. When the AFM cantilever is driven with two pure harmonic tones at frequencies $f_1$ and $f_2$, the nonlinear tip-surface force will generate intermodulation products of the drive tones in the cantilever's response at frequencies $nf_1 + mf_2$, where $n$ and $m$ are integers. With the appropriate choice of $f_1$ and $f_2$, many intermodulation products of high order $\vert n \vert + \vert m \vert$ can be placed near resonance, where large transfer gain allows for detection of the response with enhanced sensitivity \cite{Platz2008}. Thus, in contrast to traditional dynamic AFM, where response amplitude and phase are measured only at the drive frequency, intermodulation AFM acquires many (typically of order 30) amplitude and phase quantities, which together contain information about the nonlinear tip-surface force that created the intermodulation response. By analysis of the intermodulation spectrum one can, in principle, reconstruct a polynomial approximation to a conservative tip-surface force $F_{ts}(z)$ as a function of the cantilever displacement $z$ \cite{Hutter_PRL}. Reconstruction methods which include the possibility of non-conservative tip-surface forces are under development.
Reconstructing the tip-surface force from analysis of the nonlinear cantilever dynamics is one of the current trends in AFM research \cite{Garcia2002197,PhysRevLett.97.036104,0957-4484-19-37-375704,sahin_nn07,MartinStark06252002}. This approach has historical roots: the analytical power of the AFM has thus far been defined in terms of the instrument's ability to measure force-distance curves $F_{ts}(z)$ by monitoring the {\em static} deflection of the cantilever while slowly approaching the surface. It is natural to extend the interpretation of approach curves in {\em dynamic} AFM in terms of force-distance curves. However, this is perhaps not the best method of analyzing the nonlinear dynamics of a driven cantilever impacting on a surface. The problem can become quite complex if many eigenmodes of the cantilever, and the fast feedback used in AFM to track the surface, are all accounted for in a full description of the dynamical system. If the goal of dynamic methods of AFM is to enhance the imaging capabilities of AFM or recognize patterns on a surface, one could consider the use of statistical methods \cite{0957-4484-20-8-085714,0957-4484-20-40-405708} as a means of revealing dependencies in the data set of intermodulation amplitudes and phases measured at each image pixel. One can also simply plot surface maps of the intermodulation amplitudes and phases \cite{platz_ultramicroscopy} to discover if new features arise in the image which were not visible in standard dynamic AFM. Nevertheless, a physical understanding of the origin of the newly observed features is desirable. To this end one can resort to direct numerical simulation of the dynamical system with an appropriate model for the tip-surface force, as a means of understanding how parameters in a force model affect the intermodulation response.
Here we report on the results of numerical simulations using a single eigenmode cantilever and conservative tip-surface force model, where the Young's modulus of the surface is varied over six orders of magnitude. We simulate the response as the oscillating cantilever approaches a surface, and compare the results with experimental approach curves taken on three different materials spanning this range of Young's moduli. In simulation and experiment, respectively, we calculate and measure both the amplitude and phase of the response at intermodulation frequencies while approaching the surface. Comparison between the simulation and experiments allows us to draw some qualitative conclusions about the origin of characteristic features in the approach curves. In particular, we find that the higher order intermodulation response results from stiffer surfaces, and that the phase of higher order intermodulation products is quite responsive to small changes in the approach distance, making this signal very suitable for feedback control of the probe height.
\section{Numerical simulations}
We model the cantilever dynamics with a single eigenmode, for example the fundamental bending mode of the cantilever \cite{Raman200820}. This approximation is valid as long as the eigenmodes have sharp resonances, and the drive and response have significant frequency components only close to one eigenmode. The single eigenmode approximation allows us to treat the cantilever as a simple harmonic oscillator with a linear restoring force in $z-z_0$, where $z$ is the tip-surface separation, and $z_0$ is the tip location when the cantilever is in its equilibrium position.
\begin{equation}
\ddot z +\frac{1}{Q} \dot z + (z-z_0) = \frac{1}{k} (F_{\mathrm{drive}}+F_{\mathrm{ts}})
\label{eq_sho}
\end{equation}
Here $\dot z$ means differentiation with respect to dimensionless time $\tau=2 \pi f_0 t$. The simulation used values of the quality factor $Q = 510$, resonance frequency $f_0 =277$~kHz, and spring constant $k=28$~N/m, which are typical for the experiments described in the next section.
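Equation~(\ref{eq_sho}) is a standard second-order ODE and can be integrated with any general-purpose solver (the production runs use the CVODE solver described below). As an illustrative sketch, one classical Runge-Kutta step for the equivalent first-order system, with $z_0$ absorbed into $z$ and the driving and tip-surface forces lumped into a user-supplied function $f(z,\tau)=(F_{\mathrm{drive}}+F_{\mathrm{ts}})/k$:

```python
import math

def rhs(tau, z, v, Q, f):
    """State derivative for z'' + z'/Q + z = f(z, tau)."""
    return v, -z - v/Q + f(z, tau)

def rk4_step(tau, z, v, h, Q, f):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1z, k1v = rhs(tau, z, v, Q, f)
    k2z, k2v = rhs(tau + h/2, z + h*k1z/2, v + h*k1v/2, Q, f)
    k3z, k3v = rhs(tau + h/2, z + h*k2z/2, v + h*k2v/2, Q, f)
    k4z, k4v = rhs(tau + h, z + h*k3z, v + h*k3v, Q, f)
    return (z + h*(k1z + 2*k2z + 2*k3z + k4z)/6,
            v + h*(k1v + 2*k2v + 2*k3v + k4v)/6)
```

A fixed-step method like this is only a sketch: the onset of the repulsive branch at $z=a_0$ makes the problem locally stiff, which is why an adaptive solver with event detection is used in practice.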
The nonlinear tip-surface force is modeled in a piece-wise fashion using the van der Waals - DMT model \cite{Garcia2002197}. In this model the tip is approximated by a sphere of radius $R$ which is attracted toward a planar surface of uniform composition by the van der Waals force. The attractive force is cut off at $z=a_0$, where the model switches to a repulsive contact force due to the mutual elastic deformation of a spherical tip and planar surface,
\begin{equation}
F_{ts}(z)=\left\{
\begin{array}{ll}
-\frac{HR}{6z^{2}} & \mathrm{for~} z > a_0\\
-\frac{HR}{6a_{0}^{2}}+\frac{4}{3}E^{*}\sqrt{R}\left(a_{0}-z\right)^{3/2}
& \mathrm{for~} z \leq a_0
\end{array}\right.
\end{equation}
The Hamaker constant $H$ and the cut-off distance $a_0$ are the parameters controlling the attractive van der Waals force, and the effective modulus $E^*$ characterizes the repulsive contact force.
\begin{equation}
\frac{1}{E^*}=
\frac{1-\nu_{\mathrm{tip}}^2}{E_\mathrm{tip}}+
\frac{1-\nu_{\mathrm{surface}}^2}{E_\mathrm{surface}}
\end{equation}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8cm]{figure1}
\end{center}
\caption{THE MODEL USED FOR THE TIP-SURFACE FORCE IN THE NUMERICAL SIMULATIONS, SHOWN FOR THREE DIFFERENT VALUES OF THE YOUNG'S MODULUS OF THE SURFACE.}
\label{fig_force}
\end{figure}
For the sake of limiting parameter space in the simulation, we keep the attractive force constant by fixing $H=6.0\times 10^{-20}$~J, $a_0=0.103$~nm and $R=10$~nm, and we neglect differences in the Poisson ratios by fixing $\nu_{\mathrm{tip}}=\nu_{\mathrm{surface}}=0.53$. The stiffness of the surface is then varied by changing the Young's modulus of the surface over six orders of magnitude. Representative tip-surface force curves are plotted in fig. \ref{fig_force} for a Si tip $E_{\mathrm{Si}}=120$~GPa, and three different values of $E_{\mathrm{surface}}$ corresponding to the materials studied in the experimental section: $E_{\mathrm{SiO2}}=70$~GPa, $E_{\mathrm{PMMA}}=1.2$~GPa and $E_{\mathrm{PDMS}}=50$~kPa.
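With these parameter values the piecewise force model can be coded directly (a sketch in SI units; by construction the two branches join continuously at $z=a_0$):

```python
import math

H, a0, R = 6.0e-20, 0.103e-9, 10e-9   # J, m, m (values fixed above)
nu = 0.53                             # common Poisson ratio of tip and surface

def effective_modulus(E_tip, E_surface):
    """Effective modulus E* of the tip-surface contact."""
    return 1.0 / ((1 - nu**2)/E_tip + (1 - nu**2)/E_surface)

def F_ts(z, E_surface, E_tip=120e9):
    """van der Waals - DMT tip-surface force in N (z in m)."""
    if z > a0:
        return -H*R / (6*z**2)        # attractive van der Waals branch
    Estar = effective_modulus(E_tip, E_surface)
    return -H*R/(6*a0**2) + (4.0/3.0)*Estar*math.sqrt(R)*(a0 - z)**1.5
```

Only `E_surface` changes between the simulation runs compared in fig.~\ref{fig_force}.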
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=17cm]{figure2}
\end{center}
\caption{COLOR MAPS SHOWING THE SIMULATED RESPONSE AMPLITUDE AT THE DRIVE FREQUENCIES AND FOUR OF THE MANY INTERMODULATION PRODUCTS, IN THE PLANE OF THE YOUNG'S MODULUS OF THE SURFACE VS. THE APPROACH DISTANCE.}
\label{fig_map_amp}
\end{figure*}
The drive force consists of two pure harmonic tones,
\begin{equation}
F_{\mathrm{drive}}=A_1 \cos(\Omega_1 \tau) + A_2 \cos(\Omega_2 \tau)
\end{equation}
at the drive frequencies $\Omega_{1,2}=f_{1,2}/f_0$, which have a greatest common divisor $\Delta \Omega$. All intermodulation products appear at frequencies which are integer multiples of $\Delta \Omega$, so the response can be expressed as a Fourier series in $\Delta \Omega$ \cite{Hutter_PRL}. In the simulations one drive was placed slightly below resonance, $f_1=276.5$~kHz, and the other drive on resonance, $f_2=277$~kHz, so that $\Delta \Omega$ corresponds to the physical frequency difference $\Delta f = f_2 - f_1 = 500$~Hz. We note, however, that there are several possible drive configurations which produce many intermodulation products near resonance.
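The placement of high-order products near resonance can be checked with a small brute-force search; `min_order` below is a hypothetical helper (not part of the measurement software) that returns the lowest order $|n|+|m|$ producing a given frequency:

```python
from math import gcd

f1, f2 = 276_500, 277_000   # drive frequencies in Hz
df = gcd(f1, f2)            # 500 Hz: all response lines sit at multiples of df

def min_order(f_target, nmax=25):
    """Smallest order |n| + |m| with n*f1 + m*f2 == f_target."""
    orders = [abs(n) + abs(m)
              for n in range(-nmax, nmax + 1)
              for m in range(-nmax, nmax + 1)
              if (n, m) != (0, 0) and n*f1 + m*f2 == f_target]
    return min(orders) if orders else None
```

For this drive pair it confirms, for example, that the response at $f_1-\Delta f$ is a third-order product ($2f_1-f_2$), while that at $f_2+3\Delta f$ is of seventh order ($4f_2-3f_1$).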
The integration of equation (\ref{eq_sho}) was done numerically using the solver CVODE contained in the SUNDIALS suite of nonlinear solvers \cite{1089020}. CVODE is a variable-order, variable-step integrator with a built-in root-finding, or discrete event detection, routine, used here to ensure that the solver generates a discrete output at $z=a_0$. We experimented with the two families of multi-step methods provided in CVODE, Adams-Moulton formulas and backward differentiation formulas, which are recommended for non-stiff and stiff problems respectively. In both cases, the functional iteration method was used. We found no discernible difference between these methods for the study reported here. The output of the integrator is sampled in time and fast Fourier transformed to get the response spectrum. Care is taken to choose a sampling frequency which is an integer multiple of $\Delta \Omega$, so that points in the discrete Fourier transform land exactly at intermodulation frequencies.
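The effect of commensurate sampling can be seen in a toy calculation: when the sampling frequency and every spectral line are integer multiples of the line spacing, a single-bin discrete Fourier transform recovers each amplitude with no spectral leakage (a pure-Python sketch on a synthetic signal; the actual analysis FFTs the full record):

```python
import cmath, math

fs, df = 2_000_000.0, 500.0   # 2 MHz sampling, 500 Hz line spacing
N = int(fs / df)              # 4000 samples = exactly one beat period
f1, f2 = 276_500.0, 277_000.0

# synthetic response: the two drives plus a weak third-order product
x = [math.cos(2*math.pi*f1*k/fs) + math.cos(2*math.pi*f2*k/fs)
     + 0.1*math.cos(2*math.pi*(2*f1 - f2)*k/fs) for k in range(N)]

def bin_amplitude(freq):
    """Amplitude at one Fourier bin; exact when freq is a multiple of df."""
    w = -2j*math.pi*freq/fs
    return 2.0/N * abs(sum(xk*cmath.exp(w*k) for k, xk in enumerate(x)))
```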
In order to explore the effect of surface stiffness on the response, we fix the cantilever spring constant to $k = 28\ \mathrm{N/m}$ and the values of $H$, $a_0$ and $R$ as given for fig. \ref{fig_force}, and simulate the response as we step $z_0$ toward the surface. This approach simulation is repeated in a loop stepping the effective modulus $E^*$, logarithmically over six orders of magnitude. We generate color density plots of the response amplitude in the parameter plane of $E^*$ vs. $z_0$. Figure \ref{fig_map_amp} shows six such plots, at each of the two drive frequencies $\Omega_1$ and $\Omega_2$, and four of the intermodulation frequencies: the third order intermodulation product 3L at frequency $\Omega_1 - \Delta \Omega$, the 11$^{\mathrm{th}}$ order intermodulation product 11L at $\Omega_1 - 5 \Delta \Omega$, the 7$^{\mathrm{th}}$ order intermodulation product 7H at frequency $\Omega_2 + 3 \Delta \Omega$, and the 21$^{\mathrm{st}}$ order intermodulation product 21H at $\Omega_2 + 10 \Delta \Omega$.
These color maps show that the intermodulation response is rich and varied over the parameter space explored. Nevertheless, we can observe some general trends which are best described by comparing three regions of surface stiffness, as indicated by the horizontal dashed lines in fig. \ref{fig_map_amp}.
At low surface stiffness, below $10^{-2}$ GPa, we see that the response at the drive frequencies changes very little with stiffness. In the simulation, drive 1 and drive 2 have equal strength, but the response amplitude of the free cantilever (left edge of the panels of fig. \ref{fig_map_amp}) is lower at drive 1 than drive 2. This is because drive 2 is on resonance, and drive 1 is off resonance by 500~Hz, below drive 2. When engaging the surface, we see that the response at drive 1 increases in amplitude during the initial approach, whereas the response at drive 2 decreases. We can understand the relative change of the drive amplitudes as resulting from a parameter change of a linear system: the attractive tip-surface force effectively weakens the linear cantilever restoring force, causing a shift of resonance toward lower frequency, away from drive 2, toward drive 1. However, this naive approach neglects the redistribution of power between frequencies, forbidden in linear systems, but characteristic of nonlinear systems. Upon approaching the surface, we also see the appearance of intermodulation products of the two drives. Response at these frequencies can only be understood by considering the nonlinear response of the system. For the nonlinear system, power can be taken from one drive and transferred to the other drive (amplification), and to the intermodulation products of the two drives.
In this low stiffness region we also see that intermodulation products of high order have a low response amplitude, which shows an oscillating behavior as the surface is approached. The order of the intermodulation product roughly corresponds to the order of the term in a power series approximation to the tip-surface force \cite{Hutter_PRL}. For low stiffness, coefficients of high powers of $z$ in a polynomial approximation of $F_{\mathrm{ts}}(z)$, will be small in comparison to those for high stiffness (see fig. \ref{fig_force}). Thus, a rule of thumb is: the stiffer the surface, the larger the response of high-order intermodulation products.
At intermediate stiffness, between $10^{-2}$ and $10^{0}$ GPa, we see a rapid drop in the response amplitude at drive 2, and a corresponding increase in the amplitude at drive 1. We also observe a peaking of the intermodulation response amplitudes 3L, and at somewhat higher stiffnesses, 7H. In all of the aforementioned, we observe sharp tongues of high amplitude extending toward smaller approach distance, where they become unstable, switching from low to high amplitude. At such a close approach and large amplitude oscillation, the nonlinearity is too strong and a bifurcation of the dynamical system will occur \cite{1461509,Garcia2002197,hashemi:093512} resulting in bi-stable or multi-stable oscillation states. Bifurcations cause unstable imaging conditions because unavoidable noise causes jumps between the resulting meta-stable oscillation states. It is therefore reassuring to see that at moderate approach distances of 22~nm and above, we find stable behavior over a wide range of surface stiffness. Indeed, experiments on multi-frequency AFM report more stable behavior than standard single-drive dynamic AFM for comparable operation conditions \cite{Platz2008,thota:093108}.
Finally, at high stiffness, above $10^{0}$ GPa, we find a sharp reduction in the response amplitude at drive 1 coinciding with a sharp increase in the amplitude of the 3$^{rd}$ order intermodulation product, as drive power is redistributed. Proceeding to higher stiffness, we see that the drives and lower order intermodulation responses show little change, whereas the higher order intermodulation responses show greater variation with stiffness. Again, this is to be expected because for very stiff surfaces, the polynomial approximation of $F_{ts}(z)$ will have significant contributions from high powers of $z$. These parameter maps, together with those for other intermodulation products, show that intermodulation is capable of producing a varied response over a very wide range of surface stiffness, for one value of the cantilever spring constant. This complex behavior of the various intermodulation products, observed upon approaching the surface, demonstrates that the spectrum of intermodulation response at one approach distance provides an excellent fingerprint of the material and mechanical surface properties at the nanometer scale.
\section{Experiments}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=17cm]{figure3}
\end{center}
\caption{THE AMPLITUDE OF RESPONSE $Z$ AT THE TWO DRIVE FREQUENCIES AND FOUR INTERMODULATION PRODUCTS AS A FUNCTION OF CHANGE IN APPROACH DISTANCE FOR THREE MATERIALS OF WIDELY VARYING YOUNG'S MODULUS. THE VERTICAL AXES IN EACH ROW ARE IDENTICAL AND THE INSETS SHOW A VERTICAL ZOOM WHERE THE RESPONSE WAS WEAK.}
\label{fig_experiments}
\end{figure*}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=17cm]{figure4}
\end{center}
\caption{PHASOR PLOTS SHOWING HOW THE AMPLITUDE AND PHASE OF THE RESPONSE EVOLVES AS THE PROBE APPROACHES AN SiO$_2$ SURFACE. THE NUMBERS GIVEN JUST OUTSIDE THE FIRST OCTANT REFER TO THE AMPLITUDE AT FULL SCALE (nm). THE COLOR GRADIENT CODES APPROACH DISTANCE, WHERE RED CORRESPONDS TO CLOSEST APPROACH.}
\label{fig_polar}
\end{figure*}
Experiments were performed in air on a Veeco Multimode~2 AFM, with additional electronics for synthesizing the drive signal and a separate computer for sampling and analyzing the cantilever response \cite{platz_ultramicroscopy}. Cantilevers of the type MP-11100-10 from Veeco were used in the experiment. The resonant frequency and quality factor were determined by measuring the thermal equilibrium fluctuation force due to the damping medium (air), which can be observed near resonance where sensitivity is enhanced. Typical values for these cantilevers were: $Q=575$ and $f_0=290$~kHz. By calculating the hydrodynamic damping from the cantilever dimensions and density of air \cite{sader:3967,Higgins2006}, we can determine both the spring constant of the cantilever and the optical lever responsivity of the AFM, without touching the surface. Typical values for these experiments were $k=28$~N/m and $\alpha = 55$~nm/V respectively.
When acquiring the approach curves, the Veeco system is set to make a ramp toward the surface with ramp frequency 0.1~Hz, while a sampling card streams data to storage at a sampling frequency of 2~MHz. The stored data is parsed and Fourier transformed to capture the intermodulation spectrum at different approach distances. In the experiment we can only control the change in approach distance $\Delta z_0=z_{\mathrm{start}}-z_0$, where the origin of the approach $z_{\mathrm{start}}$ is chosen to be at the onset of intermodulation response. Measurements were made approaching the surface, and immediately retracting, taking care not to go too far beyond the point where all response amplitudes vanish.
By overlaying the approach and retract data, we found that the data fell on the same curve, with no visible sign of hysteresis. This indicates that the oscillating cantilever could not be trapped in differing, meta-stable oscillation states. Taking care to avoid tip and surface damage by not approaching too close to the surface, we were able to get quantitatively consistent results for fixed drive parameters, when comparing several consecutive measurements at one point on a surface. When comparing the response at several different points on the same surface, we also found quantitatively self-consistent results for each surface studied. Measurements were performed on each surface with two different cantilevers, where comparison showed a qualitative self-consistency. The small differences observed between different cantilevers may be explained by variation in the cantilever parameters and the placement of the drive frequencies with respect to resonance.
Representative curves showing the amplitude of the response at the two drive frequencies and four of the intermodulation products are shown in fig. \ref{fig_experiments}, for approach toward three different surfaces: The SiO$_2$ surface was a piece of a Si wafer with 2~$\mu$m thermal oxide on the surface. The PMMA (molecular weight 950 kDalton) was spin-coated on a piece of Si wafer to a thickness of about 1.3~$\mu$m. The PDMS surface was cast on to a glass surface and vacuum cured to form a sheet 1~mm thick. This sheet was then peeled off and placed on an AFM chuck so that the surface which cured against the glass could be probed.
The experimental curves of fig. \ref{fig_experiments} reveal some interesting similarities and differences from the simulated response. Similar to what was expected from the simulations, we find that the amplitude of higher order intermodulation products is much weaker for the softest material PDMS, in comparison with either PMMA or SiO$_2$. In contrast to what we expected from the simulation, we find that PMMA and SiO$_2$ have qualitatively similar approach curves. Upon first contact with the surface, each material shows a small initial maximum in the response amplitude of the low-order intermodulation products. This initial maximum is also seen in the simulations at intermediate stiffnesses. However, judging from the relative size of this feature in the two materials, comparison with simulation (see fig. \ref{fig_map_amp} IMP 3L) would indicate that SiO$_2$ is softer than PMMA, which is clearly not the case.
From our experience with other simulations, we find that this initial maximum is sensitive to both attractive and dissipative forces. The former was kept constant and the latter was absent in the simulations. We modeled the tip-surface force with the van der Waals - DMT model, changing only the surface Young's Modulus. This conservative force model does not account for dissipative effects which may be caused by adsorbed water molecules on the surface. The fact that response from SiO$_2$ appears softer than expected from our simulations could be explained by the presence of a surface adsorbate on the SiO$_2$.
Comparing the higher order intermodulation amplitudes of SiO$_2$ and PMMA in fig. \ref{fig_experiments} we again find qualitative consistency with the simulations. The stiffer SiO$_2$ surface generates a larger amplitude of response at the 11$^{th}$ order intermodulation product IMP 11L, than does the softer PMMA surface. Overall we see a striking qualitative similarity in features of the experimental amplitude vs. $\Delta z_0$ curves of SiO$_2$ and PMMA at each frequency, where features in the PMMA curves appear smoother than the corresponding features in the SiO$_2$ curves.
\section{Intermodulation phase}
The previous sections discussed only the amplitude of the response at the drive frequencies and a few of the many intermodulation products. It is also possible to determine the phase of the response at all of these frequencies because they are integer multiples of a fundamental frequency in the problem, $\Delta \Omega$, which is the greatest common divisor of the two drive frequencies. That all response can be accounted for by a Fourier series in $\Delta \Omega$ is a valid assumption as long as the nonlinearity is not too strong, so that period doubling or other such precursors to chaos do not appear in the response. In the experiment, the phase can be determined by using the two drive signals to build a reference signal with frequency $\Delta \Omega$ \cite{platz_ultramicroscopy}. This measurement constitutes a generalization to nonlinear systems of a common instrument used in linear systems analysis, known as the lock-in amplifier, or network analyzer.
To represent both the amplitude and phase of the response, it is convenient to plot the approach curves in a polar coordinate system as seen in fig. \ref{fig_polar}. Each point in the plot corresponds to the response at a particular value of $\Delta z_0$. A vector stemming from the origin to this point (a phasor) has length which is the amplitude, and polar angle which is the phase. The response at the two drive frequencies starts at the high amplitude of the freely oscillating cantilever and evolves toward zero along a contorted path as the cantilever approaches the surface. Response at the intermodulation frequencies starts at the origin of the polar plots because the freely oscillating cantilever is a linear system with no intermodulation response. When the surface is engaged the intermodulation response appears and we observe much greater variation of the phase, where the path loops several times around zero for higher order intermodulation products.
This multiple looping of the higher order intermodulation products means that the phase is winding more rapidly for higher order. Indeed, if we unwind the phase during the approach, we can plot the phase as an extended variable over an interval of several times $2 \pi$ as the surface is approached. Figure \ref{fig_phase} shows such a plot for the same curves as those given in fig. \ref{fig_polar}. Here we clearly show how intermodulation products at frequencies higher than the drive advance in phase, whereas those lower than the drive retard in phase, upon approaching the surface (winding in opposite sense in fig. \ref{fig_polar}). We also see that the phase for higher order intermodulation products is much more responsive to changes in $\Delta z_0$ than the phase at either drive frequency, indicating that the higher order phase would make a good feedback signal for controlling the AFM probe height.
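The unwinding described above is the standard removal of artificial $2\pi$ jumps from a sampled phase record (a minimal sketch, equivalent in spirit to numpy.unwrap; it assumes the true phase changes by less than $\pi$ between samples):

```python
import math

def unwrap(phases):
    """Turn wrapped phases (rad) into an extended variable by removing 2*pi jumps."""
    out = [phases[0]]
    for p in phases[1:]:
        step = p - out[-1]
        step -= 2*math.pi*round(step / (2*math.pi))  # fold the jump into (-pi, pi]
        out.append(out[-1] + step)
    return out
```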
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8cm]{figure5}
\end{center}
\caption{THE PHASE OF THE RESPONSE $\phi$, PLOTTED AS AN EXTENDED VARIABLE, VS. THE APPROACH DISTANCE.}
\label{fig_phase}
\end{figure}
\section{Conclusion}
We have described the amplitude and phase response due to the intermodulation of two pure harmonic drive tones in dynamic AFM, showing how these quantities change upon approach toward a surface. Simulations of the approach process with a single eigenmode model using the van der Waals - DMT, conservative, nonlinear tip-surface force, show that the intermodulation response is rich and varied over a wide range of surface stiffnesses. Experiments on three surfaces spanning this range of Young's modulus of the surface also show rich and varied response. However, it is difficult to correlate the experimental and simulated results in any quantitative way. These difficulties stem from the use of an idealized tip-surface model in the simulation. The model is not expected to be accurate for softer surfaces, and it neglects the dissipative processes due to an adsorbed water layer, which can arise in AFM performed in ambient air at standard temperature and pressure. Furthermore, the model assumes an ideal geometry of a round tip and a flat, semi-infinite surface, both being homogeneous in composition. While the experiments here probed flat, homogeneous surfaces, samples of interest will often have considerable variation in topography and composition of the surface at the nanometer scale. Because intermodulation AFM is very sensitive to small changes in the nonlinear tip-surface force, we expect from theory that the observed intermodulation response will also be strongly affected by the topography and local composition of the surface. Thus, for real samples, the idealized force model used here is not particularly useful. Indeed, bulk elastic moduli or Hamaker constants for a specified geometry are not the proper quantities for characterization of nanostructured surfaces. Exactly which quantities best characterize the surface properties probed by dynamic AFM is an open question for AFM research.
It is the belief of the authors that intermodulation AFM can play an important role in finding the answer.
\section{Acknowledgments}
We gratefully acknowledge funding from the Swedish Research Council (VR), Vinnova, and the G\"oran Gustafsson Foundation.
\section{Introduction}
\subsection{Background}
In 1965, Gurari\u\i\ constructed a separable Banach space $\Gur$ of ``almost universal disposition for finite-dimensional spaces'', that is, having the following extension property: for every isometry $g: X \to Y$, where $Y$ is a finite-dimensional Banach space and $X$ is a subspace of $\Gur$, and every $\e>0$ there is an $\e$-isometry $f:Y\to \Gur$ such that $f(g(x))=x$ for all $x\in X$.
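Here, as is customary in this context, an $\e$-isometry is understood to be a linear map that preserves norms up to a factor $1+\e$:
\[
(1+\e)^{-1}\|y\|\le \|f(y)\|\le (1+\e)\|y\|\qquad (y\in Y).
\]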
It is almost obvious that if $V$ is any other separable Banach space fitting in the definition, then there is a linear isomorphism $u:\Gur\to V$ with $\|u\|\cdot\|u^{-1}\|$ arbitrarily close to 1.
Gurari\u\i's creature spurred considerable interest in Banach space theory and is still an object of intense research. Amongst the main hits we find the following. Lusky proved in \cite{l-unique} that all separable Banach spaces of almost universal disposition are isometric; see \cite{ks} for a simpler proof. The space $\Gur$ is a Lindenstrauss space, that is, its dual space is isometric to an $L_1$-space. Moreover, every separable Lindenstrauss space is isometric to a subspace of $\Gur$ which is the range of a nonexpansive projection.
This was proved by Wojtaszczyk \cite{Wojt}, see also \cite[Proposition 8]{l-survey} where it is shown that one can arrange the projection so that its kernel is again isometric to $\Gur$. The Gurari\u\i\ space is complemented in no space of type $C(K)$ and it is isomorphic to the space of all continuous affine functions on the Poulsen simplex; see \cite[Corollary 2]{b-l} and \cite{l-survey}.
We refer the reader to the survey paper \cite{g-k} for more information and references reporting recent work and to \cite{group} for a related construction in group theory.
\subsection{The plan of the paper}
There is no clear intrinsic reason to restrict attention to Banach spaces in studying the extension of isometries.
In this paper we push the notion of ``universal disposition'' and its relatives into the larger class of quasi-Banach spaces.
We shall construct, for each $p\in(0,1]$, a separable $p$-Banach space of almost universal disposition for finite-dimensional $p$-Banach spaces, which turns out to be unique, up to isometries, and that we will call $\Gp$.
Our main tools are the push-out construction and the notion of a direct limit, whose adaptations to the $p$-normed setting are presented in Sections~\ref{sec:po} and \ref{sec:direct}. The construction itself is carried out in Section~\ref{sec:main}.
In Section~\ref{sec:uni} we prove that any two separable $p$-Banach spaces of almost universal disposition for finite-dimensional $p$-Banach spaces are isometric. As a consequence, $\mathbb G_p$ contains an isometric copy of each separable $p$-Banach space, which improves a classical result by Kalton and provides a complete solution to an old problem in the isometric theory of quasi-Banach spaces.
Up to this point the paper is rather elementary and self-contained.
In Section~\ref{sec:w1} we present a nonseparable generalization. We construct a $p$-Banach space whose density character is the continuum and which is of universal disposition for separable $p$-Banach spaces. We also mention a result of Ben Yaacov and Henson, with a simpler argument provided by Richard Haydon, showing that it is impossible to reduce the size of the space in the preceding result. We prove that these spaces contain isometric copies of all $p$-Banach spaces with density character $\aleph_1$ or less and that they are all isometric under the continuum hypothesis.
Section~\ref{sec:inj} studies the extension of operators with values in the spaces of (almost) universal disposition. Let us pause a moment for some definitions.
First, following a long-standing tradition, a quasi-Banach space $E$ is said to be injective amongst $p$-Banach spaces if there is a constant $\lambda\geq 1$ such that, for every $p$-Banach space $X$ and every subspace $Y$ of $X$, every operator $t:Y\to E$ extends to an operator $T:X\to E$ with $\|T\|\leq \lambda\|t\|$.
Also, we say that $E$ is separably injective amongst $p$-Banach spaces if the preceding condition holds for separable $X$, and that it is locally injective if it holds when $X$ is finite-dimensional.
After proving that there is no injective $p$-Banach space, apart from $0$, we show that $\mathbb G_p$ is ``locally injective'' (see Definition~\ref{DefVrsbtix}) and also that any space of universal disposition for separable $p$-Banach spaces is separably injective. No separably injective $p$-Banach space had been previously known for $p<1$.
In Section~\ref{sec:operas} we show the existence of a nonexpansive projection on $\Gp$ whose kernel is isometric to $\Gp$. Moreover, this projection is universal in the sense that the class of all its restrictions to closed subspaces contains (up to isometry) all possible nonexpansive operators from separable $p$-Banach spaces into $\Gp$.
Finally, the closing Section~\ref{closing} contains some miscellaneous remarks and questions which we found interesting.
\subsection{Quasi-Banach spaces}
We shall denote by $\K$ the field of scalars, which is fixed to be either the field of real or complex numbers.
A quasinorm on a $\K$-linear space $X$ is a function $\|\cdot\|:X\to \R_+$ satisfying the following conditions:
\begin{itemize}
\item If $\|x\|=0$, then $x=0$.
\item $\|\lambda x\|=|\lambda|\/\|x\|$ for every $\lambda\in\mathbb K$ and every $x\in X$.
\item There is a constant $C$ such that $\|x+y\|\leq C(\|x\|+\|y\|)$ for all $x,y\in X$.
\end{itemize}
A quasinorm gives rise to a linear topology on $X$, namely the least linear topology for which the unit ball $B=\{x\in X:\|x\|\leq 1\}$ is a
neighborhood of zero. This topology is locally bounded, that is, it has a bounded neighborhood of zero.
Actually, every locally bounded topology arises in this way. We refer the reader to \cite{kpr, r} for general information on locally bounded spaces.
A quasinormed space is a linear space $X$ equipped with a quasinorm. If $X$ is complete for the quasinorm topology we say that $X$ is a quasi-Banach space.
Let $p\in(0,1]$. A $p$-normed (respectively, $p$-Banach) space is a quasinormed (respectively, quasi-Banach) space whose quasinorm is a $p$-norm, that is, it satisfies the inequality $\|x+y\|^p\leq \|x\|^p+\|y\|^p$. The case $p=1$ corresponds to the popular class of Banach spaces. Observe that every $p$-norm is also a $q$-norm for each $q\leq p$.
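In particular, every $p$-norm is a quasinorm with constant $C=2^{1/p-1}$: combining the $p$-triangle inequality with the elementary estimate $(a^p+b^p)^{1/p}\leq 2^{1/p-1}(a+b)$ for $a,b\geq 0$ (a consequence of the monotonicity of power means) one gets
$$
\|x+y\| \leq \bigl(\|x\|^p+\|y\|^p\bigr)^{1/p} \leq 2^{1/p-1}\bigl(\|x\|+\|y\|\bigr).
$$
Thus, for instance, a $\tfrac12$-norm satisfies the triangle inequality with constant $C=2$.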
It is an important result of Aoki and Rolewicz that every quasinorm is equivalent to a $p$-norm for some $p\in(0,1]$ in the sense that they induce the same topology; see \cite[Theorem 1.3]{kpr} or \cite[Theorem 3.2.1]{r}.
Let $X$ and $Y$ be quasinormed spaces. A linear map $f:X\to Y$ is a (bounded) operator if there is a constant $K$ such that $\|f(x)\|_Y\leq K\|x\|_X$ for all $x\in X$. The infimum of the constants $K$ for which the preceding inequality holds is denoted by $\|f\|$, the quasinorm of $f$.
If, besides, one has $(1-\e)\|x\|_X\leq \|f(x)\|_Y\leq (1+\e)\|x\|_X$ for some $\e\in[0,1)$ independent of $x\in X$, then $f$ is called an $\e$-isometry.
If $\|f(x)\|_Y=\|x\|_X$ for all $x\in X$, then $f$ is called an isometry. Isometries are not assumed to be surjective. However, we say that $X$ and $Y$ are isometric if there is a surjective isometry $f:X\to Y$.
Note that there is no quasi-Banach space containing, for every $\e>0$ and every $p\in(0,1]$, a subspace $\e$-isometric to the 2-dimensional space $\ell_p^2$, the space $\mathbb K^2$ with the $p$-norm $\|(s,t)\|_p=(|s|^p+|t|^p)^{1/p}$. So, strictly speaking, the title of the paper is a bit exaggerated.
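Let us sketch the reason, which depends on the Aoki--Rolewicz theorem quoted above. If $Z$ were such a space, its quasinorm would be equivalent to a $q$-norm $|\cdot|$ for some $q\in(0,1]$; normalizing, we may assume $K^{-1}\|z\|\leq |z|\leq \|z\|$ for some $K\geq 1$ and all $z\in Z$. Now, if $f:\ell_p^2\to Z$ is an $\e$-isometry and $e_1,e_2$ denote the unit vectors of $\ell_p^2$, then
$$
(1-\e)2^{1/p}=(1-\e)\|e_1+e_2\|_p\leq \|f(e_1)+f(e_2)\|\leq K|f(e_1)+f(e_2)|\leq K\bigl(|f(e_1)|^q+|f(e_2)|^q\bigr)^{1/q}\leq 2^{1/q}K(1+\e),
$$
which is impossible for small $p$ since the right-hand side does not depend on $p$.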
\subsection{Push-outs}\label{sec:po}
This section is an adaptation of \cite[Section 2.1]{accgm} to the $p$-normed setting.
Let $(X_\gamma)_{\gamma\in\Gamma}$ be a family of $p$-Banach spaces, where $\Gamma$ is a set of indices. We set
$$\ell_p(\Gamma, X_\gamma) =\left\{(x_\gamma)\in \prod_{\gamma\in\Gamma} X_\gamma : \left(\sum_\gamma\|x_\gamma\|^p\right)^{1/p}<\infty\right\}
$$
with the obvious $p$-norm. If the family has two spaces only, say $X$ and $Y$, we just write $X\oplus_p Y$. It is important to realize that this construction represents the direct sum in the category of $p$-Banach spaces and nonexpansive operators in the obvious sense.
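Let us check the coproduct property just mentioned. Given nonexpansive operators $u_\gamma:X_\gamma\to E$ into a $p$-Banach space $E$, the formula $u(x)=\sum_{\gamma} u_\gamma(x_\gamma)$ defines the unique operator $u:\ell_p(\Gamma, X_\gamma)\to E$ agreeing with each $u_\gamma$ on the corresponding coordinate: the series converges by completeness, since its terms are $p$-summable, and $u$ is again nonexpansive because
$$
\Bigl\|\sum_{\gamma} u_\gamma(x_\gamma)\Bigr\|_E^p\leq \sum_{\gamma}\|u_\gamma(x_\gamma)\|_E^p\leq \sum_{\gamma}\|x_\gamma\|^p=\|x\|^p.
$$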
Let $u: X\to Y$ and $v:X\to Z$ be operators acting between $p$-normed spaces. The associated push-out diagram is
\begin{equation}\label{PO}
\begin{CD}
X@>u>> Y\\
@V v VV @VV v'V\\
Z@> u' >>\PO
\end{CD}
\end{equation}
Here, $\PO=\PO(u,v)$ is the quotient of the $p$-sum $Y\oplus_p Z$ by $S$, the closure of the subspace $\{(u(x),-v(x)): x\in X\}$. The map $u':Z\to \PO$ is the inclusion of $Z$ into $Y\oplus_p Z$, followed by the quotient map of $Y\oplus_p Z$ onto $\PO = (Y\oplus_p Z)/S$. The operator $v'$ is defined analogously. The universal property behind this construction is that the preceding diagram is ``minimally commutative'', in the sense that if $v'':Y\to E$ and $u'':Z\to E$ are operators such that $u''\circ v=v''\circ u$, then there is a
unique operator $w:\PO\to E$ satisfying $u''=w\circ u'$ and $v''= w \circ v'$.
Clearly, $w((y,z)+S)=v''(y)+u''(z)$ and $\|w\|\leq \max(\|v''\|,\|u''\|)$.
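Indeed, the formula for $w$ vanishes on $S$ because $u''\circ v=v''\circ u$, and the bound for $\|w\|$ follows from the $p$-triangle inequality:
$$
w\bigl(u(x),-v(x)\bigr)=v''(u(x))-u''(v(x))=0,\qquad
\|v''(y)+u''(z)\|^p\leq \max\bigl(\|v''\|,\|u''\|\bigr)^p\bigl(\|y\|^p+\|z\|^p\bigr).
$$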
As for the quasinorm of the operators appearing in (\ref{PO}) it is obvious that both $u'$ and $v'$ are nonexpansive. The proof of the following remark is left to the reader.
\begin{lemma}
Referring to Diagram~\ref{PO}, if $u$ is an isometry and $\|v\|\leq 1$, then $u'$ is an isometry.
\end{lemma}
\subsection{Direct limits}\label{sec:direct}
Let $(X_\alpha)$ be a family of $p$-Banach spaces indexed by a directed set $\Gamma$ whose preorder is denoted by $\leq$. Suppose that, for each $\alpha,\beta\in \Gamma$ with $\alpha\leq \beta$ we have an isometry $f_\alpha^\beta: X_\alpha\to X_\beta$ in such a way that $f_\alpha^\alpha$ is the identity on $X_\alpha$ for every $\alpha\in\Gamma$ and $f_\beta^\gamma\circ f_\alpha^\beta=f_\alpha^\gamma$ provided $\alpha\leq \beta\leq \gamma$. Then $(X_\alpha, f_\alpha^\beta)$ is said to be a directed system of $p$-Banach spaces.
The direct limit of the system is constructed as follows.
First we take the disjoint union $\bigsqcup_\alpha X_\alpha$ and we define an equivalence relation $\sim$ by identifying $x_\alpha\in X_\alpha$ and $x_\beta\in X_\beta$ if there is $\gamma\in\Gamma$ such that $f_\alpha^\gamma(x_\alpha)=f_\beta^\gamma(x_\beta)$.
Then we may use the natural inclusion maps $\imath_\gamma: X_\gamma\to \bigsqcup_\alpha X_\alpha$ to transfer the linear structure and the $p$-norm from the spaces $X_\alpha$ to $\bigsqcup_\alpha X_\alpha/\!\sim$ thus obtaining a $p$-normed space whose completion is called the direct limit of the system and is denoted by $\underrightarrow{\lim} X_\gamma$. The universal property behind this construction is the following: if we are given a system of nonexpansive operators $u_\gamma:X_\gamma\to Y$, where $Y$ is a $p$-Banach space, which are compatible with the $f_\alpha^\beta$ in the sense that $u_\alpha=u_\beta\circ f_\alpha^\beta$ for $\alpha\leq \beta$, then there is a unique nonexpansive operator $u: \underrightarrow{\lim}\: X_\gamma \to Y$ such that $u\circ \imath_\alpha=u_\alpha$ for every $\alpha\in \Gamma$. That operator is often called the direct limit of the family $(u_\alpha)$.
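Observe that the $p$-norm is well defined on $\bigsqcup_\alpha X_\alpha/\!\sim$ precisely because the bonding maps are isometries: if $x_\alpha\sim x_\beta$ via some $\gamma$ with $\alpha,\beta\leq\gamma$, then
$$
\|x_\alpha\|_{X_\alpha}=\|f_\alpha^\gamma(x_\alpha)\|_{X_\gamma}=\|f_\beta^\gamma(x_\beta)\|_{X_\gamma}=\|x_\beta\|_{X_\beta}.
$$
In particular, each canonical map $\imath_\alpha:X_\alpha\to \underrightarrow{\lim}\: X_\gamma$ is an isometry.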
\section{Construction of $p$-Banach spaces of almost universal disposition}\label{sec:main}
Let $\mathscr C$ be a class of quasi-Banach spaces.
Following \cite[Definition 2]{g}, let us say that a quasi-Banach space $U$ is of almost universal disposition for the class $\mathscr C$ if, for every $\e>0$ and for every isometry $g:X\to Y$, where $Y$ belongs to $\mathscr C$ and $X$ is a subspace of $U$, there is an $\e$-isometry $f:Y\to U$ such that $f(g(x))=x$ for all $x\in X$.
Here is the main result of the paper.
\begin{theorem}\label{main}
For every $p\in(0,1]$ there exists a unique, up to isometries, separable $p$-Banach space of almost universal disposition for finite-dimensional $p$-Banach spaces. This space contains an isometric copy of every separable $p$-Banach space.
\end{theorem}
From now on we fix $p\in(0,1]$ once and for all.
We remark that everything in this paper is well-known for $p=1$.
However, the spaces we shall construct have rather unexpected properties when $p<1$ and shed some light on a widely ignored paper by Kalton \cite{k78}, where one can find a forerunner of our construction; see Proposition~\ref{inj} below.
Concerning the last statement of Theorem~\ref{main}, it is perhaps worth noticing that, while it is well-known that the separable Banach space $C[0,1]$ (as well as $\mathbb G$) contains an isometric copy of every separable Banach space, there is no available proof of the corresponding fact for $p$-Banach spaces for $p<1$. In \cite[Theorem 4.1(a)]{k77} it is stated without proof that for $0<p<1$ there exists a separable $p$-Banach space which is ``universal'' for the class of all separable $p$-Banach spaces. This result also appears in \cite[Theorem 3.2.8]{r} but, as far as we can understand, the rather involved proof
only gives ``universality with respect to $\e$-isometries''.
Before embarking into the proof of Theorem~\ref{main}, let us record the following remark.
\begin{lemma}\label{relax}
Let $U$ be a $p$-Banach space. We assume that for every $\e>0$ and every isometry $g:X\to Y$, where $Y$ is a finite-dimensional $p$-Banach space and $X$ is a subspace of $U$, there is an $\e$-isometry $f:Y\to U$ such that $\|f(g(x))-x\|\leq \e\|x\|$ for all $x\in X$.
Then $U$ is of almost universal disposition for finite-dimensional $p$-Banach spaces.
\end{lemma}
\begin{proof}
This obviously follows from the fact that if $B$ is a basis of $Y$, then for every $\e>0$ there is $\delta$ (depending on $\e$ and $B$) such that if $t:Y\to U$ is linear map with $\|t(b)\|\leq \delta$ for every $b\in B$, then $\|t\|\leq \e$.
\end{proof}
The following result, which should be compared to \cite[Lemma~4.2]{k78} and the construction in \cite[Section 3]{accgm}, is the key step in our construction. It is assumed that the families $\frak J$ and $\frak L$ are actually sets.
\begin{lemma}\label{po}
Let $E$ be a $p$-Banach space, $\frak J$ be a family of isometric embeddings between $p$-Banach spaces and $\frak L$ a family of nonexpansive operators (that is, $\|f\| \leq 1$ for every $f \in \frak L$) from $p$-Banach spaces into $E$. Then there is a $p$-Banach space $E'$ and an isometry $\imath: E\to E'$ having the following property: if $u: A\to B$ is in $\frak J$ and $f:A\to E$ is in $\frak L$, then there is $f':B\to E'$ such that $f'\circ u=\imath\circ f$, with $\|f'\|=\|f\|$. Moreover, if $f$ is an $\e$-isometry, then $f'$ is an $\e$-isometry too.
\end{lemma}
\begin{proof}
If $f:X\to Y$ is an operator, then we put $\dom(f) := X$ and $\cod(f) := Y$. Note that $\cod(f)$ may be larger than the range of $f$.
Set $\Gamma=\{(u,t)\in \mathfrak J\times \frak L: \dom(u)=\dom(t)\}$.
We consider the spaces of $p$-summable families $\ell_p(\Gamma, \dom(u))$ and
$\ell_p(\Gamma, \cod(u))$. We have an isometry
$
\oplus\frak J: \ell_p(\Gamma, \dom(u))\to \ell_p(\Gamma, \cod(u))
$
given by $\oplus\frak J((x_{(u,t)})_{(u,t)\in \Gamma})= (u(x_{(u,t)}))_{(u,t)\in \Gamma}$.
In a similar vein, we can define a nonexpansive operator $\sum\frak L: \ell_p(\Gamma, \dom(u))\to E$ by letting $\sum\frak L((x_{(u,t)})_{(u,t)\in \Gamma})=
\sum_{(u,t)\in \Gamma} t(x_{(u,t)})$. The notation is slightly imprecise because both operators depend on $\Gamma$.
Now we can consider the push-out diagram
\begin{equation}\label{lp}
\begin{CD}
\ell_p(\Gamma, \dom(u))@> \oplus\frak J >> \ell_p(\Gamma, \cod(u))\\
@V \sum\frak L VV @V (\sum\frak L)' VV\\
E @> (\oplus\frak J)' >>\PO
\end{CD}
\end{equation}
Let us see that the lower arrow does the trick so that we may take $E'=\PO$ and $\imath=(\oplus\frak J)'$. We already know that $(\oplus\frak J)'$ is an isometry and also that $(\sum\frak L)'$ is nonexpansive.
Fix $(v,s)$ in $\Gamma$. Put $X=\dom(v)=\dom(s)$ and $Y=\cod(v)$. Let $s'$ be the inclusion $\imath_{(v,s)}$ of $Y$ into the $(v,s)$-th coordinate of $\ell_p(\Gamma, \cod(u))$ followed by $(\sum\frak L)'$. As Diagram (\ref{lp}) is commutative, it is clear that $s'\circ v=(\oplus\frak J)'\circ s$ and also that $s'$ is nonexpansive.
Now suppose $s$ is an $\e$-isometry, that is, $(1-\e)\|x\|_X\leq \|s(x)\|_E\leq \|x\|_X$ (recall that $s$ is nonexpansive). For $y\in Y$ one has
$$
\|s'(y)\|_{\PO}=\|(\imath_{(v,s)}(y),0)+S\|_{\ell_p(\Gamma, \cod(u))\oplus_p E},
$$
where $S=\{((\oplus\frak J)((x_{(u,t)})), -(\sum\frak L)((x_{(u,t)}))): (x_{(u,t)})_{(u,t)\in \Gamma}\in\ell_p(\Gamma, \dom(u) ) \}$.
Clearly,
$$
\left\|\imath_{(v,s)}(y)-(u(x_{(u,t)}))_{(u,t)}\right\|^p_{\ell_p(\Gamma, \cod(u) )}+ \left\|\sum_{(u,t)\in\Gamma} t(x_{(u,t)})\right\|^p_E\geq \|y-v(x)\|_Y^p+\|s(x)\|_E^p,
$$
where $x=x_{(v,s)}$. Now, if $\|x\|_X\geq \|y\|_Y$ one has
$$
\|y-v(x)\|_Y^p+\|s(x)\|_E^p\geq \|s(x)\|_E^p\geq (1-\e)^p\|x\|_X^p\geq (1-\e)^p\|y\|_Y^p.
$$
If $\|x\|_X\leq \|y\|_Y$, then
\begin{align*}
\|y-v(x)\|_Y^p+\|s(x)\|_E^p&\geq \|y\|_Y^p-\|v(x)\|_Y^p+(1-\e)^p\|x\|_X^p\\
&\geq \|y\|_Y^p-(1-(1-\e)^p)\|x\|_X^p\\
&\geq (1-\e)^p\|y\|_Y^p.
\end{align*}
Thus, $
\|s'(y)\|_{\PO}\geq (1-\e)\|y\|_Y$ and $s'$ is a nonexpansive $\e$-isometry.
\end{proof}
\begin{lemma}\label{contains}
Every separable $p$-Banach space is isometric to a subspace of a separable $p$-Banach space of almost universal disposition.
\end{lemma}
\begin{proof}
Let $\frak F$ be a countable family of isometries between finite-dimensional $p$-normed spaces having the following density property: for every isometry of finite-dimensional $p$-normed spaces $g:A\to B$ and every $\e\in(0,1)$ there are $f\in\frak F$ and surjective $\e$-isometries $u:A\to \dom(f)$ and $v:B\to\cod(f)$ making the following square commutative:
$$
\begin{CD}
A@>g>> B\\
@Vu VV @V v VV\\
\dom(f) @>> f >\cod(f)
\end{CD}
$$
Let $S$ be a separable $p$-Banach space.
We shall construct inductively a chain of separable $p$-Banach spaces based on the nonnegative integers
$$
\begin{CD}
G_0@>\imath_1>>\dots@>>> G_{n-1}@>\imath_n>> G_n@>\imath_{n+1}>> G_{n+1}@>>>\dots
\end{CD}
$$
as follows. We put $G_0=S$ and, assuming that $G_k$ and $\imath_k$ have been constructed for $k\leq n$,
we take a countable set of operators $\mathfrak L_n$ such that for every $\e\in(0,1)$, every $f\in\frak F$ and every $\e$-isometry $u:\dom(f)\to G_n$, there is $v\in \frak L_n$ satisfying $\|u-v\|<\e$.
Then, we apply Lemma~\ref{po} with $E=G_n, \mathfrak J=\mathfrak F, \mathfrak L=\mathfrak L_n$ and we set $G_{n+1}=E'$ and $\imath_{n+1}=\imath$.
Finally, we consider the direct limit
$$
\mathbb G_p(S)= \underrightarrow{\lim}\: G_n
$$
and we prove that it satisfies the hypothesis of Lemma~\ref{relax}.
So suppose we are given an isometry $g:X\to Y$, where $Y$ is a finite-dimensional $p$-Banach space and $X$ is subspace of $\mathbb G_p(S)$ and $\e>0$.
We shall prove that there is an $\e$-isometry $f:Y\to \mathbb G_p(S)$ such that $\|f(g(x))-x\|\leq \e\|x\|$ for all $x\in X$.
Let us fix $\delta>0$. The precise value of $\delta$ required here will be announced later.
First, there is an integer $n$ and a linear map $w:X\to G_n$ such that $\|w(x)-x\|\leq\delta\|x\|$ for every $x\in X$. Moreover, we may take $h\in \mathfrak F$ and $\delta$-isometries $u$ and $v$ making the following diagram commutative:
$$
\begin{CD}
\dom(h) @> h >>\cod(h)\\
@V u VV @VV v V\\
X @> g >>Y
\end{CD}
$$
In fact we can clearly assume that $t=w\circ u$ is in $\frak L_n$ and also that it is a $\delta$-isometry.
Let $t':\cod(h)\to G_{n+1}$ be a $\delta$-isometry extending $t$ and set $f=t'\circ v^{-1}$.
Obviously $\|f(g(x))-x\|=\|w(x)-x\|\leq\delta\|x\|$ for all $x\in X$. Moreover,
$$
(1-\delta)^2\|y\|\leq \|f(y)\|\leq (1+\delta)^2\|y\|\quad\quad(y\in Y)
$$
and therefore taking $\delta=\sqrt{1+\e}-1$ suffices.
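Indeed, with this choice of $\delta$ one has $(1+\delta)^2=1+\e$ and, since $2\delta-\delta^2\leq 2\delta+\delta^2=\e$,
$$
(1-\delta)^2=1-(2\delta-\delta^2)\geq 1-\e;
$$
moreover $\delta\leq\e$, so that $\|f(g(x))-x\|\leq\delta\|x\|\leq\e\|x\|$ and Lemma~\ref{relax} applies.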
\end{proof}
\begin{remark}\label{RmRationsl}
In order to obtain a countable family of isometries having the property required in the proof of Lemma~\ref{contains} one may proceed as follows.
Let us say that a vector in $\K^n$ is ``rational'' if its components are all rational -- here, a complex number is ``rational'' if both its real and imaginary parts are rational numbers. Let $x_1,\ldots, x_k$ be rational vectors spanning $\mathbb K^n$ and put
$$
|x|=\inf\left\{ \left(\sum_{i=1}^k |\lambda_i|^p\right)^{1/p} \colon x= \sum_{i=1}^k \lambda_i x_i \right\}.
$$
Then $|\cdot|$ is a $p$-norm on $\mathbb K^n$ and we say that $(\mathbb K^n, |\cdot|)$ is a \emph{rational $p$-normed space}.
Consider the family of those isometries $f$ whose codomain is a rational $p$-normed space $(\mathbb K^n,|\cdot|)$, whose domain is $\mathbb K^m$ for some $m\leq n$, equipped with a (not necessarily rational) $p$-norm, and which have the form $f(x_1,\ldots, x_m)= (x_1,\ldots, x_m, 0,\ldots, 0)$. Then an obvious compactness argument shows that these are ``dense amongst all isometries''.
\end{remark}
\section{Uniqueness}\label{sec:uni}
The following result is the first step towards the proof of uniqueness in Theorem~\ref{main}. It is the $p$-convex analogue of \cite[Lemma 2.1]{ks}.
As the reader can imagine, the proof has to be different here since one needs to avoid the use of linear functionals to work with $p$-normed spaces.
\begin{lemma}\label{key}
Let $X$ and $Y$ be $p$-normed spaces and $f:X\to Y$ an $\e$-isometry, with $\e\in(0,1)$. Let $i:X\to X\oplus Y$ and $j:Y\to X\oplus Y$ be the canonical inclusions. Then there is a $p$-norm on $X\oplus Y$ such that $\|j\circ f - i\|\leq\e$ and both $i$ and $j$ are isometries.
\end{lemma}
\begin{proof}Put
\begin{equation}\label{put}
\|(x,y)\|^p=\inf\left\{\|x_0\|_X^p+\|y_1\|_Y^p+\e^p\|x_2\|_X^p: (x,y)=(x_0,0)+(0,y_1)+(x_2,-f(x_2)) \right\}.
\end{equation}
It is easily seen that this formula defines a $p$-norm on $X\oplus Y$. Let us check that $\|(x,0)\|=\|x\|_X$ for all $x\in X$. The inequality $\|(x,0)\|\leq \|x\|_X$ is obvious. As for the converse, suppose $x=x_0+x_2$ and $y_1=f(x_2)$. Then
\begin{align*}
\|x_0\|_X^p+\|y_1\|_Y^p+\e^p\|x_2\|_X^p&= \|x_0\|_X^p+\|f(x_2)\|_Y^p+\e^p\|x_2\|_X^p\\
&\geq \|x_0\|_X^p+(1-\e)^p\|x_2\|_X^p+\e^p\|x_2\|_X^p\\
&= \|x_0\|_X^p+\|(1-\e)x_2\|_X^p+\|\e x_2\|_X^p\\
&\geq \|x\|_X^p,
\end{align*}
as required.
Next we prove that $\|(0,y)\|=\|y\|_Y$ for every $y\in Y$. That $\|(0,y)\|\leq\|y\|_Y$ is again obvious. To prove the reversed inequality assume $x_0+x_2=0$ and $y=y_1-f(x_2)$. As $t\to t^p$ is subadditive on $\R_+$ for $p\in(0,1]$, we have
\begin{align*}
\|x_0\|_X^p+\|y_1\|_Y^p+\e^p\|x_2\|_X^p&= \|x_2\|_X^p+\|y_1\|_Y^p+\e^p\|x_2\|_X^p\\
&= \|y_1\|_Y^p+(1+\e^p)\|x_2\|_X^p\\
&\geq \|y_1\|_Y^p+(1+\e)^p\|x_2\|_X^p\\
&\geq \|y_1\|_Y^p+\|f(x_2)\|_Y^p\\
&\geq \|y\|_Y^p.
\end{align*}
To end, let us estimate $\|j\circ f- i\|$. We have
$
\|j\circ f- i\|=\sup_{\|x\|\leq 1}\|j(f(x))-i(x)\|= \sup_{\|x\|\leq 1}\|(-x,f(x))\|\leq \e
$
and we are done.
\end{proof}
From now on, $X\oplus_f^\e Y$ will denote the sum space $X\oplus Y$ furnished with the quasinorm defined by (\ref{put}). The fact that the quasinorm depends, not only on $f$ and $\e$, but also on $p$ will cause no confusion.
A linear operator $f:X\to Y$ is called a strict $\e$-isometry if for every nonzero $x\in X$,
$$(1-\e)\|x\|_X< \|f(x)\|_Y< (1+\e)\|x\|_X,$$ where $\e\in(0,1)$.
Note that when $X$ is finite-dimensional, every strict $\e$-isometry is an $\eta$-isometry for some $\eta < \e$.
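This follows from a standard compactness argument: the function $x\mapsto \|f(x)\|_Y$ is continuous on the unit sphere of $X$, which is compact, so
$$
\eta=\max\Bigl\{\max_{\|x\|_X=1}\|f(x)\|_Y-1,\; 1-\min_{\|x\|_X=1}\|f(x)\|_Y,\;0\Bigr\}
$$
is attained and satisfies $\eta<\e$; by homogeneity, $f$ is an $\eta$-isometry.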
\begin{lemma}\label{helpful}
Let $U$ be a $p$-Banach space of almost universal disposition for finite-dimensional $p$-Banach spaces and let $f:X\to Y$ be a strict $\e$-isometry, where $Y$ is a finite-dimensional $p$-Banach space, $X$ is a subspace of $U$ and $\e\in(0,1)$. Then for each $\delta>0$ there exists a $\delta$-isometry $g:Y\to U$ such that $\| g(f(x))-x \|< \e\|x\|$ for every nonzero $x\in X$.
\end{lemma}
\begin{proof}
Choose $0<\eta<\e$ such that $f$ is an $\eta$-isometry. Shrinking $\delta$ if necessary, we may assume that $\delta^p+(1+\delta)^p\eta^p<\e^p$. Set $Z= X\oplus_f^\eta Y$ and let $i:X\to Z$ and $j:Y\to Z$ denote the canonical inclusions, so that $\|j\circ f-i\|\leq \eta$. Let $h:Z\to U$ be a $\delta$-isometry such that $\|h(i(x))-x\|\leq\delta\|x\|$ for $x\in X$. Then $g=h\circ j$ is a $\delta$-isometry from $Y$ into $U$ and we have
\begin{align*}
\|x-g(f(x))\|^p &\leq \|x-h(i(x))\|^p+\|h(i(x))-h(j(f(x))\|^p\\
&\leq \delta^p\|x\|^p+(1+\delta)^p\|i(x)-j(f(x))\|_Z^p\\
&\leq (\delta^p+(1+\delta)^p\eta^p)\|x\|^p < \e^p \|x\|^p,
\end{align*}
as required.
\end{proof}
We are now ready for the proof of the uniqueness. Note that the following result, together with Lemma~\ref{contains}, completes the proof of
Theorem~\ref{main}.
\begin{theorem}\label{completes}
Let $U$ and $V$ be separable $p$-Banach spaces of almost universal disposition for finite-dimensional $p$-Banach spaces. Let $f:X\to V$ be a strict $\e$-isometry, where $X$ is a finite-dimensional subspace of $U$ and $\e\in(0,1)$. Then there exists a bijective
isometry $h:U\to V$ such that $\|h(x)-f(x)\|_V\leq \e\|x\|_U$ for every
$x\in X$. In particular, $U$ and $V$ are isometrically isomorphic.
\end{theorem}
\begin{proof}
Fix $0<\eta <\e $ such that $f$ is an $\eta $-isometry and then choose $0<\lambda <1$ such that
\begin{equation}
\eta^p \frac{1+3\lambda^p}{1-\lambda^p }<\e^p.
\tag{$\star $}\label{star}
\end{equation}
Let $\e_n=\lambda^{n} \eta$. We define inductively sequences of linear operators $(f_n), (g_n)$ and finite-dimensional subspaces $(X_n)$, $(Y_n)$ of $U$ and $V$, respectively, so that the following conditions are satisfied:
\begin{enumerate}
\item[(0)] $X_0 = X$, $Y_0 = f[X]$, and $f_0 = f$;
\item[(1)] $f_n:X_n\to Y_n$ is an $\e_n$-isometry;
\item[(2)] $g_n:Y_n\to X_{n+1}$ is an $\e_{n+1}$-isometry;
\item[(3)] $\|g_n f_n(x) - x\| < \e_n \| x\|$ for $x\in X_n$;
\item[(4)] $\|f_{n+1}g_n(y) - y\| < \e_{n+1} \| y\|$ for $y\in Y_n$;
\item[(5)] $X_n\subset X_{n+1}$, $Y_n\subset Y_{n+1}$, $\bigcup _n X_n$ and $\bigcup _n Y_n$ are dense in $U$ and $V$, respectively.
\end{enumerate}
We use condition (0) to start the inductive construction. Suppose that $f_i$, $X_i$, $Y_i$, for $i\leq n$, and $g_i$ for $i<n$, have been constructed. We easily find $g_n$, $X_{n+1}$, $f_{n+1}$ and $Y_{n+1}$ using Lemma~\ref{helpful}.
To guarantee that Condition (5) holds, we may start by taking sequences $(x_n)$ and $(y_n)$ dense in $U$ and $V$, respectively and then we require first that $X_{n+1}$ contains both $x_n$ and $g_n[Y_n]$ and then that $Y_{n+1}$ contains both $y_n$ and $f_{n+1}[X_{n+1}]$.
After that, fix $n \in \omega$ and $x \in X_n$ with $\|x\|=1$. Using (4), we
get
$$\| f_{n+1} g_n f_n(x) - f_n(x) \|^p < \e_{n+1}^p \cdot \|f_n(x)\|^p \leq \e_{n+1}^p\cdot (1 + \e_{n})^p=(\lambda^{n+1}\eta)^p \cdot(1+\lambda^{n}\eta)^p.$$
Using (3), we get
$$\| f_{n+1} g_n f_n(x) - f_{n+1}(x) \|^p \leq \|f_{n+1}\|^p \cdot \| g_n f_n(x) - x\|^p < (1 + \e_{n+1})^p\cdot \e_{n}^p=(\lambda^{n}\eta)^p \cdot(1+\lambda^{n+1}\eta)^p.$$
These inequalities give
\begin{align*}
\| f_n(x) - f_{n+1}(x) \|^p & < (\lambda^{n}\eta + \lambda^{n}\eta \lambda^{n+1}\eta)^p + (\lambda^{n}\eta \lambda^{n+1}\eta + \lambda^{n+1}\eta)^p \\ &< \eta^p (\lambda^{np}+2\lambda^{(n+1)p}+\lambda^{(n+1)p}) =\eta^p (\lambda^{np} + 3\lambda^{(n+1)p}).
\tag{$\star \star $}\label{twostar}
\end{align*}
Now it is clear that $(f_n(x))_{n\in \omega}$ is a Cauchy sequence.
Given $x\in \bigcup_{n \in \omega} X_n$, define $h(x) = \lim_{n\to\infty}f_n(x)$, where the limit is taken over $n\geq m$ and $m$ is such that $x\in X_m$. Then $h$ is an $\e_n$-isometry for every $n \in \omega$, hence it is an isometry.
Consequently, it extends to an isometry $h: U\to V$ that we do not relabel. Furthermore, (\ref{star})
and (\ref{twostar}) give
\begin{align*}
\| f(x) - h(x)\|^p \leq \sum_{n=0}^\infty \eta^p (\lambda^{np} + 3\lambda^{(n+1)p})=\eta^p \frac{1+3\lambda^p}{1-\lambda^p} <\e^p.
\end{align*}
It remains to see that $h$ is a bijection. To this end, we check
as before that $(g_n(y))_{n \geq m}$ is a Cauchy sequence for
every $y\in Y_{m}$. Once this is done, we obtain an isometry
$g:V\to U$. Conditions (3) and (4) tell us that
$g\circ h$ is the identity on $U$ and that $h \circ g$ is the identity on $V$. This
completes the proof.
\end{proof}
\section{Nonseparable generalizations}\label{sec:w1}
As the reader may expect, we say that a quasi-Banach space $U$ is of universal disposition for a given class of quasi-Banach spaces $\mathscr C$ if, whenever $g:X\to Y$ is an isometry, where $Y$ belongs to $\mathscr C$ and $X$ is a subspace of $U$, then there is an isometry $f:Y\to U$ such that $f(g(x))=x$ for all $x\in X$.
Using $\mathbb G_p$ as an isometrically universal separable $p$-Banach space and iterating Lemma~\ref{po} until the first uncountable ordinal $\omega_1$ we now proceed as in \cite[Proposition 3.1(a)]{accgm} to prove the following.
\begin{theorem}\label{w1}
There is a $p$-Banach space of universal disposition for separable $p$-Banach spaces and whose density character is the continuum.
\end{theorem}
\begin{proof} Let $\omega_1$ be the first uncountable ordinal. We may regard $\omega_1$ as the set of all countable ordinals equipped with the obvious order; see \cite{cie} for details. We are going to define a transfinite sequence of $p$-Banach spaces $(G_p^\alpha, f_\alpha^\beta)$ indexed by $\omega_1$ having the following properties:
\begin{itemize}
\item[(1)] For each $\alpha\in\omega_1$ the density character of $G_p^\alpha$ is at most the continuum.
\item[(2)] If $\beta=\alpha+1$ and $g:X\to Y$ is an isometry, where $Y$ is a separable $p$-Banach space and $X$ is a subspace of $G_p^\alpha$, then there is an isometry $f:Y\to G_p^{\beta}$ such that $f(g(x))=f_\alpha^{\beta}(x)$ for all $x\in X$.
\end{itemize}
We proceed by transfinite induction on $\alpha\in\omega_1$. Let us fix an arbitrary $p$-Banach space $C$ with density $2^{\aleph_0}$. Then, we take $G_p^0=C$ to start.
The inductive step is as follows. We fix $\gamma\in\omega_1$ and we assume that the directed system $(G_p^\alpha,f_\alpha^\beta)$ has been constructed for $\alpha,\beta<\gamma$ in such a way that (1) and (2) hold for $\alpha,\beta<\gamma$.
We want to see that we can continue the system in such a way that (1) and (2) now hold for $\alpha,\beta<\gamma+1$. We shall distinguish two cases.
First, assume $\gamma$ is a limit ordinal. Then we take $G_p^\gamma=\underrightarrow{\lim}_{\alpha<\gamma} G_p^\alpha$ and $f_\alpha^\gamma=\imath_\alpha$. It is clear that $\dens(G_p^\gamma)\leq 2^{\aleph_0}$ and there is nothing else to prove since $\gamma$ cannot arise as $\alpha+1$ for $\alpha<\gamma$.
Now, suppose $\gamma$ is a successor ordinal, say $\gamma=\delta+1$.
To construct $G_p^{\delta+1}$, we let $\mathfrak J$ be the set of all isometric embeddings between subspaces of $\mathbb G_p$, and $\mathfrak L$ the set of all $G_p^\delta$-valued isometries whose domain is a subspace of $\mathbb G_p$ -- recall that $G_p^\delta$ is already defined by the induction hypothesis. Now, we let $E= G_p^\delta$ and we apply Lemma~\ref{po} with $\e=0$ to get the push-out space $G_p^{\delta+1}=E'$ and $f_\delta^{\delta+1}=\imath$. Observe that $G_p^{\delta+1}$ has density character at most the continuum since it is a quotient of the direct sum of $G_p^\delta$ and $\ell_p(\Gamma, \cod(u))$, where $\Gamma$ is a subset of $\mathfrak J\times \mathfrak L$, with $|\mathfrak J|,|\mathfrak L|\leq\frak c$ and $\cod(u)$ separable for every $u$.
Now, for $\alpha<\delta$ we put $f_\alpha^{\delta+1}= f_\delta^{\delta+1}\circ f_\alpha^{\delta}$ and the Principle of Transfinite Induction goes to work.
The remainder of the proof is rather easy. We define $U$ as the direct limit of the system $(G_p^\alpha)_\alpha$ and we consider the natural isometries $\imath_\alpha: G_p^\alpha\to U$, so that
$$
U=\bigcup_{\alpha\in\omega_1} \imath_\alpha [G_p^\alpha].
$$
Observe that it is not necessary to take closures here.
Obviously, the density character of $U$ is at most the continuum.
Suppose $g:X\to Y$ is an isometry, where $Y$ is a separable $p$-Banach space and $X$ a subspace of $
U$. Then there is $\alpha\in\omega_1$ so that $X\subset\imath_\alpha[G_p^\alpha]$. It is straightforward from (2) that there is an isometry $f:Y\to G_p^{\alpha+1}$ such that $\imath_{\alpha+1}(f(g(x)))=x$ for every $x\in X$.
\end{proof}
The following result, due to Ben Yaacov and Henson~\cite{BYH},
shows that it is impossible to reduce the size of the space in Theorem~\ref{w1}.
We include a nice, straightforward proof found by
Richard Haydon.
Formally, Ben Yaacov and Henson stated and proved the result for $p=1$; however, Haydon's argument gives exactly the same for any $p \in (0,1]$.
Namely, a $p$-Banach space of universal disposition for the class of $p$-Banach spaces of dimension three must already have density $2^{\aleph_0}$. This was asked in \cite[Problem 2]{accgm} for $p=1$.
\begin{proposition}[Ben Yaacov and Henson]\label{PBenYakHay}
Let $H$ be the 2-dimensional Hilbert space and suppose $X$ is a $p$-Banach space containing $H$ and having the following property:
\begin{enumerate}
\item[$(\checkmark)$] Given an isometric embedding $\map i H F$, where $F$ is a $3$-dimensional $p$-Banach space, there exists an isometric embedding $\map j F X$ such that $j \cmp i$ is the inclusion $H \subset X$.
\end{enumerate}
Then the density of $X$ is at least the continuum.
\end{proposition}
\begin{proof} (Haydon)
Let $S$ be the positive part of the unit sphere of $H$.
Given $\phi \in S$, we define a $p$-norm on $H \oplus \K$ (recall that $\K$ is the scalar field) by the formula
$$\norm{(x,\lambda)}^p_\phi = \max \Bigl\{ \norm x_2^p, |\lambda|^p + |(x | \phi)|^p \Bigr\},$$
where $(\cdot|\cdot)$ denotes the usual scalar product on $H$.
Note that $\norm{(0,1)}_\phi = 1$ and $\norm{\cdot}_\phi$ extends the Euclidean norm $\norm{\cdot}_2$ of $H$, where $x \in H$ is identified with $(x,0)$.
Using ($\checkmark$), for each $\phi \in S$ we can find $e_\phi \in X$ such that the map $\map {i_\phi}{H \oplus \K} X$, defined by $i_\phi(x,\lambda) = x + \lambda e_\phi$, is an isometric embedding with respect to $\norm{\cdot}_\phi$.
Fix $\phi, \psi \in S$ such that $\phi \ne \psi$ and let $\norm{\cdot}$ denote the $p$-norm of $X$.
Fix $\mu > 0$ and let $w = \mu \phi \in H \subset X$.
Then
$$\norm{e_\phi - e_\psi}^p \geq \norm{e_\phi + w}^p - \norm{e_\psi + w}^p = \norm{(\mu \phi,1)}^p_\phi - \norm{(\mu \phi,1)}^p_\psi.$$
Finally, observe that $\norm{(\mu \phi,1)}^p_\phi = 1 + \mu^p$ and
$$\norm{(\mu \phi, 1)}^p_\psi = \max \Bigl\{ \mu^p, 1 + \mu^p |(\phi | \psi)|^p \Bigr\} = \mu^p,$$
whenever $\mu$ is sufficiently large, because $|(\phi | \psi)| < 1$ (recall that $\phi$, $\psi$ are distinct vectors of $S$).
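For definiteness, ``sufficiently large'' can be made explicit here: the maximum above equals $\mu^p$ as soon as
$$
\mu^p\bigl(1-|(\phi|\psi)|^p\bigr)\geq 1, \quad\text{that is, }\ \mu\geq \bigl(1-|(\phi|\psi)|^p\bigr)^{-1/p}.
$$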
Thus, we conclude that $\norm{e_\phi - e_\psi} \geq 1$ whenever $\phi \ne \psi$, which shows that the density of $X$ is at least $|S|=2^{\aleph_0}$.
\end{proof}
A couple of additional remarks about Theorem~\ref{w1} are in order. First, it is clear that any $p$-Banach space of universal disposition for the class of all separable $p$-Banach spaces must contain an isometric copy of every $p$-Banach space of density $\aleph_1$. This is so because every quasi-Banach space $X$ of density $\aleph_1$ can be written as $X=\bigcup_{\alpha\in\omega_1}X_\alpha$, where each $X_\alpha$ is a separable subspace of $X$ and $X_\alpha\subset X_\beta$ whenever $\alpha\leq\beta$ are countable ordinals. For the same reason, if we assume the continuum hypothesis, then we can easily obtain uniqueness up to isometries in Theorem~\ref{w1}. See Section~\ref{universal} for more on this.
\section{Some forms of injectivity for $p$-Banach spaces}\label{sec:inj}
In this Section we study the extension of operators with values in $\mathbb G_p$ and its nonseparable relatives.
\begin{definition}\label{DefVrsbtix}
Let $E$ be a $p$-Banach space.
\begin{itemize}
\item[(a)] We say that $E$ is injective amongst $p$-Banach spaces if for every $p$-Banach space $X$ and every subspace $Y$ of $X$, every operator $t:Y\to E$ can be extended to an operator $T:X\to E$. If this can be achieved with $\|T\|\leq\lambda\|t\|$ for some fixed $\lambda\geq 1$, then $E$ is said to be $\lambda$-injective amongst $p$-Banach spaces.
\item[(b)] $E$ is said to be separably injective or separably $\lambda$-injective amongst $p$-Banach spaces if the preceding condition holds when $X$ is separable.
\item[(c)] $E$ is said to be locally injective amongst $p$-Banach spaces if there is a constant $\lambda$ such that, for every finite-dimensional $p$-Banach space $X$ and every subspace $Y$ of $X$, every operator $t:Y\to E$ can be extended to an operator $T:X\to E$ with $\|T\|\leq\lambda\|t\|$.
\item[(d)] Finally, $E$ is called locally $1^+$-injective amongst $p$-Banach spaces if it satisfies the preceding condition for every $\lambda > 1$.
\end{itemize}
\end{definition}
These notions play a fundamental role in Banach space theory.
As it is well-known, a Banach space is injective (amongst Banach spaces) if and only if it is a complemented subspace of $\ell_\infty(I)$ for some set $I$.
Also, a Banach space is locally injective if and only if it is an $\mathscr L_\infty$-space, and it is locally $1^+$-injective if and only if it is a Lindenstrauss space.
As for separable injectivity, Sobczyk's theorem asserts that $c_0$ is separably 2-injective and a deep result by Zippin states that every separable separably injective Banach space has to be isomorphic to $c_0$. Nevertheless, there is a wide variety of (nonseparable) separably injective Banach spaces, see \cite{z, adv}.
\begin{proposition}\label{inj} Let $0<p<1$.
\begin{itemize}
\item[(a)] No nonzero $p$-Banach space is injective amongst $p$-Banach spaces.
\item[(b)] Every space of almost universal disposition for finite-dimensional $p$-Banach spaces, in particular $\mathbb G_p$, is locally $1^+$-injective amongst $p$-Banach spaces.
\item[(c)] All spaces of universal disposition for separable $p$-Banach spaces, in particular those appearing in Theorem~\ref{w1}, are separably $1$-injective amongst $p$-Banach spaces.
\end{itemize}
\end{proposition}
\begin{proof} (a) Let $E$ be a $p$-Banach space with density character $\aleph$. Let $\mu$ denote Haar measure on the product of a family of $2^\aleph$ copies of $\mathbb T$, the unit circle. Then there is no nonzero operator from $L_p(\mu)$ to $E$ (recall that $p<1$). This was proved for $\aleph=\aleph_0$ by Kalton (see \cite[p. 163, at the end of Section 3]{k78}) and by Popov in general \cite[Theorem 1]{p}.
Thus, if we fix a nonzero $x\in E$ and we consider the subspace $\mathbb K$ of constant functions in $L_p(\mu)$, then the operator $\lambda\in\mathbb K\mapsto \lambda x\in E$ cannot be extended and $E$ is not injective.
(b) Assume $U$ is of almost universal disposition for finite-dimensional $p$-Banach spaces. Let $X$ be a finite-dimensional $p$-Banach space, $Y$ a subspace of $X$ and $t:Y\to U$ an operator of norm one. We will prove that, for each $\e>0$, there is an extension $T:X\to U$ with $\|T\|\leq1+\e$. Consider the push-out diagram
$$
\begin{CD}
Y@>>> X\\
@VtVV @VVt'V\\
t[Y]@>\imath >> \PO
\end{CD}
$$
where the unlabelled arrow is plain inclusion. As $\imath$ is an isometry, for each $\e>0$, there is an $\e$-isometry $u:\PO\to U$ such that $\imath(t(y))=u(t(y))$ for all $y\in Y$. Then $u\circ t'$ is an extension of $t$ to $X$ with quasinorm at most $1+\e$.
(c) Replace ``finite-dimensional'' by ``separable'', take the closure of $t(Y)$, delete the word ``almost'', and put $\e=0$ in the proof of (b).
\end{proof}
\section{Universal operators on $p$-Gurari\u\i\ spaces}\label{sec:operas}
Finding operators on quasi-Banach spaces can be a difficult task. Actually, it can be an impossible task: Kalton and Roberts exhibited in \cite{rigid} a certain closed subspace of $L_p$ (for $0<p<1$) which is ``rigid'' -- every endomorphism is a multiple of the identity. Of course $\Gp$ cannot be so extreme since Theorem~\ref{completes} says that it has plenty of isometries.
Throughout this section we again fix $p \in (0,1]$.
Our aim is to construct a nonexpansive projection $\Pp$ on $\Gp$ whose kernel is isometric to $\Gp$ and satisfying the following condition:
\begin{enumerate}
\item[$(\heartsuit)$] Given a nonexpansive operator $\map s X \Gp$, where $X$ is a separable $p$-Banach space, there exists an isometry $\map e X \Gp$ such that $\Pp \cmp e = s$.
\end{enumerate}
This will show, in particular, that $\Gp$ has nontrivial projections.
For the remaining part of the section we fix a locally $1^+$-injective separable $p$-Banach space $\Ha$.
Note that, by Proposition~\ref{inj}, we may take $\Ha = \Gp$.
In fact, besides obvious variants like the $c_0$-sum of $\Gp$ (see Corollary 6.6 below), we do not know essentially different examples, unless $p = 1$, where being locally $1^+$-injective is the same as being a Lindenstrauss space and a projection satisfying $(\heartsuit)$ has already been described in \cite{KuMetrics}.
In order to present the announced construction, we shall define a special category involving $\Ha$, which is actually a particular case of so-called comma categories. These ideas come from a recent work of Pech \& Pech~\cite{Pech} as well as from Kubi\'s~\cite{KuMetrics}, where an abstract theory of almost homogeneous structures has been developed.
Namely, let $\fK$ be the category whose objects are nonexpansive operators $u: U\to \Ha$ where $U$ is a finite-dimensional $p$-Banach space.
An $\fK$-morphism from $\map u U \Ha$ to $\map v V \Ha$ is an isometry $\map i U V$ satisfying $v \cmp i = u$.
In this case we write $\map i u v$.
Using the properties of push-outs, we easily obtain the following fact.
\begin{lemma}
$\fK$ has amalgamations.
Namely, given two $\fK$-morphisms $\map i z x$, $\map j z y$, there exist $\fK$-morphisms $i'$, $j'$ with a common codomain such that $i' \cmp i = j' \cmp j$.
\end{lemma}
The amalgamation property is visualized in the following commutative diagram, in which $W$ could be the push-out associated to the operators $i$ and $j$.
\begin{equation}\label{amalgamation}\xymatrix{
Y \ar[rr]^{j'} \ar[drrrrrr]_y & & W \ar[drrrr]^w \\
& & & & & & \Ha \\
Z \ar[rr]_i \ar[uu]^j \ar[urrrrrr]^z & & X \ar[uu]_{i'} \ar[urrrr]_x
}
\end{equation}
We also need the following strengthening of Lemma~\ref{key}.
\begin{lemma}\label{keypoperator}
Let $\map f X Y$ be an $\e$-isometry between finite-dimensional $p$-Banach spaces and let $\map r X \Ha$, $\map s Y \Ha$ be nonexpansive linear operators such that $s \cmp f$ is $\e$-close to $r$.
Let $X\oplus_f^\e Y$ be the space constructed in the proof of Lemma~\ref{key} and let $i,j$ be the canonical isometric embeddings of $X$ and $Y$, respectively.
Then the operator $\map {r\oplus s} {X \oplus_f^\e Y} \Ha$, defined by $(r\oplus s)(x,y) = r(x) + s(y)$, is nonexpansive and has the property that $(r\oplus s) \cmp i = r$, $(r\oplus s) \cmp j = s$.
In particular, $i$ and $j$ become $\fK$-morphisms.
\end{lemma}
\begin{proof}
Fix $(x,y) \in X \oplus Y$ and assume $x = x_0 + x_2$, $y = y_1 - f(x_2)$.
Using the fact that $\|r(x_2) - s(f (x_2))\| \leq \e \|x_2\|$, we get
$$\|(r\oplus s)(x,y)\|^p = \| r(x_0)+r(x_2) + s(y_1) - s(f(x_2)) \|^p \leq \|x_0\|_X^p + \|y_1\|_Y^p + \e^p \|x_2\|_X^p.$$
Recalling that $\|(x,y)\|^p$ is the infimum of all expressions that can arise as the right-hand side of the preceding inequality, we see that $\|(r\oplus s)(x,y)\|^p \leq \|(x,y)\|^p$.
\end{proof}
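Let us record explicitly, for the reader's convenience, the $p$-norm of $X\oplus_f^\e Y$ implicit in the proof above and in that of Lemma~\ref{key}:
$$
\|(x,y)\|^p=\inf\Bigl\{\|x_0\|_X^p+\|y_1\|_Y^p+\e^p\|x_2\|_X^p:\ x=x_0+x_2,\ y=y_1-f(x_2)\Bigr\}.
$$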
Now we need to work within a countable subcategory of $\fK$ having certain ``density'' properties. To this end, let $D$ be a dense countable subset of $\Ha$ and let $\Ha_0$ denote the (dense) subspace of all finite linear combinations of elements of $D$ with rational coefficients. We define a subcategory $\fK_0$ of $\fK$ as follows:
\begin{itemize}
\item The objects of $\fK_0$ are those nonexpansive operators $f:F\to\Ha$ in which $F$ is a rational $p$-normed space (see the definition in Remark~\ref{RmRationsl}) and $f$ sends the rational vectors of $F$ into $\Ha_0$.
\item Given objects $f:F\to\Ha$ and $g:G\to\Ha$ in $\fK_0$, a $\fK_0$-morphism is a $\fK$-morphism $i:F\to G$ preserving ``rational'' vectors.
\end{itemize}
Let us collect the properties of $\fK_0$ we shall invoke later.
\begin{lemma}\label{invoke}
{\rm (a)} Given $g:G\to \Ha$ in $\fK$ and $\e>0$ there is $f:F\to\Ha$ in $\fK_0$ and a surjective $\e$-isometry $u:G\to F$ such that $\|g -f\circ u\|<\e$.
{\rm (b)} $\fK_0$ has amalgamations.
\end{lemma}
\begin{proof}
(a) By fixing a basis, we may assume $G$ is just $\mathbb K^n$ with some (not necessarily rational) $p$-norm. Take $h:G\to \Ha$ sending the unit basis to $D$ and such that $\|h\|\leq 1$ and $\|h-g\|<\e$. Let $|\cdot|$ be a rational $p$-norm on $\mathbb K^n$ such that $(1-\e)|\cdot|\leq \|\cdot\|_G\leq|\cdot|$. Then set $F=(\mathbb K^n,|\cdot|)$, $u:G\to F$ the formal identity and $f=h\circ u^{-1}$ -- that is, $f$ is just $h$ viewed as a mapping from $F$ to $\Ha$.
(b) It is obvious that the $p$-sum of two rational $p$-normed spaces is again ``rational'' and that if $u:X\to Y$ is a rational map acting between rational $p$-normed spaces, then there is a rational $p$-normed space $Z$, a rational surjection $\varpi: X\to Z$ and a linear isometry $v:Z\to Y/u[X]$ such that $\pi=v\circ\varpi$, where $\pi:Y\to Y/u[X]$ is the natural quotient map. This implies that everything in the amalgamation Diagram~(\ref{amalgamation}) can be done in $\fK_0$ if the initial data $i$ and $j$ are morphisms in $\fK_0$.
\end{proof}
We now construct a sequence
$$
\begin{CD}
u_0@>\imath_1>> u_1@>\imath_2 >> u_2@>>> \cdots
\end{CD}
$$
where each $u_n \colon U_n\to\Ha$ is an object of $\fK$, each arrow in the diagram above is a morphism in $\fK$, so that $\imath_{n+1}:U_n\to U_{n+1}$ is an isometry such that $u_n=u_{n+1}\circ \imath_{n+1}$, and the following condition is satisfied:
\begin{enumerate}
\item[($\dagger$)] Given $n \in \Nat$, $\e>0$, an isometric embedding $\map e{U_n}V$ with $V$ finite-dimensional, and a nonexpansive operator $\map v V \Ha$ satisfying $v\cmp e = u_n$, there exist $m>n$ and an $\e$-isometric embedding $\map {e'} V {U_m}$ such that $e' \cmp e$ is $\e$-close to the linking map $\imath_{(n,m)}:U_n\to U_m$ and $u_m \cmp e'$ is $\e$-close to $v$.
\end{enumerate}
This can be proved by following the lines of \cite[Section 4.1]{KuMetrics}. Here we adopt a different approach which is similar to the construction from the proof of Lemma~\ref{contains}, but at each stage taking into account a fixed nonexpansive linear operator into $\Ha$.
Taking $\Ha = 0$, this will also provide an alternative construction of $\Gp$.
\begin{lemma}
There exists a sequence $u_0 \to u_1 \to u_2 \to \cdots$ satisfying $(\dagger)$.
\end{lemma}
\begin{proof}
We shall work in a countable subcategory $\fK_0$ of $\fK$ having the properties appearing in Lemma~\ref{invoke}.
First, we fix an enumeration $\{(f_n, k_n)\}_{n \in \omega}$ of all pairs of the form $(f,k)$, where $f$ is a morphism of $\fK_0$ and $k$ is a natural number, so that each pair occurs infinitely many times.
We construct a sequence $\{u_n\}_{n \in \omega}$ by induction, starting from the zero space with the zero operator.
Having defined $u_{n-1}$, we look at $(f_n, k_n)$.
If either $k_n \geq n$ or $\dom(f_n) \ne u_{k_n}$ then we set $u_n = u_{n-1}$.
So suppose that $k_n < n$ and $\dom(f_n) = u_{k_n}$.
Denote by $j$ the bonding arrow from $u_{k_n}$ to $u_{n-1}$.
Using the amalgamation property of $\fK_0$, we find arrows $g$, $h$ such that $h \cmp j = g \cmp f_n$.
Denote by $u_n$ the common co-domain of $h$ and $g$ and declare $h$ to be the bonding arrow from $u_{n-1}$ to $u_n$.
This finishes the description of the construction.
Condition ($\dagger$) follows from the ``density'' of $\fK_0$ in $\fK$ and from the fact that each appropriate pair $(f,k)$ appears infinitely many times in the enumeration.
\end{proof}
Actually, it can be shown that ($\dagger$) specifies the sequence $\{u_n\}$, up to an isomorphism in the appropriate category, although we shall not use this fact.
Consider the directed system of $p$-Banach spaces underlying the sequence we have just constructed:
$$
\begin{CD}
U_0@>\imath_1>> U_1@>\imath_2 >> U_2@>>> \cdots
\end{CD}
$$
set $U_\infty=\underrightarrow{\lim} \: U_n$ and let $u_\infty:U_\infty\to\Ha$ be the direct limit of the operators $u_n$.
The main properties of $u_\infty$ are collected below.
\begin{theorem}\label{Thmrbgibr}
The space $U_\infty$ and the operator $u_\infty: U_\infty\to \Ha$ have the following properties:
\begin{enumerate}
\item[(a)] The operator $u_\infty$ is nonexpansive and right-invertible -- in particular, its range is $\Ha$.
\item[(b)] Both $U_\infty$ and $\ker(u_\infty)$ are linearly isometric to $\Gp$.
\item[(c)] For every nonexpansive linear operator $\map s X \Ha$, where $X$ is a separable $p$-Banach space, there exists a linear isometric embedding $\map e X U_\infty$ such that
$s = u_\infty \cmp e.$
\end{enumerate}
\end{theorem}
Note that if $\Ha = \Gp$ then $\Pp = r \cmp u_\infty$, where $\map r \Gp \Gp$ is a fixed right inverse of $u_\infty$, provides the ``universal'' projection announced at the beginning of the Section.
\begin{proof}
Obviously, $u_\infty$ is nonexpansive, being a direct limit of nonexpansive operators. That it is right-invertible will follow from (c), just taking $s$ as the identity on $\Ha$.
In order to prove (c), fix a nonexpansive operator
$\map s X \Ha$ from a separable $p$-Banach space $X$ and let $(X_n)$ be an increasing sequence of finite-dimensional subspaces whose union is dense in $X$, with $X_0=0$.
Set $s_n=s\restriction X_n$ and $\e_n=2^{-n/p}$.
We shall construct inductively nonexpansive $\e_n$-isometries $\map {e_n}{X_n}{U_{k_n}}$ so that the following condition is satisfied:
\begin{enumerate}
\item[($*$)]\quad $u_{k_n} \cmp e_n$ is $\e_n$-close to $s_n$ and $e_{n+1} \restriction X_n$ is $(\e_n^p+\e_{n+1}^p)^{1/p}$-close to $\imath_{(k_n,k_{n+1})}\circ e_n$.
\end{enumerate}
Having defined $e_n: X_n\to U_{k_n}$ with $\|s_n-u_{k_n}\circ e_n\|\leq\e_n$, we may apply Lemma~\ref{keypoperator} with $f=e_n$ to get the commutative diagram
$$\xymatrix{
U_{k_n} \ar[d]_j \ar[drr]^{u_{k_n}} \\
X_n\oplus_{e_n}^{\e_n} U_{k_n} \ar[rr]^{s_n\oplus u_{k_n}}& & \Ha \\
X_n \ar[u]^i \ar[urr]_{s_n}\\
}$$
which shows that $i$ is a $\fK$-morphism from $s_n$ to ${s_n\oplus u_{k_n}}$. On the other hand, the inclusion of $X_n$ into $X_{n+1}$, which we momentarily denote by $a$, is a $\fK$-morphism from $s_n$ to $s_{n+1}$, and amalgamating $i$ and $a$ we arrive at the following commutative diagram
$$\xymatrix{
U_{k_n} \ar[dd]_j \ar[dddrrrrrr]^{u_{k_n}}\\
&&&&&&\\
X_n\oplus_{e_n}^{\e_n} U_{k_n} \ar[rr]^{a'} \ar[drrrrrr]_{\quad s_n\oplus u_{k_n}} & & W \ar[drrrr]^w \\
& & & & & & \Ha \\
X_n \ar[rr]_a \ar[uu]^i \ar[urrrrrr]^{s_n} & & X_{n+1} \ar[uu]^{i'} \ar[urrrr]_{s_{n+1}}
}$$
Here, $W$ is a finite-dimensional $p$-normed space and $w$ is nonexpansive.
Having $(\dagger)$ in mind we can find $k_{n+1}>k_n$ and a nonexpansive $\e_{n+1}$-isometry $\ell: W\to U_{k_{n+1}}$ such that $u_{k_{n+1}}\circ\ell$ is $\e_{n+1}$-close to $w$ and $\ell\circ a'\circ j$ is $\e_{n+1}$-close to $\imath_{(k_n, k_{n+1})}:U_{k_n}\to U_{k_{n+1}}$.
Setting $e_{n+1}=\ell\circ i': X_{n+1}\to W\to U_{k_{n+1}}$ we obtain an $\e_{n+1}$-isometry fulfilling the requirements in $(*)$. Indeed,
$$
\|u_{k_{n+1}}\circ e_{n+1}-s_{n+1}\|= \|u_{k_{n+1}}\circ \ell\circ i' -s_{n+1}\|= \|u_{k_{n+1}}\circ \ell\circ i' - w\circ i'\|\leq \|u_{k_{n+1}}\circ \ell - w\|\leq \e_{n+1},
$$
while
\begin{align*}
\|\imath_{(k_n, k_{n+1})}\circ e_n -e_{n+1}\circ a\|^p&\leq \|\imath_{(k_n, k_{n+1})}\circ e_n -\ell\circ a'\circ j\circ e_n\|^p +
\|\ell\circ a'\circ j\circ e_n -e_{n+1}\circ a\|^p\\
& = \|\imath_{(k_n, k_{n+1})}\circ e_n -\ell\circ a'\circ j\circ e_n\|^p +
\|\ell\circ a'\circ j\circ e_n -\ell\circ i' \circ a\|^p\\
&= \|\imath_{(k_n, k_{n+1})}\circ e_n -\ell\circ a'\circ j\circ e_n\|^p +
\|\ell\circ a'\circ j\circ e_n -\ell \circ a'\circ i\|^p\\
&\leq \|\imath_{(k_n, k_{n+1})}-\ell\circ a'\circ j\|^p \cdot \|e_n\|^p +
\|\ell\circ a'\|^p \cdot \|j\circ e_n-i\|^p\\
&\leq \e_{n+1}^p+\e_n^p
\end{align*}
since $\|j\circ e_n-i\|\leq\e_n$.
Finally, the sequence $\{e_n\}$ ``converges'' to an isometric embedding $e:X\to U_\infty$ satisfying $u_\infty \cmp e = s$, which proves (c).
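For completeness, let us sketch the estimate behind this ``convergence'', which follows from condition $(*)$, the choice $\e_n=2^{-n/p}$ and the $p$-triangle inequality: for $m>n$ and $x\in X_n$, telescoping through the intermediate indices gives
$$
\bigl\|\imath_{(k_n,k_m)}(e_n(x))-e_m(x)\bigr\|^p\leq \sum_{j=n}^{m-1}\bigl(\e_j^p+\e_{j+1}^p\bigr)\|x\|^p\leq 3\cdot 2^{-n}\|x\|^p.
$$
Hence the maps $e_m$, regarded as maps into $U_\infty$, form a uniformly Cauchy sequence on each $X_n$; the limit map is isometric on the dense subspace $\bigcup_n X_n$ because each $e_m$ is an $\e_m$-isometry, and it extends to the required embedding $e:X\to U_\infty$.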
We pass to the proof of (b). To prove that $U_\infty$ is isometric to $\mathbb G_p$, it suffices to prove that it is of almost universal disposition for the class of finite-dimensional $p$-Banach spaces and we shall show that $U_\infty$ satisfies the hypothesis of Lemma~\ref{relax}. Here, the hypothesis that $\Ha$ is locally $1^+$-injective amongst $p$-Banach spaces enters.
So, let $g: X\to Y$ be an isometry, where $X$ is a subspace of $U_\infty$ and $Y$ a finite-dimensional $p$-normed space. As we did in the proof of Lemma~\ref{contains}, we fix a small $\delta>0$ and we take a nonexpansive $\delta$-isometry $w:X\to U_n$ such that $\|w(x)-x\|_{U_\infty}\leq \delta\|x\|$. Let us form the push-out square
$$
\begin{CD}
X@> g >> Y\\
@V w VV @VV w'V\\
U_n @>g'>> \PO
\end{CD}
$$
Here $g'$ is an isometry and $w'$ is a contractive $\delta$-isometry, according to Lemma~\ref{po}.
As $\Ha$ is locally $1^+$-injective and $\PO$ is finite-dimensional, there is an operator $\widehat u_n: \PO\to\Ha$ such that $u_n=\widehat u_n\circ g'$, with $\|\widehat u_n\|\leq 1+\delta$. Next we amend the $p$-norm in $\PO$ to make $\widehat u_n$ nonexpansive: for instance, we may take $|v|=\max(\|v\|_{\PO}, \|\widehat u_n(v)\|_\Ha)$. Now, if $V$ denotes the space $\PO$ furnished with this new $p$-norm, then $g':U_n\to V$ is still isometric, $\widehat u_n: V\to\Ha$ becomes contractive, and we may use $(\dagger)$ to get $m>n$ and a $\delta$-isometric embedding $g'':V\to U_m$ such that $\|g''\circ g'-\imath_{(n,m)}\|\leq \delta$. Now, if $\delta>0$ is small enough, the composition
$$
\begin{CD}
f: Y@> w' >> \PO @>{\bf {\rm Id}}>> V @> g'' >> U_m @>>> U_\infty
\end{CD}
$$
is an $\e$-isometry such that $\|f(g(x))-x\|_{U_\infty}\leq \e\|x\|$ for every $x\in X$. This shows that $U_\infty$ is isometric to $\mathbb G_p$.
\medskip
It only remains to check that $\ker u_\infty$ is isometric to $\Gp$. We first prove that the operator $u_\infty:U_\infty\to\Ha$ has the following additional property:
\begin{itemize}
\item[($\ddagger$)] Suppose $E$ is a subspace of a finite-dimensional $p$-Banach space $F$. If $g:F\to \Ha$ is nonexpansive and $e:E\to U_\infty$ is an isometry such that $u_\infty\circ e=g\restriction E$, then for each $\delta>0$ there is a $\delta$-isometry $f:F\to U_\infty$ satisfying
$\| f \restriction E - e \| < \delta$ and $ \| u_\infty \cmp f - g \| < \delta$.
\end{itemize}
Indeed, after taking a small perturbation, we may assume that $\map e E {U_n}$ is an $\e$-isometric embedding and $u_\infty \cmp e$ is $\e$-close to $g \restriction E$. Applying Lemma~\ref{keypoperator} to $e:E\to U_n$ and the operators $g:E\to\Ha$ and $u_n:U_n\to\Ha$, we get the commutative diagram
$$\xymatrix{
U_{n} \ar[d]_j \ar[drr]^{u_{n}} \\
E\oplus_e^\e U_{n} \ar[rr]^{g\oplus u_{n}}& & \Ha \\
E \ar[u]^i \ar[urr]_{g}\\
}$$
with $\|j \circ e- i\|\leq\e$. Now, amalgamating $i:E\to E\oplus_e^\e U_{n}$, which is a morphism from $g\restriction E$ to $g\oplus u_n: E\oplus_e^\e U_n\to\Ha$, with the inclusion $E\to F$, regarded as a morphism from $g\restriction E$ to $g:F\to\Ha$, we obtain a finite-dimensional $p$-normed space $W$ and a commutative diagram
$$\xymatrix{
U_{n} \ar[dd]_j \ar[dddrrrrrr]^{u_{n}}\\
&&&&&&\\
E\oplus_e^\e U_{n} \ar[rr]^{a'} \ar[drrrrrr]_{g\oplus u_{n}} & & W \ar[drrrr]^w \\
& & & & & & \Ha \\
E\ar[rr]_a \ar[uu]^i \ar[urrrrrr]^{g\restriction E} & & F \ar[uu]^{i'} \ar[urrrr]_{g}
}$$
with $\|w\|\leq 1$. Applying now $(\dagger)$ to $w$ and the embedding $a'\circ j$ we obtain $m>n$ and an almost isometry $v:W\to U_m$ such that $u_m\circ v$ is close to $w$ and $v\circ a'\circ j $ is close to $\imath_{(n,m)}$. Finally, the composition
$$
\begin{CD}
f: F@> i' >> W @> v >> U_m @>>> U_\infty
\end{CD}
$$
does the trick.
After this preparation, let $F$ be a finite-dimensional $p$-normed space, $e:E\to\ker u_\infty$ an isometry, where $E$ is a subspace of $F$, and $\e>0$. We shall construct an $\e$-isometry $f:F\to\ker u_\infty$ such that $\|f(x)-e(x)\|\leq \e\|x\|$ for every $x\in E$. This will show that $\ker u_\infty$ is of almost universal disposition, thus completing the proof.
Fix some small $\delta$ and apply $(\ddagger)$ taking $g$ as the zero operator from $F$ to $\Ha$ to get a $\delta$-isometry $f':F\to U_\infty$ such that
$\| f' \restriction E - e \| < \delta$ and $ \| u_\infty \cmp f' \| < \delta$. Of course, we cannot guarantee that $f'$ takes values in $\ker u_\infty$. To amend this, let $r:\Ha\to U_\infty$ be a right-inverse for $u_\infty$, and set $f=({\bf 1}_{U_\infty}-r\circ u_\infty)\circ f'$, that is, $f(x)=f'(x)-r(u_\infty(f'(x)))$. Then $f$ takes values in $\ker u_\infty$ since $u_\infty\circ f=u_\infty\circ f'-(u_\infty\circ r)\circ u_\infty\circ f'=0$, because $u_\infty\circ r$ is the identity on $\Ha$; moreover, $\|f-f'\|=\|r\circ u_\infty\circ f'\|\leq \delta$. Thus for $\delta$ sufficiently small $f:F\to\ker u_\infty$ is an $\e$-isometry with $\| f \restriction E - e \| < \e$.
\end{proof}
\begin{corollary}
The following spaces are linearly isomorphic to $\Gp$:
\begin{itemize}
\item[(a)] $\Gp\oplus\Gp$ as well as all finite direct sums $\Gp \oplus \dots \oplus \Gp$.
\item[(b)] $c_0(\Gp)$, the space of all sequences converging to $0$ in $\Gp$, endowed with the maximum $p$-norm.
\item[(c)] $C(\Delta, \Gp)$, the space of all continuous functions from the Cantor set $\Delta$ to $\Gp$, with the sup quasinorm.
\end{itemize}
\end{corollary}
\begin{proof}
We first observe that if $\mathbb H$ is separable and locally $1^+$-injective amongst $p$-Banach spaces, then $\Gp\oplus\mathbb H$ is linearly isomorphic to $\Gp$. Indeed, if $u_\infty: U_\infty\to \mathbb H$ is the operator appearing in Theorem~\ref{Thmrbgibr} and $r:\mathbb H\to U_\infty$ is a right inverse for $u_\infty$, then the mapping $x\in U_\infty\mapsto (x-r(u_\infty(x)),u_\infty(x))\in \ker(u_\infty)\oplus\mathbb H$ is a linear homeomorphism and we already know that both $U_\infty$ and $\ker(u_\infty)$ are isomorphic to $\Gp$.
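The inverse of this linear homeomorphism can be written down explicitly: since $u_\infty\circ r={\bf 1}_{\mathbb H}$, the map
$$
(y,h)\in \ker(u_\infty)\oplus\mathbb H\longmapsto y+r(h)\in U_\infty
$$
inverts $x\mapsto (x-r(u_\infty(x)),u_\infty(x))$, as a direct computation shows.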
Now, taking $\mathbb H=\Gp$ which is locally $1^+$-injective according to Proposition~\ref{inj}(b), we see that $\Gp$ is isomorphic to $\Gp \oplus \Gp$ hence also to any finite sum $\Gp \oplus \dots \oplus \Gp$. This proves (a).
To prove (b) and (c), note that both $c_0(\Gp)$ and $C(\Delta, \Gp)$ are locally $1^+$-injective, being the completion of the union of a chain of spaces of the form $\Gp \oplus \dots \oplus \Gp$ endowed with the maximum quasinorm; all these spaces are locally $1^+$-injective, because $\Gp$ is.
Hence, $\Gp$ is isomorphic to $\Gp \oplus c_0(\Gp)$ which is isomorphic to $c_0(\Gp)$, which proves (b).
As for (c), we know that $\Gp$ is isomorphic to $\Gp\oplus C(\Delta, \Gp)$ and since $\Gp$ lives complemented in $C(\Delta, \Gp)$ (as the subspace of constant functions) and $\Gp$ is isomorphic to its square, Pe\l czy\'nski's decomposition method applies: indeed, if $C(\Delta,\Gp)\approx \Gp \oplus Z$, then $\Gp\approx \Gp \oplus C(\Delta, \Gp)\approx \Gp \oplus \Gp \oplus Z \approx \Gp \oplus Z\approx C(\Delta,\Gp)$.
\end{proof}
\section{Miscellaneous remarks and questions}\label{closing}
\subsection{Mazur's ``rotations'' problem}
A quasi-Banach space is said to be almost isotropic if the orbits of the isometry group are dense in the unit sphere: if $\|x\|=\|y\|= 1$, then for every $\e>0$ there is a surjective isometry $u$ such that $\|y-u(x)\|\leq \e$. If this condition holds even for $\e=0$, the space is said to be isotropic: the isometry group acts transitively on the sphere.
A notorious problem that Banach attributes to Mazur in his {\it ``Th\'eorie des Op\'erations Lin\'eaires''} asks whether $\ell_2$ is the only separable isotropic Banach space; cf. \cite[p. 242]{banach}. This is the problem mentioned by Gurari\u\i\ in the title of \cite{g} and, as far as we know, is still open. We may refer the reader to \cite{c-extracta, b-rp} for two complementary surveys on the topic.
The following remark is immediate from Theorem~\ref{completes}.
\begin{corollary}\label{almost isotropic}
The space $\mathbb G_p$ is almost isotropic. \hfill$\square$
\end{corollary}
It is well-known that $\mathbb G$ (``our'' $\mathbb G_p$ when $p=1$) is not isotropic. However, the standard argument depends on Mazur's theorem about the existence of smooth points in any separable Banach space, and this argument is not available when $p<1$.
It is worth remarking that the notion of ``almost isotropic space'' that Gurari\u\i\ uses in \cite{g} is stronger than ours: for every $\e > 0$, every linear isomorphism $f$ between finite-dimensional subspaces should extend to a bijective linear isomorphism $\tilde f$ satisfying $\| \tilde f \| \leq (1+\e) \|f\|$ and $\| \tilde f ^{-1} \| \leq (1+\e) \|f^{-1}\|$.
Anyway, it is clear from the proof of \cite[Theorem 3]{g} that the spaces $\Gp$ are ``almost isotropic'' in Gurari\u\i's sense for all $p\in(0,1]$.
\subsection{Ultrapowers of $\mathbb G_p$}
There is an alternative proof of Theorem~\ref{w1} which is based on the ultraproduct construction; see \cite{k84}. Let $(X_i)$ be a family of $p$-Banach spaces indexed by $I$ and let $\mathscr U$ be a countably incomplete ultrafilter on $I$. Then the space of bounded families $\ell_\infty(I,X_i)$ with the quasinorm $\|(x_i)\|=\sup_i \|x_i\|$ is a $p$-Banach space and $c_0^\mathscr U(X_i)=\{(x_i): \lim_\mathscr U \|x_i\|=0\}$ is a closed subspace of $\ell_\infty(I,X_i)$. The ultraproduct of the family $(X_i)$ with respect to $\mathscr U$, denoted by $[X_i]_\mathscr U$, is the quotient space $\ell_\infty(I,X_i)/c_0^\mathscr U(X_i)$ with the quotient quasinorm. The class of the family $(x_i)$ in $[X_i]_\mathscr U$ is denoted by $[(x_i)]$.
The quasinorm in $[X_i]_\mathscr U$ can be computed as $\|[(x_i)]\|=\lim_\mathscr U\|x_i\|$.
When all the spaces $X_i$ are the same, say $X$, the ultraproduct is called the ultrapower of $X$ with respect to $\mathscr U$.
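Let us verify the formula for the quasinorm, for the reader's convenience. Given $(z_i)\in c_0^\mathscr U(X_i)$, the $p$-triangle inequality gives $\|x_i\|^p\leq \|x_i-z_i\|^p+\|z_i\|^p$; taking limits along $\mathscr U$ yields $\lim_\mathscr U\|x_i\|^p\leq \sup_i\|x_i-z_i\|^p$, so $\lim_\mathscr U\|x_i\|\leq \|[(x_i)]\|$. Conversely, for each $A\in\mathscr U$ the family given by $z_i=0$ if $i\in A$ and $z_i=x_i$ otherwise belongs to $c_0^\mathscr U(X_i)$, whence
$$
\|[(x_i)]\|\leq \inf_{A\in\mathscr U}\,\sup_{i\in A}\|x_i\|=\lim_\mathscr U\|x_i\|.
$$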
One has the following generalization of \cite[Proposition 5.7]{accgm} for which we provide a simpler proof.
\begin{proposition}
If $\mathscr U$ is a countably incomplete ultrafilter on the integers, then $[\mathbb G_p]_\mathscr U$ is a $p$-Banach space of universal disposition for separable $p$-Banach spaces and its density character is the continuum.
\end{proposition}
\begin{proof}
We denote by $I$ the index set supporting $\mathscr U$. Let $X$ be a separable subspace of $[\mathbb G_p]_\mathscr U$ and $g:X\to Y$ an isometry, where $Y$ is any separable $p$-Banach space. We will prove that there is an isometry $f:Y\to [\mathbb G_p]_\mathscr U$ such that $f(g(x))=x$ for every $x\in X$.
Clearly, we may and do assume that $Y/X$ has dimension one.
So, let $(x^n)$ be a normalized, linearly independent sequence whose linear span is dense in $X$ and $y^0\in Y\backslash X$.
Let $X^n$ be the subspace spanned by $(x^1,\dots, x^n)$ in $X$ and $Y^n$ the subspace spanned by $g[X^n]$ and $y^0$ in $Y$.
Also,
let us fix representatives $(x_i^n)$ so that $x^n=[(x_i^n)]$ for every $n$. We may assume $\|x_i^n\|=1$ for every $n$ and every $i$.
For $i\in I$ and $n\in\mathbb N$, let us denote by $X_i^n$ the subspace of $\mathbb G_p$ spanned by $(x_i^1,\dots,x_i^n)$. We define a linear map $\jmath_{n,i}: X^n\to X^n_i$ by letting $\jmath_{n,i}(x^k)= x^k_i$ for $1\leq k\leq n$ and extending linearly.
To proceed, we observe that the sets
$$
I^n_\e=\{i\in I \text{ such that $\jmath_{n,i}: X^n\to X^n_i$ is a strict $\e$-isometry}\}
$$
are in $\mathscr U$ for every $n$ and every $\e>0$. Let $(J_n)$ be a decreasing sequence of members of $\mathscr U$ with $\bigcap_n J_n=\varnothing$, which exists because $\mathscr U$ is countably incomplete. For each $i\in I$, set $n(i)=\max\{n\in\mathbb N: i\in J_n\cap I^n_{1/n}\}$ and observe that $n(i)\to\infty$ along $\mathscr U$.
Let us form the ultraproducts $[X^{n(i)}]_\mathscr U, [X^{n(i)}_i]_\mathscr U$ and $[Y^{n(i)}]_\mathscr U$.
It is obvious that $[X^{n(i)}]_\mathscr U$ and $[X^{n(i)}_i]_\mathscr U$ are isometric through the ultraproduct operator $[(\jmath_{n(i),i})]$.
Moreover, there is a linear isometry $\kappa: X\to [X^{n(i)}]_\mathscr U$ that we may define taking $\kappa(x)=[(x_i)]$, where $x_i\in X^{n(i)}$ is any point minimizing the ``distance'' from $x$ to $X^{n(i)}$, and the same applies to $Y$ and $[Y^{n(i)}]_\mathscr U$.
For each $i\in I$ consider the composition $g\circ\jmath_{n(i),i}^{-1}$, which is a strict $(1/n(i))$-isometry from $X^{n(i)}_i$ into $Y^{n(i)}$. On account of Lemma~\ref{helpful} we may find a $(1/n(i))$-isometry $f_i: Y^{n(i)}\to \mathbb G_p$ such that $\|f_i(g(\jmath_{n(i),i}^{-1}(x)))-x\|\leq \|x\|/n(i)$ for every $x\in X^{n(i)}_i$. It is now obvious that if $f:Y\to [\mathbb G_p]_\mathscr U$ denotes the composition of the embedding $Y\to [Y^{n(i)}]_\mathscr U$ with the ultraproduct operator $[(f_i)]$, then one obtains an isometry with $f(g(x))=x$ for every $x\in X$.
\end{proof}
Notice that, while it is unclear whether the spaces arising in the proof of Theorem~\ref{w1} are isotropic or not, it follows from Corollary~\ref{almost isotropic} and rather standard ultraproduct techniques that every ultrapower of $\mathbb G_p$ built over a countably incomplete ultrafilter is isotropic.
\subsection{Universal spaces}\label{universal}
As we have already mentioned, under the continuum hypothesis, all the spaces having the properties appearing in Theorem~\ref{w1} are isometric. It was observed in \cite[Proposition 4.7]{accgm} that, in the Banach space setting, the uniqueness cannot be proved in ${\sf ZFC}$, the usual setting of set theory with the axiom of choice. This depends on the fact that it is consistent with ${\sf ZFC}$ that there is no Banach space of density $2^{\aleph_0}$ containing an isometric copy of all Banach spaces of density $2^{\aleph_0}$, a recent result by Brech and Koszmider \cite{b-k}. Whether or not the same happens for $p$-Banach spaces remains open.
\subsection{Vector-valued Sobczyk's theorem without local convexity}
Sobczyk's theorem states that $c_0$, the Banach space of all sequences converging to zero with the sup norm, is separably injective -- amongst Banach spaces, of course. More interesting for us is that if $E$ is a separably injective Banach space, then so is $c_0(E)$ -- the space of sequences converging to $0$ in $E$. Several proofs of this fact are available. Some of them make strong use of local convexity. For instance, Johnson-Oikhberg's argument in \cite{j-o} is based on $M$-ideal theory, while the Castillo-Moreno proof in \cite{c-m} uses the bounded approximation property, a very rare property outside the Banach space setting. We do not know whether Rosenthal's proof
in \cite{ros} would survive without local convexity, but in any case the proof in \cite{c} applies verbatim to $p$-Banach spaces. So we have the following.
\begin{proposition}
If $E$ is separably injective amongst $p$-Banach spaces, then so is $c_0(E)$.
\end{proposition}
We do not know whether there is a nontrivial separable space, separably injective amongst $p$-Banach spaces when $p<1$, but our guess is no.
In any case, such a space would necessarily be a complemented subspace of $\mathbb G_p$.
\subsection{Operators on $\mathbb G_p$ when $p<1$} It is a classical result in quasi-Banach space theory that every operator from $L_p$ to a $q$-Banach space for $p<q\leq 1$ is zero. It follows easily that the same is true replacing $L_p$ by $\Gp$. In particular, the dual of $\Gp$ is trivial.
In a similar vein, there is no nonzero operator from $\mathbb G_p$ into any $L_q$ (here $q$ can be 0) and there is no compact operator on $\mathbb G_p$; the first statement follows from the fact that there is no nonzero operator from $L_p/H_p$ to $L_0$, see \cite{alek}, and the second one from the fact that every operator defined on $L_p$ is either zero or an isomorphism on a copy of $\ell_2$, see \cite[Theorem 7.20]{kpr} for what is perhaps the simplest proof.
We do not know whether $\Gp$ is isomorphic to all its quotients or complemented subspaces.
In particular we don't know whether $\Gp$ is isomorphic to its quotient by a line.
This is clearly connected to the notion of a $K$-space. Recall that a quasi-Banach space $X$ is said to be a $K$-space if whenever $Z$ is a quasi-Banach space with a subspace $L$ of dimension one such that $Z/L$ is isomorphic to $X$, then $L$ is complemented in $Z$ and so $Z$ is isomorphic to $\mathbb K\oplus X$.
It would be interesting to know whether the spaces $\Gp$ are $K$-spaces or not. The case $p=1$ is solved in the affirmative by a deep result of Kalton and Roberts \cite[Theorem 6.3]{kr}, who proved that every $\mathscr L_\infty$-space, and in particular the Gurari\u\i\ space, is a $K$-space.
\subsection*{Acknowledgments}
The proof of Proposition~\ref{PBenYakHay} is due to Haydon who presented it to the third named author during the {\it Workshop on Forcing Axioms and their Applications} held at the Fields Institute, Toronto, 22 -- 26 October 2012.
We thank Richard Haydon for this elegant argument.
\section{Introduction}
The Standard Model of Particle Physics is an
extremely successful
theory which has been tested experimentally
to a high level
of accuracy \cite{PDG,Lang}.
After the discovery of the top quark,
the Higgs boson, which is predicted
to exist by the Standard Model,
is the only `missing'
ingredient that has not been directly
observed yet.
However, a number of theoretical prejudices
suggest that the Standard Model is not the
`final answer' of nature but rather
an effective description valid up
to the weak scale of order ${\cal O}(100\,\mathrm{GeV})$.
The arbitrariness of the spectrum and gauge group, the
large number of free parameters,
the smallness of the weak scale compared
to the Planck scale and the
inability to turn on gravity
suggest that at higher energies (shorter
distances) a more fundamental theory
will be necessary to describe nature.
Over the past 20 years
various extensions of the Standard Model
such as Technicolor \cite{tech,FS}, Grand Unified Theories \cite{GGUT,ross},
Supersymmetry \cite{WZ,wb} or String Theory \cite{Green} have been proposed.
In recent years supersymmetric
extensions of the Standard Model have become
very popular, also among experimentalists,
not necessarily because of their convincing solution of
the above problems but rather because
most other contenders have been
(more or less) ruled out by now.
Another reason for the popularity of
supersymmetric theories among theorists
is the fact that the low energy
limit of superstring theory --
a promising
candidate for a unification of all interactions including gravity
-- is (by and large)
supersymmetric.
This set of lectures gives an elementary introduction to the
supersymmetric Standard Model. Section~2
contains some of the necessary background on
generic supersymmetric field theories
while section~3 develops
supersymmetric extensions of the Standard Model
and discusses spontaneous breaking of supersymmetry.
In section~4 extensions of the Standard Model
with softly broken supersymmetry are presented
and some of the phenomenological
properties are discussed.
Section~5 contains a summary and
our conventions which follow rather closely
ref.\ \cite{wb} are recorded in an appendix.
These lectures are not meant
to review the latest developments of the
supersymmetric Standard Model but rather
attempt to give an elementary introduction from
a ``modern'' point of view.
Many excellent review articles on
supersymmetry and the supersymmetric
Standard Model do exist and have been heavily
used in these lectures \cite{RNi} -- \cite{Rsm}.
In addition, a collection of
some of the classic papers concerning the subject can be found
in ref.~\cite{Fer}.
\section{Introduction to Supersymmetry}
Supersymmetry is a symmetry between bosons and fermions or, more precisely, a symmetry
between states of different spin \cite{WZ}.
For example, a spin-0 particle is mapped to
a spin-$\frac{1}{2}$ particle
under a supersymmetry transformation.
Thus,
the generators $Q_{\alpha }, \overline{Q}_{\dot{\alpha } }$
of the supersymmetry transformation must
transform in the
spin-$\frac{1}{2}$ representations
of the Lorentz group.
These new fermionic
generators form together with
the four-momentum $P_m$
and the generators of the Lorentz transformations $M^{mn}$
a graded Lie algebra
which features in addition to
commutators also anticommutators
in their defining relations.
The simplest ($N=1$)
supersymmetry algebra reads:
\bea \label{N1}
\{ Q_{\alpha},\overline{Q}_{\dot{\beta }} \} &=& 2\sigma_{\alpha \dot{\beta }}^{m} ~P_m \nn\\
\{ Q_{\alpha} , Q_{\beta} \}
&=& \{ \overline{Q}_{\dot{\alpha }}, \overline{Q}_{\dot{\beta }} \} = 0\nn \\
{}[\overline{Q}_{\dot{\alpha }}, P_m] &=& [Q_{\alpha} ,P_m ]=0\\ \nn
[Q_{\alpha}, M^{mn}] &=& \frac{1}{2}\,
\sigma^{mn \beta}_{\ \alpha} Q_\beta\\ \nn
[\overline{Q}_{\dot{\alpha }}, M^{mn}] &=& \frac{1}{2}\,
\bar\sigma^{mn \dot\beta}_{\ \dot{\alpha }} \overline{Q}_{\dot\beta}
\eea
where we used the notation and convention
of ref.\ \cite{wb}.
$\sigma^m$ are the Pauli matrices
and the $\sigma^{mn}$ are defined in the
appendix.\footnote{
In general it is possible to have
$N$ sets of supersymmetry generators
$Q_{\alpha }^I, \overline{Q}_{\dot{\alpha } }^I, I=1,\ldots,N$,
in which case one refers to $N$-extended
supersymmetry.
Such extended superalgebras have been
classified by
Haag, Lopuszanski and Sohnius \cite{hls}
generalizing earlier work of
Coleman and Mandula \cite{cm}
who showed that
the possible bosonic symmetries of the
S-matrix of a four-dimensional, local,
relativistic quantum field theory
consist of the generators of the Poincar\'e
group and a finite number of generators
of a compact Lie group,
which are Lorentz scalars.
In ref.\ \cite{hls} this theorem was generalized
to also include symmetry
transformations generated
by fermionic operators and all possible
superalgebras were found.
In extensions of the Standard Model
$N$-extended supersymmetries
have played no role so far since
they cannot accommodate the chiral structure
of the Standard Model. }
The particle states in a supersymmetric field theory form representations (supermultiplets)
of the supersymmetry algebra (\ref{N1}).
We do not recall the entire
representation theory here (see, for example,
refs.\ \cite{RSo,wb})
but only highlight
a few generic features:
\begin{itemize}
\item[(a)]
There is an equal number of bosonic degrees of freedom
$n_B$ and fermionic
degrees of freedom $n_F$
in a supermultiplet
\beq\label{nBF}
n_B = n_F\ .
\eeq
\item[(b)]
The masses of all states
in a supermultiplet are degenerate.
In particular the masses
of bosons and fermions are
equal\footnote{This follows immediately from the fact that
$P^2$ is a Casimir operator of the supersymmetry
algebra (\ref{N1}):
$[P^2,Q] = [P^2,M^{mn}]=0$.}
\beq\label{mass}
m_B = m_F\ .
\eeq
\item[(c)]
$Q$ has mass dimension
$\frac{1}{2}$ and thus the mass dimensions
of the fields
in a supermultiplet differ by $\frac{1}{2}$.
\end{itemize}
The two irreducible multiplets
which are important for constructing the supersymmetric Standard Model
are the chiral multiplet and the vector multiplet which we discuss in turn now.
\subsection{The chiral supermultiplet}
The chiral supermultiplet $\Phi$ \cite{WZ}
contains a complex scalar field $A(x)$ of spin 0 and mass dimension 1,
a Weyl fermion $\psi_{\alpha} (x)$ of spin $\frac{1}{2}$ and mass
dimension $\frac{3}{2}$ and an auxiliary complex scalar field $F(x)$ of spin 0 and
mass dimension 2
\beq
\Phi = \big(A(x),\psi_{\alpha} (x),F(x)\big)\ .
\eeq
Off shell, $\Phi$ has four real bosonic
degrees of freedom ($n_B=4$)
and four real fermionic degrees of freedom
($n_F=4$), in accord with (\ref{nBF}).
The supersymmetry transformations
act on the fields in the multiplet as follows:
\bea \label{repr}
\de_{\xi} A &=& \sqrt{2} \xi \psi \nn \\
\de_{\xi} \psi &=& \sqrt{2} \xi F
+ i \sqrt{2} \sigma^m \bxi \partial_m A \\ \nn
\de_{\xi} F &=& i \sqrt{2} \bxi \bs^m \partial_m \psi
\eea
where we used the conventions of ref.\ \cite{wb}
and the appendix.
The parameters of the transformation
$\xi^{\alpha}$ are constant, complex
anticommuting Grassmann parameters obeying
\beq
\xi_{\alpha} \xi_{\beta}= - \xi_{\beta} \xi_{\alpha}.
\eeq
The transformations (\ref{repr})
can be thought of as
generated by the operator
\beq\label{deldef}
\de_{\xi} = \xi Q + \bxi \overline{Q}
\eeq
with $Q$ and $\overline{Q}$ obeying (\ref{N1}).
This can be explicitly checked by evaluating the commutators
$[\de_{\xi}, \, \de_{\eta} ]$ on the fields
$A, \psi$ and $F$.\\
\noindent
{\it
{\bf Exercise:}
Show $[ \de_{\xi}, \de_{\eta} ]
= 2i (\eta \sigma^m \bar\xi - \xi\sigma^m \bar\eta)
\partial_m $ by using (\ref{deldef}) and (\ref{N1}).\\
\noindent
{\bf Exercise:} Evaluate
the commutator
$[ \de_{\xi}, \de_{\eta} ]$ using (\ref{repr})
for all three fields $A, \psi$ and $F$
and show that this is consistent
with the results of the previous exercise.\\
}
The field $F$
has the highest mass dimension
of the members of the chiral multiplet and therefore
is called the highest component.
As a consequence it cannot transform
into any other field
of the multiplet but only into
their derivatives.
This is not only true for the chiral
multiplet (as can be seen explicitly
in (\ref{repr})) but holds
for any supermultiplet.
This fact can be used to construct
Lagrangian densities which transform into
a total derivative under supersymmetry
transformations leaving
the corresponding actions invariant.
We do not review here the method for systematically
constructing supersymmetric actions which
is done most efficiently using
a superspace formalism.
Since these lectures focus on
the phenomenological properties of supersymmetry
we refer the reader to the
literature \cite{wb} for further details
and only quote the results.
For the chiral
multiplet a supersymmetric and
renormalizable Lagrangian
is given by \cite{wb}
\bea
\cL (A,\psi,F) &=&
-i \bpsi \bs^m \partial_m \psi - \partial_m \bA \partial^m A + F\bF \nn\\
& &+m ( AF + \bA \bF -\frac{1}{2}(\psi \psi + \bpsi \bpsi) ) \\ \nn
& & +Y(A^2F + \bA^2 \bF - A\psi \psi - \bA \bpsi \bpsi)\ ,
\eea
where $m$ and $Y$ are real parameters.
This action has the peculiar property that
no kinetic term for $F$ appears.
As a consequence the equations of motion for $F$ are
purely algebraic
$$
\frac{\de \cL}{\de \bF} = F +m\bA + Y \bA^2 = 0 ,
\qquad
\frac{\de \cL}{\de F} = \bF +mA + Y A^2 = 0.
$$
Thus
$F$ is a non-dynamical, `auxiliary' field
which can be eliminated from the action
algebraically by using its
equation of motion. This yields
\bea\label{Lafter}
\cL (A,\psi,F=-m\bA - Y\bA^2) &=&
-i \bpsi \bs^m \partial_m \psi - \partial_m \bA \partial^m A
\\ \nn
&-&\frac{m}{2} (\psi \psi + \bpsi \bpsi) -Y(A\psi \psi + \bA \bpsi \bpsi ) - V(A,\bA)
\eea
where $V(A,\bA)$ is the scalar
potential given by
\bea\label{potential}
V(A,\bA) &=& \mid m A + Y A^2 \mid^2 \nn\\
&=& m^2A\bA + m Y(A\bA^2 + \bA A^2) + Y^2 A^2 \bA^2\\
&=& F\bF \mid_{\frac{\de \cL}{\de F}=\frac{\de \cL}{\de \bF}= 0} \ .\nn
\eea
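As a quick cross-check of this elimination step, the expansion of $|mA+YA^2|^2$ in (\ref{potential}) can be verified symbolically. The following sympy sketch (variable names are ours, chosen for this check) treats $m$ and $Y$ as real parameters, as in the text:

```python
import sympy as sp

m, Y = sp.symbols('m Y', real=True)   # real mass and Yukawa parameters
A = sp.Symbol('A')                    # complex scalar field
Ab = sp.conjugate(A)

F = -(m*Ab + Y*Ab**2)                 # F eliminated via its equation of motion
V = sp.expand(F*sp.conjugate(F))      # V = F Fbar on shell
target = sp.expand(m**2*A*Ab + m*Y*(A*Ab**2 + Ab*A**2) + Y**2*A**2*Ab**2)
assert sp.simplify(V - target) == 0   # matches the expansion in the text
```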
As can be seen from (\ref{Lafter}) and
(\ref{potential}) after elimination of
$F$ a standard renormalizable Lagrangian
for a complex scalar $A$ and a Weyl fermion $\psi$
emerges. However, (\ref{Lafter}) is not the most
general
renormalizable Lagrangian for such fields.
Instead it satisfies the following
properties:
\begin{itemize}
\item
$\cL$ only depends on two independent
parameters, the mass parameter $m$
and the dimensionless Yukawa coupling $Y$.
In particular, the $(A\bA)^2$ coupling
is not controlled by an independent
parameter
(as it would be in non-supersymmetric theories)
but determined
by the Yukawa coupling $Y$.
\item
The masses for $A$ and $\psi$ coincide
in accord with (\ref{mass}).\footnote{As an
immediate consequence of this feature
one notes that supersymmetry
must be explicitly or
spontaneously broken
in nature.}
\item
$V$ is positive semi-definite, $V \geq 0$.
\end{itemize}
\subsection{The vector supermultiplet}
The vector supermultiplet $V$ contains
a gauge boson $v_m$ of spin 1 and
mass dimension 1,
a Weyl fermion (called the gaugino)
$\lambda $ of spin $\frac{1}{2}$
and mass dimension $\frac{3}{2}$,
and a real scalar field
$D$ of spin 0
and mass dimension 2
\beq
V = (v_m(x), \lambda_{\alpha}(x), D(x))\ .
\eeq
Like the chiral multiplet,
the vector multiplet
has $n_B=n_F=4$.
The vector multiplet can be used to gauge the action of
the previous section.
An important consequence of the theorems
of refs.~\cite{cm,hls} is the fact that the
generators $T^a$ of a compact
gauge group $G$ have to
commute with the supersymmetry generators
\beq
{}[T^a,Q_\alpha]=[T^a,\overline{Q}_{\dot\alpha}]=0\ .
\eeq
Therefore all members of a chiral multiplet
($A,\psi,F$) have to reside in the same
representation of the gauge group.
Similarly, the members of the vector multiplet
have to transform in the adjoint representation
of $G$ and thus they all are
Lie-algebra valued fields
\beq
v_m = v_m^a T^a\ , \qquad
\lambda_{\alpha } = \lambda_{\alpha}^a T^a \ ,
\qquad
D = D^a T^a \ .
\eeq
The
supersymmetry transformations of
the components of the
vector multiplet are \cite{wb}:
\bea \label{vsusy}
\de_{\xi} v_m^a &=& -i \bl^a \bs_m \xi + i \bxi \bs_m \lambda^a\ , \\ \nn
\de_{\xi} \lambda^a &=& i \xi D^a + \sigma^{mn}\xi F_{mn}^a\ , \\ \nn
\de_{\xi} D^a &=& - \xi \sigma^m D_m
\bar\lambda^a - D_m \lambda^a \sigma^m \bxi\ .
\eea
The field strength of the
vector bosons $F_{mn}^a$
and the covariant derivative $D_m \lambda^a$
are defined according to
\bea
F_{mn}^a &:=& \partial_m v_n^a - \partial_n v_m^a - gf^{abc} v_m^b v_n^c\ ,\\
\nn
D_m \lambda^a &:=& \partial_m\lambda^a - gf^{abc} v_m^b \lambda^c\ ,
\eea
where $f^{abc}$ are the structure constants of the Lie algebra
and $g$ is the gauge coupling.
A gauge invariant, renormalizable
and supersymmetric
Lagrangian for the vector multiplet
is given by
\beq\label{puregauge}
\cL = -\frac{1}{4} F_{mn}^a F^{mn\,a} - i \bl^a \bs^m D_m \lambda^a
+ \frac{1}{2} D^a D^a\ .
\eeq
As before
the equation of motion for the auxiliary
$D$-field is
purely algebraic $D^a = 0$.
A gauge invariant, renormalizable
Lagrangian containing a set of
chiral multiplets ($A^i, \psi^i, F^i$)
coupled to vector multiplets
is found to be \cite{wb}
\bea \label{cvlagr}
\cL(A^i, \psi^i, F^i, v_m^a, \lambda^a, D^a)
&=& -\frac{1}{4} F_{mn}^a F^{mn ~a} - i\bl^a \bs^m D_m \lambda^a + \frac{1}{2}
D^a D^a \nn \\ \nn
&& -D_m A^i D^m \bA^i - i \bpsi^i \bs^m D_m \psi^i
+ \bF^i F^i \\
&& +i \sqrt{2} g (\bA^i T_{ij}^a \psi^j \lambda^a
- \bl^a T_{ij}^a A^i \bpsi^j) \\ \nn
&& + gD^a\bA^i T_{ij}^a A^j - \frac{1}{2} W_{ij} \psi^i \psi^j
-\frac{1}{2} \bW_{ij} \bpsi^i \bpsi^j \\ \nn
&& +F^i W_i + \bF^i \bW_i\ ,
\eea
where
the covariant derivatives
are defined by
\bea\label{covD}
D_m A^i &:=& \partial_m A^i + ig v_m^a T_{ij}^a A^j\ , \\ \nn
D_m \psi^i &:=& \partial_m \psi^i + ig v_m^a T_{ij}^a \psi^j\ .
\eea
$W_i$ and $W_{ij}$ in (\ref{cvlagr}) are
the derivatives of a holomorphic
function $W(A)$ called the superpotential
\bea \label{superpot}
W(A) &=& \frac{1}{2} m_{ij} A^i A^j + \frac{1}{3} Y_{ijk} A^i A^j A^k\ , \nn\\
W_i &\equiv& \frac{\partial W}{\partial A^i}= m_{ij} A^{j} + Y_{ijk}A^j A^k\ , \\ \nn
W_{ij} &\equiv& \frac{\partial^2 W}{\partial A^i \partial A^j}
= m_{ij} + 2 Y_{ijk} A^k\ .
\eea
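The derivatives in (\ref{superpot}) can be reproduced mechanically. Here is a one-field sympy sketch, in which the family indices collapse, $m_{ij}\to m$ and $Y_{ijk}\to Y$ (names chosen only for this illustration):

```python
import sympy as sp

A, m, Y = sp.symbols('A m Y')
# single-field version of the superpotential (superpot)
W = sp.Rational(1, 2)*m*A**2 + sp.Rational(1, 3)*Y*A**3
W_i = sp.diff(W, A)        # first derivative: mass term plus Yukawa term
W_ij = sp.diff(W, A, 2)    # second derivative: field-dependent fermion mass

assert sp.expand(W_i - (m*A + Y*A**2)) == 0
assert sp.expand(W_ij - (m + 2*Y*A)) == 0
```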
By explicitly inserting (\ref{superpot})
into (\ref{cvlagr}) one observes that the
$m_{ij}$ are mass parameters while
the $Y_{ijk}$ are Yukawa couplings.
Supersymmetry forces $W$ to be a holomorphic
function of the scalar fields $A$ while
renormalizability restricts
$W$ to be at most a cubic polynomial of $A$.
Finally, the parameters
$m_{ij}$ and $Y_{ijk}$ are further constrained by
gauge invariance.
As before, $F^i$ and $D^a$ obey
algebraic equations of motion which read
\bea \label{FDeqm}
\frac{\de \cL}{\de F} = 0 & \Rightarrow &
\bF_i + W_i = 0\ , \nn \\
\frac{\de \cL}{\de \bF} = 0 & \Rightarrow & F_i + \bW_i = 0\ , \\ \nn
\frac{\de \cL}{\de D^a} = 0 & \Rightarrow & D^a + g \bA^i T_{ij}^a A^j = 0\ .
\eea
They can be used to eliminate
the auxiliary fields $F^i$ and $D^a$ from
the Lagrangian (\ref{cvlagr})
and one obtains
\bea\label{cvlagrphys}
\lefteqn{
\cL(A^i, \psi^i, v_m^a, \lambda^a, F_i=-\bW_i, D^a= -g\bA^i T_{ij}^a A^j)=} \nn\\
&&-\frac{1}{4} F_{mn}^a F^{mn ~a} - i\bl^a \bs^m D_m \lambda^a
-D_m A^i D^m \bA^i - i \bpsi^i \bs^m D_m \psi^i \\ \nn
&& +i \sqrt{2} g (\bA^i T_{ij}^a \psi^j \lambda^a
- \bl^a T_{ij}^a A^i \bpsi^j)
- \frac{1}{2} W_{ij} \psi^i \psi^j
-\frac{1}{2} \bW_{ij} \bpsi^i \bpsi^j - V(A,\bA)
\eea
where
\bea \label{scalarpot}
V(A,\bA) &=& W_i \bW_i
+ \frac{1}{2} g^2(\bA^i T^a_{ij}A^j)(\bA^k T^a_{kl}A^l) \nn\\
&=& (F^i \bF^i + \frac{1}{2} D^a D^a)\mid_{\frac{\de \cL}{\de F}=0,
\frac{\de \cL}{\de D^a}=0}\\
&\geq & 0 \ .\nn
\eea
As before the scalar potential
$V(A,\bA)$ is positive semi-definite.
\bigskip
\noindent
{\bf Exercise:} {\it Insert
(\ref{FDeqm}) into (\ref{cvlagr}) and derive (\ref{cvlagrphys}).}
\section{A Supersymmetric Extension of the
Standard Model}
\subsection{The Standard Model}
In this section we briefly review
some basic features of the Standard Model.
The Standard Model is a
quantum gauge field
theory with a chiral gauge group
$G_{\rm SM}=SU(3)\times SU(2)\times U(1)_Y$.
The spectrum of particles includes
three families of quarks and leptons,
the gauge bosons (gluons, $W^\pm, Z^0$, photon)
of $G_{\rm SM}$
and one spin-0 Higgs doublet.
In table~1 the particle content and the
corresponding gauge quantum numbers are displayed.
\begin{table}
\begin{center}
\vspace{0.4cm}
\begin{tabular}{|l||c||ccc|c|}
\hline
&& SU(3) & SU(2) & U(1)$_Y$ & U(1)$_{\rm em}$ \\[0.5ex]
\hline
&&&&& \\
quarks
& $q_L^I = \left( \begin{array}{c} u_L^I \\ d_L^I \end{array} \right)$&
3 & 2 & $\frac{1}{6}$ & $\left( \begin{array}{c} \frac{2}{3} \\ -\frac{1}{3}
\end{array} \right) $ \\[0.5ex]
&&&&& \\
& ${u}_R^I $&
$\bar{3}$ & 1 & $-\frac{2}{3}$ & $-\frac{2}{3}$\\[0.5ex]
& ${d}_R^I$ &
$\bar{3}$ & 1 & $\frac{1}{3} $ & $\frac{1}{3}$ \\[1ex]
&&&&&\\
\hline
&&&&&\\
leptons &
$l_L^I = \left( \begin{array}{c} \nu_L^I \\ e_L^I \end{array} \right)$ &
1 & 2 & $ -\frac{1}{2} $& $\left( \begin{array}{c} 0 \\ -1 \end{array}
\right)$ \\[0.5ex]
&&&&& \\
& ${e}_R^I$ &
1 & 1 & $1$ & $1$ \\[1ex]
&&&&&\\
\hline
&&&&&\\
Higgs &
$ h= \left( \begin{array}{c} h^0 \\ h^- \end{array} \right)$ &
1 & $2$ & $-\frac{1}{2} $ & $\left( \begin{array}{c} 0 \\ -1 \end{array}
\right)$ \\
&&&&& \\
\hline
&&&&& \\
gauge bosons& $G$& 8&1&0&0\\
&$W$ &1&3&0&$(0,\pm1)$\\
&$B$&1&1&0&0\\
&&&&& \\
\hline
\end{tabular}
\caption{The particle content of the
Standard Model.
The index $I=1,2,3$ labels the three families
of chiral quarks $q_L^I, {u}_R^I,
{d}_R^I$ and chiral leptons $l_L^I, {e}_R^I$.
All of them are Weyl fermions and transform in the
$(\halb,0)$ representation of the Lorentz group
(they have an undotted spinor
index $\alpha$).
The subscripts $R,L$ do not specify the representation
of the Lorentz group but instead
are used to indicate
the different transformation properties
under the chiral
gauge group $SU(2)\times U(1)$.
This somewhat unconventional notation is used
to make a smooth transition to the supersymmetric
Standard Model later on.
The electromagnetic charge listed in the last column is defined by
$Q_{em} = T^3_{SU(2)} + Q_Y $.}
\end{center}
\end{table}
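The electromagnetic charges in the last column of table~1 can be cross-checked against the relation $Q_{em} = T^3_{SU(2)} + Q_Y$ quoted in the caption. The following script runs through the table (the field labels are ours, chosen for readability; $SU(2)$ singlets carry $T^3=0$):

```python
from fractions import Fraction as F

# (T^3, Q_Y, expected Q_em) for each entry of Table 1
fields = {
    'u_L':  (F(1, 2),  F(1, 6),  F(2, 3)),
    'd_L':  (F(-1, 2), F(1, 6),  F(-1, 3)),
    'u_R':  (F(0),     F(-2, 3), F(-2, 3)),
    'd_R':  (F(0),     F(1, 3),  F(1, 3)),
    'nu_L': (F(1, 2),  F(-1, 2), F(0)),
    'e_L':  (F(-1, 2), F(-1, 2), F(-1)),
    'e_R':  (F(0),     F(1),     F(1)),
    'h0':   (F(1, 2),  F(-1, 2), F(0)),
    'h-':   (F(-1, 2), F(-1, 2), F(-1)),
}
for name, (t3, y, q) in fields.items():
    assert t3 + y == q, name   # Q_em = T^3 + Q_Y holds for every field
```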
The Lagrangian of the Standard Model reads
\bea\label{LSM}
\cL &=& -\frac{1}{4} \sum_{(a)=1}^{3}
\left( (F_{mn}^b F^{mn ~b})_{(a)} \right) - D_m h D^m \bh \nn\\
&& + \sum_{I=1}^3 \left( -i \bar{q}_L^I D\!\!\!\! / \ q_L^I -i \bu_R^I D\!\!\!\! / \ u_R^I
- i\bar{d}_R^I D\!\!\!\! / \ d_R^I
-i \bel_L^I D\!\!\!\! / \ l_L^I -i \be_R^I D\!\!\!\! / \ e_R^I \right) \\ \nn
&& - \sum_{IJ=1}^3\left( ({Y_u})_{IJ} \bh q_L^I u_R^J
+ ({Y_d} )_{IJ} h q_L^I d_R^J
+ ({Y_l})_{IJ} h l_L^I e_R^J + h.c.\right) -V(h,\bh),
\eea
where $D\!\!\!\! / \ = \sigma^m D_m $
and the index $(a)$ labels the 3 different
factors in the gauge group. $V(h,\bh)$ is
the scalar potential for the
Higgs doublet which is chosen to be
\beq
V(h,\bh)= \mu^2 h \bh + \lambda (h \bh)^2\ .
\eeq
In order for the potential to be bounded
from below,
$\lambda>0$ has to hold. For $\mu^2 <0$ the electroweak
gauge group
$SU(2)\times U(1)_Y$ is spontaneously
broken down to $U(1)_{\rm em}$.
In this case the minimum of the potential is not at $\langle h\rangle =0$, but at
$\langle h \bar{h}\rangle
= -\frac{\mu^2}{2\lambda}.$
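The position of this minimum is easy to verify symbolically. In the sketch below, the variable $x$ (introduced only for this check) stands for the gauge invariant combination $h\bar h\geq 0$, and $\mu^2<0$, $\lambda>0$ as in the text:

```python
import sympy as sp

mu2, lam, x = sp.symbols('mu2 lam x', real=True)
V = mu2*x + lam*x**2                  # Higgs potential in terms of x = h*hbar
xmin = sp.solve(sp.diff(V, x), x)[0]  # stationary point of V
assert sp.simplify(xmin + mu2/(2*lam)) == 0   # xmin = -mu^2/(2*lambda)
```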
\bigskip
\noindent
{\bf Exercise:} {\it Give explicitly
all covariant derivatives in (\ref{LSM}).}
\noindent
{\bf Exercise:} {\it Check that
the Lagrangian (\ref{LSM})
is gauge and Lorentz invariant.}
\subsection{Supersymmetric Extensions} \label{ssm}
Let us now turn to the supersymmetric generalization
of the Standard Model.\footnote{See also
\cite{RNi} - \cite{Fer}.}
The idea is to promote the Lagrangian (\ref{LSM})
to a supersymmetric Lagrangian.
As we learned in the previous section
supersymmetry requires the presence of additional
states which form supermultiplets with
the known particles. Since all states
of a supermultiplet carry
the same gauge quantum numbers we need
at least a doubling of states:
For every field of the SM one has to
postulate a superpartner
with the exact same gauge quantum numbers
and a spin such that it can form an
appropriate supermultiplet.
More specifically, the quarks and leptons
are promoted to chiral multiplets
by adding scalar (spin-0) squarks
($\tilde{q}_L^I, \tilde{u}_R^I, \tilde{d}_R^I$)
and sleptons ($\tilde{l}_L^I, \tilde{e}_R^I$)
to the spectrum.
The gauge bosons are promoted to vector multiplets by adding the corresponding spin-$\halb$
gauginos
($\tilde{G}, \tilde{W}, \tilde{B}$) to the spectrum.
Finally, the Higgs boson is also
promoted to a chiral multiplet
with a spin-$\halb$ Higgsino superpartner.
However, the supersymmetric version of the
Standard Model cannot `live' with only
one Higgs doublet and at least a
second Higgs doublet has to be added.
This can be seen from the fact
that one cannot write down
a supersymmetric version of the
Yukawa interactions of the Standard Model
without introducing a second Higgs doublet.
The reason is
the definite chirality of the Higgsino.
Another way to see the necessity
of a second Higgs doublet is the fact that
the Higgsino
is a chiral fermion which carries $U(1)$ hypercharge and
hence it upsets the anomaly cancellation
condition.
Thus a second Higgsino with opposite $U(1)$
charge
is necessary and supersymmetry
then also requires a second
spin-0 Higgs doublet.\footnote{%
Of course, extensions with more
Higgs doublets are also possible, but
two is the minimal number.}
The precise spectrum of the supersymmetric
Standard Model is summarized in
table~2.
\begin{table}
\begin{center}
\begin{tabular}{|l||c||cc||ccc|c|}
\hline
&supermultiplet& F & B & SU(3) & SU(2) & U(1)$_Y$ & U(1)$_{\rm em}$ \\
\hline
&&&&&&& \\
quarks
& $Q_L^I = \left( \begin{array}{c} U_L^I \\ D_L^I \end{array} \right)$&
$q_L^I$ & $ \tilde{q}_L^I $ &
3 & 2 & $\frac{1}{6}$ & $\left( \begin{array}{c} \frac{2}{3} \\ -\frac{1}{3}
\end{array} \right) $ \\
&&&&&&& \\
& ${U}_R^I $&
$ {u}_R^I $ & $ \tilde{{u}}_R^I $ &
$\bar{3}$ & 1 & $-\frac{2}{3}$ & $-\frac{2}{3}$\\[0.5ex]
& ${D}_R^I$ & $ {d}_R^I $ & $ \tilde{{d}}_R^I $ &
$\bar{3}$ & 1 & $\frac{1}{3} $ & $\frac{1}{3}$ \\[1ex]
&&&&&&&\\
\hline
&&&&&&&\\
leptons &
$L_L^I = \left( \begin{array}{c} \cN_L^I \\ E_L^I \end{array} \right)$ &
$l_L^I $ & $ \tilde{l}_L^I $ &
1 & 2 & $ -\frac{1}{2} $& $\left( \begin{array}{c} 0 \\ -1 \end{array}
\right)$ \\
&&&&&&& \\
& ${E}_R^I$ &
$ {e}_R^I $ & $ \tilde{{e}}_R^I $ &
1 & 1 & $1$ & $1$ \\[1ex]
&&&&&&&\\
\hline
&&&&&&&\\
Higgs &
$ H_d= \left( \begin{array}{c} H_d^0 \\ H_d^- \end{array} \right)$ &
$ \left( \begin{array}{c} \tilde{h}^0 \\ \tilde{h}^- \end{array} \right)$ &
$ \left( \begin{array}{c} h_d^0 \\ h_d^- \end{array} \right)$ &
1 & 2 & $-\frac{1}{2} $ & $\left( \begin{array}{c} 0 \\ -1 \end{array}
\right)$ \\
& $ H_u= \left( \begin{array}{c} H_u^+ \\ H_u^0 \end{array} \right)$ &
$ \left( \begin{array}{c} \tilde{h}^+ \\ \tilde{h}^0 \end{array} \right)$ &
$ \left( \begin{array}{c} h_u^+ \\ h_u^0 \end{array} \right)$ &
1 & 2 & $\frac{1}{2} $ & $\left( \begin{array}{c} 1 \\ 0 \end{array}
\right)$ \\
&&&&&&& \\
\hline
&&&&&&& \\
gauge& $G$&$\tilde G$&$G$ & 8&1&0&0\\
bosons&$W$&$\tilde W$ &$W$&1&3&0&$(0,\pm1)$\\
&$B$&$\tilde B$&$B$&1&1&0&0\\
&&&&&&& \\
\hline
\end{tabular}
\caption{Particle content of the supersymmetric Standard Model. The column below `F' (`B')
denotes the fermionic (bosonic)
content of the model.}
\end{center}
\end{table}
The Lagrangian for the supersymmetric Standard
Model has to be of the form (\ref{cvlagr})
with an appropriate superpotential $W$.
It has to be chosen such that the
Lagrangian of the non-supersymmetric
Standard Model (\ref{LSM}) is contained.
This is achieved by
\beq \label{mssmpot}
W= \sum_{IJ}\left(({Y_u})_{IJ} h_u \tilde{q}_L^I \tilde{u}_R^J +
({Y_d})_{IJ} h_d \tilde{q}_L^I \tilde{d}_R^J +
({Y_l})_{IJ} h_d \tilde{l}_L^I \tilde{e}_R^J
\right) + \mu\, h_u h_d\ .
\eeq
Once $W$ is specified the scalar potential
is also fixed. Of particular importance is the
scalar potential for the Higgs
fields since it controls the electroweak
symmetry breaking.
Using (\ref{scalarpot}) and (\ref{mssmpot})
one derives the Higgs potential for the
two neutral Higgs fields $h_d^0, h_u^0$
by setting all other scalars to zero\footnote{Note that the scalars
can only be set to zero in the potential $V$
but not in the superpotential $W$ since the computation of
the potential requires taking appropriate derivatives of $W$.}
\beq\label{HiggsV}
V(h_d^0, h_u^0) =\ |\mu|^2 \left(|h_d^0|^2 + |h_u^0|^2 \right)
+\frac{1}{8} \left( g_1^2 + g_2^2 \right)
\left( |h_u^0|^2 - |h_d^0|^2 \right)^2\ .
\eeq
The coupling
of the terms quartic in the Higgs fields
is not an independent parameter but instead
determined by
the gauge couplings $g_1$ of $U(1)_Y$
and $g_2$ of $SU(2)$.
Thus it seems that the number of parameters
is reduced.
However, now there are two possible
vacuum expectation values
$\langle h_u^0\rangle, \langle h_d^0\rangle$
-- one more than in the Standard Model.
\bigskip
\noindent
{\bf Exercise:} {\it Derive (\ref{HiggsV})
from (\ref{scalarpot}) and (\ref{mssmpot}).}
\bigskip
In the last section we learned that
the potential of any supersymmetric theory
is positive semi-definite
and the Higgs potential of eq.~(\ref{HiggsV})
is no exception as can be seen explicitly:
$|\mu|^2$ cannot be
chosen negative.
Thus the minimum
of $V$ necessarily sits at
$\langle h_u^0 \rangle =
\langle h_d^0 \rangle =0$
which corresponds to a vacuum with
unbroken $SU(2)\times U(1)$.
Therefore, the supersymmetric version
of the Standard Model as it is defined so far
-- the spectrum of table~2 with interactions
specified by the Lagrangian
(\ref{cvlagr}) with the $W$ of
(\ref{mssmpot}) --
cannot accommodate a vacuum with
spontaneously broken electroweak symmetry.
A second phenomenological problem is the
presence of all the new supersymmetric
states which have the same mass as their
superpartners but are not observed in nature.
As we said before, supersymmetry itself
necessarily has to
appear in its broken phase
and as we will see electroweak symmetry breaking
is closely tied to the breakdown of supersymmetry.
Before we close this section let us note
that in addition to the couplings
of (\ref{mssmpot})
gauge and Lorentz invariance
also allows terms in $W$ which are
of the form
\beq \label{nono}
h_u \tilde{l}_L,\quad \tilde{l}_L \tilde{q}_L \tilde{d}_R, \quad
\tilde{d}_R \tilde{d}_R \tilde{u}_R,\quad
\tilde{l}_L \tilde{l}_L \tilde{e}_R\ .
\eeq
These terms violate baryon or lepton number conservation and thus easily lead
to unacceptable physical consequences
(for example the proton could become unstable
\cite{Wein}).
Such couplings can be excluded by imposing
a discrete R-parity \cite{Fa}.
Particles of the Standard
Model (including both Higgs doublets)
are assigned R-charge 1 while
all new supersymmetric particles
are assigned R-charge $-1$.
This eliminates all terms of (\ref{nono})
while the superpotential of (\ref{mssmpot})
is left invariant.
An immediate consequence of this additional
symmetry is the fact that
the lightest supersymmetric
particle (often denoted by the `LSP')
is necessarily stable.
However, one should stress that R-parity is not
a phenomenological necessity.
Viable models with broken R-parity can be
constructed and they also can have some
phenomenological appeal \cite{brokenR}.
\bigskip
\noindent
{\bf Exercise:}
{\it Check the gauge and Lorentz invariance
for each term in (\ref{nono})
and compute their R-charge.}
\bigskip
\subsection{Spontaneous breaking of supersymmetry} \label{spon}
In the previous section we learned that
in the simplest supersymmetric extension
of the Standard Model the electroweak symmetry is
unbroken.
However, so far we have constructed a manifestly
supersymmetric extension but from the mass degeneracy
of each multiplet (\ref{mass})
it is already clear that supersymmetry
cannot be an exact symmetry in nature
but has to be either
spontaneously or explicitly broken.
Therefore we now turn to the question
of spontaneous supersymmetry breaking
and return to the electroweak symmetry breaking
afterwards.
Let us first recall the order parameter
for supersymmetry breaking.
Multiplying the anticommutator
$
\{ Q_{\alpha}, \overline{Q}_{\dot{\alpha }} \} = 2 \sigma_{\alpha \dot{\alpha }}^m P_m
$ of the supersymmetry algebra (\ref{N1})
with $\bs^n$ and using
$ Tr(\sigma^m \bs^n) = -2\eta^{mn}$
results in
$$
\bs^{n \alpha \dot{\alpha } } \{ Q_{\alpha}, \overline{Q}_{\dot{\alpha }} \} = -4P^n\ .
$$
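The trace identity invoked in this step can be checked numerically. The sketch below assumes the Wess--Bagger conventions of the appendix, $\sigma^m=(-\mathbf{1},\sigma^i)$, $\bar\sigma^m=(-\mathbf{1},-\sigma^i)$ and $\eta^{mn}={\rm diag}(-1,+1,+1,+1)$:

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
one = np.eye(2, dtype=complex)

# Wess-Bagger conventions (an assumption spelled out above)
sigma = [-one, s1, s2, s3]
sigmabar = [-one, -s1, -s2, -s3]
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# check Tr(sigma^m sigmabar^n) = -2 eta^{mn} for all m, n
for m in range(4):
    for n in range(4):
        tr = np.trace(sigma[m] @ sigmabar[n])
        assert abs(tr - (-2 * eta[m, n])) < 1e-12
```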
Thus the Hamiltonian $H$ of a supersymmetric
theory is expressed as the `square'
of the supercharges
\beq \label{ham}
H= P_0 = \frac{1}{4} \left( Q_1 \overline{Q}_1 + \overline{Q}_1 Q_1 + Q_2 \overline{Q}_2 + \overline{Q}_2 Q_2
\right)\ .
\eeq
This implies that
$H$ is a positive semi-definite operator
on the Hilbert space
\beq
\langle \psi |H|\psi \rangle \; \geq 0 \;
\quad \forall \psi\ .
\eeq
Supersymmetry is unbroken
if the supercharges annihilate the vacuum
$Q_\alpha|0\rangle = \overline{Q}_{\dot\alpha}
|0\rangle = 0$. From (\ref{ham}) we learn
that $H$ also annihilates a
supersymmetric vacuum, $H|0\rangle = 0$.
This in turn implies that
the scalar potential $V$ of
a supersymmetric field theory
which has a supersymmetric ground state
has to vanish at its minimum
\beq\label{Vmin}
\langle H\rangle = 0\quad \Rightarrow\quad
\langle V \rangle \equiv
V(A, \bA) |_{\rm min} = 0\ .
\eeq
The general form of the scalar potential
$V= F^i \bF^i + \frac{1}{2} D^a D^a$
was given in (\ref{scalarpot}).
Since $V$ is
positive semi-definite one immediately
concludes from (\ref{Vmin})
that in a supersymmetric ground state
\beq
\langle F^i\rangle \equiv F^i |_{\rm min} = 0 \quad {\rm and} \quad
\langle D^a \rangle \equiv D^a |_{\rm min} = 0
\eeq
has to hold.
The converse is also true
\beq
\langle F^i\rangle \neq 0 \quad {\rm or}\quad
\langle D^a \rangle \neq 0\quad
\Rightarrow\quad
V|_{min} >0\quad \Rightarrow\quad
Q_{\alpha} |0\rangle\ \neq 0\
\eeq
and supersymmetry is spontaneously broken.
Thus
$\langle F^i\rangle$ and $\langle D^a\rangle$
are the order parameters of supersymmetry breaking
in that non-vanishing $F$- or $D$-terms signal
spontaneous supersymmetry breaking.
Specific potentials which do lead
to non-vanishing $D$- or $F$-terms
have been constructed \cite{FIl,ORa}.
For example,
the O'Raifeartaigh model \cite{ORa} has
three chiral superfields $A_0, A_1, A_2$
and the following superpotential:
\beq\label{WOR}
W = \lambda A_0 + m A_1 A_2 + g A_0 A_1^2\ , \qquad m^2>2\lambda g\ .
\eeq
By minimizing $V$ it can be shown that
$F_0|_{min} \neq 0$ and therefore supersymmetry is broken.
Furthermore the mass spectrum of the
6 real bosons and the 3 Weyl fermions is
found to be
\bea\label{mspectrum}
{\rm Bosons:} \quad && (0,0,m^2,m^2, m^2 \pm 2\lambda g) \\ \nn
{\rm Fermions:} \quad && (0,m^2,m^2)\ .
\eea
Thus the mass degeneracy between bosons
and fermions is lifted but nevertheless
a `mass sum rule' still holds
\beq \label{bosferm}
\sum_{\rm bosons} M_{b}^2
= 2 \sum_{\rm fermions} M_{f}^2\ .
\eeq
\bigskip
\noindent
{\bf Exercise:}
{\it Minimize $V$ using
(\ref{scalarpot}), (\ref{WOR})
and compute $F_i|_{\rm min}$.
Verify the mass spectrum (\ref{mspectrum})
and the sum rule (\ref{bosferm}).}
\bigskip
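The sum rule (\ref{bosferm}) is easily checked numerically against the spectrum (\ref{mspectrum}); the couplings below are arbitrary sample values satisfying $m^2>2\lambda g$.

```python
# Check of the O'Raifeartaigh mass sum rule (real bosons on the left):
#   sum_bosons M_b^2 = 2 * sum_fermions M_f^2
m2, lam, g = 1.7, 0.3, 0.5            # sample values with m^2 > 2*lam*g

bosons   = [0.0, 0.0, m2, m2, m2 + 2*lam*g, m2 - 2*lam*g]
fermions = [0.0, m2, m2]

assert abs(sum(bosons) - 2*sum(fermions)) < 1e-12   # both equal 4*m^2
```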
Unfortunately, the sum rule
(\ref{bosferm}) is not a coincidence but
a special case of a general sum rule
which
holds in any theory with spontaneously broken
supersymmetry.
Let us therefore proceed and derive
this sum rule.
In general the mass matrix of the bosons
has the following form
\bea
V(A,\bA )|_{\rm mass\ terms}\ =\ \mu^2_{i\bar{j}}
A^i \bA^j + \mu^2_{ij} A^i A^j
+ \mu^2_{\bar{i} \bar{j}} \bA^i \bA^j
\ =\ ( \bA \quad A ) M_0^2
\left( \begin{array}{c} A \\ \bA \end{array} \right)
\eea
where
\beq\label{mmatrix}
M_0^2 = \left( \begin{array}{c c} \frac{1}{2} \mu^2_{i\bar{j}} &
\mu^2_{ij} \\
\mu^2_{\bar{i} \bar{j}} &
\frac{1}{2} \mu^2_{i\bar{j}}
\end{array} \right)\ .
\eeq
The entries in the mass matrix are
determined by
the appropriate derivatives of the potential
evaluated at its minimum
\beq
\mu^2_{i\bar{j}} = V_{i\bar{j}}|_{min}\ , \qquad
\mu^2_{ij} = V_{ij}|_{min}\ , \qquad
\mu^2_{\bar{i} \bar{j}} = V_{\bar{i} \bar{j} }|_{min}\ ,
\eeq
where $V_{i\bar{j}}\equiv {\partial^2 V\over
\partial A^i \partial \bar{A}^{\bar{j}}}$ etc.
Using (\ref{scalarpot}) one derives
\bea
V &=& W_i \bW_{i} + \frac{1}{2} D^a D^a\ , \nn\\
V_j &=& W_{ij} \bW_{i} + D^a_j D^a\ , \\ \nn
V_{j \bar{k}} &=& W_{ij} \bW_{ik} + D^a_j D^a_{\bar{k}}
+ D^a D^a_{j \bar{k}}\ ,
\eea
where again the indices $i,j,\ldots$
denote derivatives with respect to $A^i, A^j, \ldots$.
Inserted into (\ref{mmatrix})
one obtains for the trace of the mass matrix:
\beq\label{bosonictrace}
\tr M_0^2 = \tr \mu^2_{i\bar{j}}
=\tr V_{i \bar{j} }|_{min}
= \tr(W_{ij} \bW_{{i}{k}}
+ D^a_j D^a_{\bar{k}}
+ D^a D^a_{j \bar{k}})|_{min}\ .
\eeq
For the fermion masses the relevant pieces of the Lagrangian (\ref{cvlagr}) are
\beq
\cL = i \sqrt{2} g \; (\, \bA^i T_{ij}^a \psi^j \lambda^a
- \bl^a T_{ij}^a A^i \bpsi^j \,)
- \frac{1}{2} W_{ij} \psi^i \psi^j
-\frac{1}{2} \bW_{ij} \bpsi^i \bpsi^j + \ldots\ .
\eeq
This can be rewritten as
\beq
\cL =
-\frac{1}{2} ( \psi^i\ \lambda^a) M_{1/2}
\left(\begin{array}{c} \psi^j \\ \lambda^b \end{array} \right)
\ + {\rm h.c.} \ + \ldots
\eeq
where
\beq \label{fermmass}
M_{1/2} = \left( \begin{array}{cc}
W_{ij} & -i\sqrt{2} g \bA^i T_{ij}^{a} \\
-i\sqrt{2} g \bA^i T_{ij}^b & 0 \end{array} \right)
= \left( \begin{array}{cc}
W_{ij} & i\sqrt{2} D_j^{a} \\
i\sqrt{2} D_i^b & 0 \end{array} \right)\ .
\eeq
Thus, we obtain
\beq\label{fermionictrace}
\tr M_{1/2} \bar{M}_{1/2}
= \tr (W_{ik} \bW_{{k}{j}}
+ 4 D_i^a D_{\bar{j}}^a )|_{min}\ .
\eeq
Already at this point we learn from
(\ref{bosonictrace}) and (\ref{fermionictrace})
that
for $D^a|_{min} =0$
we have a sum rule
\beq
\sum_{bosons} M_b^2\ =\ 2 \sum_{fermions} M_f^2
\eeq
where in the sum real bosons are counted.
For $D^a|_{min} \neq 0$ the gauge symmetry
is also necessarily broken and some of the gauge
bosons become massive.
{}From (\ref{covD})-(\ref{cvlagrphys})
one obtains
the mass matrix of the gauge bosons
\beq\label{vectormass}
M_{1}^2 = 2 g^2 \bA^j T^a_{jl} T^b_{lk} A^k
= 2 D^a_l D^b_{\bar l}\ .
\eeq
Combining (\ref{bosonictrace}),
(\ref{fermionictrace})
and (\ref{vectormass})
one arrives at the mass sum rule \cite{Msf}:
\beq \label{str}
{\rm Str} M^2 \equiv \sum_{J=0}^1 (-)^{2J} (2J+1) \tr M_J^2 = -2 g (\tr T^a) D^a\ ,
\eeq
where $J$ is the spin of the particles.
The right hand side of (\ref{str})
vanishes for any non-Abelian factor
in the gauge group
while for $U(1)$ factors
it is proportional to the
sum of the $U(1)$ charges $\sum Q_{U(1)}$.
Whenever this sum is non-vanishing
the theory has a $U(1)$ trace-anomaly.
(In the supersymmetric Standard Model this
trace-anomaly vanishes.)
Finally, by repeating the steps of this section
one can show that (\ref{str}) holds over all field space
and not only at the minimum of $V$. This will play
a r\^ole in deriving the soft supersymmetry breaking terms.
\bigskip
\noindent
{\bf Exercise:}
{\it Verify (\ref{vectormass}) and
(\ref{str}).} \\
{\bf Exercise:}
{\it Compute $\sum Q_{U(1)_Y}$ in the supersymmetric
Standard Model.}\\
{\bf Exercise:}
{\it Show that (\ref{str}) holds over all field space
and not only at the minimum of $V$.}
\bigskip
The sum rule (\ref{str})
is problematic for the supersymmetric Standard Model.
Since none of the supersymmetric partners
has been observed yet, they must be heavier
than the particles of the Standard Model.
Close inspection of (\ref{str}) shows that
this cannot be arranged within a spontaneously
broken supersymmetric Standard Model.
An additional problem is the presence of
a massless Goldstone fermion.
Goldstone's theorem implies that any spontaneously
broken global symmetry leads to a massless state
in the spectrum. This also holds for
supersymmetry where the broken generator is a Weyl spinor and thus there is an
additional massless Goldstone fermion.
The presence of this state can be seen
explicitly from the condition that at the minimum
of the potential one has
\beq\label{gold}
V_j = W_{ij} \bW_{{i}} + D^a_j D^a = 0 \ .
\eeq
Let us consider for simplicity the case
that supersymmetry is broken by a non-vanishing
F-term $\langle F_i\rangle = - \bW_i|_{\rm min} \neq 0$
while $\langle D^a\rangle =0$.\footnote{The general
case is discussed in ref.\ \cite{Msf}.}
{}From (\ref{gold}) one learns immediately
that now $ W_{ij}|_{\rm min}$ has to have a zero
eigenvalue. Using (\ref{fermionictrace})
this implies that also the mass matrix of
the fermions has to have a zero
eigenvalue which is the Goldstone fermion.
To summarize, the lesson of this section
is that spontaneously broken supersymmetry also runs
into phenomenological difficulties.
The only way out is an explicit breaking
of (global) supersymmetry.
\section{Extensions of the
Standard Model with Softly Broken Supersymmetry}
\subsection{The Hierarchy and Naturalness Problem} \label{hn1}
Before we continue in our endeavor to construct
a phenomenologically viable extension
of the Standard Model let us briefly review
what is called the hierarchy and
naturalness problem in the
Standard Model.\footnote{The discussion of this section
follows ref.\ \cite{RBg}.}
Consider
the following (non-supersymmetric)
Lagrangian of a complex scalar $A$
and a Weyl fermion $\chi$
\begin{eqnarray}
\cL = & - & \partial_m \bA \partial^m A -i \bchi \bs^m
\partial_m \chi - \frac{1}{2} \,m_f\, (\chi\chi + \bchi\bchi)-
m^2_b\, \bA A \nonumber \\
& - & \;Y\,(A\chi\chi + \bA \bchi\bchi) \;-\; \lambda \, (\bA A)^2\ .
\label{toy model}
\end{eqnarray}
{}From (\ref{Lafter}) we learn that
this Lagrangian is supersymmetric if $m_f=m_b$ and $Y^2=\lambda$ but let us not consider
this choice of parameters at first.
$\cL$ has a chiral symmetry for $m_f=0$ given by
\beq
A \rightarrow e^{-2 i \alpha}\, A\ , \qquad
\chi \rightarrow e^{i \alpha}\, \chi \ .
\eeq
This symmetry prohibits the generation
of a fermion mass by quantum corrections.
For $m_f\neq 0$ the fermion
mass does receive radiative corrections, but all possible diagrams
have to contain a mass insertion as can be seen
from the one-loop diagram shown
in Fig.~\ref{Fmass}. Since the propagator of
the boson (upper dashed line in the diagram)
is $\sim \frac{1}{k^2} $ while
the propagator of the fermion (lower solid line) is
$\sim \frac{1}{k} $ one obtains a
mass correction
which is proportional to $m_f$
\begin{figure}[t]
\hspace*{1.1truein}
\psfig{figure=Fmass.eps,height=1.5in}
\caption{The one-loop correction to the fermion mass.}
\label{Fmass}
\end{figure}
\beq
\delta m_f \sim Y^2 m_f \ln {m_f^2\over \Lambda^2}\ ,
\eeq
where $\Lambda$ is the ultraviolet cutoff.
Hence the mass of a chiral fermion
does not receive large radiative corrections
if the bare mass is small.
For that reason 't~Hooft calls
fermion masses ``natural'' --
an extra symmetry appears
when the mass is set to zero
which in turn leads to a
protection of the fermion mass
by an approximate
chiral symmetry \cite{thooft}.
\begin{figure}[t]
\hspace*{0.8truein}
\psfig{figure=Bmasss.eps,height=2.5in}
\caption{The one-loop corrections to the boson mass.}
\label{Bmass}
\end{figure}
This state of affairs is different for scalar fields.
The diagrams giving the
one-loop corrections to $m_b$
are shown in Fig.~\ref{Bmass}.
Both diagrams are quadratically divergent but
they have an opposite sign
because in the second diagram fermions
are running in the loop. One finds
\beq \label{bmasscorrection}
\delta m_b^2 \sim (\lambda - Y^2)\, \Lambda^2\ ,
\eeq
Thus, in non-supersymmetric theories
scalar fields receive
large mass corrections
(even if the bare mass is set to zero)
and small scalar masses are
``unnatural'' \cite{thooft,hier,tech}.
They can only be arranged by delicately
fine-tuning the bare mass and the
couplings $\lambda, Y$.
This problem becomes apparent in extensions
of the Standard Model which apart from
the weak scale $M_Z$
do have a second larger scale, say
$M_{\rm GUT}$ with $M_{\rm GUT}\gg M_{Z}$
\cite{hier,tech}.
In such theories the mass of the scalar boson
is naturally of the order of the largest
mass parameter in the theory.
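To put numbers to this, the following sketch contrasts the logarithmic cutoff sensitivity of the fermion mass with the quadratic sensitivity of the scalar mass; the cutoff of order $10^{15}$ GeV and the order-one coupling $Y$ are purely illustrative sample values.

```python
import math

# Orders of magnitude for the naturalness problem (GeV units; all numbers
# purely illustrative): the fermion mass correction grows only
# logarithmically with the cutoff, the scalar mass correction quadratically.
Y, M_Z, Lam = 1.0, 91.2, 1.0e15       # order-one coupling, weak scale, cutoff

delta_mf_over_mf = Y**2 * abs(math.log(M_Z**2 / Lam**2))  # O(60): harmless
delta_mb2 = Y**2 * Lam**2                                 # O(10^30) GeV^2

# Keeping m_b ~ M_Z then requires tuning (lambda - Y^2) down to the level of
tuning = (M_Z / Lam)**2
assert tuning < 1e-25                 # ~ 10^-26: a delicate cancellation
```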
This discussion applies
to the Higgs boson of the Standard Model and
it is difficult to understand the smallness
of $M_Z$ and how it can be kept stable
against quantum corrections
whenever the Standard Model
is the low energy limit of a theory
with a large mass scale.
A concrete example of this problem
occurs in Grand Unified Theories (GUTs) \cite{ross}
where the Standard Model is embedded into a single
simple gauge group $G_{\rm GUT}$
(e.g.\ $G_{\rm GUT}= SU(5)$).
The GUT gauge symmetry is broken by
a Higgs mechanism to the
gauge group of the Standard Model and
one has the following pattern of symmetry
breaking
\beq
G_{\rm GUT} \stackrel{M_{\rm GUT}}{\rightarrow} SU(3) \times SU(2) \times U(1) \stackrel{M_{Z}}{\rightarrow}
SU(3) \times U(1)_{\rm em}\ ,
\eeq
where $M_{\rm GUT} \approx 10^{15}$ GeV
and thus $M_{\rm GUT} \gg M_Z$.
There are basically two different
suggestions for `solving' this problem.
The first class of models
assumes that the Higgs boson of the Standard Model is not
an elementary scalar but
rather a condensate of strongly interacting
`techni'-fermions \cite{tech,FS}.
These theories are called ``technicolor'' theories
but in all such theories it is difficult to
arrange agreement with the electroweak
precision measurements
of this decade \cite{RPes}.
The second class of models are supersymmetric
theories where the Higgs boson is elementary
but the quadratic divergence in
(\ref{bmasscorrection}) exactly cancels
due to the supersymmetric relation
$Y^2= \lambda$.
{\setcounter{footnote}{0}}
The cancellation of quadratic divergences
is a general
feature of supersymmetric quantum field theories
and a consequence of a more general
non-renorm\-alization theorem:
The superpotential $W$ of a supersymmetric
quantum field theory is not renormalized
in perturbation theory \cite{gsr} and
all quantum corrections solely
arise from the gauge coupling
and wavefunction renormalization.\footnote{This
non-renormalization theorem
only holds in perturbation theory
but non-perturbative corrections do appear
\cite{ads}.}
The non-renormalization theorem,
or in other words the `taming' of the quantum
corrections,
is one of the attractive features of
supersymmetric quantum field theories.
It leads (among other things) to the possibility
of stabilizing the weak scale $M_Z$.
In that sense supersymmetry solves the
naturalness problem in that it allows
for a small and stable weak scale without
fine-tuning. However, supersymmetry
does not solve the hierarchy problem
in that it does not explain why the weak scale
is small in the first place.
\subsection{Soft Breaking of Supersymmetry} \label{soft}
As we have seen in section \ref{spon}
models with spontaneously broken supersymmetry are phenomenologically
not acceptable. For example the mass formula (\ref{str}),
generally valid in such cases,
prevents all supersymmetric particles from acquiring masses large enough to make
them invisible in present experiments. One way to overcome these difficulties
is to allow explicit supersymmetry breaking.
In the last section we observed
that the absence of quadratic divergences
in supersymmetric theories
stabilizes the Higgs mass and
thus the weak scale.
This `attractive' feature of supersymmetric
field theories can be maintained in
theories with explicitly broken
supersymmetry if the
supersymmetry breaking terms are
of a particular form.
Such terms which
break supersymmetry explicitly
and generate no quadratic divergences are called
`soft breaking terms'.
One possibility to identify the soft breaking terms is to
investigate the
divergence structure of the effective potential \cite{GG}.
Consider a
quantum field theory of a scalar field $\phi$ in the presence of an
external source $J$. The generating functional for the Green's
functions is given by
\begin{equation}
e^{-iE[J]}\ =\ \int{\cal D}\phi\,
\mbox{exp}\left[i\int d^4x
({\cal L}[\phi(x)]+J(x)\phi(x))\right] \mbox{ }.
\end{equation}
The effective action $\Gamma(\phi_{cl})$ is defined
by the Legendre transformation
\begin{equation}
\Gamma(\phi_{cl})=-E[J]-\int d^4xJ(x)\phi_{cl}(x) \mbox{ },
\end{equation}
where $\phi_{cl} = - \frac{\delta E[J]}{\delta J(x)}$.
$\Gamma(\phi_{cl})$ can be expanded
in powers of momentum; in position
space this expansion takes the form
\begin{equation}
\Gamma(\phi_{cl})=\int d^4x[-V_{eff}(\phi_{cl})
-\frac{1}{2}(\partial_m
\phi_{cl})(\partial^m\phi_{cl})Z(\phi_{cl})+\dots\mbox{ }] \mbox{ }.
\end{equation}
The term without derivatives is called the effective
potential $V_{eff}(\phi_{cl})$.
It can be calculated perturbatively
as an expansion in $\hbar$:
\begin{equation}
V_{eff}(\phi_{cl})=V^{(0)}(\phi_{cl})+\hbar V^{(1)}(\phi_{cl}) + \dots
\end{equation}
where $V^{(0)}(\phi_{cl})$ is the tree level and $V^{(1)}(\phi_{cl})$ the
one-loop contribution.
In a theory with scalars, fermions and vector bosons
the
one-loop contribution takes the form \cite{coleman}
\beq \label{A}
V^{(1)}\sim \int d^4k\,\mbox{Str} \ln(k^2+M^2)
= \sum_J(-1)^{2J}(2J+1)\, \mbox{Tr}
\int d^4k\, \ln(k^2+M_J^2)
\eeq
where $M_J^2$ is the matrix of second derivatives
of ${\cal L}|_{k=0}$ at zero momentum for scalars ($J=0$),
fermions ($J=1/2$) and vector bosons
($J=1$).\footnote{$M_J^2$ is not necessarily
evaluated at the
minimum of $V_{eff}$. Rather it is a function of
the scalar fields in the theory. The mass matrix
is obtained from $M_J^2$
by inserting the vacuum expectation
values of the scalar fields.}
The UV divergences of (\ref{A}) can be displayed
by expanding the integrand
in powers of large $k$. This leads to
\beq \label{B}
V^{(1)} \sim \mbox{Str}{\bf 1} \int\frac{d^4k}{(2\pi)^4}\ln k^2
+ \mbox{Str}M^2\int\frac{d^4k}{(2\pi)^4} k^{-2} + \ldots\ .
\eeq
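The expansion leading to (\ref{B}) is just the large-$k$ behaviour $\ln(k^2+M^2)=\ln k^2+M^2/k^2+{\cal O}(k^{-4})$, which a one-line numerical check confirms (sample values):

```python
import math

# ln(k^2 + M^2) = ln k^2 + M^2/k^2 + O(k^-4)  for  k^2 >> M^2
M2, k2 = 2.0, 2500.0
lhs = math.log(k2 + M2)
rhs = math.log(k2) + M2/k2
assert abs(lhs - rhs) < (M2/k2)**2    # remainder is O((M^2/k^2)^2)
```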
If a UV-cutoff $\Lambda$ is introduced
the first term in (\ref{B}) is
${\cal O}(\Lambda^4\ln \Lambda)$.
Its coefficient Str${\bf 1}=n_B-n_F$
vanishes in theories with a supersymmetric spectrum
of particles (cf.\ (\ref{nBF})).
The second term in (\ref{B}) is ${\cal O}(\Lambda^2)$ and
determines the presence of quadratic divergences at one-loop level.
Therefore quadratic divergences are absent
if
\begin{equation} \label{C}\mbox{Str}M^2=0\ .
\end{equation}
More precisely, one can also
tolerate $\mbox{Str}M^2= const.$~ since
this would correspond to a shift of the zero
point energy which without coupling to gravity
is undetermined.
In theories with exact or spontaneously
broken supersymmetry (\ref{C}) is
fulfilled whenever the trace-anomaly vanishes
as we learned in (\ref{str}).\footnote{Indeed,
theories with a non-vanishing D-term
have been shown to produce a quadratic
divergence at one-loop
\cite{polchinski}. }
The soft supersymmetry breaking terms are
defined as those non-supersym\-metric
terms that can be added to a supersymmetric
Lagrangian
without spoiling $\mbox{Str}M^2= const$.
One finds the following possibilities \cite{GG}
\begin{itemize}
\item
Holomorphic terms of the scalars
proportional to $A^2$, $A^3$
and the
corresponding complex conjugates.\footnote{Higher
powers of $A$ are forbidden since they generate
quadratic divergences at the 2-loop level \cite{GG}.}
\item
Mass terms for the scalars proportional
to $\bar{A} A$.\\
(They only contribute a constant, field-independent
piece in $\mbox{Str}M^2$).
\item
Gaugino mass terms.\\
(A generic mass matrix of the fermions takes the form
\beq\label{dmass}
M_{1/2}=\left(\begin{array}{cc}W_{ij} + \delta W_{ij} & i\sqrt{2}D_i^b
+\delta D_i^b \\
i\sqrt{2}D_j^a+\delta D_j^a & \delta\tilde m_{ab} \end{array}
\right) \mbox{ },
\eeq
where according to (\ref{fermmass})
\[ \left(\begin{array}{cc}W_{ij} & i\sqrt{2}D_i^b \\
i\sqrt{2}D_j^a & 0 \end{array} \right) \mbox{ } \]
is the supersymmetric part of $M_{1/2}$.
Computing the supertrace of (\ref{dmass})
reveals that $\mbox{Str}M^2 = const.$
requires $\delta W=0=\delta D$ while
$\delta \tilde m$ can be arbitrary.)
\end{itemize}
Thus the most general Lagrangian with softly
broken supersymmetry takes the form
\begin{equation} \label{Lsusy}
{\cal L}={\cal L}_{\rm susy}+{\cal L}_{\rm soft} \mbox{ },
\end{equation}
where ${\cal L}_{\rm susy}$ is of the form
(\ref{cvlagrphys}) and
\begin{eqnarray} \label{Lsoft}
{\cal L}_{\rm soft}&=&
- m^2_{ij}A^i\bar{A}^j -
(b_{ij}A^iA^j + a_{ijk}A^iA^jA^k+\mbox{ h.c. })
\nonumber\\
&& {}-\frac{1}{2}\, \tilde{m}_{ab}\lambda^a\lambda^b +\mbox{ h.c. }\mbox{ }.
\end{eqnarray}
$m^2_{ij}$ and $b_{ij}$ are mass matrices for the scalars, $a_{ijk}$ are
trilinear couplings (often called `A-terms')
and $\tilde{m}_{ab}$ is a mass
matrix for the gauginos.
The next step will be to investigate
if the more general Lagrangian
(\ref{Lsusy}) can be used to construct viable phenomenology.
Before we do so let us mention
that there is an alternative
way to motivate the relevance of
softly broken supersymmetric theories.
Ultimately one has to couple the supersymmetric
Standard Model to gravity.
This requires the promotion of global
supersymmetry to a local symmetry, that is the
parameter of the supersymmetry transformation
$\xi_{\alpha}=\xi_{\alpha}(x)$
is no longer constant but depends on
the space-time coordinates $x$ \cite{IN,wb}.
This demands the presence of an additional
massless
fermionic gauge field (the gravitino)
$\Psi_{m\alpha}$ with spin 3/2
and an inhomogeneous transformation law
\beq
\delta_{\xi}\Psi_{m\alpha}=-\partial_m\xi_{\alpha}+\ldots\ .
\eeq
(The necessity of this transformation law can be
seen for example from the supersymmetry
transformation of $\partial_m A$
which now has an extra contribution
$\partial_m\delta_{\xi}A\propto\partial_m\xi\psi=\xi\partial_m\psi
+(\partial_m\xi)\psi \mbox{ }$.)
Together with the metric $g_{mn}$
and 6 auxiliary fields $b_m,M,\bar M$
the gravitino $\Psi_{m\alpha}$
forms the supergravity multiplet
$(g_{mn},\Psi_{m\alpha},b_m,M,\bar M) \mbox{ }$.
The potential for the scalar fields
is modified in the presence of supergravity
and found to be \cite{cremmer}
\begin{equation}
V(A,\bar{A})=e^{\kappa^2A\bar{A}}\Big[(D_iW)(\bar{D}_{\bar i}\bar{W})
-3\kappa^2|W|^2\Big]
+\frac12 D^a D^a\ ,
\end{equation}
where
\beq
\kappa^2=\frac{8\pi}{M_{Pl}^2}\ , \qquad
D_iW=\frac{\partial W}{\partial A^i}+\kappa^2
\bar A^iW\ .
\eeq
The limit $\kappa^2\rightarrow 0$
corresponds to turning off gravity
and in this limit one obtains indeed
$V\rightarrow \frac{\partial W}{\partial A^i}
\frac{\partial \bar{W}}{\partial \bar{A^i}}
+\frac12 D^a D^a$
in accord with (\ref{scalarpot}).
Local supersymmetry
is spontaneously broken if
$D_i W|_{min}\not=0$
for some $i$.
This can be achieved by introducing
a hidden sector which only couples
via non-renormalizable interactions
to the observable sector
of the supersymmetric Standard Model
and which has a superpotential
$W_{hid}(\phi)$ suitably chosen
to ensure
$D_\phi W|_{min}\not=0$ \cite{HLW,RNi}.
In this case the
gravitino becomes massive through
a supersymmetric Higgs effect \cite{cremmer}.
In the limit
$\kappa^2\rightarrow 0$
with the gravitino mass $m_{3/2}$ kept fixed
the Lagrangian for the fields
in the observable sector
looks precisely like eqs.\
(\ref{Lsusy}), (\ref{Lsoft}) \cite{HLW,RNi}.
Thus the spontaneous breakdown of supergravity in a hidden sector
manifests itself as explicit but soft breakdown of global supersymmetry in
the low energy limit of the observable sector.
Finally, a variant of this mechanism
is to break supersymmetry dynamically
(i.e.\ non-perturbatively) in an additional gauge sector
with some asymptotically free gauge theory \cite{ads,dns}.
In this case the supersymmetry breaking is communicated
to the observable sector by renormalizable interactions
but as in the previous case the breaking appears
in the observable sector as explicit but soft \cite{RDn,Rdynamical}.
\subsection{The
Supersymmetric Standard Model with Softly Broken
Supersymmetry}
In the previous section we recalled
the most general
Lagrangian of a softly broken supersymmetric
gauge theory in eqs.~(\ref{Lsusy}) and
(\ref{Lsoft}).
For ${\cal L}_{\rm susy}$ we continue to
take (\ref{cvlagrphys}) together with
the superpotential specified in
(\ref{mssmpot}).
For ${\cal L}_{\rm soft}$
only gauge invariance and R-parity are imposed.
This leads to the following
possible soft terms \cite{RNi,RHK,RHa,RZw,RBg,RDn}
\begin{eqnarray} \label{softSM}
{\cal L}_{\rm soft}&=&{}
-\left((a_u)_{IJ}h_u\tilde{q}_L^I\tilde{u}_R^J
+(a_d)_{IJ}h_d\tilde{q}_L^I\tilde{d}_R^J
+(a_e)_{IJ}h_d \tilde{l}_L^I\tilde{e}_R^J
+ b h_uh_d + \mbox{ h.c. }\right) \nonumber\\
&&{} -\sum_{\mbox{\footnotesize all scalars}} m_{ij}^2A^i\bar{A}^j-
(\frac{1}{2}\sum_{(a)=1}^3
\tilde{m}_{(a)}(\lambda\lambda)_{(a)}+\mbox{ h.c. }) \mbox{ }.
\end{eqnarray}
Obviously a huge number of new
parameters is introduced via
${\cal L}_{\rm soft}$.
The parameters of ${\cal L}_{\rm susy}$
are the Yukawa couplings $Y$ and the
parameter $\mu$ in the Higgs potential.
The Yukawa couplings are determined experimentally
already in the non-supersymmetric
Standard Model. In the softly broken
supersymmetric Standard Model
the parameter space is enlarged
by
\beq\label{softpara}
\left(\mu,(a_u)_{IJ}, (a_d)_{IJ}, (a_e)_{IJ}, b,
m^2_{ij}, \tilde{m}_{(a)}\right) \ .
\eeq
Not all of these parameters can be arbitrary;
quite a number of them
are experimentally constrained.
Some of these constraints we will see
in the following sections.
Within this much larger parameter space
it is possible to overcome several of the
problems encountered in the supersymmetric Standard Model.
For example, the
supersymmetric particles can now easily be heavy
(due to the arbitrariness of the
mass terms $m^2_{ij}$)
and therefore out of reach of present experiments.
Furthermore, the Higgs potential is changed
and vacua with spontaneous
electroweak symmetry breaking can be arranged.
However, the soft breaking terms introduce their own set of difficulties.
For generic values of the parameters
(\ref{softpara}) the contribution
to flavor-changing neutral currents
is unacceptably large \cite{ElN,RFCNCP},
additional (and forbidden)
sources of CP-violation occur \cite{EFN,RCP} and
finally the absence of vacua
which break $U(1)_{\rm em}$ and/or $SU(3)$
is no longer automatic \cite{Color}.
It is beyond the scope of these lectures
to review all of these aspects in detail.
Let us therefore focus on a few selected
topics and refer the reader to the literature
for further details and discussions.
\subsection{Electroweak Symmetry Breaking}
In section~3.2 we noticed that for unbroken
or spontaneously broken supersymmetry
the electroweak symmetry remains intact in
the supersymmetric version of the Standard Model.
Let us now review the situation in the presence
of soft breaking terms \cite{HHG}.
The Higgs sector of the supersymmetric
Standard Model consists of two $SU(2)$-doublets
\[ h_u={h_u^+\choose h_u^0} \quad,\quad\quad h_d={h_d^0\choose h_d^-} \quad, \]
which carry eight real degrees of freedom, four of them neutral and four
charged. As in the Standard Model $SU(2)_L\times U(1)_Y$
will be broken to $U(1)_{\rm em}$
by non-vanishing VEVs of the neutral Higgs bosons $h_u^0$ and $h_d^0$.
For that purpose consider their potential
which can be derived from eqs.~(\ref{HiggsV}),
(\ref{softSM})
\begin{eqnarray} \label{higgspot}
V(h_u^0,h_d^0)= \hat{m}_u^2|h_u^0|^2
+\hat{m}_d^2|h_d^0|^2
-b(h_u^0h_d^0+\bar{h}_u^0\bar{h}_d^0)
+\frac{g_1^2+g_2^2}{8}
\Big(|h_u^0|^2-|h_d^0|^2\Big)^2,
\end{eqnarray}
where
\begin{eqnarray}
\hat{m}_u^2&=&m_u^2+|\mu|^2 \mbox{ },\nonumber\\
\hat{m}_d^2&=&m_d^2+|\mu|^2 \mbox{ }.
\end{eqnarray}
Notice that the coefficient of the $|h_{u,d}^0|^4$-term is, exactly as in
(\ref{HiggsV}), determined by the gauge couplings
and is not changed by the soft breaking terms.
This term is positive
so that the potential is bounded from below
for large values of $h_u^0,h_d^0$ as long as
$|h_u^0|\not=|h_d^0|$.
To secure this bound also in the direction
$|h_u^0|=|h_d^0|$ one has to impose the
following constraint on the parameter space
\begin{equation} \label{bb}
\hat{m}_u^2+\hat{m}_d^2 \ge 2|b| \mbox{ }.
\end{equation}
\bigskip
\noindent
{\bf Exercise:}
{\it Verify formula (\ref{higgspot}) using
eqs.~(\ref{HiggsV}),
(\ref{softSM}).}\\
{\bf Exercise:}
{\it Verify the condition (\ref{bb}).}
\bigskip
The existence of a minimum
of $V$ with broken gauge symmetry
requires that the
Hessian of $V(h_u^0,h_d^0)|_{h_u^0=h_d^0=0}$
\beq
\left( \begin{array}{cc} \hat{m}_u^2 & -b \\ -b & \hat{m}_d^2
\end{array}\right)
\eeq
has at least one negative eigenvalue.
Together with (\ref{bb})
this implies\footnote{This is the generalization
of the condition $\mu^2 <0$ in the Standard Model
to a Higgs sector with two Higgs doublets.}
\begin{equation} \label{sb}\hat{m}_u^2\hat{m}_d^2<b^2 \mbox{ }.
\end{equation}
So the soft terms have to satisfy
(\ref{bb}), (\ref{sb}) in order to
induce electroweak symmetry breaking
but in addition also the
masses of the $Z$- and $W$-bosons
have to come out correctly.
These masses are given by
\begin{eqnarray} \label{D}
M_Z^2&=&\frac{1}{4}(g_1^2+g_2^2)(v_u^2+v_d^2)=\frac{1}{2}(g_1^2+g_2^2)v^2
\nonumber \\
M_W^2&=&\frac{1}{4}g_2^2(v_u^2+v_d^2)=\frac{1}{2}g_2^2v^2 \mbox{ },
\end{eqnarray}
where
\begin{equation} \label{E}
\langle h_u^0\rangle =\frac{1}{\sqrt{2}}\,v_u=v\sin\beta,\qquad
\langle h_d^0\rangle =\frac{1}{\sqrt{2}}\,v_d=v\cos\beta\ .
\end{equation}
The electroweak symmetry breaking is parameterized
by the two Higgs vacuum expectation values
$v_u,v_d$ (which can be chosen real)
or equivalently $v$ and $\beta$.
As in the Standard Model $v$ has to be chosen
such that it reproduces the experimentally
measured $Z$- and $W$-masses.
$\beta$ on the other hand is an additional parameter
which arises as a consequence of the enlarged
(2 doublet) Higgs sector.
The Higgs expectation values are directly
related to the parameters of the Higgs potential
via the minimization conditions
\begin{eqnarray}\label{F}
\frac{\partial V}{\partial h_u^0}&=&2\hat{m}_u^2v_u-2bv_d
+\frac{g_1^2+g_2^2}{4}(v_u^2-v_d^2)v_u=0\ ,
\nonumber\\
\frac{\partial V}{\partial h_d^0}&=&2\hat{m}_d^2v_d-2bv_u
-\frac{g_1^2+g_2^2}{4}(v_u^2-v_d^2)v_d=0 \mbox{ }.
\end{eqnarray}
This in turn can be used to derive a more physical
relationship among the parameters.
Using (\ref{D}) and (\ref{E}) the minimization conditions (\ref{F}) can
be rewritten as
\begin{eqnarray} \label{MZcond}
b&=&\frac{1}{2}\sin 2\beta\, (\hat{m}_u^2+\hat{m}_d^2)\nonumber \\ \label{H}
M_Z^2&=&-2|\mu|^2+\frac{2}{1-\tan^2\beta}(m_u^2\tan^2\beta-m_d^2) \mbox{ }.
\end{eqnarray}
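As a sanity check of (\ref{MZcond}), the sketch below picks arbitrary sample soft parameters, solves the stationarity conditions of the potential (\ref{higgspot}) for the vacuum expectation values (normalized as in (\ref{E}), so that $M_Z^2=\frac12(g_1^2+g_2^2)(x^2+y^2)$ by (\ref{D})), and verifies both relations numerically; all numbers are illustrative.

```python
import math

# Numerical check of (MZcond): choose sample soft parameters, find the
# minimum of the neutral Higgs potential (higgspot) and verify
#   b     = (1/2) sin(2 beta) (mhat_u^2 + mhat_d^2)
#   M_Z^2 = -2|mu|^2 + 2 (m_u^2 tan^2 beta - m_d^2)/(1 - tan^2 beta)
G2  = 0.55                     # g_1^2 + g_2^2   (sample value)
mu2 = 500.0                    # |mu|^2
mhu2, mhd2 = 100.0, 20000.0    # mhat_u^2, mhat_d^2
b = 2000.0                     # mhu2*mhd2 < b^2  and  2b <= mhu2 + mhd2

# Stationarity of V in the vevs x = <h_u^0>, y = <h_d^0> gives, with
# t = x/y = tan(beta):   b t^2 - (mhu2 + mhd2) t + b = 0 .
s = mhu2 + mhd2
for t in ((s + math.sqrt(s*s - 4*b*b)) / (2*b),
          (s - math.sqrt(s*s - 4*b*b)) / (2*b)):
    D = 2*mhd2 - 2*b*t                    # equals (G2/2)(x^2 - y^2)
    y2 = 2*D / (G2 * (t*t - 1))
    if y2 <= 0:                           # unphysical root
        continue
    x2 = t*t * y2
    MZ2_direct = 0.5 * G2 * (x2 + y2)     # M_Z^2 = (G2/2)(x^2 + y^2)
    mu_s2, md_s2 = mhu2 - mu2, mhd2 - mu2          # m_u^2, m_d^2
    MZ2_formula = -2*mu2 + 2*(mu_s2*t*t - md_s2)/(1 - t*t)
    sin2beta = 2*t / (1 + t*t)
    assert abs(b - 0.5*sin2beta*s) < 1e-6 * b
    assert abs(MZ2_direct - MZ2_formula) < 1e-6 * abs(MZ2_direct)
```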
Finally, the full Higgs potential including
all eight real degrees of freedom
can be used to compute the $8\times8$
mass matrix of all Higgs bosons.
After a somewhat lengthy calculation \cite{HHG}
one finds that this mass matrix has
three eigenvalues zero corresponding
to the three Goldstone
modes `eaten' by the $W^{\pm}$ and the $Z$.
The remaining five degrees
of freedom yield the physical Higgs bosons of the
model:
\begin{eqnarray}
H^{\pm}&&\quad\quad\mbox{charged Higgs boson pair}\nonumber \\
A^0&&\quad\quad\mbox{CP-odd neutral Higgs boson}\nonumber \\
H^0,h^0&&\quad\quad\mbox{CP-even neutral Higgs bosons .}\nonumber
\end{eqnarray}
Their tree-level masses are given by
\begin{eqnarray} \label{G}
m_A^2&=&\hat{m}_u^2+\hat{m}_d^2 \nonumber\\
m_{H_{\pm}}^2&=&m_A^2+M_W^2 \nonumber\\
m_{h^0}^2&=&\frac{1}{2}\Big[m_A^2+M_Z^2-\sqrt{(m_A^2+M_Z^2)^2
-4m_A^2M_Z^2\cos^22\beta}\mbox{ }\Big] \nonumber\\
m_{H^0}^2&=&\frac{1}{2}\Big[m_A^2+M_Z^2+\sqrt{(m_A^2+M_Z^2)^2
-4m_A^2M_Z^2\cos^22\beta}\mbox{ }\Big]\mbox{ }.
\end{eqnarray}
The Higgs masses obey physically interesting
mass relations.
{}From (\ref{G}) we learn
\begin{equation}\label{massrel}
m_{H^{\pm}}\ge M_W\ , \qquad
m_{H^0}\ge M_Z\ , \qquad
m_{h^0} \le M_Z \ .
\end{equation}
\bigskip
\noindent
{\bf Exercise:} {\it Derive the mass relations
(\ref{massrel}) from (\ref{G}).}
\bigskip
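As a numerical illustration, one can evaluate the tree-level spectrum (\ref{G}) for a few sample values of $m_A$ and $\tan\beta$ and confirm the relations (\ref{massrel}); the sketch below uses $M_Z=91.2$ GeV and $M_W=80.4$ GeV.

```python
import math

# Tree-level Higgs spectrum (G) for sample (m_A, tan beta) values (GeV)
# and a check of the mass relations (massrel).
MZ, MW = 91.2, 80.4

def higgs_masses(mA, tan_beta):
    c2b = math.cos(2.0 * math.atan(tan_beta))
    root = math.sqrt((mA**2 + MZ**2)**2 - 4.0*mA**2*MZ**2*c2b**2)
    m_h   = math.sqrt(0.5*(mA**2 + MZ**2 - root))
    m_H   = math.sqrt(0.5*(mA**2 + MZ**2 + root))
    m_Hpm = math.sqrt(mA**2 + MW**2)
    return m_h, m_H, m_Hpm

for mA in (50.0, 200.0, 800.0):
    for tan_beta in (2.0, 10.0, 50.0):
        m_h, m_H, m_Hpm = higgs_masses(mA, tan_beta)
        assert m_h <= MZ + 1e-9       # m_h0  <= M_Z
        assert m_H >= MZ - 1e-9       # m_H0  >= M_Z
        assert m_Hpm >= MW            # m_H+- >= M_W
```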
Physically, the most interesting relation is the last inequality since it predicts the
existence of a light Higgs boson.
This `prediction' can be directly traced to
the fact the quartic couplings in the Higgs potential
are fixed by the (measured) gauge couplings and
are not free parameters as in the Standard Model.
However, radiative corrections
to this lightest Higgs boson mass can be large,
and after taking
quantum corrections into account
the upper bound on $m_{h^0}$ is pushed up
to about 150 GeV \cite{hrad}.
Nevertheless, the prediction of one light neutral Higgs
boson remains and is one of the characteristic
features of the supersymmetric
two-doublet Higgs sector.
It even holds in the limit that all
masses of the supersymmetric particles
are sent to infinity.
In this limit one recovers the non-supersymmetric
Standard Model -- albeit with a light Higgs.
Finally, a proper minimization of the scalar
potential requires taking into account
all scalar fields
and not truncating to the neutral Higgs
bosons. It is possible (and does occur
in certain regions of the
supersymmetric parameter space)
that there exist minima which not only break
the electroweak symmetry but also
$SU(3)$ and/or $U(1)_{\rm em}$.
An extended analysis of this aspect can be found,
for example, in ref.\ \cite{Color}.
\subsection{Weak-Scale Supersymmetry}
Let us briefly come back to the hierarchy and
naturalness problems.
In section~\ref{hn1} we learned that
supersymmetry sheds no light
on the hierarchy problem, i.e.\ the
question why
$M_Z$ is so much
smaller than $M_{Pl}$.
However, because
of the absence of quadratic divergences
it does solve the naturalness problem,
that is, it provides a stable
hierarchy in the presence of a light Higgs boson.
Adding only soft supersymmetry breaking terms
to the supersymmetric Standard Model
was precisely motivated by this feature.
However, from eq.~(\ref{H}) we learn
an important additional constraint
on the soft breaking parameters.
In order to introduce no new fine-tuning
all soft terms in eq.~(\ref{H}) should be
of the same order of magnitude,
i.e.\ of order ${\cal O}(M_Z)$ or at most
in the TeV range \cite{Wi}.
If this were not the case the soft parameters
would have to be delicately tuned
in order to add up
to $M_Z$. This in turn implies
the breaking of supersymmetry should occur
at the weak scale and that most likely
all supersymmetric particles have masses
in that range.\footnote{This argument is
somewhat imprecise since not only
the soft breaking terms $(m_u,m_d)$
but also $\mu$ have to be ${\cal O}(M_Z)$.
However $\mu$ is not related to supersymmetry
breaking in any obvious way but
rather a parameter in the superpotential.
Thus one needs
a mechanism which also
explains the approximate equality
of $\mu$ with $(m_u,m_d)$.
This is known as the $\mu$-problem \cite{KN,RNi}.}
This line of argument is used
to motivate what is called
``weak-scale supersymmetry'' and indeed
the current LEP II experiments actively
look
for supersymmetric particles with masses
slightly above the weak scale.
\subsection{Further Constraints on the Supersymmetric Parameter Space}
The experimental searches for supersymmetric particles
impose additional constraints on the supersymmetric
parameter space. First and foremost
the direct lower
bounds on the masses of the supersymmetric
particles \cite{PDG}
exclude certain regions of the parameter space.
The translation of experimental bounds
into the supersymmetric parameter space is
complicated by the fact that
the states which are listed in table~2 are
interaction eigenstates, but not necessarily mass eigenstates.
The only exceptions are the gluinos $\tilde G$, for which the mass bounds
directly translate into bounds on $\tilde m_3$.
On the other hand the three Winos $\tilde W^\pm,
\tilde W^3$, the $\tilde B$ and the four
Higgsinos $\tilde h_{u,d}^0, \tilde h_{d}^-,
\tilde h_{u}^+$ combine into
a quadruplet of neutral Weyl fermions
consisting of
$ {\bf N} \equiv
(\tilde B, \tilde W^3,\tilde h_{u}^0,\tilde h_{d}^0)$
and two pairs of charged Weyl fermions
${\bf C}^- \equiv
(\tilde W^-, \tilde h_{d}^-),\
{\bf C}^+ \equiv (\tilde W^+,\tilde h_{u}^+)$
with the following set of mass matrices
\beq
{\cal L}_{\rm fmass} =
-\frac12\, \tilde m_3\, \tilde G^a\tilde G^a \
-\ {\bf C}^- M_C ({\bf C}^+)^T
\ -\ \frac12\, {\bf N} M_N {\bf N}^T
+ h.c. \ ,
\eeq
where
\beq\label{mchargino}
M_C= \left(\begin{array}{cc}
\tilde m_2 & i\sqrt 2 g_2 v_u\\
i\sqrt 2 g_2 v_d& \mu\end{array}\right)\ ,
\eeq
and
\beq\label{mneutralino}
M_N= \left(\begin{array}{cccc}
\tilde m_1 & 0&\frac{i}2 g_1 v_u& -\frac{i}2 g_1 v_d \\
0&\tilde m_2& -\frac{i}2 g_2 v_u& \frac{i}2 g_2 v_d \\
\frac{i}2 g_1 v_u& -\frac{i}2 g_2 v_u&0&\mu \\
-\frac{i}2 g_1 v_d& \frac{i}2 g_2 v_d&\mu &0
\end{array}\right)\ .
\eeq
Thus, the physical
mass eigenstates of $M_C$ and $M_N$
are parameter dependent linear
combinations of the corresponding
interaction eigenstates and
they are termed {\em charginos}
and {\em neutralinos}, respectively.
\bigskip
\noindent
{\bf Exercise:} {\it Derive the mass matrices
(\ref{mchargino}) and (\ref{mneutralino}).
}
\bigskip
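The physical chargino masses are the singular values of $M_C$, i.e., the square roots of the eigenvalues of the Hermitian matrix $M_C M_C^\dagger$. A minimal numerical sketch follows; the value $g_2\approx 0.65$ and the mass parameters below are illustrative, not fitted:

```python
import math

def chargino_masses(m2, mu, g2, v_u, v_d):
    """Singular values of the 2x2 chargino mass matrix (eq. mchargino)."""
    a = m2
    b = 1j * math.sqrt(2.0) * g2 * v_u
    c = 1j * math.sqrt(2.0) * g2 * v_d
    d = mu
    # Hermitian matrix H = M_C M_C^dagger; its eigenvalues are masses^2
    h11 = abs(a)**2 + abs(b)**2
    h22 = abs(c)**2 + abs(d)**2
    h12 = a * c.conjugate() + b * d.conjugate()
    tr = h11 + h22
    det = h11 * h22 - abs(h12)**2
    disc = math.sqrt(max(tr**2 - 4.0 * det, 0.0))
    return math.sqrt((tr - disc) / 2.0), math.sqrt((tr + disc) / 2.0)

# Decoupling check: with v_u = v_d = 0 the masses reduce to |m2| and |mu|
lo, hi = chargino_masses(150.0, 300.0, 0.65, 0.0, 0.0)
assert abs(lo - 150.0) < 1e-9 and abs(hi - 300.0) < 1e-9
```

The same trace/determinant trick applies to the $4\times4$ neutralino matrix only after full diagonalization; for $M_N$ one would diagonalize $M_N M_N^\dagger$ numerically instead.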
The other supersymmetric particles
are the spin-0
partners of the quarks and leptons,
the {\em squarks} $\tilde q_L^I, \tilde u_R^I,\tilde d_R^I$
and the {\em sleptons}
$\tilde l_L^I, \tilde e_R^I$.
Their mass eigenstates are derived from
the following
three $6\times6$ and one $3\times3$ mass matrices
\beq
{\bf U} M_U {\bf U}^\dagger\ ,\qquad
{\bf D} M_D {\bf D}^\dagger\ ,\qquad
{\bf E} M_E {\bf E}^\dagger\ ,\qquad
\tilde \nu M_\nu \tilde{\bar \nu}\ ,
\eeq
where
\beq
{\bf U} \equiv (\tilde u_L^I, \bar{\tilde u}_R^I)
\ ,\qquad
{\bf D} \equiv (\tilde d_L^I, \bar{\tilde d}_R^I)
\ ,\qquad
{\bf E} \equiv (\tilde e_L^I, \bar{\tilde e}_R^I)
\ .
\eeq
Explicit forms of these mass
matrices in terms of the soft parameters
can be found e.g.~in ref.~\cite{RHa}.
Constraints on the soft parameters
imposed by the experimental bounds
can be found in \cite{PDG,RHa,RBg,RDn,RGu,RPes,Rlep}.
\subsection{The Minimal Supersymmetric Standard \\ Model (MSSM)}
The supersymmetric version of the Standard Model we discussed
so far has a huge parameter space and therefore very limited
predictive power. A much more constrained version (with fewer free parameters)
became known as the Minimal Supersymmetric Standard Model (MSSM)
which is the topic of this section.
The MSSM was motivated by the success of
Grand Unified Theories combined with a simple, flavor blind mechanism
of supersymmetry breaking in a hidden sector
\cite{DG}.
Over the last 15 years this model went through a few alterations but today
it is a well defined model with a very particular
set of soft supersymmetry breaking terms which are flavor blind
and in some sense the minimal choice of free parameters \cite{RNi,RHK,RHa,RZw}.
One imposes that {\it all} scalar masses are the same
$m^2_{ij}=m_0^2\delta_{ij}$, all gaugino masses are the same
$\tilde{m}_1=\tilde{m}_2=\tilde{m}_3=\tilde{m}$,
all a-parameters are proportional to the Yukawa couplings
with the same universal proportionality constant
$a_0$
and finally that the b-parameter is of a specific form.
Altogether one has
\begin{eqnarray}\label{MSSMsoft}
m^2_{ij}&=&m_0^2\, \delta_{ij}\ , \qquad
\tilde{m}_1=\tilde{m}_2=\tilde{m}_3=\tilde{m}
\ ,\qquad b=b_0\, m_0\, \mu\ ,\nonumber\\
(a_u)_{IJ}&=&a_0\, (Y_u)_{IJ}\ , \qquad
(a_d)_{IJ}=a_0\, (Y_d)_{IJ}\ , \qquad
(a_l)_{IJ}=a_0\, (Y_l)_{IJ} \ .
\end{eqnarray}
Thus, the parameter space of the MSSM is
spanned by the 5 parameters
\[(m_0,\tilde{m},a_0,b_0,\mu)\mbox{ },\]
which are subject to one further
non-trivial constraint (\ref{MZcond})
that ensures
the observed electroweak symmetry breaking.
Of course the relations (\ref{MSSMsoft})
are meant to be
tree-level relations and they do receive
quantum corrections which are governed
by the appropriate renormalization group equations
\cite{APW,RGEa}.
The quantum corrections destroy
the universality of the soft parameters
but the deviation from universality is
small and so far in accord with all
measurements \cite{RBg,RGu,Poko,RFCNCP}.
In particular, the smallness of
flavor-changing neutral currents is
`naturally explained' in the MSSM \cite{ElN,BBM}.
Thus even without an underlying GUT theory
this set of parameters seems to have some
phenomenological attraction.
However, one should also stress that supersymmetric
GUTs are an extremely viable possibility today.
Among other things the LEP precision experiments
determined the gauge coupling constant $g_2$
very precisely.
In light of these measurements
one can ask to what extent the three gauge
couplings
$g_1,g_2,g_3$ do unify at some
high energy scale $M_{\rm GUT}$
given the experimental input at $M_Z$.
At one-loop order the energy dependence
of the gauge couplings is given by
\begin{equation}
g_{(a)}^{-2}(M_Z)\ =\ g_{(a)}^{-2}(M_{\rm GUT})\,
+\, \frac{b_{(a)}}{8\pi^2}\,
\ln \big(\frac{M_{\rm GUT}}{M_Z}\big) \ ,
\end{equation}
where $b_{(a)}$ are the coefficients
of the one-loop beta-function
which depend on the massless spectrum of the
theory.
Let us first recall that the index $T_{(a)}(R)$ of
a representation $R$ is
defined by $\mbox{Tr}_R(T^aT^b)=T_{(a)}(R)\,\delta^{ab}$, where the $T^a$ are
the generators of the gauge group factor labeled by $(a)$.
In terms of the indices $b_{(a)}$ is given by
\beq \label{b}
b_{(a)} =
-\frac{11}{3}T_{(a)}(\mbox{G})
+\frac{2}{3}T_{(a)}(\mbox{R})\, n_{WF}
+\frac{1}{6}T_{(a)}(\mbox{R})\, n_{RS} \mbox{ },
\eeq
where G denotes the adjoint
representation and
$n_{WF}$ ($n_{RS}$) counts the number of
Weyl fermions (real scalars)
in the representation $R$.
For the non-supersymmetric Standard Model
one finds
\[ (b_1,b_2,b_3)=(\frac{41}{10},-\frac{19}{6},-7) \mbox{ }, \]
which does not lead to a unification
of coupling constants at any scale. That is,
one cannot find an $M_{\rm GUT}$ where
$g_{(1)}(M_{\rm GUT})=g_{(2)}(M_{\rm GUT})=
g_{(3)}(M_{\rm GUT})$ holds.
However, in the supersymmetric Standard Model
one finds
\[ (b_1,b_2,b_3)=(\frac{33}{5},1,-3) \mbox{ }, \]
which does lead to a unification of couplings
at $M_{\rm GUT}\simeq 10^{16}$ GeV \cite{Uni}.
This can be taken as a (strong) hint for a
supersymmetric GUT.
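This can be verified with a short one-loop sketch. The $\alpha^{-1}(M_Z)$ inputs below are illustrative round numbers, and GUT-normalized hypercharge is used throughout, for which the MSSM coefficients read $(33/5,1,-3)$:

```python
import math

M_Z = 91.2  # GeV
# Illustrative one-loop inputs at M_Z (GUT-normalized alpha_1)
alpha_inv_MZ = [59.0, 29.6, 8.5]

def crossing_scale(i, j, b):
    """Scale where alpha_i^-1(mu) = alpha_j^-1(mu) at one loop."""
    # alpha^-1(mu) = alpha^-1(M_Z) - b/(2 pi) * ln(mu / M_Z)
    t = 2.0 * math.pi * (alpha_inv_MZ[i] - alpha_inv_MZ[j]) / (b[i] - b[j])
    return M_Z * math.exp(t)

b_SM   = [41/10, -19/6, -7]   # Standard Model
b_MSSM = [33/5, 1, -3]        # supersymmetric Standard Model

for b, label in ((b_SM, "SM"), (b_MSSM, "MSSM")):
    scales = [crossing_scale(0, 1, b), crossing_scale(1, 2, b),
              crossing_scale(0, 2, b)]
    print(label, ["%.1e" % s for s in scales],
          "spread %.1f" % (max(scales) / min(scales)))
```

The three MSSM crossing scales cluster near $10^{16}$\,GeV (spread of order one), while the SM crossings are spread over several orders of magnitude, so no common unification scale exists there.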
Let us close this section with a discussion
of electroweak symmetry breaking in the MSSM.
At the tree level
one now has $\hat{m}_d^2=\hat{m}_u^2$
and as a consequence the conditions for
electroweak symmetry
breaking
(\ref{bb}) and (\ref{sb})
cannot be satisfied simultaneously
\begin{eqnarray}\hat{m}_u^2+\hat{m}_d^2=2(m_0^2+\mu^2)\ge 2 |b| \nonumber\\
\hat{m}_u^2\hat{m}_d^2=(m_0^2+\mu^2)^2 < |b|^2 \mbox{ }.\nonumber
\end{eqnarray}
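The incompatibility is immediate: the first condition forces $m_0^2+\mu^2\ge|b|$, while the second demands $m_0^2+\mu^2<|b|$. A brute-force scan (illustrative, dimensionless units) confirms that no point satisfies both:

```python
# With m_u_hat^2 = m_d_hat^2 = m2 (= m0^2 + mu^2 >= 0), the two tree-level
# conditions for electroweak symmetry breaking are never met simultaneously.
for m2 in [0.1 * k for k in range(100)]:
    for b in [0.1 * k for k in range(100)]:
        bounded_fails = (2.0 * m2 >= 2.0 * abs(b))   # condition (sb) variant
        origin_unstable = (m2 * m2 < b * b)          # condition (bb) variant
        assert not (bounded_fails and origin_unstable)
```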
However, quantum corrections
alter this situation and naturally induce
electroweak symmetry breaking \cite{IR}.
Thus, the MSSM naturally displays
a supersymmetric version of the Coleman-Weinberg mechanism \cite{coleman}
where quantum corrections
generate a non-trivial minimum in the Higgs potential.
The electroweak symmetry breaking scale
is not
put in hand, but related to the scale
of the supersymmetry breakdown.
\section{Summary}
Supersymmetry is a generalization of the space-time symmetries of quantum field theories
and it transforms fermions into bosons and
vice versa. In the supersymmetric Standard Model
all particles of the Standard Model
are accompanied by superpartners
with opposite statistics.
Moreover, it is necessary to enlarge the
Higgs sector and add a second Higgs doublet
to the spectrum.
Supersymmetry cannot be an exact symmetry
of nature and has to be realized in its broken
phase (if at all).
However, spontaneously broken
global supersymmetry is
phenomenologically ruled out.
Spontaneously broken local supersymmetry on the other hand
leads to models which are (still)
in agreement with all present observations.
These models do provide
a solution to the naturalness problem as long as
the supersymmetric partners have masses
not much bigger than 1~TeV.
The supersymmetric Standard Model
has one solid prediction: a light neutral Higgs
with a mass smaller than 150 GeV
(as well as additional charged
but not necessarily light Higgs bosons).
The gauge couplings of
the supersymmetric Standard Model do
unify at a scale $M_{\rm GUT}\simeq 10^{16}$ GeV,
which may indicate a supersymmetric GUT.
The MSSM -- motivated by GUTs --
contains only four new parameters
and is consistent with observations in large
regions of this parameter space.
In this model the electroweak symmetry breaking
is driven by radiative corrections realizing
a supersymmetric version of the Coleman-Weinberg
mechanism.
Finally, the concept of supersymmetry also
arises naturally in string theories
which might be another hint towards
its realization in nature.
\bigskip
\noindent
{\bf Exercise:} {\it How can supersymmetry be verified or falsified? Distinguish between
necessary and sufficient conditions.
}
\bigskip
\newpage
\section{Appendix-Conventions and Notation}
In these lectures the notation and conventions
of ref.\ \cite{wb} are used.
The four-dimensional Lorentz metric is chosen as
\begin{equation} \eta_{mn}=\mbox{diag}(-1,1,1,1) \ .\end{equation}
Lorentz indices are labeled by Latin indices $m,n,...$ which run from 0 to 3.
Greek indices are used
to denote spinors.
A two-component Weyl spinor can transform under the
$(\frac{1}{2},0)$ or the complex conjugate
$(0,\frac{1}{2})$--representation of the Lorentz group
and dotted or
undotted indices are used to distinguish between these representations.
$\psi_{\alpha}$ denotes a spinor transforming under
the $(\frac{1}{2},0)$ representation while
$\bar{\chi}_{\dot{\alpha}}$
transforms under the
$(0,\frac{1}{2})$ representation of the Lorentz group.
The spinor indices $\alpha$ and $\dot{\alpha}$ can take the values 1 and 2.
These indices can be raised and lowered using the skew-symmetric
$SU(2)$-- invariant tensor
$\epsilon^{\alpha\beta}$ or $\epsilon_{\alpha\beta}$.
\begin{equation} \psi^{\alpha}=\epsilon^{\alpha\beta}\psi_{\beta} \mbox{ }, \quad\quad
\psi_{\alpha}=\epsilon_{\alpha\beta}\psi^{\beta} \mbox{ },
\end{equation}
where
$$\epsilon_{21}=-\epsilon_{12}=1,\quad
\epsilon_{11}=\epsilon_{22}=0, \quad
\epsilon_{\alpha\gamma}\epsilon^{\gamma\beta}=\delta_{\alpha}^{\beta}\ .
$$
For dotted indices the analogous equations hold.
The product
$ \epsilon^{\beta \alpha} \psi_{\alpha} \chi_{\beta}=
\psi^{\beta} \chi_{\beta}$ is a Lorentz scalar.
Spinors are anticommuting objects and one has the
following summation convention:
\begin{eqnarray}&&\psi\chi=\psi^{\alpha}\chi_{\alpha}=-\psi_{\alpha}\chi^{\alpha}=\chi^{\alpha}\psi_{\alpha}=\chi\psi \mbox{ },\nonumber\\
&& \bar{\psi}\bar{\chi}=\bar{\psi}_{\dot{\alpha}}\bar{\chi}^{\dot{\alpha}}=-\bar{\psi}^{\dot{\alpha}}\bar{\chi}_{\dot{\alpha}}=
\bar{\chi}_{\dot{\alpha}}\bar{\psi}^{\dot{\alpha}}=\bar{\chi}\bar{\psi} \mbox{ }.
\end{eqnarray}
The conventions for the conjugate spinors are
chosen such that they are consistent with the conjugation of scalars:
\begin{equation} (\psi\chi)^{\dagger}=(\psi^{\alpha}\chi_{\alpha})^{\dagger}=\bar{\psi}_{\dot{\alpha}}\bar{\chi}^{\dot{\alpha}}=
\bar{\psi}\bar{\chi}=\bar{\chi}\bar{\psi}\mbox{ }.
\end{equation}
The $\sigma$-matrices $\sigma_{\alpha\dot{\alpha}}^m$ are given by:
\begin{eqnarray} \sigma^0=\left(\begin{array}{cc}-1&0 \\ 0&-1\end{array}\right)\mbox{ }, \quad\quad
\sigma^1=\left(\begin{array}{cc}0&1 \\ 1&0\end{array}\right)\mbox{ }, \nonumber\\
\sigma^2=\left(\begin{array}{cc}0&-i \\ i&0\end{array}\right)\mbox{ }, \quad\quad
\sigma^3=\left(\begin{array}{cc}1&0 \\ 0&-1\end{array}\right)\mbox{ }.
\end{eqnarray}
The invariant $\epsilon$--tensor raises and lowers the spinor indices of the $\sigma$-matrices:
\begin{equation} \bar{\sigma}^{m\dot{\alpha}\alpha}=\epsilon^{\dot{\alpha}\dot{\beta}}\epsilon^{\alpha\beta}\sigma_{\beta\dot{\beta}}^m \mbox{ }
\end{equation}
and we have:
\beq
\bar{\sigma}^0={\sigma}^0\ , \qquad \bar{\sigma}^{1,2,3}=-{\sigma}^{1,2,3} \mbox{ }.
\eeq
The generators of the Lorentz group in the spinor representation
are given by
\beq
\sigma^{nm} = \frac14
(\sigma^n\bar\sigma^m-\sigma^m\bar\sigma^n)\ , \qquad
\bar\sigma^{nm} = \frac14
(\bar\sigma^n\sigma^m-\bar\sigma^m\sigma^n)\ .
\eeq
The Dirac-$\gamma $--matrices can be written in terms of Weyl matrices:
\beq
\gamma^m =
\left( \begin{array}{cc} 0 & \sigma^m \\ \bar{\sigma}^m & 0\end{array}
\right)
\eeq
which fulfill
\beq
\{ \gamma^m, \gamma^n \} = -2 \, \eta^{mn}\ .
\eeq
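With the explicit $\sigma$-matrices above, the Clifford algebra can be verified numerically; the following plain-Python sketch (no external libraries) builds the Weyl-basis $\gamma$-matrices and checks $\{\gamma^m,\gamma^n\}=-2\eta^{mn}$:

```python
# 2x2 Weyl matrices in the conventions above: sigma^0 = -identity
s0 = [[-1, 0], [0, -1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
sigma = [s0, s1, s2, s3]
# sigma-bar: same sigma^0, spatial matrices flip sign
sbar = [s0] + [[[-x for x in row] for row in s] for s in (s1, s2, s3)]

def gamma(m):
    """4x4 Dirac matrix in the Weyl basis: off-diagonal sigma blocks."""
    g = [[0] * 4 for _ in range(4)]
    for i in range(2):
        for j in range(2):
            g[i][2 + j] = sigma[m][i][j]
            g[2 + i][j] = sbar[m][i][j]
    return g

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

eta = [-1, 1, 1, 1]  # diag(-1, 1, 1, 1)
for m in range(4):
    for n in range(4):
        ac = [[x + y for x, y in zip(r1, r2)]
              for r1, r2 in zip(matmul(gamma(m), gamma(n)),
                                matmul(gamma(n), gamma(m)))]
        expected = -2 * eta[m] if m == n else 0
        for i in range(4):
            for j in range(4):
                target = expected if i == j else 0
                assert abs(ac[i][j] - target) < 1e-12
```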
A four component Dirac spinor contains two Weyl spinors
\beq
\Psi_D = \left( \begin{array}{c} \psi \\ \bar{\chi} \end{array} \right)
= \left( \begin{array}{c} \psi_{\alpha} \\ \bar{\chi}^{\dot{\alpha }} \end{array}
\right)\ .
\eeq
Its conjugate is
\beq
\bPsi_D = \Psi_D^{\dagger} \gamma^0 =
(\chi^{\alpha},\bar{\psi}_{\dot{\alpha }})\ .
\eeq
The Dirac equation describing relativistic spin-$\frac{1}{2}$ particles reads:
\beq
(i \, \gamma^n \partial_{n} + m)\, \Psi_D =0.
\eeq
It can be decomposed into two Weyl equations
\bea
i \sigma^n \partial_n \, \bar{\chi} \, + \, m \psi &=& 0\ , \\ \nn
i \bar{\sigma}^n \partial_n \psi \, + \, m \bar{\chi} &=& 0\ .
\eea
\bigskip
\noindent
{\bf Exercise:} {\it
Show the validity of the following identities:}
$$\psi^{\alpha} \chi_{\alpha} = \chi^{\alpha} \psi_{\alpha} , \quad
\chi^{\alpha} \sigma_{\alpha \dot{\alpha }}^n \bpsi^{\dot{\alpha }}
= -\bpsi_{\dot{\alpha }} \bs^{n\dot{\alpha } \beta} \chi_{\beta},\quad
\left( \psi^{\alpha} \phi_{\alpha} \right) \bar{\chi}_{\dot{\beta }}
= -\frac{1}{2} \left( \phi^{\alpha} \sigma^m_{\alpha \dot{\delta }}
\bar{\chi}^{\dot{\delta }} \right) \psi^{\gamma} \sigma_{m\,\gamma \dot{\beta }}
$$
\noindent
{\bf Exercise:} {\it
Compute in terms of $\sigma$-matrices} the following
matrices
$$\gamma_5 = -i \gamma_0 \gamma_1 \gamma_2 \gamma_3 \qquad
P_L= \frac{1}{2} (1-\gamma_5), \qquad P_R = \frac{1}{2} (1+\gamma_5)
$$
\noindent
{\bf Exercise:} {\it
Express the following Lorentz-invariants in terms of Weyl-spinors:}
$$
\bPsi_D \Psi_D, \; \bPsi_D \gamma^5 \Psi_D, \; \bPsi_D \gamma^m \Psi_D, \;
\bPsi_D \gamma^5 \gamma^m \Psi_D, \;
\bPsi_D [\gamma^m, \gamma^n] \Psi_D
$$
\vskip 1cm
{\bf Acknowledgement:}
We would like to thank the students and organizers
of the 1996 Saalburg Summers School for
generating an inspiring scientific atmosphere.
Special thanks go to H.\ G\"unther,
M.\ Haack and M.\ Klein
for carefully reading the manuscript and
P.\ Zerwas for guiding us through the
CERN home page.
J.L.\ also thanks the students of the University of Munich
for their questions and comments during a course
on supersymmetry which was the seed of these lecture notes.
\section{Introduction}
\label{sec:Introduction}
Future smart factories require flexible production of highly
individualized goods in small and medium lot sizes, where
reconfigurations of systems occur with high frequencies. To achieve
this, such factories require flexibly usable assembly line robots, which
can work safely in shared environments with humans. Flexible usage of
such robots requires easy programming for arbitrary tasks. Compliant
manipulators allow such interaction, but programming these robots is
more complex than programming rigid industrial robots. Besides knowledge
of a general-purpose programming language (GPL), it also requires
control-specific knowledge to adjust compliance parameters like
stiffness and damping for each motion. Thus, only robotics experts with
sufficient programming knowledge are able to program such compliant
manipulators. Furthermore, the re-usability of such control-specific
programs is endangered due to their target-specific nature. Making these
usable and re-usable in daily flexible production requires tools for a
more abstract development of assembly tasks that are executable by shop
floor workers without software engineering expertise.
In \cite{THR+13} we propose a new framework which consists of a
domain-specific language (DSL) and a toolchain to generate specialized
robot programs for assembly tasks. The recent robot programming
interface for the iiwa-LBR is based on the compliance-frame concept
introduced by Mason and further developed into the task-frame formalism
by~\cite{deSchutter88}. The programming interface uses stiffness and
damping definitions for each Cartesian DOF while the robot moves towards
a target pose. Stop conditions can be applied in order to immediately
stop the motion and await new specifications for the next motion.
Fig.~\ref{fig:SkillIntro} illustrates the programming specification
necessary for the action shown at the right side of the figure. The
robot rotates about the x-axis in the task-frame, while it pushes the
object towards the top hat rail until the torque about the x-axis
exceeds a certain threshold. For Cartesian force controlled robots the
skill-primitive concept has been suggested earlier~\cite{HST92,TBW03},
which is also based on the task-frame formalism. Skill primitives (SPs)
are combined to skill primitive nets~\cite{TFKW03,KFTW04,MMTW09} (SPNs)
and their transitions are ensured by preconditions and postconditions.
Figure~\ref{fig:SkillPrimitiveNet} shows a SPN consisting of three SPs,
which describes the placement of an object onto a table. The surface
orientation of the plane is not precisely known. Hence, a force-torque
sensor is attached to the robot hand flange. Therewith contact forces
and torques can be measured and evaluated during the assembly process.
According to contact forces and torques, different SPs are selected for
execution. In this example, either a rotation about the object's depth
axis or its width axis is carried out. When a stop condition triggers,
a transition to one of the subsequent SPs is performed, depending on the
measured values of the force-torque sensor.
\begin{ownfigure}
\centering
\includegraphics[width=0.48\textwidth]{images/BildIntro2.pdf}
\caption{Left side: Motion commands for the LBR-robot to establish the
task. Right side: Position of the task frame for the rotation about the
x-axis to assemble the socket onto the top hat rail.}
\label{fig:SkillIntro}
\end{ownfigure}
\begin{ownfigure}
\centering
\includegraphics[width=0.48\textwidth]{gen/SkillPrimitiveNet}
\caption{Excerpt of a SPN for placing an object onto a table
despite uncertainties about the orientation of the table's plane.}
\label{fig:SkillPrimitiveNet}
\end{ownfigure}
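The control flow of such a net can be sketched as a guarded state machine, where each primitive runs until a stop condition triggers and the next primitive is selected from sensor readings. The Python sketch below is purely illustrative: the sensor names, thresholds, and primitives are hypothetical and not taken from any robot API.

```python
# Each skill primitive maps sensor readings to the next primitive; a stop
# condition (here, a contact-force threshold) selects among the successors.
def make_spn():
    def move_down(sensors):
        if sensors["f_z"] > 5.0:              # contact established
            if abs(sensors["t_x"]) > 0.5:     # tilted about depth axis
                return "rotate_depth"
            if abs(sensors["t_y"]) > 0.5:     # tilted about width axis
                return "rotate_width"
            return "done"                     # flat contact: finished
        return "move_down"                    # keep approaching

    def rotate(sensors):
        return "move_down"                    # re-approach after aligning

    return {"move_down": move_down,
            "rotate_depth": rotate,
            "rotate_width": rotate}

def run(spn, readings, state="move_down", limit=20):
    trace = [state]
    for sensors in readings:
        if state == "done" or limit == 0:
            break
        state = spn[state](sensors)
        trace.append(state)
        limit -= 1
    return trace

spn = make_spn()
readings = [{"f_z": 0.0, "t_x": 0.0, "t_y": 0.0},
            {"f_z": 6.0, "t_x": 0.8, "t_y": 0.0},   # tilted contact
            {"f_z": 0.0, "t_x": 0.0, "t_y": 0.0},
            {"f_z": 6.0, "t_x": 0.0, "t_y": 0.0}]   # flat contact
print(run(spn, readings))
```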
SPNs have proven useful for the programming of complex robot tasks but
lack abstraction and a separation of concerns between shop floor workers
and robotics experts. As can be seen, the art of programming different
robots in the assembly domain is essentially the same. Hence, a concept
is necessary that is grounded in domain-specific languages and allows
programming different robots in an intuitive way, driven by the
problem domain rather than the robot hardware. This motivated us to
provide such a framework, which allows robotics experts to exploit the
hardware while non-experts can use their results in an intuitive way.
Figure~\ref{fig:layers} illustrates how robot assembly tasks are modeled
as three-level networks with \lr~\cite{THR+13} instead. We have adapted
the concept of SPs such that it can be used for compliant manipulators.
In order to increase the level of abstraction we distinguish between
assembly \concept{plans}, \concept{processes}, \concept{tasks},
\concept{skills}, and \concept{actions}: Assembly plans are provided
externally~\cite{Thomas08} and consist of assembly processes. Each
assembly process consists of tasks that consist of skills. Assembly
processes and tasks can be modified by domain experts to adjust the
assembly process to fit new environmental conditions, e.g., because the
robot uses another gripper than assumed, a sensor does not work as
expected, or workpieces are placed differently than the expert system
assumed. Assembly skills consist of actions, which are platform specific
and provided by robot experts.
\begin{ownfigure}
\centering
\includegraphics[width=0.48\textwidth]{gen/LightRocksLevels}
\caption{The \lr abstraction layers.}
\label{fig:layers}
\end{ownfigure}
These levels of abstraction make it possible to separate robotics expertise modeled
within actions and skills from domain expertise embodied in tasks and
processes. It also allows re-using recurring skills for different
assembly tasks and recurring tasks for different assembly processes. The
previous version of \lr~\cite{THR+13} was implemented as a profile of
the \umlp~\cite{Rum11} Statechart (SC) language with the MontiCore
language workbench~\cite{GKR+06,KRV10}. The \umlp is a variant of
UML~\cite{OMG10} for programming. While this allows re-use of language
infrastructure, description of domain types via \umlp class diagram (CD)
models~\cite{LNPR+13}, model analyses, and code generators~\cite{Sch12},
the resulting modeling language is less comprehensible than intended and
requires domain experts to comprehend the full expressiveness of \umlp SCs.
To liberate domain experts from this, we present a collection of MontiCore
DSLs for concise representation of SPNs to facilitate development of
re-usable, platform-independent robot assembly tasks. In addition, the
degree of re-usability for each introduced abstraction layer regarding
different tasks, APIs, target platforms or robots is evaluated.
In the following, Sect.~\ref{sec:Example} illustrates \lr by example
before Sect.~\ref{sec:RelatedWork} discusses related work.
Sect.~\ref{sec:LR2} describes the new \lr DSLs and toolchain.
Afterwards, Sect.~\ref{sec:CaseStudies} outlines case studies and
finally, Sect.~\ref{sec:Conclusion} debates future work and summarizes
the contribution.
\section{Example}
\label{sec:Example}
The assembly task of placing a screw into a thread may be decomposed into
grasping the screw, moving it to the thread, and tightening it into the
thread. Figure~\ref{fig:GraspAndScrew} depicts a part of the task
\code{GraspAndScrew} that uses the gripper to pick up a screw and
tightens it into a thread. The task consists of several skills and
provides two outcomes: either the screw is placed accordingly to the
skill \code{Screwing} or it is not. The skill \code{Screwing} inserts a
screw into a thread after a previously executed skill placed it
accordingly. It consists of four actions of which \code{Spin} and
\code{CloseGripper} are illustrated in Fig.~\ref{fig:GraspAndScrew}.
\begin{ownfigure}
\centering
\includegraphics[width=0.48\textwidth]{gen/GraspAndScrew}
\caption{Excerpts of task \code{GraspAndScrew} and skill
\code{Screwing}.}
\label{fig:GraspAndScrew}
\end{ownfigure}
The developer modeled the skill following human behavior: after
initially grasping the screw and holding it over the thread, the robot
spins the screw (action \code{Spin}). If a certain torque is reached,
the screw is fixed and the skill finishes. Otherwise the robot releases
the gripper, rotates back, grasps the screw again (cf. action
\code{CloseGripper}), and spins it again. Note that the action
\code{Spin} yields two further outcomes to detect whether something
besides the robot manipulated the workpiece. Such multiple outcomes
allow modeling flexible skills that can deal with uncertainties.
The action \code{CloseGripper} references the type \code{Tool}, which is
part of the robot interface of the domain model, via \texttt{t.close()}.
\lr uses MontiCore to validate such models and to generate proper GPL
implementations of these models. The resulting robot behavior is
illustrated in Fig.~\ref{fig:ResultingBehavior}.
\begin{ownfigure}
\centering
\includegraphics[width=0.48\textwidth]{gen/ResultingBehavior}
\caption{A KUKA LBR robot finding and tightening a screw using
\lr}
\label{fig:ResultingBehavior}
\end{ownfigure}
\section{Related Work}
\label{sec:RelatedWork}
Recently, multiple DSLs for imperative or event-driven robot
behavior~\cite{KAH+10,BDH+10,ASH+12}, perception tasks~\cite{HSV+13},
and software architecture with state-based behavior
descriptions~\cite{SSL11,KSB+11,RRW13ATPS} have been presented.
While not focused on assembly tasks, these behavior modeling languages
are related in the common aim to facilitate robot programming.
Current approaches to robot behavior modeling aim at robotics software
engineers, not domain experts. To this end, these approaches either
provide less abstraction \cite{KAH+10,BDH+10,ASH+12} or require
knowledge on automata semantics to describe
behavior~\cite{SSL11,KSB+11,RRW13ATPS}. Even previous \lr~\cite{THR+13}
and closely related approaches~\cite{BBH+13,VKS+13} expect certain
degrees of software engineering knowledge from the domain experts. With
current \lr, this is relieved further as well-formedness rules prohibit
tasks and skills to reference the robot's API. Thus, domain experts only
need to comprehend task composition, skill composition and domain
parameter assignment.
Rethink Robotics developed an industrial robot called Baxter which can be trained
manually by ordinary line workers~\cite{Fit13}. Different kinds of tasks, like
performing a blind pick or placing an object in a grid pattern, can
be configured by interacting with the UI and teaching positions and areas by
moving the robot's end effector directly. As described
in~\autoref{subsec:KUKAAssembly}, LightRocks supports manual teaching of
different poses, too. Unlike LightRocks, this approach provides a
user-friendly UI to define tasks only, while the underlying skill level is
hidden from the end user; consequently, new skills cannot be defined by the
customer directly.
\section{LightRocks Languages and Toolchain}
\label{sec:LR2}
\lr is developed as an integrated collection of MontiCore~\cite{KRV10}
languages which comprises a process language, a task language, a
skill language, and an action language. In this collection, each
language comprises (i) a context-free grammar (CFG)~\cite{KRV07},
(ii) well-formedness checks, to check properties not expressible with
CFGs, and (iii) symbol tables~\cite{Voe11,LNPR+13}, which give
information about imported models of the same and other languages to
enable language integration. The latter, for instance, enable
integration with \umlp CDs to reference the models representing domain
types and type checking between models. Models of \lr are
textual~\cite{GKR+07}, which allows easy comprehension even of large
models by software engineering experts, liberates these from layouting
efforts, and enables processing models with their accustomed tools.
Processes and tasks represent the logical structure of re-usable assembly
process knowledge. To this end, processes contain a net of tasks and
tasks contain a net of skills. Both may refer only to domain types.
Skills contain nets of actions. Actions, however, may reference a single
method of the underlying domain model only. The domain model describes
the interfaces of sensors, tools, the types of movement that can be used
(e.g., linear or rotational), and the available domain types in terms of
a \umlp CD language profile that restricts class diagrams to interfaces.
Using different CDs for domain types and robot interfaces separates
concerns and enables to re-use both with arbitrary manipulators, as long
as a code generator from CD to the manipulator's GPL is present.
Ultimately, it grounds the assembly processes via actions to the robot
and allows validating the models.
We implemented previous \lr as a profile of \umlp SC where
tasks, skills, and actions were handled uniformly as states. This led to
``notational noise''~\cite{Wil01}, e.g., unintuitive language elements,
and increased the ``accidental complexity''~\cite{FR07} by forcing
domain experts to learn SCs instead of using an assembly domain
specific language. The new stand-alone \lr languages clearly separate
between language elements for domain experts and language elements for
robotics experts. This reduces noise and complexities.
Listing~\ref{lst:Spin} shows the textual model of the action \code{Spin}
as depicted in Fig.~\ref{fig:GraspAndScrew}.
\lstinput
{Action}
{Spin}
{Model of the action \code{Spin} as shown in Fig.~\ref{fig:GraspAndScrew}.}
{listings/Spin.action}
The textual syntax is straightforward and consists of the keyword
\code{action} followed by a name (l.~1), a \code{parameters} declaration
block (ll.~3-7), which defines how the action can be parametrized, an
\code{execution} block (ll.~9-11) that may reference the robot API, and
a set of entry and exit rules (ll.~13-16) to define preconditions and
postconditions of the action. Parameters are assigned via incoming
transitions and locally visible in task, skill, or action.
Tasks and skills additionally propagate their parameters to the
contained topology.
Based on the grammars of \lr languages, MontiCore generates language
processing infrastructure including FreeMarker-based\footnote{FreeMarker
template engine: \url{http://www.freemarker.org}} code generation
sub-frameworks and text editors~\cite{KRV07a}. The \lr toolchain
utilizes these to parse models, process these, and ultimately transform
these into executable code.
Figure~\ref{fig:Toolchain} depicts the toolchain with the related roles,
according to~\cite{KRV06}.
As current and previous \lr are functionally equivalent, retaining
compatibility with existing models, tooling, and code generators is
straightforward: task, skill, and action models are transformed into
SC representations compatible with previous \lr which can be
processed by existing tooling. The current \lr toolchain
parses process models provided by the \emph{application modeler};
robotics experts acting as \emph{skill library providers} contribute
the skill models these processes build on. Current code generators retain the structural separation into
tasks, skills, and actions, which interact with a run-time system that,
for instance, defines how to execute transitions. The \lr toolchain
provides extension points for code generators to enable code generation
for arbitrary target platforms with minimal effort. Code generators and
run-time system (RTS) are not specific to a certain robot but to the
programming language to be used and non-functional requirements (e.g.,
restrictions to the amount of memory to be used). Thus, \emph{code
generator developers} and \emph{RTS developers} require software
engineering expertise, but no robot expertise. Platform-independence of
code generators and RTS ultimately enables their re-use and further
facilitates robot task development with \lr.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{gen/Toolchain}
\caption{The \lr toolchain translates assembly processes consisting
of tasks, skills, and actions via Statecharts to executable
code.}
\label{fig:Toolchain}
\end{figure}
We have also developed a combined graphical and textual editor for
convenient modeling of assembly tasks by factory floor workers, domain
experts, and robotics experts. The editor provides two views: one for
modeling and one for model execution. It further allows parallel textual
and graphical modeling, parsing, and well-formedness checking of tasks
and their constituents. The editor is built with MontiCore's text editor
generation features, hence the text editor itself is generated: a
corresponding editor grammar allows defining keywords, outline
elements, filters, and other features from which text editor plugins for
Eclipse\footnote{Eclipse project:
\url{http://www.eclipse.org/}} are generated. The graphical editor is
also implemented as an Eclipse plugin and uses the Standard Widget
Toolkit\footnote{SWT website: \url{http://www.eclipse.org/swt/}} (SWT)
to render tasks, skills, and actions.
Figure~\ref{fig:ExecutorView} shows the editor's execution view with the
textual editor in the top middle and the graphical editor in the bottom middle.
The graphical editor displays the currently edited network of tasks
parallel to the corresponding textual model. Model parsing, context
condition checks, outline and syntax highlighting of the text editor are
directly re-used from the \lr toolchain. The model execution framework
takes care of monitoring and representing the currently executed parts
of the model. The contents of the textual and the graphical editor are
synchronized directly by informing either the textual or the graphical
editor about any modifications of the model. Once the developer starts
modeling, the graphical editor either invokes MontiCore to parse the
changed textual model or prints the changed model into the displayed
text editor. Layout data are stored separately and do not pollute the
textual model.
\begin{ownfigure}
\centering
\includegraphics[width=0.48\textwidth]{gen/ExecutionView}
\caption{The model execution view shows the currently executed action
and its parents parallel to graphical and textual editors.}
\label{fig:ExecutorView}
\end{ownfigure}
The right part of the editor shows the three assembly process levels and
their execution states at run-time. The editor highlights the currently
executed action and its parents. The top section shows the process level
and highlights the active task, the middle section shows the task level
and highlights the active skill, and the bottom section shows the skill
level and highlights the active action. Currently, the editor does not
support on-line editing of \lr models at run-time. While desirable, we
still need to ascertain the requirements on valid run-time changes and
their implications for error-handling mechanisms.
\section{Case Studies}
\label{sec:CaseStudies}
We have evaluated \lr with KUKA LBR robots and Lego Mindstorms robots.
With the former, we modeled classical assembly tasks. Due to hardware
restrictions of the Lego robots, we examined whether modeling of
non-assembly tasks is feasible with \lr. To our satisfaction, modeling
of re-usable logistics tasks with these robots was straightforward as
well. The following sections briefly report on both case studies.
\subsection{KUKA LBR Assembly Tasks}
\label{subsec:KUKAAssembly}
The first case studies were performed on typical assembly tasks with a KUKA
LBR manipulator. The modeled tasks included screwing, picking,
stacking, plugging, and different kinds of movements (e.g., force
controlled). The domain model for this case study comprised $13$
interfaces for various concepts of the domain and the robot.
Figure~\ref{fig:UseCaseStackBlocks} shows the LBR successfully stacking
blocks as modeled with \lr. The process modeled to grasp and stack
a tower of four blocks consists of one task, which in turn consists of
four skills of one to six actions each.
\begin{ownfigure}
\centering
\includegraphics[width=.40\textwidth]{gen/UseCaseStackBlocks}
\caption{An LBR robot stacking colored blocks.}
\label{fig:UseCaseStackBlocks}
\end{ownfigure}
Another typical assembly process is plugging workpieces onto one another.
We therefore modeled a task for the LBR to plug safety sockets on a top-hat
rail~\cite{THR+13}. Figure~\ref{fig:UseCaseCapRail} shows the
performance of the LBR. The executed process model consists of a single
task that is repeated once per safety socket. The task itself consists
of five skills of up to five actions. Modeling assembly processes for
the LBR was straightforward and we could re-use tasks, skills, and
actions intuitively. We also observed that most re-use took place at
the skill level, where, from a human perspective, simple behavior was
composed from actions. Skills regarding movement and grasping were
re-used most often.
\begin{ownfigure}
\centering
\includegraphics[width=.48\textwidth]{gen/UseCaseCapRail}
\caption{An LBR plugging safety sockets on a top-hat rail.}
\label{fig:UseCaseCapRail}
\end{ownfigure}
\subsection{Lego NXT Logistics Tasks}
We also deployed \lr to Lego Mindstorms NXT robots to evaluate its usage
in different use cases. The Lego robots are designed for education and
easy access to robotics. Consequently, their hardware is restricted: out
of the box, there are neither laser scanners, nor compliant
manipulators. As \lr is not tied to platforms providing such hardware,
we designed a clean up scenario and modeled the processes accordingly.
In this scenario, a robot explores a fixed area while searching for
colored blocks. Whenever a block is detected, it is gripped and
collected in a container. The robot consists of a base with four wheels
and a manipulator (Fig.~\ref{fig:NXT}). The base uses a light sensor to
ensure it stays within the defined area's boundaries, a front-mounted
distance sensor to detect blocks, and a manipulator to collect blocks.
The manipulator uses a color sensor to detect the blocks' colors.
\begin{ownfigure}
\centering
\includegraphics[width=.48\textwidth]{gen/NXT}
\caption{The Lego Mindstorms robot to perform clean up tasks.}
\label{fig:NXT}
\end{ownfigure}
The \lr process to clean up colored blocks consists of three tasks, six
skills, and $14$ actions. Figure~\ref{fig:ProcessCollectBlueObjects}
depicts the structure of the process \code{CollectBlueObjects}, which
contains a task \code{LookForObjects}, and a skill
\code{DriveToNextObject}. Skills and actions interface robot hardware
via the leJOS Java operating system\footnote{leJOS NXJ website:
\url{http://www.lejos.org/}} as robot API.
Due to the limited memory of the Mindstorms robots, re-using the code
generator used with the LBR robot was not feasible: the code generated
for the LBR produced too many artifacts for the Mindstorms robot's memory
to hold. Instead, we developed a new code generator for the same RTS.
Due to the modularity of \lr, (a) integrating code generators is
straightforward and (b) the transformation from \lr models into SCs is
independent of subsequent code generation. Therefore, the new code
generator only translates SCs to Java. However, the current Mindstorms
version EV3 provides enough memory to re-use the same code generator as
used with the LBR.
\begin{ownfigure}
\centering
\includegraphics[width=.48\textwidth]{gen/CollectBlueObjects}
\caption{Process \code{CollectBlueObjects} with contained
task \code{LookForObjects} and skill \code{DriveToNextObject}.}
\label{fig:ProcessCollectBlueObjects}
\end{ownfigure}
\subsection{KUKA iiwa Assembly Tasks}
We also applied \lr to assembly tasks with a KUKA iiwa robot in a case
study with $10$ participants. The $3$ female and $7$ male participants
were between $20$ and $59$ years old and had different degrees of
expertise with model-driven engineering, robot programming, \lr, tablet
computer usage and the iiwa robot. For instance, $60$\% of the
participants had ``no'' previous experience with the iiwa robot and
$20$\% had ``little'' previous experience with it.
The participants were given a task and had to answer a questionnaire
afterwards. To fulfill the task, the participants had to pick up a
lightbulb from its initial position, move it to a thread, screw it into
the thread, and activate it via a switch. \autoref{fig:ScrewingTheLamp}
shows the initial setup with both lightbulb positions. To achieve this,
the participants were introduced to the concepts of \lr, the iiwa, and
the tablet UI before they started modeling the task's solutions. The
introduction took between $10$ minutes and $1$ hour, depending on the
participant's previous knowledge.
\begin{ownfigure}
\centering
\includegraphics[width=.48\textwidth]{gen/ScrewingTheLamp}
\caption{The case study setup with the lightbulb's initial
position at the top left and the target thread at the bottom right.}
\label{fig:ScrewingTheLamp}
\end{ownfigure}
To model the required process and tasks, the participants used a
graphical editor displayed in \autoref{fig:EditorAtRuntime} on a tablet
computer. In this setup, all skills and actions were provided to the
participants, thus no robot API knowledge was required. This corresponds
to the idea that the robot expert provides skills and tasks, and the
factory floor worker merely combines these. In the end, all participants
completed the task. The fastest participant required $45$ minutes to
complete the entire case study, which included comprehending the task
description and available skills, teaching relevant poses to the robot,
and solving the task. The slowest participant required $2$ hours.
\begin{ownfigure}
\centering
\includegraphics[width=.48\textwidth]{gen/EditorAtRuntime}
\caption{The tablet-based editor used to model the process and
tasks to pick up, deliver, and screw in the lightbulb.}
\label{fig:EditorAtRuntime}
\end{ownfigure}
This case study focused on applying \lr rather than developing
low-level skills or actions for it, and thus reflected its intended usage
by factory floor workers. The study, however, is biased, as $40$\% of the
participants had at least mediocre or good programming skills. As
expected, these participants finished faster than the others. In
consequence, a future case study will work with participants with
little or no programming knowledge only. Nonetheless, even participants
without programming knowledge were confident as their feedback included
that \lr allowed ``easy robot programming'' and even ``enabled
untrained users to use robots as tools''.
\section{Discussion and Conclusion}
\label{sec:Conclusion}
\lr is a high-level robot programming toolchain suitable for both domain
experts and robot experts. Due to the abstraction of \lr,
processes and tasks can easily be re-used with different robots. Skills
and actions are tied to specific platforms but can easily be re-used for
different assembly processes. If the new platform represents the same kind of
robot (e.g., an LBR with seven degrees of freedom), only the parametrization of
the used actions needs to be reconfigured at model level. The \lr toolchain
furthermore enables re-use of code generators and run-time systems with compatible
robots and supports development and execution of assembly tasks with powerful editors.
Code generators can be re-used as long as target-specific technical
restrictions, such as available memory or the target language, are
met. The adaptation of the provided robot API has to be performed per robot and
API version used. Referring to \autoref{fig:Toolchain}, the skill library
provider and the code generator developer need to perform target-specific
adaptation, while the run-time system developer and the application modeler can
focus on a target-independent development.
Case studies indicate that \lr helps to improve development of robotic
(assembly) processes. Nonetheless, the case studies pointed out issues:
currently, neither synchronous execution of tasks and skills, nor
parametrization of tasks with skills are supported. We will address the
issue of synchronous execution of tasks and skills and examine whether
such parametrization is useful without raising additional complexities.
Actions currently reference a single method of the robot's interface.
While facilitating re-use, this also leads to an increased number of
actions. In the future, we will also perform further case studies with
different robots and differently skilled users.
While the combined graphical and textual editor is helpful when modeling
tasks, it currently uses SWT to render them. Unfortunately, we have
experienced performance issues for large (more than 40 nodes) assembly
processes. We will therefore switch to a new rendering engine, and we
will also examine whether on-line modeling, as mentioned in
Sect.~\ref{sec:LR2}, can be realized. Reasoning with tasks and skills
will be examined as well. With the strong formalism of preconditions and
postconditions, reasoning about the next action to be executed seems
useful to assist factory floor workers in modeling robot tasks.
Therefore we will examine how to integrate \lr with a model of the
environment, which, together with the actions, serves as the knowledge
base for assembly process reasoning.
With these models, the robot may react more dynamically to events and
thus increase the flexibility of assembly processes.
\subsection{Distribution-Dependent Implicit Bias}
We next discuss our second set of results, which argue about \emph{distribution-dependent regularization}. Here we want to study whether, for a given distribution, the set of solutions to which SGD converges has some meaningful structure from which we can argue why it generalizes.
Note that, so far, our problem instances considered only a single function and the results were applicable to GD as well. Here, though, in the distribution-dependent setting, such an example cannot work. Indeed, given a single function as a problem instance, SGD behaves deterministically and the solution it chooses is a unique solution which trivially generalizes.
\paragraph{No strongly-convex distribution-dependent bias.}
We next discuss our argument that rules out a strongly convex regularizer, even if it may depend on the distribution at hand. We again utilize the property that a strongly convex regularizer obtains approximately minimum solutions only on a small diameter around the unique minimum.
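The property invoked here can be made precise via the following standard bound (our addition, not taken from the paper's proofs): if $r$ is $\lambda$-strongly convex with minimizer $\mathbf{w}^\star$, then

```latex
% Standard consequence of \lambda-strong convexity:
\[
r(\mathbf{w}) \;\ge\; r(\mathbf{w}^\star) + \frac{\lambda}{2}\,\|\mathbf{w}-\mathbf{w}^\star\|^2,
\qquad\text{hence}\qquad
r(\mathbf{w}) \le \min_{\mathbf{w}'} r(\mathbf{w}') + \epsilon
\;\Longrightarrow\;
\|\mathbf{w}-\mathbf{w}^\star\| \le \sqrt{2\epsilon/\lambda}.
\]
```

so every $\epsilon$-approximate minimizer of $r$ lies in a ball of radius $\sqrt{2\epsilon/\lambda}$ around the unique minimum.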
Our strategy is as follows: assume that there are two samples $S_1$ and $S_2$ such that, when SGD observes $S_1$ it converges to $\mathbf{w}_1$, and when it observes $S_2$ it converges to $\mathbf{w}_2$. However, assume also that $\|\mathbf{w}_1-\mathbf{w}_2\|=\Theta(1)$, and that the empirical losses of $\mathbf{w}_1$ and $\mathbf{w}_2$ are comparable on both samples: namely, $F_{S_1}(\mathbf{w}_1)=F_{S_2}(\mathbf{w}_1)$, and similarly for $\mathbf{w}_2$.
In the case above, as we argued in the distribution-independent case, the algorithm clearly failed to choose the minimizer of the regularization penalty in at least one of the realizations $S_1$ or $S_2$. So if $S_1$ and $S_2$ are equally likely, we obtain that with probability one half (conditioned on the event that we saw $S_1$ or $S_2$) the algorithm failed to minimize $r$.
Now, if the probability of observing such a pair of samples $S_1,S_2$ is positive, then we obtain the desired result.
To generate this setting, we rely on the following auxiliary construction in $\mathbb{R}^2$. We construct two functions such that, if SGD observes the first function at the first iteration, then the gradient points upward; but if SGD observes the second function at the first iteration, then the gradient points sideways. This ensures that in each case SGD will move towards a different solution. If the size of the gradient is constant, then the gap between the two iterates will be $\Theta(\eta)$.
We will also construct the examples in such a way that both points enter a regime where all points obtain the same empirical loss on both functions. This construction can in fact be done using piece-wise linear functions and it is illustrated in \cref{fig:high_dim}. We also give the formal statement here:
\begin{restatable}{lemma}{rtwo}\label{lem:r2}
For every constant $0<c<1$, there are two $1$-Lipschitz functions $f(\mathbf{w}; \pm1)$ over $\mathbb{R}^2$ such that if $\v_1=-\nabla f(0;1)$ and $\v_{-1}=-\nabla f(0;-1)$ and $ c <\frac{1}{2}\eta<1$ then $\|\v_1-\v_{-1}\|\ge 1/4$ and $f(\eta \v_1;z)=f(\eta \v_{-1};z)$ for any $z\in \{-1,1\}$.
\end{restatable}
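One concrete instance consistent with the statement of the lemma is the following (our own illustrative choice of vectors, picked for convenience of verification; the construction used in the actual proof may differ):

```latex
% Illustrative instance (our choice, not necessarily the proof's):
\[
f(\mathbf{w};z) \;=\; \max\{0,\; c - \mathbf{u}_z\cdot \mathbf{w}\},
\qquad
\mathbf{u}_1=(0,1),\quad \mathbf{u}_{-1}=\big(\tfrac{3}{5},\tfrac{4}{5}\big).
\]
% Both functions are 1-Lipschitz. Since c > 0, the origin lies in the
% linear regime, so v_z = -\nabla f(0;z) = u_z and
% ||v_1 - v_{-1}|| = sqrt(10)/5 >= 1/4.
% Since c < \eta/2, we get \eta * u_z . v_{z'} >= (4/5)\eta > c for all
% z, z', hence f(\eta v_1; z) = f(\eta v_{-1}; z) = 0 for both z.
```

Both candidate iterates $\eta\v_1$ and $\eta\v_{-1}$ thus land in the flat regime of both functions, so they are indistinguishable in terms of loss.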
\begin{figure*}[t]
\centering
\includegraphics[width=0.7\textwidth]{figures/high_dim_example.pdf}
\caption{\small%
Depiction of the auxiliary construction in \cref{lem:r2}. The left sketch illustrates the first function, and the right sketch the second function. Both functions are piece-wise linear of the form $x\to \max\{0,\v\cdot x\}$. For the left function, the gradient of the loss points upward at the origin, and the function is flat in a second regime. For the right function, the gradient at the origin points sideways, and the function is again flat in the second regime.
}
\label{fig:high_dim}
\end{figure*}
We next utilize the above construction to generate the problem in $\mathbb{R}^d$. Note that the construction above generates a problem where SGD will converge to one of two different solutions at distance $\Theta(\eta)$ from each other but with the same empirical loss (after one step). Indeed, we just need to randomly pick one of these functions.
We next want to amplify the distance. To do that, we consider $d=\Omega(T)$ Cartesian copies of $\mathbb{R}^2$. Then, with each example, we show one of the functions above in one of the copies.
Assuming enough coordinates were seen only once (which happens w.h.p.), the variance in each sub-plane will be $\Theta(\eta^2)$: if we have $\Theta(T)$ such coordinates, the overall variance is $\Theta(T\eta^2)$, which ensures that, on different realizations of the problem, we converge to solutions that are far apart when $\eta=\Theta(1/\sqrt{T})$.
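Spelled out, the variance amplification is the following short calculation (using only the quantities already defined above):

```latex
\[
\underbrace{\Theta(T)}_{\text{coordinates seen once}}
\;\times\;
\underbrace{\Theta(\eta^{2})}_{\text{variance per sub-plane}}
\;=\;
\Theta(T\eta^{2})
\;=\;
\Theta(1)
\qquad\text{when }\eta=\Theta(1/\sqrt{T}).
\]
```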
\paragraph{SGD might be biased towards statistically-complex sets.}
Next, we derive \cref{thm:nouc} which addresses implicit regularization in a much broader setting. As discussed, here we cannot rule out the existence of an implicit bias; indeed, some form of an implicit bias always exists.
We attempt, though, to understand how the implicit bias can explain generalization.
The result shows that, for any regularizer, the set $K_{S,r}(\mathbf{w}_S)$ of solutions comparable to the one output by SGD, in terms of both the empirical loss and the regularization penalty, can be so large that choosing an arbitrary solution from it can, in principle, lead to over-fitting (over general convex problems). Thus, to argue that the algorithm did generalize, further structure in the problem needs to be taken into account. And this is true for \emph{any} regularizer.
Our construction is similar to the one in \cref{thm:sgdr}, up to some modifications. Let us show that, in the construction above, $K_{S,r}$ will be $(T/6,\Theta(1))$-statistically complex.
This is less than what we actually desire. We, in fact, observed $T$ examples and not $T/6$. Indeed, in the construction above, we showed that if we project the output of SGD to the observed coordinates, we obtain a solution of the form $(\v_{\pm 1},\v_{\pm 1},\cdots, \v_{\pm 1})\in (\mathbb{R}^2)^{T}$, where $\v_1,\v_{-1}$ are as in \cref{lem:r2}.
By projecting this set, it can be seen to be a copy (up to some rescaling) of the normalized unit cube $\mathcal{M}=\{\pm\eta\}\times\{\pm\eta\}\times\cdots\times\{\pm\eta\} \subseteq \mathbb{R}^T$. This is true since $\|\eta \v_{1} - \eta \v_{-1}\| = \Theta(\eta)$.
Here, we rely on a construction by \citet{feldman2016generalization}.
In order to show that uniform convergence is not equivalent to learnability in the convex optimization setting, Feldman showed (in our terminology) that the set $\mathcal{M}\subseteq \mathbb{R}^T$ is $(T/6,1/4)$-statistically complex, if $\eta=\Theta(1/\sqrt{T})$.
As discussed, this is less than what we want, as we actually want a set that is at least $(T,\Theta(1))$-statistically complex. To tackle this, on each iteration we show the learner a loss function over multiple pairs of coordinates.
Namely, if in the example above we drew at each iteration $f(\mathbf{w};z)$ where $z\sim D$, now in each iteration we show the algorithm $\frac{1}{k}\sum_{i=1}^k f(\mathbf{w};z_i)$, where $z_i$ are i.i.d.
This will reduce the step size on each coordinate a little, but if $k$ is constant, the loss we present is still of constant magnitude.
On the other hand, now projecting on observed coordinates, SGD will converge to a solution in $(\v_{\pm 1},\v_{\pm 1},\ldots,\v_{\pm 1})\in (\mathbb{R}^2)^{\Theta(kT)}.$ Thus we only need a constant $k>6$ so that the algorithm will converge to a $(T,\Theta(1))$-statistically complex set.
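The effect of averaging $k$ functions on the per-coordinate step follows directly from linearity of the gradient (a direct consequence of the definitions, not a new claim):

```latex
\[
-\eta\,\nabla\Big(\frac{1}{k}\sum_{i=1}^{k} f(\mathbf{w};z_i)\Big)
\;=\;
\frac{\eta}{k}\sum_{i=1}^{k}\big(-\nabla f(\mathbf{w};z_i)\big),
\]
```

so each of the $k$ coordinate pairs touched in an iteration moves with an effective step size of $\eta/k$, which for constant $k$ is still a constant fraction of $\eta$.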
\subsection{Implicit Bias in Constant Dimension}
We next provide a construction in $\mathbb{R}^2$ that again rules out a class of regularizers, in particular strongly convex regularizers (and more generally, strictly quasi-convex regularizers).
In a similar fashion to previous constructions, we make SGD choose from a set of solutions that exhibit comparable empirical loss. While the dimension of previous constructions depended on $T$, this construction does not. However, for this construction we relax the assumption that the individual $f(\mathbf{w};z)$ are convex; only $F$ remains convex. Note that the learning guarantees of SGD are fully applicable to this setting.
Our construction relies on a 2-dimensional square centered at the origin. Inside the square, SGD performs a simple 2-dimensional random walk; once it exits the square, it continues to perform a random walk in just one dimension (denoted $y$), while the other coordinate (denoted $x$) remains the same. As a result, the optimizer of $F_S$ is independent of $w_x$.
We study the event that $\mathbf{w}$ stays inside the square for enough iterations to ensure that the variance of $w_x$ is larger than some constant, but eventually exits the square, making $F_S$ independent of $w_x$. This results in a set of solutions that share the same empirical error, each of which SGD can converge to.
\section*{Broader Impact}
There are no foreseen ethical or societal consequences for the research presented herein.
\begin{ack}
TK is supported by an ISF grant no.~2549/19 and by the Yandex Initiative in Machine Learning.
RL is supported by an ISF grant no.~2188/20 and partially funded by an unrestricted gift from Google.
Any opinions, findings, and conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of Google.
MF is supported by an ISF grant no.~819/20.
\end{ack}
\section{Introduction}
One of the great mysteries of contemporary machine learning is the impressive success of \emph{unregularized} and \emph{overparameterized} learning algorithms.
In detail, current machine learning practice is to train models with far more parameters than samples and let the algorithm \emph{fit} the data, oftentimes without any type of regularization.
In fact, these algorithms are so overcapacitated that they can even memorize and fit random data. Yet, when trained on real-life data, these algorithms show remarkable performance in generalizing to unseen samples~\citep{NeyshaburTS14,ZhangBHRV17}.
This phenomenon is often attributed to what is described as the \emph{implicit-regularization} of an algorithm~\citep{NeyshaburTS14}.
Implicit regularization roughly refers to the learner's preference for implicitly choosing certain structured solutions \emph{as if} some explicit regularization term appeared in its objective.
As a canonical example, in linear optimization one can show that various forms of gradient descent, an a priori unregularized algorithm, behave identically to regularized risk minimization penalized with the squared Euclidean norm on the parameters~\citep{shalev2011online}.
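For intuition on this canonical example, here is a small numerical sketch (our own illustration, not part of the paper): gradient descent on an underdetermined least-squares problem, started at the origin, converges to the minimum-Euclidean-norm interpolant, i.e., it behaves as if the squared norm were an explicit penalty.

```python
import numpy as np

# Underdetermined least squares: more parameters (10) than samples (3).
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 10))
y = rng.normal(size=3)

# Plain gradient descent on F(w) = 0.5 * ||Xw - y||^2, started at zero.
w = np.zeros(10)
lr = 1.0 / np.linalg.norm(X, 2) ** 2   # safe fixed step size
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y)

# The iterates stay in the row space of X, so GD implicitly selects the
# minimum-norm solution w_min = X^+ y among all interpolating solutions.
w_min = np.linalg.pinv(X) @ y
print(np.allclose(w, w_min, atol=1e-8))
```

Many interpolants exist here (a 7-dimensional affine subspace of them), yet gradient descent deterministically picks the one of smallest Euclidean norm.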
Understanding implicit regularization poses several interesting challenges.
For example: How can we find the implicit bias of a given learning algorithm?
What is the rate of convergence towards the biased solution?
How (and whether) does it govern the generalization of an algorithm?
And when, and what types of, regularizations can account for and explain the generalization in modern-day machine learning?
Towards answering these questions we revisit a fundamental setting that was extensively studied in recent years: Stochastic Convex Optimization (SCO), focusing on the SGD optimization algorithm.
In contrast to most previous work, we do not attempt to identify the implicit bias in specific problems.
Instead, we study these questions in the general case, and we construct examples which rule out the existence of potential regularizers in general.
To some extent, these constructions demonstrate a behavior that might seem counter-intuitive or contradictory to the implicit-bias point of view.
Besides being a well-studied and well-understood model for learning, an important trait of SCO which makes it suitable for our investigation is that learning cannot in general be performed by naive \emph{Empirical Risk Minimization} (ERM). In detail, the work of \citet{shalev2009stochastic} showed the existence of SCO instances where naive-ERM fails but \emph{regularized}-ERM succeeds. Thus, we view SCO as a natural test-bed for exploring the role of regularization and its relation to generalization. Compellingly, the generalization of SGD in SCO is well-established, and we are left with the question of how well can we account for generalization through an investigation of its bias.
\subsection{Contributions}
\paragraph{Implicit distribution-independent bias.}
We begin with a simple construction which demonstrates that SGD does not have any \emph{distribution-independent} implicit bias. To show this, we construct a case where SGD \emph{does not} converge to an (even approximately) Pareto-efficient solution with respect to the empirical loss and a given regularization penalty.
In fact, this result is also true for Gradient Descent over smooth functions. In other words, our construction here involves a distribution supported on a single smooth convex function.
Our result is general and rules out any (reasonable) regularizer from being the implicit bias of SGD in this distribution-independent setting. Since the Euclidean-norm distance is the immediate suspect for the implicit regularization of SGD, the first step towards achieving the result is to rule out that Euclidean norm is the implicit bias of SGD. We thus construct an example of a function with a plateau of minimizers where SGD does not converge to the closest point in Euclidean-norm sense.
While the result might not seem surprising, it is the technical engine behind the further constructions we provide. Prior to this work, \citet{suggala2018connecting} showed that gradient descent with an infinitesimally small step size (that is, gradient flow) might diverge from the closest point; we provide a complementary construction combined with a fully rigorous analysis for fixed-step-size gradient descent.
\paragraph{Implicit distribution-dependent bias.}
Having ruled out the possibility of a problem-independent regularizer, we proceed to study the more compelling \emph{distribution-dependent} implicit regularization. The question here is whether for every distribution over convex functions, we can associate a regularizer $r$ such that SGD tries to (approximately) find a Pareto-efficient solution with respect to $r$ and the empirical loss (notice that we allow the regularizer to depend on the distribution, but \emph{not} on the specific sample received by SGD).
We first show that we can rule out the effect of strongly-convex regularizers in the relevant regime of learning (where the dimension and the number of training examples are of roughly the same order). In fact, we rule out a more general class of regularizers that have large range on sets with large diameters. Namely, in any ball with large diameter the regularizer shows preference towards a certain point.
We then continue and demonstrate a distribution where, given an input sample, there is a very large set of possible solutions that share the same empirical loss and the same regularization penalty, and yet, SGD chooses its solution arbitrarily within this set.
Here, by ``very large'' we mean from a learning-theoretical point of view; namely, this set is large enough so that, in general,
empirical risk minimization restricted to the set will fail (and yet, it appears that this is exactly what SGD does).
In other words, no regularizer $r$ is sufficient for narrowing down the set of possible SGD solutions to the point where non-trivial generalization can be deduced without appealing to other properties of the specific problem.
\paragraph{Implicit bias in constant dimension.}
Several of our constructions are given in high dimension, namely the number of parameters is larger than the number of examples. One could argue that this is the interesting regime; nevertheless, it is still worthwhile to understand the role of implicit bias when the dimension of the problem is smaller than the number of examples.
Here we cannot rule out the role of implicit bias in a similar fashion to before: due to uniform convergence, any algorithm that is constrained to the unit ball will generalize, and this implicit bias is indeed the explanation for that. It is interesting, though, to understand the existence of specific regularizers (such as, e.g., strongly convex regularizers).
While we do not provide an answer to this question, we make an intermediate step. Our final construction is in a slightly relaxed model, where the instances are non-convex but the expected loss function is convex. While this result may be limited because of the non-convexity, we stress that the learning guarantees of SGD are fully applicable to this setting: namely, SGD does learn the problem (as it is convex in expectation).
We show that for any \emph{strictly quasi-convex} regularizer, namely a regularizer that has preference for a single point in any convex regime, the algorithm will not converge to the optimal solution with optimal regularization penalty (even though it converges to a convex domain where seemingly it can improve its parameter choice towards the regularized solution).
\subsection{Related work}
Understanding the implicit bias of learning algorithms and its importance in generalization is a central theme in machine learning and in the study of many classical algorithms~\citep{buhlmann2003boosting, schapire1998boosting, wei2017early}.
Implicit bias has received considerable attention in the past few years.
Starting with \cite{NeyshaburTS14,ZhangBHRV17}, it was suggested that implicit regularization might explain the ability of networks to improve test error by increasing network size beyond what is needed to achieve zero training error.
Subsequently, a line of work has focused on identifying implicit regularization in various problems and domains, e.g., linear and nonparametric regression~\citep{ali2019continuous, raskutti2014early, wei2017early}, matrix factorization~\citep{gunasekar2017implicit, arora2019implicit}, linearly separable data~\citep{soudry2018implicit, gunasekar2018implicit}, as well as deep networks~\citep{neyshabur2017implicit,neyshabur2017geometry} and others~\citep{NacsonLGSSS19,nakajima2010implicit,lin2016generalization,gunasekar2018characterizing}. Our work here can be seen as an attempt to investigate the limitations of implicit regularization.
Most similar to this work, \citet{suggala2018connecting} provides an example of a problem where gradient flow does not converge to the closest Euclidean solution. Here we focus on the more concrete SGD algorithm with a fixed step size and give a finite-time analysis. We are also able to harness our example to derive further constructions that rule out a richer class of implicit-type regularization schemes.
This work can also be seen as an attempt towards separation between \emph{learnability} and \emph{regularization}.
Besides regularization, several other useful notions have been suggested as surrogates of learnability.
Most classically, uniform convergence \citep{blumer1989learnability} has been shown to be equivalent to learnability in the binary, distribution-independent model of PAC learning~\citep{valiant1984theory}. As discussed, \cite{shalev2009stochastic} showed that in the stochastic convex setting naive-ERM fails (but not regularized-ERM), hence learnability and uniform convergence are no longer equivalent. The constructions of \citet{shalev2009stochastic} were later substantially strengthened by \citet{feldman2016generalization}.
More recently, \citet{nagarajan2019uniform} also provided an example that rules out uniform convergence, perhaps in the strictest sense. Their construction, though, does exhibit tangible implicit regularization, which accounts for the generalization of the algorithm.
Another useful notion is the \emph{stability} of a learning algorithm.
Stability is very much related to regularization: e.g., regularizing empirical risk minimization with a strongly convex function induces stability~\citep{bousquet2002stability}, and smoothness can also be harnessed to argue for stability \citep{hardt2015train}.
As such, constructing a convex problem where an algorithm is unstable could also serve as a means to rule out certain types of implicit regularizers.
Our examples are in fact stable, and as such, could also be interpreted as a certain weak separation between stability and regularization.
\section{Preliminaries}
\subsection{The Setup: Stochastic Convex Optimization}
We consider the following standard setting of stochastic convex optimization.
A learning problem consists of a fixed domain $\mathcal{W}$, which for concreteness we assume to be a closed and bounded set in $\mathbb{R}^d$ for some finite $d$; a class of functions $f(\mathbf{w};z)$ that are convex in $\mathbf{w}$; and an unknown distribution $D$ over a random variable $z$.
The objective is to minimize the population loss:
\begin{align*}
F(\mathbf{w}) := \mathbf{E}_{z\sim D}[f(\mathbf{w};z)]
.
\end{align*}
The goal of the learner, given a sample $S= \{z_1,\ldots,z_T\}$ of $T$ i.i.d.~examples from the distribution $D$, is to
return a parameter vector $\mathbf{w}_{S}$ such that
\begin{equation}
\mathbf{E}_{S}[F(\mathbf{w}_S)]< \min_{\mathbf{w}\in \mathcal{W}} F(\mathbf{w})+\epsilon,
\end{equation}
for a desired target accuracy $\epsilon>0$. (The sample size $T$ may be determined based on $\epsilon$.)
We make the following assumptions throughout.
We will generally assume that the functions $f$ are $O(1)$-Lipschitz. Specifically, in all our constructions we will have $\|\nabla_\mathbf{w} f(\mathbf{w};z)\| \le 23$ for all values of $z$ and $\mathbf{w} \in \mathcal{W}$. We will mostly be concerned with the case that $\mathcal{W}$ is a Euclidean ball of radius $r$ around $0$; for concreteness we will mostly take $r=5$. This is just for convenience, and our results clearly apply to any constant-radius ball. Since our main focus in this paper is on impossibility results, fixing the Lipschitz constant and the diameter does not harm the generality of the setup.
We will also discuss strongly-convex functions (or regularizers): we say that a convex function is $\lambda$\emph{-strongly convex} if for any $\mathbf{w}_1,\mathbf{w}_2 \in \mathcal{W}$ we have: $f(\mathbf{w}_1)\ge f(\mathbf{w}_2) +\nabla f(\mathbf{w}_2)^\top(\mathbf{w}_1-\mathbf{w}_2) + \lambda \|\mathbf{w}_1-\mathbf{w}_2\|^2$.
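To make the convention concrete (note the absence of the usual factor $\tfrac12$ on the quadratic term), the following sketch, written by us purely for illustration, numerically checks the displayed inequality for $r(\mathbf{w})=\lambda\|\mathbf{w}\|^2$, which satisfies it with equality:

```python
import numpy as np

def strongly_convex_gap(f, grad_f, lam, w1, w2):
    """f(w1) - [f(w2) + grad_f(w2)@(w1-w2) + lam*||w1-w2||^2]: nonnegative
    for every pair iff f is lam-strongly convex in this paper's convention
    (no factor 1/2 on the quadratic term)."""
    d = w1 - w2
    return f(w1) - (f(w2) + grad_f(w2) @ d + lam * (d @ d))

lam = 0.7
r = lambda w: lam * (w @ w)          # r(w) = lam * ||w||^2
grad_r = lambda w: 2 * lam * w
```

By contrast, a merely convex function such as $\|\mathbf{w}\|$ violates the inequality for some pairs of points, so it is not $\lambda$-strongly convex for any $\lambda>0$.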
\subsection{Gradient Descent and Stochastic Gradient Descent}
The main focus of this paper is the well-known Stochastic Gradient Descent (SGD) algorithm.
Given a sample $S=\{z_1,\ldots,z_T\}$ and a step-size parameter $\eta>0$, SGD initializes at $\textbf{w}^{(1)}=\textbf{0}$ and performs iterations:
\begin{equation}\label{SGD_alg}
\forall ~ t=1,\ldots,T :
\quad
\mathbf{w}^{(t+1)} = \Pi_{W}\big(\mathbf{w}^{(t)}-\eta \nabla f(\mathbf{w}^{(t)}; z_t)\big)~,
\quad
\text{and outputs:}
\quad
\mathbf{w}_S
= \frac{1}{T}\sum_{t=1}^T \textbf{w}^{(t)}
,
\end{equation}
where $\Pi_{W}(\mathbf{w})$ denotes the projection of $\mathbf{w}$ onto the convex set $W$.
The standard SGD analysis guarantees the following (see, e.g.,~\cite{shai_book}):
\begin{theorem*}
Let $B, \rho>0$, let $\mathcal{W}=\{w: \|w\|\le B\}$, and
assume that $F(\cdot)$ is convex and $\|\nabla f(w,z)\| \leq \rho$ for all $z$ and $w\in W$.
Suppose that SGD is run for $T$ iterations on the sample $S=\{z_1,\ldots,z_T\}$ with step size $\eta=\sqrt{B^2/(\rho^2 T)}$.
Then,
\begin{align} \label{thm:sgd}
\mathbf{E}_S[F(\mathbf{w}_S)]-F(\mathbf{w}^\star)
\leq
\frac{B \rho}{\sqrt{T}}
,
\end{align}
where here $\mathbf{w}^\star \in \arg \min_{\mathbf{w} : \|\mathbf{w}\| \leq B} F(\mathbf{w})$.
\end{theorem*}
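As a sanity check, the update rule and guarantee above can be simulated directly. The following sketch (our own illustration on a made-up toy instance, not code from this paper) runs one-pass projected SGD with the fixed step size $\eta=\sqrt{B^2/(\rho^2 T)}$ on $f(\mathbf{w};z)=\tfrac12\|\mathbf{w}-z\|^2$, whose population minimizer is $\mathbf{E}[z]$, and compares the excess risk of the average iterate against the $B\rho/\sqrt{T}$ bound:

```python
import numpy as np

def project_ball(w, radius):
    """Euclidean projection onto {w : ||w|| <= radius}."""
    n = np.linalg.norm(w)
    return w if n <= radius else (radius / n) * w

def projected_sgd(grad, sample, d, radius, eta):
    """One-pass projected SGD: fixed step size, initialized at 0,
    returning the average iterate w_S of the update rule."""
    w = np.zeros(d)
    total = np.zeros(d)
    for z in sample:
        total += w                      # accumulate w^(1), ..., w^(T)
        w = project_ball(w - eta * grad(w, z), radius)
    return total / len(sample)

# Toy instance: f(w; z) = 0.5 * ||w - z||^2, so F is minimized at E[z] = mu.
rng = np.random.default_rng(0)
mu = np.array([0.3, 0.1])
T, B, rho = 2000, 1.0, 2.0
sample = [mu + 0.1 * rng.standard_normal(2) for _ in range(T)]
w_S = projected_sgd(lambda w, z: w - z, sample, d=2, radius=B,
                    eta=np.sqrt(B**2 / (rho**2 * T)))
excess = 0.5 * np.linalg.norm(w_S - mu) ** 2   # = F(w_S) - F(mu)
```

On this easy quadratic instance the excess risk comes out well below the worst-case $B\rho/\sqrt{T}$ guarantee.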
We will also discuss in this paper the procedure of \emph{Gradient Descent} (GD). Given an objective function $F$, GD performs the following update steps:
\begin{equation} \label{GD_alg}
\forall ~ t=1,\ldots,T :
\quad
\mathbf{w}^{(t+1)} = \Pi_{W}\big(\mathbf{w}^{(t)} - \eta \nabla F(\mathbf{w}^{(t)})\big)~,
\quad
\text{and outputs:}
\quad
\mathbf{w}_F
= \frac{1}{T}\sum_{t=1}^T \textbf{w}^{(t)}
.
\end{equation}
In our context, given a sample $S=\{z_1,\ldots, z_T\}$, the gradient descent algorithm takes steps using the full gradient with respect to the empirical loss, defined as $F_{S}(\mathbf{w})=\frac{1}{T}\sum_{t=1}^T f(\mathbf{w};z_t)$. We will then write $\mathbf{w}_S$ as shorthand for $\mathbf{w}_{F_S}$.
\ignore{
\begin{equation} \label{GD_alg}
\forall ~ t=1,\ldots,T :
\quad
\mathbf{w}^{(t+1)} = \Pi_{W}\left(\mathbf{w}^{(t)} - \eta \nabla F_S(\mathbf{w}^{(t)})\right)~,
\quad
\text{and outputs:}
\quad
\mathbf{w}_S
= \frac{1}{T}\sum_{t=1}^T \textbf{w}^{(t)}
.
\end{equation}
where here $F_{S}(\mathbf{w}) = \frac{1}{T}\sum_{t=1}^T f(\mathbf{w};z_t)$ is the \emph{empirical loss} of $\mathbf{w}$.}
\paragraph{Other variants of SGD.}
While the above version of SGD is perhaps the most standard one, there are other variants that can be considered.
For example, it is common to consider, instead of a fixed step-size, a decaying step-size (where $\eta$ may depend on $t$), as well as taking the last SGD iterate rather than the average iterate.
We focus on the version in \cref{SGD_alg} for several reasons. First, taking the last iterate is not always justified and attains suboptimal rates (see \cite{shamir2013stochastic}). Second, the algorithm in \cref{SGD_alg} is also the more challenging variant to argue about: averaging with a small fixed step size induces bias towards the initialization, and as such is more strongly regularized (indeed, the constructions we provide here can be readily modified to address a decaying step-size or the last iterate\footnote{In fact, the proofs become significantly simpler; for example, in the proof overview we actually consider the last iterate for simplicity.}). Another variant to consider is \emph{unprojected} gradient descent; convergence bounds for this variant depend on the norm of the benchmark solution \citep{shai_book, shalev2011online}. Again, we note that in all of our constructions we pick a domain large enough so that projections in fact never take place.
Nevertheless, it could be an interesting future work to derive a natural variant of SGD whose implicit regularization properties induce the desired generalization guarantees.
\subsection{Regularized (Structural) Risk Minimization}
Another well-studied approach to learning is through \emph{regularization}. Regularized Empirical Risk Minimization (ERM) solves the following minimization problem:
\begin{align}\label{eq:rerm}
\widehat{w}_\lambda
=
\arg \min_{w \in \mathcal{W}} \big\{ F_S(\textbf{w})+\lambda r(w) \big\},
\end{align}
where $\lambda \in \mathbb{R}^+$ and $r: \mathcal{W} \to \mathbb{R}^+$ is a regularization function. When $f(\mathbf{w};z)$ is Lipschitz-bounded and $r(\mathbf{w})$ and $\lambda$ are properly chosen, this method leads to a principled learning algorithm.
For example, in the case $r(\mathbf{w})=\|\mathbf{w}\|^2$, \citet{bousquet2002stability} showed that with the correct choice of $\lambda$, Regularized ERM is guaranteed to generalize.
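For intuition, with the squared loss and $r(\mathbf{w})=\|\mathbf{w}\|^2$ the program in \cref{eq:rerm} is ridge regression and admits a closed form. The sketch below (our own illustration on synthetic data, not tied to the paper's constructions) shows how increasing $\lambda$ trades empirical fit for a smaller norm:

```python
import numpy as np

# Regularized ERM for the squared loss f(w; (x, y)) = (x @ w - y)^2 with
# r(w) = ||w||^2.  Setting the gradient of F_S(w) + lam * r(w) to zero
# gives the ridge-regression closed form below.
def regularized_erm(X, y, lam):
    T, d = X.shape
    return np.linalg.solve(X.T @ X / T + lam * np.eye(d), X.T @ y / T)

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(20)
w_small_lam = regularized_erm(X, y, 1e-8)   # essentially unregularized ERM
w_big_lam = regularized_erm(X, y, 1.0)      # heavily regularized
```

As $\lambda\to 0$ the solution approaches the unregularized least-squares ERM, while larger $\lambda$ strictly shrinks the norm of the returned predictor.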
\section{Regularization}
We next discuss the different classes of regularizers we will consider in this paper. While some of the results we provide make little to no assumptions on the regularizers, we sometimes wish to add further structure and rule out specific classes as the implicit bias of SGD; in other cases we wish to explain formally in what sense a regularizer fails to provide a comprehensive explanation of the implicit bias.
Most generally, a regularizer is any function $r:\mathcal{W}\to \mathbb{R}_{+}$. We will however make the following basic assumptions on the regularizers, to avoid degenerate cases:
\begin{itemize}
\item $\min_{\mathbf{w}\in \mathcal{W}} r(\mathbf{w})=0$;
\item $r$ is non-constant on $\mathcal{W} \setminus \{0\}$;
\item $r$ is lower semi-continuous; namely, for every point $\mathbf{w} \in \mathcal{W}$ and every $\epsilon_0>0$ there exists a neighborhood $B_{\delta_0}(\mathbf{w})=\{\u: \|\mathbf{w}-\u\|<\delta_0\}$ such that $r(\u)> r(\mathbf{w})- \epsilon_0$ for every $\u\in B_{\delta_0}(\mathbf{w})$.
\end{itemize}
Any regularizer that satisfies these properties will be called an \emph{admissible regularizer} (or, in short, a regularizer). The first assumption is only a normalization. As for the second assumption, the algorithms we consider are all initialized at zero and may prefer the zero solution if it minimizes the empirical error; we are mostly concerned with the implicit bias in more involved cases than that.
The last assumption is perhaps the strongest, but it is intended to rule out pathological examples. For example, one could consider a regularizer $r$ which is $0$ at almost all points but is $1$ on a negligible, dense set of points that SGD would never reach.
One could argue that such an $r$ is an implicit bias of SGD; however, this does not capture our intuition of a regularizer. Thus, we add the assumption that a point penalized by the regularizer should also be penalized under small perturbations.
\subsection{Strongly-convex Regularizers}
While some of the results we will present are given for general (admissible) regularizers, it is natural and expected to study more structured classes of regularizers and ask if they induce the generalization properties of a certain algorithm.
One natural family of such regularizers is the class of $\lambda$-\emph{strongly-convex} functions, which we will also assume are $1$-Lipschitz.
As discussed in length, many of the prominent generalization results are provided in the context of strongly convex regularizers~\citep{bousquet2002stability,shalev2009stochastic}.
Strongly-convex regularizers come with a very natural property which allows us to rule out such regularizers on certain problems: a strongly convex function always attains a \emph{unique} minimizer on any convex set.
As such we can always identify if the output of an algorithm minimizes (approximately) the strongly-convex regularizer, by comparing the output to the minimizer of the regularizer over the given empirical risk.
\subsection{General (Admissible) Regularizers}
Studying implicit bias that does not stem from a strongly convex regularizer is no less important; however, it becomes much more subtle to rule out the latter.
Once the regularizer is allowed to have non-unique minima we should be more careful in stating what we mean when we say it \emph{does not explain generalization}.
In fact, almost any plausible algorithm can be said to be implicitly biased on any given distribution.
For example, the fact that the algorithm is constrained to the unit ball is a form of algorithmic bias; but as was shown by \citet{shalev2009stochastic}, it cannot explain generalization in the SCO setting.
Towards clarifying what we mean by ``explain generalization'', let us consider the following: given a regularizer $r$ and an algorithm $\mathcal{A}$ that outputs a solution $\mathcal{A}(S)$ on a sample $S$, define the set of ``competitive'' solutions
\begin{equation} \label{eq:ksr}
\begin{aligned}
K_{S,r}(\mathcal{A}(S))
=
\{\mathbf{w} \in \mathcal{W} \;:\; F_S(\mathbf{w}) \le F_S(\mathcal{A}(S)) \;\;\text{and}\;\; r(\mathbf{w}) \le r(\mathcal{A}(S))\}
.
\end{aligned}
\end{equation}
For shorthand, we will also use the notation $K_{S,r}(\mathcal{A})$ instead of $K_{S,r}(\mathcal{A}(S))$.
In words, $K_{S,r}(\mathcal{A})$ is the set of solutions that are comparable with (or better than) the output of~$\mathcal{A}$, with respect to both the empirical loss and the regularization penalty.
Consider, for example, a regularized ERM as in \cref{eq:rerm}: then $K_{S,r}(\mathcal{A})$ consists of \emph{all} minimizers of \cref{eq:rerm} with comparable regularization penalty. In particular, with a strongly-convex regularizer $r$, one can observe that the set $K_{S,r}(\mathcal{A})$ is in fact a single \emph{unique} solution.
More generally, if a regularizer~$r$ is said to be the implicit bias of an algorithm $\mathcal{A}$, and as such it explains the generalization of the algorithm, it is expected that the set $K_{S,r}(\mathcal{A})$ would be ``small'' in the sense that choosing an arbitrary solution from it should provide principled guarantees.
If we cannot attain such guarantees without further investigation of the problem and algorithm, we argue that the regularizer does not provide a comprehensive explanation of generalization. This motivates the following definition for studying more general regularizers than, say, strongly convex ones:
\begin{definition}\label{def:scomplexity}
Let us say that a set $K$ is $(T,\epsilon_0)$-\emph{statistically complex} if for some distribution $D$ over $1$-Lipschitz convex functions, given $T$ i.i.d.~samples, with probability at least $1/10$ there exists $\mathbf{w}\in K$ such that
\begin{align*}
\frac{1}{T}\sum_{i=1}^T f(\mathbf{w},z_i)=0,
\quad\text{yet}\quad
\mathbf{E}_{z} [f(\mathbf{w},z)]>\epsilon_0.
\end{align*}
\end{definition}
Note that the statistical complexity of the set $K$ is measured with respect to an \emph{arbitrary} distribution $D$ over convex functions: this captures our requirement that the set $K_{S,r}(\mathcal{A})$ should explain generalization \emph{without further investigation of the problem}. In other words, it could be that for a correct choice of regularizer on a specific problem, all the models in $K_{S,r}(\mathcal{A})$ will generalize. What we want, however, is to ensure that the generalization does not stem from further structure in the problem that is not captured by the regularizer. Thus, we require the set to be ``simple'' in the sense that, on any distribution over convex functions, an arbitrary empirical risk minimizer chosen from it still generalizes.
\section{Results}
\subsection{Distribution Independent Implicit Regularization}
We start with the natural question of whether there is some distribution-independent implicit regularization promoted by SGD. As a warm-up, we begin by ruling out the existence of a distribution-independent strongly convex regularizer that plays the role of SGD's implicit bias. This family of regularizers is already very interesting and has been studied extensively in the literature on stochastic convex optimization~\citep{bousquet2002stability,shalev2009stochastic}.
\begin{theorem}\label{thm:gdwarmup}
Let $\mathcal{W}=\{\mathbf{w}: \|\mathbf{w}\|\le 5\}$. For every $1$-Lipschitz and $\lambda$-strongly convex $r$, there is a distribution $D_r$ over
$1$-Lipschitz and $1$-smooth functions over $\mathcal{W}$,
and $\mathbf{w}_r \in \mathcal{W}$ such that, with probability $1$,
SGD with any step size $1/T^2 < \eta < 1$ over an input sample $S$ of size $T = \Omega(1/(\lambda \eta))$ outputs $\mathbf{w}_S$ such that:
\begin{alignat*}{3}
&F_S(\mathbf{w}_r) \le F_S(\mathbf{w}_S),~\quad~
\text{and}\quad
&r(\mathbf{w}_r) \le r(\mathbf{w}_S) - \Theta(\lambda)~.
\end{alignat*}
\end{theorem}
In words, for any strongly convex regularizer there exists a problem instance where SGD chooses a solution that is sub-optimal both in empirical error and in regularization penalty.
The last result can be extended to general (admissible) regularizers. Here, the rate of divergence from a Pareto optimal solution depends on the structure of the regularizer $r$.
This dependence of the divergence rate on the regularizer $r$ is unavoidable: if we consider a regularizer $r$ such that $r\approx 0$, it is not hard to see that it takes SGD longer to become $r$-suboptimal.
\begin{theorem} \label{thm:gdr}
Let $\mathcal{W}=\{\mathbf{w}: \|\mathbf{w}\|\le 5\}$. For every admissible regularizer $r$, there are constants $c_{r}>0$, a distribution $D_r$ (over $1$-Lipschitz and $1$-smooth convex functions), and $\mathbf{w}_r\in \mathcal{W}$ such that, with probability $1$ over the input sample $S$, SGD with any step size $1/T^2<\eta<1$ and sample size $T_r= \Omega_r (1/\eta)$ outputs $\mathbf{w}_S$ such that:
\begin{alignat*}{3}
&F_S(\mathbf{w}_r) \le F_S(\mathbf{w}_S),~
\quad \text{and}\quad ~
r(\mathbf{w}_r) \le r(\mathbf{w}_S)-c_r~.
\end{alignat*}
\end{theorem}
The $\Omega_r(\cdot)$ notation hides constants that may depend on the regularizer $r$. The dependence on the regularizer is expected here: we would need a very high level of accuracy to rule out, for example, a nearly-constant regularizer.
\subsection{Distribution-Dependent Implicit Regularization}
Having ruled out a class of implicit regularizers in the distribution-independent model, we next move on to discuss the possibility of distribution-dependent regularizers.
\begin{theorem}\label{thm:sgdr}
For every $T\ge 1$, a constant $C>2$ and dimension $d> T/10$: there exists a distribution $D$ over $1$-Lipschitz convex functions over $\mathcal{W}=\{\mathbf{w}: \|\mathbf{w}\|\le 1\}\subseteq \mathbb{R}^d$, such that if we run SGD with learning rate $1/T^2<\eta \le C/\sqrt{T}$ over a sample set of size $T$, then for any $1$-Lipschitz, $\lambda$-strongly convex regularizer $r$, with probability $0.1$ over the sample, SGD outputs $\mathbf{w}_{S}$ for which there is $\mathbf{w}^\star\in \mathcal{W}$, such that
\begin{align*}
F_{S}(\mathbf{w}^\star) &\leq F_{S}(\mathbf{w}_S),~
\quad
\text{and}\quad
r(\mathbf{w}^\star) \leq r(\mathbf{w}_S) - 10^{-2}\frac{\lambda T\eta^2}{C}~.
\end{align*}
\end{theorem}
Utilizing a construction of a statistically complex set due to \citet{feldman2016generalization}, we can also obtain the following result:
\begin{theorem}\label{thm:nouc} For every $T\ge 1$, a constant $C>2$ and dimension $d\ge T/10^5$: there exists a distribution $D$ over convex $1$-Lipschitz functions over $\mathcal{W}=\{\mathbf{w}:\|\mathbf{w}\|\le 1\}\subseteq \mathbb{R}^d$, such that if we run SGD with stepsize $1/T^2 <\eta \le C/\sqrt{T}$ over a sample set of size $T$, then for any regularizer $r$ we have that with probability at least $1/10$ over the sample, the set $K_{S,r}(\mathbf{w}_{S})$ is $\left(2T,10^{-5}\frac{T\eta^2}{C}\right)$-statistically complex.
\end{theorem}
In words, \cref{thm:nouc} asserts that for a certain given distribution $D$ the output of SGD cannot be interpreted as coming from a ``small'' structured family of solutions that would generalize regardless of other specialized properties of the particular learning problem.
The requirement that $T\le O(d)$ is tight. Note that for a sample $S$ of size $T=O(d/\epsilon^2)$, by a standard covering argument, one can show that the set $K_{S,r}(\mathcal{W})$ is not $(2T,\epsilon)$-statistically complex (see, for example, Theorem 5 of \cite{shalev2009stochastic}). In particular, since $K_{S,r}(\mathbf{w}_S)\subseteq K_{S,r}(\mathcal{W})$, we obtain an upper bound on the statistical complexity of the given set.
\subsection{Implicit Bias in Constant Dimension}
In the results above we provided constructions in spaces with more parameters than samples. We next discuss the case $d \ll T$, which is interesting in certain contexts.
Regarding \cref{thm:nouc}, we again point out that such a result cannot hold in this regime: uniform convergence over the unit ball applies, so restricting an algorithm to choose a solution in the unit ball is an inductive bias that does provide generalization guarantees. But what about \cref{thm:sgdr}? It would be interesting to know whether one can rule out regularizers that are not as benign as the unit ball.\footnote{We treat a set $K$ as a regularizer by identifying $K$ with the regularizer $r$ such that $r(\mathbf{w})=1$ if $\mathbf{w}\notin K$ and $0$ otherwise.}
We do not know the answer to this question and leave it as an open problem. Nevertheless, we can provide the following intermediate result in a slightly relaxed setting, where the instances may be non-convex (and in fact non-Lipschitz) but the expected loss function is convex, and at each iteration the learner observes a bounded gradient $\|\nabla f(\mathbf{w},z)\|\le 1$. Thus, SGD's learning guarantees still apply.
We state the next result for a slightly larger class of regularizers than merely convex ones. Recall that a function $f$ is called \emph{quasi-convex} if $f(\lambda x+ (1-\lambda)y)\le \max \{f(x),f(y)\}$ for every $0\le \lambda\le 1$ and $x,y\in \mathcal{W}$, and \emph{strictly} quasi-convex if $f(\lambda x+ (1-\lambda)y)< \max \{f(x),f(y)\}$ for every $0<\lambda<1$ and $x\neq y$.
\begin{theorem}\label{thm:nonconvex}
Let $\mathcal{W}=\mathbb{R}^2$.
There exists a distribution $D$ over, not necessarily convex, functions in $\mathbb{R}^2$ such that $\mathbf{E}_{z}[f(\mathbf{w};z)]=0$ for every $\mathbf{w}\in \mathcal{W}$,
and for every strictly quasi-convex regularizer $r$, and for large enough $T$, if $\eta=\Theta(1/\sqrt{T})$ then with some positive probability, $\Theta(1)$, there exists $\mathbf{w}^\star$ such that:
\begin{align*}
F_S(\mathbf{w}^\star) \leq F_S(\mathbf{w}_S);
\qquad
r(\mathbf{w}^\star) < r(\mathbf{w}_S);
\qquad
\|\mathbf{w}_S-\mathbf{w}^\star\| =\Theta(1).
\end{align*}
\end{theorem}
\section{Constructions}
\fullversion{Here we give a high level description of our first construction which rules out any norm-based distribution-independent regularizers and any strongly convex distribution-independent regularizer (\cref{thm:gdwarmup}). This construction is also the basis of the rest of the results (\cref{thm:gdr,thm:sgdr,thm:nouc,thm:nonconvex}).
The other constructions, as well as the full proofs, are provided in the full-version \cite{fullversion}.}{Here we give a high level description of the constructions as well as the proofs of the main results.}
We note that for simplicity of exposition, the following description refers to the last iterate, but our full proofs refer to \cref{SGD_alg} (i.e., the algorithm that outputs $\mathbf{w}_S = \smash{\frac{1}{T}\sum_{t=1}^T \textbf{w}^{(t)}}$).
\subsection{Distribution Independent Regularization}
Our constructions build upon the following class of functions in $\mathbb{R}^2$. Let $A$ be a set of the form $\{(\alpha,\theta): 0 \le \alpha \le b\}$, where $\theta$ and $b$ are parameters of the set, and let $\Sigma$ be a PSD matrix. We then consider the function $f_{A,\Sigma}$ defined as follows:
\begin{align}\label{eq:f_overview}
f_{A,\Sigma}(\mathbf{w})
=
\tfrac12 \min_{\v\in A} \big\{ (\mathbf{w}-\v)^\top\Sigma (\mathbf{w}-\v) \big\}
.
\end{align}
One can observe that these functions are convex; further, the gradient of $f_{A,\Sigma}$ at a point $\mathbf{w}$ equals
\begin{align}\label{eq:grad}
\nabla f_{A,\Sigma}(\mathbf{w})
=
\Sigma(\mathbf{w}-\v(\mathbf{w}))
,
\qquad\text{where}\qquad
\v(\mathbf{w})
=
\argmin_{\v\in A} \{ (\mathbf{w}-\v)^\top \Sigma (\mathbf{w}-\v) \}
.
\end{align}
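Since the minimization over $A$ is a one-dimensional quadratic in $\alpha$, both $f_{A,\Sigma}$ and the gradient formula in \cref{eq:grad} can be evaluated in closed form. The following sketch (our own illustration, not code from this paper) does exactly that, and can be checked against finite differences:

```python
import numpy as np

def f_and_grad(w, Sigma, theta, b):
    """f_{A,Sigma}(w) = 0.5 * min_{v in A} (w-v)^T Sigma (w-v) for
    A = {(alpha, theta) : 0 <= alpha <= b}, with gradient Sigma (w - v(w)).
    The inner problem is quadratic in alpha, so its minimizer is the
    unconstrained stationary point clipped to [0, b]."""
    u = w - np.array([0.0, theta])
    alpha = np.clip((Sigma @ u)[0] / Sigma[0, 0], 0.0, b)
    diff = w - np.array([alpha, theta])
    return 0.5 * diff @ Sigma @ diff, Sigma @ diff
```

By Danskin's theorem, the envelope expression $\Sigma(\mathbf{w}-\v(\mathbf{w}))$ for the gradient is valid wherever the inner minimizer is unique, which holds here whenever $\Sigma_{11}>0$.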
\paragraph{Warm-up: GD need not converge to a minimal-norm solution.}
We start by showing how to construct a function (of the type in \cref{eq:f_overview}) on which GD does not converge to the minimal-norm solution. Let us take the concrete case where
\begin{align*}
A
=
\{(\alpha,1): 0\le \alpha < \infty\}
,
\qquad
\Sigma = \begin{pmatrix}1 & \tfrac12 \\ \tfrac12 & 1\end{pmatrix}
.
\end{align*}
We will suppress the dependence on $A$ and $\Sigma$ and simply write~$f$.
The main observation is that the GD trajectory on $f$ has two phases.
In the first phase, the closest point to $\smash{\textbf{w}^{(t)}}$ (with respect to the $\Sigma$-norm) is at the boundary of $A$ (i.e., $\alpha=0$). In this phase, $\smash{\textbf{w}^{(t)}}$ moves ``towards'' the interior of the interval, namely $\smash{w^{(t)}_1}$ increases (see \cref{eq:grad}). By the end of this phase, $\smash{w^{(t)}_1}$ is sufficiently large irrespective of the step size $\eta>0$.
The second phase starts when $\textbf{e}_2 \equiv (\begin{smallmatrix} 0 \\1 \end{smallmatrix})$ stops being the closest point, and the closest point to $\smash{\textbf{w}^{(t)}}$ lies in the interior of the interval. One can show that in this phase the gradient is orthogonal to $\textbf{e}_1$, so the update moves $\smash{\textbf{w}^{(t)}}$ straight upward and $\smash{w^{(t)}_1}$ does not decrease; overall the trajectory converges to a point away from $\textbf{e}_2$, the minimizer closest to $0$ in Euclidean distance.
To see that when $\v(\mathbf{w})$ lies in the interior of $A$ we indeed have $\nabla f(\mathbf{w}) \perp \textbf{e}_1$, consider the scalar function $g(a) = (\mathbf{w}- (a,1))^\top \Sigma (\mathbf{w}-(a,1))$.
Our assumption is that $g$ attains its minimum at some $v_1>0$. Since the minimum is attained in the interior, the derivative vanishes there: $g'(v_1)= -2(\mathbf{w}-(v_1,1))^\top\Sigma \textbf{e}_1 =0$. Hence, $\nabla f(\mathbf{w})= \Sigma(\mathbf{w}-\v(\mathbf{w})) \perp \textbf{e}_1$. We have described here the trajectory of GD without the projection step; however, one can observe that the algorithm never escapes the $2$-ball, hence projections are never actually applied.
The trajectory of $\smash{\textbf{w}^{(t)}}$ is illustrated in \cref{fig:trajectory1} (green line).
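The two-phase behavior can be reproduced numerically. The following sketch (our own simulation, mirroring the setup of the figures with an illustrative step size) runs unprojected GD on $f_{A,\Sigma}$: with $b=0$, $A$ degenerates to the single point $\textbf{e}_2$ and GD converges to it, while with an unbounded interval GD settles on the line $w_2=1$ at a point a constant distance away from $\textbf{e}_2$:

```python
import numpy as np

SIGMA = np.array([[1.0, 0.5], [0.5, 1.0]])

def grad_f(w, b, theta=1.0):
    """Gradient Sigma (w - v(w)) of f_{A,Sigma}, A = {(a, theta): 0<=a<=b}."""
    u = w - np.array([0.0, theta])
    alpha = np.clip((SIGMA @ u)[0] / SIGMA[0, 0], 0.0, b)
    return SIGMA @ (w - np.array([alpha, theta]))

def gd_limit(b, eta=0.2, T=5000):
    """Unprojected GD from the origin (projections never fire here)."""
    w = np.zeros(2)
    for _ in range(T):
        w = w - eta * grad_f(w, b)
    return w

w_point = gd_limit(b=0.0)      # A is the single point e_2
w_line = gd_limit(b=np.inf)    # A is the full half-line
```

With $b=0$ the limit is $\textbf{e}_2$ itself; with $b=\infty$ the first phase pushes $w_1$ to roughly $0.2$ (for this step size) before it freezes, so GD stops at a global minimizer bounded away from $\textbf{e}_2$. Intermediate values of $b$ move the limit point along the segment, as in the figures.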
\paragraph{No strongly-convex implicit bias (\cref{thm:gdwarmup}).}
\begin{figure}[t]
\centering\small
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/field000.pdf}
\caption{$b=0$;}
\label{fig:field-a}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/field005.pdf}
\caption{$b=0.05$;}
\label{fig:field-b}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/field010.pdf}
\caption{$b=0.1$;}
\label{fig:field-c}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/field025.pdf}
\caption{$b=0.25$.}
\label{fig:field-d}
\end{subfigure}
\caption{\small%
The gradient field of $f_{A,\Sigma}$ (see \cref{eq:f_overview}) for $\theta=1$ and varying values of $b$;
near the origin, gradients (see \cref{eq:grad}) are skewed to the right, which causes GD to diverge from the nearest solution $\textbf{e}_2=(0,1)$.
}
\label{fig:trajectory1}
\end{figure}
\begin{figure}[t]
\centering\small
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/gd-b000.pdf}
\caption{$b=0$;}
\label{fig:trajectories-a}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/gd-b005.pdf}
\caption{$b=0.05$;}
\label{fig:trajectories-b}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/gd-b010.pdf}
\caption{$b=0.1$;}
\label{fig:trajectories-c}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/gd-b025.pdf}
\caption{$b=0.25$.}
\label{fig:trajectories-d}
\end{subfigure}
%
\caption{\small%
Simulation of GD (with step size $\eta=0.2$) on $f_{A,\Sigma}$ for $\theta=1$ and varying values of $b$. We see that GD does not necessarily converge to the nearest solution, and tuning $b$ changes the point towards which it is biased.}
\label{fig:trajectories}
\end{figure}
The construction above is the heart of most of our results.
Let us illustrate how it rules out a strongly convex regularizer (in the distribution-independent setting), yielding \cref{thm:gdwarmup}.
The key property of strongly-convex regularizers is that on any convex set they have a unique minimizer; moreover, two far-away points cannot simultaneously attain a close-to-minimal value. This is in fact the only property we use. Thus, our result extends to any regularizer that acts as a ``tie-breaker'', namely one that always prefers a single unique solution among a class of possible solutions with large diameter.
The construction above allows us to generate two instances of convex learning problems on which SGD converges to two far-away points. The first instance uses the standard Euclidean distance: we take a function $f_{1}$ of the form in \cref{eq:f_overview}, with $\Sigma$ the identity and $A$ with boundaries $(-\infty,\infty)$. In this case, SGD is biased towards the nearest solution $\textbf{e}_2=(0,1)$.
The second instance, $f_2$, is the construction above where SGD is biased towards another point on the interval (see \cref{fig:trajectories-b,fig:trajectories-c,fig:trajectories-d}).
Now both points are global minima of both $f_1$ and $f_2$; hence, if SGD were implicitly biased towards solutions with minimal regularization penalty $r$, we would have $r(\textbf{e}_2)= r(\v)$, where $\v$ is the choice of SGD when it observes $f_2$.
However, if $r$ is strongly convex, then because $\|\textbf{e}_2-\v\|=\Theta(1)$, some point on the interval between them attains a strictly smaller regularization penalty while also attaining minimal loss. This contradicts the existence of such an $r$.
\paragraph{The general case.}
Our second result (\cref{thm:gdr}) rules out the existence of any distribution-independent regularizer. In contrast with the strongly-convex case, we cannot give uniform bounds that depend on strong-convexity parameters; the rates therefore depend on the regularizer.
But the construction here is similar. We start with the assumption that there are two points $\mathbf{w}_1$ and $\mathbf{w}_2$ with different regularization penalties, and we want to construct two functions $f_1,f_2$ that map $\mathbf{w}_1,\mathbf{w}_2$ to the same empirical loss.
It might seem that a simple linear transformation mapping, say, $\mathbf{w}_1$ to $\textbf{e}_2$ and $\mathbf{w}_2$ to $\v$ would reduce this case to the previous one. However, there is a subtlety: gradient descent is not invariant to linear transformations.%
\footnote{We note though that it can be turned to an affine invariant optimization algorithm~\citep{koren2017affine}.}
Towards this, we extend the construction above by constructing a more general example, where we can tune the point of convergence of SGD to \emph{any} point on the interval between $\v$ and $\textbf{e}_2$. This allows us to avoid scaling, and use only rotations (which SGD is invariant to) in order to reduce the problem to the former case.
This is done by changing the set $A$ from allowing $0\le \alpha < \infty$ and $\theta=1$, to adding a second boundary condition on the right and also scaling $\theta$. In \cref{fig:trajectories} we illustrate how changing the boundary condition changes the trajectory.
\section{Technical Background}
\ignore{
\subsection{Distance Functions}
Throughout, we will repeatedly use functions of the following form in our constructions:
\begin{align}\label{eq:canonical}
f_{A,\Sigma}(\mathbf{w})
=
\frac{1}{2}\min_{v \in A} \Big\{ (\mathbf{w}-\v)^\top \Sigma (\mathbf{w}-\v) \Big\}
,
\end{align}
where $A$ is a set of the form $A= \{(\alpha,\theta): -b_1<\alpha<b_2\},$
$\theta \in \mathbb{R}$, $b_1,b_2 \in \mathbb{R}_{+}$ and $\Sigma$ is a PSD matrix of the following form:
\[
\Sigma
=
\begin{pmatrix}
\sigma^2 & \sigma/2\\
\sigma/2 & 1
\end{pmatrix}.
\]
Because $A$ is convex, it is known that a function $f$ of the aforementioned form, depicted in \cref{eq:canonical}, is indeed a convex function (e.g., \cite{boyd2004convex}, Example 3.16).
If there is no reason for confusion we will omit dependence in $A$ and $\Sigma$ and simply write $f(\mathbf{w})$.
It can be seen that for a function $f_{A,\Sigma}$ of the form in \cref{eq:canonical}, the gradient is given by
\[ \nabla f(\mathbf{w}) = \Sigma (\mathbf{w}-\v(\mathbf{w})),\]
where we denote $\v(\mathbf{w})=\arg\min_{v\in A}(\mathbf{w}-\v)^\top\Sigma(\mathbf{w}-\v)$. As a corollary one can obtain the following expressions for the gradient
\begin{align}\label{property:derivative}
\nabla f(\mathbf{w})=
\begin{cases}
\Sigma(\mathbf{w}- (b_1,\theta))&
\text{if } \phantom{b_2<}w_1 +\frac{1}{2\sigma}(w_2-\theta) <b_1;\\
\tfrac34 \vectortwo{0}{w_2-\theta}&
\text{if } b_1<w_1 +\frac{1}{2\sigma}(w_2-\theta) <b_2;\\
\Sigma (\mathbf{w}-(b_2,\theta))&
\text{if } b_2 \le w_1 +\frac{1}{2\sigma}(w_2-\theta) \phantom{\ge b_2}.
\end{cases}
\end{align}
}
\subsection{Feldman's Statistically Complex Set}
A key technical tool in the proof of \cref{thm:nouc} is a construction by Feldman, \cite{feldman2016generalization}, of a statistically complex set in $\mathbb{R}^d$. While Feldman's construction is not the first to show that the sample complexity of an ERM algorithm may scale with the dimension, it greatly improved over previous constructions \cite{shalev2009stochastic}, and showed that the dependence may be \emph{linear} in the dimension.
We will exploit Feldman's set here in order to construct an example where SGD essentially picks an arbitrary element from a statistically complex set, akin to ERM. We will need the following statement due to Feldman:
\begin{theorem}[Essentially Theorem 3.3 in~\citealp{feldman2016generalization}] \label{thm:feldman}
Let $\mathcal{W}_d=\{-\frac{1}{\sqrt{d}},\frac{1}{\sqrt{d}}\}^d.$ There exists a distribution $D$ over $1$-Lipschitz convex functions such that, given a sample $S$ of size $|S|<d/6$ drawn i.i.d.\ from $D$, w.p.\ $1/2$ (over the sample $S$) there exists $\mathbf{w}\in \mathcal{W}_d$ such that
\begin{align}
\frac{1}{|S|}\sum_{t=1}^{|S|} f(\mathbf{w},z_t) =0,
\end{align}
but
\begin{align}\mathbf{E}_{z\sim D}[f(\mathbf{w},z)] =1/4.\end{align}
\end{theorem}
We will need a slightly stronger version of the theorem, which is an immediate corollary:
\begin{corollary}\label{cor:feldman}
Let $A\subseteq \mathcal{W}_{d}$, such that $|A|\ge 2^{d-1}$, then $A$ is $(d/6,1/4)$-statistically complex.
\end{corollary}
\begin{proof}
For a vector $\v\in \{-1,1\}^d$ and an element $\mathbf{w}\in \mathcal{W}_d$, let $\v*\mathbf{w}\in \mathcal{W}_d$ be the pointwise product between $\mathbf{w}$ and $\v$, i.e.
\[(\v*\mathbf{w})_i = \v_i\cdot \mathbf{w}_i.\]
Let $D$ be the distribution from \cref{thm:feldman} and consider a distribution where we draw uniformly an element $\v\in \{-1,1\}^d$ and a sample $S$ of size $d/6$ i.i.d.\ from $D$. One can show that with probability $|A|/(2^{d+1})$ there exists an element $\mathbf{w}\in A$ such that
\begin{align}\label{eq:feldmanerm}
\frac{1}{|S|}\sum_{t=1}^{|S|} f(\v*\mathbf{w};z_t) =0
\end{align}
but
\begin{align}\label{eq:feldmanrisk}\mathbf{E}_{z\sim D}f(\v*\mathbf{w};z) =1/4.\end{align}
In particular, there exists a $\v$ such that with probability $\frac{|A|}{2^{d+1}}$, \cref{eq:feldmanerm,eq:feldmanrisk} hold for some $\mathbf{w}\in A$ over the random sample $S$. Thus, we can define a convex Lipschitz mapping parameterized by $z$ such that
\[f_{\v}(\mathbf{w};z)=f(\v*\mathbf{w};z).\]
From the above discussion, if we draw $z\sim D$, we see that this distribution demonstrates that $A$ is $(d/6,1/4)$-statistically complex.
\end{proof}
\subsection{Berry-Esseen Theorem}
A key tool that we will use for analysing the behavior of random walks is the well-known Berry--Esseen Theorem, discovered independently in \cite{berry1941accuracy,esseen1942liapunov}.
\begin{theorem}[Berry--Esseen Theorem]\label{thm:be}
Let $X_1,X_2,\ldots, X_T$ be zero mean and independent random variables, with $\mathbf{E}(X^2_i)=\sigma_i^2$ and $\mathbf{E}(|X^3_i|)=\rho_i$. Let $S_T=\frac{1}{ \sqrt{\sum_{i=1}^T\sigma_i^2}}\sum_{i=1}^T X_i $, then we have
\[|P(S_T\leq a)-\Phi(a)|\le C_{\textrm{BE}} (\sum_{i=1}^T \sigma^2_i)^{-3/2}\sum_{i=1}^T \rho_i,\]
where $C_{\textrm{BE}}<1$ is an absolute constant, and $\Phi(a)$ is the CDF of a standard (zero-mean, unit-variance) Gaussian random variable.
\end{theorem}
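As a concrete illustration (not used in the proofs), one can compare the exact CDF of a normalized sum of fair random signs with the Gaussian CDF; here $\sigma_i=\rho_i=1$, so the Berry--Esseen bound reads $C_{\textrm{BE}}/\sqrt{T}\le 1/\sqrt{T}$. A minimal Python sketch:

```python
import math

def binom_cdf_std(T, a):
    # exact CDF of S_T = (X_1 + ... + X_T)/sqrt(T) for i.i.d. fair signs X_i:
    # S_T <= a  iff  B <= (a*sqrt(T) + T)/2, where B ~ Binomial(T, 1/2)
    thresh = math.floor((a * math.sqrt(T) + T) / 2)
    return sum(math.comb(T, b) for b in range(0, thresh + 1)) / 2**T

def Phi(x):
    # standard Gaussian CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

T = 100
# Berry-Esseen: the deviation is at most C_BE * T**-0.5 <= 0.1 everywhere
assert all(abs(binom_cdf_std(T, a) - Phi(a)) <= T ** -0.5
           for a in (-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0))
```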
For a bound $C_{\textrm{BE}}<1$ of the absolute constant see, for example, \citet{van1972application}. We will need the following technical Lemma which is derived via \cref{thm:be}:
\begin{lemma}\label{cor:be} Let $k\ge 1$, and assume $T>2k$. Suppose $X_1,\ldots,X_T$ are independent random variables such that
\[
X_t= \begin{cases}
c\frac{T-t}{T} & \textrm{w.p. $1/4$}\\
-c\frac{T-t}{T} & \textrm{w.p. $1/4$}\\
0 & \textrm{w.p. $1/2$}
\end{cases},
\]
and let $I= \{1,2,\ldots, T/k\}$. Then
\[P\left(\left|\frac{1}{\sqrt{T}}\sum_{t\in I} X_t\right|<a \frac{c}{\sqrt{50k}}\right)\le \textrm{erf}(a)+\sqrt{\frac{50^3 k}{T}},\]
where $\textrm{erf}(a)=\Phi(a)-\Phi(-a)$ is the error function.
\end{lemma}
\begin{proof}
First, we lower bound $\sum_{t\in I} \sigma_t^2$, and obtain that:
\begin{align*}
\sum_{t\in I} \mathbf{E}[|X_t|^2]
&= \frac{c^2}{2}\sum_{t\in I} \left(\frac{T-t}{T}\right)^2\\
&= \frac{c^2}{2T^2} \sum_{t=1}^{T/k} \left(T-t\right)^2\\
&=\frac{c^2}{2T^2}\sum_{t=0}^{T/k-1} \left(\frac{k-1}{k}T+t\right)^2\\
&\ge
\frac{c^2}{2T^2}\max\left\{\frac{(k-1)^2}{k^2}T^2\cdot \frac{T}{k},\sum_{t=0}^{T/k-1} t^2\right\}.
\end{align*}
We also have that for $T>2k$:
\[\sum_{t=0}^{T/k-1}t^2 =\frac{T/k\left(T/k-1\right)\left(2T/k-1\right)}{6}\ge \frac{T^3}{12k^3}.\]
Taken together we obtain that
\begin{align*}
\sum_{t\in I} \mathbf{E}[|X_t|^2] &\ge \frac{c^2T}{2}\max\left\{ \frac{(k-1)^2}{k^3}, \frac{1}{12 k^3}\right\}\\
&\ge \frac{c^2T}{50k},
\end{align*}
where the last inequality follows by using the second term for $k=1$, and the first term for $k\ge 2$ (in which case $(k-1)^2/k^3\ge 1/(4k)$).
Next, we upper bound $\sum_{t\in I}\rho_t$:
\begin{align*}
\sum_{t\in I} \mathbf{E}[|X_t|^3]=\frac{1}{2}\sum_{t=1}^{T/k}\Big|c \frac{T-t}{T}\Big|^3
=\frac{c^3}{2T^3}\sum_{t=1}^{T/k} (T-t)^3
\le \frac{c^3}{2T^3}\cdot\frac{T}{k}\cdot T^3 = \frac{c^3 T}{2k}.
\end{align*}
Taken together we obtain that
\begin{align*} P\left(\left|\frac{1}{\sqrt{T}}\sum_{t\in I} X_t\right|<a\frac{c}{\sqrt{50k}}\right)&\le
P\left(\left|\frac{1}{\sqrt{\sum_{t\in I} \sigma_t^2}}\sum_{t\in I} X_t\right|<a\right)\\
&\le \Phi(a)-\Phi(-a) + 2\left(\sum_{t\in I} \sigma_t^2\right)^{-3/2}\sum_{t\in I}\rho_t\\
&\le\Phi(a)-\Phi(-a) + \frac{(50k)^{3/2}c^3 T }{c^3 T^{3/2}k}\\
& \le \Phi(a)-\Phi(-a) + \sqrt{\frac{50^3 k}{T}}.
\end{align*}
\end{proof}
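The two moment bounds used in the proof (the variance lower bound and the third-moment upper bound) are easy to verify numerically for concrete parameter values; a small sketch (the choices of $T$, $k$, $c$ are arbitrary):

```python
def moment_bounds_hold(T, k, c=1.0):
    # X_t = +/- c(T-t)/T each w.p. 1/4, and 0 w.p. 1/2, for t in I = {1, ..., T/k}
    second = sum(0.5 * (c * (T - t) / T) ** 2 for t in range(1, T // k + 1))
    third = sum(0.5 * (c * (T - t) / T) ** 3 for t in range(1, T // k + 1))
    # variance lower bound and third-moment upper bound from the proof
    return second >= c ** 2 * T / (50 * k) and third <= c ** 3 * T / (2 * k)

assert all(moment_bounds_hold(T, k)
           for T in (100, 1000, 5000) for k in (1, 2, 5, 10) if T > 2 * k)
```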
\section{Proofs: Distribution Independent Regularizers}
\subsection{Proof of \cref{thm:gdwarmup}}\label{prf:gdwarmup}
As discussed, the main technical gadget behind our distribution-independent-regularization results is a construction of a convex function on which GD does not converge to the minimal norm solution:
\begin{theorem}[GD does not converge to nearest solution]\label{lem:noneuclid}
Let $\mathcal{W}=\{\mathbf{w}: \|\mathbf{w}\|<5\}$. For every $0<\theta_2\le 1$ and $0<\theta_1 \leq 0.025\,\theta_2$, there exists a non-negative, convex, $1$-smooth, and $1$-Lipschitz function $F=F_{\theta_1,\theta_2}$ such that, if we run GD (as defined in~\cref{GD_alg})
with step size $0<\eta<1$ over $F$ then GD outputs $\mathbf{w}_F$ that satisfies the following
\begin{equation}\label{eq:convg}
\|\mathbf{w}_F-(\theta_1,\theta_2)\| \leq \frac{2640}{\eta T},\end{equation} but \begin{equation}\label{eq:ereq}F((0,\theta_2))= F((\theta_1,\theta_2))=0
.
\end{equation}
\end{theorem}
In words, even though $(0,\theta_2)$ and $(\theta_1,\theta_2)$ are both minimizers of $F$, GD converges closer to the latter despite it having the larger norm (that is, despite being farther away from the initial point---recall that we assume here that GD is initialized at the origin).
The proof of \cref{lem:noneuclid} is provided at the end of this section and we continue with the proof of \cref{thm:gdwarmup}.
\begin{proof}[Proof of~\cref{thm:gdwarmup}]
For every regularizer $r$ we will choose a distribution $D$ that is concentrated on a single function $F$ (dependent on $r$). Note that in this case, the iterates of SGD are completely equivalent to the iterates of GD with input function $F$. That is, \cref{thm:gdwarmup} in fact holds even for deterministic GD, and we continue with the analysis assuming we run GD over a fixed function $F$.
We now proceed to choose the function $F$ for a given $\lambda$-strongly convex regularization $r$. Denote $\textbf{e}_2=(0,1)$ and $\c=(0.024,1)$. Consider the set $\interval{\textbf{e}_2}{\c} = \{\alpha\textbf{e}_2+(1-\alpha)\c: 0\le \alpha\le 1\}$, and let \[\mathbf{w}^* = \argmin_{\mathbf{w}\in \interval{\textbf{e}_2}{\c}} r(\mathbf{w}).\]
We now want to choose a function $F \ge 0$ such that $F(\textbf{e}_2)=F(\c)=F(\mathbf{w}^*)=0$, and such that $\mathbf{w}_F$, the output of GD over $F$, satisfies the following:
\begin{itemize}
\item $\|\mathbf{w}^*-\mathbf{w}_F\| > 0.01$;
\item if $\Pi(\mathbf{w}_{F})$ is the projection of $\mathbf{w}_F$ on $\interval{\textbf{e}_2}{\c}$, then
$\|\mathbf{w}_{F}-\Pi(\mathbf{w}_{F})\| \leq 2640/(\eta T)$.
\end{itemize}
This will conclude the proof. Indeed, by strong convexity:
\begin{align*}
r(\mathbf{w}_F) - r(\mathbf{w}^*)
&\ge
\nabla r(\mathbf{w}^*)^\top(\mathbf{w}_F-\mathbf{w}^*) + \frac{\lambda}{2} \|\mathbf{w}^*-\mathbf{w}_F\|^2
\tag{$\lambda$-strong convexity}
\\
&=
\nabla r(\mathbf{w}^*)^\top(\Pi(\mathbf{w}_F)-\mathbf{w}^*) + \nabla r(\mathbf{w}^*)^\top(\mathbf{w}_F-\Pi(\mathbf{w}_F)) +
\frac{\lambda}{2} \|\mathbf{w}^*-\mathbf{w}_F\|^2
\\
&\geq
\nabla r(\mathbf{w}^*)^\top(\mathbf{w}_F-\Pi(\mathbf{w}_F)) + \frac{\lambda}{2} \|\mathbf{w}^*-\mathbf{w}_F\|^2
\tag{$w^*$ minimizes $r$ over $\interval{\textbf{e}_2}{\c}$}
\\
&\ge
-\frac{2640}{T\eta} + 0.5 \cdot 10^{-4}\lambda
\tag{Lipschitz condition}
,
\end{align*}
and for $\eta = \Omega(1/\lambda T)$ we would get $r(\mathbf{w}_F) - r(\mathbf{w}^*) \geq \Theta(\lambda)$ as claimed.
We now demonstrate how to choose an appropriate $F$.
We will consider two possible cases: $\|\mathbf{w}^*-\textbf{e}_2\|\geq 0.012$, or $\|\mathbf{w}^*-\c\|\geq0.012$; at least one of these must hold, since $\mathbf{w}^*$ lies on the interval and $\|\textbf{e}_2-\c\|=0.024$.
\begin{itemize}[wide]
\item
First assume that $\|\mathbf{w}^*-\textbf{e}_2\|\geq0.012$. We then choose
$$F(\mathbf{w}) = \min_{\v\in \interval{\textbf{e}_2}{\c}}\frac{1}{5280} \|\mathbf{w}-\v\|^2,$$
which can be seen to be $1$-smooth and $1$-Lipschitz on $\mathcal{W}$.
A simple analysis of the update step shows that for $\eta<1$, we have $\mathbf{w}^{(t+1)} = \sum_{i=0}^{t-1}(1-\frac{\eta}{2640})^i \frac{\eta}{2640} \textbf{e}_2$, and hence $\mathbf{w}^{(t)}=\big(1-(1-\tfrac{\eta}{2640})^{t-1}\big)\textbf{e}_2$.
Hence,
\begin{align*} \|\mathbf{w}_F-\textbf{e}_2\|
&= \left\|\frac{1}{T}\sum_{t=1}^{T} \textbf{w}^{(t)}-{\textbf{e}_2}\right\|\\
&\le \frac{1}{T} \sum_{t=1}^{T} \|\textbf{w}^{(t)}-{\textbf{e}_2}\|\\
&= \frac{1}{T}\sum_{t=1}^{T} \left(1-\frac{\eta}{2640}\right)^{t-1}\\
& \le \frac{2640}{\eta T} \tag{$\sum_{t=1}^{T} (1-x)^{t-1} \le \frac{1}{x}$}
.
\end{align*}
In particular we have that
\[
\|\mathbf{w}_F-\Pi(\mathbf{w}_F)\| \leq \|\mathbf{w}_F-\textbf{e}_2\|
\leq
\frac{2640}{\eta T}
,
\]
and since $\|\mathbf{w}^*-\textbf{e}_2\|\geq 0.012$, by the triangle inequality,
\begin{align*}
\|\mathbf{w}^*-\mathbf{w}_F\|
\geq
\|\mathbf{w}^*-\textbf{e}_2\| - \|\mathbf{w}_F-\textbf{e}_2\|
\geq
0.012-\frac{2640}{\eta T}
>
0.01,
\end{align*}
where the last inequality holds for $\eta T$ sufficiently large.
\item
Next we assume that $\|\mathbf{w}^*-\c\|\geq0.012$. We now apply \cref{lem:noneuclid} with $\theta_1=0.024$ and $\theta_2=1$ and consider $F=F_{\theta_1,\theta_2}$ as in the theorem's statement. Then, we have that $\|\mathbf{w}_{F}-\c\|\le\frac{2640}{\eta T}$, and we obtain as before that $\|\mathbf{w}_F-\Pi(\mathbf{w}_F)\| \leq \frac{2640}{\eta T}$ and that $\|\mathbf{w}^*-\mathbf{w}_F\|>0.01$, as required.\qedhere
\end{itemize}
\end{proof}
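The first case above (the purely Euclidean construction) can be sanity-checked numerically; a minimal Python sketch, with arbitrary choices of $\eta$ and $T$ within the theorem's range:

```python
import numpy as np

e2, c = np.array([0.0, 1.0]), np.array([0.024, 1.0])

def grad_f(w):
    # gradient of F(w) = (1/5280) * min_{v in [e2, c]} ||w - v||^2
    d = c - e2
    t = np.clip(np.dot(w - e2, d) / np.dot(d, d), 0.0, 1.0)
    return (w - (e2 + t * d)) / 2640.0

eta, T = 0.9, 5000
w, avg = np.zeros(2), np.zeros(2)
for _ in range(T):
    avg += w
    w = w - eta * grad_f(w)
avg /= T

# the averaged iterate lands within 2640/(eta*T) of the nearest minimizer e2
assert np.linalg.norm(avg - e2) <= 2640 / (eta * T)
```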
\subsection{Proof of \cref{lem:noneuclid}}
It will be more convenient to construct a function $F$ that is convex, $4$-smooth and $22$-Lipschitz such that if we run GD with step-size $0<\eta<1/3$ over $F$ then GD outputs $\mathbf{w}_F$ that satisfies \cref{eq:ereq} and
\[\|\mathbf{w}_F-(\theta_1,\theta_2)\|\le \frac{120}{\eta T}.\]
Then, by re-scaling $F\to \frac{1}{22}F$, and observing that running GD on $F$ with step size $\eta/22$ is equivalent to running GD on $\frac{1}{22}F$ with stepsize $\eta$, we obtain the desired result.
Next, we construct $F$. For $0\le \theta_2 \le 1$ and $0\le \theta_1\le 0.025\cdot \theta_2$ let us define the set: $A_{\theta_1,\theta_2}=\{(\alpha,\theta_2): 0\le \alpha \le \theta_1\}$. In turn, we define the function $F=F_{\theta_1,\theta_2}(\mathbf{w})$ to be:
\begin{align}\label{eq:f_A}
F(\mathbf{w})
=
\min_{\textbf{v} \in A_{\theta_1,\theta_2}} \left\{ \tfrac{1}{2}(\mathbf{w}-\textbf{v})^\top \Sigma (\mathbf{w}-\textbf{v}) \right\}
,
\end{align}
where
\[
\Sigma=\begin{pmatrix}
1 & \tfrac12 \\
\tfrac12 & 1
\end{pmatrix}
.
\]
We start by showing that $F$ is indeed convex, $4$-smooth and $22$-Lipschitz as required (as discussed above, the desired result then follows by rescaling).
It is a standard fact that
a function of the above form is indeed convex (see, e.g., Example 3.16 in \cite{boyd2004convex}).
We will next show that $F$ is also $4$-smooth and $22$-Lipschitz. First, one can show from \cref{eq:f_A} that the gradient is given by
\begin{equation}\label{eq:thegrad} \nabla F(\mathbf{w}) = \Sigma (\mathbf{w}-\v(\mathbf{w})),\end{equation}
where we denote $\v(\mathbf{w})=\arg\min_{v\in A_{\theta_1,\theta_2}}(\mathbf{w}-\v)^\top\Sigma(\mathbf{w}-\v)$.
Next, observe that for any $\mathbf{w},\mathbf{w}'$ we have $\|\v(\mathbf{w})-\v(\mathbf{w}')\|^2 \leq (\mathbf{w}-\mathbf{w}')^\top \Sigma (\mathbf{w}-\mathbf{w}') \leq \tfrac32 \|\mathbf{w}-\mathbf{w}'\|^2$. Indeed, $\v(\mathbf{w})$ is the projection of $\mathbf{w}$ onto $A_{\theta_1,\theta_2}$ with respect to the norm $\|\mathbf{x}\|_\Sigma^2 = \mathbf{x}^\top \Sigma \mathbf{x}$, projections onto convex sets are non-expansive in the corresponding norm, and since the difference $\v(\mathbf{w})-\v(\mathbf{w}')$ is horizontal, its Euclidean norm coincides with its $\Sigma$-norm. Then,
\begin{align*}
\| \nabla F(\mathbf{w}) - \nabla F(\mathbf{w}') \|
&\leq
\|\Sigma\| \big( \|\mathbf{w}-\mathbf{w}'\| + \|\v(\mathbf{w})-\v(\mathbf{w}')\| \big)
\\
&\leq
\big( \tfrac32 + (\tfrac32)^{3/2} \big) \|\mathbf{w}-\mathbf{w}'\|
\\
&\leq
4\|\mathbf{w}-\mathbf{w}'\|
.
\end{align*}
Also, since $\v(0)=(0,\theta_2)$, we obtain that $\|\nabla F(0)\|\le \theta_2 \frac{\sqrt{5}}{2}$. Thus, from smoothness we also get that for any $\mathbf{w}\in \mathcal{W}$ we have $\|\nabla F(\mathbf{w})\|\le \frac{\sqrt{5}}{2}+20\le 22$. This proves that $F$ is indeed convex, $4$-smooth and $22$-Lipschitz.
To prove the statement, we begin with the following analysis of the trajectory of GD over the function $F_{\theta_1,\theta_2}$.
\begin{lemma} \label{cl:trajectory_analysis}
Let $\mathbf{w}^{(1)},...,\mathbf{w}^{(T)}$ be the sequence defined by running unprojected GD (i.e., with $\mathcal{W}= \mathbb{R}^d$) over $F$ with step size $\eta \leq \frac{1}{3}$, starting from $\mathbf{w}^{(1)} = 0$ for $T$ iterations. Then there exist $\frac{1}{2\eta}\leq t_0 \leq \frac{3}{\eta}$ and $t_0\le t_1\le t_0+ \frac{7}{\eta}$ such that:
\begin{align*}
\textrm{for~$1 \le t\le t_0$:}&\quad
\mathbf{w}^{(t)} = \big(I-(I-\eta\Sigma)^{t-1}\big) \xi_{0}&\numberthis\label{eq:wtt0}& \quad
\textrm{where $\xi_{0}=(0,\theta_2)$;}\\
\textrm{for $t_0<t\le t_1$:}& \quad \mathbf{w}^{(t)}= \begin{pmatrix}
w^{(t_0)}_1 \\
\left(1-\frac{3\eta}{4}\right)^{t-t_0}\big(w_2^{(t_0)}-\theta_2\big)+\theta_2
\end{pmatrix};&\numberthis\label{eq:wtt0t1}
\\
\textrm{for $t_1<t \le T$:}&\quad
\mathbf{w}^{(t)} = \big(I-(I-\eta \Sigma)^{t-t_1}\big) \xi_{1}+ \left(I-\eta \Sigma\right)^{t-t_1} \mathbf{w}^{(t_1)} &\numberthis\label{eq:wtt1}&\quad
\textrm{where $\xi_{1}=(\theta_1,\theta_2)$.}
\end{align*}
\end{lemma}
\cref{cl:trajectory_analysis} is the most technical part of the proof, and follows a careful step-by-step analysis of the trajectory of GD over the function $F$; we defer its proof to later in this section and proceed with the proof of \cref{lem:noneuclid}. We also complement the proof with a ``proof by picture'' and a schematic description of the trajectory in \cref{fig:trajectory3}.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{figures/proof-of-lemma.pdf}
\caption{\small We depict here the trajectory being analyzed in \cref{cl:trajectory_analysis}. The trajectory consists of three phases. In each phase the gradient at the point $\mathbf{w}^{(t)}$ is determined by the vector $v(\mathbf{w}^{(t)})$, which is the closest vector to $\mathbf{w}^{(t)}$ on the set $A$, w.r.t.\ the matrix norm induced by $\Sigma$ (see \cref{eq:thegrad}).
In the first phase, the vector $\v(\mathbf{w}^{(t)})$ is the boundary point $(0,1)$. More generally, due to the linear transformation $\Sigma$, the closest point to $\mathbf{w}^{(t)}$ on the line $\{(\beta,1): -\infty< \beta< \infty\}$ lies to the left of $(0,1)$; as such, $(0,1)$ is the closest point on $A$. The gradient is $\nabla F(\mathbf{w}^{(t)}) =\Sigma (\mathbf{w}^{(t)}-v(\mathbf{w}^{(t)}))$, and points upwards and to the right.
This phase continues until $\mathbf{w}^{(t)}$ has moved towards the interior and the closest point $\v(\mathbf{w}^{(t)})$ starts to lie in the interior of $A$; then the gradient points upwards (note that when the closest point is in the interior of the interval, the horizontal distance to the closest point remains constant, hence the gradient is vertical). Finally, the closest point to $\mathbf{w}^{(t)}$ becomes the boundary point $(\theta_1,1)$ again, and $\mathbf{w}^{(t)}$ starts to converge towards $(\theta_1,1)$.
}
\label{fig:trajectory3}
\end{figure}
\begin{proof}[Proof of \cref{lem:noneuclid}]
We next set out to show that if we run GD on $F$ with any step-size $0<\eta<1/3$, then \[\|\mathbf{w}_{F}-\xi_{1}\|\le \frac{120}{\eta T}\] and $F(0,\theta_2)=F(\theta_1,\theta_2)=0$. Then, as discussed above, the result follows by rescaling $F$ so as to obtain a $1$-Lipschitz and $1$-smooth function.
We thus proceed with the proof. The fact that $F(0,\theta_2)=F(\theta_1,\theta_2)=0$ is immediate from definitions.
Next, we bound the norms $\|\xi_{0}\|, \|\xi_{1}\|,\|\mathbf{w}^{(t)}\|$ for the setting depicted in \cref{cl:trajectory_analysis}, i.e., when $\mathcal{W}=\mathbb{R}^d$ and no projection steps occur.
One can easily observe that $\|\xi_{1}\|,\|\xi_{0}\|< 1.5$. Following the trajectory path of $\mathbf{w}^{(t)}$, provided in \cref{cl:trajectory_analysis}, we can also provide a bound on $\mathbf{w}^{(t)}$:
\begin{itemize}
\item if $t\le t_0$ we have that $\|\mathbf{w}^{(t)}\|<\|\xi_{0}\|\leq1$;
\item if $t_0\le t\le t_1$, then $\|\mathbf{w}^{(t)}\|\le \|\mathbf{w}^{(t_0)}\|+ \theta_2 \le 2$;
\item and if $t\ge t_1$ we have that $\|\mathbf{w}^{(t)}\|\le \|\mathbf{w}^{(t_1)}\|+\|\xi_{1}\| < 5$.
\end{itemize}
Taken together we have that $\|\mathbf{w}^{(t)}\|<5$. As such, for any set $\mathcal{W}$ (not necessarily $\mathcal{W}=\mathbb{R}^d$), as long as $\{\mathbf{w}: \|\mathbf{w}\|\le 5\}\subseteq \mathcal{W}$, \cref{cl:trajectory_analysis} holds. Indeed, in any such case running projected GD and running GD without projection are completely equivalent.
Finally, by simple calculation we can show that the eigenvalues of $\Sigma$ are $3/2$ and $1/2$. Hence,
\begin{align}\label{eq:Sigma_spectral}
\|I-\eta \Sigma\| \leq 1-\frac{\eta}{2},
\end{align}
where $\|\cdot\|$ denotes the spectral (operator) norm.
We are now ready to show that $\mathbf{w}_{F}$ converges to $\xi_{1}$:
\begin{align*}
\|\mathbf{w}_{F}-\xi_{1}\|
&\leq \frac{1}{T}\sum_{t=1}^T \|\textbf{w}^{(t)}-\xi_{1}\|
\\
& =
\frac{1}{T}\sum_{t=1}^{t_1}\|\textbf{w}^{(t)}-\xi_{1}\| +\frac{1}{T}\sum_{t=t_1+1}^{T} \|\textbf{w}^{(t)}-\xi_{1}\|
\\
& \leq
\frac{10t_1}{T}+\frac{1}{T}\sum_{t=t_1+1}^{T} \|\textbf{w}^{(t)}-\xi_{1}\| \tag{$\|\mathbf{w}^{(t)}\|,\|\xi_{1}\|<5$}
\\
&=
\frac{100}{\eta T}+\frac{1}{T}\sum_{t=t_1+1}^T\|(I-\eta\Sigma)^{t-t_1} (\mathbf{w}^{(t_1)}-\xi_{1})\|
\tag{$t_1<10/\eta$; \cref{eq:wtt1}}
\\
& \leq
\frac{100}{\eta T}
+\frac{1}{T}\sum_{t=t_1+1}^T\|(I-\eta \Sigma)^{t-t_1}\| \cdot \|\mathbf{w}^{(t_1)}-\xi_{1}\| \\
&\le
\frac{100}{\eta T}
+\frac{10}{T}\sum_{t=1}^{T-t_1}\|I-\eta \Sigma\|^{t} \tag{$\|\mathbf{w}^{(t_1)}-\xi_{1}\|<10$}
\\
&\le
\frac{100}{\eta T}
+\frac{10}{T}\sum_{t=0}^{\infty}\Big(1- \frac{\eta}{2}\Big)^{t} \tag{\cref{eq:Sigma_spectral}}
\\
&\le \frac{100}{\eta T}
+\frac{10}{T}\cdot \frac{2}{\eta}
\leq \frac{120}{\eta T}
.
\end{align*}
\end{proof}
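As a numerical sanity check of \cref{lem:noneuclid} (not part of the formal argument), one can simulate GD on the unscaled function $F$ directly; a minimal Python sketch, with the arbitrary choices $\theta_1=0.025$, $\theta_2=1$, $\eta=0.1$:

```python
import numpy as np

Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
th1, th2 = 0.025, 1.0

def grad_F(w):
    # v(w): closest point to w on A = {(a, th2): 0 <= a <= th1} in the Sigma-norm
    a = np.clip(w[0] + 0.5 * (w[1] - th2), 0.0, th1)
    return Sigma @ (w - np.array([a, th2]))

eta, T = 0.1, 200_000
w, avg = np.zeros(2), np.zeros(2)
for _ in range(T):
    avg += w
    w = w - eta * grad_F(w)
avg /= T

# the averaged iterate converges to (th1, th2), not to the closer minimizer (0, th2)
assert np.linalg.norm(avg - np.array([th1, th2])) < 0.01
assert np.linalg.norm(avg - np.array([0.0, th2])) > 0.015
```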
\subsubsection{Proof of \cref{cl:trajectory_analysis}}
By computing $\v(\mathbf{w})$ explicitly and plugging it into \cref{eq:thegrad}, we obtain the following expressions for the gradient:
\begin{align}\label{property:derivative}
\nabla F(\mathbf{w})=
\begin{cases}
\Sigma(\mathbf{w}- (0,\theta_2))&
\text{if } \phantom{\theta_1\le{}}w_1 +\frac{1}{2}(w_2-\theta_2) <0;\\
\tfrac34 \vectortwo{0}{w_2-\theta_2}&
\text{if } 0<w_1 +\frac{1}{2}(w_2-\theta_2) <\theta_1;\\
\Sigma (\mathbf{w}-(\theta_1,\theta_2))&
\text{if } \theta_1 \le w_1 +\frac{1}{2}(w_2-\theta_2).
\end{cases}
\end{align}
We thus obtain two boundary conditions that govern the behavior of the trajectory:
\begin{align}
\label{eq:boundary1}
w_1 + \tfrac{1}{2} w_2 &< \tfrac{1}{2} \theta_2;
\\
\label{eq:boundary2}
w_1 + \tfrac{1}{2} w_2 &\geq \tfrac{1}{2} \theta_2 + \theta_1.
\end{align}
Given $\eta$, we claim that \cref{cl:trajectory_analysis} holds if we let $t_0$ denote the first iterate of GD at which $\mathbf{w}^{(t_0)}$ violates \cref{eq:boundary1}, and let $t_1$ denote the first iterate at which $\mathbf{w}^{(t_1)}$ satisfies \cref{eq:boundary2}. We split the proof into three parts, according to the phases of GD's trajectory: $t\leq t_0$, $t_0< t\leq t_1$, and $t>t_1$.
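The case formula for the gradient above can be cross-checked against numerical differentiation of $F$; a small sketch, where $F$ is evaluated by brute-force minimization over a fine grid of $A_{\theta_1,\theta_2}$ (parameter values and tolerances arbitrary):

```python
import numpy as np

Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
th1, th2 = 0.025, 1.0

def F(w):
    # brute-force evaluation of F over a fine grid of v = (a, th2), 0 <= a <= th1
    alphas = np.linspace(0.0, th1, 4001)
    d = w[None, :] - np.stack([alphas, np.full_like(alphas, th2)], axis=1)
    return 0.5 * np.einsum('ij,jk,ik->i', d, Sigma, d).min()

def grad_cases(w):
    # the three cases of the gradient formula
    s = w[0] + 0.5 * (w[1] - th2)
    if s < 0:
        return Sigma @ (w - np.array([0.0, th2]))
    if s < th1:
        return np.array([0.0, 0.75 * (w[1] - th2)])
    return Sigma @ (w - np.array([th1, th2]))

rng = np.random.default_rng(0)
for _ in range(20):
    w, eps = rng.uniform(-1.0, 1.0, size=2), 1e-5
    num = np.array([(F(w + eps * e) - F(w - eps * e)) / (2 * eps) for e in np.eye(2)])
    assert np.allclose(num, grad_cases(w), atol=1e-3)
```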
\begin{claim} \label{cl:before_t0}
There exists $\frac{1}{2\eta}\le t_0 \le \frac{3}{\eta}$ such that $\mathbf{w}^{(t_0)}$ is the first iterate that violates \cref{eq:boundary1}. Further, for any $t\leq t_0$, $\textbf{w}^{(t)}$ can be calculated by \cref{eq:wtt0}. And finally, $\wonet{t_0}\geq 2^{-5}\theta_2$.
\end{claim}
\begin{proof}
First note that $\mathbf{w}^{(1)}$ satisfies \cref{eq:boundary1}, hence $t_0 \geq 1$. Now, following the calculation of the derivative provided in \cref{property:derivative}, we obtain the update step
$\mathbf{w}^{(t+1)}=\mathbf{w}^{(t)}-\eta \Sigma (\textbf{w}^{(t)}-\xi_{0})$ which we can rewrite as
\begin{equation} \label{recursion}
\mathbf{w}^{(t+1)}=(I-\eta \Sigma )\textbf{w}^{(t)} + \eta \Sigma \xi_{0}.
\end{equation}
By induction one can show that for $2\le t \le t_0$:
\begin{align*}
\mathbf{w}^{(t)}
&=\sum_{i=0}^{t-2} (I-\eta \Sigma)^i \cdot (\eta \Sigma \xi_{0}) \\
&= \big(I-\left(I-\eta\Sigma\right)^{t-1}\big) \xi_{0}. \numberthis \label{eq:induction_claim}
\end{align*}
This shows that for any $t\le t_0$, $\mathbf{w}^{(t)}$ can be calculated by \cref{eq:wtt0}. We proceed with the proof to show that $\frac{1}{2\eta}\le t_0\le \frac{3}{\eta}$.
Considering the singular value decomposition of $\Sigma$ one can show that:
\begin{equation}\label{eq:o-sigmat}
(I-\eta \Sigma)^{t-1}=\frac{1}{2} \begin{pmatrix}
(1-\tfrac32\eta)^{t-1}+(1-\tfrac12\eta)^{t-1} & (1-\tfrac32\eta)^{t-1}-(1-\tfrac12\eta)^{t-1} \\
(1-\tfrac32\eta)^{t-1}-(1-\tfrac12\eta)^{t-1} & (1-\tfrac32\eta)^{t-1}+(1-\tfrac12\eta)^{t-1}
\end{pmatrix}.\end{equation}
Plugging this in \cref{eq:induction_claim}, we obtain that for any $t\le t_0$:
\begin{equation} \label{eq:first_trajectory_vector}
\textbf{w}^{(t)}
=
\frac{\theta_2}{2} \begin{pmatrix}
(1-\tfrac12\eta)^{t-1} - (1-\tfrac32\eta)^{t-1} \\
2 - (1-\tfrac12\eta)^{t-1} -(1-\tfrac32\eta)^{t-1}
\end{pmatrix}
.
\end{equation}
To obtain the lower bound on $t_0$ observe that $t_0$ satisfies:
$$w^{(t_0)}_1 + \frac{1}{2}(w^{(t_0)}_2-\theta_2) \geq 0 .$$
Plugging in \cref{eq:first_trajectory_vector} and dividing by $\theta_2/2$, we obtain that:
\[ \left(1-\frac{\eta}{2}\right)^{t_0-1} -\left(1-\frac{3\eta}{2}\right)^{t_0-1}+\frac{1}{2}\left(2- \left(1-\frac{\eta}{2}\right)^{t_0-1} -\left(1-\frac{3\eta}{2}\right)^{t_0-1}-2\right)
\ge 0.\]
Rearranging terms we get:
\[\frac{1}{2}\left(1-\frac{\eta}{2}\right)^{t_0-1}-\frac{3}{2}\left(1-\frac{3\eta}{2}\right)^{t_0-1}\ge 0,\]
which for $\eta<1/3$, can be rewritten as:
\begin{equation}\label{eq:kill_this_paper}\left(1 + \frac{2\eta}{2-3\eta}\right)^{t_0-1}=\left(\frac{2-\eta}{2-3\eta}\right)^{t_0-1} \geq 3.\end{equation}
This leads to
\begin{align*}
t_0&\ge \frac{1}{\ln (1+\frac{2\eta}{2-3\eta})}
\tag{$\ln (3) \ge 1$}\\
&\ge
\frac{2-3\eta}{2\eta}
\tag{$\ln(x+1)\le x$} \\
&= \frac{1}{\eta} -\frac{3}{2}\\
& \ge \frac{1}{2\eta}.
\tag{$\eta \le \frac{1}{3}$}
\end{align*}
Next we provide an upper bound for $t_0$. Again, \cref{eq:boundary1} is satisfied for every $t<t_0$, which, as we already saw (recall \cref{eq:kill_this_paper}), means that:
\begin{equation} \label{eq:upperbound_t0}
\forall t<t_0, \quad \left(1+\frac{2\eta}{2-3\eta}\right)^{t-1} \leq 3.
\end{equation}
Using the inequality $(1+2/n)^n \ge 3$, we obtain\begin{align*}
\left(1+ \frac{2\eta}{2-3\eta}\right)^{t-1}
\ge
\left( 1+\eta \right)^{t-1}
\geq
3^{\frac{\eta}{2} (t-1)}
.
\end{align*}
In particular, for $t > \frac{2}{\eta}+1$, \cref{eq:upperbound_t0} is violated, and hence $t_0 \le \frac{2}{\eta}+1 \le \frac{3}{\eta}$ (using $\eta\le\tfrac13$).
Finally, we provide a lower bound for $w_1^{(t_0)}$. Namely, we want to show that $w_1^{(t_0)}\geq 2^{-5}\theta_2$.
First, by rearranging terms at \cref{eq:kill_this_paper} we obtain that
$t_0$ is sufficiently large so that
$\left(1-\frac{\eta}{2}\right)^{t_0-1}\ge 3\left(1-\frac{3\eta}{2}\right)^{t_0-1}$.
Again applying the formula for $\mathbf{w}^{(t_0)}$ in \cref{eq:first_trajectory_vector} we have that:
\begin{align*}
w_1^{(t_0)}=\frac{\theta_2}{2}\cdot[(1-\frac{1}{2}\cdot \eta)^{t_0-1} - (1-\frac{3}{2}\cdot\eta)^{t_0-1}]
&\ge \frac{\theta_2}{4} \left(1-\frac{1}{2}\eta\right)^{t_0-1}\\
& \ge
\frac{\theta_2}{4} \left(1-\frac{1}{2}\eta\right)^{3/\eta} & t_0<\frac{3}{\eta}\\
& \ge 2^{-5}\theta_2. &\left(1-\frac{1}{2n}\right)^n >\frac{1}{2}
\numberthis \label{w1t0}
\end{align*}
This concludes the analysis of the first phase of the trajectory.
\end{proof}
We next move on to the case $t_0\le t\le t_1$.
\begin{claim} \label{cl:between_t0_t1}
Let $t_0 \le t \leq t_1$. Then $\textbf{w}^{(t)}$ can be calculated by \cref{eq:wtt0t1}. Moreover $t_1\leq t_0+\frac{7}{\eta}$.
\end{claim}
\begin{proof}
We again apply the calculation of the derivative provided in \cref{property:derivative} at $t_0 \le t\le t_1$ and obtain:
\begin{equation} \label{eq:gradient_region2}
\nabla F(\textbf{w}^{(t)})=\begin{pmatrix}
0 \\ \frac{3}{4}(w_2^{(t)}-\theta_2)
\end{pmatrix}.
\end{equation}
Note that this proves that $w_1^{(t)}= w_1^{(t_0)}$. For $w_2^{(t)}$, we have that
\[w_2^{(t)}=w_2^{(t-1)}(1-\frac{3}{4}\eta)+\frac{3}{4} \cdot \eta \cdot \theta_2,\] which leads by induction to the following:
\begin{align*}w_2^{(t)} &=\left(1-\frac{3}{4}\eta\right)^{t-t_0}w_2^{(t_0)}+\sum_{i=0}^{(t-t_0)-1}\left(1-\frac{3}{4}\eta\right)^i \cdot \frac{3\eta\theta_2}{4}\\
&= \left(1-\frac{3}{4}\eta\right)^{t-t_0}w_2^{(t_0)}+ \left(1-\left(1-\frac{3}{4}\eta\right)^{t-t_0}\right)\theta_2\\
&=\left(1-\frac{3}{4}\eta\right)^{t-t_0}\left(w_2^{(t_0)}-\theta_2\right)+\theta_2.
\end{align*}
This shows that for any $t_0\le t\le t_1$ \cref{eq:wtt0t1} holds.
We next bound $t_1$. Recall that $t_1$ is defined to be the first iterate for which \cref{eq:boundary2} is satisfied. Let us show that \cref{eq:boundary2} is satisfied for any $t$ with $t> t_0+\frac{7}{\eta}$, and hence $t_1\le t_0+7/\eta$. Equivalently, we will show that for $t>t_0+7/\eta$, the following holds:
\begin{equation} \label{eq:t_1condition}
\theta_2-\wtwot{t} \leq 2(\wonet{t}-\theta_1).
\end{equation}
Indeed, let $t<t_1$, then
\begin{align*}
2\cdot(\wonet{t}-\theta_1)&= 2\cdot(\wonet{t_0}-\theta_1) & (w_1^{(t)}=w_{1}^{(t_0)} \textrm{~by~} \cref{eq:wtt0t1}) \\
&\geq 2\cdot (2^{-5}\cdot \theta_2 - \theta_1)&(w_1^{(t_0)}\ge 2^{-5}\theta_2 \textrm{~by~} \cref{w1t0} )\\
&\geq 2\cdot (2^{-5}\cdot \theta_2 - 0.025\theta_2)&(\theta_1\le 0.025\cdot \theta_2 )\\
&\ge 0.01\cdot\theta_2
\end{align*}
Next assume that $t\ge t_0+\frac{7}{\eta}$, then
\begin{align*}
0.01\cdot\theta_2 & \ge e^{-\frac{3\eta\cdot (t-t_0)}{4}}\theta_2 & \left(t-t_0\ge \tfrac{7}{\eta}\ge \tfrac{20}{3\eta}\right)\\
&\ge \left(1-\frac{3}{4}\eta\right)^{t-t_0} \theta_2 \\
&\ge \left(1-\frac{3}{4}\eta\right)^{t-t_0} \left[\theta_2-\wtwot{t_0}\right] & (\wtwot{t_0}\geq 0) \\
&=\theta_2-\mathbf{w}_2^{(t)}. & \cref{eq:wtt0t1}
\end{align*}
\end{proof}
We now move to the last phase of the trajectory.
\begin{claim} \label{cl:after_t1}
Let $t \geq t_1$, then $\textbf{w}^{(t)}$ can be calculated by \cref{eq:wtt1}.
\end{claim}
\begin{proof}
Let $t\ge t_1$ be such that \cref{eq:boundary2} holds. Then again, we consider the formula of the derivative $\nabla F(\mathbf{w})$ (see \cref{property:derivative}) and have that
\[ \nabla F(\mathbf{w}) = \Sigma (\mathbf{w}- \xi_{1}).\]
We obtain the following recursive formula for $t$ if \cref{eq:boundary2} holds for all $t_1\le t' \le t$:
\begin{align*}
\textbf{w}^{(t)} &=(I-\eta \Sigma) \mathbf{w}^{(t-1)}+\eta \Sigma \xi_{1} \\
&=(I-\eta \Sigma)^{t-t_1}\mathbf{w}^{(t_1)}+\sum_{i=0}^{t-t_1-1} (I-\eta \Sigma)^i \eta \Sigma \xi_{1} \\
&=(I-\eta \Sigma)^{t-t_1}(\mathbf{w}^{(t_1)}-\xi_{1})+ \xi_{1}. \numberthis\label{eq:herenow}
\end{align*}
This shows that $\mathbf{w}^{(t)}$ can be calculated via \cref{eq:wtt1}. It remains thus to show that for any $t\ge t_1$, \cref{eq:boundary2} always holds. We prove this by induction. Note that for the base case, this follows from the definition of $t_1$. We can thus assume by induction hypothesis that $\textbf{w}^{(t)}$ satisfies \cref{eq:herenow}, and we want to prove that
\[
w^{(t)}_1+\tfrac12w^{(t)}_2-\theta_1-\tfrac12\theta_2 \ge 0
.
\]
For succinctness, let us write
\[
\alpha_t= \left(1-\tfrac32\eta\right)^{t-t_1}
,\quad \textrm{and}, \quad
\beta_t=\left(1-\tfrac12\eta\right)^{t-t_1}
.
\]
We will also denote $\v=\vectortwo{1}{1/2}$. Then, using \cref{eq:o-sigmat} and \cref{eq:herenow}, we have that
\begin{align*}
w^{(t)}_1+\tfrac12 w^{(t)}_2 -\theta_1 - \tfrac12 \theta_2
&= \v^\top(\textbf{w}^{(t)}-\xi_{1})
\\
&=\v^\top(I-\eta \Sigma)^{t-t_1}(\mathbf{w}^{(t_1)}-\xi_{1}) \tag{\cref{eq:herenow}}
\\
&=(\tfrac34\alpha_t+\tfrac14\beta_t)(\mathbf{w}^{(t_1)}_1-\theta_1)+(\tfrac34 \alpha_t - \tfrac14 \beta_t)(\mathbf{w}^{(t_1)}_2-\theta_2) \tag{\cref{eq:o-sigmat}}
\\
&\ge (\tfrac38\alpha_t - \tfrac38\beta_t)(\mathbf{w}^{(t_1)}_2-\theta_2) \tag{$2(\mathbf{w}_1^{(t_1)}-\theta_1)\ge \theta_2-\mathbf{w}^{(t_1)}_2$}
\\
&\ge 0,
\end{align*}
where the last inequality is true since $\alpha_t\le \beta_t$ for $\eta<1/3$ and we also have that $\mathbf{w}^{(t_1)}_2<\theta_2$.
This concludes the proof of \cref{cl:trajectory_analysis}.
\end{proof}
\subsection{Proof of \cref{thm:gdr}}\label{prf:gdr}
For a vector $\mathbf{w}\in \mathcal{W}\subseteq \mathbb{R}^2$ let us denote by $\mathbf{w}^\perp:=(w_2,-w_1)$. In particular, we have that $\mathbf{w}^\top \mathbf{w}^\perp =0$ and $\|\mathbf{w}\|=\|\mathbf{w}^\perp\|$. Our proof relies on the following claim which we prove at the end of this section.
\begin{claim}\label{cl:fichs1}
Let $r$ be an admissible regularizer over $\mathbb{R}^2$. There are two points $\mathbf{w}_1$ and $\mathbf{w}_2$ in the unit ball such that for some $-0.005\|\mathbf{w}_1\| <\delta< 0.005 \|\mathbf{w}_1\|$ we have
\[ \mathbf{w}_2 = \mathbf{w}_1 + \delta \mathbf{w}_1^\perp,\]
and $r(\mathbf{w}_1)\ne r(\mathbf{w}_2).$
\end{claim}
We next proceed with the proof of \cref{thm:gdr}.
\begin{proof}[Proof of \cref{thm:gdr}]
Let $\mathbf{w}_1$ and $\mathbf{w}_2$ be as in \cref{cl:fichs1}. First, because GD is invariant to rotations, we can assume w.l.o.g.\ that $\mathbf{w}_1= \|\mathbf{w}_1\|\cdot \textbf{e}_2$, and hence $\mathbf{w}_2 =(\delta,1)\|\mathbf{w}_1\|$.
We now set $c_r=\frac{1}{2}|r(\mathbf{w}_1)-r(\mathbf{w}_2)|$. To choose $T_r,D_r$ and $\mathbf{w}_r$ we now look at two cases: if $r(\mathbf{w}_1)>r(\mathbf{w}_2)$ and if $r(\mathbf{w}_1)<r(\mathbf{w}_2)$.
\begin{itemize}[wide]
\item
First suppose $r(\mathbf{w}_1)>r(\mathbf{w}_2)$. By upper semi-continuity there exists $\delta_1>0$ such that every $\mathbf{w}$ with $\|\mathbf{w}-\mathbf{w}_1\|<\delta_1$ satisfies $r(\mathbf{w})>r(\mathbf{w}_2) + c_r$. We thus set $T_r=\frac{2640}{\eta\delta_1}$ and $\mathbf{w}_r=\mathbf{w}_2$.
We are left with choosing $D_r$. Note that in this case, the regularizer prefers a point with large Euclidean norm over a point with smaller Euclidean norm. Thus, to show it is not the implicit bias of SGD we only need to construct a distribution that is biased towards smaller Euclidean norms:
Indeed, consider the segment $\interval{\mathbf{w}_1}{\mathbf{w}_2}= \{\alpha\mathbf{w}_1+(1-\alpha)\mathbf{w}_2: 0\le \alpha\le 1\}$ and set
$$f(\mathbf{w})= \frac{1}{5280}\cdot\min_{\v\in \interval{\mathbf{w}_1}{\mathbf{w}_2}}\|\mathbf{w}-\v\|^2.$$
Our distribution $D_r$ is defined to choose $f$ w.p. $1$. Having defined $T_r,c_r,\mathbf{w}_r$ and $D_r$, we now set out to prove the result.
A simple analysis of the update step of SGD shows that for $\eta<1$ we have, for every $t$, that $\mathbf{w}^{(t)} = \sum_{i=0}^{t-1}(1-\frac{\eta}{2640})^i \frac{\eta}{2640} \mathbf{w}_1$. Hence,
\begin{align*}\|\mathbf{w}_S-\mathbf{w}_1\| =\|\frac{1}{T_r}\sum_{t=1}^{T_r} \mathbf{w}^{(t)}-\mathbf{w}_1\|
&\le \frac{1}{T_r} \sum_{t=1}^{T_r} \|\mathbf{w}^{(t)}-\mathbf{w}_1\|\\
&= \frac{1}{T_r}\sum_{t=1}^{T_r} \|\sum_{i=0}^{t-1} (1-\frac{\eta}{2640})^i \frac{\eta}{2640} \mathbf{w}_1 - \mathbf{w}_1\|\\
&= \frac{1}{T_r}\sum_{t=1}^{T_r} \| (1-\frac{\eta}{2640})^t \mathbf{w}_1\| & \sum_{i=0}^{t-1} (1-\eta)^i = \frac{1-(1-\eta)^{t}}{\eta}\\
&\le \frac{1}{T_r}\sum_{t=1}^{T_r} (1-\frac{\eta}{2640})^{t}\\
& \le \frac{2640}{T_r\eta}\\
& = \delta_1 & T_r=\frac{2640}{\delta_1 \eta}
\end{align*}
By the property of $\delta_1$ we have that $r(\mathbf{w}_S)>r(\mathbf{w}_2)+c_r$.
But because $\mathbf{w}_2$ is optimal (i.e. attains zero on $f$), we also have $F_S(\mathbf{w}_S)> F_S(\mathbf{w}_r)$. This proves the case $r(\mathbf{w}_1)>r(\mathbf{w}_2)$.
\item
Next, assume that $r(\mathbf{w}_1)<r(\mathbf{w}_2)$. As before, there exists $\delta_2>0$ such that if $\|\mathbf{w}-\mathbf{w}_2\|<\delta_2$ then we are guaranteed that $r(\mathbf{w})>r(\mathbf{w}_1)+c_r$. We then choose $T_r=\frac{1}{\delta_2 \eta}$ and $\mathbf{w}_r=\mathbf{w}_1$.
To define $D_r$, we now use the function $F_{\theta_1,\theta_2}$ from \cref{lem:noneuclid}. We assume w.l.o.g.\ that $\delta>0$; if this is not the case, we can instead use the function $\mathbf{w}\mapsto F_{\theta_1,\theta_2}(-\mathbf{w})$.
Let us set $\theta_2=\|\mathbf{w}_1\|$ and $\theta_1=|\delta| \|\mathbf{w}_1\| < 0.005\theta_2$. Again, we consider a deterministic distribution $D_r$ that chooses $F_{\theta_1,\theta_2}$ w.p. $1$.
Recall that we assume that $\mathbf{w}_1=\|\mathbf{w}_1\|\textbf{e}_2$, hence $\mathbf{w}_1=(0,\theta_2)$ and $\mathbf{w}_2=(\theta_1,\theta_2)$. Hence, by \cref{lem:noneuclid}, if we run over a sample of size $T_r>\frac{1}{\delta_2 \eta}$, we obtain that
\[\|\mathbf{w}_S - \mathbf{w}_2\|< \delta_2.\]
In particular $r(\mathbf{w}_S)>r(\mathbf{w}_1)+c_r$. But again $F_S(\mathbf{w}_S) \ge F_S(\mathbf{w}_1)$, because $\mathbf{w}_1$ is optimal.\qedhere
\end{itemize}
\end{proof}
Finally, we prove \cref{cl:fichs1}.
\begin{proof}[Proof of \cref{cl:fichs1}]
First, let us assume that there are $\u,\v$ such that $\|\u\|_2,\|\v\|_2=a$ and $r(\u)\ne r(\v)$ (at the end we will show that for admissible regularizer we always have such two points).
We will also assume that $\|\u-\v\|_2\le 10^{-9}\cdot a^2$. If no such close pair existed, we could cover the sphere $\{\mathbf{w}:\|\mathbf{w}\|_2=a\}$ with balls of radius $10^{-9}\cdot a^2$, on each of which $r$ would be constant, concluding that $r$ is constant on the whole sphere (which contradicts our assumption).
Next, we also assume that $|u_2|>\frac{1}{2}a$ (either $|u_1|>\frac{1}{2}a$ or $|u_2|>\frac{1}{2}a$, and the proof is similar in both cases, so we analyse only the latter).
Then, since $u_{1}^2+u_{2}^2=v_{1}^2+v_{2}^2=a^2$, one can show that
\[ \frac{u_{1}-v_{1}}{v_{2}+u_{2}} = \frac{v_{2}-u_{2}}{u_{1}+v_{1}}.\]
So, by choosing $\delta= \frac{u_{1}-v_{1}}{v_{2}+u_{2}} = \frac{v_{2}-u_{2}}{u_{1}+v_{1}}$ we have that:
\begin{align*}
|\delta|=\frac{|u_{1}-v_{1}|}{|v_{2}+u_{2}|}
&=
\frac{|u_{1}-v_{1}|}{|v_{2}|+|u_{2}|}
&( |u_2-v_2|<10^{-9}a^2,0.5a<|u_2|)\\
&\le
4\frac{|u_{1}-v_{1}|}{a} &(|v_{2}|+|u_{2}|>a/4)\\
&\le
0.0025 a & (|u_{1}-v_{1}|<0.0025/4 \cdot a)
\end{align*}
Using the first equality we can show that $u_{1}-\delta u_{2}=v_{1}+\delta v_{2}$.
Similarly, we can show that
$
u_{2}+\delta u_{1}=v_{2}-\delta v_{1}
.$
Taken together we obtain that $$\u-\delta \u^\perp= \v+\delta \v^\perp.$$ In particular $r(\u-\delta\u^\perp)=r( \v+\delta \v^\perp)$. Since $r(\u)\neq r(\v)$, we either have $r(\u)\ne r(\u-\delta \u^{\perp})$, or $r(\v)\ne r(\v+\delta \v^\perp)$. In the former case we choose $\mathbf{w}_1=\u$, whereas in the latter case we choose $\mathbf{w}_1=\v$.\\
Finally, so far we assumed we can find two points on a sphere with different regularization penalty. Suppose instead that $r$ is constant on every sphere. Assume also, towards a contradiction, that for every $-0.0025\|\mathbf{w}\|<\delta<0.0025\|\mathbf{w}\|$: \[r(\mathbf{w} + \delta \mathbf{w}^\perp)= r(\mathbf{w}).\] It is not hard to show that in this case $r$ is constant everywhere except possibly at $0$, making it inadmissible.
\end{proof}
\section{Proofs II: Distribution Dependent Regularization}
\subsection{Proof of \cref{lem:r2}}\label{prf:r2}
We start this section by proving the existence of the auxiliary construction in \cref{lem:r2}.
\rtwo*
Before we continue with the proof, notice the following immediate corollary of \cref{lem:r2}:
\begin{corollary}\label{cor:r2}
For every constants $c,\rho>0$, there is a distribution $D$ over a pair of convex functions $\{f(\mathbf{w};1),f(\mathbf{w};-1)\}$, such that $f(\mathbf{w};z)$ is a $\rho$-Lipschitz convex function in $\mathbb{R}^2$ and, for every $c<\eta<1$, writing $\v_{z,\eta} =- \eta \nabla f(0;z)$, the following holds:
\begin{itemize}
\item For every $z\in\{-1,1\}$ we have that $f(\v_{z,\eta};z)=f(\v_{-z,\eta};z)$;
\item For every $z\in \{-1,1\}$, $\nabla f(\v_{z,\eta};z)=\nabla f(\v_{-z,\eta};z)=0$;
\item $\|\v_{z,\eta}-\v_{-z,\eta}\|>\frac{\rho\eta}{4}$.
\end{itemize}
\end{corollary}
To derive \cref{cor:r2} from \cref{lem:r2}, take a distribution that w.p. $1/2$ picks $\rho f(\mathbf{w};1)$ from \cref{lem:r2}, and with probability $1/2$ picks $\rho f(\mathbf{w};-1)$. One can observe that the result holds.
\begin{proof}[Proof of \cref{lem:r2}]
Let us define $f(\mathbf{w};\pm 1)$ as follows. Denote $\v_1=-(\tfrac14,\tfrac34)$, $\v_{-1}=-\tfrac34 \textbf{e}_2$ and let
\begin{align*}
f(\mathbf{w};{z})&=
\max\{ 0,-\v_z^\top \mathbf{w}+c\|\v_{z}\|^2 \}
.
\end{align*}
It is easy to check that $\nabla f(0;1)=-\v_1$ and that $\nabla f(0;-1)=-\v_{-1}$, and that $\|\v_1-\v_{-1}\|\ge \tfrac14$.
Next, note that if $\eta > c$ then
\begin{align*}
f(\v_{1,\eta};1)
=\max\big(0,(-\eta+c)\cdot\|\v_{1}\|^2\big)=0=
\max\big(0,-\eta\v_{1}^\top \v_{-1}+c\|\v_{-1}\|^2\big)=f(\v_{-1,\eta};1)
\end{align*}
Similarly, $f(\v_{-1,\eta};-1)=0=f(\v_{1,\eta};-1)$.
Note that, because $f\ge 0$, the above also proves that $\nabla f(\v_{z,\eta};z)=\nabla f(\v_{-z,\eta};z)=0$.
\end{proof}
\subsection{Proof of \cref{thm:sgdr}}\label{prf:sgdr}
\cref{thm:sgdr} is an immediate corollary of the following theorem:
\begin{theorem}\label{thm:sgdistance}
Let $\mathcal{W}=\{\mathbf{w}: \|\mathbf{w}\|\le 1\}$. For every $T$ and constant $C>2$, there exists a distribution $D$ over $1$-Lipschitz convex functions over $\mathbb{R}^d$ where $d=10\cdot T$ such that if we run SGD with step size $1/T^2 <\eta \le C/\sqrt{T}$, the following holds:
for any regularizer $r$, w.p.~at least $1/10$ over the sample $S$ there is $\mathbf{w}_r \in \mathcal{W}$ such that
\begin{align*}
F_S(\mathbf{w}_r) &\;\leq\; F_S(\mathbf{w}_S)~,
\\
r(\mathbf{w}_r) &\;\leq\; r(\mathbf{w}_S)~,
\end{align*}
Moreover
\begin{align*}
\|\mathbf{w}_r-\mathbf{w}_S\|^2_2 &\;\ge \; \frac{T \eta^2}{500C^2}~.
\end{align*}
\end{theorem}
To see how \cref{thm:sgdr} follows, let $\mathbf{w}^*$ be the minimizer of $r(\mathbf{w})$ amongst all $\mathbf{w}\in \mathcal{W}$ with $F_{S}(\mathbf{w})\le F_{S}(\mathbf{w}_S)$; then by strong convexity
\[r(\mathbf{w}_{S}) \ge r(\mathbf{w}^*)+\frac{\lambda}{2}\|\mathbf{w}_S-\mathbf{w}^*\|^2.\]
Now if $\|\mathbf{w}_S-\mathbf{w}^*\|>\frac{1}{4}\cdot\|\mathbf{w}_r-\mathbf{w}_S\|$ we are done. If not, then
$$\|\mathbf{w}_S-\mathbf{w}^*\| \leq \frac{1}{4}\|\mathbf{w}_r-\mathbf{w}_S\|\leq\frac{1}{4}[\|\mathbf{w}_r-\mathbf{w}^*\|+\|\mathbf{w}_S-\mathbf{w}^*\|],$$
which leads to
$\|\mathbf{w}_r-\mathbf{w}^*\|\geq\frac{3}{4}\cdot\|\mathbf{w}_r-\mathbf{w}_S\|$. Using this, we get by strong convexity:
\begin{align*}
r(\mathbf{w}_S)&\ge r(\mathbf{w}_r)
\ge r(\mathbf{w}^*) + \frac{\lambda}{2}\|\mathbf{w}_r-\mathbf{w}^*\|^2
\ge r(\mathbf{w}^*) + \frac{9\lambda}{32}\|\mathbf{w}_r-\mathbf{w}_S\|^2.
\end{align*}
Now the result follows from \cref{thm:sgdistance}.
\paragraph{Proof of \cref{thm:sgdistance}}
Choose $d=10\cdot T$. Let $D_0$ be the distribution over convex functions in $\mathbb{R}^2$ whose existence follows from \cref{cor:r2} with $c<1/(4T^2)$ and $\rho=2/C$.
We now define a distribution over convex functions in $\mathbb{R}^d$ as follows: at each iteration pick uniformly $\mathbf{z}$ from the set $\{\mathbf{z}=(z;i): z\in \{-1,1\}, i=1,...,5T\}$ and let:
\[\mathbf{f}(\mathbf{w};\mathbf{z})= f((w_{2i-1},w_{2i});z).\]
To prove the result we proceed as follows: given a sample $S$ drawn i.i.d.\ from the distribution $D$, let us call a sample point $\mathbf{z}_t=(z_t,i_t)$ \emph{good} if $t<T/2$ and if $i_t$ appears only once in the sample (i.e. for any $t'\ne t$, $i_{t'}\ne i_t$). Denote by $S_g$ the set of good samples.
Next, for a sample $S$, define a sample $S'=\{\mathbf{z}'_1,\ldots, \mathbf{z}'_{T}\}$ to be a sample that differs from $S$ only at good sample points: for every good sample point, if $\mathbf{z}_t=(z_t,i_t)$ then $\mathbf{z}'_t=(z'_t,i_t)=(-z_t,i_t)$. It is not hard to see that $S$ and $S'$ are identically distributed (though dependent).
Now, first, we want to show that $F_{S}(\mathbf{w}_S)=F_{S}(\mathbf{w}_{S'})$ w.p. $1$, and that w.p. $0.2$ we have that
\[\|\mathbf{w}_{S}-\mathbf{w}_{S'}\|> \frac{\sqrt{T}\eta}{22C}.\]
If we can show this, then we are done. Indeed, by symmetry, with probability $1/2$ we have that $r(\mathbf{w}_{S'})\le r(\mathbf{w}_{S})$, and we can then take $\mathbf{w}_r=\mathbf{w}_{S'}$. Taken together, with probability $0.1$ all the requirements of the theorem hold.
\begin{claim}\label{cl:FS=FS'}
$F_{S}(\mathbf{w}_S)=F_{S}(\mathbf{w}_{S'})$
\end{claim}
\begin{proof}
Fix a sample $S$. To avoid cumbersome notation, and because $S$, $S'$ are fixed, we will denote here $\mathbf{w}_{S}=\bar\mathbf{w}$ and $\mathbf{w}_{S'}=\bar\mathbf{w}'$.
Next, for a vector $\mathbf{w}$ and coordinate $i_t$ let us also denote $\mathbf{w}(i_t)= (w_{2i_t-1},w_{2i_t})\in \mathbb{R}^2$.
We first analyze the trajectory of SGD over a sequence $\{\mathbf{z}_1,\ldots,\mathbf{z}_t\}$. One can prove, by induction, that at step $t$ the algorithm chooses point $\mathbf{w}^{(t)}$ as follows:
\begin{align}\label{eq:wti}
\mathbf{w}^{(t)}(i)=
\begin{cases}
-\eta \nabla f(0,z_{q}) & \textrm{If $i=i_q$ for some $q\le t-1$ and $q=\arg\min\{q': i_{q'}=i_q\}$}\\
0 & \textrm{else}
\end{cases}
\end{align}
Indeed, for $t=1$ this follows from initialization at $0$. For $t\ge 1$ we have that \begin{align*} \mathbf{w}^{(t+1)}=\Pi_\mathcal{W} \left(\mathbf{w}^{(t)}-\eta\nabla \mathbf{f}(\mathbf{w}^{(t)},\mathbf{z}_t)\right).
\end{align*}
Now, first assume that for some $q\le t-1$ we have that $i_t=i_q$. Then by the induction hypothesis we have that $\mathbf{w}^{(t)}(i_t)= -\eta \nabla f(0,z_{q})= \v_{z_q,\eta}$ in the notation of \cref{cor:r2}. Also, by \cref{cor:r2} we have that $\nabla \mathbf{f}(\mathbf{w}^{(t)},\mathbf{z}_t)=\nabla f(\mathbf{w}^{(t)}(i_t),z_t)=0$.
Next, if no such $q$ exists, we have by the induction hypothesis that $\mathbf{w}^{(t)}(i_t)=0$. The result will now clearly follow if we can show that
\[ \Pi_{\mathcal{W}}\left(\mathbf{w}^{(t)}-\eta \nabla \mathbf{f}(\mathbf{w}^{(t)},z_t)\right)=
\mathbf{w}^{(t)}-\eta \nabla \mathbf{f}(\mathbf{w}^{(t)},z_t).\]
But since $\mathbf{f}(\mathbf{w},\mathbf{z}_t)$ depends only on the tuple in $i_t$, we have that $\mathbf{w}^{(t)} \perp \nabla \mathbf{f}(\mathbf{w}^{(t)},\mathbf{z}_t)$ and $\nabla \mathbf{f}(\mathbf{w}^{(t)},\mathbf{z}_t)=\nabla f(0,z_t)$ (embedded in the tuple $i_t$), and we obtain that
\begin{align*}\|\mathbf{w}^{(t)}-\eta \nabla \mathbf{f}(\mathbf{w}^{(t)},\mathbf{z}_t)\|^2 &= \|\mathbf{w}^{(t)}\|^2 +\eta^2 \|\nabla f(\mathbf{w}^{(t)},z_t)\|^2\\&
= \sum_{k} \|\mathbf{w}^{(t)}(k)\|^2 +\eta^2 \|\nabla f(0,z_t)\|^2\\&
\le \eta^2 \sum_{q=1}^t \|\nabla f(0,z_q)\|^2 \\&
\le \eta^2 T\rho^2 \\&\le 1.\end{align*}
This proves that \cref{eq:wti} holds.
Next, the value $\mathbf{f}(\mathbf{w}; \mathbf{z}_t)$ depends only on $\mathbf{w}(i_t)$ (i.e. independent of the other coordinates). Also, for any $i$ and $t$, we have that $\mathbf{w}^{(t)}(i)$ depends only on $\mathbf{z}_{t'}$'s such that $t'\le t$ and $i_{t'}=i$. In particular, for any $\mathbf{z}_t \notin S_g$ we have that $\bar \mathbf{w}(i_t)=\bar\mathbf{w}'(i_t)$, hence
\[ f(\bar\mathbf{w};\mathbf{z}_t)=f(\bar\mathbf{w}(i_t);z_t)=f(\bar{\mathbf{w}}'(i_t);z_t)=f(\bar\mathbf{w}';\mathbf{z}_t).\]
Next, we want to show that for a good sample point $\mathbf{z}_t$ we also have that $f(\bar \mathbf{w};\mathbf{z}_t)=f(\bar \mathbf{w}';\mathbf{z}_t)$. For this, as in \cref{cor:r2}, let us denote, for any $\eta$ and $z$, $\v_{z,\eta}=-\eta\nabla f(\textbf{0};z)\in \mathbb{R}^2$.
Then, for any good coordinate we can show that
\begin{align}\bar\mathbf{w}(i_t)&= -\frac{T-t}{T}\eta \nabla f(\textbf{0};z_t)=\v_{z_t,\eta'},\label{eq:onestep}\\
\bar\mathbf{w}'(i_t)&= -\frac{T-t}{T}\eta \nabla f(\textbf{0};-z_t)=\v_{-z_t,\eta'}\label{eq:reflectstep}
\end{align}
where $\eta'=\frac{T-t}{T}\eta>\frac{1}{2}\eta>c$; indeed, recall that we chose $c<1/(4T^2)$ and $\eta>1/T^2$.
Thus, from \cref{cor:r2} we obtain that $f(\bar\mathbf{w}(i_t),z_t)=f(\bar\mathbf{w}'(i_t),z_t)$ and in particular
\[ \mathbf{f}(\bar\mathbf{w};\mathbf{z}_t)=\mathbf{f}(\bar\mathbf{w}';\mathbf{z}_t)
.\]
\end{proof}
\begin{claim}\label{cl:sgdistance1} w.p. at least $0.2$ we have that
\[\|\mathbf{w}_{S}-\mathbf{w}_{S'}\|> \frac{\sqrt{T}\eta}{22C},\]
\end{claim}
\begin{proof}
Again we will use the notation $\bar\mathbf{w}=\mathbf{w}_{S}$ and $\bar\mathbf{w}'=\mathbf{w}_{S'}$. Note that by \cref{cor:r2}, as well as \cref{eq:onestep,eq:reflectstep} we have that
\[\|\bar\mathbf{w}(i_t)-\bar\mathbf{w}'(i_t)\| \ge \eta\rho/8,\]
for any good sample point $\mathbf{z}_t$. Now:
\begin{align*}
\|\bar\mathbf{w}'-\bar\mathbf{w}\|^2&=
\sum_{i=1}^{5T} \|\bar\mathbf{w}(i)-\bar\mathbf{w}'(i)\|^2\\
&\ge \sum_{i\in S_g} \|\bar\mathbf{w}(i)-\bar\mathbf{w}'(i)\|^2\\
& \ge \frac{|S_g|(\eta\rho)^2}{64}. \numberthis\label{eq:Sgleft}
\end{align*}
Thus, we only need to show that $\mathbf{E}[|S_g|]>\frac{T}{5}$. Indeed, since $|S_g|\le T/2$, we obtain by Markov's inequality that with probability $0.25$, $|S_g|>T/7$.
To show that $\mathbf{E}[|S_g|]>T/5$, for a sample $S$, let $S_b$ contain all coordinates that collided (i.e. $\mathbf{z}_t$ such that for some $t'\ne t$ we have that $i_t=i_{t'}$).\\
In order to calculate $|S_b|$ define $\chi_{t,t'}=I(i_t=i_t')$ for every $t,t' \in [T]$. Note that
$Pr(\chi_{t,t'}=1)=\frac{1}{10T}$ and since there are at most $T(T-1)/2$ such pairs we get $\mathbf{E}[|S_b|]\le \sum_{t,t'} Pr(\chi_{t,t'}=1) \le (T-1)/20$. Note that any coordinate $i_t$ with $t<T/4$ that did not
collide is a good coordinate, hence
\begin{align*}
\mathbf{E}[|S_g|]&\ge T/4-\mathbf{E}[|S_b|]\\
&\ge T/5
\end{align*}
\end{proof}
\subsection{Proof of \cref{thm:nouc}}\label{prf:nouc}
\cref{thm:nouc} follows from the following refined statement:
\begin{theorem}\label{thm:noucquant}
Let $\mathcal{W}=\{\mathbf{w}: \|\mathbf{w}\|\le 1\}$. For every $T$ and constant $C>2$, there exists a distribution $D$ over $1$-Lipschitz convex functions over $\mathbb{R}^d$ where $d=10^5\cdot T$ such that if we run SGD with step size $1/T^2 <\eta < C/\sqrt{T}$, the following holds:
for any regularizer $r$, w.p.~at least $1/10$ over the sample $S$ there is a set $\mathcal{W}_{S} \subseteq \mathcal{W}$ such that
\begin{align*}
\sup_{\mathbf{w}\in \mathcal{W}_S} F_S(\mathbf{w}) &\;\leq\; F_S(\mathbf{w}_S)~,
\\
\sup_{\mathbf{w}\in \mathcal{W}_S} r(\mathbf{w})&\;\leq\; r(\mathbf{w}_S)~,
\end{align*}
Moreover, $\mathcal{W}_S$ is $(2T,10^{-4}\frac{\sqrt{T}\eta}{C})$-statistically complex.
\end{theorem}
Note that since $\mathcal{W}_S\subseteq K_{S,r}(\mathbf{w}_S)$, we derive \cref{thm:nouc} as a corollary.
\paragraph{Proof of \cref{thm:noucquant}}
Again, let $D_0$ be the distribution from \cref{cor:r2} with $c<\frac{1}{2kT^2}$, for some constant $k$ (to be determined later) and $\rho=C/2$.
We define a distribution $D$ over $\mathbb{R}^d$, where we let $d=100T\cdot k$, as follows: pick $k$ r.v.s $\{z^{(1)},...,z^{(k)}\}\in\{-1,1\}$ and $k$ distinct coordinates $\{i^{(1)},\ldots,i^{(k)}\}\in [d/2]$ (chosen uniformly from all possible distinct $k$-tuples), and set
\[
\mathbf{f}(\mathbf{w};\mathbf{z})
=
\frac{1}{k}\sum_{\ell=1}^k f((w_{2i^{(\ell)}-1},w_{2i^{(\ell)}}),z^{(\ell)}).
\]
Analogously to \cref{thm:sgdistance}, for a given sample $S$, let us define $S_g$ to be the set of ``good samples'' as follows: a tuple $(\mathbf{z}_t,\ell)$ is said to be \emph{good} if $t<T/2$ and $i^{(\ell)}_t$ did not collide; namely, for any other sampled coordinate $i^{(\ell')}_{t'}$ with $\ell'\in [k]$ and $t'\in [T]$, if $i^{(\ell')}_{t'}= i^{(\ell)}_t$ then $\ell'=\ell$ and $t'=t$.
Next, for every sample $S$ define \[\mathcal{S}(S)=\{S'=(\mathbf{z}'_1,\ldots, \mathbf{z}'_T): {i'_t}^{(\ell)} = i_t^{(\ell)} \forall t,\ell \textrm{~and}~\forall (\mathbf{z}_t,\ell)\notin S_g~ {z'}^{(\ell)}_t=z^{(\ell)}_t~\}.\]
In words, $\mathcal{S}(S)$ includes all samples where at a good coordinate $(\mathbf{z}_t,\ell)$ the sign $z_t^{(\ell)}$ may flip. Let us also define \[\mathcal{W}_S=\{\mathbf{w}_{S'}: S'\in \mathcal{S}(S), r(\mathbf{w}_{S'})\le r(\mathbf{w}_S)\}.\]
Analogously to \cref{thm:sgdistance}, the statement holds once we prove the following two facts: first, that for every $\mathbf{w}_{S'}\in \mathcal{W}_S$ we have that
$F_S(\mathbf{w}_S)=F_{S}(\mathbf{w}_{S'})$, and second, that $\mathcal{W}_{S}$ is $(\frac{Tk}{62},\frac{30^{-3}}{kC}\eta \sqrt{T})$-statistically complex (claims \ref{cl:iwantthistoend} and \ref{lem:cScomplex}, respectively). Thus, by setting $k=124$ we obtain the desired result.
\begin{claim}\label{cl:iwantthistoend}
For every $\mathbf{w}_{S'}\in \mathcal{W}_S$ we have that $F_S(\mathbf{w}_S)=F_{S}(\mathbf{w}_{S'})$.
\end{claim}
\begin{proof}
The proof is very similar to the analog case in \cref{thm:sgdistance}, and by a similar argument (which we omit) we can show that for every $t$
\[ \mathbf{w}^{(t)}(i)=
\begin{cases}
-\frac{\eta}{k}\nabla f(0,z^{(j)}_q) & \textrm{if for some $q\le t-1$ and $j\le k$ we have $i=i^{(j)}_q$, and for all $q'\le q$ and $j'\in [k]$ with $(q',j')\ne(q,j)$, $i\ne i^{(j')}_{q'}$} \\
0 & \textrm{else}
\end{cases}.\]
Fix $S'\in \mathcal{S}(S)$ and use the shorthand notation $\bar\mathbf{w}$ for $\mathbf{w}_S$ and $\bar \mathbf{w}'$ for $\mathbf{w}_{S'}$ as in the proof of \cref{thm:sgdistance}, we will also use $\mathbf{w}(i,\ell)=(w_{2i^{(\ell)}-1},w_{2i^{(\ell)}})\in \mathbb{R}^2$.
Another notation we add, as in \cref{cor:r2}, is as follows: for $\eta$ and $z$, $\v_{z,\eta}=-\eta \nabla f(0;z)\in \mathbb{R}^2$.
Then, for any sample $(\mathbf{z}_t,\ell)\in S_g$, we can show that
\begin{align}\bar\mathbf{w}(i_t,\ell)&= -\frac{T-t+1}{kT}\eta \nabla f(0;z^{(\ell)}_t)=\v_{z^{(\ell)}_t,\eta_t'},\label{eq:onestepuc}\\
\bar\mathbf{w}'(i_t,\ell)&= -\frac{T-t+1}{kT}\eta \nabla f(0;{z'}^{(\ell)}_t)=\v_{{z'}^{(\ell)}_t,\eta_t'},\label{eq:reflectstepuc}
\end{align}
where $\eta_t'=\frac{T-t+1}{kT}\eta>c$. Next, for any coordinate $(\mathbf{z}_t,\ell)\notin S_{g}$ we can show that $\bar \mathbf{w}(i_t,\ell)= \bar\mathbf{w}'(i_t,\ell)$, hence $f(\bar\mathbf{w}(i_t,\ell);z^{(\ell)}_t)=f(\bar\mathbf{w}'(i_t,\ell);z^{(\ell)}_t)$. Now, for $(\mathbf{z}_t,\ell)\in S_g$, from \cref{cor:r2} we obtain that
\begin{align*}
\mathbf{f}(\bar\mathbf{w};\mathbf{z}_t)
&=\frac{1}{k}\sum_{\ell=1}^k f(\bar\mathbf{w}(i_t,\ell);z^{(\ell)}_{t})\\
&=\frac{1}{k}\sum_{\ell=1}^k f(\v_{z_t^{(\ell)},\eta_t'};z^{(\ell)}_{t})&\cref{eq:onestepuc}\\
&=\frac{1}{k}\sum_{\ell=1}^k f(\v_{{z'_t}^{(\ell)},\eta_t'};z^{(\ell)}_{t})&\cref{cor:r2}\\
&=\frac{1}{k}\sum_{\ell=1}^k
f(\bar\mathbf{w}'(i_t,\ell);z^{(\ell)}_{t})
&\cref{eq:reflectstepuc}\\
&=\mathbf{f}(\bar\mathbf{w}';\mathbf{z}_t)
\end{align*}
\end{proof}
Next we prove the statistical complexity of $\mathcal{W}_S$:
\begin{claim}\label{lem:cScomplex}
The set $\mathcal{W}_S$ is $(\frac{Tk}{62},\frac{\eta \sqrt{T}}{30^3 k})$--statistically complex.
\end{claim}
\begin{proof}
One can show that if we randomly pick $S$ and then pick uniformly an element $S'\in \mathcal{S}(S)$, then $S$ and $S'$ are identically distributed. As a corollary, if we pick a random sample $S$ then w.p. $0.5$ we have that
\[|\mathcal{W}_S|\ge \frac{|\mathcal{S}(S)|}{2}.\]
We next argue that any set $A\subseteq \{\mathbf{w}_{S'}: S'\in \mathcal{S}(S)\}$ with $|A|>\frac{|\mathcal{S}(S)|}{2}$ is $(\frac{Tk}{62},\frac{30^{-3}}{k}\eta \sqrt{T})$-statistically complex.
Indeed, fix $S$. Similarly to the argument in \cref{cl:sgdistance1}, we have with probability $0.2$ that $|S_g|> T\cdot k/7$. We claim that if this event occurred then every subset of size at least $|\mathcal{S}(S)|/2$ is statistically complex.
Indeed, let us index the coordinates of $\mathbb{R}^{|S_g|}$ by the elements of $S_g$. Then we let $\u:\mathbb{R}^{d}\to \mathbb{R}^{|S_g|}$ be an affine map such that, for every $\mathbf{w}\in \{\mathbf{w}_{S'}:S' \in \mathcal{S}(S)\}$ and every $(\mathbf{z}_t,\ell)\in S_g$, the coordinate $\u(\mathbf{w})_{(\mathbf{z}_t,\ell)}$ satisfies the following:
\[
\u(\mathbf{w})_{(\mathbf{z}_t,\ell)}=
\begin{cases}
\frac{1}{\sqrt{|S_g|}} & \mathbf{w}(i_t,\ell)=\v_{1,\eta_t'}\\
-\frac{1}{\sqrt{|S_g|}} &\mathbf{w}(i_t,\ell)=\v_{-1,\eta_t'}
\end{cases}
\]
It can be seen from \cref{eq:onestepuc} and \cref{eq:reflectstepuc} and \cref{cor:r2} that $\|\v_{1,\eta'_t}-\v_{-1,\eta'_t}\|>\frac{T-t+1}{kT}\eta\rho/4>\frac{\eta\rho}{12k}$, hence we can define $\u$ to be $g$-Lipschitz where
\[g= \frac{24 k}{\eta\rho\sqrt{T}}.\]
Combining this with \cref{cor:feldman}, we get that there exists a distribution $D$ over $1$-Lipschitz convex functions such that, given $m=|S_g|/6>Tk/62$ elements $z_1,\ldots,z_m$ from $D$, with probability $1/4$ there is $\mathbf{w}\in A$ such that
\[\frac{1}{m}\sum_{i=1}^m f(\mathbf{w},z_i)=0,\]
but
\[\mathbf{E}_{\mathbf{z}\sim D}f(\mathbf{w},\mathbf{z})>3/(4g)>0.003\frac{\rho\eta \sqrt{T}}{k}\ge 0.003\frac{\eta \sqrt{T}}{kC}.\]
\end{proof}
\section{Proof of \cref{thm:nonconvex}}\label{prf:noconvex}
We begin the construction with the definition of the distribution $D$:
\begin{align*}
&f(\mathbf{w};z=1)=\begin{cases}
w_1 & \text{if } \mathbf{w} \in A \\
0 & \text{else}
\end{cases}, & f(\mathbf{w};z=3)=w_2, \\
& f(\mathbf{w};z=2)=\begin{cases}
-w_1 & \text{if } \mathbf{w} \in A \\
0 & \text{else}
\end{cases},
& f(\mathbf{w};z=4)=-w_2
\end{align*}
where $z \sim \mathrm{Uniform}(\{1,2,3,4\})$ and
\[
A=\{(w_1,w_2): |w_1|,|w_2| \leq \tfrac{1}{4}\}.
\]
Note that by symmetry $F(\mathbf{w})=\mathbf{E}_z[f(\mathbf{w}, z)]=0$; in particular, in expectation this is a convex function.
For the proof we will define two ``good'' events, set $c=\eta \sqrt{T}=\Theta(1)$, and let:
\begin{align*}
&E_1: |w_2^S|> \frac{1}{4}
,
\\
&E_2(\beta): |w^S_1|>\frac{\eta \sqrt{T}}{2}\cdot \beta \\
\end{align*}
where we write $\mathbf{w}_S= (w_1^S,w_2^S)$, and $\beta$ is a parameter sufficiently small so that
\[\textrm{erf}(\beta)\le \sqrt{\textrm{erf}\left(\frac{\sqrt{50}}{4c}\right)}- \textrm{erf}\left(\frac{\sqrt{50}}{4c}\right).\]
Note that $\beta$ depends only on $c=\Theta(1)$.
Let us denote $E(\beta)=E_1\cap E_2(\beta)$; we will rely on the following claim, which lower bounds the probability of the event $E(\beta)$. We defer the proof of the claim to the end of the section and continue with the proof:
\begin{claim}\label{cl:good}
Let $E(\beta)=E_1\cap E_2(\beta)$ and suppose that $z\sim D$ then, for our choice of $\beta$, and sufficiently large $T$
\[P(E(\beta))>1-\sqrt{\textrm{erf}\left(\frac{\sqrt{50}}{4c}\right)}.\]
\end{claim}
\begin{proof}[Proof of \cref{thm:nonconvex}]
Taking \cref{cl:good} into account, fix a random sample $S$. Let $\beta$ and $T$ be as in \cref{cl:good} and assume that the event $E:=E(\beta)$ occurred. Throughout, let us denote $c=\eta \sqrt{T}$.
To show that the statement holds, we define $\mathbf{w}^*_0=(0, w_2^S)$ and $\mathbf{w}^*_{-1}=(-w^S_1, w^S_2)$. We will show that for one of these candidate vectors the statement holds.
First we want to show that if $\eta =\Theta(1/\sqrt{T})$, then $\|\mathbf{w}_S-\mathbf{w}^*_0\|=\Theta(1)$. Indeed, note that since $E_2(\beta)$ occurred $$\|\mathbf{w}_S-\mathbf{w}^*_0\|_2\ge |w^S_1|\ge\frac{\eta \sqrt{T}}{2}\cdot \beta=\Theta(1).$$
Similarly $\|\mathbf{w}_S-\mathbf{w}^*_{-1}\|=\Theta(1)$.
Next we want to show that $F_S(\mathbf{w}^*_0)\le F_S(\mathbf{w}_S)$ or $F_S(\mathbf{w}^*_{-1})\le F_S(\mathbf{w}_S)$. Note that for every $\mathbf{w}$ such that $|w_2|\ge \frac{1}{4}$ and every $z\in\{1,2,3,4\}$, $f(\mathbf{w};z)$ depends only on the second coordinate, namely $w_2$. In particular, if $|w^S_2|\ge \frac{1}{4}$ we obtain by the construction that $F_S(\mathbf{w}_S)=F_S(\mathbf{w}^*_0)=F_S(\mathbf{w}^*_{-1})$. Thus, due to event $E_1$ we obtain the desired result.
Finally, we want to show that $\min \{r(\mathbf{w}^*_0),r(\mathbf{w}^*_{-1})\}<r(\mathbf{w}_S)$ with probability at least $1/4$. First, assume that with probability $1/2$ we have that $r(\mathbf{w}^*_{-1})\ne r(\mathbf{w}_S)$. By symmetry one can show that in this case we have $r(\mathbf{w}^*_{-1})<r(\mathbf{w}_S)$ with probability $1/2$. Next, assume that $r(\mathbf{w}^*_{-1})=r(\mathbf{w}_S)$ with probability at least $1/2$. In this case, we obtain that:
\begin{align*}
r(\mathbf{w}^*_0)&=r(0.5\cdot \mathbf{w}_S+0.5\cdot \mathbf{w}^*_{-1})\\
&<\max(r(\mathbf{w}_S),r(\mathbf{w}^*_{-1}))\\
&=r(\mathbf{w}_S)
\end{align*}
\end{proof}
We are left with proving \cref{cl:good}.
\paragraph{Proof of \cref{cl:good}}
We will bound each event $E_1,E_2$ separately.
We begin by bounding the event $E_1$:
\paragraph{Bounding $E_1$:}
For $E_1$ we claim the following:
\begin{equation}\label{eq:E1}
Pr\Big(|w_2^S|\leq \frac{1}{4} \Big) \leq \textrm{erf}\left(\frac{\sqrt{50}}{4c}\right)+\sqrt{\frac{50^3}{T}}\end{equation}
where $\textrm{erf}$ is the error function, namely $\textrm{erf}\left(x\right)=1-2\Phi(-x)$, and $\Phi$ is the CDF of a standard normal random variable.
Note that if $\eta =O(\frac{1}{\sqrt{T}})$, given the above bound, the probability that $|w_2^S| > \frac{1}{4}$ is a constant.
\begin{proof}
Recall that \[w_2^S= \frac{1}{T}\sum_{t=1}^T \eta (T-t) \frac{\partial f(\mathbf{w}^{(t)},z_t)}{\partial w_2},\] and one can observe that $\frac{\partial f(\mathbf{w}^{(t)},z_t)}{\partial w_2}$ equals $1$ w.p. $1/4$, $-1$ w.p. $1/4$, and $0$ w.p. $1/2$, independently of $z_{t'}$ for $t'\ne t$.
Hence, applying \cref{cor:be}, with $c=\eta \sqrt{T}$, $k=1$ and $a=\frac{\sqrt{50}}{4c}$ we obtain that
\begin{align*}
P(-\frac{1}{4}\le w_2^S\le \frac{1}{4})=
&P\Big(-\frac{\sqrt{50}}{4c}\frac{c}{\sqrt{50}}\leq w_2^S \leq \frac{\sqrt{50}}{4c}\cdot \frac{c}{\sqrt{50}}\Big)\\
&\leq \textrm{erf}\left(\frac{\sqrt{50}}{4c}\right)+\sqrt{\frac{50^3}{T}} \numberthis\label{eq:e1}
\end{align*}\end{proof}
We next move on to bound $E_2$.
\paragraph{Bounding $E_2$:}
Let us consider a random sample $S'=\{z'_1,\ldots, z'_T\}$ that is generated by picking a random sample $S=z_1,\ldots, z_T$ i.i.d distributed according to $D$, and then for every $z_t$ such that $z_t\in \{1,2\}$ with probability half we let $z'_t=1$ and with probability half we let $z'_t=2$. It can be seen that $S'$ is an i.i.d sequence drawn according to the distribution $D$.
Next, let us denote $c=\eta \sqrt{T}$ and fix a parameter $\alpha$ (to be chosen later). Define the event $$E_\tau=\Big\{S: \min\{t:\mathbf{w}^{(t)} \notin A\}> \frac{T}{\alpha\cdot c}\Big\}.$$
For our choice of $\beta>0$ we claim that for every $\alpha>0$
$$Pr\Big(|w_1^{S'}|<\frac{\beta }{\sqrt{50\alpha}}\Big| S,S'\in E_\tau\Big)\le 2\textrm{erf}(\beta)+2\sqrt{\frac{50^2 c^3\alpha}{T}}$$
Indeed, given $S$, let $\tau = \min\{t: \mathbf{w}^{(t+1)}\notin A\}$, set $S'_{\tau}=\{z'_1,\ldots, z'_{\tau}\}$, and denote
\[X_{\tau}=\frac{1}{\sqrt{T}}\sum_{t=1}^\tau c\frac{T-t}{T}x_t.\]
where $x_t$ are i.i.d random variables such that w.p. $1/4$ equals $1$, w.p. $1/4$ equals $-1$ and w.p. $1/2$ equals $0$.
Due to symmetry we have that:
\[Pr\Big(|w_1^{S'}|<\frac{\beta }{\sqrt{\alpha\cdot c}}\Big| S,S'\in E_\tau\Big)\le 2Pr\Big(|X_\tau|<\frac{\beta }{\sqrt{\alpha \cdot c}}\Big| S,S'\in E_\tau\Big)\]
One can observe that \[X_\tau= \sum_{t\in I} \eta \frac{T-t}{T}x_t= \frac{1}{\sqrt{T}} \sum_{t\in I} c \frac{T-t}{T}x_t.\]
Thus applying again \cref{cor:be} with $c=\sqrt{T}\eta$, $I=\{1,\ldots, T/k\}$ with, $k=\alpha\cdot c^3/50$ and $a=\beta$,
we obtain the desired result.
Next, we want to bound $P(\neg E_{\tau})$, i.e. the probability that $\mathbf{w}^{(t)}\notin A$ for some $t<T/(\alpha\cdot c)$.
Let $T_{\alpha}=T/(\alpha\cdot c)$ and let $Z_1,\ldots, Z_{T_\alpha}$, be i.i.d copies of a random variable such that $P(Z_t=1)=P(Z_t=-1)=1/2$. Then
\begin{align*}
P(\neg E_\tau)&\leq
4P\left(\min\{t: \eta \sum_{i=1}^t Z_i>\frac{1}{4}\}<\frac{T}{\alpha\cdot c}\right) \\
& \le 4P\left(\min\{t: \eta \sum_{i=1}^t Z_i>\frac{1}{4}\}<T_\alpha, \eta\sum_{i=1}^{T_\alpha} Z_i\geq\frac{1}{4}\right)+
4P\left(\min\{t: \eta \sum_{i=1}^t Z_i>\frac{1}{4}\}<T_\alpha, \eta\sum_{i=1}^{T_\alpha} Z_i\leq\frac{1}{4}\right)\\
&= 8P\left(\eta \sum_{i=1}^{T_{\alpha}}Z_i\geq\frac{1}{4} \right) \numberthis \label{eq:toZ}
\end{align*}
where the last inequality is by symmetry (reflection principle).
Next, by applying Hoeffding's inequality we obtain that
\begin{align*}
P(\eta \sum_{t=1}^{T_\alpha} Z_t \geq \frac{1}{4})=
P(\frac{\alpha\eta}{\sqrt{T}}\sum_{t=1}^{T_\alpha} Z_t \geq \frac{\alpha}{4\sqrt{T}})
=P(\frac{\alpha c}{T}\sum_{t=1}^{T_\alpha} Z_t \geq \frac{\alpha }{4\sqrt{T}})
\le e^{-\frac{\alpha c}{32}}
\end{align*}
Taken together we obtain that
\[P(\neg~ E_\tau)\leq 8e^{-\frac{\alpha c}{32}},\]
and
\begin{align*}P(\neg E_2(\beta))&\le P(\neg E_2(\beta)|E_\tau)P(E_\tau)+ P(\neg E_\tau)\\
&\le
\mathbf{E}_{S} \left[P(|w_1^{S'}|<\frac{\beta}{\sqrt{\alpha c}}|S,S'\in E_{\tau})\right] + P(\neg E_\tau)\\
&\le 2\textrm{erf}(\beta)+2\sqrt{\frac{50^2c^3\alpha}{T}}+8e^{-\frac{\alpha c}{32}}\numberthis\label{eq:e2}
\end{align*}
which yields the desired result.
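The Hoeffding step used above can likewise be checked by simulation: for i.i.d.\ signs $Z_i$, Hoeffding's inequality gives $P(\sum_{i=1}^{n} Z_i \ge s) \le e^{-s^2/(2n)}$. A minimal sketch (ours, with arbitrary $n$ and $s$):

```python
import numpy as np

# Empirical check of Hoeffding's inequality for a Rademacher sum:
# P(sum_{i<=n} Z_i >= s) <= exp(-s^2 / (2n)).
rng = np.random.default_rng(1)
n, s, n_trials = 200, 30, 20_000

sums = rng.choice([-1, 1], size=(n_trials, n)).sum(axis=1)
p_emp = (sums >= s).mean()        # empirical tail probability
bound = np.exp(-s**2 / (2 * n))   # Hoeffding upper bound

assert p_emp <= bound
```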
\paragraph{Bounding $E(\beta)$:}
\cref{eq:e1,eq:e2} then yield:
\begin{align*}
P(\neg E)&< P(\neg E_1)+ P(\neg E_2(\beta)) \\
&\le \textrm{erf}\left(\frac{\sqrt{50}}{4c}\right)
+2\textrm{erf}(\beta) + 3\sqrt{\frac{50^2(50+c^3\alpha)}{T}}+8e^{-\frac{\alpha \cdot c}{32}}
\end{align*}
Choosing $\beta$ sufficiently small, one can see that for large enough $\alpha$ and $T$ we obtain the desired result.
\section{Introduction}\label{section_introduction}
As of 2007 the mutual fund industry controlled $23\%$ of household taxable assets in the United States\footnote{
Data is taken from the Investment Company Institute's 2007 fact book available at www.ici.org. }.
%
In absolute terms this corresponded to 4.4 trillion USD and 24\% of U.S. corporate equity holdings. Large players such as institutional investors are known to play an important role in the market \citep{corsetti-2001}. This raises the question of who has this influence: Are mutual fund investments concentrated in a few dominant large funds, or spread across many funds of similar size? Are there mutual funds that are ``too big to fail''?
This question is best addressed in terms of the behavior of the upper tail of the mutual fund size distribution. The two competing hypotheses usually made in studies of firms are Zipf's law vs. a lognormal. Zipf's law means that the distribution of the size $s$ is a power law with tail exponent $\zeta_s \approx 1$, i.e.
\[
P(s>X) \sim X^{-\zeta_s}.
\]
Log-normality means that $\log s$ has a normal distribution, i.e. the density function $p(s)$ obeys
\[
p(s)=\frac{1}{s\sigma_s\sqrt{2\pi}}\exp\left(-\frac{(\log(s)-\mu_s)^2}{2\sigma_s^2}\right).
\]
From the point of view of extreme value theory this distinction is critical, since it implies a completely different class of tail behavior\footnote{
According to extreme value theory a probability distribution can have only four possible types of tail behavior. The first three correspond to distributions with finite support, thin tails, and tails that are sufficiently heavy that some of the moments do not exist, i.e. power laws. The fourth category corresponds to distributions that in a certain sense do not converge; it is remarkable that most known distributions fall into one of the first three categories \citep{Embrechts97}.}.
These are both heavy tailed, but Zipf's law is much more heavy tailed. For a log-normal all the moments exist, whereas for Zipf's law none of the moments exist. For Zipf's law an estimator of the mean fails to converge. In practical terms, for mutual funds this would imply that for any sample size $N$, with significant probability an individual fund can be so large that it is bigger than all other $N-1$ firms combined. In contrast, for a log-normal, in the limit as $N \to \infty$ the relative size of a single fund becomes negligible.
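This qualitative difference is easy to illustrate by simulation. In the sketch below (our own illustration, not drawn from the CRSP data), the single largest draw from a Zipf-like sample (Pareto with $\zeta_s=1$) carries a sizable share of the total, while the largest draw from a log-normal sample is negligible:

```python
import numpy as np

# Share of the total held by the single largest observation:
# Pareto with tail exponent 1 (Zipf's law) vs a standard log-normal.
rng = np.random.default_rng(2)
n = 100_000

pareto = rng.pareto(1.0, n) + 1.0        # P(s > X) = X^{-1} for X >= 1
lognorm = rng.lognormal(0.0, 1.0, n)

share_pl = pareto.max() / pareto.sum()
share_ln = lognorm.max() / lognorm.sum()

# The largest Pareto draw dominates; the largest log-normal draw does not.
assert share_pl > 5 * share_ln
assert share_ln < 0.01
```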
This question takes on added meaning because the assumption that mutual funds follow Zipf's law has been argued to be responsible for the observed power law distribution of trading volume \citep{levy-1996,solomon-2001}. Gabaix et al. have also asserted that the mutual fund distribution follows Zipf's law and have used this in a proposed explanation for the distribution of price returns \citep{gabaix-2003-nature,gabaix-2006}.
We resolve this empirically using the Center for Research in Security Prices (CRSP) dataset and find that the equity fund size distribution is much better described by a log-normal distribution.
Our results are interesting in the broader context of the literature on firm size. Mutual funds provide a particularly good type of firm to study because there are a large number of funds and their size is accurately recorded. It is generally believed that the resulting size distribution from aggregating across industries has a power law tail that roughly follows Zipf's law, but for individual industries the tail behavior is debated\footnote{
Some studies have found that the upper tail is a log-normal \citep{simon-1958,stanley-1995,ijiry-1977,stanley-1996,amaral-1997,bottazzi-2003a,dosi-2005} while others have found a power law \citep{axtell-2001,bottazzi-2003a,dosi-2005}}.
A large number of stochastic process models have been proposed to explain this\footnote{
%
For past stochastic models see \citep{gibrat-1931,simon-1955,simon-1958,mandelbrot-1963,ijiry-1977,sutton-1997,gabaix-2003-mit,gabaix-2003-nature}}.
%
Our results add support to the notion that for single industries the distribution is log-normal.
The log-normality of the distribution of mutual funds is also interesting for what it suggests about the underlying processes that determine mutual fund size. In a companion paper \cite{Schwarzkopf10b} we develop a model for the random process of mutual fund entry, exit and growth under the assumption of market efficiency, and show that this gives a good fit to the data studied here. We show that while the steady-state solution is a power law, the timescale for reaching this solution is very slow. Thus given any substantial non-stationarity in the entry and exit processes the distribution will remain in its non-equilibrium log-normal state. See the discussion in Section V.
\section{Data Set}\label{data_set}
We analyze the Center for Research in Security Prices (CRSP) Survivor-Bias-Free US Mutual Fund Database\footnote{
The US Mutual Fund Database can be purchased from the Center for Research in Security Prices (www.crsp.com).}.
The database is survivor bias free as it contains historical performance data for both active and inactive mutual funds.
We study monthly data from 1991 to 2005\footnote{
There is data on mutual funds starting in 1961, but prior to 1991 there are very few entries. There is a sharp increase in 1991, suggesting incomplete data collection prior to 1991.}
on all reported equity funds.
We define an equity fund as one whose portfolio consists of at least $80\%$ stocks. The
results are not qualitatively sensitive to this, e.g. we get essentially the same results even if we use all funds.
The data set has monthly values for the Total Assets Managed (TASM) by the fund and the Net Asset Value (NAV). We define the size $s$ of a fund to be the value of the TASM, measured in millions of US dollars and corrected for inflation relative to July 2007. Inflation adjustments are based on the Consumer Price Index, published by the US Bureau of Labor Statistics (BLS).
\section{Is the tail a power law?}\label{section_is_pl}
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{figures/s_CDF_inset.eps}
\caption{\label{s_CDF}
The CDF for the mutual fund size $s$ (in millions of 2007 dollars) is plotted with a double logarithmic scale. The cumulative distribution for funds existing at the end of the years 1993, 1998 and 2005 are given by the full, dashed and dotted lines respectively.\newline
Inset: The upper tail of the CDF for the mutual funds existing at the end of 1998 (dotted line) is compared to an algebraic relation with exponent $-1$ (solid line).}
\end{center}
\end{figure}
Despite the fact that the mutual fund industry offers a large quantity of well-recorded data, the size distribution of mutual funds has not been rigorously studied. This is in contrast with other types of firms where the size distribution has long been an active research subject. The fact that the distribution is highly skewed and heavy tailed can be seen in Figure~\ref{s_CDF}, where we plot the cumulative distribution of sizes $P(s>X)$ of mutual fund sizes in three different years.
A visual inspection of the mutual fund size distribution suggests that it does not follow Zipf's law\footnote{
%
Previous work on the size distribution of mutual funds by Gabaix et al. \citep{gabaix-2003-mit,gabaix-2003-nature,gabaix-2006}
argued for a power law while we argue here for a log-normal.}.
%
In the inset of Figure~\ref{s_CDF} we compare the tail for funds with sizes $s>10^2$ million to a power law $s^{-\zeta_s}$, with $\zeta_s=1$. Whereas a power law corresponds to a straight line when plotted on a double logarithmic scale, the data show substantial and consistent downward curvature. The main point of this paper is to make more rigorous tests of the power law vs. the log-normal hypothesis. These back up the intuitive impression given by this plot, indicating that the data are not well described by a power law.
To test the validity of the power law hypothesis we use the method developed by Clauset et al. \cite{clauset-2007}.
They use the somewhat strict definition\footnote{
In extreme value theory a power law is defined as any function that in the limit $s \to \infty$ can be written $p(s) = g(s)s^{-(\zeta_s + 1)}$ where $g(s)$ is a slowly varying function. This means it satisfies $\lim_{s \to \infty} g(ts)/g(s) = C$ for any $t > 0$, where $C$ is a positive constant. The test for power laws in reference \citep{clauset-2007} is too strong in the sense that it assumes that there exists an $s_0$ such that for $s > s_0$, $g(s)$ is constant.}
that the probability density function $p(s)$ is a power law if there exists an $s_{min}$ such that for sizes larger than $s_{min}$, the functional form of the density $p(s)$ can be written
\begin{equation}\label{eq_pdf_pl}
p(s)=\frac{\zeta_s}{s_{min}}\left(\frac{s}{s_{min}}\right)^{-(\zeta_s + 1)},
\end{equation}
where the distribution is normalized in the interval $[s_{min},\infty)$.
There are two free parameters $s_{min}$ and $\zeta_s$.
This crossover size $s_{min}$ is chosen such that it minimizes
the Kolmogorov-Smirnov (KS) statistic $D$, which is the distance between the
CDF of the empirical data $P_{e}(s)$ and that of the fitted model $P_f(s)$, i.e.
\[
D=\max_{s\geq s_{min}}\left| P_e(s)-P_f(s)\right|.
\]
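A minimal sketch of this fitting procedure (our own illustration, not the authors' code): for a candidate $s_{min}$, the maximum-likelihood exponent has the closed form $\hat\zeta_s = n_{tail}/\sum_j \ln(s_j/s_{min})$, and $s_{min}$ is chosen to minimise the KS distance $D$. On synthetic power-law data the true exponent is recovered:

```python
import numpy as np

def fit_tail(s, s_min):
    """MLE tail exponent and KS distance for a power law above s_min."""
    tail = np.sort(s[s >= s_min])
    n = len(tail)
    zeta = n / np.log(tail / s_min).sum()        # closed-form MLE (Hill estimator)
    emp = np.arange(1, n + 1) / n                # empirical CDF of the tail
    fit = 1.0 - (tail / s_min) ** (-zeta)        # fitted power-law CDF
    return zeta, np.abs(emp - fit).max()

# Synthetic fund sizes drawn from an exact power law with zeta_s = 1.1.
rng = np.random.default_rng(3)
s = rng.pareto(1.1, 50_000) + 1.0

# Choose s_min among a few candidate cutoffs by minimising the KS distance D.
candidates = np.quantile(s, [0.0, 0.5, 0.75, 0.9])
s_min_hat, (zeta_hat, D) = min(
    ((c, fit_tail(s, c)) for c in candidates), key=lambda t: t[1][1]
)

assert abs(zeta_hat - 1.1) < 0.08   # the true exponent is recovered
assert D < 0.05                     # the best fit is close to the data
```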
Using this procedure we estimate $\zeta_s$ and $s_{min}$ for the years 1991--2005 as shown in Table~\ref{table}. The values of $\zeta_s$ computed in each year range from $0.78$ to $1.36$ and average $\bar{\zeta_s}= 1.09\pm 0.04$. If these are indeed power laws, this is consistent with Zipf's law. But of course, merely computing an exponent and getting a low value does not mean that the distribution is actually a power law.
To test the power law hypothesis more rigorously we follow the Monte Carlo method utilized by Clauset et al. Assuming independence, for each year we generate $10,000$ synthetic data sets,
each drawn from a power law with the empirically measured values of $s_{min}$ and $\zeta_s$. For each data-set we calculate the KS statistic to its best fit. The $p$-value is the fraction of the data sets for which the KS statistic to its own best fit is larger than the KS statistic for the empirical data and its best fit.
The results
are summarized in Table~\ref{table}. The power law hypothesis is rejected with two standard deviations or more in six of the years and rejected at one standard deviation or more in twelve of the years (there are fifteen in total). Furthermore there is a general pattern that as time progresses the rejection of the hypothesis becomes stronger. We suspect that this is because of the increase in the number of equity funds. As can be seen in Table~\ref{table},
the total number of equity funds increases roughly linearly in time, and the number in the upper tail $N_{tail}$ also increases.
\begin{turnpage
\begin{table*}
\begin{center}
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c||c|c|}
\hline
variable & 91&92&93&94&95&96&97&98&99&00&01&02&03&04&05&mean&std \\
\hline \hline
${\cal R}$ & -0.50 & -1.35 & -1.49& -1.71 & -3.29 & -18.42 & -2.25 & -1.29 & -6.57 & -4.96& -2.63 & -2.95 & -2.00 & -1.05& -0.99 & -3.43 & 4.45 \\
\hline \hline
$N$ & 372 & 1069 & 1509 & 2194 & 2699 & 3300 & 4253 & 4885 & 5363 & 5914 & 6607 & 7102
&7794& 8457 & 8845& - & - \\
\hline \hline
E$[s]$ (mn)&810& 385 & 480 & 398 & 448 & 527 & 559 & 619 & 748 & 635 & 481 & 335 & 425 & 458 & 474 & 519 & 134\\
\hline
Std$[s]$ (bn) &1.98 & 0.99 & 1.7 & 1.66 & 1.68 & 2.41 & 2.82 & 3.38 & 4.05 & 3.37 & 2.69 & 1.87 & 2.45 & 2.64 & 2.65 & 2.42& 0.8\\
\hline \hline
E$[\omega]$&5.58 & 4.40 & 4.40 & 3.86 & 3.86 & 3.91 & 3.84 & 3.85 & 4.06 & 3.97 & 3.60 & 3.37 & 3.55 & 3.51 & 3.59&3.96&0.54\\
\hline
Std$[\omega]$ &1.51 & 1.98 & 2.09 & 2.43 & 2.50 & 2.46 & 2.50 & 2.51 & 2.46 & 2.45 & 2.63 & 2.42 & 2.49 & 2.59 & 2.50 & 2.34& 0.29\\
\hline \hline
$\zeta_s$ & 1.33 & 1.36 & 1.19 & 1.15 & 1.11 & 0.78 & 1.08 &1.10 & 0.95 & 0.97 & 1.01 & 1.07 & 1.07 & 1.10 & 1.14 & 1.09 & 0.14 \\
\hline
$s_{min}$ & 955 & 800 & 695 & 708 & 877 & 182 & 1494 & 1945 & 1147 & 903 & 728 & 836 & 868 & 1085 & 1383 & 974 & 408 \\
\hline $N_{tail}$& 81 & 129 & 232 & 256 & 280 & 1067& 290 & 283 & 557 & 662 & 717& 494 & 652 & 630 & 550 & - & - \\
\hline \hline
$p$-value & 0.58 & 0.48 & 0.02 & 0.45 & 0.07 & 0 & 0.01 & 0.11 & $5\times 10^{-4}$ & 0.04 & 0.03 & 0.07 & 0.08 & 0.08 & 0.15 & 0.15 & 0.19 \\
\hline \hline
\end{tabular}
\normalsize
\end{center}
\caption{\label{table}
Table of monthly parameter values for equity funds defined such that the portfolio contains a fraction of at least $80\%$ stocks.
The values for each of the monthly parameters (rows) were calculated for each year (columns). The mean and standard deviation are evaluated for the monthly values in each year. \newline
${\cal R}$ - the base 10 log likelihood ratio of a power law fit relative to a log-normal fit as given by equation (\ref{eq_LLR}).
A negative value of ${\cal R}$ indicates that the log-normal hypothesis is a likelier description than a power law. For all years the value is negative meaning that the log-normal distribution is more likely. \newline
$N$ - the number of equity funds existing at the end of each year. \newline
$E[\omega]$ - the mean log size of funds existing at the end of each year. \newline
$Std[\omega]$ - the standard deviation of log sizes for funds existing at the end of each year. \newline
$E[s]$ - the mean size (in millions) of funds existing at the end of each year. \newline
$Std[s]$ - the standard deviation of sizes (in billions) for funds existing at the end of each year. \newline
$\zeta_s$ - the power law tail exponent (\ref{eq_pdf_pl}). \newline
$s_{min}$ - the lower tail cutoff (in millions of dollars) above which we fit a power law (\ref{eq_pdf_pl}). \newline
$N_{tail}$ - the number of equity funds belonging to the upper tail s.t. $s\geq s_{min}$. \newline
$p$-value - the probability of obtaining a goodness of fit at least as bad as the one calculated for the empirical data, under the null hypothesis of a power law upper tail. \newline
}
\end{table*}
\end{turnpage}
We conclude that the power law tail hypothesis is questionable but cannot be unequivocally rejected in every year. Stronger evidence against it comes from comparison to a log-normal, as done in the next section.
\section{Is the tail log-normal?}\label{section_is_ln}
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{figures/qq-pl.eps}
\caption{\label{qq-pl}
A Quantile-Quantile (QQ) plot for the upper tail of the size distribution of equity funds. The quantiles are the base ten logarithm of the fund size, in millions of dollars. The empirical quantiles are calculated from the size distribution of funds existing at the end of the year 1998. The empirical data were truncated from below such that only funds with size $s\geq s_{min}$ were included in the calculation of the quantiles. (a) A QQ-plot with the empirical quantiles as the x-axis and the quantiles for the best fit power law as the y-axis. The power law fit for the data was done using the maximum likelihood described in Section~\ref{section_is_pl}, yielding $s_{min}=1945$ and $\zeta_s= 1.107$.
(b) A QQ-plot with the empirical quantiles as the x-axis and the quantiles for the best fit log-normal as the y-axis, with the same $s_{min}$ as in (a).
The log-normal fit for the data was done using the maximum likelihood estimation given $s_{min}$ (\ref{eq_pln_smin})
yielding $\mu=2.34$ and $\sigma=2.5$.}
\end{center}
\end{figure}
A visual comparison between the two hypotheses can be made by looking at the Quantile Quantile (QQ) plots for the empirical data compared to each of the two hypotheses. In a QQ-plot we plot the quantiles of one distribution as the x-axis and the other's as the y-axis. If the two distributions are the same then we expect the points to fall on a straight line. Figure~\ref{qq-pl} compares the two hypotheses, making it clear that the log-normal is a much better fit than the power law.
For the log-normal QQ plot most of the large values in the distribution fall on the dashed line corresponding to a log-normal distribution, though the very largest values are somewhat above the dashed line. This says that the empirical distribution decays slightly faster than a log-normal. There are two possible interpretations of this result: Either this is a statistical fluctuation or the true distribution really has slightly thinner tails than a log-normal. In any case, since a log-normal decays faster than a power law, it strongly suggests that the power law hypothesis is incorrect and the log-normal distribution is a better approximation.
A more quantitative method to address the question of which hypothesis better describes the data is to compare the likelihood of the observation in both hypotheses \citep{clauset-2007}.
We define the likelihood for the tail of the distribution to be
\[
L=\prod_{s_j\geq s_{min}}p(s_j).
\]
We define the power law likelihood as \mbox{$L_{PL}=\prod_{s_j\geq s_{min}}p_{PL}(s_j)$}
with the probability density of the power law tail given by (\ref{eq_pdf_pl}).
The lognormal likelihood is defined as $L_{LN}=\prod_{s_j\geq s_{min}}p_{LN}(s_j)$
with the probability density of the lognormal tail given by
\begin{eqnarray}\label{eq_pln_smin}
p_{LN}(s)&=&\frac{p(s)}{1-P(s_{min})} \\%
&=&\frac{\sqrt2}{s\sqrt{\pi}\sigma}\left[\mathrm{erfc}\left( \frac{\ln s_{min} -\mu}{\sqrt{2}\sigma} \right)\right]^{-1} \exp\left[-\frac{(\ln s-\mu)^2}{2\sigma^2}\right]. \nonumber
\end{eqnarray}
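As a numerical sanity check (ours), the normalisation in (\ref{eq_pln_smin}) can be verified directly: the truncated density integrates to one on $[s_{min},\infty)$. The sketch below uses the 1998 fit values quoted in Figure~\ref{qq-pl}:

```python
import math
from scipy.integrate import quad

def p_ln(s, mu, sigma, s_min):
    """Log-normal density truncated from below at s_min, per eq. (p_LN)."""
    norm = math.erfc((math.log(s_min) - mu) / (math.sqrt(2) * sigma))
    return (math.sqrt(2) / (s * math.sqrt(math.pi) * sigma)
            * math.exp(-(math.log(s) - mu) ** 2 / (2 * sigma ** 2)) / norm)

mu, sigma, s_min = 2.34, 2.5, 1945.0   # the 1998 fit values quoted above

# The truncated density must integrate to one over [s_min, infinity).
total, _ = quad(p_ln, s_min, math.inf, args=(mu, sigma, s_min))
assert abs(total - 1.0) < 1e-4
```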
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{figures/logR_hist.eps}
\caption{\label{logR_hist}
A histogram of the base 10 log likelihood ratios ${\cal R}$ computed using (\ref{eq_LLR})
for each of the years 1991 to 2005. A negative log likelihood ratio implies that it is more likely that the empirical distribution is log-normal than a power law. The log likelihood ratio is negative in every year, in several cases strongly so.}
\end{center}
\end{figure}
The more probable it is that the empirical sample is drawn from a given distribution, the larger the likelihood of that set of observations. The ratio of the two likelihoods therefore indicates which distribution the data are more likely to be drawn from.
We define the log likelihood ratio as
\begin{equation}\label{eq_LLR}
{\cal R}=\log_{10}\left(\frac{L_{PL}}{L_{LN}}\right).
\end{equation}
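As an illustrative sketch (ours, and deliberately simplified: $s_{min}$ is taken at the sample minimum, and the log-normal is fitted without the truncation correction of (\ref{eq_pln_smin})), the sign of ${\cal R}$ cleanly separates synthetic power-law from synthetic log-normal samples:

```python
import numpy as np
from scipy import stats

def log10_ratio(s):
    """Base-10 log likelihood ratio R of a power-law vs a log-normal fit.

    Simplified sketch: s_min is the sample minimum and the log-normal
    is fitted without the lower-truncation correction.
    """
    s_min = s.min()
    zeta = len(s) / np.log(s / s_min).sum()           # power-law MLE
    ll_pl = stats.pareto.logpdf(s, zeta, scale=s_min).sum()
    shape, loc, scale = stats.lognorm.fit(s, floc=0)  # log-normal MLE
    ll_ln = stats.lognorm.logpdf(s, shape, loc, scale).sum()
    return (ll_pl - ll_ln) / np.log(10)

rng = np.random.default_rng(4)
r_pl = log10_ratio(rng.pareto(1.1, 5_000) + 1.0)     # power-law sample
r_ln = log10_ratio(rng.lognormal(3.0, 1.0, 5_000))   # log-normal sample

# The sign of R identifies the generating family.
assert r_pl > 0 and r_ln < 0
```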
For each of the years 1991 to 2005 we computed the maximum likelihood estimators for both the power law fit and the log-normal fit to the tail, as explained above and in Section~\ref{section_is_pl}. Using the fit parameters, the log likelihood ratio was computed and the results are summarized graphically in Figure~\ref{logR_hist} and in Table~\ref{table}. The ratio is always negative, indicating that the likelihood for the log-normal hypothesis is greater than that of the power law hypothesis in every year. It seems clear that the tails of the mutual fund data are much better described by a log-normal than by a power law.
\section{Implications of log-normality}
The log-normal nature of the size distribution has important implications for the role investor behavior plays in the mutual fund industry. Is the size distribution of mutual funds, i.e. the concentration of assets, determined through investor choice or is it just a consequence of the random nature of the market? In a companion paper \cite{Schwarzkopf10b} we propose that the size distribution can be explained by a simple random process model. This model, characterizing the entry, exit and growth of mutual funds as a random process, is based on market efficiency, which dictates that fund performance is size independent and fund growth is essentially random.
This model provides a good explanation of the concentration of assets, suggesting that other effects, such as transaction costs or the behavioral aspects of investor choice, play a smaller role.
The fact that the fund distribution is a log-normal is interesting because, as we argue in the companion paper, this indicates a very slow convergence toward equilibrium. There we find a time-dependent solution for the underlying random process of mutual fund entry, exit, and growth, and show that the size distribution evolves from a log-normal towards a Zipf power law distribution. However, the relaxation to the steady-state solution is extremely slow, with time scales on the order of a century or more. Given that the mutual fund industry is still young, the distribution remains in its non-equilibrium state as a log-normal. Furthermore, given that the properties of the entry and exit processes are not stable over long periods of time, the non-equilibrium log-normal state will very likely persist indefinitely.
\section{Conclusions}
We have shown in unequivocal terms that the mutual fund size distribution is much closer to a log-normal than to a power law. Thus, while the distribution is concentrated, it is not nearly as concentrated as it might be. Among other things this suggests that the power law distribution observed for trading volume by Gopikrishnan et al. \cite{Gopikrishnan00} cannot be explained based on a power law distribution for funds. The companion paper discussed in the previous section \cite{Schwarzkopf10b} constructs a theory that explains the log-normality based on the random nature of the mutual fund entry, exit and growth, and the very long time scales required for convergence to the steady-state power law solution.
\begin{acknowledgments}
We would like to thank A. Clauset and C. R. Shalizi for useful comments. YS would like to thank Mark B. Wise. We gratefully acknowledge financial support from NSF grant HSD-0624351. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
\end{acknowledgments}
\section{Introduction}
The role of social networks in shaping collective decisions is widely recognised in today's society. Recent political developments, from fake news to data leaks, have also brought to light their powerful role in potentially altering election results, and, therefore, their vulnerability to various forms of manipulation ~\cite{JWF15,GDGM15,SCVM18,BLSS18}.
As a consequence of the growing importance of social networks in our public discourse, the research on collective decision-making is now looking at the spread of information as a key factor in determining voting outcomes \cite{acemoglu-spread,Wilder-Vorobeychik2018,Federico-et-al-2019}, with important ramifications for network engineering \cite{Castiglioni_Ferraioli_Gatti_2020}. The multi-agent systems community, in particular, explored the algorithmic side of social dynamics in existing voting models (e.g., \citet{Tsang-Larson-2016}), studying novel possibilities to manipulate collective outcomes~\cite{Alon-Feldman-Omer-2015,baumeister2020manipulation, GrandiT16,BredereckE17} and making the study of social choice on social networks an active research enterprise~\cite{Grandi2017}.
When studying the effect of social influence on collective decisions, establishing predictive metrics is paramount.
A recent {\em Nature} paper by \citet{Stewart2019} has shown how a simple graph-theoretic metric, the {\em influence gap}, is highly predictive of how influence dynamics will impact the result of an election, turning minority views (with well-placed supporters) into strong majorities.
More specifically, through computational simulations of a voter model, backed by social network experiments with human subjects, \citet{Stewart2019} found strong correlations between the outcome of the voters' decisions and their proposed metric. These results suggest that an increased presence in a voters' social neighbourhood (what they call {\em influence assortment}) is a good predictor of a party's chances to win elections. In other words, when voters update their preferences looking at their connections, it is the strategic positioning of a party's electorate that matters, rather than the initial majority (what they call {\em information gerrymandering}). Undoubtedly, a metric that allows us to forego the equilibrium computation of a highly complex system is an important practical tool, significantly simplifying the analysis of the opinion diffusion dynamics, a notoriously complex problem \cite{ChristoffG17, AulettaFG20, ChistikovEtAl20}. Moreover, it allows for a further understanding of the effects of manipulation, for example through the strategical placements of bots or zealots to alter the network dynamics.
The results in \citet{Stewart2019} are, however, based on a number of limiting assumptions, notably the analysis is carried out on two-party elections on regular graphs of degree 3 and large scale-free graphs.
The main goal of this paper is to examine whether the metric's usage can be expanded beyond this fairly narrow family of graphs to more realistic-looking graphs and into the multi-party case. Our focus is on graphs which are characterised by the presence of a community structure~\cite{Girvan7821,Flake-Lawrence-2002}, allowing for phenomena such as echo chambers and {\em homophily}~\cite{Bakshy1130,Tsang-Larson-2016}. These are well-established patterns of real-world social networks, which can be observed, for example, in how Americans are sorting themselves into partisan communities~\cite{Bis09}, and should, in our view, be accounted for by any reasonable model of how social influence affects collective decisions.
\paragraph{Our Contribution} We analyse multi-party elections on graphs with community structure, exploring the predictive power of the influence gap and comparing it against various other metrics. We first model communities using what we call {\em homophilic relaxed-caveman} graphs, which build on the classical clique-like community model of caveman graphs~\cite{Watts_1999}. The homophilic relaxed-caveman graphs introduce variance and reality-resembling interactions by determining connections as a noisy function of the degree of homophily. The key difference from \citet{Stewart2019} is that we look at networks with community structure, while they only consider regular and scale-free graphs. We compute how homophily and rewiring interact in these graphs to change the influence gap (Figure \ref{fig:hrc-surf-precise}) and we then examine the influence gap as a predictor of the final voting outcome, using ~\citet{Stewart2019}'s empirically backed opinion diffusion model and parameters.
We see that with equal initial representation but varying influence gap, the latter no longer correlates with the final voting outcome as it did in~\citet{Stewart2019} (Figure \ref{fig:equal-representation}). Once we look at settings which include varying initial partisan majorities, the gap generally correlates well with the final voting outcome (Figure \ref{fig:ig+maj_vs}). However, simply counting the initial majorities is an even better predictor (Figure \ref{fig:ig+maj_vs}) and remains consistently so for different levels of homophily and rewiring (Figure \ref{fig:pcc_lines}). Using regression models, we then determine the interrelation between influence gap and initial majority to strengthen the predictions (Table \ref{table:lin_reg}).
Finally, we move to the study of the influence gap in elections with more than two parties. We look at multiple extensions of the originally proposed definitions, analysing their behaviour on simple graphs such as rings, cycles and stars. Using the extended setup we provide a theoretical analysis on clique-like communities and we observe empirically how the initial votes count has an even stronger predictive power when compared to influence gap than it did in the two-party case (Figure \ref{fig:multiparty_pcc_lines}).
\paragraph{Other Related Literature}
Discussion of how opinions and ideas spread in society flourished as a research field since Rogers's seminal work \cite{Rog62}, who introduced many of the concepts still underlying the field. Research then expanded to cases where agents have limited information~\cite{FS85,KS85}, including on graph structures~\cite{Blu93,LW08}. There was a particular focus on ``information cascades'' or ``herd mentality'', where choices are made sequentially, both when there is a ground truth~\cite{BHW92,MNT14,FILW14} and where there is none~\cite{ABKLT12,Wat02}. We use this basic assumption that people wish to conform to their surroundings in this paper, as well.
A closely related avenue of research concerns opinion diffusion models, where agents are recipients of social influence and opinions spread in a network. Research on this has been both empirical~\cite{BR87} and theoretical~\cite{Gra73,GLM01,Alon-Feldman-Omer-2015} (see overviews \cite{MMB90,You09}), including attempts to find influential nodes in the social graph~\cite{KKT03}. Computational models of opinion diffusion have looked at the fixed-point properties of the graph dynamics, in connection with consensus formation \cite{AulettaFG20} and its complexity \cite{ChistikovEtAl20}. An important stream of research has looked at how to control opinion diffusion by external intervention, for example through bribery \cite{BredereckE17}, false-name attacks \cite{BrillCFS16} or information control \cite{baumeister2020manipulation}, and we see our results as introducing effective heuristics for outcome prediction in those frameworks.
\paragraph{Paper Structure}
In Section \ref{sec:preliminaries} we introduce our setup and the basic graph-theoretic terminology, in particular the model of caveman-graphs and some basic observations on the influence gap. In Section \ref{sec:RC} we present our homophilic extension, together with the algorithms to control the homophily level and the rewiring probability. Section \ref{sec:dynamics} focuses on two-party elections, introducing the opinion diffusion dynamics and measuring the predictive power of the influence gap and other key metrics. These results are further discussed in Subsection \ref{sec:comb_met}, where we combine metrics and establish the effects of homophily and rewiring. In Section \ref{sec:multiparty} we delve into the multi-party case. We first consider potential extensions of the influence gap, studying them on simple graphs and then using computer-aided simulations and larger graphs in Section \ref{sec:3-party-large} to observe their predictive power in a setting with three parties. We conclude in Section \ref{sec:discussion} presenting various follow-up research directions.
\section{Influence Gap and Communities}\label{sec:preliminaries}
Consider an election with parties $\mathcal{P}$, where for each party $P\in\mathcal{P}$ there are $N_P>0$ supporters (voters). The $N$ total voters are placed on an undirected graph $\mathcal{G}=(V,E)$, where they are represented as nodes in the node set $V = \{1,2,\cdots, N\}$ while social connections are given by edges in the edge set $E \subseteq V\times V$.
Let $p:V\to \mathcal{P}$ be the party assignment, representing individuals' initial opinions, such that, for example, $p(n)$ is the party of voter $n$. We can also apply this to a subset of nodes $U\subseteq V$, and denote by $p(U)$ the set of parties that the nodes in $U$ vote for, i.e., $p(U) = \{p(n) \mid n\in U\}$. We denote the neighbourhood of $n\in V$, i.e., the set of social connections of voter $n$, by $\mathcal{N}_n = \{m \mid (n,m)\in E\}$. We also define the poll of a node $n$ as the node together with its neighbourhood, $\mathcal{N}'_n = \{n\}\cup\mathcal{N}_n$. For a node $n\in V$, let $\Delta^P_n$ denote the fraction of $n$'s poll that votes for party $P$ and let $\bm{\Delta}_n = (\Delta^P_n \mid P \in \mathcal{P})$ be the vector of such polls; for brevity we shall often write $\Delta_n$ for $\Delta_n^{p(n)}$ when no ambiguity arises.
\subsection{Influence Gap}
Like much of the research on opinion dynamics (including, in particular, \citet{Stewart2019}), we shall, at least at first, focus on the two-party (or two-opinion) setting and we shall henceforth refer to the parties using two colours, red and blue. In other words, we fix a partisan structure $\mathcal{P} = \{R,B\}$ where the initial number of voters for the red (blue) party is $N_R$ ($N_B$), noticing that with only two parties it suffices to consider only the fraction of a node's poll that votes for their own party, i.e., $\Delta_n$.
The multiparty generalisation ($|\mathcal{P}| \geq 2$) has non-trivial ramifications for a number of concepts, and will be investigated in detail from Section \ref{sec:multiparty} onwards.
In the two-party case, {\em influence assortment} \cite{Stewart2019} is, intuitively, the relative advantage of a party against their rival, and acts on two different levels: on the level of a single node $n$, denoted by $a_n$; and on the level of a party $P$, denoted by $A_{P}$. The {\em influence gap} (IG), which we denote by $G_P$, is the average advantage in influence assortment of party $P \in \{R,B\}$ (resp., $G_{P'}$ denotes its dual for party $P'\neq P$). Below are the formal definitions (as per \citet{Stewart2019}); note the use of the Kronecker delta $\delta(i,j)$, which is 1 if $i=j$ and 0 otherwise.
\begin{ceqn}
\begin{align}
a_n &= \begin{cases}
\Delta_n &\Delta_n \geq \frac{1}{2} \\%\text{ for } p(n) \text{ plurality in } \mathcal{N}(n) \cup n\\
-(1-\Delta_n) &\Delta_n < \frac{1}{2} \\%\text{ else }
\end{cases} \label{eq:node_inf}\\
A_P &= \frac{1}{N_P}\sum_{n \in V} a_{n}\delta(p(n),P) \label{eq:party_inf}\\
G_P &= A_P - A_{P'} \label{eq:IG}
\end{align}
\end{ceqn}
Influence assortment on the level of nodes, $a_n$, can be thought of as the extent to which an agent's party is present in their own poll, and thus how easily they can be influenced to vote for a different party. Its magnitude highlights how homogeneous a node's neighbourhood is, while its sign indicates whether the node belongs to the majority party in its local neighbourhood. The mean of node assortments over the nodes of a single party is then the party assortment, $A_P$. The influence gap, $G_P$, can therefore be understood as the difference in how ``strategically placed'' the two parties are -- how much their supporters interact with other parties (and are therefore open to being influenced by them).
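As a concrete illustration, the quantities in Equations \ref{eq:node_inf}--\ref{eq:IG} can be computed directly from a graph. The sketch below is our own (the helper names and the dict-of-sets graph representation are illustrative, not from \citet{Stewart2019}):

```python
# A minimal sketch of influence assortment and influence gap, assuming an
# undirected graph stored as a dict mapping nodes to neighbour sets and a
# party assignment dict; all helper names here are ours.

def assortment(graph, party, n):
    """Node-level influence assortment a_n (Eq. 1)."""
    poll = graph[n] | {n}                                     # the poll N'_n
    delta = sum(party[m] == party[n] for m in poll) / len(poll)
    return delta if delta >= 0.5 else -(1 - delta)

def party_assortment(graph, party, P):
    """Party-level influence assortment A_P (Eq. 2)."""
    members = [n for n in graph if party[n] == P]
    return sum(assortment(graph, party, n) for n in members) / len(members)

def influence_gap(graph, party, P, Q):
    """Influence gap G_P = A_P - A_Q (Eq. 3)."""
    return party_assortment(graph, party, P) - party_assortment(graph, party, Q)

def caveman(l, k):
    """l isolated cliques of size k, as a dict of neighbour sets."""
    return {i: {j for j in range(i // k * k, (i // k + 1) * k) if j != i}
            for i in range(l * k)}
```

For instance, on three isolated $4$-cliques where one clique is fully red and the other two contain a single red node each, these functions return $G_R = -\sfrac{1}{3}$.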
Throughout the paper we focus on the case of a strong party assignment (SPA), following \citet{Alon-Feldman-Omer-2015}, where each party is assigned a fixed fraction of nodes -- initially this will be a half but in later sections we consider non-equal representation. Weak party assignment (WPA), in contrast, assigns to every node a party $P$ with some probability, e.g., they are red with probability $\frac{3}{4}$. While we leave the treatment of WPA for future research, we note that our random graph generation models with non-equal representation are de facto working with a constrained form of WPA.
\subsection{Caveman Graphs}\label{prelimCaveman}
A {\em caveman graph} $\mathcal{G} = (V,E)$ is a set of $l$ isolated cliques each of size $k$~\cite{Watts_1999}. These graphs encode a very basic form of community-structure without showing interesting variety or empirical relevance \cite{emp-study-cliques}\footnote{A connected version is formed by rewiring a single edge per clique to a node in an adjacent clique along a central cycle~\cite{Watts_1999}; this construction is also sometimes called a caveman graph. For the purposes of this paper a caveman graph will be taken to mean the unconnected set of isolated cliques.}, but we shall use them for the most basic theoretical insights and will build on them to develop more realistic-looking structures.
\subsubsection{Influence gap in caveman graphs}
In caveman graphs the influence gap shows some interesting features in connection with the partisan majorities in each clique. To see why this is the case, consider a graph of $l$ cliques (labelled $1,\cdots,l$), each of size $k$. Let us take, without loss of generality, the perspective of the red party first. A red node $n$ in clique $c\in\{1,\cdots,l\}$, containing $x_c$ red nodes in total, has a fraction $\Delta_n = \frac{x_c}{k}$ of red nodes in their poll, as, by definition, they see all of the nodes inside their clique. This clearly holds for all other red nodes inside $c$, as well. Thus, the influence assortment of any red node in $c$ is equal, so that $a_n = a_m$ for all red nodes $n,m\in c$; the sum of influence assortments over all red nodes in $c$ is therefore $x_c a_n$.
Once all $x_c$ are known, we can group and order the cliques into three types: those in which red holds a strict majority, those in which there is an exact tie, and those in which blue holds a strict majority. To this end there exist $M, M' \in \{0,\cdots,l\}$ with $M \leq M'$ such that:
\begin{ceqn}
\begin{equation*}
x_c \begin{cases} > \sfrac{k}{2} &\text{ for } c \leq M\\
= \sfrac{k}{2} &\text{ for } M < c \leq M'\\
< \sfrac{k}{2} &\text{ for } M' < c \\
\end{cases}
\end{equation*}
\end{ceqn}
In other words, $M$ is the number of cliques in which the red party holds a strict majority, $M'$ is the number of cliques in which they hold at least a weak majority, and $\eta\equiv M'-M$ is the number of {\em marginal} cliques. Finally, denoting the sum of the $x_c$'s from $c=1$ up to $c=d$ (for $1\leq d\leq l$) by $X_d \equiv \sum_{c=1}^d x_c$, knowing the number $M'$ of cliques with a weak red majority allows us to calculate the red assortment on the level of the party, $A_R$.
\begin{ceqn}
\begin{align*}
A_R &= \frac{1}{N_R}\sum_{n \in V} a_n \delta(p(n),R) \\
&= \frac{1}{N_R} \Bigg( \sum_{c=1}^{M'} \big(x_c\frac{x_c}{k}\big) + \sum_{c=M'+1}^l x_c\big(\frac{x_c}{k}-1\big) \Bigg) \\
&= \frac{1}{N_R} \Bigg( \sum_{c=1}^l \frac{x_c^2}{k}-(N_R-X_{M'}) \Bigg) \\
A_R &= \frac{1}{N_R} \Bigg( \sum_{c=1}^l \frac{x_c^2}{k} + X_{M'} \Bigg)-1 \numberthis
\end{align*}
\end{ceqn}
We can find the equivalent for the blue party by making two observations. First, the influence assortment of any blue node $n\in c$ is $b_n = -a_m$ for a red node $m\in c$ (except in marginal cliques, where $b_n = a_m = \sfrac{1}{2}$). Second, the number of blue nodes in clique $c$ is $y_c = k-x_c$, hence the number of blue nodes up to clique $d$ is $Y_d \equiv \sum_{c=1}^d y_c = dk-X_d$. Note also that in the $M$ cliques where red holds a strict majority, blue is, equivalently, a strict minority. Proceeding as for the red party, the influence assortment of the blue party, $A_B$, in terms of the red counts $x_c$ is as follows.
\begin{align*}
&A_B= \frac{1}{N_B} \Bigg( \sum_{c=1}^l \frac{x_c^2}{k} + X_{M} +N-2N_R-Mk \Bigg) \numberthis
\end{align*}
Finally, this gives us an expression for the influence gap $G_R$ in favour of the red party for a general set of $l$ isolated cliques, under any party assignment, noting only that $N = N_B+N_R = lk$.
\begin{ceqn}
\begin{equation}
G_R = \frac{X_{M'}}{N_R}-\frac{X_{M}}{N_B} + \frac{N_R-N_B+Mk}{N_B} -1 + \frac{N_B-N_R}{N_RN_B} \sum_{c=1}^l\frac{x_c^2}{k} \label{eq:IG_caveman_all}
\end{equation}
\end{ceqn}
\paragraph{Influence gap with equal representation} We now show, using the expressions above, what happens in caveman graphs where parties have equal representation, i.e., $N_B = N_R = \sfrac{N}{2}$.
When this is the case a number of terms in Equation \ref{eq:IG_caveman_all} vanish, including the quadratic term, leaving only a linear equation in purely community-level quantities. In particular, note that the term $X_{M'}-X_{M} = (M'-M)\cdot\sfrac{k}{2}$ since all the cliques between $M$ and $M'$ are marginals. After some simplification this gives us an equation for $G_R$ in terms of the number of cliques that are (strictly) dominated by red -- as opposed to individual node quantities.
\begin{ceqn}
\begin{align*}
G_R &= \frac{2}{N}(X_{M'}-X_{M}) + \frac{2Mk}{lk}-1 \\
&= \frac{2}{lk}\, (M'-M)\, \frac{k}{2} + \frac{2M}{l}-1 \\
&= \frac{M + M'}{l}-1 \numberthis \label{eq:IG_caveman}
\end{align*}
\end{ceqn}
A corollary of this is that when $G_R=G_B=0$, the above equation gives $M + M' = l$: caveman graphs with equal representation and zero influence gap must have as many strict red majorities ($M$) as strict blue ones ($l-M'$).
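This closed form is easy to check numerically; the following sketch (our own code) compares Equation \ref{eq:IG_caveman} against a direct evaluation of the definitions on random equal-split assignments:

```python
# Numerical check of G_R = (M + M')/l - 1 on caveman graphs with equal
# representation; `reds` lists the number of red nodes per clique.
import random

def gap_from_definitions(k, reds):
    """Influence gap computed directly from node-level assortments."""
    def party_assortment(red_party):
        total, members = 0.0, 0
        for x in reds:
            mine = x if red_party else k - x   # own-party nodes in this clique
            if mine == 0:
                continue
            delta = mine / k
            a = delta if delta >= 0.5 else -(1 - delta)
            total += mine * a                  # all such nodes share assortment a
            members += mine
        return total / members
    return party_assortment(True) - party_assortment(False)

def gap_closed_form(k, reds):
    M = sum(2 * x > k for x in reds)           # strict red majorities
    M_prime = sum(2 * x >= k for x in reds)    # weak red majorities
    return (M + M_prime) / len(reds) - 1

random.seed(0)
l, k = 4, 4
for _ in range(200):
    nodes = [1] * (l * k // 2) + [0] * (l * k // 2)
    random.shuffle(nodes)                      # equal split N_R = N_B
    reds = [sum(nodes[c * k:(c + 1) * k]) for c in range(l)]
    assert abs(gap_from_definitions(k, reds) - gap_closed_form(k, reds)) < 1e-9
```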
\paragraph{Influence gap and initial majority}
We now turn our attention to how predictive the influence gap is in these structures. We start with the restricted class of caveman graphs with equal representation, which we analysed above.
To do so, we first consider a simple opinion diffusion model: at each time step,
voters stick to their own opinion unless most of their friends think differently, in which case they flip with probability $\Delta_n$. In Figure \ref{fig:ig0-clique}, for example, we expect such an opinion diffusion model to converge to an outcome where the red party conquers the clique on the left but loses the other two. We note that in this case IG seems to be a good predictor of the overall outcome.
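A toy implementation of this diffusion rule, under one literal reading of the description above (all names are ours), is:

```python
# A toy synchronous implementation of the diffusion rule sketched above:
# stick unless a strict majority of friends disagree, then flip w.p. Delta_n.
import random

def diffusion_step(graph, party, rng):
    new = {}
    for n, friends in graph.items():
        disagree = sum(party[m] != party[n] for m in friends)
        if 2 * disagree > len(friends):        # most friends think differently
            poll = friends | {n}
            delta = sum(party[m] == party[n] for m in poll) / len(poll)
            flip = rng.random() < delta
            new[n] = {'R': 'B', 'B': 'R'}[party[n]] if flip else party[n]
        else:
            new[n] = party[n]
    return new
```

On the three-clique example just described, a seeded run of repeated steps leaves the all-red clique red and eventually turns the two mixed cliques fully blue, matching the expectation above.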
\begin{figure}[ht!]
\begin{center}
\begin{tikzpicture}
[->,shorten >=1pt,auto,node distance=1.2cm,
semithick]
\node[shape=circle,draw=black, fill=red] (A) {};
\node[shape=circle,draw=black, fill=red, right of=A] (B) {};
\node[shape=circle,draw=black, fill=red, below of=A] (C) {};
\node[shape=circle,draw=black, fill=red, below of=B] (D) {};
\draw [thick,-] (B) to (A) ;
\draw [thick,-] (C) to (A) ;
\draw [thick,-] (D) to (A) ;
\draw [thick,-] (C) to (B) ;
\draw [thick,-] (C) to (D) ;
\draw [thick,-] (B) to (D) ;
\end{tikzpicture}\hspace{1cm}
\begin{tikzpicture}
[->,shorten >=1pt,auto,node distance=1.2cm,
semithick]
\node[shape=circle,draw=black, fill=red] (A) {};
\node[shape=circle,draw=black, fill=blue, right of=A] (B) {};
\node[shape=circle,draw=black, fill=blue, below of=A] (C) {};
\node[shape=circle,draw=black, fill=blue, below of=B] (D) {};
\draw [thick,-] (B) to (A) ;
\draw [thick,-] (C) to (A) ;
\draw [thick,-] (D) to (A) ;
\draw [thick,-] (C) to (B) ;
\draw [thick,-] (C) to (D) ;
\draw [thick,-] (B) to (D) ;
\end{tikzpicture}\hspace{1cm}
\begin{tikzpicture}
[->,shorten >=1pt,auto,node distance=1.2cm,
semithick]
\node[shape=circle,draw=black, fill=red] (A) {};
\node[shape=circle,draw=black, fill=blue, right of=A] (B) {};
\node[shape=circle,draw=black, fill=blue, below of=A] (C) {};
\node[shape=circle,draw=black, fill=blue, below of=B] (D) {};
\draw [thick,-] (B) to (A) ;
\draw [thick,-] (C) to (A) ;
\draw [thick,-] (D) to (A) ;
\draw [thick,-] (C) to (B) ;
\draw [thick,-] (C) to (D) ;
\draw [thick,-] (B) to (D) ;
\end{tikzpicture}
\caption{A caveman graph with equal representation and IG of $G_R=-1/3$. Notice that red has one strict majority while blue has two, i.e., $M' = M = 1$.}
\label{fig:ig0-clique}
\end{center}
\end{figure}
However, when equal representation is relaxed, IG can fail to predict robust configurations with clear winners that majority does capture, as illustrated in Figure \ref{fig:ig1-clique}.
\begin{figure}[ht!]
\begin{center}
\begin{tikzpicture}
[->,shorten >=1pt,auto,node distance=1.2cm,
semithick]
\node[shape=circle,draw=black, fill=red] (A) {};
\node[shape=circle,draw=black, fill=red, right of=A] (B) {};
\node[shape=circle,draw=black, fill=red, below of=A] (C) {};
\node[shape=circle,draw=black, fill=red, below of=B] (D) {};
\draw [thick,-] (B) to (A) ;
\draw [thick,-] (C) to (A) ;
\draw [thick,-] (D) to (A) ;
\draw [thick,-] (C) to (B) ;
\draw [thick,-] (C) to (D) ;
\draw [thick,-] (B) to (D) ;
\end{tikzpicture}\hspace{1cm}
\begin{tikzpicture}
[->,shorten >=1pt,auto,node distance=1.2cm,
semithick]
\node[shape=circle,draw=black, fill=blue] (A) {};
\node[shape=circle,draw=black, fill=blue, right of=A] (B) {};
\node[shape=circle,draw=black, fill=blue, below of=A] (C) {};
\node[shape=circle,draw=black, fill=blue, below of=B] (D) {};
\draw [thick,-] (B) to (A) ;
\draw [thick,-] (C) to (A) ;
\draw [thick,-] (D) to (A) ;
\draw [thick,-] (C) to (B) ;
\draw [thick,-] (C) to (D) ;
\draw [thick,-] (B) to (D) ;
\end{tikzpicture}\hspace{1cm}
\begin{tikzpicture}
[->,shorten >=1pt,auto,node distance=1.2cm,
semithick]
\node[shape=circle,draw=black, fill=blue] (A) {};
\node[shape=circle,draw=black, fill=blue, right of=A] (B) {};
\node[shape=circle,draw=black, fill=blue, below of=A] (C) {};
\node[shape=circle,draw=black, fill=blue, below of=B] (D) {};
\draw [thick,-] (B) to (A) ;
\draw [thick,-] (C) to (A) ;
\draw [thick,-] (D) to (A) ;
\draw [thick,-] (C) to (B) ;
\draw [thick,-] (C) to (D) ;
\draw [thick,-] (B) to (D) ;
\end{tikzpicture}
\caption{A caveman graph with $G_R=G_B=0$ and different majorities. Notice how the IG does not ``see'' a win for the blue party.}
\label{fig:ig1-clique}
\end{center}
\end{figure}
\subsubsection{Towards more realistic communities}
We want to establish how good a predictor the IG is in graphs with community structure. While we observed interesting edge cases in the simple caveman graphs, it is now important to extend these to capture more realistic structures.
We do so by generalising them in two ways:
\begin{itemize}
\item Allowing some edges to be rewired with a set probability, i.e., relaxed-caveman graphs~\cite{Fortunato_2010}.
\item Allowing such probability to depend on homophily, i.e., how likely like-minded voters are to be connected to one another.
\end{itemize}
A {\em relaxed-caveman} graph is a modified version of the basic caveman graph, whereby edges are rewired with some given probability~\cite{Fortunato_2010}. Concretely, given a probability $p_0$ and iterating over all edges $E$ of a set of isolated cliques, an edge $(u,v) \in E$ is rewired as $(u,n)$, for some randomly selected $n \in V$; if $(u,n)$ already exists, nothing happens. Since the cliques are initially complete, this ensures that all new edges run between nodes of different cliques. This extension provides a fairly diverse and intuitively clear set of communities, without the need to rigorously define the concept of community itself or to delve into the plethora of community detection \cite{Fortunato_2010,Schaub2017,Aldecoa2013} and generation \cite{HOLLAND1983109,Lee_2019} methods.
It is important to note that, at low rewire probabilities, the results above on caveman graphs can be extended to relaxed-caveman graphs (and their homophilic variant, see below) by considering the effects of a small number of rewired edges as perturbations $\mathcal{O}(\min(N_R,N_B)^{-1})$, since at worst for a single rewire the assortment of a node gets changed by $\pm 1$ and thus its contribution to the influence gap changes by $\mathcal{O}(N_R^{-1})$ or $\mathcal{O}(N_B^{-1})$.
The next section will present the model of homophilic relaxed-caveman graphs, where rewiring is linked with homophily, and use it to compare the predictive power of a number of metrics, including the influence gap.
\section{Homophilic Relaxed-Caveman Graphs} \label{sec:RC}
Relaxed-caveman graphs rewire the edges of the original caveman graph {\em without looking at party assignment}. This means that, effectively, the resulting graph, though exhibiting a rich community structure, abstracts away from the relation between connections and opinions, unlike real-world networks, where the two are highly intertwined \cite{Bis09,Bakshy1130}. To address this issue, we propose a modification of the relaxed-caveman graph, the {\em homophilic relaxed-caveman} (hRC) graph model, in a similar fashion to the homophilic Erd\H{o}s-R\'{e}nyi and Barab\'{a}si-Albert graphs used in \citet{Tsang-Larson-2016}. This allows us to generate synthetic graphs with communities where the graph structure is dependent on the party assignment, following the observed behavior~\cite{Bis09} that people tend to cluster with people who share their views.
\begin{center}
\fbox{%
\parbox{0.7\linewidth}{%
\textbf{Algorithm: } Homophilic Relaxed-Caveman Graph, $\mathcal{G}$
\begin{enumerate}
\item \textbf{Initialise} $G$ as a set of $l$ cliques each of size $k$
\item \textbf{for} $(u,v) \in E$:
\begin{itemize}
\item Choose at random an $n \in V, n \neq v$
\item \textbf{if} $p(u) = p(n)$ \textbf{then} $\tilde{p} = p_0 h$
\item \textbf{else} $\tilde{p} = p_0 (1-h)$
\item Rewire $(u,v)$ as $(u,n)$ with probability $\tilde{p}$
\end{itemize}
\item \textbf{return} $\mathcal{G}$
\end{enumerate}
}%
}
\end{center}
The algorithm above describes how to generate the hRC graph, starting from a set of disconnected cliques $G=(V,E)$. The result is the hRC graph $\mathcal{G}_p(l,k,p_0,h)$, with $l$ communities, each of size $k$, with {\em rewiring probability} $p_0$ and {\em homophily factor} $h$, given a party assignment $p:V\rightarrow \mathcal{P}$. The rewiring probability $p_0$ can be thought of as the likelihood of changing a pre-existing friendship to a new one, while the homophily factor $h$ controls how likely agent $u$'s new friend $n$ is to vote for the same party.
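The boxed algorithm can be transcribed directly; the sketch below is our own (dict-of-sets graphs, arbitrary party labels) and follows the convention that a rewire targeting an already-existing edge does nothing:

```python
# Sketch of the homophilic relaxed-caveman generator described above.
import random

def hrc_graph(l, k, p0, h, party, rng):
    """hRC graph with l cliques of size k, rewire probability p0, homophily h."""
    # 1. initialise as l disjoint k-cliques
    G = {i: {j for j in range(i // k * k, (i // k + 1) * k) if j != i}
         for i in range(l * k)}
    # 2. visit each (undirected) edge of the initial graph once
    for u, v in [(u, v) for u in G for v in G[u] if u < v]:
        n = rng.choice([m for m in G if m != v])   # random candidate endpoint
        p_tilde = p0 * h if party[u] == party[n] else p0 * (1 - h)
        # rewire (u,v) as (u,n); if (u,n) already exists, nothing happens
        if rng.random() < p_tilde and n != u and n not in G[u]:
            G[u].discard(v); G[v].discard(u)
            G[u].add(n); G[n].add(u)
    return G
```

With $p_0=0$ the initial cliques are returned untouched; for any parameters the number of edges is conserved, since every edge is either kept or replaced by exactly one new edge.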
We highlight two important subclasses of the hRC model:
\begin{itemize}
\item For $h = 0.5$ we recover the relaxed-caveman graph with rewire probability $p = \sfrac{p_0}{2}$, since an edge is rewired with probability $p_0 \cdot 0.5$ regardless of whether the candidate endpoint supports the same or the opposing party.
\item For high values of $p_0$, nodes from different cliques intermingle enough that the community structure begins to fade.
Community detection methods \cite{Fortunato_2010,Schaub2017}, such as modularity-based methods \cite{PhysRevE.69.066133}, may be used to reestablish communities in this case.
\end{itemize}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.9\linewidth]{Pictures/Static/hRC-surface2-mean-precise.png}
\caption{A surface plot of the mean absolute value of the influence gap, across different homophily factors $h$ and a range of rewire probabilities $p_0$. Each point is measured from $10^4$ party assignments each generating a single graph.}
\label{fig:hrc-surf-precise}
\end{center}
\vspace{-3mm}
\end{figure}
We now show how the influence gap is distributed in homophilic relaxed-caveman graphs with equal partisan split ($N_R=N_B$). To do so we generated hRC graphs across the entire range of homophily and rewire probability. Due to the randomized nature of the model, for any given pair of parameters $h$ and $p_0$, we produced 10,000 graphs and took the mean of their influence gaps, measured towards whichever party had the higher influence assortment. Note that this therefore represents the ability of either party to open an advantage over its opponent, not of a specific one. If, instead, we measured the gap towards a specific party, say red, then we would expect the mean of the gap to be 0 by symmetry.
We plotted the results in Figure~\ref{fig:hrc-surf-precise}.
We find that for values of $h<0.85$, the absolute value of the influence gap increases monotonically with the rewire probability $p_0$. That is, taking a cross-section for some $h<0.85$ of Figure~\ref{fig:hrc-surf-precise} is monotonic in $p_0$. At higher homophily we find the curves are peaked at around $p_0 \approx 0.7$.
We see a more complex phenomenon when taking cross-sections in $p_0$: curves change from nearly flat at low $p_0$ to peaked at higher $p_0$. Specifically, the influence gap is asymmetrically unimodal, with a peak at around $p_0 = 1$ and $h = 0.3$. Notice that this means parties are marginally better off when the likelihood of forming a community between the party members is lower than with the opposing party. This asymmetry, we believe, is in large part due to the definition of the influence assortment of a node: when the neighbourhood $\mathcal{N}_n$ of an agent $n$ leans slightly towards the opposite party but the agent's own preference is enough to push the poll into a (weak) majority, $\Delta_n \geq 0.5$, their assortment flips from roughly $-\sfrac{1}{2}$ to roughly $+\sfrac{1}{2}$; the influence gap is largest when most or all nodes of one party face this situation.
\section{Predicting Two-Party Elections: The Model}\label{sec:dynamics}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.9\linewidth]{Pictures/Dynamics/Dynamics_example.png}
\caption{Example time series of a voting game, using \citet{Stewart2019} behavioural model on a homophilic relaxed-caveman graph with rewire parameter $p_0 = 0.3$ and homophily $h = 0.3$. The two parties initially have equal vote share but eventually the red party reaches a super-majority of $V = 0.6$ (the dashed lines are $V$ and $1-V$). The vertical dotted line at $t_*$ represents the transition between early and late phases of the game.}
\label{fig:dynamics_example}
\end{center}
\vspace{-3mm}
\end{figure}
In this section we examine whether the findings of \citet{Stewart2019} still hold for two-party elections when communities are present. Our conclusion, in summary, is that the influence gap does not predict the outcome of the opinion dynamics when initial votes are equally split, which is a central assumption in their analysis. When this assumption is relaxed, the correlation is restored, but a more predictive and computationally simpler metric exists, namely the initial majority of either party. This holds across all levels of homophily and rewiring, with some interesting specificities.
Next, we provide the details of our experiments and results, which use the same behavioural parameters as \citet{Stewart2019}, starting from the voter model.
\subsection{Voter Model}
For $N$ voters, at least half are assigned to the red party and the remainder to the blue, and all are placed in an influence network, in line with~\citet{Stewart2019}. A voter's knowledge is restricted by their social network, only knowing the voting intentions of their neighbours as well as their own, serving as a form of poll, to which they wish to conform. The game lasts for a fixed amount of time, during which players can change their voting intentions synchronously\footnote{Unlike the iterative voting model of \citet{Tsang-Larson-2016}, in which every voter updates separately to the others, this is more akin to the synchronous updates of \citet{Alon-Feldman-Omer-2015}.}. The winning party is the one to hold a majority above a threshold $V > 0.5$ when the updating process is done, otherwise we consider it a deadlock.
The agents follow the stochastic behavioural model developed by~\citet{Stewart2019}, informed by a social experiment with human subjects who were given pay-offs depending on the success of their assigned party. At any given time, a voter, according to the behavioural model, would vote for their assigned party with a probability that depends on: a) what their surroundings predict will happen (win, lose or deadlock) and b) the stage of the game (early or late). In other words, for each individual there exists a family of six parameters $p_{ij}$, where $i \in \{$win, lose, deadlock$\}$ is the poll's prediction and $j \in \{$early, late$\}$ is the stage of the game, that are precisely these probabilities, henceforth \textit{strategies}.
\begin{table}[htb]
\centering
\begin{tabular}{ c|c c}
\hline
$\bar{p}_{ij}$ & \textit{Early} & \textit{Late} \\
\hline
\textit{Win} & 0.975 & 0.979\\
\textit{Deadlock} & 0.964 & 0.911\\
\textit{Lose} & 0.598 & 0.574\\
\hline
\end{tabular}
\caption{Mean agent strategies $\bar{p}_{ij}$, such that an agent with a neighbourhood poll predicting state $i$ during phase $j$ will stick to their party, on average, with probability $\bar{p}_{ij}$. These values were inferred from social experiments (see the Supplementary Material of \citet{Stewart2019} for an extensive discussion of how these numbers are obtained and why these are used independently of the graph structure).}
\label{table:pij_means}
\end{table}
Since voters are not homogeneous and will have different strategies, each parameter is sampled from the empirical distribution of the social experiment. Thus, while each $p_{ij}$ is a random variable with an empirical distribution whose mean is given by Table \ref{table:pij_means}, each voter $v$ receives a set of six parameters $p_{ij}^v$ that are realisations of these random variables.
We point out that the behavioural parameters can be used independently of the initial structure, as voters are unaware of the graph they sit in, an assumption also made by \citet{Stewart2019} when replicating their findings with simulations.
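To illustrate, a single synchronous update of a simplified version of this behavioural model can be sketched as follows. Note the simplifications, which are ours: every voter is given the mean strategies of Table \ref{table:pij_means} instead of per-voter realisations $p_{ij}^v$ sampled from the empirical distributions (not reproduced here), and the super-majority threshold $V$ is left as a parameter.

```python
# One synchronous step of a simplified behavioural model: each voter reads the
# prediction of their poll (win/lose/deadlock for their assigned party) and
# votes for that party with the corresponding mean probability from Table 1.
import random

MEAN_STRATEGY = {('win', 'early'): 0.975, ('win', 'late'): 0.979,
                 ('deadlock', 'early'): 0.964, ('deadlock', 'late'): 0.911,
                 ('lose', 'early'): 0.598, ('lose', 'late'): 0.574}

def poll_prediction(graph, vote, assigned, n, V):
    """Predicted state for voter n's assigned party, from their poll."""
    poll = graph[n] | {n}
    frac = sum(vote[m] == assigned[n] for m in poll) / len(poll)
    if frac > V:
        return 'win'
    if frac < 1 - V:
        return 'lose'
    return 'deadlock'

def behavioural_step(graph, vote, assigned, phase, V, rng):
    other = {'R': 'B', 'B': 'R'}
    new = {}
    for n in graph:
        i = poll_prediction(graph, vote, assigned, n, V)
        keep = rng.random() < MEAN_STRATEGY[(i, phase)]
        new[n] = assigned[n] if keep else other[assigned[n]]
    return new
```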
\subsection{Benchmark Metrics} \label{sub:benchmark}
Since an exact analysis of the opinion dynamics faces significant complexity barriers, \citet{Stewart2019} proposed using the Influence Gap as a prediction tool of the dynamic's outcome. To the IG proposal we add three more metrics, which we will compare against:
\begin{description}
\item [Majority] The initial majority of the red party.
\item [Deterministic voter skew (dVS)] A deterministic simplification of the update dynamics; at each time step every agent synchronously conforms to the strict majority party in their poll, keeping the current choice in case of a tie, and after $\sigma$ steps the voter skew is measured. In principle, one could evolve the system for as many steps $\sigma$ as in the stochastic process, but we use $\sigma=1$, as errors due to the simplification may be propagated and worsened with more steps.
\item [Efficiency gap (EG)] A classical political science metric~\cite{efficiencygap}, developed to measure gerrymandering in two-party elections, in which we examine how many votes were ``wasted'', i.e., could have been eliminated without changing the outcome.
\end{description}
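The three added metrics can be sketched as follows (our own code; in particular, the efficiency-gap variant treats each community as an electoral ``district'', which is our adaptation -- the text does not prescribe one):

```python
# Benchmark metrics: initial red majority, deterministic voter skew (dVS, with
# sigma = 1 as in the text), and a per-community efficiency gap (our adaptation).

def red_majority(vote):
    """Initial majority of the red party (red votes minus blue votes)."""
    reds = sum(v == 'R' for v in vote.values())
    return 2 * reds - len(vote)

def deterministic_voter_skew(graph, vote, sigma=1):
    """Each agent conforms to the strict poll majority (ties keep the current
    choice); after sigma synchronous steps, return the red voter skew."""
    vote = dict(vote)
    for _ in range(sigma):
        new = {}
        for n, friends in graph.items():
            poll = friends | {n}
            reds = sum(vote[m] == 'R' for m in poll)
            if 2 * reds > len(poll):
                new[n] = 'R'
            elif 2 * reds < len(poll):
                new[n] = 'B'
            else:
                new[n] = vote[n]
        vote = new
    return sum(v == 'R' for v in vote.values()) / len(vote) - 0.5

def efficiency_gap(communities, vote):
    """Two-party efficiency gap, treating each community as a district."""
    wasted = {'R': 0, 'B': 0}
    total = 0
    for comm in communities:
        r = sum(vote[n] == 'R' for n in comm)
        b = len(comm) - r
        total += len(comm)
        if r == b:                         # tied district: all votes wasted
            wasted['R'] += r; wasted['B'] += b
            continue
        win, lose = ('R', 'B') if r > b else ('B', 'R')
        wasted[win] += max(r, b) - (len(comm) // 2 + 1)   # surplus votes
        wasted[lose] += min(r, b)                         # all losing votes
    return (wasted['B'] - wasted['R']) / total
```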
\subsection{Experimental Setup}
We have $N = 20$ voters, each assigned a party and placed in a graph $G$, generated under a number of different assignments of $h$ and $p_0$. Following the setting of \citet{Stewart2019} (which was also based on experiments with people), each simulation game runs for $240$ seconds, starting with an early phase of $83$ seconds before transitioning to the late phase. During the game, every $3.3$ seconds each voter $v$ revisits their voting intention, voting for their assigned party with probability $p_{ij}^v$, sampled from the empirical parameter distributions found in the human social experiment~\cite{Stewart2019}. In total, for a single simulation, $N$ samples are taken from each of $6$ distributions (one per stage -- early/late -- and per poll prediction -- win/lose/deadlock). After $240$ seconds have elapsed, the vote share across the entire graph is measured.
An example of a time series produced by a simulation is shown in Figure~\ref{fig:dynamics_example}. The different sections of the plot, partitioned by dashed and dotted lines, represent different strategies. For example, in the early phase ($t<t_*$) both parties are deadlocked and thus agents vote for their assigned party with probability $p_{\text{deadlock},\text{early}}^{\text{agent}}$. The convergence of a time series is not guaranteed because, as in the original social experiment, the game finishes after $240$ seconds.
We simulate over $10^4$ elections for each set of hRC parameter values $(p_0,h)$ while varying the initial number of red nodes, $N_R\geq 10$. Parameters were chosen to cover a reasonable range, $p_0 \in \{0,0.4,1\}$ and $h\in \{0,0.1,\cdots,1\}$, so that in total there were over $33,000$ simulated elections. At the start of each election we measure the influence gap, the deterministic voter skew and the efficiency gap (see Section \ref{sub:benchmark}); the final outcome of the election is also recorded as a vote skew towards the red party $\sfrac{N_R}{N}-\sfrac{1}{2}$. For example, Figure~\ref{fig:ig+maj_vs} shows the elections occurring on hRCs with $(p_0,h)=(0.4,0.6)$ and illustrates how two metrics, the IG and majority, both correlate strongly with the final outcome (note that the figure shows a case where parties did not have equal power initially).
\begin{figure}[t!]
\centering
\begin{subfigure}{0.49\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Pictures/Dynamics/Dynamics_RC_scatter_p03.png}
\label{fig:dynamics-RC}
\end{subfigure}%
\begin{subfigure}{0.49\linewidth}
\centering
\includegraphics[width=1\linewidth]{Pictures/Dynamics/Dynamics_hRC_scatter_p1h03.png}
\label{fig:dynamics-hRC}
\end{subfigure}
\caption{Simulations of the behavioural model of \citet{Stewart2019} acting on relaxed- and homophilic relaxed-caveman graphs, starting with equal representation. In both cases the influence gap correlates very weakly with voter skew, with Pearson's $\rho < 0.3$, passing a significance test with $p < 10^{-7}$. \textbf{Left: } relaxed-caveman graphs with rewire parameter $p=0.3$. \textbf{Right: } homophilic relaxed-caveman with rewire probability $p_0 = 1$ and homophily factor $h = 0.3$.} \label{fig:single_maj_corr}
\vspace{-3mm}
\label{fig:equal-representation}
\end{figure}
\section{Predicting Two-Party Elections: Results} \label{sub:res}
Where \citet{Stewart2019} found strong correlation between IG and election outcome in scale-free (Barab\'{a}si-Albert) graphs for a given initial voter skew, we find the contrary in RC and hRC graphs. Starting with equal representation -- same number of red nodes as blue -- we find that the presence of communities suppresses the correlation noted there. In both the relaxed-caveman and the homophilic relaxed-caveman with given parameter sets, the Pearson correlation coefficient is small -- $\rho < 0.3$ -- as seen in Figure \ref{fig:single_maj_corr}.
When we allow for an unequal initial setup, the results show more complexity.
In Figure \ref{fig:ig+maj_vs} we plot as an example how the initial influence gap and majority compare to the final election outcome, for a hRC graph with intermediate rewire $p_0 = 0.4$ and intermediate homophily $h = 0.6$. We see in both cases very strong correlations, with a high Pearson correlation coefficient (PCC) of $\rho > 0.9$. Note that, by symmetry, near-identical distributions would be found if we had simulated $N_R<10$.
\begin{figure}[!h]
\centering
\includegraphics[width=0.9 \linewidth]{Pictures/Dynamics/hRC_p04_h06_samples10000a.png}
\caption{For a hRC with $p_0 = 0.4$ and $h=0.6$, we plot the final vote skew against the IG and the majority of the red party. }
\label{fig:ig+maj_vs}
\end{figure}
In other words, for this set of parameter values, both IG and majority are strong predictors of the final outcome -- in so far as their initial values correlate strongly with the result. However, we can already begin to see that the influence gap does not predict the outcome as accurately as the initial majority does.
{\subsection{The Effects of Rewiring and Homophily}}
We now present our findings taking rewiring and homophily into account, a snapshot of which is shown in Figure \ref{fig:pcc_lines}.
Before delving into discussing them, it is important to note that neither homophily nor rewiring correlates, on its own, with the final outcome of the election, with Pearson correlation coefficients of $\rho = -0.0256$ and $\rho=0.0106$ respectively. This does not imply that they have no effect at all -- indeed, Figure \ref{fig:pcc_lines} shows otherwise -- simply that they do not impact the election single-handedly.
\paragraph{High Homophily} When $h>0.5$, hRC graphs are rife with echo chambers in which an agent has friends mostly of the same party and opinion as themselves. Their polls are therefore fairly homogeneous and they see no compelling reason to change their vote, meaning the graph structure plays little to no role. Extending this intuition to all agents in the graph, very little diffusion of opinions occurs and thus the final outcome of the election will not be too dissimilar to the starting one. As such, counting the number of votes at the start will closely resemble -- and thus predict -- the number of votes at the end, and majority here can better predict the outcome than IG. Moreover, this logic helps explain the general trend that the correlations of most metrics increase drastically at high homophily.
More formally, at high $h$ the average poll will show a majority towards their agent's party $\Delta_v > 0.5$ or even a super-majority $\Delta_v > V$. During phase $j$ of the voting game, most voters $v$ see a prediction of $i = \text{win}$, so that their strategy is $p_{\text{win},j}^v$. Since the empirical distribution for $p_{\text{win},j}$ is heavily biased towards $p_{\text{win},j} \geq 0.9$ most voters will stick to their initial opinion.
\begin{figure*}[!tb]
\centering
\includegraphics[width=0.9\linewidth]{Pictures/Dynamics/hRC_correlations_p1_samples10000a_3in1.png}
\caption{As model parameters of the homophilic relaxed-caveman graph are varied independently, the Pearson correlation coefficient (PCC) between the final voter skew and the majority (purple), the influence gap (IG, green), the deterministic voter skew (dVS, orange) and the efficiency gap (EG, red) is plotted. Quadratic lines of best fit are shown for three rewiring probabilities: $p_0 = 0$ (left), $p_0 = 0.4$ (middle) and $p_0 = 1$ (right). The influence gap only outperforms the majority once, at $(p_0,h)=(1,0)$.}
\label{fig:pcc_lines}
\vspace {-1mm}
\end{figure*}
\paragraph{Low Homophily} Conversely, for $h < 0.5$, an agent's poll, at least initially, likely shows a super-minority $\Delta_v < 1-V$, or equivalently $i = \text{lose}$, such that over 40\% of agents will change their votes. In other words, most polls are quite diverse: an agent sees mostly nodes of the opposite party and ``doubts'' their opinion more often.
In this scenario the dynamics become less stable, and more complex diffusion occurs. Therefore, most metrics perform either as well as or worse at $p_0=1$ than at $p_0=0$. Only the deterministic voter skew sees an increase in predictive power as $p_0$ increases, at all values of homophily. We suspect this is because a single deterministic step causes the network to evolve towards a more stable configuration and, as such, closer to the final outcome.
Another consequence is that the predictive powers of both majority and influence gap decrease monotonically as homophily decreases; in fact, for $h < 0.5$ the difference in PCC between IG and majority shrinks considerably.
As the rewiring level increases, the majority PCC stays linear in homophily while the IG PCC is a flat quadratic. Both the efficiency gap and the deterministic voter skew, however, move from linear to quadratic as communities become more diluted. Moreover, both metrics are consistently outperformed by the influence gap, though they still show reasonably high correlations; once again majority bests all others.
Overall, only once does the influence gap predict outcomes better than the initial majority, at $p_0=1$ and $h=0$ -- that is, when nodes are extremely diverse and seek out opinions alternative to their own. The difference, however, is tiny, $|\Delta\rho| = 0.00144$, and likely due to the stochastic nature of the elections. We conclude that simply counting the number of party votes is a better predictor than the metric of \citet{Stewart2019}.
\subsection{Regression Models} \label{sub:reg_models}\label{sec:comb_met}
In order to further explore the role of majority and influence gap ($x_M$ and $x_G$ respectively, following typical regression notation) as predictive or explanatory variables we use linear regression (Equation~\ref{eq:lin_reg}) to build several models of the vote skew, $y$. Two models are single-featured using only the initial majority or the influence gap and the third is a joint model built using a multiple regression of both features. All three are trained on the same $70\%$ of the data and tested on the remaining $30\%$.
\begin{equation}
y = \beta_M x_M + \beta_G x_G + \beta_0 \label{eq:lin_reg}
\end{equation}
\begin{table}[htb]
\centering
\caption{Coefficients of regression, $\beta$, and of determination, $R^2$, for regression models of the dynamic voting game outcome on homophilic relaxed-caveman graphs for two-party elections.} \label{table:lin_reg}
\begin{tabular}{ |c|c c c|c| }
\hline
\textit{Metric} & \textit{$\beta_M$} & \textit{$\beta_G$} & \textit{$\beta_0$} & \textit{$R^2$}\\ \hline
Majority & 0.0507 & & 0.0363 & 0.881\\
IG & & 0.226 & 0.00474 & 0.837\\
Majority, IG & 0.0333 & 0.0869 & 0.0144 & \textbf{0.902}\\
\hline
\end{tabular}
\end{table}
The regression confirms our observation that majority is a better predictive tool than influence gap. This does not, however, render IG useless. In particular, the joint model outperforms both single-feature models, as shown in Table \ref{table:lin_reg}, despite some collinearity between the features. In other words, IG is an informative metric that can build upon the simple predictions of majority, but alone it is not as effective.
\section{Extending the Influence Gap to Multiple Parties} \label{sec:multiparty}
In many political systems more than two parties are competitive and relevant. While the US has two main parties, Canada has 5 parties with more than 5\% in parliament; the UK has 3 such parties, and the situation is similar in most Continental European democracies and around the world. In this section we will discuss the challenges in expanding the definitions for IG to the multi-party setting, by showing a few possible definitions for the influence assortment and gap. We then apply these definitions to small graphs to highlight the subtle differences and to motivate the choice of one definition over the other.
\subsection{Influence Assortment}
We examine two possible extensions to multi-party settings. Both assume, naturally, that plurality is the appropriate generalisation of majority (which always exists in two-party settings, but is not guaranteed to exist with more parties). The first assumes the ``force'' of the plurality winner is all there is: even when an agent does not support it, the negative weight it carries is only as much as the plurality winner could muster.
\begin{equation}
a_n = \begin{cases}
\Delta^{p(n)}_n & \text{if } \Delta^{p(n)}_n = \max(\bm{\Delta}_n), \\
-\max(\bm{\Delta}_n) & \text{otherwise.}
\end{cases} \label{eq:a_n1}
\end{equation}
An alternative definition takes an all-against-one stance: rather than considering only the most dominant party, we sum over all other parties. That is, when an agent does not support the plurality winner, their negative weight is the combined weight of all the parties they did not support.
\begin{equation}
a_n = \begin{cases}
\displaystyle \Delta^{p(n)}_n & \text{if } \Delta^{p(n)}_n = \max(\bm{\Delta}_n), \\
-(1-\Delta^{p(n)}_n) & \text{otherwise.}
\end{cases} \label{eq:a_n2}
\end{equation}
Both definitions reduce to the original two-party definition of influence assortment and have reasonable intuition behind them.
\subsection{Influence Gap}
There are two main ways to define the influence gap. First, to define it as the difference between influence assortments of party $P$ and the (next-) most \textit{influential} party. Second, as the difference between party $P$ and the (next-) plurality party (i.e., plurality runner-up).
\begin{align}
G_P &= A_P - \max_{Q\neq P}A_Q \label{eq:G_P1}\\
G_P &= A_P - A_Q \quad \text{where } Q = \argmax_{P'\neq P} N_{P'} \label{eq:G_P2}
\end{align}
These two gaps largely correlate with one another, since influence and partisan split are well correlated -- though not without exceptions. Specifically, the latter definition can produce non-unique values for the same network.

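For concreteness, the two assortment definitions and the gap of Equation (\ref{eq:G_P1}) can be computed by the following sketch (our own illustration; we assume that a node's poll includes itself and that the party-level assortment $A_P$ averages $a_n$ over $P$'s voters -- assumptions that reproduce the $N=3$ line and the $N=4$ clique entries in the tables below):

```python
from collections import Counter, defaultdict

def influence_gaps(edges, party, defn=1):
    """Influence gap G_P of Equation (G_P1) for every party, with the
    node-level assortment a_n from Equation (a_n1) (defn=1) or
    (a_n2) (defn=2).  `party` maps node -> party label; a node's
    poll is assumed to include itself and its neighbours, and A_P is
    assumed to be the mean of a_n over P's voters."""
    nbrs = {n: {n} for n in party}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    sums, counts = defaultdict(float), Counter(party.values())
    for n, p in party.items():
        poll = Counter(party[m] for m in nbrs[n])
        size = sum(poll.values())
        own, top = poll[p] / size, max(poll.values()) / size
        if own == top:                 # p holds (possibly joint) plurality
            sums[p] += own
        else:
            sums[p] += -top if defn == 1 else -(1.0 - own)
    A = {p: sums[p] / counts[p] for p in counts}
    return {p: A[p] - max(A[q] for q in A if q != p) for p in A}
```

Both definitions are a one-line change at the node level, which is what makes the small examples below useful for telling them apart.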
To more carefully explore our suggested extensions, we will now examine several settings of multi-party situations, and for that we shall explore the case of three parties: $\mathcal{P} = \{\circ, \triangle,\square\}$.
\subsection{$N=3$ Examples}
Consider three voters who each vote for a different party. There are only two connected networks to consider: the cycle and the line. Due to the symmetry of the cycle, all influence assortments are equal and thus all influence gaps are 0. For the line where $\triangle$ is the middle node, there is symmetry between the $\circ$ and the $\square$.
\begin{table}[!htbp]
\centering
\caption{Comparison of definitions, applied on $N=3$ graphs. An asterisk (*) indicates that Definitions (\ref{eq:a_n1}) and (\ref{eq:a_n2}) yield the same value.}
\begin{tabular}{*6c}
\hline
Graph & $a_n$ & $G_P$ & $G_\circ$ & $G_\triangle$ & $G_\square$\\
\hline
Cycle & (*) & (*) & 0 & 0 & 0 \\
\hline
\multirow{2}{*}{$\circ-\triangle-\square$} & \multirow{2}{*}{(*)} & (\ref{eq:G_P1}) & 0&-\sfrac{1}{6}&0 \\
{}& {}& (\ref{eq:G_P2}) & 0, \sfrac{1}{6}&-\sfrac{1}{6}&0, \sfrac{1}{6} \\
\hline
\end{tabular}
\end{table}
For the line graph, notice that the assortment definitions lead to $\triangle$ being either comparatively heavily influenced (Equation~\ref{eq:a_n2}) or only slightly influenced (Equation~\ref{eq:a_n1}). Moreover, the second definition of the influence gap can yield multiple results for the same graph -- this is clearly problematic. In general, Equation~\ref{eq:G_P2} produces non-unique results whenever there are joint winners -- which can occur frequently -- and hence we only consider the first definition, Equation~\ref{eq:G_P1}.
\subsection{$N=4$ Examples} \label{sub:N4}
Consider a network of 4 voters, two of whom vote for the $\circ$ party, one for $\triangle$ and one for $\square$. Already with this set-up there are at least a dozen configurations of a connected network; here we consider a few of them to illustrate how even simple examples complicate the choice of definition, starting with the 4 non-trivial line graphs (swapping $\square$ and $\triangle$ produces qualitatively identical results).
\begin{table}[!htbp]
\centering
\caption{Comparison of definitions, applied on lines of size $N=4$.}
\begin{tabular}{*5c}
\hline
Line & $a_n$ & $G_\circ$ & $G_\triangle$ & $G_\square$\\
\hline
{$\circ-\circ-\triangle-\square$} & (*) & \sfrac{1}{3}&-\sfrac{1}{2}&-\sfrac{1}{3} \\
\hline
{$\circ-\triangle-\square-\circ$} & (*) & \sfrac{1}{6}&-\sfrac{1}{6}&-\sfrac{1}{6} \\
\hline
{$\triangle-\circ-\square-\circ$} & (*) & -\sfrac{1}{12}&\sfrac{1}{12}&-\sfrac{13}{12} \\
\hline
$\triangle-\circ-\circ-\square$ & (*) & \sfrac{1}{6}&-\sfrac{1}{6}&-\sfrac{1}{6} \\
\hline
\end{tabular}
\end{table}
We see that for such line graphs both assortment definitions coincide. In fact, for a line graph of any size (indeed, for any graph of maximum degree $2$), Definitions (\ref{eq:a_n1}) and (\ref{eq:a_n2}) always coincide. For any number of parties in $\mathcal{P}$, a node has either one neighbour or two. When $n$ has exactly one neighbour -- in other words, $n$ is at the end of the line -- $\Delta^{p(n)}_n = \sfrac{1}{2}$ or $1$, so that $\Delta^{p(n)}_n = \max(\bm{\Delta}_n)$ and both definitions coincide. In the bulk of the line, with two neighbours, there are three cases:
\begin{itemize}
\item $p(n)=p(n-1)$ or $p(n+1)$; $p(n)$ has plurality in the poll of $n$, therefore $(\ref{eq:a_n1}) = (\ref{eq:a_n2})$ trivially.
\item $p(n-1)=p(n+1)\neq p(n)$; $p(n-1)$ holds plurality but is the only alternative party to $p(n)$, hence $\max (\bm{\Delta}_n) = 1 - \Delta^{p(n)}_n (= \sfrac{2}{3})$.
\item $p(n-1)$, $p(n)$ and $p(n+1)$ pairwise distinct; all three parties hold joint plurality in the poll of $n$, therefore $(\ref{eq:a_n1}) = (\ref{eq:a_n2})$ trivially.
\end{itemize}
A similar analysis shows that the two definitions also coincide on ring graphs (where every node has degree 2).
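The case analysis above is easy to verify exhaustively: the sketch below (our own check, not part of the original argument) enumerates every assignment of three parties to the nodes of a line or ring and confirms that Definitions (\ref{eq:a_n1}) and (\ref{eq:a_n2}) agree at every node.

```python
from itertools import product

def node_assortments(poll, own):
    """Both node-level assortment definitions, given the list of
    parties in a node's poll (assumed to include the node itself)."""
    size = len(poll)
    own_frac = poll.count(own) / size
    top = max(poll.count(p) for p in set(poll)) / size
    if own_frac == top:                       # (joint) plurality for own party
        return own_frac, own_frac
    return -top, -(1.0 - own_frac)            # (a_n1), (a_n2)

def definitions_coincide(n_nodes, ring=False, parties=("o", "t", "s")):
    """Exhaustively check (a_n1) == (a_n2) on every party-labelled
    line (or ring) with n_nodes nodes."""
    for labels in product(parties, repeat=n_nodes):
        for i, own in enumerate(labels):
            if ring:
                poll = [labels[i - 1], own, labels[(i + 1) % n_nodes]]
            else:
                poll = list(labels[max(0, i - 1): i + 2])  # self + line neighbours
            a1, a2 = node_assortments(poll, own)
            if abs(a1 - a2) > 1e-12:
                return False
    return True
```

The enumeration covers $3^{n}$ labellings, so it is feasible only for small $n$, but the degree-2 argument above shows the result holds for lines and rings of any size.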
\begin{table}[!htbp]
\centering
\caption{Comparison of definitions, applied on stars of size $N=4$, with different centres.}
\begin{tabular}{*5c}
\hline
Centre & $a_n$ & $G_\circ$ & $G_\triangle$ & $G_\square$\\
\hline
\multirow{2}{*}{$\triangle$} & (\ref{eq:a_n1}) & 0&-1&0 \\
{} & (\ref{eq:a_n2}) & 0&-\sfrac{5}{4}&0 \\
\hline
$\circ$ & (*) & \sfrac{1}{4}&-\sfrac{1}{4}&-\sfrac{1}{4} \\
\hline
\end{tabular}
\end{table}
On the other hand, as soon as a node has degree $3$ or higher and at least 3 parties are present, the two assortment definitions begin to diverge. For example, examine star graphs (graphs in which one node is connected to all others, and no other edges exist) with two $\circ$'s, one $\triangle$ and one $\square$. When $\triangle$ is at the centre, $\circ$ holds plurality but is not the only alternative party; hence Definition (\ref{eq:a_n2}) produces a more negative assortment and thus a more extreme influence gap.
More complex edge configurations exist, but we look at two more to further illustrate the problem: a clique and a near-clique -- i.e., a clique where a single edge ($\circ-\triangle$) has been removed.
\begin{table}[!htbp]
\centering
\caption{Comparison of definitions, applied on the clique and near-clique of size $N=4$.}
\begin{tabular}{*5c}
\hline
Graph & $a_n$ & $G_\circ$ & $G_\triangle$ & $G_\square$\\
\hline
\multirow{2}{*}{Near-Clique} & (\ref{eq:a_n1}) & \sfrac{5}{6}&-\sfrac{5}{6}&-\sfrac{5}{3} \\
{} & (\ref{eq:a_n2}) & \sfrac{5}{6}&-\sfrac{5}{6}&-\sfrac{23}{12} \\
\hline
\multirow{2}{*}{Clique} & (\ref{eq:a_n1}) & 1&-1&-1 \\
{} & (\ref{eq:a_n2}) & \sfrac{5}{4}&-\sfrac{5}{4}&-\sfrac{5}{4} \\
\hline
\end{tabular}
\end{table}
Henceforth we consider only Definition (\ref{eq:a_n1}) for influence assortment and Definition (\ref{eq:G_P1}) for influence gap, due to their faithfulness to the original concepts and, in the case of IG, that they are well-defined, i.e., do not produce multiple results on the same graph.
\section{Influence Gap on Large Graphs with Multiple Parties}\label{sec:3-party-large}
In this section we use the previously identified extensions of the influence gap and assortment to handle larger social networks with multiple parties. We first analyse their behaviour on cliques, for any number of parties, and then present experimental evidence of their predictive power on hRC graphs.
\subsection{Cliques and Plurality Centres}
Consider a clique of size $N$ with a set of parties $\mathcal{P} = \{0,\cdots,|\mathcal{P}|-1\}$ -- in other words, every voter has perfect information on all other voters -- and let $N_P$ be the number of voters for party $P$. In this situation \textit{influence} perfectly correlates with voter split, i.e., the fraction of the population. There are two main cases to consider: 1) a single plurality winner, 2) multiple joint winners (a ``deadlock'' in the parlance of \citet{Stewart2019}).
In case 1, denote by $P=0$ the unique winner and note that $\max(\bm{\Delta}_n) = \frac{N_0}{N}$ for all $n \in V$, since the graph is a clique. Moreover, all voters of a given party $P$ have the same influence assortment, due to the symmetry of the system, and hence the party-level influence assortment equals the node-level one. From Definition (\ref{eq:a_n1}) we conclude that $A_0 = \frac{N_0}{N}$ and $A_P=-\frac{N_0}{N}$ for all $P\in\mathcal{P}\setminus \{0\}$. As such, the influence gap is $+2\frac{N_0}{N}$ for the plurality winner and $-2\frac{N_0}{N}$ for the losers.
In case 2, let $W$ be the number of jointly winning parties $\mathcal{W} = \{0,\cdots,W-1\}$, each with $N_0$ voters. Then $A_P=\frac{N_0}{N}$ for $P<W$ and $A_P=-\frac{N_0}{N}$ for $P\geq W$; the influence gap follows as $+2\frac{N_0}{N}$ for the winners and $-2\frac{N_0}{N}$ for the losers.
Combining the two cases, we obtain the influence gap for a clique of size $N$ with $W$ (joint) winners ($P<W$), each winning with $N_0$ voters. The magnitude of the gap is the same for all parties, while its sign is determined by whether the party is among the winners.
\begin{equation}
G_P = \begin{cases}
+\frac{2N_0}{N} & \text{for } P<W,\\
-\frac{2N_0}{N} & \text{otherwise}
\end{cases} \label{eq:G_P-complete}
\end{equation}
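On a clique the definitions collapse to a few lines of arithmetic, which makes Equation (\ref{eq:G_P-complete}) easy to check directly; the sketch below (our own illustration) exercises the unique-winner case, assuming as before that every voter's poll is the whole electorate and that the party-level assortment is the per-voter mean.

```python
from collections import Counter

def clique_gaps(votes):
    """Influence gaps on a clique, straight from Definitions (a_n1)
    and (G_P1): every voter's poll is the whole electorate, so all
    voters of a party share the same assortment."""
    n = len(votes)
    counts = Counter(votes)
    top = max(counts.values()) / n
    A = {p: (c / n if c / n == top else -top) for p, c in counts.items()}
    return {p: A[p] - max(A[q] for q in A if q != p) for p in A}

# Unique winner P = 0 with N_0 = 3 voters out of N = 5:
# Equation (G_P-complete) predicts +2*3/5 for the winner, -2*3/5 for losers.
gaps = clique_gaps([0, 0, 0, 1, 2])
```

The same function can be used as a reference point when edges are then removed, as described next.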
From the clique we can derive the influence gap of other networks by removing edges systematically. Consider removing an edge between two nodes $u$ and $v$, neither of which votes for a plurality party $P_0$. In the polls of $u$ and $v$, $P_0$ (and every other plurality winner) still holds plurality, now with a higher poll fraction $\Delta^{P_0}_u=\Delta^{P_0}_v=\frac{N_0}{N-1}$. As such $a_u$ and $a_v$ decrease, so that overall $G_{P_0}$ increases while $G_{p(u)}$ and $G_{p(v)}$ decrease.
Repeating this procedure, one can keep removing edges until the plurality winners form a centre or core of size $W N_0$ -- in other words, all voters of $P<W$ are connected to all other nodes -- while each node in the periphery has exactly $WN_0$ neighbours. Then every node's poll shows plurality towards each $P<W$ with fraction $\Delta^{P<W}_n = \frac{1}{W+\frac{1}{N_0}}\approx\frac{1}{W}$, so that the influence gap takes a simple form.
\begin{equation}
G_P = \begin{cases}
+\frac{2N_0}{WN_0+1} & \text{for } P<W,\\
-\frac{2N_0}{WN_0+1} & \text{else.}
\end{cases} \label{eq:G_P-plurality-core}
\end{equation}
Any network between this plurality-core and the clique has an influence gap bounded by Equations \ref{eq:G_P-complete} and \ref{eq:G_P-plurality-core}: $\frac{2N_0}{WN_0+1}\leq|G_P|\leq\frac{2N_0}{N}$, with the same dependency on winning for the sign.
\subsection{Predicting Three-Party Elections} \label{sec:3-party-hrc}
Equipped with the formal multi-party definitions of \textit{influence assortment} (Equation~\ref{eq:a_n1}) and \textit{influence gap} (Equation~\ref{eq:G_P1}), we can replicate the methodology of Section \ref{sec:dynamics} to simulate elections with $3$ parties -- red, blue and green. The voter model extends easily to 3 or more parties: with probability $p_{ij}^v$ a voter $v$ in state $i$ in phase $j$ of the election sticks to her current vote, and with probability $1-p_{ij}^v$ she votes for the party with the highest poll fraction (excluding her own). Similarly, the algorithm for the homophilic relaxed-caveman graph works for an arbitrarily sized set of parties $\mathcal{P}$.
For the benchmark metrics we focus only on party votes -- counting the number of voters for each party. For simplicity, we omit the analysis of the simplified dVS dynamic and of the EG, which was designed for the two-party setting.
\begin{figure*}[h!]
\centering
\includegraphics[width=\linewidth]{Pictures/Multiparty/hRC_correlations_samples10260_3in1.png}
\caption{For a three-party election occurring on hRC graphs, the Pearson correlation coefficient (PCC) between the final vote skew and the initial number of party voters (pluses) and influence gap (crosses) are plotted for each party colour (red, blue or green). As before for different values of the rewire -- $p_0 = 0$ on the left, $p_0 = 0.4$ in the middle and $p_0=1$ on the right -- the homophily is varied. In black are lines of best fit for each metric, across all parties. For visual clarity we leave out the deterministic voter skew and efficiency gap, the latter being canonically used for the American two-party system hence unsuitable for the multi-party case.}
\label{fig:multiparty_pcc_lines}
\vspace {-1mm}
\end{figure*}
Looking at Figure \ref{fig:multiparty_pcc_lines}, there is a very clear and consistent rift between the influence gap and the party votes -- IG is always the weaker predictor. Moreover, while in the two-party case the difference between PCCs can be as small as $|\Delta\rho| = 0.00141$, the smallest difference in the three-party case is $|\Delta\rho| = 0.0191$. In other words, with three parties the influence gap never gets arbitrarily close in predictive accuracy to the simple vote count; there appears to be a bound on how close IG can get.
Another curious effect in moving to the three-party case is the behaviour with high rewiring probability. In this case, the PCC for influence gap becomes noticeably worse for low homophily, causing a widening divide between it and the party votes, in direct contrast to the two-party case where the difference shrinks.
As a concluding remark, we note that, for more than three parties with random initial assignment and rewiring, the share of any winning party in any neighbourhood is smaller in expectation. Therefore the magnitude of the assortment values $a_n$ (for any node $n\in V$) is smaller, and so is the influence gap, making it an even noisier predictor.
\section{Conclusion}\label{sec:discussion}
We studied the predictive power of a recently proposed metric, the influence gap, for the results of multi-party elections on networks with community structure. To do so, we proposed a novel model, the homophilic relaxed-caveman, as a means of generating synthetic graphs with communities that may exhibit echo chambers.
In order to extend our analysis to the multi-party case we then proposed multiple definitions for the influence assortment and gap. Their similarities and subtle differences were illustrated first in small simple graphs and then characterised for large graphs -- such as cliques and plurality centres.
We showed that the presence of communities suppresses the power of the influence gap as a predictor of the final outcome. A much simpler metric -- the initial majority -- is a far better predictor, albeit one that can be improved somewhat by combining it with the influence gap. Surprisingly, the initial advantage is an even stronger predictor in the multi-party case.
Having measured the efficacy of several metrics, we are left with the question of whether there exists a metric that predicts the voting dynamics most accurately while still being easy to compute.
A further important direction concerns differences in voters' behaviour. In our hRC models, the homophily level and the probability of rewiring are the same for both parties. However, we may want to distinguish between electorates with different propensities to connect with others: some voters may be open-minded, accepting connections that do not share their view, while others are more close-minded. Preliminary results suggest that a party can leverage a higher homophily factor in this way to structure the network in its favour.
The dynamic model has only shown that the outcome of an election on networks with community structure can be biased by that structure. However, we may want to model manipulation explicitly, by allowing parties to insert artificial bots or zealots, as \citet{Stewart2019} did, to influence members' behaviour. Just like the forceful agents of \citet{acemoglu-spread}, these can be modelled as nodes that have a party affiliation but never update their view.
Finally, agents may be allowed to actively seek new friendships, whether to express their views more widely or to receive more opinions -- the distinction becoming important when considering directed networks. Can a party utilise these dynamic connections to its advantage?
\section*{Acknowledgement}
This work was supported by ISF grant 1965/20; GIF grant I-2527-407.6/2019; and EPSRC grant EP/S022244/1 for the MathSys CDT.
\subsection{General setting}
Dunes are sand formations that take on different shapes and sizes according to their interaction with wind, water, or some other mobile medium. In the case of desert dunes, their shapes depend mainly on the amount of sand available and
on the change of the direction of the wind with time (see
Herrmann and Sauermann \cite{hs:hs}). Some examples of dune patterns
are longitudinal, transverse, star and Barchan dunes, however, there are more
than 100 categories of dunes. Dunes also occur under rivers, for similar reasons,
but their shapes are less exotic in this case, because the flow is mainly
uni-directional.
An interesting question is whether the shape of a dune is maintained as it moves. With
regard to Barchan dunes, for example, Herrmann and Sauermann \cite{hs:hs}
have given some arguments against the hypothesis
that Barchan dunes are solitary waves, mainly because they constantly
lose sand at the two horns and tend to disappear
if not supplied with new sand. Recently, Dur\'an, Schw\"ammle
and Herrmann \cite{dsh:dsh} considered a minimal model for dunes consisting
of three coupled equations of motion to study numerically the
mechanisms of dune interactions for the case when a small
Barchan dune collides with a bigger one; four different cases were
observed, depending only on the relative sizes of the two dunes, namely,
coalescence, breeding, budding, and solitary wave behavior.
In this paper, we are concerned with the following evolution equation
proposed by Fowler (see \cite{f0:f0}, \cite{f1:f1} and \cite{f2:f2}
for more details) to study nonlinear dune formation:
\begin{equation}\label{eq:fow0}
\frac{\partial u}{\partial t} (x,t) + \frac{\partial}{\partial x} \Big{[}
\frac{u^2}{2}(x,t) - \frac{\partial u}{\partial x}(x,t) + \int_0^{+\infty}
\xi^{-1/3} \frac{\partial u}{\partial x} (x-\xi, t) d\xi \Big{]}=0,
\end{equation}
where $u=u(x,t)$ represents the dune amplitude, $x \in \mathbb R$, and $t \ge 0$. The
second and fourth terms of equation (\ref{eq:fow0}) correspond to
the nonlinear and nonlocal terms respectively, while the third term is the
dissipative term.\par
Let us give a brief description of the model derivation. For more details, we
refer to Fowler \cite{f0:f0, f1:f1, f2:f2}, which we follow closely. The model
stems from the Exner law, which is the conservation of mass for the sediment:
$$ \frac{\partial u}{\partial t} + \frac{\partial q}{\partial x} = 0,$$ where
the bedload transport $q = q(\tau)$ is assumed, in the case of dunes, to depend
only on the stress $\tau$ exerted by the fluid on the erodible bed. We
assume a two-dimensional flow, where $x$ is the horizontal direction and
the second direction is the upwards coordinate orthogonal to $x$. This should account for transverse dunes, but obviously not for other types of dunes. The nonlocal
term in equation (\ref{eq:fow0}) arises from a subtle modelling
of the basal shear stress $\tau_b$. Roughly speaking, the turbulent bottom
shear stress is given by $\tau_b \approx f \rho v^2$, where $\rho$ is the fluid
density, $f$ is a dimensionless friction coefficient and $v$ is the mean fluid
velocity (vertically averaged). By performing an asymptotic expansion with
respect to the aspect ratio $\epsilon$ of the evolving bedform,
$\epsilon =\frac{\text{bed thickness}}{\text{fluid depth}} \ll 1,$
and a perturbation analysis of a basic Poiseuille flow (Orr-Sommerfeld equation),
Fowler \cite{f0:f0, f1:f1, f2:f2} was able to obtain the following expression:
$$\tau_b \approx f \rho v^2 \left\{ 1-u + \alpha \int_0^{+\infty}
\xi^{-1/3} \frac{\partial u}{\partial x} (x-\xi, t) d\xi \right\},$$
where $\alpha$ is a positive constant proportional to $Re^{1/3}$, $Re$ being
the Reynolds number. Due to the bed slope $\frac{\partial u}{\partial x}$, there
is an additional force generated by gravity $g$. Therefore, the net stress causing
motion is actually $\tau = \tau_b - (\rho_s - \rho) g D_s\frac{\partial u}{\partial x}$,
where $\rho_s$ is the sediment density and $ D_s$ the mean diameter of a
sediment particle. As long as $u$ is small, the shallow water approximation
applies to the velocity $v$ and, for small Froude number, the (dimensionless) mean
fluid velocity can be approximated by $v \approx \frac{1}{1-u}$. Thus, the mean
fluid velocity and the bottom shear stress depend on the motion of the dune
profile $u$, and therefore there is a feedback between the dune profile and the
motion of the fluid. In dimensionless variables, taking all physical constants
equal to $1$, the resulting net stress is then given by
$$
\tau \approx 1 + u + u^2 + \int_0^{+\infty} \xi^{-1/3}
\frac{\partial u}{\partial x} (x-\xi, t)\, d\xi - \frac{\partial u}{\partial x}.
$$
Notice that the nonlinear nonlocal term
$ 2 u \int_0^{+\infty} \xi^{-1/3} \frac{\partial u}{\partial x} (x-\xi, t) d\xi$ has
been discarded. By a Taylor expansion, up to order $2$, we get
$q(\tau) \approx q(1) + q'(1)(\tau -1) + \frac{1}{2}q''(1)(\tau -1)^2$.
Now, passing to a moving spatial coordinate, i.e. replacing $x$ by the new
variable $x-q'(1)t$, and plugging $q$ into the Exner equation, we obtain,
after a suitable rescaling, the canonical equation (\ref{eq:fow0}).
\par
Some numerical computations have been performed by Fowler
\cite{f1:f1,f2:f2} and Alibaud, Azerad and Is\`ebe \cite{aai:aai}. Fowler
mentions the fact that the numerical solution, computed with a pseudo-spectral
method in a large domain, starting from random initial data, converges to a final
state consisting of one travelling-wave. Alibaud et al., using a finite difference
scheme valid for a bounded time interval, starting from a compactly supported
nonnegative initial data, showed that the numerical solution of the Fowler
equation (\ref{eq:fow0}) quickly evolves to a solution with a non zero negative
part, showing the erosive effect of the nonlocal term. They also establish
theoretically the non monotone property of (\ref{eq:fow0}), namely the
violation of the maximum principle (see also Remark \ref{Remark:nonmonotone} below).
\par
To the authors' knowledge, ours is the first study to give a rigorous
mathematical proof of the existence of travelling-waves for dune
morphodynamics. We note that we have not found nontrivial travelling-waves
of the solitary-wave type for this model (see Remark \ref{R:Exist-soliton} below);
however, we cannot exclude the possibility that they exist. The
travelling-waves we obtain are instead bore-like. To the authors' knowledge,
this type of travelling dune has not yet been observed, which may call into
question the validity of the Fowler equation as a faithful description of
dune morphodynamics. The authors hope that these results will be of interest
to geographers, geologists, oceanographers and others.
\subsection{Organization of the paper}
In Section \ref{section:travelling} we study the existence of travelling-wave
solutions to equation (\ref{eq:fow0}). The main result
of this section is Theorem \ref{theo:exist}, which implies that for each
wave speed $d>0$ and each $\eta \in \mathbb R$ in a neighborhood of zero, there
exists a travelling-wave solution $u(x,t)=\phi(x-dt)$ to the following
version of equation (\ref{eq:fow0})
\begin{equation*}
\frac{\partial u}{\partial t} (x,t) + \frac{\partial}{\partial x} \Big{[}
\frac{u^2}{2}(x,t) - \frac{\partial u}{\partial x}(x,t) + \eta \int_0^{+\infty}
\xi^{-1/3} \frac{\partial u}{\partial x} (x-\xi, t) d\xi \Big{]}=0,
\end{equation*}
where $\phi \in C^1_b(\mathbb R)$; the idea of its proof is to use the implicit
function theorem on suitable Banach spaces. Then, by a scaling argument and
considering a suitable translation of the travelling-wave, we extend this
result for any $\eta \in \mathbb R$ and any wave speed $d\in \mathbb{R}$.
Section \ref{section:local} is devoted to proving local well-posedness (LWP)
for the integral equation associated to the initial value problem (IVP)
for equation (\ref{eq:fow0}). Inspired by the regularity of the
travelling-wave obtained in Section 2, we consider a suitable subspace of
$C^1_b(\mathbb R)$. The analysis of the linear equation associated to equation
(\ref{eq:fow0}) is addressed in Sub-section \ref{subsection:linear}. Next,
in Sub-section \ref{subsection:local}, the main result of this section is
stated in Theorem \ref{T:local}; it gives local-in-time existence of the solution of
the integral equation associated to the IVP for equation (\ref{eq:fow0}), with initial
data belonging to the subspace $X$ of $C^1_b(\mathbb R)$, where
$$X := \{f \in C_b^1(\mathbb R) ; f' \text{ is uniformly continuous} \}.$$
\subsection{Notations}
- We denote by $\mathbb R$ and $\mathbb C$ the sets of all real and complex
numbers respectively. $\mathbb N$ denotes the set of all natural numbers.\\
- We denote by $C(c_1, c_2, \ldots)$ a constant depending on the
parameters $c_1, c_2, \ldots$; $C$ is assumed to be a non-decreasing
function of its arguments. \\
- The norm of a measurable function $f\in L^p(\Omega)$, for $\Omega$ a
subset of $\mathbb R$, is written
$\|f\|^p_{L^p(\Omega)}=\int_{\Omega}|f|^pdx$ for $1\le p < +\infty$, and
$\|f\|_{L^\infty(\Omega)}=\text{ess sup}_{\Omega}|f|$. The inner
product of two functions $f,g \in L^2(\Omega) $ is written as
$(f,g)=\int_{\Omega}f\bar gdx$. We will often omit the set $\Omega$ when
the context is clear.\\
- We denote by $\hat f = \mathcal{F} f$ the Fourier transform of $f$ ($\mathcal{F} ^{-1}$ and
$\; \check{} \;$ are used to denote the inverse of the Fourier transform), where
$\hat f (\xi) := \frac{1}{\sqrt{2\pi}} \int e^{-i\xi x}f(x)dx$
for $f \in L^1(\mathbb R)$ (it follows that $\widehat{f * g}= \sqrt{2\pi}
\hat f \hat g$ for $f, g \in L^1(\mathbb R)$). \\
- The Schwartz space of rapidly decreasing functions on $\mathbb R$ is denoted
$\mathcal S(\mathbb R)$. \\
- We denote $\Lambda:=(1-\partial^2_x)^{1/2}$ and $H^s(\mathbb R)$ ($s\in\mathbb R$)
the usual Sobolev space
$H^s(\mathbb R)=\{u\in {\mathcal S}'(\mathbb R), \|u\|_{H^s}<\infty\}$,
where $\| u\|_{H^s}=\| \Lambda^s u\|_{L^2}$. \\
- Let $\Omega \subset \mathbb R$. $C^0(\Omega)= C(\Omega)$ is used to denote the space
of all continuous complex-valued functions on $\Omega$. Moreover,
$C^k(\Omega) = \{u:\Omega \to \mathbb C \; ; \; u, u', \ldots, u^{(k)}
\in C^0(\Omega)\}$, for $k \in \mathbb N$. We write $C^{\infty} (\Omega)$ to denote
the set of infinitely differentiable complex-valued functions on $\Omega$. Similarly,
we use the notations $C^0(\Omega;Y)= C(\Omega;Y), C^k(\Omega; Y), C^\infty (\Omega;Y)$
when functions take values in the Banach space $Y$.\\
- We write $C_\infty(\mathbb R)$ to denote the space of all continuous complex-valued
functions defined on $\mathbb R$ which tend to zero at infinity.\\
- We denote by $C_b(\mathbb R)=C_b^0(\mathbb R)$ the space of all bounded continuous real-valued
functions on $\mathbb R$ with the norm $\| \cdot \|_{L^\infty}$. Moreover, for every
$k \in \mathbb N$, we write
$$
C_b^k(\mathbb R):= \{f\in C^k(\mathbb R) \; ; \; f, f', \ldots, f^{(k)} \in C_b(\mathbb R) \},
$$
where $\|f\|_{C_b^k} := \sum_{i=0}^k\| f^{(i)} \|_{L^\infty}$, for all
$f \in C_b^k(\mathbb R)$. \\
- If $X$ and $Y$ are two Banach spaces, we denote by ${\mathfrak L}(X,Y)$
the set of all continuous linear mappings defined on $X$
with values in $Y$; if $X=Y$, we simply write $\mathfrak{L}(X)$.
\section{Existence of Travelling-Wave Solutions of the
Fowler Equation}\label{section:travelling}
We begin this section with some notations and preliminary results.
We define
\begin{equation}\label{eq:psi}
\psi (x):= \chi_{(0,\infty)}(x) \cdot x^{-1/3},
\;\; \text{for all} \;\; x \in \mathbb R,
\end{equation}
where $\chi_A$ is used to denote the characteristic
function of the set $A$. We also define
\begin{equation}\label{eq:g[u]}
g[u]:= \psi * \partial_x u.
\end{equation}
We note that, since $\psi \in {\mathcal S}'(\mathbb R)$,
it follows that for $\phi \in \mathcal S(\mathbb R)$,
one has that $\psi * \phi \in C^\infty(\mathbb R)\cap{\mathcal S}'(\mathbb R)$ and
$\widehat{\psi * \phi} = \sqrt{2\pi} \hat \psi \hat\phi$ (see Rudin \cite{r:r}).
Then, for $\varphi \in \mathcal{S}(\mathbb R)$, $g[\varphi] (\cdot)=
\psi * \partial_x \varphi (\cdot)= \int_0^{+\infty} \xi^{-1/3}
$\partial_x \varphi (\cdot-\xi) d\xi$. The next lemma gives the
Fourier transform of the function $\psi$.
\begin{Lemma}\label{lema:foupsi}
For the function $\psi$ defined by (\ref{eq:psi}) we have
\begin{equation}\label{eq:fourier}
\hat \psi (\xi) = \frac{1}{\sqrt{2 \pi}} \Gamma\Big(\frac23\Big)
\Big(\frac12 -i \frac{\sqrt 3}{2} \text{sgn}(\xi) \Big)
|\xi|^{-2/3},
\end{equation}
where
\begin{equation*}
\text{sgn}(\xi) =\left\{
\begin{array}{rl}
-1, & \xi <0,\\
1, & \xi>0,
\end{array}
\right.
\end{equation*}
and $\Gamma$ is the gamma function.
\end{Lemma}
\begin{proof}
We define the function $\psi_n (x) := \chi_{(0,n)}(x) x^{-1/3}$, for all
$x \in \mathbb R$, and $n \in \mathbb N$. It is not difficult to see
that $\psi_n \rightarrow \psi$ in ${\mathcal S}'(\mathbb R)$ as $n$
goes to infinity. Let $\varphi \in {\mathcal S}(\mathbb R)$. Then
\begin{eqnarray*}
\langle \hat \psi_n, \varphi \rangle
&=& \frac{1}{\sqrt{2\pi}} \int \Big[
\int_0^n \frac{\cos (\xi x)}{\xi^{1/3}} d\xi
-i \int_0^n \frac{\sin (\xi x)}{\xi^{1/3}} d\xi \Big]
\varphi (x) dx\\
&=& \frac{1}{\sqrt{2\pi}} \int |x|^{-2/3} \Big[
\int_0^{n|x|} \frac{\cos (u)}{u^{1/3}} du
-i \, \text{sgn}(x) \int_0^{n|x|} \frac{\sin (u)}{u^{1/3}} du \Big]
\varphi (x) dx.
\end{eqnarray*}
Since
\begin{equation*}
\int_0^{+\infty} \frac{\cos x}{x^{1/3}} dx = \frac12 \Gamma\Big(\frac23\Big), \;\;
\text{and} \;\;
\int_0^{+\infty} \frac{\sin x}{x^{1/3}} dx = \frac{\sqrt 3}{2}
\Gamma\Big(\frac23\Big),
\end{equation*}
it follows that
\begin{equation*}
\Big| |x|^{-2/3} \int_0^{n|x|} \frac{e^{-i\,\text{sgn}(x) u}}{u^{1/3}} du \;
\varphi(x) \Big| \le C |x|^{-2/3} |\varphi(x)|,
\end{equation*}
for all $n \in \mathbb N$, and $x \in \mathbb R$. Therefore, the dominated convergence
theorem implies that
\begin{equation*}
\lim_{n \rightarrow \infty} \langle \hat \psi_n, \varphi \rangle
= \frac{1}{\sqrt{2\pi}} \int |x|^{-2/3} \Gamma \Big( \frac23 \Big)
\Big(\frac12 - i \frac{\sqrt 3}{2} \text{sgn} (x) \Big) \varphi(x) dx.
\end{equation*}
This completes the proof of the lemma.
\end{proof}
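As a numerical sanity check of the two oscillatory integrals used in the proof (an illustration only; the number of half-periods, the quadrature rule, and the tolerances are ad hoc choices), one can integrate up to consecutive zeros of the integrand and accelerate the alternating sequence of partial integrals by repeated averaging:

```python
import math

def simpson(f, a, b, n=200):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def frac_osc_integral(g, first_zero, pieces=40):
    """Approximate int_0^infty g(x) x^(-1/3) dx for g = cos or sin.

    The head piece uses x = u**3 to remove the x^(-1/3) singularity;
    the alternating tail of partial integrals is then accelerated by
    repeated pairwise averaging (an Euler-type transformation)."""
    partial = simpson(lambda u: 3.0 * u * g(u ** 3), 0.0, first_zero ** (1.0 / 3.0))
    partials = [partial]
    a = first_zero
    for _ in range(pieces):
        partial += simpson(lambda x: g(x) / x ** (1.0 / 3.0), a, a + math.pi)
        partials.append(partial)
        a += math.pi
    while len(partials) > 1:
        partials = [0.5 * (p + q) for p, q in zip(partials, partials[1:])]
    return partials[0]

gamma23 = math.gamma(2.0 / 3.0)
Ic = frac_osc_integral(math.cos, math.pi / 2)   # zeros of cos at pi/2 + k*pi
Is = frac_osc_integral(math.sin, math.pi)       # zeros of sin at k*pi
print(Ic, 0.5 * gamma23)
print(Is, 0.5 * math.sqrt(3.0) * gamma23)
```

With $40$ half-periods the accelerated values agree with $\frac12\Gamma(\frac23)\approx 0.677$ and $\frac{\sqrt 3}{2}\Gamma(\frac23)\approx 1.173$ to high accuracy.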
\begin{Remark}\label{R:def}
Let $s \in \mathbb R$. If $u \in H^s(\mathbb R)$, one can define $g[u]$
through its Fourier transform by
\begin{equation}\label{eq:fourierg}
\widehat{g[u]}(\xi) := \Gamma \Big( \frac23 \Big) \Big(
\frac{\sqrt 3}{2} \text{sgn} (\xi)+\frac{i}{2} \Big) \xi^{1/3} \hat u (\xi),
\end{equation}
for almost every $\xi \in \mathbb R$. Thus, if $u \in H^s(\mathbb R)$, it follows
that $g[u] \in H^{s-1/3}(\mathbb R)$ and
$ \| g[u] \|_{H^{s-1/3}} \le \Gamma \Big( \frac23 \Big) \|u\|_{H^s}$.
\end{Remark}
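For completeness, let us note that definition (\ref{eq:fourierg}) is the formal consequence of Lemma \ref{lema:foupsi}: since $\widehat{\partial_x u}(\xi)=i\xi\,\hat u(\xi)$ and $\widehat{\psi * v}=\sqrt{2\pi}\,\hat\psi\,\hat v$, one has, writing $\xi^{1/3}=\text{sgn}(\xi)\,|\xi|^{1/3}$ for the real cube root,
\begin{equation*}
\widehat{g[u]}(\xi) = \sqrt{2\pi}\, \hat\psi(\xi)\, i\xi\, \hat u(\xi)
= \Gamma\Big(\frac23\Big) \Big(\frac12 -i\frac{\sqrt 3}{2}\text{sgn}(\xi)\Big)
i\,\text{sgn}(\xi)\, |\xi|^{1/3}\, \hat u(\xi)
= \Gamma\Big(\frac23\Big) \Big(\frac{\sqrt 3}{2}\text{sgn}(\xi)+\frac{i}{2}\Big)
\xi^{1/3}\, \hat u(\xi).
\end{equation*}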
In this section we consider the following, more general, version
of equation (\ref{eq:fow0}):
\begin{equation}\label{eq:fow}
\partial_t u (x,t)+ \partial_x \Big ( \frac{u^2}{2}
- \partial_x u + \eta \; g[u]\Big) (x,t)=0,
\end{equation}
where $\eta \in \mathbb R$. We will show existence of travelling-wave
solutions to equation (\ref{eq:fow}), for any $\eta \in \mathbb R$. First,
we consider the case $\eta=0$. For any $d\in \mathbb R$
(see Johnson \cite{j:j}),
\begin{equation}\label{eq:eta=0}
u_d(x,t)= \frac{d}{2} \Big[ 1-\tanh \Big(\frac{d}{4} (x-\frac{d}{2}t)
\Big) \Big]
\end{equation}
is a solution to equation (\ref{eq:fow}) with $\eta=0$.
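A quick finite-difference check (an illustration only; the step size $h$ and the sample points are arbitrary choices) confirms that (\ref{eq:eta=0}) satisfies $\partial_t u + \partial_x(\frac{u^2}{2}-\partial_x u)=0$ up to discretization error:

```python
import math

def u(d, x, t):
    """Travelling front u_d of (eq:eta=0)."""
    return 0.5 * d * (1.0 - math.tanh(0.25 * d * (x - 0.5 * d * t)))

def residual(d, x, t, h=1e-4):
    """Central differences for u_t + (u**2/2 - u_x)_x at (x, t)."""
    ut = (u(d, x, t + h) - u(d, x, t - h)) / (2.0 * h)
    def flux(y):
        ux = (u(d, y + h, t) - u(d, y - h, t)) / (2.0 * h)
        return 0.5 * u(d, y, t) ** 2 - ux
    return ut + (flux(x + h) - flux(x - h)) / (2.0 * h)

worst = max(abs(residual(d, x, t))
            for d in (0.5, 1.0, 2.0)
            for x in (-1.0, 0.0, 0.7, 3.0)
            for t in (0.0, 1.5))
print(worst)   # O(h**2) truncation plus roundoff
```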
\begin{Remark}\label{R:scale}
Let $\lambda >0$. We define
\begin{equation}\label{eq:scale}
u_\lambda (x,t) := \frac{1}{\lambda} u\Big(\frac{x}{\lambda},
\frac{t}{\lambda^2} \Big), \;\;\text{for} \;\; x \in \mathbb R,
\;\; \text{and}\;\; t \ge 0.
\end{equation}
It is straightforward to check that if $u$ is a solution to the equation
\begin{equation}\label{eq:fow1}
\partial_t u (x,t)+ \partial_x \Big ( \frac{u^2}{2}
- \partial_x u + \lambda^{2/3} \eta \; g[u] \Big)(x,t)=0,
\end{equation}
then $u_\lambda$ satisfies equation (\ref{eq:fow}). Hence, if
$\phi$ is a travelling-wave solution of equation (\ref{eq:fow1})
with speed $c$, then $\phi_\lambda (\cdot)= \frac{1}{\lambda}
\phi(\frac{1}{\lambda} \cdot)$ is a travelling-wave solution of
equation (\ref{eq:fow}) with speed $c/\lambda$.
\end{Remark}
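The computation behind Remark \ref{R:scale} is the homogeneity of the nonlocal term: formally, using $g[u]=\psi * \partial_x u$ and the change of variables $\xi = \lambda s$,
\begin{equation*}
g[u_\lambda](x,t)
= \frac{1}{\lambda^2}\int_0^{+\infty} \xi^{-1/3}\,
(\partial_x u)\Big(\frac{x-\xi}{\lambda},\frac{t}{\lambda^2}\Big)\, d\xi
= \lambda^{-4/3}\, g[u]\Big(\frac{x}{\lambda},\frac{t}{\lambda^2}\Big),
\end{equation*}
so that $\partial_x g[u_\lambda]$ carries the factor $\lambda^{-7/3}$, while
$\partial_t u_\lambda$, $\partial_x(\frac{u_\lambda^2}{2})$ and
$\partial_x^2 u_\lambda$ all carry the factor $\lambda^{-3}$; the mismatch
$\lambda^{-7/3}=\lambda^{2/3}\cdot\lambda^{-3}$ accounts for the coefficient
$\lambda^{2/3}\eta$ in (\ref{eq:fow1}).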
We define, for $c \in \mathbb R$, the functions
\begin{equation}\label{eq:gc}
g_c(x):= c \Big(1- \tanh \Big( \frac{c}{2} x \Big) \Big), \;\;\;\text{and} \;\;\;
h_c(x) := g_c'(x) = -\frac{c^2}{2} \text{sech}^2 \Big( \frac{c}{2} x \Big).
\end{equation}
\begin{Remark}\label{R:gc}
Let $c \in \mathbb R$. We see that $g[g_c]=I_1+I_2$,
where $I_j := \psi_j * h_c$ for $j=1,2$, with
$\psi_1:= \psi \cdot \chi_{(0,1)}$, and
$\psi_2 := \psi \cdot \chi_{(1,+\infty)}$. Now, we state
some immediate properties of the function $g[g_c]$.\\
{\bf{a.)}} Let $p>3$. Since $\psi_1 \in L^1(\mathbb R)$, and $\psi_2\in L^p(\mathbb R)$, it
follows from the Young inequality for convolution that $g[g_c] \in L^p(\mathbb R)$. \\
{\bf{b.)}} Furthermore, $g[g_c] \in C_\infty(\mathbb R)$. In fact, it follows from
the dominated convergence theorem that $I_1$ is continuous and
$I_1(x) \rightarrow 0$ as $|x| \rightarrow \infty$. Moreover, H\"older's
inequality and the dominated convergence theorem imply that
\begin{equation*}
I_2(x) \le C(c) \Big( \int_1^{+\infty}
\frac{\text{sech}^4(\frac{c}{2} (x-\xi))}{\xi^{4/3}} d\xi \Big)^{1/4}
\rightarrow 0, \; \text{as} \; |x|\rightarrow +\infty.
\end{equation*}
The continuity of $I_2$ is shown similarly to the continuity of $I_1$.
\end{Remark}
Let $c\in \mathbb R$. In the sequel, we will consider the following spaces:
\begin{eqnarray*}
\mathfrak{X} =\mathfrak{X}_c &:=& \Big\{ \varphi \in C_b^1(\mathbb R) \; \; ; \;\;
\int \varphi' h_c' \; dx=0\Big\}, \\
\tilde \mathfrak{X} = \tilde \mathfrak{X}_c&:=& \Big\{ g_c+\varphi \; \; ; \;\; \varphi \in \mathfrak{X} \Big\},
\end{eqnarray*}
where $\| \cdot \|_{\mathfrak{X}} := \| \cdot \|_{C_b^1}$. One sees that
$(\mathfrak{X}, \| \cdot \|_{\mathfrak{X}})$ is a Banach space.
\begin{Remark}\label{R:property}
Assume that $\varphi \in C_b^1(\mathbb R)$. By integration by parts one has that
\begin{equation}\label{eq:new}
g[\varphi](x) = \int_0^1 \frac{1}{\xi^{1/3}} \varphi'(x-\xi) d\xi
+\varphi(x-1) - \frac13
\int_1^{+\infty} \frac{1}{\xi^{4/3}} \varphi(x-\xi) d\xi.
\end{equation}
Then $g[\varphi] \in C_b(\mathbb R)$. Moreover,
\begin{equation}\label{eq:ineqnew}
\| g[\varphi] \|_{L^{\infty}} \le C \; \| \varphi \|_{C_b^1}.
\end{equation}
Hence, if $\phi \in \tilde \mathfrak{X}$, it follows from Remark \ref{R:gc}-{\bf{b.)}}
above that $g[\phi] \in C_b(\mathbb R)$.
\end{Remark}
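For the reader's convenience, the integration by parts behind (\ref{eq:new}) is the identity
\begin{equation*}
\int_1^{+\infty} \frac{\varphi'(x-\xi)}{\xi^{1/3}}\, d\xi
= -\Big[\frac{\varphi(x-\xi)}{\xi^{1/3}}\Big]_{\xi=1}^{\xi=+\infty}
-\frac13 \int_1^{+\infty} \frac{\varphi(x-\xi)}{\xi^{4/3}}\, d\xi
= \varphi(x-1) -\frac13 \int_1^{+\infty} \frac{\varphi(x-\xi)}{\xi^{4/3}}\, d\xi,
\end{equation*}
where the boundary term at $\xi=+\infty$ vanishes because $\varphi$ is bounded
and $\xi^{-1/3}\rightarrow 0$; each term on the right-hand side of (\ref{eq:new})
is then bounded by $C\,\|\varphi\|_{C_b^1}$, which gives (\ref{eq:ineqnew}).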
Suppose now that $u(x,t)=\phi(x-ct)$ is a solution to
equation (\ref{eq:fow}), where $\phi \in \tilde \mathfrak{X}$. Then
$-c \phi' + \frac{d}{dx}(\frac{\phi^2}{2}-\phi'+\eta g[\phi])=0$. Thus,
a sufficient condition to guarantee that $\phi$ satisfies the last
equation is
\begin{equation}\label{eq:travelling1}
F(\eta,\phi)= F_c(\eta,\phi):= c\phi - \frac{\phi^2}{2}+\phi' -\eta g[\phi] =0.
\end{equation}
We denote by $\tau_c$ the function given by
$\tau_c(x) := c-g_c(x)= c \; \text{tanh}(\frac{c}{2}x)$, for $x \in \mathbb R$.
We now define the function $G=G_c$, which is well defined on $\mathbb R \times \mathfrak{X}$
by Remarks \ref{R:gc} and \ref{R:property} above, as
\begin{eqnarray}\label{eq:travelling2}
G:\mathbb R \times \mathfrak{X} &\to& C_b(\mathbb R) \nonumber \\
(\eta,\varphi) &\mapsto&
G(\eta,\varphi) = \tau_c \varphi - \frac{\varphi^2}{2} +\varphi'
-\eta g[\varphi] -\eta g[g_c].
\end{eqnarray}
Assume that $\phi= \varphi+g_c \in \tilde \mathfrak{X}$. Since $F(0,g_c)=0$, it
follows that $F(\eta, \phi) = G(\eta,\varphi)$. Hence, $\phi$ satisfies
equation (\ref{eq:travelling1}) if and only if $\varphi$ verifies the equation
$G(\eta,\varphi)=0$.
The following theorem implies the existence of a travelling-wave
solution, $u(x,t)=\phi(x-ct)$ with $c>0$ and $\phi \in \tilde \mathfrak{X}$, to
equation (\ref{eq:fow}) for $\eta$ in a neighborhood of zero; its proof
uses the implicit function theorem.
\begin{Theorem}\label{theo:exist}
Suppose $c >0$. Then there exist $\delta, \delta_0>0$ such that
for every $\eta \in (-\delta,\delta)$, there is exactly one
$\varphi_\eta=\varphi_{\eta,c} \in \mathfrak{X}$ for which
$\|\varphi_\eta\|_{\mathfrak{X}} \le \delta_0$ and $G(\eta,\varphi_\eta)=0$. Moreover,
the mapping $\eta \mapsto \varphi_\eta$ is a $C^{\infty}$-map on a
neighborhood of $0$.
\end{Theorem}
\begin{proof}
Let $c>0$. The mapping $G=G_c$ is defined on the Banach space
$\mathbb R \times \mathfrak{X}$ taking values in the Banach space
$(C_b(\mathbb R), \| \cdot \|_{L^\infty})$, and satisfies $G(0,0)=0$.
We now claim that $\partial_1 G$ and $\partial_2 G$ exist as
partial F-derivatives (Fr\'echet derivative) on $\mathbb R \times \mathfrak{X}$ and that
the partial F-derivative $\partial_2 G(0,0):\mathfrak{X} \to C_b(\mathbb R)$ is bijective. \\
In fact, let us take $(\eta, \varphi) \in \mathbb R \times \mathfrak{X}$. One can see
that, for $h \in \mathbb R$ and $f \in \mathfrak{X}$,
\begin{equation*}
\partial_1 G(\eta,\varphi)\, h
=-h \, (g[\varphi] + g[g_c])
\end{equation*}
and
\begin{equation}\label{eq:partial_2G}
\partial_2 G(\eta,\varphi)\, f
=(\tau_c -\varphi) f + f'
-\eta \, g[f].
\end{equation}
Then $\partial_1 G(\eta,\varphi) \in \mathfrak L(\mathbb R, C_b(\mathbb R))$, and
$\partial_2G(\eta,\varphi) \in \mathfrak L(\mathfrak{X},C_b(\mathbb R))$. Moreover,
we obtain that
$\|\partial_1 G(\eta,\varphi)\|_{\mathfrak L(\mathbb R, C_b(\mathbb R))} \le C \cdot
(\|\varphi \|_{C^1_b} + \|g[g_c]\|_{L^\infty})$, and
$\|\partial_2G(\eta,\varphi) \|_{\mathfrak{L}(\mathfrak{X},C_b(\mathbb R))} \le C \cdot
(1+|\eta|+\| \tau_c - \varphi\|_{L^\infty})$, where we have
used inequality (\ref{eq:ineqnew}). Hence, $\partial_1G, \partial_2G$
exist as partial F-derivatives on $\mathbb R\times \mathfrak{X}$. \\
We will now show that the partial F-derivative
$ \partial_2G(0,0) =\tau_c+\partial_x:\mathfrak{X} \to C_b(\mathbb R)$ is bijective.
We begin with the injectivity; we emphasize here that the definition of the
space $\mathfrak{X} \subset C^1_b(\mathbb R)$ was chosen to ensure the injectivity of
the mapping $ \partial_2G(0,0)$. Let $f$ be an element of $\mathfrak{X}$ such that
$\tau_c f +f'=0$. By solving the last ordinary differential equation, one gets
\begin{equation*}
f(x)=f(0) \; \cdot \; e^{-\int_0^x \tau_c(s)ds}
= f(0) \; \cdot \; \text{sech}^2\Big(\frac{c}{2}x\Big).
\end{equation*}
Since $f\in \mathfrak{X}$, it follows that
\begin{equation*}
\int f'(x) h_c'(x) dx = -f(0)\frac{2}{c^2} \int (h_c')^2(x) dx =0.
\end{equation*}
Then $f(0)=0$, and therefore $f=0$. \\
We will now show that the mapping $\partial_2G(0,0)$ is onto. Let $y$ be
an element of $C_b(\mathbb R)$. By the method of variation of parameters, we
obtain that the function
\begin{equation}\label{eq:sobrej}
g(x) := \lambda l_c (x) + l_c (x)
\int_0^x \frac{y(s)}{l_c(s)} ds
\end{equation}
is a solution to the equation $\tau_c g +g' = y$, for any
$\lambda \in \mathbb R$, where $l_c(x):= -\frac{2}{c^2}h_c(x) =
\text{sech}^2\big(\frac{c}{2}x\big)$. We will
prove that $g \in \mathfrak{X}$ for a suitably chosen real number $\lambda$.
First, we remark that there exists a unique
$\lambda=\lambda_{y,c} \in \mathbb R$ such that $\int g'h_c'\,dx=0$. In
fact, take
\begin{equation}\label{eq:lambda}
\lambda := \frac{c^2}{2\int(h_c')^2(x)dx}
\int \Big[ (h_c')^2(x) \int_0^x \frac{y(s)}{h_c(s)}ds
+y(x) h_c'(x) \Big]dx,
\end{equation}
where we note that
\begin{equation*}
0< \int (h_c')^2(x)dx \le \frac{c^6}{4} \int
\text{sech}^4 \Big(\frac{c}{2}x\Big) dx = C(c), \;\;\;
\int|h_c'(x)|dx =c^2,
\end{equation*}
and
\begin{eqnarray*}
&& \int (h_c')^2(x) \Big| \int_0^x \frac{y(s)}{h_c(s)}ds \Big| dx
\le \frac{c^4}{2} \|y\|_{L^\infty}
\int \frac{\sinh^2(\frac{c}{2}x)}{\cosh^6(\frac{c}{2}x)}
\Big| \int_0^x \frac{1+\cosh(cs)}{2} ds \Big| dx \\
&&\le \frac{c^3}{4} \|y\|_{L^\infty}
\int \Big[ c|x| \text{sech}^4 \Big( \frac{c}{2}x \Big)
+ 2 \; \text{sech}^2 \Big( \frac{c}{2}x \Big) \Big] dx
\le C(c) \|y\|_{L^\infty}.
\end{eqnarray*}
It remains to show that $g$ given by (\ref{eq:sobrej}) and
(\ref{eq:lambda}) belongs to $C_b^1(\mathbb R)$. It is immediate that
$g \in C(\mathbb R)$; we need to show that $g$ is bounded. We have that
\begin{eqnarray*}
&& \text{sech}^2 \Big( \frac{c}{2}x \Big) \Big| \int_0^x
\frac{y(s)}{\text{sech}^2(\frac{c}{2}s)} ds \Big|
\le \frac{\|y\|_{L^\infty}}{1+\cosh(cx)}
\Big|\int_0^x (1+\cosh(cs)) ds \Big| \\
&& \le \|y\|_{L^\infty} \Big(
\frac{|x|}{1+\cosh(cx)} +\frac{1}{c} |\tanh(cx)|\Big)
\le C(c) \|y\|_{L^\infty}.
\end{eqnarray*}
Then $g \in C_b(\mathbb R)$. Moreover, since $g$ satisfies the
equation $\tau_c g +g'=y$, it follows that $g \in C_b^1(\mathbb R)$. Hence,
$g\in \mathfrak{X}$. Therefore, $\partial_2 G(0,0)$ is a surjective mapping.
It is not difficult to see, by using inequality (\ref{eq:ineqnew}),
that $G$, $\partial_1 G$ and $\partial_2 G$ are continuous
on $\mathbb R \times \mathfrak{X}$. Then, the implicit function theorem
implies the first part of the theorem. Furthermore, from (\ref{eq:travelling2})
one can see that the function $G$ is quadratic in $\varphi$ and linear
in $\eta$; therefore, it is not difficult to verify that
$\partial_{i,j}^2G(\eta,\varphi)$ is independent of
$(\eta,\varphi) \in \mathbb R \times \mathfrak{X}$, for all $i,j\in \{1,2\}$. Hence,
$\partial_{i_1,\ldots,i_k}^kG(\eta,\varphi)=0$ for all $k\ge3$, where
$i_1,\ldots,i_k\in\{1,2\}$, and $(\eta,\varphi) \in \mathbb R \times \mathfrak{X}$.
Finally, the second part of the theorem is then a consequence of the fact
that the mapping $G$ is a $C^\infty$-map on $\mathbb R \times \mathfrak{X}$.
\end{proof}
\begin{Corollary}
Let $\eta \in \mathbb R$ and $d\in \mathbb{R}$. Then there is a travelling-wave
solution $\tilde \phi \in C^1_b(\mathbb R)$ of equation (\ref{eq:fow}) with speed $d$.
\end{Corollary}
\begin{proof}
Let $c>0$. By Theorem \ref{theo:exist} there exists
$\lambda_0=\lambda_0(\eta,c)>0$ such that
for every $\lambda \in (0,\lambda_0)$, there is a
$\phi=\phi_{\lambda,\eta,c} \in C_b^1(\mathbb R)$ such that $u(x,t)=\phi(x-ct)$ is
a solution to equation (\ref{eq:fow1}). Now we can see, by using
Remark \ref{R:scale}, that
$\phi^{\dagger}(\cdot)=\frac{1}{\lambda}\phi (\frac{1}{\lambda}\cdot)$
is a travelling-wave solution of equation (\ref{eq:fow}) with speed
$c/\lambda \in (\frac{c}{\lambda_0}, +\infty)$. The result now follows
from the fact that if $\phi^{\dagger}(x-\tilde{c}t)$ is a solution of
equation (\ref{eq:fow}) for some $\tilde c >0$, then
$\tilde \phi(x,t):= \phi^{\dagger}(x-(\tilde{c}+k)t)+k$ is also a solution
for all $k \in \mathbb{R}$.
\end{proof}
\begin{Remark}\label{R:Exist-soliton}Let us comment on the existence of
solitary travelling waves, proceeding formally at first. By multiplying the equation
$-c \phi' + \frac{d}{dx}(\frac{\phi^2}{2}-\phi'+\eta g[\phi])=0$
by $\phi$, then integrating between $-\infty$ and $x$, assuming that
$\phi, \phi' \rightarrow 0$ as $|x|\rightarrow +\infty$, we get
$$
-c \frac{\phi^2(x)}{2}+\frac{\phi^3(x)}{3}
-\int_{-\infty}^{x} \phi(y) \phi''(y)dy
+\eta \int_{-\infty}^x \frac{dg[\phi]}{dy}(y) \phi(y)dy=0.
$$
Making $x\rightarrow +\infty$, integrating by parts, and then
applying Parseval's relation and Remark \ref{R:def}, we obtain
\begin{equation}\label{eq:soliton}
\int_{-\infty}^{+\infty} \Big(\xi^2-\frac{\eta}{2}
\Gamma\Big(\frac23\Big) \xi^{4/3} \Big)
|\hat \phi (\xi)|^2 d\xi =0.
\end{equation}
These formal steps can be justified by assuming for instance that
$\phi \in H^2(\mathbb{R})$. Thus, equation (\ref{eq:soliton}) implies
that if $\eta \le 0$, then $\phi=0$. We can then conclude that there
are no nontrivial travelling-waves of the solitary-wave type
for equation (\ref{eq:fow}) when $\eta \le 0$. However, in the
physical case, that is to say when $\eta=1$ or more generally when
$\eta>0$, equation (\ref{eq:soliton}) does not preclude the possibility that
they may exist.
\end{Remark}
\section{Local Theory in a subspace of $C^1_b(\mathbb R)$}\label{section:local}
In Section \ref{section:travelling}, we proved the existence of a
travelling-wave solution $u(x,t)=\phi(x-ct)$ to equation (\ref{eq:fow}) for
any $\eta \in \mathbb R$, where $c$ is an appropriate positive number and $\phi\in C^1_b(\mathbb R)$.
Motivated by this last result, we will consider in this section the local
well-posedness theory for the following initial value problem (IVP)
\begin{equation}\label{eq:IVP}
\left \{
\begin{array}{l}
\partial_t u (x,t)+ \partial_x \big ( \frac12 u^2
-\partial_x u + g[u] \big)(x,t)=0, \\
u(0)=u_0,
\end{array}
\right.
\end{equation}
where $g[u]$ is given by (\ref{eq:g[u]}), and $u_0$ belongs to a
suitable subspace of $C^1_b(\mathbb R)$. The Cauchy problem associated to the IVP
(\ref{eq:IVP}) for initial data $u_0 \in L^2(\mathbb R)$ was recently studied by
Alibaud, Azerad and Is\`ebe \cite{aai:aai}.
\subsection{The Linear Equation}\label{subsection:linear}
First, we consider the linear part associated to the IVP (\ref{eq:IVP}), namely
\begin{equation}\label{eq:IVP-linear}
\left \{
\begin{array}{l}
\partial_t u (x,t)
-\partial_x^2 u (x,t)+ \partial_x g[u](x,t) =0, \\
u(0)=u_0.
\end{array}
\right.
\end{equation}
By formally taking the Fourier transform of the last expression, we get
\begin{equation}\label{eq:sol-linear}
\hat u(\xi,t) = \hat K(\xi,t) \hat u_0(\xi),
\end{equation}
where
\begin{equation}\label{eq:K}
\hat K(\xi,t) = e^{-t[\xi^2-\xi^{4/3}(a+ib \; \text{sgn}(\xi))]},
\end{equation}
for $\xi \in \mathbb R$ and $t\ge 0$, with $a:=\frac12 \Gamma(\frac23)$ and
$b:=-\frac{\sqrt 3}{2} \Gamma(\frac23)$. For $\xi \in \mathbb R$, we define
\begin{equation}\label{eq:Phi}
\Phi(\xi) := (a+ib \; \text{sgn}(\xi)).
\end{equation}
We note that $|\Phi(\xi)|= \Gamma(\frac23)$, for all $\xi \in \mathbb R$.
\begin{Remark} The nonlocal term $\partial_x g[u]$ is anti-dissipative of
order $4/3$: its Fourier symbol has real part $-a|\xi|^{4/3}<0$, which
produces the growing factor $e^{ta|\xi|^{4/3}}$ in (\ref{eq:K}).
\end{Remark}
\begin{Remark}\label{Remark:nonmonotone}
For every $t>0$, the kernel $K(\cdot,t)$ is not a nonnegative function. Indeed, by contradiction,
if $K(\cdot,t)$ were nonnegative, one could bound
$$|\hat{K}(\xi,t)| \leq \frac{1}{\sqrt{2\pi}}\int |K(x,t)| dx
= \frac{1}{\sqrt{2\pi}}\int K(x,t) dx = \hat{K}(0,t) = 1.$$
But, on the other hand,
$|\hat{K}(\xi,t)| = e^{-t[\xi^2-a \xi^{4/3}]} >1,\; \mbox{for} \; 0<|\xi| < a^{3/2}.$
Hence, for every $t>0$, there exists $x\in \mathbb R$ such that
$K(x,t)<0$. This fact implies, in particular, that the IVP
(\ref{eq:IVP}) is non-monotone (see \cite{aai:aai} for more details).
\end{Remark}
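Remark \ref{Remark:nonmonotone} can be illustrated numerically (a sketch only; the time $t=50$, the truncation of the $\xi$-integral, and the grids are ad hoc choices): evaluating $K(\cdot,t)$ by quadrature of (\ref{eq:K}) exhibits both the amplified modes $|\hat K|>1$ and negative values of the kernel.

```python
import math

g23 = math.gamma(2.0 / 3.0)
a, b = 0.5 * g23, -0.5 * math.sqrt(3.0) * g23   # constants of (eq:K)
t = 50.0                                        # any t > 0 works in principle

# For xi > 0 write K^hat = m(xi) * exp(i * phi(xi)); since K is real,
# K(x) = sqrt(2/pi) * int_0^infty m(xi) * cos(x * xi + phi(xi)) dxi.
N, L = 3000, 1.5          # midpoint rule; |K^hat| is negligible beyond L
dxi = L / N
xis = [(i + 0.5) * dxi for i in range(N)]
mods = [math.exp(-t * (xi * xi - a * xi ** (4.0 / 3.0))) for xi in xis]
phases = [t * b * xi ** (4.0 / 3.0) for xi in xis]

def K(x):
    s = sum(m * math.cos(x * xi + p) for xi, m, p in zip(xis, mods, phases))
    return math.sqrt(2.0 / math.pi) * s * dxi

kmin = min(K(0.25 * j) for j in range(481))     # scan x in [0, 120]
print("max |K^hat| =", max(mods), "  min K =", kmin)
```

The amplified band $0<|\xi|<a^{3/2}$ shows up as $\max_\xi |\hat K| > 1$, and the scan finds $K<0$, in agreement with the remark.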
For $t\ge 0$, we define the operator $E(t)$ by
\begin{equation}\label{eq:ope}
\left \{
\begin{array}{l}
E(t) \phi (x) = \frac{1}{\sqrt{2\pi}}\big(K(\cdot,t)*\phi \big) (x),
\;\;\;\text{for} \;\; t>0 \;\; \text{and} \;\; x \in \mathbb R, \\
E(0)\phi=\phi,
\end{array}
\right.
\end{equation}
where $\phi \in C_b(\mathbb R)$ (see Lemma \ref{Lemma:semig} below). Now, we define
the following spaces
\begin{eqnarray}\label{eq:space}
Y &:=& \{g \in C_b(\mathbb R) ; g \text{ is uniformly continuous} \} \;; \\
X &:=& \{f \in C_b^1(\mathbb R) ; f' \text{ is uniformly continuous} \}.
\end{eqnarray}
One can see that $(Y, \|\cdot\|_{C_b(\mathbb R)})$, and
$(X, \|\cdot\|_{C_b^1(\mathbb R)})$ are Banach spaces and that
$X \hookrightarrow Y$. In Sub-section \ref{subsection:local}
we will show local-in-time well-posedness of the IVP (\ref{eq:IVP}), with
initial data $u_0 \in X$.
The following lemma contains a calculus result.
\begin{Lemma}\label{Lemma:calculus}
Let $h:\mathbb R \to \mathbb C$ be a function which satisfies the following conditions:\\
{\bf{i.)}} $h \in L^1(\mathbb R) \cap C_{\infty}(\mathbb R) \cap C^2(\mathbb R \setminus \{ 0 \})$; \\
{\bf{ii.)}} $h' \in L^1(\mathbb R)$, $|h'(x)| \rightarrow 0$ as $|x|\rightarrow +\infty$.
Moreover, there exist $\lim_{x \downarrow0} h'(x)=h'(0^+)$, and
$\lim_{x \uparrow0} h'(x)=h'(0^-)$; \\
{\bf{iii.)}} $h'' \in L^1(\mathbb R)$. \\
Then $\hat h \in L^1(\mathbb R) \cap C_{\infty}(\mathbb R)$, and
\begin{equation}\label{eq:calculus}
\|\hat h \|_{L^1} \le \sqrt{\frac{2}{\pi}} \Big[
\|h\|_{L^1} + |h'(0^+)-h'(0^-)| + \|h''\|_{L^1} \Big].
\end{equation}
\end{Lemma}
\begin{proof}
Since $h\in L^1(\mathbb R)$, it follows from the Riemann-Lebesgue lemma that
$\hat h \in C_{\infty}(\mathbb R)$. After using integration by parts twice, we see that
\begin{equation*}\label{eq:calculus1}
\hat h(\xi) = \frac{1}{\sqrt{2\pi}} \Big[
\int_{-\infty}^{+\infty} h''(x) \frac{e^{-i\xi x}}{(i \xi)^2} dx
+\frac{h'(0^+)-h'(0^-)}{(i\xi)^2} \Big], \;\;\; \text{for } \;
\xi \not = 0.
\end{equation*}
Expression (\ref{eq:calculus}) follows from the last equation and from the
fact that $\|\hat h\|_{L^{\infty}} \le \frac{1}{\sqrt{2\pi}} \|h\|_{L^1}$, by
splitting the integral of $|\hat h|$ over the regions $\{|\xi| \le 1\}$ and
$\{|\xi| > 1\}$.
\end{proof}
\begin{Remark}\label{Remark:calculus}
It is well-known that $W^{1,1}(\mathbb R) \subset C_{\infty}(\mathbb R) \cap AC(\mathbb R)$,
where $AC(\mathbb R)$ denotes the space of all complex-valued functions, which are
absolutely continuous on $\mathbb R$. Therefore, it follows
from Lemma \ref{Lemma:calculus} above that if
$f \in W^{2,1}(\mathbb R)$, then $\hat f \in L^1(\mathbb R) \cap C_\infty(\mathbb R)$ and
\begin{equation}\label{eq:calculus2}
\|\hat f \|_{L^1} \le \sqrt{\frac{2}{\pi}} \Big[
\|f\|_{L^1} + \|f''\|_{L^1} \Big].
\end{equation}
\end{Remark}
\begin{Remark}\label{Remark:Kernel}
Suppose now that $t \in (0,1)$. Since
\begin{eqnarray*}
K(x,t) &=& \frac{1}{\sqrt{2\pi}} \int e^{ix\xi}
e^{-t[\xi^2-\xi^{4/3}\Phi(\xi)]} d\xi \\
&=& \frac{t^{-1/2}}{\sqrt{2\pi}} \int e^{i (t^{-1/2}x) \xi} \;
e^{-[\xi^2-\xi^{4/3}\Phi(\xi)]} \;
e^{-(1-t^{1/3})\xi^{4/3}\Phi(\xi)} d\xi,
\end{eqnarray*}
it follows that
\begin{equation}\label{eq:Kernel}
K(x,t) = t^{-1/2} \big(K(\cdot,1) * G(\cdot,1-t^{1/3})\big)(t^{-1/2}x), \;\;
\text{for} \;\; x \in \mathbb R,
\end{equation}
where
\begin{equation}\label{eq:Kernel-G}
G(\cdot,1-t^{1/3}) = \frac{1}{\sqrt{2\pi}} \mathcal{F}^{-1}
(e^{-(1-t^{1/3})\xi^{4/3}(a+ib \; \text{sgn}(\xi))}) (\cdot).
\end{equation}
\end{Remark}
The next three lemmas are elementary calculus results which will be used
in the sequel.
\begin{Lemma}\label{Lemma:calculus1}
Suppose that $\alpha>-1$, and $\beta>0$. Then
\begin{equation*}
I(\alpha,\beta) :=\int |\xi|^\alpha e^{-\beta |\xi|^{4/3}} d\xi
= C(\alpha) \beta^{-\frac34(\alpha+1)}.
\end{equation*}
\end{Lemma}
\begin{proof}
The assertion of the lemma follows from the fact that
$$I(\alpha,\beta) =\beta^{-\frac34(\alpha+1)}
\int |\tau|^\alpha e^{-|\tau|^{4/3}} d\tau. $$
\end{proof}
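Numerically (illustration only; the exponent $\alpha=1/3$, the values of $\beta$ and the truncation are arbitrary choices), the scaling can be checked by observing that $I(\alpha,\beta)\,\beta^{\frac34(\alpha+1)}$ does not depend on $\beta$; for $\alpha=1/3$ the substitution $s=|\xi|^{4/3}$ even gives the explicit value $C(1/3)=\frac32$:

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def I(alpha, beta):
    """I(alpha, beta) over the whole line; xi = u**3 smooths the integrand."""
    f = lambda u: 3.0 * u ** (3.0 * alpha + 2.0) * math.exp(-beta * u ** 4)
    return 2.0 * simpson(f, 0.0, 6.0)

alpha = 1.0 / 3.0
vals = [I(alpha, beta) * beta ** (0.75 * (alpha + 1.0)) for beta in (0.5, 1.0, 2.0)]
print(vals)   # each value equals C(1/3) = 3/2
```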
\begin{Lemma}\label{Lemma:calculus2}
Suppose that $\alpha>-1$, $\beta>0$, and $t>0$. Then
\begin{equation*}
I(\alpha,\beta,t) :=\int |\xi|^\alpha e^{-t[\xi^2-\beta |\xi|^{4/3}]} d\xi
\le C(\alpha,\beta) \Big[ e^{\frac{4}{27}\beta^3t} +t^{-\frac{\alpha+1}{2}} \Big].
\end{equation*}
\end{Lemma}
\begin{proof}
It is elementary to check that $\xi^2-\beta\xi^{4/3} \ge -\frac{4}{27}\beta^3$ for all
$\xi \in \mathbb R$, and $\xi^2-\beta\xi^{4/3} \ge \xi^2/2$ for $\xi\ge(2\beta)^{3/2}$. Then
\begin{eqnarray*}
I(\alpha,\beta,t) &\le& 2 \Big[
\int_0^{(2\beta)^{3/2}} \xi^\alpha e^{\frac{4}{27}\beta^3t} d\xi
+\int_{(2\beta)^{3/2}}^{+\infty} \xi^\alpha e^{-\frac{t}{2}\xi^2} d\xi\Big] \\
&\le& C(\alpha,\beta) \Big[ e^{\frac{4}{27}\beta^3t} +
\int_0^{+\infty} \Big(\frac{2}{t}\Big)^{\frac{\alpha}{2}} u^\alpha
e^{-u^2} \sqrt{\frac{2}{t}} du\Big],
\end{eqnarray*}
where in the last inequality we have used the fact that $\alpha >-1$. The
result now follows.
\end{proof}
\begin{Lemma}\label{Lemma:calculus3}
Suppose that $g \in W^{1,1}(\mathbb R)$, and $l \in L^\infty (\mathbb R)$. If
$f:=g*l$, then $f \in C^1(\mathbb R) \cap W^{1,\infty}(\mathbb R)$, and
$f'(x)= (g'*l)(x)$ for all $x \in \mathbb R$.
\end{Lemma}
\begin{proof}
Since $g \in L^1(\mathbb R)$ and $l \in L^\infty(\mathbb R)$, it follows from Young's inequality
that $f \in L^\infty (\mathbb R)$. Moreover, since
$|f(x+h)-f(x)|\le \|g(\cdot+h)- g(\cdot) \|_{L^1} \|l\|_{L^\infty}$ for
all $x\in \mathbb R$, and $ \|g(\cdot+h)- g(\cdot) \|_{L^1} \rightarrow 0$ as
$h$ tends to zero, it follows that $f \in C(\mathbb R)$. Let $x\in \mathbb R$. Since
$W^{1,1} (\mathbb R) \subset AC(\mathbb R)$, we see that
\begin{eqnarray*}
&& \Big|\frac{f(x+h)-f(x)}{h} - \int g'(x-y) l(y) dy \Big| = \\
&& \Big|\int \int_0^1 \big(g'(x-y+th) - g'(x-y)\big) l(y) dtdy \Big| \\
&& \le \|l\|_{L^\infty} \int_0^1 \|g'(\cdot+th) -g'(\cdot) \|_{L^1} dt \rightarrow 0
\;\;\; \text{as} \;\; h\rightarrow 0,
\end{eqnarray*}
where the last expression is a consequence of the dominated convergence theorem.
\end{proof}
The following five lemmas provide more explicit estimates than the corresponding
results mentioned in \cite{aai:aai}. The next lemma gives an upper bound, which
goes to infinity as $t$ tends to $1$, for
$\| G(\cdot, 1-t^{1/3}) \|_{L^1}$ when $t \in [0,1)$.
\begin{Lemma}\label{Lemma:Kernel-G}
Let $t_0 \in(0,1)$. Then, for all $t\in[0,t_0]$, the function
$G(\cdot,1-t^{1/3})$ given by (\ref{eq:Kernel-G}) belongs to
$ L^1(\mathbb R) \cap C_{\infty}(\mathbb R)$. Moreover,
\begin{equation}\label{eq:Kernel-G-1}
\| G(\cdot, 1-t^{1/3}) \|_{L^1}
\le C \big[ (1-t^{1/3})^{3/4} + (1-t^{1/3})^{-3/4}\big]
\le C \cdot (1-t_0^{1/3})^{-3/4},
\end{equation}
for all $t\in[0,t_0]$, where $C$ is a positive constant independent of $t$.
\end{Lemma}
\begin{proof}
Let $t\in[0,1)$. We define
$g(\xi,t):=e^{-(1-t^{1/3})\xi^{4/3}\Phi(\xi)}$ for $\xi \in \mathbb R$. It
follows that $g(\cdot,t)$ is continuous. Furthermore,
\begin{equation}\label{eq:Kernel-G-1a}
\partial_{\xi} g(\xi,t) = -\frac{4}{3} (1-t^{1/3}) \xi^{1/3} \Phi(\xi)
e^{-(1-t^{1/3})\xi^{4/3}\Phi(\xi)}, \;\; \text{for} \;
\xi \not= 0.
\end{equation}
Then $|\partial_\xi g (\xi,t)| \rightarrow 0$ as $|\xi| \rightarrow + \infty$,
and $\partial_{\xi}g(0^+,t)=0=\partial_{\xi}g(0^-,t)$. Moreover,
\begin{eqnarray*}
\partial_{\xi}^2 g(\xi,t) &=& \Big[-\frac{4}{9} (1-t^{1/3}) \xi^{-2/3} \Phi(\xi)
+\Big(\frac43 (1-t^{1/3}) \xi^{1/3} \Phi(\xi) \Big)^2 \Big] \\
&& \times
e^{-(1-t^{1/3})\xi^{4/3}\Phi(\xi)}, \;\;\;\; \text{for} \;\;
\xi \not= 0.
\end{eqnarray*}
We see that $g(\cdot,t) \in C_{\infty}(\mathbb R) \cap C^2(\mathbb R \setminus \{ 0 \})$.
In addition, $\|g(\cdot,t)\|_{L^1} = C \cdot (1-t^{1/3})^{-3/4}$,
and $\|\partial_{\xi} g(\cdot,t) \|_{L^1} =4$. Furthermore,
\begin{eqnarray*}
\|\partial_{\xi}^2 g(\cdot,t)\|_{L^1} &\le&
C \cdot (1-t^{1/3}) \int |\xi|^{-2/3} e^{-(1-t^{1/3}) a \xi^{4/3}} d\xi \\
&& + C \cdot (1-t^{1/3})^2 \int |\xi|^{2/3} e^{-(1-t^{1/3}) a \xi^{4/3}} d\xi \\
&\le& C \cdot (1-t^{1/3})^{3/4},
\end{eqnarray*}
where the last inequality is a consequence of Lemma \ref{Lemma:calculus1} above. The
result now follows from Lemma \ref{Lemma:calculus}.
\end{proof}
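The exact value $\|\partial_\xi g(\cdot,t)\|_{L^1}=4$, independent of $t$, can be confirmed numerically from (\ref{eq:Kernel-G-1a}) (illustration only; the sample times and the truncation are ad hoc choices):

```python
import math

g23 = math.gamma(2.0 / 3.0)
a = 0.5 * g23

def simpson(f, lo, hi, n=8000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

def dgd_l1(t):
    """||d_xi g(., t)||_{L^1}; |Phi| = Gamma(2/3), and xi = u**3
    smooths the |xi|^{1/3} factor of (eq:Kernel-G-1a)."""
    c = 1.0 - t ** (1.0 / 3.0)
    f = lambda u: 4.0 * c * g23 * u ** 3 * math.exp(-c * a * u ** 4)
    return 2.0 * simpson(f, 0.0, 12.0)

norms = [dgd_l1(t) for t in (0.0, 0.3, 0.9)]
print(norms)   # each value equals 4
```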
Lemmas \ref{Lemma:Kernel-K} and \ref{Lemma:Kernel-Kd} below provide estimates
for $\| K(\cdot, t) \|_{L^1}$ and $\|\partial_x K(\cdot, t) \|_{L^1}$, for any $t>0$.
\begin{Lemma}\label{Lemma:Kernel-K}
Suppose that $t>0$. Then the function
$K(\cdot,t) \in L^1(\mathbb R) \cap C_{\infty}(\mathbb R)$, and
\begin{equation}\label{eq:Kernel-K-1}
\| K(\cdot, t) \|_{L^1}
\le C \cdot \big( 1 + t^2 e^{\frac{4}{27}a^3t} \big),
\end{equation}
where $C$ is a positive constant independent of $t$.
\end{Lemma}
\begin{proof}
Let $t>0$. It follows from (\ref{eq:K}) that
\begin{equation*}
\partial_\xi \hat K(\xi,t) = -t\Big[2\xi-\frac43 \xi^{1/3}
\Phi(\xi) \Big] \hat K(\xi,t), \;\;\;\; \text{for} \; \xi \not= 0,
\end{equation*}
and
\begin{equation*}
\partial_\xi^2 \hat K(\xi,t) = \Big\{ -t\Big[ 2-\frac49 \xi^{-2/3} \Phi(\xi)
\Big] +t^2 \Big[ 2\xi-\frac43 \xi^{1/3}
\Phi(\xi) \Big]^2 \Big\} \hat K(\xi,t), \;\; \text{for} \;
\xi \not= 0.
\end{equation*}
Then $\hat K(\cdot,t) \in C_{\infty}(\mathbb R) \cap C^2(\mathbb R \setminus \{ 0 \})$. Moreover,
$|\partial_\xi \hat K (\xi,t)| \rightarrow 0$ as $|\xi| \rightarrow + \infty$, and
$\partial_\xi \hat K(0^+,t)=0=\partial_\xi \hat K(0^-,t)$. Furthermore,
\begin{equation*}
\|\hat K (\cdot,t)\|_{L^1} = 2\int_0^{+\infty} e^{-t[\xi^2-a\xi^{4/3}]} d\xi
\le C\Big[ e^{\frac{4}{27}a^3t} +\frac{1}{\sqrt t} \Big],
\end{equation*}
where the last inequality is a consequence of Lemma \ref{Lemma:calculus2}.
Again using Lemma \ref{Lemma:calculus2}, we see that
\begin{eqnarray*}
\|\partial_\xi \hat K (\cdot,t)\|_{L^1} &\le& C\big[ 1+t^{1/3}
+t \; e^{\frac{4}{27}a^3t}\big], \;\;\; \text{and} \\
\|\partial_\xi^2 \hat K (\cdot,t)\|_{L^1} &\le& C\big[ \sqrt t + t^{5/6}
+ t^{7/6} + (t+t^2) e^{\frac{4}{27}a^3t}\big]
\le C\big[\sqrt t + t^2 e^{\frac{4}{27}a^3t} \big].
\end{eqnarray*}
Lemma \ref{Lemma:calculus}, applied to $h(\cdot)= \hat K(\cdot,t)$, implies that
\begin{equation}\label{eq:Kernel-K-1a}
\| K(\cdot, t) \|_{L^1}
\le C \Big[ \frac{1}{\sqrt t} + t^2 e^{\frac{4}{27}a^3t} \Big], \;\;
\text{ for all } t>0.
\end{equation}
Suppose now that $t\in(0,1)$. Then
\begin{eqnarray}\label{eq:Kernel-K-1b}
\int |K(x,t)|dx
&=& \frac{1}{\sqrt t}
\int \big| K(\cdot,1) * G(\cdot,1-t^{1/3})\big|(x/\sqrt t) dx \nonumber \\
&=& \int (1+y^2)^{1/2} \frac{\big| K(\cdot,1) * G(\cdot,1-t^{1/3})
\big|(y)}{(1+y^2)^{1/2}} dy
\nonumber \\
&\le& C \; \big{\|} \hat K(\cdot,1) \hat G(\cdot, 1-t^{1/3})\big{\|}_{H^1},
\end{eqnarray}
where the first equality above comes from (\ref{eq:Kernel}). From
the fact that $e^{-(1-h^{1/3}) a \xi^{4/3} } \le 1$, for all $h\in[0,1)$,
$\xi \in \mathbb R$, and equation (\ref{eq:Kernel-G-1a}) we have that
$|\hat G(\xi,1-h^{1/3})| \le C$, and
$|\partial_\xi \hat G(\xi,1-h^{1/3})| \le C |\xi|^{1/3}$ for all
$\xi \in \mathbb R$. Now, from Lemma \ref{Lemma:calculus2}, we obtain
\begin{eqnarray}\label{eq:unif1}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\|\hat K(\cdot,1) \hat G (\cdot,1-h^{1/3})\|_{H^1}
\le \|\hat K(\cdot,1) \hat G (\cdot,1-h^{1/3})\|_{L^2} \nonumber \\
&&+ \|\partial_\xi \hat K(\cdot,1) \hat G (\cdot,1-h^{1/3})\|_{L^2}
+ \|\hat K(\cdot,1) \partial_\xi \hat G (\cdot,1-h^{1/3})\|_{L^2} \le C,
\end{eqnarray}
for all $h\in [0,1)$. The assertion of the lemma now follows from
(\ref{eq:Kernel-K-1a})-(\ref{eq:unif1}).
\end{proof}
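As a purely numerical sanity check (not part of the argument), one can test the Fourier-side estimate $2\int_0^{\infty}e^{-t[\xi^2-a\xi^{4/3}]}\,d\xi \le C\big[e^{\frac{4}{27}a^3t}+t^{-1/2}\big]$ for the illustrative value $a=1$; the constant $C=4$ below is ad hoc and unrelated to the constants in the lemmas.

```python
import numpy as np
from scipy.integrate import quad

# Check 2*int_0^inf exp(-t*(xi^2 - a*xi^(4/3))) dxi <= C*(exp(4*a^3*t/27) + t^(-1/2))
# for a = 1 (illustrative) over a range of times. The integrand is negligible
# beyond xi = 60 for all sampled t, so a finite upper limit suffices.
a = 1.0
ratios = []
for t in [0.01, 0.1, 1.0, 10.0, 50.0]:
    integral, _ = quad(lambda x: np.exp(-t * (x**2 - a * x**(4/3))), 0, 60, limit=200)
    bound = np.exp(4 * a**3 * t / 27) + t**(-0.5)
    ratios.append(2 * integral / bound)
worst = max(ratios)  # largest observed ratio; should stay below the ad-hoc C = 4
```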
The following result gives an upper bound for $\|\partial_x K(\cdot,t) \|_{L^1}$
when $t \in (0,1)$.
\begin{Lemma}\label{Lemma:Kernel-Kd-0}
Let $t\in (0,1)$. Then, $K(\cdot,t) \in L^1(\mathbb R) \cap C^1(\mathbb R) \cap W^{1,\infty}(\mathbb R)$.
In addition, $\partial_x K(\cdot,t)(x) = t^{-1} \big(\partial_x K(\cdot,1) *
G(\cdot, 1-t^{1/3})\big)(t^{-1/2}x)$ for all $x \in \mathbb R$,
$\partial_x K(\cdot,t) \in L^1(\mathbb R) \cap C_{\infty}(\mathbb R)$, and
\begin{equation}\label{eq:Kernel-Kd-0}
\|\partial_x K(\cdot,t) \|_{L^1} \le \frac{C}{\sqrt t}
\big[(1-t^{1/3})^{3/4}+ (1-t^{1/3})^{-3/4} \big],
\end{equation}
where $C$ is a positive constant independent of $t$.
\end{Lemma}
\begin{proof}
Let $f$ denote the function given by $f(\xi):= \xi \hat K(\xi,1)$
for all $\xi \in \mathbb R$. Then $f \in C_\infty(\mathbb R)\cap C^2(\mathbb R)$. Furthermore,
\begin{equation*}
f'(\xi)= \Big[1-\xi\Big(2\xi-\frac43 \xi^{1/3} \Phi(\xi)
\Big) \Big] \hat K (\xi,1),
\;\;\; \text{for} \;\; \xi \not = 0,
\end{equation*}
and
\begin{equation*}
f''(\xi)= \Big\{ \Big[-4\xi+\frac{16}{9} \xi^{1/3} \Phi(\xi)\Big]
-\Big[1-\xi\Big(2\xi-\frac43 \xi^{1/3} \Phi(\xi)\Big) \Big]
\Big[2\xi-\frac43\xi^{1/3} \Phi(\xi) \Big] \Big\} \hat K (\xi,1),
\end{equation*}
for $\xi \not = 0$. By using Lemma \ref{Lemma:calculus2}, it follows that
$\|f \|_{L^1} \le C$, $\|f' \|_{L^1} \le C$, and $\|f'' \|_{L^1} \le C$.
Moreover, $|f' (\xi)| \rightarrow 0$ as $|\xi| \rightarrow + \infty$, and
$f'(0^+)=1=f'(0^-)$. Thus, Lemma \ref{Lemma:calculus} implies that
$\partial_x K(\cdot,1) \in C_{\infty}(\mathbb R) \cap L^1(\mathbb R)$. Therefore,
using Lemma \ref{Lemma:Kernel-K}, we have that $K(\cdot,1)\in W^{1,1}(\mathbb R)$.
Let $t\in (0,1)$. Lemma \ref{Lemma:calculus1} implies that
\begin{equation}\label{eq:Kernel-Kd-0b}
\| G(\cdot,1-t^{1/3}) \|_{L^\infty} \le C \cdot (1-t^{1/3})^{-{3/4}}.
\end{equation}
Thus, applying Lemma \ref{Lemma:calculus3} to
equation (\ref{eq:Kernel}), taking into account (\ref{eq:Kernel-Kd-0b}),
one sees that $K(\cdot,t) \in C^1(\mathbb R)\cap W^{1,\infty}(\mathbb R)$, and
$\partial_x K(\cdot,t)(x)=t^{-1}\big( \partial_xK(\cdot,1)
*G(\cdot,1-t^{1/3})\big)(t^{-1/2}x)$ for all $x \in \mathbb R$. Furthermore,
\begin{eqnarray*}
\| \partial_x K(\cdot,t) \|_{L^1}
&=& t^{-1/2} \int |\partial_x K(\cdot,1) * G(\cdot,1-t^{1/3}) |(y)dy \\
&\le& Ct^{-1/2}\big[(1-t^{1/3})^{3/4} + (1-t^{1/3})^{-3/4} \big],
\end{eqnarray*}
where in the last step we have used Young's inequality and
Lemma \ref{Lemma:Kernel-G}.
\end{proof}
The next lemma will be useful for studying $\|\partial_x K(\cdot, t)\|_{L^1}$ for
$t \ge t_0$, where $t_0>0$.
\begin{Lemma}\label{Lemma:Kernel-Kd-00}
Suppose that $t>0$. Then, $\partial_x K(\cdot,t) \in L^1(\mathbb R) \cap C_{\infty}(\mathbb R)$,
and
\begin{equation}\label{eq:Kernel-Kd-00}
\|\partial_x K(\cdot, t)\|_{L^1} \le C\Big[ \frac{1}{t}
+t^2 e^{\frac{4}{27}a^3t}\Big],
\end{equation}
where $C$ is a positive constant independent of $t$.
\end{Lemma}
\begin{proof}
Let $t>0$. For $\xi \in \mathbb R$, we define $h(\xi,t):=\xi \hat K(\xi,t)$. Then
\begin{equation*}
\partial_\xi h(\xi,t) = \Big[ 1-t\Big(2\xi^2-\frac43 \xi^{\frac43} \Phi(\xi)\Big)
\Big] \hat K(\xi,t),
\;\;\;\; \text{for} \;\; \xi \not = 0,
\end{equation*}
and
\begin{equation*}
\partial_\xi^2 h(\xi,t) = -t\Big\{
\Big[4\xi -\frac{16}{9}\xi^{\frac13} \Phi(\xi) \Big]
+\Big[ 1-t\Big(2\xi^2-\frac43 \xi^{\frac43} \Phi(\xi)\Big) \Big]
\Big[2\xi -\frac43 \xi^{\frac13}\Phi (\xi) \Big]
\Big\}\hat K(\xi,t),
\end{equation*}
for $\xi \not = 0 $. We see that
$h(\cdot,t)\in C_{\infty}(\mathbb R) \cap C^2(\mathbb R)$,
$|\partial_\xi h(\xi,t)|\rightarrow 0$ as $|\xi| \rightarrow + \infty$, and
$\partial_\xi h (0^+,t)=1=\partial_\xi h(0^-,t)$. Moreover, by Lemma
\ref{Lemma:calculus2}, we have that
\begin{eqnarray*}
\|h(\cdot,t)\|_{L^1} &\le& C\Big[\frac{1}{t} + e^{\frac{4}{27}a^3t} \Big], \\
\|\partial_\xi h(\cdot,t)\|_{L^1} &\le&
C\Big[\frac{1}{t^{1/2}} + \frac{1}{t^{1/6}} +(1+t)e^{\frac{4}{27}a^3t} \Big],
\;\;\;\text{and} \\
\|\partial_\xi^2 h(\cdot,t)\|_{L^1} &\le&
C\Big[1+t^{1/3}+t^{2/3} +(t+t^2)e^{\frac{4}{27}a^3t} \Big].
\end{eqnarray*}
The proof of the lemma is now completed by applying Lemma \ref{Lemma:calculus}.
\end{proof}
The next result provides a unified upper bound for
$\|\partial_x K(\cdot, t) \|_{L^1}$ for any $t>0$, which takes the best
of the corresponding bounds obtained in Lemmas \ref{Lemma:Kernel-Kd-0}
and \ref{Lemma:Kernel-Kd-00}.
\begin{Lemma}\label{Lemma:Kernel-Kd}
Suppose that $t>0$. Then the function
$\partial_x K(\cdot,t) \in L^1(\mathbb R) \cap C_{\infty}(\mathbb R)$.
Moreover,
\begin{equation}\label{eq:Kernel-Kd-1}
\|\partial_x K(\cdot, t) \|_{L^1}
\le C \Big[\frac{1}{\sqrt t}+t^2 e^{\frac{4}{27}a^3t}\Big],
\end{equation}
where $C$ is a positive constant independent of $t$.
\end{Lemma}
The following lemma will be used in the proof of Lemma \ref{Lemma:semig}
below.
\begin{Lemma}\label{Lemma:unif}
It holds that
\begin{equation}\label{eq:unif}
\lim_{A \rightarrow +\infty } \int_{|y|>A}
\big|K(\cdot,1) * G(\cdot, 1-h^{1/3})\big|(y) dy =0,
\;\;\; \text{ uniformly in } h \in [0,1).
\end{equation}
\end{Lemma}
\begin{proof}
We recall that
\begin{equation*}
\hat K(\xi,1)=e^{-[\xi^2-\xi^{4/3} \Phi(\xi)]},
\;\;\text{ and } \;\;
\hat G (\xi,1-h^{1/3}) = \frac{1}{\sqrt{2\pi}}
e^{-(1-h^{1/3}) \xi^{4/3} \Phi(\xi)}.
\end{equation*}
From (\ref{eq:unif1}) we see that
\begin{eqnarray*}
&&\int_{|y|>A} \big|K(\cdot,1) * G(\cdot, 1-h^{1/3})\big|(y) dy \\
&&\le \Big( \int_{|y|>A} \frac{dy}{1+y^2} \Big)^{\frac12}
\Big(\int (1+y^2) \big|K(\cdot,1) * G(\cdot, 1-h^{1/3})(y)\big|^2
dy \Big)^{\frac12} \\
&&\le \frac{C}{\sqrt A} \; \| \hat K(\cdot,1)
\hat G(\cdot, 1-h^{1/3}) \|_{H^1} \le \frac{C'}{\sqrt A}.
\end{eqnarray*}
This concludes the proof.
\end{proof}
The next lemma shows that $(E(t))_{t \ge0}$ is a $C^0$-semigroup on the
Banach space $Y$ and also on the Banach space $X$.
\begin{Lemma}\label{Lemma:semig}
{\bf{i.)}} If $u_0 \in C_b(\mathbb R)$, then $u(t):=E(t)u_0 \in C_b(\mathbb R)$ for every
$t\ge0$. In addition,
\begin{equation}\label{eq:semig}
\|E(t)\|_{\mathfrak{L} (C_b(\mathbb R))} \le C \cdot \big(1 +t^2
e^{\frac{4}{27}a^3t} \big), \;\;\; \text{ for all } t>0.
\end{equation}
Moreover, $(E(t))_{t \ge0}$ is a $C^0$-semigroup on $Y$.
\noindent
{\bf{ii.)}} $(E(t))_{t \ge0}$ is a $C^0$-semigroup on $X$. Furthermore,
(\ref{eq:semig}) remains true if the space $C_b(\mathbb R)$ is replaced by $C_b^1(\mathbb R)$.
\end{Lemma}
\begin{proof}
{\bf{i.)}} Let $u_0 \in C_b(\mathbb R)$, and $t>0$. Since
$u(x,t)=E(t)u_0(x)=\frac{1}{\sqrt{2\pi}} \int K(x-y,t)u_0(y)dy$, it
follows that
\begin{equation*}
\|u(t)\|_{L^\infty} \le \frac{1}{\sqrt{2\pi}} \|u_0\|_{L^\infty}
\|K(\cdot,t)\|_{L^1}
\le C \cdot \big(1+ t^2 e^{\frac{4}{27}a^3t}
\big) \|u_0\|_{L^\infty},
\end{equation*}
where the last inequality is a consequence of Lemma
\ref{Lemma:Kernel-K}. Moreover,
\begin{equation*}
|u(x+h,t)-u(x,t)| \le \frac{\|u_0 \|_{L^\infty}}{\sqrt{2\pi}}
\|K(\cdot+h,t) -K(\cdot,t)\|_{L^1} \;\; \rightarrow 0
\text{ as } h \rightarrow 0.
\end{equation*}
Thus, we have proved that if $u_0 \in C_b(\mathbb R)$, then $u(t) \in C_b(\mathbb R)$
for all $t\ge0$. In addition, one can see that
$E(t+s) \phi = E(t) E(s) \phi$, for all $t,s \ge 0$, and $\phi \in C_b(\mathbb R)$.
Suppose now that $t=0$, $u_0 \in Y \setminus \{ 0\}$, and $h \in (0,1)$.
Since $\hat K(0,h) = \frac{1}{\sqrt{2\pi}} \int K(z,h)dz=1$,
and using (\ref{eq:Kernel}) we have that
\begin{eqnarray}\label{eq:semig1}
&& |u(x,h)- u_0(x)|= \frac{1}{\sqrt{2\pi}} \Big| \int K(z,h) \big(
u_0(x-z)-u_0(x)\big) dz \Big| \nonumber \\
&& = \frac{1}{\sqrt{2\pi}} \Big|\int h^{-1/2} \big(
K(\cdot,1) * G(\cdot, 1-h^{1/3}) \big)(h^{-1/2}z)
\big(u_0(x-z)-u_0(x)\big) dz \Big| \nonumber \\
&& = \frac{1}{\sqrt{2\pi}} \Big|\int \big(
K(\cdot,1) * G(\cdot, 1-h^{1/3}) \big)(y)
\big(u_0(x-y \sqrt h)-u_0(x) \big) dy \Big|.
\end{eqnarray}
Let $\epsilon >0$. By Lemma \ref{Lemma:unif}, there exists $A>0$,
such that for every $h \in [0,1)$,
\begin{equation}\label{eq:semig2}
\int_{|y|>A} \big|K(\cdot,1) * G(\cdot, 1-h^{1/3})\big|(y) dy
< \frac{\sqrt{2\pi} \epsilon}{4\|u_0\|_{L^\infty}}.
\end{equation}
Since $u_0$ is uniformly continuous, there exists $\delta>0$ such
that for all $z,w \in \mathbb R$,
\begin{equation}\label{eq:semig3}
\text{if } |z-w|<\delta, \text{ then }
|u_0(z)-u_0(w)|<\sqrt{2\pi} \epsilon/(2C \|K(\cdot,1)\|_{L^1}),
\end{equation}
where $C$ is a positive constant such that
$\|G(\cdot, 1-h^{1/3})\|_{L^1} \le C$ for all $h \in [0,1/2)$
(see Lemma \ref{Lemma:Kernel-G}). Let
$h\in(0, \min\{\frac12,\frac{\delta^2}{A^2} \})$. Using
(\ref{eq:semig1})-(\ref{eq:semig3}), we get
\begin{eqnarray*}
&&|u(x,h)-u_0(x)| \le \frac{2\|u_0\|_{L^\infty}}{\sqrt{2\pi}}
\int_{|y|>A} \big|K(\cdot,1) * G(\cdot, 1-h^{1/3}) (y) \big| dy \\
&&+\frac{1}{\sqrt{2\pi}} \int_{|y| \le A} \big|
K(\cdot,1) * G(\cdot, 1-h^{1/3})(y)\big| \;\;
\big|u_0(x-y \sqrt h)-u_0(x) \big| dy \\
&&\le \frac{\epsilon}{2} +\frac{\epsilon}{2C\|K(\cdot,1) \|_{L^1}}
\| K(\cdot,1) * G(\cdot, 1-h^{1/3}) \|_{L^1} \le \epsilon,
\end{eqnarray*}
for all $x \in \mathbb R$, where the last inequality is a consequence
of Young's inequality. Therefore,
\begin{equation}\label{eq:semig4}
\lim_{h \rightarrow 0} \|u(h) - u_0\|_{L^\infty} =0.
\end{equation}
We notice that if $u_0\in Y$, then $u(t)=E(t)u_0 \in Y$ for all $t\ge0$.
In fact, assume $t>0$ and let $\epsilon>0$ be given. Since $u_0$ is uniformly
continuous, there exists $\delta>0$ such that if $|h| <\delta$, then
$|u_0(x+h)-u_0(x)| < \epsilon \sqrt{2 \pi}/ \|K(\cdot,t)\|_{L^1}$, for
any $x\in \mathbb R$. Suppose $|h|<\delta$, then
\begin{equation*}
|u(x+h,t)-u(x,t)| \le \frac{1}{\sqrt{2\pi}}
\int \big| K(y,t) \big| \big|u_0(x-y+h)-u_0(x-y)\big|dy
\le \epsilon, \;\; \text{ for all } x \in \mathbb R.
\end{equation*}
Hence, $u(t)$ is uniformly continuous, for all $t>0$.
Assume now that $t>0$, and $u_0 \in Y$. It follows from (\ref{eq:semig4}) and
the semigroup property that
\begin{equation*}
\lim_{h \downarrow 0} \|E(t+h) u_0 -E(t) u_0 \|_{L^\infty} =0.
\end{equation*}
On the other hand, for $h>0$, one can see that
\begin{eqnarray*}
|u(x,t-h)-u(x,t)| &=& \frac{1}{\sqrt{2\pi}}
\Big| \int K(x-y,t-h) \big(u_0(y) -u(y,h) \big) dy \Big| \\
&\le& \frac{1}{\sqrt{2\pi}} \|K(\cdot,t-h)\|_{L^1}
\|u(h)-u_0\|_{L^\infty} \\
&\le& C\cdot \big(1 +(t-h)^2 e^{\frac{4}{27}a^3(t-h)} \big)
\|u(h)-u_0\|_{L^\infty},
\end{eqnarray*}
where the last inequality is a consequence of Lemma \ref{Lemma:Kernel-K}.
Equation (\ref{eq:semig4}) and the last inequality imply that
\begin{equation*}
\lim_{h \downarrow 0} \|E(t-h) u_0 -E(t) u_0 \|_{L^\infty} =0.
\end{equation*}
{\bf{ii.)}} Let $u_0 \in X$. By item {\bf{i.)}} above, we already know
that $u\in C([0,+\infty);Y)$, where $u(t)=E(t)u_0$ for all $t\ge0$. We will
now prove that $\partial_xu(t) \in C_b(\mathbb R)$, $\partial_xu(t)$ is uniformly continuous,
and $\lim_{h \rightarrow 0} \|\partial_x u(t+h) -\partial_x u(t)\|_{L^\infty} =0$,
for all $t\ge 0$. Suppose first that $t>0$. Then
\begin{eqnarray*}
&&\Big| \frac{u(x+h,t)-u(x,t)}{h} -\frac{1}{\sqrt{2\pi}}
K(\cdot,t)*u_0'(x)\Big| \\
&&=\frac{1}{\sqrt{2\pi}} \Big|\Big(K(\cdot,t)*
\frac{u_0(\cdot+h)-u_0(\cdot)}{h} \Big)(x) -K(\cdot,t) *u_0'(x)\Big| \\
&& \le \frac{1}{\sqrt{2\pi}} \int \big|K(x-y,t)\big|
\Big|\frac{u_0(y+h)-u_0(y)}{h} -u_0'(y) \Big| dy.
\end{eqnarray*}
The dominated convergence theorem and Lemma \ref{Lemma:Kernel-K} imply
that the last expression tends to zero as $h$ goes to zero. Hence the derivative exists, and
\begin{equation}\label{eq:semig5}
\partial_x u(x,t) = \frac{1}{\sqrt{2\pi}} K(\cdot,t)*u_0'(x),
\;\; \text{for all } x \in \mathbb R, \text{ and } t>0.
\end{equation}
It is easy to see that the last expression is also valid if we only require
that $u_0 \in C_b^1(\mathbb R)$. It follows from (\ref{eq:semig5}) and
Lemma \ref{Lemma:Kernel-K} that
\begin{equation*}
\|\partial_x u(\cdot,t) \|_{L^\infty} \le C \cdot
\big(1+t^2 e^{\frac{4}{27}a^3t} \big) \|u_0'\|_{L^\infty},
\;\; \text{for all } t>0.
\end{equation*}
Using the fact that $u_0'$ is uniformly continuous, (\ref{eq:semig5}), and
Lemma \ref{Lemma:Kernel-K}, it follows that
$\partial_x u(\cdot,t)$ is uniformly continuous, for all $t>0$.
Finally, since $(E(t))_{t\ge0}$ is a $C^0$-semigroup on the space $Y$ and
using (\ref{eq:semig5}), we see that
$\lim_{h \rightarrow 0} \|\partial_x u(t+h) -\partial_x u(t)\|_{L^\infty} =0$,
for all $t\ge 0$.
\end{proof}
\subsection{Local Theory in the Space $X$}\label{subsection:local}
In this subsection we use the Banach fixed-point theorem on an
appropriate complete metric space to obtain a local-in-time solution of
the integral equation associated with the IVP (\ref{eq:IVP}). The following
lemma will be helpful during the proof of Theorem \ref{T:local} below.
\begin{Lemma}\label{Lemma:int}
Suppose that $u \in C([0,T];X)$. We define
\begin{equation}\label{eq:int}
D(\cdot,t):= \int_0^t K(\cdot,t-s) * \frac12 \partial_x u^2(\cdot,s) ds,
\;\;\; \text{ for } t \in [0,T].
\end{equation}
Then $D \in C([0,T];X)$.
\end{Lemma}
\begin{proof}
{\bf{i.)}} Let $t\in(0,T]$. We first prove that $D(t) \in X$. In fact,
\begin{eqnarray*}
\|D(\cdot,t) \|_{L^\infty} &\le& \sup_{s\in[0,T]} \|u(\cdot,s)\|_{C_b^1}^2
\int_0^t \|K(\cdot,t-s)\|_{L^1} ds \\
&\le& C \|u\|_{C([0,T];X)}^2
\int_0^t \big(1 + (t-s)^2 e^{\frac{4}{27}a^3(t-s)}\big) ds,
\end{eqnarray*}
where the last inequality is a consequence of Lemma \ref{Lemma:Kernel-K}. Then
\begin{equation}\label{eq:int1}
\|D(\cdot,t) \|_{L^\infty} \le C' \; \|u\|_{C([0,T];X)}^2 \; \nu(t),
\end{equation}
where
\begin{equation}\label{eq:int1a}
\nu(r):= r + r^2 e^{\frac{4}{27}a^3r}, \;\; \text{ for all } r \ge 0.
\end{equation}
Moreover,
\begin{equation*}
\|D(\cdot+h,t) -D(\cdot,t)\|_{L^\infty}
\le \int_0^t \| K(\cdot, t-s) \|_{L^1}
\|\frac{1}{2}\partial_x u^2(\cdot+h,s) -
\frac{1}{2}\partial_x u^2(\cdot,s)\|_{L^\infty} ds.
\end{equation*}
Using the fact that $\partial_x u(\cdot,s) u(\cdot,s)$ is uniformly
continuous on $\mathbb R$, for all $s \in [0,T]$, Lemma \ref{Lemma:Kernel-K},
and the dominated convergence theorem, it follows from the last inequality that
$D(t)$ is uniformly continuous on $\mathbb R$. Now we claim that there exists
$\frac{\partial D}{\partial x} (x,t)$ (in the classical sense), for all $x \in \mathbb R$, and
\begin{eqnarray}\label{eq:int2}
\frac{\partial D}{\partial x} (x,t)
&=& \int_0^t \partial_x \big( K(\cdot, t-s)*\frac12 \partial_x u^2(\cdot,s) \big)(x) ds
\nonumber \\
&=& \int_0^t \partial_x K(\cdot,t-s) * \frac12 \partial_x u^2(\cdot,s) (x) ds,
\;\; \text{ for all } x \in \mathbb R.
\end{eqnarray}
We now establish the last claim. It follows from Lemmas
\ref{Lemma:Kernel-K}, \ref{Lemma:Kernel-Kd}, and \ref{Lemma:calculus3} that
$K(\cdot,t-s) *\frac12 \partial_x u^2(\cdot,s) \in C^1(\mathbb R) \cap W^{1,\infty} (\mathbb R)$,
and
\begin{equation}\label{eq:int3}
\partial_x \big(K(\cdot,t-s) *\frac12 \partial_x u^2(\cdot,s) \big)(x)
=\big(\partial_x K(\cdot,t-s) *\frac12 \partial_x u^2(\cdot,s) \big) (x),
\end{equation}
for all $x \in \mathbb R$ and $s \in [0,t)$. Moreover,
\begin{eqnarray}\label{eq:int4}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!&&
\Big|\frac{D(x+h,t)-D(x,t)}{h} - \int_0^t \partial_x K(\cdot,t-s)
*\frac12 \partial_x u^2(\cdot,s) (x) ds\Big| \nonumber \\
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!&&
= \Big| \int_0^t\Big(\frac{K(\cdot+h,t-s)-K(\cdot,t-s)}{h}
-\partial_xK(\cdot,t-s)\Big)*\frac12 \partial_x u^2(\cdot,s) (x) ds \Big|.
\end{eqnarray}
In addition,
\begin{eqnarray}\label{eq:int5}
&& \Big{\|}\Big(\frac{K(\cdot+h,t-s)-K(\cdot,t-s)}{h}
-\partial_xK(\cdot,t-s)\Big)*\frac12 \partial_x u^2(\cdot,s)
\Big{\|}_{L^\infty} \nonumber \\
&& \le \|u \|_{C([0,T];X)}^2 \;\;
\Big{\|}\frac{K(\cdot+h,t-s)-K(\cdot,t-s)}{h}
-\partial_xK(\cdot,t-s)\Big{\|}_{L^1} \nonumber \\
&& \le \|u \|_{C([0,T];X)}^2 \;\;
\Big( \frac{1}{|h|} \Big|\int_0^h \|\partial_xK(\cdot+y,t-s)\|_{L^1}dy
\Big| + \| \partial_x K(\cdot,t-s) \|_{L^1} \Big) \nonumber \\
&& \le C \|u \|_{C([0,T];X)}^2 \;\;
\Big( \frac{1}{\sqrt{t-s}} +(t-s)^2
e^{\frac{4}{27}a^3(t-s)} \Big) \in L^1((0,t),ds),
\end{eqnarray}
where the last inequality is a consequence of Lemma \ref{Lemma:Kernel-Kd}.
The claim now follows from (\ref{eq:int3})-(\ref{eq:int5}) and the
dominated convergence theorem. \\
It follows directly from (\ref{eq:int2}) and Lemma \ref{Lemma:Kernel-Kd} that
\begin{equation}\label{eq:int6}
\|\partial_x D(\cdot,t) \|_{L^\infty}
\le C \; \|u \|_{C([0,T];X)}^2 \; \mu(t),
\end{equation}
where
\begin{equation}\label{eq:int7a}
\mu(r):= \sqrt r + r^2 e^{\frac{4}{27}a^3r}, \;\; \text{ for all } r \ge 0.
\end{equation}
The fact that $\partial_x D(\cdot,t)$ is uniformly continuous on $\mathbb R$ can be
shown similarly to the analogous result for $D(\cdot,t)$, using
Lemma \ref{Lemma:Kernel-Kd} instead of Lemma \ref{Lemma:Kernel-K}.
{\bf{ii.)}} We will now prove that $D \in C([0,T];X)$. Let $t \in [0,T)$.
We first assume that $h>0$. Then
\begin{equation*}
\|D(\cdot,t+h) -D(\cdot,t) \|_{L^\infty}
\le I_1(t,h) +I_2(t,h),
\end{equation*}
where
\begin{eqnarray*}
I_1(t,h)&:=& \int_0^t \big{\|} \big( K(\cdot,t+h-s) -K(\cdot,t-s) \big)
*\frac12 \partial_x u^2(\cdot,s)\big{\|}_{L^\infty} ds ,
\;\;\text{ and} \\
I_2(t,h)&:=& \int_t^{t+h} \big{\|} K(\cdot,t+h-s)
*\frac12 \partial_x u^2(\cdot,s)\big{\|}_{L^\infty} ds.
\end{eqnarray*}
We see that
\begin{eqnarray*}
&& \big{\|} \big( K(\cdot,t+h-s) -K(\cdot,t-s) \big) *\frac12
\partial_x u^2(\cdot,s)\big{\|}_{L^\infty} \\
&& \le \|u \|_{C([0,T];X)}^2 \;\;
\| K(\cdot,t+h-s) -K(\cdot,t-s) \|_{L^1} \\
&& \le C \|u \|_{C([0,T];X)}^2 \;\;
\big(1+ T^2 e^{\frac{8}{27}a^3T}\big)
\;\; \in L^1((0,t),ds),
\end{eqnarray*}
where the last inequality follows from Lemma \ref{Lemma:Kernel-K} and
the fact that $h \in (0,T)$. Thus, using Lemma \ref{Lemma:semig} and the
dominated convergence theorem we have that
\begin{equation*}
I_1(t,h)= \sqrt{2\pi} \int_0^t \big{\|}\big(E(h)-1\big) E(t-s) \frac12
\partial_x u^2(\cdot,s) \big{\|}_{L^\infty} ds \rightarrow 0,
\;\; \text{ as } h \downarrow 0.
\end{equation*}
Moreover, using Lemma \ref{Lemma:Kernel-K} and arguing as in (\ref{eq:int1}), we get
\begin{eqnarray*}
I_2(t,h) \le C \; \|u \|_{C([0,T];X)}^2 \; \nu(h) \rightarrow 0,
\;\; \text{ as } h \downarrow 0,
\end{eqnarray*}
where $\nu(\cdot)$ is given by (\ref{eq:int1a}). Hence,
\begin{equation}\label{eq:int7}
\lim_{h \downarrow 0} \|D(\cdot,t+h)-D(\cdot,t) \|_{L^\infty} =0.
\end{equation}
On the other hand, it follows from (\ref{eq:int2}) that
\begin{equation*}
\|\partial_xD(\cdot,t+h) -\partial_xD(\cdot,t) \|_{L^\infty}
\le J_1(t,h) +J_2(t,h),
\end{equation*}
where
\begin{eqnarray*}
J_1(t,h)&:=& \int_0^t \big{\|} \big( \partial_xK(\cdot,t+h-s)
-\partial_xK(\cdot,t-s) \big)
*\frac12 \partial_x u^2(\cdot,s)\big{\|}_{L^\infty} ds ,
\;\;\text{ and} \\
J_2(t,h)&:=& \int_t^{t+h} \big{\|} \partial_xK(\cdot,t+h-s)
*\frac12 \partial_x u^2(\cdot,s)\big{\|}_{L^\infty} ds.
\end{eqnarray*}
It follows directly from Lemma \ref{Lemma:Kernel-Kd} that
\begin{eqnarray*}
J_2(t,h) &\le& \|u \|_{C([0,T];X)}^2 \;\; \int_0^h
\|\partial_x K(\cdot,\tau) \|_{L^1} d\tau \\
&\le& C \; \|u \|_{C([0,T];X)}^2 \; \mu(h) \rightarrow 0,
\;\; \text{ as } h \downarrow 0,
\end{eqnarray*}
where $\mu(\cdot)$ is given by (\ref{eq:int7a}).
To estimate $J_1(t,h)$, we first extend $\partial_xK$ to all times
as follows:
\begin{equation*}
H(\cdot,s) := \left\{
\begin{array}
[c]{l}
\partial_x K(\cdot,s),\text{ if } s \in [0,T],\\
0, \text{ if } s \in \mathbb R \setminus [0,T].
\end{array}
\right.
\end{equation*}
We note that $H \in L^1(\mathbb R^2)$. In fact, by Lemma \ref{Lemma:Kernel-Kd}
we get
\begin{equation*}
\int\int |H(x,s)|dxds = \int_0^T \|\partial_x K(\cdot,s) \|_{L^1}ds
\le C \; \mu(T).
\end{equation*}
Then
\begin{eqnarray*}
J_1(t,h)
&\le& \|u \|_{C([0,T];X)}^2 \;\; \int_0^t \| \partial_x K(\cdot,\tau+h)
- \partial_xK(\cdot,\tau) \|_{L^1} d\tau \\
&\le& \|u \|_{C([0,T];X)}^2 \;\; \int\int |H(x,\tau+h)-H(x,\tau)|dx d\tau
\rightarrow 0, \;\; \text{ as } h \downarrow 0,
\end{eqnarray*}
where the last assertion follows from the continuity of translations in
$L^1(\mathbb R^2)$. Hence,
\begin{equation}\label{eq:int8}
\lim_{h \downarrow 0} \|\partial_x D(\cdot,t+h)-
\partial_xD(\cdot,t) \|_{L^\infty} =0.
\end{equation}
It follows from (\ref{eq:int7}) and (\ref{eq:int8}) that
$ \lim_{h \downarrow 0} \|D(\cdot,t+h)-D(\cdot,t) \|_{C_b^1} =0$.
The case when $t \in (0,T]$ and $h<0$ can be shown similarly to
the previous case. This finishes the proof of the lemma.
\end{proof}
The next theorem is the main result of this section; it establishes
the local-in-time existence of a solution of the integral equation
associated with the IVP (\ref{eq:IVP}).
\begin{Theorem}\label{T:local}
Suppose $u_0 \in X$. Then there exist $T=T(\|u_0\|_{C_b^1})>0$ and a
unique function $u \in C([0,T];X)$ satisfying the integral equation
\begin{equation}\label{eq:local}
u(\cdot,t) = E(t)u_0(\cdot) - \frac12 \int_0^t
E(t-s) \partial_x u^2(\cdot,s)ds,
\end{equation}
where $E(t)$ is defined by (\ref{eq:ope}).
\end{Theorem}
\begin{proof}
Let $M:=1+2\|u_0\|_{C^1_b}$, and fix $T>0$, to be suitably
chosen later. We now consider the nonlinear operator $A$ given by
\begin{equation*}\label{eq:ope1}
(Af)(\cdot,t) := E(t)u_0(\cdot) -\frac12 \int_0^t E(t-s)
\partial_x f^2(\cdot,s)ds,
\end{equation*}
defined on the complete metric space
\begin{equation*}\label{eq:ope2}
\Theta_T^M :=\Big{\{}f \in C([0,T];X) ; \sup_{t\in[0,T]}
\|f(\cdot,t) \|_{C_b^1} \le M\Big{\}}.
\end{equation*}
Let $f \in \Theta_T^M$. It follows from Lemmas \ref{Lemma:semig}
and \ref{Lemma:int} that $Af \in C([0,T];X)$.
We will now prove that we can choose $T=\tilde T>0$ small enough such that
$A(\Theta_{\tilde T}^M) \subset \Theta_{\tilde T}^M$.
Suppose $f \in \Theta_T^M$. By Lemma \ref{Lemma:semig} we know that
$\lim_{h \downarrow 0} \|(E(h)-1) u_0 \|_{C_b^1}=0$. Then there exists
$\delta = \delta (\|u_0\|_{C_b^1})>0$ such that if
$0 \le h \le \delta$, then $\|E(h)u_0\|_{C^1_b} \le \frac12 \big( 1
+ 3\| u_0 \|_{C_b^1}\big)$. If $T \le \delta$, using
Lemmas \ref{Lemma:Kernel-K} and \ref{Lemma:Kernel-Kd}, and
(\ref{eq:int2}), we get
\begin{eqnarray*}
&& \|(Af)(\cdot,t) \|_{C_b^1} \le \frac12 \big( 1+3\|u_0\|_{C_b^1}\big) \\
&& +\frac{1}{2\sqrt{2\pi}} \int_0^t \big(\|K(\cdot,t-t')\|_{L^1}
+\|\partial_x K(\cdot,t-t') \|_{L^1}\big) \|f\|_{C([0,T];X)}^2 dt' \\
&& \le \frac12 \big( 1+3\|u_0\|_{C_b^1}\big) + M^2C
\int_0^t\Big[\frac{1}{\sqrt \tau} +\tau^2 e^{\frac{4}{27}a^3\tau}\Big] d\tau \\
&& \le \frac12 \big( 1+3\|u_0\|_{C_b^1}\big) + M^2C \; \mu(T),
\end{eqnarray*}
for all $t \in [0,T]$, where $\mu(\cdot)$ is given by (\ref{eq:int7a}).
Take $T^\dagger>0$ such that
$M^2C \; \mu(T^\dagger) \le \frac12\big( 1+\|u_0\|_{C_b^1} \big)$. Thus, if
$\tilde T \in (0, \min \{\delta, T^\dagger \}]$, then
$\|(Af)(\cdot,t) \|_{C_b^1} \le M$ for all $t\in [0,\tilde T]$.
Finally, we will prove that there exists $T' \in (0,\tilde T]$ such
that $A$ is contractive on $\Theta_{T'}^M$. Suppose that
$f,g \in \Theta_{\tilde T}^M$. Let $t\in[0,\tilde T]$. Then
\begin{eqnarray*}
&& \|(Af)(\cdot,t) - (Ag)(\cdot,t) \|_{C_b^1} \\
&& \le C \int_0^t \big(\|K(\cdot,t-t')\|_{L^1}
+\|\partial_x K(\cdot,t-t') \|_{L^1}\big) \|\partial_x f^2(\cdot,t')
-\partial_xg^2(\cdot,t')\|_{L^\infty} dt' \\
&& \le C \int_0^t \big(\|K(\cdot,t-t')\|_{L^1}
+\|\partial_x K(\cdot,t-t') \|_{L^1}\big) \\
&& \times \big[ \|f(\cdot,t') \|_{L^\infty}
\|\partial_x(f(\cdot,t')-g(\cdot,t'))\|_{L^\infty}
+\|f(\cdot,t')-g(\cdot,t') \|_{L^\infty}
\|\partial_xg(\cdot,t') \|_{L^\infty}\big] dt' \\
&& \le C M \|f-g\|_{C([0,\tilde T];X)} \; \mu(t).
\end{eqnarray*}
Taking $T' \in (0,\tilde T]$ such that $CM \; \mu(T')<1$, it
follows that $A$ is a contraction on $\Theta_{T'}^M$. Therefore,
the mapping $A$ has a unique fixed point $u \in \Theta_{T'}^M$ which
satisfies equation (\ref{eq:local}) with $T'=T'(\|u_0\|_{C_b^1})>0$.
The uniqueness of the solution of equation (\ref{eq:local}) in the
class $C([0,T'];X)$ is a consequence of Proposition \ref{Prop:dep} below.
\end{proof}
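The fixed-point scheme in the proof can be visualized on a toy scalar problem (illustrative only; this is not equation (\ref{eq:local}), and the kernel $K$ plays no role): Picard iteration for $u(t)=u_0-\int_0^t u(s)^2\,ds$, whose exact solution is $u_0/(1+u_0 t)$, converges on a small time interval exactly as in the contraction argument above.

```python
import numpy as np

# Toy Banach fixed-point (Picard) iteration mirroring the structure of the
# operator A above, on u(t) = u0 - int_0^t u(s)^2 ds with exact solution
# u0/(1 + u0*t). A small horizon T makes the map a contraction (Lipschitz
# constant roughly 2*u0*T < 1 on the relevant ball).
u0, T, n = 1.0, 0.3, 1001
t = np.linspace(0.0, T, n)
dt = T / (n - 1)
u = np.full(n, u0)                        # initial guess: the constant function u0
for _ in range(80):                       # iterate u <- A(u)
    # trapezoidal quadrature of int_0^t u(s)^2 ds at every grid point
    integral = np.concatenate(([0.0], np.cumsum((u[:-1]**2 + u[1:]**2) * dt / 2)))
    u = u0 - integral
exact = u0 / (1.0 + u0 * t)
err = float(np.max(np.abs(u - exact)))    # sup-norm distance to the exact solution
```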
The next proposition shows the continuous dependence of the solutions
of equation (\ref{eq:local}) on the initial data.
\begin{Proposition}\label{Prop:dep}
Suppose that $u,v \in C([0,T];X)$ are solutions of equation
(\ref{eq:local}) with initial data $u_0, v_0 \in X$ respectively. Then
for all $t \in [0,T]$ we have
\begin{equation}\label{eq:dep}
\|u(\cdot,t) -v(\cdot,t) \|_{C_b^1} \le C
e^{\alpha t} \|u_0-v_0\|_{C_b^1},
\end{equation}
where $C$ and $\alpha$ are positive constants depending on
$T, \|u\|_{C([0,T];X)}$, and $\|v\|_{C([0,T];X)}$.
\end{Proposition}
\begin{proof} Let $t \in [0,T]$. We write
$w(\cdot,t):=u(\cdot,t)-v(\cdot,t)$. Then
\begin{equation}\label{eq:dep1}
\|w(\cdot, t) \|_{C_b^1} \le \|E(t)(u_0-v_0)\|_{C_b^1}
+\frac12 \Big{\|} \int_0^t E(t-t') \big(\partial_x u^2(\cdot,t')
-\partial_xv^2 (\cdot,t')\big) dt' \Big{\|}_{C_b^1}.
\end{equation}
It follows from Lemma \ref{Lemma:semig} that
\begin{equation}\label{eq:dep2}
\|E(t)(u_0-v_0)\|_{C_b^1}
\le C' \|u_0-v_0\|_{C_b^1},
\end{equation}
where
\begin{equation*}\label{eq:depC1}
C'= C \cdot \big(1+T^2 e^{\frac{4}{27}a^3T}\big).
\end{equation*}
Moreover, by Lemmas \ref{Lemma:Kernel-K} and \ref{Lemma:Kernel-Kd}, we get
\begin{eqnarray}\label{eq:dep3}
&& \frac12 \Big{\|} \int_0^t E(t-t') \big(\partial_x u^2(\cdot,t')
-\partial_xv^2 (\cdot,t')\big) dt' \Big{\|}_{C_b^1} \nonumber \\
&& \le \frac{\|u\|_{C([0,T];X)}+\|v\|_{C([0,T];X)}}{\sqrt{2\pi}} \nonumber \\
&& \times \int_0^t \big(\|K(\cdot,t-t')\|_{L^1}
+\|\partial_x K(\cdot,t-t') \|_{L^1}\big) \|w(\cdot,t')\|_{C_b^1} dt'
\nonumber \\
&& \le C \cdot \big( \|u\|_{C([0,T];X)}+\|v\|_{C([0,T];X)} \big)
\nonumber \\
&& \times \int_0^t \Big[ 1+(t-t')^2 e^{\frac{4}{27}a^3(t-t')} +
\frac{1}{\sqrt{t-t'}} \Big] \|w(\cdot,t')\|_{C_b^1} dt'
\nonumber \\
&& \le \tilde C \int_0^t \frac{\|w(\cdot,t')\|_{C_b^1}}{\sqrt{t-t'}} dt',
\end{eqnarray}
where
\begin{equation*}\label{eq:depC2}
\tilde C := C \cdot (1+T^{5/2} e^{\frac{4}{27}a^3T})
\; \big(\|u\|_{C([0,T];X)}+\|v\|_{C([0,T];X)}\big).
\end{equation*}
Thus, it follows from (\ref{eq:dep1}), (\ref{eq:dep2}) and (\ref{eq:dep3}) that
\begin{equation*}
\|w(\cdot,t) \|_{C_b^1} \le C' \|u_0-v_0\|_{C_b^1}
+ \tilde C \int_0^t \frac{\|w(\cdot,t')\|_{C_b^1}}{\sqrt{t-t'}} dt'.
\end{equation*}
Then
\begin{eqnarray*}
&& \|w(\cdot, t) \|_{C_b^1} \le C' \|u_0-v_0\|_{C_b^1} \\
&& + \tilde C \int_0^t \frac{1}{\sqrt{t-t'}}\Big[ C' \|u_0-v_0\|_{C_b^1}
+\tilde C \int_0^{t'}
\frac{\|w(\cdot,r)\|_{C_b^1}}{\sqrt{t'-r}} dr \Big] dt' \\
&& \le C' (1+2\tilde C \sqrt T) \|u_0-v_0\|_{C_b^1}
+\tilde C^2 \int_0^t \int_r^{t}
\frac{\|w(\cdot,r)\|_{C_b^1}}{\sqrt{t-t'}\sqrt{t'-r}} dt'dr \\
&& = C \|u_0-v_0\|_{C_b^1}
+\tilde C^2 B\big(\frac12,\frac12\big) \int_0^t \|w(\cdot,r)\|_{C_b^1} dr,
\end{eqnarray*}
where $B(\cdot,\cdot)$ denotes the beta function defined by
$B(x,y):=\int_0^1 t^{x-1} (1-t)^{y-1} dt$, for $\Re (x), \Re (y) >0$. The
proposition now follows by applying Gronwall's inequality to the last expression.
\end{proof}
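The interchange of the order of integration above is combined with an elementary evaluation of the inner time integral, via the substitution $t'=r+(t-r)s$:

```latex
% Inner time integral reduced to the beta function by t' = r + (t-r)s:
\[
\int_r^t \frac{dt'}{\sqrt{t-t'}\,\sqrt{t'-r}}
= \int_0^1 \frac{(t-r)\,ds}{\sqrt{(t-r)(1-s)}\,\sqrt{(t-r)s}}
= \int_0^1 s^{-1/2}(1-s)^{-1/2}\,ds
= B\big(\tfrac12,\tfrac12\big) = \pi,
\]
% since B(1/2,1/2) = \Gamma(1/2)^2/\Gamma(1) = \pi.
```

In particular, the constant multiplying the Gronwall term above is simply $\pi\tilde C^2$.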
\subsection{Future Work}\label{subsection:future}
Some interesting open problems remain: the study of the global
well-posedness for the IVP (\ref{eq:IVP}) with initial data
belonging to the space $X$, and the nonlinear stability theory of
the travelling-wave solution of equation (\ref{eq:fow0}). These
two problems will be addressed elsewhere.
\medskip
\noindent {\bf{Acknowledgements:}} The authors were supported by
the ANR-France project COPTER (Conception, optimisation et prototypage
d'ouvrages de lutte contre l'erosion en domaine littoral) under
grant No. NT05-2\_42253. The authors wish to thank
R\'emi Carles (I3M-Universit\'e Montpellier 2) and Natha\"el Alibaud
(D\'epartement de Math\'ematiques de Besan\c{c}on) for
fruitful discussions.
\subsection{The auxiliary linear quadratic problem}
Let (LQ) denote the optimal control problem defined by \eqref{LQ.cost}-\eqref{LQ.constraints} below
\begin{align}
\label{LQ.cost}
\text{minimize } & \Omega_{\mathcal{P}_2}({\bar\xi}, \bar{u}, \bar{y}, \bar{h})\\
\nonumber \text{subject to} & \\
\label{LQ.dynamics}
& \dot{\bar{\xi}} = f_x\bar{\xi} + f_u\bar{u} + B\bar{y},\\
\label{h.dyanmics}
& \dot{\bar{h}} = 0,\\
\label{LQ.constraints}
& 0 = D \eta_j(\hat{x}(0), \hat{x}(T))\left(\bar{\xi}(0), \bar{\xi}(T) + f_v(T)\bar{h}\right),
\end{align}
where $\bar{u}$ and $\bar{y}$ denote the control variables, while ${\bar\xi}$ and $\bar{h}$ are the states. Note that the feasible trajectories of (LQ) are precisely the critical directions in $\mathcal{P}_2$. Under the coercivity condition \eqref{Omega.P2.coersity.unique}, the unique optimal solution of (LQ) is $(\bar\xi, \bar{u}, \bar{y}, \bar{h}) = 0$.
In order to prove that the derivative of the shooting function ${\mathcal S}$ is injective at a weak minimum, we exploit the correspondence between solutions of (LQ) and solutions of the linearized system (LS) (see Lemma \ref{LS-LQS_equivalence} below).
Let $\bar{\chi}$ and $\bar{\chi}_h$ denote the costates associated with ${\bar\xi}$ and $\bar{h}$, respectively. The qualification condition for the original problem given in Assumption \ref{qualification.constraints} easily translates into an analogous constraint qualification for problem (LQ). Consequently, the weak minimizer $({\bar\xi}, \bar{u}, \bar{y}, \bar{h}) = 0$ of (LQ) also has a unique multiplier, which we shall refer to as $\lambda^{LQ} := \left(\bar{\chi}, \bar{\chi}_h, \beta^{LQ} \right)$.
Define the pre-Hamiltonian for problem (LQ) and the endpoint Lagrangian as
{\small
\begin{multline*}
\mathcal{H}({\bar\xi}, \bar{u}, \bar{y},\bar{\chi}):= \bar{\chi} (f_x\bar{\xi} + f_u\bar{u} + B\bar{y})
\\+ \mbox{$\frac{1}{2}$}\bar{\xi}^TH_{xx}\bar{\xi}
+ \bar{u}^TH_{ux}\bar{\xi}
+ \bar{y}^TM\bar{\xi} + \mbox{$\frac{1}{2}$}\bar{u}^TH_{uu}\bar{u} + \bar{y}^TE\bar{u} + \mbox{$\frac{1}{2}$}\bar{y}^TR\bar{y},\end{multline*}}
{\small
\begin{equation*}
\ell^{LQ}\left({\bar\xi}_0, {\bar\xi}_T, \bar{h}, \beta^{LQ}\right) := \mbox{$\frac{1}{2}$} g({\bar\xi}_0, {\bar\xi}_T, \bar{h})
+ \sum_{j = 1}^{d_\eta} \beta^{LQ}_jD \eta_j\left(\bar{\xi}_0, \bar{\xi}_T + f_v(T)\bar{h}\right),
\end{equation*}}
respectively,
where $g$ was defined in \eqref{function.g}. The costate dynamics becomes
\begin{equation}
\label{LQ.costatedynamics}
-\dot{\bar{\chi}} = \frac{\partial\mathcal{H}}{\partial {\bar\xi}} = \bar{\chi} f_x + {\bar\xi}^TH_{xx} + \bar{u}^TH_{ux} + \bar{y}^TM,
\end{equation}
with transversality conditions
{\small\begin{align}
\label{LQ.costateinicial}
\bar{\chi}(0) & \displaystyle
= -{\bar\xi}^T(0)D^2_{x_0^2}\ell + ({\bar\xi}(T) + f_v(T)\bar{h})^TD^2_{x_0x_T}\ell + \sum_{j = 1}^{d_{\eta}}D_{x_0}\eta_j,\\
\label{LQ.costatefinal}
\bar{\chi}(T) & \displaystyle
={\bar\xi}^T(T)D^2_{x_T^2}\ell + {\bar\xi}^T(0)D^2_{x_0x_T}\ell + \bar{h}^TH_{vx}(T) + \sum_{j = 1}^{d_{\eta}}D_{x_T}\eta_j.
\end{align}}
The costate variable $\bar{\chi}_h$ vanishes identically, since $\dot{\bar{\chi}}_h = 0$ and $\bar{\chi}_h(0) = 0$.
Finally, the stationarity of the Hamiltonian gives
\begin{align}
\label{LQ.stationary_u}
0 = \mathcal{H}_{\bar{u}} &= \bar{\chi} f_u + {\bar\xi}^TH_{xu}^T + \bar{u}^TH_{uu} + \bar{y}^TE,\\
\label{LQ.stationary_y}
0 = \mathcal{H}_{\bar{y}} &= \bar{\chi} B + {\bar\xi}^TM^T + \bar{u}^TE^T + \bar{y}^TR.
\end{align}
The set of equations \eqref{LQ.dynamics}-\eqref{LQ.constraints}, \eqref{LQ.costatedynamics}-\eqref{LQ.costatefinal} and \eqref{LQ.stationary_u}-\eqref{LQ.stationary_y} will be referred to as the Linear Quadratic System (LQS). Notice that for this system, the matrix of the Legendre-Clebsch condition takes the form
\begin{equation}
D_{(\bar{u}, \bar{y})^2}^2\mathcal{H} =
\left(
\begin{array}{cc}
H_{uu} & E^T\\
E & R
\end{array}
\right).
\end{equation}
Hence, if we assume coercivity for the original problem, Corollary \ref{ColrecoverLCconditions} implies that $D_{(\bar{u}, \bar{y})^2}^2\mathcal{H}$ is uniformly positive definite; consequently, solving the linear quadratic optimal control problem (LQ) is equivalent to solving its optimality system (LQS).
\subsection{Linking the auxiliary problem with the optimality system}
Define the mapping
\begin{equation}
(\bar{x}, \bar{u}, \bar{v}, \bar{p}, \beta) \mapsto \left({\bar\xi}, \bar{u}, \bar{y}, \bar{h}, \bar{\chi}, \bar{\chi}_h, \beta^{LQ}\right)
\end{equation}
through the equations
\begin{equation}
\label{mappingLS-LQS}
\begin{split}
\displaystyle \bar{y}(t) := \int_0^t\bar{v}(s){\rm d} s, \qquad {\bar\xi} := \bar{x} - f_v\bar{y},\qquad \bar{\chi} := \bar{p} + \bar{y}^{T}H_{vx},\\
\bar{\chi}_h := 0, \qquad \bar{h} := \bar{y}(T), \qquad \beta^{LQ} := \beta.
\end{split}
\end{equation}
This Goh-type transformation is clearly one-to-one.
Recalling the linearization (LS) of the optimality system \eqref{optimality.system}, we show that this transformation maps solutions of (LS) into solutions of (LQS). Afterwards we shall use this property and the coercivity condition \eqref{Omega.P2.coersity.unique} to deduce the uniqueness of solution of (LS).
\begin{lemma}
\label{LS-LQS_equivalence}
If $\hat{w}$ is a weak minimum of \textnormal{(OC)}, the injective mapping $(\bar{x}, \bar{u}, \bar{v}, \bar{p}, \beta) \mapsto ({\bar\xi}, \bar{u}, \bar{y}, \bar{h}, \bar{\chi}, \bar{\chi}_h, \beta^{LQ})$ defined in \eqref{mappingLS-LQS} converts solutions of \textnormal{(LS)} into solutions of \textnormal{(LQS)}.
\end{lemma}
The proof of this lemma is deferred to Appendix \ref{appendix_Goh.computations}.
\if{
\begin{proof}
We must check that given a solution $(\bar{x}, \bar{u}, \bar{v}, \bar{p}, \beta)$ of (LS), the corresponding transformed variables $({\bar\xi}, \bar{u}, \bar{y}, \bar{h}, \bar{\chi}, \bar{\chi}_h, \beta^{LQ})$ solve (LQS).
Starting with the state ${\bar\xi}$, we recall the dynamics of the linearized variable $\bar{x}$ given in \eqref{statedynamics.linearized} so that one has $\dot{{\bar\xi}} = \dot{\bar{x}} - \dot{f}_v\bar{y} - f_v\dot{\bar{y}} = f_x{\bar\xi} + f_u\bar{u} + B\bar{y},$ retrieving the dynamics in \eqref{LQ.dynamics}. The initial conditions are trivially satisfied since $\bar{y}(0) = 0$. The dynamics for $\bar{h}$ are satisfied by the definition.
For the costate dynamics we recall the dynamics of the linearized costates from \eqref{costatedynamics.linearized} and the definition of the matrix $M$ in \eqref{matrixM}. We get
\begin{align*}
-\dot{\bar{\chi}} &= - \dot{\bar{p}} - \dot{\bar{y}}^TH_{vx} - \bar{y}^T\dot{H}_{vx}\\
&= \underbrace{(\bar{p} + \bar{y}^TH_{vx})}_{=\bar{\chi}}f_x + \underbrace{(\bar{x} - f_v\bar{y})^T}_{={\bar\xi}^T}H_{xx} + \bar{y}\underbrace{(f_v^TH_{xx} - \dot{H}_{vx} - H_{vx}f_x)}_{=M}\\
&= \bar{\chi} f_x + {\bar\xi}^TH_{xx} + \bar{y}^TM.
\end{align*}
Hence the dynamics of $\bar{\chi}$ matches \eqref{LQ.costatedynamics}. From equation \eqref{mappingLS-LQS} we obtain $\bar{\chi}(0) = \bar{p}(0)$ and deduce \eqref{LQ.costateinicial}. For the final conditions one substitutes the expressions for $\bar{x}(T)$ and $\bar{p}(T)$ into \eqref{linHvT} and conclude since $S = H_{vx}f_v = f^T_vH_{vx}^T,$ which is a consequence of the Goh conditions \eqref{Goh.condition}.This way we recover the transversality condition for $\bar{\chi}(T)$.
Finally we must check the stationarity \eqref{LQ.stationary_u} and \eqref{LQ.stationary_y} of the Hamiltonian for (LQS). Starting from \eqref{linHu} and \eqref{mappingLS-LQS}, we obtain
\begin{align*}
0 &= (\bar\chi - \bar{y}^TH_{vx})f_u + (\bar\xi + f_v\bar{y})^TH_{ux}^T + \bar{u}^TH_{uu}\\
&= \bar\chi f_u + \bar\xi^TH_{ux}^T + \bar{u}^TH_{uu} + \bar{y}^T(\underbrace{f_v^TH_{ux}^T - H_{vx}f_u}_{=E}),
\end{align*}
which corresponds to the stationarity with respect to $\bar{u}$. On the other hand, the same substitutions applied to \eqref{linHv} yield $0 = \bar{\chi} f_v + {\bar\xi}^TH_{vx}^T.$
Differentiating with respect to time and using the definitions of $B$ in \eqref{matrixB} and $E$ in \eqref{matrixM}, we recover the stationarity \eqref{LQ.stationary_y} with respect to $\bar{y}$. This shows that the tuple $({\bar\xi}, \bar{u}, \bar{y}, \bar{h}, \bar{\chi}, \bar{\chi}_h, \beta^{LQ})$ is a solution of (LQS) and concludes the proof.
\end{proof}
}\fi
\subsection{Convergence of the shooting algorithm}
We are now in a position to prove the convergence of the shooting algorithm given in \eqref{nu.update}-\eqref{linear.regression.solution}. We will use the following result on the behavior of the Gauss-Newton algorithm.
\begin{proposition}[\cite{bonnans2006numerical,fletcher2013practical}]
\label{convergence_criteria_GN}
If the matrix $\mathcal{S}'(\hat{\nu})$ is injective, then the Gauss-Newton algorithm \eqref{nu.update}-\eqref{linear.regression.solution} is locally convergent. If in addition $\mathcal{S}'$ is Lipschitz continuous, then the algorithm converges locally quadratically.
\end{proposition}
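To illustrate Proposition \ref{convergence_criteria_GN}, the sketch below runs the Gauss-Newton update on a toy scalar equation of our own choosing (it is not the shooting function $\mathcal{S}$ of this paper); for a square system with injective Jacobian, the update reduces to Newton's method.

```python
# Gauss-Newton on the toy square system S(nu) = nu^3 + 2 nu - 1 = 0.
# Here S'(nu) = 3 nu^2 + 2 >= 2, so S' is injective and the iteration
# converges locally quadratically (for square systems it is Newton's method).

def S(nu):
    return nu**3 + 2.0 * nu - 1.0

def gauss_newton(nu0, iters=8):
    nu = nu0
    for _ in range(iters):
        J = 3.0 * nu**2 + 2.0        # S'(nu); the 1x1 case of (J^T J)^{-1} J^T
        nu -= S(nu) / J
    return nu

root = gauss_newton(0.0)
assert abs(S(root)) < 1e-12          # the residual is driven to machine precision
```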
The main result of this article is the theorem below, which states a sufficient condition for the local quadratic convergence of the shooting algorithm.
\begin{theorem}[Convergence of the shooting algorithm]
\label{convergence.unconstrained-problem}
Let $\hat w$ be a feasible trajectory satisfying the \textnormal{PMP} that verifies the coercivity condition \eqref{Omega.P2.coersity.unique}. Then the shooting algorithm is locally quadratically convergent.
\end{theorem}
\begin{proof}
From Theorem \ref{SOSC}, the trajectory $\hat{w}$ is a weak minimum for problem (OC). From Corollary \ref{ColrecoverLCconditions} and Proposition \ref{coercivity_implies_LC}, \eqref{LC-like.mixedcontrols} holds. Consequently, Theorem \ref{OC-OSequivalence} implies that (OS) is well-posed, so that the shooting algorithm can be properly formulated. Now consider a solution $(\bar{x}, \bar{u}, \bar{v}, \bar{p}, \beta)$ of (LS), and the associated transformed process $({\bar\xi}, \bar{u}, \bar{y}, \bar{h}, \bar{\chi}, \bar{\chi}_h, \beta^{LQ})$ given by \eqref{mappingLS-LQS}. The latter is a solution of (LQS) in view of Lemma \ref{LS-LQS_equivalence}.
Moreover, under condition \eqref{Omega.P2.coersity.unique}, the unique solution of (LQS) is the null trajectory; since the transformation \eqref{mappingLS-LQS} is one-to-one, the solution of (LS) is also null.
But from equation \eqref{shootingfunction.derivative.explicit}, the vectors $\bar{\nu}$ in the kernel of ${\mathcal S}'(\hat{\nu})$ are precisely the solutions of (LS). We conclude that ${\mathcal S}'(\hat{\nu})$ is injective. In addition ${\mathcal S}'$ is Lipschitz continuous due to Assumption \ref{datafunctions.lipschitz}. The claim follows from Proposition \ref{convergence_criteria_GN}.
\end{proof}
\subsection{Controls in feedback form}
The conventional Legendre-Clebsch condition assumes the form
\begin{equation}
\label{LegendreClebsch.mixed}
\tag{LC}
\left(
\begin{array}{cc}
H_{uu}(\hat{w},\hat{p}) & H_{uv}(\hat{w},\hat{p})\\
\\
H_{vu}(\hat{w},\hat{p})
& H_{vv}(\hat{w},\hat{p})
\end{array}
\right)
\succeq 0.
\end{equation}
A proof of \eqref{LegendreClebsch.mixed} for the present setting can be found in Aronna \cite[Corollary 1]{Aronna2018}. Note that, since $H_{vv}(\hat{w}, \hat{p}) \equiv 0$ and $H_{vu} = H_{uv}^T$, condition \eqref{LegendreClebsch.mixed} holds if, and only if
\begin{equation}
\label{LegendreClebsch.mixed.equivalent}
H_{uu}(\hat{w},\hat{p}) \succeq 0 \text{ and } H_{uv}(\hat{w},\hat{p}) = 0.
\end{equation}
Since the matrix in \eqref{LegendreClebsch.mixed} is singular, we cannot apply the IFT to \eqref{stationarity.Hamiltonian} to obtain our desired representations of the controls. Instead, one usually computes the time derivatives of the {\em switching function} $H_v$, which may depend explicitly on the controls (see {\em e.g.} Bryson and Ho \cite{BrysonHo75}).
In order to simplify the calculations involved in computing these derivatives, we consider a general formula for the time derivative of a product $p \cdot F,$ where $F:\mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ is a vector field. Employing the notation of Lie brackets given in the notation paragraph, we get
\begin{equation}
\label{genericvectorfield.D1}
\frac{{\rm d}}{{\rm d} t} \big( \hat{p} \cdot F(\hat{x}, \hat{u}) \big) = \hat{p} \cdot [f_0, F] + \sum_{i = 1}^m \hat{v}_i \, \hat{p} \cdot [f_i, F] + \hat{p} \cdot D_u F \dot{\hat{u}}.
\end{equation}
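The bracket convention used throughout, $[g,h] = D_xh\,g - D_xg\,h$, can be sanity-checked numerically. The sketch below uses two toy planar fields of our own choosing, with Jacobians approximated by central differences.

```python
# Finite-difference check of the convention [g, h] = D_x h * g - D_x g * h
# on the toy planar fields g(x) = (1, 0) and h(x) = (0, x1), whose bracket
# is [g, h] = (0, 1) at every point.

EPS = 1e-6

def g(x): return [1.0, 0.0]
def h(x): return [0.0, x[0]]

def jac(F, x):
    """Central-difference Jacobian of a vector field F at the point x."""
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += EPS
        xm[j] -= EPS
        Fp, Fm = F(xp), F(xm)
        for i in range(n):
            J[i][j] = (Fp[i] - Fm[i]) / (2.0 * EPS)
    return J

def bracket(g, h, x):
    Dg, Dh = jac(g, x), jac(h, x)
    gv, hv = g(x), h(x)
    n = len(x)
    return [sum(Dh[i][j] * gv[j] - Dg[i][j] * hv[j] for j in range(n))
            for i in range(n)]

b = bracket(g, h, [0.3, -1.2])
assert abs(b[0]) < 1e-6 and abs(b[1] - 1.0) < 1e-6
```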
We obtain $\dot{H}_{v_i}$ by choosing $F = f_i$. Recalling that $H_{vu}=0$, we get
\begin{equation}
\label{switching.function.D1_preview}
\dot{H}_{v_i}(\hat{w},\hat{p}) = \hat{p} \cdot [f_0, f_i] + \sum_{j = 1}^m \hat{v}_j \, \hat{p} \cdot [f_j, f_i].
\end{equation}
As a consequence of Proposition \ref{Goh.condition} below, equation \eqref{switching.function.D1_preview} does not depend explicitly on the linear control $v$.
\begin{proposition}[Goh conditions]
\label{Goh.condition}
Assume that $\hat{w}$ is a weak minimum. Then the following identities hold
\begin{equation*}
\hat{p} \cdot [f_i, f_j] = 0, \quad \text{ for $i,j = 1, \dots, m$}.
\end{equation*}
\end{proposition}
Proposition \ref{Goh.condition} was proposed and proved by Goh \cite{Goh66}. A generalization that applies to the framework of the current paper was given by Aronna in \cite[Cor. 5.2]{Aronna2018} as a corollary of second order necessary conditions for optimality when the set of multipliers is a singleton (see also \cite{ABDL12} and \cite{frankowska2013pointwise}). In view of Proposition \ref{Goh.condition}, equation \eqref{switching.function.D1_preview} reduces to
\begin{equation}
\label{switching.function.D1}
\dot{H}_{v_i}(\hat{w},\hat{p}) = \hat{p} \cdot [f_0, f_i].
\end{equation}
Differentiating the latter equation once more w.r.t. time, we obtain
\begin{equation}
\label{hamiltonian_2timedev}
\ddot{H}_{v_i} = \hat{p} \cdot \left[f_0, [f_0, f_i]\right]+\sum_{j = 1}^m \hat{v}_j \hat{p} \cdot \left[f_j, [f_0, f_i]\right] + \hat{p} \cdot D_u[f_0, f_i]\dot{\hat{u}}.
\end{equation}
We aim at removing the dependence on $\dot{\hat{u}}$ from \eqref{hamiltonian_2timedev}. This can be done by using the stationarity condition $H_u(\hat{w},\hat{p}) = 0$. Assuming enough regularity, the total time derivative of this expression gives
\begin{equation}
\label{H_udot}
\dot{H}_u(\hat{w},\hat{p}) = H_{ux} \dot{\hat{x}} + H_{up} \dot{\hat{p}} + H_{uu} \dot{\hat{u}} + H_{uv} \dot{\hat{v}} = 0,
\end{equation}
where the term $H_{uv}\dot{\hat{v}}$ vanishes in view of \eqref{LegendreClebsch.mixed.equivalent}. To formalize \eqref{H_udot}, we impose the following assumption on the controls.
\begin{assumption}[Regularity of the controls]
\label{regularity.controls}
The nonlinear control $\hat{u}$ is continuously differentiable and the linear control $\hat{v}$ is continuous.
\end{assumption}
This assumption is not restrictive since it follows from the IFT, once we assume the strengthened generalized Legendre-Clebsch condition \eqref{LC-like.mixedcontrols} below. In fact, assuming the strengthened Legendre-Clebsch condition w.r.t. $u$, {\em i.e.} $H_{uu}\succ 0$, we can apply the IFT to \eqref{H_udot} and solve for $\dot{\hat{u}}$, which yields
\begin{equation}
\label{representation.nonlin.derivative}
\dot{\hat{u}} = \Gamma(\hat{u}, \hat{v}, \hat{x}, \hat{p}),
\end{equation}
for a $\mathcal{C}^1$-function $\Gamma$.
Equation \eqref{representation.nonlin.derivative} shows that the dependence on $\dot{\hat{u}}$ can be removed from \eqref{hamiltonian_2timedev}. We are now in a position to formulate a system that can be used to achieve our desired representation.
Consider the mapping
{\small
\begin{equation}
\label{somemapping}
(w, \lambda) \mapsto
\left(\begin{array}{cc}
H_u(w,p)\\
\\
-\ddot{H}_v(w,p)
\end{array}\right),
\end{equation}
}
whose Jacobian w.r.t. $(u,v)$ at the extremal $(\hat{w}, \hat\lambda)$ is
\begin{equation}
\label{somemapping_jacobian}
\mathcal{J} :=
\left(\begin{array}{cc}
\displaystyle
H_{uu}(\hat{w},\hat{p}) & H_{uv}(\hat{w},\hat{p}) \\
& \\
\displaystyle
-\frac{\partial \ddot{H}_v}{\partial u}(\hat{w},\hat{p}) &\displaystyle -\frac{\partial \ddot{H}_v}{\partial v}(\hat{w},\hat{p})
\end{array}\right).
\end{equation}
To apply (IFT) to $H_u = 0, -\ddot{H}_v = 0$ and retrieve the controls, we assume the following {\em strengthened generalized Legendre-Clebsch condition}
\begin{equation}
\label{LC-like.mixedcontrols}
\tag{SLC}
H_{uu}(\hat{w},\hat{p}) \succ 0, \quad -\frac{\partial \ddot{H}_v}{\partial v}(\hat{w},\hat{p}) \succ 0.
\end{equation}
We get the following result.
\begin{theorem}
\label{OC-OSequivalence}
Assume that \eqref{LC-like.mixedcontrols} holds. If $\hat{w}$ is a {\em weak minimum} with associated multiplier $\hat\lambda$, then the optimal control $(\hat{u}, \hat{v})$ admits the feedback form
\begin{equation}
\label{control.elimination}
\hat{u} = U(\hat{x}, \hat{p}), \quad \hat{v} = V(\hat{x}, \hat{p}),
\end{equation}
where $U$ and $V$ are $\mathcal{C}^1$-functions.
Furthermore, the extremal $(\hat{w}, \hat\lambda)$ satisfies the {\em optimality system}
\begin{equation}
\label{optimality.system}
\tag{OS}
\left\{\begin{split}
&\dot{x} = f(x,U(x,p), V(x,p)), \quad\text{ a.e. on $[0,T],$}\\
&\dot{p} = -p\cdot D_xf(x,U(x,p), V(x,p)), \quad \text{ a.e. on $[0,T],$}\\
&\eta_j(x(0), x(T)) = 0, \quad \text{ for } j = 1, \cdots, d_{\eta},\\
&\left(p(0),p(T)\right) = \left(-D_{x_0}\ell,D_{x_T}\ell \right)(x(0), x(T),\beta),\\
&H_v(x(T), U(x(T), p(T))) = 0,\quad \dot{H}_v(x(0), U(x(0), p(0))) = 0.
\end{split}\right.
\end{equation}
\end{theorem}
\begin{proof}
From our previous discussion, since $H_{uu}\succ 0$, we can remove the dependence on $\dot{\hat{u}}$ from $\ddot{H}_v$. Note that, since $H_{uv} \equiv 0$,
\begin{equation}
\label{matrices}
\mathcal{J} = \left(\begin{array}{cc}
H_{uu} & 0 \\
\displaystyle - \frac{\partial \ddot{H}_v}{\partial u} & \displaystyle - \frac{\partial \ddot{H}_v}{\partial v}
\end{array}\right) =
\left(\begin{array}{cc}
H_{uu} & 0 \\
0 & \displaystyle - \frac{\partial \ddot{H}_v}{\partial v}
\end{array}\right)
\left(\begin{array}{cc}
I & 0 \\
\displaystyle \left(\frac{\partial \ddot{H}_v}{\partial v}\right)^{-1}\frac{\partial \ddot{H}_v}{\partial u} & I
\end{array}\right).
\end{equation}
Since the second matrix in \eqref{matrices} is invertible by \eqref{LC-like.mixedcontrols} and the third one is invertible by inspection, $\mathcal{J}$ is also invertible. Representation \eqref{control.elimination} follows from the IFT.
Moving on to \eqref{optimality.system}, note that it is derived from the PMP: the feedback forms in \eqref{control.elimination} are equivalent to $H_u = 0$ and $\ddot{H}_v = 0$, and the stationarity of the Hamiltonian w.r.t. $v$ is recovered by including the boundary conditions $H_v(T) = \dot{H}_v(0) = 0$.
We could have chosen another pair of boundary conditions, but this choice simplifies the presentation of the results that follow.
\end{proof}
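The factorization of $\mathcal{J}$ used in the proof can be verified on toy numbers (scalar controls; the values below are purely illustrative): with $H_{uv} = 0$, the matrix is the product of a block-diagonal factor and a unit lower-triangular one, so it is invertible as soon as the diagonal blocks are.

```python
# Toy check (m = 1) of the factorization J = diag(h_uu, c22) * [[1, 0], [c21/c22, 1]],
# where c21 and c22 stand in for -d(ddot H_v)/du and -d(ddot H_v)/dv.  The
# values are illustrative; (SLC) corresponds to h_uu > 0 and c22 > 0.

h_uu, c21, c22 = 3.0, 1.5, 2.0

def matmul2(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

J        = [[h_uu, 0.0], [c21, c22]]
diagonal = [[h_uu, 0.0], [0.0, c22]]
unit_low = [[1.0, 0.0], [c21 / c22, 1.0]]

assert matmul2(diagonal, unit_low) == J    # the factorization holds
assert h_uu * c22 != 0.0                   # det J, so J is invertible
```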
\input{linear_cont_linear_sys.tex}
\subsection{Proof of Lemma \ref{goh_computations}}
In this section we prove the following identities
\begin{equation}
\label{appendix_identity}
E = -\frac{\partial \dot{H}_v}{\partial u} \quad \text{ and } - \frac{\partial \ddot{H}_v}{\partial v} = R - EH_{uu}^{-1}E^T,
\end{equation}
which are relevant in the recovery of the strengthened Legendre-Clebsch conditions \eqref{LC-like.mixedcontrols} from the sufficient conditions stated in Theorem \ref{SOSC}. Our strategy is to establish the equality of the matrices involved entrywise.
The first identity in \eqref{appendix_identity} follows directly from the definition of $E$ in \eqref{matrixM}. Before proceeding to the second one, let us establish some conventions that will make the computations clearer. Many conditions throughout the text state that some quantity $Q$ is null when evaluated along the optimal trajectories; for instance, the Goh conditions $\hat{p}\cdot [f_i, f_j](\hat{w}) = 0$. We want to stress the distinction from the case in which some other quantity $N$ vanishes identically, as is the case for $H_{vv} \equiv 0$. We distinguish these two cases with the following notation
\begin{equation}
Q = 0, \quad N \equiv 0.
\end{equation}
Naturally, if we take the time derivative of some quantity $Q = 0$, this property is maintained and we obtain $\dot{Q} = 0$. However, this is not true when we take partial derivatives, that is, $\partial_v Q$ is not necessarily null. With this in mind, we recall the expressions from \eqref{kernel_equation.prev1} that were used to obtain the linear controls. While these expressions are suitable for that task, we cannot use them to compute the partial derivatives $\partial_v \ddot{H}_v$, since we have removed terms that vanish due to the Goh conditions in Proposition \ref{Goh.condition} or as a consequence of the Legendre-Clebsch conditions \eqref{LegendreClebsch.mixed.equivalent}.
The full expressions we are interested in are still easily obtainable by using formula \eqref{genericvectorfield.D1}. We get,
\begin{align}
\if{
\label{Hv}
H_{v_i} &= \hat{p}\cdot f_i,\\
\label{Hvdot}
\dot{H}_{v_i} &= \hat{p}\cdot[f_0, f_i] + \underbrace{ \sum_{k = 1}^m \hat{v}_k\hat{p}\cdot[f_k,f_i]}_{\textnormal{ $= 0,$ Goh conditions}} + \underbrace{ H_{v_iu}\dot{\hat{u}}, }_{\textnormal{ $= 0$, LC conditions}}\\
}\fi
\label{Hvddot}
\begin{split}
\ddot{H}_{v_i} &=
\hat{p}\cdot[f, [f_0, f_i]] + \hat{p}\cdot D_u [f_0,f_i] \dot{\hat{u}}\\
&+ \sum_{k = 1}^m\left\{ \dot{\hat{v}}_k\hat{p}\cdot[f_k,f_i] + \hat{v}_k \frac{{\rm d}}{{\rm d} t} \hat{p}\cdot[f_k,f_i]\right\} + \frac{{\rm d}}{{\rm d} t}\left(H_{v_iu}\right)\dot{\hat{u}} + H_{v_iu}\ddot{\hat{u}}.
\end{split}
\end{align}
Notice that the coefficient of $\ddot{\hat{u}}$ is zero, so we do not require further regularity for $\hat{u}$. Taking the partial derivative w.r.t. $v_j$ in \eqref{Hvddot} yields
\begin{equation}
\label{DvddotHv}
\begin{split}
&\frac{\partial \ddot{H}_{v_i}}{\partial v_j} = \hat{p}\cdot[f_j, [f_0,f_i]] + \hat{p}\cdot D_u [f_0,f_i]\frac{\partial \dot{\hat{u}}}{\partial v_j}\\
& + \sum_{k = 1}^m\left\{ \frac{\partial \dot{\hat{v}}_k}{\partial v_j}\underbrace{\hat{p}\cdot[f_k,f_i]}_{ = 0} + \dot{\hat{v}}_k \hat{p}\cdot\underbrace{\frac{\partial }{\partial v_j}[f_k,f_i]}_{\equiv 0} + \frac{\partial \hat{v}_k}{\partial v_j}\underbrace{\frac{{\rm d}}{{\rm d} t} \hat{p}\cdot[f_k,f_i]}_{= 0}+ \hat{v}_k \underbrace{\frac{\partial}{\partial v_j}\frac{{\rm d}}{{\rm d} t} \hat{p}\cdot[f_k,f_i]}_{=: A_k}
\right\}\\
& + \underbrace{\frac{\partial}{\partial v_j}\frac{{\rm d}}{{\rm d} t}\left(H_{v_iu}\right)\dot{\hat{u}}}_{=: B} + \underbrace{\frac{{\rm d}}{{\rm d} t}\left(H_{v_iu}\right)}_{= 0}\frac{\partial \dot{\hat{u}}}{\partial v_j} + \underbrace{\frac{\partial}{\partial v_j}H_{v_iu}}_{\equiv 0}\ddot{\hat{u}} + \underbrace{H_{v_iu}}_{= 0}\frac{\partial \ddot{\hat{u}}}{\partial v_j}.
\end{split}
\end{equation}
Once again, the coefficients of $\dot{\hat{v}}$ and $\ddot{\hat{u}}$ vanish so we do not require any further regularity on the optimal controls. By computing the remaining time derivatives, we obtain the expressions
\begin{align}
A_k = \frac{\partial}{\partial v_j}\frac{{\rm d}}{{\rm d} t} \hat{p}\cdot[f_k,f_i] &= \hat{p}\cdot[f_j,[f_k,f_i]] + \hat{p}\cdot D_u[f_k,f_i] \frac{\partial \dot{\hat{u}}}{\partial v_j},\\
B = \frac{\partial}{\partial v_j}\frac{{\rm d}}{{\rm d} t}\left(H_{v_iu}\right)\dot{\hat{u}} & = \hat{p}\cdot \left(
\frac{\partial^2 f_i}{\partial x\partial u}f_j - \frac{\partial f_j}{\partial x}\frac{\partial f_i}{\partial u}
\right)\dot{\hat{u}} + \dot{\hat{u}}^T H_{v_iuu}\frac{\partial \dot{\hat{u}}}{\partial v_j}.
\end{align}
\if{
and our expression for $\partial_{v_j} \ddot{H}_{v_i}$ becomes
\begin{equation}
\frac{\partial \ddot{H}_{v_i}}{\partial v_j} = p\cdot[f_j, [f,f_i]] + p\cdot D_u[f,f_i] \frac{\partial \dot{u}}{\partial v_j} + \frac{\partial}{\partial v_j}\frac{{\rm d}}{{\rm d} t}\left(H_{v_iu}\right)\dot{u}.
\end{equation}
}\fi
The proof of identity \eqref{appendix_identity} is organized in the following three claims.
\begin{claim}
\label{claim1}
The entries of the matrix $R = f_v^TH_{xx}f_v - \left(H_{vx}B + (H_{vx}B)^T\right) - \dot{S}$, given in equation \eqref{matrixM}, satisfy
\begin{equation*}
R_{ij} = -\left\{ \hat{p}\cdot[f_j, [f, f_i]] + \hat{p}\cdot \left(
\frac{\partial^2 f_i}{\partial x\partial u}f_j - \frac{\partial f_j}{\partial x}\frac{\partial f_i}{\partial u}
\right)\dot{\hat{u}} \right\}.
\end{equation*}
\end{claim}
\begin{claim}
\label{claim2}
It holds
\begin{equation*}
\frac{\partial \dot{\hat{u}}}{\partial v_j} = - H_{uu}^{-1}E^T_{(:,j)},
\end{equation*}
where the matrix $E = f_v^TH^T_{ux} - H_{vx}f_u$ was introduced in \eqref{matrixM}.
\end{claim}
\begin{claim}
\label{claim3}
For the matrix $E$ given in \eqref{matrixM}, the following expression holds
\begin{equation*}
- E_{(i,:)} = {\hat{p}\cdot D_u [f,f_i] + \dot{\hat{u}}^TH_{v_iuu}}.
\end{equation*}
\end{claim}
\noindent
{\em Proof of Claim \ref{claim1}.}
In our case, where we assume uniqueness of multipliers, the matrix $S$ given in \eqref{matrixG} takes the form $S = H_{vx}f_v$, since $H_{vx}f_v$ is symmetric due to the Goh conditions. For $i,j=1,\dots,m,$ we obtain
\begin{equation}\label{dotS}
\begin{split}
\dot{S}_{ij} = \frac{{\rm d}}{{\rm d} t}\left(\hat{p}\cdot\frac{\partial f_i}{\partial x}f_j\right)
& = \hat{p}\cdot\left[f, \frac{\partial f_i}{\partial x}f_j\right] + \hat{p}\cdot D_u\left( \frac{\partial f_i}{\partial x}f_j\right) \dot{\hat{u}}\\
& = \hat{p}\cdot\left[f, \frac{\partial f_i}{\partial x}f_j\right] + \hat{p}\cdot\left( \frac{\partial f_i}{\partial x}\frac{\partial f_j}{\partial u} + \frac{\partial^2 f_i}{\partial x\partial u}f_j\right) \dot{\hat{u}}.
\end{split}
\end{equation}
We will make use of the following expression that comes directly from the definition of Lie brackets:
$ \displaystyle
\hat{p}\cdot\frac{\partial f}{\partial x}f_i =\hat{p}\cdot \frac{\partial f_i}{\partial x}f +\hat{p}\cdot [f_i, f],\text{ for $i = 1,\dots, m$}.
$ Clearly, the additional term $\hat{p}\cdot[f_i, f]$ vanishes; however, as we have discussed, we cannot neglect it once we take partial derivatives.
Summing and subtracting the term $\displaystyle \hat{p}\cdot \frac{\partial^2 f}{\partial x^2}f_if_j$ from the expression for $\displaystyle \hat{p}\cdot \left[f, \frac{\partial f_i}{\partial x}f_j\right]$ we obtain
\begin{align*}
\hat{p}\cdot \left[f, \frac{\partial f_i}{\partial x}f_j\right] &= \hat{p}\cdot\left(\frac{\partial}{\partial x}\left(\frac{\partial f_i}{\partial x}f_j \right)f - \frac{\partial f}{\partial x}\frac{\partial f_i}{\partial x}f_j \pm \frac{\partial^2f}{\partial x^2}f_if_j\right)\\
\if{
&= \hat{p}\cdot\left(\frac{\partial}{\partial x}\left(\frac{\partial f_i}{\partial x}f_j \right)f - \frac{\partial}{\partial x}\left(\frac{\partial f}{\partial x}f_i \right)f_j + \frac{\partial^2 f}{\partial x^2}f_if_j\right)\\
}\fi
&= \hat{p}\cdot\left(\frac{\partial}{\partial x}\left(\frac{\partial f_i}{\partial x}f_j \right)f - \frac{\partial}{\partial x}\left(\frac{\partial f_i}{\partial x}f + [f_i,f] \right)f_j + \frac{\partial^2 f}{\partial x^2}f_if_j\right)\\
&= \hat{p}\cdot\left( \frac{\partial f_i}{\partial x}[f,f_j] + \frac{\partial}{\partial x}[f,f_i] f_j + \frac{\partial^2 f}{\partial x^2}f_if_j\right).
\end{align*}
Hence, from \eqref{dotS}, we have
\begin{align*}
\dot{S}_{ij} = \left(f_v^TH_{xx}f_v\right)_{ij} + \hat{p}\cdot \frac{\partial f_i}{\partial x}[f,f_j] + \hat{p}\cdot\frac{\partial}{\partial x}[f,f_i] f_j + \hat{p}\cdot\left( \frac{\partial f_i}{\partial x}\frac{\partial f_j}{\partial u} + \frac{\partial^2 f_i}{\partial x\partial u}f_j\right) \dot{\hat{u}}.
\end{align*}
Moving on to the terms $\left(H_{vx}B\right)_{ij}$ and $\left(H_{vx}B\right)^T_{ij} = \left(H_{vx}B\right)_{ji}$, and recalling the definition of $B = f_xf_v - \frac{{\rm d}}{{\rm d} t}f_v$ given in \eqref{matrixB}, we obtain that the $j$-th column of this matrix takes the form
$
B_{(:,j)} = -\left([f,f_j] + \frac{\partial f}{\partial u}\dot{\hat{u}}\right),
$
so that
\[
H_{vx(i,:)}B_{(:,j)} = - \hat{p}\cdot\frac{\partial f_i}{\partial x}\left([f,f_j] + \frac{\partial f}{\partial u}\dot{\hat{u}}\right).
\]
Summing all terms to get the matrix $R$, we obtain the desired identity.
\end{proof}
\noindent
{\em Proof of Claim \ref{claim2}.}
To obtain an expression for $\displaystyle \frac{\partial \dot{\hat{u}}}{\partial v_j}$, we start by solving the equation $\dot{H}_u = 0$ for $\dot{\hat{u}}$. We obtain
\if{
\begin{align*}
0 = \dot{H}_u &= H_{ux}\dot{x} + H_{up}\dot{p} + H_{uu}\dot{u}\\
&= H_{ux}f -\frac{\partial f^T}{\partial u}H_x^T + H_{uu}\dot{u},
\end{align*}
obtaining }\fi
\begin{equation*}
\dot{\hat{u}} = -H_{uu}^{-1}\left(H_{ux}f -\frac{\partial f^T}{\partial u}H_x^T\right).
\end{equation*}
Taking the partial derivative w.r.t. $v_j$ in the latter equation yields
\begin{align*}
\frac{\partial \dot{\hat{u}}}{\partial v_j} =& -H_{uu}^{-1}\underbrace{\left(H_{ux}f_j -\frac{\partial f^T}{\partial u}H_{xv_j}^T \right) }_{ = E^T_{(:,j)}} - H_{uu}^{-1}\underbrace{\left(H_{v_jux}f -\frac{\partial f_j^T}{\partial u}H_x^T\right)}_{ = \frac{\partial}{\partial x}H_{v_ju}\dot{\hat{x}} + \frac{\partial}{\partial p}H_{v_ju}\dot{\hat{p}} = -H_{uuv_j}\dot{\hat{u}}}\\
& -\frac{\partial H_{uu}^{-1}}{\partial v_j}\underbrace{\left(H_{ux}f_j - \hat{p}\cdot \frac{\partial f_j}{\partial x}f_u\right)}_{ = - H_{uu}\dot{\hat{u}}}\\
=& -H_{uu}^{-1}E^T_{(:,j)}+ \underbrace{\left(H_{uu}^{-1}\frac{\partial H_{uu}}{\partial v_j} + \frac{\partial H_{uu}^{-1}}{\partial v_j}H_{uu} \right)}_{= \frac{\partial }{\partial v_j}H_{uu}^{-1}H_{uu} = 0 }\dot{\hat{u}}
= -H_{uu}^{-1}E^T_{(:,j)}.
\end{align*}
\end{proof}
\noindent
{\em Proof of Claim \ref{claim3}.}
Let us expand $D_u\left(\hat{p}\cdot [f, f_i]\right)$:
\begin{align*}
D_u\left(\hat{p}\cdot [f, f_i]\right) &= \frac{\partial}{\partial u}\left(\hat{p}\cdot\frac{\partial f_i}{\partial x}f - \hat{p}\cdot\frac{\partial f}{\partial x}f_i\right)\\
&= \underbrace{\hat{p}\cdot\frac{\partial f_i}{\partial x}\frac{\partial f}{\partial u} - f_i^TH_{xu}}_{= -E_{(i,:)}} + \underbrace{f^TH_{v_ixu} - H_x\frac{\partial f_i}{\partial u}}_{= \left(\frac{\partial H_{v_iu}}{\partial x}\dot{\hat{x}} + \frac{\partial H_{v_iu}}{\partial p}\dot{\hat{p}}\right)^T}= - E_{(i,:)} - \dot{\hat{u}}^TH_{v_iuu}.
\end{align*}
\end{proof}
Finally, we add the contributions of all these claims to prove Lemma \ref{goh_computations}.
\noindent{\em Proof of Lemma \ref{goh_computations}.}
It suffices to check the expression for $\displaystyle \frac{\partial \ddot{H}_{v_i}}{\partial v_j}$ in \eqref{DvddotHv}:
\begin{align*}
\frac{\partial \ddot{H}_{v_i}}{\partial v_j} &= \underbrace{\hat{p}\cdot[f_j, [f, f_i]] + \hat{p}\cdot \left(
\frac{\partial^2 f_i}{\partial x\partial u}f_j - \frac{\partial f_j}{\partial x}\frac{\partial f_i}{\partial u}
\right)\dot{\hat{u}}}_{-R_{ij}} + \underbrace{{\hat{p}\cdot D_u [f,f_i] + \dot{\hat{u}}^TH_{v_iuu}}}_{-E_{(i,:)}}\underbrace{\frac{\partial \dot{\hat{u}}}{\partial v_j}}_{-H_{uu}^{-1}E^T_{(:,j)}}\\
&= -\left(R - EH^{-1}_{uu}E^T\right)_{ij}.
\end{align*}
This concludes the proof.
\end{proof}
\subsection{} {\em Proof of Lemma \ref{LS-LQS_equivalence}.}
We must check that given a solution $(\bar{x}, \bar{u}, \bar{v}, \bar{p}, \beta)$ of (LS), the corresponding transformed variables $({\bar\xi}, \bar{u}, \bar{y}, \bar{h}, \bar{\chi}, \bar{\chi}_h, \beta^{LQ})$ solve (LQS).
Starting with the state ${\bar\xi}$, we recall the dynamics of the linearized variable $\bar{x}$ given in \eqref{statedynamics.linearized}, so that one has $\dot{{\bar\xi}} = \dot{\bar{x}} - \dot{f}_v\bar{y} - f_v\dot{\bar{y}} = f_x{\bar\xi} + f_u\bar{u} + B\bar{y},$ retrieving the dynamics in \eqref{LQ.dynamics}. The initial conditions are trivially satisfied since $\bar{y}(0) = 0$. The dynamics for $\bar{h}$ are satisfied by definition.
For the costate dynamics we recall the dynamics of the linearized costates from \eqref{costatedynamics.linearized} and the definition of the matrix $M$ in \eqref{matrixM}. We get
\begin{align*}
-\dot{\bar{\chi}} &= - \dot{\bar{p}} - \dot{\bar{y}}^TH_{vx} - \bar{y}^T\dot{H}_{vx}\\
&= \underbrace{(\bar{p} + \bar{y}^TH_{vx})}_{=\bar{\chi}}f_x + \underbrace{(\bar{x} - f_v\bar{y})^T}_{={\bar\xi}^T}H_{xx} + \bar{y}^T\underbrace{(f_v^TH_{xx} - \dot{H}_{vx} - H_{vx}f_x)}_{=M}\\
&= \bar{\chi} f_x + {\bar\xi}^TH_{xx} + \bar{y}^TM.
\end{align*}
Hence the dynamics of $\bar{\chi}$ matches \eqref{LQ.costatedynamics}. From equation \eqref{mappingLS-LQS} we obtain $\bar{\chi}(0) = \bar{p}(0)$ and deduce \eqref{LQ.costateinicial}. For the final conditions, one substitutes the expressions for $\bar{x}(T)$ and $\bar{p}(T)$ into \eqref{linHvT} and concludes using $S = H_{vx}f_v = f^T_vH_{vx}^T,$ which is a consequence of the Goh conditions \eqref{Goh.condition}. This way we recover the transversality condition for $\bar{\chi}(T)$.
Finally we must check the stationarity \eqref{LQ.stationary_u} and \eqref{LQ.stationary_y} of the Hamiltonian for (LQS). Starting from \eqref{linHu} and \eqref{mappingLS-LQS}, we obtain
\begin{align*}
0 &= (\bar\chi - \bar{y}^TH_{vx})f_u + (\bar\xi + f_v\bar{y})^TH_{ux}^T + \bar{u}^TH_{uu}\\
&= \bar\chi f_u + \bar\xi^TH_{ux}^T + \bar{u}^TH_{uu} + \bar{y}^T(\underbrace{f_v^TH_{ux}^T - H_{vx}f_u}_{=E}),
\end{align*}
which corresponds to the stationarity with respect to $\bar{u}$. On the other hand, the same substitutions applied to \eqref{linHv} yield $0 = \bar{\chi} f_v + {\bar\xi}^TH_{vx}^T.$
Differentiating with respect to time and using the definitions of $B$ in \eqref{matrixB} and $E$ in \eqref{matrixM}, we recover the stationarity \eqref{LQ.stationary_y} with respect to $\bar{y}$. This shows that the tuple $({\bar\xi}, \bar{u}, \bar{y}, \bar{h}, \bar{\chi}, \bar{\chi}_h, \beta^{LQ})$ is a solution of (LQS) and concludes the proof.
\end{proof}
\subsection{} {\em Proof of Lemma \ref{CP-TP-relation}.}
Since $\hat{w}$ is a Pontryagin minimum of (CP), from Definition \ref{pontryagin_minimum}, there exists $\varepsilon > 0$ such that
\begin{equation}
\label{ineqw}
\norm{x - \hat{x}}_{\infty} < \varepsilon, \ \norm{(u,v) - (\hat{u},\hat{v})}_{1} < \varepsilon, \ \norm{(u,v) - (\hat{u},\hat{v})}_{\infty} < 1.
\end{equation}
Let $\Wh$ be the transformation of $\hat{w}$ through \eqref{transform.CP-TP}. We now prove that $\Wh$ is weakly optimal for (TP). Hence we search for appropriate $\bar\delta, \bar\varepsilon$ for which all feasible trajectories $W = \big((x^k), (u^k), (v^k), (T_k)\big)$ of (TP) that satisfy
\begin{equation}
\label{weak_optimality_W}
\left| T_k - \Th_k\right|<\bar\delta, \quad \norm{(u^k,v^k) - (\hat{u}^k,\hat{v}^k)}_{\infty} < \bar\varepsilon, \ \text{ for all } k = 1, \cdots, N
\end{equation}
are mapped into a neighborhood of $\hat{w}$ in which $\hat{w}$ is optimal. Such a mapping $W \mapsto w$ is defined as follows
\begin{gather}
\label{transform.TP-CP.xu}
x(t) := x^k\left(\frac{t - T_{k-1}}{T_k - T_{k-1}}\right), \quad u(t) := u^k\left(\frac{t - T_{k-1}}{T_k - T_{k-1}}\right),\quad \text{ for $t \in I_k$},\\
\label{transform.TP-CP.v}
v_i(t) :=
\left\{
\begin{array}{cc}
0,& \text{if $t \in I_k$ and $i\in A_k,$}\\
v_i^k\left(\frac{t - T_{k-1}}{T_k - T_{k-1}}\right),& \text{if $t \in I_k$ and $i\in S_k,$}\\
1,& \text{if $t \in I_k$ and $i\in B_k.$}\\
\end{array}
\right.
\end{gather}
The dynamics \eqref{state.dynamics} are clearly satisfied by $(x,u,v)$ obtained from \eqref{transform.TP-CP.xu}-\eqref{transform.TP-CP.v}. The end-point constraints in \eqref{initial-final.constraints} are also easy to verify since $x(0) = x^1(0)$ and $x(T) = x^N(1)$ along with the feasibility of $W$.
The last step in checking the feasibility of $w$ concerns the control constraints. For the nonlinear controls, note that since $\norm{u^k - \hat{u}^k}_{\infty} < \bar\varepsilon$, we have $\norm{u - \hat{u}}_{\infty} < \bar\varepsilon$. Recalling $\rho'$ given in \eqref{control_discontinuities}-\eqref{control_set.rho-Ball}, if we choose $\bar\varepsilon < \rho'$, then $u\left([0,T]\right) \subset U$.
To discuss the feasibility of the linear controls, from equation \eqref{control_discontinuities}, we can choose $\bar\varepsilon$ so that, whenever $t \in I_k$ and $i \in S_k,$
\begin{equation}
0 < \rho' - \bar\varepsilon \le v_i(t) \le 1 -\rho' + \bar\varepsilon < 1.
\end{equation}
On the other hand, for $i \in A_k \cup B_k$, we know that $v_i(t) \in \{0,1\}$ in view of \eqref{transform.TP-CP.v}, so that the control constraints are still satisfied. This concludes the proof of the feasibility of $(x,u,v)$.
In the sequel, we find $\bar\delta$ and $\bar\varepsilon$ so that, if $W$ satisfies \eqref{weak_optimality_W}, then the transformed $w$ verifies \eqref{ineqw} for the given $\varepsilon.$ The analysis is analogous for both controls $u$ and $v$, hence we will conduct the calculations only for $u$. We have
{\small
\begin{equation}
\begin{array}{cc}
\displaystyle \int_{I_k\cap \Ih_k} |u_i(t) - \hat{u}_i(t)|{\rm d} t & \displaystyle \le
\int_{I_k\cap \Ih_k} \left| u_i^k\left(\frac{t - T_{k-1}}{T_k - T_{k-1}}\right) - \hat{u}_i^k\left(\frac{t - T_{k-1}}{T_k - T_{k-1}}\right)\right|{\rm d} t \\
& \displaystyle + \int_{I_k\cap \Ih_k} \left| \hat{u}_i^k\left(\frac{t - T_{k-1}}{T_k - T_{k-1}}\right) - \hat{u}_i^k\left(\frac{t - \Th_{k-1}}{\Th_k - \Th_{k-1}}\right)\right|{\rm d} t.
\end{array}
\end{equation}}
The first integral on the r.h.s. of the latter display is bounded by $\bar\varepsilon|I_k\cap \Ih_k|$ in view of \eqref{weak_optimality_W}. For the second term, recall that $\hat{u}$ is continuous on $[0,T]$, and so are the components of $\hat{u}^k$ over $\Ih_k$; hence they are uniformly continuous over these intervals. Therefore, for each $k = 1, \cdots, N,$ we can find some $\bar\delta_k>0$ such that, if $|T_k - \Th_k| < \bar\delta_k,$ then
\begin{equation*}
\left| \hat{u}_i^k\left(\frac{t - T_{k-1}}{T_k - T_{k-1}}\right) - \hat{u}_i^k\left(\frac{t - \Th_{k-1}}{\Th_k - \Th_{k-1}}\right) \right| < \bar\varepsilon
\end{equation*}
for every component of $\hat{u}^k$. Hence it suffices to choose $\displaystyle \bar\delta:=\mathop{\rm min}_{k = 1, \cdots, N} \bar\delta_k$. We have thus proved that
\be
\label{estimate_u_1}
\displaystyle \int_{I_k\cap \Ih_k} |u_i(t) - \hat{u}_i(t)|{\rm d} t \leq 2\bar\varepsilon|I_k\cap \Ih_k|.
\ee
Next, we need to estimate the integral outside the intersection $I_k\cap \Ih_k.$ We assume w.l.o.g. that $T_k < \Th_k$; hence, in view of \eqref{weak_optimality_W},
\begin{equation}
\label{estimate_u_2}
\int_{T_k}^{\Th_k}|u_i(t) - \hat{u}_i(t)|{\rm d} t \le \bar\delta\bar\varepsilon.
\end{equation}
Adding up all the terms, we get from \eqref{estimate_u_1}-\eqref{estimate_u_2}, that
$$\norm{u_i - \hat{u}_i}_1 < \bar\varepsilon(2T + (N-1)\bar\delta).
$$
An analogous estimate can be obtained for $\|v-\hat{v}\|_1.$ Finally, taking into account all the control components, $m$ from the linear controls and $l$ from the nonlinear ones, we get that, if
\begin{equation*}
\bar\varepsilon(2T + (N-1)\bar\delta) < \frac{\varepsilon}{m + l},
\end{equation*}
then $\norm{(u,v) - (\hat{u},\hat{v})}_1 < \varepsilon$, as desired.
\end{proof}
\subsection{Second Order Necessary Conditions of Optimality}
The following result holds.
\begin{theorem}[Second order necessary condition \cite{Aronna2018, MR1641590}]
\label{SONC.theorem.C2}
Suppose that $\hat{w}$ is a weak minimum of problem (OC). Then
\begin{equation}
\label{SONC.general.C2}
\Omega(\bar{w}) \ge 0,\quad \text{ for all } \bar{w} \in \mathcal{C}_2.
\end{equation}
\end{theorem}
\if{
\begin{remark}
\label{LegendreClesbch.corollary}
As discussed in \cite{Aronna2018}, for the case when the set of multipliers is not a singleton, \ref{SONC.general.C2} can be extended by restricting the set of multipliers to a set with more information around the nominal trajectory.
The new second order necessary condition is given as
\begin{equation}
\label{maximum.SONC}
\mathop{\rm max}_{\lambda \in \left({\rm co}\Lambda\right)^\#}\Omega[\lambda](\bar{w}) \ge 0, \text{ for all } \bar{w} \in \mathcal{C}_2,
\end{equation}
where ${\rm co} \Lambda$ denotes the convex hull of $\Lambda$ and the set $\left({\rm co}\Lambda\right)^\#$ can be characterized as
\begin{equation}
\left({\rm co}\Lambda\right)^\# = \{ \lambda \in {\rm co} \Lambda : H_{uu}[\lambda] \succeq 0 \ \text{and} \ H_{uv}[\lambda] = 0, \text{ a.e. on }[0,T] \}.
\end{equation}
When the set of multipliers is a singleton, \eqref{maximum.SONC} implies that the set $\left({\rm co}\Lambda\right)^\#$ is nonempty and coincides with $\Lambda$. As a consequence, the Legendre-Clebsch condition stated in \eqref{LegendreClebsch.mixed.equivalent} follows. Hence the term $H_{uv}$ can be omitted from the quadratic form $\Omega$.
\end{remark}
}\fi
To state second order sufficient conditions one cannot rely on coercivity of $\Omega$ w.r.t. the controls since $H_{vv} \equiv 0$. In order to overcome this problem, the {\em Goh transform} is employed. The latter is a change of variables introduced by Goh in \cite{goh1966second} and applied by him and other authors to derive second order conditions \cite{Goh66, Dmi77}. For the linearized system \eqref{statedynamics.linearized}, the Goh transform is defined as
\begin{equation}
\label{transformation.Goh}
\bar{y}(t) := \displaystyle \int_0^t \bar{v}(\tau){\rm d} \tau,\quad \bar{\xi}(t):= \bar{x}(t) - f_v(t)\bar{y}(t),
\quad \text{for} \,\, t \in [0,T].
\end{equation}
One can easily check that the dynamics of the new variable $\bar{\xi}$ is given by
\begin{gather}
\label{dynamics.xi}
\dot{\bar{\xi}} = f_x\bar{\xi} + f_u\bar{u} + B\bar{y}, \quad \bar{\xi}(0) = \bar{x}(0),\\
\label{matrixB}
\text{where }B := f_xf_v - \frac{{\rm d}}{{\rm d} t}f_v,
\end{gather}
and $B$ is well-defined since $u$ is differentiable as stated in Assumption \ref{regularity.controls}.
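Indeed, the verification is a one-line computation using \eqref{statedynamics.linearized}, $\dot{\bar{y}} = \bar{v}$ and $\bar{x} = \bar{\xi} + f_v\bar{y}$:

```latex
\begin{align*}
\dot{\bar{\xi}} &= \dot{\bar{x}} - \Big(\tfrac{{\rm d}}{{\rm d} t}f_v\Big)\bar{y} - f_v\dot{\bar{y}}
= f_x\bar{x} + f_u\bar{u} + f_v\bar{v} - \Big(\tfrac{{\rm d}}{{\rm d} t}f_v\Big)\bar{y} - f_v\bar{v}\\
&= f_x\bar{\xi} + f_u\bar{u} + \Big(f_xf_v - \tfrac{{\rm d}}{{\rm d} t}f_v\Big)\bar{y}
= f_x\bar{\xi} + f_u\bar{u} + B\bar{y}.
\end{align*}
```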
We are interested in how the functional $\Omega$ and the critical cone are expressed in terms of the transformed variables $(\bar{\xi}, \bar{u}, \bar{y})$. For this, consider a critical direction $ \bar{w} \in {\mathcal C}$. Note that $\bar{x}(T) = \bar{\xi}(T) + f_v(T)\bar{y}(T)$ and $\bar{x}(0) = \bar{\xi}(0)$. Hence we introduce the new variable $\bar{h} := \bar{y}(T)$, which appears in the transformation of the quadratic functional through integration by parts and, when passing to the limit in the $L^2$-topology, becomes a value independent of $\bar{y}$. Equation \eqref{endpoint.linearized.constraints} can be rewritten as
\begin{equation}
\label{endpoint.linearized.constraints.Goh}
D \eta_j(\hat{x}(0), \hat{x}(T))\left(\bar{\xi}(0), \bar{\xi}(T) + f_v(T)\bar{h}\right) = 0, \ \text{ for $j = 1, \cdots, d_{\eta}$},
\end{equation}
so that the critical cones ${\mathcal C}_2$ and ${\mathcal C}$ are respectively mapped into the sets
{\small
\begin{gather}
\mathcal{P}_2 := \left\{ (\bar{\xi}, \bar{u}, \bar{y}, \bar{h}) \in \mathcal{W}_2 \times \mathbb{R}^m : \bar{y}(0) = 0, \bar{y}(T) = \bar{h}, \eqref{dynamics.xi} \text{ and }\eqref{endpoint.linearized.constraints.Goh} \text{ hold}\right\},\\
\mathcal{P} := \left(\mathcal{P}_2 \cap \mathcal{W}\right) \times \mathbb{R}^m.
\end{gather}}
The quadratic functional $\Omega$ can also be written in terms of the new variables $(\bar{\xi}, \bar{u}, \bar{y}, \bar{h})$, and takes the form
\begin{multline}
\label{second_variation.Gohtransform}
\Omega_{\mathcal{P}}(\bar{\xi}, \bar{u}, \bar{v}, \bar{y}, \bar{h}):= g(\bar{\xi}(0), \bar{\xi}(T), \bar{h}) + \displaystyle\int_0^T\left(\bar{\xi}^TH_{xx}\bar{\xi} + 2\bar{u}^TH_{ux}\bar{\xi} \right. \\
\left. + 2\bar{y}^TM\bar{\xi} + \bar{u}^TH_{uu}\bar{u} + 2\bar{y}^TE\bar{u} + \bar{y}^TR\bar{y} + 2\bar{v}^TG\bar{y}\right) {\rm d} t,
\end{multline}
where
\begin{gather}
\label{matrixM}
M:= f_v^TH_{xx}-\dot{H}_{vx} - H_{vx}f_x, \ \ E:= f_v^TH^T_{ux} - H_{vx}f_u,\\
\label{matrixG}
S:= \mbox{$\frac{1}{2}$} \left(H_{vx}f_v + (H_{vx}f_v)^T\right), \ \ G:= \mbox{$\frac{1}{2}$}\left(H_{vx}f_v - (H_{vx}f_v)^T\right),\\
\label{matrixR}
R:= f_v^TH_{xx}f_v - (H_{vx}B + (H_{vx}B)^T) - \dot{S},\\
\label{function.g}
g(\bar{\xi}_0, \bar{\xi}_T , \bar{h}):= D^2\ell(\bar\xi_0, \bar\xi_T + f_v(T)\bar{h})^2 + \bar{h}^T(2H_{vx}(T)\bar\xi_T + S(T)\bar{h}).
\end{gather}
For every critical variation $(\bar{x}, \bar{u},\bar{v})$ and its respective transformed version $(\bar{\xi}, \bar{u}, \bar{y}, \bar{y}(T))$, one can relate the quadratic functionals $\Omega$ and $\Omega_{\mathcal{P}}$ through integration by parts, as in \cite{Dmi77, Aronna2018}, obtaining
\begin{equation}
\label{quadraticforms.relation}
\Omega(\bar{x}, \bar{u}, \bar{v}) = \Omega_{\mathcal{P}}(\bar{\xi}, \bar{u}, \bar{v}, \bar{y}, \bar{y}(T)).
\end{equation}
In view of the latter identity, one can obtain optimality conditions in terms of $\Omega_{\mathcal{P}}$ and its extension to $\mathcal{W}_2 \times \mathbb{R}^m$ introduced below.
An important issue is the presence of the term $2\bar{v}^TG\bar{y}$, which depends on the untransformed variation $\bar{v}$. The expression of $G$ (see \eqref{second_variation.Gohtransform} and \eqref{matrixG}) gives
\begin{equation}
G_{ij} = - \hat{p}\cdot[f_i,f_j].
\end{equation}
Hence, using Goh's conditions from Proposition \ref{Goh.condition}, the matrix $G$ vanishes and our quadratic form does not depend on $\bar{v}$. The new quadratic form $\Omega_{\mathcal{P}_2}$, obtained from continuously extending $\Omega_{\mathcal{P}}$ to $\mathcal{W}_2 \times \mathbb{R}^m$, assumes the form
{\small
\begin{multline}
\label{second_variation.Gohtransform.P2}
\Omega_{\mathcal{P}_2}(\bar{\xi}, \bar{u}, \bar{y}, \bar h):= g(\bar{\xi}(0), \bar{\xi}(T), \bar h) \\+ \int_0^T\left(\bar{\xi}^TH_{xx}\bar{\xi} + 2\bar{u}^TH_{ux}\bar{\xi} + 2\bar{y}^TM\bar{\xi} + \bar{u}^TH_{uu}\bar{u} + 2\bar{y}^TE\bar{u} + \bar{y}^TR\bar{y} \right) {\rm d} t.
\end{multline}}
We are now able to state a version of the necessary conditions which can be strengthened to sufficient conditions once we assume coercivity of $\Omega_{\mathcal{P}_2}$.
\begin{theorem}[\cite{Aronna2018}]
If $\hat{w}$ is a weak minimum of problem (OC), then
\begin{equation}
\label{SONC.Goh.setG.P2}
\Omega_{\mathcal{P}_2}(\bar{\xi}, \bar{u}, \bar{y}, \bar{h}) \ge 0, \quad \text{on} \ \mathcal{P}_2.
\end{equation}
\end{theorem}
\subsection{Second Order Sufficient Conditions of Optimality}
We introduce the following $\gamma$-order, which shall be used to state the sufficient conditions. For $(\bar{x}(0), \bar{u}, \bar{y}, \bar{h}) \in \mathbb{R}^n\times \mathcal{U}_2\times \mathcal{V}_2 \times \mathbb{R}^m,$ we define
\begin{equation}
\gamma_{\mathcal{P}}(\bar{x}(0), \bar{u}, \bar{y}, \bar{h}) := |\bar{x}(0)|^2 + \left|\bar h\right|^2 + \displaystyle \int_0^T(|\bar{u}(t)|^2 + |\bar{y}(t)|^2) {\rm d} t.
\end{equation}
We can also express it as a function of the original variations by setting
\begin{equation*}
\gamma(\bar{x}(0), \bar{u}, \bar{v}) := \gamma_{\mathcal{P}}(\bar{x}(0), \bar{u}, \bar{y}, \bar{h}),
\end{equation*}
where $\bar{y}$ is obtained from $\bar{v}$ through Goh's transform \eqref{transformation.Goh} and $\bar{h}:=\bar{y}(T)$.
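As a purely illustrative numerical sketch (not part of the paper; the function name and sampling grid are hypothetical), the $\gamma$-order can be approximated from sampled variations by a trapezoidal rule:

```python
import numpy as np

def gamma_P(x0, u, y, h, t):
    """Approximate gamma_P(x0, u, y, h) = |x0|^2 + |h|^2
    + int_0^T (|u(t)|^2 + |y(t)|^2) dt, with u, y sampled on the
    time grid t as arrays of shape (len(t), dim)."""
    integrand = np.sum(u**2, axis=1) + np.sum(y**2, axis=1)
    # Trapezoidal rule for the integral over [0, T].
    integral = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(t))
    return float(x0 @ x0 + h @ h + integral)

# Example: x0 = (1,), h = (2,), u(t) = t and y(t) = 1 on [0, 1],
# so gamma_P = 1 + 4 + 1/3 + 1 = 19/3.
t = np.linspace(0.0, 1.0, 1001)
val = gamma_P(np.array([1.0]), t[:, None], np.ones((t.size, 1)),
              np.array([2.0]), t)
```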
\begin{definition}[$\gamma$-growth]
We say that a trajectory $\hat{w} = (\hat{x}, \hat{u}, \hat{v})$ satisfies the {\em $\gamma$-growth condition in the weak sense} if there exist $\varepsilon, \rho >0$ such that
\begin{equation}
\label{g-growth}
\phi(x(0), x(T)) \ge \phi(\hat{x}(0), \hat{x}(T)) + \rho\gamma(x(0) - \hat{x}(0), u - \hat{u}, v - \hat{v}),
\end{equation}
for every feasible trajectory $w$ that verifies $\norm{w - \hat{w}}_{\infty} < \varepsilon$.
\end{definition}
The following theorem was proved in \cite{Aronna2018} for a more general case allowing inequality endpoint constraints and possibly non-unique multiplier, and previously proposed by Dmitruk in \cite{Dmi77} in the totally control-affine setting.
\begin{theorem}[Sufficient condition for weak optimality \cite{Aronna2018}]
\label{SOSC}
Let $\hat w$ be a feasible trajectory satisfying the PMP with unique associated multiplier $\hat\lambda$. If for some $\rho > 0$ the quadratic functional $\Omega_{\mathcal{P}_2}$ satisfies
\begin{equation}
\label{Omega.P2.coersity.unique}
\Omega_{\mathcal{P}_2}(\bar{\xi}, \bar{u}, \bar{y}, \bar{h}) \ge \rho \gamma_{\mathcal{P}}(\bar{x}(0), \bar{u}, \bar{y}, \bar{h}), \quad \text{on } \mathcal{P}_2,
\end{equation}
then $\hat{w}$ is a weak minimum satisfying the $\gamma$-growth in the weak sense.
Conversely, if $\hat{w}$ is a weak minimum satisfying $\gamma$-growth, then \eqref{Omega.P2.coersity.unique} is satisfied for some $\rho > 0$.
\end{theorem}
\begin{corollary}[\cite{Aronna2018}]
\label{ColrecoverLCconditions}
Let $\hat w$ be a feasible trajectory satisfying the PMP with unique associated multiplier $\hat\lambda$, and suppose it satisfies the coercivity condition \eqref{Omega.P2.coersity.unique}. Then
\begin{equation}
\label{coercivity.matrix}
\left(\begin{array}{cc}
H_{uu} & E^T \\
E & R
\end{array}\right) \succeq \rho I, \quad \textnormal{a.e. on $[0,T]$}.
\end{equation}
\end{corollary}
Goh stated in \cite{Goh66} that \eqref{coercivity.matrix} can be used to recover the strengthened Legendre-Clebsch condition \eqref{LC-like.mixedcontrols}. This result (see Proposition \ref{coercivity_implies_LC} below) is of great use since condition \eqref{LC-like.mixedcontrols} is necessary to obtain the controls in feedback form and assemble the optimality system (OS), as done in Theorem \ref{OC-OSequivalence}. To prove this implication we use Lemma \ref{goh_computations} below, which can be found in \cite{GohThesis, Goh66} and has been used by numerous authors in the literature. Nevertheless, since we believe that Goh's work \cite{Goh66} contains some miscalculations, we have included a revisited proof of Lemma \ref{goh_computations} in Appendix \ref{appendix_Goh.computations}.
\begin{lemma}
\label{goh_computations}
The following identities hold:
\begin{equation}
E = -\frac{\partial \dot{H}_v}{\partial u} \quad \text{ and } \quad R - EH_{uu}^{-1}E^T = -\frac{\partial \ddot{H}_v}{\partial v}.
\end{equation}
\end{lemma}
\begin{proposition}
\label{coercivity_implies_LC}
Let $\hat{w}$ be a feasible trajectory satisfying the coercivity condition \eqref{coercivity.matrix}. Then, the strengthened Legendre-Clebsch conditions, in the form of \eqref{LC-like.mixedcontrols}, hold.
\end{proposition}
\begin{proof}
The argument is inspired by the discussion from Goh in \cite{goh2008optimal}. If the matrix in \eqref{coercivity.matrix} is positive definite then, for any $Q \in \mathbb{R}^{(l+m) \times (l+m)}$, we have
\begin{equation}
a^TQ^T
\left(\begin{array}{cc}
H_{uu} & E^T \\
E & R
\end{array}\right)
Qa > 0,
\end{equation}
provided that the vector $a$ is not in the kernel of $Q$. Therefore, in order for the product matrix to be positive definite, it suffices to choose $Q$ with full rank.
Setting
$Q:={\small \left(\begin{array}{cc}
I & -(H_{uu})^{-1}E^T \\
0 & I
\end{array}\right)}
$, we check that
{\small
\begin{equation*}
Q^T
\left(\begin{array}{cc}
H_{uu} & E^T \\
E & R
\end{array}\right)
Q =
\left(\begin{array}{cc}
H_{uu} & 0 \\
0 & R - E(H_{uu})^{-1}E^T
\end{array}\right) = \left(\begin{array}{cc}
H_{uu} & 0 \\
0 & - \displaystyle \frac{\partial \ddot{H}_v}{\partial v}
\end{array}\right),
\end{equation*}}
where the last equality comes from Lemma \ref{goh_computations}. Since the matrix $Q$ is nonsingular, \eqref{LC-like.mixedcontrols} follows.
\end{proof}
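The block-diagonalization used in this proof can be checked numerically on a toy instance; the matrices below are arbitrary stand-ins for $H_{uu}$, $E$ and $R$ (with $l=2$, $m=3$), not data from the paper:

```python
import numpy as np

# Arbitrary stand-ins for H_uu (l x l, symmetric), E (m x l), R (m x m).
Huu = np.array([[2.0, 0.5],
                [0.5, 1.0]])
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
R = 5.0 * np.eye(3)

M = np.block([[Huu, E.T],
              [E, R]])

# Congruence with the upper-triangular Q = [[I, -Huu^{-1} E^T], [0, I]].
Huu_inv = np.linalg.inv(Huu)
Q = np.block([[np.eye(2), -Huu_inv @ E.T],
              [np.zeros((3, 2)), np.eye(3)]])
D = Q.T @ M @ Q

# D is block diagonal: diag(Huu, R - E Huu^{-1} E^T), so M is positive
# definite iff both Huu and the Schur complement are.
schur = R - E @ Huu_inv @ E.T
```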
\subsection{The shooting function}
We define the {\em shooting function} as follows.
\begin{definition}[Shooting function]
\label{shooting.function}
Let ${\mathcal S} : \mathbb{R}^n \times \mathbb{R}^{n,*} \times \mathbb{R}^{d_{\eta}} =: D({\mathcal S}) \to \mathbb{R}^{d_{\eta}} \times \mathbb{R}^{2n + 2m}$ be the shooting function given by
\begin{equation}
\label{shooting.function.equation}
(x_0, p_0, \beta) =: \nu \mapsto {\mathcal S} (\nu) =
\left(\begin{array}{c}
\eta(x_0,x(T))\\
p_0 + D_{x_0}\ell(x_0,x(T),\beta)\\
p(T) - D_{x_T}\ell(x_0,x(T),\beta)\\
H_v\left(x(T), U(x(T), p(T)), p(T)\right)\\
\dot{H}_v\left(x_0, U(x_0, p_0), p_0\right)
\end{array}\right),
\end{equation}
where $(x,p)$ is the solution of the initial value problem
\begin{equation}
\label{dynamics.substituted}
\begin{split}
\dot{x} &= H_p(x,U(x,p), V(x,p),p), \quad x(0) = x_0,\\
\dot{p} &= -H_x(x,U(x,p), V(x,p),p), \quad p(0) = p_0.\\
\end{split}
\end{equation}
\end{definition}
Solving the differential-algebraic system \eqref{optimality.system} is equivalent to finding the roots of the shooting function ${\mathcal S}$. Since the number of unknowns in ${\mathcal S}(\hat\nu) = 0$ may be smaller than the number of equations, the Gauss-Newton method is a suitable approach. At each step the method updates the current approximation $\nu_k$ by
\begin{equation}
\label{nu.update}
\nu_{k+1} \leftarrow \nu_k + \Delta_k,
\end{equation}
where the increment $\Delta_k$ is computed by solving the linear approximation of the least squares problem
\begin{equation}
\label{gauss-newton.approximation}
\begin{array}{l}
\displaystyle \mathop{\rm min}_{\Delta \in D({\mathcal S})} \left| {\mathcal S}(\nu_k) + {\mathcal S}'(\nu_k)\Delta \right|^2.
\end{array}
\end{equation}
The solution of the linear regression \eqref{gauss-newton.approximation} is known to be
\begin{equation}
\label{linear.regression.solution}
\Delta_k = -\left( {\mathcal S} '(\nu_k)^T{\mathcal S} '(\nu_k)\right)^{-1}{\mathcal S} '(\nu_k)^T{\mathcal S} (\nu_k),
\end{equation}
provided the matrix ${\mathcal S} '(\nu_k)^T{\mathcal S} '(\nu_k)$ is non-singular.
One can prove that the Gauss-Newton method \eqref{nu.update}-\eqref{linear.regression.solution} converges at least linearly as long as the derivative ${\mathcal S}'(\hat\nu)$ exists and is injective. If in addition it is also Lipschitz continuous, the method converges locally quadratically (see {\em e.g.} Fletcher \cite{fletcher2013practical}, or alternatively Bonnans \cite{bonnans2006numerical}).
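To illustrate the iteration \eqref{nu.update}-\eqref{linear.regression.solution}, here is a minimal sketch applied to a generic residual map rather than the shooting function itself; the toy residual and all names are hypothetical:

```python
import numpy as np

def gauss_newton(S, Sprime, nu0, tol=1e-10, max_iter=50):
    """Gauss-Newton iteration nu_{k+1} = nu_k + Delta_k, with
    Delta_k = -(S'(nu)^T S'(nu))^{-1} S'(nu)^T S(nu)."""
    nu = np.asarray(nu0, dtype=float)
    for _ in range(max_iter):
        r, J = S(nu), Sprime(nu)
        # Solve the normal equations for the least-squares increment.
        delta = -np.linalg.solve(J.T @ J, J.T @ r)
        nu = nu + delta
        if np.linalg.norm(delta) < tol:
            break
    return nu

# Toy residual with more equations (3) than unknowns (2); zero at (1, 2).
S = lambda nu: np.array([nu[0] - 1.0, nu[1] - 2.0, (nu[0] - 1.0) * nu[1]])
Sprime = lambda nu: np.array([[1.0, 0.0],
                              [0.0, 1.0],
                              [nu[1], nu[0] - 1.0]])
root = gauss_newton(S, Sprime, [0.5, 0.5])
```

Since the toy problem has zero residual at the solution, the iteration exhibits the local quadratic convergence mentioned above.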
\subsection{Computation of the derivative of the shooting function}
In this paragraph we aim to obtain a linearized differential system that will be used afterwards to compute the derivative of the shooting function.
A general differential-algebraic control system can be written as
\begin{equation}
\label{differential-algebraic.general}
\left\{\begin{split}
\dot{\xi} &= \mathcal{F}(\xi, \alpha),\\
0 &= \mathcal{G}(\xi,\alpha),\\
0 &= \mathcal{I}(\xi(0), \xi(T)),
\end{split}\right.
\end{equation}
where $\mathcal{F}:\mathbb{R}^n\times\mathbb{R}^m \to \mathbb{R}^n,\, \mathcal{G}:\mathbb{R}^n\times\mathbb{R}^m \to \mathbb{R}^{d_{\mathcal{G}}}$ and $\mathcal{I}:\mathbb{R}^n\times\mathbb{R}^n \to \mathbb{R}^{d_{\mathcal{I}}}$ are $\mathcal{C}^1$-functions. The functions $\xi$ and $\alpha$ represent the tuple of states and costates and the control, respectively. Consider $\tilde{w} = (\tilde\xi, \tilde\alpha)$ a solution of \eqref{differential-algebraic.general}, then the {\em linearization of \eqref{differential-algebraic.general}} at $\tilde{w}$ is given by
\begin{equation}
\label{differential-algebraic.general.linearized}
\left\{\begin{split}
\dot{\bar{\xi}}&= D_{\xi}\mathcal{F}(\tilde{w})\bar{\xi} + D_{\alpha}\mathcal{F}(\tilde{w})\bar{\alpha},\\
0 &= D_{\xi}\mathcal{G}(\tilde{w})\bar{\xi} + D_{\alpha}\mathcal{G}(\tilde{w})\bar{\alpha},\\
0 &= D_{\xi_0}\mathcal{I}\left(\tilde{\xi}(0), \tilde{\xi}(T)\right)\bar{\xi}(0) + D_{\xi_T}\mathcal{I}\left(\tilde{\xi}(0), \tilde{\xi}(T)\right)\bar{\xi}(T).
\end{split}\right.
\end{equation}
Let us apply this procedure to get the linearization of \eqref{optimality.system}. We set $\xi:=(x,p)$, $\alpha:=(u,v)$ and $w := (\xi, \alpha)$.
The linearized state and costate dynamics \eqref{state.dynamics}, \eqref{costate_dynamics} can be written as
\begin{align}
\label{statedynamics.linearized}
\dot{\bar{x}} &= D_xf(w)\bar{x} + D_uf(w)\bar{u} + D_vf(w)\bar{v},\\
\label{costatedynamics.linearized}
\dot{\bar{p}} &= -\left(\bar{p} H_{xp} + \bar{x}^T H_{xx} + \bar{u}^T H_{ux} + \bar{v}^TH_{vx}\right).
\end{align}
The endpoint conditions are also easily linearized, giving
{\small
\begin{align}
\label{endpoint.linearized.constraints}
0 &= D\eta(\hat{x}(0), \hat{x}(T))(\bar{x}(0), \bar{x}(T)),\\
\label{transversality_0.linearized}
\bar{p}(0) &= -\left(\bar{x}^T(0) D^2_{x_0}\ell(\hat{w}, \hat\beta) + \bar{x}^T(T) D^2_{x_0x_T}\ell(\hat{w}, \hat\beta) + \sum_{j = 1}^{d_{\eta}} \hat\beta_jD_{x_0}\eta_j \right),\\
\label{transversality_T.linearized}
\bar{p}(T) &= \left(\bar{x}^T(T) D^2_{x_T}\ell(\hat{w}, \hat\beta) + \bar{x}^T(T) D^2_{x_0x_T}\ell(\hat{w}, \hat\beta) + \sum_{j = 1}^{d_{\eta}} \hat\beta_jD_{x_T}\eta_j \right).
\end{align}}
The linearization of the other components of \eqref{shooting.function} gives
\begin{align}
\label{linHu}
\textnormal{Lin } H_u &= \bar{p} D_uf + \bar{x}^TH^T_{ux} + \bar{u}^TH_{uu}\\
\label{linHv}
\textnormal{Lin } H_v &= \bar{p} D_vf + \bar{x}^TH^T_{vx}\\
\label{linHvT}
\left.\textnormal{Lin } H_v \right|_{t = T} &= \left. \bar{p} D_vf \right|_{t = T}+\left. \bar{x}^TH^T_{vx} \right|_{t = T}\\
\label{lindotHv0}
\left.\textnormal{Lin } \dot{H_v}\right|_{t = 0} &= \left. \frac{{\rm d}}{{\rm d} t}\right|_{t = 0} \left(\bar{p} D_vf + \bar{x}^TH^T_{vx}\right).
\end{align}
The linearized system \eqref{statedynamics.linearized}-\eqref{transversality_T.linearized}, \eqref{linHu}-\eqref{lindotHv0} is referred to as (LS).
Finally, the evaluation of ${\mathcal S}'$ in the direction $\bar{\nu} := (\bar{x}_0, \bar{p}_0, \bar{\beta})$ gives:
\begin{equation}
\label{shootingfunction.derivative.explicit}
{\mathcal S}'(\hat{\nu}) \bar{\nu} =
\left(
\begin{array}{c}
D\eta(\hat{x}(0), \hat{x}(T))(\bar{x}_0, \bar{x}(T))\\
\bar{p}_0 + \left[\bar{x}^T_0 D^2_{x_0}\ell + \bar{x}^T(T) D^2_{x_0x_T}\ell + \sum_{j = 1}^{d_{\eta}} \bar{\beta}_jD_{x_0}\eta_j \right]\\
\bar{p}(T) - \left[\bar{x}^T(T) D^2_{x_T}\ell + \bar{x}^T(T) D^2_{x_0x_T}\ell + \sum_{j = 1}^{d_{\eta}} \bar{\beta}_jD_{x_T}\eta_j \right]\\ \displaystyle
\left. \bar{p} D_vf + \bar{x}^TH^T_{vx} \right|_{t = T}\\
\left. \frac{{\rm d}}{{\rm d} t} \left(\bar{p} D_vf + \bar{x}^TH^T_{vx}\right) \right|_{t = 0}
\end{array}
\right).
\end{equation}
\subsection{The transformed problem}
Given a feasible control $(\hat{u},\hat{v})$, we call {\em control structure} the configuration of bang and singular arcs of $\hat{v}$. In (CP), there may be feasible trajectories with a bang-singular structure different from the one of $(\hat{u},\hat{v}).$ However, if $(\hat{u},\hat{v})$ is a local solution for (CP), it will also be a local solution for a problem with a fixed control structure. We assume {\em a priori} knowledge of the optimal control structure to formulate a new unconstrained problem whose feasible controls correspond to controls of the original problem that have such fixed structure. This is achieved by a reparametrization from $[0,T]$ to the interval $[0,1]$ as described next.
In this new unconstrained problem, we associate to each switching time a state variable $T_k$ with null dynamics, keeping the convention that $T_0 = 0$ and $T_N = T$. These variables are initialized in the algorithm with rough estimates of the optimal switching times, which are then iteratively tuned by the shooting scheme. To each interval $I_k := [T_{k-1}, T_k]$ we also associate a state variable $x^k$, which is the reparametrization to $[0,1]$ of the state restricted to $I_k$.
The control variables of the new problem are defined as follows. For each interval $I_k$ of the partition we define a control variable $u^k\colon [0,1]\to \mathbb{R}^l$ that appears nonlinearly and an affine control $v^k\colon [0,1] \to \mathbb{R}^{|S_k|}$. This way, each $v^k$ has as many entries as the number of singular components of $\hat{v}$ on $\Ih_k$. The bang components of $v$ appear as constants and not as control variables, {\em i.e.} they are fixed to either $0$ or $1$.
The trajectories of the transformed problem have the form
{\small
\begin{equation}
\label{Wh}
W := \left(\left(x^k\right)_{k=1}^N, \left(u^k\right)_{k=1}^N, \left(v^k\right)_{k=1}^N, \left(T_k\right)_{k=0}^{N}\right),
\end{equation}}
and the transformed problem, denoted as (TP), is the following:
{ \small
\begin{align*}
\mathop{\rm min}
&\,\phi(x^1(0), x^N(1)) \\
\text{s.t. } & \dot{x}^k = (T_k - T_{k-1})\left(\sum_{i\in B_k\cup \{0\}}f_i(x^k,u^k) + \sum_{i\in S_k} v^k_if_i(x^k,u^k)\right), \ \ k = 1,\cdots,N ,\\
&\dot{T}_k = 0, \ \ k = 1,\cdots,N-1 ,\\
&\eta(x^1(0), x^N(1)) = 0, \ \ \\
&x^k(1) = x^{k+1}(0), \ \ k = 1,\cdots,N-1.
\end{align*}
}
Note that given some admissible trajectory $(x,u,v)$ of (CP), and its associated switching times $\left(T_k\right)$, we can obtain a feasible trajectory for (TP) via the following transformation
\begin{equation}\label{transform.CP-TP}
\begin{split}
x^k(s) &:= x\left(T_{k-1} + (T_k - T_{k-1})s\right),\\
u^k(s) &:= u\left(T_{k-1} + (T_k - T_{k-1})s\right),\\
v^k(s) &:= v\left(T_{k-1} + (T_k - T_{k-1})s\right).
\end{split}
\qquad \text{ for $s \in [0,1]$}.
\end{equation}
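As an illustration of the change of variables \eqref{transform.CP-TP}, the following Python sketch maps a sampled trajectory of (CP) to the arc-wise trajectories $x^k$ of (TP); the trajectory and switching times used here are hypothetical and serve only to show the reparametrization.

```python
import numpy as np

def to_tp(traj, T_grid, s):
    """Map a trajectory of (CP), given as a callable of t in [0, T],
    to the piecewise trajectories x^k of (TP), each defined on s in [0, 1].
    T_grid = [T_0, ..., T_N] are the switching times."""
    pieces = []
    for k in range(1, len(T_grid)):
        Tk0, Tk1 = T_grid[k - 1], T_grid[k]
        # x^k(s) := x(T_{k-1} + (T_k - T_{k-1}) s)
        pieces.append(traj(Tk0 + (Tk1 - Tk0) * s))
    return pieces

# hypothetical example: x(t) = t**2 on [0, 2], one switching time at t = 1
s = np.linspace(0.0, 1.0, 5)
x1, x2 = to_tp(lambda t: t**2, [0.0, 1.0, 2.0], s)
# continuity at the junction mirrors the constraint x^1(1) = x^2(0) of (TP)
print(np.isclose(x1[-1], x2[0]))  # True
```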
In fact, we show below that the weak optimality of a solution of (TP) can be derived from the optimality, in an appropriate sense, of a solution of (CP). To do this, consider the definition of {\em Pontryagin minimum} \cite{MR1641590} given next.
\begin{definition}
\label{pontryagin_minimum}
A feasible trajectory $\hat{w} \in \mathcal{W}$ is a {\em Pontryagin minimum} of \textnormal{(CP)} if, for any positive $N$, there exists some $\varepsilon_N > 0$ such that $\hat{w}$ is a minimum in the set of feasible trajectories $w = (x,u,v) \in \mathcal{W}$ satisfying
\begin{equation*}
\norm{x - \hat{x}}_{\infty} < \varepsilon_N, \ \norm{(u,v) - (\hat{u},\hat{v})}_{1} < \varepsilon_N, \ \norm{(u,v) - (\hat{u},\hat{v})}_{\infty} < N.
\end{equation*}
\end{definition}
\begin{lemma}
\label{CP-TP-relation}
If $\hat{w}$ is a Pontryagin minimum of \textnormal{(CP)}, then $\Wh$ obtained from $\hat{w}$ using transformation \eqref{transform.CP-TP} is a weak minimum of \textnormal{(TP)}.
\end{lemma}
The proof of this lemma follows as a direct extension of a similar result for the totally control-affine case given in \cite{aronna2013shooting}. For the sake of completeness, we include the proof in Appendix \ref{appendix_Goh.computations}.
\subsection{The shooting algorithm for the transformed problem} In order to have an algorithm suitable for solving control-constrained problems, our final step is to define a proper shooting function and apply the procedure described in Section \ref{shooting_algorithm}.
We start by stating the PMP for this unconstrained problem (TP).
Define the endpoint Lagrangian
\begin{equation}
\tilde{\ell}:= \phi(x^1(0), x^N(1)) + \sum_{j = 1}^{d_{\eta}}\beta_j\eta_j(x^1(0), x^N(1)) + \sum_{k = 1}^{N-1}\theta^k\left(x^k(1)-x^{k+1}(0)\right).
\end{equation}
Note that each multiplier $\beta_j$ is associated with the endpoint constraints that come from the original problem and each $\theta^k$ is associated with the additional constraints of continuity of the state from (TP). The pre-Hamiltonian of (TP) is given by
\begin{equation*}
\begin{split}
\tilde H &:= \sum_{k = 1}^{N}(T_k - T_{k-1})H^k,
\end{split}
\end{equation*}
where {\small $H^k := \displaystyle p^k\cdot \left(\sum_{i\in B_k\cup \{0\}}f_i(x^k,u^k) + \sum_{i\in S_k} v^k_if_i(x^k,u^k)\right).$}
Hence, from the PMP, the costates follow the dynamics
\begin{equation}
\dot{p}^k = - (T_k - T_{k-1})D_{x^k}H^k,
\end{equation}
with transversality conditions
\begin{gather}
p^1(0) = -D_{x^1_0}\phi - \sum_{j = 1}^{d_{\eta}}\beta_jD_{x^1_0}\eta_j(x^1(0), x^N(1)),\\
\label{costates_TP_continuity}
\begin{array}{ll}
p^k(1) = \theta^k, & \text{ for $k = 1, \cdots, N-1,$}\\
p^k(0) = \theta^{k-1}, & \text{ for $k = 2, \cdots, N,$}
\end{array}\\
p^N(1) = D_{x_1^N}\phi + \sum_{j = 1}^{d_{\eta}}\beta_jD_{x_1^N}\eta_j(x^1(0), x^N(1)).
\end{gather}
Note that equation \eqref{costates_TP_continuity} can be replaced by
\begin{equation}
\label{costates_TP_continuity2}
p^k(1) = p^{k+1}(0), \quad\text{for $k = 1, \cdots, N-1$},
\end{equation}
hence, eliminating the multipliers $\theta^k$.
We must also address the costates $p^{T_k}$ associated with the switching times, which satisfy
{\small
\begin{equation}
\label{costatesTk}
\begin{array}{cccc}
\dot{p}^{T_k} = -H^k + H^{k+1}, & p^{T_k}(0) = 0,& p^{T_k}(1) = 0, & \text{ for $k = 1, \cdots, N-1$}.
\end{array}
\end{equation}}
Combining all conditions from \eqref{costatesTk}, we obtain
\begin{equation}
\label{condition.pTk}
\int_{0}^{1}\left(H^{k+1} -H^k \right){\rm d} t = p^{T_k}(1) - p^{T_k}(0) = 0.
\end{equation}
Since the dynamics are autonomous, the Hamiltonian is constant along the optimal trajectory, and we can equivalently express the conditions \eqref{condition.pTk} for $p^{T_k}$ as
\begin{equation}
H^k = H^{k+1}, \quad \text{ for $k = 1, \cdots, N-1.$}
\end{equation}
Now we are in a position to adapt the shooting scheme for solving (TP). Following the steps of Section \ref{shooting_algorithm}, we start by finding the feedback form of the controls. It suffices to use the representation given in equation \eqref{control_feedback_mappings}:
\begin{equation}
u^k = U\left(x^k, p^k\right), \quad v^k = V_{S_k}\left(x^k, p^k\right),\quad \text{ for $k = 1, \cdots, N.$}
\end{equation}
By Lemma \ref{CP-TP-relation} such controls must also be feasible for (TP) and when the feedback arguments $\hat{x}^k$ and $\hat{p}^k$ correspond to the nominal trajectory, we obtain optimal controls.
We must also define an appropriate shooting function that expresses the stationarity of the Hamiltonian, the initial-final constraints and the transversality conditions. Stationarity with respect to the nonlinear controls is equivalent to the feedback representation for $u$ given in equation \eqref{control_feedback_mappings}. For the linear controls, the feedback form is equivalent to $\ddot{H}_{v_{S_k}} = 0$; hence we must also impose $H^k_{v^k_i}(0) = 0$ and $\dot{H}^k_{v^k_i}(0) = 0$ to ensure the stationarity $H_{v_{S_k}} \equiv 0$.
Note that we can either include the constraints related to the continuity of the states and costates, or integrate each $x^k$ and $p^k$ using the final values of $x^{k-1}$ and $p^{k-1}$ as initial conditions. The clear advantage of the latter strategy is the smaller number of shooting variables, i.e. the initial conditions for states and costates at the switching times can be omitted. On the other hand, explicitly including these constraints makes the algorithm numerically more stable and favors the parallelization of computational implementations, see \cite{stoer2013introduction}.
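To fix ideas, the overall shooting procedure (integrate from candidate initial values, evaluate the shooting function, update the unknowns by a Newton step) can be sketched on a toy scalar example; the dynamics, target and tolerances below are hypothetical and unrelated to the data of (TP).

```python
import numpy as np

def rk4(f, z0, T, n=200):
    """Integrate z' = f(z) on [0, T] with classical RK4."""
    z, h = np.array(z0, float), T / n
    for _ in range(n):
        k1 = f(z); k2 = f(z + h/2*k1); k3 = f(z + h/2*k2); k4 = f(z + h*k3)
        z = z + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return z

# toy state-costate system: x' = p, p' = -x, with x(0) = 0 fixed
f = lambda z: np.array([z[1], -z[0]])

def shooting(p0, target=1.0, T=np.pi/2):
    """S(p0) := x(T; p0) - target, the quantity the scheme drives to zero."""
    return rk4(f, [0.0, p0], T)[0] - target

# Newton iteration with a finite-difference Jacobian
p0, eps = 0.3, 1e-6
for _ in range(5):
    J = (shooting(p0 + eps) - shooting(p0)) / eps
    p0 -= shooting(p0) / J
print(abs(shooting(p0)) < 1e-8)  # True: the shooting equation is satisfied
```

Here the exact solution is $x(t) = p_0 \sin t$, so the shooting function is linear in $p_0$ and Newton's method converges in essentially one step; in the actual scheme for (TP) the unknowns also include the switching times $T_k$ and the multipliers $\beta$.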
The following is the shooting function associated with (TP), with the full set of shooting variables
\begin{equation}
\label{shooting.function.TP}
\begin{array}{c}
\mathcal{S} \colon \mathbb{R}^{Nn}\times\mathbb{R}^{Nn,*}\times\mathbb{R}^{N-1}\times\mathbb{R}^{d_{\eta}} \to \mathbb{R}^{(N-1)n + d_{\eta}}\times \mathbb{R}^{(N+1)n + N-1 +2\sum |S_k|,*}\\
\nu \mapsto \mathcal{S}(\nu) :=
\left(
\begin{array}{c}
\eta(x^1(0), x^N(1)) \\
\left( x^k(1) - x^{k+1}(0)\right)_{k = 1,\cdots,N-1}\\
\left( p^k(1) - p^{k+1}(0)\right)_{k = 1,\cdots,N-1}\\
p^1(0) + D_{x_0^1}\tilde \ell\left(x^1(0), x^N(1)\right)\\
p^N(1) - D_{x_1^N}\tilde \ell\left(x^1(0), x^N(1)\right)\\
\left( H^k(1) - H^{k+1}(0)\right)_{k = 1,\cdots,N-1}\\
\left( p^k\cdot f_i\left(x^k, U^k\right)(0) \right)_{\tiny
\begin{array}{l}
\tiny i \in S_k\\
\tiny k = 1,\dots, N
\end{array}
}\\
\displaystyle
\left(p^k\cdot [f_0, f_i]^x(0)\right)_{\tiny
\begin{array}{l}
\tiny i \in S_k\\
\tiny k = 1,\dots, N
\end{array}
}
\end{array}
\right)
\end{array}
\end{equation}
where we define the vector of shooting arguments as
{\small
\begin{equation}
\nu := \left(\left(x^k(0)\right)_{k=1}^{N}, \left(p^k(0)\right)_{k=1}^{N}, \left(T_k\right)_{k=1}^{N-1}, \beta\right).
\end{equation}}
We recall equation \eqref{switching.function.D1} that gives a concise analytical form for $\dot{H}_{v_i}$ and was used in the formulation of the last component of the above shooting function.
Since the new problem (TP) falls into the same category as the unconstrained problem (OC), we combine Lemma \ref{CP-TP-relation} and Theorem \ref{convergence.unconstrained-problem} in the following result.
\begin{theorem}
\label{convergence.constrained_problem}
If $\hat{w}$ is a Pontryagin minimum of \textnormal{(CP)} such that $\Wh$ given in \eqref{Wh} satisfies the coercivity condition \eqref{Omega.P2.coersity.unique} for problem \textnormal{(TP)}, then the shooting algorithm for \textnormal{(TP)} is locally quadratically convergent.
\end{theorem}
\subsection{Open constraint sets}
We begin with a simpler case where the constraint sets are of the form
\begin{equation*}
U = (-\rho, 1 + \rho)^l, \ \ V = (-\rho, 1 + \rho)^m,
\end{equation*}
with $\rho > 0$, and consider the following assumption.
\begin{assumption}
\label{number_switchs}
Each component of $\hat{u}$ and $\hat{v}$ is a finite concatenation of bang and singular arcs. Equivalently, the optimal solution has a finite number of switching times.
\end{assumption}
From Assumption \ref{number_switchs} we can consider a partition of the time interval $[0,T]$ given by the switching times
\begin{equation*}
\{0 := \Th_0 < \Th_1 < \Th_2 < \cdots < \Th_{N-1} < \Th_N := T\}.
\end{equation*}
Set the intervals $\Ih_k := [\Th_{k-1}, \Th_k]$, for $k = 1, \cdots, N$ and define the following sets of indices
\begin{equation*}
\begin{array}{c}
S_k^{\alpha} := \{ 1\le i \le l: \bar\alpha_i \text{ is singular on }\Ih_k \} \\
E_k^{\alpha} := \{ 1\le i \le l: \bar\alpha_i = 0 \text{ a.e. on }\Ih_k \} \\
N_k^{\alpha} := \{ 1\le i \le l: \bar\alpha_i = 1 \text{ a.e. on }\Ih_k \},
\end{array}
\end{equation*}
for $\alpha \in \{u,v\}$.
\begin{assumption}
\label{LC-constrained}
For each $k = 1, \cdots, N$, denote by $u_{S_k}$ the vector with components $u_i$ for $i \in S_k^u$, and analogously $v_{S_k}$ with $i \in S_k^v$. On the interval $\Ih_k$ we assume the following form of the Legendre-Clebsch conditions
\begin{equation*}
\begin{array}{cc}
\displaystyle \frac{\partial^2}{\partial u_{S_k}^2}H \succ 0, & \displaystyle -\frac{\partial }{\partial v_{S_k}}\ddot{H}_{v_{S_k}} \succ 0.
\end{array}
\end{equation*}
\end{assumption}
As was done in Section \ref{differential_algebraic}, the system
\begin{equation}
\left(\begin{array}{c}
H_u[\hat\lambda](\hat{w})\\
-\ddot{H}_v[\hat\lambda](\hat{w})
\end{array}\right) = 0,
\end{equation}
together with Assumption \ref{LC-constrained}, can be solved by means of the implicit function theorem and, as a consequence, the controls are continuous on each interval $\Ih_k$. Since the controls are continuous on each interval $\Ih_k$, we conclude that they satisfy, for each $i = 1,\cdots, l$ and $j = 1, \cdots, m$,
\begin{equation}
\label{control_bounds}
\begin{array}{c}
-\rho < u_{min} \le \hat{u}_i(t) \le u_{max} < 1+\rho\\
-\rho < v_{min} \le \hat{v}_j(t) \le v_{max} < 1+\rho
\end{array}, \text{ on $[0,T]$}.
\end{equation}
Now, given the partition of $[0,T]$ induced by the switching times, and the arc structure, we define the following unconstrained problem with time normalized to the interval $[0,1]$. In this new control problem the states correspond to $x^k$, the states of the original problem on the interval $I_k:=[T_{k-1}, T_k]$, where the $T_k \in \mathbb{R}$ are variables with zero dynamics. Note that the latter are a rough estimate of the optimal switching times, which will be iteratively tuned by the shooting algorithm. We only require $T_0 = 0$ and $T_N = T$. The controls $u^k$ and $v^k$ are defined analogously. The transformed problem (TP) on the interval $[0,1]$ is the following.
\begin{align}
\underset{(u,v)\in \mathcal{U}\times\mathcal{V}}{\text{ minimize }} &
\phi(x^1(0), x^N(1)) \\
\text{subject to} & \nonumber \\
& \dot{x}^k(t) = (T_k - T_{k-1})f(x^k(t), u^k(t), v^k(t)), \ \ k = 1,\cdots,N\\
&\dot{T}_k = 0, \ \ k = 1,\cdots,N-1\\
& \eta(x^1(0), x^N(1)) = 0, \ \ \\
& x^k(1) = x^{k+1}(0), \ \ k = 1,\cdots,N-1.
\end{align}
Intuitively, we obtain a solution of (TP) from one of (CP), and vice versa, using the following transformation
\begin{equation*}
\begin{array}{rl}
\hat{x}^k(t) &:= \hat{x}\left(\Th_{k-1} + (\Th_k - \Th_{k-1})t\right),\\
\hat{u}^k(t) &:= \hat{u}\left(\Th_{k-1} + (\Th_k - \Th_{k-1})t\right),\\
\hat{v}^k(t) &:= \hat{v}\left(\Th_{k-1} + (\Th_k - \Th_{k-1})t\right),
\end{array}
\text{ for $t \in [0,1]$}.
\end{equation*}
Set
\begin{equation}
\Wh := \left(\left(\hat{x}^k\right)_{k=1}^N, \left(\hat{u}^k\right)_{k=1}^N, \left(\hat{v}^k\right)_{k=1}^N, \left(\Th_k\right)_{k=1}^{N-1}\right).
\end{equation}
The relation between the solutions of (CP) and (TP) is formalized in Lemma \ref{CP-TP-relation}.
The strategy of our proof will be to extract the weak optimality for a solution of (TP) from the optimality of (CP) by transforming a neighborhood of optimality in the Pontryagin sense for the nominal trajectory $\hat{w}$ into a neighborhood in the weak sense for the trajectory $\Wh$.
\begin{proof}[Proof of Lemma \ref{CP-TP-relation}]
Since we assume $\hat{w}$ to be a Pontryagin minimum of (CP), from Definition \ref{pontryagin_minimum}, there exists $\varepsilon > 0$ such that
\begin{equation}
\norm{x - \hat{x}}_{\infty} < \varepsilon, \ \norm{(u,v) - (\hat{u},\hat{v})}_{1} < \varepsilon, \ \norm{(u,v) - (\hat{u},\hat{v})}_{\infty} < 1.
\end{equation}
So we must find a neighborhood of $\Wh$ where it is optimal in the weak sense. In other words, we search for $\bar\delta, \bar\varepsilon > 0$ such that $\Wh$ is a minimum with respect to all feasible solutions $W = ((x^k), (u^k), (v^k), (T_k))$ that satisfy
\begin{equation}
\left| T_k - \Th_k\right|<\bar\delta, \ \norm{(u^k,v^k) - (\hat{u}^k,\hat{v}^k)}_{\infty} < \bar\varepsilon, \ \text{ for all } k = 1, \cdots, N.
\end{equation}
Each such $W$ is mapped into a feasible trajectory of (CP) via
\begin{equation}
\alpha(t) := \alpha^k\left(\frac{t - T_{k-1}}{T_k - T_{k-1}}\right),
\end{equation}
with $\alpha \in \{x,u,v\}$. Therefore, we have that $x(0) = x^1(0)$ and $x(T) = x^N(1)$, and from the feasibility of $W$ the end-point constraints in \eqref{initial-final.constraints} are satisfied.
The control constraints can be verified using the piecewise continuity of each component of the controls and, by construction, the continuity of $\left((u^k), (v^k)\right)$ on the interval $(0,1)$. Therefore, we can use equation \eqref{control_bounds} and choose $\bar\varepsilon$ so that
\begin{equation}
\begin{array}{c}
-\rho < u_{min} - \bar\varepsilon \le u_i(t) \le u_{max} + \bar\varepsilon < 1+\rho\\
-\rho < v_{min} - \bar\varepsilon\le v_j(t) \le v_{max} + \bar\varepsilon < 1+\rho
\end{array}, \text{ on $[0,T]$}.
\end{equation}
In the sequel, we relate the size of the neighborhood around $\Th_k$ to $\varepsilon$ by finding bounds on $\norm{(u,v) - (\hat{u},\hat{v})}_{1}$. The analysis is analogous for both controls $u$ and $v$, hence we conduct the calculations, componentwise, only for $u$. We compute
\begin{equation}
\begin{array}{cc}
\displaystyle \int_{I_k\cap \Ih_k} |u_i(t) - \hat{u}_i(t)|{\rm d} t & \displaystyle \le
\int_{I_k\cap \Ih_k} \left| u_i^k\left(\frac{t - T_{k-1}}{T_k - T_{k-1}}\right) - \hat{u}_i^k\left(\frac{t - T_{k-1}}{T_k - T_{k-1}}\right)\right|{\rm d} t \\
& \displaystyle + \int_{I_k\cap \Ih_k} \left| \hat{u}_i^k\left(\frac{t - T_{k-1}}{T_k - T_{k-1}}\right) - \hat{u}_i^k\left(\frac{t - \Th_{k-1}}{\Th_k - \Th_{k-1}}\right) \right|{\rm d} t.
\end{array}
\end{equation}
The first integral is bounded by $\bar\varepsilon|I_k\cap \Ih_k|$ by the previous arguments. For the second term, since $\hat{u}^k$ is uniformly continuous on $I_k$ and $\Ih_k$, we can find some $\bar\delta_k > 0$ such that, if $|T_k - \Th_k| < \bar\delta_k$, then
\begin{equation*}
\left| \hat{u}_i^k\left(\frac{t - T_{k-1}}{T_k - T_{k-1}}\right) - \hat{u}_i^k\left(\frac{t - \Th_{k-1}}{\Th_k - \Th_{k-1}}\right) \right| < \bar\varepsilon
\end{equation*}
for each $k = 1, \cdots, N$. Hence we need only choose $\displaystyle \bar\delta := \mathop{\rm min}_{k = 1, \cdots, N} \bar\delta_k$.
Next, we need to evaluate the integral outside this intersection. We assume w.l.o.g. that $T_k < \Th_k$ hence
\begin{equation*}
\int_{T_k}^{\Th_k}|u_i(t) - \hat{u}_i(t)|{\rm d} t \le \bar\delta\bar\varepsilon.
\end{equation*}
Summing all the terms, we have that $\norm{u_i - \hat{u}_i}_1 < \bar\varepsilon(2T + (N-1)\bar\delta)$. Thus, taking into account all the control components, $m$ from the linear controls and $l$ from the nonlinear controls, if
\begin{equation*}
\bar\varepsilon(2T + (N-1)\bar\delta) < \frac{\varepsilon}{m + l},
\end{equation*}
then $\norm{u - \hat{u}}_1 < \varepsilon$, as desired.
\end{proof}
\subsection{Perturbation of the constrained problem}
Consider the following control problem
\begin{equation}
\label{epsilon_CP}
\tag{$\epsilon$CP}
\begin{array}{cl}
\text{ minimize } &
\phi(\hat{x}(0),x(T)) \\
\text{subject to} & \\
& \dot{x}(t) = f(x(t), u(t), v(t)), \ \ \text{a.e. on $[0,T]$}\\
& \eta_j(x(0),x(T)) = 0, \ \ \text{for } j = 1,\cdots, d_{\eta},\\
& (u, v) \in \mathcal{U}_{ad}^{\epsilon} \times \mathcal{V}_{ad}^{\epsilon}\\
& \mathcal{U}_{ad}^{\epsilon} := L^{\infty}([0,T]; U + \epsilon B(0;1)),\\
& \mathcal{V}_{ad}^{\epsilon} := L^{\infty}([0,T]; V + \epsilon B(0;1)).
\end{array}
\end{equation}
Denote by $V(\epsilon)$ the optimal value of \eqref{epsilon_CP}; note that $V(0)$ is the optimal value of the original constrained problem (CP).
We claim that the function $V:\mathbb{R} \to \mathbb{R}$ satisfies
\begin{equation}
V(0) \le V(\epsilon) + \epsilon L_{\phi}e^{TL_f},
\end{equation}
and hence is l.s.c.\ at $\epsilon = 0$, where $L_{\phi}$ and $L_f$ are the Lipschitz constants of the data functions $\phi$ and $f$, respectively.
\begin{proof}
Clearly, we have
\begin{equation*}
\mathcal{U}_{ad} \times \mathcal{V}_{ad} = \mathcal{U}_{ad}^{0} \times \mathcal{V}_{ad}^{0} \subset \mathcal{U}_{ad}^{\epsilon} \times \mathcal{V}_{ad}^{\epsilon},
\end{equation*}
hence, minimizing over the larger set of controls yields $V(\epsilon) \le V(0)$.
Then, for each pair $(u,v) \in \mathcal{U}_{ad} \times \mathcal{V}_{ad}$, consider an $\epsilon$-perturbation, that is, some pair $(u^{\epsilon}, v^{\epsilon}) \in \mathcal{U}_{ad}^{\epsilon} \times \mathcal{V}_{ad}^{\epsilon}$ such that $\norm{(u,v) - (u^{\epsilon}, v^{\epsilon})}_{\infty} < \epsilon$.
Denote by $x^{\epsilon}$ the function obtained by integrating the system with such controls and $\delta x:= x - x^{\epsilon}$. Then one has
\begin{align*}
|\delta \dot{x}| &= \left|f(x, u, v) - f\left(x^{\epsilon}, u^{\epsilon}, v^{\epsilon}\right)\right|\\
&\le L_f\left(|\delta x| + \left|(u - u^{\epsilon}, v - v^{\epsilon})\right|\right).
\end{align*}
Since $\norm{(u,v) - (u^{\epsilon}, v^{\epsilon})}_{\infty} < \epsilon$ and $\delta x(0) = 0$, Gronwall's lemma yields
\begin{equation*}
|\delta x(t)| \le \epsilon e^{tL_f}.
\end{equation*}
From the Lipschitz continuity of $\phi$ we obtain
\begin{equation*}
\phi(\hat{x}(0), x(T)) \le \phi(\hat{x}(0), x^{\epsilon}(T)) +\epsilon L_{\phi}e^{TL_f}.
\end{equation*}
However, since the $\epsilon$-perturbation of the controls was arbitrary, any element of $\mathcal{U}_{ad}^{\epsilon} \times \mathcal{V}_{ad}^{\epsilon}$ can be obtained by choosing an appropriate $(u,v) \in \mathcal{U}_{ad} \times \mathcal{V}_{ad}$ and afterwards a fitting variation, possibly null. Therefore, minimizing over $\mathcal{U}_{ad}^{\epsilon} \times \mathcal{V}_{ad}^{\epsilon}$ we get
\begin{align*}
V(0) & \le V(\epsilon) + \epsilon L_{\phi}e^{TL_f}.
\end{align*}
\end{proof}
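The estimate $|\delta x(t)| \le \epsilon e^{tL_f}$ obtained above via Gronwall's lemma can be illustrated numerically; the dynamics $f(x,u) = \sin x + u$, the nominal control and all constants below are hypothetical choices with Lipschitz constant $L_f = 1$.

```python
import numpy as np

# toy dynamics x' = f(x, u) = sin(x) + u, Lipschitz in (x, u) with L_f = 1
Lf, eps, T, n = 1.0, 0.05, 2.0, 2000
h = T / n
u = lambda t: np.cos(3 * t)         # nominal control (hypothetical)
ue = lambda t: u(t) + eps           # an eps-perturbation of the control

x, xe, ok = 0.5, 0.5, True
for i in range(n):
    t = i * h
    x += h * (np.sin(x) + u(t))     # Euler step, nominal trajectory
    xe += h * (np.sin(xe) + ue(t))  # Euler step, perturbed trajectory
    # Gronwall-type bound: |delta x(t)| <= eps * exp(t * L_f)
    ok = ok and abs(x - xe) <= eps * np.exp((t + h) * Lf)
print(ok)  # True: the bound holds along the whole horizon
```

In continuous time the sharper bound $|\delta x(t)| \le \epsilon (e^{tL_f} - 1)$ holds, so the slack absorbs the Euler discretization error comfortably.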
\subsection{The totally nonlinear case} When we have no affine controls in the dynamics, the classical approach is to use the stationarity of the Hamiltonian to write the control as a function of the states and costates:
\begin{equation}
\label{representation.nonlin}
\hat{u}(t) = U(\hat{x}(t), \hat{p}(t)).
\end{equation}
Using the stationarity of the Hamiltonian \eqref{stationarity.Hamiltonian} and the {\em Implicit Function Theorem (IFT)}, one can achieve \eqref{representation.nonlin}. However, to satisfy the requirements of the {\em (IFT)}, the following assumption is necessary.
\begin{assumption}[Strengthened Legendre-Clebsch condition]
\label{strengthenedLC}
The second derivative of the Hamiltonian with respect to the control is positive definite:
\begin{equation*}
H_{uu}[\hat\lambda](\hat{w}) \succ 0.
\end{equation*}
\end{assumption}
In fact, the positive semi-definiteness of the second partial derivative of the Hamiltonian with respect to the control is a necessary condition for optimality of a trajectory; therefore, this assumption is not too strong. We show later that this necessary condition holds if one assumes the uniqueness of the Lagrange multipliers, which is a consequence of Assumption \ref{qualification.constraints}.
\subsection{The totally affine case} When the controls appear linearly, we immediately face complications concerning the strengthened Legendre-Clebsch condition, since the affine nature of the controls implies that the matrix $H_{vv}$ is identically zero. Therefore, obtaining a representation of the optimal control in terms of the states and costates as
\begin{equation}
\label{representation.linear}
\hat{v}(t) = V(\hat{x}(t), \hat{p}(t))
\end{equation}
is not so trivial.
The solution to this problem is to turn our analysis to the time derivatives of $H_v$, which is sometimes referred to as the {\em switching function}. Following the notation in \cite{aronna2013shooting}, we define the switching function
\begin{equation}
\label{switching.function}
\begin{array}{rl}
\Phi \colon [0,T] &\to \mathbb{R}^m\\
t &\mapsto H_{v}[\hat\lambda](\hat{w})(t) = \left(\hat{p}(t)\cdot f_i(\hat{x}(t))\right)_{i = 1}^m.
\end{array}
\end{equation}
We shall extract the optimal control from the time derivatives of the switching function \eqref{switching.function}. To simplify the analysis, consider a vector field $F:\mathbb{R}^n \to \mathbb{R}^n$ that is a function only of the states, that is, $F = F(x)$, and evaluate the time derivative of the scalar function $\hat{p} \cdot F(\hat{x})$:
\begin{align}
\frac{{\rm d}}{{\rm d} t}(\hat{p} \cdot F(\hat{x})) &= \dot{\hat{p}}\cdot F(\hat{x}) + \hat{p} \cdot D_x F(\hat{x}) \dot{\hat{x}}\\
&= \hat{p} \cdot\left( D_x F(\hat{x})f_0(\hat{x}) - D_x f_0(\hat{x})F(\hat{x}) + \sum_{i = 1}^m \hat{v}_i \left( D_x F(\hat{x})f_i(\hat{x}) - D_x f_i(\hat{x})F(\hat{x})\right) \right).
\end{align}
Before proceeding it is advantageous to introduce the definition of {\em Lie brackets}. Given two $\mathcal{C}^1$ vector fields $g,h$, the Lie bracket is defined as
\begin{equation}
\label{Lie.Brackets}
[g,h]^x := D_x h(x)g(x) - D_x g(x)h(x).
\end{equation}
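The bracket \eqref{Lie.Brackets} is straightforward to check numerically with finite-difference Jacobians; the vector fields $g,h$ below are hypothetical examples on $\mathbb{R}^2$, not problem data.

```python
import numpy as np

def jac(F, x, eps=1e-6):
    """Central finite-difference Jacobian D_x F(x)."""
    m, n = len(F(x)), len(x)
    J = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        J[:, j] = (F(x + e) - F(x - e)) / (2 * eps)
    return J

def lie_bracket(g, h, x):
    """[g, h]^x := D_x h(x) g(x) - D_x g(x) h(x), cf. the definition above."""
    return jac(h, x) @ g(x) - jac(g, x) @ h(x)

# hypothetical vector fields on R^2
g = lambda x: np.array([x[1], 0.0])
h = lambda x: np.array([0.0, x[0]])
x0 = np.array([1.0, 2.0])
# analytically [g, h]^x(x0) = (-x0[0], x0[1]) = (-1, 2)
print(lie_bracket(g, h, x0))  # approximately [-1.  2.]
```

The same helper can be nested to evaluate iterated brackets such as $[f_0,[f_0,f_i]^x]^x$, which appear in the second derivative of the switching function.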
Using the newly introduced notation, we can write
\begin{equation}
\label{genericvectorfield.D1}
\frac{{\rm d}}{{\rm d} t}(\hat{p} \cdot F(\hat{x})) = \hat{p} \cdot [f_0, F]^x + \sum_{i = 1}^m \hat{v}_i\, \hat{p} \cdot [f_i, F]^x.
\end{equation}
We can obtain the first time derivative of the $i$-th component of the switching function by choosing $F = f_i$, so that equation \eqref{genericvectorfield.D1} becomes
\begin{equation}
\label{switching.function.D1_preview}
\dot{H}_{v_i}[\hat\lambda](\hat{w}) = \hat{p} \cdot [f_0, f_i]^x + \sum_{j = 1}^m \hat{v}_j\, \hat{p} \cdot [f_j, f_i]^x.
\end{equation}
The difficulties arise since this derivative is independent of the control when evaluated along the optimal trajectory. This is a consequence of the following proposition, which is a corollary of the second order necessary conditions for optimality when the set of multipliers is a singleton, as is the case for our problem in virtue of Assumption \ref{qualification.constraints}.
\begin{proposition}[Goh conditions]
\label{Goh.condition}
Assume that $\hat{w}(\cdot)$ is a weak minimum having a unique multiplier. Then the following identity holds
\begin{equation*}
\hat{p} \cdot [f_i, f_j]^x(\cdot) = 0, \quad \text{ for $i,j = 1, \cdots, m$}.
\end{equation*}
\end{proposition}
Proposition \ref{Goh.condition} was first introduced by Goh and later generalized by Aronna; the reader is referred to Corollary 5.2 of \cite{aronna2017second} for a proof. As a direct consequence, the time derivative of the switching function \eqref{switching.function.D1_preview} can now be expressed as
\begin{equation}
\label{switching.function.D1}
\dot{H}_{v_i}[\hat\lambda](\hat{w}) = \hat{p} \cdot [f_0, f_i]^x,
\end{equation}
showing explicitly the independence of the controls. We therefore turn to the second time derivative, which can be obtained from equation \eqref{genericvectorfield.D1}, this time choosing $F = [f_0, f_i]^x$. We obtain
\begin{equation}
\label{switching.function.D2}
\ddot{H}_{v_i}[\hat\lambda](\hat{w}) = \hat{p} \cdot \left[f_0, [f_0, f_i]^x\right]^x+\sum_{j = 1}^m \hat{v}_j \hat{p} \cdot \left[f_j, [f_0, f_i]^x\right]^x.
\end{equation}
\begin{remark}
One could easily be misled into thinking that the control should appear explicitly in equation \eqref{switching.function.D2}, as was the case with the first derivative. In fact, as proved by {\em Kelley et al.} \cite{kelley19673}, for each control index $i$, the order $M_i$ of the first time derivative of the switching function in which the control appears explicitly is even. Equivalently, for every odd integer $\nu$, the derivative
\begin{equation}
\frac{{\rm d} ^\nu H_{v}}{{\rm d} t^{\nu}} [\hat\lambda](\hat{w})
\end{equation}
does not depend explicitly on the control $\hat{v}$, and therefore it cannot be used to obtain a representation as in equation \eqref{representation.linear}.
\end{remark}
The way to proceed, then, is to take as many derivatives as necessary to force the explicit appearance of the linear controls. For simplicity, we shall assume that this is achieved with only two derivatives. From the stationarity of the Hamiltonian with respect to the linear controls, we have $\ddot{H}_v[\lambda](\hat{w}) = 0$. As a consequence, the controls can be retrieved by means of the Implicit Function Theorem (IFT), once we assume the following {\em strengthened generalized Legendre-Clebsch condition}.
\begin{assumption}[Strengthened Generalized Legendre-Clebsch condition]
\label{strengethenedLC.generalized}
\begin{equation*}
-\frac{\partial \ddot{H}_v}{\partial v}[\hat\lambda](\hat{w}) \succ 0.
\end{equation*}
\end{assumption}
Once again, as in the totally nonlinear case, the positive semi-definiteness of the quantity in Assumption \ref{strengethenedLC.generalized} is a necessary condition for the optimality of a feasible trajectory. Finally, the controls can be retrieved and their expressions substituted into the state and costate dynamics, again yielding a two-point boundary value problem.
\subsection{The partially-affine case}
Our strategy is to use the Implicit Function Theorem in order to obtain a representation for both $\hat{u}$ and $\hat{v}$ as a function of $\hat{x}$ and $\hat{p}$. For this, we shall make use of the stationarity of the Hamiltonian, \eqref{stationarity.Hamiltonian}, in combination with Assumptions \ref{strengthenedLC} and \ref{strengethenedLC.generalized} in order to satisfy the requirements of the theorem.
As for the conventional Legendre-Clebsch condition, it still holds in this case, since it is a necessary condition for optimality, as previously discussed. For our problem it takes the form
\begin{equation}
\label{LegendreClebsch.mixed}
\tag{LC}
\left(
\begin{array}{cc}
H_{uu}[\hat\lambda](\hat{w}) & H_{uv}[\hat\lambda](\hat{w})\\
\\
H_{vu}[\hat\lambda](\hat{w})
& H_{vv}[\hat\lambda](\hat{w})
\end{array}
\right)
\succeq 0.
\end{equation}
A proof of this can be found in Aronna \cite{aronna2017second}, see Corollary 1. Furthermore, since $H_{vv}[\lambda](\hat{w}) = 0$, this condition holds if, and only if,
\begin{equation}
\label{LegendreClebsch.mixed.equivalent}
H_{uu}[\lambda](\hat{w}) \succeq 0 \text{ and } H_{uv}[\lambda](\hat{w}) = 0.
\end{equation}
In order to find representations analogous to \eqref{representation.nonlin} and \eqref{representation.linear}, we proceed in a similar manner. The stationarity condition for the nonlinear controls, $H_u[\lambda](\hat{w}) = 0$, can be differentiated, temporarily assuming enough regularity, giving
\begin{equation}
\dot{H}_u[\lambda](\hat{w}) = H_{ux}[\lambda](\hat{w}) \dot{\hat{x}} + H_{up}[\lambda](\hat{w}) \dot{\hat{p}} + H_{uu}[\lambda](\hat{w}) \dot{\hat{u}} + H_{uv}[\lambda](\hat{w}) \dot{\hat{v}} = 0.
\end{equation}
To make this expression rigorous, we make the following assumption on the controls.
\begin{assumption}[Regularity of the controls]
\label{regularity.controls}
The nonlinear control $\hat{u}(\cdot)$ is continuously differentiable and the linear control $\hat{v}(\cdot)$ is continuous.
\end{assumption}
Note that, in fact, we only need the controls to be in the Sobolev space $W^{1,\infty}$; however, this regularity is also required to prove the convergence of the shooting algorithm proposed further on.
The coefficient multiplying $\dot{\hat{v}}$ vanishes by virtue of \eqref{LegendreClebsch.mixed.equivalent}. To recover the derivative of the nonlinear controls, define the function $F(\hat{x}, \hat{u}, \hat{v}, \hat{p}, \dot{\hat{u}}) := \dot{H}_u[\lambda](\hat{w})$. Using the last equation and assuming the strengthened Legendre-Clebsch condition, we obtain the system
\begin{align}
F(\hat{x}, \hat{u}, \hat{v}, \hat{p}, \dot{\hat{u}}) &= 0\\
D_{\dot{\hat{u}}} F(\hat{x}, \hat{u}, \hat{v}, \hat{p}, \dot{\hat{u}}) &\succ 0.
\end{align}
This enables us to apply the Implicit Function Theorem, giving the derivative of the nonlinear controls in the form
\begin{equation}
\label{representation.nonlin.derivative}
\dot{\hat{u}}(t) = \Gamma^u(\hat{u}, \hat{v}, \hat{x}, \hat{p}),
\end{equation}
where $\Gamma^u$ is a $\mathcal{C}^1$ function, again as a consequence of the IFT.
Using Equation \eqref{switching.function.D1}, in our new setting we obtain
\begin{equation}
\dot{H}_{v_i} = \hat{p} \cdot [f_0, f_i]^x + H_{v_iu}\dot{\hat{u}}.
\end{equation}
We can therefore use the representation for $\dot{\hat{u}}(\cdot)$ and differentiate one more time, obtaining
\begin{equation}
\ddot{H}_{v_i} = \hat{p} \cdot \left[f_0, [f_0, f_i]^x\right]^x+\sum_{j = 1}^m \hat{v}_j \hat{p} \cdot \left[f_j, [f_0, f_i]^x\right]^x + \frac{{\rm d} }{{\rm d} t}\left( H_{v_iu}\Gamma^u \right).
\end{equation}
Therefore, $\ddot{H}_{v} = \ddot{H}_{v}(\hat{w}, \hat{p}, \dot{\hat{v}})$, since any dependence on the time derivative of the nonlinear controls can be replaced by the representation \eqref{representation.nonlin.derivative}. In a similar manner as for the nonlinear controls, choosing the function $F = -\ddot{H}_{v}$, the stationarity conditions and the strengthened generalized Legendre-Clebsch condition give the system
\begin{align}
F(\hat{x}, \hat{u}, \hat{v}, \hat{p}, \dot{\hat{v}}) &= 0\\
D_{\dot{\hat{v}}} F(\hat{x}, \hat{u}, \hat{v}, \hat{p}, \dot{\hat{v}}) &\succ 0.
\end{align}
This once again satisfies the hypotheses of the Implicit Function Theorem and gives the representation
\begin{equation}
\label{representation.lin.derivative}
\dot{\hat{v}}(t) = \Gamma^v(\hat{u}, \hat{v}, \hat{x}, \hat{p}).
\end{equation}
This finally shows that $\ddot{H}_{v}$ is a function of the states, costates and controls only, and we turn to the mapping
\begin{equation}
(w, \lambda) \mapsto
\left(\begin{array}{c}
H_u[\lambda](w)\\
\\
-\ddot{H}_v[\lambda](w)
\end{array}\right)
\end{equation}
whose derivative with respect to the controls at $(\hat{w}, \hat \lambda)$ is given by
\begin{equation}
\mathcal{J} :=
\left(\begin{array}{cc}
\displaystyle
H_{uu}[\hat \lambda](\hat{w}) & H_{uv}[\hat \lambda](\hat{w}) \\
& \\
\displaystyle
-\frac{\partial \ddot{H}_v[\hat \lambda](\hat{w})}{\partial u} &\displaystyle -\frac{\partial \ddot{H}_v[\hat \lambda](\hat{w})}{\partial v}
\end{array}\right).
\end{equation}
Since \eqref{LegendreClebsch.mixed} holds along the optimal trajectory and the strengthened Legendre-Clebsch conditions, Assumptions \ref{strengthenedLC} and \ref{strengethenedLC.generalized}, hold as well, the matrix $\mathcal{J}$ is positive definite and we can once again use the Implicit Function Theorem to write the desired representations as in \eqref{representation.nonlin} and \eqref{representation.linear}. This is summarized in the following proposition.
\begin{proposition}[Elimination of the controls]
\label{control.elimination}
If $\hat{w}$ is a weak solution of problem \eqref{unconstrained.problem} with a single multiplier, and Assumptions \ref{regularity.controls}, \ref{strengthenedLC} and \ref{strengethenedLC.generalized} are satisfied, then one has the representation
\begin{equation}
\label{control.elimination.equation}
\hat{u} = \Uh(\hat{x}, \hat{p}) \quad \hat{v} = \Vh(\hat{x}, \hat{p}),
\end{equation}
where $\Uh$ and $\Vh$ are smooth functions of the states and costates.
\end{proposition}
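Numerically, the elimination of the controls granted by the IFT amounts to inverting the map $(u,v) \mapsto (H_u, -\ddot{H}_v)$ at frozen $(x,p)$, e.g. by a Newton iteration. The sketch below is illustrative only: the affine map $F$ is a stand-in for the actual Hamiltonian derivatives, chosen so that its Jacobian $[[1, 0.5], [0.3, 1]]$ is positive definite, as the assumptions require.

```python
def newton2(F, z0, tol=1e-10, h=1e-7, itmax=50):
    """Newton's method on F: R^2 -> R^2, with a finite-difference Jacobian."""
    u, v = z0
    for _ in range(itmax):
        F0 = F(u, v)
        if max(abs(F0[0]), abs(F0[1])) < tol:
            break
        Fu, Fv = F(u + h, v), F(u, v + h)  # forward differences: Jacobian columns
        J = [[(Fu[i] - F0[i]) / h, (Fv[i] - F0[i]) / h] for i in range(2)]
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        # Cramer's rule for the Newton step J * (du, dv) = -F0
        u += (-F0[0] * J[1][1] + F0[1] * J[0][1]) / det
        v += (-F0[1] * J[0][0] + F0[0] * J[1][0]) / det
    return u, v

# Affine stand-in for (H_u, -ddot{H}_v) at a frozen (x, p); not the paper's
# Hamiltonian, just a positive-definite toy with solution (u, v) = (0, 2).
F = lambda u, v: (u + 0.5 * v - 1.0, 0.3 * u + v - 2.0)
uhat, vhat = newton2(F, (0.0, 0.0))
print(uhat, vhat)  # approximately 0.0 and 2.0
```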
The previous discussion enables us to write the optimal control problem at hand as a two-point boundary value problem, as in the previous cases. Taking into account the initial-final constraints and the stationarity of the Hamiltonian, we obtain an equivalent differential-algebraic system of equations formed from \eqref{unconstrained.problem}, \eqref{costate_dynamics}, \eqref{transversality.conditions} and \eqref{stationarity.Hamiltonian}. This problem, also called the {\em Optimality System}, is written as
\begin{equation}
\label{optimality.system}
\tag{OS}
\left\{\begin{array}{l}
\displaystyle
\dot{x} = f(x,U(x,p), V(x,p)), \text{ a.e. on $[0,T],$}\\
\\
\displaystyle
\dot{p} = -p\cdot D_xf(x,U(x,p), V(x,p)), \text{ a.e. on $[0,T],$}\\
\\
\eta_j(x_0, x_T) = 0 \text{ for } j = 1, \cdots, d_{\eta},\\
\\
p(0) = -D_{x_0}l[\lambda](x(0), x(T)),\\
\\
p(T) = D_{x_T}l[\lambda](x(0), x(T)),\\
\\
H_v(x(T), U(x(T), p(T))) = 0,\\
\\
\dot{H}_v(x(0), U(x(0), p(0))) = 0.\\
\end{array}\right.
\end{equation}
We note that the first and second equations in \eqref{optimality.system} represent the state and costate dynamics, and the third, the initial-final constraints in problem \eqref{unconstrained.problem}. The transversality conditions \eqref{transversality.conditions} are represented by the fourth and fifth equations. Finally, the stationarity of the Hamiltonian is implicitly satisfied by the substitution of the controls from Proposition \ref{control.elimination}, since these substitutions are equivalent to $H_u[\hat\lambda](\hat{w}) = 0$ and $\ddot{H}_v[\lambda](\hat{w}) = 0$. The stationarity with respect to the linear controls is recovered through the last two equations of \eqref{optimality.system}.
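A system of the shape \eqref{optimality.system} is typically solved by shooting: guess the missing initial costate, integrate the state/costate dynamics, and adjust the guess until the terminal conditions hold. The sketch below treats a scalar toy problem that is not one of the paper's examples: minimize $\tfrac12\int_0^1 (x^2+u^2)\,{\rm d}t$ with $\dot x = u$, $x(0)=1$ and free endpoint, so that $H_u = p + u = 0$ gives the feedback $u = -p$, the dynamics become $\dot x = -p$, $\dot p = -x$, and the shooting unknown $p(0)$ must satisfy $p(1) = 0$ (the exact answer is $p(0) = \tanh 1$).

```python
def rk4(f, y, t, dt):
    """One classical Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, [y[i] + dt / 2 * k1[i] for i in range(len(y))])
    k3 = f(t + dt / 2, [y[i] + dt / 2 * k2[i] for i in range(len(y))])
    k4 = f(t + dt, [y[i] + dt * k3[i] for i in range(len(y))])
    return [y[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(len(y))]

def terminal_costate(p0, T=1.0, n=200):
    """Integrate x' = -p, p' = -x from (x, p)(0) = (1, p0); return p(T)."""
    dyn = lambda t, y: [-y[1], -y[0]]
    y, dt = [1.0, p0], T / n
    for k in range(n):
        y = rk4(dyn, y, k * dt, dt)
    return y[1]

# Secant iteration on the shooting function p0 -> p(T).
a, b = 0.0, 1.0
fa, fb = terminal_costate(a), terminal_costate(b)
for _ in range(40):
    if abs(fb) < 1e-12 or fb == fa:
        break
    c = b - fb * (b - a) / (fb - fa)
    a, fa = b, fb
    b, fb = c, terminal_costate(c)
print(b)  # close to tanh(1) = 0.761594...
```

Since this toy problem is linear, the secant iteration converges essentially in one step; for genuinely nonlinear optimality systems a Newton-type shooting iteration with a sensitivity (variational) system is used instead.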
\subsection{Computing the Linear Controls}
To solve \eqref{optimality.system}, we need explicit analytical expressions for the controls in terms of $x$ and $p$. The nonlinear controls can usually be obtained from the stationarity condition $H_u = 0$. We start by assuming that the representation $\hat{u} = U(\hat{x}, \hat{p})$ has already been obtained.
In the sequel we introduce the Poisson bracket notation. Given two functions $g,h$ that depend on $x,p$, the {\em Poisson bracket} is given by
\begin{equation}
\label{poisson_bracket}
\{g,h\} := D_xgD_ph - D_pgD_xh = \sum_{i = 1}^n\left(\frac{\partial g}{\partial x_i}\frac{\partial h}{\partial p_i} - \frac{\partial g}{\partial p_i}\frac{\partial h}{\partial x_i}\right).
\end{equation}
The following result is a direct consequence of this definition.
\begin{proposition}
\label{poisson_bracket.proposition}
Let $F = F(x,p,t)$ be a $\mathcal{C}^1$-function. Then
\begin{equation}
\frac{{\rm d}}{{\rm d} t}F(x,p,t) = \{F,H\} + \frac{\partial F}{\partial t},
\end{equation}
provided that $(x,p)$ follows the Hamiltonian dynamics $\dot{x} = H_p, \,\, -\dot{p} = H_x.$
\end{proposition}
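Definition \eqref{poisson_bracket} is easy to sanity-check numerically. The sketch below is illustrative only (one degree of freedom, derivatives by central differences): with $H = (x^2+p^2)/2$ and $F = xp$ one has $\{F,H\} = F_xH_p - F_pH_x = p^2 - x^2$.

```python
# Finite-difference Poisson bracket in one degree of freedom (n = 1):
# {g, h} = g_x h_p - g_p h_x.
def poisson(g, h, x, p, eps=1e-6):
    dgx = (g(x + eps, p) - g(x - eps, p)) / (2 * eps)
    dgp = (g(x, p + eps) - g(x, p - eps)) / (2 * eps)
    dhx = (h(x + eps, p) - h(x - eps, p)) / (2 * eps)
    dhp = (h(x, p + eps) - h(x, p - eps)) / (2 * eps)
    return dgx * dhp - dgp * dhx

# With H = (x^2 + p^2)/2 and F = x*p one has {F, H} = p^2 - x^2.
H = lambda x, p: 0.5 * (x * x + p * p)
F = lambda x, p: x * p
print(poisson(F, H, 1.0, 2.0))  # close to 2^2 - 1^2 = 3.0
```

The canonical relation $\{x, p\} = 1$ can be checked the same way, and Proposition \ref{poisson_bracket.proposition} then follows from the chain rule along the Hamiltonian flow.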
As a consequence of Proposition \ref{poisson_bracket.proposition}, if the optimal control $(\hat{u}, \hat{v})$ admits a feedback representation $\hat{u} = U(x,p)$, then
\begin{equation}
\label{u_dev}
\dot{\hat{u}} = \{U, H\} = \{U, p\cdot f_0\} + \sum_{j = 1}^{m}\hat{v}_j \{U,p\cdot f_j \}.
\end{equation}
By substituting \eqref{u_dev} in equation \eqref{hamiltonian_2timedev}, we obtain, for $i,j = 1, \dots, m$,
\begin{equation}
\label{kernel_equation.prev1}
\begin{split}
\ddot{H}_{v_i} = \ \gamma_{i0} + \displaystyle \sum_{j = 1}^{m}\hat{v}_j\gamma_{ij} = 0,\quad\text{where }\gamma_{ij} := \ \displaystyle \hat{p}\cdot \left([f_j, [f_0, f_i]] + D_u[f_0, f_i]\{U,\hat{p}\cdot f_j\}\right).
\end{split}
\end{equation}
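When the matrix $(\gamma_{ij})_{i,j=1}^m$ is invertible, \eqref{kernel_equation.prev1} determines the linear controls pointwise as the solution of the linear system $\gamma_{i0} + \sum_j \hat v_j \gamma_{ij} = 0$. A minimal sketch, with illustrative coefficients that are not derived from any particular model:

```python
def solve(A, b):
    """Solve A v = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            factor = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= factor * M[k][c]
    v = [0.0] * n
    for k in range(n - 1, -1, -1):  # back substitution
        v[k] = (M[k][n] - sum(M[k][c] * v[c] for c in range(k + 1, n))) / M[k][k]
    return v

# gamma[i][j] and gamma0[i] stand in for the coefficients of the system
# gamma0 + gamma . v = 0; these numbers are purely illustrative.
gamma = [[2.0, 1.0], [1.0, 3.0]]
gamma0 = [-3.0, -4.0]
v = solve(gamma, [-g for g in gamma0])
print(v)  # [1.0, 1.0]
```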
\chapter{\LARGE \bf {#1}}\vspace{-40mm}}
\renewcommand{\sectionl}[1]{\section{\LARGE \bf {#1}}}
\renewcommand{\subsectionl}[1]{\subsection{\LARGE \bf {#1}}}
\renewcommand{\subsubsectionl}[1]{\subsubsection{\Large \bf {#1}}}
\renewcommand{\paragraphl}[1]{\paragraph{\LARGE \bf {#1}}}
\renewcommand{\subparagraphl}[1]{\subparagraph{\LARGE \bf {#1}}}
\renewcommand{\captionl}[1]{\caption{\LARGE \bf {#1}}}
\renewcommand{\Largel}{\Large}}{{\caption}}
} \fi
\newcommand\end{proof}{\hfill{$\blacksquare$}\medskip}
\newenvironment{proofdu}[1]{{\noindent \bf D\'emonstration #1.}\quad}{\hfill{$\blacksquare$}\medskip}
\newcommand\ML{{\rm M}}
\def\check{a}{\check{a}}
\def\check{b}{\check{b}}
\def\check{c}{\check{c}}
\def\check{d}{\check{d}}
\def\check{e}{\check{e}}
\def\check{f}{\check{f}}
\def\check{g}{\check{g}}
\def\check{h}{\check{h}}
\def\check{i}{\check{i}}
\def\check{j}{\check{j}}
\def\check{k}{\check{k}}
\def\check{l}{\check{l}}
\def\check{m}{\check{m}}
\def\check{n}{\check{n}}
\def\check{o}{\check{o}}
\def\check{p}{\check{p}}
\def\check{q}{\check{q}}
\def\check{r}{\check{r}}
\def\check{s}{\check{s}}
\def\check{t}{\check{t}}
\def\check{u}{\check{u}}
\def\check{v}{\check{v}}
\def\check{w}{\check{w}}
\def\check{x}{\check{x}}
\def\check{y}{\check{y}}
\def\check{z}{\check{z}}
\def\hat{a}{\hat{a}}
\def\hat{b}{\hat{b}}
\def\hat{c}{\hat{c}}
\def\hat{d}{\hat{d}}
\def\hat{e}{\hat{e}}
\def\hat{f}{\hat{f}}
\def\hat{g}{\hat{g}}
\def\hat{h}{\hat{h}}
\def\hat{i}{\hat{i}}
\def\hat{j}{\hat{j}}
\def\hat{k}{\hat{k}}
\def\hat{l}{\hat{l}}
\def\hat{m}{\hat{m}}
\def\hat{n}{\hat{n}}
\def\hat{o}{\hat{o}}
\def\hat{p}{\hat{p}}
\def\hat{q}{\hat{q}}
\def\hat{r}{\hat{r}}
\def\hat{s}{\hat{s}}
\def\hat{t}{\hat{t}}
\newcommand\tth{\hat{t}}
\def\hat{u}{\hat{u}}
\def\hat{v}{\hat{v}}
\def\hat{w}{\hat{w}}
\def\hat{x}{\hat{x}}
\def\hat{y}{\hat{y}}
\def\hat{z}{\hat{z}}
\newcommand\Ah{\hat{A}}
\newcommand\Bh{\hat{B}}
\newcommand\Ch{\hat{C}}
\newcommand\Dh{\hat{D}}
\newcommand\Eh{\hat{E}}
\newcommand\Hh{\hat{H}}
\newcommand\Ih{\hat{I}}
\newcommand\Lh{\hat{L}}
\newcommand\Ph{\hat{P}}
\newcommand\Qh{\hat{Q}}
\newcommand\Th{\hat{T}}
\newcommand\Uh{\hat{U}}
\newcommand\Vh{\hat{V}}
\newcommand\Wh{\hat{W}}
\def\bar{a}{\bar{a}}
\def\bar{b}{\bar{b}}
\def\bar{c}{\bar{c}}
\def\bar{d}{\bar{d}}
\def\bar{e}{\bar{e}}
\def\bar{f}{\bar{f}}
\def\bar{g}{\bar{g}}
\def\bar{h}{\bar{h}}
\def\bar{i}{\bar{i}}
\def\bar{j}{\bar{j}}
\def\bar{k}{\bar{k}}
\def\bar{l}{\bar{l}}
\def\bar{m}{\bar{m}}
\def\bar{n}{\bar{n}}
\def\bar{o}{\bar{o}}
\def\bar{p}{\bar{p}}
\def\bar{q}{\bar{q}}
\def\bar{r}{\bar{r}}
\def\bar{s}{\bar{s}}
\def\bar{t}{\bar{t}}
\def\bar{u}{\bar{u}}
\def\bar{v}{\bar{v}}
\def\bar{w}{\bar{w}}
\def\bar{x}{\bar{x}}
\def\bar{y}{\bar{y}}
\def\zb{\bar{z}}
\def\bar{A}{\bar{A}}
\def\bar{B}{\bar{B}}
\def\bar{C}{\bar{C}}
\def\bar{D}{\bar{D}}
\def\bar{E}{\bar{E}}
\def\bar{F}{\bar{F}}
\def\bar{G}{\bar{G}}
\def\bar{H}{\bar{H}}
\def\bar{I}{\bar{I}}
\def\bar{J}{\bar{J}}
\def\bar{K}{\bar{K}}
\def\bar{L}{\bar{L}}
\def\bar{M}{\bar{M}}
\def\bar{N}{\bar{N}}
\def\bar{O}{\bar{O}}
\def\bar{P}{\bar{P}}
\def\bar{Q}{\bar{Q}}
\def\bar{R}{\bar{R}}
\def\bar{S}{\bar{S}}
\def\bar{T}{\bar{T}}
\def\bar{U}{\bar{U}}
\def\bar{V}{\bar{V}}
\def\bar{W}{\bar{W}}
\def\bar{X}{\bar{X}}
\def\bar{Y}{\bar{Y}}
\def\bar{Z}{\bar{Z}}
\def\bar{C}{\bar{C}}
\def\bar{T}{\bar{T}}
\def\bar{Z}{\bar{Z}}
\def\tilde{a}{\tilde{a}}
\def\tilde{b}{\tilde{b}}
\def\tilde{c}{\tilde{c}}
\def\tilde{d}{\tilde{d}}
\def\tilde{e}{\tilde{e}}
\def\tilde{f}{\tilde{f}}
\def\tilde{g}{\tilde{g}}
\def\tilde{h}{\tilde{h}}
\def\tilde{i}{\tilde{i}}
\def\tilde{j}{\tilde{j}}
\def\tilde{k}{\tilde{k}}
\def\tilde{l}{\tilde{l}}
\def\tilde{m}{\tilde{m}}
\def\tilde{n}{\tilde{n}}
\def\tilde{o}{\tilde{o}}
\def\tilde{p}{\tilde{p}}
\def\tilde{q}{\tilde{q}}
\def\tilde{r}{\tilde{r}}
\def\tilde{s}{\tilde{s}}
\def\tilde{t}{\tilde{t}}
\def\tilde{u}{\tilde{u}}
\def\tilde{v}{\tilde{v}}
\def\tilde{w}{\tilde{w}}
\def\tilde{x}{\tilde{x}}
\def\tilde{y}{\tilde{y}}
\def\tilde{z}{\tilde{z}}
\def\At{\tilde{A}}
\def\Lt{\tilde{L}}
\def\Pt{\tilde{P}}
\def\Qt{\tilde{Q}}
\def\Rt{\tilde{R}}
\def\St{\tilde{S}}
\def\Vt{\tilde{V}}
\def\Wt{\tilde{W}}
\def\Xt{\tilde{X}}
\def\Yt{\tilde{Y}}
\def\Zt{\tilde{Z}}
\def\tA{\tilde{A}}
\def\tL{\tilde{L}}
\def\tS{\tilde{S}}
\def\tV{\tilde{V}}
\def\tW{\tilde{W}}
\def\tX{\tilde{X}}
\def\tY{\tilde{Y}}
\def\tZ{\tilde{Z}}
\def\ebf { {\bf e}}
\def\fbf { {\bf f}}
\def\gbf { {\bf g}}
\def\hbf { {\bf h}}
\def\mbf { {\bf m}}
\def\pbf { {\bf p}}
\def\qbf { {\bf q}}
\def\rbf { {\bf r}}
\def\sbf { {\bf s}}
\def\ubf { {\bf u}}
\def\vbf { {\bf v}}
\def\wbf { {\bf w}}
\def\xbf { {\bf x}}
\def\ybf { {\bf y}}
\def\zbf { {\bf z}}
\def\hebf {\hat {{\bf e} }}
\def\hfbf {\hat {{\bf f} }}
\def\hgbf {\hat {{\bf g} }}
\def\hhbf {\hat {{\bf h} }}
\def\hat {{\bf p} } {\hat {{\bf p} }}
\def\hqbf {\hat {{\bf q} }}
\def\hrbf {\hat {{\bf r} }}
\def\hsbf {\hat {{\bf s} }}
\def\hubf {\hat {{\bf u} }}
\def\hvbf {\hat {{\bf v} }}
\def\hwbf {\hat {{\bf w} }}
\def\hxbf {\hat {{\bf x} }}
\def\hybf {\hat {{\bf y} }}
\def\hzbf {\hat {{\bf z} }}
\def\bebf {\bar {{\bf e} }}
\def\bfbf {\bar {{\bf f} }}
\def\bgbf {\bar {{\bf g} }}
\def\bhbf {\bar {{\bf h} }}
\def\bar {{\bf p} } {\bar {{\bf p} }}
\def\bqbf {\bar {{\bf q} }}
\def\brbf {\bar {{\bf r} }}
\def\bsbf {\bar {{\bf s} }}
\def\bubf {\bar {{\bf u} }}
\def\bvbf {\bar {{\bf v} }}
\def\bwbf {\bar {{\bf w} }}
\def\bxbf {\bar {{\bf x} }}
\def\bybf {\bar {{\bf y} }}
\def\bzbf {\bar {{\bf z} }}
\def\tebf {\tilde {{\bf e} }}
\def\tfbf {\tilde {{\bf f} }}
\def\tgbf {\tilde {{\bf g} }}
\def\thbf {\tilde {{\bf h} }}
\def\tilde {{\bf p} } {\tilde {{\bf p} }}
\def\tqbf {\tilde {{\bf q} }}
\def\trbf {\tilde {{\bf r} }}
\def\tsbf {\tilde {{\bf s} }}
\def\tubf {\tilde {{\bf u} }}
\def\tvbf {\tilde {{\bf v} }}
\def\twbf {\tilde {{\bf w} }}
\def\txbf {\tilde {{\bf x} }}
\def\tybf {\tilde {{\bf y} }}
\def\tzbf {\tilde {{\bf z} }}
\def{\tilde {\mathcal A}}{{\tilde {\mathcal A}}}
\def{\mathcal A}{{\mathcal A}}
\def{\mathcal B}{{\mathcal B}}
\def{\mathcal C}{{\mathcal C}}
\def{\mathcal D}{{\mathcal D}}
\def{\mathcal E}{{\mathcal E}}
\def{\mathcal F}{{\mathcal F}}
\def{\mathcal G}{{\mathcal G}}
\def{\mathcal H}{{\mathcal H}}
\def{\mathcal I}{{\mathcal I}}
\def{\mathcal J}{{\mathcal J}}
\def{\mathcal K}{{\mathcal K}}
\def{\mathcal L}{{\mathcal L}}
\def{\mathcal M}{{\mathcal M}}
\def{\mathcal N}{{\mathcal N}}
\def{\mathcal O}{{\mathcal O}}
\def{\mathcal P}{{\mathcal P}}
\def{\mathcal Q}{{\mathcal Q}}
\def{\mathcal R}{{\mathcal R}}
\def{\mathcal S}{{\mathcal S}}
\def{\mathcal T}{{\mathcal T}}
\def{\mathcal U}{{\mathcal U}}
\def{\mathcal V}{{\mathcal V}}
\def{\mathcal W}{{\mathcal W}}
\def{\mathcal X}{{\mathcal X}}
\def{\mathcal Y}{{\mathcal Y}}
\def{\mathcal Z}{{\mathcal Z}}
\def{\mathcal A}{{\mathcal A}}
\def{\mathcal F}{{\mathcal F}}
\def{\mathcal L}{{\mathcal L}}
\def\mathbb{P}{{\mathcal P}}
\def\mathbb{Q}{{\mathcal Q}}
\def\mathbb{R}{{\mathcal R}}
\def{\mathcal V}{{\mathcal V}}
\def{\mathcal W}{{\mathcal W}}
\def{\mathcal O}{{\mathcal O}}
\def\widetilde{\cal P}{\widetilde{\cal P}}
\def\mathcal{A}{\mathcal{A}}
\def\mathcal{B}{\mathcal{B}}
\def\mathcal{C}{\mathcal{C}}
\def\mathcal{D}{\mathcal{D}}
\def\mathcal{E}{\mathcal{E}}
\def\mathcal{F}{\mathcal{F}}
\def\mathcal{G}{\mathcal{G}}
\def\mathcal{H}{\mathcal{H}}
\def\mathcal{I}{\mathcal{I}}
\def\mathcal{J}{\mathcal{J}}
\def\mathcal{K}{\mathcal{K}}
\def\mathcal{L}{\mathcal{L}}
\def\mathcal{M}{\mathcal{M}}
\def\mathcal{N}{\mathcal{N}}
\def\mathcal{P}{\mathcal{P}}
\def\mathcal{Q}{\mathcal{Q}}
\def\mathcal{R}{\mathcal{R}}
\def\mathcal{S}{\mathcal{S}}
\def\mathcal{T}{\mathcal{T}}
\def\mathcal{U}{\mathcal{U}}
\def\mathcal{V}{\mathcal{V}}
\def\mathcal{W}{\mathcal{W}}
\def\mathcal{X}{\mathcal{X}}
\def\mathcal{Y}{\mathcal{Y}}
\def\mathcal{Z}{\mathcal{Z}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\mathcal{V}}{\mathcal{V}}
\newcommand{\overline{\mathcal H}}{\overline{\mathcal H}}
\def\overline{\bf H}{\overline{\bf H}}
\newcommand{\overline{X}}{\overline{X}}
\newcommand{\overline{ Y}}{\overline{ Y}}
\def\underline{a}{\underline{a}}
\def\underline{x}{\underline{x}}
\def\underline{V}{\underline{V}}
\def\varepsilon{\varepsilon}
\def{\Omega}{{\Omega}}
\def{\Omegab}{{\Omegab}}
\def{\omega}{{\omega}}
\def\lambda{\lambda}
\def\hat{\alpha}{\hat{\alpha}}
\def\hat{\beta}{\hat{\beta}}
\def\hat{\ell}{\hat{\ell}}
\def\hat{\eta}{\hat{\eta}}
\def\hat{\gamma}{\hat{\gamma}}
\def\hat{\kappa}{\hat{\kappa}}
\def\hat{\lambda}{\hat{\lambda}}
\def\hat{\lambda}{\hat{\lambda}}
\def\hat{\Lambda}{\hat{\Lambda}}
\def\hat{\mu}{\hat{\mu}}
\def\hat{\nu}{\hat{\nu}}
\def{\hat\pi}{{\hat\pi}}
\def{\hat\Phi}{{\hat\Phi}}
\def{\hat\Phi}{{\hat\Phi}}
\def{\hat\psi}{{\hat\psi}}
\def{\hat\Psi}{{\hat\Psi}}
\def{\hat\Psi}{{\hat\Psi}}
\def{\hat\rho}{{\hat\rho}}
\def{\hat\sigma}{{\hat\sigma}}
\def{\hat\tau}{{\hat\tau}}
\def{\hat\theta}{{\hat\theta}}
\def{\hat\xi}{{\hat\xi}}
\def\bar{\alpha}{\bar{\alpha}}
\def\bar{\beta}{\bar{\beta}}
\def\bar{\chi}{\bar{\chi}}
\def\bar{\gamma}{\bar{\gamma}}
\def\bar{\delta}{\bar{\delta}}
\def\bar{\ell}{\bar{\ell}}
\def\bar{\varepsilon}{\bar{\varepsilon}}
\def\bar{\eta}{\bar{\eta}}
\def\bar{\kappa}{\bar{\kappa}}
\def\bar{\lambda}{\bar{\lambda}}
\def\bar{\lambda}{\bar{\lambda}}
\def\bar{\Lambda}{\bar{\Lambda}}
\def\bar{\mu}{\bar{\mu}}
\def\bar{\nu}{\bar{\nu}}
\def{\bar\pi}{{\bar\pi}}
\def{\bar\Phi}{{\bar\Phi}}
\def{\bar\phi}{{\bar\phi}}
\def{\bar\psi}{{\bar\psi}}
\def{\bar\Psi}{{\bar\Psi}}
\def{\hat\Psi}{{\hat\Psi}}
\def{\bar\sigma}{{\bar\sigma}}
\def{\bar\tau}{{\bar\tau}}
\def{\bar\theta}{{\bar\theta}}
\def{\bar\varphi}{{\bar\varphi}}
\def{\bar\xi}{{\bar\xi}}
\def{\bar\zeta}{{\bar\zeta}}
\def\tilde{\alpha}{\tilde{\alpha}}
\def\tilde{\beta}{\tilde{\beta}}
\def\tilde{\ell}{\tilde{\ell}}
\def\tilde{\eta}{\tilde{\eta}}
\def\tilde{\kappa}{\tilde{\kappa}}
\def\tilde{\lambda}{\tilde{\lambda}}
\def\tilde{\lambda}{\tilde{\lambda}}
\def\tilde{\Lambda}{\tilde{\Lambda}}
\def\tilde{\mu}{\tilde{\mu}}
\def\tilde{\nu}{\tilde{\nu}}
\def\tilde{\omega}{\tilde{\omega}}
\def\tilde{\Omega}{\tilde{\Omega}}
\def{\tilde\pi}{{\tilde\pi}}
\def{\tilde\Phi}{{\tilde\Phi}}
\def{\tilde\Phi}{{\tilde\Phi}}
\def{\tilde\psi}{{\tilde\psi}}
\def{\tilde\Psi}{{\tilde\Psi}}
\def{\hat\Psi}{{\hat\Psi}}
\def{\tilde\sigma}{{\tilde\sigma}}
\def{\tilde\tau}{{\tilde\tau}}
\def{\tilde\theta}{{\tilde\theta}}
\def{\tilde\theta}{{\tilde\theta}}
\def{\tilde\xi}{{\tilde\xi}}
\def{\tilde\varphi}{{\tilde\varphi}}
\def\boldsymbol\alpha {\boldsymbol\alpha}
\def\bar{\boldsymbol\alpha} {\bar{\boldsymbol\alpha}}
\def\bar{\boldsymbol\eta} {\boldsymbol\beta}
\def\bar{\boldsymbol\beta} {\bar{\boldsymbol\beta}}
\def\boldsymbol\eta {\boldsymbol\eta}
\def\bar{\boldsymbol\eta} {\bar{\boldsymbol\eta}}
\def\boldsymbol\mu {\boldsymbol\mu}
\def\bar{\boldsymbol\lambda} {\bar{\boldsymbol\lambda}}
\def\bar{\boldsymbol\mu} {\bar{\boldsymbol\mu}}
\def\boldsymbol\pi {\boldsymbol\pi}
\def\bar{\boldsymbol\pi} {\bar{\boldsymbol\pi}}
\def\boldsymbol\xi {\boldsymbol\xi}
\def\bar{\boldsymbol\xi} {\bar{\boldsymbol\xi}}
\def\boldsymbol\tau {\boldsymbol\tau}
\def{\bf 1}{{\bf 1}}
\newcommand{\ceil}[1]{\lceil #1 \rceil}
\newcommand{\floor}[1]{\lfloor #1 \rfloor}
\def\mathop{\stackrel{\rightharpoonup}{\subset}}{\mathop{\stackrel{\rightharpoonup}{\subset}}}
\def\mathop{\rm ad}{\mathop{\rm ad}}
\def\mathop{\rm affhull}{\mathop{\rm affhull}}
\def\mathop{\rm argmin}{\mathop{\rm argmin}}
\def\mathop{\rm argmax}{\mathop{\rm argmax}}
\def\mathop{\rm cl}{\mathop{\rm cl}}
\newcommand\Comp{\mathop{\rm Comp}}
\def\mathop{\rm \overline{affhull}}{\mathop{\rm \overline{affhull}}}
\def\mathop{\rm cone}{\mathop{\rm cone}}
\def\mathop{\rm core}{\mathop{\rm core}}
\newcommand\conebar{\mathop{\rm \overline{cone}}}
\def\mathop{\rm conv}{\mathop{\rm conv}}
\def\mathop{\rm \overline{conv}}{\mathop{\rm \overline{conv}}}
\def\mathop{\rm deg}{\mathop{\rm deg}}
\def\mathop{\rm det}{\mathop{\rm det}}
\def\mathop{\rm det}{\mathop{\rm det}}
\def{\mathop{\rm diag}}{{\mathop{\rm diag}}}
\newcommand\diam{\mathop{\rm diam}}
\def\mathop{\rm dist}{\mathop{\rm dist}}
\def\mathop{\rm div}{\mathop{\rm div}}
\def\mathop{{\rm dom}}{\mathop{{\rm dom}}}
\def\mathop{\text{epi}}{\mathop{\text{epi}}}
\newcommand{\mathop{\rm ess}}{\mathop{\rm ess}}
\def\mathop{\rm essinf}{\mathop{\rm essinf}}
\def\mathop{\rm esssup}{\mathop{\rm esssup}}
\def\mathop{\rm esssup}{\mathop{\rm esssup}}
\def\mathop{\rm int}{\mathop{\rm int}}
\def\mathop{\rm inv}{\mathop{\rm inv}}
\def\mathop{\rm Isom}{\mathop{\rm Isom}}
\def\mathop{\rm lin}{\mathop{\rm lin}}
\newcommand\Jac{{\mathop{\rm Jac}}}
\newcommand\Lip{\mathop{\rm Lip}}
\newcommand\snorm{\|}
\newcommand\lnlt{\mathop{\rm lnlt}}
\newcommand\psicirc{ \psi}
\def\mathop{\rm Im}{\mathop{\rm Im}}
\def\mathop{\rm rec}{\mathop{\rm rec}}
\def\mathop{\rm rint}{\mathop{\rm rint}}
\def\mathop{\rm rint}{\mathop{\rm rint}}
\def\mathop{\rm sign}{\mathop{\rm sign}}
\def\mathop{\rm supp}{\mathop{\rm supp}}
\def\mathop{\rm trace}{\mathop{\rm trace}}
\def\mathop{\rm val}{\mathop{\rm val}}
\def\mathop{\rm var}{\mathop{\rm var}}
\def\mathop{{\rm width}}{\mathop{{\rm width}}}
\def\mathop{\rm View}{\mathop{\rm View}}
\def\mathop{\rm inf}{\mathop{\rm inf}}
\def\mathop{\rm Ker}{\mathop{\rm Ker}}
\def\mathop{\rm min}{\mathop{\rm min}}
\def\mathop{\rm max}{\mathop{\rm max}}
\def\mathop{\delta w}{\mathop{\delta w}}
\def\mathop{\delta \Psi}{\mathop{\delta \Psi}}
\def\mathop{\underline{\lim}}{\mathop{\underline{\lim}}}
\def\mathop{\overline{\lim}}{\mathop{\overline{\lim}}}
\def\mbox{$\frac{1}{2}$}{\mbox{$\frac{1}{2}$}}
\def\mbox{$\frac{1}{3}$}{\mbox{$\frac{1}{3}$}}
\def\mbox{$\frac{1}{4}$}{\mbox{$\frac{1}{4}$}}
\def\mbox{$\frac{1}{6}$}{\mbox{$\frac{1}{6}$}}
\def{\bf 1}{{\bf 1}}
\def\sbdeux#1#2{\mbox{\scriptsize$#1$}\atop\mbox{\scriptsize$#2$}}
\def\sbtrois#1#2#3{\mbox{\scriptsize$#1$}\atop\mbox{\scriptsize$#2$}\atop\mbox{\scriptsize$#3$}}
\newcommand{\mathbb{E}}{\mathbb{E}}
\newcommand{\mathbb{F}}{\mathbb{F}}
\newcommand{\mathbb{N}}{\mathbb{N}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\def\mathbb{C}{\mathbb{C}}
\def\mathbb{N}{\mathbb{N}}
\def\mathbb{P}{\mathbb{P}}
\def\mathbb{Q}{\mathbb{Q}}
\def\mathbb{Q}{\mathbb{Q}}
\def\mathbb{R}{\mathbb{R}}
\def\mathbb{Z}{\mathbb{Z}}
\newcommand{{\mathbb C}}{{\mathbb C}}
\newcommand{{\mathbb H}}{{\mathbb H}}
\newcommand{{\mathbb I}}{{\mathbb I}}
\newcommand{{\mathbb K}}{{\mathbb K}}
\newcommand{{\mathbb N}}{{\mathbb N}}
\newcommand{{\mathbb O}}{{\mathbb O}}
\newcommand{{\mathbb P}}{{\mathbb P}}
\newcommand{{\mathbb Q}}{{\mathbb Q}}
\newcommand{{\mathbb R}}{{\mathbb R}}
\newcommand{{\mathbb S}}{{\mathbb S}}
\newcommand{{\mathbb Z}}{{\mathbb Z}}
\newcommand{\mathsf{A}}{\mathsf{A}}
\newcommand{\mathsf{E}}{\mathsf{E}}
\newcommand{\mathsf{F}}{\mathsf{F}}
\newcommand{\mathsf{G}}{\mathsf{G}}
\newcommand{\mathsf{N}}{\mathsf{N}}
\newcommand{\mathsf{T}}{\mathsf{T}}
\newcommand{\mathsf{V}}{\mathsf{V}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{I\!\!E}{I\!\!E}
\newcommand{\mathbb{\bar{R}}}{\mathbb{\bar{R}}}
\newcommand{\overline{\mathbb{R}}}{\overline{\mathbb{R}}}
\newcommand{\rbar^{_{\scriptstyle\sN}}}{\overline{\mathbb{R}}^{_{\scriptstyle\mathcal{N}}}}
\newcommand\be{\begin{equation}}
\newcommand\ee{\end{equation}}
\newcommand\ba{\begin{array}}
\newcommand\ea{\end{array}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\begin{eqnarray*}}{\begin{eqnarray*}}
\newcommand{\end{eqnarray*}}{\end{eqnarray*}}
\newenvironment{paoplist}{\vspace{-1ex}\begin{list}{-}
{\itemsep 0mm \leftmargin 2mm \labelwidth 0mm}}
{\end{list} \vspace{-1ex}}
\newenvironment{myenumerate}{
\renewcommand{\theenumi}{\roman{enumi}}
\renewcommand{\labelenumi}{(\theenumi)}
\begin{enumerate}}{\end{enumerate}}
\newcommand{\noindent}{\noindent}
\newcommand{\bigskip}{\bigskip}
\newcommand{\medskip}{\medskip}
\newcommand{{\bigskip \noindent}}{{\bigskip \noindent}}
\def\marginpar{$\leftarrow$}{\marginpar{$\leftarrow$}}
\def\rightarrow{\rightarrow}
\def\rightarrow{\rightarrow}
\def\displaystyle{\displaystyle}
\def\displaystyle{\displaystyle}
\def\langle{\langle}
\def\langle{\langle}
\def\rangle{\rangle}
\newcommand{\mypsfig}[3]
{\begin{figure}[hbtp]
\centerline{\input #1}
\caption{\rm{#2}} \label{#3}
\end{figure}}
\section{Introduction}
\label{intro}
\input{Introduction.tex}
\section{Statement of the Problem and Assumptions}
\label{problem_statement}
\input{ProblemStatement.tex}
\section{The Equivalent Differential-Algebraic System}
\label{differential_algebraic}
\input{DifferentialAlgebraic_test.tex}
\section{The Shooting Algorithm}
\label{shooting_algorithm}
\input{ShootingAlg.tex}
\section{Second Order Optimality Conditions}
\label{second_order}
\input{SecondOrderOptCond.tex}
\section{Convergence of the Shooting Algorithm}
\label{converge_shooting}
\input{Convergence.tex}
\section{Control-constrained Problems}
\label{simpler_control_constraints}
\input{control_constraints.tex}
\section{Examples}
\label{Section_examples}
\subsection{Degenerate Linear Quadratic Problem}
\label{degen_lin_quad}
\input{degen_lin_quad.tex}
\subsection{Optimal Control of an SIRS Epidemiological Model}
\label{SIRS_OC}
\input{SIR.tex}
\section{Conclusion}
\label{conclusion}
\input{conclusion.tex}
\section*{References}
\bibliographystyle{unsrt}
\section{Introduction}\label{sec:intro}
Lattice QCD is proving to be a powerful and precise tool for quantitative studies in a wide range of non-perturbative hadronic processes in general and in flavour physics and the extraction of CKM matrix elements from experimental measurements in particular.
Until relatively recently almost all lattice simulations were performed in isosymmetric QCD, i.e. neglecting electromagnetic interactions and with equal up and down quark masses ($m_u=m_d\equiv m_{ud}$). Over the past decade, however, the precision of lattice computations of hadronic quantities relevant for flavour physics phenomenology has reached such an impressive level that both electromagnetic and strong isospin-breaking effects can no longer be neglected. For a review of recent results and references to the original literature see the latest report from the \emph{Flavour Physics Lattice Averaging Group, FLAG}\cite{Aoki:2019cca}. The aim of this report is to discuss the theoretical issues which arise when isospin-breaking effects, and electromagnetic corrections in particular, are included, and to review the development and implementation of the framework which, together with colleagues from Rome, we have been developing to handle these issues.
Isospin breaking effects are given in terms of two small parameters $O(\alpha_{\mathrm{em}})$ and
$(m_d-m_u)/\Lambda_{\mathrm{QCD}}$, each of which as a first approximation we take to be of $O(1\%)$ (unless there are particular reasons to expect an enhancement or suppression of these effects). In this review, I will follow the RM123 approach of Refs.\cite{deDivitiis:2013xla,Giusti:2017dmp}, in which physical observables are evaluated at first order in these two small parameters. Alternatively, one might add QED directly to the action and perform QCD+QED simulations at a number of values of the electric charge (see, for example, Refs.\cite{Borsanyi:2014jba,Boyle:2017gzv}).
An advantage of the RM123 method is that the two small expansion parameters are factorised out, so that one can get relatively large numerical signals for the corrections, computed directly in isosymmetric QCD.
Formulating QED in a finite spatial box ($V=L^3$) raises some significant issues. For example, with the frequently used periodic boundary conditions Gauss' Law is not satisfied for a charged particle in the box. The electric flux across the boundary is zero in this situation. There have been a number of proposed approaches to circumvent or mitigate this.
A pragmatic approach, and one which we will follow in the following discussion, is to implement the QED$_\mathrm{L}$~ formulation, defined by omitting the three-momentum zero mode from the sum over the photon's momentum $\vec{k}$\cite{Hayakawa:2008an,Borsanyi:2014jba}:
\begin{equation}\label{eq:QEDLdef}
\int\frac{\dthree k}{(2\pi)^3}f(\vec{k})\to\frac1{L^3}\sum_{\vec{k}\neq\vec{0}}f(\vec{k})\,,
\end{equation}
where $f(\vec{k})$ is some function of $\vec{k}$.
On the left-hand side of Eq.\,(\ref{eq:QEDLdef}) we have the infinite-volume integral over the photon's momentum, which corresponds to the physical result we are attempting to derive. On the right-hand side, with periodic boundary conditions the sum is over the discrete momenta $\vec{k}=(2\pi/L)\vec{n}$, where $\vec{n}$ is a vector of integers, with the contribution from $\vec{n}=\vec{0}$ omitted. The key question is what is the difference between the lattice results obtained using the QED$_\mathrm{L}$~ regulator of the zero mode (right-hand side of (\ref{eq:QEDLdef})) and the physical result (left-hand side of (\ref{eq:QEDLdef})). We will address this question in the following, but in general the difference decreases only as inverse powers of the volume, and not exponentially.
We note that other interesting approaches to formulating QED in a finite volume include the use of $C^\ast$ boundary conditions which allow for a non-zero electric flux across the boundary of the volume\cite{Wiese:1991ku,Polley:1993bn,Lucini:2015hfa,Hansen:2018zre} and the infinite-volume reconstruction method in which correlation functions at large time separations are obtained from computations at moderate separations with only exponentially small finite-volume corrections\cite{Feng:2018qpx,Feng:2020mmb}.
Accurate lattice results including electromagnetic and strong isospin breaking effects have been obtained for the hadron spectrum, for example for the mass splitting between charged and neutral pseudoscalar mesons and baryons (see, e.g., Refs.\cite{deDivitiis:2013xla,Borsanyi:2014jba}). The calculation of electromagnetic effects in the hadron spectrum does not suffer from the presence of infrared divergences. The same is not true, however, in the case of hadronic amplitudes, where electromagnetic infrared divergences are present and cancel for well-defined, measurable physical quantities only after including diagrams containing both real and virtual photons~\cite{BN37}.
This is the case, for example, for the leptonic decays $\pi_{\ell 2}$ (i.e. $\pi\to\ell\bar{\nu}_\ell$, where $\ell$ is a charged lepton, $e$ or $\mu$)
and $K_{\ell 2}$ as well as the semileptonic $K_{\ell 3}$ decays (i.e. $K\to\pi\ell\bar{\nu}_\ell$). These decays play a central role in the accurate determination of the Cabibbo-Kobayashi-Maskawa (CKM) entries $|V_{us} / V_{ud}|$ and $|V_{us}|$\cite{CKM}.
In a recent series of papers we have developed a framework for the evaluation of first-order isospin-breaking corrections to leptonic decays of pseudoscalar mesons\cite{Carrasco:2015xwa}, calculated the corresponding finite-volume corrections up to and including $O(1/(m_PL))$ (where $m_P$ is the mass of the meson and the spatial volume of the lattice is $V=L^3$) using the QED$_\mathrm{L}$ regulator of the zero mode in the photon propagator\cite{Lubicz:2016xro} and successfully implemented the framework in the study of the leptonic decays $K,\pi\to\mu\nu_\mu(\gamma)$\cite{Giusti:2017dwk,DiCarlo:2019thl}. The theoretical framework will be summarised in the sections below together with a sketch of the numerical results.
The plan for the remainder of this paper is as follows. In the following section we look at the question of what is meant by isospin breaking corrections and how one might calculate them in principle. This may seem to be a surprising question but at the level of the $O(1\%)$ effects we are considering it is necessary to define what we mean by QCD and in particular what the quark and hadron masses are in QCD without QED. In Sec.\,\ref{sec:IR} we discuss infrared divergences which are present in the leptonic and semileptonic processes discussed later and this is followed in Sec.\,\ref{sec:FV} by a discussion of the finite-volume corrections in the QED$_\mathrm{L}$~ formulation. Sections\,\ref{sec:leptonic} and \ref{sec:semileptonic} contain the applications of the framework to leptonic and semileptonic decays respectively. Sec.\,\ref{sec:concs} contains a brief summary and conclusions.
The material presented below is intended to be an introduction for a general theory audience to the problem of including isospin breaking effects, and electromagnetic corrections in particular, in a finite Euclidean volume. Although the motivation for these studies is to include isospin breaking effects in lattice computations, which are necessarily performed in a finite volume, the focus will be on the long-distance aspects rather than on ultra-violet issues associated with the finite lattice spacing. Moreover, although we do discuss and include strong isospin breaking effects in the computations, the principal theoretical difficulties concern the inclusion of the propagator of a zero-mass photon and so naturally most of the presentation is devoted to this.
\section{What is meant by isospin breaking corrections?}
\label{sec:QEDQCD}
When performing lattice QCD computations, with or without QED corrections, we need to choose a discretisation of the field theory and the numerical values of the parameters of the Standard Model, the masses and coupling constants. ``Physical'' values of the bare quark masses are determined
by requiring that the results for a chosen set of physical quantities correctly reproduce their experimentally measured values.
It is important to note that, once isospin breaking effects, including electromagnetism, are introduced into QCD computations, it is only the full QCD+QED theory which is unambiguous. Strong isospin breaking implies that there is a difference in the masses of the up and down quarks, $m_d\neq m_u$. However since the electric charges of the $u$ and $d$ quarks are different, electromagnetic corrections themselves induce a difference between $m_u$ and $m_d$, so that
asking the question of how much of the isospin breaking, not only for the quark masses but in general, is attributed to different input masses in QCD and how much to electromagnetism cannot be answered without introducing a prescription. Physical results, of course, must be independent of the prescription.
In this section I describe how the quark masses and the lattice spacing, $a$, are determined using lattice QCD, both in isosymmetric QCD (Sec.\,\ref{subsec:renormalisationQCD}) and in the \emph{full theory} in which both electromagnetism and strong isospin breaking are included (Sec.\,\ref{subsec:renormalisationfull}). In Sec.\,\ref{subsec:hadronicschemes} I explain how QCD and hence isospin breaking corrections might be defined in the full theory. The discussion in this section follows Sec.II of Ref.\!\cite{DiCarlo:2019thl} where more details can be found.
\subsection{Calibrating the lattice in isosymmetric QCD}\label{subsec:renormalisationQCD}
Imagine that we wish to compute some physical quantities in a lattice QCD computation with $N_f=2+1+1$ flavours of quarks, in the isosymmetric limit, i.e. with $m_u=m_d\equiv m_{ud}$ and without including electromagnetic effects. To perform the computations it is necessary to choose a value for the (dimensionless) strong coupling constant $g_s(a)$ and the corresponding parameter is then the (dimensionful) lattice spacing $a$. The four parameters to be determined are the bare quark masses $m_{ud}, m_s, m_c$ and the lattice spacing $a$.
This requires us to sacrifice the possibility of making predictions for four physical quantities and instead imagine tuning the bare quark masses in the lattice QCD action to ensure that the lattice results for these quantities reproduce their physical values. To illustrate the procedure imagine that we have found values of $m_{ud},\, m_s$ and $m_c$ such that the dimensionless ratios
\begin{equation}\label{eq:isosymmetricratios}
\frac{am_{\pi^0}}{am_\Omega},\quad\frac{am_{K^0}}{am_\Omega},\quad\frac{am_{D^0}}{am_\Omega}
\end{equation}
reproduce the values in the Particle Data Group\cite{Zyla:2020zbs}. At this value of $g_s(a)$ we will use these quark masses to determine all other physical quantities in which we are interested.
In the numerators and denominators of Eq.\,(\ref{eq:isosymmetricratios}), the hadron masses are written in the form $am_H$ to underline the point that they are obtained in lattice units from the computations.
In order to determine the lattice spacing, we need to compare the lattice result for a dimensionful quantity, for example the mass of the $\Omega$ baryon in lattice units ($am_\Omega$) with its physical value in conventional units such as GeV:
\begin{equation}\label{eq:isosymmetrica}
a=\frac{am_\Omega}{m_\Omega^{\mathrm{phys}}}=\frac{am_\Omega}{1.672\,\mathrm{GeV}}\,.
\end{equation}
(In Sec.\,\ref{subsec:hadronicschemes} the lattice spacing obtained using this procedure will be denoted by $a_0^\mathrm{ISO}$ where the subscript $0$ denotes that it has been obtained in QCD without QED and the superscript {\footnotesize ISO} indicates that $m_u=m_d$.)
Having determined the bare-quark masses and the lattice spacing in this way, other physical quantities can be computed. They will be subject to systematic uncertainties, including discretisation errors (``lattice artefacts''), which are proportional to $a^2$ in most currently used lattice formulations of QCD.
The choice of the ratios in Eq.\,(\ref{eq:isosymmetricratios}) to determine the ``physical'' quark masses, and the use of $m_\Omega$ to set the scale, is convenient and introduced for illustration. It is certainly not unique; any other four suitable physical quantities can be used for this calibration instead.
The presentation in this subsection describes an idealised situation in which we can afford to perform a scan of the results with different input quark masses to determine the ones which reproduce the ratios in Eq.\,(\ref{eq:isosymmetricratios}) correctly. In practice this is not possible and some level of interpolation and extrapolation is necessary.
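The tuning described above can be sketched as a small root-finding exercise. In the toy example below, every mass formula, coefficient and target value is invented, chosen only to make the system well posed; three bare masses are adjusted by Newton iteration until three dimensionless ratios reproduce prescribed targets:

```python
# Toy calibration: tune (m_ud, m_s, m_c) so that three "hadron mass" ratios
# hit their targets.  The linear mass formulae below are pure invention.
import numpy as np

def ratios(m):
    m_ud, m_s, m_c = m
    am_pi    = 0.06 + 5.0 * m_ud
    am_K     = 0.20 + 2.5 * (m_ud + m_s)
    am_D     = 0.80 + 1.0 * m_c
    am_Omega = 0.70 + 3.0 * m_s
    return np.array([am_pi, am_K, am_D]) / am_Omega

m_true  = np.array([0.002, 0.040, 0.500])    # "physical" toy bare masses
targets = ratios(m_true)

m = np.array([0.004, 0.030, 0.400])          # initial guess
for _ in range(20):                          # Newton with a numerical Jacobian
    r = ratios(m) - targets
    J = np.empty((3, 3))
    for j in range(3):
        dm = np.zeros(3); dm[j] = 1e-6
        J[:, j] = (ratios(m + dm) - ratios(m - dm)) / 2e-6
    m = m - np.linalg.solve(J, r)

print(m)                                     # recovers m_true
```

In a real calibration the ``data'' $R_i(\mathbf{m})$ come from separate simulations at each set of bare masses, so the tuning proceeds by interpolation rather than by a direct solve, as noted above.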
\subsection{Calibration of the full theory}
\label{subsec:renormalisationfull}
The main difference in the steps required to calibrate the full theory (i.e. QCD+QED) compared to the procedure in isosymmetric QCD is the presence of the photon as well as the fact that $m_u\neq m_d$. The presence of the massless photon implies that the finite-volume (FV) corrections appear as inverse powers of $L$.
By contrast, in QCD without QED the FV corrections, for example those relevant for leptonic and semileptonic decays, are exponentially small in the volume.\\
A possible strategy for the determination of the quark masses and lattice spacing in principle is the following:\\[0.1cm]
1. Using a four flavour theory for illustration, choose a value of the strong coupling constant $g_s$, the bare quark masses $\mathbf{m}=\{m_u,m_d,m_s,m_c\}$ and the number of lattice points $N$ in each spatial direction, so that e.g.~$T = 2 aN$ and $L = aN$. (The specific choice $T = 2L$ is convenient for illustration but not necessary for the following argument.)\\[0.2cm]
2. In order eventually to determine the four physical bare quark masses and the lattice spacing, we compute five quantities, e.g.~the four dimensionless ratios
\begin{eqnarray}
&&\label{eq:Ri}
R_1(aN; \textbf{m}) = \frac{am_{\pi^+}}{am_\Omega}(aN; \textbf{m})\,,\quad R_2(aN; \textbf{m}) = \frac{am_{K^0}}{am_\Omega}(aN; \textbf{m})
\\
&&
R_3(aN; \textbf{m}) = \frac{am_{D_s}}{am_\Omega}(aN; \textbf{m}) \,,
\quad R_4(aN; \textbf{m}) = \frac{am_{K^+}-am_{K^0}}{am_\Omega}(aN; \textbf{m}) \,, \nonumber
\end{eqnarray}
as well as a dimensionful quantity, e.g.~the mass of the $\Omega$ baryon, computed in lattice units, from which the lattice spacing will be determined after extrapolation to the infinite volume limit (see below),
\begin{equation}
R_0(aN; \textbf{m}) = \frac{am_\Omega(aN; \textbf{m})}{m_\Omega^\textrm{phys}} ~ ,
\label{eq:spacing}
\end{equation}
where $m_\Omega^\textrm{phys} = 1.672$\,GeV is the physical value of the mass of the $\Omega$ baryon. For illustration we are considering the masses of QCD$+$QED stable pseudoscalar mesons in the numerators of the dimensionless ratios (\ref{eq:Ri}) and using $m_\Omega^{\mathrm{phys}}$ to determine the lattice spacing, but of course other quantities can be used instead.
In Eqs.\,(\ref{eq:Ri})\,-\,(\ref{eq:spacing}) we have used $a N$ instead of $L$ to highlight that the infinite-volume limit should be taken at fixed lattice spacing (see Eq.\,(\ref{eq:RiIV}) below).
\\[0.2cm]
3. Up to this point the procedure is the natural generalisation of that used in isosymmetric QCD simulations, as described in Sec.\,\ref{subsec:renormalisationQCD}. The difference here is the presence of FV effects which behave as inverse powers of $L$. We therefore envisage extrapolating the ratios $R_i$ to the infinite-volume limit:
\begin{equation}
\label{eq:RiIV}
R_i(\textbf{m}) \equiv \lim_{N \to \infty}R_i(aN; \textbf{m}) ~ , \qquad i = 0, 1, 2, 3, 4\,.
\end{equation}
\mbox{}\\[-0.15cm]
4. For a given discretisation and choice of the strong coupling constant $g_s$, the {\it physical} bare quark masses, $\textbf{m}^\textrm{phys}$, are defined by requiring that the four ratios $R_{1, 2, 3, 4}$ take their physical values
\begin{equation}\label{eq:Riphys}
R_i(\textbf{m}^\textrm{phys}(g_s)) = R_i^\textrm{phys}\,, \qquad i = 1, 2, 3, 4\,.
\end{equation}
In practice, of course, this will require some extrapolations of results obtained at different values of the bare quark masses.\\[0.2cm]
5. The lattice spacing $a$ at this value of the coupling $g_s$ is now defined to be
\begin{equation} \label{eq:spacingfull}
a = R_0(\textbf{m}^\textrm{phys}(g_s))\,.
\end{equation}
Note that with such a procedure the bare parameters and the lattice spacing $a$ do not depend on the lattice volume.\\[0.2cm]
6. At first order in isospin breaking, i.e.~${\cal{O}}(\alpha_{\mathrm{em}}, m_d - m_u)$, the renormalisation of the lepton masses is performed perturbatively, by requiring that the on-shell masses correspond to the physical ones.
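The infinite-volume extrapolation of step 3 amounts to fitting the measured ratios as a polynomial in $1/L$ and reading off the intercept. A minimal sketch with synthetic, noiseless data (the parameter values are invented):

```python
# Fit R(L) = R_inf + c1/L + c2/L^2 and extract the L -> infinity limit.
import numpy as np

R_inf, c1, c2 = 0.372, 0.15, -0.40            # invented parameters
L = np.array([24.0, 32.0, 48.0, 64.0, 96.0])  # spatial extents in units of a
R = R_inf + c1 / L + c2 / L**2                # synthetic "data"

coeffs = np.polyfit(1.0 / L, R, deg=2)        # polynomial in x = 1/L
print(coeffs[-1])                             # intercept -> R_inf
```

With real data the fit ansatz itself carries a systematic uncertainty, since structure-dependent terms beyond the universal powers of $1/L$ are present.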
\subsection{Defining observables in QCD}
\label{subsec:hadronicschemes}
As mentioned above, once strong isospin breaking effects and electromagnetism are included, then it is only the full QCD+QED theory which is unambiguous. If in this context we wish to define separately what we mean by QCD and what we mean by electromagnetic corrections then we have to introduce a prescription. One possibility, an example of what we call hadronic schemes, is to determine the quark masses in QCD by following the same procedure as for the full theory described in Sec.\,\ref{subsec:renormalisationfull}, i.e.~using the ratios $R_0$\,-\,$R_4$
in Eqs.\,(\ref{eq:Ri}) and (\ref{eq:spacing}). This is one possible definition of QCD using a hadronic scheme. (By hadronic schemes we mean ones which are defined in terms of experimentally measurable hadronic quantities. This is in contrast to possible schemes such as the GRS scheme which is defined in terms of quark and gluon Green functions renormalised at a chosen scale in the $\overline{\footnotesize\textrm{MS}}$ renormalisation scheme\cite{Gasser:2003hk}.) We denote the lattice spacing obtained in this way by $a_0$ to distinguish it from $a=a_0+\delta a$, the spacing in the full theory:
\begin{equation}
\label{eq:latticespacings}
a_0 = \frac{\langle a_0m_\Omega \rangle^\textrm{QCD}}{m_\Omega^\textrm{phys}} \quad \textrm{and} \quad
a = \frac{\langle am_\Omega \rangle^\textrm{full}}{m_\Omega^\textrm{phys}} \equiv a_0 + \delta a ~ .
\end{equation}
When we add electromagnetism to QCD as defined above, the hadron masses used in the calibration, i.e.~those in Eqs.\,(\ref{eq:Ri}) and (\ref{eq:spacing}), will change away from their physical values (indeed the shift will be logarithmically ultra-violet divergent). To cancel this shift we introduce mass counterterms for the quark masses, which then have to be included in all correlation functions.
To illustrate the procedure imagine that we wish to calculate an observable $O$ of mass dimension 1, for example the mass of a hadron which has not been used in the calibration. The generalisation to other cases is straightforward and presented in Ref.\!\cite{DiCarlo:2019thl}. At a fixed value of the strong coupling, which we choose to be the same in QCD and in QCD+QED, we denote the best estimate of the observable $O$, which is the one obtained in the full theory, by $O^\textrm{phys}$, and that obtained in QCD as defined above by $O^\textrm{QCD}$:
\begin{equation}
O^{\textrm{phys}} \equiv \frac{\langle a O\rangle^{\textrm{full}}}{a}\quad\textrm{and} \quad
O^{\textrm{QCD}} \equiv \frac{\langle a_0 O\rangle^{\textrm{QCD}}}{a_0} ~ .
\end{equation}
We \emph{define} the difference of the two as being due to QED effects, $\delta O^\textrm{QED}\equiv O^\textrm{phys} - O^\textrm{QCD}$. There are three contributions to $\delta O^\textrm{QED}$:
\begin{enumerate}
\item The first contribution comes from the diagrams which contain the explicit exchange of virtual photons.
\item The second contribution comes from the fact that the bare quark masses appearing in QCD and the full theory are different. The corresponding quark-mass counterterms must therefore be inserted into the correlation functions used to determine $O^\textrm{phys}$. We stress that the need to include quark-mass counterterms is generic and arises from the requirement that the conditions being used to determine the quark masses must be satisfied both in the full theory and in QCD (for the hadronic scheme being used for illustration we impose that the conditions in Eq.\,(\ref{eq:Riphys}) are satisfied in both theories).
\item Finally we must account for the difference in the lattice spacings $\delta a=a-a_0$ in the full theory and QCD.
\end{enumerate}
Combining these contributions we arrive at
\begin{equation}
\label{eq:Ophys}
O^\textrm{phys} = O^\textrm{QCD} + \frac{\langle a_0 \,\delta O\rangle^\textrm{QCD}}{a_0} -
\frac{\delta a}{a_0^2} \langle a_0\,\! O\rangle^\textrm{QCD} ~ ,
\end{equation}
where we have combined the contributions to the correlation functions from the exchange of virtual photons and from the insertion of the mass counterterms into $\langle a_0 \delta O\rangle^\textrm{QCD}$.
The first term on the right-hand side is one that can be calculated within QCD alone. It has a well-defined continuum limit, as does the sum of all the terms in Eq.\,(\ref{eq:Ophys}). This term allows us to define the difference between QCD (defined as above) and the full theory in the hadronic scheme: $\delta O^{\mathrm{QED}} = O^\textrm{phys} - O^\textrm{QCD}$.
An important feature of the RM123 approach is that the ${\cal{O}}(\alpha_{\mathrm{em}})$ terms are computed explicitly and so we do not have to take the difference between numerical calculations performed in the full theory and in QCD. Each of the terms on the right-hand side of Eq.\,(\ref{eq:Ophys}) is calculated directly.
We have devoted a considerable discussion to the definition of the isospin-breaking effects due to electromagnetism, $\delta O^\textrm{QED}$. Having done this, the subsequent definition of the strong isospin breaking effects is straightforward. To do this however, we need to define the isosymmetric theory by imposing appropriate conditions to determine the bare quark masses and the lattice spacing. A convenient possibility
is to use the procedure sketched in Sec.\,\ref{subsec:renormalisationQCD}.
The strong isospin breaking correction $\delta O^\textrm{SIB}$ to the observable $O$ can now be defined by
\begin{equation}
\delta O^\textrm{SIB} = O^\textrm{QCD} - O^\textrm{ISO} ~ ,
\end{equation}
where $O^\textrm{ISO} = \frac{\langle a_0^\textrm{ISO} O\rangle^\textrm{ISO}}{a_0^\textrm{ISO}}$ is the value of the observable obtained in isosymmetric QCD. With these definitions we have the natural relation $O^\textrm{phys} = O^\textrm{ISO} + \delta O^\textrm{QED} + \delta O^\textrm{SIB}$. We underline, however, that $\delta O^\textrm{SIB}$ depends on the quantities used for calibration, both in 4-flavour QCD and in isosymmetric QCD.
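The bookkeeping of Eq.\,(\ref{eq:Ophys}) and of the decomposition $O^\textrm{phys}=O^\textrm{ISO}+\delta O^\textrm{QED}+\delta O^\textrm{SIB}$ can be illustrated with toy numbers (every value below is invented; units are GeV, with $a_0$ in GeV$^{-1}$):

```python
# Toy arithmetic for Eq. (eq:Ophys); all numbers are invented.
a0       = 0.5       # lattice spacing defined in QCD (hadronic scheme), GeV^-1
delta_a  = 0.001     # delta a = a - a0, GeV^-1
aO_qcd   = 0.250     # <a0 O> measured in QCD (O of mass dimension 1)
adO_qcd  = 0.0010    # <a0 dO>: virtual photons plus mass-counterterm insertions
O_iso    = 0.4985    # O in isosymmetric QCD, from its own calibration

O_qcd  = aO_qcd / a0
O_phys = O_qcd + adO_qcd / a0 - (delta_a / a0**2) * aO_qcd   # Eq. (eq:Ophys)

dO_qed = O_phys - O_qcd          # defined to be the QED effect
dO_sib = O_qcd - O_iso           # strong isospin breaking
print(O_phys, dO_qed, dO_sib)    # O_phys = O_iso + dO_qed + dO_sib by construction
```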
\section{Infrared Divergences}\label{sec:IR}
\begin{center}
\begin{figure}[t]
\includegraphics[width=0.3\hsize]{Figs/freeself.eps}\qquad
\includegraphics[width=0.205\hsize]{Figs/tadpole.eps}\qquad
\includegraphics[width=0.25\hsize]{Figs/vertex.eps}
\caption{Examples of diagrams contributing to electromagnetic corrections to the mass (diagrams (a) and (b)) and to a decay amplitude (diagram (c)).
\label{figs:irdiags}}
\end{figure}\end{center}
\vspace{-0.3in}In Sec.\,\ref{sec:intro} I simply stated that infrared divergences are absent in the evaluation of electromagnetic corrections to the spectrum, while they are present in the decay amplitudes. To illustrate the absence of infrared divergences in the spectrum consider the diagrams in Figs.\,\ref{figs:irdiags}(a) and (b). In the diagrams the solid line represents an elementary charged scalar meson of mass $m$. These diagrams contribute to the electromagnetic mass shift. The contribution from diagram (a) is proportional to the integral
\begin{equation}\label{eq:Ia}
I_a=i\int\frac{\dfour{k}}{(2\pi)^4}\,\frac{(2p-k)^2}{[k^2+i\epsilon][(p-k)^2-m^2+i\epsilon]}\,,
\end{equation}
evaluated at $p^2=m^2$. At small $k$, the integrand behaves as $1/k^3$ and so the four-dimensional integral is infrared convergent. The integrand of diagram (b) behaves as $1/k^2$ at small $k$ and this contribution is therefore also infrared convergent.
The diagram of Fig.\,\ref{figs:irdiags}(c) is an example of a particle decaying into two elementary charged scalar particles of masses $m_1$ and $m_2$. Now the corresponding integral is
\begin{equation}\label{eq:Ic}
I_c=i\int\frac{\dfour{k}}{(2\pi)^4}\,\frac{(2p_1-k)\cdot(2p_2+k)}{[k^2+i\epsilon][(p_1-k)^2-m_1^2+i\epsilon]
[(p_2+k)^2-m_2^2+i\epsilon]}\,,
\end{equation}
so that at small $k$ the integrand behaves as $1/k^4$ and the four-dimensional integral is infrared divergent.
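The power counting behind these two integrals can be checked with a radial toy model: near $k=0$ the four-dimensional measure contributes $k^3\,dk$, so an integrand falling as $1/k^3$ gives a finite small-$k$ integral while $1/k^4$ produces a logarithm of the infrared cut-off. The sketch below keeps only this radial behaviour; all angular structure and the cut-off value are invented for illustration:

```python
# Radial power counting: int_lam^1 dk k^3 * k^(-p) for p = 3 (mass shift)
# stays finite as lam -> 0, while p = 4 (decay amplitude) grows like -log(lam).
import numpy as np

def radial(power, lam, kmax=1.0, npts=200_000):
    """Midpoint rule on a logarithmic grid."""
    edges = np.geomspace(lam, kmax, npts + 1)
    mid = 0.5 * (edges[1:] + edges[:-1])
    return float(np.sum(mid ** (3 - power) * np.diff(edges)))

for lam in (1e-2, 1e-4, 1e-6):
    print(lam, radial(3, lam), radial(4, lam))
# radial(3, .) -> 1 - lam (finite);  radial(4, .) -> -log(lam) (divergent)
```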
The treatment of infrared divergences in evaluating decay widths or scattering cross sections was first understood by Bloch and Nordsieck in 1937\cite{BN37}. Diagrams with virtual photons must be combined with those corresponding to the emission of real photons; in this way the infrared divergences cancel. In intermediate stages of perturbative calculations, an infrared regulator, such as a small photon mass $m_\gamma$, is introduced and the divergences manifest themselves as factors of $\log (m^2/m_\gamma^2)$, where $m$ is a finite mass scale. In lattice computations the volume is finite, $V=L^3$, and the volume itself acts as a regulator with factors of $\log(mL)$.
In Sec.\,\ref{sec:leptonic} I present our framework for the evaluation of the widths for leptonic decays of pseudoscalar mesons $P$, $P\to\ell\bar{\nu}_\ell(\gamma)$, fully consistent of course with the Bloch-Nordsieck mechanism for the cancellation of infrared divergences. Before that however, we discuss the central issue of finite-volume corrections.
\section{Finite-Volume Corrections}\label{sec:FV}
Lattice computations are necessarily performed in finite volumes, $V=L^3$ say, which implies that the momenta of the photon and other particles are discrete. Integrals such as those in Eqs.\,(\ref{eq:Ia}) and (\ref{eq:Ic}) are replaced by momentum sums. With periodic boundary conditions for the photon, we repeat Eq.\,(\ref{eq:QEDLdef}) writing
\begin{equation}
\int\!\frac{\dthree{k}}{(2\pi)^3}\,f(\vec{k})\to\frac1{L^3}\sum_{\vec{k}\neq\vec{0}}f(\vec{k})\,,
\end{equation}
where the sum is over $\vec{k}=(2\pi/L)\,\vec{n}$ and $\vec{n}$ is a vector of integers. The powerful tool for evaluating the relationship between finite-volume sums and infinite-volume integrals is the Poisson summation formula which can be written in the form
\begin{equation}\label{eq:Poisson}
\frac1{L^3}\sum_{\vec{k}=\frac{2\pi}{L}\vec{n}}\!\!f(\vec{k})=\int\!\frac{\dthree{k}}{(2\pi)^3}\,f(\vec{k})\,+\,\sum_{\vec{m}\neq\vec{0}}\int\!\frac{\dthree{k}}{(2\pi)^3}\,f(\vec{k})
e^{i\vec{k}\cdot\vec{m}L}\,.
\end{equation}
If the function $f$ has no singularities, then the oscillating exponential in the second term on the right-hand side of Eq.\,(\ref{eq:Poisson}) suppresses the integrals for large $L$ and the finite-volume sum is equal to the infinite-volume integral up to terms which are exponentially suppressed in the volume. On the other hand, if $f$ contains singularities, which must be regulated, then the oscillating behaviour of the exponential factors is overcome by the abrupt behaviour at the singularity and the finite-volume effects may decrease only as inverse powers of $L$.
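This dichotomy is easy to verify numerically. With the zero mode \emph{included}, the finite-volume sum of a smooth function reproduces the infinite-volume integral up to the image terms of Eq.\,(\ref{eq:Poisson}), which for $f(\vec{k})=e^{-\vec{k}^2}$ are of order $e^{-L^2/4}$ (a sketch with invented volumes):

```python
# Poisson summation check: for a smooth f the full momentum sum agrees with
# the integral up to exponentially small image contributions.
import numpy as np

def full_sum(L, nmax):
    n = np.arange(-nmax, nmax + 1)
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    k2 = (2 * np.pi / L) ** 2 * (nx**2 + ny**2 + nz**2)
    return np.exp(-k2).sum() / L**3          # zero mode included here

integral = 1.0 / (8 * np.pi**1.5)            # = int d^3k/(2pi)^3 exp(-k^2)
for L in (4.0, 6.0, 8.0):
    print(L, abs(full_sum(L, int(2 * L)) - integral))   # ~ exp(-L^2/4)
```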
For integrals such as those in Eqs.\,(\ref{eq:Ia}) and (\ref{eq:Ic}) the integrands are singular at $k=0$. In the absence of other singularities, a practical rule summarising the relation between the power of the finite-volume corrections and the leading singularity of the integrand at $k=0$ is the scaling law derived in \cite{Lubicz:2016xro}:
\begin{equation}\label{eq:scaling}
\xi^\prime =\int \frac{dk_0}{2\pi} \, \left( \frac{1}{L^3} \sum_{\vec k \neq 0}\, -\int \frac{d^3k}{(2\pi)^3} \right) \frac{1}{(k^2)^{n/2}} = O\left(\frac{1}{L^{4-n}}\right)\,, \end{equation}
where the $1/(k^2)^{n/2}$ simply represents the leading behaviour as $k\to 0$. Thus for example, the integrand in $I_a$ in Eq.\,(\ref{eq:Ia}) is proportional to $1/k^3$ at small $k$, i.e. $n=3$, so that the leading finite-volume correction is of $O(1/(mL))$. The integrand in $I_c$ in Eq.\,(\ref{eq:Ic}) on the other hand behaves as $1/k^4$ at small $k$ corresponding to infrared divergent terms containing factors of $\log(mL)$.
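The scaling law can be illustrated in a reduced, purely three-dimensional setting (a simplification of Eq.\,(\ref{eq:scaling}): the $k_0$ integration is dropped and a Gaussian ultraviolet regulator, shared by sum and integral, is inserted; all parameter values are invented). For an integrand with a $1/\vec{k}^2$ singularity the sum-integral difference should fall as $1/L$, two powers slower than the $1/L^3$ found for a smooth integrand:

```python
# Sum-minus-integral for exp(-k^2)/k^2 in three dimensions: the difference
# is expected to scale as 1/L, so doubling L should roughly halve it.
import numpy as np

def sum_minus_integral(L, nmax):
    n = np.arange(-nmax, nmax + 1)
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    k2 = (2 * np.pi / L) ** 2 * (nx**2 + ny**2 + nz**2)
    k2[nmax, nmax, nmax] = 1.0                   # dummy value; mode removed below
    w = np.exp(-k2) / k2
    w[nmax, nmax, nmax] = 0.0                    # zero mode omitted (QED_L)
    integral = np.sqrt(np.pi) / (4 * np.pi**2)   # int d^3k/(2pi)^3 e^{-k^2}/k^2
    return w.sum() / L**3 - integral

d16 = sum_minus_integral(16.0, 16)
d32 = sum_minus_integral(32.0, 32)
print(d16 / d32)                                 # -> approximately 2
```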
Although the discussion above was presented for structureless elementary particles, it contains a number of important points which are valid also for composite particles. At small momenta the photon couples to the charge of the particle, independently of its internal structure. Thus we would expect that the leading finite-volume corrections are \emph{universal} and this is indeed the case. Studies of the possible higher-order couplings of the photon, such as those to the electric dipole moment of the mesons, reveal that the next-to-leading order finite-volume effects are also universal for the spectrum\cite{Hayakawa:2008an,Borsanyi:2014jba} (see Sec.\,\ref{subsec:FVmasses}) and leptonic decay amplitudes\cite{Lubicz:2016xro} (see Sec.\,\ref{sec:leptonic}).
The Minkowski-space integral $I_c$ in Eq.\,(\ref{eq:Ic}) contains an imaginary part, corresponding to a cut through the two internal propagators. This leads to additional singularities, beyond those at $k=0$, which must be treated separately. Such cuts are absent in the calculation of the spectrum and leptonic decay rates, where there is a single particle in the final state to which the photon can couple. They are present however, in the study of semileptonic decays and we comment on this in Sec.\,\ref{sec:semileptonic}.
We postpone further discussion of the finite-volume corrections to leptonic decay rates until Sec.\,\ref{sec:leptonic}, but now discuss the corrections to the spectrum.
\subsection{Leading finite-volume corrections to hadron masses}\label{subsec:FVmasses}
When calculating the electromagnetic corrections to the mass of a hadron $H$, the finite-volume corrections decrease only as powers of $1/L$, starting at $O(1/(m_HL))$, and not exponentially as is the case for many physical quantities in QCD. As mentioned above, with QED$_\mathrm{L}$~ the situation is
made somewhat easier in that the leading two terms, i.e. those of $O(1/(m_HL))$ and $O(1/(m_HL)^2)$ are independent of the structure of the hadron. Thus if the
FV corrections of order ${\cal{O}}(e^2/(m_HL)^3)$ can be neglected then the extrapolation to the infinite-volume limit can be avoided by making use of the formula\cite{Hayakawa:2008an,Borsanyi:2014jba} (similar formulae also exist for other finite-volume formulations of the theory\cite{Lucini:2015hfa})
\begin{equation}
\label{eq:FVmass}
\frac{am_H(L)}{am_H} = 1 - \kappa \, \alpha_{\mathrm{em}} \, e_H^2 \left\{\frac1{2L \, m_H} +
\frac1{L^2 \, m^2_H} \right\} ~ ,
\end{equation}
where $e_H$ is the charge of the hadron $H$, $m_H(L)$ and $m_H$ are the masses of the hadron in the finite and infinite volume respectively and $\kappa = 2.837297\,(1)$. Equation\,(\ref{eq:FVmass}) can be used to determine the infinite-volume mass of the hadron $H$ from the value measured on the finite-volume $L^3$, up to corrections of order of ${\cal{O}}(e^2/(m_H L)^3)$.
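In practice Eq.\,(\ref{eq:FVmass}) is used in this direction: given the mass measured at finite $L$, one solves for the infinite-volume mass. A fixed-point sketch with invented numbers (masses in lattice units and $L$ in units of $a$, so that $m_HL$ is dimensionless):

```python
# Invert Eq. (eq:FVmass) for the infinite-volume mass by fixed-point iteration.
kappa, alpha_em = 2.837297, 1.0 / 137.036

def mH_infinite(mH_L, L, eH=1.0):
    m = mH_L                                  # start from the measured value
    for _ in range(50):
        corr = kappa * alpha_em * eH**2 * (1.0 / (2 * L * m) + 1.0 / (L * m) ** 2)
        m = mH_L / (1.0 - corr)               # invert m_H(L) = m_H * (1 - corr)
    return m

# consistency check: generate m_H(L) from a known m_H, then recover it
m_true, L = 0.494, 32.0                       # kaon-like mass in lattice units
corr = kappa * alpha_em * (1.0 / (2 * L * m_true) + 1.0 / (L * m_true) ** 2)
mL = m_true * (1.0 - corr)
print(mH_infinite(mL, L))                     # recovers m_true
```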
Even if one wishes to study the behaviour with $L$ by performing simulations at different volumes, the subtraction of the universal $O(e^2/(m_HL))$ and $O(e^2/(m_HL)^2)$ terms using Eq.\,(\ref{eq:FVmass}) is a useful starting point; the residual leading behaviour of hadronic masses is then of $O(e^2/(m_HL)^3)$. For reviews of isospin-breaking contributions to the spectrum and discussions of the different approaches used to perform QCD+QED computations of the spectrum see, for example, Refs.\!
\cite{Tantalo:2013maa,Portelli:2015wna,Patella:2017fgk}.
\section{Leptonic Decays}\label{sec:leptonic}
In this section we briefly review the framework which we have developed and implemented in the series of papers \cite{Carrasco:2015xwa,Lubicz:2016xro,Giusti:2017dwk,DiCarlo:2019thl}. In the absence of electromagnetic corrections, the width for the decay of a pseudoscalar meson $P$ into a charged lepton $\ell$ and its neutrino, $P\to\ell\bar\nu_\ell$, is given by
\begin{equation}
\Gamma(P\to\ell\bar\nu_\ell)=\frac{G_F^2|V_{\mathrm{CKM}}|^2f_P^2}{8\pi}\,m_Pm_\ell^2\left(1-\frac{m_\ell^2}{m_P^2}\right)^2\,,
\end{equation}
where $G_F$ is the Fermi constant, $V_{\mathrm{CKM}}$ is the CKM matrix element corresponding to the flavours of the valence quarks of $P$, and $f_P$ is the decay constant given by the matrix element of the corresponding axial current. For example for the decay of a kaon, $V_{\mathrm{CKM}}=V_{us}$ and $f_K$ is given by
\begin{equation}\label{eq:fK}
\langle 0|\bar{u}\gamma^\mu\gamma^5s|K^-(p_K)\rangle=if_Kp_K^\mu\,,
\end{equation}
so that all hadronic effects are contained in the single number $f_K$, or more generally $f_P$. There are now a very large number of lattice computations of the decay constants $f_P$ at the percent or sub-percent level\cite{Aoki:2019cca}.
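Inserting approximate PDG-like numbers into the tree-level width above (the input values are quoted from memory, for orientation only, and no electromagnetic corrections are included) reproduces the observed dominance of the $K\to\mu\bar\nu_\mu$ mode:

```python
# Tree-level Gamma(K -> mu nu) and the implied branching ratio.
import math

GF   = 1.1664e-5       # GeV^-2
Vus  = 0.2243
fK   = 0.1557          # GeV
mK   = 0.493677        # GeV
mmu  = 0.1056584       # GeV

Gamma = (GF**2 * Vus**2 * fK**2 / (8 * math.pi)) \
        * mK * mmu**2 * (1 - mmu**2 / mK**2) ** 2     # width in GeV

hbar = 6.582e-25       # GeV s
tauK = 1.238e-8        # s, charged-kaon lifetime
print(Gamma, Gamma * tauK / hbar)                     # branching ratio ~ 0.62
```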
If one wishes to include the electromagnetic corrections to the width $\Gamma$ and hence to access the CKM matrix element with greater precision, one needs to include contributions from the amplitude with a real photon in the final state:
\begin{equation}\label{eq:Gamma01}
\Gamma(\Delta E_\gamma)=\Gamma_0+\Gamma_1(\Delta E_\gamma)\,,
\end{equation}
where the subscripts $0$ and $1$ indicate the number of photons in the final state and $\Delta E_\gamma$ is the maximum detected energy of the emitted real photon (in the meson rest-frame). The calculations are performed up to $O(\alpha_{\mathrm{em}})$. Both $\Gamma_0$ and $\Gamma_1$ are individually infrared divergent, but the divergences cancel in the sum.
In Sec.\,\ref{subsec:IRleptonic} we describe how one might handle the infrared divergences and their cancelation in lattice computations. Before this we introduce the effective Hamiltonian for leptonic and semileptonic decays.
\subsection{The effective Hamiltonian}\label{sec:Heff}
For illustration, consider the Fermi Hamiltonian for the leptonic decay $K^-\to\mu^-\bar\nu_\mu$; this is given by
$H_F=\frac{G_F}{\sqrt{2}}\,V_{us}\,\big[\bar{u}\gamma^\rho(1-\gamma^5)s\big]\,\big[\bar\mu\gamma_\rho(1-\gamma^5)\nu_\mu\big]$\,, where $G_F$ is the Fermi constant and is generally obtained from the muon lifetime. Since we aim to calculate the $O(\alpha_{\mathrm{em}})$ corrections to leptonic decay rates, we need to ensure that the definition and determination of $G_F$ is consistent with our procedure at this order. We use the formula for the muon lifetime $\tau_\mu$\cite{Berman:1958ti,Kinoshita:1958ru}
\begin{equation}
\frac1{\tau_\mu}=\frac{G_F^2m_\mu^5}{192\pi^3}\left(1-\frac{8m_e^2}{m_\mu^2}\right)\,\left[1+\frac{\alpha_{\mathrm{em}}}{2\pi}\left(\frac{25}{4}-\pi^2\right)+O(\alpha_{\mathrm{em}}^2)\right]\label{eq:muondecay}
\end{equation}
from which, together with the measured value of $\tau_\mu$, one deduces the value $G_F=1.16632(2)\times 10^{-5}\,\mathrm{GeV}^{-2}$. Many electroweak corrections are absorbed into the definition of $G_F$; the explicit $O(\alpha_\mathrm{em})$ corrections on the right-hand side of Eq.\,(\ref{eq:muondecay}) come from the diagrams in Fig.\,\ref{fig:muondecay}. The diagrams are evaluated with the $W$-regularisation in which the photon propagator is modified by\cite{Sirlin:1980nh}:
\begin{equation} \frac{1}{k^2}\to\frac{1}{k^2}-\frac{1}{k^2-M_W^2}=\frac{M_W^2}{M_W^2-k^2}\,\frac{1}{k^2}\,.
\label{eq:Wregularisation}\end{equation}
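Eq.\,(\ref{eq:muondecay}) can be inverted numerically; with approximate PDG-like inputs (again quoted from memory, for orientation only) one recovers the value of $G_F$ quoted above:

```python
# Extract G_F from the measured muon lifetime via Eq. (eq:muondecay).
import math

tau_mu   = 2.19698e-6        # s
hbar     = 6.58212e-25       # GeV s
m_mu     = 0.1056584         # GeV
m_e      = 5.10999e-4        # GeV
alpha_em = 1.0 / 137.036

rate  = hbar / tau_mu                                     # 1/tau_mu in GeV
phase = 1 - 8 * m_e**2 / m_mu**2
rad   = 1 + (alpha_em / (2 * math.pi)) * (25.0 / 4 - math.pi**2)
GF = math.sqrt(rate * 192 * math.pi**3 / (m_mu**5 * phase * rad))
print(GF)                    # -> about 1.166e-5 GeV^-2
```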
\begin{center}\begin{figure}[t]
\includegraphics[width=0.28\hsize]{Figs/mudecay1.eps}\qquad
\includegraphics[width=0.28\hsize]{Figs/mudecay2.eps}\qquad
\includegraphics[width=0.28\hsize]{Figs/mudecay3.eps}
\caption{\label{fig:muondecay}Diagrams contributing at $O(\alpha_{\mathrm{em}})$ to the right-hand side of Eq.\,(\ref{eq:muondecay}).}
\end{figure}\end{center}
\vspace{-0.22in}Many of the electroweak corrections which are absorbed in $G_F$ are common to leptonic and semileptonic decays, which leads to a factor in the
amplitude of $(1+(\alpha_{\mathrm{em}}/\pi)\log (M_Z/M_W))$\cite{Sirlin:1981ie,Braaten:1990ef}, and the effective Hamiltonian for the leptonic or semileptonic decay of a $K^-$ meson is
\begin{equation}
H_\mathrm{eff}=\frac{G_F}{\sqrt{2}}\,V_{us}\left(1+\frac{\alpha_{\mathrm{em}}}{\pi}\log\frac{M_Z}{M_W}\right) O_1^W
\end{equation}
where $O_1^W=(\bar{u}\gamma^\mu (1-\gamma^5)s)
(\bar{\ell}\,\gamma_\mu\,(1-\gamma^5) \nu_\ell)$ is renormalised in the W-regular\-isation scheme. Its matrix elements are finite, but depend on $M_W$.
In lattice computations we evaluate the matrix elements of operators in the bare theory defined by a chosen lattice discretisation of QCD with the lattice spacing $a$ as the ultraviolet cut-off. In order to obtain matrix elements of $O_1^W$ we therefore have to perform the renormalisation into the W-regularisation scheme.
If the lattice theory breaks chiral symmetry, then $O_1^W$ is a linear combination of the lattice operator $O_1^L$ and four other lattice four-fermion operators which transform under different chiral representations:
\[O_1^W=\sum_{i=1}^5 Z_{1i} O^L_i\,,\] where at one-loop order only $Z_{11}$ is divergent (proportional to $\log[aM_W]$).
Since in current simulations $a^{-1}\ll M_W$ it is not feasible to perform the renormalisation fully non-perturbatively (even with step-scaling) and we employ a combination of non-perturbative renormalisation and perturbative running and matching. For a recent report on the current status of the renormalisation of lattice operators into the W-regularisation scheme see Ref.\!\cite{DiCarlo:2019knp}.
\subsection{Infrared divergences in lattice computations of radiative corrections to leptonic decays}
\label{subsec:IRleptonic}
In practice, it is convenient to rewrite Eq.\,(\ref{eq:Gamma01}) in the form
\begin{eqnarray}
\Gamma(\Delta E_\gamma) & = & \displaystyle \lim_{L \to \infty} \left[ \Gamma_0(L) - \Gamma_0^{{\rm pt}}(L) \right] +
\displaystyle \lim_{m_\gamma \to 0}\left[ \Gamma_0^{{\rm pt}} (m_\gamma)+
\Gamma_1^{{\rm pt}}(m_\gamma,\Delta E_\gamma) \right] ~ \nonumber \\[2mm]
& &\hspace{0.5in}+\, \Gamma_1^{\mathrm{SD}}(\Delta E_\gamma) + \Gamma_1^{\mathrm{INT}}(\Delta E_\gamma)\,.
\label{eq:Gamma}
\end{eqnarray}
The superscript {\small pt} indicates that
$\Gamma_{0,1}^{\mathrm{pt}}$ are calculated perturbatively in the point-like approximation. We have written $\Gamma_1=\Gamma_1^{\mathrm{pt}}+\Gamma_1^{\mathrm{SD}}+\Gamma_1^\mathrm{INT}$, where the superscripts {\scriptsize SD} and {\scriptsize INT} refer respectively to the \emph{Structure-Dependent} contribution and to that from the \emph{Interference} between the structure-dependent and point-like contributions to the amplitude. These contributions are expressed in terms of the vector and axial-vector form factors in the decomposition of the non-local matrix element:
\begin{eqnarray}
H^{\alpha r}_W(k,\vec p)&=&
\epsilon_\mu^r(k)\, \int \dfour y\, e^{ik\cdot y}\, {\mathrm T}\,\bra{0} \,j_W^\alpha(0) j^\mu_{\mathrm{em}}(y)\ket{P(\vec p)}\nonumber\\
&&\hspace{-0.8in}=\epsilon_\mu^r(k)\Bigg\{
H_1\,\left[k^2 g^{\mu\alpha}-k^\mu k^\alpha\right]
+
H_2\, \left[(p\cdot k-k^2)k^\mu-k^2(p-k)^\mu\right](p-k)^\alpha\nonumber\\
&&
-i\frac{F_V}{m_P}\varepsilon^{\mu\alpha\gamma\beta}k_\gamma p_\beta
+\frac{F_A}{m_P}\left[(p\cdot k-k^2)g^{\mu\alpha}-(p-k)^\mu k^\alpha\right]
\nonumber\\ &&\hspace{0.2in}
+
f_P\left[g^{\mu\alpha}+\frac{(2p-k)^\mu(p-k)^\alpha}{2p\cdot k-k^2}\right]\label{eq:FFdefinition}
\Bigg\}\;.
\end{eqnarray}
In Eq.\,(\ref{eq:FFdefinition}), $\epsilon^r_\mu$ is the polarisation vector of the photon with polarisation state $r$, $j_{\mathrm{em}}^\mu$ is the electromagnetic current to which the photon couples and $j_W^\alpha$ is the hadronic component of the weak operator. For decays into a real photon, for which $k^2=0$ and $\epsilon^r\cdot k=0$, only the decay constant $f_P$ and the structure-dependent vector and axial form factors $F_V(x_\gamma)$ and $F_A(x_\gamma)$ are needed to specify the amplitude, where $x_\gamma=2p\cdot k/m_P^2$. The final term on the right-hand side of Eq.\,(\ref{eq:FFdefinition}) is the point-like (or inner bremsstrahlung) contribution.\\[-0.2cm]
We now discuss each of the terms on the right hand side of Eq.\,(\ref{eq:Gamma}).\\[-0.2cm]
\noindent 1. $\Gamma_0(L)$ is the contribution to the width which includes all the finite-volume modes of the photon's momentum except for $\vec{k}=0$ and therefore depends on the structure of the meson and must be computed nonperturbatively. At small photon momenta, for which the photon couples to the charge of the meson, $\Gamma_0(L)\to\Gamma_0^{\mathrm{pt}}(L)$, and the infrared divergences cancel in the difference $\Gamma_0(L) - \Gamma_0^{{\rm pt}}(L)$. While in our calculations we use the volume as the infrared regulator, $\Gamma_0(L) - \Gamma_0^{{\rm pt}}(L)$ is independent of the regulator.\\[-0.2cm]
\noindent 2. The second term on the right-hand side of Eq.\,(\ref{eq:Gamma})
\vspace{-0.25cm}$$\displaystyle\lim_{m_\gamma \to 0}\left[ \Gamma_0^{{\rm pt}} (m_\gamma)+
\Gamma_1^{{\rm pt}}(m_\gamma,\Delta E_\gamma) \right]$$
\vspace{-0.25cm}\noindent is purely perturbative and can be calculated directly in infinite volume. Each of the two terms is infrared divergent, so a regulator, such as a photon mass $m_\gamma$, has to be introduced. The divergences cancel in the sum of the two terms and the result is independent of the regulator.\\[-0.2cm]
\noindent 3. The infrared divergence in $\Gamma_1$ comes from the point-like coupling of the photon, and so the term on the second line of Eq.\,(\ref{eq:Gamma}), $\Gamma_1^{\mathrm{SD}}(\Delta E_\gamma) + \Gamma_1^{\mathrm{INT}}(\Delta E_\gamma)$, is infrared convergent. It can be computed directly in infinite volume, requiring knowledge of the
structure-dependent form factors, $F_A(x_\gamma)$ and $F_V(x_\gamma)$, and of the meson decay constant $f_P$\cite{Desiderio:2020oej,Bijnens:1992en}.\\[-0.2cm]
Originally we had proposed to perform the calculations with a cut-off $\Delta E_\gamma$ which was sufficiently small for structure-dependent effects to be negligible, but with $\Delta E_\gamma$ large enough to allow for experimental measurements of $\Gamma(\Delta E_\gamma)$ to be possible (20\,MeV or so).
While this is practicable for the decays of pions and kaons, particularly into muons for which the rate for large $E_\gamma$ is suppressed\cite{Carrasco:2015xwa}, this is not the case for the decays of heavy mesons. More recently we have demonstrated that the structure-dependent contributions to $\Gamma_1$ can be calculated\cite{Desiderio:2020oej,Frezzotti:2020bfa}, thus extending the framework to the decays of heavy mesons.
\subsection{Finite-volume corrections to leptonic decay rates}\label{sec:FVleptonic}
We reported in Sec.\,\ref{subsec:FVmasses} that the leading and next-to-leading finite-volume effects in the calculation of electromagnetic corrections to the spectrum are of $O(1/(m_HL))$ and $O(1/(m_HL)^2)$ with coefficients which are universal, i.e. independent of the structure of the hadron $H$,
see Eq.\,(\ref{eq:FVmass}). For leptonic decays of a pseudoscalar meson $P$, $P\to\ell\nu_\ell$, we organise the calculation as in Eq.\,(\ref{eq:Gamma}) and have found that $\Gamma^{\mathrm{pt}}_0(L)$ takes the form:
\begin{equation}
\Gamma_0^{\mathrm{pt}}(L) = C_0(r_\ell) + \tilde C_0(r_\ell)\log\left(m_PL\right)+ \frac{C_1(r_\ell)}{m_P L}+
\dots \, ,\end{equation}
where $r_\ell=m_\ell/m_P$ and $m_\ell$ is the mass of the final-state charged lepton\cite{Lubicz:2016xro}.
The exhibited $L$-dependent terms are \emph{universal}, i.e.~independent of the structure of the meson, and in Ref.\!\cite{Lubicz:2016xro} we have
calculated the coefficients $C_0,\,\tilde{C}_0$ and $C_1$. The leading structure-dependent FV effects in $\Gamma_0(L)-\Gamma_0^{\textrm{pt}}(L)$ are therefore of $O(1/(m_PL)^2)$. If necessary, these can be determined by extrapolating results obtained on different volumes (see for example Fig.\,\ref{fig:FVE}).
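Operationally, the infinite-volume limit is then taken by a linear fit in the variable $1/L^2$ after the universal terms have been subtracted. A minimal sketch in Python/NumPy, using synthetic illustrative numbers rather than the actual ETMC data:

```python
import numpy as np

# Synthetic data of the form delta_R(L) = a + b / L^2
# (illustrative values only, NOT the results shown in the figure)
L_values = np.array([2.0, 2.5, 3.0, 4.0])   # lattice extents (arbitrary units)
a_true, b_true = 0.0160, -0.020             # infinite-volume value and 1/L^2 slope
dR = a_true + b_true / L_values**2

# Linear fit in x = 1/L^2; the intercept is the L -> infinity limit
x = 1.0 / L_values**2
b_fit, a_fit = np.polyfit(x, dR, 1)         # returns [slope, intercept]

print(f"delta_R(inf) = {a_fit:.4f}, slope = {b_fit:.4f}")
```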
\subsection{Numerical Results}\label{sec:numericalleptonic}
In order to demonstrate that the framework presented above is practicable we briefly present some numerical results for the $K_{\mu2}$ and $\pi_{\mu2}$ decays\cite{Giusti:2017dwk,DiCarlo:2019thl} obtained using gauge ensembles generated by the European Twisted Mass Collaboration (ETMC) with $N_f = 2 + 1 + 1$ dynamical quarks\cite{Baron:2010bv,Baron:2011sf} in the quenched QED approximation in which the charges of the sea quarks are set to 0.
In Ref.\!\cite{Giusti:2017dwk} we started by calculating the electromagnetic and strong isospin breaking corrections to the ratio of $K_{\mu2}$ and $\pi_{\mu2}$ decay rates. This ratio is
less sensitive to various sources of uncertainty than the isospin breaking corrections to $\pi_{\mu 2}$ and $K_{\mu 2}$ decay rates separately.
In Ref.\cite{DiCarlo:2019thl} we provided a more complete description of the calculation and evaluated the electromagnetic and strong isospin breaking corrections to the decay processes $\pi_{\mu 2}$ and $K_{\mu 2}$ separately.
Since the corresponding experimental rates are fully inclusive in the energy of the final-state photon, structure-dependent contributions to real photon emission should in principle be included. However,
the Chiral Perturbation Theory (ChPT) predictions of Ref.\cite{Cirigliano:2007ga} indicate that these contributions are negligible for both kaon and pion decays into muons, an expectation explicitly verified in a recent lattice computation of $\Gamma_1$\cite{Frezzotti:2020bfa}. The same is not true to the same extent for decays into final-state electrons\,(see Ref.\cite{Carrasco:2015xwa}) and so we focus here on decays into muons.
For a detailed presentation of our study of isospin breaking contributions to $K_{\mu2}$ and $\pi_{\mu2}$ decays, including many important technical issues, please see Ref.\cite{DiCarlo:2019thl}. Here we focus on two general points: i) a check that the leading finite-volume corrections are indeed of $O(1/(m_PL)^2)$ ($P=K$ or $\pi$) and ii) the phenomenological implications of our calculations and in particular the determination of the CKM matrix element $V_{us}$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.75\hsize]{Figs/FVE1.pdf}
\caption{Volume dependence of $\delta R_{\pi}$ and $\delta R_{K}$ for a pion of mass 320\,MeV and a kaon of mass 580\,MeV. The data come from computations on 4 different volumes at the same value of the lattice spacing $a$ and are consistent with the expectation that the leading behaviour should be linear in $1/L^2$~\cite{DiCarlo:2019thl}.\label{fig:FVE}}
\end{center}
\end{figure}
For the leptonic decay $P\to\ell\bar\nu_\ell(\gamma)$ we choose to define the isospin-breaking correction to the rate, $\delta R_P$, by
\begin{equation}
\Gamma(P\to\ell\bar\nu_\ell(\gamma))=\frac{G_F^2}{8\pi}|V_{q_1q_2}|^2m_\ell^2m_P\left(1-\frac{m_\ell^2}{m_P^2}\right)^2(f_P^{(0)})^2\,
[1+\delta R_P]\,,
\end{equation}
where $q_{1,2}$ are the valence quarks of the meson $P$, $m_P$ is its mass and $f_P^{(0)}$ is its decay constant obtained within isosymmetric QCD
using
\begin{equation}
\langle 0 | \bar{q}_2 \gamma_0 \gamma_5 q_1 | P(\vec{0}) \rangle \equiv f_P^{(0)} m_P^{(0)} \,,
\label{eq:fP0}
\end{equation}
where the initial-state meson $P$ is at rest. Here $m_P^{(0)}$ is the mass of $P$ in isosymmetric QCD. As discussed above, $f_P^{(0)}$ is prescription dependent. In order to be able to exploit existing ETMC correlation functions
(which for example do not include correlation functions for the $\Omega^-$ baryon), in the isosymmetric theory we have adopted a ``FLAG scheme''\cite{Aoki:2019cca}, taking $m_\pi^{(0)}=134.98$\,MeV, $m_K^{(0)}=494.2(3)$\,MeV and $f_\pi^{(0)}=130.41$\,MeV (as well as $m_{D_s^+}=1969.0(1.4)$\,MeV). Having used $f_\pi^{(0)}$ as part of the calibration means that we sacrifice the possibility of determining $V_{ud}$. This scheme turns out to be numerically equivalent within the uncertainties (although theoretically different) to the GRS scheme\cite{Gasser:2003hk} used in the chiral perturbation theory study of Ref.\cite{Cirigliano:2011tm}. The GRS scheme is defined by imposing values for the renormalised strong coupling and quark masses in the $\overline{\footnotesize\textrm{MS}}$ scheme at a scale of $2$\,GeV. The numerical near-equivalence of the two schemes is convenient for the comparison of the lattice and ChPT results.
In Fig.\,\ref{fig:FVE} we show the results for $\delta R_K$ and $\delta R_\pi$ obtained at four different volumes at the same value of the lattice spacing. The data correspond to meson masses $m_\pi\simeq 320$\,MeV and $m_K\simeq 580$\,MeV. The expectation is that, after the subtraction of the universal terms, the results should be largely linear in $1/L^2$, and the data are nicely consistent with this.
Extrapolating our lattice results to physical quark masses and to the continuum and infinite-volume limits we found:
\begin{equation}\label{eq:deltaRresults}
\delta R_\pi^{\mathrm{phys}}=+0.0159(20)\quad\mathrm{and}\quad \delta R_K^{\mathrm{phys}}=+0.0032(11)\,.
\end{equation}
Our results in Eq.\,(\ref{eq:deltaRresults}) can be compared with the ChPT predictions $\delta R_\pi^{\textrm{phys}} = 0.0176(21)$ and $\delta R_K^{\textrm{phys}} = 0.0064(24)$ obtained in Ref.\!\cite{Cirigliano:2011tm} and adopted by the PDG\cite{PDG,Rosner:2015wva}.
The difference is within one standard deviation for $\delta R_\pi^{\textrm{phys}}$ and a little larger for $\delta R_K^{\textrm{phys}}$.
Since, as mentioned above, we have used $f_\pi^{(0)}$ in the determination of the lattice spacing, we cannot use our calculation to obtain $V_{ud}$.
For the kaon on the other hand, adopting the best lattice determination of the QCD kaon decay constant, $f_K^{(0)} = 156.11(21)$\,MeV~\cite{Aoki:2019cca,Dowdall:2013rya,Carrasco:2014poa,Bazavov:2017lyh} (after subtracting the strong isospin breaking effects)
and combining it with the experimental result $\Gamma(K^- \to \mu^- \bar{\nu}_\mu [\gamma]) = 5.134 (11) \cdot 10^7$ s$^{-1}$ from the PDG\cite{PDG}, we obtain the very precise result:
\begin{equation}
\label{eq:Vus_K}
|V_{us}| = 0.22561(26)_{\mathrm{exp}}\,(33)_{\mathrm{th}} = 0.22561(42)\,.
\end{equation}
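The number above can be checked against the master formula for the leptonic rate, with its $(1-m_\ell^2/m_P^2)^2$ helicity-suppression factor. The Python sketch below inverts it for $|V_{us}|$; the precise mass and $G_F$ inputs are PDG-style assumptions on my part, and small differences in the last digits reflect the precision of these inputs:

```python
import math

hbar  = 6.582119569e-25         # GeV s
GF    = 1.1663787e-5            # Fermi constant, GeV^-2 (assumed PDG value)
m_K   = 0.493677                # K^- mass, GeV
m_mu  = 0.1056583745            # muon mass, GeV
fK0   = 0.15611                 # QCD kaon decay constant, GeV
dRK   = 0.0032                  # isospin-breaking correction delta R_K
Gamma_exp = 5.134e7 * hbar      # measured K -> mu nu(gamma) width, in GeV

r2 = (m_mu / m_K) ** 2
# Rate divided by |V_us|^2, from the master formula
Gamma_over_Vus2 = (GF**2 / (8.0 * math.pi)) * m_mu**2 * m_K \
                  * (1.0 - r2) ** 2 * fK0**2 * (1.0 + dRK)
Vus = math.sqrt(Gamma_exp / Gamma_over_Vus2)
print(f"|V_us| = {Vus:.5f}")    # ~0.2256
```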
Following Ref.\!\cite{Giusti:2017dwk}, we can also determine $|V_{us}|$ from the ratio of the pion and kaon experimental decay rates which yields
\begin{equation}
\frac{|V_{us}|}{|V_{ud}|} \frac{f_K^{(0)}}{ f_\pi^{(0)}} = 0.27677 \,(29)_{\mathrm{exp}} \, (20)_{\mathrm{th}} = 0.27677 \, (35) \, .
\label{eq:ratioVf}
\end{equation}
Using the best $N_f = 2+1+1$ lattice determination of the ratio of the QCD kaon and pion decay constants, $f_K^{(0)} / f_\pi^{(0)} = 1.1966~(13)$\cite{Aoki:2019cca,Dowdall:2013rya,Carrasco:2014poa,Bazavov:2017lyh}, we find
\begin{equation}
\frac{|V_{us}|}{|V_{ud}|} = 0.23130 \, (24)_{\mathrm{exp}} \, (30)_{\mathrm{th}} = 0.23130 \, (38) \, .
\label{eq:VusVud}
\end{equation}
Taking the updated value $|V_{ud}| = 0.97420\,(21)$ from super-allowed nuclear beta decays\cite{Hardy:2016vhg}, Eq.\,(\ref{eq:VusVud}) yields the following value for the CKM element $|V_{us}|$:
\begin{equation}
\label{eq:Vus}
|V_{us}| = 0.22533 \,(24)_{\mathrm{exp}} \, (30)_{\mathrm{th}} = 0.22533(38) \, ,
\end{equation}
which agrees with our result (\ref{eq:Vus_K}) within the errors.
Note that our result (\ref{eq:Vus}) agrees with the latest estimate $|V_{us}| = 0.2252(5)$, recently updated by the PDG\cite{Zyla:2020zbs}.
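The chain of numbers from Eq.\,(\ref{eq:ratioVf}) to Eq.\,(\ref{eq:Vus}) is elementary arithmetic on central values and can be verified directly; a minimal sketch:

```python
ratio_Vf    = 0.27677   # |V_us/V_ud| * f_K / f_pi, from the experimental rates
fK_over_fpi = 1.1966    # lattice N_f = 2+1+1 determination
Vud         = 0.97420   # super-allowed nuclear beta decays

Vus_over_Vud = ratio_Vf / fK_over_fpi
Vus = Vus_over_Vud * Vud
print(f"|Vus|/|Vud| = {Vus_over_Vud:.5f}, |Vus| = {Vus:.5f}")
```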
Taking the values $|V_{ub}| = 0.00413(49)$\cite{PDG} and $|V_{ud}| = 0.97420(21)$\cite{Hardy:2016vhg} our result in Eq.\,(\ref{eq:Vus}) implies that the unitarity of the first-row of the CKM matrix is confirmed to better than
the per-mille level
\begin{equation}
\label{eq:unitarity}
|V_{ud}|^2 + |V_{us}|^2 + |V_{ub}|^2 = 0.99986 \, (44) \, .
\end{equation}
\section{Semileptonic Decays}\label{sec:semileptonic}
\begin{figure}[t]\begin{center}
\includegraphics[width=0.75\hsize]{Figs/LL_Example.eps}\end{center}
\caption{(a) Diagram at $O(\alpha_{\mathrm{em}})$ contributing to the semileptonic decay $K\to\pi\ell\nu_\ell$; (b) diagram contributing to the leptonic decay of a kaon.\label{fig:LLexample}}
\end{figure}
In this section we discuss our ongoing work to develop a framework for the evaluation, in a finite Euclidean volume, of electromagnetic contributions to amplitudes for semileptonic decays $P_1\to P_2\ell\bar\nu_\ell(\gamma)$, where $P_1$ and $P_2$ are pseudoscalar mesons and $\ell$ is a charged lepton. A discussion of the issues has previously been presented in Ref.\!\cite{Sachrajda:2019uhh}. Throughout this section we illustrate the issues by considering $K_{\ell 3}$ decays,
e.g.~$K^0\to\pi^-\ell^+\nu_\ell$ decays where $\ell=\mu$ or $e$, but
the discussion applies to all semileptonic decays. In QCD, without electromagnetic corrections, the amplitudes are given by two invariant form factors, which for $K_{\ell3}$ decays can be defined by
\begin{equation}
\langle\,\pi^-(p_\pi)\,|\bar{s}\gamma_\mu u\,|\,K^0(p_K)\,\rangle
=f_+(q^2)\,(p_K+p_\pi)_\mu+f_-(q^2)\,(p_K-p_\pi)_\mu\,,
\end{equation}
where the momentum transfer $q=p_K-p_\pi$.
When computing electromagnetic corrections, for which contributions to the rate with a photon in the final state must be included,
an appropriate measurable quantity to consider is
$$\frac{d^2\Gamma}{dq^2 ds_{\pi\ell}},$$
where $q^2=(p_K-p_\pi)^2$ and $s_{\pi\ell}=(p_\pi+p_\ell)^2$.
Much of the discussion in Sec.\,\ref{sec:leptonic} applies also to semileptonic decays; however, there is an additional significant complication which arises due to the presence of two particles in the final state to which the photon can couple. This leads to additional non-exponential finite-volume effects, analogous to those due to QCD re-scattering effects in nonleptonic $K\to\pi\pi$ decays which are corrected by the Lellouch-L\"uscher factor\cite{Lellouch:2000pv,Lin:2001ek}.
Consider, for example, the contribution to the $K_{\ell 3}$ decay amplitude illustrated in the diagram of Fig.\,\ref{fig:LLexample}(a). In Minkowski space, this diagram contains an imaginary part corresponding to the cut over the internal pion and lepton propagators.
In order to relate the physical amplitude to the results from a computation on a finite Euclidean lattice, we imagine first performing the $k_0$ integration.
The imaginary part arises because the internal energy with on-shell particles can be smaller than the external energy, i.e. $\Delta E>0$ where
\begin{equation}
\Delta E\equiv \omega_\pi+\omega_\ell - (\omega_\pi^\prime+\omega_\ell^\prime)\,,
\end{equation}
$\omega_\pi = \sqrt{\vec{p}_\pi^{\hspace{2.5pt}2}+m_\pi^2},\,
\omega_\ell=\sqrt{\vec{p}_\ell^{\hspace{2.5pt}2}+m_\ell^2},\,
\omega_\pi^\prime=\sqrt{\vec{p}_\pi^{\hspace{2.5pt}\prime\,2}+m_\pi^2}$ and
$\omega_\ell^\prime=\sqrt{\vec{p}_\ell^{\hspace{2.5pt}\prime\,2}+m_\ell^2}$\,.
The presence of the imaginary part manifests itself by a term with a factor of $\frac1{\Delta E+i\epsilon}$ in the integrand of the integration over $\vec{k}$. The singularity at $\Delta E=0$ is present in the region of integration and the corresponding $\delta$-function leads to an imaginary contribution.\\[-0.15cm]
The presence of points with $\Delta E\ge 0$ in the integration region in Mink\-owski space presents a number of significant difficulties in the evaluation of finite-volume Euclidean correlation functions.\\[-0.3cm]
\noindent 1. In lattice computations of the diagram in Fig.\,\ref{fig:LLexample}(a), the weak Hamiltonian and interpolating operators which create the kaon and annihilate the pion and lepton are inserted at fixed times.
The correlation functions contain terms which are proportional to $e^{-(\omega_\pi^\prime+\omega_\ell^\prime)\,t}$ where $t$ is the time interval between the insertions of the weak Hamiltonian and the interpolating operators which annihilate the pion and lepton.
Energy is therefore not conserved and
the correlation functions are, as usual, dominated by the intermediate states of lowest energy. If $\Delta E>0$ the dominant component will provide matrix elements different from those contributing to the physical decay amplitude which we wish to evaluate. These exponentially dominant, but unphysical, contributions have therefore to be subtracted in order to obtain the physical result. This is the issue raised in 1990 by Maiani and Testa in the context of QCD final-state interactions\cite{Maiani:1990ca}.\\[-0.3cm]
\noindent 2. Assuming that after the subtraction the matrix element with the correct energy can be extracted, the most significant theoretical issue is to determine the non-exponential finite-volume corrections. The finite-volume matrix element contains terms which take the schematic form
\begin{equation}\label{eq:FVsum}
\frac1{L^3}\sum_{\vec{k}}\myprime~\frac{f(\vec{k})}{\Delta E}\,,
\end{equation}
where the prime on the summation indicates that in QED$_\mathrm{L}$ the term with $\vec{k}=0$ is omitted and that any other terms corresponding to $\Delta E=0$ are also excluded. The theoretical challenge is to relate the sum in Eq.\,(\ref{eq:FVsum}) to the real part of the corresponding infinite-volume integral:
\begin{equation}
\label{eq:IVintegral}
\mathrm{Re}\int\frac{\dthree{k}}{(2\pi)^3}\,\frac{f(\vec{k})}{\Delta E+i\epsilon}\,,
\end{equation}
with controlled finite-volume corrections.
We are currently working towards this goal; here we simply note that the necessary subtractions require knowledge of the pion's electromagnetic form factor and the $K\to\pi$ transition form factors in QCD, both for a range of momentum transfers.\\[-0.3cm]
\noindent 3. We note that the $1/\Delta E$ singularity and related difficulties are also present in the semileptonic decays of charged mesons, e.g. $K^+\to\pi^0\ell^+\nu_\ell$ decays in which the final-state pion is neutral. The photon still couples to the neutral pion, e.g. to its dipole moment, so that diagrams such as that in Fig.\,\ref{fig:LLexample}(a) are also present in this case and it remains to be seen whether the numerical effects are less severe. \\[-0.3cm]
\noindent 4. Finite-volume effects which decrease only as inverse powers of $L$ do not arise only because of the presence of the $1/\Delta E$ factor discussed above. Indeed we have seen in Sec.\,\ref{sec:leptonic} that such effects are also present in the computation of electromagnetic corrections to the spectrum\cite{Davoudi:2014qua,Borsanyi:2014jba} and to
leptonic decay amplitudes (see for example the diagram in Fig.\,\ref{fig:LLexample}(b)), where they arise due to terms in the summand which diverge sufficiently rapidly as $|\vec{k}|\to 0$ (see the scaling law in Eq.\,(\ref{eq:scaling})). However, the denominator of each such term in the summand only vanishes at the single point $|\vec{k}|=0$
and we have developed the techniques necessary to calculate the corresponding power-law finite-volume corrections\cite{Lubicz:2016xro}.\\
\noindent 5. From the above discussion it follows that the computation of semileptonic decay rates is considerably simpler at the edge of phase space, $(p_\pi+p_\ell)^2=(m_\pi+m_\ell)^2$, where the cuts leading to the imaginary part of the amplitude are absent. In this case the finite-volume effects which decrease only as inverse powers of $L$ still occur because the denominators of terms in the summand vanish, but now only at the single point $k\equiv |\vec{k}|=0$. This is a similar situation to the computation of electromagnetic corrections to the spectrum\cite{Davoudi:2014qua,Borsanyi:2014jba} and to
leptonic decay amplitudes (see for example the diagram in Fig.\,\ref{fig:LLexample}(b)) and we have the techniques to compute the universal finite-volume corrections.
In Sec.\,\ref{sec:leptonic} we explained that the two leading FV corrections to the spectrum and to leptonic decay amplitudes are universal, independent of the structure of the mesons. In the case of leptonic decays they only require the knowledge of the decay constant of the pseudoscalar meson, $f_P$, computed
in QCD. For semileptonic decays there is also a universality in the leading two terms, but the coefficient of the $1/(m_PL)$ corrections requires knowledge of the derivative of the form factors, $\partial f_{\pm}(q^2)/\partial q^2$. The reason for this can be understood as follows. The most singular summand in the sum over $\vec{k}$ is proportional to $1/k^3$ and leads to the infrared divergent terms proportional to $\log[m_PL]$. The terms proportional to $1/k^2$, which lead to corrections of $O(1/(m_PL))$, therefore require the leading $\vec{k}$-dependent term in the form-factors which is proportional to the derivative.\\[-0.3cm]
In summary, the techniques developed to include electromagnetic corrections to leptonic decays of pseudoscalar mesons can also be applied to semileptonic decays. At the edge of phase space, for example for $K_{\ell3}$ decays at $s_{\pi\ell}\equiv(p_\pi+p_\ell)^2=(m_\pi+m_\ell)^2$, so that the only singular term in the summand is at $|\vec{k}|=0$, these techniques can be applied directly. The cancellation of infrared divergences occurs as for leptonic decays, and the $O(1/(m_KL))$ finite-volume corrections are also ``universal'', but the coefficients depend on the derivatives of the form factors,
$\partial f_\pm/\partial q^2$, which are physical quantities computed in QCD. For $s_{\pi\ell}>(m_\pi+m_\ell)^2$, the physical (Minkowski) amplitude has an imaginary part which corresponds to a $1/\Delta E$ singularity and poles away from $|\vec{k}|=0$, requiring knowledge of the electromagnetic form factor of the pion and of $f_{\pm}$ for a range of values of the momentum transfer. We are currently investigating the optimal way to implement the necessary subtractions. For illustration, in Fig.\,\ref{fig:physicalKmu3} we exhibit the physical phase space for $K^0\to\pi^-\mu^+\bar\nu_\mu$ decays and the point with $s_{\pi\ell}=(m_{\pi^-}+m_{\ell^+})^2$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.3\hsize]{Figs/physical.pdf}
\caption{The physical region for $K_{\mu 3}$ decays. The quantities $s_{\pi\mu}$ and $q^2$ are given in GeV$^2$. The black circle represents the point at the minimum value of $s_{\pi\mu}$\,.\label{fig:physicalKmu3}}
\end{center}
\end{figure}
\section{Summary and Conclusions}\label{sec:concs}
The remarkable recent improvement in the precision of lattice QCD results for many quantities relevant for flavour physics has necessitated the inclusion of isospin breaking effects, and electromagnetic corrections in particular, into the computations. The presence of a zero-mass photon leads to significant long-distance issues, including infrared divergences and finite-volume effects which decrease only as inverse powers of $L$ and not exponentially. In this paper I have reviewed the issues and the status of the framework which, together with colleagues from Rome, we have been developing and implementing in leptonic and semileptonic decays of pseudoscalar mesons.
As explained in Sec.\,\ref{sec:leptonic}, for leptonic decays $P\to\ell\bar\nu_\ell(\gamma)$, the framework is complete and has been successfully implemented for $\pi_{\mu2}$ and $K_{\mu2}$ decays. We are able to handle the cancellation of infrared divergences and the subtraction of the universal finite-volume corrections which are of $O(1/(m_PL))$.
We have demonstrated that, after this subtraction, the expectation that the leading residual (structure-dependent) finite-volume corrections are of $O(1/(m_PL)^2)$ is satisfied numerically.
We have been able to determine $V_{us}$ with excellent precision (see Eqs.\,(\ref{eq:Vus_K}) and (\ref{eq:Vus})) and to verify the unitarity of the first row of the CKM matrix to better than per-mille accuracy (see Eq.\,(\ref{eq:unitarity})).
The most recent development has been the calculation of the radiative decays $P\to\ell\bar\nu_\ell\gamma$ for light and charmed mesons, and a phenomenological comparison of our results with those from experimental measurements\cite{Desiderio:2020oej,Frezzotti:2020bfa}.
For semileptonic decays $P_1\to P_2\ell\bar\nu_\ell(\gamma)$, where $P_1$ and $P_2$ are pseudoscalar mesons, there are additional non-exponential finite-volume effects associated with diagrams such as that in Fig.\,\ref{fig:LLexample}(a), which in Minkowski space contain an imaginary part. As explained in Sec.\,\ref{sec:semileptonic}, the subtraction of these additional finite-volume effects requires knowledge of the electromagnetic form factor of the $P_2$ meson and of the weak $P_1\to P_2$ transition form factors, both for a range of momentum transfers; studies of how best to perform this subtraction are currently in progress. This difficulty is generic and relevant for most decay processes; leptonic decays are a rare exception.
The techniques developed for leptonic decays can, however, be directly applied to semileptonic decays at the edge of phase space, where the invariant mass of the $P_2$\,-\,$\ell$ pair is $m_{P_2}+m_\ell$.
\subsection*{\textbf{Personal Note}}
It has been an honour and pleasure to have been invited to make this contribution to the volume celebrating 60 years of the Kraków Schools in Theoretical Physics. I have very fond recollections, both scientific and personal, of the four previous times I have lectured at the school: 1977 (\emph{Asymptotic Freedom and Deep Inelastic Electroproduction}), 1991 (\emph{Heavy Quark Physics from Lattice QCD}), 2006 (\emph{Lattice Flavour Dynamics}) and 2014 (\emph{Flavour Physics}). \\[-0.3cm]
I warmly congratulate all the organisers, from Professor Andrzej Białas who organised the first School through to Professor Michał Praszałowicz who has organised this 60th one, for creating and maintaining such an important and high-quality forum for the presentation and discussion of the latest developments in theoretical physics. On this anniversary, I wish the School the traditional Polish \emph{Sto Lat} (a hundred years).
\subsection*{\textbf{Acknowledgements}}
I warmly thank my collaborators from the Universities of Rome \emph{La Sapienza}, \emph{Tor Vergata} and \emph{Roma Tre} with whom the ideas discussed in this paper were developed and implemented. I was partially supported by an Emeritus Fellowship from the Leverhulme Trust
and by STFC (UK) grants ST/P000711/1 and ST/T000775/1.
\section{\label{sec1}Introduction}
ZnO is a promising material for photocatalysis~\cite{Maeda} and photovoltaic applications.~\cite{Law, Riaz} Mn substituting for the divalent cation in ZnO introduces a Mn$^{2+}$/Mn$^{3+}$ level located in the forbidden gap.~\cite{Johnson} The mid-gap position of the Mn$^{2+}$ level has already been exploited in practice and powers the research on water splitting.~\cite{Maeda}
Mn-doped ZnO also exhibits a pronounced chromatic effect: upon doping with Mn the otherwise transparent crystals turn reddish-brown due to the strong absorption interpreted as the Mn$^{2+}\to$ (Mn$^{3+}, e_{CB}$) photo-ionization transition,~\cite{Johnson} where $e_{CB}$ denotes a photoelectron in the conduction band. This absorption is accompanied by photoconductivity.~\cite{Johnson}
The nature of this transition has been inferred only indirectly. Though the presence of Mn in the 2+ charge state in ZnO was detected by means of electron paramagnetic resonance (EPR),~\cite{Hausmann, Chikoidze} no optical spectra related to
intra-center transitions of Mn$^{2+}$ were observed. Since these transitions can occur at energies higher than the observed photo-ionization band, it was concluded that the excited states of Mn$^{2+}$ are degenerate with the conduction band of ZnO,~\cite{Godlewski2010} consistent with the mid-gap position of the Mn$^{2+}$ energy level. However, no direct evidence of the depopulation of the Mn$^{2+}$ state under illumination has been presented so far.
In this paper we study directly the occupancy of Mn$^{2+}$ ions under illumination by means of photo-EPR spectroscopy. We observe a temperature-dependent decrease of the EPR signal intensity under excitation with light of energies corresponding to the Mn-related absorption band. The kinetics of the EPR signal photo-quenching points to a process involving photocarriers and the Mn ions directly.
First principles calculations indicate that the observed photoquenching is due to a transition of Mn$^{3+}$ to a metastable state after photoionization. Metastability of defects and/or dopants typically originates in strong lattice relaxations after the change of the defect charge state.
In the case of the As antisite in GaAs studied in the past (the EL2 center), optical excitation is followed by a large displacement, exceeding 1~\AA, of the defect towards the metastable interstitial site.~\cite{EL2, EL2_prl} A similar mechanism is operative also in the case of donors, which can acquire the DX configuration when a shallow donor captures an electron and becomes a deep one with a strongly localized electronic state in the band gap,~\cite{Chadi, Dobaczewski, BBdx, Wetzel, Thio} and in the case of native defects,~\cite{Lany_DX} where the (meta)stability is responsible for quenching of doping efficiency.
A metastable configuration can also consist in a breathing-like displacement of the surrounding host atoms.~\cite{Jones, Schmidt} According to the present results, metastability of Mn in ZnO also requires substantial lattice relaxations induced by the change of the charge state. However, a novel factor that drives metastability of Mn$^{3+}$ is the strong intracenter Coulomb coupling between the $d$(Mn) states, which prevents the electron capture by Mn$^{3+}$ followed by recombination. Finally, regarding the absorption measurements, the calculations predict that intracenter transitions should occur at energies higher than photoionization, in agreement with experimental data.
The paper is organized in the following way: in Sec.~\ref{sec2} the experimental setup and results are presented and discussed.
In Sec.~\ref{sec3}, details of the theoretical approach, based on the Generalized Gradient Approximation (GGA) to the Density Functional Theory, are given.
The $+U$ corrections~\cite{Anisimov1991, Anisimov1993, Cococcioni}
are applied to $d$(Zn), $p$(O), and the $d$(Mn) shell.
The proposed mechanism of metastability of the photoionized Mn is presented in Sec.~\ref{sec3d}.
Section~\ref{sec4} summarizes the obtained results.
\section{\label{sec2}Results and discussion}
\subsection{\label{sec2a}Experimental methods}
Mn-doped ZnO single crystals were grown by chemical vapor transport.~\cite{mycio} For the photo-EPR experiments the Mn concentration of 0.2~\% was chosen, as it ensures well resolved, narrow-line EPR spectra of Mn$^{2+}$. The EPR experiments were performed at 9.5~GHz, with use of a BRUKER ESP300 spectrometer equipped with an Oxford Instruments ESR 900 He gas flow cryostat enabling temperature-dependent measurements in the range of 1.8-300~K.
The magnetic field was oriented perpendicular to the $c$-axis of the crystal. The sample was illuminated at right angle to the magnetic field direction with a set of laser diodes of wavelengths varying from 445~nm to 980~nm. For power dependent measurements a set of gray filters was employed.
\subsection{\label{sec2b}Experimental results}
The as-grown ZnO:Mn 0.2~\% sample is highly resistive, in contrast to the n-type conductivity of undoped ZnO crystals grown with the same method. A part of the Mn ions occurs in the Mn$^{2+}$ charge state and can be easily detected by EPR. Annealing the crystal in hydrogen atmosphere leads to a substantial (more than fivefold) increase of the Mn$^{2+}$ ion fraction accompanied by the appearance of n-type conductivity.
\begin{figure*}[th!]
\begin{center}
\includegraphics[width=8.3cm]{fig1a}
\includegraphics[width=8.3cm]{fig1b}
\caption{\label{fig1}
(a) EPR spectrum of Mn$^{2+}$ in ZnO with $B$ perpendicular to the
$c$-axis. Black line shows the signal in the dark, green and red traces are recorded under illumination with 532~nm (50~mW) and 850~nm (660~mW) laser lines, respectively. (b) EPR signal of (a) in the magnetic field range 4060-4160~G showing the change of the lineshape and the shift of the line position under illumination.
}
\end{center}
\end{figure*}
Figure~\ref{fig1}a shows the EPR spectrum of Mn$^{2+}$ in the as-grown ZnO:Mn sample at 3.8~K taken with the magnetic field oriented perpendicular to the $c$-axis of the crystal. The spectrum consists of 30 partly overlapped resonances grouped into 5 sextets. The five so-called fine structure groups stem from allowed $\Delta M_S=\pm 1$ transitions between electronic spin levels of a $d^5$ ion with the electronic spin of $S=5/2$. Each group consists of 6 equally intense lines due to hyperfine interaction with the $I=5/2$ nuclear spin of $^{55}$Mn. The spectrum is characteristic of isolated Mn$^{2+}$ ions in ZnO.~\cite{Hausmann, Chikoidze} Analysis of the measured angular dependence of the resonance peak positions~\cite{note2} yields the spin Hamiltonian parameters $g = 2.0025 \pm 0.0003$, $D = -248.53 \pm 0.07$~G, $A_\parallel = 80.4 \pm 0.1$~G, $A_\perp = 80.3 \pm 0.3$~G, and $a = 4.0 \pm 0.2$~G at 3~K, consistent with earlier studies.~\cite{Johnson, Gluba}
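The parameters quoted above refer to the standard $S=5/2$ spin Hamiltonian for an ion at a wurtzite site. A sketch of its conventional form is given below (the cubic-field term is written in its usual shorthand; the precise axis convention for $a$ varies between references):

```latex
\begin{equation*}
\mathcal{H} = g\mu_B\,\mathbf{B}\cdot\mathbf{S}
  + D\left[S_z^2-\tfrac{1}{3}S(S+1)\right]
  + \frac{a}{6}\left[S_\xi^4+S_\eta^4+S_\zeta^4
      -\tfrac{1}{5}S(S+1)(3S^2+3S-1)\right]
  + A_\parallel S_z I_z + A_\perp\left(S_x I_x + S_y I_y\right)
\end{equation*}
```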
Apart from the EPR spectrum of isolated Mn$^{2+}$ ions, no other EPR signals were detected in our crystals, in particular neither complexes of Mn$^{2+}$ with other defects (up to second nearest neighbors), nor spectra related to Mn-Mn pairs were observed.
Illumination with light in the 980 - 445~nm range leads to a drastic reduction of the detected EPR signal intensity of Mn$^{2+}$. Exemplary spectra recorded at 3.8~K under illumination with 532~nm and 850~nm laser lines are shown in Fig.~\ref{fig1}a. The laser power was 50 and 660~mW, respectively. Not all of the observed signal reduction can be attributed to a change of Mn$^{2+}$ concentration alone. The dominant mechanism of the EPR intensity quenching shown in Fig.~\ref{fig1} comes from the skin effect, {\it i.e.}, absorption by photogenerated free carriers, which reduces the microwave penetration depth and hence the effective volume of the sample. The skin effect manifests itself in a change of the resonance line shape from Gaussian to Dysonian, as shown in Fig.~\ref{fig1}b. In addition, we observe a small shift of the resonance line positions towards higher magnetic fields under illumination.
This shift is due to exchange interaction between localized magnetic moments of
Mn$^{2+}$ and free carrier spins,~\cite{StoryPRL} an analogue of the Knight shift in nuclear magnetic resonance. Both the change of the EPR lineshape and the shift of the resonance fields directly prove that illumination with light in the whole wavelength range (445 - 980~nm) studied leads to generation of free carriers.
To eliminate the skin effect the sample was thinned down to 100~$\mu$m. This thickness was found to be sufficient to ensure microwave penetration of the entire sample. We no longer observed changes of the line shape accompanying the reduction of the Mn$^{2+}$ EPR signal intensity upon illumination. We can also exclude another possible source of intensity decrease in our experiment, {\it i.e.}, sample heating due to incident laser power. Since the fine structure $-5/2\to -3/2$ (high field resonances in Fig.~\ref{fig1}a) and $3/2\to 5/2$ transitions (low field resonances) have the same probability, the difference in the intensities of the high field and low field resonance lines reflects the difference in the thermal population of the -5/2 and 3/2 levels. At low temperatures (see Fig.~\ref{fig1}a) the intensity of the high field resonances is more than twice as high as that of the low field ones.
With increasing sample temperature the intensity ratio decreases, and at 300~K both EPR resonances are almost equally intense. Even under illumination with 2.4~W at the lowest applied photon energy (980~nm wavelength) we observed no measurable change in the EPR signal intensity ratio between the $-5/2\to -3/2$ and $3/2\to 5/2$ resonances.
Thus, any light induced changes of the EPR signal intensity measured in the so prepared sample reflect solely the change in the occupancy of the manganese 2+ charge state. Unless explicitly specified, all further data reported here refer to measurements performed on the thin sample.
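The population argument above can be sketched numerically. This is a minimal estimate assuming equally spaced spin levels separated by the microwave quantum $h\nu$; the fine-structure shifts due to $D$ are neglected, so the numbers are indicative only:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J*s
NU = 9.5e9          # microwave frequency, Hz

def intensity_ratio(temperature_k, n_quanta=4):
    """Boltzmann population ratio of the M_S = -5/2 and M_S = +3/2 levels,
    assuming equally spaced Zeeman levels separated by the microwave
    quantum h*nu (fine-structure shifts from D are neglected here)."""
    delta_e = n_quanta * H * NU  # -5/2 and +3/2 lie four quanta apart
    return math.exp(delta_e / (K_B * temperature_k))

# Low temperature: the high-field (-5/2 -> -3/2) lines are markedly stronger.
print(f"ratio at 3.8 K: {intensity_ratio(3.8):.2f}")
# Room temperature: populations, and hence intensities, nearly equalize.
print(f"ratio at 300 K: {intensity_ratio(300.0):.3f}")
```

Since the ratio collapses to unity at 300~K, any laser-induced heating would show up as a change of this ratio, which is not observed.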
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=8.3cm]{fig2a}
\includegraphics[width=8.3cm]{fig2b}
\caption{\label{fig2}
(a) Quenching of Mn$^{2+}$ EPR intensity, $\Delta I_{EPR}$, at T=2.8~K as a function of excitation wavelength measured in the thin sample. (b) Spectral dependence of the absorption coefficient for both pure ZnO and ZnO doped with 0.5~\% Mn at 300~K.
}
\end{center}
\end{figure*}
The spectral dependence of the Mn$^{2+}$ EPR signal photoquenching is presented in Fig.~\ref{fig2}a. Depicted is the relative reduction of the EPR signal intensity, $\Delta I_{EPR}$, under illumination at a constant power of 50~mW. $\Delta I_{EPR}$ is defined as the difference between the signal intensities in the dark, $I_{EPR}(dark)$, and under illumination, $I_{EPR}(illum)$, divided by the dark intensity:
\begin{equation}
\label{eq1}
\Delta I_{EPR}=\frac{I_{EPR}(dark)-I_{EPR}(illum)}{I_{EPR}(dark)}.
\end{equation}
For comparison, the room temperature absorption spectra of ZnO:Mn 0.5~\% and undoped ZnO are shown (Fig.~\ref{fig2}b). As can be seen in Fig.~\ref{fig2}b, the onset of the Mn-related absorption is close to 620~nm (2~eV), which is consistent with the optical ($E_{opt} = 2.6 \pm 0.1$~eV) and thermal ($E_{th} = 2.1 \pm 0.1$~eV) ionization energies determined in Ref.~\onlinecite{Godlewski2009} for the postulated Mn 2+ to 3+ photoionization transition. The agreement between the spectral dependence of the Mn$^{2+}$ EPR signal photoquenching below 600~nm in Fig.~\ref{fig2}a and the absorption shown in
Fig.~\ref{fig2}b proves unambiguously that the absorption band is indeed due to photoionization of Mn$^{2+}$.
However, above 600~nm there is a non-vanishing tail in $\Delta I_{EPR}$, which we attribute to an indirect quenching mechanism, {\it i.e.}, capture of holes generated in the photoneutralization processes of other defects present in the sample. Although the tail seems to be weak at the excitation power of 50~mW, at high incident powers $\Delta I_{EPR}$ due to the indirect mechanism is comparable to that observed for direct photoionization of Mn$^{2+}$. This means that the concentration of the defects involved is not negligible.
We note that the spectral dependence of $\Delta I_{EPR}$ in Fig.~\ref{fig2} may not be very accurate, as in the photoquenching experiment different light sources for each wavelength were used. Although the light path was always optimized for the maximum response, there still remains the error connected to the difference in the spot sizes of the laser diodes.
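For reference, the wavelengths quoted in this discussion translate to photon energies via $E = hc/\lambda$. A quick conversion ($hc \approx 1239.84$~eV$\cdot$nm):

```python
# Convert the excitation wavelengths used in the experiment to photon
# energies via E[eV] = hc / lambda ~ 1239.84 / lambda[nm].
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm):
    return HC_EV_NM / wavelength_nm

# Onset of the Mn-related absorption band, close to the 2 eV threshold:
print(f"620 nm -> {photon_energy_ev(620):.2f} eV")
# Range of laser diodes used in the photo-EPR experiment:
print(f"445 nm -> {photon_energy_ev(445):.2f} eV")
print(f"980 nm -> {photon_energy_ev(980):.2f} eV")
```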
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=8.3cm]{fig3a}
\includegraphics[width=8.3cm]{fig3b}
\caption{\label{fig3}
(a) Ratio $\Delta I_{EPR}$(thick)/$\Delta I_{EPR}$(thin) in thick and thin samples vs. excitation wavelength. (b) Spectral dependence of absorbance at $T$=16~K. Note the different wavelength windows in (a) and (b).
}
\end{center}
\end{figure*}
As already mentioned, in the thick ZnO:Mn sample we observe an additional reduction of the Mn$^{2+}$ signal intensity under illumination, related to microwave absorption by free carriers. This reduction should increase with the concentration of photogenerated carriers as the effective volume penetrated by microwaves decreases. If the free carriers originated solely from Mn$^{2+}$, the spectral dependencies measured in thin and thick samples should scale with the behavior of the Mn$^{2+}$ photoionization band. In Fig.~\ref{fig3}a we show the spectrally dependent quenching of the EPR signal intensity of Mn$^{2+}$ measured in the thick sample divided by the quenching measured after the sample was thinned down to 100~$\mu$m, $\Delta I_{EPR}$(thick)/$\Delta I_{EPR}$(thin). The measurements were performed at a constant temperature of 3.8~K.
As can be seen, the photoquenching ratio increases with increasing wavelength, in contrast to the behavior expected if Mn$^{2+}$ photoionization were the only mechanism of free carrier generation.
This result demonstrates that there is at least a second channel of carrier photogeneration, dominant for wavelengths longer than 500~nm. Absorption extending to even longer wavelengths than applied in the photoquenching experiment is also observed in the optical spectrum of ZnO:Mn 0.2~\% measured at low temperatures (16~K) shown in Fig.~\ref{fig3}b. The nature and number of the defects responsible for carrier generation cannot be determined in our experiment as they give no paramagnetic signal. In the studied samples no EPR signal other than that of Mn$^{2+}$ was detected.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.3cm]{fig4}
\caption{\label{fig4}
Temperature dependencies of Mn$^{2+}$ EPR signal photoquenching ($\Delta I_{EPR}$) in the as-grown thin sample (squares) and in the same sample after hydrogenation (triangles) illuminated with 532~nm light at 50~mW. The solid lines were calculated assuming a thermally activated recapture process.
}
\end{center}
\end{figure}
In contrast to absorption measurements, where the photoionization transition is observed up to room temperature, in the photo-EPR experiment photoquenching was found to be temperature dependent, as shown in Fig.~\ref{fig4}. This is due to a fundamental difference between the two experiments. Whereas in absorption the signal is predominantly proportional to the occupancy of the initial state (Mn$^{2+}$), in photo-EPR the change of the signal is proportional to the transient occupancy of the final state (Mn$^{3+} + e_{CB}$). This means that a fast recapture of the ionized electron by Mn$^{3+}$ decreases photoquenching. In other words, photoquenching can only be observed if the occupancy of the final state is metastable. This can be achieved in two ways: either the photoionized electrons are trapped on other defect centers, or the recapture proceeds via an energy barrier. In both processes the recapture has a thermally activated character.
Shown in Fig.~\ref{fig4} are the temperature dependencies of $\Delta I_{EPR}$ in the as-grown sample (squares) and the same sample after hydrogenation (triangles) illuminated with 532~nm light at 50~mW.
The solid lines were calculated with use of the following simple relation:
\begin{equation}
\label{eq2}
(1-\Delta I_{EPR})^{-1}= A[1+B\exp(E_B/kT)],
\end{equation}
where the parameter $A$ is the ratio of the concentration of Mn$^{2+}$ ions in the dark to the total
Mn concentration.
Parameter $B=I\sigma/(nr)$ depends on the electron concentration $n$, light intensity $I$, Mn$^{2+}$
absorption cross section $\sigma$, and the temperature independent part of electron capture probability by Mn$^{3+}$, $r$. $E_B$ is the energy barrier for recapture of photoionized electrons. This relation was obtained assuming that the concentration of photogenerated electrons is much higher than that photoionized from Mn$^{2+}$. Unfortunately, as we do not know the fraction of occupied Mn$^{2+}$ ions in the sample
(given by the parameter $A$), the parameters cannot be determined independently of each other.
However, we obtain a reasonable agreement with the experimental data with a finite barrier, with the lowest estimated value of the order of 1~meV.
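Eq.~(\ref{eq2}) can be evaluated with illustrative parameter values. As stated above, $A$, $B$ and $E_B$ cannot be fixed independently from the data, so the numbers below are assumptions chosen only to show the trend of decreasing photoquenching with temperature:

```python
import math

K_B_MEV = 8.617e-2  # Boltzmann constant in meV/K

def delta_i_epr(temperature_k, a_ratio, b_factor, e_b_mev):
    """Photoquenching from Eq. (2): (1 - dI)^(-1) = A [1 + B exp(E_B / kT)]."""
    rhs = a_ratio * (1.0 + b_factor * math.exp(e_b_mev / (K_B_MEV * temperature_k)))
    return 1.0 - 1.0 / rhs

# Illustrative parameters only: A = 0.8, B = 0.5, E_B = 1 meV (the order of
# magnitude quoted in the text); these are NOT fitted values.
for t in (3.0, 10.0, 30.0, 100.0):
    print(f"T = {t:5.1f} K  ->  dI_EPR = {delta_i_epr(t, 0.8, 0.5, 1.0):.3f}")
```

The quenching is strongest at the lowest temperatures and falls off as the thermally activated recapture, governed by $E_B$, becomes effective.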
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.3cm]{fig5}
\caption{\label{fig5}
Kinetics of Mn$^{2+}$ photoquenching under 445~nm excitation at a power of 190 (upper trace) and 20~mW (lower trace). Red line is calculated for an exponential decay, $\exp(-t/\tau)$, with $\tau=63$~ms.
}
\end{center}
\end{figure}
The kinetics of Mn$^{2+}$ photoionization is very fast, as shown in Fig.~\ref{fig5}. The kinetics was measured at 3~K at a constant field value, corresponding to a maximum intensity of one of the resonance lines. The upper and lower traces show the change of the signal intensity under 445~nm excitation at a power of 190 and 20~mW, respectively. The red line is calculated assuming a decay time of 63~ms (the apparatus response time).
This decay constant is much too short to account for the kinetics of a process involving charge transfer between Mn and other trap centers, which is usually of the order of minutes.~\cite{Godlewski1985} Moreover, the observed decay time does not depend on excitation power, which suggests that the real decay time is shorter than the spectrometer response. However, at wavelengths above 600~nm there is also a slower component, responsible for a few percent of the signal decrease.
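The claim that a fast true decay is masked by the spectrometer response can be illustrated with a simple model, assuming a single-pole (RC-like) instrument response of 63~ms; the true decay times used below are hypothetical:

```python
import math

TAU_APP = 0.063  # spectrometer response time constant, s (63 ms)

def filtered_transient(t, tau_true, tau_app=TAU_APP):
    """Photoquenching transient 1 - exp(-t/tau_true), passed through a
    single-pole (RC-like) instrument response with time constant tau_app.
    Closed-form convolution; assumes tau_true != tau_app."""
    return (1.0 - math.exp(-t / tau_app)
            - tau_true / (tau_true - tau_app)
            * (math.exp(-t / tau_true) - math.exp(-t / tau_app)))

def time_to_63_percent(tau_true, dt=1e-4):
    """Time at which the measured transient reaches 1 - 1/e of its final value."""
    t = 0.0
    target = 1.0 - 1.0 / math.e
    while filtered_transient(t, tau_true) < target:
        t += dt
    return t

# Hypothetical true decay times differing by an order of magnitude
# (mimicking different excitation powers):
for tau_true in (1e-3, 1e-2):
    print(f"tau_true = {tau_true*1e3:4.1f} ms -> "
          f"measured ~{time_to_63_percent(tau_true)*1e3:.0f} ms")
```

Whenever the true decay is much shorter than the response time, the measured time constant stays pinned near 63~ms and is essentially power-independent, exactly as observed.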
The photo-EPR experiments have confirmed that the Mn$^{3+}$/Mn$^{2+}$ level is located at 2.1~eV below the conduction band minimum of ZnO. In addition, it was shown that in as-grown ZnO:Mn crystals the manganese impurity occurs predominantly in the 3+ charge state. To account for the partial occupancy of the Mn$^{2+}$ state, there have to be other acceptor centers in the sample which push the Fermi level below the impurity level. One of the candidates is the complex of manganese with interstitial oxygen, Mn-O$_i$, postulated by Gluba and Nickel.~\cite{Gluba} The presence of such an acceptor center, however, is not confirmed in our experiment. We observe only the recharging of isolated Mn ions. It should also be noted that apart from Mn$^{2+}$ we detect no other EPR signals, whether of acceptors or donors, in our crystals.
In particular, the EPR signal of a residual donor with the $g-$factor of 1.956, identified as a hydrogen-related shallow donor,~\cite{Detlev} is not observed even after hydrogenation of the sample. This signal does not appear under illumination either, which suggests that if this donor is present it is not effectively populated, {\it i.e.}, the electron capture rate is much lower than that of Mn$^{3+}$ ions. It should also be stressed that the temperature dependence of Mn$^{2+}$ photo-quenching is not governed by activation energies typical for ZnO donors, which range from 35 to about 70~meV.~\cite{Meyer} Instead, activation energies of the order of 1~meV, slightly dependent on sample treatment, are detected. All these findings point to the conclusion that another temperature dependent mechanism must lead to a metastable change of the Mn$^{2+}$ occupancy under illumination.
\section{\label{sec3}Theory}
\subsection{\label{sec3a}Calculation details}
The calculations are performed within the density functional theory in the generalized gradient approximation (GGA) of the exchange-correlation potential.~\cite{Hohenberg,KohnSham,PBE} The $+U$ corrections are included.~\cite{Anisimov1991, Anisimov1993, Cococcioni} We use the pseudopotential method implemented in the QUANTUM ESPRESSO code,~\cite{QE} with the valence atomic configuration $3d^{10}4s^2$ for Zn, $2s^2p^4$ for O and $3s^2p^6 4s^2p^0 3d^5$ for Mn, respectively.
The plane-waves kinetic energy cutoffs of 30~Ry for wavefunctions and 180~Ry for charge density are employed. The electronic structure of the wurtzite ZnO is examined with an $8\times 8\times 8$ $k$-point grid. Analysis of a single Mn impurity in ZnO is performed using $3\times 3\times 2$ supercells with 72 atoms (2.8 atomic per cent of Mn). $k$-space summations are performed with a $3\times 3\times 3$ $k$-point grid for density of states (DOS) calculations,
while calculations with fixed occupation matrices
are performed using the $\Gamma$ point only. The $U$ terms for $3d$(Zn), $2p$(O), and $3d$(Mn) orbitals are treated as free parameters, whose values are discussed below. Ionic positions are optimized until the forces acting on ions become smaller than 0.02~eV/\AA.
\subsection{\label{sec3b}Pure ZnO}
It was previously shown that both the local density approximation (LDA) and GGA fail to give correct band characteristics of ZnO. In particular, the band gap, $E_{gap}$, of ZnO calculated within LDA/GGA~\cite{Schroer, Jaffe, Lim} is about 1~eV. This is due to the universal ``band gap problem'', {\it i.e.}, the underestimation of the gap within LDA/GGA on the one hand, but also to the too high calculated energies of the $d$(Zn)-derived bands~\cite{Wei} on the other hand. The inclusion of the $U$(Zn) term~\cite{Zhou, Lim, Dong} solves this problem only partially, since the band gap is still underestimated by about 2~eV. For example, we find that when $U$(Zn)=10~eV is employed the $d$(Zn) band is at about 8~eV below the valence band maximum (VBM), in agreement with experiment,~\cite{Lim, Dong, Ley, Vesely} but $E_{gap}\approx 1.2$~eV is still wrong.
This is because the coupling between $d$(Zn) and VBM is weak due to the large energy difference between those states, and thus $E_{gap}$ is not sensitive to the energy of the $d$(Zn) band. To obtain a correct value of $E_{gap}$ one should observe that the upper valence band is derived from $p$(O) orbitals. Indeed, the inclusion of the $U$(O) term for the $p$(O) orbitals, in addition to $U$(Zn), gives a correct band structure.~\cite{Ma, Lim, Agapito} We find that $U$(Zn)=12.5~eV and $U$(O)=6.25~eV reproduce both the experimental $E_{gap}$ of 3.3~eV~\cite{Dong} and the energy of the $d$(Zn) band, centered about 8~eV below the VBM, in excellent agreement with Ref.~\onlinecite{Agapito}. These values also lead to the correct width of $\sim 6$~eV of the upper valence band of mostly $2p$(O) character, and the lower conduction band of $4s$(Zn) character.
The relaxed crystal structure agrees well with experiment: the lattice parameters $a = 3.23$~\AA\ and $c = 5.19$~\AA, as well as the internal parameter $u = 0.38$ are underestimated by less than 1~\% in comparison with experimental values:
$a = 3.25$~\AA, $c = 5.20$~\AA, and $u = 0.38$.~\cite{Karzel} One should finally observe that the electronic structure of ZnO represents a problem even for the GW approach: as it is discussed in Refs.~\onlinecite{Lim, Lany} different GW calculations, including quasiparticle self-consistent GW calculations, still place the $d$(Zn) band at an energy too high by about 1~eV, and an additional potential on Zn cations is needed to achieve the correct band structure.
\subsection{\label{sce3c}Mn impurity in ZnO}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.3cm]{fig6a}
\includegraphics[width=8.3cm]{fig6b}
\end{center}
\caption{\label{fig6}
Energy bands (left panels) and DOS (right panels) of (a) ZnO:Mn$^{2+}$, and (b) ZnO:Mn$^{3+}$. Gray area and black lines (blue lines in the on line version) in DOS display the total DOS and DOS projected on $d$(Mn) orbitals, respectively. Horizontal (red) lines denote the band gap of ZnO.
}
\end{figure}
Properties of the Mn ion in ZnO depend on its charge state. The band structure and DOS of ZnO doped with Mn$^{2+}$ and Mn$^{3+}$ are shown in Fig.~\ref{fig6} for $U$(Mn)=0. Mn$^{2+}$ introduces two levels into the gap, a $t_{2\uparrow}$ triplet at 2.64~eV above VBM and an $e_{2\uparrow}$ doublet at 1.90~eV. (Actually, $t_{2\uparrow}$ is split into a singlet and a doublet by the wurtzite crystal field with a small splitting of about 0.1~eV.) The spin-down states form resonances degenerate with the conduction band, and thus, in agreement with experiment, Mn cannot assume the 1+ charge state in n-type ZnO.
The $t_{2\uparrow}$ and $e_{2\uparrow}$ levels of Mn$^{3+}$ are at about 1.45 and 0.28~eV above VBM, respectively, {\it i.e.}, they are lower by $\sim 1.5$~eV than those of Mn$^{2+}$. This large difference in the level energies of Mn$^{2+}$ and Mn$^{3+}$ stems from the strong intra-center Coulomb repulsion between $d$(Mn) electrons caused by the localization of their wavefunctions. Moreover, the localized character of $d$(Mn) is responsible for the relatively large 6~\% reduction of the Mn-O bond length, from 2.02~\AA\ for Mn$^{2+}$ to 1.90~\AA\ for Mn$^{3+}$, which is induced by the decrease in the Coulomb coupling between Mn and O anions. We also mention that the energies of the gap states of the isolated Mn$^{3+}$ and those of Mn$^{3+}$ with a photoelectron $e_{CB}$ in the conduction band are the same to within 0.02~eV, and the Mn-O bond lengths are the same to within 0.01~\AA. This is because of the delocalized character of the wave function from the bottom of the conduction band. The results for Mn$^{3+}$ with $e_{CB}$ are shown in Fig.~\ref{fig8}c.
\subsection{\label{sec3d}Photoionization, recombination, and mechanism of metastability}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.3cm]{fig7}
\caption{\label{fig7}
(a) Total energy change of Mn$^{2+}$ and (Mn$^{3+}, e_{CB}$) as a function of configuration coordinate $Q$. $Q_2$ and $Q_3$ are equilibrium atomic configurations of the Mn$^{2+}$ and (Mn$^{3+}, e_{CB}$) charge states, respectively, and $Q_B$ is the configuration coordinate of the barrier. (b) Single particle energy of the $t_{2\uparrow}$ level for both Mn$^{2+}$ (red symbols) and (Mn$^{3+}, e_{CB}$) (blue symbols); note the strong dependence of the $t_{2\uparrow}$ energy on the charge state.
$U$(Mn)=0 is assumed.
}
\end{center}
\end{figure}
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=2.86cm]{fig8a}
\includegraphics[width=2.86cm]{fig8b}
\includegraphics[width=2.86cm]{fig8c}
\includegraphics[width=2.86cm]{fig8d}
\includegraphics[width=2.86cm]{fig8e}
\includegraphics[width=2.86cm]{fig8f}
\end{center}
\caption{\label{fig8}
Calculated diagrams of Mn levels for (a) Mn$^{2+}$ at equilibrium configuration $Q_2$, (b) Mn$^{3+}$ with a photoelectron $e_{CB}$ in the CB in the same atomic configuration $Q_2$, (c) Mn$^{3+}$ with $e_{CB}$ in the relaxed configuration $Q_3$, (d) Mn$^{3+}$ with $e_{CB}$ in the barrier configuration $Q_B$, (e) Mn$^{2+}$ (when $e_{CB}$ is captured by Mn) in the barrier configuration $Q_B$, and (f) Mn$^{2+}$ relaxed to $Q_2$; this is the end step of the recombination process, the configuration is the same as in (a). $U$(Mn)=0 is assumed.
}
\end{figure*}
Due to the strong dependence of gap levels on the Mn charge state, the energies of absorption and/or recombination cannot be deduced directly from single particle states of Mn$^{2+}$ (or Mn$^{3+}$), as was indicated in, {\it e.g.}, Refs.~\onlinecite{Badaeva, Gamelin}. Consequently, energies of processes analyzed below are calculated from the total energy difference between final and initial states.~\cite{note1}
The absorption-recombination cycle of Mn$^{2+}$ occurs in five steps. They are presented in Fig.~\ref{fig7}, which shows both the total energy and the Mn energy levels for each step for $U$(Mn)=0. Mn levels are shown in Fig.~\ref{fig8} in detail.
(i) In the first step (Fig.~\ref{fig7}a, A~$\to$~B) one electron from $t_{2\uparrow}$ of Mn$^{2+}$ is excited to the conduction band, with the atomic positions kept fixed at the equilibrium configuration of Mn$^{2+}$, $Q_2$. The excitation energy is $E_{abs}=2.0$~eV. Photoionization induces a strong decrease of the $t_{2\uparrow}$ energy by about 2~eV, see Figs~\ref{fig7}b and \ref{fig8}b, because the depopulation of the $d$(Mn) shell reduces the strength of the Coulomb repulsion.
(ii) In the second step (B~$\to$~C), atoms are allowed to relax towards the equilibrium configuration $Q_3$ of Mn$^{3+}$ with the photoelectron $e_{CB}$ in the conduction band. This case is denoted by (Mn$^{3+}, e_{CB}$) in Figs~\ref{fig7} and \ref{fig8}. During this step the Mn-O bonds are reduced by $\sim 6$~\%, and the energy of $t_{2\uparrow}$ increases by about 1~eV (Figs~\ref{fig7}b and \ref{fig8}c), in agreement with its antibonding character. The corresponding energy gain ($E_{tot}$(B)$- E_{tot}$(C)) is 0.69~eV.
This energy gain takes place in spite of the fact that the single particle gap level $t_{2\uparrow}$ increases in energy by more than 1~eV, see Figs.~\ref{fig7}b and \ref{fig8}c. This illustrates the fact that total energy differences cannot be deduced directly from single particle states of Mn$^{2+}$ (or Mn$^{3+}$), since other factors such as the Madelung ion-ion energy are dominant.
According to our results, the relaxed (Mn$^{3+}, e_{CB}$) state of Mn$^{3+}$ with one electron in the conduction band is metastable, because its energy is higher than that of the relaxed Mn$^{2+}$ by 1.32~eV (($E_{tot}$(C)-$ E_{tot}$(A)) in Fig.~\ref{fig7}a), but a direct recombination of the photoelectron to the $t_{2\uparrow}$ level of Mn$^{2+}$ is not possible. The instability stems from the fact that in the configuration $Q_3$ the energy of the $t_{2\uparrow}$ level of Mn$^{2+}$ occupied with 3 electrons is above the conduction band bottom (CBB), see Fig.~\ref{fig7}b. Indeed, the calculated dependence of $t_{2\uparrow}$ of Mn$^{2+}$ on the configuration coordinates, presented in Fig.~\ref{fig7}b, shows that $t_{2\uparrow}$ increases in energy with the decreasing Mn-O bond lengths, and merges with the conduction band for the atomic configuration $Q_B$.
For smaller bond lengths, in particular in the $Q_3$ configuration, it is a resonance degenerate with the conduction band.
The extrapolated $t_{2\uparrow}$ energies are shown by dashed lines in Fig.~\ref{fig7}b, and the corresponding extrapolated total energy is shown by the dashed line in Fig.~\ref{fig7}a. We use extrapolated values because for the configuration coordinates in the range $(Q_B,Q_3)$ we could not arrive at convergent results when fixing the occupation of the $t_{2\uparrow}$ level by 3 electrons.
The occupancy of $t_{2\uparrow}$ by 3 electrons is unstable since there are empty conduction states lower in energy. In other words, Mn$^{2+}$ is stable for configuration coordinates in the range from $Q_2$ to $Q_B$, when $t_{2\uparrow}$ is a gap state, while (Mn$^{3+}, e_{CB}$) is locally stabilized for configuration coordinates in the range ($Q_3$, $Q_B$). At $Q_B$, the electron recombination is possible, with the corresponding energy gain $E_{rec}$. The difference in total energy of (Mn$^{3+}, e_{CB}$) between $Q_3$ and $Q_B$ is the energy barrier $E_B$.
The difference between the total energies of the Mn$^{2+}$ and (Mn$^{3+}, e_{CB}$) states in the $Q_3$ configuration can only be estimated from the single particle levels at the intermediate (half) occupation, by virtue of the Janak theorem.~\cite{Janak} This formally gives the energy of the Mn$^{2+}$ state lower by 0.59~eV. Actually, however, the Mn$^{2+}$ state is unreachable in the $Q_3$ configuration and does not converge in our calculations. The Janak theorem can be applied in this case, since the ionic positions are kept fixed at $Q_3$.
The recombination of $e_{CB}$ requires three more steps shown in Figs~\ref{fig7} and \ref{fig8}.
(iii) A thermally driven atomic transition from $Q_3$ to the barrier configuration $Q_B$ (C~$\to$~D), which is described in detail below. According to our estimates, the upper limit for the barrier $E_B$ is 60~meV.
(iv) The capture of the photoelectron by Mn$^{3+}$ (D~$\to$~E), {\it i.e.}, the transition to the Mn$^{2+}$ state. This transition at the estimated $Q_B$ configuration provides the energy gain of $E_{rec} = 0.98$~eV.
(v) The relaxation of Mn$^{2+}$ from $Q_B$ to $Q_2$ (E~$\to$~A) with $E_{relax}= 0.41$~eV.
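The energetics of the five steps can be cross-checked by simple bookkeeping: a closed cycle must return to the starting total energy, and the A$\to$C difference must reproduce the quoted 1.32~eV. The values below are taken from the text (rounded; the barrier is set at its 60~meV upper limit):

```python
# Energies (eV) of the five steps of the absorption-recombination cycle,
# as quoted in the text; positive = energy cost, negative = energy gain.
steps = [
    ("A->B photoionization (E_abs)",          +2.00),
    ("B->C lattice relaxation to Q3",         -0.69),
    ("C->D barrier crossing (E_B, upper limit)", +0.06),
    ("D->E electron capture (E_rec)",         -0.98),
    ("E->A relaxation of Mn2+ (E_relax)",     -0.41),
]

# Energy of the metastable (Mn3+, e_CB) state above the Mn2+ ground state:
metastable = steps[0][1] + steps[1][1]
print(f"metastable state above ground state: {metastable:.2f} eV")

# A closed cycle must return to the starting total energy (up to rounding):
cycle = sum(e for _, e in steps)
print(f"cycle closure (should be ~0): {cycle:+.2f} eV")
```

The cycle closes to within the rounding of the quoted values, and the metastable-state energy agrees with the 1.32~eV given in the text.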
Finally, we notice that the metastable atomic configuration $Q_3$ of (Mn$^{3+}$, $e_{CB}$) is an excited state of the crystal as a whole, since the corresponding total energy is higher than that of ZnO:Mn$^{2+}$ in the ground state configuration $Q_2$.
Importantly, however, in those particular atomic configurations electrons are in the respective ground states, which justifies the usage of GGA$+U$.
Consequently, both the total crystal energies and the total energy difference between the metastable configuration $Q_3$ and the ground state $Q_2$ are well defined as well.
Moreover, in the $Q_3$ configuration small displacements of anions around Mn increase the total energy of (Mn$^{3+}, e_{CB}$), which proves that this is indeed a metastable state of the crystal, and the barrier for electron recombination is non-vanishing.
\subsection{\label{sec3e}Estimation of the energy barrier}
\begin{figure}[t!]
\begin{center}
(a)\includegraphics[width=3.5cm]{fig9a}
(b)\includegraphics[width=4.0cm]{fig9b}
\caption{\label{fig9}
(a) Mn-O bonds. The atomic positions in Cartesian coordinates are Mn$(0,0,0)$, O1$(0,0,d_{1z})$, O2$(0,d_{2y} ,d_{2z})$. (b) Two paths between $Q_2$ and $Q_3$. For $d_{2y}^B$, Mn$^{2+}$ becomes unstable, {\it i.e.}, the $t_{2\uparrow}$ level of Mn$^{2+}$ is above CBM. $Q_B$ is the estimated barrier configuration.
}
\end{center}
\end{figure}
\begin{table}
\caption{\label{tabI}
Equilibrium coordinates of O1 and O2 shown in Fig.~\ref{fig9}a, the respective Zn-O and Mn-O bond lengths
$d_1$ and $d_2$, and the average bond length $< d >= (d_1 + 3d_2)/4$. All values are in \AA.
}
\begin{ruledtabular}
\begin{tabular}{l c c c c c}
& $d_1=d_{1z}$ & $d_{2y}$ & $d_{2z}$ & $d_2$ & $<d>$\\
\hline
ZnO:& 1.98& 1.87& -0.62& 1.97& 1.97\\
Mn$^{2+}$:& 2.03& 1.92& -0.63& 2.02& 2.02\\
Mn$^{3+}$:& 1.94& 1.79& -0.59& 1.88& 1.90\\
\end{tabular}
\end{ruledtabular}
\end{table}
A detailed description of the metastability, in particular of the barrier height for return to the ground state, is difficult, because the atomic relaxations around Mn involve not only the nearest but also more distant neighbors. To make the problem tractable, we limit the parameter space to the four Mn-O bonds shown in Fig.~\ref{fig9}a. The local symmetry of Mn is $C_{3v}$ in all the considered cases, and consequently there are 3 parameters that define the geometry, $d_{1z}$, $d_{2y}$, and $d_{2z}$, which are defined in the caption to Fig.~\ref{fig9}. The three basal O atoms are equivalent. Mn is assumed to be at $r=0$, and the atoms beyond the first neighbors are allowed to relax.
The calculated coordinates of the two non-equivalent oxygen ions for both Mn$^{2+}$ and Mn$^{3+}$ in the (Mn$^{3+}, e_{CB}$) configuration, together with the Zn-O bond lengths in ZnO for comparison, are given in Table~\ref{tabI}.
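The entries of Table~\ref{tabI} can be cross-checked against the geometry of Fig.~\ref{fig9}a: with Mn at the origin and O2 at $(0,d_{2y},d_{2z})$, the basal bond length is $d_2 = (d_{2y}^2 + d_{2z}^2)^{1/2}$, and $<d> = (d_1 + 3d_2)/4$ follows from one apical and three equivalent basal bonds. A short script reproduces the quoted values:

```python
import math

# Consistency check of Table I: reconstruct d2 and <d> from the
# O2 coordinates (d2y, d2z) and the apical bond d1 (all in Angstrom).
table = {
    "ZnO":  (1.98, 1.87, -0.62),
    "Mn2+": (2.03, 1.92, -0.63),
    "Mn3+": (1.94, 1.79, -0.59),
}
for name, (d1, d2y, d2z) in table.items():
    d2 = math.hypot(d2y, d2z)          # basal Mn-O (or Zn-O) bond length
    d_avg = (d1 + 3.0 * d2) / 4.0      # one apical + three basal bonds
    print(f"{name}: d2 = {d2:.2f} A, <d> = {d_avg:.2f} A")
```

The reconstructed values agree with the tabulated $d_2$ and $<d>$ to the quoted 0.01~\AA\ precision.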
Two possible paths between $Q_2$ and $Q_3$ are displayed in Fig.~\ref{fig9}b. In both cases we found that the $t_{2\uparrow}$ level of Mn$^{2+}$ is much more sensitive to the changes of $d_{2y}$ than of $d_{1z}$ or $d_{2z}$.
For both paths, the Mn$^{2+}$ instability begins at almost
the same $d_{2y}$, which is denoted by $d_{2y}^B$ in Fig.~\ref{fig9}b. Therefore, the barrier configuration $Q_B$ is taken as a point which is achieved from $Q_3$ by changing only the $d_{2y}$ coordinate.
With this assumption we find $Q_B$ as the configuration at which the $t_{2\uparrow}$ level of Mn$^{2+}$ is degenerate with CBM. This allows us to find the corresponding energy barrier, $E_B=E($Mn$^{3+},Q_3)-E($Mn$^{3+},Q_B) = 60$~meV, which clearly represents the upper limit.
\subsection{\label{sec3f} Dependence on $U$(Mn)}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.3cm]{fig10}
\caption{\label{fig10}
Dependence of the $t_{2\uparrow}$ energy level on $U$(Mn)
for
Mn$^{2+}$ and (Mn$^{3+},e_{CB}$) in the configurations $Q_2$ and $Q_3$.
}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.3cm]{fig11}
\caption{\label{fig11}
The $Q$ dependence of (a) the total energy and (b) single particle levels of Mn$^{2+}$ and (Mn$^{3+}, e_{CB}$) for $U$(Mn) = 4~eV.
}
\end{center}
\end{figure}
The analysis presented in the previous Section was conducted assuming $U$(Mn)=0. As mentioned in Sec.~\ref{sec3a}, the value of $U$(Mn) is treated here as a free parameter, which can be adjusted to fit the experimental data. We have performed calculations for a few values of $U$(Mn), and the results are presented in Fig.~\ref{fig10}.
As follows from Fig.~\ref{fig10}, the energies of the gap levels of both Mn$^{2+}$ and Mn$^{3+}$ decrease with increasing $U$.
In particular, assuming $U$(Mn)=4~eV brings $t_{2\uparrow}$ about 0.9~eV above the VBM, and puts the $e_{2\uparrow}$ level below the VBM.
We also note that for $U$(Mn)=4~eV, in the equilibrium configuration $Q_2$ Mn-O bond lengths are 2.06~\AA, while they are reduced to about 1.98~\AA\ for $Q_3$, which shows that the impact of $U$ on bond lengths is moderate.
Comparing our results with previous theoretical investigations of Mn in ZnO
we note that the LDA calculations including SIC corrections~\cite{Toyoda} were performed for high Mn content, for which a wide Mn-induced band in the band gap was found in qualitative agreement with our results.
LDA supplemented with the $+U$ term imposed on the $d$(Mn) orbitals was also used~\cite{Raebiger, Chanier, Gluba}.
For $U$(Mn)=3~eV, there is a reasonable agreement with Ref.~\onlinecite{Raebiger}, which uses $U=3.9$~eV and $J=1$~eV (this corresponds to the effective $U=3.9-1.0=2.9$~eV), and with time-dependent DFT.~\cite{Badaeva}
The other applied $U$(Mn) values were 6~eV~\cite{Chanier} obtained from the fit to the experimental magnetisation data, and 3.2~eV~\cite{Gluba} estimated according to Ref.~\onlinecite{Janotti}.
In these works, the Mn$^{2+}$ $t_{2\uparrow}$ level is situated at about 0.7-1.0~eV above the VBM.
The levels of Mn$^{3+}$ were not investigated.
Our results obtained with $U$(Mn)=3-4~eV are reasonably close to those quoted above.
To obtain the optimal value of $U$(Mn) by fitting to our experimental results we note that
the $U$-induced downward shifts of the Mn levels imply that the excitation energy (step A~$\to$~B in Fig.~\ref{fig7}a) increases from 2.0 to 2.84~eV when $U$ changes from 0 to 4~eV.
Moreover, the barrier $E_B$ depends on the $U$(Mn) term: it decreases with increasing $U$ and vanishes for $U$(Mn) higher than about 1.7~eV. The decrease of $E_B$ is related to the $U$-induced lowering of the Mn levels. In particular, for $U=0$ the $t_{2\uparrow}$ level is degenerate with the conduction band for the configuration $Q_3$, while for $U>1.7$~eV it is below the CBM, and therefore a direct transition of the photoelectron from the conduction band to $t_{2\uparrow}$ is possible. This feature is illustrated in Fig.~\ref{fig11} for $U$(Mn) = 4~eV.
Therefore, the best overall value of $U$(Mn) is about 1.5~eV, giving a barrier of about 1~meV, and the excitation energy of about 2.4~eV.
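The fitting logic can be illustrated with a crude interpolation between the two quoted excitation energies. The linear form is an assumption made here purely for illustration; the actual computed value at intermediate $U$ need not lie on this line (the quoted 2.4~eV at $U$(Mn)$\approx 1.5$~eV sits slightly above it, indicating some nonlinearity):

```python
# Crude interpolation of the U(Mn)-dependence of the A -> B excitation
# energy, using only the two quoted points: 2.00 eV at U = 0 and
# 2.84 eV at U = 4 eV. The linear form is an assumption, not a result.
def excitation_energy(U):              # U in eV, returns eV
    return 2.00 + (2.84 - 2.00) * U / 4.0

for U in (0.0, 1.5, 1.7, 4.0):
    print(f"U = {U:.1f} eV -> E_exc ~ {excitation_energy(U):.2f} eV")
```

The linear estimate at $U=1.5$~eV is about 2.3~eV, close to the computed 2.4~eV.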
\section{\label{sec4}Summary}
Photo-EPR experiments performed on ZnO:Mn single crystals have confirmed that the Mn$^{3+}$/Mn$^{2+}$ level is located about 2.1~eV below the conduction band minimum of ZnO, as illumination of the crystal with photon energies higher than 2.1~eV leads to a partial, temperature-dependent depopulation of the Mn$^{2+}$ state at low temperatures, accompanied by photoconductivity. The unusually small thermal deactivation energy (of the order of 1~meV), together with the untypically fast kinetics of Mn$^{2+}$ photoquenching, points to a process different from charge transfer from Mn$^{2+}$ to other defect centers. We interpret the observed metastable change of Mn$^{2+}$ occupancy under illumination as due to a small energy barrier for electron recapture from the conduction band by Mn$^{3+}$.
The GGA+$U$ approach was employed to study both the Mn$^{2+}$--Mn$^{3+}$ optical transitions and the stability of the (Mn$^{3+}$, $e_{CB}$) photoexcited state.
The excited state is found to be metastable, because in the relaxed configuration a direct recombination of the photoelectron is not possible, and recapture requires overcoming an energy barrier.
The energy barrier $E_B$ decreases with increasing $U$(Mn), and vanishes for $U>1.7$~eV.
Comparing theory with experiment we find that $U$(Mn) of about 1.5~eV leads to a photoionization energy of 2.4~eV and a barrier $E_B$ of the order of 1~meV, in reasonable agreement with the experimental data.
Moreover, one should note that similar results hold for Mn and Fe ions in GaN,~\cite{VZB, ZB} for which the experimental intra-center transition energies are reproduced with very small $U$ terms.
The metastability of (Mn$^{3+}$, $e_{CB}$) is related to the shortening of the Mn-O bonds upon ionization of Mn$^{2+}$.
This change of atomic configuration is coupled to electronic degrees of freedom,
and it raises the energy of the $d$(Mn) donor level.
More importantly, the Coulomb repulsion between the $d$(Mn) electrons is strong,
and it raises the Mn level energy by $\sim 1$~eV when Mn changes its charge state to Mn$^{2+}$.
In more detail, while the donor level of Mn$^{3+}$ is situated below the bottom of the conduction band, after capturing the photoelectron the occupied donor level of Mn$^{2+}$ would lie above the conduction band minimum, which is an unstable electronic configuration. This factor blocks the recombination of the photoelectron and drives the metastability of Mn$^{3+}$.
While the role of local lattice relaxations was recognized and extensively discussed for metastable centers in semiconductors,~\cite{EL2, EL2_prl, Chadi, Dobaczewski, BBdx, Wetzel, Thio, Lany_DX, Jones, Schmidt} the role of the strong Coulomb coupling between $d$ electrons represents a novel aspect of the physics of defect metastability.
\section*{Acknowledgments}
The authors acknowledge the support from the projects No. 2012/05/B/ST3/03095 and 2011/01/D/ST7/02657, which are financed by the Polish National Science Centre (NCN). Calculations were performed on the ICM supercomputers of the University of Warsaw (Grant Nos. G46-13 and G16-11).
\section{Introduction}
Functional flow equations permit to interpolate continuously from the microscopic or classical action to the macroscopic or quantum effective action. For Yang Mills theories and quantum gravity local gauge symmetries play a central role. A functional renormalization approach to such theories should keep carefully track of gauge symmetries and resulting restrictions on the general form of the effective action.
The goal is to realize a gauge-invariant effective action $\bar{\Gamma}(\bar g)$ for a single metric $g_{\mu\nu}$ in gravity, or a single gauge potential $A_\mu = g_\mu$ for electromagnetism, once all fluctuations are taken into account. (Quantum) field equations are then obtained directly as $\partial \bar\Gamma/\partial \bar g = K$, with $K$ an appropriate conserved source. These field equations are the basis for the ``classical'' field theories of gravity and electromagnetism, which are well tested by many precision observations. A similar gauge-invariant effective action will be formulated for Yang-Mills theories. Physical correlations or Green's functions are obtained by inverting the second functional derivative of $\bar\Gamma$ in the space of physical fluctuations.
At the present stage, the formulation of functional flow equations for gauge theories has to deal with the problem that regularization in continuum field theories typically breaks the gauge symmetry, necessitating gauge fixing. Furthermore, quadratic infrared cutoff terms are usually not compatible with the gauge symmetry. Exact flow equations for the effective average action of gauge theories have been formulated in the background field formalism \cite{RW1,RWA,RWB,RWC}. This formalism has been extended to quantum gravity \cite{MR1}. These flow equations involve, however, two independent fields. The first is the expectation value of the microscopic or fluctuating field $g^\prime$, over which the functional integral is performed,
\begin{equation}
g
= \langle g^\prime\rangle,
\end{equation}
while the second ``background field'' $\bar{g}$ is used to formulate covariant derivatives for the gauge fixing and infrared cutoff. The effective action is only invariant under simultaneous transformations of $g$ and $\bar g$.
Alternatively, one may omit the background field, which amounts to setting $\bar g=0$ in the background field formalism. The effective action is no longer gauge invariant. Rather sophisticated approximation schemes \cite{CLPR,CKPR} are needed in order to cope with the many terms contributing already in low orders of the gauge field. It has been proposed to maintain gauge symmetry by the use of rather complex gauge invariant regularizations \cite{TM2,MO,MOR,FM} involving additional fields. Our present approach is more modest. Technically, it shows analogies to background gauge fixing in a particular ``physical'' gauge. We obtain, however, a gauge invariant effective action depending only on one macroscopic gauge field. This is achieved by employing the macroscopic field for the formulation of the gauge fixing and infrared regulator term. No separate background field is introduced. At the end, we obtain indeed a quantum effective action that is gauge invariant and depends on a single metric or gauge field. This can be used as the basis for general relativity and Maxwell's equations, including corrections to these equations generated by quantum fluctuations.
In the usual ``background field formalism'' $\bar{g}$ is considered as fixed. We propose here to replace the fixed background field by a macroscopic field $\bar{g}(g)$, with a relation to $g$ that is, in principle, computable. The macroscopic field $\bar{g}$ is the argument of the gauge-invariant effective action $\bar\Gamma(\bar{g})$ which only depends now on one field. The metric or gauge field in the field equations is identified with $\bar{g}$. Also the flow equations describe the scale dependence of the effective action at fixed $\bar{g}$. Thus $\bar{g}$ is the relevant field for all macroscopic considerations. (We keep here the notation $\bar{g}$ for comparison with the background field formalism -- the bar may be dropped at later stages.) The choice of the relation between $\bar{g}$ and $g=\langle g^\prime\rangle$ is such that a closed gauge invariant flow equation can be formulated for $\bar\Gamma(\bar{g})$. The precise relation between $ g$ and $\bar{g}$ is of secondary importance.
Approximative solutions (truncations) of previous versions of exact flow equations for gauge theories have been successfully used to understand various phenomena. Superconductivity or the abelian Higgs model has been investigated in various dimensions \cite{RWA,RWC,BFLLW,BLLW2}. Increasingly sophisticated truncations in quantum chromodynamics (QCD) provide for an increasingly complete analytical understanding \cite{G2,LP1,PA1}. Functional renormalization has addressed the running of the gauge coupling in various dimensions \cite{RW1,RWB,EHW,G1,G4}. Applied to thermal equilibrium, with an effective non-perturbative (``confinement'') scale increasing with temperature, it has been advocated that non-perturbative strong interaction effects should be visible in the quark gluon plasma even at high temperature \cite{RW1,BG}. (This qualitative finding has been made quantitative by computations of thermodynamic quantities in lattice gauge theories, or by the experimental observation of strong interaction properties in heavy ion collisions at high effective temperature.) Detailed studies of the flow in the non-perturbative regime have addressed the issues of the heavy quark potential \cite{CWEQ,CWGF,BW}, gluon condensation \cite{RW2,EGP}, the gluon propagator \cite{CWGF,BW,CFM} and confinement \cite{BGP}. The comparison of flow equation results in Landau gauge with lattice simulations \cite{CFM,SIM} and Schwinger-Dyson equations \cite{FP1,FP2,CFM,MPS} have added confidence in the reliability of results for QCD. For the electroweak interactions the crossover character of the high-temperature transition has first been advocated based on the effective three-dimensional running of couplings \cite{RWB}.
In quantum gravity the non-perturbative flow equations for the effective average action have permitted to address the asymptotic safety scenario \cite{Wei} in four dimensions \cite{MR1}. The corresponding ultraviolet (UV) fixed point of the flow \cite{MR1,Sou} has been seen to persist for rather extended truncations \cite{Dou:1997fg,Reuter:2001ag,Litim:2003vp,Codello:2006in,Machado:2007ea,Codello:2008vh,Fischer:2006fz,Benedetti:2009rx,Eichhorn:2010tb,Donkin:2012ud,Christiansen:2012rx,Rechenberger:2012dt,Dietz:2012ic,Codello:2013fpa,Falls:2013bv,Benedetti:2013jk,Christiansen:2014raa,Christiansen:2015rva,Dietz:2015owa,Demmel:2015oqa,Falls:2015qga,Gies:2015tca,Gies:2016con}. Within dilaton quantum gravity a similar UV-fixed point can be related directly to inflationary cosmology \cite{H1,H2}. Despite these many striking successful applications of functional flow equations for gauge theories, further progress is partially hindered by the proliferation of the number of invariants in the absence of a realization of gauge symmetry for a single gauge field. While the conceptual setting and the exactness of the flow equation is not in doubt, the absence of gauge symmetry or the presence of two gauge fields in the background field formalism makes it hard to derive series of truncations that do not rapidly become very complex.
In the background field formalism the flowing action or effective average action $\Gamma(g,\bar{g})$ is gauge invariant if $g = \langle g^\prime\rangle$ and $\bar{g}$ are transformed simultaneously. In contrast, gauge invariance is broken if only $g^\prime$ and $g$ are transformed while $\bar{g}$ is held fixed. A gauge-invariant effective action involving only one field, $\Gamma(\bar{g}) = \Gamma(\bar{g},\bar{g})$, can be formed if $g$ is identified with $\bar{g}$. This object is in the center of many studies in the past. The exact flow equation for $\Gamma(\bar{g})$ involves, however, the exact propagator which is encoded in $\Gamma(g,\bar{g})$ \cite{RW1}. Indeed, the inverse propagator is given by the second functional derivative of $\Gamma(g,\bar{g})$ with respect to $g$, taken at fixed $\bar{g}$. It is not directly related to the second functional derivative of $\Gamma(\bar{g})$. What is needed is an estimate of the shape and influence of
\begin{equation}\label{eqn:I1}
\Delta\Gamma(g,\bar{g})
= \Gamma(g,\bar{g}) - \Gamma(\bar{g},\bar{g}).
\end{equation}
Many practical computations assume that $\Delta\Gamma$ can be sufficiently well described by a simple gauge fixing term. A reliable estimate of the effects from $\Delta\Gamma$ beyond such a simple ansatz is perhaps the most important present source of uncertainty and error in the functional renormalization group approach to gauge theories and quantum gravity.
The functional form of $\Delta\Gamma(g,\bar{g})$ obeys various constraints which guarantee that there are no physical degrees of freedom beyond the ones contained in a single gauge field. An exact ``background field identity'' \cite{RW1} yields a one loop type exact equation for the dependence of the effective action on the background field at fixed $g$, i.e. $\partial\Gamma(g,\bar{g})/\partial\bar{g}|_g$. In the absence of an infrared cutoff ($k = 0$) gauge theories with gauge fixing obey exact Ward or Slavnov-Taylor identities. For an infrared cutoff scale $k \neq 0$ these are violated by the cutoff terms. Exact modified Ward identities \cite{BEC,EL,BAM,AM} now account for this violation of the usual Ward identities by effects of the infrared cutoff. The modified Ward identities can be obtained from the background field identity \cite{FW,FLP,LP3} -- both sets of identities express the same physics, namely the absence of unphysical propagating degrees of freedom. Already in a perturbative setting these identities are cumbersome to handle, however, and there is very little experience how to implement these often rather complex identities in a non-perturbative situation.
Alternatively, one may directly compute the flow of $\Delta\Gamma$ by an exact flow equation \cite{MR,MRS,BEM}. Since $\Delta \Gamma$ involves $h = g - \bar{g}$, and $h$ transforms homogeneously as a tensor under simultaneous gauge transformations of $g$ and $\bar{g}$, possible truncations quickly become rather involved. There are many invariants that can be constructed from the tensor $h$. Keeping the dependence on $h$ and $\bar{g}$ unconstrained results in new relevant operators in the flow. The values of their coefficients have to be tuned in order to maintain consistency with the background field identity or the modified Ward identities. It is precisely these identities that are responsible for the absence of additional relevant physical couplings in the two-field formalism. Since the solutions of the identities are only poorly known, it is often difficult to achieve the physical tuning in a given truncation, resulting in potentially large errors.
In this note we propose to avoid the problems with $\Delta\Gamma(g,\bar{g})$ altogether by the definition of a flowing gauge-invariant effective action $\bar\Gamma(\bar{g})$ which admits a closed flow equation. This means that the flow generator $\zeta_k$ (r.h.s. of the flow equation) can be expressed as a functional of $\bar\Gamma(\bar{g})$, typically involving the second functional derivative $\bar\Gamma^{(2)}(\bar{g})$. Since $\bar\Gamma^{(2)}(\bar{g})$ has zero modes due to gauge symmetry, the propagator for the physical modes has to be found by inversion on a suitably projected subspace. Our formalism shows some analogies with ``geometric flows'' \cite{P5,Donkin:2012ud,DSZ}.
In practice, computations with the proposed gauge-invariant flow equation are rather similar to computations in the background field formalism for a specific physical gauge. The flow equation retains its one-loop form. Due to the dependence of the cutoff function $R_k$ on $\bar{g}$ via covariant derivatives, the flow of derivatives of $\bar\Gamma$ involves additional diagrams $\sim \partial R_k/\partial \bar{g}$. For $k \neq 0$ the inverse propagator obtained from the second functional derivative of $\bar\Gamma[\bar{g}]$ does not equal the connected two-point function for the microscopic fluctuations, however.
The paper is organized as follows: In sect. \ref{Flow of gauge-invariant effective action} we describe the proposed gauge invariant flow equation and establish its gauge invariance. Sect. \ref{Flow equation from functional integral} derives the flow equation from a functional integral. In contrast to the usual formulation the partition function depends on the macroscopic gauge field $\bar g$, rendering the definition of the effective action an integro-differential functional equation. This issue and the consequences for the flow equation are discussed in sect. \ref{Macroscopic field}. In sect. \ref{Projection on physical fluctuations} we describe the projection on the physical fluctuations and the notion of a physical gauge fixing that acts solely on the gauge fluctuations. The optimal choice of the macroscopic gauge field is discussed in sects. \ref{Choice of macroscopic field} and \ref{Optimal macroscopic field}. Only this optimal choice permits a simple closed form of the flow equation. The short sect. \ref{Quantum field equation} addresses the quantum field equations that are the basis for classical field theory. We discuss our results in sect. \ref{Discussion}.
\section{Flow of gauge-invariant effective action}
\label{Flow of gauge-invariant effective action}
We start our discussion by proposing a flow equation for a gauge invariant functional $\bar\Gamma_k(\bar g)$ that depends only on the macroscopic gauge field $\bar g$. This equation for the evolution with the infrared scale $k$ is closed, such that specifying the ``initial condition'' at some ultraviolet value of $k$ permits, in principle, to extract the gauge invariant effective action as the solution of the flow equation at $k=0$: $\bar \Gamma(\bar g)=\bar\Gamma_{k=0}(\bar g)$. The first derivative of $\bar\Gamma$ defines the field equation and the second the inverse propagator. If the usual relation between the two-point correlation of fluctuations and the propagator holds, $\bar\Gamma(\bar g)$ is sufficient to compute the quantities of interest for gravity, electromagnetism or Yang-Mills theories. In a second step we will relate this flow equation to a functional integral and discuss whether it is exact or should be considered as an approximation.
The flow equation involves the exact propagator for the physical fluctuations. It is therefore important to separate the physical fluctuations and the gauge fluctuations by appropriate projections. Only on the projected subspace of physical fluctuations is the second functional derivative of $\bar\Gamma[\bar g]$ invertible and related to the propagator of the physical fluctuations. The gauge invariant effective action $\bar\Gamma[\bar g]$ contains no gauge fixing term. In the space of all fluctuations the second functional derivative of $\bar\Gamma[\bar g]$ has zero modes and is not invertible. The projections avoid the problems with the zero modes.
The proposed flow equation for the gauge-invariant effective average action $\bar\Gamma_k(\bar{g})$,
\begin{equation}\label{eqn:A}
k\partial_k\bar\Gamma_k
= \zeta_k
= \pi_k + \delta_k - \epsilon_k,
\end{equation}
involves a contribution $\pi_k$ from the physical fluctuations as well as ``measure contributions'' $\delta_k$ and $\epsilon_k$. Only $\pi_k$ depends on $\bar\Gamma$, according to
\begin{equation}\label{eqn:B}
\pi_k
= \frac{1}{2} \tr (k\partial_k\bar{R}_PG_P),
\end{equation}
with $\bar{R}_P$ the infrared regulator for the physical fluctuations, which depends on the infrared cutoff scale $k$. The trace $\tr$ contains a momentum integration such that eq. \eqref{eqn:B} takes a one loop form, as well as a suitable trace over Lorentz- and internal indices of the gauge fields or the metric. All indices, including position or momentum labels, are collectively denoted by $i$.
The propagator matrix $G_P$ for the physical fluctuations obeys the projection properties $PG_P = G_PP^\mathrm{T} = G_P$, with projector $P = P^2$. The same holds for $\bar{R}_P = P^\mathrm{T}\bar{R}_P = \bar{R}_PP$. The projector is defined such that it projects on the physical fluctuations. It annihilates the infinitesimal gauge variations of the variable $\bar{g}$ according to
\begin{equation}\label{eqn:C}
\delta_\xi\bar{g}
= (1 - P)\delta_\xi\bar{g}~,~P\delta_\xi\bar g=0.
\end{equation}
(Here $\bar{g}$ is considered as a vector with index $i$, e.g. $\bar{g}_i = A^z_\mu(x)$ or similar. Correspondingly, $P,\bar G_P$ and $\bar{R}_P$ are matrices.)
The projector is a given functional of the macroscopic field $\bar{g}$, determined uniquely by the action of gauge transformations on $\bar{g}$. For the example of non-abelian gauge theories the projector reads \cite{RW1}
\begin{equation}\label{eqn:N}
P_\mu{^\nu}
= \delta^\nu_\mu - \bar P_\mu{^\nu},
\qquad
\bar P_\mu{^\nu}
= D_\mu D^{-2} D^\nu,
\end{equation}
with $D_\mu$ the covariant derivative in the adjoint representation involving the macroscopic field $\bar{g} = \bar A_\mu$, and $D^2 = D_\rho D^\rho$. For quantum gravity the projector again involves covariant derivatives and therefore depends on $\bar g$. It is discussed in ref. \cite{CWQC}.
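For orientation, the projector \eqref{eqn:N} can be checked explicitly in the zero-background (abelian) limit, where covariant derivatives reduce to ordinary momenta. This is a numerical sketch of that limiting case only, not of the full non-abelian construction:

```python
import numpy as np

# Zero-background limit of the projector P = 1 - D D^{-2} D: in momentum
# space, P_mu^nu = delta_mu^nu - p_mu p_nu / p^2 (transverse projector).
rng = np.random.default_rng(0)
p = rng.normal(size=4)                      # a generic Euclidean momentum
P = np.eye(4) - np.outer(p, p) / (p @ p)

assert np.allclose(P @ P, P)                # idempotent: P^2 = P
assert np.allclose(P @ p, 0.0)              # annihilates pure-gauge mode ~ p_mu xi
print("trace P =", round(float(np.trace(P)), 10))   # d - 1 = 3 physical modes
```

The trace counts the $d-1$ transverse polarizations, illustrating the remark below that often only $\tr P$ is needed in practice.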
We define the projected second derivative of $\bar\Gamma$ by
\begin{equation}\label{eqn:D}
\bar\Gamma^{(2)}_P
= P^\mathrm{T}\bar\Gamma^{(2)}P,
\qquad
\bar\Gamma^{(2)ij}
= \frac{\partial^2\bar\Gamma}{\partial\bar{g}_i\partial\bar{g}_j}.
\end{equation}
The quantity $\bar\Gamma^{(2)}_P + \bar{R}_P$ is invertible on the space of physical fluctuations. We define the propagator $G_P$ as the inverse of $\bar\Gamma^{(2)}_P + \bar{R}_P$ in the projected space of physical fluctuations,
\begin{equation}\label{eqn:E}
\left(\bar\Gamma^{(2)}_P + \bar{R}_P\right)G_P
= P^\mathrm{T}.
\end{equation}
Thus $G_P$ is computable in terms of $\bar\Gamma$, and the flow equation \eqref{eqn:A}, \eqref{eqn:B} is closed. Without the projection on physical fluctuations the matrix $\bar\Gamma^{(2)}$ is not invertible due to the presence of gauge modes and the gauge invariant construction of the propagator \eqref{eqn:E} would not be possible.
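The projected inversion can be made concrete with a finite-dimensional toy model. The $3\times 3$ matrices, the ``gauge'' direction $n$ and the regulator strength below are all hypothetical illustrative choices:

```python
import numpy as np

# Toy version of the projected inversion (Gamma2_P + R_P) G_P = P^T:
# the Hessian has a zero mode along the "gauge" direction n, mimicking
# the degeneracy of a gauge-invariant action, so it can only be inverted
# on the projected ("physical") subspace.
n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # gauge direction (toy choice)
P = np.eye(3) - np.outer(n, n)                 # projector on physical modes

A = np.diag([2.0, 5.0, 7.0])
Gamma2 = P @ A @ P         # satisfies Gamma2 @ n = 0: not invertible as such
R_P = 0.3 * P              # regulator acting only on physical fluctuations

G_P = np.linalg.pinv(Gamma2 + R_P)   # inverse restricted to the subspace
assert np.allclose((Gamma2 + R_P) @ G_P, P)    # here P^T = P
assert np.allclose(P @ G_P, G_P)               # G_P lives in the subspace
print("projected inversion ok")
```

The Moore-Penrose pseudoinverse realizes the restriction to the physical subspace automatically, since the regulated Hessian is positive there and vanishes along $n$.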
The measure contributions are fixed functions of suitable differential operators. In a gauge-fixed version they account for the contribution from the gauge fluctuations and the Faddeev-Popov determinant for a ``physical gauge fixing''. For Yang-Mills theories one has
\begin{equation}\label{XAB}
\delta_k-\epsilon_k=-\frac12 \text{tr}\Big\{k\partial_kR_{gf}({\cal D}_S)\big({\cal D}_S+R_{gf}({\cal D}_S)\big)^{-1}\Big\},
\end{equation}
with ${\cal D}_S=-D^\mu D_\mu$ and $R_{gf}$ a suitable infrared cutoff function. The covariant derivative $D_\mu$ is formed with the macroscopic gauge field $\bar g$, such that $\delta_k-\epsilon_k$ is gauge invariant. This part of the flow can be computed independently of the precise form of $\bar\Gamma_k$ and its possible truncations. The main difference of the proposed flow equation \eqref{eqn:A}, as compared to the exact flow equation for scalars or fermions, is the projection on physical fluctuations and the related presence of a measure contribution.
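As an illustration of how such a measure trace is evaluated, consider a flat background in $d=4$ with the Litim-type cutoff $R_{gf}(x)=(k^2-x)\theta(k^2-x)$; this particular cutoff is our own illustrative choice, since $R_{gf}$ is left general in \eqref{XAB}. Then $k\partial_k R_{gf}/({\cal D}_S+R_{gf}) = 2\,\theta(k^2-x)$, and the trace per unit volume and per adjoint generator reduces to a momentum-ball integral:

```python
import math

# Flat-background evaluation of delta_k - eps_k with a Litim cutoff
# (illustrative choice): the integrand k d_k R / (x + R) equals
# 2*theta(k^2 - x), so the trace counts modes with p^2 < k^2:
#   delta_k - eps_k = - \int d^4p/(2 pi)^4 theta(k^2 - p^2) = -k^4/(32 pi^2)
# per unit volume and per adjoint generator.
k = 1.3
S3 = 2.0 * math.pi ** 2                        # area of the unit 3-sphere
N = 100_000
dp = k / N
radial = sum(((i + 0.5) * dp) ** 3 for i in range(N)) * dp  # int_0^k p^3 dp
num = -S3 / (2.0 * math.pi) ** 4 * radial      # midpoint-rule momentum integral
ana = -k ** 4 / (32.0 * math.pi ** 2)
print(f"numeric {num:.6e} vs analytic {ana:.6e}")
assert math.isclose(num, ana, rel_tol=1e-8)
```

The negative sign reflects the subtraction of gauge-mode and ghost-like contributions from the flow.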
A gauge-invariant effective action obeys
\begin{equation}\label{eqn:F}
\frac{\partial\bar\Gamma}{\partial\bar{g}}
= \frac{\partial\bar\Gamma}{\partial\bar{g}}P,
\end{equation}
such that
\begin{equation}\label{eqn:G}
\delta_\xi\bar\Gamma
= \frac{\partial\bar\Gamma}{\partial\bar{g}}P\delta_\xi\bar{g}
= 0.
\end{equation}
We want to show that $\pi_k$ is gauge invariant, i.e. $\partial \pi_k/\partial\bar g=(\partial\pi_k/\partial\bar g)P$. Gauge invariance is then preserved by the flow. If one starts with a gauge invariant functional $\bar\Gamma_\Lambda$ at some scale $k=\Lambda$, the effective average action $\bar\Gamma_k$ remains gauge invariant for all $k<\Lambda$.
For this purpose we first note that $\bar\Gamma^{(2)}$ in eq. \eqref{eqn:D} transforms homogeneously as a symmetric tensor (e.g. rank four for gravity, rank two and adjoint in internal space for Yang-Mills theories). The projectors involve covariant derivatives and transform as tensors as well. This implies that $\bar\Gamma^{(2)}_P$ in eq. \eqref{eqn:D} transforms as a tensor. We choose $\bar R_P$ to have the same tensor transformation as $\bar\Gamma^{(2)}_P$. This is straightforward if one uses covariant derivatives depending on $\bar g$ for its construction. From eq. \eqref{eqn:E} one infers that the projected propagator $G_P$ transforms as a tensor as well, consistent with a correlation function of field fluctuations. Finally, the derivative $k\partial_k \bar R_P$ also transforms as a tensor, and the r.h.s. of eq. \eqref{eqn:B} is therefore gauge invariant.
More in detail, one can convince oneself that the gauge variation of the projector does not contribute. Formally, one may write
\begin{equation}\label{eqn:J}
\bar{R}_P
= P^\mathrm{T}\bar{R}P,
\qquad
G_P
= PGP^\mathrm{T},
\end{equation}
and the gauge variation of $\pi_k$ could receive contributions from
\begin{equation}\label{eqn:K}
\delta_\xi P
= \Delta_P.
\end{equation}
From the projector properties $P = P^2 = P^3$ one derives for $\Delta_P$ the relation
\begin{equation}\label{eqn:L}
\Delta_P
= \bar P\Delta_P P + P\Delta_P \bar P,
\qquad
\bar P
= 1 - P.
\end{equation}
As a consequence of the structure of \eqref{eqn:B} the factor $\bar P$ in $\Delta_P$ always gets multiplied by $P$, and with $\bar PP = 0$ we conclude that the gauge variation $\Delta_P$ does not contribute.
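The identity \eqref{eqn:L} holds for any smooth family of projectors and can be verified numerically; the rank-3 family below is a hypothetical example constructed only for this check:

```python
import numpy as np

# Numerical check of Delta_P = Pbar Delta_P P + P Delta_P Pbar for a
# smooth one-parameter family of projectors P(t) = 1 - u u^T / |u|^2.
rng = np.random.default_rng(1)
u0, v = rng.normal(size=4), rng.normal(size=4)

def proj(t):
    u = u0 + t * v
    return np.eye(4) - np.outer(u, u) / (u @ u)   # rank-3 projector

h = 1e-6
dP = (proj(h) - proj(-h)) / (2.0 * h)             # central-difference Delta_P
P = proj(0.0)
Pbar = np.eye(4) - P
assert np.allclose(dP, Pbar @ dP @ P + P @ dP @ Pbar, atol=1e-6)
print("identity verified")
```

The derivation is elementary: $P^2=P$ gives $\Delta_P = \Delta_P P + P\Delta_P$, multiplying by $P$ from both sides yields $P\Delta_P P = 0$, and inserting $1 = P + \bar P$ reproduces \eqref{eqn:L}.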
For Yang-Mills theories the relevant differential operators are explicitly known. Both $\delta_k$ and $\epsilon_k$ are traces of functions of $D^2$. Similarly, the IR cutoff for the physical fluctuations $\bar{R}_P$ is a function of the covariant operator
\begin{equation}\label{eqn:O}
(\mathcal{D})_\mu{^\nu}
= -D^2\delta^\nu_\mu + D_\mu D^\nu + 2 i {F}{_\mu^\nu},
\end{equation}
where ${F}{_\mu^\nu}$ is the contraction of the field strength $F_\mu^{z\nu}$ with the generator $T_z$ in the adjoint representation. For backgrounds obeying $D_\nu \, F^{\mu\nu} = 0$ one has $P \mathcal{D} = \mathcal{D} P = \mathcal{D}$. For gravity the explicit form of projected differential operators is not known for arbitrary background fields. For a large class of interesting backgrounds the classification of physical fluctuations remains nevertheless rather simple \cite{CWQC}.
More generally, explicit knowledge or use of the non-local projectors is often not needed for practical computations. Typically, $\bar\Gamma^{(2)}$ automatically satisfies the projection property in eq. \eqref{eqn:D}, and similarly for $\bar{R}$. Solutions for $G_P$ obeying eq. \eqref{eqn:E} can be found without explicit knowledge of $P$. For example, $\bar\Gamma_P^{(2)}$ and $G_P$ are often both proportional to $P = P^\mathrm{T}$, such that only $\tr P$ is needed for eq. \eqref{eqn:B}. The trace is typically known, given simply by the number of physical degrees of freedom. Furthermore, the projection onto $G_P$ can also be realized by adding ``by hand'' a ``physical gauge fixing term'' with infinite coefficient $1/\alpha$. Inversion of the second functional derivative of the effective action in the presence of this physical gauge fixing projects onto $G_P$, with all other components of the inverse of $\bar\Gamma^{(2)}$ vanishing $\sim \alpha$.
If the solution of the proposed gauge-invariant flow equation belongs to the same universality class as usual gauge theories, explicit knowledge of the microscopic formulation of the gauge-invariant effective average action $\bar\Gamma$ is actually not needed. We could use the flow equation \eqref{eqn:A} for an ``ERGE-regularization'' of a non-abelian gauge theory or quantum gravity. A derivation of the flow equation \eqref{eqn:A} from a functional integral is, nevertheless, relevant for several issues. For a functional integral the conditions for the description of a unitary quantum field theory (e.g. Osterwalder-Schrader positivity) are well established. Furthermore, the functional integral formulation of $\bar\Gamma_k(\bar g)$ makes the connection of the proposed flow equation to other formulations of gauge theories more apparent. This will also shed light on the choice of the measure contributions $\delta_k-\epsilon_k$. For a given microscopic functional integral formulation we can also address the question of whether eq. \eqref{eqn:A} is exact or whether it is some type of approximation. If exact, the functional integral can be viewed as a formal solution of the differential flow equation. The remainder of this note will discuss the functional integral representation of $\bar\Gamma_k(\bar g)$.
\section{Flow equation from functional \newline integral}
\label{Flow equation from functional integral}
We will next discuss the emergence of the flow equation from a microscopic functional integral formulation \cite{CWFE}. We investigate here a continuum formulation with gauge fixing and do not address the relation of this continuum formulation to possible discrete (lattice) gauge invariant formulations. Starting from a particular physical background gauge fixing we derive the flow equation \eqref{eqn:A} in two steps. For the first step we keep an arbitrary field $\bar{g}$ independent of $g = \langle g^\prime\rangle$. This closely follows ref. \cite{RW1}. In the second step we choose a suitable relation between $\bar{g}$ and $g$ which relates the macroscopic field $\bar{g}$ to a nonlinear function of the expectation value of the microscopic field $g'$.
Our starting point is the usual functional integral for the partition function in presence of a background field $\bar{g}$,
\begin{alignedeqn}\label{eqn:B1}
&\begin{aligned}Z(L,\bar{g})
= \int \mathcal{D} g^\prime M_k(g^\prime,\bar{g}) &\exp\bigl\{-S(g^\prime) - S_\text{gf}(g^\prime,\bar{g})\\
&- \Delta S_k(g^\prime,\bar{g}) + L^\mathrm{T} g^\prime\bigr\},
\end{aligned}\\
&W(L,\bar{g})
= \ln Z(L,\bar{g}).
\end{alignedeqn}
The microscopic field $g^\prime$ may be considered as a generalized vector $g^\prime_i$, with indices $i$ including space or momentum labels, Lorentz indices $\mu$, indices for the representations of the gauge group $z$, and labels for different species. For the example of a pure Yang-Mills gauge theory $g^\prime$ stands for the gauge fields $A^{\prime z}_\mu(x)$, while for quantum gravity it denotes the microscopic metric $g_{\mu\nu}^\prime(x)$. The corresponding sources are denoted by $L$, with $L^\mathrm{T} g^\prime = L^i g^\prime_i$.
The microscopic action is given by $S(g^\prime)$, while the background field appears in the gauge fixing term $S_\text{gf}(g^\prime,\bar{g})$ and the infrared cutoff term $\Delta S_k$. The factor
\begin{equation}\label{eqn:B1A}
M_k
= M(g^\prime,\bar{g}) \, E_k(\bar{g})
\end{equation}
contains the Faddeev-Popov determinant $M$ and an associated regulator $E_k$ \cite{RW1}, with
\begin{equation}\label{eqn:B1B}
\epsilon_k(\bar{g})
= \tr\big\{k\partial_k \ln E_k(\bar{g})\big\}.
\end{equation}
This defines the measure contribution $\epsilon_k$ in \eqref{eqn:A}. The infrared cutoff term $\Delta S_k(g^\prime,\bar{g})$ vanishes for $k = 0$. In this limit \eqref{eqn:B1} becomes the standard setting for a gauge theory with background gauge fixing. For $k \to \infty$ the infrared cutoff should remove the fluctuation contributions such that $\bar\Gamma_{k \to \infty}(\bar{g}) = S(\bar{g})$. We discuss more details below.
As usual, one may define an effective action by a Legendre transform at fixed $\bar{g}$
\begin{equation}\label{eqn:B2}
\tilde\Gamma(g,\bar{g})
= -W(L,\bar{g}) + L^\mathrm{T} g,
\end{equation}
with $g$ and $L$ related by
\begin{equation}\label{eqn:B3}
g
= \frac{\partial W(L,\bar{g})}{\partial L}
= \langle g^\prime\rangle,
\qquad
L
= \frac{\partial \tilde\Gamma(g,\bar{g})}{\partial g}.
\end{equation}
For suitable choices of $S_\text{gf}$ and $\Delta S_k$ the effective action \eqref{eqn:B2} is invariant under simultaneous gauge transformations of $g$ and $\bar{g}$. This is usually called ``background gauge symmetry'', while ``microscopic gauge transformations'' only transform $g'$, and therefore $g$, leaving $\bar g$ fixed. The effective action $\tilde \Gamma$ is not invariant under microscopic gauge transformations.
As the key idea of this note we employ here a macroscopic field $\bar{g}(g)$ which depends on the expectation value $g$, rather than a fixed value. Since $\bar g(g)$ and $g$ do not transform independently, a distinction between background gauge transformations and microscopic gauge transformations is no longer possible. All fields $g',g$ and $\bar g(g)$ transform under a single gauge transformation. Inserting $\bar{g} (g)$ in $\tilde\Gamma(g,\bar{g})$ yields an effective action that only depends on one field. As independent variable we choose the macroscopic field $\bar g$, with $g$ expressed in terms of $\bar{g}$ by inverting $\bar{g}(g)$. The use of $\bar{g}(g)$ in the gauge fixing and infrared cutoff terms transmutes the defining equation for $W$ into an integro-differential equation. Now $\partial W/\partial L$ appears in the gauge-fixing term, $\Delta S_k$ and $M_k$ through $\bar{g}(g) = \bar{g}(\partial W/\partial L)$. We will see, however, that there is no need to solve this integro-differential equation explicitly. We emphasize that the definition of $\tilde\Gamma(g,\bar{g})$ is the Legendre transform of $W(L,\bar{g})$ at \textit{fixed} $\bar{g}$. One could define a different object as the Legendre transform of $W(L) = W\big (L,\bar{g}(L)\big)$. This is not what we use here.
An exact flow equation for $\tilde \Gamma(g,\bar{g})$ can be derived \cite{RW1} by varying the infrared cutoff term in eq. \eqref{eqn:B1} which should be at most quadratic in the microscopic field $g^\prime$,
\begin{equation}\label{eqn:B4}
\Delta S_k(g^\prime,\bar{g})
= \frac{1}{2}(g^\prime - \bar{g})^\mathrm{T} R_k(\bar{g})(g^\prime - \bar{g}).
\end{equation}
The cutoff function $R_k$ is assumed to vanish for $k \to 0$ and to diverge in the limit $k \to \infty$. We may write $R_k = k^n r_k$ with dimensionless $r_k$ typically depending on ratios of suitable differential operators $\mathcal{D}$ over the appropriate power of $k$, i.e. $\mathcal{D}/k^m$. (For gauge fields and scalars in four dimensions one has $n = m = 2$ and $\mathcal{D}$ will contain second covariant derivatives formed with the background field $\bar{g}$.) We require that $r_k$ vanishes fast for large $|\mathcal{D}/k^m|$, such that $\Delta S_k$ only affects the long distance modes. For small $|\mathcal{D}/k^m|$ we assume that $r_k$ approaches a constant. In the presence of the IR cutoff, $W$ and $\tilde \Gamma$ depend on the scale $k$. For $k \to 0$ the cutoff vanishes and $\tilde \Gamma_{k = 0}$ is the quantum effective action where all fluctuation contributions are included. For $k \to \infty$ the fluctuations are cut off and therefore not yet included in $\tilde\Gamma_k$. For an appropriate choice of $R_k$ the saddle point approximation becomes valid, such that $\tilde\Gamma_{k \to \infty} = S + S_\text{gf}+\Delta S_k$. In a slight modification of the usual treatment the cutoff \eqref{eqn:B4} acts directly on the fluctuations $g^\prime - \bar{g}$.
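As an illustration of these requirements one may keep in mind a cutoff of the Litim type, not used explicitly in the following,
\begin{equation}
R_k(\mathcal{D})
= (k^2 - \mathcal{D})\,\theta(k^2 - \mathcal{D}),
\qquad
r_k
= \Big(1 - \frac{\mathcal{D}}{k^2}\Big)\,\theta\Big(1 - \frac{\mathcal{D}}{k^2}\Big),
\end{equation}
with $n = m = 2$: it vanishes identically for $\mathcal{D} > k^2$, while $r_k$ approaches the constant $1$ for $\mathcal{D} \to 0$.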
The flow of $W(L,\bar{g})$ with $k$ is given by the exact flow equation at fixed $L$ and $\bar{g}$,
\begin{alignedeqn}\label{eqn:B6}
\partial_k W(L,\bar{g}) - \epsilon_k(\bar{g})
&= -\langle \partial_k\Delta S_k\rangle\\
&= -\frac{1}{2} \Str\big\{\partial_k R_k(G^\prime + h \cdot h)\big\},
\end{alignedeqn}
with connected correlation function
\begin{equation}\label{eqn:B7}
G_{ij}^\prime
= \langle g^\prime_ig^\prime_j\rangle_c
= \langle g^\prime_ig^\prime_j\rangle - \langle g^\prime_i\rangle\langle g^\prime_j\rangle,
\end{equation}
and
\begin{equation}\label{eqn:B7a}
(h\cdot h)_{ij}
= h_ih_j,
\qquad
h
= g - \bar{g}
= \langle g^\prime\rangle - \bar{g}.
\end{equation}
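The second line of eq. \eqref{eqn:B6} can be checked explicitly. From the quadratic form \eqref{eqn:B4} one finds
\begin{alignedeqn}
\langle \partial_k\Delta S_k\rangle
&= \frac{1}{2}\big\langle (g^\prime - \bar{g})^\mathrm{T}\partial_k R_k(g^\prime - \bar{g})\big\rangle\\
&= \frac{1}{2}\Str\big\{\partial_k R_k\,\langle (g^\prime - \bar{g})(g^\prime - \bar{g})^\mathrm{T}\rangle\big\},
\end{alignedeqn}
and the definitions \eqref{eqn:B7}, \eqref{eqn:B7a} yield $\langle (g^\prime - \bar{g})(g^\prime - \bar{g})^\mathrm{T}\rangle = G^\prime + h\cdot h$.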
(The minus sign in the supertrace $\Str$ for fermions arises from the permutation of Grassmann variables.) Performing the Legendre transform \eqref{eqn:B2} at fixed $\bar{g}$ results in the flow of the effective action at fixed $g$ and $\bar{g}$
\begin{equation}\label{eqn:B8}
\partial_k\tilde\Gamma (g,\bar{g})
= \frac{1}{2} \Str\big\{\partial_k R (G^\prime + h \cdot h)\big\} - \epsilon_k(\bar{g}).
\end{equation}
By virtue of the relation
\begin{equation}\label{eqn:B8A}
\tilde\Gamma^{(2)} \, G^\prime
= 1,
\qquad
\tilde\Gamma^{(2)ij}
= \frac{\partial^2\tilde\Gamma(g,\bar{g})}{\partial g_i \partial g_j},
\end{equation}
the flow equation for $\tilde\Gamma(g,\bar{g})$ is closed in the two-field formalism.
Defining
\begin{equation}\label{eqn:B8B}
\Gamma_k(g,\bar{g})
= \tilde\Gamma_k(g,\bar{g}) - \frac{1}{2} h^\mathrm{T} R_k(\bar{g})h,
\end{equation}
the second functional derivative obeys
\begin{equation}\label{eqn:B8C}
\tilde\Gamma^{(2)}
= \Gamma^{(2)} + R,
\qquad
G^\prime
= (\Gamma^{(2)} + R)^{-1},
\end{equation}
and
\begin{equation}\label{eqn:B8D}
\partial_k\Gamma
= \frac{1}{2} \Str(\partial_k R G^\prime) - \epsilon_k.
\end{equation}
Eq. \eqref{eqn:B8D} has a one-loop form, with regulator $R_k$ appearing in the propagator $G^\prime$ according to \eqref{eqn:B8C}. For $k \to 0$ or $h \to 0$ the expressions for $\tilde\Gamma$ and $\Gamma$ coincide.
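The one-loop form follows directly by subtracting the cutoff term according to eq. \eqref{eqn:B8B}. Since $h$ is kept fixed, and $h^\mathrm{T}\partial_k R \, h = \Str\{\partial_k R \, h\cdot h\}$, one finds
\begin{alignedeqn}
\partial_k\Gamma
&= \partial_k\tilde\Gamma - \frac{1}{2} h^\mathrm{T}\partial_k R \, h\\
&= \frac{1}{2}\Str\big\{\partial_k R (G^\prime + h\cdot h)\big\} - \epsilon_k - \frac{1}{2}\Str\big\{\partial_k R \, h\cdot h\big\}\\
&= \frac{1}{2}\Str(\partial_k R \, G^\prime) - \epsilon_k.
\end{alignedeqn}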
\section{Macroscopic field}
\label{Macroscopic field}
We will investigate a suitable choice of the macroscopic field $\bar{g}(g)$ and a suitable definition of a gauge-invariant effective action $\bar\Gamma(\bar{g})$ such that the propagator $G_P$ for the physical fluctuations can be expressed in terms of $\bar\Gamma$, typically involving the second functional derivative $\bar\Gamma^{(2)}$. This will produce the closed form \eqref{eqn:A} for the evolution equation of the gauge-invariant effective action. The exact flow equation \eqref{eqn:B8D} has been derived for an arbitrary background field $\bar{g}$ kept fixed. We want to translate this to a macroscopic field that is a function of $g$, i.e. $\bar{g} = \bar{g}_k(g)$. Here we have indicated that the relation between $\bar{g}$ and $g$ may depend on $k$.
Insertion of $\bar{g}(g)$ defines an effective action $\tilde\Gamma(g)$ that only depends on one argument
\begin{equation}\label{eqn:A1}
\tilde\Gamma(g)
= \tilde\Gamma\big (g,\bar{g}(g)\big).
\end{equation}
If the relation between $\bar{g}$ and $g$ is compatible with infinitesimal gauge transformations, i.e.
\begin{equation}\label{eqn:A2}
\bar{g}(g + \delta_\xi g)
= \bar{g} + \delta_\xi\bar{g},
\qquad
\delta_\xi\bar{g}
= \frac{\partial \bar{g}}{\partial g}\delta_\xi g
= \bar P\delta_\xi \bar{g},
\end{equation}
the effective action $\tilde\Gamma(g)$ is gauge invariant. This follows from the gauge invariance of $\tilde\Gamma(g,\bar{g})$ under the simultaneous transformation of $g$ and $\bar{g}$, with
\begin{equation}\label{eqn:A3}
\delta_\xi\tilde\Gamma(g,\bar{g})
= \frac{\partial\tilde\Gamma}{\partial g}\delta_\xi g + \frac{\partial \tilde\Gamma}{\partial\bar{g}}\delta_\xi\bar{g}
= 0
\end{equation}
implying
\begin{equation}\label{eqn:A4}
\delta_\xi\tilde\Gamma(g)
= \frac{\partial \tilde\Gamma}{\partial g}(g,\bar{g})\delta_\xi g + \frac{\partial\tilde\Gamma}{\partial \bar{g}}\frac{\partial\bar{g}}{\partial g}
\delta_\xi g
= 0.
\end{equation}
This argument extends to a gauge-invariant effective action $\bar\Gamma$ which is related to $\tilde\Gamma$ by subtraction of a suitable gauge-invariant piece, similar to \eqref{eqn:B8B}. Replacing the argument $g$ by $\bar g$, i.e. inserting $g(\bar g)$, yields the gauge invariant action $\bar\Gamma(\bar g)$.
A simple choice corresponding to the background field formalism would be the identification
\begin{equation}\label{eqn:A5}
\bar{g}(g)
= g.
\end{equation}
The resulting effective action $\tilde\Gamma(g)$ is gauge invariant. However, the first derivative of $\tilde\Gamma(g)$ no longer produces the source \cite{RW1},
\begin{alignedeqn}\label{eqn:A6}
\frac{\partial\tilde\Gamma}{\partial g}
= \frac{\partial\tilde\Gamma(g,\bar{g})}{\partial g} + \frac{\partial\tilde\Gamma(g,\bar{g})}{\partial \bar{g}}
\frac{\partial\bar{g}}{\partial g}
= L + \kappa,
\end{alignedeqn}
with
\begin{equation}\label{eqn:A7}
\kappa
= \frac{\partial \tilde\Gamma(g,\bar{g})}{\partial \bar{g}} \frac{\partial \bar g}{\partial g}.
\end{equation}
The matrix of second derivatives contains a generalized ``gauge fixing correction''
\begin{equation}\label{eqn:A8}
Q^{ij}
= \frac{\partial^2\tilde\Gamma(g,\bar{g})}{\partial g_i\partial g_j}-
\frac{\partial^2\tilde\Gamma(g)}{\partial g_i\partial g_j}.
\end{equation}
This gauge fixing contribution appears in the exact flow equation \eqref{eqn:B8D} for $\Gamma(g) = \Gamma(g,g)$. Since $Q$ cannot be expressed in terms of $\Gamma(g)$ the flow of $\Gamma(g)$ is no longer given by a closed equation \cite{RW1}. It involves the generating functional $\tilde\Gamma(g,\bar{g})$ with two arguments, which has to be determined by some assumption or approximation. This is one of the main uncertainties in the present use of approximations to the flow equation.
The choice of $\bar{g}(g)$ is not unique, however. We will investigate a suitable choice such that the flow equation for the gauge-invariant effective action $\bar\Gamma(\bar{g})$ becomes closed. If $\bar\Gamma$ is gauge invariant its second derivative has zero eigenvalues. The propagator $G^\prime$ in the flow equation \eqref{eqn:B8D} can therefore not be given by the inverse of $\bar\Gamma^{(2)}$ for $k \to 0$. We will separate the ``physical fluctuations'' and ``gauge fluctuations'' by suitable projections. On the projected space of physical fluctuations $\bar\Gamma^{(2)}$ will become invertible and we will express $G_P$ by the inverse of $(\bar\Gamma^{(2)} + R)$ on this projected subspace, according to eq. \eqref{eqn:E}. The physical fluctuations will contribute the term $\pi_k$ in the flow equation \eqref{eqn:A}, while the gauge fluctuations are responsible for $\delta_k$.
\section{Projection on physical fluctuations}
\label{Projection on physical fluctuations}
The ``macroscopic fluctuations'' $h = g - \bar{g}$ can be split into ``physical fluctuations'' $f$ and ``gauge fluctuations'' $a$,
\begin{equation}\label{eqn:38A}
h
= g - \bar{g}
= f + a.
\end{equation}
The gauge fluctuations $a$ obey
\begin{equation}\label{eqn:G4}
P(\bar{g})a
= 0,
\qquad
a
= \big (1 - P(\bar{g})\big)h.
\end{equation}
On the other hand, $P(\bar{g})$ projects on the ``physical fluctuations'',
\begin{equation}\label{eqn:G5}
f
= P(\bar{g})h.
\end{equation}
For infinitesimal $h$ the gauge variation of $g$ at fixed $\bar{g}$ can be expressed as an inhomogeneous transformation of $h$,
\begin{equation}\label{eqn:38B}
\hat \delta h
= \bar P(\bar{g} + h)\delta_\xi(\bar{g} + h)
= \bar P(\bar{g})\delta_\xi(\bar{g}) + \delta_h h.
\end{equation}
The inhomogeneous part only affects the gauge fluctuations $a$,
\begin{equation}\label{eqn:38C}
\delta_\text{inh} a
= \bar P(\bar{g})\delta_\xi(\bar{g}),
\end{equation}
while $f$ transforms as a tensor according to the homogeneous part $\delta_h f$. Infinitesimal gauge fluctuations $a$ can therefore be viewed as the result of an infinitesimal gauge transformation acting only on $g$.
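For the Yang-Mills example the projectors can be sketched explicitly. Identifying the gauge fluctuations with the longitudinal modes, a schematic form (ignoring subtleties of operator ordering, with the nonlocal inverse formed from background covariant derivatives) is the covariant transverse projector
\begin{equation}
P_\mu{^\nu}
= \delta_\mu^\nu - D_\mu (D^\rho D_\rho)^{-1} D^\nu,
\end{equation}
such that $D^\mu f_\mu = 0$ for $f = Ph$, while $a = (1 - P)h$ is of the pure gauge form $a_\mu = D_\mu\varphi$. This illustrates the nonlocality of $P$ mentioned above.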
Let us write $\Gamma(g,\bar{g})$, as defined in eq. \eqref{eqn:B8B}, in the form
\begin{equation}\label{eqn:G1}
\Gamma(g,\bar{g})
= \hat\Gamma(g,\bar{g}) + \Gamma_\text{gf}(g,\bar{g}).
\end{equation}
We assume that the ``gauge fixing term'' $\Gamma_\text{gf}$ is quadratic in the gauge fluctuations $a$
\begin{equation}\label{eqn:G3}
\Gamma_\text{gf}(g,\bar{g})
= \frac{1}{2\alpha}a^\mathrm{T} \bar Q(\bar{g})a.
\end{equation}
We will take the limit $\alpha \to 0$ and assume that $\hat\Gamma$ remains finite in this limit. It will be important that no terms independent of $a$ or linear in $a$ diverge for $\alpha \to 0$. This selects a particular class of gauge fixing terms that are quadratic in $a$. Due to the divergence for $\alpha \to 0$ the term \eqref{eqn:G3} is the dominant contribution to $\Delta\Gamma(g,\bar{g})$ as defined by eq. \eqref{eqn:I1}.
For the second functional derivative the gauge fixing term contributes a term that diverges for $\alpha \to 0$,
\begin{equation}\label{eqn:G6}
\Gamma^{(2)}_\text{gf}
= \frac{1}{\alpha} \big (1 - P^\mathrm{T}(\bar{g})\big)\bar Q(\bar{g})\big(1 - P(\bar{g})\big).
\end{equation}
The infrared cutoff is taken to contain a part $\bar{R}_k$ for the physical fluctuations as well as a cutoff for the gauge fluctuations $\sim R_{k,\text{gf}}$,
\begin{equation}\label{eqn:G7}
R_k
= \bar{R}_k(\bar{g}) + \frac{1}{\alpha} \big(1 - P^\mathrm{T}(\bar{g})\big)
R_{k,\text{gf}}(\bar{g})\big(1 - P(\bar{g})\big),
\end{equation}
such that the second functional derivative of $\tilde\Gamma$ obeys
\begin{equation}\label{eqn:G8}
\tilde\Gamma^{(2)}
= \hat \Gamma^{(2)} + \bar{R}_k + \frac{1}{\alpha} (1 - P^\mathrm{T})(\bar Q + R_\text{gf})(1 - P).
\end{equation}
We next decompose $\tilde\Gamma^{(2)}$ into four blocks corresponding to the different projections with $P$ or $\bar P = (1 - P)$ from left or right. The propagator $G$ can be decomposed similarly and one finds from eqs. \eqref{eqn:B8A}, \eqref{eqn:G8} for $\alpha \to 0$
\begin{equation}\label{eqn:G9}
G
= G_P + \alpha \, G_l,
\qquad
G_P
= PGP^\mathrm{T}.
\end{equation}
Here $G_P$ is the inverse of $\tilde\Gamma^{(2)}_P$ on the projected subspace
\begin{equation}\label{eqn:G10}
\tilde\Gamma^{(2)}_PG_P
= P^\mathrm{T},
\qquad
\tilde\Gamma^{(2)}_P
= P^\mathrm{T}(\hat \Gamma^{(2)} + \bar{R}) P.
\end{equation}
The piece $\alpha \, G_l$ vanishes for $\alpha \to 0$, and we observe that $\Gamma^{(2)}_\text{gf}$ and $R_\text{gf}$ do not contribute to $\tilde \Gamma^{(2)}_P$.
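The decomposition \eqref{eqn:G9} follows from a schematic block inversion of eq. \eqref{eqn:G8}. With respect to the split into physical and gauge fluctuations (projectors suppressed, finite off-diagonal pieces omitted since they do not contribute for $\alpha \to 0$) one has
\begin{alignedeqn}
\tilde\Gamma^{(2)}
&\simeq
\begin{pmatrix}
\tilde\Gamma^{(2)}_P & 0\\
0 & \alpha^{-1}(\bar Q + R_\text{gf})
\end{pmatrix},\\
G
&\simeq
\begin{pmatrix}
G_P & 0\\
0 & \alpha(\bar Q + R_\text{gf})^{-1}
\end{pmatrix},
\end{alignedeqn}
which identifies the gauge-sector block of $G$ with $\alpha \, G_l$.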
In the limit $\alpha \to 0$ the flow equation \eqref{eqn:B8D} consists of a part involving $G_P$ and a gauge contribution $\delta_k(\bar{g})$,
\begin{equation}\label{eqn:G11}
k\partial_k\Gamma(g,\bar{g})
= \frac{1}{2} \Str(k\partial_k\bar{R}G_P) + \delta_k - \epsilon_k,
\end{equation}
with
\begin{equation}\label{eqn:G12}
\delta_k
= \frac{1}{2} \tr\big\{k\partial_k R_\text{gf}(1 - P)
(\bar Q + R_\text{gf})^{-1}(1 - P^\mathrm{T})\big\}.
\end{equation}
The gauge contribution arises from multiplication of $\alpha^{-1}\partial_k R_\text{gf}$ with the appropriate projection of $\alpha \, G_l$. It depends on $\bar{g}$ via $P,\bar Q$ and $R_\text{gf}$, but it does not involve $\hat \Gamma$. This defines the measure contribution $\delta_k$ in eq. \eqref{eqn:A}. We observe that for $\alpha \to 0$ only $\tilde \Gamma^{(2)}_P$ and the second term $\sim 1/\alpha$ in eq. \eqref{eqn:G8} enter in the flow equation \eqref{eqn:G11}. The parts in $\tilde\Gamma^{(2)} - \tilde\Gamma^{(2)}_P$ that do not diverge for $\alpha \to 0$ are projected out and do not influence the flow. We also observe that the leading part in $\Delta\Gamma$ is given by $\Gamma_\text{gf}$ in eq. \eqref{eqn:G3} for all $k$. All contributions to the flow are finite for $\alpha \to 0$ and therefore cannot change the divergent part $\sim 1/\alpha$. Furthermore, the r.h.s.\ of the flow equation contains no term that diverges for $\alpha \to 0$. As a consequence, one has $\partial_t (1/\alpha) = b_\alpha$ with finite $b_\alpha$, such that $\partial_t \alpha = -b_\alpha \, \alpha^2$ has a fixed point for $\alpha = 0$ \cite{CWEQ,CWGF}. Our gauge-fixing condition is not changed by the flow. All terms induced by the flow in the sector of gauge fluctuations are subleading and give vanishing contributions to $\pi_k$, $\delta_k$ and $\epsilon_k$ for $\alpha \to 0$.
We are finally interested in the flow of $\bar\Gamma(\bar{g})$, which is related to $\Gamma\big (g(\bar{g}),\bar{g}\big)$ by a suitable subtraction. We will choose $\bar{g}(g)$ such that $a(g) = 0$. Then $\bar{g}(g)$ is determined by specifying $f(\bar{g})$. Both $\delta_k$ and $\epsilon_k$ involve only $\bar{g}$. For the contribution $\pi_k$ of the physical fluctuations one needs the evaluation of the first term in eq. \eqref{eqn:G11} by inserting $g = \bar{g} + f(\bar{g})$. Furthermore, the flow at fixed $\bar{g}$ has to take into account that the relation between $g$ and $\bar{g}$ may depend on $k$. This flow equation for $\bar\Gamma(\bar{g})$ will be closed if we can find a suitable choice of $\bar{g}(g)$ such that $\hat \Gamma^{(2)}$ can be expressed in terms of $\bar\Gamma(\bar{g})$ and its functional derivatives. Then $G_P$ can be expressed in terms of $\bar\Gamma$ by solving eq. \eqref{eqn:G10}.
The particular form of the gauge fixing term \eqref{eqn:G3} is crucial for our construction. One may add other terms that do not diverge for $\alpha \to 0$, but there should be no term linear in $a$ that diverges for $\alpha \to 0$. Correspondingly, we employ a particular ``physical gauge fixing'' in the microscopic formulation \eqref{eqn:B1}
\begin{equation}\label{eqn:48A}
S_\text{gf}(g^\prime,\bar{g})
= \frac{1}{2\alpha}a^{\prime\mathrm{T}} \bar Q(\bar{g})a^\prime,
\end{equation}
where
\begin{equation}\label{eqn:48B}
a^\prime
= \big (1 - P(\bar{g})\big)(g^\prime - \bar{g}).
\end{equation}
For $\alpha \to 0$ the saddle point approximation becomes exact in the sector of the gauge fluctuations and one infers the leading gauge fixing term in eqs. \eqref{eqn:G1}, \eqref{eqn:G3}.
For the example of Yang-Mills theories \eqref{eqn:48A} is realized by background gauge fixing in Landau gauge
\begin{equation}\label{eqn:48C}
\Gamma_\text{gf}
= \frac{1}{2\alpha}\int_x G^z G_z^*,
\qquad
G^z
= \big (D^\mu(A^\prime_\mu - \bar A_\mu)\big)^z,
\end{equation}
with covariant derivative $D^\mu$ formed with the macroscopic field $\bar A_\mu$. With
\begin{equation}\label{eqn:48D}
\bar Q
= -D_\mu D^\nu,
\qquad
\mathcal{D}_S
= -D^\rho D_\rho,
\end{equation}
and infrared cutoff for gauge fluctuations $R_\text{gf}(\bar Q)$, one obtains
\begin{alignedeqn}\label{eqn:48E}
\delta_k
&= \frac{1}{2} \tr \Big\{k\partial_k R_\text{gf}(\bar Q)\big (\bar Q + R_\text{gf}(\bar Q)\big)^{-1}\Big\}\\
&= \frac{1}{2}\tr\Big\{k\partial_k R_\text{gf}(\mathcal{D}_S)\big(\mathcal{D}_S + R_\text{gf}(\mathcal{D}_S)\big)^{-1}\Big\}.
\end{alignedeqn}
For the Faddeev-Popov determinant,
\begin{alignedeqn}\label{eqn:48F}
M
&= \det\big(-D^\mu(\bar A)D_\mu(A^\prime)\big)\\
&= \det \big (\mathcal{D}_S + iD^\mu(A^\prime_\mu - \bar A_\mu)\big)
\end{alignedeqn}
we may choose in eq. \eqref{eqn:B1A} the regularization
\begin{equation}\label{eqn:48G}
E_k
= \frac{\det\big(\mathcal{D}_S + R_\text{gf}(\mathcal{D}_S)\big)}{\det(\mathcal{D}_S)}.
\end{equation}
This results in
\begin{equation}\label{eqn:48H}
\epsilon_k
= 2\delta_k.
\end{equation}
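The relation \eqref{eqn:48H} follows by taking the scale derivative of the regulator \eqref{eqn:48G} at fixed $\bar{g}$,
\begin{alignedeqn}
\epsilon_k
&= k\partial_k \tr\big\{\ln\big(\mathcal{D}_S + R_\text{gf}(\mathcal{D}_S)\big) - \ln \mathcal{D}_S\big\}\\
&= \tr\Big\{k\partial_k R_\text{gf}(\mathcal{D}_S)\big(\mathcal{D}_S + R_\text{gf}(\mathcal{D}_S)\big)^{-1}\Big\}
= 2\delta_k,
\end{alignedeqn}
where the last equality compares with the second line of eq. \eqref{eqn:48E}.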
As advocated, the operator $\mathcal{D}_S$ in $E_k$, and therefore $E_k$ and $\epsilon_k$, are fixed expressions in terms of $\bar{g}$. We assume a similar type of regularization for quantum gravity. This contrasts with the alternative of introducing ghosts and regularizing the ghost propagator. In the formulation with ghosts the flow of the effective action for coupled gauge fields and ghosts has to be followed. The induced higher order ghost interactions enhance the complexity of the problem. We find it worthwhile to explore the possibility of regularizing the Faddeev-Popov determinant by a fixed $\bar{g}$-dependent factor $E_k$.
\section{Choice of macroscopic field}
\label{Choice of macroscopic field}
We next use the freedom in the precise choice of the macroscopic field $\bar{g}(g)$ in order to obtain a closed flow equation for a suitably defined gauge-invariant effective action $\bar \Gamma(\bar{g})$. The main idea is to express the propagator for physical fluctuations $G_P$ in terms of the second functional derivative $\bar\Gamma^{(2)}$. This will determine the choice of $\bar{g}(g)$ and the precise definition of $\bar\Gamma(\bar{g})$.
Let us define
\begin{alignedeqn}\label{eqn:G14}
\bar\Gamma(\bar{g})
&= \Gamma\big(g = \bar{g} + f(\bar{g}),\bar{g}\big) - C(\bar{g})\\
&= \hat\Gamma(g = \bar{g} + f(\bar{g}),\bar{g}) - C(\bar{g}),
\end{alignedeqn}
for some suitably chosen $f(\bar{g})$ and $C(\bar{g})$. This amounts to the choice $a(\bar g)=0$ or $\bar{g}(g) = g - f\big(\bar{g}(g)\big)$. The second derivative of $\bar\Gamma(\bar{g})$ becomes
\begin{alignedeqn}\label{eqn:G15}
(\bar\Gamma^{(2)})^{ij}
= \frac{\partial^2\bar\Gamma(\bar{g})}{\partial \bar{g}_i\partial \bar{g}_j}
&= (\hat \Gamma^{(2)})^{ij}(g = \bar{g} + f,\bar{g})\\
&\hphantom{{}=}+ B^{ij} - \frac{\partial^2 C}{\partial\bar{g}_i\partial\bar{g}_j}.
\end{alignedeqn}
The term $B^{ij}$ arises from derivatives of $\Gamma(g,\bar{g})$ with respect to $\bar{g}$, as well as from $\partial g_i/\partial\bar{g}_j = \delta^j_i + \partial f_i/\partial\bar{g}_j$,
\begin{alignedeqn}\label{eqn:G16}
B^{ij}
&= S^{ij} + \frac{\partial\hat \Gamma}{\partial g_m}\frac{\partial^2 f_m}{\partial\bar{g}_i\partial\bar{g}_j}\\
&\hphantom{{}=}+ (\hat\Gamma^{(2)})^{mn}
\left(\delta^i_m\frac{\partial f_n}{\partial\bar{g}_j} + \delta^j_n\frac{\partial f_m}{\partial \bar{g}_i} +
\frac{\partial f_m}{\partial\bar{g}_i}\frac{\partial f_n}{\partial \bar{g}_j}\right).
\end{alignedeqn}
Here we define for the $\bar{g}$-dependence of $\hat \Gamma$ at fixed $g$
\begin{equation}\label{eqn:G17}
\sigma^i(g,\bar{g})
= \frac{\partial \hat \Gamma(g,\bar{g})}{\partial \bar{g}_i},
\end{equation}
and
\begin{equation}\label{eqn:G18}
S^{ij}
= \frac{\partial\sigma^i}{\partial\bar{g}_j} + \frac{\partial\sigma^j}{\partial g_i}
+ \frac{\partial\sigma^i}{\partial g_j}
+ \frac{\partial\sigma^j}{\partial g_m}
\frac{\partial f_m}{\partial \bar{g}_i} + \frac{\partial\sigma^i}{\partial g_m}\frac{\partial f_m}{\partial \bar{g}_j}.
\end{equation}
All quantities are evaluated for $g = \bar{g} + f(\bar{g})$, $a(\bar{g}) = 0$.
There is a certain freedom in the choice of $f(\bar{g})$ and $C(\bar{g})$. The only requirements are that $f(\bar{g})$ transforms as a tensor, that $C(\bar{g})$ is gauge invariant, and that $P^\mathrm{T} BP$ can be expressed as a functional of $\bar\Gamma$. A possible simple choice determines $f(\bar{g})$ by a solution of the differential equation
\begin{equation}\label{eqn:G19}
P_i{^l}\left(B^{ij} - \frac{\partial^2 C}{\partial \bar{g}_i\partial \bar{g}_j}\right)P_j{^k}
= 0.
\end{equation}
This allows us to replace $\hat\Gamma^{(2)}$ by $\bar\Gamma^{(2)}$ in \eqref{eqn:G10} and therefore to close the flow equation. The solution of eq. \eqref{eqn:G19} depends on $C(\bar{g})$. Our aim is a simultaneous choice of $f(\bar{g})$ and $C(\bar{g})$ such that the flow equation remains simple.
The flow equation \eqref{eqn:G11} holds for fixed $g$ and $\bar{g}$. For the flow of $\bar\Gamma$ at fixed $\bar{g}$ we have to take into account that the solution $g(\bar{g})$ according to eq. \eqref{eqn:G19} will depend on $k$. With the definition \eqref{eqn:G14} one finds for the flow of $\bar\Gamma$ at fixed $\bar{g}$
\begin{alignedeqn}\label{eqn:G20}
k\partial_k\bar\Gamma(\bar{g})
&= \frac{1}{2} \Str(k\partial_k\bar{R} G_P) + \delta_k(\bar{g}) - \epsilon_k(\bar{g})\\
&\hphantom{{}=}+ A_k(\bar{g}) - k\partial_k C(\bar{g}),
\end{alignedeqn}
with
\begin{equation}\label{eqn:G21}
A_k(\bar{g})
= \frac{\partial \Gamma(g,\bar{g})}{\partial g}k\partial_k f(\bar{g}).
\end{equation}
Here $\partial\Gamma(g,\bar{g})/\partial g$ has to be evaluated for fixed $\bar{g}$ at $g(\bar{g}) = \bar{g} + f(\bar{g})$, and we employ $\partial_k g_{|\bar{g}} = \partial_k f$. A possible simple choice employs
\begin{equation}\label{eqn:G22}
k\partial_k C
= A.
\end{equation}
Then the two last terms in eq. \eqref{eqn:G20} vanish. This realizes the flow equation \eqref{eqn:A}, \eqref{eqn:B}, \eqref{eqn:E}. The system of eqs. \eqref{eqn:G19}, \eqref{eqn:G22} determines both $f(\bar{g})$ and $C(\bar{g})$. It is rather complex. Fortunately, there is no need to solve this system in practice. It is sufficient to realize that a solution exists. Neither $\sigma$ nor $f$ or $C$ enter explicitly the proposed gauge-invariant flow equation. A choice of $\bar g(g)$ for which eqs. \eqref{eqn:G19} and \eqref{eqn:G22} hold for a suitable $C(\bar g)$ will be called ``optimal macroscopic field''.
\section{Optimal macroscopic field}
\label{Optimal macroscopic field}
We want to argue in favor of the existence of solutions to the system of equations \eqref{eqn:G19} and \eqref{eqn:G22}. For $\sigma^i = 0$ eqs. \eqref{eqn:G19} and \eqref{eqn:G22} have the simple solution
\begin{equation}\label{eqn:S1}
f(\bar{g})
= 0,
\qquad
C(\bar{g})
= 0.
\end{equation}
This shows that non-zero $f$ and $C$ are related to the $\bar{g}$-dependence of $\hat \Gamma(g,\bar{g})$ at fixed $g$, and therefore to the $\bar{g}$-dependence of $W(L,\bar{g})$ at fixed $L$ in \eqref{eqn:B1}. The gauge fixing term does not contribute for $a = 0$. Its contribution to the second derivative \eqref{eqn:G6} would be projected out in eq. \eqref{eqn:G19}, and we have already defined $\hat\Gamma^{(2)}$ without a contribution from the gauge fixing term. The $\bar{g}$-dependence relevant for $\sigma^i$ can therefore only arise from $\Delta S_k(g^\prime,\bar{g})$ and
\begin{equation}\label{eqn:S2}
S_{FP}(g^\prime,\bar{g})
= -\ln M_k(g^\prime,\bar{g}).
\end{equation}
For a better understanding of $\sigma^i$ we need an expression for the $\bar{g}$-dependence of $\hat \Gamma$, which follows from the $\bar{g}$-dependence of $W$. The $\bar{g}$-dependence of $W(L,\bar{g})$ obeys
\begin{equation}\label{eqn:S3}
\frac{\partial}{\partial\bar{g}_i}W(L,\bar{g})
= -\langle \frac{\partial}{\partial \bar{g}_i}(\Delta S_k + S_{FP} + S_\text{gf})\rangle.
\end{equation}
With
\begin{equation}\label{eqn:S3-2}
\frac{\partial\tilde\Gamma}{\partial \bar{g}_i}\bigg|_g
= -\frac{\partial W}{\partial \bar{g}_i}\bigg|_L
\end{equation}
one obtains
\begin{align}\label{eqn:S4}
\sigma^i
&= \langle\frac{\partial }{\partial \bar{g}_i}(\Delta S_k + S_{FP})\rangle - \frac{1}{2} f_m\frac{\partial}{\partial\bar{g}_i}\bar{R}^{mn}f_n
+ \bar{R}^{im}f_m\notag\\
&= \sigma_R^i + \sigma^i_{FP},\\
\sigma_R^i
&= \frac{1}{2}\Str\left\{\frac{\partial\bar{R}}{\partial\bar{g}_i}G_P\right\}.\notag
\end{align}
The regularized Faddeev-Popov determinant typically involves some operator $\tilde{\mathcal{D}}$, $M_k = \det (\tilde{\mathcal{D}})$, such that
\begin{equation}\label{eqn:S5}
\sigma^i_{FP}
= \left\langle\frac{\partial}{\partial\bar{g}_i}S_{FP}\right\rangle
= -\left\langle \tr\left\{\left(\frac{\partial}{\partial\bar{g}_i}\tilde{\mathcal{D}}\right) \tilde{\mathcal{D}}^{-1}\right\}\right\rangle.
\end{equation}
For Yang-Mills theories and $k = 0$ the contribution of $\sigma^i_{FP}$ to the projected $B$ in \eqref{eqn:G19} may vanish for a suitable choice of the gauge fixing, but we have not yet investigated this issue.
The part $\sigma_R^i$ is proportional to $\partial \bar{R}/\partial \bar{g}_i$. It therefore vanishes for $k = 0$ where $\bar{R} = 0$. On the other hand, for large $k$ the cutoff function $\bar{R}$ approaches a $k$-dependent constant. In this limit $\sigma_R^i$ vanishes again. Thus $\sigma_R^i$ only plays a role in the range where typical differential operators are of a similar size as the appropriate power of $k$.
For small $\sigma^i$ we can solve the system of differential equations \eqref{eqn:G19} and \eqref{eqn:G22} iteratively. We split
\begin{equation}\label{eqn:X1}
C
= C_0 + C_1,
\qquad
C_0(\bar{g})
= \frac{\partial\Gamma(g,\bar{g})}{\partial g_i}f_i,
\end{equation}
with $C_1$ obeying
\begin{equation}\label{eqn:X2}
\gamma_k
= \left(k\partial_k\frac{\partial\Gamma(g,\bar{g})}{\partial g_i}\right)f_i + k\partial_k C_1(\bar{g})
= 0.
\end{equation}
For $B - C^{(2)}_0$ we observe that the second derivative of $C_0$ cancels the last term in eq. \eqref{eqn:G16} and the two last terms in eq. \eqref{eqn:G18}. We define
\begin{alignedeqn}\label{eqn:74A}
\Delta B^{lk}_P
&= P_i{^l}
\biggl\{\tilde S^{ij} + \hat \Gamma^{(2)im}\frac{\partial f_m}{\partial \bar{g}_j} + \frac{1}{2}\hat\Gamma^{(2)mn}
\frac{\partial f_m}{\partial \bar{g}_i}\frac{\partial f_n}{\partial \bar{g}_j}\\
&\hphantom{{}=P_i{^l}
\biggl\{}-\frac{1}{2}\frac{\partial^2 C_1}{\partial\bar{g}_i\partial \bar{g}_j} + (i\leftrightarrow j)\biggr\}P_j{^k},
\end{alignedeqn}
with
\begin{equation}\label{eqn:74B}
\tilde S^{ij}
= \frac{1}{2}\frac{\partial\sigma^i}{\partial\bar{g}_j} + \frac{\partial \sigma^i}{\partial g_j} - \frac{1}{2}\frac{\partial^2\sigma^i}{\partial\bar{g}_j\partial g_m}f_m,
\end{equation}
such that the condition \eqref{eqn:G19} reads $\Delta B_P = 0$.
In lowest order we consider $\sigma$ and $f$ as small quantities in which we linearize. One obtains $\Delta B_P = 0$ for
\begin{equation}\label{eqn:74C}
\frac{\partial f_m}{\partial\bar{g}_j}
= -(\hat G_P)_{mk}
\left(\frac{\partial \sigma^k}{\partial g_j} + \frac{1}{2}\frac{\partial \sigma^k}{\partial \bar{g}_j} - \frac{1}{2}
\frac{\partial^2C_1}{\partial\bar{g}_k\partial\bar{g}_j}\right),
\end{equation}
where
\begin{equation}\label{eqn:74D}
\hat G_PP^\mathrm{T} \bar \Gamma^{(2)}P
= P.
\end{equation}
We may start with $C_1 = 0$, compute $f_m$ by solving the linear differential equation \eqref{eqn:74C} with suitable initial conditions, then determine $C_1$ for this solution from $\gamma_k = 0$, and iterate. From the linear solution higher order terms in $f$ and $\sigma$ can again be determined iteratively.
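Schematically, the iteration just described reads (the superscript $(n)$ labeling iteration steps is our notation):

```latex
\begin{equation}
C_1^{(0)} = 0
\;\longrightarrow\;
f^{(n)} \text{ solves eq. \eqref{eqn:74C} with } C_1 = C_1^{(n)}
\;\longrightarrow\;
C_1^{(n+1)} \text{ solves } \gamma_k = 0 \text{ with } f = f^{(n)}.
\end{equation}
```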
There seems to be no obstruction to finding solutions to eq. \eqref{eqn:74C}. Typically, a particular solution will involve an initial condition that we may take as $f(\bar{g}_0) = 0$ for some suitably chosen configuration $\bar{g} = \bar{g}_0$. Linearization in $f$ is then expected to be valid for $\bar{g}$ in the vicinity of $\bar{g}_0$. These arguments do not constitute a proof that a solution of eqs. \eqref{eqn:G19}, \eqref{eqn:G22} exists for arbitrary $k$ and $\bar g$, even though this seems rather likely. A proof may be attempted by starting at $k\to\infty$ with $C=0,f=0$, and computing the flow of these quantities by imposing the conditions \eqref{eqn:G19}, \eqref{eqn:G22}. If this solution breaks down, the flow equation \eqref{eqn:A} is only an approximation, and we address the form of possible corrections in the discussion in sect. \ref{Discussion}. In a certain sense we use the freedom in the choice of $\bar g(g)$ and $C(\bar g)$ in order to bring the form of an exact flow equation as close as possible to the form \eqref{eqn:A}. We stress again that the whole construction is only possible for a particular class of ``physical gauge fixings''.
\section{Quantum field equation}
\label{Quantum field equation}
For a given choice of $f(\bar{g})$ and $C(\bar{g})$ we can introduce a source $\bar J$ by
\begin{equation}\label{eqn:G23}
\frac{\partial\bar\Gamma}{\partial\bar{g}}
= \bar J.
\end{equation}
Gauge invariance of $\bar\Gamma$ implies
\begin{equation}\label{eqn:G24}
P^\mathrm{T} \bar J
= \bar J.
\end{equation}
This amounts to the usual conservation of $\bar J$. There is no need, however, that $\bar J^\mathrm{T}$ equals precisely the projected ``microscopic source'' $J^\mathrm{T} = L^\mathrm{T} P$. The choice of $\bar{g}(g)$ determines the precise relation between $\bar{J}$ and $J$. The exact ``quantum field equation'' \eqref{eqn:G23} is the basis of the use of macroscopic field equations for gravity or electromagnetism, i.e. for general relativity and electrodynamics as ``classical field theories''. The appropriate ``quantum definition'' of the energy-momentum tensor or the electromagnetic current is given by their coupling in the quantum effective action $\bar\Gamma$, as defined by eq. \eqref{eqn:G23}.
\section{Discussion}
\label{Discussion}
We propose a closed flow equation for the effective action of gauge theories that involves only one macroscopic gauge field, instead of separate fields for the expectation value of fluctuations and the background field. Gauge invariance is maintained by this flow. This flow equation will be useful if $\bar\Gamma$ remains simple enough that meaningful truncations can be devised and the ``initial value'' at large $k$ can be controlled. In particular, this concerns the locality properties of $\bar\Gamma$, which have to be established by practical computations for given models with gauge symmetry.
In particular, a gauge invariant effective action only involves the physical (``transverse'') fluctuations around a given solution of the field equations. There is no propagator for the gauge (``longitudinal'') fluctuations. Therefore no separate ``longitudinal gluon mass'' (or similar object in gravity) exists. A non-local mass term for gluons remains possible, however, induced by terms of the type $\sim F^{\mu\nu}f(-D^2)F_{\mu\nu}$, with $D^2$ a suitable differential operator acting on the physical fluctuations. Only a detailed computation can answer if our approach is useful to understand the infrared behavior of Yang-Mills theories.
The simple form of the proposed gauge invariant flow equations hinges on the mere existence of a solution of eqs. \eqref{eqn:G19}, \eqref{eqn:G22}. (The precise form of the solution does not matter.) If not, the gauge invariant flow equation will contain a ``correction term'', $\zeta_k = \pi_k + \delta_k - \epsilon_k - \gamma_k$, with $\gamma_k$ defined by eq. \eqref{eqn:X2}, as well as a correction in the propagator equation \eqref{eqn:E},
\begin{equation}\label{eqn:Z1}
\left(\bar\Gamma^{(2)}_P + \bar{R}_P + \Delta B_P\right) G_P
= P^{(T)},
\end{equation}
with $\Delta B_P$ given by eq. \eqref{eqn:74A}. If solutions for $f(\bar{g})$ and $C(\bar{g})$ with $\gamma_k = 0$, $\Delta B_P = 0$ exist, as suggested by our discussion above, the flow equation \eqref{eqn:A}, \eqref{eqn:B}, \eqref{eqn:E}, \eqref{XAB} is exact. If not, the proposed invariant flow equation can still be used as an approximation, with errors $\sim f$. (The error can be minimized by the choice of optimal $f$ and $C$.)
Depending on the choice of $f(\bar{g})$ and $C(\bar{g})$ several versions of closed gauge-invariant flow equations for $\bar\Gamma(\bar{g})$ can be constructed. It is sufficient that $\Delta B_P$ and $\gamma_k$ can be expressed in terms of $\bar\Gamma(\bar{g})$. Besides gauge invariance, a simple structure and, in particular, a sufficiently local form of $\bar\Gamma$ are needed for devising useful truncations. It is possible that a compromise with nonzero $\Delta B_P$ and $\gamma_k$ is advantageous for locality properties, even if solutions with $\Delta B_P = 0$, $\gamma_k = 0$ exist.
In practice, the non-local projections inherent in our approach are often not needed explicitly. The projections can be implemented by adding ``by hand'' a suitable ``physical gauge fixing''. In view of this, our approach argues in favor of the use of a particular ``physical gauge fixing'' that only acts on gauge fluctuations. For Yang-Mills theories this is realized by Landau gauge, e.g. \eqref{eqn:48C} with $\alpha \to 0$. For precise computations it may be advantageous from the point of view of locality properties of $\bar\Gamma$ to follow explicitly the flow of a ghost sector, computing the measure term $\epsilon_k$ from the contribution of the ghost fluctuations to the flow of the effective action, evaluated at nonzero gauge field $\bar A_\mu$ and vanishing ghost fields. Many computations of this type have been performed in the past in the background field formalism. They have neglected in practice the correction terms \cite{RW1} arising in the two-field formalism. We argue that this neglect can be justified. For $\alpha \to 0$ the gauge fixed flow equation, as employed so far, turns out to be identical to the projected flow equation \eqref{eqn:A}, \eqref{eqn:B}, \eqref{eqn:E}, \eqref{XAB}. Only the precise status of the macroscopic field $\bar A_\mu$ differs from the expectation value of the microscopic field if $f_\mu\neq 0$.
In quantum gravity the ``physical gauge fixing''
\begin{equation}\label{eqn:Z2}
S_\text{gf}
= \frac{1}{2\alpha}\int_x\sqrt{\bar{g}}\left(D^\mu h^\prime_{\mu\nu}\right)^2,
\quad
h^\prime_{\mu\nu}
= g^\prime_{\mu\nu} - \bar{g}_{\mu\nu},
\end{equation}
involves the covariant derivative $D^\mu$ formed with the macroscopic metric $\bar{g}_{\mu\nu}$, and we take $\alpha \to 0$. This gauge fixing is purely quadratic in the gauge fluctuations. It has been advocated as the ``physical gauge fixing'' in ref. \cite{CWQC}, and used in practical computations in ref. \cite{Christiansen:2015rva}. Unfortunately, the algebraic complexity for this gauge is somewhat higher than for more popular gauges used in practical computations so far. These other gauges do not obey our criteria for the decoupling of gauge fluctuations. Corrections to simple truncations in the two-field formalism may therefore be substantial and are difficult to control \cite{MR,MRS,BEM}. We suggest to use the gauge fixing \eqref{eqn:Z2} and to employ the projected flow equation \eqref{eqn:A}, \eqref{eqn:B}, \eqref{eqn:E}. We hope that this helps to put the understanding of asymptotic safety in quantum gravity on a solid basis.
\medskip
\paragraph{Acknowledgment} The author would like to thank H.~Gies, J.~Pawlowski and M.~Reuter for useful comments and discussion. This work is supported by ERC-AdG \href{http://cordis.europa.eu/project/rcn/101262_en.html}{290623}.
\medskip
\paragraph{Note added} Since the first version of this work the proposed flow equation has been used in quantum gravity \cite{CWCC,CWIGR} and for Yang-Mills theories \cite{CWGI}. For the simple truncations employed there, no complications from the non-locality of projectors arose. A truncation for Yang-Mills theories based on an effective action $\sim F_{\mu\nu} \, F^{\mu\nu}$ produces the one-loop $\beta$-function for the running gauge coupling as well as $5/6$ of the two-loop coefficient. In this simple local truncation the flow of the gluon propagator contains no mass term.
\section{Introduction}
The long-standing trend of miniaturizing electronic components
is expected to encounter serious obstacles in the not
too distant future. Despite the impressive successes in designing
electronic components down to the atomic and molecular scales, serious
difficulties are anticipated in assembling them into realistic
functional architectures using current technologies. One such issue is the
expected, excessive cost of both designing and reliably manufacturing chips
on such small scales using conventional methods.
It is possible, in fact, that a major limiting factor in future efforts at
miniaturization will not be related to science or technology issues,
but rather to the high cost of the associated manufacturing processes.
Development of novel approaches for assembling nanochips may be a critical
ingredient for this new technology. One possibility, for example, could
be to rely on chemical synthesis methods (self-assembly, etc.) to
assemble individual components (which may be individual molecules)
into larger functional units \cite{tour03}. Chemical synthesis methods
are capable of organizing large numbers of atoms and molecules into
large, regular, ordered structures. This might be the basis for a
relatively cheap and efficient assembly process. What is not clear,
however, is whether any of the resulting structures would correspond to
something resembling a computer chip. Would we know how to
program it, for example?
Another potential source of difficulty is the degree of randomness inherent in
nano-systems. Traditional computer chips are designed and manufactured
very meticulously, and therefore, are very sensitive to the presence of
defects or disorder. With nanocomputers, the individual components
themselves are expected to be extremely sensitive to disorder, and this
may have a serious impact on their reliability. Randomness in single
electron transistors, for example, may be unavoidable \cite{lik99}.
The effects of integrating such components into a larger circuit are only
beginning to be considered.
If we are to take advantage of novel assembly
methods and potentially exotic architectures, we will need to learn
how to program the resulting devices. Some assembly methods may give
very ordered, predictable structures. Other techniques may be more
extreme and produce a range of essentially random configurations.
In this paper, we consider general purpose methods
to program such devices that are independent
of the internal structure, and therefore also of the assembly process.
Minimally, the devices we consider have input and output leads as well as
a set of additional leads we designate as ``controls''. We assume
that the internal components are sufficiently well-connected to
allow signals to propagate across the device. Beyond that we consider
the internal structure as unknown. By
adjusting voltages on the control leads, we
manipulate the outputs that appear for a given set of inputs.
In particular, we use adaptive methods to find the set of
control voltages that implement a desired function. In this way,
we program the device.
To develop and test these methods,
we introduce a simple, abstract model that we call
a randomly assembled computer (RAC). In this model, we place $N$ two-state
devices (``diodes'') randomly on a chip and connect them together with
random strengths. We designate a subset of the diodes as inputs, outputs,
and controls and attempt to program the RAC by manipulating the controls.
We phrase the programming of such a device as an optimization problem.
We define an error function that measures the difference between the
function currently implemented by the RAC and some desired
input-output function $f$. Since this error function $E_f(\vec{c})$ depends
on the values of the controls, we seek a control vector $\vec{c}$
that minimizes the error. The task is to find a $\vec{c}$
such that $E_f(\vec{c})=0$. In general, there may be
many solutions to this equation, or possibly, none at all, for a
particular $f$ and a particular RAC.
It is important to emphasize that our main interest is in developing
programming algorithms. We are not advocating a particular hardware
architecture or assembly process. A RAC is therefore only a test bed for our
optimization methods. In fact, we expect that in the real world,
there will be a large amount of information available about the internal
structure of the device from the assembly process itself.
For example, in a chemical
assembly process, the device might be imaged. This information could be
exploited to facilitate programming/optimization.
Additionally, different degrees of programming can be imagined. For example,
for very predictable, repeatable assembly processes, adaptive programming
may only be needed once for an entire class of devices. On the other
hand, for devices with more random internal structures,
individual programming may be necessary.
However, even in this more extreme case, devices could be programmed
in parallel, or possibly be connected together in large number where they
could program each other. In this work, however,
we do not take explicit advantage of such information.
Thus the problem we set for ourselves may be
more challenging than what will be faced in the real world. However,
it will provide a good testing ground for our algorithms.
Related work has included ideas using neural nets \cite{nets,bool},
as well as the Nanocell architecture \cite{nanocell}.
The Nanocell approach has some ideas in common with ours;
however, that work presents a particular architecture based
on molecular components,
while we aim to present methods that are independent
of any particular architecture. Furthermore, many of those results
depended on detailed knowledge and control of the full internal states of the
Nanocell. This permitted a ``proof of concept'' to demonstrate
that internal states do indeed exist that correspond to certain
simple functions.
The broader issue of how to {\em find} those states was only hinted at, however.
Here, this issue of ``black box programming'' is our main concern.
In this paper, we consider different optimization methods to program randomly
assembled computers. In particular, we will employ simulated annealing
as well as adaptive, multi-agent methods. We will see that while simulated
annealing is adequate for small systems, larger devices require more
sophisticated approaches. We have found in previous work that adaptive
multi-agent methods perform very well for large scale
optimization problems \cite{wotu04}.
Other optimization methods could be considered
as well including genetic algorithms, cellular automata, etc. These
approaches may have advantages in certain situations, but we found our
methods to be more than adequate for our purposes.
We will be primarily concerned with two general questions.
First, in what generality can RACs be programmed, i.e., what is the range of
functionality that can be implemented on a given RAC?
And second, what methods might be appropriate to perform the programming?
\section{Randomly Assembled Computers}
We consider a RAC as having $P$ input variables
$\vec{I}=(I_1,...,I_P)$, $M$ output variables $\vec{O}=(O_1,...,O_M)$, and $K$
control variables $\vec{c}=(c_1,...,c_K)$. A schematic of a RAC can be seen
in Figure~\ref{fig:chip}. Here input and output variables
are binary-valued, while the controls can take continuous values. Our goal
is to find a $\vec{c}$ that implements a desired mapping
\begin{equation}
f:\vec{I} \rightarrow \vec{O}.
\end{equation}
We specify a ``target'' function $f$ as the ordered set of outputs associated
with all possible input vectors, where $\vec{T}=(T_1,...,T_M)$ are the
target outputs.
A RAC is properly ``programmed'' when $\vec{c}$
causes the RAC outputs $\vec{O}$ to equal the targets $\vec{T}$ over a
``training set'' of $L$ input/output pairs $\{\vec{I}^l,\vec{T}^l\}$
representing $f$. We quantify how well we have programmed a RAC through an
error function defined as
\begin{equation}
E\equiv E_f(\vec{c})=\frac{1}{ML} \sum_{l=1}^{L} || \vec{O}^l
- \vec{T}^l ||
\end{equation} where $\vec{O}^l$ is shorthand for $\vec{O}_{\vec{c}}(\vec{I}^l)$
and $|| \vec{O}^l - \vec{T}^l ||$ is the number of components
in which $\vec{O}^l$ and $\vec{T}^l$ disagree.
Thus, $E$ is the fraction over all output variables and training set elements
of mistakes made by the RAC.
Programming a RAC corresponds to minimizing $E$.
In general, the results we will present will be averages over large
ensembles of RACs.
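For concreteness, the error measure of eq. (2) can be sketched in a few lines of Python; the function name and the list-of-bit-vectors data layout are our illustrative assumptions, not part of the original model code.

```python
def rac_error(outputs, targets):
    """Fraction of wrong output bits over the training set (eq. 2).

    outputs, targets: lists of L output vectors, each a list of M bits.
    """
    L = len(targets)
    M = len(targets[0])
    # count every component where the RAC output disagrees with the target
    mistakes = sum(o != t
                   for out, tgt in zip(outputs, targets)
                   for o, t in zip(out, tgt))
    return mistakes / (M * L)
```

Programming a RAC then amounts to driving this quantity to zero by adjusting the controls.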
To study different programming methods, the following model
system was used \cite{womil03}. The internal structure of a RAC
was represented by randomly
placing $N$ ``diodes''
$d_i(t), i=1,...,N$ (i.e., two-state devices such that $d_i\in [0,1]$)
on a chip, and
connecting them together with random strengths. An arbitrary
subset of diodes was designated as inputs, and another subset (perhaps overlapping) as
outputs. Diodes can change state based on their individual inputs at discrete
time steps $t$ governed by a global clock. Thus, a RAC has the structure of
an iterated function $\vec{d}(t+1)=F[\vec{d}(t)]$.
A computation begins at $t=0$ by setting each
input diode $d_{p'}(0)=I_p$ where $I_p$ is the input to the RAC
and $p'$ is the diode associated with input bit $p$.
All other diodes are initialized to zero. At subsequent
time steps, the diodes update their states according to the rule
\begin{equation}
d_i(t+1)=g\Bigl(\sum_{j=1}^N \omega_{ij} d_j(t) + \sum_{j=1}^K \rho_{ij} c_j(t)\Bigr)
\end{equation}
where the $\omega_{ij},\rho_{ij}$ are fixed random numbers, and
$g(x)=\frac{1}{2}(1+\tanh(\beta x))$ is a smoothed step function with
``gain'' $\beta$ that models the switching behavior of the diodes.
Here, the $\omega_{ij}$ represent the strength of the connection between
two diodes, $i$ and $j$, analogous to a resistance. Note also that for low values of the gain,
the diodes can take on values between zero and one, given by the function $g(x)$.
After a fixed number of time steps $t=T$, the outputs
can be read to give the result of the computation $O_m=d_{m'}(T)$,
where $m'$ is the diode associated with output bit $m$.
For simplicity, we consider only circuits composed of diodes and do
not include other possible components such as memory elements, etc.
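The iterated map of eq. (3), with the initialization and readout described above, can be sketched as follows. All names are illustrative; the thresholded readout via rounding is our assumption (the paper reads $O_m = d_{m'}(T)$ directly, and at high gain the diode values are already close to 0 or 1).

```python
import math

def rac_step(d, c, omega, rho, beta):
    """One synchronous update of all N diodes (eq. 3)."""
    g = lambda x: 0.5 * (1.0 + math.tanh(beta * x))  # smoothed step with gain beta
    N = len(d)
    return [g(sum(omega[i][j] * d[j] for j in range(N)) +
              sum(rho[i][j] * c[j] for j in range(len(c))))
            for i in range(N)]

def rac_run(inputs, c, omega, rho, input_ids, output_ids, beta=5.0, T=5):
    """Run the iterated map d(t+1) = F[d(t)] for T steps and read the outputs."""
    N = len(omega)
    d = [0.0] * N                       # all diodes initialized to zero ...
    for p, i in zip(inputs, input_ids):
        d[i] = float(p)                 # ... except the designated input diodes
    for _ in range(T):
        d = rac_step(d, c, omega, rho, beta)
    return [round(d[m]) for m in output_ids]  # threshold to binary output bits
```

A concrete RAC is then specified by its random $\omega_{ij}$, $\rho_{ij}$ and the choice of input, output, and control indices.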
It should be noted that our model is used to test our programming approach.
However, none of the adaptive programming methods that we present here depend
on the particular details of the model, and are expected to be general enough
to work for any ``black box'' nano-circuit. Moreover, our model captures
several important features of real circuits consisting of an iterated network
of diodes whose states can change depending on their local environment.
It is also important to note that while the $d_i(t)$ evolve according to a
dynamical rule, Equation 3, the $c_i(t)$ are external, and potentially
time-dependent parameters. Our task, in fact, will be to set these parameters.
\section{Multi-Agent Methods}
We want to find the best $\vec{c}$ that minimizes $E_f(\vec{c})$
for a given training set. The best case scenario is to find a $\vec{c}$
that gives $E_f(\vec{c})=0$, i.e., find a $\vec{c}$ so that the RAC gives
the correct outputs for the given inputs over all the examples. We will
see that this is often achievable. Several issues make this a potentially
difficult optimization problem. First, $E$ is a very nonlinear function
with a potentially large number of local minima. Secondly, the exact
functional form of $E_f(\vec{c})$ is not known due to our desire to treat the
RAC as a black box. The only information that we have available is the set
of input/outputs pair values $(\vec{I}^l,\vec{O}^l)$ for a given control
$\vec{c}$. Lastly, there may be a large number of controls that need to be set.
Because of these difficulties, we will employ a set of methods recently
developed in the context of multi-agent systems. These multi-agent methods
have some relation to simulated annealing (SA), but have significant
differences, and have shown dramatically faster convergence in a number
of problems.
There are three main differences between the multi-agent methods we will
consider and more conventional methods such as Simulated Annealing (SA)
\cite{wotu04,wowh00,wotu99c,wotu01a}.
First, the multi-agent approach is distributed. Thus, instead of considering
an $N$-dimensional optimization problem, we deal with $N$ 1-dimensional ones.
Each independent variable $c_i$ is regarded as an agent, and each agent $i$
separately sets its variable $c_i$ in order to try to optimize an associated
objective function $e_i(f, \vec{c})$.
Thus, we have $N$ independent optimization processes, each of which we are
calling an ``agent''.
In general, distributed approaches
such as these are expected to scale better for large problems.
Secondly, each agent solves its particular optimization problem using
Reinforcement Learning (RL) \cite{suba98}. The version of RL we use is
called Boltzmann learning and is related to SA. The principal difference is
that instead of taking ``random'' trial steps, RL relies on previous data to
make ``smart'' trial steps. In this way, RL algorithms converge very rapidly.
Boltzmann learning is also easy to implement. More sophisticated algorithms
may be necessary, however, for more difficult problems.
In our RL algorithm, at each simulation time step $\tau$, each agent $i$
randomly generates a number of candidate values for $c_i(\tau+1)$ from some
$\Delta$ neighborhood of the current $c_i(\tau)$. Next, the agent estimates,
based on data from previous time steps, a value of $e_i$ for each candidate
value. This is done by performing a weighted average over all previous pairs
$(c_i(\tau'),e_i(\tau'))$, for $\tau'<\tau$. The weighting damps the
contribution from ``old'' data (i.e., from $c_i(\tau')$ and $e_i(\tau')$
where $\tau'\ll\tau$). Finally, a Boltzmann probability distribution over
these estimates is sampled to select the resulting trial move. A nonzero
simulation temperature prevents us from getting stuck in local minima.
After the agents have all made their moves, it may turn out that the
global error $E$ has actually increased. This may happen if, for example,
the agents' estimates are inaccurate. To address this, we also
include a Metropolis-style global accept/reject step. If after the
agents have made their moves $E$ has decreased, then we accept the
new $\vec{c}$. Otherwise we reject it with conditional probability
proportional to $e^{-\beta (E-E')}$ \cite{wotu04}.
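A single agent's trial move might be sketched as below. The distance- and age-weighted estimator and all parameter values are our assumptions; the paper specifies only a weighted average over past $(c_i(\tau'), e_i(\tau'))$ pairs that damps old data, followed by Boltzmann sampling over the estimates.

```python
import math
import random

def boltzmann_move(c_i, history, delta=0.1, n_cand=5, temp=0.1, decay=0.9):
    """One Boltzmann-learning trial move for a single agent (illustrative).

    history: past (c_value, e_value) pairs for this agent, oldest first.
    """
    # candidate values from a Delta-neighborhood of the current control
    cands = [c_i + random.uniform(-delta, delta) for _ in range(n_cand)]

    def estimate(c):
        # age-damped, distance-weighted average of past utility observations
        num = den = 0.0
        for age, (c_old, e_old) in enumerate(reversed(history)):
            w = decay ** age / (1.0 + abs(c - c_old))
            num += w * e_old
            den += w
        return num / den if den > 0 else 0.0

    # Boltzmann distribution over the estimated utilities (lower e -> likelier)
    weights = [math.exp(-estimate(c) / temp) for c in cands]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for c, w in zip(cands, weights):
        acc += w
        if r <= acc:
            return c
    return cands[-1]
```

A global Metropolis-style accept/reject on the resulting $\vec{c}$, as described above, would wrap around these per-agent moves.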
The third difference with conventional methods is that the individual agents may be
assigned different objective functions. Since our global objective is to
minimize $E$, we might expect that each agent should use $E$ as its individual
objective function, $e_i=E$. Such a system of agents who all have the same objective
function is called a Team Game (TG), in analogy with the scenario in game
theory where all the players/agents have the same payoff function.
We expect --- and indeed observe --- that for small systems Team Games
outperform SA. This is because, by using learning algorithms, the agents can
make ``smart'' trial moves inferred from their past history, as opposed to the
random ones of conventional SA.
However, as the system size grows, it becomes more difficult for each
individual agent to discern its impact on $E$, and thus it becomes more
difficult for each agent to choose an optimal value if $e_i = E$. To
address this we define, heuristically, a signal-to-noise measure for each
agent $i$ that we call ``learnability'':
\begin{equation}
\lambda_i^{e_i} = \frac{|\partial_i e_i|}{|\vec{\nabla}_{\hat{i}}\, e_i|}
\end{equation}
where
$\partial_i e_i = \partial e_i/\partial c_i$, and $\vec{\nabla}_{\hat{i}}\, e_i$
denotes the gradient of $e_i$, but with the component in the $i^{th}$
direction removed. The {\em learnability} of agent $i$ measures
the {\em sensitivity}
of $e_i$ to changes in $c_i$ relative to changes in the system as a whole.
It is expected that for large $\lambda_i$, agent $i$ can more easily discern
its impact on $e_i$, and therefore, it can make better trial moves. On the
other hand, we expect that beyond a certain system size, the denominator
of $\lambda_i^{e_i}$ will grow to such a point that the agents will be
essentially making random moves. At that point, Team Games and SA will
basically give the same result.
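Since the RAC is treated as a black box, the learnability of eq. (4) can only be estimated numerically. A central-difference sketch for the team-game case $e_i = E$ (the step size and the callable-$E$ signature are our illustrative assumptions):

```python
import math

def learnability(E, c, i, eps=1e-4):
    """Finite-difference estimate of lambda_i (eq. 4) for e_i = E."""
    def pd(j):
        # central difference of E with respect to control j
        cp, cm = list(c), list(c)
        cp[j] += eps
        cm[j] -= eps
        return (E(cp) - E(cm)) / (2 * eps)

    grads = [pd(j) for j in range(len(c))]
    # norm of the gradient with the i-th component removed
    rest = math.sqrt(sum(g * g for j, g in enumerate(grads) if j != i))
    return abs(grads[i]) / rest if rest > 0 else float('inf')
```

For a large system, `rest` grows with the number of agents, which is exactly the signal-to-noise degradation discussed above.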
Even though our goal is for the system as a whole to minimize the error $E$,
there is nothing to prevent us from giving different $e_i$ to the different
agents. To exploit this, each agent is assigned a ``difference utility'' $e_i$
of the form
\begin{equation}
e_i(\vec{c}) = E(\vec{c}) - E(\vec{c})|_{c_i=0}.
\end{equation}
Since $\partial_i e_i = \partial_i E$, critical points of $e_i$ will be
critical points of $E$. Thus, if agent $c_i$ optimizes $e_i$, it will
optimize $E$ as well. The functions $e_i$ that are {\em aligned} with $E$ in this
manner are called {\em factored}.
In addition to being factored, $e_i$ also typically has better learnability
than $E$. This is because the numerators of $\lambda_i^E$ and $\lambda_i^{e_i}$
are identical, but the denominators are very different. In fact, we expect
in general that $|\partial_j e_i| \ll |\partial_j E|$. This should lead to a
substantial reduction in the background noise, and hence an increase in
$\lambda_i$. This behavior is borne out by many simulations. Such $e_i$
that are both ``factored'' and have high ``learnability'' are expected to
perform well for large systems.
Notice that equation (5) is only one choice for a difference utility.
In previous work, this utility was called the Wonderful Life Utility
(WLU). The value for $c_i$ in the subtracted term could have been set
to a different, non-zero, value, and the advantages would still
hold. A similar possibility is to subtract an expectation value taken
over all $c_i$ weighted by an appropriate probability density. \begin{equation}
e_i(\vec{c}) = E(\vec{c}) - < E(\vec{c}) >_{c_i}. \end{equation} Previously,
this utility has been called the Aristocrat Utility (AU). Both WLU,
AU, and other utilities have been studied extensively \cite{wotu01a}.
In this paper, we will concentrate on WLU for simplicity.
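The WLU of eq. (5) is straightforward to evaluate; in this sketch $E$ is an arbitrary callable, whereas in practice it would be computed by running the RAC over the training set.

```python
def wlu(E, c, i):
    """Wonderful Life Utility of eq. (5): agent i's control clamped to zero."""
    c_clamped = list(c)
    c_clamped[i] = 0.0          # the "clamped" counterfactual control vector
    return E(c) - E(c_clamped)
```

Because the subtracted term does not depend on $c_i$, $\partial_i e_i = \partial_i E$, which is the factoredness property discussed above.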
A fully formal derivation of WLU and AU based on bias and variance may
be found in \cite{woinfo,wobien}. This work has many other advantages
beyond that presented here, e.g., explicit connection to bounded
rational game theory and statistical physics. However it is more
involved than is needed for current purposes.
\section{Results}
We performed simulations on a variety of target functions $f$, and on RACs of different
sizes (N,K), where N is the number of diodes and K is the number of controls.
Once a RAC was assembled and a function selected to be programmed, an error
function could be defined. Depending on the size of the RAC, we used different
optimization algorithms to program them. For smaller RACs $(N,K < 20)$, SA
often proved adequate, but for larger RACs $(N,K > 50)$, multi-agent methods
were essential.
To begin, we first considered 2-input, 1-output logic gates: INV, AND, OR, and XOR,
programmed using simulated annealing for the optimization.
For these simple cases, SA performs adequately and, in general, runs
faster (in minutes as opposed to hours on a workstation)
than the multi-agent approaches.
We demonstrate not only that these functions can indeed be
programmed, but also address an important general
question: what is the range of functionality that
a given RAC can implement?
Indeed, we expect that some functions cannot be implemented at all on a
particular RAC. We might hope, however, that a variety of functions can be
programmed on a RAC of sufficient size. That minimal size will depend on
the particular function being considered. Thus, even for logic gates, we might
ask: how big a RAC is needed to have a high programming success rate?
The results for certain logic gates are shown in Figure~\ref{fig:inv}.
Each data point represents the result of programming attempts on 10,000
different RACs of the given size $(N,K)$. The y-axis gives the fraction
that were successfully programmed, i.e., all the outputs matched the targets
with no mistakes ($E=0$). Different curves correspond to different proportions
of controls to diodes. We see that the Inverter (INV) with $K=N$ has a greater
than $90\%$ success rate for $N\geq12$, whereas the XOR required $N\geq18$.
This is expected due to the greater complexity of the XOR. Furthermore,
we see that reducing the number of controls by half has only a relatively
small effect on the performance, with the success rate still better than
$90\%$, but for $N\geq30$. Lastly, we see, even with a fixed number of
controls (K=2), a large fraction ($50\%$ for the INV and $30\%$ for the XOR)
can still be programmed.
Next, we considered larger circuits and larger RACs. We implemented larger
functional units, namely arithmetic circuits. In particular, we examined
2-bit adders with carry and multipliers. These are significantly more
complicated functions than the 2-input logic gates. Our adders and multipliers
performed operations on two 2-bit numbers, giving a total of 4 input bits,
and 3 output bits. With 4 inputs, there are 16 possible input combinations,
and in total, $2^{3 \times 2^4}=2^{48}$
possible binary functions from 4 inputs to 3
outputs. An adder or a multiplier is only one of these possible functions.
Our optimization algorithm was required to find these particular functions
out of the large number of possibilities. For these functions, with 16 possible
inputs and 3 bit output, there are 48 total output bits that must be correctly
set in order for the RAC to have been programmed perfectly. The
truth table in Table 1 gives the required input/output combinations that define
these functions.
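The adder targets can be generated programmatically; the most-significant-bit-first ordering of the input and output bits is our assumption, since Table 1 is not reproduced here.

```python
def adder_truth_table():
    """Target truth table for the 2-bit adder: (a1,a0,b1,b0) -> 3 sum bits."""
    table = {}
    for a in range(4):
        for b in range(4):
            s = a + b                 # max 3 + 3 = 6, fits in 3 bits
            inputs = (a >> 1 & 1, a & 1, b >> 1 & 1, b & 1)
            table[inputs] = (s >> 2 & 1, s >> 1 & 1, s & 1)
    return table
```

This yields all 16 input combinations and the 48 target output bits that the optimizer must match for $E=0$.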
For functions of the complexity of adders and multipliers, we used RACs of
size $(N=100,K=40)$ with $T=5$ iteration cycles. For a dynamically controlled
RAC (i.e. the control voltages change with each clock cycle
$\vec{c}(t)$), this results in 200 control parameters. For such a
large number of parameters, multi-agent methods were essential.
The upper plots in Figures~\ref{fig:add} and~\ref{fig:mult} for the
adder and the multiplier show the
convergence of different methods as a function of simulation time.
In each case, the error function $E_f$ is averaged over an ensemble of 1000 RACs.
We see, for both the adder and the multiplier, that for a RAC of this size the Team Game (TG)
behavior is only marginally better than SA, while the multi-agent WLU does considerably better
due to its superior scaling properties.
In the lower plots,
we deconstructed the convergence graphs and show how well the
individual RACs were programmed.
Each data point gives
the fraction of RACs that made a given number of mistakes after programming
was completed. The ideal situation would be a fraction of 1 (all the RACs)
making 0 mistakes;
this would correspond to all the RACs being perfectly programmed. We see that
WLU-programmed RACs made far fewer mistakes than SA-programmed ones.
In fact, for the adder,
almost $50\%$ of the RACs programmed with WLU made only 0 or 1 mistake out of
48. In no case were there more than 10 mistakes with WLU.
On the other hand, the best-performing SA RAC still made 9 mistakes.
For the multiplier, more than $20\%$ of the WLU RACs made no mistakes, while the
best SA result was 4 mistakes.
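As a rough illustration of programming by error minimization (not the paper's diode model), the sketch below anneals the control bits of a toy parity chip. The chip construction, mask, and cooling schedule are invented stand-ins, rigged so that an XOR target is realizable.

```python
import math, random

rng = random.Random(0)

# Toy black-box "chip": the single output bit is the XOR (sum mod 2) of a
# masked subset of input and control wires -- an invented stand-in for the
# paper's diode network, rigged so both inputs are always in the mask.
N_IN, K = 2, 8
mask = [1] * N_IN + [rng.randrange(2) for _ in range(K)]

def chip(inputs, controls):
    wires = list(inputs) + list(controls)
    return [sum(w & m for w, m in zip(wires, mask)) % 2]

def error(controls, target):
    """E_f: number of wrong output bits over all input combinations."""
    e = 0
    for x in range(2 ** N_IN):
        ins = [(x >> i) & 1 for i in range(N_IN)]
        e += sum(a != b for a, b in zip(chip(ins, controls), target(ins)))
    return e

def anneal(target, steps=2000, t0=2.0):
    c = [rng.randrange(2) for _ in range(K)]
    best, best_e = list(c), error(c, target)
    for s in range(steps):
        t = t0 * (1 - s / steps) + 1e-9        # linear cooling schedule
        cand = list(c)
        cand[rng.randrange(K)] ^= 1            # flip one control bit
        d = error(cand, target) - error(c, target)
        if d <= 0 or rng.random() < math.exp(-d / t):
            c = cand
        if error(c, target) < best_e:
            best, best_e = list(c), error(c, target)
    return best, best_e

xor = lambda ins: [ins[0] ^ ins[1]]
controls, e = anneal(xor)
print(e)   # residual error; 0 means the toy chip was programmed perfectly
```

The chip is treated strictly as a black box: the annealer only ever observes the value of the error function for a candidate control vector.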
For these larger circuits, we performed additional
runs showing how extra
information about, or control over, the circuit can affect
performance. In programming unconventional
circuits, some information may be available about the internal structure.
For example, with a random assembly process,
large batches of chips might be produced very inexpensively due to the
negligible design and placement costs.
It would therefore be very easy to quickly generate and
test a subset of these devices
to see which functions they implement without any programming at all,
and then pick the ones whose initial error was the lowest
for subsequent programming.
Clearly, it would be more efficient
to program a RAC that starts out closer to the desired function.
To model this possibility, we generated 100 RACs, and programmed
the one with the lowest initial error.
Additionally, during the programming process,
we may also have control over some physical parameters of the device,
such as an external field.
To model this scenario, we adjust the gain parameter $\beta$ of our model
during the programming process.
In particular, we found an improvement in performance by annealing the gain.
Sets of runs with these additional features are labeled ``Opt" in
Figures~\ref{fig:add},~\ref{fig:mult}. In all cases, the ``Opt" runs
improved performance.
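The best-of-batch idea can be sketched as follows, with unprogrammed chips abstracted as random truth tables (a hypothetical stand-in; the actual programming step is elided):

```python
import random

rng = random.Random(1)

TARGET = [0, 1, 1, 0]            # XOR truth table over 2 inputs

def random_chip():
    """Unprogrammed chip, abstracted as a random 2-input truth table."""
    return [rng.randrange(2) for _ in range(4)]

def initial_error(chip):
    return sum(a != b for a, b in zip(chip, TARGET))

# Cheaply fabricate a large batch, then program only the best starting point.
batch = [random_chip() for _ in range(100)]
best = min(batch, key=initial_error)
print(initial_error(best))       # lowest initial error in the batch
```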
The random assembly of the RAC and the adaptive nature of the multi-agent
methods makes them well-suited for handling faults, defects, noisy
components, and also allows them to be redesigned/reused. To illustrate this
flexibility, we performed the following simulations. First, to illustrate RAC
reuse/reconfigurability, we initially programmed an ensemble of RACs
with $(N=20,K=10)$ to implement AND functions. From Figure~\ref{fig:inv}, we
expect a $90\%$ success rate across the ensemble.
Then, at simulation time step $\tau=3000$, we
abruptly changed the target of the same RACs to a different function, namely an XOR.
As can be seen in the top plot of Figure~\ref{fig:apps}
which is averaged over 1000 RACs, the RACs are able
to adjust quickly to the new functionality.
Secondly, we considered recovery from a fault. Initially, we programmed the
RACs to implement ANDs. Then at $\tau=3000$, we randomly chose a diode and
fixed its value at $d_i=1$ for the remainder of the simulation. This was to
simulate a ``stuck" fault. We see that initially there is a large increase
in the error due to the fault, but then the adaptive algorithm was able to
program around it. Note that the $\vec{c}$ that implements the AND, before
and after the fault will, in general, be very different.
Finally, we
considered the case of a ``noisy diode". To simulate this, we again programmed
a group of RACs to implement AND functions. At $\tau=3000$, we randomly chose
a diode to become noisy. At each simulation time step, its value was flipped
at random. This is in contrast to the stuck fault, where the diode maintained
a fixed value. We see that even though this diode is continually changing
state, the adaptive algorithms are able to effectively neutralize it,
and again implement an AND function. These results are shown in
Figure~\ref{fig:apps}.
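The stuck-fault experiment can be mimicked on a toy programmable lookup table (an invented stand-in for a RAC): after a diode-like bit is forced to $1$, re-programming the remaining controls recovers the AND.

```python
# Toy programmable chip (an invented stand-in, not the RAC model): four
# control bits form a truth table for a 2-input function, and one extra
# "diode" bit d is XOR-ed into the output.
def chip(table, d, x0, x1):
    return table[2 * x1 + x0] ^ d

AND = [0, 0, 0, 1]

def error(table, d):
    return sum(chip(table, d, x0, x1) != AND[2 * x1 + x0]
               for x0 in (0, 1) for x1 in (0, 1))

table = list(AND)                # programmed with a healthy diode, d = 0
assert error(table, 0) == 0
assert error(table, 1) == 4      # "stuck" fault d = 1: the old controls fail

# Greedy re-programming around the fault restores the function.
for i in range(4):
    if chip(table, 1, i % 2, i // 2) != AND[i]:
        table[i] ^= 1
print(error(table, 1))           # → 0: the AND is recovered despite the fault
```

Note that, as in the text, the controls that implement the AND after the fault differ from those before it.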
\section{Summary}
In this paper, we considered programming nanocomputers with unconventional,
and potentially unknown and/or random, architectures. Novel assembly
processes may represent potentially inexpensive alternatives to the high
costs anticipated in extending conventional design and manufacturing methods
to the nanoscale. The resulting architectures may not respond to
conventional computer languages. Thus, we will need new methods to program them.
To develop such methods, we introduced a simple model called a randomly
assembled computer (RAC), where nano-electronic components are placed
and connected at random on a chip. Input, output, and control lines
connect the chip to the outside world. By manipulating the control
voltages, different functions, $f$, can be implemented. Programming is achieved
by minimizing an error function $E_f(\vec{c})$ that depends on the values of the
controls $\vec{c}$. This is a potentially difficult optimization
problem since in general $E_f$ may be a very nonlinear function in a high
dimensional space. Furthermore, since the RAC is treated as a black box,
the functional form of $E_f$ is unknown. The only information available
is the input/output pairs that result for a given set of controls $\vec{c}$.
It should be emphasized that this model is used only for algorithm development.
In this paper, we are not advocating a particular hardware or physical design.
In fact, these methods are general and should be applicable to circuits with
a wide range of internal structures.
We considered two methods to optimize the error function: conventional
simulated annealing and a more recently developed multi-agent approach.
Multi-agent optimization methods were found to be well-suited for this class
of problems. These are adaptive methods based on learning algorithms.
Unlike more conventional techniques, such as simulated annealing, these methods
are distributed, use reinforcement learning, and have the possibility of
assigning different objective functions to different optimization processes
or agents. This collection of features can result in a substantial increase
in convergence, especially for high dimensional problems.
One of the main questions we sought to address was whether an arbitrary function
can be programmed on a RAC. To help answer this question, we began by considering
programming small RACs as logic gates, such as INV and XOR.
Simulation results suggest that a surprisingly wide-variety of functions can be
implemented on a RAC of {\em sufficient} size.
For logic gates, we found that the required size was on the order of $N\geq18$ diodes
to ensure a greater than $90\%$ success rate.
We found simulated annealing adequate for programming RACs of this size.
For larger RACs and more complicated functions, however, multi-agent
methods were found to be essential for successful programming. We
considered two-bit adders and multipliers programmed on RACs with
$(N=100,K=50)$. The space of functions with 4 inputs and 3 outputs is very large
and picking one particular function out of that large number of
possibilities is a nontrivial problem. Nonetheless, we found a large
fraction of RACs could do just that using multi-agent methods. Simulated
annealing, on the other hand, was shown to be not nearly as effective.
Finally, we considered issues related to fault tolerance, reprogrammability,
and reliability. Electronic components at the nanoscale are expected to be
very sensitive to randomness, and indeed, some degree of randomness may be
unavoidable in certain types of components, such as single electron
transistors. This potential intrinsic randomness could seriously impact
the reliability of the components, and is thus a serious concern. RACs,
on the other hand, are quite robust with respect to such randomness. The
same adaptive methods that dealt with the black box nature of a RAC could adapt
equally well to other potential sources of randomness. This could include
intrinsic randomness of components, random assembly, as well as the unexpected
appearance of faults. Furthermore, once a RAC is programmed, it could be as
easily re-programmed, and reused potentially in a different application.
\bibliographystyle{plain}
\section{Introduction}
The formal methods community has studied many approaches for automatic verification that are diverse and even seemingly disparate.
Two of the main approaches for verifying safety by the automatic inference of inductive invariants are abstract interpretation~\cite{DBLP:conf/popl/CousotC77} and model checking~\cite{DBLP:conf/lop/ClarkeE81,DBLP:conf/programm/QueilleS82}.
State-of-the-art model checking algorithms are based on SAT solving~\cite[e.g.][]{DBLP:conf/cav/GrafS97,DBLP:conf/cav/McMillan03,DBLP:conf/fmcad/GurfinkelI17,DBLP:conf/sigsoft/GurfinkelSM16,DBLP:conf/fmcad/SheeranSS00,DBLP:conf/cav/AlbarghouthiLGC12}, with many of the best tools implementing variants of the famous IC3/PDR algorithm~\cite{ic3,pdr}, which combines several heuristic ideas to achieve good performance in practice.
While PDR is practical and widely deployed, very little is known about it theoretically.
In particular, a previous investigation of PDR using the theory of abstract interpretation~\cite{DBLP:conf/vmcai/RinetzkyS16} had to employ abstractions that are both far from the usual practice of abstract interpreters, and are also too rich in that they can accommodate many algorithms that are unrelated to PDR (see~\Cref{sec:related}).
\begin{changebar}
As a result, there is currently no conceptual framework that explains how and when PDR is able to overapproximate and avoid the enumeration of reachable states, which is a key challenge to every invariant inference algorithm.
\end{changebar}
\begin{changebar}
In this paper, we set out to investigate the principles of how PDR achieves overapproximation.
\end{changebar}
To this end, we continue a line of work that applies learning theory to invariant inference~\cite[e.g.][]{ICELearning,DBLP:journals/jar/NeiderMSGP20,DBLP:journals/pacmpl/EzudheenND0M18,DBLP:conf/icse/JhaGST10,DBLP:conf/popl/0001NMR16,DBLP:conf/sas/0001GHAN13,DBLP:conf/cav/SharmaNA12,DBLP:conf/esop/0001GHALN13,DBLP:journals/fmsd/SharmaA16,DBLP:conf/pldi/KoenigPIA20,DBLP:journals/acta/JhaS17,DBLP:journals/pacmpl/FeldmanISS20,DBLP:journals/pacmpl/FeldmanSSW21} with a surprising result: the monotone theory from exact learning~\cite{DBLP:journals/iandc/Bshouty95} enables viewing PDR as classical abstract interpretation (in a new domain). This draws a deep connection between these techniques,
\begin{changebar}
and identifies a form of abstraction performed by PDR that distinguishes it both from explicit enumeration and from other algorithmic approaches.
\end{changebar}
We focus on the fundamental setting of propositional systems,
\begin{changebar}
which also applies to infinite-state systems using predicate abstraction~\cite{DBLP:conf/popl/FlanaganQ02,DBLP:conf/cav/GrafS97}.
\end{changebar}
PDR constructs a sequence of formulas, called \emph{frames}, by blocking counterexamples (states that can reach bad states).
Given a counterexample, the algorithm conjoins to the frame a generalization clause that blocks the counterexample and also additional states, but not states reachable in one step from the previous frame (we explain PDR in detail in~\Cref{sec:overview-pdr-alg}).
Theoretically analyzing the behavior of the algorithm is complicated by its highly nondeterministic nature---it depends on the choices of counterexamples and generalization clauses (many of them affected by idiosyncrasies of the underlying SAT solver), and different choices may lead PDR down radically different paths.
To ameliorate this,
we present an algorithm, called \emph{$\Lambda$-PDR}, which resolves this nondeterminism by using all possible answers to these queries, blocking all counterexamples with all admissible generalizations. The resulting frames are tighter than those of PDR, as they include all lemmas that PDR could learn in any execution. This provides a theoretical handhold to study PDR.
$\Lambda$-PDR uncovers a key aspect of the generalization performed by standard PDR. The frames are usually viewed as a sequence of overapproximations that prove bounded safety with an increasing bound. While correct, this does not capture the full essence of generalization in PDR. In particular, naive exact forward reachability also computes such a sequence, albeit a trivial one.
We show that in $\Lambda$-PDR---and hence, in PDR---there is an inherent \emph{abstraction} that includes additional states in each frame beyond exact forward reachability. Applied successively, we show that this is a form of abstract interpretation,
which can lead to an exponential gap between the number of frames in $\Lambda$-PDR and the number of steps required for exact forward reachability.
We prove several results:
\begin{enumerate}
\item We show that the relation between successive frames in $\Lambda$-PDR is characterized by an operation from Bshouty's monotone theory~\cite{DBLP:journals/iandc/Bshouty95}. The idea is that taking all the generalizations that block a state $b$ amounts to computing the least $b$-monotone overapproximation of the post-image of the previous frame (\Cref{sec:monotone}).
\item We introduce a new abstract domain, of the formulas for which backward reachable states form a monotone basis.
We show that $\Lambda$-PDR can be viewed as computing Kleene iterations with the best abstract transformer in this domain.
Standard PDR also operates in the same domain, and its frames overapproximate the Kleene iterations that $\Lambda$-PDR performs.
This is the first time that the theory of state abstraction is able to explain property-directed generalization (\Cref{sec:ai}).
\item We show exponential gaps between the number of frames in $\Lambda$-PDR and the number of iterations of exact forward reachability, as well as the unrolling depth in a dual interpolation-based algorithm (\Cref{sec:itp-friends}).
\item We prove an upper bound on the number of frames in $\Lambda$-PDR in terms of the DNF size of certain ``monotonizations'' of the transition relation. Although not always tight, this result sheds light on the benefit of the abstraction in certain cases. The proof brings together results from the monotone theory, abstract interpretation, and diameter bounds for transitions systems. This is done by constructing a (hyper)transition system where the states reachable in $i$ steps correspond to the $i$th Kleene iteration, and bounding the system's diameter (\Cref{sec:abstract-diameter-all}--\Cref{sec:hyper-all}).
\item We show that in some cases the abstraction of $\Lambda$-PDR is overly precise, whereas the looser frames of standard PDR converge in fewer and smaller frames (\Cref{sec:vs-pdr}).
\end{enumerate}
\section{Overview}
\label{sec:overview}
\subsection{PDR, the Frames}
\label{sec:overview-frame-props}
\newcounter{overview-frame-props}
How does property-directed reachability find inductive invariants?
Given a set of initial states $\Init$, a transition relation $\tr$ describing one step of the system, and a set of bad states $\Bad$, the goal is to find an \emph{inductive invariant}: a formula $I$ such that $\Init \implies I$, $I \implies \neg\Bad$, and $\tr(I) \implies I$, where the post-image $\tr(X)$ is the set of states that $\tr$ reaches in one step from $X$.\footnote{
We use a formula and the set of states that satisfy it interchangeably. Unless otherwise stated, the formula to represent a given set of states is chosen arbitrarily.
}
Such an $I$ proves \emph{safety}, that no execution of the system can reach a bad state.
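These three conditions can be checked mechanically on an explicit finite-state system; below is an illustrative Python sketch over a made-up toy system (the mod-8 counter and all names are invented for illustration):

```python
def is_inductive_invariant(inv, init, bad, post):
    """Check Init => I, I => ~Bad, and tr(I) => I over explicit state sets."""
    init_ok = all(s in inv for s in init)
    safe_ok = inv.isdisjoint(bad)
    closed_ok = all(t in inv for s in inv for t in post(s))
    return init_ok and safe_ok and closed_ok

# Made-up toy system: a mod-8 counter that never increments past 3.
init, bad = {0}, {7}
post = lambda s: {s + 1} if s < 3 else {s}

print(is_inductive_invariant({0, 1, 2, 3}, init, bad, post))  # True
print(is_inductive_invariant({0, 1, 2}, init, bad, post))     # False: not closed
```

The second candidate fails only the closure condition: it excludes the bad state and contains the initial state, but $\tr(I) \implies I$ does not hold since $2$ steps to $3$.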
The central data structure that PDR uses to find inductive invariants is the
\emph{frames}.
These are a sequence of formulas $\Frame_0,\Frame_1,\ldots,\Frame_N$ that satisfies the following properties, for all $0 \leq i \leq N-1$:
\iflong\begin{enumerate}\else\begin{inparaenum}\fi
\item \label{it:frames-start} \label{it:frames-init} $\Frame_0 = \Init$;
\item \label{it:frames-monotone} $\Frame_i \implies \Frame_{i+1}$;
\item \label{it:frames-onestep-overapprox} $\tr(\Frame_i) \implies \Frame_{i+1}$;
\item \label{it:frames-end} \label{it:frames-safety} $\Frame_i \implies \neg \Bad$.
\setcounter{overview-frame-props}{\value{enumi}}
\iflong\end{enumerate}\else\end{inparaenum}\fi
\noindent
In words, the frames start with the set of initial states, grow monotonically, always include the states reachable in one step of $\tr$ from the previous frame, and are strong enough to prove safety (except possibly the last frame which is ``under construction'').
These properties ensure that each $\Frame_i$ overapproximates the set of states reachable in at most $i$ steps, and yet excludes the bad states; this constitutes a proof of \emph{bounded safety}. However, the ultimate goal of PDR is \emph{un}bounded safety, and it is not clear why frames would avoid ``overfitting'' to bounded executions, and rather converge to a true inductive invariant. In informal discussions, this is sometimes phrased as the criticism that the algorithm merely ``happens to find'' a bounded safety proof that generalizes to the unbounded case.
Indeed, properties~\ref{it:frames-start}--\ref{it:frames-end} of frames do not reflect any bias away from bounded proofs, as they are also satisfied by the exact forward reachability algorithm, $\Frame_0 = \Init, \Frame_{i+1}=\postimage{\tr}{\Frame_i}$, where $\postimage{\tr}{X} = X \cup \tr(X)$ denotes the reflexive closure of the post-image. Exact forward reachability might require many frames to converge to an unbounded proof if some states are reachable only by very long paths.
\begin{figure*}[t]
\centering
\begin{minipage}{\textwidth}
\begin{minipage}{0.43\textwidth}
\begin{lstlisting}[numbersep=5pt, escapeinside={(*}{*)}, xleftmargin=3.0ex]
init $\vec{x} = (x_n,x_{n-1},\ldots,x_0) = 0\ldots0$,
$\vec{y} = (y_n,y_{n-1},\ldots,y_0) = 0\ldots0$,
$z=0$
repeat: increase_x() | increase_y()
assert $\neg\left(\vec{x} = 10\ldots0 \land \vec{y} = 11\ldots1 \land z=1\right)$
\end{lstlisting}
\end{minipage}
\begin{minipage}{0.26\textwidth}
\begin{lstlisting}[numbersep=5pt, escapeinside={(*}{*)}, xleftmargin=3.0ex]
increase_x():
if $z = 0$:
$\vec{x} = \vec{x}+1 \pmod{2^{n+1}}$
if $\vec{x}=10\ldots0$:
$\vec{x} = \vec{x}+1 \pmod{2^{n+1}}$
\end{lstlisting}
\end{minipage}
\begin{minipage}{0.25\textwidth}
\begin{lstlisting}[numbersep=5pt, escapeinside={(*}{*)}, xleftmargin=3.0ex]
increase_y():
if $z = 1$:
$\vec{y} = \vec{y}+1 \pmod{2^{n+1}}$
$$
$$
\end{lstlisting}
\end{minipage}
\vspace{-0.4cm}
\captionof{figure}{
\iflong
\footnotesize
\fi
Skip-counter: running example of propositional transition system over the variables $\vec{x}=x_n,\ldots,x_0$. Either \texttt{increase\_x()} or \texttt{increase\_y()} is executed in each step according to whether $z=0$ or $z=1$, incrementing $\vec{x}$ or $\vec{y}$ resp., but skipping over the value $\vec{x}=10\ldots0$.}
\label{fig:skip-counter}
\end{minipage}
\iflong
\else
\vspace{-0.5cm}
\fi
\end{figure*} Consider, for example, the simple family of propositional systems in~\Cref{fig:skip-counter}, parametrized by $n$. A bit $z$ chooses between incrementing a counter $\vec{x}$ or a counter $\vec{y}$, represented in binary by $x_n,x_{n-1},\ldots,x_0$ and $y_n,y_{n-1},\ldots,y_0$ respectively. The safety property to prove is that it is impossible for $\vec{x}$ to have the value $10\ldots 0$ while $\vec{y}$ is $11\ldots 1$. This property is not inductive as is (for instance, the state $\vec{x}=10\ldots 0, \vec{y}=11\ldots 10, z=1$ satisfies the safety property but reaches a bad state in one step);
one inductive invariant for this system is
\begin{equation}
\label{eq:skip-counter-invariant}
\vec{x} \neq 10\ldots0 \land \vec{y}=00\ldots0 \land z=0,
\end{equation}
which implies safety and is closed under a step of the system.
In these systems, exact forward reachability requires $\Omega\left(2^n\right)$ iterations before it converges to an inductive invariant. This is because some states, such as $\vec{x}=10\ldots 01, \vec{y}=00 \ldots 00,z=0$, require an exponential number of steps to reach---the system has an exponential \emph{reachability diameter}---so exact forward reachability discovers all reachable states and converges only after that many iterations.
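To make the discussion concrete, here is a minimal Python sketch of the skip-counter for a small $n$ (an illustration, not from the paper); the nondeterministic choice between the two branches is modeled by taking the union of their successors. It shows that exact forward reachability needs $2^{n+1}-2$ growing frames here, and that every reachable state satisfies the invariant above.

```python
# Minimal model of the skip-counter (Fig. 1) for a small n; the
# nondeterministic choice of branch is modeled by uniting both successors.
N = 3
MOD = 1 << (N + 1)
SKIP = 1 << N                       # the skipped value x = 10...0

def post(state):
    x, y, z = state
    succs = set()
    # increase_x(): only acts when z = 0, skipping over x = SKIP
    if z == 0:
        nx = (x + 1) % MOD
        if nx == SKIP:
            nx = (nx + 1) % MOD
        succs.add((nx, y, z))
    else:
        succs.add(state)
    # increase_y(): only acts when z = 1
    if z == 1:
        succs.add((x, (y + 1) % MOD, z))
    else:
        succs.add(state)
    return succs

frame, frames = {(0, 0, 0)}, 0      # F_0 = Init
while True:                         # F_{i+1} = F_i | tr(F_i)
    nxt = frame | {t for s in frame for t in post(s)}
    if nxt == frame:
        break
    frame, frames = nxt, frames + 1
print(frames)                       # 2^(N+1) - 2 = 14 growing frames for N = 3

# Every reachable state satisfies the invariant displayed above.
assert all(x != SKIP and y == 0 and z == 0 for (x, y, z) in frame)
```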
Clearly, invariant inference algorithms must perform some sort of \emph{overapproximation}, or \emph{abstraction}, to overcome this slow convergence.
This raises two important questions:
\begin{enumerate}
\item \label{it:q-abstraction} What characterizes the abstraction that PDR performs?
\item \label{it:q-convergence} How does this abstraction achieve faster convergence than exact forward reachability?
\end{enumerate}
The commonly-stated properties of frames do not provide an answer; to address these questions we must dive more deeply into how PDR works.
\subsection{PDR, the Algorithm}
\label{sec:overview-pdr-alg}
\vspace{-0.45cm}
\begin{algorithm}[H]
\caption{PDR}
\label{alg:pdr}
\vspace{-0.5cm}
\begin{multicols}{2}
\begin{algorithmic}[1]
\begin{footnotesize}
\Procedure{PDR}{$\Init$, $\tr$, $\Bad$}
\State $\Framepdr_0 \gets \Init$
\State $N \gets 0$
\While{$\forall 1 \leq i \leq N. \ \Framepdr_{i} \not \implies \Framepdr_{i-1}$} $\label{ln:pdr-find-inductive-frame}$
\State $\Framepdr_{N+1} \gets \true$ $\label{ln:pdr-init-frame}$
\While{$\Framepdr_{N+1} \notimplies \neg\Bad$}
\For{$\sigma_b \in \Framepdr_{N+1} \land \Bad$} $\label{ln:pdr-sample-bad}$
\State \Call{block}{$\sigma_b$, $N+1$} $\label{ln:pdr-block-bad}$
\EndFor
\EndWhile
\State $N \gets N+1$
\EndWhile
\State \Return $\Framepdr_i$ such that $\Framepdr_{i} \implies \Framepdr_{i-1}$
\EndProcedure
\Procedure{block}{$\sigma_b$, $i$}
\If{$i=0$}
\State \textbf{unsafe}
\EndIf
\While{$\postimage{\tr}{\Framepdr_{i-1}} \notimplies \neg \sigma_b$} $\label{ln:pdr-back-refine}$
\State take $\sigma$ s.t.\ $\sigma \models \Framepdr_i, (\sigma,\sigma_b) \models \reflextr{\tr}$ $\label{ln:pdr-back-sample-prestate}$
\State \Call{block}{$\sigma$, $i-1$}
\EndWhile
\State
take $c$ minimal s.t.\ $c \subseteq \neg \sigma_b$ and $\tr({\Framepdr_{i-1}}) \implies c$ \\
\qquad \qquad \qquad \qquad \qquad \qquad \quad and $\Init \implies c$\strut $\label{ln:pdr-minimal-clause}$
\For{$1 \leq j \leq i$}
\State $\Framepdr_j \gets \Framepdr_j \land c$ $\label{ln:pdr-strengthen}$
\EndFor
\EndProcedure
\end{footnotesize}
\end{algorithmic}
\end{multicols}
\vspace{-0.35cm}
\end{algorithm}
\vspace{-0.4cm}
\Cref{alg:pdr} presents a simple version of the basic PDR algorithm. The sequence of frames it manipulates are denoted $\Framepdr_0,\ldots,\Framepdr_N$, to distinguish between PDR's frames and frames of other algorithms in the paper. Initially, $\Framepdr_0$ is initialized to the set of initial states (thereby satisfying property~\ref{it:frames-init}).
The outer loop terminates once one of the frames is inductive (\cref{ln:pdr-find-inductive-frame}), which is when $\Framepdr_{i} \implies \Framepdr_{i-1}$ (because then, from the other properties of frames, $\tr(\Framepdr_{i-1}) \implies \Framepdr_i \implies \Framepdr_{i-1}$).
Otherwise, it initializes a new frontier frame to $\true$ (\cref{ln:pdr-init-frame}), and samples bad states (\cref{ln:pdr-sample-bad}) to block (exclude from the frame) until the frontier frame is strong enough to exclude all bad states (satisfying property~\ref{it:frames-safety}).
In order to satisfy property~\ref{it:frames-onestep-overapprox}, before a state $\sigma_b$ is blocked, the previous frame must be refined until it excludes all the pre-states of $\sigma_b$ (\cref{ln:pdr-back-refine}); this is performed by sampling pre-states and blocking them recursively (\cref{ln:pdr-back-sample-prestate}).
Once all the pre-states are blocked in the previous frame, $\sigma_b$ can be excluded from the current frame.
However, at this point,
PDR \emph{generalizes} and blocks a \emph{set} of states; this is done by finding a \emph{clause} $c$---also called a \emph{lemma}---that excludes $\sigma_b$ and still does not exclude any state that is reachable in one step or less from the previous frame (preserving property~\ref{it:frames-onestep-overapprox}).
This is done (in~\cref{ln:pdr-minimal-clause}) by starting from all literals (variables or their negations) that are falsified in $\sigma_b$, and choosing a subset whose disjunction (which is a clause) satisfies the desired properties. PDR chooses a \emph{minimal subset} in order to exclude as many states as possible. (In practice, this involves a linear number of SAT calls.)\footnote{
Practical implementations also attempt to push existing lemmas to other frames whenever possible; we omit this for simplicity. (We discuss inductive generalization below, in~\Cref{sec:overview-inductive-generalization}.)
}
The clause is conjoined (in~\cref{ln:pdr-strengthen}) to the frame as well as the preceding ones (thereby satisfying property~\ref{it:frames-monotone}, relying on $\Init \implies c$).
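The minimal-clause computation described above can be sketched greedily over explicit states (a simplified stand-in for the SAT-based subset minimization; the 3-variable instance and the drop order are hypothetical):

```python
# A literal is (var, polarity); a clause is a set of literals; a state is a
# dict var -> bool. Every nonempty subclause of the negation of sigma_b
# already blocks sigma_b, so we greedily drop literals as long as all
# must-keep states (the one-step post-image) still satisfy the clause.
def satisfies(state, clause):
    return any(state[v] == pol for v, pol in clause)

def generalize(sigma_b, keep):
    clause = {(v, not val) for v, val in sigma_b.items()}   # ~sigma_b
    for lit in sorted(clause):          # fixed drop order, for determinism
        smaller = clause - {lit}
        if smaller and all(satisfies(s, smaller) for s in keep):
            clause = smaller
    return clause

sigma_b = {"a": True, "b": True, "c": False}
keep = [{"a": False, "b": False, "c": False},
        {"a": False, "b": True, "c": False}]
c = generalize(sigma_b, keep)
print(sorted(c))    # → [('a', False)], i.e., the lemma is the clause ~a
```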
The above is an operational description of how the frames are generated to be overapproximations, but does not lay bare the principles of why they are computed in this way, and how to characterize the abstraction that frames perform.
To study this, we introduce $\Lambda$-PDR\footnote{In homage to Bshouty's $\Lambda$-algorithm~\cite{DBLP:journals/iandc/Bshouty95}, not the \textsc{SARS-CoV-2} variant.}, an alteration of PDR that is simpler for analysis.
This algorithm is a theoretical device to study the abstraction in PDR's frames: each frame of $\Lambda$-PDR is tighter than the corresponding frame of PDR, and thus \textbf{\textit{the overapproximation that $\Lambda$-PDR's frames perform is also performed in usual PDR}}.
We characterize the abstraction that $\Lambda$-PDR performs, and show how it can converge more rapidly than exact forward reachability, which sheds light on the abstraction in PDR.
\subsection{$\Lambda$-PDR}
\label{sec:overview-eepdr}
\subsubsection{The Algorithm}
\begin{wrapfigure}{R}{0.35\textwidth}
\vspace{-0.75cm}
\begin{minipage}{0.35\textwidth}
\begin{algorithm}[H]
\caption{$\Lambda$-PDR}
\label{alg:eepdr}
\begin{algorithmic}[1]
\begin{footnotesize}
\Procedure{$\Lambda$-PDR}{$\Init$, $\tr$, $\Bad$, $k$}
\State $\Frame_{-1} \gets \false$
\State $\Frame_0 \gets \Init$ $\label{ln:eepdr-frame0}$
\State $\bkwrch{k} = \set{\sigma \ | \ \bmc{\tr}{\sigma}{k} \cap \Bad \neq \emptyset}$ $\label{ln:eepdr-bk}$
\If{$\Init \cap \bkwrch{k} \neq \emptyset$}
\textbf{unsafe} $\label{ln:eepdr-unsafe}$
\EndIf
\State $i \gets 0$
\While{$\Frame_{i} \not \implies \Frame_{i-1}$} $\label{ln:eepdr-while-not-inductive}$
\If{$\tr(\Frame_{i}) \cap \bkwrch{k} \neq \emptyset$} $\label{ln:eepdr-restart}$
\State \textbf{restart} with larger $k$ \label{ln:eepdr-increase-k}
\EndIf
\State $\Frame_{i+1} \gets \true$
\For{$\sigma_b \in \bkwrch{k}$} $\label{ln:eepdr-bmc}$
\For{$c \subseteq \neg \sigma_b$} $\label{ln:eepdr-for-clause}$
\If{$\postimage{\tr}{\Frame_i} \implies c$} $\label{ln:eepdr-lemma-check}$
\State $\Frame_{i+1} \gets \Frame_{i+1} \land c$
\EndIf
\EndFor
\EndFor
\State $i \gets i+1$
\EndWhile
\State \Return $\Frame_i$
\EndProcedure
\end{footnotesize}
\end{algorithmic}
\end{algorithm}
\end{minipage}
\vspace{-0.4cm}
\end{wrapfigure}
\Cref{alg:eepdr} presents $\Lambda$-PDR.
Briefly, it constructs frames one after the other, by including all possible lemmas that any execution of PDR might learn; $\Frame_{i+1}$ is the conjunction of \emph{all} \emph{clauses} that \emph{block} a state from $\bkwrch{k}$---the set of states that \emph{can reach a bad state} in at most $k$ steps---yet \emph{retain} the states \emph{reachable in one step} from $\Frame_i$.
The algorithm's essentials are similar to PDR's, with important changes.
First, it is useful for our purpose to decouple two roles that frames serve in PDR. One is as a sequence of approximations to the invariant until the frame where an invariant is found (which is usually somewhere in the middle of the sequence). The other is a way to find counterexamples---which are states that can reach a bad state---without unrolling the transition relation~\cite{ic3}.
\begin{changebar}
In $\Lambda$-PDR we instead imagine that we are provided, through some arbitrary means (such as unrolling), with $\bkwrch{k}$, the set of states that can reach a state in $\Bad$ along some execution of length at most $k$ steps (\cref{ln:eepdr-bk}).
This allows us to focus on the other role of frames as approximations that converge to the invariant.
The number $k$ is chosen in advance, independently of the number of frames $N$.\footnote{
At first sight $\Framepdr_i$ consists of clauses that exclude counterexamples from a lower backward reachability bound, $\bkwrch{N-i}$; but in fact, pushing lemmas forward means that even $\Framepdr_N$ can include clauses learned at $\Framepdr_1$ from counterexamples in $\bkwrch{N}$.
}
\end{changebar}
Second, frames are computed without backtracking to refine previous frames; lemmas to support future frames are learned eagerly, in advance.
In particular, convergence is always at the last frame.
Third, whereas PDR ``samples'' $k$-backward reachable states and blocks each counterexample in $\Frame_{i+1}$ using a single, arbitrary (minimal) clause that does not exclude states in $\postimage{\tr}{\Frame_i}$, $\Lambda$-PDR generates \emph{all} such clauses---for \emph{any} counterexample state from $\bkwrch{k}$ (\cref{ln:eepdr-bmc}) and \emph{any} order of dropping literals (\cref{ln:eepdr-for-clause}). This ``determinization'' makes the algorithm easier to analyze.
\begin{changebar}
Overall, the algorithm computes each frame $\Frame_{i+1}$ iteratively, from the previous $\Frame_i$, without ever refining previous frames. Each frame is the conjunction of all the clauses that can be obtained as lemmas from blocking any counterexample from $\bkwrch{k}$ while still overapproximating $\postimage{\tr}{\Frame_i}$. This process continues until an inductive invariant is found (\cref{ln:eepdr-while-not-inductive})---unless the current frame does include a counterexample from $\bkwrch{k}$, which prompts an increase of $k$ (\cref{ln:eepdr-restart}) in order to distinguish between spurious overapproximation and truly unsafe systems (detected in~\cref{ln:eepdr-unsafe} by an initial state that can reach a bad state in $k$ steps).
\end{changebar}
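On an explicit finite-state system, $\bkwrch{k}$ can be computed by a bounded backward search over pre-images; a Python sketch on an invented mod-8 toy:

```python
# Bounded backward reachability on a finite-state toy (a mod-8 counter,
# invented for illustration): the states that can reach Bad in <= k steps.
def pre(state):                       # pre-image of the step s -> (s + 1) mod 8
    return {(state - 1) % 8}

def backward_reachable(bad, k):
    b = set(bad)
    for _ in range(k):
        b |= {p for s in b for p in pre(s)}
    return b

print(sorted(backward_reachable({0}, 2)))   # → [0, 6, 7]
```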
\begin{changebar}
As an example, this is how $\Lambda$-PDR proceeds on the example of~\Cref{fig:skip-counter} with (say) $k=1$:
The $k$-backward reachable states $\bkwrch{k}$ are those where $\vec{x} = 10\ldots00 \land y_n y_{n-1} \ldots y_1 = 11\ldots1 \land z=1$ (every value of $y_0$ yields a backward reachable state).
The frame sequence is initialized with $\Frame_0 = \Init$. As $\postimage{\tr}{\Frame_0}$ does not intersect $\bkwrch{k}$, the algorithm proceeds to computing $\Frame_1$. It starts as $\true$, and the algorithm iterates through the states in $\bkwrch{k}$ to generate clauses. Suppose that the first counterexample $\sigma_b$ is $\vec{x} = 10\ldots00 \land \vec{y} = 11\ldots10 \land z=1$. We write
$\neg \sigma_b = (x_n \neq 1) \lor (x_{n-1} \neq 0) \lor \ldots \lor (x_{1} \neq 0) \lor (x_{0} \neq 0) \lor (y_n \neq 1) \lor (y_{n-1} \neq 1) \lor \ldots \lor (y_1 \neq 1) \lor (y_0 \neq 0) \lor (z \neq 1)$, and consider every possible sub-clause $c$ thereof, checking whether $\postimage{\tr}{\Frame_0} \implies c$, namely, $c$ includes both $\vec{x}=00\ldots00,\vec{y}=00\ldots00,z=0$ and $\vec{x}=00\ldots01,\vec{y}=00\ldots00,z=0$. In this case, there are several incomparable (and minimal) such $c$'s: $x_n \neq 1$, $y_i \neq 1$ for every $i>1$, and $z \neq 1$.
In $\Lambda$-PDR, \emph{all} these potential clauses are conjoined to $\Frame_1$.
The algorithm performs the same procedure for all the counterexamples in $\bkwrch{k}$.
Once this is done, $\Frame_1$ never changes again in the course of the algorithm, and it becomes the basis for constructing $\Frame_2$ in the same way, and so on until an inductive invariant is found or a restart becomes necessary. (We later show the resulting $\Frame_1,\Frame_2,\ldots$ in this example.)
\end{changebar}
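For concreteness, the clause enumeration above can be replayed in a few lines of Python. The sketch below is our own toy encoding (not from any artifact of this paper), instantiated with $n=3$; it enumerates the minimal sub-clauses of $\neg\sigma_b$ that overapproximate $\postimage{\tr}{\Frame_0}$:

```python
from itertools import combinations

def satisfies(state, clause):
    # clause: frozenset of (var, val) literals, read as "state[var] == val"
    return any(state[v] == val for v, val in clause)

def minimal_blocking_clauses(post_image, cex):
    """Minimal sub-clauses of the negation of counterexample cex that hold in
    every state of post_image (and hence may be conjoined to the next frame)."""
    neg_cex = [(v, 1 - val) for v, val in cex.items()]  # literals of ~cex
    found = []
    for size in range(1, len(neg_cex) + 1):
        for lits in combinations(neg_cex, size):
            c = frozenset(lits)
            if any(c > f for f in found):  # a smaller sub-clause already works
                continue
            if all(satisfies(s, c) for s in post_image):
                found.append(c)
    return found

def mk(xs, ys, z):
    # state with bit-vectors x_n..x_0 and y_n..y_0 (here n = 3) and flag z
    s = {f"x{i}": int(c) for i, c in zip(range(3, -1, -1), xs)}
    s.update({f"y{i}": int(c) for i, c in zip(range(3, -1, -1), ys)})
    s["z"] = z
    return s

post = [mk("0000", "0000", 0), mk("0001", "0000", 0)]  # post-image of F_0
sigma_b = mk("1000", "1110", 1)                        # the counterexample
clauses = minimal_blocking_clauses(post, sigma_b)
# prints the minimal blocking clauses, each a singleton literal set
print(sorted(sorted(c) for c in clauses))
```

All minimal lemmas here turn out to be single literals, matching the rigidity of clauses discussed later in this overview.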
\begin{changebar}
\subsubsection{PDR Overapproximates at Least as Much as $\Lambda$-PDR}
The importance of $\Lambda$-PDR for our investigation of abstraction in PDR stems from the fact that $\Frame_i \implies \Framepdr_i$, when $k=N$ is the final number of frames in PDR (\Cref{cor:lambda-pdr-underapproximates-pdr}).
This implies that whatever overapproximation $\Lambda$-PDR performs also transfers to PDR: the overapproximation in $\Lambda$-PDR is a lower bound for the overapproximation in PDR.
The relationship $\Frame_i \implies \Framepdr_i$ holds because every clause $c$ that PDR can use to strengthen $\Framepdr_i$ is used to strengthen $\Frame_i$ of $\Lambda$-PDR (roughly, in PDR, for $c$ to be added to $\Framepdr_i$, it must block a counterexample from $\bkwrch{k}$ and overapproximate the post-image of the previous frame; the same properties would hold for $c$ in $\Lambda$-PDR, thus ensuring that $c$ is conjoined to $\Frame_i$---see~\Cref{cor:lambda-pdr-underapproximates-pdr}).
Our goal then is to show that $\Lambda$-PDR performs significant overapproximation over exact forward reachability, and thereby establish the same for PDR.
Our first step is to characterize the overapproximation that $\Lambda$-PDR performs, and for this we need tools developed in exact concept learning.
\end{changebar}
\subsection{Abstraction from The Monotone Theory}\label{sec:overview-monotone}
The main technical enabler of our paper is the observation (\Cref{lem:justify-overview-next-frame}) that in $\Lambda$-PDR, there is a well-defined relation between successive frames, through what we call the \emph{monotone hull}:
\vspace{-0.15cm}
\begin{tcolorbox}[boxsep=-4pt]
\begin{equation}
\label{eq:overview-next-frame}
\Frame_{i+1} = \mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k}} \eqdef \bigwedge_{b \in \bkwrch{k}}{\monox{\postimage{\tr}{\Frame_i}}{b}},
\end{equation}
\end{tcolorbox}
\vspace{-0.15cm}
\noindent
where $\monox{\varphi}{b}$ is the central operator in the monotone theory~\cite{DBLP:journals/iandc/Bshouty95}: the \emph{least $b$-monotone overapproximation of $\varphi$} (``$b$-monotonization'' for short). For a state $b$, a Boolean function $f$ is $b$-monotone if whenever $f(\sigma_1)=1$ and $\sigma_1 \leq_b \sigma_2$---meaning that $\sigma_2$ is obtained from $\sigma_1$ by flipping bits on which $\sigma_1$ and $b$ agree---then also $f(\sigma_2)=1$.
$\monox{\varphi}{b}$ is the \emph{least} $b$-monotone formula (function) implied by $\varphi$.
(We elaborate on the technical details in~\Cref{sec:monotone-background}.)
The insight of~\Cref{eq:overview-next-frame} is that, as we show, every lemma in PDR is implied by the monotone hull, and the conjunction of all possible lemmas is exactly the monotone hull.
Technically, the observation builds on an equivalent formulation of $\monox{\varphi}{b}$ in a conjunctive form, which is not explicit in Bshouty's paper (\Cref{lem:monox-conjunction-clauses}).
Our central observation is that the monotone hull operator introduces \emph{overapproximation} to the sequence of frames---$\mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k}}$ can include many more states than $\postimage{\tr}{\Frame_i}$.
This is an interesting deviation from Bshouty's use of monotonizations in \emph{exact} learning of an unknown $\varphi$,
where a set $B$ is chosen such that $\varphi \equiv \mhull{\varphi}{B}$; in that case $B$ is said to be a monotone basis for $\varphi$, denoted $\varphi \in \mspan{B}$ (\Cref{def:monotone-span}).
In contrast, here the monotone hull is applied to intermediate frames, and we are interested in the cases that the set $\bkwrch{k}$ is \emph{not} a monotone basis for $\postimage{\tr}{\Frame_i}$, for then \Cref{eq:overview-next-frame} indicates a strict \emph{overapproximation of exact forward reachability} that $\Lambda$-PDR performs.
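These definitions can be made concrete over explicit state sets. The following Python sketch (our own toy encoding, with states as bit tuples; not from any artifact of this paper) computes $\leq_b$, the $b$-monotonization as an upward closure, and the monotone hull:

```python
from itertools import product

def leq_b(s1, s2, b):
    """s1 <=_b s2: s2 is obtained from s1 by flipping only bits
    on which s1 agrees with b."""
    return all(x2 == x1 or x1 == xb for x1, x2, xb in zip(s1, s2, b))

def monotonization(phi, b, universe):
    """Least b-monotone overapproximation M_b(phi): the upward closure
    of phi in the <=_b order."""
    return {s for s in universe if any(leq_b(t, s, b) for t in phi)}

def monotone_hull(phi, B, universe):
    """MHull(phi, B): conjunction (intersection) of M_b(phi) over b in B."""
    hull = set(universe)
    for b in B:
        hull &= monotonization(phi, b, universe)
    return hull

universe = set(product((0, 1), repeat=3))
phi = {(0, 1, 0)}
# 010 agrees with b = 111 only in the middle bit, so only it may flip:
print(sorted(monotonization(phi, (1, 1, 1), universe)))  # [(0, 0, 0), (0, 1, 0)]
```

Taking $B$ to contain a second point can shrink the hull back toward $\varphi$, which is exactly the monotone-basis situation; the overapproximation of interest arises when it does not.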
\label{sec:overview-example-frame}
Consider again the running example (\Cref{fig:skip-counter}).
One step of exact forward reachability discovers the state $\vec{x}=00\ldots01, \vec{y}=00\ldots00, z=0$ in addition to the initial state.
In contrast, by~\Cref{eq:overview-next-frame}, the first frame of $\Lambda$-PDR with $k=1$ is $\Frame_1 = \mhull{\postimage{\tr}{\Init}}{\bkwrch{k}}$, resulting in
\begin{equation}
\label{eq:running-frame-1}
\Frame_1 \, = \, x_n=0 \land \vec{y}=00\ldots00 \land z=0;
\end{equation}
in a single iteration the algorithm has leaped over an exponential number of steps of $\tr$!
To arrive at~\Cref{eq:running-frame-1}, we compute the monotone overapproximations that constitute $\mhull{\postimage{\tr}{\Init}}{\bkwrch{k}}$.
In our example, $\bkwrch{k}$ is a single cube:
\begin{align*}
\bkwrch{k} &= (x_n=1 \land x_{n-1}=0 \land \ldots \land x_1=0 \land x_0=0 \land y_n=1 \land \ldots \land y_1=1 \land z=1),
\\
\intertext{in which case $\mhull{\postimage{\tr}{\Frame_0}}{\bkwrch{k}}$ can be calculated by writing $\postimage{\tr}{\Frame_0}$ in DNF (see~\Cref{lem:mhull-dnf-base,lem:bshouty-mon-mindnf}):}
\postimage{\tr}{\Frame_0} &=
(x_n=0 \land \litabs{x_{n-1}=0} \land \ldots \land \litabs{x_1=0} \land \litabs{x_0=0} \land y_n=0 \land \ldots \land y_1=0 \land y_0=0 \land z=0)
\\
&\lor
(x_n=0 \land \litabs{x_{n-1}=0} \land \ldots \land \litabs{x_1=0} \land x_0=1 \land y_n=0 \land \ldots \land y_1=0 \land y_0=0 \land z=0),
\end{align*}
and in each term omitting every literal that agrees with the cube of $\bkwrch{k}$ (appearing in color).
(When $\bkwrch{k}$ is not a single cube, $\Frame_{i+1}$ is computed as a conjunction of the above for each cube in $\bkwrch{k}$.)
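For a small instance, \Cref{eq:running-frame-1} can be verified by brute force. The Python sketch below (our own toy encoding, not from any artifact of this paper; $n=3$, so states have 9 bits) computes $\Frame_1 = \mhull{\postimage{\tr}{\Init}}{\bkwrch{k}}$ by enumeration and confirms it equals $x_n=0 \land \vec{y}=00\ldots00 \land z=0$:

```python
from itertools import product

def leq_b(s1, s2, b):
    # s1 <=_b s2: s2 flips only bits on which s1 agrees with b
    return all(x2 == x1 or x1 == xb for x1, x2, xb in zip(s1, s2, b))

def monotone_hull(phi, B, universe):
    # intersection over b in B of the upward closure of phi in <=_b
    return {s for s in universe
            if all(any(leq_b(t, s, b) for t in phi) for b in B)}

# state layout: (x3, x2, x1, x0, y3, y2, y1, y0, z), with n = 3
universe = set(product((0, 1), repeat=9))
post_init = {(0, 0, 0, 0, 0, 0, 0, 0, 0),   # x = 0000
             (0, 0, 0, 1, 0, 0, 0, 0, 0)}   # x = 0001
# backward reachable states: x = 1000, y3 y2 y1 = 111, z = 1, y0 free
bkw = {(1, 0, 0, 0, 1, 1, 1, y0, 1) for y0 in (0, 1)}

frame1 = monotone_hull(post_init, bkw, universe)
expected = {s for s in universe
            if s[0] == 0 and s[4:8] == (0, 0, 0, 0) and s[8] == 0}
print(frame1 == expected, len(frame1))  # True 8
```

The eight states of the hull are exactly those satisfying $x_n=0 \land \vec{y}=0000 \land z=0$, a leap over exponentially many steps of exact reachability already at the first frame.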
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.35\textwidth}
\includegraphics[width=\textwidth]{figs/protected}
\caption{}
\iflong\else\vspace{-0.2cm}\fi
\label{fig:protected-single}
\end{subfigure}
\hspace{0.15\textwidth}
\begin{subfigure}[t]{0.3\textwidth}
\includegraphics[width=\textwidth]{figs/hull}
\caption{}
\iflong\else\vspace{-0.2cm}\fi
\end{subfigure}
\caption{(a) $\sigma$ is ``protected'' by $\postimage{\tr}{\Frame_i}$ from exclusion due to blocking $\bkwrch{k}$,
thus (b) $\sigma$ is in $\Frame_{i+1} = \mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k}}$.
(The figure ``puns'' Euclidean and Hamming geometry: the cube and interval are Hamming-space objects drawn with Euclidean intuition.)}
\label{fig:protected-hull}
\iflong\else\vspace{-0.45cm}\fi
\end{figure}
What is especially significant about this overapproximation in $\Lambda$-PDR is that it exists \emph{in each step}, as we describe in the next subsection (\Cref{sec:overview-successive}). But before that, we explain the intuition for where this overapproximation stems from.
The cause of overapproximation in $\Lambda$-PDR is the special constraints on the lemmas the algorithm can generate.
Recall that the states that remain in $\Frame_{i+1}$ are those that \emph{cannot be excluded} by \emph{any} lemma starting from \emph{any} counterexample, due to the need to satisfy property~\ref{it:frames-onestep-overapprox}.
Since lemmas are \emph{not} arbitrary formulas, perhaps surprisingly, such states exist beyond the exact post-image.
We demonstrate what these states are using the running example.
Consider a state $\sigma$ that satisfies~\Cref{eq:running-frame-1}.
Why does no lemma $c$ learned by the algorithm exclude $\sigma$, i.e., $\sigma \not\models c$? The reason is that every $c$ excludes some counterexample $b \in \bkwrch{k}$, and, furthermore, $c$ is a clause---a negation of a cube. A cube is a very rigid geometric shape; if $\neg c$ contains both $\sigma$ and $b$ then it necessarily contains many other states---it must include all states that are within the smallest cube that contains both $\sigma,b$, a.k.a.\ the Hamming interval between $\sigma,b$~\cite[e.g.][]{wiedemann1987hamming}.
For example, the Hamming interval between $\sigma = (\vec{x}=011\ldots101,\vec{y}=00\ldots00,z=0)$ and $b = (\vec{x}=10\ldots00,\vec{y}=11\ldots10,z=1)$ is $x_1=0 \land y_0=0$---the conjunction of the literals (or constraints) that hold in both $\sigma,b$.
However, $\neg c$ must not contain any state in $\postimage{\tr}{\Frame_i}$, so the Hamming interval between $\sigma,b$ cannot intersect $\postimage{\tr}{\Frame_i}$.
In our example, the Hamming interval between $\sigma$ and $b$ includes the state $\widetilde{\sigma}=(\vec{x}=00\ldots00, \vec{y}=00\ldots00, z=0)$, and $\widetilde{\sigma} \in \postimage{\tr}{\Frame_0}$, so a lemma $c$ that excludes $\sigma$ and originates from blocking $b$ cannot be conjoined to $\Frame_1$.
Put differently, $\widetilde{\sigma} \in \postimage{\tr}{\Frame_0}$ ``protects'' $\sigma \in \Frame_{1}$ from being excluded due to $b$.
In general, a state $\sigma$ is included in $\Frame_{i+1}$ if a protector state $\widetilde{\sigma} \in \Frame_i$ exists \emph{for every} $b \in \bkwrch{k}$, namely, the Hamming interval between $\sigma,b$ crosses $\postimage{\tr}{\Frame_i}$ for all $b$'s (\Cref{fig:protected-hull}). In our example, the same $\widetilde{\sigma}$ actually protects every $\sigma \in \Frame_1$ from exclusion due to any $b \in \bkwrch{k}$, but multiple protector states may be necessary in general.
The idea of protector states explains why $\Frame_{i+1}$ is $\mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k}}$ (\Cref{eq:overview-next-frame}).
Every state $\widetilde{\sigma} \in \postimage{\tr}{\Frame_0}$ is a protector state. The states that $\widetilde{\sigma}$ protects from $b$ are the states $\sigma$ such that $\widetilde{\sigma}$ is in the Hamming interval between $\sigma$ and $b$; these states are ``farther away'' from $b$ than $\widetilde{\sigma}$, in the sense of $\widetilde{\sigma} \leq_b \sigma$ as defined above (and formally in~\Cref{def:b-monotone-order}).
The set protected from $b$ by $\postimage{\tr}{\Frame_i}$ is therefore $\monox{\postimage{\tr}{\Frame_i}}{b}$, and the states that are protected from all $b \in \bkwrch{k}$ are the conjunction over all $b$'s, namely $\mhull{\postimage{\tr}{\Frame_{i}}}{\bkwrch{k}}$.
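The protection condition is directly computable: $\widetilde{\sigma}$ protects $\sigma$ from $b$ exactly when $\widetilde{\sigma}$ lies in the Hamming interval between $\sigma$ and $b$. A short Python sketch of this check on the example states above (our own toy encoding, $n=3$):

```python
def hamming_interval(s, b):
    """Smallest cube containing s and b: the literals on which they agree,
    returned as a dict of fixed positions."""
    return {i: v for i, (v, w) in enumerate(zip(s, b)) if v == w}

def in_cube(t, cube):
    return all(t[i] == v for i, v in cube.items())

def protects(t, s, b):
    # equivalently t <=_b s: any cube containing both s and b contains t
    return in_cube(t, hamming_interval(s, b))

# layout (x3, x2, x1, x0, y3, y2, y1, y0, z), n = 3
sigma   = (0, 1, 0, 1, 0, 0, 0, 0, 0)  # x = 0101, y = 0000, z = 0
b       = (1, 0, 0, 0, 1, 1, 1, 0, 1)  # x = 1000, y = 1110, z = 1
sigma_t = (0, 0, 0, 0, 0, 0, 0, 0, 0)  # the all-zero state

print(hamming_interval(sigma, b))   # {2: 0, 7: 0}: x_1 = 0 and y_0 = 0
print(protects(sigma_t, sigma, b))  # True
```

Here positions 2 and 7 are $x_1$ and $y_0$, so the interval is the cube $x_1=0 \land y_0=0$, and the all-zero state inside it protects $\sigma$ from $b$.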
\subsection{Successive Overapproximation: Abstract Interpretation}
\label{sec:overview-successive}
The overapproximation of~\Cref{eq:overview-next-frame} is present between each two consecutive frames; it is thus performed repeatedly, using the previous overapproximation as the starting point of the next. In~\Cref{sec:ai} we show that $\Lambda$-PDR can be cast as \textbf{\textit{abstract interpretation in a new logical domain}}, of the formulas in $\bkwspan{k}$, the formulas $\varphi$ s.t.\ $\mhull{\varphi}{\bkwrch{k}} \equiv \varphi$, which are the formulas expressible by a conjunction of clauses that each excludes a state from $\bkwrch{k}$ (\Cref{def:monotone-span}). \textbf{\textit{The frames of $\Lambda$-PDR are completely characterized as Kleene iterations with the best abstract transformer}} in this domain (\Cref{lem:ai-eepdr-sandwich}), when correcting for the slightly different initial frame ($\Frame_0=\Init$ vs.\ $\mhull{\Init}{\bkwrch{k}}$; we show that the resulting difference in the number of frames is at most one).
\newcounter{reorder-fig-idx}
\begin{figure}[t]
\begin{minipage}{.37\textwidth}
\begin{minipage}{\textwidth}
\centering
\captionsetup{width=.9\textwidth}
\includegraphics[width=\textwidth]{figs/successive-approx}
\vspace{-1cm}
\caption{\iflong\footnotesize\fi
Repeatedly interleaving the post-image and monotone hull operators results in
successive overapproximation.}
\label{fig:successive-approx}
\vspace{0.3cm}
\end{minipage}
\begin{minipage}{\textwidth}
\setcounter{reorder-fig-idx}{\value{figure}}
\setcounter{figure}{\value{reorder-fig-idx}+1}
\centering
\begin{tikzpicture}
\def\w{0.5in}
\def\h{1in}
\def\b{3pt}
\def\mytiny#1{\scalebox{.5}{#1}}
\node[draw,shape=rectangle,minimum width=\w,minimum height=\h] (Bk) {$\bkwrch{k}$};
\draw[gray,->,>={Classical TikZ Rightarrow[]},double,double distance=1.5pt] ($(Bk.north west)-(\b,\b)$) -- +(-\w, 0);
\draw[gray,->,>={Classical TikZ Rightarrow[]},double,double distance=1.5pt] ($(Bk.west)-(\b,0)$) -- node[align=center,above=8pt] {\mytiny{direction of}\\[-6pt]\mytiny{monotonization}} +(-\w, 0);
\draw[gray,->,>={Classical TikZ Rightarrow[]},double,double distance=1.5pt] ($(Bk.south west)+(-\b,\b)$) -- +(-\w, 0);
\node[circle,fill=black,inner sep=0.75pt,outer sep=3pt, label={[label distance=-7pt]below right:{\tiny{$\sigma_2$}}}] (t1s) at ($(Bk.south west)!0.25!(Bk.north west)+1.5*(-\w,0)$) {};
\node[circle,fill=black,inner sep=0.75pt,outer sep=3pt, label={[label distance=-7pt]above left:{\tiny{$\sigma_2'$}}}] (t1t) at ($(Bk.south west)!0.75!(Bk.north west)+2.5*(-\w,0)$) {};
\node[circle,fill=black,inner sep=0.75pt,outer sep=3pt, label={[label distance=-7pt]above right:{\tiny{$\sigma_1'$}}}] (t2t) at ($(Bk.south west)!0.75!(Bk.north west)+1.5*(-\w,0)$) {};
\node[circle,fill=black,inner sep=0.75pt,outer sep=3pt, label={[label distance=-7pt]below left:{\tiny{$\sigma_1$}}}] (t2s) at ($(Bk.south west)!0.25!(Bk.north west)+2.5*(-\w,0)$) {};
\draw[->] (t1s) -- node[right, pos=.3]{\tiny{$t_1$}} (t1t);
\draw[->] (t2s) -- node[left, pos=.3]{\tiny{$t_2$}} (t2t);
\draw[gray,->,>={Classical TikZ Rightarrow[]},double,double distance=0.5pt] (t2t) -- (t1t);
\draw[gray,->,>={Classical TikZ Rightarrow[]},double,double distance=0.5pt] (t1s) -- (t2s);
\end{tikzpicture}
\caption{\iflong\footnotesize\fi The monotonization of a transition $t_1=(\sigma_1,\sigma'_1) \in \tr $ subsumes that of $t_2=(\sigma_2,\sigma'_2) \in \tr$ if $\sigma_1$ is farther from the backward reachable cube than $\sigma_2$, and $\sigma'_1$ is closer than $\sigma'_2$. If few transitions subsume the rest, $\dnfsize{\absr{\tr}}$ is small, which implies convergence with few frames for $\Lambda$-PDR.}
\label{fig:arrow-subsumption}
\end{minipage}
\end{minipage}\hspace{0.2cm}%
\begin{minipage}{.59\textwidth}
\centering
\setcounter{figure}{\value{reorder-fig-idx}}
\setcounter{reorder-fig-idx}{\value{figure}}
\captionsetup{width=.95\textwidth}
\includegraphics[width=\textwidth]{figs/lambda-pdr-frames}
\caption{\iflong\footnotesize\fi The frames of $\Lambda$-PDR on the running example (only values of $\vec{x}$ are displayed, always $\vec{y}=0\ldots0,z=0$).
The number of frames required for convergence is always 4, independent of the parameter $n$.
}
\label{fig:lambda-pdr-frames}
\end{minipage}
\vspace{-0.4cm}
\end{figure}
\setcounter{figure}{\value{reorder-fig-idx}+1}
Repeatedly applying the abstraction can cause $\Lambda$-PDR to converge much faster than exact forward reachability (illustrated schematically in~\Cref{fig:successive-approx}).
For example, the frames of $\Lambda$-PDR on the running example are displayed in~\Cref{fig:lambda-pdr-frames} (we perform the calculation in detail in~\Cref{ex:running-all-frames}), and
$\Frame_3$ is none other than the inductive invariant of~\Cref{eq:skip-counter-invariant}.
In this way, \textbf{\textit{successive overapproximation can lead to convergence in a smaller number of frames than exact forward reachability}}; in this example, $\Lambda$-PDR converges in $4$ frames, rather than an exponential number as exact forward reachability would use.\footnote{
The operations in the abstract domain are not efficient; we focus on the number of iterations until convergence.
}
In~\Cref{sec:itp-friends}, we show that $\Lambda$-PDR holds a similar advantage over the unrolling depth of an interpolation-based algorithm.
\subsection{Convergence Bounds via (Hyper)Diameter Bounds}
\label{sec:overview-diameter-bound}
When does the successive overapproximation of $\Lambda$-PDR terminate in a small number of iterations? The lattice height of the abstract domain is exponential in the number of variables (see~\Cref{sec:ai}), and rapid convergence depends on properties of the transition system rather than of the abstract domain.
We prove a convergence bound using one such property: \textit{\textbf{when the DNF size}} (the number of terms in the smallest DNF representation) \textit{\textbf{of specific monotone overapproximations of the transition relation is small, then $\Lambda$-PDR terminates after a small number of iterations}} (\Cref{thm:abstract-diamter-bound,thm:abstract-hyperdiamter-bound}).
The central idea is to relate $\Lambda$-PDR to \textit{\textbf{exact forward reachability in an ``abstract'' transition system}}, and then bound the number of iterations using techniques for analyzing the diameter of transition systems.
Let us consider an example where~\Cref{thm:abstract-diamter-bound} derives an efficient convergence bound (specifically, a linear bound, as opposed to the potential exponential number of iterations) through a syntactic analysis of the transition system in question.
Consider the same example of~\Cref{fig:skip-counter}, but with additional transitions that ``bounce back'' from $\vec{x}=01\ldots11$ to $\vec{x}=00\ldots00$ and from $\vec{x}=11\ldots11$ to any $\vec{x}=10\ldots010\ldots0$ (a number with the msb and exactly one other bit set to $1$).
(The new transitions have no effect on the behavior of either exact forward reachability or $\Lambda$-PDR;\footnote{
The algorithms are affected by the new transitions when they arrive at the transitions' pre-states, but at that point both algorithms will have already reached the smaller numbers in the post-states of the new transitions, resulting in the same frames as without these transitions.
}
we explain below why this is needed to obtain a good bound through~\Cref{thm:abstract-diamter-bound}.\footnote{
An earlier version of this paper stated that \Cref{thm:abstract-diamter-bound} proves $\Lambda$-PDR's convergence in a linear number of frames for the system in~\Cref{fig:skip-counter}. However, this is incorrect; see~\Cref{ex:skip-counter-pure-exponential}. A similar error is now corrected also in~\Cref{ex:multiskip-counter-poly}.
})
In this transition system, we bound the number of iterations of $\Lambda$-PDR by applying~\Cref{thm:abstract-diamter-bound} to the part of $\tr$ restricted to states where $\vec{y}=0\ldots0,z=0$ (this is valid because both the transitions of $\tr$ and monotonization
w.r.t.\ $\bkwrch{k}$, $\monox{\cdot}{\vec{x}=10\ldots0,\vec{y}=1\ldots1,z=1}$, leave $\vec{y}=0\ldots0,z=0$ unchanged---see~\Cref{lem:diameter-bound-project-irrelevant}).
The transition relation $\tilde{\tr} = \restrict{\tr}{\vec{y}\gets00\ldots00, z\gets0}$ can be written in DNF (as a double-vocabulary formula, with unprimed variables for the pre-state and primed variables for the post-state) as the disjunction of individual transitions, shown on the left-hand side below (colors and boxes should be ignored for now):
\newcommand*{\bbox}[1]{%
\tcboxmath[colback=white, colframe=black, size=fbox, arc=0pt, boxrule=0.4pt]{#1}%
}
\newcommand*{\nbox}[1]{%
\tcboxmath[colback=white, colframe=white, size=fbox, arc=0pt, boxrule=0.4pt]{#1}%
}
\small
\begin{align}
\tilde{\tr}
=
\label{eq:it:00start}
&\nbox{(\vec{x}=\litabs{0}00\ldots0000 \land \vec{x}'=0\litabs{00\ldots000}1)}
&& \absr{\tr} = \monox{\tilde{\tr}}{\vec{x}=011\ldots11,\vec{x}'=100\ldots00} =
\\
\lor
&\nbox{(\vec{x}=\litabs{0}00\ldots000\litabs{1} \land \vec{x}'=0\litabs{00\ldots00}1\litabs{0})}
&&
\\
\lor
&
\nbox{(\vec{x}=\litabs{0}00\ldots00\litabs{1}0 \land \vec{x}'=0\litabs{00\ldots00}11)}
&&
\\
\lor
&\nbox{(\vec{x}=\litabs{0}00\ldots00\litabs{11} \land \vec{x}'=0\litabs{00\ldots0}1\litabs{00})}
&&
\\
\label{eq:it:00end}
\lor
&
\nbox{\ldots}
&&
\\
\label{eq:it:0bounce}
\lor
&\bbox{(\vec{x}=\litabs{011\ldots1111} \land \vec{x}'=0\litabs{00\ldots0000})}
&& \quad \lor x'_n=0
\\
\label{eq:it:01skip}
\lor
&\bbox{(\vec{x}=\litabs{011\ldots1111} \land \vec{x}'=\litabs{100\ldots000}1)}
&& \quad \lor x'_0=1
\\
\lor
\label{eq:it:11start}
&\nbox{(\vec{x}=100\ldots000\litabs{1} \land \vec{x}'=\litabs{100\ldots00}1\litabs{0})}
&&
\\
\lor
&\nbox{(\vec{x}=100\ldots00\litabs{1}0 \land \vec{x}'=\litabs{100\ldots00}11)}
&&
\\
\lor
&
\nbox{\ldots}
&&
\\
\lor
\label{eq:it:11end}
&\nbox{(\vec{x}=1\litabs{11\ldots111}0 \land \vec{x}'=\litabs{1}11\ldots1111)}
&&
\\
\lor
\label{eq:it:10wrap}
&
\nbox{(\vec{x}=1\litabs{11\ldots1111} \land \vec{x}'=0\litabs{00\ldots0000})}
&&
\\
\label{eq:it:1bounce-start}
\lor
&\nbox{(\vec{x}=1\litabs{11\ldots1111} \land \vec{x}'=\litabs{100\ldots000}1)}
&&
\\
\label{eq:it:1bounce-start-really}
\lor
&\bbox{(\vec{x}=1\litabs{11\ldots1111} \land \vec{x}'=\litabs{100\ldots00}1\litabs{0})}
&& \quad \lor (x_n = 1 \land x'_1 = 1)
\\
\lor
&\bbox{(\vec{x}=1\litabs{11\ldots1111} \land \vec{x}'=\litabs{100\ldots0}1\litabs{00})}
&& \quad \lor (x_n = 1 \land x'_2 = 1)
\\
\lor
&\bbox{(\vec{x}=1\litabs{11\ldots1111} \land \vec{x}'=\litabs{100\ldots}1\litabs{000})}
&& \quad \lor (x_n = 1 \land x'_3 = 1)
\\
\lor
\label{eq:it:1bounce-end}
&\bbox{\ldots}
&& \quad \lor \ldots
\end{align}
\normalsize
To compute a bound for the number of frames in $\Lambda$-PDR,
we perform a monotonization of the (two-vocabulary) transition relation.
Recall that in this example $\bkwrch{k}$ consists of a single cube, in which case we need only consider one monotonization (the case of more complex syntactic forms of $\bkwrch{k}$ is discussed later):
examine the monotonization that omits literals that agree with $\bkwrch{k}$ in the post-state, and \emph{conversely} in the pre-state,
$\absr{\tr} = \monox{\tilde{\tr}}{\vec{x}=01\ldots11,\vec{x}'=10\ldots00}$. The literals in $\tilde{\tr}$ that are omitted in $\absr{\tr}$ appear colored.
As we show in~\Cref{thm:absract-reach}, the resulting transition relation captures the behavior of $\Lambda$-PDR:
the set of $i$-reachable states of $\absr{\tr}$
matches the $i$'th frame of the
Kleene iterations with the best transformer for $\tr$ in the $\bkwspan{k}$ domain.
Hence, \textbf{\textit{bounds on the diameter of the abstract system result in bounds on the number of frames}} of $\Lambda$-PDR.
To bound the diameter, we consider the DNF size of $\absr{\tr}$.
The term-by-term monotonization of $\tilde{\tr}$
creates many redundant terms; the terms that originate from the transitions marked by boxes above subsume all the others.\footnote{
The term arising from the ``bounce back'' transition with msb $0$ in~\cref{eq:it:0bounce}
subsumes all other terms that originate from transitions where the msb is $0$ in both the pre-state and the post-state (\crefrange{eq:it:00start}{eq:it:00end}), as well as the term originating from the ``wraparound'' transition in~\cref{eq:it:10wrap};
the term arising from the ``skip'' transition in~\cref{eq:it:01skip}
subsumes the term originating from the transition in~\Cref{eq:it:1bounce-start};
and the terms arising from ``bounce back'' transitions with msb $1$ in~\crefrange{eq:it:1bounce-start}{eq:it:1bounce-end} (including~\Cref{eq:it:1bounce-start}, which is subsumed by~\Cref{eq:it:01skip})
subsume all the terms that arise from transitions where the msb is $1$ both in the pre-state and the post-state (\crefrange{eq:it:11start}{eq:it:11end}).
}
This generates a DNF representation of $\absr{\tr}$ with linear number of terms (appearing in the right-hand side above)---even though the original $\tilde{\tr}$ has an exponential number of terms in its DNF representation.
By~\Cref{thm:abstract-diamter-bound}
we deduce from
the linear DNF size of $\absr{\tr}$ that
$\Lambda$-PDR converges in at most a linear number of frames.
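This subsumption computation can be replayed mechanically. The Python sketch below (our own encoding, not from any artifact of this paper; instantiated with $n=4$, i.e., $5$ bits of $\vec{x}$) builds $\tilde{\tr}$ as a disjunction of individual transitions, monotonizes it term by term, and discards subsumed terms:

```python
n = 4  # x has n + 1 bits x4..x0 (x4 is the msb)

def to_term(pre, post):
    # encode one transition (pre -> post) as a cube over both vocabularies;
    # variable "xi" is bit i of the pre-state, "xi'" of the post-state
    t = {}
    for i in range(n + 1):
        t[f"x{i}"] = (pre >> i) & 1
        t[f"x{i}'"] = (post >> i) & 1
    return t

# the modified skip-counter, restricted to y = 0...0, z = 0 (x as an integer)
transitions = (
    [(i, i + 1) for i in range(0, 15)]           # increments with msb 0
    + [(15, 0), (15, 17)]                        # bounce back, then the skip
    + [(i, i + 1) for i in range(17, 31)]        # increments with msb 1
    + [(31, 0)]                                  # wraparound
    + [(31, 16 + (1 << j)) for j in range(4)]    # bounce backs with msb 1
)

# monotonization point: pre-state x = 01111, post-state x' = 10000
b = to_term(0b01111, 0b10000)

def monotonize_term(term, b):
    """Least b-monotonization of a cube: drop every literal agreeing with b."""
    return frozenset((v, val) for v, val in term.items() if b[v] != val)

terms = {monotonize_term(to_term(p, q), b) for p, q in transitions}
# a term subsumes another iff its literal set is a strict subset of it
irredundant = {t for t in terms if not any(u < t for u in terms)}
print(len(irredundant))  # 5 terms, i.e., n + 1: a linear-size DNF
```

The five surviving terms are exactly the monotonizations of the boxed transitions above; for general $n$ the count is $n+1$, matching the claimed linear DNF size of $\absr{\tr}$.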
One way to think about the difference between $\tr,\absr{\tr}$ is by the way transitions in $\tr$ give rise to the transitions in $\absr{\tr}$, illustrated in~\Cref{fig:arrow-subsumption}.
A transition of $\absr{\tr}$ can \emph{abstract} and move away from $\bkwrch{k}$, then follow a \emph{concrete} transition of $\tr$, and from the resulting post-state again \emph{abstract} and move in the direction away from $\bkwrch{k}$.
In this way, it may be possible for $\absr{\tr}$ to use the transition $(\sigma_1,\sigma'_1)$ in order to arrive from $\sigma_2$ to $\sigma'_2$, even if $(\sigma_2,\sigma'_2)$ were not a transition of $\tr$ (see~\Cref{fig:arrow-subsumption}).
When this is the case for $(\sigma_1,\sigma'_1),(\sigma_2,\sigma'_2) \in \tr$, the transitions of $\absr{\tr}$ that use the concrete transition $(\sigma_2,\sigma'_2)$ are also possible using the concrete transition $(\sigma_1,\sigma'_1)$; hence
the term generated from $(\sigma_2,\sigma'_2)$ can be discarded in the monotonization of $\tr$ to obtain $\absr{\tr}$, because it is subsumed by the term generated from $(\sigma_1,\sigma'_1)$.
Roughly, \Cref{thm:abstract-diamter-bound} shows that $\Lambda$-PDR converges rapidly whenever there is a small number of transitions that subsume the others, by going from a pre-state $\sigma$ that is ``very far'' from $\bkwrch{k}$ in Hamming distance compared to the pre-states of other transitions, to the post-state $\sigma'$ that is ``very close'' to $\bkwrch{k}$ compared to the post-states of other transitions.
This is an intuition for how a small $\dnfsize{\absr{\tr}}$ can arise from the monotonization of the fully-expanded DNF representation of $\tr$.
(The starting point for monotonization can also be a more succinct DNF representation of $\tr$, in which case the intuition for an even shorter DNF representation of $\absr{\tr}$ is similar.)
If we were not to add the ``bounce back'' transitions to the example of~\Cref{fig:skip-counter}, monotonization of the transition relation would still produce an abstract transition system whose reachable states coincide with $\Lambda$-PDR's frames, and whose diameter corresponds to the number of iterations in which $\Lambda$-PDR converges. However, the bound of~\Cref{thm:abstract-diamter-bound} would then be poor, because in that case the DNF size is a poor estimate of the abstract system's diameter (see~\Cref{ex:skip-counter-pure-exponential}).
When $\bkwrch{k}$ consists of multiple cubes, we generalize \Cref{thm:abstract-diamter-bound} to~\Cref{thm:abstract-hyperdiamter-bound}, bounding the number of frames by a product of monotonizations of $\tr$ and $\Init$. In the proof, the construction involves not an ordinary transition system but a \emph{hyper}transition system: the hypertransition relation $\absr{\tr}$ arrives through concrete transitions at a \emph{set} of states, and abstracts from them to a state ``protected'' by that set, because the abstraction requires a ``protector'' state from every state in $\bkwrch{k}$ (see~\Cref{fig:protected-hull}).
A similar diameter bound using the DNF size of the hypertransition relation $\absr{\tr}$ applies.
We show that $\absr{\tr}$ can be written as a conjunction of per-cube monotonizations, leading to a bound by the product of DNF sizes of monotonizations of $\tr$ and $\Init$ (see~\Cref{sec:hyper-all}).
This technique does not explain rapid convergence of $\Lambda$-PDR in full generality, but provides one explanation for how the abstraction can bring this about.
\subsection{PDR, Revisited}
Through $\Lambda$-PDR, we have shown how PDR's frames perform an abstract interpretation in a domain founded on the monotone theory, and how such an abstraction can lead to faster convergence. We observe that these important characteristics of PDR are concealed in a simple property of PDR's frames: that they can be written in CNF so that every clause excludes at least one state from $\bkwrch{N}$ (\Cref{lem:pdr-also}).
In the monotone theory from above this reads that for every $1 \leq i \leq N$,
\iflong
\else
\vspace{-0.2cm}
\fi
\begin{tcolorbox}[boxsep=0pt]
\begin{enumerate}
\setcounter{enumi}{\value{overview-frame-props}}
\item \label{it:frames-mbasis} $\Frame_i \in \mspan{\bkwrch{N}}$.
\end{enumerate}
\end{tcolorbox}
\iflong
\else
\vspace{-0.2cm}
\fi
\noindent
The frames of $\Lambda$-PDR are the least set of states that satisfy this property together with properties~\ref{it:frames-start}--\ref{it:frames-end} from~\Cref{sec:overview-frame-props} (\Cref{lem:lambda-frames-minimality}), and the frames of PDR overapproximate them (\Cref{cor:lambda-pdr-underapproximates-pdr}).
Property~\ref{it:frames-mbasis} is the regularization in our abstract domain (\Cref{sec:ai}), and we have shown that it can lead to faster convergence than exact post-image computations---although PDR does not necessarily converge in the same number of frames as $\Lambda$-PDR, due to its additional overapproximation and heuristics.
The fact that PDR's frames are not the least to satisfy the above properties can have some benefits. We show two:
\begin{itemize}
\item \textbf{Faster convergence}: In some cases $\Lambda$-PDR performs little or no abstraction over exact forward reachability, but the fact that PDR only samples a subset of the possible lemmas can guarantee fast convergence. We show an example where $\Lambda$-PDR requires an exponential number of frames, whereas a linear number always suffices for standard PDR.
\item \textbf{Frame size}: $\Lambda$-PDR's frames may be (needlessly) complex to represent as a formula. We show an example where some frames of $\Lambda$-PDR necessarily have an exponential DNF or CNF size, whereas standard PDR can converge in the same number of frames that include only a small number of important lemmas.
\end{itemize}
\subsubsection{Discussion: Additional PDR Features}
Our study focuses on what is, in our view, the most basic version of PDR. Our approach provides an interesting starting point to a discussion of two common, more advanced features of PDR.
\para{Other forms of generalization}
\label{sec:overview-inductive-generalization}
Inductive generalization~\cite{ic3} minimizes lemmas using a stronger criterion:
a lemma $c$ can be learned in $\Framepdr_{i+1}$ if it is inductive \emph{relative} to $\Framepdr_i$---whether $\postimage{\tr}{\Framepdr_i \land c} \implies c$, namely, checking that $c$ holds in the post-state while also assuming $c$ in the pre-state. At first sight, this feature is not present in $\Lambda$-PDR, which uses the standard check (\Cref{alg:eepdr}, \cref{ln:eepdr-lemma-check}). Surprisingly, lemmas that PDR can generate using inductive generalization are also present in $\Lambda$-PDR (with $k=N$). This is a consequence of the fact that PDR with inductive generalization still satisfies properties~\ref{it:frames-start}--\ref{it:frames-end}~\cite{ic3,pdr}, as well as property~\ref{it:frames-mbasis}. The optimization of inductive generalization becomes important only when lazily backtracking to refine previous frames.
Other techniques, such as ternary simulation~\cite{pdr}, propagate sets of states to block together. If any of the states is in $\bkwrch{k}$, the resulting lemma is present also in $\Lambda$-PDR.
\para{May-counterexamples}
\label{sec:overview-may-cexs}
Some variants of PDR produce lemmas by blocking may counterexamples~\cite{DBLP:conf/fmcad/GurfinkelI15} that are not necessarily backward reachable, as a way to encourage pushing existing lemmas to later frames.
In $\Lambda$-PDR, all admissible lemmas from $\bkwspan{k}$ are always included, hence may counterexamples are not useful for pushing such lemmas.
However, may counterexamples also mean that lemmas no longer necessarily block states in $\bkwrch{N}$, which could be beneficial if a large $N$ is required to have an inductive invariant $I \in \bkwspan{N}$.
(It is a necessary condition for PDR; in $\Lambda$-PDR it is both necessary and sufficient, see~\Cref{lem:eepdr-lfp}.)
This could be thought of as (heuristically) increasing the set $\bkwrch{k}$. In $\Lambda$-PDR, this results in a richer abstract domain that includes more inductive invariants but leads to tighter frames with less overapproximation. The theoretical ramifications of this beyond $\Lambda$-PDR merit more study.
\subsubsection{Outline}
The rest of the paper is organized as follows:
\Cref{sec:prelim} introduces preliminary notation.
\Cref{sec:monotone} introduces notions from the monotone theory and establishes their connection to $\Lambda$-PDR.
\Cref{sec:ai} proves the connection to abstract interpretation.
\Cref{sec:abstract-diameter-all} develops a bound on the number of iterations of $\Lambda$-PDR for the case that $\bkwrch{k}=\bkcube$ is a single cube, and \Cref{sec:hyper-all} generalizes these results to arbitrary $\bkwrch{k}$.
\Cref{sec:itp-friends} contrasts forward reachability in $\Lambda$-PDR with exact forward reachability and a dual interpolation-based algorithm.
\Cref{sec:vs-pdr} compares $\Lambda$-PDR to standard PDR.
\Cref{sec:related} discusses related work, and~\Cref{sec:conclusion} concludes.
\section{Preliminaries}
\label{sec:prelim}
We consider the safety of transition systems defined over propositional vocabularies.
\para{States, transition systems, inductive invariants}
Given a vocabulary $\voc = \set{p_1,\ldots,p_n}$ of $n$ Boolean variables, a \emph{state} is a \emph{valuation} to $\voc$. The set of states over $\voc$ is denoted $\States$.
If $x$ is a state, $x[p]$ is the value ($\true/\false$ or $1/0$) that $x$ assigns to the variable $p \in \voc$.
A \emph{transition system} is a triple $(\Init,\tr,\Bad)$ where $\Init,\Bad$ are formulas over $\voc$ denoting the set of initial and bad states (respectively), and the \emph{transition relation} $\tr$ is a formula over $\voc \uplus \voc'$, where $\voc' = \{ \prop' \mid \prop \in \voc\}$ is a copy of the vocabulary used to describe the post-state of a transition.
If $\tilde{\voc},\tilde{\voc}'$ are distinct copies of $\voc$, $\tr[\tilde{\voc},\tilde{\voc}']$ denotes the substitution in $\tr$ of each $p \in \voc$ by its corresponding in $\tilde{\voc}$ and likewise for $\voc',\tilde{\voc'}$.
Given a set of states $S \subseteq \States$, the post-image $\tr(S) = \set{\sigma' \mid \exists \sigma \in S. \ (\sigma,\sigma') \models \tr}$.
The reflexive post-image is $\postimage{\tr}{S} = \tr(S) \cup S$.
The reachability diameter of a system is the least $s$ s.t.\ if $\sigma$ is reachable from $\Init$ by $\tr$ in any number of steps, it is reachable in at most $s$ steps.
\begin{changebar}
The set of \emph{$k$-backward reachable states}---those that can reach a state in $\Bad$ along some execution of length at most $k$---is $\bkwrch{k} = \set{\sigma \ | \ \bmc{\tr}{\set{\sigma}}{k} \cap \Bad \neq \emptyset}$.
\end{changebar}
A transition system is \emph{safe} if all the states that are reachable from $\Init$ via any number of steps of $\tr$ satisfy $\neg \Bad$.
An \emph{inductive invariant} is a formula $I$ over $\voc$ such that
\begin{inparaenum}[(i)]
\item $\Init \implies I$,
\item $I \land \tr \implies I'$, and
\item $I \implies \neg\Bad$, where $I'$
denotes the result of substituting $\prop' \in \voc'$ for each $\prop \in \voc$ in $I$,
\end{inparaenum}
and $\varphi \implies \psi$ denotes the validity of the formula $\varphi \to \psi$. In the context of propositional logic, a transition system is safe iff it has an inductive invariant.
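For intuition, the three conditions can be checked by brute force over explicit state sets. The following sketch, with a two-bit toy system of our own (not from the paper), is illustrative only:

```python
def is_inductive_invariant(inv, init, tr, bad):
    """Check, over explicit finite sets of states, that inv satisfies
    (i) Init => I, (ii) I /\ Tr => I', and (iii) I => ~Bad.
    Here `tr` is an explicit set of (pre, post) transition pairs."""
    return (init <= inv                                               # (i)
            and all(post in inv for (pre, post) in tr if pre in inv)  # (ii)
            and inv.isdisjoint(bad))                                  # (iii)

# Toy 2-bit system: 00 -> 01 -> 10 -> 10 (stutters); the bad state is 11.
init = {(0, 0)}
tr = {((0, 0), (0, 1)), ((0, 1), (1, 0)), ((1, 0), (1, 0))}
bad = {(1, 1)}
inv = {(0, 0), (0, 1), (1, 0)}  # every state except the bad one
```

Here $\Init$ by itself is not inductive (condition (ii) fails, since a transition leaves it), while `inv` satisfies all three conditions.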
\para{CNF, DNF, and cubes}
A \emph{literal} $\ell$ is a variable $p$ or its negation $\neg p$.
A \emph{clause} $c$ is a disjunction of
literals.
The empty clause is $\false$.
A formula is in \emph{conjunctive normal form (CNF)} if it is a conjunction of clauses.
A \emph{cube} or \emph{term} $d$ is a conjunction of a consistent set of literals; at times, we also refer directly to the set and write $\ell \in d$. The empty cube is $\true$.
A formula is in \emph{disjunctive normal form (DNF)} if it is a disjunction of terms.
The \emph{domain}, $\cubdom{d}$, of a cube $d$ is the set of variables that appear in it (positively or negatively).
Given a state $\sigma$, we use the state and the (full) cube that consists of all the literals that are satisfied in $\sigma$ interchangeably; the only satisfying valuation of the cube is $\sigma$.
We identify a formula with the set of its satisfying valuations, and a set of valuations with a formula that represents it, chosen arbitrarily (such a formula always exists in propositional logic).
\section{The Monotone Theory for $\Lambda$-PDR}
\label{sec:monotone}
In this section, we present the monotone theory by~\citet{DBLP:journals/iandc/Bshouty95} and our extensions, and use it to derive the relation between successive frames in $\Lambda$-PDR (\Cref{eq:overview-next-frame}).
\Cref{sec:monotone-background} defines \emph{least $b$-monotone overapproximations} $\monox{\cdot}{b}$.
\Cref{sec:monotone-hull} defines the \emph{monotone hull} $\mhull{\cdot}{B}$ w.r.t.\ a set of states $B$, which is a conjunction of monotone overapproximations, and then relates this to $\Lambda$-PDR.
\iflong
\else
All omitted proofs appear in the extended version~\cite{extendedVersion}.
\fi
\subsection{Monotone Overapproximations}
\label{sec:monotone-background}
Our definitions and claims concerning $b$-monotone overapproximations
generalize \citet{DBLP:journals/iandc/Bshouty95} by considering a partial cube $b$, and coincide with the original in the case of a full cube.
\begin{changebar}
\begin{definition}[$b$-Monotone Order~\cite{DBLP:journals/iandc/Bshouty95}]
\label{def:b-monotone-order}
Let $b$ be a cube. We define a partial order over states where $v \leq_b x$ when $x,v$ agree on all variables not present in $b$, and $x$ disagrees with $b$ on all variables on which also $v$ disagrees with $b$:
$\forall p \in \voc. \ x[p] \neq v[p] \mbox{ implies } p \in \cubdom{b} \land v[p]=b[p]$.
\end{definition}
Geometrically, $v \leq_b x$ indicates that $x$ is ``farther away'' from $b$ in the Hamming cube than $v$ from $b$, namely, that there is a shortest path w.r.t.\ Hamming distance between $x$ and its projection onto $b$ that goes through $v$.
\begin{definition}[$b$-Monotonicity~\cite{DBLP:journals/iandc/Bshouty95}]
\label{def:b-monotonicity}
A formula $\psi$ is $b$-monotone for a cube $b$ if
$
\forall v \leq_b x. \ v \models \psi \mbox{ implies } x \models \psi.
$
\end{definition}
That is, if $v$ satisfies $\psi$, so do all the states that are farther away from $b$ than $v$.
For example, if $\psi$ is $000$-monotone and $100 \models \psi$, then because $100 \leq_{000} 111$ (starting in $100$ and moving away from $000$ can reach $111$), also $111 \models \psi$.
In contrast, $100 \not\leq_{000} 011$ (the same process cannot flip the $1$ bit that already disagrees with $000$), so $011$ does not necessarily belong to $\psi$.
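The order can be executed directly. In the brute-force sketch below (ours, for illustration), a state is a tuple of bits, a cube is a partial map from variable index to bit, and the two comparisons above become assertions:

```python
def leq_b(v, x, b):
    """v <=_b x: x and v agree outside the domain of the cube b, and x
    disagrees with b on every variable on which v disagrees with b."""
    return all(x[p] == v[p] or (p in b and v[p] == b[p])
               for p in range(len(v)))

# b = 000 as a full cube over three variables
b000 = {0: 0, 1: 0, 2: 0}
```

For example, `leq_b((1,0,0), (1,1,1), b000)` holds, while `leq_b((1,0,0), (0,1,1), b000)` does not, matching the discussion above.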
The importance of $b$-monotonicity for us is that if $\sigma \models b$, then, as we shall see (in~\Cref{thm:mhull-conjunctive}), any clause $c \subseteq \neg \sigma$ is a $b$-monotone formula.
The reason is that if $v \models c$ and we flip variables to disagree more with $b$, we can only make $v$ satisfy more literals of $c$ than before: all the variables in $b$ appear in the opposite polarity in $\neg \sigma$ and hence also in $c$, so flipping them in $v \models c$ to disagree with $b$ makes them agree with $c$ even more, and the result also satisfies $c$.
Further, the lemmas that PDR learns are not just clauses, but clauses that overapproximate a certain set, hence they are $b$-monotone overapproximations. There are many $b$-monotone overapproximations, but our main workhorse is the least such formula:
\begin{definition}[Least $b$-Monotone Overapproximation~\cite{DBLP:journals/iandc/Bshouty95}]
\label{def:monox}
Given a formula $\varphi$ and a cube $b$, the \emph{least $b$-monotone overapproximation} of $\varphi$ is a formula $\monox{\varphi}{b}$ defined by
\begin{equation*}
x \models \monox{\varphi}{b} \mbox{ iff } \exists v. \ v \leq_b x \land v \models \varphi.
\end{equation*}
\end{definition}
For example, if $100 \models \varphi$, then $100 \models \monox{\varphi}{000}$ because $\monox{\varphi}{000}$ is an overapproximation, and hence $111 \models \monox{\varphi}{000}$ because it is $000$-monotone, as above.
Here, thanks to minimality, $011$ does not belong to $\monox{\varphi}{000}$, unless $000$, $001$, $010$, or $011$ belong to $\varphi$.
$\monox{\varphi}{b}$ is a well-defined overapproximation of $\varphi$. Its main significance for learning theory is that it can be computed efficiently (through the DNF representation we also show below), and that the original $\varphi$ can be recovered as the conjunction of least $b$-monotone overapproximations (with different $b$'s).
Surprisingly, the same overapproximation is related to PDR; we show below that $\monox{\varphi}{b}$ is exactly the conjunction of all the clauses $c$ that overapproximate $\varphi$ and can arise from blocking a state in $b$ (\Cref{thm:mhull-conjunctive}), matching generalization in the construction of frames of $\Lambda$-PDR (\Cref{cor:lambda-pdr-underapproximates-pdr}).
\end{changebar}
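The definition of $\monox{\varphi}{b}$ can likewise be run verbatim over explicit state sets. This brute-force sketch (ours) reproduces the example above for $\varphi = \set{100}$:

```python
from itertools import product

def leq_b(v, x, b):
    # v <=_b x over tuple states and a cube b given as {index: bit}
    return all(x[p] == v[p] or (p in b and v[p] == b[p])
               for p in range(len(v)))

def monox(phi, b, n):
    """Least b-monotone overapproximation: x is a model iff some
    v <=_b x is a model of phi (executed by brute force)."""
    return {x for x in product((0, 1), repeat=n)
            if any(leq_b(v, x, b) for v in phi)}

phi = {(1, 0, 0)}
m = monox(phi, {0: 0, 1: 0, 2: 0}, 3)  # monotonize w.r.t. b = 000
```

For $\varphi = \set{100}$ this yields exactly the four states with the first bit set, containing $111$ but not $011$.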
A technical observation that will prove useful several times is that $\monox{\cdot}{b}$ is a monotone operator:
\begin{lemma}
\label{lem:bshouty-monox-monotone}
If $\varphi_1 \implies \varphi_2$ then $\monox{\varphi_1}{b} \implies \monox{\varphi_2}{b}$.
\end{lemma}
\toolong{
\begin{proof}
Immediate from the definition of $\monox{\cdot}{b}$.
\end{proof}
}
\para{Disjunctive form}
The monotone overapproximation can be related to a DNF representation of the original formula, a fact that we use extensively in \Cref{sec:abstract-diameter-all} and also when we analyze $\Lambda$-PDR on specific examples.
Starting with a DNF representation of $\varphi$, we can derive a DNF representation of $\monox{\varphi}{b}$
by dropping in each term the literals that agree with $b$.
Intuitively, a ``constraint'' that $\sigma \models \ell$ in order to have $\sigma \models \monox{t}{b}$, where $\ell$ agrees with $b$, is dropped because if $\sigma \models \monox{t}{b}$ then flipping a bit of $\sigma$ to disagree with $b$ results in a state $\tilde{\sigma}$
such that also
$\tilde{\sigma} \models \monox{t}{b}$, as $\sigma \leq_{b} \tilde{\sigma}$.
\begin{lemma}[Generalization of~\citet{DBLP:journals/iandc/Bshouty95}, Lemma 1(7)]
\label{lem:bshouty-mon-mindnf}
Let $\varphi = t_1 \lor \ldots \lor t_m$ in DNF. Then the monotonization $\monox{\varphi}{b} \equiv \monox{t_1}{b} \lor \ldots \lor \monox{t_m}{b}$ where $\monox{t_i}{b} \equiv t_i \setminus b = \bigwedge \set{\ell \in t_i \mid \ell \not\in b}$.
\end{lemma}
\toolong{
\begin{proof}
First we argue that for any term $t$, $\monox{t}{b} \equiv t \setminus b$. Denote the rhs by $\psi$.
Let $x \models \monox{t}{b}$. Then there is $v \models t$ such that $v \leq_b x$. Consider a literal $\ell \in \psi$; then $\ell \in t$ and $\ell \not\in b$. Since $v \models t$, the former means that in particular $v \models \ell$. The latter means that to satisfy $v \leq_b x$ necessarily $v,x$ agree on the variable in $\ell$, and hence also $x \models \ell$. This proves $\monox{t}{b} \implies \psi$.
For the other direction, let $x \models \psi$. Let $v$ be obtained from $x$ by setting every variable $p \in \cubdom{b}$ that does not appear in $\psi$ to agree with the corresponding value in $b$; then $v \leq_b x$, since $x$ differs from $v$ only on variables on which $v$ agrees with $b$. Now $v \models \psi$ (since these variables do not appear in $\psi$), and, furthermore, $v \models \ell$ for every $\ell \in t$ that was dropped from $t$ to $\psi$, because exactly the literals that agree with $b$ are dropped, and $v$ agrees with $b$ on their variables. Overall, $v \models t$, which implies $x \models \monox{t}{b}$.
We now claim, more generally, that $\monox{\psi_1 \lor \psi_2}{b} \equiv \monox{\psi_1}{b} \lor \monox{\psi_2}{b}$:
Let $x \models \monox{\psi_1 \lor \psi_2}{b}$. Then there is $v \models \psi_1 \lor \psi_2$ such that $v \leq_b x$. If $v \models \psi_1$, by definition we must have $x \models \monox{\psi_1}{b}$ and in particular $x \models \monox{\psi_1}{b} \lor \monox{\psi_2}{b}$; similarly for $\psi_2$. This shows $\monox{\psi_1 \lor \psi_2}{b} \implies \monox{\psi_1}{b} \lor \monox{\psi_2}{b}$.
As for the other direction, let $x \models \monox{\psi_1}{b} \lor \monox{\psi_2}{b}$. Without loss of generality, assume $x \models \monox{\psi_1}{b}$. Then there is $v \models \psi_1$, and in particular $v \models \psi_1 \lor \psi_2$, such that $v \leq_b x$. So we must have $x \models \monox{\psi_1 \lor \psi_2}{b}$.
\end{proof}
}
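The term-wise computation of the lemma can be cross-checked against the brute-force definition. In this sketch (ours), a term is a partial map from variable index to bit, and the two computations agree:

```python
from itertools import product

def leq_b(v, x, b):
    return all(x[p] == v[p] or (p in b and v[p] == b[p])
               for p in range(len(v)))

def monox_bruteforce(phi, b, n):
    # directly from the definition of the least b-monotone overapproximation
    return {x for x in product((0, 1), repeat=n)
            if any(leq_b(v, x, b) for v in phi)}

def monox_dnf(terms, b, n):
    """Monotonize a DNF term by term, dropping from each term the
    literals that agree with the cube b (the lemma's computation)."""
    dropped = [{p: v for p, v in t.items() if not (p in b and b[p] == v)}
               for t in terms]
    return {x for x in product((0, 1), repeat=n)
            if any(all(x[p] == v for p, v in t.items()) for t in dropped)}

n = 3
terms = [{0: 1, 1: 0}, {2: 1}]  # (x0 /\ ~x1) \/ x2
phi = {x for x in product((0, 1), repeat=n)
       if any(all(x[p] == v for p, v in t.items()) for t in terms)}
b = {0: 0, 2: 1}                # a partial cube
```

On this instance the literal $x_2$ of the second term agrees with $b$ and is dropped, so the monotonization collapses to $\true$; both computations return the full state space.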
A corollary provides a canonical (inefficient) disjunctive form for $\monox{\varphi}{b}$:
\begin{corollary}
\label{lem:monox-disjunction-cubes}
Given a state $v$, we denote
$
\cubemon{v}{b} \eqdef \monox{v}{b} =
\bigwedge{\set{p \ | \ v[p]=\true, \, p \not\in b}} \land
\bigwedge{\set{\neg p \ | \ v[p]=\false, \, \neg p \not\in b}}.
$
Then $\monox{\varphi}{b} \equiv \bigvee_{v \models \varphi}{\cubemon{v}{b}}$.
\end{corollary}
\toolong{
\begin{proof}
Apply~\Cref{lem:bshouty-mon-mindnf} to the representation of $\varphi$ as the disjunction of all satisfying states.
\end{proof}
}
\subsection{Monotone Hull}
\label{sec:monotone-hull}
We now define the monotone hull, which is a conjunction of $b$-monotone overapproximations over all $b$ from a fixed set of states $B$ (in the case of $\Lambda$-PDR, $B = \bkwrch{k}$). We start with the definition that uses a conjunction over monotone-overapproximations w.r.t.\ individual states, and then extend this to the union of (partial) cubes.
\begin{definition}[Monotone Hull]
The \emph{monotone hull} of a formula $\varphi$ w.r.t.\ a set of states $B$ is $\mhull{\varphi}{B} = \bigwedge_{b \in B}{\monox{\varphi}{b}}$.
\end{definition}
The monotone hull can be simplified to use a succinct DNF representation of the basis $B$ instead of a conjunction over all states.
(This is the motivation for generalizing $\monox{\cdot}{b}$ to a cube $b$ in \Cref{sec:monotone-background}.)
\begin{lemma}
\label{lem:mhull-dnf-base}
If $B = b_1 \lor \ldots \lor b_m$ and $b_1,\ldots,b_m$ are cubes, then $\mhull{\varphi}{B} \equiv \monox{\varphi}{b_1} \land \ldots \land \monox{\varphi}{b_m}$.
\end{lemma}
\toolong{
\begin{proof}
It follows from the definition that $\mhull{\cdot}{B}$ distributes over union in $B$.
Hence, $\mhull{\varphi}{B} \equiv \mhull{\varphi}{b_1} \land \ldots \land \mhull{\varphi}{b_m}$.
Further, $\mhull{\varphi}{b_i} = \bigwedge_{\sigma_b \in b_i}{\mhull{\varphi}{\set{\sigma_b}}} = \bigwedge_{\sigma_b \in b_i}{\monox{\varphi}{\sigma_b}}$.
It remains to argue that $\monox{\varphi}{b_i} \equiv \bigwedge_{\sigma_b \in b_i}{\monox{\varphi}{\sigma_b}}$.
\noindent
$\subseteq$:
By definition, if $\sigma \models \monox{\varphi}{b_i}$ then there is $x \models \varphi$ s.t.\ $x \leq_{b_i} \sigma$. In particular, $x \leq_{\sigma_b} \sigma$ for every $\sigma_b \models b_i$ (because $\sigma_b$ agrees with all the literals in $b_i$), and hence $\sigma \models \monox{\varphi}{\sigma_b}$ for every such $\sigma_b$.
\noindent
$\supseteq$:
Suppose that $\sigma$ is a model of the rhs. Let $\sigma_b$ be the state obtained from $\sigma$ by setting the variables present in $b_i$ to be as in $b_i$ (geometrically, this is the projection of $\sigma$ onto the cube $b_i$). We have $\sigma_b \models b_i$. Thus $\sigma \models \monox{\varphi}{\sigma_b}$, so there exists $x \models \varphi$ such that $x \leq_{\sigma_b} \sigma$. But because $\sigma_b$ agrees with $\sigma$ on all literals except those also present in $b_i$, this implies also that $x \leq_{b_i} \sigma$. Hence also $\sigma \models \monox{\varphi}{b_i}$.
\end{proof}
}
Note that when $B = b$ is a single cube, $\mhull{\varphi}{b} = \monox{\varphi}{b}$.
Our main technical observation, connecting the monotone hull to~\Cref{alg:eepdr}, is that the monotone hull has an equivalent CNF form, as the conjunction of all overapproximating clauses that exclude a state in $B$:
\begin{theorem}
\label{thm:mhull-conjunctive}
\label{lem:monox-conjunction-clauses}
$\mhull{\varphi}{B} \equiv \bigwedge{\set{c \ | \ \mbox{$c$ is a clause, } \varphi \implies c \mbox{, and } \exists b \in B. \, b \not\models c}}.$
\end{theorem}
\begin{proof}
\noindent
$\Longrightarrow$: Let $c$ be a clause as in the rhs. Then there exists $b \in B$ s.t.\ $b \not\models c$. It suffices to show that $\monox{\varphi}{b} \implies c$. Recall that $c$ is a disjunction of literals; since $b \not\models c$, all those literals are falsified in $b$. Hence, by~\Cref{lem:bshouty-mon-mindnf}, $\monox{c}{b} \equiv c$.
Now $\varphi \implies c$ (by the choice of $c$), and~\Cref{lem:bshouty-monox-monotone} yields $\monox{\varphi}{b} \implies \monox{c}{b} \equiv c$.
\noindent
$\Longleftarrow$:
Let $\sigma$ be a model of the rhs. We want to prove that $\sigma \models \monox{\varphi}{b}$ for every $b \in B$.
Assume otherwise.
Take $d$ as the conjunction of all literals that hold in both $\sigma$ and $b$ (if this set is empty, $d=\true$). Clearly $d$ is a term and $b,\sigma \models d$.
Take $c = \neg d$; then $c$ is a clause and $b \not\models c$.
It remains to show that $\varphi \implies c$, because then $c$ belongs to the rhs but $\sigma \not\models c$, in contradiction to the premise.
To see this, let $x \models \monox{\varphi}{b}$ and suppose for contradiction that $x \not\models c$, i.e., $x \models d$, so $x$ agrees with $\sigma$ (and with $b$) on all the variables of $d$.
Since $x \models \monox{\varphi}{b}$, there is $v \models \varphi$ with $v \leq_b x$. On a variable $p$ of $d$ we have $x[p]=b[p]$, so $v \leq_b x$ forces $v[p]=x[p]=\sigma[p]$; on any other variable, $\sigma[p] \neq b[p]$, so if $v[p] \neq \sigma[p]$ then $v[p]=b[p]$. Either way, the condition of~\Cref{def:b-monotone-order} gives $v \leq_b \sigma$, hence $\sigma \models \monox{\varphi}{b}$, contradicting our assumption. Thus $\monox{\varphi}{b} \implies c$, and since $\varphi \implies \monox{\varphi}{b}$, in particular $\varphi \implies c$. The claim follows.
\end{proof}
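The theorem can be validated by brute force on small instances, computing the monotone hull once as an intersection of monotonizations and once as the conjunction of all overapproximating clauses that exclude a state of $B$. The sketch below (ours, on an arbitrary small instance) does exactly that:

```python
from itertools import product

def leq_b(v, x, b):
    return all(x[p] == v[p] or (p in b and v[p] == b[p])
               for p in range(len(v)))

def mhull(phi, B, n):
    """Monotone hull as defined: intersect the b-monotone
    overapproximations over all states b in B."""
    out = set(product((0, 1), repeat=n))
    for b in B:
        bb = dict(enumerate(b))
        out &= {x for x in product((0, 1), repeat=n)
                if any(leq_b(v, x, bb) for v in phi)}
    return out

def mhull_via_clauses(phi, B, n):
    """The CNF characterization: conjoin every clause that
    overapproximates phi and excludes some state of B."""
    states = set(product((0, 1), repeat=n))
    out = set(states)
    for sigma in B:
        # every clause excluding sigma is a sub-clause of ~sigma
        for keep in product((False, True), repeat=n):
            clause = {p: 1 - sigma[p] for p in range(n) if keep[p]}
            sat = {x for x in states
                   if any(x[p] == v for p, v in clause.items())}
            if phi <= sat:  # the clause overapproximates phi
                out &= sat
    return out

phi = {(1, 1, 0), (0, 1, 1)}
B = [(1, 0, 0), (1, 1, 1)]
```

The two computations coincide, and the result overapproximates $\varphi$, as the theorem and \Cref{lem:mhull-overapproximation} predict.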
We can now derive~\Cref{eq:overview-next-frame}:
\begin{corollary}
\label{lem:justify-overview-next-frame}
In $\Lambda$-PDR (\Cref{alg:eepdr}), $\Frame_{i+1} = \mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k}}$.
\end{corollary}
\begin{proof}
$\Frame_{i+1}$ is the conjunction formed by the process that for each $\sigma_b \in \bkwrch{k}$ (\cref{ln:eepdr-bmc} of~\Cref{alg:eepdr}) iterates in~\cref{ln:eepdr-for-clause} over all clauses $c$ that exclude $\sigma_b$, and conjoins each such clause $c$ if it overapproximates $\postimage{\tr}{\Frame_i}$ (\cref{ln:eepdr-lemma-check}). By~\Cref{thm:mhull-conjunctive} this is $\mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k}}$.
\end{proof}
\begin{example}
\label{ex:running-all-frames}
The above lemmas are the basis for our presentation in~\Cref{sec:overview-example-frame} of $\Frame_1$ on~\Cref{fig:skip-counter} as~\Cref{eq:running-frame-1}.
Let us now use these lemmas to describe later frames in that execution.
Recall that $\bkwrch{k}$ is the cube $\vec{x} = 10\ldots0 \land y_n y_{n-1} \ldots y_1 = 11\ldots1 \land z=1$; denote it by $\bkcube$.
For the next frame,
$\postimage{\tr}{\Frame_1}=\Frame_1 \lor (\vec{x}=10\ldots01 \land \vec{y}=0\ldots0 \land z=0)$. Then
\begin{align*}
\Frame_2
&\underset{\rm \Cref{lem:justify-overview-next-frame}}{=}
\mhull{\postimage{\tr}{\Frame_1}}{\bkwrch{k}}
\underset{\rm \Cref{lem:mhull-dnf-base}}{=}
\monox{\postimage{\tr}{\Frame_1}}{\bkcube}
\\
&\underset{\rm \Cref{lem:bshouty-mon-mindnf}}{=}
\monox{\Frame_1}{\bkcube} \lor
\monox{\vec{x}=10\ldots01 \land \vec{y}=0\ldots0 \land z=0}{\bkcube}
\\
&\underset{\rm \Cref{lem:bshouty-mon-mindnf}}{=}
\Frame_1 \lor (x_0=1 \land \vec{y}=0\ldots0 \land z=0).
\end{align*}
For the next frame, $\postimage{\tr}{\Frame_2} = \Frame_2 \lor (x_0=0 \land \vec{y}=0\ldots0 \land z=0) \land \neg(\vec{x}=1\ldots0 \land \vec{y}=0\ldots0 \land z=0)$. This is equivalent to $(\vec{y}=0\ldots0 \land z=0) \land \neg(\vec{x}=1\ldots0 \land \vec{y}=0\ldots0 \land z=0)$, which is already $\bkcube$-monotone and hence this is also $\Frame_3 = \mhull{\postimage{\tr}{\Frame_2}}{\bkcube}$.
$\postimage{\tr}{\Frame_3}=\Frame_3$, and so $\mhull{\postimage{\tr}{\Frame_3}}{\bkwrch{k}} = \Frame_3$ (see~\Cref{lem:mhull-idempotence} below), and the algorithm converges.
\end{example}
\begin{example}
\label{ex:monotone-several-cubes}
In the previous example, $\bkwrch{k}$ consisted of a single cube. To exemplify the more general case, consider a system over $n$ variables $x_1,\ldots,x_n$, with $\Init = (x_1=\ldots=x_n=0)$, $\Bad$ the set of states with exactly one variable $1$, and $\tr$ that non-deterministically chooses some $i \neq j$ with $x_i=x_j=0$ and sets $x_i \gets 1$ and $x_j \gets 1$.
Take $k=0$. Then $\bkwrch{k}=\bkcube_1 \lor \ldots \lor \bkcube_n$ where $\bkcube_i$ is $x_i=1 \land \bigwedge_{j \neq i}{(x_j=0)}$.
After one step, $\postimage{\tr}{\Frame_0} = (0\ldots0) \lor \bigvee_{i_1 \neq i_2}{\left(x_{i_1}=x_{i_2}=1 \land \bigwedge_{j \not\in \set{i_1,i_2}}{(x_j=0)}\right)}$ is the set of states where there are zero or two variables $1$.
Then for every $b_i$,
\begin{align*}
\monox{\postimage{\tr}{\Frame_0}}{b_i}
\underset{\rm \Cref{lem:bshouty-mon-mindnf}}{=}
(x_i=0)
\lor
\left(\bigvee_{i_1 \neq i_2, i\not\in \set{i_1,i_2}}{x_{i_1}=x_{i_2}=1 \land x_i=0}\right)
\lor
\left(\bigvee_{i_1 \neq i_2, i=i_1}{x_{i_2}=1}\right)
= (x_i=0) \lor \bigvee_{j \neq i}{(x_j=1)}.
\end{align*}
Hence,
$
\Frame_1 \underset{\rm \Cref{lem:justify-overview-next-frame}}{=} \mhull{\postimage{\tr}{\Frame_0}}{\bkwrch{k}} \underset{\rm \Cref{lem:mhull-dnf-base}}{=} \bigwedge_{i}{\monox{\postimage{\tr}{\Frame_0}}{b_i}} = \bigwedge_{i}{\left((x_i=0) \lor \bigvee_{j \neq i}{(x_j=1)}\right)},
$
the set of states in which the number of variables assigned $1$ is not exactly one.
After another step, $\postimage{\tr}{\Frame_1}=\Frame_1$ and so $\mhull{\postimage{\tr}{\Frame_1}}{\bkwrch{k}} = \Frame_1$ (see~\Cref{lem:mhull-idempotence} below), and the algorithm converges.
\end{example}
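The frame sequence of this example can be recomputed independently by brute force. The following sketch (ours) iterates the recurrence $\Frame_{i+1} = \mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k}}$ with $\Frame_0 = \Init$, for $n=4$:

```python
from itertools import product, combinations

def leq_b(v, x, b):
    return all(x[p] == v[p] or (p in b and v[p] == b[p])
               for p in range(len(v)))

def mhull(phi, B, states):
    out = set(states)
    for b in B:
        bb = dict(enumerate(b))
        out &= {x for x in states if any(leq_b(v, x, bb) for v in phi)}
    return out

n = 4
states = set(product((0, 1), repeat=n))
init = {(0,) * n}
bad = {s for s in states if sum(s) == 1}  # exactly one variable is 1

def post(S):
    """Reflexive post-image: pick i != j with x_i = x_j = 0, set both to 1."""
    out = set(S)
    for s in S:
        for i, j in combinations(range(n), 2):
            if s[i] == 0 and s[j] == 0:
                t = list(s); t[i] = 1; t[j] = 1
                out.add(tuple(t))
    return out

frames = [init]                  # Frame_0 = Init; k = 0, so B = Bad
while True:
    nxt = mhull(post(frames[-1]), bad, states)
    if nxt == frames[-1]:
        break
    frames.append(nxt)
```

The iteration stabilizes after one step, with $\Frame_1$ exactly the states in which the number of variables assigned $1$ differs from one.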
\para{Additional lemmas}
Before proceeding, we state a few helpful, simple lemmas that we use later:
\begin{lemma}
\label{lem:mhull-overapproximation}
$\varphi \implies \mhull{\varphi}{B}$.
\end{lemma}
\toolong{
\begin{proof}
$\varphi \implies \monox{\varphi}{b}$ for every $b \in B$, from the definition of $b$-monotone overapproximation. Hence also $\varphi \implies \bigwedge_{b \in B}{\monox{\varphi}{b}}$.
\end{proof}
}
\begin{lemma}
\label{lem:mhull-monotonicity}
If $\varphi_1 \implies \varphi_2$ then $\mhull{\varphi_1}{b} \implies \mhull{\varphi_2}{b}$.
\end{lemma}
\toolong{
\begin{proof}
$\monox{\varphi_1}{b} \implies \monox{\varphi_2}{b}$ for every $b \in B$ by~\Cref{lem:bshouty-monox-monotone}, so if $\sigma \models \bigwedge_{b \in B}{\monox{\varphi_1}{b}}$ it also satisfies $\sigma \models \bigwedge_{b \in B}{\monox{\varphi_2}{b}}$.
\end{proof}
}
\begin{lemma}
\label{lem:mhull-idempotence}
The monotone hull is idempotent, that is, $\mhull{\mhull{\varphi}{B}}{B} \equiv \mhull{\varphi}{B}$.
\end{lemma}
\toolong{
\begin{proof}
We claim that for every $b \in B$, $\monox{\mhull{\varphi}{B}}{b} \equiv \monox{\varphi}{b}$, which implies $\mhull{\mhull{\varphi}{B}}{B} = \bigwedge_{b \in B}{\monox{\mhull{\varphi}{B}}{b}} \equiv \bigwedge_{b \in B}{\monox{\varphi}{b}} = \mhull{\varphi}{B}$.
Let $b \in B$. $\monox{\varphi}{b} \implies \monox{\mhull{\varphi}{B}}{b}$ because $\varphi \implies \mhull{\varphi}{B}$ (\Cref{lem:mhull-overapproximation}) and by~\Cref{lem:bshouty-monox-monotone}.
For the converse, $\mhull{\varphi}{B} \implies \monox{\varphi}{b}$ from the definition, so by~\Cref{lem:bshouty-monox-monotone},
$\monox{\mhull{\varphi}{B}}{b} \implies \monox{\monox{\varphi}{b}}{b} \equiv \monox{\varphi}{b}$, where the last equivalence holds because $\monox{\cdot}{b}$ is idempotent from its definition as the least $b$-monotone overapproximation.
\end{proof}
}
\section{Abstract Interpretation in The Monotone Span of $\bkwrch{k}$}
\label{sec:ai}
In this section, we cast $\Lambda$-PDR as an abstract interpretation in a new logical abstract domain of the \emph{monotone span of $\bkwrch{k}$}.
We first discuss abstract interpretation in general, and then develop the notion of a monotone span. We then define the abstract domain and show the connection to $\Lambda$-PDR.
\subsection{Background: Abstract Interpretation}
\label{sec:ai-background}
In this section, we review the basics of abstract interpretation~\cite{DBLP:conf/popl/CousotC77}; see~\cite{urban2015static,rival2020introduction} for complete and general presentations.
A \emph{complete join-semilattice} is a tuple $\langle D, \sleq, \join, \bot \rangle$ where $D$ is partially-ordered by $\sleq$, $\bigjoin X$ is the least upper bound of every $X \subseteq D$ ($\forall x \in X. \ x \sleq \bigjoin X$, and $\bigjoin X$ is the least element w.r.t.\ $\sleq$ that satisfies this), and $\bot$ is the minimal element ($\forall x \in D. \ \bot \sleq x$).
A \emph{chain} is a sequence of elements from $D$ satisfying $x_1 \sleq x_2 \sleq \ldots$, and a \emph{strictly ascending chain} if additionally $x_i \neq x_{i+1}$ for every $i$. The lattice's \emph{height} is the maximal length of a strictly ascending chain.
We consider finite domains, where in particular the height is also finite.
A function $F: D \to D$ is Scott-continuous if for every chain $X=x_0,x_1,\ldots \subseteq D$ it holds that $F(\bigjoin_{x \in X} x) = \bigjoin_{x \in X}{F(x)}$.
By the Knaster-Tarski theorem, such $F$ has a least fixed-point (lfp)---the least $x$ such that $F(x)=x$---and by Kleene's theorem it is $\bigjoin_{i \geq 0}{F^i(\bot)}$. When
the domain's height is also finite, the sequence $\set{F^i(\bot)}_{i \geq 0}$ converges to the lfp.
In our setting, the \emph{concrete domain} is the join-semilattice powerset domain of the set of states, $\mathcal{C} = \langle D=2^{\States}, \subseteq, \cup, \emptyset \rangle$.
An \emph{abstract domain} is a join-semilattice $\mathcal{A} = \langle \abs{D}, \abs{\sleq}, \abs{\join}, \abs{\bot} \rangle$.
An \emph{abstraction function} is a monotone function $\alpha: D \to \abs{D}$, that is, $\forall S_1 \subseteq S_2 \in D. \ \alpha(S_1) \abs{\sleq} \alpha(S_2)$.
A \emph{concretization function} is a monotone function $\gamma: \abs{D} \to D$, that is, $\forall a_1 \abs{\sleq} a_2 \in \abs{D}. \ \gamma(a_1) \subseteq \gamma(a_2)$.
There is a \emph{Galois connection} between $(D, \subseteq)$ and $(\abs{D}, \abs{\sleq})$ through $(\alpha,\gamma)$, denoted $(D, \subseteq) \galois{\alpha}{\gamma} (\abs{D}, \abs{\sleq})$, if $\forall S \in D, a \in \abs{D}. \ \alpha(S) \abs{\sleq} a \Leftrightarrow S \subseteq \gamma(a)$.
Let $(\Init,\tr)$ be a transition system.
Define the concrete transformer $F_{\Init,\tr}: D \to D$ by $F_{\Init,\tr}(S) = \tr(S) \cup \Init$. It is Scott-continuous, since
for every increasing $\subseteq$-chain $X \subseteq D$, we have $F_{\Init,\tr}(\bigcup_{S \in X} S) = \bigcup_{S \in X}{F_{\Init,\tr}(S)}$.
Its least fixed-point $\textit{lfp}(F_{\Init,\tr})$ is the set of reachable states of $(\Init,\tr)$.
The corresponding \emph{best abstract transformer} is given by $\abs{F}_{\Init,\tr}(a) = \alpha(F_{\Init,\tr}(\gamma(a)))$. $\abs{F}_{\Init,\tr}$ is also Scott-continuous, and there is
a \emph{fixed-point transfer} from $F$ to $\abs{F}$: $\textit{lfp}(\abs{F}_{\Init,\tr}) = \alpha(\textit{lfp}(F_{\Init,\tr}))$.
It follows that $\textit{lfp}(\abs{F}_{\Init,\tr})$ is the least abstraction of the set of reachable states, and it is obtained by the chain
\begin{equation*}
\abs{\bot} \ \abs{\sleq} \ (\abs{F}_{\Init,\tr})^{1}(\abs{\bot}) \ \abs{\sleq} \ (\abs{F}_{\Init,\tr})^2(\abs{\bot}) \ \abs{\sleq} \ \ldots
\end{equation*}
at its convergence point, reached at a finite $i^*$ with $(\abs{F}_{\Init,\tr})^{i^*}(\abs{\bot}) = (\abs{F}_{\Init,\tr})^{i^*+1}(\abs{\bot})$ (due to the finite height of $\abs{D}$).
We call the chain the \emph{Kleene iterations with the best transformer}, and overall it converges to the most precise sound inductive invariant in the abstract domain.
Another way to phrase the same chain, denoting $\xi_i=(\abs{F}_{\Init,\tr})^{i}(\abs{\bot})$, is by
\begin{align*}
\xi_1 = \alpha(\Init) \qquad \qquad \xi_{i+1}=\xi_i \mathbin{\abs{\join}} \alpha(\tr(\gamma(\xi_i)))=\alpha(\postimage{\tr}{\gamma(\xi_i)}),
\end{align*}
because
$\alpha(\gamma(\xi_i)) \abs{\sleq} \xi_i \abs{\sleq} \xi_{i+1}$ and $\alpha$ distributes over unions, and therefore
$\xi_{i+1}=\abs{F}_{\Init,\tr}(\xi_i) = \abs{F}_{\Init,\tr}(\xi_i) \mathbin{\abs{\join}} \alpha(\gamma(\xi_i)) = \alpha(\tr(\gamma(\xi_i)) \cup \Init \cup \gamma(\xi_i)) = \alpha(\postimage{\tr}{\gamma(\xi_i)})$, using $\Init \subseteq \gamma(\xi_i)$ for $i \geq 1$.
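The Kleene chain itself is a short loop. The following generic sketch (ours, with a toy eight-state counter transition relation of our own) computes $\textit{lfp}(F_{\Init,\tr})$ in the concrete powerset domain:

```python
def kleene_lfp(F, bot):
    """Iterate bot, F(bot), F(F(bot)), ... until a fixed point is
    reached; terminates for a monotone F over a finite-height domain."""
    x = bot
    while True:
        y = F(x)
        if y == x:
            return x
        x = y

# Toy system over states {0..7}: increment modulo 8, but state 5 is a sink.
def tr(S):
    return {s if s == 5 else (s + 1) % 8 for s in S}

# Concrete transformer F(S) = tr(S) u Init with Init = {0}.
reach = kleene_lfp(lambda S: tr(S) | {0}, set())
```

The least fixed point is the set of reachable states, here $\{0,\ldots,5\}$; replacing $F$ by the best abstract transformer $\alpha \circ F_{\Init,\tr} \circ \gamma$ yields the abstract chain $\xi_i$ above.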
\subsection{Monotone Basis and Monotone Span}
\label{sec:monotone-basis}
We define the abstract domain in which $\Lambda$-PDR operates using the notion of a monotone span.
\begin{definition}[Monotone Basis~\cite{DBLP:journals/iandc/Bshouty95}]
\label{def:monotone-basis}
A \emph{monotone basis} is a set of states $B$.
It is a basis \emph{for a formula} $\varphi$ if
$
\varphi \equiv \mhull{\varphi}{B}.
$
\end{definition}
\begin{definition}[Monotone Span]
\label{def:monotone-span}
$\mspan{B} = \set{\mhull{\varphi}{B} \mid \varphi \mbox{ over } \voc}$,
the set of formulas for which $B$ is a monotone basis.
\end{definition}
\citet{DBLP:journals/iandc/Bshouty95} showed that $\varphi \in \mspan{B}$ iff
there exist clauses $c_1,\ldots,c_s$ such that $\varphi \equiv c_1 \land \ldots \land c_s$ and for every $1 \leq i \leq s$ there exists $b_j \in B$ such that $b_j \not \models c_i$. (This is also a corollary of~\Cref{thm:mhull-conjunctive}.)
The monotone span is thus the set of all formulas that can be written in CNF using clauses that exclude states from the basis.
A consequence of this is that it is closed under conjunction:
\begin{lemma}
\label{lem:mspan-conjunctive-closed}
If $\varphi_1, \varphi_2 \in \mspan{B}$ then also $\varphi_1 \land \varphi_2 \in \mspan{B}$.
\end{lemma}
\begin{proof}
By~\Cref{thm:mhull-conjunctive}, $\varphi_1 \equiv \bigwedge{\set{c \ | \ \mbox{$c$ is a clause, } \varphi_1 \implies c \mbox{ and } \exists b \in B. \, b \not\models c}}$, likewise for $\varphi_2$. The set $\set{c \ | \ \mbox{$c$ is a clause, } \varphi_1 \land \varphi_2 \implies c \mbox{, and } \exists b \in B. \, b \not\models c}$ includes the conjuncts associated with both $\varphi_1,\varphi_2$ (and more), and so $\mhull{\varphi_1 \land \varphi_2}{B} \implies \varphi_1 \land \varphi_2$. The other implication is by~\Cref{lem:mhull-overapproximation}.
\end{proof}
\subsection{Abstract Interpretation in the Monotone Span}
For a set of states $B$, we define the abstract domain $\madom{B}=\langle \mspan{B}, \implies, \join_{B}, \false \rangle$, a logical abstract domain~\cite{DBLP:conf/popl/GulwaniMT08} consisting of the set of (propositional) formulas for which $B$ is a monotone basis (\Cref{def:monotone-basis}), ordered by logical implication, with bottom element $\false$. The existence of $\join_B$ relies on~\Cref{lem:mspan-conjunctive-closed} (the least upper-bound is the conjunction of all upper-bounds), and $\false \in \mspan{B}$ because $\mhull{\false}{B}=\false$, seeing that $\monox{\false}{b}=\false$ for every $b$.
To define a Galois connection~\cite{DBLP:conf/popl/CousotC77} between sets of concrete states and formulas in $\mspan{B}$, we use the \emph{concretization} $\gamma(\varphi) = \set{\sigma \, | \, \sigma \models \varphi}$ (in the sequel, we refer to $\gamma$ as the identity function, by our convention of equating formulas with the set of states they represent).
The best abstraction is expressed by $\malpha{B}(S)=\mhull{S}{B}$:
\begin{lemma}
\label{lem:best-abstraction}
Let $S \subseteq \States$. Then $\mhull{S}{B}$ is the least overapproximation of $S$ in $\mspan{B}$, namely, $\mhull{S}{B} \implies \varphi$ for every $\varphi \in \mspan{B}$ s.t.\ $S \implies \varphi$.
\end{lemma}
\begin{proof}
Since $\varphi \in \mspan{B}$, by~\Cref{thm:mhull-conjunctive}, $\varphi \equiv \bigwedge_{i}{c_i}$ where each clause $c_i$ excludes some $a_i \in B$. If $\malpha{B}(S) \notimplies \varphi$, then there is some $c_i$ and a corresponding $a_i \in B$ that it excludes such that $\malpha{B}(S) \notimplies c_i$. This means that $\monox{S}{b} \notimplies c_i$ for every $b \in B$. But then in particular $\monox{S}{a_i} \notimplies c_i$, even though $S \implies c_i$ and $a_i \not\models c_i$, which is a contradiction to~\Cref{lem:monox-disjunction-cubes} for $\monox{S}{a_i}$.
\end{proof}
\begin{lemma}
There is a Galois connection $(2^{\States}, \subseteq) \galois{\malpha{B}}{\gamma} (\mspan{B}, \implies)$.\footnote{
Up to logical equivalence, this is a Galois \emph{insertion}, as $\malpha{B}(\gamma(\psi))=\mhull{\psi}{B}\equiv\psi$ for $\psi \in \mspan{B}$.
}
\end{lemma}
\begin{proof}
$\malpha{B}$ is monotone by~\Cref{lem:mhull-monotonicity}.
$\gamma$ is also monotone.
Let $\varphi \in \mspan{B}$ and $S \subseteq \States$.
If $\malpha{B}(S) \implies \varphi$ then $S \subseteq \gamma(\varphi)$ since $S \subseteq \mhull{S}{B}$ (\Cref{lem:mhull-overapproximation}).
If $S \subseteq \gamma(\varphi)$ then $\malpha{B}(S) \implies \varphi$ by~\Cref{lem:best-abstraction}.
\end{proof}
\begin{remark}[Disjunctive completion]
When $B = \bkcube_1 \lor \ldots \lor \bkcube_m$
is a disjunction of multiple cubes, the domain is not disjunctively-complete~\cite{POPL:CC79}: if $\varphi_1,\varphi_2 \in \mspan{B}$, it could be that $\varphi_1 \lor \varphi_2 \not\in \mspan{B}$.
However, for a single cube $\bkcube$, the join operation of $\madom{\bkcube}$ is disjunction, as follows from~\Cref{lem:bshouty-mon-mindnf}.
In this case, a definition of the abstraction $\malpha{\bkcube}$ through the representation function $\beta_{\bkcube}(\sigma)=\moncube{\sigma}{\bkcube}$ is straightforward: $\malpha{\bkcube}(S) = {\bigjoin_{\sigma \in S}}{\beta_{\bkcube}(\sigma)}$, which is exactly the statement of~\Cref{lem:monox-disjunction-cubes}.
\end{remark}
\begin{remark}[As a reduced product domain]
One way to understand $\madom{B}$ when $B = \bkcube_1 \lor \ldots \lor \bkcube_m$ is as a reduced product~\cite{POPL:CC79} of the per-cube domains: $\madom{B}$ (quotient on logical equivalence) is isomorphic to $\bigotimes_{i}{\madom{\bkcube_i}}$.
The Cartesian product domain is over $m$-tuples of formulas, $\bigtimes_{i}{\mspan{\bkcube_i}}$, ordered by $(\varphi^1_1,\ldots,\varphi^1_m) \sleq (\varphi^2_1,\ldots,\varphi^2_m) \iff \bigwedge_{i=1}^{m}{(\varphi^1_i \implies \varphi^2_i)}$, with concretization $\gamma_{\times \bkcube_i}(\varphi_1,\ldots,\varphi_m)=\bigcap_{i=1}^{m}{\gamma(\varphi_i)}$.
The reduced product quotients the Cartesian product w.r.t.\ having the same concretization.
This is isomorphic to $\madom{B}$ because $\gamma_{\times \bkcube_i}(\varphi_1,\ldots,\varphi_m) = \gamma(\bigwedge_{i=1}^{m}{\varphi_i})$, and if $\varphi_i \in \mspan{\bkcube_i}$ then $\bigwedge_{i=1}^{m}{\varphi_i} \in \mspan{B}$.
\end{remark}
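To make the hull and abstraction operators concrete, the following is a minimal Python sketch (an encoding of ours, not part of the formal development): states are bit-tuples, a basis is a set of full states, and $\moncube{\sigma}{b}$ is represented as the partial assignment keeping the literals of $\sigma$ on which $\sigma$ and $b$ disagree.

```python
from itertools import product

def all_states(n):
    return [tuple(bits) for bits in product((0, 1), repeat=n)]

def moncube(sigma, b):
    # literals of sigma on the positions where sigma disagrees with b
    return {i: sigma[i] for i in range(len(sigma)) if sigma[i] != b[i]}

def monox(S, b, n):
    # monotonization of a set of states: the union of the per-state cubes
    return {t for t in all_states(n)
            if any(all(t[i] == v for i, v in moncube(s, b).items()) for s in S)}

def mhull(S, basis, n):
    # monotone hull w.r.t. a basis: intersect the per-basis monotonizations
    out = set(all_states(n))
    for b in basis:
        out &= monox(S, b, n)
    return out

n = 4
b = (1, 0, 0, 1)                      # plays the role of a bad state
S = {(0, 0, 0, 0), (0, 1, 1, 0)}
hull = mhull(S, [b], n)
assert S <= hull                      # overapproximation
assert b not in hull                  # the basis state stays excluded
assert mhull(hull, [b], n) == hull    # idempotence: the hull is in the span
T = S | {(0, 0, 1, 0)}
assert hull <= mhull(T, [b], n)       # monotonicity
# for a single basis state, monotonization commutes with union, so the join
# of the single-cube domain is plain disjunction
S1, S2 = {(0, 0, 0, 0)}, {(0, 1, 1, 0)}
assert monox(S1 | S2, b, n) == monox(S1, b, n) | monox(S2, b, n)
```

The brute-force enumeration over $2^n$ states is, of course, only for illustration on small instances.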
\begin{wrapfigure}{r}{0.33\textwidth}
\vspace{-0.8cm}
\begin{minipage}{0.33\textwidth}
\begin{algorithm}[H]
\caption{Kleene Iterations in $\madom{\bkwrch{k}}$}
\label{alg:eepdr-ai}
\begin{algorithmic}[1]
\begin{footnotesize}
\Procedure{$\Lambda$-PDR}{$\Init$, $\tr$, $\Bad$, $k$}
\State $i \gets 0$
\State $\Frameai_{-1} \gets \false$
\State $\Frameai_0 \gets \malpha{\bkwrch{k}}(\Init)$ $\label{ln:eepdr-ai:frame0}$
\While{$\Frameai_{i} \notimplies \Frameai_{i-1}$}
\State $\Frameai_{i+1} \gets \malpha{\bkwrch{k}}(\postimage{\tr}{\Frameai_i})$ $\label{ln:eepdr-ai:transformer}$
\State $i \gets i+1$
\EndWhile
\State \Return $\Frameai_i$
\EndProcedure
\end{footnotesize}
\end{algorithmic}
\end{algorithm}
\end{minipage}
\vspace{-0.6cm}
\end{wrapfigure}
\para{$\Lambda$-PDR as Kleene iterations}
\Cref{alg:eepdr-ai} shows Kleene iterations in $\madom{\bkwrch{k}}$ with the best abstract transformer for $(\Init,\tr)$.
The next iterate (\cref{ln:eepdr-ai:transformer}) is always $\Frameai_{i+1}=\malpha{\bkwrch{k}}(\postimage{\tr}{\Frameai_i})$, which exactly matches the relation between successive frames in $\Lambda$-PDR (\Cref{lem:justify-overview-next-frame}).
This means that $\Lambda$-PDR's frames exactly match the Kleene iterates, at least when the initial states are in $\bkwspan{k}$ themselves (in which case the first iterate in~\cref{ln:eepdr-ai:frame0} is simply $\Init$); otherwise there is a difference of at most one frame:
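For intuition, \Cref{alg:eepdr-ai} can be prototyped by brute force. The following Python sketch (using our own bit-tuple encoding of states, with a basis of full states standing for $\bkwrch{k}$) runs the Kleene iterations on a small counter that increments even values by two:

```python
from itertools import product

def all_states(n):
    return [tuple(bits) for bits in product((0, 1), repeat=n)]

def moncube(sigma, b):
    # literals of sigma on the positions where sigma disagrees with b
    return {i: sigma[i] for i in range(len(sigma)) if sigma[i] != b[i]}

def mhull(S, basis, n):
    out = set(all_states(n))
    for b in basis:
        out &= {t for t in all_states(n)
                if any(all(t[i] == v for i, v in moncube(s, b).items())
                       for s in S)}
    return out

def kleene_frames(init, tr, basis, n):
    # F_0 = alpha(Init); F_{i+1} = alpha(post(F_i)); stop at the fixpoint
    frames = [mhull(init, basis, n)]
    while True:
        post = frames[-1] | {t for (s, t) in tr if s in frames[-1]}
        nxt = mhull(post, basis, n)
        if nxt == frames[-1]:
            return frames
        frames.append(nxt)

def enc(v, n=3):  # value -> bit-tuple, most significant bit first
    return tuple((v >> i) & 1 for i in reversed(range(n)))

# 3-bit counter incrementing even values by two (wrapping around);
# the bad state is 101, and with k = 0 the basis is the bad state itself
tr = {(enc(v), enc((v + 2) % 8)) for v in range(8) if v % 2 == 0}
frames = kleene_frames({enc(0)}, tr, [enc(5)], 3)
evens = {enc(v) for v in range(8) if v % 2 == 0}
assert frames[-1] == evens        # the least fixpoint: an inductive invariant
assert enc(5) not in frames[-1]   # which establishes safety
```

This is an executable sketch of the iteration scheme only; the actual algorithm computes the hull symbolically rather than by enumerating states.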
\begin{theorem}
\label{lem:ai-eepdr-sandwich}
$\Frameai_i \subseteq \Frame_{i+1} \subseteq \Frameai_{i+1}$ for every $i$ where $\Frame_{i+1}$ exists.
Further, if $\Init \in \bkwspan{k}$, then $\Frame_{i}=\Frameai_i$.
\end{theorem}
\begin{proof}
By induction on $i$, we first prove that $\Frame_i \subseteq \Frameai_i$ for every $i$ where $\Frame_i$ exists.
Initially, $\Frame_0 = \Init \subseteq \malpha{\bkwrch{k}}(\Init) =\Frameai_0$.
For the step, assume that $\Frame_i \subseteq \Frameai_i$.
Using~\Cref{lem:mhull-monotonicity}, this implies that $\mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k}} \subseteq \mhull{\postimage{\tr}{\Frameai_i}}{\bkwrch{k}}$, which by~\Cref{lem:justify-overview-next-frame} and the Kleene iterations means that $\Frame_{i+1} \subseteq \Frameai_{i+1}$.
For the other inclusion, likewise, since $\Frameai_0 = \malpha{\bkwrch{k}}(\Init) = \mhull{\Init}{\bkwrch{k}} \subseteq \mhull{\postimage{\tr}{\Init}}{\bkwrch{k}} = \Frame_1$ \iflong(where the inclusion uses~\Cref{lem:mhull-monotonicity})\fi, we have by induction that $\Frameai_i \subseteq \Frame_{i+1}$ for every $i$ s.t.\ $\Frame_{i+1}$ exists.
Similarly, if $\Init \in \bkwspan{k}$ then $\malpha{\bkwrch{k}}(\Init) = \Init$, and by induction $\Frame_{i}=\Frameai_i$ for every $i$.
\end{proof}
We can now relate the number of frames in $\Lambda$-PDR and the number of Kleene iterations:
\begin{corollary}
\label{lem:eepdr-ai-iterations}
$\eepdr(\Init,\tr,\Bad,k)$ (\Cref{alg:eepdr}) converges or fails (\cref{ln:eepdr-restart}) in a frame whose index is at most one greater than the number of Kleene iterations in $\madom{\bkwrch{k}}$ on $(\Init,\tr)$ (\Cref{alg:eepdr-ai}).
\end{corollary}
\begin{proof}
Let $i^*$ be the iteration where \Cref{alg:eepdr-ai} converges to the least fixed point, i.e., $\Frameai_{i^*+1}=\Frameai_{i^*}$.
If \Cref{alg:eepdr} does not terminate with error (\cref{ln:eepdr-restart}) before $i = i^*+1$, by~\Cref{lem:ai-eepdr-sandwich} $\Frameai_i \subseteq \Frame_{i+1} \subseteq \Frameai_{i+1}$ for every $i \leq i^*$, and for $i=i^*$ we have $\Frameai_{i^*} \subseteq \Frame_{i^*+1} \subseteq \Frameai_{i^*+1}=\Frameai_{i^*}$, so $\Frame_{i^*+1}$ is an inductive invariant.
Therefore, $\postimage{\tr}{\Frame_{i^*+1}} = \Frame_{i^*+1}$, and thus, because $\Frame_{i^*+1} \in \bkwspan{k}$, also $\mhull{\postimage{\tr}{\Frame_{i^*+1}}}{\bkwrch{k}} = \Frame_{i^*+1}$ and the algorithm converges.
\end{proof}
Further, $\Lambda$-PDR converges whenever the Kleene iterations converge to an inductive invariant:
\begin{corollary}
\label{lem:eepdr-lfp}
If there exists an inductive invariant $I \in \bkwspan{k}$ for $(\Init,\tr,\Bad)$, then $\eepdr(\Init,\tr,\Bad,k)$ (\Cref{alg:eepdr}) converges to an inductive invariant.
\end{corollary}
\begin{proof}
\Cref{alg:eepdr-ai} converges from below to the abstract lfp for $(\Init,\tr)$ in $\madom{\bkwrch{k}}$, which from the premise is strong enough to prove safety. Using~\Cref{lem:ai-eepdr-sandwich},
the same is true for \Cref{alg:eepdr}.
\end{proof}
\para{Increasing $k$ refines $\madom{\bkwrch{k}}$}
Increasing the backward exploration bound $k$ refines $\madom{\bkwrch{k}}$ by enlarging $\bkwspan{k}$: if $\bkwrch{k} \subsetneq \bkwrch{k'}$ then $\bkwspan{k} \subsetneq \bkwspan{k'}$, since every $\varphi \in \bkwspan{k}$ is also in $\bkwspan{k'}$, and, for instance, the clause $\neg \sigma_b$ belongs to $\bkwspan{k'} \setminus \bkwspan{k}$ for a state $\sigma_b \in \bkwrch{k'} \setminus \bkwrch{k}$.
Restarting $\Lambda$-PDR with a larger $k$ (\cref{ln:eepdr-increase-k} in~\Cref{alg:eepdr})
thus refines the domain until it includes an inductive invariant that establishes safety. Such a $k$ always exists because the set of all backward reachable states (attained by some finite $k$ in the setting of propositional systems) is sufficient to express the weakest safe inductive invariant.
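A small self-contained check of this refinement (states as bit-tuples, a basis as a set of full states, an encoding of ours): enlarging the basis shrinks the hull, i.e., yields a more precise abstraction.

```python
from itertools import product

def all_states(n):
    return [tuple(bits) for bits in product((0, 1), repeat=n)]

def moncube(sigma, b):
    # literals of sigma on the positions where sigma disagrees with b
    return {i: sigma[i] for i in range(len(sigma)) if sigma[i] != b[i]}

def mhull(S, basis, n):
    out = set(all_states(n))
    for b in basis:
        out &= {t for t in all_states(n)
                if any(all(t[i] == v for i, v in moncube(s, b).items())
                       for s in S)}
    return out

n = 3
S = {(0, 0, 0)}
B1 = [(1, 0, 1)]                 # a smaller basis ...
B2 = B1 + [(1, 1, 1)]            # ... and a strictly larger one
h1, h2 = mhull(S, B1, n), mhull(S, B2, n)
assert h2 <= h1                  # the larger basis gives a tighter hull
assert h2 < h1                   # here, strictly so
assert S <= h2                   # while still overapproximating S
```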
\para{Efficient convergence}
Unfortunately, the lattice height of $\mspan{B}$ is exponential. For example, all the formulas in the strictly ascending chain of formulas $\set{\vec{x} \leq i}_{0 \leq i \leq 2^n-1}$ over $n$ variables $\vec{x}=x_{n-1},\ldots,x_0$ are in $\mspan{\set{0\ldots0}}$ (see~\Cref{sec:slow-convergence}).
Therefore, to bound the number of iterations we need to consider properties of the transition system, a task on which we embark next.
\section{Convergence Bounds via Abstract Diameter}
\label{sec:abstract-diameter-all}
In this section and the next, we prove a bound on the number of iterations of $\Lambda$-PDR on a given transition system via the DNF size of a monotonization of the transition relation.
In the current section we assume that $\bkwrch{k}$ can be expressed as a single cube $\bkcube$; we lift this assumption in~\Cref{sec:hyper-all}.
To formulate the bound, we define a monotonization of the transition relation, which is a formula over $\voc \cup \voc'$. For cubes $c_1$ and $c_2$ over $\voc$, we denote by $\monox{\tr}{(c_1,c_2)}$ the monotonization $\monox{\tr}{a}$ where $a = c_1 \land c_2'$ and $c_2'$ is obtained from $c_2$ by substituting each $p \in \voc$ by the corresponding $p' \in \voc'$.
The monotonization we perform on the pre-state vocabulary $\voc$ uses the \emph{reflection} of the backward reachable cube:
\begin{definition}[Reflection]
For a cube $\bkcube = \ell_1 \land \ldots \land \ell_r$, the \emph{reflection} is $\reflect{\bkcube} = \neg \ell_1 \land \ldots \land \neg \ell_r$.
\end{definition}
Our main theorem in this section is as follows.
\begin{theorem}
\label{thm:abstract-diamter-bound}
Let $(\Init,\tr,\Bad)$ be a transition system, and $\bkwrch{k}=\bkcube$ a cube.
Then $\eepdr(\Init,\tr,\Bad,k)$ converges or fails in a frame whose index is bounded by $\dnfsize{\monox{\tr}{(\reflect{\bkcube},\bkcube)}}+1$.
\end{theorem}
\begin{example}
\label{ex:even-counter-constant}
For an example where~\Cref{thm:abstract-diamter-bound} yields a tight bound, consider a counter over $\vec{x}=x_n,\ldots,x_0$ where $\Init$ is $\vec{x}=00\ldots00$, $\tr$ increments even numbers by two, $\Bad$ is $\vec{x}=10\ldots01$, and $k=0$.
The monotonization $\absr{\tr} = \monox{\tr}{(\vec{x}=01\ldots10,\vec{x}'=10\ldots01)}$ is $x'_0=0$ (see the calculation below), so $\dnfsize{\monox{\tr}{(\vec{x}=01\ldots10,\vec{x}'=10\ldots01)}}=1$. By~\Cref{thm:abstract-diamter-bound}, $\Lambda$-PDR converges in $\Frame_2$, and indeed in this example $\Frame_1 = (x_n=0 \land x_0=0)$, and $\Frame_2 = (x_0=0)$ is the inductive invariant (all the even numbers).
To see that indeed $\absr{\tr} = (x'_0=0)$, note that $(01\ldots10,10\ldots00) \in \tr$, that the monotonization $\moncube{(\vec{x}=01\ldots10,\vec{x}'=10\ldots00)}{(\vec{x}=01\ldots10,\vec{x}'=10\ldots01)} = (x'_0=0)$, and use~\Cref{lem:monox-disjunction-cubes}; for the other direction, $x'_0=0$ holds in every transition of $\tr$ and is retained in every such monotone cube, invoking again~\Cref{lem:monox-disjunction-cubes}.
\end{example}
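The calculation in \Cref{ex:even-counter-constant} can be replayed by brute force. A Python sketch (an encoding of ours, instantiated with three bits, so $\Init$ is $000$ and $\Bad$ is $101$) computes $\monox{\tr}{(\reflect{\bkcube},\bkcube)}$ pair by pair via \Cref{lem:monox-disjunction-cubes} and confirms that it collapses to the single term $x'_0=0$:

```python
from itertools import product

def all_states(n):
    return [tuple(bits) for bits in product((0, 1), repeat=n)]

def moncube(sigma, c):
    # literals of sigma on the positions (of the cube c) where they disagree
    return {i: sigma[i] for i in c if sigma[i] != c[i]}

def sat(cube, tau):
    return all(tau[i] == v for i, v in cube.items())

n = 3
def enc(v):  # value -> bit-tuple (x_2, x_1, x_0)
    return tuple((v >> i) & 1 for i in reversed(range(n)))

tr = {(enc(v), enc((v + 2) % 8)) for v in range(8) if v % 2 == 0}
bad = dict(enumerate(enc(5)))               # the cube 101
refl = {i: 1 - v for i, v in bad.items()}   # its reflection, 010

# monotonize each transition: the pre-state w.r.t. refl, the post-state
# w.r.t. bad, and take the union of the resulting cubes
abs_tr = set()
for (s, t) in tr:
    pre, post = moncube(s, refl), moncube(t, bad)
    abs_tr |= {(u, v) for u in all_states(n) for v in all_states(n)
               if sat(pre, u) and sat(post, v)}

# the abstract transition relation is exactly x'_0 = 0: a single DNF term
assert abs_tr == {(u, v) for u in all_states(n) for v in all_states(n)
                  if v[2] == 0}
```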
\begin{example}
\label{ex:skip-counter-bounce-polynomial}
An example where~\Cref{thm:abstract-diamter-bound} yields a polynomial yet non-tight bound appears in~\Cref{sec:overview-diameter-bound}.
\end{example}
\para{Outline}
We prove~\Cref{thm:abstract-diamter-bound} by constructing an ``abstract'' transition system whose diameter is the number of iterations required for \Cref{alg:eepdr-ai} to converge (\Cref{sec:abstract-transition}), and by bounding its diameter
(\Cref{sec:diameter-bound}).
Throughout, we fix a transition system $(\Init,\tr,\Bad)$ and a backwards bound $k \in \mathbb{N}$, denoting the cube $\bkwrch{k}$ by $\bkcube$.
\subsection{Abstract Transition System}
\label{sec:abstract-transition}
Given $(\Init,\tr,\Bad)$, the \emph{abstract transition system} $(\absr{\Init},\absr{\tr},\absr{\Bad})$ is defined over the same set of states as the original, and extends its transitions:
\begin{definition}[Abstract Transition System]
\label{def:abs-tr}
The abstract transition system of $(\Init,\tr,\Bad)$ w.r.t.\ $\bkcube$
is defined as a transition system $(\absr{\Init},\absr{\tr},\absr{\Bad})$ over $\States$ with $\absr{\Init} = \monox{\Init}{\bkcube}$, $\absr{\Bad} = \Bad$, and $\absr{\tr}=\monox{\tr}{(\reflect{\bkcube},\bkcube)}$.
\end{definition}
The monotonization of the pre-state in $\absr{\tr}$ is understood using the following technical lemma about monotonization w.r.t.\ a reflection.
\begin{lemma}
\label{lem:moncube-reflect}
$\sigma_1 \models \moncube{\sigma_2}{\reflect{\bkcube}} \iff \sigma_2 \models \moncube{\sigma_1}{\bkcube}$ for every cube $\bkcube$ and states $\sigma_1,\sigma_2$.
\end{lemma}
\toolong{
\begin{proof}
Suppose that $\sigma_1 \models \moncube{\sigma_2}{\reflect{\bkcube}}$.
Then for every $p$ present in $\reflect{\bkcube}$ (equivalently, in $\bkcube$), $\sigma_1,\reflect{\bkcube}$ \emph{dis}agree on $p$ whenever $\sigma_2,\reflect{\bkcube}$ do. Contrapositively, $\sigma_2,\reflect{\bkcube}$ \emph{agree} on $p$ whenever $\sigma_1,\reflect{\bkcube}$ do. Equivalently, $\sigma_2,\bkcube$ \emph{dis}agree on $p$ whenever $\sigma_1,\bkcube$ do. This shows that $\sigma_2 \models \moncube{\sigma_1}{\bkcube}$. The other direction of the implication is symmetric because $\reflect{\reflect{\bkcube}}=\bkcube$.
\end{proof}
}
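\Cref{lem:moncube-reflect} is easy to validate exhaustively on small instances. A Python sketch (with our own encoding of states as bit-tuples and cubes as partial assignments):

```python
from itertools import product

def moncube(sigma, c):
    # literals of sigma on the positions of the cube c where they disagree
    return {i: sigma[i] for i in c if sigma[i] != c[i]}

def sat(cube, tau):
    return all(tau[i] == v for i, v in cube.items())

n = 4
b = {0: 1, 2: 0, 3: 1}                    # a cube over some of the positions
refl = {i: 1 - v for i, v in b.items()}   # its reflection
for s1 in product((0, 1), repeat=n):
    for s2 in product((0, 1), repeat=n):
        # sigma1 |= moncube(sigma2, reflect(b))
        #     iff sigma2 |= moncube(sigma1, b)
        assert sat(moncube(s2, refl), s1) == sat(moncube(s1, b), s2)
```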
The central property of the abstract transition system is that its $i$-reachable states capture the iterates of abstract interpretation in the $\madom{\bkwrch{k}}$ abstract domain.
\begin{lemma}
\label{thm:absract-reach}
Let $\absr{R}_i$ be the set of states reachable in $(\absr{\Init},\absr{\tr},\absr{\Bad})$ w.r.t.\ $\bkwrch{k}=\bkcube$ (\Cref{def:abs-tr}) in at most $i$ steps.
Then $\absr{R}_i \equiv \Frameai_{i}$ where $\Frameai_{i}$ is the $i$'th iterate of~\Cref{alg:eepdr-ai} in $\madom{\bkcube}$ on $(\Init,\tr,\Bad)$.
\end{lemma}
This will imply that the diameter of the abstract transition system---the minimal $i$ where $i$-reachability converges to all the reachable states---equals the number of iterations needed for convergence of the abstract interpretation in $\madom{\bkwrch{k}}$.
Before proving this connection, let us explain the intuition for the abstract transition system and the relation to the algorithm.
Through~\Cref{lem:monox-disjunction-cubes,lem:moncube-reflect} we can see that a transition $(\sigma,\sigma')$ of $\absr{\tr}$ consists of three steps:
\begin{itemize}
\item monotonization of $\sigma$ to $\widetilde{\sigma}$
($\sigma$ is the ``protector'' state for $\widetilde{\sigma}$); then
\item a concrete transition $(\widetilde{\sigma},\widetilde{\sigma}') \in \tr$; and
\item the monotonization of $\widetilde{\sigma}'$ to $\sigma'$
($\widetilde{\sigma}'$ is the ``protector'' state for $\sigma'$).
\end{itemize}
The monotonization \emph{after} the concrete transition mimics a step of the algorithm, which computes the post-image and then a monotone overapproximation.
A potentially critical difference between the algorithm and $\absr{\tr}$ is that the transition system performs this state-by-state, while the algorithm computes these operations over sets. The insight is that when $\bkwrch{k}$ is a single cube, the abstraction of $\Lambda$-PDR factors to individual states as well, and so can be captured using an ordinary transition system, so that its $i$-reachable states correspond to iterations of the algorithm, which is the objective of the next lemma.
As the proof shows, the monotonization \emph{before} the concrete transition does not change reachability, because in execution traces of $\absr{\tr}$, this monotonization is absorbed by the monotonization at the end of the previous transition (the first step in the trace is handled by taking the monotonization of the initial states, $\abs{\Init}=\absr{\Init}=\monox{\Init}{\bkcube}$).
Even though the monotonization of the pre-state does not change the diameter, it can improve (and never worsen) our diameter \emph{bound}, which is derived in~\Cref{sec:diameter-bound}.
\begin{proof}[Proof of~\Cref{thm:absract-reach}]
We first prove a similar result for a slightly simpler, ``less abstract'' transition system, where monotonization is performed in the post-state but not in the pre-state.
Define a transition system $(\abs{\Init},\abs{\tr},\abs{\Bad})$ over $\States$ by $\abs{\Init} = \monox{\Init}{\bkcube}$, $\abs{\Bad} = \Bad$, and
\begin{equation*}
(\sigma,\sigma') \in \abs{\tr} \iff \exists \sigma''. \ (\sigma,\sigma'') \in \tr \land \sigma' \in \cubemon{\sigma''}{\bkcube}.
\end{equation*}
(In fact, $\abs{\tr}=\monox{\tr}{(\true,\bkcube)}$.)
Denote by $\abs{R}_i$ the set of states reachable in $(\abs{\Init},\abs{\tr},\abs{\Bad})$ in at most $i$ steps.
We argue that $\abs{R}_i \equiv \Frameai_{i}$,
by induction on $i$.
Initially, $\abs{R}_0 = \abs{\Init} = \monox{\Init}{\bkcube} = \Frameai_0$.
For the step, by the definition of the abstract system, by the induction hypothesis $\abs{R}_i \equiv \Frameai_i \in \mspan{\bkcube}$. Hence,
\begin{align*}
\abs{R}_{i+1} &= \postimage{\abs{\tr}}{\abs{R}_i}
= \abs{R}_i \cup \bigvee_{\sigma'' \in \tr({\abs{R}_i})}{\cubemon{\sigma''}{\bkcube}}
\underset{\rm \Cref{lem:monox-disjunction-cubes}}{=}
\abs{R}_i \cup \monox{{\tr}({\abs{R}_i})}{\bkcube}
\\
&\underset{\abs{R}_i \in \mspan{\bkcube}}{=}
\monox{\abs{R}_i}{\bkcube} \cup \monox{{\tr}({\abs{R}_i})}{\bkcube}
\underset{\rm \Cref{lem:bshouty-mon-mindnf}}{=}
\monox{\postimage{\tr}{\abs{R}_i}}{\bkcube}
\underset{\rm ind.}{=}
\monox{\postimage{\tr}{\Frameai_i}}{\bkcube}
= \Frameai_{i+1}.
\end{align*}
It remains to show that $\absr{R}_i = \abs{R}_i$, i.e., that the $i$-reachable states of $\abs{\tr},\absr{\tr}$ coincide (although they are not in general bisimilar).
First, $\abs{\tr} \subseteq \absr{\tr}$. This is because if $(\sigma,\sigma') \in \abs{\tr}$, then by definition there is $\sigma''$ such that $(\sigma,\sigma'') \in \tr$ and $\sigma' \in \moncube{\sigma''}{\bkcube}$. Considering the product monotone order, $(\sigma,\sigma'') \leq_{(\cdot,\bkcube)} (\sigma,\sigma')$,
and so $(\sigma,\sigma'') \in \tr \implies (\sigma,\sigma') \in \monox{\tr}{(\reflect{\bkcube},\bkcube)} = \absr{\tr}$, as required.
Second, we show that for any $S \in \mspan{\bkcube}$ it holds that $\postimage{\absr{\tr}}{S} \subseteq \postimage{\abs{\tr}}{S}$.
$\postimage{\absr{\tr}}{S} = S \cup \absr{\tr}(S)$ and $\postimage{\abs{\tr}}{S} = S \cup \abs{\tr}(S)$, so we need to show that $\absr{\tr}(S) \subseteq \abs{\tr}(S)$.
Let $(\sigma,\sigma') \in \absr{\tr}$ with $\sigma \in S$. By the definition of $\absr{\tr}$ and~\Cref{lem:monox-disjunction-cubes}, there exists $(\widetilde{\sigma},\widetilde{\sigma}') \in \tr$ such that $\sigma \models \moncube{\widetilde{\sigma}}{\reflect{\bkcube}}$ and $\sigma' \models \moncube{\widetilde{\sigma}'}{\bkcube}$.
The former implies, by~\Cref{lem:moncube-reflect}, that $\widetilde{\sigma} \models \moncube{\sigma}{\bkcube}$, hence $\widetilde{\sigma} \in S$ as well (because $S \in \mspan{\bkcube}$). Writing $\sigma'' = \widetilde{\sigma}'$ shows that $(\widetilde{\sigma},\sigma') \in \abs{\tr}$, and hence $\sigma' \in \postimage{\abs{\tr}}{S}$, as required.
The first part of the argument (and induction on $i$) shows that $\abs{R}_i \subseteq \absr{R}_i$.
We have shown that $\abs{R}_i = \Frameai_i$, which in particular implies that
always $\abs{R}_i \in \mspan{\bkcube}$; therefore, the second argument above shows that $\absr{R}_i \subseteq \abs{R}_i$.
The claim follows.
\end{proof}
\begin{corollary}
\label{thm:abstract-diameter-eepdr}
Let $(\absr{\Init},\absr{\tr},\absr{\Bad})$ be the abstract transition system w.r.t.\ $\bkwrch{k}=\bkcube$ (\Cref{def:abs-tr}).
If $(\absr{\Init},\absr{\tr},\absr{\Bad})$ is safe and its reachability diameter is $s$, then
$\eepdr(\Init,\tr,\Bad,k)$ converges in frame at most $s+1$.
If $(\absr{\Init},\absr{\tr},\absr{\Bad})$ reaches a bad state in $s$ steps, then $\eepdr(\Init,\tr,\Bad,k)$ fails (\cref{ln:eepdr-restart}) in frame at most $s+1$.
\end{corollary}
\begin{proof}
From~\Cref{thm:absract-reach}, $\Frameai_{s} \equiv \Frameai_{s+1}$ iff $\absr{R}_{s} \equiv \absr{R}_{s+1}$,
and the least $s$ in which the latter holds is the diameter. For the unsafe case, $\Frameai_s \cap \Bad \neq \emptyset$ iff $\absr{R}_s \cap \Bad \neq \emptyset$. Apply~\Cref{lem:eepdr-ai-iterations} in both cases to deduce convergence, resp. failure, of $\Lambda$-PDR in frame at most $s+1$.
\end{proof}
\subsection{Diameter Bound via Abstract DNF Size}
\label{sec:diameter-bound}
In this section, we bound the diameter of the abstract transition system
in order to obtain the convergence bound of~\Cref{thm:abstract-diamter-bound}.
We use a simple, general bound on the diameter of transition systems\iflong, by the DNF size of the transition relation\fi:
\begin{lemma}
\label{lem:diam-dnf}
The reachability diameter of a transition system $(\Init,\tr,\Bad)$ is bounded by $\dnfsize{\tr}$.
\end{lemma}
\begin{proof}
Fix a minimal DNF representation of $\tr$. Thinking of each disjunct of $\tr$ as an action $a$, every transition can be labeled by at least one action. Whenever in an execution $\sigma_1,\sigma_2,\ldots$ an action $a$ labels two transitions $\sigma_{i_1} \overset{a}{\rightarrow} \sigma_{i_1+1},\sigma_{i_2} \overset{a}{\rightarrow} \sigma_{i_2+1}$, the segment between the occurrences, $\sigma_{i_1+1},\ldots,\sigma_{i_2}$, can be dropped and the resulting trace is still valid (and terminates at the same state)---this is because if $(\sigma_{i_1},\sigma_{i_1+1}) \models a$ and likewise $(\sigma_{i_2},\sigma_{i_2+1}) \models a$ then also $(\sigma_{i_1},\sigma_{i_2+1}) \models a$, because $a$, which is a cube, can be decomposed to $a_\textit{pre} \land a_\textit{post}$ where all the literals in $a_\textit{pre}$ are in $\voc$ and those in $a_\textit{post}$ are in $\voc'$.
Overall, every state that can be reached from another state can do so by an execution where each action appears at most once, and thus the diameter is bounded by $\dnfsize{\tr}$.
\end{proof}
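\Cref{lem:diam-dnf} can likewise be checked by brute force. In the sketch below (with our own encoding), a transition relation is given in DNF as a list of ``action'' cubes over $\voc \cup \voc'$, and the reachability diameter from a fixed initial set never exceeds the number of disjuncts:

```python
from itertools import product

def all_states(n):
    return [tuple(bits) for bits in product((0, 1), repeat=n)]

def sat(action, s, t):
    # an action cube maps (0, i) to a literal over voc and (1, i) over voc'
    return all((s if w == 0 else t)[i] == v for (w, i), v in action.items())

n = 3
actions = [                              # an arbitrary 3-term DNF
    {(0, 0): 0, (1, 0): 1},              # x_2=0 /\ x_2'=1
    {(0, 2): 1, (1, 1): 1, (1, 2): 0},   # x_0=1 /\ x_1'=1 /\ x_0'=0
    {(0, 1): 0, (1, 2): 1},              # x_1=0 /\ x_0'=1
]
tr = {(s, t) for s in all_states(n) for t in all_states(n)
      if any(sat(a, s, t) for a in actions)}

# reachability diameter: the least i with R_i = R_{i+1}
reach, diameter = {(0, 0, 0)}, 0
while True:
    nxt = reach | {t for (s, t) in tr if s in reach}
    if nxt == reach:
        break
    reach, diameter = nxt, diameter + 1
assert diameter <= len(actions)   # diameter bounded by the DNF size
```

Since the lemma's argument works with any DNF representation, the number of disjuncts given here bounds the diameter even when it is not minimal.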
Combining the above results yields a proof of this section's main theorem:
\begin{proof}[Proof of~\Cref{thm:abstract-diamter-bound}]
By~\Cref{thm:abstract-diameter-eepdr}, the number of iterations before convergence or failure of $\Lambda$-PDR is bounded by
1 plus
the reachability diameter of $(\absr{\Init},\absr{\tr},\absr{\Bad})$,
which by~\Cref{lem:diam-dnf} is at most $\dnfsize{\monox{\tr}{(\reflect{\bkcube},\bkcube)}}$.
\end{proof}
\para{Complexity}
Deciding whether there is an equivalent DNF representation with at most $s$ terms is complete for the second level of the polynomial hierarchy, $\psigma{2}$~\cite{DBLP:journals/jcss/Umans01}.
This is on par with deciding whether the diameter is bounded by $s$~\cite{DBLP:journals/tcs/HemaspaandraHTW10} (see also~\cite{schaefer2002completeness}).
Thus the bound in~\Cref{thm:abstract-diamter-bound} is not an efficiently-computable upper bound on the number of frames of $\Lambda$-PDR. Instead, we view the result of~\Cref{thm:abstract-diamter-bound} as a conceptual explanation of how smaller diameters can originate from the abstraction.
\begin{example}
\label{ex:skip-counter-pure-exponential}
In the running example from~\Cref{sec:overview} (without the additional transitions in~\Cref{sec:overview-diameter-bound}), \Cref{thm:abstract-diamter-bound} yields a trivial exponential bound that is not tight.
Consider the system in~\Cref{fig:skip-counter} restricted to the $\vec{x}$ part (see~\Cref{lem:diameter-bound-project-irrelevant} and the justification in~\Cref{sec:overview-diameter-bound}).
Then $\absr{\tr} = \monox{\tr}{(\vec{x}=01\ldots11,\vec{x}'=10\ldots00)} = (x_n=1 \land x'_n=0) \lor (x_n=0 \land x'_0=1) \lor (x_n=0 \land x'_n=0 \land \vec{x}' > \vec{x}) \lor (x_n=1 \land x'_n=1 \land \vec{x}' > \vec{x})$, which has an exponential DNF size,\footnote{
$\vec{x}' > \vec{x}$ is unate~\cite{DBLP:journals/jacm/AngluinHK93}---it is closed under turning variables in $\vec{x}$ from $1$ to $0$, and turning variables in $\vec{x}'$ from $0$ to $1$. Hence its unique and minimal DNF representation consists of the disjunction of all its prime implicants~\cite{quine1954two} (a term $t$ is a prime implicant of $\varphi$ if $t \implies \varphi$ but this ceases to hold when dropping any literal from $t$). It suffices to show that there is an exponential number of prime implicants.
To see this, let $v$ be an assignment to all the variables except the least significant, let $\sigma=(v,0),\sigma'=(v',1)$ (so in $\sigma$, $\vec{x} = 2v$, and in $\sigma'$, $\vec{x} = 2v+1$). Then $(\sigma,\sigma') \models \vec{x}' > \vec{x}$.
The only prime implicant that can be obtained by dropping literals from $(\sigma,\sigma')$ is the conjunction that includes every $x_i=0$ when $v[x_i]=0$ and $x'_i=1$ when $v[x_i]=1$---dropping any one of these literals would result in a term that is satisfied by $(v[x_i \mapsto 1],0),(v'[x_i \mapsto 0],1)$, which does not satisfy $\vec{x}' > \vec{x}$, and so the resulting term would not be an implicant.
Thus, the prime implicant associated with $v$ can be used to reconstruct $v$, setting $v[x_i]=0$ when $x_i=0$ is present in the prime implicant, and $v[x_i]=1$ when $x'_i=1$ is present. There are exponentially many choices of $v$ and we have shown that each induces a different prime implicant, and the claim follows.
}
yielding an exponential bound on the number of frames of $\Lambda$-PDR. However, as~\Cref{ex:running-all-frames} shows, $\Lambda$-PDR converges in this case in a constant number of frames.
To see that indeed $\absr{\tr}$ is as stated above, it is easiest to trace the behavior of $\absr{\tr}$ as an abstract transition system (\Cref{def:abs-tr}), decomposing each transition into an abstraction step, a concrete step, and another abstraction step.
\begin{itemize}
\item Starting in a state $\sigma$ with $x_n=0$, $\absr{\tr}$ can lead us to any state $\sigma'$ with $x'_n=0$ and $\vec{x}' > \vec{x}$---to reach $\vec{x}'$ the abstraction step turns on all the bits below its leading $1$, a concrete step increments the counter, so the resulting state agrees with the leading $1$ and everything else is $0$, and then another abstraction step can generate the other $1$'s present in $\vec{x}'$.
Additionally, the abstraction can take us to the state $01\ldots11$, from which a concrete step skips to $10\ldots01$, which is abstracted to all numbers with $x_0=1$.
These are all the states we may reach this way; the first abstract step cannot change $x_n$, and we cannot arrive at smaller numbers, because both the concrete and abstract steps (which turn $0$'s to $1$'s) can only strictly increase the number.
\item Starting in a state $\sigma$ with $x_n=1$, $\absr{\tr}$ can lead us to any state with $x'_n=1,\vec{x}' > \vec{x}$, similarly to above for the case $x_n=0$. Additionally, we can abstract to $11\ldots11$, from which a concrete step wraps around to $00\ldots00$, which then abstracts to $x'_n=0$.
These are all the states that we may reach this way; we cannot reach smaller numbers with $x'_n=1$, along a similar argument to the case above of $x_n=0$.
\end{itemize}
\end{example}
\section{Convergence Bounds via Abstract Hypertransition Systems}
\label{sec:hyper-all}
In this section, we generalize the results of~\Cref{sec:abstract-diameter-all} to the case that $\bkwrch{k}$ is not expressible as a single cube.
In this case, our bound is a product over the cubes that comprise $\bkwrch{k} = \bkcube_1 \lor \ldots \lor \bkcube_m$, monotonizing w.r.t.\ each $\bkcube_i$ in the post-state and w.r.t.\ (the reflection of) the least cube that contains all of $\bkwrch{k}$ in the pre-state, defined as follows:
\begin{definition}
If $\bkwrch{k}=\bkcube_1 \lor \ldots \lor \bkcube_m$, we denote by $\cubejoin{\bkwrch{k}} = \bigcap_{i=1}^{m}{\bkcube_i}$ (as sets of literals) the cube that consists of the literals that appear in all $\bkcube_1,\ldots,\bkcube_m$.
\end{definition}
Fix a representation $\bkwrch{k}=\bkcube_1 \lor \ldots \lor \bkcube_m$.
Our main theorem is as follows:
\begin{theorem}
\label{thm:abstract-hyperdiamter-bound}
Let $(\Init,\tr,\Bad)$ be a transition system.
Then $\eepdr(\Init,\tr,\Bad,k)$ converges or fails in a frame whose index is bounded by
\begin{equation*}
\prod_{i=1}^{m}{\left(
\dnfsize{\monox{\tr}{(\reflect{\cubejoin{\bkwrch{k}}},\bkcube_i)}}
+
\dnfsize{\monox{\Init}{\bkcube_i}}
\right)
}
+
1.
\end{equation*}
\end{theorem}
The reasons for $\monox{\Init}{\bkcube_i}$ and $\cubejoin{\bkwrch{k}}$ will become clear in~\Cref{sec:abstract-hypertransition}.
Often $\Init$ is a cube, in which case $\dnfsize{\monox{\Init}{\bkcube_i}}=1$.
\begin{example}
\label{ex:multiskip-counter-poly}
For an example where~\Cref{thm:abstract-hyperdiamter-bound} yields a polynomial convergence bound,
consider a counter over $\vec{x}=x_n,\ldots,x_0$ (similar in spirit to~\Cref{fig:skip-counter}) with $\Init \ = \ \vec{x}=0$, $\Bad \ = \vec{x} = 10\ldots0$, and $\tr$ that
\begin{inparaenum}[(i)]
\item skips every multiple of $2^r$ except $\vec{0}=0\ldots0$;
\item may also ``bounce back'' from every state with $x_r = x_{r-1} = \ldots = x_1 = x_0 = 1$ to a state with the same upper bits ($i > r$) in which exactly one lower bit ($i \leq r$) is $1$, or to $\vec{x}=0$ if the upper bits are already $0$; and
\item transitions from any multiple of $2^r$ but $\vec{0}$ to any other multiple of $2^r$ (including the bad state).
\end{inparaenum}
We call the set of states between two consecutive multiples of $2^r$ (i.e., between $\vec{x}=c \cdot 2^r$ and $\vec{x}=(c+1) \cdot 2^r$) a ``segment''.
Assume that $r=n-\textit{polylog}(n)$ (the counter skips relatively few times).
We now compute the bound resulting from the theorem.
For every $k \geq 1$,
$\bkwrch{k}=\bigvee_{i=r}^{n}{\bkcube_i}$ where $\bkcube_i = (x_{r-1}=0 \land \ldots \land x_0=0 \land x_i=1)$ (all multiples of $2^r$ except $0\ldots0$).
$\cubejoin{\bkwrch{k}} = (x_{r-1}=0 \land \ldots \land x_0=0)$ (all multiples of $2^r$).
We find a DNF representation for $\monox{\tr}{(\reflect{\cubejoin{\bkwrch{k}}},\bkcube_i)}$ using~\Cref{lem:bshouty-mon-mindnf} similarly to~\Cref{sec:overview-diameter-bound}:
The number of segments is $2^{n-r}$, and in each segment
the abstraction of the ``bounce back'' transitions subsumes the transitions between numbers in the same segment.
This amounts to $\bigO(r2^{n-r})$ terms.
The number of disjuncts $\bkcube_i$ is $n-r$, the number of terms in $\monox{\Init}{\bkcube_i}$ is $1$ because $\Init$ is itself a term, and overall \Cref{thm:abstract-hyperdiamter-bound} yields the bound $\bigO(r (n-r) 2^{n-r})=\textit{poly}(n)$.
\end{example}
\begin{example}
For an example where the theorem yields an exponential convergence bound, consider the same system as in the previous example (\Cref{ex:multiskip-counter-poly}) but when $r=\textit{polylog}(n)$. The above calculation still yields the same bound but now it is $\Omega(2^n)$.
This exponential bound reflects true exponential behavior of the algorithm, because each post-image crosses to at most one new segment, and the abstraction never produces states in a segment beyond those represented in the current frame, mandating at least $2^{n-r}$ frames\iflong.\footnote{
To see that $\mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k}}$ never introduces states in a segment that was not already present in $\postimage{\tr}{\Frame_i}$, note that because always $0\ldots0 \in \postimage{\tr}{\Frame_i}$ (the initial state), $\monox{\postimage{\tr}{\Frame_i}}{0\ldots0}=\true$, and thus
$\mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k} \cup \set{0\ldots0}}=\mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k}} \land \monox{\postimage{\tr}{\Frame_i}}{0\ldots0} = \mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k}}$. Hence, $\mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k}} = \mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k} \cup \set{0\ldots0}} = \monox{\postimage{\tr}{\Frame_i}}{x_{r-1}=0 \land \ldots \land x_0=0}$. But the cube $x_{r-1}=0 \land \ldots \land x_0=0$ does not mention the upper bits, and thus the monotonization does not alter these bits, and includes only segments that were already present in $\postimage{\tr}{\Frame_i}$.
}
\else
~(see the extended version~\cite{extendedVersion} for details).
\fi
\end{example}
\begin{remark}
\label{lem:diameter-bound-project-irrelevant}
There are cases where \Cref{thm:abstract-hyperdiamter-bound} can be applied to a restriction of $\tr$ in which some of the variables are fixed to specific values, and this produces a better bound.
Suppose that for a set of variables $\vec{x}$ and some valuation $\vec{v}$ thereof, $\vec{x}=\vec{v}$ is an inductive invariant for the system, and $\exists b \in \bkwrch{k}. \ b[\vec{x}]=\reflect{\vec{v}}$. (In~\Cref{sec:overview-diameter-bound}, actually $\bkwrch{k} \implies \vec{x}=\reflect{\vec{v}}$.)
Then applying \Cref{thm:abstract-hyperdiamter-bound} to $\restrict{\tr}{\vec{x}\gets\vec{v}}=\tr[\vec{v}/\vec{x}]$, eliminating $\vec{x}$ by substituting $\vec{v}$ for it, also yields an upper bound on the number of iterations of $\Lambda$-PDR. The benefit is that
$\dnfsize{\monox{\restrict{\tr}{\vec{x}\gets\vec{v}}}{(\ldots)}}$
can be smaller than with the original $\tr$.
It is correct to apply the theorem to the restriction and deduce a bound for the original, because under the above premises, always $\Frame_i[\tr] = \Frame_i[\restrict{\tr}{\vec{x}\gets\vec{v}}] \land \vec{x}=\vec{v}$, where $\Frame_i[\tau]$ is the $i$th frame of $\Lambda$-PDR w.r.t.\ transition relation $\tau$.\footnote{
This is because $\Frame_i \eqdef \Frame_i[\tr] \implies \vec{x}=\vec{v}$ by induction on $i$---since $\vec{x}=\vec{v}$ is an inductive invariant, this holds initially, as well as $\postimage{\tr}{\Frame_i} \implies \vec{x}=\vec{v}$.
Now $\Frame_{i+1}=\mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k}} \implies \vec{x}=\vec{v}$ as well, because from the assumption there is $b \in \bkwrch{k}$ s.t.\ $b[\vec{x}]=\reflect{\vec{v}}$, so $\cubemon{\sigma}{b} \implies \vec{x}=\vec{v}$ for every $\sigma \in \postimage{\tr}{\Frame_i}$ because in $\sigma$, $\vec{x}=\vec{v}$, which are opposite in $b$ and thus retained. Hence $\monox{\postimage{\tr}{\Frame_i}}{b} = \bigvee_{\sigma \in \postimage{\tr}{\Frame_i}}{\cubemon{\sigma}{b}} \implies \vec{x}=\vec{v}$, and thus also the conjunction
$\mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k}} \implies \vec{x}=\vec{v}$.
}
\end{remark}
\para{Outline}
To prove \Cref{thm:abstract-hyperdiamter-bound}, the first step is to define an analog to the abstract transition system from~\Cref{def:abs-tr} that captures \Cref{alg:eepdr-ai} in the general case.
If $\bkwrch{k}$ has a DNF form with $m$ cubes, this can be done using a hypertransition system of width $m$ (\Cref{sec:abstract-hypertransition}). We then proceed to bound its diameter (\Cref{sec:hyperdiameter-bound}).
\subsection{Abstract Hypertransition System}
\label{sec:abstract-hypertransition}
We consider hypertransition systems that are dual to the classical definition~\cite[e.g.][]{DBLP:conf/lics/LarsenX90}, in that the pre-state, instead of the post-state, of a hypertransition consists of a set of states.
\begin{definition}[Hypertransition System]
A \emph{hypertransition system} (of width $m \in \mathbb{N}$) over $\States$ is a tuple $(\Init,\tr,\Bad)$ where
\begin{itemize}
\item $\Init \subseteq \States$ is the set of initial states,
\item $\Bad \subseteq \States$ is the set of bad states, and
\item $\tr \subseteq \States^m \times \States$ is a hypertransition relation.
As a formula, it is defined over $m$ copies of $\voc$ for the pre-states, $\voc_1,\ldots,\voc_m$, and a copy $\voc'$ for the post-state.
\end{itemize}
An \emph{execution} of the system is a tree in which the leaves are states from $\Init$, and the relationship between a node $\sigma'$ and its children $\sigma_1,\ldots,\sigma_m$ is that $(\sigma_1,\ldots,\sigma_m,\sigma') \models \tr$.
A state $\sigma$ is \emph{reachable in at most $i$ steps} if there is an execution with root $\sigma$ and height at most $i$.
A state is \emph{reachable} if it is reachable in at most $i$ steps for some $i \in \mathbb{N}$.
The \emph{reachability diameter} of the system is the least $i$ such that every reachable state is reachable in $i$ steps.
\end{definition}
A standard transition system is a hypertransition system with width $m=1$.
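As a concrete illustration, the layered reachability computation behind these definitions can be brute-forced for small state spaces. The sketch below uses our own names and an invented width-2 example; it is only a toy rendering of the definitions, not part of the development:

```python
from itertools import product

def reach_layers(init, tr, states, m):
    """Iterate R_0 = Init, R_{i+1} = R_i ∪ tr-image of R_i, to a fixpoint.

    tr(pre, post) -> bool decides whether the width-m hypertransition
    (pre[0], ..., pre[m-1], post) exists.  Returns [R_0, R_1, ...] ending
    with the first repeated layer, so the reachability diameter is
    len(layers) - 2."""
    layers = [frozenset(init)]
    while True:
        cur = layers[-1]
        nxt = set(cur)
        for pre in product(cur, repeat=m):
            nxt.update(post for post in states if tr(pre, post))
        layers.append(frozenset(nxt))
        if layers[-1] == cur:
            return layers

# Invented width-2 example over 2-bit states 0..3: a hyperstep from
# children (a, b) may produce min(a + b + 1, 3).
layers = reach_layers({0},
                      lambda pre, post: post == min(pre[0] + pre[1] + 1, 3),
                      range(4), m=2)
# layers: {0}, {0,1}, {0,1,2,3}, {0,1,2,3} -- reachability diameter 2
```

With `m=1` the same function computes ordinary transition-system reachability, matching the remark above.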
\begin{definition}[Abstract Hypertransition System]
\label{def:abs-hypertr}
The abstract hypertransition system $(\absr{\Init},\absr{\tr},\absr{\Bad})$ of a (standard) transition system $(\Init,\tr,\Bad)$ w.r.t.\ $\bkwrch{k} = \bkcube_1 \lor \ldots \lor \bkcube_m$
is defined over $\States$ by $\absr{\Init} = \mhull{\Init}{\bkwrch{k}}$, $\absr{\Bad} = \Bad$, and
\begin{equation*}
(\sigma_1,\ldots,\sigma_m,\sigma') \in \absr{\tr} \iff
(\sigma_1,\sigma') \in \monox{\tr \lor \Init'}{(\reflect{\cubejoin{\bkwrch{k}}},\bkcube_1)}
\land
\ldots
\land
(\sigma_m,\sigma') \in \monox{\tr \lor \Init'}{(\reflect{\cubejoin{\bkwrch{k}}},\bkcube_m)}.
\end{equation*}
\end{definition}
The central property of the abstract hypertransition system is that its $i$-reachable states capture the Kleene iterations in the $\madom{\bkwrch{k}}$ abstract domain.
\begin{lemma}
\label{thm:hyperabsract-reach}
Let $\absr{R}_i$ be the set of states reachable in $(\absr{\Init},\absr{\tr},\absr{\Bad})$ w.r.t.\ $\bkwrch{k}$ (\Cref{def:abs-hypertr}) in at most $i$ steps.
Then $\absr{R}_i \equiv \Frameai_{i}$ where $\Frameai_{i}$ is the $i$'th iterate of~\Cref{alg:eepdr-ai} on $(\Init,\tr,\Bad)$ in $\madom{\bkwrch{k}}$.
\end{lemma}
This will imply that the diameter of the abstract hypertransition system equals the number of iterations needed for convergence of the abstract interpretation in $\madom{\bkwrch{k}}$.
Before proving this connection, let us explain the intuition for the abstract hypertransition system and the relation to the algorithm.
Through~\Cref{lem:monox-disjunction-cubes,lem:moncube-reflect} we can see that a hypertransition $(\sigma_1,\ldots,\sigma_m,\sigma')$ of $\absr{\tr}$ is composed of three stages:
\begin{itemize}
\item the monotonization w.r.t.\ $\cubejoin{\bkwrch{k}}$ of each $\sigma_i$ to $\widetilde{\sigma}_i$ ($\sigma_i$ is the ``protector'' state for $\widetilde{\sigma}_i$); then
\item from each resulting state $\widetilde{\sigma}_i$, either a concrete transition $(\widetilde{\sigma}_i,\widetilde{\sigma}'_i) \in \tr$, or going back to an initial state $\widetilde{\sigma}'_i \in \Init$; and
\item application of the monotone hull to arrive at $\sigma'$, using $\widetilde{\sigma}'_1,\ldots,\widetilde{\sigma}'_m$ as ``protector'' states, each $\widetilde{\sigma}'_i$ showing that $\sigma'$ is in the monotone overapproximation w.r.t.\ one of the cubes $\bkcube_i$ composing $\bkwrch{k}$.
\end{itemize}
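The per-state monotonization and monotone-hull operations used in these stages can be sketched for explicit bitvector states. This is a brute-force toy with our own names; we follow the convention, visible in the discussion above, that monotonization w.r.t.\ a cube may flip bits that agree with the cube to the opposite value and leaves unmentioned variables untouched:

```python
from itertools import product

def moncube(sigma, b):
    """States obtained from sigma by flipping, among the variables mentioned
    in the cube b (a dict var -> bit), any subset of the bits that agree
    with b to the opposite value; unmentioned variables stay fixed."""
    free = [v for v in b if sigma[v] == b[v]]
    out = set()
    for flips in product([0, 1], repeat=len(free)):
        tau = list(sigma)
        for v, f in zip(free, flips):
            tau[v] = b[v] ^ f  # f = 0 keeps b's value, f = 1 flips away
        out.add(tuple(tau))
    return out

def monox(S, b):
    """Monotonization of a set of states w.r.t. a single cube."""
    return set().union(*(moncube(s, b) for s in S))

def mhull(S, cubes):
    """Monotone hull: intersection, over the cubes, of the monotonizations."""
    result = None
    for b in cubes:
        result = monox(S, b) if result is None else result & monox(S, b)
    return result

# Invented toy: 3-bit states, cubes b1 = (x0=0 /\ x1=0), b2 = (x0=0 /\ x2=0),
# whose join (the common literals) is x0=0.
b1, b2, join = {0: 0, 1: 0}, {0: 0, 2: 0}, {0: 0}
S = {(0, 0, 0), (1, 1, 0)}
H = mhull(S, [b1, b2])
```

On this toy instance one can check directly that the hull contains `S`, that it is idempotent, and that monotonization w.r.t.\ the join cube is contained in each per-cube monotonization, which is the containment the pre-state abstraction relies on.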
The abstraction in the \emph{last} step connects the reachable states of $\absr{\tr}$ and the Kleene iterations of~\Cref{alg:eepdr-ai}.
The idea is that $\sigma' \in \mhull{\set{\widetilde{\sigma}'_1,\ldots,\widetilde{\sigma}'_m}}{\bkwrch{k}}$, and if $\widetilde{\sigma}_1,\ldots,\widetilde{\sigma}_m \in \Frameai_i$ then by~\Cref{lem:mhull-dnf-base} this implies that $\sigma' \in \mhull{{\tr}({\Frameai_i}) \cup \Init}{\bkwrch{k}}$, which, using the results of~\Cref{sec:ai}, is the next iterate $\Frameai_{i+1}$.
The key point is that the converse also holds---the monotone hull $\mhull{{\tr}({\Frameai_i}) \cup \Init}{\bkwrch{k}}$ ``factors'' to $\mhull{\set{\widetilde{\sigma}'_1,\ldots,\widetilde{\sigma}'_m}}{\bkwrch{k}}$ on all $m$ choices of protectors
$\widetilde{\sigma}'_1,\ldots,\widetilde{\sigma}'_m \in {\tr}({\Frameai_i}) \cup \Init$ (this is reminiscent of Carath\'{e}odory's theorem in convex analysis).
Unlike in~\Cref{def:abs-tr}, in this definition the protector states also come directly from $\Init$, not only from a transition of $\tr$, and essentially this is the reason for $\monox{\Init}{\bkcube_i}$ in the bound of~\Cref{thm:abstract-hyperdiamter-bound}, unlike in~\Cref{thm:abstract-diamter-bound}. This is necessary here to ``mix'' the different protector states, which do not necessarily all originate from the same frame.
As in~\Cref{def:abs-tr}, the abstraction in the \emph{first} step, which uses $\cubejoin{\bkwrch{k}}$, does not change
reachability
and the diameter, but it can improve our diameter \emph{bound}, which is derived in~\Cref{sec:hyperdiameter-bound}.
This is achieved by also allowing a hypertransition from $(\sigma_1,\ldots,\sigma_m)$ to $\sigma'$ if other steps of $\absr{\tr}$ (concrete/init, monotone hull) can arrive at $\sigma'$ from $(\widetilde{\sigma}_1,\ldots,\widetilde{\sigma}_m)$ and if we know for certain that whenever $\sigma_1,\ldots,\sigma_m$ are reachable, then so are $\widetilde{\sigma}_1,\ldots,\widetilde{\sigma}_m$.
This is the case when $\widetilde{\sigma}_1,\ldots,\widetilde{\sigma}_m$ belong to $\mhull{\sigma_1,\ldots,\sigma_m}{\bkwrch{k}}$, because, as explained above, a monotone hull is performed in the last step of $\absr{\tr}$.
Reachability is not extended,
because the additional abstraction in the pre-state could be mimicked by an abstraction in the post-state of the previous (abstract) step.
One way to ensure that $\widetilde{\sigma}_1,\ldots,\widetilde{\sigma}_m$ belong to $\mhull{\sigma_1,\ldots,\sigma_m}{\bkwrch{k}}$ is that each $\widetilde{\sigma}_i \in \mhull{\set{\sigma_i}}{\bkwrch{k}}$, namely, that
for every $\bkcube_j$, $\widetilde{\sigma}_i \in \monox{\set{\sigma_i}}{\bkcube_j}$. This is achieved when $\widetilde{\sigma}_i \in \monox{\set{\sigma_i}}{\cubejoin{\bkwrch{k}}}$, i.e., $\sigma_i \in \monox{\set{\widetilde{\sigma}_i}}{\reflect{\cubejoin{\bkwrch{k}}}}$, which is the reason for using $\cubejoin{\bkwrch{k}}$ in the monotonization of the pre-states.
Yet this abstraction is not always effective:
\begin{example}
In some cases, the abstraction using $\cubejoin{\bkwrch{k}}$ is weak, and is another source of non-tightness in the bound of~\Cref{thm:abstract-hyperdiamter-bound}.
Consider the system from~\Cref{ex:monotone-several-cubes}.
In this case $\cubejoin{\bkwrch{k}}=\true$, resulting in no abstraction of the pre-state vocabulary. The DNF size of $\monox{\tr}{(\true,\bkcube_i)}$ is superpolynomial\iflong\footnote{
Let $\sigma$ be a state with
$x_i=0$, and let $\sigma'$ be obtained by applying $\tr$ with two variables $x_{j_1},x_{j_2} \neq x_i$ that are $0$ in $\sigma$.
A DNF representation must have a term $d_{\sigma}$ such that $(\sigma,\sigma') \models d_{\sigma}$. However, $d_{\sigma}$ must include all variables $x_r \not\in \set{x_i,x_{j_1},x_{j_2}}$ that are $0$ in $\sigma$; otherwise $\tilde{\sigma}$ in which they are turned on also yields $(\tilde{\sigma},\sigma') \models d_{\sigma}$. But this cannot be, because the $x_r$'s are $0$ in $\sigma'$, and $\monox{\tr}{(\true,\bkcube_i)}$ does not allow turning $1$ bits to $0$ (except for $x_i$).
Consider now two states $\sigma_1,\sigma_2$ with $x_i=1,x_{j_1}=x_{j_2}=0$, each with an additional $n/2$ variables $x^{s}_{r_1},\ldots,x^{s}_{r_{n/2}} \not\in \set{x_i,x_{j_1},x_{j_2}}$ of value $0$ ($s \in \set{1,2}$), and the rest of the variables set to $1$. By the above argument, $d_{\sigma_1}$ requires that all literals $x^{1}_{r_p}$ are $0$, which implies that $\sigma_2 \not\models d_{\sigma_1}$ if $\set{x^{2}_{r_1},\ldots,x^{2}_{r_{n/2}}} \not\subseteq \set{x^{1}_{r_1},\ldots,x^{1}_{r_{n/2}}}$.
Therefore, every choice of $n/2$ variables yields a non-comparable term, and there are $\binom{n}{n/2} = \Theta\left(4^n/\sqrt{n}\right)$ such choices.
},
\else
~(see the extended version~\cite{extendedVersion}),
\fi
leading to a superpolynomial bound on the number of frames.
However, $\Lambda$-PDR with $k=0$ converges in $\Frame_1$ (see~\Cref{ex:monotone-several-cubes}).
\end{example}
\toolong{
\begin{remark}
At first sight, it would seem that the product of monotonizations in the bound of~\Cref{thm:abstract-hyperdiamter-bound} is unnecessary, and that one could study the convergence of $\Lambda$-PDR w.r.t.\ $\bkwrch{k}$ by the convergence w.r.t.\ the simpler (and larger) set $\cubejoin{\bkwrch{k}}$. Since $\bkwrch{k} \subseteq \cubejoin{\bkwrch{k}}$, the overapproximation is tighter with $\cubejoin{\bkwrch{k}}$ (\Cref{lem:mhull-monotonicity}), so it would seem that the number of iterations with $\mhull{\cdot}{\bkwrch{k}}$ must be less than with $\mhull{\cdot}{\cubejoin{\bkwrch{k}}}$. However, this is not so. The reason is that $\Lambda$-PDR with $\mhull{\cdot}{\cubejoin{\bkwrch{k}}}$ might converge to an inductive invariant that is not present in $\mspan{\bkwrch{k}}$: $\bkwrch{k} \subseteq \cubejoin{\bkwrch{k}}$ implies $\mspan{\bkwrch{k}} \subseteq \mspan{\cubejoin{\bkwrch{k}}}$. Thanks to such ``new'' invariants, convergence could be faster with $\mhull{\cdot}{\cubejoin{\bkwrch{k}}}$ than with $\mhull{\cdot}{\bkwrch{k}}$.
\end{remark}
We now formally prove the connection between reachable states of the abstract transition system and iterations of the algorithm.
\begin{proof}[Proof of~\Cref{thm:hyperabsract-reach}]
We first prove a similar result for a slightly simpler, ``less abstract'' hypertransition system, where abstraction is performed in the post-state but not in the pre-state.
Define a hypertransition system $(\abs{\Init},\abs{\tr},\abs{\Bad})$ over $\States$ by
$\abs{\Init} = \mhull{\Init}{\bkwrch{k}}$, $\abs{\Bad} = \Bad$, and
\begin{equation*}
\begin{split}
(\sigma_1,\ldots,\sigma_m,\sigma') \in \abs{\tr} \iff
\exists \sigma''_1,\ldots,\sigma''_m. \qquad
& ((\sigma_1,\sigma''_1) \in \tr \lor \sigma''_1 \in \Init) \land \sigma' \in \cubemon{\sigma''_1}{\bkcube_1} \qquad \land
\\
& \ldots
\\
& ((\sigma_m,\sigma''_m) \in \tr \lor \sigma''_m \in \Init) \land \sigma' \in \cubemon{\sigma''_m}{\bkcube_m}.
\end{split}
\end{equation*}
(In fact, $\abs{\tr}=\bigwedge_{i=1}^{m}{(\monox{\tr \lor \Init'}{(\true,\bkcube_i)})[\voc_i,\voc']}$).
Denote by $\abs{R}_i$ the set of states reachable in $(\abs{\Init},\abs{\tr},\abs{\Bad})$ in at most $i$ steps.
We argue that $\abs{R}_i \equiv \Frameai_{i}$,
by induction on $i$.
Initially, $\abs{R}_0 = \abs{\Init} = \mhull{\Init}{\bkwrch{k}} = \Frameai_0$.
For the step, the set $\abs{R}_{i+1}$ is the set of states reachable in at most $i+1$ steps in the hypertransition system, which is $\abs{R}_{i+1} = \abs{R}_i \cup \abs{\tr}(\abs{R}_i)$, where $\abs{\tr}(\abs{R}_i)$ is the set of states $\sigma'$ such that there are $\sigma_1,\ldots,\sigma_m \in \abs{R}_i$ with $(\sigma_1,\ldots,\sigma_m,\sigma') \in \abs{\tr}$. By the definition of $\abs{\tr}$,
\begin{align*}
{\abs{\tr}}({\abs{R_i}})
&= \bigvee_{\sigma''_1,\ldots,\sigma''_m \in {\tr}({\abs{R}_i}) \cup \Init}{\left(\cubemon{\sigma''_1}{\bkcube_1} \land \ldots \land \cubemon{\sigma''_m}{\bkcube_m}\right)}
\\
\intertext{By distributivity of conjunction over disjunction,}
&= \left(\bigvee_{\sigma''_1 \in {\tr}({\abs{R}_i}) \cup \Init}{\cubemon{\sigma''_1}{\bkcube_1}}\right)
\land
\ldots
\land
\left(\bigvee_{\sigma''_m \in {\tr}({\abs{R}_i}) \cup \Init}{\cubemon{\sigma''_m}{\bkcube_m}}
\right)
\\
\intertext{By~\Cref{lem:monox-disjunction-cubes}, this is}
&= \monox{{\tr}({\abs{R}_i}) \cup \Init}{\bkcube_1} \land \ldots \land \monox{{\tr}({\abs{R}_i}) \cup \Init}{\bkcube_m}
\\
\intertext{By~\Cref{lem:mhull-dnf-base}, this amounts to}
&= \mhull{{\tr}({\abs{R}_i}) \cup \Init}{\bkwrch{k}}
\\
\intertext{which by the induction hypothesis is}
&= \mhull{{\tr}({\Frameai_{i}}) \cup \Init}{\bkwrch{k}}.
\end{align*}
In the terminology of~\Cref{sec:ai-background}, as $\Frameai_{i}=(\abs{F})^{i+1}(\abs{\bot})$, we have obtained $\abs{\tr}(\abs{R}_i)=\malpha{\bkwrch{k}}(\tr(({\abs{F}_{\Init,\tr}})^{i+1}(\abs{\bot}))\cup \Init)=(\abs{F}_{\Init,\tr})^{i+2}(\abs{\bot})$. Because
$(\abs{F}_{\Init,\tr})^{i+1}(\abs{\bot}) \abs{\sleq} (\abs{F}_{\Init,\tr})^{i+2}(\abs{\bot})$,
the result is that $\abs{R}_i \subseteq \abs{\tr}(\abs{R}_i)$, and
$\abs{R}_{i+1} = \abs{\tr}(\abs{R}_i) \cup \abs{R}_i = \abs{\tr}(\abs{R}_i) = \Frameai_{i+1}$, as required.
It remains to show that $\absr{R}_i = \abs{R}_i$, i.e., that the $i$-reachable states of $\abs{\tr},\absr{\tr}$ coincide.
First, for every set of states $S$ it holds that $\postimage{\abs{\tr}}{S} \subseteq \postimage{\absr{\tr}}{S}$. This is because if $(\sigma_1,\ldots,\sigma_m,\sigma') \in \abs{\tr}$, then by definition there are $\sigma''_1,\ldots,\sigma''_m$ such that $(\sigma_i,\sigma''_i) \in \tr \lor \Init'$ for every $i$ and $\sigma' \in \moncube{\sigma''_i}{\bkcube_i}$.
Considering the product monotone order, $(\sigma_i,\sigma''_i) \leq_{(\cdot,\bkcube_i)} (\sigma_i,\sigma')$, and so $(\sigma_i,\sigma''_i) \in \tr \lor \Init' \implies (\sigma_i,\sigma') \in \monox{\tr \lor \Init'}{(\reflect{\cubejoin{\bkwrch{k}}},\bkcube_i)}$. This holds for every $i$; by the definition of $\absr{\tr}$ this implies that $(\sigma_1,\ldots,\sigma_m,\sigma') \in \absr{\tr}$. This means that $\sigma' \in \postimage{\absr{\tr}}{S}$ since $\sigma_1,\ldots,\sigma_m \in S$.
Second, we show that for any $S \in \mspan{\bkwrch{k}}$ it holds that $\postimage{\absr{\tr}}{S} \subseteq \postimage{\abs{\tr}}{S}$. Let $(\sigma_1,\ldots,\sigma_m,\sigma') \in \absr{\tr}$, where $\sigma_1,\ldots,\sigma_m \in S$. By the definition of $\absr{\tr}$, $(\sigma_i,\sigma') \models \monox{\tr \lor \Init'}{(\reflect{\cubejoin{\bkwrch{k}}},\bkcube_i)}$, and so there exist $(\widetilde{\sigma}_1,\widetilde{\sigma}'_1),\ldots,(\widetilde{\sigma}_m,\widetilde{\sigma}'_m) \in \tr \lor \Init'$ such that for every $i$,
\begin{itemize}
\item $\sigma_i \models \moncube{\widetilde{\sigma}_i}{\reflect{\cubejoin{\bkwrch{k}}}}$---by~\Cref{lem:moncube-reflect}, this means that $\widetilde{\sigma}_i \models \moncube{\sigma_i}{\cubejoin{\bkwrch{k}}}$.
Since $\sigma_i \in S$, by~\Cref{lem:monox-disjunction-cubes}, $\widetilde{\sigma}_i \in \monox{S}{\cubejoin{\bkwrch{k}}}$. It follows that $\widetilde{\sigma}_i \in \mhull{S}{\bkwrch{k}}$ (because $\bkwrch{k} \subseteq \cubejoin{\bkwrch{k}}$ implies $\mhull{S}{\cubejoin{\bkwrch{k}}} \subseteq \mhull{S}{\bkwrch{k}}$ and $\mhull{S}{\cubejoin{\bkwrch{k}}} \equiv \monox{S}{\cubejoin{\bkwrch{k}}}$ by~\Cref{lem:mhull-dnf-base}). From the premise that $S \in \mspan{\bkwrch{k}}$, $\mhull{S}{\bkwrch{k}} \equiv S$, and we have $\widetilde{\sigma}_i \in S$.
\item $\sigma' \models \moncube{\widetilde{\sigma}'_i}{\bkcube_i}$.
\end{itemize}
Writing $\sigma''_i = \widetilde{\sigma}'_i$ shows that $(\widetilde{\sigma}_1,\ldots,\widetilde{\sigma}_m,\sigma') \in \abs{\tr}$, because for every $i$ we have $\widetilde{\sigma}_i \in S$, $(\widetilde{\sigma}_i,\widetilde{\sigma}'_i) \in \tr \lor \Init'$, and $\sigma' \models \moncube{\widetilde{\sigma}'_i}{\bkcube_i}$.
This shows that $\sigma' \in \postimage{\abs{\tr}}{S}$, as required.
The first part of the argument (and induction on $i$) shows that $\abs{R}_i \subseteq \absr{R}_i$.
We have shown that $\abs{R}_i = \Frameai_i$, which in particular implies that
always $\abs{R}_i \in \bkwspan{k}$; therefore, the second argument above shows that $\absr{R}_i \subseteq \abs{R}_i$.
The claim follows.
\end{proof}
}
\begin{corollary}
\label{thm:abstract-hyperdiameter-eepdr}
Let $(\absr{\Init},\absr{\tr},\absr{\Bad})$ be the abstract hypertransition system w.r.t.\ $\bkwrch{k}$ (\Cref{def:abs-hypertr}).
If $(\absr{\Init},\absr{\tr},\absr{\Bad})$ is safe and its reachability diameter is $s$, then
$\eepdr(\Init,\tr,\Bad,k)$ converges in frame at most $s+1$.
If $(\absr{\Init},\absr{\tr},\absr{\Bad})$ reaches a bad state in $s$ steps, then $\eepdr(\Init,\tr,\Bad,k)$ fails (\cref{ln:eepdr-restart}) in frame at most $s+1$.
\end{corollary}
\toolong{
\begin{proof}
Follows from~\Cref{thm:hyperabsract-reach} similarly to the proof of~\Cref{thm:abstract-diameter-eepdr} from~\Cref{thm:absract-reach}.
\end{proof}
}
\subsection{Hyperdiameter Bounds via a Joint Abstract Cover}
\label{sec:hyperdiameter-bound}
In this section, we bound the diameter of the abstract hypertransition system in order to obtain the convergence bound of~\Cref{thm:abstract-hyperdiamter-bound}.
The proof is based on a diameter bound similar to the case of standard transition systems.
\begin{lemma}
\label{lem:hyperdiam-dnf}
The reachability diameter of a hypertransition system $(\Init,\tr,\Bad)$ is bounded by $\dnfsize{\tr}$.
\end{lemma}
\toolong{
\begin{proof}
Fix a minimal DNF representation of $\tr$. Thinking about each disjunct of $\tr$ as an action, every transition from $m$ children to a parent can be labeled by at least one action.
Consider a path from the root to a leaf in an execution tree. If an action $a$ labels two (hyper)transitions $\sigma_{i_1} \overset{a}{\rightarrow} \sigma_{i_1+1},\sigma_{i_2} \overset{a}{\rightarrow} \sigma_{i_2+1}$ along it, the segment of the path between the occurrences, $\sigma_{i_1+1},\ldots,\sigma_{i_2}$, can be dropped, and the resulting execution is still valid (and terminates at the same state)---this is because if $(\sigma^1_{i_1},\ldots,\sigma^m_{i_1},\sigma_{i_1+1}) \models a$ and likewise $(\sigma^1_{i_2},\ldots,\sigma^m_{i_2},\sigma_{i_2+1}) \models a$ then also $(\sigma^1_{i_1},\ldots,\sigma^m_{i_1},\sigma_{i_2+1}) \models a$, because $a$, which is a cube, can be decomposed to $a_{\textit{pre}_1} \land \ldots \land a_{\textit{pre}_m} \land a_\textit{post}$ where all the literals in $a_{\textit{pre}_i}$ are in the $i$'th pre-state copy $\voc_i$ and those in $a_\textit{post}$ are in $\voc'$.
Overall, every state that can be reached from a set of leaf states can do so by an execution where each action appears at most once on each path of the tree, and thus the diameter is bounded by $\dnfsize{\tr}$.
\end{proof}
}
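The counting argument can be sanity-checked by brute force on a standard (width-1) toy system; the two-action transition relation and all names below are ours, invented for illustration:

```python
from itertools import product

# Invented 2-bit toy system whose transition relation is a DNF of two
# cube "actions" over the pre/post vocabularies:
#   a1:  x0' = 1                  (from any pre-state)
#   a2:  x0  = 1  and  x1' = 1
states = list(product([0, 1], repeat=2))

def tr(s, t):
    return t[0] == 1 or (s[0] == 1 and t[1] == 1)

def diameter(init):
    """Least i such that every reachable state is reachable in <= i steps."""
    layers = [set(init)]
    while True:
        nxt = layers[-1] | {t for s in layers[-1] for t in states if tr(s, t)}
        if nxt == layers[-1]:
            return len(layers) - 1
        layers.append(nxt)

# The DNF has 2 terms, so the lemma promises diameter <= 2; here it is
# exactly 2: step 1 reaches the x0'=1 states, step 2 adds (0,1) via a2.
```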
\toolong{
\begin{proof}[Proof of~\Cref{thm:abstract-hyperdiamter-bound}]
Denote the bound in the theorem by $q + 1$.
From the distributivity of the conjunction in $\absr{\tr}$, $\dnfsize{\absr{\tr}} \leq \prod_{i=1}^{m}{\dnfsize{\monox{\tr \lor \Init'}{(\reflect{\cubejoin{\bkwrch{k}}},\bkcube_i)}}}$,
and
\begin{align*}
\dnfsize{\monox{\tr \lor \Init'}{(\reflect{\cubejoin{\bkwrch{k}}},\bkcube_i)}}
&\underset{\rm \Cref{lem:bshouty-mon-mindnf}}{=}
\dnfsize{\monox{\tr}{(\reflect{\cubejoin{\bkwrch{k}}},\bkcube_i)}
\lor \monox{\Init'}{(\reflect{\cubejoin{\bkwrch{k}}},\bkcube_i)}}
\\
&\leq
\dnfsize{\monox{\tr}{(\reflect{\cubejoin{\bkwrch{k}}},\bkcube_i)}}
+
\dnfsize{\monox{\Init'}{(\reflect{\cubejoin{\bkwrch{k}}},\bkcube_i)}}
\\
&=
\dnfsize{\monox{\tr}{(\reflect{\cubejoin{\bkwrch{k}}},\bkcube_i)}}
+
\dnfsize{\monox{\Init}{\bkcube_i}},
\end{align*}
overall yielding that $\dnfsize{\absr{\tr}} \leq q$.
By~\Cref{thm:abstract-hyperdiameter-eepdr}, the number of iterations before convergence or failure of $\Lambda$-PDR is bounded by
1 plus
the reachability diameter of $(\absr{\Init},\absr{\tr},\absr{\Bad})$,
which by~\Cref{lem:hyperdiam-dnf} is at most $q$.
\end{proof}
}
\section{Forward Reachability in $\Lambda$-PDR and Others}
\label{sec:itp-friends}
This section highlights the importance of the successive overapproximation embodied in the Kleene iterations of $\Lambda$-PDR by contrasting $\Lambda$-PDR with the treatment of forward reachability in other invariant inference algorithms.
\para{Exact forward reachability}
Exact forward reachability iterates $R_0=\Init, R_{i+1}=\postimage{\tr}{R_i}$, so that $R_i$ is the set of states reachable in at most $i$ steps (without any overapproximation).
We have shown that in some cases $\Lambda$-PDR can converge in a significantly lower number of iterations than exact forward reachability, stated formally in the following lemma.
\begin{lemma}
There exists a family of transition systems $(\Init,\tr,\Bad)$ over $\voc$ with $\card{\voc}=n$ and $k=\bigO(1)$ such that $\eepdr(\Init,\tr,\Bad,k)$ converges in $\textit{poly}(n,k)$ iterations, whereas exact forward reachability converges in $\Omega(2^n)$ iterations.
\end{lemma}
\begin{proof}
See e.g.~\Cref{sec:overview-successive} and~\Cref{ex:running-all-frames}.
\end{proof}
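The exponential side of this gap is easy to reproduce for a plain modular counter (a toy sketch with invented names): exact forward reachability discovers exactly one new state per iteration, so it converges only after $2^n - 1$ iterations.

```python
def exact_forward_iterations(n):
    """Number of iterations of R_0 = {0}, R_{i+1} = R_i ∪ post(R_i) until
    the fixpoint, for an n-bit counter incrementing modulo 2**n."""
    post = lambda s: (s + 1) % (2 ** n)
    r, steps = {0}, 0
    while True:
        nxt = r | {post(s) for s in r}
        if nxt == r:
            return steps
        r, steps = nxt, steps + 1

# One new state per iteration: the fixpoint is reached only after
# 2**n - 1 iterations, e.g. 15 iterations for n = 4.
```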
This gap reflects a gap between the diameter of the original system $(\Init,\tr)$ and the diameter of the abstract system $(\absr{\Init},\absr{\tr})$ (\Cref{def:abs-tr,thm:abstract-diameter-eepdr}).
\para{Dual interpolation}
\label{sec:dual-itp-compare}
The essence of \emph{interpolation-based inference} (ITP)~\cite{DBLP:conf/cav/McMillan03} is generalizing from proofs of \emph{bounded} unreachability.
We consider the time-dual~\cite[e.g.][Appendix A]{DBLP:journals/pacmpl/FeldmanISS20} of this approach, generalizing from bounded unreachability \emph{from} the initial states, rather than unreachability \emph{to} the bad states,
in line with our focus here on the treatment of forward reachability.
Specifically, \Cref{alg:dual-itp-termmin} is based on (the time-dual of) a model-based ITP algorithm~\cite{DBLP:conf/hvc/ChocklerIM12,DBLP:conf/lpar/BjornerGKL13} whose generalization procedure was inspired by PDR.
\iflong
\begin{wrapfigure}{r}{0.45\textwidth}
\vspace{-0.4cm}
\begin{minipage}{0.45\textwidth}
\begin{algorithm}[H]
\caption{Dual Model-Based ITP, based on~\cite{DBLP:conf/hvc/ChocklerIM12,DBLP:conf/lpar/BjornerGKL13}}
\label{alg:dual-itp-termmin}
\begin{algorithmic}[1]
\begin{footnotesize}
\Procedure{Dual-Model-Based-ITP}{$\Init$,$\tr$,$\Bad$,$s$}
\State $\varphi \gets \neg \Bad$ $\label{ln:dual-itp-frame0}$
\While{$\varphi$ not inductive}
\State take $\sigma_b \in \varphi$ s.t.\ $\tr(\sigma_b) \not\subseteq \varphi$ $\label{ln:dual-itp-cex}$
\If{$\sigma_b \in \mathcal{R}_s$} $\label{ln:dual-itp-restart}$
\State \textbf{restart} with larger $s$
\EndIf
\State take minimal $c \subseteq \neg \sigma_b$ s.t.\ $\mathcal{R}_s \implies c$ $\label{ln:dual-itp-bmc}$
\State $\varphi \gets \varphi \land c$ $\label{ln:dual-itp-learn}$
\EndWhile
\State \Return $\varphi$
\EndProcedure
\end{footnotesize}
\end{algorithmic}
\end{algorithm}
\end{minipage}
\vspace{-0.5cm}
\end{wrapfigure}
The algorithm is parametrized by a forward-exploration bound $s$. It refines a candidate $\varphi$ starting from the candidate that excludes just the bad states (\cref{ln:dual-itp-frame0}).
In each iteration,
the algorithm samples a pre-state $\sigma_b$ of a counterexample to the induction of $\varphi$, a state in $\varphi$ that in one step reaches states outside $\varphi$ (\cref{ln:dual-itp-cex}).
Instead of excluding just the counterexample---similarly to PDR---
\iflong\else
\begin{wrapfigure}{r}{0.45\textwidth}
\vspace{-0.4cm}
\begin{minipage}{0.45\textwidth}
\begin{algorithm}[H]
\caption{Dual Model-Based ITP, based on~\cite{DBLP:conf/hvc/ChocklerIM12,DBLP:conf/lpar/BjornerGKL13}}
\label{alg:dual-itp-termmin}
\begin{algorithmic}[1]
\begin{footnotesize}
\Procedure{Dual-Model-Based-ITP}{$\Init$,$\tr$,$\Bad$,$s$}
\State $\varphi \gets \neg \Bad$ $\label{ln:dual-itp-frame0}$
\While{$\varphi$ not inductive}
\State take $\sigma_b \in \varphi$ s.t.\ $\tr(\sigma_b) \not\subseteq \varphi$ $\label{ln:dual-itp-cex}$
\If{$\sigma_b \in \mathcal{R}_s$} $\label{ln:dual-itp-restart}$
\State \textbf{restart} with larger $s$
\EndIf
\State take minimal $c \subseteq \neg \sigma_b$ s.t.\ $\mathcal{R}_s \implies c$ $\label{ln:dual-itp-bmc}$
\State $\varphi \gets \varphi \land c$ $\label{ln:dual-itp-learn}$
\EndWhile
\State \Return $\varphi$
\EndProcedure
\end{footnotesize}
\end{algorithmic}
\end{algorithm}
\end{minipage}
\vspace{-0.5cm}
\end{wrapfigure}
\fi
the algorithm seeks a minimal clause $c$ over the literals that are falsified in $\sigma_b$ that does not exclude a state from $\mathcal{R}_s$, the set of states that the system can reach in $s$ steps (\cref{ln:dual-itp-bmc}), and conjoins $c$ to the
candidate (\cref{ln:dual-itp-learn}).
The complexity of this algorithm was recently studied by~\citet{DBLP:journals/pacmpl/FeldmanSSW21}, who showed that the forward-exploration bound $s$ is sufficient to discover an inductive invariant when $s$ steps reach the entire \emph{inner boundary} of $I$,
\iflong\else
\vspace{0.3cm}
\fi
\begin{equation*}
\boundarypos{I} \eqdef \set{\sigma^+ \, \mid \, \exists \sigma^-. \ \sigma^+ \models I, \, \sigma^- \models \neg I, \, \mbox{Hamming-Distance}(\sigma^+,\sigma^-)=1}.
\end{equation*}
\begin{theorem}[\citet{DBLP:journals/pacmpl/FeldmanSSW21}]
\label{lem:itp-fence-condition}
Let $I$ be an inductive invariant for $(\Init,\tr,\Bad)$,
and $\mathcal{R}_s$ the set of states reachable in at most $s$ steps in $(\Init,\tr,\Bad)$.
If
$\boundarypos{I} \subseteq \mathcal{R}_s$,
then a forward bound of $s$ suffices for $\mbox{Dual-Model-Based-ITP}(\Init,\tr,\Bad,s)$ to successfully find an inductive invariant.
\end{theorem}
In the example of~\Cref{fig:skip-counter} (from~\Cref{sec:overview}), this does not hold for the invariant in~\Cref{eq:skip-counter-invariant} unless $s=\Omega(2^n)$ (for example, $\vec{x}=110\ldots0,\vec{y}=0\ldots0,z=0 \in \boundarypos{I}$ but reachable only in $\Omega(2^n)$ steps).
In contrast, we prove that for $\Lambda$-PDR, it is enough that $\boundarypos{I}$ is $s$-reachable in the \emph{abstract} (hyper)system, which interleaves concrete steps and abstraction (see~\Cref{def:abs-hypertr}) and can thus reach $\boundarypos{I}$ in fewer steps, resulting in convergence of $\Lambda$-PDR with a smaller number of frames:
\begin{theorem}
\label{lem:eepdr-fence-condition}
Let $I \in \bkwspan{k}$ be an inductive invariant for $(\Init,\tr,\Bad)$,
and $\absr{\mathcal{R}}_s$ the set of states reachable in at most $s$ steps in $(\absr{\Init},\absr{\tr},\absr{\Bad})$ (\Cref{def:abs-hypertr}).
If $\boundarypos{I} \subseteq \absr{\mathcal{R}}_s$,
then $s+1$ frames suffice for $\eepdr(\Init,\tr,\Bad,k)$ to successfully find an inductive invariant.
\end{theorem}
The two results can be proved similarly, using tools from the monotone theory. In both cases, the argument is that when $s$ is large enough, the monotone hull of the current candidate must include the entire $I$.
In~\Cref{alg:dual-itp-termmin}, the argument is that always $I \subseteq \mhull{\mathcal{R}_s}{\mathcal{C}_i} \subseteq \varphi$, where $\mathcal{C}_i$ is the set of counterexamples $\sigma_b$ that the algorithm has encountered so far.
In~\Cref{alg:eepdr}, the argument is that $I \subseteq \mhull{\absr{\mathcal{R}}_s}{\bkwrch{k}}$.
Both rely on the following fact about the monotone hull of a boundary of a set:
\begin{lemma}
\label{lem:mhull-boundary}
Let $I,S,B$ be sets of states s.t.\ $\boundarypos{I} \subseteq S$ and $B \cap I = \emptyset$.
Then $I \subseteq \mhull{S}{B}$.
\end{lemma}
\begin{proof}
Let $\sigma \in I$. For every $b \in B$, assume for the sake of contradiction that $\sigma$ is not in $\monox{S}{b}$. By~\Cref{lem:monox-conjunction-clauses}, there is some cube $e$ such that $b \models e$ and $\sigma \models e$ but $S \implies \neg e$. In particular, $\boundarypos{I} \implies \neg e$. Consider some shortest path between $\sigma,b$ in the Hamming cube. Because $\sigma \models I, b \not\models I$, there is some crossing point $\sigma^{+} \in \boundarypos{I}$ on that path. This state $\sigma^{+}$ agrees with $\sigma,b$ on the literals on which they agree, which include all the literals in $e$, since $e$ is a cube (a conjunction of literals). Hence also $\sigma^{+} \models e$, but this is a contradiction to $\boundarypos{I} \implies \neg e$.
\end{proof}
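The lemma can also be checked by brute force on a small instance. The sketch below is ours: the sets are invented, and the cubes in the basis $B$ are total assignments, like the counterexample states $\sigma_b$ collected by~\Cref{alg:dual-itp-termmin}.

```python
from itertools import product

def boundary_pos(I, n):
    """Inner boundary: states of I with a Hamming neighbor outside I."""
    def neighbors(s):
        return [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(n)]
    return {s for s in I if any(t not in I for t in neighbors(s))}

def in_moncube(tau, sigma, b):
    # tau is obtainable from sigma by flipping only bits on which sigma
    # agrees with the (total-assignment) cube b
    return all(s == c or t == s for t, s, c in zip(tau, sigma, b))

def mhull(S, B, n):
    univ = product([0, 1], repeat=n)
    return {tau for tau in univ
            if all(any(in_moncube(tau, s, b) for s in S) for b in B)}

# Invented instance: I = states of Hamming weight <= 1, B = weight >= 3.
n = 4
univ = set(product([0, 1], repeat=n))
I = {s for s in univ if sum(s) <= 1}
B = {s for s in univ if sum(s) >= 3}   # disjoint from I
hull = mhull(boundary_pos(I, n), B, n)
```

Here the boundary is the set of weight-1 states (the all-zeros state has all its neighbors inside $I$), and the hull of the boundary w.r.t.\ $B$ indeed contains all of $I$, as the lemma asserts.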
\iflong
We use this to prove the above claims:
\begin{proof}[Proof of~\Cref{lem:itp-fence-condition}]
We show that always $I \subseteq \varphi$, which implies that $\sigma_b \not\models I$ (because otherwise $\tr(\sigma_b) \subseteq I \subseteq \varphi$, in contradiction to the choice of $\sigma_b$), and because $\mathcal{R}_s \subseteq I$ this implies that $\sigma_b \not\in \mathcal{R}_s$ and so no restart is required. Because $\varphi$ strictly decreases in each iteration and the number of non-equivalent formulas is finite, this implies that the algorithm terminates with an inductive invariant.
First note that by \Cref{thm:mhull-conjunctive}, always $\varphi \in \mspan{\mathcal{C}_i}$. By induction on the iterations of the algorithm, always $\mathcal{R}_s \subseteq \varphi$, and hence by~\Cref{lem:mhull-monotonicity} always $\mhull{\mathcal{R}_s}{\mathcal{C}_i} \subseteq \mhull{\varphi}{\mathcal{C}_i} \equiv \varphi$.
We argue that $I \subseteq \varphi$ and $I \cap \mathcal{C}_i = \emptyset$, by induction.
Initially, $I \subseteq \neg \Bad$ because it is an inductive invariant, and $\mathcal{C}_0 = \emptyset$.
For a step, $\sigma_b \not\in I$ because otherwise $\tr(\sigma_b) \subseteq I \subseteq \varphi$.
To show that $I \subseteq \varphi \land c$, note that by the induction hypothesis $\mathcal{R}_s \subseteq I \subseteq \varphi$ and by construction $\mathcal{R}_s \implies c$. Hence, by~\Cref{lem:mhull-boundary}, $I \subseteq \mhull{\mathcal{R}_s}{\mathcal{C}_i \cup \set{\sigma_b}} \subseteq \varphi \land c$, as required.
\end{proof}
\begin{proof}[Proof of~\Cref{lem:eepdr-fence-condition}]
The set of states reachable in $s$ steps in $(\absr{\Init},\absr{\tr},\absr{\Bad})$ is $\Frameai_s$ of the Kleene iterations (\Cref{thm:hyperabsract-reach}).
We can apply~\Cref{lem:mhull-boundary}, because $\postimage{\tr}{\Frame_{s-1}}$ of $\Lambda$-PDR includes all $s$-reachable states (by properties~\ref{it:frames-start}--\ref{it:frames-end} in~\Cref{sec:overview-frame-props}) and $I \cap \bkwrch{k} = \emptyset$ since $I$ is an inductive invariant. We obtain that $\Frameai_s = \mhull{\postimage{\tr}{\Frameai_{s-1}}}{\bkwrch{k}}$ contains $I$. It cannot ``overshoot'' beyond $I$ due to~\Cref{lem:eepdr-lfp}.
Apply~\Cref{lem:eepdr-ai-iterations} for the connection to $\Lambda$-PDR.
\end{proof}
\else
Proofs of~\Cref{lem:itp-fence-condition} and~\Cref{lem:eepdr-fence-condition} are derived from~\Cref{lem:mhull-boundary} in the extended version~\cite{extendedVersion}.
\fi
In essence, these different criteria for when the forward-exploration of the algorithm is sufficient reflect the difference in how the algorithms generalize: per counterexample, both find a minimal clause that does not exclude states from some form of forward reachability, but in $\Lambda$-PDR this is an abstraction of forward reachability, whereas
\Cref{alg:dual-itp-termmin}
uses exact forward reachability.
This difference also manifests in different outcomes of~\Cref{alg:eepdr} and~\Cref{alg:dual-itp-termmin} on the running example of~\Cref{fig:skip-counter}.
For every $s < 2^n$ there is an execution of
~\Cref{alg:dual-itp-termmin}
that fails (\cref{ln:dual-itp-restart}) because it includes reachable states as counterexamples to exclude (for example, the first counterexample in the execution of~\Cref{alg:dual-itp-termmin} is $\sigma_b=(\vec{x}=10\ldots00,\vec{y} = 11\ldots10,z=1)$, which can be generalized to $c=(x_n=0)$ that inadvertently excludes also reachable states such as $\vec{x} = 10\ldots01,\vec{y} = 00\ldots00,z=0$),
although $s=\bigO(1)$ suffices for $\Lambda$-PDR (\Cref{ex:running-all-frames}).
Finally, we remark that~\Cref{alg:dual-itp-termmin} does use a form of successive overapproximation.
By repeatedly generating counterexamples to induction (\cref{ln:dual-itp-cex}), it in a sense uses reverse frames that overapproximate backward reachability.
While both~\Cref{alg:dual-itp-termmin} and~\Cref{alg:eepdr} learn lemmas by minimizing a term w.r.t.\ a forward-reachability analysis in order to block a counterexample from a backward-reachability analysis,
\Cref{alg:eepdr} employs successive overapproximation in the former analysis, and \Cref{alg:dual-itp-termmin} in the latter.
As we have seen, this successive overapproximation in counterexample generation is not sufficient for~\Cref{alg:dual-itp-termmin} to successfully infer an invariant for the example of~\Cref{fig:skip-counter}. However, it does alleviate the requirement that $I \in \bkwspan{k}$, which is necessary in~\Cref{lem:eepdr-fence-condition}
but not
in~\Cref{lem:itp-fence-condition}.\footnote{
The original, non-time-dual version of the algorithm has frames going forward, as in PDR, but the roles of backward- and forward-reachability in generalization are reversed. This algorithm ``overshoots'' on the example of~\Cref{fig:skip-counter} unless $s=\Omega(2^n)$, but we focus here on overapproximations that are too tight (rather than too loose), the direction in which $\Lambda$-PDR is informative of PDR.
}
\section{Between $\Lambda$-PDR and PDR: Best Abstraction and Even Better}
\label{sec:vs-pdr}
In each frame, $\Lambda$-PDR includes all possible generalizations,
which we have shown to amount in $\Frame_{i+1}$ to the best abstraction of $\postimage{\tr}{\Frame_i}$ in the abstract domain $\madom{\bkwrch{k}}$ (\Cref{lem:best-abstraction}).
\begin{changebar}
Its frames are thus the strongest (contain fewest states) that satisfy all the properties of frames listed in~\Cref{sec:overview-frame-props}---the standard ones as well as the monotone span of backward reachable states:
\begin{lemma}
\label{lem:lambda-frames-minimality}
The frames $\Frame_0,\Frame_1,\ldots$ of $\Lambda$-PDR are the least (w.r.t.\ $\implies$) s.t.\ for every $i$,
\begin{inparaenum}
\setcounter{enumi}{\getrefnumber{it:frames-init}-1}
\item $\Init \implies \Frame_0$,
\setcounter{enumi}{\getrefnumber{it:frames-monotone}-1}
\item $\Frame_i \implies \Frame_{i+1}$,
\setcounter{enumi}{\getrefnumber{it:frames-onestep-overapprox}-1}
\item $\tr({\Frame_i}) \implies \Frame_{i+1}$, and
\setcounter{enumi}{\getrefnumber{it:frames-mbasis}-1}
\item $\Frame_i \in \bkwspan{k}$.
\end{inparaenum}
\end{lemma}
\toolong{
\begin{proof}
That the frames of $\Lambda$-PDR satisfy the properties is immediate from the relationship $\Frame_0 = \Init$, $\Frame_{i+1} = \malpha{\bkwrch{k}}(\postimage{\tr}{\Frame_i})$. Minimality is from best abstraction (\Cref{lem:best-abstraction}) and induction on $i=0,1,\ldots$: let $\widetilde{\Frame}_0,\widetilde{\Frame}_1,\ldots$ be another sequence that satisfies the properties. By property~\ref{it:frames-init}, $\Init = \Frame_0 \implies \widetilde{\Frame}_0$. For the step, assume that $\Frame_i \implies \widetilde{\Frame}_i$. Then from properties~\ref{it:frames-monotone}~and~\ref{it:frames-onestep-overapprox}, $\postimage{\tr}{\widetilde{\Frame}_i} \implies \widetilde{\Frame}_{i+1}$, and in particular also $\postimage{\tr}{\Frame_i} \implies \widetilde{\Frame}_{i+1}$. From property~\ref{it:frames-mbasis}, $\widetilde{\Frame}_{i+1} \in \bkwspan{k}$. Putting these together, \Cref{lem:best-abstraction} implies that $\Frame_{i+1} = \malpha{\bkwrch{k}}(\postimage{\tr}{\Frame_i}) \implies \widetilde{\Frame}_{i+1}$, as required.
\end{proof}
}
In contrast to $\Lambda$-PDR, standard PDR ``samples'' counterexamples and generalizations, and it does not produce in $\Framepdr_{i+1}$ the least abstraction of $\postimage{\tr}{\Framepdr_i}$. Its frames are nevertheless characterized as abstractions (not necessarily the least abstraction) in the same domain:
\begin{lemma}
\label{lem:pdr-also}
At any point during the execution of $\mbox{PDR}(\Init,\tr,\Bad)$ (\Cref{alg:pdr}) when it has at most $N$ frames, $\Framepdr_i \in \bkwspan{N}$ for every $1 \leq i \leq N$.
\end{lemma}
\toolong{
\begin{proof}
In a call to $\textsc{block}(\sigma_b, i+1)$, it holds $\sigma_b \in \bkwrch{N-i}$, by induction on the recursive calls:
The first call $\textsc{block}(\sigma_b, N+1)$ in~\cref{ln:pdr-block-bad} has $\sigma_b \in \bkwrch{0}=\Bad$, by~\cref{ln:pdr-sample-bad}.
In each recursive call from $(\sigma_b,i+1)$ to $(\sigma,i)$, in~\cref{ln:pdr-back-sample-prestate} the new counterexample $\sigma$ reaches the counterexample in the parent call $\sigma_b$ in one step, so $\sigma_b \in \bkwrch{N-i}$ implies $\sigma \in \bkwrch{N-i+1} = \bkwrch{N-(i-1)}$ as required.
Since $\bkwrch{N-i} \subseteq \bkwrch{N}$, this ensures that $\sigma_b \in \bkwrch{N}$ in every call to $\textsc{block}(\sigma_b, i+1)$.
Hence, when the algorithm strengthens $\Framepdr_i$ in~\cref{ln:pdr-strengthen}, it is always with a clause $c$ such that $\sigma_b \not\models c$ where $\sigma_b \in \bkwrch{N}$.
This implies that $\Framepdr_i \in \bkwspan{N}$ (see~\Cref{sec:monotone-basis}), completing the proof.
\end{proof}
}
In particular, this shows that PDR overapproximates the frames that $\Lambda$-PDR generates:
\begin{corollary}
\label{cor:lambda-pdr-underapproximates-pdr}
At any point during the execution of $\mbox{PDR}(\Init,\tr,\Bad)$ (\Cref{alg:pdr}) when it has at most $N$ frames, its $i$'th frame, $\Framepdr_i$, satisfies $\Frame_i \implies \Framepdr_i$, where $\Frame_i$ is the $i$'th frame of $\mbox{$\Lambda$-PDR}(\Init,\tr,\Bad,N)$ (\Cref{alg:eepdr}).
\end{corollary}
\toolong{
\begin{proof}
The frames of~\Cref{alg:pdr} satisfy the properties in the premise of~\Cref{lem:lambda-frames-minimality}---all are standard except for the one shown in~\Cref{lem:pdr-also}.
A more direct argument, outlined in~\Cref{sec:overview-eepdr}, is that every lemma that PDR learns is also a lemma that $\Lambda$-PDR includes in its frames. Formally,
to strengthen $\Framepdr_i$, $c$ must block some $\sigma_b \in \bkwrch{N}$, and after $c$ is conjoined to the previous frame $\Framepdr_{i-1}$ we must have $\postimage{\tr}{\Framepdr_{i-1}} \implies c$; the latter implies, using the induction hypothesis that $\Frame_{i-1} \implies \Framepdr_{i-1}$, that also $\postimage{\tr}{\Frame_{i-1}} \implies c$. $\Lambda$-PDR conjoins \emph{all} such clauses; thus whenever $c$ is conjoined to $\Framepdr_i$, it is also conjoined to $\Frame_i$.
\end{proof}
}
In other words, PDR's frames also constitute some sort of search in the abstract domain $\madom{\bkwrch{N}}$ (though in a complex manner, refining previous frames, etc.), and its frames always generate at least as much overapproximation as $\Lambda$-PDR. Hence, our results that show significant overapproximation in $\Lambda$-PDR translate to PDR as well.
Still, the difference between the algorithms is significant---PDR's frames don't employ the best abstraction in this domain. How does this benefit PDR?
\end{changebar}
We show two
ways.
First, computing all generalizations may be inefficient. Second, it may not be desirable---it could lead to too precise abstraction and slow convergence.
\para{Inefficient frame size}
Consider a system over $n$ variables $x_1,\ldots,x_n$, with $\Init \ = \ x_1=\ldots=x_n=0$, $\Bad=x_1=\ldots=x_n=1$, and $\tr$ that non-deterministically chooses some $i \neq j$ with $x_i=x_j=0$ and sets $x_i \gets 1$.
We start with the analysis of $\Lambda$-PDR.
In this example, $\bkwrch{k} \ = \ 1\ldots1$ (for every $k$).
We argue that $\Frame_i$ is exactly the set $R_i$ of states reachable in at most $i$ steps, which is the set of states with at most $i$ bits $1$, denoted $\set{\vec{x} \mid \#1(\vec{x}) \leq i}$.
This can be seen by induction:
initially, this holds for $\Frame_0=\Init=\set{0\ldots0}$.
In each step $\tr(\Frame_i) = R_i \cup \set{\vec{x} \mid \#1(\vec{x})=i+1}$. Then
$\Frame_{i+1}=\mhull{\postimage{\tr}{\Frame_i}}{\bkwrch{k}}=\monox{\postimage{\tr}{\Frame_i}}{1\ldots1} = R_i \cup \monox{\set{\vec{x} \mid \#1(\vec{x})=i+1}}{1\ldots1} = R_{i+1}$, because $\monox{\set{\vec{x} \mid \#1(\vec{x})=i+1}}{1\ldots1}$ adds states that are obtained from a state with $\#1(\vec{x})=i+1$ by flipping $1$'s to $0$'s, resulting in states with smaller values of $\#1(\vec{x})$ that are already included in $R_i$.
Unfortunately, the set $R_{\left\lfloor{n/2}\right\rfloor+1}$ is expressible in neither polynomial-size CNF nor DNF.\footnote{
It is the majority function, which is not in $\mbox{AC}^0$~\cite{DBLP:conf/stoc/Hastad86}, a complexity class that includes poly-size CNF and DNF.
}
This means that some of $\Lambda$-PDR's frames need an exponential number of clauses, and so construct an exponential number of generalizations of the bad state. Even an alternative DNF computation (based on~\Cref{lem:bshouty-mon-mindnf}) would not fare better.
In contrast, $\Framepdr_i$ consists of a single clause blocking the bad state, which is short.
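The derivation of $\Frame_i = R_i$ above can be confirmed by brute force for small $n$. The sketch below is an illustrative simplification (it folds the previous frame into the post-image to keep the frames monotone, and represents monotonization toward $1\ldots1$ as closure under flipping $1$-bits to $0$); none of the helper names come from the paper.

```python
from itertools import product

def post(states, n):
    """One-step image: pick i != j with x_i = x_j = 0 and set x_i to 1."""
    out = set()
    for s in states:
        zeros = [i for i in range(n) if s[i] == 0]
        if len(zeros) >= 2:  # tr requires two distinct zero coordinates
            for i in zeros:
                t = list(s)
                t[i] = 1
                out.add(tuple(t))
    return out

def monox_all_ones(states, n):
    """Monotonization toward 1...1: close the set under flipping 1s to 0s."""
    out, frontier = set(states), set(states)
    while frontier:
        nxt = set()
        for s in frontier:
            for i in range(n):
                if s[i] == 1:
                    t = list(s)
                    t[i] = 0
                    t = tuple(t)
                    if t not in out:
                        out.add(t)
                        nxt.add(t)
        frontier = nxt
    return out

n = 5
frame = {tuple([0] * n)}  # Frame_0 = Init
frames = [frame]
for _ in range(n):
    frame = monox_all_ones(frame | post(frame, n), n)
    frames.append(frame)
```

Enumerating all states confirms that $\Frame_i$ contains exactly the states with at most $\min(i, n-1)$ ones (the all-ones state is unreachable, since a state with a single zero has no outgoing transition).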
\para{Slow convergence}
\label{sec:slow-convergence}
Consider a counter over $\vec{x}=x_n,x_{n-1},\ldots,x_0$ with $\Init = (\vec{x}=0\ldots0)$, $\Bad=(\vec{x}=1\ldots1)$, and $\tr$ that increments the counter except when $\vec{x}=1\ldots10$, which skips the bad state and wraps around to $0\ldots0$.
We start with the analysis of $\Lambda$-PDR. Similar to the previous example, $\bkwrch{k} \ = \ 1\ldots1$ (for every $k$) and $\Frame_i = \mathcal{R}_i$, except that $\mathcal{R}_i$ is now
the set of states $\vec{x} \leq i$, because $\tr(\Frame_i)$ always adds the state $\vec{x}=i+1$, and its $1\ldots1$-monotonization adds only states with smaller values of $\vec{x}$ which are already included in $\mathcal{R}_i$ (the derivation is similar to the previous example).
Therefore, the frames $\Frame_i = \set{\vec{x}: \ \vec{x} \leq i}$ do not converge until $i=2^n-1$, which means that $\Lambda$-PDR converges only after an exponential number of frames (even though each of these frames can be expressed in linear-size CNF).
In contrast, in this example, PDR always converges in a \emph{linear} number of frames.
The proof uses the fact from~\Cref{lem:pdr-also} that the frames of PDR are $(1\ldots1)$-monotone, and that $\Framepdr_i$ is always exactly one clause, because it blocks the single backward reachable state using a single lemma. Since $\Framepdr_i \implies \Framepdr_{i+1}$, and $\Framepdr_i$ is $(1\ldots1)$-monotone, the clause that is $\Framepdr_{i+1}$ must be a syntactic subset of the clause that is $\Framepdr_i$~\cite{quine1954two}. Until they converge, the difference between two successive frames must be that some literals are omitted from the clause, which can happen at most $n$ times.
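The slow convergence of $\Lambda$-PDR on this skip-counter can likewise be checked by brute force for small $n$: the frames grow by one counter value per iteration and stabilize only at $\set{0,\ldots,2^n-2}$. As before, this is an illustrative sketch with our own helper names, representing monotonization toward $1\ldots1$ as closure under clearing $1$-bits.

```python
def post(states, n):
    """Increment, except 1...10 skips Bad = 1...1 and wraps to 0."""
    out = set()
    top = (1 << n) - 1
    for x in states:
        if x == top - 1:
            out.add(0)
        elif x != top:
            out.add(x + 1)
    return out

def monox_all_ones(states):
    """Downward closure under bitwise <=, i.e. flipping 1-bits to 0."""
    out, frontier = set(states), set(states)
    while frontier:
        nxt = set()
        for x in frontier:
            b = x
            while b:
                low = b & -b          # lowest set bit
                y = x & ~low
                if y not in out:
                    out.add(y)
                    nxt.add(y)
                b &= b - 1
        frontier = nxt
    return out

n = 4
frame, frames = {0}, [{0}]
while True:
    new = monox_all_ones(frame | post(frame, n))
    if new == frame:
        break
    frame = new
    frames.append(frame)
```

The check confirms both claims in the text: $\Frame_i = \set{0,\ldots,i}$ (monotonization of $i{+}1$ only adds bitwise-smaller, hence numerically smaller, values), and convergence takes $2^n - 1$ frames.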
\section{Related Work}
\label{sec:related}
\para{PDR as abstract interpretation}
This work is not the first to study the relation between PDR and abstract interpretation. \citet{DBLP:conf/vmcai/RinetzkyS16} prove that the reachable configurations of PDR are in simulation with the reachable states of a non-standard backward trace semantics. Their work studies standard PDR as non-standard abstract interpretation, whereas we study non-standard PDR as standard abstract interpretation (in a new domain); our domain abstracts the simpler collecting states semantics with standard forward iterations. Our work emphasizes the overapproximation inherent in the abstraction, where, in particular, the abstraction forces overapproximation in the sequence of frames, whereas
Rinetzky and Shoham's property-guided Cartesian trace semantics domain is precise enough to express any
sequence of frames that satisfy properties~\ref{it:frames-start}--\ref{it:frames-end} from~\Cref{sec:overview-frame-props}. In contrast, adding property~\ref{it:frames-mbasis} characterizes $\Lambda$-PDR as Kleene iterations in our $\madom{\bkwrch{k}}$ domain.
\para{Abstract transition systems}
\citet{DBLP:journals/toplas/DamsGG97} construct, from a transition system and a Galois connection, abstract transition systems that preserve safety and other temporal properties. These are defined over a state space of abstract elements (e.g., formulas in the case of a logical domain), forming abstract edges between abstract elements through $\exists\exists$ or $\forall\exists$ relations of original transitions between the concretizations. It is important for our diameter bounds from the DNF representation of the abstract (hyper)transition system that it is defined over the original state space (\Cref{def:abs-tr,def:abs-hypertr}), which is possible due to the special structure of $\madom{B}$ (see~\Cref{thm:absract-reach,thm:hyperabsract-reach}).
In that respect our abstract transition systems are closer to monotonic abstraction in well-structured transition systems by~\citet{DBLP:journals/ijfcs/AbdullaDHR09}, the abstract transition systems for universally-quantified uninterpreted domains by~\citet{DBLP:conf/popl/PadonISKS16}, and the surjective abstraction games of~\citet{DBLP:conf/vmcai/FecherH07}.
\para{Diameter bounds}
Diameter bounds have been studied in the context of completeness thresholds for bounded model checking~\cite{DBLP:conf/tacas/BiereCCZ99,DBLP:conf/vmcai/KroeningS03}.
The recurrence diameter~\cite{DBLP:conf/dac/BiereCCFZ99,DBLP:conf/vmcai/KroeningS03}, the longest loop-free path, was studied as a more easily-computable upper bound on the diameter. In our setting, this measure cannot be reduced by the abstraction, which only adds transitions.
There are also works that encode the completeness threshold assumption as another verification condition~\cite[see][\S IV.D]{DBLP:journals/tcad/DSilvaKW08}.
Another line of work computes diameter bounds by a composition of diameter bounds of subsystems formed by separating dependencies between variables in the system's actions~\cite{DBLP:journals/jar/AbdulazizNG18,DBLP:conf/cav/BaumgartnerKA02,DBLP:conf/ijcai/RintanenG13}. Existing works have considered guarded-update actions, in which variables are either modified to a constant value or remain unchanged; this is not directly applicable to the actions that arise in our abstract transition systems, where monotonization in effect ``havocs'' variables. Havocked variables are different because in a transition, they \emph{can} change, but not \emph{necessarily};
weaker notions of dependence to capture this may be interesting in future work.
We are not aware of a previous application of the $\dnfsize{\tr}$ diameter bound (\Cref{lem:diam-dnf}).
This bound is never worsened by monotonization, as $\dnfsize{\monox{\tr}{\ldots}} \leq \dnfsize{\tr}$~\cite[][and a corollary of~\Cref{lem:bshouty-mon-mindnf}]{DBLP:journals/iandc/Bshouty95}, and can be exponentially smaller, as e.g.\ in~\Cref{sec:overview-diameter-bound}.
The diameter bounds by~\citet{DBLP:conf/popl/KonnovLVW17,DBLP:conf/concur/KonnovVW14} share with our work the motivation of analyzing the diameter of abstractions of the original system. They rely on the special structure of counter abstractions of fault-tolerant distributed systems to apply movers~\cite{DBLP:journals/cacm/Lipton75} and acceleration.
\para{Complexity of invariant inference algorithms}
Houdini~\cite{DBLP:conf/fm/FlanaganL01} infers conjunctive invariants in a linear number of SAT calls, and Dual Houdini likewise for disjunctive invariants~\cite{DBLP:conf/cade/LahiriQ09}.
\citet{DBLP:journals/pacmpl/FeldmanSSW21} analyze the complexity of an interpolation-based inference algorithm based on the fence condition, which we compare with our results in~\Cref{sec:dual-itp-compare}.
The work by~\citet{DBLP:conf/mbmv/SeufertS17} includes a complexity analysis of all executions of PDR on the case of two synchronized $n$-bit counters, where PDR requires an exponential number of SAT calls (this also follows from the fact that the only CNF invariant is exponentially-large) but an enhanced time-dual version of it converges in one frame.
Convergence in one frame is also proved for maximal transitions systems with monotone invariants~\cite{DBLP:journals/pacmpl/FeldmanISS20}.
In~\Cref{sec:slow-convergence}
we go beyond this with an analysis of standard PDR on a simple example where convergence requires multiple frames.
Our analysis of $\Lambda$-PDR centered on the number of frames, not the complexity of constructing them, which is an interesting direction for future work.
Although, in the spirit of~\Cref{sec:abstract-diameter-all}, we can bound $\dnfsize{\Frame_i} \leq \dnfsize{\abs{\tr}} = \dnfsize{\monox{\tr}{\bkcube}}$ (when $\bkcube=\bkwrch{k}$ is a cube), the original $\Lambda$-algorithm's complexity analysis~\cite{DBLP:journals/iandc/Bshouty95} for computing $\monox{\varphi}{\bkcube}$ depends on $\dnfsize{\varphi}$, not $\dnfsize{\monox{\varphi}{\bkcube}}$, which in our setting is the difference between the concrete and the significantly reduced abstract diameter.
\para{The monotone theory in invariant inference}
The monotone theory~\cite{DBLP:journals/iandc/Bshouty95} has been used for other purposes in invariant inference.
\citet{DBLP:journals/mscs/JungKDWY15} use Bshouty's CDNF learning algorithm to infer predicate abstraction invariants, employing over- and under-approximations to resolve membership queries, sometimes relying on random guesses.
\citet{DBLP:journals/pacmpl/FeldmanSSW21} use Bshouty's $\Lambda$-algorithm for provably-efficient inference of an invariant whose boundary is $s$-reachable (cf.~\Cref{lem:itp-fence-condition}) and that belongs to $\mspan{B}$ when $B$ is known a-priori.
\citet{DBLP:conf/cav/ChenCFTTW10} use the CDNF algorithm for automatic generation of contextual assumptions in assume-guarantee reasoning.
\iflong\else
\pagebreak
\fi
\section{Conclusion}
\label{sec:conclusion}
This work has distilled a previously unknown principle of property-directed reachability.
Through $\Lambda$-PDR and its analysis based on the monotone theory from exact learning, we have shown that PDR overapproximates an abstract interpretation process in a new logical abstract domain.
We have further shown how this abstraction achieves a significantly more effective forward reachability exploration than approaches that use exact post-image computations or bounded unrollings, and how this can partially be explained through the difference between diameter bounds between the original system and its abstraction.
In future work, it will be interesting to understand the mechanisms by which PDR deviates from naive backward reachability, avoiding the pitfall in the other direction, of overapproximating too much.
We hope that this will eventually lead to efficient complexity results for PDR itself.
\begin{changebar}
It will also be interesting to study variants of PDR that target infinite-state using richer logics beyond propositional logic. Our observation that there is inherent abstraction in PDR due to states it \emph{cannot} exclude from a frame may also be relevant in such settings. This could also involve extensions of the monotone theory to other logics, which to our knowledge have not been attempted.
\end{changebar}
\begin{acks}
\iflong
\else{\small
\fi
We thank our shepherd and the anonymous reviewers for comments which improved the paper.
We thank Mohammad Abdulaziz, Aman Goel, Alexander Ivrii, Noam Parzanchevski, Hila Peleg, and Noam Rinetzky for insightful discussions and comments.
The research leading to these results has received funding from the
European Research Council under the European Union's Horizon 2020 research and
innovation programme (grant agreement No [759102-SVIS]).
This research was partially supported by the United States-Israel Binational Science Foundation (BSF) grant No.\ 2016260, and the Israeli Science Foundation (ISF) grant No.\ 1810/18.
\iflong
\else
}
\fi
\end{acks}
\section*{Acknowledgments}
Research reported in this publication was supported by the Eunice Kennedy Shriver National Institute of Child Health \& Human Development of the National Institutes of Health under Award Number R03HD101083. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. \vspace*{-8pt}
{
\section{Analysis of Pediatric Functional Status Data}\label{sec:application}
Our motivating study consists of 1897 children, adolescents, and young adults who were admitted to an inpatient rehabilitation unit at a major Midwestern children's hospital with a diagnosis of neurologic injury or illness between the year 2000 and August of 2020. The sample consisted of individuals ranging in age from six months to 32 years, with a median age of 12 years and 1st and 3rd quartiles of 6 and 16 years, respectively. The covariates used as predictors (numbering 608 in total) included basic sociodemographic information such as age and sex, and billing codes including CPT codes, ICD-9 and -10 codes, pharmaceutical codes, and data indicating durable medical equipment (DME) use. The billing codes represent information from a single episode of care involving an admission to an inpatient rehabilitation unit. The billing code information pertains only to information collected prior to the WeeFIM assessment. CPT and all ICD-9 and -10 codes were linked to Concept Unique Identifiers (CUIs) using the UMLS metathesaurus mapping system \citep{bodenreider2004unified}. CUIs are distinct medical concepts (codes, diseases, etc.) identified by the UMLS metathesaurus. By linking all billing codes to CUIs, we are able, in many cases, to handle the switch-over from ICD-9 to ICD-10 codes. An added benefit of CUIs is that they can be linked to other coding systems.
We utilize all 18 WeeFIM\textregistered{} components as responses; these 18 components are divided into three domains. The self-care domain describes how well a child is able to feed themselves, groom, bathe, dress, and complete toileting tasks including the management of their bowel and bladder. The mobility domain describes how well a child is able to transfer on and off a toilet, in and out of a bathtub or shower, and in and out of a chair or wheelchair. The mobility domain also describes a child's ability to walk, crawl, or use a wheelchair, and to move up or down stairs. The cognition domain describes how well a child can express themselves, understand information, interact with peers, solve daily problems, and recall information. Together, they describe the ability of children to function in routine and important aspects of daily life.
As our motivating use case of developing a model for the WeeFIM\textregistered{} components is to apply it to future data to assess functional ability across a health system population, we validate all developed models by splitting data into training and validation sub-datasets, where the training dataset is from years prior to data from the validation dataset. We use data prior to 2018 for training and data from 2018 to 2020 as validation data, leaving 1592 observations for training and 305 observations for validation. For validation, we compare methods in terms of the response-specific mean squared error for each response and the average validated mean squared error across all 18 responses. We also compute the corresponding validation R-squared values, i.e. the proportion of the validation responses explained by the out-of-sample predictions.
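For concreteness, the validation metrics we report can be computed as below. The function name and array layout are our own; the R-squared is the standard out-of-sample definition, $1 - \mathrm{SSE}/\mathrm{SST}$ with $\mathrm{SST}$ taken about the validation mean, which is one natural reading of ``the proportion of the validation responses explained.''

```python
import numpy as np

def validation_metrics(Y_val, Y_hat):
    """Per-response validation MSE and R-squared, plus their averages.
    Y_val, Y_hat: (n_obs, n_responses) arrays (here, 18 responses)."""
    resid = Y_val - Y_hat
    mse = (resid ** 2).mean(axis=0)
    sst = ((Y_val - Y_val.mean(axis=0)) ** 2).mean(axis=0)
    r2 = 1.0 - mse / sst
    return mse, r2, mse.mean(), r2.mean()
```

A small worked example: with two responses, perfectly off-by-one predictions give per-response MSE of 1, and R-squared determined by the validation variance of each response.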
We exclude any covariates that have no variation in either the training or validation datasets. After this screening, the combined dimensionality of all the predictors was 608, a high-dimensional scenario given the sample size.
We apply all methods described in Section \ref{sec:comparator_methods} and use the approaches described therein for selection of tuning parameters. For OGFM(adapt), we use $\gamma_1=\gamma_2=0.5$, as use of $\gamma_1=\gamma_2=1$ resulted in some extreme weights given the high dimensionality of the data. For both the OGFM approaches and MSGL, the group lasso penalty applied corresponds to the three pre-defined domains of the WeeFIM\textregistered{} components (self-care, mobility, and cognition).
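The sensitivity to $\gamma_1,\gamma_2$ can be illustrated with the usual adaptive-lasso weighting, $w_j = 1/(|\tilde\beta_j|+\epsilon)^{\gamma}$: when a pilot estimate $\tilde\beta_j$ is very small, $\gamma=1$ produces a far more extreme weight than $\gamma=0.5$. The exact weight construction used by OGFM(adapt) is not given in this section, so this function is an assumption for illustration only.

```python
import numpy as np

def adaptive_weights(beta_pilot, gamma, eps=1e-8):
    """Penalty weights of the usual adaptive-lasso form,
    w_j = 1 / (|beta_j| + eps)^gamma; coefficients with small pilot
    estimates are penalized more, and gamma controls how sharply."""
    return 1.0 / (np.abs(beta_pilot) + eps) ** gamma

beta_pilot = np.array([2.0, 0.5, 1e-3])  # hypothetical pilot estimates
w_half = adaptive_weights(beta_pilot, 0.5)
w_one = adaptive_weights(beta_pilot, 1.0)
```

For the near-zero pilot estimate, $\gamma=1$ yields a weight near $10^3$ while $\gamma=0.5$ yields one near $30$, mirroring the extreme-weight issue noted in the text.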
The validation MSEs and R-squared values are displayed in Tables \ref{tab:validation_mse} and \ref{tab:validation_rsq}, respectively. Our OGFM approach has the best performance on the validation data on average across the 18 responses and also most often performs best for the individual responses, with the adaptive version of OGFM achieving the second-lowest validation MSE on average. For responses where OGFM does not perform best, its performance most often does not differ much from the best MSE across the remaining methods, excluding OGFM(adapt). When OGFM performs best for particular responses, its improvement in terms of MSE and R-squared is often large, as is depicted in Figure \ref{fig:mse_rsq_diffs}, which shows the difference between OGFM and competing approaches in terms of MSE on the validation data.
For both OGFM approaches, the fused lasso mixing parameter $\alpha$ was chosen by cross-validation to be $1\times10^{-5}$, indicating only a small amount of fused lasso penalization was required by the data. However, even this small fused lasso penalty had the effect of shrinking a large number of coefficients either close to each other or exactly to each other. In Figure \ref{fig:weefim_coefs_example}, we display the coefficient paths versus $\lambda$ (described in Section \ref{sec:impl_details}) for 6 variables, where each plot is the coefficient path for a particular variable across the 18 outcomes. For each of the coefficient path plots, we fix the mixing tuning parameter $\alpha$ at the value that minimizes the cross-validation error. The patterns of these coefficient paths demonstrate that our approach fuses coefficients for a variable to be the same across multiple responses when appropriate, allows effects to differentiate when warranted by the data, and allows for joint selection of effects across an entire domain of related responses. For the ICD code ``shock, unspecified'', the fused lasso grouped the effects for all cognition outcomes together along the entire path, all mobility coefficients together, and grouped the self-care coefficients into two groups. As can be seen for other variables, different amounts of fusing occurred for different groups of outcomes, and for some outcomes effects were shrunk towards each other but without exact fusing.
\begin{table}[ht]
\centering
\resizebox{0.8\textwidth}{!}{%
\begin{tabular}{r|r|rrrrrrrrrr}
\toprule
& Domain & \multicolumn{8}{c}{Self-Care WeeFIM Responses} & \\
\cmidrule{2-10}
Method & Average & Ea & G & B & D-U & D-L & T-g & Bl-M & Bo-M & \\
\midrule
OGFM & \textbf{2.153} & 2.074 & \textbf{1.816} & 2.034 & 1.673 & \textbf{2.099} & \textbf{2.052} & \textbf{3.533} & \textbf{3.492} & \\
OGFM(adapt) & 2.165 & \textbf{2.070} & 1.824 & \textbf{2.032} & \textbf{1.663} & 2.112 & 2.165 & 3.573 & 3.521 & \\
MRCE & 2.185 & 2.155 & 1.864 & 2.093 & 1.674 & 2.117 & 2.093 & 3.735 & 3.638 & \\
MSGL & 2.222 & 2.145 & 1.860 & 2.149 & 1.712 & 2.218 & 2.190 & 3.729 & 3.617 & \\
Sep-Lasso & 2.213 & 2.142 & 1.860 & 2.189 & 1.672 & 2.225 & 2.148 & 3.788 & 3.641 & \\
\cmidrule{2-12}
& Domain & \multicolumn{5}{|c|}{Mobility WeeFIM Responses} & \multicolumn{5}{c}{Cognition WeeFIM Responses} \\
& &C-W & T & T-S & W-W & \multicolumn{1}{r|}{S} & C & Ex & S-I & P-S & M \\
\cmidrule{3-12}
OGFM && \textbf{1.483} & \textbf{1.770} & 1.811 & 1.344 & \multicolumn{1}{r|}{2.169} & 2.140 & 2.165 & 2.159 & 2.537 & \textbf{2.395} \\
OGFM(adapt) && 1.492 & 1.792 & 1.808 & 1.336 & \multicolumn{1}{r|}{2.187} & 2.155 & 2.159 & 2.180 & 2.495 & 2.407 \\
MRCE && 1.571 & 1.811 & 1.889 & 1.331 & \multicolumn{1}{r|}{\textbf{2.164}} & \textbf{2.126} & \textbf{2.081} & \textbf{2.140} & \textbf{2.446} & 2.399 \\
MSGL && 1.494 & 1.857 & \textbf{1.790} & 1.305 & \multicolumn{1}{r|}{2.182} & 2.156 & 2.185 & 2.264 & 2.685 & 2.453 \\
Sep-Lasso && 1.539 & 1.809 & 1.915 & \textbf{1.286} & \multicolumn{1}{r|}{2.186} & 2.178 & 2.152 & 2.174 & 2.491 & 2.432 \\
\bottomrule
\end{tabular}%
}
\caption{Mean squared errors (MSEs) on the 18 WeeFIM components in the validation data and average MSE across the 18 components. Bold indicates the smallest MSE across the different methods for a particular component. Ea: Eating, G: Grooming, B: Bathing, D-U: Dressing Upper, D-L: Dressing Lower, T-g: Toileting, Bl-M: Bladder Management, Bo-M: Bowel Management, C-W: Chair Wheelchair, T: Toilet, T-S: Tub Shower, W-W: Walk Wheelchair, S: Stairs, C: Comprehension, Ex: Expression, S-I: Social Interaction, P-S: Problem Solving, M: Memory.}
\label{tab:validation_mse}
\end{table}
\begin{figure}[!htpb]
\includegraphics[width=1\textwidth]{figures/mse_difference_plot_codes_analysis.pdf}
\caption{Comparisons of the validation MSE for each of the 18 WeeFIM components and on average (left) across the 18 components}
\label{fig:mse_rsq_diffs}
\end{figure}
\begin{table}[ht]
\centering
\resizebox{0.8\textwidth}{!}{%
\begin{tabular}{r|r|rrrrrrrrrr}
\toprule
& Domain & \multicolumn{8}{c}{Self-Care WeeFIM Responses} & \\
\cmidrule{2-10}
Method & Average & Ea & G & B & D-U & D-L & T-g & Bl-M & Bo-M & \\
\midrule
OGFM & \textbf{0.432} & 0.526 & \textbf{0.536} & 0.476 & 0.446 & \textbf{0.447} & \textbf{0.498} & \textbf{0.449} & \textbf{0.429} & \\
OGFM(adapt) & 0.429 & \textbf{0.527} & 0.534 & \textbf{0.477} & \textbf{0.449} & 0.444 & 0.470 & 0.443 & 0.425 & \\
MRCE & 0.425 & 0.507 & 0.524 & 0.461 & 0.446 & 0.442 & 0.488 & 0.418 & 0.405 & \\
MSGL & 0.416 & 0.509 & 0.525 & 0.447 & 0.433 & 0.416 & 0.464 & 0.419 & 0.409 & \\
Sep-Lasso & 0.419 & 0.510 & 0.525 & 0.437 & 0.446 & 0.414 & 0.474 & 0.410 & 0.405 & \\
\cmidrule{2-12}
& Domain & \multicolumn{5}{|c|}{Mobility WeeFIM Responses} & \multicolumn{5}{c}{Cognition WeeFIM Responses} \\
& & C-W & T & T-S & W-W & \multicolumn{1}{r|}{S} & C & Ex & S-I & P-S & M \\
\cmidrule{3-12}
OGFM && \textbf{0.433} & \textbf{0.522} & 0.447 & 0.287 & \multicolumn{1}{r|}{0.298} & 0.410 & 0.405 & 0.400 & 0.375 & \textbf{0.384} \\
OGFM(adapt) && 0.430 & 0.516 & 0.448 & 0.290 & \multicolumn{1}{r|}{0.292} & 0.406 & 0.406 & 0.394 & 0.385 & 0.381 \\
MRCE && 0.400 & 0.510 & 0.423 & 0.293 & \multicolumn{1}{r|}{\textbf{0.299}} & \textbf{0.414} & \textbf{0.428} & \textbf{0.405} & \textbf{0.397} & 0.383 \\
MSGL && 0.429 & 0.498 & \textbf{0.454} & 0.307 & \multicolumn{1}{r|}{0.294} & 0.406 & 0.399 & 0.371 & 0.338 & 0.369 \\
Sep-Lasso && 0.412 & 0.511 & 0.415 & \textbf{0.317} & \multicolumn{1}{r|}{0.292} & 0.400 & 0.408 & 0.396 & 0.386 & 0.374 \\
\bottomrule
\end{tabular}%
}
\caption{R-squared values on the 18 WeeFIM components in the validation data and average R-squared value across the 18 components. Bold indicates the largest R-squared value across the different methods for a particular component.}
\label{tab:validation_rsq}
\end{table}
The coefficient paths in Figure \ref{fig:weefim_coefs_example} show that our approach not only yields benefits in predictive performance but also produces clinically sensible estimated coefficients. We chose these six variables to illustrate the effects of several classes of billing codes in the data: ICD codes, DME codes, pharmaceutical codes, and CPT codes. The three panels on the left of Figure \ref{fig:weefim_coefs_example} show examples of diagnostic codes from billing data. The top panel displays the functional domains' coefficient paths for children coded with autistic disorder. Children with autism spectrum disorders are diagnosed based on deficits in social communication and behavior. The degree of impairment in social skills, together with comorbid neurodevelopmental disorders, often leads to functional deficits in self-care; mobility is typically unaffected in children with autism. Another example is the diagnosis code for shock, a life-threatening condition in which the body is not getting enough blood flow, resulting in end-organ damage, including damage to the brain. Consistent with this, the coefficient path plots show that the diagnosis code for shock is most strongly related to the cognition/communication domain, compared with the mobility and self-care domains. Hypothyroidism is a disorder caused when the thyroid gland does not make enough hormone; in children it has numerous etiologies, including autoimmune reactions, brain injury, and radiation treatment, and it causes slow growth, decreased strength, and impaired cognition. The associated plot shows some relationship to the self-care and mobility domains, but a stronger relationship to the cognition/communication domain. The three panels on the right show billing codes for a medication, a durable good, and a procedure.
Methylphenidate hydrochloride, a stimulant medication used to treat attention deficit hyperactivity disorder, shows minimal relationships with mobility and self-care; however, a pronounced relationship with cognition/communication is observed, as would be expected for a child being treated with this medication. An example of durable medical equipment is enteral feeding supplies, which are ordered for patients who require feeding through a tube inserted into the stomach. The corresponding plot shows a clear relationship between feeding supplies and eating outcomes, but a markedly weaker relationship with the other functional outcome domains. Mechanical chest wall oscillation is used in patients who have impaired lung function, such as children with cystic fibrosis or other diseases that lead to general physical debilitation or loss of strength. The plot demonstrates that this procedure code is associated with mobility and self-care limitations, as would be expected, but does not relate to cognition/communication.
\begin{figure}[!htpb]
\includegraphics[width=1\textwidth]{figures/coef_plot_outcomes_6vars_mixed_final_analysis_model_codes.pdf}
\caption{Coefficient path plots across all 18 WeeFIM components for six variables from the pediatric functional status data. The dashed vertical line indicates the value of $\lambda$ that minimizes the cross-validation error. }
\label{fig:weefim_coefs_example}
\end{figure}
\subsection{Asymptotic Properties}\label{sec:asymptotics}
We now present new asymptotic results for our penalization methods pertaining to semiparametric linear models. We show results for a more general, encompassing form of the overlapping group lasso penalty $P_1$ and the fused lasso penalty $P_2$. We allow $P_1(\boldsymbol \beta) = \sum_{j=1}^p\sum_{G\in\mathcal{G}}\lambda_{1,j,G}||\boldsymbol \beta_{j,G}||_2$ for an arbitrary, potentially overlapping group structure $\mathcal{G}$ and we allow $P_2(\boldsymbol \beta) = \sum_{j=1}^p\sum_{(l,o) \in \mathcal{F}} \lambda_{2,j,l,o}|{\beta}_{j,l} - {\beta}_{j,o}|$ for an arbitrary fusing set $\mathcal{F}$ that contains pairs of indices of coefficients to be fused, where the elements of $\mathcal{F}$ are of the form $(l,o)$ with $l,o\in\mathcal{K}$. The penalty form introduced in Section \ref{sec:methods_def} is a special case of the form we study in this section, as our Theorem \ref{thm:linear_model_oracle_property_mis} applies to any group structure, overlapping or not.
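To make the two penalties concrete, the following sketch evaluates $P_1$ and $P_2$ for a small coefficient matrix. The toy group structure, fusing set, and unit weights are our own illustrative choices, not the groups used in the application.

```python
import numpy as np

def P1(beta, groups):
    """Overlapping group lasso penalty: sum over variables j and groups G
    of ||beta_{j,G}||_2 (unit weights lambda_{1,j,G} = 1 for illustration)."""
    p, _ = beta.shape
    return sum(np.linalg.norm(beta[j, sorted(G)])
               for j in range(p) for G in groups)

def P2(beta, fuse_pairs):
    """Fused lasso penalty: sum over variables j and pairs (l, o) in the
    fusing set of |beta_{j,l} - beta_{j,o}| (unit weights for illustration)."""
    p, _ = beta.shape
    return sum(abs(beta[j, l] - beta[j, o])
               for j in range(p) for (l, o) in fuse_pairs)

# Toy example: p = 2 variables, K = 4 responses (0-based response indices).
beta = np.array([[1.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 2.0, 2.0]])
groups = [{0, 1, 2, 3},        # the trivial group: all responses jointly
          {0, 1}, {2, 3},      # a domain-level grouping
          {0}, {1}, {2}, {3}]  # individual-response groups
fuse_pairs = [(0, 1), (2, 3)]  # fuse effects within each domain

p1_val = P1(beta, groups)      # 6 + 6*sqrt(2) for this beta
p2_val = P2(beta, fuse_pairs)  # 0: equal coefficients incur no fusion cost
```

Note how a variable whose effects are exactly equal within a fused pair contributes nothing to $P_2$, which is what drives the fusion behavior of the estimator.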
We also denote the set of all pairs of coefficients which are equal and non-zero for the $j$th variable by $\mathcal{E}_{j,\cdot} = \{ (l,m) \in \mathcal{F}: \beta_{j,l}^0 = \beta_{j,m}^0 \neq 0 \}$. Similarly, define $\hat{\mathcal{E}}_{j,\cdot} = \{ (l,m) \in \mathcal{F} : \hat{\beta}_{j,l} = \hat{\beta}_{j,m} \neq 0 \}$. Denote the union of these sets over all variables as $\mathcal{E} = \bigcup_{j=1}^p\mathcal{E}_{j,\cdot}$ and $\hat{\mathcal{E}} = \bigcup_{j=1}^p\hat{\mathcal{E}}_{j,\cdot}$.
To facilitate our explanation of the asymptotic results, for any vector ${\bf a}_{\cdot,k}\in \mathbbm{R}^p$ associated with response $k$ and an index set ${\mathcal{I}}_{\cdot,k}\subset\{1,\dots,p\}$ of size $|{\mathcal{I}}_{\cdot,k}|$ associated with response $k$, ${\bf a}_{{\mathcal{I}}_{\cdot,k}}$ denotes the $|{\mathcal{I}}_{\cdot,k}|$-dimensional sub-vector of ${\bf a}_{\cdot,k}$ with elements indexed by ${\mathcal{I}}_{\cdot,k}$.
Now let $\widetilde{\boldsymbol X} = \boldsymbol I_K\ \otimes \boldsymbol X = \mbox{diag}(\boldsymbol X, \dots, \boldsymbol X)$ be the block diagonal design matrix with $K$ blocks, one for each outcome, with the $k$th block as the design matrix $\boldsymbol X$, where $\otimes$ is the Kronecker product; similarly let $\widetilde{\boldsymbol \beta}^0 = \text{vec}(\boldsymbol \beta^0) = ({\boldsymbol \beta^0_{\cdot,1}}^\top, \dots, {\boldsymbol \beta^0_{\cdot,K}}^\top)^\top$ be the vectorization of the true coefficients for all $K$ responses, let $\widetilde{\boldsymbol Y}^\top = (\boldsymbol Y_{\cdot,1}^\top, \dots, \boldsymbol Y_{\cdot,K}^\top)$. Using this notation, we can re-express the model \eqref{lin_model} as $\widetilde{\boldsymbol Y} = \widetilde{\boldsymbol X} \widetilde{\boldsymbol \beta}^0 + \widetilde{\boldsymbol \epsilon}$, where $\widetilde{\boldsymbol \epsilon}$ is the stacked error vector defined similarly as $\widetilde{\boldsymbol Y}$.
Now let $\mathcal{J} = (\mathcal{J}_{\cdot,1}, \dots, \mathcal{J}_{\cdot,K}) \subseteq \{1, \dots, Kp\}$, where $\mathcal{J}_{\cdot,k} = \{j+(k-1)p: j\in\{1,\dots,p\} \text{ and } \beta^0_{j,k} \neq 0 \}$, be the set of indices of all non-zero effects in $\widetilde{\boldsymbol \beta}^0$ and let $\mathcal{H} = \text{Hull}(\mathcal{J}) = \left\{ \cup_{G\in \mathcal{G}, G\cap \mathcal{J} = \varnothing}G \right\}^c$ be the hull of the non-zero pattern $\mathcal{J}$, where for a set $\mathcal{S}$, $\mathcal{S}^c$ denotes its complement. The hull of the non-zero pattern is essentially the smallest union of groups in $\mathcal{G}$ that contains all elements of $\mathcal{J}$. Similarly, denote $\hat{\mathcal{J}} = (\hat{\mathcal{J}}_{\cdot,1}, \dots, \hat{\mathcal{J}}_{\cdot,K})$ to be the set of indices of all non-zero effects in $\widehat{\boldsymbol \beta}$. Note that, because each effect of each variable across the outcomes has its own group in $\mathcal{G}_{M+1}$ by construction of $P_1(\cdot)$, we have $\text{Hull}(\hat{\mathcal{J}}) = \hat{\mathcal{J}}$; however, we present our results in terms of the hull so as to be fully general with respect to the group structure. We denote $\widetilde{\boldsymbol X}_\mathcal{H}$ as the columns of $\widetilde{\boldsymbol X}$ corresponding to elements of $\mathcal{H}$ and similarly denote $\widetilde{\boldsymbol \beta}^0_\mathcal{H}$ as the values in $\widetilde{\boldsymbol \beta}^0$ corresponding to elements of $\mathcal{H}$.
We then denote by $\widetilde{\boldsymbol X}_\mathcal{H}^*$ the matrix constructed by dropping and collapsing columns of $\widetilde{\boldsymbol X}_\mathcal{H}$ corresponding to the \textit{distinct}, non-zero values of $\widetilde{\boldsymbol \beta}^0_\mathcal{H}$. Specifically, for each $j$, let $\{c_{j,1}, \dots, c_{j,L}\}$ for $L\leq K$ denote the unique \textit{non-zero} values of $\{{\beta}^0_{j,1}, \dots, {\beta}^0_{j,K}\}$ and denote $\mathcal{C}_{j,\ell} = \{k\in\mathcal{K}: {\beta}^0_{j,k} = c_{j,\ell} \}$. Then for each $c_{j,\ell}\in\{c_{j,1}, \dots, c_{j,L}\}$,
all columns $j+(k-1)p$ of $\widetilde{\boldsymbol X}$ such that $k\in \mathcal{C}_{j,\ell}$, $j\in\{1,\dots,p\}$, are collapsed and added together into a single column, i.e. $\sum_{k\in \mathcal{C}_{j,\ell}} \widetilde{\boldsymbol X}_{\cdot,j+(k-1)p}$. As an illustration, denote by $\widetilde{\boldsymbol X}_{\cdot,j+(k-1)p}$ the $j$th column of the design matrix for response $k$. Then, if the coefficient of variable $j$ is in $\mathcal{H}$ for responses $k$, $\ell$, and $K$ and $\beta^0_{j,k} = \beta^0_{j,\ell} = \beta^0_{j,K} \neq 0$, the corresponding columns of $\widetilde{\boldsymbol X}$ are combined as follows in the formation of $\widetilde{\boldsymbol X}^*_\mathcal{H}$:
$$
\stackrel{\mbox{$\vphantom{\widetilde{\boldsymbol X}_{\cdot,j+(k-1)p}}$}}{%
\begin{matrix}
\vphantom{\boldsymbol 0} \\[15pt]
\vphantom{\vdots} \\
\mbox{Block } k\rightarrow \\
\vphantom{\vdots} \\
\mbox{Block } \ell\rightarrow \\
\vphantom{\vdots} \\
\mbox{Block } K\rightarrow \\ \\
\end{matrix}
}
\stackrel{\mbox{$\widetilde{\boldsymbol X}_{\cdot,j+(k-1)p}$}}{%
\begin{pmatrix}
\boldsymbol 0 \\
\vdots \\
\mathbf{x}_{\cdot,j} \\
\vdots \\
\boldsymbol 0 \\
\vdots \\
\boldsymbol 0 \\
\end{pmatrix}
}
+
\stackrel{\mbox{$\widetilde{\boldsymbol X}_{\cdot,j+(\ell-1)p}$}}{%
\begin{pmatrix}
\boldsymbol 0 \\
\vdots \\
\boldsymbol 0 \\
\vdots \\
\mathbf{x}_{\cdot, j} \\
\vdots \\
\boldsymbol 0 \\
\end{pmatrix}
}
+
\stackrel{\mbox{$\widetilde{\boldsymbol X}_{\cdot,j+(K-1)p}$}}{%
\begin{pmatrix}
\boldsymbol 0 \\
\vdots \\
\boldsymbol 0 \\
\vdots \\
\boldsymbol 0 \\
\vdots \\
\mathbf{x}_{\cdot, j} \\
\end{pmatrix}
}
\rightarrow
\begin{pmatrix}
\boldsymbol 0 \\
\vdots \\
\mathbf{x}_{\cdot, j} \\
\vdots \\
\mathbf{x}_{\cdot, j} \\
\vdots \\
\mathbf{x}_{\cdot, j} \\
\end{pmatrix}
$$
In other words, the columns in $\widetilde{\boldsymbol X}$ corresponding to these three effects are collapsed and added together. Note that there exist matrices $\boldsymbol H$ and $\boldsymbol E$ such that $\widetilde{\boldsymbol X}_\mathcal{H}^* = \widetilde{\boldsymbol X}\boldsymbol H\boldsymbol E$, where $\boldsymbol H$ is formed by removing from the identity matrix of dimension $pK\times pK$ all columns whose positions are not in $\mathcal{H}$, and $\boldsymbol E$ is formed by taking the identity matrix of dimension $|\mathcal{H}|\times |\mathcal{H}|$ and collapsing and summing columns corresponding to non-zero coefficients that are equal to each other and whose pairs are in $\mathcal{F}$, the fusing set used in $P_2$.
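A minimal numerical sketch of $\boldsymbol H$ and $\boldsymbol E$ for a toy case (the dimensions and coefficient values are our own choices) illustrates the construction:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, K = 5, 2, 3
X = rng.normal(size=(N, p))
X_tilde = np.kron(np.eye(K), X)              # stacked design, (N*K) x (p*K)

# True coefficients: variable 0 has the same non-zero effect on all K
# responses; variable 1 is zero everywhere (outside the hull).
beta0 = np.array([[0.5, 0.5, 0.5],
                  [0.0, 0.0, 0.0]])
beta_tilde = beta0.flatten(order="F")        # vec(beta^0)

# H keeps exactly the columns whose positions lie in the hull (here, with
# singleton groups present, the hull equals the non-zero pattern).
hull = np.flatnonzero(beta_tilde != 0)       # positions 0, 2, 4 (0-based)
H = np.eye(p * K)[:, hull]

# E collapses columns of X_tilde @ H whose non-zero coefficients are equal
# for the same variable: all K effects of variable 0 collapse into one.
E = np.ones((len(hull), 1))

X_star = X_tilde @ H @ E                     # collapsed "oracle" design

# Summing variable 0's column across the K response blocks gives the same
# single collapsed column.
manual = sum(X_tilde[:, 0 + k * p] for k in range(K))
err = np.max(np.abs(X_star[:, 0] - manual))
```

Here $\boldsymbol H$ performs the selection (dropping columns outside $\mathcal{H}$) and $\boldsymbol E$ performs the collapsing, exactly mirroring the two roles these matrices play in the asymptotic variance below.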
We assume the following standard regularity conditions:
\begin{enumerate}
\item[] (D.1) $\frac{1}{N}\boldsymbol X^\top \boldsymbol X \to \boldsymbol Q$ as $N \to \infty$, where $\boldsymbol Q$ is positive definite.
\item[] (D.2) The errors $\boldsymbol\epsilon_i$, $i=1,\dots,N$, are $i.i.d.$ random vectors with mean zero and finite, positive definite variance-covariance matrix $\boldsymbol \Sigma$.
\end{enumerate}
The following result pertains to cases where the group structure has been misspecified. As such, we define ${\boldsymbol \beta^*}^0$ to be the \textit{distinct} values of ${\boldsymbol \beta}^0_\mathcal{H}$ (i.e., the union of the distinct values in $\{\beta^0_{j,k},\beta^0_{j,\ell}: ((k,\ell) \in \mathcal{E} \text{ or } (\ell,k) \in \mathcal{E}) \text{ and } ((k,\ell) \in \mathcal{F} \text{ or } (\ell,k) \in \mathcal{F})\}$ together with all remaining values $\beta^0_{j,k} \neq 0$) appended with zeros corresponding to the elements of ${\boldsymbol \beta}^0_{\mathcal{H}^c}$. Similarly, we define $\widehat{\boldsymbol \beta}^*$ to be the distinct values of ${\widehat{\boldsymbol \beta}}_{\hat{\mathcal{J}}}$ appended with zeros corresponding to the elements of ${\widehat{\boldsymbol \beta}}_{\hat{\mathcal{J}}^c}$.
\begin{theorem} \label{thm:linear_model_oracle_property_mis}
$\mbox{}$
Assume the data are generated under the model described in (\ref{lin_model}) and that our estimator is given by (\ref{overlapping_linear_model_objective}). Furthermore,
assume conditions (D.1) and (D.2) and let $\lambda_{1,j,G} = ||\widehat{\boldsymbol \beta}^{OLS}_{j,G}||_2^{-\gamma_1}$ and $\lambda_{2,j,l,m} = |\hat{\beta}_{j,l}^{OLS} - \hat{\beta}_{j,m}^{OLS}|^{-\gamma_2}$, where $\gamma_1, \gamma_2 > 0$ such that $N^{(\gamma_\ell + 1)/2}\lambda \to \infty$ for $\ell=1,2$. If $\sqrt{N}\lambda \to 0$, then we have the following:
\begin{align}
& P(\hat{\mathcal{J}}_{\cdot,j} = {\mathcal{H}}_{\cdot,j}) \to 1 \mbox{ as } N \to \infty, \label{linear_model_selection_consistency_mis} \\
& P(\hat{\mathcal{E}}_{\cdot,j} = {\mathcal{E}}_{\cdot,j}) \to 1 \mbox{ as } N \to \infty, \label{linear_model_fused_selection_consistency_mis}
\end{align}
for each $j=1,\dots,p$ and
\begin{align}
\sqrt{N}(\widehat{\boldsymbol \beta}^* - {\boldsymbol \beta^*}^0) \xrightarrow{d} \boldsymbol Z^*, \label{linear_model_asympt_distr_mis}
\end{align}
where $\boldsymbol Z^* = ({{}\boldsymbol Z_{\mathcal{H}}^*}^\top, {\boldsymbol 0}^\top)^\top$ and $\boldsymbol Z_{\mathcal{H}}^* \sim N_{|\mathcal{H}^*|}(0, {\boldsymbol Q^*}_{\mathcal{H}}^{-1}{\boldsymbol V^*}_{\mathcal{H}}{\boldsymbol Q^*}_{\mathcal{H}}^{-1} )$, where $\boldsymbol V^*_\mathcal{H} = \boldsymbol E^\top\boldsymbol H^\top \left(\boldsymbol \Sigma \otimes \boldsymbol Q\right)\boldsymbol H\boldsymbol E$, $\boldsymbol Q^*_\mathcal{H} = \boldsymbol E^\top\boldsymbol H^\top \left(\boldsymbol I_K \otimes \boldsymbol Q\right)\boldsymbol H\boldsymbol E$, and $|\mathcal{H}^*|$ denotes the number of distinct non-zero elements in the true coefficients ${\boldsymbol \beta^*}^0$.
\end{theorem}
The proof of Theorem \ref{thm:linear_model_oracle_property_mis} is provided in the Supplementary Material Appendix A. We reiterate that Theorem \ref{thm:linear_model_oracle_property_mis} applies to \textit{any} group structure, including those with overlapping groups.
We note that the variance-covariance matrix in the distribution of $\boldsymbol Z^*$ does not simplify, due to the allowance of selection of variables individually by outcome in combination with the Kronecker-product variance structure of the response vector $\widetilde{\boldsymbol Y}$; without either it would simplify considerably. The terms $\boldsymbol H$ and $\boldsymbol E$ in the asymptotic variance of the estimates are due to selection of variables and collapsing of effects of a single variable across outcomes, respectively; without any selection these matrices are removed and the variance simplifies to the usual asymptotic variance of multivariate response regression via least squares. However, the asymptotic variance matches that of the estimator in which all, and only, the truly non-zero variable effects are included in the model and all variables with equal effects across a subset of outcomes are accordingly collapsed. The result of Theorem \ref{thm:linear_model_oracle_property_mis} does not require that $\mathcal{E} \subseteq \mathcal{F}$; however, if indeed $\mathcal{E} \subseteq \mathcal{F}$, i.e.
if $\mathcal{F}$ contains indices corresponding to all pairs of coefficients in $\boldsymbol \beta^0_{\mathcal{H}}$ that are equal to each other, then by \eqref{linear_model_selection_consistency_mis}, when using adaptive terms for the tuning parameters, our approach yields selection consistency for the hull of the non-zero terms. When the group structure includes an individual group for each coefficient in the model, the hull of the non-zero coefficients is simply the set of non-zero coefficients; thus, when such individual groups are included, we have selection consistency for the non-zero coefficients. Our theory also shows that, with probability tending to one, our approach fuses together the coefficients that are truly equal to each other. Finally, our theory shows that our estimator converges to the same asymptotic distribution as if we had known which coefficients were non-zero and which coefficients were equal to each other.
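The adaptive weights in the theorem are driven entirely by an initial OLS fit. A hedged sketch of their computation follows; the toy dimensions, group structure, and the choice $\gamma_1=\gamma_2=1$ are ours.

```python
import numpy as np

rng = np.random.default_rng(2)
N, p, K = 200, 4, 3
X = rng.normal(size=(N, p))
beta0 = np.zeros((p, K))
beta0[0, :] = 1.0                       # equal effects across all responses
beta0[1, 0], beta0[1, 1] = -0.5, 0.8    # unequal effects
Y = X @ beta0 + rng.standard_t(df=5, size=(N, K))   # non-normal errors

# Initial OLS fit (equivalently, one least-squares regression per response).
beta_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)

gamma1, gamma2 = 1.0, 1.0
groups = [tuple(range(K))] + [(k,) for k in range(K)]  # G_0 plus singletons
fuse_pairs = [(0, 1), (1, 2)]

# lambda_{1,j,G} = ||beta_ols_{j,G}||^{-gamma1}: heavy when a group looks null.
lam1 = {(j, G): np.linalg.norm(beta_ols[j, list(G)]) ** (-gamma1)
        for j in range(p) for G in groups}

# lambda_{2,j,l,o} = |beta_ols_{j,l} - beta_ols_{j,o}|^{-gamma2}: heavy when
# two effects look equal, pushing the estimator to fuse them.
lam2 = {(j, l, o): abs(beta_ols[j, l] - beta_ols[j, o]) ** (-gamma2)
        for j in range(p) for (l, o) in fuse_pairs}

# The truly-equal pair (variable 0, responses 0 and 1) receives a much
# heavier fusion weight than the truly-unequal pair of variable 1.
heavier = lam2[(0, 0, 1)] > lam2[(1, 0, 1)]
```

This data-driven weighting is what allows the penalty to shrink aggressively exactly where the truth is null or fused, which is the mechanism behind the oracle property.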
\section{Discussion}\label{sec:discussion}
In this paper we introduced a doubly structured sparsity regression approach for modeling multivariate outcomes with known groupings in order to borrow information across related outcomes. Our approach allows for overlapping groupings of the responses and borrows strength both by jointly selecting the effects of variables across groups of related outcomes and by encouraging the effects of variables to be more similar across related outcomes. We prove an oracle property for an adaptive version of our penalty, showing that our approach yields estimates with the same asymptotic distribution as if both the true non-zero coefficients and \textit{which} variable effects are truly equal across related responses were known in advance. Our analysis of the pediatric functional status data in our motivating study demonstrated that our approach improves predictive performance and yields clinically sensible estimated coefficients.
Our work is motivated by modeling functional status scores in a pediatric population; however, the methods are applicable more generally. Since our approach does not rely on normality of the multivariate outcomes, it can be extended straightforwardly, both in theory and computationally, to more general models such as generalized linear models. Incorporating covariance information, in addition to borrowing strength through joint selection and effect shrinkage, may be a promising direction for further sharing information across outcomes. However, we have not pursued this here, as it would introduce another tuning parameter and another layer of computational complexity, which may limit the applicability of such an approach in large-scale and high-dimensional settings.
\section{Introduction}\label{sec:introduction}
Each year in the United States, more than 20,000 children are diagnosed with an acute neurologic injury or illness that results in debilitating physical and cognitive complications and reduced quality of life \citep{lo2009pediatric, patel2014pediatric, taylor2017traumatic, dhillon2017us}. These events are costly: it is estimated that one billion dollars are spent annually on the management of pediatric traumatic brain injury (TBI)-associated hospitalizations \citep{schneier2006incidence}. Though neurologic illnesses and injuries as a category are a leading cause of morbidity in children, each individual diagnosis is uncommon. These low disease-specific incidences make the study of rehabilitation interventions for children with neurologic injuries or illnesses challenging. Further, since many of the material effects manifest in the long term and can change with later child development, it is important to be able to track outcomes over time.
The WeeFIM\textregistered{} is a validated scoring system for appraising functional ability in children and provides valuable information about a multitude of components of health in children with neurologic injury or illness. WeeFIM\textregistered{} is administered and scored by trained assessors and has been demonstrated to have high interrater reliability. WeeFIM\textregistered{} has been shown to be predictive of longitudinal functional recovery of children with neurological disorders and can be used for discharge planning, prediction of functional outcomes, and documentation of functional performance of children over time to assess recovery or decline.
Yet, because the WeeFIM\textregistered{} system requires trained assessors, scores for all children are not always available.
WeeFIM\textregistered{} training and subscription are expensive and time-intensive, and thus there is significant interest in being able to assess broadly the functional ability of children across a health system without the need for explicit scoring by trained assessors.
One avenue for doing so is to relate administrative data sources to WeeFIM\textregistered{} scores to facilitate care management and identify individuals who may require additional care or rehabilitation. Our interest is thus in building predictive models to understand functional ability of children using administrative health data in scenarios where WeeFIM\textregistered{} is not used. WeeFIM\textregistered{} scores have particular characteristics and organization. In this work we seek to leverage these characteristics to facilitate the building of accurate and interpretable WeeFIM\textregistered{} risk models. We now describe these characteristics and how we aim to utilize them in this paper.
WeeFIM\textregistered{} is comprised of 18 component scores, each of which is measured on a 7 point Likert scale with 7 indicating complete independence and 1 indicating complete dependence on others to perform various tasks. The WeeFIM\textregistered{} component scores are categorized into three main domains: mobility, cognition, and self-care, and represent three distinct clinical concepts related to functional outcomes. We describe the components of each domain in Section \ref{sec:application}.
In our dataset, the correlations of the WeeFIM\textregistered{} component scores align closely with these domains. We computed the distance correlations \citep{szekely2007measuring} of all pairs of the 18 WeeFIM\textregistered{} components to form an $18\times 18$ distance matrix and ran a hierarchical clustering algorithm on the resulting distance matrix. As can be seen in Figure \ref{fig:weefim_corrs}, the scores in the cognition domain are immediately completely separated from the scores in mobility and self-care; self-care and mobility separate from each other later in the hierarchical clustering.
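This analysis can be sketched as follows; here synthetic scores with two correlated blocks stand in for the 18 WeeFIM\textregistered{} components, and the distance correlation is implemented directly from its definition.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two 1-d samples (Szekely et al., 2007)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])          # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                       # squared distance covariance
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0

# Synthetic stand-in: 6 "components" in two correlated blocks of 3.
rng = np.random.default_rng(3)
n = 300
z1, z2 = rng.normal(size=n), rng.normal(size=n)
scores = np.column_stack(
    [z1 + 0.3 * rng.normal(size=n) for _ in range(3)]
    + [z2 + 0.3 * rng.normal(size=n) for _ in range(3)])

K = scores.shape[1]
dcor = np.array([[distance_correlation(scores[:, i], scores[:, j])
                  for j in range(K)] for i in range(K)])

# Within-block distance correlations dominate between-block ones, so a
# hierarchical clustering of 1 - dcor would split the two blocks first.
within = [dcor[i, j] for i in range(3) for j in range(i + 1, 3)] + \
         [dcor[i, j] for i in range(3, 6) for j in range(i + 1, 6)]
between = [dcor[i, j] for i in range(3) for j in range(3, 6)]
blocks_separate = min(within) > max(between)
```

In the actual analysis, the resulting $18\times 18$ matrix of $1-\mathrm{dCor}$ values is passed to a standard hierarchical clustering routine, producing the dendrogram shown in Figure \ref{fig:weefim_corrs}.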
\begin{figure}[!htpb]
\centering
\includegraphics[width=0.75\textwidth]{figures/heatmap_outcomes.pdf}
\caption{Heatmap of the distance correlations of the WeeFIM\textregistered{} scores. The dendrogram from hierarchical clustering of the scores shows that they group naturally by the three major WeeFIM\textregistered{} domains: self care, mobility, and cognition; thus the pre-defined domains align well with the natural variation in the components in our motivating study.}
\label{fig:weefim_corrs}
\end{figure}
In our motivating study, we aim to use information commonly present in administrative health data including International Classification of Diseases (ICD) -9 and -10 codes, Current Procedural Terminology (CPT) codes, pharmaceutical codes, data indicating durable medical equipment use, and basic demographic information to predict and evaluate functional outcomes in children admitted to an inpatient rehabilitation unit with a diagnosis of neurologic injury or illness at a large children's hospital. Our code information is high-dimensional, with well over 500 billing codes as potential predictors of a child's functional status.
To deal with high dimensionality, one could fit lasso-penalized regression models \citep{tibshirani96} for each of the 18 scores separately; however, this would ignore the inherent relationships and similarities between the components. First, as some WeeFIM\textregistered{} components are highly related (e.g., dressing lower and dressing upper, bowel management and bladder management, bathing and tub shower), it is plausible that some variables (e.g., billing codes) may have the same or similar effects across some of the WeeFIM\textregistered{} components. Similarly, some variables may not have the same effect size across some WeeFIM\textregistered{} components, but are likely to be jointly predictive for multiple components, especially related ones, or even for all components. A complication is that, while linearity of the effects of covariates on the components may be reasonable, an assumption of normality of the responses or error terms is not, as the WeeFIM\textregistered{} component scores are not normally distributed.
There exists some work on building models for multivariate responses using high-dimensional predictors.
Rothman et al. \cite{rothman2010sparse} introduced a lasso-based estimation procedure that incorporates variable selection and covariance estimation for multivariate normal responses and developed an iterative estimation procedure that makes use of the covariance structure among the responses by penalizing the inverse covariance matrix using the method of Yuan and Lin \cite{yuan2007model}. However, this approach does not make use of similar predictor-response relationships across related responses and borrows information across responses only through their covariance structure. Sofer et al. \cite{sofer2014variable} extended these ideas to incorporate a wider variety of penalties, such as the SCAD penalty \citep{fan2001variable} and the MC+ penalty \citep{zhang2010nearly}, and developed new algorithms for estimation. Li et al. \cite{li2015multivariate} incorporated group structure both in responses and among predictors using an overlapping group lasso penalty \citep{jenatton2011}, which allows for borrowing of strength across responses through joint selection of variables across related responses.
In our work, we borrow strength across responses in two ways through two different structured sparsity inducing penalties. In particular, we first aim to incorporate joint selection and removal of variables across all responses \textit{and} joint selection and removal of variables across related responses (i.e. responses that are in the same functional domain). In our application, the group structure is the natural groups formed by the WeeFIM\textregistered{} components displayed in Figure \ref{fig:weefim_corrs}. Thus, we require use of a group lasso \citep{yuan2006} with overlapping groups \citep{jenatton2011} similar to Li, et al. \cite{li2015multivariate}. Second, we use a fused lasso \citep{tibshirani2005sparsity} for shrinkage of the effects of a variable across related outcomes to borrow strength more explicitly in estimation by partially collapsing models across different responses into a single model for individual variables separately. The fused lasso penalty allows a variable's effects across related responses to be estimated to be exactly the same.
We prove that, with adaptively chosen weights \cite{zou2006, wang2008note} (for the overlapping group lasso as in Huling et al. \cite{huling2018risk} and for the fused lasso as in Viallon et al. \cite{viallon2013adaptive}), our doubly-structured sparsity inducing estimator has an oracle property. The theoretical results are general in that they allow for any arbitrary overlapping group structure for the group lasso penalty and any arbitrary fused lasso penalty. The oracle property we show demonstrates that estimation of the non-zero coefficients has the same asymptotic distribution as if we had known in advance both i) which coefficients are non-zero and ii) which coefficients are equal to each other, and had fit models including only the non-zero coefficients while forcing the truly equal coefficients to be equal to each other. We also show selection consistency for the non-zero coefficients and selection consistency for which coefficients are equal to each other. Neither our methodology nor our theory requires an assumption of normality or independence of the responses, as we work under a semiparametric multivariate linear model assumption, where we make no parametric distributional assumptions about the error terms. Further, our theory, methodology, and computational approach allow for general use of the overlapping group lasso in combination with the fused lasso beyond our application to multivariate response regression.
The remainder of our paper is organized as follows. In Section \ref{sec:methods_def} we introduce the key definition of our methodology. In Section \ref{sec:asymptotics} we develop theory for our methodology using adaptive regularization under semiparametric multivariate linear models and in Section \ref{sec:simulation} we investigate the operating characteristics of our method in small samples using simulation experiments. In Section \ref{sec:application} we demonstrate the use of our methodology on our motivating application involving pediatric functional outcomes score modeling. Finally, we conclude with discussion.
\section{Doubly Structured Sparsity for Multivariate Responses}\label{sec:methods}
\subsection{The Overlapping Group + Fused Method}\label{sec:methods_def}
We assume that the observed multivariate response $\mathbf{y}_i = (y_{i1}, \dots, y_{iK})^\top$ follows the semiparametric linear model
\begin{align}
\mathbf{y}_i = {\boldsymbol \beta^0}^\top\mathbf{x}_i + \boldsymbol\epsilon_i \text{ for } i=1,\dots,N, \label{lin_model}
\end{align}
where $\mathbf{x}_i = (x_{i1}, \dots, x_{ip})^\top$ is a length-$p$ vector of predictors, $\boldsymbol \beta^0$ is a $p\times K$ matrix of regression coefficients with $k$th column equal to $\boldsymbol \beta^0_{\cdot,k} = (\beta^0_{1,k}, \dots, \beta^0_{p,k})^\top$, and $\boldsymbol \epsilon_i = (\epsilon_{i1}, \dots, \epsilon_{iK})^\top$ is a vector of errors with mean zero and finite, positive definite variance-covariance matrix $\boldsymbol \Sigma$; the response vector is not required to be continuous, as we do not assume a particular distribution for $\boldsymbol\epsilon_i$. Denote the $j$th variable's coefficients across the $K$ responses as $\boldsymbol \beta^0_{j,\cdot} = (\beta^0_{j,1}, \dots, \beta^0_{j,K})^\top$. Let $\boldsymbol X$ denote the $N\times p$ matrix of predictors with $i$th row $\mathbf{x}_i^\top$ and $j$th column $\mathbf{x}_{\cdot,j}$, and let $\boldsymbol Y$ denote the $N\times K$ response matrix with $i$th row $\mathbf{y}_i^\top$ and $k$th column $\boldsymbol Y_{\cdot,k}$.
The typical least squares estimator is the solution of
\begin{align*}
\operatornamewithlimits{argmin}_{\boldsymbol \beta} (2N)^{-1}\text{tr}\left[(\boldsymbol Y - \boldsymbol X\boldsymbol \beta)(\boldsymbol Y - \boldsymbol X\boldsymbol \beta)^\top\right],
\end{align*}
which is simply $(\boldsymbol X^\top\boldsymbol X)^{-1}\boldsymbol X^\top\boldsymbol Y$, where for a matrix $\boldsymbol A$, $\text{tr}(\boldsymbol A)$ indicates its trace.
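As a check, the estimator can be computed either stably via least squares or through the normal-equations form above; the simulated design, coefficients, and centered-exponential errors below are illustrative (the model requires only mean-zero errors, not normality).

```python
import numpy as np

rng = np.random.default_rng(4)
N, p, K = 500, 5, 3
X = rng.normal(size=(N, p))
beta0 = rng.normal(size=(p, K))

# The semiparametric model only requires mean-zero errors with finite
# covariance, so skewed, non-normal errors are fine (centered exponential).
eps = rng.exponential(scale=1.0, size=(N, K)) - 1.0
Y = X @ beta0 + eps

# (X^T X)^{-1} X^T Y, computed stably; identical to K separate OLS fits.
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
beta_ne = np.linalg.solve(X.T @ X, X.T @ Y)    # normal-equations form

agree = np.max(np.abs(beta_hat - beta_ne))     # ~ machine precision
max_err = np.max(np.abs(beta_hat - beta0))     # small at N = 500
```

Because the unpenalized criterion separates across responses, this joint solution coincides with fitting $K$ separate per-response regressions, which is precisely why structured penalties are needed to borrow strength across the outcomes.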
In our setting, the $K$ outcomes are grouped into $M$ different, possibly overlapping groupings that describe inherent relationships between the outcomes. In our motivating data, the WeeFIM outcomes are grouped into three pre-defined domains that represent different categories of functional status. In other settings, these domains may be further subcategorized. In this manner, the groupings generally correspond to hierarchically-organized categorizations of the outcomes. In this work, we make use of the natural grouping of the outcomes by adding regularizers that induce the coefficients of each variable to be selected or removed jointly across an entire group of the responses. The $m$th grouping $\mathcal{G}_m$ consists of a set of $g_m$ groups $G_{m,1}, \dots, G_{m,g_m}$ such that the union of the groups includes all of the $K$ outcomes.
Generally, since the groupings are hierarchically defined, $g_m$ will be smaller than $g_{m+1}$, and groups in $\mathcal{G}_{m}$ can be expressed as unions of groups in $\mathcal{G}_{m+1}$.
More formally, the groupings are written as $\mathcal{G}_1 = \{G_{1,1}, \dots, G_{1,g_1}\}, \dots, \mathcal{G}_M = \{G_{M,1}, \dots, G_{M,g_M}\}$, where $G_{m,g} \subseteq \{1,\dots,K\} \equiv \mathcal{K}$ and $\bigcup_{G\in\mathcal{G}_m}G = \mathcal{K}$ for $m=1,\dots, M$.
To encourage a variable to be selected jointly across \textit{all} responses, we define the trivial group $\mathcal{G}_0 = \{\mathcal{K}\} $, thus $G_{0,1} = \mathcal{K} = \{1, \dots, K \}$. To allow variables to be selected or removed individually, we further include the group $\mathcal{G}_{M+1} = \{\{1\}, \dots, \{K\}\}$, thus $G_{M+1,k} = \{k\}$. The group structure for a toy example with $K=8$ responses and $M=1$ groupings is displayed in Figure \ref{fig:regularizer_structure}.
We note that in this toy example, the groups are overlapping (e.g. $G_{1,1} \subset G_{0,1}$) because the groups are hierarchically structured. In our motivating application, the group structure we use involves such overlapping groups.
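A minimal Python sketch of this toy group structure ($K=8$, $M=1$; the variable names are our own) makes the coverage and nesting properties concrete:

```python
# Toy group structure with K = 8 responses and M = 1 intermediate grouping;
# G0 is the trivial grouping, G2 the singleton grouping.
K = 8
G0 = [set(range(1, K + 1))]              # trivial grouping: all responses
G1 = [{1, 2, 3}, {4, 5}, {6, 7, 8}]      # the M = 1 pre-defined grouping
G2 = [{k} for k in range(1, K + 1)]      # singleton groups

# Every grouping must cover all K outcomes
for grouping in (G0, G1, G2):
    assert set().union(*grouping) == set(range(1, K + 1))

# Hierarchy: each group is a union of groups one level finer
for G in G1:
    assert G == set().union(*[s for s in G2 if s <= G])
for G in G0:
    assert G == set().union(*[s for s in G1 if s <= G])
```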
In our setting the outcome groupings are pre-defined, but in general when such a pre-defined set of groups are not available \textit{a priori},
the $M$ groupings can be formed by iteratively refined, hierarchical groupings of the responses, as illustrated in Figure \ref{fig:weefim_corrs}; for example, the $M$ groupings could be formed by clustering the responses at $M$ different levels in a hierarchical clustering of the responses.
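As one possible way to form such data-driven groupings (a sketch of our own using SciPy's hierarchical clustering; this is not the procedure used for the WeeFIM data), the responses can be clustered on a correlation-based distance and the same dendrogram cut at $M$ levels, which guarantees the resulting groupings are nested:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
K = 8
Y = rng.normal(size=(100, K))            # placeholder response matrix

# Distance between responses: 1 - |correlation|, in condensed (pdist) form
corr = np.corrcoef(Y, rowvar=False)
condensed = (1 - np.abs(corr))[np.triu_indices(K, k=1)]
Z = linkage(condensed, method="average")

# Cut the same dendrogram at M = 2 levels: a coarser and a finer grouping
groupings = []
for n_groups in (2, 4):
    labels = fcluster(Z, t=n_groups, criterion="maxclust")
    groupings.append([set(np.flatnonzero(labels == g) + 1)
                      for g in np.unique(labels)])

# Each grouping covers all K responses
for groups in groupings:
    assert set().union(*groups) == set(range(1, K + 1))
```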
For a particular group $G\subseteq\mathcal{K}$ with size $|G|$, define the length-$|G|$ subvector of $\boldsymbol \beta_{j,\cdot}$ restricted to the outcomes in group $G$ as $\boldsymbol \beta_{j,G}$.
We propose to add to the above objective function two regularizers that induce sparsity in a structured manner. Let $\widehat{\boldsymbol \beta}$ be the solution of the following problem
\begin{align}
\operatornamewithlimits{argmin}_{\boldsymbol \beta} (2N)^{-1}\text{tr}\left[(\boldsymbol Y - \boldsymbol X\boldsymbol \beta)(\boldsymbol Y - \boldsymbol X\boldsymbol \beta)^\top\right] + \lambda_1 P_1(\boldsymbol \beta) + \lambda_2 P_2(\boldsymbol \beta) \label{overlapping_linear_model_objective}
\end{align}
where $$P_1(\boldsymbol \beta) = \sum_{j=1}^p\sum_{m=0}^{M+1}\sum_{G \in \mathcal{G}_m} \lambda_{1,j,G} ||\boldsymbol \beta_{j,G}||_2$$ and
$$P_2(\boldsymbol \beta) = \sum_{j=1}^p\sum_{G\in \mathcal{G}_M}\sum_{l, o \in G: l \neq o } \lambda_{2,j,l,o}|{\beta}_{j,l} - {\beta}_{j,o}|,$$
where the penalty $\lambda_{1,j,G} ||\boldsymbol \beta_{j,G}||_2$ is an overlapping group lasso penalty that encourages joint selection or removal of the effects of variable $j$ across the responses in response group $G$. All possible {zero patterns} of coefficients can be represented as unions of groups due to the results of Jenatton, et al \cite{jenatton2011}; hence {nonzero patterns} of coefficients can be thought of as complements of unions of groups. Thus, the grouping $\mathcal{G}_0$ encourages joint selection and removal of effects for a variable across all outcomes simultaneously, the grouping $\mathcal{G}_{M+1}$ allows for individual selection or removal for each outcome separately, and the groupings $\mathcal{G}_{1}, \dots, \mathcal{G}_M$ allow for selection or removal of effects across related groups.
The second penalty $\lambda_{2,j,l,o}|{\beta}_{j,l} - {\beta}_{j,o}|$ is a fused lasso penalty that encourages the effect of variable $j$ on response $l$ and on response $o$ to be more similar. Thus, $P_1(\boldsymbol \beta)$ and $P_2(\boldsymbol \beta)$, respectively, help borrow strength across the $K$ responses by 1) leveraging the natural groupings of the responses and taking advantage of joint significance of predictors across related responses and 2) encouraging the effect estimates for a particular variable to be similar across the most related responses according to the response groupings. These two regularizers incorporate structural knowledge about the outcomes in two complementary ways via two different types of structured sparsity-inducing penalties. The overall structure of the regularizers is depicted in Figure \ref{fig:regularizer_structure} in a toy example with $K=8$ responses. In our formulation, we only add a fused lasso penalty within the final grouping $\mathcal{G}_M$, reflective of the notion that with an iteratively refined grouping, the final groups are likely to have more similar outcomes within each group. However, the fused lasso penalty can be added within any grouping and can even be set so that fused lasso terms are included for every possible pair of responses; our asymptotic results cover any arbitrary set of fused pairs. In our application, the multivariate outcomes are measured on the same scale, but our approach can be adapted to outcomes that have different scales by standardizing the outcomes prior to analysis. Further, as described below, our adaptive penalization allows for the fused lasso penalty to adapt to observed differences in effects across outcomes.
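The two penalties are straightforward to evaluate directly. The following Python sketch (our own illustrative rendering, using the non-adaptive weights $|G|^{1/2}$ for $P_1$ and all pairs within the final grouping for $P_2$) computes both for a toy coefficient matrix:

```python
import numpy as np

p, K = 3, 8
rng = np.random.default_rng(2)
beta = rng.normal(size=(p, K))

# 0-indexed groupings: G_0 (all), G_1 = G_M (three groups), G_{M+1} (singletons)
groupings = [
    [list(range(K))],
    [[0, 1, 2], [3, 4], [5, 6, 7]],
    [[k] for k in range(K)],
]

def P1(beta, groupings):
    # sum_j sum_m sum_G w_{j,G} ||beta_{j,G}||_2 with weights w = sqrt(|G|)
    total = 0.0
    for j in range(beta.shape[0]):
        for grouping in groupings:
            for G in grouping:
                total += np.sqrt(len(G)) * np.linalg.norm(beta[j, G])
    return total

def P2(beta, final_grouping):
    # fused lasso over all pairs within each group of the final grouping
    total = 0.0
    for j in range(beta.shape[0]):
        for G in final_grouping:
            for a in range(len(G)):
                for b in range(a + 1, len(G)):
                    total += abs(beta[j, G[a]] - beta[j, G[b]])
    return total

# Equal effects within groups incur zero fused-lasso penalty
beta_eq = np.ones((p, K))
assert P2(beta_eq, groupings[1]) == 0.0
```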
To perform adaptive penalization, we take $\lambda_{1,j,G} = ||\widehat{\boldsymbol \beta}^{OLS}_{j,G}||_2^{-\gamma_1}$ and $\lambda_{2,j,l,o} = |\hat{\beta}_{j,l}^{OLS} - \hat{\beta}_{j,o}^{OLS}|^{-\gamma_2}$, where $\gamma_1, \gamma_2 > 0$ and $\widehat{\boldsymbol \beta}^{OLS} = (\boldsymbol X^\top\boldsymbol X)^{-1}\boldsymbol X^\top\boldsymbol Y$ is the ordinary least squares estimate of $\boldsymbol \beta$. We study this adaptive version of our estimator in Section \ref{sec:asymptotics}. For non-adaptive penalization, we take $\lambda_{1,j,G} = |G|^{1/2}$, as is used in Jenatton, et al. \cite{jenatton2011} and Huling, et al. \cite{huling2018risk}, and $\lambda_{2,j,l,o}=1$; however, for this choice selection consistency is not guaranteed without an irrepresentable condition \citep{jenatton2011}.
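The adaptive weights can be sketched as follows (illustrative Python of our own; the helper names are hypothetical). Note that the weights are decreasing in the magnitude of the pilot estimates, so larger estimated effects and larger estimated effect differences are penalized more lightly:

```python
import numpy as np

rng = np.random.default_rng(3)
N, p, K = 200, 5, 8
X = rng.normal(size=(N, p))
Y = X @ rng.normal(size=(p, K)) + rng.normal(size=(N, K))

B_ols = np.linalg.solve(X.T @ X, X.T @ Y)   # OLS pilot estimate
gamma1, gamma2 = 0.5, 0.5

def w_group(j, G):
    # group-level weight lambda_{1,j,G} = ||B_ols[j, G]||_2^{-gamma1}
    return np.linalg.norm(B_ols[j, G]) ** (-gamma1)

def w_fuse(j, l, o):
    # pairwise fusion weight lambda_{2,j,l,o} = |B_ols[j,l] - B_ols[j,o]|^{-gamma2}
    return abs(B_ols[j, l] - B_ols[j, o]) ** (-gamma2)
```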
In order to get reliable estimates of out-of-sample performance using cross validation when the group structure is derived in a data-driven manner, the procedure used to estimate the group structure should be applied in each cross validation fold instead of being fixed.
\begin{figure}[!htpb]
\centering
\resizebox{1\textwidth}{!}{%
\tikzstyle{background rectangle}=
[draw=blue!8,fill=blue!8,rounded corners=1.5ex]
\begin{tikzpicture}[y=1cm, x=0.8cm, thick, font=\footnotesize,
vc/.style = {circle, draw, thick, fill=#1,
minimum width=12mm, opacity=0.7},
vg/.style args = {#1/#2}{minimum height=12mm,
minimum width=#1+\pgfkeysvalueof{/pgf/minimum height},
thick,
draw, rounded corners=\pgfkeysvalueof{/pgf/minimum height}/2,
opacity=1,
sloped},
frames/.style args = {#1/#2}{minimum height=#1,
minimum width=#2+\pgfkeysvalueof{/pgf/minimum height},
draw, thick, rounded corners=3mm, opacity=0.6, color=red, dashed,
sloped},
framesbig/.style args = {#1/#2}{minimum height=#1,
minimum width=#2+\pgfkeysvalueof{/pgf/minimum height},
draw, thick, rounded corners=4mm, opacity=0.8, color=blue, dashed,
sloped}]
\usetikzlibrary{arrows,decorations.pathreplacing}
\tikzset{number line/.style={}}
\tikzset{
brace_top/.style={
pen colour=blue,
line width=1.0pt,
decoration={calligraphic brace,amplitude=8pt},
decorate
},
brace_top_red/.style={
pen colour=red,
line width=1.0pt,
decoration={calligraphic brace,amplitude=8pt},
decorate
},
brace_bottom/.style={
pen colour=red,
decoration={calligraphic brace,amplitude=8pt, mirror},
decorate
}
}
\begin{scope}[xshift=0.9cm]
\begin{scope}[xshift=-0.27cm]
\foreach \i in {1,...,8}
{
\node[draw,thick,color=Green,circle, inner sep=0.1, minimum size=1.5em,scale=0.85] (coef\i) at (\i-1,0){$\textcolor{black}{\beta_{j,\i}}$};
}
\node (start_week) at (-0.4,0.95) {};
\node (end_week) at (7.4,0.95) {};
\draw [brace_top] (start_week.north) -- node [above=4pt, pos=0.5] {\scriptsize $G_{\textcolor{blue}{0},1}$} (end_week.north);
\coordinate [label={[black, align=left,scale=1.15]right: $\textcolor{blue}{\mathcal{G}_0}$}] (fl) at (7.5,1.5);
\coordinate [label={[black, align=left,scale=1.15]right: $\textcolor{red}{\mathcal{G}_M = \mathcal{G}_1}$}] (fl) at (7.5,0.925);
\coordinate [label={[black, align=left,scale=1.15]right: $\textcolor{Green}{\mathcal{G}_{M+1} = \mathcal{G}_2}$}] (fl) at (7.5,0);
\node (start_week) at (-0.4,0.35) {};
\node (end_week) at (2.4,0.35) {};
\draw [brace_top_red] (start_week.north) -- node [above=4pt, pos=0.5] {\scriptsize $G_{\textcolor{red}{1},1}$} (end_week.north);
\node (start_week) at (2.6,0.35) {};
\node (end_week) at (4.4,0.35) {};
\draw [brace_top_red] (start_week.north) -- node [above=4pt, pos=0.5] {\scriptsize $G_{\textcolor{red}{1},2}$} (end_week.north);
\node (start_week) at (4.6,0.35) {};
\node (end_week) at (7.4,0.35) {};
\draw [brace_top_red] (start_week.north) -- node [above=4pt, pos=0.5] {\scriptsize $G_{\textcolor{red}{1},3}$} (end_week.north);
\coordinate [label={[black, align=right,rotate=90,scale=0.85]left:\tiny Fused\\[-5pt]\tiny Lasso}] (fl) at (-0.9,-0.5);
\coordinate [label={[black, align=right,rotate=90,scale=0.85]left:\tiny \textcolor{Green}{Lasso}}] (fl) at (-0.9,0.35);
\coordinate [label={[black, align=right,rotate=90,scale=0.85]left:\tiny \textcolor{blue}{Group}\\[-5pt]\tiny \textcolor{red}{Lasso}}] (fl) at (-0.9,1.25);
\draw [->,thin,Green] (-0.7,0) to [bend right=-55] (-0.275,0.2);
\draw [->,thin,red] (-0.6,0.85) to [bend right=-45] (1.05,0.375);
\draw [->,thin,blue] (-0.6,0.75) to [bend right=-45] (0.25,0.45);
\draw [->,thin] (coef1.south)++(0,0cm) to [bend right] (0.4,-0.65) node[below right, draw=none] {};
\draw [->,thin] (coef2.south)++(-0.05,0cm) to [bend left] (0.6,-0.65) node[below left, draw=none] {};
\node[scale=0.425] (fuse12) at (0.5,-0.8) {$|\beta_{j,1}-\beta_{j,2}|$};
\draw [->,thin] (coef2.south)++(0.05,0cm) to [bend right] (1.4,-0.65) node[below right, draw=none] {};
\draw [->,thin] (coef3.south)++(0,0cm) to [bend left] (1.6,-0.65) node[below left, draw=none] {};
\node[scale=0.425] (fuse23) at (1.5,-0.8) {$|\beta_{j,2}-\beta_{j,3}|$};
\draw [->,thin] (coef1.south)++(-0.075,0cm) to [bend right=45] (0.5,-1.15) node[below right, draw=none] {};
\draw [->,thin] (coef3.south)++(0.075,0cm) to [bend left=45] (1.5,-1.15) node[below left, draw=none] {};
\node[scale=0.425] (fuse13) at (1,-1.15) {$|\beta_{j,1}-\beta_{j,3}|$};
\draw [->,thin] (coef4.south)++(0,0cm) to [bend right] (3.4,-0.65) node[below right, draw=none] {};
\draw [->,thin] (coef5.south)++(0,0cm) to [bend left] (3.6,-0.65) node[below left, draw=none] {};
\node[scale=0.425] (fuse45) at (3.5,-0.8) {$|\beta_{j,4}-\beta_{j,5}|$};
\draw [->,thin] (coef6.south)++(0,0cm) to [bend right] (5.4,-0.65) node[below right, draw=none] {};
\draw [->,thin] (coef7.south)++(-0.05,0cm) to [bend left] (5.6,-0.65) node[below left, draw=none] {};
\node[scale=0.425] (fuse67) at (5.5,-0.8) {$|\beta_{j,6}-\beta_{j,7}|$};
\draw [->,thin] (coef7.south)++(0.05,0cm) to [bend right] (6.4,-0.65) node[below right, draw=none] {};
\draw [->,thin] (coef8.south)++(0,0cm) to [bend left] (6.6,-0.65) node[below left, draw=none] {};
\node[scale=0.425] (fuse78) at (6.5,-0.8) {$|\beta_{j,7}-\beta_{j,8}|$};
\draw [->,thin] (coef6.south)++(-0.075,0cm) to [bend right=45] (5.5,-1.15) node[below right, draw=none] {};
\draw [->,thin] (coef8.south)++(0.075,0cm) to [bend left=45] (6.5,-1.15) node[below left, draw=none] {};
\node[scale=0.425] (fuse68) at (6.0,-1.15) {$|\beta_{j,6}-\beta_{j,8}|$};
\end{scope}
\end{scope}
\begin{pgfonlayer}{main}
\path let \p2 = ($(coef1.center)-(coef8.center)$),
\n2 = {veclen(\y2,\x2)} in
(coef1) -- node[framesbig=8.5mm/\n2] {} (coef8);
\path let \p2 = ($(coef6.center)-(coef8.center)$),
\n2 = {veclen(\y2,\x2)} in
(coef6) -- node[frames=7mm/\n2] {} (coef8);
\path let \p2 = ($(coef1.center)-(coef3.center)$),
\n2 = {veclen(\y2,\x2)} in
(coef1) -- node[frames=7mm/\n2] {} (coef3);
\path let \p2 = ($(coef4.center)-(coef5.center)$),
\n2 = {veclen(\y2,\x2)} in
(coef4) -- node[frames=7mm/\n2] {} (coef5);
\end{pgfonlayer}
\end{tikzpicture}
}
\caption{An illustration of the structure of the two components of the penalization methods used to induce group-wise selection of covariate effects and fuse them to be more similar. \textcolor{blue}{\textbf{Blue}} indicates joint selection across all responses via the group $\mathcal{G}_0$, \textcolor{red}{\textbf{red}} indicates joint selection by response group $\mathcal{G}_M = \mathcal{G}_1$, \textcolor{Green}{\textbf{green}} indicates individual coefficient selection achieved via the group $\mathcal{G}_{M+1}$, and \textbf{black} indicates shrinkage towards a common effect within response group $G_{1,k}$.}
\label{fig:regularizer_structure}
\end{figure}
\subsection{Implementation Details}\label{sec:impl_details}
In our implementation, we use the generic ADMM algorithm described in the next section for computation, feeding it the response vector $\widetilde{\boldsymbol Y}$ and design matrix $\widetilde{\boldsymbol X}$, where the latter is treated as a sparse matrix object that only stores the values and locations of the non-zero entries of $\widetilde{\boldsymbol X}$, which dramatically reduces both memory usage and computation time. The sparse matrix object is of the type provided in the \texttt{Eigen C++} linear algebra library \citep{eigenweb}, with an interface to \texttt{R} through \texttt{RcppEigen} \citep{rcppeigen}, allowing for highly efficient sparse-matrix manipulations. Our method is implemented in the \texttt{R} package \texttt{groupFusedMulti}, available in the open source repository \url{https://github.com/jaredhuling/groupFusedMulti}, which uses an interface similar to that of \texttt{glmnet}.
To simplify selection of the tuning parameters, we utilize the following re-parameterization of $\lambda_1$ and $\lambda_2$. Instead of using the penalty $\lambda_1 P_1(\boldsymbol \beta) + \lambda_2 P_2(\boldsymbol \beta)$, we use $\lambda (1-\alpha) P_1(\boldsymbol \beta) + \lambda\alpha P_2(\boldsymbol \beta)$, where $\lambda \geq 0$ and $\alpha\in [0,1]$, so that $\alpha$ controls the proportion of the total penalty attributable to the fused lasso.
\subsection{Computation via a Multi-block ADMM Algorithm}\label{sec:admm}
We utilize an alternating direction method of multipliers (ADMM) \citep{glowinski1975, gabay1976, boyd2011} algorithm for optimization. The ADMM algorithm works by decomposing an objective function and solving the decomposed subproblems iteratively, where each subproblem has a simple and computationally tractable form. ADMM solves problems of the form
\begin{align*}
& \mbox{minimize } f(\boldsymbol \beta) + P(\boldsymbol \gamma) \mbox{ subject to } \boldsymbol A\boldsymbol \beta + \boldsymbol B\boldsymbol \gamma = \boldsymbol c
\end{align*}
where $\boldsymbol A$ and $\boldsymbol B$ are constraint matrices, both $f$ and $P$ are convex functions, and $\boldsymbol \beta$ is the parameter vector of interest; typically $f$ represents some loss function and $P$ represents a penalty. In the simplest case the constraint is of the form $\boldsymbol \beta = \boldsymbol \gamma$, and its purpose is to introduce a new variable in terms of which the penalty can be expressed, so that the loss and the penalty become separable. To optimize our objective, we require the following multi-block version of ADMM
\begin{align*}
& \mbox{minimize } f(\boldsymbol \beta) + P_1(\boldsymbol \gamma) + P_2({\boldsymbol \eta}) \mbox{ subject to } \boldsymbol A\boldsymbol \beta + \boldsymbol B\boldsymbol \gamma + \boldsymbol C{\boldsymbol \eta} = \boldsymbol c,
\end{align*}
where $\boldsymbol C$ is an additional constraint matrix that allows the second penalty to have a decomposed form with loss and the two penalties all separable from each other.
To solve the above problem, the augmented Lagrangian is formed as:
\begin{align*}
L_\rho(\boldsymbol \beta, \boldsymbol \gamma, {\boldsymbol \eta}, \boldsymbol \nu) = {} & f(\boldsymbol \beta) + P_1(\boldsymbol \gamma) + P_2({\boldsymbol \eta}) + \boldsymbol \nu^\top (\boldsymbol A\boldsymbol \beta + \boldsymbol B\boldsymbol \gamma + \boldsymbol C{\boldsymbol \eta} - \boldsymbol c) \\
& + (\rho/2)||\boldsymbol A\boldsymbol \beta + \boldsymbol B\boldsymbol \gamma + \boldsymbol C{\boldsymbol \eta}- \boldsymbol c||^2_2,
\end{align*}
where $\rho$ is any strictly positive number.
The multi-block ADMM algorithm iterates by alternatingly minimizing with respect to $\boldsymbol \beta$, $\boldsymbol \gamma$, and ${\boldsymbol \eta}$ and following these minimizations, updating the Lagrangian parameter $\boldsymbol \nu$:
\begin{align}
\boldsymbol \beta^{(t + 1)} = {} & \operatornamewithlimits{argmin}_{\boldsymbol \beta}L_\rho(\boldsymbol \beta, \boldsymbol \gamma^{(t)}, {\boldsymbol \eta}^{(t)}, \boldsymbol \nu^{(t)}) \label{eqn:betamin}\\
\boldsymbol \gamma^{(t + 1)} = {} & \operatornamewithlimits{argmin}_{\boldsymbol \gamma}L_\rho(\boldsymbol \beta^{(t + 1)}, \boldsymbol \gamma, {\boldsymbol \eta}^{(t)}, \boldsymbol \nu^{(t)})\label{eqn:gammamin} \\
{\boldsymbol \eta}^{(t+1)} = {} & \operatornamewithlimits{argmin}_{\boldsymbol \eta}L_\rho(\boldsymbol \beta^{(t + 1)}, \boldsymbol \gamma^{(t + 1)}, {\boldsymbol \eta}, \boldsymbol \nu^{(t)})\label{eqn:etamin} \\
\boldsymbol \nu^{(t + 1)} = {} & \boldsymbol \nu^{(t)} + \rho (\boldsymbol A\boldsymbol \beta^{(t + 1)} + \boldsymbol B\boldsymbol \gamma^{(t + 1)} + \boldsymbol C {\boldsymbol \eta}^{(t+1)} - \boldsymbol c) \nonumber
\end{align}
where $t$ indexes the iteration number. The standard ADMM has been shown to converge for any $\rho > 0$ and the multi-block ADMM algorithm above has been shown to converge under certain conditions on $\boldsymbol A$, $\boldsymbol B$, and $\boldsymbol C$. These conditions, from Chen, et al \cite{chen2016direct}, are met if either $\boldsymbol A^\top \boldsymbol B = \boldsymbol 0$, $\boldsymbol B^\top \boldsymbol C = \boldsymbol 0$, or $\boldsymbol A^\top \boldsymbol C = \boldsymbol 0$.
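To make the mechanics concrete, here is a toy multi-block instance in Python (our own construction, not the estimator's implementation): $f(\boldsymbol\beta)=\frac{1}{2}\|\boldsymbol y-\boldsymbol X\boldsymbol\beta\|_2^2$ with two lasso penalties and the redundant splitting $\boldsymbol\beta=\boldsymbol\gamma$, $\boldsymbol\beta=\boldsymbol\eta$, i.e.\ $\boldsymbol A$ is $\boldsymbol I$ stacked on $\boldsymbol I$, $\boldsymbol B=(-\boldsymbol I; \boldsymbol 0)$, and $\boldsymbol C=(\boldsymbol 0; -\boldsymbol I)$, so that $\boldsymbol B^\top\boldsymbol C=\boldsymbol 0$ holds:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 100, 10
beta_true = np.zeros(d); beta_true[:3] = [2.0, -1.5, 1.0]
X = rng.normal(size=(n, d))
y = X @ beta_true + 0.1 * rng.normal(size=n)

lam1, lam2, rho = 0.5, 0.5, 1.0

def soft(u, t):
    # elementwise soft-thresholding
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

beta = gamma = eta = np.zeros(d)
nu1 = nu2 = np.zeros(d)          # dual variables for beta=gamma and beta=eta
XtX, Xty = X.T @ X, X.T @ y
M = XtX + 2 * rho * np.eye(d)

for _ in range(1000):
    # beta-step: minimize the quadratic part of the augmented Lagrangian
    beta = np.linalg.solve(M, Xty - nu1 - nu2 + rho * (gamma + eta))
    # gamma- and eta-steps: proximal (soft-thresholding) updates
    gamma = soft(beta + nu1 / rho, lam1 / rho)
    eta = soft(beta + nu2 / rho, lam2 / rho)
    # dual updates
    nu1 = nu1 + rho * (beta - gamma)
    nu2 = nu2 + rho * (beta - eta)

# Constraints beta = gamma = eta are (approximately) enforced at convergence
assert np.max(np.abs(beta - gamma)) < 1e-3
assert np.max(np.abs(beta - eta)) < 1e-3
```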
The following describes the multi-block ADMM algorithm applied to the overlapping group lasso with fused lasso problem. Suppose $\mathcal{G}=\{G_1,\dots,G_g\}$ collects the groups across all groupings, and let $g = |\mathcal{G}|$ and $m = \sum_{G \in \mathcal{G}}|G|$. Let $\boldsymbol F = (\boldsymbol F_1^\top, \dots, \boldsymbol F_g^\top)^\top$ be a matrix of dimension $m \times Kp$, where $\boldsymbol F_l$ is a $|G_l|\times Kp$ matrix with $(i,j)$th entry equal to 1 if $j$ is the $i$th element of group $G_l$, and 0 otherwise, for $l=1,\dots,g$. Then $\boldsymbol F\boldsymbol \beta$ is a vector of length $m$ comprised of components of $\boldsymbol \beta$, where each element of $\boldsymbol \beta$ appears in $\boldsymbol F\boldsymbol \beta$ as many times as it appears across the groups. For example, if $p=1$, $K=3$, $\boldsymbol \beta = (\beta_1, \beta_2, \beta_3)^\top$, and $\mathcal{G} = \{ \{ 1, 2\}, \{2, 3 \} \}$, then
$$
\boldsymbol F =
\begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}
\mbox{ and }
\boldsymbol F\boldsymbol \beta =
\begin{pmatrix}
\beta_1 \\
\beta_2 \\
\beta_2 \\
\beta_3
\end{pmatrix}.
$$
In this example, the penalty is
$
P_1(\boldsymbol \gamma)=\lambda_1(\lambda_{1,G_1}||(\gamma_1,\gamma_2)||_2+\lambda_{1,G_2}||(\gamma_3,\gamma_4)||_2),
$ which is a standard group lasso penalty on the variable $\boldsymbol \gamma$.
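The construction of $\boldsymbol F$ can be sketched in Python (a hypothetical helper of our own, reproducing the $p=1$, $K=3$ example above):

```python
import numpy as np

def build_F(groups, dim):
    # One block of rows per group; row i of block l selects the i-th
    # element of group G_l (groups are 1-indexed, as in the text).
    rows = []
    for G in groups:
        for idx in G:
            r = np.zeros(dim)
            r[idx - 1] = 1.0
            rows.append(r)
    return np.array(rows)

groups = [[1, 2], [2, 3]]            # the p = 1, K = 3 example
F = build_F(groups, 3)
beta = np.array([1.0, 2.0, 3.0])

# F @ beta stacks the group subvectors; beta_2 appears twice
assert np.array_equal(F @ beta, np.array([1.0, 2.0, 2.0, 3.0]))
```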
In general, the overlapping group lasso penalty $P_1(\boldsymbol \gamma)$ can be written as
$
P_1(\boldsymbol \gamma)=\lambda_1\sum_{l=1}^g\lambda_{1,G_l}||\boldsymbol \gamma_{l}||_2,
$
where $\boldsymbol \gamma=(\boldsymbol \gamma_{1},\dots,\boldsymbol \gamma_{g})$ and $\boldsymbol \gamma_{l}$ is a $|G_l|$-dimensional vector.
Note that for the nonoverlapping group lasso, each coefficient appears in exactly one group, so $\boldsymbol F = \boldsymbol I_{Kp}$ (up to a permutation of its rows), where $\boldsymbol I_{Kp}$ is the identity matrix of dimension $Kp\times Kp$.
To accommodate the fused lasso penalty, we allow for a more general penalty, the generalized fused lasso penalty. Denote by $\boldsymbol D$ the matrix with $e$ rows and $Kp$ columns, where each row represents a pair of effects to which a fused lasso penalty is applied; thus $e$ is the total number of fused lasso terms/pairs. For example, if the pair of effects $\beta_{j,\ell}$ and $\beta_{j,m}$ has a fused lasso penalty applied, then the row of $\boldsymbol D$ corresponding to this pair takes the value 1 in the position corresponding to $\beta_{j,\ell}$ and $-1$ in the position corresponding to $\beta_{j,m}$. In the toy example in Figure \ref{fig:regularizer_structure}, if $\beta_1$ has a fused lasso penalty with $\beta_2$ and $\beta_2$ has a fused lasso penalty with $\beta_3$, then
$$
\boldsymbol D =
\begin{pmatrix}
1 & -1 & \hphantom{-}0 \\
0 & \hphantom{-}1 & -1
\end{pmatrix},
\mbox{ and }
\boldsymbol D\boldsymbol \beta =
\begin{pmatrix}
\beta_1 - \beta_2\\
\beta_2 - \beta_3
\end{pmatrix}.
$$
In this example, the penalty is $P_2(\boldsymbol\eta)=\lambda_2(\lambda_{2,1}|\eta_1| + \lambda_{2,2}|\eta_2|)$, which is a standard lasso penalty on the variable $\boldsymbol \eta$.
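The matrix $\boldsymbol D$ can be constructed analogously (again a hypothetical helper of our own, reproducing the toy example above):

```python
import numpy as np

def build_D(pairs, dim):
    # One row per fused pair (l, o): +1 in position l, -1 in position o
    # (positions are 1-indexed, as in the text).
    D = np.zeros((len(pairs), dim))
    for r, (l, o) in enumerate(pairs):
        D[r, l - 1], D[r, o - 1] = 1.0, -1.0
    return D

pairs = [(1, 2), (2, 3)]             # fuse beta_1 with beta_2, beta_2 with beta_3
D = build_D(pairs, 3)
beta = np.array([5.0, 2.0, -1.0])

# D @ beta gives the pairwise differences penalized by P_2
assert np.array_equal(D @ beta, np.array([3.0, 3.0]))
```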
In general, the fused lasso penalty $P_2(\boldsymbol\eta)$ can be written as
$
P_2(\boldsymbol\eta)=\lambda_2\sum_{l=1}^e\lambda_{2,l}|\eta_l|,
$
where $\boldsymbol\eta$ is an $e$-dimensional vector.
The ADMM algorithm for the overlapping group lasso plus the fused lasso is constructed by taking $$\boldsymbol A = \begin{pmatrix}
\boldsymbol F \\
\boldsymbol D
\end{pmatrix}, \boldsymbol B = \begin{pmatrix}
-\boldsymbol I_m \\
\boldsymbol 0
\end{pmatrix}, \boldsymbol C = \begin{pmatrix}
\boldsymbol 0 \\
-\boldsymbol I_e
\end{pmatrix},
\text{ and } \boldsymbol c = \boldsymbol 0,$$ which meets the condition that $\boldsymbol B^\top\boldsymbol C=\boldsymbol 0$, meaning a multi-block ADMM algorithm using this setup is valid and convergent. When $f(\boldsymbol \beta) = \frac{1}{2}|| \widetilde{\boldsymbol Y} - \widetilde{\boldsymbol X}\boldsymbol \beta||^2_2$, step (\ref{eqn:betamin}) for the overlapping group lasso is simply the solution of the linear system $(\widetilde{\boldsymbol X}^\top\widetilde{\boldsymbol X} + \rho (\boldsymbol F^\top \boldsymbol F+ \boldsymbol D^\top \boldsymbol D) )\boldsymbol \beta = \widetilde{\boldsymbol X}^\top\widetilde{\boldsymbol Y} - (\boldsymbol F^\top, \boldsymbol D^\top) \boldsymbol \nu^{(t)} + \rho (\boldsymbol F^\top \boldsymbol \gamma^{(t)} + \boldsymbol D^\top{\boldsymbol \eta}^{(t)})$. When $f(\boldsymbol \beta)$ is the negative log-likelihood, step (\ref{eqn:betamin}) can be carried out by Newton-Raphson or other standard optimization techniques. As step (\ref{eqn:gammamin}) is group-separable, it can be minimized by minimizing with respect to each group $\boldsymbol \gamma_{l}$ independently. This is achieved by the block soft-thresholding operator $S_{\lambda_1 \lambda_{1,G_l} / \rho}((\boldsymbol F\boldsymbol \beta^{(t+1)})_{l} + \boldsymbol \nu_{l}^{(t)}/ \rho)$, where $S_\lambda(\mathbf{u}) = \mathbf{u}\left( 1 - \lambda / ||\mathbf{u}||_2 \right)_+$ and $(\boldsymbol F\boldsymbol \beta^{(t+1)})_{l}$ and $\boldsymbol \nu_{l}^{(t)}$ are partitioned in the same way as $\boldsymbol \gamma_l$.
Since \eqref{eqn:etamin} is separable and equivalent to a type of lasso penalization problem, it can similarly be minimized via soft thresholding: $S_{\lambda_2 \lambda_{2,j} / \rho}((\boldsymbol D\boldsymbol \beta^{(t+1)})_{j} + \boldsymbol \nu_{j}^{(t)}/ \rho)$, where with some abuse of notation $(\boldsymbol D\boldsymbol \beta^{(t+1)})_{j}$ is the $j$th element of $\boldsymbol D\boldsymbol \beta^{(t+1)}$, $\boldsymbol \nu_{j}^{(t)}$ is the corresponding element of $\boldsymbol \nu^{(t)}$, and $S_\lambda(u) = \text{sign}(u)\left( |u| - \lambda \right)_+$ is the scalar analogue of the block soft-thresholding operator.
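Both thresholding operators are one-liners; the following Python sketch (our own helper names) implements them, guarding the block version against a zero-norm input:

```python
import numpy as np

def block_soft_threshold(u, lam):
    # S_lam(u) = u * (1 - lam / ||u||_2)_+ ; maps u to 0 when ||u||_2 <= lam
    norm = np.linalg.norm(u)
    if norm <= lam:
        return np.zeros_like(u)
    return u * (1 - lam / norm)

def soft_threshold(u, lam):
    # scalar case: sign(u) * (|u| - lam)_+
    return np.sign(u) * max(abs(u) - lam, 0.0)

u = np.array([3.0, 4.0])             # ||u||_2 = 5
assert np.allclose(block_soft_threshold(u, 2.5), u * 0.5)
assert np.allclose(block_soft_threshold(u, 6.0), 0.0)
assert soft_threshold(-3.0, 1.0) == -2.0
```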
Our convergence criterion is the same as that suggested in Section 3.3.1 of Boyd, et al \cite{boyd2011} with $\epsilon^{\text{abs}} = \epsilon^{\text{rel}} = 10^{-5}$. The parameter $\rho$ for the ADMM algorithm is updated via the adaptive scheme described in Section 3.4.1 of Boyd, et al \cite{boyd2011}.
\subsection{Connections with existing literature}\label{sec:lit_connections}
The methods of Rothman, et al \cite{rothman2010sparse} and Li, et al \cite{li2015multivariate} both also address scenarios where high dimensional covariates are used to predict a set of multivariate outcomes in a penalized regression framework. These two methods, in addition to ours, borrow information across outcomes through penalization techniques, albeit in different ways.
If the multivariate outcomes are only correlated due to correlations between the outcomes themselves, then the multivariate regression with covariance estimation (MRCE) approach \cite{rothman2010sparse} may be most appropriate as it directly incorporates the covariance of the responses via a penalized log-likelihood; here, the lasso penalty induces the estimated coefficients to be a function of the covariance of the responses, unlike the unpenalized maximum likelihood estimator. However, when many covariates are expected to have effects of similar magnitude across the multivariate outcomes, our additional fused lasso penalty is designed to make use of this by borrowing information across outcomes for such covariates at a more granular level. Further, when there is strong group structure in whether or not a given covariate has an effect across groups of outcomes, but the magnitudes of the effects are unrelated, then both our approach and the multivariate sparse group lasso (MSGL) approach \cite{li2015multivariate} are appropriate options and their performances should be expected to be similar, despite our additional fused lasso penalty, whereas MRCE is not designed explicitly to make use of such information. However, if indeed there are many covariates with similar effects across outcomes and a group structure, our approach may be expected to perform well. If there is no relationship whatsoever across the multivariate outcomes, an approach of simply fitting a lasso-penalized model separately for each outcome may have reasonable performance.
\section{Numerical Experiments}\label{sec:simulation}
\subsection{Estimators used}\label{sec:comparator_methods}
In this section, we conduct simulation experiments to assess the small-sample operating characteristics of our proposed doubly-structured, sparsity-inducing estimation approach in comparison with several other state-of-the-art approaches for multivariate regression in high dimensions. In our simulations, we vary both the dimensionality of the problem and the sample size. We also consider data-generating mechanisms with varying degrees of sparsity of variable effects that align with the grouping of the outcomes, and further consider scenarios where the effects of each variable have either minimal similarity across outcomes or a varying degree of similarity across outcomes. This allows our studies to explore under which data-generating mechanisms our two penalties indeed help in estimation. We use adaptive and non-adaptive versions of our method, denoted {OGFM(adapt)} and {OGFM}, respectively, where OGFM indicates Overlapping Group $+$ Fused Multivariate regression. For the adaptive version, we set $\gamma_1=\gamma_2=0.5$, and for high-dimensional settings with $p\geq n$, we used marginal regression estimates for the adaptive weights as in Huang, et al. \cite{huang2008}.
We compare our approach with the approach of Li, et al. \cite{li2015multivariate} (denoted as {MSGL}), which allows for overlapping group lasso penalties. For {MSGL}, we use the \texttt{MSGLasso} \texttt{R} package version 2.1. We further compare with a simple approach of fitting a separate lasso-penalized linear regression model for each outcome, with the tuning parameter chosen separately for each outcome (denoted as {Sep-Lasso}), and do so using the \texttt{glmnet} \texttt{R} package version 4.1-3. We also compare with the approach of Rothman, et al. \cite{rothman2010sparse} (denoted {MRCE}) implemented in the \texttt{MRCE} \texttt{R} package version 2.1. For all methods, the tuning parameters are chosen by 10-fold cross validation; {MSGL} has two tuning parameters (one for a lasso penalty, one for a group lasso penalty), {MRCE} has two tuning parameters (one for a lasso penalty on the coefficients for the variable effects on responses, another for a lasso penalty on the elements of the inverse covariance matrix of the residuals), {Sep-Lasso} has a tuning parameter for each outcome for a lasso penalty on coefficients, and the {OGFM} approaches have two tuning parameters (one for the overlapping group lasso penalty and another for the fused lasso penalty). For both the {OGFM} approaches and {MSGL}, the group lasso penalty applied corresponds to the true underlying group structure of the data-generating mechanism.
\subsection{Data generation}
For each replication of the simulation, we generate data under model \eqref{lin_model}, where $\mathbf{x}_i$ are generated as i.i.d. multivariate normal random variables with mean vector $\boldsymbol 0$ and covariance matrix $\boldsymbol \Sigma_{\boldsymbol X} = [\sigma_{Xjk}]_{j,k=1}^p$, where $\sigma_{Xjk} = 0.5^{|j-k|}$, as was used in the simulations of Yuan and Lin \cite{yuan2007model} and Rothman, et al. \cite{rothman2010sparse}. The responses are generated according to model \eqref{lin_model}, where the error vectors $\boldsymbol \epsilon_i$ are generated as i.i.d. multivariate normal random variables with mean vector $\boldsymbol 0$ and covariance matrix $\boldsymbol \Sigma_{\boldsymbol \epsilon} = [\sigma_{\epsilon jk}]_{j,k=1}^K$, where $\sigma_{\epsilon jk} = 4\left(0.5^{|j-k|}\right)$. In the Supplementary Material Appendix B.2, we explore a simulation setting with binary covariates and a Likert scale outcome similar to our motivating data.
The dimensionality of the outcome/error vector is $K=8$ and the outcomes form 3 groups, with the first three outcomes forming group 1, the fourth and fifth forming group 2, and the last three outcomes forming group 3. The variable effects are generated as
\vspace{10pt}
\begin{normalsize}
\begin{equation*}
\boldsymbol \beta^0 =
\begin{pmatrix}
\bovermat{$G_{1,1}$}{ \xi_{1,1} \xi^G_{1,1} \eta_{1,1} & \xi_{1,2} \xi^G_{1,1} \eta_{1,2} & \xi_{1,3} \xi^G_{1,1} \eta_{1,3} } & \bovermat{$G_{1,2}$}{\xi_{1,4} \xi^G_{1,2} \eta_{1,4} & \xi_{1,5} \xi^G_{1,2} \eta_{1,5}} & \bovermat{$G_{1,3}$}{\xi_{1,6} \xi^G_{1,3} \eta_{1,6} & \xi_{1,7} \xi^G_{1,3} \eta_{1,7} & \xi_{1,8} \xi^G_{1,3} \eta_{1,8}} \\
\vdots & & & & & & & \vdots \\
\xi_{z,1} \xi^G_{z,1} \eta_{z,1} & \xi_{z,2} \xi^G_{z,1} \eta_{z,2} & \xi_{z,3} \xi^G_{z,1} \eta_{z,3} & \xi_{z,4} \xi^G_{z,2} \eta_{z,4} & \xi_{z,5} \xi^G_{z,2} \eta_{z,5} & \xi_{z,6} \xi^G_{z,3} \eta_{z,6} & \xi_{z,7} \xi^G_{z,3} \eta_{z,7} & \xi_{z,8} \xi^G_{z,3} \eta_{z,8} \\
0 & \cdots & & & & & \cdots & 0 \\
\vdots & \ddots & & & & & & \vdots \\
0 & \cdots & & & & & \cdots & 0
\end{pmatrix},
\end{equation*}
\end{normalsize}
where the last $p-z$ rows of $\boldsymbol \beta^0$ have all elements as 0, the terms $\xi_{j,k}\sim$ Bernoulli$(0.9)$ induce sparsity at the individual effect level, the terms $\xi^G_{j,1}, \xi^G_{j,2}, \xi^G_{j,3} \sim$ Bernoulli$(1-p_{\text{HS}})$ induce sparsity at the group level, the variable effect size terms $\eta_{j,k}$ are distributed i.i.d. uniformly from $\{-1, -0.5, -0.25, -0.125, 0.125, 0.25, 0.5, 1\}$ to create both small and large effects, and for each variable $j$ separately the terms $\eta_{j,k}$ for $k$ in the same group are set to be all equal to each other with probability $p_{\text{GE}}/2$ and independently, the terms $\eta_{j,k}$ for all outcomes $k=1,\dots,8$ are set to be all equal to $\eta_{j,1}$ with probability $p_{\text{GE}}/2$. The latter process induces effects for some variables within a group to be equal to each other and induces effects for some variables to be equal for all outcomes.
We explore dimensions of $p=50, 100,$ and $200$ and set $z=25, 50, 50$ for each dimension setting, respectively. We explore hierarchical sparsity parameters $p_{\text{HS}} = 0, 0.25,$ and $0.5$ and fusing probabilities $p_{\text{GE}} = 0, 0.5, 0.75,$ and $0.95$; when $p_{\text{HS}} = 0$, there is no group-wise sparsity and thus any group lasso penalty applied is superfluous. For each replication of the simulation, we additionally generate an independent test set of size 10000 for use in evaluating predictive performance.
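The coefficient-generation scheme above can be condensed into a Python sketch (our own rendering of the description; variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
K, p, z = 8, 50, 25
groups = [[0, 1, 2], [3, 4], [5, 6, 7]]      # the three outcome groups
p_HS, p_GE = 0.25, 0.75
effect_sizes = np.array([-1, -0.5, -0.25, -0.125, 0.125, 0.25, 0.5, 1.0])

B = np.zeros((p, K))
for j in range(z):
    eta = rng.choice(effect_sizes, size=K)
    if rng.random() < p_GE / 2:              # equal effects within each group
        for G in groups:
            eta[G] = eta[G[0]]
    if rng.random() < p_GE / 2:              # equal effects across all outcomes
        eta[:] = eta[0]
    for G in groups:
        xi_G = rng.random() > p_HS           # group-level sparsity, Bern(1 - p_HS)
        for k in G:
            xi = rng.random() < 0.9          # individual-level sparsity, Bern(0.9)
            B[j, k] = xi * xi_G * eta[k]

# The last p - z rows remain identically zero
assert np.all(B[z:] == 0)
```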
\subsection{Performance evaluation}
We evaluate methods in terms of three metrics: root mean squared error of predictions (RMSE) on a large, independent test set of size 10000 generated anew for each simulation replication; model error, defined below; and balanced accuracy, the average of the true positive rate (TPR) and true negative rate (TNR), described in detail in the Supplementary Material Appendix B.1 and not to be confused with standard classification accuracy. As our primary interest is predictive performance on validation data, the RMSE is our main evaluation criterion. For the RMSE, we compute the RMSE for each outcome separately and then average the RMSEs across the outcomes. Model error for a given estimate is defined as ME$(\widehat{\boldsymbol \beta}, \boldsymbol \beta^0) = \text{tr}\left[ (\widehat{\boldsymbol \beta} - \boldsymbol \beta^0)^\top \boldsymbol \Sigma_{\boldsymbol X} (\widehat{\boldsymbol \beta} - \boldsymbol \beta^0) \right]$.
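As a minimal sketch (the function names are ours, not part of our software), the two numeric metrics can be computed as:

```python
import numpy as np

def avg_rmse(Y, Y_hat):
    # RMSE computed for each outcome (column) separately, then averaged
    return np.sqrt(np.mean((Y - Y_hat) ** 2, axis=0)).mean()

def model_error(beta_hat, beta0, Sigma_x):
    # ME(B_hat, B0) = tr[(B_hat - B0)^T Sigma_X (B_hat - B0)]
    D = beta_hat - beta0
    return np.trace(D.T @ Sigma_x @ D)
```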
\subsection{Results}
The RMSE results fixing the parameter at $p_{\text{HS}}=0$ are displayed in Figure \ref{fig:sim_res_rmse_0hsp} and the RMSE results for $p_{\text{HS}}=0.25$ are displayed in Figure \ref{fig:sim_res_rmse_025hsp}. When the sample size is small ($n=100$) and the hierarchical sparsity probability is 0 (Figure \ref{fig:sim_res_rmse_0hsp}), our proposed approach OGFM results in the smallest validation RMSE on average across the replications in all settings, including moderate dimensions ($p=50,100$) and high dimensions ($p=200$), with MSGL second best across the majority of settings and OGFM(adapt) a close third. The separate lasso tends to perform worse than all competing approaches, except in some settings (e.g. when $n=400$) where it performs marginally better than MRCE. We note that the code for MRCE throws an error when $p\geq n$, although in principle MRCE can work in high-dimensional settings. In the same settings but with $n=200$, OGFM still tends to perform best across all settings; however, when the probability of coefficients being equal to each other is zero ($p_{\text{GE}}=0$) and the dimensionality is moderate, the MRCE approach performs nearly as well. In the high-dimensional setting ($p=200$), MSGL performs on par with OGFM and sometimes better; however, this benefit attenuates slightly for the larger sample size setting ($n=400$). In general, the adaptive version of our approach, OGFM(adapt), performs worse than OGFM in small sample size settings and better with larger sample sizes. The results with the hierarchical sparsity probability set to 0.25 (Figure \ref{fig:sim_res_rmse_025hsp}) roughly mirror the results with no hierarchical sparsity, although the performance of all methods is slightly better in terms of RMSE, as there are fewer overall non-zero coefficients in the data-generating process.
The model error results largely track with the validation RMSE results, albeit on a different scale, and are thus shown in the Supplementary Material Appendix B.1.
\begin{figure}[!htpb]
\centering
\includegraphics[width=1\textwidth]{figures/simulations/sim_res_100reps_rmse_0hsp.png}
\caption{Validation RMSEs for all methods across 100 replications of the simulation experiment holding the parameter $p_{\text{HS}}=0$ so that there is no group-level sparsity.}
\label{fig:sim_res_rmse_0hsp}
\end{figure}
\begin{figure}[!htpb]
\centering
\includegraphics[width=1\textwidth]{figures/simulations/sim_res_100reps_rmse_025hsp.png}
\caption{Validation RMSEs for all methods across 100 replications of the simulation experiment holding the parameter $p_{\text{HS}}=0.25$ so that there is moderate group-level sparsity.}
\label{fig:sim_res_rmse_025hsp}
\end{figure}
Figure \ref{fig:sim_res_rmse_sparsity_view} shows the same results as Figures \ref{fig:sim_res_rmse_0hsp} and \ref{fig:sim_res_rmse_025hsp}, but re-organized to focus on the effect of the fused coefficients probability ($p_{\text{GE}}$) and the hierarchical sparsity probability ($p_{\text{HS}}$). In this figure, we fix the sample size at 200. In general, as the probability of coefficients being equal/fused within groups increases, the performance of OGFM and OGFM(adapt) relative to Sep-Lasso, MSGL, and MRCE tends to improve, with the trend most pronounced in moderate dimensional settings. Only the OGFM methods improve as the probability of coefficients being equal increases, as they are the only approaches which explicitly allow shrinkage of effect sizes toward each other across outcomes. The relative performance of all methods stays fairly consistent as $p_{\text{HS}}$ is varied. However, we note that MRCE tends to perform best in the setting with the fewest true non-zero coefficients ($p_{\text{HS}}=0.25$) and the smallest amount of truly equal coefficients ($p_{\text{GE}}=0$).
In the Supplementary Material Appendix B.1, we show simulation results in terms of the average of the TPR and TNR. From these results, it can be seen that while OGFM performs best in terms of prediction, it tends to over-select terms, resulting in a high TPR but low TNR and thus a lower average of the TPR and TNR. On the other hand, for larger sample sizes OGFM(adapt) performs very well in terms of the average TPR and TNR, as it selects fewer variables with a higher proportion of the selected variables having truly non-zero coefficients. Overall, MSGL performs best in terms of the average TPR and TNR and MRCE worst; the latter trend was also observed in the original work of \citet{rothman2010sparse}, who noted that MRCE tends to result in better model error but with less benefit in terms of TPR and TNR. In the Supplementary Material Appendix B.1 we also present computation times for all methods. The OGFM approaches are highly competitive computationally as the sample size increases; however, their relative performance deteriorates as the dimensionality increases.
\begin{figure}[!htpb]
\centering
\includegraphics[width=0.85\textwidth]{figures/simulations/sim_res_100reps_sparsity_view_rmse_n200.png}
\caption{Validation RMSEs for all methods across 100 replications of the simulation experiment fixing the sample size at $n=200$ and varying all other simulation parameters. The points are the average RMSEs across the 100 replications and error bars are plus and minus 1 standard deviation of this average.}
\label{fig:sim_res_rmse_sparsity_view}
\end{figure}
\section{Computation via a Multi-block ADMM Algorithm}\label{sec:admm}
\section{Proofs}\label{sec:proofs}
In this section, we prove Theorem 1 of our paper.
\begin{proof}[Proof of Theorem \ref{thm:linear_model_oracle_property_mis}]
We first prove asymptotic results for the linear model and then extend them to generalized linear models later.
We begin by showing result (\ref{linear_model_asympt_distr_mis}) of the main text. Let $\boldsymbol \beta_{\cdot,k} = \boldsymbol \beta^0_{\cdot,k} + \frac{1}{\sqrt{N}}\mathbf{u}_{\cdot,k}$, where $\mathbf{u}_{\cdot,k} \in \mathbbm{R}^p$. We can write the objective function (\ref{overlapping_linear_model_objective}) of the main text multiplied by $N$ as a function of $\mathbf{u}=(\mathbf{u}_{\cdot,1},\dots,\mathbf{u}_{\cdot,K})$ as follows:
\begin{align*}
F_N(\mathbf{u}) = {} & \sum_{k=1}^K\frac{1}{2}\biggr\lVert-\frac{1}{\sqrt{N}}\boldsymbol X\mathbf{u}_{\cdot,k} + \boldsymbol\epsilon_k\biggr\rVert^2_2 \\
& + \lambda_1 N \sum_{j=1}^p \sum_{G\in \mathcal{G}}\lambda_{1,j,G}\biggr\lVert\boldsymbol \beta_{j,G}^{0}+\frac{1}{\sqrt{N}}\mathbf{u}_{j,G}\biggr\rVert_2 \\
& + \lambda_2 N\sum_{j=1}^p\sum_{(l, m) \in \mathcal{F} } \lambda_{2,j,l,m}\biggr\lvert{\beta}^0_{j,l} - {\beta}^0_{j,m} + \frac{1}{\sqrt{N}}\left( u_{j,l} - u_{j,m} \right) \biggr\rvert,
\end{align*}
where $u_{j,k} = \sqrt{N}(\beta_{j,k} - \beta^0_{j,k})$.
Let $\hat{\mathbf{u}}^{(N)} = \operatornamewithlimits{argmin}_{\mathbf{u}}F_N(\mathbf{u})$ and note that $\hat{\mathbf{u}}^{(N)}_{\cdot,k} = \sqrt{N}(\widehat{\boldsymbol \beta}_{\cdot,k} - \boldsymbol \beta^0_{\cdot,k})$, where $\widehat{\boldsymbol \beta}$ is the minimizer of the objective function (\ref{overlapping_linear_model_objective}) of the main text. Thus, investigating the asymptotic distribution of $\widehat{\boldsymbol \beta}$ is equivalent to investigating the asymptotic distribution of $\hat{\mathbf{u}}^{(N)}$.
Now, we let
\begin{align}
D_N(\mathbf{u}) = {} & F_N(\mathbf{u}) - F_N(\boldsymbol 0) \label{eqn:gram_matrix}\\
= {} & \sum_{k=1}^K \left( \frac{1}{2} \mathbf{u}^\top_{\cdot,k}\left( \frac{1}{N}{\boldsymbol X}^\top \boldsymbol X \right)\mathbf{u}_{\cdot,k} - \frac{1}{\sqrt{N}}\mathbf{u}_{\cdot,k}^\top\boldsymbol X^\top\boldsymbol\epsilon_k \right) \nonumber\\
%
%
{} & + \sqrt{N}\lambda_1 \sqrt{N} \left( \sum_{j=1}^p \sum_{G\in \mathcal{G}}\lambda_{1,j,G}||\boldsymbol \beta_{j,G}^{0}+\frac{1}{\sqrt{N}}\mathbf{u}_{j,G}||_2 \right. \nonumber \\
& \qquad\qquad\qquad - \left. \sum_{j=1}^p \sum_{G\in \mathcal{G}}\lambda_{1,j,G}||\boldsymbol \beta_{j,G}^{0}||_2 \right) \label{eqn:diff_f_group} \\
%
%
{} & + \sqrt{N}\lambda_2 \sqrt{N} \left( \sum_{j=1}^p\sum_{(l, m) \in \mathcal{F} } \lambda_{2,j,l,m}\biggr\lvert{\beta}^0_{j,l} - {\beta}^0_{j,m} + \vphantom{\biggr\lvert}\frac{1}{\sqrt{N}}\left[ u_{j,l} - u_{j,m} \right] \biggr\rvert \right. \nonumber \\
{} & \qquad\qquad\qquad \left. -\sum_{j=1}^p\sum_{(l, m) \in \mathcal{F}} \lambda_{2,j,l,m}\biggr\lvert{\beta}^0_{j,l} - {\beta}^0_{j,m} \biggr\rvert \right) \label{eqn:diff_f_fused} \\
%
%
%
= {} & \sum_{k=1}^K \left( \frac{1}{2} \mathbf{u}_{\cdot,k}^\top\left( \frac{1}{N}{\boldsymbol X}^\top \boldsymbol X \right)\mathbf{u}_{\cdot,k} - \frac{1}{\sqrt{N}}\mathbf{u}_{\cdot,k}^\top\boldsymbol X^\top\boldsymbol\epsilon_k \right) \nonumber\\
{} & + \sqrt{N}\lambda_1 \sum_{j=1}^p\sum_{G \in \mathcal{G}_{\mathcal{H}_{j,\cdot}}} \lambda_{1,j,G}\sqrt{N} \left( \biggr\lVert\boldsymbol \beta_{j,G}^{0} + \frac{1}{\sqrt{N}}\mathbf{u}_{j,G}\biggr\rVert_2 - \biggr\lVert\boldsymbol \beta_{j,G}^{0}\biggr\rVert_2 \right) \\
{} & + \sqrt{N}\lambda_1 \sum_{j=1}^p\sum_{G \in \mathcal{G}_{\mathcal{H}_{j,\cdot}^c}} \lambda_{1,j,G}\sqrt{N} \left( \biggr\lVert\boldsymbol \beta_{j,G}^{0} + \frac{1}{\sqrt{N}}\mathbf{u}_{j,G}\biggr\rVert_2 - \biggr\lVert\boldsymbol \beta_{j,G}^{0}\biggr\rVert_2 \right) \\
%
%
{} & + \sqrt{N}\lambda_2 \sqrt{N} \left( \sum_{j=1}^p\sum_{(l, m) \in \mathcal{F}, {\beta}^0_{j,l} \neq {\beta}^0_{j,m} } \lambda_{2,j,l,m}\biggr\lvert{\beta}^0_{j,l} - {\beta}^0_{j,m} + \frac{1}{\sqrt{N}}\left[ u_{j,l} - u_{j,m} \right] \biggr\rvert \right. \nonumber \\
{} & \qquad\qquad\qquad \left. - \sum_{j=1}^p\sum_{(l, m) \in \mathcal{F},{\beta}^0_{j,l} \neq {\beta}^0_{j,m} } \lambda_{2,j,l,m}\biggr\lvert{\beta}^0_{j,l} - {\beta}^0_{j,m} \biggr\rvert \right) \\
%
{} & + \sqrt{N}\lambda_2 \sqrt{N} \left( \sum_{j=1}^p\sum_{(l, m) \in \mathcal{F}, {\beta}^0_{j,l} = {\beta}^0_{j,m} } \lambda_{2,j,l,m} \right. \nonumber \\
{} & \qquad\qquad\qquad \times \left. \biggr\lvert\frac{1}{\sqrt{N}}\left[ u_{j,l} - u_{j,m} \right] \biggr\rvert \vphantom{\sum_{l, m \in \mathcal{K}: l \setminus m = 1, {\beta}^0_{j,l} = {\beta}^0_{j,m} }} \right)
\end{align}
where $\mathcal{G}_{\mathcal{H}_{j,\cdot}}$ is the set of all groups $G$ containing some $k \in G$ such that $\beta^0_{j,k}\not=0$ and $\mathcal{G}_{\mathcal{H}_{j,\cdot}^c}$ is the set of all groups $G$ such that $\beta^0_{j,k}=0$ for all $k \in G$. We obtain the asymptotic distribution of $\hat{\mathbf{u}}^{(N)}$ by first investigating the asymptotic properties of $D_N(\mathbf{u})$ for every fixed $\mathbf{u}\in \mathbbm{R}^{Kp}$.
For all $G \in \mathcal{G}_{\mathcal{H}_{j,\cdot}}$, we have $\lambda_{1,j,G} \xrightarrow{p} || \boldsymbol \beta_{j,G}^{0} ||_2^{-\gamma_1} $ and by taking the directional derivative in the direction of $\mathbf{u}_{j,G}$, we have
\begin{align*}
\sqrt{N} \left( \biggr\lVert\boldsymbol \beta_{j,G}^{0} + \frac{1}{\sqrt{N}}\mathbf{u}_{j,G}\biggr\rVert_2 - \biggr\lVert\boldsymbol \beta_{j,G}^{0}\biggr\rVert_2 \right) = \frac{{\mathbf{u}_{j,G}}^\top\boldsymbol \beta_{j,G}^{0}}{||\boldsymbol \beta_{j,G}^{0}||_2} + o_p(1).
\end{align*}
Then because $\sqrt{N}\lambda_1 = o(1)$, we have by Slutsky's theorem that
\begin{align*}
\sqrt{N}\lambda_1\lambda_{1,j,G}\sqrt{N} \left( \biggr\lVert\boldsymbol \beta_{j,G}^{0} + \frac{1}{\sqrt{N}}\mathbf{u}_{j,G}\biggr\lVert_2 - \biggr\lVert\boldsymbol \beta_{j,G}^{0}\biggr\lVert_2 \right) = o_p(1).
\end{align*}
For all $G \in \mathcal{G}_{\mathcal{H}_{j,\cdot}^c}$, because $N^{\gamma_1/2}||\widehat{\boldsymbol \beta}^{MLE}_{j,G}||^{\gamma_1}_2 = O_p(1)$ and $\boldsymbol \beta_{j,G}^{0} = \boldsymbol 0$ so that $\sqrt{N} \left( \biggr\lVert\boldsymbol \beta_{j,G}^{0} + \frac{1}{\sqrt{N}}\mathbf{u}_{j,G}\biggr\rVert_2 - \biggr\lVert\boldsymbol \beta_{j,G}^{0}\biggr\rVert_2 \right) = ||\mathbf{u}_{j,G}||_2$, we have
\begin{align}
\lambda_1\lambda_{1,j,G}\sqrt{N} ||\mathbf{u}_{j,G}||_2 = ||\mathbf{u}_{j,G}||_2\,\lambda_1 \frac{N^{(\gamma_1 + 1)/2}}{(\sqrt{N}|| \widehat{\boldsymbol \beta}^{MLE}_{j,G} ||_2)^{\gamma_1}} \xrightarrow{p} \infty, \label{lambda_g_infty1}
\end{align}
{
if $\mathbf{u}_{j,G}\not =\boldsymbol 0$, and,
\begin{align}
\lambda_1\lambda_{1,j,G}\sqrt{N} ||\mathbf{u}_{j,G}||_2 = ||\mathbf{u}_{j,G}||_2\,\lambda_1 \frac{N^{(\gamma_1 + 1)/2}}{(\sqrt{N}|| \widehat{\boldsymbol \beta}^{MLE}_{j,G} ||_2)^{\gamma_1}} = o_p(1), \label{lambda_g_infty2}
\end{align}
if $\mathbf{u}_{j,G}=\boldsymbol 0$.}
Similarly, for all $(l,m) \in \mathcal{F}$ with ${\beta}^0_{j,l} \neq {\beta}^0_{j,m}$, we have $\lambda_{2,j,l,m} \xrightarrow{p} | {\beta}^0_{j,l} - {\beta}^0_{j,m} |^{-\gamma_2}$ and
\begin{align*}
& \sqrt{N}\left( \biggr\lvert \beta_{j,l}^0 - \beta_{j,m}^0 + \frac{1}{\sqrt{N}}\left( u_{j,l} - u_{j,m} \right) \biggr\rvert - \biggr\lvert \beta_{j,l}^0 - \beta_{j,m}^0 \biggr\rvert \right) \\
{} & = \left( u_{j,l} - u_{j,m} \right) \mbox{sign}\left({\beta}^0_{j,l} - {\beta}^0_{j,m}\right) + o_p(1).
\end{align*}
Then since $\sqrt{N}\lambda_2 = o(1)$, we have by Slutsky's theorem that
\begin{align*}
& \sqrt{N}\lambda_2\lambda_{2,j,l,m}\sqrt{N}\left( \biggr\lvert \beta_{j,l}^0 - \beta_{j,m}^0 + \frac{1}{\sqrt{N}}\left( u_{j,l} - u_{j,m} \right) \biggr\rvert - \biggr\lvert \beta_{j,l}^0 - \beta_{j,m}^0 \biggr\rvert \right) = o_p(1).
\end{align*}
Now for all $(l,m) \in \mathcal{F}$ with ${\beta}^0_{j,l} = {\beta}^0_{j,m}$, we have
\begin{align*}
& \sqrt{N}\lambda_2\lambda_{2,j,l,m} \sqrt{N} \biggr\lvert\frac{1}{\sqrt{N}}\left( u_{j,l} - u_{j,m} \right) \biggr\rvert \\
{} & = \sqrt{N} \biggr\lvert\frac{1}{\sqrt{N}}\left( u_{j,l} - u_{j,m} \right) \biggr\rvert \lambda_2 \frac{N^{(\gamma_2 + 1)/2 }}{\left( \sqrt{N}\lvert \hat{\beta}^{MLE}_{j,l} - \hat{\beta}^{MLE}_{j,m} \rvert \right)^{\gamma_2}} \xrightarrow{p} \infty
\end{align*}
if $ u_{j,l} \neq u_{j,m}$ and
\begin{align*}
& \sqrt{N}\lambda_2\lambda_{2,j,l,m} \sqrt{N} \biggr\lvert\frac{1}{\sqrt{N}}\left( u_{j,l} - u_{j,m} \right) \biggr\rvert \\
{} & = \sqrt{N} \biggr\lvert\frac{1}{\sqrt{N}}\left( u_{j,l} - u_{j,m} \right) \biggr\rvert \lambda_2 \frac{N^{(\gamma_2 + 1)/2 }}{\left( \sqrt{N}\lvert \hat{\beta}^{MLE}_{j,l} - \hat{\beta}^{MLE}_{j,m} \rvert \right)^{\gamma_2}} = o_p(1)
\end{align*}
if $ u_{j,l} = u_{j,m}$.
By our condition on the rates of $n_j$ for $j \in \mathcal{K}$ and $N$, we have that $N^{-1}{\boldsymbol X}^\top\boldsymbol X \to \boldsymbol Q$ and thus $N^{-1}{\widetilde{\boldsymbol X}}^\top\widetilde{\boldsymbol X} \to \widetilde{\boldsymbol Q} \equiv \boldsymbol I_K\otimes \boldsymbol Q$, where $\boldsymbol Q$ and $\widetilde{\boldsymbol Q}$ are positive definite. Note that there exist matrices $\boldsymbol H$ and $\boldsymbol E$ such that $\widetilde{\boldsymbol X}_\mathcal{H}^* = \widetilde{\boldsymbol X}\boldsymbol H\boldsymbol E$, so that $N^{-1}{{\widetilde{\boldsymbol X}_\mathcal{H}{}}^*}^\top{\widetilde{\boldsymbol X}_\mathcal{H}{}}^* = N^{-1}\boldsymbol E^\top\boldsymbol H^\top\widetilde{\boldsymbol X}^\top\widetilde{\boldsymbol X}\boldsymbol H\boldsymbol E \to \boldsymbol Q_\mathcal{H}^* \equiv \boldsymbol E^\top\boldsymbol H^\top\widetilde{\boldsymbol Q}\boldsymbol H\boldsymbol E$, where $\boldsymbol Q_\mathcal{H}^*$ is positive definite and is constructed from $\widetilde{\boldsymbol Q}$ in a manner corresponding to the pattern of collapsed and dropped columns of $\widetilde{\boldsymbol X}$, with rows dropped and collapsed in the same pattern. Further, $N^{-1/2}\boldsymbol\epsilon^\top\widetilde{\boldsymbol X} \xrightarrow{d} \boldsymbol W$ with $\boldsymbol W \sim N(\boldsymbol 0, \boldsymbol \Sigma \otimes \boldsymbol Q)$, and $N^{-1/2}\boldsymbol\epsilon^\top\widetilde{\boldsymbol X}_\mathcal{H}^* = N^{-1/2}\boldsymbol\epsilon^\top\widetilde{\boldsymbol X}\boldsymbol H\boldsymbol E \xrightarrow{d} \boldsymbol W_\mathcal{H}^*$ with $\boldsymbol W_\mathcal{H}^* \sim N(\boldsymbol 0, \boldsymbol V^*_\mathcal{H})$, where $\boldsymbol V^*_\mathcal{H} = \boldsymbol E^\top\boldsymbol H^\top \left(\boldsymbol \Sigma \otimes \boldsymbol Q\right)\boldsymbol H\boldsymbol E$. Here $\mathbf{u}_\mathcal{H}$ denotes the elements of $\mathbf{u}$ which correspond to the nonzero elements of $\boldsymbol \beta^0$.
Further, denote $\mathbf{u}_\mathcal{H}^*$ to be the unique elements in $\mathbf{u}_\mathcal{H}$ collapsed in the same manner as $\boldsymbol X_\mathcal{H}^*$.
Since the term (\ref{eqn:gram_matrix}) converges in distribution to
$\frac{1}{2}\mathbf{u}^\top\widetilde{\boldsymbol Q}\mathbf{u} + \mathbf{u}^\top\boldsymbol W$,
using Slutsky's theorem we have that $D_N(\mathbf{u}) \xrightarrow{d} D(\mathbf{u})$ {for each $\mathbf{u}$}, where
\[ D(\mathbf{u}) =
\begin{cases}
\frac{1}{2}{\mathbf{u}^*}_\mathcal{H}^\top\boldsymbol Q_\mathcal{H}^*\mathbf{u}^*_{\mathcal{H}} + {\mathbf{u}^*}_\mathcal{H}^\top\boldsymbol W_\mathcal{H}^* & \mbox{if } {\mathbf{u}_{\mathcal{H}_{k,\cdot}^c} = \boldsymbol 0}, \forall k=1,\dots,K, \\
& \mbox{and } u_{j,l} = u_{j,m} \mbox{ for all } (l,m) \in \mathcal{E}_{j,\cdot} \\
\infty & \mbox{otherwise.}
\end{cases}
\]
It is clear that $D_N(\mathbf{u})$ is convex and the unique minimum of $D(\mathbf{u})$ is $((\boldsymbol Q_\mathcal{H}^*)^{-1} \boldsymbol W_\mathcal{H}^*, \boldsymbol 0)$. By the epiconvergence results of \cite{geyer1994} and \cite{knight2000}, we have the following:
\begin{align}
{{}\hat{\mathbf{u}}^*_{\mathcal{H}}}^{(N)} \xrightarrow{d} (\boldsymbol Q_\mathcal{H}^*)^{-1} \boldsymbol W^*_{\mathcal{H}} \mbox{ and } \hat{\mathbf{u}}_{\mathcal{H}^c}^{(N)} \xrightarrow{d} \boldsymbol 0
\end{align}
where ${\boldsymbol Q_\mathcal{H}^*}^{-1} \boldsymbol W_{\mathcal{H}}^* \sim N_{|\mathcal{H}^*|}(0,{\boldsymbol Q_\mathcal{H}^*}^{-1}\boldsymbol V^*_\mathcal{H} {\boldsymbol Q_\mathcal{H}^*}^{-1} )$, where $|\mathcal{H}^*|$ represents the number of columns in $\boldsymbol Q_\mathcal{H}^*$. Hence result (\ref{linear_model_asympt_distr_mis}) of the main text is verified.
We now show selection consistency. For any $j \in \mathcal{H}_{\cdot,k}$, by the asymptotic normality result (\ref{linear_model_asympt_distr_mis}) of the main text it follows that $P(k \in \hat{\mathcal{J}}_{j,\cdot}) \to 1$. To verify result (\ref{linear_model_selection_consistency_mis}) of the main text, it is equivalent to show that for any $k'$ such that $j \in \mathcal{H}_{\cdot,k'}^c$, which implies that $k'\not\in \mbox{Hull}(\mathcal{J}_{j,\cdot})$, we have $P(k' \in \hat{\mathcal{J}}_{j,\cdot}) \to 0$. Suppose $\hat{\beta}_{j,k'} \neq 0$ and $j \in \mathcal{H}_{\cdot,k'}^c$; then there exists at least one $G\subset \{1,\dots,K\}$ such that $k'\in G\subset \mathcal{H}_{j,\cdot}^c$. For any $k \in \mathcal{K}$ define $\mathcal{G}_{j,k} = \{ G \in \mathcal{G} : k \in G \subset \mathcal{H}_{j,\cdot}^c \}$.
Let $G_{j,k} = \operatornamewithlimits{argmax}_{G \in \mathcal{G}_{j,k}} |G|$, i.e., $G_{j,k}$ is the largest group containing $k$ in the complement of the true hull for variable $j$.
Assume without loss of generality that $G_{j,k}=\{1,\dots,k_0\}$ for some $k_0\in\mathcal{K}$, subject to re-ordering of the response labels. For any $k \in \mathcal{K}$ define $C_{j,k} = \{l \in G_{j,k} : l = k \mbox{ or } (l, k) \in \mathcal{F} \mbox{ or } (k,l) \in \mathcal{F} \}$; in other words, $C_{j,k}$ is the set of indices of all coefficients fused to the coefficient for the $j$th variable in the $k$th subpopulation. Further, let $C_{j,k}^0 = \{l \in C_{j,k} : \beta^0_{j,l} = 0 \}$. Suppose there exists some $l \in C_{j,k}^0$ such that $\hat{\beta}_{j,l} \neq 0$. Then at least one of $S_{j,k,neg} = \{m \in C_{j,k}^0 : \hat{\beta}_{j,m} < 0 \}$ and $S_{j,k,pos} = \{m \in C_{j,k}^0 : \hat{\beta}_{j,m} > 0 \}$ is nonempty. If $S_{j,k,neg} \neq \varnothing$, let $\hat{\beta}_{j,k,min} = \min_{m \in S_{j,k,neg}}\hat{\beta}_{j,m}$ be the largest-magnitude negative coefficient in $S_{j,k,neg}$ and let $L_{j,k,min} = \{m \in S_{j,k,neg} : \hat{\beta}_{j,m} = \hat{\beta}_{j,k,min} \} \subset S_{j,k,neg}$. Clearly $L_{j,k,min} \neq \varnothing$. Then by combining ideas from \cite{lee2014, jenatton2011, viallon2013adaptive}, based on the KKT optimality conditions summed over the indices in $L_{j,k,min}$ we have
\begin{align*}
\boldsymbol 0 = {} &
\begin{pmatrix}
-\sum_{m \in L_{j,1,min}}{\boldsymbol X_{j}}^\top(\boldsymbol Y_{\cdot,m} - \boldsymbol X\widehat{\boldsymbol \beta}_{\cdot,m})\\
\vdots\\
-\sum_{m \in L_{j,k_0,min}}{\boldsymbol X_{j}}^\top(\boldsymbol Y_{\cdot,m} - \boldsymbol X\widehat{\boldsymbol \beta}_{\cdot,m})\\
\end{pmatrix}+\lambda_1
\begin{pmatrix}
\mathbf{v}_{j,1}\\
\vdots\\
\mathbf{v}_{j,k_0}\\
\end{pmatrix}+\lambda_2
\begin{pmatrix}
\mathbf{b}_{j,1}\\
\vdots\\
\mathbf{b}_{j,k_0}\\
\end{pmatrix} \\
= {} & \Phi + \lambda_1 \mathbf{v} + \lambda_2 \mathbf{b},
\end{align*}
where $\boldsymbol X_{j}$ is the vector of length $N$ corresponding to the $j$th covariate,
\begin{align*}
\mathbf{v}_{j,k'} = {} & N \cdot \sum_{m \in L_{j,k',min}}\sum_{G\in \mathcal{G} \mbox{ s.t. } m\in G} \lambda_{1,j,G}t_{j,m,G},
\end{align*}
where $t_{j,m,G} = \frac{\hat{\beta}_{j,m}}{||\widehat{\boldsymbol \beta}_{j,G}||_2}$ for any $j\in \{1,\dots,p\},m\in\mathcal{K}, G\in\mathcal{G}$ with $\widehat{\boldsymbol \beta}_{j,G} \neq \boldsymbol 0$ and $t_{j,m,G} \in [-1,1]$ for any $j\in \{1,\dots,p\},m\in\mathcal{K}, G\in\mathcal{G}$ with $\widehat{\boldsymbol \beta}_{j,G} = \boldsymbol 0$,
and
\begin{align}
\mathbf{b}_{j,k'} = {} & N\cdot\sum_{m \in L_{j,k',min}}\sum\limits_{\substack{l \in \mathcal{K} \mbox{ s.t. } (l,m)\in\mathcal{F} \mbox{ or } (m,l)\in\mathcal{F} \\ \mbox{ and } \beta_{j,l}^0 \neq 0}} \lambda_{2,j,m,l} \cdot r_{j,m,l} \label{eqn:_b1} \\
& + N\cdot\sum_{m \in L_{j,k',min}}\sum\limits_{\substack{l \in \mathcal{K} \mbox{ s.t. } (l,m)\in\mathcal{F} \mbox{ or } (m,l)\in\mathcal{F} \\ \mbox{ and } \beta_{j,l}^0 = 0, \hat{\beta}_{j,k',min} < \hat{\beta}_{j,l} }} \lambda_{2,j,m,l} \cdot r_{j,m,l} ,\label{eqn:_b2}
\end{align}
where $r_{j,m,l} = \mbox{sign}(\hat{\beta}_{j,m} - \hat{\beta}_{j,l})$ for any $j\in \{1,\dots,p\}, (m,l)\in \mathcal{F}$ with $\hat{\beta}_{j,m} \neq \hat{\beta}_{j,l}$ and $r_{j,m,l} \in [-1,1]$ for any $j\in \{1,\dots,p\}, (m,l)\in \mathcal{F}$ with $\hat{\beta}_{j,m} = \hat{\beta}_{j,l}$.
Due to the asymptotic normality of $\sqrt{n_k}(\widehat{\boldsymbol \beta}_{\cdot,k} - \boldsymbol \beta^0_{\cdot,k})$ and conditions (D.1) - (D.3), we have
\begin{align*}
\biggr\lVert \frac{1}{\sqrt{N}}\Phi\biggr\lVert_2 = O_p(1).
\end{align*}
In addition, by the same arguments that show (\ref{lambda_g_infty1}) and the definition of $G_{j,k'}$, we have
\[
\biggr\lVert\sqrt{N}\lambda_1\lambda_{1,j,G_{j,k'}}\frac{\widehat{\boldsymbol \beta}_{j,G_{j,k'}}}{||\widehat{\boldsymbol \beta}_{j,G_{j,k'}}||_2}\biggr\rVert_2 \xrightarrow{p} \infty
\]
and for $m \in L_{j,k',min}$ and $l \in \mathcal{K}$ s.t. $(l,m)\in\mathcal{F}$ or $(m,l)\in\mathcal{F}$ with $\beta_{j,l}^0 \neq 0$, we have $\beta_{j,l}^0 \neq \beta_{j,m}^0$ because $\beta_{j,m}^0 = 0$, and
$$\frac{\lambda_2}{\sqrt{N}} \frac{\mbox{sign}(\hat{\beta}_{j,m} - \hat{\beta}_{j,l})}{| \hat{\beta}^{MLE}_{j,m} - \hat{\beta}^{MLE}_{j,l} |^{\gamma_2}} \xrightarrow{p} 0 $$ as $N \to \infty$. Since $L_{j,k',min} \subset S_{j,k',neg}$, we have that $\frac{\hat{\beta}_{j,m}}{||\widehat{\boldsymbol \beta}_{j,G}||_2}< 0$, and by the construction of $L_{j,k',min}$ we have $\mbox{sign}(\hat{\beta}_{j,m} - \hat{\beta}_{j,l}) = -1$. Then, since $\lambda_1N^{\gamma_1/2} / \sqrt{N} \to \infty$ and $\lambda_2N^{\gamma_2/2} / \sqrt{N} \to \infty$ as $N \to \infty$, the penalty term $\frac{1}{\sqrt{N}}||\lambda_1 \mathbf{v} + \lambda_2 \mathbf{b}||_2$ diverges in probability, which contradicts $||\frac{1}{\sqrt{N}}\Phi||_2 = O_p(1)$. Thus $P(S_{j,k',neg} = \varnothing) \to 1$. However, if $S_{j,k',neg} = \varnothing$, then $S_{j,k',pos}$ must be nonempty, and carrying out the above arguments for $S_{j,k',pos}$ similarly yields a contradiction. Thus the probability that the KKT conditions hold vanishes,
\begin{align*}
P(k'\in \hat{\mathcal{J}}_{j,\cdot}) \to 0.
\end{align*}
Hence, we have established selection consistency.
We now seek to show result (\ref{linear_model_fused_selection_consistency_mis}) of the main text, i.e. that we consistently estimate all coefficients for each variable $j$ which are equal to each other and are in adjacent subpopulations to be equal to each other. Specifically, we need to show that for all $(l,m) \not\in \mathcal{E}_{j,\cdot}$, $P((l,m) \not\in \hat{\mathcal{E}}_{j,\cdot}) \to 1$ and for all $(l,m) \in \mathcal{E}_{j,\cdot}$ that $P((l,m) \not\in \hat{\mathcal{E}}_{j,\cdot}) \to 0$. If $(l,m) \not\in \mathcal{E}_{j,\cdot}$, then $\beta_{j,l}^0 \neq 0$, $\beta_{j,m}^0 \neq 0$, and $\beta_{j,l}^0 \neq \beta_{j,m}^0$. Then by our asymptotic normality result, we have that $\hat{\beta}_{j,l} - \hat{\beta}_{j,m} \xrightarrow{p} \beta_{j,l}^0 - \beta_{j,m}^0 \neq 0$ and hence $P((l,m) \not\in \hat{\mathcal{E}}_{j,\cdot}) \to 1$.
We will use an approach similar to our proof of selection consistency to show for all $(l,m) \in \mathcal{E}_{j,\cdot}$ that $P((l,m) \not\in \hat{\mathcal{E}}_{j,\cdot}) \to 0$. Let $k' \in \mathcal{K}$ be such that there exists an $m\in \mathcal{K}$ with $(k',m) \in \mathcal{E}_{j,\cdot}$. Suppose there is some $m \in \mathcal{K}$ with $(k',m) \in \mathcal{E}_{j,\cdot}$ such that $\hat{\beta}_{j,k'} \neq \hat{\beta}_{j,m}$. Similarly as before, define $\hat{c}_{j,k,min} = \min_{m \in \mathcal{K} \mbox{ with } (k,m) \in \mathcal{E}_{j,\cdot}}\hat{\beta}_{j,k} - \hat{\beta}_{j,m}$ and $$m_{j,k,min} = \operatornamewithlimits{argmin}_{m \in \mathcal{K} \mbox{ with } (k,m) \in \mathcal{E}_{j,\cdot}}\hat{\beta}_{j,k} - \hat{\beta}_{j,m}.$$ Let $L_{j,k,min} = \{ m \in \mathcal{K}: \hat{\beta}_{j,m} = \hat{\beta}_{j,m_{j,k,min}} \mbox{ and there is a connected path } (l_0, l_1, \dots, l_q) \mbox{ from } m \mbox{ to } m_{j,k,min} \mbox{ s.t. either } (l_{i}, l_{i+1}) \in \mathcal{F} \mbox{ or } (l_{i+1}, l_{i}) \in \mathcal{F} \mbox{ and } \hat{\beta}_{j,l_i} = \hat{\beta}_{j,l_{i+1}} \mbox{ for } i = 0,\dots,q-1 \}$; in other words, $L_{j,k,min}$ is the set of indices of all coefficients which have been fused to each other along a path of pairwise fusings and whose coefficients equal $\hat{\beta}_{j,m_{j,k,min}}$. Then, similar to the previous KKT conditions and with $G_{j,k}$ defined as in the proof of selection consistency, we have
\begin{align*}
\boldsymbol 0 = {} &
\begin{pmatrix}
-\sum_{m \in L_{j,1,min}}{\boldsymbol X_{j}^\top}(\boldsymbol Y_{\cdot,m} - \boldsymbol X\widehat{\boldsymbol \beta}_{\cdot,m})\\
\vdots\\
-\sum_{m \in L_{j,k_0,min}}{\boldsymbol X_{j}^\top}(\boldsymbol Y_{\cdot,m} - \boldsymbol X\widehat{\boldsymbol \beta}_{\cdot,m})\\
\end{pmatrix}+\lambda_1
\begin{pmatrix}
\mathbf{v}_{j,1}\\
\vdots\\
\mathbf{v}_{j,k_0}\\
\end{pmatrix}+\lambda_2
\begin{pmatrix}
\mathbf{b}_{j,1}\\
\vdots\\
\mathbf{b}_{j,k_0}\\
\end{pmatrix} \\
\equiv {} & \Phi + \lambda_1 \mathbf{v} + \lambda_2 \mathbf{b},
\end{align*}
where
$$\mathbf{v}_{j,k'} = N \cdot \sum_{m \in L_{j,k',min}}\sum_{G\in \mathcal{G} \mbox{ s.t. } m\in G} \lambda_{1,j,G}t_{j,m,G},$$ and
\begin{align}
\mathbf{b}_{j,k'} = {} & N\cdot\sum_{m \in L_{j,k',min}}\sum\limits_{\substack{l \in \mathcal{K} \mbox{ s.t. } (l,m)\in\mathcal{F} \mbox{ or } (m,l)\in\mathcal{F} \\ \mbox{ and } \beta_{j,l}^0 \neq \beta_{j,m}^0}} \lambda_{2,j,l,m} \cdot r_{j,m,l} \label{eqn:_bb1} \\
& + N\cdot\sum_{m \in L_{j,k',min}}\sum\limits_{\substack{l \in \mathcal{K} \mbox{ s.t. } (l,m)\in\mathcal{F} \mbox{ or } (m,l)\in\mathcal{F} \\ \mbox{ and } \beta_{j,l}^0 = \beta_{j,m}^0, \hat{\beta}_{j,m_{j,k',min}} < \hat{\beta}_{j,l} }} \lambda_{2,j,l,m} \cdot r_{j,m,l}.\label{eqn:_bb2}
\end{align}
As before, $|| N^{-1/2}\Phi ||_2 = O_p(1)$. By similar arguments as before, each term in ${N}^{-1/2}\mathbf{v}_{j,k'}$ converges to zero in probability and each term in (\ref{eqn:_bb1}) divided by $\sqrt{N}$ converges to 0 in probability, whereas the terms in (\ref{eqn:_bb2}) divided by $\sqrt{N}$ diverge in probability, and hence we arrive at a contradiction similar to that in the proof of selection consistency. Repeating these arguments for an appropriately defined $L_{j,k,max}$ yields the conclusion that for $(l,m) \in \mathcal{E}_{j,\cdot}$ we have $P((l,m) \not\in \hat{\mathcal{E}}_{j,\cdot}) \to 0$, which completes the proof.
\end{proof}
\section{Additional simulation results}
\label{sec:additional_sims}
\subsection{Additional results under settings of main text}
\label{sec:additional_sims_sub1}
In this section we present additional simulation results under the settings described in Section \ref{sec:simulation} of the main text. In particular, we show results in terms of the model error metric, balanced accuracy ($0.5($TPR $+$ TNR$)$, where TPR is the true positive rate and TNR is the true negative rate, both defined below), and computation times. TPR and TNR are defined as in \citet{rothman2010sparse}:
\begin{align*}
\text{TPR}(\widehat{\boldsymbol \beta},\boldsymbol \beta^0) = {} & \frac{\#\{(j,k): \widehat{\beta}_{j,k} \neq 0 \text{ and } \beta^0_{j,k} \neq 0\}}{\max(1, \#\{(j,k): \beta^0_{j,k} \neq 0\})} \text{ and} \\
\text{TNR}(\widehat{\boldsymbol \beta},\boldsymbol \beta^0) = {} & \frac{\#\{(j,k): \widehat{\beta}_{j,k} = 0 \text{ and } \beta^0_{j,k} = 0\}}{\max(1, \#\{(j,k): \beta^0_{j,k} = 0\})}.
\end{align*}
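As a minimal sketch (the function name is ours), the balanced accuracy built from these two quantities can be computed as:

```python
import numpy as np

def balanced_accuracy(beta_hat, beta0):
    # Average of TPR and TNR over the entries of the coefficient matrix;
    # max(1, .) guards against empty denominators as in the definitions above.
    hat_nz, true_nz = beta_hat != 0, beta0 != 0
    tpr = np.sum(hat_nz & true_nz) / max(1, np.sum(true_nz))
    tnr = np.sum(~hat_nz & ~true_nz) / max(1, np.sum(~true_nz))
    return 0.5 * (tpr + tnr)
```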
The results in terms of model error fixing $p_{\text{HS}}=0$ are displayed in Figure \ref{fig:sim_res_model_error_0hsp} and fixing $p_{\text{HS}}=0.25$ in Figure \ref{fig:sim_res_model_error_025hsp}. The results in terms of balanced accuracy fixing $p_{\text{HS}}=0$ and $p_{\text{HS}}=0.25$ are displayed in Figures \ref{fig:sim_res_balacc_0hsp} and \ref{fig:sim_res_balacc_025hsp}, respectively.
Since the setting with $p_{\text{HS}}=0$ has the most non-zero coefficients, it will in general require the greatest computational effort. As such we show computation times for this setting only. These results are displayed in Figure \ref{fig:sim_res_comptime_0hsp}.
\begin{figure}[H]
\includegraphics[width=1\textwidth]{figures/simulations/sim_res_100reps_model_error_0hsp.png}
\caption{Model errors for all methods across 100 replications of the simulation experiment holding the parameter $p_{\text{HS}}=0$ so that there is no group-level sparsity.}
\label{fig:sim_res_model_error_0hsp}
\end{figure}
\begin{figure}[H]
\includegraphics[width=1\textwidth]{figures/simulations/sim_res_100reps_model_error_025hsp.png}
\caption{Model errors for all methods across 100 replications of the simulation experiment holding the parameter $p_{\text{HS}}=0.25$ so that there is moderate group-level sparsity.}
\label{fig:sim_res_model_error_025hsp}
\end{figure}
\begin{figure}[H]
\includegraphics[width=1\textwidth]{figures/simulations/sim_res_100reps_acc_0hsp.png}
\caption{Balanced accuracy (average of TPR and TNR) for all methods across 100 replications of the simulation experiment holding the parameter $p_{\text{HS}}=0$ so that there is no group-level sparsity.}
\label{fig:sim_res_balacc_0hsp}
\end{figure}
\begin{figure}[H]
\includegraphics[width=1\textwidth]{figures/simulations/sim_res_100reps_acc_025hsp.png}
\caption{Balanced accuracy (average of TPR and TNR) for all methods across 100 replications of the simulation experiment holding the parameter $p_{\text{HS}}=0.25$ so that there is moderate group-level sparsity.}
\label{fig:sim_res_balacc_025hsp}
\end{figure}
\begin{figure}[H]
\includegraphics[width=1\textwidth]{figures/simulations/sim_res_100reps_compute_time_0hsp.png}
\caption{Computation times in log seconds for all methods across 100 replications of the simulation experiment holding the parameter $p_{\text{HS}}=0$ so that there is no group-level sparsity. The points are the average log computation times in seconds and the error bars are plus and minus 1 standard deviation.}
\label{fig:sim_res_comptime_0hsp}
\end{figure}
\subsection{Simulations with discrete covariates and ordinal outcomes}
\label{sec:additional_sims_sub2}
\subsubsection{Data generation}
For each replication of the simulation, we generate data under model \eqref{lin_model}, where the entries of $\mathbf{x}_i$ are generated as i.i.d. Bernoulli$(0.2)$ random variables. The responses follow a multivariate ordered probit model in which the observed responses $\mathbf{y}_i$ take 8 levels determined by an unobserved latent response $\mathbf{y}^*_i$, generated as $\mathbf{y}^*_i = {\boldsymbol \beta^0}^\top\mathbf{x}_i + \boldsymbol\epsilon_i$ for $i=1,\dots,N$. The $j$th observed response for the $i$th individual is then determined by
\begin{equation*}
\mathbf{y}_{ij} =
\begin{cases}
1, & \text{if}\ \mathbf{y}^*_{ij} \leq -2 \\
2, & \text{if}\ -2 < \mathbf{y}^*_{ij} \leq -1 \\
3, & \text{if}\ -1 < \mathbf{y}^*_{ij} \leq -1/2 \\
4, & \text{if}\ -1/2 < \mathbf{y}^*_{ij} \leq 0 \\
5, & \text{if}\ 0 < \mathbf{y}^*_{ij} \leq 1/2 \\
6, & \text{if}\ 1/2 < \mathbf{y}^*_{ij} \leq 1 \\
7, & \text{if}\ 1 < \mathbf{y}^*_{ij} \leq 2 \\
8, & \text{if}\ \mathbf{y}^*_{ij} > 2. \\
\end{cases}
\end{equation*}
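The piecewise thresholding above amounts to counting how many cutpoints lie strictly below the latent value; it can be implemented in one vectorized step (a minimal NumPy sketch with illustrative names):

```python
import numpy as np

# Cutpoints from the cases above; level = 1 + #{cutpoints strictly below y*}
cuts = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])

def to_ordinal(y_star):
    """Map latent responses to the 8 ordinal levels defined above."""
    # side="left" returns the number of cutpoints strictly less than y_star
    return np.searchsorted(cuts, y_star, side="left") + 1

print(to_ordinal(np.array([-2.0, -1.5, 0.2, 2.0, 2.5])))  # [1 2 5 7 8]
```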
The error vectors $\boldsymbol \epsilon_i$ for the latent responses are generated as i.i.d. multivariate normal random variables with mean vector $\boldsymbol 0$ and covariance matrix $\boldsymbol \Sigma_{\boldsymbol \epsilon} = [\sigma_{\epsilon jk}]_{j,k=1}^K$, where $\sigma_{\epsilon jk} = 4\left(0.5^{|j-k|}\right)$. The dimensionality of the outcome/error vector is $K=8$, and the outcomes form 3 groups: the first three outcomes form group 1, the fourth and fifth form group 2, and the last three form group 3. The variable effects $\boldsymbol \beta^0$ are generated in exactly the same manner as in the simulation study in the main text, which for reference is
\vspace{10pt}
\begin{small}
\begin{equation*}
\boldsymbol \beta^0 =
\begin{pmatrix}
\bovermat{$G_{1,1}$}{ \xi_{1,1} \xi^G_{1,1} \eta_{1,1} & \xi_{1,2} \xi^G_{1,1} \eta_{1,2} & \xi_{1,3} \xi^G_{1,1} \eta_{1,3} } & \bovermat{$G_{1,2}$}{\xi_{1,4} \xi^G_{1,2} \eta_{1,4} & \xi_{1,5} \xi^G_{1,2} \eta_{1,5}} & \bovermat{$G_{1,3}$}{\xi_{1,6} \xi^G_{1,3} \eta_{1,6} & \xi_{1,7} \xi^G_{1,3} \eta_{1,7} & \xi_{1,8} \xi^G_{1,3} \eta_{1,8}} \\
\vdots & & & & & & & \vdots \\
\xi_{z,1} \xi^G_{z,1} \eta_{z,1} & \xi_{z,2} \xi^G_{z,1} \eta_{z,2} & \xi_{z,3} \xi^G_{z,1} \eta_{z,3} & \xi_{z,4} \xi^G_{z,2} \eta_{z,4} & \xi_{z,5} \xi^G_{z,2} \eta_{z,5} & \xi_{z,6} \xi^G_{z,3} \eta_{z,6} & \xi_{z,7} \xi^G_{z,3} \eta_{z,7} & \xi_{z,8} \xi^G_{z,3} \eta_{z,8} \\
0 & \cdots & & & & & \cdots & 0 \\
\vdots & \ddots & & & & & & \vdots \\
0 & \cdots & & & & & \cdots & 0
\end{pmatrix},
\end{equation*}
\end{small}
where the last $p-z$ rows of $\boldsymbol \beta^0$ have all elements equal to 0. The terms $\xi_{j,k}\sim$ Bernoulli$(0.9)$ induce sparsity at the individual effect level, and the terms $\xi^G_{j,1}, \xi^G_{j,2}, \xi^G_{j,3} \sim$ Bernoulli$(1-p_{\text{HS}})$ induce sparsity at the group level. The variable effect size terms $\eta_{j,k}$ are drawn i.i.d. uniformly from $\{-1, -0.5, -0.25, -0.125, 0.125, 0.25, 0.5, 1\}$ to create both small and large effects. For each variable $j$ separately, the terms $\eta_{j,k}$ for $k$ in the same group are set all equal to each other with probability $p_{\text{GE}}/2$ and, independently, the terms $\eta_{j,k}$ for all outcomes $k=1,\dots,8$ are set all equal to $\eta_{j,1}$ with probability $p_{\text{GE}}/2$. The latter process makes the effects of some variables equal within a group and the effects of other variables equal across all outcomes.
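For concreteness, this coefficient construction might be sketched as follows (an illustrative helper with hypothetical names, not the authors' code; the groups and effect-size set follow the description above):

```python
import numpy as np

rng = np.random.default_rng(1)
GROUPS = ((0, 1, 2), (3, 4), (5, 6, 7))  # outcome groups described above
VALS = np.array([-1.0, -0.5, -0.25, -0.125, 0.125, 0.25, 0.5, 1.0])

def gen_beta0(p, z, p_hs, p_ge, K=8):
    """Generate beta^0: first z rows potentially active, last p - z rows zero."""
    beta = np.zeros((p, K))
    for j in range(z):
        eta = rng.choice(VALS, size=K)           # effect sizes
        if rng.random() < p_ge / 2:              # fuse effects within each group
            for g in GROUPS:
                eta[list(g)] = eta[g[0]]
        if rng.random() < p_ge / 2:              # independently, fuse all outcomes
            eta[:] = eta[0]
        xi = rng.binomial(1, 0.9, size=K)        # element-level sparsity
        xi_g = rng.binomial(1, 1.0 - p_hs, size=len(GROUPS))  # group-level sparsity
        for gi, g in enumerate(GROUPS):
            beta[j, list(g)] = xi_g[gi] * xi[list(g)] * eta[list(g)]
    return beta

beta0 = gen_beta0(p=50, z=25, p_hs=0.25, p_ge=0.5)
```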
We explore dimensions $p=50, 100,$ and $200$ and set $z=25, 50,$ and $50$ for each dimension setting, respectively. We explore hierarchical sparsity parameters $p_{\text{HS}} = 0, 0.25,$ and $0.5$ and fusing probabilities $p_{\text{GE}} = 0, 0.5, 0.75,$ and $0.95$; as mentioned in the main text, when $p_{\text{HS}} = 0$ there is no group-wise sparsity and thus any group lasso penalty applied is superfluous. For each replication of the simulation, we additionally generate an independent test set of size 10{,}000 for use in evaluating predictive performance.
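Putting the pieces together, one replication of this design can be generated roughly as follows (an illustrative sketch with an arbitrary seed; here $\boldsymbol \beta^0$ is a random placeholder rather than the structured construction described above):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, K = 200, 50, 8
cuts = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])

# Error covariance: sigma_{jk} = 4 * 0.5^{|j - k|}
idx = np.arange(K)
Sigma = 4.0 * 0.5 ** np.abs(idx[:, None] - idx[None, :])

X = rng.binomial(1, 0.2, size=(N, p)).astype(float)  # i.i.d. Bernoulli(0.2) covariates
beta0 = rng.normal(size=(p, K))                      # placeholder coefficients

eps = rng.multivariate_normal(np.zeros(K), Sigma, size=N)
y_star = X @ beta0 + eps                             # latent responses
y = np.searchsorted(cuts, y_star, side="left") + 1   # ordinal levels 1..8
```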
\subsubsection{Simulation results}
The RMSE results fixing the parameter at $p_{\text{HS}}=0$ are displayed in Figure \ref{fig:sim_res_rmse_0hsp_ordinal_y} and the RMSE results for $p_{\text{HS}}=0.25$ are displayed in Figure \ref{fig:sim_res_rmse_025hsp_ordinal_y}. The broad trends in both RMSE and model error closely mirror those of the main simulation study, so extended discussion is omitted.
\begin{figure}[!htpb]
\centering
\includegraphics[width=1\textwidth]{figures/simulations/sim_res_ordinal_y_binary_x_100reps_rmse_0hsp.png}
\caption{Validation RMSEs for all methods across 100 replications of the simulation experiment holding the parameter $p_{\text{HS}}=0$ so that there is no group-level sparsity. These results pertain to the simulation setting with binary covariates and an ordinal/Likert response.}
\label{fig:sim_res_rmse_0hsp_ordinal_y}
\end{figure}
\begin{figure}[!htpb]
\centering
\includegraphics[width=1\textwidth]{figures/simulations/sim_res_ordinal_y_binary_x_100reps_rmse_025hsp.png}
\caption{Validation RMSEs for all methods across 100 replications of the simulation experiment holding the parameter $p_{\text{HS}}=0.25$ so that there is moderate group-level sparsity. These results pertain to the simulation setting with binary covariates and an ordinal/Likert response.}
\label{fig:sim_res_rmse_025hsp_ordinal_y}
\end{figure}
Figure \ref{fig:sim_res_rmse_sparsity_view_ordinal_y} shows the same results as Figures \ref{fig:sim_res_rmse_0hsp_ordinal_y} and \ref{fig:sim_res_rmse_025hsp_ordinal_y}, but is re-organized to focus on the effect of the fused coefficients probability ($p_{\text{GE}}$) and the hierarchical sparsity probability ($p_{\text{HS}}$). In this figure, we fix the sample size to be 200.
We also show below simulation results in terms of balanced accuracy (the average of TPR and TNR). The results in terms of model error fixing $p_{\text{HS}}=0$ are displayed in Figure \ref{fig:sim_res_model_error_0hsp_ordinal_y} and those fixing $p_{\text{HS}}=0.25$ in Figure \ref{fig:sim_res_model_error_025hsp_ordinal_y}. The results in terms of balanced accuracy fixing $p_{\text{HS}}=0$ and $p_{\text{HS}}=0.25$ are displayed in Figures \ref{fig:sim_res_balacc_0hsp_ordinal_y} and \ref{fig:sim_res_balacc_025hsp_ordinal_y}, respectively.
\begin{figure}[!htpb]
\centering
\includegraphics[width=0.85\textwidth]{figures/simulations/sim_res_ordinal_y_binary_x_100reps_sparsity_view_rmse_n200.png}
\caption{Validation RMSEs for all methods across 100 replications of the simulation experiment fixing the sample size at $n=200$ and varying all other simulation parameters. The points are the average RMSEs across the 100 replications and the error bars are plus and minus 1 standard deviation of this average. These results pertain to the simulation setting with binary covariates and an ordinal/Likert response.}
\label{fig:sim_res_rmse_sparsity_view_ordinal_y}
\end{figure}
Since the setting with $p_{\text{HS}}=0$ has the most non-zero coefficients, it generally requires the greatest computational effort; we therefore show computation times for this setting only. These results are displayed in Figure \ref{fig:sim_res_comptime_0hsp_ordinal_y}.
\begin{figure}[H]
\includegraphics[width=1\textwidth]{figures/simulations/sim_res_ordinal_y_binary_x_100reps_model_error_0hsp.png}
\caption{Model errors for all methods across 100 replications of the simulation experiment holding the parameter $p_{\text{HS}}=0$ so that there is no group-level sparsity. These results pertain to the simulation setting with binary covariates and an ordinal/Likert response.}
\label{fig:sim_res_model_error_0hsp_ordinal_y}
\end{figure}
\begin{figure}[H]
\includegraphics[width=1\textwidth]{figures/simulations/sim_res_ordinal_y_binary_x_100reps_model_error_025hsp.png}
\caption{Model errors for all methods across 100 replications of the simulation experiment holding the parameter $p_{\text{HS}}=0.25$ so that there is moderate group-level sparsity. These results pertain to the simulation setting with binary covariates and an ordinal/Likert response.}
\label{fig:sim_res_model_error_025hsp_ordinal_y}
\end{figure}
\begin{figure}[H]
\includegraphics[width=1\textwidth]{figures/simulations/sim_res_ordinal_y_binary_x_100reps_acc_0hsp.png}
\caption{Balanced accuracy (average of TPR and TNR) for all methods across 100 replications of the simulation experiment holding the parameter $p_{\text{HS}}=0$ so that there is no group-level sparsity. These results pertain to the simulation setting with binary covariates and an ordinal/Likert response.}
\label{fig:sim_res_balacc_0hsp_ordinal_y}
\end{figure}
\begin{figure}[H]
\includegraphics[width=1\textwidth]{figures/simulations/sim_res_ordinal_y_binary_x_100reps_acc_025hsp.png}
\caption{Balanced accuracy (average of TPR and TNR) for all methods across 100 replications of the simulation experiment holding the parameter $p_{\text{HS}}=0.25$ so that there is moderate group-level sparsity. These results pertain to the simulation setting with binary covariates and an ordinal/Likert response.}
\label{fig:sim_res_balacc_025hsp_ordinal_y}
\end{figure}
\begin{figure}[H]
\includegraphics[width=1\textwidth]{figures/simulations/sim_res_ordinal_y_binary_x_100reps_compute_time_0hsp.png}
\caption{Computation times in log seconds for all methods across 100 replications of the simulation experiment holding the parameter $p_{\text{HS}}=0$ so that there is no group-level sparsity. The points are the average log computation times in seconds and the error bars are plus and minus 1 standard deviation. These results pertain to the simulation setting with binary covariates and an ordinal/Likert response.}
\label{fig:sim_res_comptime_0hsp_ordinal_y}
\end{figure}
{
\bibliographystyle{abbrvnat}
\section{Sample for first level head}\label{sec1}
xLorem ipsum dolor sit amet, consectetuer adipiscing elit.\cite{Hirt1974} Ut purus elit, vestibulum ut, placerat ac, adipiscing vitae,
felis. Curabitur dictum gravida mauris. Nam arcu libero, nonummy eget, consectetuer id, vulputate a, magna. Donec
vehicula augue eu neque. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas.
Mauris ut leo. Cras viverra metus rhoncus sem. Nulla et lectus vestibulum urna fringilla ultrices. Phasellus eu tellus
sit amet tortor gravida placerat. Integer sapien est, iaculis in, pretium quis, viverra ac, nunc. Praesent eget sem vel
leo ultrices bibendum. Aenean faucibus. Morbi dolor nulla, malesuada eu, pulvinar at, mollis ac, nulla. Curabitur
auctor semper nulla. Donec varius orci eget risus. Duis nibh mi, congue eu, accumsan eleifend, sagittis quis, diam.
Duis eget orci sit amet orci dignissim rutrum.
\begin{eqnarray}
s(nT_{s}) &= &s(t)\times \sum\limits_{n=0}^{N-1} \delta (t-nT_{s}) \xleftrightarrow{\mathrm{DFT}} S \left(\frac{m}{NT_{s}}\right) \nonumber\\
&= &\frac{1}{N} \sum\limits_{n=0}^{N-1} \sum\limits_{k=-N/2}^{N/2-1} s_{k} e^{\mathrm{j}2\pi k\Delta fnT_{s}} e^{-j\frac{2\pi}{N}mn}
\end{eqnarray}
\section{Sample for another first level head}\label{sec2}
Nulla malesuada porttitor diam. Donec felis erat, congue non, volutpat at, tincidunt tristique, libero. Vivamus viverra
fermentum felis. Donec nonummy pellentesque ante. Phasellus adipiscing semper elit. Proin fermentum massa ac
quam. Sed diam turpis, molestie vitae, placerat a, molestie nec, leo.\cite{Liska2010} Maecenas lacinia. Nam ipsum ligula, eleifend
at, accumsan nec, suscipit a, ipsum. Morbi blandit ligula feugiat magna. Nunc eleifend consequat lorem. Sed lacinia
nulla vitae enim. Pellentesque tincidunt purus vel magna. Integer non enim. Praesent euismod nunc eu purus. Donec
bibendum quam in tellus. Nullam cursus pulvinar lectus. Donec et mi. Nam vulputate metus eu enim. Vestibulum
pellentesque felis eu massa.
Example for bibliography citations cite\cite{Taylor1937}, cites\cite{Knupp1999,Kamm2000}
Quisque ullamcorper placerat ipsum. Cras nibh.\cite{Kucharik2003,Blanchard2015} Morbi vel justo vitae lacus tincidunt ultrices. Lorem ipsum dolor sit
amet, consectetuer adipiscing elit. In hac habitasse platea dictumst. Integer tempus convallis augue. Etiam facilisis.
Nunc elementum fermentum wisi. Aenean placerat. Ut imperdiet, enim sed gravida sollicitudin, felis odio placerat
quam, ac pulvinar elit purus eget enim. Nunc vitae tortor. Proin tempus nibh sit amet nisl. Vivamus quis tortor
vitae risus porta vehicula.
Fusce mauris. Vestibulum luctus nibh at lectus. Sed bibendum, nulla a faucibus semper, leo velit ultricies tellus, ac
venenatis arcu wisi vel nisl. Vestibulum diam. Aliquam pellentesque, augue quis sagittis posuere, turpis lacus congue
quam, in hendrerit risus eros eget felis. Maecenas eget erat in sapien mattis porttitor. Vestibulum porttitor. Nulla facilisi. Sed a turpis eu lacus commodo facilisis. Morbi fringilla, wisi in dignissim interdum, justo lectus sagittis dui, et
vehicula libero dui cursus dui. Mauris tempor ligula sed lacus. Duis cursus enim ut augue. Cras ac magna. Cras nulla.
Nulla egestas. Curabitur a leo. Quisque egestas wisi eget nunc. Nam feugiat lacus vel est. Curabitur consectetuer.
\begin{figure}[t]
\centerline{\includegraphics[width=342pt,height=9pc,draft]{empty}}
\caption{This is the sample figure caption.\label{fig1}}
\end{figure}
Suspendisse vel felis. Ut lorem lorem, interdum eu, tincidunt sit amet, laoreet vitae, arcu. Aenean faucibus pede eu
ante. Praesent enim elit, rutrum at, molestie non, nonummy vel, nisl. Ut lectus eros, malesuada sit amet, fermentum
eu, sodales cursus, magna. Donec eu purus. Quisque vehicula, urna sed ultricies auctor, pede lorem egestas dui, et
convallis elit erat sed nulla. Donec luctus. Curabitur et nunc. Aliquam dolor odio, commodo pretium, ultricies non,
pharetra in, velit. Integer arcu est, nonummy in, fermentum faucibus, egestas vel, odio.
Sed commodo posuere pede. Mauris ut est. Ut quis purus. Sed ac odio. Sed vehicula hendrerit sem. Duis non
odio. Morbi ut dui. Sed accumsan risus eget odio. In hac habitasse platea dictumst. Pellentesque non elit. Fusce
sed justo eu urna porta tincidunt. Mauris felis odio, sollicitudin sed, volutpat a, ornare ac, erat. Morbi quis dolor.
Donec pellentesque, erat ac sagittis semper, nunc dui lobortis purus, quis congue purus metus ultricies tellus. Proin
et quam. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos hymenaeos. Praesent sapien
turpis, fermentum vel, eleifend faucibus, vehicula eu, lacus.
\begin{figure*}
\centerline{\includegraphics[width=342pt,height=9pc,draft]{empty}}
\caption{This is the sample figure caption.\label{fig2}}
\end{figure*}
\subsection{Example for second level head}
Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Donec odio elit, dictum
in, hendrerit sit amet, egestas sed, leo. Praesent feugiat sapien aliquet odio. Integer vitae justo. Aliquam vestibulum
fringilla lorem. Sed neque lectus, consectetuer at, consectetuer sed, eleifend ac, lectus. Nulla facilisi. Pellentesque
eget lectus. Proin eu metus. Sed porttitor. In hac habitasse platea dictumst. Suspendisse eu lectus. Ut mi mi, lacinia
sit amet, placerat et, mollis vitae, dui. Sed ante tellus, tristique ut, iaculis eu, malesuada ac, dui. Mauris nibh leo,
facilisis non, adipiscing quis, ultrices a, dui.
Morbi luctus, wisi viverra faucibus pretium, nibh est placerat odio, nec commodo wisi enim eget quam. Quisque
libero justo, consectetuer a, feugiat vitae, porttitor eu, libero. Suspendisse sed mauris vitae elit sollicitudin malesuada.
Maecenas ultricies eros sit amet ante. Ut venenatis velit. Maecenas sed mi eget dui varius euismod. Phasellus aliquet
volutpat odio. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Pellentesque sit
amet pede ac sem eleifend consectetuer. Nullam elementum, urna vel imperdiet sodales, elit ipsum pharetra ligula,
ac pretium ante justo a nulla. Curabitur tristique arcu eu metus. Vestibulum lectus. Proin mauris. Proin eu nunc eu
urna hendrerit faucibus. Aliquam auctor, pede consequat laoreet varius, eros tellus scelerisque quam, pellentesque
hendrerit ipsum dolor sed augue. Nulla nec lacus.
\begin{quote}
This is an example\cite{Burton2013,Berndt2011,Kucharik2012} for quote text. This is an example for quote text. This is an example for quote text. This is an example for quote text.\cite{Breil2015} This is an example for quote text. This is an example for quote text. This is an example for quote text. This is an example for quote text. This is an example for quote text. This is an example for quote text.\cite{Barth1997} This is an example for quote text. This is an example for quote text. This is an example for quote text.
\end{quote}
\section{Sample for next first level head}\label{sec3}
\subsection{Example for another second level head}
Suspendisse vitae elit. Aliquam arcu neque, ornare in, ullamcorper quis, commodo eu, libero. Fusce sagittis erat at
erat tristique mollis. Maecenas sapien libero, molestie et, lobortis in, sodales eget, dui. Morbi ultrices rutrum lorem.
Nam elementum ullamcorper leo. Morbi dui. Aliquam sagittis. Nunc placerat. Pellentesque tristique sodales est.
Maecenas imperdiet lacinia velit. Cras non urna. Morbi eros pede, suscipit ac, varius vel, egestas non, eros. Praesent
malesuada, diam id pretium elementum, eros sem dictum tortor, vel consectetuer odio sem sed wisi.
Sed feugiat. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Ut pellentesque
augue sed urna. Vestibulum diam eros, fringilla et, consectetuer eu, nonummy id, sapien. Nullam at lectus. In sagittis
ultrices mauris. Curabitur malesuada erat sit amet massa. Fusce blandit. Aliquam erat volutpat. Aliquam euismod.
Aenean vel lectus. Nunc imperdiet justo nec dolor.
\subsection{Second level head text}
Etiam euismod. Fusce facilisis lacinia dui. Suspendisse potenti. In mi erat, cursus id, nonummy sed, ullamcorper
eget, sapien. Praesent pretium, magna in eleifend egestas, pede pede pretium lorem, quis consectetuer tortor sapien
facilisis magna. Mauris quis magna varius nulla scelerisque imperdiet. Aliquam non quam. Aliquam porttitor quam
a lacus. Praesent vel arcu ut tortor cursus volutpat. In vitae pede quis diam bibendum placerat. Fusce elementum
convallis neque. Sed dolor orci, scelerisque ac, dapibus nec, ultricies ut, mi. Duis nec dui quis leo sagittis commodo.
\subsubsection{Third level head text}
Aliquam lectus. Vivamus leo. Quisque ornare tellus ullamcorper nulla. Mauris porttitor pharetra tortor. Sed fringilla
justo sed mauris. Mauris tellus. Sed non leo. Nullam elementum, magna in cursus sodales, augue est scelerisque
sapien, venenatis congue nulla arcu et pede. Ut suscipit enim vel sapien. Donec congue. Maecenas urna mi, suscipit
in, placerat ut, vestibulum ut, massa. Fusce ultrices nulla et nisl.
Etiam ac leo a risus tristique nonummy. Donec dignissim tincidunt nulla. Vestibulum rhoncus molestie odio. Sed
lobortis, justo et pretium lobortis, mauris turpis condimentum augue, nec ultricies nibh arcu pretium enim. Nunc
purus neque, placerat id, imperdiet sed, pellentesque nec, nisl. Vestibulum imperdiet neque non sem accumsan laoreet.
In hac habitasse platea dictumst. Etiam condimentum facilisis libero. Suspendisse in elit quis nisl aliquam dapibus.
Pellentesque auctor sapien. Sed egestas sapien nec lectus. Pellentesque vel dui vel neque bibendum viverra. Aliquam
porttitor nisl nec pede. Proin mattis libero vel turpis. Donec rutrum mauris et libero. Proin euismod porta felis.
Nam lobortis, metus quis elementum commodo, nunc lectus elementum mauris, eget vulputate ligula tellus eu neque.
Vivamus eu dolor.
Nulla in ipsum. Praesent eros nulla, congue vitae, euismod ut, commodo a, wisi. Pellentesque habitant morbi
tristique senectus et netus et malesuada fames ac turpis egestas. Aenean nonummy magna non leo. Sed felis erat,
ullamcorper in, dictum non, ultricies ut, lectus. Proin vel arcu a odio lobortis euismod. Vestibulum ante ipsum primis
in faucibus orci luctus et ultrices posuere cubilia Curae; Proin ut est. Aliquam odio. Pellentesque massa turpis, cursus
eu, euismod nec, tempor congue, nulla. Duis viverra gravida mauris. Cras tincidunt. Curabitur eros ligula, varius ut,
pulvinar in, cursus faucibus, augue.
\begin{boxtext}
\section*{Example of Boxtext}%
This is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext this is sample for boxtext.
\end{boxtext}
\paragraph{Fourth level head text}
Sed feugiat. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Ut pellentesque
augue sed urna. Vestibulum diam eros, fringilla et, consectetuer eu, nonummy id, sapien. Nullam at lectus. In sagittis
ultrices mauris. Curabitur malesuada erat sit amet massa. Fusce blandit. Aliquam erat volutpat. Aliquam euismod.
Aenean vel lectus. Nunc imperdiet justo nec dolor.
Etiam euismod. Fusce facilisis lacinia dui. Suspendisse potenti. In mi erat, cursus id, nonummy sed, ullamcorper
eget, sapien. Praesent pretium, magna in eleifend egestas, pede pede pretium lorem, quis consectetuer tortor sapien
facilisis magna. Mauris quis magna varius nulla scelerisque imperdiet. Aliquam non quam. Aliquam porttitor quam
a lacus. Praesent vel arcu ut tortor cursus volutpat. In vitae pede quis diam bibendum placerat. Fusce elementum
convallis neque. Sed dolor orci, scelerisque ac, dapibus nec, ultricies ut, mi. Duis nec dui quis leo sagittis commodo.
\subparagraph{Fifth level head text}
Aliquam lectus. Vivamus leo. Quisque ornare tellus ullamcorper nulla. Mauris porttitor pharetra
tortor. Sed fringilla justo sed mauris. Mauris tellus. Sed non leo. Nullam elementum, magna in cursus sodales, augue
est scelerisque sapien, venenatis congue nulla arcu et pede. Ut suscipit enim vel sapien. Donec congue. Maecenas
urna mi, suscipit in, placerat ut, vestibulum ut, massa. Fusce ultrices nulla et nisl.
Etiam ac leo a risus tristique nonummy. Donec dignissim tincidunt nulla. Vestibulum rhoncus molestie odio. Sed
lobortis, justo et pretium lobortis, mauris turpis condimentum augue, nec ultricies nibh arcu pretium enim. Nunc
purus neque, placerat id, imperdiet sed, pellentesque nec, nisl. Vestibulum imperdiet neque non sem accumsan laoreet.
In hac habitasse platea dictumst. Etiam condimentum facilisis libero. Suspendisse in elit quis nisl aliquam dapibus.
Pellentesque auctor sapien. Sed egestas sapien nec lectus. Pellentesque vel dui vel neque bibendum viverra. Aliquam
porttitor nisl nec pede. Proin mattis libero vel turpis. Donec rutrum mauris et libero. Proin euismod porta felis.
Nam lobortis, metus quis elementum commodo, nunc lectus elementum mauris, eget vulputate ligula tellus eu neque.
Vivamus eu dolor.
in faucibus orci luctus et ultrices posuere cubilia Curae; Proin ut est. Aliquam odio. Pellentesque massa turpis, cursus
eu, euismod nec, tempor congue, nulla. Duis viverra gravida mauris. Cras tincidunt. Curabitur eros ligula, varius ut,
pulvinar in, cursus faucibus, augue.
Curabitur tellus magna, porttitor a, commodo a, commodo in, tortor. Donec interdum. Praesent scelerisque. Mae-
cenas posuere sodales odio. Vivamus metus lacus, varius quis, imperdiet quis, rhoncus a, turpis. Etiam ligula arcu,
elementum a, venenatis quis, sollicitudin sed, metus. Donec nunc pede, tincidunt in, venenatis vitae, faucibus vel,
nibh. Pellentesque wisi. Nullam malesuada. Morbi ut tellus ut pede tincidunt porta. Lorem ipsum dolor sit amet,
consectetuer adipiscing elit. Etiam congue neque id dolor.
Donec et nisl at wisi luctus bibendum. Nam interdum tellus ac libero. Sed sem justo, laoreet vitae, fringilla at,
adipiscing ut, nibh. Maecenas non sem quis tortor eleifend fermentum. Etiam id tortor ac mauris porta vulputate.
Integer porta neque vitae massa. Maecenas tempus libero a libero posuere dictum. Vestibulum ante ipsum primis in
faucibus orci luctus et ultrices posuere cubilia Curae; Aenean quis mauris sed elit commodo placerat. Class aptent
taciti sociosqu ad litora torquent per conubia nostra, per inceptos hymenaeos. Vivamus rhoncus tincidunt libero.
Etiam elementum pretium justo. Vivamus est. Morbi a tellus eget pede tristique commodo. Nulla nisl. Vestibulum
sed nisl eu sapien cursus rutrum.
Nulla non mauris vitae wisi posuere convallis. Sed eu nulla nec eros scelerisque pharetra. Nullam varius. Etiam
dignissim elementum metus. Vestibulum faucibus, metus sit amet mattis rhoncus, sapien dui laoreet odio, nec ultricies
nibh augue a enim. Fusce in ligula. Quisque at magna et nulla commodo consequat. Proin accumsan imperdiet sem.
Nunc porta. Donec feugiat mi at justo. Phasellus facilisis ipsum quis ante. In ac elit eget ipsum pharetra faucibus.
Maecenas viverra nulla in massa.
Nulla in ipsum. Praesent eros nulla, congue vitae, euismod ut, commodo a, wisi. Pellentesque habitant morbi
tristique senectus et netus et malesuada fames ac turpis egestas. Aenean nonummy magna non leo. Sed felis erat,
ullamcorper in, dictum non, ultricies ut, lectus. Proin vel arcu a odio lobortis euismod. Vestibulum ante ipsum primis
\begin{center}
\begin{table*}[t]%
\caption{This is sample table caption.\label{tab1}}
\centering
\begin{tabular*}{500pt}{@{\extracolsep\fill}lccD{.}{.}{3}c@{\extracolsep\fill}}
\toprule
&\multicolumn{2}{@{}c@{}}{\textbf{Spanned heading\tnote{1}}} & \multicolumn{2}{@{}c@{}}{\textbf{Spanned heading\tnote{2}}} \\\cmidrule{2-3}\cmidrule{4-5}
\textbf{col1 head} & \textbf{col2 head} & \textbf{col3 head} & \multicolumn{1}{@{}l@{}}{\textbf{col4 head}} & \textbf{col5 head} \\
\midrule
col1 text & col2 text & col3 text & 12.34 & col5 text\tnote{1} \\
col1 text & col2 text & col3 text & 1.62 & col5 text\tnote{2} \\
col1 text & col2 text & col3 text & 51.809 & col5 text \\
\bottomrule
\end{tabular*}
\begin{tablenotes
\item Source: Example for table source text.
\item[1] Example for a first table footnote.
\item[2] Example for a second table footnote.
\end{tablenotes}
\end{table*}
\end{center}
Fusce mauris. Vestibulum luctus nibh at lectus. Sed bibendum, nulla a faucibus semper, leo velit ultricies tellus, ac
venenatis arcu wisi vel nisl. Vestibulum diam. Aliquam pellentesque, augue quis sagittis posuere, turpis lacus congue
quam, in hendrerit risus eros eget felis. Maecenas eget erat in sapien mattis porttitor. Vestibulum porttitor. Nulla
facilisi. Sed a turpis eu lacus commodo facilisis. Morbi fringilla, wisi in dignissim interdum, justo lectus sagittis dui, et
vehicula libero dui cursus dui. Mauris tempor ligula sed lacus. Duis cursus enim ut augue. Cras ac magna. Cras nulla.
Nulla egestas. Curabitur a leo. Quisque egestas wisi eget nunc. Nam feugiat lacus vel est. Curabitur consectetuer.
\begin{center}
\begin{table}[t]%
\centering
\caption{This is sample table caption.\label{tab2}}%
\begin{tabular*}{500pt}{@{\extracolsep\fill}lcccc@{\extracolsep\fill}}
\toprule
\textbf{col1 head} & \textbf{col2 head} & \textbf{col3 head} & \textbf{col4 head} & \textbf{col5 head} \\
\midrule
col1 text & col2 text & col3 text & col4 text & col5 text\tnote{$\dagger$} \\
col1 text & col2 text & col3 text & col4 text & col5 text \\
col1 text & col2 text & col3 text & col4 text & col5 text\tnote{$\ddagger$} \\
\bottomrule
\end{tabular*}
\begin{tablenotes}
\item Source: Example for table source text.
\item[$\dagger$] Example for a first table footnote.
\item[$\ddagger$] Example for a second table footnote.
\end{tablenotes}
\end{table}
\end{center}
Below is the example\cite{Liska2010,Kucharik2003,Blanchard2015} for bulleted list. Below is the example for bulleted list. Below is the example for bulleted list. Below is the example for bulleted list. Below is the example for bulleted list. Below is the example for bulleted list\footnote{This is an example for footnote.}:
\begin{itemize}
\item bulleted list entry sample bulleted list entry.\cite{Lauritzen2011} sample list entry text.
\item bulleted list entry sample bulleted list entry. bulleted list entry sample bulleted list entry. bulleted list entry sample bulleted list entry.
\item bulleted list entry sample bulleted list entry.\cite{Klima2017} bulleted list entry sample bulleted list entry.\cite{Dukowicz2000} sample list entry text. bulleted list entry sample bulleted list entry.
\item sample list entry text. sample list entry text.
\end{itemize}
Suspendisse vel felis. Ut lorem lorem, interdum eu, tincidunt sit amet, laoreet vitae, arcu. Aenean faucibus pede eu
ante. Praesent enim elit, rutrum at, molestie non, nonummy vel, nisl. Ut lectus eros, malesuada sit amet, fermentum
eu, sodales cursus, magna. Donec eu purus. Quisque vehicula, urna sed ultricies auctor, pede lorem egestas dui, et
convallis elit erat sed nulla. Donec luctus. Curabitur et nunc. Aliquam dolor odio, commodo pretium, ultricies non,
pharetra in, velit. Integer arcu est, nonummy in, fermentum faucibus, egestas vel, odio.
Sed commodo posuere pede. Mauris ut est. Ut quis purus. Sed ac odio. Sed vehicula hendrerit sem. Duis non
odio. Morbi ut dui. Sed accumsan risus eget odio. In hac habitasse platea dictumst. Pellentesque non elit. Fusce
sed justo eu urna porta tincidunt. Mauris felis odio, sollicitudin sed, volutpat a, ornare ac, erat. Morbi quis dolor. Donec pellentesque, erat ac sagittis semper, nunc dui lobortis purus, quis congue purus metus ultricies tellus. Proin
et quam. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos hymenaeos. Praesent sapien
turpis, fermentum vel, eleifend faucibus, vehicula eu, lacus.
Below is the sample for description list. Below is the example for description list. Below is the example for description list. Below is the example for description list. Below is the example for description list. Below is the example for description list:\\[12pt]
\noindent\textbf{Description sample:}
\begin{description}
\item[first entry] description text. description text.\cite{Kucharik2011,Kucharik2014,Loubere2005} description text. description text. description text. description text. description text.
\item[second long entry] description text. description text. description text. description text. description text. description text. description text.
\item[third entry] description text. description text. description text. description text. description text.
\item[fourth entry] description text. description text.
\end{description}
\noindent\textbf{Numbered list items sample:}
\begin{enumerate}[1.]
\item First level numbered list entry. sample numbered list entry.
\item First numbered list entry. sample numbered list entry. Numbered list entry.\cite{Caramana1998} sample numbered list entry. Numbered list entry. sample numbered list entry.
\begin{enumerate}[a.]
\item Second level alphabetical list entry. Second level alphabetical list entry. Second level alphabetical list entry.\cite{Hoch2009} Second level alphabetical list entry.
\item Second level alphabetical list entry. Second level alphabetical list entry.\cite{Shashkov1996,Knupp1999}
\begin{enumerate}[ii.]
\item Third level lowercase roman numeral list entry. Third level lowercase roman numeral list entry. Third level lowercase roman numeral list entry.
\item Third level lowercase roman numeral list entry. Third level lowercase roman numeral list entry.\cite{Kamm2000}
\end{enumerate}
\item Second level alphabetical list entry. Second level alphabetical list entry.\cite{Taylor1937}
\end{enumerate}
\item First level numbered list entry. sample numbered list entry. Numbered list entry. sample numbered list entry. Numbered list entry.
\item Another first level numbered list entry. sample numbered list entry. Numbered list entry. sample numbered list entry. Numbered list entry.
\end{enumerate}
\noindent\textbf{Un-numbered list items sample:}
\begin{enumerate}[]
\item Sample unnumbered list text.
\item Sample unnumbered list text.
\item Sample unnumbered list text.
\item Sample unnumbered list text.
\end{enumerate}
\section{Examples for enunciations}\label{sec4}
\begin{theorem}[Theorem subhead]\label{thm1}
Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text. Example theorem text.
\end{theorem}
Quisque ullamcorper placerat ipsum. Cras nibh. Morbi vel justo vitae lacus tincidunt ultrices. Lorem ipsum dolor sit
amet, consectetuer adipiscing elit. In hac habitasse platea dictumst. Integer tempus convallis augue. Etiam facilisis.
Nunc elementum fermentum wisi. Aenean placerat. Ut imperdiet, enim sed gravida sollicitudin, felis odio placerat
quam, ac pulvinar elit purus eget enim. Nunc vitae tortor. Proin tempus nibh sit amet nisl. Vivamus quis tortor
vitae risus porta vehicula.
Fusce mauris. Vestibulum luctus nibh at lectus. Sed bibendum, nulla a faucibus semper, leo velit ultricies tellus, ac
venenatis arcu wisi vel nisl. Vestibulum diam. Aliquam pellentesque, augue quis sagittis posuere, turpis lacus congue
quam, in hendrerit risus eros eget felis. Maecenas eget erat in sapien mattis porttitor. Vestibulum porttitor. Nulla
facilisi. Sed a turpis eu lacus commodo facilisis. Morbi fringilla, wisi in dignissim interdum, justo lectus sagittis dui, et
vehicula libero dui cursus dui. Mauris tempor ligula sed lacus. Duis cursus enim ut augue. Cras ac magna. Cras nulla.
Nulla egestas. Curabitur a leo. Quisque egestas wisi eget nunc. Nam feugiat lacus vel est. Curabitur consectetuer.
\begin{proposition}
Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text. Example proposition text.
\end{proposition}
Nulla malesuada porttitor diam. Donec felis erat, congue non, volutpat at, tincidunt tristique, libero. Vivamus
viverra fermentum felis. Donec nonummy pellentesque ante. Phasellus adipiscing semper elit. Proin fermentum massa
ac quam. Sed diam turpis, molestie vitae, placerat a, molestie nec, leo. Maecenas lacinia. Nam ipsum ligula, eleifend
at, accumsan nec, suscipit a, ipsum. Morbi blandit ligula feugiat magna. Nunc eleifend consequat lorem. Sed lacinia
nulla vitae enim. Pellentesque tincidunt purus vel magna. Integer non enim. Praesent euismod nunc eu purus. Donec
bibendum quam in tellus. Nullam cursus pulvinar lectus. Donec et mi. Nam vulputate metus eu enim. Vestibulum
pellentesque felis eu massa.
Quisque ullamcorper placerat ipsum. Cras nibh. Morbi vel justo vitae lacus tincidunt ultrices. Lorem ipsum dolor sit
amet, consectetuer adipiscing elit. In hac habitasse platea dictumst. Integer tempus convallis augue. Etiam facilisis.
Nunc elementum fermentum wisi. Aenean placerat. Ut imperdiet, enim sed gravida sollicitudin, felis odio placerat
quam, ac pulvinar elit purus eget enim. Nunc vitae tortor. Proin tempus nibh sit amet nisl. Vivamus quis tortor
vitae risus porta vehicula.
\begin{definition}[Definition sub head]
Example definition text. Example definition text. Example definition text. Example definition text. Example definition text. Example definition text. Example definition text. Example definition text. Example definition text. Example definition text. Example definition text.
\end{definition}
Sed commodo posuere pede. Mauris ut est. Ut quis purus. Sed ac odio. Sed vehicula hendrerit sem. Duis non
odio. Morbi ut dui. Sed accumsan risus eget odio. In hac habitasse platea dictumst. Pellentesque non elit. Fusce
sed justo eu urna porta tincidunt. Mauris felis odio, sollicitudin sed, volutpat a, ornare ac, erat. Morbi quis dolor.
Donec pellentesque, erat ac sagittis semper, nunc dui lobortis purus, quis congue purus metus ultricies tellus. Proin
et quam. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos hymenaeos. Praesent sapien
turpis, fermentum vel, eleifend faucibus, vehicula eu, lacus.
Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Donec odio elit,
dictum in, hendrerit sit amet, egestas sed, leo. Praesent feugiat sapien aliquet odio. Integer vitae justo. Aliquam
vestibulum fringilla lorem. Sed neque lectus, consectetuer at, consectetuer sed, eleifend ac, lectus. Nulla facilisi.
Pellentesque eget lectus. Proin eu metus. Sed porttitor. In hac habitasse platea dictumst. Suspendisse eu lectus. Ut
mi mi, lacinia sit amet, placerat et, mollis vitae, dui. Sed ante tellus, tristique ut, iaculis eu, malesuada ac, dui.
Mauris nibh leo, facilisis non, adipiscing quis, ultrices a, dui.
\begin{proof}
Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text.
\end{proof}
Nam dui ligula, fringilla a, euismod sodales, sollicitudin vel, wisi. Morbi auctor lorem non justo. Nam lacus libero,
pretium at, lobortis vitae, ultricies et, tellus. Donec aliquet, tortor sed accumsan bibendum, erat ligula aliquet magna,
vitae ornare odio metus a mi. Morbi ac orci et nisl hendrerit mollis. Suspendisse ut massa. Cras nec ante. Pellentesque
a nulla. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Aliquam tincidunt
urna. Nulla ullamcorper vestibulum turpis. Pellentesque cursus luctus mauris.
Nulla malesuada porttitor diam. Donec felis erat, congue non, volutpat at, tincidunt tristique, libero. Vivamus
viverra fermentum felis. Donec nonummy pellentesque ante. Phasellus adipiscing semper elit. Proin fermentum massa
ac quam. Sed diam turpis, molestie vitae, placerat a, molestie nec, leo. Maecenas lacinia. Nam ipsum ligula, eleifend
at, accumsan nec, suscipit a, ipsum. Morbi blandit ligula feugiat magna. Nunc eleifend consequat lorem. Sed lacinia
nulla vitae enim. Pellentesque tincidunt purus vel magna. Integer non enim. Praesent euismod nunc eu purus. Donec
bibendum quam in tellus. Nullam cursus pulvinar lectus. Donec et mi. Nam vulputate metus eu enim. Vestibulum
pellentesque felis eu massa.
\begin{proof}[Proof of Theorem~\ref{thm1}]
Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text. Example for proof text.
\end{proof}
Etiam euismod. Fusce facilisis lacinia dui. Suspendisse potenti. In mi erat, cursus id, nonummy sed, ullamcorper
eget, sapien. Praesent pretium, magna in eleifend egestas, pede pede pretium lorem, quis consectetuer tortor sapien
facilisis magna. Mauris quis magna varius nulla scelerisque imperdiet. Aliquam non quam. Aliquam porttitor quam
a lacus. Praesent vel arcu ut tortor cursus volutpat. In vitae pede quis diam bibendum placerat. Fusce elementum
convallis neque. Sed dolor orci, scelerisque ac, dapibus nec, ultricies ut, mi. Duis nec dui quis leo sagittis commodo.
Aliquam lectus. Vivamus leo. Quisque ornare tellus ullamcorper nulla. Mauris porttitor pharetra tortor. Sed fringilla
justo sed mauris. Mauris tellus. Sed non leo. Nullam elementum, magna in cursus sodales, augue est scelerisque
sapien, venenatis congue nulla arcu et pede. Ut suscipit enim vel sapien. Donec congue. Maecenas urna mi, suscipit
in, placerat ut, vestibulum ut, massa. Fusce ultrices nulla et nisl.
Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Donec odio elit,
dictum in, hendrerit sit amet, egestas sed, leo. Praesent feugiat sapien aliquet odio. Integer vitae justo. Aliquam
vestibulum fringilla lorem. Sed neque lectus, consectetuer at, consectetuer sed, eleifend ac, lectus. Nulla facilisi.
Pellentesque eget lectus. Proin eu metus. Sed porttitor. In hac habitasse platea dictumst. Suspendisse eu lectus. Curabitur tellus magna, porttitor a, commodo a, commodo in, tortor. Donec interdum. Praesent scelerisque. Maecenas
posuere sodales odio. Vivamus metus lacus, varius quis, imperdiet quis, rhoncus a, turpis. Etiam ligula arcu,
elementum a, venenatis quis, sollicitudin sed, metus. Donec nunc pede, tincidunt in, venenatis vitae, faucibus vel,
\begin{sidewaystable}
\caption{Sideways table caption. For decimal alignment refer column 4 to 9 in tabular* preamble.\label{tab3}}%
\begin{tabular*}{\textheight}{@{\extracolsep\fill}lccD{.}{.}{4}D{.}{.}{4}D{.}{.}{4}D{.}{.}{4}D{.}{.}{4}D{.}{.}{4}@{\extracolsep\fill}}%
\toprule
& \textbf{col2 head} & \textbf{col3 head} & \multicolumn{1}{c}{\textbf{10}} &\multicolumn{1}{c}{\textbf{20}} &\multicolumn{1}{c}{\textbf{30}} &\multicolumn{1}{c}{\textbf{10}} &\multicolumn{1}{c}{\textbf{20}} &\multicolumn{1}{c}{\textbf{30}} \\
\midrule
&col2 text &col3 text &0.7568&1.0530&1.2642&0.9919&1.3541&1.6108 \\
& &col2 text &12.5701 &19.6603&25.6809&18.0689&28.4865&37.3011 \\
3 &col2 text & col3 text &0.7426&1.0393&1.2507&0.9095&1.2524&1.4958 \\
& &col3 text &12.8008&19.9620&26.0324&16.6347&26.0843&34.0765 \\
& col2 text & col3 text &0.7285&1.0257&1.2374&0.8195&1.1407&1.3691\tnote{*} \\
& & col3 text &13.0360&20.2690&26.3895&15.0812&23.4932&30.6060\tnote{$\dagger$} \\
\bottomrule
\end{tabular*}
\begin{tablenotes}
\item[*] First sideways table footnote. Sideways table footnote. Sideways table footnote. Sideways table footnote.
\item[$\dagger$] Second sideways table footnote. Sideways table footnote. Sideways table footnote. Sideways table footnote.
\end{tablenotes}
\end{sidewaystable}
\begin{sidewaysfigure}
\centerline{\includegraphics[width=542pt,height=9pc,draft]{empty}}
\caption{Sideways figure caption. Sideways figure caption. Sideways figure caption. Sideways figure caption. Sideways figure caption. Sideways figure caption.\label{fig3}}
\end{sidewaysfigure}
nibh. Pellentesque wisi.\cite{Kucharik2012} Nullam malesuada. Morbi ut tellus ut pede tincidunt porta. Lorem ipsum dolor sit amet,
consectetuer adipiscing elit. Etiam congue neque id dolor.
\begin{algorithm}
\caption{Pseudocode for our algorithm}\label{alg1}
\begin{algorithmic}
\For{each frame}
\For{water particles $f_{i}$}
\State compute fluid flow\cite{Hirt1974}
\State compute fluid--solid interaction\cite{Benson1992}
\State apply adhesion and surface tension\cite{Margolin2003}
\EndFor
\For{solid particles $s_{i}$}
\For{neighboring water particles $f_{j}$}
\State compute virtual water film \\(see Section~\ref{sec3})
\EndFor
\EndFor
\For{solid particles $s_{i}$}
\For{neighboring water particles $f_{j}$}
\State compute growth direction vector \\(see Section~\ref{sec2})
\EndFor
\EndFor
\For{solid particles $s_{i}$}
\For{neighboring water particles $f_{j}$}
\State compute $F_{\theta}$ (see Section~\ref{sec1})
\State compute $CE(s_{i},f_{j})$ \\(see Section~\ref{sec3})
\If{$CE(s_{i},f_{j})>$ glaze threshold}
\State $j$th water particle's phase $\Leftarrow$ ICE
\EndIf
\If{$CE(s_{i},f_{j})>$ icicle threshold}
\State $j$th water particle's phase $\Leftarrow$ ICE
\EndIf
\EndFor
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
Donec et nisl at wisi luctus bibendum. Nam interdum tellus ac libero. Sed sem justo, laoreet vitae, fringilla at,
adipiscing ut, nibh. Maecenas non sem quis tortor eleifend fermentum. Etiam id tortor ac mauris porta vulputate.
Integer porta neque vitae massa.\cite{Hirt1974,Benson1992} Maecenas tempus libero a libero posuere dictum. Vestibulum ante ipsum primis in
faucibus orci luctus et ultrices posuere cubilia Curae; Aenean quis mauris sed elit commodo placerat. Class aptent
taciti sociosqu ad litora torquent per conubia nostra, per inceptos hymenaeos. Vivamus rhoncus tincidunt libero.
Etiam elementum pretium justo. Vivamus est. Morbi a tellus eget pede tristique commodo.\cite{Benson1992} Nulla nisl. Vestibulum
sed nisl eu sapien cursus rutrum.
Pellentesque wisi. Nullam malesuada. Morbi ut tellus ut pede tincidunt porta. Lorem ipsum dolor sit amet,
consectetuer adipiscing elit. Etiam congue neque id dolor.
Donec et nisl at wisi luctus bibendum. Nam interdum tellus ac libero. Sed sem justo, laoreet vitae, fringilla at,
adipiscing ut, nibh. Maecenas non sem quis tortor eleifend fermentum. Etiam id tortor ac mauris porta vulputate.
Integer porta neque vitae massa. Maecenas tempus libero a libero posuere dictum. Vestibulum ante ipsum primis in
faucibus orci luctus et ultrices posuere cubilia Curae; Aenean quis mauris sed elit commodo placerat. Class aptent
taciti sociosqu ad litora torquent per conubia nostra, per inceptos hymenaeos. Vivamus rhoncus tincidunt libero.
Etiam elementum pretium justo. Vivamus est. Morbi a tellus eget pede tristique commodo. Nulla nisl. Vestibulum
sed nisl eu sapien cursus rutrum.
\begin{equation}\label{eq23}
\|\tilde{X}(k)\|^2
=\frac{\left\|\sum\limits_{i=1}^{p}\tilde{Y}_i(k)+\sum\limits_{j=1}^{q}\tilde{Z}_j(k) \right\|^2}{(p+q)^2}
\leq\frac{\sum\limits_{i=1}^{p}\left\|\tilde{Y}_i(k)\right\|^2+\sum\limits_{j=1}^{q}\left\|\tilde{Z}_j(k)\right\|^2 }{p+q}.
\end{equation}
Sed feugiat. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Ut pellentesque
augue sed urna. Vestibulum diam eros, fringilla et, consectetuer eu, nonummy id, sapien. Nullam at lectus. In sagittis
ultrices mauris. Curabitur malesuada erat sit amet massa. Fusce blandit. Aliquam erat volutpat. Aliquam euismod.
Aenean vel lectus. Nunc imperdiet justo nec dolor.
Etiam euismod. Fusce facilisis lacinia dui. Suspendisse potenti. In mi erat, cursus id, nonummy sed, ullamcorper
eget, sapien. Praesent pretium, magna in eleifend egestas, pede pede pretium lorem, quis consectetuer tortor sapien
facilisis magna. Mauris quis magna varius nulla scelerisque imperdiet. Aliquam non quam. Aliquam porttitor quam
a lacus. Praesent vel arcu ut tortor cursus volutpat. In vitae pede quis diam bibendum placerat. Fusce elementum
convallis neque. Sed dolor orci, scelerisque ac, dapibus nec, ultricies ut, mi. Duis nec dui quis leo sagittis commodo.
\begin{equation}\label{eq24}
\|\tilde{X}(k)\|^2
=\frac{\left\|\sum\limits_{i=1}^{p}\tilde{Y}_i(k)+\sum\limits_{j=1}^{q}\tilde{Z}_j(k) \right\|^2}{(p+q)^2}
\leq\frac{\sum\limits_{i=1}^{p}\left\|\tilde{Y}_i(k)\right\|^2+\sum\limits_{j=1}^{q}\left\|\tilde{Z}_j(k)\right\|^2 }{p+q}.
\end{equation}
Aliquam lectus. Vivamus leo. Quisque ornare tellus ullamcorper nulla. Mauris porttitor pharetra
tortor. Sed fringilla justo sed mauris. Mauris tellus. Sed non leo. Nullam elementum, magna in cursus sodales, augue
est scelerisque sapien, venenatis congue nulla arcu et pede. Ut suscipit enim vel sapien. Donec congue. Maecenas
urna mi, suscipit in, placerat ut, vestibulum ut, massa. Fusce ultrices nulla et nisl.
Etiam ac leo a risus tristique nonummy. Donec dignissim tincidunt nulla. Vestibulum rhoncus molestie odio. Sed
lobortis, justo et pretium lobortis, mauris turpis condimentum augue, nec ultricies nibh arcu pretium enim. Nunc
purus neque, placerat id, imperdiet sed, pellentesque nec, nisl. Vestibulum imperdiet neque non sem accumsan laoreet.
In hac habitasse platea dictumst. Etiam condimentum facilisis libero. Suspendisse in elit quis nisl aliquam dapibus.
Pellentesque auctor sapien. Sed egestas sapien nec lectus. Pellentesque vel dui vel neque bibendum viverra. Aliquam
porttitor nisl nec pede. Proin mattis libero vel turpis. Donec rutrum mauris et libero. Proin euismod porta felis.
Nam lobortis, metus quis elementum commodo, nunc lectus elementum mauris, eget vulputate ligula tellus eu neque.
Vivamus eu dolor.
\section{Conclusions}\label{sec5}
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut purus elit, vestibulum ut, placerat ac, adipiscing vitae,
felis. Curabitur dictum gravida mauris. Nam arcu libero, nonummy eget, consectetuer id, vulputate a, magna. Donec
vehicula augue eu neque. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas.
Mauris ut leo. Cras viverra metus rhoncus sem. Nulla et lectus vestibulum urna fringilla ultrices. Phasellus eu tellus
sit amet tortor gravida placerat. Integer sapien est, iaculis in, pretium quis, viverra ac, nunc. Praesent eget sem vel
leo ultrices bibendum. Aenean faucibus. Morbi dolor nulla, malesuada eu, pulvinar at, mollis ac, nulla. Curabitur
auctor semper nulla. Donec varius orci eget risus. Duis nibh mi, congue eu, accumsan eleifend, sagittis quis, diam.
Duis eget orci sit amet orci dignissim rutrum.
Nam dui ligula, fringilla a, euismod sodales, sollicitudin vel, wisi. Morbi auctor lorem non justo. Nam lacus libero,
pretium at, lobortis vitae, ultricies et, tellus. Donec aliquet, tortor sed accumsan bibendum, erat ligula aliquet magna,
vitae ornare odio metus a mi. Morbi ac orci et nisl hendrerit mollis. Suspendisse ut massa. Cras nec ante. Pellentesque
a nulla. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Aliquam tincidunt
urna. Nulla ullamcorper vestibulum turpis. Pellentesque cursus luctus mauris.
\section*{Acknowledgments}
This is acknowledgment text.\cite{Kenamond2013} Provide text here. This is acknowledgment text. Provide text here. This is acknowledgment text. Provide text here. This is acknowledgment text. Provide text here. This is acknowledgment text. Provide text here. This is acknowledgment text. Provide text here. This is acknowledgment text. Provide text here. This is acknowledgment text. Provide text here. This is acknowledgment text. Provide text here.
\subsection*{Author contributions}
This is an author contribution text. This is an author contribution text. This is an author contribution text. This is an author contribution text. This is an author contribution text.
\subsection*{Financial disclosure}
None reported.
\subsection*{Conflict of interest}
The authors declare no potential conflict of interests.
\section*{Supporting information}
The following supporting information is available as part of the online article:
\noindent
\textbf{Figure S1.}
{500{\uns}hPa geopotential anomalies for GC2C calculated against the ERA Interim reanalysis. The period is 1989--2008.}
\noindent
\textbf{Figure S2.}
{The SST anomalies for GC2C calculated against the observations (OIsst).}
\section{Introduction}
The extremely high densities reached in the cores of compact stars make those astrophysical objects the natural place for the realization of color superconductivity.
If the core matter density is high enough to guarantee the quark deconfinement, but
sufficiently low that the strange quark decouples because of its large
mass $M_s$, the quarks can condense to form a two-flavor color
superconductor (2SC) \cite{Bailin-Love}. At higher densities, for
which $M_s$ cannot be regarded as extremely large, the three-flavor
phases $gCFL$ \cite{Alford} and $CFL$ \cite{alf-raj-wil-99/537} will
appear successively with increasing density. Matter inside compact
stars should be neutral and remain in $\beta$-equilibrium. When
these conditions along with $M_s$ are taken into account, a pairing mismatch takes place which is reflected in the
dynamics of the gluons in the $gCFL$ phase (or for that matter in
the 2SC phase if the s-quark is decoupled). As a consequence, some gluon modes become
tachyonic \cite{Igor,Fukushima}, indicating that the system ground
state should be restructured.
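For orientation, the instability can be stated schematically: in the gapped 2SC phase the Meissner mass squared of the charged gluons is known to turn negative once the mismatch $\delta\mu$ between the Fermi energies of the paired quarks exceeds a fraction of the gap $\Delta$. The window quoted below follows the analysis of Ref. \cite{Igor} and is given here only as a guide,
\begin{equation}
m_{M}^{2}(\delta\mu)<0 \qquad \mathrm{for} \qquad \frac{\Delta}{\sqrt{2}}<\delta\mu<\Delta,
\end{equation}
so the tachyonic gluon modes appear already inside the gapped region of the phase diagram.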
The chromomagnetic instabilities created by the pairing mismatch have led to several interesting proposals.
Some of the most promising possibilities are a modified $CFL$-phase
with a condensate of kaons \cite{Schafer} that requires a
space-dependent condensate to remove the instability; a LOFF phase
on which the quarks pair with nonzero total momentum \cite{LOFF};
and an inhomogenous gluon condensate together with the spontaneous induction of an inmedium magnetic field \cite{Ind-Vortex}. At present
it is not clear if any of these proposals, or a combination of them, is the final solution to
the instability problem.
The phase with a ground state formed by
an inhomogeneous condensate of charged gluons and induced rotated
magnetic field was found in Ref. \cite{Ind-Vortex} in the Meissner unstable
region of the so-called gapped $2SC$ phase \cite{Huang}, but it is
expected to be also realized in the three-flavor theory. It
has the peculiarity of preserving the electromagnetic gauge
invariance $\widetilde{U}_{em}(1)$ of the color superconductor.
On the other hand, a common characteristic of dense astrophysical objects is their strong magnetic fields, especially in the so-called magnetars, whose surface fields can range between $10^{14}$ and $10^{16}$ G \cite{magnetars}.
The denser stellar interior can reach even higher values thanks to magnetic-flux conservation in the stellar medium. Maximum strengths of $10^{18}$--$10^{19}$ G are allowed
by a simple application of the virial theorem \cite{Lai}.
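The scale of this bound can be sketched by equating the magnetic energy with the gravitational binding energy of the star; the fiducial values $M\simeq M_{\odot}$ and $R\simeq10$ km used below are illustrative assumptions,
\begin{equation}
\frac{B_{max}^{2}}{8\pi}\,\frac{4\pi R^{3}}{3}\sim\frac{GM^{2}}{R}
\quad\Longrightarrow\quad
B_{max}\sim\left(\frac{6GM^{2}}{R^{4}}\right)^{1/2}\approx10^{18}\ \mathrm{G},
\end{equation}
consistent with the maximum strengths quoted above.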
As found in Ref. \cite{Vortex}, the magnetic field can also
influence the gluon dynamics, as we will discuss below. At field strengths
comparable to the square of the charged-gluon Meissner mass, a chromomagnetic instability develops, which is removed by the realization of a new phase formed
by the generation of an inhomogeneous condensate of
$\widetilde{Q}$-charged gluons. The generated gluon condensate
anti-screens the magnetic field due to the anomalous magnetic
moment of these spin-1 particles. Because of the anti-screening,
this condensate does not give a mass to the $\widetilde{Q}$
photon, but instead amplifies the applied rotated magnetic field.
This means that at such applied fields the color superconductor behaves as a
paramagnet.
The capability exhibited by color superconductors to generate or enhance a magnetic field due to the gluon-vortex antiscreening
mechanism can be of interest for
astrophysics, since compact stars with color superconducting cores could
have larger magnetic fields than neutron stars made up entirely of
nuclear matter.
In what follows, I discuss the role of gluons in generating and/or enhancing magnetic fields in color superconductors.
\section{Charged Gluons in Color Superconductivity}
In spin-zero color superconductivity, the color condensates in the CFL, as well as in the 2SC phase,
carry a non-zero electric charge. One could then expect a magnetic response of the color-superconducting medium similar to that of a conventional superconductor, where the Cooper pairs are
electrically charged and consequently the electromagnetic gauge invariance is
spontaneously broken. In that case, the photon acquires a Meissner mass and is thus able
to screen a weak magnetic field (the well-known Meissner effect). Nevertheless, in the spin-zero color superconductor the conventional electromagnetic field $A_\mu$ is not an eigenfield, but is mixed with the $8^{th}$ gluon $G^8_\mu$. Hence, as in the electroweak model after the symmetry breaking produced by the Higgs condensate, where the $SU(2)$ ($W_\mu^3$) and $U(1)$ ($B_\mu$) bosons combine to give rise to the physical eigenfields, i.e. the $Z_\mu$ boson and the photon $A_\mu$, in the color superconductor it is the linear combinations of $A_\mu$ and $G^8_\mu$ that become the in-medium physical modes \cite{alf-raj-wil}. In this case, one of the two linear combinations of $A_\mu$ and $G^8_\mu$,
\begin{equation}
\widetilde{A}_{\mu}=\cos{\theta}\,A_{\mu}-\sin{\theta}\,G^{8}_{\mu}
\ , \label{1}
\end{equation}
becomes massless and plays the role of the electromagnetic field inside the color superconducting medium (it is called the rotated electromagnetic field), while the orthogonal combination
\begin{equation}
\widetilde{G}_{\mu}^8=\sin{\theta}A_{\mu}+\cos{\theta}\,G^{8}_{\mu}
\ , \label{2}
\end{equation}
is massive. The mixing angle $\theta$ is a function of the strong $g$ and electromagnetic $e$ coupling constants, and depends on the nature of the color superconducting phase. In particular, for the CFL phase
\begin{equation}
\cos{\theta_{CFL}}=\frac{1}{\sqrt{1+\frac{4}{3}(\frac{e}{g})^2}},\,\quad \sin{\theta_{CFL}}=\frac{1}{\sqrt{1+\frac{3}{4}(\frac{g}{e})^2}}
\ , \label{angle-CFL}
\end{equation}
and for the 2SC phase it is given by
\begin{equation}
\cos{\theta_{2SC}}=\frac{1}{\sqrt{1+\frac{1}{3}(\frac{e}{g})^2}},\,\quad \sin{\theta_{2SC}}=\frac{1}{\sqrt{1+3(\frac{g}{e})^2}}
\ . \label{angle-2SC}
\end{equation}
Because of the hierarchy between the two coupling constants, in both phases the mixing angle $\theta$ is
sufficiently small ($\sin{\theta}\sim e/g\sim1/40$). Thus, the
penetrating field in the color superconductor (i.e. the rotated photon) is mostly formed by
the photon with only a small gluon admixture. Because this new massless field plays the role of the ``in-medium'' photon, the propagation of light in the color
superconductor differs from that in an electric superconductor.
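The smallness of the gluon admixture can be made explicit by expanding Eqs. (\ref{angle-CFL}) and (\ref{angle-2SC}) to leading order in $e/g$,
\begin{equation}
\sin{\theta_{CFL}}\simeq\frac{2}{\sqrt{3}}\,\frac{e}{g},\qquad
\sin{\theta_{2SC}}\simeq\frac{1}{\sqrt{3}}\,\frac{e}{g},\qquad e\ll g,
\end{equation}
so in both phases the rotated photon is dominated by the ordinary photon, with the gluon component suppressed by one power of $e/g$.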
Although in QCD gluons are electrically neutral, on the color superconducting
background they can interact with the rotated electromagnetic field, thereby acquiring
a rotated charge $\widetilde{Q}$. In units of $\widetilde{e} =
e \cos{\theta_{CFL}}$, the gluon charges in the CFL phase are
\begin{equation} \label{table-CFL}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$G_{\mu}^{1}$ & $G_{\mu}^{2}$ & $G_{\mu}^{3}$ & $G_{\mu}^{+}$ & $G_{\mu}^{-}$ & $I_{\mu}^{+}$ & $I_{\mu}^{-}$ & $\widetilde{G}_{\mu}^{8}$ \\
\hline
0 & 0 & 0 & 1 & -1 & 1 & -1 & 0 \\
\hline
\end{tabular} \ .
\end{equation}
where we have introduced the notation for the charged fields
$G_{\mu}^{\pm}=\frac{1}{\sqrt{2}}[G_{\mu}^{4}\pm iG_{\mu}^{5}]$
and $I_{\mu}^{\pm}=\frac{1}{\sqrt{2}}[G_{\mu}^{6}\pm
iG_{\mu}^{7}]$.
In the 2SC phase the charges of the gluons in units of $\widetilde{e} =
e \cos{\theta_{2SC}}$ are
\begin{equation} \label{table-2SC}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$G_{\mu}^{1}$ & $G_{\mu}^{2}$ & $G_{\mu}^{3}$ & $K_{\mu}$ & $K_{\mu}^{\dag}$ & $\widetilde{G}_{\mu}^{8}$ \\
\hline
0 & 0 & 0 & 1/2 & -1/2 & 0 \\
\hline
\end{tabular} \ .
\end{equation}
where we have used the doublet representation for the charged fields
\begin{eqnarray} \label{charged-fields-2SC}
K_{\mu} \equiv \frac{1}{\sqrt{2}}\left(
\begin{array}{c}
G_{\mu}^{(4)}-iG_{\mu}^{(5)}\\
G_{\mu}^{(6)}-iG_{\mu}^{(7)}
\end{array}
\right). \
\end{eqnarray}
The charged gluon fields
$G_{\mu}^{\pm}$, $I_{\mu}^{\pm}$, $K_{\mu}$ and $K_{\mu}^{\dag}$ can interact, through the
long-range field $\widetilde{A}_{\mu}$, with an applied external
magnetic field.
\section{Enhanced Magnetic Fields by Gluon Vortices}
An applied magnetic field can modify the color superconducting pair-condensate structure. As shown in Ref. \cite{MCFL}, the
color-superconducting properties of the three-flavor system are substantially
affected by the penetrating $\widetilde{B}$ field and as a
consequence, a new phase, called Magnetic Color Flavor Locked (MCFL)
phase \cite{MCFL}-\cite{Phases}, takes place. The magnetic field effect in color superconductivity is not restricted to the ground-state pairing; it can also impact the gluon dynamics. In what follows, I discuss how a sufficiently strong magnetic field can produce in the CFL phase a new background formed by vortices of rotated charged gluons, which as a back reaction boost the applied magnetic field \cite{Vortex}.
To investigate the effect of the applied rotated magnetic field
$\widetilde{H}$ on the charged gluons, we should start from the effective action of the charged fields $G_{\mu}^{\pm}$ (the
contribution of the other charged gluons $I_{\mu}^{\pm}$ is similar)
\begin{eqnarray}
\label{Eff-Act-2} &\Gamma_{eff}^{c}= \int d^{4}x
\{-\frac{1}{4}(\widetilde{f}_{\mu
\nu})^{2}-\frac{1}{2}|\widetilde{\Pi}_{\mu}G_{\nu}^{-}-\widetilde{\Pi}_{\nu}G_{\mu}^{-}|^{2}&
\nonumber
\\
& - [(m_{D}^{2} \delta_{\mu 0} \delta_{\nu 0}+ m_{M}^{2}
\delta_{\mu i} \delta_{\nu i})+ i\widetilde{e}\widetilde{f}_{\mu
\nu}] G_{\mu}^{+}G_{\nu}^{-}& \nonumber
\\
& +
\frac{g^2}{2}[(G^{+}_{\mu})^{2}(G^{-}_{\nu})^{2}-(G^{+}_{\mu}G^{-}_{\mu})^{2}]+\frac{1}{\lambda}G_{\mu}^{+}\widetilde{\Pi}_{\mu}\widetilde{\Pi}_{\nu}G_{\nu}^{-}
\},&
\end{eqnarray}
where $\lambda$ is the gauge fixing parameter, $\widetilde{\Pi}_{\mu}=\partial_{\mu}
-i\widetilde{e}\widetilde{A}_{\mu}$ is the covariant derivative in the presence of the external rotated field, $m_{D}$ and $m_{M}$ are the $G_{\mu}^{\pm}$-field Debye and Meissner masses respectively, and the field strength tensor of the rotated electromagnetic field is denoted by $\widetilde{f}_{\mu
\nu}=\partial_{\mu}\widetilde{A}_{\nu}-\partial_{\nu}\widetilde{A}_{\mu}$.
The corresponding Debye and Meissner masses in (\ref{Eff-Act-2})
are given by \cite{Wang}
\begin{equation}
m_{D}^{2} = m_{g}^{2} \frac{21-8 \ln 2}{18},\qquad m_{M}^{2} =
m_{g}^{2} \frac{21-8 \ln 2}{54},
\end{equation}
with $m_{g}^{2}=g^2(\mu^{2}/2\pi^{2})$. We
are neglecting the correction produced by the applied field to the
gluon Meissner masses since it will be a second order effect. The effective action (\ref{Eff-Act-2}) is characteristic of a
spin-1 charged field in a magnetic field (for details see for
instance \cite{emilio}).
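To get a feeling for the scales involved, take the purely illustrative inputs $\mu\simeq500$ MeV and $g\simeq3.5$ (these numbers are assumptions for the estimate, not fitted values). Then
\begin{equation}
m_{M}^{2}=\frac{21-8\ln 2}{54}\,\frac{g^{2}\mu^{2}}{2\pi^{2}}\sim4\times10^{-2}\ \mathrm{GeV}^{2},
\end{equation}
which, using the conversion $eB=1\ \mathrm{GeV}^{2}\leftrightarrow B\simeq1.7\times10^{20}$ G, corresponds to critical fields of order $10^{18}$--$10^{19}$ G, within the range allowed in compact star cores.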
Assuming that the penetrating magnetic field
points along the third spatial direction
($\widetilde{f}^{ext}_{12}=\widetilde{H}$), we find after diagonalizing the
mass matrix of the field components ($G^{+}_{1}, G^{+}_{2}$) in (\ref{Eff-Act-2})
\begin{equation}
\left(
\begin{array}{cc}
m_{M}^{2}& i\widetilde{e}\widetilde{H} \\
- i\widetilde{e}\widetilde{H}& m_{M}^{2}
\label{mass-matrx}
\end{array} \right) \rightarrow
\left(
\begin{array}{cc}
m_{M}^{2}+\widetilde{e}\widetilde{H}& 0 \\
0& m_{M}^{2}-\widetilde{e}\widetilde{H}
\end{array} \right),
\end{equation}
with corresponding eigenvectors ($G^{+}_{1}, G^{+}_{2}$)
$\rightarrow$ ($G,iG$). We see that the gluon anomalous magnetic moment term $i\widetilde{e}\widetilde{f}_{\mu
\nu}G_{\mu}^{-}G_{\nu}^{+} $
produces for the lowest mass mode in (\ref{mass-matrx}) a sort of
``Higgs mass'' above the critical field
$\widetilde{e}\widetilde{H}_{C}= m_{M}^2$, indicating that the
$G$-field grows exponentially with time (this is the well-known ``zero-mode
problem'' found in the presence of a magnetic field for Yang-Mills
fields \cite{zero-mode}, for the $W^{\pm}_{\mu}$ bosons in the
electroweak theory \cite{Skalozub, Olesen}, and even for
higher-spin fields in the context of string theories
\cite{porrati}). Thus, it should be expected that the instability is removed
through a restructuring of the ground
state via condensation of the field bearing the tachyonic mode (i.e. the $G$-field).
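The diagonalization can be verified directly from the trace and determinant of the Hermitian mass matrix; a short numerical sketch with illustrative values, choosing $\widetilde{e}\widetilde{H} > m_{M}^{2}$ to exhibit the tachyonic mode:

```python
import math

m2 = 0.02   # m_M^2, illustrative value (GeV^2)
a = 0.05    # e~ H~, illustrative; a > m2 puts us above the critical field

# eigenvalues of [[m2, i a], [-i a, m2]] from trace and determinant
tr = 2 * m2
det = m2**2 - a**2          # det = m2^2 - (i a)(-i a)
disc = math.sqrt(tr**2 - 4 * det)
lam_plus, lam_minus = (tr + disc) / 2, (tr - disc) / 2

# lam_minus = m_M^2 - e~H~ < 0: the tachyonic (zero-mode) instability
print(lam_plus, lam_minus)
```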
To find the $G$-field condensate and the induced magnetic field
$\widetilde{\textbf{B}}=\nabla\times\widetilde{\textbf{A}}$, with
$\widetilde{\textbf{A}}$ being the total rotated electromagnetic
potential in the condensed phase in the presence of the external
field $\widetilde{H}$, we should start from the Gibbs free energy
density $\mathcal{G}=\mathcal{F}-\widetilde{H}\widetilde{B}$, since
it depends on both $\widetilde{B}$ and $\widetilde{H}$
($\mathcal{F}$ is the system free energy density). Since,
with $\widetilde{H}$ oriented along the third direction, the instability
develops in the $(x,y)$-plane, we make the ansatz $G=G(x,y)$.
Starting from (\ref{Eff-Act-2}) in the Feynman gauge $\lambda=1$,
which in terms of the condensed field $G$ implies
$(\widetilde{\Pi}_{1}+i\widetilde{\Pi}_{2})G=0$, we have that the
Gibbs free energy in the condensed phase is
\begin{eqnarray}
\label{Gibbs} \mathcal{G}_{c} =\mathcal{F}_{n0}
-2G^{\dag}\widetilde{\Pi}^{2}
G-2(2\widetilde{e}\widetilde{B}-m_{M}^{2})|G|^{2}+2g^{2}|G|^{4}\nonumber
\\
+ \frac{1}{2}\widetilde{B}^{2}-\widetilde{H}\widetilde{B}\qquad
\qquad \qquad \qquad \qquad \qquad \qquad
\end{eqnarray}
where $\mathcal{F}_{n0}$ is the system free energy in the normal
phase ($G=0$) at zero magnetic field.
The equations of minimum for the fields $G$ and
$\widetilde{B}$ are respectively obtained from (\ref{Gibbs}) as
\begin{equation}
\label{G-Eq-1} -\widetilde{\Pi}^{2}
G+(m_{M}^{2}-2\widetilde{e}\widetilde{B})G+2g^{2}|G|^{2}G=0,
\end{equation}
and
\begin{equation}
\label{B-Eq-1} 2\widetilde{e} |G|^{2}-\widetilde{B}+\widetilde{H}=0
\end{equation}
Eqs. (\ref{G-Eq-1})-(\ref{B-Eq-1}) are analogous to the Ginzburg-Landau
equations for a conventional superconductor, with $G$ playing the role of the complex order parameter. Nevertheless, two
distinctive factors differentiate the Ginzburg-Landau equations of conventional superconductivity from (\ref{G-Eq-1})-(\ref{B-Eq-1}): the negative sign in front of the $\widetilde{B}$ field in Eq. (\ref{G-Eq-1}) and the positive sign in the first term of the LHS of Eq. (\ref{B-Eq-1}). The fact that here we get opposite signs to those appearing in conventional superconductivity is due to the different nature of the condensates in the two cases. While in conventional superconductivity the Cooper pair is a spin-0 condensate, here we have a condensate formed by spin-1 charged particles which interact through their anomalous magnetic moment with the magnetic field (i.e. the term $i\widetilde{e}\widetilde{f}_{\mu
\nu}G_{\mu}^{-}G_{\nu}^{+} $ in (\ref{Eff-Act-2}) is responsible for this difference).
The positive sign in front of the first term in (\ref{B-Eq-1}) implies that the
condensation of the gluon field makes the magnetic field in the new
phase, $\widetilde{B}$, larger than the applied field,
$\widetilde{H}$. That is, the magnetic field is boosted to a higher
value, which depends on the modulus of the $G$-condensate. Hence, the phase attained at $\widetilde{H}\geq \widetilde{H}_c$ is called paramagnetic CFL \cite{Vortex, Phases}. I want to point out that at the scale of baryon densities
typical of neutron-star cores ($\mu \simeq 400$ MeV, $g(\mu)\simeq
3$) the charged-gluon magnetic mass in the CFL phase is $m_{M}^{2}
\simeq 16\times 10^{-3}$ GeV$^{2}$. This implies a critical magnetic
field of order $\widetilde{H}_{c}\simeq 77\times 10^{16}$ G. Although this is a significantly high value, it is in the expected range
for neutron star interiors. Let us stress that in our analysis
we considered asymptotic densities where quark masses can be
neglected (CFL phase).
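A rough numerical check of these estimates (my own evaluation, using the standard conversion $eB = 1\,\mathrm{GeV}^2 \leftrightarrow B \approx 1.69\times10^{20}$ G and the simplifying assumption $\widetilde{e}\sim e$; the precise prefactor depends on the rotated-charge convention):

```python
import math

mu, g = 0.4, 3.0  # GeV and dimensionless, the values quoted in the text

mg2 = g**2 * mu**2 / (2 * math.pi**2)
mM2 = mg2 * (21 - 8 * math.log(2)) / 54   # charged-gluon Meissner mass squared

# eB = 1 GeV^2 corresponds to B ~ 1.69e20 G; take e~ ~ e for the estimate
Hc_gauss = mM2 * 1.69e20
print(mM2, Hc_gauss)
```

This reproduces the orders of magnitude quoted above: $m_M^2$ of order $10^{-2}$ GeV$^2$ and a critical field in the $10^{17}$--$10^{18}$ G range.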
To find the structure of the gluon condensate we should solve the non-linear differential equation (\ref{G-Eq-1}). However, to get an analytic solution we can consider the approximation where $\widetilde{e}\widetilde{H}\approx \widetilde{e}\widetilde{H}_c=m_M^2$ and consequently $|G| \approx 0$. In this approximation, Eq. (\ref{G-Eq-1}) can be linearized as
\begin{equation}
\label{Vortex-Eq-1} [\partial_{j}^{2}-\frac{4\pi
i}{\widetilde{\Phi}_{0}}\widetilde{H}_{C}x\partial_{y}-4\pi^{2}\frac{\widetilde{H}_{C}^{2}}{\widetilde{\Phi}_{0}^{2}}x^{2}+\frac{1}{\xi^{2}}]G=0,
\quad j=x,y
\end{equation}
where we fixed the gauge condition
$\widetilde{A}_{2}=\widetilde{H}_{C}x_{1}$, and introduced the notations
$\widetilde{\Phi}_{0}\equiv2\pi/\widetilde{e}$, and
$\xi^{2}\equiv1/(2\widetilde{e}\widetilde{H}_{C}-m_{M}^{2})=1/m_{M}^{2}$.
Eq. (\ref{Vortex-Eq-1}) is formally similar to the Abrikosov's equation in type-II conventional superconductivity \cite{Abrikosov}, with $\xi$ playing the role of the coherence length and $\widetilde{\Phi}_{0}$ of the flux quantum per vortex cell. Then, following the Abrikosov's approach, a solution of Eq. (\ref{Vortex-Eq-1}) can be found as
\begin{equation}
\label{Vortex-solution} G(x,y)=\frac{1}{\sqrt{2}\widetilde{e}\xi}e^{-\frac{x^2}{2\xi^2}}\vartheta_3(u|\tau),
\end{equation}
with $\vartheta_3(u|\tau)$ being the elliptic theta function with arguments
\begin{equation}
\label{arguments} u=-i\pi b(\frac{x}{\xi^2}+\frac{y}{b^2}), \qquad \tau=-i\pi\frac{b^2}{\xi^2}
\end{equation}
In (\ref{arguments}) the parameter $b$ is the periodicity length in the $y$-direction ($b=\Delta y$). The double periodicity of the elliptic theta function also implies a periodicity length in the $x$-direction given by $\Delta x=\widetilde{\Phi}_{0}/b\widetilde{H}_{c}$. Therefore, the magnetic flux through each
periodicity cell ($\Delta x \Delta y$) in the vortex lattice is quantized, $\label{Flux}
\widetilde{H}_c \Delta x \Delta y=\widetilde{\Phi}_{0}$, with
$\widetilde{\Phi}_{0}$ being the flux quantum per unit vortex cell.
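The quantization condition is easy to check numerically: with $\Delta x=\widetilde{\Phi}_{0}/b\widetilde{H}_{c}$, each cell carries exactly one flux quantum, and the cell area $\widetilde{\Phi}_{0}/\widetilde{H}_{c}$ shrinks as the field grows. A sketch with placeholder values for the charge, field, and periodicity length:

```python
import math

e_t = 0.1                 # rotated charge e~, placeholder value
Phi0 = 2 * math.pi / e_t  # flux quantum per vortex cell
Hc = 2.0                  # critical field, placeholder units
b = 0.7                   # chosen periodicity length Delta_y

dx = Phi0 / (b * Hc)      # periodicity length Delta_x
flux_per_cell = Hc * dx * b
area = dx * b             # unit-cell area Phi0 / Hc, decreasing with the field
print(flux_per_cell, Phi0, area)
```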
In this semi-qualitative analysis we considered Abrikosov's
ansatz of a rectangular lattice, but the actual lattice configuration should be carefully
determined from a minimum-energy analysis. For the rectangular
lattice, the area of the unit cell is $A=\Delta x \Delta
y=\widetilde{\Phi}_{0} /\widetilde{H}_c$, which decreases with
$\widetilde{H}$. In summary, to remove the instability,
a magnetic field applied along the $z$-direction becomes
inhomogeneous in the $(x,y)$-plane, since it depends on the
condensate $G$, which has a periodic structure on that plane, while it remains
homogeneous in the $z$-direction; it therefore forms a fluxoid along
the $z$-direction that creates a nontrivial topology on the
perpendicular plane. From (\ref{B-Eq-1}) we see that the magnetic
field can go from a minimum value $\widetilde{H}$ to a maximum at
the core of the fluxoid that depends on the amplitude of the gluon
condensate determined by the mismatch between the applied field and
the gluon Meissner mass.
Summarizing, at low $\widetilde{H}$ field, the CFL phase behaves as an
insulator, and the $\widetilde{H}$ field just penetrates through it.
At sufficiently high $\widetilde{H}$, the condensation of $G$
is triggered inducing the formation of a lattice of magnetic flux
tubes that breaks the translational and remaining rotational
symmetries. It should be noticed that, contrary to the situation in conventional type-II
superconductivity, where the applied field penetrates only through the
flux tubes and with a smaller strength, the vortex state in the color superconductor has the peculiarity that outside the flux tubes the
applied field $\widetilde{H}$ totally penetrates the sample, while
inside the tubes the magnetic field becomes larger than
$\widetilde{H}$.
\section{Induced Magnetic Field in Color Superconductivity with Chromomagnetic Instabilities}
As is known, chromomagnetic instabilities can be present in color
superconductivity even in
the absence of an external magnetic field. As found in Ref. \cite{Igor}, the charged gluons become tachyonic in the 2SC phase at moderate densities after imposing electrical and color neutrality and
$\beta$-equilibrium conditions, while in the CFL phase the corresponding charged gluons become tachyonic under the same constraints at densities where the $s$-quark mass $M_{s}$ becomes a relevant parameter \cite{Fukushima}. Here, I will discuss how the chromomagnetic instabilities in the 2SC system in the absence of an applied magnetic field can be removed by the spontaneous generation of a condensate of inhomogeneous gluons that simultaneously induces a rotated magnetic field. It is expected that a similar mechanism can also be found for the unstable CFL phase, although this remains a pending task.
In the gapped $2SC$ phase the solution of the neutrality conditions
$\partial \Omega_{0}/\partial \mu_{i}=0$, with $\mu_{i}=\mu_{e},
\mu_{8}, \mu_{3}$, and gap equation $\partial \Omega_{0}/\partial
\Delta=0$, for the effective potential $\Omega_{0}$ in the
mean-field approximation, led to $\mu_{3}=0$, and nonzero values of
$\mu_{e}$, and $\mu_{8}$, satisfying $\mu_{8}\ll \mu_{e}<\mu$ for a
wide range of parameters \cite{Huang}. Here $\mu$ is the quark
chemical potential, $\mu_{e}$ the electric chemical potential, and
the "chemical potentials" $\mu_{8}$ and $\mu_{3}$ are strictly
speaking condensates of the time components of gauge fields,
$\mu_{8}= (\sqrt{3}g/2)\langle G_{0}^{(8)}\rangle$ and $\mu_{3}=
(g/2)\langle G_{0}^{(3)}\rangle$. The nonzero values of the chemical
potentials produce a mismatch between the Fermi spheres of the quark
Cooper pairs, $\delta \mu=\mu_{e}/2$.
The gapped 2SC turned out to be unstable once the gauge fields
$\{G_{\mu}^{(1)}, G_{\mu}^{(2)}, G_{\mu}^{(3)},$ $K_{\mu},
K_{\mu}^{\dag}, \widetilde{G}^{8}_{\mu}, \widetilde{A}_{\mu}\}$ were
taken into consideration. As shown in Ref. \cite{Igor}, the gluons
$G_{\mu}^{(1,2,3)}$ are massless, the in-medium $8^{th}$-gluon
has a positive Meissner squared mass, and the $K$-gluon doublet (\ref{charged-fields-2SC})
has a Meissner mass that becomes imaginary for $\Delta > \delta
\mu > \Delta/\sqrt{2}$, signaling the onset of an unstable ground
state. The mass of the in-medium (rotated) electromagnetic field $\widetilde{A}_{\mu}$
is zero, which is consistent with the remaining unbroken
$\widetilde{U}(1)_{em}$ group. In what follows, we will find a stable ground-state solution in the
gapped 2SC phase near the critical point
$\delta\mu_{c}=\Delta/\sqrt{2}$.
To investigate the condensation phenomenon triggered by the
tachyonic modes of the charged gluons, we can restrict our analysis
to the gauge sector of the mean-field effective action
that depends on the charged gluon fields
and rotated electromagnetic field. For a static solution, one only
needs the leading contribution of the polarization operators in the
infrared limit ($p_{0}=0, |\overrightarrow{p}|\rightarrow 0$). Going
through the critical point $\delta \mu = \delta \mu_{c}$ the order
parameter $m_{M}^{2}$ changes sign and varies continuously,
indicating a second-order phase transition. Hence, near the
transition point both the gluon condensate and the induced magnetic
field should be very small and we can neglect their contribution to
the fermion quasiparticle propagators. Under these conditions, the gauge
sector of the effective action can be written as
\begin{eqnarray}
\label{Eff-Act-3} \Gamma_{eff}^{g}& = & \int d^{4}x
\{-\frac{1}{4}(\widetilde{f}_{\mu
\nu})^{2}-\frac{1}{2}|\widetilde{\Pi}_{\mu}K_{\nu}-\widetilde{\Pi}_{\nu}K_{\mu}|^{2}
\nonumber
\\
& - & [m_{D}^{2} \delta_{\mu 0} \delta_{\nu 0}+
(m_{M}^{2}-\mu_{8}^{2}) \delta_{\mu i} \delta_{\nu i}+
i\widetilde{q}\widetilde{f}_{\mu \nu}] K_{\mu}K_{\nu}^{\dag}\qquad
\nonumber
\\
& + &
\frac{g^2}{2}[(K_{\mu})^{2}(K^{\dag}_{\nu})^{2}-(K_{\mu}K^{\dag}_{\mu})^{2}]
+\frac{1}{\lambda}K^{\dag}_{\mu}\widetilde{\Pi}_{\mu}\widetilde{\Pi}_{\nu}K_{\nu}\}\qquad
\end{eqnarray}
where we introduced the 't Hooft gauge with
gauge-fixing parameter $\lambda$, and the notations
$\widetilde{\Pi}_{\mu}=\partial_{\mu}
-i\widetilde{q}\widetilde{A}_{\mu}$ and $\widetilde{f}_{\mu
\nu}=\partial_{\mu}\widetilde{A}_{\nu}-\partial_{\nu}\widetilde{A}_{\mu}$.
In the unstable region close to the transition point the Debye
square mass is
$m^{2}_{D}=\frac{2\alpha_{s}\overline{\mu}^{2}}{\pi}[1+(\frac{2\delta\mu^{2}}{\Delta^{2}})]$,
and the magnetic mass $m_{M}$ is imaginary. As usual in theories
with zero-component gauge-field condensates \cite{Chemical-Pot}, $\mu_{8}$
gives rise to a tachyonic mass contribution, although it has a very
small value, since it is parametrically suppressed by the quark
chemical potential $\mu_{8}\sim \Delta^{2}/\mu$ \cite{Igor}.
At this point let us consider for a moment that we have an external
rotated magnetic field $\widetilde{H}$. In this case the effective
action (\ref{Eff-Act-3}) becomes that of a spin-1 charged field in a
magnetic field (\ref{Eff-Act-2}). If we assume
$\delta \mu < \delta \mu_{c}$, the ground state is stable
considering $m^{2}_{M}-\mu^{2}_{8}> 0$. As discussed in the previous Section, when
$\widetilde{q}\widetilde{H}\geq \widetilde{q}\widetilde{H}_{c}=
(m^{2}_{M}-\mu_{8}^{2})$, the magnetic mass of one of the charged
gluon modes becomes imaginary due to the anomalous magnetic moment
term $i\widetilde{q}\widetilde{f}_{\mu \nu}K_{\mu}K_{\nu}^{\dag}$.
This field-induced instability triggers the formation of a
gluon-vortex state characterized by the antiscreening of the
magnetic field.
Now, let us go back to the situation of interest in the present
analysis, that is, a system with no external magnetic field. As
discussed above, if $\delta \mu > \delta \mu_{c}$, the
magnetic mass of the $K$ gluons becomes imaginary:
$(m^{2}_{M}-\mu_{8}^{2})< 0$. Borrowing from the experience gained
in the case with external magnetic field, we expect that this
instability should also be removed through the spontaneous
generation of an inhomogeneous gluon condensate $\langle K_{i}
\rangle$ capable of inducing a rotated magnetic field thanks to the
anomalous magnetic moment of the spin-1 charged particles. Having
this in mind, we propose the following ansatz
\begin{eqnarray} \label{condensate}
\langle K_{\mu} \rangle \equiv \frac{1}{\sqrt{2}}\left(
\begin{array}{cc}
\overline{G}_{\mu}\\
0
\end{array}
\right) \ , \quad \overline{G}_{\mu} \equiv
\overline{G}(x,y)(0,1,-i,0),
\end{eqnarray}
where we took advantage of the $SU(2)_{c}$ symmetry to write the
$\langle K_{i} \rangle$-doublet with only one nonzero component.
Since in this ansatz the inhomogeneity of the gluon condensate is
taken in the $(x,y)$-plane, it follows that the corresponding
induced magnetic field will be aligned in the perpendicular
direction, i.e. along the $z$-axis, $\langle\widetilde{f}_{12}
\rangle=\widetilde{B}$. The part of the free energy density that
depends on the gauge-field condensates,
$\mathcal{F}_{g}=\mathcal{F}-\mathcal{F}_{n0}$, with
$\mathcal{F}_{n0}=-\Gamma_{0}=\Omega_{0}$ denoting the system
free-energy density in the absence of the gauge-field condensate
($\overline{G}=0, \widetilde{B}=0$), is found, after fixing the
gauge parameter to $\lambda=1$ and using the ansatz
(\ref{condensate}) in (\ref{Eff-Act-3}), to be
\begin{eqnarray}
\label{free-energy} \mathcal{F}_{g} =
\frac{\widetilde{B}^{2}}{2}-2\overline{G}^{\ast}\widetilde{\Pi}^{2}
\overline{G}+2g^{2}|\overline{G}|^{4}\qquad\qquad \nonumber
\\
-2[2\widetilde{q}\widetilde{B}+(\mu_{3}+\mu_{8})^2+m_{M}^{2}]|\overline{G}|^{2}\qquad
\end{eqnarray}
From the neutrality condition for the $3^{rd}$-color charge it is
found that $\mu_{3}=\mu_{8}$. The fact that $\mu_{3}$ gets a finite
value just after the critical point $m^{2}_{M}-\mu^{2}_{8} = 0$ is
an indication of a first-order phase transition, but since $\mu_{8}$
is parametrically suppressed in the gapped phase by the quark
chemical potential $\mu_{8}\sim \Delta^{2}/\mu$ \cite{Igor}, it will
be a weakly first-order phase transition. Henceforth, we will
consider that $\mu_{3}=\mu_{8}$ in (\ref{free-energy}), and work
close to the transition point $\delta \mu \geq \delta \mu_{c}$
which is the point where $m_{M}^{2}$ continuously changes sign to a
negative value. For very small, negative values of $m_{M}^{2}$, the
gluon condensate and the induced magnetic field should be very small
too, thereby facilitating the calculations.
Minimizing
(\ref{free-energy}) with respect to $\overline{G}^{*}$ gives
\begin{equation}
\label{G-Eq} -\widetilde{\Pi}^{2}
\overline{G}-(2\widetilde{q}\widetilde{B}+| m_M^{2}|)\overline{G}+2g^{2}|\overline{G}|^{2}\overline{G}=0
\end{equation}
Eq. (\ref{G-Eq}) is a highly non-linear differential equation that
can be exactly solved only by numerical methods. Nevertheless, we
can take advantage of working near the transition point, where we
can find an approximate solution that will lead to a
qualitative understanding of the new condensate phase. With this
aim, and guided by the experience with the external field case, where the
solution is always such that the kinetic term
$|\widetilde{\Pi}_{\mu}K_{\nu}-\widetilde{\Pi}_{\nu}K_{\mu}|^{2}$ is
approximately zero near the transition point, we will consider that
when $\delta\mu \simeq \delta\mu_{c}$ our solution will satisfy the
same condition. Hence, we will look for a minimum solution
satisfying
\begin{equation}
\label{G-Eq-2} \widetilde{\Pi}^{2}
\overline{G}+\widetilde{q}\widetilde{B}\overline{G} \simeq 0.
\end{equation}
With the help of (\ref{G-Eq-2}) one can show that the minimum
equation for the induced field $\widetilde{B}$ takes the form
\begin{equation}
\label{B-Eq} 2\widetilde{q} |\overline{G}|^{2}-\widetilde{B}\simeq 0
\end{equation}
The relative sign between the two terms in Eq. (\ref{B-Eq}) implies
that for $|\overline{G}|\neq 0$ a magnetic field $\widetilde{B}$ is
induced. The origin of that possibility can be traced back to the
anomalous magnetic moment term in the action of the charged gluons.
This effect has the same physical root as the paramagnetism found in
Ref. \cite{Vortex} and discussed in the previous Section, where, contrary to what occurs in conventional
superconductivity, the resultant in-medium field $\widetilde{B}$
becomes stronger than the applied field $\widetilde{H}$ that triggers
the instability.
Using the minimum equations (\ref{G-Eq}) and (\ref{B-Eq}) in
(\ref{free-energy}), we obtain the condensation free-energy density
\begin{equation}
\label{F-min} \mathcal{\overline{F}}_{g} \simeq
-2(g^2-\widetilde{q}^2)|\overline{G}|^4
\end{equation}
The hierarchy between the strong ($g$) and the electromagnetic
($\widetilde{q}$) couplings implies that
$\mathcal{\overline{F}}_{g}< 0$. Therefore, although the induction
of a magnetic field always costs energy (as can be seen from the
positive first term in (\ref{free-energy})), the field interaction
with the gluon anomalous magnetic moment produces a sufficiently
large negative contribution to compensate for the increase.
Consequently, as seen from (\ref{F-min}), the net effect of the
proposed condensates is to decrease the system free-energy density.
It follows from Eqs.(\ref{G-Eq})-(\ref{B-Eq}) that near the phase
transition point the inhomogeneity of the condensate solution should
be a small but nonzero correction to a leading constant term
\begin{equation}
\label{Constraint-3} |\overline{G}|^{2}\simeq
\Lambda_{g/\widetilde{q}} |m_{M}^{2}|/2\widetilde{q}^{2} +\mathcal
{O}(m_{M}^4)f(x,y),
\end{equation}
\begin{equation}
\label{Constraint-2} \widetilde{q}\widetilde{B}\simeq
\Lambda_{g/\widetilde{q}} |m_{M}^{2}|+\mathcal {O}(m_{M}^4)g(x,y).
\end{equation}
with
$\Lambda_{g/\widetilde{q}}\equiv(g^{2}/\widetilde{q}^{2}-1)^{-1}$.
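These leading-order expressions can be cross-checked against the minimum equations: substituting $|\overline{G}|^{2}$ from (\ref{Constraint-3}) into (\ref{B-Eq}) reproduces (\ref{Constraint-2}), and, using $\widetilde{\Pi}^{2}\overline{G}\simeq-\widetilde{q}\widetilde{B}\,\overline{G}$ from (\ref{G-Eq-2}), Eq. (\ref{G-Eq}) reduces to $2g^{2}|\overline{G}|^{2}=\widetilde{q}\widetilde{B}+|m_{M}^{2}|$. A numerical sketch with illustrative couplings:

```python
g, qt = 3.0, 0.3    # strong and rotated electromagnetic couplings (illustrative)
mM2 = 1e-3          # |m_M^2| just past the transition point (illustrative)

Lam = 1.0 / ((g / qt)**2 - 1.0)      # Lambda_{g/q~}
G2 = Lam * mM2 / (2 * qt**2)         # leading |G|^2, Eq. (Constraint-3)
B = 2 * qt * G2                      # induced field from Eq. (B-Eq)

# Eq. (Constraint-2): q~ B = Lambda |m_M^2|
print(qt * B, Lam * mM2)
# reduced Eq. (G-Eq) with Pi^2 G = -q~ B G: 2 g^2 |G|^2 = q~ B + |m_M^2|
print(2 * g**2 * G2, qt * B + mM2)
```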
The explicit form of the inhomogeneity can be found from
(\ref{G-Eq-2}), which can be written in polar coordinates as
\begin{equation}
\label{Vortex-Eq} [\frac{1}{r}\partial_{r}(r\partial_{r})
+\frac{1}{r^2}\partial_{\theta}^2+\frac{1}{\xi^2}(1-i\partial_{\theta})-\frac{r^{2}}{4\xi^4}]G(r,\theta)=0
\end{equation}
In the above equation we approximated the rotated magnetic field by
its leading term in (\ref{Constraint-2}), used the symmetric gauge
$\widetilde{A}_{i}=-(\widetilde{B}/2)\epsilon_{ij}x_{j}$, and
introduced the characteristic length $\xi^{2}\equiv
1/[\Lambda_{g/\widetilde{q}}|m^2_M|]$. Using just the
leading contribution of $\widetilde{B}$ in (\ref{Vortex-Eq}) is a consistent
approximation if we simultaneously drop the $\frac{r^{2}}{4\xi^4}$
term and restrict the solution to the domain $r\ll\xi$. Notice that
this domain is indeed a large region because near the critical point
$\xi\gg1$. The most symmetric solution of Eq.(\ref{Vortex-Eq}) is
the one that preserves the SO(2) symmetry in ($x,y$) plane. Hence,
proposing a solution of the form $G(r,\theta)\sim R(r)e^{i\chi}$,
with $\chi$ a constant phase, the equation for $R(r)$ reduces to
$[r\partial_{r}(r\partial_{r}) +\frac{r^{2}}{\xi^2}]R(r)=0$. It is
solved by the Bessel function of the first kind $J_0(r/\xi)$. Then,
the gluon condensate can be written as
$G(r)=(1/\sqrt{2}\widetilde{q}\xi)J_0(r/\xi)\exp i\chi$, which is
consistent with (\ref{Constraint-3}), as in the domain of validity
of this solution ($r\ll \xi$) the series can be approximated by its
first terms. Accordingly, the modulus of the condensate square
is given by
\begin{equation}
\label{Amplitude}
|\overline{G}|^2\simeq\frac{1}{2\widetilde{q}^2\xi^2}-\frac{r^2}{4\widetilde{q}^2\xi^4}
\end{equation}
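As a numerical aside (not in the original text), one can verify that $J_{0}(r/\xi)$ indeed solves the reduced radial equation and that its small-$r$ expansion reproduces (\ref{Amplitude}); the values of $\xi$ and $\widetilde{q}$ below are illustrative:

```python
import math

xi, qt = 5.0, 0.3   # coherence length and rotated charge, illustrative

def J0(x, terms=25):
    # Bessel function of the first kind, order 0, from its power series
    return sum((-1)**k * (x / 2)**(2 * k) / math.factorial(k)**2
               for k in range(terms))

def radial_op(r, h=1e-3):
    # r d/dr (r dR/dr) + (r^2/xi^2) R  via central finite differences
    R = lambda s: J0(s / xi)
    R1 = (R(r + h) - R(r - h)) / (2 * h)
    R2 = (R(r + h) - 2 * R(r) + R(r - h)) / h**2
    return r**2 * R2 + r * R1 + (r**2 / xi**2) * R(r)

print(radial_op(1.7))   # ~ 0: J0(r/xi) solves the equation

# small-r expansion of |G|^2 = J0(r/xi)^2 / (2 q~^2 xi^2), Eq. (Amplitude)
r = 0.2
G2 = J0(r / xi)**2 / (2 * qt**2 * xi**2)
approx = 1 / (2 * qt**2 * xi**2) - r**2 / (4 * qt**2 * xi**4)
print(G2, approx)
```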
The improved solution for $\widetilde{B}$ is found by substituting
(\ref{Amplitude}) back into (\ref{B-Eq}). The induced field
$\widetilde{B}$ is homogeneous in the $z$-direction and
inhomogeneous in the $(x,y)$-plane.
One may wonder whether this inhomogeneous gluon condensate forms a
vortex state. To answer this question we can compare our results
with the case with external magnetic field reported in \cite{Vortex} and discussed in the previous Section. For this
we should notice that the mathematical problem we have just solved
is formally similar to that where the instability is induced by a
weak external field. This would be the situation when the $2SC$
system approaches the transition point from the stable side (real
magnetic mass) and the external magnetic field is slightly larger
than the positive mass square, $\widetilde{q}\widetilde{H}\simeq
\widetilde{q}\widetilde{H}_{c}= m_{M}^{2} \ll 1$. We know that at large
$m_{M}^{2}$ the condensate solution is a crystalline array of
vortex cells with cell size $\sim \xi\ll1$. At smaller $m_{M}^{2}$
the lattice structure should remain, but with a larger separation
between cells, since in this case $\xi\gg1$. However, the use of a
linear approximation to solve the equations only allows one
to explore the solution inside an individual cell ($r\ll\xi$). This
is the same limitation we have in the linear approach followed in
this Section. Therefore, we expect that once the nonlinear
equations are solved, the vortex arrangement will become explicitly
manifest.
\section{General Remarks and Astrophysical Considerations}
As we have shown in this talk, in color superconductors magnetic fields tend to be reinforced and even generated. Thus, if a color superconducting state is realized in the core of neutron stars, it should have implications for the magnetic properties of such compact objects. Taking into account that at the moderately high densities that can exist in the cores of neutron stars the charged-gluon Meissner masses decrease from values of order $m_{g}$ to values close to zero, and that any magnetic field in that medium with $\widetilde{H}>m_M^2$ will produce the spontaneous generation of vortices of charged gluons that enhance the existing magnetic field, it becomes natural to expect that color superconductivity may play a role in the generation of the large magnetic fields observed in stellar objects such as magnetars.
Moreover, if one accepts the
standard explanation of the origin of the magnetar's large magnetic
field, namely a
magnetohydrodynamic dynamo mechanism that amplifies a seed magnetic
field in a rapidly rotating protoneutron star, then the star should have a spin period $<3$ ms.
Nevertheless, this mechanism cannot explain all the features of the
supernova remnants surrounding these objects
\cite{magnetar-criticism}.
Now, the gluon
vortices we found in Ref. \cite{Vortex} and discussed here could produce a magnetic field of the
order of the Meissner mass scale ($m_{g}$), that is, with a magnitude in the range $\sim 10^{16}-10^{17}$ G. As
discussed in Refs. \cite{Phases}, the possibility of generating a
magnetic field of such a large magnitude in the core of a compact
star, without relying on a magnetohydrodynamic effect, can be an
interesting alternative to address the main criticism
\cite{magnetar-criticism} of the observational conundrum of the
standard magnetar paradigm \cite{magnetars}. On the other hand,
a mechanism that associates the existence of high magnetic
fields with color superconductivity at moderate densities can serve to single out
magnetars as the most probable astronomical objects for the
realization of this highly dense state of matter.
\bigskip
This work was supported in part by the Office of Nuclear Physics of the Department of Energy under contract DE-FG02-09ER41599.
\section{Introduction}
\footnote{``© 20XX IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.''}
Artificial Intelligence (AI) is becoming a vital part of many real-life applications such as healthcare, logistics, surveillance, and industry. Classification is a common concept in the AI field, and it can be considered one of the building blocks for higher-level reasoning and decision-making systems. With the increasing demand for robust and reliable algorithms, especially in safety-critical systems \cite{Aravantinos2020}, the research community has been trying to define the robustness \cite{Fawzi2017}, evaluation metrics \cite{Carlini2017}, and solutions to satisfy the requirements of a robust classifier \cite{Xu2018}.
State-of-the-art classifiers have achieved high accuracy when dealing with simple datasets such as MNIST \cite{LeCun1998} or challenging ones like ImageNet \cite{Deng2009}. However, several open questions remain on how a classifier should behave in circumstances not covered by the training set, for example, when unseen classes appear (out-of-distribution samples) or when inputs are distorted in a way not seen during training. In such cases, a classifier might generate faulty results. It thus becomes clear that accuracy alone is not enough for measuring the performance of classifiers; generalization to new environments and robustness to environmental changes should also be considered.
In their review, Zhang \textit{et al.} argue that unexpected faulty results in a pattern recognition algorithm can happen due to the violation of any of the following assumptions \cite{Zhang2020}: (1) the Closed-World Assumption, where the data is assumed to have a fixed number of classes, all covered in the training set; (2) the Independent and Identically Distributed Assumption, where the classes in the data are assumed to be independent of each other and to have the same distribution; and (3) the Clean and Big Data Assumption, where the data is assumed to be well-labeled and large enough for training the network properly. While fulfilling these assumptions is feasible in a controlled environment, real-world applications rarely satisfy them completely.
This paper deals with the violation of the Closed-World Assumption. While a straightforward way of dealing with this issue is introducing a \textit{trash} class in the training set to cover all out-of-distribution samples, their complex distribution makes it impossible to train an effective classifier in most cases. Moreover, different distortions might make a sample hard to classify, even for a human. While there is ongoing research on adversarial attacks, the phenomenon is not that common in the everyday use of AI algorithms. In a typical case, distortions usually fall into these categories: blur, noise, occlusion, and digital alteration of the image.
Recent works try to solve this issue by formulating it as the reliable rejection of predictions when the network is uncertain. The rejection option, also known as selective classification, is a central concept in different classification applications when dealing with uncertainty (e.g., optical character recognition). Previous works rely on using a specific type of activation function in the classifier, such as OpenMax \cite{Bendale2016}, temperature scaling for SoftMax \cite{Liang2017}, and Sigmoid \cite{Shu2017}; on modifying the loss function, such as the discrepancy loss \cite{Yu2019}; or on using more resources, such as an ensemble of multiple classifiers \cite{Lakshminarayanan2016} and Monte-Carlo dropout \cite{Gal2016}. Moreover, some also suggest a combination of different ideas \cite{Vyas2018}.
The proposed method is a rejection option based on hypothesis testing with probabilistic networks. By utilizing a Z-test over the distribution of outcomes from a probabilistic network, it is possible to estimate the statistical significance of a given output and reject insignificant results. The main difference between the proposed method and previous state-of-the-art methods such as ODIN \cite{Liang2017} is that it is not restricted to particular architectures. The proposed method can be applied to any architecture and improves performance when dealing with violation of the Closed-World Assumption by not limiting the network to a specific loss function or activation function.
In their work, Geifman and El-Yaniv show that Softmax Response (SR) is a simple yet top-performing selective classification method \cite{Geifman2017} that outperforms Monte Carlo (MC) dropout. However, this paper shows that, if utilized correctly, a probabilistic network can easily outperform the SR method, making it a viable choice.
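For concreteness, the SR baseline amounts to thresholding the maximum softmax probability; a minimal sketch, where the function names and the threshold of 0.8 are illustrative and not taken from the cited work:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_response(logits, threshold=0.8):
    """Accept the argmax class only if its softmax probability
    clears the threshold; otherwise reject (abstain)."""
    probs = softmax(logits)
    pred = max(range(len(probs)), key=lambda i: probs[i])
    return pred, probs[pred] >= threshold

print(softmax_response([5.0, 0.0, 0.0]))  # confident -> (0, True)
print(softmax_response([1.0, 1.0, 0.0]))  # ambiguous -> (0, False)
```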
The main contributions of this paper are as follows:
\begin{itemize}
\item Proposing a simple yet effective method (rejection based on the statistical significance of probabilistic network output) to deal with the violation of the Closed-World Assumption in classifiers. This method can be utilized in any modern network architecture by changing the structure into a probabilistic model, which is possible with the help of existing tools.
\item Testing the proposed method on state-of-the-art architecture (ResNet) with a diverse set of distortions (blur, noise, gamma correction, and occlusion) to show the effectiveness of the proposed method over the baseline SR method.
\end{itemize}
The rest of this paper is structured as follows. The details of the proposed method are presented in Section~\ref{method}. Then Section~\ref{experiments} deals with the experiments and their results. Finally, Section~\ref{conclusion} concludes the work and suggests potential research directions for the future.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=1\linewidth]{Method.pdf}
\end{center}
\caption{The structure of the proposed method. (1) Pass the test image through the probabilistic classifier. (2) Repeat it $n$ times and store the class scores for each inference. (3) Calculate the mean and standard deviation for each class. (4) Find the maximum mean value and label it as potential output. (5) Run two-sample Z-tests between the potential output and all other classes, then store the Z-scores. (6) Compare Z-scores with the threshold value to decide the acceptance or rejection of the potential output.}
\label{figure_1}
\end{figure*}
\section{Methods}
\label{method}
\subsection{Proposed method}
The proposed method requires a fully trained probabilistic classifier to work. Due to the nature of the probabilistic classifier, each inference will produce a slightly different class score. To exploit this fact, the test image is first passed through the network $n$ times to obtain the mean and standard deviation values for each class. The maximum mean value among the classes is then chosen as the potential output. Next, two-sample Z-tests \cite{TwoSampleZTest} are performed between the potential output and all other classes to assess the statistical significance of their differences. Finally, if the Z-scores indicate a significant difference, the potential output is accepted as correct. Algorithm \ref{algorithm_1} summarizes these steps and Figure \ref{figure_1} shows the structure of the proposed method.
\begin{algorithm}
\caption{Selective Probabilistic Classifier}
\begin{algorithmic}[1]
\REQUIRE A trained probabilistic classifier.
\STATE run the image through the classifier $n$ times
\STATE find mean ($\mu$) and std. dev. ($\sigma$) for all $N$ classes
\STATE find the class with the highest mean value ($\mathit{c}_M$)
\FOR {$i \in 1,2,\ldots,N;\ i\neq M$}
\STATE run the two-sample Z-test between $\mathit{c}_M$ and $\mathit{c}_i$
\STATE store the $\mathit{Z}_i$ score
\ENDFOR
\IF{$\mathit{Z}_i > z$ for all $i \in 1,2,\ldots,N;\ i\neq M$}
\STATE set output to be $\mathit{c}_M$
\ELSE
\STATE set output to be Reject
\ENDIF
\end{algorithmic}
\textbf{return} output value for the image
\label{algorithm_1}
\end{algorithm}
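The procedure in Algorithm~\ref{algorithm_1} can be sketched in plain Python (a minimal illustration, not the paper's code; the \texttt{classifier} callable, the number of runs \texttt{n}, and the critical value \texttt{z\_threshold} are placeholders):

```python
import statistics

def selective_predict(classifier, image, n=30, z_threshold=1.645):
    """Run a stochastic classifier n times; accept the top class only if its
    mean score differs significantly (two-sample Z-test) from every other
    class's mean score, otherwise reject (return None)."""
    # Collect n stochastic forward passes: runs[i][c] is class c's score on run i.
    runs = [classifier(image) for _ in range(n)]
    num_classes = len(runs[0])
    scores = [[run[c] for run in runs] for c in range(num_classes)]
    mu = [statistics.mean(s) for s in scores]
    sigma = [statistics.stdev(s) for s in scores]
    top = max(range(num_classes), key=lambda c: mu[c])  # potential output
    for c in range(num_classes):
        if c == top:
            continue
        denom = ((sigma[top] ** 2) / n + (sigma[c] ** 2) / n) ** 0.5
        z = (mu[top] - mu[c]) / denom if denom > 0 else float("inf")
        if z <= z_threshold:
            return None  # reject: difference not statistically significant
    return top
```

A rejection threshold of 1.645 corresponds to roughly 95\% one-sided significance; in practice this value is a tunable operating point, exactly as in the ROC analysis later in the paper.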
\subsubsection{Probabilistic Neural Network}
A probabilistic neural network (PNN) classifier \cite{Mohebali2020} uses a stochastic weighting system. The classifier can allocate a class to an input sample by utilizing the posterior probability, which means each run of the network will result in a slightly different output. The amount of variation across several runs is the key to network certainty: a low standard deviation across runs indicates a higher level of certainty for the network, making standard deviation a suitable metric for selective classification. The convolution layers for such a network are constructed based on Flipout \cite{Wen2018}. An implementation is available in the TensorFlow Probability library \cite{TFP}.
\subsubsection{Two-Sample Z-test}
A Z-test \cite{Ztest} refers to any statistical test that can approximate the distribution of the hypothesis by a normal distribution. The two-sample Z-test can be used to test whether two samples are similar to each other or not. The formula is as follows:
\[
Z = \frac{\mu_1-\mu_2-\Delta}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}}
\]
\noindent where $\mu_1$ and $\mu_2$ are the mean values of the two samples, $\Delta$ is the hypothesized difference between the means (0 if testing for equality), $\sigma_1$ and $\sigma_2$ are the standard deviations, and $n_1$ and $n_2$ are the sample sizes (which are equal in this paper).
By setting the null hypothesis as $H_0: \mu_1 = \mu_2$, the alternative hypothesis as $H_a: \mu_1 \neq \mu_2$, and $\Delta$ to zero, the two-sample Z-test will result in a score that indicates the likelihood of two samples being different from each other. A higher score means more likelihood for the samples to be different. This score can be compared to critical values to get the percentage for the likelihood of a significant difference between samples. These values can be found in any Z-Score table, such as \cite{ZScoreTable}.
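As a concrete check of the formula, the statistic is straightforward to compute (the numeric values here are hypothetical, chosen only for illustration):

```python
import math

def two_sample_z(mu1, mu2, sigma1, sigma2, n1, n2, delta=0.0):
    """Two-sample Z statistic for H0: mu1 - mu2 = delta."""
    return (mu1 - mu2 - delta) / math.sqrt(sigma1 ** 2 / n1 + sigma2 ** 2 / n2)

# Equal sample sizes, as in this paper's setting (n1 = n2 = n):
z = two_sample_z(mu1=0.80, mu2=0.60, sigma1=0.10, sigma2=0.10, n1=25, n2=25)
# sqrt(0.01/25 + 0.01/25) ~ 0.0283, so z ~ 7.07, far above the 1.96
# critical value for 95% two-sided significance.
```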
\subsection{Softmax Response}
The SR method applies a threshold directly to the output of the Softmax layer from a deep neural network (DNN) and rejects any output below the threshold. This method was chosen as the baseline for comparison. While the method is simple, it is a known top-performer \cite{Geifman2017}.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=0.9\linewidth]{Distortions.pdf}
\end{center}
\caption{Distortions on the image. (A) Original image. (B) Motion blur. (C) Frosted glass blur. (D) Gaussian blur. (E) Noise. (F) Gamma darkening. (G) Gamma lightening. (H) Occlusion.}
\label{figure_2}
\end{figure*}
\section{Experiments and Results}
\label{experiments}
The proposed method was evaluated with the well-known ResNet-18 network configuration \cite{He2016}. The goal is to show its performance when the Closed-World Assumption is violated. A comparison with SR was made to evaluate the performance. This comparison was based on the area under the Receiver Operating Characteristic (ROC) curve, which is threshold-independent. Both networks were trained from scratch with the same initial configuration for a fair comparison. Other state-of-the-art methods were not included in the comparison, as they either require a specific model structure, limiting their use cases, or were only tested on simpler datasets such as MNIST.
Multiple experiments were conducted to represent various violations of Closed-World Assumption in real-world applications. In these experiments, the classifiers are trained with a limited number of classes and presented with both in-distribution and out-of-distribution samples. Further experiments also distort the test samples to see the effect of each distortion on the performance. The chosen distortions were based on \cite{Kamann2020}. Before discussing the results, the dataset and distortions are explained in detail.
\subsection{Dataset and Distortions}
\textbf{\em COCO ---}
COCO \cite{Lin2014} was chosen as the first dataset. It is a complex dataset in which objects have various sizes, qualities, and overlaps. Since COCO is originally an object detection dataset, all instances were manually extracted from it based on the bounding boxes provided with the dataset. The data was separated into four classes: Human, Vehicle (containing 4-wheeled vehicles), Animal (containing 4-legged animals), and Background (image patches with no overlapping objects). 260k images were used for training, excluding the animal class, and 40k images were used as test samples. The reason for using a commonly known object detection dataset for classification is to obtain a more realistic dataset in which the samples are not filtered by an external source.
\noindent\textbf{\em CIFAR ---}
CIFAR \cite{Krizhevsky2009} was chosen as the second dataset. It is a more straightforward dataset where objects are classified into ten categories. The dataset is small yet sufficiently complex, which makes it an ideal case for testing algorithms. 40k images were used for training, excluding the automobile and truck classes, and 10k images were used as test samples.
\noindent\textbf{\em Blur ---}
Three different blurring algorithms were used to see their effect on the performance: Motion blur, Frosted glass blur, and Gaussian blur. The effect of each algorithm can be seen in Figure \ref{figure_2}(B-D). Each algorithm simulates a situation where the object is not sharp (e.g., the camera is not focused, the object is moving, or a semi-transparent object is between the camera and the object).
\noindent\textbf{\em Noise ---}
Two different types of noise were added to test samples to see their effect on the performance: Gaussian noise and Salt-and-pepper noise. The effect of one of these noise types can be seen in Figure \ref{figure_2}(E). They simulate a situation where the input is noisy due to internal or external sources.
\noindent\textbf{\em Gamma Correction ---}
The gamma correction technique was applied to each test sample to see the illumination effect on the performance. The effect of darkening and lightening can be seen in Figure \ref{figure_2}(F-G). It will simulate a situation where the amount of light in the environment changes due to environmental factors.
\noindent\textbf{\em Occlusion ---}
A black patch was added to test samples to see the effect of occlusion on the performance. The effect of occlusion can be seen in Figure \ref{figure_2}(H). It will simulate a situation where the object is partially visible.
\begin{table*}[!ht]
\small
\begin{center}
\begin{tabular}{|c||c||c||c c c|| c c||c c||c|}
\hline
\multirow{3}{*}{Dataset} & \multirow{3}{*}{Method} & Out & \multicolumn{3}{c||}{Blur} & \multicolumn{2}{c||}{Noise} & \multicolumn{2}{c||}{Gamma correction} & \multirow{3}{*}{Occlusion}\\
\cline{4-10}
& & of & \multirow{2}{*}{Motion} & Frosted & \multirow{2}{*}{Gaussian} & \multirow{2}{*}{Gaussian} & \multirow{2}{*}{S\&P} & \multirow{2}{*}{Darkening} & \multirow{2}{*}{Lightening} & \\
& & Distribution & & glass & & & & & & \\
\hline\hline
\multirow{2}{*}{COCO} & Proposed & \textbf{0.65} & \textbf{0.34} & \textbf{0.25} & \textbf{0.38} & \textbf{0.22} & \textbf{0.21} & \textbf{0.16} & \textbf{0.17} & \textbf{0.23}\\
& SR & 0.29 & 0.20 & 0.18 & 0.22 & 0.14 & 0.09 & 0.04 & 0.05 & 0.06 \\
\hline\hline
\multirow{2}{*}{CIFAR} & Proposed & \textbf{0.89} & \textbf{0.50} & \textbf{0.50} & \textbf{0.59} & \textbf{0.38} & \textbf{0.39} & \textbf{0.37} & \textbf{0.42} & \textbf{0.48}\\
& SR & 0.52 & 0.44 & 0.34 & 0.47 & 0.35 & 0.25 & 0.22 & 0.26 & 0.31 \\
\hline
\end{tabular}
\end{center}
\caption{AUROC values of the tests. The values are calculated by taking the area under the ROC where the algorithm could produce a valid response.}
\label{table_1}
\end{table*}
\subsection{Results}
\label{results}
After conducting the tests, ROC curves were used to examine the effectiveness of each algorithm. These curves can be seen in Figure \ref{figure_3}-\ref{figure_4}. In general, each point in the ROC curve corresponds to a specific threshold value for the rejection option. If this threshold is set to 0, the algorithm will not reject any input, resulting in a 100\% False Positive Rate (FPR). More extreme threshold values result in a lower FPR and True Positive Rate (TPR) until, at some point, the algorithm rejects all inputs (0\% FPR and TPR). The SR method only reaches this point when the threshold is set to 1: since the output of Softmax cannot exceed 1, every output is rejected. However, because a DNN typically assigns high scores to its output, this limit prevents the SR algorithm from reaching low FPR values in practice. The proposed method, on the other hand, does not rely on the range of the Softmax output, as it compares the significance of each class against the others. This limit causes a significant gap in the AUROC scores, as seen in Table \ref{table_1}.
Judging by the ROC curves, both algorithms start at roughly the same point. This means that both algorithms behave similarly as plain classifiers. However, the SR method has the aforementioned drawback, which is visible in the curves.
For the comparison to be fair, it must be threshold-independent. Thus, the area under the ROC curve (AUROC) was used for comparison. The area calculation must take the limitations of both algorithms into account. While the SR algorithm can reach 0\% FPR, this only happens when the threshold is at 1 or higher, where the output is no longer valid. Thus, only the area under the valid parts of the ROC curve was used in calculating the AUROC values. These values can be found in Table \ref{table_1}.
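The AUROC restricted to the valid operating region can be computed with a simple trapezoidal rule over the measured (FPR, TPR) pairs (an illustrative sketch; the operating points themselves come from sweeping the rejection threshold):

```python
def auroc(points):
    """Area under an ROC curve given (fpr, tpr) points from a threshold sweep.
    Only points reachable with valid thresholds should be passed in, so a
    method that cannot reach low FPR is credited with less area."""
    pts = sorted(points)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0  # trapezoid between operating points
    return area
```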
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.9\linewidth]{PNNROC.pdf}
\end{center}
\caption{ROC curves for proposed method in COCO test. The worst performance of each category was chosen to present the tolerance of the algorithm to extreme distortions.}
\label{figure_3}
\end{figure}
While every distortion reduces the performance, gamma correction has the most significant effect, and blurring has an almost negligible effect on the proposed method. This can be explained by how the classifier works: changing the intensity of the image makes it harder to separate the objects from the Background class. That being said, the proposed algorithm still outperforms the SR method by a notable margin.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.9\linewidth]{DNNROC.pdf}
\end{center}
\caption{ROC curves for SR method in COCO test. The worst performance of each category was chosen to present the tolerance of the algorithm to extreme distortions.}
\label{figure_4}
\end{figure}
\section{Conclusion}
\label{conclusion}
In this paper, we propose a rejection option for probabilistic classifiers based on Z-test analysis. This method addresses the violation of the Closed-World Assumption. With a probabilistic classifier, each run results in a slightly different class score. A Z-test analyzes the mean and standard deviation values over multiple runs to estimate network certainty and filter out uncertain results.
We designed several experiments based on a well-known network configuration (ResNet-18) and datasets (COCO and CIFAR). A comparison with the SR method was made based on AUROC as a threshold-independent metric. The proposed method was shown to outperform the SR method by a notable margin while remaining robust in the presence of distortions. This makes the proposed method more suitable for safety-critical applications.
In the future, we will consider expanding the method by merging it with existing tools such as ODIN and covering more complex systems such as object detection.
\bibliographystyle{IEEEbib}
\section{Introduction}
Index coding studies the optimal coding and rate requirements in a network with a single sender and multiple receivers connected by a noiseless broadcast link. In index coding, the sender is assumed to have $m$ messages and each receiver \emph{knows} a subset of the $m$ messages and \emph{wants} a specific subset of messages it does not know. Index coding~\cite{baryossefbirk11,neelytehranizhang13,ongholim16,arbabjolfaeikim18trends}, its secure variant~\cite{dauskachekchee12,ongvellambiyeohklieweryuan2016}, and its connection to network coding~\cite{rouayhebsprintsongeorghiades10,effrosrouayheblangberg15,ongkliwervellambiyeoh-it-2018} have received significant research interest.
Recently, a variant of index coding, known as \emph{pliable index coding}, was introduced~\cite{brahmafragouli15}. In this pliable variant, each receiver is posited not to want a specific subset, but instead to want \emph{any} subset of $t$ messages it does not already know. This variant is natural in applications where the receiver is flexible in which unknown message it wants to receive. One such example is when multiple receivers are each seeking a picture of an object on the internet, but each does not particularly care for a specific picture of the object.
Brahma and Fragouli~\cite{brahmafragouli15} focused solely on linear codes for pliable-index-coding problems and established that the problem is NP-hard. Further, they showed that if each receiver has at least $s_\text{min}$ and at most $s_\text{max}$ messages, then $\min \{s_\text{max} + t, m - s_\text{min}\}$ is an upper bound on the minimum number of transmissions (referred to as the minimum \textit{broadcast rate}) required for each receiver to obtain $t$ additional messages. This bound is tight if the sender knows only the number of messages (as opposed to the exact message sets) that each receiver knows. For a general setup, they
approximated the order of dependence of the minimum broadcast rate in the limit as the number of messages and receivers grows.
Song and Fragouli~\cite{songfragouli18} also restricted their analysis to linear codes to show that if receivers having every possible strict subset of the message set are present, then the sender needs to send all $m$ messages. This result of all receivers being present was further strengthened by Liu and Tuninetti~\cite{liutuninetti17} to all (including non-linear) pliable index codes.
Liu and Tuninetti~\cite{liutuninetti17} defined a class of complete-$S$ pliable-index-coding problems, where $S\subseteq \{0,\ldots, m-1\}$ is a parameter.
Given $S\subseteq \{0,\ldots, m-1\}$, a complete-$S$ problem consists of \textit{all} $\binom{m}{i}$ receivers each having a different combination of $i$ messages, for every $i \in S$. Focusing on the case that each receiver requires only one message (that is, $t=1$), they showed that the minimum broadcast rate for any linear or non-linear pliable index code for a complete-$S$ problem with $S=\{0,\ldots, m-1\}\setminus\{s_\text{min} ,\ldots, s_\text{max}\}$ is precisely $|S|=m-s_\text{max}+s_\text{min}-1$.
They later \cite{liutuninetti18} derived tight lower bounds based on decoding chains and maximum acyclic induced subgraphs to show that the minimum broadcast rate for any linear or non-linear pliable index code for a complete-$S$ problem with $S=\{s_\text{min} ,\ldots, s_\text{max}\}$ and $t=1$ equals $\min \{s_\text{max} + 1, m - s_\text{min}\}$.
Existing results on exact minimum broadcast rates were established for certain complete-$S$ problems.
This paper considers the case $t=1$ and problems that are in general not complete-$S$. We identify a new technique based on \textit{absent receivers} to construct \emph{decoding chains with skipped messages} to derive lower bounds on the minimum broadcast rate for all pliable-index-coding problems that are applicable to both linear and non-linear codes. When combined with matching transmission codes (upper bounds), we establish precisely the minimum broadcast rate for several classes of pliable-index-coding problems.
We also introduce a notion of \emph{critical} set of receivers; such sets of receivers are maximal in the sense that the addition of any new receiver (that is absent) strictly increases the broadcast rate. In other words, each critical set of receivers is a maximal set of receivers supported by a fixed broadcast rate.
\subsection{Problem Formulation}
We use the following notation: $\mathbb{Z}^+$ denotes the set of natural numbers, $[a:b] := \{a, a+1, \dotsc, b\}$ for $a,b\in\mathbb{Z}^+$ such that $a < b$, and $X_S = (X_i: i \in S)$ for some ordered set $S$.
Consider a sender having $m \in \mathbb{Z}^+$ messages, denoted by $X_{[1 : m]} = (X_1, \dots, X_m)$. Each message $X_i \in \mathbb{F}_q$ is independently and uniformly distributed over a finite field of size~$q$. There are $n$ receivers having distinct subsets of messages, which we refer to as side information. Each receiver is labelled by its side information, i.e., the receiver that has messages $X_{H}$, for some $H \subsetneq [1 : m]$, will be referred to as receiver $H$ or receiver with side information $H$. The aim of the pliable-index-coding problem is to devise an encoding scheme for the sender and a decoding scheme for each receiver satisfying pliable recovery of a message at each receiver.
Without loss of generality, the side-information sets of the receivers are distinct; all receivers having the same side information can be satisfied if and only if (iff) any one of them can be satisfied. Also, no receiver has side information $H = [1:m]$ because this receiver cannot be satisfied. So, there can be at most $2^m-1$ receivers present in the problem. A pliable index coding problem is thus defined uniquely by $m$ and the set $\mathbb{U} \subseteq 2^{[1:m]} \setminus \{[1:m]\}$ of all receiver side information present in the problem. Lastly,
any receiver that is not present, i.e., receiver~$H \in 2^{[1:m]} \setminus (\{[1:m]\} \cup \mathbb{U})$, is said to be \textit{absent}.
\begin{example}
Let $m= 3$, and $\mathbb U=\{\emptyset, \{1\}, \{2\}, \{1,2\}, \{2,3\}\}$. Then, the receivers $\{3\}$ and $\{1,3\}$ are absent.
\end{example}
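The absent receivers of a small instance can be enumerated directly (a quick sanity check in Python; the encoding of side-information sets as frozensets is ours):

```python
from itertools import combinations

def absent_receivers(m, present):
    """All side-information sets H strictly contained in [1:m] that are
    not among the present receivers."""
    ground = range(1, m + 1)
    all_proper = [frozenset(c) for k in range(m) for c in combinations(ground, k)]
    return {h for h in all_proper if h not in present}

# The receiver set of Example 1:
U = {frozenset(), frozenset({1}), frozenset({2}),
     frozenset({1, 2}), frozenset({2, 3})}
# For m = 3 this yields exactly {3} and {1, 3}, matching the example.
```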
Given a pliable-index-coding problem with $m$ messages and receivers $\mathbb U$, a pliable index code of length $\ell \in \mathbb{Z}^+$ consists of
\begin{itemize}
\item an encoding function of the sender, $\mathsf{E}: \mathbb{F}_q^m \rightarrow \mathbb{F}_q^\ell$; and
\item for each receiver $H\in\mathbb{U}$, a decoding function $\mathsf{D}_H: \mathbb{F}_q^\ell \times \mathbb{F}_q^{|H|} \rightarrow \mathbb{F}_q$, such that $\mathsf{D}_H(\mathsf{E}(X_{[1:m]}),X_H) = X_i$, for some $i \in [1:m]\setminus H$.
\end{itemize}
The above formulation requires the decoding of only one message at each receiver, similar to that in Liu and Tuninetti~\cite{liutuninetti17,liutuninetti18}. Lastly, the aim is to find the optimal broadcast rate for a particular field size $q$, denoted by $\beta_q := \min_{\mathsf{E}, \{\mathsf{D}\}} \ell$, and the optimal broadcast rate over all $q$, denoted by $\beta := \inf_q \beta_q$.
\begin{remark}
All results in this paper will be derived for $\beta_q$ for all $q \in \mathbb{Z}^+$. Consequently, the results are also valid for $\beta$.
\end{remark}
\section{New Lower Bounds}
\subsection{An optimal-rate expression}
We first express a lower bound on the optimal broadcast rate for pliable index coding in terms of an equivalence notion for index coding.
Define \textit{decoding choice} $D$ as follows:
\begin{equation}
D: \mathbb{U} \rightarrow [1:m], \text{ such that } D(H) \in [1:m] \setminus H.
\end{equation}
Here, $D(H)$ is the message decoded by receiver $H$.
Let $\mathcal{P}_{m,\mathbb{U}}$ denote a pliable-index-coding problem with $m$ messages and a set of receivers $\mathbb{U}$. For a fixed decoding choice~$D$ for $\mathcal{P}_{m,\mathbb{U}}$, denote the problem by $\mathcal{P}_{m,\mathbb{U},D}$. This means any code for $\mathcal{P}_{m,\mathbb{U},D}$ is a pliable index code for $\mathcal{P}_{m,\mathbb{U}}$ with the restriction that $\mathsf{D}_H(\mathsf{E}(X_{[1:m]}),X_H) = X_{D(H)}$ for all $H \in \mathbb{U}$, and vice versa.
With an abuse of notation, let the optimal broadcast rate for $\mathcal{P}_{m,\mathbb{U},D}$ be $\beta_q(\mathcal{P}_{m,\mathbb{U},D})$.
We can establish the following:
\begin{lemma}\label{lemma:equivalence}
$\beta_q (\mathcal{P}_{m,\mathbb{U}}) = \min_D \beta_q(\mathcal{P}_{m,\mathbb{U},D}).$
\end{lemma}
\begin{IEEEproof}
Clearly,
$\beta_q (\mathcal{P}_{m,\mathbb{U}}) \leq \beta_q(\mathcal{P}_{m,\mathbb{U},D})$
for all $D$ because any code for $\mathcal{P}_{m,\mathbb{U},D}$ is a code for $\mathcal{P}_{m,\mathbb{U}}$. Since the inequality must be tight for at least one $D$, we have Lemma~\ref{lemma:equivalence}.
\end{IEEEproof}
$\mathcal{P}_{m,\mathbb{U},D}$ is in fact an index-coding problem~\cite{neelytehranizhang13,ongholim16,arbabjolfaeikim18trends},
with a message set~$X_{[1:m]}$ and a receiver set~$\mathbb{U}$, where each receiver~$H \in \mathbb{U}$ has $X_H$ and wants $X_{D(H)}$.
From Lemma~\ref{lemma:equivalence}, $\beta_q (\mathcal{P}_{m,\mathbb{U}})$ can be obtained by evaluating the optimal broadcast rates $\beta_q(\mathcal{P}_{m,\mathbb{U},D})$ of index-coding problems~$\mathcal{P}_{m,\mathbb{U},D}$ for all $D$. However, the optimal broadcast rate for index coding is not known in general, and the search space over all possible $D$ grows exponentially with $m$.
\subsection{A lower bound based on acyclic subgraphs}
Nonetheless, we will utilise Lemma~\ref{lemma:equivalence} to formulate a lower bound for pliable index coding using results for index coding. More specifically,
\begin{equation}
\beta_q (\mathcal{P}_{m,\mathbb{U}}) \geq \min_D \phi_q(\mathcal{P}_{m,\mathbb{U},D}),
\end{equation}
where $\phi_q(\mathcal{P}_{m,\mathbb{U},D})$ is any lower bound on $\beta_q(\mathcal{P}_{m,\mathbb{U},D})$.
We now state a lower bound for index coding~\cite{neelytehranizhang13}, expressed through a directed-bipartite-graph representation of an index-coding problem. Any index-coding problem can be specified by a bipartite graph with these two disjoint, independent sets: the message node set and the receiver node set. A directed edge from receiver node~$r$ to message node~$m$ exists iff receiver~$r$ has $X_m$ as side information; a directed edge from message node~$m$ to receiver node~$r$ exists iff receiver~$r$ wants $X_m$.
Now, we perform one or more of the following pruning operations, as many times as desired: \textsf{(a)}~remove a message node and all its incoming and outgoing edges; \textsf{(b)}~remove a receiver node and all its incoming and outgoing edges; \textsf{(c)}~remove a message-to-receiver edge. After a series of pruning operations, remove all message nodes with no outgoing edge. Let the resultant bipartite graph be $G'$, and the number of message nodes left be $m(G')$. If $G'$ is acyclic (in the directed sense), then we have the following lower bound, which generalises the maximum-acyclic-induced-subgraph (MAIS) lower bound~\cite{baryossefbirk11}.
\begin{lemma}{\cite[Lem.~1]{neelytehranizhang13}} \label{lemma:index-coding-lower-bound} Consider an index-coding problem~$\mathcal{I}$ and its bipartite-graph representation~$G$.
After a series of pruning operations, if the resultant graph~$G'$ is acyclic, then
$\beta_q(\mathcal{I}) \geq m(G')$.
\end{lemma}
A pliable-index-coding problem~$\mathcal{P}_{m,\mathbb{U}}$ with a decoding choice~$D$---that is, the index-coding problem $\mathcal{P}_{m,\mathbb{U},D}$---can be described by the following bipartite graph:
\textsf{(a)}~message nodes~$i \in [1:m]$;
\textsf{(b)}~receiver nodes~$H \in \mathbb{U}$;
\textsf{(c)}~each receiver node~$H$ has an outgoing edge to every message node~$i \in H$ and an incoming edge from node~$D(H)$.
Name this graph~$G_D$, and let $G_D'$ denote the resultant graph after a series of pruning operations on $G_D$.
Using Lemmas~\ref{lemma:equivalence} and \ref{lemma:index-coding-lower-bound}, we obtain the following lower bound for pliable index coding:
\begin{lemma}\label{lemma:acyclic} [Lower bound]
Consider a pliable-index-coding problem $\mathcal{P}_{m,\mathbb{U}}$, and a set of bipartite graphs $\{G_D\}$ formed by all possible decoding choices $D$. Perform pruning operations on each $G_D$ to obtain an acyclic $G_D'$. Then,
\begin{equation}
\beta_q (\mathcal{P}_{m,\mathbb{U}}) \geq \min_D m(G_D').
\end{equation}
\end{lemma}
\subsection{Constructing acyclic subgraphs using decoding chains with skipped messages}
To use Lemma~\ref{lemma:acyclic}, one needs to consider all $D$, perform pruning operations on each $G_D$ to get an acyclic graph $G_D'$, and count the remaining number of message nodes $m(G_D')$.
We will instead use a decoding-chain argument to obtain the required $m(G_D')$. The concept of decoding chains was used to prove the MAIS lower bound~\cite{baryossefbirk11} and its extension~\cite[Lem.~1]{neelytehranizhang13} for index coding, and lower bounds for certain pliable-index-coding problems~\cite{liutuninetti17,liutuninetti18}.
In this paper, we propose a new approach to construct decoding chains by introducing \textit{skipped messages}, which is implemented in the following randomised algorithm:
\vspace*{-1.95ex}
\begin{algorithm}[h]
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\Input{$\mathcal{P}_{m,\mathbb{U},D}$}
\Output{A \textit{decoding chain} $C$ (a totally ordered set with a total order $\leq_C$) and a set of \textit{skipped messages} $S$}
$C \leftarrow \emptyset$; \texttt{\scriptsize\color{blue} (initialise $C$)}\\
$S \leftarrow \emptyset$; \texttt{\scriptsize\color{blue} (initialise $S$)}\\
\While{$C \neq [1:m]$}{
\If(\texttt{\scriptsize\color{blue} (receiver $C$ is absent)}){$C \notin \mathbb{U}$}{
Choose any $a \in [1:m] \setminus C$; $^\#$\\ \texttt{\scriptsize\color{blue} ($a$ is called a skipped message)}\\
$C \leftarrow (C \cup \{a\}$, with $i \leq_C a,$ for all $i \in C$); \texttt{\scriptsize\color{blue} (expand $C$)}\\
$S \leftarrow S \cup \{a\}$; \texttt{\scriptsize\color{blue} (expand $S$)}
}
\Else (\texttt{\scriptsize\color{blue} (receiver $C$ is present)})
{
$C \leftarrow (C \cup \{D(C)\}$, with $i \leq_C D(C),$ for all $i \in C$);\\
\texttt{\scriptsize\color{blue} (add the message that receiver $C$ decodes)}
}
}
\caption{An algorithm to construct a decoding chain with skipped messages}
\label{algo:chain}
\end{algorithm}
\vspace*{-1.95ex}
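Algorithm~\ref{algo:chain} translates almost line-for-line into code (a sketch; the decoding choice \texttt{D} and the skipped-message rule \texttt{pick\_skip} are supplied by the caller, mirroring the free choices in the algorithm):

```python
def decoding_chain(m, receivers, D, pick_skip):
    """Build a decoding chain C (as an ordered list) and a skipped set S.
    `receivers` holds the present side-information sets as frozensets,
    `D(H)` returns the message that receiver H decodes, and `pick_skip(C)`
    chooses a skipped message outside C when an absent receiver is hit."""
    C, S = [], set()
    while set(C) != set(range(1, m + 1)):
        H = frozenset(C)
        if H not in receivers:      # hit an absent receiver
            a = pick_skip(C)        # skip a message a not in C
            C.append(a)
            S.add(a)
        else:                       # present receiver H decodes D(H)
            C.append(D(H))
    return C, S
```

The number of skipped messages $|S|$ depends on both $D$ and the skip choices, which is exactly what the bounds below optimise over.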
We say that the algorithm ``\textit{skips}'' a message $a$, whenever we execute the step marked \# for that message $a$. We will see later that the number of skipped messages is an important parameter characterising lower bounds. We say that the algorithm ``\textit{hits}'' a receiver $H$ whenever $C$ is updated as $C \leftarrow H$. If receiver~$H$ is absent, we say that it hits an absent receiver~$H$. Note that receiver $[1:m]$ cannot exist, so when the algorithm ends, $[1:m]$ is not considered an absent receiver being hit.
\begin{remark} \label{remark:decoding-chain}
We highlight some properties of Algorithm~\ref{algo:chain}:
\begin{enumerate}
\item For a fixed $D$, the only uncertainty in constructing a chain is the choice of skipped messages. So, $(C,S)$ is completely determined by $D$ and the choice of skipped messages.
\item If an absent receiver~$H$ is hit, then subsequently a message~$a \notin H$ will be skipped, and vice versa.
So, we skip a message iff we hit an absent receiver.
\item The algorithm always commences by hitting receiver~$\emptyset$ first.
\end{enumerate}
\end{remark}
For a fixed $\mathcal{P}_{m,\mathbb{U},D}$, any choice of skipped messages results in a pair of $(C,S)$. Let $\mathbb{C}$ be the set of all $(C,S)$ pairs, obtained by varying different skipped messages. We have the following:
\begin{lemma} \label{lemma:pruning-lower-bound}
For each $(C,S) \in \mathbb{C}$ derived from a given $\mathcal{P}_{m,\mathbb{U},D}$ (or equivalently, $G_D$), there exists a series of pruning operations on $G_D$ yielding an acyclic $G_D'$ with $m(G_D') = |C \setminus S| = m - |S|$.
\end{lemma}
\begin{IEEEproof}
Remove from $G_D$ all present receivers not being hit in the algorithm, and their connected edges.
Let the elements of $C$ in the order of construction of $C$ be $c_1,c_2,\ldots, \underline{c_i}, \ldots, c_{|C|}$, that is, $c_i \leq_C c_j$ iff $i \leq j$, where underlined elements are present in $S$ as well. By construction, if $c_i$ is underlined, then receiver~$\{c_1, \dotsc, c_{i-1}\}$ is absent. So, for each $c_i$ in $C$ that is not underlined, receiver~$\{c_1, \dotsc, c_{i-1}\}$ is present and has been hit in the algorithm, and therefore remains. This includes receiver~$\emptyset$ if $c_1$ is not underlined. So, $|C \setminus S|$ receivers remain.
Next, remove all messages in $S$ (and their associated edges) so that only messages in $C \setminus S$ remain.
After these pruning operations, the graph $G_D'$ consists of the following edges for each remaining receiver node~$H$: \textsf{(a)}~outgoing edges from $H$ to all message nodes $i \in H \setminus S$, \textsf{(b)}~incoming edge from message node $D(H)$ to $H$. Also, by construction, for each remaining receiver node~$H$, $i \leq_C D(H)$ for all $i \in H$.
For $a \leq_C b$, we say that $b$ is \textit{larger} than $a$ in $C$, and $a$ is \textit{smaller} than $b$ in $C$.
In $G_D'$, all edges flow from message nodes that are larger in $C$ to message nodes that are smaller in $C$, through receiver nodes. Hence, $G_D'$ is acyclic. Also, since each message node that remains is requested by a receiver that remains, no message node is removed after the pruning operations. So, $G_D'$ contains $|C \setminus S|$ message nodes. As $C$ contains all the messages $[1:m]$, we have $|C \setminus S| = m - |S|$.
\end{IEEEproof}
\subsection{A lower bound via decoding chains with skipped messages}
We can express the lower bound in Lemma~\ref{lemma:pruning-lower-bound} as follows:
\begin{lemma} \label{lemma:chain-lower-bound} [Lower bound]
Consider a pliable-index-coding problem $\mathcal{P}_{m,\mathbb{U}}$ and its bipartite-graph representation $G$.
\begin{equation}
\beta_q (\mathcal{P}_{m,\mathbb{U}}) \geq m - \max_D \min_{(C,S) \in \mathbb{C}} |S|. \label{eq:chain-lower-bound}
\end{equation}
\end{lemma}
\begin{IEEEproof}
From Lemmas~\ref{lemma:acyclic} and \ref{lemma:pruning-lower-bound}, we know that for each decoding choice $D$ and any $(C,S) \in \mathbb{C}$, $\beta_q (\mathcal{P}_{m,\mathbb{U}}) \geq m - |S|$. As the decoding choice induced by an optimal code is not known, we can only assert $\beta_q (\mathcal{P}_{m,\mathbb{U}}) \geq \min_D (m - |S|)$; minimising $|S|$ over $(C,S) \in \mathbb{C}$ for each $D$ then gives Lemma~\ref{lemma:chain-lower-bound}.
\end{IEEEproof}
\begin{remark} \label{remark:max-min}
Although the lower bound \eqref{eq:chain-lower-bound} involves minimising over all $(C,S) \in \mathbb{C}$, any choice of $(C,S)$ for each $D$ also gives a (possibly weaker) lower bound. Maximising over all $D$, however, is compulsory.
\end{remark}
\subsection{A lower bound based on nested chains of absent receivers}
Denote the set of absent receivers by $\mathbb{U}^\text{abs} := 2^{[1:m]} \setminus (\{[1:m]\} \cup \mathbb{U})$.
\begin{lemma} \label{lemma:necessary-nested}
If an instance of Algorithm~\ref{algo:chain} skips $L \in \mathbb{Z}^+$ messages,
then there exists a \textit{nested chain} of absent receivers of length $L$, that is, $H_1 \subsetneq H_2 \subsetneq \cdots \subsetneq H_L$, with each $H_i \in \mathbb{U}^\text{abs}$.
\end{lemma}
\begin{IEEEproof}
A decoding chain $C$ is constructed by adding messages one by one. So, any receiver that is hit must contain all previously hit receivers. From Remark~\ref{remark:decoding-chain}, we know that if the algorithm skips $L$ messages, it must hit $L$ absent receivers, and these absent receivers must form a nested chain.
\end{IEEEproof}
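Since Algorithm~\ref{algo:chain} itself is stated elsewhere, the following Python sketch reconstructs the chain construction from the description in this proof and in Remark~\ref{remark:decoding-chain}; the function and variable names are ours. For a small instance it checks that the absent receivers hit by a run form a nested chain whose length equals $|S|$.

```python
from itertools import combinations

def decoding_chain(m, present, D):
    """Grow a decoding chain C one message at a time.  If the receiver
    whose side information equals the current chain is present, append
    the message it decodes (given by the decoding choice D); otherwise
    skip: append an arbitrary unused message and record it in S."""
    C, S, hit_absent = [], [], []
    msgs = set(range(1, m + 1))
    while set(C) != msgs:
        H = frozenset(C)
        if H in present:
            C.append(D[H])                 # D(H) is a message not in H
        else:
            hit_absent.append(set(H))
            a = min(msgs - set(C))         # arbitrary skipped message
            C.append(a)
            S.append(a)
    return C, S, hit_absent

# Small instance: m = 3, absent receivers {} and {1}.
m = 3
msgs = set(range(1, m + 1))
absent = {frozenset(), frozenset({1})}
present = {frozenset(R) for r in range(m)
           for R in combinations(sorted(msgs), r)} - absent
D = {H: min(msgs - H) for H in present}    # one fixed decoding choice
C, S, hit = decoding_chain(m, present, D)
```

Here the run skips a message exactly when it hits one of the absent receivers $\emptyset$ and $\{1\}$, which form a nested chain of length $|S| = 2$.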
We will now prove another lower bound that is easier to use compared to Lemma~\ref{lemma:chain-lower-bound} in some scenarios (for example, case~2 in Theorem~\ref{theorem:nesting} and Theorem~\ref{theorem:perfectly-nested}).
\begin{lemma} \label{lemma:simplier-lower-bound} [Lower bound]
Consider a pliable-index-coding problem $\mathcal{P}_{m,\mathbb{U}}$ and its bipartite-graph representation $G$.
Let $L \in \mathbb{Z}^+$ be the maximum length of any nested chain constructed from receivers absent in $\mathcal{P}_{m,\mathbb{U}}$. We have that
$\beta_q(\mathcal{P}_{m,\mathbb{U}}) \geq m-L$.
\end{lemma}
\begin{IEEEproof}
The number of messages skipped by any run of Algorithm~\ref{algo:chain}, for any decoding choice~$D$, is at most $L$: otherwise, by Lemma~\ref{lemma:necessary-nested}, there would be a nested chain of absent receivers of length at least $L+1$, a contradiction. Thus,
$\displaystyle m- L \leq m - \max_D \max_{(C,S) \in \mathbb{C}} |S| \leq m - \max_D \min_{(C,S) \in \mathbb{C}} |S| \stackrel{\eqref{eq:chain-lower-bound}}{\leq} \beta_q(\mathcal{P}_{m,\mathbb{U}}).$
\end{IEEEproof}
\section{Criticality and Monotonicity}
Before we characterise the optimal broadcast rate of certain classes of pliable-index-coding problems, we introduce the notion of \textit{critical} receivers for pliable index coding.
In index coding, it is well-known that removing any message from the side information of any receiver cannot decrease the optimal broadcast rate $\beta$. Hence, the side-information sets of all receivers are said to be critical if removing any messages therein results in a strictly larger $\beta$.
However, in pliable index coding, removing messages from side-information sets may increase or decrease $\beta$. We will establish this in Corollary~\ref{corollary:criticality} later. Hence, criticality should not be defined for the messages in side-information sets. However, we can instead define criticality for pliable index coding with respect to the receivers. By noting that any pliable index code for $\mathcal{P}_{m,\mathbb{U}}$ is also a pliable index code for $\mathcal{P}_{m,\mathbb{U}^-}$, we have the following:
\begin{lemma} \label{lemma:monotonicity}
Let $\mathbb U^{-}\subseteq \mathbb U$. Then, $\beta_q (\mathcal{P}_{m,\mathbb{U}^-}) \leq \beta_q(\mathcal{P}_{m,\mathbb{U}})$.
\end{lemma}
In light of this, we define the following.
\begin{definition}
For a pliable-index-coding problem $\mathcal{P}_{m,\mathbb{U}}$, the set of receivers $\mathbb{U}$ is said to be \textit{critical} iff adding any receiver to $\mathbb{U}$ strictly increases $\beta_q$.
\end{definition}
So, for pliable index coding, critical receivers can be seen as a maximal receiver set that a broadcast rate can support. This is different from index coding, where critical side information can be seen as the minimal side information that is required to maintain a broadcast rate.
\section{Results on Optimal Broadcast Rates}
We now derive $\beta_q$ for a few classes of pliable-index-coding problems.
For lower bounds, we use Lemma~\ref{lemma:chain-lower-bound} and Lemma~\ref{lemma:simplier-lower-bound} for different settings.
For achievability, we will employ \textit{cyclic codes} defined as follows. A cyclic code for messages $\{X_1, X_2, \dotsc, X_L\}$ is $(X_1 + X_2, X_2 + X_3, \dotsc, X_{L-1} + X_L) \in \mathbb{F}_q^{L-1}$. For notational convenience, we let the cyclic code for a single message $X_i$ be nil (that is, sending nothing).
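As a concrete illustration (ours, with $\mathbb{F}_q$ realised as integers modulo $q$): a receiver that knows any single message in the chain can peel off all of the remaining ones from the adjacent sums.

```python
def cyclic_encode(xs, q):
    """The code (X1+X2, X2+X3, ..., X_{L-1}+X_L) over Z_q;
    nil (the empty tuple) when there is a single message."""
    return tuple((xs[i] + xs[i + 1]) % q for i in range(len(xs) - 1))

def decode_from_one(code, i, xi, q):
    """Recover every message from the code symbols and the single
    message x_i at (0-based) position i, peeling forwards and
    backwards: x_{j+1} = c_j - x_j and x_{j-1} = c_{j-1} - x_j (mod q)."""
    L = len(code) + 1
    xs = [None] * L
    xs[i] = xi % q
    for j in range(i, L - 1):
        xs[j + 1] = (code[j] - xs[j]) % q
    for j in range(i, 0, -1):
        xs[j - 1] = (code[j - 1] - xs[j]) % q
    return xs
```

For example, with $q=7$ and messages $(3,1,4,1,5)$, the four transmitted symbols are $(4,5,5,6)$, and knowledge of the single message $x_3=4$ recovers all five.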
\begin{theorem} \label{theorem:incomplete}
Let $\mathcal{P}_{m,\mathbb{U}}$ be such that $|\mathbb{U}^\text{abs}| \neq 0$ and
\begin{equation}
\textstyle \mathop{\bigcup}\limits_{H \in \mathbb{U}^\text{abs}} H \neq [1:m]. \label{eq:union-not-full}
\end{equation}
Then $\beta_q(\mathcal{P}_{m,\mathbb{U}}) = m-1$.
\end{theorem}
\begin{IEEEproof}
If receiver $\emptyset \in \mathbb{U}$, we remove it to get another pliable-index-coding problem $\mathcal{P}^- = \mathcal{P}_{m,\mathbb{U}\setminus \{\emptyset\}}$. Using Lemma~\ref{lemma:monotonicity}, $\beta_q(\mathcal{P}^-) \leq \beta_q(\mathcal{P}_{m,\mathbb{U}})$.
We run Algorithm~\ref{algo:chain} on $\mathcal{P}^-$. Since receiver $\emptyset$ is missing, we start by skipping some message $a \in [1:m]$. We choose any $a \in [1:m] \setminus \mathop{\bigcup}_{H \in \mathbb{U}^\text{abs}} H$, which is possible due to \eqref{eq:union-not-full}. After this step, for \textit{any} decoding choice $D$, Algorithm~\ref{algo:chain} must terminate without skipping any more messages (meaning that it will not hit any absent receiver). This is because $a$ (which is included in $C$ in the first step) is not in the side-information set of any absent receiver. So, Algorithm~\ref{algo:chain} terminates with $S = \{a\}$.
Invoking Lemma~\ref{lemma:chain-lower-bound}, we have $\beta_q(\mathcal{P}^-) \geq m-1$. Note that we need not minimise the algorithm over all $(C,S)$ here; see Remark~\ref{remark:max-min}. This completes the lower bound.
For achievability, pick any $H \in \mathbb{U}^\text{abs}$. We send $X_H$ uncoded, and $X_{[1:m]\setminus H}$ using a cyclic code. This gives a codelength of $m-1$. Note that any receiver that does not have all messages in $H$ as side information will be able to decode a new message. Also, any receiver that has all messages in $H$ must also have at least one (but not all) of the messages in $[1:m] \setminus H$---because receiver $H$ is absent and receiver $[1:m]$ does not exist---and hence it can decode a new message from the cyclic code.
\end{IEEEproof}
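The decodability argument above can be checked mechanically for a small instance. The sketch below (ours; it fixes $q=2$ and represents linear combinations of messages as bitmasks) transmits $X_H$ uncoded together with the adjacent-sum code on $[1:m]\setminus H$, and verifies that every receiver other than $H$ and $[1:m]$ can decode at least one new message.

```python
from itertools import combinations

def in_span(rows, target):
    """Is `target` in the GF(2) row span?  Standard xor basis keyed by
    leading bit; vectors are Python ints used as bitmasks."""
    basis = {}
    def reduce(v):
        while v:
            lb = v.bit_length() - 1
            if lb not in basis:
                return v
            v ^= basis[lb]
        return 0
    for r in rows:
        r = reduce(r)
        if r:
            basis[r.bit_length() - 1] = r
    return reduce(target) == 0

def scheme_rows(m, H):
    """Rows of the length-(m-1) code: X_h uncoded for h in H, plus
    adjacent sums over the remaining messages."""
    rest = sorted(set(range(1, m + 1)) - set(H))
    rows = [1 << (h - 1) for h in sorted(H)]
    rows += [(1 << (rest[i] - 1)) | (1 << (rest[i + 1] - 1))
             for i in range(len(rest) - 1)]
    return rows

m, H = 5, {1, 2}          # our toy instance: receiver {1,2} is absent
rows = scheme_rows(m, H)
all_decode = all(
    any(in_span(rows + [1 << (i - 1) for i in R], 1 << (k - 1))
        for k in set(range(1, m + 1)) - set(R))
    for r in range(m) for R in combinations(range(1, m + 1), r)
    if set(R) != H)
```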
It has been shown~\cite{liutuninetti17} that if all receivers are present, then $\beta_q = m$. We now strengthen this result to an if-and-only-if statement.
\begin{theorem}
$\beta_q(\mathcal{P}_{m,\mathbb{U}}) = m$ iff $\mathbb{U} = 2^{[1:m]}\setminus \{[1:m]\}$.
\end{theorem}
\begin{IEEEproof}
We only need to prove the ``only if'' part. Equivalently, we show that if $\mathbb{U} \neq 2^{[1:m]}\setminus \{[1:m]\}$, then $\beta_q(\mathcal{P}_{m,\mathbb{U}}) \neq m$. We start by observing that if $\mathbb{U} \neq 2^{[1:m]}\setminus \{[1:m]\}$, then at least one receiver must be absent. By letting the absent receiver be $H$, we have $\mathbb{U} \subseteq 2^{[1:m]}\setminus \{ [1:m], H\} := \mathbb{U}^+$. As $H \neq [1:m]$, we have
$\beta_q(\mathcal{P}_{m,\mathbb{U}}) \leq \beta_q (\mathcal{P}_{m,\mathbb{U}^+}) = m-1$, where the inequality follows from Lemma~\ref{lemma:monotonicity} and the equality from Theorem~\ref{theorem:incomplete}.
\end{IEEEproof}
We now present results for the absent-receiver set $\mathbb{U}^\text{abs}$ in some cases where $\mathop{\bigcup}_{H \in \mathbb{U}^\text{abs}} H = [1:m]$.
\begin{theorem} \label{theorem:nesting}
Consider a pliable-index-coding problem $\mathcal{P}_{m,\mathbb{U}}$. If any of the following is true, then $\beta_q(\mathcal{P}_{m,\mathbb{U}}) = m-1$.
\begin{enumerate}
\item (no nested absent pair) $J \nsubseteq K$, for all distinct $J, K \in \mathbb{U}^\text{abs}$.
\item (one nested absent pair) $J \subsetneq K$, for exactly one pair of $J, K \in \mathbb{U}^\text{abs}$.
\end{enumerate}
\end{theorem}
\begin{IEEEproof}
Theorem~\ref{theorem:incomplete} covers the case $\mathop{\bigcup}_{H \in \mathbb{U}^\text{abs}} H \neq [1:m]$. So, in the proof, we consider only $\mathop{\bigcup}_{H \in \mathbb{U}^\text{abs}} H = [1:m]$.
For achievability, we use the coding scheme for Theorem~\ref{theorem:incomplete}, that is, we choose any $H \in \mathbb{U}^\text{abs}$, and then send $X_H$ uncoded, and $X_{[1:m]\setminus H}$ using a cyclic code. This gives a code of length $m-1$. Note that this code works when only receiver~$H$ is missing, and will therefore also work when $H$ and other receivers are missing.
For lower bounds, we start with case~1. Since no pair of absent receivers are nested, using Lemma~\ref{lemma:simplier-lower-bound}, we obtain the required lower bound $m-1$.
For case~2, as there is a pair of nested absent receivers, Lemma~\ref{lemma:simplier-lower-bound} gives only a loose lower bound of $m-2$. Suppose that receiver~$\emptyset$ is absent. Then $J=\emptyset$, and only one other receiver $K$ can be absent, since any additional absent receiver would yield at least two pairs of nested absent receivers. In this setting, $\mathop{\bigcup}_{H \in \mathbb{U}^\text{abs}} H = \emptyset \cup K \neq [1:m]$, and by Theorem~\ref{theorem:incomplete}, we see that $\beta_q(\mathcal{P}_{m,\mathbb{U}}) = m-1$.
Now, suppose that case 2 holds and $\emptyset$ is present. With $\emptyset \in \mathbb{U}$, we know that Algorithm~\ref{algo:chain} can start without skipping the first message to be included in $C$. We split the decoding choices into three sub-cases, and skip specific messages to avoid $|S|=2$.\\
\underline{Sub-case 1:} $D$ such that the decoding chain does not hit any absent receiver. For this case, $|S|=0$.\\
\underline{Sub-case 2:} $D$ such that the decoding chain first hits an absent receiver $H \neq J$. Then, we arbitrarily skip one message, and will not hit another absent receiver, since every receiver whose side information strictly contains $H$ is present. This gives $|S|=1$.\\
\underline{Sub-case 3:} $D$ such that the decoding chain first hits $J$. Then, we skip a message $a \in [1:m] \setminus K$. We will not hit another absent receiver, as every receiver whose side information contains $J \cup \{a\}$ is present. This results in $|S|=1$. \\
Maximising $|S|$ over all $D$, we get the lower bound $m-1$.
\end{IEEEproof}
For the next result, we first need to define a class of pliable-index-coding problems.
\begin{definition}
A pliable-index-coding problem is said to have \textit{perfectly $L$-nested absent receivers} iff the messages $[1:m]$ can be partitioned into $L+1 \in [2:m]$ subsets $P_0, P_1, \dotsc, P_{L}$ (that is, $\mathop{\bigcup}_{i=0}^L P_i = [1:m]$ and $P_i \cap P_j = \emptyset$ for all $i \neq j$), such that only $P_0$ may be empty, and there are exactly $2^L-1$ \textit{absent} receivers, which are
\begin{equation}
\textstyle P_0 \cup \left( \mathop{\bigcup}\limits_{i \in Q} P_i \right), \text{ for each } Q \subsetneq [1:L].
\end{equation}
\end{definition}
Figure~\ref{fig:nested} depicts an example of perfectly 3-nested absent receivers.
\begin{theorem} \label{theorem:perfectly-nested}
For any pliable-index-coding problem $\mathcal{P}_{m,\mathbb{U}}$ with perfectly $L$-nested absent receivers, $\beta_q(\mathcal{P}_{m,\mathbb{U}}) = m-L$.
\end{theorem}
\begin{IEEEproof}
For achievability, we send $X_{P_0}$ uncoded and $X_{P_i}$ for each $i \in [1:L]$ using a cyclic code. One can verify that every present receiver can decode a new message.
Since the maximum length of any nested chain of absent receivers is $L$, Lemma~\ref{lemma:simplier-lower-bound} gives $\beta_q(\mathcal{P}_{m,\mathbb{U}}) \geq m-L$.
\end{IEEEproof}
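The verification left to the reader in the achievability part can be done by brute force for a small instance. In the sketch below (ours), a receiver obtains a new message from the scheme iff it misses some uncoded message of $P_0$, or sees part but not all of some $P_i$; the check confirms that this happens precisely for the present receivers, and that the codelength is $m-L$.

```python
from itertools import combinations

def nested_absent(P):
    """The 2^L - 1 absent receivers P0 u (U_{i in Q} Pi) for proper
    subsets Q of [1:L], generated by the parts P = [P0, P1, ..., PL]."""
    L = len(P) - 1
    out = set()
    for r in range(L):                       # |Q| < L, i.e. Q proper
        for Q in combinations(range(1, L + 1), r):
            H = set(P[0])
            for i in Q:
                H |= set(P[i])
            out.add(frozenset(H))
    return out

def decodes(R, P):
    """Can receiver R obtain a new message from the scheme
    (X_{P0} uncoded, one adjacent-sum code per part P_i)?"""
    if not set(P[0]) <= R:
        return True                          # some uncoded X_j is new to R
    return any(R & set(Pi) and not set(Pi) <= R for Pi in P[1:])

# Our toy instance: m = 6, L = 2, parts P0 = {1}, P1 = {2,3}, P2 = {4,5,6}.
P = [{1}, {2, 3}, {4, 5, 6}]
m, L = 6, len(P) - 1
absent = nested_absent(P)
codelen = len(P[0]) + sum(len(Pi) - 1 for Pi in P[1:])
all_ok = all(decodes(set(R), P) == (frozenset(R) not in absent)
             for r in range(m) for R in combinations(range(1, m + 1), r))
```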
\begin{lemma}
If $\mathbb{U}^\text{abs}$ is a set of perfectly $L$-nested absent receivers, then $\mathbb{U}$ is critical.
\end{lemma}
\begin{IEEEproof}
Start with $\mathcal{P}_{m,\mathbb{U}}$ with perfectly $L$-nested absent receivers.
By the structure of $\mathbb{U}$, the maximum length of any nested chain of absent receivers is $L$, and each such chain must be of the following form:
\vspace*{-1.7ex}
\begin{multline}
\textstyle P_0 \subsetneq\, P_0 \hspace{-0.25mm}\cup\hspace{-0.25mm} P_{i_1} \,\subsetneq\, P_0 \hspace{-0.25mm}\cup\hspace{-0.25mm} P_{i_1} \hspace{-0.25mm}\cup\hspace{-0.25mm} P_{i_2}\subsetneq \dotsm \subsetneq\, P_0\hspace{-0.25mm}\cup\hspace{-0.25mm} \left(\mathop{\bigcup}\limits_{j=1}^{L-1} P_{i_j}\right), \label{eq:L-chain}
\end{multline}
for some distinct $i_1, \dotsc, i_{L-1}\in [1:L]$.
We need to show that if we augment $\mathbb{U}$ with any absent receiver $H = P_0 \cup ( \mathop{\bigcup}_{i \in Q} P_i )$ for some $Q \subsetneq [1:L]$, then $\beta_q(\mathcal{P}_{m,\mathbb{U}^+}) \geq m-L+1$, where $\mathbb{U}^+ = \mathbb{U} \cup \{H\}$ is the set of receivers after augmenting $H$.
Clearly, if $H = P_0$, then receiver $P_0$ is no longer absent, and \eqref{eq:L-chain} is not possible. So, the maximum length of any nested chain constructed from receivers absent in $\mathcal{P}_{m,\mathbb{U}^+}$ is $L-1$. Using Lemma~\ref{lemma:simplier-lower-bound}, we have $\beta_q(\mathcal{P}_{m,\mathbb{U}^+}) \geq m-L+1$.
Otherwise, without loss of generality, let $H = P_0 \cup ( \mathop{\bigcup}_{1 \leq j \leq Q} P_j )$ for some integer $Q \in [1:L-1]$. We will use Lemma~\ref{lemma:chain-lower-bound} to show that for any decoding choice $D$, we can construct a set of skipped messages $S$ such that $|S| \leq L-1$.
Since in any attempt to skip $L$ messages, chain \eqref{eq:L-chain} is necessary, we only need to consider all decoding choices for which the first absent receiver being hit is $P_0$. After this, we choose to skip any message in $P_1$. Again, in the attempt to skip $L$ messages, the decoding choice must be made such that the next absent receiver being hit is $P_0 \cup P_1$. Repeating this, in iteration~$i \in [1:Q-1]$, after hitting each $P_0 \cup (\mathop{\bigcup}_{1 \leq j \leq i} P_j )$, we choose to skip any message in $P_{i+1}$. The next absent receiver being hit must then be $P_0 \cup (\mathop{\bigcup}_{1 \leq j \leq i+1} P_j )$, except when we reach $i+1 = Q$, where the receiver $P_0 \cup ( \mathop{\bigcup}_{1 \leq j \leq Q} P_j )$ is now present, having been included in $\mathbb{U}^+$. In this case, either \textsf{(a)} the next absent receiver being hit is $P_0 \cup ( \mathop{\bigcup}_{1 \leq j \leq Q} P_j ) \cup P_k$ for some $k \in [Q+1:L]$ if $Q \leq L-2$, or \textsf{(b)} the decoding chain terminates without hitting another absent receiver if $Q=L-1$. In any case, with the skipped messages chosen as above, at most $L-1$ messages are skipped for any decoding choice, and therefore Lemma~\ref{lemma:chain-lower-bound} gives $\beta_q(\mathcal{P}_{m,\mathbb{U}^+}) \geq m-L+1$.
\end{IEEEproof}
\begin{figure}[t]
\centering
\includegraphics[scale=0.39]{26-new}
\caption{Perfectly 3-nested absent receivers, where circles denote messages, $\{P_i\}_{i=0}^3$ partitions, and $\{H_i\}_{i=1}^7$ absent receivers.}
\label{fig:nested}
\vspace{-0.94em}
\end{figure}
With the above results, we can show the following:
\begin{corollary}\label{corollary:criticality}
For any $\mathcal{P}_{m,\mathbb{U}}$, removing a message from a present receiver $H \in \mathbb{U}$ may strictly increase or strictly decrease the optimal broadcast rate $\beta_q(\mathcal{P}_{m,\mathbb{U}})$.
\end{corollary}
We prove Corollary~\ref{corollary:criticality} using the example below:
\begin{example} \label{example:increase-decrease}
Consider $m=5$ and a set of absent receivers~$\mathbb{U}^\text{abs}_1 = \{ \{1,2,3\}, \{3\}, \{3,4\}\}$. Using Theorem~\ref{theorem:incomplete}, we have $\beta_q(\mathcal{P}_{m,\mathbb{U}_1}) = m-1$. Now, we remove message~5 from a present receiver~$\{3,4,5\} \in \mathbb{U}_1$. This is equivalent to replacing the present receiver~$\{3,4,5\}$ with a new present receiver~$\{3,4\}$. We get $\mathbb{U}^\text{abs}_2 = \{ \{1,2,3\}, \{3\}, \{3,4,5\}\}$, which forms perfectly 2-nested absent receivers. Using Theorem~\ref{theorem:perfectly-nested}, $\beta_q(\mathcal{P}_{m,\mathbb{U}_2}) = m-2$. We continue by removing messages 2 and 4 from the present receiver $\{2,3,4\} \in \mathbb{U}_2$. This replaces the present receiver $\{2,3,4\}$ with a new present receiver $\{3\}$. We get $\mathbb{U}^\text{abs}_3 = \{ \{1,2,3\}, \{2,3,4\}, \{3,4,5\}\}$. Using Theorem~\ref{theorem:nesting}, $\beta_q(\mathcal{P}_{m,\mathbb{U}_3}) = m-1$.
\end{example}
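The nested-chain bookkeeping in Example~\ref{example:increase-decrease} can be checked directly. The sketch below (ours) computes the maximum length $L$ of a nested chain inside each absent-receiver family: $L=2$ for $\mathbb{U}^\text{abs}_1$ and $\mathbb{U}^\text{abs}_2$, and $L=1$ for $\mathbb{U}^\text{abs}_3$. Note that Lemma~\ref{lemma:simplier-lower-bound} is tight for $\mathbb{U}_2$ (perfectly 2-nested) and $\mathbb{U}_3$, but loose for $\mathbb{U}_1$, where Theorem~\ref{theorem:incomplete} supplies the tight value $m-1$.

```python
def max_chain(absent):
    """Longest nested chain H1 c H2 c ... (strict subsets) within the
    family `absent`: a longest path in the strict-subset DAG,
    computed by memoised recursion."""
    absent = [set(H) for H in absent]
    memo = {}
    def longest(i):
        if i not in memo:
            memo[i] = 1 + max((longest(j) for j in range(len(absent))
                               if absent[i] < absent[j]), default=0)
        return memo[i]
    return max(longest(i) for i in range(len(absent)))

U1 = [{1, 2, 3}, {3}, {3, 4}]
U2 = [{1, 2, 3}, {3}, {3, 4, 5}]        # perfectly 2-nested
U3 = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]  # pairwise incomparable
lengths = [max_chain(U) for U in (U1, U2, U3)]
```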
\chapter{Introduction}
Following earlier
ideas\con\ROLIVE\RIBANEZ\RSTROM\RDUFF\RTOL\RGAZU\RODD\RDEROO\RSTW\noc\ we
have
proposed recently\con\RDUALITY\RDYON\noc\ that the toroidally compactified
heterotic string theory\RNARAIN\RNSW\ in four dimensions may be invariant
under an
SL(2,Z) group of transformations.
These transformations mix electric and magnetic fields, and at the same
time act non-trivially on the axion-dilaton field, thereby interchanging
the strong and weak coupling limits of the theory.
Further work in this direction was reported in ref.\RSCHWARZ.
SL(2,R) symmetry was used in refs.\RSTW\RDUALITY\RORTIN\ to generate
magnetically
charged black hole solutions in string theory.
In order that the full string theory has SL(2,Z) invariance, the theory
must contain magnetically charged states.
The allowed spectrum of electric and magnetic charges in the theory was
computed in ref.\RDYON.
A natural question to ask would be, `where do these magnetically charged
states come from?'
A partial answer to this question was provided in ref.\RDUALITY.
Following ref.\RDGHR, if we regard fundamental strings as solitons of the
effective field theory (a description which is likely to hold for states
representing long strings) then the dual strings may be constructed simply
from the SL(2,Z) transform of these solitons.
In this paper we shall study the interaction between these dual strings,
which also includes interaction between a dual string and an ordinary
string.
In particular, we show that the force between an infinitely long straight
dual string and an
ordinary test string parallel to the dual string vanishes.
We then study the scattering of two closed dual strings when one of them
passes
through the other without touching it, and show that the result is an
exchange of a fixed amount of electric and magnetic charge between the two
strings, determined by the quantum numbers of the original string.
The plan of the paper is as follows.
In sect.2 we give a brief review of SL(2, Z) invariance in toroidally
compactified heterotic string theory, and also discuss the relationship
between classical solutions in this theory and fundamental strings.
In sect.3 we construct the magnetically charged dual string solutions by
taking SL(2,Z) transformation of the fundamental string solution.
In sect.4 we calculate the force between a dual string and an ordinary
test string parallel to it, and show that it vanishes.
In sect.5 we study the result of adiabatically transporting a particle,
carrying both, electric and magnetic charges, around a dual string and
show that both these charges change as a result of this transport.
In sect.6 we derive the same results by regarding the string as the
boundary of a domain wall, and calculating the electric and magnetic
charges exchanged between the particle and the domain wall as the particle
passes through the wall.
In sect.7 we use the results of sect.5 and 6 to study the scattering of
two strings.
We summarize our results in sect.8.
\chapter{Review}
We begin by giving a brief review of the duality invariance of the
effective field theory and soliton solutions in this theory representing
fundamental strings.
We consider heterotic string theory with six of its ten dimensions
compactified on a torus with constant background gauge and anti-symmetric
tensor fields.
For a generic compactification, the only massless bosonic fields in the
theory are the metric $G_{\mu\nu}$, a complex scalar field $\lambda$
representing the axion-dilaton system, a set of 28 gauge fields
$A_\mu^{(\alpha)}$ ($1\le\alpha\le 28$) which we shall denote as a 28
dimensional vector $\vec A_\mu$, and a $28\times 28$ matrix valued field
$M$ satisfying,
$$
M^T=M, ~~~~ M^T L M = L
\eqn\etwo
$$
where
$$
L=\pmatrix{ 0 & I_6 & 0\cr I_6 & 0 & 0\cr 0 & 0 & -I_{16}}
\eqn\ethree
$$
$I_n$ being the $n\times n$ identity matrix.
In terms of these fields, the action is given by,
$$\eqalign{
S =&{1\over 32\pi}\int d^4 x\sqrt{-\det G}[ R -{1\over 2(\lambda_2)^2}
G^{\mu\nu}\partial_\mu\lambda\partial_\nu\bar\lambda -\lambda_2\vec F^T_{\mu\nu}.LML.
\vec F^{\mu\nu} \cr
& +\lambda_1\vec F^T_{\mu\nu}.L.\vec{\tilde F}^{\mu\nu} +{1\over 8}
G^{\mu\nu}
Tr(\partial_\mu M L\partial_\nu M L)]\cr
}
\eqn\eone
$$
where
$$
\vec F_{\mu\nu} =\partial_\mu \vec A_\nu - \partial_\nu\vec
A_\mu
\eqn\efour
$$
The equations of motion derived from this action are invariant under the
SL(2,R) transformation\RDUALITY:
$$\eqalign{
& \lambda\to {a\lambda + b\over c\lambda + d}, ~~~~ \vec F_{\mu\nu}\to
c\lambda_2 ML . \vec{\tilde F}_{\mu\nu} + (c \lambda_1 + d) \vec
F_{\mu\nu}\cr
& M\to M, ~~~~ G_{\mu\nu}\to G_{\mu\nu}\cr
}
\eqn\efive
$$
where $a,b,c,d$ are real numbers.
However, quantum corrections due to instantons break the SL(2,R)
invariance to at most
SL(2,Z) invariance, for which,
$$
a,b,c,d\in Z, ~~~~ ad-bc=1
\eqn\esix
$$
The equations of motion derived from the action \eone\ have a string-like
classical solution\RDGHR, given by,
$$\eqalign{
\lambda =& {1\over 2\pi i} \ln {z\over r_0}\cr
ds^2 =& -dt^2 +(dx^3)^2 -{1\over 2\pi} \ln{r\over r_0} dz d\bar z\cr
}
\eqn\eseven
$$
This describes a fundamental string lying along the $x^3$ direction.
$z=x^1 +i x^2$ denotes the complex coordinate transverse to the string.
The core of the string in this case is located at $z=0$.
This solution can be shown to be invariant under eight of the sixteen
global supersymmetry generators\RDGHR.
Note that the solution is sensible only in the region $r<r_0$, where $r_0$
is an arbitrary length scale.
{}From physical considerations we see that
$r_0$ should be taken to be of the order of the overall size of the closed
string loop.
Only for $r\ll r_0$ does the string look like a straight string, and
eq.\eseven\ gives a good description of the field configuration in this region.
$r_0$ also contains information about the asymptotic value of $\lambda_2$;
for closed string loops of the same size, $r_0$ takes different values for
different asymptotic values of the field $\lambda_2$.
The solution has several zero modes. First of all there are two bosonic
zero modes which simply correspond to shifting the location of the
core of the string in the $x^1-x^2$ plane. There are eight fermionic
zero modes which correspond to supersymmetry transformation of the
solution with the eight broken global supersymmetry generators. These
supersymmetry generators are chiral with respect to the gamma matrices
associated with the $t-x^3$ direction\RDGHR\RSTRING, hence the
corresponding fermionic
zero modes are also chiral. In particular for the solution given in
eq.\eseven, they turn out to be right chiral. Finally, there are 28 bosonic
zero modes generated by $O(7,23)$ deformation of the solution as discussed
in refs.\RDUALITY\RSTRING. The parameters labelling the deformed solution
may be identified as the charge per unit length carried by the string
corresponding to the 28 gauge fields. To first order in the deformation
parameters $q^{(I)}$ ($13\le I\le 28$), $p^{(m)}$, $l^{(m)}$ ($1\le m\le
6$), the
solution deformed by these charge zero modes is given by,
$$\eqalign{
\lambda =& {1\over 2\pi i}\ln{z\over r_0}\cr
ds^2 =& -dt^2 + (dx^3)^2 -{1\over 2\pi} \ln{r\over r_0} dz d\bar z\cr
F^{(I)}_{-zt}=& F^{(I)}_{-z3} = {q^{(I)}\over z} {1\over
(\ln(r/r_0))^2} ~~{\rm for}~13\le I\le 28\cr
F^{(m)}_{-zt}-F^{(m+6)}_{-zt} =& F^{(m)}_{-z3}-F^{(m+6)}_{-z3} ={\sqrt 2
p^{(m)}\over z} {1\over (\ln(r/r_0))^2}~~{\rm for}~ 1\le m\le 6\cr
F^{(m)}_{-zt}+F^{(m+6)}_{-zt} =& -(F^{(m)}_{-z3}+F^{(m+6)}_{-z3}) ={\sqrt
2 l^{(m)}\over z} {1\over (\ln(r/r_0))^2}~~{\rm for}~ 1\le m\le 6\cr
F^{(\alpha)}_{-\bar z t}=& F^{(\alpha)}_{-\bar z 3} =0~~{\rm for~} 1\le
\alpha\le 28\cr
M =& I\cr
}
\eqn\eeight
$$
which generalizes the solution given in ref.\RSTRING\ where only the
parameters $q^{(I)}$ were present.
Here
$$
\vec F_{\pm \mu \nu} = -ML . \vec F_{\mu\nu}\pm i\vec{\tilde F}_{\mu\nu}
\eqn\eeightaa
$$
$q^{(I)}$, $p^{(m)}$ measure the charge per unit length, as well as the
current in the $- x^3$ direction associated with the gauge fields
$A_\mu^{(I)}$
($13\le I\le 28$) and $(A_\mu^{(m)} - A_\mu^{(m+6)})/\sqrt 2$ ($1\le m\le
6$)
respectively, and $l^{(m)}$ measure the charge per unit length, and the
current in the $x^3$ direction associated with the gauge field $(A_\mu^{(m)}
+ A_\mu^{(m+6)})/\sqrt 2$ ($1\le m\le 6$).
The collective excitation of
the string may be described by making the parameters labelling these
deformations functions of $t$ and $x^3$.
In particular, when we make $q^{(I)}$, $p^{(m)}$ and $l^{(m)}$ functions
of $t$ and
$x^3$, charge conservation implies that,
$$
(\partial_t - \partial_3) q^{(I)} = (\partial_t-\partial_3) p^{(m)} = (\partial_t + \partial_3) l^{(m)} =0
\eqn\enine
$$
thereby showing that $q^{(I)}$ and $p^{(m)}$ denote left-moving
coordinates and $l^{(m)}$ denote right-moving coordinates.
The set of all the collective excitations of the string can easily be seen
to be in one to one correspondence with the dynamical degrees of freedom
of the fundamental string in the static gauge.
This leads to the hypothesis that the quantization of these collective
coordinates will reproduce the full spectrum of states in the string
theory.
Although formally this may be an exact result after taking into account
the correction to the action due to the higher derivative terms, in
practice the usefulness of this hypothesis is limited to states associated
with long strings.
\chapter{Dual Strings}
In ref.\RDYON\ we have indicated that the allowed spectrum of
electric and magnetic charges in string theory is consistent with the
SL(2,Z) invariance of the theory. This, however, does not answer the
question as to where the magnetically charged states, that are necessary
for SL(2,Z) invariance of the spectrum, come from. In this section we
shall try to partially answer this question.
The answer in fact lies in the hypothesis stated at the end of the last
section.
Since according to this hypothesis, string states can be regarded as
collective excitations of a classical solution in the effective field
theory, the magnetically charged string states must come from the
collective excitations of the SL(2,Z)
transform of this classical solution. To make this more concrete, let us
first write down the SL(2,Z) transform of the solution given in
eq.\eseven\ by the element $g=\pmatrix{a & b\cr c & d\cr}$:
$$\eqalign{
\lambda =&{ a\ln (z/r_0) + 2\pi i b\over c\ln (z/r_0) + 2\pi i d}\cr
ds^2 =& - dt^2 + (dx^3)^2 -{1\over 2\pi}\ln(r/r_0) dz d\bar z\cr
}
\eqn\eten
$$
Note that as we go around the string, $\lambda$ changes to
$$
(\tilde a \lambda + \tilde b)/ (\tilde c\lambda +\tilde d)
\eqn\eeleven
$$
where,
$$
\tilde g\equiv \pmatrix{\tilde a & \tilde b\cr \tilde c & \tilde d\cr} = \pmatrix{ a & b\cr c &
d\cr} T \pmatrix{a & b\cr c & d\cr}^{-1}
=\pmatrix{1 - ac & a^2\cr -c^2 & 1+ac\cr}
\eqn\etwelve
$$
where
$$
T = \pmatrix{1 & 1\cr 0 & 1\cr}
\eqn\etwelvea
$$
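Eq.\etwelve\ follows from a short matrix computation: since $ad-bc=1$, the inverse $g^{-1}$ has the integer entries $d$, $-b$, $-c$, $a$. The following numerical check (ours) confirms the closed form for $\tilde g = g T g^{-1}$ on a few SL(2,Z) elements.

```python
def mul(A, B):
    """Product of 2x2 integer matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def monodromy(a, b, c, d):
    """g T g^{-1} for g = [[a, b], [c, d]] in SL(2,Z); since
    det g = ad - bc = 1, the inverse [[d, -b], [-c, a]] is integral."""
    assert a * d - b * c == 1
    T = [[1, 1], [0, 1]]
    return mul(mul([[a, b], [c, d]], T), [[d, -b], [-c, a]])

# Compare against the closed form [[1 - ac, a^2], [-c^2, 1 + ac]].
checks = all(
    monodromy(a, b, c, d) == [[1 - a * c, a * a], [-c * c, 1 + a * c]]
    for (a, b, c, d) in [(1, 0, 0, 1), (2, 1, 1, 1), (3, 2, 1, 1),
                         (0, -1, 1, 0), (5, 3, 3, 2)])
```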
The zero modes of this solution can be constructed in the following way.
Instead of trying to write down the deformed solution directly, we can
simply take the zero mode deformation of the solution \eseven, and take
the SL(2,Z) transform of it.
SL(2,Z) invariance of the equations of motion will imply that the
transformed configuration is also a solution of the equations of motion,
and hence denotes the zero mode deformation of the solution given in
eq.\eten.
Quantization of these zero modes should, in turn, produce the magnetically
charged strings required for duality invariance of the theory, at least
those corresponding to long strings.
We shall not explicitly display the deformed solution here.
Note that the electrically and magnetically charged string states in a
theory with a given asymptotic value of $\lambda$ are not related to each
other by SL(2,Z) transformation.
Instead, the magnetically charged particles in this theory are related
by SL(2, Z) transformation to
the purely electrically charged particles in a theory with a different
asymptotic value of $\lambda$.
There is one question that must be addressed before we conclude this
section.
So far we have discussed SL(2,Z) invariance of the theory in the case
where the fermionic background fields have been set to zero.
But the true SL(2,Z) invariance of the theory requires SL(2, Z)
invariance of the equations of motion even in the presence of fermionic
background fields.
In particular, this is necessary if we want to construct the fermionic
zero modes of the SL(2,Z) transformed solution.
We shall now give an indirect proof of the SL(2, Z) invariance of the
equations of motion after the inclusion of the fermionic fields.
This is done by comparing the dimensionally reduced heterotic string
theory to the $N=4$ Poincare supergravity theory coupled to abelian gauge
field multiplets as discussed in ref.\RDEROO.
It can be shown that the bosonic part of the action
given in eqs.(4.18), (4.26) of ref.\RDEROO\ is identical to the action
given in eq.\eone\ after
we make the following identification of fields\foot{
Partial results to this effect have been obtained previously in
refs.\RCOMP.}
$$
M = U OO^T U^{-1}, ~~~~~~ {i\over\lambda} = {\phi_1 -\phi_2\over \phi_1 +
\phi_2}
\eqn\esixtynine
$$
and a redefinition of the gauge fields $\vec F\to U\vec F$.
Here $U$ is a matrix that diagonalizes $L$:
$$
U^{-1} L U =\pmatrix{ I_6 & &\cr & -I_6 &\cr & & -I_{16}\cr}
\eqn\eseventy
$$
In eq.\esixtynine, the right hand sides of the equations contain variables
appearing in ref.\RDEROO, whereas the left hand sides of the equations
contain variables appearing in eq.\eone.
Since the bosonic part of the two actions are identical,
we have a strong evidence that the two theories are indeed
the same.
We shall proceed with the assumption that this is the case.
This assumption is further supported by the fact that both theories have
local $N=4$ supersymmetry.
In ref.\RDEROO\ it was shown that the gauge field equations of motion are
invariant under SL(2,R) transformation even after including the fermionic
fields in this theory. This result, combined with an earlier result of
Gaillard and Zumino\RGAZU\ shows that all the field equations must be
invariant under the SL(2,R) transformation.
This establishes the SL(2,R) invariance of the full
set of equations of motion of the dimensionally reduced heterotic string
theory.
A similar argument has been advanced previously by Schwarz\RSCHWARZ.
\chapter{Force Exerted by a Dual String on an Ordinary String}
We now begin our study of the interaction between dual strings, and also
between a dual string and an ordinary string.
The first quantity we would like to compute is the force exerted by an
infinitely long straight dual
string on an ordinary test string kept parallel to itself some distance
away.
A similar computation for two ordinary strings parallel to each other had
yielded the result that the net force between such strings
vanishes\RDGHR\RSTRING.
We begin by writing down the action of a test string in the presence of a
background axion-dilaton-gravitational field:
$$
S_{string} = \int d^2\xi (\sqrt{-\gamma} G_{S\mu\nu}(X)\gamma^{\alpha\beta}
\partial_\alpha X^\mu \partial_\beta X^\nu +\epsilon^{\alpha\beta} B_{\mu\nu}(X)
\partial_\alpha X^\mu \partial_\beta X^\nu ) +\ldots
\eqn\efourteen
$$
where $\xi^\alpha$ and $\gamma_{\alpha\beta}$ denote the coordinates and
metric on the string world-sheet respectively, $X^\mu$ denote the
coordinates of the string, $\Phi$ denotes the dilaton field, and
$G_{S\mu\nu}$ denotes the string metric.
$\ldots$ denotes terms involving background gauge fields and world sheet
fermionic fields, which
we are setting to zero for the present analysis.
The relation between the fields appearing here and those in eq.\eone\ are
given by,
$$\eqalign{
G_{S\mu\nu} =& e^{\Phi} G_{\mu\nu}, ~~~~~~ e^{-\Phi} = \lambda_2\cr
G^{\sigma\sigma'}\partial_{\sigma'}\lambda_1 =& {1\over 2} (\sqrt{-\det G})^{-1}
e^{-2\Phi}\epsilon^{\mu\nu\rho\sigma} (\partial_\mu B_{\nu\rho} + {1\over 2}
\vec A_\mu^T . L . \vec
F_{\nu\rho}) \cr
}
\eqn\efifteen
$$
Let us now consider the test string lying along the 3 direction,
$$
X^0 =\xi^0, ~~~~ X^3 =\xi^1
\eqn\esixteen
$$
with the gauge choice,
$$
\sqrt{-\gamma} \gamma^{\alpha\beta} =\eta^{\alpha\beta}
\eqn\esixteena
$$
If $Z=X^1+i X^2$ denotes the complex coordinate transverse to the string,
then the equation of motion of $Z$ that follows from the action
\efourteen\ in the background \eten\ is given by (with the help of
eqs.\efifteen)
$$
D^\alpha D_\alpha Z +\Gamma^Z_{\mu\nu}\partial_\alpha X^\mu \partial^\alpha X^\nu -i
{\partial_{\bar Z}\lambda \over \lambda_2} G^{Z\bar Z}=0
\eqn\eseventeen
$$
The second term, which in this gauge is given by
$\Gamma^Z_{33}-\Gamma^Z_{tt}$, vanishes, as can easily be seen by computing
the Christoffel symbols from the metric given in eq.\eten.
The last term also vanishes, since $\lambda$ given in eq.\eten\ is an
analytic function of $z$.
The net result is that the equation of motion of the coordinate $Z$ looks
like,
$$
D^\alpha D_\alpha Z =0
\eqn\eeighteen
$$
showing that there is no net force exerted on the test string.
The same analysis can also be repeated for the case where the dual string
carries electric and magnetic charge density (these are the solutions
deformed by the charge zero modes). As in the case of ref.\RSTRING, in
this case the net magnetic and electric forces exerted on the test string
cancel each other.
\chapter{Adiabatic Transport of a Charged Particle Around a Dual
String}
We shall now consider the effect of adiabatically transporting a point
particle carrying magnetic charge $\vec Q_m$ and electric charge $\vec
Q_e$ around a dual string.\foot{Similar phenomena associated with the
usual $R\to 1/R$ duality transformation were discussed in ref.\RGREENE.}
Although the results of this and the next section may be derived by
starting from the known results about the interaction of a dyon with an
axionic domain wall\RSIKIVIE\ and then making a duality transformation of
the results, we shall carry out the analysis explicitly, since this gives
a physical understanding of the interaction mechanism.
As was shown in ref.\RDYON\ the allowed spectrum of $(\vec Q_m, \vec Q_e)$
is
$$
(\vec Q_m,\vec Q_e)=(M^{(0)} L\vec\beta, ~(\vec\alpha +\lambda_1^{(0)} \vec\beta)/\lambda_2^{(0)})
\eqn\enineteen
$$
where $\lambda_1^{(0)}$, $\lambda_2^{(0)}$ and $M^{(0)}$ are the asymptotic values of the fields
$\lambda_1$, $\lambda_2$ and $M$ respectively, and $\vec\alpha$, $\vec\beta$ are 28
dimensional vectors
belonging to an even, self dual lattice $P$ with the metric $L$.
Let us now consider a dual string that is a transform of the ordinary
string by the group element
$g=\pmatrix{a & b\cr c & d}$ and let $\tilde g=\pmatrix{\tilde a & \tilde b\cr \tilde c &
\tilde d}$ be the corresponding element defined in eq.\etwelve.
As we adiabatically transport the particle around the string, we expect
$\vec\alpha$, $\vec\beta$ to remain fixed since they take discrete values.
After we go around the string once, the background value of the field
$\lambda$ changes to
$$
\lambda' = {\tilde a \lambda +\tilde b\over \tilde c\lambda +\tilde d}
\eqn\etwentyone
$$
which, in turn, implies that the electric and magnetic charge vectors of
the particle change to
$$
(\vec Q_m', \vec Q_e') =(M^{(0)} L\vec\beta, {1\over \lambda_2^{\prime(0)}}(\vec\alpha +\lambda_1^{\prime(0)}\vec\beta))
\eqn\etwentytwo
$$
However, the background seen by the particle is now different from the one
that was seen by it before.
As a result, we should not directly compare $(\vec Q_m', \vec Q_e')$ with
$(\vec Q_m, \vec Q_e)$.
Instead, we need to make a duality transformation of the fields and the
charges by the element,
$$
\tilde g^{-1}=\pmatrix{\tilde d & -\tilde b\cr -\tilde c & \tilde a}
\eqn\etwentythree
$$
to state the results in the original coordinate system.
This transformation sends $\lambda'$ back to $\lambda$, and $(\vec\alpha,\vec\beta)$ to
$(\vec\alpha', \vec\beta')$ given by\RDYON\
$$
\pmatrix{\vec\alpha'\cr -\vec\beta'} = \pmatrix{\tilde d & -\tilde b\cr -\tilde c & \tilde a\cr} \pmatrix{\vec\alpha
\cr -\vec\beta}
\eqn\etwentyfour
$$
Thus the final electric and magnetic charges of the particle are given by:
$$
(\vec Q_m'', \vec Q_e'') = (M^{(0)} L\vec\beta', {1\over\lambda_2^{(0)}}(\vec\alpha' +\lambda_1^{(0)}\vec\beta'))
\eqn\etwentyfive
$$
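As an illustrative numerical sketch (not part of the original derivation: we reduce to a single charge pair and set $M^{(0)} L$ to unity), the chain of eqs.\etwentyone--\etwentyfive\ can be checked for the concrete element $\tilde g=\pmatrix{0 & -1\cr 1 & 0\cr}$:

```python
# Numerical sketch of eqs. (etwentyone)-(etwentyfive): monodromy of lambda
# and charge relabeling.  One charge pair, M^(0) L set to 1 -- an
# illustrative simplification of the 28-dimensional case.
ta, tb, tc, td = 0, -1, 1, 0           # tilde-g = S, unit determinant
lam = complex(0, 2)                    # asymptotic value lambda^(0) = 2i
alpha, beta = 3, 1                     # discrete charge quantum numbers
lam_p = (ta*lam + tb) / (tc*lam + td)  # eq. (etwentyone): lambda after one loop
# The relabeling by tilde-g^{-1} (eq. (etwentythree)) sends lambda' back:
assert abs((td*lam_p - tb) / (-tc*lam_p + ta) - lam) < 1e-12
alpha_p = td*alpha + tb*beta           # eq. (etwentyfour)
beta_p  = tc*alpha + ta*beta
# eq. (etwentyfive): final charges in the original coordinate system
Qm_f = beta_p
Qe_f = (alpha_p + lam.real*beta_p) / lam.imag
print(Qm_f, Qe_f)                      # -> 3 -0.5
```

The assertion confirms that the duality transformation by $\tilde g^{-1}$ restores the original background, so the final charges can be compared with the initial ones.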
The conservation of electric and magnetic charge implies that
the charge lost by the particle must be deposited on the string, but the
above analysis does not show explicitly how it happens.
Also, for a realistic string, once the SL(2,Z) symmetry is broken by the
instanton corrections, the field around a string is not given by eq.\eten,
but remains equal to its vacuum value $\lambda^{(0)}$ in most of the region of
space, and changes quickly to $\lambda^{\prime(0)}=(\tilde a\lambda^{(0)}+\tilde b)/(\tilde c\lambda^{(0)}+\tilde d)$ across a
thin domain wall bounded by the string.
We shall now derive eq.\etwentyfive\ using this more realistic picture of
strings, which will also clarify the charge exchange mechanism between the
string and the domain wall.
\chapter{Dynamics of Domain Wall Penetration}
\centerline{\singlespace
\vbox{\hbox{A}
\hbox{$\big |$\hskip -.1in $\cdot$}\hbox{$\big |$\hskip -.2in
$\cdot$}\hbox{$\big |$\hskip -.3in $\cdot$}\hbox{$\big |$\hskip -.35in
$\cdot$}\hbox{$\big |$\hskip -.4in
$\cdot$\hskip -.3in D\hskip .7in C}
\hbox{$\big |$\hskip -.35in $\cdot$}\hbox{$\big |$\hskip -.3in $\cdot$}
\hbox{$\big |$\hskip -.2in $\cdot$}\hbox{$\big |$\hskip -.1in $\cdot$}
\hbox{B}\hbox{~}\hbox{~}\hbox{~\hskip -1.4in Fig.~1. Picture of a string}
}}
Let us consider the picture of the string depicted in Fig.1. The points
$A$ and $B$ in this figure denote the points at which the string intersects
the plane of the paper.
The line $C$ connecting the two points represents the intersection of the
domain wall (bounded by the string) with the plane of the paper.
On the right side of the wall $C$ the field $\lambda$ takes the value
$\lambda^{(0)}$, whereas on the left side of the wall the field $\lambda$ takes
the value $\lambda^{\prime(0)}$.
Finally, the line $D$ represents the intersection of another fictitious
wall bounded by the string with the plane of the paper.
This fictitious
wall is characterised by the choice of two different coordinate systems on
the two sides of the wall, related by the SL(2,Z) transformation with the
group element $\tilde g$ such that on the left hand side of the wall $D$ the
field $\lambda$ takes value $\lambda^{(0)}$.
In other words, the field $\lambda$ takes value $\lambda^{(0)}$ on the right side of
$C$ and the left side of $D$, but takes the value $\lambda^{\prime(0)}$ in the region
bounded by $C$ and $D$.
Note that $C$ represents a real domain wall with finite
thickness, and $\lambda$ changes continuously from $\lambda^{(0)}$ to $\lambda^{\prime(0)}$ as we
cross the wall, whereas $D$ denotes an infinitely thin fictitious wall,
and arises only because we choose to use two different coordinate systems
on the two sides of the wall.
As we shall see, the results of passage of a particle through these two
walls are completely different.
In both cases, however, as the particle crosses the wall, it deposits
certain amount of electric and magnetic charge on the wall, which
ultimately flows back to the string.
In studying the passage of charged particles through these walls, we shall
use the method used by Sikivie\RSIKIVIE\ for studying the passage of
charged particles through an axionic domain wall.
First let us consider the passage of the particle through $C$.
Both inside and outside the wall, the gauge fields satisfy the equation
of motion:
$$
D_\mu (\lambda_2ML\vec F^{\mu\nu} -\lambda_1\vec{\tilde F}^{\mu\nu})=0
\eqn\etwentysix
$$
Let us now ignore all time derivatives (assuming that the motion of the
particle is slow) and define,
$$
\vec F^{i0}=\vec E^i,~~~~\vec{\tilde F}^{i0}=\vec B^i
\eqn\etwentyseven
$$
We shall also assume that the energy per unit area of the wall
is small compared to $M_{Pl}^3$,
so that we can ignore the gravitational field produced by the wall.
At the same time, the thickness of the wall is taken to be small compared
to the overall size of the string, so that the variation of the
gravitational field due to the string across the wall is small.
The equation of motion \etwentysix\ and the Bianchi identity now take the
form:
$$\eqalign{
D_i (\lambda_2 ML\vec E^i-\lambda_1\vec B^i)=& 0, ~~~~ \epsilon^{ijk}D_j
(\lambda_2 ML \vec B_k +\lambda_1 \vec E_k)=0\cr
D_i \vec B^i=& 0, ~~~~ \epsilon^{ijk} D_j\vec E_k=0\cr
}
\eqn\etwentyeight
$$
Let us now denote by $\vec B_{\perp}$ ($\vec E_\perp$) and $\vec B_\perp'$
($\vec E_\perp'$) the components of the
magnetic (electric) fields perpendicular to the domain wall on the right
and the left side of $C$ respectively.
Similarly, we denote by $\vec B_{||}$ ($\vec E_{||}$) and $\vec B_{||}'$
($\vec E_{||}'$) the components of magnetic (electric) fields parallel to the
wall on the two sides of the wall.
In the thin wall approximation, the variation of the various fields in
directions parallel to the wall is small compared to that in directions
perpendicular to the wall.
Eq.\etwentyeight\ then gives
$$\eqalign{
\vec B'_\perp =& \vec B_\perp, ~~~~ \lambda_2^{\prime(0)} M^{(0)} L\vec E'_\perp -\lambda_1^{\prime(0)}\vec B'_\perp =
\lambda_2^{(0)} M^{(0)} L\vec E_\perp -\lambda_1^{(0)}\vec B_\perp \cr
\vec E_{||}' =& \vec E_{||}, ~~~~ \lambda_2^{\prime(0)} M^{(0)} L \vec B'_{||} +\lambda_1^{\prime(0)}\vec E'_{||} =
\lambda_2^{(0)} M^{(0)} L \vec B_{||} +\lambda_1^{(0)}\vec E_{||}\cr
}
\eqn\ethirty
$$
Thus the total induced magnetic charge on the wall is given by,
$$
\Delta\vec Q_m ={1\over 4\pi}\int (\vec B'_\perp -\vec B_\perp) d^2 S =0
\eqn\ethirtyone
$$
On the other hand, the total induced electric charge is given by,
$$
\Delta\vec Q_e ={1\over 4\pi}\int (\vec E'_\perp -\vec E_\perp)d^2S
\eqn\ethirtytwo
$$
and is non-zero in general.
We now consider a particle with charge $(\vec Q_m, \vec Q_e)$ approaching
the wall $C$ from the right.
The electromagnetic fields due to the particle in the absence of the
domain wall $C$ are given by
$\vec E^i =\vec Q_e r^i/r^3$, $\vec B^i =\vec Q_m r^i/r^3$.
In order to calculate the total induced charge on the wall, we need to
calculate the electric and magnetic fields that are obtained by solving
eqs.\ethirty\ and then use eqs.\ethirtyone, \ethirtytwo.
This is done using the method of images.
Let $P$ denote the position of the incoming particle at a given instant of
time and $Q$ be its image point.
We assume that the field to the right side of $C$ is reproduced by the
original particle at the point $P$ and a fictitious charge $(\vec q^{(1)}_m,
\vec q^{(1)}_e)$ placed at the point $Q$.
On the other hand, the field to the left side of $C$ is assumed to be
given by the original particle, together with a fictitious charge
$(\vec q^{(2)}_m, \vec q^{(2)}_e)$ placed at the point $P$.
The boundary conditions \ethirty\ then give,
$$\eqalign{
\vec Q_m + \vec q^{(2)}_m =& - (\vec q^{(1)}_m -\vec Q_m)\cr
\vec Q_e + \vec q^{(2)}_e =& \vec q^{(1)}_e +\vec Q_e\cr
\lambda_2^{\prime(0)} M^{(0)} L . (\vec q^{(2)}_e +\vec Q_e) - \lambda_1^{\prime(0)} (\vec q^{(2)}_m + \vec Q_m)
=& \lambda_2^{(0)} M^{(0)} L . (-\vec q^{(1)}_e +\vec Q_e) - \lambda_1^{(0)} (-\vec q^{(1)}_m + \vec Q_m)\cr
\lambda_2^{\prime(0)} M^{(0)} L . (\vec q^{(2)}_m +\vec Q_m) + \lambda_1^{\prime(0)} (\vec q^{(2)}_e + \vec Q_e)
=& \lambda_2^{(0)} M^{(0)} L . (\vec q^{(1)}_m +\vec Q_m) + \lambda_1^{(0)} (\vec q^{(1)}_e + \vec Q_e)\cr
}
\eqn\ebcone
$$
Explicit expressions for the image charges can be found by solving these
four equations.
We are, however, interested in computing the total electric and magnetic
charges induced on the wall.
Using eqs.\ethirtyone, \ethirtytwo\ and \ebcone, these are given by,
$$\eqalign{
\Delta\vec Q_m =& {1\over 2} (\vec q^{(1)}_m + \vec q^{(2)}_m) =0\cr
\Delta\vec Q_e =& {1\over 2} (\vec q^{(1)}_e + \vec q^{(2)}_e)\cr
=& {(\lambda_2^{(0)})^2 - (\lambda_2^{\prime(0)})^2 - (\lambda_1^{(0)} -\lambda_1^{\prime(0)})^2\over (\lambda_2^{(0)} +\lambda_2^{\prime(0)})^2 + (\lambda_1^{\prime(0)}
-\lambda_1^{(0)})^2} \vec Q_e + {2\lambda_2^{(0)} (\lambda_1^{\prime(0)} -\lambda_1^{(0)})\over (\lambda_2^{(0)} +\lambda_2^{\prime(0)})^2 + (\lambda_1^{\prime(0)}
-\lambda_1^{(0)})^2} M^{(0)} L .\vec Q_m\cr
}
\eqn\ethirtyfive
$$
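The four linear conditions \ebcone\ can also be solved numerically. The following sketch (an illustration outside the original text: one charge pair, $M^{(0)} L$ set to unity, arbitrary sample values of the moduli) solves them and checks the closed-form induced charges \ethirtyone, \ethirtyfive:

```python
# Image-charge sketch for the passage through the physical wall C.
# One charge pair with M^(0) L -> 1 -- an illustrative reduction of the
# 28-dimensional boundary-value problem (ebcone).
l1, l2 = 0.3, 1.7      # lambda^(0)  = l1 + i l2 on the right of C (sample values)
l1p, l2p = 0.9, 0.6    # lambda'^(0) on the left of C
Qm, Qe = 2.0, 5.0      # charges of the incoming particle
# The first two conditions of (ebcone) give q2m = -q1m and q2e = q1e;
# the remaining two reduce to a 2x2 linear system for (q1e, q1m):
s, d = l2 + l2p, l1 - l1p
a11, a12, b1 = s, -d, (l2 - l2p)*Qe - d*Qm
a21, a22, b2 = -d, -s, (l2 - l2p)*Qm + d*Qe
det = a11*a22 - a12*a21
q1e = (b1*a22 - b2*a12) / det
q1m = (a11*b2 - a21*b1) / det
# Induced charges on the wall: Delta Qm = (q1m + q2m)/2, Delta Qe = (q1e + q2e)/2
dQm = 0.5*(q1m + (-q1m))
dQe = 0.5*(q1e + q1e)
# Compare with the closed-form result, eq. (ethirtyfive):
den = (l2 + l2p)**2 + (l1p - l1)**2
dQe_formula = ((l2**2 - l2p**2 - (l1 - l1p)**2)/den)*Qe + (2*l2*(l1p - l1)/den)*Qm
assert abs(dQm) < 1e-12 and abs(dQe - dQe_formula) < 1e-12
```

The agreement is exact for any values of the moduli, since the solution of the linear system reproduces eq.\ethirtyfive\ identically.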
Note that the total induced charge on the wall is independent of the
distance of the particle from the wall, as long as the particle is close
enough that we can regard the wall as infinite.
As the particle approaches closer and closer to the wall, the total
induced charge gets concentrated at the point of impact.
Let us assume that after passing through the wall the particle emerges
with charge $(\vec Q_m', \vec Q_e')$.
A similar analysis now shows that the total charge induced on the
wall is given by,
$$\eqalign{
\Delta\vec Q_m' =& 0\cr
\Delta \vec Q_e'=& {(\lambda_2^{\prime(0)})^2 - (\lambda_2^{(0)})^2 - (\lambda_1^{(0)} -\lambda_1^{\prime(0)})^2\over (\lambda_2^{(0)}
+\lambda_2^{\prime(0)})^2 + (\lambda_1^{\prime(0)}
-\lambda_1^{(0)})^2} \vec Q_e' + {2\lambda_2^{\prime(0)} (\lambda_1^{(0)} -\lambda_1^{\prime(0)})\over (\lambda_2^{(0)} +\lambda_2^{\prime(0)})^2 + (\lambda_1^{\prime(0)}
-\lambda_1^{(0)})^2} M^{(0)} L .\vec Q_m'\cr
}
\eqn\ethirtyeight
$$
This result can be interpreted by saying that as the particle penetrates
the domain wall, it exchanges charge with the wall.
Charge conservation then gives
$$
\eqalign{
\vec Q_e +\Delta \vec Q_e =& \vec Q_e' +\Delta\vec Q_e'\cr
\vec Q_m +\Delta\vec Q_m =& \vec Q_m' +\Delta\vec Q_m'\cr
}
\eqn\eforty
$$
Eqs.\ethirtyfive, \ethirtyeight\ and \eforty\ give,
$$\eqalign{
\vec Q_e' =&{\lambda_2^{(0)}\over\lambda_2^{\prime(0)}}\vec Q_e +{1\over\lambda_2^{\prime(0)}} (\lambda_1^{\prime(0)}-\lambda_1^{(0)})M^{(0)} L\vec Q_m\cr
\vec Q_m' =&\vec Q_m
}
\eqn\efortyone
$$
Finally, using eq.\enineteen, we get
$$
\vec Q_m' =M^{(0)} L\vec\beta, ~~~~ \vec Q_e' ={1\over\lambda_2^{\prime(0)}}(\vec\alpha +\lambda_1^{\prime(0)}\vec\beta)
\eqn\efortythree
$$
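The algebra leading from \ethirtyfive, \ethirtyeight\ and \eforty\ to \efortyone\ can be checked numerically in the same reduced setting (an illustrative sketch: one charge pair, $M^{(0)} L$ set to unity, sample values of the moduli):

```python
# Consistency sketch of the charge-conservation step
# (ethirtyfive) + (ethirtyeight) + (eforty) -> (efortyone),
# for one charge pair with M^(0) L -> 1 (illustrative values).
l1, l2 = 0.3, 1.7      # lambda^(0)  on the right of the wall
l1p, l2p = 0.9, 0.6    # lambda'^(0) on the left
Qm, Qe = 2.0, 5.0
den = (l2 + l2p)**2 + (l1p - l1)**2
# Induced electric charge before crossing, eq. (ethirtyfive):
dQe = ((l2**2 - l2p**2 - (l1 - l1p)**2)/den)*Qe + (2*l2*(l1p - l1)/den)*Qm
# Outgoing charges according to eq. (efortyone):
Qm_out = Qm
Qe_out = (l2/l2p)*Qe + ((l1p - l1)/l2p)*Qm
# Induced charge after crossing, eq. (ethirtyeight), on the outgoing charges:
dQe_out = ((l2p**2 - l2**2 - (l1 - l1p)**2)/den)*Qe_out \
          + (2*l2p*(l1 - l1p)/den)*Qm_out
# Charge conservation, eq. (eforty):
assert abs((Qe + dQe) - (Qe_out + dQe_out)) < 1e-12
```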
Let us now analyse the effect of crossing the fictitious wall $D$ on the
electric and magnetic charges of the particle.
Let $\vec F'_{\mu\nu}$ and $\vec F''_{\mu\nu}$ be the electromagnetic fields on
the right and the left sides of this fictitious wall.
Then from eq.\efive\ we see that the boundary condition across this wall
is given by,
$$
\vec F'_{\mu\nu}=(\tilde c\lambda_1^{(0)}+\tilde d)\vec F''_{\mu\nu}+\tilde c\lambda_2^{(0)} M^{(0)} L. \vec{\tilde F}''_{\mu\nu}
\eqn\efortyfour
$$
which gives,
$$\eqalign{
\vec E'_{\perp, ||}=&(\tilde c\lambda_1^{(0)}+\tilde d)\vec E''_{\perp, ||} +\tilde c\lambda_2^{(0)} M^{(0)}
L\vec B''_{\perp, ||}\cr
\vec B'_{\perp, ||}=&(\tilde c\lambda_1^{(0)}+\tilde d)\vec B''_{\perp, ||} -\tilde c\lambda_2^{(0)} M^{(0)}
L\vec E''_{\perp, ||}\cr
}
\eqn\efortyfive
$$
and the reverse relations
$$\eqalign{
\vec E''_{\perp, ||}=&(-\tilde c\lambda_1^{\prime(0)}+\tilde a)\vec E'_{\perp, ||} -\tilde c\lambda_2^{\prime(0)} M^{(0)}
L\vec B'_{\perp, ||}\cr
\vec B''_{\perp, ||}=&(-\tilde c\lambda_1^{\prime(0)}+\tilde a)\vec B'_{\perp, ||} +\tilde c\lambda_2^{\prime(0)} M^{(0)}
L\vec E'_{\perp, ||}\cr
}
\eqn\efortyfivea
$$
The total induced electric and magnetic charges on the wall are
given by $\int (\vec E''_\perp - \vec E'_\perp) d^2S/4\pi$ and $\int
(\vec B''_\perp -\vec B'_\perp) d^2S/4\pi$ respectively.
As before, we first consider a particle carrying charge $(\vec Q_m',\vec Q_e')$
approaching the wall from the right, and calculate the total induced
charge $(\Delta\vec Q_m', \Delta\vec Q_e')$ on the wall using eqs.\efortyfivea.
Then we assume that after passing through the wall, the particle carries
charges $(\vec Q_m'',\vec Q_e'')$ and calculate the corresponding induced charges
$(\Delta\vec Q_m'', \Delta\vec Q_e'')$ using eq.\efortyfive.
Finally, using the equation for charge conservation,
$$
(\vec Q_m'+\Delta\vec Q_m', \vec Q_e'+\Delta\vec Q_e')
=(\vec Q_m''+\Delta\vec Q_m'', \vec Q_e''+\Delta\vec Q_e'')
\eqn\efortynine
$$
we get
$$\eqalign{
\vec Q_e''=&(-\tilde c\lambda_1^{\prime(0)} +\tilde a)\vec Q_e' -\tilde c\lambda_2^{\prime(0)} M^{(0)} L\vec Q_m'\cr
\vec Q_m''=&(-\tilde c\lambda_1^{\prime(0)} +\tilde a)\vec Q_m' +\tilde c\lambda_2^{\prime(0)} M^{(0)} L\vec Q_e'\cr
}
\eqn\efifty
$$
Using eq.\efortythree, and the fact that $\lambda^{\prime(0)}$ is the SL(2,Z) transform of $\lambda^{(0)}$
by the element $\pmatrix{\tilde a &\tilde b\cr \tilde c &\tilde d\cr}$ we recover
eqs.\etwentyfour, \etwentyfive.
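The agreement of the two routes can also be checked concretely: crossing the physical wall $C$ (eq.\efortythree) and then the fictitious wall $D$ (eq.\efifty) reproduces \etwentyfour, \etwentyfive. A numerical sketch (outside the original text: one charge pair, $M^{(0)} L$ set to unity, $\tilde g = S$ as a sample SL(2,Z) element):

```python
# Cross-check: wall C (efortythree) followed by wall D (efifty) reproduces
# the adiabatic result (etwentyfour)-(etwentyfive).
# One charge pair, M^(0) L -> 1; illustrative choices below.
ta, tb, tc, td = 0, -1, 1, 0               # tilde-g = S
l1, l2 = 0.0, 2.0                          # lambda^(0) = 2i
lam_p = (ta*complex(l1, l2) + tb) / (tc*complex(l1, l2) + td)
l1p, l2p = lam_p.real, lam_p.imag          # lambda'^(0) = i/2
alpha, beta = 3, 1
# After crossing C, eq. (efortythree):
Qm_c = beta
Qe_c = (alpha + l1p*beta) / l2p
# After crossing D, eq. (efifty):
Qe_d = (-tc*l1p + ta)*Qe_c - tc*l2p*Qm_c
Qm_d = (-tc*l1p + ta)*Qm_c + tc*l2p*Qe_c
# Compare with eqs. (etwentyfour)-(etwentyfive):
alpha_p = td*alpha + tb*beta
beta_p  = tc*alpha + ta*beta
assert abs(Qm_d - beta_p) < 1e-12
assert abs(Qe_d - (alpha_p + l1*beta_p)/l2) < 1e-12
```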
Although this provides a rederivation of the results of the last section,
this derivation makes it clear how the charge lost by the particle is
deposited on the string.
As the particle goes farther away from the wall, the
charge spreads over a wider region of the wall, thereby decreasing the
charge density induced on the wall.
When the distance of the particle from the wall is much larger than the
string size, the
total charge induced on the wall becomes negligible, showing that all the
induced charge flows back to the boundary of the wall, i.e. the string.
\chapter{Scattering of Dual Strings}
We shall now use the results derived above to study the scattering of
dual strings.
As we saw earlier, a dual string is characterized by a group element
$g=\pmatrix{a& b\cr c & d\cr}\in SL(2,Z)$, or, equivalently, the group
element $\tilde g$ defined in eq.\etwelve.\foot{Note that $\tilde g$ remains
invariant under a change $g\to gT$.
Since it is the element $\tilde g$ that characterizes inequivalent strings, we
see that the elements $g$ and $gT$ describe the same string.
As we shall see, all our results will be invariant under the
transformation $g\to gT$.}
We shall first find the allowed spectrum of $\vec Q_e$ and $\vec Q_m$, or
equivalently of $\vec\alpha$ and $\vec\beta$ defined through eq.\enineteen, for a
dual string characterized by a given element $g$.
We start with the observation that for an ordinary string
$(\vec\alpha,\vec\beta)=(\vec l, 0)$ where $\vec l\in P$,
since these states do not carry any magnetic charge.
{}From this the allowed values of $\vec\alpha$ and $\vec\beta$ for the dual string can
be found by SL(2,Z) transformation, and are given by,
$$
\pmatrix{\vec\alpha\cr -\vec\beta\cr} = \pmatrix{a & b\cr c & d\cr}\pmatrix{\vec l\cr
0\cr} =\pmatrix{a\vec l\cr c\vec l\cr}
\eqn\efiftyfive
$$
This shows that the magnetic and electric charges of a dual string
characterized by a specific SL(2,Z) element $g$ are related.
We shall denote the state of a dual string by the quantum numbers
$(g, \vec l, \ldots)$ where $\ldots$ denote other quantum numbers which
are not of interest for our analysis.
We shall now consider two such strings, characterized by the quantum
numbers $(g_1, \vec l_1)$ and $(g_2, \vec l_2)$ respectively, and consider
a scattering where string 1 passes through string 2 without touching
it.
We shall assume that string 2 is a long string, whereas string 1 is small in
size.
The first point to note is that after string 1 passes through string 2,
it is characterized by a new group element
$$
g_1'=\tilde g_2^{-1}g_1
\eqn\efiftyeight
$$
\noindent Proof: Before scattering, if we go around string 1, the new
field configuration $\phi'$ is related to the original field
configuration $\phi$ through the monodromy relation $\phi'=\tilde g_1\phi$.
When string 1 crosses the fictitious wall $D$ during the process of
scattering with the string 2, we use a different coordinate system
$\psi=\tilde g_2^{-1}\phi$.
In this new coordinate system, the relation $\phi'=\tilde g_1\phi$ may be
expressed as,
$$
\psi'=\tilde g_2^{-1} \tilde g_1\tilde g_2\psi
\eqn\efiftynine
$$
Thus $\tilde g_1'=\tilde g_2^{-1}\tilde g_1\tilde g_2$.
Using the relations $\tilde g_1=g_1 T
g_1^{-1}$ and $\tilde g_1'=g_1' T g_1^{\prime -1}$ we get eq.\efiftyeight\
up to a transformation of the form $g_1'\to g_1' T$.
Using eqs.\etwentyfour, \efiftyfive\ and \efiftyeight, we see that the
electric and magnetic charge quantum numbers $\vec\alpha_1'$ and $\vec\beta_1'$ of string
1 after scattering are given by,
$$
\pmatrix{\vec\alpha_1'\cr -\vec\beta_1'\cr} = \tilde g_2^{-1}\pmatrix{\vec\alpha_1\cr -\vec\beta_1}
=\tilde g_2^{-1} g_1 \pmatrix{\vec l_1\cr 0\cr}
=g_1'\pmatrix{\vec l_1\cr 0\cr}
\eqn\esixtytwo
$$
showing that,
$$
\vec l_1'=\vec l_1
\eqn\esixtythree
$$
Eqs.\efiftyeight\ and \esixtythree\ determine the quantum numbers of the
string 1 after scattering.
Let us now study the quantum numbers of string 2 after scattering.
First of all, note that during this scattering process $g_2$ and $\tilde g_2$
remain unchanged, i.e.
$$
g_2'=g_2
\eqn\esixtyfour
$$
Conservation of electric and magnetic charge implies that the charges lost
by string 1 must be deposited on string 2.
This gives,
$$
\pmatrix{\vec\alpha_2'\cr -\vec\beta_2'\cr} = \pmatrix{\vec\alpha_2\cr -\vec\beta_2\cr}
+\pmatrix{\vec\alpha_1\cr -\vec\beta_1\cr} -\pmatrix{\vec\alpha'_1\cr -\vec\beta_1'\cr}
\eqn\esixtyfive
$$
Using eqs.\efiftyfive, \esixtytwo\ and \etwelve\ we get,
$$
\pmatrix{\vec\alpha_2'\cr -\vec\beta_2'\cr} = g_2\pmatrix{\vec l_2'\cr 0\cr}
\eqn\esixtyseven
$$
where,
$$
\vec l_2'=\vec l_2+(a_2 c_1 - a_1 c_2)\vec l_1
\eqn\esixtyeight
$$
Eqs.\efiftyeight, \esixtythree, \esixtyfour\ and \esixtyeight\ describe the
final result of scattering when string 1 passes through string 2.
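The bookkeeping of eqs.\efiftyeight--\esixtyeight\ can be verified with concrete SL(2,Z) matrices. In the sketch below the element $T$ entering $\tilde g = gTg^{-1}$ is taken to be $\pmatrix{1 & 1\cr 0 & 1\cr}$; this explicit form is an assumption for illustration, since eq.\etwelve\ defining it lies outside the present excerpt:

```python
# Sketch of the scattering rules (efiftyeight)-(esixtyeight) with concrete
# SL(2,Z) elements.  ASSUMPTION: T = [[1,1],[0,1]] in tilde-g = g T g^{-1}.
def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
def inv(A):          # inverse of a unit-determinant 2x2 integer matrix
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    return [[d, -b], [-c, a]]
T  = [[1, 1], [0, 1]]
g1 = [[2, 1], [1, 1]]            # string 1, det = 1
g2 = [[1, 1], [2, 3]]            # string 2, det = 1
l1q, l2q = 5, 7                  # lattice quantum numbers l_1, l_2
tg2 = mul(mul(g2, T), inv(g2))   # monodromy element of string 2
g1p = mul(inv(tg2), g1)          # eq. (efiftyeight): new element of string 1
# Charge bookkeeping, eqs. (efiftyfive), (esixtytwo) and (esixtyfive):
v1  = [g1[0][0]*l1q,  g1[1][0]*l1q]    # (alpha_1, -beta_1) = g1 (l_1, 0)
v1p = [g1p[0][0]*l1q, g1p[1][0]*l1q]   # after scattering
v2  = [g2[0][0]*l2q,  g2[1][0]*l2q]
v2p = [v2[0] + v1[0] - v1p[0], v2[1] + v1[1] - v1p[1]]
# eq. (esixtyeight): l_2' = l_2 + (a2 c1 - a1 c2) l_1
l2q_p = l2q + (g2[0][0]*g1[1][0] - g1[0][0]*g2[1][0])*l1q
assert v2p == [g2[0][0]*l2q_p, g2[1][0]*l2q_p]
```

With these sample matrices the shift evaluates to $l_2' = 7 + (1\cdot 1 - 2\cdot 2)\cdot 5 = -8$, and the assertion confirms that the deposited charge is indeed of the form $g_2(\vec l_2{}',0)$.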
\chapter{Summary}
To summarize, in this paper we have studied the classical scattering of
dual strings
(strings carrying electric and magnetic charges) and have shown that there
is a definite change of quantum numbers of the string as a result of the
scattering.
The changes in the quantum numbers are determined by the quantum numbers
of the original strings, and depend on which string passed through the
other during the scattering.
The picture of magnetically charged string states that we have used is
valid for sufficiently long string states, but is not useful
for the description of point-like string states.
A description of such states is likely to be found in 't Hooft - Polyakov
like monopole solutions in string theory\RMONOPOLE.
\refout
\end
\section{Introduction}
The present work is based on our previous paper \cite{Tresguerres:2007ih}. There we studied gravitation and electrodynamics jointly, in the form of a gauge theory of the Poincar\'e group times the internal group $U(1)$. Following the approach of Hehl et al. to gauge theories of gravity \cite{Hehl:1974cn}--\cite{Obukhov:2006ge}, we made use of a Lagrangian formalism to get the field equations and the Noether identities associated with the gauge symmetry, devoting special attention to energy conservation. This latter aspect of \cite{Tresguerres:2007ih}, where exchange between different forms of energy plays a central role, strongly suggests looking for a thermodynamic interpretation of the corresponding formulas, although this aim remains unattainable as long as only single matter particles are involved. For this reason, we are interested in extending similar energetic considerations to macroscopic matter, in order to be able to construct an approach to thermodynamics compatible with gauge theories of gravity.
In this endeavor, our starting point is provided by the dynamical equations found for a particular form of fundamental matter, namely Dirac matter, with the help of the principle of invariance of the action under local Poincar\'e$\otimes U(1)$ transformations. Our main hypothesis is that the equations still hold for other forms of matter with the same $U(1)$, translational and Lorentz symmetry properties, and we assume that macroscopic matter possesses these properties. Accordingly, we consider that material media obey equations whose form is known to us, even when we have to reinterpret several quantities involved in them --in particular the matter sources-- in order to account for macroscopic features which are not present in the original formulation.
Moreover, a major alteration of the almost purely geometrical approach to physical reality characteristic of gauge theories occurs with the introduction of thermodynamic variables. Briefly stated, we proceed with the latter as follows. From the original gauge-theoretically defined matter energy current $\epsilon ^{\rm matt}$, we define a modified matter energy current $\epsilon ^{\rm u}$ with an energy flux component $q$ identified as heat flux, and a further component $\mathfrak{U}$ representing the internal energy content of a volume element. As a requirement of the transition to macroscopic matter \cite{Callen}, we postulate $\mathfrak{U}$ to depend, among others, on a new macroscopic variable $\mathfrak{s}$ with the meaning of the entropy content of an elementary volume. (Contrary to other authors \cite{Landau:1958}-\cite{Priou:1991}, we do not introduce an additional entropy flow variable.) The definition of temperature as the derivative of $\mathfrak{U}$ with respect to $\mathfrak{s}$ completes the set of fundamental thermal variables. We are going to prove that they satisfy the first and second laws of thermodynamics. In our approach, the energy and entropy forms, as well as the temperature function, are Lorentz invariants, as in Eckart's pioneering work \cite{Eckart:1940te}. There, as in our case, the first principle of thermodynamics is derived from the energy-momentum conservation law not as the zero component of this vector equation, but as a scalar equation.
The paper is organized as follows. In Sections II and III we present the gauge-theoretically derived field equations and Noether identities. After introducing in IV a necessary spacetime foliation, Section V is devoted to defining total energy and its various constitutive pieces, and to studying the corresponding conservation equations. In VI, explicit Lagrangians for electrodynamics and gravity are considered, while VII deals with some aspects of the energy-momentum of macroscopic matter. In Section VIII we argue on the most suitable way to include the features of material media in the dynamical equations. Lastly, the main results are presented in Section IX, where we deduce the laws of thermodynamics in two different scenarios. The paper ends with several final remarks and the conclusions.
\section{Field equations}
The results of \cite{Tresguerres:2007ih} relevant for the present paper are summarized in what follows with slight changes needed to replace the fundamental Dirac matter by macroscopic matter. Interested readers are referred to \cite{Tresguerres:2007ih} for technical details, in particular those concerning the handling of translations. A complementary study of the underlying geometry of dynamical spacetimes of Poincar\'e gauge theories can be found in Refs. \cite{Tresguerres:2002uh} and \cite{Tresguerres:2012nu}.
Our point of departure is a Lagrangian density 4-form
\begin{equation}
L=L(\,A\,,\vartheta ^\alpha\,,\Gamma ^{\alpha\beta}\,;F\,,T^\alpha\,,\,R^{\alpha\beta}\,;{\rm matter\hskip0.2cm variables}\,)\,,\label{totalLag}
\end{equation}
invariant under local Poincar\'e$\otimes U(1)$ symmetry. Its arguments, along with matter fields, are the following. On the one hand, we recognize the connection 1-forms of $U(1)$, of translations and of the Lorentz subgroup respectively: that is, the electromagnetic potential $A$, the (nonlinear) translational connections $\vartheta ^\alpha$ geometrically interpreted as tetrads, and the Lorentz connections $\Gamma ^{\alpha\beta}$ required to guarantee gauge covariance, being antisymmetric in their indices. On the other hand, further arguments are the covariantized derivatives of the preceding connections. The differential of the electromagnetic potential is the familiar electromagnetic field strength
\begin{equation}
F:= dA\,,\label{Fdef}
\end{equation}
and, analogously, the torsion \cite{Hehl:1995ue}, defined as the covariant differential of the tetrads,
\begin{equation}
T^\alpha := D\,\vartheta ^\alpha = d\,\vartheta ^\alpha + \Gamma _\beta{}^\alpha\wedge\vartheta ^\beta\,,\label{torsiondef}
\end{equation}
together with the Lorentz curvature
\begin{equation}
R^{\alpha\beta} := d\,\Gamma ^{\alpha\beta} + \Gamma _\gamma{}^\beta\wedge \Gamma ^{\alpha\gamma}\,,\label{curvdef}
\end{equation}
play the role of the field strengths associated respectively with translations and with the Lorentz group. Lorentz indices are raised and lowered with the help of the constant Minkowski metric $o_{\alpha\beta}= diag(-+++)$.
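In components, the definition (\ref{Fdef}) immediately implies the homogeneous Bianchi identity $dF=0$, i.e. $\partial_{[\rho}F_{\mu\nu]}=0$. The following symbolic sketch verifies this for an arbitrary potential; sympy is used here purely as an illustration and is not part of the formalism:

```python
# Component sketch of the U(1) Bianchi identity following from F := dA:
# F_{mu nu} = d_mu A_nu - d_nu A_mu  implies  d_{[rho} F_{mu nu]} = 0.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = [t, x, y, z]
A = [sp.Function(f'A_{m}')(*X) for m in range(4)]      # arbitrary potential
F = [[sp.diff(A[n], X[m]) - sp.diff(A[m], X[n]) for n in range(4)]
     for m in range(4)]
for r in range(4):
    for m in range(4):
        for n in range(4):
            cyclic = (sp.diff(F[m][n], X[r]) + sp.diff(F[n][r], X[m])
                      + sp.diff(F[r][m], X[n]))
            assert sp.simplify(cyclic) == 0            # mixed partials cancel
```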
The derivatives of (\ref{totalLag}) with respect to the connections $A$, $\vartheta ^\alpha $ and $\Gamma ^{\alpha\beta}$ are the electric four-current 3-form
\begin{equation}
J :={{\partial L}\over{\partial A}}\,,\label{definition03a}
\end{equation}
the total energy-momentum 3-form
\begin{equation}
\Pi _\alpha :={{\partial L}\over{\partial \vartheta ^\alpha}}\,,\label{definition03b}
\end{equation}
(including, as we will see, electrodynamic, gravitational and matter contributions), and the spin current\footnote{The definition of spin current given in Eq.(61) of Reference \cite{Tresguerres:2007ih} differs from the present one due to the fact that there we considered an internal structure for the tetrads, with a particular dependence on $\Gamma ^{\alpha\beta}$, giving rise to additional terms. The latter ones are not present when the internal structure of the tetrads is ignored, as is the case here.}
\begin{equation}
\tau _{\alpha\beta} :={{\partial L}\over{\partial \Gamma ^{\alpha\beta}}}\,.\label{definition03c}
\end{equation}
Finally, derivatives of (\ref{totalLag}) with respect to the field strengths (\ref{Fdef}), (\ref{torsiondef}) and (\ref{curvdef}) yield respectively the electromagnetic excitation 2-form
\begin{equation}
H:=-{{\partial L}\over{\partial F}}\,,\label{definition01}
\end{equation}
and its translative and Lorentzian analogs, defined as the excitation 2-forms
\begin{equation}
\quad H_\alpha :=-{{\partial L}\over{\partial T^\alpha}}\,,\quad H_{\alpha\beta}:=-\,{{\partial L}\over{\partial R^{\alpha\beta}}}\,.\label{definition02}
\end{equation}
With these definitions at hand, the principle of extremal action yields the field equations
\begin{eqnarray}
dH &=&J\,,\label{covfieldeq1} \\
DH_\alpha &=&\Pi _\alpha\,,\label{covfieldeq2}\\
DH_{\alpha\beta} +\vartheta _{[\alpha }\wedge H_{\beta ]}&=&\tau _{\alpha\beta}\,.\label{covfieldeq3}
\end{eqnarray}
As we will see below, suitable explicit Lagrangians uncover respectively (\ref{covfieldeq1}) as Maxwell's equations and (\ref{covfieldeq2}) as a generalized Einstein equation for gravity, whereas (\ref{covfieldeq3}) completes the scheme taking spin currents into account. Notice that Eqs. (\ref{covfieldeq1})--(\ref{covfieldeq3}) are explicitly Lorentz covariant\footnote{The covariant differentials in
(\ref{covfieldeq2}) and (\ref{covfieldeq3}) are defined as
$$DH_\alpha := dH_\alpha -\Gamma _\alpha{}^\beta\wedge H_\beta\,,$$
and
$$DH_{\alpha\beta} := dH_{\alpha\beta} -\Gamma _\alpha{}^\gamma\wedge H_{\gamma\beta}
-\Gamma _\beta{}^\gamma\wedge H_{\alpha\gamma}\,,$$ respectively.}. In addition, they are invariant with respect to translations as well as to $U(1)$, as a consequence of the (nonlinear) symmetry realization used in \cite{Tresguerres:2007ih}.
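As a consistency check, taking the exterior derivative of (\ref{covfieldeq1}) enforces electric current conservation, $dJ = d(dH) = 0$. A schematic component version (with an antisymmetric excitation density $H^{\mu\nu}$ and $J^\nu=\partial_\mu H^{\mu\nu}$) can be verified symbolically; sympy is again used purely for illustration:

```python
# Schematic component check: current conservation as an integrability
# condition of dH = J, with H^{mu nu} antisymmetric.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = [t, x, y, z]
# Antisymmetric excitation built from arbitrary functions:
h = [[sp.Function(f'h_{m}{n}')(*X) for n in range(4)] for m in range(4)]
H = [[h[m][n] - h[n][m] for n in range(4)] for m in range(4)]
# Current density J^nu = d_mu H^{mu nu}:
J = [sum(sp.diff(H[m][n], X[m]) for m in range(4)) for n in range(4)]
# Its divergence vanishes identically, d_nu J^nu = 0:
divJ = sum(sp.diff(J[n], X[n]) for n in range(4))
assert sp.simplify(divJ) == 0
```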
\section{Noether identities}
Following \cite{Hehl:1995ue}, we separate the total Lagrangian density 4-form (\ref{totalLag}) into three different pieces
\begin{equation}
L=L^{\rm matt}+L^{\rm em}+L^{\rm gr}\,,\label{Lagrangedecomp}
\end{equation}
consisting respectively of the matter contribution
\begin{equation}
L^{\rm matt} = L^{\rm matt}(\,\vartheta ^\alpha\,;{\rm matter\hskip0.2cm variables}\,)\,,\label{mattLagcontrib}
\end{equation}
(in the fundamental case, the matter variables consist of matter fields $\psi$ and of their covariant derivatives, involving the connections $A$ and $\Gamma ^{\alpha\beta}$), together with the electromagnetic part $L^{\rm em}(\,\vartheta ^\alpha\,,\,F\,)\,$ and the gravitational Lagrangian $L^{\rm gr}(\,\vartheta ^\alpha\,,\,T^\alpha\,,\,R_\alpha{}^\beta\,)$. According to (\ref{Lagrangedecomp}), the energy-momentum 3-form (\ref{definition03b}) decomposes as
\begin{equation}
\Pi _\alpha =\Sigma ^{\rm matt}_\alpha +\Sigma ^{\rm em}_\alpha +E_\alpha\,,\label{momentdecomp}
\end{equation}
with the different terms on the right-hand side (rhs) defined respectively as
\begin{equation}
\Sigma ^{\rm matt}_\alpha :={{\partial L^{\rm matt}}\over{\partial \vartheta ^\alpha}}\,,\quad
\Sigma ^{\rm em}_\alpha :={{\partial L^{\rm em}}\over{\partial \vartheta ^\alpha}}\,,\quad
E_\alpha :={{\partial L^{\rm gr}}\over{\partial \vartheta ^\alpha}}\,.\label{momentdecompbis}
\end{equation}
Starting with the matter Lagrangian part $L^{\rm matt}\,$, let us derive the Noether-type conservation equations for the matter currents associated with the different symmetries, that is
\begin{equation}
J={{\partial L^{\rm matt}}\over{\partial A}}\,,\quad
\Sigma ^{\rm matt}_\alpha = {{\partial L^{\rm matt}}\over{\partial \vartheta ^\alpha }}\,,\quad\tau _{\alpha\beta} = {{\partial L^{\rm matt}}\over{\partial \Gamma ^{\alpha\beta}}}\,.\label{mattcurrdefs}
\end{equation}
Provided the field equations (\ref{covfieldeq1})--(\ref{covfieldeq3}) are fulfilled, as well as the Euler-Lagrange equations for the matter fields (not displayed explicitly here), the invariance of $L^{\rm matt}$ under vertical (gauge) Poincar\'e $\otimes$ $U(1)$ transformations yields the conservation equations for both the electric current
\begin{equation}
dJ =0\,,\label{elcurrcons}
\end{equation}
and the spin current
\begin{equation}
D\,\tau _{\alpha\beta} +\vartheta _{[\alpha}\wedge\Sigma ^{\rm matt}_{\beta ]}=0\,.\label{spincurrconserv}
\end{equation}
On the other hand, the Lie (lateral) displacement ${\it{l}}_X L^{\rm matt}$ of the Lagrangian 4-form along an arbitrary vector field $X$ yields the identity
\begin{equation}
D\,\Sigma ^{\rm matt}_\alpha =(\,e_\alpha\rfloor T^\beta )\wedge\Sigma ^{\rm matt}_\beta +(\,e_\alpha\rfloor R^{\beta\gamma}\,)\wedge\tau _{\beta\gamma} +(\,e_\alpha\rfloor F\,)\wedge J\,,\label{sigmamattconserv}
\end{equation}
with the matter energy-momentum 3-form given by
\begin{equation}
\Sigma ^{\rm matt}_\alpha =-(\,e_\alpha\rfloor\overline{D\psi}\,)\,{{\partial L^{\rm matt}}\over{\partial d\overline{\psi}}} +{{\partial L^{\rm matt}}\over{\partial d\psi}}\,(\,e_\alpha\rfloor D\psi\,) + e_\alpha\rfloor L^{\rm matt}\label{sigmamatt}
\end{equation}
(for Dirac matter; to be modified for the case of macroscopic matter). On the rhs of (\ref{sigmamattconserv}) we recognize, besides the proper Lorentz force 4-form at the far right, two additional terms with the same structure, built from the field strengths and the matter currents of translational and Lorentz symmetry respectively.
Next we apply the same treatment to the remaining constituents of (\ref{Lagrangedecomp}). The gauge invariance of the electromagnetic Lagrangian piece implies
\begin{equation}
\vartheta _{[\alpha}\wedge\Sigma ^{\rm em}_{\beta ]} =0\,,\label{Symem-emt}
\end{equation}
while in analogy to (\ref{sigmamattconserv}) we find
\begin{equation}
D\,\Sigma ^{\rm em}_\alpha =(\,e_\alpha\rfloor T^\beta )\wedge\Sigma ^{\rm em}_\beta -(\,e_\alpha\rfloor F\,)\wedge dH\,,\label{sigmaemconserv}
\end{equation}
with the electromagnetic energy-momentum given by
\begin{equation}
\Sigma ^{\rm em}_\alpha =(\,e_\alpha\rfloor F\,)\wedge H + e_\alpha\rfloor L^{\rm em}\,.\label{sigmaem}
\end{equation}
Finally, regarding the gravitational Lagrangian part, its gauge invariance yields
\begin{equation}
D\,\Bigl( DH_{\alpha\beta} +\vartheta _{[\alpha }\wedge H_{\beta ]}\,\Bigr) +\vartheta _{[\alpha}\wedge\Bigl( DH_{\beta ]} -E_{\beta ]}\,\Bigr)=0\,,\label{redund}
\end{equation}
(alternatively derivable from (\ref{spincurrconserv}) with (\ref{covfieldeq2}), (\ref{covfieldeq3}), (\ref{momentdecomp}) and (\ref{Symem-emt})), while the equation analogous to (\ref{sigmamattconserv}) and (\ref{sigmaemconserv}) reads
\begin{eqnarray}
&&D\,\Bigl( DH_\alpha -E_\alpha\,\Bigr) -(\,e_\alpha\rfloor T^\beta
)\wedge\Bigl( DH_\beta -E_\beta\,\Bigr)\nonumber\\
&&\hskip0.2cm -(\,e_\alpha\rfloor R^{\beta\gamma}\,)\wedge\Bigl( DH_{\beta\gamma}+\vartheta _{[\beta }\wedge H_{\gamma ]}\,\Bigr)=0\,,\label{ealphaconserv}
\end{eqnarray}
with the pure gravitational energy-momentum given by
\begin{eqnarray}
E_\alpha =(\,e_\alpha\rfloor T^\beta )\wedge H_\beta +(\,e_\alpha\rfloor R^{\beta\gamma}\,)\wedge H_{\beta\gamma} +e_\alpha\rfloor L^{\rm gr}\,.\label{ealpha}
\end{eqnarray}
Eq.(\ref{ealphaconserv}) is also redundant, being derivable from (\ref{sigmamattconserv}) and (\ref{sigmaemconserv}) together with the field equations (\ref{covfieldeq1})--(\ref{covfieldeq3}) and (\ref{momentdecomp}).
\section{Spacetime foliation}
\subsection{General formulas}
The definition of energy to be introduced in the next section, as well as its subsequent thermodynamic treatment, rests on a foliation of spacetime involving a timelike vector field $u$ defined as follows. (For more details, see \cite{Tresguerres:2012nu}.) The foliation is induced by a 1-form $\omega = d\tau $ trivially satisfying the Frobenius foliation condition $\omega\wedge d\omega =0$. The vector field $u$ relates to $d\tau$ through the condition $u\rfloor d\tau =1$ fixing its direction. This association of the vector $u$ with $\tau $, the latter being identified as {\it parametric time}, allows one to formalize the time evolution of any physical quantity represented by a $p$-form $\alpha$ as its Lie derivative along $u$, that is
\begin{equation}
{\it{l}}_u\alpha :=\,d\,(u\rfloor\alpha\,) + u\rfloor d\alpha \,.\label{Liederdef}
\end{equation}
(Notice that the condition $u\rfloor d\tau =1$ itself defining $u$ in terms of $\tau$ means that ${\it l}_u\,\tau := u\rfloor d\tau =1$.) With respect to the direction of the time vector $u$, any $p$-form $\alpha$ decomposes into two constituents \cite{Hehl-and-Obukhov}, longitudinal and transversal to $u$ respectively, as
\begin{equation}
\alpha = d\tau\wedge\alpha _{\bot} +\underline{\alpha}\,,\label{foliat1}
\end{equation}
with the longitudinal piece
\begin{equation}
\alpha _{\bot} := u\rfloor\alpha\,,\label{long-part}
\end{equation}
consisting of the projection of $\alpha$ along $u$, and the transversal component
\begin{equation}
\underline{\alpha}:=
u\rfloor ( d\tau\wedge\alpha\,)\,,\label{trans-part}
\end{equation}
orthogonal to the former as a spatial projection.
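The decomposition (\ref{foliat1})--(\ref{trans-part}) is in fact an algebraic identity, since $u\rfloor (\,d\tau\wedge\alpha\,) = (u\rfloor d\tau )\,\alpha - d\tau\wedge (u\rfloor\alpha )$. As a purely illustrative aside (ours, not part of the derivation), it can be checked componentwise; the sketch below verifies it numerically for a 2-form, assuming a frame with components $u^\mu =(1,0,0,0)$ and $(d\tau )_\mu =(1,0,0,0)$, so that $u\rfloor d\tau =1$, together with the standard component conventions for the wedge and the interior product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed frame components (illustration only): u | dtau = 1
u = np.array([1.0, 0.0, 0.0, 0.0])      # vector components u^mu
dtau = np.array([1.0, 0.0, 0.0, 0.0])   # covector components (dtau)_mu

# A random 2-form F_{mu nu} (antisymmetric)
A = rng.normal(size=(4, 4))
F = A - A.T

# Longitudinal piece  F_bot = u | F  (a 1-form): (F_bot)_n = u^m F_{mn}
F_bot = np.einsum('m,mn->n', u, F)

# dtau ^ F_bot  (a 2-form): (a^b)_{mn} = a_m b_n - a_n b_m
wedge_1_1 = (np.einsum('m,n->mn', dtau, F_bot)
             - np.einsum('m,n->nm', dtau, F_bot))

# dtau ^ F  (a 3-form), cyclic formula for (1-form)^(2-form)
wedge_1_2 = (np.einsum('m,nr->mnr', dtau, F)
             + np.einsum('n,rm->mnr', dtau, F)
             + np.einsum('r,mn->mnr', dtau, F))

# Transversal piece  F_under = u | (dtau ^ F)  (a 2-form)
F_under = np.einsum('m,mnr->nr', u, wedge_1_2)

# Eq. (foliat1):  F = dtau ^ F_bot + F_under
assert np.allclose(F, wedge_1_1 + F_under)
# The transversal piece is purely spatial:  u | F_under = 0
assert np.allclose(np.einsum('m,mn->n', u, F_under), 0.0)
print("decomposition verified")
```

The last assertion confirms that the transversal piece carries no longitudinal component, $u\rfloor\underline{\alpha}=0$.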
The foliation of exterior derivatives of forms is performed in analogy to (\ref{foliat1}) as
\begin{equation}
d\,\alpha = d\tau\wedge\bigl(\,{\it{l}}_u\underline{\alpha} -\,\underline{d}\,\alpha _{\bot}\,\bigr) +\underline{d}\,\underline{\alpha }\,,\label{derivfoliat}
\end{equation}
with the longitudinal part expressed in terms of the Lie derivative (\ref{Liederdef}) and of the spatial differential $\underline{d}$. For its part, the Hodge dual (\ref{dualform}) of a $p$-form $\alpha$ decomposes as
\begin{equation}
{}^*\alpha =\,(-1)^p\, d\tau\wedge {}^{\#}\underline{\alpha} - {}^{\#}\alpha _{\bot}\,,\label{foliat2}
\end{equation}
where $^\#$ denotes the Hodge dual operator in the three-dimensional spatial sheets.
\subsection{Foliation of tetrads}
Let us apply the general formulas (\ref{Liederdef})--(\ref{foliat2}) to the particular case of tetrads $\vartheta ^\alpha $, which, as universally coupling coframes \cite{Tresguerres:2007ih}, will play a significant role in what follows. Their dual vector basis $\{e_\alpha\}$ is defined by the condition
\begin{equation}
e_\alpha\rfloor \vartheta ^\beta = \delta _\alpha ^\beta\,.\label{dualitycond}
\end{equation}
When applied to tetrads, (\ref{foliat1}) reads
\begin{equation}
\vartheta ^\alpha = d\tau\,u^\alpha + \underline{\vartheta}^\alpha\,,\label{tetradfoliat}
\end{equation}
where the longitudinal piece
\begin{equation}
u^\alpha := u\rfloor\vartheta ^\alpha\label{fourvel}
\end{equation}
has the meaning of a four-velocity. In terms of it, the time vector $u$ can be expressed as $u =u^\alpha e_\alpha$, the requirement for $u$ to be timelike being fulfilled as
\begin{equation}
u_\alpha u^\alpha = -1\,.\label{form01}
\end{equation}
In terms of (\ref{fourvel}), let us define the projector
\begin{equation}
h_\alpha{}^\beta :=\delta _\alpha ^\beta + u_\alpha u^\beta\,.\label{form03}
\end{equation}
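From (\ref{form01}) one checks directly that $h_\alpha{}^\gamma\,h_\gamma{}^\beta =h_\alpha{}^\beta$, that $u^\alpha h_\alpha{}^\beta =0$, and that $h_\alpha{}^\alpha =3$, so that (\ref{form03}) projects onto the spatial sheets orthogonal to $u$. A minimal numerical illustration (ours, using Minkowski components and a boosted four-velocity):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric (our signature choice)

# A boosted four-velocity, normalized so that  u_a u^a = -1
v = 0.6
gamma = 1.0 / np.sqrt(1.0 - v**2)
u_up = np.array([gamma, gamma * v, 0.0, 0.0])   # components u^a
u_dn = eta @ u_up                                # components u_a
assert np.isclose(u_dn @ u_up, -1.0)             # Eq. (form01)

# Projector  h_a^b = delta_a^b + u_a u^b,  Eq. (form03)
h = np.eye(4) + np.outer(u_dn, u_up)

assert np.allclose(h @ h, h)          # idempotent: a genuine projector
assert np.allclose(u_up @ h, 0.0)     # annihilates u:  u^a h_a^b = 0
assert np.isclose(np.trace(h), 3.0)   # rank 3: projects onto the spatial sheet
print("projector checks passed")
```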
Replacing (\ref{tetradfoliat}) in (\ref{dualitycond}) and making use of (\ref{form03}) we find
\begin{equation}
e_\alpha\rfloor \Bigl(\,d\tau\,u^\beta + \underline{\vartheta}^\beta\,\Bigr) = \delta _\alpha ^\beta
=-u_\alpha u^\beta +h_\alpha{}^\beta \,,
\end{equation}
implying
\begin{equation}
e_\alpha \rfloor d\tau = -\,u_\alpha\,,\label{form02}
\end{equation}
and
\begin{equation}
e_\alpha\rfloor \underline{\vartheta}^\beta = h_\alpha{}^\beta\,.\label{dualitycondbis}
\end{equation}
On the other hand, let us generalize the definition (\ref{Liederdef}) of Lie derivatives by considering covariant differentials instead of ordinary ones \cite{Hehl:1995ue}. In particular, we will make extensive use of the covariant Lie derivative of the tetrads, defined as
\begin{eqnarray}
{\cal \L\/}_u\vartheta ^\alpha &:=& D\left( u\rfloor\vartheta ^\alpha\right) + u\rfloor D\vartheta ^\alpha\nonumber\\
&=& D u^\alpha + T_{\bot}^\alpha
\,,\label{thetaLiederiv01}
\end{eqnarray}
where
\begin{equation}
{\cal \L\/}_u\vartheta ^\alpha = {\it{l}}_u\vartheta ^\alpha +{\Gamma _{\bot}}_\beta{}^\alpha\wedge\vartheta ^\beta\,,\label{thetaLiederiv02}
\end{equation}
with (\ref{thetaLiederiv01}) decomposing into the longitudinal and transversal pieces
\begin{eqnarray}
({\cal \L\/}_u\vartheta ^\alpha )_{\bot} &=& {\cal \L\/}_u u^\alpha\,,\label{thetaLiederiv03}\\
\underline{{\cal \L\/}_u\vartheta ^\alpha} &=& \underline{D} u^\alpha + T_{\bot}^\alpha\nonumber\\
&=& {\cal \L\/}_u\underline{\vartheta}^\alpha\,.\label{thetaLiederiv04}
\end{eqnarray}
For what follows, we also need complementary formulas concerning the foliation of the eta basis; since they require more space, we collect them in Appendix A.
\section{Definition and conservation of energy}
In Ref.\cite{Tresguerres:2007ih} we discussed the definition of the total energy current 3-form
\begin{equation}
\epsilon := -\left(\,u^\alpha\,\Pi _\alpha + Du^\alpha\wedge H_\alpha\,\right)\,.\label{energycurr}
\end{equation}
By rewriting it as
\begin{equation}
\epsilon =-d\left( u^\alpha H_\alpha\right) + u^\alpha \left( DH_\alpha -\Pi _\alpha \right)\,,\label{exactform01}
\end{equation}
and making use of (\ref{covfieldeq2}), we find that it reduces to an exact form
\begin{equation}
\epsilon =-d\left( u^\alpha H_\alpha\right)\,,\label{exactform02}
\end{equation}
automatically satisfying the continuity equation
\begin{equation}
d\,\epsilon =0\,.\label{energyconserv01}
\end{equation}
The interpretation of (\ref{energycurr}) as the total energy, and thus of (\ref{energyconserv01}) as local conservation of total energy, becomes apparent with the help of (\ref{momentdecomp}). The energy (\ref{energycurr}) turns out to be the sum of three pieces
\begin{equation}
\epsilon =\epsilon ^{\rm matt}+\epsilon ^{\rm em}+\epsilon ^{\rm gr}\,,\label{energydec}
\end{equation}
defined respectively as
\begin{eqnarray}
\epsilon ^{\rm matt} &:=& -u^\alpha\,\Sigma ^{\rm matt}_\alpha\,,\label{mattenergy}\\
\epsilon ^{\rm em} &:=& -u^\alpha\,\Sigma ^{\rm em}_\alpha\,,\label{emenergy}\\
\epsilon ^{\rm gr} &:=& -\left(\,u^\alpha\,E_\alpha + D u^\alpha\wedge H_\alpha\,\right)\,.\label{grenergy}
\end{eqnarray}
On the other hand, decomposing (\ref{energycurr}) into its longitudinal and transversal components
\begin{equation}
\epsilon = d\tau\wedge\epsilon _{\bot} +\underline{\epsilon}\,,\label{energyfol01}
\end{equation}
the foliated form of the local energy conservation equation (\ref{energyconserv01}) reads
\begin{equation}
{\it l}_u\,\underline{\epsilon}-\underline{d}\,\epsilon _{\bot}=0\,,\label{conteq}
\end{equation}
showing (when integrated) that the rate of change of the energy $\underline{\epsilon}$ contained in a small volume equals the net amount of energy flowing into the volume through its boundary surface, as given by the balance of inflow and outflow of the energy flux $\epsilon _{\bot}$ across the closed surface.
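For orientation, integrating (\ref{conteq}) over a spatial region $V$ carried along by $u$, with boundary $\partial V$, and applying Stokes' theorem, the balance just described takes the schematic form
$$ {{d}\over{d\tau}}\int _V \underline{\epsilon}\,=\int _V \underline{d}\,\epsilon _{\bot} =\oint _{\partial V}\epsilon _{\bot}\,. $$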
Conservation of total energy is the result of exchanges between the different forms of energy. Let us write the continuity equations for the different pieces (\ref{mattenergy})--(\ref{grenergy}). As we will see immediately, each of these equations, when considered separately, involves sources and sinks of energy, reflecting the fact that, inside the small volume considered, energy is produced or consumed, whether on account of work or of any other manifestation of energy. These terms only cancel out when all forms of energy are considered together, that is, in (\ref{energyconserv01}) with (\ref{energydec}).
Regarding the matter contribution to energy (\ref{mattenergy}), using (\ref{sigmamattconserv}) we find
\begin{equation}
d\,\epsilon ^{\rm matt} = -{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm matt}_\alpha -R_{\bot}^{\alpha\beta}\wedge\tau _{\alpha\beta} -F_{\bot}\wedge J\,.\label{mattender}
\end{equation}
The interpretation of this conservation equation when its validity is extended to macroscopic matter constitutes the main task of the present work. Actually, Eq. (\ref{mattender}) provides the basis for our approach to thermodynamics.
In analogy to (\ref{mattender}), definition (\ref{emenergy}) of electromagnetic energy with (\ref{sigmaemconserv}) yields the Poynting equation
\begin{equation}
d\,\epsilon ^{\rm em} = -{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm em}_\alpha + F_{\bot}\wedge dH\,,\label{emender}
\end{equation}
generalized to the spacetime framework of Poincar\'e gauge theories. In (\ref{emender}), the energy flux (or intensity of flowing energy) is represented by the Poynting 2-form $\epsilon ^{\rm em}_{\bot}$, and the last term on the rhs is related to Joule heating. Finally, from the gravitational energy definition (\ref{grenergy}) with (\ref{ealphaconserv}) we get
\begin{eqnarray}
d\,\epsilon ^{\rm gr} &=& -{\cal \L\/}_u\,\vartheta ^\alpha\wedge\left(\,E_\alpha -DH_\alpha\right)\nonumber\\
&&+R_{\bot}^{\alpha\beta}\wedge \left(\,DH_{\alpha\beta} +\vartheta _{[\alpha }\wedge H_{\beta ]}\right)\,.\label{grender}
\end{eqnarray}
The field equations (\ref{covfieldeq1})--(\ref{covfieldeq3}) guarantee that the sum of (\ref{mattender}), (\ref{emender}) and (\ref{grender}) is conserved, in agreement with (\ref{energyconserv01}).
\section{Electrodynamical and gravitational Lagrangians}
In the present Section we introduce explicit Lagrangian pieces (\ref{Lagrangedecomp}) describing electrodynamics and gravity. We do so in order to calculate in particular the excitations defined in (\ref{definition01}) and (\ref{definition02}), which extend to the macroscopic arena without alterations, as will be discussed in Section VIII. We also derive the electromagnetic and gravitational energy-momentum contributions to (\ref{momentdecomp}) as defined in (\ref{momentdecompbis}), and the corresponding energies (\ref{emenergy}) and (\ref{grenergy}). The form found for (\ref{emenergy}), namely (\ref{explemen1}), and in particular that of its transversal part (\ref{emendh}), provides us with a criterion to choose the way to extend the {\it microscopic} fundamental equations to macroscopic material media. (See Section VIII.)
\subsection{Electrodynamics}
In the context of fundamental matter in vacuum, we consider the Maxwell Lagrangian
\begin{equation}
L^{\rm em}=-{1\over 2}\,F\wedge\,^*F\,.\label{emlagrang1}
\end{equation}
From it follows a field equation of the form (\ref{covfieldeq1}) where the excitation (\ref{definition01}) is given by the Maxwell-Lorentz electromagnetic spacetime relation
\begin{equation}
H={}^*F\,,\label{emmom}
\end{equation}
involving (\ref{Fdef}), which identically satisfies
\begin{equation}
dF =0\,.\label{vanfder}
\end{equation}
Eqs. (\ref{covfieldeq1}) and (\ref{vanfder}) complete the set of Maxwell's equations for fundamental matter in vacuum.
On the other hand, the electromagnetic part (\ref{sigmaem}) of energy-momentum derived from the explicit Lagrangian (\ref{emlagrang1}) reads
\begin{equation}
\Sigma ^{\rm em}_\alpha = {1\over 2}\,\left[\,\left( e_\alpha\rfloor F\right)\wedge H -F\wedge\left( e_\alpha\rfloor H\right)\,\right]\,,\label{emenergymom}
\end{equation}
so that (\ref{emenergy}) becomes
\begin{equation}
\epsilon ^{\rm em} = -{1\over 2}\,\bigl(\,F_{\bot}\wedge H -F\wedge H_{\bot}\,\bigr)\,,\label{explemen1}
\end{equation}
obeying Eq.(\ref{emender}). The transversal component $\underline{\epsilon}^{\rm em}$ of the electromagnetic energy current 3-form (\ref{explemen1}) is the energy 3-form representing the amount of electric and magnetic energy contained in a small volume, and the longitudinal part $\epsilon ^{\rm em}_{\bot}$ is the energy flux or Poynting 2-form.
\subsection{Gravity}
For the gravitational action, we consider a quite general Lagrangian density taken from Ref. \cite{Obukhov:2006ge}, including a Hilbert-Einstein term with cosmological constant, plus additional contributions quadratic in the Lorentz-irreducible pieces of torsion and curvature as established by McCrea \cite{Hehl:1995ue,McCrea:1992wa}. The gravitational Lagrangian reads
\begin{eqnarray}
L^{\rm gr}&=&{1\over{\kappa}}\,\left(\,\,{a_0\over
2}\,\,R^{\alpha\beta}\wedge\eta_{\alpha\beta}
-\Lambda\,\eta\,\right)\nonumber\\
&&-{1\over 2}\,\,T^\alpha\wedge
\left(\sum_{I=1}^{3}{{a_{I}}\over{\kappa}}\,\,{}^{*(I)}
T_\alpha\right)\nonumber\\
&&-{1\over 2}\,\,R^{\alpha\beta}\wedge\left(\sum_{I=1}^{6}b_{I}\,\,
{}^{*(I)}R_{\alpha\beta}\right)\,,\label{gravlagr}
\end{eqnarray}
with $\kappa$ as the gravitational constant, and $a_0$, $a_{I}$, $b_{I}$ as dimensionless constants. From (\ref{gravlagr}) we calculate the translational and Lorentz excitations (\ref{definition02}) to be respectively
\begin{eqnarray}
H_\alpha &=& \sum_{I=1}^{3}{{a_{I}}\over{\kappa}}\,\,{}^{*(I)}
T_\alpha\,,\label{torsmom}\\
H_{\alpha\beta}&=&-{a_0\over{2\kappa}}\,\eta_{\alpha\beta} +\sum_{I=1}^{6}b_{I}\,\,
{}^{*(I)}R_{\alpha\beta}\,,\label{curvmom}
\end{eqnarray}
and we find the pure gravitational contribution (\ref{ealpha}) to the energy-momentum
\begin{eqnarray}
E_\alpha &=& {a_0\over {4\kappa}}\,e_\alpha\rfloor \left(\,R^{\beta\gamma}\wedge\eta_{\beta\gamma}\,\right)-{\Lambda\over{\kappa}}\,\eta _\alpha\nonumber\\
&&+{1\over 2}\,\left[\,\left( e_\alpha\rfloor T^\beta\right)\wedge H_\beta -T^\beta\wedge\left( e_\alpha\rfloor H_\beta \right)\,\right]\nonumber\\
&&+{1\over 2}\,\left[\,\left( e_\alpha\rfloor R^{\beta\gamma}\right)\wedge H_{\beta\gamma} -R^{\beta\gamma}\wedge\left( e_\alpha\rfloor H_{\beta\gamma}\right)\,\right]\,.\nonumber\\
\label{gravenergymom}
\end{eqnarray}
(Notice the resemblance between (\ref{gravenergymom}) and (\ref{emenergymom}).) The gauge-theoretical equations (\ref{covfieldeq2}) with (\ref{gravenergymom}) and (\ref{momentdecomp}) constitute a generalization of Einstein's equations. Actually, for $a_0=1\,$, $a_{I}=0\,$, $b_{I}=0\,$ and vanishing torsion, (\ref{gravenergymom}) reduces to
\begin{equation}
E_\alpha = {1\over{\kappa}}\,\left(\,\,{1\over 2}\,\,R^{\beta\gamma}\wedge\eta_{\beta\gamma\alpha}
-\Lambda\,\eta _\alpha\,\right)\,,\label{H-Egravenergymom}
\end{equation}
which is simply an exterior-calculus reformulation of the Einstein tensor plus a cosmological-constant term. Using the general expression (\ref{gravenergymom}), we calculate the gravitational energy (\ref{grenergy}) to be
\begin{eqnarray}
\epsilon ^{\rm gr} &=& -{a_0\over {4\kappa}}\,\bigl(\,R^{\alpha\beta}\wedge\eta_{\alpha\beta}\,\bigr)_{\bot}+{\Lambda\over{\kappa}}\,u^\alpha\eta _\alpha\nonumber\\
&&-{1\over 2}\,\bigl(\,T_{\bot}^\alpha\wedge H_\alpha -T^\alpha\wedge H_{{\bot}\alpha}\,\bigr)\nonumber\\
&&-{1\over 2}\,\bigl(\,R_{\bot}^{\alpha\beta}\wedge H_{\alpha\beta} -R^{\alpha\beta}\wedge H_{{\bot}\alpha\beta}\,\bigr)\nonumber\\
&&-D u^\alpha\wedge H_\alpha\,,\label{explgren}
\end{eqnarray}
(compare with (\ref{explemen1})), obeying Eq.(\ref{grender}).
\section{Energy-momentum 3-form of macroscopic matter}
Contrary to the former cases of electromagnetism and gravity, we do not propose a Lagrangian for macroscopic matter. Instead, we focus our attention on the matter energy-momentum 3-form $\Sigma ^{\rm matt}_\alpha $, for which we postulate the dynamical equation (\ref{sigmamattconserv}), and any other in which it appears, to hold macroscopically. The energy-momentum (\ref{sigmamatt}) found for Dirac matter does not play any role when considering macroscopic systems. The description of each kind of material medium requires the construction of a suitably chosen energy-momentum 3-form adapted to it. In the present Section we merely present a useful decomposition applicable to any $\Sigma ^{\rm matt}_\alpha$, and we consider the form of the simplest of all mechanical energy-momentum contributions, namely that due to pressure, which we explicitly separate from the whole macroscopic matter energy-momentum. By using the projectors (\ref{form03}) and the definition (\ref{mattenergy}), we find
\begin{eqnarray}
\Sigma ^{\rm matt}_\alpha &&\equiv ( -u_\alpha u^\beta + h_\alpha{}^\beta ) \Sigma ^{\rm matt}_\beta\nonumber\\
&&=: u_\alpha\,\epsilon ^{\rm matt} +\widetilde{\Sigma}^{\rm matt}_\alpha\,,\label{enmom02}
\end{eqnarray}
making apparent the pure energy content of the energy-momentum. On the other hand, to account for pressure, we separate the pressure term from an energy-momentum 3-form as
\begin{eqnarray}
\Sigma ^{\rm matt}_\alpha &=& p\,h_\alpha{}^\beta\,\eta _\beta +\Sigma ^{\rm undef}_\alpha\nonumber\\
&=&-d\tau\wedge p\,\overline{\eta}_\alpha +\Sigma ^{\rm undef}_\alpha\,,\label{enmom01}
\end{eqnarray}
with $\overline{\eta}_\alpha$ as defined in (\ref{3deta07}), while $\Sigma ^{\rm undef}_\alpha $ is left undefined. By decomposing (\ref{enmom01}) according to (\ref{enmom02}), we get
\begin{equation}
\Sigma ^{\rm matt}_\alpha = u_\alpha\,\epsilon ^{\rm matt} -d\tau\wedge p\,\overline{\eta}_\alpha +\widetilde{\Sigma}^{\rm undef}_\alpha\,.\label{enmom03}
\end{equation}
The piece $\widetilde{\Sigma}^{\rm undef}_\alpha $ present in (\ref{enmom03}) after the separation of the energy term can be chosen in different manners to describe, as the case may be, viscosity, elasticity, plasticity, etc. Actually, (\ref{enmom03}) resembles the energy-momentum 3-form of a fluid plus additional contributions responsible for different mechanical features.
Notice that, since (\ref{sigmamattconserv}) is a dynamical equation of the form
\begin{equation}
D\,\Sigma ^{\rm matt}_\alpha = f_\alpha\,,\label{force01}
\end{equation}
where the 4-form $f_\alpha$ is a generalized Lorentz force, replacing (\ref{enmom03}) in it yields (at least formally) an extended Navier-Stokes equation.
\section{Electrodynamic equations in material media}
Looking for a general criterion on the most suitable procedure to include phenomenological matter in the fundamental equations, let us examine electromagnetism in particular, in order to find out how to generalize (\ref{covfieldeq1}) as well as (\ref{emender}) in such a manner that they become applicable macroscopically while preserving their form. As a matter of fact, Maxwell's equations in matter admit two alternative formulations, depending on how the electric and magnetic properties of material media are taken into account \cite{Hehl-and-Obukhov} \cite{Obukhov:2003cc}. Polarization and magnetization can be described, in seemingly equivalent ways, either as modifications of the electromagnetic excitations $H$ or as the result of the existence inside such materials of generalized currents $J$ including both free and bound contributions. With the latter approach in mind, we define the total current density $J^{\rm tot}$ as the sum of a current $J^{\rm free}$ of free charge and a matter-bound contribution $J^{\rm matt}$ characteristic of the medium, that is
\begin{equation}
J^{\rm tot} = J^{\rm free} + J^{\rm matt}\,,\label{totcurr01}
\end{equation}
with the assumption that they are conserved separately as
\begin{equation}
dJ^{\rm free}=0\,,\qquad dJ^{\rm matt}=0\,,\label{totcurrconserv}
\end{equation}
so that, although both types of charge can coexist, no exchange occurs between them. From the second conservation condition in (\ref{totcurrconserv}), we infer the existence of an independent excitation 2-form, which we denote as $H^{\rm matt}$, such that
\begin{equation}
J^{\rm matt}= -dH^{\rm matt}\,.\label{indepexcits}
\end{equation}
For the longitudinal and transversal pieces of $H^{\rm matt}$ we introduce the notation
\begin{equation}
H^{\rm matt}= -d\tau\wedge M + P\,,\label{matexcit01}
\end{equation}
where $M$ is the magnetization 1-form and $P$ the polarization 2-form.
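Foliating (\ref{indepexcits}) with the help of (\ref{matexcit01}) and (\ref{derivfoliat}) gives $J^{\rm matt}=-d\tau\wedge (\,{\it l}_u P +\underline{d}\,M\,) -\underline{d}\,P$, whose pieces reproduce, in vector language, the familiar bound current and charge densities of classical electrodynamics (displayed here merely as an orientation aid)
$$ \vec{\jmath}^{\;\rm matt}\leftrightarrow \partial _t\vec{P}+\nabla\times\vec{M}\,,\qquad \rho ^{\rm matt}\leftrightarrow -\nabla\cdot\vec{P}\,. $$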
The extension of Maxwell's equations (\ref{covfieldeq1}) to include the contribution (\ref{indepexcits}) of the material medium without altering their form can then be performed in either of the two alternative ways mentioned above. Let us define
\begin{equation}
H^{\rm bare} :={}^*F\,,\label{macMax05}
\end{equation}
(where we call {\it bare fields} the fields in vacuum), in analogy to the Maxwell-Lorentz spacetime relation (\ref{emmom}). Then, according to the first procedure, which regards the electromagnetic effects of the medium as a modification of the electromagnetic excitations, $H$ as well as $J$ in (\ref{covfieldeq1}) are to be understood respectively as
\begin{equation}
H = H^{\rm tot} := H^{\rm bare} +H^{\rm matt}\quad{\rm and}\quad J= J^{\rm free}\,,\label{secondcase}
\end{equation}
while in the second case such effects are characterized in terms of bound currents, so that the same equation (\ref{covfieldeq1}) is to be read taking in it now
\begin{equation}
H = H^{\rm bare}\quad{\rm and}\quad J = J^{\rm tot} := J^{\rm free} - dH^{\rm matt}\,.\label{firstcase}
\end{equation}
Let us show that, despite appearances, the two formulations are not trivially interchangeable. Actually, only one of them can be easily adjusted to our program of generalizing the {\it microscopic} formulas (\ref{mattender}) and (\ref{emender}) to include the contributions of the medium. Our main argument to decide in favor of one of the two alternatives (in the present context) is that the electromagnetic energy (\ref{explemen1}) is different in each case, in such a way that, for arbitrary $P$ and $M$, Eq.(\ref{emender}) is compatible with only one of the possible choices.
Making use of (\ref{foliat1}), we decompose the electromagnetic excitation 2-form $H$, the electromagnetic field strength 2-form $F$ and the current $J$ of Maxwell's equations (\ref{covfieldeq1}) and (\ref{vanfder}) as
\begin{eqnarray}
H &=& d\tau\wedge {\cal H} + {\cal D}\,,\label{Max01}\\
F &=& -d\tau\wedge E + B\,,\label{Max02}\\
J &=& -d\tau\wedge j + \rho\,.\label{Max03}
\end{eqnarray}
Accordingly, the foliation of (\ref{covfieldeq1}) yields
\begin{eqnarray}
{\it{l}}_u {\cal D} -\underline{d}\,{\cal H} &=& -j\,,\label{Max07}\\
\underline{d}\,{\cal D}&=& \rho\,,\label{Max08}
\end{eqnarray}
and that of (\ref{vanfder}) gives rise to
\begin{eqnarray}
{\it{l}}_u B +\underline{d}\,E &=& 0\,,\label{Max09}\\
\underline{d}\,B &=& 0\,.\label{Max10}
\end{eqnarray}
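As an orientation aid, in a flat static frame, with the transversal forms identified with the usual field vectors in the standard way, Eqs. (\ref{Max07})--(\ref{Max10}) take the familiar vector form
$$ \nabla\times\vec{\cal H}-\partial _t\vec{\cal D}=\vec{\jmath}\,,\quad \nabla\cdot\vec{\cal D}=\rho\,,\quad \nabla\times\vec{E}+\partial _t\vec{B}=0\,,\quad \nabla\cdot\vec{B}=0\,. $$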
In Eqs. (\ref{Max07})--(\ref{Max10}) we do not prejudge which of the two interpretations is to be given to the different fields. In order to decide, we express (\ref{macMax05}) in terms of the Hodge dual (\ref{foliat2}) of (\ref{Max02})
\begin{equation}
^*F = d\tau\wedge{}^\#B + {}^\#E\,.\label{Max04}
\end{equation}
So we see that (\ref{secondcase}) corresponds to the choice
\begin{equation}
{\cal D} ={}^\#E +P\,,\quad {\cal H}={}^\#B -M\,,\quad J=J^{\rm free}\,,\label{elmagexcits02}
\end{equation}
in the Maxwell equations (\ref{Max07})--(\ref{Max10}), with
\begin{equation}
J^{\rm free}= -d\tau\wedge j^{\rm free} + \rho ^{\rm free}\,,\label{freecurr}
\end{equation}
while (\ref{firstcase}) gives rise to
\begin{equation}
{\cal D} ={}^\#E\,,\quad {\cal H}={}^\#B\,,\quad J=J^{\rm tot}\,,\label{elmagexcits01}
\end{equation}
with
\begin{equation}
J^{\rm tot}= -d\tau\wedge ( j^{\rm free} +{\it{l}}_u P +\underline{d}\,M\,) + (\rho ^{\rm free}-\underline{d}\,P\,)\,,\label{totcurr}
\end{equation}
as calculated from (\ref{totcurr01}) with (\ref{indepexcits}) and (\ref{matexcit01}). Now, in order to check the compatibility of either (\ref{elmagexcits02}) or (\ref{elmagexcits01}) with (\ref{emender}), we add (\ref{Max07}) and (\ref{Max09}), multiplied respectively by $E$ and ${\cal H}$, to get
\begin{equation}
E\wedge{\it{l}}_u {\cal D} + {\it{l}}_u B\wedge{\cal H} +\underline{d}\,(E\wedge {\cal H}) = -E\wedge j\,,\label{Poynting01}
\end{equation}
and on the other hand, we rewrite the transversal part of (\ref{explemen1}) as
\begin{equation}
\underline{\epsilon}^{\rm em} ={1\over 2}\,( E\wedge{\cal D} + B\wedge {\cal H}\,)\,.\label{emendh}
\end{equation}
We can see that, in general, for nonspecified $P$ and $M$, the step from (\ref{Poynting01}) to (\ref{emender}) with $\epsilon ^{\rm em}$ given by (\ref{emendh}) is only possible with the choice (\ref{elmagexcits01}) for the excitations. Indeed, notice that the first term in the rhs of (\ref{emender}) has its origin in the relation
\begin{eqnarray}
&&{\it{l}}_u \underline{\epsilon}^{\rm em} := {\it{l}}_u\,{1\over 2}\left( E\wedge{}^\#E + B\wedge {}^\#B\,\right)\nonumber\\
&&\hskip1.0cm \equiv E\wedge {\it{l}}_u {}^{\#}E + {\it{l}}_u B\wedge {}^{\#}B -({\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm em}_\alpha )_{\bot}\,,\nonumber\\
\label{ident02}
\end{eqnarray}
derived with the help of the identities
\begin{eqnarray}
{\it{l}}_u {}^{\#}E &\equiv &\,{}^{\#}\Bigl(\,{\it{l}}_u E -{\cal \L\/}_u\underline{\vartheta}^\alpha\, e_\alpha\rfloor E\,\Bigr) +{\cal \L\/}_u\underline{\vartheta}^\alpha\wedge\left( e_\alpha\rfloor {}^{\#}E\,\right)\,,\nonumber\\
\label{formula01}\\
{\it{l}}_u {}^{\#}B &\equiv &\,{}^{\#}\Bigl(\,{\it{l}}_u B -{\cal \L\/}_u\underline{\vartheta}^\alpha\wedge e_\alpha\rfloor B\,\Bigr) +{\cal \L\/}_u\underline{\vartheta}^\alpha\wedge\left( e_\alpha\rfloor {}^{\#}B\,\right)\,.\nonumber\\
\label{formula02}
\end{eqnarray}
(Compare with (\ref{dualvar}).) Thus, although (\ref{Poynting01}) holds in both approaches, it can only be brought to the form (\ref{emender}) within the scope of choice (\ref{elmagexcits01}), or equivalently of (\ref{firstcase}); the latter thus reveals itself to be necessary in order to guarantee the general applicability of the fundamental formulas found for microscopic matter. Accordingly, we choose option (\ref{firstcase}), which in practice means that, in order to apply the original formula (\ref{covfieldeq1}) of the fundamental approach, we have to keep in it the excitation $H =H^{\rm bare}={}^*F$ built from bare fields, and to include all contributions of the medium in the matter current by replacing $J$ by $J^{\rm tot}=J -dH^{\rm matt}$, where the new $J$ in $J^{\rm tot}$ is understood to be $J^{\rm free}$.
In the following, we generalize this criterion of strict separation between bare electromagnetic fields (say radiation in vacuum) and matter, in such a way that it also applies to the gravitational case. So, in all field equations and Noether identities established in Sections II and III, we have to leave untouched the excitations $H$, $H_\alpha$, $H_{\alpha\beta}$ built from bare fields as in Section VI, while modifying the matter currents $J$, $\Sigma ^{\rm matt}_\alpha $, $\tau _{\alpha\beta}$. The matter contributions separated from bare fields will enter $\epsilon ^{\rm matt}$ and thus $\epsilon ^{\rm u}$ as defined in Section IX, so that they will play a role in the thermodynamic relations to be established there.
\section{Deduction of the laws of thermodynamics}
\subsection{First approach, in an electromagnetic medium}
In view of the discussion of previous section, we identify $H$ with $H^{\rm bare}$ and, in order to adapt Eq.(\ref{mattender}) to a macroscopic medium with electromagnetic properties, we replace in it (as everywhere) $J$ by $J^{\rm tot}$, that is
\begin{equation}
d\,\epsilon ^{\rm matt} = -{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm matt}_\alpha -R_{\bot}^{\alpha\beta}\wedge\tau _{\alpha\beta} -F_{\bot}\wedge J^{\rm tot}\,.\label{emmattender}
\end{equation}
Taking into account the explicit form (\ref{totcurr}) of $J^{\rm tot}$, we find that (\ref{emmattender}) can be rewritten as
\begin{eqnarray}
&&\mkern-60mu d\,\bigl(\,\epsilon ^{\rm matt} +F\wedge M\,\bigr)\nonumber\\
&&= -{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm matt}_\alpha -R_{\bot}^{\alpha\beta}\wedge\tau _{\alpha\beta}-F_{\bot}\wedge J\nonumber\\
&&\quad + d\tau\wedge\Bigl\{ -F_{\bot}\wedge{\it l}_u P +\underline{F}\wedge{\it l}_u M\Bigr\}\,,\label{diff02}
\end{eqnarray}
where we use simply $J$ instead of $J^{\rm free}$. Let us define the modified matter energy current in the left-hand side (lhs) of (\ref{diff02}) as
\begin{equation}
\epsilon ^{\rm u} := \epsilon ^{\rm matt} + F\wedge M\,.\label{intenergycurr01}
\end{equation}
Then, from (\ref{diff02}) and (\ref{intenergycurr01}) and using the notation (\ref{Max02}) for $F$, we find the more explicit version of (\ref{diff02})
\begin{eqnarray}
d\,\epsilon ^{\rm u} &=&d\tau\wedge\Bigl\{{\Sigma _{\alpha}}_{\bot}^{\rm matt}\wedge{\cal \L\/}_u\underline{\vartheta}^\alpha -\underline{\Sigma}^{\rm matt}_\alpha\,{\cal \L\/}_u u^\alpha +R_{\bot}^{\alpha\beta}\wedge{\tau _{\alpha\beta}}_{\bot}\nonumber\\
&&\hskip1.5cm +E\wedge j +E\wedge{\it l}_u P +B\wedge{\it l}_u M \Bigr\}\,,\label{diff01}
\end{eqnarray}
where we recognize in the rhs, among other forms of energy, the electric and magnetic work contributions $E\wedge{\it l}_u P$ and $B\wedge{\it l}_u M$ respectively. Let us now decompose (\ref{intenergycurr01}) by foliating it according to (\ref{foliat1}) and introducing a suitable notation for the longitudinal and transversal pieces, namely
\begin{eqnarray}
\epsilon ^{\rm u} &=& d\tau\wedge\epsilon ^{\rm u}_{\bot} + \underline{\epsilon}^{\rm u}\nonumber\\
&=:& d\tau\wedge q + \mathfrak{U}\,.\label{intenergycurr02}
\end{eqnarray}
As we are going to justify in the following (in view of the equations satisfied by these quantities), $q$ will play the role of the heat flux 2-form and $\mathfrak{U}$ that of the internal energy 3-form. From (\ref{intenergycurr02}) with (\ref{derivfoliat}) we get
\begin{equation}
d\,\epsilon ^{\rm u}= d\tau\wedge\left(\,{\it l}_u\,\mathfrak{U} - \underline{d}\,q\,\right)\,.\label{energycurrder01}
\end{equation}
At this point, we claim as a characteristic of macroscopic matter systems \cite{Callen} the dependence of the internal energy 3-form $\mathfrak{U}$ on a certain new quantity $\mathfrak{s}$ --the entropy-- which we take to be a spatial 3-form (representing the amount of entropy contained in an elementary volume). Eq.(\ref{secondlaw}) to be found below confirms {\it a posteriori} that $\mathfrak{s}$ actually behaves as expected for entropy. Moreover, the structure of (\ref{diff01}) suggests promoting a shift towards a fully phenomenological approach by considering $\mathfrak{U}$ to possess \cite{Callen} the following general functional dependence
\begin{equation}
\mathfrak{U} = \mathfrak{U}\,(\mathfrak{s}\,,P\,,M\,,\underline{\vartheta}^\alpha \,, u^\alpha\,)\,.\label{uargs}
\end{equation}
In (\ref{uargs}), as in the matter Lagrangian piece (\ref{mattLagcontrib}), tetrads are still taken as arguments of $\mathfrak{U}$ while new variables replace the fundamental matter fields $\psi$ and their covariant derivatives $D\psi$. Connections involved in the derivatives $D\psi$ are thus excluded together with the fields. Besides the new entropy variable and the polarization and magnetization of the medium (induced by external fields), we find the components
(\ref{tetradfoliat}) of the tetrads in terms of which the volume 3-form (\ref{3deta06}) with (\ref{3deta09}) is defined. Accordingly, the Lie derivative of (\ref{uargs}) present in (\ref{energycurrder01}) takes the form
\begin{eqnarray}
{\it l}_u\,\mathfrak{U} &=& {{\partial\mathfrak{U}}\over{\partial\mathfrak{s}}}\,{\it l}_u\mathfrak{s}
+{{\partial\mathfrak{U}}\over{\partial P}}\wedge{\it l}_u P
+{{\partial\mathfrak{U}}\over{\partial M}}\wedge{\it l}_u M\nonumber\\
&&+{{\partial\mathfrak{U}}\over{\partial\underline{\vartheta}^\alpha}}\wedge{\it l}_u\underline{\vartheta}^\alpha
+{{\partial\mathfrak{U}}\over{\partial u^\alpha}}\,{\it l}_u u^\alpha\,,\label{uLiederiv01}
\end{eqnarray}
where we identify the derivatives \cite{Callen} as
\begin{eqnarray}
&& {{\partial\mathfrak{U}}\over{\partial\mathfrak{s}}} =T\,,\quad
{{\partial\mathfrak{U}}\over{\partial P}} =E\,,\quad
{{\partial\mathfrak{U}}\over{\partial M}} =B\,,\label{Uder01}\\
&&{{\partial\mathfrak{U}}\over{\partial\underline{\vartheta}^\alpha}}={\Sigma _{\alpha}}_{\bot}^{\rm matt}\,,\quad
{{\partial\mathfrak{U}}\over{\partial u^\alpha}}=-\underline{\Sigma}^{\rm matt}_\alpha\,.\label{Uder02}
\end{eqnarray}
Let us call attention to the temperature defined in (\ref{Uder01}) as the derivative of the internal energy with respect to the entropy. On the other hand, a plausibility argument to justify the identifications we make in (\ref{Uder02}) can be found in Appendix B. Replacing (\ref{Uder01})--(\ref{Uder02}) in (\ref{uLiederiv01}) we get
\begin{eqnarray}
{\it l}_u\,\mathfrak{U} &=& T\,{\it l}_u\mathfrak{s}
+E\wedge{\it l}_u P
+B\wedge{\it l}_u M\nonumber\\
&&+{\Sigma _{\alpha}}_{\bot}^{\rm matt}\wedge{\it l}_u\underline{\vartheta}^\alpha
-\underline{\Sigma}^{\rm matt}_\alpha\,{\it l}_u u^\alpha\,.\label{uLiederiv02}
\end{eqnarray}
In order to rearrange the terms in (\ref{uLiederiv02}) that are not explicitly invariant into invariant expressions, we replace the ordinary Lie derivatives by covariant Lie derivatives of the form (\ref{thetaLiederiv02}), so that the last terms in (\ref{uLiederiv02}) become
\begin{eqnarray}
{\Sigma _{\alpha}}_{\bot}^{\rm matt}\wedge{\it l}_u\underline{\vartheta}^\alpha -\underline{\Sigma}^{\rm matt}_\alpha\,{\it l}_u u^\alpha
&\equiv& {\Sigma _{\alpha}}_{\bot}^{\rm matt}\wedge{\cal \L\/}_u\underline{\vartheta}^\alpha
-\underline{\Sigma}^{\rm matt}_\alpha\,{\cal \L\/}_u u^\alpha\nonumber\\
&&+\Gamma _{\bot}^{\alpha\beta}\bigl(\,\vartheta _{[\alpha}\wedge\Sigma ^{\rm matt}_{\beta ]}\bigr)_{\bot}\,.\label{identity01}
\end{eqnarray}
Replacing (\ref{identity01}) in (\ref{uLiederiv02}) we finally arrive at
\begin{eqnarray}
{\it l}_u\,\mathfrak{U} &=& T\,{\it l}_u\mathfrak{s} +E\wedge{\it l}_u P +B\wedge{\it l}_u M\nonumber\\
&&+{\Sigma _{\alpha}}_{\bot}^{\rm matt}\wedge{\cal \L\/}_u\underline{\vartheta}^\alpha -\underline{\Sigma}^{\rm matt}_\alpha\,{\cal \L\/}_u u^\alpha\nonumber\\
&&+\Gamma _{\bot}^{\alpha\beta}\bigl(\,\vartheta _{[\alpha}\wedge\Sigma ^{\rm matt}_{\beta ]}\bigr)_{\bot}\,.\label{uLiederiv03}
\end{eqnarray}
In the rhs of (\ref{uLiederiv03}), the term containing explicitly the Lorentz connection is obviously noninvariant. Its emergence is due to an inherent limitation of the phenomenological approach, namely the absence of explicit dependence of $\mathfrak{U}$ on fundamental matter fields and their derivatives, together with connections. Indeed, if matter fields $\psi$ with derivatives $d\psi$ were present, connections would be required to define covariant derivatives preserving local symmetry. However, in the phenomenological case, $\mathfrak{U}$ depends neither on $\psi$ nor on $d\psi$, so that (since $d\psi$ and connections need each other) it cannot give rise to invariant expressions, whether or not one takes it to depend on the connections. The noninvariant term in (\ref{uLiederiv03}), reflecting the lack of invariance of the terms in the lhs of (\ref{identity01}), will be dragged to equations (\ref{energycurrder02}) and (\ref{secondlaw}) below. (We will find a similar situation in (\ref{uLiederiv04bis}) and (\ref{diff01tot}).) In any case, let us mention that the invariance is restored in the particular case when the macroscopic free spin current $\tau _{\alpha\beta}$ vanishes.
Making use of (\ref{uLiederiv03}), Eq.(\ref{diff01}) reduces to
\begin{eqnarray}
d\,\epsilon ^{\rm u} &=& d\tau\wedge\Bigl[\,{\it l}_u\,\mathfrak{U} - T\,{\it l}_u\mathfrak{s} + E\wedge j + R_{\bot}^{\alpha\beta}\wedge{\tau _{\alpha\beta}}_{\bot}\nonumber\\
&&\hskip1.5cm -\Gamma _{\bot}^{\alpha\beta}\bigl(\,\vartheta _{[\alpha}\wedge\Sigma ^{\rm matt}_{\beta ]}\bigr)_{\bot} \,\Bigr]\,,\label{energycurrder02}
\end{eqnarray}
and finally, comparison of (\ref{energycurrder02}) with (\ref{energycurrder01}), making use of (\ref{spincurrconserv}), yields
\begin{equation}
{\it l}_u\mathfrak{s} -{{\underline{d}\,q}\over T} = {1\over T}\,\bigl[\,E\wedge j + R_{\bot}^{\alpha\beta}\wedge{\tau _{\alpha\beta}}_{\bot} +\Gamma _{\bot}^{\alpha\beta}\bigl(\,D\,\tau _{\alpha\beta}\bigr)_{\bot}\,\bigr]\,.\label{secondlaw}
\end{equation}
In the lhs of (\ref{secondlaw}) we find the rate of change of the entropy 3-form combined in a familiar way with heat flux and temperature. The interpretation of the first term in the rhs is facilitated by the fact that, according to Ohm's law $j=\sigma\,{}^\# E$, it is proportional to $E\wedge j ={1\over\sigma} j\wedge{}^\# j \geq 0$, so that it is responsible for entropy growth. The second term is analogous to the first one. If we suppose that all terms in the rhs of (\ref{secondlaw}) are $\geq 0$, or, in any case, for vanishing macroscopic free spin current $\tau _{\alpha\beta}$, we can consider (\ref{secondlaw}) to be a particular realization of the second law of thermodynamics.
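This positivity can be spelled out in one line (a sketch under the assumptions that $\sigma$ is a positive scalar conductivity and that the spatial Hodge dual obeys ${}^{\#\#}=\mathrm{id}$ on spatial 1-forms):

```latex
% Sketch: positivity of the Joule term via Ohm's law.
j=\sigma\,{}^{\#}E
\quad\Longrightarrow\quad
E\wedge j=\sigma\,E\wedge{}^{\#}E
={1\over\sigma}\,j\wedge{}^{\#}j\,,
\qquad
E\wedge{}^{\#}E=|E|^{2}\,\overline{\eta}\,\geq\,0\,.
```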
On the other hand, the first law is none other than the conservation equation (\ref{emmattender}) for matter energy, rewritten as (\ref{diff01}) in terms of the internal energy current 3-form (\ref{intenergycurr01}). This reformulation is necessary in order to bring to light the components of $\epsilon ^{\rm u}$ defined in (\ref{intenergycurr02}), that is, heat flux and internal energy respectively, thus making it possible to compare the first law with the second one (\ref{secondlaw}) deduced above. (By the way, notice that the inversion of (\ref{intenergycurr01}) to express $\epsilon ^{\rm matt}$ in terms of $\epsilon ^{\rm u}$ suggests interpreting $\epsilon ^{\rm matt}$ as a sort of enthalpy current 3-form.) Making use of (\ref{energycurrder01}), the first law (\ref{diff01}) can be brought to the more compact form
\begin{eqnarray}
{\it l}_u\,\mathfrak{U} -\underline{d}\,q &=& -\bigl(\,{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm matt}_\alpha \bigr)_{\bot} +R_{\bot}^{\alpha\beta}\wedge{\tau _{\alpha\beta}}_{\bot}\nonumber\\
&&+E\wedge j +E\wedge{\it l}_u P +B\wedge{\it l}_u M\,.\label{uLiederiv03bis}
\end{eqnarray}
The first term in the rhs of (\ref{uLiederiv03bis}), that is, the longitudinal part of ${\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm matt}_\alpha $, encloses information about mechanical work, whose form depends on the explicit matter energy-momentum 3-form we consider. In particular, by taking it to consist of a pressure term plus an undefined part, as in (\ref{enmom01}), we find
\begin{equation}
\bigl(\,{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm matt}_\alpha\,\bigr)_{\bot} = \bigl(\,{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm undef}_\alpha\,\bigr)_{\bot} +{\cal \L\/}_u\underline{\vartheta}^\alpha\wedge p\,\overline{\eta}_\alpha\,,\label{presscontrib}
\end{equation}
where the last term, in view of (\ref{volLieder}), turns out to be
\begin{equation}
{\cal \L\/}_u\underline{\vartheta}^\alpha\wedge p\,\overline{\eta}_\alpha = p\,{\it l}_u\overline{\eta}\,,\label{pressderiv}
\end{equation}
thus being identifiable as the ordinary pressure contribution to work, namely pressure times the derivative of the volume. It is worth remarking that the emergence of this pressure contribution to the first law does not occur through derivation of $\mathfrak{U}$ with respect to the volume $\overline{\eta}$ (which is not an independent variable by itself, being defined from the tetrads as (\ref{3deta06})), but with respect to the tetrad components, as in (\ref{Uder02}). Replacing (\ref{presscontrib}) with (\ref{pressderiv}) in the first law equation (\ref{uLiederiv03bis}), we get for it the more explicit formulation
\begin{eqnarray}
{\it l}_u\,\mathfrak{U} -\underline{d}\,q &=& -\bigl(\,{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm undef}_\alpha \bigr)_{\bot}
+R_{\bot}^{\alpha\beta}\wedge{\tau _{\alpha\beta}}_{\bot} +E\wedge j\nonumber\\
&&-p\,{\it l}_u\overline{\eta}+E\wedge{\it l}_u P +B\wedge{\it l}_u M \,,\label{firstlaw01}
\end{eqnarray}
where one recognizes the familiar contributions of internal energy, heat flux and work [including $\bigl(\,{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm undef}_\alpha\,\bigr)_{\bot}$ among the latter], together with additional terms. In particular, $E\wedge j$ and the formally similar quantity $R_{\bot}^{\alpha\beta}\wedge{\tau _{\alpha\beta}}_{\bot}$ are present in (\ref{firstlaw01}) due to irreversibility, as read out from (\ref{secondlaw}).
\subsection{General approach}
Let us extend the previous results to the most general scenario in which we modify all matter currents in analogy to $J^{\rm (tot)}$ in order to take into account further possible contributions of a medium. In an attempt to expand the electromagnetic model, we introduce --associated with gravitational interactions-- translational and Lorentz generalizations of the electromagnetic polarization and magnetization of macroscopic matter. This may constitute a merely formal exercise. However, it can also be understood as a proposal to look for new properties of material media, since we are going to consider the hypothesis of certain new phenomenological matter contributions to the sources of gravity, acting perhaps as dark matter.
Generalizing (\ref{firstcase}), we propose to modify the complete set of field equations (\ref{covfieldeq1})--(\ref{covfieldeq3}) as
\begin{eqnarray}
dH &=&J^{\rm (tot)}\,,\label{covfieldeq1bis} \\
DH_\alpha &=&\Pi ^{\rm (tot)}_\alpha\,,\label{covfieldeq2bis}\\
DH_{\alpha\beta} +\vartheta _{[\alpha }\wedge H_{\beta ]}&=&\tau ^{\rm (tot)}_{\alpha\beta}\,,\label{covfieldeq3bis}
\end{eqnarray}
with bare excitations and total currents consisting of the sum of free and bound contributions, defined respectively as
\begin{eqnarray}
J^{\rm (tot)} &=& J-dH^{\rm matt}\,,\label{Jtot} \\
\Pi ^{\rm (tot)}_\alpha &=& \Pi _\alpha -DH^{\rm matt}_\alpha \,,\label{Pitot}\\
\tau ^{\rm (tot)}_{\alpha\beta} &=& \tau _{\alpha\beta} - ( DH^{\rm matt}_{\alpha\beta} +\vartheta _{[\alpha }\wedge H^{\rm matt}_{\beta ]})\,,\label{Tautot}
\end{eqnarray}
where we introduce generalizations of the electromagnetic polarization and magnetization (\ref{matexcit01}) as
\begin{eqnarray}
H^{\rm matt} &=& -d\tau\wedge M + P\,,\label{matexcit01bis}\\
H_\alpha ^{\rm matt} &=& -d\tau\wedge M_\alpha + P_\alpha \,,\label{matexcit02}\\
H_{\alpha\beta}^{\rm matt} &=& -d\tau\wedge M_{\alpha\beta} + P_{\alpha\beta}\,,\label{matexcit03}
\end{eqnarray}
whatever the physical correspondence of these quantities may be. Since, as discussed above, only matter currents are to be modified, we understand (\ref{Pitot}) in the sense that only the matter part is altered, that is
\begin{equation}
\Pi ^{\rm (tot)}_\alpha = \Sigma ^{\rm matt}_{{\rm (tot)}\alpha } +\Sigma ^{\rm em}_\alpha +E_\alpha\,,\label{totmomentdecomp}
\end{equation}
with
\begin{equation}
\Sigma ^{\rm matt}_{{\rm (tot)}\alpha } = \Sigma ^{\rm matt}_\alpha -DH^{\rm matt}_\alpha\,.\label{totmattmom}
\end{equation}
In view of (\ref{totmattmom}), we extend (\ref{mattenergy}) as
\begin{equation}
\epsilon _{\rm (tot)}^{\rm matt} := -u^\alpha\,\Sigma ^{\rm matt}_{{\rm (tot)}\alpha } =\epsilon ^{\rm matt} + u^\alpha DH^{\rm matt}_\alpha\,,\label{totmattenergy}
\end{equation}
and, as a generalization of (\ref{mattender}) to include macroscopic matter, we postulate the formally analogous equation
\begin{equation}
d\,\epsilon _{\rm (tot)}^{\rm matt} = -{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm matt}_{{\rm (tot)}\alpha } -R_{\bot}^{\alpha\beta}\wedge\tau ^{\rm (tot)}_{\alpha\beta} -F_{\bot}\wedge J^{\rm (tot)}\,,\label{genmattender01}
\end{equation}
as the law of conservation of total matter energy. Eq.(\ref{genmattender01}) can be rearranged as
\begin{eqnarray}
&&\mkern-60mu d\,\bigl(\,\epsilon ^{\rm matt} +F\wedge M + T^\alpha\wedge M_\alpha + R^{\alpha\beta}\wedge M_{\alpha\beta}\,\bigr)\nonumber\\
&&= -{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm matt}_\alpha -R_{\bot}^{\alpha\beta}\wedge\tau _{\alpha\beta} -F_{\bot}\wedge J\nonumber\\
&&\quad + d\tau\wedge\Bigl\{ -F_{\bot}\wedge{\it l}_u P +\underline{F}\wedge{\it l}_u M\nonumber\\
&&\hskip1.6cm -T_{\bot}^\alpha\wedge{\cal \L\/}_u P_\alpha +\underline{T}^\alpha\wedge{\cal \L\/}_u M_\alpha\nonumber\\
&&\hskip1.6cm -R_{\bot}^{\alpha\beta}\wedge{\cal \L\/}_u P_{\alpha\beta} +\underline{R}^{\alpha\beta}\wedge{\cal \L\/}_u M_{\alpha\beta}\Bigr\}\,.\nonumber\\
\label{diff03bis}
\end{eqnarray}
(Compare with (\ref{diff02}).) Without going into details, we proceed in analogy to the former case. We define a similar internal energy current 3-form
\begin{equation}
\widehat{\epsilon}^u := \epsilon ^{\rm matt} +F\wedge M + T^\alpha\wedge M_\alpha + R^{\alpha\beta}\wedge M_{\alpha\beta}\,,\label{totintenergy}
\end{equation}
decomposing as
\begin{equation}
\widehat{\epsilon}^{\rm u} =: d\tau\wedge \widehat{q} + \widehat{\mathfrak{U}}\,.\label{intenergycurr02bis}
\end{equation}
Supposing the functional form of $\widehat{\mathfrak{U}}$ to be
\begin{equation}
\widehat{\mathfrak{U}} = \widehat{\mathfrak{U}}\,(\widehat{\mathfrak{s}}\,,P\,,M\,,P_\alpha\,,M_\alpha\,,P_{\alpha\beta}\,,M_{\alpha\beta}\,,\underline{\vartheta}^\alpha \,, u^\alpha\,)\,,\label{uargsbis}
\end{equation}
and with the pertinent definitions analogous to (\ref{Uder01}) and (\ref{Uder02}), first we get
\begin{eqnarray}
{\it l}_u\,\widehat{\mathfrak{U}} &=& \widehat{T}\,{\it l}_u\widehat{\mathfrak{s}} +{\Sigma _{\alpha}}_{\bot}^{\rm matt}\wedge{\it l}_u\underline{\vartheta}^\alpha -\underline{\Sigma}^{\rm matt}_\alpha\,{\it l}_u u^\alpha\nonumber\\
&&-F_{\bot}\wedge{\it l}_u P +\underline{F}\wedge{\it l}_u M\nonumber\\
&&-T_{\bot}^\alpha\wedge{\it l}_u P_\alpha +\underline{T}^\alpha\wedge{\it l}_u M_\alpha\nonumber\\
&&-R_{\bot}^{\alpha\beta}\wedge{\it l}_u P_{\alpha\beta} +\underline{R}^{\alpha\beta}\wedge{\it l}_u M_{\alpha\beta}\,,\label{uLiederiv02bis}
\end{eqnarray}
and finally, suitably rearranging the noncovariant quantities in (\ref{uLiederiv02bis}) into covariant ones defined in analogy to (\ref{thetaLiederiv01}), we arrive at
\begin{eqnarray}
{\it l}_u\,\widehat{\mathfrak{U}} &=& \widehat{T}\,{\it l}_u\widehat{\mathfrak{s}} +{\Sigma _{\alpha}}_{\bot}^{\rm matt}\wedge{\cal \L\/}_u\underline{\vartheta}^\alpha -\underline{\Sigma}^{\rm matt}_\alpha\,{\cal \L\/}_u u^\alpha\nonumber\\
&&-F_{\bot}\wedge{\it l}_u P +\underline{F}\wedge{\it l}_u M\nonumber\\
&&-T_{\bot}^\alpha\wedge{\cal \L\/}_u P_\alpha +\underline{T}^\alpha\wedge{\cal \L\/}_u M_\alpha\nonumber\\
&&-R_{\bot}^{\alpha\beta}\wedge{\cal \L\/}_u P_{\alpha\beta} +\underline{R}^{\alpha\beta}\wedge{\cal \L\/}_u M_{\alpha\beta}\nonumber\\
&&+\Gamma _{\bot}^{\alpha\beta}\Bigl[\,D\,\bigl(\tau ^{\rm (tot)}_{\alpha\beta} -\tau _{\alpha\beta}\bigr) +\vartheta _{[\alpha}\wedge\Sigma ^{\rm matt}_{\beta ]{\rm (tot)}}\Bigr]_{\bot}
\,.\label{uLiederiv04bis}
\end{eqnarray}
Assuming that the analog of (\ref{spincurrconserv}) holds for generalized matter, that is
\begin{equation}
D\,\tau ^{\rm (tot)}_{\alpha\beta} +\vartheta _{[\alpha}\wedge\Sigma ^{\rm matt}_{\beta ]{\rm (tot)}} =0\,,\label{totspinconserv}
\end{equation}
from (\ref{diff03bis}) with (\ref{totintenergy}) and (\ref{uLiederiv04bis}) follows
\begin{eqnarray}
d\,\widehat{\epsilon}^{\rm u} &=& d\tau\wedge\Bigl[\,{\it l}_u\,\widehat{\mathfrak{U}} -\widehat{T}\,{\it l}_u\widehat{\mathfrak{s}} -F_{\bot}\wedge j + R_{\bot}^{\alpha\beta}\wedge{\tau _{\alpha\beta}}_{\bot}\nonumber\\
&&\hskip1.5cm +\Gamma _{\bot}^{\alpha\beta}\bigl(\,D\,\tau _{\alpha\beta}\bigr)_{\bot}\,\Bigr]\,,\label{diff01tot}
\end{eqnarray}
giving rise, when compared with the differential of (\ref{intenergycurr02bis}), to the second law of thermodynamics with exactly the same form as (\ref{secondlaw}).
Regarding the first law (\ref{diff03bis}) with (\ref{totintenergy})--(\ref{uargsbis}), taking (\ref{enmom01}) as before and using the notation (\ref{Max02}), it takes the form
\begin{eqnarray}
{\it l}_u\,\widehat{\mathfrak{U}} -\underline{d}\,\widehat{q} &=& -\bigl(\,{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm undef}_\alpha \bigr)_{\bot}
+R_{\bot}^{\alpha\beta}\wedge{\tau _{\alpha\beta}}_{\bot} +E\wedge j\nonumber\\
&&-p\,{\it l}_u\overline{\eta}+E\wedge{\it l}_u P +B\wedge{\it l}_u M \nonumber\\
&&-T_{\bot}^\alpha\wedge{\cal \L\/}_u P_\alpha +\underline{T}^\alpha\wedge{\cal \L\/}_u M_\alpha\nonumber\\
&&-R_{\bot}^{\alpha\beta}\wedge{\cal \L\/}_u P_{\alpha\beta} +\underline{R}^{\alpha\beta}\wedge{\cal \L\/}_u M_{\alpha\beta}\,,\label{firstlaw01bis}
\end{eqnarray}
which only differs from (\ref{firstlaw01}) in the additional work contributions corresponding to the gravitational generalizations of polarization and magnetization.
\section{Final remarks}
\subsection{Gravity and conservation of total energy}
Let us examine the role played by gravity in the conservation of energy. In our approach, the first law of thermodynamics can alternatively take the forms (\ref{emmattender}) or (\ref{diff01}), being concerned with the matter energy current either in its form $\epsilon ^{\rm matt}$ or $\epsilon ^{\rm u}$. Differentiation of such matter energy currents generates work expressions, the latter acting physically by transforming themselves into different forms of energy. So, mechanical work can produce electric effects, etc. However, these subsequent transformations are not explicitly shown by the thermodynamic equation (\ref{diff01}). Nor is the sum of the matter and electromagnetic energy currents conserved separately, since the addition of (\ref{mattender}) and (\ref{emender}) yields
\begin{eqnarray}
d\,(\epsilon ^{\rm matt}+\epsilon ^{\rm em}) &&= -{\cal \L\/}_u\,\vartheta ^\alpha\wedge (\,\Sigma ^{\rm matt}_\alpha +\Sigma ^{\rm em}_\alpha \,) -R_{\bot}^{\alpha\beta}\wedge\tau _{\alpha\beta}\nonumber\\
&&\neq 0\,.\label{energyconserv02}
\end{eqnarray}
Conservation of energy in an absolute sense, with all possible transformations of different forms of energy into each other taken into account, requires including the gravitational energy as well. Indeed, from (\ref{energyconserv01}) with (\ref{energydec}) we get
\begin{equation}
d\,(\epsilon ^{\rm matt}+\epsilon ^{\rm em}+\epsilon ^{\rm gr})=0\,.\label{energyconserv03}
\end{equation}
This conservation equation, concerned with all forms of energy simultaneously, completes the first law of thermodynamics (\ref{diff01}), which concentrates on the behavior of only the matter energy current $\epsilon ^{\rm u}$. The total energy flux $\epsilon _{\bot}$ in (\ref{energyconserv03}) includes heat flux, Poynting flux in a strict sense and other Poynting-like contributions. The integrated form (\ref{exactform02}) of (\ref{energyconserv03}) can be seen as a sort of generalized Bernoulli's principle.
\subsection{Thermal radiation}
The formalism is not necessarily restricted to gauge theoretically derived forms of energy. It is flexible enough to deal with other thermodynamic approaches, as is the case for thermal radiation, the latter being described not in terms of electromagnetic fields but as a photon gas \cite{Prigogine} \cite{Demirel}. A body in thermal equilibrium is modeled as a cavity filled with a gas of thermal photons in continuous inflow and outflow. The number of photons, the internal energy and the entropy contained in the cavity, the pressure of thermal radiation on the walls and the chemical potential are all functions of the temperature, being respectively given by
\begin{eqnarray}
\mathcal{N} &=& \alpha\,T^3\,\overline{\eta}\,,\label{photgas01}\\
\mathfrak{U} &=& \beta\,T^4\,\overline{\eta}\,,\label{photgas02}\\
T \mathfrak{s} &=& {4\over 3}\,\mathfrak{U}\,,\label{photgas03}\\
p\,\overline{\eta} &=& {1\over 3}\,\mathfrak{U}\,,\label{photgas04}\\
\mu &=& 0\,.\label{photgas05}
\end{eqnarray}
The quantities (\ref{photgas01})--(\ref{photgas05}) automatically satisfy the relation
\begin{equation}
{\it l}_u\,\mathfrak{U} = T\,{\it l}_u\mathfrak{s} -p\,{\it l}_u \overline{\eta}\,,\label{uLiederiv09}
\end{equation}
which constitutes a particular case of the thermodynamic equations found above. Indeed, Eq. (\ref{uLiederiv03}) with vanishing $P$, $M$ and $\tau _{\alpha\beta}$ reduces to
\begin{eqnarray}
{\it l}_u\,\mathfrak{U} = T\,{\it l}_u\mathfrak{s} +{\Sigma _{\alpha}}_{\bot}^{\rm matt}\wedge{\cal \L\/}_u\underline{\vartheta}^\alpha -\underline{\Sigma}^{\rm matt}_\alpha\,{\cal \L\/}_u u^\alpha \,.\label{uLiederiv07}
\end{eqnarray}
By handling the photon gas as matter, and taking for it an energy-momentum (\ref{enmom03}) with $\widetilde{\Sigma}^{\rm undef}_\alpha =0$ as
\begin{equation}
\Sigma ^{\rm matt}_\alpha = u_\alpha\,\epsilon ^{\rm matt} -d\tau\wedge p\,\overline{\eta}_\alpha\,,\label{enmom04}
\end{equation}
replacement of (\ref{enmom04}) in (\ref{uLiederiv07}) yields
\begin{equation}
{\it l}_u\,\mathfrak{U} = T\,{\it l}_u\mathfrak{s} -p\,{\it l}_u \overline{\eta} +\epsilon ^{\rm matt}_{\bot}\wedge u_\alpha\,T_{\bot}^\alpha\,,\label{uLiederiv08}
\end{equation}
from which, for vanishing torsion, (\ref{uLiederiv09}) follows.
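The consistency of (\ref{uLiederiv09}) with the photon-gas relations (\ref{photgas02})--(\ref{photgas04}) can also be checked directly. The following sketch uses scalar stand-ins for the 3-forms, with $\mathfrak{U}=\beta T^4 V$, $\mathfrak{s}={4\over 3}\beta T^3 V$ and $p={1\over 3}\beta T^4$, and verifies coefficient by coefficient that $d\mathfrak{U} = T\,d\mathfrak{s} - p\,dV$ (the constant $\beta$ and the sample state are arbitrary):

```python
import math

beta = 1.0  # overall radiation constant; only the functional form matters here

U = lambda T, V: beta * T**4 * V                 # internal energy
S = lambda T, V: (4.0 / 3.0) * beta * T**3 * V   # entropy, from T*s = (4/3)*U
p = lambda T: (1.0 / 3.0) * beta * T**4          # radiation pressure, p*V = (1/3)*U

# central finite differences at an arbitrary state (T0, V0)
T0, V0, h = 2.0, 3.0, 1e-6
dU_dT = (U(T0 + h, V0) - U(T0 - h, V0)) / (2 * h)
dU_dV = (U(T0, V0 + h) - U(T0, V0 - h)) / (2 * h)
dS_dT = (S(T0 + h, V0) - S(T0 - h, V0)) / (2 * h)
dS_dV = (S(T0, V0 + h) - S(T0, V0 - h)) / (2 * h)

# dU = T dS - p dV, coefficient by coefficient
assert math.isclose(dU_dT, T0 * dS_dT, rel_tol=1e-6)
assert math.isclose(dU_dV, T0 * dS_dV - p(T0), rel_tol=1e-6)
print("dU = T dS - p dV holds for the photon gas")
```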
On the other hand, for thermal radiation, the second law (\ref{secondlaw}) reduces \cite{Prigogine} to that of reversible processes
\begin{equation}
{\it l}_u\mathfrak{s} -{{\underline{d}\,q}\over T} = 0\,,\label{revsecondlaw}
\end{equation}
and since the number of photons (\ref{photgas01}) inside the cavity is in general not constant, we propose for this quantity the continuity equation
\begin{equation}
{\it l}_u\mathcal{N} +\underline{d} j_{_N} = \sigma _{_N}\,,\label{photnumber}
\end{equation}
where we introduce $j_{_N}$ as the photon flux and $\sigma _{_N}$ as the rate of photon creation or destruction. Now, from (\ref{photgas01})--(\ref{photgas03}), replacing the values
\begin{equation}
\alpha ={{16\,\pi\,k_B^3\,\zeta (3)}\over{c^3\,h^3}}\,,\qquad \beta ={{8\,\pi ^5\,k_B^4}\over{15\,c^3\,h^3}}\,,\label{alphabeta01}
\end{equation}
with $\zeta $ the Riemann zeta function, such that $\zeta (3)\approx 1.202$, and $k_B$ the Boltzmann constant, we get the relation
\begin{equation}
\mathfrak{s} = {4\over 3}\,{\mathfrak{U}\over T} = {{4\beta}\over{3\alpha}}\,\mathcal{N}\approx 3.6\,k_B\,\mathcal{N}\,,\label{alphabeta02}
\end{equation}
so that (\ref{revsecondlaw}) with (\ref{alphabeta02}) yields
\begin{equation}
\underline{d}\,q = T\,{\it l}_u\mathfrak{s} \approx 3.6\,k_B\,T\,{\it l}_u\mathcal{N}\,.\label{photheatflux}
\end{equation}
With (\ref{photnumber}), Eq.(\ref{photheatflux}) transforms into
\begin{equation}
\underline{d}\,q \approx 3.6\,k_B\,T\,(\sigma _{_N} -\underline{d} j_{_N})\,.\label{fluxrelat}
\end{equation}
According to (\ref{fluxrelat}), the divergence of the heat flux $q$ of thermal radiation is proportional to the divergence of the photon flux $j_{_N}$ continuously emitted and absorbed by a body, and it also depends on possible additional contributions $\sigma _{_N}$ due to photon production or destruction.
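The numerical coefficient in (\ref{alphabeta02}) can be reproduced from (\ref{alphabeta01}) with a few lines (a quick check: the common factor $k_B^3/(c^3h^3)$ cancels in the ratio $4\beta/3\alpha$):

```python
from math import pi

zeta3 = 1.2020569031595943  # Apery's constant, zeta(3)

alpha = 16.0 * pi * zeta3   # dropping the common factor k_B^3 / (c^3 h^3)
beta = 8.0 * pi**5 / 15.0   # dropping the common factor k_B^4 / (c^3 h^3)

ratio = 4.0 * beta / (3.0 * alpha)  # entropy per photon in units of k_B
print(round(ratio, 2))  # → 3.6
```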
\section{Conclusions}
We propose an approach to thermodynamics compatible with gauge theories of gravity and beyond. Indeed, the formalism developed in the present paper is explicitly covariant under local Lorentz transformations, except for the symmetry-breaking terms present in (\ref{secondlaw}) and (\ref{diff01tot}) (which vanish for $\tau _{\alpha\beta}=0$). Moreover, local translational symmetry as much as local $U(1)$ symmetry are also present in our equations as hidden symmetries, due to the particular realization of the Poincar\'e$\otimes U(1)$ gauge group used to derive the field equations and Noether identities which constituted our starting point \cite{Tresguerres:2007ih} \cite{Tresguerres:2002uh} \cite{Tresguerres:2012nu}. In particular, the thermodynamic equations, concerned with the exchange between different forms of energy, are both Poincar\'e and $U(1)$ gauge invariant.
The laws of thermodynamics deduced by us concentrate on the conservation of the matter energy current $\epsilon ^{\rm matt}$ (or, equivalently, $\epsilon ^{\rm u}$), but in addition we complete the scheme by accounting for the conservation of total energy, as discussed in Sec. X. In this way we synthesize the total energy balance in classical physics of material media.
\section{Introduction}\label{sec:introduction}
Abundant applications raise the demands of training and inference deep neural networks (DNNs) efficiently
on diverse hardware platforms ranging from cloud servers to embedded devices.
Moreover, computational graph-level optimization of deep neural network,
like tensor operator fusion~\cite{wang2010kernel}, may introduce new tensor operators.
Thus, manually optimized tensor operators provided by hardware-specific libraries have limitations in terms of
supporting new operators or hardware platforms,
so automatically optimizing tensor operators on diverse hardware platforms
is essential for large-scale deployment and application of deep learning technologies in real-world problems.
Tensor operator optimization is essentially a combinatorial optimization problem.
The objective function is the performance of a tensor operator on specific hardware platform,
which should be maximized with respect to the hyper-parameters of corresponding device code,
such as how to tile a matrix or whether to unroll a loop.
Thereafter, we will refer to a tuple of hyper-parameters determining device code as a configuration,
and the set of all possible configurations as a configuration space or search space.
Unlike many typical problems of this type, such as travelling salesman problem,
the objective function of tensor operator optimization is a black box and expensive to sample.
One has to compile a device code with a specific configuration
and run it on real hardware to get the corresponding performance metric.
Therefore, a desired method for optimizing tensor operators should find the best configuration with as few samples as possible.
The expensive objective function makes solving tensor operator optimization problem with traditional combinatorial optimization methods,
for example, simulated annealing (SA)~\cite{kirkpatrick1983optimization} and evolutionary algorithms (EA)~\cite{back1993overview}, almost impossible.
Although these algorithms inherently support combinatorial search spaces~\cite{youssef2001evolutionary},
they do not take sample-efficiency into account,
thus thousands of samples or even more are usually needed,
which is unacceptable when tuning tensor operators in product environments.
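To make the sample cost concrete, a minimal simulated-annealing loop over a combinatorial configuration space might look as follows. This is an illustrative sketch, not any particular library's implementation: `measure` stands for the expensive compile-and-run step, and the search space, cooling schedule and parameter names are our own assumptions.

```python
import math
import random

def simulated_annealing(measure, space, n_trials=1000, t0=1.0, rng=random):
    """Maximize `measure` over a combinatorial `space` of hyper-parameters.
    Every trial costs one compilation plus one run on real hardware, so
    n_trials on the order of thousands quickly becomes prohibitive."""
    config = {k: rng.choice(v) for k, v in space.items()}
    perf = measure(config)
    best, best_perf = dict(config), perf
    for step in range(n_trials):
        key = rng.choice(list(space))                    # perturb one hyper-parameter
        candidate = {**config, key: rng.choice(space[key])}
        cand_perf = measure(candidate)                   # expensive black-box sample
        temp = max(t0 * (1.0 - step / n_trials), 1e-9)   # linear cooling schedule
        # accept improvements always, deteriorations with Boltzmann probability
        if cand_perf >= perf or rng.random() < math.exp((cand_perf - perf) / temp):
            config, perf = candidate, cand_perf
        if perf > best_perf:
            best, best_perf = dict(config), perf
    return best, best_perf

# toy stand-in for real measurements: optimum at tile=8, unroll=0
space = {'tile': [1, 2, 4, 8, 16, 32], 'unroll': [0, 1]}
best, best_perf = simulated_annealing(
    lambda c: -abs(c['tile'] - 8) - c['unroll'], space, rng=random.Random(0))
print(best, best_perf)
```

Note that the 1000 calls to `measure` are free for the toy objective above, but each would be a full compile-and-run cycle when tuning a real tensor operator.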
On the other hand, sequential model based optimization (SMBO) methods
are proved sample-efficient for optimizing black-box functions with continuous search spaces~\cite{srinivas2009gaussian,hernandez2014predictive,wang2017max}.
However, when optimizing ones with combinatorial search spaces,
SMBO methods are not as sample-efficient as their continuous counterparts~\cite{hutter2011sequential},
because there is a lack of prior assumptions about the objective functions,
such as continuity and differentiability in the case of continuous search spaces.
For example, if one could assume an objective function with a continuous search space is infinitely differentiable,
a Gaussian process with a radial basis function (RBF) kernel could be used to model the objective function.
In this way, a sample provides not only a single value at a point
but also the local properties of the objective function in its neighborhood or even global properties,
which results in a high sample-efficiency.
In contrast, SMBO methods for combinatorial optimization suffer from poor sample-efficiency
due to the lack of proper prior assumptions and corresponding surrogate models.
Besides sample-efficiency,
another weakness of SMBO methods is the extra burden introduced by training and optimizing surrogate models.
Although this burden can be safely ignored for many ultra-expensive objective functions,
such as hyperparameter tuning and architecture search for neural networks~\cite{elsken2018neural},
in which a trial usually takes several hours or more,
this is not the case in the context of tensor operator optimization,
since compiling and executing a tensor operator usually takes at most tens of seconds.
In this work, we propose a lightweight model-free method, OpEvo (\textbf{Op}erator \textbf{Evo}lution), which combines both advantages of EA and SMBO
by leveraging prior assumptions on combinatorial objective functions in an evolutionary framework.
Although there is no nice property like continuity or differentiability,
we construct topological structures over search spaces of tensor operators
by assuming similar configurations of a tensor operator will result in similar performance,
and then introduce a topology-aware mutation operation by proposing a $q$-random walk distribution
to leverage the constructed topological structures for better trade-off between exploration and exploitation.
In this way, OpEvo not only inherits the support of combinatorial search spaces and model-free nature of EA,
but also benefits from the prior assumptions about combinatorial objective functions,
so that OpEvo can efficiently optimize tensor operators. The contributions of the paper are four-fold:
\begin{itemize}
\item We construct topological structures for search spaces of tensor operator optimization
by assuming similar configurations of a tensor operator will result in similar performance;
\item We define $q$-random walk distributions over combinatorial search spaces equipped with topological structures
for better trade-off between exploitation and exploration;
\item We propose OpEvo, which can leverage the topological structures over search spaces
by introducing a novel topology-aware mutation operation based on $q$-random walk distributions;
\item We evaluate the proposed algorithm with comprehensive experiments on both Nvidia and AMD platforms.
Our experiments demonstrate that, compared with state-of-the-art (SOTA) methods,
OpEvo finds the best configuration with the lowest variance
and the least effort in terms of the number of trials and wall-clock time.
\end{itemize}
The rest of this paper is organized as follows.
We summarize the related work in Section~\ref{sec:related work},
and then introduce a formal description of the tensor operator optimization problem
and construct topological structures in Section~\ref{sec:problem formulation}.
In Section~\ref{sec:methodology},
we describe the OpEvo method in detail, and we demonstrate its strength with experiments on optimizing typical tensor operators in Section~\ref{sec:experiments}.
Finally, we conclude in Section~\ref{sec:conclusion}.
\section{Related Work}\label{sec:related work}
As a class of popular methods for expensive black-box optimization,
SMBO methods are potential solutions for tensor operator optimization.
Although classic SMBO methods, such as Bayesian optimization (BO) with Gaussian process surrogate,
are usually used to optimize black-box functions with continuous search spaces,
much work has been done on using SMBO to optimize combinatorial black-box functions.
\citet{hutter2011sequential} proposed SMAC,
which uses a random forest as a surrogate model to successfully optimize algorithm configurations.
\citet{bergstra2011algorithms} proposed TPE,
which uses a tree-structured Parzen estimator as a surrogate model
to optimize hyperparameters of neural networks and deep belief networks.
As for tensor operator optimization,
The TVM framework~\cite{chen2018tvm} implements an SMBO method called AutoTVM~\cite{chen2018learning}
to optimize configurations of tensor operators.
Specifically, AutoTVM fits a surrogate model with either XGBoost~\cite{chen2016xgboost} or TreeGRU~\cite{tai2015improved},
and then uses SA to optimize the surrogate model for generating a batch of candidates in an $\epsilon$-greedy style.
\citet{ahn2020chameleon} proposed CHAMELEON to further improve AutoTVM with clustering based adaptive sampling and
reinforcement learning based adaptive exploration to reduce the number of costly hardware and surrogate model measurements, respectively.
Ansor~\cite{zheng2020ansor} is another work built upon TVM.
It uses EA instead of SA to optimize surrogate models, and devises an end-to-end framework to allocate computational resources among subgraphs and
hierarchically generate TVM templates for them.
Although these methods are successfully used in many combinatorial optimization problems,
they are not as sample-efficient as their continuous counterparts
due to the lack of proper prior assumptions and corresponding surrogate models.
OpEvo, on the other hand, introduces and leverages topological structures over combinatorial search spaces,
and thus achieves better sample- and time-efficiency than prior art.
AutoTVM and CHAMELEON also claimed that they are able to transfer knowledge among operators
through transfer learning and reinforcement learning, respectively.
However, neither technique seems particularly helpful in the context of tensor operator optimization.
For transfer learning, the pre-training dataset should be large and diverse enough to cover main information in fine-tuning datasets,
like ImageNet~\cite{deng2009imagenet} and GPT-3~\cite{brown2020language} did,
otherwise using a pre-trained model is more likely harmful than beneficial.
Many tensor operators that need optimization are either new operator types generated by tensor fusion or are expected to run on new devices.
Neither case is suitable for transfer learning.
Even if transfer learning works in some cases, pre-training a surrogate model with a large dataset before starting a new search, and
executing and fine-tuning such a model during the search, are probably more costly in time and money than simply sampling more configurations on hardware.
For reinforcement learning, its brittle convergence and poor generalization have been widely questioned for many years~\cite{haarnoja2018soft,cobbe2019quantifying,cobbe2019leveraging}.
There seems to be no guarantee that the policy learned by CHAMELEON can generalize to unseen operators and thereby improve sample-efficiency.
Two operator-specific methods,
Greedy Best First Search (G-BFS) and Neighborhood Actor Advantage Critic (N-A2C),
have been recently proposed to tune matrix tiling schemes of matrix multiplication (MatMul) operators
by taking the relation between different configurations into account~\cite{zhang2019compiler}.
They actually introduce a topology over the configuration space of MatMul operator by defining a neighborhood system on it,
and further employ a Markov Decision Process (MDP) for exploration over the configuration space.
By leveraging a domain-specific topological structure, G-BFS and N-A2C outperform AutoTVM in optimizing MatMul operators.
However, these two methods are only designed for tuning the tiling schemes of matrix multiplications whose matrices have power-of-2 numbers of rows and columns,
so they are not compatible with other types of configuration spaces.
Further, they tend to encounter the curse of dimensionality as the number of parameters to tune grows,
because they change only one parameter at a time based on the MDP they define.
Thus, generalizing them to more general tensor operators is not straightforward.
OpEvo, on the other hand, constructs topological structures in a general way and
uses an evolutionary framework rather than an MDP framework to explore search spaces,
so that the aforementioned problems encountered by G-BFS and N-A2C are overcome.
\section{Problem Formulation}\label{sec:problem formulation}
As earlier mentioned, tensor operator optimization is essentially a black-box optimization problem with a combinatorial search space.
It can be formally written as
\begin{equation}\label{eq:problem}
\begin{gathered}
x^\star=\underset{x\in\mathbb{X}}{\arg\max}\ f(x),\
\mathbb{X}=\prod_{i=1}^\mu \mathcal{X}_i.
\end{gathered}
\end{equation}
Here, $f(x)$ is a black-box function that measures the performance of a specific tensor operator with configuration $x$.
We use trillion floating-point operations per second (TFLOPS) as the measurement in this work.
Configuration $x$ is an ordered $\mu$-tuple $(x_1,...,x_\mu)$ and
each component $x_i\in\mathcal{X}_i$ corresponds to a hyperparameter of a device code,
so the entire search space $\mathbb{X}$ is the Cartesian product of all component feasible sets $\prod_{i=1}^\mu\mathcal{X}_i$.
Our aim is to find the optimal configuration $x^\star\in\mathbb{X}$ that corresponds to the maximum TFLOPS.
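As a concrete illustration of how $f(x)$ could be measured, the sketch below converts a timed run into TFLOPS; \texttt{measure\_tflops} and its arguments are hypothetical names for illustration, and in practice the configuration would be compiled and timed with TVM's own utilities rather than a Python-level timer.

```python
import time

def measure_tflops(flop_count, run, repeats=10):
    """Illustrative sketch of evaluating f(x) in TFLOPS: time `run`,
    a callable that executes the compiled operator once, and divide the
    operator's floating-point operation count by the average elapsed time."""
    start = time.perf_counter()
    for _ in range(repeats):
        run()
    elapsed = (time.perf_counter() - start) / repeats
    return flop_count / elapsed / 1e12  # tera-FLOP per second
```

Here `flop_count` would be derived analytically from the operator's shape (e.g. $2MNK$ for a MatMul of an $M{\times}K$ by a $K{\times}N$ matrix).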
A topological structure over each $\mathcal{X}_i$ can be introduced by defining an undirected graph $G=(V,E)$,
where the set of vertices $V$ is $\mathcal{X}_i$,
and the set of edges $E=\{\{u,v\}|u,v\in V,\ u\neq v,\ g_V(u,v)=1\}$.
Here $g_V(u,v)$ is an adjacency function mapping from $V\times V$ to $\{0,1\}$.
$g_V(u,v)=1$ means that vertices $u$ and $v$ are adjacent; otherwise they are not adjacent.
In this way, one can introduce a topological structure over $V$ by defining an adjacency function $g_V(u,v)$
according to prior assumptions on $V$.
For example, it is intuitive to treat $u$ and $v$ as adjacent
if similar performance can be expected when changing from $u$ to $v$.
The search process can benefit from topological structures introduced this way by obtaining information about the neighborhoods of samples;
for example, the configurations in the neighborhood of a poorly performing configuration probably perform poorly as well,
so that better sample-efficiency can be achieved.
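As an illustration, the graph $G=(V,E)$ can be materialised from any adjacency function in a few lines; \texttt{build\_graph} is a hypothetical helper of ours, not part of the paper's implementation.

```python
def build_graph(vertices, g):
    """Materialize the undirected graph G = (V, E) described above:
    E contains the pair {u, v} whenever the adjacency function g(u, v) = 1."""
    V = list(vertices)
    E = {frozenset((u, v)) for u in V for v in V if u != v and g(u, v) == 1}
    return V, E
```

For instance, the categorical adjacency function (all pairs adjacent) yields a complete graph, while $g(u,v)=1$ iff $|u-v|=1$ yields a path graph.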
Different tensor operators may have different types of hyperparameters and corresponding feasible sets.
In the rest of this section,
we discuss the four kinds of hyperparameters used in this work,
and construct topological structures for them.
It should be noted that, besides these, one can easily introduce other types of hyperparameters
and construct the corresponding topological structures in a similar way, according to concrete demands.
\begin{figure}[ht]
\centering
\begin{subfigure}[h!]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/factor.png}
\caption{Factorization parameter with $C=8$ and $\nu=3$.}
\label{fig:factor example}
\end{subfigure}
\begin{subfigure}[h!]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/permutation.png}
\caption{Permutation parameter with $n=3$.}
\label{fig:perm example}
\end{subfigure}
\begin{subfigure}[h!]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/discrete.png}
\caption{Discrete parameter with feasible set $\{1,2,3,4\}$.}
\label{fig:discrete example}
\end{subfigure}
\begin{subfigure}[h!]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/catagorical.png}
\caption{Categorical parameter with feasible set $\{a,b,c,d,e,f\}$.}
\label{fig:catagorical example}
\end{subfigure}
\caption{A simple illustration of topological structures introduced over the search spaces of tensor operators.}
\label{fig:topology example}
\end{figure}
First is the $\nu$-tuple with a factorization constraint,
$\mathcal{X}_i=\{(a_j)_{j=1}^\nu|\prod_{j=1}^\nu a_j=C,\ a_j\in\mathbb{N}_+\}$,
where $\nu,C\in\mathbb{N}_+$ are constants depending on specific tensor operators.
We will refer to this type of parameter as a factorization parameter hereafter.
The factorization parameter is required by a popular technique called matrix tiling for improving the cache hit rate of memory access.
It iteratively splits computation into smaller tiles to adapt memory access patterns to a particular hardware.
From the implementation perspective, it transforms a single loop into nested loops,
where $\nu$ is the number of nested loops, $C$ is the total loop length and $a_j$ is the loop length of each nested loop.
We define two factorizations of $C$ to be adjacent if one of them can be transformed into the other
by moving $z$, a prime factor of $C$, from the $n$-th factor to the $m$-th factor,
which is a basic transformation of the tiling scheme.
This adjacency function can be formally written as
$g(u,v)=1$ if $\exists n,m,z$ such that $u_m = v_m\cdot z$ and $u_n = v_n\cdot z^{-1}$, and $0$ otherwise,
where $n,m=1,...,\nu$ are distinct indices.
A simple example of the topology defined this way with $C=8$ and $\nu=3$
is illustrated in Figure~\ref{fig:factor example}.
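A minimal sketch of this feasible set and its adjacency relation follows; the helper names are ours, for illustration only.

```python
def prime_factors(c):
    """Prime factors of c with multiplicity, by trial division."""
    factors, d = [], 2
    while d * d <= c:
        while c % d == 0:
            factors.append(d)
            c //= d
        d += 1
    if c > 1:
        factors.append(c)
    return factors

def factorizations(c, nu):
    """All ordered nu-tuples of positive integers whose product is c."""
    if nu == 1:
        return [(c,)]
    result = []
    for a in range(1, c + 1):
        if c % a == 0:
            result += [(a,) + rest for rest in factorizations(c // a, nu - 1)]
    return result

def adjacent_factorizations(u, c):
    """Neighbors of tuple u: move one prime factor z of c from the n-th
    position to the m-th position, as in the adjacency function above."""
    neighbors = set()
    for z in set(prime_factors(c)):
        for n in range(len(u)):
            if u[n] % z == 0:
                for m in range(len(u)):
                    if m != n:
                        v = list(u)
                        v[n] //= z
                        v[m] *= z
                        neighbors.add(tuple(v))
    return neighbors
```

With $C=8$ and $\nu=3$ there are 10 factorizations, and $(8,1,1)$ has exactly the two neighbors $(4,2,1)$ and $(4,1,2)$, matching Figure~\ref{fig:factor example}.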
The second type is the permutation parameter, $\mathcal{X}_i=\mathcal{M}!$,
where $\mathcal{M}$ is a set with $n$ distinct elements
and $\mathcal{M}!$ represents the symmetric group over $\mathcal{M}$.
The order of nested loops in device code can be modeled by this type of parameter,
where $n$ is the number of nested loops and each element in the feasible set corresponds to a particular order of nested loops.
We define two permutations of $\mathcal{M}$ to be adjacent if one of them can be transformed into the other by a two-cycle permutation,
which is a basic transformation of the order.
This adjacency function can be formally written as
$g(u,v)=1$ if there exists a two-cycle permutation $\sigma$ of $\mathcal{M}$ such that $u=\sigma v$, and $0$ otherwise.
Figure~\ref{fig:perm example} shows the topology defined this way when $n=3$.
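This neighborhood can be sketched as follows, with exchanging the entries at two positions standing in for composition with a two-cycle; the helper name is ours.

```python
from itertools import combinations

def two_cycle_neighbors(perm):
    """All permutations adjacent to `perm`: those obtained by exchanging
    the entries at two positions, i.e. composing with a transposition."""
    neighbors = set()
    for i, j in combinations(range(len(perm)), 2):
        v = list(perm)
        v[i], v[j] = v[j], v[i]
        neighbors.add(tuple(v))
    return neighbors
```

Each permutation of $n$ elements thus has $\binom{n}{2}$ neighbors, i.e. 3 neighbors when $n=3$ as in Figure~\ref{fig:perm example}.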
The third type is the discrete parameter,
$\mathcal{X}_i=\{a_j|j=1,...,J\ \mathrm{and}\ J\in\mathbb{N}_+,\ a_j\in\mathbb{R}\}$,
which contains finitely many real numbers.
The maximum step of loop unrolling is an example of a discrete parameter.
There is a natural adjacency among the values of a discrete parameter, since they are naturally ordered.
This natural adjacency function can be formally written as $g(u,v)=1$ if $\nexists w\in V$ such that $(w-u)\cdot(w-v)<0$, and $0$ otherwise.
A simple example of the topology defined this way with $\mathcal{X}_i=\{1, 2, 3, 4\}$
is illustrated in Figure~\ref{fig:discrete example}.
The last type is the categorical parameter,
$\mathcal{X}_i=\{a_j|j=1,...,J\ \mathrm{and}\ J\in\mathbb{N}_+\}$,
which contains finitely many elements that can be arbitrary entities.
Choices like whether to unroll a loop and which thread axis to dispatch computation to are examples of categorical parameters.
Unlike discrete parameters, there is no natural adjacency among the elements of a categorical parameter,
so all elements in its feasible set are treated as adjacent,
which can be formally written as $g(u,v)=1$ for all $u,v\in V$ with $u\neq v$, and $0$ otherwise.
A simple example of the topology defined this way with $\mathcal{X}_i=\{a, b, c, d, e, f\}$
is illustrated in Figure~\ref{fig:catagorical example}.
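The last two adjacency functions are simple enough to sketch directly; the helper names below are ours, for illustration.

```python
def discrete_neighbors(u, values):
    """Neighbors of u in a discrete feasible set: g(u, v) = 1 iff no other
    value lies strictly between u and v, i.e. the immediate predecessor
    and successor of u in sorted order."""
    s = sorted(values)
    i = s.index(u)
    return {s[j] for j in (i - 1, i + 1) if 0 <= j < len(s)}

def categorical_neighbors(u, values):
    """For a categorical parameter every other element is adjacent."""
    return set(values) - {u}
```

These reproduce the path graph of Figure~\ref{fig:discrete example} and the complete graph of Figure~\ref{fig:catagorical example}, respectively.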
\section{Methodology}\label{sec:methodology}
\subsection{Evolutionary Algorithm}
EA is a class of stochastic derivative-free optimization methods,
which can be used to solve problems of the form defined by Equation~\ref{eq:problem}.
EA imitates the natural selection in the evolution process of biological species
to find the optimal configuration of an objective function.
Evolutionary concepts are translated into algorithmic operations,
i.e., selection, recombination, and mutation~\cite{kramer2016machine},
which significantly influence the effectiveness and efficiency of EA.
To efficiently search for the best configuration of a tensor operator,
OpEvo leverages the topological structures defined in Section~\ref{sec:problem formulation} within an evolutionary framework.
Specifically, OpEvo evolves a population of configurations, which are also called individuals in EA terminology.
The TFLOPS achieved by executing a tensor operator on the target hardware measures the individual's quality, or fitness.
At each evolutionary step, we select the top-ranked individuals as parents based on their fitnesses,
and then recombine and mutate them to generate new individuals, or children.
After evaluation, the children are added to the population as candidates for the new parents at the next evolutionary step.
This iteration will repeat until some termination criteria are met.
In the rest of this section, we describe the selection, recombination and mutation operations of OpEvo in detail,
illustrating how OpEvo leverages the topological structures and why this allows it to outperform prior art.
\subsection{Selection and Recombination}\label{sec:select}
Suppose we already have a list of individuals ranked by their fitnesses.
The top-$\lambda$ ranked individuals are chosen as parents,
where $\lambda\in\mathbb{N}_+$ governs the diversity of the evolutionary process.
Evolution with a large $\lambda$ tends to escape suboptima but sacrifices sample-efficiency,
while evolution with a small $\lambda$ converges more easily but is prone to suboptima.
A child will be generated by recombining these selected parents in a stochastic way.
Specifically, we sample the categorical distribution below, which has $\lambda$ categories, $\mu$ times
to decide which parent each parameter of a child should inherit from.
\begin{equation}\label{eq:recombine}
\begin{gathered}
P(x_i = x_i^j) = \frac{f(x^j)}{\sum_{k=1}^\lambda f(x^k)}, \\
\mathrm{for}\quad i=1,...,\mu,\quad j=1,...,\lambda,
\end{gathered}
\end{equation}
where $\mu$ is the number of parameters in a configuration,
superscripts represent different parents, and subscripts represent different parameters in a configuration.
$x_i$ is the $i$-th parameter of generated child $x$.
It is worthwhile to mention that
many SOTA methods suffer from invalid configurations in the search spaces,
which is inevitable since the constraints on search spaces are usually black-box as well.
OpEvo can mitigate this problem by assigning zero fitness to invalid configurations
so that their traits have no chance of being inherited.
In this way, invalid configurations become less and less likely to be sampled during evolution.
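The recombination step can be sketched as below; \texttt{recombine} is an illustrative helper of ours, and parents with zero fitness (e.g. invalid configurations) then contribute nothing to the child, per Equation~\ref{eq:recombine}.

```python
import random

def recombine(parents, fitnesses, rng=random):
    """Draw each of the mu parameters of a child from the parents, with
    probability proportional to parent fitness (Equation 2)."""
    mu = len(parents[0])
    return tuple(
        rng.choices([p[i] for p in parents], weights=fitnesses)[0]
        for i in range(mu)
    )
```

`random.choices` normalizes the weights itself, so raw fitness values can be passed directly.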
\subsection{Mutation}\label{sec:mutate}
OpEvo mutates each parameter $x_i$ of each child
by sampling a topology-aware probability distribution over corresponding feasible set $\mathcal{X}_i$.
Given a topology over $\mathcal{X}_i$ and the current vertex,
such a topology-aware probability distribution can be constructed by a random walk-like process.
The transition probability from vertex $u$ to an adjacent vertex $v$ is
\begin{equation}\label{eq:dist}
P_{uv}=\frac{q}{|S(u)|}, v\in S(u),
\end{equation}
where $q\in(0,1)$ is the mutation rate, which trades off exploration and exploitation.
OpEvo tends toward exploration as $q$ approaches $1$ and toward exploitation as $q$ approaches $0$.
$S(u)=\{v|g(u,v)=1\}$ is the set of all vertices adjacent to $u$,
and $|S(u)|$ denotes the cardinality of set $S(u)$.
The major difference between the ``random walk'' defined by Equation~\ref{eq:dist} and a regular random walk is that
the transition probabilities over all adjacent vertices sum to $q$ rather than $1$,
so the ``random walk'' we introduce is not a Markov chain, since there is a probability of $1-q$ of stopping.
In this way, given a current vertex $u\in\mathcal{X}_i$,
the topology-aware probability distribution $P_u(v)$ for all $v\in\mathcal{X}_i$ can be defined as
the probability that a walk started from $u$ stops at $v$.
We will refer to the distribution defined this way as the $q$-random walk distribution hereafter.
Appendix~\ref{sec:proof} formally proves that the $q$-random walk distribution is a valid probability distribution over $\mathcal{X}_i$.
\begin{figure}[ht]
\centering
\begin{subfigure}[h!]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/q5.png}
\caption{$q=0.5$}
\label{fig:q example a}
\end{subfigure}
\begin{subfigure}[h!]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/q7.png}
\caption{$q=0.7$}
\label{fig:q example b}
\end{subfigure}
\caption{Two $q$-random walk distributions with different $q$.}
\label{fig:q example}
\end{figure}
To reveal the intuition behind the $q$-random walk distribution,
two $q$-random walk distributions over the feasible set of a factorization parameter with $C=8$ and $\nu=3$
are illustrated in Figure~\ref{fig:q example}.
They start from the same vertex (the blue vertex) but mutate with different $q$.
It can easily be observed that vertices at a smaller distance from the start vertex have a higher probability of being sampled,
which ensures a good trade-off between exploration and exploitation.
Further, the distribution with the larger $q$ has a wider spread than the one with the smaller $q$,
because a larger $q$ encourages more jumps in the $q$-random walk process.
Considering the asymptotic case of $q=1$, the $q$-random walk degenerates into a regular random walk on an undirected graph,
which keeps jumping forever and eventually traverses all vertices on the graph,
while in the case of $q=0$, the $q$-random walk vanishes and no mutation acts on parameter $x_i$.
Thus, $q$ is a hyperparameter for OpEvo to trade off exploitation and exploration.
Considering a regular random walk on an undirected graph, i.e. $q=1$,
the probability of visiting a vertex in the graph is determined by the graph topology
once the Markov chain induced by the random walk has converged.
This is why random walks are used for graph embedding in many works~\cite{perozzi2014deepwalk}.
The $q$-random walk distribution also inherits this topology-aware nature.
Observing vertices at the same distance from the start vertex in Figure~\ref{fig:q example},
those with more complex neighborhoods have larger probabilities.
For example, vertices $(2,1,4)$, $(2,2,2)$ and $(2,4,1)$ are at the same distance from the start vertex $(8,1,1)$,
but vertex $(2,2,2)$ has a larger probability since it has a larger degree.
This property of the $q$-random walk distribution helps explore search spaces efficiently,
because sampling vertices with more complex neighborhoods provides more knowledge about the objective function.
\subsection{Summary}
The OpEvo algorithm is summarized in Algorithm~\ref{alg:opevo}.
At first, $\lambda$ configurations are randomly generated and evaluated
to initialize a priority queue $\mathcal{Q}$ ordered by decreasing fitness.
Next, the top-$\lambda$ configurations are taken from $\mathcal{Q}$ as parents and
recombined to generate $\rho$ children as described in Section~\ref{sec:select}.
Then, each child is mutated as described in Section~\ref{sec:mutate}.
Note that the same configuration will never be sampled twice in the whole process of OpEvo,
since the noise in the TFLOPS of executing a tensor operator on hardware is relatively small
and sample-efficiency benefits from non-duplicated samples.
As a result, when a mutated child is already in $\mathcal{Q}$,
we mutate the child again until an unsampled configuration is obtained.
Finally, the fitnesses of $\rho$ children are evaluated on target hardware, and enqueued into $\mathcal{Q}$.
This iteration will repeat until some termination criteria are met.
\begin{algorithm}[htb]
\caption{OpEvo}
\label{alg:opevo}
\textbf{Input}: all component feasible sets $\mathcal{X}_i, i=1,...,\mu$,
parents size $\lambda$, offspring size $\rho$, mutation rate $q$\\
\textbf{Output}: optimal configuration $x^\star$
\begin{algorithmic}[1]
\STATE randomly generate $\lambda$ configurations $\{x^j\}_{j=1}^\lambda$
\STATE evaluate $\{x^j\}_{j=1}^\lambda$ to get associated fitness,
and enqueue $\{x^j,f(x^j)\}_{j=1}^\lambda$ into a priority queue $\mathcal{Q}$
\REPEAT
\STATE select $\lambda$ parents from $\mathcal{Q}$ and
recombine them to generate $\rho$ children according to Section~\ref{sec:select}
\STATE mutate $\rho$ children according to Section~\ref{sec:mutate}
\STATE evaluate $\rho$ children on hardware, and enqueue $\{x^j,f(x^j)\}_{j=1}^\rho$ into $\mathcal{Q}$
\UNTIL{termination criterion is met}
\STATE \textbf{return} the best configuration so far
\end{algorithmic}
\end{algorithm}
\section{Experiments}\label{sec:experiments}
We now evaluate the empirical performance of the proposed method with three typical kinds of tensor operators,
MatMul, BatchMatMul and 2D Convolution, as well as a classic CNN architecture, AlexNet~\cite{krizhevsky2012imagenet}, on both Nvidia (GTX 1080Ti) and AMD (MI50) platforms.
All tensor operators in our experiments are described and generated with the TVM framework,
and then compiled and run with CUDA 10.0 or ROCm 2.9.
Additionally, we compare OpEvo with three aforementioned SOTA methods, G-BFS, N-A2C and AutoTVM.
In our experiments, OpEvo, G-BFS and N-A2C are implemented by us
within the framework of Neural Network Intelligence (NNI, \textit{https://github.com/microsoft/nni/}),
and AutoTVM is implemented by its authors in the TVM project (\textit{https://github.com/apache/incubator-tvm}).
All code for OpEvo, G-BFS, N-A2C and our benchmarks is publicly available with the NNI project.
Please refer to Appendix~\ref{sec:exp_details} for more details about the experiments
and Appendix~\ref{sec:omitted} for the specific numbers behind the figures presented in this section.
\begin{figure}[ht]
\centering
\includegraphics[width=0.47\textwidth]{figures/mm.png}
\caption{Algorithms comparison for three MatMul operators.
The first and third rows are results on Nvidia platform, while the second and fourth rows are results on AMD platform.
Three columns correspond to three operators MM1, MM2 and MM3 described in Appendix~\ref{sec:exp_mm} from left to right, respectively.}
\label{fig:matmul}
\end{figure}
\subsection{MatMul}
Three different MatMul operators are chosen from BERT~\cite{devlin2018bert} to evaluate the proposed method.
The maximum performance obtained so far, versus the number of trials and the wall-clock time used, is illustrated in Figure~\ref{fig:matmul}.
For the upper two rows, the lines denote the averages of 5 runs, while the shaded areas indicate standard deviations.
For the lower two rows, each line denotes a specific run.
Different colors and line styles represent different algorithms.
From the results, it can easily be concluded that the methods leveraging a predefined topology, OpEvo, G-BFS and N-A2C,
substantially outperform the general SMBO method, AutoTVM.
G-BFS and N-A2C leverage the underlying topology by introducing an MDP,
so only the local topology is considered when exploring the configuration space,
while OpEvo can consider the global topology thanks to the mutation operation based on the $q$-random walk distribution.
Therefore, OpEvo performs better than G-BFS and N-A2C in most cases in terms of the mean and variance of the best TFLOPS and sample-efficiency.
Further, as earlier mentioned, OpEvo is a lightweight model-free method,
so the extra burden for training and optimizing surrogate models is avoided.
It can be seen from Figure~\ref{fig:matmul} that OpEvo saves around $30\%$ and $10\%$ of the wall-clock time
when optimizing CUDA and ROCm operators, respectively.
This is because CUDA compilation is usually faster than ROCm compilation,
so the extra burden of tuning CUDA operators takes a larger share of the total wall-clock time.
\begin{figure}[ht]
\centering
\includegraphics[width=0.47\textwidth]{figures/bmm.png}
\caption{Algorithms comparison for three BatchMatMul operators.
The first and third rows are results on the Nvidia platform, while the second and fourth rows are results on the AMD platform.
Three columns correspond to three operators BMM1, BMM2 and BMM3 described in Appendix~\ref{sec:exp_bmm} from left to right, respectively.}
\label{fig:bmm}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.314\textwidth]{figures/conv.png}
\caption{Algorithms comparison for two 2D Convolution operators.
The first and third rows are results on the Nvidia platform, while the second and fourth rows are results on the AMD platform.
Two columns correspond to two operators C1 and C2 described in Appendix~\ref{sec:exp_conv} from left to right, respectively.}
\label{fig:conv}
\end{figure}
\subsection{BatchMatMul}
Three BatchMatMul operators are also selected from BERT for evaluation.
All of these operators have batch size $960$, so G-BFS and N-A2C cannot optimize them,
because they can only deal with matrices whose numbers of rows and columns are powers of 2.
The comparison between OpEvo and AutoTVM is shown in Figure~\ref{fig:bmm}.
Compared with the MatMul operators, BatchMatMul has an order of magnitude bigger search space,
since one more parameter needs to be optimized.
Also, the generated BatchMatMul device code is more likely to overflow the device memory,
as the tile size of BatchMatMul is bigger than that of MatMul,
which leads to sparser performance measurements.
Despite these challenges, OpEvo still performs well thanks to its global exploration mechanism.
The variance of the best performance is even smaller than that of MatMul because of this sparsity.
\subsection{2D Convolution}
Two 2D convolution operators are chosen from AlexNet for evaluation.
They have more complex search spaces and are thus harder to model than the tensor operators discussed before,
since, besides factorization parameters, discrete and categorical parameters are also involved.
As a result, G-BFS and N-A2C cannot tune them.
Figure~\ref{fig:conv} shows the comparison between OpEvo and AutoTVM.
Although XGBoost is a tree boosting model,
which is relatively friendly to discrete and categorical parameters,
AutoTVM still performs worse than OpEvo, because EA inherently supports complex search spaces
and OpEvo further improves sample-efficiency by leveraging the predefined topology.
We note that the time-saving effect of OpEvo is not significant in the 2D convolution cases,
because compiling and executing convolution operators is much more time-consuming than for the MatMul and BatchMatMul operators
and thus dominates the total tuning time.
\subsection{End-to-end Evaluation}
A classic CNN architecture, AlexNet, is used to evaluate the end-to-end performance of the proposed method,
where there are 26 different kinds of tensor operators covering the most commonly used types.
Figure~\ref{fig:e2e} shows the comparison between OpEvo and AutoTVM in terms of inference time,
from which it can easily be concluded that OpEvo is more sample-efficient than AutoTVM on both Nvidia and AMD platforms.
For OpEvo, the end-to-end inference time rapidly decreases and reaches its minimum at around 200 trials,
while AutoTVM needs at least 400 trials to reach the same performance.
\begin{figure}[ht]
\centering
\includegraphics[width=0.47\textwidth]{figures/e2e.png}
\caption{Algorithms comparison in terms of end-to-end inference time.
The left figure is the result on Nvidia platform, while the right one is the result on AMD platform.}
\label{fig:e2e}
\end{figure}
\begin{figure}[ht]
\centering
\begin{subfigure}[h!]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/mm_q.png}
\end{subfigure}
\begin{subfigure}[h!]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/bmm_q.png}
\end{subfigure}
\begin{subfigure}[h!]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/conv_q.png}
\end{subfigure}
\caption{The effect of mutation rate $q$ on OpEvo.}
\label{fig:q}
\end{figure}
\subsection{Hyperparameter Sensitivity}\label{sec:sensitivity}
As mentioned earlier, OpEvo has two important hyperparameters:
the mutation rate $q$, which trades off exploration and exploitation, and
the parent size $\lambda$, which governs the diversity of the evolutionary process.
In this part, we evaluate OpEvo with different $q$ and $\lambda$
for better understanding of each introduced technique and the hyperparameter sensitivity.
From left to right,
the first rows of Figure~\ref{fig:q} and~\ref{fig:lambda} correspond to MM1, MM2 and MM3,
the second rows correspond to BMM1, BMM2 and BMM3,
and the third rows correspond to C1 and C2.
It can be concluded from Figures~\ref{fig:q} and~\ref{fig:lambda} that
OpEvo is quite stable with respect to the choice of $q$ and $\lambda$ in most cases.
The effect of $q$ is only visible in the 2D convolution cases,
where insufficient exploration leads to suboptimal results and large variance.
As for $\lambda$, its influence is only considerable in the cases of BMM2 and C1,
where a large $\lambda$ results in a significant reduction of sample-efficiency,
while a small $\lambda$ results in suboptimal results and large variance due to insufficient diversity in the evolutionary process.
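To make the roles of the two hyperparameters concrete, the following sketch shows a generic elitist evolutionary loop in which mutation is a $q$-biased random walk on a neighbor graph. This is only an illustration of where $q$ and $\lambda$ enter such a loop, not the authors' implementation; the graph, fitness function and step count are hypothetical.

```python
# Illustrative sketch (not the authors' implementation): a generic
# elitist evolutionary loop showing where the two hyperparameters
# discussed above enter. Mutation is a q-biased random walk on a
# hypothetical neighbor graph; `lam` is the parent size.
import random

def q_random_walk(graph, start, q, steps=3):
    """With probability q move to a random neighbor (exploration),
    otherwise stay put (exploitation)."""
    node = start
    for _ in range(steps):
        if random.random() < q and graph[node]:
            node = random.choice(graph[node])
    return node

def evolve(graph, fitness, q=0.5, lam=4, trials=200, seed=1):
    random.seed(seed)
    pop = random.sample(list(graph), lam)               # initial parents
    best = max(pop, key=fitness)
    for _ in range(trials):
        children = [q_random_walk(graph, p, q) for p in pop]
        pop = sorted(pop + children, key=fitness, reverse=True)[:lam]
        best = max(best, pop[0], key=fitness)
    return best
```

On a simple path graph with single-peaked fitness this loop hill-climbs toward the optimum; a larger $q$ widens the search per mutation, while a larger $\lambda$ keeps more diverse parents per round.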
\begin{figure}[ht]
\centering
\begin{subfigure}[h!]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/mm_lambda.png}
\end{subfigure}
\begin{subfigure}[h!]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/bmm_lambda.png}
\end{subfigure}
\begin{subfigure}[h!]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/conv_lambda.png}
\end{subfigure}
\caption{The effect of parent size $\lambda$ on OpEvo.}
\label{fig:lambda}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
In this paper, we proposed OpEvo, a novel evolutionary method which can efficiently optimize tensor operators.
We constructed topological structures for tensor operators
and introduced a topology-aware mutation operation based on $q$-random walk distribution,
so that OpEvo can leverage the constructed topological structures to guide exploration.
Empirical results show that OpEvo outperforms SOTA methods
in terms of best FLOPS, variance of best FLOPS, and sample- and time-efficiency.
Please note that all experiments in this work are done with 8 CPU threads for compiling and a single GPU card for executing.
The time-saving effect of OpEvo will be more significant if more CPU threads and GPU cards are used.
Further, the analysis of hyperparameter sensitivity illustrated the robustness of OpEvo.
This work also demonstrated that good leverage of proper prior assumptions on objective functions is the key to sample-efficiency,
regardless of whether model-based or model-free methods are used.
Even an EA can beat SMBO in terms of sample-efficiency as long as proper prior assumptions are effectively leveraged.
Please note that the proposed method can not only be used to optimize tensor operators,
but is also generally applicable to any other combinatorial search space with an underlying topological structure.
Since the performance of OpEvo highly depends on the quality of the constructed topology,
it is particularly suitable for cases where abundant human knowledge exists but methods to leverage it are lacking.
\section*{Acknowledgement}
We would like to thank Lidong Zhou and Jing Bai for the inspiring discussions with them,
all anonymous reviewers for their insightful comments,
and all contributors of the NNI project for the powerful and user-friendly tools they created.
\section{Introduction}
Many Isabelle users are familiar with the following scenario once their formal
developments reach a certain size:
\begin{quote}\sl
You make a change high up in the theory hierarchy but then want to continue
with your latest theory that is a leaf of the hierarchy. So you initiate a
build of an Isabelle session that covers all imports of your current theory
and \ldots take a break, since all the CPU cycles and gigabytes of RAM your
computer has to offer are needed to finish the build within the next few hours
and you will not even be able to read emails on the same machine in the
meantime.
\end{quote}
While the above might be slightly exaggerated, it is not that far from the
truth. For example, building
\isafor/\ceta\footnote{\url{http://cl-informatik.uibk.ac.at/isafor}}
takes almost four hours on my current machine if only a single process is
available, and requires at least 16\,GB of RAM to succeed at all.
This is in contrast to only around one hour of build time on a workstation with 12
processes available. Thus the obvious solution is to build session heap
images not on your local machine but on some decent remote machine.
Often, you will still want to continue work on your local machine once the
build is finished. So you have to copy the remotely built heap images to your
local machine in such a way that Isabelle does not initiate a new build.
It is surely possible to do all this by hand, but it tends to get tedious after
the $n$th repetition, which is why
I introduce the Isabelle add-on tool \RB, enabling transparent remote session
builds. The
intended workflow for a user is to locally issue a build command for some
session heap images and then immediately continue work without the performance loss that
often comes with time- and computation-intensive Isabelle builds.
Meanwhile, the actual build is started on another machine and the resulting heap
images are synchronized incrementally as soon as they are available.
\section{Invoking the Build Process}
The \RB tool is implemented in
Isabelle/Scala\footnote{\href{http://isabelle.in.tum.de/doc/system.pdf\#page=32}{\nolinkurl{http://isabelle.in.tum.de/doc/system.pdf}}
(Chapter 4)} and comes with a command line
interface. Its usage is:
\begin{bash}
Usage: isabelle remote_build [OPTIONS] SESSIONS ...
Options are:
-B DIR base directory for remote Isabelle installations (default:
$REMOTE_BUILD_REMOTE_BASE, or if former not set ~)
-d DIR include session directory
-r HOST remote host name (default: $REMOTE_BUILD_REMOTE_HOST)
-o OPTION add option for remote isabelle call, e.g., -o -d -o '$ISAFOR'
-i incremental: only synchronize heap images that are newly
built on the remote host (default: synchronize all session
heaps together with their ancestors)
-P PROXY connect to remote host via proxy jump; PROXY may either be a
HOST or a specification HOST:PORT (default PORT: 2222)
-v be verbose
Build and copy heap images, observing implicit settings:
REMOTE_BUILD_REMOTE_HOST="..."
REMOTE_BUILD_REMOTE_BASE="..."
\end{bash}
\lstset{xleftmargin=1em}%
In order for \RB to work properly, we need (at least) two computers, a local
machine $L$ and a remote machine $R$, with Isabelle installed. The respective
installations should be reasonably similar (meaning if one of them is
\nolinkurl{x86_64-linux}, the other should be too; and of course the Isabelle
versions should coincide).
Also, the sessions you want to build and corresponding theory sources have to
be present on both machines (for \isafor, I achieve this for example by using
two clones of its mercurial repository, one on $L$ and one on $R$).
Moreover, communication between $L$ and $R$ runs through SSH and the
\tool{rsync}\footnote{\url{https://rsync.samba.org/}} utility is used for heap
image synchronization.
By default, the Isabelle installation on $R$ is expected to be located in the
user home directory. This can be overridden by explicitly setting the
remote base directory with \sh{-B}, or made persistent in
\sh{$ISABELLE_HOME_USER/etc/settings} by setting
\sh{REMOTE_BUILD_REMOTE_BASE}.
As with the standard \sh{build} tool of Isabelle, (local) session directories
can be specified via \sh{-d}. Usually, this has to be reflected on the remote
side. A general way of passing options to the Isabelle process invoked on $R$
is by \sh{-o} (which takes a single word, no spaces, as argument).
The hostname/IP address of $R$ can be set explicitly using \sh{-r} or made
persistent by
setting \sh{REMOTE_BUILD_REMOTE_HOST}.
If \sh{-i} is set, then \RB enters incremental mode and only synchronizes heap
images that are generated during the current build. This might occasionally be
useful to save some time (for example, you might already have started to
manually copy heap images from $R$ that existed before the build was
initiated). The default behavior is to synchronize all ancestors of the built
sessions.
If $R$ is not directly available via SSH, a proxy $P$ can be specified using
\sh{-P}, which works as long as $P$ is reachable via SSH from $L$ and $R$ is
reachable via SSH from $P$.
\paragraph{Usage examples.}
This is, for example, how I build the whole of \isafor/\ceta from my office
(``remote host'' and ``remote base'' are implicit in my local settings):
\begin{bash}
isabelle remote_build -d'$ISAFOR' -o-d'$ISAFOR' CeTA
\end{bash}
If I want to do the same from home, I have to provide a proxy, since the
``build machine'' of our research group is not directly available from the
outside:
\begin{bash}
isabelle remote_build -P proxy.uibk.ac.at -d'$ISAFOR' -o-d'$ISAFOR' CeTA
\end{bash}
\section{Installation Instructions}
The \RB tool is part of the \isafor/\ceta project since version \isaforversion
and compatible with \isaversion. Its sources reside in
\href{http://cl2-informatik.uibk.ac.at/rewriting/mercurial.cgi/IsaFoR/raw-file/3735ca6ffdb9/src/remote_build.scala}{\file{src/remote_build.scala}}.
Once you have obtained the sources, the following steps are required to make \RB
locally available as an Isabelle tool. Start by compiling the sources
\begin{bash}
isabelle scalac remote_build.scala
\end{bash}
which should create the two files:
\sh{Remote_Build.class} and \sh{Remote_Build$.class}.
Then, assemble a JAR archive \file{remote_build.jar} via:
\begin{bash}
jar cevf Remote_Build remote_build.jar \
Remote_Build.class 'Remote_Build$.class'
\end{bash}
Now, say in a directory \file{tools/}, create the tool wrapper \file{remote_build} with content
\begin{bash}
#!/usr/bin/env bash
$ISABELLE_TOOL scala /path/to/remote_build.jar "$@"
\end{bash}
and register it as Isabelle tool by adding
\begin{bash}
ISABELLE_TOOLS="$ISABELLE_TOOLS:/path/to/tools/"
\end{bash}
to \file{$ISABELLE_HOME_USER/etc/settings}.
\section{Some Further Details and Troubleshooting}
The \RB tool employs the available Isabelle/Scala interface to the
\tool{JSch}\footnote{\url{http://www.jcraft.com/jsch/}} Java implementation of
the SSH2 protocol.
Since the available interface does not cater for password authentication (which would be
cumbersome anyway), the involved SSH connections assume key-based
authentication. However, the current version does not seem to support
ECDSA-based host keys.\footnote{The only kind of keys I actually tested is RSA.}
Therefore, it will sometimes be necessary to set up an RSA
host key.
To find out what kind of keys are currently known for a given host \sh{host},
use
\begin{bash}
ssh-keygen -F host
\end{bash}
which looks up host keys in \file{~/.ssh/known_hosts}.
To obtain an RSA key for \sh{host}, use:
\begin{bash}
ssh-keyscan -t rsa host
\end{bash}
Its output can directly be appended to the list of known hosts as follows:
\begin{bash}
ssh-keyscan -t rsa host >> ~/.ssh/known_hosts
\end{bash}
In case a proxy $P$ is used between $L$ and $R$,
\RB establishes, behind the scenes, the following SSH connections.
First a connection from $L$ to $P$ with port-forwarding from $L:2222$ to the
SSH daemon of $R$. That is, akin to:
\begin{bash}
ssh -L 2222:R:22 P
\end{bash}
And in addition the actual connection between $L$ and $R$ that is carried
inside the above port-forwarding channel, which you could establish on the
command line via \sh{ssh -p 2222 localhost}.
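For reference, recent OpenSSH versions can combine the two manual steps above into a single invocation via a jump host. This is only an analogy for what \RB does internally through JSch, not a command the tool itself executes:
\begin{bash}
ssh -J P R
\end{bash}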
This setup causes the peculiarity that an entry for \sh{[localhost]:2222}
is needed in \file{~/.ssh/known_hosts} that provides an RSA key for the remote
host $R$ (so, if you change your remote host, also the key for
\sh{[localhost]:2222} has to change, even though the hostname of the entry did
not).
\end{document}
\section*{Proof of Theorem~\ref{thm:Smalealphatheory}}
The proof of the theorem relies on the following three lemmas, stated for $f\in\mathcal{P}_{n,\bfd}[n]$ and $x,y\in\mathbb{F}^n$.
\begin{lem}\label{lem:inversegamma}
If $\gamma(f,x)\|x-y\|<1$, then $\mathrm{D}_y f$ is invertible and
$\|\mathrm{D}_y f^{-1}\mathrm{D}_xf\|=1$.
\end{lem}
\begin{lem}[Variations of Smale's parameters]\label{lem:variationSmaleparameters}
If $\rho:=\gamma(f,x)\|x-y\|<1$, then:
\begin{center}
\emph{(a)} $\alpha(f,y)\leq \max\{\alpha(f,x),\rho\}$.
\emph{(b)} $\beta(f,y)\leq \max\{\beta(f,x),\|y-x\|\}$.
\emph{(c)} $\gamma(f,y)=\gamma(f,x)$.
\end{center}
Moreover, if $\|y-x\|<\beta(f,x)$, all are equalities.
\end{lem}
\begin{lem}[Variations along Newton step]\label{lem:variationNewton}
If $\alpha(f,x)<1$, then:
\begin{center}
\emph{(a)} $\alpha(f,\mathrm{N}_f(x))\leq \alpha(f,x)^2$.\hfill
\emph{(b)} $\beta(f,\mathrm{N}_f(x))\leq \alpha(f,x)\beta(f,x)$.\hfill
\emph{(c)} $\gamma(f,\mathrm{N}_f(x))=\gamma(f,x)$.
\end{center}
In particular, $\mathrm{N}_f(\mathrm{N}_f(x))$ is well-defined.
\end{lem}
\begin{proof}[Proof of Theorem~\ref{thm:Smalealphatheory}]
If $\alpha(f,x)<1$, then, using induction and Lemma~\ref{lem:variationNewton}, we obtain that (a), (b) and (c) hold. But then the sequence $\{\mathrm{N}_f^k(x)\}$ converges since
$\lim_{k\to\infty}\|\mathrm{N}_f^{k+1}(x)-\mathrm{N}_f^k(x)\|=0$
and so it is a Cauchy sequence. Finally, (Q) follows from noting that for $l\geq k$
\[
\|\mathrm{N}_f^l(x)-\mathrm{N}_f^k(x)\|\leq \alpha(f,x)^{2^{l-k}}\beta(f,\mathrm{N}_f^k(x))
\]
and taking infinite sum together with the equality case of the ultrametric inequality. In particular, we have $\dist(x,f^{-1}(0))=\|x-\zeta\|=\beta(f,x)<1/\gamma(f,x)$. This shows that ($\alpha$) implies ($\gamma$).
For the other direction, assume that $\dist(x,f^{-1}(0))<1/\gamma(f,x)$. Then $\gamma(f,x)$ is finite, since otherwise $\dist(x,f^{-1}(0))<0$, which is impossible. Let $\zeta\in\mathbb{F}^n$ be a zero of $f$ such that $\dist(x,\zeta)<1/\gamma(f,x)$. Then
$
0=f(\zeta)=f(x)+\mathrm{D}_xf(\zeta-x)+\sum_{k=2}^{\infty}\frac{\mathrm{D}_x^kf}{k!}(\zeta-x,\ldots,\zeta-x),
$
and so
\[
-\mathrm{D}_xf^{-1}f(x)=\zeta-x+\sum_{k=2}^{\infty}\mathrm{D}_xf^{-1}\frac{\mathrm{D}_x^kf}{k!}(\zeta-x,\ldots,\zeta-x).
\]
Now, the higher order terms satisfy that
$
\left\|\mathrm{D}_xf^{-1}\frac{\mathrm{D}_x^kf}{k!}(\zeta-x,\ldots,\zeta-x)\right\|\leq \left(\gamma(f,x)\|\zeta-x\|\right)^{k-1}\|\zeta-x\|<\|\zeta-x\|
$
and so, by the equality case of the ultrametric inequality,
$
\beta(f,x)=\|\zeta-x\|<1/\gamma(f,x),
$
as desired.
\end{proof}
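As a toy illustration of the criterion (this example is ours and not part of the original argument), take $n=1$ and $f(x)=x^2-c$. Then $\mathrm{D}_xf=2x$, $\frac{\mathrm{D}_x^2f}{2!}=1$ and all higher derivatives vanish, so that
\[
\gamma(f,x)=\frac{1}{\|2x\|},\qquad
\beta(f,x)=\frac{\|x^2-c\|}{\|2x\|},\qquad
\alpha(f,x)=\frac{\|x^2-c\|}{\|2x\|^2},
\]
and the condition $\alpha(f,x)<1$ reads $\|f(x)\|<\|\mathrm{D}_xf\|^2$, which is precisely the hypothesis of Hensel's lemma for the convergence of the Newton iteration.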
Now, we prove the auxiliary Lemmas~\ref{lem:inversegamma}, \ref{lem:variationSmaleparameters} and~\ref{lem:variationNewton}.
\begin{proof}[Proof of Lemma~\ref{lem:inversegamma}]
We have that
$
\mathrm{D}_xf^{-1}\mathrm{D}_yf=\mathbb{I}+\sum_{k=1}^{\infty}\mathrm{D}_xf^{-1}\frac{\mathrm{D}_x^{k+1}f(y-x,\ldots,y-x)}{k!}.
$
Now, under the given assumption,
$
\left\|\mathrm{D}_xf^{-1}\frac{\mathrm{D}_x^{k+1}f(y-x,\ldots,y-x)}{k!}\right\|\leq \left(\gamma(f,x)\|y-x\|\right)^{k}<1
$
for $k\geq 1$, and so, by the ultrametric inequality, $\|\mathrm{D}_xf^{-1}\mathrm{D}_yf-\mathbb{I}\|<1$. Therefore
$
\sum_{k=0}^\infty (\mathbb{I}-\mathrm{D}_xf^{-1}\mathrm{D}_yf)^k
$
converges, and it does so to the inverse of $\mathrm{D}_xf^{-1}\mathrm{D}_yf$. Since, by assumption, $\mathrm{D}_xf$ is invertible, so is $\mathrm{D}_yf$.
Finally, by the invertibility of $\mathrm{D}_yf$, we have that
$
\mathrm{D}_yf^{-1}\mathrm{D}_xf=\sum_{k=0}^\infty (\mathbb{I}-\mathrm{D}_xf^{-1}\mathrm{D}_yf)^k,
$
and so, by the equality case of the ultrametric inequality, $\|\mathrm{D}_yf^{-1}\mathrm{D}_xf\|=1$, as desired.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:variationSmaleparameters}]
We first prove (c) and then (b). (a) follows from (b) and (c) immediately.
(c) We note that under the given assumption, for $k\geq 2$,
\begin{equation}\label{eq:ineqmixedgamma1}
\left\|\mathrm{D}_xf^{-1}\frac{\mathrm{D}_y^kf}{k!}\right\|\leq\gamma(f,x)^{k-1}.
\end{equation}
For this, we expand the Taylor series of $\frac{\mathrm{D}_y^kf}{k!}$ (with respect to $y$) and note that its $l$th term is dominated by
\[
\gamma(f,x)^{k+l-1}\|y-x\|^l,
\]
which, by the ultrametric inequality, gives the above inequality. In this way, for $k\geq 2,$
\[
\left\|\mathrm{D}_yf^{-1}\frac{\mathrm{D}_y^kf}{k!}\right\|\leq \left\|\mathrm{D}_yf^{-1}\mathrm{D}_xf\right\|\left\|\mathrm{D}_xf^{-1}\frac{\mathrm{D}_y^kf}{k!}\right\|\leq \gamma(f,x)^{k-1}
\]
by Lemma~\ref{lem:inversegamma} and~\eqref{eq:ineqmixedgamma1}. Thus $\gamma(f,y)\leq \gamma(f,x)$. Now, due to this, the hypothesis $\gamma(f,y)\|x-y\|<1$ holds, and so, by the same argument, $\gamma(f,x)\leq\gamma(f,y)$, which is the desired equality.
(b) Arguing as in (c), we can show that
\begin{equation}\label{eq:ineqmixedbeta1}
\left\|\mathrm{D}_xf^{-1}f(y)\right\|\leq \max\{\|\mathrm{D}_xf^{-1}f(x)+y-x\|,\gamma(f,x)\|y-x\|^2\}
\end{equation}
by noting that the general term (of the Taylor series of $\mathrm{D}_xf^{-1}f(y)$ with respect to $y$) is dominated by $\gamma(f,x)^{k-1}\|y-x\|^k<\gamma(f,x)\|y-x\|^2$. Now, $\beta(f,y)\leq \left\|\mathrm{D}_yf^{-1}\mathrm{D}_xf\right\|\left\|\mathrm{D}_xf^{-1}f(y)\right\|$, and so, by Lemma~\ref{lem:inversegamma} and~\eqref{eq:ineqmixedbeta1},
\[
\beta(f,y)
\leq \max\{\|\mathrm{D}_xf^{-1}f(x)+y-x\|,\gamma(f,x)\|y-x\|^2\}\leq \max\{\beta(f,x),\|y-x\|\}.
\]
For the equality case, note that, by the same argument, we have
$
\beta(f,x)\leq\max\{\beta(f,y),\|y-x\|\}=\beta(f,y)
$
where the equality on the right-hand side follows from $\beta(f,x)>\|y-x\|$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:variationNewton}]
(a) follows from combining (b) and (c), and (c) from Lemma~\ref{lem:variationSmaleparameters} (c). We only need to show (b). We use equation~\eqref{eq:ineqmixedbeta1} in the proof of Lemma~\ref{lem:variationSmaleparameters} with $y=\mathrm{N}_f(x)$. By~\eqref{eq:ineqmixedbeta1} and Lemma~\ref{lem:inversegamma},
\[
\beta(f,\mathrm{N}_f(x))\leq \max\{\|\mathrm{D}_xf^{-1}f(x)+\mathrm{N}_f(x)-x\|,\gamma(f,x)\|\mathrm{N}_f(x)-x\|^2\}.
\]
Now, $\mathrm{N}_f(x)-x=-\mathrm{D}_xf^{-1}f(x)$, so the above becomes
$
\beta(f,\mathrm{N}_f(x))\leq \max\{0,\gamma(f,x)\beta(f,x)^2\},
$
which gives the desired claim.
\end{proof}
\section*{Acknowledgements}
J.G.S. is supported by a Companion Species Research Fellowship funded by the Durlacher Foundation. J.T.-C. is supported by a postdoctoral fellowship of the 2020 ``Interaction'' program of the Fondation Sciences Mathématiques de Paris, and partially supported by ANR JCJC GALOP (ANR-17-CE40-0009).
J.G.S. and J.T.-C. are thankful to Elias Tsigaridas for useful suggestions, and to Evgenia Lagoda for moral support.
\section{Introduction}
The physics of collisionless shock waves is of paramount importance in
modern astrophysics, as it governs a variety of phenomena, most
notably the emission of high energy radiation. In particular, the
afterglow radiation of gamma-ray bursts is generally attributed to the
synchrotron emission of electrons that have been accelerated around
the relativistic external shock wave, with shock Lorentz factor
$\Gamma_{\rm sh}\,\sim\,100$ (Paczy\'nski \& Rhoads 1993, Katz 1994,
M\'esz\'aros \& Rees 1997, Sari \& Piran 1997, Vietri 1997, Waxman
1997; see Piran 2005 for a review). In this framework, it can be shown
that the magnetic field, downstream of the shock wave, must have been
amplified by orders of magnitude beyond the shock compression of a
typical interstellar magnetic field (see for instance Waxman 1997; see
also Piran 2005 and references therein). Recent work by Li \& Waxman
(2006) further shows that even the upstream magnetic field must have
been amplified by at least one or two orders of magnitude in order to
explain observed $X-$ray afterglows.
These results should be put in perspective with recent studies on the
process of Fermi acceleration in the test particle limit around
relativistic shock waves. In particular, Niemiec \& Ostrowski (2006)
have shown that Fermi acceleration becomes inoperative in a turbulent
magnetic field with power spectra of the Kolmogorov or scale invariant
type. Through analytical calculations, Lemoine, Pelletier \& Revenu
(2006) have shown that relativistic Fermi acceleration is inefficient
if the magnetic field power is distributed on scales larger than the
typical Larmor radius of the accelerated population, in agreement with
the above numerical result. Therefore the interpretation of the
gamma-ray burst afterglow emission from the relativistic external
shock wave requires the magnetic field to have been amplified on short
spatial scales.
It has been suggested that the relativistic two stream Weibel
instability could amplify the downstream magnetic field to the level
required by afterglow modeling of gamma-ray bursts (Gruzinov \& Waxman
1999; Medvedev \& Loeb 1999). Subsequent studies have however argued
that this instability should saturate early on (Wiersma \& Achterberg
2004; Lyubarsky \& Eichler 2006). The question of the long term
evolution of the downstream magnetic field (on timescales
$\gg\,\omega_{p}^{-1}$, with $\omega_{p}$ the plasma frequency) also
remains open. Ongoing particle-in-cell simulations should eventually
shed light on this issue (Silva {\it et al.} 2003; Frederiksen {\it et
al.} 2004; Medvedev {\it et al.} 2005; Kato 2007; Chang, Spitkovsky
\& Arons 2008; Keshet {\it et al.} 2008; Spitkovsky 2008).
Regarding the amplification of the upstream magnetic field, the
present situation is reminiscent of results obtained for
non-relativistic supernovae remnant shock waves, where the
interstellar magnetic field has apparently been amplified by one or
two orders of magnitude (see V\"olk, Berezhko \& Ksenofontov 2005,
Parizot {\it et al.} 2006). In this case, the leading candidate for
the instability is the streaming instability, seeded by the cosmic ray
precursor in the upstream plasma (see Bell 2004, 2005; Pelletier,
Lemoine \& Marcowith 2006, Marcowith, Lemoine \& Pelletier 2006;
Reville {\it et al.} 2007; Niemiec et al. 2008; Amato \&
Blasi 2008; Zirakashvili, Ptuskin \& V\"olk 2008; Reville et al. 2008). The generalization
of this instability to the relativistic regime has been studied on
phenomenological grounds by Milosavljevi\'c \& Nakar (2006) who have
concluded that it should be able to account for the degree of
amplification inferred in gamma-ray bursts. More recently, Reville,
Kirk \& Duffy (2006) have derived in detail the dispersion relation
for a parallel shock wave and the saturation due to thermal effects.
In the present work, we propose to explore in detail the
generalization of this type of instability to the ultra-relativistic
regime $\Gamma_{\rm sh}\,\gg\,1$. One crucial difference with the
previous works on the streaming instability is that we consider the
most natural case of superluminal shock waves (with a magnetic field
perpendicular to the shock normal in the shock front frame). This case
is more generic than the parallel configuration studied previously
because the transverse component of the magnetic field is boosted by
the shock Lorentz factor when going to the shock frame. Another
important difference is that we bring to light a new type of
instability, of a compressive nature. Finally, in contrast with most
particle-in-cell simulations performed to date, our study focusses on
magnetized shock waves, for which there exists a coherent upstream
magnetic field (whose dynamical influence on the shock jump conditions
can be neglected however). Nevertheless there exist pioneering PIC simulations in the moderately relativistic regime, which include both a mean field and a significant mass ratio between electrons and ions, see Hededal \& Nishikawa (2005), Dieckmann, Shukla, \& Drury (2008).
We adopt a simplified description in which the cosmic-ray distribution
is modeled as a step function out to some distance $\ell_{\rm cr}$ and
we neglect the cosmic-ray response to the disturbance. This latter
assumption is justified by the fact that the instability is maximal on
the shortest spatial scales, orders of magnitude below the typical
Larmor radius of accelerated particles.
The paper is organized as follows. In Section~\ref{sec:general}, we
introduce the main scales of the problem, most notably the diffusion
scale of the cosmic rays; we then calculate the level of amplification
that is necessary to make Fermi acceleration
operative. Section~\ref{sec:instab} is devoted to the investigation of
the instabilities under the condition that some cosmic rays have
undergone a first Fermi cycle. Section~\ref{sec:conc} summarizes our
results and provides some outlook. Details of the calculations are
provided in Appendix~\ref{sec:appperp}.
\section{General considerations}\label{sec:general}
We carry out most of the discussion in the shock front rest frame,
hence unless otherwise noted, all quantities are evaluated in this
frame. We use the subscripts $_{\rm\vert u}$ or $_{\rm\vert d}$ to tag
quantities measured in the upstream or in the downstream rest frame
respectively.
\subsection{Upstream diffusion length}
\label{sec:updiff}
In the upstream rest frame, cosmic rays can never stream too far ahead
of a relativistic shock wave since this latter propagates towards
upstream with velocity $v_{\rm sh}=\beta_{\rm sh}c\approx c$ [the
shock Lorentz factor $\Gamma_{\rm sh} \equiv (1-\beta_{\rm
sh}^2)^{-1/2}\,\gg\,1$]. Cosmic rays scatter on magnetic
turbulence upstream before they are caught back by the shock wave when
their pitch angle $\theta_{\rm\vert u} \sim 1/\Gamma_{\rm sh}$
(Gallant \& Achterberg 1999, Achterberg {\it et al.}
2001). Consequently, they can travel a distance $\ell_{\rm cr\vert
u}$, which may take the following values depending on the ratio of
the Larmor radius $r_{\rm L\vert u}$ to the coherence length of the
upstream magnetic field $\lambda_{\rm c\vert u}$ (Milosavljevi\'c \&
Nakar 2006):
\begin{itemize}
\item for small scale turbulence
\begin{equation}
\ell_{\rm cr\vert u}\,\sim\,\frac{1}{\Gamma_{\rm sh}^2}\frac{r_{\rm
L\vert u}^2}{\lambda_{\rm c\vert u}} \quad \left(r_{\rm L\vert
u}\gg \Gamma_{\rm sh} \lambda_{\rm c\vert u}\right) \ ,
\end{equation}
\item for large scale turbulence
\begin{equation}
\ell_{\rm cr\vert u}\,\sim\,\frac{r_{\rm L\vert u}}{\Gamma_{\rm sh}} \quad \left(r_{\rm L\vert
u}\ll \Gamma_{\rm sh} \lambda_{\rm c\vert u}\right) \ .
\end{equation}
\end{itemize}
Both regimes, short or large scale turbulence can be expected at some
point, insofar as the excitation of the upstream magnetic field on
short spatial scales is due to the streaming of the non-thermal
particle population in the shock precursor. Indeed, cosmic rays of the
first generation are to interact with a turbulent magnetic field
ordered on large scales. However, provided the instability that they
trigger grows fast enough, cosmic rays of the next generation will
propagate in short scale turbulence. In reality, the situation is
likely to be more complex as the process of particle propagation
upstream and magnetic field generation are closely intertwined. The
fact that the non-thermal population contains particles of different
energies, which can stream at different distances from the shock
front, should also play a significant role. In this respect, Keshet
{\it et al.} (2008) have observed that the upstream magnetic field is
affected to greater distances as time goes on. This strongly suggests
that higher energy cosmic rays are produced as time goes on, and that,
by travelling farther in the upstream medium, they excite the
turbulence at larger distances from the shock front.
In the discussion that follows, we estimate the growth of unstable
modes in both the limits of small or large scale turbulence in order
to remain as general as possible. Certainly the limit of large scale
turbulence is more restrictive with respect to the growth of the
instability, since the distance travelled upstream is significantly
reduced with respect to that in small scale turbulence. Whichever
limit prevails depends on the ratio of the short scale turbulent to
the large scale (coherent) magnetic field strength, as discussed in
Section~\ref{sec:fermi}. We also discuss the effect of higher energy
cosmic rays on the growth rate.
It is important to emphasize that the distance that controls the
growth of the instability is that between the shock front and the
position of the particle, which is smaller than $\ell_{\rm cr\vert u}$
by a factor $(1-\beta_{\rm sh})\sim \left(2\Gamma_{\rm
sh}^2\right)^{-1}$. In the following, we will need the expression
for $\ell_{\rm cr\vert sh}$ (also noted $\ell_{\rm cr}$), i.e. the
length scale of the cosmic-ray distribution as measured in the shock
front rest frame. It can be calculated by transforming the upstream
residence time $t_{\rm r\vert u}\,\simeq\,\ell_{\rm cr\vert u}/c$ in
the shock front frame $t_{\rm r\vert sh}=t_{\rm r\vert u}/\Gamma_{\rm
sh}$, and then by rewriting in the expression obtained the upstream
Larmor radius and coherence length in terms of their shock frame
equivalent. In the perpendicular (or superluminal and
ultra-relativistic) configuration of interest, $r_{\rm L\vert
sh}\,\simeq\, \Gamma_{\rm sh}^{-2} r_{\rm L\vert u}$, $\lambda_{\rm
c\vert sh}\,\simeq\, \Gamma_{\rm sh}^{-1} \lambda_{\rm c\vert
u}$. This boost of the coherence length is valid for wavenumber
modes that are parallel to the shock normal; perpendicular wavenumber
modes remain unchanged. In the following, we thus use the following
expressions for $\ell_{\rm cr}$:
\begin{itemize}
\item for small scale turbulence
\begin{equation}
\ell_{\rm cr}\,\equiv\, \ell_{\rm cr\vert sh}\,\simeq\, \frac{r_{\rm L\vert
sh}^2}{\lambda_{\rm c\vert sh}}\ ,
\label{eq:lbar1}
\end{equation}
\item for large scale turbulence
\begin{equation}
\ell_{\rm cr}\,\equiv\, \ell_{\rm cr\vert sh}\,\simeq\, r_{\rm L\vert
sh}\ .
\label{eq:lbar2}
\end{equation}
\end{itemize}
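As a numerical illustration of Eqs.~(\ref{eq:lbar1}) and~(\ref{eq:lbar2}) (the code and the fiducial values below are ours, not taken from the text), the shock-frame precursor lengths can be evaluated from the upstream Larmor radius and coherence length using the superluminal transformations $r_{\rm L\vert sh}\simeq\Gamma_{\rm sh}^{-2}r_{\rm L\vert u}$ and $\lambda_{\rm c\vert sh}\simeq\Gamma_{\rm sh}^{-1}\lambda_{\rm c\vert u}$ quoted above:

```python
# Illustrative sketch (our fiducial numbers, not the paper's): evaluate
# the shock-frame precursor length l_cr in both turbulence regimes.

def precursor_lengths(r_L_u, lambda_c_u, gamma_sh):
    """Upstream Larmor radius r_L_u and coherence length lambda_c_u
    (any common length unit), shock Lorentz factor gamma_sh."""
    r_L_sh = r_L_u / gamma_sh**2     # perpendicular shock: r_L|sh ~ r_L|u / Gamma^2
    lam_sh = lambda_c_u / gamma_sh   # lambda_c|sh ~ lambda_c|u / Gamma
    ell_small = r_L_sh**2 / lam_sh   # small-scale regime (r_L|u >> Gamma * lambda_c|u)
    ell_large = r_L_sh               # large-scale regime (r_L|u << Gamma * lambda_c|u)
    return ell_small, ell_large

# Hypothetical values: r_L|u = 1e16 cm, lambda_c|u = 1e10 cm, Gamma_sh = 100;
# here r_L|u >> Gamma_sh * lambda_c|u, so the small-scale estimate applies.
small, large = precursor_lengths(1e16, 1e10, 100.0)
```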
Below, we find a process of generation of intense magnetic
disturbances at short spatial scales. The expected instabilities in
all relevant cases generate disturbances with a coherence length
$\lambda_{\rm c\vert u}$ between the minimal length $l_{\rm
MHD}=c/\omega_{\rm p,i}$ required for the validity of MHD
description and the diffusion length $\ell_{\rm cr\vert u}$ of the
cosmic rays, but with a preference for short scales, i.e. $l_{\rm
MHD}\,\la\,\lambda_{\rm c\vert u}\,\la\,\ell_{\rm cr\vert u}$. The
minimal MHD length is:
\begin{equation}
l_{\rm MHD} \equiv \beta_{\rm A} r_{0\vert\rm u} \ ,
\end{equation}
where $\beta_{\rm A} \equiv B_{0\vert\rm u}/\sqrt{4\pi \rho_{\rm
u\vert u}c^2}$ denotes throughout this paper the Alfv\'en velocity
(measured upstream) in units of $c$, and $r_{0\vert\rm u} =
m_{\rm p}c^2/eB_{0\vert\rm u}$ is the Larmor radius of thermal protons in
the upstream comoving frame. The above MHD scale is measured in the
upstream plasma rest frame.
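Since $l_{\rm MHD}=\beta_{\rm A}r_{0\vert\rm u}$ should coincide with the ion inertial length $c/\omega_{\rm p,i}$, a quick numerical consistency check can be made (cgs units; the code and parameter values are ours, for illustration only):

```python
# Consistency check (illustrative, cgs units): beta_A * r_0 should
# equal the ion inertial length c / omega_pi for any B and density n.
import math

c   = 2.998e10    # speed of light, cm/s
e   = 4.803e-10   # proton charge, esu
m_p = 1.673e-24   # proton mass, g

def l_mhd(B_G, n_cm3):
    rho = n_cm3 * m_p
    beta_A = B_G / math.sqrt(4 * math.pi * rho * c**2)  # Alfven velocity / c
    r_0 = m_p * c**2 / (e * B_G)                        # thermal-proton Larmor radius
    return beta_A * r_0

def ion_inertial_length(n_cm3):
    omega_pi = math.sqrt(4 * math.pi * n_cm3 * e**2 / m_p)
    return c / omega_pi
```

For instance, with an assumed interstellar-like field $B=3\,\mu$G and density $n=1\,$cm$^{-3}$, both expressions give $l_{\rm MHD}\approx 2.3\times10^7$\,cm.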
\subsection{Requirements for successful Fermi acceleration}\label{sec:fermi}
As discussed in Lemoine, Pelletier \& Revenu (2006), a particle can
execute a large number of Fermi cycles through the shock only if the
turbulence coherence scale is much smaller than the Larmor radius
(assuming no coherent magnetic field), in detail $r_{\rm L\vert u}\gg
\lambda_{\rm c\vert u} \Gamma_{\rm sh}$ upstream, or $r_{\rm L\vert d}
\gg \lambda_{\rm c\vert d}$ as measured downstream.
The Fermi process becomes inoperative in the limit of large coherence
length (when compared to the typical Larmor radius) since the cycle
time becomes smaller than a coherence time, hence the field is
effectively regular and superluminal on a typical Fermi
cycle. Conversely, the Fermi process will be operative provided either
of the above inequalities is fulfilled, since the memory of the
initial conditions at shock crossing is then erased by pitch angle
diffusion in the short scale turbulence.
In the generic case of a ultra-relativistic superluminal shock, the
above two conditions are actually related by the shock jump
conditions, since the Larmor radius increases by $\sim \Gamma_{\rm
rel}^2$ when going from downstream to upstream, while the coherence
length increases by $\sim\Gamma_{\rm rel}$. The quantity $\Gamma_{\rm
rel}$ corresponds to the Lorentz factor of the downstream fluid as
measured upstream, and in the case of a ultra-relativistic strong
shock, $\Gamma_{\rm rel}\,\simeq\,\Gamma_{\rm sh}/\sqrt{2}$ (Blandford
\& McKee 1976).
In the following, we describe the upstream and downstream fluids as
comprising a large scale component $\mathbf{B_0}$ and a short scale
turbulent component generated by some unspecified instability. In this case,
Fermi acceleration will be efficient provided the short scale field
has a sufficient amplitude as compared to $B_0$. We discuss this
condition in the following.
\subsubsection{Upstream motion}
In this particular section, all quantities are evaluated in the
upstream rest frame and, for clarity, we drop the subscript notation
$_{\rm\vert u}$. We derive here the requirements on the amplitude of
the short scale component of the magnetic field that would allow
successful Fermi acceleration. Without loss of generality, we assume
that the coherent component $\mathbf{B_0}$ lies in the plane $x-y$,
the $x$ direction corresponding to the shock normal (oriented toward
upstream infinity). We denote by $\mathbf{b}$ the irregular component
(whose amplitude is expressed in units of $B_0$). The layout is
pictured in Fig.~\ref{fig:shock_conf}.
The total field is thus written as $\mathbf{B}= B_0(\mathbf{e} +
\mathbf{b})$, and the particle trajectory is governed by the following
equation:
\begin{equation}
\frac{{\rm d}\boldsymbol\beta}{{\rm d}t} = \omega_{\rm L,0}\,
\boldsymbol\beta\times(\mathbf{e} + \mathbf{b})\ ,
\end{equation}
with $\omega_{\rm L,0}$ denoting the Larmor frequency expressed
relatively to $B_0$. We neglect the influence of any short scale time
varying electric field $\boldsymbol\delta\mathbf{E}$ in the above
equation of motion since $\delta E/(B_0b)\,\sim\, \omega/k\,\ll\,1$.
\begin{figure}
\centering{
\includegraphics[width=0.4\textwidth,clip=true]{shock_conf.eps}}
\caption{Schematic plot of the geometrical configuration in the
upstream plasma rest frame. The coherent component $\mathbf{B_0}$
lies in the $x-y$ plane, with $x$ pointing towards the shock
normal, and ($\theta$, $\phi$) the angles of the particle velocity
in this frame.}
\label{fig:shock_conf}
\end{figure}
In order to quantify the return timescale and the condition for
successful Fermi acceleration as a function of the amplitude of the
random component of the magnetic field, it is useful to express this
equation in terms of the angles $\theta$ and $\phi$, which are defined
as: $\beta_x=\cos\theta$, $\beta_y=\sin\theta\cos\phi$ and
$\beta_z=\sin\theta\sin\phi$. These equations of motion read:
\begin{eqnarray}
\dot\theta&\,=\,& \omega_{\rm L,0}\,\left[\left(e_y +
b_y\right)\sin\phi -
b_z\cos\phi\right]\ ,\label{eq:t}\\ \dot\phi&\,=\,& \omega_{\rm
L,0}\,\left[-e_x-b_x +
\left(e_y+b_y\right)\frac{\cos\phi}{\tan\theta} +
b_z\frac{\sin\phi}{\tan\theta}\right]\ .
\label{eq:p1}
\end{eqnarray}
Rescaling the Larmor frequency with respect to $e_yB_0$, i.e. defining
$\omega_{{\rm L},y}\,\equiv\, e_y\omega_{\rm L,0}$, and noting that
$\theta\,\ll\,1$ when the particle propagates upstream, the equations
of motion for $\theta$ and $\phi$ can be rewritten as follows:
\begin{eqnarray}
\dot\theta&\,\simeq\,& \omega_{{\rm L},y}\sin(\phi) + \omega_{{\rm
L},y}A\sin(\phi-\sigma)\ ,\nonumber\\
\dot\phi&\,\simeq\,& \omega_{{\rm L},y}{\cos\phi\over\theta}\,+\,\omega_{{\rm
L},y}A{\cos(\phi-\sigma)\over\theta}\ .
\label{eq:p2}
\end{eqnarray}
The factors $A$ and $\sigma$ describe respectively the amplitude and
the phase of the small scale magnetic field in the shock front plane,
i.e. $b_y=A\cos\sigma$ and $b_z=A\sin\sigma$ (assuming isotropic
turbulence). The amplitude $A=\delta B/B_y$ is measured with respect
to $B_y$.
One can extract from the above system the unperturbed trajectory,
i.e. assuming $A=0$:
\begin{equation}
\cos\phi\,=\,\cos\phi_{\rm i}\,{\sin\theta_{\rm i}\over\sin\theta}\ ,
\end{equation}
where values indexed with $_{\rm i}$ are calculated at some initial
time. Since entry into upstream corresponds to $\cos\theta_{\rm
i}\,>\,\beta_{\rm sh}$, hence $\theta_{\rm i}\,\lesssim\,
1/\Gamma_{\rm sh}$, and exit from upstream corresponds to
$\theta\,\gtrsim\,1/\Gamma_{\rm sh}$, the above equation suggests that
$\phi$ is driven to an angle close to $\pm\pi/2$. The sign is given
by the sign of $e_y$: for $e_y>0$, it is $+\pi/2$. This region lies
opposite to that which allows return to the shock when the particle
travels downstream (see Lemoine, Pelletier \& Revenu 2006), hence
Fermi cycles cannot be completed. It is important to note that this
unperturbed trajectory is executed on a timescale $t_{\rm
unpert}\,\simeq\, r_{{\rm L},y}/(\Gamma_{\rm sh}c)$ (where $r_{{\rm
L},y}=c/\omega_{{\rm L},y}$).
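The unperturbed drift toward $\phi\rightarrow+\pi/2$ is easy to verify numerically. The following sketch (our own check, not part of the original analysis; the choices $\omega_{{\rm L},y}=1$ and the initial condition are illustrative) integrates Eq.~(\ref{eq:p2}) with $A=0$ by a fourth-order Runge-Kutta scheme, and confirms that $\theta\cos\phi$ (the small-angle form of $\sin\theta\cos\phi$) is conserved while $\phi$ is driven toward $+\pi/2$ as $\theta$ grows:

```python
import math

def integrate_unperturbed(theta0, phi0, omega=1.0, t_end=1.0, dt=1e-4):
    """RK4 integration of the reduced upstream equations of motion
    (small-theta limit, unperturbed case A = 0):
        dtheta/dt = omega * sin(phi)
        dphi/dt   = omega * cos(phi) / theta
    The quantity theta * cos(phi) is an exact invariant of this system."""
    def deriv(theta, phi):
        return omega * math.sin(phi), omega * math.cos(phi) / theta

    theta, phi = theta0, phi0
    t = 0.0
    while t < t_end:
        k1t, k1p = deriv(theta, phi)
        k2t, k2p = deriv(theta + 0.5 * dt * k1t, phi + 0.5 * dt * k1p)
        k3t, k3p = deriv(theta + 0.5 * dt * k2t, phi + 0.5 * dt * k2p)
        k4t, k4p = deriv(theta + dt * k3t, phi + dt * k3p)
        theta += dt / 6.0 * (k1t + 2 * k2t + 2 * k3t + k4t)
        phi += dt / 6.0 * (k1p + 2 * k2p + 2 * k3p + k4p)
        t += dt
    return theta, phi

# start deep in the loss cone, theta0 ~ 1/Gamma_sh, with phi0 away from pi/2
theta, phi = integrate_unperturbed(theta0=0.01, phi0=0.3)
# theta grows to O(1) while cos(phi) = theta0*cos(phi0)/theta -> 0
```

As $\theta$ increases toward $1/\Gamma_{\rm sh}$ and beyond, $\cos\phi$ must decrease to preserve the invariant, which is precisely the focusing toward $\phi=+\pi/2$ discussed above.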
The noise term comes from the random phase $\sigma$ (assuming
isotropic turbulence in the shock front plane, an assumption that can
be relaxed). Therefore the trajectory deviates from the unperturbed
trajectory to linear order by $(\Delta\phi,\Delta\theta)$, whose
variance increases linearly with time:
\begin{eqnarray}
\langle\Delta\phi^2\rangle &\,\simeq\,& {1\over 3}A^2{\omega_{{\rm L},y}^2
\over \theta^2}\tau_{\rm c}\Delta t\ ,\nonumber\\
\langle\Delta\theta^2\rangle&\,\simeq\,& {2\over 3}\,A^2\omega_{{\rm
L},y}^2\tau_{\rm c}\Delta t\ .
\end{eqnarray}
Note that the above equation does not contain all the terms driving
the variance of $\phi$ and $\theta$: it neglects terms of the form
$\int {\rm d}t_1{\rm d}t_2
\langle\Delta\phi(t_1)\Delta\phi(t_2)\rangle$ and similarly in
$\Delta\theta$. However, these terms are smaller by ${\cal O}(A^2)$ as
compared to those above, hence they can be neglected in a first
approximation (we will find $A\,\gg\,1$ further below).
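The linear growth of the variance can be illustrated with a toy Monte Carlo (our own sketch, not taken from the text): the phase $\sigma$ is redrawn uniformly every coherence time $\tau_{\rm c}$, so the accumulated $\Delta\theta$ performs a random walk. For this piecewise-constant phase model the variance grows as $\frac{1}{2}A^2\omega_{{\rm L},y}^2\tau_{\rm c}\,t$; the ${\cal O}(1)$ coefficient differs from the $2/3$ quoted above, which follows from a three-dimensional averaging of the turbulence, but the linear scaling in $t$ is the point. All parameter values below are arbitrary:

```python
import math
import random

def dtheta_variance(n_real=4000, n_steps=200, A=0.5, omega=1.0,
                    tau_c=1e-3, seed=1):
    """Ensemble variance of Delta-theta after n_steps coherence times:
    each interval of length tau_c contributes an independent kick
    omega * A * tau_c * sin(sigma), with sigma uniform in [0, 2*pi)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_real):
        dtheta = 0.0
        for _ in range(n_steps):
            sigma = rng.uniform(0.0, 2.0 * math.pi)
            dtheta += omega * A * tau_c * math.sin(sigma)
        acc += dtheta * dtheta
    return acc / n_real

var = dtheta_variance()
# for this model: var ~ 0.5 * A^2 * omega^2 * tau_c * t,
# with t = n_steps * tau_c, i.e. linear growth in time
```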
In the absence of the regular driving term in the equations of motion
(i.e. when $B_y=0$), return occurs over a timescale $t_{\rm
diff}\,\simeq\, \left(\Gamma_{\rm sh}^2A^2\omega_{{\rm L},y}^2\tau_{\rm
c}\right)^{-1}$. Noise dominates over the unperturbed trajectory if
$\langle \Delta\theta^2\rangle\,\gtrsim\,\theta^2$, where $\theta$
indicates the unperturbed trajectory. This inequality must be
satisfied within a (return) timescale $t_{\rm unpert}$ otherwise the
particle will have exited from upstream (along the weakly perturbed
unperturbed trajectory) before the noise could overcome the
unperturbed driving force. This implies:
\begin{equation}
A^2\,\gtrsim\, {1\over \Gamma_{\rm sh}\omega_{{\rm L},y}\tau_{\rm c}}\ .
\end{equation}
If one defines the Larmor radius $r_{\rm L}$ measured with respect to
the total magnetic field, $r_{\rm
L}\,=\,c\left[(A^2+1)^{1/2}\omega_{{\rm L},0}\right]^{-1}$, so that
$r_{\rm L}\,\simeq\, c/(A\omega_{\rm L,y})$ when $A\,\gg\,1$, the former
inequality amounts to:
\begin{equation}
A\,\gtrsim\, {r_{\rm L}\over \Gamma_{\rm sh}\lambda_{\rm c}}\ .
\end{equation}
If this inequality is satisfied, then one can check that
$\langle\Delta\phi^2\rangle^{1/2}\,\sim\,{\cal O}(1)$ on a return
timescale $t_{\rm diff}$, which implies that the return directions
are isotropized in the shock front plane. Fermi acceleration should
then be efficient (provided the return probability to the shock as
calculated downstream is also isotropic in $\phi$, see below).
An interesting implication of the above is to restrict Fermi
acceleration to a rather limited range of Larmor radii:
\begin{equation}
\Gamma_{\rm sh}\lambda_{\rm c}\,\lesssim\, \bar r_{\rm L}\,\lesssim\,
A\Gamma_{\rm sh}\lambda_{\rm c} \ .
\end{equation}
Very high values of the amplification factor are thus required to
produce powerlaw spectra over a large dynamic range.
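As a trivial numerical aid (our own helper; the values of $A$, $\Gamma_{\rm sh}$ and $\lambda_{\rm c}$ are illustrative, with $A=80$ echoing the amplification factor discussed further below), the allowed window can be wrapped as:

```python
def fermi_operative_range(A, gamma_sh, lambda_c):
    """Upstream Larmor-radius window for efficient Fermi cycles:
    Gamma_sh * lambda_c <~ r_L <~ A * Gamma_sh * lambda_c.
    The dynamic range in r_L (hence in energy) is simply the
    amplification factor A."""
    return gamma_sh * lambda_c, A * gamma_sh * lambda_c

r_lo, r_hi = fermi_operative_range(A=80.0, gamma_sh=300.0, lambda_c=1.0)
# r_hi / r_lo == A: about two decades of energy for A = 80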
\subsubsection{Downstream motion}
All quantities in this subsection are evaluated in the downstream rest
frame. In this frame, the coherent magnetic field is nearly fully
aligned with the $y-$axis, and we assume indeed that
$\mathbf{B_0}=B_0\mathbf{y}$. It is useful to study the evolution of
the velocity components perpendicular ($\beta_\perp$) and parallel
($\beta_\parallel$) to the direction of $\mathbf{B_0}$:
$\beta_\parallel\,\equiv\,\beta_y$,
$\beta_\perp^2\,=\,\beta_x^2+\beta_z^2$. The equations of motion for
the velocity $\boldsymbol\beta$ can then be written:
\begin{eqnarray}
\dot\beta_\perp&\,=\,&\omega_{{\rm
L},0}{\beta_\parallel\over\beta_\perp}\left(\beta_xb_z -
\beta_zb_x\right)\,\nonumber\\ \dot\beta_\parallel&\,=\,&\omega_{{\rm
L},0}\left(\beta_zb_x - \beta_xb_z\right)\ .
\end{eqnarray}
One can check that
$\beta_\parallel\dot\beta_\parallel+\beta_\perp\dot\beta_\perp=0$ as
it should since $\beta$ is conserved. The unperturbed trajectory for
the pair of variables ($\beta_\parallel$, $\beta_\perp$) is trivial,
i.e. $\beta_{\perp,0}\,=\,{\rm const.}\,=\beta_{\perp,\rm i}$ and
$\beta_{\parallel,0}\,=\,{\rm const.}\,=\beta_{\parallel,\rm
i}$. Return may occur or not, depending on these initial velocity
components; if it occurs, it does so on a timescale $t_{\rm
unpert}\,\simeq\,r_{{\rm L},0}/c$ (Lemoine, Pelletier \& Revenu
2006).
Since $b_x$ is not compressed through shock crossing,
$b_x/b_z\,\sim\,1/\Gamma_{\rm sh}$ for isotropic upstream turbulence,
hence this component can be neglected. Integrating the above equations
to calculate the squared perpendicular displacement, one finds:
\begin{eqnarray}
\langle\Delta x_\perp^2\rangle&\,=\,&\int_0^t {\rm d}t_1\,\int_0^t
{\rm d}t_2\,
\langle\beta_\perp(t_1)\beta_\perp(t_2)\rangle
\nonumber\\ &\,\simeq\,& \omega_{{\rm
L},0}^2{\beta_{\parallel,\rm i}^2\over
\beta_{\perp,\rm i}^2}\int_0^t {\rm d}t_1\,\int_0^t
{\rm d}t_2\int_0^{t_1}{\rm
d}t_3\,\int_0^{t_2}{\rm
d}t_4\,\nonumber\\ & & \quad\quad
\beta_{x,0}(t_3)\beta_{x,0}(t_4)\langle
b_z(t_3)
b_z(t_4)\rangle\nonumber\\ &\,\simeq\,&\omega_{{\rm
L},0}^2{\beta_{\parallel,\rm i}^2\over
\beta_{\perp,\rm i}^2}\int_0^t{\rm
d}t_1\int_{t_1}^{t}{\rm
d}t_2\int_0^{t_1}{\rm d}t_3\, {A^2\over
3}\tau_{\rm
c}\,\beta_{x,0}^2(t_3)\nonumber\\ &\,\simeq\,&
{A^2\over 3}\omega_{{\rm L},0}^2\tau_{\rm
c}{\beta_{\parallel,\rm
i}^2\over\beta_{\perp,\rm i}^2}\,P(t)\ .
\end{eqnarray}
In the above equation, $\beta_{x,0}(t)$ represents the unperturbed
trajectory: $\beta_{x,0}\,=\,\beta_{x,\rm i}\cos(\omega_{{\rm
L},0}t)-\beta_{z,\rm i}\sin(\omega_{{\rm L},0}t)$. The function $P(t)$
contains powers of $t$ up to $t^3$, as well as sine and cosine
functions of $2\omega_{{\rm L},0}t$; its dimension is that of
$c^2\omega_{\rm L}^{-3}$. Noise will then dominate over the
unperturbed trajectory $x_{\perp,0}\,=\,\beta_{\perp,\rm i} ct$ within an
unperturbed return timescale if:
\begin{equation}
A^2\,\gtrsim\, {1\over \omega_{{\rm L},0}\tau_{\rm c}}\ ,
\end{equation}
or, equivalently:
\begin{equation}
A\,\gtrsim\, {r_{\rm L}\over \lambda_{\rm c}}\ .\label{eq:ineq-d}
\end{equation}
As before, $r_{\rm L}$ denotes the Larmor radius measured with respect
to the total magnetic field. This condition on $A$ is very similar to
that obtained upstream, since $r_{\rm L\vert u}\,\sim\, \Gamma_{\rm
sh}^2 r_{\rm L\vert d}$ and $\lambda_{\rm c\vert
u}\,\sim\,\Gamma_{\rm sh}\lambda_{\rm c\vert d}$, while $A_{\vert
\rm u}\,\simeq\,A_{\vert\rm d}$.
According to the above analysis, and following Lemoine, Pelletier \&
Revenu (2006), phase mixing should be sufficiently large to erase all
dependence on the phase of the velocity vectors, hence Fermi
acceleration should be fully operative, if either one of the above
conditions on $A$ is satisfied.
It is interesting to discuss the above results in light of recent
Monte Carlo numerical simulations of relativistic Fermi acceleration
in the presence of short scale turbulence (Niemiec, Ostrowski \& Pohl
2006). These authors have investigated the efficiency of the Fermi
process for various turbulence configurations, including a coherent
magnetic field, a long wave turbulence and a short scale
component. They confirm that Fermi acceleration is not operative if
$\delta B/B=0$, but find that a spectrum can develop over about two
orders of magnitude when $\delta B/B\,=\,80$ (see their Fig.~1). The
high energy break can be directly interpreted as the energy at which
the above inequality Eq.~(\ref{eq:ineq-d}) is no longer
fulfilled. Beyond that point, the spectrum steepens and the Fermi
process becomes inoperative. Interestingly, these authors also show
that if the pitch angle scattering amplitude in the short scale
turbulence is independent of energy, the break disappears and the
spectrum continues without bound. This is also expected, insofar as
the break that is determined by Eq.~(\ref{eq:ineq-d}) stems from the
fact that the scattering time in the short scale turbulence scales as
the square of the Larmor time whereas the unperturbed trajectory in
the background field depends linearly on the Larmor time. Assuming a
constant pitch angle scattering amplitude in the notations of Niemiec,
Ostrowski \& Pohl (2006) implies that the two scattering timescales
evolve in the same way, hence if short scale turbulence suffices to
isotropize directions at a given energy, it will do so at all
energies.
In order to produce powerlaw spectra over a large dynamic range, it is
thus necessary to reach an amplification factor $\delta B/B$ as large
as possible, since this ratio determines the dynamic range of particle
energies. Interestingly, the afterglow modeling of gamma-ray bursts
suggests that indeed the upstream and downstream magnetic fields have
been amplified by a large factor. In the range of energies in which
Fermi acceleration is operative, it is likely that the spectral index
would be equal to the so-called canonical value $s=2.3$, although the
generality of this prediction for different shapes of the 3D
turbulence spectrum remains to be studied. The results of Niemiec,
Ostrowski \& Pohl (2006) cannot be used to infer this spectral index,
since they have restricted their analysis to values of $\delta B/B <
100$, and the spectral indices they report have been measured beyond
the break energy. Techniques developed in Lemoine \& Pelletier
(2003), Lemoine \& Revenu (2006) are particularly suited to study this
problem and calculations are underway.
\section{Instabilities at perpendicular shock waves}\label{sec:instab}
The magnetic field in the shock front $B_{\vert\rm sh}$ is related to
the magnetic field in the upstream frame $B_{\vert\rm u}$ and the
associated electric field $E_{\vert\rm u}$ through the Lorentz
transform:
\begin{eqnarray}
B_{\parallel\vert\rm sh}&\,=\,& B_{\parallel\vert\rm
u}\nonumber\\ \mathbf{B}_{\perp\vert\rm sh}&\,=\,& \Gamma_{\rm
sh}\left(\mathbf{B}_{\perp\vert\rm u} - \mathbf{\beta}_{\rm
sh}\times\mathbf{E}_{\perp\vert\rm u}\right)\ .
\end{eqnarray}
To zeroth order, the electric field in the upstream plasma frame
vanishes, hence the magnetic field in the shock front frame is mostly
perpendicular unless $\mathbf{B}_{\vert\rm u}$ is aligned along the
shock normal to within an angle $\sim {\cal O}(1/\Gamma_{\rm sh})$
(subluminal shock). It then suffices to consider the fully
perpendicular situation with $B_{\parallel\vert\rm sh}=0$.
The cosmic rays stream ahead of the shock wave, carrying a net charge
density $\rho_{\rm cr}$ which will induce a counteracting charge
density $\rho_{\rm pl}$ in the background plasma. Note that the cosmic
rays do not induce an electrical current at zeroth order in the shock
front frame, only a net charge density. Since we consider the
generation of short scale turbulence, we neglect the cosmic ray
response to this short scale turbulence other than its effect on the
cosmic ray distribution scale $\ell_{\rm cr}$. For simplicity, we
approximate the cosmic-ray charge profile with a top-hat distribution.
As usual, we search for a stationary regime which serves as a basis
for perturbing the equations in the time-dependent regime. The
details of the calculations are provided in Appendix~\ref{sec:appperp}
for both stationary and time dependent quantities. In particular, the
stationary regime exhibits the following set-up:
\begin{equation}
\mathbf{B}=B_y \mathbf{e_y},\,\, \mathbf{u}=u_x\mathbf{e_x} +
u_z\mathbf{e_z} \ .
\end{equation}
Recalling that the shock normal is directed along $x$, $u_x\simeq
-\Gamma_{\rm sh}\beta_{\rm sh}$ characterizes the velocity of the
upstream flow through the front. The velocity component $u_z$
along the front is small compared to $u_x$, but its shear $\partial_x
u_z$ cannot be neglected (see Appendix~\ref{sec:appperp}).
\subsection{Linear analysis of the reduced system}
The complete system that governs the linear evolution is of seventh
order (see Appendix~\ref{sec:appperp}) and thus rather
involved. Nevertheless, it remains possible to derive the main results
since the Lorentz transform from the upstream comoving frame to the
shock front frame dominates the MHD propagation effects when the wave
scale is ``not too small'', as will be made more precise in a later
subsection. In other words, the instability growth rates can be
derived to lowest order by assuming vanishing Alfv\'en and sound
velocities. We have verified that the analysis of the system including
$\beta_{\rm A}$ and $\beta_{\rm s}$ (but neglecting terms in $u_z$)
does not modify the growth rates obtained further below.
One should point out that electromagnetic waves with dispersion
relation $\omega_{\vert\rm u}(\mathbf{k_{\vert\rm u}})$ are Lorentz
transformed into waves with dispersion relation:
\begin{equation}
\omega\,\simeq\, \beta_{\rm sh}ck_x + {\cal
O}\left(\frac{ck_{y,z}}{\Gamma_{\rm sh}}\right)\ .
\end{equation}
In particular, for Alfv\'en waves, the next term on the r.h.s. is
$\beta_{\rm A}k_yc/\Gamma_{\rm sh}$. Note that the Alfv\'en velocity
is always expressed in the upstream plasma rest frame,
i.e. $\beta_{\rm A}\,=\,B_{y\vert \rm u}/\sqrt{4\pi\rho_{\rm
u}c^2}$. Therefore it suffices in what follows to consider the
limit $k_z\rightarrow 0$. The limit $k_y\rightarrow 0$ can also be
considered but it appears more restrictive, because it limits the
analysis to the evolution of upstream magnetosonic modes. We will thus
consider both cases $k_y=0$ and $k_y$ finite. Of course, one can use
Fourier analysis without mode coupling as long as the spatial
dependence of the system coefficients can be neglected, in agreement
with our previous approximations, and in particular, that the cosmic
ray charge density is modeled as a step function. In the shock front
rest frame, the physical picture of the instability is then as
follows. Impinging electromagnetic waves propagate in vacuum beyond
the length scale $\ell_{\rm cr}$ which characterizes the spatial
distribution of cosmic rays upstream of the shock front. For $0<x<
\ell_{\rm cr}$, the presence of the cosmic-ray charge density induces
a short scale instability. Hence we consider electromagnetic waves and
seek a solution in Fourier space with $\omega$ set by its vacuum
dispersion relation, but with a complex $k_x$ characterizing the
amplification in the charge layer $0<x< \ell_{\rm cr}$.
In this perpendicular configuration, the return current (in the
upstream frame) is not responsible for a supplementary tension effect,
but for a supplementary compression effect. A $b_y$ perturbation
leads to a supplementary vertical (i.e. along $z$) compression that
can push in phase with the kinetic compression. If we consider a spatial
modulation along the mean field ($k_y \neq 0$), a vertical
perturbation $b_z$ generates a supplementary compression in the
direction of the mean field that can push in phase with the kinetic
compression as well.
For large $\Gamma_{\rm sh}$, even with $k_z \neq 0$, the system reduces
to fourth order and can be expressed as the coupling between the
propagation of the vertical perturbed motion $\delta u_z$ and the
propagation of kinetic compression $\delta w/w$, where $w \equiv
\rho_{\rm u}c^2$ is the relativistic proper enthalpy density of the
cold upstream plasma. The system of the two coupled equations can be
expressed using the system of
Eqs.~(\ref{eq:tdb}),(\ref{eq:tdw}),(\ref{eq:tdu}) in the limit
$\beta_{\rm A}\rightarrow 0$, $\beta_{\rm s}\rightarrow 0$. In
particular in this limit, one notes that $b_x\,\approx\,0$.
Then one obtains a single equation for $\delta u_z$ (see
Appendix~\ref{sec:perteq}) where derivatives $\partial_z$ cancel out
and where the derivative of $u_z$ is inserted, its second derivative
being neglected (we also assume $\beta_{\rm sh} = 1$ in the coefficients):
\begin{eqnarray}
\label{eq:maseq}
\hat D^4 \delta u_z - \left(\kappa^2 + u_z \kappa
\partial_x\right)\hat D^2 \delta u_z + \Gamma_{\rm sh}\kappa^2
\partial_x \hat D \delta u_z + \Gamma_{\rm sh}^2\kappa^2 \partial_y^2
\delta u_z = 0 \ .
\end{eqnarray}
The differential operator $\hat D$ is defined by $\hat{D} =
\Gamma_{\rm sh}\left(c^{-1}\partial_{\rm t} - \beta_{\rm sh}
\partial_x\right)$. For a detailed understanding of the instabilities
one has to notice that the charge is not completely screened by the
plasma as a supplementary electric field is generated along the flow
(i.e. oriented towards $-x$): $E_x = u_zB_y/\Gamma_{\rm sh}$, along
with a sheared vertical motion $u_z$ such that $\partial_x u_z =
\kappa$ (see Appendix~\ref{sec:appperp}). The instabilities stem from
this sheared motion. The parameter $\kappa$, which carries the
dimension of a wavenumber, is defined in Eq.~(\ref{eq:kappa}). To our
present order of approximation, it can be expressed as:
\begin{equation}
\kappa \,\equiv \, \frac{\rho_{\rm cr}B_y}{\Gamma_{\rm sh} w} \ .
\label{eq:kappa_main}
\end{equation}
Note that both $\rho_{\rm cr}$ and $B_y$ are here evaluated in the
shock front frame. In the front frame, the parameter $\kappa$ appears
divided by $\Gamma_{\rm sh}$, hence we introduce $k_{*} \equiv
\kappa/\Gamma_{\rm sh}$.
The reduced form of the equation (see Appendix~\ref{sec:perteq})
clearly shows the ordering (by setting $\hat D = k_x \beta_{\rm sh}\Gamma_{\rm sh} \tilde
D$, $\delta u_z = \beta_{\rm sh}\Gamma_{\rm sh} \tilde u_z$):
\begin{equation}
\label{eq:nw1}
\tilde D^4 \tilde u_z -\left({k_*^2 \over k_x^2} + {u_z \over \Gamma_{\rm
sh}}{k_*\partial_x \over k_x^2}\right)\tilde D^2 \tilde u_z + {k_*^2
\partial_x \over k_x^3} \tilde D \tilde u_z + {k_*^2 \over k_x^4}
\partial_y^2 \tilde u_z = 0
\end{equation}
The inhomogeneity is contained in $\kappa^2$, which vanishes at large
distance from the shock. At infinity $\hat D \delta u_z = 0$, which
implies that a mode with a specified $k_x$ has angular frequency
$\omega = \beta_{\rm sh} k_xc$, as expected. In the precursor where
$\kappa^2 \neq 0$, we follow this mode characterized by its frequency
and look at the modification of its spatial behavior. As explained
above, we solve this equation by setting $c^{-1} \partial_t \mapsto
i\beta_{\rm sh} k_x$, $\partial_x \mapsto ik_x(1-\varepsilon)$, so
that $\hat D \mapsto ik_x \beta_{\rm sh}\Gamma_{\rm sh} \varepsilon$
($\tilde D \mapsto i \varepsilon$), where $\varepsilon$ is a complex
number, the imaginary part of which characterizes the growth rate. It
is obtained by solving the algebraic equation:
\begin{equation}
\label{eq:nw2}
\varepsilon^4 +2{k_*^2 \over k_x^2} \varepsilon^2 - {k_*^2 \over
k_x^2} \varepsilon -\frac{k_*^2k_y^2}{k_x^4} + i{u_z \over
\Gamma_{\rm sh}}{k_* \over k_x}(1-\varepsilon)\varepsilon^2 = 0
\end{equation}
Actually, because $\varepsilon$ is small, the equation can be
simplified to:
\begin{equation}
\varepsilon^4 - {k_*^2 \over k_x^2} \varepsilon -\frac{k_*^2k_y^2}{k_x^4} = 0 \ .
\end{equation}
The contribution in $u_z$ introduces a small correction to the real
part of the roots (most $u_z$ contributions can be canceled out by a
frame transformation, but that correction cannot be canceled out
because it corresponds to the electric field component $E_x$).
\subsubsection{Analysis of the case $k_y=0$}
Let us first analyze the case $k_y=0$, or less restrictively for
$k_y^2 \ll \vert \varepsilon^2 \vert\,k_x^2$. This is a pure
magneto-sonic compression with $\mathbf{b}$ along the mean field. One
gets the following three roots:
\begin{equation}
\varepsilon\, \simeq\, -\left(\frac{k_*}{k_x}\right)^{2/3} \left(-1, \,
\frac{1\pm i\sqrt{3}}{2}\right)\ .
\end{equation}
One obtains an instability spatial growth rate $\gamma_x={\rm
Im}\left(k_x\varepsilon\right)$ that increases with $k_x$:
\begin{equation}
\gamma_x\, =\, \frac{\sqrt{3}}{2}k_*^{2/3}k_x^{1/3} \quad (k_x\, \gg\,
k_*) \ .\label{eq:g2}
\end{equation}
Thus the characteristic wave number associated with the charge in this
problem of magneto-sonic wave propagation is $k_* \equiv
\kappa/\Gamma_{\rm sh}$.
\subsubsection{Analysis of the case $k_y \neq 0$}
With $k_y \neq 0$, one recovers the previous growth rate if $k_y \ll
k_*^{1/3}k_x^{2/3}$, while for $k_y \gg k_*^{1/3}k_x^{2/3}$ one
derives:
\begin{equation}
\varepsilon\, \simeq\, \left(\frac{k_y k_*}{k_x^2
}\right)^{1/2}(1,-1,i,-i)\ ,\label{eq:g4}
\end{equation}
or, equivalently:
\begin{equation}
\gamma_x \,\simeq\, \left(k_y k_*\right)^{1/2} \quad
\left(k_y\,\gg\,k_*^{1/3}k_x^{2/3}\right)\ .\label{eq:g5}
\end{equation}
The spatial growth rates are thus $\gamma_x \sim (k_*^2k_x)^{1/3}$ and
$(k_*k_y)^{1/2}$ respectively. Remarkably they are independent of
$k_z$ that can be chosen arbitrarily, provided it remains small
compared to $k_x$ for the consistency of the derivation.
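Both asymptotic branches can be recovered numerically from the reduced quartic. The sketch below is our own check (pure Python, using a Durand-Kerner root iteration of our choosing; the values $k_*=1$, $k_x=100$ and $k_y\in\{0,1000\}$ are arbitrary): it finds the roots of $\varepsilon^4 - (k_*^2/k_x^2)\,\varepsilon - k_*^2k_y^2/k_x^4 = 0$ and extracts $\gamma_x = {\rm Im}(k_x\varepsilon)$:

```python
def quartic_roots(a3, a2, a1, a0, iters=500):
    """Durand-Kerner iteration for the four roots of
    eps^4 + a3*eps^3 + a2*eps^2 + a1*eps + a0 = 0."""
    def p(x):
        return (((x + a3) * x + a2) * x + a1) * x + a0
    roots = [(0.4 + 0.9j) ** k for k in range(4)]
    for _ in range(iters):
        new = []
        for i, zi in enumerate(roots):
            denom = 1.0 + 0.0j
            for j, zj in enumerate(roots):
                if j != i:
                    denom *= zi - zj
            new.append(zi - p(zi) / denom)
        roots = new
    return roots

def growth_rate(k_star, k_x, k_y):
    """gamma_x = max Im(k_x * eps) over the roots of the reduced
    dispersion relation eps^4 - (k_*^2/k_x^2) eps - k_*^2 k_y^2/k_x^4 = 0."""
    roots = quartic_roots(0.0, 0.0, -(k_star / k_x) ** 2,
                          -(k_star * k_y) ** 2 / k_x ** 4)
    return max((k_x * r).imag for r in roots)

# branch 1, k_y = 0: expect gamma_x = (sqrt(3)/2) k_*^(2/3) k_x^(1/3)
g1 = growth_rate(1.0, 100.0, 0.0)
# branch 2, k_y >> k_*^(1/3) k_x^(2/3): expect gamma_x ~ (k_* k_y)^(1/2)
g2 = growth_rate(1.0, 100.0, 1000.0)
```

The numerical roots reproduce the scalings of Eqs.~(\ref{eq:g2}) and (\ref{eq:g5}) to high accuracy in their respective regimes.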
\subsubsection{Comparison to MHD and cosmic-ray length scales}
One should compare the scale defined by $k_*$ to the smallest scale
$l_{\rm MHD}$ for our MHD description. Note that this length scale is
defined in the comoving upstream frame. Since $k_{x\vert\rm
u}\,\simeq\,k_x/\Gamma_{\rm sh}$ while $k_{y\vert\rm u}\,\simeq\,
k_y$, it suffices to require $k_* l_{\rm MHD}\,<\,1$ to ensure that
there exist modes in the MHD range with $k>k_*$. One obtains:
\begin{equation}
k_* l_{\rm MHD}\, =\,\frac{n_{\rm cr}eB_y}{\Gamma_{\rm
sh}^2w}\beta_{\rm A}r_{0\vert\rm u}
\end{equation}
which can be rewritten in terms of the fraction $\xi_{\rm cr}$ of
(downstream) shock internal energy converted into (downstream) cosmic
ray energy:
\begin{equation}
\xi_{\rm cr} \,\equiv\, \frac{e_{\rm cr\vert d}}{4\Gamma_{\rm sh}^2w} \ .
\end{equation}
To this effect, we assume that the accelerated population can be
described as a power-law of index $-s$ and minimum momentum $p_{\rm
min}$, so that:
\begin{equation}
n_{\rm cr}\,\simeq\, \frac{\vert 1-s\vert}{\vert 2-s\vert}\frac{e_{\rm
cr}}{p_{\rm min}c}\ .
\end{equation}
This equation assumes $s>2$; if $s=2$, then $\vert 2-s\vert$ should be
replaced by $\log(p_{\rm max}/p_{\rm min})$. Since $e_{\rm cr\vert
d}\,\sim\,e_{\rm cr\vert sh}$ (as $\Gamma_{\rm sh\vert
d}=\sqrt{9/8}$ for a strong ultra-relativistic shock), one obtains:
\begin{equation}
k_* l_{\rm MHD} \,\simeq\, 4\frac{\vert 1-s\vert}{\vert
2-s\vert}\xi_{\rm cr}\beta_{\rm A} \frac{r_{0\vert\rm u}}{r_{\rm
min\vert\rm u}}\Gamma_{\rm sh}^2 \ .\label{eq:totgrow}
\end{equation}
Here $r_{\rm min\vert u}$ denotes the minimum Larmor radius of the
accelerated population, as measured upstream. The typical energy of
the first generation of cosmic rays is $\Gamma_{\rm sh}^2 m_{\rm
p}c^2$, so that $r_{\rm min\vert u} \sim \Gamma_{\rm sh}^2
r_{0\vert\rm u}$. Therefore
\begin{equation}
k_* l_{\rm MHD} \,\sim\, 4\frac{\vert 1-s\vert}{\vert
2-s\vert}\xi_{\rm cr} \beta_{\rm A} \,\ll\, 1 \ .
\end{equation}
Thus $k_*$ defines an MHD scale.
In order for the instability to be efficient, one must also require
$\gamma_x \ell_{\rm cr}\,\gg\,1$, which is a non-trivial requirement in
view of the restricted value of $\ell_{\rm cr}$. One finds:
\begin{equation}
\gamma_x \ell_{\rm cr}\,\simeq\, 4\frac{\vert 1-s\vert}{\vert
2-s\vert}\frac{\gamma_x}{k_*}\xi_{\rm cr}\, g\ ,\label{eq:gro}
\end{equation}
with $g\,\equiv\,1$ for large scale turbulence, and
$g\,\equiv\,r_{0\vert\rm u}/\lambda_{\rm c\vert u}$ for short scale
turbulence. In the latter case, one may assume $\lambda_{\rm c\vert
u}\,\simeq\, l_{\rm MHD}$, as suggested by the fact that the growth
rate increases with $k$, so that $g\,\sim\,1/\beta_{\rm A}$. Then, in
both cases, the total growth is presumably much larger than unity if
$\xi_{\rm cr}$ is not too small and $k_x$, $k_y$ sufficiently large as
compared to $k_*$. Hence even the first generation of cosmic rays is
able to destabilize the upstream magnetic field on the shortest
scales. The effect of the high energy part of the accelerated
population will be addressed shortly.
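These order-of-magnitude statements are easy to evaluate numerically. The sketch below is our own; the fiducial values $s=2.3$, $\xi_{\rm cr}=0.1$, $\beta_{\rm A}=10^{-4}$ and the ratio $\gamma_x/k_*=10$ are illustrative assumptions, not values fixed by the text. It computes $k_*l_{\rm MHD}$ and the total growth factor $\gamma_x\ell_{\rm cr}$ in the short scale turbulence case ($g=1/\beta_{\rm A}$):

```python
def instability_figures(s=2.3, xi_cr=0.1, beta_A=1e-4,
                        gx_over_kstar=10.0, short_scale=True):
    """Evaluate k_* l_MHD ~ 4 |1-s|/|2-s| xi_cr beta_A and the total
    growth gamma_x l_cr ~ 4 |1-s|/|2-s| (gamma_x/k_*) xi_cr g, with
    g = 1/beta_A for short scale turbulence and g = 1 otherwise."""
    prefac = 4.0 * abs(1.0 - s) / abs(2.0 - s)
    k_star_l_mhd = prefac * xi_cr * beta_A
    g = 1.0 / beta_A if short_scale else 1.0
    total_growth = prefac * gx_over_kstar * xi_cr * g
    return k_star_l_mhd, total_growth

kl_mhd, growth = instability_figures()
# k_* l_MHD << 1 (the instability lives in the MHD range), while
# gamma_x l_cr >> 1 (many e-folds within the cosmic-ray precursor)
```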
As mentioned, the study of the more extended system including the
contribution of the terms involving the Alfv\'enic and sonic
contributions (but neglecting terms of order $u_z$ in the system)
confirms the results above for the growth rates, provided $\beta_{\rm
A}\,\ll\,1$ and $\beta_{\rm s}\,\ll\,1$.
\subsubsection{Magneto-sonic saturation}
One can estimate the amplitude of the various components in terms of
$b_y$. It has already been mentioned that $\vert b_x \vert \ll \vert
b_y \vert$; moreover,
\begin{equation}
b_y \,\simeq\, \frac{\beta_{\rm sh}}{\Gamma_{\rm sh}} \delta u_x
\end{equation}
and
\begin{equation}
b_z \,\simeq\, -\frac{1}{\kappa}\partial_y \delta u_x \ .
\end{equation}
Thus we find that $b_z$ may achieve a large amplitude, because
\begin{equation}
b_z \,\simeq\, - \frac{\Gamma_{\rm sh}}{\kappa \beta_{\rm sh}} \partial_y b_y \ .
\end{equation}
Within linear theory, the saturation level of these
magneto-sonic instabilities is simply estimated by the fact that the
compression has a limited amplitude $\vert \delta w/w \vert
<1$. Defining the power spectrum of $b_y$ and $b_z$ per log interval
of wavenumber ${\cal P}_{b_y}\,\equiv\, (k^3/2\pi^2)\vert\tilde
b_y\vert^2$ in terms of the Fourier component $\tilde b_y$ and
similarly for ${\cal P}_{b_z}$ in terms of $\tilde b_z$, one derives
from Eq.~(\ref{eq:redsys}):
\begin{equation}
{\cal P}_{b_y}^{1/2} \,\leq\, \frac{k_*^2}{\vert \varepsilon^2k_x^2
+ 2k_*^2 \vert}
\end{equation}
and
\begin{equation}
{\cal P}_{b_z}^{1/2}\,=\, \frac{k_y}{k_*} {\cal P}_{b_y}^{1/2} \ .
\end{equation}
Note that the power spectrum is normalized to the ratio of the amplitude
of the turbulent (small scale) component to that of the coherent field $B_0$.
Consider the first branch of the instability, given by
Eq.~(\ref{eq:g2}) which applies in the limit $k_x\,\gg\,k_*$,
$k_y\,\ll\,k_*^{1/3}k_x^{2/3}$. There one can show that ${\cal
P}_{b_y}^{1/2}$ saturates at a value of order
$\left(k_x/k_*\right)^{-2/3}\,\ll\,1$, but ${\cal P}_{b_z}^{1/2}$
saturates at a value of order unity when
$k_y\,\sim\,k_*^{1/3}k_x^{2/3}$. For the second branch, given in
Eq.~(\ref{eq:g5}) and which applies in the limit
$k_y\,\gg\,k_*^{1/3}k_x^{2/3}$, one finds that ${\cal P}_{b_y}^{1/2}$
saturates at a value of order $k_*/k_y\,\ll\,1$, but again ${\cal
P}_{b_z}^{1/2}$ saturates at a value of order unity. This moderate saturation
level can be directly traced back to the saturation of these
compressive modes at the linear level $\vert\delta w/w\vert\sim 1$. As
discussed in Sec.~\ref{sec:fermi}, such a level $\delta B/B_0\sim1$
should not allow a powerlaw spectrum to develop through Fermi
acceleration.
Nevertheless, our present analytical description is by definition
limited to the linear regime. It would certainly be interesting to
pursue the calculations using numerical simulations as one could
expect that the holes produced in the plasma would widen with the
growing magnetic pressure inside and that the density excesses become
spikes. Thus one might reasonably expect the generation of many local
small magneto-sonic shocks that are absorbed by the main
ultra-relativistic shocks, with the reconversion of some amount of
cosmic-ray energy into thermal energy in the precursor.
The previous discussion has focused on the role played by the first
generation of cosmic rays, since higher energy cosmic rays can only be
present if acceleration to smaller energies has been
completed. However it is easy to verify that, if the highest energy cosmic
rays are present in the shock precursor, they are bound to play a
dominant role in the amplification of the magnetic field, most notably
because the length scale of the cosmic ray distribution increases as
$r_{\rm L}^2$. For instance, at a distance $x$ from the shock front
where only cosmic rays of momentum larger than $p_*$ can be found, the
critical wavenumber for the instability reads, for $s > 2$ (with $s$ the
spectral index of the accelerated population):
\begin{equation}
k_*\,=\, k_{*,0} \left(\frac{p_*}{p_{*,0}}\right)^{1-s}\ ,
\end{equation}
where $k_{*,0}$ denotes the same wavenumber evaluated for the whole
cosmic-ray distribution, and $p_{*,0}$ denotes the minimum momentum of
the cosmic ray distribution at the shock front. Consequently, if
$\gamma_x\propto k_*^a$, with $a=2/3$ or $1/2$ for the two branches of
the instability obtained above:
\begin{equation}
\gamma_x \ell_{\rm cr,*}\,=\,\left(\frac{p_*}{p_{*,0}}\right)^{b+a(1-s)}
\gamma_{x,0}\ell_{\rm cr}\ ,
\end{equation}
with $b=1$ for large scale turbulence, $b=2$ for short scale
turbulence. Here as well, $\gamma_{x,0}$ should be understood as the
growth rate derived previously for the whole cosmic-ray
distribution. This product grows with increasing $p_*/p_{*,0}$
provided $s<(b+a)/a$, i.e. $s<4$ ($b=2$, $a=2/3$), or $s<5$ ($b=2$, $a=1/2$), or $s<5/2$ ($b=1$, $a=2/3$), or finally $s<3$ ($b=1$, $a=1/2$). These inequalities are likely to be fulfilled. Finally, the
product $k_* l_{\rm MHD}$ scales as $(p_*/p_{*,0})^{1-s}$ hence
decreases if $s>1$. This also means the typical scale of the
instability increases with increasing $p_*/p_{*,0}$.
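For bookkeeping, the exponent appearing in the product $\gamma_x\ell_{\rm cr,*}$ above follows from combining the two scalings quoted in the text, namely $\gamma_x\propto k_*^a$ together with $k_*\propto (p_*/p_{*,0})^{1-s}$, and $\ell_{\rm cr,*}/\ell_{\rm cr}=(p_*/p_{*,0})^{b}$:
$$\gamma_x\,\ell_{\rm cr,*}\,=\,\left(\frac{p_*}{p_{*,0}}\right)^{a(1-s)}\left(\frac{p_*}{p_{*,0}}\right)^{b}\gamma_{x,0}\,\ell_{\rm cr}\,=\,\left(\frac{p_*}{p_{*,0}}\right)^{b+a(1-s)}\gamma_{x,0}\,\ell_{\rm cr}\ ,$$
so that the product grows with $p_*/p_{*,0}$ if and only if $b+a(1-s)>0$, i.e. $s<(b+a)/a$.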
Of course, the above implicitly ignores the spatial dependence of $B$,
but it suggests that amplification through the streaming of the
highest energy part of the cosmic ray distribution will seed much more
efficiently the magnetic field amplification.
\subsubsection{Non-existence of an incompressible instability}
Since the above compressive instabilities appear to saturate at a
level too small ($\delta B/B\sim 1$) to allow Fermi acceleration over
a broad range of energies, the possible existence of a non-compressive instability becomes crucial. One can search for such modes by
selecting the wave vectors accordingly, in which case the reduced
system becomes:
\begin{eqnarray}
\tilde D^2 \tilde u_z -\left(\frac{k_*^2}{k_x^2}+\frac{k_*
\partial_z}{k_x^2} + \frac{u_z}{\beta_{\rm sh}\Gamma_{\rm sh}}\frac{k_*
\partial_x}{k_x^2}\right)\tilde u_z + \frac{k_*}{k_x}
& = & 0 \nonumber \\ \frac{k_*\partial_x}{k_x^2} \tilde
D \tilde u_z - \frac{k_*\partial_y^2}{k_x^3} \tilde u_z
- \frac{\partial_z}{k_x} \tilde D^2 \tilde u_z & = & 0
\end{eqnarray}
It is easy to verify that this system possesses no unstable solution
but only damped solutions. Hence incompressible unstable modes do not
exist in this MHD description.
\section{Conclusions}\label{sec:conc}
In this study, we have examined the amplification of a pre-existing
magnetic field upstream of an ultra-relativistic shock wave. We have
assumed that the magnetic field is fully perpendicular to the shock
normal in the shock front frame, as generally expected in the
ultra-relativistic limit $\Gamma_{\rm sh}\,\gg\,1$. In the shock
frame, the cosmic rays do not induce any current at zeroth order, only
a net charge density, which is partly screened by the inflowing
plasma. This charge distribution then triggers an instability on very
short spatial scales, with a growth rate increasing with the
wavenumber. Cosmic rays do not respond to this amplification, as it takes place on scales much shorter than the typical Larmor radius.
It is important to stress that these instabilities are of a different nature from the resonant and non-resonant instabilities studied by Bell (2004) in the case of a non-relativistic parallel shock wave, most notably because they are essentially {\it compressive}. Since density
depletions are naturally limited, our linear analysis indicates that
these instabilities saturate at a moderate level of amplification,
$\delta B/B_0 \, \sim\, 1$. Furthermore, there is no unstable
incompressible mode within the present MHD approximation which could
provide a higher level of amplification. Therefore, we conclude that,
within the framework of ideal MHD, instabilities at ultra-relativistic
magnetized shock waves appear limited in their efficiency.
Such instabilities cannot therefore account for the degree of
amplification that has been inferred from the modeling of the
afterglow radiation of gamma-ray bursts. We have also argued in
Section~\ref{sec:fermi} that the ratio $\delta B/B_0$ sets the dynamic
range of energies over which powerlaw spectra can be produced through
Fermi acceleration. The present instabilities thus cannot allow
successful Fermi acceleration. Nevertheless, it would certainly be
useful to pursue the present investigation with dedicated numerical
simulations, which would allow one to go beyond the present linear
approximation and study whether non-linear effects can push the small
scale magnetic field to higher values.
If the standard interpretation of gamma-ray burst afterglows as the
synchrotron light of electrons accelerated at the forward shock (but
see Uhm \& Beloborodov 2006, Genet, Daigne \& Mochkovitch 2007 for
recent alternatives involving the reverse shock) holds, one is led to
conclude that some other instability is able to amplify the upstream
magnetic field on short spatial scales, or that some other form of
acceleration mechanism is operating (see for instance Hoshino
2008 for an alternative involving radiative pressure effects).
The present work indicates that such cosmic ray induced instabilities
would
involve non-MHD effects. The short spatial scales $\lambda_{\rm
c}\,\ll\,r_{\rm L}/\Gamma_{\rm sh}$ required are in conflict with
the usual synchrotron resonance between cosmic rays and MHD waves;
this remark is actually one of the motivations for the present work in a
full MHD framework. However, one cannot rule out that other types of
resonance could become relevant and yet involve incompressible modes,
such as Alfv\'en waves. One promising instability, currently under
study, involves a \v{C}erenkov resonance ($\omega = k_xc$) between plasma waves and the cosmic ray beam, generating a modified two-stream instability. The growth rates appear quite promising and the details
will be given in a forthcoming article.
\section{Introduction}
Let $\mathscr{F}$ be a filter over a regular uncountable cardinal $\kappa$.
We say that \emph{Galvin's property} holds for $\mathscr{F}$ (in symbols, ${\rm Gal}(\mathscr{F},\kappa,\kappa^+)$) if every family $\langle C_\gamma\mid \gamma<\kappa^+\rangle \subseteq\mathscr{F}$ admits a subfamily $\langle C_{\gamma_i}\mid i<\kappa\rangle$ with the property that $\bigcap\{C_{\gamma_i}\mid i<\kappa\}\in\mathscr{F}$.
In the 1970's, Galvin proved that if $\kappa=\kappa^{<\kappa}>\aleph_0$ then ${\rm Gal}(\mathscr{F},\kappa,\kappa^+)$ is true whenever $\mathscr{F}$ is normal.
The statement and the proof were published in a paper by Baumgartner, Hajnal and M\'at\'e \cite{MR0369081}.
The motivation for this paper comes from an open problem which appeared in \cite{bgs}.
In that work it is shown that, consistently, there is a $\kappa$-complete ultrafilter over a measurable cardinal $\kappa$ which fails to satisfy the Galvin property.
One should keep in mind the fact that if $\kappa$ is measurable then every normal filter $\mathscr{F}$ satisfies the Galvin property $\mathrm{Gal}(\mathscr{F},\kappa,\kappa^+)$.
Thus, the main result of \cite{bgs} shows that $\kappa$-completeness differs from normality in terms of implying Galvin's property.
On the other hand, it is consistent that $\kappa$ is measurable and every $\kappa$-complete ultrafilter is Galvin.
This can be demonstrated in Solovay's inner model $L[\mathscr{U}]$, as shown in \cite{MR4393795}.
However, inner models are limited in the large cardinals they can accommodate.
It was asked in \cite{bgs} whether it is consistent for a supercompact cardinal $\kappa$ that
every $\kappa$-complete ultrafilter $\mathscr{U}$ over $\kappa$ satisfies $\mathrm{Gal}(\mathscr{U},\kappa,\kappa^+)$.
In the first part of this paper we investigate the possibility of very large cardinals carrying $\kappa$-complete filters (ultrafilters) that fail to satisfy Galvin's property.
In \S2.1 we exhibit a generic extension where $\kappa$ is supercompact and every $\kappa$-complete ground model ultrafilter $\mathscr{U}$ over $\kappa$ extends to an ultrafilter $\mathscr{U}^*$ for which $\mathrm{Gal}(\mathscr{U}^*,\kappa,\kappa^+)$ fails (see Proposition~\ref{ExtendingGalvinWithSupercompacts}). Shortly after, we show that this construction is amenable to preserve even stronger large cardinals, such as $C^{(n)}$-extendibles and Vop\v{e}nka's Principle (Proposition~\ref{Largercadinals}). Continuing in this vein, we present a result of an opposite nature.
Namely, in Theorem~\ref{MakingEverythingNormal} we construct a generic extension where $\kappa$ is supercompact and every $\kappa$-complete ground model ultrafilter $\mathscr{U}$ extends to a $\kappa$-complete ultrafilter $\mathscr{U}^*$ that is \emph{Rudin-Keisler} equivalent to a normal one. In particular, all of these ultrafilters $\mathscr{U}^*$ do satisfy Galvin's property (Proposition~\ref{IdkappathenGalvin}).
The reader may have noticed that this is perhaps too harsh a way to convert an arbitrary $\kappa$-complete ultrafilter into a Galvin one. During the rest of the section we present alternative strategies to achieve the same configuration without such dramatic changes. Our first attempt takes place in Theorem~\ref{ExtendFilters} where we employ iterations of \emph{Generalized Mathias forcing} to show that every $\kappa$-complete filter $\mathscr{U}$ extends to a $\kappa$-complete filter $\mathscr{U}^*$ for which $\mathrm{Gal}(\mathscr{U}^*,\kappa,\kappa^+)$ holds. Section 2.1 then culminates with our main result, which builds upon previous work of Gitik and Shelah \cite{MR1632081}. More specifically, in Theorem~\ref{GitikShelahTheorem} we replace the previous iteration by a more sophisticated one also involving Generalized Mathias forcing. This iteration was devised by Gitik and Shelah and here it is adapted to our current purposes. As an outcome we obtain the consistency of a supercompact cardinal $\kappa$ with every ground model $\kappa$-complete ultrafilter $\mathscr{U}$ extending to a $\kappa$-complete ultrafilter $\mathscr{U}^*$ which is a $P$-point. In particular, $\mathrm{Gal}(\mathscr{U}^*,\kappa,\kappa^+)$ holds (see Proposition~\ref{P point is Galvin}).
\smallskip
Another way to examine large cardinals is to consider small cardinals in \textsf{ZF} models.
For instance, under the \emph{Axiom of Determinacy} (\textsf{AD}) $\omega_1$ is $\aleph_2$-supercompact \cite[Theorem~28.22]{Kan}. It is worth mentioning that in the choiceless setting Galvin's original proof breaks down. Actually, even the assumption $\kappa=\kappa^{<\kappa}$ requires some choice in order to be meaningful. So, if one wishes to analyze Galvin-property configurations over such models one must modify the currently available arguments.
In \S2.2 of this paper we show that some instances of Galvin's property are provable without the need of the Axiom of Choice (\textsf{AC}). In effect, this paper employs the Axiom of Determinacy as a new tool to get some variations of the Galvin property.
This fact is quite interesting since, apparently, there is a deep connection between Galvin's property and cardinal arithmetic which is not available with weak versions of \textsf{AC}.
This connection appears in the original theorem and also in other generalizations of the Galvin property.
For example, it is shown in \cite{MR3604115} that $2^{\aleph_0}<2^{\aleph_1}$ implies an instance of the Galvin property.
Nevertheless, Galvin's property reflects a substantial feature of the structure of normal filters, and for that reason it is relevant even without choice.
In the context of \textsf{AD} we shall see that
$\mathrm{Gal}(\mathscr{U},\aleph_2,\aleph_2)$ holds for the unique $\aleph_1$-complete ultrafilter over $\aleph_1$; namely, the club filter $\mathscr{D}_{\aleph_1}$.
Thus, by dropping \textsf{AC} (or some fragment of it) one can get stronger large cardinal properties of $\kappa$ along with the property that every $\kappa$-complete ultrafilter is Galvin.
\smallskip
The study of Galvin's property under \textsf{AD} led us to the area of partition calculus.
In \S3 of the paper we address a classical question about ordinary partition relations. An exquisite theorem of Shelah establishes that if $\lambda>{\rm cf}(\lambda)=\kappa>\aleph_0$ and $2^\kappa<\lambda$ then $\lambda\rightarrow(\lambda,\omega+1)^2$ \cite{MR2494318}. We prove that the same partition relation follows upon replacing the cardinal arithmetic assumption by an appropriate instance of Galvin's property. This is true in general, but it is very meaningful under \textsf{AD} since the assumption $2^\kappa<\lambda$ must be modified in the absence of \textsf{AC}.
Specifically, we shall see, again under \textsf{AD},
that if $\kappa$ is measurable and $\kappa={\rm cf}(\lambda)<\lambda$ is a limit of measurable cardinals then $\lambda\rightarrow(\lambda,\omega+1)^2$.
This gives an answer to \cite[Question 11.4]{MR795592} in the context of \textsf{AD}. Moreover,
since under ``\textsf{AD}+$V=L(\mathbb{R})$'' every regular uncountable cardinal below $\Theta$ is measurable (see \cite{MR2768698}) we will have that $\lambda\rightarrow(\lambda,\omega+1)^2$ holds whenever $\lambda$ is a singular cardinal of uncountable cofinality and a limit of regular cardinals.
For details, see Theorem~\ref{thm881ad} and the subsequent discussion. Finally, it must be said that our Galvin-like assumption is trivial when $2^\kappa<\lambda$, and forceable when $2^\kappa>\lambda$.
We believe that the negative relation $\lambda\nrightarrow(\lambda,\omega+1)^2$ is consistent as well. Actually, our result pinpoints which instances of Galvin's property should be violated in order to force this negative partition relation.
\smallskip
Our notation is mostly standard.
If $\kappa={\rm cf}(\kappa)>\aleph_0$ then $\mathscr{D}_\kappa$ denotes the club filter over $\kappa$.
If $\kappa={\rm cf}(\kappa)<\lambda$ then $S^\lambda_\kappa=\{\delta\in\lambda\mid {\rm cf}(\delta)=\kappa\}$. The arrow symbol $\lambda\rightarrow (\alpha,\beta)^2$ is a shorthand for the following statement: for every $f:[\lambda]^{2}\rightarrow 2$ there is either a $0$-monochromatic subset of $\lambda$ of order-type $\alpha$ or a $1$-monochromatic subset of $\lambda$ of order-type $\beta$.
We say that $\binom{\alpha}{\beta}\rightarrow\binom{\gamma}{\delta}$ iff for every $c:\alpha\times\beta\rightarrow\{0,1\}$ there are $A\in[\alpha]^\gamma,B\in[\beta]^\delta$ for which $c\upharpoonright(A\times{B})$ is constant.
We use $\Theta$ to denote $$\sup\{\alpha\mid \text{There exists a mapping from ${}^\omega\omega$ \emph{onto} $\alpha$}\}.$$
We employ the Jerusalem forcing notation, thus $p\leq{q}$ means that $q$ is stronger than $p$.
For background in partition calculus we refer the reader to \cite{MR795592} and \cite{MR3075383}.
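For orientation, two classical benchmarks can be phrased in the above arrow notation (they are not used in the sequel): the Erd\H{o}s--Rado theorem and a colouring due to Sierpi\'nski give, respectively,
$$(2^{\aleph_0})^+\rightarrow(\aleph_1,\aleph_1)^2 \quad\text{and}\quad 2^{\aleph_0}\nrightarrow(\aleph_1,\aleph_1)^2.$$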
\section{Galvin's property at very large cardinals with and without choice}
\subsection{Galvin's property at large cardinals}
In \cite{MR4393795} the following result is proved: It is consistent that $\kappa$ is a measurable cardinal and every $\kappa$-complete ultrafilter $\mathscr{U}$ over $\kappa$ satisfies $\mathrm{Gal}(\mathscr{U},\kappa,\kappa^+)$.
The proof strategy consists in analyzing Solovay's inner model $L[\mathscr{U}]$, where a complete classification of the $\sigma$-complete ultrafilters over $\kappa$ is available. The key observation is that in this inner model every $\sigma$-complete ultrafilter over $\kappa$ is Rudin-Keisler equivalent to a finite power of the normal measure $\mathscr{U}$. Since these ultrafilters do satisfy Galvin's property one concludes that $\mathrm{Gal}(\mathscr{V},\kappa,\kappa^+)$ holds for every $\kappa$-complete ultrafilter $\mathscr{V}\in L[\mathscr{U}]$. This phenomenon suggests the following question: what about those (very) large cardinals for which there is no available canonical inner model? The epitome of this is supercompactness.
By work of the first two authors together with S. Shelah \cite{bgs} it is consistent that a supercompact cardinal $\kappa$ carries a $\kappa$-complete ultrafilter $\mathscr{U}$ which extends the club filter and $\neg \mathrm{Gal}(\mathscr{U},\kappa,\kappa^+)$. Shortly after, the first author together with M. Gitik \cite{OnPrikryandCohen} improved this result by showing that just a measurable cardinal suffices to obtain such an ultrafilter $\mathscr{U}$.
The forthcoming proposition is a spin-off of the above-mentioned result in the context of general $\kappa$-complete ultrafilters:
\begin{proposition}\label{ExtendingGalvinWithSupercompacts}
Assume that the $\mathsf{GCH}$ holds and that $\kappa$ is a measurable cardinal. Then the following is true in the generic extension of \cite[Theorem 2.6]{OnPrikryandCohen}:
Every $\kappa$-complete (not necessarily normal) ultrafilter $\mathscr{U}$ of the ground model extends to a $\kappa$-complete ultrafilter $\mathscr{U}^*$ such that $\neg \mathrm{Gal}(\mathscr{U}^*,\kappa,\kappa^+)$.
In addition, if $\kappa$ was supercompact then it remains so in the extension.
\end{proposition}
\begin{proof}
The sought model is the generic extension by the Easton support iteration ${\langle}\mathbb{P}_\alpha,\lusim{\mathbb{Q}}_\beta\mid \alpha\leq\kappa+1, \beta\leq\kappa{\rangle}$ such that for $\alpha\leq\kappa$, $\lusim{\mathbb{Q}}_\alpha$ is trivial unless $\alpha$ is inaccessible, in which case it is a $\mathbb{P}_\alpha$-name for $\mathrm{Add}(\alpha,\alpha^+)$. This iteration preserves supercompactness (see e.g. \cite[Theorem~11.1]{MR2768691}).
Let ${\mathscr{U}}\in V$ be a $\kappa$-complete ultrafilter. Let us verify that we can adjust the argument in \cite{OnPrikryandCohen} to encompass non-normal ultrafilters. We will follow the notation from the original proof, considering the elementary embeddings
$$j_1:=j_{{\mathscr{U}}}:V\rightarrow M_{\mathscr{U}}=:M_1, \ j_2:=j_{\mathscr{U}^2}:V\rightarrow M_{\mathscr{U}^2}=:M_2$$
$$ k:M_1\rightarrow M_{2}, \ j_{2}=k\circ j_1$$
where $k$ is simply the ultrapower embedding defined in $M_\mathscr{U}$ using the ultrafilter $j_{1}(\mathscr{U})$. Let
$G:=G_\kappa*g_\kappa$ be $V$-generic for $\mathbb{P}_\kappa*\lusim{\mathbb{Q}}_\kappa$. The argument that these embeddings can be lifted in $V[G]$ does not require normality and remains unaltered. Thus, we form $j_1\subseteq j_1^*:V[G]\rightarrow M_1[j^*_1(G)]$, $k\subseteq k^*:M_1[j_1^*(G)]\rightarrow M_2[j_2^*(G)]$ and $j_2\subseteq j_2^*:=k^*\circ j_1^*$ such that:
\begin{enumerate}
\item for every $\alpha\in j_1``\kappa^+$, $f_{\kappa_2,k(\alpha)}(\kappa_1)=1$.
\item for every $\alpha\in \kappa_1\setminus j_1``\kappa^+, f_{\kappa_2,k(\alpha)}(\kappa_1)=0.$
\item $f_{\kappa_2,\kappa_1}(\kappa_1)=\kappa$.
\end{enumerate}
Since we are now dealing with possibly non-normal ultrafilters, we need to alter the values of the generic $f_2$ at $\delta^*:=[\id]_{j_1(\mathscr{U})}$, the generator of the second ultrapower. Also, we need to eliminate the generator of the first ultrapower $\delta:=[\id]_\mathscr{U}$:
\begin{enumerate}
\item for every $\alpha\in j_1``\kappa^+$, $f_{\kappa_2,k(\alpha)}(\delta^*)=1$.
\item for every $\alpha\in \kappa_1\setminus j_1``\kappa^+$, $f_{\kappa_2,k(\alpha)}(\delta^*)=0$.
\item $f_{\kappa_2,\delta^*}(\delta^*)=\delta$.
\end{enumerate}
Notice that the number of altered coordinates is small. In particular, the counting/genericity arguments of \cite[Lemma 2.7]{OnPrikryandCohen} relying on \textsf{ZFC} still go through. Next, derive in $V[G]$ the ultrafilter generated by $j^*_1$ and $[\id]_\mathscr{U}$,
$$\mathscr{U}^*:=\{X\subseteq \kappa\mid [\id]_\mathscr{U}\in j^*_1(X)\}$$
Note that $\mathscr{U}\subseteq \mathscr{U}^*$. Finally, let
\begin{equation*}
\mathscr{W}:=\{X\subseteq\kappa\mid [\id]_{j_1(\mathscr{U})}\in j^*_2(X)\}\in V[G].
\end{equation*}
Let us prove that $\mathscr{W}$ witnesses the statement of the proposition:
\begin{claim}\label{final claim}
$\mathscr{W}$ is a $\kappa$-complete ultrafilter over $\kappa$ such that:
\begin{enumerate}
\item $\mathscr{U}\subseteq \mathscr{W}$.
\item $\neg \mathrm{Gal}(\mathscr{W},\kappa,\kappa^+)$.
\end{enumerate}
\end{claim}
\begin{proof}[Proof of claim]
(1): If $A\in \mathscr{U}$ then $j_1(A)\in j_1(\mathscr{U})$, hence $[\id]_{j_1(\mathscr{U})}\in j_2(A)$ and thus $A\in \mathscr{W}$.
(2): Let us define the witness. For each $\alpha<\kappa^+$ let
$$A_\alpha:=\{\nu<\kappa\mid f_{\kappa,\alpha}(\nu)=1\}$$
then $$j^*_2(A_\alpha)=\{\beta<\kappa_2\mid f_{\kappa_2,j_2(\alpha)}(\beta)=1\}.$$ Since $j_2(\alpha)=k(j_1(\alpha))$, our modifications of the generic give $$f_{\kappa_2,j_2(\alpha)}([\id]_{j_1(\mathscr{U})})=1,$$ hence $[\id]_{j_1(\mathscr{U})}\in j^*_2(A_\alpha)$. Finally $A_\alpha\in \mathscr{W}$ by definition of $\mathscr{W}$.
Before proving the failure of the Galvin property, let us denote by $j_\mathscr{W}:V[G]\rightarrow M_\mathscr{W}$ the ultrapower embedding by $\mathscr{W}$ and by $k_\mathscr{W}:M_\mathscr{W}\rightarrow M^*_2$, defined by $k_\mathscr{W}([f]_\mathscr{W}):=j^*_2(f)([\id]_{j_1(\mathscr{U})})$, the factor map satisfying $k_\mathscr{W}\circ j_\mathscr{W}=j^*_2$.
We show that $k_\mathscr{W}$ is onto, hence the identity, and thus $j^*_2=j_\mathscr{W}$. In effect, if $A\in M_2[j^*_2(G)]$ then there is a name $\lusim{A}\in M_2$ with $A=(\lusim{A})_{j_2^*(G)}$. Since $j_2$ is the second ultrapower by $\mathscr{U}$, there is $f:[\kappa]^2\rightarrow V$ such that $j_2(f)([\id]_\mathscr{U},[\id]_{j_1(\mathscr{U})})=\lusim{A}$. By \L o\'s's theorem, we can assume that $f(\alpha,\beta)$ is a $\mathbb{P}_{\kappa+1}$-name for every $(\alpha,\beta)\in [\kappa]^2$. In $V[G]$ let $f^*(\alpha)=(f(f_{\kappa,\alpha}(\alpha),\alpha))_G$.
Then, $$k_\mathscr{W}([f^*]_W)=j_2^*(f^*)([\id]_{j_1(\mathscr{U})})=(j_2(f)(f_{\kappa_2,[\id]_{j_1(\mathscr{U})}}([\id]_{j_1(\mathscr{U})}),[\id]_{j_1(\mathscr{U})}))_{j_2^*(G)}$$
$$=j_2(f)([\id]_\mathscr{U},[\id]_{j_1(\mathscr{U})})_{j_2^*(G)}=(\lusim{A})_{j_2^*(G)}=A.$$
Let $\langle A_{\alpha_i}\mid i<\kappa\rangle$ be any subfamily of length $\kappa$ and denote $j_\mathscr{W}({\langle} A_{\alpha_i}\mid i<\kappa{\rangle}):={\langle} A'_{\alpha'_i}\mid i<j_\mathscr{W}(\kappa){\rangle}.$
Pick any $\eta$ with $\kappa\leq\eta< [\id]_{\mathscr{U}}<j_2(\kappa)$; then $\eta\notin j_1``\kappa^+$ and also $\alpha''_\eta\notin j_1``\kappa^+$, where $\alpha''_\eta$ denotes the $\eta^{\mathrm{th}}$ member of $j_1({\langle}\alpha_i\mid i<\kappa{\rangle})$. Moreover $k(\alpha''_\eta)=\alpha'_{k(\eta)}$, so by the definition of the generic, $f_{\kappa_2,\alpha'_{k(\eta)}}([\id]_{j_1(\mathscr{U})})=0$ and thus $[\id]_{j_1(\mathscr{U})}\notin A'_{\alpha'_{k(\eta)}}$. Hence
$$[\id]_{j_1(\mathscr{U})}\notin \bigcap \{A'_{\alpha'_i}\mid i<\kappa_2\}=j_2^*\Big(\bigcap_{i<\kappa}A_{\alpha_i}\Big),$$
and therefore $\bigcap_{i<\kappa}A_{\alpha_i}\notin \mathscr{W}$.
\end{proof}
\end{proof}
Continuing with our original discussion one may ask if the conclusion of Proposition~\ref{ExtendingGalvinWithSupercompacts} is compatible with large cardinals stronger than supercompactness. As argued in \cite{Bag, BagPov}, the natural model-theoretic strengthe\-ning of supercompactness is $C^{(n)}$-extendibility. Fix $n<\omega$. A cardinal $\kappa$ is called \emph{$C^{(n)}$-extendible} if for every $\lambda>\kappa$ there is $\theta\in\mathrm{Ord}$ and an elementary embedding $j\colon V_\lambda\rightarrow V_\theta$ with $\crit(j)=\kappa$, $j(\kappa)>\lambda$ and $V_{j(\kappa)}\prec_{\Sigma_n} V$.\footnote{Recall that $V_\eta\prec_{\Sigma_n} V$ is a shorthand for the following statement: for every $\bar{a}\in V_\eta^{<\omega}$ and every $\Sigma_n$ formula $\varphi(\bar{x})$ in the language of set theory, $V_\eta\models \varphi(\bar{a})$ iff $V\models \varphi(\bar{a})$.}
The classical notion of extendibility (see \cite[\S23]{Kan}) coincides with $C^{(n)}$-extendibility whenever $n=1$. However, when $n\geq 2$ the first $C^{(n)}$-extendible cardinal lies far above, and has stronger large-cardinal properties than, the first extendible cardinal. In addition, the $C^{(n)}$-extendible cardinals form a proper hierarchy \cite{Bag}. The culmination of this hierarchy is
the category-theoretic axiom known as \emph{Vop\v{e}nka's Principle} (\textsf{VP}) \cite[p. 335]{Kan}. In effect, it was shown by Bagaria that \textsf{VP} is equivalent to the existence of a proper class of $C^{(n)}$-extendible cardinals for every $n\geq 1$. We refer the reader to \cite{Bag} for further details.
Let us come back to the argument of Proposition~\ref{ExtendingGalvinWithSupercompacts}. If $\kappa$ is an extendible cardinal, performing our iteration $\mathbb{P}_{\kappa+1}$ will ruin the extendibility of $\kappa$.\footnote{Actually, adding a single Cohen subset to $\kappa$ does it.} Nevertheless, if one forces with $\mathrm{Add}(\alpha,\alpha^+)$ at every inaccessible cardinal the situation changes completely. In \cite{BagPov} the authors develop a general theory of preservation of extendible cardinals under class-forcing iterations. Specifically, in \cite[\S8]{BagPov} it is shown that many classical class-forcing iterations (e.g., Jensen's iteration to force the \textsf{GCH}) do preserve extendible cardinals, as well as
$C^{(n)}$-extendible cardinals and Vop\v{e}nka's Principle ($\mathrm{VP}$).
\smallskip
The following proposition is an easy corollary of \cite[Theorem~8.4]{BagPov}:
\begin{proposition}\label{Largercadinals}
Assume that the $\mathsf{GCH}$ holds and that $\kappa$ is a $C^{(n)}$-extendible cardinal for some $n\geq 1$. Let $\mathbb{P}$ denote the Easton support class iteration forcing with $\mathrm{Add}(\alpha,\alpha^+)$ at each inaccessible cardinal.
Then, the following hold in $V^{ \mathbb{P}}$:
\begin{enumerate}
\item $\kappa$ is $C^{(n)}$-extendible;
\item for every measurable cardinal $\lambda$ every $\lambda$-complete ultrafilter $\mathscr{U}\in V$ extends to a $\lambda$-complete ultrafilter $\mathscr{U}^*$ such that $\neg \mathrm{Gal}(\mathscr{U}^*,\lambda,\lambda^+).$
\end{enumerate}
In addition, if one assumes $\mathrm{VP}$ this is preserved in $V^{\mathbb{P}}$.
\end{proposition}
\begin{proof}
Clause~(1) is an immediate consequence of \cite[Theorem~8.4]{BagPov}.
For Clause~(2) we argue as follows. If $\lambda_0$ stands for the first $V$-inaccessible cardinal then $\mathbb{P}$ admits a gap at $\lambda_0^{++}$.\footnote{I.e., $\mathbb{P}\simeq \mathbb{P}_1\ast \lusim{\mathbb{P}_2}$ where $|\mathbb{P}_1|<\lambda_0^{++}$ and $\Vdash_{\mathbb{P}_1}\text{$``\lusim{\mathbb{P}}_2$ is $\lambda^{++}_0$-distributive''}$.} Thus $\mathbb{P}$ does not create new measurable cardinals in $V^{\mathbb{P}}$ \cite[Corollary~2]{HamkinsGap}. Let $\lambda$ be a $V$-measurable cardinal and $\mathscr{U}$ a $\lambda$-complete ultrafilter in the ground model. By Proposition~\ref{ExtendingGalvinWithSupercompacts}, $\mathbb{P}_{\lambda+1}$ forces that there is $\mathscr{U}^*\supseteq \mathscr{U}$ such that $\mathrm{Gal}(\mathscr{U}^*,\lambda,\lambda^+)$ fails. Clearly $\mathbb{P}/\mathbb{P}_{\lambda+1}$ is forced to be $\lambda^{++}$-directed closed (actually more), hence it preserves that $\mathscr{U}^*$ is a $\lambda$-complete ultrafilter over $\lambda$ witnessing $\neg \mathrm{Gal}(\mathscr{U}^*,\lambda,\lambda^+)$.
\end{proof}
Let us now show how to obtain the consistency of the opposite statement: ``Every $\kappa$-complete ultrafilter $\mathscr{U}$ over a supercompact cardinal $\kappa$ extends to a $\kappa$-complete ultrafilter $\mathscr{U}^*$ satisfying $\mathrm{Gal}(\mathscr{U}^*,\kappa,\kappa^+)$''. To this aim we will show how to turn a (not necessarily Galvin) ultrafilter $\mathscr{U}$ into another one, $\mathscr{U}^*$, that is \emph{Rudin-Keisler} equivalent to a normal ultrafilter. By virtue of Galvin's theorem \cite{MR0369081} this ensures that $\mathscr{U}^*$ itself satisfies $\mathrm{Gal}(\mathscr{U}^*,\kappa,\kappa^{+})$.
To show this we need a couple of preliminary observations. First, if $\mathscr{U}$ and $\mathscr{W}$ are $\kappa$-complete ultrafilters over $\kappa$ and $\mathscr{U}\equiv_{\mathrm{RK}}\mathscr{W}$ then $\mathrm{Gal}(\mathscr{U},\kappa,\kappa^+)$ holds if and only if $\mathrm{Gal}(\mathscr{W},\kappa,\kappa^+)$ holds. Second:
\begin{proposition}\label{IdkappathenGalvin}
If $\mathscr{U}$ is a $\kappa$-complete ultrafilter over $\kappa$ with $|[\id]_\mathscr{U}|=\kappa$ then $\mathscr{U}$ is Rudin-Keisler equivalent to a normal $\kappa$-complete ultrafilter.
In particular, under the above conditions, $\mathrm{Gal}(\mathscr{U},\kappa,\kappa^+)$ holds.
\end{proposition}
\begin{proof}
Let $\mathscr{U}_0$ denote the normal measure generated from $j:=j_\mathscr{U}$ and $\kappa$.
For each $\lambda<\kappa^+$ there is $f_\lambda\colon \kappa\rightarrow \kappa$ such that $j(f_\lambda)(\kappa)=\lambda$. We prove this by induction on $\lambda$. For $\lambda<\kappa$ take the constant function with value $\lambda$, for $\lambda=\kappa$ take the identity, and at successor steps let $f_{\lambda+1}(\alpha):=f_\lambda(\alpha)+1$. For the limit step, suppose that ${\langle} f_\alpha\mid \alpha<\lambda<\kappa^+{\rangle}$ are defined and let ${\langle} \lambda_i\mid i<{\rm cf}(\lambda){\rangle}$ be cofinal in $\lambda$. Define $f_\lambda\colon \kappa\rightarrow \kappa$ as follows: $$f_\lambda(\alpha):=\sup_{i<\alpha}f_{\lambda_i}(\alpha).$$
Note that $f_\lambda\colon \kappa\rightarrow\kappa$ due to the regularity of $\kappa$. Next, put $$j(\langle f_\beta\mid \beta<\lambda\rangle):=\langle f'_\beta\mid \beta<j(\lambda)\rangle, \ j({\langle} \lambda_i\mid i<{\rm cf}(\lambda){\rangle}):={\langle} \lambda'_i\mid i<j({\rm cf}(\lambda)){\rangle}.$$ Observe that $f'_{j(\alpha)}=j(f_\alpha)$ and $\lambda'_{j(\alpha)}=j(\lambda_\alpha)$. In particular, $f'_\alpha=j(f_\alpha)$ and $\lambda'_\alpha=j(\lambda_\alpha)$ for every $\alpha<\kappa$. Hence, $$j(f_\lambda)(\kappa)=\sup_{i<\kappa}f'_{\lambda'_i}(\kappa)=\sup_{i<\kappa}f'_{j(\lambda_i)}(\kappa)=\sup_{i<\kappa}j(f_{\lambda_i})(\kappa)=\sup_{i<\kappa}\lambda_i=\lambda.$$
Thus, there is $f\colon \kappa\rightarrow \kappa$ such that $j(f)(\kappa)=[\id]_\mathscr{U}$, so that $\mathscr{U}\leq_{\mathrm{RK}} \mathscr{U}_0$. Also, it is well-known that normal ultrafilters are $\leq_{\mathrm{RK}}$-minimal (see e.g. \cite[Proposition 2.6]{TomTreePrikry}), hence $\mathscr{U}\equiv_{\mathrm{RK}}\mathscr{U}_0$. For the ``in particular'' clause use our comments prior to the statement of the proposition.
\end{proof}
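Incidentally, the first of the two observations preceding Proposition~\ref{IdkappathenGalvin} admits a short verification. Suppose, say, that $\mathscr{W}=h_*(\mathscr{U})$ for some $h\colon\kappa\rightarrow\kappa$, i.e. $X\in\mathscr{W}$ iff $h^{-1}[X]\in\mathscr{U}$, and that $\mathrm{Gal}(\mathscr{U},\kappa,\kappa^+)$ holds. Given $\langle C_\gamma\mid\gamma<\kappa^+\rangle\subseteq\mathscr{W}$, apply $\mathrm{Gal}(\mathscr{U},\kappa,\kappa^+)$ to the family $\langle h^{-1}[C_\gamma]\mid\gamma<\kappa^+\rangle\subseteq\mathscr{U}$ to find indices $\langle\gamma_i\mid i<\kappa\rangle$ with
$$h^{-1}\Big[\bigcap_{i<\kappa}C_{\gamma_i}\Big]\,=\,\bigcap_{i<\kappa}h^{-1}[C_{\gamma_i}]\,\in\,\mathscr{U},$$
whence $\bigcap_{i<\kappa}C_{\gamma_i}\in\mathscr{W}$. Since $\equiv_{\mathrm{RK}}$ provides such projections in both directions, the equivalence follows.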
\begin{theorem}\label{MakingEverythingNormal}
Assume that the $\mathsf{GCH}$ holds and that $\kappa$ is a huge cardinal.
Then, there is an inaccessible cardinal $\mu>\kappa$ and a generic extension of $V_\mu$
where the following hold:
\begin{enumerate}
\item $\kappa$ is supercompact;
\item Every $\kappa$-complete {ultrafilter} $\mathscr{U}\in V$ extends to a $\kappa$-complete ultrafilter $\mathscr{U}^*$ that is Rudin-Keisler equivalent to a normal ultrafilter. In particular, $\mathrm{Gal}(\mathscr{U}^*,\kappa,\kappa^+)$ holds.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $j\colon V\rightarrow M$ be an elementary embedding witnessing that $\kappa$ is huge; namely, $\crit(j)=\kappa$ and $M^{j(\kappa)}\subseteq M$. Fix $\langle \mathscr{U}_\alpha\mid \alpha<2^{2^\kappa}\rangle$ an injective enumeration of the $\kappa$-complete ultrafilters over $\kappa$. For each $\alpha<2^{2^\kappa}$ note that $j``\mathscr{U}_\alpha\in M$ and that $j``\mathscr{U}_\alpha\in[j(\mathscr{U}_\alpha)]^{<j(\kappa)}$ hence, by $j(\kappa)$-completeness of $j(\mathscr{U}_\alpha)$ in $M$, we can find $\epsilon_\alpha\in \bigcap j``\mathscr{U}_\alpha.$ Clearly, $\epsilon_\alpha<j(\kappa)$ and
$$\mathscr{U}_\alpha\subseteq\mathscr{U}^*_{\alpha,0}:= \{X\subseteq \kappa\mid \epsilon_\alpha\in j(X)\}.$$
Let $\lambda$ and $\mu$ be, respectively, the first inaccessible cardinals in the intervals $(\sup_{\alpha<2^{2^\kappa}}\epsilon_\alpha, j(\kappa))$ and $(\lambda, j(\kappa))$.\footnote{This choice is possible as $j(\kappa)$ is a limit of inaccessibles.} Next, let $i\colon V\rightarrow N$ be the $\mu$-supercompact embedding derived from $j$: that is, the ultrapower embedding that arises from the measure $\{X\subseteq\mathcal{P}_\kappa(\mu)\mid j``\mu\in j(X)\}$. Let $k\colon N\rightarrow M$ be the factor embedding between $j$ and $i$. Usual arguments show that $\crit(k)>\mu$, hence $\mathscr{U}^*_{\alpha,0}=\{X\subseteq \kappa\mid \epsilon_\alpha\in i(X)\}$ for each $\alpha<2^{2^\kappa}$.
Now, force over $V$ with \emph{Woodin's fast function forcing} $\mathbb{F}_\kappa$. By virtue of Lemma~1.10 in \cite{HamkinsLottery} we have that $i$ lifts to a $\mu$-supercompact embedding $i\colon V[f]\rightarrow M[i(f)]$ such that $i(f)(\kappa)=\mu$. Notice that using the fast function $f\colon \kappa\rightarrow\kappa$ we can easily represent $\lambda$ as well: let $f^*\colon \kappa\rightarrow\kappa$ be defined as $\alpha\mapsto\sup\{\beta<f(\alpha)\mid \text{$\beta$ is inaccessible}\}$ and note that $i(f^*)(\kappa)=\lambda$.\footnote{Here we are implicitly assuming that $N[i(f)]$ is $\mu$-closed, hence it thinks that $\mu$ is inaccessible and that $\lambda$ is the first inaccessible below it.}
Next, over $V[f]$, force with the two-step iteration $\mathbb{C}:=\mathbb{C}_\kappa\ast \lusim{\mathop{\mathrm{Col}}}(\kappa,<\lambda)$ where $\mathbb{C}_\kappa$ is the Easton-supported iteration defined as follows: for $\alpha<\kappa$, the $\alpha^{\mathrm{th}}$-stage of the iteration is trivial unless $\alpha$ is inaccessible, $f``\alpha\subseteq \alpha$ and $\alpha< f^*(\alpha)$, in which case it forces with $\lusim{\mathop{\mathrm{Col}}}(\alpha, {<}f^*(\alpha))$.
\begin{claim}
After forcing with $\mathbb{C}$ the embedding $i\colon V[f]\rightarrow N[i(f)]$ lifts to a $\mu$-supercompact embedding $i^*\colon V[f\ast C]\rightarrow N[i(f\ast C)]$ in $V[f\ast C]$.
\end{claim}
\begin{proof}[Proof of claim]
Denote $\bar{V}:=V[f]$ and $\bar{N}:=N[i(f)]$. Let $C:=C_\kappa\ast c\subseteq \mathbb{C}$ be generic over $\bar{V}$. We can lift the embedding after forcing with $\mathbb{C}_\kappa$ to another $i\colon \bar{V}[C_\kappa]\rightarrow \bar{N}[C_\kappa\ast c\ast h]\subseteq \bar{V}[C]$. There are two points here: first, the $\kappa^{\mathrm{th}}$-stage of the iteration from the perspective of $\bar{N}$ is $\mathop{\mathrm{Col}}(\kappa,{<}\lambda)^{\bar{V}}$;\footnote{Because $i(f)``\kappa\subseteq \kappa$ and $i(f^*)(\kappa)=\lambda>\kappa$.} second, the tail forcing $i(\mathbb{C}_\kappa)/\mathbb{C}$ is trivial in the interval $(\kappa, \mu)$ because $i(f)(\kappa)=\mu$ and so the next closure point of $i(f)$ past $\kappa$ is $\geq (\mu^+)^{\bar{V}}$.
Finally, one can lift $i$ after forcing with $\mathop{\mathrm{Col}}(\kappa,{<}\lambda)_{\bar{V}[C_\kappa]}$ to another embedding $i\colon \bar{V}[C]\rightarrow \bar{N}[C_\kappa\ast c\ast h \ast h']$. For this one uses the fact that $i``c\subseteq \mathop{\mathrm{Col}}(i(\kappa),{<}i(\lambda))_{\bar{N}[C_\kappa\ast c\ast h]}$ is a directed set of conditions in $\bar{N}[C_\kappa\ast c\ast h]$ with $|i``c|<i(\kappa)$, and that $\mathop{\mathrm{Col}}(i(\kappa),{<}i(\lambda))_{\bar{N}[C_\kappa\ast c\ast h]}$ is $\mu$-directed-closed in $\bar{N}[C_\kappa\ast c\ast h]$. Standard arguments show that the resulting embedding witnesses $\mu$-supercompactness of $\kappa$.
\end{proof}
Working in $V[f\ast C]$, for each $\alpha<(2^{2^\kappa})^V$ define $$\mathscr{U}^*_\alpha:=\{X\subseteq\kappa\mid \epsilon_\alpha\in i^*(X)\}.$$
Clearly, $\mathscr{U}^*_\alpha$ is a $\kappa$-complete ultrafilter satisfying $\mathscr{U}_\alpha\subseteq \mathscr{U}^*_\alpha$. The point now is that $|[\id]_{\mathscr{U}^*_\alpha}|^{V[f\ast C]}\leq |\epsilon_\alpha|^{V[f\ast C]}=\kappa$ hence $\mathrm{Gal}(\mathscr{U}^*_\alpha,\kappa,\kappa^+)$ holds in $V[f\ast C]$.
Let $M:=V[f\ast C]_\mu$. This is certainly a model of \textsf{ZFC} because $\mu$ remains inaccessible. Also, $M$ satisfies that $\kappa$ is supercompact. Finally, note that $M=V_\mu[f\ast C]$. Since every $\kappa$-complete ultrafilter $\mathscr{U}$ over $\kappa$ from the ground model actually comes from $V_\mu$ we obtain Clause~(2) of the theorem.
\end{proof}
In the model of Theorem~\ref{MakingEverythingNormal} our target cardinal $\kappa$ cannot be extendible. To make it so, one should perform a class-forcing iteration that is nice enough to carry the previous arguments. This suggests the following question:
\begin{question}
Is the statement of the previous theorem compatible with $\kappa$ being extendible or, more generally, $C^{(n)}$-extendible?
\end{question}
In the proof of Theorem~\ref{MakingEverythingNormal} we showed how to \emph{correct} ultrafilters that do not satisfy Galvin's property. The technique used for this purpose consisted of collapsing the generators of every $V$-ultrafilter to yet another generator of cardinality $\kappa$; namely, the normal generator. In that manner we accomplished our \emph{correction} of non-Galvin ultrafilters by making them essentially \emph{minimal} (to wit, normal) from the Rudin--Keisler perspective. This is certainly too harsh a way to ensure Galvin's property in the final model.
In the light of this one may wonder whether a similar Galvin-like configuration is possible without \emph{trivializing} the relevant ultrafilters. In what follows we show that this is indeed possible.
As a warm-up exercise we begin by describing how to turn a general $\kappa$-complete ultrafilter into a Galvin one using a generalization of \emph{Mathias forcing}. The classical Mathias forcing, dealing with subsets of $\omega$, appeared in \cite{MathiasHappy}, while the version that we will use here follows the template of \cite[Definition 3.1]{GartiShelah964}:
\begin{definition}[Generalized Mathias forcing]\label{PropertiesMathias}
\label{defgeneralmathias}
Let $\kappa$ be a regular cardinal
and $\mathscr{U}$ a non-principal $\kappa$-complete filter over $\kappa$.
The forcing notion $\mathbb{M}_{\mathscr{U}}$ consists of pairs $(a,A)$ such that $a \in [\kappa]^{<\kappa}, A \in \mathscr{U}$ and $\sup(a)<\min(A)$.
For the order, one writes $(a_0, A_0) \leq (a_1, A_1)$ if and only if $a_0 \subseteq a_1, A_0 \supseteq A_1$ and $a_1\setminus a_0 \subseteq A_0$.
\end{definition}
The following is a brief account of the main properties of $\mathbb{M}_{\mathscr{U}}$:
\begin{proposition}[Properties of $\mathbb{M}_{\mathscr{U}}$]\label{PropertiesOfMathias}\hfill
\begin{enumerate}
\item $\mathbb{M}_{\mathscr{U}}$ is $\kappa^+$-centered, provided $\kappa^{<\kappa}=\kappa$;
\item $\mathbb{M}_{\mathscr{U}}$ is $\kappa$-directed-closed;
\item $\mathbb{M}_{\mathscr{U}}$ is countably parallel closed;\footnote{I.e., every two decreasing sequences of conditions $\langle p_n\mid n<\omega\rangle$, $\langle q_n\mid n<\omega\rangle$ with $p_n\parallel q_n$ admit an upper bound. See \cite{AFramework}.}
\item If $G\subseteq \mathbb{M}_{\mathscr{U}}$ is a $V$-generic filter then $V[G]=V[a_G]$, where
$$a_G:=\bigcup\{a\mid \exists A\in\mathscr{U}\,((a,A)\in G)\};$$
\item The set $a_G$ \emph{diagonalizes} $\mathscr{U}$: i.e., $a_G\subseteq^* A$ for every $A\in \mathscr{U}$.
\end{enumerate}
In particular, if $\mathscr{U}$ is an ultrafilter then either $a_G\subseteq^* A$ or
$a_G\subseteq^* \kappa\setminus A$ for all $A\in \mathcal{P}(\kappa)^V$. \end{proposition}
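For the reader's convenience, let us sketch the standard density argument behind Clause~(5). Given $B\in\mathscr{U}$, the set
$$D_B:=\{(a,A)\in\mathbb{M}_{\mathscr{U}}\mid A\subseteq B\}$$
is dense: every $(a,A)$ is extended by $(a,A\cap B)$, which belongs to $D_B$. If $(a,A)\in G\cap D_B$ then, by compatibility within $G$ and the definition of the order, every element of $a_G\setminus a$ lies in $A\subseteq B$; hence $a_G\setminus a\subseteq B$ and, since $a$ is bounded in $\kappa$, $a_G\subseteq^* B$.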
The next proposition describes how to turn a $\kappa$-complete filter into one satisfying Galvin's property by means of $\mathbb{M}_\mathscr{U}$:
\begin{proposition}\label{MathiasandBase}
Let $\kappa$ be a regular cardinal, $\mathscr{U}$ a $\kappa$-complete filter over $\kappa$ and $G\subseteq \mathbb{M}_\mathscr{U}$ a generic filter. Then the following hold in $V[G]$:
\begin{enumerate}
\item $\mathscr{U}^*:=\{A\subseteq\kappa \mid a_G\subseteq^* A\}$ is a $\kappa$-complete filter;
\item $\mathscr{U}\subseteq \mathscr{U}^*$;
\item $\mathrm{Gal}(\mathscr{U}^*,\kappa,\kappa^+)$ holds.
\end{enumerate}
\end{proposition}
\begin{proof}
$\mathscr{U}^*$ is clearly a filter and $\mathscr{U}\subseteq \mathscr{U}^*$ by virtue of Proposition~\ref{PropertiesOfMathias}(5). The argument for $\kappa$-completeness of $\mathscr{U}^*$ is essentially the same as the one for Clause~(3): let ${\langle} A_\alpha \mid \alpha <\kappa^+{\rangle}\subseteq \mathscr{U}^*$ and find $I\in [\kappa^+]^{\kappa^+}$ and $\alpha^*<\kappa$ such that $a_G\setminus \alpha^*\subseteq A_\alpha$ for all $\alpha\in I$. Thus, $\bigcap_{\alpha\in I} A_\alpha\in\mathscr{U}^*$.
\end{proof}
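Spelled out, the common combinatorial step is a pigeonhole argument: for each $\alpha<\kappa^+$ there is $\epsilon_\alpha<\kappa$ with $a_G\setminus\epsilon_\alpha\subseteq A_\alpha$; since $\kappa^+>\kappa$, there are $I\in[\kappa^+]^{\kappa^+}$ and $\alpha^*<\kappa$ such that $\epsilon_\alpha=\alpha^*$ for all $\alpha\in I$, whence $a_G\setminus\alpha^*\subseteq\bigcap_{\alpha\in I}A_\alpha$. For $\kappa$-completeness one instead intersects $\mu<\kappa$ many sets and, by the regularity of $\kappa$, takes $\alpha^*:=\sup_{i<\mu}\epsilon_i<\kappa$.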
The above argument repeats the one from \cite[Proposition 4.5]{bgp}: the point is that $\mathbb{M}_\mathscr{U}$ creates a \emph{generating sequence} of length $1$.
\begin{definition}
A family $\mathcal{A}=\langle x_\alpha\mid \alpha<\lambda\rangle\subseteq \mathscr{U}$ is a \emph{generating sequence for $\mathscr{U}$} if for every $A\in \mathscr{U}$ there is $\alpha<\lambda$ such that $x_\alpha\subseteq^* A.$ In addition, $\mathcal{A}$ is called a \emph{strong generating sequence} if it is $\subseteq^*$-decreasing.
\end{definition}
As demonstrated in \cite[\S4]{bgp}, the analysis of (strong) generating sequences provides an effective way to produce certain Galvin-like configurations. The main obstacle, however, is to ensure that the original $\kappa$-complete ultrafilter $\mathscr{U}$ extends to yet another $\kappa$-complete ultrafilter. This will eventually be addressed in Theorem~\ref{GitikShelahTheorem}.
\smallskip
Our next goal will be to iterate Mathias forcing over a given filter (and its extensions along the way) so that it generates a $\kappa$-complete ultrafilter with a strong generating sequence of arbitrary length. This idea traces back to Kunen, who employed it to separate the ultrafilter number $\mathfrak{u}$ from $2^{\omega}$ (see \cite[Ch.~VII, Question~(A10)]{Kunen1980}). A similar argument, yet involving a more complex iteration, was considered in \cite{BrookeTaylor2017CardinalCA}, where the authors separate $\mathfrak{u}(\kappa)$ and $2^{\kappa}$ in a context where $\kappa$ is supercompact.
The na\"{i}ve approach would consist of iterating Mathias forcing over and over with $\kappa$-complete filters. Unfortunately, this strategy is doomed to failure, and so additional structure on the forcing is required. Let us illustrate where the problem arises. Suppose that $x_0$ is a Mathias set for a $\kappa$-complete filter $\mathscr{U}$. Working in the generic extension $V[x_0]$, let $\mathscr{U}_0$ be a $\kappa$-complete filter extending $\{x_0\}\cup\mathscr{U}$ (e.g.,
by Proposition~\ref{MathiasandBase} we can take $\{X\in \mathcal{P}(\kappa)^{V[x_0]}\mid x_0\subseteq^*X\}$). Next, over $V[x_0]$, force a Mathias set $x_1$ through $\mathscr{U}_0$ and
working over the resulting extension $V[x_0,x_1]$ let $\mathscr{U}_1$ be a $\kappa$-complete filter extending $\{x_1\}\cup\mathscr{U}_0$. One can proceed in this fashion $\omega$-many times. Formally speaking, this is forced by the following full-support iteration ${\langle} \mathbb{P}_n,\lusim{\mathbb{Q}}_n\mid n<\omega{\rangle}$: for each $n\geq 1$, $\lusim{\mathbb{Q}}_n$ is a $\mathbb{P}_n$-name for $\mathbb{M}_{\lusim{\mathscr{U}}_{n}}$ where $\lusim{\mathscr{U}}_n$ is a $\mathbb{P}_n$-name for a $\kappa$-complete ultrafilter extending $\{\lusim{x}_{n}\}\cup\lusim{\mathscr{U}}_{n-1}$.
An essential obstacle arises at stage $\omega+1$. Here one needs to find a $\kappa$-complete filter which includes all the Mathias sets $\langle x_n\mid n<\omega\rangle$ constructed so far. However, notice that $\mathscr{W}:=\{X\subseteq\kappa\mid \exists n<\omega\, (x_n\subseteq^* X)\}$ (i.e., the filter generated by the Mathias sets) is not $\sigma$-complete: if $\bigcap_{n<\omega}x_n\in\mathscr{W}$ then there would be some $n^*<\omega$ such that $x_{n^*}\subseteq^* \bigcap_{n<\omega} x_n$, hence $x_{n^*}$ would be $\subseteq^*$-included in $x_{n^*+1}$. The latter is impossible, in that $x_{n^*+1}$ is a Mathias set for a filter including $x_{n^*}$.
\smallskip
For the moment, and as a warm-up for Theorem~\ref{GitikShelahTheorem}, we show how to produce $\kappa$-complete filters with arbitrarily long strong generating sequences using Mathias forcing.
\begin{theorem}\label{ExtendFilters}
\label{propsupercom} Let $\mathscr{U}$ be a $\kappa$-complete filter over a Mahlo cardinal $\kappa$.
Then, for every $\lambda\in \mathrm{Ord}$ there is a $\kappa$-directed-closed and $\kappa^+$-cc poset $\mathbb{P}(\lambda)$ forcing that $\mathscr{U}$ can be extended to a $\kappa$-complete filter $\mathscr{U}^*$ with a strong generating sequence $\langle x_\alpha\mid \alpha<\lambda\rangle$.
\end{theorem}
\begin{proof}
Let $\mathscr{U}$ and $\lambda$ be as above. Define a ${<}\kappa$-supported iteration $\mathbb{P}(\lambda)$, $\langle\mathbb{P}_\alpha,\lusim{\mathbb{Q}}_\beta\mid \beta<\alpha\leq\lambda\rangle$, as follows.
Suppose that $\mathbb{P}_\alpha$ is defined for $\alpha<\lambda$. In $V^{\mathbb{P}_\alpha}$ we will define a filter $\mathscr{U}^*_\alpha$ over $\kappa$ which we will prove to be $\kappa$-complete. Bearing this in mind, we shall let $\lusim{\mathbb{Q}}_\alpha$ be a $\mathbb{P}_\alpha$-name for $\mathbb{M}_{\mathscr{U}^*_\alpha}$ and denote by $x_\alpha:=a_{G_{\mathbb{Q}_\alpha}}$ the generic Mathias set added after forcing with $\lusim{\mathbb{Q}}_\alpha$.
For $\alpha=0$, we let $\mathscr{U}^*_0:=\mathscr{U}$. At a successor stage $\alpha+1$, in $V^{\mathbb{P}_{\alpha+1}}$ we have $x_\alpha$, and we let $\lusim{\mathscr{U}}^*_{\alpha+1}$ be the $\mathbb{P}_{\alpha+1}$-name for the filter generated by $\lusim{x}_\alpha$. By Proposition~\ref{MathiasandBase}, we have that $0_{\mathbb{P}_{\alpha+1}}\Vdash\lusim{\mathscr{U}}^*_\alpha\subseteq\lusim{\mathscr{U}}^*_{\alpha+1}$ and that $\lusim{\mathscr{U}}^*_{\alpha+1}$ is $\kappa$-complete. As for the limit stages, let us split into cases.
\begin{claim}
Suppose that $\alpha<\lambda$ is a limit ordinal with $\mathrm{cf}(\alpha)\geq\kappa$. Then $\langle x_\beta\mid \beta<\alpha\rangle$ generates a $\kappa$-complete filter $\mathscr{U}^*_\alpha$ in $V^{\mathbb{P}_\alpha}$ that extends $\mathscr{U}^*_{\beta}$ for every $\beta<\alpha$.
\end{claim}
\begin{proof}
Let $\{X_i\mid i<\mu<\kappa\}\subseteq \mathscr{U}^*_\alpha$. Since $\mathscr{U}^*_\alpha$ is generated by $\langle x_\beta\mid\beta<\alpha\rangle$, for every $i<\mu$ there is $\beta_i<\alpha$ such that $x_{\beta_i}\subseteq^* X_i$. Since the cofinality of $\alpha$ is at least $\kappa$, $\beta^*:=\sup_{i<\mu}\beta_i<\alpha$, hence $x_{\beta^*}\subseteq^* x_{\beta_i}\subseteq^* X_i$ for every $i<\mu$. It follows that for some $\epsilon_i<\kappa$, $x_{\beta^*}\setminus\epsilon_i\subseteq X_i$. Taking $\epsilon^*:=\sup_{i<\mu}\epsilon_i<\kappa$ we get $x_{\beta^*}\setminus\epsilon^*\subseteq\bigcap_{i<\mu}X_i$, so by definition $\bigcap_{i<\mu}X_i\in \mathscr{U}^*_\alpha$. Also, for every $\beta<\alpha$, $\mathscr{U}^*_{\beta}\subseteq\mathscr{U}^*_{\beta+1}$ and $\mathscr{U}^*_{\beta+1}$ is by definition the filter generated by $x_\beta$, which is clearly a subset of $\mathscr{U}^*_\alpha$.
\end{proof}
\begin{claim}
Suppose that $\alpha<\lambda$ is a limit ordinal with $\mathrm{cf}(\alpha)<\kappa$, and let $$\alpha=\kappa^{\delta_1}\gamma_1+\cdots+\kappa^{\delta_n}\gamma_n$$ be the Cantor normal form of $\alpha$. Consider the following cofinal subset of $\alpha$: $I_\alpha:=\{\kappa^{\delta_1}\gamma_1+\cdots+\kappa^{\delta_n}\gamma\mid \gamma<\gamma_n\}$. Then $x:=\bigcap_{i\in I_\alpha}x_i\in V^{\mathbb{P}_\alpha}$ is unbounded in $\kappa$. In particular, $x\subseteq^* x_\beta$ for every $\beta<\alpha$, and the filter $\mathscr{U}^*_\alpha$ generated by $x$ is a $\kappa$-complete filter in $V^{\mathbb{P}_\alpha}$ which extends $\mathscr{U}^*_\beta$ for every $\beta<\alpha$.
\end{claim}
\begin{proof}
Let $p\in\mathbb{P}_\alpha$ and $\alpha_0<\kappa$. We shall proceed with a density argument to prove that there are $\alpha_0<\gamma^*<\kappa$ and $p\leq p_{fin}$ such that $p_{fin}\Vdash \gamma^*\in\bigcap_{i\in I_{\alpha}}\lusim{x}_i$. Construct two sequences ${\langle} M_\rho\mid \rho<\kappa{\rangle}$ and ${\langle} q_\rho\mid \rho<\kappa{\rangle}$ such that:
\begin{enumerate}
\item $M_\rho\prec H(\theta)$ for $\theta$ large enough.
\item The sequence ${\langle} M_\rho\mid\rho<\kappa{\rangle}$ is $\subseteq$-increasing and continuous.
\item $|M_\rho|=:\gamma_\rho<\kappa$, where $\gamma_0$ and each $\gamma_{\rho+1}$ are regular.
\item If $\gamma_\rho$ is regular then $M_\rho^{<{\gamma_\rho}}\subseteq M_\rho$.
\item $\alpha,p,\mathbb{P}_\alpha,\kappa\in M_\rho$.
\item If $\beta\in M_{\rho}\cap (\alpha+1)$ has cofinality less than $\kappa$, then $\mathrm{cf}(\beta)\cup I_{\beta}\subseteq M_{\rho+1}$.
\end{enumerate}
For the conditions $q_\rho$ we require that:
\begin{enumerate}
\item The sequence ${\langle} q_\rho\mid\rho<\kappa{\rangle}$ is increasing and continuous.\footnote{I.e., for limit $\rho$ we let $q_\rho(\gamma)={\langle} \bigcup_{\rho'<\rho}\lusim{a}^{q_{\rho'}}_\gamma,\bigcap_{\rho'<\rho}\lusim{A}^{q_{\rho'}}_\gamma{\rangle}$.}
\item $q_\rho$ is $M_\rho$-generic for $\mathbb{P}_\alpha$; namely, for every dense open $D\subseteq\mathbb{P}_\alpha$ with $D\in M_\rho$, $q_{\rho}\in D$.
\item $\mathrm{supp}(q_{\rho})=M_\rho\cap\alpha$.
\item $q_{\rho}\in M_{\rho+1}$.
\end{enumerate}
For this construction we need that $\kappa$ is Mahlo. Such conditions exist by the ${<}\kappa$-closure of $\mathbb{P}_\alpha$ and the standard construction of an increasing sequence of conditions meeting the dense sets of $M_\rho$. Let $\mu^*<\kappa$ be regular such that $|M_{\mu^*}|=\mu^*$. Denote $M^*:=M_{\mu^*}$ and $p^*:=q_{\mu^*}=\sup_{i<\mu^*} q_{i}$.
\begin{claim} For every $\beta\in \mathrm{supp}(p^*)$ the following hold:
\begin{enumerate}
\item There is $\delta_{\beta}$ such that $p^*\restriction \beta\Vdash \lusim{a}^{p^*}_\beta\subseteq\delta_\beta$.
\item If $\beta=\beta_0+1$ is successor, then there is $\epsilon_\beta$ such that $p^*\restriction \beta\Vdash \lusim{x}_{\beta_0}\setminus \epsilon_\beta\subseteq \lusim{A}^{p^*}_\beta$.
\item If $\beta\in \mathrm{supp}(p^*)$ is of cofinality less than $\kappa$, then there is $\epsilon_{\beta}$ such that $p^*\restriction \beta\Vdash \bigcap_{i\in I_\beta}\lusim{x}_i\setminus \epsilon_\beta\subseteq \lusim{A}^{p^*}_\beta$.
\item If $\beta\in \mathrm{supp}(p^*)$ is of cofinality at least $\kappa$, then there are $i_{\beta}<\beta$
and $\epsilon_\beta$ such that $p^*\restriction \beta\Vdash \lusim{x}_{i_{\beta}}\setminus\epsilon_\beta\subseteq\lusim{A}^{p^*}_\beta$.
\end{enumerate}
\end{claim}
\begin{proof}
To see $(1)$: since $\beta\in \mathrm{supp}(p^*)=M^*\cap\alpha$, find $\xi_0<\mu^*$ such that $\beta\in M_{\xi_0}\cap\alpha$. In $M_{\xi_0+1}$ we can define the dense open set
$$D_{\xi_0}:=\{q\in\mathbb{P}_{\alpha}\mid \exists \delta<\kappa\; (q\restriction \beta\Vdash \lusim{a}^{q_{\xi_0}}_\beta\subseteq\delta)\}.$$
Since $q_{\xi_0+1}$ is $M_{\xi_0+1}$-generic, $q_{\xi_0+1}\in D_{\xi_0}$. For every $\xi_0\leq i<\mu^*$ we have that $q_i\in M_{i+1}$, hence we can define in $M_{i+1}$ the dense open set $$D_{i+1}:=\{q\in\mathbb{P}_{\alpha}\mid \exists \delta<\kappa\; (q\restriction \beta\Vdash \lusim{a}^{q_{i}}_\beta\subseteq\delta)\}.$$ By genericity, $q_{i+1}\in D_{i+1}$. For every such $i$, pick $\delta^{(i)}$ witnessing $q_{i+1}\in D_{i+1}$. Let $\delta_\beta:=\sup_{i<\mu^*}\delta^{(i)}<\kappa$; by continuity, $p^*\restriction \beta\Vdash \lusim{a}^{p^*}_\beta\subseteq \delta_\beta$.
The proofs of $(2)$ and $(3)$ are similar to that of $(1)$. Just note that, by definition, $\mathscr{U}^*_\beta$ is the filter generated by $\lusim{x}_{\beta_0}$, and replace the dense set $D_{i+1}$ by
$$E_{i+1}:=\{q\in\mathbb{P}_\alpha\mid \exists\epsilon\; (q\restriction \beta\Vdash \lusim{x}_{\beta_0}\setminus\epsilon\subseteq \lusim{A}^{q_i}_\beta)\}.$$
Finally, to see $(4)$, follow a similar path, defining
$$F_{i+1}:=\{q\in\mathbb{P}_\alpha\mid \exists j_\beta<\beta\,\exists\epsilon\; (q\restriction \beta\Vdash \lusim{x}_{j_\beta}\setminus\epsilon\subseteq \lusim{A}^{q_i}_\beta)\}.$$
Pick, for every $i<\mu^*$, $j_{\beta,i}\in M^*\cap\beta$ witnessing $q_{i+1}\in F_{i+1}$; since the cofinality of $\beta$ is at least $\kappa$, we can take the supremum of the $j_{\beta,i}$'s to find a single $i_{\beta}$. Now choose the $\epsilon$'s as before. Since the $j_{\beta,i}$'s belong to $M^*$ and are unbounded in $i_{\beta}$, there is $i_0<\mu^*$ such that for every $i_0\leq i<\mu^*$ the Cantor normal form of $j_{\beta,i}$ is a continuation of that of $i_\beta$. Hence the $\delta_i$'s belong to $M^*$.
\end{proof}
By $(1)$, for every $\beta\in \mathrm{supp}(p^*)$ we have $\delta_{\beta}<\kappa$; pick $\delta^*:=\sup_{\beta\in\mathrm{supp}(p^*)}\delta_\beta$. By $(2)$, $(3)$ and $(4)$
we choose the $\epsilon_\beta$'s and set $\epsilon^*:=\sup_{\beta\in\mathrm{supp}(p^*)}\epsilon_\beta<\kappa$. Pick $\gamma^*\in A^{p^*}_0$ above $\epsilon^*,\delta^*,\alpha_0$ and define $p_{fin}$ as follows. Its support is $\mathrm{supp}(p^*)\cup\{i_\beta\mid \beta\in \mathrm{supp}(p^*),\,\mathrm{cf}(\beta)\geq\kappa\}$, and we set
$$p_{fin}(\gamma)=\begin{cases}{\langle}\lusim{a}^{p^*}_\gamma\cup\{\gamma^*\},\lusim{A}^{p^*}_\gamma\setminus(\gamma^*+1){\rangle} & \gamma\in \mathrm{supp}(p^*)\\ {\langle} \{\gamma^*\},\kappa\setminus(\gamma^*+1){\rangle} &\text{else}\end{cases}$$
Clearly $p_{fin}\Vdash \gamma^*\in \cap_{i\in I_\alpha}\lusim{x}_i$. It remains to argue that $p_{fin}$ is an extension of $p^*$:
Indeed, $p_{fin}(0)\geq p^*(0)$ since $\gamma^*\in \lusim{A}^{p^*}_0$ and $\gamma^*>\delta_0\geq\sup(\lusim{a}^{p^*}_0)$. Suppose that $\beta\in \mathrm{supp}(p^*)$ and that $p_{fin}\restriction \beta\geq p^*\restriction \beta$. If $\beta=\beta_0+1$ is a successor then $p_{fin}\restriction\beta\Vdash \gamma^*\in \lusim{x}_{\beta_0}\setminus\epsilon^*\subseteq \lusim{A}^{p^*}_\beta$, hence $p_{fin}\restriction\beta\Vdash p_{fin}(\beta)\geq p^*(\beta)$. If $\beta$ is a limit of cofinality less than $\kappa$, then since $\beta\in M^*$ we have $I_{\beta}\subseteq M^*\cap\beta$, hence by induction
$p_{fin}\restriction\beta\Vdash \gamma^*\in\bigcap_{i\in I_{\beta}}\lusim{x}_{i}\setminus\epsilon^*\subseteq\lusim{A}^{p^*}_\beta$. Finally, if the cofinality of $\beta$ is at least $\kappa$, then there is $i_\beta<\beta$ such that $p_{fin}\restriction\beta\Vdash \gamma^*\in \lusim{x}_{i_\beta}\setminus\epsilon^*\subseteq \lusim{A}^{p^*}_\beta$.
\end{proof}
This completes the proof of the theorem.
\end{proof}
Let us briefly describe a natural, yet unfruitful, strategy to make $\mathscr{U}^*$ become a $\kappa$-complete ultrafilter.
Given a Laver-indestructible supercompact cardinal $\kappa$ and $\alpha<\lambda$
force over $V^{\mathbb{P}_\alpha}$ with $\mathbb{M}_{\mathscr{V}_\alpha}$, where $\mathscr{V}_\alpha$ is an extension of $\mathscr{U}^*_{\alpha}$ (in $V^{\mathbb{P}_\alpha}$) to a $\kappa$-complete ultrafilter. Note that since $\mathbb{P}_\alpha$ is $\kappa$-directed-closed, $\kappa$ is still supercompact in the corresponding extension and thus the choice of $\mathscr{V}_\alpha$ is available.
In addition, if the length of the iteration is $\lambda=\kappa^+$ then the union of the (tower of) $\kappa$-complete ultrafilters generated along the way will also be an ultrafilter, $\mathscr{U}_\infty$. The problem with this approach is that we lose control over the $\kappa$-completeness of $\mathscr{U}_\infty$. Indeed, even the union of the first $\omega$-many ultrafilters generated
might not be $\kappa$-complete, as we argued in the discussion preceding Theorem~\ref{ExtendFilters}.
\smallskip
The approach of Theorem~\ref{GitikShelahTheorem} is to iterate $\mathbb{M}_\mathscr{U}$ more carefully, so that we have complete control over the completeness of the ultrafilter $\mathscr{U}_\infty$.
\begin{definition}
For $f\colon \kappa\rightarrow\kappa$ and an ultrafilter $\mathscr{U}$ over $\kappa$ we say that \emph{$f$ is constant $\mathrm{mod}(\mathscr{U})$} if there is $\gamma<\kappa$ such that $f^{-1}\{\gamma\}\in\mathscr{U}$. Similarly,
\emph{$f$ is $1$-$1$ $\mathrm{mod}(\mathscr{U})$} if there is $X\in \mathscr{U}$ such that $|f^{-1}\{\gamma\}\cap X|<\kappa$ for every $\gamma<\kappa$.
\end{definition}
\begin{definition}
A $\kappa$-complete ultrafilter $\mathscr{U}$ is called a \emph{$P$-point} if every function $f:\kappa\rightarrow\kappa$ that is not constant $\mathrm{mod}(\mathscr{U})$ is
$1$-$1$ $\mathrm{mod}(\mathscr{U})$.
\end{definition}
\begin{remark}
$\mathscr{U}$ is a $P$-point if and only if every sequence $\langle X_\alpha\mid \alpha<\kappa\rangle$ of elements in $\mathscr{U}$ has a pseudo intersection; namely, there is $X\in \mathscr{U}$ such that $X\subseteq^* X_\alpha$ for every $\alpha<\kappa$.
\end{remark}
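To illustrate, the right-to-left direction of the remark is immediate: suppose that every $\kappa$-sequence of elements of $\mathscr{U}$ has a pseudo intersection and let $f\colon\kappa\rightarrow\kappa$ be non-constant $\mathrm{mod}(\mathscr{U})$. Since $\mathscr{U}$ is an ultrafilter, $X_\gamma:=\kappa\setminus f^{-1}\{\gamma\}\in\mathscr{U}$ for every $\gamma<\kappa$. If $X\in\mathscr{U}$ is a pseudo intersection of ${\langle} X_\gamma\mid\gamma<\kappa{\rangle}$ then $X\subseteq^* X_\gamma$, i.e.\ $|f^{-1}\{\gamma\}\cap X|<\kappa$, for every $\gamma<\kappa$; that is, $f$ is $1$-$1$ $\mathrm{mod}(\mathscr{U})$.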
\begin{lemma}\label{P point is Galvin}
Every $\kappa$-complete ultrafilter $\mathscr{U}$ with a strong generating sequence of length $\kappa^+$ is a $P$-point. In particular, $\mathrm{Gal}(\mathscr{U},\kappa,\kappa^+)$ holds.
\end{lemma}
\begin{proof}
Let $\mathcal{A}={\langle} A_\alpha\mid\alpha<\kappa^+{\rangle}$ be a strong generating sequence for $\mathscr{U}$. To see that $\mathscr{U}$ is a $P$-point, suppose that ${\langle} X_\alpha\mid \alpha<\kappa{\rangle}$ is any $\kappa$-sequence of members of $\mathscr{U}$. By the definition of generating sequence, for each $\alpha<\kappa$ there is $\beta_\alpha<\kappa^+$ such that $A_{\beta_\alpha}\subseteq^* X_\alpha$. Consider $\beta^*:=\sup_{\alpha<\kappa}\beta_\alpha<\kappa^+$. Since $\mathcal{A}$ is $\subseteq^*$-decreasing, $A_{\beta^*}$ is a pseudo intersection of the sequence ${\langle} X_\alpha\mid \alpha<\kappa{\rangle}$. For the ``in particular'' part, use that every $P$-point ultrafilter $\mathscr{U}$ satisfies $\mathrm{Gal}(\mathscr{U},\kappa,\kappa^+)$ (see \cite[Proposition~5.13]{MR4393795}).
\end{proof}
In analogy to Theorem~\ref{MakingEverythingNormal}, we next show that every ground model $\kappa$-complete ultrafilter can be extended to a \emph{well-behaved} one: to wit, a $P$-point. By virtue of the above lemma, this gives an alternative (and less severe) way to transmute an arbitrary $\kappa$-complete ultrafilter into a Galvin one.
Unlike Theorem~\ref{MakingEverythingNormal}, the models we produce this time have the extra feature that $2^\kappa$ can be made arbitrarily large. Our construction owes much to previous work of Gitik and Shelah \cite{MR1632081}.
Recall that $\kappa$ is \emph{almost huge with target $\lambda$} if there is $j\colon V\rightarrow M$ such that $\crit(j)=\kappa$, $j(\kappa)=\lambda$ and $M^{<\lambda}\subseteq M$.
\begin{theorem}\label{GitikShelahTheorem}
Assume that the $\mathsf{GCH}$ holds and suppose that $\kappa$ is an \linebreak almost huge cardinal with measurable target $\lambda$. Then for every $\delta<\lambda$ there is a forcing extension $V[G_\delta]$ where
$2^\kappa=\delta$ and every ground model $\kappa$-complete ultrafilter extends to a $P$-point ultrafilter in $V[G_\delta]$. In addition, $V[G_\delta]_\lambda$ models the same configuration and $\kappa$ is supercompact there.
\end{theorem}
\begin{proof}
Let $j\colon V\rightarrow M$ be such that $\crit(j)=\kappa$, $M^{<\lambda}\subseteq M$ and $j(\kappa)=\lambda$. Recall that $\lambda$ is assumed to be measurable in the ground model, hence we can let $\mathcal{U}$ be a measure on $\lambda$. Let us define an Easton support iteration ${\langle} \mathbb{P}_\alpha, \lusim{\mathbb{Q}}_\beta\mid \beta\leq \kappa,\, \alpha\leq \kappa+1{\rangle}$, where $\lusim{\mathbb{Q}}_\alpha$ is trivial unless $\alpha$ is measurable in $V^{\mathbb{P}_\alpha}$, in which case $\lusim{\mathbb{Q}}_\alpha$ is a $\mathbb{P}_\alpha$-name for the two-step iteration $\lusim{\mathbb{Q}}_{\alpha,0}\ast\lusim{\mathbb{Q}}_{\alpha,1}$ defined as follows: $\lusim{\mathbb{Q}}_{\alpha,0}$ is the atomic forcing choosing some ordinal $F(\alpha)<\kappa$, followed by $\lusim{\mathbb{Q}}_{\alpha,1}$, a ${<}\alpha$-supported iteration $\langle \mathbb{R}^\alpha_{\beta}, \mathbb{S}^\alpha_\gamma\mid \beta\leq F(\alpha),\, \gamma<F(\alpha)\rangle$ defined as follows: at each step $\beta\leq F(\alpha)$, $\mathbb{S}^\alpha_\beta$ is trivial unless $\Vdash_{\mathbb{P}_\alpha\ast\mathbb{R}^\alpha_{\beta}}``\alpha$ is measurable'',
in which case $\mathbb{S}^\alpha_\beta$ is forced to be the ${<}\alpha$-supported product $\prod_{\lusim{\mathscr{V}}}\mathbb{M}_{\lusim{\mathscr{V}}}$, where $\lusim{\mathscr{V}}$ ranges over all $\mathbb{P}_\alpha\ast\mathbb{R}^\alpha_{\beta}$-names for an $\alpha$-complete ultrafilter over $\alpha$.
Since $\mathbb{Q}_{\alpha,1}$ is a ${<}\alpha$-supported iteration of $\alpha^+$-stationary-cc, ${<}\alpha$-closed and countably parallel closed
forcings (cf.~Proposition~\ref{PropertiesOfMathias}), it follows from \cite[Theorem~1.2]{AFramework} that $\mathbb{Q}_{\alpha,1}$ is $\alpha^+$-cc.
Also, $\mathbb{P}_\kappa$ is $\kappa$-cc because $\kappa$ is Mahlo and $\mathbb{P}_\kappa$ is Easton-supported. Let $G_\kappa\subseteq \mathbb{P}_\kappa$ be a $V$-generic filter. Then, due to the closure of $M$ under ${<}\lambda$-sequences and the $\kappa$-cc-ness of $\mathbb{P}_\kappa$, $V[G_\kappa]$ and $M[G_\kappa]$ agree up to $\lambda$ (see, e.g., \cite[Proposition 8.4]{MR2768691}). Write $\mathbb{P}'_{j(\kappa)}:=j(\mathbb{P}_\kappa)$.
\begin{claim}
For every $\rho<\lambda$, $M[G_\kappa\ast \{\rho\}]^{\mathbb{Q}_{1,\kappa}}\models ``\kappa$ is measurable''.
In fact, $\kappa$ is ${<}\lambda$-supercompact in $M[G_\kappa\ast\{\rho\}]^{\mathbb{Q}_{1,\kappa}}$, hence also in \linebreak $V[G_\kappa\ast\{\rho\}]^{\mathbb{Q}_{1,\kappa}}$, and thus $\kappa$ is fully supercompact in $(V[G_\kappa\ast\{\rho\}]^{\mathbb{Q}_{1,\kappa}})_\lambda.$
\end{claim}
\begin{proof}
In the ground model $V$, fix $\theta$ with $\max(\rho,2^{2^{\kappa}})\leq \theta<\lambda$. Let $U$ be a fine normal measure over $P_\kappa(\theta)$ and let $j_U\colon V\rightarrow M$ be the corresponding elementary embedding. Then ${}^{\theta}M\subseteq M$. Put $\mathbb{P}'_{j_U(\kappa)}:=j_U(\mathbb{P}_\kappa)$ and $\mathbb{Q}'_{j_U(\kappa)}:=j_U(\mathbb{Q}_\kappa)$. Let $G(\mathbb{Q}_{1,\kappa})$ be $V[G_\kappa\ast\{\rho\}]$-generic, and let us lift $j_U$ to the model $V[G_{\kappa}\ast \{\rho\}\ast G(\mathbb{Q}_{1,\kappa})]$. Note that in $M[G_{\kappa}\ast \{\rho\}\ast G(\mathbb{Q}_{1,\kappa})]$ we have $2^\kappa\geq \rho$, since at each step of the iteration $\mathbb{Q}_{1,\kappa}$ we add a new subset of $\kappa$. Hence, by closure under $\rho$-sequences, the degree of closure of the forcing $$\mathbb{P}'_{j_{U}(\kappa)}/[G_{\kappa}\ast\{\rho\}\ast G(\mathbb{Q}_{1,\kappa})]$$ is at least $\rho^{+}$, even in $V[G_{\kappa}\ast\{\rho\}\ast G(\mathbb{Q}_{1,\kappa})]$. Also note that every dense set in $\mathbb{P}'_{j_U(\kappa)}/G_{\kappa}\ast \{\rho\}\ast G(\mathbb{Q}_{1,\kappa})$ is represented by a function $f:\mathcal{P}_\kappa(\rho)\rightarrow \mathcal{P}(\mathbb{P}_\kappa)$. Since there are in total $(2^{\kappa})^{\rho}=\rho^+$ such functions, we can construct a master sequence which induces an $M[G_\kappa\ast\{\rho\}\ast G(\mathbb{Q}_{1,\kappa})]$-generic filter $G_{<\lambda}$. Next, the top-most forcing $\mathbb{Q}_{0,\kappa}\ast \mathbb{Q}_{1,\kappa}$ is $\kappa^+$-cc and, when its length is restricted to some $\rho$, its size becomes $\rho\cdot 2^{2^\kappa}$.
Hence every maximal antichain in $M[G_{<\lambda}\ast\{j(\rho)\}]$ for $j(\mathbb{Q}_{1,\kappa})$ is represented by a function $F\colon \mathcal{P}_{\kappa}(\rho)\rightarrow \mathcal{P}_{\kappa}((\mathbb{Q}_{1,\kappa})_{G_{\kappa}\ast\{\rho\}})\in V[G_{\kappa}\ast\{\rho\}]$, and since $(\mathbb{Q}_{1,\kappa})_{G_{\kappa}\ast\{\rho\}}$ is of size $\rho\cdot 2^{2^\kappa}=\theta$ there are $\theta^{\theta}=\theta^+$ many such functions. The remaining argument is standard.
\end{proof}
For each $\rho<\lambda$, $(\mathbb{Q}_{1,\kappa})_{G_\kappa\ast \{\rho\}}$ is $\kappa^{+}$-cc and of size less than $\lambda$. Thus, there are less than $\lambda$-many nice names for subsets of $\kappa$. Denote these names by ${\langle} \lusim{A}^{\rho}_\tau\mid \tau<\theta_\rho{\rangle}$. We can find in $V[G_{\kappa}]$ an enumeration ${\langle} \lusim{A}_{\tau}\mid \tau<\lambda{\rangle}$ of the names for subsets of $\kappa$ such that for every $\tau_1\leq \tau_2$ there are $\delta_1,\delta_2$ with $\lusim{A}_{\tau_1}=\lusim{A}^{\rho(\tau_1)}_{\delta_1}$, $\lusim{A}_{\tau_2}=\lusim{A}^{\rho(\tau_2)}_{\delta_2}$ and $\rho(\tau_1)\leq\rho(\tau_2)$. Let $C$ be the club of closure points of the function $\tau\mapsto\rho(\tau)$. Since $\lambda$ is measurable there is $S\in \mathcal{U}$ concentrating on inaccessible cardinals. Hence there are $\mathcal{U}$-many $\delta\in S$ such that $\rho``\delta\subseteq \delta$. In particular, for each such $\delta$ the sequence ${\langle} \lusim{A}_\tau\mid\tau<\delta{\rangle}$
codes all the $(\mathbb{Q}_{1,\kappa})_{G_\kappa*\{\delta\}}$-names for subsets of $\kappa$.\footnote{Specifically, if $\sigma$ is a $(\mathbb{Q}_{1,\kappa})_{G_\kappa*\{\delta\}}$-name for a subset of $\kappa$ then there is $\tau<\delta$ such that $0\Vdash_{(\mathbb{Q}_{1,\kappa})_{G_\kappa*\{\delta\}}}\sigma=\lusim{A}_\tau.$}
\smallskip
Let $\epsilon<\lambda$ be an ordinal above the generators of all $\kappa$-complete ultrafilters. More precisely, for each $\kappa$-complete ultrafilter $U\in V$ over $\kappa$ let $\varepsilon_U\in \lambda\cap \bigcap j``U$ and define $\epsilon:=\sup_{U}\varepsilon_U$. Note that $\epsilon<\lambda$ as $\lambda$ is inaccessible in $V$.
For every $\delta'<\delta$ and every $(\mathbb{Q}_{1,\kappa})_{G_\kappa\ast\{\delta' \}}$-name (i.e., a $(\mathbb{Q}_{1,\kappa})_{G_\kappa\ast\{\delta\}}\restriction\delta'$-name) $\lusim{U}$ for a $\kappa$-complete ultrafilter over $\kappa$,
let us define $r_{\lusim{U}}\in \mathbb{M}_{j(\lusim{U})}$ as:
$$r_{\lusim{U}}={\langle} a_{\lusim{U}}\cup (A_{\lusim{U}}\cap \epsilon),{A}_{\lusim{U}}\setminus (\epsilon +1){\rangle},$$ where
\begin{itemize}
\item $a_{\lusim{U}}$ is the standard $(\mathbb{Q}_{1,\kappa})_{G_\kappa\ast\{\delta\}}$-name for the Mathias set for $\mathbb{M}_{\lusim{U}}$;
\item $A_{\lusim{U}}$ is a name for the set $\bigcap j``\lusim{U}$.
\end{itemize}
Note that $r_{\lusim{U}}$ is a condition in $\mathbb{M}_{j(\lusim{U})}$: First, the trivial condition of $\mathbb{M}_{j(\lusim{U})}$ forces $``\min A_{\lusim{U}}=\kappa$'', hence $a_{\lusim{U}}\cup (A_{\lusim{U}}\cap \epsilon)$ is a legitimate stem. Second, this condition also forces $j(\lusim{U})$ to be $j(\kappa)$-complete, hence $\bigcap j``\lusim{U}\in j(\lusim{U})$.
The key feature of $r_{\lusim{U}}$ is that $0_{j(\mathbb{M}_{\lusim{U}})}$ forces $a_{j(\lusim{U})}$ to contain all generators $\varepsilon_W$, $W\in V$, that are forced to belong to $A_{\lusim{U}}$.
\smallskip
Let $q_{\delta}$ be the condition in $\mathbb{Q}'_{j_U(\kappa)}$ with support $j``\delta$ (hence of cardinality ${<}j(\kappa)$) defined as follows: for every $\delta'<\delta$ and every $(\mathbb{Q}_{1,\kappa})_{G_\kappa\ast\{\delta' \}}$-name $\lusim{U}$ for a $\kappa$-complete ultrafilter as above,
$$q_{\delta}\restriction j(\delta')\Vdash q_{\delta}(j(\delta'))(j(\lusim{U}))=r_{\lusim{U}};$$
for other coordinates (i.e., names for \emph{ghost ultrafilters}) $\lusim{W}$ we require that $$q_{\delta}\restriction j(\delta')\Vdash \text{$``q_{\delta}(j(\delta'))(\lusim{W})$ is the trivial condition in $\mathbb{M}_{\lusim{W}}$''}.$$
\smallskip
A moment's reflection makes clear that $q_\delta$ is a \emph{master condition} for $G(\mathbb{Q}_{1,\kappa})$: namely, $j(p)\leq q_\delta$ for every $p\in G({\mathbb{Q}_{1,\kappa}}).$
In addition, the conditions $q_\delta$ are defined in a coherent way: namely, if $\rho\leq \delta$ are both in $C$ then $q_\delta\restriction\rho=q_\rho$.
\smallskip
Fix $\rho \in C\setminus(\epsilon+1)$. For every $\tau,\zeta<\rho$
let $\lusim{D}_{\tau,\zeta}\in M[G_{\kappa}\ast\{\rho\}]$ be a $(\mathbb{Q}_{1,\kappa})_{G_\kappa*\{\rho\}}$-name for the following dense open set $$\{p\in\mathbb{P}'_{(\kappa,j_U(\kappa)]} \mid \exists s_\kappa\in\mathbb{Q}_{1,\kappa}\,\exists i\in 2\, ({\langle} s_\kappa,p{\rangle} \Vdash^i_{\mathbb{P}'_{[\kappa,j_U(\kappa)]}} \zeta\in j(\lusim{A}_\tau))\}.$$
Since the trivial condition of $\mathbb{Q}_{1,\kappa}$ forces $\mathbb{P}'_{(\kappa,j_U(\kappa)]}$ to be $\rho^+$-closed it also forces that $\bigcap_{\tau,\zeta}\lusim{D}_{\tau,\zeta}$ is a name for a dense open set. In particular, there is some $(\mathbb{Q}_{1,\kappa})_{G_{\kappa}\ast\{\rho\}}$-name $p_{\rho}$ for a condition in $\bigcap_{\tau,\zeta}\lusim{D}_{\tau,\zeta}$ such that {$p_{\rho}\restriction j(\kappa)\Vdash p_{\rho}(j(\kappa))\geq q_\rho$.} Notice that
$p_\rho$ has the property that for every $\tau,\zeta<\rho$ there is $s\in G_{\kappa}*\{\rho\}$ and $s_\kappa\in(\mathbb{Q}_{1,\kappa})_{G_\kappa\ast\{\rho\}}$ such that $\langle s,s_\kappa,p_\rho\rangle||_{\mathbb{P'}_{[\kappa,j_U(\kappa))}}\zeta\in j(\lusim{A}_\tau)$.
For each $(s,s_\kappa)\in G_\kappa \ast \lusim{\mathbb{Q}}_{\kappa}$, ordinals $\zeta<\epsilon$ and $\tau<\lambda$, and $i\in 2$, define $$A^{i}_{(s,{s}_\kappa,\zeta,\tau)}:=\{\rho<\lambda\mid \langle s, s_\kappa, p_\rho\rangle\Vdash^i\zeta\in j(\lusim{A}_\tau)\}.$$ Let $A^{2}_{(s, {s}_\kappa,\zeta,\tau)}$ be the complement of the union of these two sets. For each such quadruple $(s,s_\kappa,\zeta, \tau)$, let $i_{(s,s_\kappa,\zeta, \tau)}\in 3$ be the unique index $i$ for which $A^{i}_{(s,{s}_\kappa,\zeta,\tau)}\in \mathcal{U}$, the $\lambda$-complete measure on $\lambda$. Now let $$A:=\{\rho<\lambda\mid (\langle s,s_\kappa\rangle \in G\ast \lusim{\mathbb{Q}}_\kappa\restriction\rho\,\wedge\, \max(\zeta,\tau)<\rho)\,\Rightarrow\, \rho\in A^{i_{(s,s_\kappa,\zeta,\tau)}}_{(s,s_\kappa,\zeta,\tau)}\}.$$
We claim that $A\in\mathcal{U}$: Indeed, for every $\langle s,s_\kappa\rangle \in G\ast j(\mathbb{Q}_\kappa)\restriction\lambda=G\ast \mathbb{Q}_\kappa$ and $\zeta,\tau<\lambda$, $A^{i_{(s,s_\kappa,\zeta,\tau)}}_{(s,s_\kappa,\zeta,\tau)}\in\mathcal{U}$. Hence, $\lambda\in j_\mathcal{U}(A^{i_{(s,s_\kappa,\zeta,\tau)}}_{(s,s_\kappa,\zeta,\tau)})$, and thus $\lambda\in j_\mathcal{U}(A)$.
Put $C^*:=A\cap C$. Let $\rho,\rho'\in C^*$ with $\rho<\rho'$, $\langle s,s_\kappa\rangle \in G\ast (\lusim{\mathbb{Q}_{\kappa}}\restriction \rho)$ and $\zeta, \tau<\rho$. By definition of $A$, $\rho,\rho'\in A^{i_{(s,s_\kappa,\zeta,\tau)}}_{(s,s_\kappa,\zeta,\tau)}$, hence
$$
\text{$\langle s, s_\kappa, p_\rho\rangle \Vdash^i \zeta\in j(\lusim{A}_\tau)$ iff $\langle s, s_\kappa, p_{\rho'}\rangle \Vdash^i \zeta\in j(\lusim{A}_\tau)$,}
$$
and also
$$
\text{$\langle s, s_\kappa, p_\rho\rangle \Vdash \zeta\in j(\lusim{A}_\tau)$ iff $\langle s, s_\kappa, p_{\rho'}\rangle \Vdash \zeta\in j(\lusim{A}_\tau).$}
$$
Next, for all $\rho\in C^*$ and $\zeta<\epsilon$ consider
$$\mathscr{U}_{\rho,\zeta}:=\{(\lusim{A}_\tau)_{G_{\kappa}\ast\{\rho\}\ast G(\mathbb{Q}_{\kappa,1})}\subseteq \kappa\mid \exists\langle s,s_\kappa\rangle \in G\ast \{\rho\}\ast G(\mathbb{Q}_{\kappa,1})\, \langle s, s_\kappa,
p_\rho \rangle \Vdash\zeta\in j(\lusim{A}_\tau)\}.$$
Since $\langle \lusim{A}_\tau\mid \tau<\rho\rangle$ is an enumeration of the $(\mathbb{Q}_{\kappa,1})_{G_\kappa\ast \{\rho\}}$-names, it is not hard to show that $\mathscr{U}_{\rho,\zeta}$ is a $\kappa$-complete ultrafilter in $V[G_\kappa\ast\{\rho\}\ast G(\mathbb{Q}_{\kappa,1})]$.
Also, for each $\zeta<\epsilon$, $\langle \mathscr{U}_{\rho,\zeta}\mid \rho\in C^*\rangle$ defines a tower of ultrafilters: Suppose $\rho<\rho'\in C^*$ and let $A\in \mathscr{U}_{\rho,\zeta}$. Then, there is a pair $\langle s,s_\kappa\rangle$ such that $\langle s,s_\kappa, p_\rho\rangle \Vdash\zeta\in j(\lusim{A}_\tau)$. By our definition of $C^*$ this is also true when replacing $p_\rho$ by $p_{\rho'}$. Thus, $(\lusim{A}_{\tau})_{G_{\kappa}\ast\{\rho\}\ast G(\mathbb{Q}_{\kappa,1})}\in\mathscr{U}_{\rho',\zeta}.$
Let $\delta$ be the limit of some sequence $\langle \rho_\alpha\mid\alpha <\kappa^+\rangle\subseteq C^*$. From our previous comments, $\langle \mathscr{U}_{\rho_\alpha,\zeta}\mid\alpha<\kappa^+\rangle$ defines a tower of measures. Now, define $\mathscr{V}_{\delta,\zeta}:=\bigcup_{\alpha<\kappa^+} \mathscr{U}_{\rho_\alpha,\zeta}$. Since $V[G_\kappa\ast \{\rho_\alpha\}\ast G(\mathbb{Q}_{\kappa,1})\restriction\rho_\alpha]$ is a submodel of $V[G_\kappa\ast \{\delta\}\ast G(\mathbb{Q}_{\kappa,1})]$ and $(\mathbb{Q}_{1,\kappa})_{G_\kappa\ast \{\delta\}}$ is $\kappa^+$-cc it is immediate that $\mathscr{V}_{\delta,\zeta}$ is a $\kappa$-complete ultrafilter.
\begin{claim}
$\mathscr{V}_{\delta,\zeta}$ admits a strong generating sequence of size $\kappa^+$.
\end{claim}
\begin{proof}
For each $\alpha<\kappa^+$, $\mathscr{U}_{\rho_\alpha,\zeta}\in V[G_\kappa\ast \{\delta\}\ast G(\mathbb{Q}_{\kappa,1})\restriction \rho_\alpha]$ hence the iteration $\mathbb{Q}_{\kappa,1}$ at stage $\rho_{\alpha}+1$ shoots a Mathias set $x_\alpha\subseteq \kappa$ (over the model $V[G_\kappa\ast \{\delta\}\ast G(\mathbb{Q}_{\kappa,1})\restriction \rho_\alpha]$) for the measure $\mathscr{U}_{\rho_\alpha,\zeta}$. Namely, $x_\alpha$ is almost included in every $A\in \mathscr{U}_{\rho_\alpha,\zeta}$.
We claim that $\langle x_\alpha\mid \alpha<\kappa^+\rangle$ is the sought strong generating sequence. First, for each $A\in \mathscr{V}_{\delta,\zeta}$ there is $\alpha<\kappa^+$ such that $A\in\mathscr{U}_{\rho_\alpha,\zeta}$ and so $x_\alpha\subseteq^* A$. Second, $\langle x_\alpha\mid \alpha<\kappa^+\rangle$ is $\subseteq^*$-decreasing: Fix $\alpha<\beta<\kappa^+$.
We would like to show that $x_\alpha\in \mathscr{U}_{\rho_\beta,\zeta}$. Recall that $x_{\alpha}=(a_{\lusim{\mathscr{U}}_{\rho_\alpha,\zeta}})_{G_{\kappa}\ast \{\rho_{\beta}\}\ast G(\mathbb{Q}_{\kappa,1})\restriction\rho_{\beta}}$, hence we should check that for some ${\langle} s,s_\kappa{\rangle}\in G_{\kappa}\ast\{\rho_{\beta}\}\ast G(\mathbb{Q}_{\kappa,1})\restriction\rho_{\beta}$ we have that:
$${\langle} s,s_{\kappa},p_{\rho_{\beta}}{\rangle}\Vdash \zeta\in j(a_{\lusim{\mathscr{U}}_{\rho_\alpha,\zeta}}).$$
By elementarity of $j$, it follows that $j(a_{\lusim{\mathscr{U}}_{\rho_\alpha,\zeta}})=a_{j(\lusim{\mathscr{U}}_{\rho_\alpha,\zeta})}$ is the canonical $\mathbb{P}'_{j(\kappa)}\ast\{j(\rho_{\beta})\}\ast \mathbb{Q}'_{j(\kappa)}$-name for the Mathias generic of $\mathbb{M}_{j(\mathscr{U}_{\rho_{\alpha},\zeta})}$. By the definition of $p_{\rho_{\beta}}$ (which extends $q_{\rho_{\beta}}$) and the definition of $r_{\lusim{\mathscr{U}}_{\rho_\alpha,\zeta}}$, $${\langle} 0,p_{\rho_{\beta}}{\rangle}\Vdash a_{j(\lusim{\mathscr{U}}_{\rho_\alpha,\zeta})}\cap(\epsilon+1)\supseteq \bigcap j``\lusim{\mathscr{U}}_{\rho_{\alpha},\zeta}\cap (\epsilon+1).$$
Working in $M[G_{\kappa}\ast\{\rho_{\alpha}\}\ast G(\mathbb{Q}_{\kappa,1})]$, we have that for every $A\in \mathscr{U}_{\rho_\alpha,\zeta}$ there is a name $\lusim{A}_\tau$ for $A$ such that $p_{\rho_\alpha}\Vdash \zeta\in j(\lusim{A}_\tau)$. Hence $p_{\rho_\alpha}\Vdash \zeta\in \bigcap j``\lusim{\mathscr{U}}_{\rho_\alpha,\zeta}$, and so there is ${\langle} s,s_\kappa{\rangle}\in G_\kappa\ast\{\rho_\alpha\}\ast G(\mathbb{Q}_{\kappa,1})\restriction \rho_\alpha$ such that
$${\langle} s,s_\kappa,p_{\rho_\alpha}{\rangle} \Vdash \zeta\in j(a_{\lusim{\mathscr{U}}_{\rho_{\alpha},\zeta}}).$$
Since $\rho_\alpha,\rho_\beta\in C^*$, this means that
${\langle} s,s_\kappa,p_{\rho_{\beta}}{\rangle}\Vdash \zeta\in j(a_{\lusim{\mathscr{U}}_{\rho_{\alpha},\zeta}})$ which by definition implies that $x_\alpha\in\mathscr{U}_{\rho_{\beta},\zeta}$.
\end{proof}
\begin{claim}
Every $\kappa$-complete ultrafilter $\mathscr{U}$ from the ground model is extended by $\mathscr{V}_{\delta,\zeta}$, for some $\zeta<\epsilon.$
\end{claim}
\begin{proof}[Proof of claim]
Let $\zeta<\epsilon$ be such that $\mathscr{U}=\{X\subseteq \kappa\mid \zeta\in j(X)\}$; such a $\zeta$ exists by the choice of $\epsilon$, as witnessed by $\zeta=\varepsilon_{\mathscr{U}}$. Clearly, $\mathscr{U}\subseteq \mathscr{U}_{\rho_\alpha,\zeta}$ for all $\alpha<\kappa^+$, hence $\mathscr{U}\subseteq \mathscr{V}_{\delta,\zeta}$.
\end{proof}
This completes the proof of the theorem.
\end{proof}
\begin{remark}
Note that every measurable cardinal $\kappa$ always carries a $\kappa$-complete ultrafilter which is not a $P$-point. To see this, take any $\kappa$-complete ultrafilter $\mathscr{U}$ over $\kappa$, and a bijection $\phi:[\kappa]^2\rightarrow \kappa$ and define $\mathscr{W}:=\phi_*(\mathscr{U}\times\mathscr{U})$. One can check that $\mathscr{W}$ is a $\kappa$-complete non-$P$-point ultrafilter. As witnessed by $L[\mathscr{U}]$, it is consistent that every ultrafilter is a finite power of a normal one (hence of a $P$-point), and such ultrafilters are always Galvin (see \cite[Corollary 5.29]{MR4393795}). If $\kappa$ is $\kappa$-compact\footnote{A cardinal $\kappa\geq \aleph_1$ is called \emph{$\kappa$-compact} if every $\kappa$-complete filter over $\kappa$ extends to a $\kappa$-complete ultrafilter.} then there is a $\kappa$-complete ultrafilter which is not a finite power of a $P$-point (see, e.g., \cite[\S3.9]{Kan1}).
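To spell out why $\mathscr{W}$ is not a $P$-point, we sketch the standard argument; here we read ``$P$-point'' in the usual way (every $f\colon\kappa\rightarrow\kappa$ is either constant or ${<}\kappa$-to-one on a set in the ultrafilter), and $\mathscr{U}\times\mathscr{U}$ denotes the Fubini product, so that $Y\in\mathscr{U}\times\mathscr{U}$ iff $\{\alpha\mid \{\beta>\alpha\mid \{\alpha,\beta\}\in Y\}\in\mathscr{U}\}\in\mathscr{U}$. Consider
$$f:=\min{}\circ\,\phi^{-1}\colon\kappa\rightarrow\kappa.$$
On the one hand, $f_*(\mathscr{W})=\mathscr{U}$ is non-principal, so $f$ is not constant on any set in $\mathscr{W}$. On the other hand, if $X\in\mathscr{W}$ then for $\mathscr{U}$-many $\alpha$ the set $B_\alpha:=\{\beta>\alpha\mid \phi(\{\alpha,\beta\})\in X\}$ belongs to $\mathscr{U}$, and $\phi``\{\{\alpha,\beta\}\mid\beta\in B_\alpha\}\subseteq X\cap f^{-1}\{\alpha\}$ has size $\kappa$. Hence $f$ is not ${<}\kappa$-to-one on any set in $\mathscr{W}$.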
\end{remark}
\begin{question}
Is it consistent to have a measurable cardinal carrying a $\kappa$-complete ultrafilter $\mathscr{U}$ such that $\mathrm{Gal}(\mathscr{U},\kappa,\kappa^+)$ holds, yet $\mathscr{U}$ is not Rudin-Keisler equivalent to a finite power of $P$-points?
\end{question}
\begin{question}
Is it $\mathsf{ZFC}$-provable that a supercompact cardinal $\kappa$ always admits a $\kappa$-complete non-Galvin ultrafilter?
\end{question}
\subsection{Galvin's property in the choiceless context}
Another way to examine Galvin's property at very large cardinals is to consider relatively small cardinals in \textsf{ZF}.
A typical example is $\aleph_1$ under \textsf{AD}.
Indeed, Solovay proved that both $\aleph_1$ and $\aleph_2$ are measurable under \textsf{AD}, and $\mathscr{D}_{\aleph_1}$ is a normal ultrafilter (see \cite[Theorem~33.12]{Jech2003}).
In fact, $\aleph_1$ is $\aleph_2$-supercompact under \textsf{AD}, by a result of Martin (see \cite[p. 401]{Kan}).
Moreover, under $\mathrm{AD}_\mathbb{R}$, $\aleph_1$ is $\gamma$-supercompact for all $\gamma<\Theta$ \cite[Theorem 28.22]{Kan}.
Nonetheless, the classical proof of Galvin employs the Axiom of Choice in a crucial way. Indeed, a key step in that proof is that a small union of small sets is again small. Thus it is not clear whether $\mathrm{Gal}(\mathscr{D}_{\aleph_1},\aleph_1,\aleph_2)$ holds under \textsf{AD}.
The following addresses this issue.
\begin{theorem}
\label{thmad} Assume that $\kappa$ and $\kappa^+$ are measurable cardinals.
Then, ${\rm Gal}(\mathscr{U},\kappa^+,\kappa^+)$ holds for every $\kappa$-complete ultrafilter over $\kappa$.
In particular, ${\rm Gal}(\mathscr{D}_{\aleph_1},\aleph_2,\aleph_2)$ holds under $\mathsf{AD}$.
\end{theorem}
\begin{proof}
Let $\mathscr{U}$ be a $\kappa$-complete ultrafilter over $\kappa$, and $\mathscr{V}$ be a $\kappa^+$-complete ultrafilter over $\kappa^+$.
Let us first argue that for any coloring $c\colon \kappa^+\times\kappa\rightarrow{2}$ one can find $A\in\mathscr{V}$ and $B\in\mathscr{U}$ such that $c\upharpoonright(A\times{B})$ is constant.
For each $\beta<\kappa$ and $i\in\{0,1\}$ define $$S^i_\beta:=\{\alpha<\kappa^+\mid c(\alpha,\beta)=i\}.$$
Notice that for every $\beta<\kappa$ there is a unique index $i(\beta)$ such that $S^{i(\beta)}_\beta\in\mathscr{V}$.
Hence there is $B\in\mathscr{U}$ and a fixed $i\in\{0,1\}$ for which $\beta\in{B}\Rightarrow i(\beta)=i$.
Since $\mathscr{V}$ is $\kappa^+$-complete, $A=\bigcap\{S^i_\beta\mid \beta\in{B}\}\in\mathscr{V}$.
Altogether we have $c``(A\times{B})=\{i\}$, which completes the proof of our initial statement.
Assume now that $\langle C_\gamma\mid \gamma\in\kappa^+\rangle \subseteq\mathscr{U}$.
Define $d:\kappa^+\times\kappa\rightarrow{2}$ by letting $d(\alpha,\beta)=0$ iff $\beta\in{C_\alpha}$.
By the above considerations there are $A\in\mathscr{V},B\in\mathscr{U}$ and $i\in\{0,1\}$ such that $d``(A\times{B})=\{i\}$.
If $i=1$ then $B\cap{C_\alpha}=\emptyset$ whenever $\alpha\in{A}$, and this is impossible since both $B$ and $C_\alpha$ belong to $\mathscr{U}$.
Thus $i=0$ and consequently $B\subseteq{C_\alpha}$ whenever $\alpha\in{A}$.
This means that ${\rm Gal}(\mathscr{U},\kappa^+,\kappa^+)$ holds true.
For the additional statement of the theorem recall that $\aleph_1$ and $\aleph_2$ are measurable under \textsf{AD} \cite[Theorem~33.12]{Jech2003}.
\end{proof}
\begin{remark}
Eilon Bilinsky suggested another way to prove the previous result. We reproduce the argument here with his kind permission.
Let $f\colon \kappa^+\rightarrow \mathcal{P}(\kappa)$ be a sequence of subsets of $\kappa$. Suppose towards a contradiction that there is no set $A$ such that $B_A=\{i<\kappa^+\mid f(i)=A\}$ is unbounded in $\kappa^+$. Construct an injection $g\colon \kappa^+\rightarrow \mathcal{P}(\kappa)$ by recursion as follows: Set $g(0):=f(0)$. Suppose that $g\restriction \alpha$ is defined for some $\alpha<\kappa^+$. By our assumption, for each $\beta<\alpha$ the set $B_{g(\beta)}$ is bounded by some $\gamma_\beta<\kappa^+$. Put $\gamma:=\sup_{\beta<\alpha}\gamma_\beta$ and note that $\gamma<\kappa^+$. Next, define $g(\alpha):=f(\gamma+1)$. For every $\beta<\alpha$, $\gamma+1\notin B_{g(\beta)}$, hence $g(\beta)\neq f(\gamma+1)=g(\alpha)$.
\end{remark}
Anticipating the results in the next section, we generalize the statement of Theorem~\ref{thmad}.
Rather than a pair of consecutive measurable cardinals $\kappa$ and $\kappa^+$, we prove a generalized statement for two measurable cardinals $\kappa<\lambda$.
The important point is that the family of sets with a large intersection and the large set contained in all of them are explicitly constructed.
We will apply the claim to many pairs simultaneously, and since these objects are explicitly definable we do not need the axiom of choice in order to pick them.
We note that the claim below is trivial in \textsf{ZFC}: if $\kappa<\lambda$ and $\lambda$ is measurable, then $\lambda={\rm cf}(\lambda)>2^\kappa$, and the relevant Galvin property holds trivially.
We shall use this claim in the context of \textsf{AD}.
\begin{proposition} \label{clmgeneral} Suppose that $\kappa$ and $\lambda$ are measurable and $\kappa<\lambda$.
For every $\kappa$-complete ultrafilter $\mathscr{U}$ over $\kappa$ it is true that ${\rm Gal}(\mathscr{U},\lambda,\lambda)$ holds.
\end{proposition}
\begin{proof}
Let $\mathcal{C}=\{C_\alpha\mid \alpha<\lambda\}$ be a subset of $\mathscr{U}$ and
$\mathscr{V}$ be a $\lambda$-complete ultrafilter over $\lambda$.
An ordinal $\beta<\kappa$ will be called $\mathcal{C}$-good iff the set $A_\beta:=\{\alpha\in\lambda\mid \beta\in C_\alpha\}$ belongs to $\mathscr{V}$.
Let $G$ be the set of $\mathcal{C}$-good ordinals ${<}\kappa$.
We claim that $G\in\mathscr{U}$.
If not, $\kappa\setminus G\in\mathscr{U}$.
Now for every $\beta\in\kappa\setminus G$ one has $A_\beta\notin\mathscr{V}$ and hence $(\lambda\setminus A_\beta)\in\mathscr{V}$.
Define $S:=\bigcap\{(\lambda\setminus A_\beta)\mid \beta\in\kappa\setminus G\}$ and observe that $S\in\mathscr{V}$, by $\lambda$-completeness.
In particular, $S\neq\emptyset$, so pick $\alpha\in{S}$.
It follows that $\beta\notin{C_\alpha}$ for every $\beta\in\kappa\setminus G$, so $C_\alpha\cap(\kappa\setminus G)=\varnothing$.
This is impossible, however, since both $C_\alpha$ and $\kappa\setminus G$ are members of $\mathscr{U}$.
We conclude, therefore, that $G\in\mathscr{U}$.
Define $A:=\bigcap\{A_\beta\mid \beta\in{G}\}$.
Again, $\lambda$-completeness ensures that $A\in\mathscr{V}$.
Let $\mathcal{D}=\{C_\alpha\mid \alpha\in{A}\}$.
Notice that $G\subseteq{C_\alpha}$ for every $\alpha\in{A}$, so we are done.
\end{proof}
If one seeks a parallel of the above in \textsf{ZFC}, one may consider real-valued measurable cardinals, which can be described as measurable cardinals without the accompanying cardinal arithmetic.
A cardinal $\kappa$ is real-valued measurable if there exists a non-trivial $\kappa$-additive measure over $\kappa$.
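Spelled out, and under the standard formulation, such a measure is a function
$$m\colon \mathcal{P}(\kappa)\rightarrow[0,1],\qquad m(\kappa)=1,\qquad m(\{\alpha\})=0 \text{ for all }\alpha<\kappa,$$
$$m\Big(\bigcup_{i<\gamma}A_i\Big)=\sum_{i<\gamma}m(A_i)\quad\text{for every }\gamma<\kappa\text{ and pairwise disjoint }\langle A_i\mid i<\gamma\rangle.$$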
The size of such cardinals, if they exist at all, is at most $2^{\aleph_0}$.
Solovay proved that if there is a measurable cardinal $\kappa$ and one forces a $\kappa$-product of random reals then one obtains $2^{\aleph_0}=\kappa$ in the generic extension and $\kappa$ is real-valued measurable.
It is tempting to try an amalgamation between this theorem and the forcing construction of Abraham-Shelah \cite{MR830084}.
The framework will be similar, but rather than Cohen reals (added in the Abraham-Shelah model) one can try random reals.
The most difficult part is to verify that the \textit{Main lemma} \cite[Lemma 1.7]{MR830084} still holds true when the Cohen reals are replaced by random reals. This is not evident in principle, as the original argument of \cite{MR830084} relied upon some specific properties of Cohen reals.
Once this is accomplished, the failure of Galvin's property at $\aleph_1$ follows. There is, however, the additional caveat of ensuring that the Abraham--Shelah poset preserves the real-valued measurability of $\kappa$. All of this suggests the following question:
\begin{question}
\label{qrealvalued} Is it consistent that $\kappa$ is a real-valued measurable cardinal and ${\rm Gal}(\mathscr{D}_{\aleph_1},\aleph_1,\kappa)$ fails?
\end{question}
Returning to $\aleph_1$ and $\aleph_2$, we note that the statement ${\rm Gal}(\mathscr{D}_{\aleph_1},\aleph_2,\aleph_2)$ proved above under \textsf{AD} is stronger than the classical Galvin property.
This is interesting in light of our next result, in which we prove that ${\rm Gal}(\mathscr{U},\aleph_3,\aleph_3)$ fails for $\aleph_2$-complete ultrafilters over $\aleph_2$ (under \textsf{AD}), despite the fact that $\aleph_2$ is measurable.
\begin{proposition}
\label{propomega2ad} Assume $\mathsf{AD}$.
Then, ${\rm Gal}(\mathscr{U},\aleph_3,\aleph_3)$ fails for every $\aleph_2$-complete ultrafilter $\mathscr{U}$ over $\aleph_2$.
\end{proposition}
\begin{proof}
As before, we employ a combinatorial argument.
So, let us show that $\binom{\omega_3}{\omega_2}\nrightarrow\binom{\omega_3}{\omega_2}$ under \textsf{AD}.
More generally, if $\kappa={\rm cf}(\mu)$ then $\binom{\mu}{\kappa}\nrightarrow\binom{\mu}{\kappa}$.
Fix a sequence $\langle \mu_j\mid j<\kappa\rangle$ cofinal in $\mu$ that is strictly increasing and continuous.
For each $\alpha<\mu$ let $j(\alpha)$ be the unique index $j\in\kappa$ for which $\alpha\in[\mu_j,\mu_{j+1})$.
Define $c\colon \mu\times\kappa\rightarrow{2}$ by $c(\alpha,\beta)=0$ iff $j(\alpha)\leq\beta$, and verify that $c``(A\times{B})=\{0,1\}$ whenever $A$ is unbounded in $\mu$ and $B$ is unbounded in $\kappa$.
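To spell out the verification, fix $A$ unbounded in $\mu$ and $B$ unbounded in $\kappa$:
$$\text{for }\alpha\in A,\ \text{any }\beta\in B\text{ with }\beta\geq j(\alpha)\text{ gives }c(\alpha,\beta)=0;$$
$$\text{for }\beta\in B,\ \text{any }\alpha\in A\text{ with }\alpha\geq\mu_{\beta+1}\text{ gives }j(\alpha)>\beta,\text{ hence }c(\alpha,\beta)=1.$$
Such $\beta$ and $\alpha$ exist by unboundedness, so both colors appear on $A\times B$.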
From \cite{MR0479903} we know that ${\rm cf}(\omega_3)=\omega_2$ under \textsf{AD}, thus $\binom{\omega_3}{\omega_2}\nrightarrow\binom{\omega_3}{\omega_2}$.
Now let $\mathscr{U}$ be an $\aleph_2$-complete ultrafilter over $\omega_2$ and assume towards a contradiction that ${\rm Gal}(\mathscr{U},\aleph_3,\aleph_3)$ holds.
We shall prove that the positive relation $\binom{\omega_3}{\omega_2}\rightarrow\binom{\omega_3}{\omega_2}$ follows from this assumption, thus arriving at a contradiction and accomplishing the proof.
Let $d\colon \omega_3\times\omega_2\rightarrow{2}$ be a coloring.
We argue as before but this time from the larger cardinal downwards.
So for every $\alpha<\omega_3$ and $i\in\{0,1\}$ let $S^i_\alpha:=\{\beta\in\omega_2\mid d(\alpha,\beta)=i\}$.
For each $\alpha<\omega_3$ let $i(\alpha)$ be such that $S^{i(\alpha)}_\alpha\in\mathscr{U}$.
Let $A\subseteq\omega_3$ with $|A|=\aleph_3$ be such that $\alpha\in{A}\Rightarrow i(\alpha)=i$ for some fixed $i\in\{0,1\}$. Thus, $\{S^i_\alpha\mid \alpha\in{A}\}\subseteq\mathscr{U}$.
From ${\rm Gal}(\mathscr{U},\aleph_3,\aleph_3)$ there are $A'\in[A]^{\aleph_3}$ and $B'\in\mathscr{U}$ such that $B'\subseteq\bigcap\{S^i_\alpha\mid \alpha\in{A'}\}$.
By definition, $d\upharpoonright(A'\times{B'})$ is constantly $i$, so we are done.
\end{proof}
The above proposition does not exclude the possibility that ${\rm Gal}(\mathscr{U},\aleph_2,\aleph_3)$ holds under \textsf{AD}.
Notice, however, that the negative statement $\binom{\mu}{\kappa}\nrightarrow\binom{\mu}{\kappa}$ is actually weaker than the statement we proved:
we showed that there is no \emph{unbounded product} in $\mu\times\kappa$. Thus if $A\subseteq\mu$ is unbounded in $\mu$ then it forms no monochromatic product even if $|A|<\mu$.
Recall that under \textsf{AD}, ${\rm cf}(\omega_3)=\omega_2$, so our proof gives a bit more in the negative direction.
Now from ${\rm Gal}(\mathscr{U},\aleph_2,\aleph_3)$ one can prove that $\binom{\omega_3}{\omega_2}\rightarrow\binom{\omega_2}{\omega_2}$.
The missing part for concluding $\neg{\rm Gal}(\mathscr{U},\aleph_2,\aleph_3)$ is due to the fact that we do not know how to show that the upper component of size $\aleph_2$ can be unbounded in $\omega_3$, so we cannot get the desired contradiction.
\begin{question}
\label{qad} Assume \textsf{AD}.
Is it consistent that $\mathscr{U}$ is a $\kappa$-complete ultrafilter over $\kappa$ and ${\rm Gal}(\mathscr{U},\kappa,\kappa^+)$ fails?
\end{question}
In any case, Proposition \ref{propomega2ad} indicates that the measurable cardinal $\aleph_2$ does not enjoy the strong Galvin-property configuration described in Theorem~\ref{thmad}.
This goes in line with other combinatorial properties of these cardinals under \textsf{AD}. For instance, $\aleph_1$ is a \emph{strong partition cardinal} while $\aleph_2$ is just a weak partition cardinal.\footnote{Recall that $\kappa$ is a strong partition cardinal if $\kappa\rightarrow (\kappa)^{\kappa}$. A cardinal $\kappa$ is a weak partition cardinal if $\kappa\rightarrow (\kappa)^{\alpha}$ for every $\alpha<\kappa$.}
The pertinent statements about $\aleph_1$ and $\aleph_2$ are due to Martin and Kleinberg, respectively
(see \cite{MR0479903}).
\smallskip
Recall that one can force the failure of Galvin's property over $\mathscr{D}_{\aleph_2}$ in \textsf{ZFC}, as done in \cite{MR830084}.
In fact, Abraham and Shelah forced the \emph{strong failure}; namely, a family $\mathcal{C}=\{C_\alpha\mid \alpha<\omega_3\}\subseteq\mathscr{D}_{\aleph_2}$ such that if $\mathcal{D}\subseteq\mathcal{C}$ and $|\mathcal{D}|=\aleph_2$ then $|\bigcap\mathcal{D}|<\aleph_1$. This principle is denoted in \cite[\S2]{bgp} by $\neg_{\rm st}{\rm Gal}(\mathscr{U},\aleph_2,\aleph_3)$.
Our next proposition shows that under \textsf{AD} the strong failure must fail.
\begin{proposition}
\label{propstad} Assume $\mathsf{AD}$. Then, the strong failure $\neg_{\rm st}{\rm Gal}(\mathscr{D}_{\aleph_2},\aleph_2,\aleph_3)$ does not hold.
\end{proposition}
\begin{proof}
Suppose that $\mathcal{C}=\{C_\alpha\mid \alpha<\omega_3\}\subseteq\mathscr{D}_{\aleph_2}$.
So for every $\alpha<\omega_3$ let $\beta_\alpha$ be the least ordinal in $S:=E^{\omega_2}_{\omega_1}$ such that $C_\alpha\cap\beta_\alpha$ is a club of $\beta_\alpha$. Let $f\colon \omega_3\rightarrow\omega_2$ be the map $f(\alpha):=\beta_\alpha$. We claim that there is $\alpha<\omega_2$ such that $|f^{-1}\{\alpha\}|\geq \aleph_2$. Suppose otherwise, namely that $|f^{-1}\{\alpha\}|\leq \aleph_1$ for all $\alpha<\omega_2$. Note that since $f^{-1}\{\alpha\}\subseteq \omega_3$ it is well-ordered and hence has a cardinality. Next, fix a pairing bijection $\varphi\colon\omega_2\rightarrow\omega_2\times\omega_2$; such a $\varphi$ exists without any appeal to \textsf{AC} (see \cite[Theorem~3.5]{Jech2003}). To get the desired contradiction we exhibit a one-to-one function $g\colon \omega_3\rightarrow\omega_2$. For each $\beta<\omega_3$ there is a unique $\gamma_\beta<\omega_2$ such that $\beta$ is the $\gamma_\beta^{\mathrm{th}}$ member of $f^{-1}\{f(\beta)\}$; here again we use that this latter set is well-ordered and of order-type ${<}\omega_2$. Finally, define $g\colon \beta\mapsto \varphi^{-1}(\gamma_\beta, f(\beta))$. Certainly this is a one-to-one map, which yields the sought contradiction.
Let $A\in[\omega_3]^{\aleph_2}$ be such that $\alpha\in{A}\Rightarrow\beta_\alpha=\beta$ for some fixed $\beta\in{S}$.
Let $\psi\colon \omega_1\rightarrow\beta$ be strictly increasing, continuous and cofinal in $\beta$.
For every $\alpha\in{A}$, $C_\alpha\cap \mathrm{Im}(\psi)$ is a club in $\beta$, hence the set $E_\alpha:=\psi^{-1}[C_\alpha\cap \beta]$ belongs to
$\mathscr{D}_{\aleph_1}$. Put $\mathcal{E}:=\{E_\alpha\mid \alpha\in{A}\}\subseteq\mathscr{D}_{\aleph_1}$ and apply Theorem \ref{thmad} to find $B\in[A]^{\aleph_2}$ such that
$|\bigcap_{\alpha\in B} E_\alpha|=\aleph_1$.
Let $D$ be the image of $\psi$ over the set $\bigcap_{\alpha\in B} E_\alpha$.
It follows that $D$ is of size $\aleph_1$ and $D\subseteq{C_\alpha}$ for every $\alpha\in B$.
\end{proof}
The following is a natural question in light of the previous results:
\begin{question}
\label{qsupercom} Does there exist a model of \textsf{ZF} in which $\kappa$ is fully supercompact and $\kappa^+$ is measurable?
\end{question}
\section{An application to ordinary partition relations}
Ramsey's theorem from \cite{MR1576401} says that $\omega\rightarrow(\omega,\omega)^2$.
That is, for every $c:[\omega]^2\rightarrow 2$ there exists an infinite monochromatic subset $A\subseteq\omega$.
A natural generalization is obtained by replacing $\omega$ with some cardinal $\kappa>\aleph_0$.
The resulting relation is $\kappa\rightarrow(\kappa,\kappa)^2$ and implies that $\kappa$ is weakly compact.
There is yet another possible way to generalize Ramsey's theorem to uncountable cardinals and in this way one obtains a positive relation at small cardinals as well.
Recall that $\lambda\rightarrow(\kappa,\theta)^2$ means that for every $c:[\lambda]^2\rightarrow 2$, either there is $A\subseteq\lambda$ with $|A|=\kappa$ such that $c``[A]^2=\{0\}$, or there is $B\subseteq\lambda$ with $|B|=\theta$ such that $c``[B]^2=\{1\}$.
Ramsey's theorem generalizes to the statement $\kappa\rightarrow(\kappa,\omega)^2$ in which we increase the first component only.
A theorem of Erd\H{o}s, Dushnik and Miller says, indeed, that this relation holds at every infinite cardinal $\kappa$, see \cite{MR4862} and \cite{MR795592}.
In terms of graph theory this means that if $G$ is a graph on $\kappa$-many vertices then either $G$ contains an edge-free (independent) set of size $\kappa$ or an infinite clique.
Can one improve the positive relation $\lambda\rightarrow(\lambda,\omega)^2$?
The lightest possibility would be $\lambda\rightarrow(\lambda,\omega+1)^2$, but this relation no longer holds at every infinite cardinal $\lambda$.
Suppose that $\lambda>{\rm cf}(\lambda)=\omega$ and let $\lambda=\bigcup_{n<\omega}\Delta_n$ where $m\neq{n}\Rightarrow\Delta_m\cap\Delta_n=\emptyset$ and $|\Delta_n|<\lambda$ for every $n<\omega$.
Define $c:[\lambda]^2\rightarrow 2$ by letting $c(\alpha,\beta)=0$ iff there exists $n<\omega$ for which $\{\alpha,\beta\}\subseteq\Delta_n$.
A $0$-monochromatic set $A$ must be contained in some $\Delta_n$, so there is no such set of size $\lambda$.
A $1$-monochromatic set $B$ satisfies $|B\cap\Delta_n|\leq 1$ for every $n<\omega$, so there is no such set of order-type $\omega+1$.
Hence there is a class of cardinals which fail to satisfy $\lambda\rightarrow(\lambda,\omega+1)^2$.
On the other hand, if $\lambda={\rm cf}(\lambda)>\aleph_0$ then $\lambda\rightarrow(\lambda,\omega+1)^2$, see \cite[Theorem 11.3]{MR795592}.
In fact, one can prove a slightly stronger statement.
\begin{proposition}
\label{thmstat} Suppose that $\lambda={\rm cf}(\lambda)>\aleph_0$.
For every $c:[\lambda]^2\rightarrow 2$, either there is a stationary set $T\subseteq\lambda$ such that $c\upharpoonright[T]^2$ is $0$-monochromatic or there is $B\subseteq\lambda, {\rm otp}(B)=\omega+1$ such that $c\upharpoonright[B]^2$ is $1$-monochromatic.
\end{proposition}
\begin{proof}
Suppose that $c\colon [\lambda]^2\rightarrow 2$.
If there exists $B\subseteq\lambda$ of order type $\omega+1$ such that $c``[B]^2=\{1\}$ then we are done.
Suppose that there is no such $B$, and let $S$ be the set of limit ordinals of $\lambda$.
For every $\delta\in{S}$ choose a finite sequence $\bar{\alpha}_\delta = \langle\alpha^\delta_0,\ldots,\alpha^\delta_{n_\delta-1}\rangle$ such that $\bar{\alpha}_\delta^\frown\langle\delta\rangle$ is $1$-monochromatic and, whenever $\max(\bar{\alpha}_\delta)<\xi<\delta$, the sequence $\bar{\alpha}_\delta^\frown\langle\xi,\delta\rangle$ is not $1$-monochromatic.
The choice is possible by our assumption that there is no $1$-monochromatic sequence of length $\omega+1$.
By shrinking $S$ if needed, we may assume that $\ell{g}(\bar{\alpha}_\delta)=n$ for some fixed $n<\omega$ and every $\delta\in{S}$.
We remark that in this shrinking process we retain the fact that $S$ is stationary.
Let $\xi_\delta$ be the top-element of $\bar{\alpha}_\delta$ for every $\delta\in{S}$.
The function $h(\delta)=\xi_\delta$ is regressive on $S$, so by Fodor's lemma there is a stationary $T_0\subseteq{S}$ and a fixed ordinal $\xi<\lambda$ such that $\delta\in{T_0}\Rightarrow\xi_\delta=\xi$.
By repeating this process $n$-many times we obtain a stationary set $T_n$ and a fixed sequence $\bar{\alpha}$ such that $\bar{\alpha}^\frown\langle\delta\rangle$ is $1$-monochromatic and $\bar{\alpha}^\frown\langle\zeta,\delta\rangle$ is not $1$-monochromatic whenever $\delta\in{T_n}$ and $\max(\bar{\alpha})<\zeta<\delta$.
In particular, if $T=T_n\setminus (\max(\bar{\alpha})+1)$ then $T$ is a stationary subset of $\lambda$ and if $\zeta,\delta\in{T}, \zeta<\delta$ then necessarily $c(\zeta,\delta)=0$.
Otherwise, $\bar{\alpha}^\frown\langle\zeta\rangle$ will contradict the conclusion of the previous paragraph.
Thus, $c``[T]^2=\{0\}$ and we are done.
\end{proof}
As mentioned above, the statement of the proposition gives a bit more than just $\lambda\rightarrow(\lambda,\omega+1)^2$ since it yields a \emph{stationary} $0$-monochromatic set.
Notice that the argument applies to singular cardinals of uncountable cofinality, for which the concepts of club and stationary subset still make sense.
However, if $T$ is a stationary subset of a singular cardinal $\lambda$ then it is possible that $|T|<\lambda$.
Therefore, one may wonder about $\lambda\rightarrow(\lambda,\omega+1)^2$ in such cardinals.
The following is \cite[Question 11.4]{MR795592}:
\begin{question}
\label{qehmr} Does the relation $\lambda\rightarrow(\lambda,\omega+1)^2$ hold for $\lambda>{\rm cf}(\lambda)>\omega$?
\end{question}
Actually, the question is phrased in \cite{MR795592} with respect to $\lambda=\aleph_{\omega_1}$, the first relevant instance.
We indicate that in \cite{MR795592} there appears a partial answer, based on canonization theorems of Shelah, which apply to strong limit singular cardinals.
Namely, if $\lambda>{\rm cf}(\lambda)>\omega$ and $\lambda$ is a strong limit cardinal then $\lambda\rightarrow(\lambda,\omega+1)^2$.
In some sense, this result gives many \textsf{ZFC} instances since for every $\kappa={\rm cf}(\kappa)>\aleph_0$ there is a class of strong limit singular cardinals of cofinality $\kappa$.
On the other hand, for every specific $\lambda>{\rm cf}(\lambda)=\kappa$ one can choose $\theta<\lambda$ and force $2^\theta>\lambda$, thus locally it is not a theorem of \textsf{ZFC}.
A substantial progress with regard to the above question was made by Shelah in \cite{MR2494318}.
Using methods of pcf theory, Shelah proved that if $\lambda>{\rm cf}(\lambda)=\kappa>\aleph_0$ and $2^\kappa<\lambda$ then $\lambda\rightarrow(\lambda,\omega+1)^2$.
The assumption $2^\kappa<\lambda$ is a weakening of the assumption that $\lambda$ is a strong limit cardinal, but the overall question remains open if one wishes to eliminate any further assumption.
In this section we would like to replace $2^\kappa<\lambda$ by a version of Galvin's property.
The main application will be under \textsf{AD}, in which strong instances of Galvin's property hold, as shown in the previous section.
To this end, we must rework the proof of \cite{MR2494318}, removing any use of choice apart from $\textsf{AC}_\omega$.
Our assumption on the Galvin property is also easily forced in the \textsf{ZFC} context if $\kappa={\rm cf}(\lambda)$ is measurable.
This gives a slight improvement to Shelah's result, but it seems that the real importance of the $\textsf{ZFC}$ result is that it guides us towards the (tentative) direction of forcing the negative relation $\lambda\nrightarrow(\lambda,\omega+1)^2$. In effect, our approach indicates that one must kill all the pertinent instances of the Galvin property to obtain $\lambda\nrightarrow(\lambda,\omega+1)^2$.
Let us begin with models of \textsf{ZF}. Our first mission is to prove that $\kappa\rightarrow(\kappa,\omega+1)^2$ for many regular cardinals.
The proof of Proposition \ref{thmstat} seems to make use of the axiom of choice in two places.
Firstly, when one chooses the maximal green sequence $\bar{\alpha}_\delta$ below $\delta$ for every $\delta\in{S}$.
Secondly, when one employs Fodor's lemma (finitely many times).
The first issue is not a real obstacle, since finite sequences are well ordered even without choice.
The second issue is more substantial, but if one assumes that $\kappa$ is measurable then one can use normality.
We indicate that if $V=L(\mathbb{R})$ then every regular cardinal below $\Theta$ is measurable, as proved in \cite{MR2768698}, so the distance between measurable and regular cardinals is not so large under \textsf{AD}.
\begin{proposition}
\label{clmadmeasurable} Assume $\mathsf{AD}$.
If $\kappa$ is measurable then $\kappa\rightarrow(\kappa,\omega+1)^2$.
Consequently, under $``\mathsf{AD}+V=L(\mathbb{R})$'' one has $\kappa\rightarrow(\kappa,\omega+1)^2$ whenever $\aleph_0<\kappa={\rm cf}(\kappa)<\Theta$.
\end{proposition}
\begin{proof}
Let $c\colon [\kappa]^2\rightarrow 2$ be a coloring.
We refer to the first color as red and to the second color as green.
Repeat the arguments in the proof of Proposition~\ref{thmstat} with the following changes.
Fix a normal ultrafilter $\mathscr{U}$ over $\kappa$, so $S\in\mathscr{U}$, where $S$ is the set of limit ordinals below $\kappa$.
For every $\delta\in{S}$ let $\bar{\alpha}_\delta$ be the $<_{\rm lex}$-least finite sequence which is green with $\delta$ and maximal with respect to this property.
Now apply the normality of $\mathscr{U}$ finitely many times to obtain a fixed maximal green sequence $\bar{\alpha}$ with respect to some set $T\in\mathscr{U}$.
It follows that $c``[T]^2=\{0\}$, so the proof is accomplished.
\end{proof}
Now we come to the main result of this section.
For simplicity we assume that $V=L(\mathbb{R})$, though we believe that the result holds under \textsf{AD} only.
\begin{theorem}
\label{thm881ad} Assume $\mathsf{AD}+V=L(\mathbb{R})$.
Suppose that $\aleph_0<\kappa={\rm cf}(\lambda)<\lambda$.
If $\lambda$ is a limit of regular cardinals then $\lambda\rightarrow(\lambda,\omega+1)^2$.
In particular, this relation holds for $\lambda=\aleph_{\omega_1}$.
\end{theorem}
\begin{proof}
For transparency, assume that $\kappa=\omega_1$.
Let $\lambda=\bigcup_{i<\omega_1}\lambda_i$, where $\langle \lambda_i\mid i<\omega_1\rangle$ is increasing and continuous, $\omega_1<\lambda_0$ is measurable and $\lambda_{i+1}$ is measurable for every $i<\omega_1$.
Let $\mathscr{U}$ denote the club filter over $\omega_1$, which is a normal ultrafilter under \textsf{AD}.
For every $i<\omega_1$ let $\mathscr{U}_{i+1}$ be the filter generated by the unbounded $\omega$-closed subsets of $\lambda_{i+1}$. This is a normal ultrafilter over $\lambda_{i+1}$ under \textsf{AD}+$V=L(\mathbb{R})$, as shown in \cite{MR2768698}.
Given a function $f:\omega_1\rightarrow{\rm Ord}$ we define a rank function $\daleth(f)$ as follows.
Set $\daleth(f)=\alpha$ iff for every $\beta<\alpha$ one has $\daleth(f)\neq\beta$ and $\daleth(g)=\beta$ for some $g\colon \omega_1\rightarrow{\rm Ord}$ which satisfies $g<_{\mathscr{U}}f$ (i.e., $\{\alpha<\omega_1\mid g(\alpha)<f(\alpha)\}\in\mathscr{U}$).
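As an illustration (ours, not part of the argument): when the ultrapower of the ordinals by $\mathscr{U}$ is well-founded, $\daleth(f)$ is simply the rank of $[f]_{\mathscr{U}}$ among the ultrapower ordinals. Since the club measure on $\omega_1$ has critical point $\omega_1$, one gets for the constant functions $c_\alpha(\xi)=\alpha$ and for the identity function:

```latex
% Illustration only; assumes the ultrapower by U is well-founded.
\daleth(c_\alpha)=\alpha \quad (\alpha<\omega_1),
\qquad
\daleth(\mathrm{id})=\omega_1 \quad\text{(by normality of }\mathscr{U}\text{)}.
```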
Let $c\colon [\lambda]^2\rightarrow 2$ be a coloring and suppose that there is no $1$-monochromatic subset of order type $\omega+1$.
Define $\Delta_0:=\lambda_0$ and $\Delta_{1+i}:=[\lambda_i,\lambda_{i+1})$ for every $i<\omega_1$.
By our assumption, there is a full-sized $0$-monochromatic subset of $\Delta_{i}$ for every $i<\omega_1$.
Moreover, this set is explicitly constructible by the proof of Proposition \ref{clmadmeasurable}.
Hence we may assume without loss of generality that $c``[\Delta_i]^2=\{0\}$ for every $i<\omega_1$.
For each $\alpha<\omega_1$ let $\eta(\alpha)$ be the unique $i<\omega_1$ for which $\alpha\in\Delta_i$.
For every $0<i<\omega_1$ let $Seq_i$ be the set $\{\langle\alpha_0,\ldots,\alpha_{n-1}\rangle\mid \eta(\alpha_0)<\cdots<\eta(\alpha_{n-1})<i\}$.
For $i<\omega_1$ and $\zeta\in\Delta_i$ we define a tree $\mathscr{T}_\zeta$ as follows.
For every $\bar{\alpha}=\langle\alpha_0,\ldots,\alpha_{n-1}\rangle\in Seq_i$ and every $\zeta\in\Delta_i$ we let $\bar{\alpha}\in\mathscr{T}_\zeta$ iff $\{\alpha_0,\ldots,\alpha_{n-1},\zeta\}$ is $1$-monochromatic under $c$.
By our assumption each $\mathscr{T}_\zeta$ is well-founded with respect to the reversed order.
Therefore, one can define a rank function $rk_\zeta$ over $Seq_i$, for every $\zeta\in\Delta_i$, by the following procedure.
If $\bar{\alpha}\in Seq_i-\mathscr{T}_\zeta$ then let $rk_\zeta(\bar{\alpha})=-1$.
If $\bar{\alpha}\in\mathscr{T}_\zeta$ then $rk_\zeta(\bar{\alpha})=\xi$ iff for every $\varepsilon\in\xi$ one has $rk_\zeta(\bar{\alpha})\neq\varepsilon$ and there exists an ordinal $\beta$ for which $rk_\zeta(\bar{\alpha}^\frown\langle\beta\rangle)=\varepsilon$.
The idea of this rank function is to express the degree of maximality of finite sequences below $\zeta$.
For a maximal $1$-monochromatic sequence below $\zeta$, $rk_\zeta$ assumes the value zero.
If there is more room for adding ordinals above $\max(\bar{\alpha})$ and keeping the $1$-monochromaticity with $\zeta$, then the rank grows.
Notice that for every $i<\omega_1$ and every $\zeta\in\Delta_i$ the range of $rk_\zeta$ is bounded in $\lambda_{i+1}$ since it is an initial segment of $\lambda_{i+1}$.
Let $\Delta_i^{end}\subseteq\Delta_i$ be an end-segment such that for every $\bar{\alpha}\in Seq_i$ and $\gamma<\lambda_i$, if there is $\zeta\in \Delta_i^{end}$ such that $rk_\zeta(\bar{\alpha})=\gamma$ then there are $\lambda_{i+1}$-many such $\zeta$'s in $\Delta^{end}_i$.
The set $\Delta^{end}_i$ has a concrete definition: this end-segment is obtained by omitting bounded subsets of $\Delta_i$, which amounts in our case to the intersection of their complements.
Since $\lambda_{i+1}$ is measurable, this intersection is of size $\lambda_{i+1}$. In fact we may assume that it is an end-segment of $\lambda_{i+1}$.
We define a set of triples $K$ as follows.
A triple $(\bar{\alpha},Z,f)$ belongs to $K$ iff $Z\in\mathscr{U}, f:\omega_1\rightarrow{\rm Ord}$ and for some $0<i<\omega_1, \bar{\alpha}\in Seq_i, \min(Z)>i$ and for every $j\in{Z}$ there exists $\zeta\in\Delta_j^{end}$ such that $rk_\zeta(\bar{\alpha})=f(j)$.
It is easy to verify that $K\neq\varnothing$.
Let $(\bar{\alpha}^*,Z^*,f^*)$ be a triple in $K$ for which $\daleth(f^*)$ is minimal amongst the elements of $K$.
Without loss of generality, all the elements of $Z^*$ are limit ordinals.
For every $j\in Z^*$ we isolate a section $\Sigma_j\subseteq\Delta_j^{end}$ by defining $\Sigma_j=\{\zeta\in\Delta_j^{end}\mid rk_\zeta(\bar{\alpha}^*)=f^*(j)\}$.
Notice that $|\Sigma_j|=\lambda_{j+1}$.
Fix $j\in Z^*$.
We would like to understand what happens between the $j$th level and upper levels mentioned in $Z^*$.
For every $\ell\in Z^*$ such that $j<\ell$ and every $\zeta\in\Sigma_j$, let $\Sigma_j^\ell(\zeta)=\{\eta\in\Sigma_\ell\mid c(\zeta,\eta)=1\}$.
Let $L_\zeta=\{\ell\in Z^*\mid j<\ell,|\Sigma_j^\ell(\zeta)|=\lambda_{\ell+1}\}$, the set of \emph{large} $1$-monochromatic levels above $j$.
Similarly, let $T_\zeta=Z^*-L_\zeta$, the set of \emph{tiny} $1$-monochromatic levels above $j$.
Our goal is to garner many ordinals $\zeta$ with big $T_\zeta$, since then we will be able to remove the ``$1$'' edges (there will be only a few of them) and create a $0$-monochromatic union of size $\lambda$.
We claim, therefore, that $L_\zeta\notin\mathscr{U}$ (and correspondingly, $T_\zeta\in\mathscr{U}$) for every $j\in Z^*,\zeta\in\Sigma_j$.
For suppose not.
Fix $i\in Z^*,\beta\in\Sigma_i$ such that $L_\beta\in\mathscr{U}$.
Define $\bar{\alpha}'={\bar{\alpha}}^{*\frown}\langle\beta\rangle, Z'=L_\beta$ and for $j\in Z'$ let $f'(j)=\min\{rk_\eta(\bar{\alpha}')\mid \eta\in\Sigma_i^j(\beta)\}$ and $f'(j)=0$ otherwise.
Notice that $(\bar{\alpha}',Z',f')\in{K}$.
We claim that $\daleth(f')<\daleth(f^*)$.
Indeed, for every $j\in Z'=L_\beta$ one has $f'(j)=rk_\eta(\bar{\alpha}')$ for some $\eta\in\Sigma_i^j(\beta)$, so $f'(j)=rk_\eta({\bar{\alpha}}^{*\frown}\langle\beta\rangle)<rk_\eta(\bar{\alpha}^*)=f^*(j)$.
Thus, $\daleth(f')<\daleth(f^*)$, contradicting the minimality of $\daleth(f^*)$.
We conclude, therefore, that $T_\zeta\in\mathscr{U}$ for every $j\in Z^*, \zeta\in\Sigma_j$.
Fix now $j\in Z^*$.
Let $T^j=\{T_\zeta\mid \zeta\in\Sigma_j\}\subseteq\mathscr{U}$.
Let $\mathcal{A}^j=\{T_{\zeta_\varepsilon}\mid \varepsilon\in\lambda_{j+1}\}$, let $A^j_0=\{\zeta_\varepsilon:T_{\zeta_\varepsilon}\in\mathcal{A}^j\}$ and let $Y_j\in\mathscr{U}$ be such that $Y_j\subseteq T_{\zeta_\varepsilon}$ for every $\varepsilon\in\lambda_{j+1}$.
The existence of $\mathcal{A}^j,Y_j$ comes from Claim \ref{clmgeneral}.
We emphasize that we do not need the axiom of choice in order to define these objects.
We repeat this process at every $j\in Z^*$.
Finally, let $Y=\Delta\{Y_j\mid j\in{Z^*}\}\in\mathscr{U}$.
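Here $\Delta$ denotes the diagonal intersection; under the convention we assume (indexing over $Z^*$), this is:

```latex
\Delta\{Y_j \mid j\in Z^*\}
 = \{\xi<\omega_1 \mid \xi\in Y_j \text{ for every } j\in Z^*\cap\xi\},
```

and it belongs to $\mathscr{U}$ by normality.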
For every $j\in{Y}$ we wish to define $A_j\subseteq A^j_0$ such that $|A_j|=\lambda_{j+1}$.
The sets $A_j$ will be $0$-monochromatic, and our goal is to show that their union is $0$-monochromatic as well.
We define these sets by induction on $j\in{Y}$.
So fix $j\in{Y}$ and define $A_j=\{\xi\in A^j_0\mid \forall i\in Y\cap j, \forall\zeta\in A_i, \xi\notin\Sigma_i^j(\zeta)\}$.
We claim that $|A_j|=\lambda_{j+1}$.
To see this, observe that if $i\in Y\cap{j}$ then for every $\zeta\in{A_i}$ the set $\Sigma_i^j(\zeta)$ is bounded in $\lambda_{j+1}$, hence $\lambda_{j+1}-\Sigma_i^j(\zeta)\in\mathscr{U}_{j+1}$.
Thus, $\bigcap\{(\lambda_{j+1}-\Sigma_i^j(\zeta))\mid \zeta\in{A_i}\}$ belongs to $\mathscr{U}_{j+1}$ and equals $A_j$, hence $|A_j|=\lambda_{j+1}$.
Define $A=\bigcup_{j\in Y}A_j$, so $|A|=\lambda$.
By proving that $c``[A]^2=\{0\}$ we will be done.
Pick $\alpha,\beta\in{A}$ such that $\alpha<\beta$.
If there exists $j\in\omega_1$ such that $\alpha,\beta\in{A_j}$ then $c(\alpha,\beta)=0$ since $A_j\subseteq A^j_0\subseteq\Sigma_j$ and $c``[\Sigma_j]^2=\{0\}$.
If not, then there are $i<j<\omega_1$ such that $\alpha\in{A_i}, \beta\in{A_j}$, and $i,j\in{Y}$.
By definition, $\beta\notin\Sigma_i^j(\alpha)$ and hence $c(\alpha,\beta)=0$ and the proof is accomplished.
\end{proof}
The above proof also gives the following corollary in \textsf{ZFC}.
Suppose that $\kappa$ is measurable, and $\mathscr{U}$ is a normal ultrafilter over $\kappa$.
Suppose that there is a base of $\mathscr{U}$ of size $\kappa^+$.
One says, in this case, that $\mathfrak{u}^{\rm nor}_\kappa=\kappa^+$.
It is possible to force $\mathfrak{u}^{\rm nor}_\kappa=\kappa^+$ even if $2^\kappa$ is arbitrarily large, see e.g. \cite{MR1632081} and \cite{MR3201820}.
It is easy to verify that if $\mathscr{U}$ witnesses $\mathfrak{u}^{\rm nor}_\kappa=\kappa^+$ then ${\rm Gal}(\mathscr{U},\lambda,\lambda)$ holds whenever $\lambda={\rm cf}(\lambda)>\kappa^+$, see e.g. \cite{bgp}.
\begin{corollary}
\label{cormeasurable} If $\kappa$ is measurable and $\mathfrak{u}^{\rm nor}_\kappa=\kappa^+$ then $\lambda\rightarrow(\lambda,\omega+1)^2$ whenever $\kappa={\rm cf}(\lambda)<\lambda$.
\end{corollary}
The corollary shows that the positive relation $\lambda\rightarrow(\lambda,\omega+1)^2$ may hold even if $2^\kappa>\lambda$.
This fact was known already to Shelah and Stanley, see \cite{MR1782118}.
It is shown there that if $\lambda$ is a strong limit singular of uncountable cofinality then many versions of Cohen forcing preserve the positive relation $\lambda\rightarrow(\lambda,\omega+1)^2$, in particular adding many Cohen subsets of $\kappa$ in such a way that $2^\kappa>\lambda$.
Our corollary generalizes these results, since any $\kappa$-complete forcing notion will preserve the statement $\mathfrak{u}^{\rm nor}_\kappa=\kappa^+$.
In the above statements we used Galvin's property in order to prove $\lambda\rightarrow(\lambda,\omega+1)^2$.
The connection between ordinary partition relations and the structure of normal filters seems to be helpful in the opposite direction as well.
That is, from the assumption that $\lambda\rightarrow(\lambda,\omega+1)^2$ one can learn something about the Galvin property.
\begin{proposition}
\label{clmctble} Let $\kappa={\rm cf}(\kappa)>\aleph_0$ and let $\mathscr{F}$ be a normal filter over $\kappa$.
Suppose that $\neg_{\mathrm{st}}{\rm Gal}(\mathscr{F},\kappa,\kappa^+)$ is witnessed by $\mathcal{C}=\{C_\alpha\mid \alpha<\kappa^+\}$.
Then one can find $a=\{\alpha_n\mid n<\omega\}\subseteq\kappa^+$ such that $\bigcap\{(\kappa\setminus C_{\alpha_n})\mid n<\omega\}\neq\emptyset$.
\end{proposition}
\begin{proof}
Assume toward contradiction that $\mathcal{C}=\{C_\alpha\mid \alpha<\kappa^+\}$ witnesses the failure of ${\rm Gal}(\mathscr{F},\kappa,\kappa^+)$, yet the complements are not overlapping in the sense that every infinite collection of them has empty intersection.
Define a coloring $c:[\kappa^+]^2\rightarrow 2$ as follows.
For $\alpha<\beta<\kappa^+$ let $c(\alpha,\beta)=0$ iff $\beta\in{C_\alpha}$.
Notice that there is no $1$-monochromatic sequence of length $\omega+1$ under $c$.
For if $\langle \alpha_n\mid n\leq\omega\rangle$ is such a sequence then $\alpha_\omega\in\bigcap\{(\kappa\setminus C_{\alpha_n})\mid n<\omega\}$, contrary to our assumption at the beginning of the proof.
Likewise, there is no $0$-monochromatic set $A\in[\kappa^+]^{\kappa+\kappa}$.
For if $A=A_0\cup{A_1}$ with $\sup(A_0)<\min(A_1)$ and ${\rm otp}(A_0)={\rm otp}(A_1)=\kappa$, then $A_1\subseteq\bigcap\{C_\alpha\mid \alpha\in A_0\}$.
This is impossible since $\{C_\alpha\mid \alpha\in {A_0}\}\subseteq\mathcal{C}$.
Thus, the coloring $c$ witnesses $\kappa^+\nrightarrow(\kappa^+,\omega+1)^2$, which is absurd since $\kappa^+$ is regular and uncountable.
\end{proof}
A similar argument becomes more powerful at successors of large cardinals.
By \cite{bgs} one can force a $\kappa$-complete ultrafilter $\mathscr{U}$ over a measurable cardinal $\kappa$ such that ${\rm Gal}(\mathscr{U},\kappa,\kappa^+)$ fails.
\begin{proposition}
\label{clmbgsfh} Suppose that $\kappa$ is measurable and $\mathscr{U}$ is a $\kappa$-complete ultrafilter over $\kappa$ for which $\neg_{\mathrm{st}}{\rm Gal}(\mathscr{U},\kappa,\kappa^+)$ is witnessed by the sequence $\mathcal{C}=\{C_\alpha\mid \alpha<\kappa^+\}$.
Then there is
$S\in[\kappa]^\kappa$ such that $\bigcap\{(\kappa\setminus C_\alpha)\mid \alpha\in{S}\}$ is non-empty.
\end{proposition}
\begin{proof}
We define $c:[\kappa^+]^2\rightarrow 2$ as before, by letting $c(\alpha,\beta)=0$ iff $\beta\in{C_\alpha}$.
Of course, the definition is needed at $\alpha<\beta<\kappa^+$ only, since the coloring is symmetric.
Assume toward contradiction that there is no $S\in[\kappa]^\kappa$ such that $\bigcap\{(\kappa-C_\alpha)\mid \alpha\in{S}\}\neq\emptyset$.
By the same argument as in the previous claim, $c$ witnesses the negative relation $\kappa^+\nrightarrow(\kappa+\kappa,\kappa+1)^2$.
However, this relation holds if $\kappa$ is measurable, as proved in \cite{MR1968607}, a contradiction.
\end{proof}
The above statements show that there
is a limitation on forcing empty intersection over the complements of sets which witness the strong failure of the Galvin property.
One can try, however, to force empty intersection only over certain families.
The following is tailored to the goal of obtaining a negative relation at singular cardinals.
Given $\lambda>{\rm cf}(\lambda)=\kappa>\aleph_0$ and a
cofinal sequence $\langle \lambda_i\mid i<\kappa\rangle$ in $\lambda$,
let $\Delta_0:=[0,\lambda_0)$ and $\Delta_{1+i}:=[\lambda_i,\lambda_{i+1})$.
\begin{proposition}
\label{clmnegdirection} Suppose that:
\begin{enumerate}
\item [$(\aleph)$] $\lambda>{\rm cf}(\lambda)=\kappa>\aleph_0$ and $\mathscr{F}$ is a normal filter over $\kappa$.
\item [$(\beth)$] $\{C_\alpha\mid \alpha<\lambda\}\subseteq\mathscr{F}$ witnesses $\neg_{\mathrm{st}}{\rm Gal}(\mathscr{F},\kappa,\lambda)$.
\item [$(\gimel)$] $\bigcap\{(\kappa\setminus C_{\alpha_n})\mid n<\omega\}=\emptyset$ whenever $\alpha_n\in\Delta_{i_n}$ for every $n<\omega$, and $m\neq n\Rightarrow \alpha_m\neq \alpha_n$.
\end{enumerate}
Then $\lambda\nrightarrow(\lambda,\omega+1)^2$.
\end{proposition}
\begin{proof}
For $\alpha<\beta<\lambda$, if there exists $i\in\kappa$ such that $\alpha,\beta\in\Delta_i$ then let $c(\alpha,\beta)=0$.
If $\alpha\in\Delta_i,\beta\in\Delta_j$ and $i<j$, let $c(\alpha,\beta)=0$ iff $\beta\in C_\alpha$.
One can verify that $c:[\lambda]^2\rightarrow 2$ witnesses $\lambda\nrightarrow(\lambda,\omega+1)^2$, so we are done.
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction}
A stellar tidal disruption event (TDE) happens when a star passes within the Roche radius of a massive black hole. As the streams of stellar debris circularize and are accreted by the black hole, we can expect a luminous flare of thermal emission \citep{Rees88}. The first (candidates) of these stellar tidal disruption flares (TDFs) were discovered with X-ray surveys \citep{Bade96,KomossaBade99}, followed by UV-selected flares \citep{Gezari06,Gezari09}. The detection of new TDFs is currently dominated by optical surveys \citep{vanVelzen10,Gezari12,Arcavi14,Holoien14}. About two new candidates are found each year, and this number is certain to increase in the near future as larger optical surveys (ZTF, LSST) become operational.
Stellar tidal disruption flares can be considered a multipurpose tool for extragalactic astrophysics. First of all, they can be used as probes to signal the existence of black holes in quiescent galaxies, potentially including intermediate-mass black holes in dwarf galaxies \citep{Maksym13,MacLeod14}. The dynamics of the TDE itself are also interesting; the rapid increase of the fallback rate of the stellar debris (from super- to sub-Eddington in a few years) presents a new domain to test our understanding of accretion physics. In particular, the detection of radio emission following TDFs \citep{Zauderer11,Bloom11,Levan11,vanVelzen16,Alexander17} can shed new light on the conditions required for the launch of relativistic jets.
While TDFs appear to be promising tools, their optical emission has proven to be difficult to model self-consistently. The observed temperature is too low to originate from a compact accretion disk near the pericenter of the stellar orbit \citep{Strubbe11,Lodato11,vanVelzen10}. Reprocessing of high-energy photons from the compact accretion disk by material at larger radii \citep{Guillochon14,Metzger16} and emission due to shocks caused by intersecting debris streams \citep{Piran15b,Krolik16} have both been proposed as explanations for optical TDF emission. Neither explanation is completely satisfactory. The intersecting-stream scenario suffers from a ``missing energy problem'' \citep{Piran15b}, whereas the reprocessing scenario appears at odds with the (tentative) discovery that the X-ray emission lags the UV emission \citep{Pasham17}.
Contrary to the optical emission, the X-ray properties of TDF candidates generally fit within the canonical TDE picture of \citet{Rees88}; see \citet[][]{Komossa15,Auchettl16} for reviews. Since very few X-ray selected TDFs have received optical follow-up near peak \citep{Saxton17}, it could be possible that X-ray-selected and optically selected TDF candidates are a separate class of tidal disruptions \citep{Dai15}. Some authors have gone one step further and proposed that most optically selected TDF candidates are, in fact, not due to the tidal disruption of a star. Supernovae (SNe) in AGN accretion disks \citep{Saxton16}, collisions of stars on bound orbits around the black hole \citep{Metzger17}, black hole accretion disk instabilities \citep{Saxton16}, or flares from accreting supermassive black hole binaries with subparsec separations \citep{Tanaka13} have been proposed as potential TDF impostors.
The hypothesis that flares from active galactic nuclei (AGN) are TDF impostors perhaps has the most observational support because relatively rapid changes of AGN luminosity and spectral type have been observed \citep{Storchi-Berg93,Shappee14,LaMassa15,Gezari17}. While the observed properties of these changing-look AGN are different from candidate TDFs \citep[][]{Ruan16,MacLeod16}, their existence demonstrates that the luminosity of AGN can increase beyond what is expected from the power spectrum that accurately describes the light curves of mundane AGN \citep{MacLeod12,Graham17}.
The mechanism that triggers the accretion rate increase in changing-look AGN could be similar to the thermal-viscous instability \citep{Meyer81,Smak83} that is known to operate in disks around stellar-mass compact objects that accrete from a donor star \citep{van-Paradijs96,Lasota01}. For black holes with a mass of $10^{8}\,M_{\odot}$, a similar mechanism \citep{Mineshige90,Siemiginowsk97,Janiuk02} could lead to outbursts with a recurrence time of $10^{3-6}\,~{\rm yr}$ \citep{Hatziminaogl01,Czerny09}. However, \citet*{Hameury09} argue that due to the high Mach number in AGN accretion disks, this quiescent time could be much shorter, and therefore large outbursts due to limit-cycle oscillations may not be expected for AGN.
The tidal disruption of a star can also occur in a Seyfert galaxy \citep[e.g.,][]{Drake11,Blanchard17}---perhaps such events could explain some of the changing-look AGN \citep{Merloni15,Wyrzykowski17}. Identifying observations that can discriminate between an AGN disk instability triggered by a star passing through the disk and a TDE inside a galaxy that also harbors an AGN is very challenging; these events are a mix of two phenomena that are not yet completely understood even if they occur in isolation. Fortunately, we can largely avoid this problem if we consider only TDF candidates from galaxies that are not classified as AGN before the optical outburst.
One signature is unique to a TDE and can thus be used to rule out most---or perhaps all---TDF impostor scenarios. A TDE requires that the star passes within the tidal radius,
\begin{equation}\label{eq:Rt}
R_{t} = R_{\rm star} \left(\frac{M_{\bullet}}{M_{\rm star}} \right)^{1/3} \sim 25 R_{s} \left(\frac{M_{\bullet}}{10^{6}M_{\odot}}\right)^{{-2/3}} \quad.
\end{equation}
Here we expressed the tidal radius for a solar-type star in units of the Schwarzschild radius ($R_{s}\equiv 2GM_{\bullet}/c^{2}$). For a black hole mass of $M_{\bullet} \approx 10^{8} M_{\odot}$ the tidal radius of a solar-type star is equal to the Schwarzschild radius, and the results of the disruption will not be visible to an observer outside the black hole event horizon \citep{Hills75}. This critical black hole mass is sometimes referred to as the Hills mass.
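As a numerical illustration of Eq.~\ref{eq:Rt}, the sketch below (ours; a Newtonian estimate for a solar-mass, solar-radius star, with standard physical constants) recovers both the $\sim$25$R_s$ scaling and the Hills mass:

```python
# Illustrative only: Newtonian estimate of the Hills mass, i.e. the black
# hole mass at which the tidal radius of a solar-type star (Eq. 1) shrinks
# to the Schwarzschild radius. The relativistic, spin-dependent treatment
# of Kesden (2012) modifies this limit for spinning black holes.
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8        # speed of light [m s^-1]
M_sun = 1.989e30   # solar mass [kg]
R_sun = 6.957e8    # solar radius [m]

def tidal_radius(M_bh, M_star=M_sun, R_star=R_sun):
    """R_t = R_star * (M_bh / M_star)^(1/3)."""
    return R_star * (M_bh / M_star) ** (1.0 / 3.0)

def schwarzschild_radius(M_bh):
    """R_s = 2 G M_bh / c^2."""
    return 2.0 * G * M_bh / c**2

# R_t / R_s ~ 24 at M_bh = 1e6 M_sun, matching the "~25" in Eq. 1
ratio_1e6 = tidal_radius(1e6 * M_sun) / schwarzschild_radius(1e6 * M_sun)

# Solving R_t = R_s for M_bh gives the Hills mass:
M_hills = (R_sun * c**2 / (2.0 * G)) ** 1.5 / M_sun**0.5
print(M_hills / M_sun)  # ~1.1e8 solar masses
```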
Repeating the calculation behind Eq.~\ref{eq:Rt} within the framework of General Relativity, one finds that the Hills mass increases with black hole spin \citep{Kesden12b}. For spinning black holes, the outcome of the disruption also depends on the orientation between the black hole angular momentum vector and the star's orbital plane; after averaging over all inclinations, \citet{Kesden12b} finds that the effect of the black hole horizon yields a superexponential suppression of the TDF rate for $M_{\bullet}>M_{\rm Hills}$.
The goal of this work is to use the expected suppression in the flare rate due to a black hole horizon to demonstrate that optically selected TDF candidates are indeed due to the tidal disruption of a star.
The outline of this paper is as follows. We will first present our compilation of TDF candidates in Sec.~\ref{sec:sample}. Next, we compute the luminosity function (LF) and host galaxy mass function for this sample (Sec.~\ref{sec:LF}). We then use forward modeling to reproduce the observed distribution of black hole mass and galaxy mass (Sec.~\ref{sec:mock}). We discuss the implications of this result (Sec.~\ref{sec:discussion}) and close with a list of conclusions (Sec.~\ref{sec:conclusion}). We adopt a flat cosmology with $\Omega_{\Lambda}=0.7$ and $H_{0}=70\,{\rm km}{\rm s}^{-1}\,{\rm Mpc}^{-1}$. All magnitudes are in the AB \citep{oke74} system.
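The adopted cosmology enters wherever observed fluxes and redshifts are converted into luminosities (e.g., $L_g$ and $z_{\rm max}$ below). A minimal sketch (ours), assuming $\Omega_m=1-\Omega_\Lambda=0.3$ and using a plain Simpson integration in place of a cosmology library:

```python
# Minimal sketch of the adopted flat cosmology (Omega_L = 0.7, H0 = 70);
# Omega_m = 0.3 follows from flatness. Simpson integration stands in for
# a library such as astropy.cosmology.
import math

C_KMS, H0, OMEGA_M, OMEGA_L = 299792.458, 70.0, 0.3, 0.7

def E(z):
    """Dimensionless Hubble parameter for flat LCDM."""
    return math.sqrt(OMEGA_M * (1 + z)**3 + OMEGA_L)

def luminosity_distance(z, n=1000):
    """D_L in Mpc via composite Simpson integration of dz / E(z)."""
    h = z / n
    s = 1.0 / E(0) + 1.0 / E(z)
    for i in range(1, n):
        s += (4 if i % 2 else 2) / E(i * h)
    d_c = (C_KMS / H0) * (h / 3) * s   # comoving distance [Mpc]
    return (1 + z) * d_c

# e.g., ASASSN-14li at z = 0.021:
print(round(luminosity_distance(0.021)))  # ~91 Mpc
```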
\subsection{TDE or TDF?}
In the literature, both tidal disruption event (TDE) and tidal disruption flare (TDF) are used to label transients due to stellar disruptions. In this work, we use TDE to refer to the general concept of the disruption of a star by a black hole, while TDF is used only for the electromagnetic result of this disruption. This distinction is subtle, yet useful, since not every TDE may lead to a TDF \citep[e.g.,][]{Guillochon15}.
\begin{deluxetable*}{l l l c c c c c c}
\tablewidth{0pt}
\tablecolumns{9}
\tablecaption{Sample of 17 candidate TDFs. }
\tablehead{name & R.A. & Decl. & $m_{\rm max}$ & $L_{g}$ & $T$ & $L_{\rm bb}$ & $z$ & $z_{\rm max}$ \\
& (J2000) & (J2000) & & ($\log_{10} {\rm erg}\,{\rm s}^{-1})$ & ($\times 10^{4}$ K) & ($\log_{10} {\rm erg}\,{\rm s}^{-1})$ & & \\}
\startdata
GALEX-D1-9 & 02:25:16.96& $-$04:32:59.1 & 22.4 (NUV) & $42.3$ & 5.6 & 44.1 & 0.326 & 0.554\\
GALEX-D3-13 & 14:19:29.78& $+$52:52:06.3 & 22.2 (NUV)$^*$ & $42.7$ & 4.9 & 44.3 & 0.370 & 0.821\\
GALEX-D23H-1 & 23:31:59.53& $+$00:17:14.5 & 20.9 (NUV) & $42.3$ & 4.9 & 43.9 & 0.185 & 0.517\\
SDSS-TDE1 & 23:42:01.40& $+$01:06:29.2 & 21.0 ($r$)$^*$ & $42.7$ & 2.4 & 43.5 & 0.136 & 0.174\\
SDSS-TDE2 & 23:23:48.61& $-$01:08:10.2 & 20.3 ($r$)$^*$ & $43.5$ & 1.8 & 44.0 & 0.256 & 0.469\\
PS1-10jh & 16:09:28.27& $+$53:40:23.9 & 19.8 ($g$) & $43.2$ & 2.9 & 44.2 & 0.170 & 0.409\\
PS1-11af & 09:57:26.81& $+$03:14:00.9 & 21.4 ($g$) & $43.3$ & 1.9 & 43.9 & 0.405 & 0.426\\
PTF-09ge & 14:57:03.18& $+$49:36:40.9 & 17.7 ($r$) & $43.4$ & 2.2 & 44.1 & 0.064 & 0.155\\
PTF-09axc & 14:53:13.07& $+$22:14:32.2 & 19.1 ($r$) & $43.2$ & 1.2 & 43.5 & 0.115 & 0.138\\
PTF-09djl & 16:33:55.97& $+$30:14:16.6 & 19.6 ($r$) & $43.5$ & 2.6 & 44.4 & 0.184 & 0.179\\
ASASSN-14ae & 11:08:40.11& $+$34:05:52.2 & 17.0 ($g$)$^*$ & $43.2$ & 2.1 & 43.9 & 0.044 & 0.051\\
ASASSN-14li & 12:48:15.23& $+$17:46:26.4 & 16.8 ($g$)$^*$ & $42.6$ & 3.5 & 43.8 & 0.021 & 0.025\\
ASASSN-15oi & 20:39:09.14& $-$30:45:20.6 & 16.2 ($g$)$^*$ & $43.6$ & 2.5 & 44.4 & 0.048 & 0.082\\
ASASSN-15lh & 22:02:15.44& $-$61:39:34.5 & 16.5 ($g$) & $44.8$ & 2.5 & 45.6 & 0.233 & 0.347\\
iPTF-15af & 08:48:28.13& $+$22:03:33.4 & -- & -- & -- & -- & 0.079 & 0.101\\
iPTF-16axa & 17:03:34.34& $+$30:35:36.6 & 18.5 ($r$)$^*$ & $43.5$ & 3.0 & 44.5 & 0.108 & 0.135\\
iPTF-16fnl & 00:29:57.05& $+$32:53:37.2 & 17.4 ($r$) & $42.3$ & 3.0 & 43.4 & 0.016 & 0.034\\
\enddata
\tablecomments{The third column, $m_{\rm max}$, lists the maximum observed apparent magnitude and the relevant filter for each survey; an asterisk indicates that the peak of the light curve was not resolved. The fourth and fifth columns give the rest-frame $g$-band luminosity, as computed using the observed blackbody temperature ($T$) to make the k-correction. The last column lists the maximum redshift where this flare could have been detected given the survey effective flux limit.}
\label{tab:TDFs}
\end{deluxetable*}
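The rest-frame $g$-band luminosities in Table~\ref{tab:TDFs} are k-corrected along a blackbody of the observed temperature. The sketch below (ours) illustrates only the spectral-shape part of such a correction; the band wavelength and the monochromatic approximation are our assumptions, not values taken from the analysis:

```python
# Hedged sketch of the spectral-shape part of a blackbody k-correction.
# An observed band at wavelength lam_band samples the emitted spectrum at
# (1+z) times the band frequency; scaling along a Planck curve of the
# fitted temperature T converts the inferred L_nu to the rest-frame band.
import math

H_PLANCK = 6.626e-34   # J s
K_B = 1.381e-23        # J K^-1
C_LIGHT = 2.998e8      # m s^-1

def planck_nu(nu, T):
    """Planck spectral shape B_nu (normalization irrelevant for ratios)."""
    return nu**3 / (math.exp(H_PLANCK * nu / (K_B * T)) - 1.0)

def kcorr_factor(z, T, lam_band=4770e-10):
    """Multiply L_nu inferred at the emitted frequency (1+z)*nu_band by
    this factor to get the rest-frame band luminosity L_nu(nu_band).
    The g-band wavelength default (4770 A) is an assumption."""
    nu_band = C_LIGHT / lam_band
    return planck_nu(nu_band, T) / planck_nu((1.0 + z) * nu_band, T)

# For a hot (T ~ 3e4 K) blackbody the factor lies below unity at z > 0,
# since the band then samples the brighter, bluer part of the spectrum.
print(kcorr_factor(0.2, 3.0e4))
```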
\begin{deluxetable*}{l c c c c c c c c c c}
\tablewidth{0pt}
\tablecolumns{10}
\tablecaption{Properties of the TDF host galaxies. }
\tablehead{name & $m_{g}$ & $m_{r}$ & $m_{K}$ & $M_{r}$ & $M_{g}$ & gal. mass & $\sigma$ & $M_{\bullet}$ & $z_{\rm max, \sigma}$ \\
& & & & & & $(\log_{10} M_{\odot})$ & (km\,s$^{-1})$ & $(\log_{10} M_{\odot})$ & \\}
\startdata
GALEX-D1-9 & $22.01\pm0.12$ & $20.92\pm0.05$ & $19.29\pm0.01$ & $-20.5$ & $-20.0$ & 10.3 & $ - $ & $ - $ & 0.324 \\
GALEX-D3-13 & $21.97\pm0.07$ & $20.45\pm0.03$ & $18.40\pm0.01$ & $-21.5$ & $-20.8$ & 10.7 & $ 133 \pm 6$ & $7.4\pm 0.4$ & 0.375 \\
GALEX-D23H-1 & $20.09\pm0.03$ & $19.24\pm0.02$ & $17.81\pm0.07$ & $-20.7$ & $-20.1$ & 10.3 & $ 77 \pm18$ & $6.4\pm 0.6$ & 0.395 \\
SDSS-TDE1 & $20.27\pm0.02$ & $19.24\pm0.02$ & $17.93\pm0.07$ & $-19.9$ & $-19.2$ & 10.1 & $ 126 \pm 7$ & $7.3\pm 0.4$ & 0.275 \\
SDSS-TDE2 & $20.79\pm0.05$ & $19.50\pm0.02$ & $18.02\pm0.07$ & $-21.3$ & $-20.6$ & 10.6 & $ - $ & $ - $ & 0.410 \\
PS1-10jh & $21.91\pm0.08$ & $21.05\pm0.05$ & $19.88\pm0.02$ & $-18.6$ & $-18.1$ & 9.5 & $ 65 \pm 3$ & $6.1\pm 0.4$ & 0.170 \\
PS1-11af & $22.89\pm0.23$ & $21.35\pm0.09$ & $19.59\pm0.23$ & $-20.7$ & $-20.1$ & 10.1 & $ - $ & $ - $ & 0.284 \\
PTF-09ge & $17.91\pm0.01$ & $17.13\pm0.01$ & $16.71\pm0.10$ & $-20.2$ & $-19.5$ & 10.1 & $ 72 \pm 6$ & $6.2\pm 0.4$ & 0.330 \\
PTF-09axc & $18.66\pm0.01$ & $18.04\pm0.01$ & $17.60\pm0.15$ & $-20.7$ & $-20.2$ & 10.0 & $ 60 \pm 4$ & $5.9\pm 0.4$ & 0.411 \\
PTF-09djl & $20.57\pm0.03$ & $19.70\pm0.02$ & $18.54\pm0.11$ & $-20.2$ & $-19.6$ & 10.1 & $ 64 \pm 7$ & $6.0\pm 0.5$ & 0.323 \\
ASASSN-14ae & $17.27\pm0.01$ & $16.64\pm0.01$ & $16.23\pm0.08$ & $-19.9$ & $-19.2$ & 9.8 & $ 53 \pm 2$ & $5.7\pm 0.4$ & 0.291 \\
ASASSN-14li & $15.98\pm0.00$ & $15.47\pm0.00$ & $14.98\pm0.04$ & $-19.3$ & $-18.8$ & 9.6 & $ 78 \pm 2$ & $6.4\pm 0.4$ & 0.249 \\
ASASSN-15oi & $17.44\pm0.01$ & $16.79\pm0.02$ & $15.90\pm0.07$ & $-19.9$ & $-19.3$ & 9.9 & $ - $ & $ - $ & 0.301 \\
ASASSN-15lh & $19.59\pm0.10$ & $18.32\pm0.10$ & $17.01\pm0.12$ & $-22.2$ & $-21.4$ & 10.8 & $ 225 \pm15$ & $8.3\pm 0.4$ & 0.571 \\
iPTF-15af & $18.36\pm0.01$ & $17.49\pm0.01$ & $17.02\pm0.11$ & $-20.4$ & $-19.6$ & 10.2 & $ 106 \pm 2$ & $7.0\pm 0.4$ & 0.341 \\
iPTF-16axa & $19.33\pm0.02$ & $18.46\pm0.01$ & $17.92\pm0.17$ & $-20.1$ & $-19.4$ & 10.1 & $ 82 \pm 3$ & $6.5\pm 0.4$ & 0.314 \\
iPTF-16fnl & $15.22\pm0.00$ & $14.72\pm0.00$ & $14.18\pm0.07$ & $-19.5$ & $-19.0$ & 9.8 & $ 55 \pm 2$ & $5.8\pm 0.4$ & 0.277 \\
\enddata
\tablecomments{The apparent magnitudes ($m_{r}$, $m_{g}$, and $m_{K}$) are corrected for Galactic extinction using the \citet{Schlafly11} extinction maps. Uncertainties on the apparent magnitudes include only the statistical uncertainty. The absolute magnitudes are computed in the rest frame of the host galaxy.
The total stellar mass of the galaxy is estimated from the broadband ({\it ugrizJHK}) photometry. The velocity dispersion ($\sigma$) measurements are from the sample of \citet{Wevers17} (with the exception of ASASSN-15lh). The last column lists the maximum redshift for inclusion of the galaxy in the \citet{Wevers17} sample.}
\label{tab:hosts}
\end{deluxetable*}
\section{Observed TDFs}\label{sec:sample}
We will restrict our sample of candidate TDFs to nuclear flares found in optical/UV imaging surveys. The first motivation for this choice is that candidate TDFs from optical imaging surveys show a number of common properties (discussed below), which justifies treating these flares as one class in our analysis. A further motivation is that other methods to find TDFs, such as X-ray surveys or searches for extreme coronal line emitters in spectroscopic galaxy samples \citep{Komossa08}, require more assumptions to estimate the event rate. For these surveys, the cadence is (much) lower than the duration of the flare, and estimating the volumetric flare rate requires a light-curve model (or one could instead measure the snapshot rate; see \citealt*{Auchettl17}).
We exclude flares found in galaxies that can be classified as a broad-line AGN or Seyfert, but we include flares from LINERs \citep{heckman80}, leaving 17 sources (Table~\ref{tab:TDFs}). { Our selection requirements and final sample are similar to the candidate TDF samples presented recently \citep{Hung17,Wevers17}. The only difference is that this earlier work used stricter requirements on either the TDF light-curve sampling (to be able to measure the decay rate) or the host brightness and Declination (to allow efficient spectroscopic follow-up).}
These 17 flares share a number of properties: a high blackbody temperature ($T=[1-3] \times 10^{4}$~K) and nearly constant colors (e.g., \citealt{Hung17}, Fig.~11). The optical/UV light curves of the TDFs in our sample are also all consistent with a power-law decay; in all cases where monitoring observations cover more than one year of the light curve, an exponential decay can be ruled out \citep{vanVelzen11,Arcavi14,Gezari15,vanVelzen16,Brown16,Brown16b}.
In Table~\ref{tab:TDFs} we summarize the observed properties of the flares in our sample. To measure the rest-frame $g$-band luminosity ($L_{g}$) we used the observed blackbody luminosity to compute the k-correction \citep{Hogg02}.
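The blackbody k-correction used to obtain the rest-frame $g$-band luminosity can be sketched as follows. This is a minimal illustration only: the effective $g$-band frequency and the sign convention are our assumptions (conventions differ between authors; we follow the $(1+z)$-bandwidth convention of \citealt{Hogg02}), not details taken from our pipeline.

```python
import math

H_OVER_K = 6.626e-34 / 1.381e-23  # h / k_B in K s

def bb_spectral_shape(nu, T):
    """Unnormalized blackbody spectral density B_nu ~ nu^3 / (exp(h nu / k T) - 1)."""
    return nu**3 / math.expm1(H_OVER_K * nu / T)

def k_correction(z, T, nu_obs=6.3e14):
    """Blackbody k-correction (mag) from the observed band to the same
    rest-frame band, for a source at redshift z with blackbody temperature T:
    K = -2.5 log10[(1+z) * B_nu(nu_obs*(1+z), T) / B_nu(nu_obs, T)].
    nu_obs ~ 6.3e14 Hz is an assumed effective g-band frequency.
    """
    ratio = bb_spectral_shape(nu_obs * (1.0 + z), T) / bb_spectral_shape(nu_obs, T)
    return -2.5 * math.log10((1.0 + z) * ratio)
```

At $z=0$ the correction vanishes by construction; for the hot blackbodies in our sample the optical bands sit on the Rayleigh--Jeans tail, so the correction is negative (the rest-frame band is brighter).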
\begin{deluxetable}{l c c c c r}
\tablewidth{0pt}
\tablecolumns{6}
\tablecaption{Survey properties. }
\tablehead{Survey & $N_{\rm TDF}$ & $m_{\rm lim}$ & band & $z_{\rm max^{*}}$ & {$(A\times \tau)^{*}$}\\
& & & & & (deg$^{2}$~yr)\\}
\startdata
GALEX & 3 & 23.0 & NUV & 0.393 & 17 \\
SDSS & 2 & 21.5 & r & 0.140 & 202 \\
PS1 & 2 & 21.5 & g & 0.168 & 120 \\
PTF & 3 & 19.5 & r & 0.054 & 3000 \\
iPTF & 3 & 19.5 & r & 0.054 & 5032 \\
ASAS-SN & 4 & 17.3 & g & 0.023 & 82637 \\
\enddata
\tablecomments{The second column lists the number of candidate TDFs from each survey. The typical maximum redshift for each survey ($z_{\rm max*}$) follows from the requirement that a flare with a peak luminosity of $L_{g}=10^{42.5}\,{\rm erg}\,{\rm s}^{-1}$ is detectable above the effective flux limit ($m_{\rm lim}$). From this redshift we can compute (Eq.~\ref{eq:rate}) the effective area times the survey duration.}
\label{tab:surveys}
\end{deluxetable}
Measurements of the velocity dispersion of the host galaxy have been obtained by \citet{Wevers17} for 12 of the 17 TDFs in our sample. This sample includes only sources at Declination $>0$ and is complete for a host galaxy flux limit of $m_{g}<22$ and $m_{r}<21$.
The stellar mass of the TDF host galaxies is estimated from broadband optical to near-IR photometry using \verb|kcorrect| \citep{blanton07}. The same software and wavelength range was used to estimate the mass of galaxies that are input for our synthetic sample of potential TDF host galaxies (see Sec.~\ref{sec:mock}). We use the SDSS \citep{york02} Petrosian \citep{blanton01,stoughton02} magnitudes (the treatment of the few flares outside the SDSS footprint is discussed below). The IR flux in the {\it J, H}, and {\it K} bands is measured from images of 2MASS \citep{Skrutskie97,Jarrett00} using a circular aperture with a radius equal to the 90\% light radius in $r$-band. When available, we substitute the 2MASS images with UKIDSS images \citep{Lawrence07,Hambly08} since the latter provide a better signal-to-noise ratio.
The TDFs in our sample were discovered by different surveys, each with their own selection function and detection efficiency. Since the detection efficiency of most of these surveys is unknown, we cannot use our sample to obtain an {\it absolute} measurement of the event rate or luminosity function. However, since each survey discovers events from the same parent distribution, we can use the detected number of TDF candidates in each survey to compare the selection efficiencies and thus obtain the {\it relative} luminosity function.
The detected number of flares in a given imaging survey is a linear function of the survey area, efficiency, and survey duration and a nonlinear function of the survey effective flux limit ($m_{\rm lim}$). Because multiple detections or spectroscopic follow-up observations of the flare are often required to identify a transient as a candidate TDF, the effective flux limit is typically larger than the single-epoch detection limit of the survey images. We estimate $m_{\rm lim}$ from the observed distribution of the flare's apparent magnitude near peak in each survey.
The effective flux limit can be used to compute the maximum redshift, $z_{\rm max}$, for the detection and identification of a flare with a given peak luminosity ($L_{g}$) and temperature.
For each of the surveys in our sample we can thus compute the volume in which a typical TDF can be detected. Here we define a typical TDF as a flare with a peak luminosity of $L_{g}^{*} = 10^{42.5}$~erg~s$^{-1}$ and temperature $T^{*} = 2.5 \times 10^{4}$~K. The number of detected TDFs in each survey can now be estimated as {
\begin{equation}\label{eq:rate}
N_{\rm TDF,\,detected} \approx \dot{N} \, V(z_{\rm max*}) ~ A_{\rm survey} \times \tau_{\rm survey} \quad.
\end{equation}
Here $A_{\rm survey}\times \tau_{\rm survey}$ is used to label the product of the effective survey area and duration. $V(z_{\rm max*})$ denotes the comoving volume (per solid angle) corresponding to $z_{\rm max}$}. Since the flares in our sample span a relatively narrow redshift range, we may assume that each survey is sensitive to the same event rate ($\dot{N}$), and thus the number of TDFs found in each survey can be used to estimate $A_{\rm survey}\times \tau_{\rm survey}$. In Table~\ref{tab:surveys} we summarize the results of this exercise. To set the normalization of $A_{\rm survey}$, we adopted a mean rate of $\dot{N} = 5 \times 10^{-7} {\rm Mpc}^{-3} {\rm yr}^{-1}$, which was chosen to match the volumetric rate based on SDSS and ASAS-SN data \citep{vanVelzen14,Holoien16}.
Comparing the value of $A_{\rm survey}\times \tau_{\rm survey}$ obtained from Eq.~\ref{eq:rate} with the area and duration of each survey yields an estimate of the mean detection efficiency. For example, our estimate of {$(A \times \tau)^{*}$} for the ASAS-SN and GALEX TDF searches is similar to the total area and duration of these surveys, implying a high efficiency for detecting and identifying TDFs.
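As a concrete illustration, the bookkeeping of Eq.~\ref{eq:rate} can be inverted to recover $A\times\tau$ from the detected counts. The sketch below assumes a flat $\Lambda$CDM cosmology with $H_{0}=70$~km~s$^{-1}$~Mpc$^{-1}$ and $\Omega_{m}=0.3$ (our assumed parameters) and the mean rate $\dot{N}=5\times10^{-7}\,{\rm Mpc}^{-3}\,{\rm yr}^{-1}$ adopted above.

```python
import math

C_KMS, H0, OMEGA_M = 299792.458, 70.0, 0.3  # assumed flat LCDM cosmology

def comoving_distance(z, steps=1000):
    """Line-of-sight comoving distance in Mpc (flat LCDM, trapezoid rule)."""
    dz = z / steps
    integral = 0.0
    for i in range(steps + 1):
        zi = i * dz
        e = math.sqrt(OMEGA_M * (1.0 + zi) ** 3 + (1.0 - OMEGA_M))
        w = 0.5 if i in (0, steps) else 1.0
        integral += w / e * dz
    return C_KMS / H0 * integral

def area_times_duration(n_detected, z_max, rate=5e-7):
    """Invert N = rate * V(z_max) * (A tau); returns A*tau in deg^2 yr.
    V(z_max) is the comoving volume per steradian of a flat universe."""
    vol_per_sr = comoving_distance(z_max) ** 3 / 3.0   # Mpc^3 per sr
    sr_yr = n_detected / (rate * vol_per_sr)
    return sr_yr * (180.0 / math.pi) ** 2              # deg^2 yr
```

For the SDSS entry in Table~\ref{tab:surveys} ($N_{\rm TDF}=2$, $z_{\rm max}=0.140$) this returns roughly $2\times10^{2}$~deg$^{2}$~yr, in line with the tabulated value.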
\subsection{Input Surveys}
In the following subsections we briefly discuss the surveys that provided the input for our compilation of TDF candidates.
\subsubsection{GALEX}
Three flares in our sample were found by searching for transients in GALEX \citep{Martin05} multi-epoch imaging in the near-UV (NUV) and far-UV (FUV) bands: GALEX-D3-13 \citep{Gezari06}, GALEX-D1-9 \citep{Gezari08}, and GALEX-D23H-1 \citep{Gezari09}. The search was conducted using $\approx 5$ yr of GALEX observations of four extragalactic fields (each covering $\sim 1$~deg$^{2}$ of the sky).
Candidate TDFs were selected as transient UV sources from inactive galaxies. Active galaxies were identified using the large body of archival spectroscopic observations that are available for these fields, complemented by spectroscopic follow-up observations where necessary \citep{Gezari08}. For this survey, we adopt an effective flux limit of $m<23$ in the NUV band. The source D3-13 is located in the CANDELS \citep{Grogin11,Koekemoer11} footprint, and we use the WIRCam data \citep{Bielby12} cataloged by \citet{Stefanon17} to measure the near-IR flux of its host galaxy.
\subsubsection{SDSS Stripe~82}
Two flares in our sample, TDE1 and TDE2 \citep{vanVelzen10}, were found by searching for transients in SDSS Stripe~82 multi-epoch imaging data \citep{frieman08,Abazajian09}, covering about 300~deg$^{2}$. This search used the SDSS $u$, $g$, and $r$ filters and selected nuclear transients from inactive galaxies. Active galaxies were identified using SDSS spectra, colors, and optical variability \citep{vanVelzen10}. For this survey, we adopt an effective flux limit of 21.5 in the $r$ band.
\subsubsection{Pan-STARRS}
Two flares in our sample originate from the Pan-STARRS \citep{Chambers07,Chambers16} Medium Deep (PS1 MD) fields: PS1-10jh \citep{Gezari12} and PS1-11af \citep{Chornock14}. These two candidate TDFs did not originate from a single search, but since the PS1 MD fields all have a similar single-epoch flux limit, we will treat them as originating from one survey. For this survey, we adopt an effective flux limit of 21.5 in the $g$ band.
\subsubsection{PTF}\label{sec:PTF}
Three flares in our sample originate from the analysis of \citet{Arcavi14} using the Palomar Transient Factory \citep[PTF;][]{Law09}: PTF-09ge, PTF-09djl, and PTF-09axc. These TDF candidates were obtained by selecting nuclear transients from PTF imaging data that have received spectroscopic follow-up observations and have a peak $R$-band absolute magnitude in the range $-21<M_{r}<-19$. For this survey, we adopt an effective flux limit of 19.5 in the $r$ band.
Since the PTF search has a restriction on the TDF luminosity, our method will underestimate the effective area ($A_{\rm survey}$ via Eq.~\ref{eq:rate}) if the rate decreases with increasing flare luminosity. The TDF LF (discussed in the next section) indeed has a negative slope, and we modify our estimate of $A_{\rm survey}$ (listed in Table~\ref{tab:surveys}) to take this into account.
Besides the three sources presented by \citet{Arcavi14}, the PTF survey has yielded one more TDF candidate: PTF10iya \citep{Cenko12}. We exclude this source from our compilation since the WISE \citep{Wright10} colors provide strong evidence for a persistent AGN. Using the flux measured by \citet*{Lang14b}, we find $W1-W2=0.8\pm 0.1$, similar to colors of low-redshift quasars \citep{Stern12}. Furthermore, the mid-infrared light curve of this source, derived by including the NEOWISER catalog \citep{Mainzer14}, shows variability both before and after the discovery of the optical transient, which is unlike the observed WISE light curves of other TDFs in our sample \citep{vanVelzen16b,Jiang16}.
\subsubsection{iPTF}
Three flares in our sample originate from iPTF, which is the successor of PTF: iPTF-15af (N. Blagorodnova et al., in prep), iPTF-16axa \citep{Hung17}, and iPTF-16fnl \citep{Blagorodnova17}. The iPTF search was conducted with the same telescope and camera as PTF, but the cadence and follow-up strategy were different. Contrary to the PTF search by \citet{Arcavi14}, the three flares from iPTF were not selected based on their luminosity, but based on their color and spectral similarity to previous TDFs. For iPTF we adopt $m<19.5$ in the $r$ band as the effective flux limit. For the flare iPTF-16fnl, we use the blackbody temperature reported by \citet{Brown18}.
\subsubsection{ASAS-SN}
Four flares in our sample originate from ASAS-SN \citep{Shappee14}: ASASSN-14ae \citep{Holoien14}, ASASSN-14li \citep{Holoien16}, ASASSN-15oi \citep{Holoien16b}, and ASASSN-15lh \citep{Dong16}. The nature of the fourth flare, ASASSN-15lh, is controversial: both a supernova \citep{Dong16,Godoy-Rivera17} and a TDF \citep{Leloudas16,Margutti17} have been proposed. In this paper we will consider both possible origins separately. For ASAS-SN we adopt an effective flux limit that is similar to the image flux limit, $m<17.3$ in the $g$ band.
Two flares from this survey are outside the SDSS footprint. For ASASSN-15oi we use the Pan-STARRS catalog \citep{Flewelling16} to obtain the host photometry. For ASASSN-15lh we use the host galaxy magnitudes from the best-fit population synthesis model of \citet{Leloudas16}. The measurement of the velocity dispersion of the host galaxy of ASASSN-15lh is presented in \citet{Kruhler17}.
\begin{figure}
\includegraphics[trim=4mm 2mm 0mm 5mm, clip, width=0.48\textwidth]{./figs/Lg_func_wpl.pdf}
\caption{The TDF luminosity function (LF). The number of sources in these five bins is $\{4,2,3,3,1\}$ (low to high). The last bin contains the TDF candidate ASASSN-15lh. The dashed lines show the two power-law fits (Eq.~\ref{eq:rateVmax}), while the dotted line shows a Gaussian LF (Eq.~\ref{eq:rateVmax_gau}). The solid line presents our default model for the TDF luminosity function (see Sec.~\ref{sec:modelLFs}). }\label{fig:Lg_func}
\end{figure}
\section{Luminosity/Mass Functions}\label{sec:LF}
{ For a survey of sources with a constant flux, the LF (e.g., the number of quasars per Mpc$^{3}$ as a function of luminosity) can be estimated by weighting each source by the maximum volume, $V_{\rm max}$, in which the source can be detected \citep{Schmidt68}. For transients, we are interested in the volumetric rate (e.g., the number of SNe per cubic Mpc per year as a function of peak luminosity). To estimate the volumetric rate from a survey for transients, we can also use the ``1/$V_{\rm max}$'' method, but now the weight should include the duration of the survey. We therefore define
\begin{equation}
\mathcal{V}_{\rm max} \equiv V(z_{\rm max})~A_{\rm survey}\times\tau_{\rm survey} \quad.
\end{equation}
Here $A_{\rm survey}\times \tau_{\rm survey}$ denotes the product of the effective survey duration and survey area, and $V(z_{\rm max})$ is the volume (per unit solid angle) corresponding to the maximum redshift. As explained in Sec.~\ref{sec:sample}, the product of the effective survey duration and survey area follows from the detected number of TDF candidates, while the maximum volume follows from the survey flux limit and the peak luminosity of the transient.} In Figs.~\ref{fig:Lg_func} and \ref{fig:mass_func} we show $1/\mathcal{V}_{\rm max}$ binned by the peak $g$-band luminosity and galaxy mass, respectively.
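The ``$1/\mathcal{V}_{\rm max}$'' bookkeeping, including the \citet{Schmidt68} error estimate discussed below, can be sketched in a few lines (a schematic with made-up numbers; in our application $\mathcal{V}_{\rm max}$ carries units of Mpc$^{3}$\,yr):

```python
import math

def binned_rate(samples, edges):
    """1/Vmax estimate of the volumetric rate per luminosity bin.
    samples: list of (log10_L, Vmax) pairs; edges: bin edges in log10 L.
    Returns (rate, error) per bin, with rate = sum 1/Vmax and
    error = sqrt(sum 1/Vmax^2) (Schmidt 1968).
    """
    nbins = len(edges) - 1
    rate = [0.0] * nbins
    err2 = [0.0] * nbins
    for logl, vmax in samples:
        for i in range(nbins):
            if edges[i] <= logl < edges[i + 1]:
                rate[i] += 1.0 / vmax
                err2[i] += 1.0 / vmax ** 2
    return rate, [math.sqrt(e) for e in err2]
```

Two sources with the same $\mathcal{V}_{\rm max}$ in one bin give a rate of $2/\mathcal{V}_{\rm max}$ with a fractional uncertainty of $1/\sqrt{2}$.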
Since the PTF search for TDFs \citep{Arcavi14} used a luminosity selection (see~Sec.~\ref{sec:PTF}), we exclude these events when we compute the rate as a function of $L_{g}$. We also have to exclude iPTF-15af since the photometric data of this flare have not been published yet. We are thus left with $17-4=13$ sources. For TDFs with a measurement of the host galaxy velocity dispersion, the black hole mass is estimated from the $M$--$\sigma$ relation \citep{Ferrarese00,Gebhardt00}. We adopt the relation of \citet{Gultekin09}, obtained using both early-type and late-type galaxies. For the TDF host galaxies that have measured velocity dispersions, we compute the maximum volume using the lower value of $z_{\rm max}$ from the flux limit for the detection of the flare and the host galaxy flux limit for measuring the velocity dispersion---the former is the limiting factor for most sources (see the last column of Table~\ref{tab:TDFs} and Table~\ref{tab:hosts}).
The uncertainty on each bin of $\sum 1/\mathcal{V}_{\rm max}$ is estimated from $ \sum 1/\mathcal{V}_{\rm max}^{2}$ \citep{Schmidt68}. This yields a typical uncertainty of 0.3~dex for each bin, which is comparable to the Poisson uncertainty. For bins that contain only one source, we compute the uncertainty on the volumetric rate using the 1$\sigma$ confidence interval for Poisson statistics, [0.17, 3.41].
The sum of $1/\mathcal{V}_{\rm max}$ for all 13 TDF candidates that we use for the LF yields a rate of $(8\pm 4) \times 10^{-7}\, {\rm Mpc}^{-3}{\rm yr}^{-1}$.
The volumetric rate as a function of $L_{g}$ (Fig.~\ref{fig:Lg_func}) shows a steep decrease that can be parameterized as
\begin{equation}\label{eq:rateVmax}
\frac{d \dot{N}}{d \log_{10}L} = \dot{N}_{0} ~ (L / L_{0})^{a} \quad.
\end{equation}
For $L_{0}=10^{43}\,{\rm erg}\,{\rm s}^{-1}$, a least-squares fit yields $\dot{N}_{0} = (1.9 \pm 0.7) \times 10^{-7}\, {\rm Mpc}^{-3}\,{\rm yr}^{-1}$ and $a=-1.6 \pm 0.2$. If we exclude the luminous TDF candidate ASASSN-15lh, we find a shallower slope with a larger uncertainty: $a=-1.3\pm 0.3$ and $\dot{N}_{0} = (2.3 \pm 0.8 )\times 10^{-7}\, {\rm Mpc}^{-3}\,{\rm yr}^{-1}$. When excluding ASASSN-15lh, a Gaussian function
\begin{equation}\label{eq:rateVmax_gau}
\frac{d \dot{N}}{d \log_{10}L} = \dot{N}_{0'} \, \exp\left[-\left(\log_{10}(L / L_{0'})\right)^{2} / \left(2b^{2}\right) \right] \quad
\end{equation}
with $L_{0'}=10^{42.5} \, {\rm erg}\,{\rm s}^{-1}$, $b=0.4$, and $\dot{N}_{0'}=1\times 10^{-6}\, {\rm Mpc}^{-3}\,{\rm yr}^{-1}$, also provides a reasonable description of the LF (Fig.~\ref{fig:Lg_func}).
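Both parameterizations can be evaluated directly; the sketch below simply encodes Eqs.~\ref{eq:rateVmax} and \ref{eq:rateVmax_gau} with the best-fit numbers quoted in the text (full-sample power-law fit and the Gaussian alternative).

```python
import math

def powerlaw_lf(L, N0=1.9e-7, L0=1e43, a=-1.6):
    """Power-law LF: d(rate)/dlog10(L) = N0 * (L/L0)^a, in Mpc^-3 yr^-1 dex^-1."""
    return N0 * (L / L0) ** a

def gaussian_lf(L, N0=1e-6, L0=10**42.5, b=0.4):
    """Gaussian LF in log10(L) with width b dex, same units as above."""
    x = math.log10(L / L0)
    return N0 * math.exp(-x**2 / (2.0 * b**2))
```

Both functions are normalized at their respective pivot luminosities, and the power law falls steeply toward high $L$.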
\begin{figure}
\includegraphics[trim=4mm 2mm 0mm 5mm, clip, width=0.48 \textwidth]{./figs/mass_func.pdf}
\caption{The TDF host galaxy stellar mass function. The number of sources in these four bins is $\{5,7,3,1\}$ (low to high). The highest-mass bin contains the TDF candidate ASASSN-15lh. The dashed line shows a galaxy mass function \citep{Baldry12}, multiplied by a constant TDF rate of $10^{-4}$ galaxy$^{-1}$ yr$^{-1}$.}\label{fig:mass_func}
\end{figure}
To convert our measurement of the volumetric TDF rate to a rate per galaxy, we compute the volumetric rate as a function of total stellar mass and divide by the stellar mass function of \citet{Baldry12}. For a stellar mass in the range $10^{9.5}<M_{\rm galaxy}/M_{\odot}<10^{10.5}$ a constant rate of $10^{-4}$ galaxy$^{-1}$ yr$^{-1}$ is consistent with our observations (Fig.~\ref{fig:mass_func}).
The rate as a function of black hole mass (Fig.~\ref{fig:MBH_func}) is also observed to be roughly constant for $M_{\bullet}<10^{7.5}\,M_{\odot}$.
However, the high luminosity of the TDF candidate ASASSN-15lh yields a very large $\mathcal{V}_{\rm max}$ and thus implies a rapid decrease of the volumetric rate for $M_{\bullet}\gtrsim 10^{7.5}\,M_{\odot}$.
The decrease of the rate toward the highest-mass bin is at least 3 orders of magnitude.
Arguably the only conceivable mechanism that can yield such an extreme turnover is the suppression of the flare rate by the black hole horizon.
We can thus conclude that {\it if} ASASSN-15lh is a member of the TDE family, the population of observed TDFs as a whole is consistent with the predicted suppression of the rate due to the direct capture of stars by the black hole.
However, if ASASSN-15lh is not due to a TDE, the mass function of the remaining TDFs in our sample is not a useful tool to measure rate suppression. Instead, we need to compare the observed mass distribution to the expected distribution in a flux-limited sample. This requires a forward-modeling approach, which is explained in the next section.
\begin{figure}
\includegraphics[trim=4mm 2mm 0mm 5mm, clip, width=0.48 \textwidth]{./figs/MBH_func.pdf}
\includegraphics[trim=4mm 2mm 0mm 5mm, clip, width=0.48 \textwidth]{./figs/MBH_func_more.pdf}
\caption{The TDF host galaxy black hole mass function. The number of sources in these four bins is $\{5,4,2,1\}$ (low to high). The highest-mass bin contains the TDF candidate ASASSN-15lh. In the top panel, the dashed line shows the \citet{Shankar04} black hole mass function multiplied with a constant TDF rate of $6\times 10^{-5}$ per black hole per year. The solid line shows the result of using this mass function as input to our model of the TDF rate (Eq.~\ref{eq:modelrate}). The dotted line indicates the mass function that would be obtained if the wait time between flares scales linearly with black hole mass. In the bottom panel, we compare four different predictions for the scaling of the disruption rate below the Hills mass (Sec.~\ref{sec:modeleventrates}). } \label{fig:MBH_func}
\end{figure}
\section{Forward Modeling}\label{sec:mock}
In the previous section we used the $1/V_{\rm max}$ method to reconstruct the TDF LF and mass function. In this section, we start with a model for the flare luminosity function and event rate and try to reproduce the observed distribution of luminosity and host galaxy mass. This forward-modeling approach has two advantages over a $1/V_{\rm max}$ reconstruction. First of all, we can include additional selection criteria beyond the survey flux limit (e.g., the contrast between the flux of the host and flare). Second, we can assign a significance to the apparent lack of events from high-mass black holes.
Our forward-modeling method consists of four steps: (i) draw flares with a peak luminosity from a model LF; (ii) insert these flares into a flux-limited galaxy sample; (iii) assign each flare a weight based on the event rate in its host galaxy; (iv) sum these weights for the simulated flares that pass the requirement for detection in each survey. In the following four subsections we provide the details of these steps.
\begin{figure}
\includegraphics[trim=4mm 2mm 0mm 5mm, clip, width=0.48 \textwidth]{./figs/Lg_fluxlim.pdf}
\caption{Demonstrating the difference between a luminosity function (LF) and a flux-limited sample. The solid line shows the LF obtained from the $1/V_{\rm max}$ method (Fig.~\ref{fig:Lg_func}). The dashed line shows the result of our forward analysis: a mock TDF sample obtained after drawing flares from a power-law LF and applying the selection criteria of each survey. }\label{fig:Lg}
\end{figure}
\subsection{Model Luminosity Functions}\label{sec:modelLFs}
Ideally, a model for TDFs or TDF impostors would yield a prediction for the LF of these events that can be tested using the observed luminosity distribution (Fig.~\ref{fig:Lg}). However, these models are not yet mature enough to predict an LF from first principles. We therefore use a more empirical approach and only consider LFs that are known to reproduce the observed luminosity distribution. We will consider two different LFs: one for SNe and one for AGN flares and TDFs.
The observed LF from the 1/$V_{\rm max}$ method (Fig.~\ref{fig:Lg_func}) provides a good starting point for our empirical LF models. Indeed, if we draw TDFs from a power-law LF with $a=-1.5$ (see Eq.~\ref{eq:rateVmax}) and apply the survey selection criteria (see Sec.~\ref{sec:selection}), we reproduce the observed distribution of $L_{g}$ (see Fig.~\ref{fig:Lg}). We therefore use this power law as the model LF of nuclear SNe.
A simple power law is unlikely to provide a correct description of the LF of transients that are due to massive black holes, i.e., TDFs or AGN flares. Due to the abundance of low-mass galaxies, a power-law LF will yield too many transients with super-Eddington luminosities.
Motivated by the observation that the observed distribution of the Eddington ratio peaks near unity (Fig.~\ref{fig:fluxlim_fEdd}), we define the LF for AGN flares and TDFs as follows. We draw from the same power law that is used to model SNe, but we only accept flares with an Eddington ratio in the range $-3<\log_{10}(f_{\rm Edd})<0.3$.
{Here $f_{\rm Edd}\equiv L_{\rm bol} / L_{\rm Edd}$, with $L_{\rm Edd}=1.3\times 10^{38}\, M_{\bullet}$\,erg\,s$^{-1}$}. To assign a bolometric luminosity ($L_{\rm bol}$) to each simulated flare, we compute the blackbody luminosity by drawing a blackbody temperature from a lognormal distribution centered on $2.5\times 10^{4}$~K with a standard deviation of 0.15~dex (which provides a good description of the observed blackbody temperatures of the TDF candidates in our sample). The resulting LF for TDFs and AGN flares using this approach is shown in Fig.~\ref{fig:Lg_func}.
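A minimal Monte Carlo version of this recipe is sketched below. It is schematic: we identify the bolometric luminosity with the blackbody luminosity drawn from the power law, and the fixed black hole mass, luminosity range, and random seed are illustrative choices rather than values used in our pipeline.

```python
import math, random

def draw_log_l(a=-1.5, lo=42.0, hi=45.0, rng=random):
    """Inverse-CDF draw of log10 L from d(rate)/dlog10 L ~ L^a on [lo, hi]."""
    u = rng.random()
    p_lo, p_hi = 10.0 ** (a * lo), 10.0 ** (a * hi)
    return math.log10(p_lo + u * (p_hi - p_lo)) / a

def draw_flares(n, m_bh=1e6, seed=42):
    """Draw flares from the power-law LF and keep only those with
    -3 < log10(f_Edd) < 0.3, where f_Edd = L_bol / (1.3e38 * M_BH).
    m_bh = 1e6 Msun is an illustrative host black hole mass."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n):
        log_l = draw_log_l(rng=rng)
        # blackbody temperature: lognormal around 2.5e4 K with 0.15 dex scatter
        log_t = rng.gauss(math.log10(2.5e4), 0.15)
        f_edd = 10.0 ** log_l / (1.3e38 * m_bh)
        if -3.0 < math.log10(f_edd) < 0.3:
            kept.append((log_l, log_t))
    return kept
```

The Eddington cap mainly removes the brightest draws for low-mass hosts, which is what flattens the bright end of the resulting LF in Fig.~\ref{fig:Lg_func}.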
The next step is to insert the simulated flares into a host galaxy sample.
\subsection{Synthetic Galaxy Sample}\label{sec:syngal}
While large galaxy surveys like SDSS provide a good census of galaxy properties at $z<0.1$, many TDFs are found at higher redshift, where galaxy properties are more difficult to observe directly. We therefore need to construct a synthetic galaxy sample.
We use the galaxy LF measured by \citet{Cool12} to find the number of galaxies in bins of redshift and absolute magnitude, for blue and red galaxies separately. We populate each bin using the properties of real galaxies from the NYU Value-Added Galaxy Catalog \citep[NYU-VAGC;][]{Blanton05}, again for red and blue galaxies separately. In each bin, we compute the apparent magnitude using the median k-correction of the galaxies in that bin. We keep only bins with an apparent magnitude $m_{r}<22$.
Because we use a redshift-dependent LF for blue and red galaxies \citep{Cool12}, our synthetic flux-limited galaxy sample contains most effects of galaxy evolution (e.g., the increase of the quiescent-galaxy density to lower redshift). Our approach also accounts for the fact that blue galaxies are easier to detect at higher redshift (due to the k-correction). And finally, because we use real galaxies to populate each absolute magnitude bin, correlations of galaxy properties (e.g., mass or size) with luminosity are part of our sample. We confirmed that the median redshift of our synthetic galaxy sample, $\left<z\right>=0.47$, is consistent with the median photometric redshift of real galaxies (also selected with $m_{r}<22$) in the co-add of SDSS Stripe 82 \citep{Reis12,Annis14}.
The total stellar mass of each galaxy in NYU-VAGC is estimated from the 2MASS and SDSS broadband photometry using the \verb|kcorrect| software \citep{blanton07}. To estimate the star formation rate (SFR), we use the specific SFR within the SDSS spectroscopic fiber as measured by the MPA-JHU group \citep{Kauffmann03b,Brinchmann04}. To estimate the bulge mass, we use the bulge-to-total ratio ($B/T$) measured in $r$ band by \citet{Lackner12}. These measurements are not available for all NYU-VAGC galaxies, so we assigned each galaxy in our synthetic sample a $B/T$ using the nearest match in the 3D vector space spanned by $M_{r}$, $g-r$, and $r-i$.
The last quantity we wish to assign to our synthetic galaxy sample is the velocity dispersion. Unfortunately, the resolution of the SDSS spectrograph limits reliable measurements to $\sigma \gtrsim 100$~km~s$^{-1}$, which is larger than the typical velocity dispersion of TDF host galaxies \citep{Wevers17}. Following the approach of \citet{Bezanson11}, we use the virial theorem to estimate the velocity dispersion of each galaxy,
\begin{equation}\label{eq:sigma}
\sigma = \sqrt{\frac{GM}{k K(n) r_{e}}} \quad .
\end{equation}
Here $r_{e}$ is the effective radius, $k$ is a scale factor that accounts for the mean difference between the dynamical mass and the stellar mass estimated from the photometry \citep{Taylor10}, and $K(n)$ is a virial constant \citep{Bertin02} that depends on the Sersic index ($n$),
\begin{equation}
K(n) = \frac{73.32}{10.465 + (n-0.94)^{2}}+0.954\quad .
\end{equation}
Using the stellar mass, effective radii, and Sersic indices reported in the NYU-VAGC \citep{Blanton05}, we find that $k=0.560$ is required to match the observed velocity dispersion to the estimate from Eq.~\ref{eq:sigma}. This calibration is consistent with the value of $k$ reported by \citet{Bezanson12}. For $\sigma>100$~km~s$^{-1}$, the scatter between the observed value of the velocity dispersion and the value from Eq.~\ref{eq:sigma} is 0.08~dex.
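The virial estimate of Eq.~\ref{eq:sigma} with the $K(n)$ expression above translates directly into code. The numerical example in the test is ours; $G$ is expressed in pc\,$({\rm km\,s^{-1}})^{2}\,M_{\odot}^{-1}$ so that $\sigma$ comes out in km~s$^{-1}$.

```python
import math

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / M_sun

def virial_constant(n):
    """K(n) of Bertin et al. (2002), as quoted in the text."""
    return 73.32 / (10.465 + (n - 0.94) ** 2) + 0.954

def velocity_dispersion(m_star, r_e_pc, n, k=0.560):
    """sigma = sqrt(G M / (k K(n) r_e)) in km/s, with the calibration k = 0.560
    obtained from matching NYU-VAGC galaxies with measured dispersions."""
    return math.sqrt(G * m_star / (k * virial_constant(n) * r_e_pc))
```

For example, a $10^{10}\,M_{\odot}$ galaxy with $r_{e}=2$~kpc and a de~Vaucouleurs profile ($n=4$) gives $\sigma\approx 90$~km~s$^{-1}$, comfortably below the $\sigma \gtrsim 100$~km~s$^{-1}$ floor of reliable SDSS spectroscopic measurements.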
The bulge mass and velocity dispersion can be used to estimate the mass of the black hole at the galaxy's center. We use the \citet{Gultekin09} $M$--$\sigma$ relation for their sample of ``all'' galaxies (i.e., both early-types and late-types):
\begin{equation}\label{eq:gulte_sigma}
\log_{10} M_{\bullet} = 8.13 + 4.24 \log_{10}(\sigma / 200~{\rm km}\,{\rm s}^{-1}) \quad.
\end{equation}
To estimate the black hole mass from the bulge mass, we adopt the \citet{Gultekin09} $M$--$L_{V}$ relation, and we use the NYU-VAGC galaxies to measure the small correction to the power-law index due to the luminosity dependence of the mass-to-light ratio, which yields
\begin{equation}\label{eq:gulte_bulge}
\log_{10} M_{\bullet} = 8.40 + 1.16 \log_{10}(M_{\rm bulge}/10^{11} M_{\odot}) \quad.
\end{equation}
We apply Gaussian noise with a standard deviation of 0.4~dex \citep{Gultekin09} when assigning the black hole mass based on the host galaxy properties. We find that the two methods to estimate the black hole mass agree reasonably well; the difference between using the velocity dispersion and the galaxy bulge mass roughly scales as $0.2\,(\log_{10}M_{\bullet}-7.5)$~dex, with $M_{\bullet}$ the mass from the $M$--$\sigma$ relation. Since reliable $B/T$ measurements are not available for most of the TDF candidates in our sample, we will use the black hole mass estimate from the $M$--$\sigma$ relation as the default value in our analysis.
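The two scaling relations (Eqs.~\ref{eq:gulte_sigma} and \ref{eq:gulte_bulge}) in code form; the 0.4~dex scatter term is omitted here so that the sketch stays deterministic.

```python
import math

def log_mbh_from_sigma(sigma_kms):
    """M-sigma relation (Gultekin et al. 2009, 'all' galaxies): log10 M_BH."""
    return 8.13 + 4.24 * math.log10(sigma_kms / 200.0)

def log_mbh_from_bulge(m_bulge_msun):
    """Bulge-mass relation used in the text: log10 M_BH."""
    return 8.40 + 1.16 * math.log10(m_bulge_msun / 1e11)
```

In the forward model, Gaussian noise with a standard deviation of 0.4~dex is added on top of these mean relations.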
We simulated $10^{7}$ galaxies. This sample is available online (see Table~\ref{tab:syngal}).
\begin{deluxetable}{l || c c}
\tablecolumns{3}
\tablewidth{0pt}
\tablecaption{Modeling scenarios}
\startdata
& $\dot{N}\propto L^{-2.5}$ & $-3 <\log_{10} f_{\rm Edd} < 0.3$ \\[3pt]
\hline\hline \\[2pt]
$\dot{N}\propto {\rm const}$ & SNe & AGN flares \\[2pt]
$\dot{N} \propto {\rm SFR} $ & SNe & -- \\[2pt]
$\dot{N} \propto {\rm mass} $ & SNe & -- \\[2pt]
$\dot{N} \propto M_{\bullet}^{-1}$ & -- & AGN flares \\[2pt]
Eq.~\ref{eq:modelrate} & -- & TDFs
\enddata
\tablecomments{The cells of this matrix show which combinations of event rate (rows) and luminosity function (columns) are considered as a description for AGN flares, nuclear SNe, or TDFs. The two luminosity functions have the same power-law index, but the function used for TDFs and AGN flares is capped based on the ratio of the blackbody luminosity to the Eddington luminosity ($f_{\rm Edd}$). } \label{tab:modelmatrix}
\end{deluxetable}
\begin{figure*}
\includegraphics[trim=4mm 2mm 0mm 5mm, clip, width=0.48 \textwidth]{./figs/mass_cumu_fluxlim0.pdf}
\includegraphics[trim=4mm 2mm 0mm 5mm, clip, width=0.48 \textwidth]{./figs/MBH_cumu_fluxlim0.pdf}
\caption{Cumulative distribution of host galaxy stellar mass and black hole mass. We show the observed distribution, compared to the distribution for two different mock TDF samples. The black solid line is our fiducial TDF model, using Eq.~\ref{eq:modelrate} to account for the suppression of the TDF rate due to the capture of stars by black holes. The dashed line shows the distribution that is obtained if the event rate is independent of mass. This second scenario clearly is inconsistent with the observations, as it predicts too many flares from high-mass host galaxies. }\label{fig:fluxlim}
\end{figure*}
\begin{figure*}
\includegraphics[trim=4mm 2mm 0mm 5mm, clip, width=0.48 \textwidth]{./figs/mass_cumu_fluxlim_other.pdf}
\includegraphics[trim=4mm 2mm 0mm 5mm, clip, width=0.48 \textwidth]{./figs/MBH_cumu_fluxlim_other.pdf}
\caption{Identical to Fig.~\ref{fig:fluxlim}, but showing three more models for the event rate and flare luminosity function (see Table~\ref{tab:modelmatrix}). We see that both the AGN flare scenario and SNe that trace the star formation rate (SFR) or galaxy mass are inconsistent with the observed distributions. While an SN model with a rate that is independent of galaxy properties is consistent with the observed mass distribution, this scenario is not consistent with the observed distribution of the Eddington ratio (Fig.~\ref{fig:fluxlim_fEdd}). }\label{fig:fluxlim_other}
\end{figure*}
\subsection{Model Event Rates}\label{sec:modeleventrates}
We consider four possible models for the scaling of the event rate with galaxy properties. First of all, the simplest assumption is a galaxy-independent rate. Next, we consider an event rate proportional to the SFR. This scaling could be expected if current optical TDF candidates are due to a new type of stellar explosion in galactic nuclei \citep{Saxton16}.
To model flares caused by AGN disk instabilities, we consider an event rate that is inversely proportional to central black hole mass, $\dot{N} \propto M_{\bullet}^{-1}$ (i.e., the wait time between outbursts is proportional to the black hole mass; see Sec.~\ref{sec:insta}). Finally, to estimate the rate of flares due to TDEs, we use the following equation:
\begin{equation}\label{eq:modelrate}
\dot{N} \propto M_{\bullet}^{\beta} e^{- (M_{\bullet}/10^{8}M_{\odot})^{2} } \quad.
\end{equation}
The power-law index $\beta$ parameterizes how the disruption rate changes due to the dynamics of the host galaxy; predictions for this index range from $+0.3$ \citep{Brockamp11}, to $-0.2$ \citep{Wang04,Kochanek16}, and $-0.5$ \citep{Stone16b}. All of these predictions are broadly consistent with the observed mass function derived from the $1/V_{\rm max}$ method, but steeper power laws can be ruled out (see Fig.~\ref{fig:MBH_func}, bottom panel). For our fiducial TDF model, we adopt $\beta=-0.2$, the relation predicted for an isothermal sphere \citep[cf.][Eq. 29]{Wang04}. Our parameterization of the turnover in the TDF rate approximates the curve of \citet[][Figure~4]{Kesden12b} for a solar-type star disrupted by a black hole with a spin of $a=0.9$.
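As an illustration, the fiducial rate model above can be evaluated numerically. The snippet below is a minimal sketch of our own (the function name and normalization are not from the paper's pipeline); it shows the near-flat rate below $\sim10^{7.5}\,M_\odot$ and the sharp exponential suppression above the Hills mass.

```python
import numpy as np

def tdf_rate(m_bh, beta=-0.2, m_hills=1e8):
    """Relative TDF rate vs. black hole mass (arbitrary normalization).

    The power law encodes host-galaxy dynamics (fiducial beta = -0.2);
    the exponential approximates the suppression once solar-type stars
    are captured whole, with the turnover mass set to 1e8 Msun.
    """
    return m_bh**beta * np.exp(-(m_bh / m_hills) ** 2)

masses = 10 ** np.arange(5.5, 9.1, 0.5)            # Msun
rates = tdf_rate(masses) / tdf_rate(masses).max()  # normalize to the peak
# The rate varies by less than a factor ~2.5 between 1e5.5 and 1e7.5 Msun,
# but drops by tens of orders of magnitude above 1e8 Msun.
```

The exponential term dominates the power law so strongly above the turnover that the exact value of $\beta$ matters little there.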
{ Finally, in Fig.~\ref{fig:MBH_func} we also compare our estimate of the TDF rate as a function of mass with the result of \citet{Graur17}, who conclude that this rate is proportional to $\Sigma/\sigma$, with $\Sigma$ the galaxy surface mass density. For this comparison, we computed $\Sigma/\sigma$ for the galaxies in our mock sample and binned the result as a function of black hole mass. }
\subsection{Mock TDF Samples}\label{sec:selection}
The last step of our forward-modeling analysis is applying the selection criteria of the surveys. Summing the event rate of simulated flares that pass the selection criteria yields the final output: a mock version of our flux-limited TDF sample (Table~\ref{tab:TDFs}).
Besides the obvious requirement that the peak flux of the simulated flare is larger than the effective survey flux limit, we also require that the magnitude difference between the simulated flare and the host galaxy ($m_{\rm peak} - m_{\rm host}$) is less than 3~mag (which corresponds to the lowest host-flare contrast in our sample of candidate TDFs). For simulated flares observed by the PTF survey, we also apply their luminosity cut ($-19>M_{r}>-21$; see Sec.~\ref{sec:PTF}).
Similar to our method for normalizing the effective area in the $1/V_{\rm max}$ analysis, we require that the ratio of the simulated flares detected by each survey is equal to the ratio of detected TDF candidates for these surveys (see Table~\ref{tab:surveys}).
We now compare the observed distribution\footnote{In Figs. \ref{fig:fluxlim}--\ref{fig:fluxlim_fEdd} we show the observed distributions without ASASSN-15lh.}
of galaxy mass or black hole mass of the host galaxies of our TDF candidates with the distribution obtained for a simulated sample with and without a correction for captures (Fig.~\ref{fig:fluxlim}). We find that the simulation without a correction for captures overpredicts the number of flares from high-mass black holes. Using a Kolmogorov--Smirnov (KS) test, the hypothesis that the host galaxy stellar mass of the mock sample without captures and the host galaxy mass of observed candidate TDFs are drawn from the same distribution can be rejected with $p=2\times 10^{-3}$. If we apply the same test to the distribution of black hole mass, we again reject the null hypothesis, but with slightly lower significance ($p=2\times 10^{-2}$, due to the smaller sample size). Repeating this exercise using a Gaussian LF (Eq.~\ref{eq:rateVmax_gau}) instead of a power-law LF (Eq.~\ref{eq:rateVmax}) does not change the significance of the detection of rate suppression by black hole event horizons.
Since we use the number of detected TDF candidates in each survey to normalize the contribution of the different surveys to the final sample of mock TDFs, the small number of flares in each survey introduces a statistical uncertainty that is not captured in a single KS test. To estimate this uncertainty, we compute multiple mock samples, drawing the number of candidate TDFs in each survey from a Poisson distribution centered on $N_{\rm TDF}$. For each of these samples, we compute the $p$-value for rejecting the null hypothesis. The distribution of the resulting $p$-values follows a lognormal distribution with a standard deviation of only 0.2~dex. We can thus conclude that Poisson fluctuations in the number of detected TDF candidates will not lead to a false detection of horizon suppression (a 9$\sigma$ fluctuation of $N_{\rm TDF}$ is required to reach $p>0.1$).
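The structure of this Poisson-resampling estimate can be sketched as follows. The distributions below are hypothetical stand-ins (the real ones come from the forward model and the survey weighting), so only the procedure, not the numbers, mirrors the analysis.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical stand-ins for log10 host-galaxy mass: a large mock sample
# without horizon suppression (skewed toward higher mass) is compared against
# repeated "observed" realizations of ~17 candidate TDFs.
mock_no_capture = rng.normal(10.6, 0.5, size=2000)

# Redraw the number of detected TDFs from a Poisson distribution and record
# the KS p-value of each realization against the no-capture mock sample.
log_p = []
for _ in range(300):
    n = max(rng.poisson(17), 2)  # need at least 2 points for a KS test
    observed_realization = rng.normal(10.0, 0.4, size=n)
    log_p.append(np.log10(ks_2samp(observed_realization, mock_no_capture).pvalue))

scatter_dex = float(np.std(log_p))
# The spread of log10(p) quantifies how Poisson counting noise moves the
# significance; the paper's pipeline finds a scatter of only ~0.2 dex.
```

The scatter here is larger than 0.2~dex because this toy also redraws the observed sample itself; in the paper only the per-survey counts fluctuate.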
The simulated distributions of galaxy mass and black hole mass for the other three scenarios that we consider (see Table~\ref{tab:modelmatrix}) are shown in Fig.~\ref{fig:fluxlim_other}.
\section{Discussion}\label{sec:discussion}
\subsection{TDFs Are Not Due to Stellar Explosions}
We find that our fiducial TDF model correctly reproduces the distribution of host galaxy total stellar mass, host galaxy black hole mass, and Eddington ratio (see Fig.~\ref{fig:fluxlim} and Fig.~\ref{fig:fluxlim_fEdd}).
From Fig.~\ref{fig:fluxlim_other}, we conclude that if the observed TDFs are due to a hypothetical new class of nuclear SNe, the rate of these events needs to be independent of host galaxy mass or SFR. This requirement could be considered unlikely, because the rate of most types of known SNe either scales with the host galaxy surface brightness or is limited to a particular subset of galaxies \citep[e.g.,][]{Fruchter06}.
{ The strongest} evidence against the possibility that observed optical TDF candidates are a new class of SNe is the observed distribution of the Eddington ratio. The Eddington limit for photons does not apply to stellar explosions, but for each simulated SN we can still compute the Eddington ratio based on the central black hole mass of its host galaxy. If the flare rate is independent of galaxy properties and not constrained by the Eddington limit, more than half of the observed optical TDF candidates should have super-Eddington luminosities (Fig.~\ref{fig:fluxlim_fEdd}). The luminosity of candidate TDFs, however, is observed to be capped near the Eddington luminosity. The probability that a flux-limited SN sample would produce this skewed distribution of $f_{\rm Edd}$ is small (KS test yields $p=6\times 10^{-4}$). We find the same result if we use a Gaussian distribution (Eq.~\ref{eq:rateVmax_gau}) to draw the luminosity of the simulated SNe.
\begin{figure}
\centering
\includegraphics[trim=4mm 2mm 0mm 5mm, clip, width=0.48 \textwidth]{./figs/fEdd_cumu_fluxlim.pdf}
\caption{Cumulative distribution of the Eddington ratio. We show the observed distribution for TDF candidates with black hole mass measurements based on the host galaxy velocity dispersion, compared to the distribution predicted for a TDE scenario and three different SNe scenarios (see Table~\ref{tab:modelmatrix}). If the observed TDFs are due to SNe, we would not obtain a luminosity distribution that is capped near the Eddington luminosity. }\label{fig:fluxlim_fEdd}
\end{figure}
\subsection{TDFs Are Unlikely Due to AGN}\label{sec:insta}
An instability in an AGN accretion disk could lead to a rapid increase of the accretion rate and may therefore mimic a TDF.
This scenario has several problems, such as the observed evolution of the broad emission lines of known candidate TDFs, which become narrower with time, while AGN show the opposite behavior \citep{Ruan16,Holoien16}. But such problems are not insurmountable because the parameter space of AGN disk instabilities has not been fully explored yet.
Our work provides a test of the AGN flare scenario with minimal requirements. The only model prediction that is needed is the scaling of the flare rate with black hole mass.
The wait time between AGN outbursts depends on the black hole mass and the accretion rate. If the accretion rate normalized to the Eddington limit is constant over the mass range relevant for our TDF sample, the wait time between outbursts from active black holes is predicted to scale as $\tau \propto M_{\bullet}^{p}$, with $p \sim 1$ \citep{Mineshige90,Siemiginowsk97}. The increased wait time and longer flare duration reduce the rate of detected AGN flares from massive black holes, potentially explaining the lack of TDF candidates at high mass. However, we find that a scenario with $p=1$ predicts too many flares at the low-mass end (Figs.~\ref{fig:MBH_func} \& \ref{fig:fluxlim_other}). If instead the rate of AGN flares is independent of mass ($p=0$), a flux-limited sample of AGN flares should contain many more events from black holes with a mass $>10^{7.5}\, M_{\odot}$ (Fig.~\ref{fig:fluxlim}). We can thus conclude that most AGN flare scenarios are inconsistent with the observed mass distribution of TDF host galaxies. One caveat is that most AGN outburst models assume that the disk remains radiatively efficient between outbursts. It would be interesting to compare predictions for the rate of sub-Eddington AGN (e.g., LINERs) going into outburst.
Besides instabilities, stellar collisions near the tidal radius could also produce TDF impostors \citep{Metzger17}. These collisions happen between stars that accrete onto the central black hole via Roche lobe overflow, and therefore they are only possible when the Roche radius lies outside the innermost stable circular orbit. The rate of these collisions will therefore diminish above a mass scale similar to the Hills mass of TDEs and could thus explain the observed turnover in the TDF mass function (Fig.~\ref{fig:MBH_func}). However, multiple grazing collisions of the same stars are required to get a sufficiently high rate of these events, and around larger supermassive black holes, stars are often destroyed by their relativistic collision velocities. As a result, rate suppression in the stellar collision model likely occurs at an order of magnitude lower black hole mass ($M_{\bullet}\sim 10^{7}\,M_{\odot}$; \citealt{Metzger17}), which is inconsistent with our observations.
\subsection{Detection of Horizon Suppression}
The only scenario that can explain both the distribution of the Eddington ratio and the distribution of black hole mass requires a roughly constant rate up to a black hole mass of $M_{\bullet} \sim 10^{7.5} \, M_{\odot}$, followed by a rapid decrease toward higher mass. This, indeed, is a fundamental prediction of the TDE paradigm.
The location of the turnover scales with black hole spin and the density of the disrupted star. Assuming that for black holes with a mass of $\sim 10^{8} \, M_{\odot}$ most of the disrupted stars are similar to the Sun \citep[e.g.,][]{Kochanek16}, our mass function appears to imply a relatively high mean spin of black holes in this mass regime---perhaps similar to the spins inferred using observations of the iron fluorescence line in nearby AGN \citep{Risaliti13,Reynolds14}. The detection of more events similar to ASASSN-15lh will be key to making a more robust inference of black hole spin from the TDF black hole mass function.
Our measurement of the turnover in the black hole mass function relies on the $M$-$\sigma$ relation to estimate this mass, and therefore it is subject to the systematic uncertainty associated with the calibration of this relation. However, the turnover is also clearly detected in the distribution of total stellar mass (Figs.~\ref{fig:mass_func} and \ref{fig:fluxlim}): for the event rate to be independent of galaxy mass, 50\% of the TDFs in our compilation should have a host galaxy mass $M_{\rm galaxy}>10^{10.5}\,M_{\odot}$. Instead, only 3 out of 17 (including ASASSN-15lh) are found above this host galaxy mass limit. The stellar mass of TDF host galaxies and the synthetic host galaxy sample are calculated using the same method; hence the systematic uncertainty of this result is negligible.
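A quick back-of-envelope check of the 3-out-of-17 statement (this is our own illustrative calculation, not the KS analysis used in the paper): if each detected TDF were equally likely to land above or below $M_{\rm galaxy}=10^{10.5}\,M_{\odot}$, finding at most 3 of 17 above that limit would be very improbable.

```python
from scipy.stats import binom

# Probability of observing at most 3 flares out of 17 above the mass limit
# if each flare independently had a 50% chance of landing above it.
p_at_most_3 = binom.cdf(3, 17, 0.5)  # = 834 / 2**17, roughly 0.006
```

This simple binomial tail agrees with the conclusion that a mass-independent rate is strongly disfavored.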
At the low end of the host galaxy mass spectrum, we find no significant decrease of the number of flares compared to what is expected in a flux-limited survey. This could be considered surprising, since circularization and accretion of the stellar debris for disruptions around low-mass black holes are predicted to be less efficient \citep{Guillochon15}. Our observations thus support a constant black hole occupation fraction for $M_{\bullet}\gtrsim10^{5.5}\,M_{\odot}$.
Post-starburst galaxies (characterized as quiescent galaxies with strong Balmer absorption lines; \citealt{Dressler83,Zabludoff96}) are overrepresented among TDF hosts \citep{Arcavi14,French16,French17,Law-Smith17,Graur17}, which could be explained by a short relaxation time caused by a high central density in these galaxies \citep{StonevanVelzen16,Graur17}. An overrepresentation of post-starburst galaxies is unlikely to significantly influence the distribution of total stellar mass of the mock TDF sample because the relative mass increase in the recent star-formation episode of these galaxies is modest, 10--50\% (\citealt{Kaviraj07}; D. French et al., in preparation). Since we use the total stellar light to estimate the galaxy mass (including the old stars as measured by the near-IR flux), a post-starburst phase of TDF host galaxies will not lead to a significant change of the total stellar mass in the mock sample.
{ Our results are consistent with the recent work by \citet*{Lu17}, who used the lack of TDF candidates from high-mass black holes to rule out the hypothesis that supermassive black holes have a surface (at some small fraction above the Schwarzschild radius).}
\subsection{The Luminosity Function of TDFs}
Our work is the first to measure the shape of the LF of optical/UV-selected TDFs. We find that a steep power law provides a good description, $dN/dL \propto L^{-2.5}$ (Eq.~\ref{eq:rateVmax}).
About one-third of the candidate TDFs in our sample were discovered after the peak in the light curve and this introduces a systematic uncertainty to the LF. If the true peak luminosity of all of the sources discovered after maximum light is a factor of 2 higher than the observed maximum luminosity, the power-law index of the LF decreases by about 10\%.
{ While the TDF LF is steep, both the observed rate as a function of black hole mass (Fig.~\ref{fig:MBH_func}) and the peak luminosity of the flare \citep{Hung17} appear to be independent of black hole mass (for $M_{\bullet}<10^{7.5}\,M_{\odot}$). Here we speculated that this} might imply that the wide range in TDF peak luminosity is determined by the mass of the star that got disrupted. For a main-sequence mass--radius relation, the peak of the fallback rate of stellar debris is expected to scale as $M_{\rm star}^{0.8}$ \citep[e.g.,][]{Guillochon13}. If the peak luminosity is proportional to this fallback rate, a \citet{Kroupa93} initial mass function ($dN_{\rm star}/dM_{\rm star} \propto M_{\rm star}^{-2.3}$ for $M_{\rm star}\lesssim 1\, M_{\odot}$) would yield a steep decrease of the event rate with peak luminosity. Since the Hills mass is proportional to the mass of the disrupted star, this scenario implies that low-luminosity TDFs, such as iPTF-16fnl, only occur in relatively low-mass host galaxies, while high-luminosity TDFs can occur across a wider host galaxy mass range.
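The change of variables implied by this argument can be checked in one line (our own arithmetic, using only the exponents quoted above):

```python
# Exponents quoted in the text: peak fallback L ∝ Mstar^0.8, and a Kroupa IMF
# dN/dMstar ∝ Mstar^-2.3 below ~1 Msun. Changing variables, Mstar ∝ L^(1/0.8),
# so dN/dL = (dN/dMstar) * (dMstar/dL) ∝ L^lf_slope with:
imf_slope, fallback_exponent = -2.3, 0.8
lf_slope = (imf_slope + 1) / fallback_exponent - 1
# lf_slope = -2.625, close to the measured dN/dL ∝ L^-2.5
```

The predicted index of $-2.6$ is consistent with the measured $-2.5$ within the $\sim10\%$ systematic uncertainty discussed above.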
The steep flare LF also explains why our measurement of the average per-galaxy rate ($\approx 10^{-4}\,{\rm galaxy}^{-1}\,{\rm yr}^{-1}$; Fig.~\ref{fig:mass_func}) is higher than the rate based on SDSS data \citep{vanVelzen14} or ASAS-SN data \citep{Holoien16}. In the SDSS analysis, the per-galaxy rate follows from $N_{\rm TDF}/ \left<\epsilon_{\rm gal}\right>$, with $\epsilon_{\rm gal}$ the search efficiency times the number of galaxies that were surveyed. Because the average TDF rate was computed using the mean of this efficiency, the brightest flare contributes more to the per-galaxy rate (the difference in $\epsilon_{\rm gal}$ between SDSS-TDE1 and SDSS-TDE2 is a factor of 4; see \citealt{vanVelzen14}, Table~1). The efficiency-weighted mean luminosity of the two SDSS flares is $\approx 10^{43.4} \,{\rm erg}\,{\rm s}^{-1}$. The rate from our LF (Fig.~\ref{fig:Lg_func}) at this relatively high luminosity is $\approx 10^{-7.3}\,{\rm Mpc}^{-3}\,{\rm yr}^{-1}$, which is consistent with the volumetric rate reported from the SDSS search.
Our measurement of the per-galaxy flare rate (Fig.~\ref{fig:mass_func}) is consistent with the theoretical predictions of the disruption rate by \citet{Stone16b}, who find $ \sim {\rm few} \times 10^{-4}\,$galaxy$^{-1}$~yr$^{-1}$. As forecasted by \citet{Kochanek16}, it appears that the tension between earlier measurements of the TDF rate and the theoretically expected rate could simply be due to the relatively high luminosity of the TDFs in these surveys. When accounting for our observation that faint TDFs occur more frequently, the discrepancy between the observed and predicted rate disappears.
The total rate of TDFs depends on the low-luminosity turnover of the LF, which is not constrained by our current sample of flares. However, if the bolometric luminosity of TDFs is capped near the Eddington limit, the LF should start to flatten just below the luminosity of the faintest flares in our sample (for a typical bolometric correction of $\sim 10$, a $g$-band luminosity of $L_{g}\sim10^{42.5}\,{\rm erg}\,{\rm s}^{-1}$ exceeds the Eddington limit for black holes with $M_{\bullet}<10^{5.5}\,M_{\odot}$). For an Eddington-limited emission mechanism, the peak of the luminosity distribution in a flux-limited sample shifts to higher black hole mass \citep{Kochanek16}. This could explain why the turnover of the LF has not been detected yet.
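The quoted mass scale follows from simple arithmetic (our own check; the Eddington constant below assumes electron-scattering opacity for solar composition):

```python
import math

# Eddington luminosity per unit mass: L_Edd ≈ 1.26e38 erg/s per Msun
# (electron-scattering opacity, solar composition).
L_EDD_PER_MSUN = 1.26e38
bolometric_correction = 10.0
L_g = 10 ** 42.5  # erg/s, g-band luminosity of the faintest flares in the sample

# Black hole mass below which the implied bolometric luminosity exceeds L_Edd:
m_limit = bolometric_correction * L_g / L_EDD_PER_MSUN
log_m_limit = math.log10(m_limit)  # ≈ 5.4, i.e. roughly 10^5.5 Msun as quoted
```

So a $10\times$ bolometric correction on $L_g\sim10^{42.5}\,{\rm erg\,s^{-1}}$ is indeed super-Eddington only for $M_\bullet\lesssim10^{5.5}\,M_\odot$.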
\section{Conclusions}\label{sec:conclusion}
Our main conclusions are as follows:
\begin{itemize}
\item We measured the luminosity function of TDFs (Fig.~\ref{fig:Lg_func}), finding a steep decrease of the event rate with luminosity (Eq.~\ref{eq:rateVmax}).
\item For galaxies with a stellar mass of $\sim 10^{10}\, M_{\odot}$, the observed per-galaxy rate is $\approx 1 \times 10^{-4}$~yr$^{-1}$ (Fig.~\ref{fig:mass_func}).
\item We measured the black hole mass function of TDF host galaxies (Fig.~\ref{fig:MBH_func}), finding an approximately constant volumetric rate for $M_{\bullet}<10^{7.5}\,M_{\odot}$.
\item The sharp decrease of the volumetric rate above $M_{\bullet}=10^{7.5}\, M_{\odot}$, as implied by the high-luminosity TDF candidate ASASSN-15lh, is consistent with the suppression of the TDF rate due to the capture of stars before they are disrupted.
\item Rate suppression due to black hole event horizons can also be detected while remaining agnostic about the origin of ASASSN-15lh. Using forward modeling to reproduce our flux-limited TDF sample, we conclude that rate suppression at high black hole mass plus an Eddington-limited emission mechanism are both required to explain the observed distributions of galaxy mass and Eddington ratio (Figs.~\ref{fig:fluxlim}--\ref{fig:fluxlim_fEdd}).
\end{itemize}
\acknowledgments
{\small I would like to thank Thomas Wevers, Sterl Phinney, Nick Stone, Or Graur, James Guillochon, Peter Jonker, Iris Groen, and Richard I. Anderson for useful discussions and the anonymous referee for the thoughtful comments. I am also grateful to the International Space Science Institute (ISSI) in Bern for their hospitality. This work was made possible by support from NASA through a Hubble Fellowship, HST-HF2-51350.
This research made use of Astropy, a community-developed core Python package for Astronomy \citep{Astropy-Coll13}.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions: the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg, and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This work is based in part on data obtained as part of the UKIRT Infrared Deep Sky Survey.}
\bibliographystyle{apj}
\section{Introduction}
Venous obstruction is a common pathological condition of the lower extremities, which reduces vessel patency and hence blood flow. Venous obstruction can be a result of a non-thrombotic syndrome (such as May-Thurner syndrome and venous insufficiency) or an acute/chronic venous thrombosis~\citep{Murphy2017,beebe2005}. The prevalence and incidence rates of venous obstruction vary depending on the underlying medical conditions. For instance, deep vein thrombosis (DVT) affects 300,000 people in North America each year with an incidence rate of $0.201\%$~\citep{Cohen2007,Silverstein1998,arshad2017}. Also, the prevalence rates of May-Thurner syndrome and chronic venous insufficiency are $<40\%$ and $<73\%$, respectively~\citep{cavalcante2015,beebe2005,radaideh2019}. Some recent studies support endovascular treatment (stenting) over medical/compressive therapy~\citep{rossi2018}.
Vascular stents are small mesh-like porous tubular scaffolds, which are deployed inside diseased blood vessels to restore patency. These devices are available in various sizes, structures, and materials to provide desired mechanical properties in each particular design. Self-expanding and balloon-expandable stents are two major categories; each has a specific deployment procedure and expansion mechanism resulting in a different clinical performance. Since the balloon-expandable stents have a small range of elastic expansion, their self-expanding counterparts are preferred for deployment in veins because of their higher compliance and the ability to retain patency during the physiological dilation of veins~\citep{Wittens2015}. Venous stenting evolved from the endovascular treatment of occlusive arteries. While the etiology of arterial and venous disease is different, arterial stents have been commonly used in off-label applications in venous endovascular treatment. However, recent studies on venous stenting suggested the need for designing venous stents accounting for the specific venous pathology~\citep{schwein2018,bento2019}. The design and manufacturing of venous stents have been overlooked and undervalued~\citep{gordon2008} given the prevalence of venous disease and the off-label use of arterial stents. The main goal of this study is to examine stent deformation characteristics that contribute to their clinical performance and hence aid in the selection of a stent in a given clinical application.
Although atherosclerosis is the main etiology for arteries, chronic venous disease occurs due to venous thrombosis and external compression by the adjacent artery~\citep{bento2019}. Compared to arteries, veins are up to three times more compliant (distensible). Pathological arteries usually retain a well-defined vessel with almost no change in the vessel elasticity~\citep{schwein2018}. However, a pathological vein can undergo a fibrous retraction that reduces its compliance. During the early stages of thrombosis, the venous thrombus is compliant and differentiable from the venous wall. During the chronic phase, reached in $23-60\%$ of acute DVT cases, the fibrotic thrombus is attached to the wall, causing vein thickening and post-thrombotic recoil \citep{razavi2015,deatrick2011}. In summary, in designing a venous stent, one should consider the localized pinching forces (May-Thurner syndrome), high recoil (fibrotic veins), foreshortening (reduction of length during expansion), and the large distensibility (high compliance) of the healthy wall proximal and distal to the pathological region. We first review pertinent literature on the deformation properties of stents.
The effect of radial pressure on the hemodynamics of a stented vessel has been investigated through experimental and computational studies~\citep{Freeman2010,Bedoya2006,Lally2005,Zahedmanesh2009}. It is shown that a stent with excessive radial pressure constrains the cyclic dilation of the vessel leading to post-deployment complications such as restenosis~\citep{Morrow2005,Vernhet2001}. Nevertheless, an optimal radial pressure is required to avoid post-deployment recoil and migration~\citep{Li2006}. The radial pressure is a function of stent structural parameters and the material properties, which has been extensively studied for braided stents~\citep{zaccaria2020,Kim2008}, Z stents~\citep{SnoWhill2001}, and closed-cell venous stents~\citep{dabir2018}. In venous stenting, oversizing the stent is a common technique to avoid recoil and migration. Here we provide a semi-analytical method, in Section 3, to determine the radial pressure variation due to stent oversizing.
Collapse is a common failure mode for stents that are placed in veins~\citep{Murphy2017}. Three different buckling modes in the collapse of balloon-expandable stents were identified in~\citep{Dumoulin2000}, and collapse of self-expanding stents has been reported in~\citep{Kim2008,dabir2018,schwein2018}. These studies observed mechanical instability by applying a pinching force to the stent structure. It has been reported that the local collapse stiffness of a stent depends critically on its strut geometry and the elastic modulus of the stent material~\citep{Duerig2000}. We evaluate the performance of a venous stent against collapse experimentally and explain how we can qualitatively compare a stent's behavior under collapse through a unit-cell study.
Compliance of a stent also plays a vital role in its clinical performance, particularly for the venous system with its high distensibility. The stent is deployed to maintain adequate contact with the healthy wall proximally and distally in order to increase the anchoring area and avoid migration. The compliance mismatch between the vessel and the stent magnifies the post-deployment complications~\citep{Berry2002,Morris2016,post2019}. The compliance of a stent/vessel is defined as the variation of diameter over the variation of pressure. Here, the stent compliance is determined through experiment and analysis.
Despite the critical role of foreshortening in precise stent placement, there has been little study of the foreshortening of venous stents. For balloon-expandable arterial stents, however, the significance of longitudinal strain and the foreshortening mechanism has been elaborated in the literature~\citep{Douglas2014,Tan2011}. That method cannot be used here due to differences in the expansion mechanism of self-expanding stents. Here, we measure the foreshortening during stent expansion and analytically study this parameter in Section 3.
The above studies confirm the importance of mechanical properties and deformation characteristics on the clinical performance of venous stents and a need to develop rapid assessment tools to inform the choice of stent topologies. Deformation characteristics of venous stents based on braided design, chevron design, Z design, and diamond design are compared using \emph{in vitro} experiments coupled with analytical and finite element modelling. Their suitability for deployment in different clinical contexts is assessed based on their deformation characteristics. We start with an \emph{in vitro} experiment to evaluate these parameters in Section 2. Afterwards, we employ the unit-cell study in Section 3 to determine radial pressure, compliance, foreshortening and collapse of the candidate stents in this study. We assess the validity of the unit-cell study by comparing the results with observations from the \emph{in vitro} experiment and discuss clinical relevance in Sections 4 and 5. Concluding remarks and future work are given in Section 6.
\section{\emph{In vitro} experiments}
Self-expanding stents are available in different structural designs and materials. Here we chose two stainless steel stents (Z design and Braided design) and two Nitinol stents (Diamond design and Chevron design), shown in~\fref{Fig_1}. Note that these stents are commercial designs currently used in practice for venous stenting, and here we use the design names instead of the commercial names provided by companies. Among these, the Braided design is the only one with an open-cell structure; the rest are closed-cell designs. The Diamond and Chevron designs are recent designs dedicated to venous stenting, the Braided design is a common off-label design (used for both arteries and veins), and the Z design is a trachea stent commonly used in venous stenting as a local reinforcement~\citep{Murphy2017}. To study the effect of material and structural design, we start with a series of \emph{in vitro} tests to determine the radial pressure, collapse resistance, and foreshortening. Compliance, defined as the ratio of the diameter variation to the radial pressure variation, can be calculated from these measurements as well.
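As an illustration of how compliance follows from pressure--diameter data, the sketch below fits a linear slope to a made-up crimping curve; the numbers and units are placeholders, not measurements from this study.

```python
import numpy as np

def compliance_mm_per_mmHg(pressures_mmHg, diameters_mm):
    """Compliance estimated as the least-squares slope dD/dP.

    A negative slope is expected in a crimping test (diameter falls as the
    external pressure rises); its magnitude is the compliance.
    """
    slope, _intercept = np.polyfit(pressures_mmHg, diameters_mm, 1)
    return slope

# Illustrative (fabricated) data only: external pressure vs. stent diameter.
P = np.array([0.0, 20.0, 40.0, 60.0, 80.0])   # mmHg
D = np.array([14.0, 13.4, 12.9, 12.3, 11.8])  # mm
c = compliance_mm_per_mmHg(P, D)              # about -0.0275 mm per mmHg
```

In practice a local (tangent) slope at the operating pressure may be preferable to a global fit when the curve is nonlinear.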
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{Fig1.pdf}
\caption{Stent designs used in this study and their structural design parameters.}
\label{Fig_1}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{Fig2.pdf}
\caption{\emph{In vitro} tests. (a) Stent is wrapped in an aluminum fabric sheet that is attached to the material testing machine (Instron 5965) grips at both ends. The rollers reduce the friction while the upper grip pulls the fabric and reduces the internal diameter of the aluminum warp. (b) The anvil applies compression locally at the mid-section of the stent. (c) The stent is globally compressed between two steel compression plates.}
\label{Fig_2}
\end{figure}
The experimental setup for the \emph{in vitro} tests is shown in~\fref{Fig_2}. An environmental chamber was used to maintain the ambient temperature at $37^{\circ}$C to simulate the body temperature. The radial pressure (i.e., circumferential resistive pressure) was measured by the radial crimping test according to~\citep{Morris2016,duda2000}, shown in~\fref{Fig_2}(a). An aluminum fabric of width equal to the undeployed length of the stent is wrapped around the fully deployed stent and threaded through a narrow gap between two rollers (diameter of $3~mm$). The lower edge of the fabric is attached to the fixed jaw while the upper edge is attached to the moving jaw and the load cell. As the upper jaw moves upward, the circumference of the aluminum wrap decreases, leading to a reduction of the internal diameter and radial crimping of the stent. Note that in this case we only measure the circumferential resistive force, not the chronic outward force that is applied by the stent to the wall during deployment. This is due to the fact that increasing the patency is commonly achieved by immediate angioplasty after venous stenting. Hence, even if the chronic outward pressure is not sufficient, angioplasty using balloon expansion can assist during deployment. Accordingly, the circumferential resistive force, resisting the post-deployment recoil, is a more representative characteristic in terms of clinical durability. In a temperature-controlled chamber, the global collapse and local collapse tests, based on the deformation modes suggested by~\citep{Dumoulin2000,Bandyopadhyay2013}, were performed by compressing the stent between rigid plates and with an anvil (tip diameter of $10~mm$), respectively (see \fref{Fig_2}(c) and (b)). Results from the experiments will be presented and compared with the semi-analytical model in Sections 4 and 5.
\section{Unit-cell study and finite element analysis}
The deformation characteristics of a stent rely on its lattice expansion mechanism, which can be investigated through a unit-cell study. This method reduces computation time and cost since the entire stent is not modelled. First, we define the unit-cell for each design (see~\fref{Fig_3}), where a cylindrical polar co-ordinate system ($r-\theta-z$) is introduced in~\fref{Fig_3}(a). The forces and moments associated with these co-ordinate axes are $F_r$, $F_\theta$ and $M_z$, $M_\theta$, respectively. Each stent has $n_a$ unit-cells along the axial ($z$) direction and $n_c$ unit-cells in the circumferential ($\theta$) direction. Assuming periodic boundary conditions and axisymmetry of the structure, we can identify a unit-cell and define the boundary conditions at the decoupled joints/links as shown in~\fref{Fig_3}{(c)}. For uniform expansion and axisymmetric boundary conditions, the force $F_r$ and the moments $M_\theta$ and $M_z$ can be neglected~\citep{Hejazi2018}. Since the Braided design is made through braiding, we cannot define a closed-joint unit-cell. Consequently, the approach to study the expansion of this stent is different and is discussed separately.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.6\textwidth]{Fig3.pdf}
\caption{Typical geometry and loading conditions of a unit-cell in a stent structure. (a) cylindrical coordinate system; (b) top view of the isolated (highlighted) unit-cell in the structure; (c) four isolated joints of the unit-cell; (d) front view of the isolated unit-cell in the structure; (e) the reaction moments and forces applied to the isolated joints, $F_p$ (resultant force due to contact pressure applied by the vessel to the stent), $M_z$ and $M_\theta$ (the reaction moment along $z$ axis and $\theta$ axis), $F_r$ and $F_\theta$ (the reaction forces along $r$ axis and $\theta$ axis); (f) the front view of a unit-cell loading condition; (g) kinematic role of a strut in deformation of a stent unit-cell. Each stent has $n_a$ number of unitcells along the axial ($z$) direction and $n_c$ number of unitcells in the circumferential ($\theta$) direction.}
\label{Fig_3}
\end{figure}
The venous pressure distribution acting on the stent is assumed to be uniform. Consequently, the resultant force $F_p$ due to pressure, applied by the vessel on all struts, acts at the center of the unit-cell in the radial direction and is related to $F_\theta$ (the tangential force applied to the joints) as:
\begin{equation} \label{e1}
F_p=2F_{\theta}\sin{\beta}, \quad \beta=\frac{\pi}{n_c}.
\end{equation}
We can define the length ($L$) and the diameter ($D$) of an expanding stent at each stage of deployment by following the method introduced in~\citep{Douglas2014}:
\begin{equation}\label{e2}
L=n_a (l_0-2u),
\end{equation}
\begin{equation} \label{e3}
D=\frac{n_c (w_0+2v)}{\pi},
\end{equation}
where $u, v, l_0$, and $w_0$ are respectively axial displacement, circumferential displacement, undeformed length, and width of a unit-cell (\fref{Fig_3}(g)). Note that for an unexpanded stent $u=v=0$ so that $L=n_al_0$ and $D=\frac{n_cw_0}{\pi}$ define the initial length and diameter, respectively.
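The kinematics in~\req{e2} and~\req{e3} can be checked with a short numerical sketch (Python; the dimensions below are illustrative, not taken from the tested stents):

```python
import math

def stent_length(n_a, l0, u):
    """Deployed stent length, Eq. (2): L = n_a (l0 - 2u)."""
    return n_a * (l0 - 2.0 * u)

def stent_diameter(n_c, w0, v):
    """Deployed stent diameter, Eq. (3): D = n_c (w0 + 2v) / pi."""
    return n_c * (w0 + 2.0 * v) / math.pi

# Undeformed stent (u = v = 0) recovers L = n_a*l0 and D = n_c*w0/pi.
n_a, n_c, l0, w0 = 10, 12, 4.0, 3.0   # illustrative values in mm
L0 = stent_length(n_a, l0, 0.0)       # 40.0 mm
D0 = stent_diameter(n_c, w0, 0.0)     # 36/pi, about 11.46 mm
```

Expansion ($v>0$) increases the diameter while foreshortening ($u>0$) shortens the stent, as the signs in the two expressions indicate.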
The main purpose of deploying a stent is to maintain the patency of the lumen. A vessel that tends to recoil applies a radial pressure to the stent, which governs post-deployment performance~\citep{Duerig2000,Morlacchi2013}. This pressure can be defined as:
\begin{equation} \label{e4}
P=\frac{F_p}{D\beta(l_0-2u)},
\end{equation}
where $P$ is the lumen radial pressure (circumferential resistive pressure), the numerator is the total applied force, and the denominator is the circumferential area of a unit-cell. By substituting~\req{e1} and~\req{e3} into~\req{e4} we have:
\begin{equation} \label{e5}
P=\frac{2F_{\theta}\sin{(\frac{\pi}{n_c})}}{(w_0+2v)(l_0-2u)}.
\end{equation}
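As a consistency check of the substitution leading to~\req{e5}, the sketch below (Python; illustrative numbers) evaluates the pressure both directly from~\req{e1},~\req{e3} and~\req{e4}, and from the closed form~\req{e5}:

```python
import math

def radial_pressure(F_theta, n_c, w0, v, l0, u):
    """Closed form of Eq. (5)."""
    return 2.0 * F_theta * math.sin(math.pi / n_c) / ((w0 + 2.0 * v) * (l0 - 2.0 * u))

def radial_pressure_direct(F_theta, n_c, w0, v, l0, u):
    """P = F_p / (D*beta*(l0 - 2u)), assembled from Eqs. (1), (3) and (4)."""
    beta = math.pi / n_c
    F_p = 2.0 * F_theta * math.sin(beta)      # Eq. (1)
    D = n_c * (w0 + 2.0 * v) / math.pi        # Eq. (3)
    return F_p / (D * beta * (l0 - 2.0 * u))  # Eq. (4)
```

Both routes give identical values, since $D\beta = w_0 + 2v$ cancels exactly.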
We introduce the foreshortening parameter ($f$) as
\begin{equation} \label{e6}
f=\frac{l_0-l}{l_0}=\frac{2u}{l_0},
\end{equation}
where $l=l_0 -2u$ is the length of the unit-cell at a given stage of deployment. Using~\req{e6}, we can rewrite the radial pressure as a function of foreshortening as follows:
\begin{equation} \label{e7}
P=\frac{2F_\theta\sin{(\frac{\pi}{n_c})}}{(w_0 +2v)\, l_0 (1-f)}.
\end{equation}
The above indicates that the radial pressure grows as the foreshortening increases. This suggests that, while a large value of foreshortening is undesirable for precise deployment, it allows a larger radial pressure to be achieved. It is worth noting that this compromise in clinical performance can be avoided by choosing the zero-foreshortening stent designs proposed in~\cite{Douglas2014}.
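Since~\req{e6} gives $l_0 - 2u = l_0(1-f)$, the pressure can be written directly in terms of the foreshortening; the sketch below (Python, illustrative numbers) verifies this substitution against~\req{e5}:

```python
import math

def pressure_eq5(F_theta, n_c, w0, v, l0, u):
    return 2.0 * F_theta * math.sin(math.pi / n_c) / ((w0 + 2.0 * v) * (l0 - 2.0 * u))

def pressure_of_f(F_theta, n_c, w0, v, l0, f):
    # l0 - 2u = l0*(1 - f), from Eq. (6), substituted into Eq. (5)
    return 2.0 * F_theta * math.sin(math.pi / n_c) / ((w0 + 2.0 * v) * l0 * (1.0 - f))

u, l0 = 0.3, 4.0
f = 2.0 * u / l0                               # Eq. (6)
Pa = pressure_eq5(1.0, 12, 3.0, 0.1, l0, u)
Pb = pressure_of_f(1.0, 12, 3.0, 0.1, l0, f)   # equal to Pa
```

The second form also makes the trade-off explicit: for fixed $F_\theta$ and $v$, a larger $f$ yields a larger $P$.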
Compliance of a stent is defined according to
\begin{equation} \label{e8}
C=\frac{D_2-D_1}{D_1(P_2-P_1)},
\end{equation}
where $D_i$ and $P_i$ are the stent diameter and pressure at two successive deployment states. By substituting~\req{e3} and~\req{e5} into~\req{e8}, we have
\begin{equation} \label{e9}
C=\frac{v_1-v_2}{\sin{(\frac{\pi}{n_c})}\left(\frac{(w_0+2v_1)F_{\theta_{2}}}{(w_0+2v_2)(l_0-2u_2)}-\frac{F_{\theta_{1}}}{l_0-2u_1}\right)}.
\end{equation}
Equations~\req{e2} to~\req{e9} indicate that to calculate the deformation characteristics (radial pressure, compliance, and foreshortening) we need to correlate the strut bending force ($F_\theta$) with the displacements in the circumferential ($v$) and longitudinal ($u$) directions. In Sections 3.1 and 3.2 we study the bending mechanism of the candidate stents, which governs the deformation characteristics of the stent.
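The finite-difference compliance of~\req{e8} and the closed form of~\req{e9} can be cross-checked numerically; the sketch below (Python; the two deployment states are illustrative) compares the two in magnitude, staying agnostic about the sign convention during crimping versus expansion:

```python
import math

def diameter(n_c, w0, v):                       # Eq. (3)
    return n_c * (w0 + 2.0 * v) / math.pi

def pressure(F_theta, n_c, w0, v, l0, u):       # Eq. (5)
    return 2.0 * F_theta * math.sin(math.pi / n_c) / ((w0 + 2.0 * v) * (l0 - 2.0 * u))

def compliance_eq8(D1, D2, P1, P2):             # Eq. (8)
    return (D2 - D1) / (D1 * (P2 - P1))

def compliance_eq9(F1, F2, n_c, w0, l0, u1, u2, v1, v2):  # Eq. (9)
    s = math.sin(math.pi / n_c)
    bracket = (w0 + 2.0 * v1) * F2 / ((w0 + 2.0 * v2) * (l0 - 2.0 * u2)) \
              - F1 / (l0 - 2.0 * u1)
    return (v1 - v2) / (s * bracket)
```

Evaluating both at two nearby states $(F_{\theta i}, u_i, v_i)$ confirms that~\req{e9} reproduces the magnitude of the direct evaluation of~\req{e8}.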
\subsection{Unit-cell deformation characteristics of Chevron design and Diamond design }
The Chevron and Diamond designs used in this study are made of Nitinol alloy, which provides desired mechanical properties such as super-elasticity. The Nitinol struts undergo a phase transition depending on the mechanical strain, which influences their deformation characteristics. A mathematical model for the bending analysis of Nitinol beams is introduced in~\citep{Mirzaeifar2013}. We apply their model to find the deformation of the stent strut subjected to bending as described in~\fref{Fig_3}(g). The bending analysis for a cantilever beam is summarized in Appendix A1. The Chevron design has a more complex shape, which can be divided into curved and straight sections (\fref{Fig_5}). Hence, we have to modify~\req{e5},~\req{e6},~\req{e7}, and~\req{e9} by substituting the circumferential displacement $v$ by $3v$ and the longitudinal displacement $u$ by $0.5u$ to account for the number of struts in the unit-cell and the different orientation of the joints. We can use~\req{e10} and~\req{e11}, respectively, to calculate the bending moment in the curved and straight portions as:
\begin{equation} \label{e10}
M_c=\frac{1}{2}Fl\cos{\alpha}-Fr(1-\cos{\theta}),
\end{equation}
\begin{equation} \label{e11}
M_l=\frac{1}{2}F l\cos{\alpha}-Fx\cos{\alpha}+Fr.
\end{equation}
where $\theta$ and $x$ locate the section in the curved and the straight part, respectively, in~\fref{Fig_5}(b) and~\fref{Fig_5}(d). Using~\req{e10} and~\req{e11} we can calculate the bending moment throughout the strut of the Chevron design as a function of $\theta$ for the curved portion and $x$ for the straight part. In these equations, $l$ is the effective arm length of the strut (contributing to the bending moment), $r$ is the radius of the curved portion, and $F=F_{\theta}\cos\beta$ based on~\fref{Fig_5}. For the Diamond stent (Nitinol), since there is no curved region at the intersection of joints, we can directly use the analysis of a cantilever beam in Appendix A1 to calculate the internal bending moments during deployment. Having thus found the internal forces, we can calculate the deformation properties of these two designs by following Appendix A1 to find the displacements and then calculate the radial pressure and compliance using~\req{e7} and~\req{e9}.
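A minimal sketch of~\req{e10} and~\req{e11} (Python; the values of $F$, $l$, $\alpha$, $r$ are illustrative):

```python
import math

def M_curved(F, l, alpha, r, theta):
    """Eq. (10): bending moment in the curved portion at angle theta."""
    return 0.5 * F * l * math.cos(alpha) - F * r * (1.0 - math.cos(theta))

def M_straight(F, l, alpha, r, x):
    """Eq. (11): bending moment in the straight portion at position x."""
    return 0.5 * F * l * math.cos(alpha) - F * x * math.cos(alpha) + F * r

# At theta = 0 the curved-portion moment reduces to 0.5*F*l*cos(alpha);
# the straight-portion moment vanishes at x = l/2 + r/cos(alpha).
```

Sweeping $\theta$ and $x$ with these two functions gives the moment distribution along the whole strut, which is the input to the cantilever analysis of Appendix A1.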
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{Fig4.pdf}
\caption{(a) Chevron design unit-cell geometry; (b) a strut of the unit-cell including the curved and linear parts; (c) the bending moment and shear force at a given cross section of the curve part; (d) the bending moment and shear force at a given cross section of the linkage.}
\label{Fig_5}
\end{figure}
\subsection{Unit-cell deformation characteristics of Z design and Braided design}
Geometric design imparts self-expanding ability to the Z and Braided design stents. Two sets of steel links and a coil that connects these links are the elements that define the Z design unit-cell. The displacement of a single strut is therefore a combination of the coil angular twist and the elastic bending deflection of the link. The torsional stiffness of the coil can be calculated through $k_t=\frac{Ed^4}{10.8Dn}$, where $E$, $d$, $D$, and $n$ are Young's modulus, wire diameter, coil diameter, and the number of coil body turns, and $l$ below denotes the link length~\citep{Budynas2008}. Accordingly, the angular twist of the coil and the longitudinal and circumferential displacements can be determined through
\begin{equation}\label{e12}
\gamma=k_{t}F(l+2d),
\end{equation}
\begin{equation}\label{e13}
u=l(1-\cos{\gamma}),
\end{equation}
\begin{equation}\label{e14}
v=\frac{Fl^3}{3EI}+l\sin{\gamma}.
\end{equation}
Equation~\req{e13} determines the longitudinal displacement of the strut tip in terms of the angular twist given by~\req{e12}. Here, we ignore the contribution of strut bending to the longitudinal displacement, since it remains in the elastic region and can be assumed small in comparison with the effect of the angular twist. The circumferential displacement follows from~\req{e14}, in which the first term is the link bending deflection and the second term represents the effect of the coil angular twist.
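Equations~\req{e12}--\req{e14} can be sketched as follows (Python; $k_t$ enters exactly as written in~\req{e12}, and all numerical values are illustrative):

```python
import math

def coil_twist(k_t, F, l, d):
    """Eq. (12): angular twist of the coil."""
    return k_t * F * (l + 2.0 * d)

def axial_disp(l, gamma):
    """Eq. (13): u = l (1 - cos(gamma))."""
    return l * (1.0 - math.cos(gamma))

def circ_disp(F, l, E, I, gamma):
    """Eq. (14): link bending deflection plus coil-twist contribution."""
    return F * l**3 / (3.0 * E * I) + l * math.sin(gamma)

# With no load there is no twist and no displacement; for small twist,
# u grows quadratically (about l*gamma^2/2) while v grows linearly in gamma.
```

This makes the stated approximation visible: for small $\gamma$ the longitudinal displacement is second order in the twist, while the circumferential one is first order.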
The Braided design is fabricated from braided wires forming its entire structure. Consequently, it does not have any true geometrical unit-cell or joint, and therefore no real strut can be defined. The expansion of the Braided design was investigated in~\citep{Wang2004a,Wang2004b}, where slender-bar theory was used to derive an equation relating the pressure to the diameter of the braided structure:
\begin{equation} \label{e15}
P=\frac{n\cos^2{\alpha}}{2\pi{r^2}\sin^2{\alpha}}\left[ \begin{split} &\frac{EI\sin{\alpha}}{r}\left(\frac{\cos^2{\alpha}}{r}-\frac{\cos^2{\alpha_0}}{r_0}\right)\\ &-\frac{GI_{p}\cos{\alpha}}{r}\left(\frac{\cos{\alpha}\sin{\alpha}}{r}-\frac{\cos{\alpha_0}\sin{\alpha_0}}{r_0}\right)\end{split} \right],
\end{equation}
\begin{equation}\label{e16}
f=\frac{\sqrt{\lambda_{0}^2+4\pi^2r_{0}^2-4\pi^2r^2}-\lambda_{0}}{\lambda_{0}},
\end{equation}
where $n$, $E$, $G$, $I$, $I_p$, $r$, and $r_0$ are respectively the number of wires, Young's modulus, shear modulus, area moment of inertia, polar moment of inertia, stent radius, and nominal radius. To calculate the foreshortening, we can use~\req{e16}, in which $\lambda_0$ and $r_0$ are the initial helical wire pitch and radius. Furthermore, to determine the compliance, instead of using~\req{e9}, which is used for the other stents, we can directly use~\req{e8} with the radial pressure values from~\req{e15}.
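The braided-stent relations~\req{e15} and~\req{e16} can be sketched as below (Python; material and geometric values are illustrative). At the nominal state ($r=r_0$, $\alpha=\alpha_0$) the pressure and foreshortening both vanish, as the equations require:

```python
import math

def braided_pressure(n, E, G, I, I_p, r, alpha, r0, alpha0):
    """Eq. (15): pressure-radius relation for the braided stent."""
    c, s = math.cos(alpha), math.sin(alpha)
    c0, s0 = math.cos(alpha0), math.sin(alpha0)
    bend = (E * I * s / r) * (c**2 / r - c0**2 / r0)
    tors = (G * I_p * c / r) * (c * s / r - c0 * s0 / r0)
    return n * c**2 / (2.0 * math.pi * r**2 * s**2) * (bend - tors)

def braided_foreshortening(lam0, r0, r):
    """Eq. (16): foreshortening from the change of stent radius."""
    return (math.sqrt(lam0**2 + 4.0 * math.pi**2 * (r0**2 - r**2)) - lam0) / lam0
```

Crimping the stent ($r < r_0$) produces a positive foreshortening value, consistent with the elongation of the helical wires at a reduced radius.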
\subsection{Finite element simulation parameters and material properties}
In this work, the simulations were performed in the ABAQUS/Standard commercial code linked with a user material subroutine (UMAT), since a Nitinol model was not available in the software library at the time of this study. The UMAT implements the thermo-mechanical constitutive model of Nitinol~\citep{Auricchio1997,Bhattacharya2003,Lagoudas2008}. The FE method was employed to study the bending of the Chevron and Diamond design struts and to validate the method presented in~\citep{Mirzaeifar2013}, which addresses the bending mechanics of a Nitinol beam. We used material properties of NITI-I for the Nitinol stents, and an elastic modulus of 193 GPa, Poisson's ratio of 0.3, and uni-axial yield stress of 260 MPa for the steel stents~\citep{Bandyopadhyay2013,Duerig2000}. Fully integrated quadratic solid hexahedron elements (C3D20) were used. The global mesh size varies from 0.075 mm to 0.05 mm based on a mesh sensitivity test. The numerical finite element results are used to validate the semi-analytical method for bending of Nitinol struts presented in Section 3.1.
\section{Results}
A comparison is drawn between the FE simulation (Section 3.3) and the semi-analytical approach (Section 3.1) for the strut bending of the Nitinol stents in~\fref{joint}. The change of slope reflects the onset of the phase transition (austenite to martensite). The FE solution is stiffer for the Chevron design. For the Diamond design, however, the semi-analytical method demonstrates a stiffer response. This is because of the shear stress contribution to the phase transition, which is ignored in Section 3.1. If the von Mises stress exceeds 260 MPa, the region is in a pure martensite phase. A core of pure austenite always exists at the Chevron design joints. For the Diamond design, however, we observe a core of martensite phase close to the joints.
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{Fig5.pdf}
\caption{Von Mises stress distribution over the stent strut at the delivery size (maximum magnitude of stress); (a) Diamond design joint; (b) Chevron design. Comparison of strut bending force vs. circumferential displacement between the analytical method of Section 3 and finite element calculations.}
\label{joint}
\end{figure}
The foreshortening of the stents, illustrated in~\fref{FS}, has been calculated through~\req{e6} for the joint-based designs (Z, Diamond, and Chevron designs) and~\req{e16} for the Braided design. It should be noted that foreshortening is a dimensionless characteristic and the results presented in~\fref{FS} are valid for different stent sizes. As shown, in all cases the experimental values of foreshortening are smaller. This can be the result of longitudinal compressive forces (due to friction) that are applied to the stent during the crimping test in the aluminum fabric.
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{Fig6.pdf}
\caption{Analytical prediction of foreshortening compared with experimental measurements.}
\label{FS}
\end{figure}
In clinical practice, stents are oversized to maintain a required radial pressure. \fref{PFA} shows the radial pressure versus the over-sizing parameter ($D_{n}/D_{v}$), where $D_n$ and $D_v$ are the stent nominal expanded diameter and the vessel diameter, respectively. For a given $D_{n}=16~mm$, the experimental data points were measured through the \emph{in vitro} test setup shown in Section 2. In this case, we simulate $D_{v}$ by adjusting the diameter of the aluminum wrap. For each stent, the joint analysis (Sections 3.1 and 3.2) yields the circumferential force ($F_\theta$) and the longitudinal displacement ($u$) at a given circumferential displacement ($v$). The circumferential displacement is adjusted to match the simulated vessel diameter using~\req{e3}. Accordingly,~\req{e5} was used to calculate the radial pressure.
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{Fig7.pdf}
\caption{Radial pressure calculated based on~\req{e5}, shown by solid lines, compared with data points corresponding to the \emph{in vitro} experiment (\fref{Fig_2}(a)). $D_n$ and $D_v$ are the stent nominal expanded diameter and the vessel diameter, respectively. Usually the ratio $\frac{D_n}{D_v}$ does not exceed 1.3, which corresponds to 30\% oversizing.}
\label{PFA}
\end{figure}
We can compare the radial pressure performance of the stents in~\fref{Comp}(a), where all stents from~\fref{PFA} are combined in a single plot for a better comparison. Note that the oversizing parameter is limited to $D_n / D_v=1.3$ since 30\% oversizing is recommended in most clinical cases. The Diamond and Braided designs have a larger radial pressure in this range. Although the Z design has the highest maximum radial pressure at $D_{n}/D_{v}=3$, it loses 70\% of its radial pressure at $D_{n}/D_{v}=1.5$. Stent radial compliance, calculated using~\req{e8}, is compared in~\fref{Comp}(b). The steel stents (Braided and Z designs) are much more compliant at higher oversizing values. However, in the range of $D_{n}/D_{v}<1.5$, they are much stiffer. The compliance of the Nitinol stents, Chevron and Diamond designs, does not change as much as that of the steel stents with the oversizing $D_{n}/D_{v}$. This can be due to the effect of the phase transition, which is evident in the local maximum points of the compliance curve.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.85\textwidth]{Fig8.pdf}
\caption{Experimental measurements of radial pressure and compliance of stents compared in a single plot.}
\label{Comp}
\end{figure}
The results of the global and local collapse tests are shown in~\fref{Col}(a) and (b), respectively. The Braided and Z designs have higher resistance in both tests compared to the Nitinol stents. Note that the Z design has a higher resistance to localized collapse while the Braided design performs better in the global collapse test. It is worth mentioning that the steel stents were also stiffer in the radial pressure test when $D_{n}/D_{v}<1.5$.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\textwidth]{Fig9.pdf}
\caption{Experimental measurements of collapse force ($F_c$) vs. displacement ratio ($Z/D$). $D$ is the internal diameter of the fully expanded stent.}
\label{Col}
\end{figure}
\section{Discussion}
\subsection{Results assessment, validation, and limitations}
The assumptions on loading conditions and the resultant deformation in the semi-analytical method are validated through comparison with experiments (\fref{PFA}). The experiments show a slightly higher pressure, especially at higher expansion ratios ($\frac{D_n}{D_v}>1.3$).
This is partly due to friction forces exerted by the wrapping foil used in the experiments, and the loading-unloading hysteresis. Further, in the semi-analytical method pure bending is assumed and torsional loads are ignored, as axisymmetric deformation is assumed. The role of shear in the deflection of the strut accounts for a portion of the error, particularly for the Diamond design, as we can see in~\fref{joint}, where FE predicts the phase transition at a smaller displacement. The curved part of the Chevron design structure is connected to the straight strut by another fillet curve. Since the connection between these parts is assumed to be a straight link for simplification, the results deviate from the experiments. This conclusion is also valid for the fillet that connects the curved and straight parts of the Z design (see~\fref{Fig_5}).
In~\fref{Comp}(a), we observe a higher radial pressure (up to 30\% oversizing, $D_n / D_v=1.3$) for the Diamond and Braided designs. They offer more scaffolding than the Z and Chevron designs due to higher coverage. The radial pressure is a function of the bending force in each unit-cell, and the bending force itself is a function of the cross-section area, the length of the strut, the material of the stent, and the foreshortening according to~\req{e7}. Hence, with higher foreshortening, the material volume per unit length increases and the total contact surface decreases, which leads to a higher radial pressure. Another remarkable observation is the relationship between compliance and collapse resistance. By comparing~\fref{Comp}(b) and \fref{Col}, we can conclude that the steel stents are less susceptible to collapse. Consequently, collapse resistance is inversely correlated with compliance. When the collapse mode has a spatially non-uniform deformation as a result of structural instability, the analytical method given here is not applicable. However, given the relationship between collapse resistance and radial compliance (calculated based on~\req{e9}), it is possible to qualitatively compare the collapse behavior of the stent designs.
The limitations of the present study arise in both the experimental and analytical modelling. The \emph{in vitro} experiment of Section 2 only considers uniform deployment and does not account for a non-uniform lumen or a curved anatomy of the vein. In general it is a challenge to excise fibrotic veins, and the vein material properties are difficult to emulate using polymeric tubes. This means that the vein-stent interaction is not accounted for in this study. This is an area where further work is needed. However, for a given vein model the relative performance that we report in this study is expected to hold. An alternative approach to measure the radial pressure is to use an aperture-type (crimper) machine (see~\citep{dabir2018,mckenna2020}). The limitations listed above are still unavoidable. Another consideration is the effect of foreshortening on the radial pressure test. In the setup used here, it is easy to use a wider fabric to account for stent elongation during the crimp test. In aperture-type devices, however, the length of the stent is limited to the device capacity. In the analytical approach (Section 3), we ignored the frictional forces, and the mechanical interaction between the vein wall and the stent was modelled as a uniform pressure distribution, which is not the case for curved vessel geometries. This effect can be included in future studies by accounting for the elasticity of the vessel wall.
\subsection{Clinical significance}
This study compares four different venous stent designs (Chevron, Z, Diamond, and Braided designs) based on their collapse, foreshortening, radial pressure, and compliance. Usually, stent deployment for an abnormal/diseased vein is performed after vein angioplasty under fluoroscopy. Consequently, a surgeon can observe the length of the occlusion, the regions with high forces (based on the shape of the inflated balloon), and the stiffness of the occluded part by observing the balloon pressure. Based on the mechanical properties of the diseased vein, we can suggest the most suitable design for a specific occlusion type. Based on our study, we present a summary of some common vein occlusion scenarios.
\begin{itemize}
\item Z design has superior performance against collapse deformation. Consequently, it may be more reliable for treating diseases like May-Thurner syndrome, which tend to apply a localized force.
\item Nitinol stents are more compliant and thus follow joint movements. Accordingly, they are more suitable for deployment in proximity of expected major vessel bending during limb movements. The Diamond and Braided designs apply a higher radial pressure; thus, they may be chosen for long lesions with high recoil.
\item Stents have a number of anchors at both ends, which attach them to the vessel wall to avoid migration. If predicting the locations of the stent tips is critical (e.g., deployment close to a branch orifice), Z design can be a good choice: because of its small foreshortening in comparison to the other designs, the final location of the ends can be predicted.
\end{itemize}
\section{Conclusions}
This study compared four different stents currently used in veins. Two designs (Z and Braided stents) are off-label while the remaining two (Diamond and Chevron) are specifically designed for venous stenting. Deformation characteristics of all four designs are compared under identical loading conditions through \emph{in vitro} testing and semi-analytical modelling. Particular attention is given to foreshortening, compliance, radial pressure, and collapse resistance. An inverse correlation between radial compliance and collapse resistance is found, together with a correlation between foreshortening and radial pressure. A good agreement is found between the predictions of the unit-cell based semi-analytical modelling and the experiments. Relative merits of each stent design for common vein occlusion scenarios are identified. Venous stenting is a relatively new area of investigation compared with arterial stents. While we expect the relative comparison across the designs to hold, further work on vein-stent interaction is needed.
\section*{Conflict of interest}
The authors declare no conflict of interest regarding the choice of candidate stents. No human or animal testing was conducted for this study.
\section*{Acknowledgment}
We sincerely acknowledge the Division of Vascular Surgery at Vancouver General Hospital for providing the stent models. Furthermore, the funding through a Discovery Grant to Srikantha A. Phani from Natural Sciences and Engineering Research Council (NSERC) Canada is acknowledged.
\section{Introduction}
\renewcommand{\baselinestretch}{1.3}
Non-decoupling effects of heavy particles in the low energy
observables have been for a long time an indirect but crucial
test for the discovery of new particles. The most recent example
is the top quark, which contributes significantly to the
observables that measure the electroweak (EW) radiative
corrections, as for instance $\Delta \rho$ \cite{VELT1}. In
particular, the latest global fit of the electroweak parameters
from LEP data gives a quite strong constraint for the allowed
top mass value \cite{LEPTOP}. This indirect search supports the
recently announced evidence of $t {\bar t}$ production at CDF
\cite{CDFTOP} and will certainly contribute to get a final
confirmation of the existence of the top quark.
The non-decoupling effects of the Higgs boson are, however,
weaker than in the case of the top quark due to the screening
theorem \cite{VELT2}. According to this theorem, the sensitivity
in the low energy observables to the Higgs boson mass is at most
logarithmic at one loop. This fact has made in the past a
difficult task to disentangle the Higgs from the dominant
fermionic effects and, therefore, no significant bound on the
Higgs mass has been obtained so far from the analysis of LEP
data. This situation, however, will likely improve in the near
future due to the increasing precision of electroweak
measurements and the forthcoming CDF data. A confirmation of
the top quark is still needed, as well as a precise measurement
of the top quark mass, but the important point is that the
search for evidence of the Higgs boson in electroweak radiative
corrections is now beginning to be plausible.
On the other hand, regarding the theoretical aspects, the
leading logarithmic Higgs mass effects in the low energy
observables are known up to one-loop level since the pioneering
works by Appelquist and Bernard \cite{AB} and by Longhitano
\cite{LON}. Their strategy was to use the symmetry properties
of the $SU(2)_{\rm L} \times U(1)_{\rm Y}$ gauged non-linear $\sigma$-model (GNL) \cite{CHG}
along with a systematic power-counting analysis to provide a
list of these logarithmic Higgs mass dependent terms. However,
at present, it is known that these leading logarithmic terms are
not sufficient to discriminate a heavy Higgs possibility from an
alternative symmetry breaking scenario that is required to
respect the same SM symmetries. These logarithmic contributions
are a consequence of the general gauge and custodial symmetry
requirements of the low energy structure of EW interactions and
therefore they will be the same irrespective of the particular
choice for the breaking dynamics, with the Higgs mass being
replaced by some alternative physical mass. Thus, if one wants
to reveal the nature of the symmetry breaking from low energy
observables, one has to go beyond the leading logarithmic
effects.
In this paper, we present the complete calculation of the
leading and next-to-leading non-decoupling effects of a heavy
Higgs boson in the SM to one loop level \footnote{For
simplicity, we will ignore the fermions in all the discussion
but their contribution, which is assumed here to be the SM one,
has to be added in any comparison with data.}. These genuine
Higgs boson effects cannot be obtained using the GNL as in
\cite{AB,LON}, but must be calculated directly from the
evaluation of the one loop diagrams in the SM. Furthermore, in
order to classify in a systematic way those effects, we will use
here the electroweak chiral Lagrangian (EChL) \cite{HR2}. Our
approach is based on effective field theory methods, in which
the non-decoupling effects of a heavy Higgs boson are
represented, at energies below the Higgs mass, by certain set of
gauge invariant effective operators of the EChL.
The EChL is basically a non-linear sigma model coupled to the
$SU(2)_{\rm L} \times U(1)_{\rm Y}$ gauge fields, where the Higgs field has been removed from
the physical spectrum of the theory. The model respects the
fundamental symmetries of the SM, namely $SU(2)_{\rm L} \times U(1)_{\rm Y}$ gauge invariance
spontaneously broken to $U(1)_{\rm em}$ and the custodial
symmetry $SU(2)_{\rm C}$ of the pure scalar sector, but does not
include explicitely a particular dynamics for the symmetry
breaking \cite{HR2}-\cite{BDV}. Although this Higgs-less
parametrization of EW interactions is non-renormalizable, the
theory can be rendered finite to one loop by adding gauge
invariant operators up to dimension four. These effective
operators parametrize the low energy effects of the underlying
fundamental dynamics of the symmetry breaking. In particular,
the EChL can be regarded as an effective theory of the SM in
which the Higgs field has been integrated out, and its effects
at energies well below the Higgs mass are parametrized by the
chiral effective operators \cite{HR2}. We believe that this
kind of approach may be interesting for several reasons. First
of all, it provides a gauge invariant way of separating the
non-decoupling Higgs boson effects from the rest of the EW
radiative corrections. On the other hand, the EChL is a general
framework in which one can analyze the low energy effects not
only of a heavy Higgs in the SM, but of more general breaking
dynamics characterized by the absence of light modes \cite{AW}.
It is then desirable to have the EChL that parametrizes a SM
Higgs as a fundamental reference model.
In our previous work \cite{HR2}, we discussed a general
procedure to obtain the EChL operators by matching the SM
predictions in the limit of large Higgs mass with the
predictions from the EChL to one loop order. The subset of EChL
operators involved in the two and three-point Green's functions
of the gauge fields were also obtained there. In this work, we
complete the calculation of the EChL operators by analyzing the
four-point Green's functions for gauge fields. We will also
discuss here in more detail the relation between the
renormalization of both the SM and the effective theory, which
is crucial for understanding the true meaning of the effective
operators. We will show with some explicit examples how to
calculate observables in the EChL parametrization of EW
interactions.
The paper is organized as follows. Section two is a survey of
the electroweak chiral Lagrangian approach, in which we also
include a brief discussion of the formal renormalization
procedure of the effective theory and the matching conditions.
Section three describes the computation of the SM
four-point Green's functions in the large $M_H$ limit. The
renormalization prescription chosen for the SM is fixed in this
section to be the on-shell scheme. The matching equations will
be solved in section four where we present the results for the
complete set of EChL bare parameters. Section five discusses
the dependence of the bare EChL parameters on the
renormalization prescription fixed for the underlying SM. We
explain in section six the relation between the bare EChL
parameters and the renormalized EChL parameters for different
renormalizations of the effective theory. In section seven we
explain how to compute observables to one loop with the EChL and
demonstrate, for some simple observables, that the results
agree with previous calculations in the literature. Finally,
section eight is devoted to the conclusions.
\section{The electroweak chiral Lagrangian}
The EChL is the simplest effective theory of EW interactions
that parametrizes the physics of the $SU(2)_{\rm L} \times U(1)_{\rm Y}$ breaking dynamics at
low energies. The assumption made in this approach is that,
whatever the $SU(2)_{\rm L} \times U(1)_{\rm Y}$ breaking interactions may be, the particles
involved in the symmetry breaking are heavier than the W and Z
bosons. The EChL is then a low energy formulation of EW
interactions which contains just the ``light'' gauge and would-be
Goldstone fields, satisfying the basic requirement of $SU(2)_{\rm L} \times U(1)_{\rm Y}$
gauge invariance spontaneously broken to $U(1)_{em}$:
\begin{equation}
\cl_{\rm EChL} = \cl_{\rm NL} + \sum_{i=0}^{13} {\cal L}_{i}. \label{ECL}
\end{equation}
Its basic structure is a gauged non-linear sigma model $\cl_{\rm NL}$,
where a non-linear parametrization of the would-be Goldstone
bosons is coupled to the $SU(2)_{\rm L} \times U(1)_{\rm Y}$ gauge fields
\begin{equation}
\cl_{\rm NL} = \frac{v^2}{4}\; Tr\left[ D_\mu U^\dagger D^\mu U \right]
+ \frac{1}{2}\; Tr\left[ \wh_{\mu \nu} \wh^{\mu \nu} + \bh_{\mu \nu} \bh^{\mu \nu}
\right] + {\cal L}_{\rm R_\xi} + \cl_{\rm FP}^{\rm NL},
\label{NLL}
\end{equation}
where the bosonic fields have been parametrized as
\begin{eqnarray}
U & \equiv & \displaystyle{\exp\left( {i \;
\frac{\vec{\tau}\cdot\vec{\pi}}{v}}\right)},\;\;\;
v = 246 \;{\rm GeV}, \;\;\; \vec{\pi} = (\pi^1,\pi^2,\pi^3),
\nonumber\\
{\cal W}_\mu & \equiv & \frac{ -i}{2}\;
\vec{W}_\mu \cdot \vec{\tau}, \nonumber\\
{\cal B}_\mu & \equiv & \frac{ -i}{2} \; B_\mu \;
\tau^3,\label{FPAR}
\end{eqnarray}
and the covariant derivative and the field strength tensors are
defined as
\begin{eqnarray}
D_\mu U & \equiv & \partial_\mu
U - g {\cal W}_\mu U + g' U {\cal B}_\mu, \nonumber\\[2mm]
\wh_{\mu \nu} & \equiv & \partial_\mu {\cal W}_\nu - \partial_\nu {\cal W}_\mu -
g [{\cal W}_\mu, {\cal W}_\nu],\nonumber\\[2mm]
\bh_{\mu \nu} & \equiv & \partial_\mu {\cal B}_\nu - \partial_\nu {\cal B}_\mu.
\label{FTEN}
\end{eqnarray}
The physical fields are given by
\begin{eqnarray}
W^\pm_\mu & = & \frac{W^1_\mu \mp i W^2_\mu}{\sqrt 2},
\nonumber\\[2mm] Z_\mu & = & c\; W^3_\mu - s\; B_\mu
\nonumber, \\[2mm] A_\mu & = & s\; W^3_\mu + c\; B_\mu,
\end{eqnarray}
where $c = \cos \theta_{\rm w}, \; s = \sin
\theta_{\rm w}$ and the weak
angle is defined by $\tan \theta_{\rm w} = g' / g$.
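As a cross-check of these definitions, the rotation to the physical $Z$ and $A$ fields can be verified by diagonalizing the neutral mass matrix generated by the first term of $\cl_{\rm NL}$ at $U=1$. The following sympy sketch (our illustration, not part of the original text) checks that this matrix has a massless eigenstate, the photon, and a massive one with $M_Z^2 = (g^2+g'^2)v^2/4$ along $Z = c\,W^3 - s\,B$:

```python
import sympy as sp

g, gp, v = sp.symbols('g gp v', positive=True)  # gp stands for g'

# Neutral mass term from (v^2/4) Tr[D_mu U^+ D^mu U] at U = 1:
# (v^2/8)(g W3 - g' B)^2 = (1/2) (W3, B) M2 (W3, B)^T
M2 = (v**2 / 4) * sp.Matrix([[g**2, -g * gp],
                             [-g * gp, gp**2]])

MZ2 = (g**2 + gp**2) * v**2 / 4

# det = 0 and trace = MZ^2 fix the eigenvalues to {0, MZ^2}
assert sp.simplify(M2.det()) == 0
assert sp.simplify(M2.trace() - MZ2) == 0

# the massive eigenvector is (c, -s), i.e. Z = c W3 - s B
c = g / sp.sqrt(g**2 + gp**2)
s = gp / sp.sqrt(g**2 + gp**2)
zvec = sp.Matrix([c, -s])
assert sp.simplify(M2 * zvec - MZ2 * zvec) == sp.zeros(2, 1)
```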
The second term in eq.(\ref{ECL}) includes the complete set of
$SU(2)_{\rm L} \times U(1)_{\rm Y}$ and CP invariant operators up to dimension four
\footnote{ There is an extra term ${\cal L}_{14}$ proportional to
$\epsilon^{\mu\nu\alpha\beta}$ that is $CP$ conserving but $C$
and $P$ violating. It is not relevant in the absence of
fermion contributions and will not be considered here.} that
were classified by Longhitano in \cite{LON}
\footnote{ The relation with Longhitano's notation is the
following: $
a_0=\frac{g^2}{g'^2}\beta_1\;;\;a_1=\frac{g}{g'}\alpha_1\;;\;
a_2=\frac{g}{g'}\alpha_2\;;\;a_3=-\alpha_3\;;\;a_i=\alpha_i\;,
i=4,5,6,7\;;\;a_8=-\alpha_8\;;\;a_9=-\alpha_9\;;\;
a_{10}=\alpha_{10}/2\;;\;a_{11}=\alpha_{11}\;;\;
a_{12}=\alpha_{12}/2\;;\;a_{13}=\alpha_{13}$.
Notice that the definition of $a_0$ is
different here than in \cite{HR2}.}:
\begin{eqnarray}
{\cal L}_{0} & = & a_0 g'^2 \frac{v^2}{4} \left[ Tr\left(
T V_\mu \right) \right]^2 \nonumber\\[2mm]
{\cal L}_{1} & = & a_1 \frac{i g g'}{2} B_{\mu\nu}
Tr\left( T \wh^{\mu \nu} \right) \nonumber\\[2mm]
{\cal L}_{2} & = & a_2 \frac{i g'}{2} B_{\mu\nu}
Tr\left( T [V^\mu,V^\nu ] \right) \nonumber\\[2mm]
{\cal L}_{3} & = & a_3 g Tr\left( \wh_{\mu \nu} [V^\mu,V^\nu ]\right)
\nonumber\\[2mm]
{\cal L}_{4} & = & a_4 \left[ Tr\left( V_\mu V_\nu \right)
\right]^2 \nonumber\\[2mm]
{\cal L}_{5} & = & a_5 \left[ Tr\left( V_\mu V^\mu \right)
\right]^2 \nonumber\\[2mm]
{\cal L}_{6} & = & a_6 Tr\left( V_\mu V_\nu \right) Tr\left( T V^\mu
\right) Tr\left( T V^\nu \right)\nonumber\\[2mm]
{\cal L}_{7} & = & a_7 Tr\left( V_\mu V^\mu \right) \left[
Tr\left( T V^\nu \right) \right]^2\nonumber\\[2mm]
{\cal L}_{8} & = & a_8 \frac{g^2}{4} \left[ Tr\left( T \wh_{\mu \nu} \right)
\right]^2 \nonumber\\[2mm]
{\cal L}_{9} & = & a_9 \frac{g}{2} Tr\left( T \wh_{\mu \nu} \right)
Tr\left( T [V^\mu,V^\nu ] \right) \nonumber\\[2mm]
{\cal L}_{10} & = & a_{10} \left[ Tr\left( T V_\mu \right)
Tr\left( T V_\nu \right) \right]^2 \nonumber\\[2mm]
{\cal L}_{11} & = & a_{11} Tr\left( ( D_\mu V^\mu )^2 \right)
\nonumber\\[2mm]
{\cal L}_{12} & = & a_{12} Tr\left( T D_\mu D_\nu V^\nu \right)
Tr \left( T V^\mu \right)\nonumber\\[2mm]
{\cal L}_{13} & = & a_{13} \frac{1}{2} \left[ Tr \left( T D_\mu V_\nu
\right) \right]^2 \label{Li}
\end{eqnarray}
where the basic building blocks are defined as
\begin{equation}
T \equiv U \tau^3 U^\dagger, \hspace{2cm}
V_\mu\equiv(D_\mu U) U^\dagger .
\end{equation}
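The algebraic properties of these building blocks are easy to check numerically. The sketch below (our illustration; the pion field values are arbitrary) uses the SU(2) identity $\exp(i\,\vec\pi\cdot\vec\tau/v)=\cos\theta + i\sin\theta\,\hat n\cdot\vec\tau$ to verify that $U$ is unitary and that $T = U\tau^3 U^\dagger$ is hermitian, traceless and squares to the identity:

```python
import numpy as np

# Pauli matrices
tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

v = 246.0
pi = np.array([0.3, -0.7, 1.1])     # arbitrary would-be Goldstone values (GeV)
theta = np.linalg.norm(pi) / v
n = pi / np.linalg.norm(pi)

# U = exp(i pi.tau / v) = cos(theta) 1 + i sin(theta) n.tau
ntau = sum(ni * ti for ni, ti in zip(n, tau))
U = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * ntau

assert np.allclose(U @ U.conj().T, np.eye(2))    # U is unitary
T = U @ tau[2] @ U.conj().T                      # T = U tau3 U^+
assert np.allclose(T.conj().T, T)                # hermitian
assert abs(np.trace(T)) < 1e-12                  # traceless
assert np.allclose(T @ T, np.eye(2))             # T^2 = 1
```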
The above particular basis of invariants can be transformed into
a new one with only 11 independent structures by making use of the
classical equations of motion \cite{MUL}. The new effective Lagrangian
is then given in terms of a new set of chiral parameters $\hat{a}_i$
given by: $\hat{a}_1 = a_1 + a_{13}$, $\hat{a}_4 = a_4 - a_{13}$,
$\hat{a}_5 = a_5 + a_{13}$, $\hat{a}_6 = a_6 - a_{13}$,
$\hat{a}_7 = a_7 + a_{13}$, $\hat{a}_8 = a_8 + a_{13}$,
$\hat{a}_{11}=\hat{a}_{12}=\hat{a}_{13}=0$; $ \hat{a}_i = a_i,
i=0,2,3,9,10$. Both effective Lagrangians, however, will give rise
to the same physical on-shell amplitudes. In this work, since
we will not restrict ourselves to calculate on-shell matrix elements,
we keep the complete basis given in eq.(\ref{Li}).
We will work in a generic R$_\xi$ gauge, the gauge fixing term
${\cal L}_{{\rm R}_\xi}$ and the Faddeev-Popov Lagrangian $\cl_{\rm FP}^{\rm NL}$ in
eq.(\ref{ECL}) were given in our previous work~\cite{HR2}. We refer
the reader to this work for the detailed formulas and a discussion on
these terms. It is worth just recalling here that $\cl_{\rm FP}^{\rm NL}$ does not
coincide with the usual Faddeev-Popov Lagrangian of the SM due to the
non-linearity of the would-be Goldstone bosons under infinitesimal
$SU(2)_{\rm L} \times U(1)_{\rm Y}$ transformations. Furthermore, because of the non-linear
realization of the gauge symmetry, some of the couplings in
$\cl_{\rm NL}$ have different Feynman rules than in the SM. We collect
in fig.(1) the subset of them that are relevant for the present
calculation.
It is also important to mention that in a R$_\xi$ gauge, the
complete electroweak chiral Lagrangian must be BRS invariant and
include also effective operators involving the ghost fields.
Longhitano\cite{LON} showed, however, that the subset of
operators given above is sufficient to absorb the divergences of
the gauged non-linear sigma model if the Landau gauge is chosen.
As will be shown later, we have demonstrated in this work that
the set of operators given in eq.(\ref{Li}) is also sufficient, in a
general R$_\xi$ gauge, to render finite the two, three and
four-point Green's functions with external gauge fields.
The non-linear sigma model in eq.(\ref{NLL}) is not a
renormalizable theory, as increasing the number of loops in a
calculation implies the appearance of new divergent structures
of higher and higher dimension. However, the EChL is an
effective theory that can be renormalized order by order in the
loop expansion. In particular, at one loop order, the new
divergences generated by a one loop calculation with $\cl_{\rm NL}$ can
be absorbed into redefinitions of the effective operators given
in eq.(\ref{Li}) \cite{LON}. Therefore, one can obtain finite
renormalized Green's functions if one makes a suitable
redefinition of the fields and parameters of the EChL, among
which the chiral parameters $a_i$ must be included \cite{W}-
\cite{GL2}. Formally, one defines the renormalized quantities in the
effective theory by the following relations
\begin{center}$
\begin{array}{ll}
B_{\mu}^b \; = \; \widehat{Z}_B^{1/2} \; B_\mu, \hspace{1cm}& g'^b
\; =\; \widehat{Z}_B^{-1/2}\; ( g' - \widehat{\delta g'} ), \\[2mm]
\vec{W}_\mu^b \; =\; \widehat{Z}_W^{1/2}\; \vec{W}_\mu, &
g^b \; =\; \widehat{Z}_W^{-1/2}\; ( g - \widehat{\delta g} ),
\\[2mm]
\vec{\pi}^b \; =\; \widehat{Z}_\Phi^{1/2}\; \vec{\pi}, &
v^b \; = \; \widehat{Z}_\Phi^{1/2}\; ( v -\widehat{ \delta v}),\\[2mm]
\xi_B^b \; = \; \xi_B\; ( 1 + \widehat{\delta \xi}_B), \hspace{1cm}&
\xi_W^b \; = \; \xi_W \;( 1 + \widehat{\delta \xi}_W),
\end{array}$
\begin{equation}
a_i^b \; = \; a_i(\mu) \; + \; \delta a_i , \label{RET}
\end{equation}
\end{center}
where the renormalization constants of the effective theory are
$ \widehat{Z}_i \equiv 1 + \widehat{\delta Z_i}$ and the
superscript b denotes bare quantities. We use the hatted
notation to distinguish counterterms and Green's functions in the
effective theory from the corresponding quantities in the SM.
The 1PI renormalized Green's functions of the effective theory
to one loop will be generically denoted by
\begin{equation}
\widehat{\Gamma}^{\rm R} = \widehat{\Gamma}^{\rm T} +
\widehat{\Gamma}^{\rm C} + \widehat{\Gamma}^{\rm L},
\label{GFE}
\end{equation}
where the superscript R denotes the renormalized function and
the superscripts T, C and L denote the tree level, counterterm
and loop contributions respectively. We will
discuss in section 6 the on-shell renormalization of the effective
theory, giving explicit expressions for the counterterms
introduced in eq.(\ref{RET}). For the moment, in order to discuss the
matching procedure, we will treat the counterterm contributions to
the renormalized functions of eq.(\ref{GFE}) just at a formal level.
We would like to focus now our attention on the chiral
parameters. Once a particular renormalization scheme has been
chosen to fix the counterterms of the effective theory, the
renormalized $a_i(\mu)$ parameters remain as free parameters
that cannot be determined within the framework of the low
energy effective theory. The values of the renormalized chiral
parameters can be constrained from the experiment, as they are
directly related to different observables in scattering
processes \cite{DH,BDV},\cite{DHT}-\cite{MAY} and in precision
electroweak measurements (\cite{HT}-\cite{EH},\cite{PES}, see also section
7); but to have any theoretical insight on their values, one has
to relate the effective theory with a particular underlying
fundamental dynamics of the symmetry breaking.
If the underlying fundamental theory is the standard model with a
heavy Higgs, the chiral parameters can be determined by matching the
predictions of the SM in the large Higgs mass limit and those of the
EChL, at one loop level. By heavy Higgs we mean a Higgs mass much
larger than any external momenta and light particle masses ($ p^2,
M_Z^2 \ll M_H^2$) so that one can make a low energy expansion, but
smaller than say 1 TeV, so that a perturbative loop calculation is
reliable.
We will impose here the strongest form of matching \cite{GEO} by
requiring that all renormalized one-light-particle irreducible (1LPI)
Green's functions are the same in both theories at scales $\mu \leq
M_H$. The 1LPI functions are, by definition, the Green's functions
with only light particles in the external legs and whose contributing
graphs cannot be disconnected by cutting a single light particle
line. This matching condition is equivalent to the equality of the
light particle effective action in the two descriptions.
Some other forms of matching have been discussed in the literature,
by requiring the equality of the two theories at the level of
the physical scattering amplitudes \cite{DH} or connected Green's
functions \cite{DOM}. These requirements, however, complicate the
calculation unnecessarily while giving in the end the same results
for the physical observables.
There is also some discussion in the literature \cite{DOM} on the
dependence of the Green's functions on the parametrization
chosen for the would-be Goldstone bosons. We will fix here the particular
parametrizations of eq.(\ref{FPAR}) in the effective theory
and eq.(\ref{SMPAR}) in the SM. Of course, the physical observables
will not depend on this particular choice.
In order to completely determine the chiral parameters in terms of
the parameters of the SM, it is enough to impose matching conditions
in the two, three, and four-point 1LPI renormalized Green's functions
with external gauge fields. We have worked in a general
R$_\xi$-gauge to show that the chiral parameters $a_i$ are
$\xi$-independent. We use dimensional regularization and the on-shell
subtraction scheme.
The SM Green's functions are non-local; in particular, they depend on
$p / M_H$ through the virtual Higgs propagators. One has to make a
large $M_H$ expansion to represent the virtual Higgs boson effects by
the local effective operators ${\cal L}_i$. In this step, care must
be taken since clearly the operations of making loop integrals and
taking the large $M_H$ limit do not commute. Thus, one must first
regulate the loop integrals by dimensional regularization, then
perform the renormalization with some fixed prescription (on-shell in
our case) and at the end take the large $M_H$ limit, with $M_H$ being
the renormalized Higgs mass. From the computational point of view,
in the large $M_H$ limit we have neglected contributions that depend
on $(p/M_H)$ and/or $(M_V/M_H$, with $M_V = M_W, M_Z)$ and vanish when the
formal $M_H \rightarrow \infty$ limit is taken. We show in appendix A
one illustrative example of how to take the large $M_H$ expansion of
the loop integrals.
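To illustrate the expansion technique of appendix A, consider the Feynman-parameter integral $\int_0^1 dx\,\log[(M_H^2-p^2x(1-x))/\mu^2]$, a schematic $B_0$-type structure with two heavy internal lines (our toy example, not one of the paper's actual integrals). Splitting off $\log(M_H^2/\mu^2)$, expanding first in the small ratio $r \equiv p^2/M_H^2$, and integrating term by term yields the local low-energy corrections:

```python
import sympy as sp

x, r = sp.symbols('x r')   # r = p^2 / M_H^2, the small expansion parameter

# log[(M_H^2 - p^2 x(1-x))/mu^2] = log(M_H^2/mu^2) + log(1 - r x(1-x));
# the first piece combines with Delta_epsilon into Delta_hat_epsilon
integrand = sp.log(1 - r * x * (1 - x))

# expand in r, then integrate over the Feynman parameter x
expanded = sp.series(integrand, r, 0, 3).removeO()
result = sp.integrate(sp.expand(expanded), (x, 0, 1))

# leading large-M_H corrections: -r/6 - r**2/60
assert sp.simplify(result + r / 6 + r**2 / 60) == 0
```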
The matching procedure can be summarized by the following relation
among renormalized 1LPI Green's functions
\begin{equation}
\Gamma^{\rm R}_{\rm SM} (\mu ) \; = \;
\widehat{\Gamma}^{\rm R}_{\rm EChL}
(\mu ) \; , \;\;\;\;\;\;\;\;\;\mu \leq M_H,
\label{match}
\end{equation}
where the large Higgs mass expansion of the SM Green's functions has
to be made as explained above. This matching condition imposes a
relation between the renormalization of the SM and the
renormalization of the effective theory. We have chosen to
renormalize both theories in the on-shell scheme, so that the
renormalized parameters are the physical masses and coupling
constants. Therefore, the renormalized parameters are taken to be the
same in both theories and the matching conditions will provide
relations between the SM and the EChL counterterms
\footnote{ In some related literature on effective field theories
\cite{GEO,SANTA}, the choice of a mass-independent subtraction
prescription (${\overline{\rm MS}}$) in both theories has also been discussed. In
that case, the matching procedure relates the running
${\overline{\rm MS}}$-renormalized parameters, that are different in the fundamental
and the effective theories.}.
The matching condition (\ref{match}) represents
symbolically a system of coupled tensorial equations (as many as 1LPI
functions for external gauge fields) with several unknowns, namely
the complete set of parameters $a_i^b$ that we
are interested in determining. In our previous work \cite{HR2}, we
solved the subset of coupled equations involving the two-point and
three-point functions. From this subset, we were able to determine
the chiral parameters $a_0^b, a_1^b, a_2^b, a_3^b, a_8^b, a_9^b,
a_{11}^b, a_{12}^b$ and $a_{13}^b$. In section 4, we will solve
the matching equations for the four-point Green's functions, thus
the set of EChL parameters for a heavy Higgs will be completed.
But before that, we have to set a renormalization prescription
for the standard model.
\section{Renormalization of the standard model}
We start by writing down the SM Lagrangian
\begin{equation}
{\cal L}_{\rm SM} = (D_\mu \Phi)^\dagger (D^\mu \Phi) + \mu^2
\Phi^\dagger \Phi - \lambda (\Phi^\dagger \Phi)^2 +
\frac{1}{2} Tr \left( \wh_{\mu \nu} \wh^{\mu \nu} + \bh_{\mu \nu} \bh^{\mu \nu} \right) +
{\cal L}_{{\rm R}_\xi} + {\cal L}_{\rm FP} ,\\[2mm]
\end{equation}
where
\begin{eqnarray}
\Phi & = & \frac{1}{\sqrt 2}\left(
\begin{array}{c}\phi_1 - i \phi_2 \\
\sigma + i \chi \end{array}\right), \hspace{2cm}
(\pi_1,\pi_2,\pi_3) \equiv (-\phi_2,\phi_1,-\chi) ,
\nonumber\\[2mm]
D_\mu \Phi & = & ( \partial_\mu + \frac{1}{2} i g
\vec{W}_\mu\cdot\vec{\tau} +
\frac{1}{2} i g' B_\mu) \Phi . \label{SMPAR}
\end{eqnarray}
$\wh_{\mu \nu}, \bh_{\mu \nu}$ are defined in eqs.(\ref{FPAR},\ref{FTEN}),
${\cal L}_{{\rm R}_\xi}$ and ${\cal L}_{\rm FP}$ are the usual R$_{\xi}$
gauge fixing and Faddeev--Popov terms of the standard model.
We rescale the fields and parameters as follows
\begin{equation}
\begin{array}{ll}
B_{\mu}^b = Z_B^{1/2} B_\mu , \hspace{2cm} &
\vec{W}_\mu^b = Z_W^{1/2} \vec{W}_\mu ,\\[2mm]
\Phi^b = Z_\Phi^{1/2} \Phi ,&
v^b = Z_\Phi^{1/2} ( v - \delta v ) , \\[2mm]
g^b = Z_W^{-1/2} ( g - \delta g ) , &
g'^b = Z_B^{-1/2} ( g' - \delta g' ) ,\\[2mm]
\mu^b = Z_\Phi^{-1/2} ( \mu - \delta \mu ) ,&
\lambda^b = \lambda (1 - \delta \lambda / \lambda) ,\\[2mm]
\xi_B^b = \xi_B ( 1 + \delta \xi_B ) , &
\xi_W^b = \xi_W ( 1 + \delta \xi_W ) .
\end{array} \label{REPS}
\end{equation}
where the renormalization constants of the SM are
$ Z_i \equiv 1 + \delta Z_i$ and the superscript b denotes bare
quantities.
We have chosen to renormalize the SM in the
on-shell scheme. We choose the physical masses, $M_H$, $M_W$,
$M_Z$ and $g$ as our renormalized parameters.
The weak mixing angle is defined in terms of physical
quantities, as it is usual in the on-shell scheme
\begin{equation}
\cos^2 \theta_W \equiv \frac{M_W^2}{M_Z^2}
\end{equation}
and from $g$ and $\theta_W$ one derives the coupling constant
$g' = g \tan \theta_W$.
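Numerically, with approximate physical gauge boson masses (illustrative values, not quoted from the paper), these on-shell definitions give:

```python
import math

MW, MZ = 80.38, 91.19           # GeV, approximate physical masses (illustrative)
cos2 = MW**2 / MZ**2            # on-shell definition of cos^2(theta_W)
sin2 = 1.0 - cos2               # comes out near 0.22

g = 0.65                        # illustrative SU(2)_L coupling value
gp = g * math.sqrt(sin2 / cos2) # g' = g tan(theta_W)

# consistency of the derived quantities
assert abs(cos2 + sin2 - 1.0) < 1e-12
assert abs(gp / g - math.tan(math.acos(math.sqrt(cos2)))) < 1e-12
```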
The 1LPI renormalized Green's functions in the standard model
to one loop will be generically denoted by
\begin{equation}
\Gamma^{\rm R} = \Gamma^{\rm T} + \Gamma^{\rm C} + \Gamma^{\rm L},
\end{equation}
where one has to consider the tree, counterterm and loop
contributions of all the one light particle irreducible diagrams in
the SM; that is, all the diagrams that cannot be disconnected by
cutting a light (non-Higgs) particle line.
In principle, we should give now the whole set of renormalization
conditions defining the SM on-shell scheme. However, to extract from
the matching conditions the values of the chiral parameters $a_i$, we
only need for the moment to evaluate explicitly the SM counterterms
that enter in the renormalization of the diagrams $T_i$ in figs.(3),
that is, the tree level diagrams with an intermediate Higgs boson.
Furthermore, since we are doing a large $M_H$ expansion, it will not
be necessary to give the complete expressions for these SM
counterterms, but just the leading terms that give
non-negligible contributions (i.e., non-vanishing in the large $M_H$
limit) once they are plugged into the matching equations of the
four-point functions. In summary, these considerations imply that we will
need explicit expressions for the tadpole and Higgs mass counterterms
to order $M_H^4$ and for the W and Z mass counterterms to order
$M_H^2$. The other SM counterterms, $\delta Z_W$ and $\delta g$,
have at most a logarithmic dependence on Higgs mass and give
subleading contributions to the renormalization of the $T_i$
diagrams. The $\delta Z_W$ and $\delta g$ counterterms, however,
do contribute to the renormalization of the four-point Green's
functions through the renormalization of the tree level irreducible
diagrams. We do not need to give explicit expressions for them
because, as we will see, they appear in the matching through the
differences $\Delta Z_W = \delta Z_W - \widehat{\delta Z}_W$ and
$\Delta g = \delta g - \widehat{\delta g}$. The values of these
differences will be extracted from the matching.
The renormalization of the scalar sector has been done following the
work of Marciano and Willenbrock~\cite{MAW}.
In order to fix the notation and renormalization for the tadpole
we first write down the SM Lagrangian for the scalar sector in
terms of the would-be Goldstone boson fields $\phi^\pm \equiv
(\phi_1 \mp i \phi_2) / \sqrt{2}$ and $\chi$ and the physical
Higgs boson field $H$. In terms of the bare fields and
parameters, it reads as follows
\begin{eqnarray}
{\cal L}_{\rm SM}^{\rm scalar} & = &
\partial_\mu \phi^{+ b} \partial^\mu \phi^{- b} + \frac{1}{2}
\partial_\mu \chi^b \partial^\mu \chi^b + \frac{1}{2}
\partial_\mu H^b \partial^\mu H^b \nonumber\\[2mm]
& & - \lambda^b \left[ \phi^{+ b} \phi^{- b} (\phi^{+ b} \phi^{- b} +
(\chi^b)^2 + (H^b)^2 ) + \frac{1}{4} ( (\chi^b)^2 + (H^b)^2 )^2
\right] \nonumber\\
& & - \lambda^b v^b \left[ 2 \phi^{+ b} \phi^{- b} H^b + (\chi^b)^2
H^b + (H^b)^3 \right] \nonumber \\[2mm]
& & - \lambda^b (v^b)^2 (H^b)^2 + \frac{\delta T}{v^b}
(\phi^{+ b} \phi^{- b} + \frac{1}{2} (\chi^b)^2 + \frac{1}{2} (H^b)^2 )
+ \delta T H^b \nonumber\\[2mm]
& & + ( \xi {\rm -dependent \;\; terms } )
\label{LESC}
\end{eqnarray}
The tadpole counterterm is defined in terms of bare
parameters as \footnote{ Notice that the sign chosen in the
definition of $\delta T$ is opposite to the one in \cite{MAW}.}
\begin{equation}
\delta T \equiv v^b \left( (\mu^b)^2 - \lambda^b (v^b)^2 \right)
\end{equation}
The renormalization condition for the tadpole is fixed such that
the tadpole loop corrections $T$ are cancelled by the tadpole
counterterm $\delta T$, or equivalently, such that the
renormalized tadpole vanishes. From the computational point of
view, one ignores all tadpole diagrams and tadpole counterterms,
and includes the extra contributions to the renormalized scalar
Green's functions coming from the counterterm $\delta T / v$
in eq.(\ref{LESC}) that is quadratic in the scalar fields.
The bare masses of the Higgs and gauge bosons are taken as
\begin{equation}
(M_H^b)^2 = 2 \; \lambda^b \; (v^b)^2, \hspace{8mm}
(M_W^b)^2 = (g^b)^2 \; (v^b)^2 / 4, \hspace{8mm}
(M_Z^b)^2 = \left( (g^b)^2 + (g^{\prime b})^2 \right)\; (v^b)^2 / 4
\label{MWH}
\end{equation}
so that the following relation among the basic bare parameters holds
\begin{equation}
\lambda^b = \frac{(g^b)^2 \; (M_H^b)^2}{8 \; (M_W^b)^2} \label{LAMB}
\end{equation}
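The relation (\ref{LAMB}) follows algebraically from the definitions (\ref{MWH}); a short sympy check (our illustration) also verifies the tree-level custodial relation between the gauge boson masses:

```python
import sympy as sp

lam, g, gp, v = sp.symbols('lam g gp v', positive=True)  # lam = lambda^b

MH2 = 2 * lam * v**2                 # (M_H^b)^2
MW2 = g**2 * v**2 / 4                # (M_W^b)^2
MZ2 = (g**2 + gp**2) * v**2 / 4      # (M_Z^b)^2

# eq.(LAMB): lambda^b = (g^b)^2 (M_H^b)^2 / (8 (M_W^b)^2)
assert sp.simplify(g**2 * MH2 / (8 * MW2) - lam) == 0

# tree-level custodial relation: MW^2/MZ^2 = g^2/(g^2 + g'^2)
assert sp.simplify(MW2 / MZ2 - g**2 / (g**2 + gp**2)) == 0
```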
The renormalized masses are defined by
\begin{equation}
(M_H^b)^2 = M_H^2 + \delta M_H^2, \hspace{8mm}
(M_W^b)^2 = M_W^2 + \delta M_W^2, \hspace{8mm}
(M_Z^b)^2 = M_Z^2 + \delta M_Z^2,
\end{equation}
so that the on-shell renormalization conditions
\begin{eqnarray}
& {\rm Re} \left[ \Sigma_{H}^{\rm R} (q^2 = M_H^2) \right]
= 0,\hspace{1cm} & T^{\rm R} \equiv T + \delta T = 0,\nonumber\\[2mm]
& {\rm Re} \left[ \Sigma_{W}^{\rm R} (q^2 = M_W^2) \right]
= 0, \hspace{1cm}
& {\rm Re} \left[ \Sigma_{Z}^{\rm R} (q^2 = M_Z^2) \right]
= 0, \label{RCON}
\end{eqnarray}
imply that eqs.(\ref{MWH},\ref{LAMB}) are also fulfilled by the renormalized
quantities
\begin{eqnarray}
M_H^2 & = & 2 \lambda v^2, \nonumber\\[2mm]
M_W^2 & = & g^2 v^2 / 4, \nonumber\\[2mm]
M_Z^2 & = & (g^2 + g'^2) v^2 / 4, \nonumber\\[2mm]
\lambda & = & \frac{ g^2 M_H^2}{8 M_W^2}. \label{CORE}
\end{eqnarray}
The renormalization conditions (\ref{RCON}) fix the
values of the SM counterterms to be\footnote{ We
denote by $- i g_{\mu \nu} \Sigma_{\rm V}$, (V=W,Z) and
$- i \Sigma_{\rm H}$ the direct result from the
Feynman diagrams. The tadpole
loops are however denoted by $i T$.}
\begin{equation}
\begin{array}{ll}
\delta M_H^2 = - {\rm Re} \left[ \Sigma_{H}^{\rm L}
(q^2 = M_H^2) \right] +{\displaystyle \frac{\delta T}{v}}, \hspace{1cm}
& \delta T = - T, \\[4mm]
\delta M_W^2 = {\rm Re} \left[ \Sigma_{W}^{\rm L}
(q^2 = M_W^2) \right], \hspace{1cm}
& \delta M_Z^2 = {\rm Re} \left[ \Sigma_{Z}^{\rm L}
(q^2 = M_Z^2) \right]. \vspace{2mm}
\end{array} \label{RC2}
\end{equation}
If one wishes to keep just the non-vanishing
contributions in the large $M_H$ limit to the renormalization
of the tree level diagrams $T_i$, the computation of the unrenormalized
self-energies of eq.(\ref{RC2}) involves just the leading
diagrams collected in fig.(2). These loop diagrams give
the following values of the mass and tadpole counterterms in the
on-shell scheme
\begin{eqnarray}
\frac{\delta M_H^2}{M_H^2} & = & \frac{g^2 M_H^2}{M_W^2}
\frac{ 1}{16 \pi^2} \left[ \frac{3}{2} \hat{\Delta}_\epsilon
+ 3 - \frac{3 \sqrt{3}}{8} \pi \right] + {\cal O}(1),\nonumber\\[2mm]
\frac{\delta M_W^2}{M_W^2} & = & \frac{g^2 M_H^2}{M_W^2}
\frac{- 1}{16 \pi^2} \frac{1}{8} + {\cal O}(1),\nonumber\\[2mm]
\frac{\delta M_Z^2}{M_Z^2} & = & \frac{g^2 M_H^2}{M_W^2}
\frac{- 1}{16 \pi^2} \frac{1}{8} + {\cal O}(1),
\nonumber\\[2mm]
\frac{\delta T / v}{M_H^2} & = & - \frac{g^2 M_H^2}{M_W^2}
\frac{ 1}{16 \pi^2} \frac{3}{8} \left[ \hat{\Delta}_\epsilon
+ 1 \right] + {\cal O}(1), \label{SMCL}
\end{eqnarray}
where
\begin{equation}
\hat{\Delta}_\epsilon = \Delta_\epsilon -
\log \frac{M_H^2}{\mu^2}, \hspace{1cm}
\Delta_\epsilon = \frac{2}{\epsilon} - \gamma_{\rm E} +
\log 4 \pi, \hspace{1cm} \epsilon = 4 - D
\label{EPS}
\end{equation}
and $\mu$ is the usual mass scale of dimensional regularization.
We have checked the agreement of our expressions
with the results of \cite{MAW}.
We have explicitly indicated in the formulas of the SM counterterms
(\ref{SMCL}) that these expressions are truncated to a certain
order in the 1/$M_H$ expansion. This truncation is enough to
keep the non-vanishing effects of a heavy Higgs in the evaluation
of the following combination of SM counterterms
\begin{equation}
\delta S = \frac{M_W^2}{g^2 M_H^2} \left(- \frac{\delta M_H^2}{M_H^2}
+ \frac{ \delta T / v}{M_H^2} + \frac{\delta M_W^2}{M_W^2} \right)
\label{DELS}
\end{equation}
that comes from the renormalization of the tree level diagrams $T_i$
and appears explicitly in the matching equations for the
four-point functions given in appendix B\footnote{In the two-point
functions however, it is necessary to go to the next order in the
large $M_H$ expansion of these terms \cite{HR2}.}.
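With the truncated counterterms of eq.(\ref{SMCL}), the combination $\delta S$ can be evaluated directly. In the sketch below (our illustration; only the displayed leading terms are kept and the ${\cal O}(1)$ remainders are dropped), the common prefactor $g^2M_H^2/M_W^2$ of eq.(\ref{SMCL}) cancels against the $M_W^2/(g^2M_H^2)$ in eq.(\ref{DELS}):

```python
import sympy as sp

D = sp.symbols('Delta_hat')            # \hat{\Delta}_\epsilon
k = 1 / (16 * sp.pi**2)                # common one-loop factor

# eq.(SMCL) with the prefactor g^2 M_H^2 / M_W^2 stripped off
# (it cancels against M_W^2/(g^2 M_H^2) in eq.(DELS)); O(1) pieces dropped
dMH = k * (sp.Rational(3, 2) * D + 3 - 3 * sp.sqrt(3) * sp.pi / 8)
dMW = -k * sp.Rational(1, 8)
dT = -k * sp.Rational(3, 8) * (D + 1)

# eq.(DELS): delta S = -dMH + dT + dMW (in these rescaled variables)
deltaS = -dMH + dT + dMW
target = k * (-sp.Rational(15, 8) * D - sp.Rational(7, 2)
              + 3 * sp.sqrt(3) * sp.pi / 8)
assert sp.simplify(deltaS - target) == 0
```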
\section{Matching equations for the 4-point Green's functions}
In this section we present the results of our calculation of the
4-point Green's functions, giving the set of matching
equations that we have imposed and their solution.
The master equation that summarizes the complete
set of matching conditions for the renormalized four-point functions is
the following:
\begin{equation}
M_{abcd}^{{\rm T} \; \mu \nu \rho \lambda} +
M_{abcd}^{{\rm C} \; \mu \nu \rho \lambda} +
M_{abcd}^{{\rm L} \; \mu \nu \rho \lambda} =
\widehat{M}_{abcd}^{{\rm T} \; \mu \nu \rho \lambda} +
\widehat{M}_{abcd}^{{\rm C} \; \mu \nu \rho \lambda} +
\widehat{M}_{abcd}^{{\rm L} \; \mu \nu \rho \lambda},
\label{MAMA}
\end{equation}
where $abcd = \gamma\gamma WW, \gamma ZWW, ZZWW, WWWW, ZZZZ$.
The calculation of the one loop contributions
$M_{\rm L}$ and $\widehat{M}_{\rm L}$ is the most involved part.
One must include all the 1PI diagrams
in the EChL and all the 1LPI diagrams in the SM.
1LPI diagrams are those that cannot be disconnected by cutting a
single light particle line, that is, a non-Higgs particle line.
One must, in principle, account for all kind of diagrams with
gauge, scalar and ghost fields flowing in the loops.
However, some simplifications occur.
Firstly, a subset of the diagrams that have only light particles in
them is exactly the same in both models, and their contribution can be
simply dropped out from both sides of the matching equation
(\ref{MAMA}). This is the case, for instance, of the subset of
diagrams with only gauge particles in them. Secondly, calculating
explicitly every diagram in the four-point 1LPI SM functions and using
the techniques given in app.A, we have checked that the diagrams
involving both gauge and Higgs particles in the loops give
vanishing contributions in the large $M_H$ limit to the four-point
functions. Only those with just scalar (Goldstone boson or Higgs)
particles in the loops do contribute with non-vanishing corrections
in the large $M_H$ limit to the matching equation
(\ref{MAMA})\footnote{This is also true for the three-point functions but
it is false for the two-point functions where both pure scalar and
mixed gauge-scalar loops contribute in the large $M_H$ limit.}.
Finally, among the diagrams with pure scalar loops, there are some
with only Goldstone boson particles. One would expect these
diagrams to give the same contributions in the SM and the EChL;
however, they do not. The reason is the already mentioned difference in the
Feynman rules of the vertices in fig.(1). These diagrams (denoted
generically by $D_i$ in figs.(3)) must therefore be included in both
sides of the matching equation.
In fig.(3), we give the complete list of the tree level ($T_i$) and
one loop ($L_i, D_i$) diagrams that give a
contribution to the matching equations (\ref{MAMA}) of the
$\gamma\gamma WW, \gamma ZWW, ZZWW,WWWW$ and $ZZZZ$ Green's functions.
We give in appendix B the final result of the calculation of the
tree, counterterms and loop contributions to the matching equations
for the different Green's functions. Each matching condition of the
form given in eq.(B1) is a tensorial equation and therefore it
provides several equations corresponding to the various independent
tensorial structures. Furthermore, each of these equations can be
written in the form of a polynomial in powers of $c^2$. In summary,
one gets one equation per coefficient of the polynomial in each
independent tensorial structure and in each Green's function.
The result including the equations from the two, three and four-point
functions is a linear system with more equations than unknowns.
The system turns out to be compatible, giving a strong consistency
check of the calculation. By keeping just the independent
set of equations from the four-point functions, one gets:
\begin{flushleft} \vspace{-1cm}
\begin{eqnarray}
\left( \Delta Z_W - \frac{\Delta g^2}{g^2} \right) \frac{1}{g^2} &=&
\frac{1}{16 \pi^2} \frac{- 1}{12} \left[ \hat{\Delta}_\epsilon
+ \frac{5}{6} \right]\\[2mm]
\left( \Delta Z_W - \frac{\Delta g^2}{g^2} \right) \frac{1}{g^2}
+ a^b_{11} &=&
\frac{1}{16 \pi^2} \frac{- 1}{12} \left[ \hat{\Delta}_\epsilon
+ \frac{4}{3} \right] \nonumber \\[2mm]
2 a^b_3 &=& \frac{1}{16 \pi^2} \frac{- 1}{12} \left[
\hat{\Delta}_\epsilon + \frac{17}{6} \right] \nonumber \\[2mm]
a^b_3 - a^b_{11} + a^b_{12} &=&
\frac{1}{16 \pi^2} \frac{- 1}{24} \left[ \hat{\Delta}_\epsilon
+ \frac{11}{6} \right] \nonumber \\[2mm]
2 \left( a^b_5 + a^b_7 \right) &=&
\frac{1}{16 \pi^2} \frac{1}{12} \left[ \frac{43}{2}
\hat{\Delta}_\epsilon + \frac{47}{3} \right]
+ \frac{M_W^2}{g^2 M_H^2} + \delta S \nonumber \\[2mm]
a_4^b + a^b_6 - a^b_{11} + 2 a^b_{12} &=&
\frac{1}{16 \pi^2} \frac{- 1}{12} \left[ \hat{\Delta}_\epsilon
+ \frac{7}{3} \right] \nonumber \\[2mm]
\left( \Delta Z_W - \frac{\Delta g^2}{g^2} \right) \frac{1}{g^2}
+ 2 a^b_3 - a_4^b - a^b_8 + 2 a^b_9 - 2 a^b_{13} &=&
\frac{1}{16 \pi^2} \frac{- 1}{12} \left[
\hat{\Delta}_\epsilon + \frac{5}{6} \right] \nonumber \\[2mm]
\left( \Delta Z_W - \frac{\Delta g^2}{g^2} \right) \frac{1}{g^2}
+ 2 a^b_3 + a_4^b + 2 a^b_5 - a^b_8 + 2 a^b_9 - 2 a^b_{13} &=&
\frac{1}{16 \pi^2} \frac{1}{24} \left[
37 \hat{\Delta}_\epsilon + \frac{55}{3} \right]
+ \frac{M_W^2}{g^2 M_H^2} + \delta S \nonumber \\[2mm]
2 \left( a^b_4 + a^b_5 + 2 ( a^b_6 + a^b_7 + 2 a^b_{10})
\right) &=& \frac{1}{16 \pi^2} \frac{ 1}{8} \left[
13 \hat{\Delta}_\epsilon + \frac{20}{3} \right]
+ \frac{M_W^2}{g^2 M_H^2} + \delta S \nonumber
\end{eqnarray}
\end{flushleft}
where $\hat{\Delta}_\epsilon$ and $\delta S$ have been defined in eqs.
(\ref{EPS}) and (\ref{DELS}) respectively and we use the following
notation for the differences of counterterms
\begin{equation}
\Delta Q \equiv \delta Q - \widehat{\delta Q} \hspace{1.2cm}
{\rm with} \hspace{1cm} Q = Z_B, Z_W, g^2, \; {\rm etc...}
\end{equation}
The first four equations provide a check for the values of
${\displaystyle \Delta Z_W, \frac{\Delta g^2}{g^2}, a^b_{11}, a^b_3}$
and $a^b_{12}$ that we already obtained in our previous work from the
calculation of the two and three-point functions. Finally, using
these results and the values of $a^b_8, a^b_9$ and $a^b_{13}$ from
\cite{HR2} we can extract the genuine parameters of the four-point
functions: $ a^b_4, a^b_5, a^b_6, a^b_7$ and $ a^b_{10}$.
By solving the complete linear system of equations, one gets
a unique solution for the bare electroweak chiral parameters
given by:
\begin{eqnarray}
a_0^b & = & \frac{1}{16 \pi^2} \frac{3}{8}
\left( \Delta_\epsilon - \log \frac{M_H^2}{\mu^2} +
\frac{5}{6}\right), \nonumber \\[2mm]
a_1^b & = & \frac{1}{16 \pi^2} \frac{1}{12}
\left( \Delta_\epsilon - \log \frac{M_H^2}{\mu^2}
+ \frac{5}{6} \right), \nonumber \\[2mm]
a_2^b & = & \frac{1}{16 \pi^2}
\frac{1}{24} \left( \Delta_\epsilon - \log
\frac{M_H^2}{\mu^2} +
\frac{17}{6} \right), \nonumber \\[2mm]
a_3^b & = & \frac{-1}{16 \pi^2} \frac{1}{24}
\left( \Delta_\epsilon - \log \frac{M_H^2}{\mu^2} +
\frac{17}{6} \right), \nonumber \\[2mm]
a_4^b & = & \frac{-1}{16 \pi^2} \frac{1}{12}
\left( \Delta_\epsilon - \log \frac{M_H^2}{\mu^2} +
\frac{17}{6}\right), \nonumber \\[2mm]
a_5^b & = & \frac{M_W^2}{2 g^2 M_H^2} - \frac{1}{16 \pi^2} \frac{1}{24}
\left( \Delta_\epsilon - \log \frac{M_H^2}{\mu^2} + \frac{79}{3}
- \frac{ 27 \pi}{2 \sqrt{ 3}} \right), \nonumber\\[2mm]
a_{11}^b & = &
\frac{-1}{16 \pi^2}\frac{1}{24}, \nonumber \\[2mm]
a_6^b & = & a_7^b \; =\; a_8^b \; = \; a_9^b \; = \; a_{10}^b \; = \;
a_{12}^b \; = \; a_{13}^b \; = \; 0. \label{aMH}
\end{eqnarray}
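As a consistency check, the $\delta S$-free subset of the matching equations listed above can be verified symbolically against this solution. The sketch below (our own cross-check, written with sympy) treats $\hat{\Delta}_\epsilon$ as a single free symbol, uses $\Delta g^2 = 0$ and the value of $(\Delta Z_W)/g^2$ quoted below, and asserts that the six equations not involving $\delta S$ are satisfied:

```python
# Symbolic cross-check of the delta-S-free matching equations,
# using the solution for the bare chiral parameters a_i^b.
import sympy as sp

De = sp.symbols('Dhat')               # hat{Delta}_epsilon, kept symbolic
pref = sp.Integer(1) / (16 * sp.pi**2)

# solution for the a_i^b (eq. aMH) and (Delta Z_W - Dg^2/g^2)/g^2
a1  =  pref * sp.Rational(1, 12) * (De + sp.Rational(5, 6))
a2  =  pref * sp.Rational(1, 24) * (De + sp.Rational(17, 6))
a3  = -pref * sp.Rational(1, 24) * (De + sp.Rational(17, 6))
a4  = -pref * sp.Rational(1, 12) * (De + sp.Rational(17, 6))
a11 = -pref * sp.Rational(1, 24)
a6 = a7 = a8 = a9 = a10 = a12 = a13 = 0
dZW = -pref * sp.Rational(1, 12) * (De + sp.Rational(5, 6))

# LHS minus RHS of each delta-S-free matching equation
checks = [
    dZW                    + pref*sp.Rational(1, 12)*(De + sp.Rational(5, 6)),
    dZW + a11              + pref*sp.Rational(1, 12)*(De + sp.Rational(4, 3)),
    2*a3                   + pref*sp.Rational(1, 12)*(De + sp.Rational(17, 6)),
    a3 - a11 + a12         + pref*sp.Rational(1, 24)*(De + sp.Rational(11, 6)),
    a4 + a6 - a11 + 2*a12  + pref*sp.Rational(1, 12)*(De + sp.Rational(7, 3)),
    dZW + 2*a3 - a4 - a8 + 2*a9 - 2*a13
                           + pref*sp.Rational(1, 12)*(De + sp.Rational(5, 6)),
]
assert all(sp.simplify(c) == 0 for c in checks)
print('all delta-S-free matching equations satisfied')
```

The remaining three equations involve $\delta S$ and the tree level $M_W^2/(g^2 M_H^2)$ piece, so they cannot be checked without the explicit definition (\ref{DELS}).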
We would like to make some remarks on this result for the chiral
parameters:
\begin{enumerate}
\item First of all, we agree with the $1/\epsilon$ dependence
of the $a_i^b$ parameters that was first calculated by Longhitano
\cite{LON} by looking at the divergences of the non-linear sigma model.
We see therefore that the divergences generated by $\cl_{\rm NL}$ at
one loop are exactly canceled by the $1/\epsilon$ terms in the
$a_i^b$'s.
\item The values of $a^b_4$ and $a^b_5$ agree with the results
given in \cite{DH}, where the equivalence theorem was used in
comparing the scattering amplitudes for Goldstone bosons in the SM
\cite{DW} and the EChL. These values, on the
other hand, do not coincide with the values in \cite{EH}
where just contributions from pure Higgs loops were considered.
\item It is important to realize that the matching procedure fixes
completely the values of the bare parameters $a_i^b$ in terms of the
renormalized parameters of the SM.
\item Eqs.(\ref{aMH}) give the complete non-decoupling effects
of a heavy Higgs, that is, the leading logarithmic dependence
on $M_H$ and the next to leading constant contribution
to the electroweak chiral parameters.
The $a_i$'s are accurate up to corrections of order
$(p/M_H)$, with $p \approx M_Z$, and up to higher order
corrections in the perturbative expansion.
\item We demonstrate that the $a_i$'s are gauge independent, as
expected.
\item There is only one effective operator, the one corresponding to
$a_5$, that gets a tree level contribution.
Its expression in terms of renormalized SM parameters
depends on the renormalization prescription that one has
chosen in the standard model, on-shell in our case.
We believe it is important at this point to clarify the relation
among the $a^b_i$'s that correspond to an on-shell
renormalization of the SM and their corresponding values if a
different renormalization prescription for the SM is chosen.
We will discuss this point in the following section.
\end{enumerate}
By putting together the results of the two, three and four-point
functions, one also obtains some relations among the counterterms
of the two theories
\begin{eqnarray}
\Delta Z_W & = & \frac{- g^2}{16 \pi^2} \frac{1}{12} \left(
\Delta_\epsilon - \log \frac{M_H^2}{\mu^2} + \frac{5}{6} \right),
\nonumber\\[2mm]
\Delta Z_B & = & \frac{- g'^2}{16 \pi^2} \frac{1}{12} \left(
\Delta_\epsilon - \log \frac{M_H^2}{\mu^2} + \frac{5}{6} \right),
\nonumber\\[2mm]
\Delta \xi_W & = & \Delta Z_W, \hspace{1cm}
\Delta \xi_B = \Delta Z_B, \nonumber \\[2mm]
\frac{\Delta g^2}{g^2} & = & \frac{\Delta g'^2}{g'^2}
= 0, \nonumber \\[2mm]
\Delta Z_\phi - 2 \frac{\Delta v}{v} & = & \frac{g^2}{16 \pi^2}
\left[ - \frac{M_H^2}{8 M_W^2} + \frac{3}{4} \left(
\Delta_\epsilon - \log \frac{M_H^2}{\mu^2} + \frac{5}{6} \right)
\right. \nonumber \\[2mm]
& & \left. + \frac{1}{4} \frac{\xi_Z}{c^2} \left(
\Delta_\epsilon - \log \frac{\xi_Z M_Z^2}{\mu^2} + 1 \right)
+ \frac{1}{2} \xi_W \left(
\Delta_\epsilon - \log \frac{\xi_W M_W^2}{\mu^2} + 1 \right)
\right]
\end{eqnarray}
These equations give the differences among the renormalization
constants of the SM in the large $M_H$ limit and those in the
EChL, when the on-shell renormalization scheme is chosen in
both theories. They are obtained here as a constraint imposed
by the matching; one can also calculate them from the explicit
expressions of the on-shell counterterms of the two theories
and verify that these relations are indeed satisfied.
\section{ Dependence of the chiral parameters on the renormalization
of the SM}
In the previous section, we have given the values of the bare
electroweak chiral parameters (\ref{aMH}) when the SM is renormalized
in the on-shell scheme. We would like now to discuss how these bare
parameters are changed when a different renormalization scheme is
chosen for the SM.
We have seen that the chiral parameter $a^b_5$ is the only one
that gets a tree level contribution when a heavy Higgs is integrated out,
and therefore it is the only one whose expression in terms of the
SM renormalized parameters will depend on the renormalization
prescription of the SM.
By using eqs.(\ref{CORE}) and (\ref{aMH}), one can rewrite $a^b_5$ in terms
of the on-shell renormalized scalar self-coupling $\lambda$:
\begin{equation}
a^b_5 = \frac{1}{16 \lambda} -
\frac{1}{16 \pi^2} \frac{1}{24}
\left( \hat{\Delta}_\epsilon + \frac{79}{3}
- \frac{ 27 \pi}{2 \sqrt{ 3}} \right). \label{a5LR}
\end{equation}
The easiest way to connect with a new renormalization scheme for
the SM is to write down $a^b_5$ in terms of the bare scalar
self-coupling $\lambda^b$. To this end, we first write down
the relation\footnote{The contribution from the
renormalization of $g$ is of higher order in the large $M_H$
expansion than the contributions
from the renormalization of $M_H$ and $M_W$ and will be ignored
here.} between the renormalized self-coupling $\lambda$
and the bare $\lambda^b$
\begin{equation}
\lambda = \lambda^b \left( 1 + \frac{\delta M_H^2}{M_H^2} -
\frac{\delta M_W^2}{M_W^2} \right)
\end{equation}
By substituting the values of the mass counterterms given in
(\ref{SMCL}) in this equation, we get the relation among the
bare and the renormalized self-coupling in the on-shell scheme
\begin{equation}
\lambda = \lambda^b \left[ 1 +
\frac{1}{16 \pi^2} \lambda^b \left( -12 \hat{\Delta}_\epsilon
-25 + \frac{9 \pi}{\sqrt 3} \right) \right]
\end{equation}
Now, by plugging the last equation into eq.(\ref{a5LR})
one finds the value of $a^b_5$ in terms of $\lambda^b$
\begin{equation}
a^b_5 = \frac{1}{16 \lambda^b} +
\frac{1}{16 \pi^2} \frac{1}{24}
\left( 17 \hat{\Delta}_\epsilon + \frac{67}{6}\right).\label{a5LB}
\end{equation}
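This substitution is straightforward to check symbolically. The following sketch (ours, with $\hat{\Delta}_\epsilon$ kept as a single symbol) expands $1/(16\lambda)$ to one-loop order in $\lambda^b$ and confirms that the $27\pi/(2\sqrt{3})$ terms cancel, reproducing eq.(\ref{a5LB}):

```python
# Check: substituting lambda(lambda_b) into a5 written in terms of the
# on-shell renormalized coupling reproduces
#   a5^b = 1/(16 lam_b) + (1/16pi^2)(1/24)(17 Dhat + 67/6).
import sympy as sp

De, lb = sp.symbols('Dhat lam_b')
pref = sp.Integer(1) / (16 * sp.pi**2)

# on-shell relation between the renormalized and bare self-coupling
lam = lb * (1 + pref * lb * (-12*De - 25 + 9*sp.pi/sp.sqrt(3)))

# a5^b in terms of the on-shell renormalized lambda (eq. a5LR)
a5 = 1/(16*lam) - pref*sp.Rational(1, 24)*(De + sp.Rational(79, 3)
                                           - 27*sp.pi/(2*sp.sqrt(3)))

# keep the lam_b^-1 and lam_b^0 pieces (one-loop order)
a5_expanded = sp.series(a5, lb, 0, 1).removeO()

target = 1/(16*lb) + pref*sp.Rational(1, 24)*(17*De + sp.Rational(67, 6))
assert sp.simplify(sp.expand(a5_expanded - target)) == 0
print('eq. (a5LB) reproduced; the pi/sqrt(3) terms cancel')
```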
In order to connect with a different renormalization prescription
we simply substitute $\lambda^b$ in eq.(\ref{a5LB}) by its
corresponding definition in terms of the new renormalized
self-coupling.
For instance, if one chooses the
${\overline{\rm MS}}$ scheme, where the scalar self-coupling is
defined by
\begin{equation}
\lambda^b = \lambda_{\overline{\rm MS}} \left[ 1 + \frac{1}{16 \pi^2} \;
\lambda_{\overline{\rm MS}} \; 12 \; \hat{\Delta}_\epsilon \right]
\end{equation}
then $a^b_5$ in terms of the renormalized self-coupling
$\lambda_{\overline{\rm MS}}$ is given by
\begin{equation}
a^b_5 = \frac{1}{16 \lambda_{\overline{\rm MS}}} -
\frac{1}{16 \pi^2} \frac{1}{24}
\left( \hat{\Delta}_\epsilon - \frac{67}{6}\right).
\end{equation}
For the rest of the chiral parameters one gets exactly the
same values as in eq.(\ref{aMH}).
Another example is the renormalization prescription chosen by
Gasser and Leutwyler in \cite{GL1}.
Their prescription for the renormalized coupling is given by
\footnote{In ref.\cite{GL1} $g_r$ is what we call here $\lambda_{\rm GL}$.
The mass appearing in $\hat{\Delta}_\epsilon$
is not the physical mass of the Higgs, but what they call $M_r$.
The renormalized mass $M_r$ was fixed in \cite{GL1} such that
$M_r^2 = 2 \lambda_{\rm GL} v^2$.}
\begin{equation}
\lambda^b = \lambda_{\rm GL} \left[ 1 + \frac{1}{16 \pi^2} \;
\lambda_{\rm GL} \; 12 \; ( \hat{\Delta}_\epsilon + 1) \right]
\end{equation}
and the expression of $a^b_5$ in terms of the renormalized
self-coupling $\lambda_{\rm GL}$ is therefore
\begin{equation}
a^b_5 = \frac{1}{16 \lambda_{\rm GL}} -
\frac{1}{16 \pi^2} \frac{1}{24}
\left( \hat{\Delta}_\epsilon + \frac{41}{6}\right).\label{a5GL}
\end{equation}
The rest of the parameters again remain the same.
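The same one-loop bookkeeping verifies both scheme translations at once: starting from $a^b_5$ in terms of the bare coupling, one substitutes the definition of $\lambda^b$ in each scheme and re-expands. A symbolic sketch (ours; $\hat{\Delta}_\epsilon$ is a free symbol and `lam_r` denotes the renormalized coupling of whichever scheme is being tested):

```python
# Translate a5^b from the bare self-coupling to the MS-bar and
# Gasser-Leutwyler schemes and check the two expressions in the text.
import sympy as sp

De, lr = sp.symbols('Dhat lam_r')
pref = sp.Integer(1) / (16 * sp.pi**2)

def a5_of_bare(lam_b):
    # a5^b in terms of the bare self-coupling (eq. a5LB)
    return 1/(16*lam_b) + pref*sp.Rational(1, 24)*(17*De + sp.Rational(67, 6))

def one_loop(expr):
    # keep only the lam_r^-1 and lam_r^0 pieces
    return sp.series(expr, lr, 0, 1).removeO()

# MS-bar: lambda^b = lam_r (1 + 12 Dhat lam_r / 16 pi^2)
msbar = one_loop(a5_of_bare(lr*(1 + pref*lr*12*De)))
assert sp.simplify(msbar - (1/(16*lr)
        - pref*sp.Rational(1, 24)*(De - sp.Rational(67, 6)))) == 0

# Gasser-Leutwyler: lambda^b = lam_r (1 + 12 (Dhat + 1) lam_r / 16 pi^2)
gl = one_loop(a5_of_bare(lr*(1 + pref*lr*12*(De + 1))))
assert sp.simplify(gl - (1/(16*lr)
        - pref*sp.Rational(1, 24)*(De + sp.Rational(41, 6)))) == 0
print('MS-bar and GL expressions for a5 reproduced')
```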
For comparison, it is interesting to translate the results
of eqs.(\ref{aMH},\ref{a5GL}) to the usual notation in chiral
perturbation theory
\begin{eqnarray}
L^b_1 & = & \frac{l^b_1}{4} = a^b_5 =
\frac{1}{16 \lambda_{\rm GL}} -
\frac{1}{16 \pi^2} \frac{1}{24}
\left( \hat{\Delta}_\epsilon + \frac{41}{6}\right).
\nonumber \\[2mm]
L^b_2 & = & \frac{l^b_2}{4} = a^b_4 =
\frac{- 1}{16 \pi^2} \frac{1}{12}
\left( \hat{\Delta}_\epsilon +
\frac{17}{6}\right), \nonumber \\[2mm]
L^b_9 & = & - \frac{ l^b_6}{2} = a^b_3 - a^b_2 =
\frac{ - 1}{16 \pi^2} \frac{1}{12}
\left( \hat{\Delta}_\epsilon
+ \frac{17}{6} \right), \nonumber \\[2mm]
L^b_{10} & = & l^b_5 = a^b_1 =
\frac{1}{16 \pi^2} \frac{1}{12}
\left( \hat{\Delta}_\epsilon
+ \frac{5}{6} \right). \label{LiB}
\end{eqnarray}
These chiral parameters agree with the values found by Gasser
and Leutwyler in \cite{GL1} (see appendix B of this reference)
and in \cite{NS}. This agreement is quite remarkable, since
they used functional methods to integrate out the Higgs particle
in a linear sigma model where the gauge fields were treated
as external sources, and they also employed different techniques
to study the large $M_H$ limit. This confirms the fact already
mentioned that the contributions to the effective operators of
dimension four that come from mixed gauge-scalar loops are
subleading, in the large $M_H$ limit, compared to the pure
scalar loops contributions. However, this is not the case for
the dimension two custodial breaking operator $a_0$ that gets
contributions from mixed gauge-scalar loops \cite{HR2}.
To conclude this section, we emphasize once more that the bare chiral
parameters for a given underlying theory, the SM in our case, must be
computed with a choice for the renormalization prescription of this
theory. The explicit expression of the chiral parameters will vary
from one prescription to another, but the numerical value remains the
same, and the connection between different prescriptions can be
clearly and easily established.
\section{Renormalization of the effective theory}
In this section we briefly describe the renormalization procedure in
the effective theory. Given the effective Lagrangian of
eq.(\ref{ECL}), the first step is to redefine the fields and
parameters of the Lagrangian according to eq.(\ref{RET}). It
introduces, at a formal level, the set of counterterms of the
effective theory $\widehat{\delta Z}_i, \widehat{\delta g}$, etc.,
that need to be computed once a particular renormalization
prescription is chosen. We fix here the on-shell
renormalization scheme, as we did in the case of the SM. For practical
reasons we prefer to choose the renormalization conditions as in
reference \cite{HO}, which are the most commonly used for LEP
physics. In terms of the renormalized self-energies these
renormalization conditions read as follows
\begin{equation}
\widehat{\Sigma}^{\rm R}_{W} (M_W^2) = 0,\hspace{3mm}
\widehat{\Sigma}^{\rm R}_{Z} (M_Z^2) = 0,\hspace{3mm}
\widehat{\Sigma}^{\prime \; \rm R}_{\gamma} (0) = 0,\hspace{3mm}
\widehat{\Sigma}^{\rm R}_{\gamma Z} (0) = 0. \label{RCE}
\end{equation}
The renormalized self-energies are computed in the effective theory
as usual, namely, by adding all the contributions from the one-loop
diagrams and from the counterterms. We get the following
expressions\footnote{Notice that in our rotation defining the physical
gauge fields, the terms in $s$ have the opposite sign to those in
reference \cite{HO}.}:
\begin{eqnarray}
\widehat{\Sigma}^{\rm R}_{\gamma} (q^2) & = &
\widehat{\Sigma}^{\rm L}_{\gamma} (q^2) +
\left( s^2 \widehat{\delta Z_W} + c^2 \widehat{\delta Z_B}
\right) q^2 + s^2 g^2 (a^b_8 - 2 a^b_1) q^2.
\nonumber\\[2mm]
\widehat{\Sigma}^{\rm R}_{W} (q^2) & = &
\widehat{\Sigma}^{\rm L}_{W} (q^2) +
\widehat{\delta Z_W} \left( q^2 - M_W^2 \right)
- \widehat{\delta M_W^2}. \nonumber\\[2mm]
\widehat{\Sigma}^{\rm R}_{Z} (q^2) & = &
\widehat{\Sigma}^{\rm L}_{Z} (q^2) +
\left( c^2 \widehat{\delta Z_W} + s^2 \widehat{\delta Z_B}
\right) ( q^2 - M_Z^2) - \widehat{\delta M_Z^2} \nonumber\\
& &
+ 2 g'^2 a_0^b M_Z^2 + \left( 2 s^2 g^2 a^b_1 + c^2 g^2 a_8^b
+ (g^2 + g^{\prime 2}) a^b_{13} \right) q^2.
\nonumber\\[2mm]
\widehat{\Sigma}^{\rm R}_{\gamma Z} (q^2) & = &
\widehat{\Sigma}^{\rm L}_{\gamma Z} (q^2) +
s c \left( \widehat{\delta Z_W} - \widehat{\delta Z_B}
\right) q^2 - s c\; M_Z^2 \left( \frac{\widehat{\delta g'}}{g'} -
\frac{\widehat{\delta g}}{g} \right) \nonumber\\
& & + \left( s c g^2 a^b_8 - (c^2 - s^2) g g' a^b_1 \right) q^2.
\label{JQL}
\end{eqnarray}
where
\begin{eqnarray}
\widehat{\delta M_W^2} & = & M_W^2 \left(
\widehat{\delta Z_\Phi} - 2 \frac{\widehat{\delta g}}{g}
- 2 \frac{\widehat{\delta v}}{v} - \widehat{\delta Z_W}
\right), \nonumber\\[2mm]
\widehat{\delta M_Z^2} & = & M_Z^2 \left(
\widehat{\delta Z_\Phi} - 2 c^2 \frac{\widehat{\delta g}}{g}
- 2 s^2 \frac{\widehat{\delta g'}}{g'}
- 2 \frac{\widehat{\delta v}}{v} -
c^2 \widehat{\delta Z_W} - s^2 \widehat{\delta Z_B}
\right),\nonumber\\[2mm]
M_W^2 & = & g^2 v^2 / 4, \nonumber\\[2mm]
M_Z^2 & = & (g^2 + g'^2) v^2 / 4,
\label{masas}
\end{eqnarray}
and the superscripts R and L denote renormalized and EChL
loops respectively.
{}From eq.(\ref{masas}) the following relation among the
$W$ and $Z$ mass counterterms is obtained
\begin{equation}
\frac{\widehat{\delta M_Z^2}}{M_Z^2} -
\frac{\widehat{\delta M_W^2}}{M_W^2} =
2 s^2 \frac{\widehat{\delta g}}{g} +
2 c^2 \frac{\widehat{\delta g'}}{g'}
+ s^2 \left(\widehat{\delta Z_W} - \widehat{\delta Z_B}
\right)
\end{equation}
Finally, by requiring these renormalized self-energies to fulfill
eqs.(\ref{RCE}), and taking into account that the $U(1)_{\rm Y}$
Ward identity implies $\widehat{\delta g'} = 0$,
one gets the following results for the values of the counterterms
in terms of the unrenormalized self-energies of the effective
theory and the bare $a_i$'s:
\begin{eqnarray}
\widehat{\delta M_W^2} & = &
\widehat{\Sigma}^{\rm L}_W (M_W^2), \nonumber\\[2mm]
\widehat{\delta M_Z^2} & = &
\widehat{\Sigma}^{\rm L}_{Z} (M_Z^2) + M_Z^2
\left( 2 g'^2 a_0^b + 2 s^2 g^2 a^b_1 + c^2 g^2 a_8^b
+ (g^2 + g^{\prime 2}) a^b_{13} \right), \nonumber\\[2mm]
\frac{\widehat{\delta g}}{g} & = &
\frac{- 1}{s c} \frac{\widehat{\Sigma}^{\rm L}_{ \gamma Z}(0)}
{M_Z^2}, \nonumber\\[2mm]
\frac{\widehat{\delta g'}}{g'} & = & 0, \nonumber\\[2mm]
\widehat{\delta Z_W} & = &
\frac{c^2}{s^2} \left(\frac{\widehat{\Sigma}^{\rm L}_{Z}(M_Z^2)}
{M_Z^2} - \frac{\widehat{\Sigma}^{\rm L}_{W}(M_W^2)}
{M_W^2} \right) + 2 \frac{c}{s} \frac{
\widehat{\Sigma}^{\rm L}_{ \gamma Z}(0)}{M_Z^2} -
\widehat{\Sigma}^{\prime \;\rm L}_\gamma (0) \nonumber\\
& & + 2 g^2 a^b_0 + 2 g^2 a^b_1 +
\frac{c^2 - s^2}{s^2} g^2 a^b_8 + \frac{c^2}{s^2}
(g^2 + g'^2) a^b_{13}, \nonumber\\[2mm]
\widehat{\delta Z_B} & = &
\frac{\widehat{\Sigma}^{\rm L}_{W}(M_W^2)}{M_W^2} -
\frac{ \widehat{\Sigma}^{\rm L}_{Z}(M_Z^2)}{M_Z^2} -
2 \frac{s}{c} \frac{\widehat{\Sigma}^{\rm L}_{ \gamma Z}(0)}{M_Z^2} -
\widehat{\Sigma}^{\prime \rm L}_\gamma (0) \nonumber\\
& & - \left( 2 g'^2 a^b_0 + g^2 a^b_8 +
(g^2 + g'^2) a^b_{13} \right). \label{ECT}
\end{eqnarray}
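One can verify symbolically that the counterterms of eqs.(\ref{ECT}), inserted back into the renormalized self-energies (\ref{JQL}), do solve the four on-shell conditions (\ref{RCE}); the only non-trivial cancellation occurs in $\widehat{\Sigma}^{\prime\,\rm R}_\gamma(0)$. In the sketch below (our check) the unrenormalized self-energies at the required points are free symbols, and we impose $c^2 = 1 - s^2$ and $g'^2 = g^2 s^2/c^2$:

```python
# Check that the counterterms (ECT) satisfy the on-shell conditions (RCE).
import sympy as sp

s2, g2, MW2, MZ2 = sp.symbols('s2 g2 MW2 MZ2', positive=True)
c2 = 1 - s2
gp2 = g2*s2/c2                         # g'^2 from tan^2(theta_W) = s2/c2
s, c = sp.sqrt(s2), sp.sqrt(c2)
a0, a1, a8, a13 = sp.symbols('a0 a1 a8 a13')
# unrenormalized EChL-loop self-energies: SW = Sig_W(MW2), SZ = Sig_Z(MZ2),
# SgZ = Sig_gammaZ(0), Sgp = Sig'_gamma(0)
SW, SZ, SgZ, Sgp = sp.symbols('SW SZ SgZ Sgp')

# counterterms, eqs. (ECT), with delta g' = 0
dMW2 = SW
dMZ2 = SZ + MZ2*(2*gp2*a0 + 2*s2*g2*a1 + c2*g2*a8 + (g2 + gp2)*a13)
dg_g = -SgZ/(s*c*MZ2)
dZW = (c2/s2)*(SZ/MZ2 - SW/MW2) + 2*(c/s)*SgZ/MZ2 - Sgp \
      + 2*g2*a0 + 2*g2*a1 + (c2 - s2)/s2*g2*a8 + (c2/s2)*(g2 + gp2)*a13
dZB = SW/MW2 - SZ/MZ2 - 2*(s/c)*SgZ/MZ2 - Sgp \
      - (2*gp2*a0 + g2*a8 + (g2 + gp2)*a13)

# renormalized self-energies (JQL) at the subtraction points
SigW_R  = SW - dMW2                                        # at q2 = MW2
SigZ_R  = SZ - dMZ2 + MZ2*(2*gp2*a0 + 2*s2*g2*a1
                           + c2*g2*a8 + (g2 + gp2)*a13)    # at q2 = MZ2
SiggZ_R = SgZ + s*c*MZ2*dg_g                               # at q2 = 0
Sigg_pR = Sgp + s2*dZW + c2*dZB + s2*g2*(a8 - 2*a1)        # d/dq2 at q2 = 0

for expr in (SigW_R, SigZ_R, SiggZ_R, Sigg_pR):
    assert sp.simplify(expr) == 0
print('all four on-shell conditions satisfied')
```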
Now that we have at hand eqs.(\ref{ECT}) the only parameters of the
theory that still need to be renormalized are the electroweak chiral
parameters $a_i$. The following formal redefinition of the chiral
parameters has already been introduced
\begin{equation}
a^b_i = a_i (\mu) + \delta a_i. \label{NSQ}
\end{equation}
The divergent part of the $a_i^b$ parameters, or equivalently the
divergent part of the counterterms $\delta a_i$, are fixed by
the symmetries of the effective theory and since the work of
Longhitano \cite{LON} they are known to be
\begin{eqnarray}
& & \delta a_0 |_{div} = \frac{1}{16 \pi^2} \frac{3}{8} \Delta_\epsilon ,
\hspace{2cm}\delta a_1 |_{div} = \frac{1}{16 \pi^2} \frac{1}{12}
\Delta_\epsilon , \nonumber \\[2mm]
& & \delta a_2 |_{div} = \frac{1}{16 \pi^2} \frac{1}{24} \Delta_\epsilon ,
\hspace{2cm}\delta a_3 |_{div} = \frac{- 1}{16 \pi^2} \frac{1}{24}
\Delta_\epsilon , \nonumber \\[2mm]
& & \delta a_4 |_{div} = \frac{- 1}{16 \pi^2} \frac{1}{12} \Delta_\epsilon ,
\hspace{2cm}\delta a_5 |_{div} = \frac{- 1}{16 \pi^2} \frac{1}{24}
\Delta_\epsilon , \nonumber \\[2mm]
& & \delta a_i |_{div} = 0, \hspace{5mm} i = 6,\ldots,13. \label{PP}
\end{eqnarray}
These universal divergent contributions to the chiral bare
parameters imply in turn universal renormalization group
equations for the renormalized parameters
\begin{eqnarray}
& &a_0(\mu) = a_0(\mu ') + \frac{1}{16 \pi^2}
\frac{3}{8} \log\frac{\mu^2}{\mu '^2}, \hspace{1cm}
a_1(\mu) = a_1(\mu ') + \frac{1}{16 \pi^2}
\frac{1}{12} \log\frac{\mu^2}{\mu '^2}, \nonumber\\[2mm]
& &a_2(\mu) = a_2(\mu ') + \frac{1}{16 \pi^2}
\frac{1}{24} \log\frac{\mu^2}{\mu '^2}, \hspace{1cm}
a_3(\mu) = a_3(\mu ') - \frac{1}{16 \pi^2}
\frac{1}{24} \log\frac{\mu^2}{\mu '^2}, \nonumber\\[2mm]
& &a_4(\mu) = a_4(\mu ') - \frac{1}{16 \pi^2}
\frac{1}{12} \log\frac{\mu^2}{\mu '^2}, \hspace{1cm}
a_5(\mu) = a_5(\mu ') - \frac{1}{16 \pi^2}
\frac{1}{24} \log\frac{\mu^2}{\mu '^2}, \nonumber\\[2mm]
& &a_i(\mu) = a_i(\mu') ; \hspace{3mm} i = 6,\ldots,13.
\end{eqnarray}
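These running equations are trivial to implement numerically. The sketch below (plain Python; the function and variable names are ours) evolves a set of $a_i(\mu')$ to the scale $\mu$ using the one-loop coefficients read off above:

```python
# One-loop running of the electroweak chiral parameters a_0 ... a_13:
#   a_i(mu) = a_i(mu_p) + gamma_i/(16 pi^2) * log(mu^2 / mu_p^2)
import math
from fractions import Fraction as F

# gamma_i for i = 0..13 (vanishing for i = 6..13)
GAMMA = [F(3, 8), F(1, 12), F(1, 24), F(-1, 24), F(-1, 12), F(-1, 24)] + [F(0)]*8

def run_a(a, mu, mu_p):
    """Evolve the list a of 14 chiral parameters from scale mu_p to mu (GeV)."""
    log = math.log(mu**2 / mu_p**2)
    return [ai + float(g) / (16 * math.pi**2) * log for ai, g in zip(a, GAMMA)]

# the evolution is exactly invertible: running down and back up is the identity
a_ref = [0.0] * 14                        # illustrative values at mu_p = 1 TeV
a_run = run_a(a_ref, 91.19, 1000.0)       # run down to the Z pole
a_back = run_a(a_run, 1000.0, 91.19)      # and back up
assert all(abs(x - y) < 1e-15 for x, y in zip(a_back, a_ref))
```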
The value of the bare chiral parameters $a^b_i$, on the other hand,
is completely determined by the matching procedure in terms of the
renormalized parameters of the underlying physics that has been
integrated out, as we have seen for the particular case of a heavy
Higgs. However, for a given $a^b_i$, we still have to choose how to
separate the finite part into the renormalized $a_i(\mu)$ and the
counterterm $\delta a_i$ in eq.(\ref{NSQ}) such that their sum gives
$a^b_i$. This second renormalization scheme concerns only the
effective theory. Therefore, when using a set of renormalized
parameters $a_i(\mu)$ for a particular underlying theory, one must
specify, in addition, how the finite parts of the counterterms in
eq.(\ref{NSQ}) have been fixed.
In the case of the SM, where a heavy Higgs has been integrated
out to one loop, the bare chiral parameters are given in
eq.(\ref{aMH}). They correspond to the on-shell renormalization
of the underlying SM. Now, in order to present the corresponding
renormalized parameters we have to fix the finite parts of the
counterterms. For instance, if we fix the counterterms to include
just the $\Delta_\epsilon$ terms as in eq.(\ref{PP}),
the renormalized chiral parameters for the SM with a heavy Higgs
are\footnote{This particular renormalization of the chiral parameters
was chosen in our previous work, where we called it ${\overline{\rm MS}}$.}:
\begin{equation}
\begin{array}{ll}
a_0 (\mu) = {\displaystyle \frac{1}{16 \pi^2}
\frac{3}{8}\left( \frac{5}{6} - \log\frac{M_H^2}{\mu^2} \right)},&
\hspace{-1cm}a_3 (\mu) = {\displaystyle \frac{-1}{16 \pi^2}
\frac{1}{24}
\left( \frac{17}{6} - \log\frac{M_H^2}{\mu^2} \right)},\\[6mm]
a_1 (\mu) = {\displaystyle \frac{1}{16 \pi^2} \frac{1}{12}
\left( \frac{5}{6} - \log\frac{M_H^2}{\mu^2} \right)}, &
\hspace{-1cm}a_4 (\mu) = {\displaystyle \frac{- 1}{16 \pi^2}
\frac{1}{12}
\left( \frac{17}{6} - \log\frac{M_H^2}{\mu^2} \right)},\\[6mm]
a_2 (\mu) = {\displaystyle \frac{1}{16 \pi^2} \frac{1}{24}
\left( \frac{17}{6} - \log\frac{M_H^2}{\mu^2} \right)}, &
\hspace{-1cm}a_{11} (\mu) = {\displaystyle
\frac{-1}{16 \pi^2}\frac{1}{24}},\\[6mm]
a_5 (\mu) = {\displaystyle \frac{v^2}{8 M_H^2} -
\frac{1}{16 \pi^2} \frac{1}{24}
\left( \frac{79}{3} - \frac{ 27 \pi}{2 \sqrt{3}}
- \log\frac{M_H^2}{\mu^2} \right)}, \\[6mm]
a_i (\mu) = 0, \hspace{3mm} i = 6,7,8,9,10,12,13.
\end{array} \label{aR}
\end{equation}
Another example is the renormalization scheme chosen by Gasser and
Leutwyler for the linear $O(N)$ sigma model in ref.\cite{GL1}.
The values of the bare parameters for their choice of the renormalization
scheme of the underlying sigma model were given in eqs.(\ref{LiB}).
Now, they fix instead the chiral counterterms to the following values:
\begin{eqnarray}
\delta L_1 & = & \frac{\delta l_1}{4} = \delta a_5^{\rm GL} =
\frac{1}{16 \pi^2} \frac{- 1}{24} ( \Delta_\epsilon + 1),
\nonumber \\[2mm]
\delta L_2 & = & \frac{\delta l_2}{4} = \delta a_4^{\rm GL} =
\frac{1}{16 \pi^2} \frac{- 1}{12} ( \Delta_\epsilon + 1),
\nonumber \\[2mm]
\delta L_9 & = & - \frac{\delta l_6}{2} = \delta a_3^{\rm GL}
- \delta a_2^{\rm GL} = \frac{1}{16 \pi^2} \frac{- 1}{12}
( \Delta_\epsilon + 1), \nonumber \\[2mm]
\delta L_{10} & = & \delta l_5 = \delta a_1^{\rm GL} =
\frac{1}{16 \pi^2} \frac{1}{12} ( \Delta_\epsilon + 1).
\end{eqnarray}
Consequently, they get the following renormalized chiral
parameters for the sigma model\footnote{Here we call
$M_{\rm GL}$ the renormalized mass $M_r$ of \cite{GL1}.}
\begin{eqnarray}
L_1(\mu) & = & \frac{l_1(\mu)}{4} = a_5^{\rm GL}(\mu) =
\frac{1}{16 \lambda_{\rm GL}} -
\frac{1}{16 \pi^2} \frac{1}{24}
\left( \frac{35}{6} - \log\frac{M^2_{\rm GL}}{\mu^2} \right),
\nonumber \\[2mm]
L_2(\mu) & = & \frac{l_2(\mu)}{4} = a_4^{\rm GL}(\mu) =
\frac{- 1}{16 \pi^2} \frac{1}{12}
\left( \frac{11}{6} - \log\frac{M^2_{\rm GL}}{\mu^2}
\right), \nonumber \\[2mm]
L_9(\mu) & = & - \frac{ l_6(\mu)}{2} = a_3^{\rm GL}(\mu)
- a_2^{\rm GL}(\mu) =
\frac{ - 1}{16 \pi^2} \frac{1}{12}
\left( \frac{11}{6} - \log\frac{M^2_{\rm GL}}{\mu^2}
\right), \nonumber \\[2mm]
L_{10}(\mu) & = & l_5(\mu) = a_1^{\rm GL}(\mu) =
\frac{1}{16 \pi^2} \frac{1}{12}
\left( - \frac{1}{6} - \log\frac{M^2_{\rm GL}}{\mu^2}
\right).
\end{eqnarray}
\section{Calculating observables with the EChL}
In this section we will show, as an example, the explicit calculation
of the radiative corrections to $\Delta \rho$ and $\Delta r$ within
the electroweak chiral Lagrangian approach. These observables are
defined in the effective theory in terms of the renormalized
self-energies in the same way as in the fundamental SM, namely:
\begin{eqnarray}
\Delta \rho & \equiv & \frac{\widehat{\Sigma}^{\rm R}_Z (0)}{M_Z^2}
- \frac{\widehat{\Sigma}^{\rm R}_W (0)}{M_W^2}, \nonumber\\[2mm]
\Delta r & \equiv & \frac{\widehat{\Sigma}^{\rm R}_W (0)}{M_W^2}
+ {\rm (vertex + box)}, \label{ROR}
\end{eqnarray}
where
\begin{displaymath}
{\rm (vertex + box)} \equiv \frac{g^2}{16 \pi^2} \left(
6 + \frac{ 7 - 4 s^2}{2 s^2} \log c^2 \right),
\end{displaymath}
and the renormalized self-energies can be computed as we have
explained in section 6.
Once a renormalization scheme has been chosen, one can always express
$\Delta \rho$ and $\Delta r$ in terms of unrenormalized self-energies
and the $a_i^b$'s. For instance, in the on-shell scheme
given by the conditions of eq.(\ref{RCE}), one gets the particular
values of the counterterms given in eqs.(\ref{ECT}). Next, by plugging
these counterterms into eqs.(\ref{JQL}), one obtains the renormalized
self-energies in terms of the unrenormalized ones and the
$a_i^b$'s. Finally, by substituting these formulas into
eqs.(\ref{ROR}) the following expressions for $\Delta \rho$ and
$ \Delta r$ in the on-shell scheme are found
\begin{eqnarray}
\Delta \rho & = & \frac{\widehat{\Sigma}^{\rm L}_Z (0)}{M_Z^2}
- \frac{\widehat{\Sigma}^{\rm L}_W (0)}{M_W^2}
+ \frac{2 s}{c} \frac{\widehat{\Sigma}^{\rm L}_{\gamma Z} (0)}{M_Z^2}
+ 2 g'^2 a^b_0, \nonumber \\[2mm]
\Delta r & = & \frac{\widehat{\Sigma}^{\rm L}_W (0) -
\widehat{\Sigma}^{\rm L}_W (M_W^2)}{M_W^2} +
\widehat{\Sigma}'^{\rm L}_\gamma (0) + \frac{c^2}{s^2}
\left[ \frac{\widehat{\Sigma}^{\rm L}_W (M_W^2)}{M_W^2}
- \frac{\widehat{\Sigma}^{\rm L}_Z (M_Z^2)}{M_Z^2}
- \frac{2 s}{c} \frac{\widehat{\Sigma}^{\rm L}_{\gamma Z} (0)}
{M_Z^2} \right] \nonumber\\[2mm]
& & - 2 g^2 a^b_0 + \frac{s^2 - c^2}{s^2}
g^2 (a^b_8 + a^b_{13} ) - 2 g^2 (a^b_1 + a^b_{13}) +
{\rm (vertex + box)}.
\end{eqnarray}
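The $\Delta\rho$ expression can be checked along the same lines: assembling $\widehat{\Sigma}^{\rm R}_Z(0)/M_Z^2 - \widehat{\Sigma}^{\rm R}_W(0)/M_W^2$ from eqs.(\ref{JQL}) with the counterterms (\ref{ECT}), all dependence on $a^b_1$, $a^b_8$ and $a^b_{13}$ cancels and only $2g'^2 a^b_0$ survives. A compact symbolic sketch (ours; the self-energy values at the required points are free symbols):

```python
# Check the on-shell expression for Delta rho in the effective theory.
import sympy as sp

s2, g2, MW2, MZ2 = sp.symbols('s2 g2 MW2 MZ2', positive=True)
c2 = 1 - s2
gp2 = g2*s2/c2
s, c = sp.sqrt(s2), sp.sqrt(c2)
a0, a1, a8, a13 = sp.symbols('a0 a1 a8 a13')
# EChL-loop self-energies: at q2=0 (SW0, SZ0), on shell (SWM, SZM), and
# the gamma-Z mixing and photon derivative at q2=0 (SgZ, Sgp)
SW0, SWM, SZ0, SZM, SgZ, Sgp = sp.symbols('SW0 SWM SZ0 SZM SgZ Sgp')

# counterterms from eqs. (ECT), with delta g' = 0
dMW2 = SWM
dMZ2 = SZM + MZ2*(2*gp2*a0 + 2*s2*g2*a1 + c2*g2*a8 + (g2 + gp2)*a13)
dZW = (c2/s2)*(SZM/MZ2 - SWM/MW2) + 2*(c/s)*SgZ/MZ2 - Sgp \
      + 2*g2*a0 + 2*g2*a1 + (c2 - s2)/s2*g2*a8 + (c2/s2)*(g2 + gp2)*a13
dZB = SWM/MW2 - SZM/MZ2 - 2*(s/c)*SgZ/MZ2 - Sgp \
      - (2*gp2*a0 + g2*a8 + (g2 + gp2)*a13)

# renormalized self-energies (JQL) evaluated at q^2 = 0
SigW_R0 = SW0 - dZW*MW2 - dMW2
SigZ_R0 = SZ0 - (c2*dZW + s2*dZB)*MZ2 - dMZ2 + 2*gp2*a0*MZ2

drho = SigZ_R0/MZ2 - SigW_R0/MW2
claim = SZ0/MZ2 - SW0/MW2 + 2*(s/c)*SgZ/MZ2 + 2*gp2*a0
assert sp.simplify(drho - claim) == 0
print('Delta rho formula confirmed; a1, a8 and a13 drop out')
```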
The explicit computation of the bosonic loop contributions in the
effective theory, as well as the contributions from just
$a^b_0$ and $a^b_1$ were found in \cite{DEH,EH}.
We present here the complete result
\begin{eqnarray}
\Delta \rho & = & \frac{g^2}{16 \pi^2} \left[
\frac{3}{4} \frac{s^2}{c^2} \left( - \Delta_\epsilon
+ \log \frac{M_W^2}{\mu^2} \right) + h(M_W^2,M_Z^2)
\right]
+ 2 g'^2 a^b_0, \nonumber \\[2mm]
\Delta r & = & \frac{g^2}{16 \pi^2} \left[
\frac{11}{12} \left( \Delta_\epsilon - \log \frac{M_W^2}{\mu^2}
\right) + f(M_W^2,M_Z^2) \right] \nonumber\\[2mm]
& & - 2 g^2 a^b_0 + \frac{s^2 - c^2}{s^2}
g^2 (a^b_8 + a^b_{13} ) - 2 g^2 (a^b_1 + a^b_{13}). \label{RORAB}
\end{eqnarray}
where
\begin{eqnarray}
h(M_W^2,M_Z^2) & = & \frac{1}{c^2} \log c^2 \left( \frac{17}{4 s^2} -
7 + 2 s^2 \right) + \frac{17}{4} - \frac{5}{8} \frac{s^2}{c^2}
\nonumber\\[2mm]
f(M_W^2,M_Z^2) & = & \log c^2 \left( \frac{5}{c^2} -1 +
\frac{3 c^2}{s^2} - \frac{17}{4 s^2 c^2} \right)
- s^2 (3 + 4 c^2) F(M_Z^2,M_W,M_W)\nonumber \\
& & + I_2(c^2)(1 - \frac{c^2}{s^2}) + \frac{c^2}{s^2} I_1(c^2) +
\frac{1}{8 c^2} ( 43 s^2 - 38) \nonumber \\
& & + \frac{1}{18} (154 s^2 - 166 c^2) + \frac{1}{4 c^2} +
\frac{1}{6} + \Delta \alpha +
\left( 6 + \frac{7 - 4 s^2} {2 s^2} \log c^2 \right).
\end{eqnarray}
The functions $F$, $I_1$, $I_2$ and $\Delta \alpha$ can be found in \cite{MS}.
Eqs.(\ref{RORAB}) apparently contain a divergent term and a $\mu$-scale
dependent term. However, when one redefines the bare effective chiral
parameters as usual, $a^b_i = a_i(\mu) + \delta a_i$, it is easily
seen that the divergent terms are cancelled by the divergent parts
of the $\delta a_i$, and the $\mu$-scale dependence is cancelled by the
scale dependence of the $a_i(\mu)$. The observables $\Delta \rho$ and
$\Delta r$ turn out to be finite and independent of the scale and of
the renormalization prescription, as they must be. In particular, if we
fix the subtraction scheme for the chiral counterterms
to include just the $\Delta_\epsilon$ terms, as in eq.(\ref{PP}),
the following expressions for $\Delta \rho$ and $\Delta r$
in terms of renormalized chiral parameters are obtained
\begin{eqnarray}
\Delta \rho & = & \frac{g^2}{16 \pi^2} \left[
\frac{3}{4} \frac{s^2}{c^2} \log \frac{M_W^2}{\mu^2}
+ h(M_W^2, M_Z^2) \right]
+ 2 g'^2 a_0(\mu), \nonumber \\[2mm]
\Delta r & = & \frac{g^2}{16 \pi^2} \left[
- \frac{11}{12}
\log \frac{M_W^2}{\mu^2} + f(M_W^2,M_Z^2) \right] \nonumber\\[2mm]
& & - 2 g^2 a_0(\mu) + \frac{s^2 - c^2}{s^2}
g^2 (a_8 + a_{13} ) - 2 g^2 (a_1(\mu) + a_{13}). \label{RRF}
\end{eqnarray}
Equations (\ref{RRF}) are general and can be applied to any
underlying physics for the symmetry breaking sector.
If we want to recover the values of $\Delta \rho$ and $\Delta r$
in the particular case of the SM with a heavy Higgs,
one just has to substitute the values of the chiral parameters
in eqs.(\ref{aR}) into eqs.(\ref{RRF}) to obtain
\begin{eqnarray}
\Delta \rho & = & \frac{g^2}{16 \pi^2} \left[
- \frac{3}{4} \frac{s^2}{c^2} \left(
\log \frac{M_H^2}{M_W^2} - \frac{5}{6} \right)
+ h(M_W^2, M_Z^2) \right], \nonumber \\[2mm]
\Delta r & = & \frac{g^2}{16 \pi^2} \left[
\frac{11}{12}
\left(\log \frac{M_H^2}{M_W^2} - \frac{5}{6} \right)
+ f(M_W^2,M_Z^2) \right].
\end{eqnarray}
This agrees with the result given in \cite{MS}.
One can similarly obtain the heavy Higgs contributions to other
relevant observables in electroweak phenomenology.
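To get a feel for the size of these non-decoupling terms, one can evaluate the explicit $M_H$-dependent pieces numerically. The rough sketch below drops the $M_H$-independent $h$ and $f$ contributions, and the input values $g^2 \simeq 0.42$, $s^2 \simeq 0.23$ and $M_W \simeq 80.4$\,GeV are illustrative choices of ours, not taken from the text:

```python
# Rough size of the heavy-Higgs (M_H-dependent) pieces of Delta rho
# and Delta r, dropping the M_H-independent h and f contributions.
import math

g2, s2, MW = 0.42, 0.23, 80.4      # illustrative electroweak inputs
c2 = 1 - s2
pref = g2 / (16 * math.pi**2)

def higgs_terms(MH):
    """Leading-log plus constant M_H-dependent pieces (M_H, M_W in GeV)."""
    L = math.log(MH**2 / MW**2)
    drho = -pref * 0.75 * (s2 / c2) * (L - 5.0 / 6.0)
    dr   =  pref * (11.0 / 12.0) * (L - 5.0 / 6.0)
    return drho, dr

for MH in (300.0, 1000.0):
    drho, dr = higgs_terms(MH)
    print(f'M_H = {MH:6.0f} GeV: Delta rho_H = {drho:+.2e}, '
          f'Delta r_H = {dr:+.2e}')
```

For $M_H$ around the TeV scale the Higgs pieces come out at the few per-mil level, which illustrates why both the logarithm and the constant term matter for moderate $M_H$.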
\section{Conclusions}
Given the present situation of remarkable improvement in electroweak
precision measurements and the encouraging prospects for the future,
we believe that a good understanding of the SM Higgs boson radiative
corrections is now imperative. A heavy Higgs boson does not
decouple from the low energy electroweak observables and
will therefore leave some trace on them that could be observed
with future precision measurements.
In this paper, we have calculated the complete non-decoupling
effects of the SM Higgs boson to one loop. They consist of the
already known leading logarithmic Higgs mass dependent effects
and the next to leading constant terms. Both effects are
relevant and for not too large $M_H$ values can be of
comparable magnitude.
We have classified these non-decoupling effects using the EChL
approach, an effective field theory that respects the SM
symmetries at low energies. Within this approach, the
non-decoupling effects of a heavy Higgs boson are represented,
at energies below the Higgs mass, by a subset of gauge invariant
effective operators of the EChL. We have calculated in this work
the values of the parameters of these chiral operators that
represent the SM Higgs. It is our main result and is given in
eq.(\ref{aMH}). We get just seven non-vanishing relevant
parameters summarizing the whole set of non-decoupling heavy
Higgs effects to one loop.
We have also discussed in detail in this work the relation
between the renormalization of both the SM and the effective
theory, with special emphasis on the on-shell scheme, which
has been our particular choice here.
In conclusion, we believe that the approach followed in this
work is interesting and useful because it provides a gauge
invariant way of separating the non-decoupling Higgs boson
effects from the rest of the EW radiative corrections and,
on the other hand, it is a general framework in which one can
analyze the low energy effects not only of an SM heavy Higgs,
but of alternative strongly interacting symmetry breaking
scenarios. The EChL that parametrizes the SM Higgs will
then serve as a fundamental reference model.
\section*{Acknowledgements}
We are indebted to C.P. Mart\'{\i}n for his valuable help
with some technical aspects of this work and his interesting comments.
We thank A. Dobado for many useful conversations and for
reading the manuscript. We appreciate the critical reading
of the manuscript done by S. Peris.
We would like to thank also the SLAC theory group and the CERN
Th-division for their hospitality.
M.J.H. acknowledges financial support from Consejer\'{\i}a de
Educaci\'on de la Comunidad de Madrid during her stay at SLAC.
This work has been partially supported by the Spanish
Ministerio de Educaci\'on y Ciencia under project
CICYT AEN93-0673.
\newpage
\setcounter{equation}{0}
\renewcommand{\theequation}{A.\arabic{equation}}
\section*{Appendix A}
In this appendix, we give an explicit example of the
large-$M_H$ techniques used in the calculation of
the loop integrals.
We start by giving the m-theorem of G. Giavarini, C. P. Martin and
F. Ruiz Ruiz \cite{CAR} (reduced to
the case of 1-loop integrals), which gives sufficient
conditions for a loop integral to vanish in the large-m
limit.
Consider an integral of the form
\begin{equation}
I(p,m) = m^\beta \int d^4 k \; \frac{M(k)}{\prod_i (l_i^2
+ m_i^2)^{n_i}} ,
\end{equation}
where
\begin{eqnarray}
l_i & = & k + \sum_{j=1}^{E} b_{ij} p_j \nonumber\\[2mm]
m_i & = & 0 \;\; {\rm or} \;\; m
\end{eqnarray}
and $M(k)$ is a monomial in the components of $k$. $\beta$ denotes an
arbitrary real number.
The external momenta $p_1,\ldots,p_E$ lie in a bounded
subdomain of $R^4$. Let $d$ be the mass dimension of $I(p,m)$ and $\omega$
the minimum of zero and the infrared degree of $I(p,m)$
at zero external momenta. Then\\
{\bf m-theorem}: If the integral $I(p,m)$ is both UV and IR
convergent by power counting at non-exceptional external momenta
and $d - \omega < 0 $, then $I(p,m)$ goes to zero when $m$ goes
to infinity.
As an example, consider the $(H-\phi)$ loop correction to the
$W$ self-energy given in fig.(2):
\begin{equation}
\frac{e^2}{4 s^2} \mu ^{4 - D} \int
\frac{d^D k}{(2 \pi)^D}
\frac{(2 k + q)_\mu (2 k + q)_\nu}{[k^2 - \xi M_W^2] [(k+q)^2 - M_H^2]}
\end{equation}
where $q$ is the $W$ external momentum and $D$ denotes the space-time
dimension in dimensional regularization.
Let us work out explicitly the large-$M_H$ expansion of the most
divergent part of this correction, which comes from the integral
\begin{equation}
I_{\mu \nu} = \mu^{4 - D} \int
\frac{d^D k}{(2 \pi)^D}
\frac{k_\mu \; k_\nu}{[k^2 - \xi M_W^2] [(k+q)^2 - M_H^2]}
\end{equation}
The superficial degree of UV divergence of $I_{\mu \nu}$ is 2 at $D=4$.
The first step is to rearrange {\sl algebraically}
the denominator that includes a light mass
\begin{equation}
\frac{1}{k^2 -\xi M_W^2} = \frac{1}{k^2} +
\frac{\xi M^2_W}{k^2 (k^2 - \xi M_W^2)}
\end{equation}
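This rearrangement is a purely algebraic identity (no expansion or limit is involved), so it can be verified symbolically; the sketch below uses scalar stand-ins \texttt{k2}, \texttt{xi}, \texttt{MW2} for $k^2$, $\xi$ and $M_W^2$:

```python
# Symbolic check of the algebraic rearrangement of the light-mass
# propagator; k2, xi, MW2 are scalar stand-ins for k^2, xi, M_W^2.
import sympy as sp

k2, xi, MW2 = sp.symbols('k2 xi MW2', positive=True)

lhs = 1 / (k2 - xi * MW2)
rhs = 1 / k2 + xi * MW2 / (k2 * (k2 - xi * MW2))

# The difference cancels exactly, confirming the identity.
assert sp.simplify(lhs - rhs) == 0
```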
so that one gets a term in the integral with the same degree of UV
divergence but where the light mass no longer appears in the
denominator, and a second term that still depends on the light mass
but whose degree of UV divergence is lowered by two.
Applying this algebraic rearrangement as many times as necessary
until the last term gives a convergent integral (twice in the case
of $I_{\mu \nu}$), one gets
\begin{eqnarray}
I_{\mu \nu} &=& \mu^{4 - D} \int \frac{d^D k}{(2 \pi)^D}
\left[ \frac{k_\mu k_\nu}{k^2 [(k+q)^2 - M_H^2]} +
\frac{ \xi M_W^2 k_\mu k_\nu}{k^4 [(k+q)^2 - M_H^2]} +
\frac{\xi^2 M_W^4 k_\mu k_\nu}{ k^4 [k^2 - \xi M_W^2] [(k+q)^2
- M_H^2]} \right] \nonumber \\[2mm]
&=& A_{\mu \nu} + B_{\mu \nu} + C_{\mu \nu}
\end{eqnarray}
Now the last term $C_{\mu \nu}$ is already UV and IR convergent.
One can apply the Lebesgue dominated convergence theorem
to see that the heavy Higgs mass limit can be safely taken
inside the integral in this term, and therefore,
\begin{displaymath}
C_{\mu \nu} \rightarrow 0 \hspace{8mm} {\rm when} \hspace{8mm}
M_H \rightarrow \infty
\end{displaymath}
Let us now work out the term $A_{\mu \nu}$ (the other term
$B_{\mu \nu}$ can be evaluated using the same procedure).
We rewrite the denominators in $A_{\mu \nu}$, using again an
algebraic identity
\begin{equation}
\frac{1}{(k+q)^2 - M_H^2} =
\frac{1}{k^2 - M_H^2} -
\frac{2 k q + q^2}{[k^2 - M_H^2] [(k+q)^2 - M_H^2]}
\end{equation}
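This identity is again exact and can be checked symbolically with scalar stand-ins: \texttt{K2} for $k^2$, \texttt{KQ} for $k\cdot q$, \texttt{Q2} for $q^2$ and \texttt{MH2} for $M_H^2$, so that $(k+q)^2$ becomes \texttt{K2 + 2*KQ + Q2}:

```python
# Symbolic check of the heavy-mass propagator identity; K2, KQ, Q2,
# MH2 stand for k^2, k.q, q^2 and M_H^2.
import sympy as sp

K2, KQ, Q2, MH2 = sp.symbols('K2 KQ Q2 MH2')

D = K2 + 2*KQ + Q2 - MH2                 # (k+q)^2 - M_H^2
lhs = 1 / D
rhs = 1 / (K2 - MH2) - (2*KQ + Q2) / ((K2 - MH2) * D)

# The difference cancels exactly.
assert sp.simplify(lhs - rhs) == 0
```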
One has to apply this identity as many times as necessary
until one gets an integral that fulfills the conditions
of the m-theorem.
Using this identity three times in $A_{\mu \nu}$
one gets
\begin{eqnarray}
A_{\mu \nu} &=& \mu^{4-D} \int \frac{d^D k}{(2 \pi)^D}
\left[ \frac{k_\mu k_\nu}{k^2 [k^2 - M_H^2]} -
\frac{k_\mu k_\nu (2 k q + q^2)}{k^2 [k^2 - M_H^2]^2} +
\frac{k_\mu k_\nu (2 k q + q^2)^2}{k^2 [k^2 - M_H^2]^3} -
\right. \nonumber \\[2mm]
& & \left. \frac{k_\mu k_\nu (2 k q + q^2)^3}
{k^2 [k^2 - M_H^2]^3 [(k+q)^2 - M_H^2]} \right]
\end{eqnarray}
Now the last integral is finite at $D=4$ and satisfies the
requirements of the m-theorem. It therefore goes to zero at
$D=4$ when $M_H \rightarrow \infty$.
The other three terms in $A_{\mu \nu}$
can be evaluated using standard techniques, and
one gets
\begin{equation}
A_{\mu \nu} = \frac{i}{16 \pi^2} \left[
g_{\mu \nu} \frac{M_H^2}{4} (\widehat{\Delta}_\epsilon
+ \frac{3}{2}) - g_{\mu \nu} q^2 \frac{1}{12}
(\widehat{\Delta}_\epsilon + \frac{5}{6}) +
q_\mu q_ \nu \frac{1}{3} (\widehat{\Delta}_\epsilon
+ \frac{1}{3}) \right]
\end{equation}
Using the same techniques to evaluate the rest of the terms,
one finally gets the large $M_H$ contribution of the
$(H-\phi)$ correction to the $W$ self-energy
\begin{equation}
\frac{e^2}{s^2} \frac{i}{16 \pi^2} \left[
g_{\mu \nu} \frac{M_H^2}{4} ( \widehat{\Delta}_\epsilon +
\frac{3}{2} ) +
g_{\mu \nu} \frac{\xi M_W^2}{4} (\widehat{\Delta}_\epsilon
+ \frac{3}{2}) - g_{\mu \nu} q^2 \frac{1}{12}
(\widehat{\Delta}_\epsilon + \frac{5}{6}) +
q_\mu q_ \nu \frac{1}{12} (\widehat{\Delta}_\epsilon
+ \frac{4}{3}) \right]
\end{equation}
As a final comment, we would like to point out a difference between
these large-m techniques and the commonly used expansion in powers of
the external momenta $q$. The large-m calculation gives us all the
existing contributions up to an arbitrary power in the external
momenta $q$, as long as they do not vanish in the large-$M_H$ limit.
In addition, these techniques provide an extremely easy way to
extract the non-vanishing large mass effects of any loop integral.
\newpage
\setcounter{equation}{0}
\renewcommand{\theequation}{B.\arabic{equation}}
\section*{Appendix B}
In this appendix, we present the results of the various terms
contributing to the matching equations (\ref{MAMA}). Since we are
interested mainly in the differences between the SM terms and
the corresponding ones of the EChL, it is convenient to rewrite
eq.(\ref{MAMA}) in the following form\footnote{ We denote by
$i M $ the direct result from Feynman diagrams}:
\begin{equation}
\left( i M_{abcd}^{{\rm T} \; \mu \nu \rho \lambda} -
i \widehat{M}_{abcd}^{{\rm T} \; \mu \nu \rho \lambda} \right) +
\left( i M_{abcd}^{{\rm C} \; \mu \nu \rho \lambda} -
i \widehat{M}_{abcd}^{{\rm C} \; \mu \nu \rho \lambda} \right) +
\left( i M_{abcd}^{{\rm L} \; \mu \nu \rho \lambda} -
i \widehat{M}_{abcd}^{{\rm L} \; \mu \nu \rho \lambda} \right) = 0
\end{equation}
The tree diagrams contributing to $ ( i M^{\rm T} - i
\widehat{M}^{\rm T}) $ are displayed in fig.(3). The contributions
from the counterterms are generated using eq.(\ref{RET}) for the 1PI
Green's functions of the effective theory, $i \widehat{M}^{\rm C}$,
and eq.(\ref{REPS}) for the 1LPI Green's functions of the SM, $i
M^{\rm C}$. For the difference of counterterms, we use the
following notation:
\begin{displaymath}
\Delta Q \equiv \delta Q - \widehat{\delta Q} \hspace{1.5cm}
{\rm with} \;\; Q = Z_B, Z_W, g^2, {\rm etc...}
\end{displaymath}
The loop contributions to $( i M^{\rm L} - i \widehat{M}^{\rm L} )$
come from the explicit evaluation of all the one loop diagrams
in fig.(3) in the large $M_H$ limit. To perform this calculation
we have used the techniques described in appendix A.
\subsection*{ {\bf $\displaystyle{\gamma \gamma}$}WW }
There are no differences in the tree level contributions
\begin{equation}
\left( i M_{\gamma\gamma WW}^{{\rm T} \; \mu\nu\rho\lambda} -
i \widehat{M}_{\gamma\gamma WW}^{{\rm T} \;
\mu\nu\rho\lambda} \right) = 0.
\end{equation}
The contributions from the counterterms are
\begin{eqnarray}
\left( i M_{\gamma\gamma WW}^{{\rm C} \; \mu\nu\rho\lambda} -
i \widehat{M}_{\gamma\gamma WW}^{{\rm C} \; \mu\nu\rho\lambda}
\right) &=& - i g^4 s^2 \left[ 2 \left(\Delta Z_W - \frac{\Delta
g^2}{g^2} \right)
\frac{1}{g^2}\right]
g^{\mu \nu} g^{\lambda \rho} \nonumber\\
& &+ i g^4 s^2 \left[ \left(\Delta Z_W - \frac{\Delta g^2}{g^2} \right)
\frac{1}{g^2} + a^b_{11}
\right] \left( g^{\mu \rho} g^{\lambda \nu} + g^{\lambda \mu }
g^{\nu \rho} \right).
\end{eqnarray}
The evaluation of diagrams in fig.(3.a) gives
\begin{eqnarray}
\left( i M_{\gamma\gamma WW}^{{\rm L} \; \mu\nu\rho\lambda} -
i \widehat{M}_{\gamma\gamma WW}^{{\rm L} \; \mu\nu\rho\lambda}
\right) & = &
\sum_{i=1}^{12} L_i + \sum_{j=1}^3 ( D_j - \widehat{D}_j ) =
\nonumber\\
& & - i \frac{g^4 s^2 }{16 \pi^2} \left[ \frac{1}{6} \left(
\hat{\Delta}_\epsilon + \frac{5}{6}\right) \right]
g^{\mu \nu} g^{\lambda \rho} + \nonumber \\[2mm]
& & i \frac{g^4 s^2}{16 \pi^2} \left[ \frac{1}{12} \left(
\hat{\Delta}_\epsilon + \frac{4}{3}\right)
\right] \left( g^{\mu \rho} g^{\lambda \nu} + g^{\lambda \mu }
g^{\nu \rho} \right).
\end{eqnarray}
\subsection*{{\bf ${\displaystyle \gamma}$}ZWW}
There are no differences at tree level
\begin{equation}
\left( i M_{\gamma ZWW}^{{\rm T} \; \mu\nu\rho\lambda} -
i \widehat{M}_{\gamma ZWW}^{{\rm T} \; \mu\nu\rho\lambda} \right) = 0.
\end{equation}
The contributions from the counterterms are
\begin{eqnarray}
\left( i M_{\gamma ZWW}^{{\rm C} \; \mu\nu\rho\lambda} -
i \widehat{M}_{\gamma ZWW}^{{\rm C} \; \mu\nu\rho\lambda} \right)
& = &
- i g^4 \frac{s}{c} \left[ 2 \left( \Delta Z_W - \frac{\Delta g^2}
{g^2} \right) \frac{c^2}{g^2} + 2 a^b_3 \right]
g^{\mu \nu} g^{\lambda \rho} \nonumber\\
& &+ i g^4 \frac{s}{c} \left[ \left( (\Delta Z_W - \frac{\Delta
g^2}{g^2})\frac{1}{g^2}
+ a^b_{11}\right) c^2 \right. \nonumber \\
& &\left. \rule[0mm]{0mm}{6mm} + a^b_3 - a^b_{11} + a^b_{12}
\right] \left( g^{\mu \rho} g^{\lambda \nu} + g^{\lambda \mu }
g^{\nu \rho} \right).
\end{eqnarray}
The evaluation of diagrams in fig.(3.b) gives
\begin{eqnarray}
\left( i M_{\gamma ZWW}^{{\rm L} \; \mu\nu\rho\lambda} -
i \widehat{M}_{\gamma ZWW}^{{\rm L} \; \mu\nu\rho\lambda} \right) &=&
\sum_{i=1}^{18} L_i + \sum_{j=1}^3 ( D_j - \widehat{D}_j ) = \\
& & - i \frac{g^4 s}{c}\frac{1}{16 \pi^2} \left[ \frac{1}{12}
\hat{\Delta}_\epsilon ( 2 c^2 + 1 ) + \frac{1}{72}
( 10 c^2 + 17 ) \right]
g^{\mu \nu} g^{\lambda \rho} \nonumber \\
& &+ i \frac{g^4 s}{c}\frac{1}{16 \pi^2} \left[ \frac{1}{12}
\hat{\Delta}_\epsilon ( c^2 + \frac{1}{2} ) + \frac{1}{36}
( 4 c^2 + \frac{11}{4} ) \right]
\left( g^{\mu \rho} g^{\lambda \nu} + g^{\lambda \mu }
g^{\nu \rho} \right). \nonumber
\end{eqnarray}
\subsection*{ZZWW}
In this case, there are already differences at tree level since
there is one diagram, $T_1$ in fig.(3c), with a tree level
exchange of a Higgs boson. The large $M_H$ limit of this
diagram gives
\begin{equation}
\left( i M_{ZZWW}^{{\rm T} \; \mu\nu\rho\lambda} -
i \widehat{M}_{ZZWW}^{{\rm T} \; \mu\nu\rho\lambda} \right) = T_1 =
i \frac{g^2}{c^2} \frac{M_W^2}{M_H^2} g^{\mu \nu} g^{\lambda \rho}.
\end{equation}
The contributions from the counterterms are
\begin{eqnarray}
\left( i M_{ZZWW}^{{\rm C} \; \mu\nu\rho\lambda} -
i \widehat{M}_{ZZWW}^{{\rm C} \; \mu\nu\rho\lambda} \right) &=&
+ i \frac{g^4}{c^2} \left[ - 2 \left( \Delta Z_W - \frac{\Delta
g^2}{g^2}
\right) \frac{c^4}{g^2} - 4 c^2 a^b_3 - 2 (a^b_5 + a^b_7) + \delta S
\right]
g^{\mu \nu} g^{\lambda \rho} \nonumber\\
& &+ i \frac{g^4}{c^2} \left[ \left( (\Delta Z_W - \frac{\Delta
g^2}{g^2}) \frac{1}{g^2}
+ a^b_{11}\right) c^4 - 2 c^2(-a^b_3+a^b_{11}- a^b_{12} )
\right. \nonumber\\
& & \left. \rule[0mm]{0mm}{6mm} - ( a^b_4 + a^b_6 - a^b_{11}+ 2 a^b_{12})
\right] \left( g^{\mu \rho} g^{\lambda \nu} + g^{\lambda \mu }
g^{\nu \rho} \right).
\end{eqnarray}
where $\delta S$ is given by the following combination of SM counterterms
\begin{displaymath}
\delta S = \frac{M_W^2}{g^2 M_H^2} \left(- \frac{\delta M_H^2}{M_H^2}
+ \frac{ \delta T / v}{M_H^2} + \frac{\delta M_W^2}{M_W^2} \right)
\end{displaymath}
and the SM counterterms $\delta M_H^2$, $\delta T$ and $\delta M_W^2$
are given in eq.(\ref{SMCL}).
The evaluation of the diagrams in fig.(3.c) gives
\begin{eqnarray}
\left( i M_{ZZWW}^{{\rm L} \; \mu\nu\rho\lambda} -
i \widehat{M}_{ZZWW}^{{\rm L} \; \mu\nu\rho\lambda} \right) &=&
\sum_{i=1}^{47} L_i + \sum_{j=1}^6 ( D_j - \widehat{D}_j ) =
\nonumber\\
& & i \frac{g^4 }{c^2}\frac{1}{16 \pi^2} \left[ \frac{1}{6}
\hat{\Delta}_\epsilon ( - c^4 - c^2 + \frac{43}{4} ) + \frac{1}{36}
( - 5 c^4 - 17 c^2 + 47 ) \right]
g^{\mu \nu} g^{\lambda \rho} \nonumber\\
& &+ i\frac{g^4}{c^2}\frac{1}{16 \pi^2} \left[ \frac{1}{12}
\hat{\Delta}_\epsilon (c^4 + c^2 - 1)+\frac{1}{72}
(8 c^4+11 c^2 -14)\right] \nonumber \\
& &\left(g^{\mu \rho} g^{\lambda \nu}+g^{\lambda \mu}
g^{\nu \rho}\right).
\end{eqnarray}
\subsection*{WWWW}
The differences at tree level are given by diagrams $T_1$ and $T_2$ of
fig.(3.d). In the large $M_H$ limit we get
\begin{equation}
\left( i M_{WWWW}^{{\rm T} \; \mu\nu\rho\lambda} -
i \widehat{M}_{WWWW}^{{\rm T} \; \mu\nu\rho\lambda} \right) = T_1 + T_2 =
i g^2 \frac{M_W^2}{M_H^2} \left( g^{\mu \rho} g^{\lambda \nu} +
g^{\lambda \mu } g^{\nu \rho} \right).
\end{equation}
The contributions from the counterterms are
\begin{eqnarray}
\left( i M_{WWWW}^{{\rm C} \; \mu\nu\rho\lambda} -
i \widehat{M}_{WWWW}^{{\rm C} \; \mu\nu\rho\lambda} \right) &=&
i g^4 \left[ \left( \Delta Z_W - \frac{\Delta g^2}{g^2}
\right) \frac{1}{g^2} + 2 a^b_3 - a^b_4 - a^b_8 + 2 a^b_9 -
2 a^b_{13} \right]
2 g^{\mu \nu} g^{\lambda \rho} \nonumber\\
& &+ i g^4 \left[ - \left( \Delta Z_W - \frac{\Delta
g^2}{g^2}\right) \frac{1}{g^2}
- 2 a^b_3 - a^b_4 - 2 a^b_5 + a^b_8 - 2 a^b_9
\right. \nonumber\\
& & \left. \rule[0mm]{0mm}{6mm} + 2 a^b_{13} + \delta S
\right] \left( g^{\mu \rho} g^{\lambda \nu} + g^{\lambda \mu }
g^{\nu \rho} \right).
\end{eqnarray}
The one loop diagrams are displayed in fig.(3.d) and their evaluation
in the large $M_H$ limit gives
\begin{eqnarray}
\left( i M_{WWWW}^{{\rm L} \; \mu\nu\rho\lambda} -
i \widehat{M}_{WWWW}^{{\rm L} \; \mu\nu\rho\lambda} \right) &=&
\sum_{i=1}^{58} L_i + \sum_{j=1}^{12} ( D_j - \widehat{D}_j ) =
\nonumber\\
& & i \frac{g^4}{16 \pi^2} \left[ \frac{1}{12} \left(
\hat{\Delta}_\epsilon + \frac{5}{6}\right) \right]
2 g^{\mu \nu} g^{\lambda \rho} \nonumber \\
& &+ i \frac{g^4}{16 \pi^2} \left[ \frac{37}{24}
\hat{\Delta}_\epsilon + \frac{55}{72}
\right] \left( g^{\mu \rho} g^{\lambda \nu} + g^{\lambda \mu }
g^{\nu \rho} \right).
\end{eqnarray}
\subsection*{ZZZZ}
The differences at tree level are given by diagrams $T_1$, $T_2$
and $T_3$ of fig.(3.e). In the large $M_H$ limit we get
\begin{equation}
\left( i M_{ZZZZ}^{{\rm T} \; \mu\nu\rho\lambda} -
i \widehat{M}_{ZZZZ}^{{\rm T} \; \mu\nu\rho\lambda} \right) =
T_1 + T_2 + T_3 =
i \frac{g^2}{c^4} \frac{M_W^2}{M_H^2} \left( g^{\mu \nu}
g^{\rho \lambda} + g^{\mu \rho} g^{\lambda \nu} +
g^{\lambda \mu } g^{\nu \rho} \right).
\end{equation}
The contributions from the counterterms are
\begin{equation}
\left( i M_{ZZZZ}^{{\rm C} \; \mu\nu\rho\lambda} -
i \widehat{M}_{ZZZZ}^{{\rm C} \; \mu\nu\rho\lambda} \right) =
i \frac{g^4}{c^4} \left[ - 2 ( a^b_4 + a^b_5 ) - 4 ( a^b_6 +
a^b_7 + 2 a^b_{10} ) + \delta S \right]
\left( g^{\mu \nu} g^{\rho \lambda} + g^{\mu \rho}
g^{\lambda \nu} + g^{\lambda \mu } g^{\nu \rho} \right).
\end{equation}
\noindent
The one loop diagrams are displayed in fig.(3.e) and their evaluation
in the large $M_H$ limit gives
\begin{eqnarray}
\left( i M_{ZZZZ}^{{\rm L} \; \mu\nu\rho\lambda} -
i \widehat{M}_{ZZZZ}^{{\rm L} \; \mu\nu\rho\lambda} \right) &=&
\sum_{i=1}^{75} L_i + \sum_{j=1}^{18} ( D_j - \widehat{D}_j ) =
\nonumber\\
& & i \frac{g^4}{c^4} \frac{1}{16 \pi^2} \frac{1}{8} \left[
13 \hat{\Delta}_\epsilon + \frac{20}{3} \right]
\left( g^{\mu \nu} g^{\lambda \rho} +
g^{\mu \rho} g^{\lambda \nu} + g^{\lambda \mu }
g^{\nu \rho} \right).
\end{eqnarray}
\newpage
\section*{Figure Captions}
\begin{description}
\item[Fig.1] Feynman rules for the EChL couplings that differ
from the SM. We have shown only those that are relevant for
the present calculation.
\item[Fig.2] One-loop diagrams that give a leading contribution
to the SM combination of counterterms $\delta S$, as explained
in the text.
\item[Fig.3] 1LPI standard model diagrams relevant for the
matching of the four-point Green's functions of gauge fields up
to one loop. We have calculated all the existing diagrams
including gauge, Goldstone boson and Higgs fields in the loops.
We plot here only the diagrams that do not cancel at both sides
of the matching equation, either because they do not exist in
the EChL ($L_i$, $T_i$) or because they are different in the
EChL and the SM ($D_i$). We have also restricted our plot to
those diagrams that give a non-vanishing contribution to the
matching in the large $M_H$ limit. Diagrams $D_i$ have to be
calculated also in the effective theory, $\widehat{D}_i$, using
the Feynman rules given in fig.1. \\ All the momenta are taken
incoming and arrows indicate negative charge flux.
{\bf 3.a} Diagrams for the $\gamma \gamma W W$ Green function.
(P1) indicates that the diagram obtained exchanging the
$W^+$ and $W^-$ external legs has also to be included.
There are a total of 12 $L_i$ diagrams and 3 $D_i$ diagrams. \\
{\bf 3.b} $\gamma Z W W$ Green function. (P1) represents the
diagram obtained exchanging the $W^+$ and $W^-$ external legs.
There are 18 $L_i$ and 3 $D_i$ diagrams.\\
{\bf 3.c} $Z Z W W $ Green function. (P1) represents the
diagram obtained exchanging the $W^+$ and $W^-$ external legs.
There are 47 $L_i$, 6 $D_i$ and 1 $T_i$ diagrams.\\
{\bf 3.d} $W W W W $ Green function.
(P3) indicates that the three diagrams obtained by exchanging
the ($W^-_\mu \leftrightarrow W^-_\nu $),
($W^+_\rho \leftrightarrow W^+_\lambda $),
and ($W^-_\mu \leftrightarrow W^-_\nu ,
W^+_\rho \leftrightarrow W^+_\lambda $) external legs
have to be also included.
(P3)' indicates the substitutions
($W^+_\rho \leftrightarrow W^+_\lambda $),
($W^+_\rho \leftrightarrow W^-_\nu$) and
($W^-_\mu \rightarrow W^+_\lambda,
W^+_\lambda \rightarrow W^-_\nu,
W^-_\nu \rightarrow W^-_\mu$).
(P1) indicates the exchange
($W^+_\rho \leftrightarrow W^+_\lambda $).
There are 58 $L_i$, 12 $D_i$ and 2 $T_i$ diagrams.\\
{\bf 3.e} $Z Z Z Z$ Green function.
(P5) indicates the five diagrams obtained by the
following permutations of the $(\mu \nu \rho \lambda)$ external Z's:
$(\mu \nu \lambda \rho), \; (\mu \rho \nu \lambda), \; (\mu \rho
\lambda \nu), \; (\mu \lambda \nu \rho)$ and $(\mu \lambda \rho \nu)$.
(P2) indicates the exchange of $(\mu \nu \rho \lambda)$
by $(\mu \rho \lambda \nu)$ and $(\mu \lambda \nu \rho)$.
There are 75 $L_i$, 18 $D_i$ and 3 $T_i$ diagrams.
\end{description}
\newpage
\section{Introduction}
Telescope observing time is very precious. In particular, minimising the calibration overheads for the unavoidable correction of telluric absorption features would lead to a highly increased efficiency, as the required calibration observations have to be taken directly before or after the actual science frames at the same airmass in order to capture the same transmission of the Earth's atmosphere. Apart from the loss of science time, this approach additionally introduces severe constraints on the scheduling.
We have developed the software package {\tt molecfit}, which calculates a transmission spectrum on a theoretical basis. Initially, the prototype was developed for estimating the water vapour content of the Earth's atmosphere for scheduling issues of infrared observations with the ESO Very Large Telescope \citep{SME10}. We have further developed this prototype leading to the package {\tt molecfit}, which can now be used for performing telluric absorption feature correction on single and multiple science frames.
\section{The Method}
The tool follows a similar approach to that of \citet{SEI10}, but is more advanced. It is based on the radiative transfer code LNFL/LBLRTM \citep{CLO05}, the line parameter list HITRAN \citep{ROT09}, an atmospheric profile containing information on the chemical composition and temperature of the Earth's atmosphere at the time of observations\footnote{\url{http://www.atm.ox.ac.uk/RFM/atm/}}$^,$\footnote{\url{ http://ready.arl.noaa.gov/gdas1.php}}$^,$\footnote{\url{http://archive.eso.org/asm/ambient-server?site=paranal}}, and the fitting package 'mpfit' by C. Markwardt\footnote{\url{ http://www.physics.wisc.edu/$\sim$craigm/idl/cmpfit.html}}. The fitting algorithm of {\tt molecfit} proceeds in several steps (see Figure~\ref{fig:workflow} and the corresponding user manual \citep{MF13}): (1) scaling the continuum, (2) fitting the wavelength solution and the resolution, (3) rescaling the continuum, (4) fitting the molecules, (5) a joint continuum, wavelength, and resolution fit, and (6) fitting all components (molecules, continuum, wavelength, and resolution). The tool was developed to be instrument independent. It allows the implementation of instrument-specific line kernels for the resolution fit, as well as a synthetic kernel composed of boxcar, Gaussian, and Lorentzian components. The best-fit transmission from the radiative transfer code can be used for a telluric absorption correction, and the package also provides a tool for applying this correction to a set of science observations. Additionally, {\tt molecfit} can be used to determine the water vapour content of the atmosphere for planning infrared observations; in this context, it is also able to fit mid-IR sky emission spectra. The tool has been tested on a selection of spectra taken with different ESO instruments. \articlefigure[width=1\textwidth]{P077_f1.eps}{fig:workflow}{Overview of the {\tt molecfit} workflow}
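The division step that these fitting stages prepare can be illustrated with a toy example. Everything below (wavelength grid, line positions, kernel width) is synthetic and only mimics the idea, not the actual {\tt molecfit} implementation:

```python
# Illustrative sketch (NOT the molecfit code): a model transmission is
# degraded to the instrument resolution with a line kernel and then
# divided out of the science spectrum. All quantities are synthetic.
import numpy as np

wave = np.linspace(1.50, 1.70, 2000)             # wavelength grid [micron]

# Synthetic high-resolution transmission: continuum with two telluric lines.
trans_hires = 1.0 - 0.6*np.exp(-0.5*((wave-1.57)/1e-3)**2) \
                  - 0.3*np.exp(-0.5*((wave-1.63)/1e-3)**2)

# Gaussian line kernel mimicking the instrument resolution.
x = np.arange(-25, 26)
kernel = np.exp(-0.5*(x/4.0)**2)
kernel /= kernel.sum()                           # normalise to unit area
trans_model = np.convolve(trans_hires, kernel, mode='same')

# "Observed" science spectrum: a flat continuum times the transmission.
science = 1.0 * trans_model

# Telluric absorption correction: divide by the best-fit transmission.
corrected = science / trans_model

assert np.allclose(corrected, 1.0)               # continuum recovered
```

In the real workflow the transmission comes from LNFL/LBLRTM and the kernel parameters are fitted with mpfit rather than fixed by hand.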
\section{Results}
We have successfully applied our method to various instruments. Figure~\ref{fig:results} shows a NIR arm spectrum of the galaxy NGC5638 taken with the X-Shooter spectrograph, a medium resolution ECHELLE spectrograph mounted at the ESO VLT covering $0.3$ to $2.5\,\mu$m. We used {\tt molecfit} directly on the science frames without incorporating the corresponding telluric standard star observation. The two regions (I)[$\lambda=1.55$ to $1.65\,\mu$m] and (II) [$\lambda=1.75$ to $2.0\,\mu$m] are defined to show the quality of the corrections with low and high atmospheric absorption, respectively. \articlefigure[width=1\textwidth]{P077_f2.eps}{fig:results}{X-Shooter@ESOVLT example of the telluric absorption feature correction performed by {\tt molecfit}. The uppermost panel shows the original NIR arm spectrum of the galaxy NGC5638 (black lines) and the corrected one (red lines). Regions (I)[$\lambda=1.55$ to $1.65\,\mu$m] and (II) [$\lambda=1.75$ to $2.0\,\mu$m] show the quality of the telluric feature correction in case of low and high absorption, respectively (middle panels). The lowest panel shows the transmission curve calculated by {\tt molecfit} including an identification of the most prominent molecular features.}
\section{Conclusion}
We obtain a reasonable telluric feature correction over the entire wavelength range of the NIR arm spectrum ($1$ to $2.45\,\mu$m). This is demonstrated in particular by the two regions (I) and (II), which show good results even in wavelength ranges with high molecular absorption by the Earth's atmosphere. A comparison with the IRAF task {\tt telluric}\footnote{\url{http://iraf.net/irafhelp.php?val=telluric\&help=Help+Page}}, a standard software tool, reveals fewer residuals and smaller offsets in the {\tt molecfit}-corrected spectra. Thus, performing the telluric absorption feature correction on the basis of theoretical transmission curves is a promising approach to save valuable telescope time.
\acknowledgements This study is carried out
in the framework of the Austrian ESO In-Kind project funded
by the Austrian Ministry for Research (BM:wf) under contracts
BMWF-10.490/0009-II/10/2009 and BMWF-10.490/0008-II/3/2011. This
publication is also supported by the Austrian Science Fund (FWF): P26130.
\section{Introduction to the main result}
Throughout the paper, we use the following conventions:
\begin{itemize}
\item The Greek letters $\alpha,\beta,\cdots$ denote indices from $0$ to $3$. The capital Latin letters $A,B,\cdots$ denote indices from $1$ to $2$. The lowercase Latin letters $i,j,k,\cdots$ denote indices from $1$ to $3$.
\item $(\phi,F)$ is a \emph{given} finite energy smooth solution of the MKG equations. It exists globally and remains smooth according to the classical result of Klainerman-Machedon \cite{MKGkl}.
\item The letter $f$ denotes an \emph{arbitrary} section of the bundle $\mathbf{L}$ (it may not be $\phi$). The letter $G$ denotes an \emph{arbitrary} 2-form $G_{\mu\nu}$ (it may not be $F$).
\item We define $\psi=r\phi$.
\end{itemize}
We use two coordinate systems on the Minkowski spacetime $\mathbb{R}^{3+1}$: the Cartesian coordinates $(x^0=t,x^1,x^2,x^3)$ and the polar coordinates $(t,r,\vartheta)$. The optical functions $u$ and $v$ are defined as
\begin{equation*}
u=\frac{1}{2}(t-r), \ \ v=\frac{1}{2}(t+r), \quad u_+=1+|u|,\quad v_+=1+|v|.
\end{equation*}
A \emph{null frame} is defined by $(e_1,e_2,e_3={\underline{L}},e_4=L)$, where $L=\partial_t+\partial_r$, ${\underline{L}}=\partial_t-\partial_r$ and $e_1, e_2$ is an orthonormal complement of $L$ and ${\underline{L}}$.
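As a small sanity check of this frame (assuming the signature convention $(-,+,+,+)$, which is not stated explicitly above), one can verify numerically at a point on the $x$-axis, where $\partial_r=\partial_x$, that $L$ and ${\underline{L}}$ are null and that $g(L,{\underline{L}})=-2$:

```python
# Numerical check, at a point on the x-axis where d/dr = d/dx, that
# L and Lbar are null and g(L, Lbar) = -2; the Minkowski signature
# (-,+,+,+) used here is an assumption about the paper's convention.
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])        # Minkowski metric
L    = np.array([1.0,  1.0, 0.0, 0.0])    # L    = dt + dr
Lbar = np.array([1.0, -1.0, 0.0, 0.0])    # Lbar = dt - dr

assert L @ g @ L == 0.0                   # L is null
assert Lbar @ g @ Lbar == 0.0             # Lbar is null
assert L @ g @ Lbar == -2.0               # g(L, Lbar) = -2
```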
The level sets of $u$ and $v$ define (locally) null foliations of the Minkowski spacetime. Given $r_2 >r_1 >0$, we define the outgoing (or incoming) null hypersurfaces $\mathcal{H}_{r_1}^{r_2}$ (or $\underline{\mathcal{H}}_{r_2}^{r_1}$) as
\begin{equation*}
\mathcal{H}_{r_1}^{r_2} :=\big\{(t,r,\vartheta) \,\big|\, t\geq 0, u=-\frac{1}{2}r_1, r_1\leq r\leq r_2\big\}\ \ \ \text{or}\ \ \
\underline{\mathcal{H}}_{r_2}^{r_1} :=\big\{(t,r,\vartheta) \,\big|\, t\geq 0, v= \frac{1}{2}r_2, r_1\leq r\leq r_2\big\}
\end{equation*}
respectively.
On the initial time slice $\big\{ t=0 \big\}$ where the Cauchy datum is given, we define
\begin{equation*}
\mathcal{B}_{r_1}^{r_2} :=\big\{(t,r,\vartheta) \,\big|\, t= 0, r_1\leq r\leq r_2\big\}.
\end{equation*}
In the limiting case where $r_2=\infty$, we write $\mathcal{H}_{r_1}=\mathcal{H}_{r_1}^{\infty}$, $\underline{\mathcal{H}}_{r_1}=\underline{\mathcal{H}}_{r_1}^{\infty}$ and $\mathcal{B}_{r_1}=\mathcal{B}_{r_1}^{\infty}$. The three hypersurfaces $\mathcal{H}_{r_1}^{r_2}$, $\underline{\mathcal{H}}_{r_2}^{r_1}$ and $\mathcal{B}_{r_1}^{r_2}$ bound a spacetime region, which we denote by $\mathcal{D}_{r_1}^{r_2}$. In the following picture, the gray region is $\mathcal{D}_{r_1}^{r_2}$. The truncated light cones $\mathcal{H}_{r_1}^{r_2}$ and $\underline{\mathcal{H}}_{r_2}^{r_1}$ are denoted by the dashed line segments. Their intersection is a 2-sphere of radius $\frac{r_1+r_2}{2}$ and it is the tip of $\mathcal{D}_{r_1}^{r_2}$ in the picture. We denote this sphere by $\mathcal{S}_{r_1}^{r_2}$. The dashed-dotted line segment on the bottom is $\mathcal{B}_{r_1}^{r_2}$.
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \includegraphics[width=3.3in]{pic1.pdf}
In the null frame, we have $\nabla_L L= \nabla_L {\underline{L}}=\nabla_{{\underline{L}}} L=\nabla_{\underline{L}} {\underline{L}}=0$. Moreover, we have
\begin{equation*}
\begin{split}
\nabla_{e_A} L =\frac{1}{r} e_A, \ \nabla_{e_A} {\underline{L}} =-\frac{1}{r} e_A, \ \ \nabla_{e_A} e_B = \slashed{\nabla}_{e_A}e_B + \frac{1}{2r}\slashed{g}_{AB}({\underline{L}}-L),
\end{split}
\end{equation*}
where $\slashed{\nabla}_{e_A}e_B$ is the projection of ${\nabla}_{e_A} e_B$ to a 2-sphere $\mathcal{S}_{r_1}^{r_2}$ (or to the span of $e_1$ and $e_2$) and $\slashed{g}_{AB}$ is the restriction of the Minkowski metric to $\mathcal{S}_{r_1}^{r_2}$.
We can decompose $G_{\mu\nu}$ with respect to the null frame:
\begin{equation*}
\alpha(G)_A :=G(L,e_A), \ \underline{\alpha}(G)_A:=G({\underline{L}},e_A), \ \rho(G) :=\frac{1}{2}G({\underline{L}},L), \ \sigma(G)_{AB} := G_{AB}.
\end{equation*}
For the special case $G_{\mu\nu}=F_{\mu\nu}$, we write
\begin{equation*}
\alpha_A =F(L,e_A), \ \underline{\alpha}_A=F({\underline{L}},e_A), \ \rho :=\frac{1}{2}F({\underline{L}},L), \ \sigma_{AB} := F_{AB}.
\end{equation*}
Since $\sigma_{AB}$ is a 2-form on $\mathcal{S}_{r_1}^{r_2}$, there exists a function $\sigma$ so that $
\sigma_{AB} = \sigma \slashed{\mathscr{E}}_{AB}$ where $\slashed{\mathscr{E}}_{AB}$ is the volume form on $\mathcal{S}_{r_1}^{r_2}$. For the Hodge dual $\,^*F$ of $F$, if we denote $\,^*\alpha_A = -\slashed{\mathscr{E}}_{A}{}^{B}\alpha_B$ (the Hodge dual of $\alpha$ on $\mathcal{S}_{r_1}^{r_2}$), we have
\begin{equation*}
\alpha_A(\,^*F)
=\,^*\alpha_A, \ \underline{\alpha}_A(\,^*F)
=-\,^*\underline{\alpha}_A, \ \rho(\,^*F) =\sigma, \ \sigma(\,^*F)_{AB}= -\rho\slashed{\mathscr{E}}_{AB}.
\end{equation*}
\subsection{The main theorem}
We consider the Cauchy problem for \eqref{MKG} with initial data given by
\begin{equation*}
\phi_0(x) = \phi(0,x), \ \phi_1(x)=\partial_t\phi(0,x), \ E^{\text{(ini)}}_i(x)=E_i(0,x), \ B^{\text{(ini)}}_i(x)=B_{i}(0,x).
\end{equation*}
The initial data set $(\phi_0, \phi_1, \ E^{\text{(ini)}}, \ B^{\text{(ini)}} )$
is said to be \textsl{admissible} if it satisfies the compatibility condition
\begin{equation}
\label{eq:comp:cond}
\mathbf{div}(E^{\text{(ini)}})=\Im(\phi_0\cdot \overline{\phi_1}),\quad \mathbf{div} (B^{\text{(ini)}})=0.
\end{equation}
To impose precise assumptions on the initial data, split the electric field $E^{\text{(ini)}}$ into the divergence free part $E^{df}$ and the curl free part $E^{cf}$, that is,
\[
\mathbf{div}(E^{df})=0,\quad \mathbf{curl} (E^{cf})=0,\quad E^{\text{(ini)}}=E^{df}+E^{cf}.
\]
From the above constraint equation, $E^{cf}$ is uniquely determined by $\Im(\phi_0\cdot \overline{\phi_1})$. In particular, we can freely assign $(\phi_0, \phi_1, E^{df}, B^{\text{(ini)}})$ as long as $E^{df}$ and $B^{\text{(ini)}}$ are divergence free on the initial hypersurface $\{t=0\}$. We require this part of the data to decay rapidly and to belong to a certain weighted Sobolev space. However, since $E^{cf}$ satisfies an elliptic equation on $\mathbb{R}^3$, it has a nontrivial tail $\frac{q_0 x}{r^3}$ even when $(\phi_0, \phi_1)$ are compactly supported. To describe the asymptotic behaviour of the solutions, we need to precisely capture the asymptotic behaviour of the part of the solution contributed by the charge. By formally expanding the Green's function for the Laplacian:
\begin{align*}
|x-y|^{-1}=|x|^{-1}+|x|^{-3}x\cdot y+\sum_{i,j=1}^3\frac{1}{2} |x|^{-3}(3|x|^{-2}x_ix_j-\delta_{ij})y_iy_j+o(|y|^2),
\end{align*}
we can define a potential function $V(x)$ as
\[
V(x)=|x|^{-1}\frac{1}{4\pi}\int_{\mathbb{R}^3}(1+|x|^{-2} x\cdot y+\frac{1}{2} |x|^{-2}(3|x|^{-2}(x\cdot y)^2-|y|^2))\Im(\phi_0\cdot \bar \phi_1)dy,\quad |x|>0.
\]
The potential is well defined if the initial data $(\phi_0, \phi_1)$ of the scalar field decay rapidly. With the potential $V(x)$, we can define the general charge 2-form $F[q_0]$ with components
\[
F[q_0]_{0i}=E_i[q_0]=\partial_{i}V(x),\quad F[q_0]_{ij}=0.
\]
It is straightforward to check that $F[q_0]$ satisfies the linear Maxwell equation on the region away from the spatial origin $\{x=0\}$. Moreover, there is a constant $C$, depending only on $\phi_0$ and $\phi_1$, so that
\begin{equation}
\label{eq:bd4Fq}
|\rho(F[q_0])|\leq C r^{-2},\quad |\underline{\alpha}(F[q_0])|=|\alpha(F[q_0])|\leq C r^{-3},\quad |\sigma(F[q_0])|=0.
\end{equation}
We remark that one most commonly uses $F[q_0]=\frac{q_0}{r^2}dt\wedge dr$ to denote the charge part near spatial infinity; it is a special case of the above construction.
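For the leading monopole term, taking $V \approx q_0/r$ (the normalisation is assumed here, matching the leading term of $V$ up to a constant), the $r^{-2}$ bound on $\rho(F[q_0])$ in \eqref{eq:bd4Fq} can be checked symbolically:

```python
# Symbolic check of the monopole behaviour: for V = q0/r the field
# E_i = dV/dx_i has magnitude q0/r^2, consistent with the r^{-2}
# bound on rho(F[q0]). The normalisation of q0 is an assumption.
import sympy as sp

x, y, z, q0 = sp.symbols('x y z q0', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
V = q0 / r

E = [sp.diff(V, c) for c in (x, y, z)]
Emag2 = sp.simplify(sum(e**2 for e in E))

# |E|^2 = q0^2 / r^4, i.e. |E| = q0 / r^2.
assert sp.simplify(Emag2 - q0**2 / r**4) == 0
```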
Let $\varepsilon_0$ be a small positive constant (say $10^{-2}$). We assume that the initial data is bounded in the following gauge invariant weighted Sobolev norm
\begin{equation}\label{initial data 1}
\begin{split}
C_0:=\sum\limits_{k\leq 2}\int_{\mathbb{R}^3} &\Big[(1+r)^{2k+6+8\varepsilon_0}\big(|D^{k+1} \phi_0|^2+|D^k\phi_1|^2 + |\nabla^k \big(E^{\text{(ini)}}-E[q_0]\mathbf{1}_{|x|\geq 1}\big)|^2\\
&+ |\nabla^k B^{\text{(ini)}}|^2\big)+r^{4+8\varepsilon_0}|\phi_0|^2\Big] dx.
\end{split}
\end{equation}
The main theorem of the paper is as follows:
\begin{theorem}[\bf Main result] Consider the Cauchy problem for the massless MKG equations \eqref{MKG} with admissible initial data $(\phi_0,\phi_1, E^{\text{(ini)}}_i, B^{\text{(ini)}}_i)$ bounded in the weighted norm \eqref{initial data 1}. Then there is a global-in-time solution $(\phi, F)$ satisfying the following pointwise peeling estimates
\begin{equation}\label{peeling estimates}
\begin{split}
|\phi| \leq C u_+^{-1}v_+^{-1},\quad |D_L(r\phi)| \leq C u_+^{-1} v_+^{-2},\quad |\alpha(\mathring{F})| \leq C u_+^{-1} v_+^{-3} ,\\
|\rho(\mathring{F})|+|\sigma(\mathring{F})| +|\slashed{D} \phi| \leq C u_+^{-2}v_+^{-2},\ \ |\underline{\alpha}(\mathring{F})|+|D_{\underline{L}} \phi|\leq C u_+^{-3}v_+^{-1}.
\end{split}
\end{equation}
for some constant $C$ depending only on $C_0$, where $\mathring{F}=F-F[q_0]\mathbf{1}_{\{1+t\leq |x|\}}$ with $\mathbf{1}_{\{1+t\leq |x|\}}$ the characteristic function of the exterior region $\{(t, x)| t+1\leq |x|\}$.
\end{theorem}
We give several remarks.
\begin{remark}
There is no restriction on the size or on the support of the data. In particular, the charge $q_0$ can be large. Besides the above pointwise estimates, uniform energy estimates as well as weighted energy estimates can also be derived in the course of the proof.
\end{remark}
\begin{remark}
The peeling estimates \eqref{peeling estimates} for the chargeless part of the solution together with the trivial bound \eqref{eq:bd4Fq} of the charge part describe the asymptotic behaviour of the full solution in the exterior region. Moreover the estimate implies that the nontrivial charge can only affect the asymptotic behaviour of the solution in the exterior region. This confirms the conjecture of Shu in \cite{Shu}.
\end{remark}
\begin{remark}
There is a heuristic explanation of the construction of the charge part $F[q_0]$ from the multipole expansion perspective: expand the Maxwell field $F$ in powers of $r^{-1}$ near spatial infinity as
\begin{equation*}
F = F_2 +F_3+F_4+F_5+\cdots,
\end{equation*}
where $F_k = O(r^{-k})$. The formal expansion of the Green's function gives $F[q_0]=F_2+F_3+F_4$. In this work we require that the perturbation start from $F_5$. The main reason for doing this is to make $F-F[q_0]$ decay sufficiently fast initially, so that the chargeless part is bounded in the weighted Sobolev norm defined in \eqref{initial data 1}.
\end{remark}
\begin{remark}
Regarding the dependence of the constant $C$ on the size of the initial data, our proof shows that $C$ depends exponentially on the zeroth order weighted energy (the one without derivatives of the initial data) but polynomially on the higher order weighted energies. Judging from the charge part alone, it seems that the exponential dependence on the zeroth order energy cannot be improved. However, from the point of view of the bilinear estimates in \cite{MKGkl}, we conjecture that the dependence on the higher order energies should be linear.
\end{remark}
\subsection{An outline of the proof: difficulties, ideas and novelties}
The proof uses almost all the existing techniques and results for Maxwell-Klein-Gordon equations: the vector field method, the conformal compactification, the conformal analogues in the vector field method and the low regularity existence results of Klainerman-Machedon. Besides these, we will also introduce new commutation vector fields, new null forms and study some new structure of the nonlinearities. In the rest of the section, we will first sketch the proof in three steps. Then, we will present the difficulties in each step and provide heuristic ideas to handle these difficulties. Finally, we will summarize some new aspects of the proof.
\subsubsection{The structure of the proof}
The proof consists of three steps:
\begin{itemize}
\item[Step 1]
We take a positive number $R_*$, which determines the so-called exterior region $\mathcal{D}_{R_*}$ (the grey part in the picture below).
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \includegraphics[width=3.8in]{exterior.pdf}
\noindent For large $R_*$, by restricting the data to the region where $r\geq R_*$, i.e., $\mathcal{B}_{R_*}$ (the bottom of the grey region), we can assume that the chargeless part of the restricted data is small. Since the grey region is the domain of dependence of $\mathcal{B}_{R_*}$, the solution in $\mathcal{D}_{R_*}$ is completely determined by the restricted data on $\mathcal{B}_{R_*}$. We therefore study the long time behaviour of solutions of the MKG equations in the grey region $\mathcal{D}_{R_*}$ with data whose chargeless part is small. We emphasize that this is not a small data problem, as the charge part of the solution is large and is independent of the radius $R_*$.
\smallskip
\item[Step 2] This step connects the first step to the third. First of all, we will carefully choose a hyperboloid in $\mathcal{D}_{R_*}$ (on which we have precise control on the solution from the previous step). This hypersurface is denoted by $\Sigma_+$ in the next picture.
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \includegraphics[width=3 in]{hyperboloidenergy.pdf}
\noindent The solution restricted to this hyperboloid can be viewed as an initial datum for the solution in the interior region, which is unknown so far. This step is devoted to showing that the solution obtained from the previous step is sufficiently regular on $\Sigma_+$, so that we can carry out the next step.
\smallskip
\item[Step 3] In this last step, we study the asymptotics of the solution in the causal future $\mathcal{J}^+(\Sigma)$, which is the grey region in the left part of the figure below (and the white region in the previous picture).
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \includegraphics[width=5in]{conformal.pdf}
The hypersurface $\Sigma$ consists of two parts: $\Sigma_-$ and $\Sigma_+$. Since $\Sigma_-$ is compact, the solution on $\Sigma_-$ can be well controlled by the data on the compact region $\{t=0, |x|\leq R_*\}$. This indeed follows from the classical result of Eardley-Moncrief or the result of Klainerman-Machedon. Together with Step 2, the restriction of the solution to $\Sigma$ will be well understood in terms of the initial data.
Then we perform a conformal transformation $\Psi$ to map $\mathcal{J}^+(\Sigma)$ to a finite backward light cone (the grey cone on the right hand side of the picture). The hypersurface $\Sigma$ is mapped to the bottom of the cone. By appropriately multiplying by conformal factors, understanding the global dynamics of solutions to the MKG equations on the left of the picture is then reduced to understanding the solution to the MKG equations on the right of the picture. The estimates from Step 2 provide a bound on the $H^2$-norm of the solution on the bottom of the cone on the right hand side of the picture. This allows us to use the classical theory of Klainerman-Machedon to bound the solution on the cone up to two derivatives, and hence its $L^\infty$ norm. Finally, we undo the conformal transformation by rewriting the solution on the left hand side in terms of the solution on the right hand side. The conformal factors then give the decay estimates of the solution in $\mathcal{J}^+(\Sigma)$. Together with the decay estimates from Step 1, we derive the peeling estimates in the main theorem.
\end{itemize}
\subsubsection{Difficulties in the proof}
We list several difficulties which did not appear in previous works on the MKG equations. We would like to emphasize that the first difficulty listed below (the largeness of the charge) is related to all the others; the remaining difficulties arise in the course of resolving the first one. We also want to point out that the most difficult part of the proof is Step 1.
\begin{enumerate}
\item The large nonzero charge.
\smallskip
Although the energy norm of the chargeless part of the data in Step 1 is small, the charge $q_0$ can be large. First of all, the traditional conformal compactification method used by Choquet-Bruhat and Christodoulou in \cite{ChristodoulouYangM} requires strong decay of the data, which forces the charge to vanish. Secondly, the presence of a nonzero charge may cause a logarithmic divergence in the energy estimates; see a more thorough discussion for the purely small data case in the work \cite{LindbladMKG} of Lindblad-Sterbenz. The error term caused by the charge can in fact be absorbed if the charge is sufficiently small. We overcome the large charge difficulty by using the method developed by Yang in \cite{yangILEMKG}.
We would also like to compare this charge difficulty with the massive case treated in the recent work \cite{yang:mMKG} of Klainerman-Wang-Yang, which studied the massive MKG equations with small initial data. Their method also applies to the case of an arbitrarily large charge. However, due to the presence of the mass, which gives control of the scalar field itself, the effect of the nonzero charge can be easily controlled (see the more detailed discussion in the next subsection). The main difficulty there, instead, lies in the inconsistent asymptotic behaviours of Maxwell fields and solutions of Klein-Gordon equations.
\smallskip
\item The sharp peeling estimates.
\smallskip
Since in Step 3 we have to compactify $\mathcal{J}^+(\Sigma)$, the solution obtained from Step 1 must be sufficiently regular on $\Sigma$ so that its conformal compactification is bounded in the right Sobolev spaces. In particular, this requires sharp decay estimates such as $D_L(r\phi)=O(r^{-2})$ and $\alpha=O(r^{-3})$ along outgoing light cones. As far as we know, even in the small data regime (with small charge), these estimates were previously unknown.
\smallskip
\item New commutators to prove the necessary sharp peeling estimates.
\smallskip
The idea for obtaining the above sharp peeling estimates is straightforward: we need to put more $r$-weights into the usual energy estimates, so that the $r$-weights can be converted into extra decay via the Sobolev inequality. We will use the conformal Morawetz vector field $K$ as a commutator. This vector field is of order $2$ in the weight $r$ and is traditionally used only as a multiplier. As a consequence, the structure of the nonlinear terms after commutation becomes the primary concern, and we will show that they enjoy a new null structure.
\smallskip
\item The hidden null structure of the MKG equations related to commutators.
\smallskip
This is related to point (3) above. When one commutes vector fields with the MKG equations, although many error terms may be generated, one needs to at least make sure that some of the fundamental structures remain unchanged. Very often, these structures are important from the analytic perspective; more precisely, they should be phrased in such a way that they fit into the energy estimates. We will show that there is a new null structure of the nonlinear terms which is invariant under commutation with the correct vector fields. There is also another important type of structure, which we call the \textsl{reduced structure}, that remains unchanged after commutation.
\smallskip
\item The choice of conformal compactifications.
\smallskip
The presence of a nonzero charge prevents us from using the usual Penrose type compactification for the entire spacetime (see \cite{ChristodoulouYangM}): the $\rho$-component of the Maxwell field behaves as $\frac{q_0}{r^2}$, which cannot be compactified near spatial infinity. However, this effect of the charge does not propagate from spatial infinity to future null infinity, so we can indeed perform a conformal transformation inside a null cone to avoid spatial infinity.
\smallskip
\item Precise energy estimates on the hypersurface $\Sigma$ in Step 2.
\smallskip
Since $\Sigma$ is a hyperboloid in Minkowski spacetime, the energy estimates on $\Sigma$, especially those needed in the Klainerman-Machedon theory after the compactification, are not straightforward. Nevertheless, this part is less serious than the previous ones and can be handled by deriving the classical energy estimates in a geometric way.
\end{enumerate}
\subsubsection{Key ideas and novelties of the proof}
In this subsection, we list the ideas and new features of the proof that address the difficulties mentioned in the previous subsection.
\begin{enumerate}
\item The reduced structure and converting spatial decay against the logarithmic growth.
\smallskip
We first explain the reduced structure of the nonlinearity. Let $F=dA$. We may think of $A$ as $\phi$. Thus, the Maxwell equations are reduced to the form
\begin{equation*}
\Box A = \phi \cdot D\phi.
\end{equation*}
By contrast, a typical nonlinear wave equation with quadratic interaction looks like
\begin{equation*}
\Box \phi = \nabla \phi \cdot \nabla \phi.
\end{equation*}
The nonlinearity of the MKG equations thus contains one fewer derivative. This is the reduced structure.
In terms of energy estimates, the reduced structure will be reflected in the following formula:
\begin{equation*}
\int_{\mathcal{H}_{r_1}^{\infty}} |D_L \phi|^2 \leq C_1 \mathring{\varepsilon} r_1^{-\gamma_0}+C_2\int_{r_1}^{\infty}\int_{\mathcal{H}_{s}^{\infty}}\frac{|q_0|}{r^2}|\phi||D_L \phi|.
\end{equation*}
The left hand side is a classical energy flux through the outgoing null cone $\{u=r_1\}$. The first term on the right hand side comes from the data, and the exponent $-\gamma_0$ reflects the decay of the data near spatial infinity. The second term on the right hand side contains a $\phi$ without any derivative acting on it; the factor $\frac{|q_0|}{r^2}$ arises from the charge. Heuristically for waves, a $\frac{1}{r}$ factor can be regarded as a $D_L$-derivative, so we should think of the second term as $\frac{1}{r}|D_L\phi|^2$; there is then a logarithmic growth when we integrate. We remark that in the massive case in \cite{yang:mMKG}, since a solution of the massive Klein-Gordon equation decays as quickly as its derivatives, i.e., one can regard $\phi$ as $D_L\phi$, the above error term can be easily absorbed by using Gronwall's inequality.
We use an idea introduced by Yang in \cite{yangILEMKG} to handle this logarithmic loss. The precise statement is summarized and proved in Lemma \ref{lemma key}. Morally speaking, to obtain the estimates for the energy flux, we can afford a loss in $r$ instead of in time:
\begin{equation*}
\int_{\mathcal{H}_{r_1}^{\infty}} |D_L \phi|^2 \leq C\mathring{\varepsilon} \cdot r_1^{-\gamma_0+\varepsilon_0}.
\end{equation*}
In other words, the decay rate near null infinity changes from $\gamma_0$ to $\gamma_0 -\varepsilon_0$.
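The mechanism behind this trade-off can be illustrated by an elementary computation (a caricature of Lemma \ref{lemma key}, with the actual error terms suppressed): a borderline $s^{-1}$ factor integrated in time produces a logarithm, while the same factor integrated against an $r$-decaying flux converges, at the price of an $\varepsilon_0$-loss in the rate:

```latex
\begin{align*}
\int_{1}^{t} \frac{ds}{s} &= \log t \;\longrightarrow\; \infty \qquad (t\to\infty),\\
\int_{r_1}^{\infty} \frac{1}{s}\, s^{-\gamma_0+\varepsilon_0}\, ds
&= \frac{r_1^{-\gamma_0+\varepsilon_0}}{\gamma_0-\varepsilon_0}, \qquad 0<\varepsilon_0<\gamma_0.
\end{align*}
```

Thus a bootstrap ansatz of the form $\int_{\mathcal{H}_{r_1}^{\infty}}|D_L\phi|^2 \leq 2C_1\mathring{\varepsilon}\, r_1^{-\gamma_0+\varepsilon_0}$ is consistent with the error term without any logarithmic loss in time, modulo the largeness of $q_0$, which the actual argument must additionally address.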
\smallskip
\item The $\varepsilon_0$-reductive argument for higher order energy estimates.
\smallskip
The argument is designed to make better use of the reduced structure of the nonlinearity in the higher order energy estimates. Although the charge vanishes after taking derivatives, an error term of the above type arises from the connection field $A$ and has the same structure as described previously. We design an ansatz which allows higher order derivatives to lose more decay: more precisely, we lose $2(k+1)\varepsilon_0$ decay for the $k$-th order derivatives. The following example illustrates how the argument works. In the course of deriving energy estimates for the first order derivatives, schematically, the nonlinear terms look like $\int |\nabla \phi||\nabla^{2}\phi|$. On the other hand, $|\nabla\phi|$ is already controlled when one derives estimates for the solution itself, without commuting vector fields with the equations; thus the estimates on $\nabla \phi$ only lose $2\varepsilon_0$ decay. Compared to the $4\varepsilon_0$ loss in the first order derivative case, we indeed gain decay for the nonlinear terms. This gain plays an essential role in closing the estimates.
\smallskip
\item Morawetz vector field as commutator and new commutation formulas.
\smallskip
Traditionally, the Morawetz vector field $K$ is used only as a multiplier in the energy estimates. In this work, we commute $K$ with the equations. The extra weights, compared to the classical commutators such as rotations and scaling, provide an extra decay factor for the solutions near null infinity. This extra decay factor is indispensable when we perform the conformal compactification. We remark that, since $K$ is the image of $\partial_t$ under the inversion map, commuting $K$ with the equation can be regarded as the usual commutation with $\partial_t$ after the conformal transformation. Thus, this idea should be viewed as a vector field method version of conformal transformations. We also remark that such weighted vector fields of order $2$ have been used in the works \cite{Stefanos:avectorfield}, \cite{Stefanos:Latetime} of Angelopoulos-Aretakis-Gajic for deriving the sharp decay of linear waves on black hole spacetimes. In those works, however, commuting the equation with the vector field $r^2 L$ is straightforward once the equation is written in terms of the radiation field $r\phi$ in a suitable null frame. This idea also applies to the scalar field equation of the MKG system in our situation, but is less successful for the Maxwell equations, since they do not commute with the vector field $K$. Our new observation is as follows: for $Z\in \mathcal{Z}=\big\{T,\Omega_{12},\Omega_{23},\Omega_{31},S, K\big\}$, where $T$ is the time translation, $\Omega_{ij}$ are the rotations and $S$ is the scaling vector field, and for $\mathbf{Div}$ (the principal part of the Maxwell equations) and $\Box_A$, we have the following two formulas
\begin{equation}
[r^2 \mathbf{Div\,}, \mathcal{L}_Z]G = 0, \ \
[r^2\Box_A, D_Z+\frac{Z(r)}{r}]\phi = r^2 Q(\phi, F;Z)
\end{equation}
for any closed 2-form $G$ and any complex scalar field $\phi$. In other words, although the vector fields $S$ and $K$ do not commute with the equations, they do commute with the equations multiplied by $r^2$.
We emphasize that the formula holds for $K$, and that $Q(\phi, F;Z)$ is quadratic in $\phi$ and $F=dA$. We also remark that, to our knowledge, these commutator formulas are new.
\smallskip
\item A new null form.
\smallskip
The quadratic form $Q(\phi, F;Z)$ is indeed a null form. Take $Z=S$ for example. It can be shown that
\begin{equation*}
\begin{split}
|Q(\phi,F;Z)| \lesssim &\big(\frac{r}{|u|}|\rho|+|\underline{\alpha}|\big)|D_L(r\phi)|+\big(\frac{r}{|u|}|\alpha|+\frac{|u|}{r}|\underline{\alpha}|+|\sigma|\big)|\slashed{D}(r\phi)| \\ &+\big(|\alpha|+\frac{|u|}{r}|\rho|\big)|D_{\underline{L}}(r\phi)|+\big(|\rho|+|\sigma|\big)|\phi|+{\text{cubic terms}}.
\end{split}
\end{equation*}
Similar estimates hold for the other vector fields in $\mathcal{Z}$. We remark that, rather than $\phi$ itself, it is the derivatives of $r\phi$ that appear naturally in the above null structure estimate.
The most remarkable property of $Q(\phi,F;Z)$ is its iterative structure. This is crucial when we commute multiple derivatives with the equations. More precisely, if we define $\widehat{D}_{Z} = D_Z + \frac{Z(r)}{r}$, we can show that
\begin{equation*}
\big[\widehat{D}_{Y},[\widehat{D}_{X}, r^2\Box_A]\big]\phi = -r^2Q(\phi,F;[Y,X])-r^2Q(\phi,\mathcal{L}_Y F;X)+2r^2F_{Y\mu}F_{X}{}^{\mu}\phi.
\end{equation*}
The right hand side, after commuting two derivatives, can still be expressed in terms of $Q$, and it still satisfies the null structure. This is one of the keys of the proof.
We remark that, to our knowledge, this null structure is also new.
\smallskip
\item The algebraic structure of $J$.
\smallskip
We have seen that $r\phi$ appears naturally in the null form estimates. We would like to point out another perspective. We mentioned previously that $D_L(r\phi) = O(\frac{1}{r^2})$. We can also show that the best decay estimate for $D_L \phi$ is still $O(\frac{1}{r^2})$ instead of $O(\frac{1}{r^3})$. From this point of view, we may consider $r\phi$ to be ``better'' than $\phi$ itself. On the other hand, for the Maxwell equations, instead of commuting with the operator $\mathbf{Div}$, we commute with $r^2\mathbf{Div}$. We thus need to analyze $r^2\cdot J$, where the current density $J$ has components $J_\mu = \Im(\phi\cdot \overline{D_\mu\phi})$. The special algebraic form implies
\begin{equation*}
r^2 \cdot J_\mu[\phi] = \Im\big((r\phi)\cdot \overline{D_\mu(r\phi)}\big)=J_\mu[r\phi].
\end{equation*}
Therefore, we only have to deal with the ``better'' field $r\phi$ rather than $\phi$ itself. This special cancellation coming from the algebraic structure is crucial for obtaining the sharp peeling estimates and closing the energy estimates.
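The cancellation can be verified in one line, using the Leibniz rule $D_\mu(r\phi)=(\partial_\mu r)\phi + r D_\mu\phi$ for the covariant derivative:

```latex
\begin{align*}
J_\mu[r\phi] &= \Im\big(r\phi\cdot\overline{D_\mu (r\phi)}\big)
= \Im\big(r\phi\cdot\big((\partial_\mu r)\bar\phi + r\,\overline{D_\mu\phi}\big)\big)\\
&= r(\partial_\mu r)\,\Im\big(|\phi|^2\big) + r^2\,\Im\big(\phi\cdot\overline{D_\mu\phi}\big)
= r^2 J_\mu[\phi],
\end{align*}
```

since $|\phi|^2$ is real, so its imaginary part vanishes.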
\smallskip
\item The conformal compactification.
\smallskip
Since the trace of the energy momentum tensor of the MKG equations is not zero, this field theory is not conformally invariant. However, it is still invariant under special conformal transformations, e.g., those whose conformal factor $\Lambda$ satisfies $\Box \Lambda = 0$. The inversion map restricted to the forward light cone is such a conformal map in $\mathbb{R}^{3+1}$ (but not in other dimensions).
On the other hand, there is another important observation: although the presence of a nonzero charge does not allow compactification around spatial infinity, this effect does not appear at null infinity. This was first pointed out by Shu in \cite{Shu}. The following computation for $F[q_0]$ justifies this observation: on an outgoing light cone $\mathcal{H}_u$ defined by $r-t=2u$, the conformal energy flux through this light cone (the basic energy quantity needed after the conformal transformation) is given by
\begin{equation*}
\mathcal{E}\big[F[q_0]\big] \approx \int_{\mathcal{H}_u} |u|^4|\rho|^2.
\end{equation*}
Since $|\rho|\approx \frac{|q_0|}{r^2}$ (as $F[q_0]$ has the leading term $\frac{q_0}{r^2}\,dt\wedge dr$) and $u$ is constant on $\mathcal{H}_u$, the above energy flux is finite. On the other hand, if one considers the conformal energy on a constant time slice, the factor $u^4$ would be replaced by $r^2 u^2$ (near spatial infinity), so that the contribution of the charge part of the field would be divergent. This is why we choose inversions as the conformal mappings.
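Schematically, parametrizing the cone $\mathcal{H}_u$ by $r$ (so that the flux measure is comparable to $r^2\,dr\,d\omega$) and using $|\rho|\approx \frac{|q_0|}{r^2}$, the two situations compare as follows; this is only a caricature of the computation:

```latex
\begin{align*}
\int_{\mathcal{H}_u} |u|^4|\rho|^2
&\approx |u|^4 q_0^2 \int_{2|u|}^{\infty} r^{-4}\, r^{2}\, dr
= \frac{q_0^2\,|u|^{3}}{2} \;<\;\infty,\\
\int_{\{t=\mathrm{const}\}} r^2 u^2 |\rho|^2
&\approx q_0^2 \int^{\infty} u^{2}\, dr
\approx \frac{q_0^2}{4}\int^{\infty} r^{2}\, dr = \infty,
\end{align*}
```

where in the second line we used $|u|\approx r/2$ near spatial infinity on a constant time slice.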
\smallskip
\item $r^p$-weighted energy estimates.
\smallskip
We use the $r^p$-weighted energy method, which was first introduced by Dafermos-Rodnianski in \cite{newapp} for studying the decay of linear waves on black hole spacetimes. The method has also been used in the first author's works on MKG equations \cite{yangILEMKG, yangMKG}, where $p<2$. The new point in the current work is that we have to handle the endpoint case $p=2$ to get the sharp peeling estimates.
\end{enumerate}
\subsection{Further discussions}\label{section future}
It is instructive to compare with the works \cite{Kl:peeling:EE}, \cite{kl:EE} of Klainerman-Nicol{\`o}, which prove higher peeling estimates near Minkowski spacetime in an exterior region, and the work \cite{Jonathan:stabilityofEE} of Luk-Oh proving the global nonlinear stability of dispersive solutions to Einstein equations. Indeed, for a given initial datum of the vacuum Einstein field equations, one can work in the region $r\geq R_*$ and assume that the datum is small provided $R_*$ is sufficiently large. The mass $m$ for the Einstein equations plays a similar role to the charge $q_0$ for the Maxwell-Klein-Gordon equations: each gives rise to a slowly decaying tail at spatial infinity corresponding to a static solution (the Schwarzschild solution in the Einstein case). The proof of Klainerman and Nicol{\`o} indeed does not use the smallness of the mass $m$, and this is similar to our case where we do not assume $q_0$ is small. For the vacuum Einstein field equations, the mass $m$ enters through the $\rho$-component of the curvature:
\[\rho=\frac{m}{r^3}+\mathring{\rho},
\]
where $\mathring{\rho}$ decays as $\frac{1}{r^4}$. However, for MKG equations, the charge $q_0$ comes in through the $\rho$-component of the Maxwell field:
\[\rho=\frac{q_0}{r^2}+\mathring{\rho}.
\]
The $r^{-3}$ decay is sufficient for applying Gronwall's inequality in the Einstein case, while for the MKG equations we have to find a new way to compensate for the logarithmic loss, as mentioned before.
Alternatively, regarding this large mass issue for the Einstein equations, Luk-Oh in \cite{Jonathan:stabilityofEE} choose a special gauge condition so that the mass problem does not appear. Since our approach in this paper is gauge invariant and the charge is inherent in the connection field $A$, the charge difficulty is essentially different from the mass problem for the Einstein field equations.
For the Einstein field equations coupled with other fields, say a scalar field, the coupled field may bring a slower decay tail. We believe that our method in the exterior region can also be applied to these cases to derive sharp peeling estimates.
\textbf{Acknowledgments.} The authors would like to thank Sergiu Klainerman for helpful suggestions on the manuscript. The second author is also deeply indebted to Pengyu Le for teaching him the conformal aspects of the Maxwell-Klein-Gordon theory. The first author is partially supported by the Recruitment Program of Global Experts in China and a start-up grant at Peking University. The second author is supported by NSFC-11522111 and the China National Support Program for Young Top-Notch Talents.
\section{Preparations}
\subsection{The null decompositions of equations}
Recall from the main theorem that the chargeless part $\mathring{F}$ of the solution is defined as
\[
\mathring{F}=F-F[q_0]\mathbf{1}_{\{t+1\leq |x|\}}.
\]
It is straightforward to see that ${\mathring{F}}$ satisfies the same equations as $F$:
\begin{equation}\label{Maxwell for Fc}
\nabla^\mu {\mathring{F}}_{\mu\nu}=-J_\nu
\end{equation}
in the exterior region $\{t+1\leq |x|\}$.
In terms of the null components, we can rewrite this equation as
\begin{equation}\label{Maxwell null Fc}
\left\{\begin{aligned}
L(r^2\mathring{\rho})+\slashed{\mathbf{div\,}} (r^2\mathring{\alpha}) =r^2 J_L, \ \ {\underline{L}}(r^2\mathring{\rho})&-\slashed{\mathbf{div\,}} (r^2\underline{\mathring{\alpha}}) =-r^2 J_{\underline{L}},\\
L(r^2\mathring{\sigma})+\slashed{\mathbf{div\,}} (r^2\,^*\mathring{\alpha}) =0, \ \ {\underline{L}}(r^2\mathring{\sigma})&+\slashed{\mathbf{div\,}} (r^2\,^*\underline{\mathring{\alpha}}) =0,\\
\slashed{\nabla}_{\underline{L}} (r \mathring{\alpha})_A-\slashed{\nabla}_A(r\mathring{\rho})-\,^*\slashed{\nabla}_A(r\mathring{\sigma}) &=rJ_A,\\
\slashed{\nabla}_L (r \underline{\mathring{\alpha}})_A+\slashed{\nabla}_A(r\mathring{\rho})-\,^*\slashed{\nabla}_A(r\mathring{\sigma}) &=rJ_A.
\end{aligned}
\right.
\end{equation}
Here for simplicity, $(\mathring{\alpha}, \mathring{\underline{\alpha}}, \mathring{\rho}, \mathring{\sigma})$ are the null components associated to the 2-form $\mathring{F}$. For any complex scalar field ${f}$, the covariant wave operator $\Box_A$ can be expressed in null frames:
\begin{equation}\label{scalar box null}
\begin{split}
r\Box_A {f} = -D_{\underline{L}} D_L \big(r {f} \big) + \slashed{D}^2\big( r{f} \big) + i\rho\cdot\big(r{f}\big)= -D_L D_{\underline{L}} \big(r {f} \big) + \slashed{D}^2\big( r{f} \big) - i\rho\cdot\big(r{f}\big),
\end{split}
\end{equation}
where $\slashed{D}^2(r{f}) =\sum_{A,B=1}^2 m^{AB} D_{e_A} D_{e_B}(r{f})$.
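As a consistency check (under the convention $L=\partial_t+\partial_r$, $\underline{L}=\partial_t-\partial_r$, an assumption of this sketch), specializing \eqref{scalar box null} to the free case $A=0$, $\rho=0$ recovers the classical radiation-field identity:

```latex
\begin{align*}
r\Box f &= r\Big(-\partial_t^2+\partial_r^2+\frac{2}{r}\partial_r+\slashed{\Delta}\Big)f
= \big(-\partial_t^2+\partial_r^2\big)(rf)+\slashed{\Delta}(rf)\\
&= -\underline{L}L(rf)+\slashed{\Delta}(rf),
\end{align*}
```

where $\slashed{\Delta}$ denotes the Laplacian on the spheres of radius $r$, and we used $\partial_r^2(rf)=r\partial_r^2 f+2\partial_r f$ together with $-\partial_t^2+\partial_r^2=-(\partial_t-\partial_r)(\partial_t+\partial_r)$.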
\subsection{Commutator vector fields and null structures}\label{section commutator vector fields}
We shall use the following set of vector fields as commutators:
\begin{equation*}
\mathcal{Z}=\big\{T,\Omega_{12},\Omega_{23},\Omega_{31},S, K\big\},
\end{equation*}
where $K=v^2 L + u^2{\underline{L}}$ is the Morawetz vector field, $S = vL+u{\underline{L}}$ is the scaling vector field, $\Omega_{ij}=x_i\partial_j-x_j\partial_i$ are the rotation vector fields and $T=\partial_t$ is the time translation. For vector fields in $\mathcal{Z}$, we define their discrepancy index as
\[
\xi(T)=-1,\quad \xi(\Omega_{ij})=\xi(S)=0,\quad \xi(K)=1.
\]
The energy estimates involve the deformation tensors of these vector fields: $$\,^{(Z)}\pi_{\mu\nu}=\frac{1}{2}\mathcal{L}_Z m_{\mu\nu}=\frac{1}{2}(\nabla_\mu Z_\nu + \nabla_\nu Z_\mu), $$
where $\mathcal{L}_Z m$ is the Lie derivative of the Minkowski metric.
By computation, we have
\begin{equation*}
\,^{(K)}\pi_{\mu\nu}=t\cdot m_{\mu\nu}, \ \ \,^{(S)}\pi_{\mu\nu}= m_{\mu\nu},\quad ^{(\Omega_{ij})}\pi_{\mu\nu}=0,\quad ^{(T)}\pi_{\mu\nu}=0,
\end{equation*}
where $m_{\mu\nu}$ is the flat metric of the Minkowski spacetime. We also remark that the set $\mathcal{Z}$ is closed under the Lie bracket: apart from the standard brackets among the rotations, the only non-vanishing $[Z_1,Z_2]$'s for $Z_1,Z_2\in \mathcal{Z}$ are
\begin{equation*}
[T,S]=T, \ \ [T,K]=S, \ \ [S,K]=K.
\end{equation*}
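For instance, the bracket $[S,K]=K$ can be checked directly in the null frame, assuming the normalization $u=\frac{t-r}{2}$, $v=\frac{t+r}{2}$ (so that $L(v)=\underline{L}(u)=1$ and $L(u)=\underline{L}(v)=0$), which is the convention consistent with $^{(S)}\pi=m$:

```latex
\begin{align*}
[S,K] &= \big[vL+u\underline{L},\; v^2L+u^2\underline{L}\big]
= \big(S(v^2)-K(v)\big)L+\big(S(u^2)-K(u)\big)\underline{L}\\
&= (2v^2-v^2)L+(2u^2-u^2)\underline{L} = v^2L+u^2\underline{L}=K,
\end{align*}
```

using $[L,\underline{L}]=0$, $S(v^2)=2v\,S(v)=2v^2$ and $K(v)=v^2L(v)=v^2$.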
For $Z\in \mathcal{Z}$, we define the modified covariant derivative acting on complex scalar field associated to the 1-form $A$ as follows:
\begin{equation*}
\widehat{D}_Z=D_Z + \frac{Z(r)}{r}.
\end{equation*}
This is the conjugate of $D_Z$ by the function $r$, i.e., $\widehat{D}_Z f = r^{-1}D_Z(r f)$.
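Indeed, by the Leibniz rule $D_Z(rf)=Z(r)f+rD_Z f$,

```latex
\[
r^{-1}D_Z(rf) = r^{-1}\big(Z(r)\,f + r\,D_Z f\big) = D_Z f + \frac{Z(r)}{r}\,f = \widehat{D}_Z f.
\]
```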
\begin{lemma}[Commutator formula]\label{commutator lemma}
For any closed 2-form $G$ and any complex scalar field $f$, we have
\begin{equation}\label{commutator formula 2}
[r^2 \mathbf{Div\,}, \mathcal{L}_Z]G = 0,
\end{equation}
\begin{equation}\label{commutator formula 3}
[r^2\Box_A, \widehat{D}_Z]f = 2\sqrt{-1}r^2 F_{\mu \nu}Z^\nu D^\mu f+\sqrt{-1}r^2\nabla^\mu \big( Z^\nu F_{\mu \nu} \big)f
\end{equation}
for all $Z\in \mathcal{Z}$.
\end{lemma}
\begin{remark}
To our knowledge, this set of commutator formulas is new, and it is one of the key ingredients of the proof.
\end{remark}
\begin{proof}
We first show the following formula
\begin{equation}\label{commutator formula 1}
[\Box_A, D_Z + \frac{Z(r)}{r}]{f} = \frac{2Z(r)}{r}\Box_A{f} + 2\sqrt{-1}F_{\mu \nu}Z^\nu D^\mu{f}+\sqrt{-1}\nabla^\mu \big( Z^\nu F_{\mu \nu} \big)f.
\end{equation}
By commuting derivatives, we have
\begin{equation*}
[\Box_A, D_Z]{f} = \Box Z_\mu D^\mu {f} + 2\,^{(Z)}\pi_{\mu\nu}D^\mu D^\nu{f} + 2\sqrt{-1}F_{\mu \nu}Z^\nu D^\mu{f}+\sqrt{-1}\nabla^\mu \big( Z^\nu F_{\mu \nu} \big){f}.
\end{equation*}
For any function $f_1$, we have
\begin{equation*}
[\Box_A, f_1]{f} = \Box f_1\cdot {f}+2\nabla^\mu f_1 D_\mu{f},
\end{equation*}
where $f_1$ will be $\frac{Z(r)}{r}$.
For $Z\in \mathcal{Z}$ with $Z \neq K$ and $Z\neq S$, we have $\,^{(Z)}\pi_{\mu\nu}=0$ and $f_1=0$; therefore, \eqref{commutator formula 1} holds.
For $K$, we have $f_1=t$, $[\Box_A, f_1]{f} = 2\nabla^\mu f_1 D_\mu{f}$ and $\Box K =-2T$. Hence,
\begin{equation*}
[\Box_A, D_K+\frac{K(r)}{r}]{f} = -2T_\mu D^\mu {f} + 2t\Box_A{f} + 2\sqrt{-1}F_{\mu \nu}K^\nu D^\mu{f}+\sqrt{-1}\nabla^\mu \big( K^\nu F_{\mu \nu} \big){f}+2\nabla^\mu t D_\mu{f}.
\end{equation*}
The first term and the last term on the right hand side cancel. This proves the case $Z=K$.
For $S$, we have $f_1=1$ and the proof follows in exactly the same manner. Finally, we multiply by $r^2$: since $\widehat{D}_Z(r^2 g)=r^2\widehat{D}_Z g+2rZ(r)g$ for any scalar $g$, we have $[r^2\Box_A, \widehat{D}_Z]f = r^2[\Box_A, \widehat{D}_Z]f-2rZ(r)\Box_A f$, so the first term on the right hand side of \eqref{commutator formula 1} is exactly cancelled. Thus formula \eqref{commutator formula 3} holds.
\medskip
We turn to the proof of \eqref{commutator formula 2}. By commuting the derivatives, we have
\begin{equation*}
[\mathbf{Div\,}, \mathcal{L}_Z]G_\nu =\Box Z^\mu \,G_{\mu\nu}+\nabla_\nu\nabla^\mu Z^{\delta}\,G_{\mu\delta} + 2\,^{(Z)}\pi^{\mu\delta}\nabla_{\delta}G_{\mu\nu}.
\end{equation*}
If $Z\in \mathcal{Z}$ with $Z\neq K$ and $Z\neq S$, then $[r^2,\mathcal{L}_Z]=0$, and the above formula shows that $[\mathbf{Div\,}, \mathcal{L}_Z]=0$. Hence, \eqref{commutator formula 2} holds in these cases.
For $K$, the above formula implies
\begin{align*}
[\mathbf{Div\,}, \mathcal{L}_K]G_\nu &= -2T^\mu \,G_{\mu\nu}+\nabla_\nu\nabla^\mu K^{\delta}\,G_{\mu\delta} + 2t\nabla^{\mu}G_{\mu\nu}.
\end{align*}
In the Cartesian coordinates, one can check immediately that
\begin{equation*}
\nabla_\nu\nabla^\mu K^{\delta}\,G_{\mu\delta}=2G(\partial_\nu,\partial_t).
\end{equation*}
Therefore, we obtain
\begin{align*}
[\mathbf{Div\,}, \mathcal{L}_K]G &= 2t\,\mathbf{Div\,}{G}.
\end{align*}
Finally, we have
\begin{align*}
\mathcal{L}_K\big(r^2 \mathbf{Div\,} {G}\big) &= K(r^2)\mathbf{Div\,}{G}+r^2 \mathcal{L}_K \big(\mathbf{Div\,} {G}\big)\\
&=2tr^2 \mathbf{Div\,}{G} + r^2 \mathbf{Div\,} \big( \mathcal{L}_K {G}\big) -r^2[\mathbf{Div\,}, \mathcal{L}_K]{G}\\
&= r^2 \mathbf{Div\,} \big( \mathcal{L}_K G \big).
\end{align*}
For $Z=S$, recall that ${}^{(S)}\pi=m$. The computation in this case is straightforward. This yields \eqref{commutator formula 2}.
\end{proof}
Motivated by the formula \eqref{commutator formula 3}, we introduce the following commutator null form.
\begin{definition}
For any closed 2-form $G$ and any complex scalar field $f$, we define for any vector field $Z$ the quadratic form
\begin{equation*}
Q(f,G;Z) =2\sqrt{-1} G_{\mu \nu}Z^\nu D^\mu f+\sqrt{-1} \nabla^\mu \big( Z^\nu G_{\mu \nu} \big)f.
\end{equation*}
\end{definition}
We then can write \eqref{commutator formula 3} as
\begin{equation}\label{commutator formula 4}
[r^2\Box_A, \widehat{D}_Z]f = r^2 Q(f,F;Z).
\end{equation}
To avoid too many constants, in the sequel we use the convention that $B\lesssim K$ means that there is a constant $C$, depending only on the charge $q_0$ and the size $C_0$ of the initial data, such that $B\leq CK$.
The next proposition manifests the null structure of the quadratic form $Q(f,G;Z)$:
\begin{proposition}[Pointwise estimate of null form]\label{lemma null form}
For all $Z\in \mathcal{Z}$, $r\geq 1$ and $|u|\geq 1$, we have
\begin{equation}\label{null form estimate}
\begin{split}
|u|^{-\xi(Z)}|Q(f,G;Z)| &\lesssim \big(\frac{r}{|u|}|\rho|+|\underline{\alpha}|\big)|D_L(rf)|+\big(\frac{r}{|u|}|\alpha|+\frac{|u|}{r}|\underline{\alpha}|+|\sigma|\big)|\slashed{D}(rf)| \\
&\ \ \ +\big(|\alpha|+\frac{|u|}{r}|\rho|\big)|D_{\underline{L}}(rf)|+ \big(|\rho|+|\sigma|\big)|f|+\big(|u||J_{\underline{L}}|+\frac{r^2}{|u|}|J_L|+r|\slashed{J}|\big)|f|
\end{split}
\end{equation}
for all $G$ and $f$. The current $J$ is associated to $G$, i.e., $J_\nu=\nabla^\mu G_{\mu\nu}$. Similarly, the null components $\alpha, \rho, \sigma$ and $\underline{\alpha}$ are all defined with respect to $G$.
\end{proposition}
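Throughout the proof it is convenient to keep in mind how the vector fields in $\mathcal{Z}$ decompose in the null frame. With the normalization $u=\frac{t-r}{2}$, $v=\frac{t+r}{2}$, $L=\partial_t+\partial_r$ and ${\underline{L}}=\partial_t-\partial_r$ (consistent with the computations below), we have
\begin{equation*}
T=\tfrac{1}{2}(L+{\underline{L}}),\qquad S=u{\underline{L}}+vL,\qquad K=u^2{\underline{L}}+v^2L,
\end{equation*}
while the rotations $\Omega_{ij}=x_i\partial_j-x_j\partial_i$ are tangent to the spheres of constant $t$ and $r$.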
\begin{proof}
We bound $Q(f, G;Z)$ for each $Z\in \mathcal{Z}$ one by one. We have
\begin{equation*}
\frac{Q(f,G;Z)}{\sqrt{-1}} =\underbrace{2 r^{-1}\,G_{\mu \nu}\,Z^\nu D^\mu(rf)}_{\mathbf{I}_1}- \underbrace{\big(Z^\nu J_\nu\big)\cdot f}_{\mathbf{I}_2} -\underbrace{\big(2r^{-1}\nabla^\mu r G_{\mu\nu}Z^\nu-\nabla^\mu Z^\nu G_{\mu\nu}\big)f}_{\mathbf{I}_3}.
\end{equation*}
For $Z=T$, we have
\begin{align*}
\mathbf{I}_1&=-\frac{1}{r}(\alpha+\underline{\alpha})\cdot \slashed{D}(rf)+\frac{1}{r}\rho\big(D_{\underline{L}} (rf)-D_L(rf)\big),\ \mathbf{I}_2=\frac{1}{2}(J_L+J_{\underline{L}})f, \ \mathbf{I}_3=-r^{-1}\rho f.
\end{align*}
Therefore, we have
\begin{equation*}
|Q(f,G;T)| \lesssim\frac{|\slashed{D}(rf)|}{r}\big(|\alpha|+|\underline{\alpha}|\big)+\frac{|\rho|}{r}\big(|f|+|D_L(rf)|+|D_{\underline{L}} (rf)|\big)+\big(|J_L|+|J_{\underline{L}}|\big)|f|.
\end{equation*}
For $Z=\Omega_{ij}$, we have
\begin{align*}
\mathbf{I}_1&\lesssim |D_L(rf)||\underline{\alpha}|+|D_{\underline{L}} (rf)||\alpha|+|\sigma||\slashed{D} (rf)|,\ \ \mathbf{I}_2\leq r|\slashed{J}||f|,\\
\mathbf{I}_3&=\Big(\frac{1}{r}\big(G_{L\Omega_{ij}}-G_{{\underline{L}} \Omega_{ij}}\big) + \nabla_{\underline{L}}\Omega^A_{ij} G_{LA} + \nabla_L \Omega_{ij}^A G_{{\underline{L}} A}-\nabla^{A}\Omega_{ij}^{B}G_{AB}\Big) f =-\nabla^{A}\Omega_{ij}^{B}G_{AB}f.
\end{align*}
Therefore, we have
\begin{equation}\label{null form Omega}
|Q(f,G;\Omega_{ij})| \lesssim |D_L(rf)||\underline{\alpha}|+|D_{\underline{L}} (rf)||\alpha|+|\sigma||f|+r |\slashed{J}||f|+|\sigma||\slashed{D} (rf)|.
\end{equation}
For $Z=S$, we have
\begin{align*}
\mathbf{I}_1&= 2\frac{u}{r}\rho D_{\underline{L}} (rf) - 2\frac{v}{r}\rho D_L (rf) -2\frac{v}{r}\alpha\cdot \slashed{D}(rf)-2\frac{u}{r}\underline{\alpha}\cdot \slashed{D}(rf),\\
\mathbf{I}_2&=-v J_L f-u J_{{\underline{L}}}f,\ \ \mathbf{I}_3=-2\frac{u+v}{r}\rho f.
\end{align*}
Therefore, we have
\begin{equation*}
|Q(f,G;S)| \lesssim r^{-1} {|u|}\big(|\rho|| D_{\underline{L}} (rf)| + |\underline{\alpha}||\slashed{D}(rf)|\big) + \big(|\rho||D_L (rf)| + |\alpha||\slashed{D}(rf)|\big)+|\rho||f|+r |J_L||f|+ |u| |J_{{\underline{L}}}||f|.
\end{equation*}
For $Z=K$, we have
\begin{align*}
\mathbf{I}_1&= -2\frac{u^2}{r}\rho D_{\underline{L}} (rf) + 2\frac{v^2}{r}\rho D_L (rf) + 2\frac{v^2}{r}\alpha\cdot \slashed{D}(rf)+2\frac{u^2}{r}\underline{\alpha}\cdot \slashed{D}(rf),\\
\mathbf{I}_2&=-v^2 J_L f-u^2 J_{{\underline{L}}}f,\ \ \mathbf{I}_3=-4\frac{uv}{r}\rho f.
\end{align*}
Therefore, we have
\begin{equation}\label{null form estimate for K}
\begin{split}
|Q(f,G;K)| \lesssim & r^{-1} {u^2}\big(|\rho|| D_{\underline{L}}(rf)| + |\underline{\alpha}||\slashed{D}(rf)|\big) + {r}\big(|\rho||D_L (rf)| + |\alpha||\slashed{D}(rf)|\big)\\
&+|u||\rho||f|+r^2 |J_L||f|+ u^2 |J_{{\underline{L}}}||f|.
\end{split}
\end{equation}
The proposition is an immediate consequence of the above estimates.
\end{proof}
To analyze the higher order energy estimates of the solution, we will commute the equations with the vector fields twice. From the commutation formula \eqref{commutator formula 4}, we have the following identity:
\begin{equation}\label{formula to compute box z z}
\begin{split}
&\ \ \ r^2\Box_A\widehat{D}_{Z_{1}} \widehat{D}_{Z_2}{f}\\
&= [r^2\Box_A,\widehat{D}_{Z_{1}}] \widehat{D}_{Z_2}{f} +[r^2\Box_A, \widehat{D}_{Z_2}] \widehat{D}_{Z_{1}}{f}+\big[\widehat{D}_{Z_{1}},[r^2\Box_A, \widehat{D}_{Z_2}]\big]{f}+\widehat{D}_{Z_{1}} \widehat{D}_{Z_2} \big(r^2\Box_A{f}\big)\\
&= r^2Q(\widehat{D}_{Z_1}{f},F;Z_2) + r^2Q(\widehat{D}_{Z_2}{f},F;Z_1) +\big[\widehat{D}_{Z_{1}},[r^2\Box_A, \widehat{D}_{Z_2}]\big]{f}+\widehat{D}_{Z_{1}} \widehat{D}_{Z_2} \big(r^2\Box_A{f}\big).
\end{split}
\end{equation}
Note that for solutions of the MKG equations the last term vanishes. In particular, to derive the equation for the second order derivatives of the solution, we need to estimate the double commutator.
\begin{proposition}\label{prop twice commutated}
For all $X,Y \in \mathcal{Z}$, we have
\begin{equation}\label{commutator twice commutated}
\big[\widehat{D}_{Y},[r^2\Box_A, \widehat{D}_{X}]\big]{f} = r^2Q({f}, F;[Y,X])+r^2Q({f}, \mathcal{L}_Y F;X)-2r^2F_{Y\mu}F_{X}{}^{\mu}{f}.
\end{equation}
\end{proposition}
\begin{proof}
First from Lemma \ref{commutator lemma}, we can write
\begin{equation*}
[r^2\Box_A, \widehat{D}_{X}]{f}=r^2(2\sqrt{-1} X^\nu F_{\mu\nu}D^{\mu}{f}+\sqrt{-1} \nabla^\mu( F_{\mu\nu}X^\nu){f}).
\end{equation*}
Then for any two vector fields $X$ and $Y$, a direct computation implies that
\begin{align*}
&\big[\widehat{D}_{Y},[\widehat{D}_{X}, r^2\Box_A]\big]{f}\\
&=-\nabla_Y\big(2\sqrt{-1}r^2X^\nu F_{\mu\nu}\big)D^\mu{f} - \nabla_Y\big(\sqrt{-1}r^2\nabla^\mu(F_{\mu\nu}X^\nu)\big){f}\\
&\quad +2\sqrt{-1}r^2X^\nu F_{\mu\nu}\nabla^\mu\big(\frac{Y(r)}{r}\big){f}
-2\sqrt{-1}r^2X^\nu F_{\mu\nu}[D_Y,D^\mu]{f}\\
&=-\Big(\underbrace{\nabla_Y\big(2\sqrt{-1}r^2X^\nu F_{\mu\nu}\big)D^\mu{f}}_{\mathbf{I}_1} +\underbrace{\nabla_Y\big(\sqrt{-1}r^2\nabla^\mu(F_{\mu\nu}X^\nu)\big){f}}_{\mathbf{I}_2}\Big)\\
&\quad +2\sqrt{-1}r^2X^\nu F_{\mu\nu}\nabla^\mu\big(\frac{Y(r)}{r}\big){f} +2\sqrt{-1}r^2 X^\nu \nabla^\mu Y^\delta F_{\mu\nu} D_\delta {f} +2r^2F_{Y\mu}F_{X}{}^{\mu}{f}.
\end{align*}
Now for the term $\mathbf{I}_1$, we can compute that
\begin{align*}
\mathbf{I}_1&=2\sqrt{-1}Y(r^2)X^\nu F_{\mu\nu}D^\mu{f}+2\sqrt{-1}r^2\nabla_Y X^\nu F_{\mu\nu}D^\mu{f}+2\sqrt{-1}r^2X^\nu \nabla_Y F_{\mu\nu}D^\mu{f}\\
&=2\sqrt{-1}Y(r^2)X^\nu F_{\mu\nu}D^\mu{f}+2\sqrt{-1}r^2\big(\mathcal{L}_Y X^\nu +\nabla_X Y^\nu\big) F_{\mu\nu}D^\mu{f}\\
& \ \ \ +2\sqrt{-1}r^2X^\nu \big(\mathcal{L}_Y F_{\mu\nu}-\nabla_\mu Y^\delta F_{\delta\nu}-\nabla_\nu Y^\delta F_{\mu\delta}\big)D^\mu{f}\\
&=\underbrace{2\sqrt{-1}Y(r^2)X^\nu F_{\mu\nu}D^\mu{f}}_{\mathbf{I}_{11}}+\underbrace{2\sqrt{-1}r^2 \mathcal{L}_Y X^\nu F_{\mu\nu}D^\mu{f}}_{\mathbf{I}_{12}}\\
&\quad+\underbrace{2\sqrt{-1}r^2X^\nu \mathcal{L}_Y F_{\mu\nu} D^\mu{f}}_{\mathbf{I}_{13}}-2\sqrt{-1}r^2X^\nu \nabla_\mu Y^\delta F_{\delta\nu}D^\mu{f}.
\end{align*}
As for the term $\mathbf{I}_2$, we can further show that
\begin{align*}
\mathbf{I}_2&=\sqrt{-1}Y(r^2)\nabla^\mu(F_{\mu\nu}X^\nu) {f}+\sqrt{-1}r^2\nabla^\mu(F_{\mu\nu}\nabla_Y X^\nu) {f}\\
&\quad +\sqrt{-1}r^2\nabla^\mu(\nabla_Y F_{\mu\nu}X^\nu) {f}+\sqrt{-1}r^2[\nabla_Y, \nabla^\mu](F_{\mu\nu}X^\nu) {f}\\
&=\sqrt{-1}Y(r^2)\nabla^\mu(F_{\mu\nu}X^\nu) {f}+\sqrt{-1}r^2\nabla^\mu\big(F_{\mu\nu}(\mathcal{L}_Y X^\nu +
\nabla_X Y^\nu)\big) {f}-\sqrt{-1}r^2\nabla^\mu Y^\delta F_{\mu\nu}\nabla_\delta X^\nu {f}\\
& \ \ \ +\sqrt{-1}r^2\nabla^\mu\Big(\big(\mathcal{L}_Y F_{\mu\nu}-\nabla_\mu Y^\delta F_{\delta\nu}-\nabla_\nu Y^\delta F_{\mu\delta}\big)X^\nu\Big) {f}-\sqrt{-1}r^2\nabla^\mu Y^\delta \nabla_\delta F_{\mu\nu}X^\nu {f} \\
&=\underbrace{\sqrt{-1}Y(r^2)\nabla^\mu(F_{\mu\nu}X^\nu) {f}}_{\mathbf{I}_{21}}+\underbrace{\sqrt{-1}r^2\nabla^\mu\big(F_{\mu\nu} \mathcal{L}_Y X^\nu \big) {f}}_{\mathbf{I}_{22}}+\underbrace{\sqrt{-1}r^2\nabla^\mu\big( \mathcal{L}_Y F_{\mu\nu}X^\nu\big) {f}}_{\mathbf{I}_{23}}\\
&\ \ \ -\sqrt{-1}r^2\nabla^\mu \big( \nabla_\mu Y^\delta F_{\delta\nu} X^\nu\big) {f}-\sqrt{-1}r^2\nabla^\mu Y^\delta \nabla_\delta F_{\mu\nu}X^\nu {f} -\sqrt{-1}r^2\nabla^\mu Y^\delta F_{\mu\nu}\nabla_\delta X^\nu {f}.
\end{align*}
We notice that the $\mathbf{I}_{1i}+\mathbf{I}_{2i}$'s can be expressed in terms of the quadratic form $Q$. We therefore derive that
\begin{align*}
&\big[\widehat{D}_{Y},[\widehat{D}_{X}, r^2\Box_A]\big]{f} \\
&= -Y(r^2)Q({f},F;X)-r^2Q({f}, F;[Y,X])-r^2Q({f},\mathcal{L}_Y F;X)+2r^2F_{Y\mu}F_{X}{}^{\mu}{f}\\
&\ \ \ +2\sqrt{-1}r^2X^\nu F_{\mu\nu}\nabla^\mu\big(\frac{Y(r)}{r}\big){f}+4\sqrt{-1}r^2X^\nu \,^{(Y)}\pi^{\delta\mu}F_{\mu\nu}D_\delta{f}\\
& \ \ +\sqrt{-1}r^2\nabla^\mu \big( \nabla_\mu Y^\delta F_{\delta\nu} X^\nu\big) {f}+\sqrt{-1}r^2\nabla^\mu Y^\delta \nabla_\delta F_{\mu\nu}X^\nu {f} +\sqrt{-1}r^2\nabla^\mu Y^\delta F_{\mu\nu}\nabla_\delta X^\nu {f}\\
&= -Y(r^2)Q({f}, F;X)-r^2Q({f}, F;[Y,X])-r^2Q({f}, \mathcal{L}_Y F;X)+2r^2F_{Y\mu}F_{X}{}^{\mu}{f}\\
&\ \ \ +2\sqrt{-1}r^2X^\nu F_{\mu\nu}\nabla^\mu\big(\frac{Y(r)}{r}\big){f}+4\sqrt{-1}r^2X^\nu \,^{(Y)}\pi^{\delta\mu}F_{\mu\nu}D_\delta{f}\\
& \ \ +\sqrt{-1}r^2\Box Y^\delta F_{\delta\nu} X^\nu {f}+2\sqrt{-1}r^2 \,^{(Y)}\pi^{\delta\mu}\left( \nabla_\delta F_{\mu\nu}X^\nu {f} + F_{\mu\nu} \nabla_\delta X^\nu {f}\right).
\end{align*}
Note that the last two terms can be written as
\begin{equation*}
\,^{(Y)}\pi^{\delta\mu} (\nabla_\delta F_{\mu\nu}X^\nu {f} +F_{\mu\nu} \nabla_\delta X^\nu {f})
=\,^{(Y)}\pi^{\delta\mu} \nabla_\delta\big(F_{\mu\nu}X^\nu \big){f}.
\end{equation*}
We now simplify the previous identity by examining the vector fields $Y\in\mathcal{Z}$. There are two cases: $Y=K$ or $S$, and $Y=T$ or $\Omega_{ij}$. In the latter situation, we notice that $Y$ is Killing and
\[
Y(r)=0,\quad ^{(Y)}\pi=0,\quad \Box Y^{\delta}=0.
\]
Therefore we conclude from the previous identity that
\begin{equation*}
\begin{split}
\big[\widehat{D}_{Y},[\widehat{D}_{X}, r^2\Box_A]\big]{f} &= -r^2Q({f},F;[Y,X])-r^2Q(f,\mathcal{L}_Y F;X)+2r^2F_{Y\mu}F_{X}{}^{\mu}{f}.
\end{split}
\end{equation*}
Now for the first case when $Y=K$ or $S$, note that we can write these two vector fields in a uniform way
\[
Y=u^p{\underline{L}}+v^p L,\quad p=1, 2,
\]
in which $p=1$ corresponds to the scaling vector field $S$ while $p=2$ stands for the conformal Killing vector field $K$. We then can compute that
\begin{align*}
^{(Y)}\pi=t^{p-1}m,\quad r^{-1}Y(r)=t^{p-1},\quad \Box Y^{\delta}\partial_{\delta}=p(p-1)\partial_t, \quad Y(r^2)=2 rY(r)=2r^2 t^{p-1}.
\end{align*}
We therefore can show that
\begin{equation*}
\begin{split}
&4X^\nu \,^{(Y)}\pi^{\delta\mu}F_{\mu\nu}D_\delta{f} +2X^\nu F_{\mu\nu}\nabla^\mu\big(\frac{Y(r)}{r}\big){f}+\Box Y^\delta F_{\delta\nu} X^\nu {f}+2\,^{(Y)}\pi^{\delta\mu} \nabla_\delta\big(F_{\mu\nu}X^\nu \big){f}\\
&=4X^\nu t^{p-1}m^{\delta\mu}F_{\mu\nu}D_\delta{f} +2X^\nu F_{\mu\nu}\nabla^\mu(t^{p-1}){f}+ p(p-1) F_{0\nu} X^\nu {f}+2 t^{p-1} m^{\delta\mu} \nabla_\delta\big(F_{\mu\nu}X^\nu \big){f}\\
&=4X^\nu t^{p-1}F_{\mu\nu}D^\mu {f} + (p-2)(p-1) F_{0\nu} X^\nu {f}+2 t^{p-1} \nabla^\mu\big(F_{\mu\nu}X^\nu \big){f}\\
&=-\sqrt{-1}r^{-2} Y(r^2)Q({f}, F; X).
\end{split}
\end{equation*}
The last step follows from the definition of $Q({f}, F;X)$ and the fact that $p=1$ or $2$. In particular we have shown that the identity \eqref{commutator twice commutated} holds for all $X$, $Y\in\mathcal{Z}$.
\end{proof}
We are now ready to commute vector fields with the MKG equations \eqref{MKG}. First of all, recall that we have defined the discrepancy index $\xi$ for $Z\in \mathcal{Z}$: its values for $T$, $\Omega_{ij}$, $S$ and $K$ are $-1$, $0$, $0$ and $1$, respectively. Let $\mathbf{k} =(k_0,k_1,k_2)$ be a triplet of nonnegative integers. The numbers $k_0$, $k_1$ and $k_2$ denote the numbers of vector fields of index $-1$, $0$ and $1$, respectively. For a given $\mathbf{k}$, we define the {\bf discrepancy index} $\xi(\mathbf{k})$ as
\begin{equation*}
\xi(\mathbf{k})=k_2-k_0.
\end{equation*}
We also define $|\mathbf{k}|=k_0+k_1+k_2$. For derivatives on forms, for example the Maxwell field $F$ or the charge density $J$, we take the Lie derivative $\mathcal{L}$. For any given tensor field $\mathcal{T}$, we use the expression $\mathcal{L}_Z^\mathbf{k} \mathcal{T}$ to denote the following $\mathbf{k}$-derivatives on forms for $Z \in \mathcal{Z}$:
\begin{equation*}
\mathcal{L}_Z^\mathbf{k}\mathcal{T} = \mathcal{L}_{Z_1} \mathcal{L}_{Z_2}\cdots \mathcal{L}_{Z_{|\mathbf{k}|}} \mathcal{T},
\end{equation*}
where there are exactly $k_0$ index $-1$ vector fields, exactly $k_1$ index $0$ vector fields and exactly $k_2$ index $1$ vector fields in the collection $\{Z_i \,|\, 1\leq i \leq |\mathbf{k}|\}$.
In the sequel we only consider situations where $|\mathbf{k}|\leq 2$, which corresponds to commuting at most two derivatives with the Maxwell-Klein-Gordon equations \eqref{MKG}.
As for derivatives on the complex scalar fields, we use the modified covariant derivative $\widehat{D}$. Define
\begin{equation*}
\widehat{D}_Z^\mathbf{k}{f} = \widehat{D}_{Z_1} \widehat{D}_{Z_2}\cdots \widehat{D}_{Z_{|\mathbf{k}|}}{f}.
\end{equation*}
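For instance, for $\mathbf{k}=(1,1,0)$ one representative choice is
\begin{equation*}
\widehat{D}_Z^{\mathbf{k}}{f} = \widehat{D}_{T}\widehat{D}_{\Omega_{12}}{f},
\end{equation*}
since $T$ has index $-1$ and $\Omega_{12}$ has index $0$; in this case $|\mathbf{k}|=2$ and $\xi(\mathbf{k})=k_2-k_0=-1$.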
For the solution $\phi$, we also define shorthand notations $\phi^{(\mathbf{k})}= \widehat{D}^{\mathbf{k}}_Z\phi$ and $\psi^{(\mathbf{k})}= r\widehat{D}^{\mathbf{k}}_Z\phi$. The previous commutator calculations allow us to derive the wave equations for $\phi^{(\mathbf{k})}$. Define
\[
{N^{(\mathbf{k})}} = \Box_A {\phi^{(\mathbf{k})}}.
\]
In particular we have $N^{(\mathbf{0})}=0$. By definition of $Q$, we see that $N^{(\mathbf{1})}=Q(\phi, F;Z)$. For the second order derivative $\phi^{(\mathbf{k})}=\widehat{D}_{Z_1}\widehat{D}_{Z_2}\phi$, Proposition \ref{prop twice commutated} together with the identity \eqref{formula to compute box z z} implies that
\begin{equation}\label{commutated wave equation}
N^{(\mathbf{2})}=Q(\widehat{D}_{Z_1}\phi,F;Z_2) + Q(\widehat{D}_{Z_2}\phi,F;Z_1)+Q(\phi, F;[Z_1,Z_2])+Q(\phi, \mathcal{L}_{Z_1} F;Z_2)-2F_{Z_1\mu}F_{Z_2}{}^{\mu}\phi.
\end{equation}
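Since the right hand side involves $Q(\phi, F;[Z_1,Z_2])$, it is worth recording that $\mathcal{Z}$ is closed under commutators. With the normalization $u=\frac{t-r}{2}$, $v=\frac{t+r}{2}$, a direct computation gives
\begin{equation*}
[T,S]=T,\qquad [T,K]=S,\qquad [S,K]=K,\qquad [T,\Omega_{ij}]=[S,\Omega_{ij}]=[K,\Omega_{ij}]=0,
\end{equation*}
while $[\Omega_{ij},\Omega_{kl}]$ is a linear combination of rotations. Hence $Q(\phi, F;[Z_1,Z_2])$ is again a null form of the type controlled by Proposition \ref{lemma null form}.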
We now turn to the Maxwell part. We first explain our notations. For $r\neq 0$, we shall use the following shorthand notations for derivatives of the chargeless part of $F$:
\begin{equation*}
\alpha^{(\mathbf{k})} =\alpha(\mathcal{L}_Z^\mathbf{k}{\mathring{F}}), \ \ \underline{\alpha}^{(\mathbf{k})} =\underline{\alpha}(\mathcal{L}_Z^\mathbf{k}{\mathring{F}}), \ \ \rho^{(\mathbf{k})} =\rho(\mathcal{L}_Z^\mathbf{k}{\mathring{F}}), \ \ \sigma^{(\mathbf{k})} =\sigma(\mathcal{L}_Z^\mathbf{k}{\mathring{F}}).
\end{equation*}
We notice that $\rho^{(\mathbf{0})} \neq \rho$ for $q_0\neq 0$. In most of the cases in this paper, only the total number of derivatives in $\mathcal{L}^\mathbf{k}_{Z}$ is important. The exact form of $\mathbf{k}$ is usually irrelevant unless it is emphasized. Therefore, we will use shorthand notations ${\mathbf{(1)}}$ and ${\mathbf{(2)}}$ rather than writing down the explicit expression of $\mathbf{k}$, e.g., for $\alpha(\mathcal{L}_T\mathcal{L}_\Omega {\mathring{F}})$ we simply write it as $\alpha^{(\mathbf{2})}$.
For a given $\mathbf{k}$, we also define
\begin{equation}
{J^{(\mathbf{k})}} = \mathcal{L}_Z^\mathbf{k} (r^2 J).
\end{equation}
We remark that $J^{(\mathbf{0})} = r^2 J$ which is {\bf not} the current $J$. The null components of ${J^{(\mathbf{k})}}$ are denoted by $J_L^{(\mathbf{k})}$, $J_{\underline{L}}^{(\mathbf{k})}$ and $\slashed{J}^{(\mathbf{k})}$. More precisely, we define
\begin{equation*}
J_L^{(\mathbf{k})} = -\frac{1}{2} m(\mathcal{L}_Z^\mathbf{k} (r^2 J), {\underline{L}}), \ \ J_{\underline{L}}^{(\mathbf{k})}=-\frac{1}{2}m(\mathcal{L}_Z^\mathbf{k} (r^2 J),L), \ \ \slashed{J}^{(\mathbf{k})}_A = m(\mathcal{L}_Z^\mathbf{k} (r^2 J),e_A) \ \ \text{for} \ A=1,2.
\end{equation*}
In view of \eqref{Maxwell for Fc}, \eqref{Maxwell null Fc} and \eqref{commutator formula 2}, we can commute $\mathcal{L}_Z^{\mathbf{k}}$ to derive
\begin{equation}\label{Maxwell null commuted}
\left\{\begin{aligned}
L(r^2\rho^{(\mathbf{k})})+\slashed{\mathbf{div\,}} (r^2\alpha^{(\mathbf{k})}) = J_L^{(\mathbf{k})}, \ \ {\underline{L}}(r^2\rho^{(\mathbf{k})})&-\slashed{\mathbf{div\,}} (r^2\underline{\alpha}^{(\mathbf{k})}) =-J_{\underline{L}}^{(\mathbf{k})},\\
L(r^2\sigma^{(\mathbf{k})})+\slashed{\mathbf{div\,}} (r^2\,^*\alpha^{(\mathbf{k})}) =0, \ \ {\underline{L}}(r^2\sigma^{(\mathbf{k})})&+\slashed{\mathbf{div\,}} (r^2\,^*\underline{\alpha}^{(\mathbf{k})}) =0,\\
\slashed{\nabla}_{\underline{L}} (r \alpha^{(\mathbf{k})})_A-\slashed{\nabla}_A(r\rho^{(\mathbf{k})})-\,^*\slashed{\nabla}_A(r\sigma^{(\mathbf{k})}) &=r^{-1}\slashed{J}^{(\mathbf{k})}_A,\\
\slashed{\nabla}_L (r \underline{\alpha}^{(\mathbf{k})})_A+\slashed{\nabla}_A(r\rho^{(\mathbf{k})})-\,^*\slashed{\nabla}_A(r\sigma^{(\mathbf{k})}) &=r^{-1}\slashed{J}^{(\mathbf{k})}_A.
\end{aligned}
\right.
\end{equation}
\subsection{Multiplier vector fields and energy quantities}\label{sectioin multiplier vector fields}
One can associate the so-called energy-momentum 2-tensor $T[G, {f}]_{\alpha\beta}$ to a closed 2-form $G$ and any complex scalar field ${f}$:
\begin{equation*}
\begin{aligned}
T[{G}, {f}]_{\alpha\beta} = \underbrace{{G}_{\alpha\mu}{G}_{\beta}{}^{\mu}-\frac{1}{4}m_{\alpha\beta}{G}_{\mu\nu}{G}^{\mu\nu}}_{T[{G}]_{\alpha\beta}}+\underbrace{\Re\big(\overline{D_\alpha{f}}D_\beta{f}\big)-\frac{1}{2}m_{\alpha\beta}\overline{D^\mu{f}}D_\mu {f}}_{T[{f}]_{\alpha\beta}}.
\end{aligned}
\end{equation*}
Given a smooth $\mathbb{R}$-valued function $\chi$ and a vector field $Y$, for any (multiplier) vector field $X$, we define the associated current as:
\begin{equation}\label{current twisted}
^{(X)}\widetilde{J}[{G}, {f}]_\mu =T[{G}, {f}]_{\mu\nu}X^\nu-\frac{1}{2}\nabla_\mu\chi \cdot |{f}|^2+\frac{1}{2}\chi\cdot \nabla_\mu\big(|{f}|^2\big) + Y_\mu.
\end{equation}
It can be computed that the space-time divergence of $^{(X)}\widetilde{J}[{G}, {f}]$ is given by the following formula:
\begin{equation}\label{divergence of J}
\begin{split}
\mathbf{Div\,} \big(\,^{(X)}\widetilde{J}[{G}, {f}]\big)&=\underbrace{T[{G}, {f}]_{\mu\nu}\,^{(X)} \pi^{\mu\nu}+\chi \overline{D^\mu{f}}D_\mu {f}-\frac{1}{2}\Box \chi \cdot |{f}|^2+\mathbf{Div\,} Y}_{\mathbf{D}_1}\\
&\ \ +\underbrace{\Re\big(\overline{\Box_A{f}} (D_X {f}+\chi {f})\big)+\nabla^\mu {G}_{\mu\nu}\cdot{G}^{\delta\nu} X_\delta+X^\mu F_{\mu\nu} {J}[{f}]^\nu}_{\mathbf{D}_2},
\end{split}
\end{equation}
where the current $J_\mu[{f}] = \Im({f}\cdot \overline{D_\mu{f}})$.
In this paper, we will use two types of vector fields as multipliers. In particular the multiplier $X$ will be chosen as $X=\partial_t$ or $X=r^p L$ ($0\leq p \leq 2$). Their deformation tensors are recorded in the following table:
\begin{center}
\begin{tabular}{|c|*{6}{c}|}
\hline
& $\pi_{LL}$ & $\pi_{L{\underline{L}}}$ & $\pi_{{\underline{L}}\Lb}$ & $\pi_{L A}$ & $\pi_{{\underline{L}} A}$ & $\pi_{AB}$\\
\hline
$X=\partial_t$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$\\
\hline
$X=r^p L$ & $0$ & $-p r^{p-1}$ & $2 p r^{p-1}$ & $0$ & $0$ & $r^{p-1}\slashed{g}_{AB}$\\
\hline
\end{tabular}
\end{center}
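As an illustration of how the second row is obtained (with the convention $^{(X)}\pi=\frac{1}{2}\mathcal{L}_X m$, consistent with $^{(S)}\pi=m$ above, and $m(L,{\underline{L}})=-2$), consider the entry $\pi_{L{\underline{L}}}$ for $X=r^pL$. Since $X_\mu L^\mu=r^p\, m(L,L)=0$ and $\nabla_L L=\nabla_{\underline{L}} L=0$ in Minkowski space, we compute
\begin{equation*}
2\,^{(X)}\pi_{L{\underline{L}}}=\nabla_L X_\mu\, {\underline{L}}^\mu+\nabla_{\underline{L}} X_\mu\, L^\mu = L(r^p)\, m(L,{\underline{L}})+{\underline{L}}\big(X_\mu L^\mu\big)=-2pr^{p-1},
\end{equation*}
so that $^{(X)}\pi_{L{\underline{L}}}=-pr^{p-1}$; the remaining entries follow from similar computations.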
To define energy quantities, we first clarify the measures over different regions and hypersurfaces. In the sequel, the variable $\vartheta$ denotes a coordinate on the unit sphere $\mathbf{S}^2$. We have
\begin{equation*}
\begin{split}
&\int_{\mathcal{H}_{r_1}^{r_2}} \cdot = \int_{\frac{r_1}{2}}^{\frac{r_2}{2}}\int_{\mathbf{S}^2} \cdot \, \, \,r^2 dv d\vartheta, \quad\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} \cdot = \int_{-\frac{r_2}{2}}^{-\frac{r_1}{2}}\int_{ \mathbf{S}^2} \cdot \, \, \,r^2 du d\vartheta, \\
&\int_{\mathcal{B}_{r_1}^{r_2}} \cdot = \int_{r_1}^{r_2}\int_{\mathbf{S}^2} \cdot \, \, \,r^2 dr d\vartheta, \quad \int_{\mathcal{D}_{r_1}^{r_2}} \cdot =\frac{1}{2}\int \int \int_{ \mathbf{S}^2} \cdot \, \, \,r^2 du dv d\vartheta.
\end{split}
\end{equation*}
Given ${G}$ and ${f}$, the energy through $\mathcal{B}_{r_1}^{r_2}$ and the energy flux through $\mathcal{H}_{r_1}^{r_2}$ or $\underline{\mathcal{H}}_{r_2}^{r_1}$ are defined as
\begin{equation*}
\begin{split}
\mathcal{E}[{G},{f}](\mathcal{B}_{r_1}^{r_2})&:=\int_{\mathcal{B}_{r_1}^{r_2}} |\alpha({G})|^2+ |\underline{\alpha}({G})|^2+ |\rho({G})|^2+ |\sigma({G})|^2 + |D{f}|^2,\\
\mathcal{F}[{G},{f}](\mathcal{H}_{r_1}^{r_2})&:=\int_{\mathcal{H}_{r_1}^{r_2}} |\alpha({G})|^2+ |\rho({G})|^2+ |\sigma({G})|^2 + |D_L{f}|^2 + |\slashed{D}{f}|^2,\\
{\underline{\mathcal{F}}}[{G},{f}](\underline{\mathcal{H}}_{r_2}^{r_1})&:=\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |\underline{\alpha}({G})|^2+ |\rho({G})|^2+ |\sigma({G})|^2 + |D_{\underline{L}}{f}|^2 + |\slashed{D}{f}|^2.
\end{split}
\end{equation*}
One can take $X=\partial_t$, $\chi=0$, $Y=0$ and then integrate \eqref{divergence of J} over $\mathcal{D}_{r_1}^{r_2}$. This leads to the classical energy identity:
\begin{lemma}[Classical energy identity]\label{classical energy identity}
For every closed 2-form $G$, every complex scalar field ${f}$ and all $0<r_1 < r_2$, we have
\begin{equation}\label{classical energy inequality}
\mathcal{F}[G,{f}](\mathcal{H}_{r_1}^{r_2}) + {\underline{\mathcal{F}}}[G,{f}](\underline{\mathcal{H}}_{r_2}^{r_1}) = \mathcal{E}[G,{f}](\mathcal{B}_{r_1}^{r_2})-\int_{D_{r_1}^{r_2}}\Re\big(\overline{\Box_A{f}} \cdot D_{\partial_t} {f} \big)+\nabla^\mu G_{\mu\nu}\cdot G_0{}^{\nu} + F_{0\mu} {J}[{f}]^\mu.
\end{equation}
\end{lemma}
If we choose $X=r^p L$, $\chi = r^{p-1}$ and $Y=\frac{p}{2}r^{p-2} |{f}|^2 L$, we obtain the following $r$-weighted energy identity.
\begin{lemma}[$r$-weighted energy identity]\label{r weighted energy estimates}
For all closed 2-form ${G}$ and complex scalar field ${f}$, we have
\begin{equation}\label{r wieghted}
\begin{split}
&\ \ \ \ \int_{\mathcal{B}_{r_1}^{r_2}}r^{p-2}\big(|D_L (rf)|^2+|\slashed{D} (rf)|^2\big)+r^{p}\big(|\alpha({G})|^2+|\rho({G})|^2+|\sigma({G})|^2\big)\\
&=\int_{\mathcal{H}_{r_1}^{r_2}}r^{p-2}\big(|D_L (rf)|^2+r^2|\alpha({G})|^2\big)+\int_{\underline{\mathcal{H}}_{r_2}^{r_1}}r^{p-2}\big(|\slashed{D} (rf)|^2+r^2|\rho({G})|^2+r^2|\sigma({G})|^2\big)\\
&\ \ \ +\frac{1}{2}\int_{\mathcal{D}_{r_1}^{r_2}}r^{p-3}\Big(p\big(|D_L (rf)|^2+r^2|\alpha({G})|^2\big)+(2-p)(|\slashed{D}(rf)|^2+r^2|\rho({G})|^2+r^2|\sigma({G})|^2)\Big)\\
&\ \ \ + \underbrace{\int_{\mathcal{D}_{r_1}^{r_2}} r^{p-1}\Re\big(\overline{\Box_A{f}} \cdot D_L (rf)\big)+r^p \nabla^\mu {G}_{\mu\nu}\cdot{G}_{L}{}^{\nu}+r^p F_{L\mu} {J}[{f}]^\mu}_{\text{$r$-weighted error term} \ \mathbf{Err_p}}
\end{split}
\end{equation}
for all $ 0<r_1 < r_2$ and $p \in [0,2]$.
\end{lemma}
One can find the detailed proof in \cite{yangILEMKG}. For the reader's convenience, we provide the proof here.
\begin{proof}
The identity \eqref{r wieghted} is equivalent to the following one:
\begin{align}
\notag
&\ \ \ \ \underbrace{\int_{r_1}^{r_2}\int_{\mathbf{S}^2}r^{p}\big(|D_L (rf)|^2+|\slashed{D} (rf)|^2\big)+r^{p+2}\big(|\alpha({G})|^2+|\rho({G})|^2+|\sigma({G})|^2\big) dr d\vartheta}_{L_1}\\
\notag
&=\underbrace{\int_{\frac{r_1}{2}}^{\frac{r_2}{2}}\int_{\mathbf{S}^2} r^{p} \big(|D_L (rf)|^2+r^2|\alpha({G})|^2\big)dvd\vartheta}_{R_1}+\underbrace{\int_{-\frac{r_2}{2}}^{-\frac{r_1}{2}}\int_{\mathbf{S}^2}r^{p}\big(|\slashed{D} (rf)|^2+r^2|\rho({G})|^2+r^2|\sigma({G})|^2\big) du d\vartheta}_{R_2}\\
\notag
&\ \ \ +\int_{u}\int_{\vartheta}\int_{v}r^{p-1}\Big(p(|D_L (rf)|^2+r^2|\alpha({G})|^2\big)+(2-p)(|\slashed{D}(rf)|^2+r^2|\rho({G})|^2+r^2|\sigma({G})|^2)\Big) dv d\vartheta du\\
&\ \ \ + \int_{\mathcal{D}_{r_1}^{r_2}} r^{p-1}\Re\big(\overline{\Box_A{f}} \cdot D_L (rf)\big)+r^p \nabla^\mu {G}_{\mu\nu}\cdot{G}_{L}{}^{\nu}+r^p F_{L\mu} {J}[{f}]^\mu. \label{r wieghted'}
\end{align}
We take $X=r^p L$, $\chi = r^{p-1}$ and $Y=\frac{p}{2}r^{p-2} |{f}|^2 L$ in \eqref{divergence of J}. We can compute that
\begin{align*}
T[{G}, {f}]_{\mu\nu}\,^{(X)} \pi^{\mu\nu} &= -\frac{p-2}{2}r^{p-1}\big(\rho({G})^2+\sigma({G})^2\big)-\frac{p}{2}r^{p-1}|\slashed{D}{f}|^2 \\ & \quad\ +\frac{p}{2}r^{p-1}\big(|D_L{f}|^2+|\alpha({G})|^2\big)+r^{p-1}D_L{f} D_{\underline{L}}{f}, \\
\chi \overline{D^\mu{f}}D_\mu {f} &=-r^{p-1}D_L{f} D_{\underline{L}}{f} +r^{p-1}|\slashed{D}{f}|^2,\ \ \
-\frac{1}{2}\Box \chi \cdot |{f}|^2=-\frac{p(p-1)}{2}r^{p-3}|{f}|^2,\\
\mathbf{Div\,} Y&=\frac{p^2}{2}r^{p-3}|{f}|^2+pr^{p-2}\Re(\overline{D_L {f}}\cdot f).
\end{align*}
Since $D_L(rf)=f+rD_L f$ and hence $r^2|D_L f|^2=|D_L (rf)|^2-L(r|{f}|^2)$, we obtain
\begin{equation}\label{C3}
\begin{split}
\mathbf{D}_1&=\frac{2-p}{2}r^{p-3}\big(r^2\rho({G})^2+r^2\sigma({G})^2+|\slashed{D}(rf)|^2\big)+\frac{p}{2}r^{p-3}\big(|\alpha({G})|^2+|D_L(rf)|^2\big),\\
\mathbf{D}_2&= r^{p-1}\Re\big(\overline{\Box_A{f}} \cdot D_L (rf)\big)+r^p \nabla^\mu {G}_{\mu\nu}\cdot{G}_{L}{}^{\nu}+r^p F_{L\mu} {J}[{f}]^\mu,
\end{split}
\end{equation}
where the $\mathbf{D}_i$'s are defined in \eqref{divergence of J}. Now we turn to the boundary integrals. On $\mathcal{B}_{r_1}^{r_2}$, the normal $n^\mu$ is $\partial_t$, and we have
\begin{equation*}
\begin{split}
^{(X)}\widetilde{J}[{G}, {f}]^\mu n_\mu = &\frac{1}{2}r^{p-2}\big(r^2\alpha({G})^2+r^2\rho({G})^2+r^2\sigma({G})^2+|D_L(rf)|^2+|\slashed{D}(rf)|^2\big)\\
&-\frac{1}{2}\big((p+1)r^{p-2}|{f}|^2+r^{p-1}\partial_r(|{f}|^2)\big).
\end{split}
\end{equation*}
Therefore we derive that
\begin{equation}\label{C5}
\begin{split}
\int_{\mathcal{B}_{r_1}^{r_2}}\,^{(X)}\widetilde{J}[{G}, {f}]^\mu n_\mu &=\frac{1}{2} \underbrace{\int_{\mathcal{B}_{r_1}^{r_2}} r^2\alpha({G})^2+r^2\rho({G})^2+r^2\sigma({G})^2+|D_L(rf)|^2+|\slashed{D}(rf)|^2}_{L_1 \ in \ \eqref{r wieghted'}} \\
&\quad \quad -\frac{1}{2}\int_{r_1}^{r_2}\int_{\mathbf{S}^2}\underbrace{(p+1)r^{p}|{f}|^2+r^{p+1}\partial_r(|{f}|^2)}_{=\partial_r(r^{p+1}|{f}|^2)} d\vartheta dr\\
&=\frac{1}{2}L_1+\frac{1}{2}\int_{\mathcal{S}_{r_1}^{r_1}}r^{p-1}|{f}|^2-\frac{1}{2}\int_{\mathcal{S}_{r_2}^{r_2}}r^{p-1}|{f}|^2.
\end{split}
\end{equation}
On $\mathcal{H}_{r_1}^{r_2}$, the normal $n^\mu$ is $L$. Hence,
\begin{equation*}
^{(X)}\widetilde{J}[{G}, {f}]^\mu n_\mu = r^{p-2}\big(r^2\alpha({G})^2+|D_L(rf)|^2+|\slashed{D}(rf)|^2\big)-\frac{1}{2}\big((p+1)r^{p-2}|{f}|^2+r^{p-1}L(|{f}|^2)\big).
\end{equation*}
Therefore, we have
\begin{equation}\label{C6}
\begin{split}
\int_{\mathcal{H}_{r_1}^{r_2}}\,^{(X)}\widetilde{J}[{G}, {f}]^\mu n_\mu &=\underbrace{\int_{\frac{r_1}{2}}^{\frac{r_2}{2}}\int_{\mathbf{S}^2} r^{p} \big(|D_L (rf)|^2+r^2|\alpha({G})|^2\big)dvd\vartheta}_{R_1 \ in \ \eqref{r wieghted'}} \\ &\quad-\frac{1}{2}\int_{\frac{r_1}{2}}^{\frac{r_2}{2}}\int_{\mathbf{S}^2}\underbrace{(p+1)r^{p}|{f}|^2+r^{p+1}L(|{f}|^2)}_{=L (r^{p+1}|{f}|^2)} d\vartheta dv\\
&=R_1+\frac{1}{2}\int_{\mathcal{S}_{r_1}^{r_1}}r^{p-1}|{f}|^2-\frac{1}{2}\int_{\mathcal{S}_{r_1}^{r_2}}r^{p-1}|{f}|^2.
\end{split}
\end{equation}
On $\underline{\mathcal{H}}_{r_2}^{r_1}$, the normal $n^\mu$ is ${\underline{L}}$. Hence,
\begin{equation*}
^{(X)}\widetilde{J}[{G}, {f}]^\mu n_\mu = r^{p-2}\big(r^2\rho({G})^2+r^2\sigma({G})^2+|\slashed{D}(rf)|^2\big)+\frac{1}{2}\big(-(p+1)r^{p-2}|{f}|^2+r^{p-1}{\underline{L}}(|{f}|^2)\big).
\end{equation*}
Therefore, we have
\begin{equation}\label{C7}
\begin{split}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}}\,^{(X)}\widetilde{J}[{G}, {f}]^\mu n_\mu &=\underbrace{\int_{-\frac{r_2}{2}}^{-\frac{r_1}{2}}\int_{\mathbf{S}^2}r^{p}\big(|\slashed{D} (rf)|^2+r^2|\rho({G})|^2+r^2|\sigma({G})|^2\big) du d\vartheta}_{R_2 \ in \ \eqref{r wieghted'}} \\
&\quad +\frac{1}{2}\int_{-\frac{r_2}{2}}^{-\frac{r_1}{2}}\int_{\mathbf{S}^2}\underbrace{-(p+1)r^{p}|{f}|^2+r^{p+1}{\underline{L}}(|{f}|^2)}_{={\underline{L}} (r^{p+1}|{f}|^2)} d\vartheta du\\
&=R_2+\frac{1}{2}\int_{\mathcal{S}_{r_1}^{r_2}}r^{p-1}|{f}|^2-\frac{1}{2}\int_{\mathcal{S}_{r_2}^{r_2}}r^{p-1}|{f}|^2.
\end{split}
\end{equation}
By combining \eqref{C3}--\eqref{C7}, we can then use the Stokes formula to complete the proof.
\end{proof}
To end this section, we introduce energy norms. For all $r_1>0$, $p \in[0,2]$, $|\mathbf{k}| \leq 2$ and a given small $\delta>0$, we define the standard energy norms
\begin{equation*}
\begin{split}
\mathcal{E}^{(\mathbf{k})}(\phi;r_1)&=\mathcal{F}[0, \phi^{(\mathbf{k})}](\mathcal{H}_{r_1})+\sup_{r_2 \geq r_1}{\underline{\mathcal{F}}}[0,\phi^{(\mathbf{k})}](\underline{\mathcal{H}}_{r_2}^{r_1}),\\
\mathcal{E}^{(\mathbf{k})}({\mathring{F}};r_1)&=\mathcal{F}[\mathcal{L}_Z^\mathbf{k}({\mathring{F}}), 0](\mathcal{H}_{r_1})+\sup_{r_2 \geq r_1}{\underline{\mathcal{F}}}[\mathcal{L}_Z^\mathbf{k}({\mathring{F}}),0](\underline{\mathcal{H}}_{r_2}^{r_1}),
\end{split}
\end{equation*}
and the $r^p$-weighted energy norms
\begin{equation*}
\begin{split}
\mathcal{E}^{(\mathbf{k})}(\phi;p;r_1)&=\int_{\mathcal{H}_{r_1}}r^{p-2}|D_L \psi^{(\mathbf{k})}|^2+\sup_{r_2 \geq r_1}\int_{\underline{\mathcal{H}}_{r_2}^{r_1}}r^{p-2}|\slashed{D} \psi^{(\mathbf{k})}|^2 \\&\qquad +\int_{\mathcal{D}_{r_1}} r^{p-3}\Big(p|D_L\psi^{(\mathbf{k})}|^2+(2-p)|\slashed{D}\psi^{(\mathbf{k})}|^2\Big),\\
\mathcal{E}^{(\mathbf{k})}({\mathring{F}};p;r_1)&=\int_{\mathcal{H}_{r_1}}r^p|\alpha^{(\mathbf{k})}|^2+\sup_{r_2 \geq r_1}\int_{\underline{\mathcal{H}}_{r_2}^{r_1}}r^p\big(|\rho^{(\mathbf{k})}|^2+|\sigma^{(\mathbf{k})}|^2\big) \\ &\qquad+\int_{\mathcal{D}_{r_1}} r^{p-1}\Big(p|\alpha^{(\mathbf{k})}|^2+(2-p)\big(|\rho^{(\mathbf{k})}|^2+|\sigma^{(\mathbf{k})}|^2\big)\Big).
\end{split}
\end{equation*}
\section{The analysis in the exterior region 0: set-up and zeroth order energy estimates}
We emphasize again that, until the end of the paper, $(\phi,F)$ is a given solution of \eqref{MKG} associated to a given smooth finite energy initial datum. According to the result of Klainerman-Machedon \cite{MKGkl}, the solution exists globally.
\subsection{The exterior region}
We take a positive number $R_*$ and require that $R_*\geq 1$. The number $R_*$ should be understood as a large number whose size will be determined later on (solely by the initial datum). It determines the so-called exterior region $\mathcal{D}_{R_*}$, the grey region in the following picture.
\begin{center}\includegraphics[width=3.8in]{exterior.pdf}\end{center}
The boundary of the exterior region consists of two pieces: the outgoing null hypersurface $\mathcal{H}_{R_*}$ and its bottom $\mathcal{B}_{R_*}$. The exterior region is also the domain of dependence of $\mathcal{B}_{R_*}$.
According to \eqref{initial data 1}, the following number is the initial energy for $\phi$ and ${\mathring{F}}$ on $\mathcal{B}_{R_*}$:
\begin{equation}\label{def for mathring E}
\mathring{\mathcal{E}}_{\geq R_*} = \sum_{k=0}^2 \int_{r\geq R_*}\int_{\mathbf{S}^2}\Big[r^{2k+6+8\varepsilon_0}\big(|D^k D\phi_0|^2+|D^k\phi_1|^2 + |\nabla^k {\mathring{F}}|^2\big)+r^{4+8\varepsilon_0}|\phi_0|^2\Big]\, r^2dr d\vartheta.
\end{equation}
Since we will eventually take a large $R_*$, we can assume that for a given small positive number $\mathring{\varepsilon}<1$ one has
\begin{equation*}
\mathring{\mathcal{E}}_{\geq R_*} \leq \mathring{\varepsilon}.
\end{equation*}
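This assumption is harmless: since the weighted integral in \eqref{def for mathring E} is finite by \eqref{initial data 1}, the dominated convergence theorem gives
\begin{equation*}
\lim_{R \rightarrow \infty} \mathring{\mathcal{E}}_{\geq R} = 0,
\end{equation*}
so for any prescribed $\mathring{\varepsilon}$ we may indeed choose $R_*$ so large that $\mathring{\mathcal{E}}_{\geq R_*} \leq \mathring{\varepsilon}$.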
Before we proceed to the energy estimates, we prove a key technical lemma. The lemma is indispensable for estimating the terms of the current $J$ with critical decay (coming from the charge term).
\begin{lemma}\label{lemma key}(Key technical lemma) Let $C_0, C_1,C_2, \gamma_0$ and $\varepsilon_0$ be positive numbers. The constant $\varepsilon_0$ is small, say $\varepsilon_0=0.001$ and $\gamma_0 > 100 \varepsilon_0$. Let ${f}$ be an arbitrary scalar field satisfying the following two conditions:
1). For all $r_1 \geq R_*$, we have
\begin{equation}\label{condition 1 in lemma}
\int_{\mathcal{B}_{r_1}} r^{-2}|{f}|^2 \leq C_0 \mathring{\varepsilon} r_1^{-\gamma_0}.
\end{equation}
2). For all $r_2>r_1\geq R_*$, we have
\begin{equation}\label{condition 2 in lemma}
\int_{\mathcal{H}_{r_1}^{r_2}} |D_L {f}|^2 \leq C_1 \mathring{\varepsilon} r_1^{-\gamma_0}+C_2\int_{\mathcal{D}_{r_1}^{r_2}}\frac{1}{r^2}|{f}||D_L {f}|.
\end{equation}
Then there exists a constant $C$ depending only on $C_0,C_1,C_2$ and $\varepsilon_0$ such that
\begin{equation}
\int_{\mathcal{H}_{r_1}^{r_2}} |D_L {f}|^2 \leq C\mathring{\varepsilon} \cdot r_1^{-\gamma_0+\varepsilon_0}.
\end{equation}
\end{lemma}
\begin{remark}
As we mentioned in the introduction, the error term caused by the charge may lead to a logarithmic growth if one uses the standard Gronwall inequality. The importance of this lemma is that it avoids this log-loss at the price of losing a bit of decay. This technique was introduced by the first author in \cite{yangILEMKG} to derive the energy flux decay. For completeness we summarize it as a lemma, which will also be used to obtain the higher order energy estimates.
\end{remark}
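To indicate schematically where the logarithmic loss would occur, note that applying the standard Gronwall inequality directly to an estimate of the form
\begin{equation*}
g(r_1) \leq A\, r_1^{-\gamma_0}+C_2\int_{r_1}^{r_2}\frac{g(s)}{s}\,ds, \qquad g(s)=\int_{\mathcal{H}_{s}^{r_2}}|D_L {f}|^2,
\end{equation*}
only produces a factor $\big(\tfrac{r_2}{r_1}\big)^{C_2}=e^{C_2\log(r_2/r_1)}$, since the critical kernel $\frac{1}{s}$ is not integrable on $[r_1,\infty)$. This factor is not uniform in $r_2$; the proof below removes it by treating the regimes $r_2\leq r_1^*$ and $r_2 > r_1^*$ separately.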
\begin{proof}
Recall that $u_+=1+|u|$. By virtue of the Cauchy-Schwarz inequality, we have
\begin{align*}
\mathbf{I}&:=\int_{\mathcal{D}_{r_1}^{r_2}}\frac{1}{r^2}|{f}||D_L {f}|\lesssim \underbrace{\int_{u}\int_{v}\int_{\vartheta} u_+^{-1}r^2|D_L {f}|^2 du dv d\vartheta}_{\mathbf{I}_1}+\underbrace{\int_{u}\int_{v}\int_{\vartheta}u_+ r^{-2}|{f}|^2 du dv d\vartheta}_{\mathbf{I}_2}.
\end{align*}
We first deal with $\mathbf{I}_2$. In view of the case $\gamma=4$ in \eqref{inequality hardy on H} of Appendix \ref{Appendix tools}, we have
\begin{align*}
\mathbf{I_2} &\lesssim \int_{\frac{r_1}{2}}^{\frac{r_2}{2}}u_+\Big(\int_{\mathcal{H}_{2u}^{r_2}}r^{-4}|{f}|^2 \Big)du \lesssim \int_{\frac{r_1}{2}}^{\frac{r_2}{2}}u_+\Big( u_+^{-3}\int_{\mathcal{S}_{2u}^{2u}}|{f}|^2+u_+^{-2}\int_{\mathcal{H}_{2u}^{r_2}}\big|D_L{f}\big|^2 \Big)du\\
&=\underbrace{\int_{\frac{r_1}{2}}^{\frac{r_2}{2}}u_+^{-2}\Big(\int_{\mathcal{S}_{2u}^{2u}}|{f}|^2 \Big)du}_{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \displaystyle\approx \int_{\mathcal{B}_{r_1}^{r_2}}|{f}|^2 \lesssim C_0 r_1^{-\gamma_0}\mathring{\varepsilon}}+\int_{\mathcal{D}_{r_1}^{r_2}}u_+^{-1}\big|D_L{f}\big|^2.
\end{align*}
Here the implicit constant is universal. In particular, inserting the bound on $\mathbf{I}$ into \eqref{condition 2 in lemma}, there exists a universal constant $C$ such that
\begin{equation*}
\begin{split}
\int_{\mathcal{H}_{r_1}^{r_2}} |D_L {f}|^2
&\leq C C_1 r_1^{-\gamma_0}\mathring{\varepsilon} + C C_2 \int_{\mathcal{D}_{r_1}^{r_2}}\big(1+|u|\big)^{-1}|D_L {f}|^2 \\
&= C C_1 r_1^{-\gamma_0}\mathring{\varepsilon} + C C_2 \int_{{r_1}}^{r_2}\frac{1}{s}\Big(\int_{\mathcal{H}_{s}^{r_2}}|D_L {f}|^2\Big) ds.
\end{split}
\end{equation*}
We now apply Gronwall's inequality in Lemma \ref{lemma gronwall} (with a slight abuse of notation, applied to the function $s \mapsto \int_{\mathcal{H}_{s}^{r_2}}|D_L {f}|^2$) to conclude that
\begin{equation*}\label{A1}
\int_{\mathcal{H}_{r_1}^{r_2}} |D_L {f}|^2 \leq C(C_1+C_2)\mathring{\varepsilon} \cdot r_1^{-\gamma_0} \big(r_2 r_1^{-1}\big)^{CC_2}.
\end{equation*}
For a given $r_1$, define $r_1^* := r_1^{1+\frac{\varepsilon_0}{2CC_2}}$. Then for all $r_2 \leq r_1^*$, we have
\begin{equation}\label{first case losing epsilon}
\int_{\mathcal{H}_{r_1}^{r_2}} |D_L {f}|^2 \lesssim_{C_1,C_2}\mathring{\varepsilon} \cdot r_1^{-\gamma_0+\frac{\varepsilon_0}{2}}, \ \ \ r_2 \leq r_1^*=r_1^{1+\frac{\varepsilon_0}{2CC_2}}.
\end{equation}
Here the implicit constant depends only on $C_1$ and $C_2$.
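Indeed, for $r_2 \leq r_1^*$ the Gronwall factor is harmless:
\begin{equation*}
\big(r_2 r_1^{-1}\big)^{CC_2} \leq \Big(r_1^{\frac{\varepsilon_0}{2CC_2}}\Big)^{CC_2}=r_1^{\frac{\varepsilon_0}{2}},
\end{equation*}
so that $r_1^{-\gamma_0}\big(r_2 r_1^{-1}\big)^{CC_2} \leq r_1^{-\gamma_0+\frac{\varepsilon_0}{2}}$.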
We now study the case $r_2 > r_1^*$ in a different way. In fact, we take $r_2=\infty$ and we have
\begin{equation}\label{II2}
\begin{split}
\mathbf{I}&=\int_{\mathcal{D}_{r_1}}\frac{1}{r^2}|{f}||D_L {f}| \leq \int_{\mathcal{D}_{r_1}} \Big(u_+^{-1-\frac{\varepsilon_0}{2CC_2}}|D_L {f}|^2+u_+^{1+\frac{\varepsilon_0}{2CC_2}}r^{-4}|{f}|^2\Big)\\
&=\underbrace{\int_{\frac{r_1}{2}}^{\infty} u_+^{-1-\frac{\varepsilon_0}{2CC_2}} \int_{\mathcal{H}_{2u}}|D_L{f}|^2 du }_{\mathbf{II}_1}+\underbrace{\int_{\frac{r_1}{2}}^{\infty} u_+^{1+\frac{\varepsilon_0}{2CC_2}}\Big(\overbrace{\int_{v}\int_{\vartheta}r^{-2}|{f}|^2dv d\vartheta}^{\displaystyle \mathbf{II'}_2(u) = \int_{\mathcal{H}_{2u}} r^{-4}|{f}|^2} \Big) du}_{\mathbf{II}_2},
\end{split}
\end{equation}
where $C$ is the constant in the definition of $r_1^*$. Because $u_+^{-1-\frac{\varepsilon_0}{2CC_2}}$ is integrable in $u$, Gronwall's inequality enables us to bound ${\mathbf{II}_1}$ by the right-hand side of \eqref{condition 2 in lemma}. It suffices to control ${\mathbf{II}_2}$.
\begin{center}
\includegraphics[width=2.5 in]{pic2.pdf}
\end{center}
The cone $\mathcal{H}_{2u}$ is the union of $\mathcal{H}_{2u}^{(2u)^*}$ and $\mathcal{H}_{2u}^{\geq (2u)^*}$, the cone emanating from the sphere $\mathcal{S}_{2u}^{(2u)^*}$ (here $(2u)^*= (2u)^{1+\frac{\varepsilon_0}{2CC_2}}$). In the picture, $\mathcal{H}_{2u}^{\geq (2u)^*}$ is denoted by the dashed line. Thus, we have
\begin{align*}
\mathbf{II'}_2(u) = \underbrace{\int_{\mathcal{H}_{2u}^{(2u)^*}} r^{-4} |{f}|^2}_{\mathbf{A}}+\underbrace{\int_{\mathcal{H}_{2u}^{\geq (2u)^*}} r^{-4} |{f}|^2}_{\mathbf{B}}.
\end{align*}
For the term $\mathbf{A}$, we can apply the $\gamma=4$ case of \eqref{inequality hardy on H} and we obtain
\begin{equation*}
\begin{split}
\int_{\mathcal{H}_{2u}^{(2u)^*}}\frac{1}{r^4}|{f}|^2 &\lesssim \underbrace{u_+^{-3}\int_{\mathcal{S}_{2u}^{2u}}|{f}|^2}_{\mathbf{A}_1}+u_+^{-2}\underbrace{\int_{\mathcal{H}_{2u}^{(2u)^*}}\big|D_L{f}\big|^2}_{\text{$\mathbf{A}_2$, use \eqref{first case losing epsilon}}}\lesssim_{C_1,C_2} u_+^{-3}\int_{\mathcal{S}_{2u}^{2u}}|{f}|^2+\mathring{\varepsilon} \cdot u_+^{-\gamma_0-2+\frac{\varepsilon_0}{2}}.
\end{split}
\end{equation*}
So the contribution of $\mathbf{A}$ in $\mathbf{II}_2$ is bounded by (we can always assume that $CC_2\geq 1$)
\begin{align*}
\int_{\frac{r_1}{2}}^{\infty}u_+^{1+\frac{\varepsilon_0}{2CC_2}}\mathbf{A}\, du &\lesssim_{C_1,C_2}\underbrace{ \int_{\frac{r_1}{2}}^{\infty}u_+^{-2+\frac{\varepsilon_0}{2CC_2}} \int_{\mathcal{S}_{2u}^{2u}}|{f}|^2 du}_{\text{use \eqref{condition 1 in lemma} and Lemma \ref{lemma changing decay rates}}}+\mathring{\varepsilon}\int_{\frac{r_1}{2}}^{\infty}u_+^{-\gamma_0-1+\varepsilon_0+\frac{\varepsilon_0}{2CC_2}} du\\
&\lesssim_{C_0,C_1,C_2}\mathring{\varepsilon} \cdot r_1^{-\gamma_0+\varepsilon_0}.
\end{align*}
For the term $\mathbf{B}$, we can apply Lemma \ref{lemma Hardy on H} with $\gamma=4$ and $r_2=\infty$ to obtain that
\begin{equation}
\mathbf{B} \lesssim \underbrace{\big((2u)^*\big)^{-3}\int_{\mathcal{S}_{2u}^{(2u)^*}}|{f}|^2}_{\mathbf{B}_1}+\underbrace{\big((2u)^*\big)^{-2}\int_{\mathcal{H}_{2u}^{\geq (2u)^*}}\big|D_L{f}\big|^2}_{\mathbf{B}_2}.
\end{equation}
The contribution of $\mathbf{B}_2$ in $\mathbf{II}_2$ is bounded by
\begin{align*}
\int_{\frac{r_1}{2}}^{\infty}u_+^{1+\frac{\varepsilon_0}{2CC_2}} \mathbf{B}_2 du
&\lesssim \int_{\frac{r_1}{2}}^{\infty}u_+^{-1-\frac{\varepsilon_0}{2CC_2}} \int_{\mathcal{H}_{2u}}|D_L {f}|^2 du.
\end{align*}
This is the same expression as $\mathbf{II}_1$, so it is bounded in the same way.
For $\mathbf{B}_1$, applying Lemma \ref{lemma Hardy on H} with $r_1 =2u$ and $r_2 = (2u)^*$, up to a universal constant we have
\begin{align*}
\mathbf{B}_1\leq |u|^{-3}\int_{\mathcal{S}_{2u}^{2u}}|{f}|^2
+\int_{\mathcal{H}_{2u}^{(2u)^*}}\frac{1}{r^2}|D_L {f}|^2.
\end{align*}
Now the first term on the right-hand side is exactly $\mathbf{A}_1$ and the second term is controlled in the same way as $\mathbf{A}_2$; both have already been estimated. We have thus bounded all the terms, which completes the proof.
\end{proof}
\subsection{Zeroth order energy estimates} We prove the zeroth order energy estimate.
\begin{proposition}\label{proposition energy estimate 0th order} For $r_1 \geq R_*$ and $1\leq p \leq 2$, we have
\begin{equation}\label{energy estimate 0th order}
\begin{split}
\mathcal{E}^{(0)}(\phi;r_1)+\mathcal{E}^{(0)}({\mathring{F}};r_1) &\leq 2\mathring{\varepsilon} \cdot r_1^{-6-6\varepsilon_0},\\
\mathcal{E}^{(0)}(\phi;p;r_1)+\mathcal{E}^{(0)}({\mathring{F}};p;r_1) &\leq 2\mathring{\varepsilon} \cdot r_1^{p-6-6\varepsilon_0}.
\end{split}
\end{equation}
\end{proposition}
\begin{proof}
We first prove the second estimate for the endpoint case $p=2$; this is the only case used in the current work, and for $p<2$ the proof is exactly the same (one may also see \cite{yangILEMKG}). We set $G={\mathring{F}}$, ${f}=\phi$ and $\psi=rf=r\phi$ in Lemma \ref{r weighted energy estimates}. Thus, \eqref{r wieghted} yields
\begin{equation}\label{r weighted for Fc}
\begin{split}
& \ \ \int_{\mathcal{H}_{r_1}^{r_2}}|D_L \psi|^2+r^2|\mathring{\alpha}|^2+\int_{\underline{\mathcal{H}}_{r_2}^{r_1}}|\slashed{D} \psi|^2+r^2|\mathring{\rho}|^2+r^2|\mathring{\sigma}|^2 +\int_{\mathcal{D}_{r_1}^{r_2}}r^{-1}\big(|D_L \psi|^2+r^2|\mathring{\alpha}|^2\big)+\mathbf{Err_p}\\
&=\int_{\mathcal{B}_{r_1}^{r_2}}|D_L \psi|^2+|\slashed{D} \psi|^2+r^{2}\big(|\mathring{\alpha}|^2+|\mathring{\rho}|^2+|\mathring{\sigma}|^2\big) \leq \mathring{\varepsilon} \cdot r_1^{-4-8\varepsilon_0}.
\end{split}
\end{equation}
It suffices to bound the term $\mathbf{Err_p}$ of \eqref{r wieghted}. It is straightforward to see that the integrand of $\mathbf{Err_p}$ is $q_0 r^{p-2}J_L$. Hence,
\begin{align*}
\big|\mathbf{Err_p}\big|&\stackrel{p=2}{=}\big| q_0 \int_{\mathcal{D}_{r_1}^{r_2}}J_L\big|=\big|q_0 \int_{\mathcal{D}_{r_1}^{r_2}}\Im(\overline{D_L\phi}\cdot\phi)\big|=\big|q_0 \int_{\mathcal{D}_{r_1}^{r_2}}r^{-2}\Im(\overline{D_L\psi}\cdot\psi)\big|.
\end{align*}
In particular, \eqref{r weighted for Fc} implies
\begin{equation*}
\begin{split}
\int_{\mathcal{H}_{r_1}^{r_2}}|D_L \psi|^2 \lesssim \mathring{\varepsilon} \cdot r_1^{-4-8\varepsilon_0}+ \int_{\mathcal{D}_{r_1}^{r_2}}r^{-2}|\psi||D_L\psi|.
\end{split}
\end{equation*}
We can now use Lemma \ref{lemma key} (with ${f}=\psi$ and $\gamma_0=4+8\varepsilon_0$) and we obtain that
\begin{equation*}
\begin{split}
\int_{\mathcal{H}_{r_1}^{r_2}}|D_L \psi|^2 + \int_{\mathcal{D}_{r_1}^{r_2}}r^{-2}|\psi||D_L\psi|\lesssim \mathring{\varepsilon} \cdot r_1^{-4-7\varepsilon_0}.
\end{split}
\end{equation*}
This establishes the $r$-weighted energy estimate with $p=2$; the case $p<2$ follows in a similar way. Once we have control on the error term caused by the nonzero charge, the first estimate of the proposition is an immediate consequence of the basic energy identity \eqref{classical energy inequality}. We may always assume that $R_*$ is large enough so that, by affording a factor $r_1^{-\varepsilon_0}$, we absorb all the constants from Lemma \ref{lemma key}. This completes the proof.
\end{proof}
\section{The analysis in the exterior region 1: bootstrap ansatz and decay estimates}
\subsection{Bootstrap ansatz}
We impose two sets of bootstrap ansatz on the exterior region $\mathcal{D}_{R_*}$. The first set concerns the energy quantities:
\begin{equation*}
\boxed{
\begin{aligned}
\mathcal{E}^{(\mathbf{k})}({\mathring{F}};r_1)+\mathcal{E}^{(\mathbf{k})}(\phi;r_1)& \leq 4{\mathring{\varepsilon}}r_1^{-6+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0},\\
\mathcal{E}^{(\mathbf{k})}({\mathring{F}};p;r_1)+\mathcal{E}^{(\mathbf{k})}(\phi;p;r_1)&\leq 4{\mathring{\varepsilon}}r_1^{p-6+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0},
\end{aligned}
\ \ \ r_1\geq R_*, \ |\mathbf{k}| = 1, 2 \ \text{and} \ p\in[0,2]. \quad {\mathbf{(B)}}}
\end{equation*}
The second set is on the current terms:
\begin{equation*}
\boxed{
\begin{aligned}
& \ \text{For all} \ r_1\geq R_*, \ |\mathbf{k}| \leq 1, \ \text{we assume}\\
& \int_{\mathcal{H}_{r_1}}\frac{{|J_L^{(\mathbf{k})}|^2}}{r_1^2}+\int_{\mathcal{H}_{r_1}}\frac{|\slashed{J}^{(\mathbf{k})}|^2}{r^2}+\sup_{r_2 \geq r_1}\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} \frac{|\slashed{J}^{(\mathbf{k})}|^2}{r^2} +\sup_{r_2 \geq r_1}r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} \frac{|J_{\underline{L}}^{(\mathbf{k})}|^2}{r^{\frac{7}{2}}}\leq 4{\mathring{\varepsilon}}^2 r_1^{-8+2\xi(\mathbf{k})-4\varepsilon_0}\\
&\text{and for } |\mathbf{k}|=2, \ \text{we assume} \\
& \int_{\mathcal{H}_{r_1}}\frac{|J_L^{(\mathbf{k})}|^2}{r_1^2}+\int_{\mathcal{H}_{r_1}}\frac{|\slashed{J}^{(\mathbf{k})}|^2}{r^2} +\sup_{r_2 \geq r_1}r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} \frac{|J_{\underline{L}}^{(\mathbf{k})}|^2}{r^{\frac{7}{2}}}\leq 4{\mathring{\varepsilon}}^2 r_1^{-8+2\xi(\mathbf{k})-4\varepsilon_0}.
\end{aligned}\quad {\mathbf{(C)}}}
\end{equation*}
We will show that if $\mathring{\varepsilon}$ is sufficiently small (by setting $R_*$ to be sufficiently large), the constant $4$ in the ansatz can be improved to $2$. In the sequel, the bootstrap argument should be understood dynamically (as one does in solving the Cauchy problem): we assume that the solution is defined in the region $0\leq t \leq T_*$, where $T_*$ is a fixed positive number. Therefore, for sufficiently small $T_*$, $\mathbf{(B)}$ holds. The bootstrap argument will show that one can indeed replace the constant $4$ by $2$, independently of $T_*$. Therefore, by letting $T_*\rightarrow \infty$, we obtain estimates on the entire spacetime.
Based on these ansatz, we will first derive pointwise estimates on ${\mathring{F}}$ and $\phi$.
\subsection{Pointwise decay estimates of the Maxwell field}
We use $(\mathbf{B})$ to bound $\mathring{\alpha}$, $\mathring{\rho}$, $\mathring{\sigma}$ and $\underline{\mathring{\alpha}}$.
\begin{proposition}\label{Proposition pointwise decay of Maxwell}
We have the following decay estimates:
\begin{equation*}
\begin{split}
|\mathring{\alpha}| &\lesssim \sqrt{{\mathring{\varepsilon}}} r^{-3} u_+^{-1-\varepsilon_0}, \ \ |\mathring{\rho}|+|\mathring{\sigma}| \lesssim \sqrt{{\mathring{\varepsilon}}} r^{-2} u_+^{-2-\varepsilon_0},\ \ |\underline{\mathring{\alpha}}| \lesssim \sqrt{{\mathring{\varepsilon}}} r^{-1} u_+^{-3-\varepsilon_0}.
\end{split}
\end{equation*}
\end{proposition}
\begin{proof}
{\bf Step 1. $L^\infty$ estimate of $\underline{\mathring{\alpha}}$.}
In view of \eqref{formula compare Lie and nablaslash}, Lemma \ref{lemma commuting Z with null decomposition}, the last equation in \eqref{Maxwell null commuted} and the fact that ${\underline{L}}=2T-L$, we have
\begin{align*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |\mathcal{L}_{\underline{L}} \big(\mathcal{L}_\Omega\underline{\mathring{\alpha}}\big)|^2 &\leq\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |\mathcal{L}_T \big(\mathcal{L}_\Omega\underline{\mathring{\alpha}}\big)|^2+\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |\mathcal{L}_L \big(\mathcal{L}_\Omega\underline{\mathring{\alpha}}\big)|^2\\
&= \int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |\underline{\alpha}^{(\mathbf{2})}|^2 + \int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |-\slashed{\nabla} \rho^{(\mathbf{1})} +\,^*\slashed{\nabla} \sigma^{(\mathbf{1})} + r^{-2}\slashed{J}^{(\mathbf{1})} + \frac{1}{r}\underline{\alpha}^{(\mathbf{1})}|^2.
\end{align*}
We remark that in this case $\xi(\mathbf{2})= - 1$ and $\xi(\mathbf{1})=0$. By $\mathbf{(B)}$, we then have
\begin{align*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |\mathcal{L}_{\underline{L}} \big(\mathcal{L}_\Omega\underline{\mathring{\alpha}}\big)|^2 &\lesssim {\mathring{\varepsilon}} r_1^{-8-2\varepsilon_0}+ {\mathring{\varepsilon}} r_1^{-6-4\varepsilon_0}+ \int_{\underline{\mathcal{H}}_{r_2}^{r_1}} \frac{|\slashed{J}^{(\mathbf{1})}|^2}{r^4}.
\end{align*}
We can use the first term of $\mathbf{(C)}$ to bound the last term in the above inequality.
Recall that for forms $\Xi$, we have $ r^2 |\slashed{\nabla} \Xi|^2 \lesssim |\mathcal{L}_\Omega \Xi|^2+|\Xi|^2$. Therefore, $\mathbf{(B)}$ together with $(\mathbf{C})$ imply that
\begin{align*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |\mathcal{L}_{\underline{L}} \big(\mathcal{L}_\Omega\underline{\mathring{\alpha}}\big)|^2 \lesssim {\mathring{\varepsilon}} r_1^{-6-4\varepsilon_0}.
\end{align*}
By $\mathbf{(B)}$, we also have
\begin{align*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |\mathcal{L}_\Omega \big(\mathcal{L}_\Omega\underline{\mathring{\alpha}}\big)|^2 \lesssim {\mathring{\varepsilon}} r_1^{-6-2\varepsilon_0}.
\end{align*}
We then can apply \eqref{Sobolve on incoming null hypersurfaces} to derive
\begin{equation*}
\begin{split}
\|\mathcal{L}_\Omega\underline{\mathring{\alpha}}\|_{L^4(\mathcal{S}_{r_1}^{r_2})} &\lesssim r_2^{-\frac{1}{2}} \sqrt{{\mathring{\varepsilon}}} r_1^{-3-\varepsilon_0}.
\end{split}
\end{equation*}
We can repeat the above argument with $\underline{\mathring{\alpha}}$ in place of $\mathcal{L}_\Omega\underline{\mathring{\alpha}}$ and we obtain
\begin{equation*}
\begin{split}
\|\underline{\mathring{\alpha}}\|_{L^4(\mathcal{S}_{r_1}^{r_2})} &\lesssim r_2^{-\frac{1}{2}} \sqrt{{\mathring{\varepsilon}}} r_1^{-3-2\varepsilon_0}.
\end{split}
\end{equation*}
Compared to the $L^4$ bound of $\mathcal{L}_\Omega \underline{\mathring{\alpha}}$, this bound gains an extra $r_1^{-\varepsilon_0}$ because we use one fewer derivative in this case. This is clear from the bootstrap ansatz $\mathbf{(B)}$. We then apply the Sobolev inequality \eqref{Sobolev on sphere} on $\mathcal{S}^{r_1}_{r_2}$. In view of the fact that $\frac{r_1+r_2}{2} \approx r_2$ and $|u|\approx r_1$ on $\mathcal{S}^{r_1}_{r_2}$, we obtain
\begin{equation}
|\underline{\mathring{\alpha}}| \lesssim \sqrt{{\mathring{\varepsilon}}} r^{-1} u_+^{-3-\varepsilon_0}.
\end{equation}
{\bf Step 2. $L^\infty$ estimate of $\mathring{\rho}$ and $\mathring{\sigma}$.}
We only derive the bound on $\mathring{\rho}$ since $\mathring{\sigma}$ can be bounded exactly in the same manner. First of all, for $\mathbf{l}=(0,1,0)$ and $\mathbf{k} = (0,2,0)$, we have
\begin{align*}
{\underline{L}}\big(\mathcal{L}_\Omega (r\mathring{\rho})\big)=r^{-1}{\underline{L}}(r^2\rho^{(\mathbf{l})})-\rho^{(\mathbf{l})}.
\end{align*}
Thus by using the null equation for $\mathring{\rho}$ as well as the bootstrap assumptions we can show that
\begin{align*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}}|{\underline{L}} \big(\mathcal{L}_\Omega(r \mathring{\rho})\big)|^2 &\leq\int_{\underline{\mathcal{H}}_{r_2}^{r_1}}r^{-2}|\mathcal{L}_{\underline{L}} \big(r^2 \rho^{(\mathbf{l})}\big)|^2+|\rho^{(\mathbf{l})}|^2 \stackrel{\eqref{Maxwell null commuted}}{=}\int_{\underline{\mathcal{H}}_{r_2}^{r_1}}r^{-2}|\slashed{\mathbf{div\,}} (r^2\underline{\alpha}^{(\mathbf{l})}) -J_{\underline{L}}^{(\mathbf{l})}|^2+|\rho^{(\mathbf{l})}|^2\\
&\leq \int_{\underline{\mathcal{H}}_{r_2}^{r_1}}|\underline{\alpha}^{(\mathbf{k})}|^2+|\rho^{(\mathbf{l})}|^2 + r^{-2}|J_{\underline{L}}^{(\mathbf{l})}|^2\stackrel{(\mathbf{B}),(\mathbf{C})}{\lesssim} {\mathring{\varepsilon}} r_1^{-6-2\varepsilon_0}.
\end{align*}
By the $p=2$ case of $(\mathbf{B})$, we also have
\begin{align*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}}r^2 |\mathcal{L}_\Omega\mathring{\rho}|^2+ \int_{\underline{\mathcal{H}}_{r_2}^{r_1}}r^2 |\mathcal{L}_\Omega \big(\mathcal{L}_\Omega\mathring{\rho}\big)|^2 &\lesssim {\mathring{\varepsilon}} r_1^{-4-2\varepsilon_0}.
\end{align*}
Therefore, we obtain that
\begin{equation*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |\mathcal{L}_\Omega \big(r \mathring{\rho}\big)|^2+\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} \Big|\mathcal{L}_{\underline{L}} \Big(\mathcal{L}_\Omega\big(r\mathring{\rho}\big)\Big)\Big|^2 +\int_{\underline{\mathcal{H}}_{r_2}^{r_1}}\Big|\mathcal{L}_\Omega \Big(\mathcal{L}_\Omega\big(r\mathring{\rho}\big)\Big)\Big|^2 \lesssim {\mathring{\varepsilon}} r_1^{-4-2\varepsilon_0}.
\end{equation*}
According to \eqref{Sobolve on incoming null hypersurfaces}, the above energy estimate implies that
\begin{equation*}
\begin{split}
\|\mathcal{L}_\Omega \big(r\mathring{\rho}\big)\|_{L^4(\mathcal{S}_{r_1}^{r_2})} &\lesssim r_2^{-\frac{1}{2}} \sqrt{{\mathring{\varepsilon}}} r_1^{-2-\varepsilon_0}.
\end{split}
\end{equation*}
Similarly, we have
\begin{equation*}
\| r\mathring{\rho}\|_{L^4(\mathcal{S}_{r_1}^{r_2})} \lesssim r_2^{-\frac{1}{2}} \sqrt{{\mathring{\varepsilon}}} r_1^{-2-2\varepsilon_0}.
\end{equation*}
We notice that this is a similar bound with an extra $r_1^{-\varepsilon_0}$, due to using one fewer derivative compared to the previous case.
We then apply \eqref{Sobolev on sphere} on $\mathcal{S}^{r_1}_{r_2}$ and conclude that
\begin{equation}
|\mathring{\rho}| \lesssim \sqrt{{\mathring{\varepsilon}}} r^{-2} u_+^{-2-\varepsilon_0}.
\end{equation}
\begin{remark}
By using the flux on $\mathcal{H}_{r_1}^{r_2}$, the same argument yields:
\begin{equation*}
|\mathring{\alpha}| \lesssim \sqrt{{\mathring{\varepsilon}}} r^{-2} u_+^{-2-\varepsilon_0}.
\end{equation*}
This is not optimal and we will obtain a better decay in the next step.
\end{remark}
{\bf Step 3. $L^\infty$ estimate of $\mathring{\alpha}$.} The sharp decay of $\mathring{\alpha}$ relies on the commutator $K$ and the $r^p$-weighted energy estimate. Note that for an arbitrary two form $G$, we have
\begin{equation*}
\alpha(\mathcal{L}_{K} G)_A = v^{-1}\nabla_L (v^3 \alpha(G))_A+u^2\nabla_{\underline{L}} \alpha(G)_A+u\alpha(G)_A.
\end{equation*}
Therefore, we have
\begin{equation*}
v\alpha(\mathcal{L}_{K} G)_A = \nabla_L \big(v^3 \alpha(G)\big)_A+u^2\nabla_{\underline{L}} \big(r\alpha(G)\big)_A+(u^2+uv)\alpha(G)_A.
\end{equation*}
If we take $G=\mathcal{L}_\Omega {\mathring{F}}$, in view of the third equation in \eqref{Maxwell null commuted}, we also have
\begin{equation}\label{eq:1}
\nabla_{L}(v^3\mathcal{L}_{\Omega}\mathring{\alpha})=v\alpha^{(0,1,1)}-(u^2+uv)\alpha^{(0,1,0)}-u^2\big[\slashed{\nabla}(r\rho^{(0,1,0)})+\,^*\slashed{\nabla}(r\sigma^{(0,1,0)})-r^{-1}\slashed{J}^{(0,1,0)}\big].
\end{equation}
By virtue of the bootstrap assumptions $(\textbf{B})$, $(\textbf{C})$ and $|u|\lesssim r$, especially the $r^p$-weighted energy norms, we have
\begin{align*}
\int_{\mathcal{H}_{r_1}} |\nabla_{L}(v^3\mathcal{L}_{\Omega}\mathring{\alpha})|^2 & \lesssim \int_{\mathcal{H}_{r_1}} v^2|\alpha^{(0,1,1)}|^2+|u|^2v^2|\alpha^{(0,1,0)}|^2+|u|^4\big(|\rho^{(0,2,0)}|^2+|\sigma^{(0,2,0)}|^2\big) +\frac{|u|^4}{r^2}|\slashed{J}^{(0,1,0)}|^2\\
&\lesssim {\mathring{\varepsilon}} r_1^{-2-2\varepsilon_0}.
\end{align*}
In view of $v=u+r$, we have
\begin{equation}\label{highest weight estimate for alpha}
\|\nabla_{L}(rv^2\mathcal{L}_{\Omega}\mathring{\alpha})\|_{L^2(\mathcal{H}_{r_1})} \lesssim \sqrt{\mathring{\varepsilon}} r_1^{-1-\varepsilon_0}.
\end{equation}
This estimate can be used to get a sharp decay estimate for $\|\mathcal{L}_\Omega\mathring{\alpha}\|_{L^2(\mathcal{S}_{r_1}^{r_2})}$. In fact, we have
\begin{align*}
\|v^2\mathcal{L}_\Omega\mathring{\alpha}\|_{L^2(\mathcal{S}_{r_1}^{r_2})}^2- \|v^2\mathcal{L}_\Omega\mathring{\alpha}\|_{L^2(\mathcal{S}_{r_1}^{r_1})}^2&= \int_{\frac{r_1}{2}}^{\frac{r_2}{2}}\int_{\mathbf{S}^2}L\big(\big| r v^2 \mathcal{L}_\Omega\mathring{\alpha}\big|^2\big)d\vartheta dv \\
&\lesssim \int_{\frac{r_1}{2}}^{\frac{r_2}{2}}\int_{\mathbf{S}^2} |\nabla_L\big( r v^2 \mathcal{L}_\Omega\mathring{\alpha} \big)||r \mathcal{L}_\Omega\mathring{\alpha} |r^2d\vartheta dv\\
&\leq \|\nabla_{L}(rv^2\mathcal{L}_{\Omega}\mathring{\alpha})\|_{L^2(\mathcal{H}_{r_1})}\|r\mathcal{L}_{\Omega}\mathring{\alpha} \|_{L^2(\mathcal{H}_{r_1})}.
\end{align*}
Thus,
\begin{align*}
\|v^2\mathcal{L}_\Omega\mathring{\alpha}\|_{L^2(\mathcal{S}_{r_1}^{r_2})}^2&\lesssim\|v^2\mathcal{L}_\Omega\mathring{\alpha}\|_{L^2(\mathcal{S}_{r_1}^{r_1})}^2+\|\nabla_{L}(rv^2\mathcal{L}_{\Omega}\mathring{\alpha})\|_{L^2(\mathcal{H}_{r_1})}\|r\mathcal{L}_{\Omega}\mathring{\alpha}\|_{L^2(\mathcal{H}_{r_1})}\\
&\lesssim {\mathring{\varepsilon}} r_1^{-3-3\varepsilon_0}.
\end{align*}
As a result, we obtain
\begin{equation}\label{L2 bound for LOmegaAlpha on sphere}
\|\mathcal{L}_\Omega\mathring{\alpha}\|_{L^2(\mathcal{S}_{r_1}^{r_2})} \lesssim \sqrt{\mathring{\varepsilon}} r_2^{-2}r_1^{-\frac{3}{2}-\frac{3}{2}\varepsilon_0}.
\end{equation}
One can also bound $\|\mathcal{L}_\Omega\mathring{\alpha}\|_{L^4(\mathcal{S}_{r_1}^{r_2})}$. We take $\Xi=r\mathcal{L}_\Omega\mathring{\alpha}$ in \eqref{Sobolve on incoming null hypersurfaces} and we obtain
\begin{align*}
r_2^3 \|\mathcal{L}_\Omega\mathring{\alpha}\|_{L^4(\mathcal{S}_{r_1}^{r_2})}^2 &\lesssim \int_{\mathcal{H}_{r_2}^{r_1}} |r\mathcal{L}_\Omega\mathring{\alpha}|^2+\int_{\mathcal{H}_{r_2}^{r_1}} \frac{1}{r^2}|\mathcal{L}_L (r^2\mathcal{L}_\Omega\mathring{\alpha})|^2+\int_{\mathcal{H}_{r_2}^{r_1}} {r^2}|\mathcal{L}_\Omega (\mathcal{L}_\Omega\mathring{\alpha})|^2\\
& \lesssim \int_{\mathcal{H}_{r_2}^{r_1}} r^2|\alpha^{(0,1,0)}|^2+\underbrace{\int_{\mathcal{H}_{r_2}^{r_1}} \frac{1}{r^2}|\mathcal{L}_L (r^2\mathcal{L}_\Omega\mathring{\alpha})|^2}_{\text{bounded in \eqref{highest weight estimate for alpha}}}+\int_{\mathcal{H}_{r_2}^{r_1}} {r^2}|\alpha^{(0,2,0)}|^2\\
& \lesssim \mathring{\varepsilon} r_1^{-4-2\varepsilon_0}.
\end{align*}
In other words, we have
\begin{equation}\label{L4 bound for LOmegaAlpha on sphere}
\|\mathcal{L}_\Omega\mathring{\alpha}\|_{L^4(\mathcal{S}_{r_1}^{r_2})} \lesssim \sqrt{\mathring{\varepsilon}} r_2^{-\frac{3}{2}}r_1^{-2-\varepsilon_0}.
\end{equation}
For $q\in [2,4]$, by interpolating \eqref{L2 bound for LOmegaAlpha on sphere} and \eqref{L4 bound for LOmegaAlpha on sphere}, we have
\begin{equation}\label{Lq bound for LOmegaAlpha on sphere}
\|\mathcal{L}_\Omega\mathring{\alpha}\|_{L^q(\mathcal{S}_{r_1}^{r_2})} \lesssim \sqrt{\mathring{\varepsilon}} r_2^{-\big(1+\frac{2}{q}\big)}r_1^{-\big(\frac{5}{2}-\frac{2}{q}+(\frac{1}{2}+\frac{2}{q})\varepsilon_0\big)}, \ 2\leq q \leq 4.
\end{equation}
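Indeed, writing $\frac{1}{q}=\frac{\theta}{2}+\frac{1-\theta}{4}$ with $\theta=\frac{4}{q}-1\in[0,1]$ and using $\|\cdot\|_{L^q}\leq \|\cdot\|_{L^2}^{\theta}\,\|\cdot\|_{L^4}^{1-\theta}$, the exponents of $r_2$ and $r_1$ in \eqref{L2 bound for LOmegaAlpha on sphere} and \eqref{L4 bound for LOmegaAlpha on sphere} combine as
\begin{equation*}
-2\theta-\frac{3}{2}(1-\theta)=-\Big(1+\frac{2}{q}\Big), \qquad \theta\Big(-\frac{3}{2}-\frac{3}{2}\varepsilon_0\Big)+(1-\theta)\big(-2-\varepsilon_0\big)=-\Big(\frac{5}{2}-\frac{2}{q}+\Big(\frac{1}{2}+\frac{2}{q}\Big)\varepsilon_0\Big),
\end{equation*}
which gives \eqref{Lq bound for LOmegaAlpha on sphere}.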
We now improve the decay in $r_2$ in \eqref{Lq bound for LOmegaAlpha on sphere} for $2<q<\frac{9}{4}$. For this purpose, we choose $\gamma$ so that
\begin{equation*}
\gamma+\frac{2}{q}=3.
\end{equation*}
Therefore, we have
\begin{align*}
\|r^\gamma\mathcal{L}_\Omega\mathring{\alpha}\|_{L^q(\mathcal{S}_{r_1}^{r_2})}^q- \|r^\gamma\mathcal{L}_\Omega\mathring{\alpha}\|_{L^q(\mathcal{S}_{r_1}^{r_1})}^q&= \int_{\frac{r_1}{2}}^{\frac{r_2}{2}}\int_{\mathbf{S}^2}L\big(\big| r^3 \mathcal{L}_\Omega\mathring{\alpha}\big|^q\big)d\vartheta dv \\
&\lesssim \int_{\frac{r_1}{2}}^{\frac{r_2}{2}}\int_{\mathbf{S}^2} |\nabla_L\big( r v^2 \mathcal{L}_\Omega\mathring{\alpha} \big)|\big| r^3 \mathcal{L}_\Omega\mathring{\alpha}\big|^{q-1}d\vartheta dv.
\end{align*}
By the Cauchy-Schwarz inequality, we have
\begin{equation}\label{aux 1}
\|r^\gamma\mathcal{L}_\Omega\mathring{\alpha}\|_{L^q(\mathcal{S}_{r_1}^{r_2})}^q \lesssim \|r^\gamma\mathcal{L}_\Omega\mathring{\alpha}\|_{L^q(\mathcal{S}_{r_1}^{r_1})}^q+\|\nabla_L\big( r v^2 \mathcal{L}_\Omega\mathring{\alpha} \big)\|_{L^2(\mathcal{H}_{r_1})} \underbrace{\|r^{3q-5} |\mathcal{L}_\Omega\mathring{\alpha} |^{q-1}\|_{L^2(\mathcal{H}_{r_1})}}_{\mathbf{I}}.
\end{equation}
To bound $\mathbf{I}$, since $q<\frac{9}{4}$, we proceed as follows
\begin{align*}
\mathbf{I}&=\Big(\int_{\mathcal{H}_{r_1}}r^{6q-10} |\mathcal{L}_\Omega\mathring{\alpha} |^{2q-2}\Big)^{\frac{1}{2}}=\Big(\int_{r_1}^{r_2}r^{6q-10} \|\mathcal{L}_\Omega\mathring{\alpha} \|_{L^{2q-2}({\mathcal{S}_{r_1}^r})}^{2q-2} dr\Big)^{\frac{1}{2}}\\
&\stackrel{\eqref{Lq bound for LOmegaAlpha on sphere}}{\lesssim} \Big(\int_{r_1}^{r_2}r^{6q-10}\cdot \mathring{\varepsilon}^{q-1}\cdot r^{-2q}r_1^{-\big(5q-7+(q+1)\varepsilon_0\big)} dr\Big)^{\frac{1}{2}}\lesssim \mathring{\varepsilon}^{\frac{q-1}{2}} r_1^{-\frac{1}{2}\big(q+2+(q+1)\varepsilon_0\big)} .
\end{align*}
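To spell out the exponent bookkeeping: \eqref{Lq bound for LOmegaAlpha on sphere} is applied with the exponent $2q-2\in(2,4)$, which produces the factor $r^{-(2q-2)-2}=r^{-2q}$ in $r$ and the factor $r_1^{-(5q-7+(q+1)\varepsilon_0)}$ in $r_1$; the remaining integral
\begin{equation*}
\int_{r_1}^{\infty} r^{6q-10}\cdot r^{-2q}\,dr=\int_{r_1}^{\infty} r^{4q-10}\,dr \approx r_1^{4q-9}
\end{equation*}
converges precisely because $q<\frac{9}{4}$, and $r_1^{4q-9}\cdot r_1^{-(5q-7+(q+1)\varepsilon_0)}=r_1^{-(q+2+(q+1)\varepsilon_0)}$.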
In view of \eqref{conf estimates aux} and \eqref{highest weight estimate for alpha}, we have
\begin{equation*}
\|r^\gamma\mathcal{L}_\Omega\mathring{\alpha}\|_{L^q(\mathcal{S}_{r_1}^{r_2})}^q \lesssim \mathring{\varepsilon}^{\frac{q}{2}} r_1^{-q(1+\varepsilon_0)} + \mathring{\varepsilon}^{\frac{q}{2}} r_1^{-\frac{1}{2}\big(q+4+(q+3)\varepsilon_0\big)}.
\end{equation*}
Therefore, we have
\begin{equation}\label{Lq final estimate 1}
\|\mathcal{L}_\Omega\mathring{\alpha}\|_{L^q(\mathcal{S}_{r_1}^{r_2})} \lesssim \sqrt{\mathring{\varepsilon}}r_2^{\frac{2}{q}-3} r_1^{-(1+\varepsilon_0)}, \ \text{for} \ 2<q<\frac{9}{4}.
\end{equation}
We remark that, compared to \eqref{Lq bound for LOmegaAlpha on sphere}, the decay in $r_2$ has been improved. Similarly, we also have
\begin{equation}\label{Lq final estimate 2}
\|\mathring{\alpha}\|_{L^q(\mathcal{S}_{r_1}^{r_2})} \lesssim \sqrt{\mathring{\varepsilon}}r_2^{\frac{2}{q}-3} r_1^{-(1+\varepsilon_0)}, \ \text{for} \ 2<q<\frac{9}{4}.
\end{equation}
We can fix a $q \in (2,\frac{9}{4})$ (say $q=\frac{17}{8}$) and apply \eqref{Sobolev on sphere}. Therefore, \eqref{Lq final estimate 1} and \eqref{Lq final estimate 2} together yield
\begin{equation*}
|\mathring{\alpha}| \lesssim \sqrt{{\mathring{\varepsilon}}} r^{-3} u_+^{-1-\varepsilon_0}.
\end{equation*}
This completes the proof.
\end{proof}
\subsection{Pointwise decay estimates of the scalar field}
We start with the decay estimate of $\phi$ on the initial slice $\mathcal{B}_{R_*}$. By \eqref{Sobolve on incoming null hypersurfaces} and \eqref{Sobolev on sphere}, we have
\begin{equation*}
\begin{split}
\|\phi\|_{L^4(\mathcal{S}_{r_1}^{r_1})} \lesssim \sqrt{{\mathring{\varepsilon}}} r_1^{-\frac{5}{2}-4\varepsilon_0}, \ \ \|D_\Omega \phi\|_{L^4(\mathcal{S}_{r_1}^{r_1})} \lesssim \sqrt{{\mathring{\varepsilon}}} r_1^{-\frac{5}{2}-4\varepsilon_0}.
\end{split}
\end{equation*}
By \eqref{Sobolev on sphere}, we have
\begin{equation*}
\|\phi\|_{L^\infty(\mathcal{S}_{r_1}^{r_1})}\lesssim \sqrt{{\mathring{\varepsilon}}} r_1^{-3-4\varepsilon_0}.
\end{equation*}
\begin{proposition}\label{Proposition pointwise decay of scalar field}
For the solution $(\phi, F)$ of the MKG equations on the exterior region $\{t+R_*\leq |x|\}$, the scalar field verifies the following decay estimates:
\begin{equation*}
\begin{split}
|\phi| &\lesssim \sqrt{{\mathring{\varepsilon}}} r^{-1} u_+^{-\frac{5}{2}-2\varepsilon_0},\ \ |D_{\underline{L}} \phi| \lesssim \sqrt{{\mathring{\varepsilon}}} r^{-1} u_+^{-3-\varepsilon_0},\\
|\slashed{D}\phi| &\lesssim \sqrt{{\mathring{\varepsilon}}} r^{-2} u_+^{-2-\varepsilon_0}, \ \ |D_L\psi| \lesssim \sqrt{{\mathring{\varepsilon}}} r^{-2} u_+^{-1-\varepsilon_0}.
\end{split}
\end{equation*}
\end{proposition}
\begin{proof}
{\bf Step 1. $L^\infty$ estimate of $\phi$.} For $k\leq 2$, by Lemma \ref{Sobolev trace estimates on outgoing null hypersurfaces} and $(\mathbf{B})$ we have
\begin{equation}\label{bound on L2 on spheres}
\begin{split}
\|D^{k}_\Omega \phi\|^2_{L^2(\mathcal{S}_{r_1}^{r_2})}&\lesssim \|D^{k}_\Omega \phi\|^2_{L^2(\mathcal{S}_{r_1}^{r_1})}+\frac{1}{r_1}\int_{\mathcal{H}_{r_1}} |D_L D_\Omega^k\psi|^2\stackrel{(\mathbf{B})}{\lesssim} {\mathring{\varepsilon}} r_1^{-5-2\varepsilon_0}.
\end{split}
\end{equation}
We now use \eqref{Sobolev on sphere} to conclude that
\begin{equation*}
\|\phi\|_{L^\infty(\mathcal{S}_{r_1}^{r_2})} {\lesssim} \sqrt{{\mathring{\varepsilon}}} r^{-1} u_+^{-\frac{5}{2}-\varepsilon_0}.
\end{equation*}
Here note that $u_+=1+\frac{1}{2} |t-r|=1+\frac{1}{2} r_1$.
We can indeed improve the estimates by gaining an $r_1^{-\varepsilon_0}$. First of all, notice that in \eqref{bound on L2 on spheres}, for $k\leq 1$, we have
\begin{equation*}
\begin{split}
\|D^{k}_\Omega \phi\|^2_{L^2(\mathcal{S}_{r_1}^{r_2})}&{\lesssim} {\mathring{\varepsilon}} r_1^{-5-4\varepsilon_0}.
\end{split}
\end{equation*}
To save one derivative, we can use the second equation in \eqref{Sobolve for scalar filed on incoming null hypersurfaces} to derive that
\begin{equation*}
\begin{split}
\|D_\Omega \phi\|^2_{L^4(\mathcal{S}_{r_1}^{r_2})}&{\lesssim} {\mathring{\varepsilon}} r_2^{-1}r_1^{-4-4\varepsilon_0}.
\end{split}
\end{equation*}
Thus, by \eqref{Sobolev on sphere} again, we have
\begin{equation}\label{pointwise bound on phi}
\|\phi\|_{L^\infty(\mathcal{S}_{r_1}^{r_2})} {\lesssim} \sqrt{{\mathring{\varepsilon}}} r^{-1} r_1^{-\frac{5}{2}-2\varepsilon_0}{\lesssim} \sqrt{{\mathring{\varepsilon}}} r^{-1} u_+^{-\frac{5}{2}-2\varepsilon_0}.
\end{equation}
{\bf Step 2. $L^\infty$ estimate of $D_{\underline{L}} \phi$.} We first bound $\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |D_{\underline{L}} \big(D_\Omega D_{\underline{L}} \phi \big)|^2$. It can be split into:
\begin{align*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |D_{\underline{L}} \big(D_\Omega D_{\underline{L}} \phi \big)|^2 &\leq \underbrace{\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |D_T \big(D_\Omega D_{\underline{L}} \phi \big)|^2}_{\mathbf{I}_1}+\underbrace{\int_{\underline{\mathcal{H}}_{r_2}^{r_1}}|D_L \big(D_\Omega D_{\underline{L}} \phi \big)|^2}_{\mathbf{I}_2}.
\end{align*}
To bound $\mathbf{I}_1$, we first commute derivatives to derive
\begin{align*}
D_T D_\Omega D_{\underline{L}} \phi &= D_T \big([D_\Omega, D_{\underline{L}}] \phi\big) +[D_T,D_{\underline{L}}]D_\Omega \phi+ D_{\underline{L}} D_T D_\Omega \phi\\
&=\sqrt{-1}\mathcal{L}_T F_{\Omega{\underline{L}}}\phi +\sqrt{-1}F_{\Omega{\underline{L}}}D_T \phi + \sqrt{-1}F_{T{\underline{L}}}D_\Omega\phi +D_{\underline{L}} D_T D_\Omega \phi.
\end{align*}
We therefore can bound that
\begin{equation*}
|D_T D_\Omega D_{\underline{L}} \phi| \leq r|\underline{\alpha}^{(\mathbf{1})}||\phi| + r|\underline{\alpha}||\phi^{(\mathbf{1})}|+r|\rho| |\slashed{D}\phi|+|D_{\underline{L}}\phi^{(\mathbf{2})}|,
\end{equation*}
where the discrepancy indices of the superscripts $(\mathbf{1})$ and $(\mathbf{2})$ are all equal to $-1$, and we note that $\alpha$, $\underline{\alpha}$ and $\rho$ are the curvature components of the full Maxwell field $F$. Therefore, we can split $\mathbf{I}_1$ into four terms:
\begin{align*}
\mathbf{I}_1 &\leq \int_{\underline{\mathcal{H}}_{r_2}^{r_1}} r^2|\underline{\alpha}^{(\mathbf{1})}|^2|\phi|^2 + \int_{\underline{\mathcal{H}}_{r_2}^{r_1}} r^2|\underline{\alpha}|^2|\phi^{(\mathbf{1})}|^2+\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} r^2|\rho|^2 |\slashed{D}\phi|^2+\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |D_{\underline{L}}\phi^{(\mathbf{2})}|^2.
\end{align*}
Recall that the full Maxwell field $F$ splits into the chargeless part ${\mathring{F}}$ which has been bounded in Proposition \ref{Proposition pointwise decay of Maxwell} and the charge part $F[q_0]$ satisfying the trivial bound \eqref{eq:bd4Fq}. Since $F[q_0]$ is stationary, we note that $\underline{\alpha}^{(\mathbf{1})}=\underline{\mathring{\alpha}}^{(\mathbf{1})}$. Therefore we can use \eqref{pointwise bound on phi} to bound $\phi$ in the first term, use $|\underline{\alpha}|\lesssim r^{-1}u_+^{-2}$ for the second term, use $|\rho|\lesssim r^{-2}$ in the third term and the bootstrap assumption $(\mathbf{B})$ to bound the last term. In particular we can show that
\begin{align*}
\mathbf{I}_1 & \lesssim \int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |\underline{\mathring{\alpha}}^{(\mathbf{1})}|^2r_1^{-5-4\varepsilon_0} + u_+^{-4}|\phi^{(\mathbf{1})}|^2+ r^{-2} |\slashed{D}\phi|^2+ |D_{\underline{L}}\phi^{(\mathbf{2})}|^2 \\
&\lesssim {\mathring{\varepsilon}} r_1^{-8-2\varepsilon_0}+r_1^{-4}\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |\phi^{(\mathbf{1})}|^2.
\end{align*}
Since $\phi^{(\mathbf{1})}=D_T\phi$, according to \eqref{Sobolev trace estimates on outgoing null hypersurfaces}, for $k\leq 1$ we have
\begin{equation}\label{L2 of DT DOmega phi on sphere}
\begin{split}
\|D_T D_\Omega^k\phi\|^2_{L^2(\mathcal{S}_{r_1}^{r_2})}&\lesssim \|D_T D_\Omega^k\phi\|^2_{L^2(\mathcal{S}_{r_1}^{r_1})}+\frac{1}{r_1}\int_{\mathcal{H}_{r_1}} |D_L D_T D_\Omega^k\psi|^2\stackrel{(\mathbf{B})}{\lesssim} {\mathring{\varepsilon}} r_1^{-7-2\varepsilon_0}.
\end{split}
\end{equation}
We now use the case $k=0$ to conclude that
\begin{align*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |\phi^{(\mathbf{1})}|^2&\lesssim \int_{-\frac{r_2}{2}}^{-\frac{r_1}{2}}\big(\int_{\mathcal{S}_{-2u}^{r_2}} |D_T\phi|^2\big) du \lesssim \int_{-\frac{r_2}{2}}^{-\frac{r_1}{2}} \mathring{\varepsilon} u_+^{-7-2\varepsilon_0 } du \lesssim {\mathring{\varepsilon}} r_1^{-6-2\varepsilon_0}.
\end{align*}
Here we keep in mind that $u_+=1+\frac{1}{2}r_1$. In particular we derive that
\begin{align*}
\mathbf{I}_{1} &\lesssim {\mathring{\varepsilon}}r_1^{-8-2\varepsilon_0}.
\end{align*}
Now we turn to the estimate of $\mathbf{I}_2$. By using the null equations for $\phi$, we first can write that
\begin{align*}
D_L D_\Omega D_{\underline{L}} \phi &= D_L \big([D_\Omega, D_{\underline{L}}] \phi\big) +r^{-1}D_L D_{\underline{L}} \big(rD_\Omega \phi\big)+r^{-1}(D_L D_\Omega \phi-D_{\underline{L}} D_\Omega \phi)\\
&\stackrel{\eqref{scalar box null}}{=}\sqrt{-1}D_L\big(F_{\Omega{\underline{L}}}\phi \big) -\Box_A D_\Omega\phi +\slashed{D}^2 D_\Omega\phi - \sqrt{-1} \rho\cdot D_\Omega\phi+\frac{1}{r}(D_L D_\Omega \phi-D_{\underline{L}} D_\Omega \phi)\\
&=\sqrt{-1}\mathcal{L}_L F_{\Omega{\underline{L}}}\cdot\phi -Q(\phi,F;\Omega)+ \Big(2\sqrt{-1} F_{\Omega{\underline{L}}}D_{T} \phi+\frac{2}{r}D_T D_\Omega \phi\Big)\\
&\ \ \ \ \ \ +\Big(\slashed{D}^2\big(D_\Omega\phi \big) - \sqrt{-1} \rho\cdot\big(D_\Omega\phi\big)-\frac{2}{r}D_{\underline{L}} D_\Omega \phi-\sqrt{-1} F_{\Omega{\underline{L}}} D_{\underline{L}} \phi\Big).
\end{align*}
For the integral of the last term, we use the pointwise bounds:
\[
|\rho|\lesssim r^{-2},\quad |F_{\Omega {\underline{L}}}|=r|\underline{\alpha} | \lesssim r^{-2}+ \sqrt{\mathring{\varepsilon}} u_+^{-3-\varepsilon_0}\lesssim u_+^{-2}.
\]
We therefore can bound that
\begin{align*}
&\int_{\underline{\mathcal{H}}_{r_2}^{r_1}}|\slashed{D}^2\big(D_\Omega\phi \big) - \sqrt{-1} \rho\cdot\big(D_\Omega\phi\big)-\frac{2}{r}D_{\underline{L}} D_\Omega \phi-\sqrt{-1} F_{\Omega{\underline{L}}} D_{\underline{L}} \phi|^2\\
&\lesssim \int_{\underline{\mathcal{H}}_{r_2}^{r_1}}r^{-2}|\slashed{D} D_\Omega^2\phi |^2+r^{-4}| D_\Omega\phi|^2+r^{-2}|D_{\underline{L}} D_\Omega \phi|^2+u_+^{-4}|D_{\underline{L}} \phi|^2\\
&\lesssim {\mathring{\varepsilon}}r_1^{-8-2\varepsilon_0}.
\end{align*}
For the third term in the previous identity, by using the above estimate \eqref{L2 of DT DOmega phi on sphere}, we can show that
\begin{align*}
&\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |2\sqrt{-1} F_{\Omega{\underline{L}}}D_{T} \phi+\frac{2}{r}D_T D_\Omega \phi|^2 \lesssim \int_{\underline{\mathcal{H}}_{r_2}^{r_1}}r^{-2}|D_T D_\Omega \phi|^2+u_+^{-4}|D_T \phi|^2 \lesssim {\mathring{\varepsilon}}r_1^{-8-2\varepsilon_0}.
\end{align*}
For the first term $\sqrt{-1}\mathcal{L}_L F_{\Omega{\underline{L}}}\cdot\phi$, we use the null equation \eqref{Maxwell null commuted} to show that
\begin{align*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |\sqrt{-1}\mathcal{L}_L F_{\Omega{\underline{L}}}\cdot\phi|^2 &\lesssim \int_{\underline{\mathcal{H}}_{r_2}^{r_1}} \big(|\rho^{(\mathbf{1})}|^2+|\sigma^{(\mathbf{1})}|^2 +r^2|\slashed{J}|^2+r^2|\underline{\alpha}|^2\big)|\phi|^2.
\end{align*}
Now recall that $|\slashed{J}|\leq|\phi||\slashed{D}\phi|$ and we have the bound $ |\mathcal{L}_{\Omega}F[q_0]|\lesssim r^{-3}$.
Then by using the bootstrap assumptions on $\mathring{F}$ as well as the pointwise bound for $\phi$, we indeed can show that
\begin{align*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |\sqrt{-1}\mathcal{L}_L F_{\Omega{\underline{L}}}\cdot\phi|^2 &\lesssim \int_{\underline{\mathcal{H}}_{r_2}^{r_1}} \big(|\mathring{\rho}^{(\mathbf{1})}|^2+|\mathring{\sigma}^{(\mathbf{1})}|^2 +u_+^{-5}|\slashed{D}\phi|^2+r^2|\underline{\mathring{\alpha}}|^2+r^{-4}\big)|\phi|^2 \lesssim {\mathring{\varepsilon}}r_1^{-8-2\varepsilon_0}.
\end{align*}
Finally for the quadratic term $Q(\phi, F; \Omega)$, we use the bound \eqref{null form Omega} in the proof for Proposition \ref{lemma null form} to show that
\begin{align*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |Q(\phi,F;\Omega)|^2 &\lesssim \int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |D_L\psi|^2|\underline{\alpha}|^2+|D_{\underline{L}} \psi|^2|\alpha|^2+|\sigma|^2|\phi|^2+r^2 |\slashed{J}|^2|\phi|^2+|\sigma|^2|\slashed {D} \psi|^2\\
&\lesssim \int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |D_L\psi|^2r^{-2}u_+^{-4}+|D_{\underline{L}} \psi|^2 r^{-6}+r^{-4}u_+^{-2}(|\phi|^2+r^2|\slashed{D} \phi|^2)+u_+^{-10} |\slashed{D}\phi|^2 \\
&\lesssim \int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |D_T\phi|^2 u_+^{-4}+|D_{\underline{L}} \phi|^2 u_+^{-4}+r^{-4}u_+^{-2}|\phi|^2+u_+^{-4}|\slashed{D} \phi|^2\\
& \lesssim {\mathring{\varepsilon}}r_1^{-8-2\varepsilon_0}.
\end{align*}
Here we have used the fact that $L=2T-{\underline{L}}$ to bound $D_L\psi$ and estimate \eqref{L2 of DT DOmega phi on sphere} to bound the integral of $\phi$ as well as $D_T\phi$.
Combining the above estimates, we have shown that
\begin{equation}\label{eq 1}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |D_{\underline{L}} \big(D_\Omega D_{\underline{L}} \phi \big)|^2 \lesssim {\mathring{\varepsilon}} r_1^{-8-2\varepsilon_0}.
\end{equation}
The next objective is to derive an estimate for $\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |D_\Omega \big(D_\Omega D_{\underline{L}} \phi \big)|^2$. First, for $\Omega$, $\Omega'$ being angular momentum vector fields, recall the following commutation formula:
\begin{align*}
D_\Omega \big(D_{\Omega'} D_{\underline{L}} \phi \big)
&=\sqrt{-1}\big(\mathcal{L}_\Omega F_{\Omega' {\underline{L}}} \phi + F([\Omega,\Omega'], {\underline{L}}) \phi+ F_{\Omega' {\underline{L}}} D_\Omega \phi+F_{\Omega {\underline{L}}} D_{\Omega'}\phi\big) +D_{\underline{L}} D_\Omega D_{\Omega'} \phi.
\end{align*}
For the first four terms, we can bound the full Maxwell field by the pointwise bound according to Proposition \ref{Proposition pointwise decay of Maxwell} together with the property of the charge 2-form $F[q_0]$. More precisely we can show that
\begin{align*}
& \int_{\underline{\mathcal{H}}_{r_2}^{r_1}}|\sqrt{-1}\big(\mathcal{L}_\Omega F_{\Omega' {\underline{L}}} \phi + F([\Omega,\Omega'], {\underline{L}}) \phi+ F_{\Omega' {\underline{L}}} D_\Omega \phi+F_{\Omega {\underline{L}}} D_{\Omega'}\phi\big)|^2\\
&\lesssim \int_{\underline{\mathcal{H}}_{r_2}^{r_1}}(|\mathcal{L}_\Omega \underline{\mathring{\alpha}}|^2 +|\underline{\mathring{\alpha}}|^2)u_+^{-5} +r^{-4}|\phi|^2+ u_+^{-4}r^2|\slashed{D}\phi|^2 \lesssim {\mathring{\varepsilon}}r_1^{-8-2\varepsilon_0}.
\end{align*}
Then by using the ansatz $\mathbf{(B)}$, we can derive that
\begin{equation*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |D_\Omega \big(D_\Omega D_{\underline{L}} \phi \big)|^2\lesssim {\mathring{\varepsilon}} r_1^{-6-2\varepsilon_0}.
\end{equation*}
Similarly, we also have
\begin{equation*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |D_\Omega D_{\underline{L}} \phi |^2\lesssim {\mathring{\varepsilon}}r_1^{-6-4\varepsilon_0}.
\end{equation*}
Then using the Sobolev inequality \eqref{Sobolve for scalar filed on incoming null hypersurfaces}, we derive that
\begin{equation*}
\|D_\Omega D_{\underline{L}}\phi\|_{L^4(\mathcal{S}_{r_1}^{r_2})} \lesssim r_2^{-\frac{1}{2}} \sqrt{{\mathring{\varepsilon}}} r_1^{-3-\varepsilon_0}.
\end{equation*}
We then repeat the same argument for $D_{\underline{L}}\phi$ to derive
\begin{equation*}
\| D_{\underline{L}}\phi\|_{L^4(\mathcal{S}_{r_1}^{r_2})} \lesssim r_2^{-\frac{1}{2}} \sqrt{{\mathring{\varepsilon}}} r_1^{-3-2\varepsilon_0}.
\end{equation*}
Finally, by virtue of \eqref{Sobolev on sphere} and the fact that $u_+=1+\frac{1}{2}r_1$, we obtain that
\begin{equation*}
\|D_{\underline{L}}\phi\|_{L^\infty(\mathcal{S}_{r_1}^{r_2})} \lesssim \sqrt{{\mathring{\varepsilon}}} r_1^{-1} u_+^{-3-\varepsilon_0}.
\end{equation*}
{\bf Step 3. $L^\infty$ estimate of $\slashed{D}\phi$.} By the bootstrap ansatz $(\mathbf{B})$, we have
\begin{align*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |D_{\underline{L}} D_{\Omega'}\big( D_\Omega \phi \big)|^2 \lesssim {\mathring{\varepsilon}}r_1^{-6-2\varepsilon_0}.
\end{align*}
We now use the $r^p$-weighted energy estimate with $p=2$ of the bootstrap assumption $(\mathbf{B})$ to show that
\begin{align*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |D_{\Omega^{''}} \big(D_{\Omega'} D_\Omega \phi \big)|^2 \lesssim \int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |\slashed{\nabla} \big(D_{\Omega'} D_\Omega \psi \big)|^2 &\lesssim {\mathring{\varepsilon}}r_1^{-4-2\varepsilon_0}.
\end{align*}
Therefore by using the Sobolev embedding, we have
\begin{equation*}
\|D_{\Omega'} D_\Omega\phi\|_{L^4(\mathcal{S}_{r_1}^{r_2})} \lesssim r_2^{-\frac{1}{2}} \sqrt{{\mathring{\varepsilon}}} r_1^{-2-\varepsilon_0}.
\end{equation*}
Similarly, we can also obtain
\begin{equation*}
\| D_\Omega\phi\|_{L^4(\mathcal{S}_{r_1}^{r_2})} \lesssim r_2^{-\frac{1}{2}} \sqrt{{\mathring{\varepsilon}}} r_1^{-2-2\varepsilon_0}.
\end{equation*}
Therefore, \eqref{Sobolev on sphere} implies that, for every angular momentum vector field $\Omega$, we have
\begin{equation*}
\|D_\Omega\phi\|_{L^\infty(\mathcal{S}_{r_1}^{r_2})} \lesssim \sqrt{{\mathring{\varepsilon}}} r_2^{-1} u_+^{-2-\varepsilon_0}.
\end{equation*}
Considering that $|D_{\Omega}\phi|=r|\slashed{D}\phi|$, the above estimate implies that
\begin{equation*}
\|\slashed{D}\phi\|_{L^\infty(\mathcal{S}_{r_1}^{r_2})} \lesssim \sqrt{{\mathring{\varepsilon}}} r^{-2} u_+^{-2-\varepsilon_0}.
\end{equation*}
Here note that on the sphere $\mathcal{S}_{r_1}^{r_2}$ the relation $r=\frac{r_1+r_2}{2}$ holds.
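This relation can be checked from the double-null parametrization. We record the computation as a routine check, assuming the conventions $u=\frac{t-r}{2}$, $v=\frac{t+r}{2}$, so that $\mathcal{H}_{r_1}=\{u=-\frac{r_1}{2}\}$ and $\underline{\mathcal{H}}_{r_2}=\{v=\frac{r_2}{2}\}$, consistent with the identity $u_+=1+\frac{1}{2}r_1$ used above:
\begin{equation*}
r=v-u=\frac{r_2}{2}-\Big(-\frac{r_1}{2}\Big)=\frac{r_1+r_2}{2}.
\end{equation*}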
{\bf Step 4. $L^\infty$ estimate of $D_L(r \phi)$.} The idea is to use the highest weight commutator $K=v^2L+u^2{\underline{L}}$. According to the bootstrap ansatz, we have
\begin{align*}
\sum\limits_{k\leq 1}\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} r_1^2|D_{\underline{L}} D_{\Omega}^k\big( \widehat{D}_K \phi \big)|^2+r^2|\slashed{D}D_{\Omega}^k \big( \widehat{D}_K \phi \big)|^2 \lesssim {\mathring{\varepsilon}}r_1^{-2-2\varepsilon_0}.
\end{align*}
Here we may note that $D_{\Omega}=\widehat{D}_{\Omega}$. In particular we conclude that
\begin{align*}
\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} |D_\Omega D_{\Omega'}\big( \widehat{D}_K \phi \big)|^2+|D_\Omega \big( \widehat{D}_K \phi \big)|^2 \lesssim {\mathring{\varepsilon}} r_1^{-2-2\varepsilon_0}.
\end{align*}
Therefore the Sobolev embedding implies that
\begin{align*}
|\widehat{D}_K \phi |&\lesssim \sqrt{\mathring{\varepsilon}}r^{-1}u_+^{-1-\varepsilon_0}.
\end{align*}
On the other hand, we have
\begin{align*}
v^2 D_{L}(r\phi)=D_{K}(r\phi)-u^2 D_{{\underline{L}}}(r\phi)=r \widehat{D}_{K}\phi-ru^2 D_{{\underline{L}}}\phi+u^2 \phi.
\end{align*}
Then by using the bounds for $\phi$ and $D_{{\underline{L}}}\phi$, we derive that
\begin{equation*}
v^2|D_L(r\phi)| \lesssim \sqrt{{\mathring{\varepsilon}}} (1+ |u|)^{-1-\varepsilon_0}.
\end{equation*}
This completes the proof.
\end{proof}
\section{The analysis in the exterior region 2: energy estimates}
\subsection{Energy estimates on Maxwell field}
For a multi-index $\mathbf{k}$ with $1\leq |\mathbf{k}| \leq 2$, we can take $G=\mathcal{L}_{Z}^{\mathbf{k}}{\mathring{F}}$ and ${f}=0$ in \eqref{classical energy inequality} and \eqref{r wieghted} to deduce:
\begin{equation*}
\begin{split}
\mathcal{E}^{(\mathbf{k})}({\mathring{F}};r_1) &\leq \mathcal{E}[\mathcal{L}_Z^\mathbf{k} {\mathring{F}}](\mathcal{B}_{r_1})+\int_{\mathcal{D}_{r_1}}r^{-2}\big|{J^{(\mathbf{k})}}_\nu\cdot\mathcal{L}_{Z}^{\mathbf{k}}{\mathring{F}}_0{}^{\nu}\big|\\
&\leq {\mathring{\varepsilon}}r_1^{-6+2\xi(\mathbf{k})-8\varepsilon_0}+C\int_{\mathcal{D}_{r_1}}\underbrace{\frac{ |{J^{(\mathbf{k})}}_L||\rho^{(\mathbf{k})}|}{r^2}}_{\mathbf{I}_1}+\underbrace{\frac{ |{J^{(\mathbf{k})}}_{\underline{L}}||\rho^{(\mathbf{k})}|}{r^2}}_{\mathbf{I}_2}+\underbrace{\frac{|\slashed{J}^{(\mathbf{k})}||\alpha^{(\mathbf{k})}|}{r^2}}_{\mathbf{I}_3}+\underbrace{\frac{|\slashed{J}^{(\mathbf{k})}||\underline{\alpha}^{(\mathbf{k})}|}{r^2}}_{\mathbf{I}_4},
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
\mathcal{E}^{(\mathbf{k})}({\mathring{F}};p=2;r_1) &\leq \int_{\mathcal{B}_{r_1}}r^{2}\big(|\alpha^{(\mathbf{k})}|^2+|\rho^{(\mathbf{k})}|^2+|\sigma^{(\mathbf{k})}|^2\big)+ \int_{\mathcal{D}_{r_1}} \big|{J^{(\mathbf{k})}}_\nu\cdot\mathcal{L}_{Z}^{\mathbf{k}}{\mathring{F}}_L{}^{\nu}\big|,\\
&\leq {\mathring{\varepsilon}}r_1^{-4+2\xi(\mathbf{k})-8\varepsilon_0}+C \int_{\mathcal{D}_{r_1}} \underbrace{|{J^{(\mathbf{k})}}_L||\rho^{(\mathbf{k})}|}_{\mathbf{I}_5}+\underbrace{|\slashed{J}^{(\mathbf{k})}||\alpha^{(\mathbf{k})}|}_{\mathbf{I}_6},
\end{split}
\end{equation*}
where $C$ is a universal constant. In this section, the constant $C$ may change from line to line, but it always denotes a universal constant. We now bound the $\mathbf{I}_i$'s one by one.
For $\mathbf{I}_1$ and $\mathbf{I}_5$, we have
\begin{align*}
\int_{\mathcal{D}_{r_1}}\mathbf{I}_5 &\lesssim \Big(\int_{\mathcal{D}_{r_1}} |{J^{(\mathbf{k})}}_L|^2 \Big)^{\frac{1}{2}} \Big(\int_{\mathcal{D}_{r_1}} |\rho^{(\mathbf{k})}|^2\Big)^{\frac{1}{2}}\\
&\lesssim \Big( \int_{r\geq r_1} \big(\int_{\mathcal{H}_r} |{J^{(\mathbf{k})}}_L|^2\big) dr\Big)^{\frac{1}{2}} \Big( \int_{r_1}^{\infty} \big(\int_{\mathcal{H}_{r}}|\rho^{(\mathbf{k})}|^2\big) dr\Big)^{\frac{1}{2}}\\
&\lesssim \mathring{\varepsilon} r_1^{-\frac{5}{2}+\xi(\mathbf{k})-2\varepsilon_0}\cdot {\mathring{\varepsilon}}^{\frac{1}{2}} r_1^{-\frac{5}{2}+\xi(\mathbf{k})-(3-|\mathbf{k}|)\varepsilon_0} \lesssim {\mathring{\varepsilon}}^
{\frac{3}{2}}r_1^{-4+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0}.
\end{align*}
The last step follows from the bootstrap assumption $\mathbf{(C)}$ as well as the bootstrap assumption $\mathbf{(B)}$. Similarly,
\begin{align*}
\int_{\mathcal{D}_{r_1}}\mathbf{I}_1
\lesssim {\mathring{\varepsilon}}^{\frac{3}{2}}r_1^{-6+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0}.
\end{align*}
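For the reader's convenience, the exponent count behind the last step of the bound for $\mathbf{I}_5$ can be spelled out as follows (a routine check, using $r_1\geq 1$, $1\leq|\mathbf{k}|\leq 2$ and the smallness of $\varepsilon_0$):
\begin{equation*}
\Big(-\frac{5}{2}+\xi(\mathbf{k})-2\varepsilon_0\Big)+\Big(-\frac{5}{2}+\xi(\mathbf{k})-(3-|\mathbf{k}|)\varepsilon_0\Big)=-5+2\xi(\mathbf{k})-(5-|\mathbf{k}|)\varepsilon_0\leq -4+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0,
\end{equation*}
where the last inequality uses $r_1\geq 1$ together with $5-|\mathbf{k}|\geq 6-2|\mathbf{k}|$ for $|\mathbf{k}|\geq 1$.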
For $\mathbf{I}_2$, we have
\begin{align*}
\int_{\mathcal{D}_{r_1}}\mathbf{I}_2&\lesssim \Big(\int_{\mathcal{D}_{r_1}}\frac{|{J^{(\mathbf{k})}}_{\underline{L}}|^2}{r^{\frac{19}{4}}}\Big)^{\frac{1}{2}}\Big(\int_{\mathcal{D}_{r_1}} r^{\frac{3}{4}}|\rho^{(\mathbf{k})}|^2\Big)^{\frac{1}{2}}\\
&= \Big(\int_{\frac{r_1}{2}}^{\infty} \frac{1}{v^{\frac{5}{4}}}\big(\int_{\underline{\mathcal{H}}_{2v}^{r_1}}\frac{|{J^{(\mathbf{k})}}_{\underline{L}}|^2}{r^{\frac{7}{2}}}\big) dv \Big)^{\frac{1}{2}} \Big( \int_{\frac{r_1}{2}}^{\infty} \frac{1}{v^{\frac{5}{4}}}\big(\int_{\underline{\mathcal{H}}_{2v}^{r_1}}r^2|\rho^{(\mathbf{k})}|^2\big) dv\Big)^{\frac{1}{2}}\\
& \lesssim \mathring{\varepsilon} r_1^{-\frac{39}{8}+\xi(\mathbf{k})-2\varepsilon_0}\cdot {\mathring{\varepsilon}}^{\frac{1}{2}} r_1^{-\frac{17}{8}+\xi(\mathbf{k})-(3-|\mathbf{k}|)\varepsilon_0} \lesssim {\mathring{\varepsilon}}^
{\frac{3}{2}}r_1^{-6+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0}.
\end{align*}
We remark that in the last step we have used the bootstrap assumption $\mathbf{(B)}$ since $\int_{\underline{\mathcal{H}}^{r_1}_{r}} r^2|\rho^{(\mathbf{k})}|^2$ appears in the $r^p$-weighted energy. Another key point is that $v^{-\frac{5}{4}}$ is integrable on $[\frac{r_1}{2},\infty)$.
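Concretely, the integrability of $v^{-\frac{5}{4}}$ invoked here amounts to the elementary computation
\begin{equation*}
\int_{\frac{r_1}{2}}^{\infty} v^{-\frac{5}{4}}\, dv = 4\Big(\frac{r_1}{2}\Big)^{-\frac{1}{4}}\lesssim r_1^{-\frac{1}{4}},
\end{equation*}
which explains why the $dv$-integrals above converge and contribute the extra decay in $r_1$ recorded in the last step.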
For $\mathbf{I}_3$ and $\mathbf{I}_6$, we have
\begin{align*}
\int_{\mathcal{D}_{r_1}}\mathbf{I}_6&\lesssim \Big(\int_{\mathcal{D}_{r_1}}\frac{|\slashed{J}^{(\mathbf{k})} |^2}{r^2}\Big)^{\frac{1}{2}}\Big(\int_{\mathcal{D}_{r_1}}r^2 |\alpha^{(\mathbf{k})}|^2\Big)^{\frac{1}{2}}\\
&=\Big(\int_{r_1}^\infty\big(\int_{\mathcal{H}_{r_2}}\frac{|\slashed{J}^{(\mathbf{k})}|^2}{r^2}\big)dr_2\Big)^{\frac{1}{2}} \Big( \int_{r_1}^{\infty} \big(\int_{\mathcal{H}_{r}}r^2|\alpha^{(\mathbf{k})}|^2\big) dr\Big)^{\frac{1}{2}} \\
&\lesssim \mathring{\varepsilon} r_1^{-\frac{7}{2}+\xi(\mathbf{k})-2\varepsilon_0}\cdot {\mathring{\varepsilon}}^{\frac{1}{2}} r_1^{-\frac{3}{2}+\xi(\mathbf{k})-(3-|\mathbf{k}|)\varepsilon_0} \lesssim {\mathring{\varepsilon}}^
{\frac{3}{2}}r_1^{-4+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0}.
\end{align*}
Similarly, we have
\begin{align*}
\int_{\mathcal{D}_{r_1}}\mathbf{I}_3
\lesssim {\mathring{\varepsilon}}^{\frac{3}{2}}r_1^{-6+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0}.
\end{align*}
For $\mathbf{I}_4$, we have
\begin{align*}
\int_{\mathcal{D}_{r_1}}\mathbf{I}_4&\lesssim \Big(\int_{\mathcal{D}_{r_1}} \frac{ |\slashed{J}^{(\mathbf{k})}|^2}{r^2}\Big)^{\frac{1}{2}} \Big(\int_{\mathcal{D}_{r_1}}\frac{|\underline{\alpha}^{(\mathbf{k})}|^2}{r^2}\Big)^{\frac{1}{2}}\\
&\lesssim\Big( \int_{r_1}^\infty\big( \int_{\mathcal{H}_{r_2}}\frac{ |\slashed{J}^{(\mathbf{k})}|^2}{r^2}\big)dr_2\Big)^{\frac{1}{2}} \Big( \int_{\frac{r_1}{2}}^\infty \frac{1}{v^2}\big(\int_{\underline{\mathcal{H}}^{r_1}_{2v}} |\underline{\alpha}^{(\mathbf{k})}|^2\big) dv \Big)^{\frac{1}{2}}\\
&\lesssim \mathring{\varepsilon} r_1^{-\frac{7}{2}+\xi(\mathbf{k})-2\varepsilon_0}\cdot {\mathring{\varepsilon}}^{\frac{1}{2}} r_1^{-\frac{7}{2}+\xi(\mathbf{k})-(3-|\mathbf{k}|)\varepsilon_0} \lesssim {\mathring{\varepsilon}}^
{\frac{3}{2}}r_1^{-6+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0}.
\end{align*}
In conclusion, by our convention on the implicit constants, we derive that
\begin{equation*}
\begin{split}
\mathcal{E}^{(\mathbf{k})}({\mathring{F}};r_1)&\leq {\mathring{\varepsilon}}r_1^{-6+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0}\big(1+C{\mathring{\varepsilon}}^{\frac{1}{2}}\big),\\
\mathcal{E}^{(\mathbf{k})}({\mathring{F}};p=2;r_1) &\leq {\mathring{\varepsilon}}r_1^{-4+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0}\big(1+C{\mathring{\varepsilon}}^{\frac{1}{2}}\big)
\end{split}
\end{equation*}
for some universal constant $C$.
For sufficiently small $\mathring{\varepsilon}$, we have then closed the bootstrap argument for the Maxwell field in $\mathbf{(B)}$:
\begin{equation}\label{improved bootstrap maxwell}
\begin{split}
\mathcal{E}^{(\mathbf{k})}({\mathring{F}};r_1) & \leq 2 {\mathring{\varepsilon}}r_1^{-6+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0},\ \
\mathcal{E}^{(\mathbf{k})}({\mathring{F}};p=2;r_1) \leq 2{\mathring{\varepsilon}}r_1^{-4+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0}.
\end{split}
\end{equation}
\subsection{Energy estimates on scalar field}
For every multi-index $\mathbf{k}$ with $1\leq |\mathbf{k}| \leq 2$, we take ${f}=\widehat{D}_{Z}^{\mathbf{k}}\phi$ and $G=0$ in \eqref{classical energy inequality} and \eqref{r wieghted}. Let $\psi^{(\mathbf{k})}=r\phi^{(\mathbf{k})}$. We deduce the following energy estimates
\begin{equation}
\label{eq:EE:scal:00}
\begin{split}
\mathcal{E}^{(\mathbf{k})}(\phi;r_1) &\leq \mathcal{E}[\phi^{(\mathbf{k})}](\mathcal{B}_{r_1})+\int_{\mathcal{D}_{r_1}}\big| \Box_A\phi^{(\mathbf{k})} \cdot D_{\partial_t} \phi^{(\mathbf{k})} \big|+|F_{0\mu} {J}[\phi^{(\mathbf{k})}]^\mu|\\
&\leq {\mathring{\varepsilon}} r_1^{-6+2\xi(\mathbf{k})-8\varepsilon_0} +\underbrace{\int_{\mathcal{D}_{r_1}}\big| \Box_A\phi^{(\mathbf{k})}
|\big(| D_L \phi^{(\mathbf{k})}|+|D_{\underline{L}}\phi^{(\mathbf{k})}|\big)}_{\mathbf{R}_1} \\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\underbrace{\int_{\mathcal{D}_{r_1}}(|\alpha|+|\underline{\alpha}|)|\slashed{D}\phi^{(\mathbf{k})}||\phi^{(\mathbf{k})}|}_{\mathbf{S}_1}+\underbrace{\int_{\mathcal{D}_{r_1}}|\rho|\big(|D_L\phi^{(\mathbf{k})}|+|D_{\underline{L}}\phi^{(\mathbf{k})}|\big)|\phi^{(\mathbf{k})}|}_{\mathbf{T}_1}
\end{split}
\end{equation}
as well as the $r$-weighted energy estimates
\begin{equation}
\label{eq:EE:scal:000}
\begin{split}
\mathcal{E}^{(\mathbf{k})}(\phi;p=2;r_1) &\leq \int_{\mathcal{B}_{r_1}}|D_L \psi^{(\mathbf{k})}|^2+|\slashed{D} \psi^{(\mathbf{k})}|^2+ \int_{\mathcal{D}_{r_1}} r\big|\Box_A\phi^{(\mathbf{k})} \cdot D_L \psi^{(\mathbf{k})}\big|+r^2 \big|F_{L\mu} {J}[\phi^{(\mathbf{k})}]^\mu\big|\\
&\leq {\mathring{\varepsilon}}r_1^{-4+2\xi(\mathbf{k})-8\varepsilon_0}+ \underbrace{\int_{\mathcal{D}_{r_1}} r\big|\Box_A\phi^{(\mathbf{k})}\big||D_L \psi^{(\mathbf{k})}|}_{\mathbf{R}_2}+\underbrace{\int_{\mathcal{D}_{r_1}} r^2|\alpha||\slashed{D}\phi^{(\mathbf{k})}||\phi^{(\mathbf{k})}|}_{\mathbf{S}_2}\\
&\qquad\qquad\qquad\qquad+\underbrace{\int_{\mathcal{D}_{r_1}}|\rho||D_L\psi^{(\mathbf{k})}||\psi^{(\mathbf{k})}|}_{\mathbf{T}_2}.
\end{split}
\end{equation}
We remark that for the term $\mathbf{T}_2$, we have used the following structure of the current term:
\begin{equation*}
r^2{J}[\phi^{(\mathbf{k})}]= r^2 \Im(\phi^{(\mathbf{k})}\cdot \overline{D \phi^{(\mathbf{k})}})= \Im(\psi^{(\mathbf{k})}\cdot \overline{D \psi^{(\mathbf{k})}})={J}[\psi^{(\mathbf{k})}].
\end{equation*}
This will be crucial for the estimate of $\mathbf{T}_2$. We first bound the $\mathbf{S}_i$'s, whose estimates rely on the following lemma:
\begin{lemma}\label{lemma technical on phik}
Under the bootstrap ansatz, for $\gamma_2\geq 0$, $\gamma_1 > 1$, we have
\begin{equation*}
\int_{\mathcal{D}_{r_1}}\frac{|\phi^{(\mathbf{k})}|^2}{r^{\gamma_1}|u|^{\gamma_2}} \lesssim {\mathring{\varepsilon}} r_1^{-3-\gamma_1-\gamma_2+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0}.
\end{equation*}
\end{lemma}
\begin{proof}
Let $\mathcal{S}_{u,v}$ be the intersection of $\mathcal{H}_{u}$ and $\underline{\mathcal{H}}_{v}$. By \eqref{Sobolev trace estimates on outgoing null hypersurfaces}, we then have
\begin{align*}
\int_{\mathcal{D}_{r_1}}\frac{|\phi^{(\mathbf{k})}|^2}{r^{\gamma_1}|u|^{\gamma_2}} &=\int_{u}\int_{v}\frac{\int_{\mathcal{S}_{u,v}}|\phi^{(\mathbf{k})}|^2}{r^{\gamma_1}|u|^{\gamma_2}}\lesssim \int_{u}\int_{v}\frac{\int_{\mathcal{S}_{u,u}}|\phi^{(\mathbf{k})}|^2+|u|^{-1}\int_{\mathcal{H}_u}|D_L\psi^{(\mathbf{k})}|^2}{r^{\gamma_1}|u|^{\gamma_2}}\\
&\lesssim \int_{u}\frac{\int_{\mathcal{S}_{u,u}}|\phi^{(\mathbf{k})}|^2}{|u|^{\gamma_1+\gamma_2-1}}+\int_{u}\frac{\int_{\mathcal{H}_u}|D_L\psi^{(\mathbf{k})}|^2}{|u|^{\gamma_1+\gamma_2}}.
\end{align*}
The first term is from the initial data and it is bounded by ${\mathring{\varepsilon}} r_1^{-3-\gamma_1-\gamma_2+2\xi(\mathbf{k})-8\varepsilon_0}$. We can control the second term by the bootstrap ansatz and it is bounded by $C{\mathring{\varepsilon}} r_1^{-3-\gamma_1-\gamma_2+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0}$. This completes the proof.
\end{proof}
For $\mathbf{S}_1$, according to Proposition \ref{Proposition pointwise decay of Maxwell} and the decay properties of the charge part $\alpha(F[q_0])$, $\underline{\alpha}(F[q_0])$, we in particular have the following bounds
\[
|\alpha|\lesssim \sqrt{{\mathring{\varepsilon}}} r^{-3} u_+^{-1-\varepsilon_0 }+r^{-3} \lesssim r^{-3}, \quad |\underline{\alpha}| \lesssim \sqrt{{\mathring{\varepsilon}}} r^{-1}u_+^{-3-\varepsilon_0}+r^{-3}\lesssim r^{-1}u_+^{-2}.
\]
We have used the fact that $\mathring{\varepsilon}$ is sufficiently small. Therefore we can show that
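To make the second absorption explicit, we record the following routine check, assuming $u_+\lesssim r$ on the exterior region (which is consistent with the relations for $u_+$ used throughout this section):
\begin{equation*}
\sqrt{{\mathring{\varepsilon}}}\, r^{-1}u_+^{-3-\varepsilon_0}+r^{-3}= r^{-1}u_+^{-2}\big(\sqrt{{\mathring{\varepsilon}}}\,u_+^{-1-\varepsilon_0}+u_+^{2}r^{-2}\big)\lesssim r^{-1}u_+^{-2}.
\end{equation*}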
\begin{align*}
\mathbf{S}_1&\lesssim \int_{\mathcal{D}_{r_1}}\frac{|\slashed{D}\phi^{(\mathbf{k})}||\phi^{(\mathbf{k})}|}{r |u|^{2}}
\lesssim \Big(\int_{\mathcal{D}_{r_1}}\frac{|\slashed{D}\phi^{(\mathbf{k})}|^2}{|u|}\Big)^{\frac{1}{2}} \Big(\int_{\mathcal{D}_{r_1}}\frac{|\phi^{(\mathbf{k})}|^2}{r^{2}|u|^{3}}\Big)^{\frac{1}{2}}\\
&= \Big( \int_{|u| \geq \frac{r_1}{2}} \frac{\int_{\mathcal{H}_u}|\slashed{D}\phi^{(\mathbf{k})}|^2}{|u|} du\Big)^\frac{1}{2}\Big(\int_{\mathcal{D}_{r_1}}\frac{|\phi^{(\mathbf{k})}|^2}{r^{2}|u|^{3}}\Big)^{\frac{1}{2}}.
\end{align*}
We use the bootstrap ansatz to bound the first term and use Lemma \ref{lemma technical on phik} to bound the second term. Therefore, we obtain
\begin{equation*}
\mathbf{S}_1 \lesssim {\mathring{\varepsilon}} r_1^{-6.5+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0}.
\end{equation*}
We can also derive in the same manner that
\begin{equation*}
\mathbf{S}_2 \lesssim {\mathring{\varepsilon}} r_1^{-4.5+2\xi(\mathbf{k})-(6-2|\mathbf{k}|)\varepsilon_0}.
\end{equation*}
By our convention the implicit constant is independent of $R_*$. Since $r_1\geq R_*$, by choosing $R_*$ sufficiently large, we derive the following estimates
\begin{equation}\label{energy identities to be bounded}
\begin{split}
\mathcal{E}^{(\mathbf{k})}(\phi;r_1) \leq \frac{5}{4}\mathring{\varepsilon} r_1^{-6+2\xi(\mathbf{k})-8\varepsilon_0}+\mathbf{R}_1
+\mathbf{T}_1,\\
\mathcal{E}^{(\mathbf{k})}(\phi;p=2;r_1) \leq \frac{5}{4}\mathring{\varepsilon} r_1^{-4+2\xi(\mathbf{k})-8\varepsilon_0}+ \mathbf{R}_2
+\mathbf{T}_2
\end{split}
\end{equation}
with $\mathbf{R}_i$, $\mathbf{T}_i$ defined in \eqref{eq:EE:scal:00} and \eqref{eq:EE:scal:000}.
\subsubsection{Energy estimates on one derivatives of the scalar field}\label{section on 1 derivatives on scalar}
We consider the case where $|\mathbf{k}|=1$. The multi-index $\mathbf{k}$ then represents a vector field $Z\in \Gamma$. In view of \eqref{null form estimate} and the pointwise bounds in Proposition \ref{Proposition pointwise decay of Maxwell} and Proposition \ref{Proposition pointwise decay of scalar field}, we have
\begin{equation*}
|u|^{-\xi(Z)}|Q(\phi,F;Z)| \lesssim \frac{1}{r|u|}|D_L\psi|+\frac{1}{r^2|u|}|\slashed{D}\psi| +\frac{|u|}{r^3}|D_{\underline{L}}\psi|+\frac{1}{r^2}|\phi|.
\end{equation*}
Thus, we have
\begin{equation}\label{the first pointwise bound on Q}
\begin{split}
r^2|\Box_A\phi^{({\bf 1})}|^2 &\lesssim r^2 |Q(\phi,F;Z)|^2\\
&\lesssim |u|^{2\xi({\bf 1})-2}|D_L\psi|^2+|u|^{2\xi({\bf 1})-2}|\slashed{D}\phi|^2+\frac{|u|^{2\xi({\bf 1})+2}}{r^2}|D_{\underline{L}}\phi|^2+\frac{|u|^{2\xi({\bf 1})}}{r^2}|\phi|^2.
\end{split}
\end{equation}
Since $r^{-2}\lesssim |u|^{-2}$, according to the bounds on the zeroth order energy estimates, we have
\begin{equation*}
\begin{split}
\int_{\mathcal{D}_{r_1}} |u|^{2\xi({\bf 1})-2}|D_L\psi|^2 &\leq \int_{u}{|u|^{2\xi({\bf 1})-2}}\Big(\int_{\mathcal{H}_u}|D_L \psi|^2\Big)du\lesssim {\mathring{\varepsilon}} r_1^{2\xi({\bf 1})-5-6\varepsilon_0},
\end{split}
\end{equation*}
\begin{align*}
\int_{\mathcal{D}_{r_1}} |u|^{2\xi({\bf 1})-2}|\slashed{D}\phi|^2 &\lesssim \int_{u}{|u|^{2\xi({\bf 1})-4-2\varepsilon_0}}\Big(\int_{\mathcal{H}_u}|\slashed{D}\phi|^2\Big)du \lesssim {{\mathring{\varepsilon}} r_1^{2\xi({\bf 1})-7-6\varepsilon_0}},
\end{align*}
and
\begin{align*}
\int_{\mathcal{D}_{r_1}}\frac{|u|^{2\xi({\bf 1})+2}}{r^2}|D_{\underline{L}}\phi|^2 &\lesssim \int_{v}{|u|^{2\xi({\bf 1})}}|v|^{-2}\Big(\int_{\underline{\mathcal{H}}_v}|D_{\underline{L}}\phi|^2\Big)dv \lesssim {\mathring{\varepsilon}} r_1^{2\xi({\bf 1})-5-6\varepsilon_0}.
\end{align*}
By Lemma \ref{lemma technical on phik}, we also have
\begin{align*}
\int_{\mathcal{D}_{r_1}} \frac{|u|^{2\xi({\bf 1})}}{r^2}|\phi|^2 &\lesssim {\mathring{\varepsilon}} r_1^{2\xi({\bf 1})-5-6\varepsilon_0}.
\end{align*}
Thus, we have
\begin{equation}\label{the first null form bound on spacetime slab}
\int_{\mathcal{D}_{r_1}} r^2|Q(\phi,F;Z)|^2 \lesssim {\mathring{\varepsilon}}\, r_1^{2\xi({Z})-5-6\varepsilon_0}.
\end{equation}
Let $(\mathbf{2})$ denote two vector fields $Z_1$ and $Z_2$. If we replace $\phi$ by $\widehat{D}_{Z_1}\phi$ in the proof between \eqref{the first pointwise bound on Q} and \eqref{the first null form bound on spacetime slab}, we obtain
\begin{equation}\label{the first null form bound on spacetime slab with one more derivatives}
\int_{\mathcal{D}_{r_1}} r^2|Q(\widehat{D}_{Z_1}\phi,F;Z_2)|^2 \lesssim {\mathring{\varepsilon}}\, r_1^{2\xi(\mathbf{2})-5-6\varepsilon_0}.
\end{equation}
Similarly, we have
\begin{align*}
&\int_{\mathcal{D}_{r_1}}r^{-2}\big(|D_L\phi^{({\bf 1})}|+|D_{\underline{L}}\phi^{({\bf 1})}| \big)^2 \\
&\lesssim \int_{|u|\geq \frac{r_1}{2}} u_+^{-2}\big(\int_{\mathcal{H}_u}|D_L \phi^{({\bf 1})}|^2\big)du+\int_{v}v^{-2}\Big(\int_{\underline{\mathcal{H}}_v}|D_{\underline{L}} \phi^{({\bf 1})}|^2\Big)dv\lesssim {\mathring{\varepsilon}} r_1^{2\xi({\bf 1})-5-4\varepsilon_0}.
\end{align*}
Therefore, we can bound $\mathbf{R}_1$ as follows
\begin{align*}
\mathbf{R}_1 &\leq \Big(\int_{\mathcal{D}_{r_1}} r^2|\Box_A\phi^{({\bf 1})}|^2 \Big)^\frac{1}{2} \Big(\int_{\mathcal{D}_{r_1}} r^{-2}\big(|D_L\phi^{({\bf 1})}|+|D_{\underline{L}}\phi^{({\bf 1})}| \big)^2\Big)^\frac{1}{2} \lesssim {\mathring{\varepsilon}} r_1^{-6+2\xi({\bf 1})-5\varepsilon_0}.
\end{align*}
One can also proceed exactly in the same manner to prove that
\begin{align*}
\mathbf{R}_2 &\lesssim \Big(\int_{\mathcal{D}_{r_1}}r^2 |\Box_A\phi^{({\bf 1})}|^2 \Big)^\frac{1}{2} \Big(\int_{\mathcal{D}_{r_1}} |D_L\psi^{({\bf 1})}|^2\Big)^\frac{1}{2}\lesssim {\mathring{\varepsilon}} r_1^{-4+2\xi({\bf 1})-5\varepsilon_0}
\end{align*}
by using the $r$-weighted energy estimates.
Therefore, for sufficiently large $R_*$, since $r_1\geq R_*$, we have
\begin{equation}\label{energy identities final}
\begin{split}
\mathcal{E}^{{({\bf 1})}}(\phi;r_1) &\lesssim \mathring{\varepsilon} r_1^{-6+2\xi({{\bf 1}})-5\varepsilon_0}+\mathbf{T}_1 \\
\mathcal{E}^{({\bf 1})}(\phi;p=2;r_1) & \lesssim \mathring{\varepsilon} r_1^{-4+2\xi({{\bf 1}})-5\varepsilon_0}.
\end{split}
\end{equation}
At this stage, we first need to control $\mathbf{T}_2$. In view of the definition of $\mathcal{E}^{(\mathbf{k})}(\phi;p=2;r_1)$ and the fact that $|\rho|\lesssim r^{-2}$, the second inequality gives
\begin{equation*}
\int_{\mathcal{H}_{r_1}} |D_L\psi^{({\bf 1})}|^2 \lesssim \mathring{\varepsilon} r_1^{-4+2\xi{({\bf 1})}-5\varepsilon_0}+ \int_{\mathcal{D}_{r_1}} \frac{|D_L\psi^{({\bf 1})}||\psi^{({\bf 1})}|}{r^2}.
\end{equation*}
When we apply Lemma \ref{lemma key} in this case, we change $\varepsilon_0$ to $\frac{1}{2}\varepsilon_0$. This leads to
\begin{equation*}
\int_{\mathcal{H}_{r_1}} |D_L\psi^{({\bf 1})}|^2 \lesssim \mathring{\varepsilon} r_1^{-4+2\xi{({\bf 1})}-4.5\varepsilon_0}.
\end{equation*}
The gain of $r^{-0.5\varepsilon_0}$ can be used to improve the estimates in Lemma \ref{lemma technical on phik}. This gives
\begin{equation}\label{improved est}
\int_{\mathcal{D}_{r_1}} \frac{|\psi^{({\bf 1})}|^2}{r^4} =\int_{\mathcal{D}_{r_1}} \frac{|\phi^{({\bf 1})}|^2}{r^2} \lesssim {\mathring{\varepsilon}} r_1^{-5+2\xi({\bf 1})-4.5\varepsilon_0}.
\end{equation}
Hence,
\begin{equation*}
\mathbf{T}_2 \lesssim \Big(\int_{\mathcal{D}_{r_1}} |D_L\psi^{({\bf 1})}|^2\Big)^\frac{1}{2} \Big(\int_{\mathcal{D}_{r_1}} \frac{|\psi^{({\bf 1})}|^2}{r^4}\Big)^\frac{1}{2}\lesssim \mathring{\varepsilon} r_1^{-4+2\xi{({\bf 1})}-4.5\varepsilon_0}.
\end{equation*}
This improved estimate \eqref{improved est} also allows us to bound $\mathbf{T}_1$ as follows:
\begin{align*}
\mathbf{T}_1 &\lesssim\Big( \underbrace{\int_{\mathcal{D}_{r_1}} r^{-2}|D_L\phi^{({\bf 1})}|^2 +\int_{\mathcal{D}_{r_1}} r^{-2}|D_{\underline{L}}\phi^{({\bf 1})}|^2 }_{\lesssim r_1^{-7+2\xi{({\bf 1})}-4\varepsilon_0}\ \text{by } \mathbf{(B)} }\Big)^\frac{1}{2}\Big(\underbrace{\int_{\mathcal{D}_{r_1}} \frac{|\phi^{({\bf 1})}|^2}{r^2}}_{\lesssim r_1^{-5+2\xi{({\bf 1})}-4.5\varepsilon_0}}\Big)^\frac{1}{2}\\
&\lesssim \mathring{\varepsilon} r_1^{-6+2\xi{({\bf 1})}-4.25\varepsilon_0}.
\end{align*}
Thus, the estimate \eqref{energy identities final} implies
\begin{equation*}
\begin{split}
\mathcal{E}^{{({\bf 1})}}(\phi;r_1) &\lesssim \mathring{\varepsilon} r_1^{-6+2\xi({{\bf 1}})-4.25\varepsilon_0},\ \
\mathcal{E}^{({\bf 1})}(\phi;p=2;r_1) \lesssim \mathring{\varepsilon} r_1^{-4+2\xi({{\bf 1}})-4.5\varepsilon_0}.
\end{split}
\end{equation*}
For sufficiently large $R_*$, we have then closed the bootstrap argument for the first order energy quantities on the scalar field in $\mathbf{(B)}$:
\begin{equation}\label{improved bootstrap first order scalar}
\begin{split}
\mathcal{E}^{{({\bf 1})}}(\phi;r_1)& \leq 2 {\mathring{\varepsilon}}r_1^{-6+2\xi({\bf 1})-4\varepsilon_0},\ \
\mathcal{E}^{{({\bf 1})}}(\phi;p=2;r_1) \leq 2{\mathring{\varepsilon}}r_1^{-4+2\xi({\bf 1})-4\varepsilon_0}.
\end{split}
\end{equation}
\subsubsection{Energy estimates on second derivatives of the scalar field}
We now fix a $\mathbf{k}$ so that $|\mathbf{k}|=2$; the first objective is to bound the terms $\mathbf{R}_1$ and $\mathbf{R}_2$ in \eqref{energy identities to be bounded}. For this purpose, we first recall that, for $({\bf 2})$ representing $\widehat{D}_{Z_1}\widehat{D}_{Z_2}$, we have
\begin{equation*}
\Box_A \phi^{(\mathbf{2})} =Q(\widehat{D}_{Z_1}\phi,F;Z_2) + Q(\widehat{D}_{Z_2}\phi,F;Z_1)+Q(\phi, F;[Z_1,Z_2])+Q(\phi, \mathcal{L}_{Z_1} F;Z_2)-2F_{Z_1\mu}F_{Z_2}{}^{\mu}\phi.
\end{equation*}
For $\mathbf{R}_1$, according to the above expression, we split it into three parts:
\begin{equation*}
\begin{split}
\mathbf{R}_1&\lesssim\int_{\mathcal{D}_{r_1}} \underbrace{\Big(\big|Q(\widehat{D}_{Z_1}\phi,F;Z_2)\big|+\big|Q(\widehat{D}_{Z_2}\phi,F;Z_1)\big|+\big|Q(\phi, F;[Z_1,Z_2])\big|\Big)\big(| D_L \phi^{(\mathbf{2})}|+|D_{\underline{L}}\phi^{(\mathbf{2})}|\big)}_{\mathbf{R}_{11}}\\
& \ \ \ + \int_{\mathcal{D}_{r_1}}\underbrace{\big|Q(\phi, \mathcal{L}_{Z_1} F;Z_2)\big|\big(| D_L \phi^{(\mathbf{2})}|+|D_{\underline{L}}\phi^{(\mathbf{2})}|\big)}_{\mathbf{R}_{12}}+\int_{\mathcal{D}_{r_1}}\underbrace{\big|F_{Z_1\mu}F_{Z_2}{}^{\mu}\phi\big|\big(| D_L \phi^{(\mathbf{2})}|+|D_{\underline{L}}\phi^{(\mathbf{2})}|\big)}_{\mathbf{R}_{13}}
\end{split}
\end{equation*}
All three $Q$-terms in $\mathbf{R}_{11}$ can be schematically written as either $Q(\phi^{({\bf 1})},F;Z)$ or $Q(\phi^{({\bf 0})},F;Z)$, due to the observation that the linear span of $\mathcal{Z}$ is closed under commutators. These terms resemble those in $\mathbf{R}_1$ in Section \ref{section on 1 derivatives on scalar}. Thanks to \eqref{the first null form bound on spacetime slab with one more derivatives}, they can be bounded in exactly the same manner:
\begin{align*}
\int_{\mathcal{D}_{r_1}}\mathbf{R}_{11}
&\lesssim \Big(\int_{\mathcal{D}_{r_1}} r^2\big|Q(\widehat{D}_{Z_1}\phi,F;Z_2)\big|^2+r^2\big|Q(\widehat{D}_{Z_2}\phi,F;Z_1)\big|^2+r^2\big|Q(\phi, F;[Z_1,Z_2])\big|^2 \Big)^\frac{1}{2} \\
&\ \ \ \times\Big(\int_{\mathcal{D}_{r_1}} r^{-2}\big(|D_L\phi^{({\bf 2})}|+|D_{\underline{L}}\phi^{({\bf 2})}| \big)^2\Big)^\frac{1}{2} \lesssim {\mathring{\varepsilon}} r_1^{-6+2\xi({\bf 2})-3\varepsilon_0}.
\end{align*}
Here we remark that, compared with the estimate of $\mathbf{R}_1$ in the last subsection, the loss of a decay power of $\varepsilon_0$ is due to the weaker decay of the second order energy estimates in the bootstrap assumption.
For $\mathbf{R}_{12}$, we use $(\mathbf{1})$ to denote the vector field $Z_1$. According to \eqref{null form estimate} and the pointwise bounds on the scalar field, we have
\begin{equation*}
\begin{split}
|u|^{-\xi(Z_2)}|Q(\phi,\mathcal{L}_{Z_1}F;Z_2)| &\lesssim \big(\frac{r}{|u|}|\rho(\mathcal{L}_{Z_1}F)|+|\underline{\alpha}(\mathcal{L}_{Z_1}F)|\big)|D_L\psi|\\
&\ \ +\big(\frac{r}{|u|}|\alpha(\mathcal{L}_{Z_1}F)| +\frac{|u|}{r}|\underline{\alpha}(\mathcal{L}_{Z_1}F)|+|\sigma(\mathcal{L}_{Z_1}F)|\big)|\slashed{D}\psi| \\
& \ \ + \big(|\alpha(\mathcal{L}_{Z_1}F)|+\frac{|u|}{r}|\rho(\mathcal{L}_{Z_1}F)|\big)|D_{\underline{L}}\psi|+\big(|\rho(\mathcal{L}_{Z_1}F)|+|\sigma(\mathcal{L}_{Z_1}F)|\big)|\phi|\\
& \ \ +\big(\frac{|u|}{r^2}|J(\mathcal{L}_{Z_1}F)_{\underline{L}}|+\frac{1}{|u|}|J(\mathcal{L}_{Z_1}F)_L|+\frac{1}{r}|\slashed{J}(\mathcal{L}_{Z_1}F)|\big)|\phi|.
\end{split}
\end{equation*}
Since $F={\mathring{F}}+F[q_0]$ and $F[q_0]$ solves the linear Maxwell equations, according to \eqref{commutator formula 2}, we have
\begin{equation*}
J(\mathcal{L}_{Z_1}F)_{\underline{L}}=J^{({\bf 1})}_{\underline{L}}, \ J(\mathcal{L}_{Z_1}F)_L=J^{({\bf 1})}_L, \ \slashed{J}(\mathcal{L}_{Z_1}F)=\slashed{J}^{({\bf 1})}.
\end{equation*}
Therefore, according to the pointwise decay for the scalar field, we have
\begin{equation*}
\begin{split}
&\ \ \ \ \ |u|^{-\xi(Z_2)}|Q(\phi,\mathcal{L}_{Z_1}F;Z_2)|\\
&\lesssim \big(\frac{r}{|u|}|\slashed{D}\psi|+|D_{\underline{L}}\psi|\big) |\alpha(\mathcal{L}_{Z_1}F)|+ \big(\frac{r}{|u|}|D_L\psi|+\frac{|u|}{r}|D_{\underline{L}}\psi|+|\phi|\big)|\rho(\mathcal{L}_{Z_1}F)|+\big(|\slashed{D}\psi|+|\phi|\big)|\sigma(\mathcal{L}_{Z_1}F)| \\
&\ \ \ + \big(|D_L\psi|+\frac{|u|}{r}|\slashed{D}\psi|\big) |\underline{\alpha}(\mathcal{L}_{Z_1}F)|+\big(\frac{|u|}{r^2}|J^{({\bf 1})}_{\underline{L}}|+\frac{1}{|u|}|J^{({\bf 1})}_L|+\frac{1}{r}|\slashed{J}^{({\bf 1})}|\big)|\phi| \\
&\lesssim \underbrace{\frac{\sqrt{{\mathring{\varepsilon}}}}{|u|^{3+\varepsilon_0}} |\alpha(\mathcal{L}_{Z_1}F)|}_{\mathbf{A}_0} + \underbrace{\frac{\sqrt{{\mathring{\varepsilon}}}}{r|u|^{2+\varepsilon_0}}|\rho(\mathcal{L}_{Z_1}F)|+\frac{\sqrt{{\mathring{\varepsilon}}}}{r|u|^{2+\varepsilon_0}}|\sigma(\mathcal{L}_{Z_1}F)|}_{\mathbf{A}_1} +\underbrace{\frac{\sqrt{{\mathring{\varepsilon}}}}{r^2|u|^{1+\varepsilon_0}}|\underline{\alpha}(\mathcal{L}_{Z_1}F)|}_{\mathbf{A}_2} \\
&\ \ \ +\underbrace{\frac{\sqrt{{\mathring{\varepsilon}}}}{r|u|^{\frac{7}{2}+2\varepsilon_0}}|J^{({\bf 1})}_L|+\frac{\sqrt{{\mathring{\varepsilon}}}}{r^2|u|^{\frac{5}{2}+2\varepsilon_0}}|\slashed{J}^{({\bf 1})}|}_{\mathbf{A}_{3}}+\underbrace{\frac{\sqrt{{\mathring{\varepsilon}}}}{r^3|u|^{\frac{3}{2}+2\varepsilon_0}}|J^{({\bf 1})}_{\underline{L}}|}_{\mathbf{A}_{4}}.
\end{split}
\end{equation*}
On the other hand, according to Lemma \ref{lemma commuting Z with null decomposition}, we have
\begin{align*}
\big|\alpha(\mathcal{L}_{Z_1}F[q_0])\big|&\leq \big|\mathcal{L}_{Z_1}\big(\alpha(F[q_0])\big)\big|+r^{\xi(Z_1)}\big|\alpha(F[q_0])\big|\lesssim r^{-3+\xi(Z_1)}.
\end{align*}
Hence,
\begin{align*}
\big|\alpha(\mathcal{L}_{Z_1}F)\big|\leq \big|\alpha(\mathcal{L}_{Z_1}{\mathring{F}})\big|+\big|\alpha(\mathcal{L}_{Z_1}F[q_0])\big|\lesssim |{\alpha}^{({\bf 1})}| + r^{-3+\xi(Z_1)}.
\end{align*}
Similarly, we have
\begin{align*}
\big|\underline{\alpha}(\mathcal{L}_{Z_1}F[q_0])\big|&\lesssim r^{-3+\xi(Z_1)},\ \big|\rho(\mathcal{L}_{Z_1}F[q_0])\big|\lesssim r^{-3+\xi(Z_1)},\ \sigma(\mathcal{L}_{Z_1}F[q_0])=0.
\end{align*}
We notice that the estimate on $\rho(\mathcal{L}_{Z_1}F[q_0])$ is as good as those on the other components. This is due to the fact that $\mathcal{L}_Z \big(\frac{1}{r^2}dt\wedge dr\big)=0$ for all $Z \in \mathcal{Z}$. We conclude that
\begin{equation}\label{onederivative of F}
\begin{split}
\big|\alpha(\mathcal{L}_{Z_1}F)\big|&\lesssim |{\alpha}^{({\bf 1})}| + r^{-3+\xi(Z_1)}, \ \big|\underline{\alpha}(\mathcal{L}_{Z_1}F)\big|\lesssim |{\underline{\alpha}}^{({\bf 1})}| + r^{-3+\xi(Z_1)},\\
\big|\rho(\mathcal{L}_{Z_1}F)\big|&\lesssim |{\rho}^{({\bf 1})}| + r^{-3+\xi(Z_1)}, \ \big|\sigma(\mathcal{L}_{Z_1}F)\big|\lesssim |{\sigma}^{({\bf 1})}|.
\end{split}
\end{equation}
We notice that for $Z_1=K$ we lose decay in $r$. For the $\alpha$ component, we can improve the decay in $r$:
\begin{lemma}We have the following improved estimate:
\begin{equation}\label{improved estimate for Fq0}
\big|\alpha(\mathcal{L}_{K}F[q_0])\big|\lesssim r^{-3}|u|.
\end{equation}
\end{lemma}
\begin{proof}We recall the definition for $F[q_0]$:
\[
F[q_0]_{0i}=\partial_{i}V(x),\quad F[q_0]_{ij}=0, \quad \text{for } i,j=1,2,3,
\]
where the potential $V(x)$ is given by
\[
V(x)=\frac{1}{4\pi}\int_{\mathbb{R}^3}(\underbrace{\frac{1}{r}}_{V_1}+\underbrace{\frac{x\cdot y}{r^3}}_{V_2}+\underbrace{\frac{1}{2} \frac{(3|x|^{-2}(x\cdot y)^2-|y|^2)}{r^3}}_{V_3})\Im(\phi_0\cdot \bar \phi_1)dy,\quad |x|>0.
\]
The contribution from $V_3$ is of order $r^{-3}$, so we can ignore it. The contribution from $V_1$ gives the charge part $\frac{1}{r^2}dt\wedge dr$, which vanishes when one takes the $\mathcal{L}_K$ derivative. Thus, we consider
\[
F^{(2)}[q_0]_{0i} = \partial_i\Big(\frac{1}{4\pi}\int_{\mathbb{R}^3} \frac{x\cdot y}{r^3}\Im\big((\phi_0\cdot \bar \phi_1)(y)\big)dy\Big), \ F^{(2)}[q_0]_{ij}=0.
\]
Thus, we have
\begin{equation*}
\alpha(F^{(2)}[q_0])_A=\frac{1}{4\pi}\int_{\mathbb{R}^3} \frac{e_A\cdot y}{r^3}\Im\big((\phi_0\cdot \bar \phi_1)(y)\big)dy.
\end{equation*}
By virtue of the formula for $\mathcal{L}_K$ in Lemma \ref{lemma commuting Z with null decomposition}, we obtain
\begin{equation*}
\alpha(\mathcal{L}_K F^{(2)}[q_0])_A=\frac{1}{4\pi}\int_{\mathbb{R}^3} \frac{(r-t)e_A\cdot y}{r^3}\Im\big((\phi_0\cdot \bar \phi_1)(y)\big)dy.
\end{equation*}
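Since $|r-t|=|u|$, and assuming the initial data decay fast enough that the first moment of $\Im(\phi_0\cdot \bar \phi_1)$ is finite, we can estimate
\begin{equation*}
\big|\alpha(\mathcal{L}_K F^{(2)}[q_0])\big|\lesssim \frac{|u|}{r^{3}}\int_{\mathbb{R}^3}|y|\,\big|\Im\big((\phi_0\cdot \bar \phi_1)(y)\big)\big|dy\lesssim r^{-3}|u|.
\end{equation*}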
This completes the proof of the lemma.
\end{proof}
As a corollary, we have
\begin{equation}\label{precise bound on alpha F Fq0}
\big|\alpha(\mathcal{L}_{Z_1}F)\big|\lesssim |{\alpha}^{({\bf 1})}| + r^{-3}|u|^{\xi(Z_1)}.
\end{equation}
\begin{lemma}We have the following spacetime estimates:
\begin{equation}\label{bound on r box phi 2}
\|rQ(\phi,\mathcal{L}_{Z_1}F;Z_2)\|_{L^2(\mathcal{D}_{r_1})} \lesssim \sqrt{\mathring{\varepsilon}} r_1^{\xi({\bf 2})-3-\varepsilon_0}.
\end{equation}
\end{lemma}
\begin{proof}With the help of \eqref{onederivative of F} and \eqref{improved estimate for Fq0}, we can bound the terms $\int_{\mathcal{D}_{r_1}} r^2|u|^{2\xi(Z_2)}|\mathbf{A}_i|^2$ one by one. This will prove the lemma:
\begin{equation*}
\begin{split}
\int_{\mathcal{D}_{r_1}} r^2|u|^{2\xi(Z_2)}|\mathbf{A}_0|^2 &\lesssim {\mathring{\varepsilon}}\int_{\mathcal{D}_{r_1}}{|u|^{2\xi({Z_2})-6-2\varepsilon_0}}\big(r^2|{\alpha}^{({\bf 1})}|^2 +r^{-4}|u|^{2\xi(Z_1)}\big) \\
&\lesssim {\mathring{\varepsilon}}\int_{r_1}^\infty{|u|^{2\xi({Z_2})-6-2\varepsilon_0}}\Big(\int_{\mathcal{H}_{r_2}}\big(r^2|{\alpha}^{({\bf 1})}|^2 +r^{-4}|u|^{2\xi(Z_1)}\big)\Big)dr_2.
\end{split}
\end{equation*}
For $\mathbf{A}_1$, we have
\begin{equation*}
\begin{split}
\int_{\mathcal{D}_{r_1}} r^2|u|^{2\xi(Z_2)}|\mathbf{A}_1|^2 &\lesssim {\mathring{\varepsilon}}\int_{\mathcal{D}_{r_1}}{|u|^{2\xi({Z_2})-4-2\varepsilon_0}}\big(|{\rho}^{({\bf 1})}|^2 +|{\sigma}^{({\bf 1})}|^2 + r^{-6+2\xi(Z_1)}\big) \\
&\lesssim {\mathring{\varepsilon}}\int_{r_1}^\infty{|u|^{2\xi({Z_2})-4-2\varepsilon_0}}\Big(\int_{\mathcal{H}_{r_2}}\big(|{\rho}^{({\bf 1})}|^2 +|{\sigma}^{({\bf 1})}|^2 + r^{-6+2\xi(Z_1)}\big)\Big)dr_2.
\end{split}
\end{equation*}
For $\mathbf{A}_2$, we have
\begin{equation*}
\begin{split}
\int_{\mathcal{D}_{r_1}} r^2|u|^{2\xi(Z_2)}|\mathbf{A}_2|^2 &\lesssim {\mathring{\varepsilon}}\int_{\mathcal{D}_{r_1}}{|u|^{2\xi({Z_2})-2-2\varepsilon_0}}r^{-2}\big(|{\underline{\alpha}}^{({\bf 1})}|^2 + r^{-6+2\xi(Z_1)}\big) \\
&\lesssim {\mathring{\varepsilon}}\int_{\frac{r_1}{2}}^\infty{r_1^{2\xi({Z_2})-4-2\varepsilon_0}}v^{-2}\Big(\int_{\underline{\mathcal{H}}_{2v}^{r_1}}\big(|{\underline{\alpha}}^{({\bf 1})}|^2 + r^{-6+2\xi(Z_1)}\big)\Big)dv.
\end{split}
\end{equation*}
For $\mathbf{A}_{3}$, based on the ansatz $\mathbf{(C)}$, we can proceed in the same manner to obtain
\begin{align*}
\int_{\mathcal{D}_{r_1}} r^2|u|^{2\xi(Z_2)}|\mathbf{A}_3|^2& \lesssim {\mathring{\varepsilon}}\int_{\mathcal{D}_{r_1}}{|u|^{2\xi({Z_2})-7-4\varepsilon_0}} |J^{({\bf 1})}_L|^2+|u|^{2\xi({Z_2})-5-4\varepsilon_0}\frac{|\slashed{J}^{({\bf 1})}|^2}{r^2}\\
&\lesssim{\mathring{\varepsilon}}\int_{r_1}^\infty \Big({|u|^{2\xi({Z_2})-7-4\varepsilon_0}} \int_{\mathcal{H}_{r_2}}|J^{({\bf 1})}_L|^2+|u|^{2\xi({Z_2})-5-4\varepsilon_0}\int_{\mathcal{H}_{r_2}}\frac{|\slashed{J}^{({\bf 1})}|^2}{r^2}\Big)dr_2.
\end{align*}
All the terms on the right-hand sides of the above four inequalities can now be integrated; they are all bounded by ${\mathring{\varepsilon}} r_1^{2\xi({\bf 2})-6-2\varepsilon_0}$.
For $\mathbf{A}_4$, letting the vector field $Z$ represent the index $(\mathbf{1})$, we have
\begin{align*}
J^{(\mathbf{1})} &= \mathcal{L}_Z (r^2 J)=\mathcal{L}_Z(\Im(\overline{\psi}\cdot D\psi )).
\end{align*}
Since
\begin{equation*}
\mathcal{L}_{Z}\big(\overline{\psi}\cdot D\psi\big)_\mu = \overline{D_Z\psi} \cdot D_\mu\psi + (D_\mu \log(r))\overline{\psi}\cdot D_Z \psi+r\overline{\psi} \cdot D_\mu \big(\widehat{D}_Z\phi\big) +i F_{Z\mu}|\psi|^2,
\end{equation*}
we have
\begin{equation}\label{formula and bound on J1}
{|J^{(\mathbf{1})}_\mu|^2}\lesssim r^4|\phi|^2|D_\mu\big(\widehat{D}_Z\phi\big)|^2 + \big(|D_\mu \psi|^2+|\phi|^2\big)|D_Z\psi|^2 + r^4|F_{Z\mu}|^2|\phi|^4.
\end{equation}
In particular, we have
\begin{align*}
\int_{\mathcal{D}_{r_1}} r^2|u|^{2\xi(Z_2)}|\mathbf{A}_4|^2& \lesssim {\mathring{\varepsilon}}\int_{\mathcal{D}_{r_1}}{|u|^{2\xi({Z_2})-3-4\varepsilon_0}}\big(|\phi|^2|D_{\underline{L}} \big(\widehat{D}_{Z_1}\phi\big)|^2 + \frac{\big(|D_{\underline{L}} \psi|^2+|\phi|^2\big)|D_{Z_1}\psi|^2}{r^4}+ |F_{{Z_1}{\underline{L}}}|^2|\phi|^4\big).
\end{align*}
In view of the pointwise bounds, we can then use the following crude bound for $D_{Z_1}\psi$ and $F_{Z_1{\underline{L}}}$:
\begin{equation*}
|D_{Z_1}\psi|\lesssim |u|^{\xi(Z_1)-1-\varepsilon_0},\ \ |F_{Z_1{\underline{L}}}| \lesssim r^{\xi(Z_1)-1}|u|^{-1-\varepsilon_0}.
\end{equation*}
Therefore, we obtain
\begin{align*}
\int_{\mathcal{D}_{r_1}} r^2|u|^{2\xi(Z_2)}|\mathbf{A}_4|^2& \lesssim {\mathring{\varepsilon}}\int_{\mathcal{D}_{r_1}}{|u|^{2\xi({Z_2})-5-4\varepsilon_0}}\big(\frac{|D_{\underline{L}} \big(\widehat{D}_{Z_1}\phi\big)|^2+|D_{\underline{L}} \phi|^2}{r^2} +r^{-4}|u|^{-1}\mathring{\varepsilon}^2\big)\\
&\lesssim \mathring{\varepsilon}^2 r_1^{2\xi({\bf 2})-6-2\varepsilon_0},
\end{align*}
where we bound $D_{\underline{L}}(\widehat{D}_{Z_1}\phi)$ and $D_{\underline{L}} \phi$ on $\underline{\mathcal{H}}_{r_2}$ as before.
We complete the proof by putting the estimates of the $\mathbf{A}_i$'s all together.
\end{proof}
The term $\mathbf{R}_{12}$ can be easily bounded by the lemma:
\begin{align*}
\int_{\mathcal{D}_{r_1}} \mathbf{R}_{12} & \lesssim \|rQ(\phi,\mathcal{L}_{Z_1}F;Z_2)\|_{L^2(\mathcal{D}_{r_1})} \Big(\int_{\mathcal{D}_{r_1}}r^{-2}(| D_L \phi^{(\mathbf{2})}|^2+|D_{\underline{L}}\phi^{(\mathbf{2})}|^2)\Big)^{\frac{1}{2}}\\
&\lesssim {\mathring{\varepsilon}} r_1^{2\xi({\bf 2})-6.5-2\varepsilon_0}.
\end{align*}
To bound $\mathbf{R}_{13}$, we need the following lemma:
\begin{lemma}We have the following estimates:
\begin{equation}\label{bound on r box phi 2 2}
\|r F_{Z_1 \mu}F_{Z_2}{}^\mu\phi\|_{L^2(\mathcal{D}_{r_1})} \lesssim \sqrt{\mathring{\varepsilon}} r_1^{\xi({\bf 2})-2.5-2\varepsilon_0}.
\end{equation}
\end{lemma}
\begin{proof}
According to the different choices of $Z_1$ and $Z_2$, we have
\begin{itemize}
\item[Case 1]$(Z_1,Z_2) =(\Omega, \Omega')$. We have
\begin{align*}
r |F_{\Omega \mu} F_{\Omega'}{}^\mu \phi|&\lesssim r^3|\phi|\big( |\alpha||\underline{\alpha}|+|\sigma|^2\big)\lesssim\frac{\sqrt{{\mathring{\varepsilon}}}}{r^{4}|u|^{\frac{5}{2}+2\varepsilon_0}}.
\end{align*}
\item[Case 2]$Z_1=\Omega$ and $Z_2 =v^{1+\xi(Z_2)}L+u^{1+\xi(Z_2)}{\underline{L}}$. Thus,
\begin{align*}
r|F_{\Omega \mu} F_{Z_2}{}^\mu\phi|&\lesssim r^2|\phi|\big(|\sigma|+|\rho|\big)\big(v^{1+\xi(Z_2)}|\alpha|+u^{1+\xi(Z_2)}|\underline{\alpha}|\big)\lesssim \frac{\sqrt{{\mathring{\varepsilon}}}}{r^{3-\xi(Z_2)}|u|^{\frac{5}{2}+2\varepsilon_0}}
\end{align*}
\item[Case 3]$Z_1 =v^{1+\xi(Z_1)}L+u^{1+\xi(Z_1)}{\underline{L}}$ and $Z_2 =v^{1+\xi(Z_2)}L+u^{1+\xi(Z_2)}{\underline{L}}$. We have
\begin{align*}
|F_{Z_1 \mu} F_{Z_2}{}^\mu|&\lesssim v^{2+\xi(Z_1)+\xi(Z_2)}|\alpha|^2+|u|^{2+\xi(Z_1)+\xi(Z_2)}|\underline{\alpha}|^2\\
&+\big( |u|^{1+\xi(Z_1)} v^{1+\xi(Z_2)}+ |u|^{1+\xi(Z_2)} v^{1+\xi(Z_1)}\big)\big(|\alpha||\underline{\alpha}|+|\rho|^2\big).
\end{align*}
Thus, we have
\begin{align*}
r|F_{Z_1 \mu} F_{Z_2}{}^\mu \phi|&\lesssim \sqrt{\mathring{\varepsilon}}|u|^{-\frac{5}{2}-2\varepsilon_0}\big(r^{-4+\xi{(\mathbf{2})}}+|u|^{-4+\xi{(\mathbf{2})}}r^{-2}\mathring{\varepsilon}+|u|^{1+\xi(Z_1)} r^{-3+\xi(Z_2)}+ |u|^{1+\xi(Z_2)} r^{-3+\xi(Z_1)}\big).
\end{align*}
\end{itemize}
Then we can simply integrate the above pointwise bounds to conclude.
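For instance, in Case 1, assuming the volume element on $\mathcal{D}_{r_1}$ is comparable to $r^2\,du\,dv\,d\omega$ and that $|u|\geq r_1$ and $v\gtrsim r_1$ on $\mathcal{D}_{r_1}$, the pointwise bound yields schematically
\begin{equation*}
\int_{\mathcal{D}_{r_1}} r^2\big|F_{\Omega \mu} F_{\Omega'}{}^{\mu}\phi\big|^2 \lesssim \mathring{\varepsilon}\int_{\mathcal{D}_{r_1}} r^{-8}|u|^{-5-4\varepsilon_0}\,r^2\,du\,dv \lesssim \mathring{\varepsilon}\, r_1^{-5}\cdot r_1^{-4-4\varepsilon_0}=\mathring{\varepsilon}\, r_1^{-9-4\varepsilon_0},
\end{equation*}
which is consistent with \eqref{bound on r box phi 2 2}, provided $\xi(\mathbf{2})\geq 0$ as in the cases at hand.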
\end{proof}
This lemma leads to the estimate of $\mathbf{R}_{13}$:
\begin{equation*}
\begin{split}
\int_{\mathcal{D}_{r_1}} \mathbf{R}_{13} & \lesssim \|r F_{Z_1 \mu}F_{Z_2}{}^\mu\phi\|_{L^2(\mathcal{D}_{r_1})} \Big(\int_{\mathcal{D}_{r_1}}r^{-2}(| D_L \phi^{(\mathbf{2})}|^2+|D_{\underline{L}}\phi^{(\mathbf{2})}|^2)\Big)^{\frac{1}{2}}\\
&\lesssim {\mathring{\varepsilon}} r_1^{2\xi({\bf 2})-6-1.5\varepsilon_0}.
\end{split}
\end{equation*}
Finally, from the estimates of $\mathbf{R}_{11},\mathbf{R}_{12}$ and $\mathbf{R}_{13}$, we conclude that
\begin{align*}
\mathbf{R}_1 & \lesssim {\mathring{\varepsilon}} r_1^{-6+2\xi({\bf 2})-1.5\varepsilon_0}.
\end{align*}
Based on \eqref{bound on r box phi 2} and \eqref{bound on r box phi 2 2}, one can also proceed exactly in the same manner to prove that
\begin{align*}
\mathbf{R}_2 &\lesssim {\mathring{\varepsilon}} r_1^{-4+2\xi({\bf 2})-1.5\varepsilon_0}.
\end{align*}
Therefore, for sufficiently large $R_*$, since $r_1\geq R_*$, we have
\begin{equation*}
\begin{split}
\mathcal{E}^{{({\bf 2})}}(\phi;r_1) &\lesssim \mathring{\varepsilon} r_1^{-6+2\xi({{\bf 2}})-1.5\varepsilon_0}+\int_{\mathcal{D}_{r_1}}\underbrace{|\rho|\big(|D_L\phi^{({\bf 2})}|+|D_{\underline{L}}\phi^{({\bf 2})}|\big)|\phi^{({\bf 2})}|}_{\mathbf{T}_1},\\
\mathcal{E}^{({\bf 2})}(\phi;p=2;r_1) & \lesssim \mathring{\varepsilon} r_1^{-4+2\xi({{\bf 2}})-1.5\varepsilon_0}+ \int_{\mathcal{D}_{r_1}}\underbrace{|\rho||D_L\psi^{({\bf 2})}||\psi^{({\bf 2})}|}_{\mathbf{T}_2}.
\end{split}
\end{equation*}
For $\mathbf{T}_1$ and $\mathbf{T}_2$ we can then proceed exactly as in the previous subsection. Finally, for sufficiently large $R_*$, we can close the bootstrap argument for the second order energy quantities on the scalar field in $\mathbf{(B)}$:
\begin{equation}\label{improved bootstrap second order scalar}
\begin{split}
\mathcal{E}^{{({\bf 2})}}(\phi;r_1)& \leq 2 {\mathring{\varepsilon}}r_1^{-6+2\xi({\bf 2})-2\varepsilon_0},\ \
\mathcal{E}^{{({\bf 2})}}(\phi;p=2;r_1) \leq 2{\mathring{\varepsilon}}r_1^{-4+2\xi({\bf 2})-2\varepsilon_0}.
\end{split}
\end{equation}
\subsubsection{The estimates on the current terms}
We now recover the estimates for the current terms in $(\mathbf{C})$.
\medskip
\underline{$\blacktriangleright$~~Zeroth order estimates}
\medskip
For $\mathbf{k}=(\mathbf{0})$, we have $J^{(\mathbf{0})} = r^2 J = \Im(\overline{\psi}\cdot D\psi )$. According to the pointwise bound on $\phi$, we have
\begin{align*}
|J^{(\mathbf{0})}_L|&\lesssim \mathring{\varepsilon} r^{-2}|u|^{-\frac{7}{2}-3\varepsilon_0},\ \ |\slashed{J}^{(\mathbf{0})}|\lesssim \mathring{\varepsilon} r^{-1}|u|^{-\frac{9}{2}-3\varepsilon_0}, \ \ |J^{(\mathbf{0})}_{\underline{L}}| \lesssim \mathring{\varepsilon} |u|^{-\frac{11}{2}-3\varepsilon_0}.
\end{align*}
We can then directly integrate these bounds to obtain
\begin{align*}
& r_1^{-2}\int_{\mathcal{H}_{r_1}}{|J^{(\mathbf{0})}_L|^2}+\int_{\mathcal{H}_{r_1}}\frac{|\slashed{J}^{(\mathbf{0})}|^2}{r^2}+\sup_{r_2 \geq r_1}\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} \frac{|\slashed{J}^{(\mathbf{0})}|^2}{r^2} +\sup_{r_2 \geq r_1}r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} \frac{|J^{(\mathbf{0})}_{\underline{L}}|^2}{r^{\frac{7}{2}}}\lesssim {\mathring{\varepsilon}}^2 r_1^{-10+2\xi(\mathbf{0})-6\varepsilon_0}.
\end{align*}
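For instance, the first term can be checked schematically: assuming that $|u|=r_1$ on $\mathcal{H}_{r_1}$ and that the area element there is comparable to $r^2\,dv\,d\omega$, we compute
\begin{equation*}
r_1^{-2}\int_{\mathcal{H}_{r_1}}|J^{(\mathbf{0})}_L|^2 \lesssim \mathring{\varepsilon}^2\, r_1^{-2}\, r_1^{-7-6\varepsilon_0}\int_{r_1}^{\infty} r^{-4}\cdot r^{2}\,dv \lesssim \mathring{\varepsilon}^2\, r_1^{-10-6\varepsilon_0},
\end{equation*}
which matches the stated bound if one adopts the convention $\xi(\mathbf{0})=0$.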
Thus, for sufficiently large $R_*$, it immediately closes $(\mathbf{C})$ for $\mathbf{k}=(\mathbf{0})$.
\medskip
\underline{$\blacktriangleright$~~First order estimates}
\medskip
For $\mathbf{k}=(\mathbf{1})$, we use a vector field $Z$ to represent this index. Notice that we have
\begin{align*}
J^{(\mathbf{1})}_\mu&=\Im\Big(\overline{D_Z\psi} \cdot D_\mu\psi + \overline{\psi}\cdot D_\mu(r\widehat{D}_Z\phi) +i F_{Z\mu}|\psi|^2\Big)\\
&=\Im\Big(\overline{\psi^{(\mathbf{1})}} \cdot D_\mu\psi + \overline{\psi}\cdot D_\mu \psi^{(\mathbf{1})} \Big)+ F_{Z\mu}|\psi|^2.
\end{align*}
$\blacklozenge$~For $J^{(\mathbf{1})}_L$, we have
\begin{align*}
|J^{(\mathbf{1})}_L|&\lesssim |\psi^{(\mathbf{1})}||D_L\psi| + |\psi||D_L \psi^{(\mathbf{1})}|+ |F_{ZL}||\psi|^2\\
&\lesssim \underbrace{\sqrt{\mathring{\varepsilon}}r^{-1}|u|^{-1-\varepsilon_0}|\phi^{(\mathbf{1})}|}_{\mathbf{I}_{L1}} + \underbrace{\sqrt{\mathring{\varepsilon}}|u|^{-\frac{5}{2}-2\varepsilon_0}|D_L \psi^{(\mathbf{1})}|}_{\mathbf{I}_{L2}}+\underbrace{{\mathring{\varepsilon}} |u|^{-5-4\varepsilon_0}|F_{ZL}|}_{\mathbf{I}_{L3}}.
\end{align*}
For $|\mathbf{k}|\leq 2$, by Lemma \ref{Sobolev trace estimates on outgoing null hypersurfaces}, we have
\begin{equation}\label{L^2 estimates on spheres for k=1 2}
\begin{split}
\|\phi^{(\mathbf{k})}\|^2_{L^2(\mathcal{S}_{r_1}^{r_2})}&\lesssim \|\phi^{(\mathbf{k})}\|^2_{L^2(\mathcal{S}_{r_1}^{r_1})}+\frac{1}{r_1}\int_{\mathcal{H}_{r_1}} |D_L \psi^{(\mathbf{k})}|^2 \lesssim {\mathring{\varepsilon}} r_1^{-5+2\xi(\mathbf{k})-2\varepsilon_0}.
\end{split}
\end{equation}
For $\mathbf{I}_{L1}$, we have
\begin{align*}
r_1^{-2}\int_{\mathcal{H}_{r_1}} |\mathbf{I}_{L1}|^2&\lesssim \mathring{\varepsilon} r_1^{-2} |u|^{-2-2\varepsilon_0} \int_{r_1}^{r_2}r^{-2}\big(\int_{\mathcal{S}_{r_1}^r}|\phi^{(\mathbf{1})}|^2\big) dr
\lesssim \mathring{\varepsilon}^2r_1^{-10+2\xi(\mathbf{1})-4\varepsilon_0}.
\end{align*}
For $\mathbf{I}_{L2}$, we have
\begin{align*}
r_1^{-2}\int_{\mathcal{H}_{r_1}} |\mathbf{I}_{L2}|^2&\lesssim \mathring{\varepsilon} r_1^{-2} |u|^{-5-4\varepsilon_0} \int_{\mathcal{H}_{r_1}}|D_L \psi^{(\mathbf{1})}|^2\lesssim \mathring{\varepsilon}^2r_1^{-11+2\xi(\mathbf{1})-6\varepsilon_0}.
\end{align*}
For $\mathbf{I}_{L3}$, we have two cases:
\begin{equation*}
\mathbf{I}_{L3} \leq \begin{cases}{\mathring{\varepsilon}} |u|^{-5-4\varepsilon_0}r|\alpha|\lesssim {\mathring{\varepsilon}} |u|^{-5-4\varepsilon_0}r^{-2}, \ \text{if} \ Z=\Omega;\\
{\mathring{\varepsilon}} |u|^{-5-4\varepsilon_0}|u|^{1+\xi(\mathbf{1})}|\rho|\lesssim {\mathring{\varepsilon}} |u|^{-4+\xi(\mathbf{1})-4\varepsilon_0}r^{-2}, \ \text{if} \ Z=v^{1+\xi(\mathbf{1})}L+u^{1+\xi(\mathbf{1})}{\underline{L}}.
\end{cases}
\end{equation*}
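For instance, in the case $Z=\Omega$, assuming $|u|=r_1$ on $\mathcal{H}_{r_1}$ and an area element comparable to $r^2\,dv\,d\omega$, direct integration gives
\begin{equation*}
r_1^{-2}\int_{\mathcal{H}_{r_1}}|\mathbf{I}_{L3}|^2 \lesssim \mathring{\varepsilon}^2\, r_1^{-2}\,r_1^{-10-8\varepsilon_0}\int_{r_1}^{\infty} r^{-4}\cdot r^{2}\,dv \lesssim \mathring{\varepsilon}^2\, r_1^{-13-8\varepsilon_0},
\end{equation*}
which is more than enough.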
In both cases, we can directly integrate the pointwise bounds on $\mathcal{H}_{r_1}$. Therefore, the contribution from $\mathbf{I}_{L3}$ is also bounded by $\mathring{\varepsilon}^2r_1^{-11+2\xi(\mathbf{1})-8\varepsilon_0}$.
Hence, we conclude that
\begin{align*}
r_1^{-2}\int_{\mathcal{H}_{r_1}}{|J^{(\mathbf{1})}_L|^2} \lesssim {\mathring{\varepsilon}}^2 r_1^{-10+2\xi(\mathbf{1})-4\varepsilon_0}.
\end{align*}
$\blacklozenge$~For $\slashed{J}^{(\mathbf{1})}$, we have
\begin{align*}
|\slashed{J}^{(\mathbf{1})}|&\lesssim |\psi^{(\mathbf{1})}||\slashed{D}\psi| + |\psi||\slashed{D} \psi^{(\mathbf{1})}|+ |F_{ZA}||\psi|^2\\
&\lesssim \underbrace{\sqrt{\mathring{\varepsilon}}|u|^{-2-\varepsilon_0}|\phi^{(\mathbf{1})}|}_{\slashed{\mathbf{I}}_{1}} + \underbrace{\sqrt{\mathring{\varepsilon}}r|u|^{-\frac{5}{2}-2\varepsilon_0}|\slashed{D} \phi^{(\mathbf{1})}|}_{\slashed{\mathbf{I}}_{2}}+\underbrace{{\mathring{\varepsilon}} |u|^{-5-4\varepsilon_0}|F_{ZA}|}_{\slashed{\mathbf{I}}_{3}}.
\end{align*}
For $\slashed{\mathbf{I}}_{1}$, according to \eqref{L^2 estimates on spheres for k=1 2}, we have
\begin{align*}
\int_{\mathcal{H}_{r_1}} \frac{|\slashed{\mathbf{I}}_{1}|^2}{r^2}+\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} \frac{|\slashed{\mathbf{I}}_{1}|^2}{r^2}&\lesssim \mathring{\varepsilon} \int_{r_1}^{r_2}r^{-2}|u|^{-4-2\varepsilon_0} \big(\int_{\mathcal{S}_{r_1}^r}|\phi^{(\mathbf{1})}|^2\big) dr \lesssim \mathring{\varepsilon}^2r_1^{-10+2\xi(\mathbf{1})-2\varepsilon_0}.
\end{align*}
For $\slashed{\mathbf{I}}_{2}$, we have
\begin{align*}
\int_{\mathcal{H}_{r_1}} \frac{|\slashed{\mathbf{I}}_{2}|^2}{r^2}+\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} \frac{|\slashed{\mathbf{I}}_{2}|^2}{r^2}&\lesssim \mathring{\varepsilon} |u|^{-5-4\varepsilon_0} \Big(\int_{\mathcal{H}_{r_1}}|\slashed{D} \phi^{(\mathbf{1})}|^2+ \int_{\underline{\mathcal{H}}_{r_1}^{r_2}}|\slashed{D} \phi^{(\mathbf{1})}|^2\Big)
\lesssim \mathring{\varepsilon}^2r_1^{-11+2\xi(\mathbf{1})-8\varepsilon_0}.
\end{align*}
For $\slashed{\mathbf{I}}_{3}$, we have two cases:
\begin{equation*}
|\slashed{\mathbf{I}}_{3}| \lesssim \begin{cases}{\mathring{\varepsilon}} |u|^{-5-4\varepsilon_0}r|\mathring{\sigma}|\lesssim {\mathring{\varepsilon}}^\frac{3}{2} |u|^{-7-6\varepsilon_0}r^{-1}, \ \text{if} \ Z=\Omega;\\
{\mathring{\varepsilon}} |u|^{-5-4\varepsilon_0}\big(r^{-2+\xi(\mathbf{1})}+\sqrt{\varepsilon_0}r^{-1}|u|^{-2+\xi(\mathbf{1})-\varepsilon_0}\big), \ \text{if} \ Z=v^{1+\xi(\mathbf{1})}L+u^{1+\xi(\mathbf{1})}{\underline{L}}.
\end{cases}
\end{equation*}
In both cases, the contribution of $\slashed{\mathbf{I}}_{3}$ can be estimated by directly integrating the above bounds, and it is bounded by $\mathring{\varepsilon}^2r_1^{-13+2\xi(\mathbf{1})-8\varepsilon_0}$.
Hence, we conclude that
\begin{align*}
\int_{\mathcal{H}_{r_1}} \frac{|\slashed{J}^{(\mathbf{1})}|^2}{r^2}+\sup_{r_2\geq r_1}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} \frac{|\slashed{J}^{(\mathbf{1})}|^2}{r^2}&\lesssim \mathring{\varepsilon}^2r_1^{-10+2\xi(\mathbf{1})-2\varepsilon_0}.
\end{align*}
$\blacklozenge$~For $J^{(\mathbf{1})}_{\underline{L}}$, we have
\begin{align*}
|J^{(\mathbf{1})}_{\underline{L}}|&\lesssim |\psi^{(\mathbf{1})}||D_{\underline{L}}\psi| + |\psi||D_{\underline{L}} \psi^{(\mathbf{1})}|+ |F_{Z{\underline{L}}}||\psi|^2\\
&\lesssim \underbrace{\sqrt{\mathring{\varepsilon}}r|u|^{-\frac{5}{2}-2\varepsilon_0}|\phi^{(\mathbf{1})}|}_{\mathbf{I}_{\Lb1}} + \underbrace{\sqrt{\mathring{\varepsilon}}|u|^{-\frac{5}{2}-2\varepsilon_0}|D_{\underline{L}} \psi^{(\mathbf{1})}|}_{\mathbf{I}_{\Lb2}}+\underbrace{{\mathring{\varepsilon}} |u|^{-5-4\varepsilon_0}|F_{Z{\underline{L}}}|}_{\mathbf{I}_{\Lb3}}.
\end{align*}
For $\mathbf{I}_{\Lb1}$, according to \eqref{L^2 estimates on spheres for k=1 2}, we have
\begin{align*}
r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} \frac{|{\mathbf{I}}_{\Lb1}|^2}{r^{\frac{7}{2}}}&\lesssim \mathring{\varepsilon} r_1^{\frac{3}{2}}\int_{r_1}^{r_2}r^{-\frac{3}{2}}r_1^{-5-4\varepsilon_0} \big(\int_{\mathcal{S}_{r_1}^r}|\phi^{(\mathbf{1})}|^2\big) dr \lesssim \mathring{\varepsilon}^2r_1^{-10+2\xi(\mathbf{1})-6\varepsilon_0}.
\end{align*}
For $\mathbf{I}_{\Lb2}$, we first notice that $|D_{\underline{L}} \psi^{(\mathbf{1})}|\lesssim r |D_{\underline{L}} \phi^{(\mathbf{1})}|+|\phi^{(\mathbf{1})}|$. The contribution from $|\phi^{(\mathbf{1})}|$ can be ignored since it has already been treated in $\mathbf{I}_{\Lb1}$. Thus, modulo this term, we have
\begin{align*}
r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} \frac{|{\mathbf{I}}_{\Lb2}|^2}{r^{\frac{7}{2}}}&\lesssim \mathring{\varepsilon} r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} r^{-\frac{3}{2}}|u|^{-5-4\varepsilon_0}|D_{\underline{L}}\phi^{(\mathbf{1})}|^2 \lesssim \mathring{\varepsilon}^2r_1^{-11+2\xi(\mathbf{1})-6\varepsilon_0}.
\end{align*}
For $\mathbf{I}_{\Lb3}$, we have two cases:
\begin{equation*}
|\mathbf{I}_{\Lb3}| \leq\begin{cases} {\mathring{\varepsilon}} |u|^{-5-4\varepsilon_0}r|\underline{\alpha}|\lesssim {\mathring{\varepsilon}} |u|^{-5-4\varepsilon_0}r^{-2}+{\mathring{\varepsilon}}^\frac{3}{2} |u|^{-8-5\varepsilon_0}, \ \text{if} \ Z=\Omega;\\
{\mathring{\varepsilon}} |u|^{-5-4\varepsilon_0}|v|^{1+\xi(\mathbf{1})}|\rho|\lesssim {\mathring{\varepsilon}}|u|^{-5-4\varepsilon_0}r^{-1+\xi(\mathbf{1})} ,\ \text{if} \ Z=v^{1+\xi(\mathbf{1})}L+u^{1+\xi(\mathbf{1})}{\underline{L}}.
\end{cases}
\end{equation*}
In both cases, we can directly integrate the pointwise bounds on $\underline{\mathcal{H}}_{r_1}^{r_2}$ to bound $r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} \frac{|\mathbf{I}_{\Lb3}|^2}{r^{\frac{7}{2}}}$ by $\mathring{\varepsilon}^2r_1^{-11+2\xi(\mathbf{1})-2\varepsilon_0}$.
Hence, we conclude that
\begin{align*}
\sup_{r_2\geq r_1}r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} \frac{|J^{(\mathbf{1})}_{\underline{L}}|^2}{r^{\frac{7}{2}}} &\lesssim \mathring{\varepsilon}^2r_1^{-10+2\xi(\mathbf{1})-2\varepsilon_0}.
\end{align*}
Putting all the estimates together, we obtain that
\begin{align*}
& r_1^{-2}\int_{\mathcal{H}_{r_1}}{|J^{(\mathbf{1})}_L|^2}+\int_{\mathcal{H}_{r_1}}\frac{|\slashed{J}^{(\mathbf{1})}|^2}{r^2}+\sup_{r_2 \geq r_1}\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} \frac{|\slashed{J}^{(\mathbf{1})}|^2}{r^2} +\sup_{r_2 \geq r_1}r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_2}^{r_1}} \frac{|J^{(\mathbf{1})}_{\underline{L}}|^2}{r^{\frac{7}{2}}}\lesssim {\mathring{\varepsilon}}^2 r_1^{-10+2\xi(\mathbf{1})-2\varepsilon_0}.
\end{align*}
Thus, for sufficiently large $R_*$, it immediately closes $(\mathbf{C})$ for $\mathbf{k}=(\mathbf{1})$.
\medskip
\underline{$\blacktriangleright$~~Second order estimates}
\medskip
For $\mathbf{k}=(\mathbf{2})$, we assume that the vector fields $Z_1$ and $Z_2$ represent this index. We also use $(\mathbf{1})$ to denote $Z_1$ and $(\mathbf{1'})$ to denote $Z_2$. We first derive a formula for $J^{(\mathbf{2})} =\mathcal{L}_{Z_1}\mathcal{L}_{Z_2}(\Im(\overline{\psi}\cdot D\psi ))$. Indeed, we have
\begin{align*}
\mathcal{L}_{Z_1}\mathcal{L}_{Z_2}(\overline{\psi}\cdot D\psi )_\mu&= \overline{\psi} \cdot D_\mu D_{Z_1} D_{Z_2}\psi+ \overline{ D_{Z_1} D_{Z_2}\psi}\cdot D_\mu\psi + \overline{D_{Z_2}\psi} \cdot D_\mu D_{Z_1} \psi+ \overline{D_{Z_1}\psi} \cdot D_\mu D_{Z_2} \psi\\
&+ 2i\Re\big(\overline{D_{Z_{2}}\psi}\cdot \psi\big) F_{Z_1 \mu}+i\big(\mathcal{L}_{Z_1}F\big)_{Z_2 \mu}|\psi|^2+iF_{[Z_2,Z_1]\mu}|\psi|^2+iF_{Z_2\mu}Z_1(|\psi|^2).
\end{align*}
Hence,
\begin{align*}
J^{(\mathbf{2})}_\mu&= \Im\Big(\overline{\psi} \cdot D_\mu \psi^{(\mathbf{2})} + \overline{ \psi^{(\mathbf{2})}}\cdot D_\mu\psi + \overline{\psi^{(\mathbf{1'})}} \cdot D_\mu \psi^{(\mathbf{1})}+ \overline{\psi^{(\mathbf{1})}} \cdot D_\mu \psi^{(\mathbf{1'})} \Big)\\
& \ \ + 2\Re\big(\overline{\psi^{(\mathbf{1'})}}\cdot \psi\big) F_{Z_1 \mu}+2\Re\big(\overline{\psi^{(\mathbf{1})}}\cdot \psi\big) F_{Z_2 \mu}+\big(\mathcal{L}_{Z_1}F\big)_{Z_2 \mu}|\psi|^2+F_{[Z_2,Z_1]\mu}|\psi|^2.
\end{align*}
In view of the symmetry of the indices $(\mathbf{1})$ and $(\mathbf{1'})$, we may drop the terms with similar structures and bound $J^{(\mathbf{2})}_\mu$ as follows:
\begin{align*}
|J^{(\mathbf{2})}_L|&\lesssim |\psi| |D_\mu \psi^{(\mathbf{2})}| + |\psi^{(\mathbf{2})}||D_\mu\psi| +|\psi^{(\mathbf{1'})}||D_\mu \psi^{(\mathbf{1})}|+ |\psi||\psi^{(\mathbf{1'})}||F_{Z_1 \mu}|+\big(|\big(\mathcal{L}_{Z_1}F\big)_{Z_2 \mu}|+|F_{[Z_2,Z_1]\mu}|\big)|\psi|^2.
\end{align*}
In order to bound $|J^{(\mathbf{2})}_L|$ in an efficient way, we first bound $\psi^{(\mathbf{1'})}$ in $L^\infty$. We have
\begin{equation*}
|\psi^{(\mathbf{1'})}|\leq\begin{cases} r^2|\slashed{D}\phi|, \ &\text{if} \ Z_2=\Omega;\\
|v^{1+\xi(\mathbf{1'})}||D_L \psi|+ |u^{1+\xi(\mathbf{1'})}|\big(|rD_{\underline{L}} \phi|+|\phi|\big),\ &\text{if} \ Z_2=v^{1+\xi(\mathbf{1'})}L+u^{1+\xi(\mathbf{1'})}{\underline{L}}.
\end{cases}
\end{equation*}
By virtue of the pointwise bounds on $D\phi$, we have
\begin{equation*}
|\psi^{(\mathbf{1'})}| \lesssim \sqrt{\mathring{\varepsilon}}|u|^{-\frac{3}{2}+\xi(\mathbf{1'})-2\varepsilon_0}.
\end{equation*}
We can now insert the pointwise bounds on $\phi$ and $\psi^{(\mathbf{1'})}$ into $J^{(\mathbf{2})}_\mu$ to derive
\begin{equation}\label{J2 pointwise bound}
\begin{split}
|J^{(\mathbf{2})}_\mu|&\lesssim \Big(\sqrt{\mathring{\varepsilon}}|u|^{-\frac{5}{2}-2\varepsilon_0} |D_\mu \psi^{(\mathbf{2})}| +\sqrt{\mathring{\varepsilon}}|u|^{-\frac{3}{2}+\xi(\mathbf{1'})-2\varepsilon_0}|D_\mu \psi^{(\mathbf{1})}|\Big)
+|\psi^{(\mathbf{2})}||D_\mu\psi|\\
&\ \ \ \ \ \ \ \ \ \ +\Big( {\mathring{\varepsilon}}|u|^{-4+\xi(\mathbf{1'})-4\varepsilon_0}|F_{Z_1 \mu}|+{\mathring{\varepsilon}}|u|^{-5-4\varepsilon_0}\big(|\big(\mathcal{L}_{Z_1}F\big)_{Z_2 \mu}|+|F_{[Z_2,Z_1]\mu}|\big)\Big).
\end{split}
\end{equation}
$\blacklozenge$~For $J^{(\mathbf{2})}_L$, we can use the pointwise bound for $D_L\psi$ in \eqref{J2 pointwise bound} (where $\mu = L$) and we obtain
\begin{align*}
|J^{(\mathbf{2})}_L|&\lesssim \underbrace{\Big(\sqrt{\mathring{\varepsilon}}|u|^{-\frac{5}{2}-2\varepsilon_0} |D_L \psi^{(\mathbf{2})}| +\sqrt{\mathring{\varepsilon}}|u|^{-\frac{3}{2}+\xi(\mathbf{1'})-2\varepsilon_0}|D_L \psi^{(\mathbf{1})}|\Big)}_{\mathbf{I\!I}_{L2}}
+\underbrace{\sqrt{\mathring{\varepsilon}}r^{-2}|u|^{-1-\varepsilon_0}|\psi^{(\mathbf{2})}|}_{\mathbf{I\!I}_{L1}}\\
&\ \ \ \ \ \ \ \ \ \ +\underbrace{\Big( {\mathring{\varepsilon}}|u|^{-4+\xi(\mathbf{1'})-4\varepsilon_0}|F_{Z_1 L}|+{\mathring{\varepsilon}}|u|^{-5-4\varepsilon_0}|F_{[Z_2,Z_1]L}|\Big)}_{\mathbf{I\!I}_{L3}}+\underbrace{{\mathring{\varepsilon}}|u|^{-5-4\varepsilon_0}|\big(\mathcal{L}_{Z_1}F\big)_{Z_2 L}|}_{\mathbf{I\!I}_{L4}}.
\end{align*}
For $\mathbf{I\!I}_{L1}$, in view of \eqref{L^2 estimates on spheres for k=1 2}, we have
\begin{align*}
r_1^{-2}\int_{\mathcal{H}_{r_1}} |\mathbf{I\!I}_{L1}|^2&\lesssim \mathring{\varepsilon} r_1^{-2} |u|^{-2-2\varepsilon_0} \int_{r_1}^{r_2}r^{-2}\big(\int_{\mathcal{S}_{r_1}^r}|\phi^{(\mathbf{2})}|^2\big) dr \lesssim \mathring{\varepsilon}^2r_1^{-10+2\xi(\mathbf{2})-4\varepsilon_0}.
\end{align*}
For $\mathbf{I\!I}_{L2}$, by the $r^p$-weighted energy estimates, we have
\begin{align*}
r_1^{-2}\int_{\mathcal{H}_{r_1}} |\mathbf{I\!I}_{L2}|^2&\lesssim \mathring{\varepsilon} r_1^{-2} \int_{\mathcal{H}_{r_1}}|u|^{-5-4\varepsilon_0} |D_L \psi^{(\mathbf{2})}|^2+ |u|^{-3+2\xi(\mathbf{1'})-4\varepsilon_0} |D_L \psi^{(\mathbf{1})}|^2\lesssim \mathring{\varepsilon}^2r_1^{-9+2\xi(\mathbf{2})-6\varepsilon_0}.
\end{align*}
For $\mathbf{I\!I}_{L3}$, we will need the following two inequalities:
\begin{equation*}
|F_{Z_1L}|\lesssim r^{-2}|u|^{1+\xi(\mathbf{1})}, \ \ |F_{[Z_1,Z_2]L}|\lesssim r^{-2}|u|^{1+\xi(\mathbf{2})}.
\end{equation*}
The first one can be checked by a direct computation. For the second, we notice that the only non-vanishing commutators $[Z_1,Z_2]$ for $Z_1,Z_2\in \mathcal{Z}$ are $[T,S]=T$, $[T,K]=2S$ and $[S,K]=K$. For these vector fields, it is clear that $\xi([Z_1,Z_2])=\xi(Z_1)+\xi(Z_2)$. Therefore, the second inequality follows from the first one. In particular, we have
\begin{align*}
|\mathbf{I\!I}_{L3}| \lesssim {\mathring{\varepsilon}} r^{-2}|u|^{-3+\xi(\mathbf{2})-4\varepsilon_0}.
\end{align*}
We can integrate this pointwise bound on $\mathcal{H}_{r_1}$ to bound $r_1^{-2}\int_{\mathcal{H}_{r_1}}|\mathbf{I\!I}_{L3}|^2$ by $\mathring{\varepsilon}^2r_1^{-9+2\xi(\mathbf{2})-8\varepsilon_0}$.
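Indeed, since $|u|\simeq r_1$ on $\mathcal{H}_{r_1}$ and the measure on $\mathcal{H}_{r_1}$ is comparable to $r^2\,dr\,d\vartheta$, we have
\begin{align*}
r_1^{-2}\int_{\mathcal{H}_{r_1}}|\mathbf{I\!I}_{L3}|^2\lesssim \mathring{\varepsilon}^2 r_1^{-8+2\xi(\mathbf{2})-8\varepsilon_0}\int_{r_1}^{r_2}r^{-4}\cdot r^2\,dr\lesssim \mathring{\varepsilon}^2r_1^{-9+2\xi(\mathbf{2})-8\varepsilon_0}.
\end{align*}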
\noindent For $\mathbf{I\!I}_{L4}$, we have two cases:
\begin{equation*}
|\big(\mathcal{L}_{Z_1}F\big)_{Z_2 L}|\leq\begin{cases} r|{\alpha}^{({\bf 1})}| + r^{-2}|u|^{\xi(\mathbf{1})}, \ &\text{if} \ Z_2=\Omega;\\
|u|^{1+\xi(\mathbf{1'})}\big(|{\rho}^{({\bf 1})}| + r^{-3}|u|^{\xi(\mathbf{1})}\big),\ &\text{if} \ Z_2=v^{1+\xi(\mathbf{1'})}L+u^{1+\xi(\mathbf{1'})}{\underline{L}}.
\end{cases}
\end{equation*}
For the first case, we have \begin{align*}
r_1^{-2}\int_{\mathcal{H}_{r_1}} |\mathbf{I\!I}_{L4}|^2&\lesssim r_1^{-2} \mathring{\varepsilon}^2|u|^{-10-8\varepsilon_0}\int_{\mathcal{H}_{r_1}} r^2|{\alpha}^{({\bf 1})}|^2+r^{-4}|u|^{2\xi(\mathbf{1})}\lesssim \mathring{\varepsilon}^2r_1^{-13+2\xi(\mathbf{1})-4\varepsilon_0}.
\end{align*}
For the second case, we have \begin{align*}
r_1^{-2}\int_{\mathcal{H}_{r_1}} |\mathbf{I\!I}_{L4}|^2&\lesssim r_1^{-2} \mathring{\varepsilon}^2|u|^{-8+2\xi(\mathbf{1'})-8\varepsilon_0}\int_{\mathcal{H}_{r_1}} |{\rho}^{({\bf 1})}|^2+r^{-6}|u|^{2\xi(\mathbf{1})}\lesssim \mathring{\varepsilon}^2r_1^{-13+2\xi(\mathbf{2})-4\varepsilon_0}.
\end{align*}
Hence, $r_1^{-2}\int_{\mathcal{H}_{r_1}} |\mathbf{I\!I}_{L4}|^2$ is bounded by $\mathring{\varepsilon}^2r_1^{-13+2\xi(\mathbf{2})-4\varepsilon_0}$.
Together with the previous estimates, we obtain
\begin{align*}
r_1^{-2}\int_{\mathcal{H}_{r_1}} |J^{(\mathbf{2})}_L|^2&\lesssim \mathring{\varepsilon}^2r_1^{-9+2\xi(\mathbf{2})-6\varepsilon_0}.
\end{align*}
$\blacklozenge$~For $\slashed{J}^{(\mathbf{2})}$, we use the $L^\infty$ bound on $\slashed{D}\psi$ in \eqref{J2 pointwise bound} (where $\mu = e_A$) and we obtain
\begin{align*}
|\slashed{J}^{(\mathbf{2})}|&\lesssim \underbrace{\Big(\sqrt{\mathring{\varepsilon}}r|u|^{-\frac{5}{2}-2\varepsilon_0} |\slashed{D} \phi^{(\mathbf{2})}| +\sqrt{\mathring{\varepsilon}}r|u|^{-\frac{3}{2}+\xi(\mathbf{1'})-2\varepsilon_0}|\slashed{D}\phi^{(\mathbf{1})}|\Big)}_{\slashed{\mathbf{I\!I}}_{2}}
+\underbrace{\sqrt{\mathring{\varepsilon}}|u|^{-2-\varepsilon_0}|\phi^{(\mathbf{2})}|}_{\slashed{\mathbf{I\!I}}_{1}}\\
&\ \ \ \ \ \ \ \ \ \ +\underbrace{\Big( {\mathring{\varepsilon}}|u|^{-4+\xi(\mathbf{1'})-4\varepsilon_0}|F_{Z_1 A}|+{\mathring{\varepsilon}}|u|^{-5-4\varepsilon_0}|F_{[Z_2,Z_1]A}|\Big)}_{\slashed{\mathbf{I\!I}}_{3}}+\underbrace{{\mathring{\varepsilon}}|u|^{-5-4\varepsilon_0}|\big(\mathcal{L}_{Z_1}F\big)_{Z_2 A}|}_{\slashed{\mathbf{I\!I}}_{4}}.
\end{align*}
For $\slashed{\mathbf{I\!I}}_{1}$, according to \eqref{L^2 estimates on spheres for k=1 2}, we have
\begin{align*}
\int_{\mathcal{H}_{r_1}} \frac{|\slashed{\mathbf{I\!I}}_{1}|^2}{r^2}&\lesssim \mathring{\varepsilon} \int_{r_1}^{r_2}r^{-2}|u|^{-4-2\varepsilon_0} \big(\int_{\mathcal{S}_{r_1}^r}|\phi^{(\mathbf{2})}|^2\big) dr \lesssim \mathring{\varepsilon}^2r_1^{-10+2\xi(\mathbf{2})-2\varepsilon_0}.
\end{align*}
For $\slashed{\mathbf{I\!I}}_{2}$, we have
\begin{align*}
\int_{\mathcal{H}_{r_1}} \frac{|\slashed{\mathbf{I\!I}}_{2}|^2}{r^2}&\lesssim \mathring{\varepsilon} |u|^{-5-4\varepsilon_0} \int_{\mathcal{H}_{r_1}}|\slashed{D} \phi^{(\mathbf{2})}|^2+ \mathring{\varepsilon} |u|^{-5-4\varepsilon_0} \int_{\underline{\mathcal{H}}_{r_1}^{r_2}}|\slashed{D} \phi^{(\mathbf{2})}|^2\\
& \ \ + \mathring{\varepsilon} |u|^{-3+2\xi(\mathbf{1'})-4\varepsilon_0} \int_{\mathcal{H}_{r_1}}|\slashed{D} \phi^{(\mathbf{1})}|^2+ \mathring{\varepsilon} |u|^{-3+2\xi(\mathbf{1'})-4\varepsilon_0} \int_{\underline{\mathcal{H}}_{r_1}^{r_2}}|\slashed{D} \phi^{(\mathbf{1})}|^2\lesssim \mathring{\varepsilon}^2r_1^{-9+2\xi(\mathbf{2})-8\varepsilon_0}.
\end{align*}
For $\slashed{\mathbf{I\!I}}_{3}$, since
\begin{equation*}
|F_{Z_1A}|\lesssim \sqrt{\mathring{\varepsilon}}r^{-1}|u|^{-2+\xi(\mathbf{1})-\varepsilon_0}+r^{-2+\xi(\mathbf{1})},\ \
|F_{[Z_1,Z_2]A}|\lesssim \sqrt{\mathring{\varepsilon}}r^{-1}|u|^{-2+\xi(\mathbf{2})-\varepsilon_0}+r^{-2+\xi(\mathbf{2})},
\end{equation*}
we can integrate these pointwise bounds to derive
\begin{align*}
\int_{\mathcal{H}_{r_1}} \frac{|\slashed{\mathbf{I\!I}}_{3}|^2}{r^2}
&\lesssim \mathring{\varepsilon}^2r_1^{-11+2\xi(\mathbf{2})-8\varepsilon_0}.
\end{align*}
For $\slashed{\mathbf{I\!I}}_{4}$, we have two cases:
\begin{equation*}
|\big(\mathcal{L}_{Z_1}F\big)_{Z_2 A}|\leq\begin{cases} r|{\sigma}^{({\bf 1})}|, \ &\text{if} \ Z_2=\Omega;\\
r^{1+\xi(\mathbf{1'})}|{\alpha}^{({\bf 1})}| + |u|^{1+\xi(\mathbf{1'})}|{\underline{\alpha}}^{({\bf 1})}|+r^{-2+\xi(\mathbf{1'})}|u|^{\xi(\mathbf{1})},\ &\text{if} \ Z_2=v^{1+\xi(\mathbf{1'})}L+u^{1+\xi(\mathbf{1'})}{\underline{L}}.
\end{cases}
\end{equation*}
We claim that in both cases we have
\begin{align*}
\int_{\mathcal{H}_{r_1}} \frac{|\slashed{\mathbf{I\!I}}_{4}|^2}{r^2}&\lesssim \mathring{\varepsilon}^2r_1^{-16+2\xi(\mathbf{2})-8\varepsilon_0}.
\end{align*}
The proof for the first case is straightforward.
For the second case, we need the following bound on ${\underline{\alpha}}^{({\bf 1})}$:
\begin{equation}\label{L2 sphere bound on alphabone}
\|{\underline{\alpha}}^{({\bf 1})}\|_{L^2(\mathcal{S}_{r_1}^{r_2})}\lesssim \sqrt{\mathring{\varepsilon}}r_1^{-3+\xi(\mathbf{1})-\varepsilon_0}.
\end{equation}
In fact,
\begin{align*}
\|{\underline{\alpha}}^{({\bf 1})}\|_{L^2(\mathcal{S}_{r_1}^{r_2})}^2- \|{\underline{\alpha}}^{({\bf 1})}\|_{L^2(\mathcal{S}_{r_1}^{r_1})}^2&= \int_{\frac{r_1}{2}}^{\frac{r_2}{2}}\int_{\mathbf{S}^2}L\big(\big| r {\underline{\alpha}}^{({\bf 1})} \big|^2\big)d\vartheta dv \lesssim \int_{\frac{r_1}{2}}^{\frac{r_2}{2}}\int_{\mathbf{S}^2} |\slashed{\nabla}_L\big(r{\underline{\alpha}}^{({\bf 1})} \big)|| r{\underline{\alpha}}^{({\bf 1})}|d\vartheta dv\\
&\lesssim \|\slashed{\nabla}_L(r{\underline{\alpha}}^{({\bf 1})})\|_{L^2(\mathcal{H}_{r_1}^{r_2})} \Big(\int_{r_1}^{r_2}\frac{1}{r^2}\big(\int_{\mathcal{S}_{r_1}^{r}} |{\underline{\alpha}}^{({\bf 1})}|^2\big)dr\Big)^\frac{1}{2}.
\end{align*}
To bound $\|\slashed{\nabla}_L(r{\underline{\alpha}}^{({\bf 1})})\|_{L^2(\mathcal{H}_{r_1}^{r_2})}$, according to \eqref{Maxwell null commuted} and the facts that $r|\slashed{\nabla}\rho(\mathcal{L}_{Z_1} {\mathring{F}})|\simeq |\rho(\mathcal{L}_{\Omega}\mathcal{L}_{Z_1}{\mathring{F}} )|$ and $r|\slashed{\nabla}\sigma(\mathcal{L}_{Z_1} {\mathring{F}})|\simeq |\sigma(\mathcal{L}_{\Omega}\mathcal{L}_{Z_1}{\mathring{F}} )|$, we have
\begin{align*}
\|\slashed{\nabla}_L(r{\underline{\alpha}}^{({\bf 1})})\|_{L^2(\mathcal{H}_{r_1}^{r_2})}^2 &\leq \int_{\mathcal{H}_{r_1}^{r_2}}|\rho(\mathcal{L}_{\Omega}\mathcal{L}_{Z_1}{\mathring{F}} )|^2+|\sigma(\mathcal{L}_{\Omega}\mathcal{L}_{Z_1}{\mathring{F}} )|^2+\frac{|\slashed{J}^{(\mathbf{1})}|^2}{r^2}\lesssim \mathring{\varepsilon} r_1^{-6+2\xi(\mathbf{1})-4\varepsilon_0}.
\end{align*}
Similar to the proof of Lemma \ref{lemma A7}, we use Gronwall's inequality to obtain \eqref{L2 sphere bound on alphabone}. Thus,
\begin{align*}
\int_{\mathcal{H}_{r_1}} \frac{|\slashed{\mathbf{I\!I}}_{4}|^2}{r^2}
&\lesssim {\mathring{\varepsilon}}^2\int_{\mathcal{H}_{r_1}} |u|^{-10-8\varepsilon_0}\Big(r^{2\xi(\mathbf{1'})} {|{\alpha}^{({\bf 1})}|^2} + |u|^{2+2\xi(\mathbf{1'})}\frac{|{\underline{\alpha}}^{({\bf 1})}|^2}{r^2}+r^{-6+2\xi(\mathbf{1'})}|u|^{2\xi(\mathbf{1})}\Big).
\end{align*}
The first term can be bounded by the $r^p$-weighted energy estimates. The last term can be bounded directly. For the second term, we use \eqref{L2 sphere bound on alphabone} to obtain its $L^2$ bound on $\mathcal{S}_{r_1}^{r_2}$ and then integrate over $r$. This proves the estimate for $\slashed{\mathbf{I\!I}}_{4}$. Together with the other estimates, we obtain
\begin{align*}
\int_{\mathcal{H}_{r_1}} \frac{|\slashed{J}^{(\mathbf{2})}|^2}{r^2}&\lesssim \mathring{\varepsilon}^2r_1^{-9+2\xi(\mathbf{2})-6\varepsilon_0}.
\end{align*}
$\blacklozenge$~For $J^{(\mathbf{2})}_{\underline{L}}$, we can use the pointwise bound for $D_{\underline{L}}\psi$ in \eqref{J2 pointwise bound} (where $\mu = {\underline{L}}$) and we obtain
\begin{align*}
|J^{(\mathbf{2})}_{\underline{L}}|&\lesssim \underbrace{\Big(\sqrt{\mathring{\varepsilon}}|u|^{-\frac{5}{2}-2\varepsilon_0} |D_{\underline{L}} \psi^{(\mathbf{2})}| +\sqrt{\mathring{\varepsilon}}|u|^{-\frac{3}{2}+\xi(\mathbf{1'})-2\varepsilon_0}|D_{\underline{L}} \psi^{(\mathbf{1})}|\Big)}_{\mathbf{I\!I}_{\Lb2}}
+\underbrace{\sqrt{\mathring{\varepsilon}}|u|^{-3-\varepsilon_0}|\psi^{(\mathbf{2})}|}_{\mathbf{I\!I}_{\Lb1}}\\
&\ \ \ \ \ \ \ \ \ \ +\underbrace{\Big( {\mathring{\varepsilon}}|u|^{-4+\xi(\mathbf{1'})-4\varepsilon_0}|F_{Z_1 {\underline{L}}}|+{\mathring{\varepsilon}}|u|^{-5-4\varepsilon_0}|F_{[Z_2,Z_1]{\underline{L}}}|\Big)}_{\mathbf{I\!I}_{\Lb3}}+\underbrace{{\mathring{\varepsilon}}|u|^{-5-4\varepsilon_0}|\big(\mathcal{L}_{Z_1}F\big)_{Z_2 {\underline{L}}}|}_{\mathbf{I\!I}_{\Lb4}}.
\end{align*}
For $\mathbf{I\!I}_{\Lb1}$, according to \eqref{L^2 estimates on spheres for k=1 2}, we have
\begin{align*}
r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} \frac{|{\mathbf{I\!I}}_{\Lb1}|^2}{r^{\frac{7}{2}}}&\lesssim \mathring{\varepsilon} r_1^{\frac{3}{2}}\int_{r_1}^{r_2}r^{-\frac{3}{2}}r_1^{-6-2\varepsilon_0} \big(\int_{\mathcal{S}_{r_1}^r}|\phi^{(\mathbf{2})}|^2\big) dr \lesssim \mathring{\varepsilon}^2r_1^{-11+2\xi(\mathbf{2})-6\varepsilon_0}.
\end{align*}
For $\mathbf{I\!I}_{\Lb2}$, we first notice that $|D_{\underline{L}} \psi^{(\mathbf{k})}|\lesssim r |D_{\underline{L}} \phi^{(\mathbf{k})}|+|\phi^{(\mathbf{k})}|$. The contribution from $|\phi^{(\mathbf{1})}|$ and $|\phi^{(\mathbf{2})}|$ can be ignored, since it has already been treated in $\mathbf{I}_{\Lb1}$ and $\mathbf{I\!I}_{\Lb1}$. Thus, modulo those terms, we have
\begin{align*}
r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} \frac{|{\mathbf{I\!I}}_{\Lb2}|^2}{r^{\frac{7}{2}}}&\lesssim \mathring{\varepsilon} r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} r^{-\frac{3}{2}}\big(|u|^{-5-4\varepsilon_0}|D_{\underline{L}}\phi^{(\mathbf{2})}|^2+ |u|^{-3+2\xi(\mathbf{1'})-4\varepsilon_0}|D_{\underline{L}}\phi^{(\mathbf{1})}|^2\big) \lesssim \mathring{\varepsilon}^2r_1^{-9+2\xi(\mathbf{2})-6\varepsilon_0}.
\end{align*}
For $\mathbf{I\!I}_{\Lb3}$, we notice that
\begin{equation*}
|F_{Z_1{\underline{L}}}|\lesssim \begin{cases} \sqrt{\mathring{\varepsilon}}|u|^{-3-\varepsilon_0}+r^{-2}, \ \ &\text{if} \ \ Z_1=\Omega;\\
r^{-1+\xi(\mathbf{1})},\ \ &\text{if} \ \ Z_1=v^{1+\xi(\mathbf{1})}L+u^{1+\xi(\mathbf{1})}{\underline{L}}.
\end{cases}
\end{equation*}
Since $\Omega$ cannot arise as a commutator $[Z_1,Z_2]$, we have
\begin{equation*}
|F_{[Z_1,Z_2]{\underline{L}}}|\lesssim r^{-1}|u|^{1+\xi(\mathbf{2})}.
\end{equation*}
We remark that the $\xi(\mathbf{2})$ in the above formula is at most $1$.
Thus, we can directly integrate those pointwise bounds and we obtain
\begin{align*}
r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} \frac{|\mathbf{I\!I}_{\Lb3}|^2}{r^{\frac{7}{2}}} &\lesssim \mathring{\varepsilon}^2r_1^{-9+2\xi(\mathbf{1})-2\varepsilon_0}.
\end{align*}
For $\mathbf{I\!I}_{\Lb4}$, we have two cases
\begin{itemize}
\item If $Z_2=\Omega$, by \eqref{onederivative of F}, we have
\begin{equation*}
|\big(\mathcal{L}_{Z_1}F\big)_{\Omega {\underline{L}}}|\lesssim r|{\underline{\alpha}}^{({\bf 1})}| + r^{-2+\xi(\mathbf{1})}.
\end{equation*}
Thus, \begin{align*}
r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} \frac{|\mathbf{I\!I}_{\Lb4}|^2}{r^{\frac{7}{2}}} &\lesssim \mathring{\varepsilon}^2r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} r^{-\frac{3}{2}}|u|^{-10-8\varepsilon_0}|{\underline{\alpha}}^{({\bf 1})}|^2+r^{-\frac{13}{2}+2\xi(\mathbf{1})}|u|^{-10-8\varepsilon_0}.
\end{align*}
\item If $Z_2=v^{1+\xi(\mathbf{1'})}L+u^{1+\xi(\mathbf{1'})}{\underline{L}}$, by \eqref{onederivative of F}, we have
\begin{align*}
|\big(\mathcal{L}_{Z_1}F\big)_{Z_2 {\underline{L}}}|\lesssim r^{1+\xi(\mathbf{1'})}|{\rho}^{({\bf 1})}| + r^{-2+\xi(\mathbf{2})}.
\end{align*}
Thus, \begin{align*}
r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} \frac{|\mathbf{I\!I}_{\Lb4}|^2}{r^{\frac{7}{2}}} &\lesssim \mathring{\varepsilon}^2r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} r^{2\xi(\mathbf{1'})-\frac{7}{2}}|u|^{-10-8\varepsilon_0}r^2|{\rho}^{({\bf 1})}|^2+r^{-\frac{15}{2}+2\xi(\mathbf{2})}|u|^{-10-8\varepsilon_0}.
\end{align*}
\end{itemize}
In both cases, we have
\begin{align*}
r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} \frac{|{\mathbf{I\!I}}_{\Lb4}|^2}{r^{\frac{7}{2}}} &\lesssim \mathring{\varepsilon}^2r_1^{-12+2\xi(\mathbf{2})-4\varepsilon_0}.
\end{align*}
Hence, we conclude that
\begin{align*}
\sup_{r_2\geq r_1}r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} \frac{|J^{(\mathbf{2})}_{\underline{L}}|^2}{r^{\frac{7}{2}}} &\lesssim \mathring{\varepsilon}^2r_1^{-10+2\xi(\mathbf{2})-2\varepsilon_0}.
\end{align*}
Putting all the estimates together, we obtain that
\begin{align*}
& r_1^{-2}\int_{\mathcal{H}_{r_1}}{|J^{(\mathbf{2})}_L|^2}+\int_{\mathcal{H}_{r_1}}\frac{|\slashed{J}^{(\mathbf{2})}|^2}{r^2}+\sup_{r_2 \geq r_1}r_1^{\frac{3}{2}}\int_{\underline{\mathcal{H}}_{r_1}^{r_2}} \frac{|J^{(\mathbf{2})}_{\underline{L}}|^2}{r^{\frac{7}{2}}}\lesssim {\mathring{\varepsilon}}^2 r_1^{-9+2\xi(\mathbf{2})-2\varepsilon_0}.
\end{align*}
Thus, for sufficiently large $R_*$, this immediately closes $(\mathbf{C})$ for $\mathbf{k}=(\mathbf{2})$.
\section{The analysis in the interior region}
\subsection{The conformal theory of Maxwell-Klein-Gordon equations}
We review the conformal theory of the Maxwell-Klein-Gordon equations. We refer the readers to Chapter 4 of the booklet \cite{Christodoulou:Book:GR1} of Christodoulou for more details. Let $\mathbf{L}$ be a line bundle over a four-dimensional Lorentzian manifold $(M,g)$ with a given Hermitian metric $h$. Let $D_A$ be a $\mathbf{U}(1)$-connection compatible with $h$, where $A$ is the corresponding connection 1-form, and let $\phi$ be a section of $\mathbf{L}$. The action for the Maxwell-Klein-Gordon equations is
\begin{equation*}
\mathcal{L}(A,\phi;g,h)=\int_{M} \underbrace{\frac{1}{2}g^{\mu\nu}h\big((D_A)_{\partial_\mu} \phi,(D_A)_{\partial_\nu}\phi\big)}_{L_s(A,\phi;g,h)} +\underbrace{\frac{1}{4}F^{\mu\nu}F_{\mu\nu}}_{L_m(A,\phi;g,h)} d\text{vol}_g.
\end{equation*}
Let $\Lambda,\lambda$ be two positive smooth functions on $M$. We conformally change the metrics $g$ and $h$ by the following rules:
\begin{equation*}
\widetilde{g} = \Lambda^2 g, \ \ \widetilde{h} = \lambda^2 h.
\end{equation*}
Setting $\gamma=\lambda$, the operator $\widetilde{D_A}\phi=D_A \phi +(d\log\gamma)\phi$ defines a connection compatible with $\widetilde{h}$ ($A$ is viewed as a given $1$-form which induces a connection compatible with $\widetilde{h}$ via this formula). We remark that this is not a gauge transformation. The action in the new conformal setting is related to the old one by the following formulae:
\begin{align*}
L_s(A,\phi;g,h)&=\frac{\Lambda^2}{\lambda^2}\Big(L_s(A,\phi;\widetilde{g},\widetilde{h})+\frac{1}{2}|\phi|_{\widetilde{h}}^2\gamma^{-1}\Box_{\widetilde{g}}\gamma-\frac{1}{2}\text{Div}_{\widetilde{g}}\big(|\phi|^2_{\widetilde{h}}\cdot\text{grad}_{\widetilde{g}}(\log\gamma)\big)\Big),\\
L_m(A,\phi;g,h)&=\Lambda^4 L_m(A,\phi;\widetilde{g},\widetilde{h}).
\end{align*}
We also notice that $d\text{vol}_g = \Lambda^{-4} d\text{vol}_{\widetilde{g}}$. By setting $\lambda=\Lambda^{-1}$, we obtain
\begin{equation*}
\mathcal{L}(A,\phi;g,h)=\mathcal{L}(A,\phi;\widetilde{g},\widetilde{h})+\int_M \Big(\frac{1}{2}|\phi|_{\widetilde{h}}^2\gamma^{-1}\Box_{\widetilde{g}}\gamma-\frac{1}{2}\text{Div}_{\widetilde{g}}\big(|\phi|^2_{\widetilde{h}}\cdot\text{grad}_{\widetilde{g}}(\log\gamma)\big)\Big)d\text{vol}_{\widetilde{g}}.
\end{equation*}
We now impose a condition on $\gamma$: $\Box_{\widetilde{g}}\gamma =0$. In applications, we will take $\widetilde{g}$ to be the Minkowski metric and $\gamma(t,x)=\big((t+C)^2-|x|^2\big)^{-1}$, where $C$ is a constant, so that this condition is always satisfied. Thus, we conclude that the difference between the two actions $\mathcal{L}(A,\phi;g,h)$ and $\mathcal{L}(A,\phi;\widetilde{g},\widetilde{h})$ is simply a divergence term. In particular, this implies that the two actions give the same Euler-Lagrange equations. Therefore, $(A,\phi)$ being a solution of \eqref{MKG} with $(g,h)$ is equivalent to its being a solution of \eqref{MKG} with $(\widetilde{g},\widetilde{h})$. We point out that the two (identical) sets of solutions are measured with respect to different metrics.
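To verify the condition $\Box_{\widetilde{g}}\gamma=0$ for this choice of $\gamma$, one may translate $t$ so that $\gamma=Q^{-1}$ with $Q=t^2-|x|^2$; then a direct computation gives
\begin{align*}
\partial_t^2 \gamma = -2Q^{-2}+8t^2Q^{-3},\qquad \Delta\gamma = 6Q^{-2}+8|x|^2Q^{-3},
\end{align*}
so that $\Box_{\widetilde{g}}\gamma=-\partial_t^2\gamma+\Delta\gamma=8Q^{-2}-8(t^2-|x|^2)Q^{-3}=0$ away from the null cone $\{Q=0\}$.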
We study a special case. Let ${\Phi}:(U,m) \rightarrow (\widetilde{U},\widetilde{m})$ be a conformal mapping between two domains of Minkowski space. We assume that
\begin{equation*}
{\Phi}^*\widetilde{m}_{\mu\nu} = \Lambda^{-2} m_{\mu\nu},
\end{equation*}
where $\Lambda(t,x) = (t+R_*+1)^2-|x|^2$ is a smooth function on $U$. By setting $\widetilde{g}=m$ and $g={\Phi}^*\widetilde{m}$, we apply the previous constructions. This yields the following lemma:
where $\Lambda(t,x) = (t+R_*+1)^2-|x|^2$ is a smooth function on $U$. By setting $\widetilde{g}=m$ and $g={\Phi}^*\widetilde{m}$, we apply the previous constructions. This yields the following lemma:
\begin{lemma}
Let $\widetilde{\mathbf{L}}$ be a complex line bundle on $\widetilde{U}$ and ${\Phi}:U \rightarrow \widetilde{U}$ be a conformal diffeomorphism as described above. If $(\widetilde{\phi},\widetilde{A})$ is a solution of \eqref{MKG} for $\widetilde{\mathbf{L}}$, then $(\phi,A):=({\Phi}^*\widetilde{\phi},{\Phi}^*\widetilde{A})$ is a solution of \eqref{MKG} for $\mathbf{L}={\Phi}^*\widetilde{\mathbf{L}}$ with respect to $(m,\Lambda^{-2}{\Phi}^*\widetilde{h})$. In particular, one takes $h= {\Phi}^*\widetilde{h}$ on $\mathbf{L}$ (this is always the case since we usually identify scalar fields with $\mathbb{C}$-valued functions by fixing a unit section in $\mathbf{L}$ or $\widetilde{\mathbf{L}}$). We can reformulate the statement as follows: if $(\widetilde{\phi},\widetilde{A})$ (where $\widetilde{\phi}$ is a complex-valued function) is a solution of \eqref{MKG} on $\widetilde{U}$, then $(\Lambda^{-1} {\Phi}^*\widetilde{\phi}, {\Phi}^*\widetilde{A})$ is also a solution of \eqref{MKG} on $U$.
\end{lemma}
We can also reverse the direction:
\begin{corollary}
If $(\phi,A)$ is a solution of \eqref{MKG} on ${U}$, then $(\big({\Phi}^{-1}\big)^*\big(\Lambda \cdot \phi\big), \big({\Phi}^{-1}\big)^*{A})$ is also a solution of \eqref{MKG} on $\widetilde{U}$.
\end{corollary}
\subsection{The conformal picture}\label{section conformal picture}
From now on until the end of the paper, the radius $R_*$ is fixed, and we allow the implicit constants to depend also on $R_*$ and on the size of the data $C_0$. More precisely, $B\lesssim P$ means that there is a constant $C$ depending only on $C_0$ such that $B\leq CP$. Here we point out that the radius $R_*$ as well as the charge $q_0$ depend only on the size of the data $C_0$. We define a hyperboloid $\Sigma$ in Minkowski space by the following equation:
\begin{equation*}
\Sigma =\Big\{ (t,x) \Big| -\big(t+R_*+1-\frac{2R_*+1}{2(R_*+1)}\big)^2+|x|^2=-\Big( \frac{2R_*+1}{2(R_*+1)}\Big)^2\Big\}.
\end{equation*}
Geometrically, $\Sigma$ (drawn as the bold black curve in the left figure) is the unique hyperboloid passing through $\mathcal{S}_{R_*}^{R_*}$ and asymptotic to $\mathcal{H}_{R_*+\frac{1}{2R_*+2}}$ (the dash-dot-dot line in the left figure). We denote its causal future by $\mathcal{J}^+(\Sigma)$ and it is the grey region in the left figure.
\begin{center}\includegraphics[width=5in]{conformal.pdf}\end{center}
We now define a map ${\Phi}: \mathcal{J}^+(\Sigma) \rightarrow \mathbf{R}^{3+1}$. To distinguish the domain and the target of the map, we will use $\widetilde{t}$ and $\widetilde{x_i}$'s as coordinate system on the target Minkowski space $(\mathbb{R}^{3+1},\widetilde{m}_{\alpha\beta})$ where $\widetilde{m}_{\alpha\beta}$ is the Minkowski metric on the target. The map ${\Phi}$ is given by the following formula:
\begin{equation*}
{\Phi}: (t,x)\mapsto (\widetilde{t},\widetilde{x})=\Big(-\frac{t+R_*+1}{(t+R_*+1)^2-|x|^2}+\frac{R_*+1}{2R_*+1},\frac{x}{(t+R_*+1)^2-|x|^2}\Big)
\end{equation*}
Geometrically, it is the composition of a time translation with the standard inversion map centered at $(-R_*-1,0,0,0)$. It is straightforward to see that the image of $\Sigma$ is given by $\widetilde{t}=0$ and $|\widetilde{x}|<\frac{R_*+1}{2R_*+1}$. It is a 3-dimensional open ball denoted by $\widetilde{\mathcal{B}}_{\frac{R_*+1}{2R_*+1}}$ (see the bold straight line segment in the right figure). We also define $\Sigma_\pm = \Sigma \cap \{\pm t\geq 0\}$ and denote their images under ${\Phi}$ by $\mathcal{B}_{\pm}$. We remark that $\Sigma_+$ lies entirely inside the exterior region, where we have already obtained good control on the solutions. The image of $\mathcal{J}^+(\Sigma)$ is precisely the future domain of dependence of $\widetilde{\mathcal{B}}_{\frac{R_*+1}{2R_*+1}}$, denoted by $\mathcal{D}^+(\widetilde{\mathcal{B}}_{\frac{R_*+1}{2R_*+1}})$. It is depicted as the grey region in the right figure. As a result, we obtain a diffeomorphism:
\begin{equation*}
{\Phi}: \mathcal{J}^+(\Sigma) \rightarrow \mathcal{D}^+(\widetilde{\mathcal{B}}_{\frac{R_*+1}{2R_*+1}}).
\end{equation*}
With the naturally induced flat metrics, ${\Phi}$ is indeed a conformal map:
\begin{equation*}
{\Phi}^*\widetilde{m}_{\mu\nu} = \frac{1}{\Lambda(t,x)^2}m_{\mu\nu} \ \ \text{with}\ \ \Lambda(t,x) = (t+R_*+1)^2-|x|^2.
\end{equation*}
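The conformal factor can be verified by the classical computation for the inversion $Y^\mu=X^\mu/Q$ with $Q=m(X,X)$; the remaining ingredients of ${\Phi}$ (a translation and a reflection) are isometries and do not affect the factor. Since $dY=\frac{dX}{Q}-\frac{2m(X,dX)}{Q^2}X$, we have
\begin{align*}
m(dY,dY)=\frac{m(dX,dX)}{Q^2}-\frac{4m(X,dX)^2}{Q^3}+\frac{4m(X,dX)^2\,m(X,X)}{Q^4}=\frac{m(dX,dX)}{Q^2},
\end{align*}
and in the translated coordinates $X=(t+R_*+1,x)$ one has $Q=-\Lambda$, so that $Q^2=\Lambda^2$.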
We define the following functions on the domain and on the target of ${\Phi}$:
\begin{align*}
&u_{*}=u+\frac{R_*+1}{2}, \ v_*=v+\frac{R_*+1}{2},\ \widetilde{u}=\frac{1}{2}(\widetilde{t}-\frac{R_*+1}{2R_*+1}-\widetilde{r}),\\ &\widetilde{v}=\frac{1}{2}(\widetilde{t}-\frac{R_*+1}{2R_*+1}+\widetilde{r}),\quad
\widetilde{\Lambda}(\widetilde{t},\widetilde{x})=\big(\widetilde{t}-\frac{R_*+1}{2R_*+1}\big)^2-|\widetilde{x}|^2.
\end{align*}
It is straightforward to check that
\begin{align*}
\Lambda &= 4u_*v_*, \ \ \widetilde{\Lambda}=4\widetilde{u}\widetilde{v},\ \
{\Phi}^*(\widetilde{u})=-\frac{1}{4u_*}, \ \ {\Phi}^*(\widetilde{v})=-\frac{1}{4v_*}, \ \ \big({\Phi}^{-1}\big)^*\widetilde{\Lambda}=\Lambda^{-1}.
\end{align*}
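For instance, since $t+R_*+1-r=2u_*$ and $t+R_*+1+r=2v_*$, we have
\begin{align*}
\Lambda=(t+R_*+1-r)(t+R_*+1+r)=4u_*v_*, \qquad {\Phi}^*(\widetilde{u})=\frac{1}{2}\cdot\frac{-(t+R_*+1)-r}{\Lambda}=-\frac{2v_*}{8u_*v_*}=-\frac{1}{4u_*},
\end{align*}
and the remaining identities are proved in the same manner.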
We can also define two principal null vector fields on the target:
\begin{equation*}
\widetilde{L}=\partial_{\widetilde{t}}+\partial_{\widetilde{r}}, \ \ \widetilde{{\underline{L}}}=\partial_{\widetilde{t}}-\partial_{\widetilde{r}}.
\end{equation*}
We can compute the tangent map ${\Phi}_*$ as follows:
\begin{equation*}
\begin{split}
{\Phi}_* L&= 4\widetilde{v}^2\widetilde{L}, \ \ {\Phi}_* {\underline{L}}= 4\widetilde{u}^2\widetilde{{\underline{L}}}, \ \ \ {\Phi}_* (x_i\partial_{x_j}-x_j\partial_{x_i}) = \widetilde{x}_i\partial_{\widetilde{x}_j}-\widetilde{x}_j\partial_{\widetilde{x}_i}.
\end{split}
\end{equation*}
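These formulae can be checked by applying both sides to the functions $\widetilde{u}$ and $\widetilde{v}$. For instance, since $Lu_*=0$ and $Lv_*=1$, we have
\begin{align*}
({\Phi}_*L)(\widetilde{u})=L\big({\Phi}^*\widetilde{u}\big)=L\Big(-\frac{1}{4u_*}\Big)=0, \qquad ({\Phi}_*L)(\widetilde{v})=L\Big(-\frac{1}{4v_*}\Big)=\frac{1}{4v_*^2}=4\widetilde{v}^2,
\end{align*}
which agrees with $4\widetilde{v}^2\widetilde{L}$ in view of $\widetilde{L}\widetilde{u}=0$ and $\widetilde{L}\widetilde{v}=1$; the computation for ${\underline{L}}$ is identical.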
Now define $\widetilde{e}_1$ and $\widetilde{e}_2$ via the following formula:
\begin{equation*}
{\Phi}_* e_A = \widetilde{\Lambda}(\widetilde{t},\widetilde{x})\widetilde{e}_A.
\end{equation*}
Thus, $(\widetilde{e}_1,\widetilde{e}_2,\widetilde{L}, \widetilde{{\underline{L}}})$ forms a null frame for the target space $\mathcal{D}^+(\widetilde{\mathcal{B}}_{\frac{R_*+1}{2R_*+1}})$.
The following set of formulae gives the image of $\mathcal{Z}$ under ${\Phi}_*$:
\begin{equation*}
{\Phi}_* T= 2\widetilde{v}^2\widetilde{L}+ 2\widetilde{u}^2\widetilde{{\underline{L}}}, \ \ {\Phi}_* (x_i\partial_{x_j}-x_j\partial_{x_i}) = \widetilde{x}_i\partial_{\widetilde{x}_j}-\widetilde{x}_j\partial_{\widetilde{x}_i}, \ \ {\Phi}_* K = \frac{1}{2}\partial_{\widetilde{t}}.
\end{equation*}
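The formula for ${\Phi}_*T$ follows directly from the computations of ${\Phi}_*L$ and ${\Phi}_*{\underline{L}}$ above, recalling that $T=\partial_t=\frac{1}{2}(L+{\underline{L}})$:
\begin{align*}
{\Phi}_*T=\frac{1}{2}\big(4\widetilde{v}^2\widetilde{L}+4\widetilde{u}^2\widetilde{{\underline{L}}}\big)=2\widetilde{v}^2\widetilde{L}+2\widetilde{u}^2\widetilde{{\underline{L}}}.
\end{align*}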
The next objective is to define the fields on the target of ${\Phi}$ corresponding to $(G = dA,{f})$ on the domain of definition. The correspondence is given by the following formulae: \footnote{In the rest of this section, when we write $(t,x)$ and $(\widetilde{t},\widetilde{x})$, it is always understood that ${\Phi}: (t,x)\mapsto (\widetilde{t},\widetilde{x})$.}
\begin{align*}
\widetilde{{f}}(\widetilde{t},\widetilde{x})&:=\Big(\big({\Phi}^{-1}\big)^*{(\Lambda \cdot {f})}\Big)\big(\widetilde{t},\widetilde{x}\big)=\Lambda(t,x){{f}}(t,x), \\ \widetilde{A}(\widetilde{t},\widetilde{x})&:=\Big(\big({\Phi}^{-1}\big)^*{A}\Big)\big(\widetilde{t},\widetilde{x}\big) \ \ (\text{hence} \ \widetilde{\Omega}(\widetilde{t},\widetilde{x}):=\Big(\big({\Phi}^{-1}\big)^*{\Omega}\Big)\big(\widetilde{t},\widetilde{x}\big)).
\end{align*}
In view of the conformal theory presented at the beginning of the section, if we take $(G,{f})=(F,\phi)$ the solution of \eqref{MKG}, the pair $(\widetilde{\phi},\widetilde{F})$ is a solution of \eqref{MKG} on $\mathcal{D}^+(\widetilde{\mathcal{B}}_{\frac{R_*+1}{2R_*+1}})$.
In the rest of this subsection (Section \ref{section conformal picture}), we will use the following shorthand notations for the null components of $G$ and $\widetilde{G}$:
\begin{align*}
\alpha &= \alpha(G),\ \ \rho = \rho(G), \ \ \sigma=\sigma(G), \ \ \underline{\alpha} =\underline{\alpha}(G),\\
\widetilde{\alpha} &= \alpha(\widetilde{G}),\ \ \widetilde{\rho} = \rho(\widetilde{G}), \ \ \widetilde{\sigma}=\sigma(\widetilde{G}), \ \ \underline{\widetilde{\alpha}} =\underline{\alpha}(\widetilde{G}).
\end{align*}
Since ${\Phi}_*$ and ${\Phi}^*$ behave functorially, the following formulae are immediate consequences of the previous computations:
\begin{equation}\label{conformal formulae for null components}
\begin{split}
&\widetilde{\alpha}_A(\widetilde{t},\widetilde{x}) = 16u_*v_*^3\alpha_A(t,x), \ \ \widetilde{\rho}(\widetilde{t},\widetilde{x})= 16u_*^2v_*^2\rho(t,x), \\
&\widetilde{\sigma}(\widetilde{t},\widetilde{x})= 16u_*^2v_*^2\sigma(t,x),\ \
\underline{\widetilde{\alpha}}_A(\widetilde{t},\widetilde{x}) = 16u_*^3v_*\underline{\alpha}_A(t,x),\\
\widetilde{D}_{\widetilde{L}}\widetilde{\phi}(\widetilde{t},\widetilde{x})&=4v_*^2 D_L(\Lambda \phi)(t,x), \ \ \widetilde{D}_{\widetilde{e}_A}\widetilde{\phi}=4u_*v_* D_{e_A}(\Lambda \phi)(t,x), \ \ \widetilde{D}_{\widetilde{{\underline{L}}}}\widetilde{\phi}(\widetilde{t},\widetilde{x})=4u_*^2 D_{\underline{L}}(\Lambda \phi)(t,x).
\end{split}
\end{equation}
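To illustrate how these weights arise, consider $\widetilde{\alpha}$. Since $G={\Phi}^*\widetilde{G}$, the previous formulae for ${\Phi}_*$ give
\begin{align*}
\alpha_A=G(e_A,L)=\widetilde{G}({\Phi}_*e_A,{\Phi}_*L)=4\widetilde{\Lambda}\widetilde{v}^2\,\widetilde{\alpha}_A,
\end{align*}
and since $\widetilde{\Lambda}=4\widetilde{u}\widetilde{v}=\frac{1}{4u_*v_*}$ and $\widetilde{v}^2=\frac{1}{16v_*^2}$, we obtain $4\widetilde{\Lambda}\widetilde{v}^2=\frac{1}{16u_*v_*^3}$, i.e. $\widetilde{\alpha}_A=16u_*v_*^3\alpha_A$.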
The correspondence also behaves well with respect to taking derivatives for $Z \in \mathcal{Z}$:\footnote{We also use $\widetilde{D}$ to denote the covariant derivatives corresponding to $\widetilde{A}$ on the target. This should not be confused with the $\widetilde{D}$ on the domain of definition of ${\Phi}$.}
\begin{equation}\label{commutator for derivative and conformal compactification}
\begin{split}
\widetilde{\mathcal{L}_Z G}&=\mathcal{L}_{{\Phi}_* Z}\widetilde{G},\ \ \forall Z \in \mathcal{Z},\ \ \ \
\widetilde{\widehat{D}_{\Omega_{ij}} {f}} =\widetilde{D}_{{\Phi}_*\Omega_{ij}}\widetilde{{f}}, \\ \widetilde{\widehat{D}_T {f}}&=\widetilde{D}_{{\Phi}_*T}\widetilde{{f}}+\big(\widetilde{t}-\frac{R_*+1}{2R_*+1}\big)\widetilde{{f}}, \ \ \widetilde{\widehat{D}_K {f}}=\widetilde{D}_{{\Phi}_*K}\widetilde{{f}}+(R_*+1)^2\big(\widetilde{t}-\frac{R_*^2}{(R_*+1)(2R_*+1)}\big)\widetilde{{f}}.
\end{split}
\end{equation}
We will study the energy flux through the space-like hypersurface $\Sigma$. For this purpose, we need to study the geometry of $\Sigma$. It is more convenient to use a new coordinate system to characterize $\Sigma$. We define
\begin{equation*}
U=\sqrt{\big(t+R_*+\frac{1}{2R_*+2}\big)^2-r^2}, \ \ V=\sqrt{\big(t+R_*+\frac{1}{2R_*+2}\big)^2+r^2}.
\end{equation*}
The new coordinate system is $(U,V,\vartheta)$, where $\vartheta \in \mathbf{S}^2$ denotes the standard spherical coordinates. By definition, $\Sigma$ is the level set $U= \frac{2R_*+1}{2(R_*+1)}$. Thus, $(V,\vartheta)$ can be regarded as a local coordinate system on $\Sigma$. To simplify notations, we also define
\begin{equation*}
t_*=t+R_*+\frac{1}{2R_*+2}.
\end{equation*}
In the new coordinate system, the volume form of the Minkowski metric $m$ can be written as
\begin{equation}\label{volume form spacetime}
d\text{vol}_m = r^2 dt dr d\vartheta =\frac{rUV}{2t_*}dUdV d\vartheta.
\end{equation}
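The Jacobian computation behind \eqref{volume form spacetime} can be checked directly: differentiating the definitions of $U$ and $V$ gives $U\,dU = t_*\,dt - r\,dr$ and $V\,dV = t_*\,dt + r\,dr$, hence
\begin{equation*}
dU\wedge dV = \frac{2t_* r}{UV}\, dt\wedge dr, \ \ \text{so that} \ \ r^2 dt dr d\vartheta =\frac{rUV}{2t_*}dUdV d\vartheta.
\end{equation*}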
The hypersurface $\Sigma$ can also be viewed as the graph of the function $g$ over $\mathbb{R}^3$, where $g$ is defined as
\begin{equation*}
t=g(x)=\sqrt{|x|^2+\Big( \frac{2R_*+1}{2(R_*+1)}\Big)^2}-\frac{2R_*^2+2R_*+1}{2(R_*+1)}.
\end{equation*}
Therefore the surface measure on $\Sigma$ is given by (using $r$ and $\vartheta$ as coordinate functions):
\begin{equation*}
d\mu_{\Sigma}=\sqrt{1+|\nabla g|^2}dx= \sqrt{\frac{2r^2+\big(\frac{2R_*+1}{2(R_*+1)}\big)^2}{r^2+\big(\frac{2R_*+1}{2(R_*+1)}\big)^2}}r^2dr d\vartheta.
\end{equation*}
In view of the defining equation of $\Sigma$, one can express it in terms of $(V,\vartheta)$ on $\Sigma$:
\begin{equation}\label{volume form of Sigma}
d\mu_\Sigma = \frac{rV^2}{t_*}dV d\vartheta.
\end{equation}
The following lemma gives a formula to compute the contraction of a vector field with the spacetime volume form on $\Sigma$. It will play a key role in the computations of the local energy density on $\Sigma$.
\begin{lemma}\label{lemma contraction of volume form}Let $J$ be a smooth vector field on $\mathbb{R}^{3+1}$ and $\iota: \Sigma \hookrightarrow \mathbb{R}^{3+1}$ be the canonical embedding. We use $i_J d\text{vol}_m$ to denote the contraction of $J$ and the spacetime volume form. Then we have
\begin{align*}
\iota^*\big(i_J d\text{vol}_m\big)
&=-\frac{rV}{4t_*}\big((t_*-r)J_{\underline{L}}+(t_*+r)J_L\big)dV d\vartheta.
\end{align*}
\end{lemma}
\begin{proof}
Since we have already derived the formulae for volume/surface measures in terms of $(U,V,\vartheta)$, it just remains to relate $L$ and ${\underline{L}}$ to $\partial_U$ and $\partial_V$. This is recorded in the following formulae:
\begin{equation}\label{L in terms of U and V}
L=\frac{t_*-r}{U}\partial_U+\frac{t_*+r}{V}\partial_V, \ \ {\underline{L}}=\frac{t_*+r}{U}\partial_U+\frac{t_*-r}{V}\partial_V.
\end{equation}
Writing $J$ in the null frame as $J = J^L L + J^{{\underline{L}}}{\underline{L}} + \slashed{J}$, we have $J^{L}=-\frac{1}{2}J_{{\underline{L}}}$ and $J^{{\underline{L}}}=-\frac{1}{2}J_{L}$. Since $\Sigma$ is a level set of $U$, only the $\partial_U$-component of $J$ survives the pull-back. Therefore, in view of \eqref{volume form spacetime} and \eqref{L in terms of U and V},
\begin{equation*}
\iota^*\big(i_J d\text{vol}_m\big)=\frac{rUV}{2t_*}J^U dV d\vartheta = -\frac{rV}{4t_*}\big((t_*-r)J_{\underline{L}}+(t_*+r)J_L\big)dV d\vartheta.
\end{equation*}
\end{proof}
We turn to the study of energy quantities. Given fields $(\widetilde{{f}},\widetilde{G})$ on $\widetilde{\mathcal{B}}_{\frac{R_*+1}{2R_*+1}}$, the standard energy is defined as
\begin{equation*}
\mathcal{E}[\widetilde{{f}},\widetilde{G}](\widetilde{\mathcal{B}}_{\frac{R_*+1}{2R_*+1}})=\int_{0}^{\frac{R_*+1}{2R_*+1}}\int_{\mathbf{S}^2} \Big(|\widetilde{\alpha}|^2+|\widetilde{\rho}|^2+|\widetilde{\sigma}|^2+|\underline{\widetilde{\alpha}}|^2+|\widetilde{D}_{\widetilde{L}}\widetilde{\phi}|^2+\sum_{A=1}^2|\widetilde{D}_{\widetilde{e}_A}\widetilde{\phi}|^2+|\widetilde{D}_{\widetilde{{\underline{L}}}}\widetilde{\phi}|^2\Big)\widetilde{r}^2d\widetilde{r} d\widetilde{\vartheta}.
\end{equation*}
In view of our analysis in the exterior region, the more relevant part of the energy is as follows:
\begin{equation*}
\mathcal{E}[\widetilde{{f}},\widetilde{G}](\widetilde{\mathcal{B}}_{+})=\int_{\frac{R_*}{2R_*+1}}^{\frac{R_*+1}{2R_*+1}}\int_{\mathbf{S}^2} \Big(|\widetilde{\alpha}|^2+|\widetilde{\rho}|^2+|\widetilde{\sigma}|^2+|\underline{\widetilde{\alpha}}|^2+|\widetilde{D}_{\widetilde{L}}\widetilde{\phi}|^2+\sum_{A=1}^2|\widetilde{D}_{\widetilde{e}_A}\widetilde{\phi}|^2+|\widetilde{D}_{\widetilde{{\underline{L}}}}\widetilde{\phi}|^2\Big)\widetilde{r}^2d\widetilde{r} d\widetilde{\vartheta}.
\end{equation*}
The main objective is to rewrite this energy in terms of $({f}, G)$ on $\Sigma_+$:
\begin{proposition}\label{proposition conformal energy on Sigma}
Given the conformal correspondence $({f},G)\mapsto (\widetilde{{f}},\widetilde{G})$, we have
\begin{equation}\label{conformal energy estimate on Sigma plus}
\mathcal{E}[\widetilde{{f}},\widetilde{G}](\widetilde{\mathcal{B}}_{+})\lesssim \int_{\Sigma_+} r^2|{\alpha}|^2+|{\rho}|^2+|{\sigma}|^2+\frac{|\underline{{\alpha}}|^2}{r^2}+|D_L \Psi|^2+|D_L{f}|^2+|\slashed{D} {f}|^2+\frac{|D_{\underline{L}}{f}|^2}{r^2}+\frac{|{f}|^2}{r^2},
\end{equation}
where the null components are those of $G$, we use the hypersurface measure $d\mu_\Sigma$ for the integration and $\Psi = r{f}$.
\end{proposition}
\begin{proof}
We first express the volume form $\widetilde{r}^2d\widetilde{r} d\widetilde{\vartheta}$ in terms of $dVd\vartheta$. We notice that $\big({\Phi}^{-1}\big)^*(U) = \frac{2R_*+1}{2R_*+2}$ on $\Sigma_+$. In terms of $U$ and $V$, we have
\begin{equation*}
\widetilde{r}=\frac{\sqrt{\frac{V^2-U^2}{2}}}{U^2+\big(\frac{2R_*+1}{2R_*+2}\big)^2+\frac{2R_*+1}{R_*+1}\sqrt{\frac{V^2+U^2}{2}}}=\frac{\sqrt{V^2-U^2}}{2^{\frac{3}{2}}U^2+2U\sqrt{V^2+U^2}}.
\end{equation*}
Thus,
\begin{equation*}
d\widetilde{r}=\frac{\big(2^{\frac{3}{2}}U^2\sqrt{V^2+U^2}+4U^3\big)V}{\big(2^{\frac{3}{2}}U^2+2U\sqrt{V^2+U^2}\big)^2\sqrt{V^2-U^2}\sqrt{V^2+U^2}}dV.
\end{equation*}
Note that $U\approx 1$ and $V\approx r$ on $\Sigma_+$ (provided $R_*\geq 1$, which always holds). Thus $\widetilde{r}\approx 1$ and we have
\begin{equation}
\widetilde{r}^2d\widetilde{r} d\widetilde{\vartheta} \approx r^{-2}dV d\vartheta.
\end{equation}
In view of \eqref{volume form of Sigma} and the fact that $t_* \approx r \approx v_*$ on $\Sigma_+$, we conclude that
\begin{equation}\label{vol form estimate on target}
\widetilde{r}^2d\widetilde{r} d\widetilde{\vartheta} \approx v_*^{-4}d\mu_{\Sigma_{+}}.
\end{equation}
Thus, combining with \eqref{conformal formulae for null components}, the above equation yields
\begin{equation}\label{conformal energy on Sigma plus}
\begin{split}
\mathcal{E}[\widetilde{{f}},\widetilde{G}](\widetilde{\mathcal{B}}_{+})\approx\int_{\Sigma_+} & u_*^2v_*^2|{\alpha}|^2+u_*^4\big(|{\rho}|^2+|{\sigma}|^2\big)+u_*^6v_*^{-2}|\underline{{\alpha}}|^2+|D_L(\Lambda{f})|^2\\
&+u_*^2v_*^{-2}|\slashed{D}(\Lambda{f})|^2+u_*^4v_*^{-4}|D_{\underline{L}}(\Lambda{f})|^2.
\end{split}
\end{equation}
The above integration is understood over the measure $d\mu_{\Sigma_+}$. On the other hand, since $Lu_*={\underline{L}} v_*=1$, $Lv_*={\underline{L}} u_* =0$ and $\Lambda = 4u_* v_*$, we can easily obtain that
\begin{align*}
|D_L(\Lambda{f})|^2&=|4u_* D_L(v_*{f})|^2=|4u_*D_L\big((u_*+r){f}\big)|^2\lesssim u_*^4 |D_L{f}|^2+u_*^2|D_L\Psi|^2,\\
|\slashed{D}(\Lambda{f})|^2 &\approx u_*^2v_*^2 |\slashed{D}{f}|^2, \ \ |D_{\underline{L}}(\Lambda{f})|^2 \lesssim v_*^2u_*^2|D_{\underline{L}} {f}|^2+|v_*|^2|{f}|^2.
\end{align*}
Since $\frac{1}{2}-\frac{1}{4(R_*+1)}\leq u_* \leq \frac{1}{2}$, we have $u_*\approx 1$. The above estimate together with \eqref{conformal energy on Sigma plus} completes the proof of the proposition.
\end{proof}
We can furthermore eliminate the term $\frac{|{f}|^2}{r^2}$ in \eqref{conformal energy estimate on Sigma plus}:
\begin{corollary}\label{corollary conformal energy}
Given the conformal correspondence $({f},G)\mapsto (\widetilde{{f}},\widetilde{G})$, we have
\begin{equation}\label{conformal energy estimate on Sigma plus without zeroth order terms}
\mathcal{E}[\widetilde{{f}},\widetilde{G}](\widetilde{\mathcal{B}}_{+})\lesssim R_*^{-1} \int_{\mathcal{S}_{R_*}^{R_*}} |{f}|^2+\int_{\Sigma_+} r^2|{\alpha}|^2+|{\rho}|^2+|{\sigma}|^2+\frac{|\underline{{\alpha}}|^2}{r^2}+|D_L \Psi|^2+|D_L{f}|^2+|\slashed{D} {f}|^2+\frac{|D_{\underline{L}}{f}|^2}{r^2}.
\end{equation}
\end{corollary}
\begin{proof}
According to \eqref{volume form of Sigma} and the Newton-Leibniz formula, we have
\begin{align*}
\int_{\Sigma_+} \frac{|{f}|^2}{r^2}&\approx \int_{R_*}^\infty \int_{\mathbf{S}^2}|{f}|^2dVd\vartheta = R_*^{-1} \int_{\mathcal{S}_{R_*}^{R_*}} |{f}|^2-\lim_{V_0\rightarrow \infty}\Big( V_0^{-1}\int_{\Sigma_+\cap \{V=V_0\}}|{f}|^2+2\int_{\Sigma_{+}\cap\{V\leq V_0\}}V^{-1}\Re(\overline{D_{\partial_V}{f}}\cdot {f})\Big)\\
& \lesssim R_*^{-1} \int_{\mathcal{S}_{R_*}^{R_*}} |{f}|^2+\int_{\Sigma_{+}}V^{-1}|D_{\partial_V}{f}||{f}| \lesssim R_*^{-1} \int_{\mathcal{S}_{R_*}^{R_*}} |{f}|^2+\int_{\Sigma_{+}}|D_{\partial_V}{f}|^2+\frac{1}{2}\int_{\Sigma_{+}}\frac{|{f}|^2}{r^2}.
\end{align*}
Thus,
\begin{align*}
\int_{\Sigma_+} \frac{|{f}|^2}{r^2}\lesssim R_*^{-1} \int_{\mathcal{S}_{R_*}^{R_*}} |{f}|^2+\int_{\Sigma_{+}}|D_{\partial_V}{f}|^2.
\end{align*}
According to \eqref{L in terms of U and V}, on $\Sigma_+$, we have
\begin{align*}
|D_{\partial_V}{f}|^2 =|\frac{V}{4t_*r}\big((t_*+r)L-(t_*-r){\underline{L}}\big){f}|^2\lesssim |D_L {f}|^2+\frac{|D_{\underline{L}}{f}|^2}{r^2}.
\end{align*}
Therefore,
\begin{align*}
\int_{\Sigma_+} \frac{|{f}|^2}{r^2}\lesssim R_*^{-1} \int_{\mathcal{S}_{R_*}^{R_*}} |{f}|^2+\int_{\Sigma_{+}}|D_L {f}|^2+\frac{|D_{\underline{L}}{f}|^2}{r^2}.
\end{align*}
In view of \eqref{conformal energy estimate on Sigma plus}, this completes the proof.
\end{proof}
To bound the righthand side of \eqref{conformal energy estimate on Sigma plus}, we need the standard energy estimate and the $r^p$-weighted energy estimates on $\Sigma_+$. We take the integration domain $\mathcal{D}$ to be the spacetime slab bounded by $\Sigma_+$ and $\mathcal{B}_{R_*}$; see the grey region in the following picture:
\begin{center}
\includegraphics[width=3 in]{hyperboloidenergy.pdf}
\end{center}
\begin{lemma}\label{energy identities hyperboloid}
For all $G$ and ${f}$, $1\leq p\leq 2$, we have
\begin{equation}\label{basic energy hyperboloid}
\begin{split}
& \ \ \ \int_{\Sigma_+}\frac{(t_*+r)\big(|D_L{f}|^2+|\alpha|^2\big)+ 2t_*\big(|\slashed{D}{f}|^2+|\rho|^2+|\sigma|^2\big)+(t_*-r)\big(|D_{\underline{L}}{f}|^2+|\underline{\alpha}|^2\big)}{8V}\\
&= \mathcal{E}[G,{f}](\mathcal{B}_{R_*})-\int_{\mathcal{D}}\Re\big(\overline{\Box_A{f}} \cdot D_{\partial_t} {f} \big)+\nabla^\mu G_{\mu\nu}\cdot G_0{}^{\nu} + F_{0\mu} {J}[{f}]^\mu,
\end{split}
\end{equation}
and
\begin{equation}\label{r wieghted hyperboloid}
\begin{split}
&\ \ \ \ \int_{\mathcal{B}_{R_*}}r^{p-2}\big(|D_L \Psi|^2+|\slashed{D} \Psi|^2\big)+r^{p}\big(|\alpha(G)|^2+|\rho(G)|^2+|\sigma(G)|^2\big)\\
&=\int_{\Sigma_+}\frac{r^p \big(|D_L\Psi|^2+r^2|\alpha|^2\big)(t_*+r)+r^p \big(|\slashed{D}\Psi|^2+r^2(|\rho|^2+|\sigma|^2)\big)(t_*-r)}{4r^2V} \\
&\ \ \ +\int_{\mathcal{D}}r^{p-3}\Big(p(|D_L \Psi|^2+r^2|\alpha(G)|^2\big)+(2-p)(|\slashed{D}\Psi|^2+r^2|\rho(G)|^2+r^2|\sigma(G)|^2)\Big) \\
&\ \ \ + \int_{\mathcal{D}} r^{p-1}\Re\big(\overline{\Box_A{f}} \cdot D_L \Psi\big)+r^p \nabla^\mu G_{\mu\nu}\cdot G_{L}{}^{\nu}+r^p F_{L\mu} {J}[{f}]^\mu.
\end{split}
\end{equation}
Here $\Psi=rf$.
\end{lemma}
\begin{proof}
We first derive the $r^p$-weighted energy identity. The following computations are similar to those in the proof of Lemma \ref{r weighted energy estimates}. We take $X=r^p L$, $\chi = r^{p-1}$ and $Y=\frac{p}{2}r^{p-2} |{f}|^2 L$ in \eqref{divergence of J} and then integrate on $\mathcal{D}$. The expressions of $\mathbf{D_1}$ and $\mathbf{D_2}$ (in \eqref{divergence of J}) are given in \eqref{C3}. According to the Stokes formula, we have
\begin{equation*}
\int_{\mathcal{B}_{R_*}}\widetilde{J}[G, {f}]^\mu n_\mu+\int_{\Sigma_+}\iota^*\big(i_X d\text{vol}_m\big)=\int_{\mathcal{D}} \mathbf{D_1}+\mathbf{D_2}.
\end{equation*}
On $\mathcal{B}_{R_*}$, the normal $n^\mu$ is $\partial_t$, and we have
\begin{equation*}
^{(X)}\widetilde{J}[G, {f}]^\mu n_\mu = \frac{1}{2}r^{p-2}\big(r^2\alpha(G)^2+r^2\rho(G)^2+r^2\sigma(G)^2+|D_L\Psi|^2+|\slashed{D}\Psi|^2\big)-\frac{1}{2}\big((p+1)r^{p-2}|{f}|^2+r^{p-1}\partial_r(|{f}|^2)\big).
\end{equation*}
Therefore, we have
\begin{equation*}
\begin{split}
\int_{\mathcal{B}_{R_*}}\,^{(X)}\widetilde{J}[G, {f}]^\mu n_\mu &=\frac{1}{2}\int_{\mathcal{B}_{r_1}^{r_2}} r^2\alpha(G)^2+r^2\rho(G)^2+r^2\sigma(G)^2+|D_L\Psi|^2+|\slashed{D}\Psi|^2 \\ &\quad-\frac{1}{2}\int_{r_1}^{r_2}\int_{\mathbf{S}^2}\underbrace{(p+1)r^{p}|{f}|^2+r^{p+1}\partial_r(|{f}|^2)}_{=\partial_r(r^{p+1}|{f}|^2)} d\vartheta dr\\
&=\frac{1}{2}L_1+\frac{1}{2}\int_{\mathcal{S}_{r_1}^{r_1}}r^{p-1}|{f}|^2.
\end{split}
\end{equation*}
On the other hand, in view of Lemma \ref{lemma contraction of volume form}, we need to compute $\,^{(X)}\widetilde{J}[G, {f}]_L$ and $\,^{(X)}\widetilde{J}[G, {f}]_{\underline{L}}$. In fact, we have
\begin{align*}
r^2\cdot \,^{(X)}\widetilde{J}[G, {f}]_L &= r^p \big(|D_L\Psi|^2+r^2|\alpha|^2\big)-\frac{1}{2}L(r^{p+1}|{f}|^2),\\
r^2\cdot \,^{(X)}\widetilde{J}[G, {f}]_{\underline{L}} &= r^p \big(|\slashed{D}\Psi|^2+r^2(|\rho|^2+|\sigma|^2)\big)+\frac{1}{2}{\underline{L}}(r^{p+1}|{f}|^2).
\end{align*}
Therefore, by Lemma \ref{lemma contraction of volume form} and replacing $L$ and ${\underline{L}}$ in the above formulae by \eqref{L in terms of U and V}, we obtain that
\begin{align*}
\iota^*\big(i_{\widetilde{J}}d\text{vol}_m\big)
&=-\frac{r^p \big(|D_L\Psi|^2+r^2|\alpha|^2\big)(t_*+r)+r^p \big(|\slashed{D}\Psi|^2+r^2(|\rho|^2+|\sigma|^2)\big)(t_*-r)}{4r^2V}d\mu_{\Sigma}\\
&\quad +\frac{1}{2}\partial_V (r^{p+1}|{f}|^2)dVd\vartheta
\end{align*}
Therefore, we have
\begin{equation}\label{CC6}
\begin{split}
\int_{\Sigma_+}\iota^*\big(i_X d\text{vol}_m\big) &=-\int_{\Sigma_+}\frac{r^p \big(|D_L\Psi|^2+r^2|\alpha|^2\big)(t_*+r)+r^p \big(|\slashed{D}\Psi|^2+r^2(|\rho|^2+|\sigma|^2)\big)(t_*-r)}{4r^2V} \\
&\quad -\frac{1}{2}\int_{\mathcal{S}_{r_1}^{r_1}}r^{p-1}|{f}|^2.
\end{split}
\end{equation}
The $r^p$-weighted energy identity follows immediately.
For the basic energy identity, we simply take $X=\partial_t$, $\chi = 0$ and $Y=0$ in \eqref{divergence of J}. The identity easily follows if we observe that
\begin{align*}
\iota^*\big(i_{\widetilde{J}}d\text{vol}_m\big) &=-\frac{(t_*+r)|D_L{f}|^2+ 2t_*|\slashed{D}{f}|^2+(t_*-r)|D_{\underline{L}}{f}|^2}{8V}d\mu_\Sigma.
\end{align*}
\end{proof}
\subsection{The energy estimates on the conformal compactification}\label{section conformal energy}
We first apply the theory of the previous section to the static solution $({f}, G)=(0,F[q_0])$. By the definition of the charge field $F[q_0]$, according to \eqref{conformal energy on Sigma plus}, we can bound
\begin{equation}\label{conformal energy for static solution}
\mathcal{E}\big[F[q_0]\big](\widetilde{\mathcal{B}}_{+}) \lesssim \int_{\Sigma_+} u_*^4|r^{-2}|^2+u_*^2 |r^{-3}|^2\lesssim 1.
\end{equation}
This is due to the fact that $|\rho(F[q_0])|$ decays like $r^{-2}$ while all the other components decay at least like $r^{-3}$. We also note that on $\Sigma_+$, $u_* \approx 1$.
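Concretely, since $d\mu_{\Sigma_+}\approx r^2\, dV d\vartheta$ by \eqref{volume form of Sigma} and $V\approx r$ on $\Sigma_+$, the integral indeed converges:
\begin{equation*}
\int_{\Sigma_+} u_*^4 r^{-4}+u_*^2r^{-6} \lesssim \int_{R_*}^{\infty} V^{-2}\,dV \lesssim R_*^{-1}.
\end{equation*}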
\begin{remark}
Despite its simplicity, the computation in \eqref{conformal energy for static solution} is of great conceptual importance. Indeed, if one considers the conformal energy on a constant time slice, the factor $u_*^4$ would be replaced by $r^2$ (near spatial infinity), so that the contribution of the charge part of the field would be divergent. However, \eqref{conformal energy for static solution} shows that the charge part behaves very well near null infinity. This is the reason why we choose the inversion to compactify the spacetime over the usual Penrose compactification.
\end{remark}
The main purpose of the current section is to obtain the $L^2$ bounds (up to two derivatives) of $(\widetilde{\phi},\widetilde{F})$ on $\mathcal{B}_{+}$. In view of \eqref{conformal energy estimate on Sigma plus without zeroth order terms}, it is reasonable to define the following quantity:
\begin{equation}
\mathcal{E}_+[{f},{G}]=\int_{\Sigma_+} r^2|{\alpha}|^2+|{\rho}|^2+|{\sigma}|^2+\frac{|\underline{{\alpha}}|^2}{r^2}+|D_L \Psi|^2+|D_L{f}|^2+|\slashed{D} {f}|^2+\frac{|D_{\underline{L}}{f}|^2}{r^2}.
\end{equation}
In what follows, we take $({f},{G})$ to be $(\phi^{(\mathbf{k})},\mathcal{L}_Z^{(\mathbf{k})} {\mathring{F}})$ for $|\mathbf{k}|\leq 2$. In this specific set-up, we first simplify the estimates in Lemma \ref{energy identities hyperboloid}.
We start with identity \eqref{basic energy hyperboloid}. Because $t_*\approx V\approx r$ and $|t_*-r|\approx r^{-1}$ on $\Sigma_+$, its lefthand side is approximately
\begin{equation*}
\int_{\Sigma_+} |\alpha^{(\mathbf{k})}|^2+|\rho^{(\mathbf{k})}|^2+|\sigma^{(\mathbf{k})}|^2+|D_L\phi^{(\mathbf{k})}|^2+|\slashed{D}\phi^{(\mathbf{k})}|^2+\frac{|\underline{\alpha}^{(\mathbf{k})}|^2+|D_{\underline{L}}\phi^{(\mathbf{k})}|^2}{r^2}.
\end{equation*}
The first term on the righthand side comes from the data and is hence bounded by ${\mathring{\varepsilon}}R_*^{-6-2\varepsilon_0}$. The second one consists precisely of the error terms that we have controlled in the exterior region (indeed, $\mathcal{D}\subset \mathcal{D}_{R_*}$), hence it is also bounded by ${\mathring{\varepsilon}}R_*^{-6-2\varepsilon_0}$. Therefore, via \eqref{basic energy hyperboloid}, we arrive at the following estimate:
\begin{equation}\label{basic energy est for phik Fk on Sigma plus}
\int_{\Sigma_+} |\alpha^{(\mathbf{k})}|^2+|\rho^{(\mathbf{k})}|^2+|\sigma^{(\mathbf{k})}|^2+|D_L\phi^{(\mathbf{k})}|^2+|\slashed{D}\phi^{(\mathbf{k})}|^2+\frac{|\underline{\alpha}^{(\mathbf{k})}|^2+|D_{\underline{L}}\phi^{(\mathbf{k})}|^2}{r^2} \lesssim {\mathring{\varepsilon}}R_*^{-6-2\varepsilon_0}.
\end{equation}
We then use the $p=2$ case of \eqref{r wieghted hyperboloid}. The lefthand side comes from the data and is hence bounded by ${\mathring{\varepsilon}}R_*^{-4-2\varepsilon_0}$. The second and third terms on the righthand side consist precisely of the error terms that we have controlled in the exterior region, hence they are also bounded by ${\mathring{\varepsilon}}R_*^{-4-2\varepsilon_0}$. Finally, we use $t_*\approx V\approx r$ and $|t_*-r|\approx r^{-1}$ for the first term on the righthand side. Therefore, we arrive at the following estimate:
\begin{equation}\label{weighted energy est for phik Fk on Sigma plus}
\int_{\Sigma_+}|D_L\psi^{(\mathbf{k})}|^2+r^2|\alpha^{(\mathbf{k})}|^2+|\slashed{D}\phi^{(\mathbf{k})}|^2+|\rho^{(\mathbf{k})}|^2+|\sigma^{(\mathbf{k})}|^2 \lesssim_{R_*} {\mathring{\varepsilon}}R_*^{-4-2\varepsilon_0}.
\end{equation}
As a conclusion, we obtain that
\begin{equation}\label{conf estimates aux}
\mathcal{E}_+[\phi^{(\mathbf{k})},\mathcal{L}_Z^{(\mathbf{k})}{\mathring{F}}]\lesssim 1.
\end{equation}
We recall that the implicit constant depends only on the size of the initial data $C_0$ defined before the main theorem.
\begin{remark}
From this point till the end of the paper, we will ignore the dependence on $R_*$ for the universal constants since $R_*$ is already fixed.
\end{remark}
We now have all the preparations to bound the $H^2$ norms of $(\widetilde{\phi},\widetilde{F})$ on $\mathcal{B}_+$.
\bigskip
We take $({f}, G)=(\phi, {\mathring{F}})$ in \eqref{conformal energy estimate on Sigma plus without zeroth order terms}. The first term on the righthand side of \eqref{conformal energy estimate on Sigma plus without zeroth order terms} is bounded by the initial data. Therefore, by taking $(\mathbf{k})=(0)$ in \eqref{conf estimates aux}, the estimate \eqref{conformal energy estimate on Sigma plus without zeroth order terms} gives
\begin{equation*}
\mathcal{E}(\widetilde{\phi}, \widetilde{{\mathring{F}}})(\mathcal{B}_+) \lesssim 1.
\end{equation*}
On the other hand, we know that $\widetilde{F}=\widetilde{{\mathring{F}}}+\widetilde{F}[q_0]$ and we have already shown in \eqref{conformal energy for static solution} that $\mathcal{E}\big(\widetilde{F}[q_0]\big)\lesssim 1$. Hence, we have
\begin{equation}\label{conformal energy zeroth order final}
\mathcal{E}(\widetilde{\phi}, \widetilde{F})(\mathcal{B}_+) \lesssim 1.
\end{equation}
We now assume $|\mathbf{k}|=1$ and take $({f},G)=(\widehat{D}_Z \phi, \mathcal{L}_Z F)$, where $Z$ corresponds to the index $(\mathbf{k})$. If $Z=\Omega_{ij}$, the commutator formula \eqref{commutator for derivative and conformal compactification} has no error term. We can then repeat the above procedure. Therefore, we have
\begin{equation*}
\mathcal{E}(\widetilde{D}_{\widetilde{\Omega_{ij}}}\widetilde{\phi}, \mathcal{L}_{\widetilde{\Omega_{ij}}}\widetilde{F})(\mathcal{B}_+) \lesssim 1,
\end{equation*}
where $\widetilde{\Omega_{ij}} =\widetilde{x_i}\partial_{\widetilde{x_j}}-\widetilde{x_j}\partial_{\widetilde{x_i}}$.
If $Z=T$, according to \eqref{commutator for derivative and conformal compactification}, we have $\widetilde{D}_{{\Phi}_*T}\widetilde{\phi}=\widetilde{\widehat{D}_T \phi}-\big(\widetilde{t}-\frac{R_*+1}{2R_*+1}\big)\widetilde{\phi}$.
Similar to the previous case, $\mathcal{E}(\widetilde{\widehat{D}_T \phi},\mathcal{L}_{{\Phi}_* T}\widetilde{F}=\widetilde{\mathcal{L}_T {\mathring{F}}})(\mathcal{B}_+)$ is bounded by \eqref{conformal energy estimate on Sigma plus without zeroth order terms} and \eqref{conf estimates aux}. Since $\widetilde{t}$ and its derivatives are bounded on $\mathcal{B}_+$, in view of the $L^\infty$ estimates on $\phi$ (which imply that $\widetilde{\phi}$ is bounded in $L^2$) and \eqref{conformal energy zeroth order final}, the energy contributed by $\big(\widetilde{t}-\frac{R_*+1}{2R_*+1}\big)\widetilde{\phi}$ is also bounded. Thus, we conclude that
\begin{equation}\label{A2}
\mathcal{E}(\widetilde{D}_{\widetilde{v}^2\widetilde{L}+ \widetilde{u}^2\widetilde{{\underline{L}}}}\widetilde{\phi}, \mathcal{L}_{\widetilde{v}^2\widetilde{L}+ \widetilde{u}^2 \widetilde{{\underline{L}}}}\widetilde{F})(\mathcal{B}_+) \lesssim 1.
\end{equation}
If $Z=K$, according to \eqref{commutator for derivative and conformal compactification}, we have $
\widetilde{D}_{{\Phi}_*K}\widetilde{\phi}= \widetilde{\widehat{D}_K \phi}-(R_*+1)^2\big(\widetilde{t}-\frac{R_*^2}{(R_*+1)(2R_*+1)}\big)\widetilde{\phi}$.
Similarly, since $\widetilde{t}$ and its derivatives on $\mathcal{B}_+$ are bounded, we can argue exactly in the same manner that
\begin{equation}\label{A3}
\mathcal{E}(\widetilde{D}_{\partial_{\widetilde{t}}}\widetilde{\phi}, \mathcal{L}_{\partial_{\widetilde{t}}}\widetilde{F})(\mathcal{B}_+) \lesssim 1.
\end{equation}
On the other hand, on $\mathcal{B}_+$, both $\widetilde{r}$ and its inverse are bounded (as well as their derivatives). We also have
\begin{equation*}
\widetilde{v}^2\widetilde{L}+ \widetilde{u}^2\widetilde{{\underline{L}}} = \frac{1}{2}\Big(\widetilde{r}^2+\big(\frac{R_*+1}{2R_*+1}\big)^2\Big)\partial_{\widetilde{t}}+\frac{R_*+1}{2R_*+1}\widetilde{r}\partial_{\widetilde{r}}.
\end{equation*}
Therefore, \eqref{A2} and \eqref{A3} together with all the previous estimates imply that
\begin{equation}\label{conformal side first order}
\mathcal{E}({\widetilde{\phi}, \widetilde{F}})(\mathcal{B}_+) + \mathcal{E}(\widetilde{D}\widetilde{\phi}, \nabla \widetilde{F})(\mathcal{B}_+) \lesssim 1.
\end{equation}
We now assume $|\mathbf{k}|=2$ and we take $({f}, G)=(\phi^{(\mathbf{2})}, \mathcal{L}_Z^{(\mathbf{2})}F)$. Since \eqref{commutator for derivative and conformal compactification} has no error term for $\Omega_{ij}$'s, it is immediate to see that
\begin{equation*}
\mathcal{E}(\widetilde{D}_{\widetilde{\Omega_{ij}}}\widetilde{D}_{\widetilde{\Omega'_{ij}}}\widetilde{\phi}, \mathcal{L}_{\widetilde{\Omega_{ij}}}\mathcal{L}_{\widetilde{\Omega'_{ij}}}\widetilde{F})(\mathcal{B}_+) \lesssim 1.
\end{equation*}
We now consider the case $({f}, G)=(\widehat{D}_T\widehat{D}_T \phi, \mathcal{L}_T\mathcal{L}_T F)$. On $\widetilde{t}=0$ (or $\mathcal{B}_+$), we have
\begin{equation*}
\widetilde{D}_{{\Phi}_*T}\widetilde{D}_{{\Phi}_*T}\widetilde{\phi}=\widetilde{\widehat{D}_T\widehat{D}_T \phi}+\frac{2R_*+2}{2R_*+1}\widetilde{D}_{{\Phi}_* T}\widetilde{\phi}+\Big(2\big(\frac{R_*+1}{2R_*+1}\big)^2+\widetilde{r}^2\Big)\widetilde{\phi}.
\end{equation*}
We can bound all the terms on the righthand side and we obtain
\begin{equation}\label{A4}
\mathcal{E}(\widetilde{D}_{\widetilde{v}^2\widetilde{L}+ \widetilde{u}^2\widetilde{{\underline{L}}}}\widetilde{D}_{\widetilde{v}^2\widetilde{L}+ \widetilde{u}^2\widetilde{{\underline{L}}}}\widetilde{\phi}, \mathcal{L}_{\widetilde{v}^2\widetilde{L}+ \widetilde{u}^2 \widetilde{{\underline{L}}}}\mathcal{L}_{\widetilde{v}^2\widetilde{L}+ \widetilde{u}^2 \widetilde{{\underline{L}}}}\widetilde{F})(\mathcal{B}_+) \lesssim 1.
\end{equation}
We now consider the case $({f}, G)=(\widehat{D}_K\widehat{D}_K \phi, \mathcal{L}_K\mathcal{L}_K F)$. On $\mathcal{B}_+$, we have
\begin{equation*}
\frac{1}{4}\widetilde{D}_{\widetilde{T}}\widetilde{D}_{\widetilde{T}}\widetilde{\phi}=\widetilde{\widehat{D}_K\widehat{D}_K \phi}+\frac{R_*^2(R_*+1)}{2R_*+1}\widetilde{D}_{\widetilde{T}}\widetilde{\phi}+\Big(\frac{1}{2}(R_*+1)^2+\frac{(R_*+1)^2R_*^4}{(2R_*+1)^2}\Big)\widetilde{\phi}.
\end{equation*}
This implies
\begin{equation}\label{A5}
\mathcal{E}(\widetilde{D}_{\partial_{\widetilde{t}}}\widetilde{D}_{\partial_{\widetilde{t}}}\widetilde{\phi}, \mathcal{L}_{\partial_{\widetilde{t}}}\mathcal{L}_{\partial_{\widetilde{t}}}\widetilde{F})(\mathcal{B}_+) \lesssim 1.
\end{equation}
Similar to \eqref{conformal side first order}, by combining \eqref{A4}, \eqref{A5} and all the previous estimates, we finally obtain that
\begin{equation}\label{conformal side second order}
\mathcal{E}({\widetilde{\phi}, \widetilde{F}})(\mathcal{B}_+) + \mathcal{E}(\widetilde{D}\widetilde{\phi}, \nabla \widetilde{F})(\mathcal{B}_+)+\mathcal{E}(\widetilde{D}^2\widetilde{\phi}, \nabla^2 \widetilde{F})(\mathcal{B}_+) \lesssim 1.
\end{equation}
Finally, to obtain the $H^2$ bound of $({\widetilde{\phi}, \widetilde{F}})$ on the entire ball $\mathcal{B}_{\frac{R_*+1}{2R_*+1}}$, it remains to bound the contribution from $\mathcal{B}_{\frac{R_*+1}{2R_*+1}}-\mathcal{B}_+ =\Phi(\Sigma_-)$. According to the theory of Klainerman-Machedon \cite{MKGkl}, since $\Sigma_-$ is bounded, the solution is bounded up to two derivatives in $L^2$ norms. This immediately implies that the $H^2$ energy of $({\widetilde{\phi}, \widetilde{F}})$ on $\mathcal{B}_{\frac{R_*+1}{2R_*+1}}-\mathcal{B}_+$ is bounded. We therefore obtain that
\begin{equation}\label{conformal side H2 bound}
\mathcal{E}({\widetilde{\phi}, \widetilde{F}})(\mathcal{B}_{\frac{R_*+1}{2R_*+1}}) + \mathcal{E}(\widetilde{D}\widetilde{\phi}, \nabla \widetilde{F})(\mathcal{B}_{\frac{R_*+1}{2R_*+1}})+\mathcal{E}(\widetilde{D}^2\widetilde{\phi}, \nabla^2 \widetilde{F})(\mathcal{B}_{\frac{R_*+1}{2R_*+1}}) \lesssim 1.
\end{equation}
By the Main Theorem proved in Klainerman-Machedon \cite{MKGkl}, we thus have uniform $H^2$ control of $\widetilde{\phi}$ and $\widetilde{F}$. By the Sobolev inequality on $\mathcal{B}_{\frac{R_*+1}{2R_*+1}}$, we conclude the pointwise bound
\begin{equation*}
|\widetilde{\phi}|+|\widetilde{D}\widetilde{\phi}|+|\widetilde{F}|\lesssim 1.
\end{equation*}
Finally, the formulae in \eqref{conformal formulae for null components} provide the peeling estimates for the null components of $D\phi$ and $F$ in the interior region. Together with the pointwise estimates derived in the exterior region, this completes the proof of the main theorem.
\section{Introduction}
Tissue classification~(\cite{gurcan2009histopathological,FUCHS2011515}), also known as tissue phenotyping, aims to use computer algorithms to automatically recognize different tissue types in Whole Slide Images (WSIs). It is one of the fundamental tasks in computational pathology~(\cite{srinidhi2021deep}), as it can parse the landscape of the tumor microenvironment for precise predictions of cancer diagnosis~(\cite{bulten2022artificial}), prognosis~(\cite{fu2020pan,pages2018international}) and treatment response~(\cite{vanguri2022multimodal}). With the advancement of deep learning algorithms and the growing number of open datasets~(\cite{kather2019predicting,zhao2020artificial}), this problem has been well studied, with outstanding classification performance~(\cite{hatami2021deep}). In clinical practice, however, it still faces ethical, regulatory and legal obstacles, since centralized data collection may lead to privacy leakage.
The Federated Learning (FL)~(\cite{yang2019federated}) framework provides a promising solution to protect user privacy by sharing only the intermediate results or the model parameters instead of the raw data, and it has been widely studied in medical image analysis~(\cite{pati2022federated,sheller2020federated}). However, only very few attempts~(\cite{saldanha2022swarm,shen2022tmi,ke2021isbi}) have been made in computational pathology, and the research progress still lags behind that of other medical image modalities~(\cite{rauniyar2022federated}) due to the following two obstacles.
The first one is the data dependency problem. Since most of the existing FL frameworks are built upon deep learning models, they are data-hungry and commonly require a large amount of well-annotated samples. However, labeling histopathological images is time-consuming, expertise-dependent and expensive~(\cite{greenwald2022whole,pati2021reducing}). Without enough training samples, existing models may not achieve favorable performance.
Another obstacle is the communication overhead. The training procedure of traditional FL models needs multiple cloud-client iterations to achieve global convergence. However, deep learning models have tens of millions of parameters, which greatly increases the communication burden over multiple communication rounds. A lack of training samples may further amplify this burden, because deep learning models commonly require more iterations to converge when training samples are limited. Moreover, frequent communications may increase the chance of being attacked, for example by man-in-the-middle attacks~(\cite{wang2020man}).
Therefore, it is urgent to construct a data-efficient and communication-efficient FL model for histopathological tissue classification. In this paper, we propose a simple and effective solution which considers not only the data sharing problem, but also data dependency, communication efficiency, model robustness and model inversion attacks. Our proposed model, Federated Deep-Broad Learning (\textit{FedDBL} in short), contains three integrated components: a common federated learning framework, a pre-trained deep learning (DL) backbone and a broad learning (BL) inference system~(\cite{BLS}). The federated learning framework enables decentralized training, avoiding data sharing across different medical centers or institutions. The pre-trained DL backbone provides stable and robust deep features when there are not enough training labels, and it also effectively avoids model inversion attacks, since no back-propagation is performed to compute gradients. The BL system is a lightweight classifier with good approximation capability, which can greatly shorten the transmission time and overcome the data dependency problem. Fig.~\ref{fig:compare} comprehensively demonstrates the strengths of FedDBL compared with the centralized learning and conventional federated learning paradigms. To the best of our knowledge, this is the first FL-based model for histopathological tissue classification.
\begin{figure}[t!]
\includegraphics[width=\linewidth]{fig/Compare.pdf}
\caption{Overall comparison among centralized training, traditional DL-based FL and our proposed FedDBL paradigms. (a) Centralized learning gathers data from all the clients which cannot protect the patient's privacy. (b) Traditional DL-based FL preserves privacy by transmitting the model parameters to the central server without sharing the raw data. However, the communication overload highly depends on the model size and the number of communication rounds. (c) Our proposed FedDBL not only protects privacy, but also dramatically saves the communication burden through a super lightweight trainable broad learning system.}
\label{fig:compare}
\end{figure}
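To make the pipeline concrete, the following is a minimal Python sketch of the FedDBL idea, not the authors' implementation: the pre-trained backbone is replaced by a fixed stand-in projection, the broad-learning classifier is reduced to a closed-form ridge readout, and the one-round aggregation is plain averaging of the clients' readout weights. All three simplifications are our own assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_backbone(x, proj):
    """Stand-in for the pre-trained DL backbone: a fixed, non-trainable
    projection followed by ReLU. No gradients ever flow through it."""
    return np.maximum(x @ proj, 0.0)

def fit_broad_readout(feats, labels, n_classes, lam=1e-2):
    """Closed-form ridge readout as a stand-in for the BL system:
    W = (A^T A + lam*I)^{-1} A^T Y, with one-hot targets Y."""
    A = np.hstack([feats, np.ones((len(feats), 1))])  # append bias column
    Y = np.eye(n_classes)[labels]
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)

def predict(feats, W):
    A = np.hstack([feats, np.ones((len(feats), 1))])
    return (A @ W).argmax(axis=1)

# A shared frozen backbone and three clients (e.g. hospitals) of unequal size.
proj = rng.normal(size=(16, 32))
clients = []
for n in (40, 60, 80):
    y = rng.integers(0, 2, size=n)
    x = rng.normal(size=(n, 16)) + 3.0 * y[:, None]  # class-shifted blobs
    clients.append((x, y))

# Single communication round: each client uploads only its tiny readout
# matrix, and the server aggregates them (plain averaging, for illustration).
local_Ws = [fit_broad_readout(frozen_backbone(x, proj), y, 2) for x, y in clients]
W_global = np.mean(local_Ws, axis=0)

x_test = rng.normal(size=(200, 16))
y_test = rng.integers(0, 2, size=200)
x_test = x_test + 3.0 * y_test[:, None]
acc = (predict(frozen_backbone(x_test, proj), W_global) == y_test).mean()
print(f"global model accuracy: {acc:.2f}")
```

Because each client solves its readout in closed form, training is a single pass and the only uploaded object is the small weight matrix, which is what makes the one-round, low-bandwidth communication pattern possible.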
Extensive experiments with five-fold cross-validation are conducted to demonstrate the superiority of FedDBL in several aspects, including data dependency, communication efficiency, flexibility and the practicability of model encryption. With enough training data, FedDBL mostly outperforms conventional FL strategies and achieves comparable or even better classification performance compared with the centralized learning strategy. When reducing the training samples in the data dependency experiment, FedDBL still maintains a high-level performance and greatly outperforms both centralized learning and conventional FL frameworks, even with only 1\% of the training samples. FedDBL is also flexible to any deep learning architecture to support data- and communication-efficient histopathological tissue classification. Another spotlight of FedDBL is communication efficiency. Compared with conventional FL frameworks, FedDBL's one-round training manner reduces the upload workload with a ResNet-50 backbone from 4.609GB under traditional 50-round iterative training to 276.5KB (over 17,000 times smaller). Thanks to the tiny model size, FedDBL is also computationally efficient in model encryption, which can further upgrade the privacy protection level. The main contributions of this paper can be summarized as follows:
\begin{itemize}
\item We propose a novel federated learning approach (FedDBL) for histopathological tissue classification to preserve patients' privacy. To the best of our knowledge, FedDBL is the first FL-based approach tailored for histopathological tissue classification.
\item FedDBL is a simple, effective and easy-to-use algorithm that combines three classical methods: a robust pre-trained deep learning feature extractor, a fast broad learning inference system and a simple federated learning framework.
\item FedDBL is the first study to consider communication efficiency and data efficiency simultaneously, reducing the communication overhead of each client even with extremely limited training samples.
\item Extensive experiments demonstrate that FedDBL drastically relieves the dependence on training data and reduces the communication overhead while maintaining outstanding classification performance, which promotes its clinical practicability.
\end{itemize}
\section{Related Works}
\subsection{Histopathological Tissue Classification}
High-resolution WSIs offer a wide range of tissue phenotypes, but their pixel-level annotation is time-consuming and requires a great deal of biomedical knowledge~(\cite{srinidhi2021deep}), making patch-level histopathological tissue classification an alternative solution for automated analysis in computer-aided tumor diagnosis~(\cite{kather2019predicting,xue2021selective,abdeltawab2021pyramidal}).
Owing to the rapid development of computer vision, popular natural image classification models can be transferred to histopathological tissue phenotyping. However, the task still suffers from a data-dependency problem with a huge annotation burden~(\cite{ayyad2021role}), and various approaches have been proposed to reduce the annotation effort. \cite{han2022multi} proposed a multi-layer pseudo-supervision approach with a progressive dropout attention mechanism to convert patch-level labels into pseudo-pixel-level labels; an extra classification gate mechanism reduced the false-positive rate for non-predominant categories and in turn improved segmentation performance. \cite{xue2021selective} utilized a generative adversarial network (GAN) to generate pseudo samples that expand the training data. \cite{dolezal2022uncertainty} cropped WSIs into tiles to train an uncertainty quantification model and addressed domain shift on external validation data. To remove the annotation burden entirely, \cite{wang2022transformer} employed unsupervised contrastive learning to obtain a robustly initialized model with a moderate representation of the histopathological feature space. Our previous study (\cite{lin2022pdbl}) introduced pyramidal deep-broad learning (PDBL) as a pluggable module for any CNN backbone to further improve histopathological tissue classification performance.
Beyond annotation, another largely unexplored challenge is patient privacy. Only a few attempts~(\cite{saldanha2022swarm,saldanha2022direct}) have been made to apply federated learning to computational pathology, as discussed in the following subsection. To the best of our knowledge, this is the first study to consider privacy protection in histopathological tissue classification.
\subsection{Federated Learning}
\subsubsection{Federated Learning in Medical Image Analysis}
Because of ethical concerns, federated learning (FL) has been widely adopted in medical applications to preserve patients' privacy~(\cite{pati2022federated,warnat2021swarm,sheller2020federated}). In medical imaging, FL has witnessed a boost in interest~(\cite{kaissis2020secure}), for tasks such as MRI reconstruction~(\cite{guo2021multi,li2020multi}) and CT lesion segmentation~(\cite{yang2021federated}).
During the COVID-19 pandemic, applications drawing on data from different medical centers, or even different countries, became the most urgent demand in real-world clinical scenarios, and FL greatly advanced diagnostic performance~(\cite{bai2021advancing}). \cite{dayan2021federated} used data from 20 institutes across the globe to predict the future oxygen requirements of symptomatic COVID-19 patients. \cite{dou2021federated} proposed a federated model to detect COVID-19 lung abnormalities with good generalization capability on unseen multinational datasets.
\subsubsection{Federated Learning in Computational Histopathology}
For histopathological images, a swarm learning architecture with blockchain protocols has been proposed to predict mutational status~(\cite{saldanha2022swarm}). However, compared with other medical imaging modalities, few studies~(\cite{saldanha2022direct}) adopt federated learning for histopathological images, for the following reasons. First, the digitalization of pathology is not yet widespread; pathological diagnosis still relies on observing specimens under a microscope. Second, image annotation is an obstacle, since only pathologists are capable of labeling WSIs, which greatly increases the difficulty of acquiring well-annotated data. Third, due to the gigapixel resolution of WSIs, deep learning models are generally large, which increases the communication burden on the network.
There are technical solutions in FL to the high communication overhead, such as compressing the model~(\cite{reisizadeh2020fedpaq,jhunjhunwala2021adaptive}): \cite{reisizadeh2020fedpaq} proposed FedPAQ, which reduces the interactive overhead of FL by quantizing the model to lower bit-precision, and \cite{jhunjhunwala2021adaptive} proposed an adaptive quantization strategy for communication efficiency.
However, existing studies implicitly assume that enough samples are available for model training, and therefore may not address communication efficiency and the limited-data issue simultaneously~(\cite{kamp2021federated, zhang2023two}). In this study, we fully consider the specialties of histopathological images, the difficulty of data labeling and the communication efficiency required in real-world clinical scenarios, which have not previously been discussed in decentralized computational pathology.
\section{Methodology}
In this section, we introduce our framework, Federated Deep-Broad Learning (FedDBL), designed for privacy-preserving tissue classification with limited training samples and extremely low communication overhead. In the following subsections, we first describe the motivation and problem setting in Section~\ref{sub:problem-setting}. The overall framework and methodology of FedDBL are presented in Section~\ref{sub:FedDBL}. Finally, we give the implementation details in Section~\ref{sub:implementation}.
\subsection{Problem Setting}
\label{sub:problem-setting}
As a classical upstream task in computational pathology, tissue classification has achieved outstanding performance under the ideal condition of centralized learning with sufficient training samples. However, existing approaches face the following obstacles in real-world clinical scenarios.
\textbf{Annotation burden:} Collecting enough well-labeled training samples is expensive and time-consuming because it requires labelers with a medical background.
\textbf{Privacy preservation:} Raw data should not be shared across medical institutions (or clients), so as to preserve patients' privacy; transmitting raw data may violate medical ethics.
\textbf{Communication cost:} Communication overhead has always been a challenge for federated learning models, affected by many compound factors such as the model size, the number of communication rounds, the model convergence speed and the network bandwidth.
To resolve the aforementioned challenges, we propose a simple and effective FL-based framework, illustrated in Fig.~\ref{fig:FedDBL}. First, we abandon the conventional end-to-end training manner, since limited training samples may harm the robustness of the deep learning model and decrease the convergence speed. We therefore separate feature extraction and inference for local training in each client. A pre-trained deep feature extractor (CNN backbone) is introduced so that the extracted features are not affected by training-sample bias across clients, guaranteeing their robustness. An independent broad learning inference system~(\cite{BLS,lin2022pdbl}) then serves for fast inference. Finally, we apply classical weighted averaging, as in FedAvg~(\cite{mcmahan2017communication}), to fuse the broad learning inference systems from all clients.
\begin{figure*}[t]
\centering
\includegraphics[width=.975\linewidth]{fig/Framework.pdf}
\caption{The overall architecture of FedDBL with its three modules: the deep feature extraction module, the broad inference module and the federated decentralized module. (a) The deep feature extraction module extracts multi-scale deep-broad features, from low level to high level, with a pre-trained DL backbone; the features of all patches are stored in a feature bank. (b) The broad inference module performs fast inference via a broad learning system. (c) The federated decentralized module applies classical federated averaging to aggregate the broad learning weights from different clients.}
\label{fig:FedDBL}
\end{figure*}
\subsection{FedDBL Architecture and Formulation}
\label{sub:FedDBL}
As shown in Fig.~\ref{fig:FedDBL}, FedDBL consists of three modules, deep feature extraction module (DL-module), broad inference module (BL-module) and federated decentralized module (Fed-module). DL-module together with BL-module serves for local training on the client side. Fed-module is executed on the server side.
Algorithm~\ref{algorithm:FedDBL-server} provides the details of the entire FedDBL pipeline.
Let $\mathcal{D}_{1}, \mathcal{D}_{2}, \cdots, \mathcal{D}_k, \cdots, \mathcal{D}_{K}$ denote the local training sets of $K$ clients, where client $k$ holds $n_k$ samples. The total number of training samples is $N = \sum_{k=1}^{K}{n_k}$. For each sample $X$ with ground truth $Y$ in $\mathcal{D}_k$, the DL-module with pre-trained parameters $\Theta$ extracts the features and stores them in the feature bank $\mathbf{B}$. The BL-module then calculates the weights $W_{client}$ of the broad learning system. Through the federated aggregation approach, we obtain the global weights $W_{global}$. The workflows of the server and the clients are given in Algorithm~\ref{algorithm:FedDBL-server} and Algorithm~\ref{algorithm:FedDBL-client}, respectively.
\begin{algorithm}[!ht]
\caption{FedDBL framework (Server Execution)}
\label{algorithm:FedDBL-server}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input {A set of $K$ clients}
\Output {A global model $W_{global}$}
Prepare pre-trained DL backbone parameters $\Theta$
Initialize BL system setting
\For{each client $k$ \emph{\textbf{in parallel}}}{
$W_{client}^{k} \leftarrow $ClientExecution$\left(\Theta, \mathcal{D}_{k} \right)$
}
$W_{global} \leftarrow$ Fed-module$(W_{client}^1,\cdots,W_{client}^K)$
\textbf{return} $W_{global}$
\end{algorithm}
\begin{algorithm}[!ht]
\caption{FedDBL framework (Client Execution)}
\label{algorithm:FedDBL-client}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input {Pre-trained DL backbone $\Theta$, training set $\mathcal{D}$ with $n$ training samples}
\Output {Deep-broad learning model $W_{client}$.}
\tcc{DL-module}
\For{training sample $X$ in $\mathcal{D}$} {
\For{$s$-th stage in $\Theta$}{
$f_{X}^{s} \leftarrow \Theta^{s}\left(X\right)$ \tcp{Feature extraction}
$\mathbf{e}_{X}^{s}=\frac{1}{H_{X}^{s} \times W_{X}^{s}} \sum_{i=1}^{H_{X}^{s}} \sum_{j=1}^{W_{X}^{s}} f_{X}^{s}(i,j)$ \tcp{Adaptive global average pooling}
}
$\mathbf{b}_{X} = \mathbf{e}_{X}^{1} \parallel \mathbf{e}_{X}^{2} \parallel \cdots \parallel \mathbf{e}_{X}^{S}$ \tcp{Concatenation}
}
\textbf{Obtain} $\left\{\mathbf{b}_{i}|i=1,2,\cdots,n \right\}$\\
$\mathbf{B} \leftarrow \sigma\left( \left[\mathbf{b_1}^\mathbf{T}, \mathbf{b_2}^\mathbf{T}, \dots ,\mathbf{b}_{n}^\mathbf{T} \right]\right)^\mathbf{T}$ \tcp{Normalization transformation}
\tcc{BL-module}
Initialize BL system setting defined by central server \\
$\mathbf{B}^{+} \leftarrow \lim_{\lambda \to 0} \left(\mathbf{B} \mathbf{B}^{\mathbf{T}} + \lambda E \right)^{-1} \mathbf{B}^{\mathbf{T}}$ \tcp{Solve Pseudo-inverse}
$W_{client} \leftarrow \mathbf{B}^{+} Y$ \tcp{Calculate BL model weight}
\textbf{return} $W_{client}$
\end{algorithm}
\subsubsection{Deep Feature Extraction Module}
Standard DL training requires a large number of samples and repeated backpropagation to achieve good feature representation. With insufficient data, the training procedure can be unstable, leading to poor feature representation and model overfitting. Our previous study~(\cite{lin2022pdbl}) revealed that directly adopting a stable pre-trained model for feature extraction is more favorable to model performance than training the model with limited samples, even if the pre-trained model was trained on an unrelated image domain (ImageNet\footnote{https://image-net.org/}). Inspired by this idea, we use a pre-trained CNN model, with no further training, to extract deep features. Note that the choice of pre-trained model is flexible and can come from any image domain; we justify this flexibility experimentally in Section~\ref{sec:exp}. Another advantage of using pre-trained models is that they resist model inversion attacks, since the training samples are all unseen. To enrich the feature representation, we extract multi-stage features from low level to high level, as detailed below.
As illustrated in the DL-module of Algorithm~\ref{algorithm:FedDBL-client}, each client $k$ $(k \in \left[1, \cdots, K\right])$ downloads the pre-trained DL backbone as the feature extractor $\Theta$ and extracts the multi-stage deep features $\mathbf{b}_X$ of each training sample $X$ locally (we omit $k$ for simplicity), where $\mathbf{b}_X$ consists of the features $\Theta^s(X)$ of multiple stages $(s \in \left[1, \cdots, S\right])$. The features of the entire dataset $\mathcal{D}_k$ are stored in the feature bank $\mathbf{B}$, which is then passed to the broad inference module. Since neither the training data nor the feature bank is shared across clients, the deep feature extraction module carries no risk of privacy leakage.
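To make the per-sample computation concrete, the following minimal NumPy sketch mimics the DL-module lines of Algorithm~\ref{algorithm:FedDBL-client}: each stage's feature map is global-average-pooled into $\mathbf{e}_X^s$ and the pooled vectors are concatenated into $\mathbf{b}_X$. The stage shapes below are illustrative stand-ins for a real backbone's outputs, not part of FedDBL's specification.

```python
import numpy as np

def extract_deep_broad_feature(stage_maps):
    """Global-average-pool each stage's feature map f^s of shape
    (C_s, H_s, W_s) into e^s, then concatenate b_X = e^1 || ... || e^S."""
    pooled = [f.mean(axis=(1, 2)) for f in stage_maps]  # one C_s-vector per stage
    return np.concatenate(pooled)

# Illustrative stand-ins for the S stage outputs of a frozen backbone.
rng = np.random.default_rng(0)
stage_maps = [rng.random((64, 56, 56)),
              rng.random((128, 28, 28)),
              rng.random((256, 14, 14))]
b_X = extract_deep_broad_feature(stage_maps)  # one 448-dim deep-broad feature
```

In a PyTorch implementation the same pooling is typically done with an adaptive global average pooling layer over each stage's activation.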
\subsubsection{Broad Inference Module}
With the feature bank $\mathbf{B}$, each client $k$ can construct a local BL system~(\cite{BLS}) through the BL-module (Algorithm~\ref{algorithm:FedDBL-client}) for fast inference. By solving the optimization problem in Eq.~\eqref{eq:W_opt_1}, the optimal BL model $W_{client}$ can be obtained rapidly via the pseudo-inverse method (Eq.~\eqref{eq:W_opt_2}).
\begin{equation}
\label{eq:W_opt_1}
W_{client}=\underset{W_{init}}{{\arg\min}} \left\| \mathbf{B}W_{init}-Y \right\|_{2}^{2} + \gamma \left\|W_{init} \right\|_{2}^{2}
\end{equation}
\begin{equation}
\label{eq:W_opt_2}
W_{client} = \mathbf{B}^{+} Y = \lim_{\lambda \to 0} \left(\mathbf{B} \mathbf{B}^{\mathbf{T}} + \lambda E \right)^{-1} \mathbf{B}^{\mathbf{T}} Y
\end{equation}
where $Y$ represents the ground-truth label matrix, $\mathbf{B}$ is the feature bank in matrix form, $W_{init}$ denotes the initialized broad learning weights, $E$ is the identity matrix, $\lambda$ is a constant parameter and $\gamma$ is the regularization parameter. Solving the BL model by the pseudo-inverse method considerably reduces the computational burden while achieving high communication efficiency. For inference, after extracting the deep features of the test samples, the prediction scores are calculated by $Y_{test}=\mathbf{B}_{test} W_{client}$, and the class with the largest probabilistic value is taken as the predicted label.
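As a sanity check of the closed form, the sketch below solves the ridge problem of Eq.~\eqref{eq:W_opt_1} with NumPy. For numerical convenience it uses the equivalent normal-equation form with a small fixed $\lambda$ in place of the $\lambda \to 0$ limit; the toy feature bank, sample count and class count are invented for illustration.

```python
import numpy as np

def fit_bl_weights(B, Y, lam=1e-6):
    """Ridge-regularized closed form of Eq. (1): a small lam keeps the
    inverse well-conditioned instead of taking the lambda -> 0 limit."""
    d = B.shape[1]
    return np.linalg.solve(B.T @ B + lam * np.eye(d), B.T @ Y)

# Toy client: n = 200 samples, d = 32 features, 4 one-hot classes.
rng = np.random.default_rng(1)
B = rng.standard_normal((200, 32))        # feature bank
Y = np.eye(4)[rng.integers(0, 4, 200)]    # ground-truth label matrix
W_client = fit_bl_weights(B, Y)           # (32, 4) BL model weights
pred = (B @ W_client).argmax(axis=1)      # class with the largest score
```

A single linear solve replaces iterative backpropagation, which is what makes the local training in BL-module fast.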
Thanks to the lightweight broad learning model $W_{client}$, communication efficiency is drastically improved compared with conventional DL-based FL frameworks.
\subsubsection{Federated Decentralized Module}
In this module, we adopt a federated learning framework for decentralized learning. Given the broad learning model $W_{client}^k$ of each client $k$, we first upload the models from all clients to the central server, where general federated aggregation methods can be applied. Here, we use the common weighted-averaging scheme adopted in FedAvg~(\cite{mcmahan2017communication}), FedProx~(\cite{li2020federated}) and FedPAQ~(\cite{reisizadeh2020fedpaq}).
\begin{equation}
\label{eq:W_global}
W_{global}=\sum_{k=1}^{K} \frac{n_{k}}{N} W_{client}^{k}
\end{equation}
where $W_{global}$ is the global model on the server, $n_k$ is the number of training samples in client $k$ and $N$ is the total number of training samples, so a larger training set contributes more to the global model. Since the broad learning model is shared only once, both communication efficiency and patients' privacy are guaranteed.
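Because each $W_{client}^{k}$ is just a small matrix, Eq.~\eqref{eq:W_global} reduces to a few array operations; a minimal sketch with made-up client sizes is:

```python
import numpy as np

def fed_aggregate(client_weights, client_sizes):
    """Eq. (3): W_global = sum_k (n_k / N) * W_client^k."""
    N = sum(client_sizes)
    return sum((n / N) * W for W, n in zip(client_weights, client_sizes))

# Two hypothetical clients holding 1,000 and 3,000 samples.
W1, W2 = np.ones((32, 4)), 3 * np.ones((32, 4))
W_global = fed_aggregate([W1, W2], [1000, 3000])
# Every entry is 0.25 * 1 + 0.75 * 3 = 2.5: the larger client dominates.
```

The same weighted average is what FedAvg applies to full network parameters; here it only touches kilobytes of BL weights.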
\subsection{Implementation Details}
\label{sub:implementation}
All of our experiments are implemented in PyTorch on a workstation with an NVIDIA RTX 3090 GPU and an Intel i9-11900K CPU (16 threads). We use the cross-entropy loss for the baseline centralized training with batch size $20$. The SGD optimizer is configured with a learning rate of $1\times10^{-3}$, a weight decay of $1\times10^{-4}$ and a momentum of 0.9. Patches are $224 \times 224$ pixels, extracted from WSIs at $20\times$ magnification. The number of clients depends on the dataset.
We adopt three well-known federated aggregation methods for comparison: FedAvg (\cite{mcmahan2017communication}), FedProx (\cite{li2020federated}) and FedPAQ (\cite{reisizadeh2020fedpaq}). The centralized model is trained as the baseline. FedProx has a parameter $\mu$ that adjusts the effect of the proximal term on the loss function; here we set $\mu=1$, which yields better performance.
\section{Experiments}
\label{sec:exp}
In this section, we present the details of the datasets and conduct various experiments to demonstrate the performance and efficiency of the proposed FedDBL. Section~\ref{sub:datasets} introduces the two open datasets and the experimental settings of the federated learning framework. In Section~\ref{sub:one-round}, we compare FedDBL with centralized-learning, conventional federated-learning and one-round federated-learning baselines. The effectiveness is discussed comprehensively in Section~\ref{sub:multi-round}. We use accuracy and macro F1-score as the evaluation metrics in all experiments.
\subsection{Datasets and Experimental Settings}
\label{sub:datasets}
\begin{table*}[ht]
\centering
\caption{Statistics of Multi-center CRC. \#1 denotes TCGA, \#2 denotes Kather, \#3 denotes Guangdong Provincial People’s Hospital and \#4 denotes Yunnan Cancer Hospital.}
\begin{threeparttable}
\begin{tabular}{p{0.04\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.07\textwidth}}
\toprule
&ADI &BACK &DEB &LYM &MUC &MUS &NORM &STR &TUM &\textbf{Total} \cr
\midrule
\#1\centering &\makecell[r]{10,065} &\makecell[r]{10,736} &\makecell[r]{10,603} &\makecell[r]{2,340}
&\makecell[r]{9,398} &\makecell[r]{12,974} &\makecell[r]{10,003} &\makecell[r]{10,081}
&\makecell[r]{12,899} &\makecell[r]{89,099} \cr
\#2\centering &\makecell[r]{10,407} &\makecell[r]{10,566} &\makecell[r]{11,512} &\makecell[r]{11,577}
&\makecell[r]{8,896} &\makecell[r]{13,536} &\makecell[r]{8,763} &\makecell[r]{10,446}
&\makecell[r]{14,317} &\makecell[r]{100,000} \cr
\#3\centering &\makecell[r]{10,000} &\makecell[r]{22,565} &\makecell[r]{9,999} &\makecell[r]{5,831}
&\makecell[r]{10,737} &\makecell[r]{10,000} &\makecell[r]{13,368} &\makecell[r]{12,584}
&\makecell[r]{10,000} &\makecell[r]{105,084} \cr
\#4\centering &\makecell[r]{2,500} &\makecell[r]{2,500} &\makecell[r]{2,500} &\makecell[r]{2,500}
&\makecell[r]{2,500} &\makecell[r]{2,500} &\makecell[r]{2,500} &\makecell[r]{2,500}
&\makecell[r]{2,500} &\makecell[r]{22,500} \cr
\midrule
\textbf{Total}\centering &\makecell[r]{32,972} &\makecell[r]{46,367} &\makecell[r]{34,614} &\makecell[r]{22,228}
&\makecell[r]{31,531} &\makecell[r]{39,010} &\makecell[r]{34,634} &\makecell[r]{35,611}
&\makecell[r]{39,716} &\makecell[r]{316,683} \cr
\bottomrule
\hspace{1mm}
\end{tabular}
\end{threeparttable}
\label{tab:Multi-CRC}
\end{table*}
\begin{table}[t]
\centering
\caption{Statistics of BCSS. \#1, \#2 and \#3 are the datasets of three clients.}
\begin{threeparttable}
\begin{tabular}{p{0.03\textwidth}>{\makecell[r]}
p{0.05\textwidth}>{\makecell[r]}
p{0.05\textwidth}>{\makecell[r]}
p{0.05\textwidth}>{\makecell[r]}
p{0.05\textwidth}>{\makecell[r]}
p{0.05\textwidth}}
\toprule
&TUM &STR &LYM &NEC &\textbf{Total} \cr
\midrule
\#1\centering &\makecell[r]{2,016} &\makecell[r]{598} &\makecell[r]{220} &\makecell[r]{217} &\makecell[r]{3,051} \cr
\#2\centering &\makecell[r]{1,962} &\makecell[r]{987} &\makecell[r]{269} &\makecell[r]{372} &\makecell[r]{3,590} \cr
\#3\centering &\makecell[r]{718} &\makecell[r]{704} &\makecell[r]{127} &\makecell[r]{88} &\makecell[r]{1,637} \cr
\midrule
\textbf{Total}\centering &\makecell[r]{4,696} &\makecell[r]{2,289} &\makecell[r]{616} &\makecell[r]{677} &\makecell[r]{8,278}\cr
\bottomrule
\hspace{1mm}
\end{tabular}
\end{threeparttable}
\label{tab:BCSS}
\end{table}
\textbf{Multi-center CRC:} This is a multi-center dataset~(\cite{zhao2020artificial,kather2019predicting}) of colorectal cancer (CRC). The Kather dataset (\cite{kather2019predicting}) defined nine tissue types in H\&E-stained WSIs, including adipose (ADI), background (BACK), debris (DEB), lymphocytes (LYM), mucus (MUC), smooth muscle (MUS), normal colon mucosa (NORM), cancer-associated stroma (STR) and colorectal adenocarcinoma epithelium (TUM). It contains 100k patches extracted from 86 WSIs. Following Kather, \cite{zhao2020artificial} released another CRC dataset from three medical centers, comprising 89.1k patches (85 slides) from The Cancer Genome Atlas (TCGA), 105.1k patches (106 slides) from Guangdong Provincial People's Hospital and 22.5k patches (48 slides) from Yunnan Cancer Hospital. All patches share the same resolution of $224\times 224$ at $20\times$ magnification. Table~\ref{tab:Multi-CRC} summarizes the statistics of each dataset.
\textbf{BCSS}: We also introduce a breast cancer dataset. Breast Cancer Semantic Segmentation (BCSS)~(\cite{amgad2019structured}) is an open challenge released on Grand Challenge\footnote{https://bcsegmentation.grand-challenge.org/}. It contains 151 ROI images with pixel-level annotations from WSIs retrieved from TCGA. According to the naming convention in the supplementary document of BCSS, the ROIs come from 21 different medical centers/hospitals. To generate a patch-level dataset, we first divide the ROIs into three clients, each covering 7 medical centers. We then crop each ROI into $224\times 224$-pixel patches with a sliding window of step size 120 pixels at $20\times$ objective magnification. Since this dataset is long-tailed, we keep only the four predominant classes: tumor (TUM), stroma (STR), lymphocytic infiltrate (LYM), and necrosis or debris (NEC). Patches in which the majority class covers more than 95\% of the area are kept; the others are discarded as ambiguous. Finally, a total of 8,278 patches remain, and the size of each client's dataset is shown in Table~\ref{tab:BCSS}.
\textbf{Experimental Settings:}
We construct the federated learning environment as follows. Multi-center CRC comprises four clients, following the dataset settings of the original papers; BCSS is split into three clients due to its limited training samples. For each client, the local dataset is randomly split into a training set and a test set with a ratio of $7:3$. Then, we randomly sample seven incremental subsets comprising $\left[1\%, 5\%, 10\%, 30\%, 50\%, 70\%, 100\%\right]$ of the training set. For the five-fold cross-validation experiment, we repeat this random sampling five times for each proportion. In addition, we simply combine the training sets as well as the test sets of all clients for the centralized-learning comparison.
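The per-client sampling protocol can be sketched as follows. `make_subsets` is a hypothetical helper, not taken from the FedDBL code base; whether the incremental subsets are nested is an implementation choice, and here they are drawn as nested prefixes of a single shuffle.

```python
import random

PROPORTIONS = (0.01, 0.05, 0.10, 0.30, 0.50, 0.70, 1.00)

def make_subsets(train_indices, proportions=PROPORTIONS, seed=0):
    """Shuffle a client's training indices once, then take prefixes so
    each larger subset contains the smaller ones (one cross-validation fold)."""
    shuffled = list(train_indices)
    random.Random(seed).shuffle(shuffled)
    return {p: shuffled[:max(1, round(len(shuffled) * p))] for p in proportions}

# One fold for a client with 100 local training patches.
subsets = make_subsets(range(100), seed=0)
```

Repeating this with five different seeds yields the five folds used for cross-validation at every proportion.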
\begin{table*}[t]
\centering
\caption{Accuracy of the five-fold cross-validation experiment for different proportions of the training data under one-round training. Centralized methods serve as baselines; \textbf{bold} marks the highest score among federated algorithms and \textcolor{red}{red} the highest among all methods, including centralized learning. ResNet-50 and CTransPath indicate CNN backbones pre-trained on ImageNet and on pathology images, respectively. The performance of each fold can be found in the supplemental material.}
\begin{threeparttable}
\setlength{\tabcolsep}{2.5mm}{
\begin{tabular}{p{0.11\textwidth}>{}
p{0.1\textwidth}>{}
p{0.15\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}}
\toprule
\multirow{2}{*}{Datasets} & \multirow{2}{*}{Base model} & \multirow{2}{*}{Models} & \multicolumn{7}{c}{Accuracy} \cr
\cmidrule(lr){4-10}
& & &1\% &5\% &10\% &30\% &50\% &70\% &100\% \cr
\midrule
\multirow{10}{*}{\makecell[l]{Multi-center \\ CRC}}
&\multirow{7}{*}{ResNet-50}
&\emph{Centralized} &0.7185 &0.7913 &0.8819 &0.9033 &0.9306 &\textcolor{red}{0.9414} &0.9450 \cr
&&\emph{Centralized-FC} &0.8334 &\textcolor{red}{0.8970} &0.9129 &0.9260 &0.9378 &0.9412 &\textcolor{red}{0.9458} \cr
&&\emph{FedAvg} &0.1465 &0.3095 &0.4025 &0.4145 &0.4461 &0.4823 &0.4345 \cr
&&\emph{FedProx} &0.1900 &0.2976 &0.4332 &0.4621 &0.4663 &0.4448 &0.4818 \cr
&&\emph{FedPAQ} &0.1984 &0.3373 &0.3798 &0.5208 &0.4778 &0.4919 &0.4994 \cr
&&\emph{FedAvg-FC} &0.7942 &\textbf{0.8552} &0.8628 &0.8817 &0.8900 &0.8857 &0.8977 \cr
&&\emph{FedDBL(\textbf{ours})} &\textbf{\textcolor{red}{0.8832}} &0.8456 &\textbf{\textcolor{red}{0.9229}}
&\textbf{\textcolor{red}{0.9411}} &\textbf{\textcolor{red}{0.9410}} &\textbf{0.9413} &\textbf{0.9399} \cr
\cdashline{2-10}
&\multirow{3}{*}{\centering CTransPath}
&\emph{Centralized-FC} &\textcolor{red}{0.9390} &0.9594 &\textcolor{red}{0.9670} &\textcolor{red}{0.9756} &\textcolor{red}{0.9788} &\textcolor{red}{0.9801} &\textcolor{red}{0.9817} \cr
&&\emph{FedAvg-FC} &0.9074 &0.9382 &0.9455 &0.9536 &0.9563 &0.9577 &0.9595 \cr
&&\emph{FedDBL(\textbf{ours})} &\textbf{0.9213} &\textbf{\textcolor{red}{0.9640}} &\textbf{0.9654} &\textbf{0.9663} &\textbf{0.9668} &\textbf{0.9669} &\textbf{0.9669} \cr
\midrule
\multirow{10}{*}{\centering BCSS}
&\multirow{7}{*}{ResNet-50}
&\emph{Centralized} &0.7657 &0.4779 &0.5726 &0.8676 &0.8836 &0.8944 &0.9408 \cr
&&\emph{Centralized-FC} &0.5806 &0.8106 &0.8640 &0.9345 &0.9490 &0.9543 &0.9600 \cr
&&\emph{FedAvg} &0.5972 &0.7495 &0.7062 &0.6959 &0.6611 &0.5889 &0.6420 \cr
&&\emph{FedProx} &0.5951 &0.7277 &0.7158 &0.7155 &0.6654 &0.6631 &0.6308 \cr
&&\emph{FedPAQ} &0.5931 &0.6597 &0.6802 &0.6822 &0.6254 &0.6454 &0.6170 \cr
&&\emph{FedAvg-FC} &0.5740 &0.6234 &0.7714 &0.8754 &0.9014 &0.9259 &0.9365 \cr
&&\emph{FedDBL(\textbf{ours})} &\textbf{\textcolor{red}{0.9012}} &\textbf{\textcolor{red}{0.9511}} &\textbf{\textcolor{red}{0.9603}}
&\textbf{\textcolor{red}{0.9711}} &\textbf{\textcolor{red}{0.9745}} &\textbf{\textcolor{red}{0.9731}}&\textbf{\textcolor{red}{0.9652}} \cr
\cdashline{2-10}
&\multirow{3}{*}{\centering CTransPath}
&\emph{Centralized-FC} &0.6159 &0.5904 &0.6692 &0.8886 &0.9435 &0.9609 &0.9726 \cr
&&\emph{FedAvg-FC} &0.5858 &0.5772 &0.5676 &0.6689 &0.8072 &0.8537 &0.9072 \cr
&&\emph{FedDBL(\textbf{ours})} &\textbf{\textcolor{red}{0.9593}} &\textbf{\textcolor{red}{0.9754}} &\textbf{\textcolor{red}{0.9777}}
&\textbf{\textcolor{red}{0.9770}} &\textbf{\textcolor{red}{0.9834}} &\textbf{\textcolor{red}{0.9883}} &\textbf{\textcolor{red}{0.9910}} \cr
\bottomrule
\hspace{1mm}
\end{tabular}}
\end{threeparttable}
\label{tab:one-round-accuracy}
\end{table*}
\begin{table*}[th]
\centering
\caption{F1-scores of the five-fold cross-validation experiment for different proportions of the training data under one-round training. Centralized methods serve as baselines; \textbf{bold} marks the highest score among federated algorithms and \textcolor{red}{red} the highest among all methods, including centralized learning. ResNet-50 and CTransPath indicate CNN backbones pre-trained on ImageNet and on pathology images, respectively. The performance of each fold can be found in the supplemental material.}
\begin{threeparttable}
\setlength{\tabcolsep}{2.5mm}{
\begin{tabular}{p{0.11\textwidth}>{}
p{0.1\textwidth}>{}
p{0.15\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}}
\toprule
\multirow{2}{*}{Datasets} & \multirow{2}{*}{Base model} & \multirow{2}{*}{Models} & \multicolumn{7}{c}{F1-score} \cr
\cmidrule(lr){4-10}
& & &1\% &5\% &10\% &30\% &50\% &70\% &100\% \cr
\midrule
\multirow{10}{*}{\makecell[l]{Multi-center \\ CRC}}
&\multirow{7}{*}{ResNet-50}
&\emph{Centralized} &0.7191 &0.7863 &0.8801 &0.9027 &0.9300 &0.9415 &0.9451 \cr
&&\emph{Centralized-FC} &0.8301 &\textcolor{red}{0.8977} &0.9137 &0.9265 &0.9383 &0.9417 &\textcolor{red}{0.9463} \cr
&&\emph{FedAvg} &0.0669 &0.2203 &0.3214 &0.3250 &0.3240 &0.3829 &0.3366 \cr
&&\emph{FedProx} &0.1086 &0.2110 &0.3468 &0.3576 &0.3627 &0.3468 &0.3981 \cr
&&\emph{FedPAQ} &0.1146 &0.2527 &0.2789 &0.4285 &0.3773 &0.3930 &0.4032 \cr
&&\emph{FedAvg-FC} &0.7912 &\textbf{0.8527} &0.8610 &0.8805 &0.8896 &0.8841 &0.8971 \cr
&&\emph{FedDBL(\textbf{ours})} &\textbf{\textcolor{red}{0.8833}} &0.8502 &\textbf{\textcolor{red}{0.9268}} &\textbf{\textcolor{red}{0.9443}}
&\textbf{\textcolor{red}{0.9443}} &\textbf{\textcolor{red}{0.9445}} &\textbf{0.9432} \cr
\cdashline{2-10}
&\multirow{3}{*}{\centering CTransPath}
&\emph{Centralized-FC} &\textcolor{red}{0.9396} &0.9599 &\textcolor{red}{0.9673} &\textcolor{red}{0.9757} &\textcolor{red}{0.9789} &\textcolor{red}{0.9802} &\textcolor{red}{0.9817} \cr
&&\emph{FedAvg-FC} &0.9031 &0.9385 &0.9459 &0.9541 &0.9568 &0.9582 &0.9600 \cr
&&\emph{FedDBL(\textbf{ours})} &\textbf{0.9217} &\textbf{\textcolor{red}{0.9643}} &\textbf{0.9657} &\textbf{0.9666} &\textbf{0.9671} &\textbf{0.9672} &\textbf{0.9672} \cr
\midrule
\multirow{10}{*}{\centering BCSS}
&\multirow{7}{*}{ResNet-50}
&\emph{Centralized} &0.5870 &0.2228 &0.4280 &0.8196 &0.8081 &0.8245 &0.8974 \cr
&&\emph{Centralized-FC} &0.2056 &0.4846 &0.7297 &0.8979 &0.9231 &0.9304 &0.9404 \cr
&&\emph{FedAvg} &0.2300 &0.4059 &0.3689 &0.3433 &0.2962 &0.2176 &0.2798 \cr
&&\emph{FedProx} &0.2276 &0.3968 &0.3578 &0.3600 &0.3024 &0.3052 &0.2633 \cr
&&\emph{FedPAQ} &0.2210 &0.3020 &0.3157 &0.3328 &0.2766 &0.2837 &0.2582 \cr
&&\emph{FedAvg-FC} &0.2004 &0.2650 &0.4161 &0.7445 &0.8192 &0.8764 &0.8959 \cr
&&\emph{FedDBL(\textbf{ours})} &\textbf{\textcolor{red}{0.8471}} &\textbf{\textcolor{red}{0.9227}} &\textbf{\textcolor{red}{0.9413}} &\textbf{\textcolor{red}{0.9578}} &\textbf{\textcolor{red}{0.9638}} &\textbf{\textcolor{red}{0.9621}} &\textbf{\textcolor{red}{0.9550}} \cr
\cdashline{2-10}
&\multirow{3}{*}{\centering CTransPath}
&\emph{Centralized-FC} &0.2572 &0.2191 &0.3272 &0.7578 &0.9035 &0.9392 &0.9614 \cr
&&\emph{FedAvg-FC} &0.2148 &0.1981 &0.1816 &0.3272 &0.4571 &0.6165 &0.8061 \cr
&&\emph{FedDBL(\textbf{ours})} &\textbf{\textcolor{red}{0.9416}} &\textbf{\textcolor{red}{0.9644}} &\textbf{\textcolor{red}{0.9685}} &\textbf{\textcolor{red}{0.9639}} &\textbf{\textcolor{red}{0.9758}} &\textbf{\textcolor{red}{0.9818}} &\textbf{\textcolor{red}{0.9859}} \cr
\bottomrule
\hspace{1mm}
\end{tabular}}
\end{threeparttable}
\label{tab:one-round-f1-score}
\end{table*}
\subsection{Comparisons under One-round Communication}
\label{sub:one-round}
In this experiment, we evaluate the data efficiency, communication efficiency and flexibility of the proposed \emph{FedDBL}. Tables~\ref{tab:one-round-accuracy} and~\ref{tab:one-round-f1-score} report the average accuracy and F1-score, respectively. We compare \emph{FedDBL} with four FL frameworks and two centralized training approaches, all restricted to one communication round or one round of local training. We first employ ResNet-50 pre-trained on ImageNet as the CNN backbone for \emph{FedDBL} and all competitors.
\begin{enumerate}[(1)]
\item \emph{Centralized}: We fine-tune the pre-trained backbone together with a randomly initialized fully connected (FC) layer in the centralized learning manner.
\item \emph{Centralized-FC}: We freeze the pre-trained backbone and fine-tune only the FC-layer in the centralized learning manner.
\item \emph{FedAvg}: We fine-tune the pre-trained model with the FedAvg framework~(\cite{mcmahan2017communication}).
\item \emph{FedProx}: We fine-tune the pre-trained model with the FedProx framework~(\cite{li2020federated}).
\item \emph{FedPAQ}: We fine-tune the pre-trained model with FedPAQ~(\cite{reisizadeh2020fedpaq}), a communication-efficient federated learning framework.
\item \emph{FedAvg-FC}: We freeze the pre-trained CNN backbone and update only the FC-layer with the FedAvg framework.
\end{enumerate}
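As a concrete illustration of the FC-only baselines (\emph{Centralized-FC} and \emph{FedAvg-FC}), the sketch below trains a linear classifier on frozen, pre-extracted features. It is a minimal pure-Python stand-in with toy 2-D features and a logistic loss, not the actual FC-layer fine-tuning on ResNet-50 or CTransPath features:

```python
import math, random

def train_fc_on_frozen_features(feats, labels, epochs=200, lr=0.5):
    """Train only a linear (FC) layer on pre-extracted, frozen features.
    feats  : list of feature vectors (the frozen backbone's output)
    labels : list of 0/1 class labels
    Returns (weights, bias) of a logistic classifier."""
    dim, n = len(feats[0]), len(feats)
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * dim, 0.0
        for x, y in zip(feats, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # sigmoid
            for i in range(dim):
                gw[i] += (p - y) * x[i]          # cross-entropy gradient
            gb += p - y
        w = [wi - lr * gi / n for wi, gi in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Toy "frozen features": two linearly separable clusters.
random.seed(0)
feats = [[random.gauss(-2, 0.3), random.gauss(-2, 0.3)] for _ in range(20)] + \
        [[random.gauss(2, 0.3), random.gauss(2, 0.3)] for _ in range(20)]
labels = [0] * 20 + [1] * 20
w, b = train_fc_on_frozen_features(feats, labels)
preds = [int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0) for x in feats]
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
```

Because the backbone never receives gradients, only the tiny `(w, b)` pair changes during training, which is what makes the FC-only baselines stable under limited data.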
\textbf{Data efficiency:} As shown in Tables~\ref{tab:one-round-accuracy} and~\ref{tab:one-round-f1-score}, with sufficient training samples (100\%) in the one-round setting, centralized learning outperforms the conventional FL frameworks \emph{FedAvg}, \emph{FedProx} and \emph{FedPAQ} on both datasets, because it gathers the training samples from all clients so that the CNN model is trained more stably and converges faster than under existing FL frameworks. When the CNN backbone is frozen, \emph{FedAvg-FC} recovers a more favorable performance, and \emph{Centralized-FC} likewise outperforms \emph{Centralized}. This observation indicates that, under limited communication resources or local training time but with sufficient training samples, keeping a stable CNN feature extractor and updating only the FC-layer is preferable to retraining the entire model. The proposed \emph{FedDBL} achieves performance comparable to the centralized strategies on the Multi-center CRC dataset and even surpasses them on BCSS. As the training data are reduced, the performance of all approaches except \emph{FedDBL} drops dramatically, especially with only 1\% of the training samples. \emph{FedAvg-FC} with the frozen CNN backbone reaches an accuracy and F1-score of around 0.79 on Multi-center CRC but is still less effective than \emph{FedDBL}; its results on BCSS with 1\% training samples are much worse, since Multi-center CRC is around 38 times larger than BCSS. Overall, \emph{FedDBL} delivers the most stable quantitative results among all approaches on both datasets and even outperforms centralized learning for most training-data proportions.
From this experiment, we conclude that under limited network communication resources and scarce training samples, \emph{FedDBL} is the best solution for histopathological tissue classification.
\textbf{Flexibility:} Besides data and communication efficiency, \emph{FedDBL} is a flexible framework in which any module can be replaced by a superior one, for example a more robust feature extractor, a stronger classifier or a better federated aggregation strategy. In this experiment, we demonstrate this flexibility by replacing the ImageNet pre-trained ResNet-50 backbone with a domain-relevant backbone, CTransPath~(\cite{wang2022transformer}), pre-trained on histopathological images. The lower parts of both datasets in Tables~\ref{tab:one-round-accuracy} and~\ref{tab:one-round-f1-score} present the comparisons under CTransPath; here we only compare \emph{FedDBL} with the approaches that update the FC-layer alone. On the larger Multi-center CRC dataset, the domain-relevant CTransPath extractor greatly improves all three approaches. Centralized learning yields the best results for almost all dataset proportions, but \emph{FedDBL} still consistently outperforms \emph{FedAvg-FC} under the same conditions. On the much smaller BCSS dataset, \emph{FedDBL} shows its superiority and outperforms both \emph{Centralized-FC} and \emph{FedAvg-FC}. With only 1\% of the BCSS training samples, CTransPath improves the F1-score of \emph{FedDBL} from 0.8471 (ResNet-50) to 0.9416, whereas little or no improvement is observed for the other two approaches.
\textbf{Communication efficiency:} Higher communication efficiency benefits not only from fewer communication rounds but also from a smaller model or feature size for transmission. Conventional federated frameworks share either the parameters of the deep learning models or the extracted features. Our proposed \emph{FedDBL} shares only the lightweight broad learning weights, without any deep learning parameters or deep features. Table~\ref{tab:Overhead} reports the model size and the total upload overhead per client over the entire training phase. The shared size of ResNet-50 is 94.4MB, whereas \emph{FedDBL(ResNet-50)} and \emph{FedDBL(CTP)} share only 276.5KB and 55.4KB of broad learning weights, respectively, where CTP denotes the CTransPath backbone. With only one communication round, \emph{FedDBL(ResNet-50)} reduces the communication overhead by nearly 350 times compared with sharing ResNet-50 once. Since conventional federated frameworks may need multiple training rounds to converge, 50 communication rounds inflate the total upload overhead from 94.4MB to 4.609GB. Thanks to the lightweight BL-module and the one-shot training manner, \emph{FedDBL(CTP)} cuts the upload workload from 4.609GB to 55.4KB, an over 87,000-fold reduction relative to \emph{FedAvg(ResNet-50)}; even \emph{FedDBL(ResNet-50)} achieves an over 17,000-fold reduction.
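The overhead figures quoted above can be checked with a few lines of arithmetic (sizes taken from Table~\ref{tab:Overhead}; units are treated as 1024-based, consistent with the quoted 4.609GB):

```python
# Reproduce the communication-overhead figures quoted in the text.
resnet50_mb = 94.4          # ResNet-50 parameters shared per round (MB)
rounds = 50                 # communication rounds for conventional FL
total_fedavg_gb = resnet50_mb * rounds / 1024        # 4720 MB -> ~4.609 GB

feddbl_resnet_kb = 276.5    # one-shot BL weights, ResNet-50 features (KB)
feddbl_ctp_kb = 55.4        # one-shot BL weights, CTransPath features (KB)

total_fedavg_kb = resnet50_mb * rounds * 1024        # MB -> KB
ratio_resnet = total_fedavg_kb / feddbl_resnet_kb    # > 17,000x reduction
ratio_ctp = total_fedavg_kb / feddbl_ctp_kb          # > 87,000x reduction
```

The 350-fold single-round figure follows the same way: 94.4MB divided by 276.5KB is roughly 350.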
\begin{table*}[th]
\centering
\caption{Comparisons with different methods on 50-round training (Accuracy). \textbf{Bold} is the highest among federated methods; \textcolor{red}{red} is the highest score among all algorithms including centralized learning.}
\begin{threeparttable}
\setlength{\tabcolsep}{2.5mm}{
\begin{tabular}{p{0.11\textwidth}>{}
p{0.13\textwidth}>{\centering}
p{0.07\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}}
\toprule
\multirow{2}{*}{Datasets} & \multirow{2}{*}{Models} & \multirow{2}{*}{\makecell[c]{Global \\ Epochs}} & \multicolumn{7}{c}{Accuracy} \cr
\cmidrule(lr){4-10}
& &&1\% &5\% &10\% &30\% &50\% &70\% &100\% \cr
\midrule
\multirow{5}{*}{\makecell[l]{Multi-center \\ CRC}}
& \emph{Centralized} &50 &0.8830 &0.9308 &0.9508 &\textcolor{red}{0.9713} &\textcolor{red}{0.9789} &\textcolor{red}{0.9817} &\textcolor{red}{0.9846}\cr
\cdashline{2-10}
& \emph{FedAvg} &50 &0.8747 &0.9289 &0.9385 &0.9542 &0.9590 &0.9604 &0.9629\cr
& \emph{FedProx} &50 &0.8786 &0.9289 &0.9409 &0.9549 &0.9586 &0.9615 &0.9622\cr
& \emph{FedDBL} &1 &0.8832 &0.8456 &0.9229 &0.9411 &0.9410 &0.9413 &0.9399\cr
& \emph{FedDBL(CTP)} &1 &\textbf{\textcolor{red}{0.9213}} &\textbf{\textcolor{red}{0.9640}} &\textbf{\textcolor{red}{0.9654}} &\textbf{0.9663} &\textbf{0.9668} &\textbf{0.9669} &\textbf{0.9669}\cr
\midrule
\multirow{5}{*}{\centering BCSS}
&\emph{Centralized} &50 &0.8291 &0.9179 &0.9458 &0.9650 &0.9810 &0.9818 &0.9838 \cr
\cdashline{2-10}
&\emph{FedAvg} &50 &0.8657 &0.9340 &0.9393 &0.9634 &0.9730 &0.9804 &0.9834 \cr
&\emph{FedProx} &50 &0.8700 &0.9201 &0.9502 &0.9671 &0.9737 &0.9776 &0.9830 \cr
&\emph{FedDBL} &1 &0.9012 &0.9511 &0.9603 &0.9711 &0.9745 &0.9731 &0.9652 \cr
&\emph{FedDBL(CTP)} &1 &\textbf{\textcolor{red}{0.9593}} &\textbf{\textcolor{red}{0.9754}} &\textbf{\textcolor{red}{0.9777}} &\textbf{\textcolor{red}{0.9770}} &\textbf{\textcolor{red}{0.9834}} &\textbf{\textcolor{red}{0.9883}} &\textbf{\textcolor{red}{0.9910}} \cr
\bottomrule
\hspace{1mm}
\end{tabular}}
\end{threeparttable}
\label{tab:accuracy}
\end{table*}
\begin{table*}[th]
\centering
\caption{Comparisons with different methods on 50-round training (F1-score). \textbf{Bold} is the highest among federated methods; \textcolor{red}{red} is the highest score among all algorithms including centralized learning.}
\begin{threeparttable}
\setlength{\tabcolsep}{2.5mm}{
\begin{tabular}{p{0.11\textwidth}>{}
p{0.13\textwidth}>{\centering}
p{0.07\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}}
\toprule
\multirow{2}{*}{Datasets} & \multirow{2}{*}{Models} & \multirow{2}{*}{\makecell[c]{Global \\ Epochs}} & \multicolumn{7}{c}{F1-score} \cr
\cmidrule(lr){4-10}
& &&1\% &5\% &10\% &30\% &50\% &70\% &100\% \cr
\midrule
\multirow{5}{*}{\makecell[l]{Multi-center \\ CRC}}
& \emph{Centralized} &50 &0.8813 &0.9307 &0.9512 &\textcolor{red}{0.9714}&\textcolor{red}{0.9789}&\textcolor{red}{0.9816}&\textcolor{red}{0.9845}\cr
\cdashline{2-10}
& \emph{FedAvg} &50 &0.8720 &0.9283 &0.9386 &0.9545 &0.9592 &0.9607 &0.9632\cr
& \emph{FedProx} &50 &0.8751 &0.9288 &0.9409 &0.9550 &0.9591 &0.9618 &0.9625\cr
& \emph{FedDBL} &1 &0.8833 &0.8502 &0.9268 &0.9443 &0.9443 &0.9445 &0.9432\cr
& \emph{FedDBL(CTP)} &1 &\textbf{\textcolor{red}{0.9217}} &\textbf{\textcolor{red}{0.9643}} &\textbf{\textcolor{red}{0.9657}}
&\textbf{0.9666} &\textbf{0.9671} &\textbf{0.9672} &\textbf{0.9672}\cr
\midrule
\multirow{5}{*}{\centering BCSS}
&\emph{Centralized} &50 &0.7671 &0.8808 &0.9182 &0.9554 &0.9722 &0.9756 &0.9761 \cr
\cdashline{2-10}
&\emph{FedAvg} &50 &0.8088 &0.9006 &0.8999 &0.9448 &0.9598 &0.9721 &0.9741 \cr
&\emph{FedProx} &50 &0.8095 &0.8851 &0.9293 &0.9522 &0.9599 &0.9669 &0.9769 \cr
&\emph{FedDBL} &1 &0.8471 &0.9227 &0.9413 &0.9578 &0.9638 &0.9621 &0.9550 \cr
&\emph{FedDBL(CTP)} &1 &\textbf{\textcolor{red}{0.9416}} &\textbf{\textcolor{red}{0.9644}} &\textbf{\textcolor{red}{0.9685}} &\textbf{\textcolor{red}{0.9639}} &\textbf{\textcolor{red}{0.9758}} &\textbf{\textcolor{red}{0.9818}} &\textbf{\textcolor{red}{0.9859}} \cr
\bottomrule
\hspace{1mm}
\end{tabular}}
\end{threeparttable}
\label{tab:f1-score}
\end{table*}
\begin{table}[t]
\centering
\caption{Model size and total upload overhead per client among three models.}
\begin{threeparttable}
\begin{tabular}{cccc}
\toprule
Models\centering &\makecell[c]{\emph{FedAvg} \\ \emph{(ResNet-50)}} &\makecell[c]{\emph{FedDBL} \\ \emph{(ResNet-50)}} &\makecell[c]{ \emph{FedDBL} \\ \emph{(CTP)}} \cr
\midrule
Model size \centering &94.4MB &276.5KB &55.4KB \cr
Total upload size \centering &4.609GB &276.5KB &55.4KB \cr
\bottomrule
\hspace{1mm}
\end{tabular}
\end{threeparttable}
\label{tab:Overhead}
\end{table}
\subsection{Comparisons under Multiple-round Communication}
\label{sub:multi-round}
In this experiment, we compare \emph{FedDBL} under one-round communication with centralized learning and two federated frameworks under multiple-round communication. Tables~\ref{tab:accuracy} and~\ref{tab:f1-score} report the accuracy and F1-score, respectively. As expected, the centralized learning strategy achieves the best classification performance given sufficient training data on the Multi-center CRC dataset. \emph{FedDBL(CTP)} with the domain-relevant feature extractor consistently surpasses the two federated learning frameworks and even outperforms centralized learning with less than 10\% of the training data. On BCSS, which is much smaller than Multi-center CRC, even \emph{FedDBL} with the ImageNet pre-trained feature extractor outperforms both centralized learning and the federated frameworks, and \emph{FedDBL(CTP)} further raises the quantitative results to a remarkable level.
Moreover, we visualize the average accuracy of the results in Tables~\ref{tab:accuracy} and~\ref{tab:f1-score} at every epoch in Fig.~\ref{fig:Epoch_plot} to show the convergence speed of the existing approaches trained with four representative dataset proportions. Since \emph{FedDBL} is a one-round communication framework, we plot \emph{FedDBL} and \emph{FedDBL(CTP)} as a gray and an orange dashed line, respectively, and highlight the most representative region of each sub-figure with a zoom-in window. The convergence speed and the optimal performance of the existing models depend strongly on the proportion of training data. With 100\% training data on Multi-center CRC, centralized learning surpasses \emph{FedDBL} within five epochs and even outperforms \emph{FedDBL(CTP)}; the two federated frameworks need more training epochs to converge and reach performance comparable to \emph{FedDBL(CTP)}. As the training samples shrink, convergence becomes slower and the optimal performance also decreases. With 1\% of the BCSS training samples, the three existing models oscillate heavily and cannot surpass \emph{FedDBL} even within 50 training epochs.
The above experimental results demonstrate the data efficiency, communication efficiency, flexibility and robustness of FedDBL for histopathological image classification. In our opinion, FedDBL has the potential to significantly save computational and communication resources, relieve the pathologists' labeling burden and preserve patients' privacy, which greatly promotes its clinical practicability compared with existing approaches.
\begin{figure*}[htp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\begin{tabular}{>{\centering\arraybackslash\hspace{0pt}}p{\linewidth}}
\includegraphics[width=\linewidth]{fig/Multi-center_CRC.pdf}
\includegraphics[width=\linewidth]{fig/BCSS.pdf}
\end{tabular}
\caption{Average global accuracy scores of the 5-fold cross-validation results at each training epoch. Four representative dataset proportions are selected for visualization. Since FedDBL is a one-round communication framework, we plot \emph{FedDBL} and \emph{FedDBL(CTP)} as a gray and an orange dashed line, respectively. We also highlight the most representative region of each sub-figure for better visualization.}
\label{fig:Epoch_plot}
\end{figure*}
\section{FedDBL for Privacy-preserving}
One objective of FedDBL is to preserve patients' privacy. Even though raw data need not be shared directly, conventional DL-based federated frameworks still suffer from various federated attacks, such as the model inversion attack~(\cite{zhu2019deep}) or the man-in-the-middle attack~(\cite{wang2020man}). Compared with these frameworks, FedDBL can defend against such attacks because the training data are entirely unseen by the DL-module: only the deep features are used to compute the parameters of the BL-module locally at each client. In effect, the pre-trained DL-module acts as a perturbation process that transfers the data into high-level features, and the features themselves are protected because only the parameters of the BL-module are shared.
Thanks to the lightweight BL-module, we can further protect these parameters with an additional encryption algorithm~(\cite{aono2017efficient}) that supports federated aggregation in the encrypted domain. The corresponding packing encryption~(\cite{le2018privacy}) exploits the redundancy of the ciphertext space to hold several plaintexts in one encrypted block. Table~\ref{tab:encrypted} reports the accuracy, F1-score and model size of the packing-encrypted FedDBL on 1\% of the Multi-center CRC dataset with the CTransPath pre-trained backbone. With the encryption precision set to a bit-length of 32 bits, the encryption does not harm the parameters, so the model accuracy is preserved. Even with packing encryption, the model size remains much smaller than that of ResNet-50 in the plaintext condition (94.4MB, Table~\ref{tab:Overhead}).
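The sketch below illustrates only the packing idea, not the cited cryptosystem: several fixed-point weights are packed into one large integer so that a single ciphertext block can carry many plaintexts. The 32-bit slot width matches the encryption precision used above; the non-negativity assumption and the 16-bit fractional split are simplifications for illustration:

```python
def pack_fixed_point(values, bits=32, frac_bits=16):
    """Pack several fixed-point numbers into one big integer, the way
    packing-style encryption amortizes one ciphertext over many weights.
    Illustrative only; values are assumed non-negative here."""
    block = 0
    for v in values:
        q = int(round(v * (1 << frac_bits)))   # float -> fixed point
        assert 0 <= q < (1 << bits), "value out of slot range"
        block = (block << bits) | q            # append one slot
    return block

def unpack_fixed_point(block, count, bits=32, frac_bits=16):
    """Recover the packed fixed-point values from the big integer."""
    mask = (1 << bits) - 1
    out = []
    for _ in range(count):
        out.append((block & mask) / (1 << frac_bits))
        block >>= bits
    return list(reversed(out))

weights = [0.125, 3.5, 0.0078125]              # exactly representable
packed = pack_fixed_point(weights)
restored = unpack_fixed_point(packed, len(weights))
```

With an additively homomorphic scheme, adding two such packed blocks adds slot-wise, which is what allows federated averaging directly in the encrypted domain as long as no slot overflows.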
\begin{table}[t]
\centering
\caption{Comparison between \emph{FedDBL} and \emph{FedDBL(encrypted)} under 1\% Multi-center CRC and CTransPath backbone.}
\begin{threeparttable}
\begin{tabular}{cccc}
\toprule
Models &\makecell[c]{Accuracy} &\makecell[c]{F1-score} &\makecell[c]{Model size} \cr
\midrule
\makecell[c]{\emph{FedDBL}} &\makecell[c]{0.9593} &\makecell[c]{0.9416} &\makecell[c]{55.4KB} \cr
\makecell[c]{\emph{FedDBL} \\ \emph{(encrypted)}} &\makecell[c]{0.9593} &\makecell[c]{0.9416} &\makecell[c]{30MB} \cr
\bottomrule
\hspace{1mm}
\end{tabular}
\end{threeparttable}
\label{tab:encrypted}
\end{table}
\section{Discussion and Conclusion}
In this paper, we proposed a novel federated framework (FedDBL) for histopathological tissue classification. Thanks to the robust deep learning feature extractor and the flexible broad learning inference system, FedDBL greatly improves classification performance with only one communication round and extremely limited training samples, significantly reducing the data dependency and communication burden. By combining the DL-module and BL-module for local training, each client only needs to solve a lightweight broad learning system, which drastically reduces the training overhead. Sharing the lightweight BL-module in the federated framework not only saves communication but also preserves patients' privacy. Experimental results with five-fold cross-validation show that FedDBL outperforms both centralized learning strategies and federated frameworks under one-round communication, and even outperforms all competitors trained for 50 rounds when training samples are limited.
Moreover, owing to its flexible design, FedDBL can be upgraded by replacing any module with a superior one in the future. In this paper, we have shown that a domain-relevant deep feature extractor is more effective than a domain-irrelevant one. Since this study uses the most common aggregation strategy, federated averaging, we expect that a more advanced federated aggregation framework could further improve the performance of FedDBL under extreme data and communication conditions.
\section{Introduction}
Tissue classification~\cite{gurcan2009histopathological,FUCHS2011515}, also known as tissue phenotyping, aims to use computer algorithms to automatically recognize different tissue types in Whole Slide Images (WSIs). It is one of the fundamental tasks in computational pathology~\cite{srinidhi2021deep}, parsing the landscape of the tumor microenvironment for precise cancer diagnosis~\cite{bulten2022artificial}, prognosis~\cite{fu2020pan,pages2018international} and treatment-response prediction~\cite{vanguri2022multimodal}. With the advancement of deep learning algorithms and the growing amount of open data~\cite{kather2019predicting,zhao2020artificial}, this problem has been well studied, with outstanding classification performance~\cite{hatami2021deep}. In clinical practice, however, it still faces ethical, regulatory and legal obstacles, as centralized data collection may lead to privacy leakage.
The Federated Learning (FL)~\cite{yang2019federated} framework provides a promising solution that protects user privacy by sharing only intermediate results or model parameters instead of raw data, and it has been widely studied in medical image analysis~\cite{pati2022federated,sheller2020federated}. However, only a few attempts~\cite{saldanha2022swarm,shen2022tmi,ke2021isbi} have been made in computational pathology, and progress still lags behind other medical imaging modalities~\cite{rauniyar2022federated} due to the following two obstacles.
The first is the data dependency problem. Most existing FL frameworks are built on deep learning models, which are data-hungry and commonly require a large amount of well-annotated samples. However, labeling histopathological images is time-consuming, expertise-dependent and expensive~\cite{greenwald2022whole,pati2021reducing}; without enough training samples, existing models may not achieve favorable performance.
The other obstacle is the communication overhead. The training procedure of traditional FL models needs multiple cloud-client iterations to achieve global convergence, yet deep learning models have tens of millions of parameters, which greatly increases the communication burden over multiple rounds. A lack of training samples further amplifies this burden, because deep learning models commonly require more iterations to converge with limited data. Moreover, frequent communication increases the chance of being attacked, for example by man-in-the-middle attacks~\cite{wang2020man}.
Therefore, a data-efficient and communication-efficient FL model for histopathological tissue classification is urgently needed. In this paper, we propose a simple and effective solution that addresses not only data sharing, but also data dependency, communication efficiency, model robustness and the model inversion attack. Our model, Federated Deep-Broad Learning (\textit{FedDBL} for short), integrates three components: a common federated learning framework, a pre-trained deep learning (DL) backbone and a broad learning (BL) inference system~\cite{BLS}. The federated learning framework provides decentralized training and avoids data sharing across medical centers or institutions. The pre-trained DL backbone supplies stable and robust deep features when training labels are scarce, and also effectively prevents the model inversion attack since no gradients are back-propagated. The BL system is a lightweight classifier with good approximation capability that greatly shortens the transmission time and overcomes the data dependency problem. Fig.~\ref{fig:compare} comprehensively compares FedDBL with centralized learning and conventional federated learning. To the best of our knowledge, this is the first FL-based model for histopathological tissue classification.
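To make the one-shot flavor of the BL component concrete, the following minimal sketch trains a BL-style linear head on frozen features in closed form as a ridge-regression solve. The actual broad learning system additionally uses mapped and enhancement nodes, which are omitted here; the toy features and labels are illustrative:

```python
def ridge_solve(F, Y, lam=1e-2):
    """One-shot training of a broad-learning-style linear head:
    W = (F^T F + lam*I)^{-1} F^T Y, via Gauss-Jordan elimination.
    F: n x d feature matrix, Y: n x c one-hot labels (lists of lists)."""
    n, d, c = len(F), len(F[0]), len(Y[0])
    # A = F^T F + lam*I (d x d), B = F^T Y (d x c)
    A = [[sum(F[k][i] * F[k][j] for k in range(n)) + (lam if i == j else 0.0)
          for j in range(d)] for i in range(d)]
    B = [[sum(F[k][i] * Y[k][j] for k in range(n)) for j in range(c)]
         for i in range(d)]
    # Solve A W = B with partial pivoting; A ends up diagonal.
    for col in range(d):
        piv = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        B[col], B[piv] = B[piv], B[col]
        for r in range(d):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
                B[r] = [b - f * p for b, p in zip(B[r], B[col])]
    return [[B[i][j] / A[i][i] for j in range(c)] for i in range(d)]

# Toy frozen features and one-hot labels; prediction is the argmax score.
F = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
Y = [[1, 0], [1, 0], [0, 1], [0, 1]]
W = ridge_solve(F, Y)
scores = [[sum(f * w for f, w in zip(x, col)) for col in zip(*W)] for x in F]
preds = [s.index(max(s)) for s in scores]
```

Because the solve is closed-form, local "training" is a single linear-algebra step with no gradient iterations, which is what enables one-round communication.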
\begin{figure}[t!]
\includegraphics[width=\linewidth]{fig/Compare.pdf}
\caption{Overall comparison among centralized training, traditional DL-based FL and our proposed FedDBL paradigms. (a) Centralized learning gathers data from all the clients which cannot protect the patient's privacy. (b) Traditional DL-based FL preserves privacy by transmitting the model parameters to the central server without sharing the raw data. However, the communication overload highly depends on the model size and the number of communication rounds. (c) Our proposed FedDBL not only protects privacy, but also dramatically saves the communication burden through a super lightweight trainable broad learning system.}
\label{fig:compare}
\end{figure}
Extensive experiments with five-fold cross-validation demonstrate the superiority of FedDBL in several aspects, including data dependency, communication efficiency, flexibility and the practicability of model encryption. With sufficient training data, FedDBL mostly outperforms conventional FL strategies and achieves classification performance comparable to or even better than the centralized learning strategy. When the training samples are reduced in the data-dependency experiment, FedDBL maintains a high level of performance and greatly outperforms both centralized learning and conventional FL frameworks, even with only 1\% of the training samples. FedDBL is also compatible with any deep learning architecture, supporting data- and communication-efficient histopathological tissue classification. Another highlight of FedDBL is communication efficiency: compared with conventional FL frameworks, its one-round training manner reduces the upload workload with a ResNet-50 backbone from 4.609GB under traditional 50-round iterative training to 276.5KB, an over 17,000-fold reduction. Thanks to the tiny model size, FedDBL is also computationally efficient under model encryption, which further raises the privacy-protection level. The main contributions of this paper can be summarized as follows:
\begin{itemize}
\item We propose a novel federated learning approach (FedDBL) for histopathological tissue classification to preserve patients' privacy. To the best of our knowledge, FedDBL is the first FL-based approach tailored for histopathological tissue classification.
\item FedDBL is a simple, effective and easy-to-use algorithm that combines three classical methods: a robust pre-trained deep learning feature extractor, a fast broad learning inference system and a simple federated learning framework.
\item FedDBL is the first study to consider communication efficiency and data efficiency simultaneously, reducing the communication overhead of each client even with extremely limited training samples.
\item Extensive experiments demonstrate that FedDBL drastically relieves the dependence on training data and reduces the communication overhead while maintaining outstanding classification performance, which promotes its clinical practicability.
\end{itemize}
\section{Related Works}
\subsection{Histopathological Tissue Classification}
High-resolution WSIs present a wide range of tissue phenotypes, and pixel-level annotation is time-consuming and requires a great deal of biomedical knowledge~\cite{srinidhi2021deep}, making patch-level histopathological tissue classification an alternative solution for automated analysis in computer-aided tumor diagnosis~\cite{kather2019predicting,xue2021selective,abdeltawab2021pyramidal}.
Driven by the rapid development of computer vision, popular natural-image classification models can be transferred to histopathological tissue phenotyping, but the task still suffers from the data dependency problem and a huge annotation burden~\cite{ayyad2021role}. Various approaches have therefore been proposed to reduce the annotation effort. Han~\textit{et~al}.~\cite{han2022multi} proposed a multi-layer pseudo-supervision approach with a progressive dropout attention mechanism to convert patch-level labels into pseudo pixel-level labels; an additional classification gate mechanism reduced the false-positive rate for non-predominant categories and in turn improved segmentation performance. Xue~\textit{et~al}.~\cite{xue2021selective} used a generative adversarial network (GAN) to generate pseudo samples and expand the training data. Dolezal~\textit{et~al}.~\cite{dolezal2022uncertainty} cropped WSIs into tiles to train an uncertainty-quantification model and addressed domain shift on external validation data. To avoid the need for image annotations altogether, Wang~\textit{et~al}.~\cite{wang2022transformer} employed unsupervised contrastive learning to obtain a robustly initialized model with a moderate representation of the histopathological feature space. Our previous study~\cite{lin2022pdbl} introduced pyramidal deep-broad learning (PDBL) as a pluggable module for any CNN backbone to further improve histopathological tissue classification performance.
Beyond annotation, another largely unexplored challenge is patient privacy. Only a few attempts~\cite{saldanha2022swarm,saldanha2022direct} have been made to apply federated learning to computational pathology, as discussed in the following subsection, and to the best of our knowledge ours is the first study to consider privacy protection in histopathological tissue classification.
\subsection{Federated Learning}
\subsubsection{Federated Learning in Medical Image Analysis}
Because of ethical issues, federated learning (FL) has been widely adopted in medical applications to preserve patients' privacy~\cite{pati2022federated,warnat2021swarm,sheller2020federated}. In medical imaging, FL has witnessed a surge of interest~\cite{kaissis2020secure}, for example in MRI reconstruction~\cite{guo2021multi,li2020multi} and CT lesion segmentation~\cite{yang2021federated}.
During the COVID-19 pandemic, applications combining data from different medical centers, or even from different countries, became the most urgent demand in real-world clinical scenarios, and FL greatly advanced the diagnostic performance~\cite{bai2021advancing}. Dayan~\textit{et~al}.~\cite{dayan2021federated} used data from 20 institutes across the globe to predict the future oxygen requirements of symptomatic COVID-19 patients. Dou~\textit{et~al}.~\cite{dou2021federated} proposed a federated model to detect COVID-19 lung abnormalities with good generalization on unseen multinational datasets.
\subsubsection{Federated Learning in Computational Histopathology}
For histopathological images, a swarm learning architecture with blockchain protocols has been proposed to predict mutational status~\cite{saldanha2022swarm}. Compared with other medical imaging modalities, however, few studies~\cite{saldanha2022direct} adopt federated learning for histopathology, for the following reasons. First, the digitization of pathology is not yet widespread; pathological diagnosis still relies on observing specimens under a microscope. Second, image annotation is an obstacle, since only pathologists are capable of labeling WSIs, which greatly increases the difficulty of acquiring well-annotated data. Third, owing to the gigapixel resolution of WSIs, the deep learning models involved are generally large, which increases the communication burden in networking.
Technical solutions to the high communication overhead exist in FL, such as compressing the model~\cite{reisizadeh2020fedpaq,jhunjhunwala2021adaptive}. Reisizadeh~\textit{et~al}.~\cite{reisizadeh2020fedpaq} proposed FedPAQ, which reduces the interactive overhead of FL by compressing the model to lower bit-precision, and Jhunjhunwala~\textit{et~al}.~\cite{jhunjhunwala2021adaptive} proposed an adaptive quantization strategy to achieve communication efficiency.
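A minimal sketch of the low-precision idea behind such methods (illustrative, not the exact FedPAQ quantizer): each coordinate of a model update is mapped onto one of $2^b$ evenly spaced levels by unbiased stochastic rounding, so only small level indices plus the range need transmitting:

```python
import random

def quantize(update, bits=4, seed=0):
    """Low-precision quantizer sketch: map each coordinate of a model
    update onto 2**bits evenly spaced levels with unbiased stochastic
    rounding; only the indices (plus lo/step) are transmitted."""
    rng = random.Random(seed)
    lo, hi = min(update), max(update)
    levels = (1 << bits) - 1
    step = (hi - lo) / levels or 1.0          # guard against hi == lo
    idx = []
    for v in update:
        x = (v - lo) / step
        base = int(x)
        # round up with probability equal to the fractional part
        idx.append(min(base + (rng.random() < x - base), levels))
    return idx, lo, step

def dequantize(idx, lo, step):
    """Server-side reconstruction of the quantized update."""
    return [lo + i * step for i in idx]

update = [0.0, 0.5, 1.0, -1.0, 0.25]
idx, lo, step = quantize(update, bits=4)
restored = dequantize(idx, lo, step)
max_err = max(abs(a - b) for a, b in zip(update, restored))
```

The worst-case per-coordinate error is bounded by the step size, and the stochastic rounding keeps the reconstruction unbiased in expectation.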
However, existing studies implicitly assume that enough samples are available for model training, so they may fail to address communication efficiency and limited data simultaneously~\cite{kamp2021federated, zhang2023two}. In this study, we fully consider the specialty of histopathological images, the difficulty of data labeling and the communication efficiency required in real-world clinical scenarios, which has not previously been discussed in decentralized computational pathology.
\section{Methodology}
In this section, we introduce our Federated Deep-Broad Learning (FedDBL) framework, designed for privacy-preserving tissue classification with limited training samples and extremely low communication overhead. We first describe the motivation and problem setting in Section~\ref{sub:problem-setting}, then present the overall framework and methodology of FedDBL in Section~\ref{sub:FedDBL}, and finally give the implementation details in Section~\ref{sub:implementation}.
\subsection{Problem Setting}
\label{sub:problem-setting}
As a classical upstream task in computational pathology, tissue classification has achieved outstanding performance under the ideal condition of centralized learning with sufficient training samples. However, existing approaches face the following obstacles in real-world clinical scenarios.
\textbf{Annotation burden:} Collecting enough well-labeled training samples is expensive and time-consuming because it requires labelers with a medical background.
\textbf{Privacy preservation:} The raw data should not be shared across different medical institutions (or clients) to preserve the patient's privacy. Transmitting raw data may break the principle of medical ethics.
\textbf{Communication cost:} Communication overhead has always been a challenge for federated learning models, affected by many compound factors such as model size, the number of communication rounds, model convergence speed and network bandwidth.
To resolve the aforementioned challenges, we propose a simple and effective FL-based framework, illustrated in Fig.~\ref{fig:FedDBL}. First, we abandon the conventional end-to-end training manner, since limited training samples may harm the robustness of the deep learning model and slow its convergence. Instead, we separate feature extraction from inference for local training in each client. A pre-trained deep feature extractor (CNN backbone) is introduced so that the feature extractor is not affected by training-sample bias across clients, guaranteeing the robustness of the extracted features. An independent broad learning inference system~\cite{BLS,lin2022pdbl} then serves for fast inference. Finally, we apply classical weighted averaging, as in FedAvg~\cite{mcmahan2017communication}, to fuse the broad learning inference systems from all clients.
\begin{figure*}[t]
\centering
\includegraphics[width=.975\linewidth]{fig/Framework.pdf}
\caption{The overall architecture of FedDBL with three modules: the deep feature extraction module, the broad inference module and the federated decentralized module. (a) The deep feature extraction module extracts multi-scale deep-broad features from low level to high level with a pre-trained DL backbone; the features of all patches are stored in a feature bank. (b) The broad inference module enables fast inference via a broad learning system. (c) The federated decentralized module applies a classical federated averaging approach to aggregate the broad learning weights from different clients.}
\label{fig:FedDBL}
\end{figure*}
\subsection{FedDBL Architecture and Formulation}
\label{sub:FedDBL}
As shown in Fig.~\ref{fig:FedDBL}, FedDBL consists of three modules: the deep feature extraction module (DL-module), the broad inference module (BL-module) and the federated decentralized module (Fed-module). The DL-module together with the BL-module serves for local training on the client side, while the Fed-module is executed on the server side.
Algorithm~\ref{algorithm:FedDBL-server} provides the details of the entire FedDBL pipeline.
Let $\mathcal{D}_{1}, \mathcal{D}_{2}, \cdots, \mathcal{D}_k, \cdots, \mathcal{D}_{K}$ denote the local training sets of $K$ clients, with dataset size $n_k$ for each client $\mathcal{D}_k$. The total number of training samples is $N = \sum_{k=1}^{K}{n_k}$. For each sample $X$ with ground truth $Y$ in $\mathcal{D}_k$, the DL-module with pre-trained parameters $\Theta$ extracts the features and stores them in the feature bank $\mathbf{B}$. The BL-module then calculates the weights $W_{client}$ of the broad learning system. By federated aggregation, we obtain the global weights $W_{global}$. The workflows of the server and the clients are demonstrated in Algorithm~\ref{algorithm:FedDBL-server} and Algorithm~\ref{algorithm:FedDBL-client}, respectively.
\begin{algorithm}[!ht]
\caption{FedDBL framework (Server Execution)}
\label{algorithm:FedDBL-server}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input {A set of $K$ clients}
\Output {A global model $W_{global}$}
Prepare pre-trained DL backbone parameters $\Theta$
Initialize BL system setting
\For{each client $k$ \emph{\textbf{in parallel}}}{
$W_{client}^{k} \leftarrow $ClientExecution$\left(\Theta, \mathcal{D}_{k} \right)$
}
$W_{global} \leftarrow$ Fed-module$(W_{client}^1,\cdots,W_{client}^K)$
\textbf{return} $W_{global}$
\end{algorithm}
\begin{algorithm}[!ht]
\caption{FedDBL framework (Client Execution)}
\label{algorithm:FedDBL-client}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input {Pre-trained DL backbone $\Theta$, training set $\mathcal{D}$ with $n$ training samples}
\Output {Deep-broad learning model $W_{client}$.}
\tcc{DL-module}
\For{training sample $X$ in $\mathcal{D}$} {
\For{$s$-th stage in $\Theta$}{
$f_{X}^{s} \leftarrow \Theta^{s}\left(X\right)$ \tcp{Feature extraction}
$\mathbf{e}_{X}^{s}=\frac{1}{H_{X}^{s} \times W_{X}^{s}} \sum_{i=1}^{H_{X}^{s}} \sum_{j=1}^{W_{X}^{s}} f_{X}^{s}(i,j)$ \tcp{Adaptive global average pooling}
}
$\mathbf{b}_{X} = \mathbf{e}_{X}^{1} \parallel \mathbf{e}_{X}^{2} \parallel \cdots \parallel \mathbf{e}_{X}^{S}$ \tcp{Concatenation}
}
\textbf{Obtain} $\left\{\mathbf{b}_{i}|i=1,2,\cdots,n \right\}$\\
$\mathbf{B} \leftarrow \sigma\left( \left[\mathbf{b_1}^\mathbf{T}, \mathbf{b_2}^\mathbf{T}, \dots ,\mathbf{b}_{n}^\mathbf{T} \right]\right)^\mathbf{T}$ \tcp{Normalization transformation}
\tcc{BL-module}
Initialize BL system setting defined by central server \\
$\mathbf{B}^{+} \leftarrow \lim_{\lambda \to 0} \left(\mathbf{B}^{\mathbf{T}} \mathbf{B} + \lambda E \right)^{-1} \mathbf{B}^{\mathbf{T}}$ \tcp{Solve Pseudo-inverse}
$W_{client} \leftarrow \mathbf{B}^{+} Y$ \tcp{Calculate BL model weight}
\textbf{return} $W_{client}$
\end{algorithm}
\subsubsection{Deep Feature Extraction Module}
A large number of samples and repeated backpropagation are required in standard DL training to achieve good feature representation ability. With insufficient data, the training procedure may be unstable, leading to poor feature representation and model overfitting. Our previous study~\cite{lin2022pdbl} reveals that directly adopting a stable pre-trained model for feature extraction is more favorable to model performance than training the model with limited samples, even if the pre-trained model was trained on an unrelated image domain (ImageNet\footnote{https://image-net.org/}). Inspired by this idea, we use a pre-trained CNN model, with no further training, to extract the deep features. Notice that the choice of pre-trained model is flexible and can come from any image domain; we conduct an experiment to justify this flexibility in Section~\ref{sec:exp}. Another advantage of using pre-trained models is resistance to model inversion attacks, since the training samples are all unseen by the shared backbone. To enrich the feature representation, we extract multi-stage features from low level to high level, as detailed below.
As illustrated in the DL-module of Algorithm~\ref{algorithm:FedDBL-client}, each client $k$ $(k \in \left[1, \cdots, K\right])$ downloads the pre-trained DL backbone as the feature extractor $\Theta$ and locally extracts the multi-stage deep features $\mathbf{b}_X$ of each training sample $X$ (we omit $k$ for simplicity), where $\mathbf{b}_X$ concatenates the features of multiple stages $\Theta^s(X)$ $(s \in \left[1, \cdots, S\right])$. The features of the entire dataset $\mathcal{D}_k$ are stored in the feature bank $\mathbf{B}$, which is then passed to the broad inference module. Since neither the training data nor the feature bank is shared across clients, the deep feature extraction module poses no privacy leakage risk.
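The per-sample feature construction can be sketched framework-agnostically as below. The toy \texttt{stages} stand in for the backbone stages of a real pre-trained CNN (the channel widths mimic ResNet-50) and are assumptions for illustration only:

```python
import numpy as np

def stage_features(x, stages):
    """Run x through each backbone stage, globally average-pool each
    stage's feature map, and concatenate: b_X = e^1 || e^2 || ... || e^S."""
    pooled = []
    for stage in stages:
        x = stage(x)                         # feature map of shape (C, H, W)
        pooled.append(x.mean(axis=(1, 2)))   # adaptive global average pooling
    return np.concatenate(pooled)            # multi-stage descriptor b_X

# Hypothetical toy "stages" (constant maps, not real convolutions)
def make_stage(c_out):
    return lambda x: np.full((c_out, x.shape[1] // 2, x.shape[2] // 2), x.mean())

stages = [make_stage(c) for c in (256, 512, 1024, 2048)]
b = stage_features(np.ones((3, 64, 64)), stages)
# b concatenates 256+512+1024+2048 = 3,840 pooled values
```

Each row of the feature bank $\mathbf{B}$ is one such vector $\mathbf{b}_X$; no backpropagation through the backbone is ever performed.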
\subsubsection{Broad Inference Module}
With the feature bank $\mathbf{B}$, each client $k$ can construct a local BL system~\cite{BLS} through the BL-module (Algorithm~\ref{algorithm:FedDBL-client}) for fast inference. By solving the optimization problem in Eq.~\eqref{eq:W_opt_1}, an optimal BL model $W_{client}$ can be obtained rapidly through the pseudo-inverse method (Eq.~\eqref{eq:W_opt_2}).
\begin{equation}
\label{eq:W_opt_1}
W_{client}=\underset{W_{init}}{{\arg\min}} \left\| \mathbf{B}W_{init}-Y \right\|_{2}^{2} + \gamma \left\|W_{init} \right\|_{2}^{2}
\end{equation}
\begin{equation}
\label{eq:W_opt_2}
W_{client} = \mathbf{B}^{+} Y = \lim_{\lambda \to 0} \left(\mathbf{B}^{\mathbf{T}} \mathbf{B} + \lambda E \right)^{-1} \mathbf{B}^{\mathbf{T}} Y
\end{equation}
where $Y$ represents the ground-truth label matrix and $\mathbf{B}$ is the feature bank in matrix form. $W_{init}$ denotes the initialized broad learning weights, $E$ is the identity matrix, $\lambda$ is a constant parameter and $\gamma$ is the regularization parameter. Solving the BL model via the pseudo-inverse considerably reduces the computational burden while achieving high communication efficiency. For inference, after extracting the deep features of the test samples, the predictions are computed as $Y_{test}=\mathbf{B}_{test} W_{client}$, and the class with the largest probabilistic value is taken as the result.
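A minimal numerical sketch of this closed-form solve, written in the standard ridge form $(\mathbf{B}^T\mathbf{B}+\lambda E)^{-1}\mathbf{B}^T Y$ for an $n\times d$ feature bank and with a small fixed $\lambda$ standing in for the limit (the toy data are illustrative assumptions):

```python
import numpy as np

def solve_bl_weights(B, Y, lam=1e-6):
    """Ridge-regularized pseudo-inverse: W = (B^T B + lam*E)^{-1} B^T Y,
    where B is the (n_samples x n_features) feature bank."""
    d = B.shape[1]
    return np.linalg.solve(B.T @ B + lam * np.eye(d), B.T @ Y)

rng = np.random.default_rng(0)
B = rng.standard_normal((100, 8))      # toy feature bank (100 samples, 8 dims)
W_true = rng.standard_normal((8, 4))   # toy underlying weights (4 classes)
Y = B @ W_true                         # noiseless targets
W = solve_bl_weights(B, Y)             # recovers W_true almost exactly
```

No iterative optimization is involved, which is why local training finishes in one shot.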
Thanks to the lightweight broad learning model $W_{client}$, the communication efficiency is drastically improved compared with the conventional DL-based FL frameworks.
\subsubsection{Federated Decentralized Module}
In this module, we conduct federated learning for decentralized training. Given the broad learning model $W_{client}^k$ of each client $k$, we first upload the models from all clients to the central server. Then any general federated aggregation method can be applied to aggregate them. Here, we use the most common strategy, weighted averaging, as adopted in FedAvg~\cite{mcmahan2017communication}, FedProx~\cite{li2020federated} and FedPAQ~\cite{reisizadeh2020fedpaq}.
\begin{equation}
\label{eq:W_global}
W_{global}=\sum_{k=1}^{K} \frac{n_{k}}{N} W_{client}^{k}
\end{equation}
where $W_{global}$ is the global model on the server, $n_k$ is the number of training samples in client $k$ and $N$ is the total number of training samples, so a larger training dataset contributes more to the global model. Since the broad learning model is shared only once, both communication efficiency and patient privacy are guaranteed.
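The aggregation in Eq.~\eqref{eq:W_global} amounts to a few lines; the two toy clients below are assumptions for illustration:

```python
import numpy as np

def fed_aggregate(client_weights, client_sizes):
    """Weighted average of client BL weights: W_global = sum_k (n_k / N) * W_k."""
    N = sum(client_sizes)
    return sum((n / N) * W for W, n in zip(client_weights, client_sizes))

# Client 2 holds 3x the data of client 1, so it dominates the average
W1, W2 = np.array([[1.0, 0.0]]), np.array([[3.0, 2.0]])
W_global = fed_aggregate([W1, W2], [1, 3])   # -> [[2.5, 1.5]]
```

Because only these small weight matrices are exchanged, a single aggregation round suffices.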
\subsection{Implementation Details}
\label{sub:implementation}
All of our experiments are implemented in PyTorch on a workstation with an NVIDIA RTX 3090 GPU and a 16-core i9-11900K CPU. We use the cross-entropy loss for the centralized training baseline with batch size $20$. The SGD optimizer is set as follows: learning rate $1\times10^{-3}$, weight decay $1\times10^{-4}$ and momentum 0.9. The patches are $224 \times 224$ pixels at $20\times$ WSI magnification. The number of clients depends on the dataset.
We adopt three well-known federated aggregation methods, FedAvg~\cite{mcmahan2017communication}, FedProx~\cite{li2020federated} and FedPAQ~\cite{reisizadeh2020fedpaq}, for comparison, and train a centralized model as the baseline. FedProx has a parameter $\mu$ adjusting the effect of the proximal term on the loss function; we set $\mu=1$, which yields the best performance.
\section{Experiments}
\label{sec:exp}
In this section, we present the details of the datasets and conduct various experiments to demonstrate the performance and efficiency of the proposed FedDBL. Section~\ref{sub:datasets} describes the two open datasets and the experimental settings of the federated learning framework. In Section~\ref{sub:one-round}, we compare FedDBL with centralized learning baselines, conventional federated learning baselines and one-round federated learning baselines. The effectiveness is comprehensively discussed in Section~\ref{sub:multi-round}. We use accuracy and macro F1-score as the evaluation metrics in all experiments.
\subsection{Datasets and Experimental Settings}
\label{sub:datasets}
\begin{table*}[ht]
\centering
\caption{Statistics of Multi-center CRC. \#1 denotes TCGA, \#2 denotes Kather, \#3 denotes Guangdong Provincial People’s Hospital and \#4 denotes Yunnan Cancer Hospital.}
\begin{threeparttable}
\begin{tabular}{p{0.04\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.06\textwidth}>{\makecell[r]}
p{0.07\textwidth}}
\toprule
&ADI &BACK &DEB &LYM &MUC &MUS &NORM &STR &TUM &\textbf{Total} \cr
\midrule
\#1\centering &\makecell[r]{10,065} &\makecell[r]{10,736} &\makecell[r]{10,603} &\makecell[r]{2,340}
&\makecell[r]{9,398} &\makecell[r]{12,974} &\makecell[r]{10,003} &\makecell[r]{10,081}
&\makecell[r]{12,899} &\makecell[r]{89,099} \cr
\#2\centering &\makecell[r]{10,407} &\makecell[r]{10,566} &\makecell[r]{11,512} &\makecell[r]{11,577}
&\makecell[r]{8,896} &\makecell[r]{13,536} &\makecell[r]{8,763} &\makecell[r]{10,446}
&\makecell[r]{14,317} &\makecell[r]{100,000} \cr
\#3\centering &\makecell[r]{10,000} &\makecell[r]{22,565} &\makecell[r]{9,999} &\makecell[r]{5,831}
&\makecell[r]{10,737} &\makecell[r]{10,000} &\makecell[r]{13,368} &\makecell[r]{12,584}
&\makecell[r]{10,000} &\makecell[r]{105,084} \cr
\#4\centering &\makecell[r]{2,500} &\makecell[r]{2,500} &\makecell[r]{2,500} &\makecell[r]{2,500}
&\makecell[r]{2,500} &\makecell[r]{2,500} &\makecell[r]{2,500} &\makecell[r]{2,500}
&\makecell[r]{2,500} &\makecell[r]{22,500} \cr
\midrule
\textbf{Total}\centering &\makecell[r]{32,972} &\makecell[r]{46,367} &\makecell[r]{34,614} &\makecell[r]{22,228}
&\makecell[r]{31,531} &\makecell[r]{39,010} &\makecell[r]{34,634} &\makecell[r]{35,611}
&\makecell[r]{39,716} &\makecell[r]{316,683} \cr
\bottomrule
\hspace{1mm}
\end{tabular}
\end{threeparttable}
\label{tab:Multi-CRC}
\end{table*}
\begin{table}[t]
\centering
\caption{Statistics of BCSS. \#1, \#2 and \#3 are the datasets of three clients.}
\begin{threeparttable}
\begin{tabular}{p{0.03\textwidth}>{\makecell[r]}
p{0.05\textwidth}>{\makecell[r]}
p{0.05\textwidth}>{\makecell[r]}
p{0.05\textwidth}>{\makecell[r]}
p{0.05\textwidth}>{\makecell[r]}
p{0.05\textwidth}}
\toprule
&TUM &STR &LYM &NEC &\textbf{Total} \cr
\midrule
\#1\centering &\makecell[r]{2,016} &\makecell[r]{598} &\makecell[r]{220} &\makecell[r]{217} &\makecell[r]{3,051} \cr
\#2\centering &\makecell[r]{1,962} &\makecell[r]{987} &\makecell[r]{269} &\makecell[r]{372} &\makecell[r]{3,590} \cr
\#3\centering &\makecell[r]{718} &\makecell[r]{704} &\makecell[r]{127} &\makecell[r]{88} &\makecell[r]{1,637} \cr
\midrule
\textbf{Total}\centering &\makecell[r]{4,696} &\makecell[r]{2,289} &\makecell[r]{616} &\makecell[r]{677} &\makecell[r]{8,278}\cr
\bottomrule
\hspace{1mm}
\end{tabular}
\end{threeparttable}
\label{tab:BCSS}
\end{table}
\textbf{Multi-center CRC:} This is a multi-center dataset~\cite{zhao2020artificial,kather2019predicting} of colorectal cancer. The Kather dataset~\cite{kather2019predicting} defines nine tissue types in H\&E stained WSIs: adipose (ADI), background (BACK), debris (DEB), lymphocytes (LYM), mucus (MUC), smooth muscle (MUS), normal colon mucosa (NORM), cancer-associated stroma (STR), and colorectal adenocarcinoma epithelium (TUM). It contains 100k patches extracted from 86 WSIs. Following Kather, \cite{zhao2020artificial} released another CRC dataset from three medical centers, including 89.1k patches (85 slides) from The Cancer Genome Atlas (TCGA), 105.1k patches (106 slides) from Guangdong Provincial People's Hospital and 22.5k patches (48 slides) from Yunnan Cancer Hospital. All patches have the same resolution of $224\times 224$ at $20\times$ magnification. Table~\ref{tab:Multi-CRC} summarizes the statistics of each dataset.
\textbf{BCSS}: Here, we introduce another dataset, of breast cancer. Breast Cancer Semantic Segmentation (BCSS)~\cite{amgad2019structured} is an open challenge released on Grand Challenge\footnote{https://bcsegmentation.grand-challenge.org/}. It provides 151 ROI images with pixel-level annotations from WSIs retrieved from TCGA. According to the naming convention in the supplementary document of BCSS, the ROIs come from 21 different medical centers/hospitals. To generate a patch-level dataset, we first divide the ROIs into three clients, each covering 7 medical centers. We then crop each ROI into $224\times 224$-pixel patches with a sliding window with a step size of 120 pixels at $20\times$ objective magnification. Since this dataset is long-tailed, we only keep the tissues of the four predominant classes: tumor (TUM), stroma (STR), lymphocytic infiltrate (LYM), and necrosis or debris (NEC). Patches whose majority class covers more than 95\% of the area are kept, while the others are discarded as ambiguous. Finally, a total of 8,278 patches remain; the size of each client's dataset is shown in Table~\ref{tab:BCSS}.
\textbf{Experimental Settings:}
We construct the federated learning environment as follows. First, Multi-center CRC includes four clients, following the dataset setting of the original papers; BCSS is separated into three clients due to the limited training samples. For each client, the local dataset is randomly split into a training set and a test set with a ratio of $7:3$. Then, we randomly sample seven incremental subsets with proportions of $\left[1\%, 5\%, 10\%, 30\%, 50\%, 70\%, 100\%\right]$ from the training set. To conduct a 5-fold cross-validation experiment, we repeat the random sampling five times for each proportion. In addition, we simply merge the training sets and the test sets of all clients for the centralized learning comparison.
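The per-client split-and-subsample protocol above can be sketched as follows (the index lists, seed and helper names are illustrative assumptions, not the exact experimental code):

```python
import random

def split_and_subsample(indices, proportions, fold_seed=0, test_ratio=0.3):
    """Randomly split one client's data 7:3 into train/test, then draw
    one random subset of the training set per requested proportion."""
    rng = random.Random(fold_seed)
    shuffled = indices[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_ratio)
    test, train = shuffled[:n_test], shuffled[n_test:]
    subsets = {p: rng.sample(train, max(1, int(len(train) * p)))
               for p in proportions}
    return train, test, subsets

train, test, subsets = split_and_subsample(list(range(1000)),
                                           [0.01, 0.10, 1.00])
# 700 training and 300 test indices; subsets[0.01] holds 7 indices
```

Re-running with five different values of \texttt{fold\_seed} reproduces the 5-fold random-sampling scheme.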
\begin{table*}[t]
\centering
\caption{Accuracy of five-fold cross-validation experiment for different proportions of the training data under one-round training. Centralized methods are used for baselines, \textbf{bold} is the highest among federated algorithms and \textcolor{red}{red} represents the highest score among all methods including centralized learning. ResNet-50 and CTransPath indicate the CNN backbones pre-trained on ImageNet and pathology images, respectively. The performance of each fold can be found in the supplemental material.}
\begin{threeparttable}
\setlength{\tabcolsep}{2.5mm}{
\begin{tabular}{p{0.11\textwidth}>{}
p{0.1\textwidth}>{}
p{0.15\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}}
\toprule
\multirow{2}{*}{Datasets} & \multirow{2}{*}{Base model} & \multirow{2}{*}{Models} & \multicolumn{7}{c}{Accuracy} \cr
\cmidrule(lr){4-10}
& & &1\% &5\% &10\% &30\% &50\% &70\% &100\% \cr
\midrule
\multirow{10}{*}{\makecell[l]{Multi-center \\ CRC}}
&\multirow{7}{*}{ResNet-50}
&\emph{Centralized} &0.7185 &0.7913 &0.8819 &0.9033 &0.9306 &\textcolor{red}{0.9414} &0.9450 \cr
&&\emph{Centralized-FC} &0.8334 &\textcolor{red}{0.8970} &0.9129 &0.9260 &0.9378 &0.9412 &\textcolor{red}{0.9458} \cr
&&\emph{FedAvg} &0.1465 &0.3095 &0.4025 &0.4145 &0.4461 &0.4823 &0.4345 \cr
&&\emph{FedProx} &0.1900 &0.2976 &0.4332 &0.4621 &0.4663 &0.4448 &0.4818 \cr
&&\emph{FedPAQ} &0.1984 &0.3373 &0.3798 &0.5208 &0.4778 &0.4919 &0.4994 \cr
&&\emph{FedAvg-FC} &0.7942 &\textbf{0.8552} &0.8628 &0.8817 &0.8900 &0.8857 &0.8977 \cr
&&\emph{FedDBL (\textbf{ours})} &\textbf{\textcolor{red}{0.8832}} &0.8456 &\textbf{\textcolor{red}{0.9229}}
&\textbf{\textcolor{red}{0.9411}} &\textbf{\textcolor{red}{0.9410}} &\textbf{0.9413} &\textbf{0.9399} \cr
\cdashline{2-10}
&\multirow{3}{*}{\centering CTransPath}
&\emph{Centralized-FC} &\textcolor{red}{0.9390} &0.9594 &\textcolor{red}{0.9670} &\textcolor{red}{0.9756} &\textcolor{red}{0.9788} &\textcolor{red}{0.9801} &\textcolor{red}{0.9817} \cr
&&\emph{FedAvg-FC} &0.9074 &0.9382 &0.9455 &0.9536 &0.9563 &0.9577 &0.9595 \cr
&&\emph{FedDBL (\textbf{ours})} &\textbf{0.9213} &\textbf{\textcolor{red}{0.9640}} &\textbf{0.9654} &\textbf{0.9663} &\textbf{0.9668} &\textbf{0.9669} &\textbf{0.9669} \cr
\midrule
\multirow{10}{*}{\centering BCSS}
&\multirow{7}{*}{ResNet-50}
&\emph{Centralized} &0.7657 &0.4779 &0.5726 &0.8676 &0.8836 &0.8944 &0.9408 \cr
&&\emph{Centralized-FC} &0.5806 &0.8106 &0.8640 &0.9345 &0.9490 &0.9543 &0.9600 \cr
&&\emph{FedAvg} &0.5972 &0.7495 &0.7062 &0.6959 &0.6611 &0.5889 &0.6420 \cr
&&\emph{FedProx} &0.5951 &0.7277 &0.7158 &0.7155 &0.6654 &0.6631 &0.6308 \cr
&&\emph{FedPAQ} &0.5931 &0.6597 &0.6802 &0.6822 &0.6254 &0.6454 &0.6170 \cr
&&\emph{FedAvg-FC} &0.5740 &0.6234 &0.7714 &0.8754 &0.9014 &0.9259 &0.9365 \cr
&&\emph{FedDBL (\textbf{ours})} &\textbf{\textcolor{red}{0.9012}} &\textbf{\textcolor{red}{0.9511}} &\textbf{\textcolor{red}{0.9603}}
&\textbf{\textcolor{red}{0.9711}} &\textbf{\textcolor{red}{0.9745}} &\textbf{\textcolor{red}{0.9731}}&\textbf{\textcolor{red}{0.9652}} \cr
\cdashline{2-10}
&\multirow{3}{*}{\centering CTransPath}
&\emph{Centralized-FC} &0.6159 &0.5904 &0.6692 &0.8886 &0.9435 &0.9609 &0.9726 \cr
&&\emph{FedAvg-FC} &0.5858 &0.5772 &0.5676 &0.6689 &0.8072 &0.8537 &0.9072 \cr
&&\emph{FedDBL (\textbf{ours})} &\textbf{\textcolor{red}{0.9593}} &\textbf{\textcolor{red}{0.9754}} &\textbf{\textcolor{red}{0.9777}}
&\textbf{\textcolor{red}{0.9770}} &\textbf{\textcolor{red}{0.9834}} &\textbf{\textcolor{red}{0.9883}} &\textbf{\textcolor{red}{0.9910}} \cr
\bottomrule
\hspace{1mm}
\end{tabular}}
\end{threeparttable}
\label{tab:one-round-accuracy}
\end{table*}
\begin{table*}[th]
\centering
\caption{F1-scores of five-fold cross-validation experiment for different proportions of the training data under one-round training. Centralized methods are used for baselines, \textbf{bold} is the highest among federated algorithms and \textcolor{red}{red} represents the highest among all methods including centralized learning. ResNet-50 and CTransPath indicate the CNN backbones pre-trained on ImageNet and pathology images, respectively. The performance of each fold can be found in the supplemental material.}
\begin{threeparttable}
\setlength{\tabcolsep}{2.5mm}{
\begin{tabular}{p{0.11\textwidth}>{}
p{0.1\textwidth}>{}
p{0.15\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}}
\toprule
\multirow{2}{*}{Datasets} & \multirow{2}{*}{Base model} & \multirow{2}{*}{Models} & \multicolumn{7}{c}{F1-score} \cr
\cmidrule(lr){4-10}
& & &1\% &5\% &10\% &30\% &50\% &70\% &100\% \cr
\midrule
\multirow{10}{*}{\makecell[l]{Multi-center \\ CRC}}
&\multirow{7}{*}{ResNet-50}
&\emph{Centralized} &0.7191 &0.7863 &0.8801 &0.9027 &0.9300 &0.9415 &0.9451 \cr
&&\emph{Centralized-FC} &0.8301 &\textcolor{red}{0.8977} &0.9137 &0.9265 &0.9383 &0.9417 &\textcolor{red}{0.9463} \cr
&&\emph{FedAvg} &0.0669 &0.2203 &0.3214 &0.3250 &0.3240 &0.3829 &0.3366 \cr
&&\emph{FedProx} &0.1086 &0.2110 &0.3468 &0.3576 &0.3627 &0.3468 &0.3981 \cr
&&\emph{FedPAQ} &0.1146 &0.2527 &0.2789 &0.4285 &0.3773 &0.3930 &0.4032 \cr
&&\emph{FedAvg-FC} &0.7912 &\textbf{0.8527} &0.8610 &0.8805 &0.8896 &0.8841 &0.8971 \cr
&&\emph{FedDBL (\textbf{ours})} &\textbf{\textcolor{red}{0.8833}} &0.8502 &\textbf{\textcolor{red}{0.9268}} &\textbf{\textcolor{red}{0.9443}}
&\textbf{\textcolor{red}{0.9443}} &\textbf{\textcolor{red}{0.9445}} &\textbf{0.9432} \cr
\cdashline{2-10}
&\multirow{3}{*}{\centering CTransPath}
&\emph{Centralized-FC} &\textcolor{red}{0.9396} &0.9599 &\textcolor{red}{0.9673} &\textcolor{red}{0.9757} &\textcolor{red}{0.9789} &\textcolor{red}{0.9802} &\textcolor{red}{0.9817} \cr
&&\emph{FedAvg-FC} &0.9031 &0.9385 &0.9459 &0.9541 &0.9568 &0.9582 &0.9600 \cr
&&\emph{FedDBL (\textbf{ours})} &\textbf{0.9217} &\textbf{\textcolor{red}{0.9643}} &\textbf{0.9657} &\textbf{0.9666} &\textbf{0.9671} &\textbf{0.9672} &\textbf{0.9672} \cr
\midrule
\multirow{10}{*}{\centering BCSS}
&\multirow{7}{*}{ResNet-50}
&\emph{Centralized} &0.5870 &0.2228 &0.4280 &0.8196 &0.8081 &0.8245 &0.8974 \cr
&&\emph{Centralized-FC} &0.2056 &0.4846 &0.7297 &0.8979 &0.9231 &0.9304 &0.9404 \cr
&&\emph{FedAvg} &0.2300 &0.4059 &0.3689 &0.3433 &0.2962 &0.2176 &0.2798 \cr
&&\emph{FedProx} &0.2276 &0.3968 &0.3578 &0.3600 &0.3024 &0.3052 &0.2633 \cr
&&\emph{FedPAQ} &0.2210 &0.3020 &0.3157 &0.3328 &0.2766 &0.2837 &0.2582 \cr
&&\emph{FedAvg-FC} &0.2004 &0.2650 &0.4161 &0.7445 &0.8192 &0.8764 &0.8959 \cr
&&\emph{FedDBL (\textbf{ours})} &\textbf{\textcolor{red}{0.8471}} &\textbf{\textcolor{red}{0.9227}} &\textbf{\textcolor{red}{0.9413}} &\textbf{\textcolor{red}{0.9578}} &\textbf{\textcolor{red}{0.9638}} &\textbf{\textcolor{red}{0.9621}} &\textbf{\textcolor{red}{0.9550}} \cr
\cdashline{2-10}
&\multirow{3}{*}{\centering CTransPath}
&\emph{Centralized-FC} &0.2572 &0.2191 &0.3272 &0.7578 &0.9035 &0.9392 &0.9614 \cr
&&\emph{FedAvg-FC} &0.2148 &0.1981 &0.1816 &0.3272 &0.4571 &0.6165 &0.8061 \cr
&&\emph{FedDBL (\textbf{ours})} &\textbf{\textcolor{red}{0.9416}} &\textbf{\textcolor{red}{0.9644}} &\textbf{\textcolor{red}{0.9685}} &\textbf{\textcolor{red}{0.9639}} &\textbf{\textcolor{red}{0.9758}} &\textbf{\textcolor{red}{0.9818}} &\textbf{\textcolor{red}{0.9859}} \cr
\bottomrule
\hspace{1mm}
\end{tabular}}
\end{threeparttable}
\label{tab:one-round-f1-score}
\end{table*}
\subsection{Comparisons under One-round Communication}
\label{sub:one-round}
In this experiment, we evaluate the data efficiency, communication efficiency and flexibility of the proposed \emph{FedDBL}. Tables~\ref{tab:one-round-accuracy} and~\ref{tab:one-round-f1-score} report the average accuracy and F1-score. We compare \emph{FedDBL} with four FL frameworks and two centralized training approaches under only one round of communication or local training. We first employ ResNet-50 pre-trained on ImageNet as the CNN backbone for \emph{FedDBL} and all the other competitors.
\begin{enumerate}[(1)]
\item \emph{Centralized}: We fine-tune the pre-trained backbone with a random initialized fully connected (FC) layer in the centralized learning manner.
\item \emph{Centralized-FC}: We freeze the pre-trained backbone while fine-tuning the FC-layer in the centralized learning manner.
\item \emph{FedAvg}: We fine-tune the pre-trained model by FedAvg framework~\cite{mcmahan2017communication}.
\item \emph{FedProx}: We fine-tune the pre-trained model by the FedProx framework~\cite{li2020federated}.
\item \emph{FedPAQ}: We fine-tune the pre-trained model by a communication-efficient federated learning framework, FedPAQ~\cite{reisizadeh2020fedpaq}.
\item \emph{FedAvg-FC}: We freeze the pre-trained CNN backbone and only update the FC-layer by FedAvg framework.
\end{enumerate}
\textbf{Data efficiency:} As shown in Tables~\ref{tab:one-round-accuracy} and~\ref{tab:one-round-f1-score}, with sufficient training samples (100\%) in the one-round training experiment, centralized learning achieves better performance on both datasets than the conventional FL frameworks \emph{FedAvg}, \emph{FedProx} and \emph{FedPAQ}, because gathering the training samples from all clients allows the CNN model to be trained more stably and to converge faster. When the CNN backbone is frozen, \emph{FedAvg-FC} recovers a more favorable performance, and \emph{Centralized-FC} likewise outperforms \emph{Centralized}. This observation shows that, with limited communication resources or local training time but sufficient training samples, maintaining a stable CNN feature extractor is better than retraining the entire model; only updating the FC-layer is the better solution under this circumstance. The proposed \emph{FedDBL} achieves performance comparable to the centralized learning strategies on the Multi-center CRC dataset and even outperforms them on the BCSS dataset. When the training data are reduced, the performance of all approaches except \emph{FedDBL} drops dramatically, especially with only 1\% of the training samples. \emph{FedAvg-FC} with the frozen CNN backbone achieves around 0.79 accuracy and F1-score on Multi-center CRC but is still less effective than \emph{FedDBL}. However, the quantitative results of \emph{FedAvg-FC} on BCSS with 1\% training samples are much worse than those on Multi-center CRC, because Multi-center CRC is around 38 times larger than BCSS. In this experiment, the proposed \emph{FedDBL} yields the most stable quantitative results among all approaches on both datasets, and even outperforms centralized learning for most of the training-data proportions.
From this experiment, we can conclude that when with limited network communication resources and training samples, \emph{FedDBL} is the best solution for histopathological tissue classification.
\textbf{Flexibility:} Besides data and communication efficiency, \emph{FedDBL} is also a flexible framework that can be upgraded by replacing any module with a superior one, for example a more robust feature extractor, a stronger classifier or a better federated aggregation strategy. In this experiment, we demonstrate the flexibility of \emph{FedDBL} by replacing the ResNet-50 backbone pre-trained on ImageNet with a domain-relevant backbone, CTransPath~\cite{wang2022transformer}, pre-trained on histopathological images. The lower parts of both datasets in Tables~\ref{tab:one-round-accuracy} and~\ref{tab:one-round-f1-score} show the comparisons under CTransPath; here, we only compare \emph{FedDBL} with the approaches that update only the FC-layer. On the larger Multi-center CRC dataset, the domain-relevant pre-trained feature extractor CTransPath greatly improves all three approaches. Centralized learning shows the best results for almost all dataset proportions, but \emph{FedDBL} still consistently outperforms \emph{FedAvg-FC} under the same circumstances. On the much smaller BCSS dataset, \emph{FedDBL} demonstrates its superiority and outperforms both \emph{Centralized-FC} and \emph{FedAvg-FC}. With only 1\% of the BCSS training samples, CTransPath improves the F1-score of \emph{FedDBL} from 0.8471 (ResNet-50) to 0.9416, while little or no improvement is observed for the other two approaches.
\textbf{Communication efficiency:} Higher communication efficiency benefits not only from fewer communication rounds but also from a smaller model or feature size for transmission. Conventional federated frameworks share either the parameters of the deep learning models or the extracted features. In the proposed \emph{FedDBL}, we only share the lightweight broad learning weights, without sharing any deep learning parameters or deep features. In Table~\ref{tab:Overhead}, we report the model size and total upload overhead per client over the entire training phase. The shared ResNet-50 model is 94.4MB, whereas \emph{FedDBL(ResNet-50)} and \emph{FedDBL(CTP)} share only 276.5KB and 55.4KB of broad learning weights respectively, where CTP denotes the CTransPath backbone. With only one round of communication, \emph{FedDBL(ResNet-50)} reduces the communication overhead by nearly 350 times compared with ResNet-50. Since conventional federated frameworks may need multiple training iterations for model convergence, 50 rounds of communication greatly increase the total upload overhead from 94.4MB to 4.609GB. Thanks to the lightweight BL-module and one-shot training manner, \emph{FedDBL(CTP)} reduces the upload workload from 4.609GB to 55.4KB, over 87,000 times less than \emph{FedAvg(ResNet-50)}; even \emph{FedDBL(ResNet-50)} is over 17,000 times more efficient.
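These overhead figures can be sanity-checked with straightforward arithmetic (model and weight sizes as stated above):

```python
# Total upload per client: 50 rounds of sharing a 94.4 MB ResNet-50
total_mb = 94.4 * 50           # 4720 MB
total_gb = total_mb / 1024     # ~4.609 GB, matching the reported value

# One-shot FedDBL uploads: 276.5 KB (ResNet-50 features) or 55.4 KB (CTP)
ratio_resnet = total_mb * 1024 / 276.5   # reduction factor, > 17,000x
ratio_ctp = total_mb * 1024 / 55.4       # reduction factor, > 87,000x
```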
\begin{table*}[th]
\centering
\caption{Comparisons with different methods on 50-round training (Accuracy). \textbf{Bold} is the highest among federated methods; \textcolor{red}{red} is the highest score among all algorithms including centralized learning.}
\begin{threeparttable}
\setlength{\tabcolsep}{2.5mm}{
\begin{tabular}{p{0.11\textwidth}>{}
p{0.13\textwidth}>{\centering}
p{0.07\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}}
\toprule
\multirow{2}{*}{Datasets} & \multirow{2}{*}{Models} & \multirow{2}{*}{\makecell[c]{Global \\ Epochs}} & \multicolumn{7}{c}{Accuracy} \cr
\cmidrule(lr){4-10}
& &&1\% &5\% &10\% &30\% &50\% &70\% &100\% \cr
\midrule
\multirow{5}{*}{\makecell[l]{Multi-center \\ CRC}}
& \emph{Centralized} &50 &0.8830 &0.9308 &0.9508 &\textcolor{red}{0.9713} &\textcolor{red}{0.9789} &\textcolor{red}{0.9817} &\textcolor{red}{0.9846}\cr
\cdashline{2-10}
& \emph{FedAvg} &50 &0.8747 &0.9289 &0.9385 &0.9542 &0.9590 &0.9604 &0.9629\cr
& \emph{FedProx} &50 &0.8786 &0.9289 &0.9409 &0.9549 &0.9586 &0.9615 &0.9622\cr
& \emph{FedDBL} &1 &0.8832 &0.8456 &0.9229 &0.9411 &0.9410 &0.9413 &0.9399\cr
& \emph{FedDBL(CTP)} &1 &\textbf{\textcolor{red}{0.9213}} &\textbf{\textcolor{red}{0.9640}} &\textbf{\textcolor{red}{0.9654}} &\textbf{0.9663} &\textbf{0.9668} &\textbf{0.9669} &\textbf{0.9669}\cr
\midrule
\multirow{5}{*}{\centering BCSS}
&\emph{Centralized} &50 &0.8291 &0.9179 &0.9458 &0.9650 &0.9810 &0.9818 &0.9838 \cr
\cdashline{2-10}
&\emph{FedAvg} &50 &0.8657 &0.9340 &0.9393 &0.9634 &0.9730 &0.9804 &0.9834 \cr
&\emph{FedProx} &50 &0.8700 &0.9201 &0.9502 &0.9671 &0.9737 &0.9776 &0.9830 \cr
&\emph{FedDBL} &1 &0.9012 &0.9511 &0.9603 &0.9711 &0.9745 &0.9731 &0.9652 \cr
&\emph{FedDBL(CTP)} &1 &\textbf{\textcolor{red}{0.9593}} &\textbf{\textcolor{red}{0.9754}} &\textbf{\textcolor{red}{0.9777}} &\textbf{\textcolor{red}{0.9770}} &\textbf{\textcolor{red}{0.9834}} &\textbf{\textcolor{red}{0.9883}} &\textbf{\textcolor{red}{0.9910}} \cr
\bottomrule
\hspace{1mm}
\end{tabular}}
\end{threeparttable}
\label{tab:accuracy}
\end{table*}
\begin{table*}[th]
\centering
\caption{Comparisons with different methods on 50-round training (F1-score). \textbf{Bold} is the highest among federated methods; \textcolor{red}{red} is the highest score among all algorithms including centralized learning.}
\begin{threeparttable}
\setlength{\tabcolsep}{2.5mm}{
\begin{tabular}{p{0.11\textwidth}>{}
p{0.13\textwidth}>{\centering}
p{0.07\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}>{\centering}
p{0.05\textwidth}}
\toprule
\multirow{2}{*}{Datasets} & \multirow{2}{*}{Models} & \multirow{2}{*}{\makecell[c]{Global \\ Epochs}} & \multicolumn{7}{c}{F1-score} \cr
\cmidrule(lr){4-10}
& &&1\% &5\% &10\% &30\% &50\% &70\% &100\% \cr
\midrule
\multirow{5}{*}{\makecell[l]{Multi-center \\ CRC}}
& \emph{Centralized} &50 &0.8813 &0.9307 &0.9512 &\textcolor{red}{0.9714}&\textcolor{red}{0.9789}&\textcolor{red}{0.9816}&\textcolor{red}{0.9845}\cr
\cdashline{2-10}
& \emph{FedAvg} &50 &0.8720 &0.9283 &0.9386 &0.9545 &0.9592 &0.9607 &0.9632\cr
& \emph{FedProx} &50 &0.8751 &0.9288 &0.9409 &0.9550 &0.9591 &0.9618 &0.9625\cr
& \emph{FedDBL} &1 &0.8833 &0.8502 &0.9268 &0.9443 &0.9443 &0.9445 &0.9432\cr
& \emph{FedDBL(CTP)} &1 &\textbf{\textcolor{red}{0.9217}} &\textbf{\textcolor{red}{0.9643}} &\textbf{\textcolor{red}{0.9657}}
&\textbf{0.9666} &\textbf{0.9671} &\textbf{0.9672} &\textbf{0.9672}\cr
\midrule
\multirow{5}{*}{\centering BCSS}
&\emph{Centralized} &50 &0.7671 &0.8808 &0.9182 &0.9554 &0.9722 &0.9756 &0.9761 \cr
\cdashline{2-10}
&\emph{FedAvg} &50 &0.8088 &0.9006 &0.8999 &0.9448 &0.9598 &0.9721 &0.9741 \cr
&\emph{FedProx} &50 &0.8095 &0.8851 &0.9293 &0.9522 &0.9599 &0.9669 &0.9769 \cr
&\emph{FedDBL} &1 &0.8471 &0.9227 &0.9413 &0.9578 &0.9638 &0.9621 &0.9550 \cr
&\emph{FedDBL(CTP)} &1 &\textbf{\textcolor{red}{0.9416}} &\textbf{\textcolor{red}{0.9644}} &\textbf{\textcolor{red}{0.9685}} &\textbf{\textcolor{red}{0.9639}} &\textbf{\textcolor{red}{0.9758}} &\textbf{\textcolor{red}{0.9818}} &\textbf{\textcolor{red}{0.9859}} \cr
\bottomrule
\hspace{1mm}
\end{tabular}}
\end{threeparttable}
\label{tab:f1-score}
\end{table*}
\begin{table}[t]
\centering
\caption{Model size and total upload overhead per client among three models.}
\begin{threeparttable}
\begin{tabular}{cccc}
\toprule
Models\centering &\makecell[c]{\emph{FedAvg} \\ \emph{(ResNet-50)}} &\makecell[c]{\emph{FedDBL} \\ \emph{(ResNet-50)}} &\makecell[c]{ \emph{FedDBL} \\ \emph{(CTP)}} \cr
\midrule
Model size \centering &94.4MB &276.5KB &55.4KB \cr
Total upload size \centering &4.609GB &276.5KB &55.4KB \cr
\bottomrule
\hspace{1mm}
\end{tabular}
\end{threeparttable}
\label{tab:Overhead}
\end{table}
\subsection{Comparisons under Multiple-round Communication}
\label{sub:multi-round}
In this experiment, we compare \emph{FedDBL} under one-round communication with centralized learning and two federated frameworks under multiple-round communication. Table~\ref{tab:accuracy} and Table~\ref{tab:f1-score} report the accuracy and F1-score, respectively. As expected, the centralized learning strategy achieves the best classification performance given enough training data on the Multi-center CRC dataset. \emph{FedDBL(CTP)}, with its domain-relevant feature extractor, consistently surpasses the two federated learning frameworks, and even outperforms centralized learning with less than 10\% of the training data. On BCSS, which is much smaller than Multi-center CRC, even \emph{FedDBL} with the ImageNet pre-trained feature extractor outperforms both centralized learning and the federated frameworks, and \emph{FedDBL(CTP)} further improves the quantitative results to a remarkable level.
Moreover, we also visualize the average accuracy of the results in Table~\ref{tab:accuracy} and Table~\ref{tab:f1-score} at every epoch in Fig.~\ref{fig:Epoch_plot} to compare the convergence speed of the existing approaches trained with four representative dataset proportions. Since \emph{FedDBL} is a one-round communication framework, we show the \emph{FedDBL} and \emph{FedDBL(CTP)} results as gray and orange dashed lines, respectively. The most representative region in each sub-figure is highlighted by a zoom-in window. As shown, the convergence speed and the optimal performance of the existing models depend strongly on the proportion of the training data. With 100\% of the training data in Multi-center CRC, centralized learning quickly surpasses \emph{FedDBL} within five epochs and even outperforms \emph{FedDBL(CTP)}. The two federated frameworks need more training epochs to converge and achieve performance comparable to \emph{FedDBL(CTP)}. As the training samples are reduced, convergence becomes slower and the optimal performance also decreases. With only 1\% of the training samples in BCSS, the three existing models fluctuate heavily and cannot even surpass \emph{FedDBL} within 50 training epochs.
All the above experimental results demonstrate the data efficiency, communication efficiency, model flexibility and model robustness of FedDBL on histopathological image classification. In our opinion, FedDBL has the potential to significantly save computational and communication resources, relieve the pathologist's labeling burden and preserve the patient's privacy, which greatly promotes its clinical practicability compared with existing approaches.
\begin{figure*}[htp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\begin{tabular}{>{\centering\arraybackslash\hspace{0pt}}p{\linewidth}}
\includegraphics[width=\linewidth]{fig/Multi-center_CRC.pdf}
\includegraphics[width=\linewidth]{fig/BCSS.pdf}
\end{tabular}
\caption{Average global accuracy scores of the 5-fold cross-validation results at each training epoch. Four representative dataset proportions are selected for visualization. Since FedDBL is a one-round communication framework, we show the \emph{FedDBL} and \emph{FedDBL(CTP)} results as gray and orange dashed lines, respectively. We also highlight the most representative region of each sub-figure for better visualization.}
\label{fig:Epoch_plot}
\end{figure*}
\section{FedDBL for Privacy-preserving}
One of the objectives of FedDBL is to preserve patients' privacy. However, even though the raw data need not be shared directly, conventional DL-based federated frameworks still suffer from various federated attacks, such as the model inversion attack~\cite{zhu2019deep} or the man-in-the-middle attack~\cite{wang2020man}. Compared with conventional DL-based federated frameworks, FedDBL can defend against the aforementioned attacks because the pre-trained DL-module is never updated with the training data, and it is the deep features that are used to calculate the parameters of the BL-module locally at each client. In effect, the pre-trained DL-module can be regarded as a perturbation process that transforms the data into high-level features, and the features themselves are further protected because only the parameters of the BL-module are shared.
Thanks to the lightweight BL-module, we can further protect the parameters by employing an additional encryption algorithm~\cite{aono2017efficient} to support federated aggregation in the encrypted domain. The corresponding packing-encryption~\cite{le2018privacy} is used, which exploits the redundancy of the ciphertext space to hold more plaintext in each encrypted block. Table~\ref{tab:encrypted} reports the model accuracy, F1-score and model size of the packing-encrypted FedDBL with 1\% of the Multi-center CRC dataset using the CTransPath pre-trained backbone. When the bit-length for the encryption precision is set to 32 bits, the encryption does not degrade the parameters, so the model accuracy is preserved. With packing-encryption, the model size remains much smaller than that of ResNet-50 in the plaintext condition (94.4MB), shown in Table~\ref{tab:Overhead}.
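The packing idea can be illustrated with a toy sketch (a hypothetical illustration of ours, not the actual scheme of~\cite{aono2017efficient} or~\cite{le2018privacy}): each BL-module parameter is quantized to a 32-bit fixed-point integer, and many such integers are concatenated into one large integer block, mimicking how one ciphertext block can hold many plaintexts:

```python
# Toy illustration of the packing idea behind packing-encryption:
# many 32-bit fixed-point parameters are packed into one large integer
# "block" (standing in for one ciphertext). Hypothetical sketch only;
# no actual homomorphic encryption is performed here.

BITS = 32           # bit-length per parameter (as in the 32-bit setup)
SCALE = 1 << 16     # fixed-point scale: 16 fractional bits
MASK = (1 << BITS) - 1

def pack(weights):
    block = 0
    for i, w in enumerate(weights):
        q = int(round(w * SCALE)) & MASK    # two's-complement encode
        block |= q << (i * BITS)
    return block

def unpack(block, n):
    out = []
    for i in range(n):
        q = (block >> (i * BITS)) & MASK
        if q >= 1 << (BITS - 1):            # sign-extend negatives
            q -= 1 << BITS
        out.append(q / SCALE)
    return out

weights = [0.25, -1.5, 3.140625, -0.0078125]
recovered = unpack(pack(weights), len(weights))
assert recovered == weights   # exact: all values fit in 16 fractional bits
```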
\begin{table}[t]
\centering
\caption{Comparison between \emph{FedDBL} and \emph{FedDBL(encrypted)} under 1\% Multi-center CRC and CTransPath backbone.}
\begin{threeparttable}
\begin{tabular}{cccc}
\toprule
Models &\makecell[c]{Accuracy} &\makecell[c]{F1-score} &\makecell[c]{Model size} \cr
\midrule
\makecell[c]{\emph{FedDBL}} &\makecell[c]{0.9593} &\makecell[c]{0.9416} &\makecell[c]{55.4KB} \cr
\makecell[c]{\emph{FedDBL} \\ \emph{(encrypted)}} &\makecell[c]{0.9593} &\makecell[c]{0.9416} &\makecell[c]{30MB} \cr
\bottomrule
\hspace{1mm}
\end{tabular}
\end{threeparttable}
\label{tab:encrypted}
\end{table}
\section{Discussion and Conclusion}
In this paper, we proposed a novel federated framework (FedDBL) for histopathological tissue classification. Thanks to the robust deep learning feature extractor and the flexible broad learning inference system, FedDBL greatly improves the classification performance with only one-round communication and extremely limited training samples, which significantly reduces the data dependency and communication burden. With the DL-module and BL-module used for local training, each client only needs to solve a lightweight broad learning system, which drastically reduces the training overhead. Sharing the lightweight BL-module in the federated framework not only greatly reduces the communication burden, but also preserves the patients' privacy. Experimental results with five-fold cross-validation demonstrate that FedDBL outperforms both centralized learning strategies and federated frameworks in one-round communication. It even outperforms all the competitors trained for 50 communication rounds when training samples are limited.
Moreover, due to the flexible model design, FedDBL can be further upgraded by replacing any module with a superior one in the future. In this paper, we have shown that a domain-relevant deep feature extractor is more effective than a domain-irrelevant one. Since the aggregation used in this study is the most common federated averaging strategy, we expect that a more advanced federated aggregation scheme can be applied in the future to further improve the performance of FedDBL under extreme data and communication situations.
\section*{Acknowledgments}
This work was supported by Key-Area Research and Development Program of Guangdong Province (No. 2021B0101420006),
the National Key R\&D Program of China (No. 2021YFF1201003),
Regional Innovation and Development Joint Fund of National Natural Science Foundation of China (No. U22A20345),
the National Natural Science Foundation of China (No. 82271941, 82072090 and 81772840),
the National Science Foundation for Young Scientists of China (No. 62102103),
the Natural Science Foundation for Distinguished Young Scholars of Guangdong Province (No. 2023B1515020043),
the Natural Science Foundation of Guangdong Province (No. 2023A1515030251),
Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (No. 2022B1212010011), Science and Technology Projects in Guangzhou (No. 20220102000 and 202201010513) and
High-level Hospital Construction Project (No. DFJHBF202105).
The Hirota equation (T system) is ubiquitous in the theory of quantum integrable
systems \cite{Hirota:1981, Kulish:1981bi, Kirillov:1987zz, Bazhanov:1987zu, Klumper:1992vt,
Kuniba:1993cn, Bazhanov:1994ft, Fioravanti:1995cq, Krichever:1996qd, Zabrodin:1998, Gromov:2008gj,
Gromov:2009tv}. For an $sl(2)$-invariant periodic quantum spin chain
(see Appendix \ref{sec:closedtransf} for details), the Hirota equation
takes the form \cite{Zabrodin:1998, Gromov:2008gj} \footnote{In general,
the transfer matrix $T_{a,s}$ can have two subscripts, corresponding to the
representation of the auxiliary space given by a rectangular Young tableau
with $s$ rows and $a$ columns. For simplicity, we focus here exclusively on the $sl(2)$ case, where $T$ has a single
subscript $T_{s} = T_{1,s}$.}
\begin{eqnarray}
T^{+}_{k}\, T^{-}_{k} - T_{k+1}\, T_{k-1} = \phi^{[k]}\,
\bar{\phi}^{[-k]}\,, \qquad T_{-1}=0\,,
\qquad k = 0, 1, 2,
\ldots \,,
\label{Hirota}
\end{eqnarray}
where $f^{\pm}= f(u \pm \frac{i}{2})$ and $f^{[\pm k]}= f(u \pm \frac{i k}{2})$
for any function $f(u)$.
Here $T_{k}(u)$ is the
transfer matrix constructed with a spin-$k/2$ auxiliary space
\cite{Kulish:1981bi, Kulish:1981gi}; in particular, $T_{1}(u)$ is the
fundamental transfer matrix.
These transfer matrices mutually commute $(\left[ T_{k}(u)\,, T_{j}(v)
\right]=0)$, and obey the Hirota equation (\ref{Hirota}) with
\begin{eqnarray}
\phi(u)=(u+\tfrac{i}{2})^{N}\,, \qquad
\bar\phi(u)=(u-\tfrac{i}{2})^{N}\,.
\label{phi}
\end{eqnarray}
The eigenvalues corresponding to simultaneous eigenvectors
of these transfer matrices (which we also denote by $T_{k}(u)$)
evidently also obey the Hirota equation, and we henceforth regard $T_{k}(u)$ as a scalar function.
It is well known that the Hirota equation (\ref{Hirota}) admits a Lax representation
through the auxiliary linear problem (see \cite{Krichever:1996qd,
Gromov:2008gj} and references therein)
\begin{eqnarray}
T_{k+1}\, Q^{[k]} - T_{k}^{-}\, Q^{[k+2]} &=& \phi^{[k]}\,
\bar{Q}^{[-k-2]} \,, \label{QI}\\
T_{k-1}\, \bar{Q}^{[-k-2]} - T_{k}^{-}\, \bar{Q}^{[-k]} &=&
-\bar{\phi}^{[-k]}\,
Q^{[k]} \, , \label{QII}
\end{eqnarray}
where the function $\bar{Q}(u)$ is defined by
$\bar{Q}(u)= \left(Q(u^*)\right)^{*}$.
However, in order to reproduce the celebrated Baxter T-Q relation, we
henceforth restrict our attention to the case that $Q$ is real analytic:
$Q(u)^*= Q(u^*)$, which implies that
\begin{eqnarray}
Q(u)= \bar{Q}(u)
\label{QbarQ}
\end{eqnarray}
for any complex $u$. Note that the Hirota equation (\ref{Hirota}) with $k=0$ can be satisfied by setting
\begin{eqnarray}
T_{0}=\phi^{-}\,, \qquad \bar\phi=\phi^{[-2]} \,.
\label{k0constraint}
\end{eqnarray}
It then follows from the first Lax equation (\ref{QI}) with $k=0$ (or
alternatively from the second Lax equation (\ref{QII}) with $k=1$)
that
\begin{eqnarray}
T_{1}\, Q = \bar{\phi}\, Q^{[2]} + \phi\, Q^{[-2]} \,,
\label{BaxTQ}
\end{eqnarray}
which is the important Baxter T-Q equation \cite{Baxter1982}.
Assuming that $Q(u)$ is a polynomial in $u$ of degree $M$, i.e.
$Q(u)=\prod_{j=1}^{M}(u-u_{j})$, the analyticity of $T_{1}$ together
with the T-Q equation imply the Bethe equations for the zeros of
$Q(u)$
\begin{eqnarray}
\bar{\phi}(u_{k})\, Q(u_{k}+i) + \phi(u_{k})\, Q(u_{k}-i) = 0\,,
\end{eqnarray}
which can be rewritten in the more familiar form
\begin{eqnarray}
\left(\frac{u_{k}+\frac{i}{2}}{u_{k}-\frac{i}{2}}\right)^{N} =
\prod_{j=1, j \ne k}^M
\frac{u_{k}-u_{j}+i}{u_{k}-u_{j}-i} \,.
\end{eqnarray}
In principle, by solving the Bethe equations, one can obtain $Q$, and therefore
(through (\ref{BaxTQ})) $T_{1}$.
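As a minimal numerical illustration (our own sketch, not part of the original analysis), consider $N=2$ and $M=1$: the Bethe equation forces $u_{1}=0$, i.e. $Q(u)=u$, and the T-Q relation (\ref{BaxTQ}) then yields the polynomial $T_{1}(u)=2u^{2}+\tfrac{3}{2}$:

```python
# Minimal check of the periodic-chain T-Q construction with N = 2
# sites and M = 1 magnon: the Bethe equation ((u+i/2)/(u-i/2))^2 = 1
# has the root u_1 = 0, so Q(u) = u, and analyticity of T_1 at that
# root makes T_1 a polynomial.
I = 1j
N = 2

def phi(u):    return (u + I/2)**N
def phibar(u): return (u - I/2)**N
def Q(u):      return u                 # Q(u) = u - u_1 with u_1 = 0

def T1(u):     # T_1 = (phibar*Q^{[2]} + phi*Q^{[-2]}) / Q
    return (phibar(u)*Q(u + I) + phi(u)*Q(u - I)) / Q(u)

# u_1 = 0 satisfies the Bethe equation (before dividing by Q):
assert abs(phibar(0)*Q(0 + I) + phi(0)*Q(0 - I)) < 1e-12
# away from the root, T_1 agrees with the polynomial 2u^2 + 3/2:
for u in (0.3 + 0.7j, -1.2 + 0.1j, 2.5 - 0.4j):
    assert abs(T1(u) - (2*u**2 + 1.5)) < 1e-10
```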
Since (\ref{QI}) is linear, it can be solved for all the $T_{k}$ in terms of $Q$,
and therefore it gives T-Q-like equations for the higher
(fused) transfer matrices.
The conventional wisdom has been that (\ref{QI})-(\ref{QII}) is the
unique Lax representation for the Hirota equation. However, it has
recently been shown that quantum integrable models without $U(1)$
symmetry (such as the open XXX spin-1/2 chain with non-diagonal
boundary terms, see Appendix \ref{sec:opentransf} for details) can be
solved using a Baxter T-Q equation with an inhomogeneous term, i.e. with the
structure \cite{Cao:2013qxa, Wang2015, Nepomechie:2013ila,
Belliard:2013aaa, Kitanine:2014swa, Belliard:2015}
\begin{eqnarray}
T_{1}\, Q = \bar{\varphi}\, Q^{[2]} + \varphi\, Q^{[-2]} + \Delta\,,
\label{inhomTQ}
\end{eqnarray}
where $\Delta(u)$ is real analytic (in particular, real for real $u$)
and independent of $Q$. Indeed, such an
inhomogeneous term is necessary in order for the function $Q(u)$ to be a {\em
polynomial} in $u$, i.e.
$Q(u)=\prod_{j=1}^{N}(u-u_{j})(u+u_{j})$. The analyticity of $T_{1}$ together
with the T-Q equation (\ref{inhomTQ}) then imply the following Bethe equations for the zeros of
$Q(u)$
\begin{eqnarray}
\bar{\varphi}(u_{k})\, Q(u_{k}+i) + \varphi(u_{k})\, Q(u_{k}-i) +
\Delta(u_{k}) = 0\,.
\label{BEopen}
\end{eqnarray}
Similarly to the case of the periodic chain, by solving the Bethe equations (\ref{BEopen}), one can obtain $Q$, and therefore
(through (\ref{inhomTQ})) $T_{1}$.
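As a toy numerical illustration (with an assumed constant $\Delta$ and simple quadratic $\varphi,\bar\varphi$ chosen for convenience, not the actual open-chain functions of (\ref{phiopen})), the inhomogeneous Bethe equation (\ref{BEopen}) can be solved by Newton iteration; in this symmetric example the condition at the mirror root $-u_{1}$ then holds automatically:

```python
# Toy Newton solve of the inhomogeneous Bethe equation (BEopen) for a
# single pair of roots +-u_1, with Q(u) = (u - u_1)(u + u_1).
# Hypothetical choices: phi, phibar quadratic, Delta constant.

def phi(u):    return (u + 0.5j)**2
def phibar(u): return (u - 0.5j)**2
def Delta(u):  return 2.0
def Q(u, r):   return (u - r)*(u + r)

def f(r):      # inhomogeneous Bethe equation evaluated at u = r
    return phibar(r)*Q(r + 1j, r) + phi(r)*Q(r - 1j, r) + Delta(r)

r = 1j                       # Newton iteration with a numerical derivative
for _ in range(50):
    df = (f(r + 1e-6) - f(r - 1e-6))/2e-6
    r -= f(r)/df

assert abs(f(r)) < 1e-9
assert abs(r**2 + 1.25) < 1e-9    # here f(r) = 2r^2 + 1/2 + Delta
# the condition at the mirror root -u_1 is also satisfied:
assert abs(phibar(-r)*Q(-r + 1j, r) + phi(-r)*Q(-r - 1j, r) + Delta(-r)) < 1e-9
```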
The transfer matrices
for such models \cite{Sklyanin:1988yz}, constructed using non-diagonal
boundary S-matrices \cite{Ghoshal:1993tm, deVega:1993xi}, still obey
\cite{Mezincescu:1990fc, Mezincescu:1991ke, Zhou:1995zy} the Hirota
equation, albeit in a slightly modified form,
\begin{eqnarray}
T^{+}_{k}\, T^{-}_{k} - T_{k+1}\, T_{k-1} = T_{2,k}\,, \qquad T_{-1}=0\,,
\qquad k = 0, 1, 2,
\ldots \,,
\label{Hirotaopen}
\end{eqnarray}
where $T_{2,k}$ is given by the quantum determinant (\ref{qdet}).
This naturally raises the question: does the Hirota equation
(\ref{Hirotaopen}) admit a Lax
representation with inhomogeneous terms?
We answer this question here in the affirmative. Indeed, we show that
such a Lax representation is given by (\ref{QIopeninhom})-(\ref{QIIopeninhom})
\begin{eqnarray}
T_{k+1}\, Q^{[k]} - \bar\varphi^{[k]}\, T_{k}^{-}\, Q^{[k+2]} &=& X_{k}\,
Q^{[-k-2]}+ \sum_{l=0}^{k}\psi_{l,k}\, \Delta^{[2l-k]}\, T_{l}^{[l-k-1]}
\,, \label{inhomQI}\\
\varphi^{[-k]}\, T_{k-1}\, Q^{[-k-2]} - T_{k}^{-}\, Q^{[-k]} &=&
-Y_{k}\, Q^{[k]} - \sum_{l=0}^{k-1}\bar{\psi}_{l,k-1}^{-}\, \Delta^{[k-2l-2]}\, T_{l}^{[k-l-1]}
\,, \label{inhomQII}
\end{eqnarray}
where $X_{k}, Y_{k}$ and $\psi_{l,k}$ are given by (\ref{XY}) and (\ref{psi}).
The key new point is the appearance of terms containing $\Delta$,
which do not contain $Q$ and therefore are ``inhomogeneous'' terms.
In particular, (\ref{inhomQI}) reduces to (\ref{inhomTQ}) for $k=0$.
As the equations (\ref{inhomQI}) are still linear, they can be solved
for all $T_{k}$ in terms of $Q$.
We remark that equivalent expressions for
$T_{k}$ in terms of $Q$ were obtained earlier by means of a generating
function \cite{Nepomechie:2013ila}. An AdS/CFT generalization of this
generating function was proposed in \cite{Zhang:2015fea}, and it was
subsequently used in \cite{Bajnok:2015kfz} to compute wrapping
corrections.
Interestingly, the compatibility of the system
(\ref{inhomQI})-(\ref{inhomQII}) leads to a family of
Hirota-like equations (\ref{genHirota})
\begin{eqnarray}
T_{k+1}\, T_{k-a-1}^{[a]}
- T_{k}^{-}\, T_{k-a}^{[a+1]}
+ T_{2,k-a}^{[a]}\, T_{a}^{[a-k-1]} = 0\,, \qquad
a=0\,,1\,, \ldots\,, k-1\,,
\label{newids}
\end{eqnarray}
whose particular case $a=0$ coincides with the original Hirota equation
(\ref{Hirotaopen}).
To our knowledge, the bilinear relations (\ref{newids}) with $a>0$ are new. We show
that these relations are consistent with the Hirota equation (\ref{Hirotaopen})
by first solving the latter to obtain a determinant expression for
$T_{k}$ in terms of $T_{1}$, and by then
judiciously applying Pl\"ucker relations.
The outline of this paper is as follows. In Sec. \ref{sec:periodic}
we briefly review for the periodic spin chain how the compatibility of the
auxiliary linear problem implies the Hirota equation. In Sec.
\ref{sec:open} we turn to the open spin chain. We present both
homogeneous and inhomogeneous Lax representations of the Hirota
equation. We derive the compatibility conditions for the auxiliary problem
(\ref{inhomQI})-(\ref{inhomQII}), and show that they are satisfied if
the Hirota-like equations (\ref{newids}) are obeyed. In Sec.
\ref{sec:solvingHirota} we solve the Hirota equation
(\ref{Hirotaopen}) to obtain a
determinant expression for $T_{k}$ in terms of $T_{1}$, see
(\ref{Tk1}) and (\ref{Tk2}). In Sec. \ref{sec:solvingHirotalike} we
use Pl\"ucker relations to show that this solution is also a solution
of the Hirota-like equations. In Sec. \ref{sec:discuss} we briefly
discuss our results and we point out some further related problems.
We briefly review the
construction of the family of commuting
transfer matrices for integrable periodic and
open quantum spin chains in appendices \ref{sec:closedtransf} and
\ref{sec:opentransf}, respectively.
\section{Periodic spin chain}\label{sec:periodic}
It is useful to begin by briefly reviewing how the compatibility of
the auxiliary linear problem for a periodic spin chain
(\ref{QI})-(\ref{QII}) with (\ref{QbarQ})
\begin{eqnarray}
T_{k+1}\, Q^{[k]} - T_{k}^{-}\, Q^{[k+2]} &=& \phi^{[k]}\,
Q^{[-k-2]} \,, \label{QIagain}\\
T_{k-1}\, Q^{[-k-2]} - T_{k}^{-}\, Q^{[-k]} &=&
-\bar{\phi}^{[-k]}\,
Q^{[k]} \,, \label{QIIagain}
\end{eqnarray}
implies the Hirota equation (\ref{Hirota}).
Multiplying (\ref{QIIagain}) by $T_{k+1}$ gives
\begin{eqnarray}
T_{k+1}\, T_{k-1}\, Q^{[-k-2]} - T_{k+1}\, T_{k}^{-}\, Q^{[-k]} =
-\bar{\phi}^{[-k]}\, T_{k+1}\, Q^{[k]} \,.
\label{a}
\end{eqnarray}
On the other hand, performing on (\ref{QIIagain}) the shifts $k \mapsto k+1$ and
$u\mapsto u+\frac{i}{2}$, and then multiplying the result by $T_{k}^{-}$ gives
\begin{eqnarray}
T_{k}^{-}\, T_{k}^{+}\, Q^{[-k-2]} - T_{k}^{-}\, T_{k+1}\, Q^{[-k]} =
-\bar{\phi}^{[-k]}\, T_{k}^{-}\, Q^{[k+2]} \,.
\label{b}
\end{eqnarray}
Subtracting (\ref{a}) from (\ref{b}) yields
\begin{eqnarray}
\left(T^{+}_{k}\, T^{-}_{k} - T_{k+1}\, T_{k-1} \right) Q^{[-k-2]}
= \bar{\phi}^{[-k]}\left( T_{k+1}\, Q^{[k]} - T_{k}^{-}\, Q^{[k+2]}
\right) = \bar{\phi}^{[-k]}\, \phi^{[k]}\, Q^{[-k-2]}\,,
\label{c}
\end{eqnarray}
where the second equality follows from (\ref{QIagain}). It is now clear
that (\ref{c}), which expresses the compatibility of (\ref{QIagain}) and
(\ref{QIIagain}) for the function $Q$, implies the Hirota
equation (\ref{Hirota}). Note also that (\ref{QIIagain}) can be obtained
from (\ref{QIagain}): performing on (\ref{QIagain}) the shifts $k \mapsto k-1$
and $u\mapsto u+\frac{i}{2}$, we obtain
\begin{eqnarray}
T_{k}^{+}\, Q^{[k]} -T_{k-1}\, Q^{[k+2]} =
\phi^{[k]}\, Q^{[-k]} \,,
\label{ItoII}
\end{eqnarray}
which (up to an overall factor $-1$) is the complex conjugate of
(\ref{QIIagain}), assuming that $T_{k}(u)$ is real analytic, i.e.
$T_{k}(u)^* = T_{k}(u^*)$, and that $\phi(u)^*=\bar{\phi}(u^*)$.
In fact, provided that these two conditions are met, the entire
reasoning can be extended to the general case $Q(u)^*= \bar{Q}(u^*)$.
\section{Open spin chain}\label{sec:open}
For an open spin chain, the corresponding function $\varphi(u)$
(\ref{phiopen}) does not
satisfy the constraint $\bar\phi=\phi^{[-2]}$ (\ref{k0constraint})
that follows from (\ref{Hirota}). Indeed, the Hirota
equation takes a form slightly different from (\ref{Hirota}),
namely,
\begin{eqnarray}
T^{+}_{k}\, T^{-}_{k} - T_{k+1}\, T_{k-1} = T_{2, k}\,, \qquad T_{-1}=0\,,
\qquad k = 0, 1, 2,
\ldots \,,
\label{Hirotaopenagain}
\end{eqnarray}
where $T_{2,k}$ is the quantum determinant
\begin{eqnarray}
T_{2,k} = \prod_{j=0}^{k-1}\varphi^{[k-2j]}\, \bar\varphi^{[2j-k]}\,,
\label{qdet}
\end{eqnarray}
which satisfies the discrete Laplace equation
\begin{eqnarray}
T^{+}_{2,k}\, T^{-}_{2,k} =T_{2,k+1}\, T_{2,k-1} \,.
\end{eqnarray}
Since $T_{2,0}=1$, Eq. (\ref{Hirotaopenagain}) with $k=0$ implies that
\begin{eqnarray}
T_{0}=1 \,,
\end{eqnarray}
which differs from the first relation of (\ref{k0constraint}).
\subsection{Homogeneous case}
For an open spin chain with {\em diagonal} boundary terms
$(\xi=0)$, we propose
the following homogeneous auxiliary linear problem
\begin{eqnarray}
T_{k+1}\, Q^{[k]} - \bar\varphi^{[k]}\, T_{k}^{-}\, Q^{[k+2]} &=& X_{k}\,
Q^{[-k-2]} \,, \label{QIopenhom}\\
\varphi^{[-k]}\, T_{k-1}\, Q^{[-k-2]} - T_{k}^{-}\, Q^{[-k]} &=&
-Y_{k}\,
Q^{[k]} \,, \label{QIIopenhom}
\end{eqnarray}
where
\begin{eqnarray}
X_{k} = \prod_{j=0}^{k}\varphi^{[k-2j]}\,, \qquad
Y_{k} = \prod_{j=0}^{k-1}\bar\varphi^{[2j-k]} \,,
\label{XY}
\end{eqnarray}
instead of (\ref{QIagain})-(\ref{QIIagain}). Indeed, following the same
steps as in the periodic case (\ref{a})-(\ref{c}), we find with the
help of the simple identities
\begin{eqnarray}
Y_{k+1}^{+} = \bar\varphi^{[k]}\, Y_{k}\,, \qquad X_{k}\, Y_{k} =
\varphi^{[-k]}\, T_{2,k}\,,
\label{identities}
\end{eqnarray}
that the compatibility of the linear system
(\ref{QIopenhom})-(\ref{QIIopenhom}) implies the Hirota equation
(\ref{Hirotaopenagain}).
Moreover, (\ref{QIIopenhom}) can be obtained from (\ref{QIopenhom}) in the same way
that (\ref{QIIagain}) can be obtained from (\ref{QIagain}), see (\ref{ItoII}).
Eq. (\ref{QIopenhom}) can be solved for $T_{k}$ in
terms of $Q$. For $k=0$, one readily obtains the usual T-Q equation
\begin{eqnarray}
T_{1}\, Q = \bar{\varphi}\, Q^{[2]} + \varphi\, Q^{[-2]}\,,
\end{eqnarray}
as in the periodic case (\ref{BaxTQ}). The result for general values
of $k$ can alternatively be obtained from a generating function \cite{Nepomechie:2013ila}
\begin{eqnarray}
{\cal W}_{diag} \equiv (1 - {\cal D} B {\cal D})^{-1}\, (1 - {\cal D} A {\cal D})^{-1} =
\sum_{k=0}^{\infty} {\cal D}^{k}\, T_{k} \, {\cal D}^{k} \,,
\label{W1}
\end{eqnarray}
where
\begin{eqnarray}
A = \varphi\, \frac{Q^{[-2]}}{Q} \,, \qquad B = \bar{\varphi}\, \frac{Q^{[2]}}{Q} \,,
\label{AB}
\end{eqnarray}
and ${\cal D} =e^{-\frac{i}{2}\partial_{u}}$ implying that ${\cal D}
f = f^{-} {\cal D}$. In this way, we obtain
\begin{eqnarray}
T_{k}=\sum_{l=0}^{k}\prod_{j=0}^{k-l-1}B^{[k-1-2j]}\,
\prod_{i=0}^{l-1}A^{[2l-k-1-2i]}\,.
\label{Tkgendiag}
\end{eqnarray}
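A numerical remark (our own check, not part of the original derivation): the $T_{k}$ of (\ref{Tkgendiag}) satisfy the Hirota equation (\ref{Hirotaopenagain}) identically for \emph{generic} (non-Bethe) $Q$ and arbitrary test functions $\varphi,\bar\varphi$; the Bethe roots are needed only for the analyticity of $T_{1}$. A short script makes this explicit:

```python
# Check that the T_k of (Tkgendiag) satisfy the open-chain Hirota
# equation T_k^+ T_k^- - T_{k+1} T_{k-1} = T_{2,k} identically, for a
# generic polynomial Q and arbitrary test functions phi, phibar
# (these are NOT the actual open-chain functions of (phiopen)).
I = 1j

def phi(u):    return (u + I/2)**3          # arbitrary test functions
def phibar(u): return (u - I/2)**3 + 1
def Q(u):      return u**2 + 2*u + 5        # generic (non-Bethe) Q

def A(u): return phi(u)*Q(u - I)/Q(u)
def B(u): return phibar(u)*Q(u + I)/Q(u)

def T(k, u):   # T_k from the sum formula (Tkgendiag); T_0 = 1
    total = 0
    for l in range(k + 1):
        term = 1
        for j in range(k - l):
            term *= B(u + I*(k - 1 - 2*j)/2)
        for i in range(l):
            term *= A(u + I*(2*l - k - 1 - 2*i)/2)
        total += term
    return total

def T2k(k, u): # quantum determinant (qdet)
    p = 1
    for j in range(k):
        p *= phi(u + I*(k - 2*j)/2) * phibar(u + I*(2*j - k)/2)
    return p

u = 0.37 + 0.21j
for k in (1, 2, 3):
    lhs = T(k, u + I/2)*T(k, u - I/2) - T(k + 1, u)*T(k - 1, u)
    assert abs(lhs - T2k(k, u)) < 1e-9*(1 + abs(T2k(k, u)))
```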
\subsection{Inhomogeneous case}
For an open spin chain with {\em non-diagonal} boundary terms
$(\xi \ne 0)$, we propose
the following inhomogeneous auxiliary linear problem
\begin{eqnarray}
T_{k+1}\, Q^{[k]} - \bar\varphi^{[k]}\, T_{k}^{-}\, Q^{[k+2]} &=& X_{k}\,
Q^{[-k-2]}+ \sum_{l=0}^{k}\psi_{l,k}\, \Delta^{[2l-k]}\, T_{l}^{[l-k-1]}
\,, \label{QIopeninhom}\\
\varphi^{[-k]}\, T_{k-1}\, Q^{[-k-2]} - T_{k}^{-}\, Q^{[-k]} &=&
-Y_{k}\, Q^{[k]} - \sum_{l=0}^{k-1}\bar{\psi}_{l,k-1}^{-}\, \Delta^{[k-2l-2]}\, T_{l}^{[k-l-1]}
\,, \label{QIIopeninhom}
\end{eqnarray}
where $X_{k}$ and $Y_{k}$ are given by (\ref{XY}), and $\psi_{l,k}$
is given by
\begin{eqnarray}
\psi_{l,k} = \prod_{j=0}^{k-l-1}\varphi^{[k-2j]}\,, \qquad
\bar{\psi}_{l,k} = \prod_{j=0}^{k-l-1}\bar{\varphi}^{[2j-k]}\, .
\label{psi}
\end{eqnarray}
For $\Delta=0$, this system evidently reduces to the homogeneous
system (\ref{QIopenhom})-(\ref{QIIopenhom}).
Eq. (\ref{QIopeninhom}) can be used to solve for all $T_{k}$ in terms of $Q$.
The inhomogeneous T-Q equation (\ref{inhomTQ}) is obtained for
$k=0$. The result for general values
of $k$ can again be alternatively obtained from a generating function \cite{Nepomechie:2013ila}
\begin{eqnarray}
{\cal W} \equiv \left[1 - {\cal D} (A+B+C) {\cal D} + {\cal D} A {\cal
D}^{2} B {\cal D} \right]^{-1} =
\sum_{k=0}^{\infty} {\cal D}^{k}\, T_{k} \, {\cal D}^{k} \,,
\label{W2}
\end{eqnarray}
where $A$ and $B$ are again given by (\ref{AB}), and $C$ is given by
\begin{eqnarray}
C=\frac{\Delta}{Q} \,.
\label{C}
\end{eqnarray}
Note that this generating function reduces to ${\cal W}_{diag}$
(\ref{W1}) for $\Delta=0$.
\subsubsection{Compatibility conditions}\label{sec:comp}
We now proceed as in the homogeneous case to derive the compatibility
conditions for the auxiliary linear problem
(\ref{QIopeninhom})-(\ref{QIIopeninhom})
for the function $Q$. Multiplying (\ref{QIIopeninhom}) by $T_{k+1}$ gives
\begin{eqnarray}
T_{k+1}\, T_{k-1}\, \varphi^{[-k]}\, Q^{[-k-2]} - T_{k+1}\, T_{k}^{-}\,
Q^{[-k]} &=& - Y_{k}\, T_{k+1}\, Q^{[k]} \nonumber\\
&-& \sum_{l=0}^{k-1}\bar{\psi}_{l,k-1}^{-}\, \Delta^{[k-2l-2]}\, T_{k+1}\, T_{l}^{[k-l-1]}
\,.
\label{inhoma}
\end{eqnarray}
On the other hand, performing on (\ref{QIIopeninhom}) the shifts $k \mapsto k+1$ and
$u\mapsto u+\frac{i}{2}$, and then multiplying the result by $T_{k}^{-}$ gives
\begin{eqnarray}
T_{k}^{-}\, T_{k}^{+}\, \varphi^{[-k]}\, Q^{[-k-2]} - T_{k}^{-}\, T_{k+1}\, Q^{[-k]} =
-Y_{k}\, \bar\varphi^{[k]}\, T_{k}^{-}\, Q^{[k+2]}
- \sum_{l=0}^{k}\bar{\psi}_{l,k}\,\Delta^{[k-2l]}\, T_{k}^{-}\, T_{l}^{[k-l+1]}.
\label{inhomb}
\end{eqnarray}
Subtracting (\ref{inhoma}) from (\ref{inhomb}) yields
\begin{eqnarray}
\lefteqn{\left(T^{+}_{k}\, T^{-}_{k} - T_{k+1}\, T_{k-1} \right)\varphi^{[-k]}\, Q^{[-k-2]}
= Y_{k}\left(T_{k+1}\, Q^{[k]} - \bar\varphi^{[k]}\, T_{k}^{-}\,
Q^{[k+2]}\right)}\\
&&
+ \sum_{l=0}^{k-1}\bar{\psi}_{l,k-1}^{-}\, \Delta^{[k-2l-2]}\, T_{k+1}\, T_{l}^{[k-l-1]}
- \sum_{l=0}^{k}\bar{\psi}_{l,k}\, \Delta^{[k-2l]}\, T_{k}^{-}\, T_{l}^{[k-l+1]} \,.
\nonumber
\label{inhomc}
\end{eqnarray}
Using (\ref{QIopeninhom}), (\ref{identities}) and (\ref{psi}), we arrive at
\begin{eqnarray}
\lefteqn{\left(T^{+}_{k}\, T^{-}_{k} - T_{k+1}\, T_{k-1} - T_{2,k}\right)\varphi^{[-k]}\, Q^{[-k-2]}}\nonumber \\
&&= \sum_{l=0}^{k-1}\bar{\psi}_{l,k-1}^{-}\, \Delta^{[k-2l-2]}\, T_{k+1}\, T_{l}^{[k-l-1]}
- \sum_{l=0}^{k}\bar{\psi}_{l,k}\, \Delta^{[k-2l]}\, T_{k}^{-}\, T_{l}^{[k-l+1]}
+ \sum_{l=0}^{k}\psi_{l,k}\, Y_{k}\, \Delta^{[2l-k]}\,
T_{l}^{[l-k-1]} \nonumber\\
&&= \sum_{a=0}^{k-1}\Delta^{[-k+2a]}
\left(\prod_{j=0}^{a-1}\bar\varphi^{[-k+2j]}\right) H_{k,a}\,, \qquad k
= 1, 2, \ldots \,,
\label{inhomd}
\end{eqnarray}
where
\begin{eqnarray}
H_{k,a} = T_{k+1}\, T_{k-a-1}^{[a]}
- T_{k}^{-}\, T_{k-a}^{[a+1]}
+ T_{2,k-a}^{[a]}\, T_{a}^{[a-k-1]} \,.
\end{eqnarray}
The compatibility conditions (\ref{inhomd}) are satisfied for
nonzero $Q$ and $\Delta$ if
\begin{eqnarray}
H_{k,a} = 0 \,, \qquad a = 0\,, 1\,, \ldots \,, k-1\,,
\label{genHirota}
\end{eqnarray}
which are precisely the Hirota-like bilinear relations
(\ref{newids}). (Recall that Eq. (\ref{genHirota})
with $a=0$ coincides with the original open-chain Hirota equation (\ref{Hirotaopen}).)
\subsection{Solving the Hirota equation}\label{sec:solvingHirota}
It is easy to explicitly solve the open-chain Hirota equation
(\ref{Hirotaopenagain}) for $T_{k}$ in terms of $T_{1}$ for small values of
$k$, and to show that the resulting expressions can be conveniently recast in
terms of determinants
\begin{eqnarray}
T_{2} &=& \left| \begin{array}{cc}
T_{1}^{[1]} & \varphi^{[1]} \\
\bar\varphi^{[-1]} & T_{1}^{[-1]} \end{array} \right| \,, \nonumber\\
T_{3} &=& \left| \begin{array}{ccc}
T_{1}^{[2]} & \varphi^{[2]} & 0\\
\bar\varphi^{[0]} & T_{1}^{[0]} & \varphi^{[0]} \\
0 & \bar{\varphi}^{[-2]} & T_{1}^{[-2]}
\end{array} \right| \,, \nonumber\\
T_{4} &=& \left| \begin{array}{cccc}
T_{1}^{[3]} & \varphi^{[3]} & 0 & 0\\
\bar{\varphi}^{[1]} & T_{1}^{[1]} & \varphi^{[1]} & 0\\
0 & \bar{\varphi}^{[-1]} & T_{1}^{[-1]} & \varphi^{[-1]}\\
0 & 0 & \bar{\varphi}^{[-3]} & T_{1}^{[-3]}
\end{array} \right| \,.
\end{eqnarray}
This suggests a general determinant expression for $T_{k}$ in
terms of $T_{1}$ (see also \cite{Krichever:1996qd})
\begin{eqnarray}
T_{k} = \det (M^{(k)})\,,
\label{Tk1}
\end{eqnarray}
where $M^{(k)}$ is a $k \times k$ matrix whose elements are given by
\begin{eqnarray}
M^{(k)}_{ij} = T_{1}^{[k+1-2i]} \delta_{ij} + \bar\varphi^{[k+1-2i]}\delta_{i,j+1} +
\varphi^{[k+1-2i]}\delta_{i,j-1} \,, \qquad i\,, j = 1\,, \ldots\,, k
\,.
\label{Tk2}
\end{eqnarray}
We can now verify that (\ref{Tk1}) is the solution of the Hirota
equation using Jacobi's determinant identity \cite{Krichever:1996qd,
Hirota:2003}
\begin{eqnarray}
D[p_{1}, p_{2}| q_{1} q_{2}]\, D = D[p_{1}|q_{1}]\, D[p_{2}|q_{2}] -
D[p_{1}|q_{2}]\, D[p_{2}|q_{1}]\,,
\label{Jacobi}
\end{eqnarray}
where $D$ is the determinant of a square matrix, and $D[p_{1}, p_{2},
\ldots, p_{n}|q_{1}, q_{2}, \ldots, q_{n}]$ denotes the minor determinant obtained
from the same matrix by removing rows $p_{1}, p_{2}, \ldots, p_{n}$ and columns $q_{1}, q_{2},
\ldots, q_{n}$.
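Jacobi's identity is straightforward to check numerically. The following Python sketch (an editorial illustration, not part of the derivation) verifies (\ref{Jacobi}) on a random $5\times 5$ matrix, both for the corner case $p_1=q_1=1$, $p_2=q_2=n$ used below and for a generic position $p_1<p_2$, $q_1<q_2$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
D = np.linalg.det(A)

def minor(M, rows, cols):
    """D[rows|cols]: determinant after removing the given rows/columns (1-based)."""
    keep_r = [i for i in range(M.shape[0]) if i + 1 not in rows]
    keep_c = [j for j in range(M.shape[1]) if j + 1 not in cols]
    return np.linalg.det(M[np.ix_(keep_r, keep_c)])

def jacobi_gap(p1, p2, q1, q2):
    """Difference between the two sides of Jacobi's identity; should vanish."""
    lhs = minor(A, (p1, p2), (q1, q2)) * D
    rhs = minor(A, (p1,), (q1,)) * minor(A, (p2,), (q2,)) \
        - minor(A, (p1,), (q2,)) * minor(A, (p2,), (q1,))
    return lhs - rhs

gap_corner  = jacobi_gap(1, 5, 1, 5)   # the case used below for the Hirota equation
gap_generic = jacobi_gap(2, 4, 1, 3)   # holds for any p1 < p2, q1 < q2
assert abs(gap_corner) < 1e-8 and abs(gap_generic) < 1e-8
```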
Indeed, let us observe that the matrix $M^{(k+1)}$, obtained from
(\ref{Tk2}), contains $M^{(k-1)}$ as a submatrix
\begin{eqnarray}
M^{(k+1)} = \left(\begin{array}{c:ccccc:c}
T_{1}^{[k]} & \varphi^{[k]} & 0 & \ldots & 0 & 0 & 0 \\
\hdashline
\bar{\varphi}^{[k-2]} & &&&&& 0 \\
\vdots &&& M^{(k-1)} &&& \vdots \\
0 &&&&&& \varphi^{[-k+2]} \\
\hdashline
0 & 0 & 0 & \ldots & 0 & \bar{\varphi}^{[-k]} & T_{1}^{[-k]}
\end{array} \right)\,.
\end{eqnarray}
Applying the Jacobi identity (\ref{Jacobi}) to the above $(k+1)
\times (k+1)$ matrix with
$p_{1}=q_{1}=1$ and $p_{2}=q_{2}=k+1$, and then using
(\ref{Tk1}), we recover the Hirota equation
(\ref{Hirota}). (Note that the matrices corresponding to $D[1|k+1]$
and $D[k+1|1]$ are either upper or lower triangular, with $\varphi$'s
along the diagonal.)
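The mechanics of this Jacobi argument can also be checked numerically. The sketch below is illustrative only: it assumes the standard shift convention $f^{[m]}(u)=f(u+\tfrac{im}{2})$ (so that $f^{\pm}=f(u\pm\tfrac{i}{2})$) and uses arbitrary sample functions for $T_1$, $\varphi$, $\bar\varphi$. It builds $M^{(k)}$ from (\ref{Tk2}) and confirms that $T_{k+1}\,T_{k-1}$ equals $T_k^{+}T_k^{-}$ minus the product of the two triangular minors $D[1|k+1]$ and $D[k+1|1]$:

```python
import numpy as np

# Arbitrary entire functions standing in for T_1, phi, phibar.
T1   = lambda u: u**3 - 2*u + 1
phi  = lambda u: u**2 + 3
phib = lambda u: u - 2j

def M(k, u):
    """The k x k matrix M^{(k)} of (Tk2); assumes f^{[m]}(u) = f(u + 1j*m/2)."""
    A = np.zeros((k, k), dtype=complex)
    for i in range(1, k + 1):
        s = u + 0.5j * (k + 1 - 2 * i)
        A[i - 1, i - 1] = T1(s)
        if i >= 2:
            A[i - 1, i - 2] = phib(s)   # \bar{varphi} on the subdiagonal
        if i <= k - 1:
            A[i - 1, i] = phi(s)        # varphi on the superdiagonal
    return A

def T(k, u):
    return 1.0 if k == 0 else np.linalg.det(M(k, u))

def minor(A, r, c):
    """Unsigned minor: delete row r and column c (0-based)."""
    return np.linalg.det(np.delete(np.delete(A, r, axis=0), c, axis=1))

k, u = 3, 0.7 + 0.4j
A = M(k + 1, u)
# Jacobi with p1=q1=1, p2=q2=k+1 applied to M^{(k+1)}:
lhs = T(k + 1, u) * T(k - 1, u)
rhs = T(k, u + 0.5j) * T(k, u - 0.5j) - minor(A, 0, k) * minor(A, k, 0)
assert np.isclose(lhs, rhs)
```

The two minors `minor(A, 0, k)` and `minor(A, k, 0)` are the triangular determinants $D[1|k+1]$ and $D[k+1|1]$ mentioned in the text.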
We remark that Eq. (\ref{Tk1}) provides an expression for $T_{k}$ in
terms of $Q$ upon setting $T_{1} = A + B + C$ (see (\ref{AB}),
(\ref{C}))
in (\ref{Tk2}). In particular, for the diagonal case $\Delta=0$, the
result is equivalent to (\ref{Tkgendiag}).
\subsection{Solving the Hirota-like
equations}\label{sec:solvingHirotalike}
We now demonstrate that the solution (\ref{Tk1}) of the Hirota equation is
also a solution of the Hirota-like equations (\ref{newids}). The main
idea is to use Pl\"ucker relations \cite{Krichever:1996qd,
Hirota:2003}, which are generalizations of Jacobi's identity.
In this way, we see that the Hirota equation stems from the Jacobi identity,
while the generalizations of the Hirota equation stem from the
extension of the Jacobi identity to the Pl\"ucker relations.
Let $X$ be a {\em rectangular} matrix with $n+1$ rows and $r+1$ columns $(n \ge r)$,
\begin{eqnarray}
X = \left( \begin{array}{cccc}
X_{0,0} & X_{0,1} & \ldots & X_{0,r}\\
X_{1,0} & X_{1,1} & \ldots & X_{1,r}\\
\vdots & \vdots & \cdots & \vdots \\
X_{n,0} & X_{n,1} & \ldots & X_{n,r}
\end{array}\right)
\,.
\label{Xmat}
\end{eqnarray}
Following \cite{Krichever:1996qd}, we define $(i_{0}\,, i_{1}\,,
\ldots\,, i_{r})$ to be the determinant of the square matrix formed
by the rows with labels $i_{0}\,, i_{1}\,, \ldots\,, i_{r}$,
\begin{eqnarray}
\left| \begin{array}{cccc}
X_{i_{0},0} & X_{i_{0},1} & \ldots & X_{i_{0},r}\\
X_{i_{1},0} & X_{i_{1},1} & \ldots & X_{i_{1},r}\\
\vdots & \vdots & \cdots & \vdots \\
X_{i_{r},0} & X_{i_{r},1} & \ldots & X_{i_{r},r}\\
\end{array} \right| \equiv
(i_{0}\,, i_{1}\,, \ldots\,, i_{r})\,.
\end{eqnarray}
It is understood that $(i_{0}\,, i_{1}\,, \ldots\,, i_{r})$ is
antisymmetric in all the indices, and therefore vanishes if any two
indices coincide. The Pl\"ucker relations are then given by \cite{Krichever:1996qd}
\begin{eqnarray}
(i_{0}\,, i_{1}\,, \ldots\,, i_{r})\, (j_{0}\,, j_{1}\,, \ldots\,,
j_{r}) = \sum_{p=0}^{r} (j_{p}\,, i_{1}\,, \ldots\,, i_{r})\,
(j_{0}\,,\ldots\,, j_{p-1}\,, i_{0}\,, j_{p+1}\,, \ldots\,, j_{r})
\label{Plucker}
\end{eqnarray}
for all $i_{p}, j_{p} \in \{0\,, 1\,, \ldots\,, n \}$ with $p=0, 1, \ldots, r$.
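The Pl\"ucker relations can likewise be verified numerically for a concrete rectangular matrix. The sketch below is an illustration with random data and one arbitrary choice of index sets; the antisymmetry of $(i_0,\ldots,i_r)$ is handled automatically by the determinant (minors with a repeated row vanish):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 4, 2                         # X is (n+1) x (r+1) = 5 x 3
X = rng.standard_normal((n + 1, r + 1))

def m(rows):
    """(i_0, ..., i_r): determinant of the listed rows of X."""
    return np.linalg.det(X[list(rows), :])

i = (0, 1, 2)                       # one arbitrary choice of index sets
j = (2, 3, 4)
lhs = m(i) * m(j)
# Right-hand side of the Plucker relation: exchange i_0 with each j_p in turn.
rhs = sum(m((j[p],) + i[1:]) * m(j[:p] + (i[0],) + j[p + 1:])
          for p in range(r + 1))
assert np.isclose(lhs, rhs)
```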
It is convenient to introduce the notation $[i, p]$ as follows
\begin{eqnarray}
(X_{i,0}\,, X_{i,1}\,, \ldots \,, X_{i,r})\Big\vert_{[i, p]}
= (X_{i,0}\,, X_{i,1}\,, \ldots \,, X_{i,r})\Big\vert_{X_{i,j} = \delta_{j,p}}
=(\stackrel{\stackrel{0}{\downarrow}}{0}\,, \ldots 0\,, \stackrel{\stackrel{p}{\downarrow}}{1} \,,
0\,, \ldots \,, \stackrel{\stackrel{r}{\downarrow}}{0})
\,.
\end{eqnarray}
In other words, $[i, p]$ means
that the row of matrix $X$ labeled $i$ is given by $(0\,, \ldots 0\,, 1 \,, 0\,, \ldots \,, 0)$,
where the 1 appears in the column labeled $p$. Similarly, for a
set of $m$ rows of the matrix $X$, we define
\begin{eqnarray}
\left( \begin{array}{cccc}
X_{i_{1},0} & X_{i_{1},1} & \ldots & X_{i_{1},r}\\
X_{i_{2},0} & X_{i_{2},1} & \ldots & X_{i_{2},r}\\
\vdots & \vdots & \cdots & \vdots \\
X_{i_{m},0} & X_{i_{m},1} & \ldots & X_{i_{m},r}\\
\end{array}\right)\left|_{
\left[ \begin{array}{c}
i_{1}, p_{1}\\
i_{2}, p_{2}\\
\vdots\\
i_{m}, p_{m}
\end{array}\right]} \right.
= \left( \begin{array}{cccc}
X_{i_{1},0} & X_{i_{1},1} & \ldots & X_{i_{1},r}\\
X_{i_{2},0} & X_{i_{2},1} & \ldots & X_{i_{2},r}\\
\vdots & \vdots & \cdots & \vdots \\
X_{i_{m},0} & X_{i_{m},1} & \ldots & X_{i_{m},r}\\
\end{array}\right)\left|_{\begin{array}{c}
X_{i_{1},j}=\delta_{j,p_{1}}\\
X_{i_{2},j}= \delta_{j,p_{2}}\\
\vdots\\
X_{i_{m},j}= \delta_{j,p_{m}}
\end{array}} \right. \,.
\label{subs}
\end{eqnarray}
We are now ready to show that the solution of the Hirota equation
(\ref{Hirotaopen}) is also a solution to the Hirota-like equations (\ref{newids}).
We set
\begin{eqnarray}
r=k\,, \qquad n = r+a+2\,.
\end{eqnarray}
We then choose the matrix $X$ (\ref{Xmat}) such that its first
$r+1$ rows are given by the matrix $M^{(r+1)}$ in (\ref{Tk2})
\begin{eqnarray}
\left( \begin{array}{cccc}
X_{0,0} & X_{0,1} & \ldots & X_{0,r}\\
X_{1,0} & X_{1,1} & \ldots & X_{1,r}\\
\vdots & \vdots & \cdots & \vdots \\
X_{r,0} & X_{r,1} & \ldots & X_{r,r}
\end{array}\right) = M^{(r+1)}
\,,
\label{Xmat2}
\end{eqnarray}
and we choose the remaining rows of $X$ as follows
\begin{eqnarray}
\left[ \begin{array}{cc}
r+1, & 0\\
r+2, & r-a\\
r+3, & r-a+1\\
\vdots & \vdots\\
r+a+2,& r
\end{array}\right]\,,
\label{Xmat3}
\end{eqnarray}
where we have used the notation (\ref{subs}). The Pl\"ucker relations
(\ref{Plucker}) for this matrix with the following choice of indices \footnote{In \cite{Krichever:1996qd} it is
assumed that $i_{p}=j_{p}$ for $p\ne 0,1$ in order to reduce the
number of terms to 3. We do not make this assumption here, but the
number of terms nevertheless reduces to 3 by virtue of our choice
(\ref{Xmat3}).}
\begin{eqnarray}
i_{l} = j_{l} = l \,, \qquad l = 1\,, 2\,, \ldots\,, r-a-1\,, \nonumber \\
j_{0} = 0\,, \quad j_{l} = l\,, \qquad l= r-a\,, r-a+1\,, \ldots\,, r
\,, \nonumber \\
i_{0} = r+1\,, \quad i_{l}=l+a+2\,, \qquad l= r-a\,, r-a+1\,,
\ldots\,, r \,,
\label{indices}
\end{eqnarray}
can be shown to coincide (using the identification (\ref{Tk1}))
with the Hirota-like equations
(\ref{newids}).
We conclude that the solution (\ref{Tk1}), (\ref{Tk2}) of the
original Hirota equation (\ref{Hirotaopen}) also satisfies the
Hirota-like equations (\ref{newids}), and therefore the latter system
of equations is consistent with it. We remark that we
have not succeeded in finding a simple transformation that maps the
Hirota equation (\ref{Hirotaopen}) to the Hirota-like equations (\ref{newids}).
\section{Discussion}\label{sec:discuss}
We have shown that the open-chain $sl(2)$ Hirota equation (\ref{Hirotaopen}) admits a Lax
representation with inhomogeneous terms
(\ref{inhomQI})-(\ref{inhomQII}), thereby demonstrating that the
off-diagonal Bethe ansatz approach \cite{Cao:2013qxa, Wang2015} can be accommodated
within the conventional framework of quantum integrability.
\footnote{For the special case of diagonal boundary terms
($\xi=0\,, \Delta=0$), this Lax representation reduces to (\ref{QIopenhom}),
(\ref{QIIopenhom}), which -- to our knowledge -- is also new.}
In so doing, we have found a family of Hirota-like equations (\ref{newids})
which are consistent with the original Hirota equation.
It is an interesting question whether these Hirota-like equations can
be obtained directly by fusion.
We expect that (\ref{inhomQI})-(\ref{inhomQII}) is the most general Lax
representation of the Hirota equation with a single function
$Q(u)$, and therefore it
should describe the most general rank-one integrable quantum system.
It may be interesting to work out the generalization to higher-rank
algebras and superalgebras. It may also be interesting to investigate
the analogue of such inhomogeneous terms in conformal field theories
and classical integrable systems with boundaries.
\section*{Acknowledgments}
We dedicate this paper to the memory of Petr P. Kulish, who made
seminal contributions to the field of quantum integrability, and who
provided to one of us (RN) a key insight that made possible the
completion of his first foray \cite{Mezincescu:1990fc} into this
field. DF thanks the partial support of the grants: GAST (INFN),
UniTo-SanPaolo Nr TO-Call3-2012-0088, the ESF Network HoloGrav
(09-RNP-092 (PESC)), the MPNS--COST Action MP1210 and the EU Network
GATIS (no 317089). RN thanks the Bologna INFN theory group for its
warm hospitality, and Volodya Kazakov for a helpful discussion. The
work of RN was supported in part by the National Science Foundation
under Grant PHY-1212337, and by a Cooper fellowship.
\section{Introduction} \label{introduction}
The effective monitoring of degenerative patient conditions represents a significant challenge in many clinical decision-making problems and has given rise to the development of numerous mathematical and computational models \cite{brownell1999dopamine,gratwicke2017early,llano2017multivariate,chen2014credit}. Developing a knowledge-driven contemporaneous health index (CHI) that can precisely reflect the underlying patient condition across the course of the condition's progression holds unique value: it facilitates a range of clinical decision-making opportunities \cite{spring2013healthy,rivera2012optimized,deshpande2014control}, enhances the continuity of care, and eases communication between clinicians, healthcare providers, and patients. It will also be a crucial enabling factor for many envisioned AI systems that implement adaptive interventions for better healthcare management, since such systems require a representation of the dynamic evolution of the patient's condition.
Thus, to ensure continuity of care, we should be more explicit about our level of confidence in model outputs. Ideally, decision-makers should be provided with recommendations that are robust in the face of substantial uncertainty about future outcomes. However, computational models are an abstraction of clinical observations; as such, they are usually built on analytically tractable assumptions that may simplify the real-world problem. Moreover, most of these models are estimated from imperfect data, subjecting them to all kinds of statistical errors. An approach that yields only a single prediction does not adequately reflect the uncertainty in either the empirical data or the estimated parameters \cite{allmaras2013estimating}. As a result, the outcomes from such mathematical models may not be consistent with clinical observations. Uncertainty is an unavoidable feature that affects prediction capabilities in real-world domains such as healthcare \cite{hoffman1994propagation, meghdadi2017brain}, manufacturing \cite{montomoli2015uncertainty, nannapaneni2014uncertainty}, and signal processing \cite{reynders2016uncertainty, nobari2015uncertainty}. A certain amount of uncertainty is always involved in decision-making systems when the experimental data are insufficient to calibrate the model. In such cases, there is always a chance that the model parameters cannot be determined unambiguously, even with sophisticated mathematical models. In clinical predictions, it is necessary to deal with such uncertainty in an effective manner, because if the model parameters are not well constrained, the resulting predictions may carry an unacceptable degree of posterior uncertainty. What is more, while most existing models in patient monitoring generate a single prediction without indicating a confidence level, uncertainty quantification can tell us for which samples we may not be ready to act on the model's output.
Therefore, to develop a reliable model for a clinically relevant prediction, uncertainty quantification is a much-needed capacity \cite{collis2017bayesian,biglino2017computational,bozzi2017uncertainty}.
A number of patient monitoring index approaches have been developed in the literature. A standard formulation of these health indices is to use weighted sum models (e.g., regression models) that combine multiple static clinical measurements to predict the disease condition. For example, many risk score models predict AD by using multi-modality data integration methods \cite{liu2013data,yuan2012multi,zhang2011multimodal} to combine neuroimaging data \cite{weiner2013alzheimer,weiner20152014}, genomics data \cite{biffi2010genetic}, clinical data \cite{reitz2010summary}, etc. A few approaches have formulated the decline of AD-related scores over time as a multi-task learning model \cite{zhou2013modeling,zhou2012modeling}. These existing efforts have been limited to combining static data rather than longitudinal data. Besides, these data are usually sampled at irregular time points, which adds another layer of complexity to the modeling efforts. The objective of our problem is fundamentally different from the existing risk score models: we focus on developing a contemporaneous health index (CHI) that can fuse irregular multivariate longitudinal time series data to quantify the severity of degenerative disease conditions while respecting the monotonic degradation process of the disease condition. For example, in our previous work \cite{samareh2018dl}, to address patient heterogeneity, we developed a dictionary learning based contemporaneous health index for degenerative disease monitoring, called DL-CHI, that leveraged the knowledge of the monotonic disease progression process by integrating CHI with dictionary learning. The basic idea of DL-CHI was to learn individual models via the CHI formulation, and then reconstruct the model parameters of each patient's model through supervised dictionary learning.
However, both the CHI and DL-CHI frameworks generate only a single prediction value for a sample and ignore the sampling uncertainty (label information in healthcare is often obtained by subjective methods and is therefore subject to uncertainty). Therefore, if we could enable CHI to conduct uncertainty quantification and incorporate the uncertainty in labels into its modeling, we could widen its applicability in real-world contexts. The main objective of this paper is to develop a framework that builds on the contemporaneous health index (CHI) developed in \cite{huang2017chi} and further equips CHI with uncertainty quantification capacity.
In this paper, we develop the uncertainty quantification based contemporaneous longitudinal index, named UQ-CHI, with a particular focus on continuous patient monitoring of degenerative conditions. Our method combines convex optimization and Bayesian learning using the maximum entropy learning (MEL) framework, integrating uncertainty on labels as well. The basic idea of MEL is to identify the distribution of the parameters of a statistical model that bears the maximum uncertainty, a principle that is conservative and robust \cite{mackay2003information,izenman2008modern,phillips2006maximum}. It has been investigated in a few machine learning models \cite{jaakkola2000maximum,sun2013multi,chao2019semi,zhu2018semi} as well. For example, in \cite{jaakkola2000maximum}, MEL was used to learn a distribution of the parameters in the support vector machine model rather than a single vector of the parameters. This distribution of the parameters helps us evaluate the uncertainty of the learned support vector machine model, which translates into uncertainty of the predictions.
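As a minimal illustration of the maximum entropy principle behind MEL (the classic dice example, not a construct from this paper): among all distributions on $\{1,\ldots,6\}$ with a prescribed mean, the entropy-maximizing one is an exponential-family (Gibbs) distribution, and its parameter can be found by a one-dimensional search. A Python sketch:

```python
import math

# Max-entropy distribution on {1,...,6} subject to E[X] = 4.5.
# The maximizer has the Gibbs form p_k proportional to exp(lam * k);
# solve for lam by bisection on the (monotone) mean function.
xs = list(range(1, 7))
target = 4.5

def mean(lam):
    w = [math.exp(lam * k) for k in xs]
    Z = sum(w)
    return sum(k * wk for k, wk in zip(xs, w)) / Z

lo_l, hi_l = -5.0, 5.0
for _ in range(100):
    mid = 0.5 * (lo_l + hi_l)
    if mean(mid) < target:
        lo_l = mid
    else:
        hi_l = mid
lam = 0.5 * (lo_l + hi_l)
w = [math.exp(lam * k) for k in xs]
Z = sum(w)
p = [wk / Z for wk in w]          # the maximum entropy distribution
```

The same principle, with KL divergence replacing entropy when a non-uniform prior is available, underlies the MED formulation reviewed below in spirit.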
To adapt the MEL formulation and develop UQ-CHI, a few challenges should be addressed. The objective function of MEL, as its distinct feature, bears the full spirit of maximum entropy: no matter what model we are studying, the learning objective of MEL is to learn the distribution of the model's parameters that has the maximum entropy. If a prior distribution of the parameters is available, the Kullback--Leibler divergence can be used to extend this idea. In our case, the design of the prior distribution should be studied to account for label uncertainties. Besides the objective function, MEL encodes information from the data into constraints; e.g., if the model is for classification, then for each sample there is a constraint that the expectation of the prediction over the distribution of the parameters should match the observed outcome on this sample. In our case, we will derive the constraints from the CHI model and integrate them with the MEL framework. In detail, we consider two steps in our method, i.e., training and prediction. In the training step, we consider a prior uncertainty over the labels to handle uncertain or incomplete labels, and then derive a solution to the optimization problem by using a specific prior formulation. In the second step, we develop a prediction method, with a rejection option, for new samples with the obtained uncertainty quantification capacity. A distinct feature of our model is that it provides a closed-form solution for predicting the label of a new example. The whole pipeline of the UQ-CHI model is shown in Figure \ref{figure_1}.
\begin{figure*}[!ht]
\includegraphics[scale=0.1]{figure_1.png}
\centering
\caption{A conceptual overview of the UQ-CHI method}
\label{figure_1}
\end{figure*}
The remainder of this paper is organized as follows: in Section \ref{related works}, we will review related literature in modeling the contemporaneous health index for degenerative conditions and the MEL framework. In Section \ref{proposed work}, the UQ-CHI framework will be presented. In Section \ref{numerical studies}, we will implement and evaluate the UQ-CHI using a simulated dataset. We then continue the numerical analysis with a real-world application on an Alzheimer's disease dataset in Section \ref{ADNI}. We will conclude the study in Section \ref{concl}. Note that, in this paper, we use lowercase letters, e.g., $x$, to represent scalars, boldface lowercase letters, e.g., $\mathbf{v}$, to represent vectors, and boldface uppercase letters, e.g., $\mathbf{W}$, to represent matrices.
\section{Related works}\label{related works}
In this section, we will first briefly present the basic formulation of the contemporaneous health index (CHI) model and its extension, the dictionary learning based contemporaneous health index (DL-CHI), and then review the maximum entropy learning (MEL) formulation on which the proposed UQ-CHI model builds.
\subsection{The CHI model}\label{CHI}
The CHI model, developed in \cite{huang2017chi}, exploits the monotonic pattern of disease over the course of progression to improve the fusion of multivariate clinical measurements taken at irregular time points. The CHI framework was inspired by the common characteristic of degenerative conditions (e.g., AD) that they often cause irreversible degradation. For example, in AD, a number of biomarkers have been developed to measure the degradation of the neural system, including neuroimaging modalities such as PET and MRI scans \cite{mueller2005alzheimer,petrella2003neuroimaging}. MRI scans show a decline in brain volume over time along with the disease progression, and the same phenomenon can be observed on PET scans as a persistent shrinkage of metabolic activity. Such monotonic patterns indicate that once the disease progression has started, it tends to deteriorate increasingly over time. The task of CHI is to translate multivariate, longitudinal, and irregular clinical measurements into a contemporaneous health index $h_{n,t}$ that captures how the patient's condition changes over the course of progression. Note that clinical measurements for different patients can span different lengths of time and be taken at different time locations. Targeting degenerative conditions, CHI is designed to be monotonic, i.e., $h_{n,t_{1}}\geq h_{n,t_{2}}$ if $t_{1}\geq t_{2}$, with a higher index representing a more severe condition. CHI is a latent structure; hence, clinical variables associated with it should be measured over time to provide data for learning the index.
Consider a training set of $N$ patients, where $ \mathbf{x}_{n, t} = \left[x_{n,1,t},\ldots, x_{n,d,t}\right]^ T\in \mathbb{R} ^{d}$ collects the measurements of the $n$th subject at time $t$. Each measurement $x_{n,i,t}$ is the value of the $i$th variable for the $n$th subject at a given time $t$, where $t\in \left\{ 1,\ldots,T_{n}\right\}$ is the time index. Our goal is, given a training set, to convert each measurement vector $ \mathbf{x}_{n,t}$ into a health index $h_{n,t}$, which requires a mathematical model $h_{n,t} = f( \mathbf{x}_{n,t})$. For simplicity, a linear form of the hypothesis function $h_{n,t}$ was studied in \cite{huang2017chi}, i.e., $h_{n,t} = \mathbf{x}_{n,t}^{\top}\mathbf{w}$, where $\mathbf{w} \in \mathbb{R}^{d}$ is a vector of weight coefficients that combines the $d$ variables. The total numbers of positive and negative samples are denoted by $N^+$ and $N^-$, respectively, i.e., $N^+:=|\{n|y_n=1\}|$ and $N^-:=|\{n|y_n=-1\}|$. The formulation of the CHI learning framework is shown below:
\begin{subequations} \label{eq:original}
\begin{align}
\min_{\mathbf{w},b}\quad & {\frac{1}{2}} \|\mathbf{w}\|^2 + \label{eq:original_a}\\
&\beta\sum_{n\in \{1,\ldots,N\}} \max\bigg(0, 1-y_n( \mathbf{x}_{n, T_n}^\top \mathbf{w}+b)\bigg) +\label{eq:original_b}\\
&\alpha\sum_{\substack{n\in \{1,\ldots,N\} \\ t\in \{1,\ldots, T_n-1\}}} \max\bigg(0, 1-\mathbf{z}_{n, t}^\top \mathbf{w}\bigg)
+ \label{eq:original_c}\\
&{\frac{\lambda}{2}} \Bigg({\frac{1}{N^+}} \sum_{n\in \{n|y_n=1\}} \bigg(( \mathbf{x}_{n, T_{n}}-\bar{ \mathbf{x}}^+_{T_{n}})^T \mathbf{w}\bigg)^2 \Bigg)+\label{eq:original_d}\\
&{\frac{\lambda}{2}} \Bigg({\frac{1}{N^-}} \sum_{n\in \{n|y_n=-1\}} \bigg(( \mathbf{x}_{n, T_{n}}-\bar{ \mathbf{x}}^-_{T_{n}})^T \mathbf{w}\bigg)^2 \Bigg) \label{eq:original_e} +\\
&\gamma \|\mathbf{w}\|_1.\label{eq:original_f}
\end{align}
\end{subequations}
Items in \eqref{eq:original} can be explained as follows:
\begin{itemize}
\item The first term \eqref{eq:original_a} and the second term \eqref{eq:original_b} are derived from a general formulation of the support vector machine (SVM). These two terms are used to enhance the discriminatory power of CHI by utilizing the label information. Here, $y_n \in \{1, -1\}$ is the label of the $n$th sample, indicating whether the $n$th subject has the disease or not.
\item To accommodate the monotonic pattern of disease progression and enforce the monotonicity of the learned health index, i.e., $h_{n,t_{1}}\geq h_{n,t_{2}}$ if $t_{1}\geq t_{2}$, the term \eqref{eq:original_c} is introduced. Here, $\mathbf{z}_{n,t}$ is the difference of two successive data vectors, $\mathbf{z}_{n,t}:= \mathbf{x}_{n,t+1}- \mathbf{x}_{n,t}$.
\item To encourage the homogeneity of CHI within the group that has the same health status, the terms \eqref{eq:original_d} and \eqref{eq:original_e} are introduced. Here, $\mathbf{\bar{x}}^+_{T_n}$ and $\mathbf{\bar{x}}^-_{T_n}$ represent the centers of the data vectors at time $T_n$ for all positive and negative samples, respectively, that is,
\begin{align*}
\bar{ \mathbf{x}}^+_{T_n}:=&{\frac{1}{N^+}}\sum_{n\in \{n|y_n=1\}} \mathbf{x}_{n, T_n}
\\
\bar{ \mathbf{x}}^-_{T_n}:=&{\frac{1}{N^-}}\sum_{n\in \{n|y_n=-1\}} \mathbf{x}_{n, T_n}.
\end{align*}
\item To encourage sparsity of the features, an $L_1$-norm penalty is used, as shown in the last term \eqref{eq:original_f}.
\end{itemize}
The CHI formulation can be solved by the block coordinate descent algorithm illustrated in \cite{huang2017chi}. Note that the CHI formulation generalizes many existing models, such as SVM, sparse SVM, and LASSO.
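To make the structure of \eqref{eq:original} concrete, the following Python sketch evaluates the CHI objective on synthetic trajectories. It is an illustration only: the hyperparameter values are arbitrary, and the block coordinate descent solver of \cite{huang2017chi} is not reproduced here.

```python
import numpy as np

def chi_objective(w, b, X_list, y, alpha=1.0, beta=1.0, lam=1.0, gamma=0.1):
    """Evaluate the CHI objective; X_list[n] is a T_n x d trajectory."""
    hinge = lambda v: np.maximum(0.0, v)
    last = np.array([X[-1] for X in X_list])          # x_{n, T_n}
    obj = 0.5 * w @ w                                  # ridge term
    obj += beta * hinge(1.0 - y * (last @ w + b)).sum()  # SVM-style label fit
    for X in X_list:                                   # monotonicity term
        Z = np.diff(X, axis=0)                         # z_{n,t} = x_{n,t+1} - x_{n,t}
        obj += alpha * hinge(1.0 - Z @ w).sum()
    for s in (+1, -1):                                 # within-group homogeneity
        G = last[y == s]
        if len(G):
            obj += 0.5 * lam * np.mean(((G - G.mean(axis=0)) @ w) ** 2)
    obj += gamma * np.abs(w).sum()                     # L1 sparsity penalty
    return obj

rng = np.random.default_rng(2)
X_list = [rng.standard_normal((T, 3)) for T in (4, 5, 3, 6)]
y = np.array([1, -1, 1, -1])
val = chi_objective(rng.standard_normal(3), 0.0, X_list, y)
# With w = 0, b = 0 only the hinge terms survive: beta*N + alpha*sum(T_n - 1).
zero_val = chi_objective(np.zeros(3), 0.0, X_list, y)
```

With the trajectory lengths above, `zero_val` equals $\beta N + \alpha\sum_n(T_n-1) = 4 + 14 = 18$, which is a quick sanity check on the hinge terms.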
\subsection{The DL-CHI model}\label{DL-CHI}
The CHI formulation is designed to learn a model for the average of a population, and thus ignores patient heterogeneity. Patients who suffer from AD have very heterogeneous progression patterns \cite{cummings2000cognitive,folstein1989heterogeneity,friedland1988alzheimer}. Building a personalized model on an individual basis could account for this heterogeneity; however, such models require a significant amount of labeled training samples, which is not feasible in such clinical settings. Towards this goal, the DL-CHI approach was developed in \cite{samareh2018dl} by integrating CHI with dictionary learning \cite{olshausen1996emergence,cummings2000cognitive}. Dictionary learning algorithms reconstruct the input signals as approximations via a sparse linear combination of a few dictionary elements or basis vectors \cite{wright2009robust} (each column of the dictionary represents a basis vector). Dictionary learning algorithms can reveal the hidden structures in the data (in a similar spirit to principal component analysis) by spanning the space of personalized models and capturing patient heterogeneity. They also play a role in regularizing the model learning, in the sense that each dictionary basis vector can be viewed as a numerical representation of patient heterogeneity; thus, dictionary learning can improve the classification performance. Translating this wisdom into DL-CHI, the basic idea is first to learn individual models through the CHI formulation, and then reconstruct the model parameters of the individually learned models via supervised dictionary learning. As such, each model is represented as a sparse linear combination of the basis vectors. Numerous experiments on both simulated and real-world data have shown the effectiveness of DL-CHI in creating personalized CHI models.
Despite accounting for patient heterogeneity, DL-CHI still ignores the sampling uncertainty, which limits its applicability in real-world applications. This motivates us to enable CHI to conduct uncertainty quantification.
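The reconstruction step at the heart of DL-CHI can be sketched with a generic sparse coder. The code below is an illustration only, not the supervised dictionary learning procedure of \cite{samareh2018dl}: the dictionary here is random, and ISTA is used as a standard stand-in solver for the sparse coding subproblem. It represents a patient-specific parameter vector as a sparse combination of dictionary atoms:

```python
import numpy as np

def sparse_code(w, D, rho=0.01, n_iter=2000):
    """ISTA for min_a 0.5*||w - D a||^2 + rho*||a||_1 (a generic sparse coder)."""
    L = np.linalg.norm(D, 2) ** 2                  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - w) / L              # gradient step on the smooth part
        a = np.sign(z) * np.maximum(np.abs(z) - rho / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(3)
D = rng.standard_normal((10, 8))                   # 8 atoms; each column is a basis vector
a_true = np.array([1.5, 0, 0, -2.0, 0, 0, 0, 0])   # only two atoms active
w_patient = D @ a_true                             # a patient-specific parameter vector
a_hat = sparse_code(w_patient, D)
w_rebuilt = D @ a_hat                              # model rebuilt from few atoms
rel_err = np.linalg.norm(w_rebuilt - w_patient) / np.linalg.norm(w_patient)
```

In DL-CHI the dictionary itself is learned (with supervision) rather than fixed, and the sparse codes regularize the per-patient models.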
\subsection{The MEL formulation} \label{MED}
As mentioned in Section \ref{introduction}, the MEL formulation has a distinct objective function that aims to learn the distribution of the parameters of a model that encodes maximum uncertainty (as evaluated by the entropy concept). It also has constraints that encode information from the data; e.g., if the model is for classification, then for each sample there is a constraint that the expectation of the prediction over the distribution of the parameters should match the observed outcome on this sample. To illustrate further details, one typical application of MEL is the maximum entropy discrimination (MED) method, which focuses on the application of MEL to classification models.
Let us consider a binary classification problem, where the response variable $y$ takes values from $\left\{ +1,-1\right\}$. Let $ \mathbf{x}_n$ be an input feature vector and $\mathcal{D}\left( \mathbf{x}_{n}|\mathbf{w} \right)$ be a discriminant function parameterized by $\mathbf{w}$, e.g., $\mathcal{D}( \mathbf{x}_n|\mathbf{w}) = \mathbf{w}^T \mathbf{x}_n $. The training set is defined by $D=\left\{ \mathbf{x}_{n},y_{n}\right\} ^{N}_{n=1}$, and the hinge function is defined as $h(x) = \max \left( 0,-x\right)$, which penalizes violations of the margin constraints. The classification margin is defined as $y_n\mathcal{D}( \mathbf{x}_n|\mathbf{w})$, and it is large and positive when the label $y_n$ agrees with the prediction. Traditional learning machines such as max-margin methods learn the optimal parameter setting $(\mathbf{w},\mathbf{\gamma})$ by minimizing the empirical loss plus a regularization penalty, as shown below:
\begin{equation}\label{original theory}
\begin{aligned}
&\min_{(\mathbf{w},\mathbf{\gamma}_n)} R(\mathbf{w}) + \sum_{n} L(\mathbf{\gamma}_n)\\
&\quad \text{s.t.} \quad y_n\mathcal{D}( \mathbf{x}_n\mid\mathbf{w}) - \mathbf{\gamma}_n
\geq 0, \quad \forall n
\end{aligned}
\end{equation}
where $L(\cdot)$ is the loss function, a non-increasing and convex function of the margin, and $R(\mathbf{w})$ is the regularization penalty. MED considers the more general problem of finding a distribution $p(\mathbf{w, \gamma})$ over $\mathbf{w}$ and the classification margin parameters $\gamma$. This is done by minimizing its relative entropy with respect to a prior target distribution $p_{0}(\mathbf{w, \gamma})$ under certain margin constraints. Specifically, suppose that a prior distribution $p_0(\mathbf{w,\gamma})$ is available; MED then learns a distribution $p(\mathbf{w,\gamma})$ by solving a regularized risk minimization problem. When the prior distribution is not uniform, this can be generalized as minimizing the relative entropy (or Kullback--Leibler divergence) plus the regularization penalty, as follows (penalizing larger distances from the prior):
\begin{equation} \label{eq:MED}
\begin{aligned}
\min _{p\left( \mathbf{w,\gamma}\right) }KL\big( p\left( \mathbf{w,\gamma}\right) ||p_{0}\left( \mathbf{w,\gamma} \right) \big) + CR\big(p(\mathbf{w,\gamma})\big).
\end{aligned}
\end{equation}
Here, $C$ is a constant and $R\big(p(\mathbf{w,\gamma})\big) = \sum_n h\big(y_n E_{p(\mathbf{w,\gamma})}[\mathcal{D}\left( \mathbf{x}_{n}|\mathbf{w} \right)-\mathbf{\gamma}_n]\big)$ is the hinge-loss term that captures the large-margin principle underlying the MED prediction rule:
\begin{equation}\label{predrule}
\begin{aligned}
\hat{y} = sign\Big(E_{p(\mathbf{w,\gamma})}\big[\mathcal{D}( \mathbf{x}_{n}|\mathbf{w})-\mathbf{\gamma}_n\big] \Big).
\end{aligned}
\end{equation}
And the KL divergence is defined as follows:
\begin{equation} \label{KL divergence}
\begin{aligned}
KL\big( p\left( \mathbf{w} ,\mathbf{\gamma} \right) ||p_{0}\left( \mathbf{w} ,\mathbf{\gamma} \right) \big)=\int p\left( \mathbf{w} ,\mathbf{\gamma}\right) \log \dfrac {p\left( \mathbf{w} ,\mathbf{\gamma}\right) }{p_{0} \left( \mathbf{w, \gamma} \right) }\, d\mathbf{w}\, d\mathbf{\gamma}.
\end{aligned}
\end{equation}
Here in \eqref{eq:MED}, the classification margin quantities $\mathbf{\gamma}_n$ are included as slack variables in the optimization; $\mathbf{\gamma}_n$ represents the minimum margin that $y_n\mathcal{D}( \mathbf{x}_n|\mathbf{w})$ must satisfy. MED considers an expectation form of the traditional approaches and casts Eq. \eqref{original theory} as an integration. The classification constraints are also applied in an expected form. As a result, MED no longer finds a fixed set of parameters, but a distribution over them, and it uses a convex combination of discriminant functions rather than one single discriminant function, performing model averaging for decisions. In particular, the MED formulation finds the distribution that is as close as possible to the prior distribution over all parameters in terms of KL-divergence, subject to various moment constraints. This analogy extends to cases where the distributions are also over unlabeled samples, missing values, or other probabilistic entities that are introduced when designing the discriminant function. Correspondingly, MED is an effective approach to learn a discriminative classifier while accounting for uncertainties over model parameters, thereby combining generative and discriminative learning \cite{sun2018multi,zhu2018semi}. This generalization facilitates a number of extensions of the basic approach, including the uncertainty quantification described in this paper. The present work contributes a novel generalization of the CHI formulation by integrating MED to perform uncertainty quantification.
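For a concrete feel of the expectation-form machinery, the sketch below instantiates the MED ingredients with Gaussian distributions: the closed-form KL divergence between Gaussians (one possible choice for the divergence in \eqref{KL divergence}), a Monte Carlo version of the expectation-form prediction rule, and a simple spread-based rejection option in the spirit of UQ-CHI. All distribution parameters and the threshold are illustrative choices, not values from the paper.

```python
import numpy as np

def gaussian_kl(mu1, S1, mu0, S0):
    """Closed-form KL( N(mu1,S1) || N(mu0,S0) ) between multivariate Gaussians."""
    d = len(mu1)
    S0_inv = np.linalg.inv(S0)
    diff = mu0 - mu1
    return 0.5 * (np.trace(S0_inv @ S1) + diff @ S0_inv @ diff - d
                  + np.log(np.linalg.det(S0) / np.linalg.det(S1)))

rng = np.random.default_rng(4)
mu, S = np.array([1.0, -0.5, 2.0]), 0.1 * np.eye(3)    # illustrative p(w)
assert abs(gaussian_kl(mu, S, mu, S)) < 1e-12           # KL(p || p) = 0

# Expectation-form prediction: average the discriminant w^T x over p(w).
x = np.array([0.8, 0.1, -0.2])
W = rng.multivariate_normal(mu, S, size=5000)           # samples w ~ p(w)
scores = W @ x
y_hat = np.sign(scores.mean())
# Spread of the discriminant under p(w): a simple uncertainty measure;
# abstain (rejection option) when the mean score is small relative to it.
tau = 0.5                                               # illustrative threshold
decision = "reject" if abs(scores.mean()) < tau * scores.std() else y_hat
```

Averaging over samples of $\mathbf{w}$ plays the role of the expectation in the prediction rule above, and the spread of the sampled scores is exactly the kind of per-sample uncertainty that UQ-CHI aims to expose.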
\section{The proposed work: the UQ-CHI model} \label{proposed work}
The overall goal of UQ-CHI is to learn a distribution
$p(\mathbf{w})$ over the parameters $\mathbf{w}$ of the CHI model. An additional goal is that this could be done even if only partial labels are given, and the labels themselves may be uncertain. Therefore, the first step in constructing UQ-CHI is to create the constraint structure. To design UQ-CHI, we incorporate several features from the original formulation of the CHI via Eq. \eqref{eq:original} as follows. First, we utilize the label information by defining the discriminant function $\mathcal{D}\left( \mathbf{x}_{n, Tn}|\mathbf{w} \right)=\mathbf{w}^{T} \mathbf{x}_{n,Tn}$, which corresponds to \eqref{eq:original_b}. We then incorporate the distinct feature of the CHI formulation, the monotonicity regularization function $\mathcal{M}\left(\mathbf{z}_{n, t}|\mathbf{w} \right)=\mathbf{w}^{T}\mathbf{z}_{n,t}$, which corresponds to Eq. \eqref{eq:original_c}. Note that we do not incorporate the additional terms in Eq. \eqref{eq:original_d} and Eq. \eqref{eq:original_e}, as they demand full knowledge of the labels of the samples. In addition, we do not include the sparsity regularization term \eqref{eq:original_f}, since our focus is to learn $p(\mathbf{w})$ rather than the parameter vector $\mathbf{w}$. Moreover, our model can still induce sparsity, e.g., by imposing a Laplace prior distribution on the parameters, as is done in the Bayesian Lasso model \cite{park2008bayesian}.
In the following subsections, we will introduce how we design the prior distributions, the constraints, and how to derive computational algorithms and closed-form solutions for training and prediction.
\subsection{Design of constraints and prior distributions}\label{constraints}
As aforementioned, two types of constraints can be carried over from the CHI formulation into the development of UQ-CHI. One corresponds to the discriminant function $\mathcal{D}\left( \mathbf{x}_{n, Tn}|\mathbf{w} \right)=\mathbf{w}^{T} \mathbf{x}_{n,Tn}$, used in CHI to generate predictions on samples, while the other corresponds to the monotonicity regularization function $\mathcal{M}\left(\mathbf{z}_{n, t}|\mathbf{w} \right)=\mathbf{w}^{T}\mathbf{z}_{n,t}$. Based on the CHI formulation, a perfect model would satisfy $y_n\mathcal{D}\left( \mathbf{x}_{n, Tn}|\mathbf{w} \right) = 1$ and $\mathcal{M}\left(\mathbf{z}_{n, t}|\mathbf{w} \right)\geq0$. As such a perfect model may not exist, a set of margin variables $\mathbf{\gamma}=[\mathbf{\gamma}_1,\ldots,\mathbf{\gamma}_N]$ is introduced. We consider an expectation form of the previous approach and cast Eq. \eqref{eq:original} as an integration; hence, the classification constraints are applied in an expected sense. This leads to the following formulation of the constraints:
\begin{subequations} \label{constraintseq}
\begin{align}
&\int p\left( \mathbf{w},\mathbf{\gamma}\right)[p(y_n) \mathcal{D}( \mathbf{x}_{n,Tn}\mid \mathbf{w}) - \mathbf{\gamma}_{n}]d\mathbf{w}d\mathbf{\gamma}
+ \label{constraintseq_a} \\
&\int p\left(\mathbf{w,\gamma}\right) \left[\mathcal{M}\left( \mathbf{z}_{n, t}|\mathbf{w} \right) -\mathbf{\gamma}_{n}\right] d\mathbf{w} d\mathbf{\gamma} \geq 0.\label{constraintseq_b}
\end{align}
\end{subequations}
Here, the term \eqref{constraintseq_a} corresponds to the discriminant function and the term \eqref{constraintseq_b} to the monotonicity regularization function; $p(y_n)$ is the distribution of $y_n$, and $p(\mathbf{w,\gamma})$ is the joint distribution of $(\mathbf{w,\gamma})$. With the prior distribution, we can derive the prediction rule: $\hat{y} = sign(E_{p(\mathbf{w})}[\mathcal{D}( \mathbf{x}_{n}|\mathbf{w})])$.
Now we move on to the design of the prior distribution $p_0(\mathbf{w,\gamma}, y)$. It is natural to decompose the joint prior distribution as a product of three distributions:
\begin{equation} \label{eq:PriorMEDCHI}
\begin{aligned}
p_0(\mathbf{w,\gamma}, y) = p_0(\mathbf{w})\prod ^{N}_{n=1}p_{0}\left( \mathbf{\gamma}_n\right)\prod ^{N}_{n=1}p_{0}\left( y_n\right).
\end{aligned}
\end{equation}
In what follows we discuss each of the three prior distributions. Specifically, it is reasonable to assign a level of uncertainty to each example when defining $p_0(y_n)$. A simple solution is to set $p_{0}\left( y_n\right) =1$ whenever $y_n$ is observed and $p_{0}\left( y_n\right) =0.5$ otherwise. For $p_{0}\left(\mathbf{w}\right)$, we choose a Gaussian distribution with mean vector $\mathbf{0}$ and the identity matrix $\mathbf{I}$ as covariance. For the prior over the margin variables, we assume that it factorizes as $p_0\left(\mathbf{\gamma}\right) =\prod _{n}p_{0}\left(\mathbf{\gamma}_n\right)$. Further, following the idea proposed in \cite{jaakkola2000maximum}, we set $p_{0}\left( \mathbf{\gamma}_n\right)= ce^{-c\left( 1-\mathbf{\gamma}_{n}\right) }$ for $\mathbf{\gamma}_{n}\leq 1$. Here, $1-\frac{1}{c}$ is the mean of the prior distribution of $\mathbf{\gamma}_n$, so this distribution incurs a penalty only for margins smaller than $1-\frac{1}{c}$, while margins larger than this quantity are not penalized. More details on the design of the prior distributions are given in Section \ref{tract}.
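To make the three priors concrete, the following sketch (our own illustrative code, not part of the formulation) evaluates them numerically and checks that the margin prior integrates to one with mean $1-1/c$:

```python
import numpy as np

def label_prior(observed):
    """p0(y_n): certainty 1.0 when the label is observed, 0.5 otherwise."""
    return 1.0 if observed else 0.5

def margin_prior_pdf(gamma, c):
    """p0(gamma_n) = c * exp(-c * (1 - gamma_n)), supported on gamma_n <= 1."""
    gamma = np.asarray(gamma, dtype=float)
    pdf = c * np.exp(-c * (1.0 - gamma))
    return np.where(gamma <= 1.0, pdf, 0.0)

def trapezoid(y, x):
    """Trapezoidal rule, used to check the prior numerically."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x) / 2.0))

# p0(w) is N(0, I); a draw only requires the dimension d.
w_draw = np.random.default_rng(0).standard_normal(90)

# Numerical check: the margin prior integrates to 1 and has mean 1 - 1/c.
c = 5.0
grid = np.linspace(-20.0, 1.0, 200001)
pdf = margin_prior_pdf(grid, c)
total = trapezoid(pdf, grid)          # ~ 1.0
mean = trapezoid(grid * pdf, grid)    # ~ 1 - 1/c = 0.8
```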
\subsection{The computational algorithm for UQ-CHI}\label{UQ-CHI solution}
The full formulation of the proposed UQ-CHI model is shown below:
\begin{subequations}\label{full UQ-CHI}
\begin{align}
&\min _{p\left( \mathbf{w,\gamma}\right) }KL\big( p\left( \mathbf{w,\gamma}\right) ||p_{0}\left( \mathbf{w,\gamma} \right) \big)\label{full UQ-CHI_a}\\
&\quad \textrm{s.t.} \int p\left( \mathbf{w},\mathbf{\gamma}\right)[p(y_n) \mathcal{D}( \mathbf{x}_{n,Tn}\mid \mathbf{w}) - \mathbf{\gamma}_{n}]d\mathbf{w}d\mathbf{\gamma} + \label{full UQ-CHI_b} \\
&\qquad \int p\left(\mathbf{w,\gamma}\right) \left[\mathcal{M}\left( \mathbf{z}_{n, t}|\mathbf{w} \right) -\mathbf{\gamma}_{n}\right] d\mathbf{w} d\mathbf{\gamma} \geq 0.\label{full UQ-CHI_c}
\end{align}
\end{subequations}
Essentially, solving the optimization formulation in Eq. \eqref{full UQ-CHI} amounts to computing the relative entropy projection from the overall prior distribution $p_0(\mathbf{w, \gamma}, y)$ onto the admissible set of distributions $p$ that are consistent with the constraints. In what follows, we develop the computational algorithm that solves Eq. \eqref{full UQ-CHI} and further derive the method for prediction on new samples.
\subsubsection{Step 1: Training the model}
In the training step, we consider the joint distribution of $\mathbf{w}$ and the margin vector $\mathbf{\gamma}=[\mathbf{\gamma}_1,\ldots,\mathbf{\gamma}_N]$ while fixing $p(y_n)$. We first explain the solution to the MED optimization problem subject to the terms in \eqref{eq:MED}.
\begin{lem} \label{lemma_1}
Let the loss function be a non-increasing and convex function of the margin, let $\mathcal{L}$ denote the Lagrangian of the optimization problem, and let $\mathbf{\lambda}=\left[ \mathbf{\lambda} _{1},\ldots ,\mathbf{\lambda} _{N}\right]$ be a set of non-negative Lagrange multipliers. Given the prior distribution $p_0(\mathbf{w})$, the model distribution $p(\mathbf{w})$, and the discriminant function $\mathcal{D}\left( \mathbf{x}_{n}|\mathbf{w} \right)$, minimizing the relative entropy in terms of the KL-divergence $KL\big(p(\mathbf{w})||p_0(\mathbf{w})\big)$ subject to the set of defined constraints, i.e., the MED optimization problem \eqref{eq:MED}, can be written as:
\begin{equation}\label{regularZ}
\begin{aligned}
& \max _{\mathbf{\lambda} }J(\mathbf{\lambda}) = -\log Z(\mathbf{\lambda})\\
&\quad \textrm{s.t.} \quad \mathbf{\lambda}_i \geq 0 \quad \text{for} \quad i=1,\ldots,N
\end{aligned}
\end{equation}
Here, $Z(\mathbf{\lambda})$ is the normalization constant defined as:
\begin{equation}\label{lemma_1_Z}
\begin{aligned}
&Z(\mathbf{\lambda})=\int p_0(\mathbf{w}) \exp \Bigg(\sum_n \mathbf{\lambda}_ny_n \mathcal{D}\left( \mathbf{x}_{n}|\mathbf{w} \right)\Bigg)d\mathbf{w},
\end{aligned}
\end{equation}
\end{lem}
The proof of Lemma \ref{lemma_1} can be found in \ref{Appendix_A}. The model training problem is thus revealed to be another optimization problem, namely learning the optimal $\mathbf{\lambda}^*$ by solving the dual objective function $J$ under the positivity constraint. Based on the results of Lemma \ref{lemma_1}, after adding dual variables for the constraints in Eq. \eqref{full UQ-CHI}, the Lagrangian of the optimization problem can be written as:
\begin{equation} \label{eq:MMED-CHIlagrang}
\begin{aligned}
&\mathcal{L} = \int p\left(\mathbf{w,\gamma}\right) \log \dfrac {p\left( \mathbf{w,\gamma}\right) }{p_{0}\left( \mathbf{w,\gamma}\right) }d\mathbf{w} d\mathbf{\gamma}-\\
&\qquad \Bigg(\sum _{n\in \left\{ 1,\ldots,N\right\} }\int p\left( \mathbf{w},\mathbf{\gamma}\right)\mathbf{\lambda}_n[p(y_n) \mathcal{D}( \mathbf{x}_{n,Tn}\mid \mathbf{w}) - \mathbf{\gamma}_{n}]d\mathbf{w}d\mathbf{\gamma} + \\
&\quad \sum_{\substack{n\in \{1,\ldots,N\}\\ t\in \left\{ 1,\ldots ,T_{n-1}\right\}} }\int p\left(\mathbf{w,\gamma}\right)\mathbf{\lambda}_n \left[\mathcal{M}\left( \mathbf{z}_{n, t}|\mathbf{w} \right) -\mathbf{\gamma}_{n}\right] d\mathbf{w} d\mathbf{\gamma}\Bigg).
\end{aligned}
\end{equation}
In order to find a solution, we require:
\begin{equation}\label{MMEDCHIDerivative}
\begin{aligned}
&\dfrac {\partial \mathcal{L}}{\partial p\left(\mathbf{w,\gamma} \right) } = \log \dfrac {p\left(\mathbf{w,\gamma}\right) }{p_{0}\left(\mathbf{w,\gamma}\right)}+1 -\\
&\qquad \qquad \quad \Bigg ( \sum _{n\in \left\{ 1,\ldots,N\right\} }\mathbf{\lambda}_n[p(y_n) \mathcal{D}( \mathbf{x}_{n,Tn}\mid \mathbf{w}) - \mathbf{\gamma}_{n}] + \\
&\qquad \quad \quad \sum_{\substack{n\in \{1,\ldots,N\}\\ t\in \left\{ 1,\ldots ,T_{n-1}\right\}} }\mathbf{\lambda}_n \left[\mathcal{M}\left( \mathbf{z}_{n, t}|\mathbf{w} \right) -\mathbf{\gamma}_{n}\right]\Bigg)\\
&\qquad \qquad = \mathbf{0},
\end{aligned}
\end{equation}
This results in the following theorem.
\begin{theorem}\label{theory_1}
The solution to the UQ-CHI problem has the following general form:
\begin{equation} \label{eq:MMEDCHIsol}
\begin{aligned}
&p\left(\mathbf{w,\gamma}\right)^* = \frac{1}{Z(\mathbf{\lambda})}p_{0}\left( \mathbf{w,\gamma}\right)\\
&\qquad \quad \quad \exp \Bigg (\sum _{n\in \left\{ 1,\ldots ,N\right\} }\mathbf{\lambda}_n[p(y_n) \mathcal{D}( \mathbf{x}_{n,Tn}\mid \mathbf{w}) - \mathbf{\gamma}_{n}]+ \\
&\qquad \quad\sum_{\substack{n\in \{1,\ldots,N\}\\ t\in \left\{ 1,\ldots ,T_{n-1}\right\}} } \mathbf{\lambda}_n\left[\mathcal{M}\left( \mathbf{z}_{n, t}|\mathbf{w} \right) -\mathbf{\gamma}_{n}\right] \Bigg) .
\end{aligned}
\end{equation}
\end{theorem}
Thus, finding the solution to \eqref{full UQ-CHI} depends on being able to evaluate the normalization constant $Z(\mathbf{\lambda})$.
\begin{lem} \label{lemma_2}
Let $Z(\mathbf{\lambda})$ be the normalization constant defined in Eq. \eqref{lemma_1_Z}. Based on the finding in \eqref{eq:MMEDCHIsol}, $Z(\mathbf{\lambda})$ can be reformulated as follows:
\begin{subequations}\label{finalZZ}
\begin{align}
&Z(\mathbf{\lambda})= Z_\mathbf{w}(\mathbf{\lambda}) \, Z_\mathbf{\gamma}(\mathbf{\mathbf{\lambda}})\\
&\qquad =\exp \Bigg(\frac{1}{2}\bigg(\sum_{\substack{n\in \{1,\ldots,N\}} }\mathbf{\lambda}_{n} p_n(y_n) \mathbf{x}_{n,Tn} + \sum_{\substack{n\in \{1,\ldots,N\}\\ t\in \left\{ 1,\ldots ,T_{n-1}\right\}} } \mathbf{\lambda}_n \mathbf{z}_{n,t}\bigg )^T\label{finalZZ_a}\\
&\qquad \quad \bigg(\sum_{\substack{n\in \{1,\ldots,N\}} }\mathbf{\lambda}_{n} p_n(y_n) \mathbf{x}_{n,Tn} + \sum_{\substack{n\in \{1,\ldots,N\}\\ t\in \left\{ 1,\ldots ,T_{n-1}\right\}} } \mathbf{\lambda}_n \mathbf{z}_{n,t}\bigg)\Bigg) \times \label{finalZZ_b}\\
&\quad \quad\prod_{n\in \{1,\ldots,N\} }\frac{1}{1-\mathbf{\lambda}_n/c}\exp(-\mathbf{\lambda}_n)\label{finalZZ_c}.
\end{align}
\end{subequations}
Here, $Z_\mathbf{w}(\mathbf{\lambda})$ is defined by \eqref{finalZZ_a} and \eqref{finalZZ_b}, and $Z_\mathbf{\gamma}(\mathbf{\lambda})$ is defined by \eqref{finalZZ_c}.
\end{lem}
The proof of Lemma \ref{lemma_2} can be found in \ref{Appendix_B}. Given the reformulated normalization constant $Z(\mathbf{\lambda})$ in \eqref{finalZZ}, the maximum of the concave objective function $J(\mathbf{\lambda})$ shown in Eq. \eqref{regularZ} can be found through constrained non-linear optimization. As a result, substituting Eq. \eqref{finalZZ} into Eq. \eqref{regularZ} yields:
\begin{equation}\label{seven}
\begin{split}
&J(\mathbf{\lambda})=\sum _{n\in \{1,\ldots,N\}}\bigg( \mathbf{\mathbf{\lambda}}_{n}+\log (1-\mathbf{\lambda}_{n}/c)\bigg)-\\
&\qquad \qquad\frac{1}{2}\bigg(\sum_{\substack{n\in \{1,\ldots,N\}} }\mathbf{\lambda}_{n} p_n(y_n) \mathbf{x}_{n,Tn} + \sum_{\substack{n\in \{1,\ldots,N\}\\ t\in \left\{ 1,\ldots ,T_{n-1}\right\}} } \mathbf{\lambda}_n \mathbf{z}_{n,t} \bigg)^T \\
& \qquad\qquad \bigg(\sum_{\substack{n\in \{1,\ldots,N\}} }\mathbf{\lambda}_{n} p_n(y_n) \mathbf{x}_{n,Tn} + \sum_{\substack{n\in \{1,\ldots,N\}\\ t\in \left\{ 1,\ldots ,T_{n-1}\right\}} } \mathbf{\lambda}_n \mathbf{z}_{n,t}\bigg).
\end{split}
\end{equation}
Here, $\mathbf{\lambda} \geq \mathbf{0}$. Thus, we have the following dual optimization problem:
\begin{equation}\label{MMEDCHIdual}
\begin{aligned}
&\max _{\mathbf{\lambda}}\sum _{n\in \{1,\ldots,N\}}\bigg( \mathbf{\lambda}_{n}+\log (1-\mathbf{\lambda}_{n}/c)\bigg) -\\
&\qquad \quad \frac{1}{2}\bigg(\sum_{\substack{n\in \{1,\ldots,N\}} }\mathbf{\lambda}_{n} p_n(y_n) \mathbf{x}_{n,Tn} + \sum_{\substack{n\in \{1,\ldots,N\}\\ t\in \left\{ 1,\ldots ,T_{n-1}\right\}} } \mathbf{\lambda}_n \mathbf{z}_{n,t} \bigg)^T \\
&\qquad \quad \bigg(\sum_{\substack{n\in \{1,\ldots,N\}} }\mathbf{\lambda}_{n} p_n(y_n) \mathbf{x}_{n,Tn} + \sum_{\substack{n\in \{1,\ldots,N\}\\ t\in \left\{ 1,\ldots ,T_{n-1}\right\}} } \mathbf{\lambda}_n \mathbf{z}_{n,t}\bigg) \\
&\quad \textrm{s.t.} \quad \mathbf{\lambda} \geq \mathbf{0}
\end{aligned}
\end{equation}
The Lagrange multipliers $\mathbf{\lambda}$ are recovered by solving the convex optimization problem in Eq. \eqref{MMEDCHIdual}. Note that since the prior factorizes across $\mathbf{w}$ and $\mathbf{\gamma}$, the UQ-CHI solution factorizes as well, i.e., $p(\mathbf{w,\gamma})=p(\mathbf{w})p(\mathbf{\gamma})$.
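To make the training step concrete, the sketch below solves the dual \eqref{MMEDCHIdual} by projected gradient ascent on a toy problem. This is our own illustration, not the authors' implementation: the solver choice, the step size, and the folding of the label sign into `p_y` (i.e., treating it as $E[y_n]$) are all assumptions.

```python
import numpy as np

def train_uq_chi(X, Z_list, p_y, c=5.0, lr=1e-2, iters=2000):
    """Projected gradient ascent on the dual objective J(lambda).

    X      : (N, d) array of last-visit feature vectors x_{n,Tn}
    Z_list : list of N arrays, each of shape (T_n - 1, d), of vectors z_{n,t}
    p_y    : (N,) signed label expectations E[y_n]
    Returns the multipliers lam and the posterior mean of w, which under the
    N(0, I) prior equals sum_n lam_n (p(y_n) x_{n,Tn} + sum_t z_{n,t}).
    """
    N, d = X.shape
    # a_n gathers every term multiplied by lambda_n in the exponent of p(w).
    A = np.array([p_y[n] * X[n] + Z_list[n].sum(axis=0) for n in range(N)])
    lam = np.full(N, 0.1)
    for _ in range(iters):
        s = A.T @ lam                                  # current mean of p(w)
        grad = 1.0 - 1.0 / (c - lam) - A @ s           # dJ / dlambda_n
        lam = np.clip(lam + lr * grad, 0.0, c - 1e-6)  # keep 0 <= lam < c
    return lam, A.T @ lam

# Toy problem: two subjects with opposite labels and tiny monotonicity terms.
rng = np.random.default_rng(0)
X = np.array([[1.0, 0.5], [-1.0, -0.5]])
Z_list = [rng.normal(0.0, 0.01, size=(2, 2)) for _ in range(2)]
p_y = np.array([1.0, -1.0])
lam, w_mean = train_uq_chi(X, Z_list, p_y)
```

Under the Gaussian prior $p_0(\mathbf{w})$, the posterior $p(\mathbf{w})$ in \eqref{distributionestimate} is again Gaussian with mean `w_mean`, so the learned direction is available in closed form once $\mathbf{\lambda}^*$ is found.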
\begin{cor}\label{cor_1}
From the results in Theorem \ref{theory_1}, the marginal distribution $p(\mathbf{w})$ can be found as follows:
\begin{equation}\label{distributionestimate}
\begin{aligned}
& p\left(\mathbf{w}\right)=\frac{1}{Z_\mathbf{w}(\mathbf{\lambda})}
p_{0}\left( \mathbf{w}\right) \exp \bigg(\sum_{\substack{n\in \{1,\ldots,N\}}}\mathbf{\lambda}_n p_n(y_n)\mathbf{w}^T \mathbf{x}_{n,Tn}+ \\
&\qquad \sum _{\substack{n\in \{1,\ldots,N\}\\ t\in \left\{ 1,\ldots ,T_{n-1}\right\}}}\mathbf{\lambda}_{n} \mathbf{w}^T\mathbf{z}_{n, t}\bigg).
\end{aligned}
\end{equation}
\end{cor}
Here, $Z_{\mathbf{w}}(\mathbf{\lambda})$ can be obtained from Eqs. \eqref{finalZZ_a} and \eqref{finalZZ_b}.
\subsubsection{Step 2: Prediction}\label{prediction}
After obtaining the marginal distribution $p(\mathbf{w})$ in \eqref{distributionestimate}, the following lemma is used to predict the label of a new example $\mathbf{x}_{new}$. Referring to the solution of the UQ-CHI problem in \eqref{eq:MMEDCHIsol}, we can directly adapt the prediction rule $\hat{y}=\textit{sign }\, \mathcal{D}\left( \mathbf{x}| \mathbf{\hat w} \right)$ to a new input sample $\mathbf{x}_{new}$. In what follows, we derive the predictive label for a new sample.
\begin{lem}\label{lemma_3}
Given the marginal distribution $p\left(\mathbf{w}\right)$ in \eqref{distributionestimate} and the convex combination of discriminant functions $\int p(\mathbf{w})\mathcal{D} \left( \mathbf{x}|\mathbf{w}\right)d\mathbf{w}$, let $\mathbf{\lambda}^*$ be the optimal Lagrange multipliers obtained from the optimization problem \eqref{MMEDCHIdual}, and let $Z_\mathbf{w}(\mathbf{\lambda})$ be obtained from \eqref{finalZZ_a} and \eqref{finalZZ_b}. Then the predictive label for a new sample $\mathbf{x}_{new}$ can be generated as:
\begin{equation}\label{predict}
\begin{aligned}
&\hat {y}=sign \Bigg(\bigg(\sum_{\substack{n\in \{1,\ldots,N\}}}\mathbf{\lambda}_n p_n(y_n) \mathbf{x}_{n,Tn} + \sum _{\substack{n\in \{1,\ldots,N\}\\ t\in \left\{ 1,\ldots ,T_{n-1}\right\}}}\mathbf{\lambda}_{n} \mathbf{z}_{n, t}\bigg)^T \mathbf{x}_{new}\Bigg).
\end{aligned}
\end{equation}
\end{lem}
The proof of Lemma \ref{lemma_3} is shown in \ref{Appendix_C}.
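The prediction rule of Lemma \ref{lemma_3} reduces to the sign of an inner product with a fixed direction vector. The sketch below uses illustrative multiplier values rather than the output of an actual training run:

```python
import numpy as np

def predict_uq_chi(lam, p_y, X, Z_list, x_new):
    """Eq. (predict): sign of
    (sum_n lam_n p(y_n) x_{n,Tn} + sum_{n,t} lam_n z_{n,t})^T x_new."""
    direction = np.zeros_like(x_new, dtype=float)
    for n in range(len(lam)):
        direction += lam[n] * p_y[n] * X[n]
        direction += lam[n] * Z_list[n].sum(axis=0)
    return int(np.sign(direction @ x_new))

# Illustrative values (not from a real training run).
lam = np.array([0.4, 0.3])
p_y = np.array([1.0, -1.0])
X = np.array([[2.0, 0.0], [-1.0, 1.0]])
Z_list = [np.zeros((1, 2)), np.zeros((1, 2))]
label = predict_uq_chi(lam, p_y, X, Z_list, np.array([1.0, -0.2]))
```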
\subsubsection{Summary of the algorithms}\label{summary}
A full description of the training and prediction of UQ-CHI model is given in Algorithm \ref{alg:alg1}.
\begin{algorithm}[!ht]
\caption{The UQ-CHI algorithm}
\label{alg:alg1}
\begin{algorithmic}[1]
\Require $ \mathbf{x}_{n, t} \in \mathbb{R} ^{d}$, $p_{0}\left( y_n\right)$, $p_{0}\left( \mathbf{w}\right)$, and $p_{0}\left( \mathbf{\gamma}\right)$
\Ensure Generate predictive labels for new samples
\While {not converged}
\State \textbf{Start iterations } t:= 1,2,\ldots\textbf{do}
\State \textbf{Step 1 - Training model: find $\mathbf{\lambda}^*$ and $p(\mathbf{w})$}
\State \textbf{for} $n=1,2,\ldots ,N$,
\State $\max _{\mathbf{\lambda}}\sum _{n\in \{1,\ldots,N\}}\bigg( \mathbf{\lambda}_{n}+\log (1-\mathbf{\lambda}_{n}/c)\bigg) -$\\
$\qquad \quad \frac{1}{2}\bigg(\sum_{\substack{n\in \{1,\ldots,N\}} }\mathbf{\lambda}_{n} p_n(y_n) \mathbf{x}_{n,Tn} + \sum_{\substack{n\in \{1,\ldots,N\}\\ t\in \left\{ 1,\ldots ,T_{n-1}\right\}} } \mathbf{\lambda}_n \mathbf{z}_{n,t} \bigg)^T$ \\
$\qquad \quad \bigg(\sum_{\substack{n\in \{1,\ldots,N\}} }\mathbf{\lambda}_{n} p_n(y_n) \mathbf{x}_{n,Tn} + \sum_{\substack{n\in \{1,\ldots,N\}\\ t\in \left\{ 1,\ldots ,T_{n-1}\right\}} } \mathbf{\lambda}_n \mathbf{z}_{n,t}\bigg)$ \\
$\quad \textrm{s.t.} \quad \mathbf{\lambda} \geq \mathbf{0}$
\State $ p\left(\mathbf{w}\right)=\frac{1}{Z_\mathbf{w}(\mathbf{\lambda})}
p_{0}\left( \mathbf{w}\right) \exp \bigg(\sum_{\substack{n\in \{1,\ldots,N\}}}\mathbf{\lambda}_n p_n(y_n)\mathbf{w}^T \mathbf{x}_{n,Tn}+ $\\
$\qquad \sum _{\substack{n\in \{1,\ldots,N\}\\ t\in \left\{ 1,\ldots ,T_{n-1}\right\}}}\mathbf{\lambda}_{n} \mathbf{w}^T\mathbf{z}_{n, t}\bigg) $
\State \textbf{Step 2 - Prediction: predict the
label of a new example ($ \mathbf{x}_{new}$)}
\State ${\displaystyle\hat {y}=sign \Bigg(\bigg(\sum_{\substack{n\in \{1,\ldots,N\}}}\mathbf{\lambda}_n p_n(y_n) \mathbf{x}_{n,Tn} + \sum _{\substack{n\in \{1,\ldots,N\}\\ t\in \left\{ 1,\ldots ,T_{n-1}\right\}}}\mathbf{\lambda}_{n} \mathbf{z}_{n, t}\bigg)^T \mathbf{x}_{new}\Bigg)}$
\State \textbf{ end for}
\EndWhile
\end{algorithmic}
\end{algorithm}
\subsection{UQ-CHI with rejection option}\label{rejction option}
Typically the performance of a prediction model is evaluated based on its accuracy, under a scheme that classifies all samples regardless of the degree of confidence associated with each classification. However, accuracy is not the only measurement by which to judge a model's performance. In many healthcare applications, it is safer to make predictions only when the confidence assigned to the classification is relatively high, rather than to classify all samples even when confidence is low. In this case, a sample can be rejected if it does not fit into any of the classes. In pattern recognition, this problem is typically solved by estimating the class conditional probabilities and rejecting the samples with the lowest class posterior probabilities, i.e., the most unreliable samples. As UQ-CHI enables uncertainty quantification, we create a rejection option in prediction to show the utility of uncertainty quantification in practice. The basic idea of the rejection option is that the prediction model declines to generate a prediction if the uncertainty is higher than a given threshold. In other words, a sample that is most likely to be misclassified is rejected, as described below:
\begin{equation}
\begin{aligned}
\max p(\mathbf{w}|x_i) < T \quad i = 1,\dots,N
\end{aligned}
\end{equation}
Here, $T$ is the rejection threshold. A sample $x_i$ is rejected when its maximum posterior probability $p(\mathbf{w}|x_i)$ falls below the threshold, and accepted when:
\begin{equation}
\begin{aligned}
\max p(\mathbf{w}|x_i) \geq T \quad i = 1,\dots,N
\end{aligned}
\end{equation}
Thus, we define the classification with rejection as $ \hat {y}^{rejection}$: if a sample is rejected, $\hat {y}^{rejection}_i = 0$ denotes the rejection; otherwise, $\hat {y}^{rejection}_i = \hat {y}_i$, where $\hat {y}_i$ is the classification of the $i$th sample defined in Eq. \eqref{predict}.
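The rejection rule above amounts to a simple elementwise threshold; a minimal sketch, with illustrative confidence values, is:

```python
import numpy as np

def classify_with_rejection(confidence, y_hat, T):
    """Return y_hat where confidence >= T, and 0 (rejection) elsewhere.

    confidence : (N,) maximum posterior probabilities, one per sample
    y_hat      : (N,) labels from the prediction rule, in {-1, +1}
    T          : rejection threshold
    """
    confidence = np.asarray(confidence, dtype=float)
    return np.where(confidence >= T, np.asarray(y_hat), 0)

# Example: three samples; the middle one falls below the threshold.
conf = np.array([0.90, 0.55, 0.80])
y_hat = np.array([1, -1, -1])
decided = classify_with_rejection(conf, y_hat, T=0.6)  # -> [1, 0, -1]
```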
\begin{table*}[t]
\centering
\resizebox{0.60\textwidth}{!}{%
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\multicolumn{1}{|c|}{Algorithm name} & \multicolumn{4}{c|}{UQ-CHI} & \multicolumn{1}{c|}{\multirow{3}{*}{CHI}} \\ \cline{1-5}
\multicolumn{1}{|c|}{\multirow{2}{*}{Label ratio}} & \multicolumn{1}{c|}{\multirow{2}{*}{Training ratio}} & \multicolumn{3}{c|}{Rejection rate} & \multicolumn{1}{c|}{} \\ \cline{3-5}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & Low = 20 & Medium = 40 & High = 60 & \multicolumn{1}{c|}{} \\ \hline
\multirow{3}{*}{Low = 10} & 30 & 0.69 & 0.74 & 0.81 & 0.61 \\ \cline{2-6}
& 50 & 0.73 & 0.76 & 0.83 & 0.62 \\ \cline{2-6}
& 70 & 0.75 & 0.77 & 0.85 & 0.65 \\ \hline
\multirow{3}{*}{Medium = 20} & 30 & 0.66 & 0.72 & 0.73 & 0.55 \\ \cline{2-6}
& 50 & 0.69 & 0.73 & 0.74 & 0.60 \\ \cline{2-6}
& 70 & 0.71 & 0.75 & 0.78 & 0.64 \\ \hline
\multirow{3}{*}{High = 50} & 30 & 0.64 & 0.69 & 0.72 & 0.53 \\ \cline{2-6}
& 50 & 0.67 & 0.71 & 0.73 & 0.56 \\ \cline{2-6}
& 70 & 0.70 & 0.73 & 0.75 & 0.60 \\ \hline
\end{tabular}}
\caption{Corresponding testing accuracies for different rejection options for the simulated dataset}
\label{Table_3}
\end{table*}
\subsection{Tractability of UQ-CHI related to design of prior distribution}\label{tract}
Recall that by applying MED to our optimization problem we no longer learn the model parameters directly; instead, we specify probability distributions over them. These distributions give rise to penalty functions for the model and the margins via the KL-divergence. In detail, the model distribution gives rise to a divergence term $KL(p(\mathbf{w})||p_0(\mathbf{w}))$, and the margin distribution gives rise to the divergence term $KL(p(\mathbf{\gamma})||p_0(\mathbf{\gamma}))$, which correspond to the regularization penalty and the loss function, respectively. The trade-off between classification loss and regularization is now on a common probabilistic scale, since both terms are based on probability distributions and the KL-divergence. Hence, there is a relationship between defining a prior distribution over margins and parameters and defining the objective function and the penalty term in the original formulation. Recall that $\mathbf{\gamma}_n$ are the classification margins acting as slack variables in the optimization, which represent the minimum margin that $y_n\mathcal{D}(\mathbf{x}_n|\mathbf{w})$ must satisfy. Hence, the choice of the margin distribution corresponds to the use of the slack variables in the formulation of UQ-CHI. For example, in our case we set $p_{0}\left( \mathbf{\gamma}_n\right)= ce^{-c\left( 1-\mathbf{\gamma}_{n}\right) }$ for $\mathbf{\gamma}_{n}\leq 1$. If we mathematically expand the normalization function in \eqref{lemma_1_Z}, we obtain the two terms $Z_{\mathbf{w}}(\mathbf{\lambda})$ and $Z_{\mathbf{\gamma}}(\mathbf{\lambda})$ as shown in \eqref{finalZZ}, and given the choice of margin priors in Section \ref{constraints} we get:
\begin{equation}\label{penalty}
\begin{aligned}
\log Z_{\mathbf{\gamma}_n}(\mathbf{\lambda}_n) =\log \int ^{1}_{\mathbf{\gamma}_n = -\infty} c\exp\bigg({-c\big( 1-\mathbf{\gamma}_{n}\big) }\bigg)\exp\bigg(-\mathbf{\lambda}_n\mathbf{\gamma}_n\bigg)\,d\mathbf{\gamma}_n.
\end{aligned}
\end{equation}
From \eqref{penalty} we can see that a penalty occurs when the margins are smaller than $E[\mathbf{\gamma}_n] = 1 - \frac{1}{c}$, and any margins larger than this are not penalized. The margin distribution becomes peaked at $\mathbf{\gamma}_n = 1$ when $c\rightarrow \infty$, which is equivalent to having fixed margins. If the margin values are held fixed, the discriminant function might not be able to separate the training examples with such pre-specified margin values; for non-separable datasets this generates an empty convex hull for the solution space. Thus, we need to revisit the setting of the margin values and the loss function built upon them. The parameter $c$ plays an almost identical role to a regularization parameter, upper-bounding the Lagrange multipliers. Note that if the objective function $J(\cdot)$ grows without bound, it may generate a search space for the parameters that is no longer a convex hull, which compromises the uniqueness and solvability of the problem. Therefore, the prior should be selected such that $J(\cdot)$ is concave, admitting a unique optimum in the Lagrange multiplier space.
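For completeness, the integral in \eqref{penalty} can be evaluated in closed form for $\mathbf{\lambda}_n < c$:
\begin{equation}
\begin{aligned}
Z_{\mathbf{\gamma}_n}(\mathbf{\lambda}_n) &= \int ^{1}_{-\infty} c\, e^{-c\left( 1-\mathbf{\gamma}_{n}\right)}\, e^{-\mathbf{\lambda}_n\mathbf{\gamma}_n}\, d\mathbf{\gamma}_n = c\, e^{-c}\int ^{1}_{-\infty} e^{\left( c-\mathbf{\lambda}_n\right)\mathbf{\gamma}_n}\, d\mathbf{\gamma}_n\\
&= c\, e^{-c}\,\dfrac{e^{c-\mathbf{\lambda}_n}}{c-\mathbf{\lambda}_n} = \dfrac{e^{-\mathbf{\lambda}_n}}{1-\mathbf{\lambda}_n/c},
\end{aligned}
\end{equation}
so that $-\log Z_{\mathbf{\gamma}_n}(\mathbf{\lambda}_n) = \mathbf{\lambda}_n + \log\left(1-\mathbf{\lambda}_n/c\right)$, which is exactly the per-sample margin term in the dual objective \eqref{seven} and recovers the factor \eqref{finalZZ_c}.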
\section{Numerical studies} \label{numerical studies}
In this section, we design our simulation studies to evaluate the efficacy of UQ-CHI in terms of prediction and uncertainty quantification, in comparison with the CHI model under a variety of practical scenarios.
\subsection{Simulated dataset} \label{simulated dataset}
We simulate data following the procedure described below. The synthetic dataset is generated with two classes and partial labels, and we conduct several experiments with the simulated data to investigate the performance of our method across different settings. Without loss of generality, we assume that there are two groups, normal vs. diseased, with a proportion of $60\%$ of the normal class and $20\%$ complete labels. For all the experiments, we set the number of features to $d= 90$. For each class, we simulate $50$ subjects, where we assume that $x^{k}_{n,t}\sim N\big( u,\sigma ^{2}_{k}\big)$ for $k\in \left\{ 1,\ldots ,d\right\}$.
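The generating procedure can be sketched as follows; the monotone progression added to the diseased class and all constants beyond those stated above ($d=90$, $50$ subjects per class) are our illustrative assumptions:

```python
import numpy as np

def simulate_cohort(n_subjects=50, d=90, T=5, diseased=False, seed=0):
    """Simulate one class: for each subject, T visits of d Gaussian features.

    Diseased subjects receive a monotonically increasing shift over visits,
    so the difference vectors z_{n,t} = x_{n,t+1} - x_{n,t} trend positive.
    """
    rng = np.random.default_rng(seed)
    mu = rng.normal(0.0, 1.0, size=d)       # per-feature mean u
    sigma = rng.uniform(0.5, 1.5, size=d)   # per-feature std sigma_k
    cohort = []
    for _ in range(n_subjects):
        x = mu + sigma * rng.standard_normal((T, d))
        if diseased:
            x += 0.5 * np.arange(T)[:, None]  # monotone progression term
        cohort.append(x)
    return cohort

normal = simulate_cohort(diseased=False)
disease = simulate_cohort(diseased=True, seed=1)
```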
\subsection{Incomplete labels and length of longitudinal data}
UQ-CHI can handle partial labels well, i.e., by assigning a prior distribution to the labels and obtaining posterior distributions after model training. In our experiments, we consider low, medium, and high levels of label incompleteness, i.e., $10\%$, $20\%$, and $50\%$ of unlabeled examples. Also, we evaluate our methodology's robustness in the presence of down-sampling of the training data, i.e., only using a percentage of the data ($30\%$, $50\%$, or $70\%$) to train both the UQ-CHI and CHI models. A model that can predict well with less longitudinal data holds great value in clinical applications.
\subsection{Uncertainty quantification with rejection option}
As mentioned in Section \ref{rejction option}, UQ-CHI has the unique capacity of a rejection option: the algorithm declines to predict on a sample if the sample cannot be predicted reliably. The key parameter is the threshold used in the rejection option. In our experiments, we use several levels of the threshold to create a range of rejection options from loose to strict, and further calculate the resulting accuracies of the predictions on the accepted samples. Specifically, we vary the size of the rejection region from $20\%$, to $40\%$, to $60\%$.
\subsection{Parameter tuning and validation }
In our experiments, we randomly split the data into two parts, one for training and one for testing. For the training dataset, we use 10-fold cross-validation to tune the parameters. The average accuracies over the testing splits are reported in the results section. In Section \ref{tract} we specify under what condition the computation remains tractable: based on the choice of the margin distribution described there, the Lagrange multipliers $\mathbf{\lambda}_n$ are bounded by the parameter $c$. Recall that $c$ is a parameter in the prior for the margins; therefore, $c$ plays an important role. Hence, we conduct experiments with the parameter $c$ chosen from $\left\{1.5,3,5,10,20,100 \right\}$ to see the impact of various choices of $c$ on the testing accuracy.
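The tuning procedure can be sketched as a plain 10-fold search over the candidate values of $c$; the scorer `evaluate_accuracy` stands in for training UQ-CHI on a fold and is purely illustrative:

```python
import numpy as np

def k_fold_indices(n, k=10, seed=0):
    """Split n sample indices into k roughly equal random folds."""
    perm = np.random.default_rng(seed).permutation(n)
    return np.array_split(perm, k)

def tune_c(candidates, n, evaluate_accuracy, k=10):
    """Pick the c maximizing the mean held-out accuracy over k folds."""
    folds = k_fold_indices(n, k)
    scores = {}
    for c in candidates:
        accs = []
        for i in range(k):
            val = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            accs.append(evaluate_accuracy(c, train, val))
        scores[c] = float(np.mean(accs))
    return max(scores, key=scores.get), scores

# Stand-in scorer that simply prefers smaller c, mimicking the trend in Table 1.
best_c, scores = tune_c(
    [1.5, 3, 5, 10, 20, 100], n=100,
    evaluate_accuracy=lambda c, train, val: 1.0 / (1.0 + 0.01 * c))
```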
\subsection{Discussion}\label{Discussion}
In the following, we discuss the tractability of the model given the simulated data for various choices of the parameter $c$ in Table \ref{Table_1}. We evaluated different selections of the parameter $c$ to check its impact on the testing accuracy. If increasing this parameter has no effect on the performance, we can ignore the higher values for the reasons discussed in Section \ref{tract}. The results show that the accuracy decreases for larger values of the parameter $c$. As shown in Table \ref{Table_1}, further increases of the parameter $c$ do not carry large effects, as the margin distribution may have become peaked at $\mathbf{\gamma}_n = 1$, which is equivalent to having fixed margins. Note that to test the impact of the parameter $c$ we simulated the data with a proportion of $60\%$ of the normal class and $20\%$ complete labels. We observe that after increasing the parameter $c$ beyond $5$, the performance of the model does not change significantly, which indicates that the margin distribution may have become peaked and hence is equivalent to a fixed value; higher values of this parameter generate relatively similar performance. Consequently, lower values of $c$ preserve the flexibility to estimate a distribution over the parameters instead of using fixed margins.
Next, we examine how incomplete label information affects the performance of UQ-CHI with regard to the testing accuracy, given different sampling ratios, in Table \ref{Table_2}. A model that can be trained with less training data is more promising in healthcare applications, where data collection is relatively costlier than in other real-world applications. The results in Table \ref{Table_2} show that even with a ratio of $50\%$ incomplete label information, UQ-CHI can achieve a testing accuracy of $74\%$. This confirms that the model is capable of performing well in the face of a lack of label information.
Incorporating a rejection option into the model improves the prediction accuracy of the classifier. There is a general relationship between the testing accuracy and the rejection rate: the testing accuracy increases monotonically with an increasing rejection rate. The testing accuracies for different rejection options are reported in Table \ref{Table_3}. Comparisons of varying rejection rates for UQ-CHI confirm that for a high rejection rate of $60\%$, the testing accuracy could reach $81\%$ for a label ratio of $10\%$, which, in comparison with lower rejection rates, is a promising result. In Table \ref{Table_3}, we also compare our methodology with the CHI framework. Recall that CHI is not strictly a supervised learning problem. In \cite{huang2017chi}, both simulation studies and real-world applications demonstrated that, without label information, the CHI method could still be trained and used to predict. However, we show that UQ-CHI can generate relatively better performance than CHI by incorporating the rejection option: UQ-CHI obtains a testing accuracy in the range of $75\%$ to $81\%$ for a rejection rate of $60\%$ and a labeling ratio of $10\% - 50 \%$.
\begin{table}[!ht]
\centering
\resizebox{0.23\textwidth}{!}{%
\begin{tabular}{|l|l|}
\hline
Parameter c & Testing accuracy \\ \hline
1.5 & 81.2 \\ \hline
3 & 80.2 \\ \hline
5 & 79.8 \\ \hline
10 & 77.2 \\ \hline
20 & 77.3 \\ \hline
100 & 76.1 \\ \hline
\end{tabular}}
\caption{Model average testing accuracy ($\%$) for simulated dataset}
\label{Table_1}
\end{table}
\begin{table}[t]
\centering
\resizebox{0.45\textwidth}{!}{%
\begin{tabular}{|l|l|l|l|}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Sample ratio}} & \multicolumn{3}{c|}{Label ratio} \\ \cline{2-4}
\multicolumn{1}{|c|}{} & Low = 10\% & Medium = 20\% & High = 50\% \\ \hline
30 & 0.85 $\pm$ 0.033 & 0.80 $\pm$ 0.032 & 0.74 $\pm$ 0.033 \\ \hline
50 & 0.86 $\pm$ 0.060 & 0.83 $\pm$ 0.053 & 0.76 $\pm$ 0.027 \\ \hline
70 & 0.88 $\pm$ 0.074 & 0.85 $\pm$ 0.041 & 0.78 $\pm$ 0.037 \\ \hline
\end{tabular}}
\caption{The average classification accuracies and standard deviations (\%) for the simulated dataset}
\label{Table_2}
\end{table}
\section{Real-world application on Alzheimer's disease} \label{ADNI}
We further test UQ-CHI on an Alzheimer's disease dataset that exhibits monotonic disease progression. We use the FDG-PET images of 162 patients (Alzheimer's disease: 74, normal aging: 88) downloaded from the ADNI (\url{www.loni.usc.edu/ADNI}). The data are sampled at irregular time points, where each patient has at least three and at most seven time points. The data are preprocessed, and Automated Anatomical Labeling (AAL) is used to segment each image into 116 anatomical volumes of interest (AVOIs). For this study, the 90 AVOIs located in the cerebral cortex are selected (each AVOI becomes a variable here). According to the mechanism of FDG-PET, the measurement for each region is the local average FDG binding count, which represents the degree of glucose metabolism. Glucose metabolism declines as a function of aging, and the progression of many neurodegenerative diseases such as AD further accelerates this decline. Thus, the ADNI dataset provides an ideal application example to test the proposed method. While the ADNI dataset consists of fully labeled examples, we exploit the dataset settings to introduce various degrees of uncertainty into the label information.
The results of tuning the parameter $c$ for the ADNI dataset are reported in Table \ref{Table_4}. They show that the accuracy decreases as the parameter $c$ grows. Table \ref{Table_5} shows the performance of UQ-CHI across different uncertainty levels as well as different sampling ratios. The proposed method shows an excellent capability to quantify the uncertainties for the real-world dataset. As shown in Table \ref{Table_5}, UQ-CHI is even capable of dealing with data in which $50\%$ of the labels are incomplete, with an accuracy in the range of $70\%$--$77\%$ for the ADNI dataset.
On the other hand, we show that by using a proportion of the training samples as low as $30\%$ of the data, we can still maintain reasonable performance in the range of $70\%$--$82\%$, which indicates that UQ-CHI can be trained with less training data. The testing accuracies for different rejection options and training ratios are shown in Table \ref{Table_6}. Incorporating a rejection option into the model improves the prediction accuracy of classifiers. A comparison of different rejection rates for UQ-CHI confirms that for a high rejection rate of $60\%$, the testing accuracy reaches $80\%$ or higher, which is a promising result compared with lower rejection rates.
\begin{table}
\centering
\resizebox{0.23\textwidth}{!}{%
\begin{tabular}{|l|l|}
\hline
Parameter c & Testing accuracy \\ \hline
1.5 & 78.8 \\ \hline
3 & 77.9 \\ \hline
5 & 77.3 \\ \hline
10 & 75.3 \\ \hline
20 & 72 \\ \hline
100 & 68.9 \\ \hline
\end{tabular}}
\caption{Model average testing accuracy ($\%$) for ADNI dataset}
\label{Table_4}
\end{table}
\begin{table}
\centering
\resizebox{0.45\textwidth}{!}{%
\begin{tabular}{|l|l|l|l|}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Sample ratio}} & \multicolumn{3}{c|}{Label ratio} \\ \cline{2-4}
\multicolumn{1}{|c|}{} & Low = 10\% & Medium = 20\% & High = 50\% \\ \hline
30 & 0.82 $\pm$ 0.022 & 0.79 $\pm$ 0.052 & 0.70 $\pm$ 0.032 \\ \hline
50 & 0.84 $\pm$ 0.014 & 0.82 $\pm$ 0.005 & 0.74 $\pm$ 0.049 \\ \hline
70 & 0.87 $\pm$ 0.040 & 0.83 $\pm$ 0.032 & 0.76 $\pm$ 0.043 \\ \hline
\end{tabular}}
\caption{The average classification accuracies and standard deviations (\%) for ADNI dataset}
\label{Table_5}
\end{table}
\begin{table*}
\centering
\resizebox{0.60\textwidth}{!}{%
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\multicolumn{1}{|c|}{Algorithm name} & \multicolumn{4}{c|}{UQ-CHI} & \multicolumn{1}{c|}{\multirow{3}{*}{CHI}} \\ \cline{1-5}
\multicolumn{1}{|c|}{\multirow{2}{*}{Label ratio}} & \multicolumn{1}{c|}{\multirow{2}{*}{Training ratio}} & \multicolumn{3}{c|}{Rejection rate} & \multicolumn{1}{c|}{} \\ \cline{3-5}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & Low = 20 & Medium = 40 & High = 60 & \multicolumn{1}{c|}{} \\ \hline
\multirow{3}{*}{Low = 10} & 30 & 0.71 & 0.76 & 0.83 & 0.64 \\ \cline{2-6}
& 50 & 0.75 & 0.78 & 0.84 & 0.66 \\ \cline{2-6}
& 70 & 0.77 & 0.79 & 0.87 & 0.70 \\ \hline
\multirow{3}{*}{Medium = 20} & 30 & 0.67 & 0.71 & 0.72 & 0.58 \\ \cline{2-6}
& 50 & 0.70 & 0.72 & 0.75 & 0.62 \\ \cline{2-6}
& 70 & 0.71 & 0.75 & 0.76 & 0.63 \\ \hline
\multirow{3}{*}{High = 50} & 30 & 0.66 & 0.70 & 0.71 & 0.55 \\ \cline{2-6}
& 50 & 0.69 & 0.71 & 0.73 & 0.58 \\ \cline{2-6}
& 70 & 0.71 & 0.72 & 0.74 & 0.62 \\ \hline
\end{tabular}}
\caption{Corresponding testing accuracies for different rejection options for the ADNI dataset}
\label{Table_6}
\end{table*}
\section{Conclusion}\label{concl}
In this paper, we develop the UQ-CHI method to enable uncertainty quantification for continuous patient monitoring. This probabilistic generalization facilitates several extensions of the basic CHI model for decision-making purposes. For example, in many degenerative disease conditions such as AD, it is essential to triage patients to determine the priority of resource allocation and patient care. The UQ-CHI framework thus equips us to make optimal decisions under imperfect and continuously arriving knowledge. In the future, we would like to extend this method to other diseases that may show different degradation characteristics in the context of degenerative diseases. Another extension of this methodology is to apply it to a non-linear index and further explore the feasibility of varying discriminant functions.
\section{Introduction} \label{sec:intro}
Visual object recognition is fundamental to our daily lives. Identifying and categorizing objects in the environment is essential for human survival; indeed, our brain is able to assign the correct object class within a fraction of a second, independent of substantial variations in appearance, illumination and occlusion \cite{logothetis1996visual}. Recognition mainly takes place along the ventral pathway \cite{dicarlo2012does}, i.e., visual perception induces a hierarchical processing stream from local patterns to complex features, in feed-forward fashion \cite{logothetis1996visual}. Signals are filtered to low frequencies and used in parallel in a top-down manner \cite{bar2006top}, emphasizing robustness and invariance to deviations in appearance.
Inspired by our brain's visual perception, convolutional neural architectures build upon the hierarchical intuition, stacking multiple convolutional layers to induce feed-forward learning of basic concepts to compositions of complex objects. Indeed, early findings suggested that neurons in deeper layers are activated by increasingly complex shapes, while the first layers are mainly tailored towards low-level features such as color, texture and basic geometry. However, recent work indicates the opposite: \acp{cnn} exhibit a strong bias to base their decision on texture cues \cite{geirhos2021partial, gatys2017texture, Hermann2020Advances, geirhos2018imagenet}, which heavily influences their performance, in particular under domain shifts, which typically affect local texture more than global shape.
This inherent texture bias has led to a considerable body of work that tries to minimize texture and style bias and shift towards ``human-like" shape bias in object recognition. Common to all approaches is the principle of incorporating adversarial texture cues into the learning pipeline -- either directly in input space \cite{geirhos2018imagenet, hendrycks2021many, li2020shape} or implicitly in feature space \cite{nuriel2021permuted, nam2021reducing, kang2022style, jeon2021feature}. Perturbing the texture cues in the input forces the neural network to make ``texture-free" decisions, thus focusing on the objects' shapes that remain stable during training. While texture cues certainly boost performance in fine-grained classification tasks, where local texture patterns may indicate different classes, they can cause serious harm when the test data exhibit a domain shift \wrt training. In this light, texture bias can be seen as a main reason for degrading domain generalization performance, and various algorithms have been developed to improve generalization to new domains with different texture properties (respectively image styles), \eg,~\cite{geirhos2018imagenet, hendrycks2021many, li2020shape,nuriel2021permuted, nam2021reducing, kang2022style, jeon2021feature, wang2022feature}.
However, while a considerable number of algorithms have been proposed to address texture bias, they are neither evaluated on common datasets nor with common evaluation metrics. Moreover, they often employ inconsistent model selection criteria or do not report at all how model selection is done.
With the present paper we promote the view that:
\begin{itemize}[parsep=2pt]
\item the large domain shifts induced by texture-biased training cause large fluctuations in accuracy, which call for a particularly rigorous evaluation;
\item model selection has so far been ignored in the literature; together with the high training volatility, this may have led to overly optimistic conclusions from spurious results;
\item in light of the difficulties of operating under drastic domain shifts, experimental results should include a notion of uncertainty in the evaluation metrics.
\end{itemize}
Motivated by these findings, we have implemented \emph{BiasBed}{}, an open-source PyTorch \cite{paszke2019pytorch} testbed that comes with six datasets of different texture and shape biases and seven fully implemented algorithms. Our framework allows the addition of new algorithms and datasets with few lines of code, including full flexibility of all parameter settings. In order to tackle the previously discussed limitations and difficulties for evaluating such algorithms, we have added a carefully designed evaluation pipeline that includes training multiple runs and employing multiple model selection methods, and we report all results based on sound statistical hypothesis tests -- all run with a single command.
In addition to our novel framework, the present paper makes the following contributions:
\begin{itemize}[parsep=2pt]
\item We categorize existing literature into two main strategies, which we term input-augmented and feature-augmented approaches.
\item We show that input-augmented methods are systematically superior in a statistically rigorous evaluation.
\item We highlight the importance of statistically well-founded testing, not only in the context of texture bias, but also for domain generalization and beyond.
\end{itemize}
In Sec.~\ref{sec:relatedwork}, we provide a comprehensive overview of existing work on reducing texture bias. Furthermore we describe the main forms of statistical hypothesis testing. We continue in Sec.~\ref{sec:biasedlearning} with a formal introduction to biased learning, and systematically group existing algorithms into families with common properties. In Sec.~\ref{sec:evaluationpractices}, we investigate previous evaluation practices in detail, discuss their limitations, and give a formal introduction to hypothesis testing. Sec.~\ref{sec:BiasBedframework} describes our proposed \emph{BiasBed}{} evaluation framework in detail, while Sec.~\ref{sec:experiments} presents experiments, followed by a discussion (Sec.~\ref{sec:discussion}), conclusions (Sec.~\ref{sec:conclusion}) and a view towards the broader impact of our work (Sec.~\ref{sec:broaderimpact}).
\section{Related work}\label{sec:relatedwork}
\begin{figure*}[th]
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{0.6}
\centering
\begin{subfigure}[b]{0.28\textwidth}
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth, height=0.45\textwidth]{images/imagenet/n02018207_15250.JPEG} &
\includegraphics[width=0.45\textwidth, height=0.45\textwidth]{images/imagenet/n02099601_5690.JPEG} \\
\includegraphics[width=0.45\textwidth, height=0.45\textwidth]{images/imagenet/n02690373_101.JPEG} &
\includegraphics[width=0.45\textwidth, height=0.45\textwidth]{images/imagenet/n02814533_725.JPEG}
\end{tabular}
\subcaption{ImageNet}
\end{subfigure}
\begin{subfigure}[b]{0.28\textwidth}
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{images/cueconflict/airplane5-bottle3.png} &
\includegraphics[width=0.45\textwidth]{images/cueconflict/bicycle1-dog3.png} \\
\includegraphics[width=0.45\textwidth]{images/cueconflict/boat3-clock3.png} &
\includegraphics[width=0.45\textwidth]{images/cueconflict/bottle1-chair2.png}
\end{tabular}
\subcaption{Cue-Conflict}
\end{subfigure}
\begin{subfigure}[b]{0.28\textwidth}
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{images/edge/airplane10.png} &
\includegraphics[width=0.45\textwidth]{images/edge/car4.png} \\
\includegraphics[width=0.45\textwidth]{images/edge/chair7.png} &
\includegraphics[width=0.45\textwidth]{images/edge/dog5.png}
\end{tabular}
\subcaption{Edge}
\end{subfigure}
\begin{subfigure}[b]{0.28\textwidth}
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{images/silhouette/bear2.png} &
\includegraphics[width=0.45\textwidth]{images/silhouette/chair5.png} \\
\includegraphics[width=0.45\textwidth]{images/silhouette/dog3.png} &
\includegraphics[width=0.45\textwidth]{images/silhouette/truck8.png}
\end{tabular}
\subcaption{Silhouette}
\end{subfigure}
\begin{subfigure}[b]{0.28\textwidth}
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{images/sketch/0001_ske_dnn_0_airplane_00_airplane-0001-sketch-0.png} &
\includegraphics[width=0.45\textwidth]{images/sketch/0351_ske_dnn_0_cat_00_cat-0002-sketch-1.png} \\
\includegraphics[width=0.45\textwidth]{images/sketch/0557_ske_dnn_0_elephant_00_elephant-0014-sketch-20.png} &
\includegraphics[width=0.45\textwidth]{images/sketch/0761_ske_dnn_0_truck_00_truck-0088-sketch-41.png}
\end{tabular}
\subcaption{Sketch}
\end{subfigure}
\begin{subfigure}[b]{0.28\textwidth}
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{images/imagenetstylized/ILSVRC2012_val_00008098.png} &
\includegraphics[width=0.45\textwidth]{images/imagenetstylized/ILSVRC2012_val_00011329.png} \\
\includegraphics[width=0.45\textwidth]{images/imagenetstylized/ILSVRC2012_val_00015820.png} &
\includegraphics[width=0.45\textwidth]{images/imagenetstylized/ILSVRC2012_val_00040854.png}
\end{tabular}
\subcaption{Stylized ImageNet}
\end{subfigure}
\caption{Datasets used in our experiments. These datasets aim to capture different facets of what is informally called ``texture bias''. They attempt to decouple large-scale features, such as image structure and object shapes, from small-scale texture features. While humans can easily recognize the objects in these images in most cases, convolutional neural networks often struggle to do so with the same accuracy that they achieve on natural images. }
\label{fig:datasets}
\end{figure*}
\paragraph{Texture and style bias.}
The seminal work of Geirhos \etal \cite{geirhos2018imagenet} showed that modern neural networks are heavily biased towards local patterns, commonly referred to as texture or style.\footnote{In this paper we prefer the word ``texture'', but interchangeably use ``style'' as part of already set names and expressions, as in ``style transfer''.} The finding that, contrary to earlier hypotheses \cite{lecun2015deep, kriegeskorte2015deep}, the network output is dominated by domain-specific texture rather than generic, large-scale shape cues helps to explain their poor ability to generalize to unseen image domains. As a consequence, several authors have tried to address the issue under the umbrella term \emph{Domain Generalization} \cite{nam2021reducing, kang2022style, jeon2021feature}, although \emph{texture debiasing} would be a more accurate description of their approach: they take advantage of neural style transfer ideas \cite{kalischek2021light, gatys2017texture} and alter the texture features, thereby forcing the model to rely more on shape features. To that end, Geirhos \etal \cite{geirhos2018imagenet} put forward a new \textit{stylized} ImageNet dataset. That dataset contains images that are blended with a random texture image to generate a diverse set of ``texture-independent'' samples, such that the contained texture is not informative about the class label anymore. However, enforcing shape bias tends to deteriorate performance on the source domain, \ie the domain trained on, \eg ImageNet. To balance in-domain and \ac{ood} performance, Li \etal \cite{li2020shape} propose debiased learning by linearly interpolating the target labels between the texture class and the content class.
Inspired by neural style transfer, a whole palette of work \cite{nuriel2021permuted, nam2021reducing, kang2022style, jeon2021feature, wang2022feature} deals with changing the texture statistics in feature space. The authors of \cite{nuriel2021permuted} randomly swap mean and standard deviation along the spatial dimension in certain network layers, while \cite{wang2022feature} add learnable encoders and noise variables for each statistic to align them over different layers. In \cite{nam2021reducing} an additional adversarial content randomization block tricks the network into predicting the right style despite changed content. Jeon \etal \cite{jeon2021feature} sample new style vectors by defining Gaussians over the channel dimension.
\paragraph{Evaluation frameworks.}
Comparing algorithms based on some relevant quantitative performance metric has become the norm in the deep learning community. A common strategy for domain generalization, which by definition requires two or more different datasets, is to evaluate each algorithm on the held-out testing portion of every dataset. From the outcomes it is typically concluded that the method which, on average, performs better is superior. To account for variance in learning procedures \cite{bouthillier2021accounting}, a test bed similar to ours \cite{gulrajani2020search} trains multiple models per algorithm and uses a mean of means $\mu$ over different datasets as the final metric, i.e., method A is considered better than method B if $\mu_A > \mu_B$. Clearly, this conclusion is not necessarily justified from a statistical viewpoint \cite{dror2017replicability}, as it does not distinguish between true impact and random effects \cite{bouthillier2021accounting}.
In fact, statistical testing offers a formal theory for comparing multiple algorithms over multiple datasets \cite{neyman1928use}, a tool that is routinely used in scientific fields like physics or medicine. Hypothesis tests are employed to answer the question whether two (or more) algorithms differ significantly, given performance estimates on one or more datasets. Depending on assumptions about the scores, the comparison may require a parametric or a non-parametric test. In machine learning (ML) the requirements for parametric tests like the repeated-measures ANOVA \cite{fisher1992statistical} are typically not met, \eg, scores may not be normally and spherically distributed \cite{demvsar2006statistical}.
Non-parametric tests like the Friedman test \cite{friedman1937use} require fewer assumptions and are more generally applicable. A recent line of work \cite{wainer2022bayesian} proposed a Bayesian form of the Bradley-Terry model to determine how confident one can be that one algorithm is better than another.
\section{Biased learning}\label{sec:biasedlearning}
Shape-biased learning tries to minimize the error that stems from distribution shifts between the training and test domains. Formally, we train a neural network $g$ on a training dataset \{$(x_i,y_i)\}_{i\in [n]}$, where the value $x_i \in \mathcal{X}$ of a random variable $X$ is the input to the network and $y_i \in \mathcal{Y}$ of a random variable $Y$ is the corresponding ground truth. Let $P(X)$ be the source domain distribution. In shape-biased learning however, we test on samples $x \in \mathcal{X}'$ such that $P(X) \neq P(X')$. In our setting the distribution shift mainly results from changes in texture, \eg going from photos to cartoons or sketches. Shape-biased learning is a special case of domain generalization \cite{li2017deeper}, which also encompasses distribution shifts due to factors other than texture.
Existing literature can be grouped into two families, input-augmented \cite{geirhos2018imagenet, hendrycks2021many, li2020shape} and feature-augmented \cite{nuriel2021permuted, nam2021reducing, kang2022style, jeon2021feature, wang2022feature} methods. The former create style-randomized versions of the training data, whereas the latter apply stylization to the latent representations inside a neural network. The methodology to transfer style remains the same for both. Given two images $I, I' \in \mathbb{R}^{H\times W\times 3}$ and a pretrained network (\eg, VGG-19 \cite{simonyan2014very}), feature statistics at suitable network layers are computed for both images. The most common approach is to use first and second moments as in \cite{huang2017arbitrary}. Let $F_l(x) \in \mathbb{R}^{H\times W\times C_l}$ be the $l$-th feature map with input $x$, $\mu_{F_l(x)} \in \mathbb{R}^{C_l}$ its channel-wise mean (averaged across the spatial dimensions) and $\sigma_{F_l(x)} \in \mathbb{R}^{C_l}$ the corresponding standard deviation. Then style transfer boils down to \cite{huang2017arbitrary}
\begin{equation}
F_l^{\text{new}}(x) = \sigma_{F_l(x')} \bigg(\frac{F_l(x) - \mu_{F_l(x)}}{\sigma_{F_l(x)}}\bigg) + \mu_{F_l(x')},
\end{equation}
where $x, x'$ are two different samples.
In case of input augmentation, the (stylized) encodings are propagated through a decoder to generate images, either offline \cite{geirhos2018imagenet} or on the fly \cite{li2020shape}. This process is often applied to features at various representation levels to capture textures at different scales. Feature augmentation methods swap the statistics within their classification network and directly output the corresponding class predictions.
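The moment-matching operation above can be sketched in a few lines of NumPy (a minimal illustration following \cite{huang2017arbitrary}; the arrays stand in for the feature maps $F_l$, with channel statistics taken over the spatial dimensions):

```python
import numpy as np

def adain(f_content, f_style, eps=1e-5):
    """Re-normalize content features with the channel-wise mean and
    standard deviation of style features (shapes: H x W x C)."""
    mu_c = f_content.mean(axis=(0, 1))
    sigma_c = f_content.std(axis=(0, 1)) + eps  # eps avoids division by zero
    mu_s = f_style.mean(axis=(0, 1))
    sigma_s = f_style.std(axis=(0, 1)) + eps
    return sigma_s * (f_content - mu_c) / sigma_c + mu_s
```

After the swap, the output carries the first- and second-order channel statistics of the style input while preserving the spatial layout of the content input.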
\section{Evaluation practices}\label{sec:evaluationpractices}
We first give an overview of current evaluation practices and point out limitations when moving out of domain. Then follows an introduction to formal hypothesis testing.
\begin{figure*}[h!tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{images/Cue-Conflict.png}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{images/Edge.png}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{images/Silhouette.png}
\end{subfigure}
\caption{Accuracies of four different methods on three datasets. For each method, ten independent instances are trained and evaluated after every epoch. Lines denote the average performance across the ten runs, shaded areas denote their standard deviation, highlighting significant performance fluctuations that should be taken into account.}
\label{fig:testcurves}
\end{figure*}
\subsection{Current practice}
Existing literature about biased learning, and also about domain generalization in a broader sense, shares common evaluation practices. Algorithms are scored on different \ac{ood} datasets in terms of an evaluation metric, usually classification accuracy, and simply averaged across datasets. From a statistical perspective, this seemingly obvious practice has several deficiencies.
First, and most importantly, often only a single point estimate of the metric is reported. However, deep learning algorithms are trained in highly stochastic fashion. Performance varies between independently trained models, since they almost certainly correspond to different local minima of the loss function. This effect is often particularly strong in the presence of domain shifts (\cf \cref{fig:testcurves}). It is evident that best practice would be to, at the very least, train multiple models with the same data and report their mean accuracy and its standard deviation. In the context of texture bias this becomes all the more important, as the variability between (and also within) training runs tends to be high.
Second, neural network training is an iterative process without a natural, unambiguous stopping criterion. Which iteration to regard as the ``final'' model to be evaluated is therefore invariably based on some model selection rule. Common strategies are to pick the one with the best in-domain validation score, to use a fixed iteration count, or to declare convergence based on some early-stopping rule \cite{zhang2005boosting, yao2007early, baker2017accelerating}. Looking at the texture and shape literature, we find a lack of information about the chosen strategy. This makes a fair comparison all but impossible: texture bias schemes tend to exhibit unusually high fluctuations between training epochs, such that a different model selection rule may reverse the relative performance of two methods. This concerns all tested methods and various datasets (see \cref{fig:testcurves} and \cref{tab:bestvslast}).
In this regard, we emphasize the efforts of \cite{gulrajani2020search}, who defined a set of strategies to account for performance fluctuations and encourage multiple training runs to collect the necessary statistics for fair comparisons over numerous datasets. A diverse spectrum of image domains helps to mitigate over-fitting towards particular domains and favours generic mechanisms over dataset-specific heuristics. To obtain statistically sound conclusions it is, however, not enough to simply average scores over different datasets, as done in \cite{gulrajani2020search}. Naturally, different datasets possess different characteristics and as such are potentially not commensurable, so that simple averaging becomes meaningless \cite{webb2000multiboosting}.
We instead propose a rigorous, statistically founded hypothesis testing framework to compensate for the differences.
\subsection{Hypothesis testing}
This section serves as background for our experiments. We emphasize that it summarizes well-established textbook knowledge, \eg \cite{lehmann1986testing}.
The general aim is to compare $n$ algorithms $a_1, a_2, \dots, a_n$ on $m$ datasets $d_1, d_2, \dots, d_m$. We are interested in comparing the algorithms based on an evaluation metric $\mathcal{M}$, where $c_{ij}=\mathcal{M}(a_i, d_j)$ is the (averaged) score of algorithm $a_i$ on dataset $d_j$ (\eg, accuracy). Based on $c_{ij}$, we want to decide if the algorithms are statistically different \cite{demvsar2006statistical}. We assume that the underlying metric can be expressed additively by an effect due to the algorithm and an effect due to the dataset, and $c_{ij}$ is an estimate of this underlying metric. We define the null hypothesis and alternative hypothesis as follows:
\begin{equation}
\begin{split}
H_0 : \forall ik\,\, \gamma(a_i) &= \gamma(a_k) \\
H_1 : \exists ik\,\, \gamma(a_i) &\neq \gamma(a_k),
\end{split}
\end{equation}
where $\gamma(a_i)$ is the effect of algorithm $a_i$. In case of $n=2$ and $m=1$, we have a two-sample setting and with a little abuse of notation, we can write $d=\mathcal{M}(a_1, d) - \mathcal{M}(a_2, d)$ to test
$H_0 : \gamma(a_1) = \gamma(a_2)$ versus $H_1 : \gamma(a_1) \neq \gamma(a_2)$.
The decision whether to reject the null hypothesis is based on the $p$-value, which is defined as the probability of obtaining a more extreme result than the one observed, assuming that $H_0$ is true:
\begin{equation}
p \coloneqq P(D \geq |d| \mid H_0) + P( D \leq -|d| \mid H_0),
\end{equation}
where $D$ is the test statistic associated to our observed difference $d$. Beforehand, we must choose a (user-defined) significance level $\alpha$ and reject $H_0$ if $p < \alpha$. The setup gives rise to two (inevitable) error types: \textit{Type I error} when rejecting $H_0$ although it is true, and \textit{Type II error} when not rejecting $H_0$ although it is false. Here, the Type I error probability is equal to $\alpha$.
Generally, comparing multiple algorithms on multiple datasets follows a two-step procedure. First, an omnibus test is applied to the hypothesis of all algorithms performing equally well, \ie $H_0$ cannot be rejected. If $H_0$ can be rejected, a follow-up post-hoc test inspects each pair of algorithms to pinpoint where differences exist. A recommended omnibus test \cite{demvsar2006statistical} in the context of ML is the Friedman test \cite{friedman1937use}. It ranks all algorithms separately in each domain (\eg dataset), and uses the \emph{average rank} $R_i$ to calculate the Friedman statistic:
\begin{equation}
\chi_F^2 = \frac{12m}{n(n+1)}\bigg[\sum_{i=1}^n R_i^2 - \frac{n(n+1)^2}{4}\bigg],
\end{equation}
which follows approximately a $\chi^2$ distribution with $(n-1)$ degrees of freedom. The Iman-Davenport extension to the Friedman statistic compensates for too conservative decisions and is defined as
\begin{equation}
F_F = \frac{(m-1)\chi_F^2}{m(n-1)-\chi_F^2},
\end{equation}
which is approximately distributed according to an $F$-distribution with $n-1$ and $(m-1)(n-1)$ degrees of freedom.
If the $p$-value is below the pre-defined significance threshold $\alpha$, the Nemenyi post-hoc test \cite{nemenyi1963distribution} compares pairs of differences.
The Nemenyi test statistic for algorithms $a_i$ and $a_j$ is defined as
\begin{equation}
z = |R_{i} - R_{j}| / \sqrt{\frac{n(n+1)}{6m}}.
\end{equation}
For large values of $z$ the difference is significant.
Note that the two-step procedure is necessary to maintain the overall Type I error probability, which is not the case when testing all pairs within a two-sample framework at level $\alpha$.
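For illustration, the two-step procedure can be sketched directly from a raw score matrix (a minimal version assuming no tied scores within a dataset; library implementations such as SciPy's handle ties via average ranks, and the post-hoc step below uses the standard Nemenyi denominator $\sqrt{n(n+1)/(6m)}$ for $n$ algorithms on $m$ datasets):

```python
import numpy as np

def friedman_iman_davenport(scores):
    """scores: (m datasets x n algorithms) matrix; higher is better.
    Returns the Friedman statistic, its Iman-Davenport extension,
    and the average rank of each algorithm."""
    m, n = scores.shape
    # rank algorithms within each dataset (rank 1 = best score)
    ranks = np.argsort(np.argsort(-scores, axis=1), axis=1) + 1.0
    R = ranks.mean(axis=0)  # average rank per algorithm
    chi2_f = 12 * m / (n * (n + 1)) * (np.sum(R**2) - n * (n + 1)**2 / 4)
    f_f = (m - 1) * chi2_f / (m * (n - 1) - chi2_f)
    return chi2_f, f_f, R

def nemenyi_z(R, i, j, n, m):
    """Pairwise post-hoc statistic from average ranks R."""
    return abs(R[i] - R[j]) / np.sqrt(n * (n + 1) / (6 * m))
```

If the Iman-Davenport $p$-value falls below $\alpha$, the pairwise statistics are then computed for every pair of algorithms to locate the significant differences.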
\section{BiasBed framework}\label{sec:BiasBedframework}
We introduce \emph{BiasBed}{}, a highly flexible and generic test suite to implement, train and evaluate existing algorithms in a rigorous manner. We currently include seven texture debiasing algorithms, six datasets, different model selection strategies and an omnibus hypothesis test and a post-hoc hypothesis test. Adding existing and new algorithms, other datasets or running a hyperparameter search for one algorithm is a matter of a few lines. In fact, our framework provides a full integration with Weights and Biases \cite{wandb} providing the full range of logging, visualization and reports.
\paragraph{Algorithms.}
\emph{BiasBed}{} currently supports the following algorithms that are tailored towards shape biased learning. Our baseline algorithm is a plain ResNet-50 trained with standard empirical risk minimization \cite{vapnik1999overview}. The same algorithm can be trained on Stylized ImageNet to duplicate the approach by Geirhos \etal \cite{geirhos2018imagenet}. Additional algorithms are: Shape-Texture Debiased Neural Network (Debiased, \cite{li2020shape}), Style-Agnostic Network (SagNet, \cite{nam2021reducing}), Permuted Adaptive Instance Normalization (pAdaIN, \cite{nuriel2021permuted}), Informative Dropout (InfoDrop, \cite{shi2020informative}) and Deep Augmentation (DeepAug, \cite{hendrycks2021many}). We show in the Appendix how to easily add more algorithms besides the ones listed here.
\paragraph{Datasets.}
Besides the standard dataloader for ImageNet1k \cite{deng2009imagenet}, \emph{BiasBed}{} contains dataloaders for six additional datasets. Stylized ImageNet \cite{geirhos2018imagenet} and Cue-Conflict \cite{geirhos2021partial} are stylized versions of ImageNet (1000 classes) and 16-class ImageNet, respectively, where the latter particularly uses texture cues for stylization. Datasets that contain pure shape information are Edges, Sketches and Silhouettes from \cite{geirhos2021partial}.
\paragraph{Model selection.}
In \Cref{sec:evaluationpractices}, we have seen that when testing on out-of-distribution datasets, where performance fluctuations during training are rather high, the criterion used to pick the final model can have severe impacts on the final quality metrics. Therefore, we have implemented three model selection methods in \emph{BiasBed}{} to unify evaluation.
\begin{itemize}
\item \textit{Best-epoch} is an oracle method that chooses the best test set score over all epochs after training for a fixed number of epochs. Importantly, this method amounts to \textit{cherry-picking} and its use should be discouraged. We include it here only to highlight the severe effects of model selection on evaluation.
\item \textit{Last-n-epochs} averages the test set scores over the last $n$ epochs, to mitigate the fluctuations observed in the \ac{ood} regime. \Ie, once the training has converged the model is evaluated after every training epoch, and a representative average score is reported.
\item \textit{Best-training-validation} chooses the single test set score from the epoch that achieves the best \emph{in-domain} performance on the validation set. This method is the most rigorous one, but leads to the highest variance between independent test runs. To decrease that variance, more runs are needed.
\end{itemize}
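The three rules can be sketched as follows (an illustrative function over per-epoch scores, not \emph{BiasBed}{}'s actual API):

```python
import numpy as np

def select_score(test_scores, val_scores, method="best-val", n=30):
    """Illustrative model selection over per-epoch accuracies."""
    test_scores = np.asarray(test_scores)
    if method == "best-epoch":        # oracle; discouraged
        return test_scores.max()
    if method == "last-n-epochs":     # average of last n checkpoints
        return test_scores[-n:].mean()
    if method == "best-val":          # epoch with best in-domain score
        return test_scores[np.argmax(val_scores)]
    raise ValueError(method)
```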
\paragraph{Evaluation.}
\emph{BiasBed}{} includes a full, rigorous evaluation protocol that can be run with a single command. It collects the results according to the defined model selection method, algorithms and datasets, and computes the averaged scores. In a second step, the Friedman test with Iman-Davenport extension \cite{iman1980approximations} is applied, followed by an optional post-hoc Nemenyi test to identify significant differences. All results are reported back in a Pandas dataframe \cite{reback2020pandas, mckinney-proc-scipy-2010} or optionally as \LaTeX~tables (as those in \Cref{sec:experiments}).
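The omnibus statistic can be sketched in a few lines (a minimal NumPy/SciPy illustration of the Friedman test with Iman-Davenport correction, not \emph{BiasBed}{}'s actual implementation):

```python
import numpy as np
from scipy import stats

def iman_davenport(scores):
    """Friedman omnibus test with Iman-Davenport correction.

    scores: (n_datasets, n_algorithms) array of accuracies; algorithms
    are ranked independently on each dataset (rank 1 = best).
    """
    n, k = scores.shape
    ranks = np.apply_along_axis(lambda r: stats.rankdata(-r), 1, scores)
    mean_ranks = ranks.mean(axis=0)
    # Friedman chi-squared statistic over the average ranks
    chi2 = 12.0 * n / (k * (k + 1)) * (
        np.sum(mean_ranks ** 2) - k * (k + 1) ** 2 / 4.0)
    # Iman-Davenport correction, F-distributed under H0
    f_stat = (n - 1) * chi2 / (n * (k - 1) - chi2)
    p_value = stats.f.sf(f_stat, k - 1, (k - 1) * (n - 1))
    return f_stat, p_value
```

A significant omnibus result is then followed by the post-hoc test to localize which pairs differ.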
\section{Experiments}\label{sec:experiments}
We run experiments for all implemented algorithms and extensively compare different model selection methods, highlighting the need for a common, explicit protocol. We use hypothesis testing to properly compare across algorithms. The following results are all generated by running a single command for each algorithm in \emph{BiasBed}{}.
\subsection{Model Selection}
\begin{table}[ht]
\centering
\footnotesize
\begin{tabular}{@{}lccc@{}}
\toprule
Algorithm & Silhouette & Edge & Cue-Conflict \\
\midrule
ERM & 50.4 / 47.1 (\textcolor{red}{-3.3}) & 32.4 / 22.7 (\textcolor{red}{-9.7}) & 26.6 / 22.0 (\textcolor{red}{-4.6}) \\
SagNet & 46.7 / 42.2 (\textcolor{red}{-4.5}) & 28.2 / 19.9 (\textcolor{red}{-8.3}) & 22.1 / 21.0 (\textcolor{red}{-1.1})\\
pAdaIN & 48.8 / 45.6 (\textcolor{red}{-3.2}) & 28.4 / 21.0 (\textcolor{red}{-7.4}) & 24.4 / 21.5 (\textcolor{red}{-2.9})\\
InfoDrop & 51.0 / 46.6 (\textcolor{red}{-4.4}) & 32.2 / 20.5 (\textcolor{red}{-9.7}) & 26.8 / 23.4 (\textcolor{red}{-3.4})\\
\bottomrule
\end{tabular}
\caption{Comparison of best epoch (oracle) \vs last epoch for selected algorithms and datasets.}
\label{tab:bestvslast}
\end{table}
We quantitatively elaborate on the findings of \cref{fig:testcurves}. We train each algorithm ten times and test on \textit{Silhouette}, \textit{Edge} and \textit{Cue-Conflict} after \textit{every} epoch during training. In \Cref{tab:bestvslast} we report the best accuracy over all epochs versus the score at the last epoch, each averaged across all runs. Clearly, all algorithms drop severely in performance when one does not use an oracle model selection method, highlighting the need to properly define criteria for how to select the model to be used. We emphasize that this step should always be part of an evaluation.
\subsection{Results}
\paragraph{Evaluation.}
We report results choosing the best validation performance for all algorithms and datasets in \Cref{tab:bestval} and choosing the last 30 epochs in \Cref{tab:lastn}. All results are once more gathered across multiple independent runs to account for randomness in training. In particular, we do not include an average column across dataset scores per algorithm, as discussed in \Cref{sec:evaluationpractices}.
\begin{table*}
\centering
\caption{Results choosing best training validation over 100 epochs per run.}
\label{tab:bestval}
\begin{tabular}{@{}lcccccc@{}}
\toprule
Algorithm & Silhouette & Edge & Sketch & CueConflict & ImageNet1k & Stylized IN \\
\midrule
ERM & 47.1 $\pm$ 2.3 & 22.7 $\pm$ 4.6 & 57.1 $\pm$ 1.4 & 22.0 $\pm$ 0.8 & 73.8 $\pm$ 0.1 & 7.7 $\pm$ 0.2 \\
Debiased & 48.7 $\pm$ 2.8 & 30.8 $\pm$ 5.1 & 60.5 $\pm$ 1.2 & 28.9 $\pm$ 1.1 & 74.5 $\pm$ 0.1 & 16.1 $\pm$ 0.3 \\
DeepAug & 51.9 $\pm$ 2.5 & 36.1 $\pm$ 4.8 & 63.8 $\pm$ 1.9 & 30.7 $\pm$ 0.8 & 73.7 $\pm$ 0.2 & 13.4 $\pm$ 0.3 \\
Geirhos \etal \cite{geirhos2018imagenet} & 47.1 $\pm$ 2.9 & 59.8 $\pm$ 3.1 & 70.3 $\pm$ 1.5 & 53.4 $\pm$ 0.9 & 56.0 $\pm$ 0.3 & 53.1 $\pm$ 0.3 \\
InfoDrop & 46.6 $\pm$ 2.8 & 19.9 $\pm$ 3.4 & 56.6 $\pm$ 1.5 & 23.4 $\pm$ 1.1 & 73.3 $\pm$ 0.1 & 8.0 $\pm$ 0.2 \\
SagNet & 42.5 $\pm$ 1.2 & 20.1 $\pm$ 1.7 & 58.4 $\pm$ 1.0 & 21.0 $\pm$ 0.4 & 72.7 $\pm$ 0.3 & 6.2 $\pm$ 0.4 \\
pAdaIN & 45.6 $\pm$ 2.2 & 21.0 $\pm$ 3.1 & 55.9 $\pm$ 1.8 & 21.5 $\pm$ 0.4 & 73.3 $\pm$ 0.1 & 8.0 $\pm$ 0.2 \\
\bottomrule
\vspace{1em}
\end{tabular}
\end{table*}
\begin{table*}
\centering
\caption{Results choosing the average over the last 30 epochs per run.}
\label{tab:lastn}
\begin{tabular}{@{}lcccccc@{}}
\toprule
Algorithm & Silhouette & Edge & Sketch & CueConflict & ImageNet1k & Stylized IN \\
\midrule
ERM & 46.6 $\pm$ 2.3 & 22.1 $\pm$ 4.0 & 56.5 $\pm$ 1.6 & 22.1 $\pm$ 0.9 & 73.4 $\pm$ 0.3 & 7.6 $\pm$ 0.3 \\
Debiased & 48.3 $\pm$ 2.8 & 29.4 $\pm$ 4.9 & 60.1 $\pm$ 1.3 & 29.1 $\pm$ 1.1 & 74.0 $\pm$ 0.3 & 15.6 $\pm$ 0.5 \\
DeepAug & 51.1 $\pm$ 2.2 & 34.8 $\pm$ 4.3 & 63.1 $\pm$ 1.5 & 30.3 $\pm$ 0.9 & 73.0 $\pm$ 0.3 & 13.0 $\pm$ 0.3 \\
Geirhos \etal \cite{geirhos2018imagenet} & 47.5 $\pm$ 2.4 & 59.2 $\pm$ 3.4 & 70.3 $\pm$ 1.3 & 53.7 $\pm$ 1.3 & 55.5 $\pm$ 0.5 & 52.5 $\pm$ 0.6 \\
InfoDrop & 46.6 $\pm$ 2.8 & 20.0 $\pm$ 3.1 & 57.3 $\pm$ 1.5 & 23.2 $\pm$ 1.0 & 72.9 $\pm$ 0.3 & 7.8 $\pm$ 0.3 \\
SagNet & 42.1 $\pm$ 1.3 & 19.9 $\pm$ 1.5 & 58.1 $\pm$ 1.1 & 21.1 $\pm$ 0.5 & 72.6 $\pm$ 0.3 & 6.1 $\pm$ 0.4 \\
pAdaIN & 45.5 $\pm$ 2.2 & 20.9 $\pm$ 3.1 & 56.0 $\pm$ 1.8 & 21.7 $\pm$ 0.7 & 73.1 $\pm$ 0.1 & 8.1 $\pm$ 0.2 \\
\bottomrule
\end{tabular}
\end{table*}
\paragraph{Hypothesis testing.}
We conduct the Friedman test on the scores of \Cref{tab:bestval} at a significance level of $\alpha = 0.05$. The returned F-statistic is $7.41$ with an uncorrected $p$-value of $0.0001$, \ie $p < \alpha$, and we therefore reject $H_0$. In \Cref{tab:posthoc} we report all pairwise comparisons of algorithms using the Nemenyi post-hoc test. Only two pairs of algorithms show significant differences when compared on the six datasets: Debiased and SagNet, as well as DeepAug and SagNet. More importantly, using the best validation selection method, we find no significant difference between any algorithm and ERM.
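For intuition, the Nemenyi test is often summarized by its critical difference (CD): two algorithms differ significantly if their average ranks differ by more than the CD. A sketch of this variant (we report exact $p$-values instead; the $q_\alpha$ constants are the standard studentized-range values at $\alpha = 0.05$ as tabulated by Dem\v{s}ar, 2006):

```python
import numpy as np

# Critical values q_alpha at alpha = 0.05 for k compared algorithms
# (studentized range statistic divided by sqrt(2); Demsar, 2006)
Q_ALPHA_005 = {2: 1.960, 3: 2.343, 4: 2.569, 5: 2.728,
               6: 2.850, 7: 2.949, 8: 3.031, 9: 3.102, 10: 3.164}

def nemenyi_cd(k, n_datasets):
    """Critical difference: two algorithms differ significantly iff
    their average ranks differ by more than this value."""
    return Q_ALPHA_005[k] * np.sqrt(k * (k + 1) / (6.0 * n_datasets))
```

With $k = 7$ algorithms compared on only six datasets, the critical difference is roughly 3.7 average-rank units, which illustrates why only the largest rank gaps come out significant.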
\begin{table*}[th]
\centering
\caption{$P$-values of post-hoc Nemenyi test. Bold numbers are significant pairwise differences.}
\label{tab:posthoc}
\begin{tabular}{@{}lccccccc@{}}
\toprule
Algorithm & ERM & Debiased & DeepAug & Geirhos \etal \cite{geirhos2018imagenet} & InfoDrop & SagNet & pAdaIN \\
\midrule
ERM & & 0.74 & 0.66 & 0.81 & 0.90 & 0.66 & 0.90 \\
Debiased & & & 0.90 & 0.90 & 0.26 & \textbf{0.04} & 0.15 \\
DeepAug & & & & 0.90 & 0.19 & \textbf{0.02} & 0.11 \\
Geirhos \etal \cite{geirhos2018imagenet} & & & & & 0.33 & 0.05 & 0.19 \\
InfoDrop & & & & & & 0.90 & 0.90 \\
SagNet & & & & & & & 0.90 \\
\bottomrule
\end{tabular}
\end{table*}
\section{Discussion}\label{sec:discussion}
Our results, found in Tabs. \ref{tab:bestvslast}, \ref{tab:bestval}, \ref{tab:lastn}, and most importantly \cref{tab:posthoc}, allow us to draw several conclusions that support the usage of the formal hypothesis testing framework for this evaluation.
\paragraph{Model selection criteria.} If model selection is done based on validation accuracy, the margins are extremely small compared to the uncertainty, as measured by the standard deviation over 10 different runs of an algorithm. This suggests that, while validation performance is of course to some degree predictive of \ac{ood} performance, it falls short of capturing all relevant effects that impact such generalization. Having an explicit, formal model selection strategy is therefore of paramount importance. We can also conclude that reporting results for a single run is not a reliable way of comparing different approaches, as the stochastic nature of the training alone can flip the relative performance of two methods.
\paragraph{Intra-run and inter-run variability.} While some authors acknowledge variability in the results across different runs, it is rare to find statements about intra-run variability, \ie, strong performance fluctuations between nearby training checkpoints, as depicted by the shaded regions in \cref{fig:testcurves}. Some authors report the mean and standard deviation of performance metrics over several runs to mitigate this; but given the significant variations not only between independent training runs, but also among different epochs of an apparently converged training, averages without the associated uncertainty are problematic, and it is nearly impossible to draw reliable conclusions without a formal framework. These types of variability stem from different sources, need to be handled in different ways, and their complex interplay must be acknowledged and analysed more closely.
\paragraph{Importance of hypothesis testing.} The presented results also highlight the importance of using a formal comparison framework when dealing with such complex cases. When informally analysing Tables \ref{tab:bestval} and \ref{tab:lastn}, such as by simply averaging each algorithm's performance across datasets, it is easy to arrive at erroneous conclusions based on spurious results. For instance, the results in \cref{tab:bestval} could lead one to believe that the Debiased and DeepAug algorithms outperform competitors due to their high performance on datasets such as Sketch, CueConflict, and Stylized IN, with little or no performance loss on ImageNet1k when compared to ERM, the baseline result. But this analysis does not take into account the variances of these results and the complex interplay between the different measured accuracies, sample sizes, \etc. In fact, the results of the post-hoc Nemenyi test, reported in \cref{tab:posthoc}, tell us that these algorithms do not differ from ERM in a statistically significant way -- one cannot reject the null hypothesis that the two methods are on par. It is possible that using more datasets would allow us to identify which algorithms, if any, have a statistically significant effect, but even with the rather large number of runs in our test bed, no statistically supported difference has been observed yet, and we cannot confidently establish a correct ranking.
\paragraph{Interpretation of hypothesis testing.} When using hypothesis tests, one needs to be precise \wrt the conclusions drawn from the results. In particular, rejecting the null hypothesis means that, if the null hypothesis were correct, a difference as extreme as (or more extreme than) the observed one would occur only with probability at most $\alpha$. However, even a highly significant difference does not allow us to conclude superiority of one algorithm over another on an unseen domain: the \textit{significant} difference is only valid within the experimental setting, \ie for the datasets and algorithms used. In turn, failing to reject the null hypothesis should not be misinterpreted as a proof of equality of algorithms.
\section{Conclusion}\label{sec:conclusion}
When analyzing existing methods tailored towards texture-free learning, common datasets, evaluation protocols and reporting standards are missing. In this work, we introduced \emph{BiasBed}{} to alleviate the aforementioned limitations. In particular, we have seen that model selection methods play a critical role in OOD testing and fair comparisons are only possible if algorithms are evaluated in a rigorous fashion. Hypothesis testing can fill this gap by providing statistically sound comparisons. Moreover, one must be careful to draw the right conclusions, \eg, low significance of a difference does not necessarily mean that two methods perform on par, but may also indicate that there are too few observations to make a confident statement about their difference. Our framework provides the necessary tools to implement, test and compare existing and new algorithms. Our intention is not to make negative claims or invalidate any particular approach. Rather, we hope to encourage the community to leverage existing statistical expertise and ensure fair and rigorous quantitative evaluations that drive the field forward.
\section{Broader Impact}\label{sec:broaderimpact}
The aim of the work presented in this paper is to provide a solid foundation for future research on style bias of neural networks. Such hypothesis testing frameworks are commonplace in other fields where the effects of different experiments can only be observed in noisy results, such as many areas of physics, but also medicine and psychology. We hope that the presented framework will be used by other researchers through the openly released codebase to find and validate novel algorithms that mitigate texture bias. Furthermore, the proposed methodology -- and codebase -- can be used to perform rigorous testing of algorithms for other computer vision and machine learning problems, such as domain adaptation and domain generalization, where validating different algorithms is notoriously hard. It is our belief that setting a higher bar and expecting rigorous testing from authors who propose novel methods will, in the long run, improve the quality of research in this field.
\clearpage
{\small
\bibliographystyle{ieee_fullname}
\section{BiasBed}
\emph{BiasBed}{} is a Python package that can simply be installed with the pip package manager. Once installed, sets of experiments can be run with a single command. We follow by default a common training protocol: the learning rate is set to $1e\text{-}4$ and decayed by a factor of $0.1$ after 30, 60 and 90 epochs, the SGD optimizer uses momentum $0.9$ with weight decay $1e\text{-}4$, and we train for 100 epochs in total. We use ResNet-50 as the backbone model for all experiments.
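The default protocol above corresponds to the following PyTorch setup (a sketch; a small stand-in module replaces the ResNet-50 backbone so the snippet is self-contained):

```python
import torch
from torch import nn

model = nn.Linear(8, 2)   # stand-in for the ResNet-50 backbone
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4,
                            momentum=0.9, weight_decay=1e-4)
# decay the learning rate by 0.1 after epochs 30, 60 and 90
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[30, 60, 90], gamma=0.1)

for epoch in range(100):
    # ... one pass over the training set, then:
    scheduler.step()
```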
\subsection{Adding algorithms}
New algorithms in \emph{BiasBed}{} can easily be added by extending the provided abstract algorithm class. The important function to implement here is \verb|update(x, y)|. This update function receives batches of input data and corresponding ground truth. It needs to predict the logits, compute the loss and backpropagate the gradients. To implement empirical risk minimization \cite{vapnik1999overview}, for example, we add a folder \verb|ERM| in the algorithms folder and add \verb|algorithm.py| with the corresponding \verb|update(x, y)| function:
\usemintedstyle{fruity}
\begin{minted}[frame=single, bgcolor=bg]{python}
def update(self, x, y):
    self.optimizer.zero_grad(set_to_none=True)
    with autocast(enabled=self.algorithm_cfg.mixedprec):
        logits = self.model(x)
        loss = self.loss(logits, y)
    # Backward loss
    self.scaler.scale(loss).backward()
    self.scaler.step(self.optimizer)
    self.scaler.update()
\end{minted}
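The objects used in \verb|update(x, y)| -- loss, optimizer and gradient scaler -- would typically be created once per algorithm instance. A hypothetical constructor sketch (names are illustrative, not \emph{BiasBed}{}'s actual class layout):

```python
import torch
from torch import nn

class ERM:
    """Hypothetical constructor counterpart to update(x, y);
    illustrative only, not BiasBed's actual class."""
    def __init__(self, model, lr=1e-4, mixedprec=True):
        self.model = model
        self.loss = nn.CrossEntropyLoss()
        self.optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                                         momentum=0.9, weight_decay=1e-4)
        # GradScaler is a no-op when mixedprec is False (e.g. on CPU)
        self.scaler = torch.cuda.amp.GradScaler(enabled=mixedprec)
```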
Each algorithm folder contains two additional config files \verb|config.yaml| and \verb|sweep.yaml|. The former includes all algorithm specific (hyper-) parameters:
\usemintedstyle{fruity}
\begin{minted}[frame=single, bgcolor=bg]{yaml}
# Include default parameters of your algorithm here.
mixedprec: True
backbone: resnet50
optimizer: SGD
learning_rate: 1e-4
milestones:
  - 30
  - 60
  - 90
momentum: 0.9
weight_decay: 1e-4
gamma: 0.1
\end{minted}
and the latter includes all (hyper-) parameters necessary for sweeping over individual parameters in the algorithm and main config files, \eg
\begin{minted}[frame=single, bgcolor=bg]{yaml}
# Include sweep parameters of your algorithm here
parameters:
  epoch:
    values:
      - 10
      - 20
      - 30
  momentum:
    values:
      - 0.7
      - 0.8
      - 0.9
\end{minted}
\emph{BiasBed}{} takes care of registering and adding the algorithm to the framework. See \Cref{sec:runningbiasbed} on how to run experiments with the newly added algorithm.
\subsection{Adding datasets}
Adding dataloaders for datasets is equally simple. We need to add a folder with the dataloader name and implement the \verb|train_loader, val_loader, test_loader| from the dataloader template in a file called \verb|dataloader.py|, \eg in the case of Cue-Conflict
\usemintedstyle{fruity}
\begin{minted}[frame=single, bgcolor=bg]{python}
def train_loader(self) -> Iterable:
    data_loader = DataLoader(self.dataset,
                             batch_size=self.config.training.batch_size,
                             num_workers=self.config.num_workers,
                             sampler=self.sampler)
    return data_loader
\end{minted}
where \verb|self.dataset| is a PyTorch \verb|ImageFolder| dataset. Of course, each dataset comes with its own config file to include dataset specific (hyper-) parameters. Once the dataloader is added, the dataset can be seamlessly added to the main config file detailed in the following section.
\subsection{Running BiasBed}\label{sec:runningbiasbed}
\emph{BiasBed}{} supports various training modes, including full support of half precision, multi-GPU training, and hyperparameter sweeping with cluster support. We provide a fully flexible main configuration file to activate or deactivate settings; only a single line has to be edited to train an algorithm on a single GPU or on a compute node with multiple GPUs. Weights and Biases \cite{wandb} is fully integrated into our framework as well, such that its sweep tools are automatically used to search and tune hyperparameters. A \emph{BiasBed}{} user can either use our code to launch runs on common high performance computing environments or easily add a custom launcher.
We can start a single run with the command:
\usemintedstyle{fruity}
\begin{minted}[frame=single, bgcolor=bg]{bash}
biasedbed single # Starts a single run with the algorithm set in config.yaml
\end{minted}
or start automated hyperparameter tuning with
\usemintedstyle{fruity}
\begin{minted}[frame=single, bgcolor=bg]{bash}
biasedbed sweep # Starts a sweep with settings from sweeping/sweep.yaml
\end{minted}
In both cases, algorithm and dataset settings can be edited and activated in \verb|config.yaml|. For example, we can train SagNet \cite{nam2021reducing} with eight GPUs on standard ImageNet and evaluate the model on six different datasets with the following configuration:
\begin{minted}[frame=single, bgcolor=bg]{yaml}
# Distributed training
num_workers: 8
distributed: 1
world_size: 8

# Algorithm
algorithm:
  name: SagNet

# Training
training:
  dataset:
    name: ImageNet1k
  epochs: 100

# Testing
testing:
  datasets:
    - CueConflict
    - Silhouette
    - Sketch
    - Edge
    - ImageNetStylized
    - ImageNet1k
  interval: 1
\end{minted}
\section{Algorithms}
\paragraph{ERM} ``Empirical Risk Minimization" \cite{vapnik1999overview} minimizes the cross entropy loss across the training data and serves as our baseline algorithm.
\paragraph{Stylized ImageNet} ``Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness" \cite{geirhos2018imagenet} is the first paper to recognize and rigorously demonstrate texture bias in existing neural architectures. To reduce texture bias, the authors propose a stylized version of ImageNet, where they use AdaIN \cite{huang2017arbitrary} to replace the texture of one image with that of another, randomly chosen ImageNet image.
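The AdaIN operation \cite{huang2017arbitrary} underlying this stylization can be sketched in a few lines (a NumPy illustration of the feature-statistics transfer, not the full stylization pipeline):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization: re-scale content features to
    match the style features' per-channel statistics (C, H, W)."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_sig = content.std(axis=(1, 2), keepdims=True)
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_sig = style.std(axis=(1, 2), keepdims=True)
    return s_sig * (content - c_mu) / (c_sig + eps) + s_mu
```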
\paragraph{Debiased} ``Shape-texture debiased neural network training" \cite{li2020shape} extends the idea of \cite{geirhos2018imagenet} by augmenting the dataset online, \ie when feeding a batch of (original ImageNet) images into the network. Instead of training only on the content label, a convex combination of the style image class label and the content class label is used to guide the network to ``debiased" weights, \ie the network is forced to predict the content class solely from shape cues and the style class solely from texture cues. The authors argue that performance is generally higher on all tested datasets (and not only on shape-biased sets) compared to \cite{geirhos2018imagenet}.
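The convex label combination can be sketched as follows (illustrative only; the mixing weight \verb|lam| is a placeholder, not the value used in the paper):

```python
import numpy as np

def mixed_label_loss(log_probs, y_content, y_style, lam=0.5):
    """Convex combination of content and style cross-entropy losses.
    lam is a placeholder weight, not the paper's value."""
    n = len(y_content)
    ce_content = -log_probs[np.arange(n), y_content].mean()
    ce_style = -log_probs[np.arange(n), y_style].mean()
    return lam * ce_content + (1.0 - lam) * ce_style
```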
\paragraph{DeepAugment} ``The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization" \cite{hendrycks2021many} introduces additional deep augmentation techniques similar to \cite{geirhos2018imagenet}. In DeepAugment, an image is passed through an image-to-image network whose forward pass is distorted by randomly altering the network. This distorts the resulting image in a way similar to classical augmentation methods. The authors defined a number of perturbations, such as zeroing, negating, convolving, transposing, or switching activation functions, and drew random samples from them for each image. The networks used in DeepAugment are the pre-trained networks EDSR \cite{EDSR} and CAE \cite{CAE}. The resulting augmented images for ImageNet are provided at \url{https://github.com/hendrycks/imagenet-r/tree/master/DeepAugment}. In principle, they can be combined with any other algorithm by appending them to the standard ImageNet dataset.
\paragraph{InfoDrop} ``Informative Dropout for Robust Representation Learning: A Shape-bias Perspective" \cite{shi2020informative} proposes a model-agnostic, light-weight method to reduce texture bias in neural networks. The main idea is to emphasize visual primitives such as edges and corners (\ie, regions with high shape information) and to suppress homogeneous and repetitive patterns (\ie, regions with low shape information). During training, neurons corresponding to input patches with low shape information are more likely to be zeroed out than neurons corresponding to patches with high shape information.
\paragraph{SagNet} ``Reducing Domain Gap by Reducing Style Bias" \cite{nam2021reducing} introduces a style-agnostic network that becomes invariant to texture via a style randomization and a content randomization network. Features of a shared encoder are randomly interpolated with style features from another image in the batch. The style network is forced to predict the correct style label and the content network needs to predict the correct content label. The gradients of the former network are used adversarially to update the shared encoder.
\paragraph{pAdaIN} ``Permuted AdaIN: Reducing the Bias Towards Global Statistics in Image Classification" \cite{nuriel2021permuted} follows a similar idea as SagNet but only incorporates a single style network that is forced to predict the correct content label from style-interpolated features.
\section{Full BiasBed results}
We report all individual results per algorithm according to the best in-domain validation score and the average score over the last 30 epochs.
\subsection{Model selection: average of last 30 epochs}
\subsubsection{ERM \cite{vapnik1999overview}}
\begin{table}[H]
\centering
\begin{tabular}{@{}lcccccc@{}}
\toprule
Run & Silhouette & Edge & Sketch & CueConflict & ImageNet1k & ImageNetStylized \\
\midrule
1 & 44.3 $\pm$ 1.5 & 20.9 $\pm$ 1.8 & 56.7 $\pm$ 1.5 & 22.1 $\pm$ 0.7 & 73.6 $\pm$ 0.3 & 7.6 $\pm$ 0.2 \\
2 & 46.1 $\pm$ 1.7 & 20.8 $\pm$ 1.8 & 55.7 $\pm$ 1.0 & 22.1 $\pm$ 0.5 & 73.3 $\pm$ 0.4 & 7.4 $\pm$ 0.2 \\
3 & 46.7 $\pm$ 2.0 & 21.3 $\pm$ 3.1 & 54.1 $\pm$ 0.9 & 22.1 $\pm$ 0.6 & 73.2 $\pm$ 0.4 & 7.7 $\pm$ 0.3 \\
4 & 46.2 $\pm$ 1.8 & 21.8 $\pm$ 2.6 & 55.8 $\pm$ 1.0 & 20.9 $\pm$ 0.6 & 73.4 $\pm$ 0.3 & 7.8 $\pm$ 0.2 \\
5 & 48.1 $\pm$ 1.7 & 25.5 $\pm$ 1.8 & 57.5 $\pm$ 1.1 & 23.7 $\pm$ 0.8 & 73.3 $\pm$ 0.3 & 7.6 $\pm$ 0.2 \\
6 & 48.4 $\pm$ 2.0 & 21.4 $\pm$ 2.4 & 56.7 $\pm$ 1.1 & 22.2 $\pm$ 0.6 & 73.2 $\pm$ 0.3 & 7.3 $\pm$ 0.3 \\
7 & 46.8 $\pm$ 1.5 & 22.0 $\pm$ 2.4 & 58.0 $\pm$ 0.9 & 21.7 $\pm$ 0.7 & 73.5 $\pm$ 0.3 & 7.6 $\pm$ 0.3 \\
8 & 48.0 $\pm$ 1.7 & 15.8 $\pm$ 1.5 & 57.6 $\pm$ 1.3 & 21.9 $\pm$ 0.6 & 73.2 $\pm$ 0.3 & 7.8 $\pm$ 0.3 \\
9 & 44.4 $\pm$ 2.1 & 29.1 $\pm$ 2.1 & 56.6 $\pm$ 1.1 & 22.5 $\pm$ 0.8 & 73.6 $\pm$ 0.3 & 7.7 $\pm$ 0.3 \\
10 & 47.0 $\pm$ 1.7 & 20.9 $\pm$ 1.9 & 56.1 $\pm$ 1.0 & 22.2 $\pm$ 0.6 & 73.5 $\pm$ 0.3 & 7.6 $\pm$ 0.2 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Debiased \cite{li2020shape}}
\begin{table}[H]
\centering
\begin{tabular}{@{}lcccccc@{}}
\toprule
Run & Silhouette & Edge & Sketch & CueConflict & ImageNet1k & ImageNetStylized \\
\midrule
1 & 50.5 $\pm$ 1.5 & 31.6 $\pm$ 3.3 & 60.0 $\pm$ 1.1 & 28.9 $\pm$ 0.8 & 74.0 $\pm$ 0.3 & 15.9 $\pm$ 0.5 \\
2 & 47.6 $\pm$ 1.6 & 32.7 $\pm$ 3.0 & 59.3 $\pm$ 1.3 & 29.1 $\pm$ 0.5 & 73.8 $\pm$ 0.3 & 15.3 $\pm$ 0.4 \\
3 & 46.1 $\pm$ 1.9 & 27.3 $\pm$ 2.3 & 60.8 $\pm$ 1.0 & 30.3 $\pm$ 0.5 & 74.1 $\pm$ 0.3 & 15.9 $\pm$ 0.4 \\
4 & 48.7 $\pm$ 1.9 & 20.1 $\pm$ 1.3 & 61.1 $\pm$ 1.1 & 28.7 $\pm$ 0.8 & 74.0 $\pm$ 0.3 & 15.4 $\pm$ 0.5 \\
5 & 48.0 $\pm$ 1.8 & 36.3 $\pm$ 2.0 & 60.2 $\pm$ 1.0 & 29.6 $\pm$ 0.8 & 74.2 $\pm$ 0.3 & 15.8 $\pm$ 0.5 \\
6 & 46.9 $\pm$ 2.0 & 27.7 $\pm$ 2.6 & 60.6 $\pm$ 0.9 & 29.6 $\pm$ 0.6 & 74.2 $\pm$ 0.4 & 15.8 $\pm$ 0.5 \\
7 & 44.5 $\pm$ 2.3 & 31.2 $\pm$ 2.4 & 59.0 $\pm$ 1.1 & 27.1 $\pm$ 0.5 & 74.0 $\pm$ 0.4 & 15.4 $\pm$ 0.5 \\
8 & 50.2 $\pm$ 2.0 & 27.1 $\pm$ 2.4 & 61.3 $\pm$ 0.9 & 30.2 $\pm$ 0.6 & 73.9 $\pm$ 0.3 & 15.7 $\pm$ 0.4 \\
9 & 49.3 $\pm$ 3.0 & 32.7 $\pm$ 2.6 & 59.8 $\pm$ 0.8 & 29.4 $\pm$ 0.5 & 74.0 $\pm$ 0.3 & 15.6 $\pm$ 0.4 \\
10 & 51.1 $\pm$ 1.2 & 27.3 $\pm$ 1.9 & 59.4 $\pm$ 0.8 & 28.0 $\pm$ 0.5 & 74.0 $\pm$ 0.3 & 15.4 $\pm$ 0.5 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{DeepAug \cite{hendrycks2021many}}
\begin{table}[H]
\centering
\begin{tabular}{@{}lcccccc@{}}
\toprule
Run & Silhouette & Edge & Sketch & CueConflict & ImageNet1k & ImageNetStylized \\
\midrule
1 & 50.3 $\pm$ 1.4 & 32.3 $\pm$ 2.3 & 61.4 $\pm$ 1.0 & 29.2 $\pm$ 1.0 & 73.0 $\pm$ 0.3 & 13.0 $\pm$ 0.3 \\
2 & 51.0 $\pm$ 1.8 & 32.3 $\pm$ 2.3 & 62.5 $\pm$ 1.1 & 31.2 $\pm$ 0.7 & 73.5 $\pm$ 0.1 & 13.2 $\pm$ 0.3 \\
3 & 53.7 $\pm$ 1.6 & 36.6 $\pm$ 2.2 & 62.8 $\pm$ 1.0 & 30.9 $\pm$ 0.7 & 72.8 $\pm$ 0.3 & 13.2 $\pm$ 0.3 \\
4 & 50.5 $\pm$ 1.7 & 29.1 $\pm$ 1.9 & 65.0 $\pm$ 0.7 & 30.3 $\pm$ 0.7 & 73.1 $\pm$ 0.2 & 12.9 $\pm$ 0.2 \\
5 & 52.5 $\pm$ 1.3 & 32.8 $\pm$ 2.4 & 62.5 $\pm$ 1.2 & 30.9 $\pm$ 0.7 & 73.0 $\pm$ 0.3 & 12.9 $\pm$ 0.3 \\
6 & 51.0 $\pm$ 1.4 & 34.8 $\pm$ 1.8 & 64.0 $\pm$ 1.1 & 30.6 $\pm$ 0.6 & 72.9 $\pm$ 0.3 & 13.1 $\pm$ 0.2 \\
7 & 53.4 $\pm$ 1.9 & 39.6 $\pm$ 1.4 & 63.3 $\pm$ 1.0 & 30.6 $\pm$ 0.8 & 73.0 $\pm$ 0.3 & 12.8 $\pm$ 0.2 \\
8 & 51.3 $\pm$ 1.2 & 39.2 $\pm$ 2.9 & 64.4 $\pm$ 1.2 & 29.4 $\pm$ 0.7 & 72.6 $\pm$ 0.3 & 13.0 $\pm$ 0.3 \\
9 & 51.0 $\pm$ 1.7 & 32.1 $\pm$ 3.1 & 65.0 $\pm$ 0.9 & 29.3 $\pm$ 0.9 & 72.8 $\pm$ 0.3 & 13.0 $\pm$ 0.2 \\
10 & 48.9 $\pm$ 1.9 & 39.4 $\pm$ 2.5 & 62.5 $\pm$ 1.2 & 29.9 $\pm$ 0.7 & 72.8 $\pm$ 0.3 & 13.1 $\pm$ 0.3 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Stylized ImageNet \cite{geirhos2018imagenet}}
\begin{table}[H]
\centering
\begin{tabular}{@{}lcccccc@{}}
\toprule
Run & Silhouette & Edge & Sketch & CueConflict & ImageNet1k & ImageNetStylized \\
\midrule
1 & 48.5 $\pm$ 2.1 & 59.8 $\pm$ 3.6 & 69.8 $\pm$ 1.0 & 54.1 $\pm$ 1.1 & 55.9 $\pm$ 0.4 & 52.7 $\pm$ 0.5 \\
2 & 50.5 $\pm$ 1.4 & 61.4 $\pm$ 1.9 & 70.7 $\pm$ 0.7 & 52.6 $\pm$ 1.0 & 55.8 $\pm$ 0.4 & 52.6 $\pm$ 0.5 \\
3 & 48.0 $\pm$ 1.4 & 62.6 $\pm$ 2.2 & 69.5 $\pm$ 1.3 & 53.0 $\pm$ 1.2 & 55.6 $\pm$ 0.5 & 52.2 $\pm$ 0.5 \\
4 & 45.9 $\pm$ 1.7 & 61.1 $\pm$ 2.3 & 70.9 $\pm$ 1.0 & 54.1 $\pm$ 0.8 & 55.4 $\pm$ 0.4 & 52.8 $\pm$ 0.5 \\
5 & 45.2 $\pm$ 1.7 & 59.3 $\pm$ 2.2 & 68.5 $\pm$ 1.0 & 53.7 $\pm$ 1.4 & 55.1 $\pm$ 0.4 & 52.0 $\pm$ 0.5 \\
6 & 46.9 $\pm$ 1.9 & 60.0 $\pm$ 2.8 & 71.4 $\pm$ 0.9 & 52.7 $\pm$ 0.8 & 55.4 $\pm$ 0.4 & 51.9 $\pm$ 0.5 \\
7 & 45.9 $\pm$ 1.7 & 55.8 $\pm$ 2.9 & 71.3 $\pm$ 0.8 & 55.3 $\pm$ 1.2 & 55.8 $\pm$ 0.4 & 52.6 $\pm$ 0.6 \\
8 & 48.9 $\pm$ 2.2 & 56.2 $\pm$ 2.6 & 69.6 $\pm$ 0.9 & 53.8 $\pm$ 1.1 & 55.4 $\pm$ 0.4 & 52.4 $\pm$ 0.4 \\
9 & 49.2 $\pm$ 1.4 & 59.9 $\pm$ 2.4 & 71.4 $\pm$ 1.0 & 53.9 $\pm$ 1.1 & 55.2 $\pm$ 0.5 & 52.9 $\pm$ 0.5 \\
10 & 46.0 $\pm$ 2.1 & 56.4 $\pm$ 2.7 & 70.2 $\pm$ 0.8 & 53.5 $\pm$ 1.0 & 55.4 $\pm$ 0.5 & 52.6 $\pm$ 0.5 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{InfoDrop \cite{shi2020informative}}
\begin{table}[H]
\centering
\begin{tabular}{@{}lcccccc@{}}
\toprule
Run & Silhouette & Edge & Sketch & CueConflict & ImageNet1k & ImageNetStylized \\
\midrule
1 & 45.3 $\pm$ 1.8 & 18.1 $\pm$ 2.4 & 56.9 $\pm$ 1.1 & 23.3 $\pm$ 0.7 & 72.8 $\pm$ 0.3 & 8.1 $\pm$ 0.3 \\
2 & 47.9 $\pm$ 1.9 & 21.6 $\pm$ 3.2 & 56.4 $\pm$ 1.4 & 23.2 $\pm$ 0.6 & 73.0 $\pm$ 0.3 & 8.0 $\pm$ 0.3 \\
3 & 46.8 $\pm$ 1.4 & 20.0 $\pm$ 1.5 & 59.3 $\pm$ 1.2 & 23.0 $\pm$ 0.5 & 72.9 $\pm$ 0.3 & 7.7 $\pm$ 0.2 \\
4 & 47.2 $\pm$ 1.9 & 18.5 $\pm$ 2.0 & 57.4 $\pm$ 1.1 & 22.6 $\pm$ 0.7 & 72.7 $\pm$ 0.3 & 7.9 $\pm$ 0.2 \\
5 & 49.0 $\pm$ 1.6 & 22.5 $\pm$ 1.5 & 55.7 $\pm$ 1.9 & 24.0 $\pm$ 0.5 & 73.0 $\pm$ 0.3 & 7.6 $\pm$ 0.2 \\
6 & 43.2 $\pm$ 1.5 & 18.2 $\pm$ 1.5 & 57.7 $\pm$ 1.0 & 22.2 $\pm$ 0.6 & 72.9 $\pm$ 0.4 & 7.8 $\pm$ 0.2 \\
7 & 49.2 $\pm$ 1.7 & 20.1 $\pm$ 2.8 & 56.5 $\pm$ 1.0 & 23.5 $\pm$ 0.5 & 72.8 $\pm$ 0.3 & 7.8 $\pm$ 0.2 \\
8 & 48.3 $\pm$ 1.8 & 22.5 $\pm$ 1.7 & 57.7 $\pm$ 1.0 & 22.4 $\pm$ 0.6 & 72.6 $\pm$ 0.3 & 8.2 $\pm$ 0.2 \\
9 & 43.7 $\pm$ 2.3 & 15.6 $\pm$ 2.1 & 57.0 $\pm$ 1.2 & 25.3 $\pm$ 0.7 & 72.8 $\pm$ 0.4 & 7.9 $\pm$ 0.2 \\
10 & 45.2 $\pm$ 2.8 & 22.7 $\pm$ 2.2 & 58.0 $\pm$ 1.0 & 22.6 $\pm$ 0.5 & 73.0 $\pm$ 0.4 & 7.4 $\pm$ 0.3 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{SagNet \cite{nam2021reducing}}
\begin{table}[H]
\centering
\begin{tabular}{@{}lcccccc@{}}
\toprule
Run & Silhouette & Edge & Sketch & CueConflict & ImageNet1k & ImageNetStylized \\
\midrule
1 & 43.6 $\pm$ 0.7 & 19.7 $\pm$ 1.1 & 57.1 $\pm$ 0.4 & 21.3 $\pm$ 0.2 & 72.9 $\pm$ 0.1 & 6.1 $\pm$ 0.1 \\
2 & 41.6 $\pm$ 0.7 & 19.8 $\pm$ 0.7 & 58.6 $\pm$ 0.7 & 21.5 $\pm$ 0.2 & 72.3 $\pm$ 0.1 & 7.1 $\pm$ 0.1 \\
3 & 43.4 $\pm$ 0.6 & 19.3 $\pm$ 0.6 & 57.6 $\pm$ 0.5 & 20.3 $\pm$ 0.2 & 72.6 $\pm$ 0.1 & 5.4 $\pm$ 0.1 \\
4 & 40.7 $\pm$ 0.5 & 19.2 $\pm$ 0.9 & 56.4 $\pm$ 0.7 & 21.0 $\pm$ 0.2 & 72.1 $\pm$ 0.1 & 5.9 $\pm$ 0.2 \\
5 & 41.5 $\pm$ 0.7 & 19.7 $\pm$ 0.5 & 58.5 $\pm$ 0.4 & 21.4 $\pm$ 0.2 & 72.4 $\pm$ 0.1 & 6.3 $\pm$ 0.1 \\
6 & 43.5 $\pm$ 0.8 & 20.7 $\pm$ 0.9 & 58.4 $\pm$ 0.7 & 21.6 $\pm$ 0.2 & 72.8 $\pm$ 0.1 & 6.5 $\pm$ 0.2 \\
7 & 41.8 $\pm$ 0.8 & 22.9 $\pm$ 0.6 & 58.9 $\pm$ 0.5 & 20.8 $\pm$ 0.2 & 72.6 $\pm$ 0.1 & 5.6 $\pm$ 0.1 \\
8 & 41.0 $\pm$ 1.0 & 18.2 $\pm$ 0.8 & 59.4 $\pm$ 0.5 & 21.3 $\pm$ 0.2 & 72.7 $\pm$ 0.1 & 5.9 $\pm$ 0.2 \\
9 & 43.6 $\pm$ 0.6 & 19.4 $\pm$ 0.8 & 58.9 $\pm$ 0.5 & 20.1 $\pm$ 0.3 & 73.0 $\pm$ 0.0 & 6.6 $\pm$ 0.1 \\
10 & 41.9 $\pm$ 0.6 & 19.7 $\pm$ 0.8 & 57.2 $\pm$ 0.6 & 20.8 $\pm$ 0.2 & 72.4 $\pm$ 0.1 & 5.9 $\pm$ 0.2 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{pAdaIN \cite{nuriel2021permuted}}
\begin{table}[H]
\centering
\begin{tabular}{@{}lcccccc@{}}
\toprule
Run & Silhouette & Edge & Sketch & CueConflict & ImageNet1k & ImageNetStylized \\
\midrule
1 & 41.5 $\pm$ 1.5 & 19.4 $\pm$ 0.9 & 56.7 $\pm$ 0.8 & 22.0 $\pm$ 0.4 & 73.2 $\pm$ 0.1 & 8.1 $\pm$ 0.1 \\
2 & 45.1 $\pm$ 1.5 & 21.3 $\pm$ 1.1 & 53.4 $\pm$ 0.7 & 21.5 $\pm$ 0.4 & 73.1 $\pm$ 0.1 & 7.6 $\pm$ 0.1 \\
3 & 45.2 $\pm$ 0.9 & 14.6 $\pm$ 0.7 & 53.6 $\pm$ 0.5 & 22.3 $\pm$ 0.5 & 73.4 $\pm$ 0.1 & 7.9 $\pm$ 0.2 \\
4 & 46.6 $\pm$ 1.2 & 18.8 $\pm$ 1.2 & 55.2 $\pm$ 0.7 & 21.3 $\pm$ 0.4 & 73.1 $\pm$ 0.1 & 8.0 $\pm$ 0.2 \\
5 & 48.2 $\pm$ 1.3 & 23.0 $\pm$ 1.2 & 55.8 $\pm$ 0.7 & 20.9 $\pm$ 0.3 & 73.2 $\pm$ 0.1 & 8.0 $\pm$ 0.1 \\
6 & 46.7 $\pm$ 0.9 & 21.5 $\pm$ 1.1 & 57.0 $\pm$ 0.7 & 22.4 $\pm$ 0.3 & 73.0 $\pm$ 0.0 & 8.3 $\pm$ 0.1 \\
7 & 47.9 $\pm$ 0.7 & 26.4 $\pm$ 1.3 & 59.3 $\pm$ 0.6 & 22.6 $\pm$ 0.4 & 73.0 $\pm$ 0.1 & 8.3 $\pm$ 0.2 \\
8 & 46.0 $\pm$ 1.3 & 22.0 $\pm$ 1.0 & 55.6 $\pm$ 0.6 & 21.6 $\pm$ 0.3 & 73.2 $\pm$ 0.1 & 8.1 $\pm$ 0.1 \\
9 & 44.1 $\pm$ 0.9 & 20.6 $\pm$ 1.0 & 57.7 $\pm$ 0.5 & 21.4 $\pm$ 0.3 & 73.0 $\pm$ 0.1 & 8.1 $\pm$ 0.1 \\
10 & 44.2 $\pm$ 1.1 & 21.5 $\pm$ 1.3 & 55.2 $\pm$ 0.8 & 21.3 $\pm$ 0.4 & 73.1 $\pm$ 0.1 & 8.2 $\pm$ 0.1 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Model selection: best validation score}
\subsubsection{ERM \cite{vapnik1999overview}}
\begin{table}[H]
\centering
\begin{tabular}{@{}lcccccc@{}}
\toprule
Run & Silhouette & Edge & Sketch & CueConflict & ImageNet1k & ImageNetStylized \\
\midrule
1 & 45.6 & 20.6 & 56.7 & 21.3 & 74.0 & 7.6 \\
2 & 48.1 & 20.6 & 56.5 & 21.6 & 73.7 & 7.4 \\
3 & 46.3 & 25.0 & 54.4 & 21.8 & 73.7 & 7.9 \\
4 & 45.6 & 21.9 & 56.6 & 20.9 & 73.8 & 7.8 \\
5 & 51.2 & 27.5 & 58.9 & 23.4 & 73.7 & 7.5 \\
6 & 47.5 & 20.6 & 56.9 & 22.7 & 73.7 & 7.4 \\
7 & 46.3 & 21.3 & 58.9 & 21.3 & 73.9 & 7.6 \\
8 & 49.4 & 15.6 & 58.1 & 21.7 & 73.7 & 8.1 \\
9 & 43.8 & 31.2 & 56.9 & 22.9 & 74.0 & 7.8 \\
10 & 46.9 & 23.6 & 56.6 & 21.9 & 73.8 & 7.9 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Debiased \cite{li2020shape}}
\begin{table}[H]
\centering
\begin{tabular}{@{}lcccccc@{}}
\toprule
Run & Silhouette & Edge & Sketch & CueConflict & ImageNet1k & ImageNetStylized \\
\midrule
1 & 51.9 & 33.8 & 62.7 & 28.4 & 74.4 & 16.6 \\
2 & 48.1 & 35.6 & 59.5 & 29.0 & 74.3 & 15.6 \\
3 & 45.0 & 27.5 & 60.9 & 30.5 & 74.6 & 16.4 \\
4 & 48.8 & 21.3 & 61.0 & 27.8 & 74.4 & 15.9 \\
5 & 46.9 & 38.7 & 60.1 & 29.1 & 74.6 & 16.2 \\
6 & 48.8 & 26.9 & 60.0 & 29.1 & 74.7 & 16.3 \\
7 & 43.8 & 33.1 & 58.4 & 27.4 & 74.6 & 16.1 \\
8 & 51.2 & 30.6 & 61.9 & 30.9 & 74.3 & 16.2 \\
9 & 51.9 & 33.1 & 60.1 & 28.8 & 74.5 & 16.0 \\
10 & 50.6 & 27.5 & 60.1 & 28.3 & 74.5 & 15.5 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{DeepAug \cite{hendrycks2021many}}
\begin{table}[H]
\centering
\begin{tabular}{@{}lcccccc@{}}
\toprule
Run & Silhouette & Edge & Sketch & CueConflict & ImageNet1k & ImageNetStylized \\
\midrule
1 & 55.0 & 38.1 & 62.6 & 30.8 & 73.7 & 13.1 \\
2 & 51.2 & 31.9 & 60.5 & 31.2 & 73.8 & 13.6 \\
3 & 52.5 & 42.5 & 65.6 & 31.3 & 73.7 & 13.8 \\
4 & 48.8 & 30.6 & 65.9 & 30.3 & 73.8 & 13.1 \\
5 & 52.5 & 38.1 & 62.0 & 31.1 & 74.0 & 13.3 \\
6 & 55.6 & 38.7 & 63.7 & 31.4 & 73.7 & 13.1 \\
7 & 53.8 & 39.4 & 66.5 & 31.4 & 73.8 & 13.2 \\
8 & 50.6 & 33.8 & 64.5 & 29.5 & 73.3 & 13.8 \\
9 & 50.6 & 27.5 & 63.7 & 30.0 & 73.7 & 13.3 \\
10 & 48.1 & 40.0 & 62.6 & 29.6 & 73.5 & 13.4 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Stylized ImageNet \cite{geirhos2018imagenet}}
\begin{table}[H]
\centering
\begin{tabular}{@{}lcccccc@{}}
\toprule
Run & Silhouette & Edge & Sketch & CueConflict & ImageNet1k & ImageNetStylized \\
\midrule
1 & 48.8 & 61.9 & 69.0 & 52.8 & 56.1 & 53.3 \\
2 & 51.9 & 61.3 & 70.5 & 52.7 & 56.2 & 53.3 \\
3 & 49.4 & 63.7 & 67.9 & 52.5 & 56.2 & 52.9 \\
4 & 45.6 & 63.1 & 71.4 & 54.8 & 55.8 & 53.4 \\
5 & 42.5 & 60.6 & 68.3 & 52.7 & 55.5 & 52.7 \\
6 & 45.6 & 61.3 & 72.3 & 52.4 & 55.9 & 52.6 \\
7 & 44.4 & 56.9 & 71.6 & 54.3 & 56.4 & 53.4 \\
8 & 50.0 & 56.9 & 70.1 & 53.4 & 55.8 & 53.0 \\
9 & 47.5 & 58.1 & 71.5 & 53.7 & 55.7 & 53.6 \\
10 & 45.0 & 54.4 & 70.7 & 54.5 & 56.1 & 53.2 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{InfoDrop \cite{shi2020informative}}
\begin{table}[H]
\centering
\begin{tabular}{@{}lcccccc@{}}
\toprule
Run & Silhouette & Edge & Sketch & CueConflict & ImageNet1k & ImageNetStylized \\
\midrule
1 & 46.8 & 16.9 & 59.0 & 25.0 & 73.3 & 8.0 \\
2 & 46.1 & 23.1 & 56.2 & 23.8 & 73.4 & 8.1 \\
3 & 47.7 & 18.8 & 56.2 & 23.0 & 73.3 & 8.0 \\
4 & 47.2 & 18.2 & 56.8 & 23.5 & 73.1 & 8.2 \\
5 & 50.8 & 20.9 & 53.6 & 23.7 & 73.4 & 7.7 \\
6 & 43.1 & 17.6 & 57.2 & 21.8 & 73.4 & 7.8 \\
7 & 50.6 & 23.6 & 57.5 & 23.8 & 73.2 & 8.1 \\
8 & 46.9 & 22.4 & 55.5 & 21.8 & 73.2 & 8.3 \\
9 & 42.2 & 13.7 & 56.4 & 25.2 & 73.2 & 8.1 \\
10 & 44.5 & 23.8 & 58.1 & 22.9 & 73.5 & 7.5 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{SagNet \cite{nam2021reducing}}
\begin{table}[H]
\centering
\begin{tabular}{@{}lcccccc@{}}
\toprule
Run & Silhouette & Edge & Sketch & CueConflict & ImageNet1k & ImageNetStylized \\
\midrule
1 & 42.8 & 21.9 & 57.5 & 20.9 & 73.0 & 6.0 \\
2 & 42.0 & 19.8 & 58.9 & 21.4 & 72.4 & 7.0 \\
3 & 44.1 & 17.9 & 58.1 & 20.5 & 72.7 & 5.6 \\
4 & 40.5 & 20.1 & 56.4 & 21.2 & 72.3 & 5.9 \\
5 & 42.4 & 20.9 & 59.6 & 21.1 & 72.5 & 6.5 \\
6 & 43.5 & 20.3 & 58.6 & 21.7 & 73.0 & 6.3 \\
7 & 43.1 & 23.3 & 59.4 & 20.8 & 72.7 & 5.8 \\
8 & 40.9 & 18.0 & 59.2 & 21.1 & 72.9 & 6.0 \\
9 & 43.7 & 19.5 & 59.0 & 20.4 & 73.1 & 6.6 \\
10 & 42.1 & 19.2 & 57.6 & 20.9 & 72.6 & 6.1 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{pAdaIN \cite{nuriel2021permuted}}
\begin{table}[H]
\centering
\begin{tabular}{@{}lcccccc@{}}
\toprule
Run & Silhouette & Edge & Sketch & CueConflict & ImageNet1k & ImageNetStylized \\
\midrule
1 & 42.5 & 20.0 & 55.9 & 22.0 & 73.3 & 8.0 \\
2 & 45.0 & 21.3 & 53.1 & 21.5 & 73.2 & 7.7 \\
3 & 45.6 & 14.4 & 53.9 & 21.6 & 73.5 & 7.8 \\
4 & 46.3 & 18.8 & 54.9 & 21.2 & 73.2 & 8.1 \\
5 & 48.8 & 24.4 & 55.3 & 20.9 & 73.3 & 7.9 \\
6 & 46.9 & 20.6 & 58.2 & 21.9 & 73.1 & 8.3 \\
7 & 48.1 & 25.6 & 58.6 & 21.8 & 73.1 & 8.0 \\
8 & 46.9 & 21.9 & 56.1 & 21.3 & 73.4 & 8.1 \\
9 & 42.5 & 20.6 & 57.7 & 21.6 & 73.1 & 8.0 \\
10 & 43.8 & 22.5 & 55.8 & 21.2 & 73.2 & 8.1 \\
\bottomrule
\end{tabular}
\end{table}
\section{Introduction}
The advent of quantum information theory has brought with it the idea that there is a limited sense in which physical systems -- not necessarily human or conscious -- might be said to perform observations. For instance, the reduction in visibility of quantum interference phenomena traditionally attributed to `observation' of which-path information does not require observation \tit{per se}, but only that the relevant information could be obtained \tit{in principle} from some extraneous physical system. Thus, any system capable of obtaining and storing information about another system through a physical interaction is capable of `observation' in the broader information-theoretic sense. Parallel to these developments, the quantum physics community has also recently expanded the formal notion of \tit{causality} beyond deterministic physics to encompass probabilistic causality, following seminal developments in statistical modeling of causation \cite{PEARL,SGS}. A key part of this new probabilistic notion of causality is the concept of manipulation by an external agent. The `agent' is usually assumed to be a human experimenter, but the term may be extended to encompass physical systems in general, provided the notion of a `manipulation' by such a system can be meaningfully defined \cite{WOODW}. Both these recent developments blur the line between the notion of `observer' and `physical system', and invite us to re-examine the meaning of the intertwined concepts of observation and causation in contemporary quantum physics.
The present work is part of a growing field of research on \tit{quantum causal modeling} \cite{COSHRAP,ALLEN,RIED,RIEDPHD,FRITZ,HLP,PIEBRUK,BARRETTQCM,GIARM,LEIF08,MILZ17}, which aims to consolidate quantum information theory and probabilistic causation into a single framework. A major initial stimulus for these efforts was the work of Wood \& Spekkens \cite{WOOD}, who showed that classical causal models could not explain quantum correlations without `unnatural fine-tuning'. Much of the subsequent literature on the topic can be seen as a concerted effort to show that quantum correlations can be explained without such fine-tuning by suitably generalizing the notion of a `causal model'. On this account the research program has been successful: most of the works just cited are more than capable of performing all practical functions of causal modeling for both quantum and classical systems, and are able to do so without fine-tuning, or invoking causal pathways that run counter to classical intuition. This literature strongly suggests that although quantum theory may not be local in the sense originally defined by Bell \cite{BELL76}, it may nevertheless be called a `causal theory' in the sense of probabilistic causation. This compelling idea is an invitation to examine the relevance of quantum causal modeling to quantum foundations.
However, in their emphasis on carrying over the operational functionality of the classical causal models into the quantum domain, these approaches have so far skirted around foundational issues regarding the meaning of causality itself, and how it might be revised in light of quantum theory. That these matters have been overlooked is easy to verify: nearly all of the proposed models are compatible with any interpretation of quantum mechanics, and nearly all of them adopt without much critical reflection the basic definition of causality from the classical framework. None of them question the basic idea that causality is \tit{only} about manipulations; it is instead tacitly assumed that the sole task of a causal model is to produce probabilities for measurement outcomes in response to manipulations \tit{and nothing more}. Most of the proposed frameworks can therefore be transformed into each other with relative ease, as there are only so many ways to define a generalized quantum process that maps a set of inputs to a set of outputs. One therefore finds that the differences between these models are invariably of a mainly technical nature and not a matter of deep principle.
The present work has in common with these other works the commitment to a notion of causality that is probabilistic and manipulationist. We differ from them in that we define causality as a predominantly \tit{counterfactual} concept, rather than being entirely restricted to manipulations. Whereas an emphasis on manipulation would regard counterfactuals as being about the different possible manipulations that could be performed, we will instead consider manipulations to be just one of a broader class of counterfactual experiments that one could perform. We will argue that, apart from manipulations, it is also important to consider counterfactual experiments in which certain measurements are \tit{not} performed. Just as there is a rule for determining the result of a manipulation, our framework demands a rule for inferring the result of such an `un-measurement'. The rule in the classical case is trivial, which allows us to overlook it; but in the quantum case the rule takes on a fascinating mathematical form that is known in quantum foundations research in connection with the QBist interpretation of quantum theory. Our approach therefore immediately manifests a connection between causality and quantum foundations.
A related advantage of our counterfactual approach is that it emphasizes the observer's involvement in the process of causal discovery. The counterfactuals represent mutually exclusive alternative experimental situations, which by implication are referred to some observer who has the power to bring about one or the other. Consequently, we view causality as a relation that holds between the probabilities that the observer assigns to different experimental contexts. Which contexts? This brings us to an old puzzle in the foundations of causality.
Imagine that a fire burns down an apartment, and the investigation reveals that the tenant had left the gas stove on. The landlord accuses the tenant of causing the fire, since if he had remembered to switch off the stove, the fire would not have occurred. The tenant, who happens to be a philosopher, counters that if there had been no oxygen in the apartment, the fire also would not have occurred. To see what is wrong with this argument, one has to recognize that causal relations can only be established with reference to the `status quo' as to what is and what is not considered reasonably possible. The presence of oxygen in the apartment should not be regarded as a possible cause because nobody would reasonably expect it to be absent. If, on the other hand, the defendant lived inside a vacuum chamber that could be readily evacuated by the push of a button, the case might turn out differently.
When dealing with counterfactual causality it is therefore quite natural to introduce the idea of a `reference' experiment relative to which the observer is contemplating possible deviations. In this work we formalize this idea with the help of a control variable that indicates the different alternative `contexts' an observer can bring about. By contrast, an emphasis on manipulation would tend to presume the existence of some independent physical structure that conveys the input to the output, which hides the fact that what is considered a `mechanism' is itself observer-dependent. The mechanistic viewpoint thus tends to reify an observer's possibilities relative to a system into absolute properties of the system itself. A tractor is thought to be a `mechanism' even when there is nobody around who is trained to operate it. What use, then, in insisting that it is a mechanism? We can only make sense of such a claim by appealing to counterfactuals, for instance, by supposing what \tit{would} happen if there \tit{were} somebody there who knew how to drive the tractor -- but this brings us back to the dependence on an observer (or perhaps one should say `user'), which was hidden when one was thinking in terms of mechanisms.
Thus on a counterfactual account we are constantly reminded of the observer's role in establishing a causal connection. For on this account, causality is a statement about how an \tit{observer}'s probability assignments should be related between two counterfactual experiments that the \tit{observer} can conceivably perform. This way of understanding causal relations is quite unheard of in fundamental physics, where the overwhelming preference is to appeal to mechanisms (deterministic or otherwise). We will argue, however, that it is the more fruitful way to think about causality when the systems in question are quantum.
We close the introduction with some remarks about how the observer-dependence of causality manifests itself in the present work. First, it leads us to make a general demand that all the fundamental rules of the framework should be expressed as equations that relate the probabilities of events in one experimental context to the probabilities of events in another context. In particular, although we make extensive use of standard formal devices like linear operators on Hilbert spaces, our aim is always to eliminate any reference to such objects from the basic rules of our framework. This is because probabilities are things that we may take to refer to the direct experience of the observer making the experiment, whereas Hilbert space operators are far removed from this experience and only make themselves felt in the way that they guide our probability assignments to events. This view is closely allied with the QBist interpretation of quantum theory, which takes probabilities to be subjective judgements of the observer. We note, however, that while we take QBism as our motivation in this regard, our framework is entirely compatible with an objective interpretation of probability. Secondly, in the course of applying our framework to quantum systems, we find it natural to postulate a fundamental symmetry with respect to the reversal of the direction of causality. This suggests that causal relations might be non-directional at the fundamental level, and that the direction may depend in some way on the observer's capacities relative to the system.
\tit{Remark:} Of course, this is not to imply that it is within the observer's powers to change the causal direction at will. A detailed discussion of this issue is beyond the scope of this paper, but it seems plausible that the observer's powers over a system are constrained by their own thermodynamic arrow of time, in which case reversing the direction of causality might be no easier than un-scrambling an egg, which is to say impossible for all practical purposes.
The paper is structured into three main sections. Section \ref{sec:defs} outlines a general framework for causal modeling of general physical systems (classical, quantum or otherwise) based on an emphasis on counterfactual inference. A key idea is the principle of \tit{causal sufficiency}, which asserts that the only formal mathematical structure needed for inferring counterfactuals, beyond the probabilities, is a graph of the causal structure. This is a driving force behind the whole work, which can be alternatively seen as a purely mathematical exercise in exorcising Hilbert spaces from causal modeling of quantum systems. (We mention that this general framework is in fact conceptually posterior to the causal modeling of quantum systems -- which comes later in Sec. \ref{sec:qbism} -- for it was the unique problems faced in causal modeling of quantum systems that motivated the counterfactual framework). Section \ref{sec:ccms} applies the framework to classical causal models, comparing and contrasting it to the way that these models are usually formulated. Section \ref{sec:qbism} then applies it to quantum systems, resulting in the definition of a quantum causal model. Along the way, we make contact with some of the formal mathematical apparatus of QBism, which we generalize to suit our purposes. The result is a model that is manifestly symmetric with respect to the reversal of the causal arrows. We discuss the meaning of this result and address other questions about the framework at the end in Sec. \ref{sec:discuss}.
\section{Causal modeling as counterfactual inference \label{sec:defs}}
In this section, we describe a general framework for causal modelling that applies to any class of physical systems, which emphasizes a counterfactual definition of causality.
\subsection{Preliminaries and notation \label{sec:notation}}
It is generally agreed (among non-philosophers) that causation is transitive: if $A$ causes $B$ and $B$ causes $C$, then $A$ must cause $C$ (in this case $A$ is called an `indirect' cause of $C$). Furthermore, it stands to reason that no cause can be its own effect, hence there can be no chain of causes leading from $A$ back to itself. Based on these axioms, the causal relations among a set of propositions $A,B,C,\dots$ can be represented schematically by a directed acyclic graph (DAG). In this representation, a variable $A$ is a \tit{cause} of $B$ iff there is at least one path leading from $A$ to $B$ following the directions of the arrows. It is also standard to use the additional terminology that $A$ is a \tit{direct cause} of $B$ if there is a single arrow from $A$ to $B$, and an \tit{indirect cause} if there is a path consisting of two or more arrows from $A$ to $B$. These special cases do not exclude each other, thus $A$ can be both a \tit{direct} and \tit{indirect} cause of $B$. (Note that while \tit{cause} is transitive, the more refined notion of \tit{direct cause} is not).
It is conventional to use `family tree' terminology to describe relationships in a DAG, eg. in addition to the \tit{parents} of $X$, one can define its \tit{children}, \tit{ancestors} and \tit{descendants} in an analogous way. In terms of the family tree nomenclature, $A$ is a cause of $B$ whenever $A$ is an \tit{ancestor} of $B$, irrespective of whether it is also a \tit{parent} of $B$.
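The ancestor relation that defines \tit{cause} above is straightforward to make concrete. The following sketch (not from the paper; all names are illustrative) stores a DAG as a map from each node to its list of parents, and tests whether $A$ causes $B$ by checking whether $A$ is an ancestor of $B$:

```python
def ancestors(dag, node):
    """All ancestors of `node` in `dag`, where dag[n] lists the parents of n."""
    seen = set()
    stack = list(dag.get(node, []))
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)  # p is a parent, grandparent, ... of node
            stack.extend(dag.get(p, []))
    return seen

def is_cause(dag, a, b):
    """A causes B iff A is an ancestor of B (whether or not also a parent)."""
    return a in ancestors(dag, b)

# A -> B -> C together with the direct edge A -> C: here A is both a
# direct and an indirect cause of C.
dag = {"A": [], "B": ["A"], "C": ["A", "B"]}
```

Acyclicity guarantees that the traversal terminates, mirroring the requirement that no chain of causes can lead from a variable back to itself.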
In this work, $P(X)$ represents a normalized discrete probability function on the domain $\trm{dom}(X)$ of the random variable $X$. When evaluating probabilities for specific values of $X$, we adopt the shorthand $P(x) := P(X=x)$. Thus, for example, $P(X,y,w|Z)$ should be understood as a function of just two variables, $X,Z$, equivalent to the function $P(X,Y,W|Z)$ evaluated at the specific values $Y=y,W=w$. For sets of random variables, we often replace the `$\cup$' with just a comma (or a space) when taking the union, eg. $A,B,C = A \, B \, C := A \cup B \cup C$. Sets of random variables can be used to define composite random variables, which we denote by bold letters, eg. the composite variable $\tbf{X} := A,B$ takes values that are tuples $\tbf{x}:=(a,b)$ with $a \in \trm{dom}(A),\, b \in \trm{dom}(B)$.
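As a minimal illustration of this notation (our own sketch, with illustrative values), a joint distribution over a composite variable $\tbf{X} := X, Y$ can be stored as a table over tuples, with $P(x)$ obtained by marginalization and $P(y|x)$ by the usual ratio:

```python
# Joint distribution P(X, Y) with dom(X) = {x0, x1}, dom(Y) = {y0, y1}.
P = {("x0", "y0"): 0.1, ("x0", "y1"): 0.4,
     ("x1", "y0"): 0.3, ("x1", "y1"): 0.2}

def marginal(P, x):
    """P(x) := P(X = x), summing the joint table over Y."""
    return sum(p for (xv, _), p in P.items() if xv == x)

def conditional(P, y, x):
    """P(y | x) = P(x, y) / P(x)."""
    return sum(p for (xv, yv), p in P.items()
               if xv == x and yv == y) / marginal(P, x)
```

Here, for instance, $P(\trm{x0}) = 0.5$ and $P(\trm{y1}|\trm{x0}) = 0.8$, in the shorthand defined above.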
\subsection{Causality and measurements \label{sec:measurements}}
The setting of \tit{causal modeling} is a collection of \tit{localized measurements} that are performed on some external arrangement of physical matter evolving in time for the duration of an experiment; this material is referred to simply as the \tit{system}. These measurements are assumed to be fixed in advance of the experiment and cannot be dynamically changed as the experiment progresses. We now specify more carefully what we mean by `measurements':\\
\tbf{Localized Measurement:} A localized measurement is a physical interaction of an observer with a system that takes place within a region of space-time that is \tit{localized} relative to the experiment, and produces an outcome (i.e. a piece of classical data). Here, \tit{localized} means that the space-time extent of each region is small compared to the sizes of the distances and times between the measurements in the experiment under consideration. (This is a slight generalization of the similar concept of a space-time random variable introduced in Ref. \cite{CBRenner}, where the regions were assumed to be point-like). \\
\tit{Remark:} The space-time co-ordinates of each measurement region are only given relative to the experimental instance or run. Thus, if we consider the whole ensemble of repetitions of the experiment (whether parallel in space or sequential in time), then strictly speaking each measurement actually corresponds to a whole set of disjoint space-time regions, each one corresponding to a unique experimental run. Conventionally it is useful to re-set the space-time co-ordinates at the beginning of each experimental run so as to identify multiple repetitions of the `same' measurement by giving it the same space-time co-ordinates in each run.
Each measurement is associated with a random variable $X_i$, $i \in \{1,2,\dots,N \}$ whose values $x_i \in \trm{dom}X_i$ correspond to the possible outcomes of the measurement, with $P(X_i = x_i)$ the probability of obtaining the outcome $x_i$. It is conventional to choose the labelling such that $i<i'$ whenever $X_i$ is a cause of $X_{i'}$. Note that under this definition, all random variables represent the `outcomes' of measurements, even in cases where the value of a variable is completely determined. For instance, the action of a scientist turning a dial to some chosen value is still considered a `localized physical action on a system that produces an outcome'. We thus avoid making any fundamental distinction between `settings' and `outcomes' as is typical elsewhere in the literature.
In this work, we adopt the usual manipulationist definition of causality, but carefully re-worded for our own purposes:\\
\tbf{MC. Manipulationist Causation:} \\
Let $A,B$ be correlated random variables. Then the statement `$A$ causes $B$' means that $A$ and $B$ \tit{remain} correlated in an experiment in which we perform a \tit{manipulation} of $A$ (and only $A$). Formally, we introduce a \tit{control variable} $C^{A}$ (the superscript indicates that it controls only how $A$ is measured), whose values $c \in \trm{dom}C$ represent different possible \tit{ways} of measuring $A$. In particular, let $C^A=\brm{do}$ represent a manipulation of $A$; then `$A$ causes $B$' means that:
\eqn{ \label{eqn:manipulationist}
P(AB| C^{A}=\brm{do} \, ) \neq P(A| C^{A}=\brm{do} \, ) P(B| C^{A}=\brm{do} \, ) \, .
}
In this instance, we call $A$ the \tit{cause} and $B$ the \tit{effect}.\\
The terminology `\tbf{do}' will be used to refer to an \tit{intervention}, which is a specific type of manipulation, but the definition above is intended to hold for manipulations more generally. The constraints on what defines a manipulation will be discussed in more depth in Sec. \ref{sec:activemanips}.
The above definition is rather different to what one usually finds in textbooks. Elsewhere, it is usually emphasized that $A$ can \tit{signal} to $B$, which in the present notation means there are two manipulations $C^A=\brm{do}(A=a')$ and $C^A=\brm{do}(A=a'')$ such that
\eqn{
P(B|C^A=\brm{do}(A=a')) \neq P(B|C^A=\brm{do}(A=a'')) \, .
}
However, this way of defining causality obscures the essential point that what matters is not the particular manipulation of $A$, but the bare \tit{fact} of manipulation. If $A$ and $B$ are correlated and I am controlling $A$, this is already sufficient to establish qualitatively that $A$ is the cause and $B$ is the effect, without needing to say precisely what I am doing to $A$. To be sure, we can always refine our notion of manipulation so as to describe different settings of the lever as distinct manipulations, but this is incidental. Of course, for the purposes of predicting the \tit{quantitative} consequences of interventions, we will associate some distribution $P'(A)$ to the intervention on $A$; see Sec. \ref{sec:passactclass}.
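The factorization criterion in the definition above can be checked numerically. The sketch below (illustrative only, not the authors' code) takes a table for $P(A,B\,|\,C^A=\brm{do})$ and reports whether $A$ and $B$ remain correlated under the manipulation:

```python
def is_correlated(P_ab, tol=1e-9):
    """P_ab: dict (a, b) -> prob. True iff P(A,B) != P(A)P(B) somewhere."""
    A = {a for a, _ in P_ab}
    B = {b for _, b in P_ab}
    PA = {a: sum(P_ab.get((a, b), 0.0) for b in B) for a in A}
    PB = {b: sum(P_ab.get((a, b), 0.0) for a in A) for b in B}
    return any(abs(P_ab.get((a, b), 0.0) - PA[a] * PB[b]) > tol
               for a in A for b in B)

# Perfectly correlated under the manipulation: witnesses 'A causes B'.
P_do = {(0, 0): 0.5, (1, 1): 0.5}
# Product distribution: the correlation vanished under manipulation.
P_indep = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
```

On this criterion the first table witnesses `$A$ causes $B$', while the second does not, regardless of which particular manipulation produced it.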
The above definition draws our attention to a formal device that we will use throughout this work, whereby the \tit{physical method of measurement} of a variable $X$ is itself specified by the special variable $C^X$. Even for cases where the mode of measurement is in some sense `passive' or `neutral', we will reserve a special symbol `$\oslash$', which represents the default mode of measurement whenever no particular value of $C^X$ is specified; thus $P(X)$ is equivalent to $P(X|C^X=\oslash)$. This notation means that $P(X)$ and $P(X|C^X=\brm{do})$ represent the probabilities of \tit{distinct} events within the same sample space. Specifically, `$X=x|C^X=\oslash$' refers to the event that the outcome $X=x$ is measured without manipulating it, while `$X=x|C^X=\brm{do}$' refers to the event that $X=x$ is measured by intervention. The random variable $C^X$ here plays a special role of toggling between different subsets of the sample space, which represent different physical modes of observation of $X$. For this reason, we will generally not bother to assign probabilities to the values of $C^X$ and will always treat its value as conditioned upon. Since it is interpreted as defining part of the experimental context, we will not include it as a variable within the system and will not represent it by a node in causal diagrams. However, from a formal mathematical point of view, it can be regarded as just another random variable.
Some further clarification is needed regarding the case of a `non-manipulation' of $X$ that we express as $C^X=\oslash$. In this work, there are two special instances of a `non-manipulation' that we will consider. The first type represents an observation that is made expressly for the purposes of causal inference on the system, and is taken as the conventional laboratory standard of measurement:\\
\tbf{Reference measurement:} The notation $C^X=\oslash$ indicates a \tit{reference measurement} of $X$, which means that $X$ is the outcome of a measurement on the system that has the following essential properties:\\
(i) The value of $X$ is maximally informative about the system, in a technical sense that will be elaborated shortly at the end of Sec. \ref{sec:experiments};\\
(ii) The measurement of $X$ does not break causal chains, meaning, if $X$ appears in a causal chain $A \rightarrow X \rightarrow B$, then it is still the case that $A$ causes $B$, i.e. that $A,B$ remain correlated (ignoring the value of $X$) under manipulations of $A$.\\
The second type witnesses the actual \tit{absence} of a measurement of $X$:\\
\tbf{Un-measurement:} The notation $C^X=\brm{undo}$ (sometimes shortened to $\brm{un}(X)$) indicates an \tit{un-measurement} of $X$. This is characterized by the following properties:\\
(i) It represents the enforcement of the physical \tit{absence} of a measuring device in the designated space-time region of $X$ (eg. by removing a measuring device that was previously present there);\\
(ii) The un-measurement of $X$ does not break causal chains, meaning, if $X$ appears in a causal chain $A \rightarrow X \rightarrow B$, then it is still the case that $A$ causes $B$ after the un-measurement of $X$, i.e. that $A,B$ remain correlated (conditional on $C^X=\brm{undo}$) under manipulations of $A$.\\
Note that we do not insist that the reference measurements be `passive' or `non-disturbing' in any sense. For notational convenience, we will adopt the convention that all variables are measured as per the reference measurement scheme unless otherwise stated, and the conditionals of the form $C^A=\oslash$ will accordingly be omitted unless they are needed for clarity.
\tit{Remark:} The designation of what is an `un-measurement' is of course ambiguous. Consider a Young's double-slit experiment with single photons. Here, an un-measurement could mean the removal of any photon-counters from the location of one of the slits; but then it may be asked whether this un-measurement also requires us to remove tiny particles of dust from the air around the slit, since the scattering of light by these particles would make available (in principle) the information about which slit the photon passed through, even if our apparatuses do not collect this information. This ambiguity is to be settled by choosing a convention. If the dust particles are imagined to constitute a kind of `measurement device' then an un-measurement would indeed require that they be physically removed; but if they are designated as part of what we call the system's `environment', then the process of un-measurement does not mandate their removal. The only difference lies in how we tell the story, say, to explain the loss of interference due to the dust particles' presence; in the first case it would be attributed to the presence of a `measuring device' at the slit, while in the second case it would be attributed to the system `interacting with its environment'. Evidently no inconsistency will arise, provided we are careful to state what is considered a `measuring device' for the purposes of a given experiment.
\subsection{Experiments and counterfactual inference \label{sec:experiments}}
In this work, we will be concerned with three special categories of experiment, classified according to how the non-exogenous variables in the system are measured. In a \tit{reference observational scheme} (also called the \tit{reference experiment}) all measurements are reference measurements. In a \tit{passive observational scheme} all the non-exogenous measurements are \tit{non-disturbing} (to be defined shortly in Sec. \ref{sec:passactclass}). Finally, in a \tit{manipulationist scheme} some of the measurements in the system are \tit{manipulations} (and when these are interventions, it is called an \tit{interventionist scheme}). Any \tit{experiment} proceeds in two stages. In the preparation stage, a system of the desired type is identified, isolated inside the laboratory, and `cleaned up' to meet certain quality standards; in the measurement stage the set of localized measurements $\tbf{X}:=\{ X_i : i = 1,2,\dots,N \}$ are performed, always in the same way and with each measurement occurring within its designated region of space-time. From the collected data-set of outcomes, the observer may infer a joint probability distribution $P(\tbf{X})$; this will be called the system's \tit{behaviour} relative to the given experiment.
\tit{Remark:} Although we do often make reference to the `system', it is the concept of an \tit{experiment} that is more fundamental, because it is only through experimentation that the properties of the system become known to us. In fact, the system may be regarded entirely as a conceptual abstraction that serves to represent the object to which our (the observer's) measurements are supposed to be directed.
An experiment is assumed to be carefully designed so as to have the following features:\\
\noindent \tbf{(i).} Distinct measured variables should refer to logically distinct properties (eg. `number of ravens' $R$ and `number of birds' $B$ are not logically distinct, and so must not both be used in the same experiment);\\
\tbf{(ii).} If two variables $X$ and $Y$ in the system are judged to have a common cause that is in the system, this common cause must be measured and included as a variable (or, if it is impractical to measure, included as an unmeasured \tit{latent variable} -- see Sec. \ref{sec:physMarkov});\\
\tbf{(iii).} If the \tit{experiment} has a chance of failure, care must be taken not to introduce \tit{spurious correlations} in $P(\tbf{X})$ when post-selecting on the successful runs. (One method to avoid this is simply to enlarge the scope of the experiment to explicitly include its `failure modes' as a special class of possible outcomes).\\
\tbf{(iv).} The \tit{exogenous} variables -- those whose causes are judged to lie outside of the system of interest -- must be initialized or selected so as to be statistically independent of one another, i.e. $P(E_1,E_2)=P(E_1)P(E_2)$ whenever $E_1,E_2$ are exogenous. Note that this can be achieved by doing independent manipulations of the exogenous variables.\\
Causal relations are relevant to experiments because they enable us to make inferences about counterfactuals. Specifically, we define \tit{counterfactual inference} as the procedure of taking a certain experiment as a reference (hence defining its measurements as the \tit{reference measurements}) and then using the system's behaviour in the reference experiment to deduce what would happen in a variety of \tit{counterfactual experiments}, which are defined by allowing $C^{X_i}$ to take values other than $\oslash$ for some of the $X_i \in \tbf{X}$. The rules for making such counterfactual deductions from the reference experiment are called \tit{inference rules}.
Causal inference refers to any form of counterfactual inference whose inference rules depend on \tit{causal structure}, i.e. the causal relations between all of the measurements in the system. More importantly, it is assumed that the inference rules don't depend on anything beyond the bare causal structure. To formalize these ideas, we define a causal model:\\
\tbf{CM. Causal model:} Let $P(\tbf{X})$ represent data about a system obtained in the reference experiment, and let $G(\tbf{X})$ be a DAG that represents the (actual or hypothesized) \tit{causal structure} of the system. Then the pair $\{ P(\tbf{X}),G(\tbf{X})\}$ defines a \tit{causal model} for the system;\\
and we supplement this definition with the postulate:\\
\tbf{CS. Causal sufficiency:} Causal structure is sufficient for counterfactual inference. That is, given a causal model $\{ P(\tbf{X}),G(\tbf{X})\}$ for a system in an experiment, there exist \tit{inference rules} that can be used to deduce from this model what would happen under \tit{manipulations} of the variables in the system.\\
\tit{Remark:} Counterfactual inference is, in general, a one-way operation. An \tit{intervention} gives a concrete example of the one-way nature of inference: since intervening on $W \in \tbf{X}$ effectively removes causal influences on $W$ that previously existed, a specification of $P(\tbf{X}|C^W=\brm{do})$ and the causal structure of the intervened-upon system will not contain enough information to deduce what would have happened if the intervention had not been performed. This might be rephrased as saying that there is no inference rule for `un-interventions'. This induces a natural (possibly partial) ordering on the set of counterfactual experiments, such that the higher ranking members in the order are those with the power to make counterfactual inferences about the lower-ranking members, and not vice-versa. It is therefore most natural to choose the \tit{reference experiment} to be one of the experiments that ranks at the top of this natural hierarchy, because this guarantees that it can be used to make inferences about the largest possible set of counterfactual experiments under consideration. This is what we meant in the previous section by calling the reference measurements `maximally informative'. It also underlies our assumption that reference measurements do not break causal chains (i.e. because if they did we would have the same difficulty un-measuring them as we do with interventions).
The above definition \tbf{CM} and postulate \tbf{CS} are the core of our counterfactual approach to causal modeling. However, as stated, they are still very vague. There remain two details that must be specified in order to obtain a rigorous framework for causal modeling. The first is to specify exactly what conditions the pair $\{ P(\tbf{X}),G(\tbf{X})\}$ needs to satisfy in order that we can say that $G(\tbf{X})$ represents the ``actual or hypothesized causal structure" of the system; these conditions are called the \tit{physical Markov conditions} and will be discussed in the next section. The second important detail is what precisely are the \tit{inference rules} referred to in \tbf{CS}. The inference rules for un-measurements and interventions on classical systems will be discussed in Sec. \ref{sec:passactclass}, and their quantum counterparts deferred to Sec. \ref{sec:qinterf}.
\subsection{Physical Markov conditions \label{sec:physMarkov}}
Once we have fixed a convention for the reference experiment, it is useful for the purposes of inference to restrict our attention to some particular class of physical systems, which can be broadly or narrowly defined. For example one could restrict attention to the narrow class of classical pendulums, or to the broader class of relativistic $N$-body mechanical systems, and so forth. Suppose that we gather statistics from a reference experiment performed on many different systems all belonging to the particular class of physical systems of interest. Now, depending on the particular features of this class, one will find that certain conditions always hold between the variables in the reference experiment that depend explicitly on the causal structure $G(\tbf{X})$.
One condition that is natural and commonplace for many classes of physical systems is the condition that \tit{causation implies correlation}, i.e. that if the variables $A,B$ are found to be uncorrelated in the reference experiment then neither will be found to be a cause of the other under manipulations of either variable. In fact, if one subscribes to our definition \tbf{MC} (recall Sec. \ref{sec:measurements}), then causation presupposes correlation, so we cannot escape this rule. However, on a broader conception of causality, this need not be the case. One example is the class of cryptographic ``one-time pads", whereby the system is a string of bits called the `message' $M$ that is added modulo 2 to a random bit string called the `key' $K$ to produce a coded message called the `cipher' $C$. On eg. a mechanistic account of causality it seems natural to say that both $M$ and $K$ are causes of $C$, yet manipulations of either one (ignoring the values of the other) remain uncorrelated with $C$.
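The one-time pad can be checked by direct computation. The following sketch (the bias of the message bit is a hypothetical choice, and the construction is our own illustration rather than part of the main text) builds the exact joint distribution of a one-bit message $M$, uniform key $K$ and cipher $C = M \oplus K$, and verifies that $C$ is uncorrelated with $M$:

```python
import itertools

# Hypothetical bias of the one-bit message; the key K is uniform.
p_m = 0.7

# Exact joint distribution P(M, K, C) with C = M XOR K.
dist = {}
for m, k in itertools.product([0, 1], repeat=2):
    c = m ^ k
    dist[(m, k, c)] = (p_m if m == 1 else 1 - p_m) * 0.5

p_c1 = sum(p for (m, k, c), p in dist.items() if c == 1)
p_m1 = sum(p for (m, k, c), p in dist.items() if m == 1)
p_m1_c1 = sum(p for (m, k, c), p in dist.items() if m == 1 and c == 1)

assert abs(p_c1 - 0.5) < 1e-12              # the cipher looks uniform
assert abs(p_m1_c1 - p_m1 * p_c1) < 1e-12   # M and C are independent
# ...and by symmetry the same holds for K and C, even though C = M XOR K
# is a deterministic function of the pair (M, K).
```

Since $C$ carries no correlation with $M$ alone, a manipulationist reading in the spirit of \tbf{MC} declines to count $M$ (or $K$) individually as a cause of $C$, whereas a mechanistic reading counts both.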
The principle that causation implies correlation is just one example of what we will call a \tit{physical Markov condition}, which describes any rule that relates the causal structure of a system (belonging to a particular class) to its anticipated behaviour in the reference experiment. To see why \tit{physical Markov conditions} play a central role in causal modeling, consider the class of \tit{deterministic systems}, defined by the property that the value of each variable $X_i$ is fully determined by the values of its parents $\pa{X_i}$. For such systems, the rule \tit{causation implies correlation} holds, as does another more interesting condition \cite{PEARL,SGS}:\\
\noindent \tbf{FCC. Factorization on common causes:} \\
Suppose neither of $X_1,X_2$ is a cause of the other and $\tbf{C}$ is the complete set of their shared ancestors, i.e. $\tbf{C}$ contains all `common causes' (see Fig. \ref{fig:threedags}(a)); then $P(X_1 X_2|\tbf{C})=P(X_1|\tbf{C})P(X_2|\tbf{C})$.\\
The \tbf{FCC} is a physical Markov condition that has an important consequence: if variables $A,B,C$ are found to be correlated in the reference experiment, and $A,B$ remain correlated conditional on the value of $C$, then it may be concluded that one of the pair $A,B$ must be a cause of the other, so long as we hold firm in our belief that the system is of the deterministic class. This provides a simple illustration of \tit{inference of causal structure}, whereby one leverages knowledge about the class of physical systems plus the behaviour in the reference experiment to infer facts about the causal structure. Thus, physical Markov conditions allow one to reduce the number of interventions needed to establish causal claims.
The most commonly considered class of systems in the literature on causal modeling is the class of \tit{classical stochastic systems}. This class can be defined as a slight generalization from the class of deterministic systems, as follows:\\
\noindent \tbf{Classical stochastic systems:}\\
These are systems defined by the requirement that, for every variable $X_i$ relevant to the system, it is possible to introduce a hypothetical auxiliary exogenous variable $A_i$ that has $X_i$ as its only child, such that the value of $X_i$ is fully determined by the values of $\pa{X_i}$ and $A_i$.\\
Informally, a \tit{classical stochastic system} is observationally equivalent to a deterministic system in which there are hidden sources of noise (represented by the $A_i$) independently affecting each node. This class of systems is particularly interesting because it has been found to be powerful enough to describe a wide range of real physical systems, including biological, ecological, and mechanical systems, and they form the basis for the standard textbooks on causal inference \cite{PEARL,SGS}. Let $P(\tbf{X})$ be the observed statistics of the reference experiment for any classical stochastic system with causal structure $G(\tbf{X})$. Then the physical Markov conditions for this class of systems may be summarized by the constraint:\\
\tbf{CMC. The Causal Markov Condition:}\\
$P(\tbf{X})$ factorizes according to:
\eqn{ \label{eqn:cmc}
P(\tbf{X})=\prod_i \, P(X_i|\pa{X_i})
}
where $\pa{X_i}$ are the parents of $X_i$ in $G(\tbf{X})$.\\
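As a minimal illustration of Eq. \eqref{eqn:cmc}, the sketch below (all conditional probability tables are hypothetical) assembles the joint distribution of the causal chain $X_1 \rightarrow X_2 \rightarrow X_3$ as $P(X_1)P(X_2|X_1)P(X_3|X_2)$ and confirms a conditional independence that the factorization implies:

```python
import itertools

# Hypothetical conditional probability tables for the chain X1 -> X2 -> X3.
p_x1 = {0: 0.6, 1: 0.4}
p_x2_given_x1 = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # P(X2 | X1)
p_x3_given_x2 = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.1, 1: 0.9}}   # P(X3 | X2)

# Assemble the joint distribution according to the CMC factorization.
joint = {}
for x1, x2, x3 in itertools.product([0, 1], repeat=3):
    joint[(x1, x2, x3)] = p_x1[x1] * p_x2_given_x1[x1][x2] * p_x3_given_x2[x2][x3]

assert abs(sum(joint.values()) - 1.0) < 1e-12   # a normalized distribution

def cond(x3, x1, x2):
    """P(X3 = x3 | X1 = x1, X2 = x2), computed from the joint."""
    den = sum(joint[(x1, x2, y)] for y in (0, 1))
    return joint[(x1, x2, x3)] / den

# Screening off: given X2, the value of X1 is irrelevant to X3.
assert abs(cond(1, 0, 1) - cond(1, 1, 1)) < 1e-12
assert abs(cond(1, 0, 0) - cond(1, 1, 0)) < 1e-12
```

The final assertions are exactly the chain-structure independence that the factorization guarantees for this DAG.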
Given a class of physical systems, such as the classical stochastic systems, there are two distinct ways to establish the physical Markov conditions for this class. The first way is to start with an abstract formal definition of the class of systems (eg. by postulating a `mechanism') and then derive the \tbf{CMC} as a logical consequence (see eg. Pearl \cite{PEARL}).
The second route is more empirical and involves the iteration of two fundamental steps. First we restrict attention to experiments in which the causal structure takes one of several simple forms (specifically, the common-cause, causal chain and common-effect structures shown in Fig. \ref{fig:threedags} of Sec. \ref{sec:cmc}). At this stage, the `class of systems' merely refers to some set of laboratory preparation procedures in which we are interested. Under these restrictions, we observe the behavior of the class of systems over many trials. Given the causal structure, any statistical independence that is found to hold for \tit{all} systems in the chosen class (or at least is true for any `typical' member of the class) is then declared to be a physical Markov condition for that class. In the second step, we extrapolate these empirically derived conditions to arbitrary causal structures. This extrapolation is \tit{postulated}, rather than derived, and serves to \tit{define} the class of systems in more general causal structures (the method of extrapolation is discussed in Sec. \ref{sec:cmc} ). This then enables us to infer causal structure by fixing the class of physical systems and comparing the observations to different candidate causal graphs.
The advantage of this second approach is that it emphasizes that causal structure and the properties of material systems are inextricably interwoven. First we make an assumption about the causal structure and use it to establish the behaviour of a class of systems, then we fix the class of systems and use it to deduce further causal relations. This way of thinking about physical Markov conditions has the advantage that it can easily accommodate the experimental evidence that quantum systems exhibit different behaviour than classical systems in the same causal structure. Whereas Bell's theorem makes this difference appear dramatic and even paradoxical, on the present account it is interpreted as displaying a simple empirical truth: that quantum systems interact with causal structure in a way that is fundamentally different to the way classical systems do. It also leads to a much more intuitive understanding of the condition of \tit{no fine-tuning}, which we will discuss next.
\subsection{Fine-tuning and latent variables}
The principle of \tit{no-fine-tuning} may be stated as follows:\\
\tbf{NFT. No Fine-tuning:}\\
Let $P(\tbf{X})$ be the behaviour of a typical member of a given class of physical systems, in an experiment where the causal structure is $G(\tbf{X})$. Then there are no statistical independences in $P(\tbf{X})$ beyond those that are implied by the Causal Markov Condition and $G(\tbf{X})$ for that class of systems.\\
The assumption \tbf{NFT} can be motivated from the considerations of the previous section. First consider the case that $G(\tbf{X})$ is one of the three `simple cases' (common-cause, causal chain, or common effect). Any conditional independences that hold in $G(\tbf{X})$ for all (or for `typical') members of the given class of systems must then be implied by the Causal Markov Condition, because the \tbf{CMC} has effectively been \tit{defined} so as to include them. In these cases, therefore, \tbf{NFT} holds as a matter of definition. For general causal structures, one essentially postulates that \tbf{NFT} continues to hold, hence that the \tbf{CMC} continues to capture all of the typical features of the class of systems in these more general experiments. This postulate is useful, for it serves as a powerful aid to causal inference: it allows one to eliminate any causal structures that don't explain (via the \tbf{CMC}) all of the independences in the observed behaviour $P(\tbf{X})$.
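To see concretely what \tbf{NFT} excludes, the following sketch (a textbook-style counterexample; the mechanisms are our own hypothetical choices, not from the main text) simulates a fine-tuned system: the causal structure is $X \rightarrow Y \rightarrow Z$ together with the direct edge $X \rightarrow Z$, yet the two paths cancel exactly, producing an independence between $X$ and $Z$ that is not implied by the \tbf{CMC} for that graph:

```python
import random

random.seed(0)
trials = 100_000
samples = []
for _ in range(trials):
    x = random.randint(0, 1)       # exogenous cause
    n = random.randint(0, 1)       # independent noise term on Y
    y = x ^ n                      # Y is a child of X
    z = x ^ y                      # Z is a child of both X and Y; note z == n
    samples.append((x, z))

count_x0 = sum(1 for x, z in samples if x == 0)
p_z1_given_x0 = sum(z for x, z in samples if x == 0) / count_x0
p_z1_given_x1 = sum(z for x, z in samples if x == 1) / (trials - count_x0)

# X is a parent of Z, yet Z is (exactly) independent of X: the influences
# along X -> Z and X -> Y -> Z cancel. The CMC for this graph does not imply
# this independence, so the model counts as fine-tuned.
assert abs(p_z1_given_x0 - 0.5) < 0.02
assert abs(p_z1_given_x1 - 0.5) < 0.02
```

\tbf{NFT} declares such exact cancellations atypical, which is what licenses eliminating candidate graphs that fail to explain an observed independence.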
\tit{Remark:} The reader may be uneasy about the vague usage of the word `typical' in the above. One way to formalize this notion is to imagine selecting a system at random from the given class, using some measure defined on the space of possible systems within the class. Then \tbf{NFT} can be read as saying that the subset of systems exhibiting extra independences beyond \tbf{CMC} has measure zero within the class. To maintain greater generality, however, we prefer to leave `typical' a flexible notion to be decided as a matter of practice.
So far we have talked about cases in which all relevant variables of the system are measured in the reference experiment. Frequently, it is not practical to measure all of the relevant variables. In such cases, the causal structure includes \tit{latent variables} $\tbf{L}$ that do not appear in the observed behaviour $P(\tbf{X})$. This defines the strictly larger class of \tit{classical stochastic systems with latent variables} (in which we can recover the classical stochastic systems by setting $\tbf{L}=\emptyset$). The physical Markov conditions for this class are given by the following (strictly weaker) conditions:\\
\tbf{CMC2. Causal Markov Condition (with latent variables):}\\
There exists an extended distribution $P(\tbf{X},\tbf{L})$, such that $P(\tbf{X},\tbf{L})$ satisfies the \tbf{CMC} for the causal structure of the system $G(\tbf{X},\tbf{L})$, and $P(\tbf{X})$ is obtained from $P(\tbf{X},\tbf{L})$ by marginalizing over the latent variables, i.e.
\eqn{
P(\tbf{X}) = \zum{\tbf{l}}{} \, P(\tbf{X},\tbf{l}) \, .
}
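The simplest instance of \tbf{CMC2} is the latent common-cause structure $X_1 \leftarrow L \rightarrow X_2$. In the sketch below (probability tables hypothetical), the extended distribution $P(X_1,X_2,L)$ factorizes as the \tbf{CMC} requires, and marginalizing over $L$ yields an observed behaviour $P(X_1,X_2)$ that is correlated despite containing no observed common cause:

```python
import itertools

# Hypothetical tables: latent L is a uniform bit; each observed variable
# copies L with probability 0.9.
p_l = {0: 0.5, 1: 0.5}
p_x_given_l = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}

# Extended distribution P(X1, X2, L), factorized as the CMC requires.
extended = {}
for l, x1, x2 in itertools.product([0, 1], repeat=3):
    extended[(x1, x2, l)] = p_l[l] * p_x_given_l[l][x1] * p_x_given_l[l][x2]

# Observed behaviour: marginalize over the latent variable.
observed = {}
for (x1, x2, l), p in extended.items():
    observed[(x1, x2)] = observed.get((x1, x2), 0.0) + p

p_x1_1 = sum(p for (x1, x2), p in observed.items() if x1 == 1)
p_x2_1 = sum(p for (x1, x2), p in observed.items() if x2 == 1)

# X1 and X2 are correlated in P(X1, X2), with no observed common cause:
# P(1,1) = 0.41 > 0.25 = P(X1=1) P(X2=1).
assert observed[(1, 1)] > p_x1_1 * p_x2_1
```

The observed correlation is explained only once the latent $L$ is restored, which is precisely the inferential role that \tbf{CMC2} assigns to latent variables.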
For the sake of simplicity, we will assume from here onwards (unless stated otherwise in the text) that there are no \tit{latent variables} in the systems of interest.
\subsection{Manipulations and un-measurements \label{sec:activemanips}}
In this work, we consider manipulations as modes of measurement that break causal connections, hence they do not include either reference measurements $C^X=\oslash$ or un-measurements $C^X=\brm{undo}$. We avoid making strong commitments as to whether `manipulations' must be effected by agents, and if so whether these should be conscious, etc, but propose only some minimal properties that manipulations ought to satisfy, of which the first is:\\
\tbf{Externality.} A manipulation represents the physical influence on the system by an \tit{external} entity, so as to exclude all \tit{causes} within the system from affecting the manipulated variable. We assume that the system's response to the manipulation does not depend on the nature of this external entity, i.e. whether it is a conscious agent, a physical system, an artificial intelligence, an environment, God, and so on.\\
Note that this leads us into a circularity, since manipulations depend on the definition of a \tit{cause} (i.e. by asserting that the influencing entity has no \tit{causes} in the system), but our proposed definition of manipulationist causation \tbf{MC} is itself based on the concept of a manipulation! Fortunately, this is not a vicious circle, as any realistic situation always involves some causal relations that may be postulated \tit{a priori}. For instance, it is generally accepted that the experimenter has the ability to freely choose which buttons to push on the apparatus, independently of the variables within the system. Having specified such originating causes, we can then deduce \tit{other} causal relations that hold \tit{within} the system.\\
Note that externality is a necessary but not sufficient property of any manipulation. Hence any variable whose causes lie entirely outside the system is a \tit{candidate} for being a manipulation, but whether or not it \tit{is} a manipulation may depend on other considerations beyond the scope of our analysis (that we leave to philosophers). Thus, while the convention is to write $C^Z=\oslash$ for an exogenous variable $Z$ in the reference experiment -- thereby declaring it not to be a manipulation -- the property of externality suggests that our reasoning would be unaffected if we were to regard it as a manipulation.
In fact, this allows us to infer what would happen if an exogenous variable were to be manipulated, since (according to externality) nothing about the system would change. We can formalize this as a special inference rule for manipulations of exogenous variables: \\
\tbf{Exogenous indifference:} Let $Z$ be an exogenous variable in $G(\tbf{X},Z)$. Then the externality of manipulations implies that the system's behaviour under manipulations of $Z$ is the same as in the reference experiment:\\
\eqn{ \label{eqn:exogextern}
P(\tbf{X}=\tbf{x},Z=z| C^Z=\brm{do}(Z=z)) = P(\tbf{X}=\tbf{x},Z=z| C^Z=\oslash) \qquad \, \forall \, z \in \trm{dom}(Z)
}
Besides externality, manipulations satisfy the following principle, which also applies to measurements more generally:\\
\tbf{CNS. Counterfactual no-signalling:} If one doesn't condition on the descendants of $Z$, then different ways of measuring $Z$ cannot affect the causal non-descendants of $Z$. Formally, let $C^Z \in \{ c,c' \}$ toggle between different ways of measuring $Z$ (that need not be confined to manipulations) in a system whose causal relations are described by a DAG $G(\tbf{A} \tbf{D} \tbf{R} Z)$. Here, $\tbf{A}$ are the causal ancestors of $Z$, $\tbf{D}$ are the descendants of $Z$, and $\tbf{R}$ are the remainder. Then:
\eqn{ \label{eqn:extracontext}
P(\tbf{A} \, \tbf{R}|\,C^Z=c)=P(\tbf{A} \, \tbf{R}|\,C^Z=c') \, \qquad \forall c,c' \in \trm{dom}(C^Z)\,.
}
It is important to note that \tbf{CNS} is conceptually distinct from the principle of \tit{no-signalling} found elsewhere in the literature, which states that, within the reference experiment, an exogenous variable $Z$ (often called a `measurement setting') can only be correlated with its causal descendants. Formally, it can be expressed as:\\
\tbf{NS. No-signalling:}\\
For an exogenous variable $Z$ with non-descendants $\tbf{R}$ we have:
\eqn{
P(\tbf{R}|Z=z) = P(\tbf{R}|Z=z') \, , \qquad \forall z,z' \in \trm{dom}(Z) \, ,
}
or, more prosaically, $P(\tbf{R}|Z)=P(\tbf{R})$.\\
Although they are conceptually distinct, \tbf{CNS} and \tbf{NS} can be linked by the following rationale. Since an exogenous variable $Z$ has no causes within the system (i.e. $\tbf{A}=\emptyset$) we can, according to externality, equally imagine that it has an external cause given by a set of manipulations of the form $\{ C^Z=\brm{do}(Z=z) : \, z \in \trm{dom}(Z) \}$, and this should make no difference to the statistics, i.e.
\eqn{
P(\tbf{R}|C^Z=\brm{do}(Z=z)) = P(\tbf{R}|Z=z) \, , \qquad \forall z \in \trm{dom}(Z) \, .
}
By applying \tbf{CNS} to that equation, we recover rule \tbf{NS}, which can therefore be thought of as the special case of \tbf{CNS} applied to exogenous variables. This is significant because \tbf{CNS} is a general principle that is expected to hold regardless of the class of systems one is working with. Hence, within the general framework discussed here, all classes of physical systems are assumed to obey \tbf{CNS} and hence also no-signalling, regardless of whether they are classical, quantum, or something else.
The principles of externality and counterfactual no-signalling are assumed to apply to all manipulations, but specific types of manipulations may also have additional defining properties.
In classical causal modeling it is customary to restrict attention to interventions, but in this work we will also include inferences about un-measurements. To the properties of un-measurements mentioned in Sec. \ref{sec:measurements} we add the condition that, so long as a variable does not depend on the value of $Z$, it should also not depend on whether or not $Z$ is measured. More precisely:\\
\tbf{CSO. Counterfactual screening-off:} Suppose that for some disjoint $\tbf{A},\tbf{B},Z$ we have that $\tbf{A}$ is independent of $Z$ conditional on $\tbf{B}$ in the reference behaviour. Then $\tbf{A}$ is also independent of \tit{whether or not Z is measured} conditional on $\tbf{B}$. Formally,
\eqn{
P(\tbf{A}|\tbf{B} Z , \, C^Z=\oslash) &=& P(\tbf{A}|\tbf{B} , \,C^Z=\oslash) \nonumber \\ \Rightarrow P(\tbf{A}|\tbf{B} , \, C^Z=\brm{undo} ) &=& P(\tbf{A}|\tbf{B} , \, C^Z=\oslash) \, .
}
\section{Counterfactual classical causal models \label{sec:ccms}}
In this section, we restrict attention to causal modeling with the class of classical stochastic systems, assuming no latent variables. We discuss the origin and characteristics of the physical Markov conditions that hold for these systems, and then we introduce \tit{non-disturbing measurements} and \tit{interventions} and discuss their corresponding counterfactual inference rules.
\subsection{The Causal Markov Condition \label{sec:cmc}}
\tit{Classical causal models} refer to causal models of classical stochastic systems. Hence the physical Markov conditions are those entailed in \tbf{CMC} and these tell us how the causal relations of the system constrain the allowed behaviour in the reference experiment under the assumption of \tbf{NFT}. We then have:\\
\tbf{Classical Causal Model:}\\
A Classical Causal Model consists of a pair $\{P(\tbf{X}), G(\tbf{X}) \}$ where $P(\tbf{X})$ satisfies the \tit{Causal Markov Condition} and no fine-tuning for the DAG $G(\tbf{X})$.\\
As discussed earlier in Sec. \ref{sec:physMarkov}, the \tbf{CMC} can be extrapolated to general causal structures from three special cases. We will now discuss the details of how this extrapolation is carried out. The first special case is already familiar; we repeat it here for convenience:\\
\noindent \tbf{FCC. Factorization on common causes:} \\
Suppose neither of $X_1,X_2$ is a cause of the other and $\tbf{C}$ is the complete set of their shared ancestors, i.e. $\tbf{C}$ contains all `common causes' (see Fig. \ref{fig:threedags}(a)); then $P(X_1 X_2|\tbf{C})=P(X_1|\tbf{C})P(X_2 |\tbf{C})$ \footnote{Perhaps contrary to one's first intuition, it is not sufficient to condition only on the set of variables that are \tit{parents} of both $X_1,X_2$. A trivial counterexample is $X_1 \leftarrow A_1 \leftarrow A_3 \rightarrow A_2 \rightarrow X_2$.}.\\
Taken as a postulate, the principle \tbf{FCC} was historically conceived as just one part of a more general postulate proposed by Hans Reichenbach \cite{RPCC}. The \tbf{FCC} is sometimes called the \tit{quantitative} part of Reichenbach's Principle to distinguish it from the \tit{qualitative} component \cite{CAVLAL}, which we will here simply refer to as `Reichenbach's Principle':\\
\noindent \tbf{RP. Reichenbach's Principle:}\\
If neither of $X_1,X_2$ is a cause of the other and they have no shared ancestors, then they are statistically independent: $P(X_1 X_2)=P(X_1)P(X_2)$. (Note: Following Ref. \cite{ALLEN} we have presented it in the contrapositive of its more common form: `if two variables are correlated, one must cause the other or they must have a common cause, or both').\\
Since \tbf{RP} can be obtained from \tbf{FCC} by setting $\tbf{C}=\emptyset$, it is a strictly weaker principle. \tbf{RP} captures the intuitive fact that two systems with independent causal histories should be initially uncorrelated. Unlike \tbf{FCC}, whose application to quantum systems is controversial, \tbf{RP} is widely accepted to hold for quantum systems. This will be discussed further in Sec. \ref{sec:qmc}. In addition to \tbf{RP}, it is usually assumed not only that physical systems are independent prior to interaction, but also that they are correlated afterwards. In Ref. \cite{PRICEBOOK}, Price calls this the `principle of independence' and summarizes it with the slogan `innocence precedes experience'. However, it must be emphasized that the principle is composed of two conceptually distinct components: first, that systems are independent before they interact (\tbf{RP} in the present framework), and second, that they are typically correlated after they interact. We define this latter requirement as:\\
\noindent \tbf{PE. The Principle of Experience:}\\
If neither of $X_1,X_2$ is a cause of the other and they do have shared ancestors, then one generally expects them to be correlated: $P(X_1 X_2) \neq P(X_1)P(X_2)$. \\
Price's `principle of independence' in our framework is then the conjunction of \tbf{RP} and \tbf{PE}, which is a manifestly asymmetric combination. Price argues (and we agree) that this asymmetric combination, while intuitive in the macroscopic classical world, does not extend to microscopic systems, and hence that quantum systems should satisfy a more symmetric principle. Later on in the present work we will advocate retaining \tbf{RP} and dropping \tbf{PE} for quantum systems. For the moment we are discussing classical systems, and so will retain \tbf{PE}. The second simple case on which the \tbf{CMC} is based is that of the causal chain, for which the following is assumed to hold:\\
\noindent \tbf{SSO. Sequential screening-off:}\\
Suppose $X_1$ causes $X_2$ and every path connecting $X_1$ to $X_2$ is intercepted by a variable in $\tbf{D}$, i.e. contains a chain $X_1 \rightarrow D \rightarrow X_2$, where the $D$ are not causes of one another (see Fig. \ref{fig:threedags}(b)); then $P(X_1 X_2|\tbf{D})=P(X_1|\tbf{D})P(X_2|\tbf{D})$.\\
The principle \tbf{SSO} says that conditioning on $\tbf{D}$ `screens off' the future measurement $X_2$ from the past, because knowing $\tbf{D}$ makes the information $X_1$ redundant. The third simple case on which the \tbf{CMC} is based is:\\
\noindent \tbf{BK. Berkson's rule:}\\
Suppose neither of $X_1,X_2$ is a cause of the other and they have no shared ancestors, and suppose $B$ is a common descendant of $X_1,X_2$ (see Fig. \ref{fig:threedags}(c)); then one generally expects $X_1,X_2$ to be correlated conditional on $B$, i.e. $P(X_1X_2|B) \neq P(X_1|B)P(X_2|B)$.\\
The principle \tbf{BK} derives from a well-known result in statistics called `Berkson's Paradox', after the medical statistician Joseph Berkson \cite{BERK}. Despite not really being a paradox, it is often found counter-intuitive by newcomers to statistics.
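A short simulation makes \tbf{BK} concrete (the mechanism $B = X_1 \vee X_2$ is a textbook-style illustration, not taken from the main text): two marginally independent causes become anti-correlated once we condition on their common effect:

```python
import random

random.seed(1)
selected = []
for _ in range(100_000):
    x1 = random.randint(0, 1)
    x2 = random.randint(0, 1)
    b = x1 | x2                     # common effect of X1 and X2
    if b == 1:                      # condition on the collider B
        selected.append((x1, x2))

n = len(selected)
p_x1 = sum(x1 for x1, x2 in selected) / n
p_x2 = sum(x2 for x1, x2 in selected) / n
p_both = sum(1 for x1, x2 in selected if x1 == 1 and x2 == 1) / n

# Given B = 1: P(X1=1, X2=1 | B=1) = 1/3 < (2/3)*(2/3), so X1 and X2 are
# anti-correlated conditional on B, despite being marginally independent.
assert p_both < p_x1 * p_x2
```

Learning that $X_1=1$ `explains away' the observed effect $B=1$, making $X_2=1$ less likely; this is the typical correlation that \tbf{BK} anticipates.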
\tit{Remark:} Whereas \tbf{FCC} and \tbf{SSO} give conditions under which statistical independence is \tit{necessary}, the principle \tbf{BK} gives conditions under which correlations are `typical' but not necessary. In fact it is a convention of causal modeling that physical Markov conditions either assert the \tit{necessity} of independence or the \tit{possibility} of correlation, but never assert the \tit{necessity} of correlation or the \tit{possibility} of independence. That is because it is not possible to encode all four types of statements within a single graph. If one adheres to the first two types of statements, the graph is called an `independence map' of correlations; if one opts for the latter two types, it is a `dependence map'.
\begin{figure}[!htb]
\centering\includegraphics[width=0.7\linewidth]{threedags.PNG}
\caption{Three special cases in which the Causal Markov Condition reduces to simpler principles: (a) variables $X_1,X_2$ only connected through common causes, where \tbf{CMC} reduces to \tbf{FCC}; (b) $X_1,X_2$ only connected through intermediate causes, where \tbf{CMC} reduces to \tbf{SSO}; (c) $X_1,X_2$ connected only through common effects, where \tbf{CMC} reduces to \tbf{BK}.}
\label{fig:threedags}
\end{figure}
Strictly speaking, the aforementioned conditions \tbf{FCC}, \tbf{RP}, \tbf{PE}, \tbf{SSO}, \tbf{BK} can only be applied to causal graphs having the form of one of the three special cases displayed in Fig. \ref{fig:threedags}. Ideally, we would like to have an empirical postulate that could apply to a system with arbitrary causal structure. One way to achieve this is to convert the conditions into corresponding \tit{graphical} rules that allow their consequences to be deduced by direct inspection of the causal structure. The graphical rules can then be jointly applied to any causal structure. The full details of how one obtains graphical rules from principles are given in Appendix \ref{app:graphrules}. The result is the following alternative graphical formulation of the \tbf{CMC} \cite{PEARL,SGS}:\\
\tbf{CMC3. Causal Markov Condition (graphical version)}:\\
Let $\tbf{U}$,$\tbf{V}$ and $\tbf{W}$ be disjoint sets of nodes in a DAG $G(\tbf{X})$. A path from $X_1$ to $X_2$ in $G(\tbf{X})$ is said to be \tit{blocked} by $\tbf{W}$ iff at least one of the following graphical conditions holds:\\
\noindent \tbf{g-FCC.} There is a fork $X_1 \leftarrow C \rightarrow X_2$ on the path where $C$ is in $\tbf{W}$;\\
\tbf{g-SSO.} There is a chain $X_1 \rightarrow C \rightarrow X_2$ on the path where $C$ is in $\tbf{W}$;\\
\tbf{g-BK.} There is a collider $X_1 \rightarrow C \leftarrow X_2$ on the path where $C$ is \tit{not} in $\tbf{W}$ and has no descendants in $\tbf{W}$.\\
If \tit{all} paths between $\tbf{U}$,$\tbf{V}$ are blocked by $\tbf{W}$, then the \tit{d-separation theorem} \cite{PEARL} states that $\tbf{U}$ and $\tbf{V}$ are independent conditional on $\tbf{W}$ in any distribution $P(\tbf{X})$ that satisfies the \tbf{CMC} (in the form of Eq. \eqref{eqn:cmc}) for the DAG $G(\tbf{X})$. \\
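The blocking conditions \tbf{g-FCC}, \tbf{g-SSO} and \tbf{g-BK} can be checked mechanically. The sketch below (an illustrative implementation, not part of the formal framework) decides d-separation by the standard reachability procedure, in which the allowed moves of the search correspond directly to the three graphical rules:

```python
from collections import deque

def d_separated(parents, U, V, W):
    """Return True iff every path between the disjoint node sets U and V is
    blocked by W in the DAG specified as {node: set of parent nodes}."""
    children = {n: set() for n in parents}
    for n in parents:
        for p in parents[n]:
            children[p].add(n)
    # Nodes in W together with their ancestors: a collider is opened (g-BK)
    # exactly when it lies in this set.
    anc_of_w = set()
    frontier = set(W)
    while frontier:
        anc_of_w |= frontier
        frontier = {p for n in frontier for p in parents[n]} - anc_of_w
    # Breadth-first search over (node, direction-of-travel) states.
    queue = deque((u, 'up') for u in U)
    visited = set(queue)
    while queue:
        node, direction = queue.popleft()
        if node not in W and node in V:
            return False                # an active (unblocked) path was found
        moves = []
        if direction == 'up' and node not in W:
            moves += [(p, 'up') for p in parents[node]]
            moves += [(c, 'down') for c in children[node]]
        elif direction == 'down':
            if node not in W:           # a node in W halts travel (g-FCC/g-SSO)
                moves += [(c, 'down') for c in children[node]]
            if node in anc_of_w:        # g-BK: collider opened by W
                moves += [(p, 'up') for p in parents[node]]
        for state in moves:
            if state not in visited:
                visited.add(state)
                queue.append(state)
    return True

# The three special cases of Fig. (a)-(c):
fork     = {'C': set(), 'X1': {'C'}, 'X2': {'C'}}
chain    = {'X1': set(), 'X2': {'X1'}, 'X3': {'X2'}}
collider = {'X1': set(), 'X2': set(), 'B': {'X1', 'X2'}}

assert d_separated(fork, {'X1'}, {'X2'}, {'C'})          # g-FCC blocks the fork
assert d_separated(chain, {'X1'}, {'X3'}, {'X2'})        # g-SSO blocks the chain
assert d_separated(collider, {'X1'}, {'X2'}, set())      # unconditioned collider blocks
assert not d_separated(collider, {'X1'}, {'X2'}, {'B'})  # g-BK: conditioning opens it
```

In the collider graph the path stays blocked until $B$ (or one of its descendants) enters the conditioning set, exactly as \tbf{g-BK} prescribes.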
Note that the principles \tbf{PE} and \tbf{RP} are implicit in the graphical rules, in the following way. If \tbf{PE} did not hold, then there would be another way in which two variables could be independent, namely, by sharing a common cause that is not conditioned upon. The absence of such a rule in the above list is the result of enforcing \tbf{PE}. The principle \tbf{RP} is implicit in the rule \tbf{g-BK}. If \tbf{RP} were false, then the mere presence of a common descendant might enable two variables to be correlated, which would mean that \tbf{g-BK} could not be a graphical rule.
Thus by interpreting the conditions \tbf{FCC}, \tbf{RP}, \tbf{PE}, \tbf{SSO}, \tbf{BK} as \tit{graphical} criteria as described above, one can derive any and all constraints implied by the \tbf{CMC}. In this way, although the three conditions \tbf{FCC}, \tbf{SSO}, \tbf{BK} individually apply to only limited classes of causal structures, they can be combined via the graphical representation to obtain a condition that applies to arbitrary causal structures, and this condition is the \tbf{CMC}.
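The three blocking rules can be turned directly into a d-separation test. The following is a minimal, self-contained sketch in Python; the dictionary-of-parents encoding and all names are ours, chosen for illustration, and the exhaustive path enumeration is meant only to make the correspondence with \tbf{g-FCC}, \tbf{g-SSO} and \tbf{g-BK} explicit:

```python
def descendants(dag, node):
    """All strict descendants of `node`, where `dag` maps each node to its parent list."""
    children = {n: [] for n in dag}
    for child, ps in dag.items():
        for p in ps:
            children[p].append(child)
    seen, stack = set(), [node]
    while stack:
        for c in children[stack.pop()]:
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def undirected_paths(dag, u, v):
    """All simple paths between u and v, ignoring edge direction."""
    adj = {n: set(dag[n]) for n in dag}
    for child, ps in dag.items():
        for p in ps:
            adj[p].add(child)
    paths, stack = [], [[u]]
    while stack:
        path = stack.pop()
        if path[-1] == v:
            paths.append(path)
            continue
        for n in adj[path[-1]]:
            if n not in path:
                stack.append(path + [n])
    return paths

def blocked(dag, path, W):
    """Check g-FCC / g-SSO / g-BK at each interior node of the path."""
    for a, c, b in zip(path, path[1:], path[2:]):
        if a in dag[c] and b in dag[c]:           # collider a -> c <- b
            if c not in W and not (descendants(dag, c) & W):
                return True                        # g-BK blocks
        elif c in W:                               # fork or chain through c
            return True                            # g-FCC / g-SSO blocks
    return False

def d_separated(dag, u, v, W):
    return all(blocked(dag, p, W) for p in undirected_paths(dag, u, v))

# The three special cases behave as the rules dictate:
chain    = {'A': [], 'B': ['A'], 'C': ['B']}      # A -> B -> C
fork     = {'C': [], 'A': ['C'], 'B': ['C']}      # A <- C -> B
collider = {'A': [], 'B': [], 'C': ['A', 'B']}    # A -> C <- B
assert d_separated(chain, 'A', 'C', {'B'}) and not d_separated(chain, 'A', 'C', set())
assert d_separated(fork, 'A', 'B', {'C'}) and not d_separated(fork, 'A', 'B', set())
assert d_separated(collider, 'A', 'B', set()) and not d_separated(collider, 'A', 'B', {'C'})
```

Enumerating all paths is exponential in general; practical implementations use reachability-style algorithms instead, but the logic of the blocking rules is the same.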
\subsection{Observation and intervention in classical causal models \label{sec:passactclass}}
Our restriction to the class of classical stochastic systems allows us to further refine the measurements involved in the reference experiment. For causal modeling, we need to consider only two kinds of measurements: non-disturbing measurements and interventions.
A non-disturbing measurement is a measurement that can be performed without disturbing the system in any way. To formalize this idea, we need to clarify what is meant by a `disturbance of the system'. In the present framework we are concerned only with what can be detected at the level of probabilities; a disturbance therefore means that the presence or absence of the measurement affects the probabilities of other variables in the system.
More precisely, we partition the measurements on a system in the reference experiment into two sets $\tbf{X} \cup \tbf{Z}$, and write the system's behaviour as $P(\tbf{X} \tbf{Z}|C^Z=\oslash)$ where the conditional $C^Z=\oslash$ is to remind us that `the reference measurements $\tbf{Z}$ are actually performed in this experiment'. Conversely, let $P(\tbf{X}|C^Z=\brm{undo})$ indicate the behaviour of the system in a counterfactual experiment in which the measurements $\tbf{Z}$ are not performed. Then we define:\\
\tbf{ND. Non-disturbing measurements:} \\
The set of measurements $\tbf{Z}$ is called \tit{non-disturbing relative to the set of variables} $\tbf{S} \subseteq \tbf{X}$ if:
\eqn{ \label{eqn:nondisturb}
P(\tbf{S}|C^Z=\brm{undo}) &=& P(\tbf{S}|C^Z=\oslash) \, \nonumber \\
&=& \zum{\tbf{z}}{} \, P(\tbf{S},\tbf{z}|C^Z=\oslash) \, .
}
In the special case that $\tbf{Z}$ is non-disturbing relative to all other variables in the system (i.e. $\tbf{S} = \tbf{X}$) we will simply call $\tbf{Z}$ \tit{non-disturbing}.\\
We can summarize equation \eqref{eqn:nondisturb} as saying that $\tbf{Z}$ are non-disturbing iff not performing them is equivalent to performing them and summing over their outcomes (which resembles, but is conceptually distinct from, the `law of total probability'). Note that this equation is an example of a counterfactual inference rule, although in this instance it does not depend on the causal structure of the system.
\tit{Remark:} Strictly speaking the above definition \tbf{ND} is ambiguous when applied to exogenous variables; since the causes of exogenous variables (if any) lie outside the system and are not subject to analysis, we cannot say what would have happened if an exogenous measurement were not performed. Indeed, since they play a role in setting the very conditions that define the system, we cannot `un-measure' them without enlarging the scope of analysis to include a larger encompassing system, but the new exogenous variables of the larger system will again necessarily be ambiguous.
In the case of classical causal models, we restrict attention to experiments in which all variables represent either non-disturbing measurements or interventions. In this case an experiment will be at the top of the hierarchy of counterfactual experiments (cf. Sec. \ref{sec:experiments}) if and only if all non-exogenous variables are non-disturbing. Thus, in classical causal models, the reference experiment is typically also a passive observational scheme. However, this need not be true in general, as we will see later with quantum systems.
In contrast to non-disturbing measurements, an \tit{intervention} is a type of manipulation that disturbs the system in a precise way that targets a specific variable. An intervention can usefully be thought of as equivalent to introducing a randomized control into an experiment. Physically, it means forcing a target variable $W$ to take a particular value in a manner that is independent of its causes $\pa{W}$ within the system. One example is actively controlling the temperature of a system instead of passively measuring its temperature under ambient conditions. Another is assigning patients in a drug trial randomly to the treatment or control groups, instead of allowing them to choose whether to take the treatment themselves.
An intervention $C^{W} = \brm{do}$ results in a new set of probabilities $P(\tbf{X}|C^{W}=\brm{do})$ that describes the behavior of the system in the counterfactual experiment in which the variable $W$ is intervened upon. If we wish to be more specific, we may use $C^{W} = \brm{do}(W=w)$ to mean that the intervention intends to fix the value of $W$ to $w$.
\tit{Remark:} We must be careful to distinguish our notation from that of the \tit{do-conditional} notation used in the literature, e.g. $P(\tbf{X}|\trm{do}(W=w))$ \cite{PEARL,SGS}. The key difference is that our notation treats separately the fact that $W$ is intervened upon, as represented by $C^{W} = \brm{do}$ or $C^{W} = \brm{do}(W=w)$, from the fact that it takes the value $W=w$, which is expressed just by the value of $W$. Thus, for instance, we can assign a non-zero probability to the event $P(W=w'|C^W=\brm{do}(W=w''))$, $w' \neq w''$, which we interpret as the probability that an intervention whose aim is to fix $W$ to $w''$ results in the undesired outcome $W=w'$, as might occur if there were some noise or errors in the physical implementation of the intervention. By contrast, the `do conditional' expression $P(W=w'|\trm{do}(W=w''))$ is either undefined or defined to be zero. Our notation incorporates the standard do-conditional as a special case that obtains when the intervention perfectly achieves its aim, that is, when $P(W=w'|C^W=\brm{do}(W=w'')) = \delta(w',w'')$. Under that assumption, we may identify our expressions of the form $P(\tbf{X}|C^W=\brm{do}(W=w))$ with standard do-conditionals of the form $P(\tbf{X}|\trm{do}(W=w))$. For convenience, in the remainder of this work we will assume that this is the case.
Like all manipulations, interventions satisfy externality, which suggests that the causal structure after the intervention, $G(\tbf{X}|C^{W}=\brm{do})$, should be obtained from $G(\tbf{X})$ in the reference experiment by deleting all incoming arrows to $W$ (see Fig. \ref{fig:intervention}).\\
\begin{figure}[!htb]
\centering\includegraphics[width=0.7\linewidth]{intervention.PNG}
\caption{A causal graph before and after a manipulation (or an intervention) of $W$.}
\label{fig:intervention}
\end{figure}
Beyond this basic rule, interventions are assumed to precisely target $W$, which means that any effect the intervention has on other variables must be mediated through the intervention's effect on $W$ itself. More specifically, if we are contemplating performing interventions on any of the causal parents of some variable $D$, then the probabilities of $D$ should be insensitive to which, if any, of its parents are intervened-upon. Practically speaking, this rule amounts to the elimination of `placebo effects' in the experimental design, hence we define it as:\\
\tbf{NPE. No placebo effect:}\\
Let $\tbf{W} \subseteq \pa{X}$ be any subset of the parents of a variable $X$. Then, conditional on all of its parents, $X$ should be insensitive to whether $\tbf{W}$ is intervened-upon:
\eqn{ \label{eqn:NPE}
P(X|\pa{X},C^{\tbf{W}}=\brm{do}) = P(X|\pa{X}) \, .
}
For a single variable $X$, the intuition behind \tbf{NPE} is straightforward. Intervening on a parent of $X$ can only affect $X$ through two avenues: either through $X$'s direct dependence on the values of its parents, or through the indirect effect of deleting the causes incoming to $\pa{X}$; \tbf{NPE} states that only the former route is legitimate. The latter route can only directly affect the (non-parental) ancestors of $X$, since the induced correlations among these may depend on the causal links to $\pa{X}$ which are disrupted by interventions. For these effects to plausibly `propagate' to $X$ would require the existence of an unblocked path from these ancestors to $X$; however, all such paths are blocked: either the path contains a member of $\pa{X}$ and so is blocked by \tbf{g-SSO}, or else it contains an unconditioned collider and so is blocked by rule \tbf{g-BK}. Hence deleting causal arrows incoming to $\pa{X}$ should have no means within the causal structure of affecting the conditional probabilities $P(X|\pa{X})$; this is what \tbf{NPE} asserts.
In the present work, we will need to generalize this principle to more variables. It is not clear how to do this in the most general way, because the parents of one variable $X_1$ might be children of another variable $X_2$, so some variables might reasonably be sensitive to interventions on the parents of other variables. However, it suffices here to make a restricted generalization of the principle as follows:\\
\tbf{NPE2. Generalized no placebo effect:}\\
Let $\tbf{X}$ be a set of variables, let $\pa{\tbf{X}}$ be the union of all their parents, and let $\tbf{W} \subseteq \pa{\tbf{X}}$ be any subset of these. Finally, let $\tbf{D}$ be all descendants of $\tbf{X}$ such that all directed paths to $\tbf{D}$ from the ancestors of $\tbf{X}$ pass through $\tbf{X}$ itself. Under the assumption that the members of $\pa{\tbf{X}}$ are not causes of one another (direct or indirect), then, conditional on all $\pa{\tbf{X}}$, we expect both $\tbf{X},\tbf{D}$ to be insensitive to whether $\tbf{W}$ is intervened-upon:
\eqn{ \label{eqn:NPE2}
P(\tbf{X}\tbf{D}|\pa{\tbf{X}},C^{\tbf{W}}=\brm{do}) = P(\tbf{X}\tbf{D}|\pa{\tbf{X}}) \, .
}
As with \tbf{NPE}, this assumption is motivated on the grounds that, conditional on the values of $\pa{\tbf{X}}$, the mere fact of intervening on $\tbf{W} \subseteq \pa{\tbf{X}}$ can only directly affect the ancestors of $\pa{\tbf{X}}$. As these have no unblocked paths connecting them to $\tbf{X}$ or $\tbf{D}$, we expect $P(\tbf{X}\tbf{D}|\pa{\tbf{X}})$ to be insensitive to such effects, and this is what \tbf{NPE2} asserts. (Note: since the $\pa{\tbf{X}}$ are not causes of one another, it is again true that any path from the unconditioned ancestors of $\tbf{X}$ leading to $\tbf{X},\tbf{D}$ must either go through $\pa{\tbf{X}}$ and so be blocked by \tbf{g-SSO}, or else contain an unconditioned collider and so be blocked by \tbf{g-BK}).
We are now ready to derive the counterfactual inference rule for interventions. It is enough to posit that the post-intervention probabilities $P(\tbf{X}|C^W=\brm{do})$ should satisfy the $\tbf{CMC}$ relative to the new DAG $G(\tbf{X}|C^{W}=\brm{do})$. This is a useful requirement, because it means that the post-intervention pair $\{P(\tbf{X}|C^{W}=\brm{do}), G(\tbf{X}|C^{W}=\brm{do})\}$ is again a valid causal model and can therefore be used as the starting point for further counterfactual inferences, such as additional interventions. Given the structure of $G(\tbf{X}|C^{W}=\brm{do})$, the \tbf{CMC} implies that $P(\tbf{X}|C^{W}=\brm{do})$ factorizes into a product of the general form:
\eqn{ \label{eqn:infruleprimer}
P(\tbf{X}|C^{W}=\brm{do})=P(W|\pa{W} , \, C^{W}=\brm{do}) \, \prod_i \, P(X_i|\pa{X_i} , \, C^{W}=\brm{do}) \, ,
}
where $\pa{W}$ refers to the variables that were parents of $W$ in the pre-intervention graph. By externality, we expect $W$ to be independent of its former parents after the intervention, so $P(W|\pa{W}, \, C^{W}=\brm{do})=P(W|C^{W}=\brm{do})$. If we wish to be more precise and use fine-grained ideal interventions, we can reduce this to $P(W|\pa{W}, \, C^{W}=\brm{do}(W=w'))=\delta(w,w')$.
The remaining terms $P(X_i|\pa{X_i} , \, C^{W}=\brm{do})$ must be treated differently depending on whether $X_i$ is a descendant of $W$ or not. In both cases we obtain the same rule,\\
\eqn{ \label{eqn:infrule2}
P(X_i|\pa{X_i} , \, C^{W}=\brm{do}) = P(X_i|\pa{X_i} , \, C^{W}=\oslash) \, ,
}
but the justification differs in each case. For the non-descendants of $W$, \eqref{eqn:infrule2} follows from \tbf{CNS}, whereas for the descendants of $W$, it follows from \tbf{NPE}. Putting these together in \eqref{eqn:infruleprimer}, we finally obtain the inference rule for interventions:\\
\tbf{IR. Inference of interventions:} An observer's probability assignments for a counterfactual experiment where an intervention is performed on $W$ are given by:
\eqn{ \label{eqn:intervaxiom}
P(\tbf{X}|C^{W}=\brm{do}) = P'(W) \, \prod_i \, P(X_i|\pa{X_i} , \, C^{W}=\oslash) \, ,
}
where $P'(W):=P(W|C^{W}=\brm{do})$ is a distribution of values that characterizes the particular intervention. For the case of fine-grained interventions,
\eqn{ \label{eqn:intervaxiom2}
P(\tbf{X}|C^{W}=\brm{do}(W=w')) = \delta(w,w') \, \prod_i \, P(X_i|\pa{X_i} , \, C^{W}=\oslash) \, .
}
Note that in textbooks the above rule is usually stipulated as an axiom, rather than derived from principles as we have done here. To infer the result of interventions on multiple variables $\tbf{W} \subseteq \tbf{X}$, the procedure for intervening on one variable can simply be iterated; it can easily be proven that the order of interventions does not affect the final result, i.e. sequential interventions on different variables commute.
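Rule \tbf{IR} can be exercised on a toy discrete model of the causal chain. The following Python sketch (the conditional probability tables are invented purely for illustration) contrasts the observational marginal with the post-intervention one:

```python
from itertools import product

# Toy chain A -> B -> C over binary variables; the CPT numbers are invented.
parents = {'A': [], 'B': ['A'], 'C': ['B']}
cpt = {
    'A': lambda a: 0.3 if a == 1 else 0.7,
    'B': lambda b, a: 0.9 if b == a else 0.1,
    'C': lambda c, b: 0.8 if c == b else 0.2,
}

def joint(assign, do=None):
    """CMC factorization; `do={W: w}` replaces P(W|pa(W)) by a delta, as in rule IR."""
    do = do or {}
    p = 1.0
    for v, ps in parents.items():
        if v in do:
            p *= 1.0 if assign[v] == do[v] else 0.0
        else:
            p *= cpt[v](assign[v], *[assign[q] for q in ps])
    return p

def marginal(var, value, do=None):
    names = list(parents)
    return sum(joint(dict(zip(names, vals)), do)
               for vals in product([0, 1], repeat=len(names))
               if dict(zip(names, vals))[var] == value)

# Intervening on B cuts the incoming link from A, so P(C=1|do(B=1)) = P(C=1|B=1) = 0.8,
# while A keeps its prior: P(A=1|do(B=1)) = P(A=1) = 0.3.
assert abs(marginal('C', 1, do={'B': 1}) - 0.8) < 1e-12
assert abs(marginal('A', 1, do={'B': 1}) - 0.3) < 1e-12
assert abs(marginal('C', 1) - 0.404) < 1e-12   # the observational marginal differs
```

Passing a `do` dictionary with several keys iterates the rule over multiple targets, and the result is independent of the order in which the delta factors are substituted, reflecting the commutativity of sequential interventions noted above.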
\section{Counterfactual causation of quantum systems \label{sec:qbism}}
\subsection{Problems with defining quantum causal models \label{sec:qm}}
Following the general pattern that was established in the classical case, there are three main questions that need to be answered when attempting to define a quantum causal model. First, what should be used as the reference experiment, and what characterizes the reference measurements used in it? Second, what are the relevant physical Markov conditions for the class of quantum systems, and in what ways do these deviate from the classical ones (i.e. the \tbf{CMC})? Third, what are the inference rules that tell us how to compute the probabilities for interventions ($C^W=\brm{do}$) and un-measurements ($C^W=\brm{undo}$), using only the causal model consisting of the reference behaviour $P(\tbf{X})$ and the causal structure $G(\tbf{X})$? In order to answer these questions, we must first address two well-known obstacles to quantum causal modeling: the fact that quantum measurements are disturbing, and the fact that common causes do not factorize (Bell's Theorem). These are the topics of the next two subsections.
\subsubsection{Screening-off and measurement disturbance \label{sec:soff}}
In quantum mechanics, the most general way to describe a quantum measurement associated with a random variable $Y$ is by a \tit{quantum instrument}:\\
\tbf{QI. Quantum instrument:} Given a random variable $Y$, a \tit{quantum instrument} assigns a completely positive (CP) linear map $\mathcal{M}_y : \mathcal{H}_{\trm{in}} \mapsto \mathcal{H}_{\trm{out}}$ to each outcome $y \in \trm{dom}(Y)$, subject to the conditions:\\
(i) The outcome probabilities can be expressed as $P(Y=y)=\tr{\mathcal{M}_y(\rho)}$ where $\rho$ is a density operator representing the \tit{input state} to $Y$;\\
(ii) The induced map $\mathcal{C}(\cdot) := \zum{y}{} \mathcal{M}_y (\cdot)$ defined by summing over the outcomes $Y$ is a valid \tit{quantum channel}, i.e. $\mathcal{C}$ is completely positive and trace-preserving (CPTP) and hence maps density operators on $\mathcal{H}_{\trm{in}}$ to density operators on $\mathcal{H}_{\trm{out}}$.\\
(iii) $\mathcal{H}_{\trm{in}}$ (resp. $\mathcal{H}_{\trm{out}}$) represents the Hilbert space of the system immediately prior to (resp. after) the measurement $Y$. Note: it is natural to assume that the measurement process preserves the dimension, so we will adopt the convention that dim($\mathcal{H}_{\trm{in}}$)=dim($\mathcal{H}_{\trm{out}}$):=$d_Y$, and will sometimes use the notation $\mathcal{H}_Y$ to refer to any Hilbert space of dimension $d_Y$.\\
A measurement described by a quantum instrument $\{ \mathcal{M}_Y \}$ is non-disturbing in the sense of \tbf{ND} only if it describes a channel that does not change the quantum state, i.e. only if $\mathcal{C}(\rho)=\rho$ for all relevant input states $\rho$. To see why, suppose $\mathcal{C}(\rho) \neq \rho$ for some input $\rho$: then the probabilities $P(Z)$ of an immediately subsequent measurement $Z$ would suffice to probabilistically distinguish the state $\mathcal{C}(\rho)$ from $\rho$, and hence to distinguish an experiment in which $Y$ is performed prior to $Z$ (with its outcome disregarded) from one in which $Y$ is not performed, making $Y$ disturbing relative to $Z$.
A problem then arises because of the well-known fact that for quantum systems there is `no information without disturbance' \cite{BUSCH}. Formally, this is expressed by the mathematical theorem that the only quantum instrument that can represent a non-disturbing measurement is the \tit{trivial instrument}, whose elements $\mathcal{M}_y$ are all equal to some constant $c_y$ times the identity operator. Since $P(Y=y)=c_y$ is evidently independent of the input state, the outcome of such a measurement provides no information about the measured system.
\tit{Remark:} it is instructive to point out why non-trivial non-disturbing measurements can exist for classical systems. We may interpret the \tit{classical limit} as referring to the special case in which all relevant states $\rho$ are confined to a subset of \tit{classical states} that are defined to be diagonal in a particular basis of Hilbert space (whose selection may be justified, for instance, on the grounds of environmental decoherence). Classical measurements can then be modeled using the quantum formalism as a special case of quantum instruments that map the classical subspace to itself. An instrument is then non-disturbing only if it preserves the states within the classical subspace, which amounts to a much weaker constraint than requiring that all quantum states be preserved. For instance, a projective measurement in the classical basis using the L\"{u}ders rule to define the outgoing state is non-disturbing relative to the classical subspace, and yet it provides sufficient information to reconstruct the input state, i.e. it is \tit{informationally complete} relative to the classical subspace.
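The remark can be made concrete with a small numerical check, sketched here in Python/NumPy for a three-level system (the choice of basis, dimension and test states is arbitrary):

```python
import numpy as np

# Lüders instrument in the designated "classical" basis of a three-level system.
d = 3
projectors = [np.zeros((d, d)) for _ in range(d)]
for k in range(d):
    projectors[k][k, k] = 1.0

def channel(rho):
    """Outcome-summed Lüders instrument: C(rho) = sum_k P_k rho P_k."""
    return sum(P @ rho @ P for P in projectors)

# A classical (diagonal) state is preserved ...
rho_cl = np.diag([0.5, 0.3, 0.2])
assert np.allclose(channel(rho_cl), rho_cl)

# ... and the outcome statistics tr(P_k rho) reconstruct it completely, so the
# measurement is informationally complete relative to the classical subspace.
probs = [np.trace(P @ rho_cl).real for P in projectors]
assert np.allclose(np.diag(probs), rho_cl)

# A state with coherences, by contrast, is disturbed (decohered).
psi = np.ones(d) / np.sqrt(d)
rho_q = np.outer(psi, psi)
assert not np.allclose(channel(rho_q), rho_q)
```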
The fact of no information without disturbance implies that quantum reference measurements must be disturbing. Recalling from Sec. \ref{sec:cmc} that the screening-off condition \tbf{SSO} requires that the measurement produce enough information about the state to render its previous history redundant, this seems to put \tbf{SSO} at odds with the desire for measurements to be minimally disturbing. This has led some authors to propose that the \tbf{SSO} should be relaxed for quantum systems, either by introducing `quantum nodes' that cannot be conditioned upon as in Ref. \cite{HLP}, or by dropping the requirement altogether, as in Refs. \cite{PIEBRUK,FRITZ}. At the opposite extreme, one might choose to allow quantum measurements to be arbitrarily disturbing, even to the point of breaking the causal link between input and output. On the latter view, quantum measurements are a generalization of classical manipulation \cite{COSHRAP,ALLEN}. In these frameworks \tbf{SSO} can trivially be upheld for interventions in which the post-measurement state is simply discarded and a new state prepared in its place independently of the measurement outcome.
From the perspective of the present work, neither of these options is appealing. As one of the physical Markov conditions, \tbf{SSO} is supposed to tell us something fundamental about the nature of possible measurements on physical systems, namely, that it is possible to measure a system in such a way that the acquired information renders the information from previous measurements \tit{redundant} for future measurements (recall Sec. \ref{sec:cmc}). Achieving this by manipulations or interventions is too heavy-handed, for in that case the past information is not merely made redundant by the measurement outcome, but is actually \tit{destroyed} along with the causal link between input and output. Better would be to find a middle ground in which \tbf{SSO} can be retained for quantum reference measurements while avoiding the destructiveness of interventions.
To see how this can be done, first consider the simplest case of \tbf{SSO} involving three sequential measurements $A,B,C$, whose causal relations are assumed to be given by the causal chain $A \rightarrow B \rightarrow C$. Let the measurement of $B$ be represented by a quantum instrument $\{\mathcal{M}_B \}$. The state after preparing a state $\rho_A$ as the input to $B$ and obtaining the outcome $B=b$ may be written as:
\eqn{
\rho_b(A) := \frac{\mathcal{M}_b(\rho_A)}{\tr{\mathcal{M}_b(\rho_A)}} \, .
}
According to \tbf{SSO}, we must have that $A$ and $C$ become uncorrelated conditional on the value of $B$, i.e. that $P(A,C|B)=P(A|B)P(C|B)$. Since the input state to $C$ conditional on $B=b$ is $\rho_b(A)$, it will always be possible to find some measurement $C$ whose outcome is correlated with $A$, so long as $\rho_b(A)$ depends explicitly on the value of $A$. The only way to avoid such correlations for any $\rho_A$ is therefore to demand that $\mathcal{M}_b$ has the form:
\eqn{
\mathcal{M}_b(\rho_A) := \tr{ \mathcal{M}_b(\rho_A) } \, \rho(b)
}
for all $b \in \trm{dom}(B)$, where $\rho(b)$ is a density matrix that can only depend on the outcome $b$. The quantum channel produced by such measurements after summing over $B$ has the form:
\eqn{ \label{eqn:Holevo}
\mathcal{C}(\cdot):= \zum{b}{} \, \tr{ \mathcal{M}_b (\cdot) } \, \rho(b) \, ,
}
which characterizes a class of channels first studied by Holevo \cite{HOLEVO}. These have the interesting property of being equivalent to the class of \tit{entanglement breaking} channels \cite{HORODECKI}, which are defined by the property that the output state $\mathcal{C}(\rho_A)$ cannot be entangled to any other systems, regardless of the input. The form \eqref{eqn:Holevo} shows that it is possible to maintain \tbf{SSO} while at the same time preserving a causal link between the input and output of the measurement $B$. This can be seen in a number of ways, but it is sufficient to note that the output state $\mathcal{C}(\rho_A)$ after summing over $B$ may be written in `ensemble' form as:
\eqn{
\mathcal{C}(\rho_A) = \zum{b}{} \, P(B=b|A) \, \rho(b) \, ,
}
from which it can clearly be seen that the output state depends on $A$ not through the individual states $\rho(b)$, but through their relative weights $P(B=b|A)$ in the ensemble. Therefore, so long as the states $\rho(b)$ actually differ for different outcomes $b$, and so long as $\mathcal{M}_b$ is non-trivial (ensuring that the weights $P(B=b|A)$ do depend on $A$), the instruments of this class do not break the causal link.
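A measure-and-prepare instrument of the form \eqref{eqn:Holevo} can be checked numerically to satisfy \tbf{SSO} while preserving the causal link from $A$ to $B$. A qubit sketch follows; all states, POVMs and priors here are arbitrary choices made for illustration:

```python
import numpy as np

proj = lambda v: np.outer(v, np.conj(v))
ket0, ket1 = np.array([1, 0], complex), np.array([0, 1], complex)
plus, minus = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)

rho_A   = {0: proj(ket0), 1: proj(plus)}    # input states labelled by A = a
E_B     = {0: proj(ket0), 1: proj(ket1)}    # POVM measured by the instrument at B
sigma_B = {0: proj(ket0), 1: proj(plus)}    # states re-prepared given B = b
F_C     = {0: proj(plus), 1: proj(minus)}   # POVM measured at C
P_A     = {0: 0.5, 1: 0.5}

def P_abc(a, b, c):
    """Joint probability for the chain A -> B -> C with M_b(rho) = tr(E_b rho) sigma(b)."""
    return (P_A[a] * np.trace(E_B[b] @ rho_A[a]).real
                   * np.trace(F_C[c] @ sigma_B[b]).real)

# Screening-off: conditional on B, the outcomes A and C are independent, even
# though B's statistics (and hence the output ensemble) depend on A.
for b in (0, 1):
    Pb = sum(P_abc(a, b, c) for a in (0, 1) for c in (0, 1))
    for a in (0, 1):
        for c in (0, 1):
            Pac = P_abc(a, b, c) / Pb
            Pa = sum(P_abc(a, b, cc) for cc in (0, 1)) / Pb
            Pc = sum(P_abc(aa, b, c) for aa in (0, 1)) / Pb
            assert abs(Pac - Pa * Pc) < 1e-12
```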
Having identified the general form of the quantum reference measurements, we might ask whether there is some sense in which they are analogous to the classical passive observations. One benefit of defining an appropriate quantum analog of a classical passive observational scheme is that this would enable us to ask whether quantum systems are better resources for causal inference than classical systems, under the constraint of passive observation (or its analog), see eg. \cite{KUEBLER, RIED, RIEDPHD}. We will discuss this later in Sec. \ref{sec:discuss}; for the time being we take the conservative view that quantum reference measurements belong to their own special class, distinct from both manipulations and passive observations.
\subsubsection{Common causes and entanglement \label{sec:comm}}
In the previous sections, we explained how \tbf{SSO} could be maintained for quantum reference measurements, which were found to be necessarily disturbing measurements. In this section we turn to another of the classical Markov conditions, \tbf{FCC}, and review how it fails for quantum systems. Consider a quantum experiment consisting of three measurements $A,B,C$ represented by the common cause graph $B \leftarrow A \rightarrow C$. In this case the \tbf{FCC} (if applicable to quantum systems) would imply $P(A,B,C)=P(B|A)P(C|A)P(A)$.
To see how this condition can fail, consider a particular implementation of this experiment in which $A$ measures a system with Hilbert space $\mathcal{H}_A := \mathcal{H}_B \otimes \mathcal{H}_C $, and $B,C$ are subsequently performed on the parts of the system that are represented by the respective sub-spaces $\mathcal{H}_B$ and $\mathcal{H}_C $. If the state $\rho_a$ produced by the event $A=a$ is entangled between the partitions corresponding to $B,C$, then it is possible to violate the factorization condition \tbf{FCC}, even under ideal experimental conditions. When dealing with classical systems, a natural response would be to guess that there must be additional \tit{latent variables} $\tbf{L}$ serving as additional common causes of $B,C$, which, if conditioned on together with $A$, would eliminate the correlations (i.e. that the extended principle \tbf{CMC2} should still hold).
There are two strong reasons why this explanation does not work in the quantum implementation just described. The first is the observation that there are no variables in the standard quantum formalism to play the role of $\tbf{L}$; the formalism would therefore have to be regarded as incomplete, and the missing variables sought after experimentally. Yet despite much effort (and notwithstanding philosophical arguments for their inclusion) direct experimental evidence for such variables remains elusive.
Perhaps the most compelling argument against the existence of the hypothesized latent variables is Bell's theorem \cite{BELL76}, which is widely regarded as showing that such hidden variables, if they exist, must possess some highly counter-intuitive properties. Bell's theorem requires that we introduce new exogenous variables $S_A,S_B$ corresponding to the respective measurement settings of $A,B$. In this experiment the corresponding causal graph is assumed to be the common-cause scenario as shown in Fig. \ref{fig:Bell} and the \tbf{CMC2} (allowing for latent variables \tbf{L}) implies the constraint:
\eqn{
P(A,B,C,S_A,S_B)= \zum{\tbf{l}}{} P(A|S_A,C,\tbf{l})P(B|S_B,C,\tbf{l})P(S_A)P(S_B)P(C,\tbf{l}) \, .
}
This constraint can be proven to imply mathematical inequalities on the marginal distribution $P(A,B,C,S_A,S_B)$, which have been found to be violated in experiments using entangled states, in agreement with quantum theory, ruling out any reasonable explanation in terms of latent common causes.
A landmark paper by Wood and Spekkens \cite{WOOD} showed that Bell's theorem can be alternatively expressed as the impossibility of explaining quantum correlations using \tit{any} classical causal model, under the assumptions of \tbf{CMC2} and \tit{no fine-tuning} (\tbf{NFT}). This way of formulating Bell's theorem is very powerful. Since it refers to causal structure, it may be generalized to contextuality scenarios in which space-like separation is not important \cite{CAVNFT}. For the same reason, it applies even to latent variables that defy known physics by travelling faster than light or backwards in time. While this has led some authors to question whether the assumption of \tbf{NFT} is always reasonable (see e.g. Ref. \cite{ALMADA}), we cannot give it up in our framework because, as we discussed in Sec. \ref{sec:physMarkov}, \tbf{NFT} is here taken as a fundamental assumption by which physical Markov conditions such as \tbf{CMC2} come to be established. Instead, our approach forces us to simply reject the classical Markov conditions \tbf{CMC} and \tbf{CMC2}, and replace them with something else that is better suited to quantum systems. In particular, in light of Bell's theorem, the quantum Markov conditions should not include \tbf{FCC}.
\begin{figure}[!htb]
\centering\includegraphics[width=0.3\linewidth]{bell.PNG}
\caption{The Bell scenario, \tit{aka} the common-cause scenario with variable measurement settings. Assuming no fine-tuning, this causal hypothesis is ruled out as an explanation for quantum systems whose statistics violate Bell inequalities.}
\label{fig:Bell}
\end{figure}
\subsection{Quantum reference measurements and counterfactual inference \label{sec:born}}
We are now in a position to tackle the first key question of causal modeling: what are the reference measurements that define a reference observational scheme for quantum systems? In Sec. \ref{sec:soff} it was determined that the quantum reference measurements are distinct from either non-disturbing measurements or interventions. In this section we will propose, as a matter of convention, a precise form for the quantum reference measurements that will provide us with particularly elegant mathematical expressions. Along the way, we unexpectedly make contact with an approach to quantum foundations known as QBism.
For simplicity, let us again take as our reference experiment the example of the `causal chain', in which three measurements $X,Y,Z$ are performed in succession (that is, in time-like separated space-time regions) and whose causal relations are represented by the causal graph $X \rightarrow Y \rightarrow Z$. (Note: we use $X,Y,Z$ here instead of $A,B,C$ to avoid confusing $C$ with the control variable). Here, $Y$ is the quantum measurement whose properties will be investigated. Now suppose that the detector or measuring apparatus that is responsible for measuring $Y$ in its designated space-time region is to be removed from the experiment, or deactivated, as indicated by the conditional $C^Y=\brm{undo}$.
Recall that in the special case where the removed measurements $\tbf{Z}$ are non-disturbing relative to $\tbf{X}$, the appropriate inference rule is (by definition) that given by Eq. \eqref{eqn:nondisturb} in Sec. \ref{sec:passactclass}. For quantum reference measurements, however, a different rule is required. Returning to our example of the causal chain, we begin by asking what is the inference rule to obtain $P(X,Z|C^Y=\brm{undo})$. In fact, this rule has been worked out elsewhere in the literature, for quite different reasons: in the ``QBist'' approach to quantum theory (e.g. \cite{QBCOH,QPLEX} and references therein) it appears in the guise of the Born rule expressed probabilistically (with no direct reference to Hilbert space operators). Due to its importance in QBism it is there named the \tit{Urgleichung}. We now review how the rule is derived.
First note that if we could fully reconstruct the state $\rho(x)$ (which represents the input to the measurement $Y$ conditioned on $X=x$) using only the probabilities $P(Y|X)$, and also fully reconstruct the POVM elements $\{ E_{z} : z \in \trm{dom}Z \}$ from the probabilities $P(Z|Y)$, then the inference rule for obtaining the probabilities $P(X,Z|C^Y=\brm{undo})$ would just be the Born rule itself:
\eqn{ \label{eqn:Born}
P(X,Z|C^Y=\brm{undo}) = \tr{\rho(X) E_{Z} } \, ,
}
since the RHS could then be expanded into some function of the reference probabilities $P(Y|X),P(Z|Y)$. Evidently the full reconstruction of an arbitrary $\rho(X)$ from the probabilities $P(Y|X)$ is possible if and only if $Y$ is an \tit{informationally complete} (IC) instrument:\\
\tbf{ICM. Informationally complete instrument:} An instrument $\{ \mathcal{M}_Y \}$ represents an \tit{informationally complete instrument} if its elements can be decomposed as:
\eqn{
\mathcal{M}_y(\cdot) = \sqrt{F_{y} \vphantom{F^{\dagger}_{y}}} \, (\cdot) \, \sqrt{F^{\dagger}_{y}} \, ,
}
such that $\{ F_y : y \in \trm{dom}(Y) \}$ spans the space of linear operators on $\mathcal{H}_Y$ and $\zum{y}{} F_{y} = \mathbb{I}_{Y}$, where $\mathbb{I}_{Y}$ is the identity operator on $\mathcal{H}_{Y}$. (Note that a random variable $Y$ can only be associated with an informationally complete instrument on a system if $Y$ has at least as many outcomes as the square of the system's dimension, $d^2_Y$, since otherwise there will not be enough $F_y$'s to span the space of linear operators on $\mathcal{H}_{Y}$).\\
For the purposes of obtaining simple and elegant expressions, we choose $Y$ to be a \tit{symmetric} informationally complete instrument (called a \tit{SIC-instrument}): \\
\tbf{SIC. Symmetric informationally complete instrument:} An instrument $\{ \mathcal{M}_Y \}$ is a \tit{SIC-instrument} if $|\trm{dom}(Y)|=d^2_Y$ and
\eqn{
\mathcal{M}_y(\cdot) = \frac{1}{d_Y} \Pi_{y} \, (\cdot) \, \Pi_{y} \, ,
}
where $\{ \Pi_y : y \in \trm{dom}Y \}$ have the special property:
\eqn{
\tr{\Pi_y \Pi_{y'}}=\frac{d_Y \, \delta(y,y')+1}{d_Y+1} \, \, , \qquad \forall \, y,y' \in \trm{dom}Y \, ,
}
and $\{ \frac{1}{d_Y} \Pi_Y \} := \{ \frac{1}{d_Y} \Pi_y : y \in \trm{dom}Y \}$ defines a POVM called a \tit{SIC-POVM}.\\
\tit{Remark:} The post-measurement state corresponding to the outcome $Y=y$ is equal to the pure state projector $\Pi_y$. This may be regarded as an `unsharp' generalization of the L\"{u}ders rule for updating the state after measurement \cite{BUSCH2}, as the $\Pi_y$ are necessarily not quite orthogonal.
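For concreteness, the qubit case ($d_Y=2$) admits the well-known tetrahedral SIC, in which the four $\Pi_y$ project onto Bloch vectors pointing to the vertices of a regular tetrahedron. The following minimal numerical sketch (assuming NumPy is available; not part of the formalism) verifies the completeness and symmetry properties stated above:

```python
import numpy as np

# Qubit (d=2) SIC-POVM from the standard tetrahedral Bloch vectors.
d = 2
bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Rank-1 projectors Pi_y = (I + r_y . sigma)/2; the POVM elements are Pi_y / d.
Pi = [(I2 + r[0]*sx + r[1]*sy + r[2]*sz) / 2 for r in bloch]

# Completeness: sum_y (1/d) Pi_y = I
assert np.allclose(sum(Pi) / d, I2)

# Symmetry: tr(Pi_y Pi_y') = (d*delta(y,y') + 1)/(d + 1)
overlaps = np.array([[np.trace(Pi[a] @ Pi[b]).real for b in range(4)]
                     for a in range(4)])
expected = (d * np.eye(4) + 1) / (d + 1)
assert np.allclose(overlaps, expected)
```

The same check applies, with the appropriate fiducial states, in any dimension where a SIC is known to exist.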
Since the elements of the SIC-POVM define a basis for the space of linear operators, we can expand $\rho(X)$ and the POVM $E_Z$ as:
\eqn{ \label{eqn:rhoE}
\rho(X) &=& \zum{y}{} \, \alpha_y(X) \, \frac{1}{d_Y} \Pi_y \nonumber \\
E_{Z} &=& \zum{y}{} \, \beta_y(Z) \, \frac{1}{d_Y} \Pi_y \, .
}
The coefficients $ \alpha_Y(X), \, \beta_Y(Z)$ are related to the measurement probabilities according to \cite{QBCOH}:
\eqn{ \label{eqn:coeffab}
\alpha_Y(X) &=& d_Y(d_Y+1)P(Y|X )-1 \nonumber \\
\beta_Y(Z) &=& (d_Y+1)P(Z|Y) - \frac{1}{d_Y} \zum{y}{} \, P(Z|y) \, .
}
By substituting \eqref{eqn:rhoE},\eqref{eqn:coeffab} into the right hand side of the Born Rule \eqref{eqn:Born}, one can then establish the QBist re-formulation of the Born rule, called the \tit{Urgleichung} \cite{QBCOH}:
\eqn{ \label{eqn:urg}
P(Z|X, C^Y=\brm{undo})= \zum{y}{} \, P(Z|y) \left[ (1+d_Y) P(y|X)-\frac{1}{d_Y} \right] \, .
}
(Note that the probabilities on the RHS of this equation refer to the reference measurement scheme; we have suppressed the conditioning on $C^Y=\oslash$). The \tit{Urgleichung} gives us $P(Z|X, C^Y=\brm{undo})$, but we require the full distribution $P(X,Z| C^Y=\brm{undo})$. Using elementary probability theory, we can decompose this as:
\eqn{ \label{eqn:preurg}
P(X,Z|C^Y=\brm{undo}) = P(Z|X,C^Y=\brm{undo})P(X|C^Y=\brm{undo}) \, .
}
To proceed, we can make use of the fact that un-measurements satisfy counterfactual no-signalling \tbf{CNS}. Hence the value of $C^Y$ (whether or not $Y$ is measured) should not affect $X$ (a causal ancestor of $Y$), i.e.
\eqn{ \label{eqn:abminus}
P(X|C^Y=\brm{undo})=P(X|C^Y=\oslash) \, .
}
Substituting \eqref{eqn:abminus} and the Urgleichung \eqref{eqn:urg} into \eqref{eqn:preurg} we finally obtain the sought-after inference rule:
\eqn{ \label{eqn:causalurg}
P(X,Z|C^Y=\brm{undo}) &=& P(Z|X,C^Y=\brm{undo})P(X) \, \nonumber \\
&=& \zum{y}{} \, P(Z|y) \left[ (1+d_Y) P(y|X)-\frac{1}{d_Y} \right] P(X) \, .
}
Therefore, in the context of causal modeling, the Urgleichung gives us the foundation for an inference rule for un-measurements. Note that the rule depends on the causal structure. To see this explicitly, consider what would happen if the accompanying causal structure had instead been $G(X,Y,Z):=X \leftarrow Y \leftarrow Z$. In that case the condition \eqref{eqn:abminus} would not follow from \tbf{CNS}, since $X$ is now in the causal future of $Y$. Instead, \tbf{CNS} implies $P(Z|C^Y=\brm{undo})=P(Z|C^Y=\oslash)$, and we obtain a different form of the rule:
\eqn{ \label{eqn:causalurgbackwards}
P(X,Z|C^Y=\brm{undo}) &=& P(X|Z,C^Y=\brm{undo})P(Z|C^Y=\brm{undo}) \, \nonumber \\
&=& P(X|Z,C^Y=\brm{undo})P(Z) \, \nonumber \\
&=& \zum{y}{} \, P(X|y) \left[ (1+d_Y) P(y|Z)-\frac{1}{d_Y} \right] P(Z) \, ,
}
which in general is not equivalent to the constraint \eqref{eqn:causalurg} (more precisely, neither of \eqref{eqn:causalurg},\eqref{eqn:causalurgbackwards} implies the other, nor do they exclude each other; both can be satisfied simultaneously). What is important is that the causal structure is sufficient to determine the particular form of the inference rule, and hence the principle of causal sufficiency \tbf{CS} is maintained for un-measurements on quantum systems (cf Sec. \ref{sec:experiments} ). Of course, this \tit{inference rule} still needs to be generalized to more interesting causal structures; this will be done in Sec. \ref{sec:qinterf}. We conclude this section with our final definition of the quantum reference measurements:\\
\tbf{QRM. Quantum reference measurements:} The \tit{quantum reference measurements} are quantum instruments that are \tit{informationally complete} and whose corresponding channels are of the Holevo form \eqref{eqn:Holevo}, i.e. they are entanglement-breaking. By convention, we take them to be SIC-instruments.\\
Accordingly, a \tit{reference experiment} on a quantum system is an experiment (as per Sec. \ref{sec:experiments}) in which the non-exogenous variables represent SIC-instruments. The terminal nodes (i.e. those that have no effects in the causal graph) may be represented by SIC-POVMs, while the exogenous variables (those without causes) will be assumed to have the maximally mixed state as input. The justification for this convention will be given in the next section.
\tit{Remark:} It is currently not known whether SIC-POVMs actually exist in all Hilbert space dimensions, but that does not present a problem to our program, which requires only the informational completeness of the measurements and not necessarily their symmetry or minimality; the latter are adopted on purely aesthetic grounds. Nevertheless, it remains an intriguing idea to ask what theory results from elevating the Urgleichung to the level of a postulate that holds prior to the existence of any Hilbert space representation; the QBists explore this idea in Ref.\cite{QPLEX}.
\subsection{Quantum Markov Conditions \label{sec:qmc}}
Now that the essential details of a quantum \tit{reference experiment} have been identified, we turn to the second key problem, namely, that of finding a set of \tit{physical Markov conditions} for quantum systems based upon how they would behave in the reference experiment. We begin by considering the general properties of quantum systems for the three special cases considered in Sec. \ref{sec:cmc}: common causes, causal chains, and common effects. We will discover that there is an opportunity for the physical Markov conditions of quantum systems to be \tit{causally symmetric}, that is, invariant under a reversal of the directions of all arrows in the causal graph. This is because, besides the rejection of \tbf{FCC}, quantum systems also satisfy a condition \tbf{BK*} that is equivalent to the causal inverse of Berkson's rule. To achieve full symmetry, we will further impose the restriction as a matter of convention that the marginal probabilities of the exogenous nodes are uniformly distributed over their outcomes (equivalently, that the inputs to the exogenous nodes are maximally mixed states) and that these are preserved by the dynamics. This constraint enforces an additional physical Markov condition that we call \tbf{RP*}, which is the causal inverse of Reichenbach's Principle, and whose inclusion makes the whole set of quantum Markov conditions causally symmetric. In order to extend these conditions to arbitrary causal structures, we postulate a causally symmetric graphical criterion based on them, which we call the Quantum Markov Condition (QMC).
To begin with, we will argue that quantum systems continue to satisfy the physical Markov conditions \tbf{SSO}, \tbf{RP} and \tbf{BK} which also hold classically.
The argument for retaining \tbf{SSO} has already been given in Sec. \ref{sec:soff} for the case of a simple causal chain. We only need to show that \tbf{SSO} extends also to the more general `multi-chain' case shown in Fig. \ref{fig:threedags} (b) of Sec. \ref{sec:cmc}. This can be done by noting that the tensor product of a set of quantum instruments is again a quantum instrument, namely $\{ \mathcal{M}_{\vec{D}} \} := \{\mathcal{M}_{D_1} \otimes \dots \otimes \mathcal{M}_{D_N} \}$. The principle \tbf{SSO} will be upheld in this scenario if it is upheld for the total instrument $\{ \mathcal{M}_{\vec{D}} \}$ applied to an arbitrary input state $\rho_{X_1}$ defined on the tensor product Hilbert space of the individual measurements $\mathcal{H}:= \mathcal{H}_{D_1} \otimes \dots \otimes \mathcal{H}_{D_N}$. To see that it is upheld, it is enough to note that the tensor product of a set of entanglement breaking channels is also entanglement breaking, and so the total instrument is of the Holevo form \eqref{eqn:Holevo} and we can apply the same reasoning as in Sec. \ref{sec:soff} to conclude that it respects \tbf{SSO}, i.e. that knowledge of all the outcomes $D_1,\dots,D_N$ renders $X_1$ and $X_2$ uncorrelated: $P(X_1 X_2|D_1,\dots,D_N)=P(X_1|D_1,\dots,D_N)P(X_2|D_1,\dots,D_N)$. (It should be noted that the total instrument $\{ \mathcal{M}_{\vec{D}} \}$ is also informationally-complete, a fact that will become important in Sec. \ref{sec:qinterf}).
Turning now to \tbf{RP}, let us recall its definition: if neither of $X_1,X_2$ is a cause of the other and they have no shared ancestors, then they are statistically independent: $P(X_1 X_2)=P(X_1)P(X_2)$. To see how this can be arranged to hold for quantum systems in a natural manner, let $\tbf{E}_1$ be the set of ancestors of $X_1$ that are exogenous, and similarly let $\tbf{E}_2$ be the exogenous ancestors of $X_2$. By assumption, $\tbf{E}_1,\tbf{E}_2$ have no members in common, and are independent $P(\tbf{E}_1,\tbf{E}_2)=P(\tbf{E}_1)P(\tbf{E}_2)$ (condition (iv), Sec. \ref{sec:experiments} ). These measurements can be regarded as preparing a quantum state with the form $\rho_{\tbf{E}_1} \otimes \rho_{\tbf{E}_2}$ on a Hilbert space $\mathcal{H}_{\tbf{E}_1} \otimes \mathcal{H}_{\tbf{E}_2}$, which is mapped by some channel $\mathcal{T}$ to the input spaces $\mathcal{H}_{X_1} \otimes \mathcal{H}_{X_2}$ of the measurements $X_1,X_2$. Since by assumption the causal structure contains no causal pathways from $\tbf{E}_1$ to $X_2$ or from $\tbf{E}_2$ to $X_1$, it is reasonable to posit that the channel $\mathcal{T}$ does not generate correlations, that is, to postulate that $\mathcal{T} = \mathcal{T}_1 \otimes \mathcal{T}_2$ where $\mathcal{T}_1: \mathcal{H}_{\tbf{E}_1} \mapsto \mathcal{H}_{X_1}$ and $\mathcal{T}_2: \mathcal{H}_{\tbf{E}_2} \mapsto \mathcal{H}_{X_2}$. This shows that we can enforce \tbf{RP} for quantum systems without difficulty.
\tit{Remark:} The fact that \tbf{RP} can be retained despite the loss of \tbf{FCC} is one of the main motivations for thinking that quantum correlations could be explained by a suitably defined causal model without fine-tuning. For example, \tbf{RP} is identified and claimed to hold for quantum systems in many of the early works on the topic \cite{FRITZ,PIEBRUK,CAVLAL}.
Next we recall the definition of \tbf{BK}: If neither of $X_1,X_2$ is a cause of the other and they have no shared ancestors, and $B$ is a common descendant of them, then they are typically correlated conditional on $B$. This principle holds for quantum systems for essentially the same reasons as it did in the classical case: conditioning on common effects of independent variables typically renders them correlated. The only new feature that appears in the quantum case is that the induced `spurious correlations' can be stronger than classical, i.e. they can exhibit entanglement. This effect has already been studied under the name of the ``quantum Berkson effect" in Refs. \cite{SPEKBERK,RIEDPHD}.
Beyond the above three conditions, there are also some additional physical Markov conditions that are special to quantum systems. The case of \tbf{FCC} has already been partially dealt with in Sec. \ref{sec:comm}, where we pointed out that entangled quantum systems exhibit counterexamples to it. However, this leaves open the possibility that \tbf{FCC} might nevertheless hold for any `typical' quantum system, in which case we might have grounds to retain it as a physical Markov condition, albeit in weaker form. But does it typically hold?
To make this more precise, consider the case of a single common cause, $X_1 \leftarrow C \rightarrow X_2$. Conditioning on $C=c$ effectively results in the preparation of an outgoing post-measurement state $\rho_c$ on the Hilbert space $\mathcal{H}_{C}$. The causal arrows mean that manipulations of this state can signal to the measurements at $X_1$ and $X_2$, which implies that the system's dynamics can be expressed as a quantum channel $\mathcal{T}: \mathcal{H}_{C} \mapsto \mathcal{H}_{X_1} \otimes \mathcal{H}_{X_2}$ which conveys information about $\rho_C$ to each of $X_1$ and $X_2$, but is otherwise unconstrained. The result is that the conditioned probabilities can be expressed as:\\
\eqn{ \label{eqn:antiberk}
P(X_1,X_2|C=c) = \frac{1}{d_1d_2} \, \tr{ \Pi_{x_1} \otimes \Pi_{x_2} \cdot \rho'_c} \, ,
}
where $\rho'_c := \mathcal{T}(\rho_c)$ may be assumed to be an arbitrary state on $\mathcal{H}_{X_1} \otimes \mathcal{H}_{X_2}$. For any reasonable measure on the set of density matrices, the ones that do not exhibit any correlations between the $\mathcal{H}_{X_1}$ and $\mathcal{H}_{X_2}$ subspaces will be a set of measure zero.
\tit{Remark:} It is tempting to attribute the typicality of correlations in this case to entanglement. However, depending on the measure one uses, the correlated states will generally be separable and will not necessarily exhibit entanglement, even in most cases \cite{ZYCZ}. Notwithstanding this observation, the fact that the correlated density operators have full measure in the space of all density operators is enough to rule out any hope of rescuing \tbf{FCC} by appealing to latent common causes and typicality arguments.
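To see \tbf{BK*} at work in a concrete instance of \eqref{eqn:antiberk} (an illustrative sketch; the choice of a Bell state for $\rho'_c$ is hypothetical), suppose the channel delivers a maximally entangled state to the SIC measurements at $X_1,X_2$; the conditional joint distribution then fails to factorize:

```python
import numpy as np

# Tetrahedral qubit SIC projectors.
bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Pi = [(np.eye(2) + r[0]*sx + r[1]*sy + r[2]*sz) / 2 for r in bloch]

# A maximally entangled state standing in for rho'_c = T(rho_c).
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

d1 = d2 = 2
# P(x1, x2 | c) = (1/(d1*d2)) tr( Pi_{x1} (x) Pi_{x2} . rho'_c )
P = np.array([[np.trace(np.kron(Pi[a], Pi[b]) @ rho).real / (d1 * d2)
               for b in range(4)] for a in range(4)])
P1 = P.sum(axis=1)   # marginal P(x1|c)
P2 = P.sum(axis=0)   # marginal P(x2|c)

assert abs(P.sum() - 1) < 1e-12
# The joint is NOT the product of its marginals: X1 and X2 are correlated.
assert not np.allclose(P, np.outer(P1, P2))
```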
Thus we are led not only to reject \tbf{FCC} but to introduce a \tit{new} physical Markov condition that asserts the contrary, namely the typicality of correlations conditional on common causes:\\
\noindent \tbf{BK*. Non-factorization on common causes (a.k.a. the causal inverse of Berkson's rule):}\\
Suppose neither of $X_1,X_2$ is a cause of the other and they have no shared descendants, and suppose $C$ is a common ancestor of $X_1,X_2$; then one generally expects them to be correlated conditional on $C$, i.e. that $P(X_1X_2|C) \neq P(X_1|C)P(X_2|C)$. (This condition is related to \tbf{BK} by switching the roles of `ancestors' and `descendants' in its definition, which is why we have labelled it the causal inverse of \tbf{BK}). \\
\tit{Remark:} the restriction to variables $X_1,X_2$ that have no shared descendants might seem arbitrary here, but it is included so as to make \tbf{BK*} perfectly symmetric with \tbf{BK}. To remove this clause would be to assert something extra, namely, that variables are not correlated by the mere fact of interaction in their common future. The latter assertion is in fact a corollary of the principle \tbf{RP}, so to assert it here would be redundant.
\tbf{BK*} marks an interesting departure from classical stochastic systems. In the \tbf{CMC} there was a marked asymmetry in the physical Markov conditions, due to the simultaneous presence of \tbf{FCC} and \tbf{BK}, which together asserted that variables are correlated conditioned on common effects but not on common causes. What is remarkable is that not only do quantum systems reject \tbf{FCC}, but they actually replace it with \tbf{BK*}, which as we have noted perfectly restores the symmetry with \tbf{BK}. This points to the intriguing possibility that the quantum Markov conditions for quantum systems might have the property of being fully causally symmetric.
In order to achieve this, there is still another asymmetry that needs to be dealt with, present in the opposition between \tbf{RP} and \tbf{PE}. Note that these principles refer to what is implied when there are common effects (i.e. `future interactions') or common causes (`past interactions'), respectively, whose outcomes are \tit{not} conditioned upon. The point at stake in these principles is whether the \tit{mere occurrence} of a common future or common past measurement can imply correlations or independence between two variables. As discussed in Sec. \ref{sec:cmc} the asymmetric pairing of these two principles for macroscopic classical systems is commonplace, where it is summarized by the principle that systems are uncorrelated before interaction and typically correlated afterwards (or as Price put it, `innocence precedes experience' \cite{PRICEBOOK} ).
Whereas Price prefers to restore symmetry by rejecting \tbf{RP} for quantum systems, thereby asserting that quantum states may be correlated due to the mere existence of a future interaction, we are inclined, given the marked importance of \tbf{RP} in quantum causal modeling, to take the opposite route and restore symmetry by upholding \tbf{RP} and rejecting \tbf{PE}. This leads us to the counter-intuitive proposition that quantum systems can remain independent of one another even after interaction. On further exploration, however, we find that this idea is more sensible than it first appears.
While it is true that in the laboratory it is commonplace to see independent systems becoming correlated after interaction, this typically occurs only when the systems have been carefully prepared \tit{in known initial states}. In our framework, that means only when we are conditioning on the values of the exogenous variables. But in that case, as we have just pointed out above, \tbf{BK*} would lead us to expect correlations. The point is that a rejection of \tbf{PE} only mandates independence when the common past is \tit{not conditioned upon}, and this represents a situation that is rarely encountered in practice. In any normal laboratory setting, we are not ignorant of the values of the exogenous variables. For instance, one usually does not begin an optics experiment until one has verified that one's sources are producing photons. Given that these are working, one gathers statistics, and one then has the choice whether to post-select on the functioning of the detectors or keep all of the statistics including the cases where the \tit{detectors} failed to detect photons. If we now wish to imagine a scenario in which the exogenous variables are \tit{not} conditioned upon, we would have to gather statistics for the whole experiment even in cases where the photon \tit{sources} failed to work. This runs quite counter to intuition and efficiency: why would anybody go ahead with an experiment in which they knew their sources were not working? It would be beyond the scope of this work to account for this asymmetry in the way we conduct experiments (Price's book \cite{PRICEBOOK} does a good job); the main point is that \tit{if we were} to take statistics even when our sources were not working, thus \tit{not conditioning} on the exogenous variables in the system, we might well find that variables with a common source remain uncorrelated (or, in a phrase, `garbage in, garbage out'). The rejection of \tbf{PE}, then, is not so unnatural as it first appears.
We can formalize this idea by postulating the causal inverse of \tbf{RP}, namely:\\
\noindent \tbf{RP*. Causal inverse of Reichenbach's Principle:}\\
If neither of $X_1,X_2$ is a cause of the other and they have no shared descendants, then they are statistically independent: $P(X_1 X_2)=P(X_1)P(X_2)$.\\
The main implication of this would be that $X_1,X_2$ should be statistically independent of each other even if they possess one or more common ancestors (not conditioned upon). This is the principle we would expect to hold in an experiment in which we have contrived to be `ignorant' about the starting conditions. To make this more rigorous, consider again the simple case of a single common cause, $X_1 \leftarrow C \rightarrow X_2$. Since now we are not conditioning on $C$, its values in \eqref{eqn:antiberk} must be summed over, leading to the probabilities:
\eqn{ \label{eqn:antiRP}
P(X_1,X_2) &=& \zum{c}{} \, P(X_1,X_2|C=c)P(C=c) \nonumber \\
&=& \zum{c}{} \frac{1}{d_1d_2} \, \tr{ \Pi_{x_1} \otimes \Pi_{x_2} \cdot \rho_c} \, \frac{1}{d_C} \, \tr{\Pi_c \, \rho_{\trm{prep}}} \, ,
}
where $P(C=c) = \tr{\Pi_c \, \rho_{\trm{prep}}}$ is the probability of obtaining $C=c$ when doing a SIC-instrument on the initial state $\rho_{\trm{prep}}$, and where $\rho_c := \mathcal{T}(\Pi_c)$ is the input state to the measurements $X_1,X_2$ conditioned on $C=c$, obtained by passing the post-measurement state of $C$ through some channel $\mathcal{T}$. We now introduce the notion of an \tit{unbiased} quantum channel (borrowing the terminology of Ref. \cite{COSTA17}): \\
\tbf{UB. Unbiased quantum channel:} A quantum channel $\mathcal{T}$ is \tit{unbiased} (or `maximally-mixed-state-preserving') iff it preserves the maximally mixed state, i.e.
\eqn{
\mathcal{T}(\frac{1}{d_{\trm{in}}} \mathbb{I}_{\trm{in}})=\frac{1}{d_{\trm{out}}} \mathbb{I}_{\trm{out}} \, .
}
Note that when $d_{\trm{in}}=d_{\trm{out}}$ this reduces to the definition of a \tit{unital} (identity-preserving) channel.\\
We can now make the following observation: if $\mathcal{T}$ is unbiased and we restrict attention to quantum systems initially prepared in the maximally mixed state $\rho_{\trm{prep}} = \frac{1}{d_C}\mathbb{I}$, then
\eqn{ \label{eqn:antiRP2}
P(X_1,X_2) &=& \zum{c}{} \frac{1}{d_1d_2} \, \tr{ \Pi_{x_1} \otimes \Pi_{x_2} \cdot \mathcal{T}(\Pi_c)} \, \frac{1}{d^2_C} \, \nonumber \\
&=& \frac{1}{d_1d_2} \, \tr{ \Pi_{x_1} \otimes \Pi_{x_2} \cdot \mathcal{T}(\frac{1}{d_C} \mathbb{I})} \, \nonumber \\
&=& \frac{1}{d_1d_2} \, \tr{ \Pi_{x_1} \otimes \Pi_{x_2} \cdot \frac{1}{d_1d_2} \mathbb{I} } \, \, \nonumber \\
&=& \frac{1}{d^2_1d^2_2} = P(X_1)P(X_2)\, ,
}
and hence \tbf{RP*} can be satisfied. We therefore see that there exists a special sub-class of experiments in which (i) systems are prepared in the maximally mixed state and (ii) evolve only through \tit{unbiased} quantum channels, for which quantum systems satisfy the condition \tbf{RP*}. Within this sub-class, we can combine \tbf{RP} and \tbf{RP*} into the following simple condition:\\
\noindent \tbf{SRP.} Symmetric Reichenbach Principle:\\
If neither of $X_1,X_2$ is a cause of the other, then they are statistically independent: $P(X_1 X_2)=P(X_1)P(X_2)$. \\
It can be checked by inspection of the definitions that the set of physical Markov conditions for this sub-class, $\{ \trm{\tbf{SSO}, \tbf{BK}, \tbf{BK*}, \tbf{SRP}} \}$, is invariant under switching the direction of the causal arrows. We will refer to quantum systems observed under these special conditions as the class of \tit{causally reversible quantum systems}.
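The chain of equalities \eqref{eqn:antiRP2} can also be checked numerically. In the sketch below (Python with NumPy) we take $d_C=4$ and, since an explicit $d=4$ SIC fiducial is cumbersome, substitute the tensor product of two qubit SICs as a stand-in informationally complete measurement on $C$; its sixteen unit-trace elements still sum to $d_C \mathbb{I}$, so the computation goes through unchanged. The unbiased channel here is a (unital) mixture of the identity and conjugation by CNOT:

```python
import numpy as np

bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Pi = [(np.eye(2) + r[0]*sx + r[1]*sy + r[2]*sz) / 2 for r in bloch]

d1 = d2 = 2
dC = 4
# Stand-in IC measurement on C: tensor products of two qubit SICs (16 elements,
# each of unit trace, summing to dC * I), in place of a genuine d=4 SIC.
PiC = [np.kron(Pi[a], Pi[b]) for a in range(4) for b in range(4)]

# An unbiased (here: unital) channel T from H_C to H_X1 (x) H_X2:
# a mixture of the identity and conjugation by CNOT.
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
T = lambda r: 0.5 * r + 0.5 * CNOT @ r @ CNOT.conj().T
assert np.allclose(T(np.eye(4) / dC), np.eye(4) / (d1 * d2))  # unbiased

# P(x1,x2) = sum_c (1/(d1 d2)) tr( Pi_x1 (x) Pi_x2 . T(Pi_c) ) * (1/dC^2)
P = np.array([[sum(np.trace(np.kron(Pi[a], Pi[b]) @ T(Pc)).real
                   for Pc in PiC) / (d1 * d2 * dC**2)
               for b in range(4)] for a in range(4)])

# The joint factorizes: P(x1,x2) = 1/(d1^2 d2^2) = P(x1) P(x2), as in RP*.
assert np.allclose(P, 1 / (d1**2 * d2**2))
```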
The restriction to maximally mixed exogenous inputs and unbiased processes has an important consequence, which is that if one does not condition on the \tit{ancestors} of a variable, then the other variables do not depend on whether it is un-measured. Formally this can be expressed as:\\
\tbf{IUM. Indifference to un-measurements:} If one doesn't condition on the ancestors of $Z$, then measuring or un-measuring $Z$ cannot affect the non-ancestors of $Z$. Formally, let $C^Z \in \{ \oslash, \brm{undo} \}$ toggle between measuring and un-measuring $Z$ in a system whose causal relations are described by a DAG denoted $G(\tbf{A} \tbf{D} \tbf{R} Z)$, where $\tbf{A}$ are the causal ancestors of $Z$, $\tbf{D}$ are the descendants of $Z$, and $\tbf{R}$ are the remainder. Then:
\eqn{ \label{eqn:ium}
P(\tbf{D} \, \tbf{R}|\,C^Z=\brm{undo})=P(\tbf{D} \, \tbf{R}|\,C^Z=\oslash) \, .
}
The justification is intuitive: when $Z$'s ancestors are not conditioned on, the input to $Z$ is a maximally mixed state uncorrelated with any of its non-descendants $\tbf{R}$. Since measuring $Z$ and ignoring its outcome is the same as applying an unbiased channel from its input to its output, it is equivalent to an unbiased channel from its input to the inputs of its children. Hence, regardless of whether $Z$ is measured and its outcome ignored, or not measured at all, the inputs to the children of $Z$ are the same: a maximally mixed state uncorrelated to the inputs to $\tbf{R}$. The iteration of this argument to each child of $Z$ then shows that the inputs to $Z$'s grand-children, and great-grand-children, etc, are similarly unaffected, and hence so is all of $\tbf{D}$. This establishes \eqref{eqn:ium}.\\
It is interesting to note that the classical stochastic systems cannot so easily be made symmetric by the rejection of \tbf{PE} because they still suffer from the asymmetry between \tbf{FCC} and \tbf{BK}. To break this asymmetry, one must reject one or the other principle. It would be interesting to investigate whether this route would lead to new interesting classes of causally reversible classical systems besides the most obvious case of the deterministic classical systems -- we comment on this further in Sec. \ref{sec:discuss}.
\tit{Remark:} In principle, it is always possible to simulate any quantum phenomenon using a system that is causally reversible (given sufficient extra resources). For instance, preparation of an arbitrary pure state can be simulated by post-selecting on the outcome of a suitable SIC-instrument that has the desired pure state as one of its elements. An arbitrary channel can then be simulated by coupling a system to a suitably prepared ancilla and post-selecting on the outcomes of a SIC-instrument on the ancilla. Since we lose no fundamental generality in insisting upon the property of causal reversibility, we will continue to restrict our attention to this class of systems from here onwards.
We are now ready to obtain a general Quantum Markov Condition starting from a graphical interpretation of the physical Markov conditions \tbf{SSO}, \tbf{BK}, \tbf{BK*}, \tbf{SRP}. To this end, we extrapolate that the physical Markov conditions for quantum systems with \tit{arbitrary} causal structure are given by the following graphical condition (see Appendix \ref{app:graphrules} ):\\
\noindent \tbf{QMC.} Quantum Markov Condition (graphical version):\\
Let $\tbf{U}$,$\tbf{V}$,$\tbf{W}$ be disjoint subsets of variables in a DAG $G(\tbf{X})$. A distribution $P(\tbf{X})$ is said to satisfy the Quantum Markov Condition relative to $G(\tbf{X})$ iff $P(\tbf{U}\tbf{V}|\tbf{W})=P(\tbf{U}|\tbf{W})P(\tbf{V}|\tbf{W})$ holds whenever every path between $\tbf{U}$ and $\tbf{V}$ is blocked by $\tbf{W}$. A path between two variables is said to be `blocked' by the set $\tbf{W}$ iff at least one of the following conditions holds:\\
\tbf{g-SSO:} There is a chain $A \rightarrow C \rightarrow B$ along the path whose middle member $C$ is in $\tbf{W}$;\\
\tbf{g-BK:} There is a collider $A \rightarrow C \leftarrow B$ on the path where $C$ is \tit{not} in $\tbf{W}$ and has no descendants in $\tbf{W}$.\\
\tbf{g-BK*:} There is a fork $A \leftarrow C \rightarrow B$ on the path where $C$ is \tit{not} in $\tbf{W}$ and has no ancestors in $\tbf{W}$.\\
\tit{Remark:} The principle \tbf{SRP} is implicit in both graphical conditions \tbf{g-BK*} and \tbf{g-BK}. To see this, note that if \tbf{SRP} were false, then the graphical rules \tbf{g-BK*} and \tbf{g-BK} would be insufficient to indicate statistical independence; for then it would be possible to have variables $A,B$ such that neither is a cause of the other and with all paths between them blocked via \tbf{g-BK*} and \tbf{g-BK}, yet where they are still correlated. What prevents correlations in this case is precisely \tbf{SRP}. In more rigorous language, the conditions \tbf{BK}, \tbf{BK*} only suggest that the graphical rules \tbf{g-BK*} and \tbf{g-BK} are \tit{necessary} criteria for the path to be blocked, whereas \tbf{SRP} elevates them to \tit{sufficient} criteria.
Our methodology of postulating \tbf{QMC} first as a graphical criterion has the advantage that it immediately supplies a graphical algorithm, called a \tit{graph-separation criterion}, for efficiently determining by inspection of a DAG whether two subsets of variables are independent conditional on a third subset. It would be interesting to compare the present criterion to others that have been proposed in the literature, particularly those in Refs. \cite{HLP,PIEBRUK}. It is unclear whether the \tbf{QMC} can be expressed as a single factorization condition similar to Eq. \eqref{eqn:cmc}; this is left to future work.
For classes of quantum systems in which latent variables are suspected, i.e. where the observed behaviour $P(\tbf{X})$ is suspected to be only a marginal of an extended system with causal structure $G(\tbf{X},\tbf{L})$, it is natural to posit an extension of the \tbf{QMC} to this broader class in an analogous way to how we obtained the classical \tbf{CMC2}:\\
\tbf{QMC2.} Quantum Markov Condition (with latent variables):\\
There exists an extended distribution $P(\tbf{X},\tbf{L})$, such that $P(\tbf{X},\tbf{L})$ satisfies the \tbf{QMC} for the causal structure $G(\tbf{X},\tbf{L})$, and $P(\tbf{X})$ is obtained from $P(\tbf{X},\tbf{L})$ by marginalizing over the latent variables. \\
We can now finally define a \tit{quantum causal model}:\\
\tbf{Quantum Causal Model:}\\
A Quantum Causal Model consists of a pair $\{P(\tbf{X}), G(\tbf{X}) \}$ where $P(\tbf{X})$ satisfies the \tbf{QMC} and \tit{no fine-tuning} for the DAG $G(\tbf{X})$.\\
\subsection{Inference rules for quantum causal models \label{sec:qinterf}}
In this section we take up the third key question regarding the counterfactual inference rules for quantum causal models. We focus first on the case of interventions, and then deal with un-measurements. We will see that the very possibility of an inference rule for interventions is not guaranteed, and that the fundamental postulate of causal sufficiency \tbf{CS} cannot be upheld for arbitrary causal structures. To accommodate this, we impose the restriction that the causal structure must be `layered', which ultimately enables us to derive the necessary inference rules and uphold \tbf{CS}.
\subsubsection{Interventions on quantum systems \label{sec:qinterv}}
An intervention on a variable $W$ in a quantum system may be usefully represented by associating a quantum instrument to $W$ that measures the local subsystem at the input, obtains an outcome $U=u$, and then re-prepares an arbitrary new state $\sigma_w$ with probability $P'(W=w)$ at the output, independently of the value of $U$ (thus breaking the causal connection between $W$ and its parents as required). Formally, we define:\\
\tbf{Quantum intervention:} An intervention on $W$ in a quantum system is associated with an instrument $\{ \mathcal{M}_{u w} \}$ whose elements have the form:
\eqn{ \label{eqn:simpleinterv}
\mathcal{M}_{u w}(\rho_{\trm{in}}) := \tr{\rho_{\trm{in}} F_u} P'(w) \, \sigma_w \, ,
}
where $\{ F_u : u \in \trm{dom}(U) \}$ is an arbitrary POVM, $P'(w)$ an arbitrary probability, and $\sigma_w$ an arbitrary state, which together define the intervention. For simplicity, we may sometimes consider the case where $F_u = \frac{1}{d_W} \Pi_u$, $P'(w)=\frac{1}{d^2_W}$, and $\sigma_w=\Pi_w$, which we call a \tit{SIC-intervention}, because it represents doing a SIC-POVM on the input and then re-preparing a SIC state uniformly at random at the output.
\tit{Remark:} In this most general definition, an intervention is associated with \tit{two} variables: the intervened-upon variable $W$ and a \tit{new} variable $U$ having the same domain as $W$ but treated as an independent variable. The reason for this is not due to any special feature of quantum theory. It arises naturally in the quantum setting due to the usage of quantum instruments to formally represent measurements, because these make it explicit that measurements have both an `input' and an `output'. Classically we could do the same thing by formally including, as part of the intervention, an extra variable $U$ that represents the outcome of a measurement on the former parents of $W$. One then recovers the usual classical formalism by summing over the values of $U$. (One such example is the ``split-node" classical causal models defined in Ref.\cite{BARRETTQCM}).
This remark suggests a special case of quantum interventions that looks more similar to the usual classical treatment of interventions, in which the `measurement of the parents' $U$ is simply discarded. We will call these \tit{simple interventions}:\\
\tbf{Quantum simple intervention:} A \tit{simple intervention} on $W$ in a quantum system is associated with an instrument $\{ \mathcal{M}_{w} \}$ whose elements have the form:
\eqn{ \label{eqn:discardinterv}
\mathcal{M}_w(\rho_{\trm{in}}) := \tr{\rho_{\trm{in}}} \, P'(w) \, \sigma_w \, .
}
Similarly, we can define the special class of \tit{simple SIC interventions} by setting $P'(w)=\frac{1}{d^2_W}$, and $\sigma_w=\Pi_w$. In what follows, unless stated otherwise, we will restrict attention to quantum simple interventions and continue to neglect the variable $U$.
To model an intervention as a counterfactual we introduce an associated control variable $C^{W}$ with possible values $\{ \oslash,\brm{do} \}$, such that $P(\tbf{X},W|C^W=\oslash)=P(\tbf{X},W)$ is the behaviour in the reference experiment and $P(\tbf{X},W|C^W=\brm{do})$ represents the probabilities when an intervention is performed on $W$. In the special case of interventions that specify a particular value of $W$, we can choose $P'(w)=\delta(w,w')$ and write $C^W=\brm{do}(W=w')$ (cf Sec. \ref{sec:ccms} ).\\
As in the classical case, quantum interventions are manipulations, so we adopt the same rule for updating the causal structure when a quantum intervention is performed on $W$, namely, that the incoming arrows to $W$ are deleted. (For general interventions, we may include $U$ as a terminal node that is a child of all the former parents of $W$.) Since quantum causal models satisfy the graphical rules \tbf{g-SSO} and \tbf{g-BK}, it is reasonable to also assume that quantum interventions satisfy \tbf{NPE2} (cf the justification for \tbf{NPE2} in Sec. \ref{sec:passactclass}). So far, there is essentially no difference between the quantum and classical definitions of an intervention.
The difference arises in how we obtain the inference rule for interventions. Classically we obtained the rule \eqref{eqn:intervaxiom} by demanding that the new probabilities $P(\tbf{X}|C^W=\brm{do})$ should satisfy the \tbf{CMC} for the new causal graph. In the quantum case, evidently, we must replace the \tbf{CMC} with the \tbf{QMC}. However, it is then not clear whether this constraint is sufficient to guarantee the existence of an inference rule. In fact, we show in the next section that an inference rule for interventions on quantum systems does not exist for arbitrary causal structures.
\subsubsection{Impossibility of a general inference rule for quantum interventions \label{sec:qintervimposs}}
Consider a system with the causal structure shown in Fig. \ref{fig:qintervprob} (a). The circuit diagram shown in Fig. \ref{fig:qintervprob} (b) represents the most general possible realization of this system. It is convenient to represent this circuit as a \tit{quantum comb} \cite{COMB1,COMB2,COSHRAP,POLLOCKPRA}. To do so, we introduce the Choi-Jamio\l kowski (CJ) matrix representation $M \in \mathcal{H}^{X_I} \otimes \mathcal{H}^{X_O}$ of a completely positive linear map $\mathcal{M}:\mathcal{H}^{X_I} \mapsto \mathcal{H}^{X_O}$ as:
\eqn{ \label{eqn:genbornce}
M := \zum{i=1,j=1}{d_{X_I}} \, \ket{i}\bra{j} \otimes \, \left( \mathcal{M}(\ket{j}\bra{i}) \right)^T
}
for some conventionally chosen orthonormal basis $\{ \ket{i} : i=1,2,\dots,d_{X_I} \}$ of $\mathcal{H}^{X_I}$, where $T$ denotes the transpose in that basis. Then the probabilities in the reference experiment for this circuit may be expressed as:
\eqn{ \label{eqn:genbornce2}
P(D,W,A) = \tr{ \Pi_{A} \otimes M^{W_I W_O}_{W} \otimes \frac{1}{d_D} \Pi_{D} \cdot K^{A W_I W_O D} }
}
where $\Pi_{A},\Pi_{D}$ are the SIC-projectors on $\mathcal{H}^{A},\mathcal{H}^{D}$ corresponding to the outcomes of $A,D$ respectively, $M^{W_I W_O}_W$ is the CJ matrix representation of the SIC-instrument $\mathcal{M}_W:\mathcal{H}^{W_I} \mapsto \mathcal{H}^{W_O}$, and $K^{A W_I W_O D} \in \mathcal{H}^{A} \otimes \mathcal{H}^{W_I} \otimes \mathcal{H}^{W_O} \otimes \mathcal{H}^{D}$ is a positive matrix (called a `quantum comb') that represents the circuit fragment contained in the dashed lines in Fig. \ref{fig:qintervprob} (b). Since $M^{W_I W_O}_{W}$ is a SIC-instrument, its CJ matrix has the simple form $M^{W_I W_O}_W = \Pi_W \otimes ( \frac{1}{d_W}\Pi_W)$, which allows us to simplify \eqref{eqn:genbornce2} to:
\eqn{ \label{eqn:genbornce3}
P(D,W,A) = \frac{1}{d_W d_D} \tr{ \Pi_{A} \otimes \Pi_W \otimes \Pi_W \otimes \Pi_{D} \cdot K^{A W_I W_O D} } \, .
}
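As a sanity check on the CJ convention of Eq. \eqref{eqn:genbornce}, one can compute $M$ for a map given in Kraus form. The sketch below (an illustration of ours, not tied to any particular library) does this and exhibits two standard facts: the identity channel yields the unnormalized maximally entangled projector, and trace preservation of the map appears as the partial trace of $M$ over the output factor equalling the identity on the input space:

```python
import numpy as np

def cj_matrix(kraus, d_in):
    """M = sum_{ij} |i><j| (x) M(|j><i|)^T, the CJ convention used in
    the text, for a CP map given by a list of Kraus operators."""
    d_out = kraus[0].shape[0]
    M = np.zeros((d_in * d_out, d_in * d_out), dtype=complex)
    for i in range(d_in):
        for j in range(d_in):
            Eji = np.zeros((d_in, d_in), dtype=complex)
            Eji[j, i] = 1.0                              # |j><i|
            out = sum(K @ Eji @ K.conj().T for K in kraus)
            Eij = np.zeros((d_in, d_in), dtype=complex)
            Eij[i, j] = 1.0                              # |i><j|
            M += np.kron(Eij, out.T)
    return M

d = 2
M_id = cj_matrix([np.eye(d)], d)

# For the identity channel, M is the unnormalized maximally entangled
# projector d |Phi+><Phi+| = sum_{ij} |ii><jj|.
phi = sum(np.kron(np.eye(d)[:, [i]], np.eye(d)[:, [i]]) for i in range(d))
target = (phi @ phi.conj().T).astype(complex)

# Trace preservation of the map shows up as tr_out(M) = identity.
tr_out = M_id.reshape(d, d, d, d).trace(axis1=1, axis2=3)
```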
When $W$ is intervened upon, the circuit reduces to that of Fig. \ref{fig:qintervprob} (c), and the probabilities are obtained by replacing $M^{W_I W_O}_W$ in \eqref{eqn:genbornce} with the CJ matrix for a quantum intervention $\mathcal{M}_{UW}$ as defined in \eqref{eqn:simpleinterv}. Choosing the intervention on $W$ to be a SIC-intervention, the CJ matrix of $\mathcal{M}_{UW}$ is given by $M^{W_I W_O}_{U W} = \Pi_U \otimes ( \frac{1}{d_W}\Pi_W)$. Hence the post-intervention probabilities are:
\eqn{
P(D,U,W,A|C^W=\brm{do}) = \frac{1}{d_Wd_D} \tr{ \Pi_{A} \otimes \Pi_{U} \otimes \Pi_{W} \otimes \Pi_{D} \cdot K^{A W_I W_O D} } \, .
}
Note that $\{ \Pi_{A} \otimes \Pi_{U} \otimes \Pi_{W} \otimes \Pi_{D} \}$ is a set of $d^2_{A}d^4_{W}d^2_{D}$ linearly independent operators that span the space of linear operators on $\mathcal{H}^{A} \otimes \mathcal{H}^{W_I} \otimes \mathcal{H}^{W_O} \otimes \mathcal{H}^{D}$. Hence a specification of $P(D,U,W,A|C^W=\brm{do})$ is equivalent to a full specification of the comb representing the circuit fragment, $K^{A W_I W_O D}$. In order for the reference probabilities $P(D,W,A)$ to allow inference of $P(D,U,W,A|C^W=\brm{do})$, they must therefore suffice to reconstruct an arbitrary $K^{A W_I W_O D}$. However, from Eq. \eqref{eqn:genbornce3} we see that $\{ \Pi_{A} \otimes \Pi_W \otimes \Pi_W \otimes \Pi_{D} \}$ is a set of only $d^2_{A}d^2_{W}d^2_{D}$ linearly independent operators, which is insufficient to span the full operator space, thus making it impossible in general to reconstruct an arbitrary $K^{A W_I W_O D}$ from $P(D,W,A)$.
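The operator-counting argument above can be checked numerically when all systems are qubits. In the sketch below (ours; all names are assumptions), the set with the repeated index $w$ spans only $d^2_{A}d^2_{W}d^2_{D}=64$ dimensions, while the post-intervention set with independent indices $u,w$ spans all $d^2_{A}d^4_{W}d^2_{D}=256$:

```python
import numpy as np

# Qubit SIC projectors (Bloch tetrahedron); here A, W_I, W_O, D are qubits.
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
Pi = [0.5 * (np.eye(2) + sum(v[k] * sig[k] for k in range(3))) for v in bloch]

def kron4(a, b, c, e):
    return np.kron(np.kron(a, b), np.kron(c, e))

# Reference experiment: the W_I and W_O slots carry the SAME index w.
ref = [kron4(Pi[a], Pi[w], Pi[w], Pi[dd]).reshape(-1)
       for a in range(4) for w in range(4) for dd in range(4)]
# Post-intervention: independent indices u and w.
do = [kron4(Pi[a], Pi[u], Pi[w], Pi[dd]).reshape(-1)
      for a in range(4) for u in range(4) for w in range(4) for dd in range(4)]

rank_ref = np.linalg.matrix_rank(np.array(ref))   # 64  = 4 * 4 * 4
rank_do = np.linalg.matrix_rank(np.array(do))     # 256 = 4 * 16 * 4
```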
\begin{figure}[!htb]
\centering\includegraphics[width=0.6\linewidth]{intervprob.PNG}
\caption{(a) a causal diagram and (b) its most general possible circuit realization. Single arrows represent quantum systems and double arrows represent classical data. The boxes represent SIC-instruments. (c) is the circuit under an intervention on $W$. As explained in the text, the probabilities in this case are not uniquely specified by those in the pre-intervention circuit.}
\label{fig:qintervprob}
\end{figure}
We conclude that the causal model $\{P(A,D,W),G(A,D,W) \}$ is not sufficient to deduce what would happen under an arbitrary intervention; more information is needed. This violates the postulate of causal sufficiency \tbf{CS}, which is a core axiom of our framework.
\tit{Remark:} It can be shown that this problem is not alleviated if we restrict ourselves to quantum simple interventions, and we conjecture that this remains true if one additionally restricts all processes to be unbiased. Instead, we argue that \tbf{CS} can be upheld by imposing constraints on the set of causal structures that are considered to be valid hypotheses for the reference experiment. In the next section, we motivate an ansatz that restricts the allowed causal structures and show that it restores causal sufficiency.
\subsubsection{Inference rules for quantum interventions in layered DAGs \label{sec:qintervsoln}}
Consider the causal graph shown in Fig. \ref{fig:qintervsoln} (a) and corresponding circuit shown in Fig. \ref{fig:qintervsoln} (b). It is exactly the same circuit as that shown in Fig. \ref{fig:qintervprob}, except that an extra SIC-instrument $Z$ has now been introduced.
\begin{figure}[!htb]
\centering\includegraphics[width=0.6\linewidth]{intervsoln.PNG}
\caption{(a) a causal graph and (b) a corresponding quantum circuit, similar to that shown in Fig. \ref{fig:qintervprob}, except with an additional SIC-instrument $Z$. In contrast to that case, the result of an intervention on $W$ (or on $Z$) in this circuit can be deduced directly from the reference probabilities $P(A,D,W,Z)$ for an arbitrary circuit. This enables us to derive rules for counterfactual inference.}
\label{fig:qintervsoln}
\end{figure}
In this example, the conflict with \tbf{CS} does not arise: since the tensor product of two SIC-POVMs is an informationally-complete POVM (though not itself symmetric), the statistics $P(A,D,W,Z)$ are now sufficient to reconstruct the two CPTP maps that comprise the circuit, implying that it ought to be possible to define an inference rule to compute $P(A,D,W,Z|C^W=\brm{do})$. From this example, we extrapolate the following ansatz describing a class of DAGs for which this solution is expected to work:\\
\tbf{LDAG. Layered DAG.} A DAG for a set of variables $\tbf{X}:=\{ X_i \}$ is said to be a \tit{layered} DAG or `LDAG' iff $\tbf{X}$ decomposes into $M$ disjoint subsets $\tbf{X} = \tbf{L}_1 \cup \tbf{L}_2 \cup \dots \cup \tbf{L}_M$ called `layers' such that no member of a layer is a cause of any other member of the same layer, and such that for any triplet $\tbf{L}_i,\tbf{L}_j,\tbf{L}_k$ with $i<j<k$, each path connecting $\tbf{L}_i$ to $\tbf{L}_k$ is intercepted by $\tbf{L}_j$, i.e. contains a causal chain $A \rightarrow B \rightarrow C$ whose middle member $B$ is in $\tbf{L}_j$.\\
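A simple sufficient test for the LDAG property, matching the layered-circuit picture, is that every arrow runs from one layer into the next. The helper below (our own illustration, with hypothetical node names) implements this check on an edge-list representation of the DAG:

```python
def is_layered(edges, layers):
    """Sufficient check for the LDAG property: every arrow runs from
    one layer into the next.  `edges` is a set of (parent, child)
    pairs and `layers` an ordered list of disjoint sets of nodes."""
    index = {x: i for i, layer in enumerate(layers) for x in layer}
    return all(index[child] == index[parent] + 1 for parent, child in edges)

# A toy three-layer graph: D feeds the layer {W, Z}, which feeds A.
layers = [{"D"}, {"W", "Z"}, {"A"}]
good = {("D", "W"), ("D", "Z"), ("W", "A"), ("Z", "A")}
bad = good | {("D", "A")}    # a direct D -> A arrow skips the middle layer
```

An arrow that skips a layer, as in `bad`, is a path between non-adjacent layers that the middle layer fails to intercept, so the check rejects it.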
We will now prove that, for any LDAG, there is an inference rule that yields the probabilities under an intervention, $P(\tbf{X}|C^W=\brm{do})$, using only the reference probabilities $P(\tbf{X})$ and the causal structure.
First, consider the quantum causal model $\{ P(\tbf{X}), G(\tbf{X}) \}$ where $G(\tbf{X})$ is now assumed to be an LDAG. Let $\tbf{L}$ be the members (excluding $W$) of the layer containing the to-be-intervened-upon node $W$, let $\tbf{D}$ and $\tbf{A}$ be the descendants and ancestors of $W$ respectively, and let $\tbf{R}$ be the remaining nodes. The nodes $\tbf{R}$ can then be subdivided according to whether they causally precede or follow the layer: let $\tbf{R}_{D}$ be the subset of $\tbf{R}$ that are descendants of $\tbf{L}$ and let $\tbf{R}_A$ be the subset that are ancestors of $\tbf{L}$.
As already discussed, the post-intervention DAG $G(\tbf{X}|C^W=\brm{do})$ is obtained from $G(\tbf{X})$ by deleting the incoming arrows to $W$, which implies that the resulting graph is automatically also an LDAG. As we did for the classical case, we will derive the inference rule by postulating that the post-intervention probabilities satisfy the \tbf{QMC} relative to the new graph. This implies that the following conditional independence must hold:
\eqn{ \label{eqn:drsy}
P(\tbf{D} \tbf{R}_D|W \tbf{L} \tbf{A} \tbf{R}_A, \, C^W=\brm{do}) = P(\tbf{D} \tbf{R}_D|W \tbf{L}, \, C^{W}=\brm{do}) \, ,
}
which is simply a consequence of applying $\tbf{g-SSO}$ to the new LDAG. Another (less obvious) conditional independence implied by the \tbf{QMC} is:
\eqn{ \label{eqn:saray}
P(W|\tbf{L} \tbf{A} \tbf{R}_A , \, C^W=\brm{do}) &=& P(W| C^W=\brm{do}) \, \nonumber \\
&:=& P'(W) \, ,
}
which follows from the fact that, since $W$ has no ancestors in the intervened graph, every path connecting $W$ to any member of $\tbf{L} \tbf{A} \tbf{R}_A$ must contain a collider $A \rightarrow C \leftarrow B$ whose middle member $C$ is in $\tbf{D}$. Hence as long as we do not condition on $\tbf{D}$, the rule $\tbf{g-BK}$ implies that $\tbf{L} \tbf{A} \tbf{R}_A$ must be independent of $W$, which is just what \eqref{eqn:saray} says. Using these results, we obtain:
\eqn{ \label{eqn:qintervrule}
P(\tbf{X}|C^W=\brm{do}) &=& P(\tbf{D} \tbf{R}_D W \tbf{L} \tbf{A} \tbf{R}_A |C^W=\brm{do}) \nonumber \\
&=& P(\tbf{D} \tbf{R}_D|W \tbf{L} \tbf{A} \tbf{R}_A C^W=\brm{do}) \, P(W|\tbf{L} \tbf{A} \tbf{R}_A C^W=\brm{do}) \, P(\tbf{L} \tbf{A} \tbf{R}_A|C^W=\brm{do}) \nonumber \\
&=& P(\tbf{D} \tbf{R}_D|W \tbf{L} , \, C^W=\brm{do}) \, P'(W) \, P(\tbf{L} \tbf{A} \tbf{R}_A|C^W=\brm{do}) \nonumber \\
&=& P(\tbf{D} \tbf{R}_D|W \tbf{L} , \, C^W=\oslash) \, P'(W) \, P(\tbf{L} \tbf{A} \tbf{R}_A|C^W=\oslash) \, .
}
To obtain the third line in the above calculation we made use of \eqref{eqn:drsy} and \eqref{eqn:saray}. To obtain the last line we made use of \tbf{CNS} and \tbf{NPE2}. In the final line, all the terms on the RHS (except $P'(W)$, which is specified by the intervention) refer to probabilities in the reference experiment. Thus, Eq. \eqref{eqn:qintervrule} defines the inference rule for interventions on quantum systems whose causal structure is given by an LDAG. We summarize it as:\\
\tbf{QIR.} For quantum systems, the behaviour under an intervention on $W$ is related to the reference behaviour by (suppressing the $C^W=\oslash$ on the RHS):
\eqn{ \label{eqn:qintervaxiom}
P(\tbf{X}|C^W=\brm{do}) = P'(W) \,P(\tbf{D} \tbf{R}_D|W \tbf{L}) \, P(\tbf{L} \tbf{A} \tbf{R}_A) \, .
}
For a fine-grained intervention (that sets $W$ to a particular value) this becomes:
\eqn{ \label{eqn:qintervaxiomfine}
P(\tbf{X}|C^W=\brm{do}(W=w')) = \delta(w,w') \,P(\tbf{D} \tbf{R}_D|W \tbf{L}) \, P(\tbf{L} \tbf{A} \tbf{R}_A) \, .
}
For multiple interventions, this rule can simply be iterated. Thus, provided the causal structure of the system in the reference experiment is an LDAG, we can uphold \tbf{CS}.
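For a minimal classical-probability illustration of \tbf{QIR}, consider a chain $X_1 \rightarrow W \rightarrow X_2$, for which $\tbf{L}$ and $\tbf{R}$ are empty, $\tbf{A}=\{X_1\}$ and $\tbf{D}=\{X_2\}$. The sketch below (names of our own choosing) applies the rule and confirms that the intervention leaves the marginal of $X_1$ untouched while setting the distribution of $W$ to $P'(W)$:

```python
import numpy as np

rng = np.random.default_rng(1)

def norm(a, axis=None):
    """Normalize a nonnegative array into a (conditional) distribution."""
    return a / a.sum(axis=axis, keepdims=axis is not None)

# Reference behaviour for the chain X1 -> W -> X2.
pX1 = norm(rng.random(3))                   # P(x1)
pW_X1 = norm(rng.random((3, 4)), axis=1)    # P(w|x1)
pX2_W = norm(rng.random((4, 5)), axis=1)    # P(x2|w)
P = pX1[:, None, None] * pW_X1[:, :, None] * pX2_W[None, :, :]

# QIR: P(X|do) = P'(W) P(D R_D | W L) P(L A R_A) = P'(w) P(x2|w) P(x1).
Pprime = norm(rng.random(4))
P_do = pX1[:, None, None] * Pprime[None, :, None] * pX2_W[None, :, :]
```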
\tit{Remark:} Strictly speaking, we have only shown that the restriction to LDAGs is \tit{sufficient} for causal inference to be possible. To establish it as a necessary condition would require us to generalize the counter-example we gave earlier to arbitrary causal structures, that is, to show that for any DAG that is not an LDAG, full process tomography cannot be achieved. In the companion work Ref. \cite{JACQUES2}, we prove this using the \tit{process matrix} formalism for quantum causal models introduced in Ref. \cite{COSHRAP}.
When the causal structure is an LDAG, the class of causally symmetric quantum systems has the curious feature that the state at any moment -- if we \tit{do not condition on the outcomes of any measurements} -- is always maximally mixed. In this sense, the dynamics is trivial, or, as some have put it, `eternal noise' \cite{COECKE}. Formally we may state it as follows:\\
\tbf{EN. Eternal noise:} In a causally symmetric quantum causal model on an LDAG, the unconditioned marginal distribution $P(\tbf{L})$ for any layer is the uniform random distribution over the outcomes of its variables. That is, \\
\eqn{
P(\tbf{L}) = \prod_{X_i \in \tbf{L}} \, P(X_i) = \prod_{X_i \in \tbf{L}} \, \frac{1}{d^2_{X_i}} \, .
}
This follows because if we don't condition on any other layers, the input state to any layer is equivalent to the maximally mixed state after propagation through some unbiased channel, hence is maximally mixed on the input Hilbert space of the given layer. (\tbf{EN} will be useful for proving some results in the Appendices).\\
\subsubsection{The generalized Urgleichung for un-measurements \label{sec:qunmeas}}
In Sec. \ref{sec:born} we discussed the inference rule for a quantum un-measurement in the simple case of a causal chain. For that case we found that the inference rule was given by the QBist Urgleichung, Eq. \eqref{eqn:causalurg}. In this section we will generalize this rule to arbitrary LDAGs. (For clarity in the equations, we will replace $C^{Z} = \brm{undo}$ with the shorter notation $\brm{un}(Z)$, and will drop $C^{Z} = \oslash$ altogether, leaving it implicit whenever no value of $C^{Z}$ is specified).
Let us consider the reference behaviour $P(\tbf{X} Z|C^Z=\oslash)=P(\tbf{X} Z)$ in a system with causal structure given by an LDAG $G(\tbf{X} Z)$, where $Z$ is the variable to be un-measured. Assuming $Z$ is contained in the $j_{\trm{th}}$ layer, let $\tbf{L}^{(-Z)}_j := \tbf{L}_j \setminus Z$ denote the elements of $\tbf{L}_j$ other than $Z$. From the \tbf{QMC} (more specifically, \tbf{SSO}) we can decompose the probabilities as:
\eqn{ \label{eqn:lsplit}
P(\tbf{X} Z) &=& \prod_{i} \, P(\tbf{L}_i|\tbf{L}_{i-1}) \nonumber \\
&=& P(\tbf{L}_M \tbf{L}_{M-1} \cdots \tbf{L}_{j+2}|\tbf{L}_{j+1}) \, P(\tbf{L}_{j+1}|\tbf{L}_{j} \tbf{L}_{j-1}) \, P(\tbf{L}^{(-Z)}_j \, Z \, \tbf{L}_{j-1} \cdots \tbf{L}_{1}) \, .
}
Assuming the probabilities have a similar decomposition after the un-measurement leads us to posit the following ansatz:
\eqn{ \label{eqn:preansatz}
&& P(\tbf{X}|\brm{un}(Z)) := \nonumber \\
&& P(\tbf{L}_M \tbf{L}_{M-1} \cdots \tbf{L}_{j+2}|\tbf{L}_{j+1} , \, \brm{un}(Z)) \,
P(\tbf{L}_{j+1}|\tbf{L}^{(-Z)}_{j} \tbf{L}_{j-1} , \, \brm{un}(Z)) \,
P(\tbf{L}^{(-Z)}_j \,\tbf{L}_{j-1} \cdots \tbf{L}_{1} | \, \brm{un}(Z)) \, .
}
Now we may invoke the principle \tbf{CSO} (recall Sec. \ref{sec:activemanips}) to deduce that:
\eqn{
P(\tbf{L}_M \tbf{L}_{M-1} \cdots \tbf{L}_{j+2}|\tbf{L}_{j+1} , \, \brm{un}(Z)) \, &=&
P(\tbf{L}_M \tbf{L}_{M-1} \cdots \tbf{L}_{j+2}|\tbf{L}_{j+1}),
}
and
\eqn{
P(\tbf{L}^{(-Z)}_j \,\tbf{L}_{j-1} \cdots \tbf{L}_{1} | \brm{un}(Z)) &=&
P(\tbf{L}^{(-Z)}_j \,\tbf{L}_{j-1} \cdots \tbf{L}_{1}) \, .
}
Substituting these into \eqref{eqn:preansatz} we obtain:
\eqn{ \label{eqn:unmeasansatz}
P(\tbf{X}| \brm{un}(Z)) &:=&
P(\tbf{L}_M \cdots \tbf{L}_{j+2}|\tbf{L}_{j+1}) \, P(\tbf{L}_{j+1}|\tbf{L}^{(-Z)}_{j} \tbf{L}_{j-1} , \, \brm{un}(Z)) \, P(\tbf{L}^{(-Z)}_j \,\tbf{L}_{j-1} \cdots \tbf{L}_{1}) \, ,
}
and the only term that is not the same as in the reference behaviour is the middle factor, $P(\tbf{L}_{j+1}|\tbf{L}^{(-Z)}_{j} \tbf{L}_{j-1} , \,\brm{un}(Z))$. This term represents the probabilities of the measurements in the layer $\tbf{L}_{j+1}$, given that all other measurements in $\tbf{L}^{(-Z)}_j$ were performed, conditioned on the values of the preceding layer $\tbf{L}_{j-1}$. More generally, one might consider un-measuring $K$ variables in the $j_{\trm{th}}$ layer, as shown in Fig. \ref{fig:genUrg}.
\begin{figure}[!htb]
\centering\includegraphics[width=0.5\linewidth]{genurg.PNG}
\caption{A causal graph of three layers within a layered DAG. The main text explains how to infer the probabilities for an un-measurement of $K$ variables in the middle layer.}
\label{fig:genUrg}
\end{figure}
By inspection of the figure, one sees a close analogy with the scenario in which the original Urgleichung was derived, except that now the relevant `causal chain' involves entire layers, $\tbf{L}_{j-1} \rightarrow \tbf{L}_{j} \rightarrow \tbf{L}_{j+1}$. This suggests that it might be more straightforward to first derive the inference rule for the un-measurement of the \tit{entire} middle layer $\tbf{L}_{j}$, and then specialize this to the case where only a subset, or the single member $Z$, is un-measured. We may therefore pose the problem as follows. Let $\rho_{\tbf{L}_{j-1}}$ be an operator on $\mathcal{H}_{\tbf{L}_j}$ representing the quantum state input to all measurements in the layer $\tbf{L}_j$, conditional on the outcomes $\tbf{L}_{j-1}$ measured at the previous layer. Then the desired probabilities can be expressed in the form:
\eqn{ \label{eqn:genurgstart}
P(\tbf{L}_{j+1}|\tbf{L}_{j-1} , \, \brm{un}(\tbf{L}_j)) &=& \tr{\rho_{\tbf{L}_{j-1}} D_{\tbf{L}_{j+1}} } \, ,
}
where $D_{\tbf{L}_{j+1}}$ is some POVM comprising operators on $\mathcal{H}_{\tbf{L}_j}$, whose outcomes correspond to the variables in $\tbf{L}_{j+1}$. Let us label the variables in the $j_{\trm{th}}$ layer as $\tbf{L}_j := V_1 \cup \dots \cup V_N$, and associate the layer $\tbf{L}_j$ with the joint Hilbert space $\mathcal{H}_{\tbf{L}_j}:=\mathcal{H}_{V_1} \otimes \dots \otimes \mathcal{H}_{V_N}$. Since each $V_i$ is the outcome of a SIC-instrument, the whole layer $\tbf{L}_j$ is associated with a tensor product of SIC-instruments, which is itself an IC-instrument on $\mathcal{H}_{\tbf{L}_j}$. Hence the reference probabilities $P(\tbf{L}_{j+1}|\tbf{L}_{j}),P(\tbf{L}_{j}|\tbf{L}_{j-1})$ contain sufficient information to fully reconstruct the operators $\rho_{\tbf{L}_{j-1}}, D_{\tbf{L}_{j+1}}$. This means we can write these operators as functions of the probabilities and insert the resulting expressions into the RHS of Eq. \eqref{eqn:genurgstart} to obtain the desired inference rule for un-measuring $\tbf{L}_{j}$. The main obstacle is that a tensor product of SIC-instruments is not itself a SIC-instrument, and so the final equation will have a different form from the Urgleichung. Indeed, one might expect it to be rather complicated and ugly. Remarkably, following the procedure just outlined, we arrive at the unexpectedly simple result (derived in Appendix \ref{app:genurg1}):
\eqn{ \label{eqn:genurgall}
P(\tbf{L}_{j+1}|\tbf{L}_{j-1} , \, \brm{un}(\tbf{L}_j)) &=& \zum{\tbf{v} \tbf{v}'}{} \, P(\tbf{L}_{j+1}|\tbf{v}') \left( \prod_{n=1}^{N} \, \left[ (d_n+1)\delta_{v_n v'_n}-\frac{1}{d_n} \right] \right) \, P(\tbf{v}|\tbf{L}_{j-1}) ,
}
where $d_n$ is the dimension of $\mathcal{H}_{V_n}$, and the summation over each $v_n$ runs up to $d^2_n$ (since that is the number of outcomes of the $n_{\trm{th}}$ SIC-instrument). Notice that this equation gives us $P(\tbf{L}_{j+1}|\tbf{L}_{j-1} , \, \brm{un}(\tbf{L}_j))$ purely as a function of the reference probabilities, i.e. the terms on the RHS are understood to be conditioned on $C^{\tbf{L}_j}=\oslash$, so it gives us the desired inference rule for an un-measurement of $\tbf{L}_j$.
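Eq. \eqref{eqn:genurgall} can be verified numerically for a layer of two qubits ($N=2$, $d_1=d_2=2$). In the sketch below (ours; the POVM $D$ and all names are assumptions made for illustration), the RHS built from the reference probabilities reproduces the Born-rule probabilities $\mathrm{tr}(\rho D_c)$ for a random state:

```python
import numpy as np

# Qubit SIC projectors (Bloch tetrahedron), used for both V_1 and V_2.
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
Pi = [0.5 * (np.eye(2) + sum(v[k] * sig[k] for k in range(3))) for v in bloch]
d = 2

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T               # random input state to layer L_j = {V_1, V_2}
rho /= np.trace(rho).real

phi = np.zeros((4, 1)); phi[0, 0] = phi[3, 0] = 1 / np.sqrt(2)
D = [phi @ phi.T, np.eye(4) - phi @ phi.T]     # a two-outcome POVM for L_{j+1}

# Reference probabilities: P(v|L_{j-1}) and P(L_{j+1}|v).
Pv = np.array([[np.trace(rho @ np.kron(Pi[a], Pi[b])).real / d**2
                for b in range(d**2)] for a in range(d**2)])
Pb_v = np.array([[[np.trace(np.kron(Pi[a], Pi[b]) @ D[c]).real
                   for c in range(2)] for b in range(d**2)] for a in range(d**2)])

kern = lambda x, y: (d + 1) * (x == y) - 1 / d
# Generalized Urgleichung for un-measuring the whole layer.
urg = [sum(Pb_v[a2, b2, c] * kern(a1, a2) * kern(b1, b2) * Pv[a1, b1]
           for a1 in range(4) for b1 in range(4)
           for a2 in range(4) for b2 in range(4)) for c in range(2)]
born = [np.trace(rho @ D[c]).real for c in range(2)]
```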
We next tackle the question of what form this rule should take when we only wish to un-measure a subset of $\tbf{L}_j$. Without loss of generality, we can partition $\tbf{L}_j := \{U_i : i=1,2,\dots ,K \} \cup \{W_i : i=1,2,\dots,N-K \} := \tbf{U} \cup \tbf{W}$ for some $K < N$, and contemplate an un-measurement of all $\tbf{U}$, while keeping the measurements $\tbf{W}$ in place. Our goal is then to calculate the probabilities for $\tbf{L}_{j+1}$ given that $\tbf{U}$ are un-measured, while $\tbf{W}$ are measured and attain specific values $\tbf{W} =\{w_1 \dots w_{N-K} \}$. Since a SIC-instrument $W_i$ with outcome $W_i=w_i$ projects the measured sub-system into the post-measurement state $\Pi_{w_i}$, we can express the desired probabilities as:
\eqn{ \label{eqn:genurgstart2}
P(\tbf{L}_{j+1}|\tbf{L}_{j-1} \tbf{w}, \, \brm{un}(\tbf{U})) &=& \tr{\left( \rho^{\tbf{U}}_{\tbf{L}_{j-1}}\otimes \Pi_{w_{1}} \otimes \cdots \otimes \Pi_{w_{N-K}} \right) D_{\tbf{L}_{j+1}} } \, ,
}
where $\rho^{\tbf{U}}_{\tbf{L}_{j-1}} := \trc{W}{\rho_{\tbf{L}_{j-1}} }$ is the reduced state of the input to the first $K$ measurements, defined on the Hilbert space $\mathcal{H}_{U_1} \otimes \dots \otimes \mathcal{H}_{U_K}$, and conditioned on the outcomes $\tbf{L}_{j-1}$ from the previous layer. We can then carry out the calculation in exactly the same way as before. To do so, we partition the layer as $\tbf{L}_j := \{V_i : i=1,2,\dots ,K \} \cup \{V_i : i=K+1,\dots,N \} := \tbf{U} \cup \tbf{W}$, where $\tbf{U}$ are the variables to be un-measured. Hence $\tbf{U}$ takes values that are $K$-tuples $\tbf{u}:=(v_1,\dots,v_K)$ and $\tbf{W}$ takes values that are $(N-K)$-tuples $\tbf{w}:=(v_{K+1},\dots,v_{N})$. It will also be useful to define the variable $\tbf{V}$ covering the whole layer, having the $N$-tuple values $\tbf{v}:=(v_{1},\dots,v_{N})$. We then obtain (details in Appendix \ref{app:genurg2}):
\eqn{ \label{eqn:genurgsubset}
P(\tbf{L}_{j+1}|\tbf{L}_{j-1} \tbf{w} , \, \brm{un}(\tbf{U})) &=& \zum{\tbf{u}' \tbf{u}}{} \, P(\tbf{L}_{j+1}|\tbf{u}\tbf{w}) \, \left( \prod^{K}_{n=1} \, \left[ (d_n+1)\delta_{v'_n v_n}-\frac{1}{d_n} \right] \right) \, P(\tbf{u}'|\tbf{L}_{j-1}) \, .
}
This equation tells us the probabilities conditional on the outcomes $\tbf{W}=\tbf{w}$. It can be combined with $P(\tbf{W} |\tbf{L}_{j-1} , \, \brm{un}(\tbf{U}))$ to find the result after marginalizing over $\tbf{W}$, namely:
\eqn{ \label{eqn:qLTP}
P(\tbf{L}_{j+1}|\tbf{L}_{j-1} , \, \brm{un}(\tbf{U})) &=& \zum{\tbf{w}}{} \, P(\tbf{L}_{j+1}|\tbf{L}_{j-1} \tbf{w} , \, \brm{un}(\tbf{U})) \, P(\tbf{w} |\tbf{L}_{j-1} , \, \brm{un}(\tbf{U})) \, \nonumber \\
&=& \zum{\tbf{w}}{} \, P(\tbf{L}_{j+1}|\tbf{L}_{j-1} \tbf{w} , \, \brm{un}(\tbf{U})) \, P(\tbf{w} |\tbf{L}_{j-1}) \, ,
}
where in the last line we used the fact that, as a consequence of \tbf{CNS},
\eqn{
P(\tbf{w} |\tbf{L}_{j-1} , \, \brm{un}(\tbf{U}) ) = P(\tbf{w} |\tbf{L}_{j-1}) \nonumber \, ,
}
i.e. the probabilities for the measurements of $\tbf{W}$ are the same whether or not the measurements of $\tbf{U}$ are performed.
It is illuminating to consider some special cases of this inference rule. First, consider the case that $N=1$, that is, the layer $\tbf{L}_j$ consists of a single measurement $V_1 := Z$ which is to be un-measured. In that case Eq. \eqref{eqn:genurgsubset} reduces to:
\eqn{ \label{eqn:oneZ}
P(\tbf{L}_{j+1}|\tbf{L}_{j-1} \, , \brm{un}(Z))
&=& \zum{z' z}{} \, P(\tbf{L}_{j+1} |z ) \, \left[ (d_Z+1)\delta_{z' z}-\frac{1}{d_Z} \right] \, P(z'|\tbf{L}_{j-1}) \, \nonumber \\
&=& \zum{z}{} \, P(\tbf{L}_{j+1} |z ) \, \left[ (d_Z+1) P(z|\tbf{L}_{j-1}) -\frac{1}{d_Z} \right] \, ,
}
which recovers the usual Urgleichung, as expected. Next, consider the case where the probabilities at the layer $\tbf{L}_{j+1}$ only depend on the subsystems \tbf{W} of the previous layer $\tbf{L}_{j}$ (this could occur if, for instance, the un-measured \tbf{U} variables are all terminal nodes in the DAG, meaning that the sub-systems are discarded after measurement). In this case it should not make any difference to our considerations whether the \tbf{U} are un-measured, or simply measured and then discarded. In other words, we would expect to recover the usual rule for non-disturbing measurements in this case. And indeed, substituting our assumption $P(\tbf{L}_{j+1}|\tbf{U}\tbf{W}) \mapsto P(\tbf{L}_{j+1}|\tbf{W})$ into the RHS of \eqref{eqn:genurgsubset} we obtain:
\eqn{
P(\tbf{L}_{j+1}|\tbf{L}_{j-1} \tbf{w} , \, \brm{un}(\tbf{U})) &=& \zum{\tbf{u}' \tbf{u}}{} \, P(\tbf{L}_{j+1}|\tbf{w}) \, \left( \prod^{K}_{n=1} \, \left[ (d_n+1)\delta_{v'_n v_n}-\frac{1}{d_n} \right] \right) \, P(\tbf{u}'|\tbf{L}_{j-1}) \, \nonumber \\
&=& \zum{\tbf{u}'}{} \, P(\tbf{L}_{j+1}|\tbf{w}) \, \left( \prod^{K}_{n=1} \, \zum{v_n}{} \, \left[ (d_n+1) \delta_{v'_n v_n}-\frac{1}{d_n} \right] \right)P(\tbf{u}'|\tbf{L}_{j-1}) \, \nonumber \\
&=& \zum{\tbf{u}'}{} \, P(\tbf{L}_{j+1}|\tbf{w}) \, \left( \prod^{K}_{n=1} \, \left[ (d_n+1)-(d^2_n)\frac{1}{d_n} \right] \right)P(\tbf{u}'|\tbf{L}_{j-1}) \, \nonumber \\
&=& \zum{\tbf{u}'}{} \, P(\tbf{L}_{j+1}|\tbf{w}) P(\tbf{u}'|\tbf{L}_{j-1}) \, \nonumber \\
&=& P(\tbf{L}_{j+1}|\tbf{w}) \, ,
}
and inserting this into the RHS of \eqref{eqn:qLTP} gives us:
\eqn{
P(\tbf{L}_{j+1}|\tbf{L}_{j-1} , \, \brm{un}(\tbf{U})) &=& \zum{\tbf{w}}{} \, P(\tbf{L}_{j+1}|\tbf{w}) \, P(\tbf{w} |\tbf{L}_{j-1}) \, ,
}
which recovers the rule for non-disturbing measurements, Eq. \eqref{eqn:nondisturb}.
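The recovery of the Urgleichung in Eq. \eqref{eqn:oneZ} can likewise be tested directly against the Born rule: for a qubit SIC, $\sum_z [(d_Z+1)P(z)-\frac{1}{d_Z}]P(\tbf{L}_{j+1}|z)$ reproduces $\mathrm{tr}(\rho E_b)$ for any POVM $\{E_b\}$ at the later layer. A sketch (ours, with assumed names):

```python
import numpy as np

# Qubit SIC from the Bloch tetrahedron; the reference probabilities are
# P(z) = tr(rho Pi_z)/d (SIC-POVM elements Pi_z/d) and, since the
# SIC-instrument re-prepares Pi_z, P(b|z) = tr(Pi_z E_b).
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
Pi = [0.5 * (np.eye(2) + sum(v[k] * sig[k] for k in range(3))) for v in bloch]
d = 2

rng = np.random.default_rng(2)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho).real

E = [0.5 * np.eye(2) + 0.2 * sig[0],
     0.5 * np.eye(2) - 0.2 * sig[0]]     # a valid two-outcome POVM

P_z = np.array([np.trace(rho @ Pi[z]).real / d for z in range(d**2)])
P_b_z = np.array([[np.trace(Pi[z] @ E[b]).real for b in range(2)]
                  for z in range(d**2)])

# Urgleichung: P(b | un(Z)) = sum_z [(d+1) P(z) - 1/d] P(b|z).
urg = np.array([sum(((d + 1) * P_z[z] - 1 / d) * P_b_z[z, b]
                    for z in range(d**2)) for b in range(2)])
born = np.array([np.trace(rho @ E[b]).real for b in range(2)])
```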
It remains to determine the causal graph $G(\tbf{X})$ that holds for the counterfactual behaviour $P(\tbf{X}|\,\brm{un}(Z))$. Given that un-measurements do not break causal chains (property (ii) of un-measurements, Sec. \ref{sec:activemanips}), the following is a natural postulate:\\
\tbf{UNDAG. The DAG after an un-measurement:}\\
Let $G(\tbf{X},Z)$ represent the causal structure of a quantum system in the reference experiment, where $Z$ is not an exogenous node. Then the DAG conditional on un-measuring $Z$, denoted $G(\tbf{X}|C^Z=\brm{undo})$, is obtained from $G(\tbf{X},Z)$ by the following procedure:\\
(i) Directly connect every parent of $Z$ to every child of $Z$;\\
(ii) Delete $Z$ and its incoming and outgoing arrows.\\
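The two-step surgery in \tbf{UNDAG} is straightforward to implement on an edge-list representation of the DAG; the helper below (our own illustration, with hypothetical node names) bridges parents to children and then removes $Z$:

```python
def undo_node(edges, z):
    """UNDAG surgery: (i) connect every parent of z directly to every
    child of z; (ii) delete z together with its incoming and outgoing
    arrows.  `edges` is a set of (parent, child) pairs."""
    parents = {a for a, b in edges if b == z}
    children = {b for a, b in edges if a == z}
    kept = {(a, b) for a, b in edges if z not in (a, b)}
    return kept | {(p, c) for p in parents for c in children}

# Example: un-measure Z in X -> Z -> Y; the causal pathway X -> Y survives.
edges = {("X", "Z"), ("Z", "Y")}
new_edges = undo_node(edges, "Z")
```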
It is natural to ask what conditional independences are implied by the \tbf{QMC} in the DAG after un-measuring $Z$. To obtain the answer, note that the removal of $Z$ in the manner described in \tbf{UNDAG} cannot un-block any path between variables that were previously blocked (unless they were blocked by $Z$) and it cannot block any path that was previously un-blocked. This means that the conditional independences implied by the DAG $G(\tbf{X}|C^Z=\brm{undo})$ after the un-measurement are simply the set of conditional independences implied by the original LDAG $G(\tbf{X} \, Z)$, excluding those that involve $Z$.
The next important question is whether the probabilities after the un-measurement continue to satisfy these conditional independences, i.e. whether $P(\tbf{X}|\,C^Z=\brm{undo})$ satisfies the \tbf{QMC} relative to the new DAG $G(\tbf{X}|C^Z=\brm{undo})$. True enough, we effectively \tit{assumed} that this is the case in order to obtain the general decomposition into layers of Eq. \eqref{eqn:unmeasansatz}. However, this decomposition relies only on the rule \tbf{SSO}, which applies only between layers. In particular, we have \tit{not} assumed that the probabilities $P(\tbf{L}_{j+1}|\tbf{L}_{j-1} \tbf{W} , \, \brm{un}(\tbf{U}))$ as defined by Eq. \eqref{eqn:genurgsubset} obey the constraints of the \tbf{QMC} relative to $G(\tbf{X}|C^Z=\brm{undo})$. In fact, establishing this turns out to be highly non-trivial and we were not able to do it in full generality. Instead, it is left as a conjecture. Evidence in its favour is presented in Appendix \ref{app:qunmeas}, where it is shown to be true for a significant class of conditional independence relations.
\tit{Remark:} It is important to note that the DAG $G(\tbf{X}|C^Z=\brm{undo})$ after un-measuring $Z$ is not necessarily an LDAG. When this occurs, it may not be possible to perform further un-measurements or interventions (i.e. because by not being an LDAG the new model does not meet the requirements of a reference experiment, and the desired inference rules may not exist). However, the rule \eqref{eqn:genurgsubset} is sufficiently general to allow un-measurements of any subset of variables within a single layer. Moreover, variables can be un-measured in multiple layers, provided each affected layer is sandwiched between a pair of layers that are not themselves subject to un-measurements. Hence the fact that un-measurements do not preserve the LDAG structure does not entirely prevent us from making inferences about multiple un-measurements, so long as these meet the above requirements.
\begin{figure}[!htb]
\centering\includegraphics[width=0.5\linewidth]{qunmeas.PNG}
\caption{The causal graph before and after un-performing the measurement $Z$; the causal pathways from its parents to its children are preserved.}
\label{fig:qunmeas}
\end{figure}
\subsection{Discussion \label{sec:discuss}}
In this work we have seen how a quantum causal model can be constructed within a general framework for causal modeling that defines causality in terms of probabilities for observations in counterfactual experiments. We proposed that quantum reference measurements should be informationally complete, and on grounds of simplicity we adopted the convention of using SIC-instruments (definition \tbf{QRM} in Sec. \ref{sec:born}). Using this as our guide, we identified a set of physical Markov conditions applicable to quantum systems, namely $\{ \tbf{RP}, \tbf{SSO}, \tbf{BK}, \tbf{BK*}, \tbf{PE} \}$. We drew attention to a quasi-symmetry in this set under causal inversion, and we proposed to make the symmetry complete by dropping \tbf{PE}, and pairing \tbf{RP} with its causal inverse \tbf{RP*} to form the `symmetric Reichenbach principle' \tbf{SRP}. We argued that the resulting set of causally symmetric Markov conditions, $\{ \tbf{SRP}, \tbf{SSO}, \tbf{BK}, \tbf{BK*} \}$, could be enforced by restricting to experiments involving unbiased quantum processes acting on maximally mixed input states, which we defined as the set of causally symmetric quantum systems. We then converted the causally symmetric Markov conditions into graphical rules that we used to obtain the corresponding rules for arbitrary causal structures, as given by the \tbf{QMC} (cf Sec. \ref{sec:qmc}).
Using the \tbf{QMC}, we attempted to derive counterfactual inference rules for interventions and quantum un-measurements in Sec. \ref{sec:qinterf}. The two main challenges to this task were (i) upholding the principle of causal sufficiency \tbf{CS} (Sec. \ref{sec:experiments}) for interventions on quantum systems, and (ii) defining an inference rule for un-measurements on quantum systems, which in general are disturbing measurements. The first was overcome by positing that the causal structure in the reference experiment should be layered (see definition \tbf{LDAG} in Sec. \ref{sec:qintervsoln}), while the second was overcome by adapting an existing result from the literature on QBism (the Urgleichung, Eq. \eqref{eqn:urg}) to our framework and generalizing it to more general causal structures, as described by the inference rule Eq. \eqref{eqn:genurgsubset}. We now anticipate and address some questions about this framework.\\
\tit{What is the physical significance of the restriction to LDAGs?} \\
This restriction implies that it must always be possible -- in principle -- to perform measurements on a quantum system so as to have the causal structure conform to that of an LDAG. We note that this structure is strongly reminiscent of the network structure of a \tit{Markovian} quantum process as discussed elsewhere, e.g. in \cite{POLLOCKPRL,POLLOCKPRA,COSHRAP}. However, to make this comparison precise is difficult because the aforementioned works are primarily concerned with Markovianity as a property of \tit{process tensors} \cite{POLLOCKPRL} or \tit{process matrices} \cite{COSHRAP}, whereas the present framework formulates the \tbf{QMC} only at the level of probabilities. Nevertheless, our model was explicitly constructed with operator representations of quantum networks in mind, and so we expect a link to exist between our definitions and those pertaining to entities like process tensors and process matrices. In particular, we note that the decomposition of the first line in \eqref{eqn:lsplit} is essentially identical to the definition of a classical Markovian process except that the measurements $\tbf{L}_i$ do not necessarily occur simultaneously at a single time-step, but are ordered by the more general consideration of causal influence in the LDAG. This enables us to apply the \tit{Corollary} in Ref. \cite{POLLOCKPRL} to deduce that any representation of our quantum causal model in terms of process tensors will necessarily result in a process tensor that is Markovian by the definition of that work, at least with regards to measurements and interventions performed on whole layers. Thus, the restriction to LDAGs may be understood as imposing the convention that the reference experiment should be Markovian.\\
\tit{What is the significance of the causal symmetry?} \\
We can formalize the causal symmetry of the \tbf{QMC} in the following way: suppose that $P(\tbf{X})$ satisfies the \tbf{QMC} relative to a DAG $G(\tbf{X})$. Let $G^{*}(\tbf{X})$ be the DAG obtained from $G(\tbf{X})$ by reversing the directions of all arrows. Then $P(\tbf{X})$ also satisfies the \tbf{QMC} relative to $G^{*}(\tbf{X})$. Thus if one were ignorant of the overall direction of the causal structure in a system -- though we must suspend disbelief to imagine how one could ever be so ignorant -- then one would be powerless to determine this direction just from examining the behaviour of the system in the reference experiment (\tit{sans} other information about, e.g., the actual timing of the measurements). To obtain information about the causal direction one would have to either perform an intervention or an un-measurement and see how the system responds. This leaves open the intriguing possibility of interpreting the \tit{direction} of causal structure as a property that has no definite value prior to an act of \tit{manipulation}, in analogy to how the properties of a quantum system are regarded to have no definite values prior to their measurement. Similar ideas have been explored in related frameworks by other authors \cite{ORESH, GUERIN,OCB}, and we show in a companion work (Ref. \cite{JACQUES2}) that the causal symmetry we defined here at the level of probabilities can also be extended to the process matrix causal modeling framework of Ref. \cite{COSHRAP}. More work is needed to understand the relevance of such results to the quantum gravity setting, where space-time relations are treated as dynamical variables that may be quantized.
\tit{What is the classical limit of causally symmetric quantum causal models?} \\
We remarked in Sec. \ref{sec:soff} that a `classical limit' can be obtained from a quantum network by restricting all measurements to be performed in a single basis, and restricting all states to be diagonal in this same basis. In the present framework, doing so leads to the result that the reference measurements can be non-disturbing, just as in the case of classical stochastic systems. However, if applied to the class of causally reversible quantum systems, it appears unlikely that the classical limit would produce a model that satisfies \tbf{FCC}, as this would break the symmetry between \tbf{BK*} and \tbf{BK}. The possibility of a stochastic classical causal model violating \tbf{FCC} is intriguing, for it would indicate that the failure of \tbf{FCC} (and for that matter, causal reversibility) is not by itself a uniquely quantum phenomenon. It may be, for instance, that there is no fundamental difference between classical and quantum systems in terms of the physical Markov conditions that each can satisfy, but that the difference lies entirely in the form of the counterfactual inference rules. More work is needed to investigate this possibility.\\
\tit{Is there a quantum advantage to causal inference?} \\
We remarked in Sec. \ref{sec:soff} that in order to establish a basis for comparison of the different powers of quantum versus classical systems in causal inference, it is necessary to establish a quantum analog of the classical `passive observational scheme'. Our concept of a `reference observational scheme' is a natural candidate for the quantum analog of a passive observational scheme, and it may be fruitful to make comparisons between the quantum and classical causal models on this basis, such as comparing our present constraint of causal symmetry (characteristic of our reference observational scheme) with the different notion of \tit{informationally symmetric measurements} introduced in Refs. \cite{RIED, RIEDPHD}. Another interesting avenue to explore is the role of un-measurements in the task of inferring causal structure. As we have shown, un-measurements can reveal causal structure in a way that does not break causal chains, which raises questions about their limitations and advantages in comparison to interventions. Furthermore, the concept of an un-measurement might be carried over to classes of classical systems in which measurements are known to be disturbing. Situations abound in which the mere act of measuring an experimental subject may change that subject's behaviour, and it would be interesting to see whether our framework can be used to model the response to such measurements using an appropriately tailored inference rule for un-measurements.
\section{Conclusions}
Science is primarily concerned with the acquisition of knowledge about the world, but to fulfill this role it must also be concerned with understanding the limitations on how such knowledge can be acquired. On a manipulationist interpretation of causality, one is concerned with the information available to an observer prior to performing `manipulations' on the system, as encoded in the observer's probability assignments for measurements in a conventionally defined reference experiment. From this point of view, the reference measurements should be maximally informative about counterfactual experiments, hence should be informationally complete. For quantum systems, this leads naturally to the idea of SIC-instruments as the elementary causal relata, and this in turn leads to the quantum physical Markov Conditions and then to inference rules for interventions and un-measurements.
Curiously, we have found that the reference experiment may be defined so that the physical Markov conditions of quantum systems are symmetric under causal inversion. If we reject the assumption that this symmetry represents ignorance of a true underlying causal orientation, then we may entertain the idea that the asymmetry of the direction of cause and effect is introduced at the moment that an observer interacts with a system, either to make an intervention or to perform an un-measurement.
These results suggest two main conclusions. Firstly, it has been shown that the manifest asymmetry of the cause-effect relationship on a manipulationist and probabilistic account of causality may yet be compatible with the symmetry of physics at the fundamental level. That is to say, if one regards the reference experiment as representing the state of maximal information that is in principle available about a system for making inferences, and if one defines this experiment in the manner that makes it causally symmetric as we have done here, then it can be maintained that the very direction of causal structure itself need not be treated as a feature intrinsic to the observer-independent world, but may emerge through the very process of the observer's interaction with the system. This idea is explored further in the companion work \cite{JACQUES2}.
Secondly, our introduction of the concept of `un-measurement' (i.e. the enforcement of a lack of some measuring device at some location) as an essential part of causal inference on quantum systems is quite novel. Un-measurements are interesting in that they preserve the causal paths, and yet appear to depend upon (and thus have the potential to reveal) causal structure beyond what can be ascertained from the reference experiment. Future work is needed to understand whether un-measurements have a special role to play in the inference of causal structure in both quantum and classical settings.
\acknowledgments
I thank G. Barreto Lemos and C. Duarte for detailed feedback on early versions; C. Fuchs, B. Stacey and J. DeBrota for help in understanding the mathematics of QBism; P. Kauark Leite, Romeu Rossi Jr. and R. Chaves for stimulating discussions that influenced the course of the paper; and an anonymous referee for pointing out a serious error that has now been fixed. This work was supported in part by the John E. Fetzer Memorial Trust.
\section{Introduction}
It has long been speculated that the vast energy of supernova remnants (SNRs) is tapped to produce the bulk of Galactic cosmic rays. Those SNRs interacting with molecular clouds are excellent targets to detect evidence of cosmic ray acceleration. The nearby dense gas acts as a target for hadronic cosmic rays, producing $\gamma$-rays via the decay of neutral pions (e.g., Drury et al. 1994).
Interest in cosmic rays from interacting SNRs has been reinvigorated by recent detections of $\gamma$-ray emission at GeV- to multi-TeV-energies from supernova remnants. The HESS galactic plane survey yielded 14 new TeV detections, 7 of which are coincident with SNRs \citep{aharonian06}. Targeted observations are now capable of reaching sufficient sensitivities to resolve the morphology of $\gamma$-ray emission associated with the remnant \citep{hess_ctb37a,hess_w28}. In particular, a TeV-energy counterpart to IC 443 was observed to be spatially offset from the GeV-energy $\gamma$-ray source \citep{magic_ic443,veritas_ic443}. The spatial variations with energy may indicate the diffusion of cosmic rays accelerated by the supernova remnant out into the adjacent molecular cloud \citep{torres08}.
Interacting SNRs represent a promising class of $\gamma$-ray sources which are likely to be uncovered in increasing numbers by the current generation of $\gamma$-ray observatories. However, to date the identification of $\gamma$-ray counterparts to SNRs has been inhibited by the poor spatial resolution and sensitivity of $\gamma$-ray telescopes \citep{esposito96,torres03}. Often, the association of $\gamma$-ray sources with interacting SNRs rests on supplemental evidence from common tracers of interaction with dense gas: broad molecular lines, bright at infrared to millimeter wavelengths (e.g., Reach et al. 2005), and the hydroxyl (OH) maser at 1720 MHz (e.g., Frail et al. 1994).
OH masers, when detected only at 1720 MHz, are a particularly powerful diagnostic which is only associated with SNR shocks. While these "SNR masers" are only detected in $\sim$10\% of remnants, they unambiguously trace cool (25--150 K), dense ($\sim$10$^5$ cm$^{-3}$) gas in the wake of non-dissociative shocks and permit a direct measure of magnetic field strength \citep{lockett99}. Prominent interacting SNRs W28, W44 and IC 443 have been noted for both bright masers and coincident EGRET sources, leading to the suggestion that masers may trace cosmic-ray acceleration sites \citep{claussen97}.
In this letter we address the nature of interacting SNRs with $\gamma$-ray counterparts: First, we show that an excellent correlation exists between SNR masers and a subset of GeV- and TeV-energy sources toward SNRs. This strengthens the argument for a hadronic origin for the observed $\gamma$-rays. Second, if pion decay is indeed the dominant emission mechanism, the implied cosmic ray enhancement is sufficient to explain the increased ionization needed to produce high columns of OH in the post-shock gas \citep{wardle99}. This suggests there is a more intimate relationship between SNR masers and $\gamma$-ray sources than merely being complementary tracers of dense gas interaction.
\begin{deluxetable*}{rrlrrrrrrr}
\tablecaption{Known SNR Masers and Coincident $\gamma$-ray Sources}
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablehead{
\colhead{$l$} & \colhead{$b$} & \colhead{SNR} & \colhead{Diameter} & \colhead{Distance} &
\colhead{L$_{GeV}$} & \colhead{$\alpha_{GeV}$} &
\colhead{L$_{TeV}$} & \colhead{$\alpha_{TeV}$} & \colhead{Ref.} \\
& & & \colhead{(\arcmin )} & \colhead{(kpc)} & \colhead{(ergs s$^{-1}$)} & & \colhead{(ergs s$^{-1}$)} & &
}
\startdata
\multicolumn{10}{c}{Group A}\\
\hline
6.4 &--0.1 &W28 &42 &2.0 &4.8e35 &2.1 &1.5e33 &2.7 & 1,2\\
34.7 &--0.4 &W44 &30 &2.5 &4.1e35 &1.9 &--- &--- & 1 \\
49.2 &--0.7 &W51 C &30 &6 &1.7e36 &-- &1.5e33 &-- & 3,4 \\
189.1 &+3.0 &IC 443 &50 &1.5 &1.0e35 &2.0 &1.2e33 &3.1 & 1,5,6\\
\hline
\multicolumn{10}{c}{Group B}\\
\hline
0.0 &+0.0 &SgrA East &2.5 &8.5 &1.2e37 &1.7 &1.3e36 &2.2 & \\
5.7 &--0.0 & &9 &3.2 &--- &--- &0.8e33 &2.3 & 2 \\
8.7 &--0.1 &W30 &45 &3.9 &--- &--- &1.7e34 &2.7 & \\
337.8 &--0.1 &Kes 41 &5 &12.3 &4.2e36 &2.5 &--- &--- & \\
348.5 &+0.1 &CTB 37A &10 &11.3 &3.5e36 &2.3 &5.3e33 &2.3 & 7\\
359.1 &--0.5 & &10 &5.0 &1.2e36 &2.2 &7.5e32 &2.7 & 8\\
\hline
\multicolumn{10}{c}{Group C}\\
\hline
1.0 &--0.1 &Sgr D SNR&8 &8.5 & \\
1.4 &--0.1 &&10 &8.5 & \\
5.4 &--1.2 &Duck &35 &5.2 & \\
9.7 &--0.0 &&11 &4.7 & \\
16.7 &+0.1 &&4 &2/14 & \\
21.8 &--0.6 &Kes 69 &20 &5.2 & \\
31.9 &--0.0 &3C 391 &8 &9 & \\
32.8 &--0.1 &Kes 78 &20 &5.5/8.5& \\
337.0 &--0.1 &CTB 33 &3 &11 & \\
346.6 &--0.2 &&8 &11 & \\
348.5 &--0.0 &&10 &13.7 & \\
349.7 &+0.2 &&2 &$>$11 & \\
357.7 &+0.3 &Square &24 &6.4 & \\
357.7 &--0.1 &Tornado &5 &$>$6
\enddata
\tablecomments{
{\scriptsize EGRET luminosity from 0.04-6 GeV and HESS or VERITAS luminosity from 0.3-10 TeV, where $\alpha$ is the photon index.
\noindent REFERENCES. -- (1) \citet{esposito96}, (2) \citet{hess_w28}, (3) \citet{abdo09}, (4) \cite{feinstein09}, (5) \cite{veritas_ic443}, (6) \citet{magic_ic443}, (7) \citet{hess_ctb37a}, (8) \cite{aharonian06} \label{tbl:list}} }
\end{deluxetable*}
\section{$\gamma$-ray Emission from Interacting SNRs}
First, we assess $\gamma$-ray sources which may be directly associated with supernova remnants (hereafter, "$\gamma$-ray SNRs"). To identify potential $\gamma$-ray counterparts to SNRs we have drawn from three catalogs: EGRET sources \citep{torres03}, the $Fermi$ Bright Source List \citep{abdo09} and the TeVCat\footnote{TeVCat is an online catalog of very-high-energy ($>$50 GeV) $\gamma$-ray sources. It can be accessed at http://tevcat.uchicago.edu .}. It is not trivial to determine the origin of each $\gamma$-ray source. Potential associations can be confused by the presence of pulsars or pulsar wind nebulae, which are also capable of producing high-energy $\gamma$-ray emission. Some sources reported by \citet{torres03} as potential EGRET counterparts to SNRs have since been associated with other astrophysical sources. The source 3EG J1410-6147, coincident with SNR G312.4-0.4, has recently been associated with the young pulsar J1410-6132 with variability detected by AGILE \citep{obrien08}. The source 3EG J1824-1514 has been confirmed as the microquasar LS 5039 \citep{ls5039}, ruling out an association with SNR G16.8-1.1 \citep{torres03}. Sources 3EG J2016+3657 and 3EG J2020+4017, coincident with SNRs G74.9+1.2 and G78.2+2.1 respectively, are identified as extragalactic blazars \citep{iyudin07}. Finally, the EGRET counterparts for SNRs G180.0-1.7, G355.6+0.0, G39.2-0.3 and G74.8+1.2 \citep{torres03} do not appear in the Fermi bright source catalog \citep{abdo09}, and we therefore do not include them as detections in our analysis.
We include all remnants for which an association with the coincident $\gamma$-ray source has not been ruled out. There are 26 identified SNRs with $\gamma$-ray coincidences: 7 are young remnants ($\la$1 kyr), 12 are SNRs with evidence of interaction with dense clouds, and 7 are unclassified remnants. Of the 12 identified SNRs which are interacting with dense gas, all but two (MSH 11-61A and W49B) have detected SNR masers. The presence of masers gives several advantages: (1) masers signpost interaction with dense (10$^5$ cm$^{-3}$) clouds, which will enhance the pion decay signature, (2) the velocity of the maser gives a kinematic distance, and (3) an established velocity allows the adjacent cloud to be isolated from confusing Galactic emission along the line of sight.
Table \ref{tbl:list} lists all SNRs with masers and the properties of potentially associated $\gamma$-ray emission. In the first five columns the Galactic coordinates, name, diameter and kinematic distance are listed for each remnant. The $\gamma$-ray luminosities, derived from the reported $\sim$100 MeV--10 GeV and $\gtrsim$1 TeV fluxes, are given in columns 6 and 8. The corresponding spectral indices are given in columns 7 and 9. References for detections are given in the last column.
The ten SNRs with coincident $\gamma$-ray sources are divided based on the certainty of their association: Group A includes four SNRs for which $\gamma$-rays are established as related to the SNR; Group B includes six SNRs with coincident $\gamma$-ray sources which have not been attributed to other astrophysical phenomena (pulsars, blazars, etc.) but for which an association with the SNR is less than certain; Group C lists the fourteen SNRs with masers which do not yet have detected $\gamma$-ray sources. Given that both masers and $\gamma$-rays are detected for only 10$\%$ of SNRs, the large number of coincident detections makes an association between SNR masers and $\gamma$-ray emission as tracers of interaction quite plausible.
\begin{deluxetable*}{rrrrrrrrrrrrr}
\label{tbl:contingency}
\tablecaption{Contingency Table: Presence of SNR Masers versus SNRs with $\gamma$-ray Sources \label{tbl:contingency}}
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablehead{
\colhead{} &
\multicolumn{3}{c}{$\gamma$-ray SNRs} &&
\multicolumn{3}{c}{No $\gamma$-ray Counterpart} \\
\cline{2-4} \cline{6-8} \\
\colhead{} & \colhead{} & \colhead{Expected} & \colhead{} && & \colhead{Expected} & \colhead{} \\
\colhead{OH (1720 MHz)} & & \colhead{from} & \colhead{} && \colhead{} & \colhead{from} & \colhead{} & \colhead{} \\
\colhead{Masers} & \colhead{Number} & \colhead{Counts} & \colhead{$\chi^2$} && \colhead{Number} & \colhead{Counts} & \colhead{$\chi^2$} & \colhead{Total}
}
\startdata
Masers present ....... & 10 & 2.4 & 24.7 && 14 & 21.6 & 2.7 & 24\\
Masers absent ........ & 14 & 19.5 & 1.6 && 185 & 179.0& 0.2 & 199\\
No observations....... & 2 & 4.1 & 1.1 && 41 & 37.9 & 0.1& 43 \\
Total ................ & 26 & & && 240 & & & 266 &
\enddata
\tablecomments{The 43 SNRs which have not been observed for SNR masers are excluded from the Fisher exact test.}
\end{deluxetable*}
\subsection{Contingency Table Analysis}
To explore the correlation between SNR masers and $\gamma$-ray sources, we use a contingency table analysis to test the null hypothesis that there is no association between the two groups. Here we include all remnants which have been searched for SNR masers \citep{frail96,green97,koralesky98,fyz99,hewitt09} and those $\gamma$-ray sources which have not been clearly identified as having a non-SNR origin.
Table \ref{tbl:contingency} is the resulting contingency table. Three rows are given for SNRs with masers present, absent or not surveyed. Two columns divide SNRs with and without coincident $\gamma$-ray detections. Each of the two columns lists the "Number" of SNRs which meet both classifications, the number "Expected from Counts" assuming the maser and $\gamma$-ray properties of remnants are completely independent, and "$\chi^2$", the contribution to the total $\chi^2$, (observed $-$ expected)$^2$/expected. For this test the total $\chi^2$ is 31 with two degrees of freedom. This gives a probability of $<$0.0001$\%$ that SNR masers and $\gamma$-ray counterparts to SNRs are not correlated.
We note that two of the expected cell frequencies are less than five. To be statistically rigorous we also apply the Fisher Exact Probability Test in addition to the $\chi^2$ Test. The Fisher test considers all possible outcomes of the contingency table in order to determine the probability of finding a correlation at least as strong as is observed. Here again we find a clear rejection of independence between $\gamma$-ray and maser detections in SNRs, with a probability of 0.002$\%$. The two classes are clearly associated.
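The $\chi^2$ statistic and its tail probability quoted above follow directly from the counts in Table \ref{tbl:contingency}. A minimal Python sketch (not part of the original analysis) recomputing Pearson's $\chi^2$; for two degrees of freedom the chi-square survival function is exactly $e^{-\chi^2/2}$, so no statistics library is needed:

```python
import math

# Observed counts from Table 2: rows = (masers present, masers absent,
# no observations); columns = (gamma-ray SNRs, no gamma-ray counterpart).
observed = [[10, 14], [14, 185], [2, 41]]

row_tot = [sum(r) for r in observed]
col_tot = [sum(c) for c in zip(*observed)]
n = sum(row_tot)

# Pearson's chi-square statistic against the hypothesis of independence.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_tot[i] * col_tot[j] / n
        chi2 += (obs - expected) ** 2 / expected

# For 2 degrees of freedom the chi-square survival function is exp(-x/2).
p_value = math.exp(-chi2 / 2)
print(f"chi2 = {chi2:.1f}, p = {p_value:.1e}")
# prints: chi2 = 30.7, p = 2.2e-07
```

The recovered statistic rounds to the total $\chi^2$ of 31 quoted in the text, and the tail probability is indeed below 0.0001\%.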
A similar contingency table analysis was used to quantify an association between SNR masers and mixed-morphology remnants, a class of SNRs with shell-type radio morphologies and prominent thermal X-ray emission from their interiors \citep{fyz03mmsnr}. Both classes of remnants are thought to result from interaction with dense gas and are strongly correlated. Of the identified $\gamma$-ray SNRs, 9 are classified as both mixed-morphology and maser-emitting, 2 are mixed-morphology without detected masers (SNRs W49B and MSH 11-61A), and only SNR G5.7-0.0 has detected masers but no X-ray detection. Therefore, either classification produces a nearly identically good correlation with $\gamma$-ray SNRs. Both X-rays and cosmic rays are capable of providing the ionization needed to produce SNR masers. A detailed discussion is given in Section \ref{sec:ionization}.
An additional complication arises in that many SNRs have associated pulsars with their own wind nebulae that are also capable of accelerating particles to TeV energies. To account for this we separate the $\gamma$-ray SNRs into two categories based on whether a PWN is or is not also detected. This reduces the number of sources that fall under each classification, lowering the probability of independence from the Fisher test to 1.2$\%$, still significant enough to reject the null hypothesis. Furthermore, there is evidence that at least four SNRs (see Table \ref{tbl:list}, "Group A") are associated with $\gamma$-ray emission from the SNR-cloud interaction, and not the PWN. Improved spatial resolution is needed to discriminate between $\gamma$-ray emission from the PWN and SNR, but we note that there are no markedly different characteristics between the two groups of SNRs with and without PWN.
\subsection{Properties of $\gamma$-ray SNRs with Masers}
For SNRs interacting with dense gas, $\gamma$-ray counterparts are likely to be dominated by emission from neutral pion decay originating from interactions between hadronic cosmic rays and gas nuclei. In contrast to young SNRs, older interacting SNRs have little if any detected X-ray synchrotron emission. An inverse-Compton scenario requires small magnetic field strengths $<$1 $\mu$G \citep{yamazaki06} whereas Zeeman splitting of SNR masers measures field strengths of order $\sim$1 mG \citep{fyz96,claussen97,brogan00}.
Electron bremsstrahlung emission has also been proposed, but the expectation would be for $\gamma$-ray emission to trace the radio shell morphology \citep{bykov00}. TeV emission from W28 and IC 443 shows no correlation with the radio shell, and an excellent correlation with dense gas \citep{hess_w28,veritas_ic443}. Forthcoming Fermi observations will have sufficient resolution to resolve this question at GeV energies.
The presence of a large reservoir of dense gas around the SNR, the expectation that cosmic rays are accelerated by SNRs, and the relatively older ages of this subset of interacting SNRs supports a pion decay origin for $\gamma$-ray counterparts. The morphology of $\gamma$-ray emission is seen to be well matched to that of the molecular cloud. In Figure \ref{fig:histo}, histograms of $\gamma$-ray luminosity show medians of 1.7$\times$10$^{36}$ ergs s$^{-1}$ and 1.5$\times$10$^{33}$ ergs s$^{-1}$ for GeV and TeV detections, respectively. This is consistent with luminosity estimates if a significant fraction of the SN energy (10-20\%) is diverted to cosmic-rays which are incident on the adjacent molecular cloud \citep{drury94}. The orders of magnitude differences in the luminosity reflect the expected energy spectrum of accelerated particles.
On average a steepening of the photon spectrum from GeV to TeV energies is also observed. The $\gamma$-ray spectral indices, given in columns 7 and 9 of Table \ref{tbl:list}, steepen at higher energies from a mean of $\alpha_{GeV} \sim$ 2.1 to $\alpha_{TeV} \sim$ 2.6. It has been suggested for nearby SNR IC 443 that this spectral steepening results either from the presence of both pion decay and a strong bremsstrahlung component \citep{bykov00}, or the diffusion of cosmic rays at different energies through the dense cloud \citep{torres08}. In addition to IC 443, the SNRs W28, Sgr A East, and G359.1-0.5 show this characteristic spectral steepening at higher energies, but CTB 37A does not. Future detailed spectral modeling of these sources can determine whether a spectral signature can be used to discriminate between different $\gamma$-ray emission mechanisms.
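The quoted mean indices follow directly from Table \ref{tbl:list}. A small Python check (the indices are transcribed by hand from the table; sources without a measured index are omitted):

```python
# Photon indices from Table 1; sources without a measured index omitted.
# GeV: W28, W44, IC 443, Sgr A East, Kes 41, CTB 37A, G359.1-0.5
alpha_gev = [2.1, 1.9, 2.0, 1.7, 2.5, 2.3, 2.2]
# TeV: W28, IC 443, Sgr A East, G5.7-0.0, W30, CTB 37A, G359.1-0.5
alpha_tev = [2.7, 3.1, 2.2, 2.3, 2.7, 2.3, 2.7]

mean_gev = sum(alpha_gev) / len(alpha_gev)
mean_tev = sum(alpha_tev) / len(alpha_tev)
print(f"mean alpha_GeV = {mean_gev:.1f}, mean alpha_TeV = {mean_tev:.1f}")
# prints: mean alpha_GeV = 2.1, mean alpha_TeV = 2.6
```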
\begin{figure}
\centering
\includegraphics[height=1.5in]{fig1a.ps}\\
\includegraphics[height=1.5in]{fig1b.ps}
\caption{Histograms of GeV (top) and TeV (bottom) luminosities for SNRs with masers. Bin sizes of 1 and 0.5 are used. Characteristic luminosities of 1.7$\times$10$^{36}$ ergs s$^{-1}$ and 1.5$\times$10$^{33}$ ergs s$^{-1}$ are found, respectively.
\label{fig:histo}}
\end{figure}
\section{Enhanced Ionizations via Cosmic Rays\label{sec:ionization}}
\begin{deluxetable*}{rrrrrrrrrrr}
\tablecaption{Comparison of Cosmic Ray and X-ray Ionizations
\label{tbl:crionization}}
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablehead{
\colhead{$SNR$} & \colhead{Distance} & \colhead{M$_{cloud}$} &
\colhead{F$_{\gamma}$($>$100 MeV)} & \colhead{$\zeta_{CR}$ } &
\colhead{$\zeta_{Xray}$ } & \colhead{Ref.} \\
& \colhead{(kpc)} & \colhead{(10$^5$ $\hbox{$M_\odot$}$)} & \colhead{(10$^{-8}$ cm$^{-2}$ s$^{-1}$)} & \colhead{(10$^{-16}$ s$^{-1}$)} & \colhead{(10$^{-16}$ s$^{-1}$)}
}
\startdata
W28 &2.0 &0.5 &74.2 & 3.4 & 2.3 & 1\\
W44 &2.5 &3.0 &88.9 & 1.1 & 4.5 & 2\\
W51 C &6.0 &1.9 &40.9 & 4.4 & 8.8 & 3\\
IC 443 &1.5 &0.1 &51.4 & 5.5 & 3.6& 4\\
\enddata
\tablecomments{References:
(1) \citet{hess_w28},
(2) \citet{seta98},
(3) \citet{feinstein09},
(4) \citet{dickman92}. Fluxes are taken from the Fermi Bright Source List \citep{abdo09}.}
\end{deluxetable*}
The strong correlation observed between $\gamma$-ray-bright and maser SNRs may result from more than just both tracing interaction with high gas densities. The large post-shock OH abundances required for masing to occur are not produced in chemical models of slow, dense shocks; instead all gas-phase oxygen is rapidly converted to water before the gas cools to 50 K \citep{kaufman96}.
The large observed OH column densities are thought to be produced by dissociation of the abundant post-shock water \citep{wardle99}. In this model energetic electrons produced by ionizations in the molecular gas will excite H$_2$, which will subsequently generate a weak flux of far-ultraviolet photons which photo-dissociate water molecules producing OH \citep{prasad83}. The high flux of X-rays from the hot interior characteristic of mixed morphology SNRs has been shown to be a viable ionization mechanism \citep{fyz03mmsnr}. Cosmic rays and X-rays are notionally lumped together in this model.
The $\gamma$-ray luminosity can be used to discriminate the relative importance of ionizations by cosmic rays. The $\gamma$-ray spectrum resulting from pion decay of hadronic cosmic ray interactions in SNRs has been described by many authors \citep{drury94,baring99,bykov00}.
Here we follow \citet{aharonian91}, formulating the $\gamma$-ray flux as a function of the cosmic ray enhancement in the cloud adjacent to SNR, neglecting any possible gradients through the cloud as a simplification. The $\gamma$-ray flux above energy E$_{\gamma}$ is given by the equation:
\begin{equation}
{\rm
F( \geq E_\gamma) \simeq 3\times10^{-13} \left(\frac{E_\gamma}{TeV}\right)^{-1.6} \omega_{CR} \left(\frac{M_{5}}{d_{kpc}^2} \right)
ph\ cm^{-2}\ s^{-1}
},
\label{eqn:1}
\end{equation}
where $d_{kpc}$ is the distance to the SNR in kpc, E$_\gamma$ is the threshold above which the $\gamma$-ray flux is totaled, $\omega_{CR}$ is the ratio of cosmic ray density at the SNR to the local solar value assuming the same $\gamma$-ray emissivity, and M$_5$ is the total mass of the SNR shell and adjacent molecular cloud in units of 10$^5$ \hbox{$M_\odot$} . We note that \citet{drury94} established that spectral variations between 2.0 and 2.7 changes this estimate by less than 20\% .
Assuming the observed $\gamma$-ray flux from the SNR is due to hadronic interactions with the surrounding dense gas, it is straightforward to roughly estimate the local energy density of cosmic rays, provided the distance and molecular environment are known. For EGRET and Fermi detections, E$_{\gamma}$ $>$ 100 MeV, equation (\ref{eqn:1}) can be rewritten to express the local cosmic ray enhancement $\omega_{CR}$:
\begin{equation}
\omega_{CR} \simeq\ \frac{F_{\gamma}(> 100 \ MeV)}{7 \times\ 10^{-7} {\rm\ ph \ cm^{-2}\ s^{-1}}} \ \left(\frac{d_{kpc}^2}{M_5}\right) \ .
\end{equation}
The cosmic ray ionization scales as the cosmic ray density, so we obtain the ionization rate due to cosmic rays $\zeta_{CR} \simeq\ \omega_{CR}\ \zeta_{\odot}$, where $\zeta_{\odot}$ is the local cosmic ray ionization rate of $\sim$4$\times$10$^{-17}$ s$^{-1}$ \citep{webber98}.
In Table \ref{tbl:crionization} we list the observed properties of SNRs from which the local enhancement of cosmic rays is estimated. We find cosmic ray densities are typically increased by 30--150 times that observed for quiescent molecular clouds. This provides an ionization rate sufficient to produce OH abundances of $\sim$10$^{-7}$--10$^{-6}$ in the post-shock gas.
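Equation (2) and the $\zeta_{CR}$ entries of Table \ref{tbl:crionization} can be checked numerically. The sketch below uses inputs transcribed from Table \ref{tbl:crionization}; small discrepancies with the tabulated ionization rates presumably reflect rounding of the inputs:

```python
# (F_gamma(>100 MeV) in 1e-8 ph cm^-2 s^-1, distance in kpc,
# cloud mass in 1e5 Msun), transcribed from Table 3.
snrs = {
    "W28":    (74.2, 2.0, 0.5),
    "W44":    (88.9, 2.5, 3.0),
    "W51 C":  (40.9, 6.0, 1.9),
    "IC 443": (51.4, 1.5, 0.1),
}

ZETA_LOCAL = 4e-17  # local cosmic ray ionization rate (s^-1), Webber (1998)

# Check of the Eq. (2) normalization: 3e-13 * (1e-4)**-1.6 ~ 7.5e-7,
# rounded to 7e-7 ph cm^-2 s^-1 in the text.
results = {}
for name, (f8, d_kpc, m5) in snrs.items():
    omega_cr = (f8 * 1e-8 / 7e-7) * d_kpc**2 / m5   # Eq. (2)
    results[name] = omega_cr * ZETA_LOCAL           # zeta_CR = omega_CR * zeta_sun
    print(f"{name:7s} omega_CR = {omega_cr:5.1f}  "
          f"zeta_CR = {results[name]:.1e} s^-1")
```

For W28, for example, this returns $\zeta_{CR} \simeq 3.4\times10^{-16}$ s$^{-1}$, matching the tabulated value.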
For comparison we also list the ionization rate from interior X-ray emission derived by \citet{fyz03mmsnr}. The two sources of ionization are generally found to be comparable. Either could be the dominant ionization mechanism depending on the location of the gas with respect to the interior X-ray emitting plasma and cosmic ray acceleration sites, and may vary on a source-by-source basis.
We caution that our estimates of cosmic ray ionization rate are somewhat crude. Even if the dominant emission mechanism is hadronic, there may still be a significant bremsstrahlung component. Detailed spectral modeling with data from the $Fermi$ $\gamma$-ray Space Telescope will permit a much better estimate of the local cosmic ray density. Observations of direct chemical tracers of cosmic ray ionization rates, such as H$_3^+$ and He$^+$ \citep{oka06,dalgarno06,indriolo09}, could be used to confirm the cosmic ray enhancements in the immediate environment of these interacting SNRs.
\section{Summary \& Conclusions}
We have explored the class of $\gamma$-ray sources which can be explained by enhanced cosmic ray densities at the interaction sites between SNRs and molecular clouds. By correlating SNR masers with known GeV and TeV sources, we have identified an emerging class of $\gamma$-ray-bright interacting SNRs. Of the 24 known maser SNRs currently identified there are ten with GeV to TeV-energy $\gamma$-ray associations, and six with both. If $\gamma$-ray emission from these sources is largely due to hadronic cosmic rays, the enhanced local cosmic ray ionization rates in these clouds can explain the production of OH molecules behind a C-type shock, as suggested by \citet{wardle99}. Furthermore, cosmic ray ionization is typically comparable to or dominant over ionizations from interior thermal X-rays, though without detailed knowledge of the cosmic ray spectrum these results have large uncertainties. Interacting SNRs represent a promising class of $\gamma$-ray sources which are likely to be uncovered by future $\gamma$-ray observatories.
\section{An imaging search campaign for wide companions of exoplanet host stars}
Some exoplanet host stars have been found to be components of binary systems, and first
statistical differences between exoplanets around single stars and exoplanets located in binary
systems were already reported by Zucker \& Mazeh (2002) as well as Eggenberger et al. (2004). In
particular, planets with orbital periods shorter than 40 days appear to exhibit different
mass-period and eccentricity-period distributions.
However, all the derived statistical differences are based on only a small number of known binary
systems among the exoplanet host stars, i.e. their significance is sensitive to any change in the
sample size. Furthermore, the statistical analyses assume that most exoplanet host stars are
single stars, except for those known to be components of binary systems.
In this context it is important to mention that the whole sample of exoplanet host stars has not
yet been systematically surveyed for either wide or close companions, i.e. several more exoplanet
host stars, considered today as single stars, might be members of binary systems. Only search
campaigns for companions of the exoplanet host stars will clarify their multiplicity status and
finally verify the significance of the reported statistical differences.
Therefore, we have started an imaging search program for wide visual companions of exoplanet host
stars, carried out with UFTI/UKIRT, SofI/NTT as well as MAGIC/CA~2.2m. We can find all directly
detectable stellar and substellar companions (m$\ga$40\,$\rm{M_{Jup}}$) with projected separations
from about 50 up to 1000\,AU. Companions are first identified by astrometry (common
proper motion) and their companionship is later confirmed with photometry and spectroscopy. So
far, 6 wide companions have been detected; see Mugrauer et al. (2005a) for further details.
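The common-proper-motion test used in this identification step can be sketched numerically: a distant background star appears nearly fixed between the two imaging epochs, while a true companion shares the host star's large proper motion. The following is a minimal illustration with hypothetical measurement values; the proper motion, epoch difference, and tolerance are invented for the example and are not the survey's actual parameters.

```python
import math

def is_common_proper_motion(dx_mas, dy_mas, dt_yr,
                            pm_ra_mas_yr, pm_dec_mas_yr,
                            tol_mas_yr=20.0):
    """Crude common-proper-motion test between two imaging epochs.

    dx_mas, dy_mas : measured displacement of the candidate (mas)
    dt_yr          : time between epochs (yr)
    pm_*           : known proper motion of the host star (mas/yr)

    A background star stays (nearly) fixed on the sky, so a candidate whose
    apparent motion matches the host's proper motion within the tolerance is
    flagged as a co-moving companion candidate.
    """
    mu_x = dx_mas / dt_yr
    mu_y = dy_mas / dt_yr
    return math.hypot(mu_x - pm_ra_mas_yr, mu_y - pm_dec_mas_yr) <= tol_mas_yr

# Hypothetical host with proper motion (2000, 650) mas/yr, epochs 0.5 yr apart:
print(is_common_proper_motion(1000.0, 325.0, 0.5, 2000.0, 650.0))  # True  (co-moving)
print(is_common_proper_motion(3.0, -2.0, 0.5, 2000.0, 650.0))      # False (background)
```

In practice the tolerance would be set by the astrometric uncertainty of each epoch, and candidates passing this cut then go on to the photometric and spectroscopic confirmation described above.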
\section{Gl\,86\,B, a white dwarf companion of an exoplanet host star}
Queloz et al. (2000) reported a long-term linear trend in the radial velocity of the exoplanet host
star Gl\,86. Furthermore, after combining Hipparcos measurements with ground-based astrometric
catalogues, Jahrei\ss~(2001) showed that this star is a highly significant $\Delta\mu$ binary. Both
results indicate that there should be a companion of stellar mass in orbit around Gl\,86. Els et
al. (2000) indeed detected a faint common proper motion companion, Gl\,86\,B, at a separation of
only $\sim$2\,arcsec and concluded that it is a late-L or early-T brown dwarf.
With NACO/SDI observations, Mugrauer \& Neuh\"auser (2005b) detected the orbital motion of this
companion, the final proof that it is orbiting the exoplanet host star. Furthermore, they
showed with IR spectroscopy that Gl\,86\,B is a white dwarf, i.e. this companion is the cause
of the reported linear trends in the radial and astrometric motion of the exoplanet host star.
Gl\,86\,B is the first white dwarf detected as a close companion of an exoplanet host star.
With their high-contrast NACO/SDI imaging, Mugrauer \& Neuh\"auser (2005b) could already exclude
further stellar companions around Gl\,86 with projected separations between 1 and 23\,AU.
We present here further complementary observations of the Gl\,86 binary system, carried out in our
wide companion search program using SofI/NTT. With these observations, we can clearly rule out
additional wide stellar companions around Gl\,86 with projected separations between 40 and 670\,AU
(see Fig.\,1).
\begin{figure}[tb]
\plotone{mugrauer1.eps}\caption{The left panel shows the first-epoch H-band image of Gl\,86 obtained
with SofI/NTT in Dec.\ 2002. We observed the star again at a second epoch in June 2003. A detection
limit of H=18\,mag (S/N=10) is reached and substellar companions with a minimum mass of
35\,$\rm{M_{Jup}}$ are detectable (see the upper right plot). The proper motion of all detected
objects between the first- and second-epoch imaging is illustrated in the lower right diagram.}
\end{figure}
\section{Depolarizing channel}
The Kraus operators of the depolarizing channel for a single qubit are given by
\begin{equation}
\begin{split}E_0 = \sqrt{1- {3 \over 4} p} \mathds{1}, \ \ E_1 = \sqrt{{p \over 4}}\sigma_x, \\
E_2 = \sqrt{{p \over 4}}\sigma_y, \ \ E_3 = \sqrt{{p \over 4}}\sigma_z,
\end{split}
\end{equation}
and the eigenvalues of the density matrix of a three-particle $W$ state in the depolarizing channel are
$ \lambda_1 = {1 \over 8} (-2 + p)^2 p $;
$ \lambda_2 = -{1 \over 8} (-2 + p) p^2 $;
$ \lambda_3 = \lambda_4 = {1 \over 24} p (8 - 6 p + p^2) $;
$ \lambda_5 = {1 \over 24} p (16 - 24 p + 11 p^2) $;
$ \lambda_6 = {1 \over 24} (24 - 52 p + 42 p^2 - 11 p^3) $ and
$ \lambda_7 = \lambda_8 = {1 \over 24} (4 p - p^3) $, with the associated normalized eigenvectors. We omit the eigenvectors and the lengthy calculations, but using Eq.(\ref{eq:Eq8}) it is straightforward to show that the maximal mean QFI of a $|W_3\rangle$ state in the depolarizing channel, starting from the level of QFI of a pure $|W_3\rangle$ state, decreases smoothly with the depolarization strength and vanishes when the depolarization is maximal (see the green curve in Fig.~1). In the case of the depolarizing channel, only the starting point, and therefore the steepness of the decrease, of the QFI of $W$ states depends on the number of particles; this result is similar to that for GHZ states.
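As a cross-check (ours, not part of the paper's derivation), the listed spectrum can be reproduced numerically by applying the single-qubit depolarizing channel independently to each qubit of $|W_3\rangle$ and diagonalizing the resulting density matrix:

```python
import itertools
import numpy as np

# Pauli matrices and the single-qubit depolarizing Kraus operators E_0..E_3
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarizing_kraus(p):
    return [np.sqrt(1 - 0.75 * p) * I2,
            np.sqrt(p / 4) * sx,
            np.sqrt(p / 4) * sy,
            np.sqrt(p / 4) * sz]

def apply_to_three_qubits(rho, kraus):
    """Apply the same single-qubit channel independently to each of the 3 qubits."""
    out = np.zeros_like(rho)
    for A, B, C in itertools.product(kraus, repeat=3):
        K = np.kron(np.kron(A, B), C)
        out += K @ rho @ K.conj().T
    return out

# |W_3> = (|001> + |010> + |100>)/sqrt(3)
w = np.zeros(8, dtype=complex)
w[[1, 2, 4]] = 1 / np.sqrt(3)
p = 0.3
rho_out = apply_to_three_qubits(np.outer(w, w.conj()), depolarizing_kraus(p))

# The eight eigenvalues quoted in the text (lambda_3,4 and lambda_7,8 are degenerate)
analytic = [(-2 + p)**2 * p / 8,
            -(-2 + p) * p**2 / 8,
            p * (8 - 6*p + p**2) / 24, p * (8 - 6*p + p**2) / 24,
            p * (16 - 24*p + 11*p**2) / 24,
            (24 - 52*p + 42*p**2 - 11*p**3) / 24,
            (4*p - p**3) / 24, (4*p - p**3) / 24]

print(np.allclose(np.sort(np.linalg.eigvalsh(rho_out)), np.sort(analytic)))  # True
```

At $p=1$ all eight eigenvalues reduce to $1/8$ (the maximally mixed state), and at $p=0$ only $\lambda_6=1$ survives, consistent with the expressions above.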
\section{Amplitude damping channel}
The Kraus operators of the amplitude damping channel are given by
\begin{equation}
E_0 = |0\rangle\langle0| + \sqrt{1-p} |1\rangle\langle1|, \ \ E_1 = \sqrt{p} |0\rangle\langle1| \end{equation}
where $p$ is the probability of decay from the upper level $|1\rangle$ to the lower level $|0\rangle$, with damping rate $\gamma$, i.e. $1-p = e^{-\gamma t /2 }$. We find the nonzero eigenvalues of the density matrix of a three-qubit $W$ state as $\lambda_1 = 1-p$ and $\lambda_2=p$, with the associated eigenvectors. Using Eq.(\ref{eq:Eq8}) we construct the $\textbf{C}$ matrix and find its largest eigenvalue as $\lambda_{max} = 3 (1-2p)^2$; the maximal mean quantum Fisher information of a $|W\rangle$ state in the amplitude damping channel with decoherence strength $p$ is therefore
\begin{equation}
\bar{F}_{max}=\begin{cases}
2.\overline{33}, & \text{$p=0$},\\
(1-2p)^2, & \text{$0<p\leq 1$},
\end{cases}
\end{equation}
which shows that, as plotted in Fig.~1 (blue curve), the QFI of $W$ states exhibits a sudden drop to the shot-noise level when subjected to amplitude damping noise; as the strength increases further, the QFI first vanishes (at $p=1/2$) and then increases back to the shot-noise level at full damping.
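The rank-2 spectrum $\{1-p,\,p\}$ has a simple origin: with the Kraus operators above, per-qubit amplitude damping maps $|W_3\rangle\langle W_3|$ onto $(1-p)\,|W_3\rangle\langle W_3| + p\,|000\rangle\langle 000|$, a mixture of two orthogonal projectors. A short numerical sketch of ours confirms this:

```python
import itertools
import numpy as np

def amplitude_damping_kraus(p):
    E0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)  # |0><0| + sqrt(1-p)|1><1|
    E1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)      # sqrt(p)|0><1|
    return [E0, E1]

def apply_to_three_qubits(rho, kraus):
    """Apply the same single-qubit channel independently to each of the 3 qubits."""
    out = np.zeros_like(rho)
    for A, B, C in itertools.product(kraus, repeat=3):
        K = np.kron(np.kron(A, B), C)
        out += K @ rho @ K.conj().T
    return out

# |W_3> = (|001> + |010> + |100>)/sqrt(3)
w = np.zeros(8, dtype=complex)
w[[1, 2, 4]] = 1 / np.sqrt(3)
p = 0.25
rho_out = apply_to_three_qubits(np.outer(w, w.conj()), amplitude_damping_kraus(p))

eigs = np.linalg.eigvalsh(rho_out)
nonzero = np.sort(eigs[eigs > 1e-12])
print(nonzero)  # [0.25 0.75], i.e. {p, 1-p}
```

Each single-excitation term of $|W_3\rangle$ either keeps its excitation (factor $\sqrt{1-p}$) or decays to $|000\rangle$ (factor $\sqrt{p}$), which is why only these two orthogonal components appear.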
\section{Phase damping channel}
The Kraus operators for the phase damping channel are given by
\begin{equation} E_0 = \sqrt{1-p} \mathds{1} , \ \ E_1 = \sqrt{p} |0\rangle\langle0|, \ \ E_2 = \sqrt{p} |1\rangle\langle1|. \end{equation}
In the phase damping channel, the nonzero eigenvalues of the $|W_3\rangle$ density matrix are
$ \lambda_1 = \lambda_2 = {1 \over 3} (2p - p^2)$ and
$ \lambda_3 = {1 \over 3} (3 - 4p + 2 p^2)$, with the associated eigenvectors. Via Eq.(\ref{eq:Eq8}) we find that at any non-zero strength of phase damping the QFI vanishes, i.e.
\begin{equation}
\bar{F}_{max}=\begin{cases}
2.\overline{33}, & \text{$p=0$},\\
0, & \text{$0<p\leq 1$},
\end{cases}
\end{equation}
which implies a sudden death of quantum Fisher information, as plotted in Fig.1.
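Numerically, per-qubit phase damping scales each off-diagonal element of $|W_3\rangle\langle W_3|$ (the coherences connect basis states differing in two qubits) by $(1-p)^2$, and diagonalizing the damped matrix reproduces the eigenvalues above. A sketch of this check, assuming independent phase damping on each qubit:

```python
import itertools
import numpy as np

def phase_damping_kraus(p):
    return [np.sqrt(1 - p) * np.eye(2, dtype=complex),
            np.sqrt(p) * np.array([[1, 0], [0, 0]], dtype=complex),  # sqrt(p)|0><0|
            np.sqrt(p) * np.array([[0, 0], [0, 1]], dtype=complex)]  # sqrt(p)|1><1|

def apply_to_three_qubits(rho, kraus):
    """Apply the same single-qubit channel independently to each of the 3 qubits."""
    out = np.zeros_like(rho)
    for A, B, C in itertools.product(kraus, repeat=3):
        K = np.kron(np.kron(A, B), C)
        out += K @ rho @ K.conj().T
    return out

# |W_3> = (|001> + |010> + |100>)/sqrt(3)
w = np.zeros(8, dtype=complex)
w[[1, 2, 4]] = 1 / np.sqrt(3)
p = 0.4
rho_out = apply_to_three_qubits(np.outer(w, w.conj()), phase_damping_kraus(p))

numeric = np.sort(np.linalg.eigvalsh(rho_out))[-3:]  # the three nonzero eigenvalues
analytic = np.sort([(2*p - p**2) / 3, (2*p - p**2) / 3,
                    (3 - 4*p + 2*p**2) / 3])
print(np.allclose(numeric, analytic))  # True
```

Equivalently, the damped state is a $3\times3$ matrix with diagonal $1/3$ and off-diagonal $(1-p)^2/3$, whose eigenvalues are $\frac{1}{3}\left(1+2(1-p)^2\right)$ once and $\frac{1}{3}\left(1-(1-p)^2\right)$ twice, matching $\lambda_3$ and $\lambda_1=\lambda_2$.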
In conclusion, we have studied the quantum Fisher information (QFI) of $W$ states with respect to SU(2) rotations under three decoherence channels and reported the interesting behavior of the QFI of $W$ states when subjected to: i) depolarization: as decoherence sets in and increases, the QFI starts at the level of the pure $W$ state, decreases smoothly, and finally vanishes at full depolarization; ii) amplitude damping: as decoherence sets in, the QFI drops directly to the shot-noise limit, and with increasing decoherence it first vanishes and then increases again, reaching the shot-noise level at full decoherence; iii) phase damping: at any rate of decoherence the QFI vanishes, implying a sudden death of QFI. We also found that, in contrast to GHZ states, $W$ states do not provide phase sensitivity in the $z$ direction, and the phase sensitivities in the $x$ and $y$ directions are equal to each other. Therefore the QFI of $W$ states does not exhibit sudden-change points due to competition between directions. Besides decoherence effects, quantum Fisher information has also been studied in the presence of photon losses \cite{Jin2013PRA,Tsang2013NJP}. On the other hand, an intense effort has been devoted to the preparation of large-scale photonic $W$ states in the ideal case where no practical imperfections are taken into account \cite{Ozdemir2011NJP,Bugu2013A,Yesilyurt2013A,Ozaydin2014arXiv}. We therefore believe that our work may be useful for the efforts to prepare large-scale $W$ states, as well as for studies of quantum critical phenomena and percolation in quantum networks \cite{Acin2007,Kieling2007}, when the unavoidable natural decoherence effects are taken into account.
\section{Introduction}
\label{Intro}
The Sun often produces major eruptive phenomena that release vast amounts of energy, on the order of $10^{32}$ erg, in a few seconds to minutes. It is now accepted that the source of energy for all of this solar activity is the magnetic field. Active regions (ARs) are concentrations of strong magnetic field, often seen with violent activity like jets, flares, coronal mass ejections (CMEs), etc. Understanding the occurrence of these events requires knowledge of their triggering and driving mechanisms. Even after a large number of studies, neither the onset mechanism nor the supply mechanism of magnetic free energy is yet clarified. Theoretically speaking, there are two effects that can supply magnetic free energy and magnetic helicity from below the solar surface to the corona. One is the so-called flux emergence activity, in which vertical motion carries magnetic flux through the photosphere. If the emerging flux has magnetic helicity, it acts as a helicity injection and may efficiently supply free energy into the solar corona. The other mechanism is photospheric shear motion, which supplies magnetic free energy as well as magnetic helicity into the solar corona by generating magnetic shear in the coronal field. Thus, studies of the triggering of CMEs and flares have mostly concentrated on the problem of the evolution of a magnetic field in the very tenuous, highly conducting plasma of the solar corona \citep{Forbes1995,linj2000,linj2003,forbes2006}.
Previous studies demonstrated various types of mechanisms that contribute dominantly to the accumulation of free magnetic energy in the solar atmosphere. These are (1) magnetic flux emergence or cancellation \citep{zhanghongqi1995,chenshibata2000,zhangh2001,Sterling2010}, (2) shearing motion \citep{ambastha1993,zhanghongqi1995,demoulin2002,vemareddy2012c,vemareddy2017b}, and (3) sunspot rotation \citep{brown2003,tian2006,yanxl2008,vemareddy2012b,torok2013,vemareddy2016b}. AR 9236 was reported to produce recurrent CMEs at an average period of 10 hr \citep{gopalswamy2005}, and the associated flares were not long-decay events (LDEs). A report on the same AR by \citet{nitta2001} suggested the emerging magnetic flux as being responsible for the repeated CMEs. Some studies also showed successive CMEs (e.g., in ARs 8038 and 12371) on a timescale comparable to energy buildup by footpoint motions. These were decaying ARs where prolonged flux cancellation by converging motions, and the subsequent increase of the magnetic gradient about the polarity inversion line (PIL), introduce energy buildup in the AR magnetic system, which erupts as CMEs/flares \citep{shibu2000,liy2004,liy2010,vemareddy2017b,Vemareddy2018}. In AR 12158, two successive CME eruptions were triggered by the helical-kink instability under the driving conditions of predominant sunspot rotation on a timescale of days \citep{vemareddy2016b}. Some ARs present a combination of mechanisms in action. For example, recent reports on AR 11158 suggest that the shear and rotational motions of the observed fluxes played a significant role in transient activity with flares and CMEs \citep{sunx2012,vemareddy2012b}. Based on these mechanisms, the eruptive scenario of ARs under particular evolving conditions of boundary motion has been numerically modelled \citep{antiochos1999,amari2003a,amari2010,archontis2014}. 
All of these models are based on the physical concept that the footpoint motions predominantly contribute to the coronal helicity budget, forming a twisted flux rope (FR) during or before its ejection as a CME.
The magnetic helicity describes the complexity of the magnetic field, including the twist, writhe, knotting, and linkages of field lines. When the coronal magnetic field is being pumped with helicity and energy, the magnetic complexity and non-potentiality increase. Conventionally, the magnetic complexity and non-potentiality are described in terms of parameters such as the magnetic shear \citep{Hagyard1986,WangT1994}, the horizontal gradient of the longitudinal magnetic field \citep{Falconer2003,SongH2006}, the electric current \citep{leka1996,WangT1994}, the twist parameter $\alpha$ \citep{pevtsov1994,hagino2004,Tiwari2009}, the magnetic free energy \citep{Metcalf2005a}, the current helicity \citep{Abramenko1996,ZhangBao1999}, etc. On the other hand, EUV observations of the corona show sigmoidal structures, a form of twisted flux rope associated with large-scale electric currents. From several such studies of simultaneous magnetic field and coronal observations, it is now believed that the magnetic flux rope is built up by line-tied photospheric motions, such as magnetic flux emergence or horizontal flows, which inject magnetic helicity into the higher solar atmosphere, increasing the twist and kink of a flux rope (self-helicity) and the linkage between different flux ropes (mutual helicity). The magnetic helicity is conserved in an ideal MHD process and changes very slowly in a resistive process \citep{Taylor1974}. Thus, a flux rope with continuous injection of magnetic helicity inevitably erupts to remove the accumulated helicity, a manifestation of which is the CME. In this way, the magnetic helicity in the 3D volume is just the result of the helicity flux flowing into and out of the surface. \citet{Guoy2013}, for the first time, found a quantitative relationship between the helicity injection and the twist number of the MFR. 
Thus a better understanding of solar eruptions requires unravelling the magnetic configuration of the flux rope on one hand, and the mechanism for energy storage on the other. This kind of study helps in revealing the connection between the nature of the boundary evolution and flux rope formation.
From the magnetohydrodynamic (MHD) point of view, the flux rope is in equilibrium under the balance of the magnetic pressure in the flux rope and the magnetic tension of the overlying magnetic field. If the twist number increases to some critical value, the kink instability occurs \citep{hood1979,torok2004}. On the other hand, if the decay index of the background field in which the MFR is embedded is larger than some critical value, the torus instability can occur \citep{kliem2006,aulanier2010,demoulin2010}. \citet{demoulin2010} pointed out that the loss of equilibrium and the torus instability are two different views of the same physical mechanism, which is the Lorentz repulsion force of electric currents with different paths. In addition to their role in flux rope formation, flux cancellation and tether-cutting reconnection have also been invoked to account for triggering the loss of equilibrium of the flux rope \citep{ballegooijen1989,moore2001,amari2003a,amari2003b}.
Because of the lack of direct routine observations of the coronal magnetic field, extrapolation of the observed photospheric field under the force-free approximation is typically used for the study of the AR magnetic structure \citep{Wiegelmann2012}. The force-free model of the coronal field is justified by the low-$\beta$ plasma and the high (several hundred km/s) Alfv\'en speed compared to the photospheric flow speed. With this model, the coronal field evolution corresponding to the slow photospheric plasma motion is approximated as a quasi-static evolution of force-free equilibria. This enables one to study the buildup of the pre-eruptive 3D structure, like a sigmoid or flux rope, and then to find hints of the most appropriate configurations leading to eruptions (e.g., \citealt{savcheva2012a,xudong2012a,Guoy2013,jiangc2014,vemareddy2014a,vemareddy2016b,Vemareddy2018}).
The structure of the magnetic field is usually characterized by the topological analysis of quasi-separatrix layers (QSLs), which denote the places where the magnetic field line connectivity changes dramatically \citep{priest1995,demoulin1996,titov2002}. Two important QSL shapes are found to have a relationship with the pre-eruptive configuration. The bald patch separatrix surface (BPSS) is a separatrix surface whose field lines touch the photosphere along a PIL section where the transverse magnetic field crosses from negative to positive polarity, as opposed to the potential field case. The BPSS typically forms in cancelling and converging flux regions in the process of flux rope formation and has a sigmoidal shape viewed from above. The hyperbolic flux tube (HFT) is another QSL structure, with an X-line configuration, that forms underneath the rising flux rope of BPSS topology. Alternatively, the HFT structure also forms above the flux rope or sheared arcade, facilitating breakout reconnection \citep{antiochos1999} in a quadrupolar magnetic configuration. In this case, the overlying flux contains a coronal null point, possibly formed by new flux emerging into an inverse pre-existing field \citep{WuST2005,Torok2009}.
\begin{figure*}[!htp]
\centering
\includegraphics[width=.98\textwidth,clip=]{Fig1}
\caption{ \label{Fig1}{\bf first row:} HMI vector magnetic field observations of AR 12673 on three different days. The background map is the radial magnetic field with contours of $\pm$110\,G. Green/red arrows denote the horizontal field vector, with length proportional to magnitude. Blue curves are traces of SPILs with shear angle greater than 45$^\circ$. Axis units are pixels of 0.5\,arcsec. {\bf second and third rows:} Observations of the AR in the AIA 304\,\AA~and 131\,\AA~bands. }
\end{figure*}
In this manuscript, we study the most violent AR of solar cycle 24, AR 12673, which produced the strongest flares of the cycle. Previous studies of this AR focused on qualitative energy supply mechanisms and the triggering of major eruptions \citep{YangS2017,VermaM2018,YanXL2018,HouYJ2018,LiuLijuan2018}. While the supply mechanism of energy and helicity is of prime importance, it is also crucial to understand the rate at which they are stored in the corona, in order to seek any relation to severe activity. This will be examined through a comparison with other ARs of various degrees of activity in the form of strong flares/CMEs. Further, in order to establish the connection of the helicity flux injection with the coronal field configuration, such as a flux rope topology, we also study the coronal field evolution by force-free extrapolation of the observed photospheric magnetic field. To this end, we estimate key parameters such as the QSLs, twist number, relative magnetic helicity, and total energy at different epochs of the evolution. In Section~\ref{obs}, observations and an overview of the AR are presented. Details of the results are given in Section~\ref{res}. A summary of the results with a discussion is given in Section~\ref{summ}.
\begin{figure*}[!htp]
\centering
\includegraphics[width=.98\textwidth,clip=]{Fig_vel}
\caption{Velocity field of flux motions derived with the DAVE4VM method. The background is the HMI radial field map and the horizontal velocity is shown with red (green) arrows (normalized to 0.4\,km\,s$^{-1}$) in the positive (negative) polarity. The velocity patterns in N1 and P1 imply a predominant shear and converging motion of these polarity patches over the entire evolution time. Axis units are pixels of 0\arcsec.5.}
\label{Fig_vel}
\end{figure*}
\section{Observations and Overview}
\label{obs}
In this study, we use vector magnetic field observations at 12-minute cadence and 0\arcsec.5 pixel size obtained by the Helioseismic and Magnetic Imager (HMI; \citealt{schou2012}) aboard the Solar Dynamics Observatory (SDO; \citealt{pesnell2012}). Details of retrieving the vector field from the Stokes vectors of the filtergrams, and other related information about HMI data products, can be found in \citet{hoeksema2014,bobra2014}. These disambiguated vector observations of the AR patch in the native coordinate system (latitude, longitude) are remapped to disk center by the cylindrical equal area (CEA) projection method, such that the AR patch center matches the disk center, and are provided as the \texttt{hmi.sharp\_cea\_720s} data product. The data product contains essentially $(B_z, -B_y, B_x)$ in the local heliographic coordinate system (see the appendix of \citealt{sunx2013a}). Supporting coronal imaging observations are obtained from the Atmospheric Imaging Assembly \citep{lemen2012} at a cadence of 12\,s and 0\arcsec.6 pixel size.
In Figure~\ref{Fig1} (first row), the HMI vector magnetic field observations of AR 12673 are displayed. The corresponding coronal observations in AIA 304\,\AA~and 131\,\AA~are also shown in the second- and third-row panels. AR 12673 started emerging on September 2 at disk position E11S08, into a pre-existing positive sunspot polarity (SP). As described by \citet[][Figure 1]{YangS2017}, the emergence on September 3 proceeded through two bipolar regions near the pre-existing sunspot of positive polarity. During this time, the coronal images in the AIA 304\,\AA~and 131\,\AA~passbands indicate a simple magnetic structure, as seen in Figure~\ref{Fig1}. Following this, on September 4, another two dipoles emerged within the existing patches. Notably, the first pair of dipolar regions separated along the east-west direction, whereas the later pair separated along the north-south direction \citep{HouYJ2018}. Through the proper and shear motions of these emerging patches, they interacted with the pre-existing spot and formed a long, sheared main PIL. For convenience, we label the prominent polarity patches P1, P2, N1, N2, and N3 in Figure~\ref{Fig1}. These polarities are overall separated by a PIL of semi-circular shape. By September 4, the coronal images show an inverse-S sigmoidal structure (bent in the middle) with two hook shapes joining at the middle of the main PIL between N1 and P1.
\section{Results}
\label{res}
\subsection{Magnetic evolution}
\subsubsection{Shearing and converging motion}
From the time series (every 12 minutes) of vector magnetic field data, we derived the vector velocity field by using DAVE4VM \citep{schuck2008}. In Figure~\ref{Fig_vel}, we plot the horizontal velocity (arrows) on the $B_z$ map. Contours of 100\,G are also overlaid to identify the polarity patch boundaries. Different features move with different velocities at different epochs, with speeds up to a maximum of 0.8\,km\,s$^{-1}$. To obtain the large-scale flow pattern, we averaged the velocity maps over 2 hours (10 frames), which reduces the peak flow speed to 0.4\,km\,s$^{-1}$. The velocity field in N1 is coherent, with a net organized flow pattern directed northward, whereas that in P1 is directed southeast. Anti-clockwise whirlpool motion patterns are also seen, consistent with the sunspot rotation of N3, N1, and P2, which was reported to play a major role in triggering the two major eruptions on September 6, 2017 \citep{YanXL2018}. However, these sunspot rotations are smaller in spatial and time scales compared to the large fraction of flux involved in shear motion about the main PIL over the entire evolution period. Therefore, these motions indicate a predominant shearing and converging motion of N1 and P1 over the entire evolution time, which is suggested to play the prime role in energy and helicity storage in the magnetic system. Consequently, the sigmoidal structure builds up, as seen in the corona a day after the AR emergence, and persists for days through the continuous helicity and energy input by these motions, as in earlier studied cases, for example \citet{vemareddy2017b}. These observations are consistent with \citet{VermaM2018}, who linked the shear flows and head-on collision of new and pre-existing flux with the origin of the two X-class flares.
\subsubsection{Net magnetic flux}
In Figure~\ref{Fig_met}, we plot the evolution of magnetic parameters related to the buildup of the non-potential nature of the AR. Rapid emergence commenced early on September 3, growing into sunspot groups and forming the full AR within a day. In Figure~\ref{Fig_met}(a), the net flux in the positive ($B_z>0$) and negative ($B_z<0$) polarities is plotted with time. The disk-integrated GOES X-ray flux is also shown in the same panel, with the y-axis scale on the right side. From September 4 to September 10, the AR produced a total of 4 X-class flares, 27 M-class flares, and a multitude of smaller flares \citep{YangS2017}. From this GOES X-ray light curve, we can divide the flaring activity into two phases, as indicated by the orange shades. The first phase starts early on September 4 and ends on September 5 at around 18:00\,UT. This phase correlates with the flux emergence and includes M-class flares. The second phase starts at 11:00\,UT on September 6 and continues till September 10. This phase is the most energetic, with the largest X-class flares of solar cycle 24. The net flux increases from 2 to 10$\times10^{21}$\,Mx in each polarity on September 3, due to the rapid emergence of flux. The later evolution follows gradual flux emergence and its areal spreading, increasing the net flux to $34\times10^{21}$\,Mx by the end of September 8. As found in the preliminary study of \citet{SunX2017}, the AR had the fastest flux emergence rate among observed values, at an average of $4.93\times10^{20}$\,Mx\,hr$^{-1}$ over a 5-day period.
\subsubsection{Net electric current and SPILs}
While growing, the interaction of opposite-polarity regions creates compact regions, forming sheared polarity inversion lines (SPILs) at their interface. The opposite motion of polarity regions parallel to the PIL is referred to as shear motion and generates stress in the magnetic field connecting those regions. In that case, the adjacent field vectors are parallel to the PIL, and the extent of the shear is measured by $\theta = \cos^{-1} \left( \mathbf{B}_o \cdot \mathbf{B}_p / |\mathbf{B}_o| |\mathbf{B}_p| \right)$ \citep{ambastha1993}, where $\mathbf{B}_o$ is the observed field and $\mathbf{B}_p$ is the potential field. Thus, the sheared PIL (SPIL) is a measure of the stressed magnetic configuration in the AR. In Figure~\ref{Fig1}, we traced the SPILs by an automated procedure similar to the one applied for tracing PILs of strong vertical field gradients \citep{mason2010} and applied in a statistical study \citep{Vasantharaju2018}. In this procedure, we smooth the $B_z$ map with a smoothing factor of 8 pixels (4 arcsec) and identify the zero-Gauss contour with shear angle greater than 45$^\circ$. The value of the SPIL length depends slightly on the degree of smoothing. The error in the SPIL length is estimated by varying the smoothing factor and could be up to 3--5\,Mm. The maps are overplotted with these traced SPIL segments. With the emergence of the AR, the interaction of opposite polarities increased the SPIL interface between P1 and N1.
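The per-pixel shear angle and the SPIL selection described above can be sketched as follows. This is our illustrative reimplementation (restricted to the horizontal field components), not the authors' code; the PIL band width `pil_band` is an assumed parameter, while the smoothing width and 45$^\circ$ threshold mirror the values quoted in the text.

```python
import numpy as np

def box_smooth(a, size):
    """Simple box-car smoothing via 1-D 'same' convolutions along each axis."""
    k = np.ones(size) / size
    a = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, a)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, a)

def shear_angle_deg(bxo, byo, bxp, byp):
    """theta = arccos(B_o . B_p / (|B_o||B_p|)), here for the horizontal components."""
    dot = bxo * bxp + byo * byp
    norm = np.hypot(bxo, byo) * np.hypot(bxp, byp)
    return np.degrees(np.arccos(np.clip(dot / np.maximum(norm, 1e-12), -1.0, 1.0)))

def spil_mask(bz, shear_deg, smooth_px=8, shear_thresh=45.0, pil_band=50.0):
    """Pixels near the PIL of the smoothed B_z map with shear above threshold.

    The PIL neighbourhood is approximated by |<B_z>| < pil_band (G); this band
    width is an illustrative choice, not a value from the paper.
    """
    bz_s = box_smooth(bz.astype(float), smooth_px)
    return (np.abs(bz_s) < pil_band) & (shear_deg > shear_thresh)

# Sanity checks: parallel fields are unsheared; orthogonal fields are 90 deg sheared.
print(np.isclose(shear_angle_deg(np.array([1.0]), np.array([0.0]),
                                 np.array([1.0]), np.array([0.0]))[0], 0.0))   # True
print(np.isclose(shear_angle_deg(np.array([1.0]), np.array([0.0]),
                                 np.array([0.0]), np.array([1.0]))[0], 90.0))  # True
```

In an actual pipeline the total SPIL length would then be the summed length of the contour segments inside this mask, which is what is plotted against $|DC/RC|$ in Figure~\ref{Fig_met}(c).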
The vertical component of the electric current, $J_{z}=\frac{1}{\mu_{0}}\left( \nabla \times \mathbf{B} \right)_{z}$, is another non-potential measure, accounting for horizontal field gradients, and is readily computed from the vector field observations. In Figure~\ref{Fig_met}(b), the net current ($I=\sum_{N}\left( J_{z} \right)_{i}\, dA$, where $dA$ is the area of a pixel) obtained in each polarity is plotted with time. A threshold of $|\mathbf{B}|>150$\,G is used for the reliability of the values against sensitivity, noise, and inversion errors. The net current $I$ increases with the emergence of the AR on September 3, reaching $5\times10^{12}$\,A in magnitude in each polarity by the end of that day. The later evolution follows its further increase, reaching a maximum value of $13\times10^{12}$\,A around 12:00\,UT on September 6, when a major eruption with the X9.2 flare occurred. Note that $I$ is negative (positive) in the positive (negative) polarity, indicating a dominant negative chirality of the AR magnetic structure according to the definition of current helicity.
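The $J_z$ map and the $|DC/RC|$ diagnostic discussed below follow directly from the vector magnetograms by finite differences. The sketch below is a minimal illustration of ours, verified against an analytic test field, and is not the pipeline used for the figures; units and function names are our choices.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T m / A]

def vertical_current_density(bx, by, dx):
    """J_z = (dBy/dx - dBx/dy) / mu0 via centred differences.

    bx, by : horizontal field maps [T], indexed [y, x]; dx : pixel size [m].
    Returns J_z in A/m^2.
    """
    dby_dx = np.gradient(by, dx, axis=1)
    dbx_dy = np.gradient(bx, dx, axis=0)
    return (dby_dx - dbx_dy) / MU0

def dc_rc_ratio(jz, mask, dx):
    """|DC/RC| in one polarity: dominant-sign current over the return current."""
    i_pos = jz[mask & (jz > 0)].sum() * dx * dx
    i_neg = jz[mask & (jz < 0)].sum() * dx * dx
    dc, rc = (i_pos, i_neg) if abs(i_pos) >= abs(i_neg) else (i_neg, i_pos)
    return np.inf if rc == 0 else abs(dc / rc)

# Analytic check: bx = -c*y, by = c*x carries a uniform J_z = 2c/mu0.
c = 1e-7  # T/m
y, x = np.mgrid[0:64, 0:64] * 100.0  # coordinates in metres, 100 m pixels
jz = vertical_current_density(-c * y, c * x, 100.0)
print(np.allclose(jz, 2 * c / MU0))  # True
```

Summing `jz` over the pixels of one polarity (with the field-strength threshold applied to the mask) gives the net current $I$, and `dc_rc_ratio` gives the neutralization measure used in Figure~\ref{Fig_met}(c).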
\begin{figure*}[!htp]
\centering
\includegraphics[width=.87\textwidth,clip=]{Fig_metrics}
\caption{Magnetic evolution in AR 12673: a) net flux in the positive (north) and negative (south) magnetic polarities; the disk-integrated GOES X-ray (1.0--8.0\,\AA~passband) flux is also shown with the y-axis scale on the right; b) systematic evolution of the net vertical current from the north and south polarities; c) neutralization of the net current in the individual polarities; the horizontal dashed line marks the neutralization level of unity; the total length of all SPIL segments is also plotted with the y-axis scale on the right, and follows the $|DC/RC|$ profile, indicating the relevance of SPILs to the degree of neutrality; d) time rate of helicity flux normalized by the averaged net flux of the positive and negative polarities; the normalized accumulated helicity (time-integrated helicity flux rate) is also plotted with the y-scale on the right, reaching 0.09 turns and indicating a moderately twisted flux system; e) energy flux injection and its accumulated quantity. The rapid flux emergence phase is marked with a grey shade and the two major flaring phases are indicated with orange shades. }
\label{Fig_met}
\end{figure*}
In a magnetic polarity, the net current is theorized to be neutralized by cancelling volume and sheath currents of the flux tubes \citep{parker1996}, which is found to be nearly valid in isolated-sunspot ARs \citep{venkat2009}. However, when interacting opposite polarities with SPILs exist, the net current breaks neutralization \citep{georgoulis2012}. According to flux rope models of CME eruptions \citep{zakharov1986}, the breakdown of net current neutralization corresponds to a form of Lorentz-force development and stability loss. For this reason, the breakdown of neutrality has been proposed as a proxy for assessing the ability of ARs to produce major eruptions \citep{YangLiu2017}. Following this, the breakdown of net current neutralization was found to be correlated with the presence of SPILs and the observed activity in many ARs \citep{Vemareddy2019}.
In Figure~\ref{Fig_met}(c), the ratio of direct current (DC) to return current (RC), as the dominant and non-dominant currents in each polarity, is plotted with time. HMI provides inversion errors of the field vectors, which vary up to 50\,G. Considering an average error of $\delta B_x=\delta B_y=40$\,G in a typical distribution of $n=10^4$ pixels, we find that the uncertainty ($\sqrt{n}\,\delta J_z\, dx\, dx$) of the signed net current in a given polarity can never be larger than $0.1\times10^{12}$\,A \citep{vemareddy2017d,Vemareddy2019}. Here $dx$ is the HMI pixel size of $0\arcsec.5$. Therefore, our estimate of $|DC/RC|$ can have a maximum error of 0.14, which is very small compared to the range of the $|DC/RC|$ evolution in CME- and flare-producing ARs.
\begin{figure*}[!htp]
\centering
\includegraphics[width=.99\textwidth,clip=]{Fig_ext_j2d}
\caption{ Magnetic structure modelled by the NLFFF. {\bf first column:} magnetic field lines rendered on the $B_z$ map at different epochs of the AR evolution. Field lines are colored by the $|{\bf J}|$ distribution. The arrow indicates the flux rope structure in the 04T18:00\,UT panel. {\bf second column:} the same field lines on AIA 94\,\AA~channel images obtained at the respective times. The global magnetic structure in the AR convincingly mimics the morphology of the plasma emission. {\bf third column:} vertically integrated electric current distribution ($\sum_z |\mathbf J|$), approximately depicting the plasma emission in the AIA 94\,\AA~passband shown in the second-column panels. The maps are scaled within $0-500$\,Am$^{-2}$. The field of view is the region enclosed by the rectangular box shown in the 04T18:00\,UT panel of the first column. {\bf fourth column:} AIA 304\,\AA~observations showing the sigmoidal morphology. These panels indicate that the plasma emission is related to the dense current distribution produced by the twisted magnetic structure in the corona.}
\label{Fig_ext_j2d}
\end{figure*}
A value of $|DC/RC|$ equal to one indicates perfect neutralization, and larger values quantify the degree of non-neutrality \citep{Torok2014}. We notice that neutrality breaks down with the emergence of the flux from September 3 onward, with $|DC/RC|$ reaching 1.4 by the end of the day. After the fast emergence phase, $|DC/RC|$ varies around 1.5 during the later phase of evolution, until September 8. Importantly, $|DC/RC|$ follows the total length of all SPIL segments, plotted in the same panel with the y-axis scale on the right. This further substantiates the connection between SPILs amid the compact regions in ARs and the breakdown of net current neutrality \citep{georgoulis2012}. The breakdown mainly occurs in the rapid emergence phase, with neutrality being restored in the later phase, during which the flux regions separate and become isolated without SPILs. Unlike the cases of emerging ARs in \citet{Vemareddy2019}, the persistently high degree of non-neutrality of the net electric current throughout the evolution is an indicator of the eruptive capability of this AR.
\subsubsection{Pumping of magnetic helicity and energy}
While the AR flux emerges and spreads through plasma motions, the rate of build-up of complexity is measured by the helicity flux injection through the photospheric surface of the AR, given by \citep{Berger1984}
\begin{equation}
{{\left. \frac{dH}{dt} \right|}_{S}}=2\int\limits_{S}{\left( {{\mathbf{A}}_{P}}\bullet {{\mathbf{B}}_{t}} \right){{\text{V}}_{\bot n}}dS}-2\int\limits_{S}{\left( {{\mathbf{A}}_{P}}\bullet {{\mathbf{V}}_{\bot t}} \right){{\text{B}}_{n}}dS}
\end{equation}
where $\mathbf{A}_p$ is the vector potential of the potential field $\mathbf{B}_p$, $\mathbf{B}_t$ and $B_n$ denote the tangential and normal magnetic fields, and $\mathbf{V}_{\perp t}$ and $\mathbf{V}_{\perp n}$ are the tangential and normal components of $\mathbf{V}_\perp$, the velocity perpendicular to the magnetic field lines. From the time series of vector magnetic field data (12-minute cadence), we derived the vector velocity field using DAVE4VM \citep{schuck2008} and computed $dH/dt$ (see also \citealt{LiuYang2012,vemareddy2015a}). Although the errors cannot be computed directly, a Monte Carlo experiment is used to represent the probable error in the helicity flux computation. Here we randomly added noise of magnitude 100 Gauss to the three components of the vector magnetic field and repeated the vector velocity and helicity flux computations 200 times \citep{LiuYang2012}. The error, taken as the 1$\sigma$ spread of the 200 experiments, was found to be 23\%. To represent the average complexity per flux tube (twist rate), we normalized $dH/dt$ by the square of half the unsigned flux, $\Phi=\frac{1}{2}\int|B_z(x, y, z=0)|\, dxdy$, and plotted it in Figure~\ref{Fig_met}(d). The fast emergence phase on September 3 is accompanied by a high twist injection of up to $-0.5\times10^{-6}$ turns/s. Note that the negative sign denotes the negative chirality of the AR, indicating that the emerging flux carries pre-existing twist from the convection zone. This phase is followed by a decrease of the twist injection rate to $-0.7\times10^{-6}$ turns/s, when horizontal motions dominate over the gradual flux emergence over days. The time-integrated $dH/dt/\Phi^2$ is the coronal accumulation of the helicity flux, $H(t)/\Phi^2$, plotted in the same panel with the y-axis scale on the right. $H(t)/\Phi^2$ reaches 0.09 turns by September 5, indicating a moderately twisted flux system.
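In discrete form, the surface integral above and the Monte Carlo error estimate can be sketched as follows. Array names are illustrative, and, as a simplification, only the magnetic field (not the DAVE4VM velocity inversion) is perturbed in each trial, whereas the actual analysis repeats the velocity computation as well:

```python
import numpy as np

def helicity_flux(apx, apy, bx, by, bz, vx, vy, vz, dA):
    """Discrete form of dH/dt|_S (Berger 1984): emergence term minus
    shear term.  (apx, apy): potential-field vector potential at z=0;
    (vx, vy, vz): components of V_perp from, e.g., DAVE4VM."""
    emergence = 2.0 * np.sum((apx * bx + apy * by) * vz) * dA
    shear = 2.0 * np.sum((apx * vx + apy * vy) * bz) * dA
    return emergence - shear

def mc_error(apx, apy, bx, by, bz, vx, vy, vz, dA,
             noise=100.0, ntrial=200, seed=0):
    """1-sigma spread of dH/dt when Gaussian noise (here 100 G) is added
    to the field components, in the spirit of Liu & Yang (2012).
    Simplification: the velocity field is held fixed between trials."""
    rng = np.random.default_rng(seed)
    trials = [helicity_flux(apx, apy,
                            bx + rng.normal(0, noise, bx.shape),
                            by + rng.normal(0, noise, by.shape),
                            bz + rng.normal(0, noise, bz.shape),
                            vx, vy, vz, dA)
              for _ in range(ntrial)]
    return np.std(trials)
```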
We point out that helicity flux calculations in emerging ARs are particularly useful for studies of CME occurrence, because the values represent the entire AR flux without missing the pre-existing/emerged structure. Further, \citet{YanXL2018} interpret that the counter-clockwise rotating sunspots (here N1, P2, N3) inject negative helicity, and relate this to the successive X-class flares on September 6, 2017. Our results suggest that the AR is already in a critically non-potential state owing to the predominant shear motions over the entire evolution period, which, in addition to the sunspot rotation, could trigger the eruptions on September 6.
The recurrent CME-producing AR 12371 is found to accumulate 0.15 turns, whereas the CME-poor, flare-rich AR 12192 accumulates 0.02 turns \citep{vemareddy2017b} over a similar time of evolution. In the former case a flux rope (sigmoid) structure is observed, whereas no flux rope is seen in the latter. As suggested by \citet{vemareddy2017b}, a higher value of $H(t)/\Phi^2$ denotes a more twisted flux system, such as a flux rope, with less confining flux. From these cases, we suggest that the flux-normalized helicity flux is an important parameter for distinguishing strongly erupting ARs.
\begin{figure*}[!htp]
\centering
\includegraphics[width=.95\textwidth,clip=]{Fig_dvr_FR}
\caption{ NLFFF core field structure on different days of evolution in AR 12673. {\bf First column:} volume rendering of the 3D distribution of $|\mathbf{J}|$. {\bf Second column:} top view. A vertical slice is placed along the horizontal magenta line for further analysis. {\bf Third column:} perspective view. In all the panels, the structure mimics a twisted flux rope. The rendered field lines are selected at locations of intense $|\mathbf{J}|$ and are color coded accordingly.
} \label{Fig_core_fld}
\end{figure*}
Similarly, the energy flux injection (Poynting flux) across the surface \citep{KusanoK2002}
\begin{equation}
{{\left. \frac{dE}{dt} \right|}_{S}}=\frac{1}{4\pi }\int\limits_{S}{B_{t}^{2}{{V}_{\bot n}}dS-\frac{1}{4\pi }}\int\limits_{S}{\left( {{\mathbf{B}}_{t}}\bullet {{\mathbf{V}}_{\bot t}} \right){{B}_{n}}dS}
\label{eq_dedt}
\end{equation}
is computed and plotted in Figure~\ref{Fig_met}(e). Corresponding to the $dH/dt$ profile, the energy flux injection $dE/dt$ (always a positive-definite quantity) also shows a high input rate during the rapid emergence phase, reaching $8\times10^{27}$ erg/s. Assuming this value of the flux for two days, the coronal energy budget would be $1.35\times10^{33}$ erg, which probably supplied the M-class flaring activity on September 5. Following this phase, $dE/dt$ maintains a nearly constant influx at an average value of $7\times10^{27}$ erg/s. Given this constant supply of energy flux, the accumulated amount is $4\times10^{33}$ erg by the end of September 7. Notably, $dE/dt$ in this AR is stronger by a factor of two than in less intensely flaring ARs \citep{vemareddy2015a,vemareddy2017b}. From this energy flux study, we suggest that the coronal magnetic field is constantly driven to a stressed state at a critical energy level, significant enough to power the sequential X-flares with CMEs.
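In discrete form, Equation~(\ref{eq_dedt}) reduces to sums over the magnetogram pixels. The sketch below assumes Gaussian units and hypothetical arrays holding the field and the perpendicular-velocity components from the velocity inversion:

```python
import numpy as np

def poynting_flux(bx, by, bz, vx, vy, vz, dA):
    """Discrete form of dE/dt (Kusano et al. 2002) in Gaussian units:
    emergence term plus shear term.  (vx, vy, vz) are the components of
    the perpendicular velocity; dA is the pixel area."""
    emergence = np.sum((bx**2 + by**2) * vz) * dA / (4.0 * np.pi)
    shear = -np.sum((bx * vx + by * vy) * bz) * dA / (4.0 * np.pi)
    return emergence + shear
```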
\subsection{NLFFF model of AR magnetic structure}
The AR magnetic structure is reconstructed by performing a nonlinear force-free field (NLFFF) extrapolation of the observed photospheric vector magnetic field \citep{wiegelmann2010}. In order to weaken the effects of the lateral boundaries, the observed boundary is embedded in an extended field of view, and the computations are performed on a uniformly spaced computational grid of $800 \times 800 \times 400$ representing physical dimensions of $291 \times 291 \times 146$ Mm$^3$. At the different epochs of the AR evolution, the flux imbalance is less than 10\%. To satisfy the force-free conditions, the magnetic field components are pre-processed \citep{wiegelmann2006}. We first initiated the NLFFF code with the potential field (PF), but found that the modelled field failed to reproduce the structured AIA emission, especially the hook shapes, because the large-scale structure is far from potential. Therefore, the NLFFF code is initiated with the 3D linear force-free field (LFFF) constructed from the vertical component of the observed field \citep{gary1989}. The force-free parameter is obtained by minimizing the least-square difference between the modelled and observed transverse field, and is known as $\alpha_{\rm best}$ \citep{pevtsov1994,hagino2004}. Direct use of this LFFF with the $\alpha_{\rm best}$ parameter as the initial condition for the NLFFF results in an over-injection of magnetic twist into the field lines, leading to unrealistic field-line shapes. We made several runs with varying $\alpha_{\rm best}$, compared the final NLFFF structure with the coronal images, and found that half of the estimated $\alpha_{\rm best}$ provides a suitable initial model for the NLFFF. The NLFFF relaxation proceeds by minimizing the functional $L$, which contains volume integral terms of the Lorentz force and the magnetic field divergence, and a surface integral term that accounts for the measurement errors while injecting the boundary observations.
The final solution is assessed by the current-weighted sine of the angle, $\sin\theta_J$, between the magnetic field $\mathbf{B}$ and the electric current density $\mathbf{J}$, and by the magnetic field divergence in the computational box \citep{Wheatland2000}. In our NLFFF extrapolations, the relaxation converges to $10^{-3}$ of the initial $L$, with $\theta_J$ of 10--12$^\circ$ and an average magnetic field divergence of the order of $10^{-4}$. Note that the LFFF initial condition is adopted because the NLFFF obtained from the PF initial state fails to reproduce the observed structure, and it is constrained by the observed configuration of the coronal images after several trial runs with varying $\alpha_{\rm best}$.
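The force-freeness metric can be sketched generically as the current-weighted average of $|\sin\theta_J|$ \citep{Wheatland2000}; here `J` and `B` are hypothetical arrays of shape $(\ldots, 3)$ sampled over the computational box:

```python
import numpy as np

def cw_sin_theta(J, B):
    """Current-weighted average of |sin(theta)| between J and B
    (Wheatland et al. 2000).  J, B: (..., 3) arrays on the grid;
    a value of 0 means a perfectly force-free field."""
    Jm = np.linalg.norm(J, axis=-1)
    Bm = np.linalg.norm(B, axis=-1)
    sin = np.linalg.norm(np.cross(J, B), axis=-1) / (Jm * Bm)
    return np.sum(Jm * sin) / np.sum(Jm)
```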
\begin{figure}[!htp]
\centering
\includegraphics[width=.5\textwidth,clip=]{Fig_null_topology}
\caption{ Null point topology at the 04T18:00 UT time frame. {\bf top:} top view. Fan field lines (orange) diverge from the null point to the negative polarities surrounding the positive polarity. Spine field lines (blue) overlie the flux rope (red), with footpoints in the P1 and P2 polarities. {\bf bottom:} perspective view. The background image is the $B_z$ distribution.
} \label{Fig_null_fr}
\end{figure}
The NLFFF magnetic structure at different epochs is shown in Figure~\ref{Fig_ext_j2d}. The rendered field lines are overlaid on the $B_z$ map in the first column and on AIA 94 \AA~passband images in the second column. The field lines connecting P2 and N1 form the lower lobe, and those connecting N3 and P1 form the northern hook structure. The field lines anchored near the SPIL are inverse S-shaped and graze the PIL, manifesting a low-lying flux rope (blue arrow in the 04T18:00\,UT panel); they are overlaid by high-lying, potential-like field lines. This flux rope structure persists in all the panels, as indicated by the arrow. The modelled structure mimics very well the sigmoidal shape exhibited in the AIA 94 \AA~images; in particular, the hooked structures at the top and bottom of the inverse-S sigmoid match well. The global magnetic structure does not seem to change much from September 4 to 6, which means that the photospheric flux motions maintain the global non-potentiality manifested by the sigmoidal shape. While active events release stored energy through field reconfiguration, the energy is replenished by the underlying photospheric shear and converging flux motions through quasi-static evolution.
It is important to note the projection effect in the observations. When the AR moves away from the disk center, the radial direction increasingly departs from the line of sight; therefore, to compare the modelled field lines with the coronal AIA observations, the best practice is to tilt the modelled magnetic structure by the angle of the AR position on the disk \citep{Guoy2016,Guoy2017a}. Although it should in principle be corrected for, this projection effect contributes little to our overlaid AIA 94\AA~panels; only the field lines in the lower limb of the sigmoid on September 6, which reach greater heights, appear to deviate slightly from the emission pattern.
In the third column panels, we display the vertically integrated electric current ($\sum_z |\mathbf{J}|$) distribution. The coronal field is driven by photospheric shear motions, which naturally build coronal volume currents in the stressed configuration. Corresponding to the sheared/twisted field structure (flux rope), intense $|\mathbf{J}|$ is present along the main PIL. The overall morphology of this current distribution is similar to the sigmoid observed in the EUV 304 \AA~images displayed in the fourth column panels, demonstrating that the NLFFF model captures most of the observed features.
In Figure~\ref{Fig_core_fld}, the rendered core field structure is displayed in top view (second column panels) and perspective view (third column panels). For a better representation of the volume current with the field lines, we also display the volume rendering of $|\mathbf{J}|$ in the first column panels. In all panels, the background image is the $B_z$ distribution. The field lines are selected at locations of intense $|\mathbf{J}|$ and are color coded accordingly. On all the different days, the structure mimics a twisted flux rope above the main PIL. In particular, the NLFFF structure at 18:00 UT on September 4 is compact, with coherent, continuously twisted field lines. On the other days, there exists a highly sheared arcade in addition to the low-lying continuous inverse-S field lines. However, the field lines connecting the north-west bipolar regions are part of the sheared arcade in the top hook structure of the sigmoid.
A careful examination reveals a null point topology in the 04T18:00 UT frame, as depicted in Figure~\ref{Fig_null_fr}. The null point position is located by the algorithm described in \citet{vemareddy2014a}. Basically, it involves scanning for the null to locate a candidate grid cell and then finding the precise position within the cell using tri-linear interpolation with an iterative Newton--Raphson scheme. The null point is located above a positive polarity on the east side of the N1 polarity, at a height of around 5.8 Mm. A null point topology is typically associated with a fan-spine field line structure. The null point properties are described by the eigenvalues and eigenvectors of the Jacobian matrix $\delta B_{ij}=\partial B_i/\partial x_j$ evaluated in the vicinity of the null \citep{lau1990}. Two of the eigenvectors (whose eigenvalues have the same sign) define the fan surface, and the third one specifies the spine direction. Knowing the fan plane orientation from two of the eigenvectors, field lines away from the null can be traced from a circle of points on either side of the fan plane to visualize the local null topology. Here, the spine field lines overlie the flux rope and have footpoints spreading into the P1 and P2 polarities as a confining structure. This is one of the proposed structures in a complex quadrupolar magnetic configuration facilitating breakout reconnection and the underlying flux rope eruption \citep{antiochos1999}. Note that here the flux rope lies under the spine field lines instead of the fan field lines, as for example in \citet{Guoy2016}. Such structures form in regions where new flux emerges into pre-existing flux of inverse configuration \citep{WuST2005,Torok2009}. Altogether, the modelled field reproduces the complex magnetic structure, resembling the observed EUV sigmoid with an embedded twisted flux rope.
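As an illustration of this null-finding and eigen-analysis procedure, the sketch below applies a finite-difference Jacobian and a Newton--Raphson iteration to a simple analytic, divergence-free test field with a known null at the origin; the field `B` and all names are illustrative stand-ins for the gridded NLFFF data:

```python
import numpy as np

def B(p):
    """Analytic divergence-free test field with a null at the origin:
    fan plane z = 0 (eigenvalues +1, +1), spine along z (eigenvalue -2)."""
    x, y, z = p
    return np.array([x, y, -2.0 * z])

def jacobian(f, p, h=1e-6):
    """Central-difference Jacobian dB_i/dx_j at point p."""
    J = np.empty((3, 3))
    for j in range(3):
        dp = np.zeros(3); dp[j] = h
        J[:, j] = (f(p + dp) - f(p - dp)) / (2.0 * h)
    return J

def find_null(f, p0, tol=1e-10, itmax=50):
    """Newton--Raphson iteration toward |B| = 0."""
    p = np.asarray(p0, float)
    for _ in range(itmax):
        b = f(p)
        if np.linalg.norm(b) < tol:
            break
        p = p - np.linalg.solve(jacobian(f, p), b)
    return p

null = find_null(B, [0.7, -0.3, 0.5])
eigval, eigvec = np.linalg.eig(jacobian(B, null))
# two eigenvalues of one sign span the fan; the third gives the spine
```

For gridded data, `B` would be replaced by a trilinear interpolator of the extrapolated field within the candidate cell.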
\subsection{Quantitative estimates of energy, helicity, twist number}
The total magnetic energy is estimated by $\int_V B^2/8\pi\, dV$ in the AR volume. Owing to the net flux increase in the AR, the potential energy $E_p$ increases from $12.4\times 10^{32}$ erg at 04T18:00 UT to $18.7\times10^{32}$ erg at 06T11:24 UT. The total energy $E$ increases from $18\times10^{32}$ erg at 04T18:00 UT to $26.6\times10^{32}$ erg at 06T08:36 UT. This indicates that the AR reaches a higher non-potential state by this time, and an X2.2 flare was seen peaking at 06T09:10 UT. The total magnetic energy at 06T10:36 UT is lower by $2.8\times10^{32}$ erg than in the earlier panel, indicating energy release during the flare. However, it increased to $25.9\times10^{32}$ erg by 06T11:24 UT, showing energy storage for the X9.3 flare, which commenced at 06T11:53 UT. Since the free energy is above $5.6\times10^{32}$ erg at any time, an M- or X-class flare associated with a CME is expected, depending on the other favorable non-potential conditions discussed in Section 3.1. We can compare these energy estimates with those derived from the energy injection method (Equation~\ref{eq_dedt}) by time integration, $E_{acc}=\int_0^T (dE/dt)\,dt$. As can be seen in Table~\ref{tab1}, $E$ and $E_{acc}$ agree with each other in order of magnitude, except that $E$ tracks the activity events, indicating storage/release phases, unlike the monotonic accumulation of the latter.
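The energy estimates reduce to a direct sum over the model grid; a minimal sketch in Gaussian units, with illustrative array names:

```python
import numpy as np

def magnetic_energy(bx, by, bz, dV):
    """Total magnetic energy E = \int B^2 / (8 pi) dV in Gaussian units;
    bx, by, bz in Gauss on the model grid, dV in cm^3 per cell."""
    return np.sum(bx**2 + by**2 + bz**2) * dV / (8.0 * np.pi)

def free_energy(b_nlfff, b_pot, dV):
    """Free energy as E(NLFFF) - E(potential); each argument is a
    (bx, by, bz) tuple of arrays on the same grid."""
    return magnetic_energy(*b_nlfff, dV) - magnetic_energy(*b_pot, dV)
```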
\begin{table*}[!ht]
\centering
\caption{Quantitative estimates of helicity, energy, and flux rope twist. }
\begin{tabular}{ccccccc}
\hline
time [UT] & $H_{acc}$ [$10^{43}$ Mx$^2$] & $E_{acc}$ [$10^{32}$ erg] & $H_R$ [$10^{43}$ Mx$^2$] & $E_p$ [$10^{32}$ erg] & $E$ [$10^{32}$ erg] & $<T_w>$ [turns] \\
\hline
2017-09-04T18:00 & -2.5 & 8.8 & -4.66&12.4 &18.3 &-1.22 \\
2017-09-05T12:00 & -4.4 & 14.1 & -5.33&14.9 &20.9 &-0.37\\
2017-09-06T00:12 & -5.7 & 17.3 & -5.73&16.8 &23.2 &-0.34 \\
2017-09-06T08:36 & -6.4 & 19.1 & -6.83&18.2 &26.9 &-0.87\\
2017-09-06T10:36 & -6.6 & 19.9 & -6.06&18.5 &24.1 &-0.77 \\
2017-09-06T11:24 & -6.8 & 20.5 & -6.72&18.7 &25.9 &-0.93 \\
\hline
\end{tabular}
\label{tab1}
\end{table*}
From the helicity injection method discussed in Section 3.1.4, the coronal magnetic helicity is estimated as $H_{acc}=\int_0^T(dH/dt)\,dt$. Alternatively, it can also be computed from the volumetric distribution of the magnetic field above the AR, which is generally obtained from a model such as a force-free extrapolation \citep[for more details on different methods, see][]{Valori2016}. To compare the coronal helicity budget, we calculate the relative magnetic helicity \citep{Berger1984} from the 3D modelled field, given by
\begin{equation}
H_{R}=\int_V \left(\mathbf{A}+\mathbf{A_p}\right)\cdot\left(\mathbf{B}-\mathbf{B}_p\right)dV
\end{equation}
Here the reference field is potential field $\mathbf{B}_p$ and $\mathbf{A}_p$ is the corresponding vector potential. The main assumption involved is that the reference field should have the same normal field component as that of the real magnetic field on the boundaries. We construct the vector potentials with the formalism given in \citet{Devore2000} under the Coulomb gauge condition as
\begin{equation}
\mathbf{A}_p(x,y,z)=\nabla\times \hat{z}\int_z^\infty dz'\, \phi(x,y,z')
\end{equation}
where the scalar function obeys the Laplace equation $\nabla^2\phi=0$, recovering the potential field through $\mathbf{B}_p=\nabla\times\mathbf{A}_p=-\nabla\phi$. Using the Green's function of the Laplace equation as the integral kernel,
\begin{equation}
\phi(x,y,z)=\frac{1}{2\pi}\int\int dx'dy'\, \frac{B_z(x',y',z=0)}{\left[(x-x')^2+(y-y')^2+z^2\right]^{1/2}}
\end{equation}
This result is used to calculate the vector potential $\mathbf{A}_p$ at $z=0$. These boundary values are then used to compute $\mathbf{A}$ for the actual magnetic field $\mathbf{B}$ by the integration
\begin{equation}
\mathbf{A}(x,y,z)=\mathbf{A}_p(x,y,0)-\hat{z}\times\int_0^z dz' \mathbf{B}(x,y,z')
\end{equation}
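A discrete version of this last integration step might look like the following, assuming field arrays of shape `(nx, ny, nz)` and a trapezoidal rule along $z$; names and array layout are illustrative:

```python
import numpy as np

def devore_vector_potential(bx, by, apx0, apy0, dz):
    """DeVore (2000) gauge: A(x,y,z) = A_p(x,y,0) - zhat x \int_0^z B dz'.
    Since zhat x B = (-B_y, B_x, 0), this gives
    Ax = Ax0 + \int B_y dz'  and  Ay = Ay0 - \int B_x dz'.
    bx, by: (nx, ny, nz) arrays; apx0, apy0: (nx, ny) boundary values."""
    def cumtrapz_z(f):
        # cumulative trapezoidal integral along the last (z) axis
        inc = 0.5 * (f[..., 1:] + f[..., :-1]) * dz
        return np.concatenate([np.zeros_like(f[..., :1]),
                               np.cumsum(inc, axis=-1)], axis=-1)
    ax = apx0[..., None] + cumtrapz_z(by)
    ay = apy0[..., None] - cumtrapz_z(bx)
    return ax, ay
```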
The values of $H_R$ and $H_{acc}$ are listed in Table~\ref{tab1}. They match each other in order of magnitude. A point to be noted is that the helicity carried away by the intermittent CMEs is not included in $H_{acc}$, which shows a monotonic increase in time, whereas $H_R$ decreases after the X2.2 flare (10:36 UT) and then increases (11:24 UT) towards the X9.3 flare on September 6. Moreover, $H_R$ is consistent with $E$, the two being proportionally related.
Given the 3D field distribution, one can calculate the twist number of each field line \citep{berger2006,inoue2011, LiuRui2016}
\begin{equation} \label{Eq-twist}
T_w = \int\limits_{L} \frac{\mu_0 \mathbf{J}_{||}}{4\pi B} ~\rmd l
= \int\limits_{L} \frac{\nabla \times \mathbf{B} \centerdot \mathbf{B}}{4\pi B^2} ~\rmd l
\end{equation}
Here the twist is related to the parallel electric current, ${{\mathbf{J}}_{||}}=\frac{\mathbf{J}\centerdot \mathbf{B}}{\left| B \right|}$, and the line integral is taken along the selected magnetic field line of length $L$. The average twist number is then given by
\begin{equation}
<T_w>=\frac{\Sigma_i \Phi_i T_{w,i}}{\Sigma_i \Phi_i}
\end{equation}
where $\Phi_i$ is the magnetic flux of flux tube $i$. These are estimated in the flux rope cross section, as shown in the last column of Figure~\ref{Fig_qf}. The flux rope boundary is taken to be the QSLs of large Q-values, within which the field lines are twisted. The values of $<T_w>$ are given in Table~\ref{tab1}. The $T_w$ values in the flux rope at 04T18:00 UT range up to 2 turns (the negative sign denotes left-handed helicity), while $<T_w>$ is 1.2 turns, indicating a favorable condition for the kink instability, as also reported by \citet{YangS2017}. In the 05T12:00 UT and 06T00:12 UT snapshots, $<T_w>$ is small because the field lines are in a sheared arcade form. $T_w$ has a maximum value of 1.72 in the 06T08:36 UT panel, but the average comes down to 0.87 turns. At later times, $<T_w>$ recovers to 0.99 turns before the large X9.3 flare at 11:53 UT. Further, the $<T_w>$ values correspond to the $H_R$ values at the different epochs, because the latter is a measure of the twist, linking, and shear, which mostly come from the core part. The $<T_w>$ values represent the physical structure of the magnetic field derived from the simultaneous observations, and thus are not in agreement with $H_{acc}$, which by itself carries no information on the loss of helicity by a flux rope eruption, i.e., on whether the flux rope is present or not. Helicity injection builds the coronal structure of the flux rope, so the flux rope twist is expected to build up with the coronal helicity accumulation, as reported in the modelling analysis of \citet{Guoy2013}. However, this holds only as long as no eruption occurs in the observation time window. In any AR since its emergence, helicity builds up to form structures like flux ropes, which erupt after exceeding some critical value. A persistent injection of helicity would build such structures on a time scale of tens of hours after every eruption, successively, and this has been the subject of recent studies \citep{vemareddy2017b}.
Altogether, our estimated parameters evidently change with the presence of the flux rope or its eruption. This modelling analysis quantitatively captures most of the theoretical aspects seen in the coronal activity.
From the $T_w$ computation, we can also estimate the self-helicity of the flux rope as $H_s=\Sigma_i^N T_{w,i}\Phi_i^2$, with the summation running over the $N$ flux tubes. We use the flux rope cross section defined by the closed QSLs at the boundary. The values vary between $0.3-1.0\times 10^{40}$ Mx$^2$ at the different time snapshots. This is consistent with the previous findings of \citet{Guoy2013,Guoy2017b} that the self-helicity becomes negligible for large $N$.
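The three quantities $T_w$, $<T_w>$, and $H_s$ reduce to short sums once $\alpha=(\nabla\times\mathbf{B})\centerdot\mathbf{B}/B^2$ is sampled along each field line; a minimal sketch with illustrative inputs (not the authors' code):

```python
import numpy as np

def twist_number(alpha, dl):
    """T_w = \int mu0 J_par / (4 pi B) dl = \int alpha / (4 pi) dl along
    one field line; alpha = (curl B . B)/B^2 sampled at step length dl."""
    return np.sum(alpha) * dl / (4.0 * np.pi)

def mean_twist(tw, phi):
    """Flux-weighted average <T_w> = sum(phi_i T_w,i) / sum(phi_i)."""
    tw, phi = np.asarray(tw), np.asarray(phi)
    return np.sum(phi * tw) / np.sum(phi)

def self_helicity(tw, phi):
    """H_s = sum_i T_w,i * phi_i^2 over the flux-rope cross section."""
    tw, phi = np.asarray(tw), np.asarray(phi)
    return np.sum(tw * phi**2)
```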
\begin{figure*}[!htp]
\centering
\includegraphics[width=.9\textwidth,clip=]{Fig_q3d}
\caption{ 3D distribution of $LogQ$ for the NLFFF magnetic structure in AR 12673. The background is the $B_z$ image. Notably, the Q-distribution outlines the sigmoid structure with the field lines at the core of the flux rope, the overlying potential arcade, and the 2J field lines. Higher values of $LogQ$ are located in the core part of the sigmoid. An online-only animation for the 04T18:00\,UT snapshot describes the relationship of the Q-distribution (with variable opacity) with the field lines. } \label{Fig_q3d}
\end{figure*}
\subsection{QSLs and magnetic connectivity domains}
Quasi-separatrix layers (QSLs) are the regions of the magnetic volume where the field line connectivity experiences dramatic but continuous variations \citep{demoulin1996}. The locations of QSLs are determined by computing the squashing factor (Q; \citealt{titov2002}). From the 3D PF and NLFFF, we calculate Q using the code developed by \citet{LiuRui2016}, according to the formalism given in \citet{Pariat2012}. In Figure~\ref{Fig_q3d}, the 3D rendering of $LogQ$ is shown for different time snapshots of the NLFFF structure. There are Q values as large as 15 orders of magnitude, but a scaling of $0<LogQ<4$ is applied for a better visualization of the full range. As is obvious from the panels, the Q-distribution outlines the sigmoid structure with the field lines in the core of the flux rope, the overlying potential arcade, and the 2J field lines, similar to the QSL analysis of \citet{Tassev2017} for the \citet{titov1999} flux rope. Higher values of Q are located in the core part of the sigmoid above the main PIL. The online-only animation demonstrates the relationship of the Q-distribution (with variable opacity) with the field lines. The QSLs (in blue) above the main PIL mainly separate the quasi-connectivity domains from the P1 and N1 polarities. The QSLs of large Q-values are likely places for reconnection, in which the line-of-sight integrated emission resembles the given shape of the X-ray or EUV sigmoid. However, not all QSLs of higher Q-values are associated with intense volume currents \citep{savcheva2012b,Guoy2013,YangKai2015}.
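For a single layer of footpoint mappings, the squashing factor of \citet{titov2002} can be sketched as below; the mapping arrays `X`, `Y` are assumed to come from field-line tracing, and Q attains its minimum of 2 for a locally distortion-free mapping:

```python
import numpy as np

def squashing_factor(X, Y, dx=1.0, dy=1.0):
    """Q = (a^2 + b^2 + c^2 + d^2) / |ad - bc| (Titov et al. 2002) for the
    field-line footpoint mapping (x, y) -> (X(x,y), Y(x,y)).  The mapping
    Jacobian is taken by central finite differences; arrays are indexed
    [y, x], so axis=1 is the x direction."""
    a = np.gradient(X, dx, axis=1)   # dX/dx
    b = np.gradient(X, dy, axis=0)   # dX/dy
    c = np.gradient(Y, dx, axis=1)   # dY/dx
    d = np.gradient(Y, dy, axis=0)   # dY/dy
    return (a**2 + b**2 + c**2 + d**2) / np.abs(a * d - b * c)
```

QSLs would then appear as narrow lanes of large $Q$ in the returned map.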
\begin{figure*}[!ht]
\centering
\includegraphics[width=.99\textwidth,clip=]{Fig_qf}
\caption{Distribution of $Q$ in a vertical slice (xz plane, along the magenta line in Figure~\ref{Fig_core_fld}) placed across the flux rope. {\bf First column:} $LogQ$ computed from the potential field approximation. The QSLs of two domains intersect at the coronal null point (see Figure~\ref{Fig_null_fr}) in the 04T18:00 UT panel and disappear in the later-time panels. {\bf Second column:} $LogQ$ computed from the NLFFF model. Thin blue/red curves represent $B_z$ contours at $\pm100$\,G in the slice, and their adjoining position on the photosphere (z=0) traces the main PIL. The coronal volume above the main PIL is surrounded by magnetic domains of a highly sheared arcade enclosed in a less sheared, potential-like field. The inner core of the sheared arcade is a twisted flux rope extending up to 5 Mm, pointed to by blue arrows. With the continued shear motion of the magnetic patches about the PIL, the existing arcade becomes highly stressed (red arrows), resulting in an increase of its coronal height from 17 Mm to 34 Mm. {\bf Third column:} QSL of the flux rope cross-section enclosed in the red rectangle. }
\label{Fig_qf}
\end{figure*}
Further, Q (in logarithmic scale) in a vertical slice placed across the flux rope (see Figure~\ref{Fig_core_fld}) is plotted in Figure~\ref{Fig_qf}. $B_z$ contours at $\pm$80 G are also shown with thin blue/red curves. The joining locations of these contours at the photosphere (z=0) correspond to the main PIL, which is surrounded by magnetic domains of a sheared arcade enclosed in a less sheared potential field. The important QSLs are identified by large values of Q, i.e., the black lanes (or patches) in our negated maps. Owing to the presence of the stressed field configuration, there is a clear difference in the domains of the sheared arcade between the PF and NLFFF, especially the candle-flame or inverse tear-drop like shape in the latter. The inner core of the sheared arcade is highly twisted, manifesting a flux rope extending up to 6\,Mm (pointed to by the blue arrow).
To be clearer, the Q maps for the flux rope cross section (red rectangle) are displayed in the third column of the same figure. The large values of Q in the 04T18:00 UT panel are located at the flux rope border, differentiating the domain of the flux rope, with twisted field lines, from the surrounding sheared arcade, similar to the studies of \citet{Guoy2013,Jiezhao2014, LiuRui2016}. However, this flux rope structure, with large QSLs at the border, is not clear in the 05T12:00 UT and 06T00:12 UT panels, probably because the flux rope had not re-formed from the sheared arcade after the first phase of flaring activity starting at 04T18:00 UT. This is consistent with the inference of \citet{YangKai2016}, who found that the closed QSL of the flux rope becomes smaller as a consequence of the flare. In the 06T08:36 UT panel, these QSLs appear candle-flame shaped, with a smaller width than in the 04T18:00 UT case. This structure is diffuse in the 06T10:36 UT panel, probably because the flux rope had partially erupted during the X2.2 flare at 06T08:57 UT. The 06T11:24 UT panel clearly shows the semi-closed QSL, implying a developed state of the flux rope before the X9.3 flare at 11:53 UT. We note from this QSL analysis that the flux rope has an HFT topology from 06T08:36 UT onwards, as the QSL legs cross each other below it, as also revealed by the numerical study of this AR by \citet{JiangChaowei2018}. In contrast, the topology in the 04T18:00 UT panel would be a BPSS, as the QSLs touch the photosphere tangentially.
Further, there are other QSLs associated with the large-scale magnetic structure enclosing the flux rope. These are also self-closed QSLs, differentiating the less sheared field lines over the flux rope from the potential ones at even greater heights. Importantly, there exist intersecting self-closed QSLs at 04T18:00 UT. The intersection is at the null point above the positive polarity on the east side of the flux rope, as displayed in Figure~\ref{Fig_null_fr}. Thus, the spine field lines extend into P1 and P2 over the flux rope. Reconnection at the null point helps to reduce the overlying field, as in a breakout reconnection scenario, enabling the flux rope eruption \citep{WuST2005,Torok2009}. As the positive polarity moves southward, these intersecting QSLs become separated after the 05T12:00 UT panels.
It is important to note the height of the large-scale QSLs over the time evolution. They appear at increasing heights from the 04T18:00 UT panel to the 06T10:36 UT panel. In the presence of the persistent slow shear motions of the magnetic patches (N1 and P1) at the photosphere (Figure~\ref{Fig_vel}), the field lines are increasingly stressed. This results in a decreased distance between the QSL legs rooted in the photosphere on either side of the PIL. The configuration still remains in equilibrium while the height of the sheared arcade increases from 18 Mm at 04T18:00 UT to 34 Mm at 06T10:36 UT. We believe that this highly energized sheared arcade system is critically stable and is prone to erupt in response to a small perturbation, such as a kink instability of the flux rope in the inner core \citep{YangS2017}.
\begin{figure*}[!htp]
\centering
\includegraphics[width=.98\textwidth,clip=]{comp_norm_dhdt}
\caption{Comparison of the normalized helicity flux injection $dH/dt/\Phi^2$ in four flare/CME-producing ARs. The starting times are 2017-08-31T19:12 UT, 2015-06-19T00:00 UT, 2014-10-21T00:00 UT, and 2012-03-06T00:00 UT for ARs 12673, 12371, 12192, and 11429, respectively. The normalized helicity flux injection in AR 12673 is comparatively high, with the major injection phase co-temporal with the rapid flux emergence. The mean values in the given time windows of these ARs are $-0.52\times10^{-6}$ s$^{-1}$, $-0.19\times10^{-6}$ s$^{-1}$, $-0.08\times10^{-6}$ s$^{-1}$, and $-0.25\times10^{-6}$ s$^{-1}$, respectively. }
\label{Fig_norm_dhdt}
\end{figure*}
\section{Summary and Discussion}
\label{summ}
Using continuous time-series SDO observations, we studied the long-term evolution of the flare-prolific AR 12673 right from its emergence onto the visible photosphere. The AR emergence occurred with a couple of dipoles in the vicinity of a pre-existing positive-polarity sunspot, whose interaction through shear/proper motions built a compact $\delta$-AR complex with a curved SPIL. This main SPIL is seen with persistent strong shear and converging flows of the opposite polarities on either side. A major helicity injection occurs during the rapid flux emergence, consistent with other very fast flux emergence cases \citep{SunX2017}.
In order to gain more insight into the helicity flux input, we compare the normalized helicity flux injection ($\frac{1}{\Phi^2}\frac{dH}{dt}$) in different ARs in Figure~\ref{Fig_norm_dhdt}. This value is a measure of the non-potentiality per unit flux per unit time and indicates how fast the AR accumulates energy and helicity. As is clear from the plot, this parameter evolves at a rate higher by a factor of 3 in AR 12673 than in the other ARs. The mean values of $\frac{1}{\Phi^2}\frac{dH}{dt}$ in the given time windows are $-0.52\times10^{-6}$ s$^{-1}$, $-0.19\times10^{-6}$ s$^{-1}$, $-0.08\times10^{-6}$ s$^{-1}$, and $-0.25\times10^{-6}$ s$^{-1}$ for ARs 12673, 12371, 12192, and 11429, respectively. AR 12192 is a flare-prolific region without CMEs \citep{sunx2015}, in contrast to the rest of the ARs \citep{vemareddy2017b}, and has a small injection value per unit flux. Interestingly, this value in our AR is stronger by a factor of 2 than in the strongly CME-productive ARs 12371 and 11429, suggesting that it is a key parameter for generating severe space-weather events, as in the case of AR 12673.
While this helicity flux builds up the sigmoid by September 4, the helicity injection by the continued shear and converging motions in the later evolution contributes to the sigmoid's sustenance and to its core-field twist as a manifestation of the flux rope, which erupts after exceeding a critical value of twist. Moreover, the total length of the SPIL segments correlates with the non-neutralized current ($|DC/RC|$), which maintains a higher value in both polarity regions of the AR. This higher value of non-neutralized currents is a signature of strong non-potentiality and of the eruptive capability of the AR according to flux rope models \citep{zakharov1986,Torok2014}, and is suggested to be a proxy for assessing the ability of ARs to produce major eruptions \citep{YangLiu2017,vemareddy2017d, Vemareddy2019}.
Corresponding to the photospheric magnetic field evolution, we also studied the magnetic configuration by modeling the coronal field. The modelled magnetic field qualitatively reproduces the sigmoidal structure, capturing major features such as the twisted core flux as a flux rope and the hook-shaped parts connecting the P1 and N3 polarity regions. The topological study indicates that the AR contains QSLs in the surroundings of the flux rope in the core, as well as QSLs in the large structure surrounding the sheared arcade. The flux rope likely had a BPSS topology during the emergence phase of the AR (04T18:00 UT), whereas it had an HFT at a later time on September 6. However, the twist number of the field lines in the flux rope exceeds 1.2 turns (with $\langle T_w\rangle\ge1$ turn), indicating a possibly kink-unstable nature of the twisted flux in the core. In addition, the magnetic structure reveals a null-point topology with spine field lines overlying the flux rope. Therefore, both kink-instability and null-point reconnection in the overlying field would have played a role in triggering the first phase of the activity from September 4 (Figure~\ref{Fig_met}). In the second phase of the activity on September 6, although the AR was critically stable, kink-instability and/or null-point reconnection below the flux rope might have triggered the eruptions, as also revealed by the numerical studies of \citet{JiangChaowei2018}.
Further, the twist number of the flux rope, the QSL structure surrounding the flux rope, and the coronal helicity and energy budgets are shown to change with the presence of the flux rope and its eruption. This implies that the energy and helicity injections from the bottom boundary help build the essential physical conditions for flare/CME occurrence over the long-term evolution. The QSL study shows that the sheared arcade is stressed to a critically stable state and that its coronal height doubles from September 4 to 6. In addition to the many distinguishing non-potential characteristics, the critically stable state from the middle of September 6 is a crucial factor in explaining the recurrent eruptive nature of this AR. Once the magnetic system is critically stable, there are basically three alternatives for triggering solar eruptions, viz., internal tether-cutting, external tether-cutting, and MHD instability or loss of equilibrium such as kink-instability and/or torus-instability \citep{linj2000,antiochos1999,torok2005}. The unique non-potential characteristics found in this study are in agreement with previous studies and interpretations. Studies of AR 12673 by \citet{YangS2017} showed that the flux rope eruption is triggered by kink-instability owing to twist exceeding the critical value. They proposed a block-induced complex structure for a flare-productive AR. Similarly, \citet{YanXL2018} suggested that sunspot rotation and shearing motion played an important role in the buildup of free energy and the formation of flux ropes in the corona, which produce solar flares and CMEs. We believe that the long-term evolution of the AR holds characteristic clues to the nature and strength of the activity, which, by studying different AR cases, can help constrain space-weather prediction models.
\acknowledgements SDO is a mission of NASA's Living With a Star Program. I thank the anonymous referee for insightful comments and suggestions that undoubtedly improved the presentation of the results. P.V. is supported by an INSPIRE grant under the AORC scheme of the Department of Science and Technology. The NLFFF code was developed by Dr. T. Wiegelmann of the Max Planck Institute for Solar System Research. 3D rendering is due to the VAPOR (\url{www.vapor.ucar.edu}) software. We acknowledge extensive usage of the multi-node, multi-processor high-performance computing facility at the Indian Institute of Astrophysics.
\bibliographystyle{apj}
\section{Introduction}
Assistive devices such as robotic arms and powered wheelchairs are designed to return independence to people with motor impairments such as spinal cord injury (SCI), amyotrophic lateral sclerosis (ALS), and cerebral palsy (CP), among others. Controlling assistive devices can be challenging---especially when trying to control more complex assistive devices with more limited interfaces~\cite{Cowan2012}.
Commercially-available interfaces used for controlling assistive devices can be either proportional---where the user controls both \textit{which} control signal is issued and its \textit{magnitude}---or discrete---where the user controls only the selection of \textit{which} signal to turn on or off. Common control interfaces can be one-, two-, or three-dimensional. To fully control all six Degrees-of-Freedom (DoF) of a robotic arm's end-effector, which include the three linear and three angular positions in space, the user must switch between subsets of the control dimensions. These subsets are referred to as \textit{control modes}. With a one-dimensional interface, the user must switch between six modes to fully control all 6-DoF. This increases the physical and cognitive burden on the user.
Autonomy can help offset the challenges that exist in operating these interfaces~\cite{fehr2000}. A previous study with SCI users of powered wheelchairs found that they prefer to retain as much control over the assistive machine as possible~\cite{gopinath2017human}. Sharing control between the user and autonomy can allow the user to retain control authority while benefiting from the assistance of autonomy when the task becomes too burdensome~\cite{argall2013}. However, there is no one-size-fits-all method of control sharing, as each person is unique in their desired control preference~\cite{erdogan:2017:ras}. Their motor abilities and required level and type of assistance also can change over time. For control sharing to be useful, practical, and accepted in the realm of assistive robots for motor-impaired users, it is furthermore important for the control sharing to be able to adapt to new scenarios and user skills~\cite{Urdiales2009}.
In this study, we investigate different performance measures to quantify the teleoperation characteristics of spinal cord injured and uninjured users. Specifically, we investigate these performance measures for their potential as useful cues for when autonomy should be triggered to be most helpful to the user. These measures are important as they can furthermore inform the design of control-sharing paradigms that are able to adapt to the user's variability over time. We also investigate differences between subject groups and determine whether subject-specific training data are needed for algorithm development. Towards this, we conduct a 20-person pilot study with 8 SCI and 12 uninjured subjects using a robotic arm to accomplish various Activities of Daily Living (ADL) tasks using three different types of commercially-available interfaces. We determine relationships between the selection of performance measures and assess the strength of correlations between these and the user's perceived task difficulty. Our premise is that the information content encoded within the user's control signals has useful characteristics and implications for algorithms capable of learning from and adapting to the user.
In Section~\ref{sec:background} we cover related work on control sharing in assistive robotics. In Section~\ref{sec:exp_design} we provide a detailed description of our experimental design. We present our results in Section~\ref{sec:results} followed by a discussion of our findings in Section~\ref{sec:discussion}. Finally we
present our conclusions and future work in Section~\ref{sec:conclusion}.
\section{Background}
\label{sec:background}
In this section we overview related work in existing methods of shared control and user signal characterization in the domain of assistive robotics.
There are various examples of different control sharing algorithms in the literature~\cite{music:2017:arc}, which can largely be classified based on the way control is allocated between the user and autonomy. In general, control-sharing algorithms can be divided into those where (a) the amount or \textit{level} of control authority is shared between the human and the robot, and (b) those where there is an idea of \textit{subtask} allocation. In addition to various ways of sharing control between humans and robots in assistive settings, there are also many ways in which shared control can be triggered.
In schemes where the level of autonomy is shared, the amount of control shared by autonomy can be static or dynamic~\cite{beer:2014:hri}. The dynamic change in the control authority is typically either implicit based on a change in the environment~\cite{Simpson1998}, or explicitly set by the human user~\cite{parikh:2004}.
In schemes where there is an idea of subtask allocation, there are specific behaviors where the robot will take control authority---including but not limited to safe stopping at obstacles or obstacle avoidance and manipulation of grasped objects. These subroutines can either be always on, or triggered explicitly by the end-user~\cite{bourhis:2001}, or implicitly by a change in the environment or the robot state~\cite{lankenau:2001}.
In order for control-sharing to be useful and effective for motor-impaired end-users, these methods must be able to adapt to the end-user---whether it be in the appropriate level of relinquishing control to autonomy at a given time, or in when to trigger autonomy to take over certain subtasks. Some research has focused on adapting the control sharing to changes in a powered wheelchair operator's environment and to skilled-user intent~\cite{Soh2015}. Other research has looked at adaptation that learns from expert assistants (i.e., occupational therapists) rather than from the actual end-user~\cite{Kucukyilmaz2015}.
Limited work has been done that considers variation in user skill. Vanhooydonck \textit{et al.} use a neural network to model a personalized shared-assistance wheelchair system~\cite{Vanhooydonck2010}. However, the model relies on gathering a large amount of training data for any new condition---including changes in skill level or the environment---and cannot adapt to changes online. Fdez-Carmona \textit{et al.} propose a navigation skill profile to trigger the adaptive assistance of a powered wheelchair~\cite{Fdez-Carmona2014}. They use smoothness and directness to define the skill profile. In addition to smoothness and directness, safety and disagreement have also been used as factors to characterize user signals while operating a powered wheelchair~\cite{Urdiales2013}. To our knowledge, no previous work has characterized the teleoperation control signals of people operating an assistive robotic arm.
Smoothness and directness are characteristics of the user command signal. In the domain of shared-control assistive robots, both the user and the robot signals can be characterized. There are also examples of user signal characterization outside the domain of assistive robotics. In sensory-motor performance research, movement and trajectory smoothness is often used to measure performance skill~\cite{Hogan2009}. In the domain of automotive engineering, parameters such as steering angle and frequency of user inputs are used to characterize driving behavior~\cite{Zuojin2017, Fernandez2016}.
We aim in this work to identify computable measures to sufficiently represent characteristics of the user's operation of an assistive robot, with the long term goal of using these measures within algorithms that can adapt to the user's unique skill level, needs, and changing control behavior. Towards this end, we identify performance measures which can be useful as real-time predictors of changes in perceived user difficulty. \par
\section{METHODS}\label{sec:exp_design}
This section provides a detailed description of the research design and procedures used in this experiment.
\subsection{Hardware}
The study was conducted using the MICO robotic arm (Kinova Robotics, Canada). This platform was chosen because it is a commercially available assistive device designed for users of powered-wheelchairs.
Each subject was asked to perform two different tasks using three different interfaces (Fig.~\ref{fig:interfaces}): (1) the IPD Ultima 3-axis joystick (CH-Products Industrial, CA, USA), a three-dimensional proportional controller; (2) the Sip/Puff switch (Origin Instruments, TX, USA), a one-dimensional proportional controller; and (3) the 105 electronic head array system (ASL, TX, USA), a one-dimensional discrete controller. These interfaces were selected because they are commonly employed by SCI users of powered wheelchairs.
Four wearable BioStampRC sensors (MC10 Inc., MA, USA) were used to measure physiological signals. One sensor was configured as an ECG sensor to measure heart rate. The other three were configured as accelerometers to measure hand, head, and torso kinematics. This selection was made because these sensors are wireless and therefore do not limit the human-user's motion.
\begin{figure}[t]
\centering
\vspace{0.2cm}
\includegraphics[height=2.2cm]{fig/joystick_scaled.jpg}
\includegraphics[height=2.2cm]{fig/headarray_scaled.png}
\includegraphics[height=2.2cm]{fig/sip_puff_scaled.png}
\caption{Control interfaces used in this study. \textit{From left to right:} 3-axis joystick, head array system, sip/puff switch.}
\label{fig:interfaces}
\end{figure}
\subsection{Participants}
The study consisted of 20 participants: 12 uninjured and 8 with spinal cord injury (levels C3, C5-C6, C5-C7 incomplete, C6 incomplete, C5-C6, C3 incomplete, C3-C4, T2-3). From the spinal cord injured group, one participant was not able to use the joystick due to no upper-limb control, and one participant required a neck brace and was not able to use the head array switch. The age and gender demographics are provided in Table~\ref{table:demog}.
\begin{figure*}[h]
\centering
\vspace{0.2cm}
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{fig/knife_scaled.png}
\caption{}\label{fig:task1}
\end{subfigure}
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{fig/scoop_scaled.png}
\caption{}\label{fig:task2}
\end{subfigure}
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{fig/plate_scaled.png}
\caption{}\label{fig:task3}
\end{subfigure}
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{fig/cup_scaled.png}
\caption{}\label{fig:task4}
\end{subfigure}
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{fig/cereal_scaled.png}
\caption{}\label{fig:task5}
\end{subfigure}
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{fig/lid_scaled.png}
\caption{}\label{fig:task6}
\end{subfigure}
\caption{The six ADL tasks used in the experiment:
(\subref{fig:task1}) Open drawer, pick up butter knife, place it inside the drawer.
(\subref{fig:task2}) Scoop cereal without spilling, position spoon towards mouth.
(\subref{fig:task3}) Pick up plate from top of drawers, place in dish rack.
(\subref{fig:task4}) Pick up inverted cup from the dish rack, rotate and stack inside upright cups.
(\subref{fig:task5}) Grasp cereal box, pour into bowl without spilling.
(\subref{fig:task6}) Unscrew jar lid, lift lid straight up.}
\label{fig:tasks}
\end{figure*}
\begin{table}[H]
\centering
\caption{Demographics of Study Participants.}
\label{table:demog}
\begin{tabular}{ |c|c|c| }
\cline{2-3}
\multicolumn{1}{c}{} & \multicolumn{1}{|c|}{SCI} & Uninjured \\
\hline
Number & 8 & 12 \\ \hline
Age (mean$\pm$std) & 45.3$\pm$16.7 & 26.9$\pm$3.64 \\ \hline
Female-Male& 1-7 & 6-6 \\
\hline
\end{tabular}
\end{table}
\subsection{Procedure}
The protocol and consent form were approved by Northwestern University's Internal Review Board (IRB) and all participants gave informed signed consent.
\subsubsection{Protocol}
Each session began by obtaining the participant's consent, demographics information, and completing a pre-session questionnaire. Then the BioStampRC sensors were attached and the participant's baseline signals were measured. A session consisted of three rounds---one per interface. Prior to starting the first trial in each round, the participants were given time to train with each new interface. The subjects then performed two separate tasks twice. Immediately after the second execution of each task, the participants completed post-task questionnaires to self-report task difficulty. The order of the tasks and interfaces were randomly counterbalanced across all subjects prior to recruitment, and each participant was randomly assigned conditions. The session ended by measuring the resting heart rate signal and a final questionnaire comparing interfaces and tasks. We collected a total of 222 trials: 78 spinal cord-injured (26 per interface) and 144 uninjured (48 per interface).
\subsubsection{Tasks}
Six different ADL tasks were designed around the themes of eating and housekeeping (Fig.~\ref{fig:tasks}). Tasks were selected in consultation with an occupational therapist. Each participant was assigned two tasks from this selection in a counterbalanced manner. Each subject performed their two tasks with all three interfaces. There were 15 unique sets of task conditions, and no two subjects performed the same combination of tasks.
\subsection{Experimental Design}
The experiment used a 2x3 mixed design, where the interface was considered a \textit{within-subjects} factor and whether or not the participant was uninjured or had a spinal-cord injury was a \textit{between-groups} factor. In this preliminary study we investigated task-independent characteristics of user control commands. Therefore, we did not include the selection of task as a between-subjects factor in the experimental design.
\subsection{Metrics}
The metrics detailed in the following section were the dependent variables analyzed for their significance in characterizing the user signals.
\subsubsection{Subjective}
\begin{itemize}
\item Task difficulty: The user's perception of how difficult it was to complete the task using the robotic arm and control interface. This was measured using a post-task Likert scale questionnaire, which was completed immediately after task execution and included the following three statements:
\begin{itemize}
\item Task Difficulty: It was easy for me to complete this task.
\item Robot Difficulty: The robot was easy for me to operate.
\item Interface Difficulty: I was able to issue my intended control command.
\end{itemize}
The overall difficulty is the sum of the three Likert scores.
\end{itemize}
\subsubsection{Physiological}
\begin{itemize}
\item Heart rate variability: The time-difference between successive heart beats. The heart rate was measured at a frequency of 250 Hz. The time-domain Root Mean Square of the Successive Differences (RMSSD)~\cite{Munoz2015} was used to assess the heart rate variability:
\begin{equation*}
RMSSD=\sqrt{\frac{1}{N-2}\sum_{n=3}^{N}[I(n)-I(n-1)]^2}
\end{equation*}
where $N$ is the total number of heart beats and $I(\cdot)$ is the interbeat interval.
\end{itemize}
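As a concrete sketch, the RMSSD above can be computed directly from detected beat timestamps (a minimal Python illustration; it assumes R-peak detection on the 250~Hz ECG signal has already produced the beat times):

```python
import numpy as np

def rmssd(beat_times):
    """RMSSD of interbeat intervals, matching the 1/(N-2) normalization:
    with N beats there are N-1 intervals I(n) and N-2 successive differences."""
    intervals = np.diff(beat_times)   # interbeat intervals I(n)
    diffs = np.diff(intervals)        # successive differences I(n) - I(n-1)
    return float(np.sqrt(np.mean(diffs ** 2)))

# A perfectly regular heartbeat has (near-)zero variability:
regular = rmssd(np.arange(0.0, 10.0, 0.8))
```

Lower RMSSD values indicate less beat-to-beat variability; irregular interbeat intervals drive the value up.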
\subsubsection{User interaction}
\begin{itemize}
\item Frequency of input commands: The rate at which the user issued a command through the interface. We used the Exponential Moving Average (EMA) $f_t$ over a window size of 10 input commands,
\begin{align*}
Y_t &= \frac{1}{10}\sum_{i=N_{t-10}}^{N_t}\frac{1}{t_i-t_{i-1}} \\
f_t &= \alpha \cdot Y_t + (1-\alpha) \cdot f_{t-1}
\end{align*}
where $N_t$ is the index of the most recent command issued by time $t$, $Y_t$ is the mean input rate over the window at time $t$, $f_{t-1}$ is the value of the EMA at time $t-1$, and $\alpha$ is the degree of weight decrease.
\item Number of mode switches: The number of times the user switched between control modes.
\item Smoothness of user input commands: The quality of continuity or non-intermittency of input commands.
We measured smoothness using the Spectral Arc Length (SPARC)~\cite{Balasubramanian2012}:
\begin{align*}
SPARC &= - \int_{0}^{\omega_c}\bigg[\Big(\frac{1}{\omega_c}\Big)^2+\Big(\frac{d\hat{V}(\omega)}{d\omega}\Big)^2\bigg]^\frac{1}{2}d\omega; \\
\hat{V}(\omega)&=\frac{V(\omega)}{V(0)}\\
\omega_c &= \min\big(\omega_c^{max},\, \min(\omega, \hat{V}(r) < \bar{V} ~\forall ~r>\omega)\big)
\end{align*}
where $V(\omega)$ is the Fourier magnitude spectrum of the user input velocity signal, $\hat{V}(\omega)$ is the normalized spectrum, $\bar{V}$ is the amplitude threshold, and $\omega_c$ is the dynamic cutoff frequency, which determines the sensitivity to noise. This metric was chosen because it was demonstrated to be a reliable measure of smoothness that is not affected by the amplitude or duration of input signals~\cite{Balasubramanian2015}.
\end{itemize}
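The frequency-of-input metric above can be sketched in a few lines of Python (the window size of 10 follows the text; the smoothing factor $\alpha$ is not specified in the text, so the value here is an illustrative assumption):

```python
import numpy as np

def ema_input_frequency(timestamps, window=10, alpha=0.3):
    """Exponential moving average of the user's command input rate.

    timestamps : times (s) at which interface commands were issued.
    alpha      : smoothing factor (illustrative value, an assumption).
    Returns one EMA value per command once `window` rates are available.
    """
    rates = 1.0 / np.diff(timestamps)       # instantaneous rates 1/(t_i - t_{i-1})
    ema, f = [], None
    for k in range(window, len(rates) + 1):
        y = rates[k - window:k].mean()      # windowed mean rate Y_t
        f = y if f is None else alpha * y + (1 - alpha) * f
        ema.append(f)
    return np.array(ema)

# Commands issued every 0.5 s give a steady 2 Hz input rate:
freqs = ema_input_frequency(np.arange(0.0, 10.0, 0.5))
```

Bursts of rapid commands raise the EMA; the smoothing factor controls how quickly it tracks such bursts versus averaging them out.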
\subsubsection{Robot execution}
\begin{itemize}
\item Trajectory smoothness: The continuity of the end-effector's trajectory. We measure the smoothness of the end-effector trajectories using SPARC, where $V(\omega)$ is the Fourier magnitude spectrum of end-effector velocities, and $\hat{V}(\omega)$ is the normalized spectrum of end-effector velocities.
\end{itemize}
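Both smoothness metrics use the same SPARC computation, applied either to user input velocities or to end-effector velocities. A minimal numerical sketch is below; the cutoff frequency and amplitude threshold are illustrative assumptions (common SPARC defaults), not values given in the text:

```python
import numpy as np

def sparc(speed, dt, fc=10.0, amp_th=0.05):
    """Spectral Arc Length of a speed profile; more negative = less smooth.

    speed  : 1-D array of speed samples (user input or end-effector).
    dt     : sample period in seconds.
    fc     : maximum cutoff frequency (Hz); amp_th : amplitude threshold V-bar.
    """
    n = int(2 ** np.ceil(np.log2(len(speed)) + 4))   # zero-padded FFT length
    freqs = np.arange(n) / (dt * n)
    mag = np.abs(np.fft.fft(speed, n))
    sel = freqs <= fc                                # keep spectrum below fc
    f, v = freqs[sel], mag[sel] / mag[sel].max()     # normalized spectrum V-hat
    above = np.nonzero(v >= amp_th)[0]               # adaptive cutoff omega_c
    f, v = f[: above[-1] + 1], v[: above[-1] + 1]
    # arc length of the normalized spectrum over the scaled frequency axis
    return float(-np.sum(np.sqrt((np.diff(f) / (f[-1] - f[0])) ** 2
                                 + np.diff(v) ** 2)))
```

A bell-shaped speed profile gives a compact spectrum and hence a short arc; adding higher-frequency intermittency spreads the spectrum and drives the SPARC value more negative.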
\section{RESULTS}\label{sec:results}
For each metric, the Shapiro-Wilk test of normality was performed to ensure all dependent variables were approximately normally distributed. Both group sizes and variances were unequal, therefore we used a general linear model (GLM) to determine any inter-group significant interactions. For within-group interactions, we used Tukey's multiple comparison procedure. We used the Pearson's correlation coefficient to determine the strength and direction of any correlations between task difficulty and the objective metrics (R-value in Tables~\ref{table:corr} and~\ref{table:perf}).
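As a sketch of the correlation step, Pearson's coefficient can be computed directly from paired samples. The data below are synthetic and for illustration only (the hypothetical relationship between difficulty and mode switches is an assumption, not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between a metric and difficulty scores."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    return float(np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)))

# Synthetic illustration: a metric that co-varies with perceived difficulty
rng = np.random.default_rng(0)
difficulty = rng.normal(10.0, 2.0, 144)                  # e.g. summed Likert scores
switches = 3.0 * difficulty + rng.normal(0.0, 4.0, 144)  # positively related metric
r = pearson_r(difficulty, switches)
```

The sign of $r$ gives the direction of the relationship and its magnitude the strength, matching the R-values reported in Tables~\ref{table:corr} and~\ref{table:perf}.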
\clearpage
\subsection{Subjective Measure}\label{sec:results_subjective}
\begin{wrapfigure}{r}{0.35\textwidth}
\begin{center}
\includegraphics[width=0.3\textwidth]{fig/DifficultyV3.jpg}
\caption{Analysis of user-perceived difficulty between all subjects and interfaces.}
\label{fig:difficulty}
\end{center}
\end{wrapfigure}
Our preliminary results show that the user's perceived difficulty in the uninjured group is strongly dependent on the interface used. However, the SCI group reported only moderately significantly greater difficulty when using the sip/puff interface compared to the head array switch. Furthermore, there is a statistically significant difference in perceived task difficulty while using the 3-axis joystick between the spinal cord injured and uninjured groups (Fig.~\ref{fig:difficulty}). The implications of this finding for the adaptable autonomy algorithm are two-fold: (1) it gives credence to the suggestion that the adaptable autonomy paradigm should be designed specifically from data collected from the motor-impaired group, and (2) it is possible to perform some of the customization in advance based on the type of interface being used.
\begin{table}[H]
\caption{Factor Correlations with Task Difficulty}
\label{table:corr}
\centering
\begin{threeparttable}
\begin{tabular}{|c|c|c|c|c|}
\cline{2-5}
\multicolumn{1}{c}{} &\multicolumn{4}{|c|}{\textbf{Task Difficulty}} \\ \cline{2-5}
\multicolumn{1}{c}{} &\multicolumn{2}{|c|}{\textbf{SCI}} &\multicolumn{2}{c|}{\textbf{Uninjured}} \\ \cline{2-5}
\multicolumn{1}{c|}{} & R & p-value & R & p-value \\ \hline
\textbf{HRV} & -0.1429 & 0.1766 & 0.0963 & 0.2456 \\ \hline
\textbf{User Input Frequency} & \textbf{0.4155} & \textbf{\textless0.0001} & \textbf{0.4516} & \textbf{\textless0.0001} \\ \hline
\textbf{\# Mode Switches} & 0.2562 & 0.0142 & \textbf{0.4940} & \textbf{\textless0.0001} \\ \hline
\textbf{Input Smoothness} & \textbf{-0.3696} & \textbf{\textless0.001} & \textbf{-0.4760} & \textbf{\textless0.0001} \\ \hline
\textbf{Trajectory Smoothness} & \textbf{-0.4725} & \textbf{\textless0.0001} & \textbf{-0.6483} & \textbf{\textless0.0001} \\ \hline
\textbf{Task Completion Time} & 0.2886 & 0.005 & \textbf{0.5521} & \textbf{\textless0.0001} \\ \hline
\end{tabular}
\end{threeparttable}
\end{table}
\subsection{Physiological Measures}\label{sec:results_physiological}
\begin{wrapfigure}{r}{0.35\textwidth}
\begin{center}
\includegraphics[width=0.3\textwidth]{fig/n_HRV-V2.jpg}
\caption{Heart rate variability data averaged over subjects and tasks shows significant differences between subject groups for Sip/Puff interface.}
\label{fig:hrv}
\end{center}
\end{wrapfigure}
Our analysis of the heart rate variability data showed no evidence of any correlations with task difficulty (Table~\ref{table:corr}, row HRV).
As seen in Figure~\ref{fig:hrv}, however, there are significant differences in heart rate variability between the two subject groups while using the Sip/Puff interface. The implications of this for the adaptable autonomy algorithms are: (1) the physiological measure of heart rate variability may not be as informative as the other metrics, which will be measurable in real-world operation, and
(2) there is additional support for the first implication from Section~\ref{sec:results_subjective}, namely that the adaptable autonomy paradigm for the target SCI group should be designed specifically from data collected from SCI subjects.
\subsection{User Interaction Measures}\label{sec:results_user}Figure~\ref{fig:user_interact} shows that the user interaction metrics (input frequency, mode switches, input smoothness) are strongly dependent on the type of interface being used. Somewhat surprisingly, with the exception of frequency of input commands using the joystick interface, having a spinal cord injury does not seem to affect how the user provides control commands to the robotic arm. As seen in Table~\ref{table:corr}, the user interaction metrics (rows User Input Frequency, $\#$ Mode Switches, Input Smoothness) can be used to predict task difficulty in both groups---although stronger correlations are observed in the uninjured group.
\begin{figure}[H]
\centering
\vspace{0.2cm}
\includegraphics[width=0.3\textwidth]{fig/F_IP.jpg}
\includegraphics[width=0.3\textwidth]{fig/Num_MS.jpg}
\includegraphics[width=0.32\textwidth]{fig/Comm_Smoothness-V2.jpg}
\caption{Analysis of user interaction with the robotic arm. \textit{Top:} Frequency of user input commands; \textit{Middle:} Number of mode switches; \textit{Bottom:} Smoothness (continuity) of user input commands.}
\label{fig:user_interact}
\end{figure}
The results from analyzing the user interactions metrics have the following implications for the autonomy allocation paradigm: (1) an increase in user input frequency, an increase in the number of mode switches, or a decrease in user input smoothness all can signal an increase in task difficulty---which can be used as a cue for the autonomy to either increase or modify assistance, (2) the second implication of Section~\ref{sec:results_subjective} is corroborated as the results further show it is possible to perform some of the customization in advance by taking into account the control interface in use, and (3) if the autonomy allocation paradigm relies solely on the user interaction metrics, it may be generally acceptable to use a mixture of data from both SCI and uninjured groups to train the model on a larger number of data.
\subsection{Robot Execution Measure}\label{sec:results_robot}
Table~\ref{table:corr} shows that the smoothness of the end-effector's trajectory is strongly correlated with perceived task difficulty in both groups. As shown in Table~\ref{table:perf}, trajectory smoothness is strongly correlated with task completion times and weakly correlated with successful task completion. This shows that the trajectory smoothness can be an important factor in measuring user performance.
\begin{table}[H]
\centering
\caption{Performance Correlations with Trajectory Smoothness}
\label{table:perf}
\begin{threeparttable}
\begin{tabular}{|l|c|c|c|c|}
\cline{2-5}
\multicolumn{1}{c}{} &\multicolumn{4}{|c|}{\textbf{Trajectory Smoothness}} \\ \cline{2-5}
\multicolumn{1}{c}{} &\multicolumn{2}{|c|}{\textbf{SCI}} &\multicolumn{2}{c|}{\textbf{Uninjured}} \\ \cline{2-5}
\multicolumn{1}{c|}{} & R & p-value & R & p-value \\ \hline
\textbf{Success} & 0.29 & \textless0.001 & \textbf{0.36} & \textbf{\textless0.0001} \\ \hline
\textbf{Time} & \textbf{-0.71} & \textbf{\textless0.0001} & \textbf{-0.81} & \textbf{\textless0.0001} \\ \hline
\end{tabular}
\end{threeparttable}
\end{table}
We furthermore found substantial variation in trajectory smoothness across subject groups, interfaces, and individuals. For example, Figure~\ref{fig:traj} shows trajectories from two users performing the same task using the same interface, with a clear difference between the two trajectories. Figure~\ref{fig:robot_exec} also demonstrates that trajectory smoothness is strongly dependent on the interface used and on whether the user has a spinal cord injury.
\begin{figure}[h]
\vspace{0.2cm}
\includegraphics[width=0.4\textwidth]{fig/scooping.jpg}
\includegraphics[width=0.4\textwidth]{fig/scooping2.jpg}
\centering
\caption{Sample end-effector trajectories from two subjects performing the same task using the same interface.}
\label{fig:traj}
\end{figure}
The implications from the results of analyzing the robot execution metric are: (1) trajectory smoothness can be used as a cue for increasing assistance when smoothness decreases, because not only is the performance quality decreasing, but the user also is experiencing an increase in difficulty, and (2) the second implication from Sections~\ref{sec:results_subjective} and~\ref{sec:results_user} gains further support from these results.
\begin{wrapfigure}{r}{0.36\textwidth}
\includegraphics[width=0.32\textwidth]{fig/Traj_Smoothness-V2.jpg}
\centering
\caption{Analysis of end-effector trajectory smoothness between all subjects and interfaces}
\label{fig:robot_exec}
\end{wrapfigure}
\section{DISCUSSION}\label{sec:discussion}
The preliminary results can be interpreted as indicative of a relationship between the subjective measure of perceived task difficulty and objective measures that will be available to our assistive autonomy system at runtime during real-world operation---specifically, objective measures that are based on the robot execution (trajectory smoothness) and user interaction with the robot control system (frequency of input commands, number of mode switches, and user input smoothness). Although the preliminary sample size was small, the correlations were strong in the uninjured group, and moderately strong in the spinal cord injured group. These results are encouraging because, unlike perceived task difficulty, the performance measures of user-interaction and robot execution can be computed in real-time and without querying the user. These behavioral and operational metrics may thus be used online to predict whether a user is experiencing changes in the level of perceived difficulty. Additionally, the results showed strong correlations between the robot execution measure of trajectory smoothness and the performance measure of time. These results suggest that each of these factors---frequency of input commands, number of mode switches, user input smoothness, and trajectory smoothness---are potentially useful for modulating autonomy, because they are indicative of user-perceived difficulty.
Heart rate variability is highly age- and gender-dependent~\cite{Luque-Casado2013}. In general, the heart rate variability of men decreases with age, whereas the heart rate variability of women remains relatively unchanged~\cite{Luque-Casado2016}. In the samples we collected in this preliminary study, there was less variance in age among the uninjured group, and the gender was balanced. In comparison, the spinal cord-injured group had a higher mean age with much greater variance, and out of the eight participants there was only one female. The analysis of the heart rate variability data could have been affected by this imbalance. Further experimentation with a larger SCI group size and with gender- and age-matched controls is necessary.
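Which HRV statistic was analyzed is not specified in this excerpt; RMSSD (root mean square of successive RR-interval differences) is one common short-term time-domain measure. A purely illustrative sketch:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals (ms),
    a common time-domain heart rate variability statistic.
    NOTE: one illustrative HRV measure; the study's choice is not stated."""
    if len(rr_intervals_ms) < 2:
        raise ValueError("need at least two RR intervals")
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Identical intervals -> zero variability; alternating intervals -> high.
assert rmssd([800, 800, 800, 800]) == 0.0
assert rmssd([800, 900, 800, 900]) == 100.0
```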
One of the aims of this preliminary study was to determine which, if any, of the objective measures were dependent on the injury status of a user. The analysis of the data indicated that the measures of task difficulty, frequency of user input commands, and robot execution were dependent on whether the user had a spinal cord injury. These results imply that an adaptable autonomy paradigm should take into account not only the type of interface, but also should be designed specifically for the target motor-impaired end-user group. This is an important finding, as much of the previous research has conducted experiments with mostly expert or uninjured participants. Though recruiting spinal cord-injured subjects can be difficult, and the experiments costly and time consuming, data from uninjured subjects will not appropriately inform the design of adaptable autonomy algorithms using the above measures.
An additional aim of the study was to determine how, if at all, the subjective and objective measures changed with the interface used for teleoperation. The data indicated that perceived task difficulty, user interaction measures, and robot execution measures were all dependent on the interface being used to control the robot arm. This implies that some of the customization can occur \textit{a priori} for the control interface in use.
Our future work will perform more extensive user studies that employ additional assistive robot platforms and gather data that further characterizes user state according to changes in the user's skill level, task difficulty, and stress levels. Our long term goal is to then use this characterization to inform the design of adaptable autonomy algorithms.
\section{CONCLUSION}\label{sec:conclusion}
In this paper we have presented our preliminary study on characterizing teleoperation commands of people controlling an assistive robotic arm. We investigated the potential usefulness of a variety of performance measures that can be computed in real time as predictors of whether a user's perceived difficulty in controlling the robotic assistive device is changing. The results of this study found correlations between the objective metrics and the subjectively measured task difficulty, which can inform when the autonomy should step in and assist.
We also found evidence that the control characteristics of spinal cord-injured participants differed from those of the uninjured participants. This can be interpreted as evidence that to enhance the efficacy of an adaptable algorithm, the training data used must be group specific. We also found that some of the customization can happen \textit{a priori} based on the type of control interface used.
The next iteration of this work will focus on gathering more data from the target end-user group. Additionally, in our future work we will use the identified measures as cues for when the amount of autonomy in a shared-control assistive system should change. We will evaluate the developed algorithms with end-user subject studies.
\section*{ACKNOWLEDGMENT}
The authors would like to thank Jessica Pedersen for recruiting subjects with spinal cord injury. The authors would also like to thank Chaithanya Krishna Mummidisetty for his valuable comments and helpful suggestions.
This material is based upon work supported by the National Science Foundation under Grant No. 1552706. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
\flushbottom
\bibliographystyle{unsrt}
\section*{Acknowledgement}
This work was partially supported by Beijing Natural Science Foundation under Grant No. 4222027, National Natural Science Foundation of China under Grant No. 61872369 and 82161148011, Beijing Outstanding Young Scientist Program under Grant No. BJJWZYJH012019100020098, and the Outstanding Innovative Talents Cultivation Funded Programs 2021 of Renmin University of China.
We are grateful to Amazon Web Services for providing efficient GPU computing resource support and technical support for this NLP research project. Xin Zhao is the corresponding author.
\section*{\Large{\centering{Supplementary Material for \emph{ElitePLM\\[25pt]}}}}
\end{@twocolumnfalse}
]
\section{Conclusion}
This paper investigates the evaluation of the general language abilities of pretrained language models. We define four kinds of language abilities of PLMs, \emph{i.e.,}\xspace memory, comprehension, reasoning, and composition, and measure ten widely-used PLMs within five categories. For each language ability, we select multiple representative tasks to quantitatively evaluate the performance of PLMs. Our experimental results demonstrate that PLMs with varying objectives and strategies are good at different ability tests. Note that our final predicted outputs of PLMs can also be reused as an open resource for analyzing PLMs' language abilities in more depth and granularity. We believe this study will benefit future work on choosing or designing suitable PLMs for target NLP tasks based on their properties.
\section{Experiments}
In this section, we first set up baselines, and then report the results and analysis on four ability tests.
\subsection{Models}
As mentioned before, we compare the performance of ten publicly released PLMs from five categories: (1) \textit{Bidirectional LMs}: BERT~\citep{DevlinCLT19}, RoBERTa~\citep{liu2019roberta}, and ALBERT~\citep{LanCGGSS20}; (2) \textit{Unidirectional LMs}: GPT-2~\citep{radford2019language}; (3) \textit{Hybrid LMs}: XLNet~\citep{YangDYCSL19} and UniLM~\citep{00040WWLWGZH19}; (4) \textit{Knowledge-enhanced LMs}: ERNIE~\citep{ZhangHLJSL19}; (5) \textit{Text-to-Text LMs}: BART~\citep{LewisLGGMLSZ20}, T5~\citep{RaffelSRLNMZLL20}, and ProphetNet~\citep{QiYGLDCZ020}.
We implement these models and ability tests mostly with the huggingface~\citep{wolf-etal-2020-transformers}, fairseq~\citep{ott2019fairseq}, and jiant~\citep{phang2020jiant} libraries. To reflect the true level of language abilities, we adopt the best hyper-parameter values reported in the original paper of each PLM.
\subsection{Memory Tests}
\label{sec-memory}
\paratitle{Datasets and Metrics.} The goal of memory tests is to assess how much knowledge and how many language patterns PLMs have memorized during pretraining. For this purpose, we adopt two datasets for evaluation, \emph{i.e.,}\xspace LAMA~\citep{petroni2019language} and English Wikipedia (2,500M words). Specifically, LAMA is a knowledge probe corpus containing a set of knowledge facts, where facts are either subject-relation-object triples or question-answer pairs. Each fact is converted into a cloze statement where the subject or object entity is masked. Wikipedia is one of the widely-used pretraining corpora for our selected PLMs (except GPT-2 and T5). Therefore, to conduct a fair comparison, we continuously train GPT-2 and T5 on Wikipedia using their pretraining objectives. Similar to LAMA, we randomly sample 100,000 texts from Wikipedia and then mask a proportion of $15\%$ of the tokens, following BERT. By querying PLMs for the missing tokens on Wikipedia and LAMA, we can test the language patterns and factual knowledge in PLMs' memory. For metrics, we use \textit{Mean Precision at One ($P$@$1$)} for predicting missing tokens.
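Concretely, the $P$@$1$ computation reduces to checking whether the model's top-ranked candidate for each masked slot equals the gold token. A minimal sketch (in practice the ranked candidates would come from a fill-mask query to each PLM; the toy predictions below are made up for illustration):

```python
def precision_at_1(ranked_predictions, gold_tokens):
    """Mean P@1: fraction of queries whose top-ranked candidate
    equals the gold token (case-insensitive).

    ranked_predictions: one candidate list per query, best first,
    e.g. the outputs of a fill-mask model for each cloze statement.
    """
    assert len(ranked_predictions) == len(gold_tokens)
    hits = sum(
        1 for cands, gold in zip(ranked_predictions, gold_tokens)
        if cands and cands[0].lower() == gold.lower()
    )
    return hits / len(gold_tokens)

# Toy LAMA-style queries, e.g. "Dante was born in [MASK]."
preds = [["Florence", "Rome"], ["Paris", "London"], ["bones", "toys"]]
golds = ["Florence", "London", "bones"]
print(precision_at_1(preds, golds))  # 2 of 3 top-1 hits
```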
We also measure memory efficiency as performance \emph{w.r.t.} the number of training epochs: the more efficient a model is, the fewer epochs it needs to achieve a reference performance.
\begin{figure}[t]
\centering
\subfigure[Google-RE]{
\centering
\includegraphics[width=0.22\textwidth]{memory-1.pdf}
}
\subfigure[T-REx]{
\centering
\includegraphics[width=0.22\textwidth]{memory-2.pdf}
}
\centering
\caption{Memory efficiency ($P$@1) of five PLMs on Google-RE and T-REx datasets.}
\label{fig:memory-efficiency}
\end{figure}
\begin{table*}[t]
\small
\centering
\begin{tabular}{lrrrrrrrrrr}
\toprule
\multirow{2.5}*{\textbf{Models}}& \multicolumn{3}{c}{\textbf{Bidirectional}} & \multicolumn{1}{c}{\textbf{Uni.}}& \multicolumn{2}{c}{\textbf{Hybrid}} & \multicolumn{1}{c}{\textbf{KE}} & \multicolumn{3}{c}{\textbf{Text-to-Text}} \\
\cmidrule(lr){2-4}\cmidrule(lr){5-5}\cmidrule(lr){6-7}\cmidrule(lr){8-8}\cmidrule(lr){9-11}
& BERT & RoBERTa & ALBERT & GPT-2 & XLNet & UniLM & ERNIE & T5 & BART & ProphetNet \\
\midrule
\textbf{Vocab Size} & 28996 & 50265 & 30000 & 50257 & 32000 & 28996 & 28996 & 32100 & 50295 & 30522 \\
\cmidrule{1-11}
\textbf{LAMA} \\
\makecell[r]{Google-RE} & \textbf{11.0} & 7.1 & 3.3 & 3.9 & {\ul 10.0} & 9.6 & 1.3 & 4.0 & 9.4 & 0.1 \\
\makecell[r]{T-REx} & \textbf{29.2} & 23.9 & 21.0 & 12.0 & {\ul 28.9} & 28.4 & 13.4 & 21.7 & 15.8 & 1.1 \\
\makecell[r]{ConceptNet} & 19.1 & \textbf{21.6} & {\ul 20.0} & 6.4 & 19.5 & 18.3 & 13.0 & 17.1 & 7.7 & 0.3 \\
\makecell[r]{SQuAD} & 17.0 & \textbf{21.0} & 20.6 & 5.6 & {\ul 20.8} & 17.4 & 8.1 & 11.7 & 3.1 & 0.7 \\
\cmidrule{1-11}
\textbf{Wikipedia} & 70.9 & {\ul 71.1} & 63.9 & 42.7 & 68.7 & \textbf{71.5} & 45.7 & 65.0 & 47.8 & 31.3 \\
\bottomrule
\end{tabular}
\caption{Memory test results on LAMA and Wikipedia datasets (test set). These results are based on the \textsc{Large} version of each PLM and more results can be found in the Appendix~\ref{memory}. Bold and underlined fonts denote the best and the second best performance of a PLM (the same as below).}
\label{tab:memory-main}
\end{table*}
\begin{table*}[t]
\small
\centering
\begin{tabular}{c c r r r r r}
\toprule
\textbf{Relation} & \textbf{Template} & \textbf{BERT} & \textbf{RoBERTa} & \textbf{GPT-2} & \textbf{BART} & \textbf{T5} \\
\midrule
\multirow{3}*{<\texttt{[X]}, place\_of\_death, \texttt{[Y]}>} & \texttt{[X]} died in \texttt{[MASK]}. & 13.98 & 0.46 & 0.15 & 11.09 & 4.19 \\
& \texttt{[X]} passed away in \texttt{[MASK]}. & 13.46 & 0.46 & 0.62 & 3.54 & 1.51 \\
& \texttt{[X]}'s place of death was \texttt{[MASK]}. & 3.27 & 0.00 & 0.00 & 0.00 & 1.51 \\
\midrule
\multirow{3}*{<\texttt{[X]}, place\_of\_birth, \texttt{[Y]}>} & \texttt{[X]} was born in \texttt{[MASK]}. & 16.07 & 12.52 & 7.53 & 14.77 & 6.32 \\
& \texttt{[X]} was born in the place of \texttt{[MASK]}. & 2.83 & 1.29 & 0.00 & 0.00 & 1.39 \\
& \texttt{[X]}'s place of birth was \texttt{[MASK]}. & 12.16 & 1.87 & 0.00 & 0.00 & 3.12 \\
\bottomrule
\end{tabular}
\caption{The impact of template on eliciting PLMs' stored knowledge.}
\label{tab:memory-prompt}
\end{table*}
\paratitle{Results and Analysis}. To evaluate how much text PLMs have recalled during pretraining, we directly test PLMs on Wikipedia and LAMA without fine-tuning, similar to zero-shot learning. The results on the $P$@$1$ metric are shown in Table~\ref{tab:memory-main}. Compared with bidirectional and hybrid LMs (\emph{e.g.,}\xspace BERT and XLNet), GPT-2 uses auto-regressive self-attention where every token can only attend to the context to its left. This unidirectional training objective naturally limits the performance of GPT-2 in terms of memorizing information. It has been previously reported that PLMs can remember more information by scaling up the model size~\cite{brown2020language}. However, in our tests, BART-large (400M) achieves worse results than RoBERTa-base (125M) with the same training corpus and similar vocabulary sizes (50,295 vs.\ 50,265). During pretraining, RoBERTa adopts bidirectional objectives and novel strategies such as larger training batches. We can conclude that, as opposed to model size, \textbf{training objectives and strategies shape the way that PLMs memorize information, having a significant impact on PLMs' memory ability}. Besides, we clearly observe that all PLMs achieve their best LAMA results on T-REx (created from Wikipedia triples) and perform relatively well on Wikipedia. This implies that PLMs indeed remember a large proportion of the knowledge and language patterns in their pretraining corpora.
To test memory efficiency, we fine-tune five models, BERT, ALBERT, GPT-2, BART, and XLNet, for several epochs. As shown in Figure~\ref{fig:memory-efficiency}, a model with a bidirectional training objective, such as BERT, needs fewer epochs than models with other kinds of objectives to achieve a reference performance. This further implies that the bidirectional training objective also improves memory efficiency, since bidirectional language modeling helps PLMs capture language patterns more quickly.
Based on the memory test results, we further analyze how to effectively elicit information from PLMs' memory. LAMA hand-crafts templates to test PLMs by filling the \texttt{[MASK]} token. Therefore, we conduct a pilot study on designing different templates for two relations in Google-RE. Table~\ref{tab:memory-prompt} shows that different templates can result in substantial differences in eliciting PLMs' memory. The bidirectional LMs, \emph{e.g.,}\xspace BERT, show relatively strong adaptability to varying templates, further verifying their strength in memory ability. Therefore, with large-scale knowledge stored in PLMs, deriving an effective and appropriate method to elicit it is a key challenge.
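The templating step itself is mechanical: each fact is rendered into a cloze statement by slotting the subject into a relation-specific pattern, as in Table~\ref{tab:memory-prompt}. A minimal sketch (the helper name and dictionary layout are our own illustration):

```python
# Relation-specific cloze templates, mirroring Table "memory-prompt".
TEMPLATES = {
    "place_of_death": [
        "[X] died in [MASK].",
        "[X] passed away in [MASK].",
        "[X]'s place of death was [MASK].",
    ],
    "place_of_birth": [
        "[X] was born in [MASK].",
    ],
}

def cloze_statements(subject, relation):
    """Render a <subject, relation, ?> query into every cloze template
    for that relation; the [MASK] slot is what the PLM must fill."""
    return [t.replace("[X]", subject) for t in TEMPLATES[relation]]

print(cloze_statements("Dante", "place_of_birth"))
# ['Dante was born in [MASK].']
```

Probing the same fact with each rendered statement and comparing $P$@$1$ across templates reproduces the kind of sensitivity analysis shown in the table.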
\begin{table*}[t]
\small
\centering
\begin{tabular}{lcccccccccc}
\toprule
\multirow{2}{*}{\textbf{Models}}& \textbf{WNLI} & \textbf{CoLA} & \textbf{MNLI} & \textbf{RTE} & \textbf{QNLI} & \textbf{SST-2} & \textbf{QQP} & \textbf{STS-B} & \textbf{MRPC} & \textbf{Avg.} \\
& Acc. & Matt. & M./MM. & Acc. & Acc. & Acc. & F1/Acc. & P/S Corr. & F1/Acc. \\
\midrule
BERT\textsubscript{\textsc{base}} & 65.1 & 52.1 & 84.6/83.4 & 66.4 & 90.5 & 93.5 & 69.9/88.2 & 77.4/73.7 & 79.0/85.1 & 76.5 \\
BERT\textsubscript{\textsc{large}} & 65.1 & 60.5 & 86.7/85.9 & 70.1 & 92.7 & 94.9 & 72.1/89.3 & 87.6/86.5 & 85.4/89.3 & 80.5 \\
RoBERTa\textsubscript{\textsc{base}} & 65.1 & 61.4 & 87.4/87.2 & 75.1 & 92.9 & 95.7 & 72.5/89.4 & 89.2/88.5 & 87.5/90.7 & 81.8 \\
RoBERTa\textsubscript{\textsc{large}} & \underline{89.0} & \underline{67.8} & \underline{90.8}/\underline{90.2} & \underline{88.2} & \underline{98.9} & \underline{96.7} & \underline{74.3}/\underline{90.2} & \underline{92.2}/\underline{91.9} & \underline{89.9}/\underline{92.4} & \underline{88.5} \\
ALBERT\textsubscript{\textsc{xlarge}} & 65.8 & 58.2 & 35.6/36.5 & 62.5 & 94.2 & 95.1 & 71.7/88.9 & 87.6/86.6 & 69.8/80.3 & 72.7 \\
ALBERT\textsubscript{\textsc{xxlarge}} & 64.4 & 64.7 & 89.7/89.6 & 70.4 & 95.3 & 96.0 & 70.7/88.4 & 91.3/90.6 & 68.1/80.4 & 80.6 \\
GPT-2\textsubscript{\textsc{small}} & 54.8 & 33.8 & 81.1/81.4 & 62.1 & 86.7 & 91.2 & 69.8/87.9 & 79.0/76.5 & 76.9/83.6 & 71.9 \\
GPT-2\textsubscript{\textsc{medium}} & 54.1 & 50.5 & 84.8/84.5 & 63.6 & 91.2 & 92.1 & 71.4/88.6 & 84.3/82.7 & 80.0/85.5 & 75.8 \\
XLNet\textsubscript{\textsc{base}} & 58.9 & 26.2 & 86.1/85.3 & 59.9 & 91.3 & 94.0 & 71.5/88.9 & 83.9/82.9 & 84.3/88.3 & 74.0 \\
XLNet\textsubscript{\textsc{large}} & \textbf{92.5} & \textbf{70.2} & \textbf{90.9}/\textbf{90.9} & \textbf{88.5} & \textbf{99.0} & \textbf{97.1} & \textbf{74.7}/\textbf{90.4} & \textbf{93.0}/\textbf{92.6} & \textbf{90.5}/\textbf{92.9} & \textbf{89.5} \\
UniLM\textsubscript{\textsc{base}} & 65.1 & 49.0 & 83.0/82.2 & 60.3 & 88.7 & 92.3 & 70.7/88.4 & 82.3/81.4 & 84.3/88.7 & 76.2 \\
UniLM\textsubscript{\textsc{large}} & 65.1 & 61.1 & 87.0/85.9 & 70.9 & 92.7 & 94.5 & 71.5/89.2 & 86.6/85.3 & 85.2/89.1 & 80.5 \\
ERNIE\textsubscript{\textsc{base}} & 65.1 & 52.3 & 84.0/83.2 & 68.8 & 91.3 & 93.5 & 70.5/88.4 & 85.1/83.8 & 80.3/85.9 & 70.7 \\
T5\textsubscript{\textsc{base}} & 78.8 & 51.1 & 87.1/86.2 & 80.1 & 93.7 & 95.2 & 72.6/89.4 & 89.4/88.6 & 87.5/90.7 & 82.7 \\
T5\textsubscript{\textsc{large}} & 85.6 & 61.2 & 89.9/89.6 & 87.2 & 94.8 & 96.3 & 73.9/89.9 & 89.9/89.2 & 89.8/92.4 & 86.4 \\
BART\textsubscript{\textsc{base}} & 65.1 & 52.8 & 85.1/84.3 & 69.5 & 92.6 & 94.4 & 72.5/89.7 & 87.6/86.6 & 86.1/89.5 & 79.5 \\
BART\textsubscript{\textsc{large}} & 58.9 & 62.4 & 90.2/89.3 & 83.5 & 94.8 & 96.3 & 73.6/90.1 & 91.1/90.4 & 87.8/91.1 & 83.1 \\
ProphetNet\textsubscript{\textsc{large}} & 52.1 & 24.2 & 81.3/80.8 & 51.3 & 93.2 & 93.6 & 70.6/88.1 & 73.5/72.3 & 69.7/80.8 & 69.2 \\
\bottomrule
\end{tabular}
\caption{Comprehension tests results on GLUE (test set). All results are scored by the GLUE evaluation server\protect\footnotemark. }
\label{tab:comprehension-main}
\end{table*}
\footnotetext{\url{https://gluebenchmark.com/}}
\begin{figure}[t]
\centering
\subfigure[CoLA]{
\centering
\includegraphics[width=0.22\textwidth]{few-shot-1.pdf}
}
\subfigure[QNLI]{
\centering
\includegraphics[width=0.22\textwidth]{few-shot-2.pdf}
}
\centering
\caption{Few-shot results of four PLMs on CoLA and QNLI tasks.}
\label{fig:comprehension-few-shot}
\end{figure}
\subsection{Comprehension Tests}
\label{sec-comprehension}
\paratitle{Datasets and Metrics.} In comprehension tests, we take into account three aspects of comprehension ability, including vocabulary, background knowledge, and linguistic structures. Therefore, we employ five datasets for comprehension tests, \emph{i.e.,}\xspace GLUE~\cite{WangSMHLB19}, SuperGLUE~\cite{WangPNSMHLB19}, SQuAD v1.1~\cite{RajpurkarZLL16}, SQuAD v2.0~\cite{RajpurkarJL18}, and RACE~\cite{LaiXLYH17}. Among these datasets, GLUE and SuperGLUE are two widely-used comprehension benchmarks. Several tasks, like word sense disambiguation and coreference resolution, can assess PLMs' understanding of vocabulary meaning and grammatical structure of a text. By contrast, SQuAD v1.1\&v2.0, and RACE are three popular question answering datasets. To answer the natural language questions, PLMs should be aware of the background knowledge about some particular topic. For example, to answer the question ``\emph{what can be used as rewards for dog training?}'', the background knowledge ``\emph{dogs like bones}'' will be helpful for PLMs to answer ``\emph{bones}''. For evaluation, we report the corresponding metrics results for each task, such as the \textit{Matthews corr.} metric for CoLA.
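As a concrete example of one of these metrics, the Matthews correlation reported for CoLA can be computed directly from the binary confusion counts; a minimal self-contained sketch:

```python
import math

def matthews_corr(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1),
    the CoLA metric: +1 perfect, 0 chance-level, -1 total disagreement."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Convention: return 0 when any marginal is empty (undefined case).
    return (tp * tn - fp * fn) / denom if denom else 0.0

assert matthews_corr([1, 0, 1, 0], [1, 0, 1, 0]) == 1.0
assert matthews_corr([1, 0, 1, 0], [0, 1, 0, 1]) == -1.0
```

Unlike plain accuracy, this metric stays near zero for a degenerate classifier that always predicts the majority class, which is why CoLA uses it.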
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{squad.pdf}
\caption{PLMs Performance on SQuAD v1.1\&v2.0 stratified by five types of answer.}
\label{fig:comprehension-answer-type}
\end{figure}
\paratitle{Results and Analysis}. Table~\ref{tab:comprehension-main} presents the results of the comprehension tests on the GLUE dataset (results on the other four datasets can be found in Appendix~\ref{comprehension}). The last column of this table indicates the average overall performance across all tasks. Interestingly, the models behaving well in memory tests (\emph{e.g.,}\xspace RoBERTa and XLNet) also present good results on many comprehension tasks. The results indicate that \textbf{the improvement on memory ability is beneficial for the performance of comprehension ability}, which is in line with our intuition. Compared with the bidirectional language modeling in BERT, the permutation language modeling used in XLNet (relying on all permutations of the factorization order) enables PLMs to learn more context, enhancing PLMs' understanding of text, which appears to be effective for good comprehension ability.
Among these tasks, we observe a significant performance drop on the linguistic acceptability task (CoLA), since its data distribution differs from the pretraining corpora~\cite{wang2021entailment}. This kind of sensitivity to unfamiliar tasks is also reflected in Figure~\ref{fig:comprehension-few-shot}, where the model performance on CoLA fluctuates more (ranging from 10 to 35) than on QNLI (ranging from 15 to 20). It indicates that \textbf{the performance of PLMs is closely related to the similarity of data distributions in pretraining and fine-tuning}. To address this challenge, it can help to adopt intermediate fine-tuning, which first fine-tunes PLMs on an intermediate dataset similar to the final target dataset and then transfers the tuned PLMs to the final dataset.
\begin{table*}[t]
\small
\centering
\begin{tabular}{lcccccccccc}
\toprule
\multirow{2.5}*{\textbf{Datasets}}& \multicolumn{3}{c}{\textbf{Bidirectional}} & \multicolumn{1}{c}{\textbf{Uni.}}& \multicolumn{2}{c}{\textbf{Hybrid}} & \multicolumn{1}{c}{\textbf{KE}} & \multicolumn{3}{c}{\textbf{Text-to-Text}} \\
\cmidrule(lr){2-4}\cmidrule(lr){5-5}\cmidrule(lr){6-7}\cmidrule(lr){8-8}\cmidrule(lr){9-11}
& BERT & RoBERTa & ALBERT & GPT-2 & XLNet & UniLM & ERNIE & T5 & BART & ProphetNet \\
\midrule
\textbf{CQA} & 55.9 & 72.2 & \textbf{80.0} & 60.8 & 62.9 & 62.3 & 54.1 & 69.8 & {\ul 75.8} & 21.3 \\
\textbf{ROCStories} & 90.2 & \textbf{97.4} & {\ul 97.1} & 59.9 & 93.8 & 86.9 & 84.7 & 91.4 & 91.7 & 82.2 \\
\textbf{SWAG} & 86.3 & {\ul 89.9} & \textbf{90.7} & 79.7 & 86.8 & 83.1 & 80.2 & 73.7 & 87.9 & 70.1 \\
\textbf{HellaSwag} & 47.3 & {\ul 85.2} & \textbf{90.1} & 60.4 & 79.7 & 46.7 & 44.5 & 79.1 & 76.6 & 26.4 \\
\textbf{SM-A} & 89.4 & \textbf{93.0} & 92.5 & 88.7 & 83.7 & 89.3 & 88.7 & {\ul 92.7} & 82.9 & 85.5 \\
\textbf{SM-B} & 85.8 & \textbf{92.3} & \textbf{92.3} & 73.4 & {\ul 88.7} & 86.4 & 87.7 & 88.2 & 67.9 & 78.0 \\
\textbf{ARCT} & 71.2 & 57.9 & 79.5 & 66.7 & {\ul 83.1} & 72.3 & 73.7 & 69.4 & \textbf{84.2} & 65.5 \\
\bottomrule
\end{tabular}
\caption{Reasoning tests results on seven datasets (test set). CQA is short for CommonsenseQA. SM-A and SM-B denote the Task A and Task B of Sense Making, respectively. We report the results of \textsc{Large} version for each model in this table and more results can be found in the Appendix~\ref{reasoning}.}
\label{tab:reasoning-main}
\end{table*}
To gain more insights into PLMs' comprehension ability, we choose four representative PLMs (\emph{i.e.,}\xspace BERT, RoBERTa, ALBERT, and BART) and humans, and analyze their performance across the answer types of SQuAD v1.1\&v2.0. The results in Figure~\ref{fig:comprehension-answer-type} show that PLMs perform well on simple answers such as dates and persons. For these categories of answers, there are usually only a few plausible candidates and most answers are single tokens. The models are more challenged by other, intricate answer types (\emph{e.g.,}\xspace noun and verb phrases), because there are many more plausible candidates and the answers span multiple tokens. Thus, improving PLMs' understanding of intricate named entities during the pretraining phase will likely benefit PLMs' comprehension ability later.
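Stratifying by answer type requires assigning each gold answer to a category; a rough heuristic sketch (our own illustrative rules, not the procedure actually used for Figure~\ref{fig:comprehension-answer-type}, which would typically rely on POS/NER taggers):

```python
import re

def answer_type(answer):
    """Coarse heuristic bucket for a SQuAD answer string.
    Illustrative only -- real analyses use POS/NER taggers."""
    if re.fullmatch(r"\d{4}", answer) or re.search(
        r"\b(January|February|March|April|May|June|July|"
        r"August|September|October|November|December)\b", answer):
        return "date"
    if re.fullmatch(r"[\d.,%$]+", answer):
        return "number"
    if answer[:1].isupper() and len(answer.split()) <= 3:
        return "entity"      # person/place-like proper noun
    return "phrase"          # multi-token noun/verb phrase

assert answer_type("1867") == "date"
assert answer_type("Albert Einstein") == "entity"
assert answer_type("the process of photosynthesis") == "phrase"
```

Grouping per-example F1 by these buckets then yields the per-type comparison in the figure.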
\begin{figure}[t]
\centering
\subfigure[BERT\textsubscript{\textsc{large}}]{
\centering
\includegraphics[width=0.22\textwidth]{reason-multi-task-1.pdf}
}
\subfigure[T5\textsubscript{\textsc{large}}]{
\centering
\includegraphics[width=0.22\textwidth]{reason-multi-task-2.pdf}
}
\centering
\caption{Heatmaps of two-stage transfer learning.}
\label{fig:reasoning-intermediate}
\vspace{-0.2cm}
\end{figure}
\subsection{Reasoning Tests}
\label{sec-reasoning}
\paratitle{Datasets and Metrics.} In reasoning tests, we mainly consider three forms of reasoning, \emph{i.e.,}\xspace commonsense reasoning, deductive reasoning, and abductive reasoning, focusing on commonsense utilization, conclusion induction, and reason derivation, respectively. For evaluation, we choose six reasoning datasets, namely CommonsenseQA~\cite{TalmorHLB19}, ROCStories~\cite{mostafazadeh2016corpus}, SWAG~\cite{ZellersBSC18}, HellaSwag~\cite{ZellersHBFC19}, Sense Making~\cite{WangLZLG19}, and ARCT~\cite{HabernalWGS18a}. Specifically, CommonsenseQA requires PLMs to reason about commonsense knowledge in human experience of everyday life~\cite{liu2004conceptnet}. ROCStories, SWAG, HellaSwag, and Sense Making Task A are concerned with deriving the conclusions of stories and events, while Sense Making Task B and ARCT focus on identifying the reason behind a statement. For evaluation, we report the \textit{Accuracy} results for each dataset.
\paratitle{Results and Analysis}. Table~\ref{tab:reasoning-main} shows the model performances on reasoning ability. ALBERT and RoBERTa, which perform well in the comprehension tests, also achieve strong performance in almost all reasoning tasks. In pretraining, ALBERT introduces an inter-sentence coherence objective to capture the relationship among sentences, which is helpful for the sentence-level reasoning ability of PLMs. It has been found that the next sentence prediction (NSP) loss in BERT might hurt the performance of PLMs in sentence-level downstream tasks~\cite{liu2019roberta}. Interestingly, despite being the best in comprehension tests, XLNet does not perform as well as we expected in reasoning tests. We speculate that the permutation operation in XLNet disturbs the semantic relationship between sentences, thus leading to poor reasoning ability.
\textbf{To improve PLMs' reasoning ability, it would be useful to design sentence-level reasoning objectives like inter-sentence coherence loss in ALBERT}.
Moreover, despite incorporating knowledge, ERNIE still shows mediocre performance on knowledge-related datasets such as CQA. A possible reason is that ERNIE only uses trained KB embeddings to enhance semantic representations but ignores the reasoning structure of the KB. This suggests that designing appropriate and effective fusion methods for integrating knowledge matters more than merely injecting it.
To further analyze the transferability of PLMs' reasoning ability, we conduct a two-stage study on three task datasets, \emph{i.e.,}\xspace ROCStories, SM-A, and ARCT. We first train PLMs on source tasks with full data and then fine-tune PLMs on target tasks with ten instances. In Figure~\ref{fig:reasoning-intermediate}, it can be observed that \textbf{PLMs have better reasoning transferability between similar tasks} such as deductive reasoning tasks (ROCStories and SM-A). This shows that model performance on data-scarce reasoning tasks can be improved by incorporating additional training on data-rich similar tasks~\cite{wang2021entailment}.
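The two-stage study fine-tunes on ten target-task instances; how those instances were drawn is not specified, but a label-balanced sampler is a natural choice. A sketch under that assumption (function name and round-robin procedure are our own illustration):

```python
import random
from collections import defaultdict

def few_shot_sample(examples, k=10, seed=0):
    """Draw k examples, balanced across labels as evenly as possible,
    for few-shot fine-tuning on a target reasoning task.

    examples: list of (text, label) pairs.
    NOTE: assumed sampling procedure; the paper does not specify one.
    """
    by_label = defaultdict(list)
    for ex in examples:
        by_label[ex[1]].append(ex)
    rng = random.Random(seed)
    # Shuffle each label's pool, then round-robin until k are collected.
    pools = {lbl: rng.sample(exs, len(exs)) for lbl, exs in by_label.items()}
    picked = []
    while len(picked) < k and any(pools.values()):
        for lbl in sorted(pools):
            if pools[lbl] and len(picked) < k:
                picked.append(pools[lbl].pop())
    return picked

data = [(f"story {i}", i % 2) for i in range(100)]
shot = few_shot_sample(data, k=10)
assert len(shot) == 10  # 5 per label for a binary task
```

Fixing the seed keeps the target-task split reproducible across the source-task grid in Figure~\ref{fig:reasoning-intermediate}.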
\begin{table*}[t]
\small
\centering
\begin{tabular}{lrrrrrrrrrrrr}
\toprule
\multirow{2.5}*{\textbf{Models}} & \multicolumn{3}{c}{\textbf{CNN/DailyMail}} & \multicolumn{3}{c}{\textbf{GigaWord}} & \multicolumn{3}{c}{\textbf{SQuAD}} & \multicolumn{3}{c}{\textbf{WritingPrompts}} \\
\cmidrule(r){2-4}\cmidrule(r){5-7}\cmidrule(r){8-10}\cmidrule(r){11-13}
& \makecell[c]{R-1} & \makecell[c]{R-2} & \makecell[c]{R-L} & \makecell[c]{R-1} & \makecell[c]{R-2} & \makecell[c]{R-L} & \makecell[c]{B-4} & \makecell[c]{R-L} & \makecell[c]{ME} & \makecell[c]{B-4} & \makecell[c]{R-L} & \makecell[c]{ME} \\
\midrule
GPT-2 & 27.00 & 8.00 & 23.08 & 23.72 & 8.12 & 21.56 & 8.48 & 18.82 & 26.77 & 14.47 & {\ul 3.23} & {\ul 7.29} \\
UniLM & 43.44 & 20.21 & 40.51 & 38.45 & 19.45 & 35.75 & 4.42 & 17.43 & 20.13 & \textbf{26.88} & 1.84 & 5.01 \\
T5 & 42.50 & 20.68 & 39.75 & 34.75 & 16.26 & 31.49 & 11.19 & 22.35 & 30.53 & 8.61 & \textbf{4.19} & \textbf{9.51} \\
BART & {\ul 44.16} & \textbf{21.28} & {\ul 40.90} & {\ul 39.41} & {\ul 20.21} & {\ul 36.42} & \textbf{15.87} & \textbf{25.47} & \textbf{38.42} & 14.72 & 3.14 & 7.08 \\
ProphetNet & \textbf{44.20} & {\ul 21.17} & \textbf{41.30} & \textbf{39.51} & \textbf{20.42} & \textbf{36.69} & {\ul 14.20} & {\ul 23.97} & {\ul 35.99} & {\ul 19.31} & 2.59 & 7.19 \\
\bottomrule
\end{tabular}
\caption{Composition tests results on four datasets. R-1, R-2, and R-L are short for ROUGE-1, ROUGE-2, and ROUGE-L, respectively. B-4 and ME denote BLEU-4 and METEOR, respectively. We report the result of the \textsc{Large} version of each model in this table and more results can be found in the Appendix~\ref{composition}.}
\label{tab:composition-main}
\end{table*}
\begin{table}[t]
\small
\centering
\begin{tabular}{lr|rrrr}
\toprule
\multirow{2.5}*{\textbf{Models}} & \multicolumn{5}{c}{\textbf{GigaWord}} \\
\cmidrule{2-6}
& \textbf{TT (\%)} & \textbf{Flu.} & \textbf{Info.} & \textbf{Acc.} & \textbf{Overall} \\
\midrule
GPT-2 & 26.09 & 3.11 & 2.79 & 2.64 & 4.87 \\
UniLM & 50.34 & \textbf{4.02} & {\ul 3.49} & 3.45 & {\ul 6.73} \\
T5 & \textbf{53.67} & 3.95 & 3.45 & {\ul 3.46} & 6.68 \\
BART & 51.10 & 4.01 & 3.46 & \textbf{3.49} & {\ul 6.73} \\
ProphetNet & {\ul 53.02} & {\ul 3.99} & \textbf{3.52} & 3.45 & \textbf{6.74} \\
\cmidrule{1-6}
Gold & 40.77 & 3.61 & 3.29 & 3.15 & 6.05 \\
\midrule
\multirow{2.5}*{\textbf{Models}} & \multicolumn{5}{c}{\textbf{WritingPrompts}} \\
\cmidrule{2-6}
& \textbf{TT (\%)} & \textbf{Flu.} & \textbf{Info.} & \textbf{Rel.} & \textbf{Overall} \\
\midrule
GPT-2 & \textbf{45.70} & \textbf{3.42} & \textbf{3.17} & {\ul 3.20} & {\ul 5.87} \\
UniLM & 1.20 & 1.32 & 1.88 & 2.03 & 2.74 \\
T5 & 34.40 & 3.01 & 2.80 & 3.09 & 5.18 \\
BART & {\ul 45.20} & 3.37 & {\ul 3.16} & \textbf{3.39} & \textbf{5.96} \\
ProphetNet & 29.60 & 2.95 & 2.91 & 3.10 & 5.18 \\
\cmidrule{1-6}
Gold & 71.30 & 3.79 & 4.07 & 3.87 & 7.37 \\
\bottomrule
\end{tabular}
\caption{Turing test (TT) and human scores on the test set of GigaWord and WritingPrompts. Flu., Info., Acc. and Rel. denote fluency, informativeness, accuracy and relevance respectively. We report the result of \textsc{Large} version for each model in this table and more results can be found in the Appendix~\ref{composition}.}
\label{tab:turing-test}
\end{table}
\begin{figure}[t]
\centering
\subfigure[GigaWord]{
\centering
\includegraphics[width=0.22\textwidth]{turing-giga.pdf}
}
\subfigure[WritingPrompts]{
\centering
\includegraphics[width=0.22\textwidth]{turing-wp.pdf}
}
\centering
\caption{Impact factors of Turing Test.}
\label{fig:composition-analysis}
\vspace{-0.3cm}
\end{figure}
\subsection{Composition Tests}
\paratitle{Datasets and Metrics.} Composition is similar to the text generation task, aiming at generating new content from scratch. Therefore, we use four text generation benchmarks for composition tests, \emph{i.e.,}\xspace WritingPrompts~\cite{LewisDF18} on story generation, CNN/Daily Mail~\cite{HermannKGEKSB15} and GigaWord~\cite{RushCW15} on text summarization, and SQuAD v1.1~\cite{RajpurkarZLL16} on question generation. According to the length of the target text, text summarization and question generation are short text generation tasks, while story generation is a long text generation task. For evaluation, we adopt three automatic metrics, \emph{i.e.,}\xspace BLEU~\cite{papineni2002bleu}, ROUGE~\cite{lin2004rouge}, and METEOR~\cite{BanerjeeL05}.
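As an illustration of these metrics, ROUGE-1 measures unigram overlap between a generated text and a reference; a simplified sketch (no stemming or synonym handling, unlike the official ROUGE toolkit):

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Simplified ROUGE-1 F1: unigram overlap between candidate and
    reference after whitespace tokenization and lowercasing."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if not overlap:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat on the mat", "the cat lay on the mat")
print(round(score, 3))  # 5 of 6 unigrams overlap in each direction
```

ROUGE-2 and ROUGE-L follow the same idea with bigrams and longest common subsequences, respectively.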
Besides, following~\cite{zou2021controllable}, we conduct a human evaluation on five aspects, \emph{i.e.,}\xspace \textit{Fluency}, \textit{Informativeness}, \textit{Accuracy}, \textit{Relevance}, and \textit{Overall}.
The overall score is rated from 1 to 10, while the others are rated from 1 to 5. Inspired by~\citet{turing2009computing}, we further design a Turing test to assess the generation ability of PLMs, in which a human interrogator is asked to judge whether a given text was written by a human. From the generated texts of each model and the gold texts, we randomly select 500 texts to be scored by judges. More details of the human test and Turing test are given in Appendix~\ref{composition}.
\paratitle{Results and Analysis}. Table~\ref{tab:composition-main} and Table~\ref{tab:turing-test} present the automatic evaluation and human evaluation results on composition ability, respectively. We can observe that ProphetNet and BART achieve strong performance on short text generation, while GPT-2 and T5 show better results on long text generation. Specifically, BART employs denoising objectives for reconstructing the corrupted original text, and ProphetNet adopts future $n$-gram prediction, which is flexible for modeling the semantic relations between tokens and phrases in short texts. However, for long texts, a small ratio of masked tokens (\emph{i.e.,}\xspace 15\%) might not be effective in capturing the complex long-range dependencies. By comparison, the left-to-right prediction objective in GPT-2 is better suited to modeling long-range semantic continuity in long texts, and T5 has the largest model size, giving it a strong composition ability. For composition ability, we conclude that \textbf{the denoising objective is helpful for short text composition, while the left-to-right objective is more powerful for long text composition}. Besides, model size is also an important factor in improving PLMs' composition ability.
To further investigate which factors affect the pass rate of the Turing test, we analyze in depth the intermediate scoring results in the human test and Turing test. As shown in Figure~\ref{fig:composition-analysis}, we calculate the pass rate of the Turing test for each human test metric across the 1-to-5 scale. Moreover, we compute the Pearson correlation coefficient between the pass rate and each metric. In story generation (WritingPrompts), the coefficients for \textit{Fluency}, \textit{Informativeness}, and \textit{Relevance} are 96.63, 97.93, and 96.44, respectively, while in text summarization (GigaWord), the coefficients for \textit{Fluency}, \textit{Informativeness}, and \textit{Accuracy} are 96.08, 97.67, and 98.38, respectively. From these analysis results, we conclude that \textit{Informativeness} is more important for story generation, while \textit{Accuracy} is more influential in text summarization. Besides, we compute the text similarity between the generated texts from different PLMs, which is shown in Appendix~\ref{composition}.
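The correlation analysis above amounts to computing a Pearson coefficient between the five human-test score levels and the corresponding Turing-test pass rates. A minimal sketch follows; the pass-rate values below are hypothetical placeholders, not the paper's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical pass rates (%) at human-test scores 1..5 for one metric:
scores = [1, 2, 3, 4, 5]
pass_rates = [10.0, 25.0, 45.0, 70.0, 90.0]
r = pearson_r(scores, pass_rates)  # near +1 for a monotone trend
```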
\section{Discussion}
\label{sec-guide}
Based on the above four ability tests, we intend to provide a guideline for helping researchers choose, apply, interpret and design PLMs for NLP tasks.
In Section~\ref{sec-comprehension}, we observe that an improvement in memory ability is likely to benefit comprehension ability. Hence, designing PLMs with special objectives like bidirectional language modeling in BERT and strategies like larger training batches in RoBERTa for larger memory capacity will further benefit PLMs in downstream comprehension tasks. Besides, when applying PLMs to downstream tasks, the similarity of data distribution between pretraining and fine-tuning has a great impact on PLMs' performance. Possible solutions such as introducing intermediate tasks or datasets can alleviate such a discrepancy. Moreover, we find some limitations in PLMs' comprehension ability: PLMs are good at simple single-token answer types in QA, such as dates, but perform worse on complex phrase answers.
Compared to comprehension, reasoning in Section~\ref{sec-reasoning} is much more intricate and usually involves inferring the semantic relationships among multiple sentences. Therefore, PLMs such as ALBERT trained with sentence-level objectives can be more suitable for reasoning tasks. Intuitively, incorporating sentence-level objectives during pretraining helps PLMs learn the correlation among different sentences. Note that PLMs have better reasoning transferability between similar tasks; thus, data-scarce reasoning tasks can be improved by first training on data-rich tasks.
For composition ability, PLMs with denoising training objectives perform much better on short text composition, while PLMs with left-to-right objectives or larger model sizes are more suitable for long text composition. This might be because PLMs with different training objectives capture different ranges of semantic dependency between tokens and phrases. Moreover, to obtain a higher pass rate on the Turing test, different text generation tasks are concerned with different factors; for example, informativeness is much more critical for story generation.
\section{Introduction}
\label{sec-intro}
Recent years have featured a trend towards Transformer~\citep{VaswaniSPUJGKP17} based pretrained language models (PLMs) in natural language processing (NLP) systems. By being pretrained on massive unlabeled text, PLMs can be directly fine-tuned on downstream tasks, entirely removing the need for task-specific architectures~\citep{radford2018improving}. This paradigm has led to significant progress on many challenging NLP tasks such as reading comprehension~\citep{DevlinCLT19} and text generation~\citep{brown2020language}.
With rising new state-of-the-art results that approach or surpass human performance on several tasks, how to systematically evaluate the language abilities of PLMs from a wide range of perspectives has become a non-trivial research topic.
Given a wide range of publicly released PLMs, it is particularly useful to derive principles or guidelines for selecting suitable PLMs for specific downstream tasks.
However, existing works either target some single ability~\citep{TalmorEGB20,ZhouZCH20}, or consider a simple mixture of multiple (small-scale) tasks that lack a comprehensive design and test~\citep{WangSMHLB19,CLUEbenchmark}.
There has been no detailed and systematic analysis of PLMs' abilities on large-scale NLP tasks. To fill this gap in PLM evaluation, we introduce the gen\textbf{\underline{E}}ral \textbf{\underline{l}}anguage ab\textbf{\underline{i}}li\textbf{\underline{t}}y \textbf{\underline{e}}valuation (\textbf{ElitePLM}) for empirically and systematically assessing the general language abilities of PLMs.
The ideal goal behind PLMs is to create a human-like machine learner that can understand language and then perform any language-related task. In cognitive science, the Wechsler Adult Intelligence Scale (WAIS)~\citep{kaufman2005assessing} is the most commonly used intelligence quotient (IQ) test for measuring the intelligence and cognitive ability of humans. This test assesses individuals on verbal comprehension, perceptual reasoning, working memory, and processing speed. Thus, by imitating intelligence tests on humans, we design four evaluation dimensions in ElitePLM for measuring the abilities of PLMs, including \textit{memory}, \textit{comprehension}, \textit{reasoning}, and \textit{composition}. Following previous works~\citep{ZhouZCH20,WangSMHLB19}, for each ability in ElitePLM, we elaborate and select multiple representative tasks (\emph{e.g.,}\xspace question answering for the comprehension ability) and commonly used benchmarks (\emph{e.g.,}\xspace GLUE and SQuAD) to quantitatively evaluate the performance of PLMs. These results can serve as numerical characterizations of PLMs at a specific ability.
In human intelligence tests, the background of participants (\emph{e.g.,}\xspace gender, race, and occupation) should be as diverse as possible. Thus, in ElitePLM, we select a diverse set of PLMs to conduct generalized and meaningful comparisons. According to their training objectives, PLMs can be divided into three types: bidirectional LMs (\emph{e.g.,}\xspace BERT~\citep{DevlinCLT19}) for natural language understanding (NLU), unidirectional LMs (\emph{e.g.,}\xspace GPT~\citep{radford2019language}) for natural language generation (NLG), and hybrid LMs (\emph{e.g.,}\xspace UniLM~\citep{00040WWLWGZH19}) combining these two paradigms. Furthermore, knowledge-enhanced LMs (\emph{e.g.,}\xspace ERNIE~\citep{ZhangHLJSL19}) and text-to-text LMs (\emph{e.g.,}\xspace T5~\citep{RaffelSRLNMZLL20}) have also emerged as important branches of PLMs.
Considering the variety, we finally select ten widely-used PLMs within the above five categories and evaluate their abilities on four dimensions. We show the comparisons of these PLMs in Table~\ref{tab:models} of Appendix~\ref{configuration}.
From the ability test results, we have three salient findings. First, PLMs with varying pretraining objectives and strategies are good at different kinds of downstream tasks. Specifically, we observe that bidirectional LMs like BERT and pretraining strategies like larger training batches in RoBERTa are helpful for memorizing pretraining corpora; permutation language modeling in XLNet is beneficial for modeling the bidirectional context in language comprehension; the inter-sentence coherence objective in ALBERT is suitable for sentence-level reasoning tasks; and text-to-text LMs using a denoising objective, like BART, perform better in short text generation. Second, when fine-tuning PLMs on downstream tasks, their performance is typically sensitive to the data distribution in the fine-tuning stage, which can be addressed by incorporating intermediate datasets or tasks to alleviate such a discrepancy. Third, PLMs have excellent transferability between similar tasks, especially reasoning tasks. This finding may inspire future researchers to leverage data-rich tasks for improving data-scarce tasks.
For more clarity, we illustrate the impact level of each factor for PLMs' abilities in Table~\ref{tab:score} of Appendix~\ref{configuration}.
Besides ElitePLM being an evaluation benchmark of PLMs' language abilities, more importantly, the predicted results of ElitePLM can be used as an open resource for analyzing PLMs' performance on each ability in more depth and granularity. For example, we further analyze the comprehension test results of PLMs across answer types in QA tasks. The analysis shows that PLMs are good at simple single-token answers such as dates but struggle more with intricate phrase answers. Moreover, by analyzing human test and Turing test results on composition, we observe that summaries with high accuracy are more likely to pass the Turing test, while rich information is more important for story generation. Overall, ElitePLM can act as an analysis tool to gain more insight into PLMs. We show the details of the datasets we used and the predicted outputs of PLMs in Appendix~\ref{statistics}.
This paper is intended to help establish sound principles for choosing, applying, interpreting and designing PLMs for NLP tasks in practical settings. We have released the code and predicted results of each ability experiment, providing the research and industry community with off-the-shelf tools to evaluate and analyze their PLMs.
\section{ElitePLM}
\label{sec-glae}
In this section, we will detail these four kinds of language abilities, \emph{i.e.,}\xspace memory, comprehension, reasoning, and composition, in ElitePLM.
\paratitle{Memory Ability.} Memory is the most basic human ability, concerning how much information we can recall throughout our lives~\citep{miyake1999models}. By analogy, ElitePLM measures how much knowledge and how many language patterns PLMs have memorized during pretraining, as assessed by tests of recalling words based on contexts. Based on the memorized information, PLMs can effectively adapt to downstream tasks that require understanding and reasoning about similar contexts in a specific text.
On the other hand, efficiency is also a critical aspect of memory ability for PLMs learning from new data distribution in the fine-tuning stage. Thus, besides recalling words, we also compare the memory efficiency of PLMs in terms of memorizing the given new information.
\paratitle{Comprehension Ability.} Comprehension is an intricate and multifaceted ability. It typically consists of understanding a text’s vocabulary, background knowledge of a specific topic, and comprehension of its linguistic structures like grammar~\citep{cain2008children}. In particular, background (prior) knowledge is used to comprehend a special situation, lesson, or text.
For example, readers should be aware of the background knowledge of dog behavior when reading a text about dog training.
In ElitePLM, we will assess PLMs' comprehension ability from three aspects, \emph{i.e.,}\xspace vocabulary, background knowledge, and linguistic structures.
\paratitle{Reasoning Ability.} Based on the comprehension of a text, reasoning ability refers to the power of the processes and strategies used in drawing inferences, reaching conclusions, arriving at solutions, and making decisions~\citep{kyllonen1990reasoning}.
In ElitePLM, we mainly focus on three types of reasoning abilities. Commonsense reasoning requires PLMs to draw inferences using commonsense knowledge about the world, like the fact that ``\textit{matches}'' plus ``\textit{logs}'' usually equals ``\textit{fire}''~\citep{SapSBCR20}. Note that subtle differences exist between commonsense knowledge and the background knowledge involved in comprehension ability: commonsense knowledge is broadly defined as the total accumulation of facts and information that a person has gained from previous experiences.
Deductive reasoning involves drawing conclusions from a set of given premises in the form of categorical syllogisms (\emph{e.g.,}\xspace all $x$ are $y$) or symbolic logic (\emph{e.g.,}\xspace if $p$ then $q$)~\citep{johnson1999deductive}. Abductive reasoning involves reaching the most likely explanation for a set of facts, such as a scientific theory to explain a set of empirical findings~\citep{walton2014abductive}.
\paratitle{Composition Ability.}
In the literature~\cite{connors1997composition}, composition is a highly intelligent and synthetic process in which a writer assembles words and sentences to create a coherent and meaningful work (\emph{e.g.,}\xspace a poem, piece of music, or novel) from scratch, which closely resembles the text generation task in NLP~\citep{berninger1999coordinating}.
Therefore, in ElitePLM, we introduce several text generation tasks to evaluate the composition ability of PLMs, including story generation, text summarization, and question generation. Note that story generation is a representative composition task that requires PLMs not only to comprehend the given story background, but also to reason about and create reasonable and coherent story endings~\citep{LewisDF18}. During the composition process, PLMs should demonstrate good knowledge of vocabulary, grammar, spelling, and punctuation, and deliberately structure the text.
\section{Related Work}
\paratitle{Pretrained Language Models}. Owing to the great achievements Transformer~\cite{VaswaniSPUJGKP17} has made, the paradigm of pretrained language models (PLMs) is thriving~\cite{radford2019language,DevlinCLT19,liu2019roberta,LewisLGGMLSZ20,RaffelSRLNMZLL20}. It is widely recognized that PLMs can learn massive knowledge from corpora~\cite{LiTZW21}, leading to significant progress in various language tasks~\cite{textbox,LiTZWYW21}. With such encouraging results in extensive NLP tasks, it is a non-trivial topic to systematically evaluate the abilities of PLMs, which can further deepen our understanding of PLMs and facilitate their application to more fields.
\paratitle{Language Model Evaluation}. Many efforts have studied the evaluation of language model performance. \citet{Liu0BPS19} evaluate BERT~\cite{DevlinCLT19}, GPT~\cite{radford2018improving}, and ELMo~\cite{PetersNIGCLZ18} on a variety of linguistics tasks. Their findings indicate that the features generated by PLMs are sufficient for good performance on a broad set of tasks but fall short on tasks requiring fine-grained linguistic knowledge. \citet{tenney2019you} evaluate similar models on a range of sub-sentence linguistic analysis tasks, showing that PLMs encode both syntax and semantics into their parameters. \citet{ZhouZCH20} also report that PLMs can learn rich knowledge but focus on evaluating commonsense. However, these studies only look at one dimension of PLM ability evaluation. Other work such as GLUE~\cite{WangSMHLB19} and CLUE~\cite{CLUEbenchmark} considers a simple mixture of multiple tasks and lacks a comprehensive evaluation. To the best of our knowledge, this is the first work to systematically evaluate PLMs by defining various kinds of language abilities and performing extensive comparisons.
\section{Introduction}
The interaction of energetic particles with the solar wind is a topic of wide interest in space physics and astrophysics. Several varieties of charged particles populate the heliosphere, including energetic particles originating at the sun (solar energetic particles, or SEPs) and galactic cosmic rays (GCRs) that enter the heliosphere uniformly and nearly isotropically from the outside \citep{Kunow1991book}. These cosmic rays (CRs) are strongly guided and scattered by the solar wind and the turbulent fluctuations that transport with it \citep{parker1956modulation,parker1964scattering,jokipii1966cosmic}. As such, the study of the origin and transport of cosmic rays is an important problem in heliospheric physics, with implications ranging from space weather and exploration to fundamental space plasma physics \citep{jokipii1971review,fisk1978interactions,Kunow1991book}. The effects of these energetic particles on the health of astronauts \citep{Parker2005SWE} and the well-being of electronic components in spacecraft \citep{Tylka1997} are an immediate concern. In addition, the accuracy with which we can understand CR propagation also provides a testbed for energetic particle transport in numerous space and astrophysical applications \citep{Kulsrud1969ApJ,droge2003}. The solar wind provides us with an opportunity to observe, at close range, the behavior of energetic particles in random, turbulent magnetic fields \citep{bruno2013LRSP}. Such fields are ubiquitous in astrophysical systems \citep{Candia2004}, and the insights we glean from studies of CRs in the heliosphere can potentially find application elsewhere in the universe. Finally, observations of cosmic rays can also serve as probes into solar activity and solar wind structure, as CR variations are seen to be correlated with solar and geomagnetic activity \citep{snyder1963}.
Theories of the modulation of cosmic rays in the heliosphere attempt to explain the observed temporal and spatial variation in their spectra \citep{fisk1978interactions,potgieter2013solar}, and for that purpose, require a knowledge of the cosmic ray diffusion tensor. In fact, one of the key challenges in solving the Parker CR transport equation \citep{parker1965PSS} is the inadequate knowledge of the spatial, temporal, and rigidity dependence of the components of the diffusion tensor. In turn, the specification of this tensor through the heliosphere requires an understanding of two topics. First, a theoretical understanding of the diffusion process itself is needed, which would lead to predictions of the structure of the diffusion tensor itself. Equally important is the knowledge of the large scale flows and electromagnetic field in the plasma, and the distribution of background solar wind turbulence in which the particles are scattered. The present approach permits three dimensional, and (in principle) time-varying calculation of all three of these properties (diffusion tensor, large scale flow, large scale electromagnetic field) to be computed in a single model.
The formal structure of the diffusion tensor involves diagonal components corresponding to diffusion parallel and perpendicular to the interplanetary magnetic field (IMF), as well as off-diagonal components describing perpendicular drifts (e.g., \citealt{moraal1976SSRv,minnie2007ApJ}). While quasi-linear theory \citep{jokipii1966cosmic} extended to include time-dependent and non-linear corrections \citep{goldstein1976ApJ,bieber1994proton,droge2003} provides a relatively good accounting of parallel diffusion, theories of perpendicular diffusion have faced the challenge of accounting for non-linear effects such as transfer of particles across field lines, backscatter from parallel diffusion, and field-line random walk \citep{jokipii1966cosmic,Giacalone1999ApJ}. The non-linear guiding center (NLGC) theory (\citealt{matthaeus2003nonlinear}; see also \citealt{shalchi2009}) accounts for the above, and is further improved by the random ballistic interpretation of \cite{ruffolo2012random}. In the current work we focus on the parallel and perpendicular diffusion coefficients; the drift motion could be a topic for future work.
Since turbulent fluctuations are responsible for scattering CRs, the diffusion theories mentioned above typically involve turbulence parameters such as the energy of the random magnetic fluctuations and correlation scales. In the solar wind, low-frequency turbulence evolves via a non-linear cascade, while also being transported and processed by the large-scale radially expanding solar wind. At all but the smallest scales, these processes are well described by magnetohydrodynamic (MHD) models \citep{marsch1989dynamics,zhou1990transport}. Over the years, turbulence models have incorporated simplifying assumptions relevant to the solar wind, yielding increased tractability of the governing equations \citep{zank1996evolution,matthaeus1999turbulence}. The increased sophistication of the models and improvements in computational power have led to numerical simulations yielding good agreement with \textit{Voyager, Ulysses, Helios}, and \textit{WIND} observations \citep{breech2008turbulence,usmanov2011solar}. These turbulence models have also been used to study the propagation of coronal mass ejections \citep{Wiengarten2015ApJ}. Extensions to the \cite{breech2008turbulence} model have been developed \citep{Oughton2011JGRA116,Zank2012ApJ745,Zank2017ApJ835}, and applied to the inhomogeneous solar wind \citep{Shiota2017ApJ837}.
Our strategy for evaluating the CR diffusion coefficients through the inner heliosphere consists of two steps: first, specification of the relevant turbulence parameters based on a global solar wind model, and second, evaluation of the CR diffusion coefficients using the specified heliographic distribution of turbulence. For the first step, we deduce turbulence parameters from a global, three-dimensional (3-D) MHD simulation of the solar wind \citep{usmanov2014three}.
The spatial resolution that can be realistically achieved in such simulations cannot resolve the small-scale fluctuations that cause scattering of CRs. For instance, the spatial resolution of our simulation, at 1 AU, can be estimated as roughly 0.03 AU. However, the mean free path, at 1 AU, for scattering perpendicular to the mean magnetic field has been estimated to be as low as 0.001 AU \citep{zhang2009ApJ,pei2010cosmic}, and the correlation scale of the turbulence has been estimated to be around 0.007 AU \citep{matthaeus2005,bruno2013LRSP}. This is where our turbulence model for the ``sub-resolution'' physics comes in. Our simulation explicitly resolves the large-scale, mean solar wind bulk flow, which is coupled to small-scale inhomogeneities by means of an MHD-Reynolds-averaged Navier-Stokes \citep[RANS; see, e.g.,][]{mccomb1991physics} model for the random fluctuations. The simulation has been well-tested, and gives reasonable agreement with many spacecraft observations of large-scale solar wind fields, turbulence parameters (energy, cross helicity and correlation scale), as well as the temperature, for varying heliocentric distance, and where feasible, varying heliolatitude \citep{usmanov2011solar,usmanov2012three,usmanov2016four}. In recent ``applied'' work, the simulation has been used to study the collisional age of the solar wind plasma \citep{chhiber2016solar}, and we view the present work as a continuation of such application-oriented studies.
Once the turbulence parameters are specified through the model heliosphere, for the second step of our calculation, we use, as a starting point, fairly standard, well-tested formalisms for parallel and perpendicular diffusion coefficients - quasi-linear theory \citep{jokipii1966cosmic,bieber1995diffusion,zank1998radial} to compute the parallel component of the diffusion tensor, and the random ballistic decorrelation (RBD) interpretation of NLGC theory \citep{matthaeus2003nonlinear,ruffolo2012random} for perpendicular diffusion.
Previous studies of the heliographic dependence of the CR diffusion coefficients include work based on both WKB models for Alfv\'{e}n waves \citep{volk1974spatial,morfill1979latitude}, and models for strong turbulence \citep{bieber1995diffusion,zank1998radial,pei2010cosmic}. The present work builds on these studies, but also makes some significant departures, motivated and enabled by recent advances in diffusion theory and sophistication of solar wind simulations. The major points of departure from previous work are listed below:
1. We use a fully 3-D global simulation of the solar wind that provides us with a reliable and self-consistent model heliosphere. Previous work has used one-dimensional (1-D) radial evolution models with spherical symmetry, with shear-driving effects included through a model \citep{zank1998radial,pei2010cosmic}. Thus, while examining latitudinal dependence of the diffusion tensor, these studies implicitly assume that they are far from regions with significant latitudinal gradients. In contrast, three dimensionality improves the physical authenticity of the simulation by explicitly including shear-driving effects on the flow across latitudes, and leads to improved data-visualization through two-dimensional (2-D) contour plots. A similar 3-D approach has been recently used in \cite{guo2016ApJ} to study the propagation of GCRs from 0.3 AU to the termination shock.
2. The computation of the CR diffusion tensor requires specification of the background solar wind speed, and the underlying large-scale heliospheric magnetic field. Previous work \citep{bieber1995diffusion,zank1998radial,pei2010cosmic} used a radially constant solar wind speed with some latitudinal variation, and a Parker-spiral type magnetic field model. However, the use of a prescribed model for the background fields has been found inadequate \citep{Reinecke1997AdSpR,steenberg1997alternative}, and instead we use the large-scale, resolved flow from our MHD-RANS simulation. This provides a complete specification of the background large-scale fields, with spatial variation that has been found to agree well with observations \citep{usmanov2014three}.
3. We examine the diffusion coefficients at radial distances between 2 $R_\odot$ and 3 AU, where $R_\odot$ denotes a solar radius. We are not aware of any other similar study that has probed regions this close to the sun, which are of prime interest for SEP propagation, space weather, and for upcoming spacecraft missions, including Solar Probe Plus. Resolving this entire domain ($2~R_\odot - 3$ AU) in one simulation is a challenge, as modeling approximations that are appropriate very close to the sun may not be valid at larger heliocentric distances. Furthermore, the timescales associated with the different domains are disparate \citep{hundhausen1972coronal,Tu1995SSRv,bruno2013LRSP}. We use an approach where the computational domain is split into three regions: inner ($1-20~R_\odot$), intermediate ($20 - 45~R_\odot$), and outer ($45~R_\odot - 3$ AU). The inner and intermediate regions employ a WKB Alfv\'{e}n wave model, and the outer region solves a full turbulence transport model, with the inner boundary conditions for each region being provided by the preceding one \citep{usmanov2014three}.
4. A magnetic dipole with its tilt (relative to the solar rotation axis) varying through the solar activity cycle is a first and rough approximation for the solar magnetic field \citep{babcock1961ApJ133}. We examine the effect of changing the tilt of the source solar dipole by using simulations with a dipole untilted with respect to the solar rotation axis, and a dipole with {30\degree} tilt, in contrast to previous work employing axisymmetric solar wind parameters \citep{zank1998radial,pei2010cosmic}. The tilt of the solar dipole and the warping of the heliospheric current sheet \citep{smith2001JGRsheet} indicate high levels of solar activity \citep{heber2006}, which is a factor of interest since CR intensity is anticorrelated with solar activity levels \citep{forbush1954JGR,fisk1978interactions}. We note here that previous work that examined the effect of solar activity on CR-intensity variation \citep{jokipii1995conf} did not include turbulence modeling, and here we examine how varying turbulence levels influence the diffusion coefficients.
5. The perpendicular diffusion coefficient has been previously evaluated using the ``BAM'' model \citep{bieber1997perpendicular} by \cite{zank1998radial}, and the NLGC theory \citep{matthaeus2003nonlinear} by \cite{pei2010cosmic} and \cite{zank2004JGR}. Recently, the NLGC theory has been reinterpreted by \cite{ruffolo2012random}, and their RBD theory yields a significantly improved agreement with numerical experiments, for magnetic fluctuation amplitudes comparable to the large-scale magnetic field. This makes it very well suited for application to the solar wind, where the IMF includes a strong fluctuating component \citep{belcher1969JGR,marsch1991pihp}, and we use the RBD theory to derive a new expression for the perpendicular diffusion coefficient.
6. With the above improvements, the present approach departs significantly from both SEP studies \citep[e.g.,][]{zhang2009ApJ} and GCR modulation studies \citep[e.g.,][]{Engelbrecht2013} that have used relatively simplified assumptions in one or more of the above categories, such as semiempirical diffusion coefficients and simple scalings with magnetic field magnitude.
The outline of the paper is as follows: We describe the form of the CR diffusion tensor in Section 2, and briefly discuss the turbulence model and the simulation in Section 3. Section 4 presents the heliographic distribution of the diffusion coefficients. In an Appendix we briefly describe how other types of diffusion coefficients might be estimated using similar approaches.
\section{Cosmic Ray Diffusion Tensor}
The CR diffusion tensor, $\kappa_{ij}$, describes the scattering of CRs by random fluctuations in the IMF. It may be expressed as \citep{parker1965PSS,jokipii1970ApJ}
\begin{equation}\label{eq:kappa}
\kappa_{ij} = \kappa_\perp \delta_{ij} + \frac{B_i B_j}{B^2} (\kappa_\parallel - \kappa_\perp) + \epsilon_{ijk} \kappa_A \frac{B_k}{B},
\end{equation}
where $\mathbf{B}$ is the mean IMF, $\delta_{ij}$ is the Kronecker delta, and $\epsilon_{ijk}$ is the Levi-Civita symbol. This work presents calculations of $\kappa_\parallel$ and $\kappa_\perp$, which are the diagonal components of the diffusion tensor parallel and perpendicular, respectively, to the mean IMF.
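As a concrete illustration, the tensor defined above can be assembled numerically from a local mean-field vector and the three scalar coefficients. The sketch below uses arbitrary illustrative values for $\kappa_\parallel$, $\kappa_\perp$, and $\kappa_A$, not values from the simulation:

```python
import numpy as np

def diffusion_tensor(B, kpar, kperp, kA):
    """kappa_ij = kperp*delta_ij + (kpar - kperp)*B_i*B_j/B^2
                  + eps_ijk * kA * B_k/|B|."""
    B = np.asarray(B, dtype=float)
    bhat = B / np.linalg.norm(B)          # unit vector along the mean field
    eps = np.zeros((3, 3, 3))             # Levi-Civita symbol
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    return (kperp * np.eye(3)
            + (kpar - kperp) * np.outer(bhat, bhat)
            + kA * np.einsum('ijk,k->ij', eps, bhat))

# With B along z the result is diag(kperp, kperp, kpar) plus an
# antisymmetric kA contribution in the x-y block:
kappa = diffusion_tensor([0.0, 0.0, 5.0], kpar=10.0, kperp=1.0, kA=0.5)
```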
The present work does not calculate $\kappa_A$, which can describe particle drifts under the influence of large-scale gradients and curvature in the IMF. Our results are directly relevant to the outward propagation of SEPs, for which $\kappa_\parallel$ and $\kappa_\perp$ are needed to describe how the SEP distribution spreads in the parallel and perpendicular directions, whereas over the short time scale of the SEP outflow the drifts may mainly shift the lateral distribution over a small angle. The lateral distribution of particle injection is often unknown, and the effects of drifts are often neglected, though \cite{marsh2013ApJ} argue that they should be considered. Both diffusion and drifts are considered to be important to the modulation of GCR with the solar cycle and the small gradients in GCR density \citep{moraal1976SSRv,jokipii1981ApJ}, though these processes take place over a wider region than considered in the present work ($r\le3$ AU).
We shall also examine the radial diffusion coefficient
\begin{equation}\label{kappa_r}
\kappa_{rr} \equiv \kappa_\parallel \cos^2 \Psi + \kappa_\perp \sin^2 \Psi,
\end{equation}
which is of particular relevance to models of solar modulation of CRs. Here, $\Psi$ is the ``winding'' angle between the IMF and the radial direction. Following previous work, we define mean free paths, $\lambda_{\parallel,\perp}$, that are equivalent to the diffusion tensor through
\begin{equation}
\lambda_{\parallel,\perp} \equiv 3 \kappa_{\parallel,\perp}/v,
\end{equation}
where $v$ is the particle speed.
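The radial projection and the mean-free-path conversion above are straightforward to encode; the numerical values in the example below are illustrative only, not taken from the simulation:

```python
import math

def kappa_rr(kpar, kperp, psi):
    """Radial diffusion coefficient from the winding angle psi (radians):
    kappa_rr = kpar*cos^2(psi) + kperp*sin^2(psi)."""
    return kpar * math.cos(psi) ** 2 + kperp * math.sin(psi) ** 2

def mean_free_path(kappa, v):
    """lambda = 3*kappa/v for particle speed v (consistent units)."""
    return 3.0 * kappa / v

# Example: a ~45 degree winding angle weights kpar and kperp equally.
k_rr = kappa_rr(1.0e22, 1.0e20, math.radians(45.0))  # illustrative cm^2/s
```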
We note that in the present work we use the large-scale flow from our simulation to specify $B$ and $\Psi$ as spatially varying fields through the 3-D heliosphere. This is in contrast to previous studies \citep{bieber1995diffusion,zank1998radial,pei2010cosmic}, where $B$ and $\Psi$ were specified through a Parker-type model and a radially constant solar wind speed (to compute $\Psi$). However, the features of the IMF have a major influence on CR transport, and a Parker-type field is an oversimplification, particularly at high heliolatitudes (see \cite{heber2006} for an overview of suggested modifications to the Parker field). Moreover, the use of a priori prescribed background fields in modulation studies has been held responsible for restricting the diffusion tensor to values that preclude agreement of models with observations \citep{Reinecke1997AdSpR,steenberg1997alternative}, and the present work makes a significant improvement in this regard.
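For reference, the Parker-model winding angle that earlier studies prescribed (and that the simulation fields replace here) follows from $\tan\Psi = \Omega r \sin\theta / V$. A sketch with nominal values for the solar rotation rate and wind speed, used only for illustration:

```python
import math

OMEGA_SUN = 2.7e-6   # solar rotation rate, rad/s (approximate)
AU = 1.496e11        # meters per astronomical unit

def parker_winding_angle(r_au, theta=math.pi / 2, V=4.0e5):
    """Winding angle Psi (radians) of a Parker spiral at heliocentric
    distance r_au [AU], colatitude theta, and a radially constant
    solar wind speed V [m/s]."""
    return math.atan(OMEGA_SUN * r_au * AU * math.sin(theta) / V)

psi_1au = parker_winding_angle(1.0)  # roughly 45 degrees in the ecliptic
```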
\subsection{Parallel mean free path}
In determining the parallel mean free path (mfp), the turbulence ``geometry", i.e., the distribution of energy over parallel and perpendicular wavevectors, is a controlling factor. Observations \citep{bieber1994proton} show that a pure ``slab" model of heliospheric turbulence \citep{jokipii1966cosmic} underestimates the parallel mfp. In the slab model, the magnetic fluctuations are polarized perpendicular to the mean field and their wave-vectors are parallel to the mean field. \cite{bieber1994proton} find that a composite model with a dominant 2-D part (fluctuations and their wave-vectors both perpendicular to the mean field) and a minor slab part provides a better approximate parametrization of the turbulence and an improved description of the observed mean free paths. Furthermore, theoretical studies and observations \citep{matthaeus1990JGR,zank1992waves,zank1993nearly,bieber1996dominant,ghosh1997anisotropy} suggest that around 80\% of magnetic fluctuation energy in the inertial range should reside in the 2-D component, with the rest in the slab component.
In the following, we take the $z$-component along the mean field. Considering parallel diffusion first, we note that in quasilinear theory the 2-D fluctuations are effectively invisible to CRs resonating with the turbulence, and the scattering by slab fluctuations (assumed to be axisymmetric) is described by the parallel mfp \citep{zank1998radial}
\begin{equation}\label{eq:mfp_p}
\begin{aligned}
\lambda_\parallel &= 6.2742 \frac{B^{5/3}}{\langle b^2_s \rangle} \left(\frac{P}
{c}\right)^{1/3} \lambda^{2/3}_{s} \\
&\times \left[1 + \frac{7A/9}{(1/3 + q)(q + 7/3)}\right],
\end{aligned}
\end{equation}
where
\begin{equation}
A = (1+s^2)^{5/6} - 1,
\end{equation}
\begin{equation}
q = \frac{5s^2/3}{1+s^2-(1+s^2)^{1/6}},
\end{equation}
\begin{equation}
s = 0.746834 \frac{R_L}{\lambda_s},
\end{equation}
and a model 1-D Kolmogorov spectrum is assumed, with a power spectrum of the form $\tilde{P}(k_\parallel) \propto (1+k_\parallel \lambda_s)^{-5/6}$. Here $c$ is the speed of light, $R_L = P/Bc$ the particle Larmor radius, $\langle b_s^2 \rangle$ the variance of the slab geometry fluctuation, $P \equiv \tilde{p}c/Ze$ the particle rigidity ($\tilde{p}$ and $Ze$ are the particle momentum and charge, respectively), $k_\parallel$ is the wave vector parallel to the mean field, and $\lambda_s$ the correlation length for slab turbulence. Equation \eqref{eq:mfp_p} is valid at rigidities ranging from 10 MV to 10 GV \citep{zank1998radial}. At larger heliocentric distances, the fractional term in brackets becomes significant due to high rigidity particles resonating with fluctuations in the energy containing range instead of the inertial range. This is discussed further below in the context of rigidity dependence of the mfps (Section 4.4).
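For readers who wish to reproduce these length scales, a direct Python transcription of Equation \eqref{eq:mfp_p} and its auxiliary quantities $A$, $q$, and $s$ follows. This is a minimal sketch: the argument names are ours, and the input values used below are arbitrary illustrative numbers, so consistent units for $B$, $\langle b_s^2\rangle$, $P$, and $\lambda_s$ must be supplied by the user.

```python
def lambda_parallel(B, b_s2, P, lam_s, c=2.99792458e10):
    """Parallel mean free path of Eq. (mfp_p) (Zank et al. 1998 form).

    B     : mean magnetic field magnitude
    b_s2  : slab fluctuation variance <b_s^2>
    P     : particle rigidity
    lam_s : slab turbulence correlation length
    c     : speed of light (default cgs); units must be mutually consistent.
    """
    R_L = P / (B * c)                      # Larmor radius, R_L = P/(Bc)
    s = 0.746834 * R_L / lam_s
    A = (1.0 + s * s) ** (5.0 / 6.0) - 1.0
    q = (5.0 * s * s / 3.0) / (1.0 + s * s - (1.0 + s * s) ** (1.0 / 6.0))
    bracket = 1.0 + (7.0 * A / 9.0) / ((1.0 / 3.0 + q) * (q + 7.0 / 3.0))
    return 6.2742 * B ** (5.0 / 3.0) / b_s2 \
        * (P / c) ** (1.0 / 3.0) * lam_s ** (2.0 / 3.0) * bracket
```

The bracketed correction grows with $s \propto R_L/\lambda_s$, reflecting the resonance of high-rigidity particles with the energy-containing range noted above.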
\subsection{Perpendicular mean free path}
Perpendicular diffusion is often not considered as important as parallel diffusion in energetic particle studies, because it is usually inferred that $\lambda_\perp \ll \lambda_\parallel$ \citep{palmer1982RvGSP}. However, \cite{dwyer1997ApJ} found that for strong particle enhancements related to corotating interaction regions, $\lambda_\perp/\lambda_\parallel$ rose to $\sim1$ in the fast solar wind stream arriving after the stream interface. Using data from the \textit{Ulysses} spacecraft during the SEP event of 2000 Jul 14, \cite{zhang2003ApJ} inferred $\lambda_\perp/\lambda_\parallel\approx 0.25$. Our 3-D model inner heliosphere provides an opportunity to examine the domains where perpendicular diffusion can be comparable with parallel diffusion.
Quasi-linear theory \citep{jokipii1966cosmic} provides a physically appealing description of perpendicular diffusion in terms of the diffusive spread of magnetic field lines, with the gyrocenters of charged particles following the field lines. Other approaches have considered the relationship between $\kappa_\perp$ and $\kappa_\parallel$ \citep{Axford1965P&SS,Gleeson1969P&SS}, and applied the Taylor-Green-Kubo formulation (BAM, \citealt{bieber1997perpendicular}) to the problem. However, the field line random walk (FLRW) approach \citep{jokipii1966cosmic} overestimates the strength of the diffusion, while BAM predicts diffusion that is weaker than that observed in numerical experiments \citep{Giacalone1999ApJ,mace2000ApJ}. The NLGC theory \citep{matthaeus2003nonlinear} accounts for both the random walk of the field lines and the influence of parallel scattering, and shows good agreement with both observations \citep{bieber2004GRL} and simulations, with the NLGC results bracketed by the FLRW and BAM results \citep{matthaeus2003nonlinear}.
Recent work \citep{ruffolo2012random} has reinterpreted NLGC by replacing the diffusion of gyrocenter trajectories with a random ballistic decorrelation (RBD), where the guiding center motion is approximated as ballistic (i.e., with constant velocity) between scattering events. The RBD-modified theory agrees with numerical simulations over a wider range of fluctuation amplitudes than the original NLGC, specifically for fluctuations comparable in size to the large-scale field. This makes it particularly suited for application to the solar wind \citep{belcher1969JGR,marsch1991pihp}. Other improvements to NLGC have also been developed \citep[see, e.g.,][]{shalchi2009}.
The phenomenon of ``backtracking" due to parallel scattering causes a particle to reverse its motion along the field line, thus retracing its steps over a certain time-span. This leads to a negative $v_x$-correlation ($v_x$ being a component of the particle's velocity perpendicular to the mean field), which results in a reduction in the running perpendicular diffusion coefficient. With this backtracking correction, RBD yields the following perpendicular diffusion coefficient \citep{ruffolo2012random}:
\begin{equation} \label{eq:rbd}
\kappa_{\perp} = \frac{a^2 v^2}{6 B^2} \sqrt{\frac{\pi}{2}}
\int_0^\infty
\frac{S_{2} (k_\perp) \text{Erfc}(\alpha) 2 \pi k_{\perp} dk_{\perp}}
{k_\perp \sqrt{\langle \tilde{v}_x^2 \rangle}},
\end{equation}
where $a^2 = 1/3$, $v$ is the particle speed, $\tilde{v}_x$ is the $x$-component of the guiding center velocity, $S_{2}$ is the 2-D axisymmetric turbulent fluctuation spectrum, Erfc is the complementary error function, and $k_\perp$ is the component of the wave-vector perpendicular to the mean magnetic field. We also have
\begin{equation} \label{eq:alpha}
\alpha = \frac{v^2}{3 \kappa_\parallel k_\perp \sqrt{2 \langle \tilde{v}_x^2 \rangle}},
\end{equation}
and
\begin{equation}
\langle \tilde{v}^2_x \rangle = \frac{a^2 v^2 b^2} {6 B^2},
\end{equation}
where $b^2$ is the combined variance of the 2-D and slab magnetic fluctuations: $b^2 = \langle {b}_2^2 \rangle+\langle {b}_s^2 \rangle $. Note that in Equation \eqref{eq:rbd}, the slab turbulence spectrum does not appear. This is because we follow the suggestion by \cite{Shalchi2006A&A} that the direct contribution of the slab component to perpendicular transport is subdiffusive, and therefore the slab term should not contribute to Equation \eqref{eq:rbd}. This hypothesis has been supported by simulations \citep{ruffolo2012random,2012AGUFMSH21A2188R}, and accordingly, has been adopted in the present work as well. Slab fluctuations can, however, still influence $\kappa_\perp$ through $\kappa_\parallel$, which appears in Equation \eqref{eq:alpha} for $\alpha$, and $\langle \tilde{v}^2_x \rangle$.
The 2-D power spectrum may be expressed as a power law \citep{matthaeus2007spectral}
\begin{equation} \label{eq:twod1}
S_{2} (k_\perp \leq 1/\lambda_{2}) = C_{2} \langle b_{2}^2 \rangle \lambda_{2}^2
(\lambda_{2} k_\perp)^p,
\end{equation}
\begin{equation} \label{eq:twod2}
S_{2} (k_\perp > 1/\lambda_{2}) = C_{2} \langle b_{2}^2 \rangle \lambda_{2}^2
(\lambda_{2} k_\perp)^{-\nu -1},
\end{equation}
where $\lambda_2$ is the 2-D correlation scale, $C_2$ is a normalization constant, $\langle b_{2}^2 \rangle$ is the variance of the 2-D turbulent fluctuations, and $p$ is a power index that takes on integral values that correspond to different power spectra. We assume a Kolmogorov spectrum in the inertial range by taking $\nu=5/3$. From the requirement that $\langle b_2^2 \rangle = 2 \pi \int_0^\infty S_2(k) k~dk$, we get
\begin{equation}\label{eq:c2}
C_2 = \frac{(\nu-1)(p+2)}{2 \pi (p + \nu +1)}.
\end{equation}
Note that the inertial range ($k_\perp > 1/\lambda_2$) behavior is described by a conventional power law, and $p$ only determines the long-wavelength properties of the spectrum. The spectral behavior of interplanetary magnetic fluctuations at long wavelengths is not well determined from single point measurements \citep{matthaeus2016prl}, and there are ambiguities surrounding the question of whether the observed structures are spatial or temporal in origin. The observations of ``$1/f$" noise at low frequencies also complicate matters \citep{matthaeus1986prl}. All values of $p \geq -1$ yield power spectra that give rise to a finite energy, but these spectra may be differentiated based on the characteristic length scales associated with them. In addition to the standard correlation scale \citep{Batchelor1953book}, there is a distinct scale, called the ultrascale, which is of importance in applications of 2-D turbulence (\citealt{matthaeus2007spectral} and references therein). The ultrascale is so named because it is generally larger than the correlation scale, and it may be interpreted as a typical size of an ``island" of 2-D turbulence \citep{matthaeus1999conf} and as the perpendicular coherence length of the FLRW \citep{ruffolo2004ApJ}.
We consider the following cases \citep{matthaeus2007spectral}: $p=-1$ (infinite correlation scale and an infinite ultrascale), $p=0$ (finite correlation scale but an infinite ultrascale), and $p \geq 1$ (finite ultrascale and finite correlation scale). The case $p=2$ is of special interest since it corresponds to homogeneous turbulence. Each of the above possibilities is realizable as each yields a finite energy. However, unlike the correlation scale, the values taken by the ultrascale in space and astrophysical plasmas are not well known, and there is a paucity of established methods to measure it (see \citealt{matthaeus2007spectral} for a proposed technique). Therefore, it is of interest to examine the dependence of the diffusion coefficients on $p$. If there is a marked differentiation between the mfps computed for different cases, then observations of the mfps may be used to infer constraints on the ultrascales prevailing in the heliospheric plasma.
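The spectral forms of Equations \eqref{eq:twod1}--\eqref{eq:c2} can be summarized in a short Python sketch; the helper names are ours, and the check that $2\pi\int_0^\infty S_2(k)\,k\,dk = \langle b_2^2\rangle$ fixes the normalization $C_2$ for each admissible $p$.

```python
import math

def C2(p, nu=5.0 / 3.0):
    """Normalization constant of Eq. (c2), chosen so that
    <b_2^2> = 2*pi * integral of S_2(k) k dk over all k."""
    return (nu - 1.0) * (p + 2.0) / (2.0 * math.pi * (p + nu + 1.0))

def S2(k_perp, b2_2, lam2, p, nu=5.0 / 3.0):
    """Two-range 2-D power spectrum of Eqs. (twod1)-(twod2):
    a k^p energy-containing range below 1/lam2 and a Kolmogorov
    (nu = 5/3) inertial range above it."""
    c2 = C2(p, nu)
    x = lam2 * k_perp
    if k_perp <= 1.0 / lam2:
        return c2 * b2_2 * lam2**2 * x**p         # long-wavelength range
    return c2 * b2_2 * lam2**2 * x**(-nu - 1.0)   # inertial range
```

Note that the two branches match continuously at $k_\perp = 1/\lambda_2$, and that $p$ affects only the long-wavelength branch, as stated above.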
To finally obtain an expression for the perpendicular mean free path, we use Equations \eqref{eq:twod1} and \eqref{eq:twod2} in Equation \eqref{eq:rbd} and set $\nu = 5/3$ to get
\begin{equation} \label{eq:mfp_perp}
\begin{aligned}
\lambda_\perp &= \mathscr{F}_1 \Bigg\{ \frac{\lambda_{2}^{-2/3}}{5 \mathscr{F}_2^{5/3} \sqrt{\pi}} \bigg[ 3 \sqrt{\pi} \mathscr{F}_2^{5/3} \lambda_2^{5/3} \text{Erfc}\left( \mathscr{F}_2 \lambda_2 \right) \\
&+ \Gamma\left(\frac{1}{3}\right)- 3 \Gamma\left(\frac{4}{3},\mathscr{F}_2^2 \lambda_2^2 \right) \bigg] \\
&+ \delta_{p,-1} \lambda_2 \bigg[ \mathscr{F}_2 \lambda_2 \frac{2}{\sqrt{\pi}}
\tensor[_2]{\text{F}}{_2} \left(\frac{1}{2},\frac{1}{2};\frac{3}{2},\frac{3}{2};-\mathscr{F}_2^2 \lambda_2^2 \right)
\\
&-0.981755 - \log (\mathscr{F}_2 \lambda_2)
\bigg] \\
&+(1-\delta_{p,-1})\frac{\lambda_2}{p+1} \bigg[\text{Erfc}(\mathscr{F}_2 \lambda_2) \\
&- \frac{\mathscr{F}_2 \lambda_2}{\sqrt{\pi}}
E_{\frac{p}{2}+1}(\mathscr{F}_2^2 \lambda_2^2) \bigg]
\Bigg\},
\end{aligned}
\end{equation}
where
\begin{equation}
\mathscr{F}_1 = \sqrt{\pi^3} C_2 \frac{v \langle b_2^2 \rangle a^2}{B^2 \sqrt{2 \langle \tilde{v}^2_x \rangle}},
\end{equation}
and
\begin{equation}
\mathscr{F}_2 = \frac{v}{\lambda_\parallel \sqrt{2 \langle \tilde{v}^2_x \rangle}}.
\end{equation}
In Equation \eqref{eq:mfp_perp}, Erfc is the complementary error function, $\Gamma$ is the gamma function, $\tensor[_2]{\text{F}}{_2}$ is a hypergeometric function, $E_{\frac{p}{2}+1}$ is the generalized exponential integral function, and the Kronecker delta function is used as a switch between the four values of $p$. $C_2$ depends on the value of $p$, as can be seen from Equation~\eqref{eq:c2}. Note that in the corresponding NLGC result \citep{pei2010cosmic}, an implicit method is required to obtain $\lambda_\perp$, in contrast to the RBD result, which is an explicit solution for $\lambda_\perp$.
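As a numerical cross-check of the closed form, Equation \eqref{eq:rbd} can also be evaluated by direct quadrature of the spectrum of Equations \eqref{eq:twod1}--\eqref{eq:twod2}. The sketch below does this with a log-spaced trapezoidal rule; all parameter values in the accompanying example are illustrative choices of ours, in arbitrary consistent units, and the routine is not the implementation used for the figures in this paper.

```python
import math

def kappa_perp_rbd(v, B, kappa_par, b2_2, bs_2, lam2,
                   p=2, nu=5.0 / 3.0, a2=1.0 / 3.0, nk=4000):
    """Perpendicular diffusion coefficient from Eq. (rbd), by direct
    numerical integration over k_perp of the 2-D spectrum
    (Eqs. twod1-twod2), rather than via the closed form (mfp_perp)."""
    b2 = b2_2 + bs_2                           # total variance b^2
    vx2 = a2 * v * v * b2 / (6.0 * B * B)      # <v_x~^2>
    c2 = (nu - 1.0) * (p + 2.0) / (2.0 * math.pi * (p + nu + 1.0))
    pref = a2 * v * v / (6.0 * B * B) * math.sqrt(math.pi / 2.0)
    kmin, kmax = 1e-6 / lam2, 1e6 / lam2
    total, kprev, fprev = 0.0, None, None
    for i in range(nk + 1):
        k = kmin * (kmax / kmin) ** (i / nk)   # log-spaced grid
        x = lam2 * k
        S2 = c2 * b2_2 * lam2**2 * (x**p if x <= 1.0 else x**(-nu - 1.0))
        alpha = v * v / (3.0 * kappa_par * k * math.sqrt(2.0 * vx2))
        # the factor k_perp in the measure cancels the k_perp in the
        # denominator of Eq. (rbd)
        f = S2 * math.erfc(alpha) * 2.0 * math.pi / math.sqrt(vx2)
        if kprev is not None:
            total += 0.5 * (f + fprev) * (k - kprev)
        kprev, fprev = k, f
    return pref * total
```

With weak fluctuations ($b^2/B^2 \sim 0.1$) this yields $\kappa_\perp \ll \kappa_\parallel$, in line with the mfp ratios reported in Section 4.2.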
\section{Solar wind model}
Equations \eqref{eq:mfp_p} and \eqref{eq:mfp_perp} require specification of the large-scale IMF, and the magnetic fluctuation energies and correlation lengths for both slab and 2-D turbulence. For this purpose, we use a Reynolds-averaged Navier-Stokes approach, based on the Reynolds decomposition \citep[e.g.,][]{mccomb1991physics} of a physical field, $\tilde{\mathbf{a}}$, into a mean and a fluctuating component:
\begin{equation}
\tilde{\mathbf{a}} = \mathbf{a}+\mathbf{a'},
\end{equation}
where $\mathbf{a} = \langle \tilde{\mathbf{a}} \rangle$ is an ensemble average, associated with the large scales of motion, and $\mathbf{a'}$ is a fluctuating component, here assumed to be small scale. By construction, $\langle \mathbf{a'} \rangle = 0$. Application of this decomposition to the MHD equations, along with a series of approximations appropriate to the solar wind, leads us to a set of mean-flow equations that are coupled to the small-scale fluctuations via another set of equations for the statistical descriptors of turbulence. For the details of the procedure for handling the fluctuations, we refer the reader to \cite{breech2008turbulence}.
In this study, we use the solar wind model described in detail by
\cite{usmanov2014three}. It is a global, fully three-dimensional
magnetohydrodynamic model that accounts for the effects of fluctuations in
heating and acceleration of the solar wind flow. The computational domain,
which in the present study extends from the coronal base to 3~AU, is split
into three regions: inner ($1 - 20~R_\odot$), intermediate ($20 - 45~R_\odot$), and outer ($45~R_\odot - 3$ AU). In the inner region, steady-state solutions of one-fluid,
polytropic ($\gamma = 1.08$) solar wind equations with WKB Alfv\'en waves
are obtained by time relaxation starting from an initial state composed of
a Parker-type flow in a dipole magnetic field \citep{usmanov2000global,
usmanov2003tilted}. Two-fluid steady-state equations for protons
and electrons with Hollweg's electron heat flux and WKB Alfv\'en waves are
solved in the intermediate region by forward integration along the radius $r$
\citep{pizzo1978three,pizzo1982three,usmanov1993global}. The boundary conditions for
the intermediate region are extracted from the inner region solution. In
the outer region, we solve three-fluid (thermal protons, electrons, and
pickup protons) Reynolds averaged solar wind equations simultaneously with
transport equations for turbulence energy, cross helicity and
correlation length. Steady-state solutions in the outer region are obtained
by time relaxation, using an eddy-viscosity approximation for the Reynolds stress
tensor and turbulent electric field, with boundary conditions provided
by solutions in the intermediate region \citep{usmanov2014three}. The use of steady-state simulations is justified here since ambient solar wind conditions change on time scales long compared to the time energetic particles spend in the inner heliosphere.
In our calculations, we have used the same input parameters at the coronal
base as in \cite{usmanov2014three}: the driving amplitude of Alfv\'en waves
is set to 35~km~s$^{-1}$, the initial density is $0.4\times
10^8$~cm$^{-3}$, and the initial plasma temperature is $1.8\times 10^6$ K.
The magnetic field magnitude is assigned as the field strength of the
source magnetic dipole on the poles. This parameter is set to 16~G to match the
magnitude of the heliospheric magnetic field observed by Ulysses. The
computations are carried out on a composite spherical grid
\citep{usmanov1996global,usmanov2012three} using the Central Weighted
Essentially Non-Oscillatory (CWENO) spatially third-order reconstruction
algorithm of \cite{kurganov2000third}. The spatial CWENO
discretization is combined with the Strong Stability-Preserving Runge-Kutta
scheme of \cite{gottlieb2001strong} for time integration and the method of
\cite{powell1994approximate} for maintaining the $\nabla\cdot{\bf B} = 0$ condition.
For our purposes here, we extract from the outer region simulation ($45~R_\odot - 3$ AU) the mean magnetic field, $\mathbf{B}$, the fluctuation energy, $Z^2$ (defined below), and the correlation length for the turbulence, $\lambda$. Here,
\begin{equation} \label{eq:Z}
Z^2 = \langle v'^2 + b'^2 \rangle,
\end{equation}
is twice the turbulent energy per unit mass, defined in terms of the velocity and magnetic field fluctuations, $\mathbf{v}'$ and $\mathbf{B}'$, respectively. The amplitude of magnetic fluctuations has been normalized to Alfv\'{e}n units using $\mathbf{b}' = \mathbf{B}'(4 \pi \rho)^{-1/2}$, where $\rho$ is the mass density. To extend our calculation closer to the sun, we use data from the inner ($1 - 20~R_\odot$) and intermediate ($20 - 45~R_\odot$) regions, where the simulation does not have a turbulence model for $Z^2$ and $\lambda$. Here we use the WKB Alfv\'{e}n wave energy density \citep{usmanov2000global}, $\mathcal{E}$, as a proxy for the turbulent fluctuation energy via $Z^2 = 2 \mathcal{E} /\rho$. To get an approximation for the correlation scale in these regions, we use the hypothesis from \cite{Hollweg1986JGR} that the correlation length varies as the distance between magnetic field lines, which in turn depends on the field strength \citep{spruit1981NASSP}, so that $\lambda \propto B^{-1/2}$. We set the constant of proportionality such that $\lambda$ matches at the boundary between the intermediate and outer regions. We are currently working on refinements of the model that will modify the region in which turbulence modeling is included, so that this region will extend closer to the sun.
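The matching procedure just described amounts to fixing one constant from the boundary values. A minimal sketch (our own helper names; $B$ values are arbitrary illustrative numbers) is:

```python
def corr_length_proxy(B, B_match, lam_match):
    """Hollweg-type proxy lambda ~ B^(-1/2) for the inner and
    intermediate regions. The proportionality constant is fixed so that
    lambda equals lam_match where B = B_match, i.e., at the boundary
    between the intermediate and outer regions (45 R_sun)."""
    const = lam_match * B_match**0.5
    return const * B**-0.5
```

By construction the proxy is continuous with the dynamically evolved $\lambda$ at the region boundary, and it decreases toward the sun as $B$ grows.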
To proceed with the calculation of the mfps, some assumptions must be made in order to relate the correlation scale of our turbulence model ($\lambda$) to the slab and 2-D correlation scales in Equations~\eqref{eq:mfp_p} and \eqref{eq:mfp_perp}, respectively. First, we note that the turbulent fluctuations in our model are primarily transverse to the mean magnetic field \citep{breech2008turbulence}, and thus identify the correlation scale of 2-D turbulence to be equal to the correlation scale of our turbulence model, so that $\lambda_2 = \lambda$. Observational studies \citep{osman2007ApJ654,Weygand2009JGRA,weygand2011JGR116} indicate that the slab correlation scale is about a factor of two larger than the 2-D correlation scale, and accordingly, we assume $\lambda_s = 2 \lambda_2$. In our approximate treatment, we assume in effect that the magnetic and velocity correlation functions are structurally similar \citep{zank1996evolution}, so that the magnetic correlation length is found to be equal to the single correlation scale that we follow dynamically. In the inner heliosphere where the cross helicity is large, it becomes advantageous to employ a two correlation length theory \citep{mattheus1994JGR99,wan2012JFM697,Zank2012ApJ745,Zank2017ApJ835}, as has been implemented, e.g., by \cite{Adhikari2015ApJ805}.
To approximate the energy in slab and 2-D magnetic fluctuations, we first convert $Z^2$ to $B'^2$ using Equation~\eqref{eq:Z}:
\begin{equation}\label{eq:mag_fluc}
\langle B'^2 \rangle = \frac{Z^2}{r_A+1} 4 \pi \rho,
\end{equation}
where $r_A = \langle v'^2 \rangle /\langle b'^2 \rangle$ is the Alfv\'{e}n ratio. An accurate dynamical model for $r_A$ is desirable, but must include complications such as non-local effects \citep[e.g.,][]{grappin1983AaA26,mattheus1994JGR99,hossain1995PhFl}. At present we maintain a simpler approach, and take $r_A$ to have a value of 1 in the inner and intermediate regions ($1 - 45~R_\odot$), and a value of $1/2$ for heliocentric distances larger than $45~R_\odot$. These values are motivated by spacecraft observations \citep{Tu1995SSRv}, but we recognize that attempts have been made to treat $r_A$ dynamically \citep{grappin1983AaA26,marsch1989JPlPh41,tu1990JPlPh44,mattheus1994JGR99,yokoi2007PhPl,Zank2012ApJ745}. See especially the comparison with observations by \cite{Adhikari2015ApJ805} and \cite{Zank2017ApJ835}.
Next, recalling the assumption that the magnetic fluctuations have a dominant 2-D component with a small slab contribution, and following observations \citep{matthaeus1990JGR,bieber1994proton} that find the ratio of the 2-D and slab energies to be 80\% to 20\%, we use
\begin{equation}\label{eq:slab_2d}
\frac{\langle b^2_s \rangle}{\langle b^2_2 \rangle} = \frac{20}{80} = 0.25
\end{equation}
to compute the slab and 2-D fluctuation energies from Equation~\eqref{eq:mag_fluc} and $\langle b^2_2 \rangle + \langle b^2_s \rangle = \langle B'^2 \rangle$. In recent work by \cite{Hunana2010ApJ718} and \cite{Zank2017ApJ835}, refinements to this simplified perspective on the breakdown of the slab and 2-D fluctuation energies are discussed. In particular, \cite{Zank2017ApJ835} solve separate equations for the slab and 2-D energies with a simplified IMF and background solar wind flow. They find that the evolution of the two components is markedly different in the outer heliosphere (beyond $\sim3$ AU), where driving by pickup ions leads to an increase in the slab component's energy, while the energy of the 2-D component continues to decrease with heliocentric distance. Their results show, however, that the radial evolution of slab and 2-D energies is not too dissimilar below 3 AU. Similar results are presented by \cite{Oughton2011JGRA116} using their two-component model. Therefore, for the purposes of our present work, where we focus on the inner heliosphere, our simple decomposition of $\langle B'^2 \rangle$ into slab and 2-D components, using the constant ratio expressed in Equation~\eqref{eq:slab_2d}, seems appropriate. Studies of CR diffusion in the outer heliosphere would undoubtedly benefit from using a two-component turbulence transport model. A detailed assessment of different transport equations for turbulence is beyond the scope of this work.
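The chain from $Z^2$ to the slab and 2-D variances, via Equations \eqref{eq:mag_fluc} and \eqref{eq:slab_2d}, can be sketched compactly; the function below uses our own names, Gaussian-convention units, and illustrative inputs.

```python
import math

def slab_2d_energies(Z2, rho, r_A, ratio=0.25):
    """Decompose the turbulence energy Z^2 into slab and 2-D magnetic
    fluctuation variances. Eq. (mag_fluc) gives <B'^2>, and the fixed
    slab/2-D energy ratio of Eq. (slab_2d) splits it.
    Returns (<b_s^2>, <b_2^2>)."""
    B_fluc2 = Z2 / (r_A + 1.0) * 4.0 * math.pi * rho   # <B'^2>
    b2_2 = B_fluc2 / (1.0 + ratio)                     # 2-D component
    bs_2 = B_fluc2 - b2_2                              # slab component
    return bs_2, b2_2
```

With $r_A = 1$ (inner and intermediate regions), half of $Z^2$ resides in the magnetic fluctuations, of which 80\% is assigned to the 2-D component and 20\% to the slab component.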
\section{Results}
\subsection{Solar wind model results}
We begin our presentation of the results with a discussion of the core fields from the simulation, $B$, $\lambda$, and $Z^2$, which are the ingredients that go into our calculation of the diffusion coefficients. Figure \ref{fig:ingred1} shows the radial evolution of the turbulence energy and the turbulence correlation scale from our model and simulation with an untilted dipole source. The data are for a $7\degree$ heliolatitude, which we take to be the broadly-defined ecliptic region. Also shown are observational results from Voyager 2, Helios, and the National Space Science Data Center (NSSDC) Omnitape dataset, indicating a reasonable agreement with the simulation results. The observational data for $Z^2$ and $\lambda$ are from \cite{zank1996evolution} and \cite{smith2001JGR}, respectively. Note that the observations are for various times in the solar cycle, and are shown here for general context only. The dashed vertical lines in Figure~\ref{fig:ingred1} represent the boundaries of the different simulation regions, with red marking the inner-intermediate region boundary at $20~R_\odot$ and blue marking the intermediate-outer region boundary at $45~R_\odot$. Note that we present results for $r>2~R_\odot$ ($r$ is the radial distance measured from the solar center), even though the inner boundary of the inner region simulation is at $1~R_\odot$. The parallel mfp acquires extremely large values ($> 10$ AU) in the region very close to the solar surface, due to the large values of $B$ prevailing there. These large values of $\lambda_\parallel$ are not of physical relevance and present problems for visualization, and we therefore restrict our results to $r>2~R_\odot$.
Figure \ref{fig:ingred2} shows the distribution in the meridional plane of the three ingredients, $B$, $Z^2$, and $\lambda$, for a simulation with an untilted source dipole. The figures on the left are from the inner and intermediate regions ($2 - 45~R_\odot$), and the ones on the right are from the outer region ($0.21 - 3$ AU). For a detailed discussion of these simulation results, we refer the reader to \cite{usmanov2000global} and \cite{usmanov2014three}. We note here that the magnetic field results agree well with Ulysses observations \citep[see Figure~8 of][]{usmanov2014three}, with the field vanishing at the heliospheric current sheet (HCS) at $0\degree$ heliolatitude. The turbulence correlation scale increases with heliocentric distance, as is well known from observations \citep{Tu1995SSRv}. The turbulence energy increases on moving from the ecliptic plane towards higher heliolatitudes because of shear interactions between slow (low latitude) and fast (high latitude) wind \citep[See, e.g.,][]{breech2008turbulence}. In the following subsections, we will discuss how these distributions influence the behaviour of the diffusion length-scales.
\begin{figure}
\includegraphics[scale=.37]{ingred-eps-converted-to.pdf}
\caption{Model results near the ecliptic plane, for a run with an untilted solar dipole, are compared with observational data from Voyager 2, Helios, and the NSSDC Omnitape. The $Z^2$ data are from \cite{zank1996evolution}, and the $\lambda$ data are from \cite{smith2001JGR}. The solid lines are from our simulations. The different symbols represent different methods of calculation. The dashed vertical lines represent the boundaries of the different simulation regions, with red marking the inner-intermediate region boundary at $20~R_\odot$ and blue marking the intermediate-outer region boundary at $45~R_\odot$. Note that the observations are for various times in the solar cycle, and are shown here for general context only.}
\label{fig:ingred1}
\end{figure}
\begin{figure}
\includegraphics[scale=.4]{b1-eps-converted-to.pdf}
\includegraphics[scale=.4]{b2-eps-converted-to.pdf}
\includegraphics[scale=.4]{corr1-eps-converted-to.pdf}
\includegraphics[scale=.4]{corr2-eps-converted-to.pdf}
\includegraphics[scale=.4]{z1-eps-converted-to.pdf}
\includegraphics[scale=.4]{z2-eps-converted-to.pdf}
\caption{Contour plots of the heliospheric magnetic field ($B$), the turbulence correlation scale ($\lambda$), and the turbulence energy ($Z^2$) in the meridional plane for an untilted solar dipole. The figures on the left cover $2 - 45~R_\odot$, and the ones on the right cover $0.21 - 3$ AU ($45 - 645~R_\odot$).}
\label{fig:ingred2}
\end{figure}
\subsection{Radial evolution of mean free paths}
In Figure \ref{fig:mfp_untilt} we show the radial evolution of the parallel, perpendicular, and radial mfps (black, red, and blue lines, respectively) in the ecliptic region (Figure \ref{fig:mfp_untilt}a) and near the solar rotation axis ($86\degree$ heliolatitude, Figure \ref{fig:mfp_untilt}b), for an untilted source dipole. Also shown is the ratio of the perpendicular mfp to the parallel mfp (green lines). The solid, dotted, dashed, and dash-dotted lines correspond to $p=-1,0,1,$ and 2, respectively, and the mfps are computed for protons with rigidity equal to 445 MV, corresponding to a kinetic energy of 100 MeV. Here we would like to remind the reader that our turbulence parameters ($Z^2$ and $\lambda$) in the region $1 - 45~R_\odot$ are not from the turbulence model, but are calculated using the approximations detailed in Section 3. As such, these results represent a preliminary attempt at mapping the diffusion length scales in a region that will soon be investigated by upcoming spacecraft missions such as Solar Probe Plus.
Near the ecliptic plane (Figure \ref{fig:mfp_untilt}a), as one moves outward from the solar surface, the increasing strength of the turbulence energy (see Figure \ref{fig:ingred1}) leads to a sharp decrease in $\lambda_\parallel$ in the region $2 - 5~R_\odot$, with the rapidly decreasing IMF reinforcing this behaviour. In this region, $\lambda_\parallel \propto r^{-3.46}$, and there is a corresponding increase in $\lambda_\perp (\propto r^{3.55} \text{ for } p=-1 \text{ and } \propto r^{4.34} \text{ for } p=2)$. Since the IMF has a significant meridional component here, the large winding angle ($\Psi$) between the radial direction and the IMF leads to $\lambda_\perp$ having an influence on the radial mfp (see Equation \ref{kappa_r}), with $\lambda_{rr} \propto r^{-1.97}$. From $0.03 - 3$ AU, $\lambda_\parallel$ mostly increases as $r^{0.82}$, and $\lambda_\perp$ as $r^{0.79}$. From 0.1 to 3 AU, $\Psi$ is once again large because of the increased azimuthal component of the IMF, and $\lambda_\perp$ reduces the radial mfp, with $\lambda_{rr} \propto r^{0.53}$. Observational studies for $r< 3$ AU have found $\lambda_{rr} \propto r^b$ with $b$ ranging from $0.4 - 0.7$ \citep{beeck1987ApJ322}. Note that the radial mfp depends on the value of $p$ (through $\lambda_\perp$), but the $\lambda_{rr}$ curves for different $p$ coincide.
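The radial scaling indices quoted above (e.g., $\lambda_\parallel \propto r^{0.82}$) can be extracted from model output by a least-squares fit in log-log space. The following helper is a sketch of ours, not the routine used for the paper's figures:

```python
import math

def power_law_index(r, lam):
    """Least-squares slope b of lam ~ r^b, fitted in log-log space.
    r and lam are equal-length sequences of positive values."""
    x = [math.log(v) for v in r]
    y = [math.log(v) for v in lam]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    den = sum((xi - xbar) ** 2 for xi in x)
    return num / den
```

Applied piecewise over the radial intervals discussed here ($2 - 5~R_\odot$, $0.03 - 3$ AU, etc.), such fits recover the local power-law behaviour of the mfps.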
Moving on to the radial evolution of the mfps in the polar region, Figure \ref{fig:mfp_untilt}b shows that the radial mfp is completely dominated by $\lambda_\parallel$. This is because the IMF is near radial at the poles, with a very small winding angle. At the poles, $\lambda_{rr} \propto r^{-1.1}$ until 0.1 AU, after which it remains nearly constant, with identical behaviour exhibited by $\lambda_\parallel$. From $2~R_\odot - 0.2$ AU, $\lambda_\perp \propto r^{2.10}$ for $p=-1$ and $\lambda_\perp \propto r^{2.34}$ for $p=2$. From $0.2 - 3$ AU, $\lambda_\perp \propto r^{0.78}$ for $p=-1$ and $\lambda_\perp \propto r^{0.69}$ for $p=2$.
Figure \ref{fig:mfp_tilt} shows the effect of a source dipole with a $30\degree$ tilt when one encounters the heliospheric current sheet (HCS) at around 1 AU: $\lambda_\parallel$ goes through a sudden dip of almost two orders of magnitude, while $\lambda_\perp$ shows a corresponding increase of around an order of magnitude. (The radius at which the HCS crosses our chosen heliolatitude of $7\degree$ depends on the azimuthal angle for which we plot results as a function of radius.) The vanishing mean magnetic field and non-vanishing turbulence amplitude at the HCS explain this behaviour, which will be further illustrated in the next subsection discussing the 2-D variation of the mfps in the meridional plane. We note from Figures \ref{fig:mfp_untilt} and \ref{fig:mfp_tilt} that the ratio $\lambda_\perp/\lambda_\parallel$ stays between 0.1 and 0.01 for most of the inner heliosphere, but exceeds unity at the HCS. Although the current sheet is a singular region in our simulation, the fields in its vicinity take physically realizable values, and we stress that similarly large values of $\lambda_\perp/\lambda_\parallel$ have indeed been observed \citep{dwyer1997ApJ,zhang2003ApJ}. We will encounter these domains of significant perpendicular diffusion again in the meridional plane contours of Section 4.5, below.
In the results presented so far the choice of the long wavelength spectral index $p$ does not significantly alter the mfps, with $\lambda_\perp$ for $p=-1$ generally not more than a factor of two larger than $\lambda_\perp$ for $p=2$. Referring to the discussion in Section 2.2, this result indicates a rather weak dependence of the mfps on the ultrascale (via different $p$ values). The exception appears very close to the solar surface ($2~R_\odot$) in Figure \ref{fig:mfp_untilt}, where the perpendicular mean free path for the $p=-1$ case is several times larger than that for the $p=2$ case. This behaviour may be probed further in simulations with improved coronal turbulence models that are more reliable at such small heliocentric distances. In the following results, unless specified otherwise, we will choose $p=2$, which corresponds to homogeneous turbulence.
In Figure \ref{fig:turb_var} we examine the effect of varying the turbulence energy amplitude at the inner boundary (45 $R_\odot$) of the outer region of the simulation, again for 100 MeV protons. Such variation may arise due to solar activity. The solid lines represent a standard $Z^2$ specified at the inner boundary, and dashed and dotted lines represent simulations performed with double and half of this standard value specified at the inner boundary, respectively. In the ecliptic region ($7\degree$ heliolatitude), Figure \ref{fig:turb_var}a indicates, as expected, that an increasing turbulence level leads to a decrease in $\lambda_\parallel$ (and consequently $\lambda_{rr}$). The stronger turbulence increases $\lambda_\perp$ in proportion to $Z$, and therefore increases the extent to which particles may diffusively penetrate the heliosphere. Comparing Figures \ref{fig:turb_var}a and \ref{fig:turb_var}b, it is interesting to note that in the ecliptic region, varying turbulence at the inner boundary leads to an effect on $\lambda_\parallel$ that becomes less pronounced with radial distance. This is not the case in the polar regions with fast wind, however, where the turbulence is less ``aged'' compared with low latitudes \citep{matthaeus1998JGR}. Stream interactions near the ecliptic plane reduce the turbulence at a faster rate compared to the rate in the polar regions far from such shearing interactions.
\begin{table}
\setlength{\tabcolsep}{5pt}
\renewcommand{\arraystretch}{1.2}
\centering
\caption{Parallel mfps in AU for 100 MeV protons in the ecliptic region at 1 AU. B1, B2, and B3 are from \cite{breech2008turbulence}; P1 and P2 are from \cite{pei2010cosmic}; Cases 1--3 are our solutions for varying turbulence levels. Note that our calculation of $\lambda_\parallel$ is independent of $p$.} \label{tab1}
\begin{tabular}{c c c c c c c c c}
$p$ & B1 & B2 & B3 & P1 & P2 & Case 1 & Case 2 & Case 3 \\
\hline
-1 & 2.92 & 6.86 & 1.64 & 0.92 & 0.47 & \multirow{4}{*}{0.29} & \multirow{4}{*}{0.21} & \multirow{4}{*}{0.40} \\
0 & 2.33 & 5.49 & 1.31 & 0.74 & 0.38 & & & \\
1 & 2.14 & 5.03 & 1.20 & 0.68 & 0.35 & & & \\
2 & 2.04 & 4.80 & 1.15 & 0.64 & 0.33 & & & \\
\hline
\end{tabular}
\end{table}
We end this subsection by comparing our solutions in the ecliptic plane with ``consensus'' constraints on observations \citep{palmer1982RvGSP,bieber1994proton}. Based on information compiled from several sources, the Palmer consensus finds that for particles in the rigidity range $0.5 - 5000$ MV, $\lambda_\parallel = 0.08 - 0.3$ AU. We note here that the values for the mfps obtained by fitting observational data may depend on the model used; \cite{reames1999SSRv} reviews some such results and suggests a higher parallel mfp of $\sim 1$ AU. Our $\lambda_\parallel$ for a 100 MeV proton at 1 AU varies from $0.29 - 0.40$ AU, and is broadly consistent with the consensus range. Our solutions are smaller than the values from \cite{breech2008turbulence} and \cite{pei2010cosmic}, which we list in Table \ref{tab1}, along with our results. Here, cases 1, 2, and 3 refer to standard, doubled, and halved turbulence levels, as described above. Note that unlike our calculation of $\lambda_\parallel$, the calculations from \cite{breech2008turbulence} and \cite{pei2010cosmic} depend on the value of $p$.
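The comparison with the Palmer consensus can be made explicit in a few lines. The sketch below checks the Table \ref{tab1} values for Cases 1--3 against the nominal $0.08 - 0.3$ AU bounds; the helper function and case labels are illustrative, not part of our model code.

```python
# Check the Table 1 parallel mfps (100 MeV protons, ecliptic region, 1 AU)
# against the nominal Palmer consensus range of 0.08 - 0.3 AU.
# Function and labels are illustrative, not from the simulation code.

PALMER_RANGE_AU = (0.08, 0.3)

def in_palmer_range(lam_par_au, bounds=PALMER_RANGE_AU):
    """True if a parallel mean free path (in AU) lies within the bounds."""
    lo, hi = bounds
    return lo <= lam_par_au <= hi

# Cases 1-3: standard, doubled, and halved turbulence levels (Table 1).
cases = {"Case 1 (standard)": 0.29,
         "Case 2 (doubled turbulence)": 0.21,
         "Case 3 (halved turbulence)": 0.40}

for label, lam in cases.items():
    print(f"{label}: lambda_parallel = {lam:.2f} AU, "
          f"within consensus range: {in_palmer_range(lam)}")
```

Only Case 3 falls slightly above the nominal upper bound, which is still compatible with the higher fitted values of $\sim 1$ AU reviewed by \cite{reames1999SSRv}.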
Our improved agreement with the Palmer consensus range may be attributed to two improvements in modeling: (1) Here $B$ is a spatially varying field computed dynamically from a self-consistent 3-D model, in contrast to the Parker-type model used in \cite{breech2008turbulence} and \cite{pei2010cosmic}; (2) The effect of shear interactions is computed self-consistently in our turbulence model \citep{usmanov2014three}, unlike in \cite{breech2008turbulence} and \cite{pei2010cosmic}, where a shear-driving parameter is employed.
\begin{figure}
\includegraphics[scale=.36]{mfp_eclip_untilt-eps-converted-to.pdf}
\includegraphics[scale=.36]{mfp_pole_untilt-eps-converted-to.pdf}
\caption{Radial dependence of the parallel (black), perpendicular (red), and radial (blue) mfps (a) near the ecliptic plane ($7\degree$ heliolatitude) and (b) near the pole ($86\degree$ heliolatitude). Also shown is $\lambda_\perp/\lambda_\parallel$ (green). The solid lines are for $p=-1$, the dotted lines for $p=0$, the dashed lines for $p=1$, and the dash-dotted lines for $p=2$. Proton rigidity is 445 MV (100 MeV kinetic energy). Note that the curves for $\lambda_\parallel$ and $\lambda_{rr}$ coincide in (b).}
\label{fig:mfp_untilt}
\end{figure}
\begin{figure}
\includegraphics[scale=.36]{mfp_eclip_30tilt-eps-converted-to.pdf}
\caption{Radial dependence of the parallel (black), perpendicular (red), and radial (blue) mfps near the ecliptic plane ($7\degree$ heliolatitude), with a solar dipole having a $30\degree$ tilt. For our particular choice of azimuthal angle ($26\degree$), an HCS crossing occurs at 0.8 AU. Also shown is $\lambda_\perp/\lambda_\parallel$ (green). The solid lines are for $p=-1$, the dotted lines for $p=0$, the dashed lines for $p=1$, and the dash-dotted lines for $p=2$. Proton rigidity is 445 MV (100 MeV kinetic energy).}
\label{fig:mfp_tilt}
\end{figure}
\begin{figure}
\includegraphics[scale=.36]{turb_var_eclip-eps-converted-to.pdf}
\includegraphics[scale=.36]{turb_var_pole-eps-converted-to.pdf}
\caption{Radial dependence of the parallel (black), perpendicular (red), and radial (blue) mfps (a) near the ecliptic plane ($7\degree$ heliolatitude) and (b) in the polar region ($86\degree$), for varying turbulence amplitudes, with $p=2$. The dashed and dotted lines represent simulations with the turbulence energy ($Z^2$) at the inner boundary of the outer region ($45~R_\odot$) doubled and halved, respectively, relative to a standard level. See text for more details. Note that the curves for $\lambda_\parallel$ and $\lambda_{rr}$ coincide in (b).}
\label{fig:turb_var}
\end{figure}
\subsection{Latitudinal evolution of mean free paths}
Figure \ref{fig:mfp_lat} shows the variation of mfps with latitude at different heliocentric distances for an untilted solar dipole. We see from Figure~\ref{fig:mfp_lat}a that, in general, $\lambda_\parallel$ (solid lines) increases by almost an order of magnitude as one leaves the solar equatorial plane and moves to higher latitudes, and assumes a near constant value as one approaches the polar regions. The opposite behaviour is seen for $\lambda_\perp$ (dashed lines), which decreases on moving away from the equatorial plane. This is a combined result of the increase in the IMF strength and the correlation scale of the turbulence ($\lambda$) while moving away from the solar equatorial plane (i.e., away from the HCS), and the increase in the turbulence energy due to shear-interactions between slow and fast solar winds. We note that very close to the sun (4 $R_\odot$, black line), $\lambda_\parallel$ first decreases with latitude as one leaves the equatorial plane, then increases at higher latitudes, to values larger even than those seen at larger heliocentric distances. This behavior arises because, close to the sun, the IMF increases monotonically with latitude. At larger distances, the IMF plateaus with increasing latitude, and from 1 AU onwards it decreases in the polar regions (see Figure \ref{fig:ingred2}). Thus, particles experience less scattering in polar regions close to the sun. This also explains the latitudinal variation of $\lambda_\perp$ at 4 $R_\odot$.
Figure \ref{fig:mfp_lat}b shows the increase in $\lambda_{rr}$ as one moves towards the polar regions, and illustrates once again the fact that while $\lambda_{rr}$ is affected by $\lambda_\perp$ very close to the sun at low latitudes, near the polar regions it follows the trend set by $\lambda_\parallel$. Figure \ref{fig:mfp_lat}c shows that the ratio of $\lambda_\perp$ to $\lambda_\parallel$ decreases as one leaves the solar equatorial plane (i.e., away from the HCS), with the perpendicular mfp staying 1-2 orders of magnitude below the parallel mfp, except very close to the sun (4 $R_\odot$, black line) where it becomes 3 orders of magnitude smaller because of the low turbulence levels in that region. We will examine the latitudinal dependence of the mfps once again in meridional plane figures in Section 4.5, below.
\begin{figure}
\includegraphics[scale=.36]{mfp_pp_lat_untilt-eps-converted-to.pdf}
\includegraphics[scale=.36]{mfp_rr_lat_untilt-eps-converted-to.pdf}
\includegraphics[scale=.36]{mfp_rat_lat_untilt-eps-converted-to.pdf}
\caption{The top panel (a) shows the latitudinal dependence of parallel (solid lines) and perpendicular (dashed lines) mfps. The middle (b) and bottom (c) panels show the latitudinal variation of $\lambda_{rr}$ and $\lambda_\perp / \lambda_\parallel$, respectively. All panels are for an untilted solar dipole and $p=2$. Black, blue, green, and red lines represent radial distances of 0.02, 0.2, 1, and 3 AU (4, 45, 215, and 645 $R_\odot$), respectively. Proton rigidity is 445 MV (100 MeV kinetic energy).}
\label{fig:mfp_lat}
\end{figure}
\subsection{Rigidity dependence of mfps}
In Figure \ref{fig:rig} we plot the rigidity ($P$) dependence of mfps for protons at different radial distances in the ecliptic and polar regions. Below 1 AU, $\lambda_\parallel \propto P^{0.33}$ for all rigidities considered here ($10 - 10^4$ MV). Above 1 AU there is a steepening of the slope for rigidities larger than $10^3$ MV. As noted in Section 2.1, this is due to high energy particles resonating with turbulent fluctuations in the energy containing range instead of the inertial range. As the IMF ($B$) decreases with heliocentric distance, a high rigidity particle's Larmor radius ($R_L = P/(Bc)$) may become resonant with the correlation scale of the turbulence ($\lambda_s$). When $R_L/\lambda_s \gg 1$, the expression in braces in Equation~\eqref{eq:mfp_p} scales with rigidity as $P^{5/3}$, and we have $\lambda_\parallel \propto P^2$ instead of $\lambda_\parallel \propto P^{1/3}$. Indeed, for rigidities $\sim 10^4$ MV we find that $\lambda_\parallel \propto P^{1.2}$ at 1 AU and $\lambda_\parallel \propto P^{1.8}$ at 3 AU \citep[see also the discussion on the effect of pickup ion driven turbulence on high-rigidity particles in the outer heliosphere in][]{zank1998radial}. Our results agree well with the observations shown in \cite{bieber1994proton}, with power indices ranging from 0.2 to 0.56 for a number of solar events where rigidity ranges from 10 to $10^3$ MV. Our results also agree with the theoretical and numerical findings in \cite{bieber1994proton} and \cite{pei2010cosmic}.
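The resonance criterion behind this steepening can be sketched numerically. In the snippet below the field strength, correlation scale, and the normalization of $\lambda_\parallel$ are illustrative assumptions (typical 1 AU values), not outputs of the simulation; only the piecewise exponents $1/3$ and $2$ follow the discussion above.

```python
C_LIGHT = 2.998e8  # speed of light, m/s
AU_M = 1.496e11    # astronomical unit, m

def larmor_radius_au(P_MV, B_nT):
    """Larmor radius R_L = P/(B c), in AU, for rigidity P in MV, field B in nT."""
    return (P_MV * 1e6) / ((B_nT * 1e-9) * C_LIGHT) / AU_M

def lambda_par_scaling(P_MV, B_nT, corr_scale_au, lam0_au=0.3, P0_MV=445.0):
    """Illustrative piecewise rigidity scaling of the parallel mfp:
    ~P^(1/3) for inertial-range resonance (R_L < lambda_s), steepening
    toward ~P^2 once R_L/lambda_s >> 1.  lam0_au and P0_MV set a
    hypothetical normalization; they are not model outputs."""
    ratio = larmor_radius_au(P_MV, B_nT) / corr_scale_au
    exponent = 1.0 / 3.0 if ratio < 1.0 else 2.0
    return lam0_au * (P_MV / P0_MV) ** exponent

# With B ~ 5 nT and lambda_s ~ 0.007 AU at 1 AU, a 445 MV proton
# (R_L ~ 0.002 AU) resonates in the inertial range, whereas a 10^4 MV
# proton has R_L/lambda_s > 1 and falls in the steepened regime.
```
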
In general, $\lambda_\perp$ shows lower variation with rigidity. In the polar regions $\lambda_\perp$ stays nearly constant with rigidity. This behavior is consistent with the finding of \cite{bieber2004GRL} that NLGC predicts a very weak rigidity dependence, and they note that this is supported by observations for rigidities between $10^2 - 10^4$ MV. Note that the rigidity profiles of $\lambda_\parallel$ and $\lambda_\perp$ that we derive from simulation results and diffusion theories are quite different from some that have been employed in the literature to model solar modulation of Galactic cosmic rays \citep[e.g., see Figure 12 of][]{vos2015ApJ815}.
\begin{figure}
\includegraphics[scale=.36]{rig_eclip-eps-converted-to.pdf}
\includegraphics[scale=.36]{rig_pole-eps-converted-to.pdf}
\caption{Rigidity dependence of $\lambda_\parallel$ (solid line) and $\lambda_\perp$ (dashed line), (a) near the ecliptic plane ($7\degree$ heliolatitude), and (b) in the polar regions ($86\degree$ heliolatitude), for an untilted solar dipole and $p=2$. Black, blue, green, and red lines represent radial distances of 0.02, 0.2, 1, and 3 AU (4, 45, 215, and 645 $R_\odot$), respectively.}
\label{fig:rig}
\end{figure}
\subsection{Meridional plane contours}
In this section, we describe the variation of $\lambda_\parallel, \lambda_\perp, \lambda_{rr}$, and $\lambda_\perp/\lambda_\parallel$ in meridional planes for 100 MeV protons, complementing results of the previous sections. Figure \ref{fig:merid_untilt} shows results from a simulation with a source magnetic dipole that is untilted with respect to the solar rotation axis. It is clear that at the HCS, with its vanishing magnetic field, perpendicular diffusion is comparable to parallel diffusion in most of the inner heliosphere, with $\lambda_\perp$ and $\lambda_\parallel$ both around $0.01$ AU. In the broader ecliptic plane, however, $\lambda_\parallel$ remains 1-2 orders of magnitude above $\lambda_\perp$, varying from 0.01 to almost 1 AU over radial distances from 10 $R_\odot$ to 3 AU, while $\lambda_\perp$ increases from $\sim 0.0001$ to $0.01$ AU. As noted in the 1-D plots, very close to the sun $\lambda_\parallel$ experiences a dramatic increase to a value of 1 AU due to the weak turbulence and strong magnetic field prevailing there.
We also see that at radial distances of $1.5 - 3$ AU, $\lambda_\parallel$ is a few times larger at lower latitudes, compared to values in polar regions. This is because the IMF decreases and the turbulence energy increases with latitude at these radial distances, leading to a reduction in parallel diffusion in the polar regions, and a corresponding increase in perpendicular diffusion. This can also be seen in Figure~\ref{fig:merid_untilt}h showing contours of $\lambda_\perp/\lambda_\parallel$, which increases by nearly one order of magnitude from low latitudes to the poles. The radial mfp increases uniformly with heliocentric distance at lower latitudes, but is dominated by $\lambda_\parallel$ in polar regions, because of the small winding angle between the IMF and the radial direction here. This leads to $\lambda_{rr}$ acquiring a nearly constant value of around 0.2 AU in polar regions beyond 2 AU.
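The near-alignment of the IMF with the radial direction at high latitudes follows from the Parker-spiral winding angle; the rotation rate and wind speeds used below are typical values adopted for illustration, not quantities extracted from the simulation.

```python
import math

OMEGA_SUN = 2.7e-6  # solar rotation rate, rad/s (typical value, assumed)
AU_M = 1.496e11     # astronomical unit, m

def parker_spiral_angle_deg(r_au, colatitude_deg, v_wind_ms):
    """Winding angle between the Parker-spiral IMF and the radial
    direction: psi = arctan(Omega * r * sin(theta) / V)."""
    theta = math.radians(colatitude_deg)
    psi = math.atan2(OMEGA_SUN * (r_au * AU_M) * math.sin(theta), v_wind_ms)
    return math.degrees(psi)

# Slow wind near the ecliptic (colatitude 90 deg) vs. fast wind near the
# pole (colatitude 4 deg, i.e., 86 deg heliolatitude), both at 1 AU:
ecliptic = parker_spiral_angle_deg(1.0, 90.0, 4.0e5)  # roughly 45 deg
polar = parker_spiral_angle_deg(1.0, 4.0, 7.5e5)      # only a few deg
```

The nearly radial field at high latitudes is why $\lambda_{rr}$ tracks $\lambda_\parallel$ there.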
Figure \ref{fig:merid_tilt} shows contour plots for mfps in the meridional plane at azimuthal angle equal to $26\degree$, for a simulation with a source magnetic dipole that is tilted by $30\degree$ with respect to the solar rotation axis. In this case, solar rotation produces an asymmetrical magnetic field structure, which has a striking effect on the diffusion parameters, with the displacement of the current sheet from the ecliptic plane modifying their distribution at low latitudes. Note that the blob-like structures in Figures~\ref{fig:merid_tilt}f and \ref{fig:merid_tilt}h arise due to grid points coinciding with the HCS. The rapid decrease in the magnitude of the IMF near the HCS leads to the formation of the blob contours around grid points where $B$ vanishes. This effect is not seen in Figure~\ref{fig:merid_untilt} for the untilted dipole case, where the HCS lies at $0\degree$ heliolatitude, where no grid points are present, by construction.
As noted previously in Section 4.2, observations indicate that the ratio $\lambda_\perp/\lambda_\parallel$ may approach, and even exceed, unity. In our simulation, this happens at the HCS. The basic features described above for the untilted dipole are still present in this case, but are now organized with respect to the tilted HCS. During periods when solar activity levels are high, the warped current sheet is spread out across a larger portion of the heliosphere (Figure \ref{fig:merid_tilt}) compared with the low activity case (untilted dipole, Figure \ref{fig:merid_untilt}), and the HCS is thus more likely to influence CRs.
\section{Conclusions and Discussion}
We have presented a detailed analysis of the diffusion coefficients for cosmic ray transport in the inner heliosphere. The purpose is to use a well-tested, fully 3-D global simulation of the solar wind, with turbulence modeling, to obtain the heliospheric distribution of the large-scale heliospheric magnetic field, the energy in the turbulent fluctuations, and the correlation scale of the turbulence. This distribution has been coupled with a quasi-linear theory for parallel diffusion, and the recent random ballistic decorrelation interpretation of the non-linear guiding center theory for perpendicular diffusion. The present work extends previous studies on the heliospheric diffusion of cosmic rays by \cite{bieber1995diffusion}, \cite{zank1998radial}, and \cite{pei2010cosmic}, but has a stronger focus on the inner heliosphere, with the inner boundary of our simulations at 1 $R_\odot$. Recent complementary work \citep{guo2016ApJ} carries out similar computations of diffusion coefficients for the outer heliosphere.
We find that at the heliospheric current sheet $\lambda_\perp$ can be greater than $\lambda_\parallel$, but usually $\lambda_\parallel$ is 1-2 orders of magnitude larger through most of the inner heliosphere. Very close to the sun ($2~R_\odot$), the strong IMF leads to a large value of $\lambda_\parallel$ ($\sim 0.5$ AU), which initially decreases for several solar radii before increasing with radial distance at low to intermediate latitudes, and becomes nearly constant at the polar regions. $\lambda_\perp$ increases with heliocentric distance throughout the inner heliosphere, and is larger in the polar regions compared to low latitudes. $\lambda_{rr}$ is dominated by $\lambda_\parallel$ through most of the inner heliosphere. However, $\lambda_\perp$ does affect $\lambda_{rr}$ in parts of the near-ecliptic region. Our estimations of $\lambda_\parallel$ near the ecliptic plane at 1 AU show good agreement with the Palmer consensus range of $0.08 - 0.3$ AU.
At heliocentric distances below 1 AU, we find that the parallel mfp varies with rigidity as $P^{0.33}$ for all rigidities considered here ($10 - 10^4$ MV). Above 1 AU, highly energetic particles begin to resonate with turbulent fluctuations in the energy containing scales, and the rigidity dependence of $\lambda_\parallel$ steepens. The perpendicular mfp is weakly dependent on rigidity. Our results on the rigidity dependence of mfps are consistent with observations.
The mfps are found to be weakly dependent on the type of power spectrum used to represent the large scale fluctuations. This suggests that any attempts to use spacecraft observations of mfps to infer constraints on the ultrascale would be challenging. The effects of solar activity (via a tilted solar dipole and variations of turbulence levels) are also studied, with increased activity leading to stronger perpendicular diffusion and weaker parallel diffusion.
The model we have adopted for turbulence transport has been thoroughly studied and tested \citep{breech2008turbulence}. More elaborate models, with more transport equations (and more free parameters), are available \citep{Zank2012ApJ745}. In particular, these models include extensions such as dynamically variable residual energy, separate transport equations for slab and 2-D fluctuations, and as many as three distinct dynamically evolving correlation lengths \citep{Oughton2011JGRA116,Zank2017ApJ835}. For the present we forgo the associated additional complication and rely on this model's demonstrated ability to account very well for a variety of observations \citep{usmanov2011solar,usmanov2012three,usmanov2014three}.
We also remark that the turbulent fluctuations we follow dynamically are the quasi-two dimensional fluctuations that we assume are energetically dominant. A variety of studies \citep{matthaeus1990JGR,zank1993nearly,bieber1994proton,bieber1996dominant} are consistent with the dominance of quasi-2D fluctuations in solar wind turbulence. In the present approach we assumed that the quasi-slab component of the fluctuations, which represents perhaps 20\% of the total fluctuation energy, is a constant fraction of the turbulence energy. Useful extensions have been presented by \cite{Oughton2011JGRA116,Zank2017ApJ835} that adopt somewhat different approaches with the common goal of independently transporting both 2-D and slab-like fluctuations. As noted above, these models find that the radial evolution of 2-D and slab fluctuation energies is not too dissimilar in the inner heliosphere, and therefore our decomposition of the total turbulence energy into slab and 2-D components using a constant ratio appears reasonable. These models also show that in the outer heliosphere (above 3-4 AU), the energy in the slab fluctuations increases with heliocentric distance due to driving by pickup ions, while the 2-D fluctuation energy continues to decrease. As such, studies of CR diffusion in the outer heliosphere would undoubtedly benefit from using a two-component turbulence transport model.
Such models have been implemented \citep{wiengarten2016ApJ833,Shiota2017ApJ837}, with many differences relative to the present model. For example, the \cite{Shiota2017ApJ837} model has a more elaborate transport formalism, as described above, but neglects the impact of turbulence on the background flow and relies on ad-hoc shear terms instead of fully coupling to the large-scale solar wind solutions. In contrast, we employ a dynamic eddy-viscosity model \citep{usmanov2014three} to achieve this coupling. Clearly no model at present is a complete treatment, and there are advantages and trade-offs in various approaches. We hope to advance our own model with additional refinements in the near future.
We anticipate that 3-D calculations of the CR diffusion coefficients in the way we have demonstrated here, employing large scale solar wind solutions with turbulence transport and turbulence modeling, will become increasingly important for realistic energetic particle transport calculations in the future. We also note that related types of diffusion coefficients, such as drag or self-diffusion, may be similarly estimated using adaptations of the above approach, as described briefly in the Appendix. Studies of phenomena such as shock-ensembles and super-events \citep{Mueller-Mellin1986,Kunow1991book}, where several shocks merge to influence energetic particle transport at widely separated locations, would benefit enormously from such 3-D studies in model heliospheres. Our findings of domains where $\lambda_\perp/\lambda_\parallel \geq 1$ may be used to further study the effects of significant perpendicular diffusion, which has been seen to reduce the SEP flux and make it more uniform \citep{zhang2009ApJ}. Additional development at the MHD level will be needed to utilize this kind of tool for explaining observed SEP events associated with transient phenomena such as flares, CMEs and interplanetary shocks \citep{ruffolo2006ApJ,droge2016ApJ,agueda2016ApJ}. In the present paper we have not undertaken specific calculations employing the diffusion coefficients we obtained using a global model; this is deferred to future work. We anticipate that this approach will be useful in understanding Solar Probe Plus observations of energetic particles near the Sun.
As we have now demonstrated that such an approach can provide detailed three dimensional information concerning both MHD transport and particle mean free paths, it becomes clear that what will be needed are improved methods for driving this kind of model with more sophisticated and detailed solar observations. Meanwhile, we are continuing to improve our MHD modeling by building a coronal module that includes a full turbulence transport model, and by further developing the eddy viscosity approach \citep{usmanov2014three}. Future work could also investigate the influence of drifts on CR modulation. To facilitate use of the data from this model for particle transport calculations of relevance to current-generation energetic particle and Space Weather studies, we are uploading as Supplementary Material the 3-D grids of the diffusion coefficients that were described here.
\section{Appendix}
Here we present an estimation of a general turbulent diffusion coefficient that is based on Taylor's formulation of the problem \citep{taylor1921ProcLonMathSoc}. The diffusion coefficient for the passive transport of any quantity in a turbulent neutral fluid may be approximated by \citep{Choudhuri1998book}
\begin{equation} \label{eq:taylor_diff}
D_T \approx \frac{1}{3} \langle v^2 \rangle \tau_{\text{cor}},
\end{equation}
where $\langle v^2 \rangle$ is the mean square turbulent velocity and $\tau_{\text{cor}}$ is the correlation time of the turbulence. By assuming $\langle v^2 \rangle \sim Z^2$, and defining the turbulence correlation length $\lambda \sim Z \tau_{\text{cor}}$, we rewrite the above equation as
\begin{equation} \label{eq:drag}
D_T \propto Z \lambda.
\end{equation}
Note that any standard diffusion coefficient, drag coefficient, eddy viscosity, or other similar quantity can be expressed in a form similar to Equation~\eqref{eq:drag}, i.e., as a product of a characteristic velocity and a length scale (see, for example, \cite{Tennekes1972book}).
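The two relations above reduce to a one-line estimate: with $\langle v^2 \rangle \sim Z^2$ and $\tau_{\text{cor}} \sim \lambda/Z$, Equation~\eqref{eq:taylor_diff} gives $D_T \sim \frac{1}{3} Z \lambda$, as in Equation~\eqref{eq:drag}. A minimal sketch (the function name is ours, for illustration only):

```python
def taylor_diffusion(Z, lam):
    """Taylor-type turbulent diffusion estimate: D_T ~ (1/3) <v^2> tau_cor,
    with <v^2> ~ Z^2 and tau_cor ~ lam / Z, so that D_T ~ Z * lam / 3."""
    tau_cor = lam / Z            # correlation time from lam ~ Z * tau_cor
    return (Z ** 2) * tau_cor / 3.0

# D_T is linear in both the turbulence amplitude Z and the correlation
# scale lam, consistent with D_T proportional to Z * lam: doubling either
# one doubles the estimate.
```
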
In Figure \ref{fig:merid_drag} we show contour plots for $D_T$ in the meridional plane, computed from a simulation with a solar dipole that is untilted with respect to the solar rotation axis. We may interpret $D_T$ as a turbulent drag coefficient, which is of relevance to the propagation of CMEs in the solar wind. At high heliolatitudes, the drag coefficient increases from the solar surface to 0.5 AU, and then gradually decreases. Notably, at heliocentric distances smaller than 0.5 AU, $D_T$ increases by an order of magnitude in moving from the ecliptic to polar regions. This implies that a CME would be ``channelled'' to lower latitudes as it propagates through the inner heliosphere. Applications involving these more general approximations to diffusion processes may also be enabled by the approach described in the present paper.
\acknowledgments{This research is partially supported by NASA grant NNX14AI63G
(Heliophysics Grand Challenges Research), NASA LWS grants NNX15AB88G and NNX13AR42G,
and the Solar Probe Plus mission through the ISOIS project and SWRI subcontract D99031L, and the Thailand Research Fund (grant RTA5980003). The authors would like to thank the anonymous Referee for their thorough reading of the manuscript and useful suggestions for its improvement.}
\bibliographystyle{apj}
\section{Introduction}
The interaction of energetic particles with the solar wind is a topic of wide interest in space physics and astrophysics. Several varieties of charged particles populate the heliosphere, including energetic particles originating at the sun (solar energetic particles, or SEPs) and galactic cosmic rays (GCRs) that enter the heliosphere uniformly and nearly isotropically from the outside \citep{Kunow1991book}. These cosmic rays (CRs) are strongly guided and scattered by the solar wind and the turbulent fluctuations that transport with it \citep{parker1956modulation,parker1964scattering,jokipii1966cosmic}. As such, the study of the origin and transport of cosmic rays is an important problem in heliospheric physics, with implications ranging from space weather and exploration to fundamental space plasma physics \citep{jokipii1971review,fisk1978interactions,Kunow1991book}. The effects of these energetic particles on the health of astronauts \citep{Parker2005SWE} and the well-being of electronic components in spacecraft \citep{Tylka1997} are an immediate concern. In addition, the accuracy with which we can understand CR propagation also provides a testbed for energetic particle transport in numerous space and astrophysical applications \citep{Kulsrud1969ApJ,droge2003}. The solar wind provides us with an opportunity to observe, at close range, the behavior of energetic particles in random, turbulent magnetic fields \citep{bruno2013LRSP}. Such fields are ubiquitous in astrophysical systems \citep{Candia2004}, and the insights we glean from studies of CRs in the heliosphere can potentially find application elsewhere in the universe. Finally, observations of cosmic rays can also serve as probes into solar activity and solar wind structure, as CR variations are seen to be correlated with solar and geomagnetic activity \citep{snyder1963}.
Theories of the modulation of cosmic rays in the heliosphere attempt to explain the observed temporal and spatial variation in their spectra \citep{fisk1978interactions,potgieter2013solar}, and for that purpose, require a knowledge of the cosmic ray diffusion tensor. In fact, one of the key challenges in solving the Parker CR transport equation \citep{parker1965PSS} is the inadequate knowledge of the spatial, temporal, and rigidity dependence of the components of the diffusion tensor. In turn, the specification of this tensor through the heliosphere requires an understanding of two topics. First, a theoretical understanding of the diffusion process itself is needed, which would lead to predictions of the structure of the diffusion tensor. Equally important is the knowledge of the large scale flows and electromagnetic field in the plasma, and the distribution of background solar wind turbulence in which the particles are scattered. The present approach permits all three of these properties (diffusion tensor, large scale flow, large scale electromagnetic field) to be computed three-dimensionally and, in principle, time-dependently within a single model.
The formal structure of the diffusion tensor involves diagonal components corresponding to diffusion parallel and perpendicular to the interplanetary magnetic field (IMF), as well as off-diagonal components describing perpendicular drifts (e.g., \citealt{moraal1976SSRv,minnie2007ApJ}). While quasi-linear theory \citep{jokipii1966cosmic} extended to include time-dependent and non-linear corrections \citep{goldstein1976ApJ,bieber1994proton,droge2003} provides a relatively good accounting of parallel diffusion, theories of perpendicular diffusion have faced the challenge of accounting for non-linear effects such as transfer of particles across field lines, backscatter from parallel diffusion, and field-line random walk \citep{jokipii1966cosmic,Giacalone1999ApJ}. The non-linear guiding center (NLGC) theory (\citealt{matthaeus2003nonlinear}; see also \citealt{shalchi2009}) accounts for the above, and is further improved by the random ballistic interpretation of \cite{ruffolo2012random}. In the current work we focus on the parallel and perpendicular diffusion coefficients; the drift motion could be a topic for future work.
Since turbulent fluctuations are responsible for scattering CRs, the diffusion theories mentioned above typically involve turbulence parameters such as the energy of the random magnetic fluctuations and correlation scales. In the solar wind, low-frequency turbulence evolves via a non-linear cascade, while also being transported and processed by the large-scale radially expanding solar wind. At all but the smallest scales, these processes are well described by magnetohydrodynamic (MHD) models \citep{marsch1989dynamics,zhou1990transport}. Over the years, turbulence models have incorporated simplifying assumptions relevant to the solar wind, yielding increased tractability of the governing equations \citep{zank1996evolution,matthaeus1999turbulence}. The increased sophistication of the models and improvements in computational power have led to numerical simulations yielding good agreement with \textit{Voyager, Ulysses, Helios}, and \textit{WIND} observations \citep{breech2008turbulence,usmanov2011solar}. These turbulence models have also been used to study the propagation of coronal mass ejections \citep{Wiengarten2015ApJ}. Extensions to the \cite{breech2008turbulence} model have been developed \citep{Oughton2011JGRA116,Zank2012ApJ745,Zank2017ApJ835}, and applied to the inhomogeneous solar wind \citep{Shiota2017ApJ837}.
Our strategy for evaluating the CR diffusion coefficients through the inner heliosphere consists of two steps: first, specification of the relevant turbulence parameters based on a global solar wind model, and second, evaluation of the CR diffusion coefficients using the specified heliographic distribution of turbulence. For the first step, we deduce turbulence parameters from a global, three-dimensional (3-D) MHD simulation of the solar wind \citep{usmanov2014three}.
The spatial resolution that can be realistically achieved in such simulations cannot resolve the small-scale fluctuations that cause scattering of CRs. For instance, the spatial resolution of our simulation, at 1 AU, can be estimated as roughly 0.03 AU. However, the mean free path, at 1 AU, for scattering perpendicular to the mean magnetic field has been estimated to be as low as 0.001 AU \citep{zhang2009ApJ,pei2010cosmic}, and the correlation scale of the turbulence has been estimated to be around 0.007 AU \citep{matthaeus2005,bruno2013LRSP}. This is where our turbulence model for the ``sub-resolution" physics comes in. Our simulation explicitly resolves the large-scale, mean solar wind bulk flow, which is coupled to small-scale inhomogeneities by means of an MHD-Reynolds-averaged Navier-Stokes \citep[RANS; see, e.g.,][]{mccomb1991physics} model for the random fluctuations. The simulation has been well-tested, and gives reasonable agreement with many spacecraft observations of large-scale solar wind fields, turbulence parameters (energy, cross helicity and correlation scale), as well as the temperature, for varying heliocentric distance, and where feasible, varying heliolatitude \citep{usmanov2011solar,usmanov2012three,usmanov2016four}. In recent ``applied" work, the simulation has been used to study the collisional age of the solar wind plasma \citep{chhiber2016solar}, and we view the present work as a continuation of such application-oriented studies.
Once the turbulence parameters are specified through the model heliosphere, for the second step of our calculation, we use, as a starting point, fairly standard, well-tested formalisms for parallel and perpendicular diffusion coefficients: quasi-linear theory \citep{jokipii1966cosmic,bieber1995diffusion,zank1998radial} to compute the parallel component of the diffusion tensor, and the random ballistic decorrelation (RBD) interpretation of NLGC theory \citep{matthaeus2003nonlinear,ruffolo2012random} for perpendicular diffusion.
Previous studies of the heliographic dependence of the CR diffusion coefficients include work based on both WKB models for Alfv\'{e}n waves \citep{volk1974spatial,morfill1979latitude}, and models for strong turbulence \citep{bieber1995diffusion,zank1998radial,pei2010cosmic}. The present work builds on these studies, but also makes some significant departures, motivated and enabled by recent advances in diffusion theory and sophistication of solar wind simulations. The major points of departure from previous work are listed below:
1. We use a fully 3-D global simulation of the solar wind that provides us with a reliable and self-consistent model heliosphere. Previous work has used one-dimensional (1-D) radial evolution models with spherical symmetry, with shear-driving effects included through a model \citep{zank1998radial,pei2010cosmic}. Thus, while examining latitudinal dependence of the diffusion tensor, these studies implicitly assume that they are far from regions with significant latitudinal gradients. In contrast, three-dimensionality improves the physical fidelity of the simulation by explicitly including shear-driving effects on the flow across latitudes, and leads to improved data visualization through two-dimensional (2-D) contour plots. A similar 3-D approach has been recently used in \cite{guo2016ApJ} to study the propagation of GCRs from 0.3 AU to the termination shock.
2. The computation of the CR diffusion tensor requires specification of the background solar wind speed, and the underlying large-scale heliospheric magnetic field. Previous work \citep{bieber1995diffusion,zank1998radial,pei2010cosmic} used a radially constant solar wind speed with some latitudinal variation, and a Parker-spiral type magnetic field model. However, the use of a prescribed model for the background fields has been found inadequate \citep{Reinecke1997AdSpR,steenberg1997alternative}, and instead we use the large-scale, resolved flow from our MHD-RANS simulation. This provides a complete specification of the background large-scale fields, with spatial variation that has been found to agree well with observations \citep{usmanov2014three}.
3. We examine the diffusion coefficients at radial distances between 2 $R_\odot$ and 3 AU, where $R_\odot$ denotes a solar radius. We are not aware of any other similar study that has probed regions this close to the sun, which are of prime interest for SEP propagation, space weather, and for upcoming spacecraft missions, including Solar Probe Plus. Resolving this entire domain ($2~R_\odot - 3$ AU) in one simulation is a challenge, as modeling approximations that are appropriate very close to the sun may not be valid at larger heliocentric distances. Furthermore, the timescales associated with the different domains are disparate \citep{hundhausen1972coronal,Tu1995SSRv,bruno2013LRSP}. We use an approach where the computational domain is split into three regions: inner ($1-20~R_\odot$), intermediate ($20 - 45~R_\odot$), and outer ($45~R_\odot - 3$ AU). The inner and intermediate regions employ a WKB Alfv\'{e}n wave model, and the outer region solves a full turbulence transport model, with the inner boundary conditions for each region being provided by the preceding one \citep{usmanov2014three}.
4. A magnetic dipole with its tilt (relative to the solar rotation axis) varying through the solar activity cycle is a first and rough approximation for the solar magnetic field \citep{babcock1961ApJ133}. We examine the effect of changing the tilt of the source solar dipole by using simulations with a dipole untilted with respect to the solar rotation axis, and a dipole with {30\degree} tilt, in contrast to previous work employing axisymmetric solar wind parameters \citep{zank1998radial,pei2010cosmic}. The tilt of the solar dipole and the warping of the heliospheric current sheet \citep{smith2001JGRsheet} indicate high levels of solar activity \citep{heber2006}, which is a factor of interest since CR intensity is anticorrelated with solar activity levels \citep{forbush1954JGR,fisk1978interactions}. We note here that previous work that examined the effect of solar activity on CR-intensity variation \citep{jokipii1995conf} did not include turbulence modeling, and here we examine how varying turbulence levels influence the diffusion coefficients.
5. The perpendicular diffusion coefficient has been previously evaluated using the ``BAM" model \citep{bieber1997perpendicular} by \cite{zank1998radial}, and the NLGC theory \citep{matthaeus2003nonlinear} by \cite{pei2010cosmic} and \cite{zank2004JGR}. Recently, the NLGC theory has been reinterpreted by \cite{ruffolo2012random}, and their RBD theory yields a significantly improved agreement with numerical experiments, for magnetic fluctuation amplitudes comparable to the large-scale magnetic field. This makes it very well suited for application to the solar wind, where the IMF includes a strong fluctuating component \citep{belcher1969JGR,marsch1991pihp}, and we use the RBD theory to derive a new expression for the perpendicular diffusion coefficient.
6. With the above improvements, the present approach departs significantly from both SEP studies \citep[e.g.,][]{zhang2009ApJ} and GCR modulation studies \citep[e.g.,][]{Engelbrecht2013} that have used relatively simplified assumptions in one or more of the above categories, such as semiempirical diffusion coefficients and simple scalings with magnetic field magnitude.
The outline of the paper is as follows: We describe the form of the CR diffusion tensor in Section 2, and briefly discuss the turbulence model and the simulation in Section 3. Section 4 presents the heliographic distribution of the diffusion coefficients. In an Appendix we briefly describe how other types of diffusion coefficients might be estimated using similar approaches.
\section{Cosmic Ray Diffusion Tensor}
The CR diffusion tensor, $\kappa_{ij}$, describes the scattering of CRs by random fluctuations in the IMF. It may be expressed as \citep{parker1965PSS,jokipii1970ApJ}
\begin{equation}\label{eq:kappa}
\kappa_{ij} = \kappa_\perp \delta_{ij} + \frac{B_i B_j}{B^2} (\kappa_\parallel - \kappa_\perp) + \epsilon_{ijk} \kappa_A \frac{B_k}{B},
\end{equation}
where $\mathbf{B}$ is the mean IMF, $\delta_{ij}$ is the Kronecker delta, and $\epsilon_{ijk}$ is the Levi-Civita symbol. This work presents calculations of $\kappa_\parallel$ and $\kappa_\perp$, which are the diagonal components of the diffusion tensor parallel and perpendicular, respectively, to the mean IMF.
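As a concrete illustration, Equation~\eqref{eq:kappa} assembles into a $3\times3$ matrix once a local mean-field vector and the three coefficients are given; the following minimal Python sketch (numpy assumed; all input values are illustrative, not simulation output) builds the tensor term by term:

```python
import numpy as np

def diffusion_tensor(B_vec, kappa_par, kappa_perp, kappa_A=0.0):
    """Assemble kappa_ij = kappa_perp*delta_ij
    + (B_i B_j / B^2) * (kappa_par - kappa_perp)
    + eps_ijk * kappa_A * B_k / B."""
    B = np.linalg.norm(B_vec)
    bhat = np.asarray(B_vec, dtype=float) / B
    # Levi-Civita symbol eps_ijk
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    return (kappa_perp * np.eye(3)
            + np.outer(bhat, bhat) * (kappa_par - kappa_perp)
            + kappa_A * np.einsum('ijk,k->ij', eps, bhat))

# With kappa_A = 0 the tensor is symmetric, with eigenvalue kappa_par
# along bhat and kappa_perp (twice) across it.
K = diffusion_tensor([1.0, 1.0, 0.0], kappa_par=1.0, kappa_perp=0.1)
```

The antisymmetric $\kappa_A$ term, when included, contributes only off-diagonal elements and does not alter the diagonal diffusive part.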
The present work does not calculate $\kappa_A$, which can describe particle drifts under the influence of large-scale gradients and curvature in the IMF. Our results are directly relevant to the outward propagation of SEPs, for which $\kappa_\parallel$ and $\kappa_\perp$ are needed to describe how the SEP distribution spreads in the parallel and perpendicular directions, whereas over the short time scale of the SEP outflow the drifts may mainly shift the lateral distribution over a small angle. The lateral distribution of particle injection is often unknown, and the effects of drifts are often neglected, though \cite{marsh2013ApJ} argue that they should be considered. Both diffusion and drifts are considered to be important to the modulation of GCR with the solar cycle and the small gradients in GCR density \citep{moraal1976SSRv,jokipii1981ApJ}, though these processes take place over a wider region than considered in the present work ($r\le3$ AU).
We shall also examine the radial diffusion coefficient
\begin{equation}\label{kappa_r}
\kappa_{rr} \equiv \kappa_\parallel \cos^2 \Psi + \kappa_\perp \sin^2 \Psi,
\end{equation}
which is of particular relevance to models of solar modulation of CRs. Here, $\Psi$ is the ``winding" angle between the IMF and the radial direction. Following previous work, we define mean free paths, $\lambda_{\parallel,\perp}$, that are equivalent to the diffusion tensor through
\begin{equation}
\lambda_{\parallel,\perp} \equiv 3 \kappa_{\parallel,\perp}/v,
\end{equation}
where $v$ is the particle speed.
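Equation~\eqref{kappa_r} and the mfp conversion reduce to one-liners in code; a minimal sketch with illustrative numbers (near 1 AU the Parker winding angle is roughly $45\degree$, in which case $\kappa_{rr}$ is the mean of $\kappa_\parallel$ and $\kappa_\perp$):

```python
import numpy as np

def kappa_rr(kappa_par, kappa_perp, psi):
    # Radial diffusion coefficient; psi is the winding angle in radians.
    return kappa_par * np.cos(psi)**2 + kappa_perp * np.sin(psi)**2

def mfp(kappa, v):
    # Mean free path, lambda = 3 * kappa / v.
    return 3.0 * kappa / v

# Illustrative: winding angle of ~45 degrees near 1 AU.
k_rr = kappa_rr(kappa_par=1.0, kappa_perp=0.1, psi=np.pi / 4.0)
```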
We note that in the present work we use the large-scale flow from our simulation to specify $B$ and $\Psi$ as spatially varying fields through the 3-D heliosphere. This is in contrast to previous studies \citep{bieber1995diffusion,zank1998radial,pei2010cosmic}, where $B$ and $\Psi$ were specified through a Parker-type model and a radially constant solar wind speed (to compute $\Psi$). However, the features of the IMF have a major influence on CR transport, and a Parker-type field is an oversimplification, particularly at high heliolatitudes (see \cite{heber2006} for an overview of suggested modifications to the Parker field). Moreover, the use of a priori prescribed background fields in modulation studies has been held responsible for restricting the diffusion tensor to values that preclude agreement of models with observations \citep{Reinecke1997AdSpR,steenberg1997alternative}, and the present work makes a significant improvement in this regard.
\subsection{Parallel mean free path}
In determining the parallel mean free path (mfp), the turbulence ``geometry", i.e., the distribution of energy over parallel and perpendicular wavevectors, is a controlling factor. Observations \citep{bieber1994proton} show that a pure ``slab" model of heliospheric turbulence \citep{jokipii1966cosmic} underestimates the parallel mfp. In the slab model, the magnetic fluctuations are polarized perpendicular to the mean field and their wave-vectors are parallel to the mean field. \cite{bieber1994proton} find that a composite model with a dominant 2-D part (fluctuations and their wave-vectors both perpendicular to the mean field) and a minor slab part provides a better approximate parametrization of the turbulence and an improved description of the observed mean free paths. Furthermore, theoretical studies and observations \citep{matthaeus1990JGR,zank1992waves,zank1993nearly,bieber1996dominant,ghosh1997anisotropy} suggest that around 80\% of magnetic fluctuation energy in the inertial range should reside in the 2-D component, with the rest in the slab component.
In the following, we take the $z$-component along the mean field. Considering parallel diffusion first, we note that in quasilinear theory the 2-D fluctuations are effectively invisible to CRs resonating with the turbulence, and the scattering by slab fluctuations (assumed to be axisymmetric) is described by the parallel mfp \citep{zank1998radial}
\begin{equation}\label{eq:mfp_p}
\begin{aligned}
\lambda_\parallel &= 6.2742 \frac{B^{5/3}}{\langle b^2_s \rangle} \left(\frac{P}
{c}\right)^{1/3} \lambda^{2/3}_{s} \\
&\times \left[1 + \frac{7A/9}{(1/3 + q)(q + 7/3)}\right],
\end{aligned}
\end{equation}
where
\begin{equation}
A = (1+s^2)^{5/6} - 1,
\end{equation}
\begin{equation}
q = \frac{5s^2/3}{1+s^2-(1+s^2)^{1/6}},
\end{equation}
\begin{equation}
s = 0.746834 \frac{R_L}{\lambda_s},
\end{equation}
and a model 1-D Kolmogorov spectrum is assumed, with a power spectrum of the form $\tilde{P}(k_\parallel) \propto (1+k_\parallel^2 \lambda_s^2)^{-5/6}$. Here $c$ is the speed of light, $R_L = P/(Bc)$ the particle Larmor radius, $\langle b_s^2 \rangle$ the variance of the slab geometry fluctuation, $P \equiv \tilde{p}c/Ze$ the particle rigidity ($\tilde{p}$ and $Ze$ are the particle momentum and charge, respectively), $k_\parallel$ is the wave vector parallel to the mean field, and $\lambda_s$ the correlation length for slab turbulence. Equation \eqref{eq:mfp_p} is valid at rigidities ranging from 10 MV to 10 GV \citep{zank1998radial}. At larger heliocentric distances, the fractional term in brackets becomes significant due to high-rigidity particles resonating with fluctuations in the energy-containing range instead of the inertial range. This is discussed further below in the context of the rigidity dependence of the mfps (Section 4.4).
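For reference, Equation~\eqref{eq:mfp_p} with its auxiliary quantities $A$, $q$, and $s$ transcribes directly into code. The sketch below uses dimensionless illustrative inputs; in practice all quantities must be supplied in mutually consistent (e.g., Gaussian) units:

```python
import numpy as np

def lambda_parallel(B, b_s2, P, lam_s, c=1.0):
    """Quasi-linear parallel mean free path; inputs must be in
    mutually consistent units (c defaults to 1 for a dimensionless
    illustration)."""
    R_L = P / (B * c)                      # Larmor radius
    s = 0.746834 * R_L / lam_s
    A = (1.0 + s**2)**(5.0 / 6.0) - 1.0
    q = (5.0 * s**2 / 3.0) / (1.0 + s**2 - (1.0 + s**2)**(1.0 / 6.0))
    bracket = 1.0 + (7.0 * A / 9.0) / ((1.0 / 3.0 + q) * (q + 7.0 / 3.0))
    return (6.2742 * B**(5.0 / 3.0) / b_s2 * (P / c)**(1.0 / 3.0)
            * lam_s**(2.0 / 3.0) * bracket)

# The mfp grows monotonically with rigidity: roughly as P^(1/3) at
# low rigidity, and faster once the bracketed correction (resonance
# with the energy-containing range) becomes significant.
```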
\subsection{Perpendicular mean free path}
Perpendicular diffusion is often not considered as important as parallel diffusion in energetic particle studies, because it is usually inferred that $\lambda_\perp \ll \lambda_\parallel$ \citep{palmer1982RvGSP}. However, \cite{dwyer1997ApJ} found that for strong particle enhancements related to corotating interaction regions, $\lambda_\perp/\lambda_\parallel$ rose to $\sim1$ in the fast solar wind stream arriving after the stream interface. Using data from the \textit{Ulysses} spacecraft during the SEP event of 2000 Jul 14, \cite{zhang2003ApJ} inferred $\lambda_\perp/\lambda_\parallel\approx 0.25$. Our 3-D model inner heliosphere provides an opportunity to examine the domains where perpendicular diffusion can be comparable with parallel diffusion.
Quasi-linear theory \citep{jokipii1966cosmic} provides a physically appealing description of perpendicular diffusion in terms of the diffusive spread of magnetic field lines, with the gyrocenters of charged particles following the field lines. Other approaches have considered the relationship between $\kappa_\perp$ and $\kappa_\parallel$ \citep{Axford1965P&SS,Gleeson1969P&SS}, and applied the Taylor-Green-Kubo formulation (BAM, \citealt{bieber1997perpendicular}) to the problem. However, the field line random walk (FLRW) approach \citep{jokipii1966cosmic} overestimates the strength of the diffusion, while BAM predicts diffusion that is weaker than that observed in numerical experiments \citep{Giacalone1999ApJ,mace2000ApJ}. The NLGC theory \citep{matthaeus2003nonlinear} accounts for both the random walk of the field lines and the influence of parallel scattering, and shows good agreement with both observations \citep{bieber2004GRL} and simulations, with the NLGC results bracketed by the FLRW and BAM results \citep{matthaeus2003nonlinear}.
Recent work \citep{ruffolo2012random} has reinterpreted NLGC by replacing the diffusion of gyrocenter trajectories with a random ballistic decorrelation (RBD), where the guiding center motion is approximated as ballistic (i.e., with constant velocity) between scattering events. The RBD-modified theory agrees with numerical simulations over a wider range of fluctuation amplitudes than the original NLGC, specifically for fluctuations comparable in size to the large-scale field. This makes it particularly suited for application to the solar wind \citep{belcher1969JGR,marsch1991pihp}. Other improvements to NLGC have also been developed \citep[see, e.g.,][]{shalchi2009}.
The phenomenon of ``backtracking" due to parallel scattering causes a particle to reverse its motion along the field line, thus retracing its steps over a certain time-span. This leads to a negative $v_x$-correlation ($v_x$ being a component of the particle's velocity perpendicular to the mean field), which results in a reduction in the running perpendicular diffusion coefficient. With this backtracking correction, RBD yields the following perpendicular diffusion coefficient \citep{ruffolo2012random}:
\begin{equation} \label{eq:rbd}
\kappa_{\perp} = \frac{a^2 v^2}{6 B^2} \sqrt{\frac{\pi}{2}}
\int_0^\infty
\frac{S_{2} (k_\perp) \text{Erfc}(\alpha) 2 \pi k_{\perp} dk_{\perp}}
{k_\perp \sqrt{\langle \tilde{v}_x^2 \rangle}},
\end{equation}
where $a^2 = 1/3$, $v$ is the particle speed, $\tilde{v}_x$ is the $x$-component of the guiding center velocity, $S_{2}$ is the 2-D axisymmetric turbulent fluctuation spectrum, Erfc is the complementary error function, and $k_\perp$ is the component of the wave-vector perpendicular to the mean magnetic field. We also have
\begin{equation} \label{eq:alpha}
\alpha = \frac{v^2}{3 \kappa_\parallel k_\perp \sqrt{2 \langle \tilde{v}_x^2 \rangle}},
\end{equation}
and
\begin{equation}
\langle \tilde{v}^2_x \rangle = \frac{a^2 v^2 b^2} {6 B^2},
\end{equation}
where $b^2$ is the combined variance of the 2-D and slab magnetic fluctuations: $b^2 = \langle {b}_2^2 \rangle+\langle {b}_s^2 \rangle $. Note that in Equation \eqref{eq:rbd}, the slab turbulence spectrum does not appear. This is because we follow the suggestion by \cite{Shalchi2006A&A} that the direct contribution of the slab component to perpendicular transport is subdiffusive, and therefore the slab term should not contribute to Equation \eqref{eq:rbd}. This hypothesis has been supported by simulations \citep{ruffolo2012random,2012AGUFMSH21A2188R}, and accordingly, has been adopted in the present work as well. Slab fluctuations can, however, still influence $\kappa_\perp$ through $\kappa_\parallel$, which appears in Equation \eqref{eq:alpha} for $\alpha$, and $\langle \tilde{v}^2_x \rangle$.
The 2-D power spectrum may be expressed as a power law \citep{matthaeus2007spectral}
\begin{equation} \label{eq:twod1}
S_{2} (k_\perp \leq 1/\lambda_{2}) = C_{2} \langle b_{2}^2 \rangle \lambda_{2}^2
(\lambda_{2} k_\perp)^p,
\end{equation}
\begin{equation} \label{eq:twod2}
S_{2} (k_\perp > 1/\lambda_{2}) = C_{2} \langle b_{2}^2 \rangle \lambda_{2}^2
(\lambda_{2} k_\perp)^{-\nu -1},
\end{equation}
where $\lambda_2$ is the 2-D correlation scale, $C_2$ is a normalization constant, $\langle b_{2}^2 \rangle$ is the variance of the 2-D turbulent fluctuations, and $p$ is a power index that takes on integral values that correspond to different power spectra. We assume a Kolmogorov spectrum in the inertial range by taking $\nu=5/3$. From the requirement that $\langle b_2^2 \rangle = 2 \pi \int_0^\infty S_2(k) k~dk$, we get
\begin{equation}\label{eq:c2}
C_2 = \frac{(\nu-1)(p+2)}{2 \pi (p + \nu +1)}.
\end{equation}
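The normalization in Equation~\eqref{eq:c2} can be verified numerically by integrating the model spectrum of Equations~\eqref{eq:twod1}--\eqref{eq:twod2} over the 2-D wavevector plane; a short check (Python with scipy is assumed):

```python
import numpy as np
from scipy.integrate import quad

NU = 5.0 / 3.0   # Kolmogorov inertial-range index

def C2(p):
    # Normalization constant of the 2-D model spectrum.
    return (NU - 1.0) * (p + 2.0) / (2.0 * np.pi * (p + NU + 1.0))

def S2(k, p, b2=1.0, lam2=1.0):
    # Piecewise power law: (lam2*k)^p below k = 1/lam2, and a
    # Kolmogorov form (exponent -nu-1) above it.
    x = lam2 * k
    pref = C2(p) * b2 * lam2**2
    return pref * x**p if x <= 1.0 else pref * x**(-NU - 1.0)

# 2*pi * int_0^inf S2(k) k dk should recover the total variance b2.
p = 2
low, _ = quad(lambda k: 2.0 * np.pi * S2(k, p) * k, 0.0, 1.0)
high, _ = quad(lambda k: 2.0 * np.pi * S2(k, p) * k, 1.0, np.inf)
total = low + high   # ~1.0 for the default b2 = 1
```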
Note that the inertial range ($k_\perp > 1/\lambda_2$) behavior is described by a conventional power law, and $p$ only determines the long-wavelength properties of the spectrum. The spectral behavior of interplanetary magnetic fluctuations at long wavelengths is not well determined from single point measurements \citep{matthaeus2016prl}, and there are ambiguities surrounding the question of whether the observed structures are spatial or temporal in origin. The observations of ``$1/f$" noise at low frequencies also complicate matters \citep{matthaeus1986prl}. All values of $p \geq -1$ yield power spectra that give rise to a finite energy, but these spectra may be differentiated based on the characteristic length scales associated with them. In addition to the standard correlation scale \citep{Batchelor1953book}, there is a distinct scale, called the ultrascale, which is of importance in applications of 2-D turbulence (\citealt{matthaeus2007spectral} and references therein). The ultrascale is so named because it is generally larger than the correlation scale, and it may be interpreted as a typical size of an ``island" of 2-D turbulence \citep{matthaeus1999conf} and as the perpendicular coherence length of the FLRW \citep{ruffolo2004ApJ}.
We consider the following cases \citep{matthaeus2007spectral}: $p=-1$ (infinite correlation scale and an infinite ultrascale), $p=0$ (finite correlation scale but an infinite ultrascale), and $p \geq 1$ (finite ultrascale and finite correlation scale). The case $p=2$ is of special interest since it corresponds to homogeneous turbulence. Each of the above possibilities is realizable as each yields a finite energy. However, unlike the correlation scale, the values taken by the ultrascale in space and astrophysical plasmas are not well known, and there is a paucity of established methods to measure it (see \citealt{matthaeus2007spectral} for a proposed technique). Therefore, it is of interest to examine the dependence of the diffusion coefficients on $p$. If there is a marked differentiation between the mfps computed for different cases, then observations of the mfps may be used to infer constraints on the ultrascales prevailing in the heliospheric plasma.
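The distinction between these cases can be made quantitative: up to normalization, the squared ultrascale is the $k^{-2}$-weighted integral of the spectrum over 2-D wavevector space, i.e., proportional to $\int_0^\infty S_2(k_\perp)/k_\perp\, dk_\perp$, which diverges at small $k_\perp$ for $p \leq 0$ and converges for $p \geq 1$. A short numerical illustration (normalization omitted, since only convergence is at issue; Python with scipy assumed):

```python
import numpy as np
from scipy.integrate import quad

NU = 5.0 / 3.0

def S2(k, p, lam2=1.0):
    # Model 2-D spectrum: (lam2*k)^p at low k, Kolmogorov above 1/lam2.
    C2 = (NU - 1.0) * (p + 2.0) / (2.0 * np.pi * (p + NU + 1.0))
    x = lam2 * k
    return C2 * lam2**2 * (x**p if x <= 1.0 else x**(-NU - 1.0))

def ultra_integral(p, kmin):
    # int_kmin^inf S2(k)/k dk: proportional to the squared ultrascale;
    # it stays finite as kmin -> 0 only for p >= 1.
    low, _ = quad(lambda k: S2(k, p) / k, kmin, 1.0, limit=200)
    high, _ = quad(lambda k: S2(k, p) / k, 1.0, np.inf)
    return low + high

# Lowering the infrared cutoff: unbounded (logarithmic) growth for
# p = 0, rapid convergence for p = 2.
growth_p0 = ultra_integral(0, 1e-8) - ultra_integral(0, 1e-2)
growth_p2 = ultra_integral(2, 1e-8) - ultra_integral(2, 1e-2)
```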
To finally obtain an expression for the perpendicular mean free path, we use Equations \eqref{eq:twod1} and \eqref{eq:twod2} in Equation \eqref{eq:rbd} and set $\nu = 5/3$ to get
\begin{equation} \label{eq:mfp_perp}
\begin{aligned}
\lambda_\perp &= \mathscr{F}_1 \Bigg\{ \frac{\lambda_{2}^{-2/3}}{5 \mathscr{F}_2^{5/3} \sqrt{\pi}} \bigg[ 3 \sqrt{\pi} \mathscr{F}_2^{5/3} \lambda_2^{5/3} \text{Erfc}\left( \mathscr{F}_2 \lambda_2 \right) \\
&+ \Gamma\left(\frac{1}{3}\right)- 3 \Gamma\left(\frac{4}{3},\mathscr{F}_2^2 \lambda_2^2 \right) \bigg] \\
&+ \delta_{p,-1} \lambda_2 \bigg[ \mathscr{F}_2 \lambda_2 \frac{2}{\sqrt{\pi}}
\tensor[_2]{\text{F}}{_2} \left(\frac{1}{2},\frac{1}{2};\frac{3}{2},\frac{3}{2};-\mathscr{F}_2^2 \lambda_2^2 \right)
\\
&-0.981755 - \log (\mathscr{F}_2 \lambda_2)
\bigg] \\
&+(1-\delta_{p,-1})\frac{\lambda_2}{p+1} \bigg[\text{Erfc}(\mathscr{F}_2 \lambda_2) \\
&- \frac{\mathscr{F}_2 \lambda_2}{\sqrt{\pi}}
E_{\frac{p}{2}+1}(\mathscr{F}_2^2 \lambda_2^2) \bigg]
\Bigg\},
\end{aligned}
\end{equation}
where
\begin{equation}
\mathscr{F}_1 = \sqrt{\pi^3} C_2 \frac{v \langle b_2^2 \rangle a^2}{B^2 \sqrt{2 \langle \tilde{v}^2_x \rangle}},
\end{equation}
and
\begin{equation}
\mathscr{F}_2 = \frac{v}{\lambda_\parallel \sqrt{2 \langle \tilde{v}^2_x \rangle}}.
\end{equation}
In Equation \eqref{eq:mfp_perp}, Erfc is the complementary error function, $\Gamma$ is the gamma function (the upper incomplete gamma function when written with two arguments), $\tensor[_2]{\text{F}}{_2}$ is a generalized hypergeometric function, $E_{\frac{p}{2}+1}$ is the generalized exponential integral function, and the Kronecker delta is used as a switch between the four values of $p$. $C_2$ depends on the value of $p$, as can be seen from Equation~\eqref{eq:c2}. Note that the corresponding NLGC result \citep{pei2010cosmic} requires an implicit method to obtain $\lambda_\perp$, in contrast to the RBD result, which is an explicit expression for $\lambda_\perp$.
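To make the closed form concrete, the following sketch transcribes Equation~\eqref{eq:mfp_perp} for even $p \geq 0$ (so that the exponential-integral order is an integer; the $p=-1$ branch is omitted) and cross-checks it against direct numerical quadrature of Equation~\eqref{eq:rbd} with the model spectrum. Python with scipy is assumed, and the inputs are illustrative dimensionless values, not simulation output:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc, gamma, gammaincc, expn

NU = 5.0 / 3.0   # Kolmogorov inertial-range index

def _prefactors(p, v, B, b2_2, b2_s, lam_par):
    # Common quantities: <v~_x^2>, C_2, and the factors F_1, F_2.
    a2 = 1.0 / 3.0
    vx2 = a2 * v**2 * (b2_2 + b2_s) / (6.0 * B**2)
    C2 = (NU - 1.0) * (p + 2.0) / (2.0 * np.pi * (p + NU + 1.0))
    F1 = np.pi**1.5 * C2 * v * b2_2 * a2 / (B**2 * np.sqrt(2.0 * vx2))
    F2 = v / (lam_par * np.sqrt(2.0 * vx2))
    return F1, F2

def lambda_perp(p, v, B, b2_2, b2_s, lam2, lam_par):
    # Closed form for even p >= 0.
    F1, F2 = _prefactors(p, v, B, b2_2, b2_s, lam_par)
    x = F2 * lam2
    Gup = gammaincc(4.0 / 3.0, x**2) * gamma(4.0 / 3.0)  # Gamma(4/3, x^2)
    term1 = (lam2**(-2.0 / 3.0) / (5.0 * F2**(5.0 / 3.0) * np.sqrt(np.pi))
             * (3.0 * np.sqrt(np.pi) * x**(5.0 / 3.0) * erfc(x)
                + gamma(1.0 / 3.0) - 3.0 * Gup))
    term2 = (lam2 / (p + 1.0)
             * (erfc(x) - x / np.sqrt(np.pi) * expn(p // 2 + 1, x**2)))
    return F1 * (term1 + term2)

def lambda_perp_quad(p, v, B, b2_2, b2_s, lam2, lam_par):
    # Independent check: direct quadrature of the RBD integral, using
    # the same model spectrum; the erfc argument is alpha = F2/k_perp.
    F1, F2 = _prefactors(p, v, B, b2_2, b2_s, lam_par)
    low, _ = quad(lambda k: (lam2 * k)**p * erfc(F2 / k),
                  1e-12, 1.0 / lam2, limit=200)
    high, _ = quad(lambda k: (lam2 * k)**(-NU - 1.0) * erfc(F2 / k),
                   1.0 / lam2, np.inf, limit=200)
    return F1 * lam2**2 * (low + high)

# Toy dimensionless inputs; the two evaluations should agree closely.
args = dict(p=2, v=1.0, B=1.0, b2_2=0.2, b2_s=0.05, lam2=0.01, lam_par=0.5)
lp_closed = lambda_perp(**args)
lp_direct = lambda_perp_quad(**args)
```

For these toy inputs $\lambda_\perp \ll \lambda_\parallel$, as expected for modest fluctuation amplitudes.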
\section{Solar wind model}
Equations \eqref{eq:mfp_p} and \eqref{eq:mfp_perp} require specification of the large-scale IMF, and the magnetic fluctuation energies and correlation lengths for both slab and 2-D turbulence. For this purpose, we use a Reynolds-averaged Navier-Stokes approach, based on the Reynolds decomposition \citep[e.g.,][]{mccomb1991physics} of a physical field, $\tilde{\mathbf{a}}$, into a mean and a fluctuating component:
\begin{equation}
\tilde{\mathbf{a}} = \mathbf{a}+\mathbf{a'},
\end{equation}
where $\mathbf{a} = \langle \tilde{\mathbf{a}} \rangle$ is an ensemble average, associated with the large scales of motion, and $\mathbf{a'}$ is a fluctuating component, here assumed to be small scale. By construction, $\langle \mathbf{a'} \rangle = 0$. Application of this decomposition to the MHD equations, along with a series of approximations appropriate to the solar wind, leads us to a set of mean-flow equations that are coupled to the small-scale fluctuations via another set of equations for the statistical descriptors of turbulence. For the details of the procedure for handling the fluctuations, we refer the reader to \cite{breech2008turbulence}.
In this study, we use the solar wind model described in detail by
\cite{usmanov2014three}. It is a global, fully three-dimensional
magnetohydrodynamic model that accounts for the effects of fluctuations in
heating and acceleration of the solar wind flow. The computational domain,
which in the present study extends from the coronal base to 3~AU, is split
into three regions: inner ($1 - 20~R_\odot$), intermediate ($20 - 45~R_\odot$), and outer ($45~R_\odot - 3$ AU). In the inner region, steady-state solutions of one-fluid,
polytropic ($\gamma = 1.08$) solar wind equations with WKB Alfv\'en waves
are obtained by time relaxation starting from an initial state composed of
a Parker-type flow in a dipole magnetic field \citep{usmanov2000global,
usmanov2003tilted}. Two-fluid steady-state equations for protons
and electrons with Hollweg's electron heat flux and WKB Alfv\'en waves are
solved in the intermediate region by forward integration along the radius $r$
\citep{pizzo1978three,pizzo1982three,usmanov1993global}. The boundary conditions for
the intermediate region are extracted from the inner region solution. In
the outer region, we solve three-fluid (thermal protons, electrons, and
pickup protons) Reynolds averaged solar wind equations simultaneously with
transport equations for turbulence energy, cross helicity and
correlation length. Steady-state solutions in the outer region are obtained
by time relaxation, using an eddy-viscosity approximation for the Reynolds stress
tensor and turbulent electric field, with boundary conditions provided
by solutions in the intermediate region \citep{usmanov2014three}. The use of steady-state simulations is justified here since ambient solar wind conditions change on time scales long compared to the time energetic particles spend in the inner heliosphere.
In our calculations, we have used the same input parameters at the coronal
base as in \cite{usmanov2014three}: the driving amplitude of Alfv\'en waves
is set to 35~km~s$^{-1}$, the initial density is $0.4\times
10^8$~cm$^{-3}$, and the initial plasma temperature is $1.8\times 10^6$ K.
The magnetic field magnitude is assigned as the field strength of the
source magnetic dipole on the poles. This parameter is set to 16~G to match the
magnitude of the heliospheric magnetic field observed by Ulysses. The
computations are carried out on a composite spherical grid
\citep{usmanov1996global,usmanov2012three} using the Central Weighted
Essentially Non-Oscillatory (CWENO) spatially third-order reconstruction
algorithm of \cite{kurganov2000third}. The spatial CWENO
discretization is combined with the Strong Stability-Preserving Runge-Kutta
scheme of \cite{gottlieb2001strong} for time integration and the method of
\cite{powell1994approximate} for maintaining the $\nabla\cdot{\bf B} = 0$ condition.
For our purposes here, we extract from the outer region simulation ($45~R_\odot - 3$ AU) the mean magnetic field, $\mathbf{B}$, the fluctuation energy, $Z^2$ (defined below), and the correlation length for the turbulence, $\lambda$. Here,
\begin{equation} \label{eq:Z}
Z^2 = \langle v'^2 + b'^2 \rangle,
\end{equation}
is twice the turbulent energy per unit mass, defined in terms of the velocity and magnetic field fluctuations, $\mathbf{v}'$ and $\mathbf{B}'$, respectively. The amplitude of magnetic fluctuations has been normalized to Alfv\'{e}n units using $\mathbf{b}' = \mathbf{B}'(4 \pi \rho)^{-1/2}$, where $\rho$ is the mass density. To extend our calculation closer to the sun, we use data from the inner ($1 - 20~R_\odot$) and intermediate ($20 - 45~R_\odot$) regions, where the simulation does not have a turbulence model for $Z^2$ and $\lambda$. Here we use the WKB Alfv\'{e}n wave energy density \citep{usmanov2000global}, $\mathcal{E}$, as a proxy for the turbulent fluctuation energy via $Z^2 = 2 \mathcal{E} /\rho$. To get an approximation for the correlation scale in these regions, we use the hypothesis from \cite{Hollweg1986JGR} that the correlation length varies as the distance between magnetic field lines, which in turn depends on the field strength \citep{spruit1981NASSP}, so that $\lambda \propto B^{-1/2}$. We set the constant of proportionality such that $\lambda$ is continuous across the boundary between the intermediate and outer regions. We are currently working on refinements of the model that will extend the region in which turbulence modeling is included closer to the sun.
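In code, this matching amounts to fixing a single constant from the boundary values; a schematic sketch (the function and numbers are illustrative placeholders, not taken from the actual simulation code):

```python
import numpy as np

def lambda_proxy(B, B_bnd, lambda_bnd):
    # Correlation-scale proxy lambda ~ B^(-1/2), with the constant of
    # proportionality chosen so that lambda is continuous at the
    # intermediate/outer boundary, where the turbulence-transport
    # value lambda_bnd is available.
    const = lambda_bnd * np.sqrt(B_bnd)
    return const / np.sqrt(B)

# Continuity at the boundary; lambda is larger where B is weaker.
lam_at_bnd = lambda_proxy(B=4.0, B_bnd=4.0, lambda_bnd=0.01)
```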
To proceed with the calculation of the mfps, some assumptions must be made in order to relate the correlation scale of our turbulence model ($\lambda$) to the slab and 2-D correlation scales in Equations~\eqref{eq:mfp_p} and \eqref{eq:mfp_perp}, respectively. First, we note that the turbulent fluctuations in our model are primarily transverse to the mean magnetic field \citep{breech2008turbulence}, and thus identify the correlation scale of 2-D turbulence to be equal to the correlation scale of our turbulence model, so that $\lambda_2 = \lambda$. Observational studies \citep{osman2007ApJ654,Weygand2009JGRA,weygand2011JGR116} indicate that the slab correlation scale is about a factor of two larger than the 2-D correlation scale, and accordingly, we assume $\lambda_s = 2 \lambda_2$. In our approximate treatment, we assume in effect that the magnetic and velocity correlation functions are structurally similar \citep{zank1996evolution}, so that the magnetic correlation length is found to be equal to the single correlation scale that we follow dynamically. In the inner heliosphere where the cross helicity is large, it becomes advantageous to employ a two correlation length theory \citep{mattheus1994JGR99,wan2012JFM697,Zank2012ApJ745,Zank2017ApJ835}, as has been implemented, e.g., by \cite{Adhikari2015ApJ805}.
To approximate the energy in slab and 2-D magnetic fluctuations, we first convert $Z^2$ to $B'^2$ using Equation~\eqref{eq:Z}:
\begin{equation}\label{eq:mag_fluc}
\langle B'^2 \rangle = \frac{Z^2}{r_A+1} 4 \pi \rho,
\end{equation}
where $r_A = \langle v'^2 \rangle /\langle b'^2 \rangle$ is the Alfv\'{e}n ratio. An accurate dynamical model for $r_A$ is desirable, but must include complications such as non-local effects \citep[e.g.,][]{grappin1983AaA26,mattheus1994JGR99,hossain1995PhFl}. At present we maintain a simpler approach, and take $r_A$ to have a value of 1 in the inner and intermediate regions ($1 - 45~R_\odot$), and a value of $1/2$ for heliocentric distances larger than $45~R_\odot$. These values are motivated by spacecraft observations \citep{Tu1995SSRv}, but we recognize that attempts have been made to treat $r_A$ dynamically \citep{grappin1983AaA26,marsch1989JPlPh41,tu1990JPlPh44,mattheus1994JGR99,yokoi2007PhPl,Zank2012ApJ745}. See especially the comparison with observations by \cite{Adhikari2015ApJ805} and \cite{Zank2017ApJ835}.
Next, recalling the assumption that the magnetic fluctuations have a dominant 2-D component with a small slab contribution, and following observations \citep{matthaeus1990JGR,bieber1994proton} that find the ratio of the 2-D and slab energies to be 80\% to 20\%, we use
\begin{equation}\label{eq:slab_2d}
\frac{\langle b^2_s \rangle}{\langle b^2_2 \rangle} = \frac{20}{80} = 0.25
\end{equation}
to compute the slab and 2-D fluctuation energies from Equation~\eqref{eq:mag_fluc} and $\langle b^2_2 \rangle + \langle b^2_s \rangle = \langle B'^2 \rangle$. In recent work by \cite{Hunana2010ApJ718} and \cite{Zank2017ApJ835}, refinements to this simplified perspective on the breakdown of the slab and 2-D fluctuation energies are discussed. In particular, \cite{Zank2017ApJ835} solve separate equations for the slab and 2-D energies with a simplified IMF and background solar wind flow. They find that the evolution of the two components is markedly different in the outer heliosphere (beyond $\sim3$ AU), where driving by pickup ions leads to an increase in the slab component's energy, while the energy of the 2-D component continues to decrease with heliocentric distance. Their results show, however, that the radial evolution of slab and 2-D energies is not too dissimilar below 3 AU. Similar results are presented by \cite{Oughton2011JGRA116} using their two-component model. Therefore, for the purposes of our present work, where we focus on the inner heliosphere, our simple decomposition of $\langle B'^2 \rangle$ into slab and 2-D components, using the constant ratio expressed in Equation~\eqref{eq:slab_2d}, seems appropriate. Studies of CR diffusion in the outer heliosphere would undoubtedly benefit from using a two-component turbulence transport model. A detailed assessment of different transport equations for turbulence is beyond the scope of this work.
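The bookkeeping above can be sketched compactly; the following is a minimal Python illustration (not the production code) of Equations~\eqref{eq:mag_fluc} and \eqref{eq:slab_2d}, where the function names, the Gaussian-unit convention, and the treatment of $r_A$ as a simple radial switch are our illustrative assumptions:

```python
import numpy as np

R_SUN_AU = 1.0 / 215.0  # one solar radius in AU

def alfven_ratio(r_au):
    """Piecewise Alfven ratio used in the text: r_A = 1 inside
    45 R_sun, r_A = 1/2 at larger heliocentric distances."""
    return np.where(r_au < 45.0 * R_SUN_AU, 1.0, 0.5)

def magnetic_fluctuation_energy(Z2, rho, r_au):
    """Equation (mag_fluc): <B'^2> = 4 pi rho Z^2 / (r_A + 1),
    in Gaussian units with rho the mass density."""
    return 4.0 * np.pi * rho * Z2 / (alfven_ratio(r_au) + 1.0)

def slab_2d_split(B2, ratio=0.25):
    """Equation (slab_2d): split <B'^2> so that b_s^2 / b_2^2 = ratio."""
    b2_2d = B2 / (1.0 + ratio)
    b2_slab = B2 - b2_2d
    return b2_slab, b2_2d
```

With the correlation scales assigned as $\lambda_2 = \lambda$ and $\lambda_s = 2\lambda$, these quantities supply the turbulence inputs needed by Equations~\eqref{eq:mfp_p} and \eqref{eq:mfp_perp}.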
\section{Results}
\subsection{Solar wind model results}
We begin our presentation of the results with a discussion of the core fields from the simulation - $B, \lambda$, and $Z^2$ - which are the ingredients that go into our calculation of the diffusion coefficients. Figure \ref{fig:ingred1} shows the radial evolution of the turbulence energy and the turbulence correlation scale from our model and simulation with an untilted dipole source. The data are for a $7\degree$ heliolatitude, which we take to be the broadly-defined ecliptic region. Also shown are observational results from Voyager 2, Helios, and the National Space Science Data Center (NSSDC) Omnitape dataset, indicating a reasonable agreement with the simulation results. The observational data for $Z^2$ and $\lambda$ are from \cite{zank1996evolution} and \cite{smith2001JGR}, respectively. Note that the observations are for various times in the solar cycle, and are shown here for general context only. The dashed vertical lines in Figure~\ref{fig:ingred1} represent the boundaries of the different simulation regions, with red marking the inner-intermediate region boundary at $20~R_\odot$, and blue marking the intermediate-outer region boundary at $45~R_\odot$, respectively. Note that we present results for $r>2~R_\odot$ ($r$ is the radial distance measured from the solar center), even though the inner boundary of the inner region simulation is at $1~R_\odot$. The parallel mfp acquires extremely large values ($> 10$ AU) in the region very close to the solar surface, due to the large values of $B$ prevailing there. These large values of $\lambda_\parallel$ are not of physical relevance and present problems for visualization, and we therefore restrict our results to $r>2~R_\odot$.
Figure \ref{fig:ingred2} shows the distribution in the meridional plane of the three ingredients - $B, Z^2,$ and $\lambda$ - for a simulation with an untilted source dipole. The figures on the left are from the inner and intermediate regions ($2 - 45~R_\odot$), and the ones on the right are from the outer region ($0.21 - 3$ AU). For a detailed discussion of these simulation results, we refer the reader to \cite{usmanov2000global} and \cite{usmanov2014three}. We note here that the magnetic field results agree well with Ulysses observations \citep[see Figure~8 of][]{usmanov2014three}, with the field vanishing at the heliospheric current sheet (HCS) at $0\degree$ heliolatitude. The turbulence correlation scale increases with heliocentric distance, as is well known from observations \citep{Tu1995SSRv}. The turbulence energy increases on moving from the ecliptic plane towards higher heliolatitudes because of shear interactions between slow (low latitude) and fast (high latitude) wind \citep[see, e.g.,][]{breech2008turbulence}. In the following subsections, we will discuss how these distributions influence the behavior of the diffusion length scales.
\begin{figure}
\includegraphics[scale=.37]{ingred.eps}
\caption{Model results near the ecliptic plane, for a run with an untilted solar dipole, are compared with observational data from Voyager 2, Helios, and the NSSDC Omnitape. The $Z^2$ data are from \cite{zank1996evolution}, and the $\lambda$ data are from \cite{smith2001JGR}. The solid lines are from our simulations. The different symbols represent different methods of calculation. The dashed vertical lines represent the boundaries of the different simulation regions, with red marking the inner-intermediate region boundary at $20~R_\odot$, and blue marking the intermediate-outer region boundary at $45~R_\odot$, respectively. Note that the observations are for various times in the solar cycle, and are shown here for general context only.}
\label{fig:ingred1}
\end{figure}
\begin{figure}
\includegraphics[scale=.4]{b1.eps}
\includegraphics[scale=.4]{b2.eps}
\includegraphics[scale=.4]{corr1.eps}
\includegraphics[scale=.4]{corr2.eps}
\includegraphics[scale=.4]{z1.eps}
\includegraphics[scale=.4]{z2.eps}
\caption{Contour plots of the heliospheric magnetic field ($B$), the turbulence correlation scale ($\lambda$), and the turbulence energy ($Z^2$) in the meridional plane for an untilted solar dipole. The figures on the left cover $2 - 45~R_\odot$, and the ones on the right cover $0.21 - 3$ AU ($45 - 645~R_\odot$).}
\label{fig:ingred2}
\end{figure}
\subsection{Radial evolution of mean free paths}
In Figure \ref{fig:mfp_untilt} we show the radial evolution of the parallel, perpendicular, and radial mfps (black, red, and blue lines, respectively) in the ecliptic region (Figure \ref{fig:mfp_untilt}a) and near the solar rotation axis ($86\degree$ heliolatitude, Figure \ref{fig:mfp_untilt}b), for an untilted source dipole. Also shown is the ratio of the perpendicular mfp to the parallel mfp (green lines). The solid, dotted, dashed, and dash-dotted lines correspond to $p=-1,0,1,$ and 2, respectively, and the mfps are computed for protons with rigidity equal to 445 MV, corresponding to a kinetic energy of 100 MeV. Here we would like to remind the reader that our turbulence parameters ($Z^2$ and $\lambda$) in the region $1 - 45~R_\odot$ are not from the turbulence model, but are calculated using the approximations detailed in Section 3. As such, these results represent a preliminary attempt at mapping the diffusion length scales in a region that will soon be investigated by upcoming spacecraft missions such as Solar Probe Plus.
Near the ecliptic plane (Figure \ref{fig:mfp_untilt}a), as one moves outward from the solar surface, the increasing strength of the turbulence energy (see Figure \ref{fig:ingred1}) leads to a sharp decrease in $\lambda_\parallel$ in the region $2 - 5~R_\odot$, with the rapidly decreasing IMF reinforcing this behavior. In this region, $\lambda_\parallel \propto r^{-3.46}$, and there is a corresponding increase in $\lambda_\perp (\propto r^{3.55} \text{ for } p=-1 \text{ and } \propto r^{4.34} \text{ for } p=2)$. Since the IMF has a significant meridional component here, the large winding angle ($\Psi$) between the radial direction and the IMF leads to $\lambda_\perp$ having an influence on the radial mfp (see Equation~\eqref{kappa_r}), with $\lambda_{rr} \propto r^{-1.97}$. From $0.03 - 3$ AU, $\lambda_\parallel$ mostly increases as $r^{0.82}$, and $\lambda_\perp$ as $r^{0.79}$. From 0.1 to 3 AU, $\Psi$ is once again large because of the increased azimuthal component of the IMF, and $\lambda_\perp$ reduces the radial mfp, with $\lambda_{rr} \propto r^{0.53}$. Observational studies for $r < 3$ AU have found $\lambda_{rr} \propto r^b$ with $b$ ranging from $0.4$ to $0.7$ \citep{beeck1987ApJ322}. Note that although the radial mfp formally depends on the value of $p$ (through $\lambda_\perp$), the $\lambda_{rr}$ curves for different $p$ are nearly indistinguishable.
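The local power-law indices quoted above (e.g., $\lambda_\parallel \propto r^{-3.46}$) are obtained by fitting the simulated profiles over the stated radial windows. A minimal sketch of such a fit, applied here to synthetic profiles standing in for the model output (the function name is our own), is:

```python
import numpy as np

def power_law_index(r, lam):
    """Least-squares fit of lam = c * r**b in log-log space;
    returns the local power-law index b over the given window."""
    b, _ = np.polyfit(np.log(r), np.log(lam), 1)
    return b
```

For an exact power-law profile the index is recovered to machine precision; applied to the simulated mfps it yields the local indices over each radial window.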
Moving on to the radial evolution of the mfps in the polar region, Figure \ref{fig:mfp_untilt}b shows that the radial mfp is completely dominated by $\lambda_\parallel$. This is because the IMF is near radial at the poles, with a very small winding angle. At the poles, $\lambda_{rr} \propto r^{-1.1}$ until 0.1 AU, after which it remains nearly constant, with identical behavior exhibited by $\lambda_\parallel$. From $2~R_\odot - 0.2$ AU, $\lambda_\perp \propto r^{2.10}$ for $p=-1$ and $\lambda_\perp \propto r^{2.34}$ for $p=2$. From $0.2 - 3$ AU, $\lambda_\perp \propto r^{0.78}$ for $p=-1$ and $\lambda_\perp \propto r^{0.69}$ for $p=2$.
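The interplay between the winding angle and the radial mfp described in the last two paragraphs follows from the projection behind Equation~\eqref{kappa_r}. Assuming it takes the standard form $\lambda_{rr} = \lambda_\parallel \cos^2\Psi + \lambda_\perp \sin^2\Psi$ (an assumption here, since the equation is defined earlier in the paper), a sketch is:

```python
import numpy as np

def lambda_rr(lam_par, lam_perp, psi):
    """Radial mfp from the parallel and perpendicular mfps, with
    psi the winding angle (radians) between the IMF and the radial
    direction. Assumed standard projection:
    lambda_rr = lam_par*cos(psi)**2 + lam_perp*sin(psi)**2."""
    return lam_par * np.cos(psi)**2 + lam_perp * np.sin(psi)**2
```

At the poles $\Psi \approx 0$ and $\lambda_{rr} \to \lambda_\parallel$, which is why the two curves coincide in Figure \ref{fig:mfp_untilt}b; near the ecliptic the large winding angle lets the much smaller $\lambda_\perp$ pull $\lambda_{rr}$ down.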
Figure \ref{fig:mfp_tilt} shows the effect of a source dipole with a $30\degree$ tilt: when one encounters the heliospheric current sheet (HCS) at around 1 AU, $\lambda_\parallel$ goes through a sudden dip of almost two orders of magnitude, while $\lambda_\perp$ shows a corresponding increase of around an order of magnitude. (The radius where the HCS crosses our chosen heliolatitude of $7\degree$ depends on our choice of the azimuthal angle for which we plot results as a function of radius.) The vanishing mean magnetic field and non-vanishing turbulence amplitude at the HCS explain this behavior, which will be further illustrated in the next subsection discussing the 2-D variation of the mfps in the meridional plane. We note from Figures \ref{fig:mfp_untilt} and \ref{fig:mfp_tilt} that the ratio $\lambda_\perp/\lambda_\parallel$ stays between 0.1 and 0.01 for most of the inner heliosphere, but exceeds unity at the HCS. Although the current sheet is a singular region in our simulation, the fields in its vicinity do possess physically realizable values, and we stress that similarly large values of $\lambda_\perp/\lambda_\parallel$ have been observed \citep{dwyer1997ApJ,zhang2003ApJ}. We will come across these domains of significant perpendicular diffusion once again in the meridional plane contours in Section 4.5, below.
In the results presented so far, the choice of the long wavelength spectral index $p$ does not significantly alter the mfps, with $\lambda_\perp$ for $p=-1$ generally not more than a factor of two larger than $\lambda_\perp$ for $p=2$. Referring to the discussion in Section 2.2, this result indicates a rather weak dependence of the mfps on the ultrascale (via different $p$ values). The exception appears very close to the solar surface ($2~R_\odot$) in Figure \ref{fig:mfp_untilt}, where the perpendicular mean free path for the $p=-1$ case is several times larger than that for the $p=2$ case. This behavior may be probed further in simulations with improved coronal turbulence models that are more reliable at such small heliocentric distances. In the following results, unless specified otherwise, we will choose $p=2$, which corresponds to homogeneous turbulence.
In Figure \ref{fig:turb_var} we examine the effect of varying the turbulence energy amplitude at the inner boundary (45 $R_\odot$) of the outer region of the simulation, again for 100 MeV protons. Such variation may arise due to solar activity. The solid lines represent a standard $Z^2$ specified at the inner boundary, and dashed and dotted lines represent simulations performed with double and half of this standard value specified at the inner boundary, respectively. In the ecliptic region ($7\degree$ heliolatitude), Figure \ref{fig:turb_var}a indicates, as expected, that an increasing turbulence level leads to a decrease in $\lambda_\parallel$ (and consequently $\lambda_{rr}$). The stronger turbulence increases $\lambda_\perp$ in proportion to $Z$, and therefore increases the extent to which particles may diffusively penetrate the heliosphere. Comparing Figures \ref{fig:turb_var}a and \ref{fig:turb_var}b, it is interesting to note that in the ecliptic region, varying turbulence at the inner boundary leads to an effect on $\lambda_\parallel$ that becomes less pronounced with radial distance. This is not the case in the polar regions with fast wind, however, where the turbulence is less ``aged'' compared with low latitudes \citep{matthaeus1998JGR}. Stream interactions near the ecliptic plane reduce the turbulence at a faster rate compared to the rate in the polar regions far from such shearing interactions.
\begin{table}
\setlength{\tabcolsep}{5pt}
\renewcommand{\arraystretch}{1.2}
\centering
\caption{Parallel mfps in AU for 100 MeV protons in the ecliptic region at 1 AU. B1, B2, and B3 are from \cite{breech2008turbulence}; P1 and P2 are from \cite{pei2010cosmic}; Cases 1 - 3 are our solutions for varying turbulence levels. Note that our calculation of $\lambda_\parallel$ is independent of $p$.} \label{tab1}
\begin{tabular}{c c c c c c c c c}
$p$ & B1 & B2 & B3 & P1 & P2 & Case 1 & Case 2 & Case 3 \\
\hline
-1 & 2.92 & 6.86 & 1.64 & 0.92 & 0.47 & \multirow{4}{*}{0.29} & \multirow{4}{*}{0.21} & \multirow{4}{*}{0.40} \\
0 & 2.33 & 5.49 & 1.31 & 0.74 & 0.38 & & & \\
1 & 2.14 & 5.03 & 1.20 & 0.68 & 0.35 & & & \\
2 & 2.04 & 4.80 & 1.15 & 0.64 & 0.33 & & & \\
\hline
\end{tabular}
\end{table}
We end this subsection by comparing our solutions in the ecliptic plane with ``consensus'' constraints on observations \citep{palmer1982RvGSP,bieber1994proton}. Based on information compiled from several sources, the Palmer consensus finds that for particles in the rigidity range $0.5 - 5000$ MV, $\lambda_\parallel = 0.08 - 0.3$ AU. We note here that the values for the mfps obtained by fitting observational data may depend on the model used; \cite{reames1999SSRv} reviews some such results and suggests a higher parallel mfp of $\sim 1$ AU. Our $\lambda_\parallel$ for a 100 MeV proton at 1 AU varies from $0.29$ to $0.40$ AU, and fits the consensus range well. Our solutions are smaller than the values from \cite{breech2008turbulence} and \cite{pei2010cosmic}, which we list in Table \ref{tab1}, along with our results. Here, cases 1, 2, and 3 refer to standard, doubled, and halved turbulence levels, as described above. Note that unlike our calculation of $\lambda_\parallel$, the calculations from \cite{breech2008turbulence} and \cite{pei2010cosmic} depend on the value of $p$.
Our improved agreement with the Palmer consensus range may be attributed to two improvements in modeling: (1) Here $B$ is a spatially varying field computed dynamically from a self-consistent 3-D model, in contrast to the Parker-type model used in \cite{breech2008turbulence} and \cite{pei2010cosmic}; (2) The effect of shear interactions is computed self-consistently in our turbulence model \citep{usmanov2014three}, unlike in \cite{breech2008turbulence} and \cite{pei2010cosmic}, where a shear-driving parameter is employed.
\begin{figure}
\includegraphics[scale=.36]{mfp_eclip_untilt.eps}
\includegraphics[scale=.36]{mfp_pole_untilt.eps}
\caption{Radial dependence of the parallel (black), perpendicular (red), and radial (blue) mfps (a) near the ecliptic plane ($7\degree$ heliolatitude) and (b) near the pole ($86\degree$ heliolatitude). Also shown is $\lambda_\perp/\lambda_\parallel$ (green). The solid lines are for $p=-1$, the dotted lines for $p=0$, the dashed lines for $p=1$, and the dash-dotted lines for $p=2$. Proton rigidity is 445 MV (100 MeV kinetic energy). Note that the curves for $\lambda_\parallel$ and $\lambda_{rr}$ coincide in (b).}
\label{fig:mfp_untilt}
\end{figure}
\begin{figure}
\includegraphics[scale=.36]{mfp_eclip_30tilt.eps}
\caption{Radial dependence of the parallel (black), perpendicular (red), and radial (blue) mfps near the ecliptic plane ($7\degree$ heliolatitude), with a solar dipole having a $30\degree$ tilt. For our particular choice of azimuthal angle ($26\degree$), an HCS crossing occurs at 0.8 AU. Also shown is $\lambda_\perp/\lambda_\parallel$ (green). The solid lines are for $p=-1$, the dotted lines for $p=0$, the dashed lines for $p=1$, and the dash-dotted lines for $p=2$. Proton rigidity is 445 MV (100 MeV kinetic energy).}
\label{fig:mfp_tilt}
\end{figure}
\begin{figure}
\includegraphics[scale=.36]{turb_var_eclip.eps}
\includegraphics[scale=.36]{turb_var_pole.eps}
\caption{Radial dependence of the parallel (black), perpendicular (red), and radial (blue) mfps (a) near the ecliptic plane ($7\degree$ heliolatitude) and (b) in the polar region ($86\degree$), for varying turbulence amplitudes, with $p=2$. The dashed and dotted lines represent simulations with the turbulence energy ($Z^2$) at the inner boundary of the outer region ($45~R_\odot$) doubled and halved, respectively, relative to a standard level. See text for more details. Note that the curves for $\lambda_\parallel$ and $\lambda_{rr}$ coincide in (b).}
\label{fig:turb_var}
\end{figure}
\subsection{Latitudinal evolution of mean free paths}
Figure \ref{fig:mfp_lat} shows the variation of mfps with latitude at different heliocentric distances for an untilted solar dipole. We see from Figure~\ref{fig:mfp_lat}a that, in general, $\lambda_\parallel$ (solid lines) increases by almost an order of magnitude as one leaves the solar equatorial plane and moves to higher latitudes, and assumes a near constant value as one approaches the polar regions. The opposite behavior is seen for $\lambda_\perp$ (dashed lines), which decreases on moving away from the equatorial plane. This is a combined result of the increase in the IMF strength and the correlation scale of the turbulence ($\lambda$) while moving away from the solar equatorial plane (i.e., away from the HCS), and the increase in the turbulence energy due to shear interactions between slow and fast solar winds. We note that very close to the sun (4 $R_\odot$, black line), $\lambda_\parallel$ first decreases with latitude as one leaves the equatorial plane, then increases at higher latitudes, to values even larger than those seen at larger heliocentric distances. This behavior arises because, close to the sun, the IMF increases monotonically with latitude. At larger distances, the IMF plateaus with increasing latitude, and from 1 AU onwards it decreases in the polar regions (see Figure \ref{fig:ingred2}). Thus, particles experience less scattering in polar regions close to the sun. This also explains the latitudinal variation of $\lambda_\perp$ at 4 $R_\odot$.
Figure \ref{fig:mfp_lat}b shows the increase in $\lambda_{rr}$ as one moves towards the polar regions, and illustrates once again the fact that while $\lambda_{rr}$ is affected by $\lambda_\perp$ very close to the sun at low latitudes, near the polar regions it follows the trend set by $\lambda_\parallel$. Figure \ref{fig:mfp_lat}c shows that the ratio of $\lambda_\perp$ to $\lambda_\parallel$ decreases as one leaves the solar equatorial plane (i.e., away from the HCS), with the perpendicular mfp staying 1-2 orders of magnitude below the parallel mfp, except very close to the sun (4 $R_\odot$, black line) where it becomes 3 orders of magnitude smaller because of the low turbulence levels in that region. We will examine the latitudinal dependence of the mfps once again in meridional plane figures in Section 4.5, below.
\begin{figure}
\includegraphics[scale=.36]{mfp_pp_lat_untilt.eps}
\includegraphics[scale=.36]{mfp_rr_lat_untilt.eps}
\includegraphics[scale=.36]{mfp_rat_lat_untilt.eps}
\caption{The top panel (a) shows the latitudinal dependence of parallel (solid lines) and perpendicular (dashed lines) mfps. The middle (b) and bottom (c) panels show the latitudinal variation of $\lambda_{rr}$ and $\lambda_\perp / \lambda_\parallel$, respectively. All panels are for an untilted solar dipole and $p=2$. Black, blue, green, and red lines represent radial distances of 0.02, 0.2, 1, and 3 AU (4, 45, 215, and 645 $R_\odot$), respectively. Proton rigidity is 445 MV (100 MeV kinetic energy).}
\label{fig:mfp_lat}
\end{figure}
\subsection{Rigidity dependence of mfps}
In Figure \ref{fig:rig} we plot the rigidity ($P$) dependence of mfps for protons at different radial distances in the ecliptic and polar regions. Below 1 AU, $\lambda_\parallel \propto P^{0.33}$ for all rigidities considered here ($10 - 10^4$ MV). Above 1 AU there is a steepening of the slope for rigidities larger than $10^3$ MV. As noted in Section 2.1, this is due to high energy particles resonating with turbulent fluctuations in the energy containing range instead of the inertial range. As the IMF ($B$) decreases with heliocentric distance, a high rigidity particle's Larmor radius ($R_L = P/(Bc)$) may become comparable to the correlation scale of the turbulence ($\lambda_s$). When $R_L/\lambda_s \gg 1$, the expression in braces in Equation~\eqref{eq:mfp_p} scales with rigidity as $P^{5/3}$, and we have $\lambda_\parallel \propto P^2$ instead of $\lambda_\parallel \propto P^{1/3}$. Indeed, for rigidities $\sim 10^4$ MV we find that $\lambda_\parallel \propto P^{1.2}$ at 1 AU and $\lambda_\parallel \propto P^{1.8}$ at 3 AU \citep[see also the discussion on the effect of pickup ion driven turbulence on high-rigidity particles in the outer heliosphere in][]{zank1998radial}. Our results agree well with the observations shown in \cite{bieber1994proton}, with power indices ranging from 0.2 to 0.56 for a number of solar events where rigidity ranges from 10 to $10^3$ MV. Our results also agree with the theoretical and numerical findings in \cite{bieber1994proton} and \cite{pei2010cosmic}.
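The transition just described can be caricatured by a toy scaling (not the full expression in Equation~\eqref{eq:mfp_p}): take $\lambda_\parallel \propto P^{1/3}$ times a bracket that grows as $(P/P_c)^{5/3}$ once the Larmor radius exceeds $\lambda_s$, with $P_c$ an assumed transition rigidity. The local logarithmic slope then moves from $1/3$ to $2$:

```python
import numpy as np

def toy_lambda_par(P, P_c=1.0e3):
    """Toy rigidity scaling: P**(1/3) in the inertial-range regime,
    steepening toward P**2 once P >> P_c (i.e., R_L >> lambda_s).
    P_c is an illustrative transition rigidity, in MV."""
    return P**(1.0 / 3.0) * (1.0 + (P / P_c)**(5.0 / 3.0))

def log_slope(f, P, eps=1e-5):
    """Numerical d(ln f)/d(ln P) by central differences."""
    return (np.log(f(P * (1 + eps))) - np.log(f(P * (1 - eps)))) / \
           (np.log(1 + eps) - np.log(1 - eps))
```

Well below $P_c$ the slope is $1/3$; well above it the slope approaches $1/3 + 5/3 = 2$, consistent with the asymptotic behavior noted in the text.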
In general, $\lambda_\perp$ shows weaker variation with rigidity. In the polar regions $\lambda_\perp$ stays nearly constant with rigidity. This behavior is consistent with the finding of \cite{bieber2004GRL} that NLGC predicts a very weak rigidity dependence, and they note that this is supported by observations for rigidities between $10^2$ and $10^4$ MV. Note that the rigidity profiles of $\lambda_\parallel$ and $\lambda_\perp$ that we derive from simulation results and diffusion theories are quite different from some that have been employed in the literature to model solar modulation of Galactic cosmic rays \citep[e.g., see Figure 12 of][]{vos2015ApJ815}.
\begin{figure}
\includegraphics[scale=.36]{rig_eclip.eps}
\includegraphics[scale=.36]{rig_pole.eps}
\caption{Rigidity dependence of $\lambda_\parallel$ (solid line) and $\lambda_\perp$ (dashed line), (a) near the ecliptic plane ($7\degree$ heliolatitude), and (b) in the polar regions ($86\degree$ heliolatitude), for an untilted solar dipole and $p=2$. Black, blue, green, and red lines represent radial distances of 0.02, 0.2, 1, and 3 AU (4, 45, 215, and 645 $R_\odot$), respectively.}
\label{fig:rig}
\end{figure}
\subsection{Meridional plane contours}
In this section, we describe the variation of $\lambda_\parallel, \lambda_\perp, \lambda_{rr}$, and $\lambda_\perp/\lambda_\parallel$ in meridional planes for 100 MeV protons, complementing results of the previous sections. Figure \ref{fig:merid_untilt} shows results from a simulation with a source magnetic dipole that is untilted with respect to the solar rotation axis. It is clear that at the HCS, with its vanishing magnetic field, perpendicular diffusion is comparable to parallel diffusion in most of the inner heliosphere, with $\lambda_\perp$ and $\lambda_\parallel$ both around $0.01$ AU. In the broader ecliptic plane, however, $\lambda_\parallel$ remains 1-2 orders of magnitude above $\lambda_\perp$, varying from 0.01 to almost 1 AU as the radial distance increases from 10 $R_\odot$ to 3 AU, while $\lambda_\perp$ increases from $\sim 0.0001$ to $0.01$ AU. As noted in the 1-D plots, very close to the sun $\lambda_\parallel$ experiences a dramatic increase to a value of 1 AU due to the weak turbulence and strong magnetic field prevailing there.
We also see that at radial distances of $1.5 - 3$ AU, $\lambda_\parallel$ is a few times larger at lower latitudes, compared to values in polar regions. This is because the IMF decreases and the turbulence energy increases with latitude at these radial distances, leading to a reduction in parallel diffusion in the polar regions, and a corresponding increase in perpendicular diffusion. This can also be seen in Figure~\ref{fig:merid_untilt}h showing contours of $\lambda_\perp/\lambda_\parallel$, which increases by nearly one order of magnitude from low latitudes to the poles. The radial mfp increases uniformly with heliocentric distance at lower latitudes, but is dominated by $\lambda_\parallel$ in polar regions, because of the small winding angle between the IMF and the radial direction here. This leads to $\lambda_{rr}$ acquiring a nearly constant value of around 0.2 AU in polar regions beyond 2 AU.
Figure \ref{fig:merid_tilt} shows contour plots for mfps in the meridional plane at azimuthal angle equal to $26\degree$, for a simulation with a source magnetic dipole that is tilted by $30\degree$ with respect to the solar rotation axis. In this case, solar rotation produces an asymmetrical magnetic field structure, which has a striking effect on the diffusion parameters, with the displacement of the current sheet from the ecliptic plane modifying their distribution at low latitudes. Note that the blob-like structures in Figures~\ref{fig:merid_tilt}f and \ref{fig:merid_tilt}h arise due to grid points coinciding with the HCS. The rapid decrease in the magnitude of the IMF near the HCS leads to the formation of the blob contours around grid points where $B$ vanishes. This effect is not seen in Figure~\ref{fig:merid_untilt} for the untilted dipole case, where the HCS lies at $0\degree$ heliolatitude, a latitude at which, by construction, no grid points are present.
As noted previously in Section 4.2, observations indicate that the ratio $\lambda_\perp/\lambda_\parallel$ may approach, and even exceed, unity. In our simulation, this happens at the HCS. The basic features described above for the untilted dipole are still present in this case, but are now organized with respect to the tilted HCS. During periods when solar activity levels are high, the warped current sheet is spread out across a larger portion of the heliosphere (Figure \ref{fig:merid_tilt}) compared with the low activity case (untilted dipole, Figure \ref{fig:merid_untilt}), and the HCS is thus more likely to influence CRs.
\section{Conclusions and Discussion}
We have presented a detailed analysis of the diffusion coefficients for cosmic ray transport in the inner heliosphere. The purpose is to use a well-tested, fully 3-D global simulation of the solar wind, with turbulence modeling, to obtain the heliospheric distribution of the large-scale heliospheric magnetic field, the energy in the turbulent fluctuations, and the correlation scale of the turbulence. This distribution has been coupled with a quasi-linear theory for parallel diffusion, and the recent random ballistic decorrelation interpretation of the non-linear guiding center theory for perpendicular diffusion. The present work extends previous studies on the heliospheric diffusion of cosmic rays by \cite{bieber1995diffusion}, \cite{zank1998radial}, and \cite{pei2010cosmic}, but has a stronger focus on the inner heliosphere, with the inner boundary of our simulations at 1 $R_\odot$. Recent complementary work \citep{guo2016ApJ} carries out similar computations of diffusion coefficients for the outer heliosphere.
We find that at the heliospheric current sheet $\lambda_\perp$ can be greater than $\lambda_\parallel$, but usually $\lambda_\parallel$ is 1-2 orders of magnitude larger through most of the inner heliosphere. Very close to the sun ($2~R_\odot$), the strong IMF leads to a large value of $\lambda_\parallel$ ($\sim 0.5$ AU), which initially decreases for several solar radii before increasing with radial distance at low to intermediate latitudes, and becomes nearly constant at the polar regions. $\lambda_\perp$ increases with heliocentric distance throughout the inner heliosphere, and is larger in the polar regions compared to low latitudes. $\lambda_{rr}$ is dominated by $\lambda_\parallel$ through most of the inner heliosphere. However, $\lambda_\perp$ does affect $\lambda_{rr}$ in parts of the near-ecliptic region. Our estimates of $\lambda_\parallel$ near the ecliptic plane at 1 AU show good agreement with the Palmer consensus range of $0.08 - 0.3$ AU.
At heliocentric distances below 1 AU, we find that the parallel mfp varies with rigidity as $P^{0.33}$ for all rigidities considered here ($10 - 10^4$ MV). Above 1 AU, highly energetic particles begin to resonate with turbulent fluctuations in the energy containing scales, and the rigidity dependence of $\lambda_\parallel$ steepens. The perpendicular mfp is weakly dependent on rigidity. Our results on the rigidity dependence of mfps are consistent with observations.
The mfps are found to be weakly dependent on the type of power spectrum used to represent the large scale fluctuations. This suggests that any attempts to use spacecraft observations of mfps to infer constraints on the ultrascale would be challenging. The effects of solar activity (via a tilted solar dipole and variations of turbulence levels) are also studied, with increased activity leading to stronger perpendicular diffusion and weaker parallel diffusion.
The model we have adopted for turbulence transport has been thoroughly studied and tested \citep{breech2008turbulence}. More elaborate models, with more transport equations (and more free parameters), are available \citep{Zank2012ApJ745}. In particular, these models include extensions such as dynamically variable residual energy, separate transport equations for slab and 2-D fluctuations, and as many as three distinct dynamically evolving correlation lengths \citep{Oughton2011JGRA116,Zank2017ApJ835}. For the present we forgo the associated additional complication and rely on the present model's ability to account very well for a variety of observations \citep{usmanov2011solar,usmanov2012three,usmanov2014three}.
We also remark that the turbulent fluctuations we follow dynamically are the quasi-two dimensional fluctuations that we assume are energetically dominant. A variety of studies \citep{matthaeus1990JGR,zank1993nearly,bieber1994proton,bieber1996dominant} are consistent with dominance of quasi-2D fluctuations in solar wind turbulence. In the present approach we have assumed that the quasi-slab component of the fluctuations, which represents perhaps 20\% of the total fluctuation energy, is a constant fraction of the turbulence energy. Useful extensions have been presented by \cite{Oughton2011JGRA116,Zank2017ApJ835} that adopt somewhat different approaches with the common goal of independently transporting both 2-D and slab-like fluctuations. As noted above, these models find that the radial evolution of 2-D and slab fluctuation energies is not too dissimilar in the inner heliosphere, and therefore our decomposition of the total turbulence energy into slab and 2-D components using a constant ratio appears reasonable. These models also show that in the outer heliosphere (above 3-4 AU), the energy in the slab fluctuations increases with heliocentric distance due to driving by pickup ions, while the 2-D fluctuation energy continues to decrease. As such, studies of CR diffusion in the outer heliosphere would undoubtedly benefit from using a two-component turbulence transport model.
Such models have been implemented \citep{wiengarten2016ApJ833,Shiota2017ApJ837}, with many differences relative to the present model. For example, the \cite{Shiota2017ApJ837} model has a more elaborate transport formalism, as described above, but neglects the impact of turbulence on the background flow and relies on ad hoc shear terms instead of fully coupling to the large-scale solar wind solutions. In contrast, we employ a dynamic eddy-viscosity model \citep{usmanov2014three} to achieve this coupling. Clearly no model at present is a complete treatment, and there are advantages and trade-offs in various approaches. We hope to advance our own model with additional refinements in the near future.
We anticipate that 3-D calculations of the CR diffusion coefficients in the way we have demonstrated here, employing large scale solar wind solutions with turbulence transport and turbulence modeling, will become increasingly important for realistic energetic particle transport calculations in the future. We also note that related types of diffusion coefficients, such as drag or self-diffusion, may be similarly estimated using adaptations of the above approach, as described briefly in the Appendix. Studies of phenomena such as shock-ensembles and super-events \citep{Mueller-Mellin1986,Kunow1991book}, where several shocks merge to influence energetic particle transport at widely separated locations, would benefit enormously from such 3-D studies in model heliospheres. Our findings of domains where $\lambda_\perp/\lambda_\parallel \geq 1$ may be used to further study the effects of significant perpendicular diffusion, which has been seen to reduce the SEP flux and make it more uniform \citep{zhang2009ApJ}. Additional development at the MHD level will be needed to utilize this kind of tool for explaining observed SEP events associated with transient phenomena such as flares, CMEs and interplanetary shocks \citep{ruffolo2006ApJ,droge2016ApJ,agueda2016ApJ}. In the present paper we have not undertaken specific calculations employing the diffusion coefficients we obtained using a global model; this is deferred to future work. We anticipate that this approach will be useful in understanding Solar Probe Plus observations of energetic particles near the Sun.
As we have now demonstrated that such an approach can provide detailed three dimensional information concerning both MHD transport and particle mean free paths, it becomes clear that what will be needed are improved methods for driving this kind of model with more sophisticated and detailed solar observations. Meanwhile, we are continuing to improve our MHD modeling by building a coronal module that includes a full turbulence transport model, and by further developing the eddy viscosity approach \citep{usmanov2014three}. Future work could also investigate the influence of drifts on CR modulation. To facilitate use of the present data from this model for particle transport calculations of relevance to the current generation energetic particle and Space Weather studies, we are uploading as Supplementary Material the 3-D grids of the diffusion coefficients that were described here.
\section{Appendix}
Here we present an estimation of a general turbulent diffusion coefficient that is based on Taylor's formulation of the problem \citep{taylor1921ProcLonMathSoc}. The diffusion coefficient for the passive transport of any quantity in a turbulent neutral fluid may be approximated by \citep{Choudhuri1998book}
\begin{equation} \label{eq:taylor_diff}
D_T \approx \frac{1}{3} \langle v^2 \rangle \tau_{\text{cor}},
\end{equation}
where $\langle v^2 \rangle$ is the mean square turbulent velocity and $\tau_{\text{cor}}$ is the correlation time of the turbulence. By assuming $\langle v^2 \rangle \sim Z^2$, and defining the turbulence correlation length $\lambda \sim Z \tau_{\text{cor}}$, we rewrite the above equation as
\begin{equation} \label{eq:drag}
D_T \propto Z \lambda.
\end{equation}
Note that any standard diffusion coefficient, drag coefficient, eddy viscosity, or other similar quantity can be expressed in a form similar to Equation~\eqref{eq:drag}, i.e., as a product of a characteristic velocity and a length scale (see, for example, \cite{Tennekes1972book}).
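A quick numeric illustration of Equations~\eqref{eq:taylor_diff} and \eqref{eq:drag} -- with an assumed turbulence amplitude $Z$ and correlation scale $\lambda$ typical of the inner heliosphere, not values taken from the simulations described here:

```python
# Order-of-magnitude estimate of the Taylor turbulent diffusion
# coefficient D_T ~ (1/3) <v^2> tau_cor ~ Z * lambda / 3.
# Z and lam below are assumed illustrative solar-wind values.
AU = 1.496e11          # m
Z = 30e3               # m/s, assumed turbulence amplitude
lam = 0.01 * AU        # m, assumed correlation length
tau_cor = lam / Z      # s, correlation (eddy turnover) time
D_T = (1.0 / 3.0) * Z**2 * tau_cor   # equals Z * lam / 3
print(f"tau_cor = {tau_cor:.3e} s, D_T = {D_T:.3e} m^2/s")
```

With these inputs the estimate is $D_T\sim10^{13}$ \unit{m^2/s}, useful only as a scale check against the contour plots of Figure~\ref{fig:merid_drag}.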
In Figure \ref{fig:merid_drag} we show contour plots for $D_T$ in the meridional plane, computed from a simulation with a solar dipole that is untilted with respect to the solar rotation axis. We may interpret $D_T$ as a turbulent drag coefficient, which is of relevance to the propagation of CMEs in the solar wind. At high heliolatitudes, the drag coefficient increases from the solar surface to 0.5 AU, and then gradually decreases. Notably, at heliocentric distances smaller than 0.5 AU, $D_T$ increases by an order of magnitude in moving from the ecliptic to polar regions. This implies that a CME would be ``channelled'' to lower latitudes as it propagates through the inner heliosphere. Applications involving these more general approximations to diffusion processes may also be enabled by the approach described in the present paper.
\acknowledgments{This research is partially supported by NASA grant NNX14AI63G
(Heliophysics Grand Challenges Research), NASA LWS grants NNX15AB88G and NNX13AR42G,
and the Solar Probe Plus mission through the ISOIS project and SWRI subcontract D99031L, and the Thailand Research Fund (grant RTA5980003). The authors would like to thank the anonymous Referee for their thorough reading of the manuscript and useful suggestions for its improvement.}
\bibliographystyle{apj}
\section{Introduction}\label{sec:intro}
Often the experimentalist needs a low-noise preamplifier for the
analysis of low-frequency components (below 10 Hz) from a 50 \ohm\
source. The desired amplifier chiefly exhibits low residual flicker
and high thermal stability, besides low white noise. Thermal
stability without need for temperature control is a desirable feature.
In fact the problem with temperature control, worse than complexity,
is that in a nonstabilized environment thermal gradients fluctuate,
and in turn low-frequency noise is taken in. A low-noise amplifier
may be regarded as an old subject, nonetheless innovation in analysis
methods and in available parts provides insight and new design. The
application we initially had in mind is the postdetection preamplifier
for phase noise measurements~\cite{rubiola02rsi}. Yet the result is
a versatile, general-purpose scheme useful in experimental
electronics and physics.
\section{Design Strategy}\label{sec:strategy}
The choice of the input stage determines the success of a precision
amplifier. This issue involves the choice of appropriate devices and
of the topology.
Available low-noise devices are the junction field-effect transistor
(JFET) and the bipolar transistor (BJT), either as part of an
operational amplifier or as a stand-alone component. The white noise
of these devices is well
understood~\cite{van.der.ziel:noise-ssdc,van.der.ziel:fluctuations,netzer81pieee,erdi81jssc}.
Conversely, flicker noise is still elusive and relies upon models, the
most accredited of which are due to McWhorter~\cite{mcwhorter57} and
Hooge~\cite{hooge69pla}, or on smart narrow-domain analyses, like~\cite{green85jpd-1,green85jpd-2,jamaldeen99jap}, rather than on a unified theory. Even worse,
aging and thermal drift chiefly depend on proprietary technologies,
thus the scientific literature ends up being of little use. The
JFET is appealing because of the inherently low white noise. The
noise temperature can be as low as a fraction of a kelvin.
Unfortunately, the low noise of the JFET derives from low input
current, hence a high input resistance (some M\ohm) is necessary. The
JFET noise voltage is hardly lower than 5 \unit{nV/\sqrt{Hz}}, some
five to six times higher than the thermal noise of a 50 \ohm\ resistor
($\sqrt{4kTR}=0.89$ \unit{nV/\sqrt{Hz}}). The JFET is therefore
discarded in favor of the BJT\@.
A feedback scheme, in which the gain is determined by a resistive
network, is necessary for gain accuracy and flatness over frequency.
Besides the well known differential stage, a single-transistor
configuration is possible (Ref.~\cite{motchenbacher:low-noise:1ed},
page 123), in which the input is connected to the base and the
feedback to the emitter. This configuration was popular in early
audio hi-fi amplifiers. The advantage of the single-transistor scheme
is that noise power is half the noise of a differential stage. On the
other hand, in a dc-coupled circuit thermal effects are difficult to
compensate without reintroducing noise, while thermal compensation of
the differential stage is guaranteed by the symmetry of the
base-emitter junctions. Hence we opt for the differential pair.
\begin{table}
\begin{sideways}
\begin{minipage}{0.88\textheight}
\caption{\label{tab:opa}%
\vrule width0pt height2.5ex depth2ex
Selection of some low-noise BJT amplifiers.}
\centering
\begin{tabular}{|c|cccc|c|c|}\hline
& OP27\footnotemark[1]
& LT1028\footnotemark[1]
& MAT02\footnotemark[2]
& MAT03\footnotemark[2]
& \parbox{12ex}{unit}
& \parbox{12ex}{{\footnotesize MAT03}\\
measured\footnotemark[3]%
\vrule width0pt height0ex depth0.5ex}
\\\hline
\vrule width0pt height2.5ex depth0ex
WHITE NOISE&&&&&&\\
noise voltage\footnotemark[4] $\sqrt{h_{0,v}}$
& 3 & 0.9 & 0.9 & 0.7 & \unit{nV/\sqrt{Hz}} & 0.8 \\
noise current\footnotemark[4] $\sqrt{h_{0,i}}$
& 0.4 & 1 & 0.9 & 1.4~\footnotemark[5]& \unit{pA/\sqrt{Hz}} & 1.2\\
noise power $2\sqrt{h_{0,v}h_{0,i}}$
&$2.4{\times}10^{-21}$
&$1.8{\times}10^{-21}$
&$1.6{\times}10^{-21}$
& $2.0{\times}10^{-21}$
& \unit{W/Hz}
& $1.9{\times}10^{-21}$\\
noise temperature $T_w$
& 174 & 130 & 117 & 142 & K & 139 \\
optimum resistance $R_{b,w}$
& 7500 & 900 & 1000 & 500 & \ohm & 667 \\
$2{\times}50$\ohm-input noise
& 3.3 & 1.55 & 1.55 & 1.5 & \unit{nV/\sqrt{Hz}} & 1.5~%
\footnotemark[6]\\\hline
\vrule width0pt height2.5ex depth0ex
FLICKER NOISE&&&&&&\\
noise voltage\footnotemark[4] $\sqrt{h_{-1,v}}$
& 4.3 & 1.7 & 1.6 & 1.2 & \unit{nV/\sqrt{Hz}} & (~$0.4$~)%
\footnotemark[7] \\
noise current\footnotemark[4] $\sqrt{h_{-1,i}}$
& 4.7 & 16 & 1.6 &n.\,a.& \unit{pA/\sqrt{Hz}} & 11 \\
noise power $2\sqrt{h_{-1,v}h_{-1,i}}$
& $4.1{\times}10^{-20}$
& $5.3{\times}10^{-20}$
& $5.1{\times}10^{-21}$
& -- & \unit{W/Hz}
& (\ldots)\footnotemark[8] \\
1-Hz noise temperature $T_f$
& 2950 & 3850 & 370 & -- & K & (\ldots)\footnotemark[8] \\
optimum resistance $R_{b,f}$
& 910 & 106 & 1000 & -- & \ohm & (\ldots)\footnotemark[8] \\
$2{\times}50$\ohm-input noise
& 4.3 & 2.3 & 1.6 & -- & \unit{nV/\sqrt{Hz}} & 1.1~%
\footnotemark[6] \\\hline
\vrule width0pt height2.5ex depth0ex
THERMAL DRIFT
& 200 & 250 & 100 & 300 & nV/K & --
\\\hline
\end{tabular}
\footnotetext[1]{Low-noise operational amplifier.}
\footnotetext[2]{Matched-transistor pair.
MAT02 is \textsc{npn}, MAT03 is \textsc{pnp}.
Data refer to the pair, biased at $I_C=1$ mA.}
\footnotetext[3]{Some MAT03 samples measured in our laboratory.
See Sec.~\protect\ref{sec:frontend}}
\footnotetext[4]{Power-law model of the spectrum, voltage or current,
$S(f)=h_0+h_{-1}f^{-1}+h_{-2}f^{-2}+\ldots$}
\footnotetext[5]{Obtained from the total noise with 100 k\ohm\
input resistance.}
\footnotetext[6]{Measured on the complete amplifier
(Sec.~\protect\ref{sec:results}), independently
of the measurement of the above $S_v$ and $S_i$.}
\footnotetext[7]{Derives from the noise current through $r_{bb'}$.
See Sec.~\protect\ref{sec:results}.}
\footnotetext[8]{Can not be compared to other data because voltage
and current are correlated. See Sec.~\protect\ref{sec:results}.}
\end{minipage}
\end{sideways}
\end{table}
Table~\ref{tab:opa} compares a selection of low-noise bipolar
amplifiers. The first columns are based on the specifications
available on the web
sites~\cite{www.analog-devices,www.linear-technology}. The right-hand
column derives from our measurements, discussed in
Secs.~\ref{sec:frontend} and \ref{sec:results}. Noise is described in
terms of a pair of random sources, voltage and current, which are
assumed independent. This refers to the Rothe-Dahlke
model~\cite{rothe56ire}. Nonetheless, a correlation factor arises in
measurements, due to the distributed base resistance $r_{bb'}$.
Whether and how $r_{bb'}$ is accounted for in the specifications is
often unclear. The noise spectra are approximated with the power law
$S(f)=\sum_{\alpha}h_\alpha f^\alpha$. This model, commonly used in
the domain of time and frequency, fits to the observations and
provides simple rules of transformation of spectra into two-sample
(Allan) variance $\sigma_y(\tau)$. This variance is an effective way
to describe the stability of a quantity $y$ as a function of the
measurement time $\tau$, avoiding the divergence problem of the
$f^\alpha$ processes in which $\alpha\le-1$.
References~\cite{rutman78pieee} and \cite{rubiola01im} provide the
background on this subject, and application to operational amplifiers.
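For later use, the conversion rule for a flicker process is particularly simple: a one-sided spectrum $S_v(f)=h_{-1}/f$ maps into a two-sample variance that is independent of the measurement time,

```latex
\sigma_v^2(\tau) = 2\ln(2)\,h_{-1}~,
```

so that, for instance, $\sqrt{h_{-1}}=1$ \unit{nV/\sqrt{Hz}} corresponds to a flat $\sigma_v\simeq1.18$ nV.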
The noise power spectrum $2\sqrt{h_vh_i}$ is the minimum noise of the
device, i.e., the noise that we expect when the input is connected to
a cold (0~K) resistor of value $R_b=\sqrt{h_{v}/h_{i}}$, still under
the assumption that voltage and current are uncorrelated. When the
input resistance takes the optimum value $R_b$, voltage and current
contributions to noise are equal. The optimum resistance is $R_{b,w}$
for white noise and $R_{b,f}$ for flicker. Denoting by $f_{c}$ the
corner frequency at which flicker noise is equal to white noise, thus
$f_{c,v}$ for voltage and $f_{c,i}$ for current, it holds that
$R_{b,w}/R_{b,f}=\sqrt{f_{c,i}/f_{c,v}}$. Interestingly, with most
bipolar operational amplifiers we find
$f_{c,i}/f_{c,v}\approx50{-}80$, hence $R_{b,w}/R_{b,f}\approx7{-}9$.
Whereas we have no explanation for this result, the lower value of the
flicker optimum resistance is a fortunate outcome. The equivalent
temperature is the noise power spectrum divided by the Boltzmann
constant $k=1.38{\times}10^{-23}$ J/K\@. A crucial parameter of
Table~\ref{tab:opa} is the total noise when each input is connected to
a 50~\ohm\ resistor at room temperature. This calculated value
includes noise voltage and current, and the thermal noise of the two
resistors. In a complete amplifier two resistors are needed, at the
input and in the feedback circuit.
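The $2{\times}50$ \ohm-input rows of Table~\ref{tab:opa} follow from adding the three contributions in power: amplifier noise voltage, noise current flowing in the two resistors, and their thermal noise. A minimal numeric check (room temperature assumed to be 290~K, consistent with the quoted $\sqrt{4kTR}=0.89$ \unit{nV/\sqrt{Hz}}):

```python
import math

k_B = 1.380649e-23     # J/K
T   = 290.0            # K, assumed room temperature
R   = 50.0             # ohm, termination at each input

def total_input_noise(en, inn):
    """Total input voltage noise (V/sqrt(Hz)) with each of the two
    inputs terminated by R: amplifier voltage noise, current noise
    flowing in both resistors, and their thermal noise, in power."""
    return math.sqrt(en**2 + 2*(inn*R)**2 + 2*4*k_B*T*R)

# white-noise entries of Table 1
print(f"OP27:   {total_input_noise(3.0e-9, 0.4e-12)*1e9:.2f} nV/sqrt(Hz)")
print(f"LT1028: {total_input_noise(0.9e-9, 1.0e-12)*1e9:.2f} nV/sqrt(Hz)")
```

The results, about 3.3 and 1.55 \unit{nV/\sqrt{Hz}}, reproduce the corresponding table entries.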
Still from Table~\ref{tab:opa}, the transistor pairs show lower noise
than the operational amplifiers, although the PNP pair is only
partially documented. Experience indicates that PNP transistors are
not as good as NPN ones in most respects, but exhibit lower noise. In
other domains, frequency multipliers and radio-frequency oscillators
make use of PNP transistors for critical applications because of the
lower flicker noise. Encouraged by this fact, we tried a differential
amplifier design based on the MAT03, after independent measurement of
some samples.
\section{Input Stage}\label{sec:frontend}
\begin{figure}[t]
\centering\includegraphics[scale=1]{measure-mat}
\caption{Noise measurement of a transistor pair. For clarity,
the distributed base resistance $r_{bb'}$ is extracted from the
transistors.}
\label{fig:measure-mat}
\end{figure}
The typical noise spectrum of the MAT03, reported in the data sheet,
shows an anomalous slope at low frequencies (0.1--1 Hz), significantly
different from $f^{-1}$. This is particularly visible at low
collector current (10--100 $\mu$A), but also noticeable at $I_C=1$
mA\@. We suspect that the typical spectrum reflects the temperature
fluctuation of the environment through the temperature coefficient of
the offset voltage $V_{OS}$ rather than providing information on the
flicker noise inherent in the transistor pair. The measurement of a
spectrum from 0.1 Hz takes some 5 min. At that time scale, in a
normal laboratory environment the dominant fluctuation is a drift. If
the drift is linear, $v(t)=ct$ starting at $t=0$, the Fourier
transform is $V(\omega)=j\pi c\,\delta'(\omega)-c/\omega^2$. Dropping
off the term in $\delta'(\omega)$, which is concentrated at dc and not
visible in a log-log scale, the power spectral density, i.e., the
squared Fourier transform, is
\begin{equation}
\label{eq:f-drift}
S_v(\omega)=\frac{c^2}{\omega^4} \qquad\mbox{or}\qquad
S_v(f)=\frac{c^2}{(2\pi)^4f^4}~~.
\end{equation}
A parabolic drift---seldom encountered in practice---has a spectrum
proportional to $f^{-6}$, while a smoothly walking drift tends to be
of the $f^{-5}$ type. As a consequence, a thermal drift can be
mistaken for a random process of slope $f^{-4}$ to $f^{-5}$, which may
hide the inherent $f^{-1}$ noise of the device. For this reason, the
test circuit (Fig.~\ref{fig:measure-mat}) must be enclosed in an
appropriate environment. We used, with similar results, a Dewar flask
coupled to the environment via a heat exchanger, and a metal box
mounted on a heat sink that has a mass of 1 kg and a thermal
resistance of 0.6 K/W\@. These odd layouts provide passive
temperature stabilization through a time constant and by eliminating
convection, and evacuate the small amount of heat (200 mW) dissipated
by the circuit.
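The masking effect is easy to reproduce numerically: a linear ramp added to white noise (both with assumed, illustrative magnitudes) dominates the lowest bins of a windowed periodogram by orders of magnitude:

```python
import numpy as np

# Linear drift c*t buried in white noise: the lowest-frequency bins of
# the periodogram are dominated by the drift, which is how a drift can
# masquerade as steep low-frequency noise. Drift rate and noise level
# are illustrative, not measured values.
rng = np.random.default_rng(0)
fs, N = 1.0, 1 << 15
t = np.arange(N) / fs
x = 1e-3 * t + rng.standard_normal(N)      # assumed drift + white noise
X = np.fft.rfft(x * np.hanning(N))         # Hann window limits leakage
psd = np.abs(X)**2                          # unnormalized periodogram
f = np.fft.rfftfreq(N, 1/fs)
low  = psd[1:4].mean()                      # lowest nonzero bins
high = psd[(f > 0.2) & (f < 0.4)].mean()    # white-noise floor
print(f"low/high power ratio: {low/high:.1e}")
```

Here the low-frequency bins exceed the white floor by several orders of magnitude, even though the drift is invisible in the time series at short scales.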
\begin{figure}[t]
\centering\includegraphics[scale=0.8,angle=0]{f695}
\caption{Typical spectrum of the noise voltage.}
\label{fig:f695}
\end{figure}
Due to the low value of $r_{bb'}$ (15--20 \ohm) the current
measurement can be made independent of voltage noise, but not vice
versa. Thus, we first measure the noise current setting
$R_B=8$~k\ohm, which is limited by the offset current; then we measure
the noise voltage setting $R_B=10$~\ohm. A technical difficulty is
that at 1 Hz and below most spectrum analyzers---including
ours---must be dc-coupled, hence high offset stability is needed in
order to prevent saturation of the analyzer. The measured spectra are
$S_i(f)=1.45{\times}10^{-24}+1.2{\times}10^{-22}f^{-1}$ \unit{A^2/Hz}
(i.e., 1.2\unit{pA/\sqrt{Hz}} white, and 11\unit{pA/\sqrt{Hz}}
flicker), and $S_v(f)=10^{-18}+1.8{\times}10^{-19}f^{-1}$
\unit{V^2/Hz} (i.e., 1\unit{nV/\sqrt{Hz}} white, and
425\unit{pV/\sqrt{Hz}} flicker). The current spectrum is the inherent
noise current of the differential pair. Conversely, with the voltage
spectrum (Fig.~\ref{fig:f695}) we must account for the effect of $R_B$
and $r_{bb'}$. With our test circuit, the expected white noise is
$h_{0,v}=4kTR+2qI_BR\simeq1.7{\times}10^{-20}R$ \unit{V^2/Hz}, which
is the sum of thermal noise and the shot noise of the base current
$I_B$. $R=2(R_B+r_{bb'})$ is the equivalent base resistance, while
the shot noise of the collector current is neglected. Assuming
$r_{bb'}=16$~\ohm\ (from the data sheet), the estimated noise is
$h_{0,v}\simeq9{\times}10^{-19}$ \unit{V^2/Hz}. This is in agreement
with the measured value of $10^{-18}$ \unit{V^2/Hz}. Then, since the
two base noise currents are independent, the expected effect of the
current flickering on the test circuit is
$2(R_B+r_{bb'})^2h_{-1,i}\simeq1.6{\times}10^{-19}$ \unit{V^2/Hz}.
The latter is close to the measured value $1.8{\times}10^{-19}$
\unit{V^2/Hz}.
Hence, the observed voltage flickering derives from the current noise
through the external resistors $R_B$ and the internal distributed
resistance $r_{bb'}$ of the transistors. Voltage and current are
therefore highly correlated. As a further consequence, the product
$2\sqrt{h_{-1,v}h_{-1,i}}$ is not the minimum noise power, and the
ratio $\sqrt{h_{-1,v}/h_{-1,i}}$ is not the optimum resistance. The
corresponding places in Table~\ref{tab:opa} are left blank. Due to
the measurement uncertainty, we can only state that a true independent
voltage flickering, if any, is not greater than $4{\times}10^{-20}$
\unit{V^2/Hz}. The same uncertainty affects the optimum resistance
$R_{b,f}$, which is close to zero.
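The noise budget above can be collected in a few lines. This sketch uses the measured current flicker and the data-sheet $r_{bb'}$; the base current is an assumed value of order $I_C/\beta$:

```python
# Expected voltage-noise terms of the test circuit of Fig. 1, from the
# measured current noise and the data-sheet r_bb'. The base current is
# an assumed value of order I_C/beta (I_C = 1 mA, beta ~ 100).
k_B, q, T = 1.380649e-23, 1.602e-19, 300.0
R_B, r_bb = 10.0, 16.0                 # ohm
R = 2.0 * (R_B + r_bb)                 # equivalent base resistance, 52 ohm
I_B = 10e-6                            # A, assumed
h0_v = 4*k_B*T*R + 2*q*I_B*R           # white: thermal + shot, ~9e-19 V^2/Hz
h1_i = 1.2e-22                         # measured current flicker, A^2/Hz
# the two base current sources are independent; each loads R_B + r_bb'
h1_v = 2.0 * (R_B + r_bb)**2 * h1_i    # ~1.6e-19 V^2/Hz
print(f"white: {h0_v:.2e} V^2/Hz, flicker via resistances: {h1_v:.2e} V^2/Hz")
```

Both numbers agree with the measured $10^{-18}$ and $1.8{\times}10^{-19}$ \unit{V^2/Hz} within the experimental uncertainty.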
The measured white noise is in agreement with the data sheet. On the
other hand, our measurements of flicker noise are made in such unusual
conditions that the results should not be considered in contradiction
with the specifications, as the specifications reflect the
low-frequency behavior of the device in a normal environment.
\section{Implementation and Results}\label{sec:results}
\begin{figure}[t]
\centering\includegraphics[scale=1]{scheme}
\caption{Scheme of the low-noise amplifier.}
\label{fig:scheme}
\end{figure}
Figure~\ref{fig:scheme} shows the scheme of the complete amplifier,
inspired by the ``super low-noise amplifier'' proposed in Fig.~3a of
the MAT03 data sheet. The NPN version is also discussed in
Ref.~\cite{franco:opa} (p.~344). The original circuit makes use of
three differential pairs connected in parallel, as it is designed for
the lowest white noise with low impedance sources ($\ll50$~\ohm), like
coil microphones. In our case, using more than one differential pair
would increase the flicker because of current noise.
The collector current $I_C=1.05$ mA results from a trade-off among
white noise, which is lower at high $I_C$, dc stability, which is
better at low dissipated power, flicker, and practical convenience.
The gain of the differential pair is $g_mR_C=205$, where
$g_m=I_C/V_T=41$~mA/V is the transistor transconductance, and $R_C=5$
k\ohm\ is the collector resistance. The overall gain is
$1+R_G/R_B\simeq500$. Hence the gain of the OP27 is about 2.5, which
guarantees the closed-loop stability (here, oscillation-free
operation). If a lower gain is needed, the gain of the differential
stage must be lowered by inserting $R_A$. The trick is that the
midpoint of $R_A$ is a ground for the dynamic signal, hence the
equivalent collector resistance that sets the gain is $R_C$ in
parallel to $\frac{1}{2}R_A$. The bias current source is a cascode
Wilson scheme, which includes a light emitting diode (LED) that
provides some temperature compensation.
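The gain budget can be cross-checked from the quoted values (thermal voltage $V_T\simeq25.6$ mV assumed, i.e., a junction near room temperature):

```python
# Gain budget of Fig. 3, from the values given in the text.
I_C = 1.05e-3              # A, collector current
V_T = 25.6e-3              # V, assumed thermal voltage
g_m = I_C / V_T            # ~41 mA/V, transconductance
R_C = 5e3                  # ohm, collector resistance
A_pair = g_m * R_C         # ~205, gain of the differential pair
A_total = 500              # 1 + R_G/R_B, overall closed-loop gain
A_op27 = A_total / A_pair  # ~2.4, residual gain of the OP27
print(f"g_m = {g_m*1e3:.1f} mA/V, A_pair = {A_pair:.0f}, A_op27 = {A_op27:.2f}")
```

The residual OP27 gain of about 2.4 matches the quoted value of 2.5.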
The stability of the collector resistors $R_C$ is a crucial point
because the voltage across them is 5~V\@. If each of these
resistors has a temperature coefficient of $10^{-6}$/K, in the worst
case there results a temperature coefficient of 10 $\mu$V/K at the
differential output, which is equivalent to an input thermal drift of
50~nV/K\@. This is 1/6 of the thermal coefficient of the differential
pair. In addition, absolute accuracy is important in order to match
the collector currents. This is necessary to take the full benefit
from the symmetry of the transistor pair.
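A sketch of the worst-case drift arithmetic, using the quoted values:

```python
# Worst-case thermal drift contributed by the collector resistors,
# referred to the input through the differential-pair gain.
V_RC   = 5.0        # V across each collector resistor (quoted value)
tc     = 1e-6       # /K, resistor temperature coefficient
A_pair = 205        # differential-pair gain (see text)
dVout = 2 * V_RC * tc       # worst case: opposite-sign drifts add
dVin  = dVout / A_pair      # referred to the input
print(f"{dVout*1e6:.0f} uV/K at the output, {dVin*1e9:.0f} nV/K at the input")
```

This reproduces the 10 $\mu$V/K differential-output and $\approx$50 nV/K input-referred figures.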
\begin{figure}[t]
\centering\includegraphics[width=\textwidth]{franck-ampli-small}
\caption{Prototype of the low-noise amplifier.}
\label{fig:prototype}
\end{figure}
Two equal amplifiers are assembled on a printed circuit board, and
inserted in a $10{\times}10{\times}2.8$ \unit{cm^3}, 4 mm thick
aluminum box (Fig.~\ref{fig:prototype}). The box provides thermal coupling to the environment
with a suitable time constant, and prevents fluctuations due to
convection. $LC$ filters, of the type commonly used in HF/VHF
circuits, are inserted in series to the power supply, in addition to
the usual bypass capacitors. For best stability, and also for
mechanical compatibility with our equipment, input and output
connectors are of the SMA type. Input cables should not be PTFE-insulated
because of piezoelectricity (see the review
paper~\cite{fukada00uffc}).
\begin{figure}[t]
\centering\includegraphics[scale=0.8]{f691}
\caption{Residual noise of the complete amplifier,
input terminated to a 50~\ohm\ resistor.}
\label{fig:f691}
\end{figure}
Figure~\ref{fig:f691} shows the noise spectrum of one prototype with
the input terminated to a 50~\ohm\ resistor. The measured noise is
$\sqrt{h_0}=1.5$ \unit{nV/\sqrt{Hz}} (white) and $\sqrt{h_{-1}}=1.1$
\unit{nV/\sqrt{Hz}} (flicker). The corner frequency at which the
white and flicker noise are equal is $f_c=0.5$ Hz. Converting the
flicker noise into two-sample (Allan) deviation, we get
$\sigma_v(\tau)=1.3$ nV, independent of the measurement time $\tau$.
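The conversion uses the standard flicker-noise rule $\sigma_v^2(\tau)=2\ln(2)\,h_{-1}$, independent of $\tau$; numerically:

```python
import math

# Allan deviation of a pure flicker (1/f) voltage-noise process:
# sigma_v^2 = 2*ln(2)*h_{-1}, flat versus the measurement time tau.
h_m1 = (1.1e-9)**2                          # measured flicker coeff., V^2
sigma_v = math.sqrt(2 * math.log(2) * h_m1)
print(f"sigma_v = {sigma_v*1e9:.2f} nV")
```

The result, 1.3 nV, is the flicker floor of the amplifier.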
Finally, we made a simple experiment aimed at showing in practical
terms the importance of a proper mechanical assembly. We first
removed the Al cover, exposing the circuit to the air flow of the
room, yet in a quiet environment, far from doors, fans, etc., and then
we replaced the cover with a sheet of plain paper (80 \unit{g/m^2}).
The low-frequency spectrum (Fig.~\ref{fig:f694}) is
$5{\times}10^{-19}f^{-5}$ \unit{V^2/Hz} in the first case, and about
$1.6{\times}10^{-19}f^{-4}$ \unit{V^2/Hz} in the second case. This
indicates the presence of an irregular drift, smoothed by the paper
protection. Interestingly, Hashiguchi~\cite{sikula03arw} reports
on thermal effects with the same slope and similar cutoff frequencies,
observed on a low-noise JFET amplifier for high impedance sources.
\begin{figure}[t]
\centering\includegraphics[scale=0.8]{f694}
\caption{Thermal effects on the amplifier.}
\label{fig:f694}
\end{figure}
\def\bibfile#1{/Users/rubiola/Documents?workocs/bib/#1}
\bibliographystyle{amsalpha}
\section{Introduction}
\label{sec:intro}
Strongly coupled plasmas produced in experiments are often far from thermal equilibrium.
Ultracold neutral plasmas and the dense plasmas in sonoluminescent bubbles, for instance, can have electron and ion temperatures that differ by an order of magnitude or more~\cite{KillianPRL1999,PuttermanARFM2000}.
In inertial confinement fusion plasmas and ultracold plasma mixtures, there can be significant differences in temperatures not just between ions and electrons, but also between the different species of ions~\cite{BergesonDPP2016,RinderknechtPRL2015}.
The combination of strong coupling and multiple temperatures makes these plasmas especially challenging to model, since one cannot freely call upon results from equilibrium statistical mechanics to make predictions about the plasma's thermodynamic and transport properties.
One approach for strongly coupled, two-temperature plasmas is to extend integral equation theories for the equilibrium pair distribution functions to allow multiple temperatures.
In this work, we present the first direct comparisons of three such extensions and evaluate their accuracy against molecular dynamics (MD) simulations.
The pair distribution functions are normally applied in the context of thermal equilibrium, where they can be used to evaluate the pressure, internal energy, and other thermodynamic state variables using exact formulas from equilibrium statistical mechanics.
In non-equilibrium plasmas -- especially those far from equilibrium -- they are used in extensions of ideal gas kinetic theory to treat strongly coupled plasmas.
The pair distributions enter into these models in the form of effective scattering potentials~\cite{BaalrudPRL2013} or local field corrections~\cite{IchimaruPRA1985,DharmaWardanaPRE1998,DaligaultPRE2009,VorbergerPRE2010,BenedictPRE2017}, designed to take approximate account of how the collisional transfer of momentum and energy in the plasma is affected by the many-body physics of strong coupling.
The pair distribution functions of two-temperature systems are also useful for testing the range of validity of approximate one-component models of strongly coupled plasmas.
Most importantly, by treating electrons and ions on equal footing, two-component plasma models grant access to electron-ion transport physics that lie beyond the scope of a one-component treatment.
The present work makes use of a model plasma consisting of ions and positively charged electrons.
This approach is useful in both modeling and simulation (e.g., Ref.~\onlinecite{DaligaultPRE2009}) to circumvent the collapse (recombination) of classical electron-ion plasmas, which to date must be treated with softened electron-ion pseudopotentials.
By instead using positively charged electrons, we are able to isolate the relevant two-temperature physics, which should not depend on the sign of the charge.
Future work will use a recently developed method for modeling strongly coupled electron-ion plasmas~\cite{TiwariPRE2017} to explore the effect of negatively charged electrons on pair correlations and transport.
Notwithstanding, the results shown here are immediately applicable to ionic mixtures with unequal temperatures.
At weak coupling, the pair distribution functions are accurately described by the Debye-H\"uckel theory of electrolytes~\cite{DebyePZ1923}.
For strongly coupled plasmas, however, the triplet and higher-order correlations ignored in the Debye-H\"uckel approach become important.
At thermal equilibrium, these correlations are well approximated by integral equation methods developed from equilibrium statistical mechanics.
The most successful of these involve solving the Ornstein-Zernike (OZ) relations together with an approximate closure, e.g., the hypernetted-chain (HNC) approximation~\cite{HansenMacDonald}.
When the plasma has more than one temperature, two methodologies have been explored: (a) to map the multi-temperature plasma to an effective one-temperature plasma or (b) to extend equilibrium integral equation theories to allow multiple temperatures.
The canonical example of the mapping approach is the Yukawa one-component plasma (YOCP) model.
In the YOCP model, the plasma is partitioned into a strongly coupled component of interest and a weakly coupled background.
All the physics of this background are condensed into a constant screening parameter that modifies the interaction between the strongly coupled particles.
These conditions are realized in dusty plasmas and in present-day ultracold neutral plasma experiments, where the YOCP model has been successfully applied to study the dust and ions, respectively~\cite{BergesonPRA2011,StricklerPRX2016,MelzerPRE2000}.
However, the YOCP cannot be used to describe processes that involve electrons, e.g., ambipolar diffusion or electron-ion temperature relaxation.
The other class of approaches extends the theory of equilibrium density correlations to the case of a plasma with two distinct temperatures.
In an early investigation, Salpeter derived pair correlation functions for a weakly coupled electron-ion plasma using arguments in the vein of Debye-H\"uckel theory~\cite{SalpeterJGR1963}.
Boercker and More extended Salpeter's results to strong ion coupling using an ansatz for a two-temperature partition function, but still required that the electron-ion Coulomb coupling be weak~\cite{BoerckerPRA1986}.
The extension to arbitrary coupling involves a generalization of the theory of pair correlations in equilibrium liquids.
The approaches considered here introduce the notion of a ``cross'' temperature $T_{ab}$ that serves as the kinetic energy scale for inter-species correlations.
One must also determine if the OZ equations themselves should be modified.
An attractive feature of such models is that all species are treated on equal footing, in contrast to the YOCP.
This permits the direct calculation of \textit{all} pair correlation functions and further allows for the possibility of studying two-temperature physics when both species are strongly coupled.
Our main goal is to determine the most accurate approximation available to extend equilibrium integral-equation theories of density correlations to two-temperature plasmas.
There seems to be no consensus at present regarding the form of the cross temperatures, $T_{ab}$, or whether it is necessary to modify the Ornstein-Zernike relations.
This work considers three formulations~\cite{SeuferlingPRA1989,BredowCPP2013,DharmaWardanaPRE2008} that have appeared in recent work on two-temperature strongly coupled plasmas~\cite{BredowCPP2013,DharmaWardanaPRE2008,SchwartzCPP2007,RosePOP2009,BenedictPRE2017}.
Our main finding is that the model proposed by Seuferling et al.~(``SVT'') in Ref.~\onlinecite{SeuferlingPRA1989} predicts pair distribution functions that agree with MD over a range of coupling strengths similar to what is seen for the usual equilibrium HNC theory.
We restrict our scope to a plasma with two species of classical point charges, labeled $i$ and $e$, with distinct masses and temperatures.
We focus on testing cases where $T_e \ge T_i$ and $m_e \le m_i$, i.e., the lighter ``electrons'' are warmer than the massive ``ions.''
This is the parameter regime of greatest importance in current strongly coupled plasma contexts.
We also take both species to have equal number density ($n_e=n_i=n/2$) and unit charge $Z_i=Z_e=1$, so that the interaction potential for all particles is the repulsive Coulomb potential,
\begin{equation}
\label{eq:v-coul}
v_{ab}(r) = \frac{e^2}{r}~,
\end{equation}
and the Coulomb coupling strength of each species is
\begin{equation}
\label{eq:gamma}
\Gamma_s = \frac{e^2/a_s}{k_{\textsc b}T_s}~,
\end{equation}
where $a_s=(3/4\pi n_s)^{1/3}$ is the mean spacing between particles of species $s$, $T_s$ is their temperature, $e$ is the elementary charge, and $k_{\textsc b}$ is the Boltzmann constant.
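For orientation, Eq.~\eqref{eq:gamma} can be evaluated in SI units (where the $4\pi\epsilon_0$ factor appears explicitly) for illustrative ultracold-neutral-plasma-like conditions; the density and temperatures below are assumptions for the example, not parameters of this work:

```python
import math

# Coulomb coupling strength Gamma_s = e^2/(4 pi eps0 a_s k_B T_s) in SI
# units. The density and temperatures are illustrative assumptions.
eps0, e, k_B = 8.8541878e-12, 1.602177e-19, 1.380649e-23

def gamma(n_s, T_s):
    a_s = (3.0 / (4.0 * math.pi * n_s)) ** (1.0 / 3.0)  # mean spacing, m
    return e**2 / (4.0 * math.pi * eps0 * a_s * k_B * T_s)

# e.g., n = 1e15 m^-3 per species, T_i = 1 K, T_e = 100 K
print(f"Gamma_i = {gamma(1e15, 1.0):.2f}, Gamma_e = {gamma(1e15, 100.0):.3f}")
```

With these numbers the ions are strongly coupled ($\Gamma_i\sim3$) while the warmer species remains weakly coupled, the situation of interest here.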
Another basic assumption of this work is the existence of a two-temperature steady state, or ``quasi-equilibrium.''
In a plasma, the collisional exchange of energy tends to be most efficient between particles of the same mass and least efficient between particles of very different mass.
It is frequently the case that particles of each species equilibrate among themselves before the system as a whole relaxes to thermal equilibrium.
On timescales longer than the intraspecies thermal relaxation time but shorter than the interspecies thermal relaxation time, it is often accurate to take the velocity distributions to be Maxwellian with temperatures $T_i$ and $T_e$.
Section~\ref{sec:hnc} introduces the three theories and discusses some of their asymptotic limits.
Section~\ref{sec:md} provides details on the MD techniques used to simulate a two-temperature quasi-equilibrium plasma.
Section~\ref{sec:compare} compares the pair distribution functions of the theoretical models with the MD results.
Section~\ref{sec:yocp} uses the SVT model to study when the YOCP model for ion-ion correlations breaks down as the electron coupling strength increases.
Section~\ref{sec:conc} offers some concluding remarks and describes how the present results will be of use to future studies of two-temperature plasmas.
\section{Candidate HNC Extensions}
\label{sec:hnc}
At thermal equilibrium, the Ornstein-Zernike (OZ) relations are~\cite{OrnsteinKNAW1914}
\begin{equation}
\label{eq:oz-eq}
\hat h_{ab}(k) = \hat c_{ab}(k) + \sum_{s=i,e} n_s \hat h_{as}(k) \hat c_{sb}(k)~,
\end{equation}
where $\hat h_{ab}(k)$ and $\hat c_{ab}(k)$ are the Fourier transformed total and direct correlation functions, respectively, and $k$ is the wavenumber.
The OZ equations must be solved in conjunction with approximate closure relations.
The hypernetted-chain (HNC) closure is given by~\cite{vanLeeuwenP1959}
\begin{equation}
\label{eq:hnc-eq}
g_{ab}(r) = \exp{ \left[ -\frac{v_{ab}(r)}{k_{\textsc b}T} + h_{ab}(r) - c_{ab}(r) \right] }~,
\end{equation}
where $g_{ab}(r) = 1 + h_{ab}(r)$ are the radial distribution functions (RDF).
The HNC closure is very accurate when $v_{ab}(r)$ is long-ranged, as is the case for the Coulomb potential.
However, it is reasonable to expect that the bridge functions (which are neglected in HNC) contribute non-negligibly to the RDFs when $\max{(\Gamma_i,\Gamma_e)} \gtrsim 10$, based on knowledge of the OCP RDFs.
A number of proposed improvements model the neglected bridge functions; see for example Refs.~\cite{RosenfeldPRA1979,IyetomiPRA1983,IyetomiPRA1992,KahlPRE1996}.
We revisit the approximate nature of the HNC closure when comparing with MD results in Sec.~\ref{sec:compare}.
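For concreteness, the equilibrium HNC-OZ system can be solved by simple Picard iteration. The following minimal one-component sketch uses a Yukawa interaction to sidestep the long-range Coulomb subtleties; the grid sizes and mixing parameter are illustrative choices, not taken from the text.

```python
import numpy as np
from scipy.fft import dst

# Grid: r_j = j*dr, k_m = m*dk with dk = pi/((N+1)*dr) so that the type-I DST
# is exact on these nodes.  All lengths are in units of the ion-sphere radius.
N, dr = 2048, 0.02
r = dr * np.arange(1, N + 1)
dk = np.pi / ((N + 1) * dr)
k = dk * np.arange(1, N + 1)
n = 3.0 / (4.0 * np.pi)                      # number density for a = 1

def fourier(f):
    """Radial 3D Fourier transform via a discrete sine transform."""
    return 2.0 * np.pi * dr / k * dst(r * f, type=1)

def inv_fourier(F):
    return dk / (4.0 * np.pi**2 * r) * dst(k * F, type=1)

def hnc_yukawa(Gamma, kappa, mix=0.3, tol=1e-8, itmax=2000):
    beta_v = Gamma * np.exp(-kappa * r) / r  # dimensionless pair interaction
    c = -beta_v                              # Mayer-function initial guess
    for _ in range(itmax):
        C = fourier(c)
        gam = inv_fourier(n * C**2 / (1.0 - n * C))  # gamma = h - c from OZ
        c_new = np.exp(-beta_v + gam) - 1.0 - gam    # HNC closure
        if np.max(np.abs(c_new - c)) < tol:
            return np.exp(-beta_v + gam)             # g(r) = 1 + h(r)
        c = mix * c_new + (1.0 - mix) * c
    raise RuntimeError("HNC iteration did not converge")

g = hnc_yukawa(Gamma=1.0, kappa=1.0)
```

For the bare Coulomb potential, the long-range part of $c_{ab}(r)$ must be split off and transformed analytically, as is standard in HNC treatments of plasmas.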
To extend the theory of spatial correlations at equilibrium to multi-temperature systems one must address two points: (a) how to characterize the ``cross-temperatures'' $T_{ab}$ that set the kinetic energy scale for inter-species correlations and (b) whether the OZ relations should be modified.
Most investigations to date have extended the HNC-OZ system of equations in one of three ways.
We will refer to them in this work as the SQRT, MASS, and SVT models.
In the SQRT\cite{BredowCPP2013} and MASS\cite{DharmaWardanaPRE2008} models, the OZ relations are taken to be the same as at equilibrium, and the HNC closures are assumed to be
\begin{equation}
\label{eq:hnc-Tab}
g_{ab}(r) = \exp{ \left[ -\frac{v_{ab}(r)}{k_{\textsc b}T_{ab}} + h_{ab}(r) - c_{ab}(r) \right] }~.
\end{equation}
The models are distinguished by different ansatzes for the cross-temperatures,
\begin{subequations}
\begin{align}
\label{eq:T-sqrt}
& T^{\textsc{sqrt}}_{ab} = \sqrt{T_a T_b} \\
\label{eq:T-mass}
& T^{\textsc{mass}}_{ab} = \frac{m_a T_b + m_b T_a}{m_a + m_b} ,
\end{align}
\end{subequations}
respectively.
A distinguishing feature of the SQRT model is that it is mass-independent.
In the parameter space of this work ($m_i \ge m_e$, $T_i \le T_e$), it follows that $T^\textsc{sqrt}_{ei} \le T^\textsc{mass}_{ei}$.
Consequently, the SQRT model should be expected to result in stronger interspecies correlations.
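The inequality $T^\textsc{sqrt}_{ei} \le T^\textsc{mass}_{ei}$, and the rapid approach of $T^\textsc{mass}_{ei}$ to $T_e$ at large mass ratio, are easy to verify numerically; the temperatures and mass ratios below are illustrative.

```python
# Compare the SQRT and MASS cross-temperature ansatzes in the regime
# m_i >= m_e, T_e >= T_i considered in the text (illustrative numbers).
def T_sqrt(Ta, Tb):
    return (Ta * Tb) ** 0.5

def T_mass(ma, Ta, mb, Tb):
    return (ma * Tb + mb * Ta) / (ma + mb)

Ti, Te = 1.0, 10.0
T_ei_sqrt = T_sqrt(Ti, Te)                                   # mass-independent
T_ei_mass = {mr: T_mass(mr, Ti, 1.0, Te) for mr in (1, 5, 30, 1836)}
```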
In the SVT\cite{SeuferlingPRA1989} model, the cross-temperature is the same as in the MASS model from Eq.~\eqref{eq:T-mass}, but the OZ relations are modified to be
\begin{equation}
\label{eq:svt-oz}
\hat h_{ab} = \hat c_{ab} + \sum_{s=i,e} n_s \left( \frac{m_{ab}T_{as}}{m_aT_{ab}} \hat c_{as}\hat h_{sb} + \frac{m_{ab}T_{sb}}{m_bT_{ab}} \hat h_{as}\hat c_{sb} \right) ,
\end{equation}
which we will call the SVT-OZ equations~\footnote{The original formulas in Eq.~(38) of Ref.~\onlinecite{SeuferlingPRA1989} contain some typographical errors.}.
Here, $m_{ab}=m_am_b/(m_a+m_b)$ is the reduced mass of an $a,b$ pair.
The SVT model is based on an ansatz for the two- and three-particle phase-space distribution functions,
\begin{subequations}
\begin{equation}
\label{eq:bbgky-f2}
F^{(2)}_{ab} = f_a(p_1) f_b(p_2)~ g_{ab}(r_{12})
\end{equation}
\begin{equation}
\label{eq:bbgky-f3}
F^{(3)}_{abc} = f_a(p_1) f_b(p_2) f_c(p_3)~g_{abc}(r_{12}, r_{13}, r_{23})
\end{equation}
\end{subequations}
where
\begin{equation}
\label{eq:maxwellian}
f_s(p) = \left(2\pi m_s k_{\textsc b}T_s\right)^{-\frac{3}{2}} \exp{\left(\frac{-p^2}{2 m_s k_{\textsc b}T_s}\right)}
\end{equation}
is the Maxwell-Boltzmann distribution with temperature $T_s$ normalized to unity, and $g_{abc}$ is the triplet distribution function.
The cross-temperature $T^\textsc{mass}_{ab}$ naturally arises after integrating the two-particle BBGKY equation over momenta, which gives a Yvon-Born-Green-like equation for the RDFs $g_{ab}(r_{12})$ in terms of the triplet functions $g_{abc}(r_{12},r_{13},r_{23})$~\cite{SeuferlingPRA1989,SchwartzCPP2007}.
The SVT-OZ relations are derived by assuming both the superposition approximation for the triplet functions, $g_{abc}\approx g_{ab}g_{ac}g_{bc}$, and the HNC approximation from Eq.~\eqref{eq:hnc-Tab} for the direct correlation functions.
Several steps from this point onward are missing from the derivation in Ref.~\onlinecite{SeuferlingPRA1989}.
These steps are written out in full in Appendix~\ref{sec:svt-deriv}.
In the MASS and SVT models, the interplay between the mass and temperature dependence of $T_{ei}$ is important.
From Eq.~\eqref{eq:T-mass}, one sees that the mass dependence is dominant, causing $T_{ei}$ to rapidly converge to $T_e$ for $m_i \gtrsim 20 m_e$.
From this, one expects the strength of electron-ion correlations in the MASS and SVT models to be similar to that of the electron-electron correlations when the masses are sufficiently different.
The basic screening physics of each model can be understood through the weakly coupled limit.
In this limit, $\hat c_{ab} \approx -\hat v_{ab}/k_{\textsc b}T_{ab}$, and the OZ (or SVT-OZ) equations can be explicitly solved for the partial static structure factors,
\begin{equation}
\label{eq:sk-def}
S_{ab}(k) = \delta_{ab} + \sqrt{n_an_b}\,\hat h_{ab}(k)~,
\end{equation}
where $\delta_{ab}$ is the Kronecker delta.
The expressions for each model are written in Appendix~\ref{sec:wc-lim}, from which one can compare the models in both $k$-space and in real space.
First, each model shows qualitative differences in the long wavelength ($k \to 0$) limit.
The values of each model's $S_{ab}(0)$ are tabulated in Table~\ref{tab:Sklim}.
In the long-wavelength limit the SQRT and SVT model structure factors take finite values, as one would expect of a plasma that exhibits Debye screening.
In fact, when $m_e \ll m_i$, both SQRT and SVT give the $S_{ii}(k)$ of a weakly coupled one-component plasma screened by a background species.
However, in the MASS model, $S_{ab}(0)=0$, characteristic of the OCP~\cite{Baus19801}.
The physical content of these differences is further elucidated by examining the charge density structure factor, $S_{ZZ}(k) = \frac{1}{2} ( S_{ii} + 2S_{ei} + S_{ee} )$.
(Note that at weak coupling $S_{ZZ}$ is the same whether the electrons are positively or negatively charged due to the leading sign dependence in $S_{ei}$.)
In the long-wavelength limit, $S_{ZZ}(k)$ describes variations in the total charge density; the condition that the plasma be quasineutral is equivalent to having $S_{ZZ}(0) = 0$.
On the other hand, the long-wavelength limit of the partial structure factors $S_{ab}(0)$ describe screening.
Of the models studied here, only SVT satisfies $S_{ZZ}(0)=0$ with nonzero $S_{ab}(0)$.
The MASS model is trivially quasineutral as well, though $S_{ab}(0)=0$ suggests that it achieves quasineutrality not by self-consistent screening, but by not allowing long-wavelength density variations of any kind.
The SQRT model is interesting in that its nonzero $S_{ii}(0)$ implies proper Debye screening of ions by electrons, yet it is not quasineutral (SQRT $S_{ZZ}(0) \ne 0$), suggesting that the effect of the ions on the electrons is not consistently treated.
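These statements can be checked quickly using the Table~\ref{tab:Sklim} entries; the snippet below assumes $\kappa^2=\kappa_i^2+\kappa_e^2$ and uses arbitrary illustrative values for the inverse screening lengths.

```python
import math

# Quasineutrality check S_ZZ(0) = (S_ii + 2*S_ei + S_ee)/2 using the
# long-wavelength limits of Table 1; assumes kappa^2 = kappa_i^2 + kappa_e^2.
def szz0(Sii, Sei, See):
    return 0.5 * (Sii + 2.0 * Sei + See)

ki2, ke2 = 0.5, 2.0                 # illustrative kappa_i^2, kappa_e^2
k2 = ki2 + ke2                      # kappa^2
svt_szz0 = szz0(ke2 / k2, -ke2 / k2, ke2 / k2)                    # SVT, m_e << m_i
sqrt_szz0 = szz0(ke2 / k2, -math.sqrt(ki2 * ke2) / k2, ki2 / k2)  # SQRT
```

The SQRT result reduces to $(\kappa_e-\kappa_i)^2/2\kappa^2$, which vanishes only when the two species are identically screened.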
Second, the pole structure of the static structure factors in each model gives rise to different functional forms for the RDFs.
In the SQRT model, $S_{ab}(k)$ has a single imaginary pole, which leads to an exponentially screened potential after inverse Fourier transformation.
In contrast, the MASS and SVT structure factors have two imaginary poles, so that the screening comes from the difference of two exponentials:
\begin{equation}
\label{eq:2-yuk}
g_{ab}(r) \simeq \exp\left\{-\frac{A_1 e^{-K_1r} + A_2 e^{-K_2r}}{4\pi\sqrt{n_an_b}r}\right\} ~.
\end{equation}
Here $A_1$, $A_2$, $K_1$, and $K_2$ are constant coefficients.
Their values for each model are listed in Table~\ref{tab:wc-coeffs}.
Observe that in the limit where $m_e\ll m_i$, the SVT $g_{ii}(r)$ is that of a weakly coupled YOCP screened by electrons.
The relationship between the SVT model and the screened OCP is expounded upon in Section~\ref{sec:yocp}.
\begin{table}
\centering
\begin{tabular}{cccc}
\hline\hline
$S_{ab}(0)$ & SQRT & SVT & SVT, $m_e\ll m_i$
\\ \hline
${i-i}$ & $\kappa_e^2/\kappa^2$ & $(\kappa^2\kappa_{ei}^2 - \kappa_e^2\kappa_i^2)/\kappa^2\kappa_{ei}^2$ & $\kappa_e^2/\kappa^2$
\\
${e-i}$ & $-\kappa_e\kappa_i/\kappa^2$ & $-\frac{m_iT_i+m_eT_e}{m_iT_e+m_eT_i}\kappa_i^2\kappa_e^2/\kappa^2\kappa_{ei}^2$ & $-\kappa_e^2/\kappa^2$
\\
${e-e}$ & $\kappa_i^2/\kappa^2$ & $(\kappa^2\kappa_{ei}^2 - \kappa_e^2\kappa_i^2)/\kappa^2\kappa_{ei}^2$ & $\kappa_e^2/\kappa^2$
\\ \hline\hline
\end{tabular}
\caption{Long-wavelength limits of the static structure factors for the weakly coupled limit of the SQRT and SVT models, as well as the SVT model when the mass difference is large. In the MASS model (not listed), all $S_{ab}(0)=0$. The various inverse screening lengths are defined in Appendix~\ref{sec:wc-lim}.}
\label{tab:Sklim}
\end{table}
\begin{table*}
\begin{tabular}{c|ccc|ccc|ccc|ccc}
\hline\hline
& \multicolumn{3}{|c|}{SQRT}
& \multicolumn{3}{|c|}{MASS}
& \multicolumn{3}{|c|}{SVT}
& \multicolumn{3}{|c}{SVT, $m_e\ll m_i$}
\\
& ${i-i}$
& ${e-i}$
& ${e-e}$
& ${i-i}$
& ${e-i}$
& ${e-e}$
& ${i-i}$
& ${e-i}$
& ${e-e}$
& ${i-i}$
& ${e-i}$
& ${e-e}$
\\ \hline
$A_1$
& $\kappa_i^2$
& $\kappa_i\kappa_e$
& $\kappa_e^2$
& $\tallfrac{\kappa_+^2(\kappa_i^2 - \kappa_-^2)}{\kappa_+^2-\kappa_-^2}$
& $\tallfrac{\kappa_{ei}^2\kappa_+^2}{\kappa_+^2-\kappa_-^2}$
& $\tallfrac{\kappa_+^2(\kappa_e^2 - \kappa_-^2)}{\kappa_+^2-\kappa_-^2}$
& $\tallfrac{\kappa_i^4}{\kappa^2-\kappa_{ei}^2}$
& $\tallfrac{\kappa_{ei}^2\kappa^2
- c
\kappa_i^2\kappa_e^2}{\kappa^2-\kappa_{ei}^2}$
& $\tallfrac{\kappa_e^4}{\kappa^2-\kappa_{ei}^2}$
& $\kappa_i^2$
& $\kappa_{e}^2$
& $\tallfrac{\kappa_e^4}{\kappa_i^2}$
\\
$K_1$
& $\kappa$
& $\kappa$
& $\kappa$
& $\kappa_+$
& $\kappa_+$
& $\kappa_+$
& $\kappa$
& $\kappa$
& $\kappa$
& $\kappa$
& $\kappa$
& $\kappa$
\\
$A_2$
& $0$
& $0$
& $0$
& $\tallfrac{\kappa_-^2(\kappa_i^2 - \kappa_+^2)}{\kappa_+^2-\kappa_-^2}$
& $\tallfrac{\kappa_{ei}^2\kappa_-^2}{\kappa_+^2-\kappa_-^2}$
& $\tallfrac{\kappa_-^2(\kappa_e^2 - \kappa_+^2)}{\kappa_+^2-\kappa_-^2}$
& $\tallfrac{\kappa_i^2(\kappa_{ei}^2-\kappa_e^2)}{\kappa^2-\kappa_{ei}^2}$
& $\tallfrac{\kappa_{ei}^4
- c
\kappa_i^2\kappa_e^2}{\kappa^2-\kappa_{ei}^2}$
& $\tallfrac{\kappa_e^2(\kappa_{ei}^2-\kappa_i^2)}{\kappa^2-\kappa_{ei}^2}$
& $0$
& $0$
& $\tallfrac{\kappa_e^2(\kappa_e^2-\kappa_i^2)}{\kappa_i^2}$
\\
$K_2$
& $-$
& $-$
& $-$
& $\kappa_-$
& $\kappa_-$
& $\kappa_-$
& $\kappa_{ei}$
& $\kappa_{ei}$
& $\kappa_{ei}$
& $-$
& $-$
& $\kappa_e$
\\
\hline\hline
\end{tabular}
\caption{Coefficients appearing in Eq.~\eqref{eq:2-yuk} for the weak-coupling form for the RDFs for each model, as well as for the SVT model when $m_e \ll m_i$. The various inverse screening lengths are defined in Appendix~\ref{sec:wc-lim}, and $c=\frac{m_eT_e+m_iT_i}{m_iT_e+m_eT_i}$ in the SVT column.}
\label{tab:wc-coeffs}
\end{table*}
\section{Simulation Model}
\label{sec:md}
Classical molecular dynamics simulations were carried out using the open-source code LAMMPS\cite{Plimpton1995}.
A two-component, two-temperature plasma was created in a three-dimensional periodic box.
The charged particles were made to interact through the repulsive Coulomb potential, Eq.~\eqref{eq:v-coul}, and the long-range part of the Coulomb interaction was accounted for using the particle-particle, particle-mesh method~\cite{Plimpton97}.
Every simulation system consisted of $10^4$ particles of each species, each singly charged.
The time step for numerical integration was chosen based on the inverse electron plasma frequency, $\omega_{pe}^{-1} = \sqrt{m_e/4\pi e^2 n_e}$.
All simulations used time steps in the range $\delta t = 0.005$--$0.01\,\omega_{pe}^{-1}$, which was sufficient to resolve the dynamics of both species.
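The time-step choice above can be sketched in reduced units; the electron density used below (half the total, with $a=e=m_e=1$) is a hypothetical choice for illustration.

```python
import math

# Electron plasma frequency omega_pe = sqrt(4*pi*e^2*n_e/m_e); reduced units
# with e = m_e = a = 1 are used here, and n_e = n/2 is a hypothetical choice.
def omega_pe(n_e, e=1.0, m_e=1.0):
    return math.sqrt(4.0 * math.pi * e**2 * n_e / m_e)

w_pe = omega_pe(n_e=3.0 / (8.0 * math.pi))
dt_min, dt_max = 0.005 / w_pe, 0.01 / w_pe   # time-step range quoted above
```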
The equilibration of a two-species system to two different temperatures remains a nontrivial issue from a numerical point of view~\cite{fukushi_2000}.
For the present simulations, each species was coupled to its own Langevin thermostat.
The Langevin collision frequencies were chosen such that both species attained their target temperatures within $1\%$ statistical fluctuations.
Figure~\ref{fig:thermo-coeff} shows that if the thermostat collision frequency was too weak, the ions thermalized to a temperature higher than the target temperature.
One can see that even a $1\%$ drift from the requested $T_i$ is large enough to make a discernible difference in the ion-ion RDF.
We attribute this effect to the fact that in a two-temperature simulation, the thermostats must work against the plasma's natural inclination to thermally relax, which requires that the thermostat collision frequency be greater than the electron-ion collision frequency.
If these two rates are comparable, however, then one expects the ions (which couple to the thermostat inefficiently when their mass is large) to thermalize to a temperature greater than the thermostat temperature but less than the temperature they would attain if allowed to relax.
It was also observed that for high mass ratios, the ion-ion RDF takes much longer to stabilize than the ion temperature.
Even after the ions acquire the temperature of their heat bath, $T_i$, spatial correlations between ions continue to develop for hundreds to thousands of $\omega_{pe}^{-1}$ of simulation time.
In comparison, $g_{ee}$ and $g_{ei}$ stabilize on the same timescale as $T_i$, though small variations thereafter occur in response to the evolution of $g_{ii}$.
For reference, the OCP typically requires only a few plasma periods of averaging time for well-resolved RDFs.
Since the case of large mass ratio is of particular experimental importance, the computational burden of simulating such plasmas underscores the need for a reliable theoretical model of correlations in two-temperature plasmas.
We compared the RDFs obtained from a system under Langevin thermostats with those of a system equilibrated using two simultaneous Nos\'{e}-Hoover thermostats.
At higher mass ratios, the results were identical irrespective of the choice of thermostat.
At lower mass ratios ($m_i/m_e \lesssim 5$), the system under Nos\'{e}-Hoover thermostats displayed the ``flying ice cube effect,'' in which the system accumulated a spurious net momentum, leading to incorrect RDFs~\cite{harvey_98}.
The Langevin thermostats, however, were found to give consistent RDFs for all mass ratios.
The simulations were carried out in three stages.
First, we performed an initial thermostatting stage until each species reached its target temperature.
The required length of this phase depended on the mass ratio.
It was found that for $m_i=m_e$, $400$ electron plasma periods were sufficient and that this number scaled with increased ion mass as $\sqrt{m_i/m_e}$.
Second, the evolution of the RDFs was monitored until it was seen that the ion-ion correlations had fully developed.
Third, time-averaged RDFs were computed while keeping both the thermostats on.
The thermostats were kept active to prevent electron-ion temperature relaxation over the timescales necessary to accurately sample the RDFs.
Because the thermostats were left on during the entire simulation period, the total energy was not conserved.
\begin{figure}[t]
\includegraphics[width=0.9\columnwidth]{rdfs_thermo_coeff_onecol}
\caption{Effects of varying the thermostat Langevin collision frequency on the RDFs and temperature fluctuations. For the simulations shown, $\Gamma_i=50$, $\Gamma_e=1$, and $m_i=30m_e$. Lines show different values of the inverse Langevin collision frequency: $\nu^{-1}=5\delta t$ (solid red), $10\delta t$ (dashed blue), and $20 \delta t$ (dash-dotted green).}
\label{fig:thermo-coeff}
\end{figure}
\section{Comparison of HNC with MD}
\label{sec:compare}
We have evaluated each of the three HNC extensions described in Section~\ref{sec:hnc} and conducted MD simulations as described in Section~\ref{sec:md} for several combinations of coupling strengths and mass ratios.
Here we present an illustrative subset of the comparisons made, shown in Figure~\ref{fig:rdfs}.
Plots for other parameter combinations can be found in the Supplementary Material.
Figures~\ref{fig:rdfs}a-b show the radial distribution functions for a plasma of strongly coupled ions and weakly coupled electrons, with $m_i=m_e$ and $m_i=30m_e$.
The first observation to make is that the strength of electron-ion correlations is clearly set by $T_{ei}^\textsc{mass}$, not by $T_{ei}^\textsc{sqrt}$.
The too-wide Coulomb hole in $g_{ei}(r)$ shows that the SQRT model overestimates the strength of electron-ion coupling.
Furthermore, the SQRT model predicts electron-electron correlation functions that qualitatively differ from MASS, SVT, and MD.
The physical reason is most clearly illustrated by examining the weakly coupled limit of the SVT $g_{ee}(r)$ when $m_e \ll m_i$.
Identifying the potential of mean force as $\phi_{ee} = -k_{\textsc b}T_e\ln g_{ee}$, one can write (see Eq.~\eqref{eq:2-yuk} and Table~\ref{tab:wc-coeffs})
\begin{equation}
\label{eq:svt-masslim-gee}
\phi^\textsc{svt}_{ee}(r) \simeq \frac{e^2}{r}
\left[ e^{-\kappa_e r} - \frac{\kappa_e^2}{\kappa_i^2}
\left(e^{-\kappa_e r} - e^{-\kappa r}
\right)
\right]~.
\end{equation}
The first term is the screened repulsion that electrons would experience from one another if they were an OCP, while the ``attractive'' second term results from the tendency for electrons to cluster when they form screening clouds around ions.
These two processes compete, giving rise to the slow decay in the SVT, MASS, and MD $g_{ee}(r)$ compared to the SQRT model, which lacks this second ``attractive'' part.
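The competition between the two terms of Eq.~\eqref{eq:svt-masslim-gee} can be seen directly; the screening parameters below are hypothetical, and the snippet assumes $\kappa^2=\kappa_i^2+\kappa_e^2$.

```python
import numpy as np

# Weak-coupling electron-electron potential of mean force, Eq. (svt-masslim-gee),
# with hypothetical screening parameters (units e = 1) and the assumption
# kappa^2 = kappa_i^2 + kappa_e^2.
kappa_e, kappa_i = 0.3, 1.0
kappa = np.hypot(kappa_e, kappa_i)

r = np.linspace(0.5, 20.0, 400)
yukawa = np.exp(-kappa_e * r) / r            # OCP-like screened repulsion
phi_ee = yukawa - (kappa_e / kappa_i) ** 2 * (
    np.exp(-kappa_e * r) - np.exp(-kappa * r)) / r
```

Since $\kappa > \kappa_e$, the correction term is positive at all $r$ and always weakens the bare screened repulsion.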
These deficiencies in the SQRT $g_{ei}(r)$ and $g_{ee}(r)$ were present at all coupling strengths and mass ratios investigated.
The errors between SQRT and MD worsen at stronger coupling strengths, as can be seen in the Supplementary Material.
The remaining comparison of the MASS and SVT models highlights the question of whether the OZ equations require modification to describe a two-temperature system.
In all cases studied, the SVT radial distribution functions more closely agree with MD, though the differences between the MASS and SVT RDFs often appear small.
In fact, in Ref.~\onlinecite{DharmaWardanaPRE2008}, the MASS model's apparent accuracy is cited as evidence that SVT's modified OZ equations are unnecessary.
Important differences in favor of the SVT approach surface when comparing the structure factors.
An example is shown in Figure~\ref{fig:sk-gmi-4-gme-0p1-mime-30}.
The ion-ion structure factor vanishes in the MASS model as $k\to 0$, indicating that the ions are thermodynamically similar to an \textit{unscreened} OCP, despite the presence of a screening electron background.
In contrast, the SVT model gives a finite value, in line with both MD and the YOCP model.
This behavior is demonstrated analytically in Sec.~\ref{sec:yocp}.
Since all the models considered are variants of the HNC approximation, it should be expected that they will all suffer inaccuracies at higher coupling due to the lack of bridge functions.
In the OCP, bridge functions primarily correct the RDF oscillation amplitudes, which are somewhat too small without the bridge functions.
Other differences such as the size of the Coulomb hole and the oscillation phase are relatively minor, so if these features are a point of disagreement between the models and MD, it is more likely due to the two-temperature modeling than the lack of bridge functions.
Figure~\ref{fig:rdfs}c shows the RDFs when both species are strongly coupled.
As expected, the SVT model underestimates the peak of $g_{ii}(r)$ but otherwise agrees well with MD.
In contrast, the MASS model appears to break down entirely in this regime of strong electron coupling.
An unexpected feature of the MD RDFs is that at high mass ratio, the height of the first peak of $g_{ee}$ exceeds that of $g_{ei}$.
Ordinarily, one expects the height of this peak to correlate with the strength of the bare interaction compared to the kinetic energy, so that since $T_i < T_{ei} < T_e$, one anticipates $\max{(g_{ii})} > \max{(g_{ei})} > \max{(g_{ee})}$.
For low mass ratios, both MD and the HNC models bear out this trend at all coupling strengths, while at higher mass ratios, the HNC models do not capture the augmented first correlation peak in $g_{ee}$ observed in MD.
Figure~\ref{fig:rdfs}d shows the breakdown of the two-temperature HNC models at higher ion coupling strength.
All three two-temperature models \textit{over}estimate the strength of correlations in the plasma, exhibiting Coulomb holes and RDF oscillations that are larger than those seen in the MD simulations.
This is in contrast to how the usual equilibrium HNC theory fails, which is by underpredicting the peaks.
For higher mass ratios and/or lower temperature ratios (see the Supplementary Material), the SVT RDFs are in surprisingly good agreement with MD even at such strong coupling.
These are cases that happen to lie in the transitional regime where SVT goes from underpredicting to overpredicting the RDF peaks.
\begin{figure*}
\includegraphics[width=\textwidth]{rdfs_together}
\caption{Model RDFs compared with molecular dynamics simulation results. Connected black circles are MD, solid red lines are the SVT model, dotted orange lines are the MASS model, and dash-dotted blue lines are the SQRT model.}
\label{fig:rdfs}
\end{figure*}
\begin{figure}
\includegraphics[width=0.9\columnwidth]{{sk_gmi_4_gme_0.1_mi_30me_ii_only}.pdf}
\caption{Model ion-ion static structure factors compared with molecular dynamics simulation for $\Gamma_i=4$, $\Gamma_e=0.1$, and $m_i=30m_e$. The inset shows $S_{ii}(k)$ near $k=0$, including the YOCP model (green squares).}
\label{fig:sk-gmi-4-gme-0p1-mime-30}
\end{figure}
\section{Comparison with the Yukawa OCP }
\label{sec:yocp}
We now compare the ion-ion correlations of the SVT model to the Yukawa OCP to test the YOCP's limitations as $\Gamma_e$ increases.
In the classical YOCP model, the electrons are an ideal background that screens the ions.
The ions then interact through a Debye-screened potential,
\begin{equation}
\label{eq:v-yuk}
v^\textsc{y}_{ii}(r) = \frac{e^2}{r} e^{-\kappa_e r}~,
\end{equation}
where $\kappa_e=\sqrt{3\Gamma_e}a_e^{-1}$ is the inverse electron Debye length.
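In ion-sphere units ($e = a_e = 1$), the YOCP interaction of Eq.~\eqref{eq:v-yuk} and its screening parameter reduce to a two-line sketch:

```python
import math

# Yukawa ion-ion potential of Eq. (v-yuk) with kappa_e = sqrt(3*Gamma_e)/a_e;
# reduced units e = a_e = 1 (illustrative only).
def kappa_e(Gamma_e, a_e=1.0):
    return math.sqrt(3.0 * Gamma_e) / a_e

def v_yukawa(r, Gamma_e, e=1.0):
    return e**2 / r * math.exp(-kappa_e(Gamma_e) * r)
```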
The YOCP model is valid only when the screening background is weakly coupled, while the SVT model predicts accurate ion-ion RDFs even when $\Gamma_e$ exceeds unity.
By comparing the YOCP ion-ion RDF $g_\textsc{y}(r)$ with $g_{ii}(r)$ from two-temperature SVT calculations, we can quantitatively assess at what $\Gamma_e$ the YOCP model fails.
A result of the weak-coupling approximation is that $\kappa_e$ does not depend on the sign of the electron charge.
For this reason, the weak-coupling assumption of the YOCP can be tested using positively charged electrons in the SVT calculations; however, an important caveat must be made.
As the electron coupling strength increases, the nature of how they screen the ions is expected to become increasingly dependent on the sign of their charge.
It is reasonable to expect, though, that the $\Gamma_e$ at which the exponential screening approximation fails is about the same value at which the sign of the electron charge becomes important, since they are both tied to the weak-coupling assumption.
We expect, then, that the $\Gamma_e$ threshold reported here should not strongly depend on the use of positively charged electrons.
For a given $\Gamma_i$, we solve the HNC-SVT-OZ equations for $g_{ii}(r)$ at several $\Gamma_e$ and solve the ordinary HNC-OZ equations for $g_\textsc{y}(r)$ at several $\kappa_e$.
For each $\Gamma_e$, the best-fit $\kappa_e$ was chosen to be the one that minimizes the integrated absolute difference between $g_\textsc{y}$ and $g_{ii}$ from HNC,
\begin{equation}
\label{eq:fit}
\Delta = \int d\vec r |g_\textsc{y}(r;\kappa_e) - g_{ii}(r)|~.
\end{equation}
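A sketch of this fitting procedure follows, using a weak-coupling RDF as a stand-in for the HNC solutions; the grids, coupling strength, and target screening parameter are all illustrative.

```python
import numpy as np

# Sketch of the fit in Eq. (fit): pick the kappa that minimizes the volume-
# integrated |g_Y - g_ii|.  A weak-coupling RDF stands in for the HNC results;
# the fit recovers the kappa used to generate the target.
r = np.linspace(0.05, 15.0, 600)
w = 4.0 * np.pi * r**2 * (r[1] - r[0])       # d^3r weight on the radial grid

def g_weak(kappa, Gamma=0.5):
    return np.exp(-Gamma * np.exp(-kappa * r) / r)   # weak-coupling RDF form

g_target = g_weak(kappa=1.3)
grid = np.linspace(0.5, 2.5, 201)
delta = [np.sum(w * np.abs(g_weak(kap) - g_target)) for kap in grid]
kappa_fit = grid[int(np.argmin(delta))]
```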
Figure~\ref{fig:yocp-fit} shows the best-fit YOCP $\kappa_e$ over a wide range in $\Gamma_i$ and $\Gamma_e$ with the mass ratio fixed at $m_i=1836m_e$.
Immediately, one sees that when the electrons are weakly coupled, the best-fit $\kappa_e$ is independent of the ion coupling strength and furthermore is essentially the inverse electron Debye length, plotted in black in the figure.
The reason becomes clear upon investigating the SVT-OZ equations at weak electron coupling.
In the limit of weak electron coupling, the Debye-H\"uckel approximation should be excellent for the electron-electron direct correlation function.
Since $T_{ei}\approx T_e$, the same should be true of the electron-ion direct correlation function, giving
\begin{equation}
\label{eq:c-yocp}
\hat c_{ee}(k) \approx Z_i^{-1}\hat c_{ei}(k) \approx -\frac{4\pi e^2}{k_{\textsc b}T_e}\frac{1}{k^2}~.
\end{equation}
Due to the large mass ratio, the SVT-OZ equations from Eq.~\eqref{eq:svt-oz} become
\begin{subequations}
\label{eq:svt-masslim}
\begin{align}
& \hat h_{ii} = \hat c_{ii} + n_i \hat h_{ii} \hat c_{ii} + n_e \frac{T_e}{T_i} \hat h_{ei} \hat c_{ei}
\\
& \hat h_{ei} = \hat c_{ei} + n_i \hat h_{ii} \hat c_{ei} + n_e \hat h_{ei} \hat c_{ee}
\\
& \hat h_{ee} = \hat c_{ee} + n_i \hat h_{ei} \hat c_{ei} + n_e \hat h_{ee} \hat c_{ee}~.
\end{align}
\end{subequations}
Since $\hat c_{ei}$ and $\hat c_{ee}$ are known, $h_{ei}$ can be eliminated from the first equation to find
\begin{equation}
\label{eq:oz-scr}
\hat h_{ii} =
\left(
\hat c_{ii} + \frac{n_e\hat c_{ee}}{1 - n_e\hat c_{ee}} \frac{T_e}{T_i} \hat c_{ei}
\right)
\left(
1 + n_i \hat h_{ii}
\right)~.
\end{equation}
If we introduce the notion of the ``screened'' ion-ion direct correlation function
\begin{equation}
\label{eq:c-scr}
\hat c_{\mathrm{scr}}
= \hat c_{ii} + \frac{n_e\hat c_{ee}}{1 - n_e\hat c_{ee}} \frac{T_e}{T_i} \hat c_{ei}~,
\end{equation}
then the ion structure factor is given by
\begin{equation}
\label{eq:svt-dh-sii}
S_{ii}(k) = \frac{1}{1 - n_i\hat c_\mathrm{scr}(k)}~,
\end{equation}
meaning that $\hat c_\mathrm{scr}$ mediates a one-to-one mapping between the ion structure of the two-component plasma and that of an equivalent screened one-component plasma.
In the Debye-H\"uckel approximation for the electrons,
\begin{equation}
\label{eq:c-scr-dh}
n_i \hat c_\mathrm{scr} = n_i \hat c_{ii} + \frac{T_i}{T_e} \frac{1}{\lambda_{Di}^2k^2}\frac{1}{1 + \lambda_{De}^2k^2}~.
\end{equation}
Now if we decompose $\hat c_{ii}(k)$ into its singular Coulombic part and a remainder $\hat c^{R}_{ii}=\hat c_{ii} + \hat v_{ii}/k_{\textsc b}T_i$ that is regular as $k \to 0$~\cite{Baus19801,BausJPA1978}, we find
\begin{equation}
n_i \hat c_\mathrm{scr}(k) = n_i\hat c_{ii}^R(k) - \frac{\lambda_{De}^2}{\lambda_{Di}^2} \frac{1}{1 + \lambda_{De}^2k^2}~.
\end{equation}
Thus the long wavelength limit of the ion structure factor is
\begin{equation}
\lim_{k\to 0} S_{ii}(k) = \frac{1}{1 - n_i\hat c_{ii}^{R}(0) + (\lambda_{De}/\lambda_{Di})^2}~.
\end{equation}
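As a numerical illustration of this limit, one can evaluate Eq.~\eqref{eq:svt-dh-sii} with the Debye-H\"uckel $\hat c_\mathrm{scr}$, taking $\hat c_{ii}^R \approx 0$ for very weakly coupled ions; the screening lengths below are arbitrary.

```python
import numpy as np

# Long-wavelength behavior of Eq. (svt-dh-sii) with the Debye-Hueckel c_scr,
# taking c_ii^R ~ 0 (very weakly coupled ions); screening lengths are arbitrary.
lam_Di, lam_De = 1.0, 0.5
k = np.linspace(1e-4, 10.0, 1000)
n_c_scr = -(lam_De / lam_Di) ** 2 / (1.0 + (lam_De * k) ** 2)
S_ii = 1.0 / (1.0 - n_c_scr)
S_ii_0 = 1.0 / (1.0 + (lam_De / lam_Di) ** 2)   # predicted k -> 0 limit
```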
Repeating these steps using the ordinary OZ equations results in Eq.~\eqref{eq:c-scr-dh}, but without the factor of $T_i/T_e$, which causes $\hat c_\mathrm{scr}$ to remain singular.
This is the reason why the MASS model structure factor is zero in the $k \to 0$ limit, while the same limit in the SVT model is YOCP-like (nonzero).
In passing, it is interesting to note that inserting Eqs.~\eqref{eq:svt-dh-sii} and~\eqref{eq:c-scr-dh} back into Eq.~\eqref{eq:svt-masslim} gives the same structure factors found by Boercker and More~\cite{BoerckerPRA1986}.
Figure~\ref{fig:yocp-fit-gr-cmp} demonstrates the breakdown of YOCP behavior when the electrons become strongly coupled.
Interestingly, even when $\Gamma_e \simeq 1$, both the fitted YOCP model and the Debye-H\"uckel model are in fair agreement with the full two-component SVT calculation.
However, further increases to $\Gamma_e$ result in ion-ion RDFs that rapidly become non-YOCP-like; even the fitted YOCP underpredicts the ion-ion correlation strength.
In other words, the mapping between the two-component system and effective one-component system given by Eq.~\eqref{eq:c-scr} can no longer be reproduced by an effective Yukawa potential.
\begin{figure}
\includegraphics[width=\columnwidth]{kp_fit_gme_dh_cbar.pdf}
\caption{Inverse screening length of the YOCP whose $g_\textsc{y}(r)$ (pink) best matches the SVT $g_{ii}(r)$ at the same ion coupling strength. The fit criterion is given by Eq.~\eqref{eq:fit}. Multiple points at the same $\Gamma_e$ are for different values of $\Gamma_i$.}
\label{fig:yocp-fit}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{{yocp_fit_cmp_gmi_11.3}.pdf}
\caption{Comparison of ion-ion RDFs obtained from SVT (solid red), YOCP with fitted $\kappa_e$ (dashed blue), and YOCP with $\kappa_e$ equal to the inverse electron Debye length (dotted black).}
\label{fig:yocp-fit-gr-cmp}
\end{figure}
\section{Conclusions }
\label{sec:conc}
By comparison with molecular dynamics simulations, it has been demonstrated that the model proposed by Seuferling, Vogel, and Toeppfer~\cite{SeuferlingPRA1989} accurately extends the Ornstein-Zernike theory of pair correlations to two-temperature plasmas up to and slightly beyond the coupling strengths achieved by present-day ultracold neutral plasma experiments.
The assumption of a mass-weighted ``cross-temperature'' correctly predicts the suppression of electron-ion correlations when the mass ratio is large.
Further, we have shown that the modifications made by SVT to the Ornstein-Zernike equations are necessary to give nonzero long-wavelength limits of the static structure factors, which correctly reflects the self-consistent screening of ions by electrons and vice-versa.
These findings are given additional weight by our direct comparisons of the ion-ion correlation functions in the SVT and Yukawa OCP models, which indicate that the Yukawa OCP model will become unsuitable even for modeling ion correlations once $\Gamma_e\gtrsim 1$.
The present work marks important progress towards a fully two-component description of correlations in classical strongly coupled plasmas.
In particular, it suggests that the SVT model can be used to obtain accurate effective scattering potentials or static local field corrections needed in quasi-static descriptions of transport and relaxation processes of strongly coupled plasmas\cite{BaalrudPRL2013,IchimaruPRA1985,DharmaWardanaPRE1998,DaligaultPRE2009,VorbergerPRE2010,BenedictPRE2017}.
However, there remain interesting physical challenges to overcome.
Future work will address the issue of the electron charge, which was taken to be positive in this work to decouple the relevant two-temperature physics from the physics of classical recombination.
There is also the question of how to best simulate a two-temperature steady state, both in terms of technical choices regarding thermostats and in terms of the basic statistical mechanics of the simulated ensemble.
\section*{Supplementary Material}
See supplementary material for plots of the radial distribution functions and static structure factors for all coupling strengths and mass ratios investigated in this work.
\begin{acknowledgments}
This material is based upon work supported by the National Science Foundation under Grant No.~PHY-1453736 and by the Air Force Office of Scientific Research under award number FA9550-16-1-0221.
It used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by NSF Grant No.~ACI-1053575, under Project Award No.~PHY-150018.
\end{acknowledgments}
\section{Introduction}
In recent years, the higher-order cumulants of net-particle distributions
have been actively measured in heavy-ion collision experiments to study
QCD phase structure~\cite{bookKoch,Asakawa:2015ybt,Luo:2017faz,Bzdak:2019pkr}.
Higher-order cumulants ($C_{r}$, $r\leq6$) and their ratios of net-particle distributions
were reported by ALICE~\cite{Arslandok:2020mda}, HADES~\cite{Adamczewski-Musch:2020slf},
NA61/SHINE~\cite{Mackowiak-Pawlowska:2020glz} and STAR experiments~\cite{Aggarwal:2010wy,net_proton,net_charge,Adamczyk:2017wsl,Nonaka:2020crv,Adam:2020unf}.
In particular, the non-monotonic beam energy dependence of net-proton $C_{4}/C_{2}$
observed in Au+Au central collisions at the STAR experiment~\cite{Adam:2020unf}
could indicate a possible signature of the critical point
in low collision energies.
Recently, it was pointed out that
the strong enhancement of $C_{4}/C_{2}$
of the net-proton distributions observed
in Au+Au central collisions at $\sqrt{s_{\rm NN}}=$~7.7~GeV by the STAR
experiment~\cite{Adam:2020unf}
can be explained by the superposition of binomial and Poisson
distributions~\cite{Bzdak:2018uhv,Bzdak:2018axe}. Let us refer to this as a ``bimodal'' distribution throughout this paper.
In Ref.~\cite{Adam:2020unf}, cumulants are corrected for detector efficiencies
analytically by assuming that they follow binomial distributions
~\cite{eff_kitazawa,eff_koch,eff_psd_volker,eff_xiaofeng,Nonaka:2017kko}.
As this approach does not correct the net-proton distribution itself,
a so-called unfolding approach is necessary to reconstruct the distribution
in terms of detector efficiencies numerically and check the bimodal prediction.
On the other hand, other effects such as initial volume fluctuations also need to be considered.
Various studies have been carried out to understand and correct for
initial volume fluctuations~\cite{Luo:2013bmi,Gorenstein:2011vq,Skokov:2012ds,Braun-Munzinger:2016yjz,Sombun:2017bxi,Rogly:2018ddx,Broniowski:2017tjq,Sugiura:2019toh}, but there has been no method to remove the volume fluctuations from the distributions.
The purpose of the present study is to present a new method to reconstruct the distribution
by addressing both volume fluctuations and detector inefficiencies.
Throughout this paper, we consider
the numbers of generated particles and antiparticles,
and of measured particles and antiparticles after passing through the detectors,
denoted by ($N_{p}$,$N_{{\bar p}}$) and ($n_{p}$,$n_{{\bar p}}$), respectively.
The relation of these variables is given by
\begin{equation}
P(N_{p},N_{\bar p}) = \sum_{n_{p},n_{{\bar p}}}{\cal R}_{\rm rev}(N_{p},N_{{\bar p}};n_{p},n_{{\bar p}}){\tilde P({n_{p},n_{{\bar p}}})},
\end{equation}
where $P(N_{p},N_{\bar p})$ and $\tilde P(n_{p},n_{\bar p})$ are two-dimensional probability distribution functions
and ${\cal R}_{\rm rev}(N_{p},N_{{\bar p}};n_{p},n_{{\bar p}})$ is the
conversion matrix from the measured to generated coordinates, which we call
the ``reversed response matrix'' for the rest of this paper.
We also use the term ``detector filter'', which represents the Monte-Carlo procedure to
determine ($n_{p}$,$n_{{\bar p}}$) from a given ($N_{p}$,$N_{{\bar p}}$).
In this paper, the detector filter has efficiencies following the binomial distribution for simplicity:
\begin{equation}
B(n;\varepsilon,N) = \varepsilon^{n}(1-\varepsilon)^{N-n}\frac{N!}{n!(N-n)!},
\end{equation}
where $n$ and $N$ are measured and generated particles, and $\varepsilon$ is the detector efficiency.
Efficiencies for particles and antiparticles will be denoted separately by $\varepsilon_{p}$
and $\varepsilon_{\bar p}$, and are assumed to be independent.
Note that any efficiency including non-binomial distribution~\cite{binomial_breaking,Nonaka:2018mgw} can be properly taken into account in our
unfolding approach as will be briefly discussed in the following section.
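As a concrete illustration, the binomial detector filter $B(n;\varepsilon,N)$ amounts to an event-by-event binomial draw. The following sketch (with NumPy; the Poisson means and the efficiencies $\varepsilon_{p}=0.8$, $\varepsilon_{\bar p}=0.6$ are the illustrative values used for the toy models below) shows how measured counts are obtained from generated ones:

```python
import numpy as np

rng = np.random.default_rng(1)
n_events = 200_000

# Generated (anti)particle numbers per event (illustrative Poissonians).
N_p = rng.poisson(12.0, n_events)
N_pbar = rng.poisson(9.0, n_events)

# Binomial detector filter: each particle is detected with probability eps.
eps_p, eps_pbar = 0.8, 0.6
n_p = rng.binomial(N_p, eps_p)
n_pbar = rng.binomial(N_pbar, eps_pbar)
```

The measured means are reduced to $\varepsilon_{p}\langle N_{p}\rangle$ and $\varepsilon_{\bar p}\langle N_{\bar p}\rangle$; for a Poissonian input the thinned output is again Poissonian, but the full two-dimensional distribution carries the information needed for the unfolding.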
The paper is organized as follows.
In Sec.~\ref{sec:sec2}, we introduce procedures to reconstruct
the particle and antiparticle number distributions in terms of the detector efficiencies.
The method is demonstrated in toy models for both extreme and realistic cases.
In Sec.~\ref{sec:sec3}, we discuss how to implement and correct
for initial volume fluctuations as well as detector efficiencies.
\section{Particle number unfolding\label{sec:sec2}}
In this section, we demonstrate the approach of particle number unfolding
through Monte-Carlo simulations, followed by brief discussions on
the smoothing and scaling of the response matrices.
\subsection{Methodology\label{sec:critical}}
Figure~\ref{fig:flowchart} shows a flowchart of the unfolding procedure.
As shown in the left half, the measured distributions in the real experiments
are modified by detectors and hence distorted from the true distributions.
Our goal is to obtain this unknown distribution.
The toy models presented in the rest of this section utilize two sets of data.
The first one is the distribution corresponding to experiments,
which is indicated by ``Toy-Experiment'' in Fig.~\ref{fig:flowchart}.
The other one is the virtual (simulation) distribution indicated by ``Toy-MC''.
Figure~\ref{fig:flowchart_pic} shows the distributions of
($N_{p}$,$N_{\bar p}$) or ($n_{p}$,$n_{\bar p}$) at various steps of the unfolding approach,
corresponding to (a)--(f) indicated in the boxes in Fig.~\ref{fig:flowchart}.
Let us explain the unfolding procedures based on Figs.~\ref{fig:flowchart}
and \ref{fig:flowchart_pic}.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=160mm]{FlowChart.pdf}
\end{center}
\caption{
Flowcharts of the unfolding approach. The left-hand side corresponds to the real experiments,
and the right-hand side
represents the toy models discussed in Sec.~\ref{sec:sec2}.
The dotted arrows show the procedures repeated for iterations.
The letters shown in the boxes correspond to those in Fig.~\ref{fig:flowchart_pic}.
Numbers in the square brackets are the same as the bullet numbers for the
toy-MC procedures in Sec.~\ref{sec:sec2}.
}
\label{fig:flowchart}
\end{figure*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=160mm]{Fig1.pdf}
\end{center}
\caption{
Distributions of particle and antiparticle number
at various steps in the unfolding procedures:
(a) generated distribution for toy-experiment,
(b) measured distribution for toy-experiment,
(c) generated distribution for toy-MC,
(d) measured distribution for toy-MC,
(e) correction function in the generated coordinates, and
(f) correction function in the measured coordinates.
All distributions are normalized.
Values of the mean and standard deviation along the x and y axes are shown in the box.
White-colored bins in panel (b) represent negative values.
}
\label{fig:flowchart_pic}
\end{figure*}
First, we generate the toy-experiment true distribution (a)
with a critical shape.
Poisson distributions are generated for $P(N_{p})$,
and for $P(N_{{\bar p}})$ with $N_{p}>10$,
while Gaussian distributions are generated for $P(N_{{\bar p}})$ with $N_{p}<10$.
The detector filter with $\varepsilon_{p}=0.8$ and $\varepsilon_{\bar p}=0.6$
is applied to get the toy-experiment measured distribution in (b).
The rest of the procedure consists of iterations repeated many times,
which are shown by the dotted arrows in Fig.~\ref{fig:flowchart}.
These procedures are explained as follows;
the bullet numbers correspond to the numbers in square brackets in Fig.~\ref{fig:flowchart}.
\begin{description}
\item[1] Generate a toy-MC distribution, according to Poisson distributions
with mean values of 12 and 9 for particles and antiparticles, respectively.
\item[2] The detector filter is applied to (c) to get toy-MC measured distributions as shown in (d).
\item[3] During the MC process from \textbf{1} to \textbf{2}, we compute the reversed response matrices,
${\cal R}_{\rm rev}$, numerically without any inversion procedure.
Some examples of the response matrices are shown in Fig.~\ref{fig:RmRev}.
Each panel shows the probability distributions of ($N_{p}$,$N_{\bar p}$)
for the fixed ($n_{p}$,$n_{\bar p}$), which can be directly computed in the MC process.
\item[4] The correction function is determined by subtracting (d) from (b). See (f).
It represents the difference between toy-experiment and
toy-MC in the measured coordinates.
\item[5] We then multiply ${\cal R}_{\rm rev}$ to (f) to get (e) the correction
functions in the generated coordinates.
It should be noted that
smoothing is applied to the correction functions to make the resulting distribution smooth.
Further, we multiply a scaling factor $\alpha_{\rm sc}<1$ to (e) before it is added to (c)
in order to avoid possible negative content of bins.
Details on those parameters will be discussed in the following subsections.
\item[6] By adding (e) to (c), the toy-MC distribution is modified to be closer to (a).
\item[7] We repeat (1)--(6) until cumulants of the toy-MC net-particle distribution converge.
Reversed response matrices are updated for each iteration.
\end{description}
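To make the steps above concrete, the following one-dimensional sketch (a single particle species, binned probability vectors instead of event samples) runs the same iteration. Here the reversed response matrix is built analytically from the binomial filter rather than estimated by Monte Carlo, and the two-component truth, the Poisson prior, and the scaling factor are illustrative choices of ours:

```python
import numpy as np
from scipy.stats import binom, poisson

Nmax, eps, alpha_sc = 40, 0.7, 0.5
N = np.arange(Nmax + 1)

# Forward binomial response: B[n, Ng] = P(measure n | generated Ng).
B = binom.pmf(N[:, None], N[None, :], eps)

# Toy-experiment truth (an illustrative two-component stand-in for the
# critical shape) and its measured image.
p_true = 0.5 * poisson.pmf(N, 6.0) + 0.5 * poisson.pmf(N, 18.0)
p_meas = B @ p_true

# Step 1: toy-MC prior, a single Poissonian.
p_mc = poisson.pmf(N, 12.0)

for _ in range(1000):
    joint = B * p_mc[None, :]                  # P(n, Ng) of the current toy-MC
    marg = joint.sum(axis=1)                   # step 2: toy-MC measured distribution
    safe = np.where(marg > 0, marg, 1.0)
    R_rev = joint / safe[:, None]              # step 3: reversed response P(Ng | n)
    corr_meas = p_meas - marg                  # step 4: correction in measured coords
    corr_gen = R_rev.T @ corr_meas             # step 5: mapped back with R_rev
    p_mc = np.clip(p_mc + alpha_sc * corr_gen, 0.0, None)  # step 6: scaled update
    p_mc /= p_mc.sum()                         # keep a normalized distribution
```

After iterating, the toy-MC vector reproduces the two-component shape of the truth, mirroring the convergence seen in the figures below.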
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=130mm]{RmRev.pdf}
\end{center}
\caption{
Reversed response matrices, ${\cal R}_{\rm rev}(N_{p},N_{{\bar p}};n_{p},n_{{\bar p}})$,
with respect to ($N_{p}$,$N_{{\bar p}}$) for fixed $(n_{p},n_{{\bar p}})$
for the 1st iteration in Sec.~\ref{sec:sec2}. All distributions are normalized to unity.
}
\label{fig:RmRev}
\end{figure*}
Figure~\ref{fig:itr_change} shows three kinds of distributions from top to bottom.
The rightmost panels in the middle and bottom rows show the toy-experiment and
its net-particle distributions in the generated coordinates,
and the 1st to 4th panels from the left
represent the toy-MC distributions at the 0th (initial condition), 1st, 10th and 100th iterations.
The top row in Fig.~\ref{fig:itr_change} shows the correction functions
in the generated coordinates.
The correction function is seen to flatten with increasing iterations, which indicates that the toy-MC
distribution approaches the toy-experiment distribution
in the generated coordinates.
The middle row shows the toy-MC distribution in the generated coordinates.
The critical shape of the toy-experiment distribution
is found to be successfully reconstructed starting from the Poisson distribution in toy-MC samples.
The bottom row shows the toy-MC net-particle distribution.
The two-peak structure in toy-experiment samples is
reproduced in the toy-MC distribution.
Figure~\ref{fig:fig3} shows cumulants up to the fourth-order
of the toy-MC net-particle distribution in the generated coordinates
as a function of iteration.
To see the validity of the unfolding approach, 100 independent samples are generated for
both toy-experiment and toy-MC samples.
The averaged values of cumulants are shown in black solid lines,
and the bands show the statistical uncertainties.
Red boxes show the cumulants of the toy-experiment
distribution in the generated coordinates.
It is found that the cumulants become flat with increasing iterations
and consistent with those of the toy-experiment distribution within statistical uncertainties,
which indicates that our unfolding approach works well.
Note that the somewhat larger statistical uncertainties of the toy-MC results
compared to those of the toy-experiment are due to the efficiency loss in the detector filter.
We note that any efficiency assumptions can be properly taken into account
in the unfolding approach. What matters most is whether the detector filters
of the toy-experiment and toy-MC samples are close enough to each other.
If this is the case, the unfolding approach works, as is evident from Fig.~\ref{fig:flowchart}.
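The cumulants plotted in Fig.~\ref{fig:fig3} can be estimated from event samples via central moments, $C_{1}=\langle x\rangle$, $C_{2}=\mu_{2}$, $C_{3}=\mu_{3}$, and $C_{4}=\mu_{4}-3\mu_{2}^{2}$. A minimal sketch (large-sample estimators, with an illustrative Skellam input, since the net number of two independent Poissonians is Skellam distributed with $\kappa_{r}=\mu_{p}+(-1)^{r}\mu_{\bar p}$):

```python
import numpy as np

def cumulants(x):
    """C1..C4 from central moments (large-sample estimators)."""
    x = np.asarray(x, dtype=float)
    c1 = x.mean()
    d = x - c1
    m2, m3, m4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
    return c1, m2, m3, m4 - 3.0 * m2**2

rng = np.random.default_rng(2)
# Net number of two independent Poissonians (means 12 and 9).
dN = rng.poisson(12.0, 1_000_000) - rng.poisson(9.0, 1_000_000)
c1, c2, c3, c4 = cumulants(dN)
# Expected cumulants: C1 = 3, C2 = 21, C3 = 3, C4 = 21.
```

The same estimators, applied to the unfolded toy-MC net-particle samples, give the curves shown in Fig.~\ref{fig:fig3}.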
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=180mm]{Fig2.pdf}
\end{center}
\caption{
(Top) Correction functions in the generated coordinates.
White-colored bins represent the large negative value outside the z-axis range.
(Middle) Toy-MC distributions in the generated coordinates.
(Bottom) Toy-MC net-particle distributions in the generated coordinates.
The 1st to 4th panels from left to right show distributions
at the 0th (initial condition), 1st, 10th and 100th iterations.
The rightmost panels show distributions for the toy-experiment sample.
}
\label{fig:itr_change}
\end{figure*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=170mm]{fig3.pdf}
\end{center}
\caption{
Cumulants up to the 4th-order as a function of iteration.
Solid lines and bands show the averaged values and $\pm1\sigma$
for 100 independent trials.
The box drawn at $x\approx1500$ shows the true values of the cumulants
with $\pm1\sigma$ for the toy-experiment sample in the generated coordinates.
}
\label{fig:fig3}
\end{figure*}
\subsection{Smoothing of response matrices}
The smoothing parameter is used to ensure that the resulting distributions are smooth.
Since our focus is on the shape of the distribution itself, it is not appropriate
to smooth the toy-MC distribution directly. Instead, we smooth the correction function:
\begin{eqnarray}
P'_{\rm cf}(N_{p},N_{\bar p}) &=&
\cfrac {\sum_{x=N_{p}-5}^{N_{p}+5}\sum_{y=N_{\bar p}-5}^{N_{\bar p}+5}w(r,\alpha_{\rm sm})P_{\rm cf}(x,y)}{\sum_{x=N_{p}-5}^{N_{p}+5}\sum_{y=N_{\bar p}-5}^{N_{\bar p}+5}w(r,\alpha_{\rm sm})},
\end{eqnarray}
where $P_{\rm cf}(N_{p},N_{\bar p})$ and $P'_{\rm cf}(N_{p},N_{\bar p})$ are correction functions
at the generated coordinates before and after the smoothing, and $w(r,\alpha_{\rm sm})$ is
the weight function defined as a two-dimensional Gaussian centered at ($N_{p}$,$N_{\bar p}$).
The $w(r,\alpha_{\rm sm})$ is characterized by $r=\sqrt{(x-N_{p})^{2}+(y-N_{\bar p})^{2}}$
and the standard deviation $\alpha_{\rm sm}$. Here, $x$ and $y$ run over an $11\times 11$ region of
$\pm5$ neighboring bins around the central bin ($N_{p}$,$N_{\bar p}$),
which corresponds to (0,0) in the top row in Fig.~\ref{fig:sm}.
The correction factor is determined by averaging the surrounding bins,
which leads to a smooth distribution after the correction.
A larger value of $\alpha_{\rm sm}$ gives more flattened correction functions.
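A literal (if slow) implementation of the smoothing prescription for $P'_{\rm cf}(N_{p},N_{\bar p})$, with the window truncated at the histogram boundaries, reads as follows (a sketch; the function name is ours):

```python
import numpy as np

def smooth_correction(P, alpha_sm, half=5):
    """Gaussian-weighted average over a truncated (2*half+1)^2 window."""
    if alpha_sm == 0:
        return P.copy()
    ny, nx = P.shape
    out = np.empty_like(P, dtype=float)
    for a in range(ny):
        for b in range(nx):
            num = den = 0.0
            for x in range(max(0, a - half), min(ny, a + half + 1)):
                for y in range(max(0, b - half), min(nx, b + half + 1)):
                    w = np.exp(-((x - a)**2 + (y - b)**2) / (2.0 * alpha_sm**2))
                    num += w * P[x, y]
                    den += w
            out[a, b] = num / den
    return out
```

Because each bin is replaced by a normalized weighted average, a constant field is left unchanged, while sharp features are spread over roughly $\alpha_{\rm sm}$ bins.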
Figure~\ref{fig:sm} shows the smoothing functions, correction functions,
toy-MC distributions, and toy-MC net-particle distributions,
after substantial iterations for $\alpha_{\rm sm}=0$, $0.2$, $0.5$, $1.0$ and $2.0$.
The correction functions are more smeared with larger smoothing parameters,
leading to smoother toy-MC distributions.
This is also visible in toy-MC net-particle distributions,
where the dip structure around $N_{p}-N_{\bar p}\approx0$
is found to be smeared and raised relative to the two peaks for larger values of $\alpha_{\rm sm}$.
Figure~\ref{fig:smitr} shows the cumulants up to the 4th order as a function of iteration
with different smoothing parameters.
Although the convergence behavior seems to change with different smoothing parameters, the cumulant values are consistent
for all cases after 1000 iterations within statistical uncertainty.
This indicates that the smoothing process does not affect the final cumulant results,
but we recommend trying several smoothing parameters to test the stability of the
final results.
\subsection{Scaling of response matrices}
Another parameter for scaling is introduced to avoid negative content in the
toy-MC distribution after correction functions are applied.
As one can see from Fig.~\ref{fig:flowchart_pic}-(e) and (f), the correction functions have
some negative values shown in blue or white-colored bins. Negative values can thus
appear in the resulting toy-MC distributions in Fig.~\ref{fig:flowchart_pic}-(c).
The negative values in the toy-MC distributions affect the MC process
in the next iteration.
The detector filter cannot be applied to bins with a negative number
of events; we therefore artificially set such bins to zero.
In order to avoid such special treatment as much as possible, the correction functions are scaled down by the
parameter $\alpha_{\rm sc}<1.0$.
Figure~\ref{fig:sc} shows the cumulants up to the 4th order as a function of iteration
with different scaling parameters. It is found that the scaling parameter controls the
convergence speed, but the cumulant values are consistent within uncertainties
after substantial iterations.
Therefore, the scaling process does not greatly affect the final results.
A large scaling value is better if one wants to save calculation cost.
However, even then we advise checking the consistency of the final results with several
scaling parameters.
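The scaled update with the negative-bin guard can be sketched in a few lines (the explicit renormalization is our addition, to keep the histogram a probability distribution; the function name is ours):

```python
import numpy as np

def apply_correction(p_mc, corr_gen, alpha_sc=0.6):
    """Scaled update of the toy-MC distribution with the negative-bin guard."""
    updated = p_mc + alpha_sc * corr_gen
    updated = np.clip(updated, 0.0, None)  # artificially zero negative bins
    return updated / updated.sum()         # renormalize (our addition)
```

A smaller $\alpha_{\rm sc}$ makes the clipping step fire less often, at the price of more iterations to converge.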
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=180mm]{sm.pdf}
\end{center}
\caption{
(1st column) Weight function for different smoothing parameters $\alpha_{\rm sm}$.
The distribution is normalized so that the value at $(N_{p},N_{{\bar p}})=(0,0)$ becomes unity.
(2nd column) Correction functions in the generated coordinates, $P'_{\rm cf}(N_{p},N_{\bar p})$,
for different smoothing parameters.
White-colored bins represent large negative values outside the z-axis range.
(3rd column) Particle number distributions in the generated coordinate for toy-MC samples
at the $80$th iteration.
(4th column) Net-particle distributions in the generated coordinates for toy-MC samples
at the $80$th iteration.
The leftmost panels show the distributions without smoothing ($\alpha_{\rm sm}=0$).
}
\label{fig:sm}
\end{figure*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=170mm]{smitr.pdf}
\end{center}
\caption{
Influence of smoothing on cumulants up to the 4th-order as a function of iteration.
Solid lines and bands show the averaged value and $\pm1\sigma$ of the statistical uncertainties
for 100 independent trials.
Different band shadings represent results from different smoothing parameters $\alpha_{\rm sm}$.
The box drawn at $x\approx1500$ shows the true values with $\pm1\sigma$
for the toy-experiment samples in the generated coordinates.
}
\label{fig:smitr}
\end{figure*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=170mm]{sc.pdf}
\end{center}
\caption{
Influence of scaling on cumulants up to the 4th-order as a function of iteration.
Solid lines and bands show the averaged value and $\pm1\sigma$ of the statistical uncertainties
for 100 independent trials.
Different band shadings represent results from different scaling parameters $\alpha_{\rm sc}$.
The box drawn at $x\approx1500$ shows the true values with $\pm1\sigma$
for the toy-experiment samples in the generated coordinates.
}
\label{fig:sc}
\end{figure*}
\subsection{Application to bimodal distribution}
In order to check the sensitivity of the unfolding approach in a more realistic case,
we apply another model assuming a bimodal distribution
for the toy-experiment samples.
According to Ref.~\cite{Bzdak:2018uhv}, the large value of $C_{4}/C_{2}$ observed by the
STAR experiment in Au+Au central collisions at $\sqrt{s_{\rm NN}}=7.7$~GeV
can be described by the superposition of binomial and Poisson distributions:
\begin{eqnarray}
P_{ab}(N) &=& (1-w)P_{a}(N) + wP_{b}(N), \\
P_{a}(N) &=& \cfrac{B!}{N!(B-N)!}\varepsilon^{N}(1-\varepsilon)^{B-N},\;\;
P_{b}(N) = \cfrac{\lambda^{N}e^{-\lambda}}{N!}
\end{eqnarray}
where $P_{a}$ and $P_{b}$ are the binomial and Poisson distributions,
with $w=0.0033$, $B=350$, $\varepsilon=0.114$ and $\lambda=25.3$.
We generate the toy-experiment distribution according to $P_{ab}(N)$ for particles,
and another Poisson distribution for antiparticles with the mean value
being $0.3$ taken from Ref.~\cite{Adam:2020unf}.
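Sampling the two-component distribution $P_{ab}(N)$ is straightforward: each event is drawn from the Poisson component with probability $w$ and from the binomial component otherwise. A sketch with the quoted parameters (note that the mixture mean, $(1-w)B\varepsilon+w\lambda\approx39.85$, minus the antiparticle mean of $0.3$ is consistent with $C_{1}$ in Table~\ref{tab:bimodal}):

```python
import numpy as np

rng = np.random.default_rng(3)
w, B, eps, lam = 0.0033, 350, 0.114, 25.3

def sample_bimodal(size):
    """Draw from (1 - w) * Binomial(B, eps) + w * Poisson(lam)."""
    use_poisson = rng.random(size) < w
    n = rng.binomial(B, eps, size=size)
    n[use_poisson] = rng.poisson(lam, size=use_poisson.sum())
    return n

N_p = sample_bimodal(150_000)
# The mixture mean is (1 - w) * B * eps + w * lam, roughly 39.85.
```

With only 150\,k events, the Poisson component contributes about 500 events, which is why the unfolded bump carries sizable statistical uncertainties.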
The same detector filter described in Sec.~\ref{sec:critical} is used.
The scaling and smoothing parameters are chosen to be $\alpha_{\rm sc}=0.6$ and $\alpha_{\rm sm}=0$.
For the toy-experiment samples, 150~k events are generated, while 1.5~M events
are generated for the toy-MC distribution. One hundred independent samples are generated to check statistical uncertainties.
Results of the cumulants up to the 4th order are shown in Tab.~\ref{tab:bimodal} for toy-experiment and unfolded toy-MC
distribution after 1000 iterations.
We checked that the cumulants converge reasonably after these iterations.
Cumulant values for the unfolded toy-MC distribution are found to be consistent with those from
the toy-experiment distribution.
\begin{table}[htb]
\begin{tabular}{ccc} \hline
Cumulant & Toy-Experiment$\pm$stat.err & Toy-MC (Unfolded)$\pm$stat.err \\ \hline
$C_{1}$ & 39.554 $\pm$ 0.014 & 39.555 $\pm$ 0.017 \\ \hline
$C_{2}$ & 36.33$\pm$0.15 & 36.32$\pm$0.18 \\ \hline
$C_{3}$ & 18.4$\pm$1.6 & 18.3$\pm$2.2 \\ \hline
$C_{4}$ & 118 $\pm$ 22 & 112 $\pm$ 33 \\ \hline
\end{tabular}
\caption{Cumulants up to the 4th-order for the bimodal toy-experiment
distribution and toy-MC distribution after 1000 iterations (see Fig.~\ref{fig:bimodal}).}
\label{tab:bimodal}
\end{table}
Corresponding net-particle distributions for toy-experiment and unfolded toy-MC samples
are shown in Fig.~\ref{fig:bimodal}.
The binomial and Poisson distributions forming the toy-experiment distributions are also shown.
The data points for the unfolded distribution were calculated by averaging the results from 100
independent samples, while the statistical uncertainties are for one sample with 150k events.
It is demonstrated that the toy-MC distribution is successfully unfolded,
recovering the small bump on the low side of the distribution, with
15M and 150M effective events for the toy-experiment and toy-MC samples, respectively.
Note that the data points would be expected to fluctuate, as indicated by the
large statistical uncertainties in Fig.~\ref{fig:bimodal},
in the case of only 150k events for the toy-experiment sample.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=90mm]{bimodal.pdf}
\end{center}
\caption{
Net-particle distributions for toy-experiment (black) and unfolded toy-MC (red) samples.
The binomial and Poisson distributions which form the toy-experiment distribution
are shown in shaded histograms.
}
\label{fig:bimodal}
\end{figure*}
\section{Volume fluctuation convolved unfolding\label{sec:sec3}}
In this section, we expand the particle number unfolding discussed in Sec.~\ref{sec:sec2}
to deal with volume fluctuations.
The following subsections first define the volume fluctuations and then discuss
the methodology.
\subsection{Volume fluctuation}
The initial volume in heavy-ion collisions is characterized by
the impact parameter $b$, which is defined as the distance between the centers of the two nuclei.
We also consider the number of participant nucleons,
spectator nucleons and binary collisions based on the Glauber model.
These variables are not accessible directly in experiments.
Thus, the particle production model is utilized to
produce the final state multiplicity from sources, $N_{\rm source}$,
which is usually expressed in terms of participant nucleons and binary collisions, or
a power function of the former.
The resulting multiplicity distribution is then compared to the
experimentally measured multiplicity distribution to define the centrality.
In this case, one can easily imagine that the value of $N_{\rm source}$
fluctuates even at fixed multiplicity. This is called the volume fluctuation.
Assuming that particles are produced from $N_{\rm source}$ independent sources in a fixed volume,
the true cumulants of the particle distributions are expressed
as a superposition of the cumulants of each source
\begin{equation}
C_{r}(\Delta N) = \sum_{N_{\rm source}}\kappa_{r}(\Delta m)
\end{equation}
with
\begin{equation}
\Delta N = N_{p} - N_{{\bar p}}, \;\; \Delta m = m_{p} - m_{{\bar p}},
\end{equation}
where $m_{p}$ and $m_{{\bar p}}$ are the numbers of particles and antiparticles produced
per participant nucleon.
In this situation, the cumulants can be analytically corrected for volume fluctuations
~\cite{Skokov:2012ds,Braun-Munzinger:2016yjz}.
On the other hand, the particle distribution cannot be corrected in this way.
In the next subsection we explain how to implement volume fluctuations
into the particle number unfolding.
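The effect described above can be sketched numerically with Poissonian sources (the per-source means here are illustrative; the toy models of the next subsection use negative binomials, and the fluctuating $N_{\rm source}$ here is a toy Poissonian rather than a Glauber distribution). For fixed $N_{\rm source}$ the cumulants simply scale with the number of sources, while a fluctuating $N_{\rm source}$ broadens the net distribution:

```python
import numpy as np

rng = np.random.default_rng(5)

mu_p, mu_pbar = 0.4, 0.1   # illustrative per-source means
n_ev = 500_000

def volume_filter(n_source):
    """Superimpose per-source production over the sources of each event.
    A sum of n_source independent Poisson(mu) draws is Poisson(n_source * mu)."""
    N_p = rng.poisson(mu_p * n_source)
    N_pbar = rng.poisson(mu_pbar * n_source)
    return N_p - N_pbar

dN_fixed = volume_filter(np.full(n_ev, 100))        # fixed N_source
dN_fluct = volume_filter(rng.poisson(100.0, n_ev))  # toy fluctuating N_source

# Fixed volume: C2(dN) = N_source * (mu_p + mu_pbar) = 50.
# Fluctuations add roughly Var(N_source) * (mu_p - mu_pbar)^2 = 9.
```

The broadening of $C_{2}$ in the fluctuating case is precisely the contamination that the volume-fluctuation-convolved unfolding is designed to remove.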
\subsection{Methodology}
Let us start from the toy-experiment distribution in the source coordinates.
Since the approach of particle number unfolding was found to work well
for both the extreme and the realistic cases in Sec.~\ref{sec:sec2},
we focus on simple negative binomial distributions
for the source distributions~\cite{Ansorge:1988kn}.
The particle distributions per source for toy-experiment samples are generated with 100k
events based on the negative binomial distribution:
\begin{equation}
P_{\mu,k}(m) = \cfrac{\Gamma(m+k)}{\Gamma(m+1)\Gamma(k)}\cdot\cfrac{(\mu/k)^{m}}{(1+\mu/k)^{m+k}},
\end{equation}
where $m$ is the particle number per source.
Parameters are chosen to be ($\mu_{p}$,$k_{p}$)$=$(0.2,5.0) and ($\mu_{\bar p}$,$k_{\bar p}$)$=$(0.15,3.0) for
particles and antiparticles, respectively.
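For reference, the negative binomial $P_{\mu,k}(m)$ with parameters ($\mu$,$k$) maps onto NumPy's ($n$,$p$) parametrization via $n=k$ and $p=k/(k+\mu)$, giving mean $\mu$ and variance $\mu(1+\mu/k)$. A sketch with the quoted parameters:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_nbd(mu, k, size):
    """Negative binomial with mean mu and variance mu * (1 + mu / k);
    NumPy's (n, p) parametrization corresponds to n = k, p = k / (k + mu)."""
    return rng.negative_binomial(k, k / (k + mu), size=size)

m_p = sample_nbd(0.2, 5.0, 100_000)       # particles per source
m_pbar = sample_nbd(0.15, 3.0, 100_000)   # antiparticles per source
```

These per-source draws are the input to the volume filter described below.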
The particle distribution per source for the toy-experiment sample is shown in Fig.~\ref{fig:VF_proc2}-(a).
The particle distributions are generated $N_{\rm source}$ times and superimposed to produce the
toy-experiment distribution in the generated coordinates.
We tried two ways to superimpose the particle distributions.
In one case the value of $N_{\rm source}$ is fixed for all events,
while in the other, $N_{\rm source}$ follows the distribution of participant nucleons from
the Glauber model for Au+Au collisions with an impact parameter of $\sim9$~fm.
This process of superposition will be referred to as the ``volume filter'' in the rest of this paper.
Both cases are shown in Fig.~\ref{fig:VF_proc2}-(b) and (B), respectively.
The positive correlation of $N_{p}$ and $N_{\bar p}$ in Fig.~\ref{fig:VF_proc2}-(b) indicates
the effect of volume fluctuations.
We apply the detector filter as performed in Sec.~\ref{sec:sec2} to determine the
toy-experiment distributions in the measured coordinates as shown in Fig.~\ref{fig:VF_proc2}-(c).
The remaining procedures are performed with the toy-MC distributions and are explained as follows.
The bullet numbers correspond to the numbers in square brackets in Fig.~\ref{fig:flowchart_vf}.
Figure \ref{fig:flowchart_vf} depicts a flowchart of the unfolding procedure performed
with the toy-MC distributions shown in Fig.~\ref{fig:VF_proc2}.
\begin{description}
\item[0] Generate a toy-MC distribution per source (100k events), according to the negative binomial distribution
with parameters randomly fluctuated with $\pm20$\% compared to those in the toy-experiment samples.
See Fig.~\ref{fig:VF_proc2}-(d).
\item[1] The volume filter is applied to (d) to get toy-MC distributions in the generated coordinates.
Two kinds of volume filters with and without volume fluctuations
are applied as explained for toy-experiment samples above.
Resulting toy-MC distributions in the generated coordinates are shown in Fig.~\ref{fig:VF_proc2}-(e) and (E).
\item[2] The detector filter is applied to (e) to get toy-MC measured distributions as shown in Fig.~\ref{fig:VF_proc2}-(f).
\item[0'] During the MC process from \textbf{0} to \textbf{2}, the reversed response matrices
${\cal R}_{\rm rev}({m_{p},m_{{\bar p}};n_{p},n_{{\bar p}}})$
are computed. The toy-MC distributions in the measured
and source coordinates are connected as the following relation:
\begin{equation}
P(m_{p},m_{{\bar p}}) = \sum_{n_{p},n_{{\bar p}}}{\cal R}_{\rm rev}({m_{p},m_{{\bar p}};n_{p},n_{{\bar p}}}){\tilde P}(n_{p},n_{{\bar p}}).
\end{equation}
The important point here is that the generated coordinates ($N_{p}$,$N_{\bar p}$) are skipped
in the response matrices, unlike in the unfolding approach discussed in Sec.~\ref{sec:sec2}.
\item[3] The correction function is determined by subtracting
Fig.~\ref{fig:VF_proc2}-(c) from (f), which is shown in Fig.~\ref{fig:VF_proc2}-(g).
It represents the difference between toy-experiment and
toy-MC samples in the measured coordinates.
\item[4] We then multiply ${\cal R}_{\rm rev}({m_{p},m_{{\bar p}};n_{p},n_{{\bar p}}})$
to Fig.~\ref{fig:VF_proc2}-(g) to get the correction functions for the
source coordinates as shown in Fig.~\ref{fig:VF_proc2}-(h) .
Smoothing and scaling are carried out
based on the correction functions as done in Sec.~\ref{sec:sec2}.
Parameters are chosen to be $\alpha_{\rm sm}=0.1$ and $\alpha_{\rm sc}=1.0$.
\item[5] By adding Fig.~\ref{fig:VF_proc2}-(h) to (d), the toy-MC source distribution
is modified to be closer to (a). See Fig.~\ref{fig:VF_proc2}-(d').
\item[Iteration] Repeat \textbf{1}--\textbf{5} until cumulants of the toy-MC net-particle distribution converge.
\end{description}
In this way, the toy-MC source distribution (Fig.~\ref{fig:VF_proc2}-(d)) is modified with iterations.
The toy-MC distribution in the generated coordinates with volume filters (Fig.~\ref{fig:VF_proc2}-(e))
is also modified accordingly.
The top row in Fig.~\ref{fig:VF_itr} shows cumulants up to the fourth order of
the toy-MC distributions in the generated coordinates as a function of iteration.
Volume fluctuations are convolved for the black lines, while
no volume fluctuations are taken into account (fixed $N_{\rm source}$) for the red lines.
Results from 100 independent samples are plotted to see the statistical fluctuations.
Dashed lines from $-5$ to $30$ on the x-axis represent the cumulants of the toy-experiment
distributions in the generated coordinates.
The two results are found to be separated for the 3rd- and 4th-order cumulants,
which is due to the volume fluctuations.
It is found that the cumulants of toy-MC samples do converge to those
of toy-experiment samples after 25 iterations.
The bottom panels show the correlation of the cumulants in the generated coordinates
between toy-experiment and toy-MC samples after iterations.
The consistency between the x and y axes indicates
that our unfolding approach convolving the volume fluctuations works well.
One final remark is as follows. It was pointed out in Ref.~\cite{Sugiura:2019toh} that the
independent particle production model would be broken in the UrQMD framework,
as well as in the real experiment where we expect
strongly interacting hot and dense matter to form.
From discussions so far, it is obvious that one can implement any volume fluctuations
in the volume filter.
More importantly, one needs to check if the assumed volume fluctuations
are reasonable by model simulations.
As long as the assumption is correct, our unfolding approach can deal with
any kind of well-defined volume fluctuation to
reconstruct the true particle number distributions.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=160mm]{FlowChartVF.pdf}
\end{center}
\caption{
Flowchart of the particle number unfolding convolving volume fluctuations.
The dotted arrows show the procedures repeated for iterations.
The letters shown in the boxes correspond to those in Fig.~\ref{fig:VF_proc2}.
Numbers in the square brackets are the same as the bullet numbers for
toy-MC procedures in Sec.~\ref{sec:sec3}.
}
\label{fig:flowchart_vf}
\end{figure*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=130mm]{VF_proc2_esumi.pdf}
\end{center}
\caption{
Particle and antiparticle number distributions of toy-experiment samples for
(a) each source,
(b) superposition for fluctuating $N_{\rm source}$,
(B) superposition for fixed $N_{\rm source}$, and
(c) superposition for fluctuating $N_{\rm source}$ with the detector filter.
The same plots are shown for toy-MC samples in panels (d)--(f) and (E).
Panel (g) shows the correction function in the measured coordinates.
Panel (h) shows the correction function in the source coordinates.
Panels (d'), (e') and (E') are the distributions after the 1st iteration.
Panels (d''), (e'') and (E'') are the distributions after the 2nd iteration.
White-colored bins in panels (g) and (h) represent large negative values outside the z-axis range.
}
\label{fig:VF_proc2}
\end{figure*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=140mm]{VF_itr_esumi_nbd_glauber.pdf}
\end{center}
\caption{
(Top) Lines show the cumulants as a function of iteration.
Results from 100 independent trials are superimposed.
Dashed lines show the cumulants of the toy-experiment samples
with and without volume fluctuations.
(Bottom) Correlation between input and output cumulants.
Black represents results which include volume fluctuations and
red represents results which do not.
}
\label{fig:VF_itr}
\end{figure*}
\section{Summary}
We presented the particle number unfolding methodology for
the measurement of higher-order cumulants of net-particle distributions.
The unfolding approach is applied to both an extreme case with very strong
critical shape and a more realistic case with a small/weak 2nd component.
The latter shows that our approach can successfully reconstruct
the bimodal distribution expected in Au+Au collisions
at $\sqrt{s_{\rm NN}}=7.7$~GeV~\cite{Bzdak:2018uhv,Adam:2020unf}.
We also demonstrated convolution of the volume fluctuations
through this approach by considering the particle production from independent sources.
Our method should be useful for reconstructing particle number distributions
in terms of both detector efficiencies and volume fluctuations.
It could also be extended to simultaneously reconstruct other event-wise quantities, such as
temperature (mean transverse momentum) fluctuations of the system created in heavy-ion collisions,
on top of the initial volume fluctuation filters.
\section{Acknowledgement}
We would like to thank X. Luo, B. Mohanty, A. Pandav, N. Xu and Y. Zhang for fruitful discussions.
We would like to thank Enago (www.enago.jp) for the English language review.
This work was supported by the National Natural Science Foundation of China Grant No.~11950410505,
China Postdoctoral Science Foundation funded project 2018M642878, Ito Science Foundation (2017) and
JSPS KAKENHI Grant Nos.~25105504 and 19H05598.
\section{Introduction}\label{sec:Bounds}
Axions and axion-like particles (ALPs) are a prediction of some of the best-motivated beyond the standard model physics scenarios (see, e.g.~\autocite{jaeckel_low-energy_2010,cicoli_axion-like_2013,ringwald_axions_2014} for reviews).
Many of their properties are determined by two quantities: the mass, $m$ and the so-called decay constant, $f_a$.
An important feature that all these particles share is that they enjoy a shift symmetry, a discrete version of which is preserved at the quantum level.
The existence of this symmetry protects their potential from quantum corrections that could otherwise be very large.
In the framework of quantum field theory, such particles arise as pseudo Nambu-Goldstone bosons of approximate global chiral symmetries~\autocite{peccei_cp_1977,weinberg_new_1978,wilczek_problem_1978,masso_light_1995,masso_new_1997,masso_planck-scale_2004}.
In other setups such as supergravity or string theory, particles with similar properties appear in the spectrum.
For instance, ALPs are a general consequence of the compactification of extra dimensions and string theory~\autocite{svrcek_axions_2006,douglas_flux_2007,arvanitaki_string_2010,acharya_m_2010,higaki_note_2011,marsh_axiverse_2011,cicoli_type_2012}.
In that context, there can be dozens of such particles whose potentials, kinetic terms and interactions may contain a large number of free parameters.
In an attempt to accommodate all these similar particle candidates, we will talk about ALPs in the general sense of a light (pseudo-)scalar particle, and we will reserve the term ``axion'' to refer to ALPs that couple to the gluon field strength tensor through the QCD topological term and solve the strong CP problem.
Axion-like particles are excellent candidates to account for some or all the dark matter that we observe in the universe~\autocite{arias_wispy_2012,ringwald_exploring_2012,jaeckel_family_2014,marsh_axion_2016}.
Cosmological and astrophysical observations tell us that dark matter particles should be weakly interacting, stable at cosmological scales and cold.
ALPs can naturally fulfil all these requirements.
First, the discrete shift symmetry constrains their possible couplings to other fields, and those that are allowed are typically suppressed by $f_a$, which can be a large energy scale.
This fact, together with their small mass which limits the possible number and type of decay products as well as the phase space, makes them extremely stable.
Naively, the fact that they are very light might seem to contradict the requirement that the ALP dark matter population should be cold.
However, it is easy to see that this is not necessarily the case.
Because of their feeble interactions with other particles, ALPs are not produced thermally, but rather by the so-called misalignment mechanism, which yields a very non-relativistic population of ALPs that behave as cold dark matter~\autocite{preskill_cosmology_1983,abbott_cosmological_1983,dine_not-so-harmless_1983,arias_wispy_2012,jaeckel_family_2014}.
All in all, ALPs and axions are well motivated dark matter candidates, but their possible mass and decay constant span many orders of magnitude thereby providing a significant challenge for experimental tests.
Fortunately, their properties, in particular their low mass, also provide new opportunities for experimental searches and theoretical arguments that can be used to probe their parameter space (see~\autocite{graham_experimental_2015} for a recent review).
Experimental tests are usually dependent on the coupling to Standard Model particles.
One example is a coupling to two gluons,
\begin{equation} \label{eq:coupling}
{\mathcal{L}}\supset\frac{\alpha}{8\pi f_{a}}\phi G_{\mu\nu}\tilde{G}^{\mu\nu}.
\end{equation}
This coupling also induces a coupling to a nucleon electric dipole moment (EDM),
\begin{equation}
{\mathcal{L}}\supset g_{d}\phi\bar{N}\sigma_{\mu\nu}F^{\mu\nu}N
\end{equation}
that is particularly important for searches when $\phi$ is dark matter\footnote{The coupling~\eqref{eq:coupling} also induces tree-level $P,T$-violating forces between nucleons, which can give a larger contribution to atomic EDMs than the loop-induced nucleon EDMs~\autocite{stadnik_axion-induced_2014}. This is relevant for EDM experiments that use atoms instead of free neutrons, like some of the ones presented in~\figref{fig:ALP_parameter_space}. For those, the limits and projections should be understood as applying directly to $f_a$ and not $g_d$.}.
The coupling constants are related via~\autocite{pospelov_theta-induced_1999,graham_new_2013}
\begin{equation}
g_{d}\approx \frac{2.4\cdot 10^{-16}}{f_a}\ \mathrm{e}\cdot\mathrm{cm} \approx 3.4\cdot 10^{3} \mathrm{GeV}^{-2} \left(\frac{\mathrm{GeV}}{f_{a}}\right).
\end{equation}
\begin{figure}[t!]
\centering
\includegraphics[width=0.625\textwidth,height=\textheight,keepaspectratio]{fig_ALP_parameter_space_final}
\caption{Parameter space for canonical axion-like particles, considering gravitational effects and interactions derived from the QCD $G\tilde{G}$ term.
On the horizontal axis we plot the mass of the ALP, while the vertical axis gives the decay constant $f_a$ on the right and the effective coupling to nucleons $g_d\propto f_a^{-1}$ on the left.
Canonical ALP models with a constant mass can only generate enough dark matter via the misalignment mechanism in the yellow and grey shaded areas.
Accounting for the anharmonicities of the potential and allowing for a fine-tuned initial condition, this region can be enlarged to also include the orange band (we take the lowest viable Hubble scale of inflation, $H_I\sim 4.5\cdot 10^{-23}$ GeV).
Note that the QCD axion models are restricted to lie on the magenta line.
Taking the interaction to be given by Eq.~\eqref{eq:coupling}, the region to the left of the QCD axion line is disfavoured by the unavoidable (temperature dependent) contribution to the mass from QCD effects~\autocite{blum_constraining_2014} (see also \secref{sec:QCD}).
This region is shown in light grey.
The dark blue region is excluded by the supernova limits estimated in~\autocite{raffelt_astrophysical_2008}.
Shaded in brown is the area where experiments looking for a static nuclear electric dipole moment (nEDM, see~\autocite{abel_search_2017}) would have found the oscillating one, while the dotted lines represent sensitivity estimates for future oscillating EDM experiments~\autocite{budker_cosmic_2014,hexenia_2017}.
In the dark green region ``BBN'' ALPs coupled to QCD are inconsistent with the production of the observed abundance of light elements during Big Bang Nucleosynthesis \autocite{blum_constraining_2014}.
The violet and dark red lines dubbed ``Earth" and ``Sun" correspond to constraints from the ALP field being sourced by dense astrophysical objects~\autocite{hook_probing_2017}.
The dark grey area is disfavoured by the observation of quickly rotating stellar black holes which would have been spun down in a superradiant process (from~\autocite{arvanitaki_discovering_2015}).
The area above the dashed black lines, plotted for different values of the Hubble scale of inflation $H_I$, is disfavoured due to the generation of too much power in isocurvature perturbations at the scales probed by the Planck satellite~\autocite{ade_planck_2016} (see more details in~\secref{sec:Isocurvature_perturbations}).
Finally, $f_a$ is (softly) bounded from above by the requirement that it does not exceed the Planck scale.}
\label{fig:ALP_parameter_space}
\end{figure}
\figref{fig:ALP_parameter_space} summarises the constraints that can be cast on the canonical ALP dark matter scenario from these interactions with the visible sector.
In addition we show limits that arise from unavoidable gravitational interactions.
Unfortunately, some of the theoretically favoured existing models require high decay constants for the ALPs to be able to account for all the dark matter energy density that we observe in our Universe.
This means that some of the better motivated combinations of $(m,f_a)$ are not in the best position to be tested, be it through gravitational interactions or through couplings to gluons and nucleons or photons.
It is therefore timely to search for models that can accommodate low enough values of $f_a$ to be within reach of these searches, while still being able to produce the required dark matter abundance. One option is to enlarge the field range by a monodromy~\autocite{silverstein_monodromy_2008,mcallister_gravity_2010,kaloper_natural_2009} as done in~\autocite{jaeckel_monodromy_2016}.
In this paper we pursue the same goal by employing a non-standard kinetic term for the ALP field.
This is a possibility that has been exploited in the literature~\autocite{alishahiha_dbi_2004,domcke_pbh_2017} in the context of inflationary models (though not so much for axion inflation), but to our knowledge such a study has not been performed for dark matter models.
As we will see, a very rich phenomenology arises when this possibility is allowed.
Of special interest is that this scenario will indeed be able to populate regions of the parameter space that can be tested in the near future, either with astrophysical observations or experimental searches.
Focusing on the coupling to nucleons, the main motivation for us in this respect is threefold.
First, as was already argued, we want to explore the possibility of building an ALP dark matter model with a larger such coupling.
Second, we ask ourselves if these models could lie on the region of parameter space to the left of the QCD axion band in~\figref{fig:ALP_parameter_space}.
Finally, concerning the Big Bang Nucleosynthesis bound that seems to restrict this area of parameter space, we would like to test its robustness in constraining such ALP models.
In this work we study the viability of ALPs with a non-canonical kinetic term as dark matter candidates from a purely phenomenological perspective.
Let us nevertheless briefly mention some of the mechanisms that can give rise to this scenario.
For instance, a non-minimal coupling of the ALP field to gravity in the so-called Jordan frame induces a non-canonical kinetic function in the usual Einstein frame.
In the context of supergravity, an explicit breaking of the shift symmetry in the K{\"a}hler potential also results in non-standard kinetic terms for the ALP.
Finally, in the context of compactifications, string theory \textit{a priori} contains all the necessary ingredients to generate axions with non-canonical kinetic terms, caused, for example, by back-reaction effects.
However, no explicit construction of the models that we consider in this work exists as of today, and this task is beyond the scope of this paper.
We leave the study of the possibility of embedding this phenomenological study in a more complete framework for future work.
This paper is structured as follows: in~\secref{sec:NonstandardKT} we discuss the effects of non-canonical kinetic terms and set up our explicit case study. In~\secref{sec:Cosmological_evolution} we study how this modified kinetic term affects the cosmological evolution of the ALP field, and in~\secref{sec:Isocurvature_perturbations} we analyse the isocurvature perturbations predicted in this setup. In~\secref{sec:QCD} we discuss the impact of allowing for a coupling to QCD in this scenario, and conclude in~\secref{sec:Conclusions}.
Before getting started on the details we note that, although in this paper we focus mainly on the example of gluon interactions, most of our discussion is completely general and can be applied to any other coupling.
Moreover, while the structure of interactions that we consider is inspired by that of pseudo-Nambu-Goldstone bosons, the essential qualitative features should also apply in the case of more general scalars and depend only on the singularities of the non-canonical kinetic terms.
\section{Non-canonical kinetic terms}\label{sec:NonstandardKT}
In this section we examine the effect that a non-standard kinetic term can have on the dynamics of the ALP field.
Let us start with the Lagrangian
\begin{equation}\label{eq:basic_lagrangian}
{\cal L} = \frac{1}{2} K^2(\phi)\partial^\mu\phi\partial_\mu\phi - V(\phi),
\end{equation}
where we have allowed for a general real (and positive definite) function of $\phi$, $K^{2}(\phi)$, to rescale the kinetic term and thus render it non-canonically normalised.
For definiteness, we will work with the usual periodic potential for ALP fields,
\begin{equation}\label{eq:potential}
V(\phi) = \Lambda^4 \left( 1 - \cos \frac{\phi}{f_a} \right).
\end{equation}
We now proceed by performing a field redefinition to obtain the canonically normalised field.
The formal solution is to define
\begin{equation}\label{eq:field_redefinition_general}
\varphi (\phi) = \int K(\phi) d\phi \equiv g(\phi),
\end{equation}
and thus the Lagrangian for $\varphi$ is
\begin{equation}
{\cal L}(\varphi) = \frac{1}{2} \partial^\mu\varphi\partial_\mu\varphi - V(g^{-1}(\varphi)).
\end{equation}
Being canonically normalised, $\varphi$ is the physical (propagating) field.
Let us see what kind of functions $K$ result in $\varphi$ being a viable dark matter candidate.
The first condition is that $\varphi$ behaves like cold dark matter in the late universe. This requires that it oscillates harmonically at late times (see, e.g.~\autocite{arias_wispy_2012}).
Accordingly the kinetic term should not modify the dynamics close to the origin.
This is automatic if the kinetic term approaches a non-vanishing constant value close to the origin,
\begin{equation}
K\rightarrow {\rm const.}=1\quad{\rm for}\quad \phi\rightarrow 0.
\end{equation}
As indicated in the equation, this constant can be chosen to be equal to $1$ by a suitable choice of normalisation.
So why should we now choose a non-trivial function for $K$ and what shall we choose?
As already mentioned in the introduction, we would like to find a model with larger couplings, i.e. smaller $f_{a}$, that still gives a sufficient dark matter density.
Roughly speaking the problem of obtaining a sufficient energy density can be understood as follows. For the potential Eq.~\eqref{eq:potential} the maximal initial energy density is given
by $\Lambda^4$. This is linked to the mass $m$ of the particle via $\Lambda^4=m^2f^2_{a}$.
If $f_{a}$ is too small the initial and in consequence the final energy density is too small to make up all of the dark matter.
One way to avoid this problem would be to break the periodicity of the potential~\eqref{eq:potential} such that the potential continues to grow for large field values, e.g. by exploiting a monodromy~\autocite{jaeckel_monodromy_2016}.
Here we will explore a different strategy. As long as the Hubble constant is sufficiently large the evolution of the field is frozen and the energy density is approximately constant.
As discussed below the evolution and consequently the dilution of the energy starts when $H^2\sim |V''(\varphi)|$. Hence, we can increase the energy density today by choosing the kinetic function $K$ such that the potential becomes very flat for large field values\footnote{An alternative is to start in a region of field space where $V'(\varphi)$ is very small, i.e. the field is close to a maximum. However, this is strongly limited by the existence of inflationary fluctuations~\autocite{wantz_axion_2010,di_cortona_qcd_2016} (see also~\secref{sec:Isocurvature_perturbations}).}. A cartoon of this is shown in~\figref{fig:SR_potential}.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth/2,height=\textheight,keepaspectratio]{fig_SR_potential}
\caption{Slow roll-like potential. }
\label{fig:SR_potential}
\end{figure}
Using
\begin{equation}
\frac{\partial V}{\partial \varphi} = \frac{\partial V}{\partial \phi} \cdot \frac{\partial \phi}{\partial \varphi} = \frac{1}{K} \frac{\partial V}{\partial \phi},
\end{equation}
we see that this can be easily achieved if $K$ has a singularity at some field value $\phi=a$,
\begin{equation}
K\rightarrow\infty\quad{\rm for}\quad \phi\rightarrow a.
\end{equation}
This singular structure has an additional advantage:
The non-canonically normalised field $\phi$ will never exceed $\phi=a$ during its evolution.
Limits such as the one discussed in~\secref{secbbn} arising from BBN that are based on a sizeable field value at some earlier epoch can thus be avoided if $a$ is sufficiently small.
A simple function that satisfies the above requirements while keeping the periodic properties intact is,
\begin{equation}\label{eq:non_canonical_kinetic_term}
K(\phi) = \frac{1}{\cos \left( \frac{N\phi}{f_a} \right)} .
\end{equation}
While this choice might seem rather arbitrary at first, there are some arguments that make it more general than it seems.
The approach for obtaining a flattened potential for a scalar via a non-canonical kinetic term has been widely used in the context of inflationary cosmology~\autocite{alishahiha_dbi_2004,domcke_pbh_2017}.
Indeed over the last years, $\alpha$-attractor models~\autocite{kallosh_superconformal_2013,kallosh_planck_2015} have attracted special attention.
In this context, \autocite{galante_unity_2015} showed that the determining property of this class of models is the existence of a pole in the kinetic term.
More precisely, it is the order and the residue of the pole that play a key role, and not so much the precise functional form of the kinetic function.
We can therefore be confident that our results will not depend much on the specific choice of $K$.
Similarly to~\autocite{galante_unity_2015}, here we focus on the case of a second-order pole.
As we mentioned before, this case is better motivated and may arise, for instance, as a consequence of a non-minimal coupling to gravity.
Nevertheless, we check in Appendix~\secref{sec:Appendix1} that our main conclusions remain unchanged if we allow for higher-order poles.
Also, recall that the shift symmetry $\phi\rightarrow \phi + \text{\textit{const.}}$ of the ALP field is what protects its mass from large corrections.
It thus seems sensible to preserve or only slightly break this symmetry.
Indeed, by our choice of potential Eq.~\eqref{eq:potential}, we are assuming that a small explicit breaking is present.
This breaking typically occurs at the nonperturbative level~\autocite{peccei_strong_2008,kim_axions_2010} and crucially preserves the discrete shift symmetry $\phi\rightarrow \phi + 2 k \pi f_a$, which allows us to retain a sufficient level of protection against quantum corrections.
We would like the kinetic term to preserve, at least, this discrete shift symmetry, which requires that $K(\phi)$ is a periodic function of $\phi/f_a$.
These arguments quickly lead us to Eq.~\eqref{eq:non_canonical_kinetic_term}.
Once again, we stress that the fact that we are writing a specific kinetic term should not be understood as the construction of a complete model, but rather as a benchmark for our phenomenological study.
\bigskip
The transformation to the canonically normalised field is given by
\begin{equation}\label{eq:field_redefinition}
\varphi(\phi) = \frac{2f_a}{N} \arctanh \left( \tan \frac{N\phi}{2f_a} \right).
\end{equation}
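As a quick numerical cross-check (ours, not part of the derivation), one can verify that the derivative of this field redefinition reproduces the kinetic function $K(\phi)=1/\cos(N\phi/f_a)$. The following Python sketch does so by central finite differences; the function names and the sample values $f_a=1$, $N=3$ are illustrative choices:

```python
import numpy as np

def K(phi, fa=1.0, N=3):
    """Kinetic function of Eq. (non_canonical_kinetic_term)."""
    return 1.0 / np.cos(N * phi / fa)

def varphi(phi, fa=1.0, N=3):
    """Canonically normalised field of Eq. (field_redefinition)."""
    return (2.0 * fa / N) * np.arctanh(np.tan(N * phi / (2.0 * fa)))

# d(varphi)/d(phi) must equal K(phi); check by central finite differences
fa, N, eps = 1.0, 3, 1e-7
for phi in np.linspace(-0.45, 0.45, 7) * np.pi * fa / (2 * N):
    num = (varphi(phi + eps, fa, N) - varphi(phi - eps, fa, N)) / (2 * eps)
    assert abs(num / K(phi, fa, N) - 1.0) < 1e-5
```

The check holds at every sampled point of the branch around the origin, confirming that $\varphi$ is the canonically normalised field.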
We should note that the poles of $K(\phi)$ closest to the origin are located at $\phi/f_a=\pm\pi/(2N)$.
This means that, when doing the field redefinition \eqref{eq:field_redefinition}, we are restricting the field space to $\phi/f_a\in (- \frac{\pi}{2N}, \frac{\pi}{2N})$.
As already mentioned above this will become important when discussing the limits arising from a gluon coupling in~\secref{sec:QCD}.
In principle there exist a total of $N$ different branches $\phi/f_a\in \left( (k-\frac{1}{2})\frac{\pi}{N}, (k+\frac{1}{2})\frac{\pi}{N}\right)$ where the field could be trapped.
However, the only one which has a minimum in the potential is the one closest to the origin.
In other branches, the field would slow-roll towards infinity\footnote{In principle one could have tunnelling between different branches. If the decay time of the metastable vacuum is small enough, the field would always eventually end up in the branch closest to zero. However, a calculation of the tunnelling rate is highly model dependent and beyond the scope of this work.}, making them unappealing for the phenomenological purposes that we have in mind.
For this reason, we focus on the phenomenologically viable region around zero.
Using the field redefinition~\eqref{eq:field_redefinition} the Lagrangian for the canonically normalised field is given by
\begin{equation}\label{eq:canonically_normalised_Lagrangian}
{\cal L} = \frac{1}{2} \partial^\mu\varphi\partial_\mu\varphi - \Lambda^4 \left[ 1 - \cos \left( \frac{2}{N} \arctan \left( \tanh \frac{N\varphi}{2f_a} \right) \right) \right].
\end{equation}
By expanding about the origin, it can be checked that we indeed recover the quadratic behaviour for small field values.
The potential is plotted in~\figref{fig:potential} for different values of $N$.
It indeed looks quite similar to what we imagined in~\figref{fig:SR_potential}.
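The two limits can be made quantitative with a small numerical sketch (ours; $\Lambda^4=m^2f_a^2$ set to one), confirming that the potential is quadratic near the origin and approaches the plateau $\Lambda^4[1-\cos\frac{\pi}{2N}]$ for $N\psi\gg1$:

```python
import numpy as np

def V(psi, N):
    """Potential of Eq. (canonically_normalised_Lagrangian) in units of
    Lambda^4 = m^2 f_a^2, as a function of psi = varphi / f_a."""
    return 1.0 - np.cos((2.0 / N) * np.arctan(np.tanh(N * psi / 2.0)))

N = 3
# Near the origin the potential is quadratic: V(psi) -> psi^2 / 2
assert abs(V(1e-3, N) / (0.5 * 1e-6) - 1.0) < 1e-4
# For N*psi >> 1 it saturates at the plateau 1 - cos(pi / (2N))
plateau = 1.0 - np.cos(np.pi / (2 * N))
assert abs(V(10.0, N) - plateau) < 1e-10
```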
What about the equations of motion?
Let us assume that we have a homogeneous and isotropic field, $\phi = \phi (t)$ and consequently $\varphi = \varphi (t)$.
The Klein-Gordon equation for a homogeneous and isotropic field in an expanding spacetime is
\begin{equation}
\ddot{\varphi} + 3H\dot{\varphi}+\partial_\varphi V(\varphi) = 0,
\end{equation}
where $H$ is the Hubble expansion parameter.
For convenience we introduce the dimensionless field variable,
\begin{equation}
\psi = \varphi / f_a,
\end{equation}
in analogy to how the $\theta$ angle relates to the original axion field.
Thus, we will be expressing the field value in terms of $f_a$ units.
The equation of motion can then be written as
\begin{equation}
\ddot{\psi} + 3H \dot{\psi} + m^2 \frac{1}{\cosh N\psi} \sin \left[ \frac{2}{N} \arctan \left( \tanh \frac{N\psi}{2} \right) \right]=0,
\end{equation}
where we define
\begin{equation}
m^2 = \frac{\Lambda^4}{f_a^2},
\end{equation}
which corresponds to the second derivative of the potential with respect to the physical field around the minimum at
$\psi=\varphi=\phi=0$; $m$ is the physical mass of the dark matter particles.
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\textwidth,height=\textheight,keepaspectratio]{fig_potential_nonorm}
\caption{Potential for the canonically normalised field, plotted for various values of $N$.
Note that the potential is quadratic for small field values but flattens away from the origin.}
\label{fig:potential}
\end{figure}
\newpage
\section{Cosmological evolution and dark matter production}\label{sec:Cosmological_evolution}
The goal of this section is to find an estimate for the dark matter density in the model defined above and compare it with the observed abundance.
The energy density of the field depends on the parameters $(f_a, m, N)$, as well as the initial conditions for the field and its cosmological evolution.
For this purpose it is useful to briefly recall the misalignment mechanism~\autocite{preskill_cosmology_1983,abbott_cosmological_1983,dine_not-so-harmless_1983}, which gives us the basic idea of how our field evolves in a cosmological setup.
\subsection{The misalignment mechanism}\label{sec:misalignment}
Here we briefly summarise how a misaligned light scalar field evolves in an expanding spacetime, closely following the description in~\autocite{arias_wispy_2012}.
Let us consider the simplified case of a real scalar field with Lagrangian
\begin{equation}
{\cal L} = \frac{1}{2}\partial_\mu \phi \partial^\mu \phi - \frac{1}{2}m_\phi^2\phi^2.
\end{equation}
Note that our final goal is not the harmonic case but a more complicated potential with strong anharmonicities.
However, solving this simplified equation will give us helpful insights to tackle the anharmonic potential.
In a homogeneous setting, the equation of motion for $\phi$ is
\begin{equation}\label{eq:simpleeom}
\ddot{\phi} + 3H\dot{\phi} + m_\phi^2 \phi = 0.
\end{equation}
This is the equation of a damped harmonic oscillator.
There are two distinct regimes in the evolution of $\phi$.
First, at very early times when $3H \gg m_\phi$, the oscillator is overdamped, so that $\dot{\phi} \simeq 0$ and the field is stuck at its initial value.
At a later time $t_1$ such that $3H (t_1) = m_\phi$, the damping has decreased enough so that the field can start to oscillate.
The equation of motion for the oscillating regime can then be solved using the WKB approximation:
\begin{equation}\label{eq:harmonic_WKB}
\phi(t) \simeq \phi(t_1) \left( \frac{a(t_1)}{a(t)} \right) ^{3/2} \cos \left( m_\phi (t-t_1) \right),
\end{equation}
where $a(t)$ is the scale factor.
We see that the energy density, which is proportional to the amplitude of the oscillations squared, dilutes with expansion as $a^{-3}$.
This means that the oscillating field behaves like pressureless matter for all processes mediated by gravitation.
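These two regimes are easy to reproduce numerically. The sketch below (ours, in units where $m_\phi=1$) integrates Eq.~\eqref{eq:simpleeom} with $H=1/(2t)$, i.e.\ radiation domination with $a\propto t^{1/2}$, and checks that the energy density of the oscillating field indeed dilutes as $a^{-3}\propto t^{-3/2}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

m_phi = 1.0  # mass in arbitrary units; time is measured in 1/m_phi

def rhs(t, y):
    """Eq. (simpleeom) with H = 1/(2t), i.e. radiation domination."""
    phi, dphi = y
    return [dphi, -1.5 / t * dphi - m_phi**2 * phi]

t1 = 3.0 / (2.0 * m_phi)  # onset of oscillations, 3 H(t1) = m_phi
sol = solve_ivp(rhs, (1e-4, 2000 * t1), [1.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

def rho(t):
    """Energy density of the homogeneous field, (dphi^2 + m^2 phi^2) / 2."""
    phi, dphi = sol.sol(t)
    return 0.5 * (dphi**2 + m_phi**2 * phi**2)

# WKB scaling, Eq. (harmonic_WKB): amplitude ~ (t1/t)^(3/4),
# hence rho ~ a^(-3) ~ t^(-3/2)
ratio = rho(2000 * t1) / rho(500 * t1)
assert abs(ratio / 0.25**1.5 - 1.0) < 0.05
```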
In this simplified setup, the energy density in the axion field today is
\begin{equation}\label{eq:harm_energy_density}
\rho_\phi (t_0) \simeq 0.17 \frac{\text{keV}}{\text{cm}^3}\ \sqrt{\frac{m_\phi}{\text{eV}}} \left( \frac{\phi_0}{10^{11}\ \text{GeV}} \right)^2 {\cal F}(T_1),
\end{equation}
where
\begin{equation}
{\cal F}(T_1)= \frac{\left( g_\star(T_1) / 3.36 \right)^{\frac{3}{4}}}{\left( g_{\star S}(T_1) / 3.91\right)}
\end{equation}
is a smooth function (cf.~\autocite{arias_wispy_2012}) that varies from $1$ to $\sim 0.3$ when $T_1\in (T_0,200\ \text{GeV})$.
The last result assumes that the field starts oscillating during radiation domination and that the comoving entropy is conserved.
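For reference, Eq.~\eqref{eq:harm_energy_density} is straightforward to evaluate. The helper below (ours) sets ${\cal F}\simeq1$; the example values of $m$ and $\phi_0$ are purely illustrative:

```python
def rho_today_keV_per_cm3(m_eV, phi0_GeV, F=1.0):
    """Present-day misalignment density, Eq. (harm_energy_density),
    in keV/cm^3; m in eV, phi0 in GeV, F = F(T_1) of order one."""
    return 0.17 * m_eV**0.5 * (phi0_GeV / 1e11) ** 2 * F

# Illustrative point: m = 1e-9 eV and phi0 = 5e13 GeV give about
# 1.3 keV/cm^3, of the order of the observed average dark matter
# density for Omega_DM h^2 ~ 0.12.
rho_example = rho_today_keV_per_cm3(1e-9, 5e13)
```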
\subsection{Analytical estimate of the dark matter density}\label{sec:analytic_estimate}
After this small detour to explain the misalignment mechanism for the harmonic potential, let us go back to our case of interest: the ALP field with a non-standard kinetic term.
Recall that the equation of motion that we have obtained for the physical field $\psi$ is
\begin{equation}\label{eq:nonlinear_eom}
\ddot{\psi} + 3H \dot{\psi} + m^2 \frac{1}{\cosh N\psi} \sin \left[ \frac{2}{N} \arctan \left( \tanh \frac{N\psi}{2} \right) \right] = 0.
\end{equation}
We see that in the limit of small $\psi$, when $N\psi \ll 1$, this reduces to the simplified case \eqref{eq:simpleeom} and the evolution is exactly as we described in the simple real scalar field case.
However, the situation is different in the regime $N\psi \gtrsim 1$.
As we can expect by looking at~\figref{fig:potential}, the flatness of the potential away from the minimum at $\psi=0$ will have the effect of delaying the start of the oscillations.
Moreover, the oscillations, once they start, will not be harmonic until the damping has made the amplitude decrease enough to be in the small field regime.
This means that the WKB approximation might not be as good in this case.
Although we suspect that the WKB approximation might break down when the amplitude of the oscillations is large due to the anharmonicity of the potential, we will use it as a first approximation to solve the equation of motion and get an analytical estimate of the result. We will later contrast this with a more precise numerical computation.
In the analytical approach, we will study the two regimes, where the damping is over- and under-critical, respectively, and build up the global evolution of the field by gluing together the solutions for each regime.
Our goal is to compute the current energy density of dark matter-like particles given an initial condition for the physical field.
As we saw, the first thing to do is to find the time when the oscillations start.
In analogy with the simple case, where the condition was $3H = m_\phi$, we use a generalisation of this formula for an anharmonic potential, namely
\begin{equation}
3H = \left| V^{\prime\prime} (\psi_0) \right| ^{1/2}.
\end{equation}
In~\secref{sec:numerics} we will see that this indeed works reasonably well to determine when the oscillations start, as it takes into account the flatness of the potential away from the origin.
In the limit of large $N\psi\gg 1$, the second derivative of the potential can be written as
\begin{equation}
V^{\prime\prime}(\psi) \simeq -2Nm^2 e^{-N\psi}\sin \frac{\pi}{2N}.
\end{equation}
This turns out to be a very good approximation for intermediate and even small values of $N\psi$.
One key difference with the harmonic case is that here the point in time when oscillations begin depends on the initial field value $\psi_0$.
With this we already see that the oscillations are exponentially delayed for large $N\psi_0$:
\begin{equation}\label{eq:tstart}
t_s \equiv t_{\text{start}} = \frac{3}{2\left| V^{\prime\prime}(\psi_0) \right| ^{1/2}} \simeq \frac{3}{2m} \left( 2N\sin \frac{\pi}{2N} \right)^{-1/2} e^{\frac{N\psi_0}{2}} \propto e^{\frac{N\psi_0}{2}},
\end{equation}
where we have assumed radiation domination so that $H=1/(2t)$.
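Both the early-time freezing and the exponentially delayed onset can be verified by integrating Eq.~\eqref{eq:nonlinear_eom} directly. The following sketch (ours; $m=1$ and illustrative values of $N$ and $\psi_0$) checks that the field is still frozen at $0.1\,t_s$ and oscillates about the minimum a few $t_s$ later:

```python
import numpy as np
from scipy.integrate import solve_ivp

def t_start(psi0, N, m=1.0):
    """Estimated onset of oscillations, Eq. (tstart)."""
    Vpp = 2.0 * N * m**2 * np.exp(-N * psi0) * np.sin(np.pi / (2 * N))
    return 1.5 / np.sqrt(Vpp)

def rhs(t, y, N, m=1.0):
    """Eq. (nonlinear_eom) with H = 1/(2t) (radiation domination)."""
    psi, dpsi = y
    force = m**2 / np.cosh(N * psi) * np.sin(
        (2.0 / N) * np.arctan(np.tanh(N * psi / 2.0)))
    return [dpsi, -1.5 / t * dpsi - force]

N, psi0 = 3, 4.0
ts = t_start(psi0, N)
sol = solve_ivp(rhs, (1e-3, 6 * ts), [psi0, 0.0], args=(N,),
                rtol=1e-9, atol=1e-11, dense_output=True)

# Still frozen well before t_s ...
assert abs(sol.sol(0.1 * ts)[0] - psi0) < 0.1 * psi0
# ... and oscillating about the minimum well after t_s
late = sol.sol(np.linspace(4 * ts, 6 * ts, 400))[0]
assert late.min() < 0.0 < late.max()
```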
We now use this as an initial condition for the WKB approximation.
In this approximation, the energy density of the physical field $\varphi$ is
\begin{equation}\label{eq:energy_density_semianalytic}
\rho_\varphi (T) = \frac{1}{2} m^2 f_a^2 \psi_0^2 \frac{g_{\star S}(T)}{g_{\star S}(T_s)} \left( \frac{T}{T_s}\right)^3,
\end{equation}
where we have used the conservation of comoving entropy $S=sa^3$ to express it in terms of temperatures instead of scale factors.
Using the expression for the Hubble parameter during radiation domination
\begin{equation} \label{eq:Hubble_temperature}
H(T) = 1.66\sqrt{g_\star(T)} \frac{T^2}{m_{\text{pl}}},
\end{equation}
we can express the current energy density of the field as a function of the initial condition $\psi_0$,
\begin{equation}
\rho_\varphi \simeq 0.17\ \frac{\text{keV}}{\text{cm}^3} \cdot \sqrt{\frac{m}{1\ \text{eV}}} \left( \frac{f_a}{10^{11}\ \text{GeV}} \right)^{2} \psi_0^2\ {\cal F}(T_s) \cdot \left( 2N \sin \frac{\pi}{2N} \right)^{-3/4} \mathrm{e}^{\frac{3}{4}N\psi_0}.
\end{equation}
We can compare this density with the one corresponding to a harmonic potential.
The result is
\begin{equation}\label{eq:enhancement_analytic}
\frac{\rho^{\text{anh}}}{\rho^{\text{harm}}} \simeq \frac{\mathcal{F}(T_1)}{\mathcal{F}(T_s)} \cdot \left( 2N \sin \frac{\pi}{2N} \right)^{-3/4} \mathrm{e}^{\frac{3}{4}N\psi_0} \sim \mathrm{e}^{\frac{3}{4}N\psi_0},
\end{equation}
so the energy density is exponentially enhanced\footnote{In Appendix~\secref{sec:Appendix1} we check that a significant enhancement also exists if we allow for a kinetic function with a higher-order pole.} for large $N$ and initial condition $\psi_0$.
The precise exponent that we obtain here should be taken as a very rough estimate.
Indeed, a numerical computation is needed to get a precise result, which is what we will aim for in the following section.
As we can see in \eqref{eq:enhancement_analytic}, the enhancement is exponential in $N\psi_0$. This implies that the field values required to yield the correct dark matter abundance are usually not too large: in the phenomenologically interesting region, values of $N\psi_0$ larger than $50$ are generally not needed.
The largest initial field values occur for $N=1$ and are of order $50$ in units of $f_a$.
Another constraint we have to take care of is that the field must behave like dark matter once it comes to dominate the dynamics of the universe, i.e. we want to avoid an additional phase of inflationary expansion driven by $\psi$.
A sufficient condition for this is that the field has already started to oscillate at matter radiation equality.
Making use of the more precise numerical estimate that we will obtain in the next section, we can estimate what region of parameter space satisfies this condition,
\begin{equation}\label{eq:second_inflation_limit}
f_a \gtrsim 10^{-6}\ \mathrm{GeV}\cdot N\cdot \left( \frac{\mathrm{eV}}{m} \right)^{0.81}.
\end{equation}
This condition excludes the very small values of the mass and the decay constant in the upper left corner of \figref{fig:ALP_parameter_space}, which are already in tension with the nEDM experiment, BBN observations and the limits from \autocite{hook_probing_2017}.
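For convenience, this bound can be wrapped in a small helper (a direct transcription of the formula above, taking the numerical coefficient and exponent at face value):

```python
def fa_min_no_second_inflation(m_eV, N):
    """Minimum decay constant in GeV such that the field already
    oscillates at matter-radiation equality, avoiding a second
    inflationary phase.  m_eV is the ALP mass in eV."""
    return 1e-6 * N * m_eV**(-0.81)
```

For instance, an ALP with $m = 10^{-10}\ \mathrm{eV}$ and $N=5$ requires $f_a \gtrsim 6\times 10^{2}$ GeV, a very mild condition.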
\subsection{Numerical computation}\label{sec:numerics}
Having obtained a simple estimate of the cosmological evolution of the field, we now make use of a numerical solution of the equation of motion to have a more precise result.
Our goal in this subsection is to quantify how much the solution for the nonlinear equation of motion \eqref{eq:nonlinear_eom} deviates from the harmonic case \eqref{eq:simpleeom}.
Following the usual practice for dealing with anharmonicities in the ALP potential (see~\autocite{turner_cosmic_1986, lyth_axions_1992, strobl_anharmonic_1994, bae_update_2008},~\autocite{diez-tejedor_cosmological_2017} has a slightly different definition), we use an effective parametrisation in terms of an anharmonicity function $f(\psi_0)$, such that
\begin{equation}
\rho^{\text{anh}} = f(\psi_0) \rho^{\text{harm}},
\end{equation}
where $\rho$ is the energy density of the ALP field, computed late enough when it is already behaving as cold dark matter.
This function only depends on the initial misalignment angle, and it should account for all the deviations from the harmonic solution.
This approach is normally used to account for departures from the quadratic potential in the usual axion and ALP models.
Our case is slightly different, mostly because we are dealing with an unbounded field range.
As a consequence, the usual functional form for $f(\psi_0)$ does not work here.
Guided by the result obtained in the analytical approximation, we work with the following ansatz for the anharmonicity function:
\begin{equation}\label{eq:anh_func_ansatz}
f(\psi_0) = \mathrm{e}^{b N \psi_0},
\end{equation}
where $b$ is a real parameter to be determined.
This ansatz accounts for the exponential enhancement in energy density that we have found analytically.
The normalisation is such that $f(\psi_0) \rightarrow 1$ when $\psi_0 \rightarrow 0$, so as to recover the harmonic case in the small-field limit.
The goal now is to fit the ansatz to a numerical computation of the energy density.
To set the problem in a more straightforward way, we want to compare the numerical solution of
\begin{equation}
\ddot{\psi} + 3\tilde{H}(\tilde{t}) \dot{\psi} + \tilde{m}^2 \frac{1}{\cosh N\psi} \sin \left[ \frac{2}{N} \arctan \left( \tanh \frac{N\psi}{2} \right) \right] = 0
\end{equation}
with the solution for the damped harmonic oscillator equation
\begin{equation}
\ddot{\psi} + 3\tilde{H}(\tilde{t}) \dot{\psi} + \tilde{m}^2 \psi = 0.
\end{equation}
In this computation we use dimensionless quantities measured in units of $m$, denoted with a tilde: $\tilde{H}, \tilde{t}, \tilde{m} \dots$
In these units, the time for the start of the oscillations in the harmonic case is $\tilde{t}_1^{\text{harm}} = 3/2$ (assuming radiation domination), and the period of the oscillations is $2\pi$.
We solve the equations numerically until we are well within the adiabatic regime in both cases (that is, when the amplitude of the oscillations has decreased enough so that the non-canonical potential is well approximated by the harmonic one).
Then, we compute the energy density $\rho = (1/2)f_a^2\dot{\psi}^2 + V(f_a\psi)$ and extract the anharmonicity factor as the quotient of both energy densities.
As we are within the adiabatic regime, $\rho$ scales as $\rho\propto a^{-3}$ in both cases, so the quotient will stay constant.
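A minimal, self-contained sketch of this procedure (ours, using SciPy, not the authors' actual code) integrates both dimensionless equations of motion during radiation domination, $\tilde{H} = 1/(2\tilde{t})$, and reads off the late-time density ratio. The potential used below is reconstructed from the force term of the quoted equation of motion:

```python
import numpy as np
from scipy.integrate import solve_ivp

def V_nc(psi, N):
    # potential (in units of f_a^2 m^2) whose gradient reproduces the
    # force term sin(u)/cosh(N*psi) of the non-canonical EOM
    u = (2.0 / N) * np.arctan(np.tanh(N * psi / 2.0))
    return 1.0 - np.cos(u)

def rhs_nc(t, y, N):
    psi, dpsi = y
    u = (2.0 / N) * np.arctan(np.tanh(N * psi / 2.0))
    return [dpsi, -1.5 / t * dpsi - np.sin(u) / np.cosh(N * psi)]

def rhs_harm(t, y):
    psi, dpsi = y
    return [dpsi, -1.5 / t * dpsi - psi]

def anharmonicity_factor(N, psi0, t_end=2000.0):
    """Late-time ratio rho_nc / rho_harm for equal initial conditions."""
    y0, t0 = [psi0, 0.0], 1e-3   # field frozen deep in the H >> m regime
    nc = solve_ivp(rhs_nc, (t0, t_end), y0, args=(N,), rtol=1e-8, atol=1e-10)
    h = solve_ivp(rhs_harm, (t0, t_end), y0, rtol=1e-8, atol=1e-10)
    p, dp = nc.y[:, -1]
    q, dq = h.y[:, -1]
    rho_nc = 0.5 * dp**2 + V_nc(p, N)
    rho_h = 0.5 * dq**2 + 0.5 * q**2
    return rho_nc / rho_h            # constant once both are adiabatic
```

For small $\psi_0$ the ratio approaches unity, while for larger initial values it grows roughly like $\mathrm{e}^{bN\psi_0}$ with $b$ close to the fitted value.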
An example of the numerical solution can be seen in~\figref{fig:anh_function_evolution}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.75\textwidth,height=\textheight,keepaspectratio]{fig_anh_function_evolution}
\caption{Numerical solution of the non-canonical equation of motion compared to the harmonic solution, using $N=5$ and $\psi_0=1.5$ as an example. The top panel shows the solution for the field as a function of time, while the middle and bottom ones show the energy density of the field and the quotient of energy densities for the harmonic and non-canonical equations of motion. Note that this quotient approaches a constant as the adiabatic regime is reached, allowing us to obtain the anharmonicity factor. As a comparison and confirmation of our analytical results, the top panel also shows the time at which the oscillations are predicted to start in our analytical approach, Eq. \eqref{eq:tstart}.}
\label{fig:anh_function_evolution}
\end{figure}
This process is repeated for a large number of values of $\psi_0$ and $N$ and we fit the results to the ansatz \eqref{eq:anh_func_ansatz}.
We obtain a very good fit with a value of $b = 0.56$, as can be seen in~\figref{fig:anh_function_fit}.
One should note that we are fitting a two-dimensional data sample with just one parameter, so finding a good fit confirms that we have chosen an adequate ansatz.
\begin{figure}[t!]
\centering
\includegraphics[width=0.75\textwidth,height=\textheight,keepaspectratio]{fig_anh_function_fit_black}
\caption{Fit of the anharmonicity function to the ansatz in Eq. \eqref{eq:anh_func_ansatz}. We plot the result of the fit for a set of values of $N$ and a range of the initial misalignment angle $\psi_0\in(0,5)$.}
\label{fig:anh_function_fit}
\end{figure}
The anharmonicity function allows us to compute the energy density of the non-canonical ALP field in a very simple way, combining the harmonic solution \eqref{eq:harm_energy_density} with the anharmonicity function~\eqref{eq:anh_func_ansatz}.
As long as we are within the adiabatic regime, the energy density in this approximation is given by
\begin{equation}\label{eq:anharm_energy_density}
\begin{aligned}
\rho_\psi^{\text{anh}} (t) &\simeq \frac{1}{2} f_a^2 m^2 f(\psi_0,N)\psi_0^2 \left( \frac{a_1^{\text{harm}}}{a(t)} \right)^3 \\
&= \frac{1}{2} f_a^2 m^2 f(\psi_0,N) \psi_0^2 \frac{g_{\star S}(T)}{g_{\star S}(T_1^{\text{harm}})} \left( \frac{T}{T_1^{\text{harm}}}\right)^3.
\end{aligned}
\end{equation}
The key difference between this equation and \eqref{eq:energy_density_semianalytic} is that here we use the well-known solution of the harmonic equation of motion, instead of the full nonlinear one that arises in our non-canonical setup.
All the information about the nonlinearity is encoded in the anharmonicity function, making it much more manageable.
In the analytical approach, we found that the quotient between non-canonical and canonical density scales as $\rho_\text{NC}/\rho_\text{C}\sim \mathrm{e}^{(3/4)N\psi_0}$.
In the full numerical approach\footnote{In this study we have limited ourselves to the homogeneous field evolution.
Recently, the authors of~\autocite{soda_cosmological_2017} showed that potentials like the one we are considering can lead to a parametric resonance instability that can make inhomogeneous modes grow.
This effect may help to alleviate some tension that has been pointed out in~\autocite{irsic_first_2017} between the existence of ultralight ALPs and Lyman $\alpha$ forest observations.} we find a somewhat smaller coefficient in the exponent, $0.56$.
We have seen that a non-canonical kinetic term can indeed enhance the energy density of ALP dark matter.
In the next few sections we will make use of the solutions for the cosmological evolution of the non-canonical ALP field to make predictions about its phenomenology, and to apply it to some particularly interesting cases.
\section{Isocurvature perturbations}\label{sec:Isocurvature_perturbations}
So far, we have assumed the initial misalignment angle $\theta_0 = \phi_0/f_a$ to take the same value throughout the universe, but of course we have to take into account fluctuations, e.g. those imprinted by inflation. We do this by treating the initial misalignment angle as a spatially varying quantity, described in terms of its average and variance.
Two very distinct scenarios arise, depending on whether the mechanism that gives rise to the ALP field turns on before or after the inflationary epoch of our Universe.
If the ALP field was established, e.g. by spontaneous symmetry breaking, after inflation, the variance of the angle can be large even within our Hubble volume.
The mean value will be $\phi_0 = 0$ and the energy density is given by the fluctuations, as well as by other effects such as the decay of topological defects~\autocite{sikivie_cosmic_1991,hagmann_axion_2001}. In particular, the latter contributions are not well understood and may also have some model dependence when going beyond the QCD axion.
To avoid this, we will focus on the scenario where the ALP field was present during inflation.
Classically, if the ALP field was established before inflation, then the spatial variance of the field within a Hubble patch will be washed out as spacetime is stretched during inflation.
This means that $\sigma_\phi^2 \rightarrow 0$, and the misalignment field can take any value $\phi_0$ in our Hubble patch.
However, this is not completely true, as any light field present during inflation will acquire quantum fluctuations (see, e.g.~\autocite{linde_particle_2005}).
The power spectrum of such fluctuations for a canonically normalised scalar field is scale invariant,
\begin{equation}
\braket{\left| \delta \phi (k) \right|^2} = \left( \frac{H_I}{2\pi} \right)^2 \frac{1}{k^3 / (2\pi^2)}.
\end{equation}
These fluctuations can be thought of as arising from a thermal spectrum at the Gibbons-Hawking temperature $T_{GH}=H_I/(2\pi)$~\autocite{gibbons_cosmological_1977}.
As long as these fluctuations do not restore the spontaneously broken symmetry that gives rise to the ALPs, i.e., as long as\footnote{It is also necessary that the symmetry is not restored during reheating~\autocite{beltran_isocurvature_2007}. We will assume this to be true.} $T_{GH}<f_a$, this will imprint small fluctuations on top of the otherwise homogeneous ALP field.
The corresponding fluctuations of the misalignment angle in Fourier space will have an amplitude of $\sigma_\phi (k) = H_I/(2\pi f_a)$.
In real space, the fluctuations are of a size $\sigma_\phi = \gamma H_I/(2\pi f_a)$, where $\gamma \sim \orderof (1)$ is a dimensionless factor that effectively encodes the dispersive effect of the logarithmically divergent small $k$ modes (see~\autocite{lyth_axions_1992}).
Its value depends on the length scales that we are interested in. Following~\autocite{hertzberg_axion_2008} we will set $\gamma = 2$ for the CMB characteristic scale $k_\star = 0.05\ \text{Mpc}^{-1}$.
As the ALP has a negligible contribution to the total energy density of the universe during inflation, fluctuations in the field do not contribute to the usual curvature perturbations.
Rather, they manifest themselves as fluctuations in the ratio of the number density of ALPs to the total entropy density, and are completely uncorrelated with the curvature perturbations.
This is the reason why they are called \textit{entropy} or \textit{isocurvature perturbations}.
As their interactions with other standard model particles are greatly suppressed, ALPs do not thermalise with the other species and their perturbations remain isocurvature~\autocite{weinberg_must_2004}.
At later stages of the cosmological evolution, the dark matter ALPs pick up a significant contribution to the energy density of the universe, and so they contribute to the temperature and polarisation fluctuations of the CMB as cold dark matter isocurvature modes.
Planck has set strong bounds on isocurvature perturbations~\autocite{ade_planck_2016},
\begin{equation}
\beta_\text{iso} = \frac{\Delta_\phi^2(k_\star)}{\Delta_\phi^2(k_\star) + \Delta_{\cal R}^2 (k_\star)} < 0.038
\end{equation}
at $95 \%$ CL.
Here, $\Delta_\phi^2(k_\star)$ and $\Delta_{\cal R}^2 (k_\star)$ are the power spectrum of the axion and curvature perturbations at the pivot scale $k_\star$, respectively.
Once the value of $\Delta_{\cal R}^2 (k_\star)$ is set (Planck gives $ \Delta_{\cal R}^2 (k_\star) = 2.1(9)\times 10^{-9} $), this translates into a bound on the axion isocurvature fluctuations.
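Inverting the bound gives the largest axion isocurvature power allowed (a one-line helper using the Planck numbers quoted above):

```python
def delta_phi2_max(beta_max=0.038, delta_R2=2.1e-9):
    """Largest axion isocurvature power Delta_phi^2 compatible with
    beta_iso = D_phi / (D_phi + D_R) < beta_max."""
    return beta_max / (1.0 - beta_max) * delta_R2
```

With the Planck values this gives $\Delta_\phi^2 \lesssim 8.3\times 10^{-11}$.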
To use these limits to constrain our scenario, we have to compute our prediction for
\begin{equation}\label{eq:Isocurvature_spectrum}
\Delta_\phi^2 = \Eval{\Braket{ \left( \frac{\delta\rho_\phi}{\rho_\phi}\right) ^2 }}{t_\text{CMB}}{},
\end{equation}
that is, we need to evolve the fluctuations in the energy density until the time of emission of the CMB and compare them with the homogeneous average value.
If the evolution of the field is linear, as it is in the case of canonical ALP models with a purely quadratic potential, the power spectrum is constant during the cosmological evolution.
As a consequence, one can evaluate it at any point, such as right after inflation and before the onset of the oscillations in the ALP field.
However, in any model that contains anharmonicities, the evolution at early times will be nonlinear, which implies that $\Delta_\phi^2$ will evolve nontrivially after inflation.
Thus, to arrive at the correct prediction for the isocurvature perturbations, we have to track the evolution of the fluctuations until late times.
In addition to the limits from isocurvature fluctuations,
the inflationary fluctuations\footnote{Quantum fluctuations of the ALP field should also be considered, but their effect is negligible when compared to the inflationary fluctuations.} in the ALP field also forbid tuning the initial misalignment angle with arbitrary precision.
In fact, there is an unavoidable limit to this tuning: the precision cannot be better than the size of the fluctuations, $\sigma_\theta = \gamma H_I/(2\pi f_a)$, as was argued in~\autocite{wantz_axion_2010}.
This has two related consequences.
The first is that the initial misalignment angle cannot be infinitely close to zero.
The requirement that the current ALP energy density is not bigger than the measured dark matter density $\Omega_C h^2 \sim 0.12 $ then sets a bound on the parameter space.
This bound is model independent (as long as all the potentials are approximately quadratic for small $\theta$) and roughly requires
\begin{equation}
m < \left( \frac{10^{12} \text{ GeV}}{H_I} \right)^4 \text{ eV}.
\end{equation}
Secondly, if the field range is compact (as for the usual canonical ALP), an argument similar to the one above tells us that some regions of the parameter space will not yield enough energy density to account for all the dark matter.
Indeed, it is not possible to tune the initial value of the field at the top of the potential with infinite precision, due to the presence of fluctuations.
The requirement here is that $\pi - \theta_0 < \gamma H_I/(2\pi f_a)$.
This particular limit will strongly depend on the anharmonicity of the potential, so it is not possible to give a more explicit expression.
We discuss some particular cases in the next subsection.
However, this last effect will not be relevant in our non-canonical model, as there we have an unbounded field range (our potential does not have a maximum).
\subsection{Isocurvature perturbations for anharmonic potentials}\label{sec:Isocurvature_perturbations_anharmonic}
We now present a general analytical expression to compute the isocurvature perturbations in ALP models whose potential may have large anharmonicities.
We do this using the anharmonicity function formalism that we presented in the previous section.
An equivalent result was derived in~\autocite{kobayashi_isocurvature_2013} using the $\delta N$ formalism.
Here we provide a more straightforward derivation and extend the use of the formula to more general potentials.
To evaluate expression \eqref{eq:Isocurvature_spectrum}, we will use the fact that at $t_\text{CMB}$ the field should already be oscillating harmonically, as observations require it to behave as cold dark matter already by the time of matter-radiation equality.
As we are already well within the adiabatic regime, the anharmonicity function approach will work well to describe the evolution of the energy density, which means that we can use equation \eqref{eq:anharm_energy_density}.
As fluctuations are small, we can work to linear order in $\sigma_\phi$ to find\footnote{Here we implicitly assume that the fluctuations are still superhorizon when the adiabatic regime is reached. This is indeed the case for all the large scale modes of cosmological interest, like the ones probed by the CMB.}
\begin{equation}\label{eq:isocurvature_anharmonic}
\begin{aligned}
\Delta_\phi^2 &= \Eval{\Braket{ \left( \frac{\delta\rho_\phi}{\rho_\phi}\right) ^2 }}{t_\text{CMB}}{} = \left( \Eval{\frac{\partial \log \rho_\phi(t_\mathrm{CMB})}{\partial \log \phi}}{\phi_0}{}\right)^2 \Braket{ \left( \frac{\delta\phi_0}{\phi_0}\right) ^2 }\\
&=4 \frac{\sigma_\phi^2}{\phi_0^2} \left( 1 + \frac{1}{2} \Eval{ \frac{\d \log f(\theta)}{\d \log \theta}}{\theta_0}{} \right)^2 \\
&= 4\gamma^2 \frac{H_I^2}{4\pi^2 f_a^2 \theta_0^2} \left( 1 + \frac{1}{2} \Eval{ \frac{\d \log f(\theta)}{\d \log \theta}}{\theta_0}{} \right)^2.
\end{aligned}
\end{equation}
Note that even if this quantity is evaluated at $t_\text{CMB}$, it directly depends only on the initial misalignment angle and the statistics of its fluctuations at inflation.
All the information about the later evolution is encoded in the anharmonicity function.
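A compact way to evaluate this expression for an arbitrary anharmonicity function is to take the logarithmic derivative numerically (a sketch of ours; $\gamma = 2$ follows the choice made earlier for the CMB pivot scale):

```python
import math

def delta_phi2(theta0, H_I, f_a, f=lambda th: 1.0, gamma=2.0, eps=1e-6):
    """Isocurvature power for a general anharmonicity function f:
    Delta^2 = gamma^2 H_I^2 / (pi^2 f_a^2 theta0^2)
              * (1 + 0.5 * dlog f / dlog theta |_theta0)^2."""
    # central difference of log f in log theta
    dlogf = (math.log(f(theta0 * (1 + eps)))
             - math.log(f(theta0 * (1 - eps)))) / (2.0 * eps)
    return (gamma**2 * H_I**2 / (math.pi**2 * f_a**2 * theta0**2)
            * (1.0 + 0.5 * dlogf)**2)
```

With $f \equiv 1$ this reproduces the harmonic expression; with $f = \mathrm{e}^{bN\theta}$ it yields the non-canonical enhancement factor $(1 + bN\theta_0/2)^2$.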
We will now apply the formula \eqref{eq:isocurvature_anharmonic} to both the case of the canonical ALP with a cosine potential and to our non-canonical model, and compare the results with the harmonic approximation.
For the harmonic case, where $f(\theta_0) = 1$, we have the usual expression
\begin{equation}
\Delta^2_\phi = \gamma^2 \frac{H_I^2}{\pi^2 f_a^2 \theta_0^2}.
\end{equation}
The constraints that one finds, for different values of the energy scale of inflation, are presented in~\figref{fig:isocurvature}.
The harmonic case in particular corresponds to the first column of plots.
Of course, the harmonic case can only be an approximation valid for small $\theta$, as ALP models should preserve the shift symmetry $\theta \rightarrow \theta + 2\pi$.
Among the potentials that satisfy this condition, the most commonly used is $V(\theta) = m^2 f_a^2 (1-\cos\theta)$.
The anharmonicity function that appears in this case was studied in~\autocite{turner_cosmic_1986, lyth_axions_1992, strobl_anharmonic_1994, bae_update_2008}.
After comparing with numerical simulations, we have decided to use a slightly different version of it, proposed in~\autocite{diez-tejedor_cosmological_2017}, which provides a better fit to the numerical data,
\begin{equation}\label{eq:cosine_AF}
f(\theta_0) = \left[ \log \left( \frac{e}{1-\left( \theta_0 / \pi\right)^4}\right)\right]^{3/2} .
\end{equation}
With this, it is easy to arrive at the following expression for the isocurvature perturbations,
\begin{equation}
\Delta^2_\phi = \gamma^2 \frac{H_I^2}{\pi^2 f_a^2 \theta_0^2}\left( 1 + \frac{3}{f(\theta_0)^{2/3}} \cdot \frac{1}{(\pi / \theta_0)^4 -1} \right)^2.
\end{equation}
Note that this reduces to the harmonic result for small $\theta_0$.
However, for angles close to $\pi$, the isocurvature perturbations are greatly enhanced.
As expected, this function diverges at $\theta_0=\pi$, but as we have noted before, this limit is unattainable because of the fluctuations in the field.
In~\figref{fig:isocurvature}, we can see that the limits we can put on the parameter space are a bit stronger than in the harmonic case, in particular for low values of $m$ and $f_a$, which correspond to large values of the initial misalignment angle.
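For reference, the cosine-potential anharmonicity function of Eq.~\eqref{eq:cosine_AF} is straightforward to implement:

```python
import math

def f_cosine(theta0):
    """Anharmonicity function for the cosine potential (fit of
    Diez-Tejedor et al.); equals 1 at theta0 = 0 and diverges
    as theta0 -> pi."""
    return math.log(math.e / (1.0 - (theta0 / math.pi)**4))**1.5
```

The divergence as $\theta_0 \rightarrow \pi$ is only logarithmic to the power $3/2$, so the enhancement stays modest unless the angle is tuned very close to the top of the potential.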
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{1\textwidth}
\includegraphics[width=1\linewidth]{fig_isocurvature_H7_inv}
\label{fig:isocurvature_H7}
\end{subfigure}\\[-3ex]
\begin{subfigure}[b]{1\textwidth}
\includegraphics[width=1\linewidth]{fig_isocurvature_H9_inv}
\label{fig:isocurvature_H9}
\end{subfigure}\\[-3ex]
\begin{subfigure}[b]{1\textwidth}
\includegraphics[width=1\linewidth]{fig_isocurvature_H11_inv}
\label{fig:isocurvature_H12}
\end{subfigure}\\[-3ex]
\caption{Isocurvature limits arising in the three different models studied: the harmonic potential (left), the canonical ALP (centre) and the non-canonical one (right). From top to bottom, we plot the limits in the $(m,f_a)$ parameter space for different values of the energy scale of inflation $H_I$. Note that a higher $H_I$ puts stronger bounds on ALP models. In fact, $H_I \gtrsim 10^{12}$ GeV rules out the complete parameter space, whereas for $H_I \lesssim 10^6$ GeV, the limits are very weak.}
\label{fig:isocurvature}
\end{figure}
Finally, we turn to the non-canonical case.
The main difference with the canonical ALP, aside from the shape of the potential, is that here we are dealing with an unbounded field range.
As the potential is asymptotically flat, it is always possible to enhance the production of ALPs by choosing a larger initial misalignment angle, as we saw in~\secref{sec:Cosmological_evolution}.
This means that this model can always evade the limits related to the underproduction of dark matter.
Using the anharmonicity function that we derived in the previous section, we find that the isocurvature power spectrum generated in this scenario is
\begin{equation}
\Delta^2_\phi = \gamma^2 \frac{H_I^2}{\pi^2 f_a^2 \theta_0^2}\left( 1 + \frac{1}{2} b N \theta_0 \right)^2,
\end{equation}
where $b=0.56$.
Again, this reduces to the harmonic case for small $\theta$.
The last column of plots in~\figref{fig:isocurvature} illustrates the limits that arise from the Planck data.
Note that in the harmonic case and in the canonical model featuring a compact field range, a strong restriction on the parameter space comes from the requirement to produce enough dark matter (the limits arising from this condition are shaded in purple in~\figref{fig:isocurvature}).
As we have already argued, this limit is not present in our non-canonical setup, which features an unbounded field range.
As a consequence, this model opens up a large region of parameter space, corresponding to low masses and decay constants, that was disfavoured until now.
Finally, let us remark that, as is well known, high-scale inflation strongly constrains ALP models due to the generation of large isocurvature perturbations, which are not seen in the CMB.
The tensor-to-scalar ratio $r$ is strongly correlated with a high scale of inflation, so a detection of primordial gravitational waves would put a strong constraint on all axion and ALP dark matter models, including ours.
Future experiments~\autocite{matsumura_mission_2014,kogut_primordial_2011,abazajian_cmb-s4_2016} are expected to increase the sensitivity in measuring $r$ and thus the energy scale of inflation.
\section{Coupling to QCD: Temperature dependent mass}\label{sec:QCD}
So far, we have not assumed a coupling of the ALP to any other field.
In what follows, we will allow for a coupling to gluons via a term $\theta G \tilde{G}$.
We will study two distinct cases.
First, we contemplate the possibility of having a non-canonical kinetic term in an otherwise QCD-axion model.
Then, we add an extra term to the Lagrangian which, as we will see, allows us to construct a model of light ALPs that enjoys relatively strong gluon couplings.
\subsection{The QCD axion}
In this section we will focus on the QCD axion as introduced by Peccei and Quinn as a solution to the strong CP problem in quantum chromodynamics (QCD)~\autocite{peccei_cp_1977,weinberg_new_1978,wilczek_problem_1978}.
The Lagrangian for the canonically normalised axion field is now
\begin{equation}\label{eq:QCD_lagrangian}
{\cal L}_\phi = \frac{1}{2} \partial^\mu \phi \partial_\mu \phi - \Lambda_{\text{QCD}}^4 \left( 1-\cos \frac{\phi}{f_a}\right),
\end{equation}
and as usual we can define the angle $\theta = \phi/f_a$, so that $\theta\in \left( -\pi,\pi \right]$.
For our modification with a non-canonically normalised field, we have
\begin{equation}\label{eq:QCD_noncan_lagrangian}
{\cal L}_{\phi} = \frac{1}{2 \cos ^2 \left( N \frac{\phi}{f_a} \right)} \partial^\mu \phi \partial_\mu \phi - \Lambda_{\text{QCD}}^4 \left( 1-\cos \frac{\phi }{f_a}\right),
\end{equation}
and after we perform a field redefinition to have it canonically normalised, we find the Lagrangian
\begin{equation}\label{eq:QCD_noncan_lagrangian_norm}
{\cal L}_\varphi = \frac{1}{2} \partial^\mu \varphi \partial_\mu \varphi
- \Lambda_{\text{QCD}}^4 \left[ 1-\cos \left( \frac{2}{N} \arctan \left( \tanh \frac{N\varphi}{2f_a} \right) \right) \right].
\end{equation}
There is one difference that makes the QCD axion case special: here the energy scale appearing in the potential is fixed by QCD to be~\autocite{di_cortona_qcd_2016}
\begin{equation}\label{eq:QCD_energy_scale}
\Lambda_{\text{QCD}} = f_\pi m_\pi \frac{\sqrt{m_u m_d}}{m_u + m_d} \simeq 76\ \text{MeV}.
\end{equation}
It is easy to see that the mass of the axion, $m_a$, is given by $f_am_a = \Lambda_{\text{QCD}}^2$.
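Plugging in standard values for the pion decay constant, pion mass and light-quark masses (the inputs below are PDG-like numbers assumed by us, not taken from the text) reproduces the quoted scale:

```python
import math

f_pi, m_pi = 0.0922, 0.1350   # GeV (assumed inputs)
m_u, m_d = 2.2e-3, 4.7e-3     # GeV (assumed inputs)

# Lambda_QCD^2 = f_pi * m_pi * sqrt(m_u * m_d) / (m_u + m_d)
Lambda_QCD = math.sqrt(f_pi * m_pi * math.sqrt(m_u * m_d) / (m_u + m_d))

def m_axion_0(f_a):
    """Zero-temperature axion mass in GeV, from f_a * m_a = Lambda_QCD^2."""
    return Lambda_QCD**2 / f_a
```

This gives $\Lambda_{\text{QCD}} \simeq 76$ MeV and, for example, $m_a \simeq 6\ \mu$eV for $f_a = 10^{12}$ GeV.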
It is important to note that the numerical value quoted above is only valid at zero (or very low) temperatures.
Indeed, the axion potential is affected by finite temperature effects, such that the mass of the axion varies with temperature.
At low temperatures, below the QCD critical temperature $T_\mathrm{crit}\sim 160\text{--}170\ \text{MeV}$, the mass remains roughly constant\footnote{The small temperature dependence can be computed using chiral perturbation theory, as in~\autocite{di_cortona_qcd_2016}.}.
That said, much of the dynamics that is of interest to us will happen in the early universe, at temperatures close or above $T_\mathrm{crit}$. There are different ways to compute the temperature dependence of the axion mass~\autocite{turner_cosmic_1986, bae_update_2008, wantz_axion_2010, wantz_topological_2010, di_cortona_qcd_2016, borsanyi_axion_2016}.
The function that controls the temperature dependence of the axion mass is the topological susceptibility $\chi(T)$, which is usually parametrised as a power law:
\begin{equation}\label{eq:axion_mass_T}
m_a^2(T) = \frac{\chi (T)}{f_a^2}, \quad \text{where}\quad \chi(T) \simeq \chi_0 \left( \frac{T}{T_{\mathrm{crit}}}\right)^{2\alpha}.
\end{equation}
Here we will use $2\alpha = -7.1$ and $\chi_0 = 0.11$, from recent lattice computations~\autocite{borsanyi_axion_2016} that are consistent with the instanton values up to an overall normalisation factor.
We see that the main effect is that the mass of the axion is approximately constant until $T_\mathrm{crit}$, and then it drops as a power law, so that the axion is essentially massless at high temperatures.
The most important implication of the temperature dependent mass is that a smaller mass at early times can delay the start of the oscillations of the field, which in turn results in a higher energy density of axionic dark matter today.
This happens both for the canonical and non-canonical axion models.
\subsection{Anharmonicity function and isocurvature perturbations revisited: Temperature dependence}
We have seen that coupling ALPs to QCD through $\phi G\tilde{G}$ results in a temperature-dependent mass for the ALP, both in the canonical and non-canonical setup.
This of course has an impact on its cosmological evolution, which can be of importance in computing observables such as the isocurvature perturbations that we discussed in~\secref{sec:Isocurvature_perturbations}.
To account for this effect, we will modify the anharmonicity function formalism that we introduced in~\secref{sec:numerics} to incorporate the temperature dependence.
That is, we want to compute
\begin{equation}
F_T(\theta_0,f_a) \equiv \frac{\rho^{\mathrm{anh}}_T}{\rho^{\mathrm{harm}}},
\end{equation}
evaluated at a point in time late enough so that the anharmonic and temperature-dependent axion field has already entered the adiabatic regime.
\begin{figure}[t!]
\centering
\includegraphics[width=0.6\linewidth]{fig_QCD_isocurvature_cosine_inv}
\caption{Isocurvature constraints on the axion scale $f_{a}$ as a function of the inflation scale $H_{I}$ for the QCD axion with potential \eqref{eq:QCD_lagrangian}. Both the anharmonicities of the potential and the temperature dependence of the mass are taken into account through the anharmonicity function defined in \eqref{eq:temperature_anharmonicity_function}. Our results differ slightly from the ones obtained in~\autocite{visinelli_dark_2009} and~\autocite{kobayashi_isocurvature_2013} due to the fact that we are using the more recent data from the Planck satellite and a different anharmonicity function.}
\label{fig:QCD_isocurvature_cosine}
\end{figure}
For definiteness, we will use the following expression for the axion mass,
\begin{equation}\label{eq:QCD_axion_mass}
m_a(T) = \begin{cases}
m_a \left( \frac{T}{T_\mathrm{crit}} \right)^\alpha &\mathrm{if}\ T\geq T_\mathrm{crit} ,\\
m_a &\mathrm{if}\ T\leq T_\mathrm{crit} .\\
\end{cases}
\end{equation}
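In code, with temperatures in GeV and $2\alpha = -7.1$ as quoted above (the precise value of $T_\mathrm{crit}$ below is our assumption, within the range given earlier):

```python
def m_axion_T(T, m_a0=1.0, T_crit=0.165, alpha=-3.55):
    """Piecewise temperature-dependent axion mass: constant below
    T_crit, power law (T / T_crit)^alpha above it (2*alpha = -7.1)."""
    if T <= T_crit:
        return m_a0
    return m_a0 * (T / T_crit)**alpha
```

The steep power law means the axion is essentially massless at temperatures a few times $T_\mathrm{crit}$.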
First of all, we note that this temperature dependence will only have an effect if the field starts oscillating before the QCD critical temperature $T_\mathrm{crit}$.
In the harmonic limit, this means that if the mass is smaller than $m_a^*$, defined by $3H(T_\mathrm{crit})=m_a^*$, the field will have acquired its late-time mass by the time it starts oscillating.
Thus, the later evolution of the field will be insensitive to the temperature effects that happened earlier on.
In terms of decay constants, this sets a distinct scale
\begin{equation}
f_a^* \simeq 8.7\cdot 10^{16}\ \mathrm{GeV}.
\end{equation}
If we take into account the anharmonicities of the potential, it might happen that the start of the oscillations is delayed until after $T_\mathrm{crit}$, even if $f_a<f_a^*$.
The condition to be in this regime is that the initial misalignment angle $\theta_0$ is larger than some value $\theta_0^* (f_a)$.
This value is given for a general anharmonicity function\footnote{Note the difference between $f(\theta_0)$, which is the anharmonicity function presented in~\secref{sec:numerics} and induced purely by the shape of the potential, and $F_T(\theta_0,f_a)$, which also includes the effects of the temperature-dependent mass.} by
\begin{equation}
\left( \frac{f_a^*}{f_a} \right)^{3/2} = f(\theta_0^*).
\end{equation}
For the case of a canonical axion with a cosine potential like in \eqref{eq:QCD_lagrangian}, we find
\begin{equation}
\theta_0^* (f_a) \simeq \pi \left[ 1 - \exp \left( 1 - \frac{f_a^*}{f_a} \right) \right]^{1/4},
\end{equation}
whereas in the non-canonical case \eqref{eq:QCD_noncan_lagrangian_norm}, we find
\begin{equation}
\psi_0^* (f_a,N) \simeq \frac{3}{2bN} \log \left( \frac{f_a^*}{f_a} \right).
\end{equation}
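The two critical angles above can be evaluated directly. A minimal Python sketch (valid only for $f_a < f_a^*$, and with $b$ treated as a free model parameter of the non-canonical scenario):

```python
import math

FA_STAR = 8.7e16  # GeV

def theta0_star(fa):
    """Critical misalignment angle, canonical cosine case (requires fa < FA_STAR)."""
    return math.pi * (1.0 - math.exp(1.0 - FA_STAR / fa)) ** 0.25

def psi0_star(fa, N, b):
    """Non-canonical analogue; b parametrises the anharmonicity function (assumption)."""
    return 3.0 / (2.0 * b * N) * math.log(FA_STAR / fa)
```

As a sanity check, $\theta_0^*\to 0$ as $f_a\to f_a^*$ (no delay possible) and $\theta_0^*\to\pi$ for $f_a\ll f_a^*$ (only angles tuned close to the hilltop delay the oscillations).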
\begin{figure}[t!]
\centering
\includegraphics[width=0.6\linewidth]{fig_QCD_isocurvature_noncanonical_inv}
\caption{Isocurvature constraints on the axion scale $f_{a}$ as a function of the inflation scale $H_{I}$ for the QCD axion with a non-canonical kinetic term \eqref{eq:QCD_noncan_lagrangian}. Both the anharmonicities of the potential and the temperature dependence of the mass are taken into account through the anharmonicity function defined in \eqref{eq:temperature_anharmonicity_function}.}
\label{fig:QCD_isocurvature_noncan}
\end{figure}
For any set of decay constants and initial misalignment angles that satisfy $f_a<f_a^*$ and $\theta_0<\theta_0^*$, we compute $F_T$ for a generic anharmonic potential, finding
\begin{equation}
F_T(\theta_0,f_a) \simeq \left( \frac{f_a^*}{f_a} \right)^{\frac{\alpha}{2(2-\alpha)}}\cdot\left( f (\theta_0) \right)^{\frac{2(3-\alpha)}{3(2-\alpha)}}.
\end{equation}
The details of the derivation of this result are given in Appendix~\secref{sec:Appendix2}.
Here we see that the result depends critically on the exponent of the temperature dependence of the axion mass above the QCD critical temperature.
To sum up, we can write the full temperature-dependent anharmonicity function as follows,
\begin{equation}\label{eq:temperature_anharmonicity_function}
F_T(\theta_0,f_a) =
\begin{cases}
f(\theta_0) &\mathrm{if}\ f_a > f_a^*,\\
f(\theta_0) &\mathrm{if}\ f_a < f_a^*\ \mathrm{and}\ \theta_0 > \theta_0^*,\\
\left( \frac{f_a^*}{f_a} \right)^{\frac{\alpha}{2(2-\alpha)}}\cdot\left( f (\theta_0) \right)^{\frac{2(3-\alpha)}{3(2-\alpha)}}\quad &\mathrm{if}\ f_a < f_a^*\ \mathrm{and}\ \theta_0 < \theta_0^*.
\end{cases}
\end{equation}
With this, we can use the same approach as in~\secref{sec:Isocurvature_perturbations_anharmonic} to compute the isocurvature perturbations, this time using the temperature-dependent anharmonicity function,
\begin{equation}\label{eq:isocurvature_anharmonic}
\Delta_\phi^2 = 4\gamma^2 \frac{H_I^2}{4\pi^2 f_a^2 \theta_0^2} \left( 1 + \frac{1}{2} \Eval{ \frac{\d \log F_T(\theta)}{\d \log \theta}}{\theta_0}{} \right)^2.
\end{equation}
We apply this formula for both the canonical QCD axion and for our non-canonical model, and obtain the results presented in \figref{fig:QCD_isocurvature_cosine} and \figref{fig:QCD_isocurvature_noncan}, respectively.
For the canonical QCD axion, our results are an update from the ones obtained in~\autocite{visinelli_dark_2009} and~\autocite{kobayashi_isocurvature_2013}, as we are using the more recent data from the Planck satellite and a better fitting anharmonicity function.
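As a numerical illustration, the piecewise function \eqref{eq:temperature_anharmonicity_function} and the spectrum \eqref{eq:isocurvature_anharmonic} can be sketched in Python. The cosine-potential fit $f(\theta)=[\ln(e/(1-\theta^2/\pi^2))]^{7/6}$ and the value $\alpha=4$ used below are illustrative assumptions, not the fits used for the figures; the logarithmic derivative is taken by a central finite difference.

```python
import math

FA_STAR = 8.7e16  # GeV

def f_anh(theta):
    """Commonly used cosine-potential anharmonicity fit (assumption)."""
    return math.log(math.e / (1.0 - (theta / math.pi) ** 2)) ** (7.0 / 6.0)

def F_T(theta0, fa, alpha=4.0):
    """Temperature-dependent anharmonicity function, piecewise as in the text."""
    if fa > FA_STAR:
        return f_anh(theta0)
    theta_star = math.pi * (1.0 - math.exp(1.0 - FA_STAR / fa)) ** 0.25
    if theta0 > theta_star:
        return f_anh(theta0)
    return ((FA_STAR / fa) ** (alpha / (2.0 * (2.0 - alpha)))
            * f_anh(theta0) ** (2.0 * (3.0 - alpha) / (3.0 * (2.0 - alpha))))

def isocurvature(theta0, fa, H_I, gamma=1.0, eps=1e-4):
    """Delta_phi^2 with d log F_T / d log theta taken by central difference."""
    dlog = (math.log(F_T(theta0 * (1 + eps), fa))
            - math.log(F_T(theta0 * (1 - eps), fa))) / (2.0 * eps)
    return (4.0 * gamma ** 2 * H_I ** 2
            / (4.0 * math.pi ** 2 * fa ** 2 * theta0 ** 2)
            * (1.0 + 0.5 * dlog) ** 2)
```

The numerical derivative makes the enhancement of the isocurvature amplitude near the hilltop (where $f$, and hence $F_T$, grows steeply) explicit without an analytic derivative of the fit.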
\subsection{ALPs coupled to QCD with $f_{a} m \ll \Lambda_{\rm QCD}^2$}
Let us now study the possibility of an ALP having a coupling to the $G\tilde{G}$ term while satisfying $f_a m \ll \Lambda_{\rm QCD}^2$.
This is an interesting region of the parameter space, as ALPs that satisfy these conditions may be found by looking for an oscillating nucleon or atomic electric dipole moment.
A number of laboratory searches have been proposed in this direction~\autocite{graham_new_2013,graham_axion_2011,budker_cosmic_2014,hexenia_2017}.
However, we have seen that coupling the ALP to QCD via a term proportional to $G\tilde{G}$ induces an irreducible contribution to the mass, given by \eqref{eq:QCD_energy_scale}.
Explicitly this contributes
\begin{equation}\label{eq:fatoma}
m^{2}_a (T=0) \simeq \left(5.7\times 10^{-5} {\rm eV }\left( \frac{10^{11}{\rm GeV}}{f_a} \right)\right)^2,
\end{equation}
to the square of the axion mass as given in~\autocite{di_cortona_qcd_2016}.
This contribution will also have a temperature dependence as described by \eqref{eq:axion_mass_T}.
A priori, this irreducible contribution to the axion mass seems irreconcilable with the condition $f_a m \ll \Lambda_{\rm QCD}^2$~\autocite{blum_constraining_2014}.
The only known way of circumventing this caveat is to precisely cancel this contribution with an additional, fine-tuned term in the Lagrangian.
Acknowledging the flaws of this \textit{ad hoc} approach, we follow it and study the phenomenology of such models when allowing for a non-canonical kinetic term.
At the level of the Lagrangian, we add an extra term to the potential so that it becomes
\begin{equation}
V(\phi) = \Lambda_{\rm QCD}^4\left( 1-\cos \frac{\phi}{f_a} \right) - \Lambda_0^4\left(1- \cos \left(\frac{\phi}{n f_a} + \alpha \right) \right).
\end{equation}
In principle there can exist a phase difference between both contributions.
For our purposes, it will be necessary to require that this phase difference vanishes, so we will take $\alpha=0$.
This can be viewed as equivalent to asking for a separate solution to the strong CP problem.
In principle any integer $n$ is possible, but for simplicity we will limit ourselves to the $n=1$ case.
In the small $\phi$ limit, this potential induces a mass for the ALP
\begin{equation}
m^2 = m_a^2(T) - m_0^2,
\end{equation}
where $m_0 f_a = \Lambda_0^2$; recall that $m_a$ is completely fixed by $f_a$ as in equation \eqref{eq:fatoma}.
It is then possible to choose $m_0$ so as to obtain any zero-temperature mass for the ALP, i.e.\ we can set $m_0^2 = m_a^2(T=0) - m^2$.
We are interested in the $m^2 \ll m_a^2(T=0)$ regime.
The full mass can then be expressed as
\begin{equation}
m^2 (T) = m_a^2(T) - m_a^2(0) + m^2.
\end{equation}
Because at early times the QCD contribution is strongly suppressed, in that regime we have $m^2 (T) < 0$.
We will use the following simplified expression for the temperature dependent mass of the ALP
\begin{equation}\label{eq:temp_mass}
m^2 (T) = \begin{cases}
m^2 & \quad{\rm for}\quad T<T_{\rm crit} \\
-m_a^2(0) & \quad{\rm for}\quad T>T_{\rm crit}
\end{cases}
\end{equation}
Note that the negative mass does not indicate an unstable potential but only that $\phi=0$ is not the minimum at that time.
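A minimal sketch of \eqref{eq:temp_mass} and its effect on the potential (all numerical values and units are placeholders) makes this explicit: for $T>T_\mathrm{crit}$ the point $\theta=0$ is a maximum and the minimum sits at $\theta=\pi$, while the potential remains bounded.

```python
import math

T_CRIT = 0.160  # GeV (placeholder scale)

def m2_of_T(m2_late, m2_qcd0, T):
    """Piecewise mass squared: m^2 below T_crit, -m_a^2(0) above it."""
    return m2_late if T < T_CRIT else -m2_qcd0

def V(theta, T, fa=1.0, m2_late=1e-12, m2_qcd0=1.0):
    """Cosine potential with the temperature-dependent mass (placeholder values)."""
    return fa ** 2 * m2_of_T(m2_late, m2_qcd0, T) * (1.0 - math.cos(theta))
```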
\subsubsection{Canonical case}
As a first step, we implement the mass subtraction and the resulting temperature dependence in an ALP model with a canonically normalised scalar field with potential given by
\begin{equation}\label{eq:temp_dep_N0}
V(\phi) = f_a^2\ m^2 (T) \left( 1-\cos \frac{\phi}{f_a} \right),
\end{equation}
with \(m(T)\) defined in \eqref{eq:temp_mass}.
The most relevant feature of this scenario is that before the QCD phase transition, the potential is minimised at $\theta=\pi$ rather than at $\theta=0$.
Accordingly, at early times the field evolves towards its minimum at $\pi$, around which it will oscillate with damped amplitude.
Then, after the QCD phase transition, the potential rapidly acquires its late-time shape, with a minimum at the origin.
The field thus oscillates around its CP-conserving value $\theta=0$ at late times.
The main role of the first set of oscillations is to set the initial condition for the second one to be close to $\pi$.
We refer to \figref{fig:N0_sketch} for a cartoon explaining this evolution.
It should be noted that this discussion is only valid if $f_am\ll \Lambda^{2}_{\mathrm{QCD}}$, that is, if we lie to the left of the QCD axion band in \figref{fig:N0_parameter_space}.
In the other limit, \textit{i.e.} $f_am\gg \Lambda^{2}_{\mathrm{QCD}}$, the contribution of the QCD mass is negligible and we recover the usual constant-mass ALP scenario.
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\linewidth]{fig_N0_sketch}
\caption{Cartoon explaining the evolution of the field.
The red and blue lines represent the potential before and after the QCD phase transition, respectively.
The green dots and arrows represent the evolution of the field. (The oscillations are not drawn explicitly in order to simplify the figure.)
The initial misalignment angle $\theta_0$ and the value of the field at the QCD phase transition, $\theta_{\mathrm{crit}}$, are depicted.}
\label{fig:N0_sketch}
\end{figure}
Let us now be a bit more quantitative.
Initially, $H$ is large and the field is stuck at its initial value $\theta_0$.
Then, as long as the early-time mass $m_a(0)$ overcomes the Hubble friction before the QCD phase transition, the field will oscillate around $\pi$.
The condition for this to happen is roughly $f_a\gtrsim10^{17}$ GeV, but this value can be modified by the anharmonicities depending on the initial misalignment.
These oscillations continue until the temperature decreases to $T_{\rm crit}$, at which time the amplitude is approximately given by
\begin{equation}
\left( \pi-\theta_{\mathrm{crit}} \right) \simeq \left( \pi-\theta_0 \right)\left( \frac{\mathcal{F}(T_{\rm crit})}{\mathcal{F}(T_1)} \right)^{1/2}\left( \frac{f_a}{2\cdot 10^{17}\ \mathrm{GeV}} \right)^{3/4} f^{1/2}(\theta_0).
\end{equation}
Here, the anharmonicity function is given by \eqref{eq:cosine_AF} and $T_1$ is defined by $3H(T_1)=m_a(0)$.
The value of $\theta_{\mathrm{crit}}$ gives the initial condition for the oscillations that happen after the QCD phase transition, now around $\theta=0$ and with frequency given by the late-time mass $m$.
Typically, $\theta_{\mathrm{crit}}$ is very close to $\pi$ so the anharmonicites of the potential will play a key role.
Taking this into account, we can compute the energy density of the oscillating scalar field as
\begin{equation}\label{eq:N0_energy_density}
\rho\simeq 0.17 \frac{\mathrm{keV}}{\mathrm{cm}^3}\ \mathcal{F}(T_2)\ \sqrt{\frac{m}{\mathrm{eV}}}\left(\frac{f_a}{10^{11}\ \mathrm{GeV}}\right)^2 \ \theta^2_\mathrm{crit}\ f(\theta_{\mathrm{crit}}),
\end{equation}
where $3H(T_2)=m$.
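Equation \eqref{eq:N0_energy_density} can be evaluated directly. In this sketch the dilution factor $\mathcal{F}(T_2)$ and the anharmonicity $f(\theta_{\mathrm{crit}})$ are user-supplied placeholders (set to 1 below), since their computation requires the full thermal history.

```python
import math

def rho_alp(m_eV, fa_GeV, theta_crit, F2=1.0, f_anh=1.0):
    """Present ALP energy density in keV/cm^3; F2 and f_anh are placeholders."""
    return (0.17 * F2 * math.sqrt(m_eV)
            * (fa_GeV / 1e11) ** 2 * theta_crit ** 2 * f_anh)
```

The $\sqrt{m}$ and $f_a^2$ scalings are the same as in the standard misalignment estimate; all model dependence is hidden in the two placeholder factors.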
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\linewidth]{fig_N0_parameter_space_final}
\caption{Parameter space of the model defined by the potential given in \eqref{eq:temp_dep_N0}.
We present the isocurvature constraints for different values of $H_I$, ranging from $10^6$ to $10^{11}$ GeV.
A higher scale of inflation restricts the model to lie in the respective coloured areas.
For the purpose of visualisation we have continuously connected the solution in the two different regimes that we have considered, i.e. to the left and to the right of the QCD axion band.
All the other limits presented in~\figref{fig:ALP_parameter_space} are also applicable in this scenario, as they only depend on the dynamics of the field after the QCD phase transition.}
\label{fig:N0_parameter_space}
\end{figure}
We can then determine in what region of parameter space the right dark matter abundance can be generated with an initial misalignment angle $\theta_0$ of order $\orderof (1)$.
It is possible to either enhance or suppress the energy density given in \eqref{eq:N0_energy_density} by tuning the initial misalignment angle closer to zero or $\pi$.
However, due to equation \eqref{eq:isocurvature_anharmonic}, there is an enhancement of the isocurvature perturbations each time the field gets close to a maximum of the potential, where the anharmonicity function becomes large.
Because of this, the available tuning of the initial misalignment angle is very limited in this scenario due to the stringent constraints on isocurvature fluctuations.
\figref{fig:N0_parameter_space} shows how the allowed parameter space shrinks for larger values of the Hubble scale of inflation.
Despite the strong isocurvature constraints, we can see that this scenario populates some unexplored regions of parameter space to the left of the QCD axion line that could be probed by upcoming experiments looking for ALPs.
\subsubsection{Non-canonical case}
We now want to implement the temperature dependent potential~\eqref{eq:temp_dep_N0}
in our non-canonical ALP scenario.
In terms of the cosmological evolution of the field, this is effectively done by writing
\begin{equation}
V(\varphi) = f_a^2 m^2 (T) \left[ 1 - \cos \left( \frac{2}{N} \arctan \left( \tanh \frac{N\varphi}{2f_a} \right) \right) \right].
\end{equation}
This is the same potential as we had before, except that for high temperatures $T>T_{\rm crit}$ the mass squared will be negative and will be a function only of $f_a$, as given in \eqref{eq:temp_mass}.
This tells us that, depending on the value of the parameters $m$ and $f_a$, we will have two very different behaviours, which qualitatively can be understood as follows.
First, if the field does not start rolling until after the QCD phase transition, then all the dynamics and the observables will not be affected at all by the features of the potential at high temperatures.
This is because there is no evolution while the field is frozen by Hubble friction.
Only after it has acquired its late time mass $m$ does it start rolling, and thus the cosmological evolution is exactly as we computed in~\secref{sec:Cosmological_evolution}.
However, the key difference is that now the ALP is coupled to $G\tilde{G}$, so it may be tested by observables and experiments that exploit this coupling.
The other option is, of course, that the field starts rolling before the QCD phase transition.
Then the dynamics can depend strongly on the initial conditions and is rather complicated.
However, we will see that this scenario leads to an overproduction of ALPs whose energy density exceeds the observed CDM one.
As we are only interested in ALPs as dark matter candidates, the second scenario is not relevant for us, and we focus on the first one.
Let us now be more quantitative and compute what region of the parameter space allows for ALP dark matter with a non-canonical kinetic term and coupled to QCD.
As we have anticipated, this ALP will only be a good dark matter candidate if its evolution is frozen until after the QCD phase transition.
Then, the present ALP energy density will only depend on $f_a$, the present mass $m$ and the initial misalignment angle $\psi_0$.
The latter is given by equation \eqref{eq:anharm_energy_density}, and satisfies
\begin{equation}\label{eq:initial_misalignment_angle}
\psi_0^2\ \mathrm{e}^{bN\psi_0} \simeq \frac{7.26}{{\cal F}(T_1)} \sqrt{\frac{{\rm eV}}{m}} \left( \frac{10^{11}\text{ GeV}}{f_a} \right)^2,
\end{equation}
where $T_1$ is the temperature at which the oscillations start.
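Since the left-hand side of \eqref{eq:initial_misalignment_angle} is monotonically increasing in $\psi_0$, the required misalignment angle can be found by simple bracketing and bisection; $\mathcal{F}(T_1)$ is again treated as an external placeholder here.

```python
import math

def psi0_from_abundance(m_eV, fa_GeV, b, N, F1=1.0):
    """Solve psi_0^2 exp(b N psi_0) = K for psi_0 > 0 by bracketing + bisection."""
    K = 7.26 / F1 * math.sqrt(1.0 / m_eV) * (1e11 / fa_GeV) ** 2
    g = lambda p: p * p * math.exp(b * N * p) - K
    lo, hi = 0.0, 1.0
    while g(hi) < 0.0:       # grow the bracket until it contains the root
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```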
We now need to find what the region of the parameter space is where the field starts oscillating only after the QCD phase transition.
The QCD phase transition happens at a temperature of around $T_{\rm crit}\sim 160\ {\rm MeV}$, which corresponds to a Hubble parameter of $H(T_{\rm crit})\sim 10^{-11}\ {\rm eV}$.
By asking that $3H(T_{\rm crit}) > | V^{\prime\prime}(\psi_0) |^{1/2}$, we get the condition
\begin{equation}\label{eq:QCD_rolling_condition}
H(T_{\rm crit}) > 3.24\times 10^{-6}{\rm eV }\cdot \sqrt{2N\sin\frac{\pi}{2N}} \ {\cal F}(T_1)^{1/(2b)}\ \psi_0^{1/b} \left( \frac{m}{{\rm eV}} \right)^{1/(4b)} \left( \frac{f_a}{10^{11}\text{ GeV}} \right)^{1/b-1}.
\end{equation}
This region is plotted in~\figref{fig:temperature_BBN_EDM}, together with the further cosmological and astrophysical bounds that restrict the parameter space.
Finally we still have to justify our claim that if the field starts oscillating before the QCD phase transition we always get an overproduction of ALPs.
For a given $(m,f_a)$, any initial misalignment angle bigger than the one given by \eqref{eq:initial_misalignment_angle} will lead to an energy density in ALPs greater than the observed dark matter one.
But if the condition \eqref{eq:QCD_rolling_condition} is not satisfied, then the field will start rolling towards bigger $\psi$ values, because $m^2(T)<0$ at high temperatures.
Thus, the effect of the rolling at high temperatures is to drive the field away from the required misalignment angle to give the correct dark matter abundance.
This statement is independent of what misalignment angle we start with, and thus rules out ALPs in the region coloured in white in~\figref{fig:temperature_BBN_EDM} as dark matter candidates\footnote{Such an overproduction could, e.g. be ameliorated in scenarios with two stages of inflation~\autocite{davoudiasl_inflatable_2016,hoof_qcd_2017}.}.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth]{fig_Noncan_QCD_parameter_space_N1_final}
\label{fig:Noncan_QCD_parameter_space_N1}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth]{fig_Noncan_QCD_parameter_space_N10_final}
\label{fig:Noncan_QCD_parameter_space_N10}
\end{subfigure}\\[-3ex]
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth]{fig_Noncan_QCD_parameter_space_N100_final}
\label{fig:Noncan_QCD_parameter_space_N100}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth]{fig_Noncan_QCD_parameter_space_N200_final}
\label{fig:Noncan_QCD_parameter_space_N200}
\end{subfigure}\\[-3ex]
\caption{Parameter space for ALPs with a non-canonical kinetic term of the form \eqref{eq:non_canonical_kinetic_term} coupled to QCD via a $G\tilde{G}$ term, along with constraints coming from its cosmological evolution and searches for an oscillating EDM. Each panel represents a different value of the parameter $N$. This scenario can provide the right dark matter density in the yellow shaded region, while the areas excluded by overproduction of dark matter or the condition \eqref{eq:second_inflation_limit} to avoid a second period of inflation are coloured in white. The brown region is excluded by re-analysing data originally intended to search for a static neutron EDM in order to look for an oscillating one~\autocite{abel_search_2017}. The dark green region in the first figure is inconsistent with the production of the observed abundance of light elements during Big Bang Nucleosynthesis~\autocite{blum_constraining_2014}. This limit is effective only for $N<4$ and absent in the other figures. Similarly, the limit from~\autocite{hook_probing_2017} corresponding to the ALP field being sourced at the Sun only applies for small values of $N$, while the Earth one stays valid in all cases. Finally, $f_a$ is (softly) bounded from above by the requirement that it does not exceed the Planck scale, and from below by the supernova limits estimated in~\autocite{raffelt_astrophysical_2008}.}
\label{fig:temperature_BBN_EDM}
\end{figure}
\subsection{Big Bang Nucleosynthesis}\label{secbbn}
Aside from potential over- (or under-)production, there is an additional constraint that rules out large areas of experimentally accessible parameter space. It arises from cosmology, more precisely from BBN~\autocite{blum_constraining_2014}. A non-vanishing $\theta$ angle at the time of BBN can spoil the production of light elements such as $^4$He.
This is due to the fact that a non-vanishing $\theta$ angle induces a difference between the mass of the proton and the neutron~\autocite{ubaldi_effects_2010}
\begin{equation}
\delta Q \equiv m_n-m_p = c_+ \frac{m_d^2-m_u^2}{\sqrt{m_u^2+m_d^2+2m_um_d\cos\theta}},
\end{equation}
where $c_+\simeq 2.5$ can be determined by looking at the mass splitting $M_\Theta-M_N$ in the baryon octet~\autocite{crewther_chiral_1979}.
A larger mass splitting means that the freeze-out abundance of neutrons with respect to protons would be lower.
In addition to that, the free neutron decay rate is enhanced, which means that more neutrons decay between freeze-out and nucleosynthesis.
This depletion of neutrons\footnote{There are other effects that play a role, like the change in the deuteron binding energy or the rise in the freeze-out temperature. We have found that the contribution of these effects is smaller than the one considered above, so we neglected them for this analysis.} eventually turns into an underproduction of $^4$He.
Based on the discussion in~\autocite{blum_constraining_2014,stadnik_can_2015}, these effects result in a shift that can be estimated as
\begin{equation}
\frac{\delta Y_p}{Y_p} \equiv \frac{Y_p^0-Y_p(\theta)}{Y_p^0} = \left( 1- \frac{Y_p^0}{2} \right) \left( \frac{\delta \left(n/p\right)_{\mathrm{fr}}}{\left(n/p\right)_{\mathrm{fr}}} + \delta\Gamma_n t_{\mathrm{nuc}} \right) \simeq 0.66 \theta^2.
\end{equation}
Here we used the values $Y_p^0=0.25$ and $t_{\mathrm{nuc}}=880$\,s~\autocite{mukhanov_physical_2005}.
One can now take the conservative limit $|\delta Y_p/Y_p|<10\%$ to see that successful nucleosynthesis requires
\begin{equation}
\theta_\mathrm{BBN} < 0.39.
\end{equation}
In our non-canonical model, the first thing we notice is that the $\theta$ angle is bounded,
\begin{equation}
\left| \theta \right| = \left| \frac{2}{N}\arctan\left( \tanh \frac{N\psi}{2} \right) \right| \leq \frac{\pi}{2N},
\end{equation}
so the BBN bound is completely avoided if $N>4$.
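A quick numerical check of the two statements above (the numbers follow from the simple estimates in the text, not from a full BBN code):

```python
import math

# Solve 0.66 * theta^2 = 10% for the BBN bound on theta
theta_bbn = math.sqrt(0.10 / 0.66)   # ~0.39, as quoted in the text

def theta_max(N):
    """Maximal |theta| reachable in the non-canonical model: pi / (2N)."""
    return math.pi / (2.0 * N)
```

For $N=5$ one finds $\pi/10\simeq0.31<0.39$, so the bound is automatically satisfied, while $N=4$ gives $\pi/8\simeq0.393$, marginally above it, consistent with the statement that the bound is avoided for $N>4$.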
For smaller $N$ we are in the region of small $\theta$, and the behaviour is approximately that of a canonical ALP.
Here we use the bound given in~\autocite{blum_constraining_2014}.
The corresponding excluded region is shaded in darker and labeled ``BBN'' in~\figref{fig:temperature_BBN_EDM}.
\section{Conclusions}\label{sec:Conclusions}
The question raised in this paper can be summarised in the following way: Is it possible to have an axion-like particle (ALP) with a non-canonical kinetic term as a phenomenologically viable and interesting dark matter candidate? Our study points towards an affirmative answer.
Using, in particular, a non-canonical term with singularities similar to those used in $\alpha$-attractor models of inflation, we find a significantly enlarged parameter space for dark matter.
In particular, regions with larger couplings -- where canonical ALPs are underproduced -- now become viable, offering interesting possibilities for near future experiments.
For the production via the misalignment mechanism the key feature of the non-canonical kinetic term is that today's ALP energy density is enhanced due to a delay in the start of the oscillations.
This arises because the effective potential is flattened by the growing non-canonical kinetic term, which also makes the field range of the physical field unbounded.
As a consequence, any combination of mass and decay constant can generate enough ALP energy density to account for all the dark matter that we observe in the universe.
An important cosmological constraint arises from isocurvature fluctuations imprinted by inflation.
To apply these constraints to our scenario we give a simple derivation of the size of isocurvature fluctuations in general models with arbitrary potential and even a temperature dependence of the potential.
As a useful crosscheck we have updated the isocurvature constraints~\autocite{visinelli_dark_2009, kobayashi_isocurvature_2013} using the newest Planck data~\autocite{ade_planck_2016} and the most recent results for the QCD topological susceptibility~\autocite{borsanyi_axion_2016}.
The result can be found in \figref{fig:QCD_isocurvature_cosine}.
In our non-canonical setup the isocurvature constraints are even slightly weaker as can be seen in \figref{fig:QCD_isocurvature_noncan}.
An interesting non-trivial situation arises if the ALP is coupled to the strong interactions, i.e. via a term $\sim\phi G\tilde{G}$.
This is of particular interest since a number of experiments are currently searching for ALP dark matter with this coupling~\autocite{abel_search_2017, budker_cosmic_2014,hexenia_2017}.
The coupling to gluons leads to two non-trivial features: the generation of a temperature-dependent, irreducible contribution to the ALP mass, and an effective nucleon mass that depends on the ALP field value.
The former naively makes large parts of the low mass region explored by current experiments inaccessible~\autocite{blum_constraining_2014}.
This can be avoided by invoking a precise cancellation with an additional term in the ALP potential (with or without non-canonical terms).
The latter leads to strong constraints from Big Bang Nucleosynthesis.
These are significantly weakened in our scenario with a non-canonical kinetic term.
This opens up significant parameter space that can be explored in near-future experiments such as CASPEr~\autocite{budker_cosmic_2014} and HeXeniA~\autocite{hexenia_2017}, as well as EDM storage rings~\autocite{chang_axion_2017}.
\section*{Acknowledgements}
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreements No $690575$ (RISE InvisiblesPlus) and No $674896$ (ITN ELUSIVES).
Genome dynamics is an important problem in population genetics [1-3]
and in molecular evolution [4-9]. Many authors investigated dynamics
of evolution [10-13]. The Crow-Kimura and the Eigen models are very
popular in evolution theory, describing quite well population
genetics, the RNA virus evolution, and artificial evolution of
molecules. The Crow-Kimura model describes evolutionary process
where mutation and selection are two parallel processes and
describes mutations during the life time. The Eigen model describes
the case where mutations occur during the birth of new viruses
(molecules) and is quite realistic for the RNA virus evolution.
While an exact solution is known for the simple case of single-peak
fitness [14-16], there has been no success thus far in calculating the
exact dynamics for a general fitness landscape. As in molecular
evolution, there are numerous attempts to solve this problem at
least approximately [10-13]. The fact is that evolution models are
very subtle mathematical objects and approximate solutions often
give misleading or inadequate results, especially in dynamics.
Finding exact dynamics for these two models is well-known to be
still an open issue. In this article we introduce Hamilton-Jacobi
equations (HJE) as a means to resolve it. These equations have been
already applied in evolution theory to investigate population
genetics of virus evolution with a finite population \cite{ro03}. In
Ref.~\cite{ro03} HJE were applied and solved approximately for
linear fitness. HJE have also been utilized in Refs.~[18-19] to
derive exact steady-state solutions for evolution models with a
general fitness. In this work we show that it is possible to obtain
exact dynamical solutions of the Hamilton-Jacobi equations for the
models where fitness is defined in terms of the Hamming distance
from a reference (wild) sequence. The possibility of having
analytical solutions that give the dynamics in a closed form is an
important breakthrough in the theory of biological evolution. It
allows one to investigate a plethora of evolutionary pathways
within one consistent formalism. By mapping evolution model to
Hamiltonian mechanics and looking at the corresponding potential, it
is possible to derive the phase structure of the dynamics when the exact
dynamics are unavailable by other means. We show here how to
precisely calculate the movement of the maximum of the distribution
for the population originally localized at a fixed distance from a
reference sequence. This article is organized as follows. In Sec.~II
we review the known results for the Crow-Kimura model, analyze its
dynamics via HJE when population is initially localized at some
Hamming distance from a reference sequence, and investigate the case
when originally population is uniformly distributed across the
sequence space. In Sec.~III we solve the dynamics of the Eigen
model. Our results are discussed in Sec.~IV.
\section{\bf The Crow-Kimura model}
\subsection{Main known results}
The $2^N$ genome configuration sequences are defined as chains of
$N$ spins $s_n, 1\le n \le N$, that can take on only two values
$s_n=\pm 1$. The reference configuration has all spins $+1$. The
Hamming distance between a given configuration and the reference
configuration is $\sum_n(1-s_n)/2 = N(1-m)/2$, where $m$ is an
overlap. This model describes the dynamics of the probability
distribution. We denote configuration $i$ by $S_i\equiv ({s_i^1,
\dots, s_i^N})$. The state of the system is specified by $2^N$
relative frequencies $P_i, 1 \le i \le 2^N$:
\begin{eqnarray}
\label{e1}
\frac{{dP}_i}{dt}=\sum_jA_{ij}P_j-P_i\sum_jP_jr_j ,\nonumber\\
A_{ij}=\delta_{ij}r_j+m_{i j}.
\end{eqnarray}
Here $m_{ij}$ is the rate of mutation from configuration $S_j$ to a
new configuration $S_i$, and $r_{i}$ is the fitness. Two
configuration states have a Hamming distance $d_{ij}=(N-\sum_k s_i^k
s_j^k)/2$, and $m_{ii}=-\gamma_0 N$. When $d_{ij}=1$ then
$m_{ij}=\gamma_0$ and $m_{ij}=0$ for $d_{ij}>1$ \cite{ba97}. For
index $i$, the set of values $1\le i \le 2^N$ is equivalent to the
collection of $N$ spins $s_k$. Identifying $f_0(s_1...s_N)\equiv
r_i$, we define the mean fitness $R$:
\begin{eqnarray}
\label{e2}R\equiv \sum_iP_ir_i.
\end{eqnarray}
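To make the dynamics of Eq.~(\ref{e1}) concrete, here is a minimal forward-Euler integration for $N=4$ spins with a single-peak fitness ($r_i=AN$ for the reference sequence and $0$ otherwise). All numerical choices ($N$, $A$, $\gamma_0$, the time step) are illustrative assumptions, not parameters from the text.

```python
import itertools

N, A, GAMMA0, DT, STEPS = 4, 3.0, 1.0, 1e-3, 20000

seqs = list(itertools.product([1, -1], repeat=N))
M = len(seqs)                                          # 2^N configurations
r = [A * N if s == (1,) * N else 0.0 for s in seqs]    # single-peak fitness r_i
# neighbours at Hamming distance 1: the only nonzero off-diagonal mutation rates
nbrs = [[j for j in range(M)
         if sum(a != b for a, b in zip(seqs[i], seqs[j])) == 1]
        for i in range(M)]

P = [1.0 / M] * M                                      # uniform initial P_i
for _ in range(STEPS):
    R = sum(p * ri for p, ri in zip(P, r))             # mean fitness, Eq. (2)
    dP = [r[i] * P[i] - GAMMA0 * N * P[i]
          + GAMMA0 * sum(P[j] for j in nbrs[i]) - P[i] * R
          for i in range(M)]
    P = [p + DT * d for p, d in zip(P, dP)]
s = sum(P)
P = [p / s for p in P]                                 # remove Euler drift
```

After relaxation the distribution localises on the reference sequence, and the nonlinear term $-P_i\sum_j P_j r_j$ keeps the frequencies normalised up to the integration error.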
The model defined here by Eq.(\ref{e1}) was introduced in
Ref.~[3] to describe the evolution of \emph{Drosophila} in a multi-allele
model with simultaneously present mutation and selection processes.
Because this model describes the genetics of diploid evolution in an
infinite population, random drift is necessarily absent. The
diploid evolution model of Ref.[3] is described by an equation in
analogy with Eq.(\ref{e1}) except that $r_i$ are linear functions of
$p_i$. In the model of [4] our Eq.(\ref{e1}) describes asexual
evolution of an infinite population when there are either many alleles in
one locus or many loci with two alleles in each. The selection and
mutation processes are decoupled in Eq.(\ref{e1}), i.e., our model
describes selection and mutation as parallel processes. This
differs from the well-known model introduced by Eigen
\cite{ei71,ei89}, where it is assumed that mutations originate as
replication errors on the occasion of reproduction events. Nowadays
Eigen's model is widely applied to describe virus evolution.
The model of Ref.~\cite{ba97}, as well as Eigen's model
\cite{ei71,ei89}, has been suggested as a molecular evolution model.
Both the ``connected mutation-selection'' scheme of
Refs.~\cite{ei71,ei89} and the ``parallel'' (``decoupled'') scheme of
Ref.~\cite{ba97} give similar pictures of evolution
with only a slight difference in dynamics (e.g., see Fig.~1 in
Ref.~\cite{sh04b}). The difference between the connected
mutation-selection scheme and the parallel mutation-selection scheme of
this work becomes transparent when both models are treated by a
quantum Hamiltonian approach \cite{ba97,sh04a}: the parallel scheme
is described in terms of Hermitian Hamiltonian and the connected
scheme is described in terms of non-Hermitian Hamiltonian.
The value of $R$ in the steady state ($dP_i/dt=0$) is the main target of
theoretical investigations. One can calculate $R$ as maximal
eigenvalue of a matrix $A_{ij}$ \cite{ei89,bw01}. Connection between
the Crow-Kimura model and quantum mechanics has been established in
Ref.~\cite{ba97}, where matrix $-A_{ij}$ has been identified with
the quantum Hamiltonian $H$ for $N$ interacting quantum spins. One
can calculate the maximal eigenvalue of the operator $-H$
\cite{bw01,sh04c} as
\begin{eqnarray}
\label{e3} R=\lim_{\beta\to\infty}\frac{\ln Tr \exp[-\beta
H]}{\beta},
\end{eqnarray}
where
\begin{eqnarray}
\label{e4}
-H=\gamma_0\sum_{k=1}^N(\sigma^x_k-1)+f_0(\sigma^z_1...\sigma^z_N),
\end{eqnarray}
where $\sigma^z_k$ and $\sigma^x_k$ are Pauli matrices acting on the
spin in the $k$th position \cite{sh04c}. We are interested in the
symmetric-fitness case with $f_0(s_1...s_N)\equiv
Nf(\sum_{k=1}^Ns_k/N)$. For a symmetric fitness function and
permutation-symmetric initial distributions, all configurations at
the Hamming distance $l$ from the reference sequence (selected with
$s_n=1, 1\le n\le N$) have the same probability, so the probability
of the entire class of configurations at distance $l$ is proportional
to $p_l$. For symmetric fitness the mean fitness is calculated as in
Refs.~\cite{ba02,sh04c,ba98}:
\begin{eqnarray}
\label{e5}
\frac{R}{N}\equiv k= \max_{-1\le x\le1}U(x), \nonumber\\
U(x)= f(x)-1+\sqrt{1-x^2}.
\end{eqnarray}
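As a consistency check of Eqs.(\ref{e3})--(\ref{e5}) (the sketch below is ours and purely illustrative; the genome length and fitness are chosen small enough for exact diagonalization), the largest eigenvalue of the full $2^N\times 2^N$ operator $-H$ can be compared with the largest eigenvalue of the $(N+1)\times(N+1)$ Hamming-class matrix of Eq.(6) below:

```python
import numpy as np

def class_matrix(N, f):
    # (N+1)x(N+1) matrix of Eq. (6) acting on class probabilities p_l (gamma_0 = 1)
    A = np.zeros((N + 1, N + 1))
    for l in range(N + 1):
        A[l, l] = N * f(1.0 - 2.0 * l / N) - N
        if l > 0:
            A[l, l - 1] = N - l + 1   # gain from class l-1
        if l < N:
            A[l, l + 1] = l + 1       # gain from class l+1
    return A

def minus_H(N, f):
    # -H = sum_k (sigma^x_k - 1) + N f(sum_k sigma^z_k / N), a 2^N x 2^N matrix
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    M = np.zeros((2 ** N, 2 ** N))
    for k in range(N):
        term = np.array([[1.0]])
        for j in range(N):
            term = np.kron(term, sx if j == k else np.eye(2))
        M += term - np.eye(2 ** N)
    # diagonal fitness term: sum_k s_k = N - 2 * (number of down spins)
    m = np.array([N - 2 * bin(i).count("1") for i in range(2 ** N)]) / N
    return M + np.diag(N * f(m))

f = lambda x: x ** 2      # quadratic fitness, c = 2
N = 6
R_full = np.linalg.eigvalsh(minus_H(N, f)).max()
R_class = np.linalg.eigvals(class_matrix(N, f)).real.max()
```

The two eigenvalues coincide because the top eigenvector of $-H$ is permutation symmetric; $R/N$ approaches the maximum of $U(x)$ in Eq.(\ref{e5}) only as $N\to\infty$.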
The maximum in Eq.(\ref{e5}) occurs at $x= x_c$. It follows
from Eq.(\ref{e3}) that $x_c$ can be interpreted as the ``bulk
magnetization'', in analogy with other models of statistical mechanics
\cite{ba97,bw01,ba98}:
$$ x_c=\lim_{\beta\to\infty}\frac{ \mathrm{Tr}\, \exp[-\beta
H]\sum_{k=1}^N\sigma^z_k}{N \mathrm{Tr}\, \exp[-\beta H]}. $$
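For a concrete fitness the maximum principle is easy to evaluate numerically. The following sketch (ours; the quadratic fitness and the grid resolution are illustrative choices) computes $k$ and $x_c$ from Eq.(\ref{e5}) for $f(x)=cx^2/2$ with $c=2$ and compares them with the values obtained from the condition $U'(x_c)=0$:

```python
import numpy as np

c = 2.0
f = lambda x: c * x ** 2 / 2.0
x = np.linspace(0.0, 1.0, 100001)          # positive branch suffices: U is even
U = f(x) - 1.0 + np.sqrt(1.0 - x ** 2)     # the potential of Eq. (5)
k_num = U.max()                            # mean fitness per site
x_c_num = x[np.argmax(U)]                  # bulk magnetization
k_exact = (c - 1.0) ** 2 / (2.0 * c)       # from U'(x_c) = 0 for the quadratic fitness
x_c_exact = np.sqrt(1.0 - 1.0 / c ** 2)
```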
Although $x_c$ lacks a direct
biological meaning, we need to find $x_c$ to calculate the mean
fitness. For a symmetric fitness function and a permutation-invariant
initial distribution there is a closed set of differential equations for
the ($N+1$) relative probabilities $p_l,\,0\le l\le N$ \cite{bw01}:
\begin{eqnarray}
\label{e6}
\frac{d{p_l}}{dt}= \nonumber \\
p_l [Nf(1-\frac{2l}{N})- N]+(N-l+1)p_{l-1}+(l+1)p_{l+1}.
\end{eqnarray}
The probability of finding the population in the class of configurations
at the Hamming distance $l$ is $p_l/\sum_kp_k$. The mapping of the
system of nonlinear equations (1) onto the system of linear equations
(6) was derived in Refs.~[20-21]. In Eq.(6) we omit $p_{-1}$ and
$p_{N+1}$ for $l=0$ and $l=N$, respectively, and set $\gamma_0=1$. In
biological applications a magnetization-like measure of surplus, or
surface magnetization, can be defined as
\begin{equation}
\label{e7} x_m=\frac{\sum_l(1-2l/N)p_l}{\sum p_l}.
\end{equation}
The main goal of this work is to calculate the dynamics of $x_m$ from
a given initial distribution. Having the value of $x_c$ it is possible
to calculate the steady-state value of $x_m$ by solving:
\begin{equation}
\label{e8} f(x_m)=k.
\end{equation}
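The steady-state relations can be checked by integrating Eq.(\ref{e6}) directly. The sketch below (ours; the genome length, time step, and initial data are illustrative) propagates the class probabilities for $f(x)=x^2$, i.e. $c=2$, and compares the surplus of Eq.(\ref{e7}) with the solution of Eq.(\ref{e8}), which for this fitness is $x_m=0.5$ up to $O(1/N)$ corrections:

```python
import numpy as np

N = 200
c = 2.0
f = lambda x: c * x ** 2 / 2.0
l = np.arange(N + 1)
x = 1.0 - 2.0 * l / N
diag = N * f(x) - N

p = np.exp(-N * (x - 0.9) ** 2)        # initial distribution (14) with a = 1, x_0 = 0.9
p /= p.sum()
dt = 2e-4
for _ in range(int(15.0 / dt)):        # explicit Euler integration of Eq. (6)
    dp = p * diag
    dp[1:] += (N - l[1:] + 1.0) * p[:-1]
    dp[:-1] += (l[:-1] + 1.0) * p[1:]
    p += dt * dp
    p /= p.sum()                       # normalization leaves the ratios p_l unchanged
x_m_num = float((x * p).sum())         # surplus, Eq. (7)
k_num = float((f(x) * p).sum())        # mean fitness per site in the steady state
```

Since Eq.(\ref{e6}) is linear, the normalized Euler iteration converges to the dominant eigenvector, so the only systematic deviation from $x_m=0.5$ and $k=0.25$ is the finite-$N$ correction.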
Various interpretations of the bulk magnetization $x_c$ and the surface
magnetization $x_m$ were analyzed in Refs.~\cite{ba98,bw01}. In the next
sections we solve the model dynamics and determine the explicit
role of $x_c$ in the various dynamical sub-phases.
\subsection{\bf HJE for Crow-Kimura model}
As in Ref.~\cite{sh07}, at the discrete values $x=1-2l/N$ we use the ansatz
$p_l(t)\equiv p(x,t)\sim \exp[Nu(x,t)]$. Equation~(6) can then be
written as a Hamilton-Jacobi equation for $u\equiv \ln p(x,t)/N$ (in
\cite{sh07} we gave an equation for the individual probabilities in the
sequence):
\begin{equation}
\label{e9} \frac{\partial u}{\partial t}+H(u',x)=0,
\end{equation}
where $u'= \partial u/\partial x$,
\begin{equation}
\label{e10} -H(u',x)=f(x)-1+\frac{1+x}2e^{2u'}+\frac{1-x}2e^{-2u'},
\end{equation}
where the domain of $x$ is $ -1\le x\le 1$, and the initial
distribution is $u(x,0)=u_0(x)$. Equation~(9) describes the class
probabilities; the equation describing the probability of a single
sequence was given in Ref.~\cite{sh07}. In the limit $t \to
\infty$ the asymptotic solution of Eq.~(9) is
\begin{eqnarray}
\label{e11} u(x,t;k) = kt+u_k(x),
\end{eqnarray}
where $u_k(x)$ can be calculated from Eq.(9) \cite{sh07} and the
mean fitness is $Nk$. The function $U(x)$ in Eq.(5) has a simple
physical interpretation as a potential, i.e., the minimum of $-H(v,x)$
with respect to $v$ at a fixed $x$: $U(x)=\min_v[- H(v,x)]$. It is
well known from mechanics that motion is possible on an interval
when the energy of the system is larger than the potential $U(x)$
inside this interval. In the maximum-principle approach the largest
eigenvalue is identified with the mean fitness $k$. Similarly, $-k$ is
the maximal energy of the Hamiltonian $H(v,x)$ in Eq.~(10). A realistic
hypothesis is that the asymptotic solution
$u(x,t;k)$ is stable against perturbations only if $k$ is calculated
according to Eq.~(5). It is possible to obtain further results even
without solving the dynamics exactly. We know from mechanics that
motion in a potential with a single minimum is drastically
different from motion in a potential with two or more minima.
Therefore, when the function $U(x)$ in Fig.~1 changes from the shape
depicted by the continuous line to that of the dashed line, which
exhibits a potential well with additional maxima and minima near
$x=0$, we should anticipate a phase transition.
\begin{figure}
\centerline{\includegraphics[width=0.6\columnwidth]{ffigure1.eps}}
\caption{ Function $U(x) = f(x)+\sqrt{1-x^2}-1$ for $f(x)=x^2$
(solid curve) and for $f(x)=4\exp(-8+8x)$ (dashed curve). For the
latter there are two extrema where $U'(x)=0$: the maximum at
$x\approx 0.9995$ (it is too high and is not shown in the plot) and the
minimum at $x\approx 0.497$. } \label{fig1}
\end{figure}
Here, we focus on the fitness $f(x)=cx^2/2$ [4] (the solid curve in
Fig.~1 corresponds to $c=2$). It follows from Eq.~(5) that in this
case $U(x)$ has extrema located on the interval $[-1,+1]$: the
minimum at $x=0$ and the maxima at $x=\pm x_c$. To solve Eq.(\ref{e9})
subject to the given initial data we use a standard procedure
\cite{mel98,eva02}, which allows one to reduce the corresponding partial
differential equation to a system of ordinary differential
equations. Namely, consider the following set of equations:
\begin{equation}
\begin{aligned}
\label{e12}
\dot x =H_v(x,v)=-(1+x)\>e^{2v}+(1-x)\>e^{-2v},\\
\dot v = -H_x(x,v)=f'(x) + (e^{2v}-e^{-2v})/2, \\
\dot u = v\,H_v(x,v)-H(x,v)=v\dot x+q,
\end{aligned}
\end{equation}
subject to the following initial conditions: $x(0)=x_0$,
$v(0)=v_0(x_0)$, $u(0) = u_0(x_0)$. Here, $v=\partial u/\partial x$,
$v_0(x)= u_0'(x)$, and $q=\partial u/\partial t$. The corresponding
solution of Eq.~(\ref{e12}) in the $(x,t)$-space is called the {\it
characteristic} of Eq.~(\ref{e9}). Further, Eqs.~(\ref{e9}) and
(\ref{e12}) imply $\dot q=0$. Along the characteristic $x=x(t)$ the
variable $q$ is constant, so $q$ is selected to parameterize these
curves. Using the relation
$q=f(x)-1+\frac{1+x}{2}e^{2v}+\frac{1-x}{2}e^{-2v}$,
we transform the first equation in Eq.(\ref{e12}) into
\begin{equation}
\label{e13} \dot x = \pm2\sqrt{[q+1-f(x)]^2 + x^2 - 1}.
\end{equation}
Having the solution of the characteristic system (12),
we can recover the solution of the original Eq.(\ref{e9})
\cite{mel98} by integrating the equation $\dot u = v\dot x+q$. For
biological applications it is important to know the motion of the
distribution maxima. To find this motion, consider the
following initial distribution
\begin{equation}
\label{e14} u_0(x) = -a(x-x_0)^2.
\end{equation}
It is relatively easy to derive relaxation formulae for large values
of the parameter $a$. We can calculate them directly from Eq.~(13),
using the relation $q(x^*,t^*)=f(x^*)$ at the location $x^*$ of the
maximum. The maximum of the distribution moves along the branch of
Eq.(\ref{e13}) that preserves the sign of $x_0$. Integrating
Eq.(\ref{e13}) along the characteristic through the point
$(x^*,t^*)$ and assuming that $\dot x(t)$ does not change its sign,
we obtain
\begin{equation}
\label{e15} t^* = \frac{\mathop{\rm sgn}\nolimits x_0}{2}\int\limits_{x^*}^{x_0}
\frac{d\xi}{\sqrt{(f(x^*)+1-f(\xi))^2+\xi^2-1}}.
\end{equation}
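Equation (\ref{e15}) is a one-dimensional quadrature and can be evaluated by standard routines. In the sketch below (ours; all parameters are illustrative) we take $f(x)=cx^2/2$ with $c=2$ and $x_0=0.95$; the travel time $t^*$ grows monotonically as $x^*$ decreases towards the steady-state point, with the radicand developing a near-zero around $\xi\approx x_c$, where the motion slows down:

```python
import numpy as np
from scipy.integrate import quad

c = 2.0
f = lambda x: c * x ** 2 / 2.0
x0 = 0.95                      # initial position of the maximum, x_c < x0 < 1

def t_star(x_s):
    # Eq. (15): time for the distribution maximum to move from x0 down to x_s
    g = lambda xi: 1.0 / np.sqrt((f(x_s) + 1.0 - f(xi)) ** 2 + xi ** 2 - 1.0)
    val, _ = quad(g, x_s, x0)
    return 0.5 * val           # sgn(x0) = +1 here

times = [t_star(xs) for xs in (0.85, 0.70, 0.55)]
```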
If at some point $x_1$ the characteristic $x(t)$ changes its
direction, then the point $x_1$ can be determined from the condition
\begin{equation}
\label{e16} [f(x^*)+1-f(x_1)]^2+x_1^2-1=0.
\end{equation}
In the latter case the integrals should be summed up over the
intervals $(x_0,x_1)$ and $(x^*,x_1)$. This summation gives
\begin{equation}
\label{e17}
\begin{aligned}
t^* = \frac{\mathop{\rm sgn}\nolimits x_0}2&(\int\limits_{x_0}^{x_1}
\frac{d\xi}{\sqrt{(f(x^*)+1-f(\xi))^2+\xi^2-1}}\\+&
\int\limits_{x^*}^{x_1}
\frac{d\xi}{\sqrt{(f(x^*)+1-f(\xi))^2+\xi^2-1}}).
\end{aligned}
\end{equation}
Let $T_1$ be such that for $t\le T_1$ Eq.(\ref{e15}) holds, and for
$t>T_1$ Eq.(\ref{e17}) holds. At $T_1$ we have the condition
\begin{equation}
\label{e18} T_1=\frac{\mathop{\rm sgn}\nolimits x_0}{2}\int\limits_{X_1}^{x_0}
\frac{d\xi}{\sqrt{(f(X_1)+1-f(\xi))^2+\xi^2-1}},
\end{equation}
where $X_1$ is a root of $[f(X_1)+1-f(x_0)]^2+x_0^2-1=0$. For the
quadratic fitness $f(x)=cx^2/2$ with $c>0$ a selective phase exists
at $c>1$. Then, $x_m=1-\frac{1}{c}$ and $x_c=\sqrt{1-c^{-2}}$ [4].
When $t\rightarrow\infty$ the maximum converges to $x = x_m$. To
determine the dynamics of the maximum for $-x_c\le x_0\le x_c$ we use
Eqs.(\ref{e15}) and (\ref{e17}), where
$$
x_1 = \mathop{\rm sgn}\nolimits
x_0\>\frac{\sqrt{c^2{x^*}^2+2(c-1)-2[(c-1)^2-c^2{x^*}^2]^{1/2}}}c.
$$
In the region where $x_c\le|x_0|\le1$ we use Eq.(15). To find $T_1$
in accordance with Eq.(\ref{e18}) we use
\begin{equation}
\label{e19} X_1 = \mathop{\rm sgn}\nolimits
x_0\sqrt{x_0^2-\frac{2[1-(1-x_0^2)^{1/2}]}c}.
\end{equation}
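Both turning-point expressions can be verified directly against the conditions they solve. The following check (ours; the values of $c$, $x_0$ and $x^*$ are illustrative) confirms that $X_1$ of Eq.(\ref{e19}) satisfies $[f(X_1)+1-f(x_0)]^2+x_0^2-1=0$ and that the expression for $x_1$ above satisfies Eq.(\ref{e16}):

```python
import numpy as np

c = 2.0
f = lambda x: c * x ** 2 / 2.0

# X_1 of Eq. (19) for the characteristic starting at x_0
x0 = 0.3
X1 = np.sign(x0) * np.sqrt(x0 ** 2 - 2.0 * (1.0 - np.sqrt(1.0 - x0 ** 2)) / c)
res_X1 = (f(X1) + 1.0 - f(x0)) ** 2 + x0 ** 2 - 1.0

# turning point x_1 for a maximum located at x^* (requires |x^*| < (c-1)/c)
xs = 0.2
S = np.sqrt((c - 1.0) ** 2 - c ** 2 * xs ** 2)
x1 = np.sign(xs) * np.sqrt(c ** 2 * xs ** 2 + 2.0 * (c - 1.0) - 2.0 * S) / c
res_x1 = (f(xs) + 1.0 - f(x1)) ** 2 + x1 ** 2 - 1.0   # residual of Eq. (16)
```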
Figure~2 shows the evolution of the maximum for $c=2$ and $x_0 =
0,\,0.1,\,0.3,\,0.7,\,0.95$. These results demonstrate the excellent
agreement of the analytic solutions given by Eqs.(15) and (17) with the
results of the numerical integration of Eq.(6). Note that Fig.~2 shows
that for $x_0<x_m$ the maximum initially moves away from the wild
configuration and returns to its neighborhood at later times. The
minimal value of $x^*(t)$ is just $X_1$.
\begin{figure}
\centerline{\includegraphics[width=0.6\columnwidth]{ffigure2.eps}}
\caption{ The dynamics of the maximum point $x(t)$ for the
Crow-Kimura model ($f(x)=x^2$) for different initial values $x_0$ in
the distribution (14). The continuous curves are analytic results of
Eqs.(15) and (17). The symbols are the results of numerical
solutions of the Crow-Kimura model given by Eq.(6), where $N=1000$.
} \label{fig2}
\end{figure}
If $x^*(t)$ describes the position of the maximum, then
$v(x^*(t),t)=\frac{dv(x^*(t),t)}{dt}=0$ and Eqs.(\ref{e12}) give
\begin{eqnarray}
\label{20} \frac{dx^*(t)}{dt} = -2 x^*(t) -
\frac{f'(x^*(t))}{u_{xx}(x^*(t),t)},\quad x(0) = x_0,
\end{eqnarray}
where $u_{xx}(x,t) = {\partial v}/{\partial x}$. Whether the
maximum of the distribution moves towards the wild sequence or in
the opposite direction depends on the sign of
$f'(x^*(t))+2x^*(t)u_{xx}(x^*(t),t)$.
\subsection{\bf The flat original distribution}
When all $2^N$ configurations are uniformly populated, the
initial condition for the entire probability class, which has
probability ${N \choose N(1+x)/2}\frac{1}{2^N}$,
yields
\begin{eqnarray}
\label{e21} u_0(x)= -\frac{1+x}{2} \ln\frac{1+x}{2}-\frac{1-x}{2}
\ln\frac{1-x}2.
\end{eqnarray}
The distribution (21) has a peak at $x=0$. Let us calculate the
threshold time $T_2$ such that for $t\le T_2$ the population peak
remains in the class $x=0$. Assuming that at the moment $t^*$ the
maximum is at the point $x^*$, we solve Eq.(13) for the characteristic
with end-point $(x^*,t^*)$ and, thus, take $q=f(x^*)$. The related
characteristic curve starts at the point $x(0)=x^*$, passes through the
point $(x_1,t^*/2)$ ($x_1$ is computed from Eq.(16)), turns, and finally
reaches the point $(x^*,t^*)$. Thus, integrating Eq.(\ref{e13}) gives
\begin{equation}
\label{e22} t^* = {\mathop{\rm sgn}\nolimits x^*}\int\limits_{x^*}^{x_1}
\frac{d\xi}{\sqrt{(f(x^*)+1-f(\xi))^2+\xi^2-1}}.
\end{equation}
Now we take the limit as $x^*\to 0$ and find the threshold time
$T_2$. When $f(x)=cx^2/2$ and $c>1$ this time is
\begin{equation}
\label{e23} T_2 = \cos^{-1}(\sqrt{1-1/c})/{\sqrt{c-1}}.
\end{equation}
\section{\bf The Eigen model}
As shown in Refs.~[5-6], for $2^N$ probabilities $P_i$ there is a
set of equations
\begin{equation}
\label{e24} \frac{dP_i}{d\tau}= \sum_{j=1}^{2^N}Q_{ij}r_j P_j-P_i[
\sum_{j=1}^{2^N}r_{j}P_j].
\end{equation}
Elements $Q_{ij}$ of the mutation matrix give the probabilities that
an offspring of configuration $j$ belongs to configuration $i$. In
this model mutations are quantified by
$Q_{ij}=q^{N-d(i,j)}(1-q)^{d(i,j)}$ and $\gamma=N(1-q)$, where
$\exp[-\gamma]\equiv q^N$ is the probability of producing an exact copy,
$r_j=f(1-2l/N)$ is the fitness, and $l$ is the Hamming distance of
the $j$th configuration from the reference configuration. The
Hamming distance between configurations $i$ and $j$ (with
spins $s^i_n$ and $s^j_n$, respectively) is
$d(i,j)=\sum_n(1-s^i_ns^j_n)/2$. Considering again the ($N + 1$)
Hamming-class probabilities $p_l$, with $p_l\equiv \exp[Nu(x,t)]$ and
$x=1-2l/N$, Eq.(24) has been mapped in Ref.~\cite{sh07} onto the
following equation
\begin{eqnarray}
\label{e25} {\frac {\partial u}{\partial t}} =f(x) e^{\gamma [
\cosh (2u')+x \sinh (2u') -1]},
\end{eqnarray}
where $\tau=tN$. The asymptotic solutions $u(x,t;k)=kt+u_k(x)$ ($k$ is
the mean fitness \cite{sh06}) in the limit $t\to \infty$ are as
follows
\begin{eqnarray}
\label{e26} k= \max_{-1\le x\le1}U(x),\quad U(x)=f(x)\exp(\gamma
[-1+\sqrt{1-x^2}]) ,
\end{eqnarray}
where $x_c$ and $x_m$ are obtained from
\begin{equation}
\label{e27}
\begin{aligned}
U'(x_c)=0,\quad f(x_m)=f(x_c)\exp(-\gamma [1-\sqrt{1-x_c^2}]).
\end{aligned}
\end{equation}
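For a concrete choice of fitness, Eqs.(\ref{e26}) and (\ref{e27}) are easily evaluated numerically. The sketch below (ours; $f$ and $\gamma$ are illustrative) uses $f(x)=x^2$ and $\gamma=2$, for which the condition $U'(x_c)=0$ reduces to $x_c^2=\sqrt{1-x_c^2}$, i.e. $x_c^2=(\sqrt{5}-1)/2$:

```python
import numpy as np

gamma = 2.0
f = lambda x: x ** 2
x = np.linspace(0.0, 1.0, 200001)          # positive branch of the even potential
U = f(x) * np.exp(gamma * (np.sqrt(1.0 - x ** 2) - 1.0))   # Eq. (26)
k = U.max()                       # mean fitness
x_c = x[np.argmax(U)]             # bulk magnetization
x_m = np.sqrt(k)                  # positive solution of f(x_m) = k, Eq. (27)
x_c_exact = np.sqrt((np.sqrt(5.0) - 1.0) / 2.0)   # from U'(x_c) = 0 at gamma = 2
```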
When $x_c<|x_0|<1$, then for the initial distribution given by Eq.(14)
with $a \gg 1$ the position of the maximum $(t^*,x^*)$ is
\begin{equation}
\label{e28} t^*=\frac{\mathop{\rm sgn}\nolimits
x_0}2\int\limits_{x^*}^{x_0}\,\frac{d\xi}
{f(x^*)\,\sqrt{\left(\ln\frac{f(x^*)}{f(\xi)}+\gamma\right)^2-{\gamma}^{2}(1-\xi^2)}
}.
\end{equation}
For all other cases the solution is
\begin{equation}
\label{e29}
\begin{aligned}
t^*=\frac{\mathop{\rm sgn}\nolimits
x_0}2&\Big(\int\limits_{x_0}^{x_1}\,\frac{d\xi}{f(x^*)\,
\sqrt{\left(\ln\frac{f(x^*)}{f(\xi)}+\gamma\right)^2-{\gamma}^{2}(1-\xi^2)}}+\\
{}+&\int\limits_{x^*}^{x_1}\,\frac{d\xi}{f(x^*)\,
\sqrt{\left(\ln\frac{f(x^*)}{f(\xi)}+\gamma\right)^2-{\gamma}^{2}(1-\xi^2)}}\Big
),
\end{aligned}
\end{equation}
where $x_1$ can be calculated from the condition
\begin{eqnarray}
\label{e30}
\left(\ln\frac{f(x^*)}{f(x_1)}+\gamma\right)^{\!2}-{\gamma}^{2}(1-x_1^2)=0.
\end{eqnarray}
Finally, for relaxation from the flat distribution we get:
\begin{eqnarray}
\label{e31} t^*=\mathop{\rm sgn}\nolimits{x^*}\int\limits_{x^*}^{x_1}\,\frac{d\xi}
{f(x^*)\,\sqrt{\left(\ln\frac{f(x^*)}{f(\xi)}+\gamma\right)^{\!2}-{\gamma}^{2}(1
-\xi^2)}}.
\end{eqnarray}
\begin{figure}
\large \unitlength=0.1in
\begin{picture}(42,12)
\put(-2.2,-13.5){\special{psfile=ffigure3.eps hscale=25 vscale=25}}
\put(16.5,-15){\special{psfile=ffigure4.eps hscale=25 vscale=25}}
\end{picture}
\caption{Dynamics of the maximum density point $x^*(t^*)$ for the flat
initial distribution given by Eq.(21): (a) Crow-Kimura model with
(i) $f(x)=8x$, (ii) $f(x)=x^2$, (iii) $f(x)=x^2+0.2x^4$, (iv)
$f(x)=4\exp(x-1)$, and (v) $f(x)=4\exp(-8[1-x])$ (dashed line); (b)
Eigen model with $\gamma=2$ and (i) $f(x)=2(x+1)$, (ii) $f(x)=x^2$,
and (iii) $f(x)=\exp(4x)$. Continuous curves are the analytical
results. The symbols are the results of numerical integration.}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.6\columnwidth]{ffigure5.eps}}
\caption{ The dynamics of the mean fitness $R(t)$ for the
Crow-Kimura model ($f(x)=x$) with the initial value $x_0=0.5$
in the distribution (14). The symbols are the results of numerical
solutions of the Crow-Kimura model given by Eq.(6), where $N=1000$.
The upper line is an approximate result by diffusion method, the
lower line is our exact result. } \label{fig5}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.6\columnwidth]{ffigure6.eps}}
\caption{ The dynamics of the mean fitness $R(t)$ for the
Crow-Kimura model ($f(x)=x^2$) with the initial value
$x_0=0.5$ in the distribution (14). The symbols are the results of
numerical solutions of the Crow-Kimura model given by Eq.(6), where
$N=1000$. The upper line is an approximate result by diffusion
method, the lower line is our exact result. } \label{fig6}
\end{figure}
\begin{center}
\begin{table}[t]
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline $c$ & 1.1 & 1.2 & 1.3 & 1.4 & 1.5 & 1.6 \\
\hline $T_2$ &2.397 &1.791 & 1.466 &1.252 & 1.098 &0.980 \\
\hline $t_2$ &3.998 & 2.572 &1.953 & 1.591 & 1.351 & 1.177 \\
\hline
\end{tabular}
\caption{ Comparison of $t_2$, the result of \cite{ba98} for the
threshold time in the case of an initially flat distribution, with
$T_2$, our exact result from Eq.(23) for the Crow-Kimura model with
$f(x)=cx^2/2$. }
\end{table}
\end{center}
\section{\bf Discussion}
We have considered the discrete error classes in a continuum
approximation, replacing the system of equations for molecular
evolution by a single Hamilton-Jacobi equation. The dynamics have been
obtained by solving this equation. This method is qualitatively
similar to the semi-classical methods well known in quantum mechanics.
Our approach has an accuracy of $1/N$, where $N$ is the genome length.
There is a straightforward connection between our current method and
methods that utilize statistical-physics analogies with Ising spins.
Specifically, two different sub-phases that have been determined
with our method describe two different relaxation regimes (i.e.,
Eqs.(15) and (17) for Crow-Kimura's model; and, Eqs.(28-30) for
Eigen's model). These two relaxation regimes correspond exactly to
two different magnetization values as discussed in
Refs.~\cite{bw01,ba02,ba07}. The singularities at $x_c$ in the
relaxation periods correspond to the bulk magnetization. When the
entire virus population is initially in one genetic configuration
that is closer to the wild configuration than the sequences with
$x=x_c$, the maximum of the population distribution moves directly to
the steady state $x_m$. This is in analogy with the surface
magnetization. On the other hand, when the initial configuration is
farther from the wild configuration than $x_c$, the maximum of the
distribution moves away from the wild configuration in the initial
phase and towards $x_m$ in a later phase. The single minimum at $x=0$
of the evolution potential $U(x)$ (i.e., Eq.(5) for the Crow-Kimura
model, and Eq.(26) for the Eigen model) gives smooth dynamics (see
Eqs.(15), (17), Fig.~2, and Eqs.(28-30)). Equations (22) and (31)
give the evolution from the initially flat distribution in the
Crow-Kimura and the Eigen models, respectively. These results are
presented in Fig.~3 for several
choices of the fitness function. The analytical dynamics of the
maximum-density points $x^*(t^*)$ is in excellent agreement with the
numerical solutions of the original formulation of these models. The
second phase of the dynamics, with a jump in the position of
$x^*(t)$ (seen as the dashed line in Fig.~3a), is related to the
presence of a potential well (indicated by the dashed line in
Fig.~1). Preliminary numerical studies of similar problems indicate
the existence of a similar phase with a jump that does not require a
potential well but only a steep potential. The evolution dynamics is
a highly non-trivial phenomenon. As we demonstrated in this work,
even for monotonic and smooth fitness landscapes it is possible to
have discontinuous dynamics, in analogy with the punctuated
evolution of Ref.~\cite{dr01} or the shock waves of Ref.~\cite{ca05}. Such
discontinuous dynamics for a smooth fitness function has also been
found in Ref.~\cite{ba01}, where the dynamics of the evolution model
was investigated numerically for four-valued spins. In the current
article we suggest an analytical method to investigate
discontinuous evolution for a general fitness function. In
Ref.~\cite{ba98} an analytic approximation, accurate
for large $c$, has been suggested for the dynamics of the Crow-Kimura
model. In Table 1 we compare our exact result for $T_2$ obtained
from Eq.(23) with the corresponding expression derived by the method
of Ref.~\cite{ba98} (by setting $\lambda=1$ in Eqs.(4) and (65) of
Ref.~\cite{ba98}). Our method gives the full distribution, while the
method of Ref.~\cite{ba98} gives only the position of the distribution
maximum. In summary, we used the HJE to obtain exact dynamics and
Hamiltonian mechanics for a qualitative analysis of evolution
models. Our results are valid for any analytic fitness function. The
diffusion method of Refs.~[10-13] is valid only near the maximum of
the distribution or in the case of weak selection, and yields
inaccurate results when applied to long relaxation periods or to
calculating the mean fitness. This yields an error greater than $50\%$
after $t=0.2$ (see Figs.~4-5). The HJE approach is self-consistent,
with no need to use the genome length (in contrast to
Refs.~[10-13]), and gives the dynamics with $1/N$ accuracy.
We thank M.\,W.~Deem, A.~Kolakowska, A.~Melikyan, L.~Peliti,
S.~Nazarian, and D.~Waxman for discussions. D.~B.~Saakian thanks the
Volkswagenstiftung grant ``Quantum Thermodynamics'', U.S. Civil
Research Development Foundation ARP2-2647-Ye-05, National Center for
Theoretical Sciences in Taiwan and Academia Sinica (Taiwan), Grant
No. AS-95-TP-A07.
\subsection*{Acknowledgments}
The authors would like to acknowledge enlightening conversations with Ben Goldys, which led to a convenient approximation scheme used in \S\ref{sec:Ex}, as well as useful discussions with Michael Cowling and Bill McLean, and bibliographical suggestions of Hans Triebel. The research of GF and CGT is supported by an ARC Future Fellowship and an ARC Discovery Project (DP110100068).
\section{Introduction}
Random (or forced) dynamical systems are invaluable models of systems
exhibiting time dependence. Even though, for us, the terms random and
forced are interchangeable, we use exclusively the term random dynamical
systems (rds). These systems arise naturally in situations with
time-dependent forcing, as well as being a natural model for systems in
which some neglected or ill-understood phenomena lead to uncertainty
in the evolution.
One particular motivation concerns transport phenomena, such as oceanic and atmospheric flow.
Although randomness appears in the title, this work covers a variety
of situations, ranging from deterministic forcing -- for example, when the
driving depends only on the time of day -- to independent,
identically distributed noise. The results of this paper deal with very
general driving systems: the conditions on the base dynamics are that it
should be stationary (i.e. have an invariant probability measure), ergodic
and invertible; in particular, no mixing properties are needed.
The predecessor works \cite{FLQ1,FLQ2,GTQuas} provide a unified framework for the study of random absolutely continuous invariant measures, exponential decay of correlations and coherent structures in random dynamical systems.
The abstract results therein, called semi-invertible Oseledets theorems, show a dynamically meaningful way of \textit{splitting} Banach spaces arising from the study of associated transfer operators into subspaces with specific growth rates.
Such results have applications in the context of random, non-autonomous or time-dependent systems, provided there are some good statistics for the time-dependence and
a \textit{quasi-compactness} type property holds.
The references above provide explicit applications in the setting of random compositions of piecewise smooth expanding maps.
Having demonstrated the existence of a splitting, it is natural to ask about its stability under different types of perturbations.
This is known to be a very difficult problem in general, even in finite dimensions, where positive results \cite{young86,LedrappierYoung} rely on absolute continuity and uniformity of the perturbations.
In the infinite-dimensional case, research has focused on transfer operators, and Perron-Frobenius operators in particular.
Considering a single map $T$ with expanding or hyperbolic properties, the transfer operator is quasi-compact in the right Banach space setup, and one can ask about the stability of eigenprojections of the transfer operator with respect to perturbations.
As the transfer operators typically preserve a non-negative cone, by the Ruelle-Perron-Frobenius theorem, there is an eigenvalue of largest magnitude, which is positive.
In the Perron-Frobenius case, this eigenvalue is 1, and the corresponding eigenprojection(s) represent invariant densities of the generating map $T$.
Numerical methods for approximating invariant densities rely on stability of the density under particular perturbations; those induced by the numerical method.
A very common perturbation is Ulam's method, a relatively crude, but in practice extremely effective, approach.
Positive stability results in a variety of settings include \cite{BaladiIsolaSchmitt,FroylandSRB,DingZhou,BlankKeller97,MurrayThesis,KeaneMurrayYoung,Froylandrandom,Murraynonuniform}. A mechanism causing instability is described in \cite{Keller82}.
Ulam's method can also be used to estimate other non-essential spectral values \cite{FroylandCMP,BlankKeller98,BaladiHolschneider,FroylandUlamApprox}.
Stability under convolution-type perturbations is treated in \cite{BaladiYoung,AlvesVilarinho}, and \cite{BlankKellerLiverani,DemersLiverani} consider static perturbations, as well as of convolution-type.
A seminal paper in this area is \cite{KellerLiverani99}, which provides a rather general template for stability results for single maps.
Despite the considerable volume of results in the autonomous setting, only a few results are known about stability of Oseledets splittings in the non-autonomous situation
\cite{BaladiKondahSchmitt,BaladiQuenched,Bogenschutz}.
Each of these results concerns stability of Oseledets splittings for small random perturbations of a \emph{fixed} expanding map; thus these results concern stability of \emph{non-random} eigenprojections of a fixed unperturbed transfer operator.
In contrast, we begin with a random dynamical system that possesses a (random) splitting, and demonstrate stability of this random splitting under perturbations.
Our techniques handle convolution-type perturbations (the random map experiences integrated noise), static perturbations (the random map is perturbed to another random map), and finite-rank perturbations (stability under numerical schemes).
This paper deals with stability of the \textit{top} space of the splitting. In particular,
our results answer a question raised by Buzzi in \cite{BuzziEDC} about stability of random acims for Lasota-Yorke maps. Stability under other types of perturbations relevant for applications and numerical studies, such as Ulam and Fourier-based schemes, can also be treated with our method.
The approach we take is modeled on results of Keller and Liverani \cite{KellerLiverani99} and adapted to the random setting.
We point out that although their results may be
applied directly to some random perturbations of a single system, they yield
information about the expectation of the random process only. In contrast,
ours yield information about almost every realization.
Baladi and Gou\"ezel \cite{BaladiGouezel} introduced a relevant functional analytic setup for quasi-compactness of single transfer operators.
This followed other setups \cite{BlankKellerLiverani,GouezelLiverani,BaladiTsujii,DemersLiverani}.
In the present work we rely on the constructions from \cite{BaladiGouezel} because the results of \cite{GTQuas} allow one to show the existence of Oseledets splittings in that setting, a fact that is heavily exploited in our approach.
\subsection{Statement of the main results}
A random dynamical system consists of base dynamics (a measure-preserving map $\sigma$ of a probability space $\Omega$) and a family of
linear maps $\mathcal L_\omega$ from a Banach space $X$ to itself (in our applications these are the Perron-Frobenius operators of piecewise
expanding maps, $T_\omega$, of the circle). The results address stability of the dominant Oseledets subspace of the random dynamical system
when the linear maps are perturbed (leaving the base dynamics unchanged). We consider three classes of perturbation:
\begin{enumerate}[(A)]
\item \textit{Ulam-type perturbation.} For a fixed $k$, we define perturbed operators $\mathcal L_{k,\omega}$ to be $\mathbb E_k\circ \mathcal L_\omega$,
where $\mathbb E_k$ is the conditional expectation operator with respect to the partition into intervals of length $1/k$.
\item
\textit{Convolution-type perturbation}. Given a family of densities $(Q_k)$ on the circle, we define perturbed operators $\mathcal L_{k,\omega}$ by
$\mathcal L_{k,\omega}f=Q_k * \mathcal L_\omega f$. If one applies $T_\omega$ and then adds a noise term with distribution given by $Q_k$,
then $\mathcal L_{k,\omega}$ is the corresponding averaged Perron-Frobenius operator; that is, the expectation of the Perron-Frobenius operators of
$\tau_y\circ T_\omega$, where $y$ has density $Q_k$ and $\tau_y$ is translation by $y$. Notable examples of perturbations of this type are the
cases where $Q_k$ is uniformly distributed on an interval $[-\epsilon_k,\epsilon_k]$ or where $Q_k$ is the $k$th Fej\'er kernel.
\item
\textit{Static perturbation}. Here one replaces the entire family of transformations $T_\omega$ by nearby transformations $T_{k,\omega}$. These are
much more delicate than the other two types of perturbation (composing with convolutions and conditional expectations generally make
operators more benign). Notice that by enlarging the probability space, perturbations of this type can include transformations with (for
example) independent identically distributed additive noise. To see this, let $\Xi$ denote the space of sequences taking values in $[-1,1]$,
equipped with the product of uniform measures and let $\bar\Omega=\Omega\times\Xi$ and $\bar\sigma$ be the product of $\sigma$ on the
$\Omega$ coordinate and the shift on the $\Xi$ coordinate. Then defining $T_{k,(\omega,\xi)}(x)=T_\omega(x)+\xi/k$ gives a family of perturbed
maps (with the common base dynamics being $\bar\Omega$). The unperturbed dynamics $(T_\omega)$ can, of course, also be seen as being driven
by $\bar\Omega$. Notice that this is \emph{not} the same thing as the perturbation obtained by convolving with a uniform $Q$. In the static
case, one obtains a result that holds for compositions of $\mathcal L_{\omega,\xi}$ for almost every $\omega$ and almost every
sequence of perturbations $\xi$, whereas for the convolution perturbation one obtains a result that holds for the expectation of
these operators, obtained by integrating over the $\xi$ variables. The convolution-type perturbations are also known in the physics literature
as annealed systems, while the static perturbations are quenched systems.
\end{enumerate}
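To fix ideas, the Ulam-type perturbation (A) can be sketched in a few lines. The example below is ours and purely illustrative: it drives the system by i.i.d. rotation parameters and uses the circle maps $T_\omega(x)=2x+\omega \ (\mathrm{mod}\ 1)$ rather than the Lasota-Yorke families treated later, and each matrix $\mathbb E_k\circ\mathcal L_\omega$ is estimated by sampling. Since Lebesgue measure is invariant for every $\omega$, a density pushed through the random composition should flatten:

```python
import numpy as np

def ulam_matrix(T, k, samples=200):
    # row-stochastic Ulam matrix: P[i, j] = fraction of bin i mapped by T into bin j
    P = np.zeros((k, k))
    for i in range(k):
        pts = (i + (np.arange(samples) + 0.5) / samples) / k   # points in bin i
        img = np.floor((T(pts) % 1.0) * k).astype(int) % k
        for j in img:
            P[i, j] += 1.0 / samples
    return P

rng = np.random.default_rng(0)
k = 32
omegas = rng.uniform(0.0, 1.0, size=20)         # i.i.d. driving parameters
grid = (np.arange(k) + 0.5) / k
f = 1.0 + 0.5 * np.cos(2.0 * np.pi * grid)      # non-uniform initial density
for om in omegas:
    P = ulam_matrix(lambda x, om=om: 2.0 * x + om, k)
    f = f @ P                                   # E_k applied to the push-forward L_omega f
    f *= k / f.sum()                            # keep mean value 1
deviation = float(np.max(np.abs(f - 1.0)))
```

Here the random acim is known in advance (Lebesgue), which makes the convergence easy to check; for the maps of Theorems A--C below the limit density is of course not explicit.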
Below we outline the main application results of this paper. We refer the reader to \S\ref{sec:Ex} for
definitions, and to
Theorems \ref{thm:Ulam}, \ref{thm:pertConv} and \ref{thm:NstepStab} for the precise statements.
\textbf{Theorem A:} \textit{(Stability under Ulam discretization).}
Let $\mathcal L$ be a random Lasota-Yorke map satisfying the conditions of \S\ref{S:LYMaps}.
Let $\{\mathcal L_k\}_{k\in \mathbb{N}}$ be the sequence of Ulam discretizations of $\mathcal L$, corresponding to uniform partitions of the domain into $k$ bins.
Assume $\mathcal L$ satisfies \textit{good} Lasota-Yorke inequalities (see \S\ref{Ssec:Ulam} for the precise meaning).
Then, for each sufficiently large $k$, $\mathcal L_k$ has a unique random acim.
Let $\{F_k\}_{k\in \mathbb{N}}$ be the sequence of random acims for $\mathcal L_k$. Then, $\lim_{k\to \infty}F_k=F$ fibrewise in $L_1$\footnote{\label{fn:conv} In fact, the fibrewise convergence holds in some fractional Sobolev norm ${\mc{H}_p^{t'}}$, with $0<t'<\frac{1}{p}<1$, which in particular implies convergence in $L_p$ for some $p>1$. Since the domain is bounded, this yields convergence in $L_1$ as well.}.
\textbf{Theorem B:} \textit{(Stability under convolutions).}
Let $\mathcal L$ be a random Lasota-Yorke map satisfying the conditions of \S\ref{S:LYMaps}.
Assume $\mathcal L$ satisfies \textit{good} Lasota-Yorke inequalities (see \S\ref{Ssec:PertByConvolution} for the precise meaning).
Let $\{\mathcal L_k\}_{k\in \mathbb{N}}$ be a family of perturbations, arising from convolution with positive kernels $Q_k$, such that $\lim_{k\to \infty}\int Q_k(x)|x|\,dx=0$\footnote{This condition is equivalent to weak convergence of $Q_k$ to $\delta_0$.}.
Then, for sufficiently large $k$, $\mathcal L_k$ has a unique random acim.
Let us call it $F_k$.
Then, $\lim_{k\to \infty}F_k=F$ fibrewise in $L_1\,^{\ref{fn:conv}}$.
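The first-moment hypothesis of Theorem B is easy to verify for concrete kernels. For instance (our choice of kernel, not the paper's), for $Q_k$ uniform on $[-1/k,1/k]$ one has $\int Q_k(x)|x|\,dx = 1/(2k) \to 0$. A quick numerical check:

```python
import numpy as np

def first_moment(k, n=100001):
    """Numerically evaluate int Q_k(x) |x| dx for Q_k uniform on [-1/k, 1/k]."""
    xs = np.linspace(-1.0 / k, 1.0 / k, n)
    Qk = np.full(n, k / 2.0)            # density of the uniform kernel
    dx = xs[1] - xs[0]
    return np.sum(Qk * np.abs(xs)) * dx

# analytically: first_moment(k) = 1/(2k), which tends to 0 as k -> infinity
```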
\textbf{Theorem C:} \textit{(Stability under static perturbations).}
Let $\mathcal L$ be a random Lasota-Yorke map satisfying the conditions of \S\ref{S:LYMaps}.
Let $\{\mc{L}_k\}_{k\in \mathbb{N}}$ be a family of random Lasota-Yorke maps over the same base as $\mathcal L$, satisfying the conditions of \S\ref{S:LYMaps}, with the same bounds as $\mc{L}$.
Assume that there exists a sequence $\{\rho_k\}_{k>0}$ with $\lim_{k\to \infty}\rho_k= 0$ such that for $\bbp\text{-a.e. } \om \in \Om$, $d_{LY}(T_{k,\omega}, T_\omega)\leq \rho_k$, where $d_{LY}$ is a metric on the space of Lasota-Yorke maps. Furthermore, suppose that \emph{either}
\begin{enumerate}[(i)]
\item
$\{ T_\omega\}_{\omega \in \Omega}$ satisfies a generalized \textit{no-periodic turning points} condition (see \S\ref{Ssec:DetPert} for full details); \emph{or}
\item
The expansion is sufficiently strong ($\mu^\gamma>2$, where $\mu$ is a lower bound on $|DT_\omega(x)|$, and $0<\gamma\leq 1$ is the H\"older exponent of $DT_\omega$), and $T_\omega$ depends continuously on $\omega$.
\end{enumerate}
Then, for every sufficiently large $k$, $\mathcal L_k$ has a unique random acim.
Let $\{F_k\}_{k\in \mathbb{N}}$ be the sequence of random acims for $\mathcal L_k$.
Then, $\lim_{k\to \infty}F_k=F$ fibrewise in $L_1\,^{\ref{fn:conv}}$.
\subsection{Structure of the paper}
The paper is organized as follows.
An abstract stability result, Theorem~\ref{thm:StabRandomAcim}, is presented in \S\ref{S:StabilityResult}, after introducing the underlying setup. Examples are provided in \S\ref{sec:Ex}. They include perturbations arising from finite-rank discretization schemes, perturbations by convolution, and static perturbations of random Lasota-Yorke maps. The theoretical results are illustrated with a numerical example in \S\ref{sec:numEx}.
Section~\ref{S:techPfs} contains proofs of the technical results.
\section{A stability Result}\label{S:StabilityResult}
\subsection{Preliminaries}
In this section, we introduce some notation.
\begin{defn}\label{defn:RandomDS}
A \textbf{strongly measurable separable random linear system with ergodic and invertible base}, or for short a \textbf{random dynamical system}, is a tuple $\mc{R}=(\Omega,\mc{F}, \mathbb P, \sigma, X, \mathcal L)$ such that
$(\Omega, \mc{F}, \mathbb P)$ is a Lebesgue space, $\sigma: (\Omega, \mc{F}) \circlearrowleft$ is an invertible and ergodic $\mathbb P$-preserving transformation, $X$ is a separable Banach space, $L(X)$ denotes the set of bounded linear maps of $X$, and $\mathcal L: \Omega \to L(X)$ is a strongly measurable map. We use the notation $\mathcal L_\omega^{(n)}=\mathcal L(\sigma^{n-1}\omega) \circ \dots \circ \mathcal L(\omega)$.
\end{defn}
\begin{defn}
A random dynamical system $\mc{R}$ is called \textbf{quasi-compact} if $\kappa^*(\mc{R})<\lambda^*(\mc{R})$, where $\kappa^*$, the \textbf{index of compactness}, and $\lambda^*$, the \textbf{maximal Lyapunov exponent}, are defined as the following $\mathbb P$-almost everywhere constant limits:
\begin{align*}
\lambda^*&:=\lim_{n \to \infty} \frac{1}{n} \log \|\mathcal L_{\omega}^{(n)}\|,\\
\kappa^*&:= \lim_{n\to \infty} \frac{1}{n} \log \|\mathcal L_\omega^{(n)}\|_{ic(X)},
\end{align*}
where $\|A\|_{ic(X)}$ denotes the measure of non-com\-pact\-ness
\[
\|A\|_{ic(X)}:=\inf \{r>0 : A(B_X) \text{ can be covered by finitely
many balls of radius } r \},
\]
and $B_X$ denotes the unit ball in $X$.
The \textbf{Lyapunov spectrum} at $\omega\in \Omega$ is the set $\Lambda(\mc{R}(\omega)):=\big\{ \lim_{n \to \infty} \frac{1}{n} \log \|\mathcal L_{\omega}^{(n)}f\| : f\in X \big\}$.
A number $\lambda$ is called an \textbf{exceptional Lyapunov exponent} if
$\lambda \in \Lambda(\mc{R}(\omega))$ for $\bbp\text{-a.e. } \om \in \Om$ and $\lambda>\kappa^*$.
\end{defn}
\begin{defn}\label{def:temp}
A function $f:\Omega \to \mathbb{R}$ is called
\textbf{tempered with respect to $\sigma$}, or simply \textbf{tempered} if $\sigma$ is clear from the context, if for $\mathbb P$-almost every $\omega$, $\lim_{n \to \pm \infty} \frac{1}{|n|} \log |f(\sigma^n \omega)|=0$.
\end{defn}
\begin{rmk}\label{rmk:temp}
It is straightforward to check that $f$ is tempered if and only if for every $\epsilon>0$, there exists a measurable function $g:\Omega \to \mathbb{R}_+$ such that $|f(\sigma^n \omega)|\leq g(\omega) e^{\epsilon|n|}$ for every $n\in \mathbb{Z}$ and $\mathbb P$-almost every $\omega$. Furthermore, $g$ may be chosen to be tempered. Indeed, one may replace $g$ by $\tld{g}(\omega):=\inf_{n\in \mathbb{Z}} g(\sigma^n\omega)e^{\epsilon|n|}$. Also,
$\limsup_{n \to \pm \infty} \frac{1}{|n|} \log |\tld{g}(\sigma^n \omega)|\leq \epsilon$.
A consequence of Tanny's theorem, presented in \cite[Lemma C.3]{GTQuas}, states that either $\tld{g}$ is tempered or $\limsup_{n \to \pm \infty} \frac{1}{|n|} \log |\tld{g}(\sigma^n \omega)|=\infty$ for $\bbp\text{-a.e. } \om \in \Om$. Therefore, the previous estimate shows that $\tld{g}$ is tempered.
\end{rmk}
We will rely on the following statement, which extends the work of Lian and Lu \cite{LianLu} by establishing the existence of an \textit{Oseledets splitting} in the context of non-invertible operators.
\begin{thm}\cite[Theorem 2.10]{GTQuas}\label{thm:OselSpl}
Let $\mc{R}=(\Omega,\mc{F}, \mathbb P, \sigma, X, \mathcal L)$ be a quasi-compact strongly measurable separable linear random dynamical system with ergodic invertible base such that
$\int \log^+\|\mathcal L\| \, d\mathbb P<\infty$.
Let $\lambda^*=\lambda_1>\dots > \lambda_l>
\kappa^*$ be the (at most countably many) exceptional Lyapunov exponents of $\mc{R}$, and $m_1,
\dots, m_l\in \mathbb{N}$ the corresponding multiplicities
(or in the case $l=\infty$, $\lambda_1>\lambda_2>\ldots$ with $m_1,m_2,\ldots$ the
multiplicities).
Then, up to $\mathbb P$-null sets, there exists a unique, measurable,
equivariant splitting of $X$ into closed subspaces, $X=V(\omega)\oplus
\bigoplus_{j=1}^l E_j(\omega)$, where generally $V(\omega)$ is infinite
dimensional and $\dim E_j(\omega)=m_j$. Furthermore, for every $y\in
E_j(\omega)\setminus \{0\}$, $\lim_{n \to \infty} \frac{1}{n}\log
\|\mathcal L_{\omega}^{(n)}y\|=\lambda_j$, for every $v\in V(\omega)$, $ \limsup_{n
\to \infty} \frac{1}{n}\log \|\mathcal L_{\omega}^{(n)}v\|\leq \kappa^*$ and the
norms of the projections associated to the splitting are tempered
with respect to $\sigma$.
\end{thm}
\begin{rmk}
The notation used for projections associated to the splitting is as follows.
$\Pi_{j,\omega}$ will denote the projection onto $E_j(\omega)$ along $V(\omega)\oplus
\bigoplus_{i=1, i\neq j}^l E_i(\omega)$; $\Pi_{\leq j,\omega}$ will denote the projection onto $\bigoplus_{i=1}^{j} E_i(\omega)$ along $V(\omega)\oplus \bigoplus_{i= j+1}^l E_i(\omega)$; $\Pi_{>j,\omega}:=I - \Pi_{\leq j,\omega}$.
\end{rmk}
\begin{defn}
A random dynamical system $\mc{R}=(\Omega,\mc{F}, \mathbb P, \sigma, X, \mathcal L)$ is called \textbf{splittable} if it has an Oseledets splitting.
Equivalently, one may also say that $\mc{R}$ \textbf{splits}, or \textbf{splits with respect to $X$} if the choice of Banach space needs to be emphasized.
\end{defn}
We conclude by making the following notational convention.
\begin{conv}
Throughout the paper, $C_\#$ denotes various constants that depend only on parameters $t, t', p, \gamma$. The value of $C_\#$ may change from one appearance to the next.
\end{conv}
\subsection{Setting}\label{S:setting}
Let $(Y,|\cdot|)$, $(X,\|\cdot\|)$ be Banach spaces, with a compact embedding $X\hookrightarrow Y$, and a continuous embedding $Y\hookrightarrow L^1(\text{Leb})$.
Consider splittable random linear dynamical systems $\mc{R}=(\Omega,\mc{F}, \mathbb P, \sigma, X, \mathcal L)$ and $\mc{R}_k=(\Omega,\mc{F}, \mathbb P, \sigma, X, \mathcal L_k)$, $k\geq 1$, with a common ergodic invertible base $\sigma:\Omega \circlearrowleft$, and satisfying the following:
\begin{enumerate}
\renewcommand{\theenumi}{H\arabic{enumi}}
\renewcommand{\labelenumi}{(\textbf{\theenumi})}
\setcounter{enumi}{-1}
\item \label{it:TopExp}
$\int \log^+\|\mathcal L\| \, d\mathbb P<\infty$ and for every $k\in \mathbb{N}$, $\int \log^+\|\mathcal L_k\| \, d\mathbb P<\infty$.
Furthermore, for every $k\in \mathbb{N}$, $f\in X$ and $\bbp\text{-a.e. } \om \in \Om$,
$\mathcal L_{k,\omega}$ and $\mathcal L_\omega$ preserve the cone of non-negative functions, and satisfy
$\int \mathcal L_{k,\omega}f\,dm=\int f \,dm=\int \mathcal L_\omega f \,dm$.
\end{enumerate}
\begin{rmk}\label{rmk:H1}
In all the examples of this paper, condition \eqref{it:TopExp} is clearly satisfied. Thus, we will henceforth assume it holds and use it wherever needed.
\end{rmk}
\begin{enumerate}
\renewcommand{\theenumi}{H\arabic{enumi}}
\renewcommand{\labelenumi}{(\textbf{\theenumi})}
\item \label{it:UnifLY}
There exist a constant $B>0$ and a measurable $\alpha:\Omega \to \mathbb{R}_+$ with
$\kappa:=\int \log \alpha(\omega) \, d\mathbb P(\omega)<0$, such that for every $f\in X$, $k\in \mathbb{N}$ and $\bbp\text{-a.e. } \om \in \Om$,
\begin{equation}\label{eq:UnifLY}
\|\mathcal L_{k,\omega} f\|\leq \alpha(\omega) \|f\|+B|f|.
\end{equation}
\end{enumerate}
\begin{rmk}\label{rmk:indComp}
Whenever \eqref{it:UnifLY} holds, \cite[Lemma C.5]{GTQuas} ensures that $\kappa^*(\mc{R}_k)<0$.
\end{rmk}
A version of the following statement was used by Buzzi \cite{Buzzi}, and is also derived in \cite[Lemma C.5]{GTQuas}:
condition \eqref{it:UnifLY} is implied by the following more practical condition.
\begin{enumerate}
\renewcommand{\theenumi}{H\arabic{enumi}'}
\renewcommand{\labelenumi}{(\textbf{\theenumi})}
\item \label{it:GenUnifLY}
$\{\log \|\mathcal L_{k,\omega}\|\}_{k\in \mathbb{N}}$ is dominated by a $\mathbb P$-integrable function, and there exist measurable functions $\tld{\alpha}, \tld{B}:\Omega \to \mathbb{R}_+$ with
$\int \log \tld{\alpha}(\omega) \, d\mathbb P(\omega)<0$, such that for every $f\in X$, $k\in \mathbb{N}$ and $\bbp\text{-a.e. } \om \in \Om$,
\begin{equation}\label{eq:GenUnifLY}
\|\mathcal L_{k,\omega} f\|\leq \tld{\alpha}(\omega) \|f\|+ \tld{B}(\omega)|f|.
\end{equation}
\end{enumerate}
Furthermore, $\kappa$ in \eqref{eq:UnifLY} may be chosen arbitrarily close to $\int \log \tld{\alpha}(\omega) \, d\mathbb P(\omega)$.
\begin{enumerate}
\renewcommand{\theenumi}{H\arabic{enumi}}
\renewcommand{\labelenumi}{(\textbf{\theenumi})}
\setcounter{enumi}{1}
\item \label{it:PowBd}
For $\bbp\text{-a.e. } \om \in \Om$, $\sup_{k,n\in \mathbb{N}}|\mathcal L_{k,\sigma^{-n}\omega}^{(n)}|=:C(\omega)$ is well defined and $C$ is tempered with respect to $\sigma$.
\item \label{it:SmallPert} There exists $\tau_k\to0$ as $k\to \infty$ such that for $\bbp\text{-a.e. } \om \in \Om$,
\[
\sup_{\|g\|=1} |(\mathcal L_\omega-\mathcal L_{k,\omega})g|\leq \tau_k.
\]
\end{enumerate}
We conclude this section with the following lemma, which provides a temperedness condition analogous to that of \eqref{it:PowBd}, but with respect to the strong norm.
It will be used in bootstrapping arguments in the examples of \S\ref{sec:Ex}.
\begin{lem}\label{lem:PowerBoundStrongNorm}
Suppose conditions \eqref{it:UnifLY} and \eqref{it:PowBd} hold. Then,
$\sup_{k,n\in \mathbb{N}} \|\mathcal L_{k,\sigma^{-n}\omega}^{(n)}\|=:C'(\omega)$ is well defined for $\bbp\text{-a.e. } \om \in \Om$, and $C'$ is tempered with respect to $\sigma$.
In particular, $\lambda_{k,1}\leq 0$ for every $k\in \mathbb{N}$.
Furthermore, if \eqref{it:TopExp} holds and if there exists $f\in X$ with $\int f\,dm\neq 0$, then for every $k\in \mathbb{N}$, $\lambda_{k,1}=0$, where $\lambda_{k,1}:=\lambda^*(\mc{R}_k)$ is the maximal Lyapunov exponent of $\mathcal L_{k}$.
\end{lem}
\begin{proof}
Iterating \eqref{eq:UnifLY}, we get
\begin{align*}
\|\mathcal L_{k,\sigma^{-n}\omega}^{(n)} f\| &\leq \alpha_{\sigma^{-n}\omega}^{(n)} \|f\| + B\sum_{j=0}^{n-1} \alpha_{\sigma^{-n+j+1}\omega}^{(n-j-1)} |\mathcal L_{k,\sigma^{-n}\omega}^{(j)} f|,
\end{align*}
where $\alpha_\omega^{(l)}$ denotes the product $\alpha_\omega \cdot \alpha_{\sigma\omega}\dots \alpha_{\sigma^{l-1}\omega}$ when $l>0$, and 1 when $l=0$.
Let $\kappa=\int \log \alpha \, d\mathbb P<0$, and let $0<\epsilon<-\kappa$. Then, there exists a tempered function $A$ such that for every $l\geq 0$ and $\bbp\text{-a.e. } \om \in \Om$, $\alpha_{\sigma^{-l}\omega}^{(l)}\leq A(\omega) e^{(\kappa+\epsilon/2)l}\leq A(\omega)$.
Since $C$ from \eqref{it:PowBd} is tempered, Remark~\ref{rmk:temp} shows there exists a tempered function $D:\Omega \to \mathbb{R}_+$ such that $C(\sigma^l\omega)\leq D(\omega) e^{\epsilon|l|/2}$.
Then,
\begin{align*}
\|\mathcal L_{k,\sigma^{-n}\omega}^{(n)} f\| &\leq
A(\omega) \|f\| + B \sum_{j=0}^{n-1} A(\omega) e^{(n-j-1)(\kappa+\epsilon/2)} C(\sigma^{-n+j}\omega) |f|\\
&\leq A(\omega)\Big( 1+ B D(\omega) \sum_{j=0}^{n-1} e^\epsilon e^{(n-j-1)(\kappa+\epsilon)} \Big) \|f\|\leq C'(\omega)\|f\|,
\end{align*}
where $C'(\omega):= A(\omega) \Big( 1+ B D(\omega)e^\epsilon/(1- e^{\kappa+\epsilon}) \Big)$.
Since $A$ and $D$ are tempered, $C'$ is tempered, as claimed.
The second part of the lemma is immediate.
\qed \end{proof}
\subsection{Stability of random acims}
For each $n\in \mathbb{N}$ and $G:\Omega \times I\to \mathbb{R}$, with $g_\omega:=G(\omega,\cdot)\in X$,
we let $\mathcal L^n G:\Omega \times I\to \mathbb{R}$ be the function defined fibrewise by $\mathcal L^n G(\omega,\cdot):=\mathcal L_{\sigma^{-n}\omega}^{(n)} g_{\sigma^{-n}\omega}$. Let $\mathcal L^n_k G$ be defined analogously.
Condition~\eqref{it:TopExp} shows that if $\mathcal L$ (or $\mathcal L_k$) has a non-negative fixed point, then one can in fact choose it to be fibrewise normalized in $L^1(\text{Leb})$. In a slight abuse of notation, we call any such fixed point $F$ a \textit{random acim} for $\mathcal L$ (or $\mathcal L_k$), provided $\omega \mapsto f_\omega:=F(\omega,\cdot)$ is $(\mc{F}, \mc{B}_X)$ measurable, where $\mc{B}_X$ is the Borel $\sigma$-algebra of $(X, \|\cdot\|)$.
The main result of this section is the following.
\begin{thm}\label{thm:StabRandomAcim}
Suppose $\mc{R}$ and $\mc{R}_k$, $k\geq 1$ are
strongly measurable separable linear random dynamical systems with a common ergodic invertible
base satisfying conditions \eqref{it:TopExp}--\eqref{it:SmallPert}.
Assume $1\in X$\footnote{The conclusions remain valid if the condition $1\in X$ is replaced by the existence of a non-zero, non-negative element in $X$.}.
Assume that $\mc{R}$ splits (i.e. has an Oseledets splitting) with respect to $|\cdot|$
and $\dim E_1=1$. Then, $\mc{R}$ has a unique random acim, $F$.
For sufficiently large $k$, there is a unique random acim for $\mc{R}_k$, which is denoted by $F_k$.
Furthermore, $\lim_{k\to \infty}F_k=F$ fibrewise in $|\cdot|$. That is, for $\bbp\text{-a.e. } \om \in \Om$, $\lim_{k\to \infty}|f_\omega-f_{k,\omega}|=0$.
\end{thm}
\begin{proof}
We divide the argument into three steps.
\begin{enumerate}
\renewcommand{\theenumi}{\Roman{enumi}}
\renewcommand{\labelenumi}{\textit{(\theenumi)}.}
\item
\textit{If $\dim E_{1}=1$, then there is an attractive random acim.}\\
Recall that Lemma~\ref{lem:PowerBoundStrongNorm} ensures $\lambda_{1}=0$.
Let $\lambda_2<0$ be the second Lyapunov exponent of $\mc{R}$\footnote{In case 0 is the only exceptional Lyapunov exponent for $\mc{R}$, we let $\lambda_2=\kappa$, where $\kappa$ is as in \eqref{it:UnifLY}.}.
Let $f_\omega\in E_1(\omega)$ be such that $\int f_\omega \,dm=1$.
This normalization condition is possible, because
$X_0:=\{f\in X: \int f\,dm=0\}$ is the Oseledets complement to the top Oseledets space $E_{1}(\omega)$.
That is, $X_0=V(\omega)\oplus \bigoplus_{j=2}^l E_j(\omega)$ for $\bbp\text{-a.e. } \om \in \Om$.
This follows from condition \eqref{it:TopExp}. In particular, if $g\in X$ is such that $\int g \,dm\neq 0$,
then $\int \Pi_{1,\omega}(g)\,dm=\int g \,dm\neq 0$. Thus, $\Pi_{1,\sigma^{-n}\omega}(1)\neq 0$ for every $n\in \mathbb{N}$ and $\bbp\text{-a.e. } \om \in \Om$.
Let $g_n:=\Pi_{1, \sigma^{-n}\omega}(1)$, $h_n:=\Pi_{>1,\sigma^{-n}\omega}(1)$.
\cite[Lemma 2.13(1)]{GTQuas} ensures that for each $\epsilon>0$ there is a measurable function $D'(\omega)$ such that
\[
\|\mathcal L_{\sigma^{-n}\omega}^{(n)}h_n\|\leq D'(\omega) e^{(\lambda_2+\epsilon)n}\|h_n\|.
\]
Temperedness of $\Pi_{>1}$, coming from Theorem~\ref{thm:OselSpl}, shows there is a measurable function $\tld{D}(\omega)$ such that $\|h_n\|\leq \tld{D}(\omega) e^{\epsilon n}$. Combining with the above, one gets
\[
\|\mathcal L_{\sigma^{-n}\omega}^{(n)}h_n\|\leq D'(\omega)\tld{D}(\omega) e^{(\lambda_2+2\epsilon)n}.
\]
By linearity of $\mathcal L^{(n)}_{\sigma^{-n}\omega}$, $\lim_{n\to \infty} d_X \big( \mathcal L^{(n)}_{\sigma^{-n}\omega} 1,E_1(\omega) \big)=0$, where $d_X$ denotes the distance with respect to the norm on $X$, $\|\cdot\|$.
Also, the normalization condition on $f_\omega$ ensures that
$\lim_{n\to \infty} \mathcal L^{(n)}_{\sigma^{-n}\omega} 1= f_\omega $ in $X$, for $\bbp\text{-a.e. } \om \in \Om$. In particular, $f_\omega$ is non-negative, as $X$ is continuously embedded in $L^1$ by assumption.
Thus $F=\lim_{n\to \infty} \mathcal L^n 1$ (fibrewise limit in $X$) is a random acim for $\mathcal L$. Measurability follows from measurability of $E_1(\omega)$, guaranteed by Theorem~\ref{thm:OselSpl}.
\item
\textit{For sufficiently large $k$, $\mc{R}_k$ has a unique random acim.}\\
Lemma~\ref{lem:PowerBoundStrongNorm} yields $\lambda_{k,1}=0$ for every $k$.
Thus, Remark~\ref{rmk:indComp} ensures $\mc{R}_k$ is quasi-compact and Theorem~\ref{thm:OselSpl}
shows that $\mc{R}_k$ splits with respect to $X$ for every $k\in \mathbb{N}$.
Let $f\in X$. Then,
\[
\mathcal L^{(n)}_\omega f -\mathcal L_{k,\omega}^{(n)}f=
\sum_{j=0}^{n-1}\mathcal L^{(j)}_{\sigma^{n-j}\omega}(\mathcal L_{\sigma^{n-j-1}\omega}-\mathcal L_{k,\sigma^{n-j-1}\omega}) \mathcal L_{k,\omega}^{(n-j-1)}f.
\]
Let $g_{k,\omega, i}:=(\mathcal L_{\sigma^{i}\omega}-\mathcal L_{k,\sigma^{i}\omega}) \mathcal L_{k,\omega}^{(i)}f$.
Then, since $\int g_{k,\omega,i}(x)dx=0$, we have that $g_{k,\omega, i}\in E_{>1}(\omega')$ for
$\mathbb P\text{-a.e. } \omega' \in \Omega$,
where $E_{>1}(\omega):=V(\omega) \oplus \bigoplus_{j=2}^l E_j(\omega)$ is the complementary Oseledets space to $E_1(\omega)$.
Thus, \cite[Lemma 2.13(1)]{GTQuas} ensures that for each $\epsilon>0$ there is a measurable function $D'(\omega)$ such that
\begin{equation}\label{eq:uniq1}
\begin{split}
|\mathcal L^{(n)}_\omega f -\mathcal L_{k,\omega}^{(n)}f| &\leq \sum_{j=0}^{n-1} D'(\sigma^n \omega)e^{j(\lambda_2+\epsilon)}
|(\mathcal L_{\sigma^{n-j-1}\omega}-\mathcal L_{k,\sigma^{n-j-1}\omega}) \mathcal L_{k,\omega}^{(n-j-1)}f| \\
&\leq
\sum_{j=0}^{n-1} D'(\sigma^n \omega)e^{j(\lambda_2+\epsilon)} \tau_k \| \mathcal L_{k,\omega}^{(n-j-1)}f\|.
\end{split}
\end{equation}
We let $A, C'$ be the tempered functions from the proof of Lemma~\ref{lem:PowerBoundStrongNorm}.
Thus, we have
\begin{align*}
\| \mathcal L_{k,\omega}^{(n-j-1)}f\|\leq \alpha_{\omega}^{(n-j-1)}\|f\|+C'(\sigma^{n-j-1}\omega)|f|.
\end{align*}
Substituting into \eqref{eq:uniq1}, we get
\begin{equation}\label{eq:uniq2}
\begin{split}
|\mathcal L^{(n)}_\omega f -\mathcal L_{k,\omega}^{(n)}f| &\leq
D'(\sigma^n \omega) \tau_k \sum_{j=0}^{n-1} e^{j(\lambda_2+\epsilon)} \Big( \alpha_{\omega}^{(n-j-1)}\|f\|+C'(\sigma^{n-j-1}\omega)|f| \Big)\\
&\leq D'(\sigma^n \omega) \tau_k \sum_{j=1}^{n} e^{j(\lambda_2+\epsilon)}
\Big( A(\omega)e^{(\kappa+\epsilon)(n-j)}\|f\|+\tld{C}(\sigma^{n}\omega)e^{2\epsilon j} |f| \Big),
\end{split}
\end{equation}
where $\tld{C}$ is such that for every $l\in \mathbb{Z}$, $C'(\sigma^l \omega)\leq \tld{C}(\omega) e^{2\epsilon|l|}$.
Now, since $\mc{R}_k$ splits, $\dim E_{k,1}(\omega)<\infty$. In particular, there exists a measurable function $J_k(\omega)>0$ such that
$\sup_{f\in E_{k,1}(\omega)\setminus \{0\}} \frac{\|f\|}{|f|}\leq J_k(\omega)$.
Let $M_{J_k}$ be such that $\mathbb P(\{\omega: J_k(\omega)<M_{J_k}\})>0.9$, and let $G_{J_k}=\{\omega: J_k(\omega)<M_{J_k}\}$.
Thus, if $\omega \in G_{J_k}$, and $f\in E_{k,1}(\omega)$,
\begin{align}\label{eq:contWeakNorm3}
|\mathcal L^{(n)}_\omega f -\mathcal L_{k,\omega}^{(n)}f| &\leq \tau_k D''(\sigma^n \omega) A'(\omega) \big(n e^{(\lambda_2+\epsilon)n} M_{J_k}+ 1 \big)|f|,
\end{align}
where $D''= D' \max (1, \tld{C})$ and $A'=\max\big(A,\frac{1}{1-e^{\lambda_2+3\epsilon}}\big)$, with $\epsilon$ chosen so that $\lambda_2+3\epsilon<0$.
Let $M_A, M_{D}$ be such that $\mathbb P(\{\omega: A'(\omega)<M_A\})>0.9$ and $\mathbb P(\{\omega: D''(\omega)<M_D\})>0.9$. Let $G_A=\{\omega: A'(\omega)<M_A\}$, $G_D=\{\omega: D''(\omega)<M_D\}$.
Thus, if $k$ is sufficiently large, there exists $N=N_k$ such that if $n\geq N$, $\omega \in G_A\cap G_{J_k}$\footnote{Let us observe that the intersection is non-empty by the choice of $M_{J_k}$ and $M_A$.}, $\sigma^n \omega \in G_D$, and $f\in E_{k,1}(\omega)$, we have
\begin{equation}\label{eq:contWeakNorm1}
|\mathcal L^{(n)}_\omega f -\mathcal L_{k,\omega}^{(n)}f| \leq \tfrac{1}{4}|f|.
\end{equation}
Now suppose also $\int f \,dm=0$.
Then, the assumptions on $\mc{R}$ show that there exists $N'=N'(\omega)$ such that for every $n\geq N'$,
$|\mathcal L^{(n)}_\omega f|<\tfrac{1}{4}|f|$.
Combining with \eqref{eq:contWeakNorm1}, we get that there exists $\tld{N}=\tld{N}_k(\omega)$ such that if $n\geq \tld{N}$,
$\omega \in G_A\cap G_{J_k}$, $\sigma^n \omega \in G_D$, then
\begin{equation}\label{eq:contWeakNorm2}
|\mathcal L_{k,\omega}^{(n)}f| \leq \tfrac{1}{2}|f|,
\end{equation}
for every $f\in E_{k,1}(\omega)$ with $\int f \,dm=0$.
Now, suppose for a contradiction that $d_{k,1}:=\dim E_{k,1}>1$ (recall that $\dim E_{k,1}$ is $\mathbb P$-almost everywhere constant).
We can find a constant $M_{\tld{N}_k}$ such that the set
$$G_k:=\big\{\omega\in \Omega: J_k(\omega)<M_{J_k},\ A'(\omega)< M_A,\ D''(\omega)< M_D,\ \tld{N}_k(\omega)< M_{\tld{N}_k},\ \dim E_{k,1}(\omega)=d_{k,1} \big\}$$ has positive $\mathbb P$-measure.
Then, by Birkhoff's ergodic theorem, for each $\epsilon>0$ there is a subset $G'_k\subset G_k$ of full $\mathbb P$-measure in $G_k$, such that for each $\omega \in G'_k$, there exists $N_0=N_0(\omega)$ such that for every $n\geq N_0$,
there exists a sequence $0= n_0<n_1<n_2<\dots< n_l \leq n$, with $l\geq \frac{n(1-\epsilon)\mathbb P(G_k)}{M_{\tld{N}_k}}$, such that
(i) for every $0\leq j \leq l$, $\sigma^{n_j}\omega \in G_k$, and
(ii) for every $0\leq j < l$, $n_{j+1}-n_{j}\geq M_{\tld{N}_k}$.
The $M_{\tld{N}_k}$ in the denominator comes from condition (ii), as we may have to choose one out of each $M_{\tld{N}_k}$ visits to $G_k$ to ensure this, in the worst-case scenario.
Let $\omega \in G_k'$, and $f\in E_{k,1}(\omega)$ be such that $\int f \,dm=0$ and $|f|=1$.
Using \eqref{eq:contWeakNorm2} at times $n_1, \dots, n_l$, we get that
$|\mathcal L_{k,\omega}^{(n_l)}f| \leq \frac{1}{2^l}$,
so $\frac{1}{n_l}\log |\mathcal L_{k,\omega}^{(n_l)}f| \leq - \frac{l}{n_l} \log 2 \leq - \frac{(1-\epsilon)\mathbb P(G_k)}{M_{\tld{N}_k}} \log 2$. Thus, $\liminf_{n\to \infty}\frac{1}{n}\log |\mathcal L_{k,\omega}^{(n)}f|<0$.
Since $f\in E_{k,1}(\omega)$, this contradicts the fact that $\lambda_{k,1}=0$, because
\cite[Theorem 3.3]{FroylandStancevic} shows that $\mathbb P$-almost surely, $\liminf_{n\to \infty}\frac{1}{n}\log |\mathcal L_{k,\omega}^{(n)}f|=\lambda_{k,1}$. That is, the Lyapunov exponents of $f$ with respect to strong and weak norms coincide.
Therefore, $\dim(E_{k,1})=1$ for every sufficiently large $k$.
Existence and uniqueness of a random acim for $\mc{R}_k$ follow from the previous step.
\item
\textit{Fibrewise convergence of random acims.}\\
For each sufficiently large $k$, let $F_k$ be the unique random acim guaranteed by the previous step.
Let
\begin{align*}
R_{k,n, \omega}:=(\mathcal L^{(n)}_{\sigma^{-n} \omega} - \mathcal L^{(n)}_{k,\sigma^{-n} \omega}) 1.
\end{align*}
We keep the notation as above.
Starting from Equation~\eqref{eq:uniq2} and arguing as in the previous step, we get
\begin{align*}
|R_{k,n,\omega}|&\leq \tau_k D''(\omega) A'(\sigma^{-n}\omega) K \|1\|,
\end{align*}
where $K:=\max_{n\in \mathbb{N}} \big(n e^{(\lambda_2+\epsilon)n} + 1 \big)$.
Thus, whenever $\omega, \sigma^{-n} \omega \in G:=G_A\cap G_D$,
\begin{align}\label{eq:boundRem}
|R_{k,n,\omega}|&\leq \tau_k M_D M_A K \|1\|.
\end{align}
Since $\mathbb P(G)>0$, the Poincar\'e recurrence theorem yields a set $G'\subset G$ with $\mathbb P(G')>0$ such that for every $\omega \in G'$ there exists an infinite sequence $(n_j)$ with $\sigma^{-n_j}\omega \in G$.
Recalling that $\lim_{k\to\infty}\tau_k=0$, \eqref{eq:boundRem} shows that for every $\omega \in G'$,
\begin{align*}
\lim_{k\to \infty}|R_{k,n_j,\omega}|= 0, \text{ uniformly in }j.
\end{align*}
Furthermore, as we showed in steps (I) and (II), $\mathcal
L^{(n)}_{\sigma^{-n}\omega}1$ converges to $f_\omega$ as $n\to\infty$ and for
sufficiently large $k$, $\mathcal L^{(n)}_{k,\sigma^{-n}\omega}1$ converges to
$f_{k,\omega}$.
Hence, all the following limits exist (in $|\cdot|$) and
\begin{align*}
f_\omega-f_{k,\omega}=\lim_{n\to \infty} \mathcal L_{\sigma^{-n}\omega}^{(n)} 1 - \mathcal L_{k,\sigma^{-n}\omega}^{(n)} 1=\lim_{n\to \infty} R_{k,n,\omega}.
\end{align*}
Therefore, for every $\omega \in G'$, $\lim_{k\to \infty}|f_\omega-f_{k,\omega}|=\lim_{k\to \infty} \lim_{j\to \infty} |R_{k,n_j,\omega}|=0$.
Since the set of $\omega$ for which $\lim_{k\to \infty}f_{k,\omega}= f_\omega$ is $\sigma$-invariant, ergodicity of $\sigma$ yields fibrewise convergence for $\bbp\text{-a.e. } \om \in \Om$, as claimed.\qed
\end{enumerate}
\end{proof}
\section{Examples}\label{sec:Ex}
The first subsection, \S\ref{S:LYMaps}, is devoted to introducing the setting of random Lasota-Yorke maps.
As the hypotheses of the stability theorem of \S\ref{S:StabilityResult} do not hold in the classical setting of functions of bounded variation, we present the alternative norms used, together with some of their relevant properties.
Three applications of the stability theorem in the context of random Lasota-Yorke maps are presented in \S\ref{Ssec:Ulam}--\S\ref{Ssec:DetPert}. These correspond to Ulam approximations, perturbations with additional randomness arising from convolution with non-negative kernels, and static perturbations,
respectively.
In \S\ref{sec:numEx}, we illustrate the results with a numerical example.
\subsection{Setting: Random (non-autonomous) Lasota-Yorke maps}\label{S:LYMaps}
For $0<\gamma\leq 1$, let $\text{LY}^\gamma$ be the space of finite-branched piecewise $C^{1+\gamma}$ expanding maps of the interval.
For each $T\in \text{LY}^\gamma$, let $\nint^T$ be the number of branches of $T$, and let $\{ I_i^T\}_{1\leq i \leq \nint^T }$ be the intervals of continuity of $T$ and $DT$.
We recall the definition of the metric $d_{\text{LY}}$ on the space of Lasota-Yorke maps $\text{LY}^{\gamma}$ used in \cite{GTQuas}
\footnote{For convenience, the $\gamma$ dependence of the distance does not explicitly appear in the notation. We think of $\gamma$ as fixed.}.
Let $S,T\in\text{LY}^{\gamma}$. Let the branches for $S$ be
$(I^S_i)_{i=1}^{\nint^S}$ and for $T$ be $(I^T_i)_{i=1}^{\nint^T}$, considered
as \emph{ordered} collections. If $\nint^T\ne \nint^S$,
or $I^S_i\cap I^T_i=\emptyset$ for some $i$, we define $d_{\text{LY}}(S,T)=1$.
Otherwise we define
\[
d_{\text{LY}}(S,T)=
\max_i \|(S_i-T_i)\vert_{I_i^S\cap I_i^T}\|_{C^{1+\gamma}} + \max_i\Big | \|S_i \|_{C^{1+\gamma}}- \|T_i \|_{C^{1+\gamma}} \Big| +\max_i
d_H(I_i^S, I_i^T),
\]
where $d_H$ denotes Hausdorff distance. We endow $\text{LY}^{\gamma}$ with the corresponding Borel
$\sigma$-algebra $\mc{B}_{LY}$.
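The metric above can be sketched numerically. The following simplification (our illustration, not from the paper) approximates each $C^{1+\gamma}$ norm by $\sup|f|+\sup|Df|$ on a grid, omitting the H\"older seminorm of the derivative.

```python
import numpy as np

def c1_norm(f, df, xs):
    # crude stand-in for the C^{1+gamma} norm: sup|f| + sup|Df| on a grid
    # (the Hoelder seminorm of Df is omitted in this sketch)
    return np.max(np.abs(f(xs))) + np.max(np.abs(df(xs)))

def d_ly(S_branches, T_branches, grid=2000):
    """Sketch of d_LY. Each map is an ordered list of branches (f, Df, (a, b))."""
    if len(S_branches) != len(T_branches):
        return 1.0
    t1 = t2 = t3 = 0.0
    for (f, df, (aS, bS)), (g, dg, (aT, bT)) in zip(S_branches, T_branches):
        a, b = max(aS, aT), min(bS, bT)
        if a >= b:                       # corresponding branches do not overlap
            return 1.0
        xs = np.linspace(a, b, grid)
        t1 = max(t1, c1_norm(lambda x: f(x) - g(x),
                             lambda x: df(x) - dg(x), xs))
        t2 = max(t2, abs(c1_norm(f, df, np.linspace(aS, bS, grid))
                         - c1_norm(g, dg, np.linspace(aT, bT, grid))))
        t3 = max(t3, max(abs(aS - aT), abs(bS - bT)))  # Hausdorff term
    return t1 + t2 + t3

doubling = [(lambda x: 2 * x,     lambda x: np.full_like(x, 2.0), (0.0, 0.5)),
            (lambda x: 2 * x - 1, lambda x: np.full_like(x, 2.0), (0.5, 1.0))]
shifted  = [(lambda x: 2 * x + 0.01, lambda x: np.full_like(x, 2.0), (0.0, 0.5)),
            (lambda x: 2 * x - 0.99, lambda x: np.full_like(x, 2.0), (0.5, 1.0))]
```

Identical maps are at distance zero, mismatched branch structures are at distance one, and a small vertical shift of each branch yields a correspondingly small distance.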
\begin{defn}
Let $(\Omega,\mc{F})$ be a measurable space.
A \textbf{random Lasota-Yorke map} $\mc{T}$ is a measurable function $\mc{T}: (\Omega, \mc{F}) \to (\text{LY}^\gamma, \mc{B}_{LY})$.
\end{defn}
\begin{rmk}
A random Lasota-Yorke map can be made into a random dynamical system by fixing a separable Banach space $X$ on which transfer operators $\mathcal L_{\omega}$ associated to $\mc{T}(\omega)=:T_\omega$ are bounded linear maps.
\end{rmk}
Let $\mc{T}$ be a random Lasota-Yorke map.
Also, assume there exist uniform bounds $\nint, \dist, \mu$ as follows:
\begin{enumerate}
\renewcommand{\theenumi}{M\arabic{enumi}}
\renewcommand{\labelenumi}{(\theenumi)}
\item \label{it:M1} $\nint_\omega\leq \nint$, where $\nint_\omega$ is the cardinality of the partition $\{I_{1,\omega}, \dots, I_{\nint_\omega, \omega}\}$ of $\dom$ into domains of differentiability of $T_\omega$,
\item \label{it:M2} $\|T_{i,\omega}^{\pm 1}\|_{C^{1+\gamma}}\leq \dist$,
\item \label{it:M3} $\essinf_{\omega\in\Omega,x\in \dom}|(DT_\omega)_x|>\mu>1$.
\end{enumerate}
\begin{rmk}\label{rmk:UnifBdNorm}
It follows from \cite[\S4]{BaladiGouezel} (see also \cite{Thomine} or \cite[\S3]{GTQuas}) that
conditions \eqref{it:M1}--\eqref{it:M3} yield $\esssup_{\omega\in \Omega} \|\mathcal L_\omega\|<\infty$.
\end{rmk}
Furthermore, assume $\mc{T}$ enjoys Buzzi's \textit{random covering condition} \cite{BuzziEDC}.
That is, assume that for every non-trivial interval $J\subset I$ and $\bbp\text{-a.e. } \om \in \Om$, there exists some $n\in \mathbb{N}$ such that $T_\omega^{(n)}(J)=I \pmod{0}$.
This ensures $\dim(E_1)=1$ \cite{BuzziEDC}, as required by Theorem~\ref{thm:StabRandomAcim}.
Let us call $F$ the unique random acim of $\mc{T}$.
Next, let us observe that conditions \eqref{it:M1}--\eqref{it:M3} above yield, for each $N\in \mathbb{N}$, a Lasota-Yorke inequality
for the norms $BV, L^1$ of the form:
\begin{equation}\label{eq:L1BV-LY-N}
\|\mathcal L_{\omega}^{(N)} f\|_{BV}\leq \alpha_N(\omega) \|f\|_{BV}+B_N(\omega)\|f\|_1,
\end{equation}
for $\bbp\text{-a.e. } \om \in \Om$.
Furthermore, the work of Rychlik \cite{Rychlik} shows that if $N$ is sufficiently large (depending on the constants from \eqref{it:M1}--\eqref{it:M3}), one can ensure $\int \log \alpha_N(\omega)\, d\mathbb P<0$.
We will assume the above holds for $N=1$, and discuss the modifications needed to cover the case $N>1$ separately in each subsection. That is, we assume for $\bbp\text{-a.e. } \om \in \Om$,
\begin{equation}\label{eq:L1BV-LY}
\|\mathcal L_{\omega} f\|_{BV}\leq \alpha(\omega) \|f\|_{BV}+B(\omega)\|f\|_1,
\end{equation}
with $\int \log \alpha(\omega)\, d\mathbb P<0$.
Also, since $\mathcal L_{\omega}$ preserves the non-negative cone and $\int f\,dm =\int \mathcal L_\omega f \,dm$, we have
$\|\mathcal L_{\omega}^{(n)}\|_1\leq 1$ for $\bbp\text{-a.e. } \om \in \Om$ and $n\in \mathbb{N}$.
Thus, Lemma~\ref{lem:PowerBoundStrongNorm} ensures $\sup_{n\in \mathbb{N}}\|\mathcal L_{\sigma^{-n}\omega}^{(n)}\|_{BV}$ is tempered with respect to $\sigma$.
Further,
\begin{equation}\label{eq:LinftyBd}
\|\mathcal L_{\sigma^{-n}\omega}^{(n)} f\|_\infty\leq \|\mathcal L_{\sigma^{-n}\omega}^{(n)} 1\|_\infty \|f\|_\infty\leq \|\mathcal L_{\sigma^{-n}\omega}^{(n)} 1\|_{BV} \|f\|_\infty,
\end{equation}
which ensures $\sup_{n\in \mathbb{N}}\|\mathcal L_{\sigma^{-n}\omega}^{(n)}\|_{\infty}$ is tempered as well.
The Riesz-Thorin interpolation theorem (see e.g. \cite[VI.10.12]{DunfordSchwartz}) then yields temperedness of $\sup_{n\in \mathbb{N}}\|\mathcal L_{\sigma^{-n}\omega}^{(n)}\|_{p}$ for every $1\leq p \leq \infty$.
It follows from \cite[\S3]{GTQuas}, which in turn builds on \cite{BaladiGouezel}, that there exist
$t,p\in \mathbb{R}$, with
\[
0<t<\tfrac{1}{p}<1,
\]
such that $\mc{T}$ splits with respect to ${\mc{H}_p^t}$, where ${\mc{H}_p^t}$ is the subset of the fractional Sobolev space with parameters $p$ and $t$ consisting of elements supported in the interval $I$.
That is,
\[
{\mc{H}_p^t}= \mc{F}^{-1}(m_{-t} \mc{F}(L_p)) \cap \{f\in L_p: \mathop{\mathrm{supp}}(f)\subset I\},
\]
where $\mc{F}$ denotes the Fourier transform, and $m_t(\zeta):=
(1+|\zeta|^2)^{\frac{t}{2}}$.
The norm on ${\mc{H}_p^t}$ is given by
\[
\|f\|_{{\mc{H}_p^t}}:=\|\mc{F}^{-1}(m_t \mc{F}(f))\|_{p}.
\]
The specific choice of parameters $p$ and $t$ depends on the random map.
In general, $p>1$ may need to be chosen arbitrarily close to 1. Also, $0<t<\frac{1}{p}$ is a smoothness parameter, and it cannot be larger than the smoothness of the (derivative of the) maps.
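For intuition on the norm just defined, here is a discrete sketch (our illustration, for $p=2$ only, where Parseval makes the norm directly computable) of $\|f\|_{{\mc{H}_p^t}}$ for a function sampled on a uniform grid:

```python
import numpy as np

def sobolev_norm(f_vals, t, L=1.0):
    """Discrete sketch of the H^t_2 norm of a function sampled on [0, L):
    apply the multiplier m_t(zeta) = (1 + |zeta|^2)^(t/2) on the Fourier
    side and take the l^2 norm (Parseval); p = 2 only."""
    n = len(f_vals)
    zetas = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # frequencies zeta
    fhat = np.fft.fft(f_vals) / n
    mult = (1.0 + np.abs(zetas) ** 2) ** (t / 2.0)
    return np.sqrt(np.sum(np.abs(mult * fhat) ** 2))

grid = np.arange(128) / 128
f0 = np.ones(128)                     # constant function
f1 = np.cos(2 * np.pi * grid)         # one oscillation on [0, 1)
```

For $t=0$ this reduces to the (normalized) $L_2$ norm; increasing $t$ weights high frequencies more heavily, which is what makes the norm sensitive to smoothness.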
It was also shown in \cite[\S3]{GTQuas} that there are
constants $\tld{N} \in \mathbb{N}, \tld{\alpha}_{\tld{N}}(\omega), \tld{B}_{\tld{N}}(\omega)>0$ such that for every $f\in {\mc{H}_p^t}$, and $\bbp\text{-a.e. } \om \in \Om$,
\begin{equation}\label{eq:LY-hpt-lp}
\|\mathcal L_{\omega}^{\tld{N}}f\|_{{\mc{H}_p^t}} \leq \tld{\alpha}_{\tld{N}}(\omega) \|f\|_{{\mc{H}_p^t}}+\tld{B}_{\tld{N}}(\omega)\|f\|_p,
\end{equation}
with $\int \log\tld{\alpha}_{\tld{N}}(\omega)\, d\mathbb P<0$.
From temperedness of $\sup_{n\in \mathbb{N}}\|\mathcal L_{\sigma^{-n}\omega}^{(n)}\|_{p}$, a second application of Lemma~\ref{lem:PowerBoundStrongNorm} ensures $\sup_{n\in \mathbb{N}}\|\mathcal L_{\sigma^{-n}\omega}^{(n)}\|_{{\mc{H}_p^t}}$ is tempered with respect to $\sigma$.
Finally, we fix our attention on yet another pair of norms, which will be the pair for which we use the results of \S\ref{S:StabilityResult}.
This choice is motivated by the necessity of a splitting for $\mc{T}$ with respect to the weak norm, coming from Theorem~\ref{thm:StabRandomAcim}.
Let
$t$ and $p$ be as above, and let
\[
0<t'<t
\]
be sufficiently close to $t$ so that $\mc{T}$ splits with respect to ${\mc{H}_p^{t'}}$ (by
the same argument as above).
The ${\mc{H}_p^t}$ norm is stronger than the ${\mc{H}_p^{t'}}$ norm, and the embedding ${\mc{H}_p^t}\hookrightarrow {\mc{H}_p^{t'}}$ is compact.
Also, the ${\mc{H}_p^{t'}}$ norm is stronger than the $L_p$ norm, so \eqref{eq:LY-hpt-lp} implies
\[
\|\mathcal L_{\omega}^{\tld{N}}f\|_{{\mc{H}_p^t}} \leq \tld{\alpha}_{\tld{N}}(\omega) \|f\|_{{\mc{H}_p^t}}+\tld{B}_{\tld{N}}(\omega)
\|f\|_{{\mc{H}_p^{t'}}},
\]
for every $f\in {\mc{H}_p^t}$, and $\bbp\text{-a.e. } \om \in \Om$, with $\int \log\tld{\alpha}_{\tld{N}}(\omega)\, d\mathbb P<0$.
As before, we will assume this inequality holds with $\tld{N}=1$,
and discuss the case $\tld{N}>1$ separately in each section.
We conclude this section with the following remark,
which will allow us to work interchangeably on either $({\mc{H}_p^t}, {\mc{H}_p^{t'}})$, or the corresponding fractional Sobolev spaces on the circle, $({{H}_p^t(\T)}, {H}_p^{t'}(\mathbb{T}))$.
\begin{rmk}\label{rmk:EquivHptNorms}
Let ${{H}_p^t(\T)}$ be the fractional Sobolev space on the circle. That is,
\begin{equation}\label{def:hptorus}
{{H}_p^t(\T)}=\big\{ f \in D'(\mathbb{T}): \|f\|_{{{H}_p^t(\T)}}:= \big\|\sum_{j\in \mathbb{Z}}\langle j\rangle^{t/2} \hat{f}(j) e^{2\pi ijx} \big\|_p <\infty \big\},
\end{equation}
where $\langle j \rangle := 1+j^2$ and $D'(\mathbb{T})$ is the dual of the space of $C^\infty$ functions on $\mathbb{T}$.
Then, if $0<t<\frac{1}{p}$, $\|\cdot\|_{{{H}_p^t(\T)}}$
and $\|\cdot\|_{{\mc{H}_p^t}}$ are equivalent norms \cite[\S4.3.2 \& \S4.11.1]{TriebelInterpolation}.
\end{rmk}
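For readers who wish to experiment numerically, the torus norm \eqref{def:hptorus} can be approximated on a uniform grid via the FFT. The following sketch (Python with NumPy; an illustration added here, not part of the argument) applies the multiplier $\langle j\rangle^{t/2}$ to the discrete Fourier coefficients and takes the $L_p$ norm with respect to normalized Lebesgue measure.

```python
import numpy as np

def sobolev_norm_torus(f_vals, t, p):
    """Discrete proxy for the H_p^t(T) norm of Remark (EquivHptNorms):
    || sum_j <j>^{t/2} fhat(j) e^{2 pi i j x} ||_p, with <j> = 1 + j^2.
    f_vals: samples of f on the uniform grid x_k = k/N."""
    N = len(f_vals)
    fhat = np.fft.fft(f_vals) / N            # Fourier coefficients fhat(j)
    j = np.fft.fftfreq(N, d=1.0 / N)         # integer frequencies, FFT ordering
    g_hat = (1.0 + j ** 2) ** (t / 2) * fhat # apply the multiplier <j>^{t/2}
    g = np.fft.ifft(g_hat) * N               # back to physical space
    return np.mean(np.abs(g) ** p) ** (1.0 / p)  # L_p norm w.r.t. normalized m
```

For instance, for $f(x)=\cos(2\pi x)$ only the modes $j=\pm 1$ are present, each weighted by $2^{t/2}$, so the value is $2^{t/2}$ times the $L_p$ norm of $f$.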
\subsubsection{Approximation by smooth functions}\label{S:approxSmoothFunctions}
Let $f\in {\mc{H}_p^t}$ and $\epsilon>0$. Set
\begin{equation}\label{eq:fep}
f_\epsilon:=\sum_{j\in \mathbb{Z}} e^{-\epsilon (1+(2\pi j)^2)} a_j \phi_j(x),
\end{equation}
where $\phi_j(x)=e^{2\pi i j x}$ and $a_j:=\hat{f}(j)=\int_{0}^1 \overline{\phi_j(y)}f(y)\, dy$.
Then, $f_\epsilon \in C^\infty$. Since $|a_j|\leq \|f\|_1$, a direct calculation shows
\begin{equation}\label{eq:BdFepC2}
\|f_\epsilon\|_{C^2}\leq C_\# \sum_{j\in \mathbb{Z}} e^{-\epsilon (1+(2\pi j)^2)} \big(1+(2\pi j)^2\big)\|f\|_1\leq C(\epsilon)\|f\|_{{\mc{H}_p^t}},
\end{equation}
for some decreasing function $C:(0,\infty)\to \mathbb{R}_+$ such that $\lim_{\epsilon\to 0^+} C(\epsilon)=\infty$.
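The mollification \eqref{eq:fep} is straightforward to implement: damp each Fourier mode by the heat-kernel factor $e^{-\epsilon(1+(2\pi j)^2)}$. The following sketch (Python with NumPy; an illustration only, not part of the argument) does this on a uniform grid, and can be used to observe that the $L_1$ error decreases as $\epsilon\to 0^+$, while small $\epsilon$ costs regularity, in line with the blow-up of $C(\epsilon)$.

```python
import numpy as np

def mollify(f_vals, eps):
    """Smooth approximation f_eps of eq. (fep): damp each Fourier mode
    a_j = fhat(j) by the heat-kernel factor e^{-eps(1+(2 pi j)^2)}."""
    N = len(f_vals)
    a = np.fft.fft(f_vals) / N               # Fourier coefficients a_j
    j = np.fft.fftfreq(N, d=1.0 / N)         # integer frequencies
    damp = np.exp(-eps * (1.0 + (2 * np.pi * j) ** 2))
    return np.real(np.fft.ifft(a * damp) * N)
```

Applied to a step function, the $L_1$ distance to $f$ shrinks as $\epsilon$ decreases, while the reconstruction stays uniformly bounded.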
\begin{conv}
For the remainder of the section, whenever $\|\cdot\|$ and $|\cdot|$ appear, they stand for $\|\cdot\|_{{\mc{H}_p^t}}$ and $\|\cdot\|_{{\mc{H}_p^{t'}}}$, respectively. In particular, whenever the hypotheses of Theorem~\ref{thm:StabRandomAcim} are verified, it is meant with respect to this pair of norms.
\end{conv}
We will make use of the following approximation lemma, whose proof is deferred until \S\ref{pf:regApprox}.
\begin{lem}\label{lem:regApprox}
Let $f\in {\mc{H}_p^t}$. Then,
$|f_\epsilon-f| \leq C_\# \epsilon^{\frac{t-t'}{2}} \|f\|$.
\end{lem}
\subsection{The Ulam scheme}\label{Ssec:Ulam}
For each $k\in \mathbb{N}$, let $\mc{P}_k=\{B_1, \dots, B_k\}$ be a partition of $\dom$ into $k$ subintervals of uniform length, called bins.
Let $\mathbb E_k$ be given by the formula
\[
\mathbb E_k(f)=\sum_{j=1}^k \frac{1}{m(B_j)}\Big(\int_{B_j} f\,dm \Big)1_{B_j},
\]
where $m$ denotes normalized Lebesgue measure on $\dom$.
Let $\mathcal L_k$ be defined fibrewise by $\mathcal L_{k,\omega}:=\mathbb E_k \mathcal L_\omega$ for each $\omega \in \Omega$.
This is the well-known Ulam discretization \cite{Ulam}, in the context of non-autonomous systems. It provides a way of approximating the transfer operator $\mathcal L$ by a sequence of (fibrewise) finite-rank operators $\mathcal L_k$, each
taking values in the space of functions that are constant on each bin.
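Concretely, on densities that are constant on bins, $\mathcal L_{k,\omega}$ acts as a $k\times k$ matrix with entries $m(B_i\cap T_\omega^{-1}B_j)/m(B_i)$. The following sketch (Python with NumPy; an illustration, not from the paper) estimates this matrix by sampling each bin; the doubling map used below is our choice of example, not one of the maps considered here.

```python
import numpy as np

def ulam_matrix(T, k, n_sub=100):
    """Ulam discretization of the transfer operator of a map T on [0,1):
    entry P[j, i] estimates m(B_i \\cap T^{-1} B_j) / m(B_i) by sampling
    n_sub points per source bin B_i.  Columns are source bins, rows targets."""
    P = np.zeros((k, k))
    for i in range(k):
        # midpoints of n_sub sub-cells of bin B_i
        xs = (i + (np.arange(n_sub) + 0.5) / n_sub) / k
        js = np.minimum((T(xs) * k).astype(int), k - 1)  # target bin of each sample
        for j in js:
            P[j, i] += 1.0 / n_sub
    return P

# Illustrative map (our choice): the doubling map T(x) = 2x mod 1.
P = ulam_matrix(lambda x: (2 * x) % 1.0, k=16)
f = np.random.rand(16)
f /= f.mean()              # a density, constant on bins, of mean one
for _ in range(50):
    f = P @ f              # iterate the Ulam operator
# Lebesgue is the acim of the doubling map, so f converges to the constant 1.
```

Each column of $P$ sums to one (mass is redistributed, not created), and iterating the matrix drives any initial density to the invariant density of the example map.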
It is well known that $\mathbb E_k$ is a contraction in $L_p$. For ${\mc{H}_p^t}$ norms, we are not aware of any similar results in the literature. Here we establish the following, which may be of independent interest.
\begin{lem}\label{thm:hptbound}
Let $p>1$ and let $0<t<1/p$. There exists a constant $C_\#$
such that $\|\mathbb E_k f\| \le C_\#\|f\|$ for all $k\ge 1$ and all $f\in {\mc{H}_p^t}$\footnote{We remind the reader that $C_\#$ may depend on parameters $p,t$, and that $\|\cdot\|$ denotes $\|\cdot\|_{{\mc{H}_p^t}}$ throughout this section.}.
\end{lem}
The proof of Lemma~\ref{thm:hptbound} is deferred until \S\ref{S:pfBddEnHpt}.
We can therefore obtain a uniform inequality like \eqref{eq:UnifLY}, provided the expansion of $T_\omega$ is sufficiently strong.
\begin{thm}\label{thm:Ulam}
Let $\mathcal L$ be a non-autonomous Lasota-Yorke map, as defined in \S\ref{S:LYMaps}.
For each $k\in \mathbb{N}$, let $\mathcal L_k$ be the sequence of Ulam discretizations, corresponding to the partition $\mc{P}_k=\{B_1, \dots, B_k\}$ introduced above.
Assume $\mathcal L$ satisfies the Lasota-Yorke inequalities \eqref{eq:L1BV-LY} and \eqref{eq:GenUnifLY} with
$\alpha$ and $\tld{\alpha}$ such that $\int \log \alpha(\omega)\, d\mathbb P<0$ and $\log C_\# + \int \log \tld{\alpha}(\omega)\, d\mathbb P<0$, where $C_\#=\sup_{k\in \mathbb{N}} |\mathbb E_k|$, guaranteed to be finite by Lemma~\ref{thm:hptbound}\footnote{If the contraction condition of Theorem~\ref{thm:Ulam} requires taking either $N>1$ in \eqref{eq:L1BV-LY-N} and/or $\tld{N}>1$ in \eqref{eq:LY-hpt-lp}, the conclusions remain valid provided the projection $\mathbb E_k$ is taken after $\max\{N, \tld{N}\}$ compositions.}.
Then, for each sufficiently large $k$, $\mathcal L_k$ has a unique random acim.
Let $\{F_k\}_{k\in \mathbb{N}}$ be the sequence of random acims for $\mathcal L_k$. Then, $\lim_{k\to \infty}F_k=F$ fibrewise in $|\cdot|$.
\end{thm}
\begin{proof}
We will verify the assumptions of Theorem~\ref{thm:StabRandomAcim}. \eqref{it:TopExp} is immediate.
The assumptions combined with Lemma~\ref{thm:hptbound} ensure that \eqref{it:GenUnifLY} holds as well.
Condition~\eqref{it:PowBd} follows exactly as in the bootstrapping argument in \S\ref{S:LYMaps}.
The last condition to check in order for Theorem~\ref{thm:StabRandomAcim} to apply is \eqref{it:SmallPert}, which follows from the next proposition.
\begin{prop}\label{lem:smallPertUlam}
There exists a sequence $\{\tau_k\}_{k>0}$ with $\lim_{k\to \infty}\tau_k= 0$ such that for $\bbp\text{-a.e. } \om \in \Om$,
\[
\sup_{\|g\|=1} |(\mathcal L_\omega-\mathcal L_{k,\omega})g|\leq \tau_k.
\]
\end{prop}
\begin{proof}[Proof of Proposition~\ref{lem:smallPertUlam}]
Let $\eta_k= m(B_j)=\frac{1}{k}$ be the diameter of the partition elements of $\mc{P}_k$.
Let $f\in {\mc{H}_p^t}$, and for each $\epsilon>0$, let $f_\epsilon$ be as in \eqref{eq:fep}. For each $k\in \mathbb{N}$, we have
\begin{align}\label{eq:UlamError}
|(\mathcal L_{k,\omega}-\mathcal L_\omega)f| &\leq |(\mathcal L_{k,\omega}-\mathcal L_\omega)f_\epsilon| + |(\mathcal L_{k,\omega}-\mathcal L_\omega)(f_\epsilon-f)|=: (U1) + (U2).
\end{align}
We will bound each term separately.
The fact that
\begin{equation}\label{eq:U2}
(U2)\leq C_\#\epsilon^{\frac{t-t'}{2}}\|f\|
\end{equation}
follows from Lemma~\ref{lem:regApprox} and Remark~\ref{rmk:UnifBdNorm}, after recalling that $|\mathbb E_k|$ is bounded independently of $k$, by Lemma~\ref{thm:hptbound}.
Now we estimate $(U1)$. Let $\big\{ I_{i,\omega} \big\}_{1\leq i \leq \nint_\omega}$ be the partition of $I$ into domains of differentiability of $T_\omega$, $Q_{i,\omega}=T_\omega (I_{i,\omega})$, and $\xi_{i,\omega}:=\big( T_\omega|_{I_{i,\omega}} \big)^{-1}$.
Then, the transfer operator $\mathcal L_\omega$ is given by the following expression,
\[
\mathcal L_\omega f=\sum_{i=1}^{\nint_\omega} 1_{Q_{i,\omega}} \cdot f \circ \xi_{i,\omega} \cdot |D\xi_{i,\omega}|.
\]
This is the sum of at most $\nint_\omega$ terms of the form $1_J g$, where $J\subset I$ is an interval, and $g=f \circ \xi_{i,\omega} \cdot |D\xi_{i,\omega}|$ for some $i\in \mathbb{N}, \omega\in \Omega$. Furthermore, when $f\in C^\gamma$, each such $g$ is also $C^\gamma$, and $\|g\|_{C^\gamma}\leq C\|f\|_{C^\gamma}$, where $C$ depends on the random map $\mc{T}$, but not on $f$.
In this case, $\mathcal L_\omega f$ may be rewritten as
\[
\mathcal L_\omega f= (\mathcal L_\omega f)^h + (\mathcal L_\omega f)^s,
\]
where $(\mathcal L_\omega f)^h\in C^\gamma$ is such that $\|(\mathcal L_\omega f)^h\|\leq C\|f\|_{C^\gamma}$,
and $(\mathcal L_\omega f)^s$ is the sum of at most $2\nint_\omega$ step functions, with jumps of size at most $\nint_\omega\|f\|_\infty$.
Then, $\|(\mathcal L_\omega f)^s\|_\infty\leq 2\nint_\omega^2\|f\|_\infty$ and
\begin{align*}
(U1)= |(\mathbb E_k -I)\mathcal L_\omega f_\epsilon|\leq |(\mathbb E_k -I)(\mathcal L_\omega f_\epsilon)^h|+|(\mathbb E_k -I)(\mathcal L_\omega f_\epsilon)^s|=: (U11)+(U12).
\end{align*}
To estimate the first term, we rely on the following lemma, whose proof is deferred until \S\ref{pf:UlamError4Holder}.
\begin{lem}\label{lem:UlamError4Holder}
Let $g\in C^\gamma$, with $t'<\min\{\gamma, \frac{1}{p}\}$. Then, $|(\mathbb E_k -I)g|\leq C_\#\|g\|_{C^\gamma}\eta_k^{\gamma-t'}$.
\end{lem}
The previous lemma, combined with the bound on $\|f_\epsilon\|_{C^\gamma}$ implied by \eqref{eq:BdFepC2}, immediately yields
\begin{align}\label{eq:U11}
(U11)\leq C_\#\|(\mathcal L_\omega f_\epsilon)^h\|_{C^\gamma}\eta_k^{\gamma-t'} \leq C_\#C \|f_\epsilon\|_{C^\gamma} \eta_k^{\gamma-t'}
\leq C(\epsilon) \|f\|\eta_k^{\gamma-t'}.
\end{align}
For the second term, we note that $(\mathbb E_k -I)(\mathcal L_\omega f_\epsilon)^s$ is a step function with at most $2\nint_\omega$ steps with non-zero value. Also, the change of variables formula shows that for each interval $J\subset I$, one has $|1_J|\leq C_\# m(J)^{\frac{1}{p}-t'}$. Hence, recalling that $\|(\mathcal L_\omega f)^s\|_\infty\leq 2\nint_\omega^2\|f\|_\infty$, we get
\begin{align}\label{eq:U12}
(U12)\leq 4 \nint_\omega^3 \|f_\epsilon \|_\infty \sup_{1\leq j \leq k} |1_{B_j}| \leq
C_\# \nint_\omega^3 \|f_\epsilon \|_\infty \eta_k^{\frac{1}{p}-t'}
\leq
C_\# \nint_\omega^3 C(\epsilon)\|f\| \eta_k^{\frac{1}{p}-t'},
\end{align}
where the last inequality follows once again from \eqref{eq:BdFepC2}.
Combining \eqref{eq:U2}, \eqref{eq:U11} and \eqref{eq:U12} into \eqref{eq:UlamError}, we get
\begin{align}
|(\mathcal L_{k,\omega}-\mathcal L_\omega)f| &\leq C_\# \Big( \nint^3 C(\epsilon) \eta_k^{\big(\min(\gamma, \frac{1}{p})-t' \big)} + \epsilon^{\frac{t-t'}{2}} \Big) \|f\|,
\end{align}
where $\nint$ is the uniform bound on the number of branches of $T_\omega$, coming from \eqref{it:M1}.
Choosing $\epsilon$ as the infimum of those values for which $\nint^3 C(\epsilon)\leq \eta_k^{-\frac{1}{2}\big(\min(\gamma, \frac{1}{p})-t' \big)}$ provides $\tau_k$ as desired.
\qed \end{proof}
\qed \end{proof}
\subsection{Convolution-type perturbations}\label{Ssec:PertByConvolution}
In this section we make use of the equivalent norm on ${\mc{H}_p^t}$ described in Remark~\ref{rmk:EquivHptNorms}, identifying functions in ${\mc{H}_p^t}$ (and their norms) with functions in ${{H}_p^t(\T)}$.
We consider perturbations of non-autonomous maps that arise from convolution with non-negative kernels $Q_k\in L^1(m)$, with $\int Q_k \,dm=1$. They give rise to transfer operators as follows:
\begin{equation}\label{eq:pertByConv}
\mathcal L_{k,\omega} f(x):= \int \mathcal L_\omega f(y) Q_k(x-y) dy.
\end{equation}
They model at least two interesting types of perturbations:
\begin{enumerate}
\item Small iid noise. In this case, $Q_k$, supported on $[-\frac{1}{k},\frac{1}{k}]$, represents the distribution of the noise, which is added after applying the corresponding map $T_\omega$.
See e.g. \cite[\S3.3]{BaladiBook} for details.
\item Ces\`aro averages of Fourier series. In this case, $Q_k$ is the Fej\'er kernel $Q_k(x)=\frac{\sin(\pi kx)^2}{k\sin(\pi x)^2}$, and $Q_k*f=\frac{1}{k}\sum_{j=0}^{k-1} S_j(f)$, where $S_n(f)(x) =\sum_{j=-n}^n \hat{f}(j) e^{2\pi i j x}$ is the $n$-th partial sum of the Fourier series of $f$.
\end{enumerate}
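The identity $Q_k*f=\frac{1}{k}\sum_{j=0}^{k-1}S_j(f)$ amounts to weighting the $j$-th Fourier mode by $(1-|j|/k)^+$, which is the Fourier transform of the Fej\'er kernel. The following sketch (Python with NumPy; an illustration, not part of the argument) computes the Ces\`aro mean this way and can be used to check the positivity that plain truncation (the Dirichlet kernel) lacks.

```python
import numpy as np

def cesaro_mean(f_vals, k):
    """Cesaro average (1/k) sum_{j=0}^{k-1} S_j(f) of the partial Fourier
    sums of f, computed on a uniform grid via the FFT."""
    N = len(f_vals)
    fhat = np.fft.fft(f_vals) / N
    freqs = np.abs(np.fft.fftfreq(N, d=1.0 / N))
    # Averaging the truncations S_0, ..., S_{k-1} weights mode j by (1-|j|/k)^+,
    # which is exactly the Fourier transform of the Fejer kernel Q_k.
    fejer_mult = np.clip(1.0 - freqs / k, 0.0, None)
    return np.real(np.fft.ifft(fhat * fejer_mult) * N)
```

Since $Q_k\geq 0$, applying this to a non-negative density keeps it non-negative (up to rounding), and the total mass is preserved because $\hat Q_k(0)=1$.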
\begin{rmk}
We point out that the Galerkin projection on Fourier modes, corresponding to truncation of Fourier series, is obtained from convolution with Dirichlet kernels, which are not positive. Although a convergence result in this case remains open, the numerical behavior appears to be good as well.
This is illustrated in \S\ref{sec:numEx}.
\end{rmk}
\begin{thm}\label{thm:pertConv}
Let $\mathcal L$ be a non-autonomous Lasota-Yorke map, as defined in \S\ref{S:LYMaps}, satisfying $\int \log\tld{\alpha}\, d\mathbb P<0$.
Let $\{\mathcal L_k\}_{k\in \mathbb{N}}$ be a family of random perturbations, as in \eqref{eq:pertByConv}, such that $\lim_{k\to \infty}\int Q_k(x)|x|\,dx=0$\footnote{This condition is equivalent to weak convergence of $Q_k$ to $\delta_0$. If the contraction condition of Theorem~\ref{thm:pertConv} requires taking either $N>1$ in \eqref{eq:L1BV-LY-N} and/or $\tld{N}>1$ in \eqref{eq:LY-hpt-lp}, the conclusions remain valid provided the convolutions are taken after $\max\{N, \tld{N}\}$ compositions.}.
Then, for sufficiently large $k$, $\mathcal L_k$ has a unique random acim, which we call $F_k$, and $\lim_{k\to \infty}F_k=F$ fibrewise in $|\cdot|$.
\end{thm}
\begin{proof}
We will show that conditions \eqref{it:TopExp}--\eqref{it:SmallPert} of Theorem~\ref{thm:StabRandomAcim} are satisfied.
\eqref{it:TopExp} is clear.
Recall from e.g. \cite[\S I.2]{Katznelson} that $L_p$ is a homogeneous Banach space. That is, for every $\tau\in \mathbb{T}$, $\|f_\tau\|_p= \|f\|_p$, and $\lim_{\tau\to 0}\|f_\tau-f\|_p=0$, where $f_\tau(x):=f(x-\tau)$.
Hence, it follows from definition \eqref{def:hptorus}
and the fact that $\hat{f_\tau}(j)=e^{-2\pi i \tau j}\hat{f}(j)$ that ${{H}_p^t(\T)}$ is a homogeneous Banach space.
Thus, $\|Q_k* f \|\leq \|Q_k\|_1\|f\|=\|f\|$ (see e.g. \cite[\S I.2]{Katznelson}). This yields \eqref{it:GenUnifLY} and \eqref{it:PowBd}.
In view of Remark~\ref{rmk:UnifBdNorm}, \eqref{it:SmallPert} may be checked as follows.
Let $f_\epsilon$ be as in \eqref{eq:fep}.
For each non-negative $Q\in L^1(m)$, with $\int Q \,dm=1$, we have
\begin{equation}
|(I-Q)*f|\leq|(I-Q)*(f-f_\epsilon)|+|(I-Q)*f_\epsilon|.
\end{equation}
The first term is bounded by $2|f-f_\epsilon|$, since convolution with $Q$ is a contraction on ${\mc{H}_p^{t'}}$, and is therefore controlled by Lemma~\ref{lem:regApprox}.
To control the second term, we consider the map $f\mapsto
(I-Q)*f_\epsilon$ as a Fourier multiplier.
We want to compare the weak norm of
$(I-Q)*f_\epsilon=\sum_je^{-\epsilon(1+(2\pi
j)^2)}(1-\hat Q(j)) \hat{f}(j)\phi_j$
with the strong norm of $f$.
That is, we want to compare the $L_p$ norm of
$$\sum_je^{-\epsilon(1+(2\pi j)^2)}(1+(2\pi j)^2)^{(t'-t)/2}(1-\hat Q(j))a_j\phi_j=:\sum_j c_j a_j\phi_j$$
with the $L_p$ norm of $\sum_j a_j\phi_j$. Here $a_j= \langle j\rangle^{t/2} \hat{f}(j)$, so that $\|f\|=\|\sum_j a_j\phi_j\|_p$, by definition.
This is clearly a Fourier multiplier on $L_p$. We estimate its norm via Lemma~\ref{lem:Lpbound}.
The variation of the coefficients $(c_j)$ is dominated by their $\ell^1$
norm, which is estimated as
\begin{equation}\label{eq:multbound}
\|c_j\|_1\le (2|J|+1)\max_{|j|\le J}|1-\hat Q(j)|+
2\sum_{|j|\ge J}e^{-\epsilon(1+(2\pi j)^2)}(1+(2\pi j)^2)^{(t'-t)/2}.
\end{equation}
We can choose $J$ so that the second term is at most $\delta/(3C_p)$
where $C_p$ is the constant in Lemma~\ref{lem:Lpbound}, so that it
then suffices to force $\max_{|j|\le J}|1-\hat Q(j)|\le
\delta/(3C_p(2|J|+1))$.
Notice that for $|j|\le J$, $|1-\hat Q(j)|=|\int_{-1/2}^{1/2}Q(x)(1-e^{-2\pi
ijx})\,dx|\le \int_{-1/2}^{1/2}Q(x)|1-e^{-2\pi ijx}|\,dx
\le 2\pi J\int_{-1/2}^{1/2}Q(x)|x|\,dx$. Hence, for sufficiently small
values of $\int Q(x)|x|\,dx$, we obtain the required estimate,
$|(I-Q)*f_\epsilon|\leq \delta\|f\|$.
\qed \end{proof}
\subsection{Static perturbations}\label{Ssec:DetPert}
In this section, we establish the following application of Theorem~\ref{thm:StabRandomAcim}.
\begin{thm}\label{thm:NstepStab}
Let $\mc{T}$ be a non-autonomous Lasota-Yorke map, as defined in \S\ref{S:LYMaps}.
For each $k\in \mathbb{N}$, let $\{\mc{T}_k\}_{k\in \mathbb{N}}$ be a family of random Lasota-Yorke maps with the same base as $\mc{T}$, satisfying \eqref{it:M1}--\eqref{it:M3}, with the same bounds as $\mc{T}$.
Assume that there exists a sequence $\{\rho_k\}_{k>0}$ with $\lim_{k\to \infty}\rho_k= 0$ such that for $\bbp\text{-a.e. } \om \in \Om$, $d_{LY}(T_{k,\omega}, T_\omega)\leq \rho_k$.
Let $\mathcal L,\mathcal L_k$ be the corresponding Perron-Frobenius operators associated to $\mc{T}$ and $\mc{T}_k$.
Then, there exist a constant $A_{\mc{T}}$, independent of $n$, and a measurable function $B_{\mc{T}, n}(\omega)$ such that for every sufficiently large $k\in \mathbb{N}$, and $\bbp\text{-a.e. } \om \in \Om$,
\begin{equation}\label{eq:StrongLYi}
\|\mathcal L_{k,\omega}^{(n)}f\|_{{\mc{H}_p^t}} \leq A_{\mc{T}} \nint^{n(1-\frac{1}{p})} \mu^{-n(1+t-\frac{1}{p})} \|f\|_{{\mc{H}_p^t}} + B_{\mc{T}, n}(\omega) \|f\|_p.
\end{equation}
Let $\tld{\alpha}_n:=A_{\mc{T}} \nint^{n(1-\frac{1}{p})} \mu^{-n(1+t-\frac{1}{p})}$, and let $N$ be such that $\int\log\tld{\alpha}_N \,d\mathbb P<0$\footnote{The existence of such an $N$ is guaranteed, provided $p>1$ is sufficiently close to 1. If necessary, also enlarge $N$ so that a $(BV, L_1)$ Lasota-Yorke inequality \eqref{eq:L1BV-LY-N}, with $\int\log\alpha_N \, d\mathbb P<0$, holds for $\mc{T}^N$.}.
Furthermore, suppose that either
\begin{enumerate}[(i)]
\item \label{it:NPTP}
$\essinf_{\omega\in \Omega, 0\leq l< j \leq N} \min_{1\leq i, i' \leq \nint}|T^{(j-l)}_{\sigma^l\omega}a_{i,\sigma^l\omega}-a_{i',\sigma^j\omega}| >0$, where $\{a_{1,\omega}, \dots, a_{\nint, \omega}\}$ are the endpoints of the monotonicity partition of $T_\omega$; or
\item \label{it:enoughExp}
$(\Omega,\mc{F})$ is a compact topological space equipped with its Borel $\sigma$-algebra, the map $\mc{T}:\Omega \to \text{LY}^{1+\gamma}$ is continuous, and $\mu^\gamma>2$, where $\mu$ is as in \eqref{it:M3}, and $\gamma\leq 1$ is the H\"older exponent of $DT_\omega$.
\end{enumerate}
Then, for every sufficiently large $k$, $\mathcal L_k$ has a unique random acim.
Let $\{F_k\}_{k\in \mathbb{N}}$ be the sequence of random acims for $\mathcal L_k$.
Then, $\lim_{k\to \infty}F_k=F$ fibrewise in $|\cdot|$.
\end{thm}
\begin{proof}
We will show that there exists $n\in\mathbb{N}$ such that $\mc{T}^{(n)}$ and $\mc{T}_k^{(n)}$ satisfy the hypotheses of Theorem~\ref{thm:StabRandomAcim}, with $n=N$ in case~\eqref{it:NPTP}.
\eqref{it:TopExp} is easy to verify for $\mc{T}^{(n)}$ and $\mc{T}_k^{(n)}$.
Condition~\eqref{it:PowBd} (with respect to $|\cdot|_{{\mc{H}_p^{t'}}}$) will follow via Lemma~\ref{lem:PowerBoundStrongNorm}, exactly as explained in the bootstrapping argument of \S\ref{S:LYMaps}, once \eqref{it:GenUnifLY} is established (this will be the last step of this proof).
Condition \eqref{it:SmallPert} follows from the next proposition.
\begin{prop}\label{prop:smallPertDeterm}
Assume that there exists a sequence $\{\rho_k\}_{k>0}$ as in the statement of Theorem~\ref{thm:NstepStab}.
Then, for each $n\in \mathbb{N}$, there exists a sequence $\{\tau_k\}_{k>0}$ with $\lim_{k\to \infty}\tau_k= 0$ such that for $\bbp\text{-a.e. } \om \in \Om$,
\[
\sup_{\|g\|=1} |(\mathcal L^{(n)}_\omega-\mathcal L^{(n)}_{k,\omega})g|\leq \tau_k.
\]
\end{prop}
\begin{proof}[Proof of Proposition~\ref{prop:smallPertDeterm}]
We will establish the claim for $n=1$. The general case of fixed $n>1$ follows immediately from the identity
\[
\mathcal L^{(n)}_\omega -\mathcal L_{k,\omega}^{(n)}=
\sum_{j=0}^{n-1}\mathcal L^{(j)}_{\sigma^{n-j}\omega}(\mathcal L_{\sigma^{n-j-1}\omega}-\mathcal L_{k,\sigma^{n-j-1}\omega}) \mathcal L_{k,\omega}^{(n-j-1)},
\]
and the fact that $\esssup_{\omega\in \Omega,\, k \in \mathbb{N},\, 0\leq j <n} \|\mathcal L_{k,\omega}^{(j)}\|$ and $\esssup_{\omega\in \Omega,\, 0\leq j <n} |\mathcal L_{\omega}^{(j)}|$ are finite for every $n\in \mathbb{N}$, because of the uniform assumptions \eqref{it:M1}--\eqref{it:M3}; see Remark~\ref{rmk:UnifBdNorm}.
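To see the telescoping at work, we spell out the case $n=2$ of the identity:

```latex
\begin{align*}
(\mathcal L_{\sigma\omega}-\mathcal L_{k,\sigma\omega})\mathcal L_{k,\omega}
+ \mathcal L_{\sigma\omega}(\mathcal L_{\omega}-\mathcal L_{k,\omega})
&= \mathcal L_{\sigma\omega}\mathcal L_{\omega}
- \mathcal L_{k,\sigma\omega}\mathcal L_{k,\omega}
= \mathcal L^{(2)}_{\omega}-\mathcal L^{(2)}_{k,\omega}.
\end{align*}
```

Each summand contains exactly one single-step difference, flanked by operators whose norms are uniformly controlled, which is what reduces the general case to $n=1$.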
Throughout the proof, functions are regarded as being defined on the circle, $\mathbb{T}$, via the identification of Remark~\ref{rmk:EquivHptNorms}.
We start with the following.
\begin{myclaim}\label{lem:C2approx}
Let $g\in C^2$, and suppose $S,T \in \text{LY}$ satisfy the uniform bounds \eqref{it:M1}--\eqref{it:M3} of \S\ref{S:LYMaps}. Then,
\[
|(\mathcal L_T-\mathcal L_S)g |\leq C_1 d_{LY}(T,S)^{\frac{1}{p}-t'} \|g\|_{C^2}.
\]
\end{myclaim}
The proof of this claim follows from \cite[\S3]{GTQuas}, given the uniformity assumptions for $T,S$.
Let $f\in {\mc{H}_p^t}$, $\epsilon>0$, and let $f_\epsilon$ be defined as in \eqref{eq:fep}. Recalling
\eqref{eq:BdFepC2} and Lemma~\ref{lem:regApprox} from \S\ref{S:approxSmoothFunctions}, we get:
\begin{align*}
|(\mathcal L_T-\mathcal L_S)f| &\leq |(\mathcal L_T-\mathcal L_S)f_\epsilon| + |(\mathcal L_T-\mathcal L_S)(f_\epsilon-f)| \\
&\leq C_1C(\epsilon) d_{LY}(T,S)^{\frac{1}{p}-t'} \|f\|+ C_\# C \epsilon^{\frac{t-t'}{2}} \|f\|,
\end{align*}
where $C$ is an upper bound on $\{|\mathcal L_\omega|\}_{\omega \in \Omega}$, $C(\epsilon)$ comes from \eqref{eq:BdFepC2}, and $C_1$ comes from Claim~\ref{lem:C2approx}.
Choosing $\epsilon$ as the infimum of those values for which $C_1 C(\epsilon)\leq \rho_k^{-\frac{1}{2}(\frac{1}{p}-t')}$ provides $\tau_k$ as desired, concluding the proof of the proposition.
\qed \end{proof}
The rest of the proof is concerned with verifying condition \eqref{it:GenUnifLY}.
We will show the following.
\begin{prop}\label{lem:hptLpNstepLY}
Let $\mc{T}$ and $\{\mc{T}_k\}_{k\in \mathbb{N}}$ be as in Theorem~\ref{thm:NstepStab}. Then:
In case \eqref{it:NPTP}, there exist a constant $A_{\mc{T}}$ and a measurable function $B_{\mc{T},N}(\omega)$ such that for every sufficiently large $k\in \mathbb{N}$, and $\bbp\text{-a.e. } \om \in \Om$, we have
\begin{equation}
\|\mathcal L_{k,\omega}^{(N)}f\|_{{\mc{H}_p^t}} \leq A_{\mc{T}} \nint^{N(1-\frac{1}{p})} \mu^{-N(1+t-\frac{1}{p})} \|f\|_{{\mc{H}_p^t}} + B_{\mc{T},N}(\omega) \|f\|_p.
\end{equation}
This yields \eqref{it:GenUnifLY} for $\mc{T}^{(N)}$ and $\{\mc{T}^{(N)}_k\}_{k\in \mathbb{N}}$.
In case \eqref{it:enoughExp}, there exist constants $A_{\mc{T}}$, independent of $n$, and $B_{\mc{T}, n}$ such that for every sufficiently large $k\in \mathbb{N}$, every $n\in \mathbb{N}$, and $\bbp\text{-a.e. } \om \in \Om$, we have
\begin{equation}\label{eq:StrongLYii}
\|\mathcal L_{k,\omega}^{(n)}f\|_{{\mc{H}_p^t}} \leq A_{\mc{T}} \nint^{n(1-\frac{1}{p})} \mu^{-n(1+t-\frac{1}{p})} 2^n \|f\|_{{\mc{H}_p^t}} + B_{\mc{T}, n} \|f\|_p.
\end{equation}
In particular, if $\mu^\gamma>2$, one may choose $p>1$ sufficiently close to $1$ and $t< \min\{\gamma, \frac{1}{p}\}$ sufficiently close to $\gamma$, such that if $n$ is sufficiently large,
\eqref{it:GenUnifLY} holds for $\mc{T}^{(n)}$ and $\{\mc{T}^{(n)}_k\}_{k\in \mathbb{N}}$.
\end{prop}
In order to demonstrate this, we shall make use of a characterization of ${\mc{H}_p^t}$, due to
Strichartz \cite{Strichartz}.
\begin{thm}[Strichartz]\label{thm:Str1}
Let $p>1$ and $0<t<1$, and let $f:\mathbb{R} \to \mathbb{R}$ satisfy $\mathop{\mathrm{supp}}{f}\subseteq [0,1]$. Then $f\in{\mc{H}_p^t}$ if and only if
$\|f\|_p+\|D_tf\|_p<\infty$, and the implied norm is equivalent to the
standard ${\mc{H}_p^t}$ norm, where $D_tf$ is given by
\begin{equation}\label{eq:Dt}
D_tf(x)=\lim_{\epsilon\to 0}\int_{|y|\geq \epsilon}\frac{f(x+y)-f(x)}{|y|^{1+t}}dy,
\end{equation}
and the limit is in $L_p$.
\end{thm}
The proof of Proposition~\ref{lem:hptLpNstepLY} relies on the following claim, whose proof is deferred until \S\ref{pf:properSpt}.
\begin{myclaim} \label{lem:properSpt}\ \\
Let $f \in {\mc{H}_p^t}$ be such that $\mathop{\mathrm{supp}}(f) \subset [a,b]$. Let $a'<a$, $b'>b$ and $c=\min \{ |a-a'|, |b-b'| \}$. Then,
\[
\|D_t f - 1_{[a', b']} D_t f \|_p \leq C_\# |b-a|^{1-\frac{1}{p}} c^{\frac{1}{p}-1-t} \|f\|_p.
\]
Furthermore, if $f_1, \dots, f_M \in {\mc{H}_p^t}$ are such that $\mathop{\mathrm{supp}}(f_j) \subset [a_j,b_j]$ with $\max_{1\leq j \leq M} \{b_j-a_j\}\leq l$; $a'_j<a_j$, $b'_j>b_j$ are such that $\min_{1\leq j \leq M} \{ |a_j-a'_j|, |b_j-b'_j| \}\geq c$; and the intersection multiplicity of $\{[a'_j,b'_j] \}_{1\leq j \leq M}$ is $\tld{M}$, then
\[
\Big \|\sum_{j=1}^M D_t f_j \Big \|_{p}^{p} \leq C_\# \tld{M}^{p-1} \sum_{j=1}^M \|D_t f_j \|_p^{p} + C_\#(Ml)^{p-1} c^{1-p-pt} \sum_{j=1}^M \|f_j\|_p^{p},
\]
where the intersection multiplicity of a collection $\mathcal C$ of subsets of a set is given by $\max_{x\in\bigcup \mathcal
C}\#\{C\in\mathcal C\colon x\in C\}$.
\end{myclaim}
\begin{proof}[Proof of Proposition~\ref{lem:hptLpNstepLY}]\ \\
\begin{enumerate}
\renewcommand{\theenumi}{\Roman{enumi}}
\renewcommand{\labelenumi}{\textit{(\theenumi)}.}
\item
\textit{$({\mc{H}_p^t}, L_p)$ Lasota-Yorke inequality for $\mc{T}$.}
Recall that $\| \mathcal L_\omega^{(n)} f\|_{{\mc{H}_p^t}} \leq C_\# \| \mathcal L_\omega^{(n)} f\|_{p}+ C_\# \| D_t(\mathcal L_\omega^{(n)} f)\|_{p}$, and by definition,
\[
\mathcal L_\omega^{(n)} f(x)=\sum_{i=1}^{\nint_\omega^{(n)}} (1_{I_i} |DT_\omega^{(n)}|^{-1} f)\circ \xi_i(x).
\]
(Although the intervals $I_i$ and inverse branches $\xi_i$ depend on $\omega$ and $n$, we do not write this dependence explicitly, unless needed.)
Using changes of variables and the inequality $(\sum_{i=1}^M x_i)^p \leq M^{p-1}\sum_{i=1}^M x_i^p$, a direct calculation yields
\begin{equation}\label{eq:LpBoundPFOp}
\| \mathcal L_\omega^{(n)} f\|_{p} \leq \big( C_e(T_\omega^{(n)})\big)^{1-\frac{1}{p}}\||DT_\omega^{(n)}|^{-1}\|_{\infty}^{1-\frac{1}{p}}\|f\|_p,
\end{equation}
where $C_e(T)$ is the intersection multiplicity of $\{ \overline{T(I_i^T)} \}_{1\leq i \leq
\nint^T}$, named \textit{complexity at the end} in \cite{BaladiGouezel}.
In order to bound $ \|D_t (\mathcal L_\omega^{(n)}f)\|_{p}$, we will use the following two claims.
\begin{myclaim} \label{it:boundPFhpt.1}
There exists some $C_{\mc{T}}$ such that for every $u \in {\mc{H}_p^t}$,
\begin{align*}
\|(1_{I_i} |DT_\omega^{(n)}|^{-1} u)\circ \xi_i\|_{{\mc{H}_p^t}}^p &\leq C_{\mc{T}} \||DT_\omega^{(n)}|^{-1}\|_{\infty}^{p-1} \| u \|^p_p \\
& \quad + C_{\mc{T}} \||DT_\omega^{(n)}|^{-1}\|_{\infty}^{p+pt-1}\|u\|_{{\mc{H}_p^t}}^p.
\end{align*}
\end{myclaim}
\begin{myclaim} \label{it:boundPFhpt.2}
Let $\eta:\mathbb{R} \to [0,1]$ be a $C^\infty$ function supported in $[-1,1]$. For $m\in \mathbb{Z}$, let $\eta_m(x)=\eta(x-m)$, and suppose that the family $\{\eta_m\}_{m\in \mathbb{Z}}$ forms a partition of unity with intersection multiplicity two. Then, for every $u\in {\mc{H}_p^t}$,
\[
\sum_{m\in \mathbb{Z}} \| \eta_m u \|^p_{{\mc{H}_p^t}} \leq C_\# \|u\|_{{\mc{H}_p^t}}^p.
\]
For each $r>0$, let $R_r(x)=rx$, and $\eta_{m,r}:=\eta_m \circ R_r$. The set $\{\eta_{m,r}\}_{m\in \mathbb{Z}}$ is again a partition of unity.
Furthermore,
\[
\sum_{m\in \mathbb{Z}} \|\eta_{m,r} u\|^p_{{\mc{H}_p^t}} \leq C_\# \big( (1+r^{pt})\|u\|_p^p + \|u\|_{{\mc{H}_p^t}}^p \big).
\]
\end{myclaim}
We will show these claims in \S\ref{S:pfboundPFhpt.1} and \S\ref{S:pfboundPFhpt.2}, respectively. Now, we proceed with the proof relying on them.
Let us fix $\omega \in \Omega$, and let $\delta=\delta(\omega,n):=\min_{1\leq i \leq \nint_\omega^{(n)}} \text{Leb}(I_i)$ be the size of the shortest branch of $T_\omega^{(n)}$. For every $1\leq i \leq \nint_\omega^{(n)}$, let $[\tld{a}_{i}, \tld{b}_{i}]=\overline{T_{\omega}^{(n)}(I_i)}$.
Let $\{\eta_m\}_{m\in \mathbb{Z}}:\mathbb{R} \to [0,1]$ be a partition of unity as in Claim~\ref{it:boundPFhpt.2}. We note that $\mathop{\mathrm{supp}}(\eta_{m,r})=r^{-1} \mathop{\mathrm{supp}}(\eta_m)$. Hence,
the intersection multiplicity of the supports of $\{\eta_{m,r}\}_{m\in \mathbb{Z}}$ is also 2.
For every $i,m$ with $I_i \cap \mathop{\mathrm{supp}}(\eta_{m,r})\neq \emptyset$, let $[\tld{a}_{i,m,r}, \tld{b}_{i,m,r}]=\overline{ T_\omega^{(n)}(I_i \cap \mathop{\mathrm{supp}}(\eta_{m,r}))}$ and let $[\tld{a}'_{i,m,r}, \tld{b}'_{i,m,r}]=[\tld{a}_{i,m,r}-\delta, \tld{b}_{i,m,r}+\delta]$.
By definition of $\delta$ and the fact that $T_\omega^{(n)}$ is piecewise expanding, we know that $\tld{b}_{i}-\tld{a}_{i}>\delta$. Thus,
\[
[\tld{a}'_{i,m,r}, \tld{b}'_{i,m,r}]\subset [\tld{a}_{i}, \tld{b}_{i}] \cup [\tld{a}_{i}-\delta, \tld{b}_{i}-\delta] \cup [\tld{a}_{i}+\delta, \tld{b}_{i}+\delta].
\]
Recall that $C_e(T_\omega^{(n)})$ is the intersection multiplicity of $\{ [\tld{a}_{i}, \tld{b}_{i}] \}_{1\leq i \leq \nint_\omega^{(n)}}$, that $T_\omega^{(n)}|_{I_i}$ is injective,
and that the intersection multiplicity of the supports of $\{\eta_{m,r}\}_{m\in \mathbb{Z}}$ is 2.
Hence, the intersection multiplicity of $\{[\tld{a}'_{i,m,r}, \tld{b}'_{i,m,r}]\}_{m\in \mathbb{Z}, 1\leq i \leq \nint_\omega^{(n)}}$ is at most $6 C_e(T_\omega^{(n)})$.
Let $f\in {\mc{H}_p^t}$.
For each $r>0$, $m\in \mathbb{Z}$ and $1\leq i \leq \nint_\omega^{(n)}$, let
\[
f_{i,m,r}:= \Big(1_{I_i} |DT_\omega^{(n)}|^{-1} \eta_{m,r} f\Big)\circ \xi_i.
\]
Hence, $\mathop{\mathrm{supp}} (f_{i,m,r}) \subset [\tld{a}_{i,m,r}, \tld{b}_{i,m,r}]$.
Assume that $r\geq 3$. Since $\mathop{\mathrm{supp}}(\eta_{m,r})\subset [r^{-1}(m-1), r^{-1}(m+1)]$, there are at most $r+3\leq 2r$ integers $m$ such that $\mathop{\mathrm{supp}}(\eta_{m,r})\cap [0,1]\neq \emptyset$.
Assume also that $r> 2\delta^{-1}$. Then, the support of each $\eta_{m,r}$ intersects at most two intervals $I_i$. Let $\Gamma_r:=\{(m,i): f_{i,m,r}\not\equiv 0, 1\leq i \leq \nint_\omega^{(n)}, m\in \mathbb{Z}\}$.
Then, $\# \Gamma_r \leq 4r$.
Set $r=3\delta^{-1}$. (Note that since $\nint_\omega^{(n)}\geq 2$, then $\delta\leq \frac{1}{2}$ and so $r\geq 6$.)
Now we apply Claim~\ref{lem:properSpt} with the collection $\{f_{i,m,r}\}$.
By the arguments above, we may choose $M=4r$, $\tld{M}=6C_e(T_\omega^{(n)})$, $c=\delta$ and $l=2r^{-1}\dist^n$, where $\dist$ is, as in \eqref{it:M2}, an upper bound on $\|DT_{\omega'}\|_\infty$ for $\mathbb P$-almost every $\omega'$. We get
\begin{align*}
& \|D_t (\mathcal L_\omega^{(n)}f)\|_{p}^p = \Big\|\sum_{i=1}^{\nint_\omega^{(n)}} \sum_{m\in \mathbb{Z}} D_t ( f_{i,m,r} ) \Big\|_{p}^p = \Big\| \sum_{(m,i)\in \Gamma_r} D_t ( f_{i,m,r} ) \Big\|_{p}^p\\
& \quad \leq C_\# C_e(T_\omega^{(n)})^{p-1} \sum_{(m,i)\in \Gamma_r} \big\| D_t (f_{i,m,r}) \big\|_p^p
+ C_\# \dist^{n(p-1)} \delta^{1-p-pt} \sum_{(m,i)\in \Gamma_r} \big\| f_{i,m,r}\big\|_p^p.
\end{align*}
We recall from Theorem~\ref{thm:Str1} that $\|D_t g\|_p\leq C_\# \|g\|_{{\mc{H}_p^t}}$ for every $g\in {\mc{H}_p^t}$.
We now use Claim~\ref{it:boundPFhpt.1} combined with the fact that the support of each $\eta_{m,r}$ intersects at most two intervals $I_i$ to bound the first term; and changes of variables combined with the identity $\sum_{m,i} 1_{I_i} \eta_{m,r}=1$ in $L_p$ to bound the second term. We get
\begin{align*}
&\|D_t (\mathcal L_\omega^{(n)}f)\|_{p}^p \\
&\leq C_\mc{T} C_e(T_\omega^{(n)})^{p-1} \sum_{m\in \mathbb{Z}} \Big( \Big\||DT_\omega^{(n)}|^{-1}\Big\|_{\infty}^{p-1}\Big\| \eta_{m,r} f \Big\|^p_p +\Big\||DT_\omega^{(n)}|^{-1}\Big\|_{\infty}^{p+pt-1}\Big\| \eta_{m,r} f \Big\|_{{\mc{H}_p^t}}^p \Big)\\
&+ C_\# \dist^{n(p-1)} \delta^{1-p-pt} \||DT_\omega^{(n)}|^{-1}\|_{\infty}^{p-1} \|f\|_p^p.
\end{align*}
Finally, using Claim~\ref{it:boundPFhpt.2} we get
\begin{align*}
&\|D_t (\mathcal L_\omega^{(n)}f)\|_{p}^p\\
&\leq C_{\mc{T}} C_e(T_\omega^{(n)})^{p-1} \Big( \Big\||DT_\omega^{(n)}|^{-1}\Big\|_{\infty}^{p-1} \|f\|_p^p + \Big\||DT_\omega^{(n)}|^{-1}\Big\|_{\infty}^{p+pt-1} \big( (1+ r^{pt})\|f\|_{p}^p+ \|f\|_{{\mc{H}_p^t}}^p \big) \Big)
\\
&+ C_\# \dist^{n(p-1)} \delta^{1-p-pt} \||DT_\omega^{(n)}|^{-1}\|_{\infty}^{p-1} \|f\|_p^p.
\end{align*}
Combining with \eqref{eq:LpBoundPFOp}, we obtain
\begin{equation}\label{eq:LYunpert}
\begin{split}
&\|\mathcal L_\omega^{(n)}f\|_{{\mc{H}_p^t}}^p \\
&\leq
C_\mc{T} \cdot \big( C_e(T_\omega^{(n)})\big)^{p-1} \||DT_\omega^{(n)}|^{-1}\|_{\infty}^{p-1}
\Big(\||DT_\omega^{(n)}|^{-1}\|_{\infty}^{pt} \|f\|_{{\mc{H}_p^t}}^p + \dist^{n(p-1)} \delta^{1-p-pt} \|f\|_p^p \Big).
\end{split}
\end{equation}
The proof is concluded by
letting $A_{\mc{T}}=C_\mc{T}^{\frac{1}{p}}$ and
\[
B_{\mc{T}, n}(\omega)=\Big(C_\mc{T} \big( C_e(T_\omega^{(n)})\big)^{p-1} \||DT_\omega^{(n)}|^{-1}\|_{\infty}^{p-1} \dist^{n(p-1)} \delta(\omega, n)^{1-p-pt} \Big)^{\frac{1}{p}}.
\]
\item \textit{Uniform $({\mc{H}_p^t}, L_p)$ Lasota-Yorke inequality for $\mc{T}_k$.}
We extend the previous argument to the perturbed random map.
The main difference arises from the fact that the monotonicity partition for $T_{k,\omega}^{(N)}$ may have more elements than that of $T_{\omega}^{(N)}$. There will always be
\textit{admissible} intervals, which can be matched to corresponding ones in the monotonicity partition for $T_{\omega}^{(N)}$. There may also be \textit{non-admissible} ones, which may appear when $T^{(j-l)}_{\sigma^l\omega}a_{i,\sigma^l\omega}=a_{i',\sigma^j\omega}$
for some $i, i', j, l\in \mathbb{N}, \omega\in \Omega$. This is exactly as in the case of a single map (see \cite[\S3.3]{BaladiBook}).
We point out that \textit{admissibility} and \textit{non-admissibility} depend on the \textit{reference} map $T_{\omega}^{(N)}$.
Condition \eqref{it:NPTP} prevents new branches from being created during the first $N$ steps. That is, there are no non-admissible elements for $T_{\omega,k}^{(N)}$ (with respect to $T_{\omega}^{(N)}$).
Hence, the size of the shortest branch of $T_{k,\omega}^{(N)}$, $\delta(k,\omega, N)$, is close to $\delta(\omega, N)$ for sufficiently large $k$.
Thus, the argument from the previous step remains applicable for $\mc{T}_k$, for sufficiently large $k$. Noting also that $C_e(T_{k,\omega}^{(N)})\leq \nint^N$ yields \eqref{eq:StrongLYi}.
Now we deal with condition \eqref{it:enoughExp}.
First, for each $\omega$ and $n\in \mathbb{N}$, there exists $\tld{\delta}(\omega,n)>0$ such that if $d_{\text{LY}}(T_{\sigma^j\omega},S_j)<\tld{\delta}(\omega,n)$ for each $0\leq j < n$, and the maps $S_j$ satisfy \eqref{it:M1}--\eqref{it:M3} with the same constants as $\mc{T}$,
then for every non-admissible element $\eta$ of the monotonicity partition of $S^{(n)}:=S_{n-1} \circ \dots \circ S_0$ with respect to $T_{\omega}^{(n)}$, one has that $\eta\subset \eta' \cup \eta''$, for some $\eta', \eta''$ elements of the monotonicity partition of $T_\omega^{(n)}$. That is, all non-admissible intervals for $S^{(n)}$ are small compared to elements of the monotonicity partition of $T_\omega^{(n)}$. The upshot of this is that, even though a group of up to $2^n$ branches may arise near each endpoint of the monotonicity partition of $T_\omega^{(n)}$ from the perturbation, these groups will remain separated from each other if the perturbation is sufficiently small, depending on $T_{\omega}^{(n)}$.
Compactness of $\Omega$ and continuity of $\mc{T}$ ensure there exist $\omega_1, \dots, \omega_M\in \Omega$ such that
$\Omega= \cup_{i=1}^M B_{\text{LY}}(T_{\omega_i}, \tld{\delta}(\omega_i,n)/2 )$, where $B_{\text{LY}}(T,\delta)$ denotes the ball of radius $\delta$ around $T$, measured with respect to $d_{\text{LY}}$.
Let $\rho = \min_{1\leq i\leq M} \tld{\delta}(\omega_i,n)/2$. Then, if $d_{\text{LY}}(T_\omega, T_{k,\omega})<\rho$, one can
follow the argument of the previous step, to obtain a Lasota-Yorke inequality for $T_{k,\omega}^{(n)}$, with the main difference being that now more branches of $T_{k,\omega}^{(n)}$ may intersect the support of each $\eta_{m,r}$, where $r$ is chosen so that at most two branches of $T_{\omega_j}^{(n)}$ intersect the support of each $\eta_{m,r}$, for $1\leq j \leq M$.
Specifically, $\Gamma^k_r:=\{(m,i): f_{k,i,m,r}\not\equiv 0, 1\leq i \leq \nint_{k,\omega}^{(n)}, m\in \mathbb{Z}\}$,
where $f_{k,i,m,r}:= \Big(1_{I_{k,i}} |DT_{k,\omega}^{(n)}|^{-1} \eta_{m,r} f \Big)\circ \xi_{k,i}$,
may contain several non-admissible branches of $T_{k,\omega}^{(n)}$,
with respect to $T_{\omega_j}^{(n)}$, for some $1\leq j \leq M$. Thus, by the previous paragraph, for sufficiently large $r$, $\# \Gamma^k_r \leq 2^n 4r$, where the factor $4r$ is as in $\# \Gamma_r$ of the previous step.
This difference contributes a factor of $2^{np}$ on the right-hand side of \eqref{eq:LYunpert}. Making use of the fact that $C_e(T_{k,\omega}^{(N)})\leq \nint^N$, \eqref{eq:StrongLYii} is established.\qed
\end{enumerate}
\end{proof}
\subsection{Numerical examples}\label{sec:numEx}
In this section we provide a brief demonstration that the stability results of \S\ref{Ssec:Ulam} and \S\ref{Ssec:PertByConvolution} can be used to rigorously approximate random invariant densities.
Let $\Omega$ be a circle of unit circumference and let the driving system $\sigma:\Omega\circlearrowleft$ be a rigid rotation by angle $\alpha=1/\sqrt{2}$.
For $x\in \Omega$ considered to be a point in $[0,1)$, we define a random map as:
\begin{equation}
\label{mapeg}
T_\omega(x)=\left\{
\begin{array}{ll}
3(x-\omega)-2.9(x-\omega)(x-\omega-1/3), & \hbox{$\omega\le x<\omega+1/3$;} \\
-3(x-\omega)+1-2.9(x-\omega-1/3)(x-\omega-2/3), & \hbox{$\omega+1/3\le x<\omega+2/3$;} \\
7/3(x-\omega-2/3)+2\omega/9, & \hbox{$\omega+2/3\le x<\omega+1$.}
\end{array}
\right.
\end{equation}
Graphs of $T_\omega$ for three different $\omega$ are shown in Figure \ref{threemaps}.
\begin{center}
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=17cm]{RandomUlamFig3.pdf}\\
\caption{Graphs of the maps $T_{\sigma^{20}\omega}, T_{\sigma^{21}\omega}, T_{\sigma^{22}\omega}$, $\omega=0$.}
\label{threemaps}
\end{center}
\end{figure}
\end{center}
The graph of $T_\omega$ rotates with $\omega$, and one of the three branches is also translated up or down with $\omega$.
The minimum slope of $\{T_\omega\}_{\omega\in\Omega}$ is bounded below by 2.
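For concreteness, the map \eqref{mapeg} and the slope bound just quoted can be checked numerically; the following Python sketch (our own illustration, with hypothetical function names) evaluates the map mod 1 and the branch derivatives on a grid:

```python
import numpy as np

def T(omega, x):
    """The random map of the displayed equation, reduced mod 1 to the circle."""
    u = (x - omega) % 1.0  # position within the fundamental domain [0, 1)
    if u < 1/3:
        y = 3*u - 2.9*u*(u - 1/3)
    elif u < 2/3:
        y = -3*u + 1 - 2.9*(u - 1/3)*(u - 2/3)
    else:
        y = (7/3)*(u - 2/3) + 2*omega/9
    return y % 1.0

def slope(u):
    """Branch derivative dT/dx as a function of u = x - omega."""
    if u < 1/3:
        return 3 - 2.9*(2*u - 1/3)
    elif u < 2/3:
        return -3 - 2.9*(2*u - 1)
    else:
        return 7/3

grid = np.linspace(0, 1, 10001, endpoint=False)
min_abs_slope = min(abs(slope(u)) for u in grid)
```

The minimum expansion is attained as $u\to 1/3$, where the derivative magnitude approaches $3-2.9/3\approx 2.033$, consistent with the stated lower bound of $2$.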
We employ the Ulam scheme with $k=1000$ (1000 equal subintervals) and a Fej\'{e}r kernel with $k=100$ (100 Fourier modes).
In the Ulam case, we use the well-known formula for the Ulam matrix to construct a matrix representation of $\mathcal{L}_{k,\omega}$: $[\mathcal{L}_{k,\omega}]_{ij}=m(B_i\cap T^{-1}_\omega B_j)/m(B_j)$, $i,j=1,\ldots,k$, which is the result of Galerkin projection using the basis $\{\mathbf{1}_{B_1},\ldots,\mathbf{1}_{B_k}\}$.
Lebesgue measure in the formula for $[\mathcal{L}_{k,\omega}]_{ij}$ is approximated by a uniform grid of 1000 test points per subinterval, and the estimate of $[\mathcal{L}_{k,\omega}]$ takes less than a second to compute in MATLAB.
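As a concrete illustration (not the authors' original code), the test-point construction of the Ulam matrix and the pushforward of the uniform density can be sketched as follows; the map $T$ is re-stated so that the snippet is self-contained, and the grid sizes are illustrative rather than those used for the figures:

```python
import numpy as np

def T(omega, x):
    """The random map of the earlier displayed equation, reduced mod 1."""
    u = (x - omega) % 1.0
    if u < 1/3:
        y = 3*u - 2.9*u*(u - 1/3)
    elif u < 2/3:
        y = -3*u + 1 - 2.9*(u - 1/3)*(u - 2/3)
    else:
        y = (7/3)*(u - 2/3) + 2*omega/9
    return y % 1.0

def ulam_matrix(T_map, k, pts=200):
    """Ulam matrix with entries m(B_i intersect T^{-1} B_j)/m(B_i); with k equal
    cells this coincides with the formula in the text.  Lebesgue measure is
    approximated by pts uniformly spaced test points per cell."""
    L = np.zeros((k, k))
    for i in range(k):
        for s in range(pts):
            x = (i + (s + 0.5) / pts) / k       # test point in B_i
            j = min(int(k * T_map(x)), k - 1)   # index of cell containing T(x)
            L[i, j] += 1.0 / pts
    return L

# push forward the uniform density along the orbit of the driving rotation
alpha = 1 / np.sqrt(2)
k = 64
f = np.ones(k)  # density of Lebesgue measure w.r.t. the cell partition
for step in range(5):
    omega = (step * alpha) % 1.0
    f = f @ ulam_matrix(lambda x, w=omega: T(w, x), k)
```

Each row of the Ulam matrix sums to one, so the pushforward conserves total mass; increasing $k$ and the number of test points should reproduce estimates of the kind shown in the upper row of Figure~\ref{pushforward}.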
In the Fourier case, we first use Galerkin projection onto the basis $\{1,\sin(2\pi x),\cos(2\pi x),\ldots,\sin(2k\pi x),\cos(2k \pi x)\}$.
The relevant integrals are calculated using adaptive Gauss-Kronrod quadrature and we have limited the number of modes to $k=100$ to place an upper limit of 10 minutes of CPU time (on a standard dual-core processor) to calculate the Galerkin projection matrix $[\mathcal{L}'_{k,\omega}]$, representing the projected action of $\mathcal{L}_{\omega}$ on the first $k$ Fourier modes.
We then take a Ces\`{a}ro average to construct $[\mathcal{L}_{k,\omega}]=\frac{1}{k}\sum_{j=0}^{k-1}[\mathcal{L}'_{j,\omega}]$.
Estimates of $f_{k,\sigma^j\omega}, \omega=0, j=20, 21, 22$ are shown in Figure \ref{pushforward}.
\begin{center}
\begin{figure}[hbt]
\includegraphics[width=17cm]{RandomUlamFig4.pdf}\\
\caption{Estimates $f_{k,\sigma^j\omega}$, $\omega=0, j=20, 21, 22$ using Ulam's method (upper row) and Fej\'{e}r kernels (lower row, thick [blue, online]). The pure Galerkin estimates using the Galerkin Fourier matrices $[\mathcal{L}'_{k,\omega}]$ are shown as thinner [red, online] curves in the lower row.}\label{pushforward}
\end{figure}
\end{center}
The invariant density estimate $f_{k,\sigma^{20}\omega}$ was created by pushing forward Lebesgue measure (at ``time'' $\omega=0$) by $[\mathcal{L}_{k,\sigma^{19}\omega}]\circ\cdots\circ [\mathcal{L}_{k,\omega}]$, and then pushing two more steps for the estimates $f_{k,\sigma^{21}\omega}$ and $f_{k,\sigma^{22}\omega}$.
By inspecting Figures \ref{threemaps} and \ref{pushforward}, one can see how $\mathcal{L}_{\sigma^j\omega}$ transforms the estimate of $f_{\sigma^j\omega}$ to $f_{\sigma^{j+1}\omega}$, ($j=20,21$), particularly coarse features such as a change in the number of inverse branches.
Though the pure Galerkin estimates are more oscillatory, they appear to pick up more of the finer features than the smoother Fej\'{e}r kernel estimates.
The Ulam estimates are likely the most accurate, given the greater dimensionality of their approximation space.
While the Fourier-based estimates converge slowly in this example (relative to computing time), numerical tests on $C^\infty$ random maps demonstrated rapid convergence, with the Fourier approach taking full advantage of the system's smoothness, to the extent that the influence of modes higher than $k=20$ on the matrix $[\mathcal{L}'_{k,\omega}]$ was of the order of machine accuracy.
\section{Technical proofs}\label{S:techPfs}
\subsection{Proof of Lemma~\ref{lem:regApprox}}\label{pf:regApprox}
We start with a lemma about sequences of bounded variation.
Let $b=(b_j)$, indexed by $\mathbb{Z}$. We define its \emph{variation} by
$\text{var}(b):=\sum_{j\in\mathbb{Z}}|b_j-b_{j-1}|$.
Let $(\phi_j)$ denote the standard orthonormal basis for $L^2(\mathbb T)$.
That is, $\phi_j(x):=e^{2\pi i jx}$.
For a bounded sequence $b$, define an operator on $L^p(\mathbb T)$ by
\begin{equation}\label{eq:defMultiplier}
M_b\colon \sum_j a_j\phi_j\to \sum_j a_jb_j\phi_j.
\end{equation}
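On a trigonometric polynomial given by equispaced samples, the multiplier \eqref{eq:defMultiplier} acts diagonally on the discrete Fourier coefficients; a minimal numerical sketch of this action (our own illustration, using the FFT) is:

```python
import numpy as np

def apply_multiplier(b, samples):
    """Apply M_b : sum_j a_j phi_j -> sum_j a_j b_j phi_j to a function on the
    circle given by equispaced samples; b maps an integer frequency j to b_j."""
    n = len(samples)
    a = np.fft.fft(samples)
    freqs = np.fft.fftfreq(n, d=1.0/n)  # integer frequencies j
    return np.fft.ifft(a * np.array([b(int(j)) for j in freqs]))

x = np.arange(64) / 64
f = np.sin(2*np.pi*x) + np.cos(6*np.pi*x)   # modes j = +-1 and j = +-3
# b_j = 1 for |j| <= 1 realizes the truncated Fourier series S_1,
# so the mode at |j| = 3 is removed
g = apply_multiplier(lambda j: 1.0 if abs(j) <= 1 else 0.0, f)
```

Since the sampled function is band-limited, the truncation is exact up to rounding error.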
\begin{lem}\label{lem:Lpbound}
Let $b=(b_j)$ be a sequence of non-negative reals such that $\text{var}(b)<\infty$
and $b_j\to 0$ as $j\to\pm\infty$.
Then for each $p>1$, $\|M_b\|_p\le C_p\text{var}(b)$.
\end{lem}
The following auxiliary result will be used in the proof.
\begin{lem}\label{lem:comb}
Let $(b_j)$ be as in the statement of Lemma \ref{lem:Lpbound}.
Define sets $\mc{S}_1$ and $\mc{S}_2$ as follows:
\begin{align*}
\mc{S}_1&=\bigcup_j\{j\}\times [0,b_j)\\
\mc{S}_2&=\bigcup_i\bigcup_{j\ge i}\left(\{i,i+1,\ldots,j\}\times [\max(b_{i-1},b_{j+1}),\min(b_i,b_{i+1},\ldots,b_j))\right).
\end{align*}
Then $\mc{S}_1=\mc{S}_2$ and the union in $\mc{S}_2$ is a disjoint union.
Writing $I_{i,j}$ for
\[
[\max(b_{i-1},b_{j+1}),\min(b_i,b_{i+1},\ldots,b_j)),
\]
with the convention that
$[c,d)$ is empty if $d\le c$, and setting $h_{i,j}=|I_{i,j}|$, we have
$\sum_{i,j}h_{i,j}=\frac12\text{var}(b)$.
\end{lem}
The content of this lemma is illustrated in Figure~\ref{fig:sequence}.
\begin{center}
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=4in]{RandomUlamFig1.pdf}
\vskip3ex
\includegraphics[width=4in]{RandomUlamFig2.pdf}
\caption{Illustration of Lemma~\ref{lem:comb}.}
\label{fig:sequence}
\end{center}
\end{figure}
\end{center}
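The decomposition of Lemma~\ref{lem:comb} is easy to verify numerically for a finitely supported sequence; the sketch below (our own sanity check, not part of the proof) computes the interval lengths $h_{i,j}$ directly and compares their sum with $\frac12\text{var}(b)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
b = np.zeros(n + 2)        # zero padding plays the role of b_j -> 0 as j -> +-infinity
b[1:n+1] = rng.random(n)   # a non-negative, finitely supported sequence

var_b = np.sum(np.abs(np.diff(b)))  # total variation, boundary jumps included

total = 0.0
for i in range(1, n + 1):
    for j in range(i, n + 1):
        lo = max(b[i-1], b[j+1])    # lower endpoint of I_{i,j}
        hi = np.min(b[i:j+1])       # upper endpoint of I_{i,j}
        total += max(0.0, hi - lo)  # h_{i,j} = |I_{i,j}|, empty when hi <= lo
```

For any such sequence the two quantities agree to rounding error.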
\begin{proof}[Proof of Lemma \ref{lem:Lpbound}]
Consider $b\colon \mathbb{Z}\to[0,\infty)$ as a function on the integers. By Lemma~\ref{lem:comb}, we can write
\begin{equation*}
b=\sum_{i,j}h_{i,j}\mathbf 1_{[i,j]},
\end{equation*}
where $h_{i,j}$ is given by $|I_{i,j}|$, the length of the interval defined in Lemma~\ref{lem:comb}.
In particular, we deduce
\begin{equation*}
M_b=\sum_{i,j}h_{i,j}S_{[i,j]},
\end{equation*}
where $S_{[i,j]}(f):= \sum_{l=i}^j a_l\phi_l$.
Thus, $\|M_b\|_p\le \sum_{i,j}h_{i,j}C_p=\frac12\text{var}(b)C_p$, where $C_p$ is a uniform bound on $\|S_k\|_p$, and $S_k$ is the truncated Fourier series $S_k(f):=\sum_{|j|\leq k} a_j\phi_j$.
\qed \end{proof}
\begin{cor}\label{cor:FourierMultiplier}
Let $(b_j)$ be as in the statement of Lemma \ref{lem:Lpbound}.
Suppose $(b_j)$ is piecewise monotonic with at most $K$ pieces. Then,
$\|M_b\|_p \leq C_p K \|b\|_\infty$.
\end{cor}
\begin{proof}[Proof of Lemma~\ref{lem:regApprox}]
Let $\epsilon>0$, and $b_{j,\epsilon}=\langle j \rangle^{\frac{t'-t}{2}} (1-e^{-\epsilon\langle j \rangle})$.
Then,
\begin{equation}\label{eq:errorRegApprox}
\|f-f_\epsilon\|_p=\Big\| \sum_{j=1}^\infty b_{j,\epsilon} \langle j \rangle^{\frac{t}{2}} \hat{f}(j)\phi_j \Big\|_p =\|M_b (J_t f)\|_p \leq \|M_b\|_p\|f\|_{{\mc{H}_p^t}},
\end{equation}
where $M_b$ is the operator defined in \eqref{eq:defMultiplier},
and $J_t: {{H}_p^t(\T)} \to L^p(\mathbb{T})$ is given by $J_t(f):=\mc{F}^{-1}m_t\mc{F}(f)$, with $m_t(\xi)=\langle \xi\rangle^{\frac{t}{2}}=(1+|\xi|^2)^{\frac{t}{2}}$.
For $0<\gamma<1$, let $h(x)=x^{-\gamma}(1-e^{-\epsilon x})$. One can check that $h$ has two
intervals of monotonicity. For $x<1/\epsilon$, one has $h(x)\leq \epsilon x^{1-\gamma}$,
while for $x\ge 1/\epsilon$, one has $h(x)\leq x^{-\gamma}$. In particular, one has
$\|h\|_\infty \leq\epsilon^{\gamma}$.
Using the above with $\gamma=\frac{t-t'}{2}$, the lemma follows from \eqref{eq:errorRegApprox} and Corollary~\ref{cor:FourierMultiplier}.\qed
\end{proof}
\subsection{Boundedness of $\mathbb E_k$ in ${\mc{H}_p^t}$.}\label{S:pfBddEnHpt}
In order to demonstrate this, we shall make use of a theorem of
Strichartz \cite{Strichartz}.
\begin{thm}[Strichartz]\label{thm:Strichartz2}
Let $p>1$ and $0<t<1$, and $f:\mathbb{R} \to \mathbb{R}$ with $\mathop{\mathrm{supp}}{f}\subseteq [0,1]$.
Then $f\in{\mc{H}_p^t}$ if and only if
$\|f\|_p+\|S_tf\|_p<\infty$, and this quantity defines a norm equivalent to the
standard ${\mc{H}_p^t}$ norm, where $S_tf$ is given by
\begin{equation}\label{eq:St}
S_tf(x)=\left(\int_{0}^{\infty}\frac{dr}{r^{1+2t}}\left(
\int_{-1}^1|f(x+ry)-f(x)|\,dy\right)^2\right)^{1/2}.
\end{equation}
\end{thm}
\begin{proof}[Proof of Lemma \ref{thm:hptbound}]
We shall use the notation $A\lesssim B$ to indicate that the
quantity $A$ is bounded by a constant multiple of the quantity $B$,
where the constant is independent of $k$ and any function to which the
inequality is being applied.
Let $k$ be fixed (although we ensure that all bounds that we give are
independent of $k$). For $x\in [0,1)$, let $j(x)$ denote the index of
the interval to which $x$ belongs. That is, $j(x)=\lfloor kx\rfloor$.
We have $\|\mathbb E_kf\|_p\le \|f\|_p$ so it suffices to show that
$\|S_t(\mathbb E_kf)\|_p\lesssim \|S_tf\|_p$.
We let $H_rf(x)$ be the outer integrand in $S_tf(x)$, that is
\begin{equation}\label{eq:Hr}
H_rf(x)=\frac 1{r^{1+2t}}\left(\int_{-1}^1|f(x+ry)-f(x)|\,dy\right)^2.
\end{equation}
Notice that $S_tf(x)\le S_t^{(1)}f(x)+S_t^{(2)}f(x)$, where
\begin{align*}
S_t^{(1)}f(x)&=\left(\int_0^{1/(2k)}H_rf(x)\,dr\right)^{1/2}\text{ and }\\
S_t^{(2)}f(x)&=\left(\int_{1/(2k)}^\infty H_rf(x)\,dr\right)^{1/2}.
\end{align*}
We start by establishing an inequality that we use several times. Let
$I_j$ denote the interval $[(j-1)/k,j/k)$.
\begin{myclaim}\label{claim:f-Ef}
\begin{equation}\label{eq:claimone}
\int_0^1|\mathbb E_kf(x)-f(x)|^p\, dx
\lesssim
k^{-pt}\int_0^1dx\,\left(\int_{2/k}^{3/k}H_rf(x)\,dr\right)^{p/2}.
\end{equation}
\end{myclaim}
\begin{proof}
Let $x\in [0,1]$. If $\frac 2k\le r\le \frac 3k$, then
\begin{align*}
H_rf(x)&\gtrsim k^{1+2t}\left(\int_{-1}^1|f(x+ry)-f(x)|\,dy\right)^2\\
&\gtrsim k^{3+2t}\left(\int_{x-r}^{x+r}|f(s)-f(x)|\,ds\right)^2\\
&\ge k^{3+2t}\left(\int_{I_{j(x)}}|f(s)-f(x)|\,ds\right)^2\\
&\ge k^{3+2t}\left(\int_{I_{j(x)}}|\mathbb E_kf(s)-f(x)|\,ds\right)^2\\
&=k^{1+2t}|\mathbb E_kf(x)-f(x)|^2.
\end{align*}
Integrating in $r$ over the range $[\frac2k,\frac 3k]$ and raising to
the $p/2$ power, we obtain
\begin{equation*}
|\mathbb E_kf(x)-f(x)|^p\lesssim k^{-pt}\left(\int_{2/k}^{3/k}H_rf(x)\,dr\right)^{p/2},
\end{equation*}
which establishes the claim upon integrating with respect to $x$.\qed
\end{proof}
For a function $f(x)$, let $f_j$ denote the value of $\mathbb E_kf$ on the
interval $I_j$. Recalling definitions \eqref{eq:Hr} and \eqref{eq:St} of $H_r$ and $S_t$, the above implies the inequality
\begin{equation}\label{eq:f-Ef}
k^{pt}\int_0^1|f_{j(x)}-f(x)|^p\,dx\lesssim \|S_tf\|_p^p.
\end{equation}
Straightforward modifications also establish the inequality
\begin{equation}\label{eq:f-TEf}
k^{pt}\int_0^1|f_{j(x)+1}-f(x)|^p\,dx\lesssim \|S_tf\|_p^p.
\end{equation}
We now estimate $S_t^{(2)}\mathbb E_kf(x)$. Letting $r>1/(2k)$, we have
\begin{align*}
&H_r(\mathbb E_kf)(x)=\frac{1}{r^{1+2t}}
\left(\int_{-1}^1|\mathbb E_kf(x+ry)-\mathbb E_kf(x)|\,dy\right)^2\\
&\lesssim\frac{(f(x)-\mathbb E_kf(x))^2}{r^{1+2t}}
+
\frac{1}{r^{1+2t}}\left(\int_{-1}^1|\mathbb E_kf(x+ry)-f(x)|\,dy\right)^2.
\end{align*}
Hence we have
\begin{equation}
\begin{split}\label{eq:splitSt2}
S_t^{(2)}(\mathbb E_kf)(x)\lesssim&
\left(|f(x)-\mathbb E_kf(x)|^2\int_{1/(2k)}^\infty\frac1{r^{1+2t}}\,dr\right)^{1/2}\\
&+\left(\int_{1/(2k)}^\infty\frac
{(\int_{-1}^1|\mathbb E_kf(x+ry)-f(x)|\,dy)^2}
{r^{1+2t}}\,dr\right)^{1/2}\\
&\sim k^t|f(x)-\mathbb E_kf(x)|+(*),
\end{split}
\end{equation}
where $(*)$ denotes the term on the second line of the inequality.
We then estimate (*) as follows.
\begin{align*}
&\int_{-1}^1|\mathbb E_kf(x+ry)-f(x)|\,dy\\
&=\frac{1}{2r}\int_{x-r}^{x+r}|\mathbb E_kf(s)-f(x)|\,ds\\
&\lesssim\frac{1}{2r}\sum_{\{j\colon I_j\cap [x-r,x+r]\ne\emptyset\}}
\int_{I_j}|\mathbb E_kf(s)-f(x)|\,ds\\
&\le \frac{1}{2r}\sum_{\{j\colon I_j\cap [x-r,x+r]\ne\emptyset\}}\int_{I_j}|f(s)-f(x)|\,ds\\
&\lesssim \int_{-1}^1\left|f\left(x+(r+\tfrac1k)y\right)-f(x)\right|\,dy,
\end{align*}
so that
$(*)\lesssim\left( \int_{1/(2k)}^\infty H_{r+\frac1k}f(x)\,dr\right)^{1/2}\le S^{(2)}_tf(x)$.
Hence, by Theorem~\ref{thm:Strichartz2} and definition of $S^{(2)}_tf$, $\|(*)\|_p\lesssim \|f\|_{{\mc{H}_p^t}}$.
Combining this with \eqref{eq:splitSt2} and \eqref{eq:f-Ef}, we deduce
$\|S_t^{(2)}(\mathbb E_kf)\|_p\lesssim \|f\|_{{\mc{H}_p^t}}$.
It remains to show that $\|S_t^{(1)}(\mathbb E_kf)\|_p\lesssim \|f\|_{{\mc{H}_p^t}}$.
Let $x=\frac jk-h$, where we assume $h<1/(2k)$ (the other case being similar).
We have $H_r\mathbb E_kf(x)=0$ if $r\le h$ and, recalling that $r\leq 1/(2k)$, $H_r\mathbb E_kf(x)\le |f_{j+1}-f_j|^2/r^{1+2t}$ if $r>h$.
Hence
\begin{align*}
S_t^{(1)}\mathbb E_kf(x)&\le |f_{j+1}-f_j|\left(\int_{h}^{1/(2k)}\frac{1}{r^{1+2t}}\,dr\right)^{1/2}\\
&\lesssim |f_{j+1}-f_j|h^{-t}.
\end{align*}
Integrating the $p$th power, we see
\begin{align*}
\|S_t^{(1)}\mathbb E_kf\|_p^p&\lesssim \sum_j|f_{j+1}-f_j|^pk^{pt-1}\\
&\sim k^{pt}\int_0^1 |f_{j(x)+1}-f_{j(x)}|^p\,dx\\
&\sim k^{pt}\left(\int_0^1 |f_{j(x)+1}-f(x)|^p\,dx+\int_0^1 |f_{j(x)}-f(x)|^p\,dx\right).
\end{align*}
The desired bound then follows from \eqref{eq:f-Ef} and \eqref{eq:f-TEf}.
\qed \end{proof}
\subsection{Proof of Lemma~\ref{lem:UlamError4Holder}}\label{pf:UlamError4Holder}
Let $g\in C^\gamma$ and $t<\min(\gamma,1/p)$. We will show that $\|(\mathbb E_k-1)g\|_{{\mc{H}_p^t}}
\le C_\#\|g\|_{C^\gamma}k^{t-\gamma}$.
We use the Strichartz equivalent characterization of ${\mc{H}_p^t}$ of Theorem~\ref{thm:Strichartz2} again.
Let $x\in[0,1]$ be at a distance $s$ from one of the endpoints of
the partition of the interval into subintervals of length $1/k$.
Let $h=\mathbb E_kg-g$. We check that
$|h(z)|\le \|g\|_{C^\gamma} k^{-\gamma}$ for all $z$.
We have $\|h\|_{{\mc{H}_p^t}}\approx \|h\|_{p}+\|S_th\|_{p}$, where
\begin{equation*}
S_th(x)=\left(\int_0^\infty \frac{dr}{r^{1+2t}}
\left(\int_{-1}^1|h(x+ry)-h(x)|\,dy\right)^2\right)^{1/2}.
\end{equation*}
We split the integration over the ranges $[0,s]$ and $[s,\infty)$:
\begin{align*}
& \left(\int_0^s \frac{dr}{r^{1+2t}}
\left(\int_{-1}^1|h(x+ry)-h(x)|\,dy\right)^2\right)^{1/2}\\
&=\left(\int_0^s \frac{dr}{r^{1+2t}}
\left(\int_{-1}^1|g(x+ry)-g(x)|\,dy\right)^2\right)^{1/2}\\
&\le \left(\int_0^s \frac{dr}{r^{1+2t}}\left(\int_{-1}^1
\|g\|_{C^\gamma}|ry|^{\gamma} \,dy \right)^2 \right)^{1/2}\\
&=C_\#\|g\|_{C^\gamma}\left(\int_0^s dr\
r^{2\gamma-1-2t}\right)^{1/2}\le C_\#\|g\|_{C^\gamma}k^{t-\gamma}.
\end{align*}
Using the uniform bound on $h$, we have
\begin{equation*}
\left(\int_s^\infty \frac {dr}{r^{1+2t}}
\left(\int_{-1}^1|h(x+ry)-h(x)|\,dy\right)^2\right)^{1/2}\\
\le C_\#\|g\|_{C^\gamma}k^{-\gamma}s^{-t}.
\end{equation*}
Since the $L^p$ norm of each part is of the form
$C_\#\|g\|_{C^\gamma}k^{t-\gamma}$, the desired result is obtained.\qquad \qed
\subsection{Proof of Claim~\ref{lem:properSpt}}\label{pf:properSpt}
Let $A=\{ y: |y| \geq c \}$. Using that $\mathop{\mathrm{supp}}(f) \subset [a,b]$, Jensen's inequality and Fubini's theorem we get
\begin{align*}
\|D_t f &- 1_{[a', b']} D_t f \|_p^{p} = \int_{\mathbb{R}\setminus [a', b']} \Big | \int \frac{f(x+h) -f(x)}{|h|^{1+t}}dh \Big |^{p} dx \ \\
& \leq \int_{\mathbb{R}\setminus [a', b']} \Big | \int \frac{1_A(h)f(x+h)}{|h|^{1+t}}dh \Big |^{p} \,dx
\leq |b-a|^{p-1} \int_{\mathbb{R}\setminus [a', b']} \int \frac{1_A(h)|f(x+h)|^{p}}{|h|^{p+tp}}\,dh \,dx \ \\
& \leq |b-a|^{p-1} \int \frac{1_A(h)}{|h|^{p+tp}} \int_{\mathbb{R}\setminus [a', b']} |f(x+h)|^{p} \,dx \,dh
\leq \frac{2|b-a|^{p-1} c^{1-p-pt}}{p+pt-1} \|f\|_p^{p},
\end{align*}
which gives the first part. For the second part, we first apply the triangle inequality to get
\begin{align*}
&\Big \|\sum_{j=1}^M D_t f_j \Big \|_p \leq \Big \|\sum_{j=1}^M 1_{[a'_j, b'_j]} D_t f_j \Big \|_p + \sum_{j=1}^M \|D_t f_j - 1_{[a'_j, b'_j]} D_t f_j\|_p.
\end{align*}
We now use the following inequality, valid for non-negative $x, y$, $(x+y)^{p} \leq 2^{p-1}(x^{p}+y^{p})$, the fact that the intersection multiplicity of $\{[a'_j,b'_j] \}_{1\leq j \leq M}$ is $\tld{M}$ to bound the $p$-th power of the first sum, and the previous part to bound the $p$-th power of the second sum. We get
\begin{align*}
&\Big \|\sum_{j=1}^M D_t f_j \Big \|_{p}^{p} \leq C_\# \tld{M}^{p-1}\sum_{j=1}^M \|1_{[a'_j, b'_j]} D_t f_j \|_p^{p} + C_\# l^{p-1}c^{1-p-pt} \Big( \sum_{j=1}^M \|f_j\|_p \Big)^{p}.
\end{align*}
Finally, we use the fact that for non-negative numbers $x_i$, $(\sum_{i=1}^M x_i)^{p} \leq M^{p-1}\sum_{i=1}^M x_i^{p}$, to bound the second sum. We obtain
\begin{align*}
&\Big \|\sum_{j=1}^M D_t f_j \Big \|_{p}^{p} \leq C_\# \tld{M}^{p-1}\sum_{j=1}^M \|D_t f_j \|_p^{p} + C_\# (Ml)^{p-1}c^{1-p-pt} \sum_{j=1}^M \|f_j\|_p^{p}. \quad \qed
\end{align*}
\subsection{Proof of Claim~\ref{it:boundPFhpt.1}}\label{S:pfboundPFhpt.1}
Fix $x_0\in \overline{I_i}$ maximizing $|A_i|$, where $A_i:=D\xi_i(x_0)$. Then,
\begin{align*}
\|(1_{I_i} |DT_\omega^{(n)}|^{-1} u)\circ \xi_i\|_{{\mc{H}_p^t}}^p= \|(1_{I_i} |DT_\omega^{(n)}|^{-1} u)\circ A_i \circ (A_i^{-1} \circ \xi_i)\|_{{\mc{H}_p^t}}^p.
\end{align*}
Using that $A_i$ is linear in the equality and \cite[Lemmas 3.4 and 3.5]{GTQuas}
in the second inequality, we get
\begin{align*}
&\|(1_{I_i} |DT_\omega^{(n)}|^{-1} u)\circ A_i\|_{{\mc{H}_p^t}}^p \\
&\leq
C_\# \|(1_{I_i} |DT_\omega^{(n)}|^{-1} u)\circ A_i\|_{p}^p+ C_\#\|D_t\big( (1_{I_i} |DT_\omega^{(n)}|^{-1} u)\circ A_i \big)\|_{p}^p\\
& \quad = C_\# |A_i|^{-1}\|1_{I_i} |DT_\omega^{(n)}|^{-1} u\|^p_p + C_\# |A_i|^{pt-1}\|D_t(1_{I_i} |DT_\omega^{(n)}|^{-1} u)\|_{p}^p\\
& \quad \leq C_\# |A_i|^{-1}\| D\xi_i\|^p_{\infty} \| u\|^p_p + C_\# |A_i|^{p+pt-1}\Big\| \frac{D\xi_i}{A_i} \Big\|^p_{\alpha} \|u\|_{{\mc{H}_p^t}}^p.
\end{align*}
Using conditions \eqref{it:M2} and \eqref{it:M3}
and a standard distortion estimate (see e.g. \cite{ManeETDD}), we get that there exists some constant $C_\mc{T}$ such that $\Big\| \frac{D\xi_i}{A_i} \Big\|^p_{\alpha} \leq C_{\mc{T}}$ for all $n, \omega$. Then, by the choice of $A_i$, we get that
\begin{align*}
&\|(1_{I_i} |DT_\omega^{(n)}|^{-1} u)\circ A_i\|_{{\mc{H}_p^t}}^p \\
&\quad \leq C_\# \||DT_\omega^{(n)}|^{-1}\|^{p-1}_{\infty} \| u\|^p_p + C_\# C_{\mc{T}}\||DT_\omega^{(n)}|^{-1}\|^{p+pt-1}_{\infty} \|u\|_{{\mc{H}_p^t}}^p.
\end{align*}
Also, there exists $K=K(\mc{R})>0$ such that $\sup_{i, \omega, n} \max_{x,x'\in I_{i,\omega}^{(n)}} |D\xi_{i,\omega}^{(n)}(x)^{-1}D\xi_{i,\omega}^{(n)}(x')|<K$. So, in particular, $|D\xi_i(x)^{-1}A_i|\leq K$ and $|A_i^{-1}D\xi_i(x)|\leq K$ for every $x\in I_i$.
It follows directly from \cite[Lemma 3.7]{GTQuas}
that if $\phi:\mathbb{R}\circlearrowleft$ is a diffeomorphism such that $\|D\phi\|_\infty, \|D\phi^{-1}\|_\infty\leq K$, then there exists a constant $C_K$ such that for every $u\in {\mc{H}_p^t}$ we have that
$\|u\circ \phi\|_{{\mc{H}_p^t}} \leq C_K \|u\|_{{\mc{H}_p^t}}$.
Combining with the previous estimate we get that
\begin{align*}
&\|(1_{I_i} |DT_\omega^{(n)}|^{-1} u)\circ \xi_i\|_{{\mc{H}_p^t}}^p \\
&\quad \leq C_{\mc{T}} \||DT_\omega^{(n)}|^{-1}\|_{\infty}^{p-1}\| u\|^p_p + C_{\mc{T}} \||DT_\omega^{(n)}|^{-1}\|_{\infty}^{p+pt-1}\|u\|_{{\mc{H}_p^t}}^p. \quad \qed
\end{align*}
\subsection{Proof of Claim~\ref{it:boundPFhpt.2}}\label{S:pfboundPFhpt.2}
The first claim follows from \cite[Theorem 2.4.7]{Triebel}.
For the second claim, we first observe that
\begin{align*}
\|\eta_{m,r} u\|^p_{{\mc{H}_p^t}}&=\|(u\circ R_{r^{-1}} \cdot \eta_m) \circ R_r\|^p_{{\mc{H}_p^t}} \\
&\leq C_\# \Big(\|(u\circ R_{r^{-1}} \cdot \eta_m) \circ R_r\|_{p} + \|D_t((u\circ R_{r^{-1}} \cdot \eta_m) \circ R_r)\|_{p} \Big)^p \\
&\leq C_\# \Big(r^{-1}\|u\circ R_{r^{-1}} \cdot \eta_m\|^p_{p} + r^{-1+pt}\|D_t(u\circ R_{r^{-1}} \cdot \eta_m)\|_{p}^p \Big).
\end{align*}
Combining with the first part, we get
\begin{align*}
\sum_{m\in \mathbb{Z}} \|\eta_{m,r} u\|^p_{{\mc{H}_p^t}} &\leq
C_\# \Big(r^{-1} \sum_{m\in \mathbb{Z}}\|u\circ R_{r^{-1}} \cdot \eta_m\|^p_{p} + r^{-1+pt}\sum_{m\in \mathbb{Z}}\|u\circ R_{r^{-1}} \cdot \eta_m\|_{{\mc{H}_p^t}}^p \Big)\\
&\leq C_\# \Big( \|u \|_p^p + r^{-1+pt} (\|u \circ R_{r^{-1}}\|_p^p + \|D_t(u\circ R_{r^{-1}})\|_{p}^p) \Big) \\
&\leq C_\# \Big( (1+r^{pt})\|u\|_p^p + \|u\|_{{\mc{H}_p^t}}^p \Big). \quad \qed
\end{align*}
\section{Introduction \label{sec:intro}}
After the seminal work \cite{Garcia-Garcia:2016mno,Cotler:2016fpe},
the spectral form factor has been intensively studied as a diagnostic
of the quantum chaotic behavior of the Sachdev-Ye-Kitaev (SYK) model \cite{KitaevTalks,Sachdev,Maldacena:2016hyu}, which is a solvable example of a
holographic model of a certain two-dimensional black hole.
At late times, the spectral form factor
of SYK model exhibits a structure of the so-called \textit{ramp}
and \textit{plateau}, and
it is well-approximated by the behavior of the Gaussian Unitary Ensemble
(GUE) random matrix model
when the number of fermions mod 8 is 2 or 6 \cite{you}\footnote{See also \cite{Gharibyan:2018jrp,Hunter-Jones:2017crg,Li:2017hdt,Kanazawa:2017dpd,Saad:2018bqo,Garcia-Garcia:2018ruf,Nosaka:2018iat,Garcia-Garcia:2017bkg} for the study of spectral form factor
in SYK model and its supersymmetric generalizations.}.
In this paper, we will consider the
spectral form factor $g(\beta,t)$ in the GUE matrix model
with non-zero inverse temperature $\beta$.
We will show that
$g(\beta,t)$
is written exactly as a trace of
an $N\times N$ matrix $A(z)$ defined in \eqref{eq:Amat}.
$g(\beta,t)$ consists of two parts: the disconnected part
$g_{\text{disc}}(\beta,t)$ \eqref{eq:gdisc} and the connected part
$g_{\text{conn}}(\beta,t)$ \eqref{eq:gconn}.
In Figure \ref{fig:gtotal}, we show the plot of this exact $g(\beta,t)$
for $\beta=5$ with the matrix size $N=500$.
As we can see from Figure \ref{fig:gtotal},
after the initial decay described by
the disconnected part $g_{\text{disc}}(\beta,t)$,
$g(\beta,t)$ has the structure of ramp
and plateau at late times. This late time behavior
comes from the connected part
$g_{\text{conn}}(\beta,t)$ and it was studied
extensively in the literature (see e.g. \cite{hikami,Liu:2018hlr} and references therein).
\footnote{The spectral form factor was first introduced in
\cite{jost} as a Fourier transform of the two-level correlation
function,
and it was observed that the spectral form factor exhibits a structure of dip,
which was originally called the ``correlation hole'' in \cite{jost}.}
The ramp is closely related to
the short range correlation of eigenvalues described by
the so-called sine kernel, and if we focus on the contribution from
a small window around some fixed eigenvalue the ramp grows linearly in $t$.
However, since $g(\beta,t)$ is defined by integrating over
the whole range of the eigenvalue distribution, the actual ramp is not a linear function of $t$.
In this paper, we will study the non-linearity of ramp using the exact result at finite $N$.
To see the deviation from the linear behavior, it is natural
to consider the time derivative of $g_{\text{conn}}(\beta,t)$, which we will call the
\textit{slope of ramp}.
If the ramp were a linear function of $t$, the slope of ramp would be a constant.
However, the actual slope of ramp is not constant in time.
It turns out that the slope of ramp obeys the semi-circle law as a function of time.
This is a direct consequence of the semi-circle law of eigenvalue distribution,
of course, but there is an interesting twist:
the slope of ramp corresponds to the eigenvalues and the time
corresponds to
the eigenvalue density (see Figure \ref{fig:circle} for the detail
of this correspondence). In other words,
the eigenvalue density manifests itself as the time direction
in the graph of the slope of ramp.
\begin{figure}[thb]
\centering
\includegraphics[width=10cm]{gtotal.pdf}
\caption{Plot of the exact spectral form factor $g(\beta,t)$
in GUE for $\beta=5, N=500$.
}
\label{fig:gtotal}
\end{figure}
This paper is organized as follows.
In section \ref{sec:exact}, we write down the exact
closed form expression of the slope of ramp
$\partial_t g_{\text{conn}}(\beta,t)$ at finite $N$.
In section \ref{sec:largeN}, we compute the late time behavior of
$g_{\text{conn}}(\beta,t)$ in the large $N$
limit. We point out that after an appropriate change of variable
\eqref{eq:sbt}, the slope of ramp obeys the semi-circle law as a function of time.
In section \ref{sec:plot}, we plot the slope of ramp as a function of time
using our exact result at finite $N$ for both $\beta=0$ and $\beta\ne0$ cases,
and confirm that the slope of ramp exhibits the semi-circle law.
In section \ref{sec:smallt}, we consider the slope of ramp in the small $t$ regime.
Finally, we conclude in section \ref{sec:conclusion}.
In Appendix \ref{app:mat}, we explain how to compute
$\Tr A(z)$ and $\Tr A(z_1)A(z_2)$.
\section{Exact slope of ramp at finite $N$ \label{sec:exact}}
In this paper we consider the spectral form factor in Gaussian
matrix model defined by
\begin{equation}
\begin{aligned}
g(\beta,t)=\Bigl\langle \Tr e^{-(\beta+\mathrm{i} t)H}
\Tr e^{-(\beta-\mathrm{i} t)H}\Bigr\rangle
&=\frac{\int dHe^{-\frac{N}{2}\Tr H^2}\Tr e^{-(\beta+\mathrm{i} t)H}
\Tr e^{-(\beta-\mathrm{i} t)H}}{\int dHe^{-\frac{N}{2}\Tr H^2}},
\end{aligned}
\label{eq:def-g}
\end{equation}
where the integral is over the $N\times N$ hermitian matrix $H$.
By definition, $g(\beta,t)$ is an even function of $t$. Moreover,
since the Gaussian measure is invariant under $H\to -H$,
$g(\beta,t)$ is independent of the sign
of $\beta$.
In the following we will assume that $\beta$ and $t$
are both positive without loss of generality:
\begin{equation}
\begin{aligned}
\beta\geq0,\quad t\geq0.
\end{aligned}
\end{equation}
In the normalization of Gaussian measure
in \eqref{eq:def-g},
the eigenvalue $\mu$ of matrix $H$ is distributed along the cut $\mu\in[-2,2]$
in the large $N$ limit,
and the eigenvalue density $\rho(\mu)$ is given by the
Wigner semi-circle law
\begin{equation}
\begin{aligned}
\rho(\mu)=\frac{1}{2\pi}\rt{4-\mu^2} .
\end{aligned}
\label{eq:wigner}
\end{equation}
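A quick numerical check of this normalization (an illustration of ours, not taken from the references): sampling $H$ from the measure $e^{-\frac{N}{2}\Tr H^2}$ and diagonalizing, the spectrum should fill $[-2,2]$ with the density \eqref{eq:wigner}, while the Monte Carlo estimator of \eqref{eq:def-g} trivially reduces to $N^2$ at $\beta=t=0$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

def sample_H():
    """Draw H from dH exp(-(N/2) Tr H^2): H_ii ~ N(0, 1/N) real,
    Re/Im H_ij ~ N(0, 1/(2N)) for i != j."""
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    return (A + A.conj().T) / (2 * np.sqrt(N))

mu = np.linalg.eigvalsh(sample_H())

# semicircle prediction for the fraction of eigenvalues in [-1, 1]:
# (1/2pi) * integral_{-1}^{1} sqrt(4 - x^2) dx = sqrt(3)/(2 pi) + 1/3
frac_pred = np.sqrt(3) / (2 * np.pi) + 1 / 3
frac_obs = np.mean(np.abs(mu) <= 1)

def g_estimate(beta, t, samples=20):
    """Monte Carlo estimator of the spectral form factor g(beta, t)."""
    vals = []
    for _ in range(samples):
        ev = np.linalg.eigvalsh(sample_H())
        z = np.sum(np.exp(-(beta + 1j * t) * ev))
        vals.append((z * z.conjugate()).real)
    return np.mean(vals)
```

At finite $N$ the eigenvalues fluctuate slightly beyond the cut $[-2,2]$, by an amount of order $N^{-2/3}$ at the edge, so the agreement with the semicircle law is only approximate for a single sample.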
As pointed out in \cite{delCampo:2017bzr},
$g(\beta,t)$ in \eqref{eq:def-g}
is formally equivalent to the correlator of 1/2 BPS Wilson loops in
4d $\mathcal{N}=4$ Super Yang-Mills (SYM) theory,
which is also given by the Gaussian matrix model via
the supersymmetric localization \cite{Erickson:2000af,Drukker:2000rr,Pestun:2007rz}.
Thus, we can immediately find the exact form of $g(\beta,t)$
by borrowing the known result of $\mathcal{N}=4$ SYM
in \cite{Drukker:2000rr,Kawamoto:2008gp,Okuyama:2018aij}.
To do this, it is convenient to rescale the matrix
\begin{equation}
\begin{aligned}
H=\rt{\frac{2}{N}}M,
\end{aligned}
\end{equation}
so that the measure becomes $\int dM e^{-\Tr M^2}$.
In this normalization, $g(\beta,t)$ is written as
\begin{equation}
\begin{aligned}
g(\beta,t)=\Bigl\langle \Tr e^{\frac{\beta+\mathrm{i} t}{\rt{N}}\rt{2}M}
\Tr e^{\frac{\beta-\mathrm{i} t}{\rt{N}}\rt{2}M}\Bigr\rangle.
\end{aligned}
\label{eq:gmat}
\end{equation}
On the other hand, the correlator of 1/2 BPS Wilson loops with winding number
$k_i$ is given by \cite{Okuyama:2018aij}
\begin{equation}
\begin{aligned}
\left\langle\prod_i \Tr e^{k_i\rt{\frac{\lambda}{4N}}\rt{2}M}\right\rangle,
\end{aligned}
\label{eq:Wmat}
\end{equation}
where $\lambda$ denotes the 't Hooft coupling of $\mathcal{N}=4$ SYM.
Comparing \eqref{eq:gmat} and \eqref{eq:Wmat}, we find a dictionary between
Wilson loops in $\mathcal{N}=4$ SYM and the spectral form factor
\begin{equation}
\begin{aligned}
k_i\rt{\lambda}~\leftrightarrow~2(\beta\pm\mathrm{i} t).
\end{aligned}
\end{equation}
As shown in \cite{Fiol:2013hna,Okuyama:2018aij},
the correlator of $\Tr e^{z\rt{2}M}$ is written in terms of the $N\times N$
symmetric matrix
$A(z)$ defined by
\begin{equation}
\begin{aligned}
A(z)_{i,j}=\rt{\frac{i!}{j!}}e^{\frac{z^2}{2}}z^{j-i}
L_i^{j-i}(-z^2), \quad(i,j=0,\cdots,N-1),
\end{aligned}
\label{eq:Amat}
\end{equation}
where $L_n^\alpha(x)$ denotes the associated Laguerre polynomial.
The one-point function is given by
the trace of $A(z)$ (see Appendix \ref{app:mat} for a derivation of this result)
\begin{equation}
\begin{aligned}
\Bigl\langle \Tr e^{z\rt{2}M}\Bigr\rangle
=\Tr A(z)=e^{\frac{z^2}{2}}L_{N-1}^1(-z^2).
\end{aligned}
\end{equation}
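At small $N$ this result can be checked directly by building $A(z)$ entrywise from \eqref{eq:Amat}. The Python sketch below (illustrative values of $N$ and real $z$; the lower triangle is filled using the symmetry of $A(z)$) compares $\Tr A(z)$ with the closed form; the underlying Laguerre identity is $\sum_{i=0}^{N-1}L_i(x)=L^1_{N-1}(x)$.

```python
import numpy as np
from math import factorial, sqrt, exp
from scipy.special import eval_genlaguerre

def A_matrix(z, N):
    """A(z)_{ij} = sqrt(i!/j!) e^{z^2/2} z^{j-i} L_i^{j-i}(-z^2), symmetric."""
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(i, N):
            A[i, j] = (sqrt(factorial(i) / factorial(j)) * exp(z**2 / 2)
                       * z**(j - i) * eval_genlaguerre(i, j - i, -z**2))
            A[j, i] = A[i, j]
    return A

N, z = 6, 0.3
lhs = np.trace(A_matrix(z, N))
rhs = exp(z**2 / 2) * eval_genlaguerre(N - 1, 1, -z**2)
print(lhs, rhs)   # agree to machine precision
```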
The spectral form factor $g(\beta,t)$ in \eqref{eq:gmat}
is a two-point function
of $\Tr e^{z\rt{2}M}$ and $\Tr e^{\b{z}\rt{2}M}$ with
\begin{equation}
\begin{aligned}
z=\frac{\beta+\mathrm{i} t}{\rt{N}},\quad
\b{z}=\frac{\beta-\mathrm{i} t}{\rt{N}}.
\end{aligned}
\label{eq:z-bt}
\end{equation}
One can naturally decompose $g(\beta,t)$ into the disconnected part
$g_{\text{disc}}(\beta,t)$
and the connected part $g_{\text{conn}}(\beta,t)$
\begin{equation}
\begin{aligned}
g(\beta,t)= g_{\text{disc}}(\beta,t)+ g_{\text{conn}}(\beta,t).
\end{aligned}
\end{equation}
The disconnected part is given by a product of one-point functions
\begin{equation}
\begin{aligned}
g_{\text{disc}}(\beta,t)=\Tr A(z)\Tr A(\b{z})
=e^{\frac{z^2+\b{z}^2}{2}}L_{N-1}^1(-z^2)L_{N-1}^1(-\b{z}^2),
\end{aligned}
\label{eq:gdisc}
\end{equation}
where $z$ and $\b{z}$ are defined in \eqref{eq:z-bt}.
This part is responsible for the early time decay of $g(\beta,t)$,
which we will not consider in this paper.
The late time behavior of $g(\beta,t)$, the so-called ramp and plateau,
comes from the connected part. Using the results in
\cite{Drukker:2000rr,Kawamoto:2008gp,Okuyama:2018aij},
$g_{\text{conn}}(\beta,t)$ is written as
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,t)&=\Tr \Bigl[A(z+\b{z})-A(z)A(\b{z})\Bigr].
\end{aligned}
\label{eq:gconn}
\end{equation}
Since $z+\b{z}=\frac{2\beta}{\rt{N}}$,
the first term of \eqref{eq:gconn} is independent of time
and sets the value of the plateau
\begin{equation}
\begin{aligned}
g_{\text{plateau}}(\beta)&=\Tr A(z+\b{z})
=e^{\frac{2\beta^2}{N}}L_{N-1}^1\Bigl(-\frac{4\beta^2}{N}\Bigr).
\end{aligned}
\end{equation}
Using the result of Wilson loop in $\mathcal{N}=4$ SYM \cite{Erickson:2000af},
the large $N$ limit of $g_{\text{plateau}}(\beta)$
with fixed $\beta$ is given by\footnote{The initial value of the
disconnected part $g_{\text{disc}}(\beta,t=0)$ is order $N^2$
in the large $N$ limit
\begin{equation}
\begin{aligned}
g_{\text{disc}}(\beta,t=0)\approx N^2\frac{I_1(2\beta)^2}{\beta^2}.
\end{aligned}
\end{equation}
Note that this is larger than the value of plateau \eqref{eq:plateau-value}
by a factor of $N$.
}
\begin{equation}
\begin{aligned}
g_{\text{plateau}}(\beta)\approx N\frac{I_1(4\beta)}{2\beta},
\end{aligned}
\label{eq:plateau-value}
\end{equation}
where $I_n(x)$ denotes the modified Bessel function of the first kind.
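A quick numerical comparison (a sketch, with illustrative values of $\beta$ and $N$) of the exact finite-$N$ plateau value against the large-$N$ limit \eqref{eq:plateau-value}:

```python
import numpy as np
from scipy.special import eval_genlaguerre, iv

def g_plateau_exact(beta, N):
    # g_plateau = e^{2 beta^2/N} L^1_{N-1}(-4 beta^2/N)
    return np.exp(2 * beta**2 / N) * eval_genlaguerre(N - 1, 1, -4 * beta**2 / N)

beta, N = 0.5, 200
exact = g_plateau_exact(beta, N)
large_N = N * iv(1, 4 * beta) / (2 * beta)
print(exact, large_N)   # ratio should approach 1 at large N
```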
The non-trivial time dependence comes from the second term of \eqref{eq:gconn}
\begin{equation}
\begin{aligned}
g_{\text{ramp}}(\beta,t)&=-\Tr \Bigl[A(z)A(\b{z})\Bigr].
\end{aligned}
\end{equation}
In what follows, we will consider the time derivative of $g_{\text{ramp}}(\beta,t)$,
which we call the \textit{slope of ramp}.
Since $g_{\text{plateau}}(\beta)$ is independent of time,
the slope of ramp is equal to the time derivative of the connected part
of spectral form factor
\begin{equation}
\begin{aligned}
\frac{\partial g_{\text{ramp}}}{\partial t}(\beta,t)=\frac{\partial g_{\text{conn}}}{\partial t}(\beta,t).
\end{aligned}
\end{equation}
As explained in Appendix \ref{app:mat},
we can write down a closed form expression of the slope of ramp
\begin{equation}
\begin{aligned}
\frac{\partial g_{\text{conn}}}{\partial t}(\beta,t)&=
\frac{N}{\beta}e^{\frac{z^2+\b{z}^2}{2}}\text{Im}
\Bigl[L_N(-z^2)L_{N-1}(-\b{z}^2)\Bigr] .
\end{aligned}
\label{eq:slope-bt}
\end{equation}
By taking the limit $\beta\to0$ of \eqref{eq:slope-bt}, the slope of ramp for $\beta=0$ becomes
\begin{equation}
\begin{aligned}
\frac{\partial g_{\text{conn}}}{\partial t}(0,t)=
2te^{-\frac{t^2}{N}}\Biggl[L_{N-1}\Bigl(\frac{t^2}{N}\Bigr)
L_{N-1}^1\Bigl(\frac{t^2}{N}\Bigr)-L_{N}\Bigl(\frac{t^2}{N}\Bigr)
L_{N-2}^1\Bigl(\frac{t^2}{N}\Bigr)\Biggr] .
\end{aligned}
\label{eq:slope-0}
\end{equation}
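Since \eqref{eq:slope-bt} and \eqref{eq:slope-0} are exact at finite $N$, they can be tested against a numerical $t$-derivative of $g_{\text{conn}}(\beta,t)=\Tr[A(z+\b{z})-A(z)A(\b{z})]$. The Python sketch below (illustrative parameters at $N=8$; the three-term Laguerre recurrence is written out so that complex arguments are allowed) performs this check.

```python
import numpy as np
from math import factorial, sqrt

def genlaguerre(n, alpha, x):
    """L_n^alpha(x) via the three-term recurrence; x may be complex."""
    if n < 0:
        return 0.0 * x               # convention L_{-1}^alpha = 0
    L0, L1 = 1.0 + 0 * x, 1.0 + alpha - x
    if n == 0:
        return L0
    for k in range(1, n):
        L0, L1 = L1, ((2 * k + 1 + alpha - x) * L1 - (k + alpha) * L0) / (k + 1)
    return L1

def A_matrix(z, N):
    A = np.zeros((N, N), dtype=complex)
    for i in range(N):
        for j in range(i, N):
            A[i, j] = (sqrt(factorial(i) / factorial(j)) * np.exp(z**2 / 2)
                       * z**(j - i) * genlaguerre(i, j - i, -z**2))
            A[j, i] = A[i, j]
    return A

def g_conn(beta, t, N):
    z = (beta + 1j * t) / sqrt(N)
    zb = (beta - 1j * t) / sqrt(N)
    return np.trace(A_matrix(z + zb, N) - A_matrix(z, N) @ A_matrix(zb, N)).real

def slope_closed_form(beta, t, N):       # eq. (slope-bt)
    z2 = ((beta + 1j * t) / sqrt(N))**2
    zb2 = ((beta - 1j * t) / sqrt(N))**2
    return (N / beta) * np.exp((z2 + zb2) / 2).real * (
        genlaguerre(N, 0, -z2) * genlaguerre(N - 1, 0, -zb2)).imag

def slope0_closed_form(t, N):            # eq. (slope-0)
    x = t**2 / N
    return 2 * t * np.exp(-x) * (
        genlaguerre(N - 1, 0, x) * genlaguerre(N - 1, 1, x)
        - genlaguerre(N, 0, x) * genlaguerre(N - 2, 1, x))

N, beta, t, h = 8, 0.7, 1.3, 1e-5
fd = (g_conn(beta, t + h, N) - g_conn(beta, t - h, N)) / (2 * h)
fd0 = (g_conn(0.0, t + h, N) - g_conn(0.0, t - h, N)) / (2 * h)
print(slope_closed_form(beta, t, N), fd)   # agree up to finite-difference error
print(slope0_closed_form(t, N), fd0)
```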
We are interested in the large $N$ limit of the slope of ramp
\eqref{eq:slope-bt} and
\eqref{eq:slope-0}.
When $\beta=0$, as pointed out in \cite{hikami},
$\partial_t g_{\text{conn}}(0,t)$ in \eqref{eq:slope-0}
happens to be equal to the eigenvalue
density in the Wishart-Laguerre ensemble, which is known to
obey the semi-circle law in the large $N$ limit.\footnote{See
eq.(3.16) and eq.(3.30) in \cite{Brezin:1995dp} (see also \cite{Verbaarschot:1993pm}).
The eigenvalue density of Wishart-Laguerre ensemble
$\rho(\mu)=\mu\tilde{\rho}(\mu^2)$ in \cite{Brezin:1995dp} is equal to
$\frac{1}{2}\partial_t g_{\text{conn}}(0,t)$ under the identification $\mu=t/N$;
eq.(3.30) in \cite{Brezin:1995dp} corresponds to the exact finite $N$ result
of $\partial_t g_{\text{conn}}(0,t)$ in \eqref{eq:slope-0}, while eq.(3.16)
in \cite{Brezin:1995dp} represents its large $N$ limit.}
However, the large $N$ limit of $\partial_t g_{\text{conn}}(\beta,t)$
with non-zero $\beta$ is not well studied in the literature.
In section \ref{sec:plot},
we will numerically study the large $N$ behavior of the exact results \eqref{eq:slope-bt} and
\eqref{eq:slope-0}.
Before doing this numerical study, in the next section we review the
analytic derivation of the large $N$ behavior of the ramp
in \cite{hikami,Liu:2018hlr}.
\section{Large $N$ limit of the slope of ramp \label{sec:largeN}}
The large $N$ limit of $g_{\text{conn}}(\beta,t)$
is written in terms of the connected part of the two-level
correlation function $\rho^{(2)}(\mu_1,\mu_2)$
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,t)&=\int d\mu_1d\mu_2\rho^{(2)}(\mu_1,\mu_2)
e^{(\beta+\mathrm{i} t)\mu_1}e^{(\beta-\mathrm{i} t)\mu_2}\\
&=\int d\mu_1d\mu_2\rho^{(2)}(\mu_1,\mu_2)e^{\mathrm{i} t(\mu_1-\mu_2)+\beta(\mu_1+\mu_2)}.
\end{aligned}
\end{equation}
At late times $t\gg1$, the dominant contribution comes from
the region $|\mu_1-\mu_2|\ll1$.
Thus we can use the universal form of the short range correlation, known as the \textit{sine kernel}
(see e.g. \cite{Mehta})
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,t) \approx
-N^2\int d\mu_1d\mu_2\left[ \frac{\sin N\pi (\mu_1-\mu_2) \rho\bigl(\frac{\mu_1+\mu_2}{2}\bigr)}
{N\pi (\mu_1-\mu_2)}\right]^2 e^{\mathrm{i} t(\mu_1-\mu_2)+\beta (\mu_1+\mu_2)}.
\end{aligned}
\label{eq:gconn-sine}
\end{equation}
Introducing the variables $\omega$ and $u$ by
\begin{equation}
\begin{aligned}
\omega=2N(\mu_1-\mu_2),\quad
u=\frac{\mu_1+\mu_2}{4},
\end{aligned}
\end{equation}
\eqref{eq:gconn-sine} is rewritten as
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,t) \approx
-\frac{4N}{\pi^2}\int dud\omega \frac{\sin^2 \frac{\pi}{2} \rho(2u)\omega}{\omega^2} e^{\mathrm{i} \omega\tau+4\beta u},
\end{aligned}
\label{eq:g-omint}
\end{equation}
where $\tau$ is given by
\begin{equation}
\begin{aligned}
\tau=\frac{t}{2N} .
\end{aligned}
\label{eq:tau-def}
\end{equation}
In the large $N$ limit, the integration region of $\omega$ can be extended to $\omega\in(-\infty,\infty)$,
and the $\omega$-integral is explicitly evaluated as \cite{hikami}
\begin{equation}
\begin{aligned}
\int_{-\infty}^\infty d\omega\frac{\sin^2 \frac{\pi}{2} \rho(2u)\omega}{\omega^2}e^{\mathrm{i} \omega\tau}
=\left\{
\begin{aligned}
&\frac{\pi}{2} \big(\pi\rho(2u)-\tau\big),\quad & (\pi\rho(2u)>\tau),\\
&0, \quad & (\pi\rho(2u)<\tau).
\end{aligned}
\right.
\end{aligned}
\label{eq:relu}
\end{equation}
The condition $\pi\rho(2u)>\tau$ limits the range of $u$-integration
to $u\in[-u_\tau,u_\tau]$, where $u_\tau$ is determined by $\pi\rho(2u_\tau)=\tau$.
From the explicit form of eigenvalue density in \eqref{eq:wigner},
we find
\begin{equation}
\begin{aligned}
\pi\rho(2u_\tau)= \rt{1-u_\tau^2}=\tau,
\end{aligned}
\label{eq:u-tau}
\end{equation}
and $u_\tau$ is given by
\begin{equation}
\begin{aligned}
u_\tau=\rt{1-\tau^2}.
\end{aligned}
\label{eq:utau-sqrt}
\end{equation}
Since the maximal value of $\pi\rho(2u)$ is one,
$\tau=1$ is the critical value at which
the behavior of $g_{\text{conn}}(\beta,t)$ changes discontinuously from ramp to plateau.
In the following, we will
consider the ramp regime $\tau<1$.
When $\tau<1$, plugging \eqref{eq:relu}
into \eqref{eq:g-omint} we find that $g_{\text{conn}}(\beta,t)$ is written as
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,t)=\frac{2N}{\pi}\int_{-u_\tau}^{u_\tau}du \,e^{4\beta u}\bigl(
\tau-\pi \rho(2u)\bigr).
\end{aligned}
\label{eq:gconn-uint}
\end{equation}
Let us consider the time derivative of
$g_{\text{conn}}(\beta,t)$ in \eqref{eq:gconn-uint}.
The contribution from the $t$-dependence of the boundary $\pm u_\tau$ vanishes,
since the integrand vanishes at $u=\pm u_\tau$ by the condition \eqref{eq:u-tau}.
Thus, the $t$-derivative of \eqref{eq:gconn-uint} comes only from the
derivative of integrand
\begin{equation}
\begin{aligned}
\frac{\partial g_{\text{conn}}}{\partial t}(\beta,t)&=\frac{2N}{\pi}\int_{-u_\tau}^{u_\tau}du\, e^{4\beta u}\frac{\partial\tau}{\partial t}=
\frac{1}{\pi}\int_{-u_\tau}^{u_\tau}du\, e^{4\beta u}=
\frac{\sinh 4\beta u_\tau}{2\pi\beta}.
\end{aligned}
\label{eq:delg-sinh}
\end{equation}
Let us take a closer look at the case of $\beta=0$.
By setting $\beta=0$ in \eqref{eq:delg-sinh}, one can see
that $\partial_t g_{\text{conn}}(0,t)$ is proportional
to $u_\tau$
\begin{equation}
\begin{aligned}
\frac{\partial g_{\text{conn}}}{\partial t}(0,t)=\frac{2}{\pi}u_\tau.
\end{aligned}
\end{equation}
Introducing the rescaled slope of ramp $s(0,t)$ by
\begin{equation}
\begin{aligned}
s(0,t):=\frac{\pi}{2}\frac{\partial g_{\text{conn}}}{\partial t}(0,t)=u_\tau,
\end{aligned}
\label{eq:s0-def}
\end{equation}
it follows from \eqref{eq:utau-sqrt} that $s(0,t)$ obeys the semi-circle law
\begin{equation}
\begin{aligned}
s(0,t)^2+\tau^2=1.
\end{aligned}
\label{eq:st-circle}
\end{equation}
\begin{figure}[thb]
\centering
\begin{tikzpicture}
\draw[->,thick] (-5.5,0)--(5.5,0);
\draw[->,thick] (0,-0.5)--(0,5.8);
\coordinate (u) at (5.5,0) node at (u) [right] {$u$};
\coordinate (r) at (0,5.8) node at (r) [above] {$\pi\rho(2u)$};
\draw[thick,blue] (5,0) arc (0:180:5);
\coordinate (up) at (4,0) node at (up) [below] {$u_\tau$};
\coordinate (um) at (-4,0) node at (um) [below] {$-u_\tau$};
\coordinate (O) at (0,-0.22) node at (O) [left]{$0$};
\coordinate (t) at (0,3.22) node at (t) [left] {$\tau$};
\coordinate (s) at (2,3) node at (s) [below] {$s(\beta,t)$};
\coordinate (c1) at (5,0) node at (c1) [below] {$1$};
\coordinate (c2) at (-5,0) node at (c2) [below] {$-1$};
\coordinate (c3) at (0,5.22) node at (c3) [left] {$1$};
\draw[thick,red, dotted] (-4,3)--(0,3);
\draw[<->,thick,red] (4,3)--(0,3);
\draw[thick,dashed] (-4,3)--(-4,0);
\draw[thick,dashed] (4,3)--(4,0);
\end{tikzpicture}
\caption{This figure shows the interpretation of $\tau$ and $s(\beta,t)$
in the eigenvalue distribution. The blue semi-circle is the graph of eigenvalue density $\pi\rho(2u)=\rt{1-u^2}$. The time slice $\pi\rho(2u)=\tau$ is represented by
the horizontal red line. The \textit{slope of ramp} $s(\beta,t)=u_\tau$
corresponds to the length of solid red line.}
\label{fig:circle}
\end{figure}
When $\beta\ne0$,
one can similarly define the quantity $s(\beta,t)$
by applying
the inverse function of sinh to $\partial_t g_{\text{conn}}$ in \eqref{eq:delg-sinh}:
\begin{equation}
\begin{aligned}
s(\beta,t):=\frac{1}{4\beta}\text{arcsinh}\left(2\pi\beta \frac{\partial g_{\text{conn}}}{\partial t}(\beta,t)\right)
=u_\tau.
\end{aligned}
\label{eq:sbt}
\end{equation}
Again, from \eqref{eq:utau-sqrt} it follows that
$s(\beta,t)$ obeys the semi-circle law
\begin{equation}
\begin{aligned}
s(\beta,t)^2+\tau^2=1 .
\end{aligned}
\label{eq:sb-circle}
\end{equation}
In the rest of this paper, we will use the name ``slope of ramp''
for both $\partial_t g_{\text{conn}}(\beta,t)$ and $s(\beta,t)$ interchangeably.
In Figure \ref{fig:circle}, we show the interpretation of $s(\beta,t)$
in the Wigner semi-circle distribution.
Here we comment on some features of this figure:
\begin{itemize}
\item The time $\tau$ corresponds to the vertical axis in Figure \ref{fig:circle}.
Namely, $\tau$ probes the value of eigenvalue density (see \eqref{eq:u-tau}).
\item The \textit{slope of ramp} $s(\beta,t)$ in \eqref{eq:sbt}
corresponds to the horizontal direction in Figure \ref{fig:circle}.
In other words, $s(\beta,t)$ plays the role of eigenvalue.
\item The point $(s(\beta,t),\tau)$ lies on the unit semi-circle \eqref{eq:sb-circle}.
\end{itemize}
Before closing this section, we note in passing that
the large $N$ limit of $g_{\text{conn}}(\beta,t)$ is easily obtained by integrating
$\partial_t g_{\text{conn}}$ in \eqref{eq:delg-sinh}
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,t)
=g_{\text{conn}}(\beta,0)+2N\int_0^\tau d\tau' \frac{\sinh4\beta\rt{1-\tau'^2}}{2\pi \beta}.
\end{aligned}
\end{equation}
After the change of variable $\tau'=\sin\th'$,
this integral can be performed by using the relation
\begin{equation}
\begin{aligned}
\sinh(4\beta\cos\th)=2\sum_{n=1}^\infty I_{2n-1}(4\beta)\cos(2n-1)\th.
\end{aligned}
\end{equation}
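This series converges rapidly, so the relation is easy to verify numerically (a sketch with illustrative values; the sum is truncated at $n=30$, far beyond where the terms become negligible):

```python
import numpy as np
from scipy.special import iv

beta, th = 0.9, 0.7
lhs = np.sinh(4 * beta * np.cos(th))
rhs = 2 * sum(iv(2 * n - 1, 4 * beta) * np.cos((2 * n - 1) * th)
              for n in range(1, 30))
print(lhs, rhs)   # agree to machine precision
```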
Then we find
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,t)=
g_{\text{conn}}(\beta,0)+
\frac{N}{\pi\beta}\left[
I_1(4\beta)\th
+\sum_{n=1}^\infty \frac{I_{2n+1}(4\beta)+
I_{2n-1}(4\beta)}{2n}\sin 2n\th\right],
\end{aligned}
\label{eq:gconn-Ibt}
\end{equation}
where $\th$ is related to time $\tau$ by
\begin{equation}
\begin{aligned}
\th=\arcsin(\tau).
\end{aligned}
\end{equation}
Note that the initial value $g_{\text{conn}}(\beta,0)$ is given by
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,0)=
\Tr\left[A\Bigl(\frac{2\beta}{\rt{N}}\Bigr)-A\Bigl(\frac{\beta}{\rt{N}}\Bigr)^2\right].
\end{aligned}
\label{eq:gc-init}
\end{equation}
When $\beta=0$ this initial value vanishes, $g_{\text{conn}}(0,0)=0$, but
it is non-zero for $\beta\ne0$. The large $N$ limit of $g_{\text{conn}}(\beta,0)$
in \eqref{eq:gc-init} can be obtained by
borrowing the result of two-point correlator of 1/2 BPS Wilson loops in $\mathcal{N}=4$
SYM \cite{Akemann:2001st,Giombi:2009ms,Okuyama:2018aij}
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,0)= \beta I_0(2\beta)I_1(2\beta)+\mathcal{O}(N^{-2}).
\end{aligned}
\end{equation}
When $\beta=0$, \eqref{eq:gconn-Ibt} reproduces the known result
in \cite{hikami,Liu:2018hlr}
\begin{equation}
\begin{aligned}
g_{\text{conn}}(0,t)&=\frac{2N}{\pi}\Bigl(\th+\frac{1}{2}\sin2\th\Bigr)
=\frac{2N}{\pi}\Bigl(\arcsin(\tau)+\tau\rt{1-\tau^2}\Bigr).
\end{aligned}
\end{equation}
We have also checked that the small $\beta$ expansion of our result \eqref{eq:gconn-Ibt}
is consistent with the $\mathcal{O}(\beta^2)$ term of $g_{\text{conn}}(\beta,t)$
computed in
\cite{Liu:2018hlr}.
\section{Plot of the exact slope of ramp \label{sec:plot}}
In this section, we will study numerically
the behavior of the exact slope of ramp $s(\beta,t)$ at finite $N$.
Plugging the exact result of $\partial_t g_{\text{conn}}(\beta,t)$ \eqref{eq:slope-bt}
into \eqref{eq:sbt}, we find the exact form of
$s(\beta,t)$ at finite $N$
\begin{equation}
\begin{aligned}
s(\beta,t)= \frac{1}{4\beta}\text{arcsinh}\left(
2\pi Ne^{\frac{\beta^2-t^2}{N}}\text{Im}\Biggl[L_N\Bigl(-\frac{(\beta+\mathrm{i} t)^2}{N}\Bigr)
L_{N-1}\Bigl(-\frac{(\beta-\mathrm{i} t)^2}{N}\Bigr)\Biggr]\right).
\end{aligned}
\label{eq:sbtexact}
\end{equation}
When $\beta=0$, using the result of $\partial_t g_{\text{conn}}(0,t)$ in \eqref{eq:slope-0}
the exact form of
$s(0,t)$ at finite $N$ becomes
\begin{equation}
\begin{aligned}
s(0,t)=\pi
te^{-\frac{t^2}{N}}\Biggl[L_{N-1}\Bigl(\frac{t^2}{N}\Bigr)
L_{N-1}^1\Bigl(\frac{t^2}{N}\Bigr)-L_{N}\Bigl(\frac{t^2}{N}\Bigr)
L_{N-2}^1\Bigl(\frac{t^2}{N}\Bigr)\Biggr].
\end{aligned}
\label{eq:s0exact}
\end{equation}
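The semi-circle law can also be confirmed directly from \eqref{eq:s0exact} without plotting. The Python sketch below (at a moderate $N=200$, chosen so that all intermediate Laguerre values stay comfortably within double-precision range) evaluates the exact slope at a few values of $\tau=t/2N$:

```python
import numpy as np
from scipy.special import eval_genlaguerre

def s0_exact(t, N):
    # eq. (s0-exact)
    x = t**2 / N
    bracket = (eval_genlaguerre(N - 1, 0, x) * eval_genlaguerre(N - 1, 1, x)
               - eval_genlaguerre(N, 0, x) * eval_genlaguerre(N - 2, 1, x))
    return np.pi * t * np.exp(-x) * bracket

N = 200
for tau in (0.3, 0.5, 0.7):
    t = 2 * N * tau
    print(tau, s0_exact(t, N), np.sqrt(1 - tau**2))   # exact vs semi-circle
```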
\begin{figure}[htb]
\centering
\subcaptionbox{$s(0,t)$\label{sfig:sbt0}}{\includegraphics[width=7cm]{sbt0.pdf}}
\hskip5mm
\subcaptionbox{$s(5,t)$\label{sfig:sbt5}}{\includegraphics[width=7cm]{sbt5.pdf}}
\caption{Plot of $s(\beta,t)$
for \subref{sfig:sbt0} $\beta=0$ and \subref{sfig:sbt5} $\beta=5$.
The horizontal axis is the rescaled time $\tau=t/2N$.
The blue dots are the exact values of $s(\beta,t)$ at $N=500$ while the
red curve represents the semi-circle law $s=\rt{1-\tau^2}$.
}
\label{fig:sbt}
\end{figure}
In Figure \ref{fig:sbt}, we plot the exact slope of ramp
$s(\beta,t)$ at $N=500$
as a function of time $\tau=t/2N$.
One can clearly see that $s(\beta,t)$ obeys the semi-circle law as predicted by the large $N$
analysis in the previous section.
We emphasize that $s(\beta,t)$ is independent of $\beta$
in the large $N$ limit and it obeys the semi-circle law for both $\beta=0$ and $\beta\ne0$
as shown in \eqref{eq:st-circle} and \eqref{eq:sb-circle}.
On the other hand, $g_{\text{conn}}(\beta,t)$ itself has a non-trivial $\beta$-dependence,
whose explicit form in the large $N$ limit is given by \eqref{eq:gconn-Ibt}.
Note that the vertical and horizontal axes in Figure \ref{fig:circle}
are flipped in Figure \ref{fig:sbt}.
As we explained in the previous section, the
$\tau$-axis corresponds to the eigenvalue density and
the $s$-axis
corresponds to the eigenvalues.
In other words, the eigenvalue density manifests itself as the time direction
in Figure \ref{fig:sbt}.
As we can see from Figure \ref{fig:sbt}, the slope of ramp vanishes beyond
the critical value $\tau=1$, which corresponds to the so-called Heisenberg time
$t_H=2N$
where the plateau regime sets in.
This critical time is determined by the maximal value of
the eigenvalue density.
\section{Small $t$ behavior of the slope of ramp \label{sec:smallt}}
In this section we will consider the small $t$
behavior of the slope of ramp $s(\beta,t)$. Since $s(\beta,t)$ is an odd function of $t$,
its Taylor expansion starts from the linear term in $t$\footnote{In \cite{Cotler:2017jue} it was observed numerically
that in the small $t$ regime
$g_{\text{conn}}(0,t)$ behaves as $g_{\text{conn}}(0,t)\sim t^2$.
This behavior simply follows from the fact that
$g_{\text{conn}}(0,t)$ is an even function of $t$ with
the initial value $g_{\text{conn}}(0,0)=0$, hence its Taylor expansion
starts from $t^2$.}.
From the exact result of $s(\beta,t)$ at finite $N$ in \eqref{eq:sbtexact},
we can compute the coefficient of this linear term
\begin{equation}
\begin{aligned}
s(\beta,t)=
\pi e^{\frac{\beta^2}{N}}\left[
L_{N-1}\Bigl(-\frac{\beta^2}{N}\Bigr)
L_{N-1}^1\Bigl(-\frac{\beta^2}{N}\Bigr)-
L_{N}\Bigl(-\frac{\beta^2}{N}\Bigr)
L_{N-2}^1\Bigl(-\frac{\beta^2}{N}\Bigr)\right]t+\mathcal{O}(t^3).
\end{aligned}
\end{equation}
In the large $N$ limit this becomes
\begin{equation}
\begin{aligned}
s(\beta,t)=\pi \Bigl[I_0(2\beta)^2-I_1(2\beta)^2\Bigr]t+\mathcal{O}(t^3).
\end{aligned}
\end{equation}
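The convergence of the finite-$N$ coefficient to this limit can be checked numerically (a sketch; $\beta$ and the values of $N$ are illustrative). At $\beta\to0$ both expressions reduce to $\pi$, consistent with $s(0,t)\approx\pi t$.

```python
import numpy as np
from scipy.special import eval_genlaguerre, iv

def linear_coeff(beta, N):
    # coefficient of t in the finite-N expansion of s(beta, t)
    x = -beta**2 / N
    return np.pi * np.exp(beta**2 / N) * (
        eval_genlaguerre(N - 1, 0, x) * eval_genlaguerre(N - 1, 1, x)
        - eval_genlaguerre(N, 0, x) * eval_genlaguerre(N - 2, 1, x))

beta = 1.0
large_N = np.pi * (iv(0, 2 * beta)**2 - iv(1, 2 * beta)**2)
for N in (50, 200, 800):
    print(N, linear_coeff(beta, N), large_N)
```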
One can in principle compute the coefficients of $t^3,t^5,\cdots$
as a function of $\beta$ using the
exact result in \eqref{eq:sbtexact}.
However, the computation for general $\beta$
becomes tedious when we go to higher order terms.
Instead, here we focus on the $\beta=0$ case
where the higher order coefficients are easily extracted from
the exact result at finite $N$ in \eqref{eq:s0exact}
\begin{equation}
\begin{aligned}
s(0,t)= \frac{\pi}{2}\left[ 2 t-2 t^3+t^5 +
\left(-\frac{5}{18}-\frac{1}{18N^2}\right)t^7+\mathcal{O}(t^9)\right].
\end{aligned}
\label{eq:s0taylor}
\end{equation}
This expansion is valid until the first and second terms in \eqref{eq:s0taylor}
become comparable. This happens at the time scale
\begin{equation}
\begin{aligned}
t\sim \mathcal{O}(N^0).
\end{aligned}
\end{equation}
Summing the order-$N^0$ terms in \eqref{eq:s0taylor}, we find
that the large $N$ limit of $s(0,t)$ in the small $t$ regime
is given by the Bessel function
\begin{equation}
\begin{aligned}
s(0,t)= \pi t\Bigl[J_0(2t)^2+J_1(2t)^2\Bigr]+\mathcal{O}(N^{-2}).
\end{aligned}
\label{eq:bessel-osc}
\end{equation}
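A direct comparison (a sketch; $N$ and the sample times are illustrative) of the exact finite-$N$ slope \eqref{eq:s0exact} with the Bessel form \eqref{eq:bessel-osc}:

```python
import numpy as np
from scipy.special import eval_genlaguerre, jv

def s0_exact(t, N):
    # eq. (s0-exact)
    x = t**2 / N
    return np.pi * t * np.exp(-x) * (
        eval_genlaguerre(N - 1, 0, x) * eval_genlaguerre(N - 1, 1, x)
        - eval_genlaguerre(N, 0, x) * eval_genlaguerre(N - 2, 1, x))

N = 500
for t in (0.5, 1.5, 3.0):
    bessel = np.pi * t * (jv(0, 2 * t)**2 + jv(1, 2 * t)**2)
    print(t, s0_exact(t, N), bessel)   # differ by small 1/N^2 corrections
```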
\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{smallt.pdf}
\caption{Plot of $s(\beta=0,t)$
in the small $t$ region.
The dots are the exact values of $s(0,t)$ for $N=500$.
The blue line is the first term $s=\pi t$
in the Taylor expansion of $s(0,t)$ in \eqref{eq:s0taylor},
while the red curve represents the Bessel function in
\eqref{eq:bessel-osc}.
This figure is a closeup of the small $t$ region of Figure \ref{sfig:sbt0}.
}
\label{fig:smallt}
\end{figure}
In Figure \ref{fig:smallt}, we plot the exact $s(0,t)$ at $N=500$
in the small $t$ region. $s(0,t)$ grows linearly at very early times and then
starts to oscillate around $s=1$. The linear behavior of
$s(0,t)$ around $t=0$ comes from the first term in the
Taylor expansion \eqref{eq:s0taylor}, while the oscillating behavior
is captured by the Bessel function \eqref{eq:bessel-osc}
as discussed in \cite{hikami}.
When $t$ becomes of order $N$, the expression \eqref{eq:bessel-osc}
is no longer valid;
$s(0,t)$ is described instead by the semi-circle law \eqref{eq:st-circle} when $t\sim \mathcal{O}(N)$.
\section{Conclusion \label{sec:conclusion}}
In this paper, we have studied the slope of ramp $s(\beta,t)$,
which is related to $\partial_t g_{\text{conn}}(\beta,t)$ by \eqref{eq:sbt},
in the Gaussian matrix model.
We found the exact closed form expression of $s(\beta,t)$ in \eqref{eq:sbtexact} and
confirmed numerically that $s(\beta,t)$
obeys the semi-circle law as a function of time for both $\beta=0$ and $\beta\ne0$ cases.
Interestingly, in the plot of $s(\beta,t)$ the time direction plays the role of eigenvalue
density.
There are many interesting open questions. We list several avenues for
future research.
The relation between $g_{\text{conn}}$ and the eigenvalue density
$\rho(\mu)$ in \eqref{eq:gconn-sine} is expected to be quite universal, and
hence it is not restricted to the Gaussian matrix model.
It would be very interesting to study the slope of ramp in other models,
such as the SYK model,
and see if the eigenvalue density
manifests itself in the time direction for other models as well\footnote{See
\cite{Gaikwad:2017odv} for a study of
spectral form factor in hermitian matrix model with
a non-Gaussian potential.}.
It would also be interesting to generalize our study to the
higher point correlation function of $\Tr e^{-(\beta\pm\mathrm{i} t)H}$.
In the case of Gaussian matrix model, the exact form of the connected part of
higher point function
was recently studied in \cite{Okuyama:2018aij}.
It would be interesting to see if the multi-point correlator of
eigenvalues $\rho^{(n)}(\mu_1,\cdots,\mu_n)$
appears in the time dependence of higher point functions of $\Tr e^{-(\beta\pm\mathrm{i} t)H}$
in the large $N$ limit. To see this, we need to go beyond the ``box approximation''
used in \cite{Liu:2018hlr}.
\acknowledgments
I would like to thank Nick Hunter-Jones for correspondence
and careful reading of the manuscript.
This work was supported in part by JSPS KAKENHI Grant Number 16K05316.
\section{Introduction}
The re-emergence of Deep Learning~\cite{DeepLearningBook2016} has demonstrated significant success in difficult real-world domains such as image \cite{krizhevsky2012imagenet}, audio \cite{audio} and video processing \cite{videoCVPR}.
Deep Learning is increasingly being applied to structured domains, where the data is represented using {\em richer symbolic or graph features} to capture relational structure between entities and attributes in the domain.
Such models are able to capture increasingly complex interactions between features with deeper layers. However, the combinatorial complexity of reasoning over a large number of relations and objects remains a significant bottleneck to overcome.
While recent work in relational deep learning seeks to
address the problem of faithful modeling of relational structure
\cite{KazemiPoole18-RelNNs,SourekEtAl-15-LRNNs,KaurEtAl18-RRBM},
we focus on {\bf Column Networks} (CLNs) \cite{pham2017column} which are deep architectures composed of several (feedforward) mini-columns, each of which represents an entity in the domain. Relationships between two entities are modeled through edges between mini-columns.
The true power of column networks comes from the natural modeling of long-range inter-entity interactions with progressively deeper layers, and they have been successfully applied to collective classification tasks. However, CLNs rely on large amounts of data and incorporate little to no knowledge about the problem domain. While this may suffice for low-level applications such as image/video processing, it is a concern
in relational domains consisting of rich, semantic information.
Biasing the learners is necessary in order to allow them to inductively leap from training instances to true generalization over new instances~\cite{Mitchell80}.
While deep learning does incorporate one such bias in the form of domain knowledge (for example, through parameter tying or convolution, which exploits neighborhood information), we are motivated to develop systems that can incorporate richer and more general forms of domain knowledge. This is especially germane for deep relational models as they inherently construct and reason over richer representations.
One way in which a human can guide learning is by providing {\em rules over training examples and features} \cite{shavlik89ebnn,towell1994knowledge,fung2003knowledge,kunapuli2010online}.
Another way that has been studied extensively is expressing {\em preferences} within the preference-elicitation framework \cite{BoutilierEtAl06}. We are inspired by this form of advice as they have been successful within the context of inverse reinforcement learning \cite{KunapuliEtAl13}, imitation learning \cite{odomaaai15} and planning \cite{DasEtAl18}.
The motivation for our approach is as follows: to develop a framework that {\bf allows a human to guide deep learning} by incorporating rules and constraints that define the domain and its aspects. Incorporation of prior knowledge into deep learning has begun to receive interest recently \cite{DingEtAl18}.
However, in many such approaches, the guidance is not through a human, but rather through a pre-processing algorithm to generate guidance. Our framework is much more general, in that a domain expert provides guidance during learning. We exploit the rich representation power of relational methods to capture, represent and incorporate such rules into relational deep learning models.
We make the following contributions: (1) we propose the formalism of Knowledge-augmented Column Networks (K-CLN), (2) we present an approach to inject generalized domain knowledge in a CLN and develop the learning strategy that exploits this knowledge, and (3) we demonstrate, across two real problems, in some of which CLNs have been previously employed, the effectiveness and efficiency of injecting domain knowledge. Specifically, our results across the domains clearly show statistically superior performance with small amounts of data. As far as we are aware, this is the first work on human-guided CLNs.
\begin{figure*}[h]
\begin{minipage}[b]{0.6\textwidth}
\centering
\includegraphics[width=\columnwidth]{columnNWfull.png}
\captionof{figure}{Original Column network (diagram source: \cite{pham2017column})}
\label{fig:CLN}
\end{minipage}
\begin{minipage}[b]{0.4\textwidth}
\centering
\includegraphics[width=0.9\columnwidth]{columnNWformalGetFinalLayer2.png}
\captionof{figure}{K-CLN architecture}
\label{fig:kcln}
\end{minipage}
\end{figure*}
\section{Knowledge-augmented Column Networks}
Column Networks~\cite{pham2017column} allow for encoding interactions/relations between entities as well as the attributes of such entities in a principled manner without explicit relational feature construction or vector embedding. This is important when dealing with structured domains, especially, in the case of collective classification. This enables us to seamlessly transform a multi-relational knowledge graph into a deep architecture making them one of the robust \textit{relational} deep models. Figure \ref{fig:CLN} illustrates an example column network, w.r.t. the knowledge graph on the left. Note how each entity forms its own column and relations are captured via the sparse inter-column connectors.
Consider a graph $\mathcal{G}=(V,A)$, where $V = \{e_i\}_{i=1}^{\left|V\right|}$ is the set of vertices/entities.
$A$ is the set of arcs/edges between two entities $e_i$ and $e_j$ denoted as $r(e_i,e_j)$. Also, $\mathcal{G}$ is multi-relational, \textit{i.e.,} $r\in R$ where $R$ is the set of relation types in the domain. To obtain the equivalent Column Network $\mathcal{C}$ from $G$, let $x_i$ be the feature vector representing the attributes of an entity $e_i$ and $y_i$ its label predicted by the model\footnote{$\because i$ uniquely indexes $e_i$, we use $e_i$ and $i$ interchangeably}. $h_i^t$ denotes a hidden node w.r.t. entity $e_i$ at the hidden layer $t$ ($t=1, \ldots, T$ is the index of the hidden layers).
As mentioned earlier, the \textit{context} between two consecutive layers captures the dependency of the immediate neighborhood (based on arcs/edges/inter-column connectors). For entity $e_i$, the context w.r.t.\ $r$ and the hidden nodes are computed as,
\begin{align}
\label{eqn:hiddencontext}
c_{ir}^t = \frac{1}{\left|\mathcal{N}_r(i)\right|}\sum_{j\in \mathcal{N}_r(i)} h_j^{t-1}; \\
h_i^t = g\left(b^t + W^t h_i^{t-1} + \frac{1}{z} \sum_{r\in R} V_r^t c_{ir}^t\right)
\end{align}
where $\mathcal{N}_r(i)$ are all the neighbors of $e_i$ w.r.t. $r$ in the knowledge graph $\mathcal{G}$. Note the absence of context connectors between $h^t_2$ and $h^t_4$ (Figure \ref{fig:CLN}, \textit{right}) since there does not exist any relation between $e_2$ and $e_4$ (Figure \ref{fig:CLN}, \textit{left}).
The activation of the hidden nodes is computed as the sum of the bias, the weighted output of the previous hidden layer and the weighted contexts
where $W^t \in \mathbb{R}^{K^t \times K^{t-1}}$ and $V^t_r \in \mathbb{R}^{K^t\times K^{t-1}}$ are weight parameters and $b^t$ is a bias for some activation function $g$. $z$ is a pre-defined constant that prevents the parameterized contexts from growing too large for complex relations. Setting $z$ to the average number of neighbors of an entity is a reasonable assumption.
The final output layer is a softmax over the last hidden layer.
\begin{equation}
\label{eq:op} P(y_i = \ell|h_i^T) = \mathrm{softmax}\left( b_\ell + W_\ell h_i^T \right)
\end{equation}
where $\ell\in L$ is the label ($L$ is the set of labels) and $T$ is the index of the last hidden layer.
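To make the update equations concrete, here is a minimal numpy sketch of a single Column Network step followed by the output softmax. This is not the authors' implementation: the toy graph, the layer sizes, and the choice of $g$ as a ReLU are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, K_prev, K, n_labels = 5, 4, 4, 3

# neighbors[r][i]: neighbors of entity i under relation type r (toy graph)
neighbors = {
    0: {0: [1, 2], 1: [0], 2: [0], 3: [4], 4: [3]},
    1: {0: [3], 1: [], 2: [4], 3: [0], 4: [2]},
}
R = list(neighbors)

h_prev = rng.standard_normal((n_entities, K_prev))      # h^{t-1}
W = rng.standard_normal((K, K_prev))                    # W^t
V = {r: rng.standard_normal((K, K_prev)) for r in R}    # V_r^t
b = np.zeros(K)                                         # b^t
z = np.mean([len(neighbors[r][i]) for r in R for i in range(n_entities)])

def context(i, r):
    """c_{ir}^t: average hidden vector of i's neighbors under relation r."""
    nb = neighbors[r][i]
    return np.mean(h_prev[nb], axis=0) if nb else np.zeros(K_prev)

h = np.zeros((n_entities, K))
for i in range(n_entities):
    ctx = sum(V[r] @ context(i, r) for r in R)
    h[i] = np.maximum(0.0, b + W @ h_prev[i] + ctx / z)  # g taken as ReLU

W_out = rng.standard_normal((n_labels, K))               # W_ell
logits = h @ W_out.T
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)                # softmax output
print(probs.sum(axis=1))   # each row sums to 1
```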
While CLNs are relation-aware deep models that can faithfully represent and learn from structured data, they are not devoid of limitations, especially the challenge of effective learning from sparse samples or under systematic noise. Several approaches~\cite{jiang2018mentornet,goldberger2016training,miyato2018virtual} enable effective learning of deep models in the presence of noise; however, our problem setting differs significantly, due to:
\textbf{{[(1) Type of noise]}}: we aim to handle systematic/targeted noise \cite{odom2018human}, which occurs frequently in the real world due to cognitive bias or sample sparsity.
\textbf{{[(2) Type of error]}}: systematic noise leads to generalization errors.
\textbf{{[(3) Structured data]}}: K-CLN works in the context of structured data (entities/relations).
Structured/relational data, though crucial, is inherently sparse (most relations are false in the real world).
\noindent \fbox{
\parbox{0.97\columnwidth}{
\noindent {\bf Given}: A sparse multi-relational graph $\mathcal{G}$, attributes $x_i$ of each entity (sparse or noisy) in $\mathcal{G}$, equivalent Column-Network $\mathcal{C}$ and access to a Human-expert\\
\noindent{\bf To Do:} More effective and efficient collective classification by knowledge augmented training of $\mathcal{C}(\theta)$, where $\theta = \langle\{W^t\}_1^T, \{V_r^t\}_{r\in R; t=1}^{t=T}, \{W_{\ell}\}_{\ell\in L}\rangle$ is the set of all the network parameters of the Column Network.
}}
To this effect we propose the \textbf{K}nowledge-augmented \textbf{C}o\textbf{L}umn \textbf{N}etwork (K-CLN), which incorporates human advice into deep models in a principled manner using a gated architecture, where \textit{`advice gates'} augment/modify the trained network parameters based on the advice.
\subsection{Knowledge Representation}
Any model-specific encoding of domain knowledge, such as numeric constraints or modified loss functions, has several limitations: (1) it is counter-intuitive to humans, since they are domain experts and not machine-learning experts, and (2) the resulting framework is brittle and not generalizable. Consequently, we employ preference rules (akin to IF-THEN statements) to capture human knowledge.
\begin{definition}
\label{def:pref}
A preference is a modified Horn clause,
\begin{align}
\nonumber \mathtt{\land_{k, x} Attr_k(E_x) \land \ldots \land_{r\in R, x, y} r(E_x,E_y)} \Rightarrow \mathtt{[} & \mathtt{label(E_z,\ell_1)} \uparrow; \\ \nonumber &\mathtt{label(E_k,\ell_2) \downarrow]}
\end{align}
where $\ell_1,\ell_2 \in L$ and the $\mathtt{E_x}$ are variables over entities, $\mathtt{Attr_k(E_x)}$ are attributes of $E_x$ and $\mathtt{r}$ is a relation. $\mathtt{\uparrow}$ and $\mathtt{\downarrow}$ indicate the preferred and non-preferred labels respectively. Quantification is implicitly $\forall$ and hence dropped. We denote a set of preference rules as $\mathfrak{P}$.
\end{definition}
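As an illustration, a rule of the form $\mathtt{HasWord(E_1,w_1)} \land \mathtt{HasWord(E_1,w_2)} \land \mathtt{Cites(E_2,E_1)} \Rightarrow \mathtt{label(E_2,\ell)\uparrow}$ can be grounded against a small knowledge graph as follows (a naive enumeration sketch of our own; the actual system uses efficient subgraph matching, discussed later):

```python
def ground_preference(entities, has_word, cites, words, pref_label):
    """Naive grounding of: (AND_w HasWord(E1, w)) AND Cites(E2, E1)
    => label(E2, pref_label)^ .  Returns {entity: preferred_label}.

    has_word : dict entity -> set of words; cites : set of (citing, cited) pairs.
    """
    preferred = {}
    for e1 in entities:
        if all(w in has_word.get(e1, set()) for w in words):
            for e2 in entities:
                if (e2, e1) in cites:            # e2 cites e1
                    preferred[e2] = pref_label   # later groundings overwrite here
    return preferred
```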
\subsection{Knowledge Injection}
Given that knowledge is provided as \textit{partially-instantiated} preference rules $\mathfrak{P}$, more than one entity may satisfy a preference rule. Also, more than one preference rule may be applicable for a single entity.
The key idea is that we aim to consider the error of the trained model w.r.t. both the data and the advice. Consequently, in addition to the \textit{``data gradient''} as in original CLNs, there is an \textit{``advice gradient''}. This gradient acts as feedback to augment the learned weight parameters (both column and context weights) in the direction of the \textit{advice gradient}. It must be mentioned that not all parameters will be augmented. Only the parameters w.r.t. the entities and relations (contexts) that satisfy $\mathfrak{P}$ should be affected.
Let $\mathcal{P}$ be the set of entities and relations that satisfy the set of preference rules $\mathfrak{P}$. The hidden nodes (equation~\ref{eqn:hiddencontext}) can now be expressed as,
\begin{align}
\label{eq:modhidden} \nonumber h_i^t = g\left(b^t + W^t h_i^{t-1} \Gamma^{(W)}_i + \frac{1}{z} \sum_{r\in R} V_r^t c_{ir}^t \Gamma^{(c)}_{ir}\right)\\
\text{s.t.}~ \Gamma^{(W)}_i, \Gamma^{(c)}_{ir} =
\begin{cases}
1 & \text{if $i,r \notin \mathcal{P}$} \\
\mathcal{F}(\alpha\nabla_i^{\mathfrak{P}}) & \text{if $i,r \in \mathcal{P}$}
\end{cases}
\end{align}
where $\Gamma^{(W)}_i$ and $\Gamma^{(c)}_{ir}$ are advice-based soft gates with respect to a hidden node and its context respectively, $\mathcal{F}()$ is a gating function, $\nabla_i^{\mathfrak{P}}$ is the \textit{``advice gradient''} and $\alpha$ is a trade-off parameter explained later. The key aspect of the soft gates is that they enhance or diminish the contribution of particular edges in the column network in line with the direction of the \textit{``advice gradient''}. We choose the gating function $\mathcal{F}()$ to be an exponential $[\mathcal{F}(\alpha\nabla_i^{\mathfrak{P}}) = \exp{(\alpha\nabla_i^{\mathfrak{P}})}]$. Such multiplicative gates are natural: a positive gradient yields $\exp{(\alpha\nabla_i^{\mathfrak{P}})} > 1$, increasing the value/contribution of the respective term, while a negative gradient yields $\exp{(\alpha\nabla_i^{\mathfrak{P}})} < 1$, suppressing it. We now present the \textit{``advice gradient''} (the gradient with respect to the preferred labels).
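The multiplicative behavior of the exponential gate is easy to verify numerically (a minimal sketch with made-up values):

```python
import math

def advice_gate(advice_grad, alpha):
    # F(alpha * grad) = exp(alpha * grad): > 1 amplifies a term's contribution,
    # < 1 suppresses it, and = 1 leaves the ungated CLN computation unchanged.
    return math.exp(alpha * advice_grad)
```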
\begin{proposition}
\label{eq:grad}
Under the assumption that the loss function with respect to the advice/preferred labels is a log-likelihood of the form $\mathcal{L^\mathfrak{P}} = \log P(y_i^{(\mathfrak{P})}|h_i^T)$, the advice gradient is,
$ \nabla_i^{\mathfrak{P}} = I({y_i^{(\mathfrak{P})}}) - P(y_i)$,
where $y_i^{(\mathfrak{P})}$ is the preferred label of entity $i\in \mathcal{P}$ and $I$ is an indicator function over the preferred label. For binary classification the indicator is inconsequential, but for multi-class scenarios it is essential ($I = 1$ for the preferred label $\ell$ and $I=0$ for $L\setminus \ell$).
\end{proposition}
An entity can satisfy multiple advice rules, in which case we take the most preferred label.
In case of conflicting advice (i.e., different labels are equally advised), we set the advice label to the label given by the data, $y_i^{(\mathfrak{P})}=y_i$.
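The advice gradient, together with the tie-breaking rule for conflicting advice, can be sketched as follows (our own simplified implementation, not the original code):

```python
import numpy as np

def advice_gradient(pref_labels, probs, y_data):
    """Advice gradient I(y^P) - P(y) for one entity in P.

    pref_labels : preferred class indices from all satisfied rules (may conflict)
    probs       : (L,) predicted output distribution
    y_data      : label given by the data, used as fallback on conflicts
    """
    L = probs.shape[0]
    counts = np.bincount(pref_labels, minlength=L)
    top = np.flatnonzero(counts == counts.max())
    y_pref = y_data if len(top) > 1 else int(top[0])  # conflict -> data label
    I = np.zeros(L)
    I[y_pref] = 1.0                                   # indicator on preferred label
    return I - probs
```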
\begin{proposition}
\label{prop:balance}
Given that the loss function $\mathcal{H}_i$ of original CLN is cross-entropy (binary or sparse-categorical for the binary and multi-class prediction cases respectively) and the objective with respect to advice is log-likelihood, the functional gradient of the modified objective for K-CLN is,
\begin{align}
\label{eqn:modgrad}
\nonumber \nabla(\mathcal{H}'_i) = & (1-\alpha)\left(I_i - P(y_i|h^T)\right) + \alpha (I_i^{\mathfrak{P}}-P(y_i^{\mathfrak{P}}|h^T))\\
= & (1-\alpha)\nabla_i + \alpha \nabla_i^{\mathfrak{P}}
\end{align}
where $0\leq\alpha\leq 1$ is the trade-off parameter between the effect of data and effect of advice, $I_i$ and $I_i^{\mathfrak{P}}$ are the indicator functions on the label w.r.t. the data and the advice respectively and $\nabla_i$ and $\nabla_i^{\mathfrak{P}}$ are the gradients, similarly, w.r.t. data and advice respectively.
\end{proposition}
Hence, it follows from Proposition~\ref{prop:balance} that the data and the advice balance the training of the K-CLN network parameters $\theta^\mathfrak{P}$ via the trade-off hyperparameter $\alpha$. When the data is noisy (or sparse, with negligible examples for a region of the parameter space), \textbf{the advice (if correct) induces a bias on the output distribution towards the correct label}. Even if the advice is incorrect, the network still tries to learn the correct distribution to some extent from the data (if not noisy). The relative contribution of the data versus the advice primarily depends on $\alpha$. \textbf{If both the data and the human advice are sub-optimal (noisy), the correct label distribution is not learnable at all}. We exclude the formal proofs due to space limitations.
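The balancing act of Proposition~\ref{prop:balance} can be checked numerically: with uninformative predictions, a wrong data label, and correct advice, the blended gradient still points toward the advised class once $\alpha$ is large enough (a sketch with illustrative values only):

```python
import numpy as np

def kcln_gradient(probs, y_data, y_advice, alpha):
    """(1 - alpha) * (I_data - P) + alpha * (I_advice - P)."""
    L = probs.shape[0]
    I_data, I_adv = np.zeros(L), np.zeros(L)
    I_data[y_data] = 1.0
    I_adv[y_advice] = 1.0
    return (1 - alpha) * (I_data - probs) + alpha * (I_adv - probs)
```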
\setlength{\textfloatsep}{4pt}
\begin{algorithm}
\begin{algorithmic}[1]
\REQUIRE Knowledge graph $\mathcal{G}$, Column network $\mathcal{C}(\theta)$, Advice $\mathfrak{P}$, Trade-off $\alpha$
\STATE K-CLN $\mathcal{C}^{\mathfrak{P}}(\theta^{\mathfrak{P}}) \gets \mathcal{C}(\theta)$
\STATE Initialize parameters of K-CLN $\theta^{\mathfrak{P}} \gets \{0\}$
\STATE $\mathcal{M}^\mathcal{P} = \langle\mathcal{M}^W,\mathcal{M}^c,\mathcal{M}^{label}\rangle \gets$ \textsc{CreateMask}({$\mathcal{G},\mathfrak{P}$})
\STATE Initial gradients @ epoch 0 $\forall i ~ \mathbf{\nabla}_{i,0}^{\mathfrak{P}} = 0$; $i \in \mathcal{P}$
\FOR{epochs k=1 to convergence}
\STATE Get advice gradients $\nabla_{i,(k-1)}^{\mathfrak{P}}$ w.r.t. prev. epoch $k-1$
\STATE Gates $\Gamma^{\mathfrak{P}}_i, \Gamma^{\mathfrak{P}}_{i,r} \gets \exp{(\alpha \nabla_i^{\mathfrak{P}}\times \mathcal{M}_i^\mathcal{P})}$
\STATE Train $\mathcal{C}^{\mathfrak{P}}$ using Equation~\ref{eq:modhidden}; Update $\theta^{\mathfrak{P}}$
\STATE Compute $\forall i ~ P(y_i)$ from $\mathcal{C}^{\mathfrak{P}}$
{for current epoch $k$}
\STATE Store $\forall i ~ \nabla_{i,k}^{\mathfrak{P}} \gets I({y_i^{(\mathfrak{P})}}) - P(y_i)$
{using $\mathcal{M}^{label}$}
\ENDFOR
\STATE \textbf{return} {K-CLN $C^{\mathfrak{P}}$}
\end{algorithmic}
\caption{\underline{\textbf{K}}nowledge-augmented \underline{\textbf{C}}o\underline{\textbf{L}}umn \underline{\textbf{N}}etworks}
\label{algo:kcln}
\end{algorithm}
\subsection{The Algorithm}
Algorithm~\ref{algo:kcln} outlines the key steps involved in our approach. It trains a Column Network using both the data (the knowledge graph $\mathcal{G}$) and the human advice (set of preference rules $\mathfrak{P}$). It returns a K-CLN $\mathcal{C}^{\mathfrak{P}}$ where $\theta^{\mathfrak{P}}$ are the network parameters. As described earlier, the network parameters of K-CLN (same as CLN) are manipulated (stored and updated) via tensor algebra with appropriate indexing for entities and relations. Also recall that our gating functions are piece-wise/non-smooth and apply only to the subspace of entities, features and relations where the preference rules are satisfied. Thus, as a pre-processing step, we create tensor masks that compactly encode such a subspace with a call to the procedure \textsc{CreateMask()}, explained later.
At the end of every epoch the output probabilities and the gradients are computed and stored in a shared data structure [\textbf{line: 10}] for computing advice gates in the next epoch. The rest of the training strategy is similar to the original CLN, except for the modified hidden units (Equation~\ref{eq:modhidden}) [\textbf{line: 8}] and the data--advice trade-off parameter $\alpha$.
Procedure \textsc{CreateMask()} constructs the advice tensor mask(s) over the space of entities, features and relations/contexts, based on the advice rules, that are required to compute the gates.
The main components are -
\textbf{(1)} The entity mask $\mathcal{M}^W$ (a $\left|entities\right| \times \left|features\right|$ tensor) that indicates the entity and feature indexes affected by the preferences;
\textbf{(2)} The context mask $\mathcal{M}^c$ ($\left|entities\right| \times \left|entities\right|$) which indicates the affected contexts/relations;
\textbf{(3)} The label mask $\mathcal{M}^{label}$ stores the preferred label of the affected entities, in one-hot encoding.
Advice mask computation requires efficient satisfiability checking for each preference rule against the knowledge graph. We solve this via efficient subgraph matching proposed by Das et al.~\yrcite{DasAAAI19}. The masks are binary with $1$ encoding true and $0$ encoding false.
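A minimal sketch of what \textsc{CreateMask()} produces (our own simplification; the actual system builds these masks via tensor indexing over the groundings returned by subgraph matching):

```python
import numpy as np

def create_masks(n_entities, n_features, n_labels, groundings):
    """Build the three binary advice masks from grounded preference rules.

    groundings : list of (entity, feature_idxs, context_pairs, preferred_label),
                 one tuple per satisfying assignment of a rule.
    """
    M_W = np.zeros((n_entities, n_features), dtype=np.int8)    # entity mask
    M_c = np.zeros((n_entities, n_entities), dtype=np.int8)    # context mask
    M_label = np.zeros((n_entities, n_labels), dtype=np.int8)  # label mask
    for e, feats, pairs, lbl in groundings:
        M_W[e, feats] = 1
        for i, j in pairs:
            M_c[i, j] = M_c[j, i] = 1
        M_label[e] = 0
        M_label[e, lbl] = 1                                    # one-hot preferred label
    return M_W, M_c, M_label
```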
\begin{figure*}[t]
\begin{minipage}{\textwidth}
\centering
\subfigure
{
\includegraphics[width=0.40\textwidth]{OLDPLOT_pubmicro.png}
\label{fig:microPub}
}
\subfigure
{
\includegraphics[width=0.40\textwidth]{OLDPLOT_microPubSamples1.png}
\label{fig:microPubSam}
}
\caption{\textbf{[Pubmed Diabetes publication prediction (multi-class)]} Learning curves - Micro-F1, {\em (Left)} ~ w.r.t. training epochs at 24\% (of total) sample, {\em (Right)} ~ w.r.t. varying sample sizes [best viewed in color].}
\label{fig:PubMed}
\end{minipage}
\begin{minipage}{\textwidth}
\centering
\subfigure
{
\includegraphics[width=0.40\textwidth]{OLDPLOT_plot5DebateAUC.png}
\label{fig:debateauc}}
\subfigure
{
\centering
\includegraphics[width=0.40\textwidth]{OLDPLOT_debateAUCVary.png}
\label{fig:debateaucvary}}
\caption{\textbf{[Internet Social debate stance prediction (binary class)]} Learning curves - AUC-PR, {\em (Left)} ~ w.r.t. training epochs at 24\% (of total) sample, {\em (Right)} ~ w.r.t. varying sample sizes [best viewed in color].}
\label{fig:debate}
\end{minipage}
\end{figure*}
\begin{figure*}
\begin{minipage}{\textwidth}
\centering
\subfigure[F1 (varying sample \& $\alpha$)]{
\includegraphics[width = 0.40\columnwidth]{OLDPLOT_debateF1VaryAlpha.png}
}
\subfigure[AUC-PR (varying sample \& $\alpha$)]{
\includegraphics[width=0.40\columnwidth]{OLDPLOT_debateAUCVaryAlpha.png}
}
\caption{Performance, F1 and AUC-PR, of K-CLN on \textbf{Internet Social Debates data set} across different sample sizes, with varying \textbf{\textit{trade-off parameter} $\alpha$} (on the advice gradient). Note that the advice here is incorrect/sub-optimal. $\alpha = 0$ has the same performance as no-advice (Vanilla CLN), hence not plotted.}
\label{fig:alphas}
\end{minipage}
\end{figure*}
\section{Experiments}
We investigate the following questions as part of our experiments:
[\textbf{Q1}] Can K-CLNs learn effectively with noisy sparse samples, i.e., performance?
[\textbf{Q2}] Can K-CLNs learn efficiently with noisy sparse samples, i.e., speed of learning?
[\textbf{Q3}] How does the quality of advice affect the performance of K-CLN, i.e., reliance on robust advice?
We compare against the original Column Networks architecture with no advice\footnote{Vanilla CLN indicates original architecture \cite{pham2017column}} as a baseline. We show how advice/knowledge can guide model learning towards better predictive performance and efficiency, in the context of collective classification using Column Networks.
\subsection{Experimental Setup}
\noindent \textbf{System:}
K-CLN has been developed by extending the original CLN architecture, which uses \textit{Keras} as the functional deep learning API with a \textit{Theano} backend for tensor manipulation. We extend this system to include: (1) advice gradient feedback at the end of every epoch, (2) modified hidden layer computations and (3) a pre-processing wrapper to parse the advice/preference rules and create the appropriate tensor masks. Since it is not straightforward to access final-layer output probabilities from inside a hidden layer in Keras, we use \textit{Callbacks} to write/update the predicted probabilities to a shared data structure at the end of every \textit{epoch}. The rest of the architecture follows the original CLN.
The \textit{advice masks} encode $\mathcal{P}$, \textit{i.e.}, the set of entities and contexts where the gates are applicable.
\noindent\textbf{Domains:} We evaluate our approach on {\bf two relational} domains -- \textit{Pubmed Diabetes}, a multi-class classification problem, and \textit{Internet Social Debates}, a binary classification problem. \textit{Pubmed Diabetes}\footnote{\url{https://linqs.soe.ucsc.edu/data}} is a citation network for predicting whether a peer-reviewed article is about \textit{Diabetes Type 1, Type 2 or none}, using textual features (TF-IDF vectors) from $19717$ pubmed abstracts. It comprises articles, considered as entities, with $500$ bag-of-words textual features (TF-IDF weighted word vectors), and $44,338$ citation relationships among each other.
\textit{Internet Social Debates}\footnote{\url{http://nldslab.soe.ucsc.edu/iac/v2/}} is a data set for predicting stance (`for'/`against') about a debate topic from online posts on social debates. It contains $6662$ posts (entities) characterized by TF-IDF vectors, extracted from the text and header, and $\sim 25000$ relations of 2 types, {`sameAuthor'} and {`sameThread'}.
\noindent\textbf{Metrics:} Following \cite{pham2017column}, we report the micro-F1 score, \textit{which aggregates the contributions of all classes to compute the average F1 score}, for the multi-class problem and AUC-PR for the binary one. We use $10$ hidden layers and $40$ hidden units per column in each layer. All results are averaged over 5 runs and our settings are consistent with the original CLN.
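For reference, micro-F1 pools true/false positives and false negatives over all classes before computing F1 (for single-label multi-class data it reduces to accuracy); a minimal sketch:

```python
import numpy as np

def micro_f1(y_true, y_pred, n_classes):
    """Micro-averaged F1: aggregate TP/FP/FN over all classes, then compute F1."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = fp = fn = 0
    for c in range(n_classes):
        tp += np.sum((y_pred == c) & (y_true == c))
        fp += np.sum((y_pred == c) & (y_true != c))
        fn += np.sum((y_pred != c) & (y_true == c))
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```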
\noindent\textbf{Human Advice:} K-CLN can handle arbitrarily complex advice (encoded as preference rules). However, even with some relatively simple rules K-CLN is effective in sparse samples. \textit{For instance, in Pubmed, the longest preference rule used is, $\mathtt{HasWord(e_1,`fat')}$ $\land$ $\mathtt{HasWord(e_1,`obese')}$ $\land$ $\mathtt{Cites(e_2,e_1)}$ $\Rightarrow$ $\mathtt{label(e_2, type_2)}\uparrow$}. This simply indicates that an article citing another one discussing obesity is likely about Type 2 diabetes. Expert knowledge from real physicians can thus prove to be even more effective.
Note that sub-optimal advice may point the \textit{Advice Gradient} in the wrong direction. However, the data balances the effect of advice during training \cite{patrini2017making}; our soft gates do not alter the loss but instead promote/demote the contribution of nodes/contexts.
\subsection{Experimental Results}
Our goal is to demonstrate the efficiency and effectiveness of K-CLNs with a smaller set of training examples. Hence, we present the aforementioned metrics with varying sample size and with varying epochs, and compare our model against \textit{Vanilla CLN}. We split the data sets into a training set and a hold-out test set with a 60\%-40\% ratio. For the varying-epochs experiments we train on only 40\% of our training set (\textit{i.e.}, 24\% of the complete data) and test on the hold-out test set. Figures \ref{fig:PubMed} {\em (Left)}~ and \ref{fig:debate} {\em (Left)}~ illustrate the learning curves with \textit{varying epochs} for the \textit{PubMed diabetes} and \textit{internet social debate} data sets respectively.
K-CLN converges {\bf significantly faster} (fewer epochs), at times with better predictive performance at convergence,
which shows that K-CLNs learn more \textit{efficiently} with noisy sparse samples, thereby answering \textbf{(Q2)} affirmatively.
The effectiveness of K-CLN is illustrated by its performance with respect to varying sample sizes of the training set, especially in the low-sample ranges. The intuition is, \textit{domain knowledge should help guide the model to learn better when the amount of training data available is small}. K-CLN is trained on gradually varying sample sizes from 5\% of the training data (3\% of the complete data) up to 80\% of the training data (48\% of the complete data) and tested on the hold-out test set. Figures \ref{fig:PubMed} {\em (Right)}~ and \ref{fig:debate} {\em (Right)}~ present the performance with varying sample sizes for \textit{PubMed diabetes} and \textit{internet social debate} respectively.
For \textit{internet social debate stance prediction}, K-CLN outperforms Vanilla CLN at all sample sizes lower than, approximately, $35\%$. In the case of \textit{PubMed}, K-CLN outperforms Vanilla CLN for all sample sizes we experimented with, thus answering \textbf{(Q1)} affirmatively: K-CLNs learn \textit{effectively} with noisy sparse samples.
An obvious question that will arise is -- {\em how robust is our learning system to that of noisy/incorrect advice?} Conversely, {\em how does the choice of $\alpha$ affect the quality of the learned model?}
To answer these questions specifically, we performed an additional experiment on the \textbf{Internet Social Debates} domain by augmenting the learner with incorrect advice. This incorrect advice is created by changing the preferred labels of the advice rules to incorrect values (based on our understanding). Also, recall that the contribution of advice depends on the trade-off parameter $\alpha$, which controls the robustness of K-CLN to advice quality. Consequently, we experimented with different values of $\alpha$ ($0.2,0.4,\ldots,1.0$) across varying sample sizes.
Figure~\ref{fig:alphas} shows how, with higher $\alpha$ values, the performance deteriorates due to the effect of noisy advice. $\alpha=0$ is not plotted since its performance is the same as no-advice/Vanilla CLN. Note that with reasonably low values of $\alpha = 0.2, 0.4$, the performance does not deteriorate much and is, in fact, better for some samples. Thus, with reasonably low values of $\alpha$, K-CLN is robust to the quality of advice \textbf{(Q3)}. We picked one domain to present these robustness results but have observed similar behavior in both domains. These experiments empirically support our theoretical analysis (Proposition~\ref{prop:balance}). We found that when $\alpha \leq 0.5$, K-CLN performs well even with noisy advice. In the earlier experiments, where we use potentially good advice, we report the results with $\alpha=1$: it is reasonable to assign a higher weight to the advice, and to the contribution of the entities and relations/contexts affected by it, when the advice is noise-free. Also, note that the drop in performance towards very low sample sizes (in Figure~\ref{fig:alphas}) highlights how challenging learning is in the noisy-data and noisy-advice scenario. This aligns with our general understanding of most human-in-the-loop/advice-based approaches in AI. Trading off data and advice via a weighted combination of both is a well-studied solution in related literature \cite{OdomNatarajan18} and, hence, we adopt the same in our context. Tracking the expertise of humans to infer advice quality is an interesting future research direction.
\section{Conclusion}
We considered the problem of providing guidance for CLNs. Specifically, inspired by treating the domain experts as true domain experts and not CLN experts, we developed a formulation based on {\em preferences}. This formulation allowed for natural specification of guidance. We derived the gradients based on advice and outlined the integration with the original CLN formulation. Our initial evaluation across a couple of domains clearly demonstrate the effectiveness and efficiency of the approach, specifically in knowledge-rich, data-scarce problems. We are also experimenting on a few more domains and the results will be included in the full version of the paper. Exploring other types of advice including feature importance, qualitative constraints, privileged information, etc. is a potential future direction. Scaling our approach to web-scale data is a natural extension. Finally, extending the idea to other deep models and applications to more real domains remains an interesting direction for future research.
\paragraph{Acknowledgements: }
MD, GK \& SN gratefully acknowledge the support of CwC Program Contract W911NF-15-1-0461 with the US Defense Advanced Research Projects Agency (DARPA) and the Army Research Office (ARO). SN also acknowledges the NSF grant IIS-1836565 and AFOSR award FA9550-18-1-0462. DSD acknowledges the National Institute of Health (NIH) grant no. R01 GM097628. Any opinions, findings and conclusion or recommendations are those of the authors and do not necessarily reflect the view of the DARPA, ARO, AFOSR or the US government.
\bibliographystyle{icml2019}
\section{Introduction}
\label{sec:intro}
Topological entanglement entropy (TEE), first introduced in condensed matter physics \cite{KitaevPreskill,LevinWen}, has been widely used to characterize topological phases. It is the constant subleading term (relative to the area-law term) in the entanglement entropy, dependent only on universal data of the corresponding topological phase.
At low energy, a large class of topological phases can be effectively described using Chern-Simons gauge theory with a compact, simple, simply-connected gauge group. When this is the case, TEE can be found using surgery \cite{Fradkin} and replica trick \cite{Calabrese} by computing the partition function on certain $3$-manifolds. For compact gauge groups, TEE is expressed \cite{Fradkin} in terms of modular $S$ matrices of Wess-Zumino-Witten rational conformal field theory (RCFT) on a $2d$ compact Riemann surface, following the CS/WZW correspondence first described in geometric quantization by \cite{Witten1}.
In three-dimensional spacetime, gravity can be classically described by Chern-Simons gauge theory with a non-compact, possibly complex gauge group \cite{Witten1988}. Specifically, in Euclidean picture with a negative cosmological constant $\Lambda=-1/l^2<0$, in the first-order formulation of general relativity, the spin connection $\omega$ combines with the ``vierbein'' $e$ to make the holomorphic Chern-Simons gauge field $\omega+ e/l$ and anti-holomorphic gauge field $\omega- e/l$ of gauge group $SL(2,\mathbb{C})$, where $l$ is the AdS$_3$ curvature radius. The following questions thus arise naturally:
Is there a similar notion of TEE in 3d gravity? If so, can one compute the TEE for 3d gravity using surgery? Is the TEE related to the modular $S$ matrices of a CFT living on the conformal boundary? In Ref.~\cite{Verlinde}, the authors proposed that the Bekenstein-Hawking entropy of a BTZ black hole \cite{BTZ,BHTZ} in AdS$_3$ can be interpreted as TEE. The argument is supported by calculations in the dual CFT.
Unfortunately, it is still not clear what this entanglement entropy means, i.e., which two subregions or components are entangled with each other.
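Concretely, the first-order rewriting referred to above takes the schematic form (normalization conventions and factors of $i$ vary across references; we quote the standard structure):

```latex
% 3d Euclidean gravity with Lambda = -1/l^2 as a difference of two CS actions
I_{\rm grav} = I_{\rm CS}[A] - I_{\rm CS}[\bar{A}],
\qquad A = \omega + \frac{e}{l}, \qquad \bar{A} = \omega - \frac{e}{l},

I_{\rm CS}[A] = \frac{k}{4\pi} \int_M \mathrm{Tr}\!\left( A \wedge dA
  + \frac{2}{3}\, A \wedge A \wedge A \right),
\qquad k = \frac{l}{4G}.
```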
We are motivated by these questions to calculate TEE via 3d surgery in an Euclidean spacetime that is asymptotically AdS$_3$. In the case of thermal AdS$_3$, the constant time slice is a disk. We first bipartite this disk into two disks as shown in Fig. \ref{fig:tads-partition}, where $a$ denotes the ratio between the interval length on the boundary circle that is contained in subregion $A$ and the circumference of the full circle. After applying the replica trick, the glued manifold is a genus-$n$ handlebody. Using one-loop partition function on this handlebody \cite{Witten0706,YinXi,BinChen,MaloneyWitten,Giombi,DongXi}, we derive an explicit expression for TEE, which vanishes in the low-temperature limit.
\begin{figure}[htbp]
\label{fig:a}
\centering
\begin{tikzpicture}
\draw[thick] (0,1) arc (90:-180:1 and 1);
\draw[thick] (-1,0) arc (-90:0:1 and 1);
\draw[thick,blue] (-1,0) arc (180:90:1 and 1);
\node at (-1,0.75) {\textcolor{blue}{a}};
\node at (1.2,-0.8) {1-a};
\node at (-0.5,0.5) {\textcolor{blue}{A}};
\node at (0.5,-0.2) {B};
\end{tikzpicture}
\caption{Bipartition of constant time slice of thermal AdS$_3$~.}
\label{fig:tads-partition}
\end{figure}
Then we consider two disjoint thermal AdS$_3$~ and calculate the TEE between them, which turns out to be the thermal entropy of one thermal AdS$_3$~. However, this does not imply any nontrivial entanglement between the two solid tori, and we support this argument by computing the mutual information between them, which vanishes.
We also compute TEEs in an eternal BTZ background. In the Euclidean picture there is only one asymptotic region for the eternal BTZ black hole \cite{KrasnovEuclidean}, corresponding to the gluing of the two asymptotic regions of the two single-sided black holes in the Lorentzian picture. We show that the TEE between the two single-sided black holes equals the Bekenstein-Hawking entropy of one single-sided black hole. The mutual information between them does not vanish and again equals the Bekenstein-Hawking entropy, which justifies interpreting the result as support for the ER=EPR conjecture \cite{MaldacenaTFD,Raamsdonk,ER=EPR}.
Focusing on one single-sided black hole, we then derive an Entangling-Thermal relation, stating
\begin{equation}
\lim_{\text{Area}(\bar{A})\rightarrow 0} [S(A)-S(\bar{A})] = S_{BTZ}^{\text{thermal}},
\end{equation}
where $A$ and $\bar{A}$ denote the two complementary subregions. Quantities on both sides of this equation are intrinsically three-dimensional. The underlying physical reason for this relation is that subregion $A$ wraps the non-contractible loop of the constant time slice, while its complement $\bar{A}$ does not. The difference between $S_A$ and $S_{\bar{A}}$ thus detects the effect of the non-contractible loop, which is exactly the outer horizon of the BTZ black hole. This relation is similar to but different from the thermal entropy relation \cite{Azeyanagi} derived from the Ryu-Takayanagi formula \cite{RT}, in that our result is topological and does not depend on geometrical details.
The full modular-invariant genus one partition function of three-dimensional pure gravity is a summation of classical geometries or gravitational instantons, which include both thermal AdS$_3$~ and the BTZ black hole.
At high temperatures, the full partition function is dominated by the $SL(2,\mathbb{Z})$ family of black hole solutions, whereas at low temperatures it is dominated by thermal AdS$_3$. We compute the TEE for the full partition function with a bipartition between the two single-sided black holes in the high-temperature regime and again observe ER=EPR explicitly. When the Chern-Simons levels are $k_R=k_L=l/16G=1$, after defining the quantum dimension data on the boundary Monster CFT, we see from the TEE calculation that the black hole geometries correspond to a topological phase in the bulk which contains a maximally-entangled superposition of $194$ types of ``anyons'', labeled by the irreducible representations of the Monster group. This state, dubbed the \emph{Moonshine double state}, has a similar property to the thermofield double state on the asymptotic boundary, in that the TEE between the anyon pairs is equal to the Bekenstein-Hawking entropy.
The rest of the paper is organized as follows. In section \ref{sec:tool} we give a minimal introduction to the knowledge that facilitate the TEE calculation, including replica trick and Schottky uniformization. In section \ref{sec:tads} we show the calculation of TEE in thermal AdS$_3$, which amounts to the computation of the partition function on a genus $n$-handlebody. We also compute the TEE between two disjoint thermal AdS$_3$~ and show their mutual information vanishes. Section \ref{sec:btz} illustrates the TEE calculation for BTZ black holes for several different bipartitions. We discuss the relations with ER=EPR and show that mutual information between the two single-sided black holes is equal to the Bekenstein-Hawking entropy. We further propose an Entangling-Thermal relation for single-sided black holes. Then in section \ref{sec:whole} we demonstrate the TEE of the full modular-invariant partition function after summing over geometries and present the quantum dimension interpretation. The system is mapped to a superposition of $194$ types of anyons. Comments on the implication of TEE on the Hawking-Page transition and the outlook can be found in section \ref{sec:summary}.
\section{Review of Relevant Components}
\label{sec:tool}
In this section we will introduce basic concepts that are essential to understanding the rest of the paper.
\subsection{``Surgery'' and Replica Trick}
Surgery was originally invented by Milnor \cite{Milnor} to study and classify manifolds of dimension greater than three.
In this work we use this concept in a broader sense, i.e. as a collection of techniques used to produce a new finite-dimensional manifold from an existing one in a controlled way. Specifically, it refers to cutting out parts of a manifold and replacing it by a part of another manifold, matching up along the cut.
As a warm-up, we review the use of surgery in the entanglement entropy calculation of a 2d CFT for a single interval at finite temperature $T=1/\beta$ \cite{Calabrese}.
The interval $A$ lies on an infinitely long line whose thermal density matrix is denoted by $\rho$. The reduced density matrix of subregion $A$ is then defined as $\rho_A=\text{tr}_{\bar{A}} \rho$, where the trace $\text{tr}_{\bar{A}}$ over the complement of $A$ only glues together points that are not in $A$, while an open cut is left along $A$. The entanglement entropy between $A$ and its complement $\bar{A}$ is then $S_A=-\text{tr} \rho_A\ln\rho_A$. The matrix logarithm is generally hard to compute, so one instead applies the replica trick to obtain an equivalent expression, with a normalization chosen so that the ratio below equals $1$ when analytically continued to $n=1$:
\begin{equation}
S(A)=-\frac{d}{dn}\left(\frac{\text{tr} (\rho_A^n)}{(\text{tr}\rho_A)^n}\right)\Bigg|_{n=1}.
\end{equation}
Now the problem reduces to the computation of $\text{tr} (\rho_A^n)$. Using surgery, one can interpret it as the path integral on the glued 2-manifold \cite{CallanWilczek}. An example for $n=3$ is shown in Fig. \ref{fig:replica}, where the left panel sketches $\rho_A^3$, and the right panel is $\text{tr}(\rho_A^3)$. In this case with a finite temperature, $S_A$ is not necessarily equal to $S_{\bar{A}}$.
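As a quick sanity check of the replica formula (ours, not part of the original derivation), one can verify on a toy two-qubit system that the derivative of the normalized replica ratio at $n=1$ reproduces the von Neumann entropy of the reduced density matrix:

```python
import numpy as np

# Toy check of the replica trick: for a random two-qubit pure state,
# -d/dn [tr(rho_A^n)/(tr rho_A)^n] at n=1 equals -tr(rho_A ln rho_A).
rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_A = np.trace(rho, axis1=1, axis2=3)        # partial trace over B

evals = np.linalg.eigvalsh(rho_A)
evals = evals[evals > 1e-12]
S_vN = float(-np.sum(evals * np.log(evals)))   # von Neumann entropy

# Replica side: tr(rho_A^n) = sum_i lambda_i^n; differentiate numerically.
def renyi_ratio(n):
    return np.sum(evals**n) / np.sum(evals)**n

eps = 1e-6
S_replica = -(renyi_ratio(1 + eps) - renyi_ratio(1 - eps)) / (2 * eps)

assert abs(S_replica - S_vN) < 1e-5
print(S_vN, S_replica)
```

The central finite difference in $n$ stands in for the analytic continuation; for a diagonalizable $\rho_A$ the two sides agree to numerical precision.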
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.4]
\draw[thick] (-0.75,0) arc (180:10:0.75 and 3);
\draw[thick] (-0.75,0) arc (-180:-10:0.75 and 3);
\draw[thick] (0.7,0.5)--(3.5,0);
\draw[thick] (0.7,-0.5)--(3.5,0);
\draw[thick] (0,3)--(10,3);
\draw[thick] (0,-3)--(10.75,-3);
\draw[thick] (10,1) arc (-90:90:0.25 and 1);
\draw[dashed,thick] (10,3) arc (90:270:0.25 and 1);
\draw[thick] (10,1)--(8,1);
\draw[thick] (10,1)--(11.5,1);
\draw[thick] (11.5,-1) arc (-90:90:0.25 and 1);
\draw[dashed,thick] (11.5,1) arc (90:270:0.25 and 1);
\draw[thick] (11.5,-1)--(8,-1);
\draw[thick] (10.75,-1)--(11.5,-1);
\draw[thick] (10.75,-3) arc (-90:90:0.25 and 1);
\draw[dashed,thick] (10.75,-1) arc (90:270:0.25 and 1);
\node[left] at (-1,0) {$3\beta$};
\node at (10.7,2) {$\beta$};
\node at (12.2,0) {$\beta$};
\node at (11.5,-2) {$\beta$};
\node at (2,-1) {$A$};
\node at (7,-1) {$B$};
\end{tikzpicture}~~~~
\begin{tikzpicture}[scale=0.4]
\draw[thick] (0,0) circle [x radius=0.75, y radius=3];
\draw[thick] (0,3)--(10,3);
\draw[thick] (0,-3)--(10.75,-3);
\draw[thick] (10,1) arc (-90:90:0.25 and 1);
\draw[dashed,thick] (10,3) arc (90:270:0.25 and 1);
\draw[thick] (10,1)--(8,1);
\draw[thick] (10,1)--(11.5,1);
\draw[thick] (11.5,-1) arc (-90:90:0.25 and 1);
\draw[dashed,thick] (11.5,1) arc (90:270:0.25 and 1);
\draw[thick] (11.5,-1)--(8,-1);
\draw[thick] (10.75,-1)--(11.5,-1);
\draw[thick] (10.75,-3) arc (-90:90:0.25 and 1);
\draw[dashed,thick] (10.75,-1) arc (90:270:0.25 and 1);
\node[left] at (-1,0) {$3\beta$};
\node at (10.7,2) {$\beta$};
\node at (12.2,0) {$\beta$};
\node at (11.5,-2) {$\beta$};
\node at (2,0) {$A$};
\node at (7,0) {$B$};
\end{tikzpicture}
\caption{Left: Sketch of $\rho_A^3$. Right: Sketch of $\text{tr}\rho_A^3$.}
\label{fig:replica}
\end{figure}
This operation can be extended to $3$-manifolds in a straightforward way, as shown in Ref. \cite{Fradkin}. The authors calculated examples where the constant time slices are closed surfaces, restricting to ground states so that the $\beta$ cycle is infinitely long.
The constant time slices that we are interested in for Euclidean AdS$_3$~ are all open surfaces with asymptotic conformal boundaries, and the quantum states do not necessarily belong to the ground state Hilbert subspace. Details will be presented in sections \ref{sec:tads} and \ref{sec:btz}.
\subsection{Conformal Boundary and $\mathbb{H}^3/\Gamma$}
We now introduce the hyperbolic three-space $\mathbb{H}^3$, which describes Euclidean AdS$_3$. It is the 3d analogue of the hyperbolic plane, with the standard Poincar\'e-like metric
\begin{equation}\label{eq:metric}
ds^2=\frac{dy^2+dzd\bar{z}}{y^2},
\end{equation}
where $y>0$ and $z$ is a complex coordinate.
Any $3$-manifold $M$ that admits a complete metric of constant negative curvature and has a genus-$n$ Riemann surface $\Sigma_n$ as its conformal boundary can be constructed using Schottky uniformization. The idea is to represent the $3$-manifold $M$ as the quotient of $\mathbb{H}^3$ by a Kleinian group $\Gamma$ \cite{Thurston}, which is a discrete subgroup of $SL(2,\mathbb{C})$ as well as a discrete group of conformal automorphisms of $\Sigma_n$.
The conformal boundary of $\mathbb{H}^3$ is a sphere at infinity, $S^2_{\infty}$, on which $\Gamma$ acts discretely, except for a \emph{limit set} of accumulation points of $\Gamma$ denoted by $\Lambda(\Gamma)$. The complement $\Omega(\Gamma)=S^2_{\infty}-\Lambda(\Gamma)$ is called the domain of discontinuity. Then the $3$-manifold $M$ has boundary $\Omega(\Gamma)/\Gamma$, a well-defined quotient.
In particular, when $M$ is a handlebody, $\Gamma$ reduces to a Schottky group, freely generated by finitely many loxodromic elements $\gamma_1,\dots,\gamma_n\in SL(2,\mathbb{C})$ that act on $S^2_{\infty}$ as fractional linear transformations. These generators carry $3n-3$ independent complex parameters, which serve as coordinates on the Schottky space, a covering space of the complex moduli space of the Riemann surface.
Each $\gamma\in\Gamma$ is completely characterized by its fixed points and its multiplier $q_\gamma$, defined by conjugating $\gamma$ in $SL(2,\mathbb{C})$ to the unique dilation $z\mapsto q_\gamma z$ with $|q_{\gamma}|<1$.
More explicitly, denoting $\eta,\,\xi$ as the fixed points of $\gamma$, one has
\begin{equation}
\frac{\gamma(z)-\eta}{\gamma(z)-\xi}=q_{\gamma}\frac{z-\eta}{z-\xi}.
\end{equation}
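The multiplier relation can be illustrated numerically (our sketch; the conjugating matrix and test point are arbitrary choices): conjugating the dilation $z\mapsto qz$ by a random $SL(2,\mathbb{C})$ element produces a loxodromic $\gamma$ whose fixed points $\eta,\xi$ are the images of $0$ and $\infty$:

```python
import numpy as np

# Conjugate the dilation z -> q z (matrix diag(sqrt(q), 1/sqrt(q))) by a
# random SL(2,C) element S, then verify the multiplier relation
# (gamma(z)-eta)/(gamma(z)-xi) = q (z-eta)/(z-xi),
# where eta = S(0), xi = S(inf) are the fixed points of gamma.
rng = np.random.default_rng(1)

q = 0.3 + 0.1j                                  # multiplier, |q| < 1
D = np.diag([np.sqrt(q), 1/np.sqrt(q)])

S = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))
S = S / np.sqrt(np.linalg.det(S))               # normalize to det = 1
G = S @ D @ np.linalg.inv(S)                    # gamma as a matrix

def mobius(M, z):
    (a, b), (c, d) = M
    return (a*z + b) / (c*z + d)

eta = S[0, 1] / S[1, 1]                         # image of z = 0
xi  = S[0, 0] / S[1, 0]                         # image of z = infinity

z = 0.7 - 0.2j                                  # arbitrary test point
lhs = (mobius(G, z) - eta) / (mobius(G, z) - xi)
rhs = q * (z - eta) / (z - xi)
assert abs(lhs - rhs) < 1e-8
print(lhs, rhs)
```

Since any M\"obius map sending $\eta\mapsto 0$, $\xi\mapsto\infty$ is proportional to $(z-\eta)/(z-\xi)$, the relation holds with exactly the multiplier $q$ of the conjugated dilation.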
Within the Schottky group $\Gamma$, there are primitive conjugacy classes $\langle\gamma_1,\dots,\gamma_n\rangle$ of $\Gamma$, with ``primitive'' meaning that $\gamma$ is not a positive power of any other element in $\Gamma$.
\subsection{Solid Tori Classified as $M_{c,d}$}
The physical spacetimes we are concerned with in this paper are all solid tori, i.e. the $n=1$ case of the previous subsection. They have toroidal conformal boundaries, so the Schottky group action is relatively simple.
After these topological constructions, we can further classify them into the $M_{c,d}$ family according to their geometries. This family first appeared in the discussion of classical gravitational instantons which dominate the path integral in Ref. \cite{SL2Z}, and is further explained in Refs. \cite{MaloneyWitten} and \cite{Tail}.
In this case, $\Lambda(\Gamma)$ consists of the north and south poles of $S^2_{\infty}$. Since solid tori have boundaries $T^2\cong\Omega(\Gamma)/\Gamma$, $\pi_1(\Omega(\Gamma))$ must be a subgroup of $\pi_1(T^2)$, so $\pi_1(\Omega(\Gamma))$ can only be isomorphic to $\mathbb{Z}\oplus\mathbb{Z}$, $\mathbb{Z}$, or the trivial group. When $\pi_1(\Omega(\Gamma))=\mathbb{Z}\oplus\mathbb{Z}$, $\Omega(\Gamma)$ has to be a Riemann surface of genus 1, which cannot be isomorphic to an open subset of $S^2_{\infty}$. When $\pi_1(\Omega(\Gamma))$ is trivial, $\Omega(\Gamma)$ is a simply-connected universal cover of $T^2$, so that $\Gamma$ has to be $\mathbb{Z}\oplus\mathbb{Z}$. It is easily seen from \eqref{eq:metric} that if $\Gamma\cong\mathbb{Z}\oplus\mathbb{Z}$, then although $\mathbb{H}^3/(\mathbb{Z}\oplus\mathbb{Z})$ has a toroidal boundary at $y=0$, there is a cusp at $y\rightarrow\infty$, whose sub-Planckian length scale invalidates semi-classical treatments.
The only possibility is thus $\pi_1(\Omega(\Gamma))=\mathbb{Z}$, where $\Gamma$ can be either $\mathbb{Z}$ or $\mathbb{Z}\oplus\mathbb{Z}_n$. The latter makes $M$ a $\mathbb{Z}_n$-orbifold, indicating the existence of massive particles, which are not allowed in pure gravity. To avoid undesirable geometries such as cusps and orbifolds in the contributions to the path integral \cite{Giombi,MaloneyWitten}, we restrict our Schottky group to be $\Gamma\cong\mathbb{Z}$, generated by the matrix
\begin{equation}
\label{eq:generator}
W=\left(\begin{matrix}
q & 0\\
0 & ~q^{-1}\\
\end{matrix}\right)
\end{equation}
where $|q|<1$.
The boundary torus is thus obtained by quotienting the complex $z$-plane without the origin by $\mathbb{Z}$. Redefining $z=e^{2\pi i\omega}$, $\omega$ is defined up to $\omega\rightarrow\omega+1$, and $W$ acts by $\omega\rightarrow\omega+\ln q/2\pi i$. Hence, the complex modulus of the torus is $\tau\equiv\ln q/2\pi i$, defined up to a $PSL(2,\mathbb{Z})$ M\"{o}bius transformation $\tau\sim(a\tau+b)/(c\tau+d)$, where the integers $a,b,c,d$ satisfy $ad-bc=1$.
When constructing a solid torus from its boundary torus, $\tau$ is defined only up to $\tau\sim\tau+\mathbb{Z}$ by a choice of solid filling, completely determined by the pair $(c,d)$ of relatively prime integers. This is because the flip of sign $(a,b,c,d)\rightarrow(-a,-b,-c,-d)$ does not affect $q$, and once $(c,d)$ are given, $(a,b)$ can be uniquely determined by $ad-bc=1$ up to a shift $(a,b)\rightarrow(a,b)+t(c,d),\, t\in\mathbb{Z}$ which leaves $q$ unaffected. We call these solid tori $M_{c,d}$'s, and any $M_{c,d}$ can be obtained from $M_{0,1}$ via a modular transformation on $\tau$. Physically, $M_{0,1}$ is the Euclidean thermal AdS$_3$ and $M_{1,0}$ is the traditional Euclidean BTZ black hole obtained from Wick rotating the original metric in \cite{BTZ}. Excluding $M_{0,1}$, $M_{c,d}$'s are collectively called the $SL(2,\mathbb{Z})$ family of Euclidean black holes, to be discussed in section \ref{sec:whole}.
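The bookkeeping behind the $(c,d)$ labels can be sketched in a few lines (our illustration; the helper names are ours): for coprime $(c,d)$, the extended Euclidean algorithm supplies $(a,b)$ with $ad-bc=1$, and the residual shift $(a,b)\rightarrow(a,b)+t(c,d)$ leaves $q=e^{2\pi i\tau'}$ unchanged:

```python
import cmath
from math import gcd

def ext_gcd(c, d):
    # returns (g, u, v) with u*c + v*d = g
    if d == 0:
        return (c, 1, 0)
    g, u, v = ext_gcd(d, c % d)
    return (g, v, u - (c // d)*v)

def modular_tau(tau, c, d, t=0):
    # tau' = (a tau + b)/(c tau + d) for some (a, b) with a*d - b*c = 1
    assert gcd(c, d) == 1
    g, u, v = ext_gcd(c, d)      # u*c + v*d = 1
    a, b = v, -u                 # then a*d - b*c = v*d + u*c = 1
    a, b = a + t*c, b + t*d      # the ambiguity shift, tau' -> tau' + t
    return (a*tau + b) / (c*tau + d)

tau = 0.3 + 1.1j
q0 = cmath.exp(2j*cmath.pi*modular_tau(tau, 2, 5, t=0))
q7 = cmath.exp(2j*cmath.pi*modular_tau(tau, 2, 5, t=7))
assert abs(q0 - q7) < 1e-9       # q depends only on (c, d)
print(q0)
```

The shift changes $\tau'$ by an integer only, so $q$, and hence the geometry $M_{c,d}$, is insensitive to the choice of $(a,b)$.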
\section{Thermal AdS$_3$}
\label{sec:tads}
The Euclidean thermal AdS$_3$~ has the topology of a solid torus $M_{0,1}$, whose non-contractible loop is parametrized by the Euclidean time.
The constant time slice is thus a disk $D^2$ with a boundary $S^1$, perpendicular to the non-contractible loop.
\subsection{Bipartition into Two Disks}
We bipartition the disk into upper and lower subregions $A$ and $B$, both having the topology of a disk. The solid torus is then turned into a sliced bagel as in Fig. \ref{fig:tads}. The boundary of each subregion contains an interval lying on the $S^1$. In the following we denote by $a$ the ratio between the length of one such interval and the circumference of the boundary $S^1$, satisfying $0\leq a \leq 1$. Except for the symmetric case where $a=1/2$ and the two subregions are equivalent, generally $S_A\neq S_B$.
As introduced in section \ref{sec:tool}, one then glues each of $n$ copies of subregion $B$'s separately while gluing the $n$ copies of subregion $A$'s together. The resultant 3-manifold is an $n$-handlebody, which is a filled genus-$n$ Riemann surface, shown in Fig. \ref{fig:tads}. (In the special case of $n=1$, the handlebody reduces to a solid torus.)
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.18]{bagel1}
\hskip 1cm
\begin{tikzpicture}[scale=0.8]
\draw[thick] (0,0) arc (230:-50:1);
\draw[thick] (0,0) arc (50:330:1);
\draw[thick] (0.244,-1.27) arc (140:400:1);
\draw[thick] (-1.15,-0.7) arc (-170:-10:0.5 and 0.25);
\draw[thick] (-0.95,-0.85) arc (180:0:0.3 and 0.15);
\draw[thick] (0.5,-1.9) arc (-170:-10:0.5 and 0.25);
\draw[thick] (0.7,-2.05) arc (180:0:0.3 and 0.15);
\draw[thick] (0.15,0.9) arc (-170:-10:0.5 and 0.25);
\draw[thick] (0.35,0.75) arc (180:0:0.3 and 0.15);
\node at (2,-0.5) {$\cdots$};
\end{tikzpicture}
\caption{\textbf{Left}: bipartition of the thermal AdS$_3$.
\textbf{Right}: the glued 3-manifold is a flat bouquet-like $n$-handlebody.}
\label{fig:tads}
\end{figure}
With a proper normalization, the entanglement entropy corresponding to subregion $A$ is then
\begin{equation}
\label{eq:tads-replica}
S_{TAdS}=-\frac{d}{dn}\left(\frac{Z(n\text{-handlebody})}{Z(1\text{-handlebody})^n}\right)\Bigg|_{n=1}.
\end{equation}
The contribution to the path integral around a classical saddle point of the $n$-handlebody takes the form
\begin{equation}
Z(n)=\exp \left[ k S_0(n) + \sum_i k^{-i+1} S_i(n) \right],
\end{equation}
where $k^{-i+1}S_i(n)$ is the $i$-loop free energy of boundary graviton excitations. At tree level ($i=0$), $Z_{tree}(n\text{-handlebody})$ can be derived assuming the dual CFT is an extremal CFT \cite{YinXi}\footnote{This partition function is motivated by the Liouville action of a single free boson on a handlebody, and is conjectured in \cite{YinXi} as a weight $12k$ modular form to avoid singularities of special functions.},
\begin{equation}
\label{eq:tads-tree}
Z_{tree}(n)=\prod_{\gamma~\text{prim.}}\prod_{m=1}^\infty |1-q_\gamma^m| ^{24k},
\end{equation}
with the product running over primitive conjugacy classes of $\gamma$, $q_\gamma$ being the multiplier of $\gamma$ introduced in section \ref{sec:tool}, and $k=l/16G$.
In general the two products are hard to evaluate. However, in the low-temperature regime when thermal AdS$_3$~ dominates, the leading contribution to the infinite product over $m$ comes from $m=1$. Furthermore, the product over $\gamma$ is dominated by a single-letter contribution \cite{DongXi,Das}, $\prod\limits_{\gamma~\text{prim.}} |1-q_\gamma| \approx |1-q_1|^{2n}$. Combining these, we obtain
\begin{equation}
Z_{tree}(n)\approx \prod_{\gamma~\text{prim.}} |1-q_1|^{24k}=|1-q_1|^{48nk},
\end{equation}
with $q_1$ a function of $n$ and $a$, having the form
\begin{equation}
\label{eq:tads-q1}
q_1=\frac{\sin^2(\pi a)}{n^2\sin^2(\pi a/n)}e^{-2\pi\beta}.
\end{equation}
At one-loop ($i=1$) level, the general expression for $Z_{loop}(n\text{-handlebody})$ can be derived from either the boundary extremal CFT \cite{YinXi,BinChen} or the bulk heat kernel method \cite{Giombi}. Both depend on the Schottky parametrization of the boundary genus-$n$ Riemann surface. The result is
\begin{equation}\label{eq:tads-loop}
Z_{loop}(n)=\prod_{\gamma~\text{prim.}} \prod_{m=2}^\infty \frac{1}{|1-q_\gamma^m|}\approx \frac{1}{|1-q_1^2|^{2n}},
\end{equation}
in the low-temperature regime $q_1\ll 1$. Plugging $Z(n\text{-handlebody})=Z_{tree}(n)Z_{loop}(n)$ into \eqref{eq:tads-replica}, we obtain
\begin{equation}
\label{eq:ads3}
S_{TAdS}(a)\approx\left[96k e^{-2\pi\beta}+(96k-8) e^{-4\pi\beta} + O(e^{-6\pi\beta})\right]\left(\pi a\cot(\pi a)-1\right).
\end{equation}
The terms containing $k$ come from tree level, while the others are one-loop contributions. The entire expression approaches zero rapidly in the low-temperature regime $\beta\rightarrow\infty$ for any $k$. The dependence of the above result on $a$ distinguishes it from the original definition \cite{KitaevPreskill, LevinWen} of TEE, which is a universal constant. We note that $a$ enters as the boundary condition on the constant time slice, and has nothing to do with the leading area-law term in the usual expressions for entanglement entropies.
When subregion $A$ is ``nothing'', i.e. $a\rightarrow 0$, $\pi a\cot(\pi a)\rightarrow 1$ and the TEE $S_A$ vanishes. When $A$ is instead ``everything'', i.e. $a\rightarrow 1$, $\pi a \cot(\pi a)\rightarrow -\infty$, which is compensated by the small prefactor $e^{-2\pi\beta}\ll 1$ at low temperatures. We observe that apart from the $a\rightarrow 0$ case, the TEE for thermal AdS$_3$~ is always negative. Another important case is $a=1/2$, for which the two subregions are symmetric. In this case we have
\begin{equation}
\label{eq:tads-symmetric}
S_{TAdS}\left(a=\frac{1}{2}\right)\approx-\left[96k e^{-2\pi\beta}+(96k-8) e^{-4\pi\beta} + O(e^{-6\pi\beta})\right].
\end{equation}
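The expansion \eqref{eq:ads3} can be cross-checked numerically (our check, not part of the original text): build $\ln Z(n\text{-handlebody})$ from the tree-level and one-loop factors with $q_1(n)$ from \eqref{eq:tads-q1}, take the replica derivative by a finite difference in $n$, and compare with the closed form:

```python
import numpy as np

# Numeric check of S_TAdS(a): ln Z(n) = 48 n k ln(1-q1) - 2 n ln(1-q1^2),
# with q1 depending on n and a; replica derivative vs. closed-form expansion.
k, a, beta = 1.0, 0.3, 2.0

def q1(n):
    # multiplier of the single-letter word, as a function of n and a
    return np.sin(np.pi*a)**2 / (n**2 * np.sin(np.pi*a/n)**2) * np.exp(-2*np.pi*beta)

def lnZ(n):
    q = q1(n)
    return 48*n*k*np.log(1 - q) - 2*n*np.log(1 - q**2)

def ratio(n):
    # ln of Z(n-handlebody)/Z(1-handlebody)^n
    return lnZ(n) - n*lnZ(1.0)

eps = 1e-5
S_replica = -(ratio(1 + eps) - ratio(1 - eps)) / (2*eps)

q = np.exp(-2*np.pi*beta)
S_closed = (96*k*q + (96*k - 8)*q**2) * (np.pi*a/np.tan(np.pi*a) - 1)

assert abs(S_replica - S_closed) < 1e-5*abs(S_closed)
print(S_replica, S_closed)
```

The agreement confirms both the tree-level coefficient $96k$ and the one-loop shift $-8$ in the $e^{-4\pi\beta}$ term; the result is negative here since $\pi a\cot(\pi a)<1$ for $0<a<1$.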
\subsection{Two Disjoint Thermal AdS$_3$}
Now we take two non-interacting thermal AdS$_3$~ as the whole system, represented by two disjoint solid tori $M_{0,1}$. Two non-interacting, non-entangled, identical CFTs live on their asymptotic boundaries. One would naively expect the TEE between these two solid tori to be zero, which turns out not to be the case. To calculate the entanglement entropy between these two solid tori, one can simply use
\begin{equation}
\label{eq:tads-epr}
S_{TAdS}=-\frac{d}{dn}\left(\frac{Z_{0,1}(n\tau)Z_{0,1}(\tau)^n}{Z_{0,1}(\tau)^{2n}}\right)\Bigg|_{n=1}.
\end{equation}
We have used the shorthand notation $Z_{0,1}(\tau)=Z_{0,1}(\tau,\bar{\tau})$ to take into account both holomorphic and anti-holomorphic sectors. The partition function $Z_{0,1}(n\tau)$ comes from gluing $n$ copies of solid torus $A$, which is a new solid torus with modular parameter $n\tau$.
Meanwhile, $Z_{0,1}(\tau)^n$ comes from gluing individually the $n$ copies of solid torus $B$. We can simply multiply the contributions from $A$ and $B$ together because they are disjoint. Then we can plug these into the expression for the solid torus partition function, i.e. the $1$-handlebody result from \eqref{eq:tads-tree} and \eqref{eq:tads-q1},
\begin{equation}
Z_{0,1}(\tau)=|q|^{-2k}\prod_{m=2}^\infty |1-q^m|^{-2}.
\end{equation}
At low temperatures, $q=e^{2\pi i\tau}=e^{-2\pi\beta}$ is a small number, and thus at leading order $Z_{0,1}(\tau)\approx q^{-2k}(1-q^2)^{-2}$.
After straightforward calculations we obtain
\begin{equation}
S_{TAdS}\approx 2(1+4\pi\beta)e^{-4\pi\beta}.
\end{equation}
This contains only the loop contribution, i.e. the semi-classical result is zero. For comparison, we also calculate the canonical ensemble thermal entropy of a single thermal AdS$_3$~ at temperature $\beta^{-1}$: $S_{TAdS}^{\text{thermal}}=\ln Z(1\text{-handlebody})-\beta Z(1\text{-handlebody})^{-1} \frac{\partial Z(1\text{-handlebody})}{\partial \beta}.$
It has the low-temperature form
\begin{equation}
S_{TAdS}^{\text{thermal}}\approx 2(1+4\pi\beta)e^{-4\pi\beta},
\end{equation}
which again comes solely from loop contributions. We immediately observe that the thermal entropy of a single thermal AdS$_3$~ is the same as the TEE between two independent copies of thermal AdS$_3$.
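Both statements can be verified numerically with the low-temperature partition function (our cross-check): the replica TEE between the two disjoint solid tori and the canonical thermal entropy of a single one reduce to the same expression, with the classical $k$-dependent part cancelling identically:

```python
import numpy as np

# ln Z(beta) = 4 pi k beta - 2 ln(1 - exp(-4 pi beta)) at low temperature.
k, beta = 1.0, 1.0

def lnZ(b):
    q = np.exp(-2*np.pi*b)
    return -2*k*np.log(q) - 2*np.log(1 - q**2)

# Replica trick: Z(n tau) Z(tau)^n / Z(tau)^{2n} = Z(n tau) / Z(tau)^n
def ratio(n):
    return lnZ(n*beta) - n*lnZ(beta)

eps = 1e-5
S_tee = -(ratio(1 + eps) - ratio(1 - eps)) / (2*eps)

# Canonical thermal entropy: S = ln Z - beta * d(ln Z)/d(beta)
db = 1e-5
dlnZ = (lnZ(beta + db) - lnZ(beta - db)) / (2*db)
S_th = lnZ(beta) - beta*dlnZ

S_closed = 2*(1 + 4*np.pi*beta)*np.exp(-4*np.pi*beta)

assert abs(S_tee - S_closed) < 1e-4*S_closed
assert abs(S_th - S_closed) < 1e-4*S_closed
print(S_tee, S_th, S_closed)
```

The tree-level terms $4\pi k\beta$ drop out of both quantities, making explicit that the coincidence is a purely one-loop effect.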
This does {\it not} imply that there are nontrivial topological entanglement between the two copies of thermal AdS$_3$, but simply reveals the insufficiency of using entanglement entropy as an entanglement measure at finite temperatures. For example, consider two general subsystems $A$ and $B$ with thermal density matrices $\rho_A$ and $\rho_B$ and combine them into a separable system,
\begin{equation}
\rho=\rho_A\otimes \rho_B.
\end{equation}
These two subregions are thus obviously non-entangled. But if one attempts to calculate the entanglement entropy between $A$ and $B$ by tracing over $B$, one can still get an arbitrary result depending on the details of $\rho_A$. If we choose $\rho_A=|\psi\rangle \langle \psi|$ where $|\psi\rangle$ is some pure state, then the entanglement entropy will be zero. If instead we choose $\rho_A=\frac{1}{\dim (\mathcal{H}_A)}\mathbf{1}$, the properly normalized identity matrix, then the entanglement entropy will be $\ln(\dim (\mathcal{H}_A))$. So depending on the choice of $\rho_A$, one can obtain any value of the entanglement entropy between these minimum and maximum values. This shortcoming is due to the fact that the entanglement entropy calculation now involves undesired classical correlations in mixed states.
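A minimal numerical illustration of this point (ours): for a product state $\rho=\rho_A\otimes\rho_B$, tracing out $B$ returns $\rho_A$ exactly, and the resulting ``entanglement entropy'' sweeps from $0$ to $\ln(\dim\mathcal{H}_A)$ depending solely on the choice of $\rho_A$:

```python
import numpy as np

def entropy(rho):
    # von Neumann entropy -tr(rho ln rho) via eigenvalues
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev*np.log(ev)))

dim = 4
rho_pure = np.zeros((dim, dim))
rho_pure[0, 0] = 1.0                 # |psi><psi|, a pure state
rho_mixed = np.eye(dim)/dim          # maximally mixed state

rho_B = np.eye(3)/3                  # any thermal rho_B on a 3-dim space
for rho_A in (rho_pure, rho_mixed):
    rho = np.kron(rho_A, rho_B)
    # tracing out B of a product state returns rho_A exactly
    red = rho.reshape(dim, 3, dim, 3).trace(axis1=1, axis2=3)
    assert np.allclose(red, rho_A)

assert abs(entropy(rho_pure)) < 1e-10
assert abs(entropy(rho_mixed) - np.log(dim)) < 1e-10
print(entropy(rho_pure), entropy(rho_mixed), np.log(dim))
```

The entropy measured this way is simply the thermal entropy of $\rho_A$ and carries no information about quantum entanglement between $A$ and $B$.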
To address this issue, we look at the topological mutual information between the two solid tori,
\begin{equation}
I(A,B)=S(A)+S(B)-S(A\cup B),
\end{equation}
so that the thermal correlations cancel. Following similar replica trick calculations, one easily obtains $S(A\cup B)=2S(A)=2S(B)$; thus the mutual information vanishes and there exists no nontrivial topological entanglement between the two disjoint copies of thermal AdS$_3$. We will observe in the next section that this statement no longer holds true for an eternal BTZ black hole.
\section{BTZ Black Hole}
\label{sec:btz}
We will explore in this section the topological entanglement in the bulk of Euclidean BTZ black hole.
\subsection{BTZ Geometry}
It was long believed that 3d gravity is rather trivial, because it admits no propagating gravitational waves, only local fluctuations. However, in 1992 the authors of \cite{BTZ} proposed a new type of AdS-Schwarzschild black hole with Lorentzian metric
\begin{equation}
\label{eq:btz}
ds_L^2=-N_L^2dt_L^2+N_L^{-2}dr^2+r^2(d\phi+N_L^\phi dt)^2,
\end{equation}
where the lapse and shift functions have the form $N_L^2=-8G M_L +\frac{r^2}{l^2}+\frac{16G^2J_L^2}{r^2},~~N_L^\phi=-\frac{4G J_L}{r^2}.$
$G$ is the three-dimensional Newton constant, $l$ the curvature radius of AdS$_3$, and $M_L$, $J_L$ are the mass and angular momentum of the black hole, respectively. The outer and inner horizons are defined by
\begin{equation}
r_\pm^2=4GM_Ll^2\left(1\pm \sqrt{1-\frac{J_L^2}{M_L^2l^2}}\right).
\end{equation}
Let $t_L=i t$ and $J_L=iJ$, and we do the Wick rotation to get
\begin{equation}
\label{eq:wick}
ds^2=N^2dt^2+N^{-2}dr^2+r^2(d\phi+N^\phi dt)^2,
\end{equation}
with $N^2=-8G M +\frac{r^2}{l^2}-\frac{16G^2J^2}{r^2},~~N^\phi(r)=-\frac{4G J}{r^2}$. The horizons are now given by
\begin{equation}
r_\pm^2=4GMl^2\left(1\pm \sqrt{1+\frac{J^2}{M^2l^2}}\right).
\end{equation}
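As a quick consistency check (ours), one can verify numerically that the quoted $r_\pm^2$ indeed solve $N^2=0$, viewed as a quadratic in $x=r^2$ (the $r_-^2$ root is negative here, consistent with $|r_-|$ appearing in the Euclidean formulas below):

```python
from random import Random

# Check that x = 4 G M l^2 (1 ± sqrt(1 + J^2/(M^2 l^2))) solves
# N^2 = -8GM + x/l^2 - 16 G^2 J^2 / x = 0, for random parameter values.
rng = Random(0)
for _ in range(100):
    G = rng.uniform(0.1, 2.0)
    M = rng.uniform(0.1, 2.0)
    J = rng.uniform(0.1, 2.0)
    l = rng.uniform(0.5, 2.0)
    s = (1 + J**2/(M**2*l**2))**0.5
    for sign in (+1, -1):
        x = 4*G*M*l**2*(1 + sign*s)      # candidate r^2 (negative for r_-)
        N2 = -8*G*M + x/l**2 - 16*G**2*J**2/x
        # compare against the size of the individual terms
        assert abs(N2) <= 1e-8 * max(8*G*M, abs(x)/l**2, abs(16*G**2*J**2/x))
print("both r_+^2 and r_-^2 solve N^2 = 0")
```

Multiplying $N^2=0$ by $x$ gives $x^2/l^2 - 8GMx - 16G^2J^2 = 0$, whose quadratic roots are exactly the quoted expressions.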
The Euclidean BTZ black hole is locally isometric to the hyperbolic three-space $\mathbb{H}^3$ and is globally described by $\mathbb{H}^3/\Gamma$ with $\Gamma\cong\mathbb{Z}$. The topology is a solid torus, and one can make it explicit by doing the following coordinate transformations \cite{CoordianteTransformation}
\begin{equation}
\begin{split}
\label{eq:spherical}
x &=\sqrt{\frac{r^2-r_+^2}{r^2-r_-^2}}\cos\left(\frac{r_+}{l^2}t+\frac{|r_-|}{l}\phi\right)\exp\left(\frac{r_+}{l}\phi-\frac{|r_-|}{l^2}t\right),\\
y &=\sqrt{\frac{r^2-r_+^2}{r^2-r_-^2}}\sin\left(\frac{r_+}{l^2}t+\frac{|r_-|}{l}\phi\right)\exp\left(\frac{r_+}{l}\phi-\frac{|r_-|}{l^2}t\right),\\
z &=\sqrt{\frac{r_+^2-r_-^2}{r^2-r_-^2}}\exp\left(\frac{r_+}{l}\phi-\frac{|r_-|}{l^2}t\right)>0.\\
\end{split}
\end{equation}
They bring the metric \eqref{eq:wick} to the upper half-space $\mathbb{H}^3$ with $z>0$. Further changing to the spherical coordinates $(x,y,z)=(R\cos\theta\cos\chi,R\sin\theta\cos\chi,R\sin\chi)$, we finally arrive at
\begin{equation}
\label{eq:fundamental}
ds^2=\frac{l^2}{\sin^2\chi}\left(\frac{dR^2}{R^2}+\cos^2\chi d\theta^2+d\chi^2\right).
\end{equation}
To ensure that the above coordinate transformation is non-singular (contains no conical singularities) at the $z$ axis $r=r_+$, we must require periodicity in the arguments of the trigonometric functions. That is, we must identify
\begin{equation}
\frac{1}{2\pi l} \left(\phi,t\right)\sim \frac{1}{2\pi l}(\phi+\Phi,t+\beta),
\end{equation}
where $\Phi=\frac{|r_-|}{r_+^2-r_-^2},\,\beta=\frac{r_+ l}{r_+^2-r_-^2}.$ We recombine the real pair $(\Phi,\beta)$ into a single complex variable
\begin{equation}
\label{eq:tau}
\tau=\Phi+i\beta,
\end{equation}
which is the complex modular parameter of the boundary torus. In terms of metric \eqref{eq:fundamental}, this corresponds to the global identifications
\begin{equation}
(R,\theta,\chi)\sim\left(Re^{2\pi r_+/l},\theta+\frac{2\pi|r_-|}{l},\chi\right).
\end{equation}
A fundamental region for \eqref{eq:fundamental} is the filled slice between the inner and outer hemispheres centered at the origin, of radii $R=1$ and $R=e^{2\pi r_+/l}$ respectively, with an opening of $2\pi|r_-|/l$ or $2\pi$ (if $r_-=0$) in azimuthal angle, as shown in Fig. \ref{fig:BTZ}; the two hemispheres are identified along the radial lines with a twist of angle $2\pi|r_-|/l$ or $2\pi$. Hence, the segment of the $z$-axis between the two hemispheres corresponds to the outer horizon, and is mapped to the central cord of the solid torus at $\chi=\pi/2$ (the boundary torus is at $\chi=0$).
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.4]
\draw[gray,->] (0,0)--(5,0);
\draw[gray,->] (0,0)--(0,6);
\draw[gray,->] (0,0)--(-2.5,-2.5);
\draw[thick] (-4,0) arc (-180:0:4 and 2);
\draw[thick] (4,0) arc (0:180:4 and 2);
\draw[thick] (-2,0) arc (-180:0:2 and 1);
\draw[dashed,thick] (2,0) arc (0:180:2 and 1);
\draw[thick] (4,0) arc (0:180:4 and 5);
\draw[thick] (2,0) arc (0:180:2 and 3);
\node at (5,-0.5) {$y$};
\node at (-3,-2.5) {$x$};
\node at (0.5,6) {$z$};
\end{tikzpicture}
\hskip 1cm
\begin{tikzpicture}[scale=0.5]
\draw[thick] (0,0) circle [x radius=4, y radius=2.5];
\draw[thick] (-1.5,0.41) arc (-170:-10:1.5 and 0.9375);
\draw[thick] (-1,-0.15) arc (180:0:1 and 0.625);
\draw[dashed,blue,thick] (0,0) circle [x radius=2.8, y radius=1.5];
\draw[white] (0,-3.5)--(3,-3.5);
\end{tikzpicture}
\caption{(Color online) \textbf{Left}: The spherical coordinates on $\mathbb{H}^3$, in which the original Schwarzschild-like metric \eqref{eq:btz} of the BTZ black hole takes the form shown in the right picture. \textbf{Right}: The topology of the Euclidean BTZ black hole is a solid torus. The horizon is the blue dashed line threading the central cord of the solid torus. The Euclidean time runs in the meridian direction.}
\label{fig:BTZ}
\end{figure}
For convenience, in the rest of the paper, unless stated otherwise, we only focus on non-rotating Euclidean BTZ black hole, so that $\tau$ is pure imaginary and $r_-=0$.
\subsection{TEE between Two One-Sided Black Holes and Mutual Information}
\label{subsec:btz-erepr}
Following Refs. \cite{MaldacenaTFD,Raamsdonk,ER=EPR}, an eternal Lorentzian AdS black hole has two asymptotic regions and can be viewed as two black holes connected through a non-traversable wormhole. It has also been suggested from the dual CFT perspective that the entanglement entropy between the CFTs living on the two asymptotic boundaries is equal to the thermal entropy of one CFT. Motivated by this, we are interested in calculating the TEE between the two single-sided black holes in the bulk.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.7]
\filldraw[fill=lightgray, draw=black, thick] (0,0) circle [radius=2];
\filldraw[fill=white, draw=blue, thick] (0,0) circle [radius=1];
\end{tikzpicture}
\hskip 1cm
\begin{tikzpicture}[scale=0.7]
\filldraw[fill=lightgray, draw=black, thick] (0,0) circle [radius=2];
\filldraw[fill=gray, draw=blue, thick] (0,0) circle [radius=1.5];
\filldraw[fill=white, thick] (0,0) circle [radius=1];
\end{tikzpicture}
\caption{(Color online) \textbf{Left}: Constant time slice of each single-sided BTZ black hole is an annulus. The inner boundary in blue denotes the horizon. Time evolution of this slice corresponds to rotating angle $\pi$ around the inner blue boundary. \textbf{Right}: Gluing the constant time slices of single-sided black hole $R$ (light grey) and $L$ (dark grey) along the horizon (blue line) in the middle.}
\label{fig:ConstantBTZ}
\end{figure}
However, for the Euclidean BTZ black hole \eqref{eq:wick} and \eqref{eq:fundamental}, the metrics only cover the spacetime outside the horizon of one single-sided black hole. Everything inside the horizon is hidden, including the other single-sided black hole. In order to make the computation of TEE between two single-sided black holes possible, we take an alternative view of the solid torus $M_{1,0}$, as in Fig. \ref{fig:ConstantBTZ}. In the left panel, we sketch the constant time slice of the right single-sided black hole, denoted $R$. It is the constant-$\theta$ slice in metric \eqref{eq:fundamental} with the topology of an annulus, whose inner boundary is identified with the horizon. In the right panel, we glue the two constant time slices for black holes $L$ and $R$ along the horizon. Then comes the most important step: we fold the annulus of black hole $L$ along the horizon, so that it coincides with the annulus of black hole $R$. To obtain the full spacetime geometry, one rotates the constant time slice of $L$ about the horizon \emph{counterclockwise} by $\pi$, while rotating the constant time slice of $R$ about the horizon \emph{clockwise} by $\pi$. Namely, the two annuli meet twice: once at angle $0$ and once at $\pi$. The resultant manifold is a solid torus, the same $M_{1,0}$ introduced before. Hence one can view this solid torus either as one single-sided black hole $R$ with modular parameter $\tau=i\beta$, or as two single-sided black holes $L$ and $R$, each contributing $\tau'=i\beta/2$.
It might concern some readers that the CFTs living on the asymptotic boundaries of $L$ and $R$ in the Lorentzian picture are now glued together. We note that this is a feature of the Euclidean picture: due to the opposite directions of evolution, we have CFT$_L(t)=$CFT$_R(-t)$. At $t=0$, these obviously coincide. Then at $t=\beta/2$, this gives CFT$_L(t=\beta/2)=$CFT$_R(t=-\beta/2)$. Using the periodicity of Euclidean time, $-\beta/2=-\beta/2+\beta=\beta/2$, we arrive at CFT$_L(t=\beta/2)=$CFT$_R(t=\beta/2)$, so they coincide again and the two CFTs are glued together. This is consistent with the fact that in the Euclidean signature there should only be one asymptotic region, as shown in \cite{KrasnovEuclidean}.
Now we can calculate the TEE between the constant time slices of $L$ and $R$, which we denote as $A$ and $B$. Importantly, since in general the result can be time dependent, we specify the cut to be done at $t=0$. Shown in the left panel of Fig. \ref{fig:beta-p}, each subregion contributes $\tau'$ to the modular parameter of the solid torus. We sketch one copy of $\rho_A$ in the right panel.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.6]
\draw[thick] (0,0) circle [radius=2];
\draw[thick] (-2,0)--(2,0);
\node at (0,1) {$A$};
\node at (0,-1) {$B$};
\draw[thick] (5.586,1.414) arc (135:-180:2 and 2);
\draw[thick] (5,0)--(9,0);
\draw[thick] (7,0)--(5.586,1.414);
\draw[thick] (7,0)--(9,0);
\node at (7,1) {$A$};
\node at (7,-1) {$B$};
\end{tikzpicture}
\caption{The disk perpendicular to the horizon, which pierces its center. \textbf{Left:} Parts $A$ and $B$ of spacetime are respectively formed by rotating the spatial subregions $A$ and $B$ by $\pm\pi$. \textbf{Right:} The graphical representation of $\rho_A$, with a wedge missing in spacetime subregion $A$.}
\label{fig:beta-p}
\end{figure}
To find $S(A)$, we need to calculate the partition function of the 3-manifold that corresponds to $\text{tr}\rho_A^n$. We first enlarge the missing wedge in the right panel of Fig. \ref{fig:beta-p} and shrink the sizes of $A$ and $B$. To add the second copy of $\rho_A$, one should glue $A_1$ to $B_2$, with $B_2$ glued to $A_2$, as shown in Fig. \ref{fig:BTZsurgery}. Note that this differs from the usual way of performing the replica trick, where $A_1$ is always glued to $A_2$. This is again a result of the opposite directions of time evolution for $L$ and $R$: the $B$ spatial slice at $t=\beta/2$ should always be identified with the $A$ spatial slice at $t=\beta/2$. One can then follow this procedure and glue $n$ copies of $\rho_A$.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.6]
\draw[thick] (0,0) circle [x radius=4, y radius=2.5];
\draw[thick] (-1.5,0.41) arc (-170:-10:1.5 and 0.9375);
\draw[thick] (-1,-0.15) arc (180:0:1 and 0.625);
\draw[blue,thick,dashed] (0,0) circle [x radius=2.8, y radius=1.5];
\draw[blue,thick] (-2.12,-0.953) arc (-145:-35:2.62 and 1.28);
\draw[thick] (-2.1,-1)--(-2.3,-0.4);
\draw[thick] (2.1,-1)--(2.3,-0.4);
\draw[thick] (-2.3,-0.4).. controls (0,-1.2)..(2.3,-0.4);
\draw[thick] (-2.1,-1)--(-2.5,-1.6);
\draw[thick] (2.1,-1)--(2.3,-1.6);
\draw[thick] (-2.5,-1.6)..controls(0,-2)..(2.3,-1.6);
\draw[thick] (9,0) circle [radius=2];
\draw[thick] (9,0) -- (11,0);
\draw[thick] (9,0)--(9,2);
\draw[thick] (9,0)--(10,1.73205);
\draw[thick] (9,0)--(10.73205,-1);
\draw[thick] (9,0)--(9,-2);
\node at (8.2,-0) {$\cdots$};
\node at (10.2,0.75) {{\small $B_1$}};
\node at (10.4,-0.4) {{\small $A_2$}};
\node at (9.4,1.5) {{\small $A_1$}};
\node at (9.6,-1) {{\small $B_2$}};
\end{tikzpicture}
\caption{(Color online) \textbf{Left}: front view of the pictorial representation of $\rho_A$. Notice that the cutaway wedge runs along the longitude (non-contractible loop) of the solid torus, with its vertex on the horizon. \textbf{Right}: Graphical representation of $\text{tr}\rho_A^n$. The disk is perpendicular to the horizon.}
\label{fig:BTZsurgery}
\end{figure}
The resultant 3-manifold is a solid torus with modular parameter $2n\tau'$, since each copy of $A$ contributes $\tau'$ and the same goes for $B$. Replica trick then gives
\begin{equation}
\label{eq:replica}
S_{BTZ}(A)=-\frac{d}{dn}\left( \frac{Z_{1,0}(2n\tau')}{Z_{1,0}(2\tau')^n} \right)\Bigg|_{n=1}.
\end{equation}
The partition function $Z_{1,0}(\tau)$ can be obtained from that of thermal AdS$_3$~ by the modular transformation $\tau\rightarrow -1/\tau$,
\begin{equation}
\label{eq:btz-partition}
Z_{1,0}(\tau)=|q_-|^{-2k}\prod_{m=2}^\infty \frac{1}{|1-q_-^m|^2},
\end{equation}
where we have defined $q_-\equiv e^{-2\pi i/\tau}=e^{-2\pi/\beta}$. In the high-temperature regime $\beta\ll 1$, the above reduces to
$Z_{1,0}(\tau)\approx e^{4\pi k/\beta} \left(1-e^{-4\pi/\beta}\right)^{-2}.$ Substituting it into \eqref{eq:replica}, one obtains at leading order
\begin{equation}
\label{eq:btz-thermal-A}
S_{BTZ}(A) =\frac{8\pi k}{\beta}-2 e^{-4\pi/\beta} \left(\frac{4\pi}{\beta}-1\right)+O(e^{-6\pi/\beta}),
\end{equation}
where the first term comes from tree level and is identified with the Bekenstein-Hawking entropy. The above expression matches the thermal entropy of a single single-sided black hole at one loop,
\begin{equation}
S_{BTZ}^{\text{thermal}}(A)=\ln Z_{1,0}(\tau)-\beta Z_{1,0}(\tau)^{-1} \frac{\partial Z_{1,0}(\tau)}{\partial \beta}=S_{BTZ}(A).
\end{equation}
Remarkably, this equation holds regardless of the specific form of $Z_{1,0}(\tau)$.
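As a numerical sanity check (an illustrative sketch, not part of the derivation), the high-temperature expansion \eqref{eq:btz-thermal-A} can be compared with $S=\ln Z-\beta\,\partial_\beta\ln Z$ evaluated on $Z_{1,0}\approx e^{4\pi k/\beta}(1-e^{-4\pi/\beta})^{-2}$, using a finite-difference derivative:

```python
from math import pi, exp, log

def lnZ(beta, k):
    # high-temperature approximation: ln Z = 4*pi*k/beta - 2 ln(1 - e^{-4 pi/beta})
    return 4 * pi * k / beta - 2 * log(1 - exp(-4 * pi / beta))

def S_thermal(beta, k, h=1e-6):
    # S = ln Z - beta * d(ln Z)/d(beta), with a central finite difference
    dlnZ = (lnZ(beta + h, k) - lnZ(beta - h, k)) / (2 * h)
    return lnZ(beta, k) - beta * dlnZ

def S_expansion(beta, k):
    # leading terms of eq. (btz-thermal-A)
    return 8 * pi * k / beta - 2 * exp(-4 * pi / beta) * (4 * pi / beta - 1)

beta, k = 2.0, 1
assert abs(S_thermal(beta, k) - S_expansion(beta, k)) < 1e-3
```

The residual discrepancy is of the neglected order $O(e^{-8\pi/\beta})$, consistent with the expansion.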
It might be confusing at first that the Bekenstein-Hawking entropy, usually viewed as an area-law term, appears in the calculation of topological entanglement entropy. To make it explicit that the results above are the TEE rather than the full entanglement entropy, we can alternatively use the $Z_{1,0}(\tau)$ derived from the supersymmetric localization method in Chern-Simons theory on 3-manifolds with boundaries \cite{3J}. Following the replica trick, we find exactly the same expression\footnote{The supersymmetric localization method involves boundary fermions. We need to remove the contribution from the boundary fermions to match the partition function \eqref{eq:btz-partition}.}. Since Chern-Simons theory is a topological quantum field theory, the resulting entanglement entropy is a TEE. The horizon area $r_+$ should be understood as a topological quantum number of the theory.
In the calculation of the TEE between two disjoint thermal AdS$_3$'s in section \ref{sec:tads}, we saw that a nonzero TEE is not enough to guarantee genuine entanglement between two subregions, because of possible contributions from classical correlations. We therefore resort to the mutual information $I(A,B)$ between the two single-sided black holes, for which we need $S(A\cup B)$. Since in the Euclidean picture we are no longer in a pure state, $S(A\cup B)$ need not vanish, even though $A\cup B$ constitutes the entire system.
We start by bipartitioning the system into $A\cup B$ and $C$ at $t=0$, as shown in Fig.~\ref{fig:cup}, where $C$ is a very small region whose area will eventually be taken to zero.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.7]
\draw[thick,fill=lightgray] (0,0) circle [radius=2];
\draw[thick,draw=blue,fill=gray] (0,0) circle[radius=1.5];
\draw[thick,fill=white] (0,0) circle [radius=1];
\draw[fill=white,thick] (-1.98,0.2)--(-1.7,0.2)--(-1.7,-0.2)--(-1.98,-0.2);
\draw[thick] (8,0) arc (0:150:2 and 2)--(6,0);
\draw[thick] (8,0) arc (0:-150:2 and 2)--(6,0);
\draw[thick] (8,0) arc (0:-180:2 and 2);
\draw[thick] (8.2,0) arc (0:-180:2.2 and 2.2);
\draw[thick] (8,0)--(8.2,0);
\draw[thick] (4,0)--(3.8,0);
\draw[thick] (6,0)--(8,0);
\node at (6,1) {$A$};
\node at (6,-1) {$B$};
\node at (6,-2.7) {$C$};
\end{tikzpicture}
\caption{(Color online.) \textbf{Left:} Subregion $C$ is the small white square in the constant time slice. \textbf{Right:} One copy of $\rho_A$. The picture shows the disk perpendicular to the horizon. The thin layer surrounding the lower half circle corresponds to $C$.}
\label{fig:cup}
\end{figure}
The glued manifold is a solid torus with modular parameter $2n\tau'$, of exactly the same form as in Fig.~\ref{fig:beta-p}. The contribution from $C$ vanishes because it remains contractible in the glued manifold, so we can safely take its area to zero. Plugging \eqref{eq:btz-partition} into the replica trick formula \eqref{eq:replica}, we again obtain
\begin{equation}
\label{eq:union}
S_{BTZ}(A\cup B)=S_{BTZ}^{\text{thermal}}(A).
\end{equation}
So indeed the TEE of $A\cup B$ does not vanish. Combining these, we find that the mutual information is the same as the Bekenstein-Hawking entropy for a single-sided black hole:
\begin{equation}
I(A,B)=S_{BTZ}(A)+S_{BTZ}(B)-S_{BTZ}(A\cup B)=S_{BTZ}^{\text{thermal}}(A).
\end{equation}
Note that, had we naively taken the full partition function of the eternal BTZ black hole to be $Z_{1,0}(\tau)^2$ (i.e., treated the two single-sided black holes as independent and unentangled, so that their partition functions multiply), then $S_{BTZ}(A\cup B)$ would have been twice $S_{BTZ}^{\text{thermal}}(A)$ and the mutual information would have vanished. The nonvanishing mutual information thus indicates nontrivial entanglement between $L$ and $R$.
There is yet another surgery that yields $S_{BTZ}^{\text{thermal}}(A)$: (1) restrict to the right single-sided black hole $R$ as the full spacetime, a solid torus with modular parameter $\tau$ obtained by rotating its constant time slice by $2\pi$; (2) thicken the horizon $S^1$ into a narrow annulus inside the spatial slice of the solid torus $R$; (3) calculate the TEE between the thin solid torus generated by the thickened horizon, denoted by $B$, and the rest, denoted by $A$; (4) finally, take the limit in which the thickness of the solid torus $B$ goes to zero.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.7]
\draw[thick, fill=gray] (0,0) circle [radius=2];
\draw[thick,fill=white] (0,0) circle [radius=1.2];
\draw[thick, draw=blue] (0,0) circle [radius=1];
\end{tikzpicture}
\caption{In the absence of the black hole $L$, bipartitions of the constant time slice of the black hole $R$ lead to $Z_{1,0}(n\tau)$ after gluing.
The gray area corresponds to subregion $A$, and the width of the annulus $B$ will be taken to zero.}
\label{fig:alternative}
\end{figure}
The bipartition of the constant time slice in this case is sketched in Fig.~\ref{fig:alternative}. Here the obtained TEE is between the exterior and the interior of the horizon, rather than between two single-sided black holes. The glued manifold is again represented by $Z_{1,0}(n\tau)$, and the replica trick yields the Bekenstein-Hawking entropy.
We have thus come to the conclusion that the following quantities are equal:
\begin{itemize}
\item[(a)] TEE between the two single-sided black holes,
\item[(b)] TEE between the exterior and the interior of the horizon for a single-sided black hole,
\item[(c)] thermal entropy of one single-sided black hole,
\item[(d)] mutual information between the two single-sided black holes.
\end{itemize}
The equivalence of (a) and (c) supports the ER=EPR conjecture \cite{MaldacenaTFD,Raamsdonk,ER=EPR} in the Euclidean AdS$_3$ case.
The equivalence between (b) and (c) shows explicitly from the bulk perspective that one should view the thermal entropy of a black hole as entanglement entropy (see for example Ref. \cite{Solodukhin}).
In general for a rotating BTZ black hole, although there is an inner horizon at $r=r_-$, the $z$-axis still represents the outer horizon at $r=r_+$ in the spherical coordinates \eqref{eq:spherical} for the upper $\mathbb{H}^3$. Hence, the replica trick described earlier still applies to a rotating BTZ black hole with modular parameter $\tau=\Phi+i\beta$, where $\Phi$ is the angular potential, the conjugate variable to angular momentum. Geometrically, we just need to put $r=|r_-|$ ``inside'' the inner edge of the constant time slice, so that it is not observable.\footnote{A similar situation will be described in appendix \ref{app:partition}.}
\subsection{The Entangling-Thermal Relation}
In Ref. \cite{Azeyanagi}, the authors derived, for a single-sided BTZ black hole, the relation \eqref{eq:thermal} between the entanglement entropy of the CFT on the conformal boundary and the Bekenstein-Hawking entropy:
\begin{equation}
\label{eq:thermal}
\lim_{l\rightarrow 0}(S_{A}(L-l)-S_A(l))=S^{\text{thermal}},
\end{equation}
where $S_A(L-l)$ is the entanglement entropy of a subregion $A$ of interval length $(L-l)$ in the boundary 1+1d CFT, and $S^{\text{thermal}}$ is the thermal entropy in the bulk. In this section, we propose a similar but distinct relation, the Entangling-Thermal relation.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.7]
\draw[thick,fill=gray] (0,0) circle [radius=2];
\draw[thick,blue,fill=white] (0,0) circle [radius=1];
\filldraw[fill=white,thick] (-1.98,0.2)--(-1.7,0.2)--(-1.7,-0.2)--(-1.98,-0.2);
\draw[thick,fill=gray] (6,0) circle [radius=2];
\filldraw[fill=white,thick,draw=blue] (6,0) circle [radius=1];
\filldraw[thick,fill=white] (4.02,0.2)--(3.7,0.2)--(3.7,-0.2)--(4.02,-0.2);
\end{tikzpicture}
\caption{Bipartition of the constant time slice. Left and right panels are equivalent.}
\label{fig:ring-partition}
\end{figure}
We first consider the bipartition of the constant time slice for a single-sided black hole shown in Fig.~\ref{fig:ring-partition}. We put the separation between the two subregions away from the horizon, so that region $B$ is the white contractible region in the left panel. The right panel is equivalent to the left one and is more convenient for visualizing the gluing. We call the glued manifold the ``ring'', because after time evolution region $B=\overline{A}$ glues to itself and forms a ring around the solid torus, shown in the middle panel of Fig.~\ref{fig:ring}, where the small white part corresponds to the unglued part of the left panel. Hence, a single copy is the middle panel: away from the ring, the open wedge running around the longitude is the same as that in the left panel of Fig.~\ref{fig:BTZsurgery}.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.8]
\filldraw[fill=gray, thick] (0,0) circle [radius=1.5];
\draw[thick] (0,0) circle [radius=2];
\filldraw[fill=white,thick] (0,0) arc (0:180:0.75 and 0.375);
\filldraw[fill=white,thick] (0,0) arc (0:-180:0.75 and 0.375);
\node at (-0.7, 0.1) {{\tiny $t=\beta$}};
\node at (-0.7,-0.14) {{\tiny $t=0$}};
\filldraw[fill=blue,thick] (0,0) circle [radius=0.02];
\draw[dashed,thick] (-2,0)--(-1.5,0);
\draw[->,thick,white] (-0.7,0.4)--(-0.5,0.6);
\draw[->,thick,white] (-0.7,-0.4)--(-0.5,-0.6);
\end{tikzpicture}
\hskip 0.75 cm
\includegraphics[scale=0.3]{fig-ring}
\hskip 0.75 cm
\includegraphics[scale=0.25]{fig-ring-layer}
\caption{(Color online) \textbf{Left:} The side view of $\text{tr}\rho_A$ for the ``ring''; the dashed line is only used to separate $t=0$ and $t=\beta$ ends of the grey region. \textbf{Middle:} the front view of $\text{tr}\rho_A$ for the ``ring'' configuration. \textbf{Right:} the side view of $\text{tr}\rho_A^4$ inside the ``ring'' of the first $\text{tr}\rho_A$.}
\label{fig:ring}
\end{figure}
Naively it seems that one cannot glue $n$ copies of the above geometry, since the ring blocks a portion of the wedge's opening. However, there does exist an embedding of $n$ copies into $\mathbb{R}^3$, unique up to homotopy equivalence, as shown in the right panel of Fig.~\ref{fig:ring}: one first stretches the grey region in the left panel to the blue area in the right panel, and glues a second light grey copy so that its $t=0$ edge is glued to the $t=\beta$ edge of the blue copy; one then repeats this process for the green and yellow regions and so on, still preserving the replica symmetry. Notice that the rings from the gray, green and yellow copies (color online) are not in the plane of the paper, but on parallel planes above or below it. One then puts the rings from each copy side by side on the boundary torus, which requires each ring to be infinitesimally thin since $n$ is arbitrarily large. The resultant manifold is again a solid torus of modular parameter $n\tau$. The replica trick calculation then follows the previous equation \eqref{eq:replica} and gives
\begin{equation}
\label{eq:ring}
\lim_{\text{Area}(\bar{A})\rightarrow 0} S(A)=S_{BTZ}^{\text{thermal}}.
\end{equation}
For completeness, we note that Fig.~\ref{fig:ring} has another limiting case, in which the width of the ring covers almost the entire longitudinal direction of the solid torus and its depth occupies a considerable portion of the radial direction, as shown in Fig.~\ref{fig:other-limit}. Now, in order to put the rings side by side upon gluing $n$ copies, we need to stretch the non-contractible direction by a factor of $n$ to accommodate them, so that the resultant manifold is approximately a solid torus with modular parameter $\tau/n$. Plugging $Z_{1,0}(\tau/n)$ into \eqref{eq:replica} gives:
\begin{equation}
\begin{split}
\lim_{\text{Area}(A)\rightarrow 0}S(A) & =-\frac{d}{dn} \left( \frac{Z_{1,0}(\tau/n)Z_{1,0}(\tau)^{n}}{Z_{1,0}(\tau)^{2n}} \right)\Bigg|_{n=1}=\ln Z_{1,0}(\tau)+\tau\frac{d}{d\tau}\ln Z_{1,0}(\tau).
\end{split}
\end{equation}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.2]{fig-asym}
\caption{Another limit of the ring configuration.}
\label{fig:other-limit}
\end{figure}
Using $Z_{1,0}(\tau)\approx e^{4k\pi/\beta}(1+2e^{-4\pi/\beta})$ again, we obtain
\begin{equation}
\label{eq:A}
\lim_{\text{Area}(A)\rightarrow 0}S(A)=2\left(\frac{4\pi}{\beta}+1\right)e^{-4\pi/\beta},
\end{equation}
which vanishes at high temperature. Note that there is no $k$-dependence, meaning that we can observe the one-loop effect directly.
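Both features of \eqref{eq:A} can be checked numerically (an illustrative sketch, not part of the derivation): evaluating $\ln Z_{1,0}+\beta\,\partial_\beta\ln Z_{1,0}$ (the form $\tau\,d/d\tau$ takes with $\tau=i\beta$) on $Z_{1,0}\approx e^{4k\pi/\beta}(1+2e^{-4\pi/\beta})$ reproduces $2(4\pi/\beta+1)e^{-4\pi/\beta}$ at leading order, with the tree-level $k$-dependence cancelling exactly:

```python
from math import pi, exp, log

def lnZ(beta, k):
    # ln of the high-temperature approximation Z ~ e^{4 k pi/beta} (1 + 2 e^{-4 pi/beta})
    return 4 * k * pi / beta + log(1 + 2 * exp(-4 * pi / beta))

def S_A(beta, k, h=1e-6):
    # ln Z + beta * d(ln Z)/d(beta); the 4*k*pi/beta tree-level piece cancels
    dlnZ = (lnZ(beta + h, k) - lnZ(beta - h, k)) / (2 * h)
    return lnZ(beta, k) + beta * dlnZ

beta = 2.0
closed_form = 2 * (4 * pi / beta + 1) * exp(-4 * pi / beta)
assert abs(S_A(beta, 1) - closed_form) / closed_form < 0.01  # eq. (A) to leading order
assert abs(S_A(beta, 1) - S_A(beta, 5)) < 1e-6               # no k-dependence
```

The sub-percent mismatch with the closed form is the neglected $O(e^{-8\pi/\beta})$ piece.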
Now we consider the bipartition complementary to that of Fig.~\ref{fig:ring}, shown in Fig.~\ref{fig:vertical}, where the grey region is $\bar{A}$ of Fig.~\ref{fig:ring}. The gluing here is simple: since the unglued cut in the grey region is parallel to the longitude, the $n$ copies should be arranged around a virtual axis tangent to the annulus. The resultant manifold is a vertical $n$-handlebody.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.7]
\filldraw[thick,fill=white] (6,0) circle [radius=2];
\filldraw[fill=white,thick,draw=blue] (6,0) circle [radius=1];
\draw[thick,fill=gray] (4.02,0.2)--(3.7,0.2)--(3.7,-0.2)--(4.02,-0.2);
\end{tikzpicture}
\hskip 1cm
\includegraphics[scale=0.2]{fig-vertical-handlebody.pdf}
\caption{\textbf{Left:} The complementary bipartition which leads to $S_{\bar{A}}$. \textbf{Right:} The glued manifold is a vertical bouquet-like handlebody.}
\label{fig:vertical}
\end{figure}
One can calculate the corresponding TEE following a procedure parallel to the thermal AdS$_3$ calculation in section \ref{sec:tads}. The partition function of the glued manifold is
\begin{equation}
Z(n)= \prod_{\gamma~\text{prim.}}\prod_{m=1}^\infty |1-q_\gamma^m|^{24k}\times \prod_{\gamma~\text{prim.}} \prod_{m=2}^\infty \frac{1}{|1-q_\gamma^m|},
\end{equation}
where the first and second factors come from tree level and one-loop, respectively. The products are over primitive conjugacy classes of $\gamma\in\Gamma$. In the high-temperature regime, this expression can be simplified by the single-letter word approximation $\prod\limits_{\gamma~\text{prim.}} |1-q_\gamma| \approx |1-q_1'|^{2n}$, so that
\begin{equation}
Z(n,q_1')\approx \frac{|1-q_1'|^{48nk}}{|1-q_1'^2|^{2n}}.
\end{equation}
Here $q_1'$ can be obtained from $q_1$ in \eqref{eq:tads-q1} using a modular transformation,
\begin{equation}
\label{eq:tads-q2}
q_1'(n,a)=\frac{\sinh^2(\pi a/\beta)}{n^2\sinh^2(\pi a/n\beta)}e^{-2\pi/\beta}.
\end{equation}
The replica trick then gives
\begin{equation}
S(\bar{A})=-\frac{d}{dn}\left[\frac{Z(n,q_1'(n))}{Z(1,q_1'(1))^n}\right]\Bigg|_{n=1}.
\end{equation}
This is explicitly written as
\begin{equation}
\label{eq:entangling}
S(\bar{A})=96k\left(\frac{\pi a}{\beta}-2\right)e^{-2\pi/\beta}+8(12k-1)\left(\frac{\pi a}{\beta}-2\right)e^{-4\pi/\beta}+O(e^{-6\pi/\beta}).
\end{equation}
We now take the limit $a\rightarrow 0$, which corresponds to the limit in which the grey region in Fig.~\ref{fig:vertical} shrinks to zero, so that:
\begin{equation}
\label{eq:Abar}
\lim_{\text{Area}(\bar{A})\rightarrow 0}S(\bar{A})\equiv\lim_{a\rightarrow 0}S(\bar{A})=-192ke^{-2\pi/\beta}-16(12k-1)e^{-4\pi/\beta}+O(e^{-6\pi/\beta}),
\end{equation}
which vanishes at high temperature. The infinitesimally negative value is an artifact of the approximation on the $q_{\gamma}$'s.
Combining equations \eqref{eq:ring} and \eqref{eq:Abar}, one obtains the {\it Entangling-Thermal relation:}
\begin{equation}
\label{eq:entangling-thermal}
\lim_{\text{Area}(\bar{A})\rightarrow 0}[S(A)-S(\bar{A})]=S_{BTZ}^{\text{thermal}}.
\end{equation}
We give this relation a different name from the two-dimensional thermal entropy relation \eqref{eq:thermal} in the dual CFT calculation because it is not merely a generalization of the latter in one higher dimension. The thermal entropy relation \eqref{eq:thermal} relates the entanglement entropy of the dual CFT to the thermal entropy of the black hole in the bulk, while the Entangling-Thermal relation connects the topological entanglement entropy and the thermal entropy, both in the bulk gravitational theory. Additionally, the explanation for the thermal entropy relation relies on geometrical details (minimal surfaces) in the bulk \cite{Azeyanagi}, while the Entangling-Thermal relation is of topological origin. In the first bipartition, Fig.~\ref{fig:ring}, subregion $A$ sees the non-contractible loop and the nontrivial flux threading the hole inside the annulus. In the second bipartition, Fig.~\ref{fig:vertical}, subregion $A$ does not completely surround the non-contractible circle, i.e. the horizon. The difference between them is what characterizes the non-contractible loop.
Finally, we remark that there are several cases in which the gluing procedure is not available. The no-gluing criterion is that whenever the boundary of a subregion is contractible and not anchored on the boundary $S^1$, the spatial slice is not $n$-glueable. Likewise, a single copy in which the glued region $B$ completely surrounds region $A$ except for the inner edge is not $n$-glueable.
\section{Summation over Geometries}
\label{sec:whole}
The partition functions $Z_{0,1}(\tau)$ of thermal AdS$_3$ and $Z_{1,0}(\tau)$ of the BTZ black hole are not modular invariant by themselves. To obtain the full modular-invariant partition function, one needs to sum $Z_{c,d}$ over the pair of parameters $(c,d)$. This can alternatively be written as the summation over modular transformations of $Z_{0,1}$ as follows:
\begin{equation}
\label{eq:sum}
Z(\tau)=\sum_{\Gamma_{\infty}\backslash SL(2,\mathbb{Z})}Z_{c,d}(\tau)=\sum_{\Gamma_{\infty}\backslash SL(2,\mathbb{Z})}Z_{0,1}\left(\frac{a\tau+b}{c\tau+d}\right),
\end{equation}
where $\Gamma_{\infty}\backslash SL(2,\mathbb{Z})$ denotes the left coset of $SL(2,\mathbb{Z})$ by $\Gamma_{\infty}$ \cite{Manschot}, the translational subgroup generated by the $2\times2$ matrices $\left(\begin{matrix}
1 & r\\
0 & 1
\end{matrix}\right)$ with action $\tau\rightarrow\tau+r$. The solid torus filling and the Schottky parametrization are invariant under $\Gamma_{\infty}$, and the summation over the coset makes the full partition function invariant under both $T:\tau\rightarrow \tau+1$ and $S:\tau\rightarrow-1/\tau$.
Note that in the previous sections we have used $Z_{c,d}(\tau)=Z_{c,d}(\tau,\bar{\tau})$ as the shorthand for the product of holomorphic and anti-holomorphic pieces, whereas in this section we return to the notation that $Z_{c,d}(\tau)$ describes the holomorphic part of the partition function only. The anti-holomorphic part can easily be found as $\bar{Z}(\bar{\tau})$ and $Z(\tau,\bar{\tau})=Z(\tau)\bar{Z}(\bar{\tau})$.
The modular-invariant partition function of the form \eqref{eq:sum} is unique for the most negative cosmological constant $(k=1)$ \cite{Hohn,Witten0706} and was investigated in more general situations $(k>1)$ in \cite{MaloneyWitten}. An important theorem due to \cite{Hohn} is that the moduli space of genus-one Riemann surfaces is itself a Riemann surface of genus zero, parametrized by the $j$-function; consequently, any modular-invariant function can be written as a function of it. The $J$-function is defined as
\begin{equation}
\begin{split}
J(\tau)&\equiv\frac{1728g_2(\tau)^3}{g_2(\tau)^3-27g_3(\tau)^2}-744\\
&=q^{-1}+196884q+21493760q^2+864299970q^3+20245856256q^4+\dots
\end{split}
\end{equation}
where $q=e^{2\pi i\tau}$ as usual, and $g_2(\tau)\equiv60G_4(\tau)$ and $g_3(\tau)\equiv140G_6(\tau)$, where $G_{2k}$ are holomorphic Eisenstein series of weight $2k,\,k\geq2$, defined as $G_{2k}\equiv\sum_{(m,n)\neq (0,0)}(m+n\tau)^{-2k}.$
Since the pole of the full partition function $Z(q)$ at $q=0$ is of order $k$ (due to the holomorphic tree-level contribution $q^{-k}$ of thermal AdS$_3$), $Z$ must be a polynomial in $J$ of degree $k$,
\begin{equation}
Z(q)=\sum_{i=0}^k a_i J^i=\sum_n c(k,n)q^n.
\end{equation}
For $k=1$ we simply have $Z(q)=J(q)$.
The coefficient of $q^n$ in $J(q)$ is known to be intimately related to the dimensions of the irreducible representations of the monster group $\mathbb{M}$, the largest sporadic group. It has $2^{46}\cdot3^{20}\cdot5^9\cdot7^6\cdot11^2\cdot13^3\cdot17\cdot19\cdot23\cdot29\cdot31\cdot41\cdot47\cdot59\cdot71\approx8\times10^{53}$ group elements and $194$ conjugacy classes. The dimensions of the irreducible representations of the monster group can be found in the first column of its character table \cite{Atlas}: 1, 196883, 21296876, 842609326, 18538750076, 19360062527, $\dots$.
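The order quoted above follows directly from the prime factorization; a one-line check in plain Python:

```python
# order of the monster group from its prime factorization
order = (2**46 * 3**20 * 5**9 * 7**6 * 11**2 * 13**3
         * 17 * 19 * 23 * 29 * 31 * 41 * 47 * 59 * 71)
# approximately 8 x 10^53 group elements, a 54-digit number
assert abs(order / 8.08e53 - 1) < 0.01
assert len(str(order)) == 54
```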
After John McKay's observation $196884=1+196883$, Thompson further noticed \cite{Thompson}:
\begin{equation}
\label{eq:Thompson}
\begin{split}
& 21493760 = 1 + 196883 + 21296876,\\
& 864299970 = 2\times1 + 2\times196883 + 21296876 + 842609326,\\
& 20245856256 = 2\times1 + 3\times196883 + 2\times21296876 + 842609326 + 19360062527.
\end{split}
\end{equation}
This phenomenon was dubbed ``monstrous moonshine'' by Conway and Norton \cite{ConwayNorton} and later proved by Borcherds \cite{Borcherds}.
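The decompositions \eqref{eq:Thompson} are simple integer identities and can be verified directly from the irreducible-representation dimensions listed above:

```python
# dimensions of the first few irreducible representations of the monster group
d = [1, 196883, 21296876, 842609326, 18538750076, 19360062527]

# McKay's observation and Thompson's decompositions of the J-coefficients
assert 196884 == d[0] + d[1]
assert 21493760 == d[0] + d[1] + d[2]
assert 864299970 == 2 * d[0] + 2 * d[1] + d[2] + d[3]
assert 20245856256 == 2 * d[0] + 3 * d[1] + 2 * d[2] + d[3] + d[5]
```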
Ref. \cite{Witten0706} conjectures that for cosmological constant $k\equiv l/16G\in \mathbb{Z}$, quantum 3d Euclidean pure gravity including BTZ black holes can be completely described by a rational CFT (RCFT) called the extremal self-dual CFT (ECFT) with central charge $(c_L,c_R)=(24k,24k)$, which factorizes into holomorphic and anti-holomorphic pieces. An ECFT is a CFT whose lowest primary-field dimension is $k+1$; it has the sparsest possible spectrum consistent with modular invariance and exhibits a finite mass gap. The only known example is the $k=1$ ECFT with monster symmetry, constructed by Frenkel-Lepowsky-Meurman (FLM) \cite{FLM} with partition function $J(q)$, but its uniqueness has not been proved. The existence of ECFTs with $k>1$ is conjectured in \cite{Witten0706} and remains an active open question \cite{bootstrap,Gaiotto}.
In this section we will mainly focus on the $k=1$ case.
\subsection{TEE for the Full Partition Function}
The modular-invariant partition function is still defined on a solid torus. We again consider the bipartition that separates the two single-sided black holes, as in section \ref{subsec:btz-erepr}. It is justified in appendix \ref{app:partition} that one can still cut the $SL(2,\mathbb{Z})$ family of BTZ black holes along their outer horizons, which lie in the core of the solid torus. So one just needs to plug the partition function $J(q)$ into the replica trick formula.
At low temperatures, $q=e^{-2\pi\beta}$ is small, so the full partition function is dominated by the $q^{-1}$ term, with almost trivial thermal entropy and TEE (trivial in the sense that there are no tree-level contributions). At high temperatures, richer physics is allowed. Below we calculate the TEE of the full partition function in this regime.
Generally, the coefficient of $q^n$ in the partition function $Z(q)$ for any $k$ can be written as
\begin{equation}
\label{eq:c}
c(k,n)=\sum_{i=0}^{193}\textbf{m}_i(-k,n)d_i,
\end{equation}
where each $d_i$ is the dimension of the corresponding irreducible representation $M_i$ of $\mathbb{M}$, and $\textbf{m}_i(-k,n)$ is the multiplicity of $M_i$ in a decomposition similar to \eqref{eq:Thompson}; it is guaranteed to be a non-negative integer. At large $n$, $\textbf{m}_i(-k,n)$ has the following asymptotic form \cite{Duncan1},
\begin{equation}
\label{eq:asymptotic}
\textbf{m}_i(-k,n)\sim\frac{d_i|k|^{1/4}}{\sqrt{2}|\mathbb{M}||n|^{3/4}}e^{4\pi\sqrt{|kn|}}.
\end{equation}
Now we restrict to the $k=1$ case and let $n$ be a variable.
Taking care of the anti-holomorphic part, the replica trick \eqref{eq:replica} gives the following TEE, which is again equal to the thermal entropy: \begin{equation}
\label{eq:full}
S_{\text{full}}(A)=S_{\text{full}}^{\text{thermal}}=2\ln J(q)-2\beta J(q)^{-1}\frac{\partial J(q)}{\partial\beta}.
\end{equation}
Note that this is again the same as the expression for the thermal entropy in the canonical ensemble. (Using $\beta=l/r_+=1/\sqrt{M}= 1/\sqrt{n}$, $n$ is viewed as a function of $\beta$, so the second term in \eqref{eq:full} is nonzero.)
The computation of $S(A\cup B)$ for the entire $SL(2,\mathbb{Z})$ family of black holes is also similar to that for $M_{1,0}$ in section \ref{subsec:btz-erepr}. The result is again equal to the thermal entropy, based on the fact that the $SL(2,\mathbb{Z})$ family of black holes are all solid tori with horizons living in the core. This implies that the system is again in a mixed state due to the Euclideanization, as expected in \cite{Page,Hawking}. The mutual information $I(A,B)$ is also the thermal entropy, parallel to the discussion in section \ref{sec:btz}.
In the high-temperature expansion, we keep only the $q^n$ term $J_n(q)$ from the summation in $J(q)$ when calculating the TEE, because the desired term has a coefficient exponentially larger than the others\footnote{We will take into account all terms of $J(q)$ in appendix \ref{app:J}.}:
\begin{equation}
\label{eq:J}
J_n(q) = \sum_{i=0}^{193}\frac{d_i^2}{|\mathbb{M}|}\frac{e^{4\pi\sqrt{n}}}{\sqrt{2}n^{3/4}}q^n.
\end{equation}
Mathematically, the two copies of $d_i$ in $d_i^2$ are both the quantum dimension of the irreducible module $M_i$ of the monster group, as will be explained in detail in section \ref{sec:qdim}. Physically, however, they have different origins: one is the contribution from a single $M_i$ as in equation \eqref{eq:c}, while the other is the probability amplitude for $M_i$ to appear in the summation, as in equation \eqref{eq:asymptotic}. Namely, there is a correspondence between the partition function $J(q)$ and a pure state in the bulk, which is a superposition of different $M_i$'s:
\begin{equation}
\label{eq:Baez}
|\Psi\rangle = \sum_{i=0}^{193} \frac{d_i}{\sqrt{|\mathbb{M}|}} |i, i^*\rangle.
\end{equation}
In analogy to topological phases, the state is a {\it maximally-entangled state of $194$ types of anyons} labelled by the irreducible representations of the monster group $\mathbb{M}$. The $d_i$ appearing explicitly in \eqref{eq:Baez} corresponds to that in \eqref{eq:asymptotic}, whereas $|i, i^*\rangle$ denotes a quasiparticle-antiquasiparticle pair labeled by $M_i$ and contributes another $d_i$, corresponding to the one in \eqref{eq:c}. In Ref. \cite{Baez}, the authors proposed, from abstract category theory, that the ER=EPR realization in the context of TQFT should be exactly of the form \eqref{eq:Baez}. We will show later that this specific maximally-entangled superposition is the bulk TQFT version of the thermofield double state on the dual CFTs.
Applying to equation \eqref{eq:J} the identity $\sum_i d_i^2=|\mathbb{M}|$, valid for any finite group, we arrive at
\begin{equation}
J_n(q)=\frac{e^{4\pi\sqrt{n}}}{\sqrt{2}n^{3/4}}q^n=\frac{1}{\sqrt{2}}\beta^{3/2}e^{2\pi/\beta}.
\end{equation}
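The accuracy of this asymptotic coefficient can be gauged against the exact $J$-coefficients quoted earlier (a numerical sketch, illustrative only; already within a few percent for small $n$):

```python
from math import pi, sqrt, exp

# exact coefficients of q^n in J(q), from the q-expansion quoted in the text
exact = {1: 196884, 2: 21493760, 3: 864299970, 4: 20245856256}

def c_asym(n):
    # k = 1 asymptotic form: e^{4 pi sqrt(n)} / (sqrt(2) n^{3/4})
    return exp(4 * pi * sqrt(n)) / (sqrt(2) * n ** 0.75)

for n, c in exact.items():
    assert abs(c_asym(n) / c - 1) < 0.05  # within 5% even for n = 1..4
```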
Plugging into \eqref{eq:full} and taking into account the anti-holomorphic part, we again recover the Bekenstein-Hawking entropy:
\begin{equation}
\label{eq:J-S}
S_{\text{full}}(A)
=\frac{8\pi}{\beta}+3\ln\beta-\ln2-3.
\end{equation}
The first three terms agree with Witten's asymptotic formula for the Bekenstein-Hawking entropy \cite{Witten0706}, and our result provides an additional constant term $-3$. Remarkably, the ``anyons'' become invisible in the TEE after the summation over $i$. This is exactly due to the maximally-entangled superposition in equation \eqref{eq:Baez}. Had we taken another state in which only a single $M_j$ appears with probability amplitude $1$ and all others with amplitude $0$, the corresponding term would have been proportional to $\ln \left(d_j/\sqrt{|\mathbb{M}|}\right)$. The latter matches the entanglement entropy calculations in Refs. \cite{He1,Caputa,He2} for an excited state labeled by $j$ in a rational CFT.\footnote{This disappearance of ``anyons'' in the TEE for a maximally-entangled superposition is also expected in the context of topological phases; see equation (40) of Ref. \cite{Fradkin}, taking $|\psi_j|$ there to be $d_j/D$.}
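The step from \eqref{eq:full} to \eqref{eq:J-S} can be retraced numerically (an illustrative finite-difference sketch): with $\ln J_n=\tfrac32\ln\beta+2\pi/\beta-\tfrac12\ln2$, the entropy $2(1-\beta\partial_\beta)\ln J_n$ reproduces $8\pi/\beta+3\ln\beta-\ln2-3$:

```python
from math import pi, log

def lnJn(beta):
    # ln of J_n(q) = beta^{3/2} e^{2 pi/beta} / sqrt(2)
    return 1.5 * log(beta) + 2 * pi / beta - 0.5 * log(2)

def S_full(beta, h=1e-6):
    # S = 2 ln J_n - 2 beta d(ln J_n)/d(beta), holomorphic + anti-holomorphic
    d = (lnJn(beta + h) - lnJn(beta - h)) / (2 * h)
    return 2 * (lnJn(beta) - beta * d)

beta = 2.0
closed_form = 8 * pi / beta + 3 * log(beta) - log(2) - 3
assert abs(S_full(beta) - closed_form) < 1e-6
```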
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.6]
\filldraw[fill=lightgray, draw=black, thick] (0,0) circle [radius=3];
\filldraw[fill=gray, draw=blue, thick] (0,0) circle [radius=2];
\filldraw[fill=white, thick] (0,0) circle [radius=1];
\filldraw[fill=red] (1.5,0) circle [radius=0.01];
\filldraw[fill=red] (2.5,0) circle [radius=0.01];
\draw[thick,red] (1.5,0)--(2.5,0);
\node at (1.3,0.4) {\textcolor{red}{$i$}};
\node at (2.7,0.4) {\textcolor{red}{$i^*$}};
\end{tikzpicture}
\caption{(Color online) Constant time slice of the eternal BTZ black hole, as in Fig.~\ref{fig:ConstantBTZ}. The Wilson line corresponding to the quasiparticle-antiquasiparticle pair $i$, $i^*$ intersects the horizon both on the constant time slice and in the $3$d bulk.}
\label{fig:Wilson}
\end{figure}
In our case, the creation of the quasiparticle-antiquasiparticle pair $i$ and $i^*$ can be represented by a Wilson line, as shown in Fig.~\ref{fig:Wilson}. The Wilson line intersects the non-contractible loop of the solid torus, i.e. the horizon, which is why it can be detected by a cut along the horizon.
To complete the ``anyon'' picture, we rewrite the state \eqref{eq:Baez} as
\begin{equation}
\label{eq:moonshine-double}
|\Psi\rangle=\frac{1}{\sqrt{J(q)}}\sum_{i=0}^{193}e^{-\frac{\beta}{2}E_i}|i,i^*\rangle,
\end{equation}
where the energy level corresponding to the anyon pair $i, i^*$ is described by the quantum dimension of $M_i$:
\begin{equation}
\label{eq:energy}
E_i=-\frac{1}{\beta}\ln\left[\frac{d_i^2}{|\mathbb{M}|}J_n(q)\right].
\end{equation}
Denoting $|i,i^*\rangle\equiv|i\rangle|i^*\rangle$, one can trace over all the $|i^*\rangle$'s and obtain the reduced density matrix
\begin{equation}
\rho_A=\sum_i e^{-\beta E_i}|i\rangle\langle i|,
\end{equation}
which is just the thermal density matrix for anyons, and different types of anyons $i$ form an ensemble. Using the expression for energy levels \eqref{eq:energy}, the entanglement entropy between the anyon pair can be easily calculated as
\begin{equation}
S_\Psi(A)=S^{\text{thermal}}(A)=S_{\text{full}}(A),
\end{equation}
where we have added the anti-holomorphic contribution. Thus the state \eqref{eq:moonshine-double} has a property similar to the thermofield double state, in that the entanglement entropy between the quasiparticle-antiquasiparticle pair is equal to the thermal entropy of one quasiparticle. We call this state in the 3d bulk the \emph{moonshine double state}; in it the pair of anyons are separated by the horizon, just as the two single-sided black holes $L$ and $R$ are separated by it.
Unfortunately, this state has a shortcoming: being pure, the moonshine double state cannot reproduce the nonzero $S(A\cup B)$ of \eqref{eq:union}.
To account for this, one could modify the final total quantum state as
\begin{equation}
\rho=|\tilde{\Psi}\rangle\langle\tilde{\Psi}|\otimes\rho_{\text{th}},
\end{equation}
where the modified moonshine double state now reads $|\tilde{\Psi}\rangle=\frac{1}{\sqrt[4]{J(q)}}\sum_{i=0}^{193}e^{-\frac{\beta}{2}\tilde{E}_i}|i,i^*\rangle$ with $\tilde{E}_i=-\frac{1}{\beta}\ln\left[\frac{d_i^2}{|\mathbb{M}|}J_n(q)^{1/2}\right]$. These energy levels lead to the partition function $Z(q)=J(q)^{1/2}$, so when one bipartitions the system into the two single-sided black holes $A$ and $B$, a straightforward computation shows that $|\tilde{\Psi}\rangle$ contributes half of the Bekenstein-Hawking entropy. The newly introduced $\rho_{\text{th}}$ is purely thermal and exhibits no non-local correlations between $A$ and $B$, so its von Neumann entropy is extensive and scales with volume; under the same bipartition it gives the other half of the Bekenstein-Hawking entropy. Combining the two contributions, we recover $S_{\tilde{\Psi}}(A)=S^{\text{thermal}}(A)$, the Bekenstein-Hawking entropy. For $S(A\cup B)$, the modified moonshine double state contributes nothing as a pure state, while $\rho_{\text{th}}$ simply gives $S^{\text{thermal}}(A)$, matching the calculation in \eqref{eq:union}.
Another caveat is that $\ln J$ is approximately the Bekenstein-Hawking entropy, while the leading term in $E_i$ scales as $-\beta^{-2}\sim-n$. Hence, in order to have a genuine quantum theory, our theory has to have a UV cutoff at a certain scale $n$.
Apart from the asymptotic expression \eqref{eq:asymptotic} which gives rise to the tree-level Bekenstein-Hawking entropy, there is the remainder formula \cite{Remainder} for coefficients of $q^n$ in the whole partition function $J(q)$ which is possibly related to the one-loop contribution to TEE. For general $k\in\mathbb{Z}_+$, the remainder formula reads
\begin{equation}
\begin{split}
\label{eq:remainder}
c(k,n)&=\frac{ke^{4\pi\sqrt{kn}}}{\sqrt{2}(kn)^{3/4}}\left[1+\sum_{m=1}^{p-1}\frac{(-1)^m(1,m)}{(8\pi\sqrt{kn})^m}+\frac{r_p(kn)}{(kn)^{p/2}}+\frac{\sqrt{2}n^{3/4}}{e^{4\pi\sqrt{n}}}S(k,n)\right.\\
&\left.+\frac{1}{k^{1/4}}\sum_{1\leq r<k}\frac{r^{1/4}a_{-r}(k)}{e^{4\pi\sqrt{n}(\sqrt{k}-\sqrt{r})}}\left(1+\sum_{m=1}^{p-1}\frac{(-1)^m(1,m)}{(8\pi \sqrt{kn})^m}+\frac{r_p(kn)}{(kn)^{p/2}}+\frac{\sqrt{2}n^{3/4}}{e^{4\pi\sqrt{n}}}S(k,n)\right)\right],
\end{split}
\end{equation}
where $p(x)$ denotes the number of integer partitions of $x\in\mathbb{Z}_+$, and
\begin{equation}
\begin{split}
& (1,k)\equiv\prod_{j=0}^{k-1}\frac{4-(2j+1)^2}{4^kk!},\quad
a_r(k)\equiv p(r+k)-p(r+k-1),\\
& |r_p(n)|\leq\frac{|(1,p)|}{\sqrt{2}(4\pi)^p}+62\sqrt{2}e^{-2\pi\sqrt{n}}n^{p/2},\quad
0<\frac{\sqrt{2}n^{3/4}}{e^{4\pi\sqrt{n}}}S(k,n)\leq \frac{1}{4}\zeta^2\left(\frac{3}{2}\right)\frac{(rn)^{3/2}}{e^{4\pi\sqrt{rn}}}.
\end{split}
\end{equation}
To check this claim, one could restrict to the $k=1$ monstrous case and plug this expression into \eqref{eq:full}.
Alternatively one may fix $n$ and view $c(k,n)$ as the number of possible microstates at fixed energy, i.e. work in the microcanonical ensemble. One then performs a unilateral forward Laplace transform to return to the canonical ensemble and plugs the result into \eqref{eq:full}. Computations in both methods are in general complicated, and we do not pursue them here.
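Independently of these two routes, a quick numerical sanity check of the leading term of \eqref{eq:remainder} at $k=1$ is possible: comparing $e^{4\pi\sqrt{n}}/(\sqrt{2}\,n^{3/4})$ against the first few exact Fourier coefficients of $J(q)=j(q)-744$ (standard values, quoted rather than computed here) shows agreement at the few-percent level, improving with $n$.

```python
import math

# Exact Fourier coefficients c(1, n) of J(q) = j(q) - 744 (standard values)
exact = {1: 196884, 2: 21493760, 3: 864299970, 4: 20245856256}

def c_leading(n):
    # leading term of the remainder formula at k = 1
    return math.exp(4 * math.pi * math.sqrt(n)) / (math.sqrt(2) * n ** 0.75)

ratios = {n: c_leading(n) / cn for n, cn in exact.items()}
# The ratios exceed 1 by a few percent and decrease toward 1 as n grows,
# the corrections being suppressed by powers of 1/(8*pi*sqrt(k*n)).
```

The overshoot is expected, since the first subleading correction in \eqref{eq:remainder} enters with a negative sign.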
We provide another perspective on the loop contribution in appendix \ref{app:J} by plugging in the whole $J$ function instead of only one large-$n$ term. We observe that the loop correction is negative, consistent with both the thermal AdS$_3$~ case in section \ref{sec:tads} and the BTZ case in section \ref{sec:btz}.
\subsection{$d_i$ as Quantum Dimensions}
\label{sec:qdim}
In this section we provide more mathematical details and show that $d_i$ is the quantum dimension of the irreducible module $M_i$ of $\mathbb{M}$. An ECFT at $k=1$ is a special vertex operator algebra (VOA) $V^\natural$ whose automorphism group is the Monster group $\mathbb{M}$. This VOA, also known as the moonshine module \cite{FLM}, is an infinite-dimensional graded representation of $\mathbb{M}$ with an explicit grading:
\begin{equation}
V^\natural=\bigoplus_{n=-1}^{\infty} V_n^\natural,
\end{equation}
where every $V_n^\natural$ is an $\mathbb{M}$-module, called a homogeneous subspace. It can be further decomposed into
\begin{equation}
\label{eq:decomposition}
V_n^\natural\simeq \bigoplus_{i=0}^{193} M_i^{~\oplus\mathbf{m}_i(-1,n)},
\end{equation}
with $M_i$ labeling the irreducible $\mathbb{M}$-modules, and $\mathbf{m}_i(-1,n)$ is the multiplicity of $M_i$. This is the same multiplicity that appears in \eqref{eq:c}. (For ECFTs with general $k$, we have a tower of moonshine modules \cite{Duncan1} $V^{(-k)}=\bigoplus_{n=-k}^{\infty}V_n^{(-k)}$, where $V^{(-k)}_n$'s are all irreducible $\mathbb{M}$-modules. For each summand, one can similarly define $\textbf{m}_i(-k,n)$ as the multiplicity of the $\mathbb{M}$-modules $M_i$ in $V_n^{(-k)}$, so that $V_n^{(-k)}\simeq\bigoplus_{i=0}^{193}M_i^{\oplus\textbf{m}_i(-k,n)}.$)
Since we restrict to the holomorphic part of $Z(\tau,\overline{\tau})$ in this section, the entire dual CFT contains the ECFT above as a holomorphic piece. Furthermore, it is diagonal, i.e. its Hilbert space is a graded sum of tensor products of holomorphic and anti-holomorphic sectors:
\begin{equation}
\mathcal{H}\cong\bigoplus_{\alpha\in\mathbb{C}}\mathcal{M}_{\alpha}\otimes\overline{\mathcal{M}}_{\alpha},
\end{equation}
where $\mathcal{M}_{\alpha}$ and $\overline{\mathcal{M}}_{\alpha}$ are indecomposable representations of the right and left Virasoro algebras. Since the Virasoro action is built into the VOA axioms \cite{axiom}, these are also modules of the right and left monstrous VOAs, so $V^{\natural}$ admits induced representations from representations of the Virasoro algebra \cite{Ben-Zvi}. Obviously there are infinitely many Virasoro primaries, and $V^{\natural}$ is not an RCFT in this sense. However, $V^{\natural}$ is a typical example of a holomorphic/self-dual VOA, i.e. there is only one irreducible $V^{\natural}$-module, namely $V^{\natural}$ itself. Knowing that there is only one VOA-primary, one can reorganize the Virasoro fields in $\mathcal{M}_\alpha$ and $\overline{\mathcal{M}}_\alpha$ into irreducible representations of $V^\natural$, by introducing the graded dimension of a $V^\natural$-module $N$, defined as
\begin{equation}
\text{ch}_q N\equiv \text{tr}_Nq^{L_0}=\sum_{n=0}^{\infty}\dim N_nq^n,
\end{equation}
where $L_0$ is the usual Virasoro generator and the $N_n$'s are homogeneous subspaces of $N$ labelled by eigenvalues of $L_0$. (Note that we have omitted the overall prefactor $q^{-c/24}$ that often appears in the literature.) The above procedure is similar to regrouping the infinitely many Virasoro primaries in WZW models into finitely many Kac-Moody primaries.
To explain the $d_i$ appearing in \eqref{eq:J}, it is natural to consider quantum dimensions associated to $V^{\mathbb{M}}$, consisting of the fixed points of the action of $\mathbb{M}$ on $V^{\natural}$. By theorem 6.1 in \cite{Dong}, we have the following decomposition of $V^\natural$
\begin{equation}
V^{\natural}\simeq\bigoplus^{194}_{i=1}V^{\mathbb{M}}_i\otimes M_i
\end{equation}
as $V^{\mathbb{M}}\times\mathbb{M}$-modules, for the 194 $V^{\mathbb{M}}$-submodules $V^{\mathbb{M}}_i$ in $V^{\natural}$ with $V^{\mathbb{M}}=V^{\mathbb{M}}_1$, where $M_i$ denotes an irreducible module for $\mathbb{M}$ with character $d_i$. This $V^{\mathbb{M}}$ is a sub-VOA of $V^{\natural}$ of CFT type \cite{Gannon}, and is called the \textit{monster orbifold}, because it is obtained from orbifolding $V^{\natural}$ by its automorphism group $\mathbb{M}$ \cite{Norton}, in the same sense as orbifolding the Leech lattice VOA by $\mathbb{Z}/2\mathbb{Z}$ in the FLM construction.
The standard definition of the quantum dimension of a VOA-module $N$ with respect to a general VOA $V$ is \cite{Dong}
\begin{equation}
\label{eq:character}
\text{qdim}_VN=\lim_{q\rightarrow1^-}\frac{\text{ch}_qN}{\text{ch}_qV}.
\end{equation}
The quantum dimensions of submodules of an orbifold VOA $V^G$, obtained from orbifolding $V$ by a subgroup $G\subseteq\text{Aut}(V)$, only recently found applications in quantum Galois theory \cite{Dong}. In our case, the quantum dimensions of all the $V^{\mathbb{M}}_i$'s with respect to $V^{\mathbb{M}}$ were first calculated to be $\text{qdim}_{V^{\mathbb{M}}}V^{\mathbb{M}}_i=d_i$ in \cite{Duncan1}, using the asymptotic formula for the multiplicities of the $\mathbb{M}$-modules $M_i$ in the Fourier coefficients of the $j$-invariant, thereby bypassing the rationality of $V^{\mathbb{M}}$, which is still only conjectured to be true.
The remaining question is to define in parallel a quantum dimension for the $\mathbb{M}$-modules in the above pair $\left(V^{\mathbb{M}}_i,M_i\right)$. The definition \eqref{eq:character} does not directly apply to an $\mathbb{M}$-module, but one can extend it using the $n$-graded dimension of the $\mathbb{M}$-modules $M_i$. We define $\text{ch}_q M_i$ as \footnote{We are deeply grateful to Richard E. Borcherds for suggesting this alternative formula. It is similar to the generating function of multiplicity $\textbf{m}_i(-1,n)$ in Section 8.6 of \cite{Duncan1}, but without normalization by $1/|\mathbb{M}|$. }
\begin{equation}
\label{eq:chq}
\text{ch}_q M_i\equiv\sum_{\sigma}j_\sigma\cdot\overline{\chi_i(\sigma)}.
\end{equation}
Here $j_{\sigma}\equiv\sum_{n=-1}^{\infty} \chi_{V_n^\natural}(\sigma)q^n$ is the monstrous McKay-Thompson series for each $\sigma$, as well as the unique Hauptmodul for a genus-0 subgroup $\Gamma_{\sigma}$ of $SL(2,\mathbb{R})$ \cite{ConwayNorton,Borcherds}. Here $\sigma$ belongs to an index set of order 171, deduced from the 194 conjugacy classes of $\mathbb{M}$. The difference $194-171=23$ can be understood from the one-to-one correspondence between conjugacy classes and irreducible representations of $\mathbb{M}$: most of the $194$ irreducible representations have distinct dimensions, except for $23$ coincidences, and the $\sigma$'s are only sensitive to the dimensions of the corresponding irreducible representations. $\overline{\chi_i(\sigma)}$ is the complex conjugate of the character of the irreducible representation $M_i$ on the 171 ``conjugacy classes'' $\sigma$.\footnote{In the literature this is also denoted by $\overline{\text{tr}(\sigma|M_i)}$, $\overline{\text{tr}(M_i(\sigma))}$, or $\overline{\text{ch}_{M_i}(\sigma)}$.} At large $n$, the summation in $\text{ch}_qM_i$ is dominated by the Hauptmodul for the identity of $\mathbb{M}$, which is exactly Klein's invariant $j(q)$, so that
\begin{equation}
\text{ch}_qM_i\approx j(q)\times d_i\qquad\text{as } q\rightarrow1^-.
\end{equation}
In other words, one can view $\text{ch}_qM_i$ as a function $\text{ch}_qM_i(g)$ on the group $\mathbb{M}$, and in the definition \eqref{eq:chq} we take its value at the identity element.
With this, we can define the quantum dimension of $\mathbb{M}$-modules $M_i$ in \eqref{eq:decomposition} relative to $V^{\natural}$ as
\begin{equation}
\label{eq:qdim}
\text{qdim}_{V^\natural}M_i\equiv\text{lim}_{q\rightarrow 1^-}\frac{\text{ch}_q M_i}{\text{ch}_q V^\natural}=\lim_{n\rightarrow\infty} \frac{\dim (M_i)_n}{\dim V^{\natural}_n}.
\end{equation}
Here $\text{ch}_q V^\natural=J(q)$, obtained by applying \eqref{eq:character} to $V^{\natural}$ viewed as a module over itself. Combining the discussions above, the quantum dimension is just
\begin{equation}
\text{qdim}_{V^\natural}M_i=d_i.
\end{equation}
The $d_i$'s that appeared explicitly in \eqref{eq:J} of the TEE calculation are quantum dimensions of $M_i$, while those in \eqref{eq:asymptotic} are quantum dimensions of $V^{\mathbb{M}}_i$. They coincide numerically. As we mentioned before, the rationality of $V^{\mathbb{M}}$ is widely conjectured to be true\footnote{Unfortunately, the conjecture has been proved only when the subgroup of the automorphism group is solvable \cite{Carnahan, Miyamoto}, which is not our case.}, and by a theorem of Huang \cite{Huang}, the module category of any rational, $C_2$-cofinite VOA is modular, i.e. it is a modular tensor category with a non-degenerate $S$-matrix. If one believes in the rationality conjecture, then the $\text{qdim}_{V^{\mathbb{M}}}V^{\mathbb{M}}_i$'s have a well-defined interpretation in terms of modular $S$-matrices of the orbifold CFT $V^{\mathbb{M}}$:
\begin{equation}
\label{eq:smatrix}
d_i=S_{i0}/S_{00}.
\end{equation}
Note that these 194 ``anyons'' are the pure charge excitations in the corresponding topologically ordered system described by the modular tensor category associated with the orbifold VOA $V^\mathbb{M}$.
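To illustrate \eqref{eq:smatrix} concretely -- the $194\times194$ $S$-matrix of the monster orbifold being unavailable here -- the following sketch computes quantum dimensions from the modular $S$-matrix of a small stand-in, the Ising modular tensor category with objects $(\mathbf{1},\psi,\sigma)$.

```python
import math

# Quantum dimensions d_i = S_i0 / S_00 from a modular S-matrix.  The full
# 194 x 194 S-matrix of the monster orbifold is unavailable, so the Ising
# modular tensor category with objects (1, psi, sigma) serves as a stand-in.
s2 = math.sqrt(2.0)
S = [[0.5, 0.5, 0.5 * s2],
     [0.5, 0.5, -0.5 * s2],
     [0.5 * s2, -0.5 * s2, 0.0]]

qdims = [S[i][0] / S[0][0] for i in range(3)]   # -> [1.0, 1.0, 1.414...]
D2 = sum(di * di for di in qdims)               # global dimension squared = 4
```

The same first-column recipe, applied to the (conjectural) $S$-matrix of $V^{\mathbb{M}}$, would return the 194 values $d_i$.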
\section{Discussion and Outlook}
\label{sec:summary}
In the high-temperature regime, the full modular-invariant partition function \eqref{eq:sum} is dominated by the black hole solution $Z_{1,0}(\tau)$, while in the low-temperature regime, it is dominated by $Z_{0,1}(\tau)$, the thermal AdS$_3$~ solution \cite{Tail, MaloneyWitten}. It is widely believed that there exists a Hawking-Page \cite{HawkingPage,Daives} transition at the critical temperature $\beta\sim 1$, or $r_+\sim l$. However, there is no consensus on whether this transition really exists \cite{MaloneyWitten,EvenOdd,NoTransition}, or if it exists, whether it is a first-order or a continuous phase transition \cite{Caputa1, Eune, Cappiello, Kurita, Sokolowski, Stephens}, or something else that is more subtle. In this section we offer a clue from the TEE perspective.
We compare the $a=1$ case (defined in Fig. \ref{fig:a}) in \eqref{eq:ads3} for thermal AdS$_3$~ with the case of Fig. \ref{fig:alternative} for a single-sided black hole, since in both the subregion $A$ covers the whole space. One then observes that even at tree level, the TEEs of the BTZ black hole and of thermal AdS$_3$~ have different signs. A natural guess would thus be that, if the transition exists, it should be topological and happen where the TEE changes sign.
Our definition of topological entanglement entropy is the constant subleading term in the expression for entanglement entropy, which is in general different from the tripartite information as used in \cite{KitaevPreskill}. For topological phases in condensed matter physics, these differ by a factor of two and are both negative. For gravitational theories in the bulk, our topological entanglement entropies can be either positive (as in the BTZ black hole case) or negative (as in the thermal AdS$_3$~ case). To calculate the tripartite information, one can use the surgery method presented in this paper and find the time dependence, which at late times is the negative of the Bekenstein-Hawking entropy \cite{Future}. This matches the expectation for CFTs with a gravitational dual: the tripartite information should be negative \cite{Monogamy}, and for the thermofield double state it equals the negative of the Bekenstein-Hawking entropy \cite{Channel}.
Quantum dimensions also appear in the calculation of left-right entanglement in RCFT \cite{LeftRight}. One might perform similar computations in the orbifold VOA $V^{\mathbb{M}}$ that appeared in section \ref{sec:qdim}, using the Ishibashi boundary CFT states constructed in \cite{MonstrousBrane} for open bosonic strings ending on D-branes.
Given the anyonic interpretation in section \ref{sec:whole}, one natural question is to what extent 3d pure quantum gravity can be described as a theory of topological order. Naively one would expect the corresponding topological order to be the 3d Dijkgraaf-Witten theory of the monster group $\mathbb{M}$, which gives rise to the same modular tensor category as the one given by the orbifold CFT $V^{\mathbb{M}}$, as explained in section \ref{sec:qdim}. On the other hand, it is also natural to expect the corresponding topological order to be the one effectively described by the double $SL(2,\mathbb{C})$ Chern-Simons theory. It would be highly non-trivial to find a mechanism that reconciles these two theories.
Another remark is that we have specified the bipartitions to be done at $t=0$ in section \ref{sec:btz}, while in general the result can be time-dependent. In the latter case one can still use the surgery method proposed in this paper to find the TEE or R\'enyi entropies, which can serve as an indicator of scrambling \cite{scrambling}.
A final mathematically motivated direction is the following. Vaughan Jones considered how one von Neumann algebra can be embedded in another and developed subfactor theory \cite{Jones1}. In general, the Jones program is about how to embed one \emph{infinite} object into another, reminiscent of field extensions in abstract algebra, and quantum dimension is defined exactly in this spirit. It would be interesting to see how subfactor theory in general can help connect topological phases and pure quantum gravity \cite{Jones2}.
\acknowledgments
We are deeply grateful to Richard E. Borcherds for teaching us quantum dimensions of $\mathbb{M}$-modules over $V^{\natural}$. We appreciate Song He and Mudassir Moosa's suggestions on the manuscript, and thank Ori J. Ganor and Yong-Shi Wu for remarks on Hawking-Page transition. We thank Norihiro Iizuka and Seiji Terashima for explaining their work, Andreas W. W. Ludwig and Zhenghan Wang for extremely helpful comments on the moonshine module. We thank Diptarka Das, Shouvik Datta and Sridip Pal for explaining their work and pointing out Ref. \cite{MonstrousBrane} to us. Zhu-Xi thanks Herman Verlinde for comments on the sign of BTZ TEE, and Zheng-Cheng Gu, Muxin Han, Jian-dong Zhang for helpful discussions.
We also appreciate the workshop ``Mathematics of Topological Phases of Matter'' at SCGP, where part of the work was completed.
\maketitle
\thispagestyle{plain}
\begin{abstract}
\emph{Finite Sample Smeariness} (FSS) has recently been discovered. It means that the distribution of sample Fr\'echet means of rather unsuspicious underlying random variables can behave as if it were smeary over quite large regimes of finite sample sizes. In consequence, classical quantile-based statistical testing procedures do not preserve nominal size: they reject too often under the null hypothesis. Suitably designed bootstrap tests, however, amend for FSS. On the circle it has been known that arbitrarily sized FSS is possible, and that all distributions with a nonvanishing density feature FSS. These results are extended to spheres of arbitrary dimension. In particular, all rotationally symmetric distributions, not necessarily supported on the entire sphere, feature FSS of Type I. While on the circle there is also FSS of Type II, it is conjectured that this is not possible on higher-dimensional spheres.
\end{abstract}
\section{Introduction}
In non-Euclidean statistics, the Fr\'echet mean \citep{F48} takes the role of the expected value of a random vector in Euclidean statistics. Thus an enormous body of literature has been devoted to the study of Fr\'echet means and their exploitation for descriptive and inferential statistics \citep{HL98,BP05, H_Procrustes_10,LeBarden2014,BL17}. For the latter, it was only recently discovered that the asymptotics of Fr\'echet means may differ substantially from that of their Euclidean kin \citep{HH15,EltznerHuckemann2019}. Initially, such examples were rather exotic. Corresponding distributions have been called \emph{smeary}. More recently, however, it has been discovered that also for a large class of classical distributions (e.g. all with nonvanishing densities on the circle, such as all von Mises-Fisher distributions) Fr\'echet means behave, in a regime up to considerable sample sizes, as if they were smeary. We call this effect \emph{finite sample smeariness} (FSS); also the term \emph{lethargic means} has been suggested. Among others, this effect is highly relevant for asymptotic one- and two-sample tests for equality of means. In this contribution, after making the new terminology precise, we illustrate the effect of FSS on statistical tests concerning the change of wind directions in the larger picture of climate change.
Furthermore, while we have shown earlier that FSS of any size can be present on the circle and the torus, here we show that FSS of arbitrary size is also present on spheres of arbitrary dimension, at least for local Fr\'echet means. For such means, on high-dimensional spheres, distributions supported by barely more than a geodesic half ball may feature arbitrarily high FSS. Moreover, we show that a large class of distributions on spheres of arbitrary dimension, namely all rotationally symmetric ones, e.g. all Fisher distributions, feature FSS. This means not only that the finite sample rate may be wrong, but also that the rescaled asymptotic variance of Fr\'echet means may differ considerably from the sample variance in tangent space.
\section{Finite Sample Smeariness on Spheres}
Let $\mathbb{S}^m$ be the unit sphere in $\mathbb{R}^{m+1}$ for $m > 1$ and $\mathbb{S}^1 = [-\pi,\pi)/\sim$ with $-\pi$ and $\pi$ identified be the unit circle, with the distance
$$d(x,y) = \left\{\begin{array}{rcl}\arccos (x^Ty)&\mbox{ for }& x,y\in \mathbb{S}^m,\\
\min\{|y-x|, 2\pi- |y-x|\}&\mbox{ for }&x,y \in \mathbb{S}^1.
\end{array}\right.$$
For random variables $X_1,\ldots,X_n \operatorname{\stackrel{i.i.d.}{\sim}} X$ on $\mathbb{S}^m$, $m\geq 1$, with silently underlying probability space $(\Omega,\mathbb P)$ we have the \emph{Fr\'echet functions}
\begin{eqnarray}\label{eq:Frechet-fcns} F(p) = \mathbb{E}[d(X,p)^2]&\mbox{ and }& F_n(p) = \frac{1}{n}\sum_{j=1}^n d(X_j,p)^2\mbox{ for }p\in \mathbb{S}^m\,.\end{eqnarray}
We work under the following assumptions. In particular, the third Assumption below is justified by \cite[Lemma 1]{TranEltznerHuGSI2021}.
\begin{As}\label{As:1}
Assume that
\begin{enumerate}
\item $X$ is not a.s. a single point,
\item there is a unique minimizer $\mu = \argmin_{p\in \mathbb{S}^m} F(p)$, called the \emph{Fr\'echet population mean},
\item for $m>1$, $\mu$ is the north pole $(1,0,\ldots,0)$ and $\mu =0$ on $\mathbb{S}^1$,
\item $\widehat{\mu}_n \in \argmin_{p\in \mathbb{S}^m} F_n(p)$ is a selection from the set of minimizers uniform with respect to the Riemannian volume, called a \emph{Fr\'echet sample mean}.
\end{enumerate}
\end{As}
Note that $\mathbb P\{X = - \mu\} =0$ for $m>1$ and $\mathbb P\{X = - \pi\} =0$ on $\mathbb{S}^1$ due to \cite{LeBarden2014,HH15}.
\begin{Def}
We have the \emph{population variance}
$$V := F(\mu) = f(0) = \mathbb{E}[d(X,\mu)^2]\,,$$
which, on $\mathbb{S}^1$ is just the classical variance $\mathbb{V}[X]$, and
the \emph{Fr\'echet sample mean variance}
$$V_n := \mathbb{E}[d(\widehat{\mu}_n,\mu)^2] $$ giving rise to the \emph{modulation}
\begin{eqnarray*}
\mathfrak{m}_n &:=& \frac{nV_n}{V}\,.
\end{eqnarray*}
\end{Def}
We have the following finding from \cite{HundrieserEltznerHuckemann2020}
\begin{Th}\label{Th:ModulationProperties}
Consider $X_1,\ldots,X_n \operatorname{\stackrel{i.i.d.}{\sim}} X$ on $\mathbb{S}^1$ and suppose that $J \subseteq \mathbb{S}^1$ is the support of $X$. Assume Assumption \ref{As:1} and let $n>1$.
Then $\mathfrak{m}_n = 1$ under any of the two following conditions
\begin{itemize}
\item[(i)] $J$ is strictly contained in a closed half circle,
\item[(ii)] $J$ is a closed half circle and one of its end points is assumed by $X$ with zero probability.
\end{itemize}
Further, $\mathfrak{m}_n>1$ under any of the two following conditions
\begin{itemize}
\item[(iii)] the interior of $J$ contains a closed half circle,
\item[(iv)] $J$ contains two antipodal points, each of which is assumed by $X$ with positive probability.
\end{itemize}
Finally, suppose that $X$ has near $-\pi$ a continuous density $f$.
\begin{itemize}
\item[(v)] If $f(-\pi) =0$ then $\lim_{n\to \infty} \mathfrak{m}_n =1$,
\item[(vi)] if $0<f(-\pi)<\frac{1}{2\pi}$ then $\lim_{n\to \infty} \mathfrak{m}_n =\frac{1}{(1-f(-\pi) 2\pi)^2} >1$\,.
\end{itemize}
\end{Th}
In \cite{HH15} it has been shown that $f(-\pi) 2\pi$ can be arbitrarily close to $1$, i.e. that $\lim_{n\to \infty} \mathfrak{m}_n$ can be arbitrarily large. In fact, whenever
$f(-\pi) 2\pi=1$, then $\lim_{n\to \infty} \mathfrak{m}_n=\infty$.
These findings give rise to the following.
\begin{Def}
We say that $X$ is
\begin{itemize}
\item[(i)] \emph{Euclidean} if $\mathfrak{m}_n = 1$ for all $n \in \mathbb{N}$,
\item[(ii)] \emph{finite sample smeary} if $1 <\sup_{n\in\mathbb{N}} \mathfrak{m}_n < \infty$,
\begin{itemize}
\item[($ii_1$)] \emph{Type I finite sample smeary} if $\lim_{n\to \infty} \mathfrak{m}_n >1$,
\item[($ii_2$)] \emph{Type II finite sample smeary} if $\lim_{n\to \infty} \mathfrak{m}_n =1$,
\end{itemize}
\item[(iii)] \emph{smeary} if $\sup_{n\in\mathbb{N}} \mathfrak{m}_n = \infty$.
\end{itemize}
\end{Def}
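These notions can be probed by simulation. The sketch below (our own illustration, not code from the cited references) estimates $\mathfrak{m}_n$ on $\mathbb{S}^1$ by Monte Carlo, computing Fr\'echet sample means exactly via the fact that every local minimizer of $F_n$ on the circle is the Euclidean mean of the data lifted to a suitable interval of length $2\pi$: a distribution supported strictly inside a half circle comes out Euclidean, while a von Mises distribution with $\kappa=1/2$ exhibits FSS.

```python
import math
import random

def circ_dist(x, y):
    # intrinsic distance on the circle [-pi, pi)
    d = abs(x - y) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def frechet_mean(sample):
    # Every local minimizer of F_n on the circle is the Euclidean mean of
    # the data lifted to an interval of length 2*pi, so scanning the n
    # candidate means (one per cut between consecutive sorted points)
    # yields the exact Frechet sample mean.
    xs = sorted(sample)
    n, total = len(xs), sum(xs)
    best, best_val = None, float("inf")
    for j in range(n):
        cand = (total + 2 * math.pi * j) / n        # lift the j smallest points
        cand = (cand + math.pi) % (2 * math.pi) - math.pi
        val = sum(circ_dist(x, cand) ** 2 for x in xs)
        if val < best_val:
            best, best_val = cand, val
    return best

def modulation(sampler, n, reps=400, v_draws=100_000, mu=0.0):
    # Monte Carlo estimate of m_n = n * V_n / V
    V = sum(circ_dist(sampler(), mu) ** 2 for _ in range(v_draws)) / v_draws
    Vn = sum(circ_dist(frechet_mean([sampler() for _ in range(n)]), mu) ** 2
             for _ in range(reps)) / reps
    return n * Vn / V

random.seed(7)
half = lambda: random.uniform(-1.0, 1.0)    # support strictly inside a half circle
def vm():                                   # von Mises with mu = 0, kappa = 1/2
    return (random.vonmisesvariate(0.0, 0.5) + math.pi) % (2 * math.pi) - math.pi

m_half = modulation(half, n=100)   # Euclidean case: close to 1
m_vm = modulation(vm, n=100)       # nonvanishing density at -pi: well above 1
```

The numerical constants (seed, sample size, repetition counts) are arbitrary choices for illustration; bootstrap estimation of $\mathfrak{m}_n$ from data proceeds analogously.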
\begin{figure}[t!]
\centering
\includegraphics[width = \textwidth, trim = 0 0 0 0, clip]{Modulationplots}
\caption{\it Modulation $\mathfrak{m}_n$ for von Mises distribution (Mardia \& Jupp, 2000) with mean $\mu = 0$ and concentration $\kappa = 1/2$ (left), conditioned on $[-\pi+0.2,\pi-0.2]$ (center), and conditioned on $[-\pi,-\pi +0.1] \cup [-\pi+0.2,\pi +0.2]\cup [\pi-0.1,\pi)$ (right). The dashed lines represent the respective limits of $\mathfrak{m}_n$ obtained by Theorem \ref{Th:ModulationProperties} (v), (vi). }\label{fig:circular-modulation-curves}
\end{figure}
\section{Why is Finite Sample Smeariness called Finite Sample Smeariness?}
Under FSS on the circle, simulations show typical shapes of modulation curves, cf. Figure \ref{fig:circular-modulation-curves}. For statistical testing, usually building on smaller sample sizes, as detailed further in Section \ref{scn:testing}, the initial regime is decisive, cf. Figure \ref{fig:circular-modulation-scheme}:
There are constants $C_+, C_-, K > 0$, $0 < \alpha_- < \alpha_+ < 1 $ and integers $1 < n_- < n_+ < n_0$ satisfying $C_+ n_{-}^{\alpha_+}\leq C_- n_{+}^{\alpha_-}$, such that
\begin{itemize}
\item[(a)] $\forall n \in [n_-, n_+] \cap \mathbb{N} \, : \quad 1 < C_- n^{\alpha_-} \le \mathfrak{m}_n \le C_+ n^{\alpha_+}$.
\item[(b)] $\forall n \in [n_0, \infty) \cap \mathbb{N} \, :~~ \quad \mathfrak{m}_n \le K $.
\end{itemize}
\begin{figure}[b!]
\begin{center}
\input{FSS_UpperLower_1_b}
\end{center}
\caption{\it Schematically illustrating the modulation curve $n\mapsto\mathfrak{m}_n$ for FSS on the circle. \label{fig:circular-modulation-scheme} Along $[n_-, n_+]$ the curve is between the lower ($C_- n^{\alpha_-}$) and upper ($C_+ n^{\alpha_+}$) bounds (dashed), satisfying the condition $C_+ n_-^{\alpha_+}\leq C_- n_+^{\alpha_-}$, and for $n\geq n_0$ it is below the horizontal upper bound (dashed).
}
\end{figure}
Although under FSS $\mathfrak{m}_n$ eventually approaches a constant, i.e. the asymptotic rate of $\widehat{\mu}_n$ is the classical $n^{-1/2}$, for nonvanishing intervals of sample sizes $[n_-,n_+]$ the ``finite sample'' rate is (in expectation) between
$$ \Big(n^{-\frac{1}{2}} < \Big) \quad n^{-\frac{1-\alpha_-}{2}}\mbox{ and }n^{-\frac{1-\alpha_+}{2}}\,,$$
i.e. like a smeary rate, cf. \cite{HundrieserEltznerHuckemann2020}.
Of course, as illustrated in Figure \ref{fig:circular-modulation-curves}, the modulation curve can be subject to different regimes of $\alpha_-$ and $\alpha_+$; in applications, typically the first regime is of interest, cf. Section \ref{scn:testing}.
\section{Correcting for Finite Sample Smeariness in Statistical Testing}\label{scn:testing}
The central limit theorem by \cite{HL98} and \cite{BP05} for an $m$-dimensional manifold $M$, cf. also \cite{H_Procrustes_10,H_ziez_geod_10,BL17} for sample Fr\'echet means $\widehat{\mu}_n$, has been extended by \cite{EltznerHuckemann2019} to random variables no longer avoiding arbitrary neighborhoods of possible cut points of the Fr\'echet mean $\mu$. Under nonsmeariness it has the following form:
\[ \sqrt{n}\, \phi(\widehat{\mu}_n) \operatorname{\stackrel{\cD}{\to}} \mathcal{N}\left(0, 4\,H^{-1} \Sigma H^{-1}\right)\,. \]
\begin{figure}[b!]
\centering
\includegraphics[width = \textwidth, trim = 0 0 0 0, clip]{CombinedTests_KX5-NX100-EX20}
\caption{\it Empirical rejection probabilities of quantile based tests (red) and bootstrap based tests (blue), testing at the $95\%$ level whether two samples of size $n = 50$ (left) and $n = 100$ (right) have identical Fr\'echet means. The two samples are taken independently from a von Mises distribution with mean $\mu=0$ and $\mu = p$, respectively, and concentration $\kappa = 1/2$. The dashed line represents $5\%$. }
\label{fig:RejectionCurves}
\end{figure}
Here $\phi$ is a local chart mapping $\mu$ to the origin, $H$ is the expected value of the Hessian of the Fr\'echet function $F$ from (\ref{eq:Frechet-fcns}) in that chart at $\mu$, and $\Sigma$ is the covariance of $\phi(X)$. In practical applications, $H$ is usually ignored, as it has no straightforward plugin estimator, and $4 H^{-1} \Sigma H^{-1}$ is simply estimated by the empirical covariance $\widehat{\Sigma}_n$ of $\phi(X_1),\ldots,\phi(X_n)$
giving rise to the approximation
\begin{eqnarray}\label{eq:BP-test} n \phi(\widehat{\mu}_n)^T\widehat{\Sigma}_n^{-1}\phi(\widehat{\mu}_n)&\operatorname{\stackrel{\cD}{\to}}& \chi^2_m\,,\end{eqnarray}
e.g. \cite{BP05,BL17}. For finite sample sizes, this approximation depends crucially on having
$$\mathfrak{m}_n = \frac{\mathbb{E}[n\|\phi(\widehat{\mu}_n)\|^2]}{\mathbb{E}[\mbox{\rm trace}(\widehat{\Sigma}_n)]}=1\,,$$
and it is bad in regimes where $\mathfrak{m}_n\gg1$.
This is illustrated in Figure~\ref{fig:RejectionCurves} where two samples from von Mises distributions with concentration $\kappa = 1/2$ are tested for equality of Fr\'echet means.
Indeed the quantile based test does not keep the nominal level, whereas the bootstrap based test, see \cite{EH2017}, keeps the level fairly well and is shown to be consistent under FSS on $\mathbb{S}^1$, cf. \cite{HundrieserEltznerHuckemann2020}.
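For intuition, the following minimal one-sample sketch of the quantile based test \eqref{eq:BP-test} on the circle ($m=1$) considers a Euclidean situation, with support strictly inside a half circle, so that the Fr\'echet mean reduces to the arithmetic mean of the angles and $\mathfrak{m}_n=1$; in this regime the $\chi^2_1$ approximation keeps the nominal level, in contrast to the FSS regime discussed above.

```python
import math
import random
import statistics

def chi2_1_sf(t):
    # P(chi^2_1 > t) = erfc(sqrt(t/2)), since chi^2_1 is the square of a
    # standard normal variable
    return math.erfc(math.sqrt(t / 2.0))

def bp_statistic(sample, mu0=0.0):
    # n * phi(mu_hat)^T Sigma_hat^{-1} phi(mu_hat) for m = 1, with chart
    # phi(x) = x - mu0; valid here since the data stay inside a half circle,
    # where the Frechet mean is the arithmetic mean of the angles
    n = len(sample)
    mu_hat = statistics.fmean(sample)
    return n * (mu_hat - mu0) ** 2 / statistics.pvariance(sample)

random.seed(3)
n, reps = 100, 1000
rejections = sum(
    chi2_1_sf(bp_statistic([random.uniform(-1.0, 1.0) for _ in range(n)])) < 0.05
    for _ in range(reps)
)
rate = rejections / reps   # empirical size under H0, close to the nominal 5%
```

Repeating this experiment with a distribution featuring FSS, one would need the exact circular Fr\'echet mean and would observe inflated rejection rates, as in Figure \ref{fig:RejectionCurves}.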
Moreover, Table \ref{tab:wind} shows a comparison of $p$-values of the quantile test based on (\ref{eq:BP-test}) and the suitably designed bootstrap test for daily wind directions taken at Basel for the years 2018, 2019, and 2020, cf. Figure \ref{fig:wind}. While the quantile based test asserts that the year 2018 is highly significantly different from 2019 and 2020, the bootstrap based test shows that a significant difference can be asserted at most for the comparison between 2018 and 2019.
The reason for the difference in $p$-values between quantile and bootstrap based test is the presence of FSS in the data, i.e. $\mathfrak{m}_n \gg 1$.
Indeed, estimating for $n = 365$ the modulation $\mathfrak{m}_n$ of the yearly data using $B = 10.000$ bootstrap repetitions, as further detailed in \cite{HundrieserEltznerHuckemann2020}, yields $\mathfrak{m}_n^{2018} = 2.99$, $\mathfrak{m}_n^{2019} = 2.97$, and $\mathfrak{m}_n^{2020} = 4.08$.
\begin{table}[t!]
\centering
\begin{tabular}{r||c|c|c} $p$-value\;&\;2018 vs. 2019\; & \;2019 vs. 2020 \;&\; 2018 vs. 2020\\ \hline
quantile based test\;& $0.00071$ &$0.27$&$0.019$\\
bootstrap based test\;& $0.047$\textcolor{white}{00}&$0.59$ &$0.21$\textcolor{white}{0}
\end{tabular}
\vspace*{0.5cm}
\caption{\it Comparing $p$-values of the quantile based test for equality of means of yearly wind data from Basel (Figure \ref{fig:wind}), based on (\ref{eq:BP-test}) with the bootstrap test amending for FSS proposed in \cite{HundrieserEltznerHuckemann2020} for $B= 10.000$ bootstrap realizations. \label{tab:wind}}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[width = \textwidth, trim = 0 20 0 5, clip]{Basel_WindDirections_FrechetMeans}
\caption{\it Histograms of daily wind directions for Basel (provided by meteoblue AG) for 2018 (left), 2019 (center), and 2020 (right).}
\label{fig:wind}
\end{figure}
\section{Finite Sample Smeariness Universality}
Consider $p \in {{\mathbb S}^m}$ parametrized as $(\theta, \sin\theta q) \in {{\mathbb S}^m}$ where $\theta \in [0,\pi]$ denotes distance from the north pole $\mu \in {{\mathbb S}^m}$ and $q \in \mathbb{S}^{m-1}$, which is rescaled by $\sin\theta$.
\begin{Th}
Let $m \ge 4$, $Y$ uniformly distributed on $\mathbb{S}^{m-1}$ and $K>1$ arbitrary. Then there are $\theta^* \in (\pi/2,\pi)$ and $\alpha \in (0,1)$ such that for every $\theta \in (\theta^*,\pi)$ a random variable $X$ on ${{\mathbb S}^m}$ with $\mathbb P\{X=(\theta, \sin \theta\, Y)\}= \alpha$ and $\mathbb P\{X=\mu\}= 1 - \alpha$ features
$ \sup_{n\in \mathbb{N}} \limits \mathfrak{m}_n \ge \lim_{n \to \infty} \limits \mathfrak{m}_n > K.$
In particular,
$\theta^*=\frac{\pi}{2} + \mathcal{O}(m^{-1})\,.$
\end{Th}
\begin{proof} The first assertion follows from \cite[Theorem 4.3]{Eltzner2020a} and its proof in Appendix A.5 there. Notably $\theta^* = \theta_{m,4}$ there. The second assertion has been shown in \cite[Lemma A.5]{Eltzner2020a}.\end{proof}
\begin{Th}
Let $X$ be a random variable on $\mathbb{S}^m$, $m \ge 2$, with a unique nonsmeary mean $\mu$, which is invariant under rotation around $\mu$ and which is not a point mass at $\mu$. Then $\mu$ is Type I finite sample smeary.
\end{Th}
\begin{proof}
From \cite{Eltzner2020a}, page 17, we see that the Fr\'echet function $F_\theta$ for a uniform distribution on $\mathbb{S}^{m-1}$ at polar angle $\theta$, evaluated at a point with polar angle $\psi$ from the north pole, is
\begin{align*}
a(\psi, \theta, \phi) :=& \arccos\left( \cos\psi \cos\theta + \sin\psi \sin\theta \cos\phi \right)\\
F_\theta (\psi) =& \left( \int_0^{2\pi} \sin^{m-2}\phi \, d\phi \right)^{-1} \int_0^{2\pi} \sin^{m-2}\phi \, a^2(\psi, \theta, \phi) \, d\phi \, .
\end{align*}
Defining a probability measure $d\mathbb{P}(\theta)$ on $[0,\pi]$, the Fr\'echet function for the corresponding rotation invariant random variable is
\begin{align*}
F (\psi) = \int_0^\pi F_\theta (\psi) \, d\mathbb{P}(\theta) \, .
\end{align*}
On page 17 of \cite{Eltzner2020a} the function
\begin{align*}
f_2(\theta,\psi) := \frac{1}{2} \sin^{m-1} \theta \int_0^{2\pi} \sin^{m-2}\phi \, d\phi \, \frac{d^2}{d\psi^2} F_\theta (\psi)
\end{align*}
is defined and from Equation (5) on page 19 we can calculate
\begin{align*}
f_2(\theta,0) =& \sin^{m-2} \theta \left( \frac{1}{m-1} \sin\theta + \theta \cos\theta \right) \int_0^{2\pi} \sin^{m}\phi \, d\phi\\
=& \sin^{m-1} \theta \int_0^{2\pi} \sin^{m-2}\phi \, d\phi \left( \frac{1}{m} + \frac{m-1}{m} \theta \cot\theta \right)\, ,
\end{align*}
which yields the Hessian of the Fr\'echet function for $d\mathbb{P}(\theta)$ as
\begin{align*}
\textnormal{Hess}F(0) &= 2 \text{Id}_m \int_0^\pi \left( \frac{1}{m} + \frac{m-1}{m} \theta \cot \theta \right) d\mathbb{P}(\theta) \, .
\end{align*}
One sees that $\theta \cot \theta \le 0$ for $\theta \ge \pi/2$. For $\theta \in (0, \pi/2)$ we have
\begin{align*}
\tan \theta > \theta \quad \Leftrightarrow \quad \theta \cot \theta < 1 \quad \Leftrightarrow \quad \left( \frac{1}{m} + \frac{m-1}{m} \theta \cot \theta \right) < 1 \, .
\end{align*}
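A quick numerical sanity check of this bound (a verification sketch, separate from the proof): the integrand factor $\frac{1}{m} + \frac{m-1}{m} \theta \cot \theta$ stays strictly below $1$ on $(0,\pi)$, approaching $1$ only as $\theta \to 0$.

```python
import numpy as np

# The Hessian integrand factor 1/m + ((m-1)/m) * theta * cot(theta) stays
# strictly below 1 on (0, pi); it tends to 1 only as theta -> 0, so
# Hess F(0) < 2 Id_m whenever P(theta) is not concentrated at theta = 0.
theta = np.linspace(1e-3, np.pi - 1e-3, 100_000)
for m in (2, 3, 5, 10):
    g = 1.0 / m + ((m - 1.0) / m) * theta / np.tan(theta)
    assert g.max() < 1.0
print("bound verified for m in {2, 3, 5, 10}")
```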
Using $\Sigma[\mu]$ to denote the CLT limit $n \textnormal{Cov}[\widehat{\mu}_n] \to \Sigma[\mu]$, cf. \cite{BP05}, we get the result
\begin{align*}
\textnormal{Hess}F(\mu) < 2 \text{Id}_m \quad \Rightarrow \quad \Sigma[\mu] > \textnormal{Cov} \left[\log_\mu X \right] \quad \Rightarrow \quad \mbox{\rm trace} \left( \Sigma[\mu] \right) > \textnormal{Var}[X] \, .
\end{align*}
The claim follows at once.
\end{proof}
\begin{Conj}
Let $X$ be a random variable supported on a set $A \subset {{\mathbb S}^m}$ whose convex closure has nonzero volume and which has a unique mean $\mu$. Then $\mu$ is Type I finite sample smeary.
\end{Conj}
In thin structures, the effect of mechanical or chemical stimuli can be represented mathematically by a target metric~$\overline{a}$ -- a tensor that describes what the local distances between points should be in response to the applied stimuli\cite{Efrati2009}. The actual distances between points across a surface are represented by the realized metric $a$, which uniquely determines the surface's Gaussian curvature, the product of its two principal curvatures $\kappa_1$ and $\kappa_2$. If the Gaussian curvature imposed by the stimuli is not zero, the target metric cannot be realized in a flat disk. Since the ratio of the bending to stretching energies scales like $\mathcal{U}_b/\mathcal{U}_s\sim h^2$, the disk will bend to minimize the in--plane strains if the thickness $h$ is small\footnote{The elastic energy is $\mathcal{U}=h\mathcal{U}_s+h^3\mathcal{U}_b$, where $\mathcal{U}_s$ has the dimension of an energy over a length, while $\mathcal{U}_b$ has the dimension of an energy over the cube of a length.}. Therefore, for thin disks, a reasonable approximation is to determine the structure's shape by minimizing the stretching energy, allowing the disk to bend in whatever way is necessary to accommodate the target metric. We write the bending energy density as $Eh^3(4H^2-K)$, where $H=(\kappa_1+\kappa_2)/2$ is the mean curvature of the mid--surface of the sheet, an extrinsic geometric property that depends on the embedding space, $K=\kappa_1\kappa_2$ is the Gaussian curvature, an intrinsic geometric property of the surface, and $E$ is the Young modulus of the sheet. Then, considering an incompressible material ($\nu=0.5$), the stretching energy may be written as~\cite{Efrati2009}
\begin{equation}
\label{eq-stretch}
\mathcal{U}_s\simeq h\int E[\tr^2(a-\overline{a})+\tr(a-\overline{a})^2] \sqrt{\lvert\overline{a}\rvert} \ \text{d}A\,.
\end{equation}
\section{Geometric Composites}
\subsection{Membrane Approximation}
Before we consider the dynamic problem of a swelling--induced changing metric, we construct as a proof of concept a mechanical analog of the geometric composite and consider the static case of a mechanically prescribed hyperbolic or elliptic metric over a disk. In the simplest case, when a homothetic transformation radially stretches a circular disk, the radial and azimuthal distances all expand uniformly, and the disk stays flat since the surface's metric remains Euclidean (Fig.~\ref{fig:att}a). If we consider, instead, a circular disk of radius $R$ and an annulus of inner radius $R_i$ and outer radius $R_e$ (Fig.~\ref{fig:att}b), we can then impose a homothetic transformation independently on either the disk or the annulus. By defining $\alpha\equiv R_i/R$ as the mismatch between the disk and the annulus, it is immediately apparent that if $\alpha\neq1$ the two structures are incompatible and the annulus must be stretched ($\alpha<1$) or compressed ($\alpha>1$) to fit the circular disk (Fig.~\ref{fig:att}c). When the inner radius of the annulus is bonded to the edge of the circular disk, the resulting disk is a geometric composite that may be roughly modeled as a body having the following target metric in polar coordinates
\begin{equation}
\overline{a}=
f^2(r)
\begin{pmatrix}
1&0\\
0&r^2
\end{pmatrix}\,,
\quad f^2(r) = \left\{
\begin{array}{lr}
1, & r\le R \\
\alpha^2, & r>R
\end{array}
\right.\,.
\end{equation}
This metric is flat within each of the two domains, but the metric of the geometric composite is not flat, {\em i.e.} no single parabola fits $\overline{a}_{\theta\theta}(r)$, the azimuthal covariant metric coefficient, over the whole disk. As in~\citenum{Audoly2003}, we neglect all strain components except $a_{\theta\theta}-\overline{a}_{\theta\theta}$; therefore, if the disk and the annulus are made of the same material, the stretching energy from equation~\eqref{eq-stretch} reads (see {\it Appendix})
\begin{equation}\label{eq:func}
\mathcal{U}_s\simeq Eh\int_0^{R}\frac{(a_{\theta\theta}-r^2)^2}{r^{3}}\ \text{d}r+Eh\int_{R}^{R_e/\alpha}\frac{(a_{\theta\theta}-\alpha^2r^2)^2}{\alpha^{2}r^{3}}\ \text{d}r\,.
\end{equation}
Physical intuition suggests that when the annulus is stretched (compressed), the disk will bend into a dome--like (saddle--like) shape. This statement may be mathematically represented as $\sgn K=\sgn(1-\alpha)$. To describe the resulting shape, we use Gaussian normal coordinates ($\rho$, $\theta$) to express the realized metric~\cite{doCarmo1976}, where $\rho(r)~=~\int_0^r\sqrt{a_{rr}(r')}dr'$ measures the arc length along radial geodesics while $\theta$ is the azimuthal angle. In these coordinates, the first fundamental form is written as $ds^2=d\rho^2+a_{\theta\theta}(\rho)d\theta^2$, and by Gauss's theorem, the Gaussian curvature is $-\partial_{\rho\rho}\sqrt{a_{\theta\theta}}/\sqrt{a_{\theta\theta}}$, where $\partial_{\rho\rho}$ is the second order partial derivative with respect to $\rho$. We minimize the stretching energy by looking for metrics with constant Gaussian curvature, that is $a_{\theta\theta}(\rho)=(\sin(\sqrt{K}\rho)/\sqrt{K})^2$.\footnote{Notice that when the Gaussian curvature is negative, {\em i.e.} $K<0$, the metric may be rewritten as $a_{\theta\theta}(\rho)~=~(\sinh(\sqrt{-K}\rho)/\sqrt{-K})^2$.} As long as $\lvert K\rvert<1/R_e^2$~\footnote{This upper bound means that each principal direction cannot have a curvature that exceeds $1/R_e$.}, we can Taylor expand $a_{\theta\theta}(\rho)$ to linearize the metric in $K$ as
\begin{equation}\label{eq:att}
a_{\theta\theta}(\rho)=\rho^2-\frac{K}{3}\rho^4+O(\rho^6)\,.
\end{equation}
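Both the constant--curvature property of this ansatz and the expansion in~\eqref{eq:att} can be verified symbolically; the following is a minimal verification sketch (assuming sympy is available):

```python
import sympy as sp

rho, K = sp.symbols('rho K', positive=True)
sqrt_a = sp.sin(sp.sqrt(K) * rho) / sp.sqrt(K)   # sqrt(a_thetatheta)

# Gauss's theorem in geodesic normal coordinates:
# Gaussian curvature = -(d^2/drho^2 sqrt(a_tt)) / sqrt(a_tt) = K, a constant.
gauss = -sp.diff(sqrt_a, rho, 2) / sqrt_a
assert sp.simplify(gauss - K) == 0

# Taylor expansion: a_tt = rho^2 - (K/3) rho^4 + higher-order (even) terms.
series = sp.series(sqrt_a ** 2, rho, 0, 6).removeO()
assert sp.simplify(series - (rho ** 2 - K * rho ** 4 / 3)) == 0
print("constant-curvature checks pass")
```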
Note that the first order term corresponds to a flat metric whereas the second one dictates the kind of non--Euclidean geometry that the disk will develop depending on the sign of $K$. The energy is quadratic in $K$ and therefore can be minimized analytically; notice that, if the annulus is neither deformed ($\alpha=1$) nor present ($R_i=R_e$), the energy is a simple parabola in $K$ with the minimum at $K=0$ since the disk is not constrained, and does not need to bend. Similarly, when the disk is not considered ($R=0$), the annulus does not need to bend either, and stays flat ($K=0$) with a radial stretch equal to $\alpha$. Once the stretching energy is minimized, we observe that the bending energy density is equal to $3H^2$ for dome--like shapes~($K=H^2$) and to~$4H^2-K$ for saddle--like shapes. In the latter case, since~$K<0$, the disk tries to morph into a minimal surface~($H=0$). Important aspects, albeit beyond the scope of this work, are the study of the discontinuities in the metric and the effect of a finite thickness.
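The quadratic minimization of~\eqref{eq:func} over $K$ described above can also be reproduced numerically. Below is a minimal sketch with illustrative values of $R$, $R_e$ and $\alpha$ (our choices, not the experimental ones), identifying $\rho$ with $r$ at leading order:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize_scalar

R, R_e, alpha = 0.7, 1.0, 0.98   # illustrative geometry; alpha < 1: stretched annulus

def a_tt(r, K):
    # Taylor-expanded constant-curvature metric, identifying rho with r
    return r**2 - (K / 3.0) * r**4

def stretching_energy(K):
    r1 = np.linspace(1e-3, R, 2000)           # inner disk
    r2 = np.linspace(R, R_e / alpha, 2000)    # stretched annulus
    u1 = trapezoid((a_tt(r1, K) - r1**2)**2 / r1**3, r1)
    u2 = trapezoid((a_tt(r2, K) - alpha**2 * r2**2)**2 / (alpha**2 * r2**3), r2)
    return u1 + u2                            # E*h prefactor dropped (argmin unchanged)

K_opt = minimize_scalar(stretching_energy, bounds=(-1.0, 1.0),
                        method='bounded').x
print(K_opt)   # positive, i.e. dome-like: sgn K = sgn(1 - alpha)
```

The bounds $\lvert K\rvert < 1$ correspond to $\lvert K\rvert < 1/R_e^2$ with $R_e = 1$ here.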
When the target metric is elliptic, the resulting shape is unique \cite{Santangelo2009}. On the other hand, when the target metric is hyperbolic, the embedding is not unique, and shapes that are more complex than a saddle may develop when the thickness is very small\cite{Klein2011}. In our case, both experimental and numerical evidence indicate that the thickness to radius ratio ($\simeq 0.16$) is sufficiently high to avoid the development of complex shapes other than the saddle, yet small enough for the structure to be considered thin.
\section{Mechanical Analog}
\subsection{Experiments and Numerics}
To test our model, we prepared geometrically frustrated structures to realize dome--like and saddle--like disks, and measured their Gaussian curvatures. Circular molds were laser cut out of acrylic sheets, and used to cast samples with polyvinylsiloxane (PVS -- Zhermack Elite Double $32$). In these model experiments, we use an elastomer with a Young modulus $E=0.96$~MPa and a Poisson ratio $\nu=0.5$. The molds had a thickness $h=1.6$~mm with the radii of the disks varying between $5$ and $12$~mm. We designed the geometry so that the outer radius of the stretched disks was equal to $10$~mm. Bonding between the stretched annulus and the disk was accomplished by a small amount of uncrosslinked PVS. To measure their Gaussian curvature, we projected a laser sheet normal to the disk, and captured images of the reflected light with an Edmund--Optics GigE camera with a Nikkor lens ($35$~mm f: $1$-$1.4$) at $24$ equally spaced points along the disk's diameter. Image analysis was performed using Matlab to reconstruct the deformed shape. The annuli of polyvinylsiloxane elastomer were stretched homothetically and bonded to the inner disk. Upon release from the molds, the disks spontaneously morphed into domes or saddles. The annuli may be thought of as springs that want to release their energy by recovering their original shapes: for example, figure~\ref{fig:att}~(c) shows how the annulus must be compressed to get a saddle. We measured the shape of the deformed disks to determine the two principal radii of curvature, and plotted the Gaussian curvature in figure~\ref{fig:att}. These measurements confirmed the assumption used in our analysis that~$K$ is approximately constant across the disk. We also carried out numerical simulations to solve the problem within the context of finite incompressible tridimensional elasticity with large distortions using a Neo--Hookean material model~\cite{Lucantonio2014} implemented in the commercial software COMSOL Multiphysics.
The stimulus in the numerical model is represented by a unimodular distortion field $\vett{F}_o=f(r)(\vett{I}-\vett{e_3}\otimes\vett{e_3})+f(r)^{-2}\vett{e_3}\otimes\vett{e_3}$, where $\vett{e_3}$ is the unit vector field orthogonal to the undeformed mid--surface of the disk. In this way, the tridimensional model is consistent with the $2$D model since $\vett{F}_o^T\vett{F}_o\cdot\vett{e_{\gamma}}\otimes~\vett{e_{\mu}}~=~\bar{a}_{\gamma\mu}$, where $\vett{e}_{\gamma}$ is the $\gamma$--th vector of the covariant basis spanning the undeformed mid--surface of the disk. The numerical results are also plotted in figure~\ref{fig:att}.
\subsection{Analysis}
The main plot in figure~\ref{fig:att} refers to the case when $R/R_e\alpha=0.7$. The blue curves show the azimuthal component of the target metric of the mid--surface $a_{\theta\theta}$ as a function of $r$. As long as $r<R$, $a_{\theta\theta}$ is equal to $\rho^2$ since the inner disk has not been stretched; on the contrary, within the annulus ($r>R$), the azimuthal component is equal to $\alpha^2\rho^2$ with $\alpha<1$ ($\alpha>1$) if the annulus has been stretched (compressed). Notice that the parabola in the inner disk is represented also for $r>R$ (dashed blue curve) to show how the metric would look if the disk were flat and not stretched. By the Taylor approximation of the metric, it is evident that if $K>0$ ($K<0$), $a_{\theta\theta}$ stays below (above) the parabola. Solid black lines are the analytical solutions of~\eqref{eq:func} for elliptic and hyperbolic target metrics. By looking at equation~\eqref{eq:func}, we notice that for the energy to be minimized, the analytical solution should be as close as possible (in an $L^2$ sense~\footnote{$L^2$ is the Lebesgue space of square--integrable functions.}) to the target metric, with weight functions that are $r^{-3}$ and $\alpha^{-2}r^{-3}$ in the disk and in the annulus, respectively. This explains why the realized metric stays in between the target metrics, and closer to the target metric in the disk. Circles and triangles in the main plot show the experimental results (we measure $K$ and compute $a_{\theta\theta}$ from equation~\eqref{eq:att}). Dashed red curves represent the numerical solutions and show that the assumption of homogeneous Gaussian curvature accurately describes the metric in this case, apart from slight deviations near the edges. The experimental and numerical results are in excellent agreement with our closed form analytical solution. The numerical results also show that $H\simeq\sqrt{K}$ for domes and $H\simeq 0$ for saddles, as predicted analytically.
It is interesting to note that the capability of estimating an extrinsic geometric quantity, \emph{i.e.} the mean curvature $H$, from a model built around intrinsic geometry allows us to determine the displacement field, {\em i.e.} the embedding of the structure, up to rigid motions. This indeterminacy does not affect the predictability of axisymmetric shapes like domes, but it does affect that of saddles, which have an axisymmetric metric but a non--axisymmetric embedding. While we can predict the magnitude of the principal curvatures, the principal directions of curvature are dictated by imperfections in both experiments and numerics when the target metric is hyperbolic.
\begin{figure}[!h]
\includegraphics[scale=1]{Figure2.pdf}
\caption{(a) A schematic of the metric of a circular disk in polar coordinates changing by $\alpha$ as the disk is stretched homothetically. (b) A schematic of the relevant radii. (c) Schematics of the metrics of an annular ring and a circular disk stretched independently, along with images of the resulting shell denoting the principal directions of curvature and Gaussian normal coordinates. (d) A plot of the realized metric (analytical as solid black and grey curves, numerical as dashed red curves, experiments as black symbols) normalized by $R_e^2\alpha^2$ vs. the normalized radial coordinate for prescribed target metrics (blue curves) when $R/R_e\alpha=0.7$, which result in positively and negatively curved shells. The inset shows the normalized Gaussian curvature vs. the radii ratios (analytical as solid black and grey lines, numerical as red symbols, experiments as black symbols). \label{fig:att}}
\end{figure}
\subsection{Variation of Geometric Composition}
We then tested the models (analytical and numerical) for other values of $R/R_e\alpha$. When this ratio is between $0$ and $0.5$, the Gaussian curvature cannot be approximated as homogeneous throughout the disk but attains two constant values in the inner disk and the annulus. However, the metric does not diverge much from the constant curvature solution, and the analytical model gives a result that is in good agreement with the mean Gaussian curvature of the disk, which is exactly what we measured experimentally. The inset in figure~\ref{fig:att} shows the agreement between experiments, numerics and analytics as~$R/R_e\alpha$ varies. Notice that the analytical model always overestimates the Gaussian curvature since it is based on a membrane approximation. The numerical model shows that, when the target metric is hyperbolic, wrinkles arise below a critical thickness, as also shown in~\citenum{Klein2011}. We therefore expect our hypothesis of constant Gaussian curvature to hold in a finite range of thicknesses.
\section{Residual Swelling}
\subsection{Experiments}
Residual swelling is a considerably more complicated phenomenon than the geometrical confinement that dictated the shape change in the simplified mechanical problem. Swelling--induced deformations cannot simply be seen as prescribed distortions, as they are related to both the elastic properties of the gel and the chemical conditions of the residual free polymer chains. Moreover, in this case swelling is driven by the concentration gradient of these chains across the entire structure. We used the circular molds of the mechanical analog to cast samples with polyvinylsiloxane as shown in figure~\ref{figcartoon} (PVS -- Zhermack Elite Double 32 for the annulus and Zhermack Elite Double 8 for the inner disk). Both elastomers are incompressible ($\nu=0.5$) and their Young's moduli were measured as $0.96$~MPa and $0.23$~MPa for PVS $32$ (annulus) and PVS $8$ (inner disk), respectively. The inner disks (radius $R$) and the annuli (radii $R$ and $R_e$) were geometrically compatible so that they could be bonded without pre--stretch. Once released from the molds, the geometric composites were flat, and the plates morphed into curved disks over time due to residual swelling -- the flow of free polymer chains from high density regions (softer gel, disk) to low density regions (stiffer gel, annulus). To study the influence of $R/R_e$ on the morphing process, we fixed the radius of the whole disk to $12$~mm and varied the radius of the inner disk from $5$~mm to $11$~mm, casting $7$ disks with different~$R/R_e$. We measured the time evolution of the Gaussian curvature of each disk with the same procedure used for the mechanical analog, repeated every three hours.
\subsection{Residual Swelling of Geometric Composites}
While this problem couples nonlinear geometric mechanics with elastomer swelling, we can provide insight into this process by incorporating swelling into our mechanical analogy. The stretching ratio $\alpha$ now dictates the metric that each part of the disk would realize upon swelling if it were free (not bonded to the other). The inner disk and the annulus would like to shrink and swell, respectively, as molecules are flowing from the former to the latter. We assume that if the annulus would like to swell by a factor $\alpha$, the inner disk would like to shrink by a factor $\alpha^{-1}$. Incorporating the difference between the two Young's moduli, the functional in equation~\eqref{eq:func} is modified as
\begin{equation}\label{eq:funcswelling}
\mathcal{U}_s\simeq\int_0^{R}\frac{(a_{\theta\theta}-\alpha^{-2}r^2)^2}{\alpha^{-2}r^{3}}\ \text{d}r+\frac{E_a}{E_d}\int_{R}^{R_e}\frac{(a_{\theta\theta}-\alpha^2r^2)^2}{\alpha^{2}r^{3}}\ \text{d}r\,.
\end{equation}
Notice that, since no pre--stretch is applied, the radius of the disk is $R_e$, \emph{i.e.} it coincides with the outer radius of the green annulus. The Young's moduli of the green annulus and the pink disk are denoted as $E_a$ and $E_d$, respectively.\footnote{Their ratio is roughly equal to $4$ and its variation with swelling is neglected.} To analytically determine how $\alpha$ should vary with $R/R_e$, we denote as $c_d$ and $c_a$ the concentrations of the diffusive species in the disk and in the annulus, respectively. Since molecules flow from the disk to the annulus, we fix $c_a<c_d$ and impose the conservation of mass as $c_\textup{eq}\pi R_e^2=c_d\pi R^2+c_a\pi(R_e^2-R^2)$, where~$c_\textup{eq}$ denotes the concentration at equilibrium. Then, we reason that the stretching ratio $\alpha$ will be proportional to the cube root of the mass uptake inside the annulus, so that $\alpha^3-1\sim(c_\textup{eq}-c_a)\pi(R_e^2-R^2)$. Finally, by expressing $c_\textup{eq}$ from the mass conservation, we get
\begin{equation}\label{eq:alpha}
\alpha=\Biggl[1+\eta\left(c_d-c_a\right)\left(\frac{R}{R_e}\right)^2\left(1-\left(\frac{R}{R_e}\right)^2\right)\Biggr]^{1/3}\,,
\end{equation}
where $\eta$ is a proportionality coefficient having the dimension of the inverse of a concentration and representing the link between mass uptake and stretch.\footnote{Similar to the hydrophilicity coefficient introduced in~\citenum{Nardinocchi2013b} to describe stretching induced by cations' motion in ionic polymer--metal composites.} The presence of a concentration gradient of polymer chains with a polydisperse molecular weight makes identifying this parameter difficult, and beyond the scope of this work. Qualitatively, the bigger the free chains, the higher $\eta$ should be. Notice that $\alpha$ is equal to $1$ when the mass uptake is zero, that is when the structure is homogeneous ($R/R_e=0\ \text{or}\ 1$). This is the key difference from the mechanical analog: since~$\eta$ is not known for swelling, as it depends on the material and chemical properties of the elastomers, we used it as a fitting parameter. Figure~\ref{fig:KR} shows the stationary values of the Gaussian curvature of the seven disks after residual swelling as obtained in the experiments (triangles), numerics (circles) and analytics (solid curve). Numerical and analytical results are obtained by using~\eqref{eq:alpha} in~\eqref{eq:funcswelling} and setting $\eta(c_d-c_a)$ equal to~$0.54$, which sets $\alpha_\textup{max}=1.043$ via~\eqref{eq:alpha}. The three linear regimes identified in figure~\ref{fig:KR} show that the maximum of the Gaussian curvature is not attained at $R/R_e=1/2$ but at $R/R_e\simeq0.77$. A similar result was obtained for the mechanical analog, as can be seen in the inset of figure~\ref{fig:att}. These two observations let us conclude that both the dimensionality of the swelling and simple geometric effects shift the maximum of the Gaussian curvature to radii ratios higher than $R/R_e=1/2$.
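The mass--conservation step behind~\eqref{eq:alpha} can be verified symbolically, together with the location of the maximum of the uptake factor $\bar{R}^n(1-\bar{R}^n)$ for $n$--dimensional swelling; the following is a verification sketch:

```python
import sympy as sp

R, Re, cd, ca, ceq = sp.symbols('R R_e c_d c_a c_eq', positive=True)

# Conservation of mass: c_eq*pi*Re^2 = c_d*pi*R^2 + c_a*pi*(Re^2 - R^2)
balance = sp.Eq(ceq * sp.pi * Re**2,
                cd * sp.pi * R**2 + ca * sp.pi * (Re**2 - R**2))
ceq_sol = sp.solve(balance, ceq)[0]

# The annulus uptake (c_eq - c_a)*(Re^2 - R^2) reduces to the
# (R/Re)^2 * (1 - (R/Re)^2) factor of eq. (6), times (c_d - c_a)*Re^2.
uptake = (ceq_sol - ca) * (Re**2 - R**2)
factor = (cd - ca) * (R / Re)**2 * (1 - (R / Re)**2) * Re**2
assert sp.simplify(uptake - factor) == 0

# For n-dimensional swelling the uptake factor x^n*(1 - x^n), x = R/Re,
# peaks at x = (1/2)^(1/n): 1/2 in 1D, 1/sqrt(2) in 2D, 1/2^(1/3) in 3D.
x = sp.symbols('x', positive=True)
for n in (1, 2, 3):
    crit = sp.solve(sp.diff(x**n * (1 - x**n), x), x)
    assert any(abs(float(s) - 0.5 ** (1.0 / n)) < 1e-12 for s in crit)
print("mass-conservation and maximizer checks pass")
```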
By the conservation of mass, it can be demonstrated that if the swelling had been $1$D, the maximum mass uptake would have been attained at $R/R_e=1/2$; if it had been $3$D, the maximum would have been attained at $R/R_e=1/2^{1/3}\simeq0.8$. In our $2$D case, the maximum is attained at $R/R_e=1/\sqrt2\simeq0.7$, as the experiments also showed. So, in general, if $n$ is the dimensionality of the swelling, the maximum mass uptake is achieved for $R/R_e=1/2^{1/n}$. The agreement among experiments, numerics and analytics is quite good, and it is remarkable that the analytical model captures the linear regimes with the same slopes. The closed form analytical solution of the problem is cumbersome, but it may be simplified by noticing that $\alpha_\textup{max}\simeq1$, which allows us to perform a Taylor expansion in terms of $\alpha_\textup{max}-1$. At the leading order, defining $\bar{E}=E_a/E_d$ and $\bar{R}=R/R_e$, it reads
\begin{equation}\label{eq:Timo}
KR_e^2\simeq96(1-\alpha_\textup{max})\bar{E}\bar{R}^3\frac{\bigl(1-\bar{R}^2\bigr)\bigl(1-\bar{R}^3\bigr)}{\bar{R}^6\bigl(1-\bar{E}\bigr)+\bar{E}}\,,
\end{equation}
which we think can be interpreted as the $2$D analog of Timoshenko's formula for beams~\cite{Timoshenko1925} as it expresses how the dimensionless Gaussian curvature varies with material and geometric ratios (see {\it Appendix}). To the best of our knowledge, this is the first analytical formula relating the Gaussian curvature of a geometric composite to its material and geometrical properties, \textit{i.e.} moduli and radii ratios. This simplified expression is represented in figure~\ref{fig:KR} as a grey dashed line, and is very close to the full solution: the linear regimes highlighted in the figure are in excellent agreement with our Timoshenko--like formula. The physical interpretation of our first order Taylor approximation is that the strains are assumed to be small, which is the same limit in which Timoshenko derived his formula. It is worth noting that, unlike thermal stretches in uniformly heated bimetallic strips, the stretching ratio $\alpha$ should depend on the elastic properties of the geometric composites, as discussed in~\citenum{Lucantonio2014a}.
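As a numerical illustration of~\eqref{eq:Timo}, using the values quoted above ($\bar{E}\simeq4$, $\alpha_\textup{max}=1.043$), one can locate the peak of the curvature magnitude; a minimal sketch:

```python
import numpy as np

def K_Re2(Rbar, Ebar=4.0, alpha_max=1.043):
    """Leading-order (Timoshenko-like) dimensionless Gaussian curvature."""
    return (96.0 * (1.0 - alpha_max) * Ebar * Rbar**3
            * (1.0 - Rbar**2) * (1.0 - Rbar**3)
            / (Rbar**6 * (1.0 - Ebar) + Ebar))

Rbar = np.linspace(0.01, 0.99, 9801)
K = K_Re2(Rbar)
peak = Rbar[np.argmax(np.abs(K))]
# The curvature vanishes for a homogeneous disk (Rbar -> 0 or 1) and its
# magnitude peaks well above Rbar = 1/2, consistent with the regimes in figure 3.
print(peak)
```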
\begin{figure}[!h]
\includegraphics[scale=1]{Figure3.pdf}
\caption{(a) Stationary Gaussian curvature vs $R/R_e$ achieved by residual swelling. Three different regimes may be identified as the radii ratio varies. In the first one (I) the curvature is very small. The second one (II, \emph{linear increasing}) shows a linear scaling of the Gaussian curvature with the radii ratio up to its maximum when the third regime (III, \emph{linear decreasing}) starts with a steeper linear scaling of the curvature now decreasing to zero. The solid black line is the analytical solution, the dashed grey line is its Taylor approximation (eq.~\eqref{eq:Timo}); triangles and circles represent experimental and numerical results, respectively. (b) Deformed shapes for four different radii ratios and corresponding experimental profiles of the Gaussian curvature.\label{fig:KR}}
\end{figure}
\subsection{Swelling Dynamics}
Our model successfully captures the steady--state morphology of residually swollen plates. Unlike the mechanical analog presented earlier, the residual swelling process adds a time--dependency to the deformation. The experimental results in figure~\ref{fig:Kt}~(a) show the time evolution of the Gaussian curvature of disks with seven different $R/R_e$, and the shape evolution contains two notable features: 1.) there is a critical \emph{activation time}, {\em i.e.} the time it takes for the structure to start deforming, which depends on $R/R_e$, and 2.) following activation, the disks deform in a diffusive manner.
We assume that the swelling dynamics may be described as a diffusive process with a Fourier--like differential equation. The main features of such an equation are that it is of first order in time, giving rise to transients described by exponentials in time up to the steady--state, and of second order in space (quasi--1D in our case). As we are studying the transient by looking at a homogeneous field -- the Gaussian curvature $K$ -- we focus on its variation with time, and note that the dashed lines in figure~\ref{fig:Kt}~(a) correspond to an exponential in time ($K_\textup{steady}R_e^2(1-e^{-t/\tau})$), as expected. Figure~\ref{fig:Kt}~(b) shows that the activation time varies with $R/R_e$ as $t_a\sim Ae^{-BR/R_e}$, where $A$ and $B$ are positive real numbers equal to $21038$~h and $10.862$, respectively, in our case.\footnote{Error bars correspond to $\pm3$~h since we measured $K$ every three hours.} This numerical fitting is shown in the plot as a straight dashed grey line. Following activation, the diffusive shape change is characterized by the time constant~$\tau$, which dictates the time scale of the transient: it is the time at which the Gaussian curvature reaches $63\%$ of its stationary value. Figure~\ref{fig:Kt}~(a) shows that the disk corresponding to $R/R_e=11/12$ is faster than the other geometries; indeed, we found that its time constant is approximately $\tau\simeq40$~h, whereas all the other disks have $\tau\simeq90$~h, as shown in figure~\ref{fig:Kt}~(c).
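The transient fit described above can be sketched as follows; the data here are synthetic (generated from the saturating--exponential model itself with $\tau=90$~h), standing in for the measured $K(t)$:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for a measured curvature transient, sampled every 3 h.
rng = np.random.default_rng(0)
tau_true, K_steady = 90.0, 1.0
t = np.arange(0.0, 400.0, 3.0)
K = K_steady * (1.0 - np.exp(-t / tau_true)) + 0.01 * rng.standard_normal(t.size)

# Fit K(t) = K_s * (1 - exp(-t/tau)); tau is the time at which K reaches
# 63% of its stationary value.
model = lambda t, Ks, tau: Ks * (1.0 - np.exp(-t / tau))
(Ks_fit, tau_fit), _ = curve_fit(model, t, K, p0=(1.0, 50.0))
print(round(tau_fit, 1))
```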
As this is a diffusive process, we expect the dynamics to scale with the square of the characteristic length scale of the problem. We believe the observed difference in dynamics results from a change in the relevant characteristic length scale, \textit{i.e.} from the total radius of the disk to the width of the annulus. The characteristic time scale (time constant) of a diffusive process is $\tau=\ell^2/D$, where $\ell$ is the characteristic length and $D$ is the diffusivity. While the latter is a property of the materials, the former is dictated by geometry. To identify the characteristic length, we examined a simpler geometry -- a bilayer beam.
Using the same materials as in section 3.1, we prepared a bilayered beam of equivalent thickness (figure~\ref{fig:Kt}~a - inset), and measured the time it took to equilibrate into an arch, finding $\tau_\textup{beam}\simeq 10$~h, represented by a diamond in figure~\ref{fig:Kt}~(c). In this case, a reasonable assumption for the characteristic length is $\ell\sim h$, where $h$ is the total thickness of the beam as shown in the inset of figure~\ref{fig:Kt}~(a). Since the materials for the beam and the disk are the same, they share the same value of~$D$. Therefore, we compared the $1$D diffusion in the beam with the $2$D diffusion in the disk with $R/R_e=1/2$ and, assuming $\ell\sim R_e$, obtained $\tau_\textup{disk}=(R_e/h)^2\tau_\textup{beam}\simeq90$~h: this analytical estimate is shown in figure~\ref{fig:Kt}~(c) as a square, and it accurately predicts the experimental time constant of the disk. The experimental data show a decay of the time constant as the radii ratio approaches~$1$, which we interpret as the result of a decaying characteristic length representing the portion of the radius where swelling actually takes place. When $R/R_e\simeq0.5$, our approximation $\ell\sim R_e$ is a good estimate for the characteristic length, but when $R/R_e\rightarrow1$ the inner area of the disk is shielded from swelling and the characteristic length is smaller than $R_e$. We suggest that, as $R/R_e\rightarrow1$, the characteristic length approaches the width of the annulus.
\begin{figure}[!h]
\includegraphics[scale=1]{Figure4.pdf}
\caption{(a) Normalized Gaussian curvature versus time for the different radii ratios. (b) Activation times versus $R/R_e$ (triangles) and numerical fitting (dashed curve). (c) Time constants of the disks versus $R/R_e$ (circles), time constant of the beam (diamond) and analytical estimate of the time constant of the disk corresponding to $R/R_e=1/2$ (square) assuming $\ell\sim R_e$. \label{fig:Kt}}
\end{figure}
\section{Conclusions}
We have studied the morphing of geometric composites from flat plates into curved, three--dimensional shapes. The geometric composites morph by residual swelling, a phenomenon that we gained insight into by considering a mechanical analog that mimics their geometry and morphs by geometrical confinement. The morphing problem of the mechanical analog is purely geometrical, and we developed an analytical model following the theory of non--Euclidean plates, which quantitatively describes how the Gaussian curvature is dictated by geometry. The strength of the model is indeed its analytical tractability, which results from the assumption of a homogeneous Gaussian curvature throughout the disk. The agreement among experiments, numerics, and analytics is excellent even when the Gaussian curvature is not homogeneous, because the analytical model provides a mean Gaussian curvature as a result, which is important for the design of actuators.
We then employed the analytical model of the mechanical analog to study the morphing of geometric composites by approximating the swelling as a distortion. By using the conservation of mass, we analytically determined how the mass uptake should vary with the radii ratio, and assumed a linear proportionality between the mass uptake and the volume variation. The agreement among experiments, numerics, and analytics is quite good, and each approach identified three regimes for the Gaussian curvature as a function of the radii ratio: it is remarkable that the model captures these regimes and their linear features. Then, by assuming small stretches ($\alpha\simeq1$), we simplified the cumbersome analytical solution and provided the first $2$D extension of Timoshenko's formula for beams. Finally, we studied the swelling dynamics and identified two different characteristic lengths depending on geometry.
We think that the proposed model improves the understanding of the complex interplay among geometry, mechanics, and swelling. Additionally, the experiments demonstrate a robust and scalable means for the growth--actuated manufacturing of elastic shells -- structures that are traditionally difficult to prepare via additive and reductive manufacturing techniques. It is important to note that while residual fluid within the crosslinked elastomer drives the diffusion and swelling of the structure, the material behaves like an elastic solid, rather than a swollen gel. We expect this experimental procedure to translate to any combination of material--compatible elastomers where a gradient of small molecules can be programmatically prescribed. This may provide the foundation for an inkjet--like approach to 3D printing whereby small molecule fluids can be locally applied to a flat elastic sheet, allowing controlled diffusion to dictate the resulting growth pattern. Careful selection of the initial geometry will allow this technique to be used for generating regions of high curvature -- or folds -- which may form the basic building blocks for the growth of origami structures.
\section*{Acknowledgments}
We are grateful to Anupam Pandey (University of Twente) and Alexander Kotsch (Virginia Tech) for characterizing the experimental behavior of the bilayered beams, which provided helpful insights to the problem discussed herein. S.A.S. and D.P.H. acknowledge the financial support from NSF through CMMI-1300860. M.P. acknowledges the National Group of Mathematical Physics (GNFM-INdAM) for support (Young Researcher Project).
\section{Introduction}
\label{introduction}
Almost two decades ago Weinberg proposed a way to extend baryon chiral
perturbation theory to few-nucleon systems \cite{Weinberg:rz,Weinberg:um}.
These seminal papers triggered an intensive research activity
starting with the pioneering work of Ref.~\cite{Ordonez:1992xp}.
In this approach, chiral perturbation theory is applied to the effective
potential, defined as the sum of all possible $N$-nucleon irreducible
diagrams, rather than to the scattering amplitude. The amplitude is then generated
by solving the corresponding dynamical equation such as the Lippmann-Schwinger
(LS) equation in the two-nucleon sector. For recent reviews and references
the reader is referred to
Refs.~\cite{Bedaque:2002mn,Epelbaum:2005pn,Epelbaum:2008ga}.
While phenomenologically successful, Weinberg's approach was questioned
by several authors regarding its consistency. The nucleon-nucleon (NN)
potential in this formalism is non-renormalizable in the traditional sense,
i.e.~iterations of the LS
equation generate divergent terms with structures which are not included in the
original potential. For example, the leading-order (LO) NN potential is given
by derivative-less contact interactions contributing only to S-waves and
the one-pion exchange (OPE) term whose spin-triplet part behaves at short
distances as $1/r^3$ and, therefore, generates divergences also in higher partial
waves. Consequently,
renormalization of the solution
of the LS equation requires inclusion of contributions of infinitely many
higher-order short-range operators in the potential (counterterms).
The freedom in the choice of the finite parts of counterterms is compensated
by the running of the corresponding renormalized coupling constants.
It has been argued \cite{Kaplan:1996xu} that the coefficients in front of the
divergent parts of the
counterterms contributing at a given order set the scale of the corresponding
renormalized couplings. As a consequence, even if these couplings were natural
at some value of the renormalization scale, they would become unnaturally large
for slightly different values of this parameter.
This problem, which also arises in the non-perturbative treatment, is what is
usually referred to as the inconsistency of Weinberg's approach, see also
Ref.~\cite{Beane:2001bc} for a related discussion.
An
alternative power counting scheme has been proposed by Kaplan, Savage and Wise
(KSW) \cite{Kaplan:1996xu,Kaplan:1998tg,Kaplan:1998we,Savage:1998vh},
in which the troublesome OPE contribution to the potential is
shifted from LO to next-to-leading order (NLO). The
LO dynamical equation becomes renormalizable, both perturbatively and non-perturbatively,
i.e. all divergences
can be absorbed into redefinition of low-energy constants (LECs)
entering the potential. Moreover,
the LO equation is exactly solvable and dimensional regularization
can be applied. All corrections are treated perturbatively which guarantees
that {\it all} divergences are absorbed into redefinition of
parameters entering at a given order.
Unfortunately, the resulting perturbative
expansion for the scattering amplitude was found not to converge for nucleon
momenta of the order of the pion mass at least in certain spin-triplet channels
\cite{Fleming:1999ee}, see however, Ref.~\cite{Beane:2008bt} for a new formulation
which is claimed to yield a convergent expansion. The reason for the breakdown
of the KSW expansion was attributed to the perturbative treatment of the
pion-exchange contributions
\cite{Gegelia:1998ee,Gegelia:1999ja,Cohen:1998jr,Cohen:1999iaa,Cohen:1999ds}.
This appears to be in line with phenomenological successes of Weinberg's approach
which treats pion exchange contributions nonperturbatively. Indeed, the most
advanced analyses of the NN system at next-to-next-to-next-to-leading order
in Weinberg's power counting scheme demonstrate the ability to accurately
describe NN
scattering data up to center-of-mass momenta at least of the order $\sim 2
M_\pi$ \cite{Entem:2003ft,Epelbaum:2004fk}. It is important to emphasize that
these studies are carried out within the cutoff EFT along the lines of
Lepage \cite{Lepage:1997,Lepage:1999kt,Lepage:2000} who argued that the
cutoff parameter in such calculations should be taken of the
order of the relevant hard large scale such as e.g. the mass of the $\rho$
meson, see also Refs.~\cite{Gegelia:gn,Gegelia:1998iu,Park:1998cu,
Epelbaum:2004fk,Gegelia:2004pz,Epelbaum:2006pt}.
The fairly narrow range of cutoffs $\Lambda = 450
\ldots 600$ MeV used in Refs.~\cite{Entem:2003ft,Epelbaum:2004fk} was
criticized by Nogga et al.~\cite{Nogga:2005hy} who considered low NN partial
waves based on the OPE potential and contact interactions employing a much
larger range of cutoffs with $\Lambda < 4$~GeV. They found that higher-order
counterterms have to be promoted to LO in the $^3P_0$, $^3P_2$-$^3F_2$ and,
possibly, the $^3D_2$ channel in order to stabilize the amplitude in the
employed cutoff range. The authors of Ref.~\cite{Nogga:2005hy} conjecture
that the ``mixture of perturbative treatment of higher partial
waves, resummation of lower partial waves, and promotion of a finite
number of counterterms is the most consistent approach'' to
chiral effective field theory (EFT) in the two-nucleon sector, see however, Ref.~\cite{Epelbaum:2006pt}
for criticism. The possibility of a perturbative treatment of two- and
more-pion exchange corrections to the potential was explored using
renormalization-group methods \cite{Birse:2003nz,Barford:2002je,Birse:2005um}.
Finally, the consequences of completely removing the cutoff
$\Lambda$ by
taking the limit $\Lambda \to \infty$ in the LS equation based on the NN
potentials at various orders in chiral EFT are also being explored by
several groups
\cite{Frederico:1999ps,PavonValderrama:2003np,PavonValderrama:2004nb,
Timoteo:2005ia,PavonValderrama:2005gu,
PavonValderrama:2005wv,PavonValderrama:2005uj,Higa:2007gz,Entem:2007jg,
Long:2007vp,Yang:2007hb,Valderrama:2008kj,Yang:2009kx}.
The purpose of this paper is to clarify some conceptual issues related to
renormalization in the context of EFT for the two-nucleon system. First,
we discuss renormalization scheme dependence of the scattering
amplitude in the KSW and Weinberg's approaches. Contrary to widespread
belief, we show that renormalization scheme independence in the KSW
framework is only achievable up to the order to which the calculations are
performed. From this point of view, the KSW framework does not offer any
conceptual advantage over Weinberg's approach. Secondly, we consider cutoff
EFT and explore the consequences of completely removing (or taking very large
values of) the cutoff. To that aim, we construct effective theory for an
exactly solvable quantum mechanical model with long- ($r_l \sim
m_l^{-1}$) and short-range ($r_s \sim m_s^{-1} \ll m_l^{-1}$) interactions of
a separable type valid for momenta of the order $k \sim m_l$. This can be
viewed as a toy-model for pionful EFT. We explain the meaning
of low-energy theorems in this model using the KSW-like framework with
subtractive renormalization and
demonstrate their validity in the Weinberg-like approach with a finite cutoff
$\Lambda$ as long as it is chosen of the order $\Lambda \sim m_s$. Next, it is
shown that taking the limit $\Lambda \to \infty$ yields a finite result for
the amplitude but
leads to breakdown of low-energy theorems. This procedure is, therefore, not
compatible with the EFT framework.
We argue that $\Lambda$ should not be
taken (considerably) larger than the short-range scale $m_s$ in the problem.
Our paper is organized as follows. In section \ref{pionless} we consider
renormalization scheme dependence in the KSW and Weinberg's approaches
concentrating mainly on a pionless theory.
Cutoff EFT for the exactly solvable toy model is discussed in section
\ref{cutoffEFT}. Finally, the findings of our work are briefly summarized in
section \ref{summary}.
\section{KSW versus Weinberg's approach}
\def\arabic{section}.\arabic{equation}{\arabic{section}.\arabic{equation}}
\setcounter{equation}{0}
\label{pionless}
For very low energies the effective non-relativistic
Lagrangian relevant for S-wave nucleon-nucleon scattering can be
written as \cite{Weinberg:rz,Weinberg:um,Kaplan:1996xu}:
\begin{equation}
{\cal L} = N^{\dagger}\bigg[ i\partial_t + {\nabla^2 \over
2m} \bigg] N - {C_S \over 2}\left( N^{\dagger}N\right)^2-{C_T\over 2}
\left( N^{\dagger}\mbox{\boldmath $\sigma$}N\right)^2 -{C_2\over 2}\left(
N^{\dagger}\nabla^2 N\right) \left(
N^{\dagger}N\right)
+ \mbox{\,h.c.}+ \ldots \,,\label{e1}
\end{equation}
where the nucleonic field $N$ is a two-component spinor in spin
and isotopic spin spaces and $\mbox{\boldmath $\sigma$} $
are the Pauli matrices acting on spin indices. Further, $m$ is the nucleon mass
and $C_T$, $C_S$ and $C_2$ are low energy coupling
constants. The LO contribution to the NN potential in the
${ }^1S_0$ partial wave is
\begin{equation}
V_0\left( {p},{p'}\right)=C_S-3\,C_T =C\,,
\end{equation}
while the NLO one has the form:
\begin{equation}
V_2\left( {p},{p'}\right)=C_2\left({p}^2+{p'}^2\right).
\end{equation}
In Weinberg's approach, the scattering amplitude is obtained by
solving the Lippmann-Schwinger equation. For the potential $V_0+V_2$, the
well-known solution for the on-the-energy-shell $T$-matrix reads, see e.~g.~\cite{Phillips:1997xu},
\begin{equation}
T = \frac{C+C_2^2I^\Lambda_5+ k^2C_2\left( 2-C_2I^\Lambda_3\right)}{\left(
1-C_2I^\Lambda_3\right)^2-\left[C+C_2^2I^\Lambda_5+ k^2C_2\left(
2-C_2I^\Lambda_3\right)\right]\,I^\Lambda(k)}\,, \label{2}
\end{equation}
with the cutoff-regularized loop integrals defined as
\begin{eqnarray}
I_n^\Lambda & = & -{m \over (2\pi )^3}\int {d^3l}\, l^{n-3}\,\theta \left(\Lambda-l
\right) = -\frac{m\,\Lambda^n}{2n\pi^2} \,,\; \mbox{
with } \; n=1,3,5\,, \nonumber \\
I^\Lambda(k) & = & {m\over (2\pi )^3}\int {d^3l \,\theta \left(\Lambda-l
\right)\over
k^2-l^2+i\eta} = I_1^\Lambda -\frac{i\,m\, k}{4\pi} -
\frac{m k}{4\pi^2} \,\ln \frac{\Lambda-k}{\Lambda+k}
\,, \label{3}
\end{eqnarray}
where $k$ refers to the on-shell momentum in the NN center-of-mass system and
the last equation is valid for $k < \Lambda$.
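The closed form quoted in Eq.~(\ref{3}) can be checked by direct quadrature. The sketch below (illustrative only, with $\hbar=1$ and arbitrary test values of $m$, $k$ and $\Lambda$) evaluates the principal-value integral numerically and compares it with the stated result:

```python
import math

# Numerical check of the closed form for the cutoff-regularized loop integral
# I^Lambda(k) of Eq. (3). The principal value is handled with the exact identity
#   l^2/(k^2 - l^2) = -1 + (k/2)/(k - l) + (k/2)/(k + l),
# whose singular 1/(k - l) piece has the PV integral (k/2) ln[k/(Lambda - k)]
# over [0, Lambda]; the smooth remainder is integrated by the midpoint rule.
m, k, Lam = 1.0, 0.5, 2.0       # arbitrary test values, hbar = 1

n = 100000
h = Lam / n
smooth = sum((-1.0 + 0.5 * k / (k + (i + 0.5) * h)) * h for i in range(n))
pv_sing = 0.5 * k * math.log(k / (Lam - k))
I_real_num = m / (2 * math.pi**2) * (smooth + pv_sing)

# closed form from Eq. (3): I_1^Lambda - (m k / 4 pi^2) ln[(Lambda-k)/(Lambda+k)]
I1 = -m * Lam / (2 * math.pi**2)
I_real_closed = I1 - m * k / (4 * math.pi**2) * math.log((Lam - k) / (Lam + k))
print(I_real_num, I_real_closed)  # the two values agree to quadrature accuracy
```

The imaginary part, $-imk/(4\pi)$, arises from the pole term and is not touched by the quadrature.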
To renormalize (\ref{2}) we divide loop integrals into the divergent and
finite parts and take the limit $\Lambda \to \infty$:
\begin{eqnarray}
I_n & \equiv & \lim_{\Lambda \to \infty} I_n^\Lambda = \lim_{\Lambda \to
\infty} \left(I_n^\Lambda + \frac{m \mu_n^n}{2 n \pi^2}\right)
-\frac{m \mu_n^n}{2 n \pi^2} \equiv \Delta_n(\mu_n)+I_n^R(\mu_n)\,, \mbox{
with } n=3,5\,,\nonumber\\
I(k) & \equiv & \lim_{\Lambda \to \infty} I^\Lambda (k) = \lim_{\Lambda \to \infty}
\left(I_1^\Lambda + \frac{m \mu}{2 \pi^2}\right)
+\left[-\frac{m \mu}{2 \pi^2} -\frac{i\,m\, k}{4\pi} \right] \equiv
\Delta(\mu)+I^R(\mu,k)\,. \label{splitting2}
\end{eqnarray}
Here, $\Delta_n(\mu_n)$ and $\Delta(\mu)$ denote the divergent parts
of the loop integrals while $I_n^R(\mu_n)$ and $I^R(\mu,k)$ are finite. The
splitting of
loop integrals in Eq.~(\ref{splitting2}) is not unique. The freedom in the
choice of renormalization
conditions is parameterized by $\mu$ and $\mu_n$. The divergent parts
$\Delta_n(\mu_n)$
and $\Delta(\mu)$ are to be canceled by contributions of
counterterms. To absorb all appearances of $\Delta_n(\mu_n)$
and $\Delta(\mu)$ in Eq.~(\ref{2}) one needs to include
contributions of an infinite number of counterterms of increasingly higher
orders in powers of momenta \cite{Gegelia:gn}.
While it is impossible to write down these counterterms
explicitly, one can take their contributions into account by
dropping $\Delta_n(\mu_n)$ and $\Delta(\mu)$ terms and replacing
$C$ and $C_2$ by renormalized couplings. The subtracted
(renormalized) amplitude reads:
\begin{equation}
T = \frac{C_R+(C_2^R)^2 I_5^R(\mu_5)+ k^2C_2^R\left(
2-C_2^RI_3^R(\mu_3)\right)}{\left( 1- C_2^R
I_3^R(\mu_3)\right)^2-\left[ C_R+(C_2^R)^2 I_5^R(\mu_5)+
k^2C_2^R\left( 2-C_2^RI_3^R(\mu_3)\right) \right]\,I^R(\mu,k)}\,.
\label{rentmp2}
\end{equation}
Note that we are free to choose the finite parts of the loop
integrals. Any scheme that puts an effective cutoff of the order of the
external momenta is equally good from the EFT point of view. We also emphasize
that Eq.~(\ref{rentmp2}) is not obtained from (\ref{2}) by just
expressing $C$ and $C_2$ in terms of renormalized coupling
constants since these two low-energy constants are insufficient to absorb all
occurring divergences.
The renormalized couplings $C_R$ and $C_2^R$ depend on the
renormalization conditions through the renormalization group, but
this implicit dependence \emph{does not} cancel completely the
explicit dependence of the amplitude on $\mu$ and $\mu_n$.
However, different choices are equivalent up to the order of accuracy of the
calculation, provided that the chosen renormalization conditions respect power
counting.
We now turn to the KSW approach \cite{Kaplan:1998tg}, where the $T$-matrix
up to subleading order is given by:\footnote{It results from Eq.~(\ref{2}) by
expanding in powers of $C_2$ and keeping the
first two terms.}
\begin{equation}
T={C\over 1-C\,I^\Lambda(k)}+{2 k^2C_2+2CC_2I_3^\Lambda\over \left[
1-C\,I^\Lambda(k)\right]^2}. \label{chtmatrix}
\end{equation}
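The footnote's statement, that Eq.~(\ref{chtmatrix}) follows from Eq.~(\ref{2}) by expanding in powers of $C_2$ and keeping the first two terms, can be verified numerically: halving $C_2$ should reduce the residual between the two expressions by roughly a factor of four. A minimal sketch in units $\hbar=m=1$, with all couplings and the cutoff chosen arbitrarily:

```python
import math

m, k, Lam = 1.0, 0.3, 2.0           # arbitrary test values, hbar = 1
C, C2 = 0.5, 1e-3                   # arbitrary bare couplings

# cutoff-regularized loop integrals of Eq. (3)
I3 = -m * Lam**3 / (6 * math.pi**2)
I5 = -m * Lam**5 / (10 * math.pi**2)
Ik = (-m * Lam / (2 * math.pi**2) - 1j * m * k / (4 * math.pi)
      - m * k / (4 * math.pi**2) * math.log((Lam - k) / (Lam + k)))

def T_full(C2):     # full LS solution, Eq. (2)
    num = C + C2**2 * I5 + k**2 * C2 * (2 - C2 * I3)
    return num / ((1 - C2 * I3)**2 - num * Ik)

def T_ksw(C2):      # expansion to first order in C_2, Eq. (chtmatrix)
    return C / (1 - C * Ik) + (2 * k**2 * C2 + 2 * C * C2 * I3) / (1 - C * Ik)**2

r1 = abs(T_full(C2) - T_ksw(C2))
r2 = abs(T_full(C2 / 2) - T_ksw(C2 / 2))
print(r1 / r2)   # close to 4: the residual is O(C_2^2)
```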
One can absorb \emph{all} divergences appearing in the above expression by
expressing the bare couplings $C$ and $C_2$ in terms of renormalized ones
$C_R (=C_R(\mu,\mu_3))$ and $C_2^R (=C_2^R(\mu,\mu_3))$.
The complete functional dependencies $C \equiv C(C_R, \, C_2^R )$ and $C_2 \equiv
C_2(C_R,
\, C_2^R )$ can be found in Ref.~\cite{Gegelia:1998iu}. For our purposes, it
is sufficient to expand these expressions in powers of $C_2^R$ which leads to
\begin{eqnarray}
C & = & {C_R\over 1+C_R\Delta(\mu)}+{2C_RC_2^R I_3^R(\mu_3) \over
\left[ 1+C_R\Delta(\mu)\right]^2}-{2C_RC_2^RI_3\over \left[
1+C_R\Delta(\mu)\right]^3}+\cdots , \label{npct3}\\
C_2 & = & {C_2^R\over \left[ 1+C_R\Delta(\mu)\right]^2}+\cdots .
\label{npct4}
\end{eqnarray}
In the KSW approach, the first term in Eq.~(\ref{npct3}) is treated
non-perturbatively while all
other terms in Eqs.~(\ref{npct3}) and (\ref{npct4}) are
taken into account perturbatively, order-by-order.
Substituting Eqs.~(\ref{npct3}) and (\ref{npct4}) into
(\ref{chtmatrix}) we obtain a finite renormalized expression:
\begin{equation}
\label{TKSW}
T={C_R\over 1-C_R\,I^R(\mu,k)}+{2C_RC_2^R\,I_3^R(\mu_3)\over
\left[ 1-C_R\,I^R(\mu,k)\right]^2}+{2 k^2C_2^R \over \left[
1-C_R\,I^R(\mu,k)\right]^2}+\cdots . \label{renchtmatrix}
\end{equation}
If we choose $\Delta_3\equiv I_3$ (i.e.~we set $\mu_3=0$),
the explicit dependence on $\mu$ can be completely compensated by implicit
dependence of running couplings $C_R$ and $C_{2}^R$ at any fixed order in the
EFT expansion. This is analogous to Refs.~\cite{Kaplan:1998tg,Kaplan:1998we} where
the dimensional regularization in combination with power
divergence subtraction (PDS) scheme is used.
Complete order-by-order renormalization scale-independence cannot be achieved
for any other choice of $\mu_3$.
Indeed, if the amplitude were $\mu$- and $\mu_3$-independent
order-by-order, the third term in
Eq.~(\ref{renchtmatrix}) would have to satisfy this condition by itself. Denoting
this term with $t_3$ we obtain
\begin{equation}
\sqrt{\frac{2\,k^2}{t_3}} =\left[ \frac{1}{C_R}-
I^R(\mu,k)\right]\,\frac{C_R}{\sqrt{C_2^R}}\,. \label{t3inv}
\end{equation}
As the integral $I^R(\mu,k)$ has an imaginary part which is renormalization
scale independent,
it follows from Eq.~(\ref{t3inv}) that $C_R/\sqrt{C_2^R}$ must be
renormalization scale independent. If this were the case, both $C_R$ and $C_2^R$
would have to be $\mu_3$-independent, as is easily seen from the real part of the
same equation.
However, in this case the explicit $\mu_3$-dependence of the second term in
Eq.~(\ref{renchtmatrix}) cannot be canceled by running of the
coupling constants {\rm $C_R$ and $C_2^R$}. We are forced to conclude that
the amplitude cannot be renormalization scale-independent
order-by-order.
It is instructive to verify that the amplitude is indeed renormalization-scale
independent up to terms of order $\mathcal{O} ((C_2^R)^2)$.
By differentiating the expressions of the bare couplings in terms of the renormalized ones, $C=C(C_R, \,
C_2^R)$ and $C_2=C_2(C_R, \, C_2^R)$ \cite{Gegelia:1998iu}, with respect to the renormalization scales one obtains the
corresponding renormalization group equations for renormalized couplings.
For the beta-functions to first order in
$C_2^R$ these equations read:
\begin{eqnarray}
\frac{\partial C_R}{\partial\mu} & = & \frac{m}{2\,\pi^2}\,C_R^2
+\frac{m^2 \mu_3^3}{6\,\pi^4}\,C_R^2 C_2^R\,,\nonumber\\
\frac{\partial C_R}{\partial\mu_3} & = & \frac{m \mu_3^2}{\pi^2}\,C_R C_2^R\,,\nonumber\\
\frac{\partial C^R_2}{\partial\mu} & = & \frac{m}{\pi^2}\,C_R C_2^R\,,\nonumber\\
\frac{\partial C^R_2}{\partial\mu_3} & = & 0\,. \label{rengreqs}
\end{eqnarray}
We were unable to solve Eqs.~(\ref{rengreqs}) in closed form but obtained
the expansion of the solution in powers of $C_2^R$:
\begin{eqnarray}
C_R(\mu,\mu_3) & = & \frac{2 \pi^2 C_R(0,0)}{2 \pi^2- \mu m C_R(0,0)
}+\frac{8 \pi^4 \mu _3^3 \, m\, {C_R(0,0)} \,{C_2^R(0)}}{3
\left(2 \pi^2- \mu m C_R(0,0) \right)^3}+\cdots \,,\nonumber\\
C_2^R(\mu) & = & \frac{4 \pi^4 C_2^R(0)}{\left( 2 \pi^2- \mu m C_R(0,0)
\right)^2}+\cdots\,. \label{solutions}
\end{eqnarray}
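The expansion (\ref{solutions}) can be checked against the renormalization group equations (\ref{rengreqs}) by finite differences. The sketch below verifies the $\mu_3$-equation for $C_R$ to first order in $C_2^R$; all inputs, including the boundary values $C_R(0,0)$ and $C_2^R(0)$, are arbitrary test numbers:

```python
import math

# Finite-difference check that the expansion (solutions) satisfies the RG
# equation dC_R/dmu_3 = (m mu_3^2 / pi^2) C_R C_2^R of Eq. (rengreqs) to
# first order in C_2^R. All numerical inputs are arbitrary test values.
m = 1.0
CR0, C20 = 0.7, 1e-4     # boundary values C_R(0,0) and C_2^R(0)
mu, mu3 = 0.4, 0.6
pi = math.pi

def CR(mu, mu3):         # first two terms of Eq. (solutions)
    den = 2 * pi**2 - mu * m * CR0
    return (2 * pi**2 * CR0 / den
            + 8 * pi**4 * mu3**3 * m * CR0 * C20 / (3 * den**3))

def C2R(mu):             # leading term of Eq. (solutions)
    return 4 * pi**4 * C20 / (2 * pi**2 - mu * m * CR0)**2

eps = 1e-4
lhs = (CR(mu, mu3 + eps) - CR(mu, mu3 - eps)) / (2 * eps)   # d C_R / d mu_3
rhs = m * mu3**2 / pi**2 * CR(mu, mu3) * C2R(mu)            # beta function
print(lhs, rhs)   # agree up to O((C_2^R)^2) corrections
```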
Substituting Eq.~(\ref{solutions}) into Eq.~(\ref{renchtmatrix}) leads to
\begin{eqnarray}
T & = & -\frac{2 \pi ^2 C_R(0,0)} { 2 \pi ^2 -C_R(0,0) \left[m
\mu +2 \pi^2 I^R(\mu,k)\right]}
+ \frac{8\,C_2^R(0)\,\pi ^4 k^2}{\left\{ 2 \pi ^2 -C_R(0,0)
\left[m \mu +2 \pi^2 I^R(\mu,k)\right]\right\}^2} \nonumber \\
&+& \cdots\,,
\label{rengrinvampl}
\end{eqnarray}
which is renormalization scheme independent up to the considered
order.
In the case of the KSW approach, even when using the PDS scheme, a residual
renormalization scale dependence is present in the running coupling
constants if pions are included as explicit degrees of freedom.
The LO renormalized (running) coupling constant in the $^1S_0$ partial wave
given in Ref.~\cite{Kaplan:1998we} reads
\begin{equation}
C_0^{(^1S_0)}(\mu) =
-\frac{4\,\pi}{\mu\,m}\,\Biggl( \frac{1}{1- \left[\mu
\,\left(a+1/\Lambda_{NN}\right)
\right]^{-1}}+\frac{\mu}{\Lambda_{NN}}\Biggr)\,, \quad
\Lambda_{NN} = \frac{8 \pi \,f^2}{g_A^2
m}\,,\label{runningcoupling}
\end{equation}
where $a$ is the $^1S_0$ scattering length,
$f$ denotes the pion decay constant normalized to be $f= 132 $ MeV
and $g_A$ is the axial-vector pion-nucleon coupling constant. In the expansion of
$C_0^{(^1S_0)}(\mu)$ in powers of $g_A$, the renormalization scale
dependence is present to all orders. This dependence is canceled
by an infinite number of higher-order terms in the expansion of the amplitude.
Notice that in the KSW approach the residual renormalization scale dependence
has a different origin than in Weinberg's approach, where it arises from
the explicit renormalization scheme dependence of the loop integrals due to the
missing contributions of the corresponding higher-order renormalized coupling
constants.
To summarize the above considerations, renormalization scheme dependence is present
explicitly in loop contributions and implicitly in running
coupling constants. These two types of dependence exactly cancel each other
in the full amplitude. Due to the order-by-order calculations in
the KSW approach, there is a residual renormalization scheme
dependence generated by running couplings. On the other hand, in the case of
Weinberg's approach, the residual renormalization scale dependence arises from
loop contributions. There is no conceptual difference between the two
approaches from the point of view of the renormalization scale dependence.
If one approach is inconsistent, then so is the
other. In fact both are conceptually as consistent as perturbative
QCD is. Of course, the crucial issue is to choose an optimal
renormalization scheme (if such a scheme exists at all). In perturbative
QCD, when expressed in terms of the running coupling, the full
amplitudes are independent of the renormalization scale $\mu$.
Perturbative expressions are, however, $\mu$-independent only up to the
order of accuracy of the calculation. The residual $\mu$-dependence is canceled
by higher-order terms in perturbation theory. At high
energies one could choose a renormalization scale much smaller than
the characteristic scale of the process under consideration. This would
generate a large value of the running coupling constant and, at the same time, lead
to large coefficients in the perturbative series. The failure of this kind
of perturbative scheme does not mean the failure of perturbation
theory in high-energy QCD in general. The only problem is that for
such an inappropriate choice of the renormalization scheme, ``higher-order''
$\mu$-dependent terms play a crucial role and are by no means
suppressed. EFT for few nucleons is conceptually
similar. Although observables are calculated by solving the corresponding
dynamical equations, one is still doing perturbative calculations with respect
to the chiral expansion. While the full amplitude is renormalization scheme
independent, truncated expressions at any finite order are generally not.
\medskip
We now switch to our next topic and consider Weinberg's approach based on a
finite cutoff rather than subtractive renormalization. In particular, we are
interested in the implications of taking the cutoff value very large.
To keep track of the loop expansion, we first rewrite Eq.~(\ref{2}) by showing
explicitly factors of $\hbar$:
\begin{eqnarray}
T & = & \frac{C+C_2^2 \hbar I_5+ k^2C_2\left( 2-C_2 \hbar I_3\right)}{\left(
1-C_2 \hbar I_3\right)^2-\left[C+C_2^2 \hbar I_5+ k^2 C_2\left(
2-C_2 \hbar I_3\right)\right] \hbar I(k)}\,.
\label{2x}
\end{eqnarray}
The bare coupling constants $C$ and $C_2$ can be expressed in terms of the
scattering length $a$ and the effective range $r$ by matching the amplitude in
Eq.~(\ref{2x}) to the first two terms in the effective range expansion
\begin{equation}
\Re \left( T^{-1} \right) = - \frac{m}{4 \pi} \left( -\frac{1}{a} +
\frac{1}{2} r k^2 + \ldots \right) \,,
\end{equation}
which leads to the following expressions
\begin{eqnarray}
\label{bareLEC}
C & = & C(a, \, r , \, \Lambda ) = \frac{6 \pi ^2 [ a^2 \hbar \Lambda^3 m (64 \hbar
-3 \pi \Lambda r) -6 \left(D -3 \pi ^2 \Lambda
m\right) -62 \pi a \hbar\Lambda^2 m ]
}{5
\hbar \Lambda^2 m^2 \left[a^2 \hbar \Lambda^2 (16 \hbar -\pi \Lambda
r)-12 \pi a \hbar \Lambda + 3 \pi ^2 \right]}\,, \nonumber \\
C_2 & = & C_2(a, \, r, \, \Lambda) = -\frac{6 \pi ^2
[ - D + a^2
\hbar m \Lambda ^3 (16 \hbar - \pi r \Lambda )-12 \pi
a \hbar m \Lambda ^2 + 3 \pi ^2 m \Lambda ]
}{\hbar m^2 \Lambda ^4 \left[a^2 \hbar \Lambda ^2 (16
\hbar - \pi r \Lambda )-12 \pi a \hbar \Lambda +3 \pi
^2\right]} \,,
\end{eqnarray}
with $D$ defined as
\begin{equation}
D = \sqrt{3} \sqrt{\Lambda^2 m^2 (\pi -2
a \hbar\Lambda)^2 \left(a^2 \hbar \Lambda^2 (16 \hbar -\pi
\,\Lambda
r)-12 \pi a \hbar \Lambda + 3 \pi ^2\right)}\,.
\end{equation}
Substituting the resulting expressions for the bare couplings
$C(a, \, r, \, \Lambda)$ and $C_2(a, \, r, \, \Lambda)$ back into
Eq.~(\ref{2x}), we obtain for the inverse
scattering amplitude
\begin{eqnarray}
T^{-1} &=& \frac{m }{4 \pi ^2 a
\left[a \left(\pi k^2 \,r\, \Lambda -4 \,\hbar\,
\left(k^2+\Lambda ^2\right)\right)+2 \pi \Lambda \right]}\,\Biggl\{2
\Lambda \left[a^2 \,\hbar\, k^2 (\pi
\,r\, \Lambda -4 \,\hbar)-2 \pi a \,\hbar\,
\Lambda +\pi ^2\right]\nonumber\\
&& +a \,\hbar\, k \ln
\frac{\Lambda -k}{\Lambda +k} \left[a \left(\pi
k^2 \,r\, \Lambda -4 \,\hbar\, \left(k^2+\Lambda
^2\right)\right)+2 \pi \Lambda \right]\Biggr\}+ i \hbar \frac{m k}{4 \pi }\,.
\label{invamplCutOff}
\end{eqnarray}
Although this expression for $T^{-1}$ possesses a finite limit as
$\Lambda \to \infty$ which, as desired, correctly reproduces the first two
terms in the effective range expansion (ERE),
\begin{equation}
T^{-1} = - \frac{m}{4 \pi} \left( - \frac{1}{a} + \frac{1}{2} r k^2 - i \hbar\,
k \right) + \mathcal{O} \left(\Lambda^{-1} \right),
\label{invamplCutOffExpanded}
\end{equation}
taking this limit without including all
relevant contributions of counterterms is a meaningless procedure within an EFT
\cite{Gegelia:gn}. Not surprisingly, one encounters pathologies, such as
e.~g.~the coupling $C_2$ becoming complex for positive values of the
effective range, see also Ref.~\cite{Beane:1997pk}.
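The approach of Eq.~(\ref{invamplCutOff}) to the effective range expansion can also be seen numerically. The sketch below (units $\hbar=m=1$, with arbitrary test values of $a$, $r$ and $k$) evaluates the real part of the inverse amplitude for increasing $\Lambda$:

```python
import math

# Numerical illustration that Re T^{-1} of Eq. (invamplCutOff) approaches the
# first two terms of the effective range expansion as Lambda grows. The values
# of m, a, r, k below are arbitrary test inputs (hbar = 1).
m, a, r, k = 1.0, 5.0, 2.0, 0.1
pi = math.pi

def reTinv(Lam):
    b = a * (pi * k**2 * r * Lam - 4 * (k**2 + Lam**2)) + 2 * pi * Lam
    braces = (2 * Lam * (a**2 * k**2 * (pi * r * Lam - 4) - 2 * pi * a * Lam + pi**2)
              + a * k * math.log((Lam - k) / (Lam + k)) * b)
    return m / (4 * pi**2 * a * b) * braces

ere = -m / (4 * pi) * (-1 / a + 0.5 * r * k**2)
print(reTinv(1e2), reTinv(1e4), ere)  # converges to the ERE value as Lambda grows
```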
To further explore the large-cutoff limit, we expand the
obtained expressions for the bare LECs in Eq.~(\ref{bareLEC}) in powers of
$\hbar$ which leads to
\begin{eqnarray}
C & = & \frac{4 \pi a}{m}+ \hbar \, \frac{3 a^4 r^2 \Lambda ^5 + 40 a^3 r
\Lambda ^3+240 a^2 \Lambda }{30 m}+{\cal O}(\hbar^2)\,,\nonumber\\ [6pt]
C_2 & = & \frac{\pi a^2 r}{m}+ \hbar\, \frac{a^4
r^2 \Lambda ^4 + 16 a^3 r \Lambda ^2 - 16
a^2}{4 m \Lambda }+{\cal O}(\hbar^2)\,.
\label{CandC2expanded}
\end{eqnarray}
Making use of the standard splitting of bare quantities into renormalized ones
and counterterms,
\begin{eqnarray}
C & = & C^R +\sum_{k=1}^\infty \hbar^k\,\delta C_k\,,\nonumber\\
C_2 & = & C_2^R +\sum_{k=1}^\infty \hbar^k\,\delta C_{2 k}\,,
\end{eqnarray}
we identify the renormalized low-energy constants in this particular
scheme with \begin{equation} \label{LECSrenormalized} C_R = \frac{4 \pi
a}{m}\,, \quad \quad C_2^R = \frac{\pi a^2 r}{m}\,.
\label{rencouplings} \end{equation} Inverting the above expressions and
replacing in Eq.~(\ref{invamplCutOff}) the scattering length and the
effective range by $C_R$ and $C_2^R$, the inverse amplitude can be
re-written in terms of the renormalized coupling constants. To see what
the $\Lambda\to\infty$ limit corresponds to in the language of
EFT diagrams, we consider the loop expansion of the amplitude
(thus reproducing the perturbative series summed up by iterating
the LS equation):
\begin{eqnarray}
T & = &C_R + 2C_2^R k^2
- i \hbar \frac{m\,k}{4 \pi} \left( C_R + 2 C_2^R k^2 \right)^2 \nonumber \\
&+& \hbar \frac{2 m\,k^4}{\pi^2} \Big[ - \left(C_2^R \right)^2 \Lambda +
\left(C_2^R \right)^2 k^2 \Lambda^{-1} + C_2^R C_R \Lambda^{-1} + \mathcal{O}
\left(\Lambda^{-2} \right) \Big] +\cdots
\,,
\label{loopexp}
\end{eqnarray}
where the ellipses refer to higher-order terms in the loop expansion.
For momenta $k\gtrsim 1/a$ we cannot truncate the loop expansion in
Eq.~(\ref{loopexp}) at any finite order (i.e. in the language of
Feynman diagrams we need to sum up an infinite number of them). For
demonstration purposes, we consider here the case $r \ll a$.
With renormalized couplings of Eq.~(\ref{rencouplings}) the small
parameter of the EFT expansion is given by $k^2/m_s^2 \sim 2 C_2^R
k^2/C_R= a r k^2/2$, i.e. the hard scale of the problem is $m_s \sim
1/\sqrt{r a}$.\footnote{For very large scattering length $a \to
\infty$
a more suitable renormalization scheme should be used
\cite{Kaplan:1998tg,Gegelia:gn}.} The term linear in $\Lambda$ in
the second line of Eq.~(\ref{loopexp}) violates the dimensional
power counting and, for very large values of $\Lambda$ (much larger
than $m_s$), yields the numerically dominant contribution at
one loop order, instead of being absorbed into redefinition of
higher-order coupling constants or, equivalently, being subtracted.
The situation is similar at higher orders in the loop expansion.
Hence, one completely loses the power counting on which the EFT is
based: terms which are supposed to be subtracted yield the dominant
contributions to the amplitude. On the other hand, if $\Lambda$ is
taken of the order of the hard scale in the problem, i.e.
$\Lambda \sim m_s$, the term linear in $\Lambda$ in the second line of
Eq.~(\ref{loopexp}) appears to be of order three and is beyond the
accuracy of the considered calculation. Notice that reducing the
value of $\Lambda$ considerably below the hard scale would lead to
large cutoff artefacts generated by terms with negative powers of
$\Lambda$. One is, therefore, forced to conclude that in the cutoff
theory, $\Lambda$ should ideally be chosen of the order of the hard
scale in the problem.
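The relative size of the power-counting-violating term can be illustrated with a short numerical sketch (units $\hbar=m=1$, arbitrary test values with $r\ll a$, and with the renormalized couplings of Eq.~(\ref{rencouplings})), comparing the $\Lambda$-linear one-loop term of Eq.~(\ref{loopexp}) with the LO amplitude for $\Lambda\sim m_s$ and $\Lambda\gg m_s$:

```python
import math

# Order-of-magnitude comparison of the Lambda-linear one-loop term,
# 2 m k^4 (C_2^R)^2 Lambda / pi^2, with the LO amplitude C_R + 2 C_2^R k^2.
# All numbers are illustrative test inputs with r << a and k < m_s.
m, a, r = 1.0, 5.0, 1.0
pi = math.pi
CR = 4 * pi * a / m                 # renormalized couplings, Eq. (rencouplings)
C2R = pi * a**2 * r / m
ms = 1.0 / math.sqrt(a * r)         # hard scale m_s ~ 1/sqrt(a r)
k = 0.3                             # momentum below the hard scale

lo = CR + 2 * C2R * k**2            # LO amplitude

def lam_term(Lam):                  # Lambda-linear one-loop term of Eq. (loopexp)
    return 2 * m * k**4 / pi**2 * C2R**2 * Lam

ratio_small = lam_term(ms) / lo         # Lambda ~ m_s: suppressed
ratio_large = lam_term(100 * ms) / lo   # Lambda >> m_s: dominates the LO term
print(ratio_small, ratio_large)
```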
\section{Cutoff EFT: Renormalization versus ``peratization''}
\def\arabic{section}.\arabic{equation}{\arabic{section}.\arabic{equation}}
\setcounter{equation}{0}
\label{cutoffEFT}
In this section we further explore and extend the above ideas by considering
an effective theory for an exactly solvable quantum mechanical model of
two nucleons interacting via long- and short-range forces. This may be
regarded as a toy model for chiral EFT in the two-nucleon sector. We employ
both the subtractive renormalization and the cutoff formulation of the
resulting effective theory and discuss the similarities and differences
between these two approaches. We also explore the consequences of taking very
large values of the cutoff in this model.
\subsection{The model}
\label{ToyModel}
We consider two nucleons in the spin-singlet S-wave interacting
via the two-range separable potential
\begin{equation}
\label{Vunderlying}
V (p ,\, p') = v_{l} \, F_{l}(p)\, F_{l}(p')+
v_{s}\, F_{s}(p)\, F_{s}(p')\,, \quad
F_l (p) \equiv \frac{\sqrt{p^2 + m_s^2}}{p^2 + m_l^2}\,, \quad
F_s (p) \equiv \frac{1}{\sqrt{p^2 + m_s^2}}\,,
\end{equation}
where the subscripts $l$ and $s$ refer to long- and short-range
interactions and the mass scales $m_l$ and $m_s$ fulfill the condition $m_l \ll
m_s$. Further, the dimensionless quantities
$v_l$ and $v_s$ denote the strengths of the long- and short-range
interactions, respectively. Our choice of the explicit form of $F_{l,s}(p)$
is entirely motivated by the simplicity of calculations. The reader may verify
that all conclusions reached in this section remain valid if one chooses,
for example, $F_{l,s}(p) \propto 1/(p^2 + m_{l,s}^2 )$. In this case, however,
one will need to go to subleading order in the EFT expansion in order to
explore the ``peratization'' procedure which would make the calculations
considerably more involved.
For an interaction of a separable type, the off-shell
T-matrix can be easily calculated analytically by solving the corresponding
Lippmann-Schwinger equation
\begin{equation}
\label{LS}
T (p ,\, p'; \, k ) = V (p ,\, p') + 4 \pi \int \frac{l^2 dl}{(2 \pi)^3} V
(p ,\, l)
\frac{m}{k^2-l^2 + i \epsilon} T (l ,\, p'; \, k )\,,
\end{equation}
where $m$ is the nucleon mass and $k$ corresponds to the on-shell momentum
which is related to the two-nucleon center-of-mass energy via $E_{\rm CMS} =
k^2/m$.
The phase shift $\delta (k)$ can be obtained from the on-the-energy-shell T-matrix
elements via
\begin{equation}
T (k ,\, k; \, k ) = - \frac{4 \pi}{m} \frac{1}{k \cot \delta (k)
- i k}\,.
\end{equation}
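For a finite-rank separable potential, Eq.~(\ref{LS}) collapses to finite-dimensional algebra, so the phase shift is easily obtained numerically. The following sketch (illustrative parameter values and arbitrarily chosen strengths, not the tuned ones discussed below; the pole is treated with a principal-value subtraction using ${\rm PV}\int_0^\infty dl/(k^2-l^2)=0$) solves the rank-2 problem and checks unitarity, ${\rm Im}\,(1/T)=mk/(4\pi)$:

```python
import math
import numpy as np
from scipy.integrate import quad

# Illustrative parameters (hypothetical values; only m_l << m_s matters).
m, m_l, m_s = 1.0, 0.1, 1.0
v_l, v_s = -0.05, -3.0   # arbitrary strengths, not those of Eq. (strengths)

def F(i, p):
    """Form factors F_l (i=0) and F_s (i=1) of the separable potential."""
    if i == 0:
        return math.sqrt(p**2 + m_s**2) / (p**2 + m_l**2)
    return 1.0 / math.sqrt(p**2 + m_s**2)

def G(i, j, k):
    """Bubble integral 4 pi m Int l^2 dl/(2 pi)^3 F_i F_j/(k^2 - l^2 + i eps).
    The pole is removed by a principal-value subtraction (PV Int_0^inf
    dl/(k^2-l^2) = 0); the i-epsilon prescription supplies the exact
    imaginary part -i m k F_i(k) F_j(k)/(4 pi)."""
    h = lambda l: l**2 * F(i, l) * F(j, l)
    pv = quad(lambda l: (h(l) - h(k)) / (k**2 - l**2), 0.0, math.inf,
              limit=200)[0]
    return m * pv / (2 * math.pi**2) - 1j * m * k * F(i, k) * F(j, k) / (4 * math.pi)

def t_onshell(k):
    """On-shell T(k,k;k) from the 2x2 algebra t = (v^{-1} - G)^{-1}."""
    v_inv = np.diag([1.0 / v_l, 1.0 / v_s])
    Gmat = np.array([[G(0, 0, k), G(0, 1, k)],
                     [G(1, 0, k), G(1, 1, k)]])
    tau = np.linalg.inv(v_inv - Gmat)
    gamma = np.array([F(0, k), F(1, k)])
    return gamma @ tau @ gamma

k = 0.05
T = t_onshell(k)
kcot = (-4 * math.pi / (m * T) + 1j * k).real   # effective range function
delta = math.atan2(k, kcot)                     # phase shift in radians
unitarity_defect = abs((1 / T).imag - m * k / (4 * math.pi))
```

Unitarity of the numerical amplitude provides a nontrivial check of the principal-value treatment; the real part of $-4\pi/(mT)+ik$ then gives the effective range function $k\cot\delta(k)$ discussed next.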
Here and in the following, we are particularly interested in the coefficients
entering the effective range expansion
\begin{equation}
\label{ERE}
k \cot \delta (k) = - \frac{1}{a} + \frac{1}{2}r k^2 + v_2 k^4 + v_3 k^6 +
\ldots\,,
\end{equation}
with $a$, $r$ and $v_i$ referring to the scattering length, effective range and
the so-called shape parameters. The coefficients in the ERE generally scale
with the mass corresponding to the long-range interaction which gives rise to
the first left-hand cut in the T-matrix. Notice that the scattering
length can be tuned to any value by adjusting the strength of the
interaction. The coefficients in the ERE
can be expanded in powers of $m_l/m_s$ leading to the ``chiral'' expansion:
\begin{eqnarray}
\label{EREexpanded}
a &=& \frac{1}{m_l} \bigg( \alpha_a^{(0)} + \alpha_a^{(1)} \frac{m_l}{m_s} +
\alpha_a^{(2)} \frac{m_l^2}{m_s^2} + \ldots \bigg) \,, \nonumber \\
r &=& \frac{1}{m_l} \bigg( \alpha_r^{(0)} + \alpha_r^{(1)} \frac{m_l}{m_s} +
\alpha_r^{(2)} \frac{m_l^2}{m_s^2} + \ldots \bigg) \,, \nonumber \\
v_i &=& \frac{1}{m_l^{2 i -1}} \bigg( \alpha_{v_i}^{(0)} + \alpha_{v_i}^{(1)}
\frac{m_l}{m_s} +
\alpha_{v_i}^{(2)} \frac{m_l^2}{m_s^2} + \ldots \bigg) \,,
\end{eqnarray}
where $\alpha_a^{(m)}$, $\alpha_r^{(m)}$ and $\alpha_{v_i}^{(m)}$ are
dimensionless constants whose values are determined by the specific form of the
interaction potential. We fine tune the strengths of the long- and short-range
interactions in such a way that they generate scattering lengths of a natural
size. More
precisely, we require that the scattering length takes the value $a =
\alpha_l/m_l$
($a = \alpha_s/m_s$) with a dimensionless constant $| \alpha_l | \sim 1$ ($|
\alpha_s | \sim 1$)
when the short-range (long-range) interaction is switched off. This leads to
\begin{equation}
\label{strengths}
v_l = -\frac{8 \pi m_l^3 \alpha _l}{m \left(\alpha _l m_s^2+m_l^2 \alpha _l-2
m_s^2\right)}\,,
\quad
v_s = -\frac{4 \pi m_s \alpha _s}{m \left(\alpha _s-1\right)}\,.
\end{equation}
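The tuning conditions behind Eq.~(\ref{strengths}) can be checked in a few lines: with one interaction switched off, the zero-energy amplitude of the remaining rank-1 potential, $T(0,0;0)=v\,F(0)^2/[1-v\,G(0)]$, must reproduce $a=\alpha_l/m_l$ or $a=\alpha_s/m_s$, respectively. A minimal numerical sketch (parameter values chosen purely for illustration):

```python
import math

# Illustrative parameters with m_l << m_s and natural-sized alphas.
m, m_l, m_s = 1.0, 0.1, 1.0
alpha_l, alpha_s = 1.5, 0.7

# Coupling strengths of Eq. (strengths).
v_l = -8 * math.pi * m_l**3 * alpha_l / (
    m * (alpha_l * m_s**2 + m_l**2 * alpha_l - 2 * m_s**2))
v_s = -4 * math.pi * m_s * alpha_s / (m * (alpha_s - 1))

# Zero-energy bubbles 4 pi m Int l^2 dl/(2 pi)^3 F(l)^2/(0 - l^2),
# evaluated in closed form for the two form factors.
G_l0 = -m * (m_l**2 + m_s**2) / (8 * math.pi * m_l**3)
G_s0 = -m / (4 * math.pi * m_s)

def scattering_length(v, F0, G0):
    """a = (m/4 pi) T(0,0;0) for a rank-1 separable potential v F(p) F(p')."""
    return m / (4 * math.pi) * v * F0**2 / (1 - v * G0)

a_long = scattering_length(v_l, m_s / m_l**2, G_l0)   # long range only
a_short = scattering_length(v_s, 1.0 / m_s, G_s0)     # short range only
```

Both results agree with the advertised values $\alpha_l/m_l$ and $\alpha_s/m_s$ to machine precision.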
One then finds the following expressions for the first three terms in the
``chiral'' expansion in Eq.~(\ref{EREexpanded}).
\begin{itemize}
\item
Scattering length:
\begin{equation}
\label{LET1}
\alpha_a^{(0)} = \alpha_l \,, \quad \quad
\alpha_a^{(1)} = (\alpha_l -1 )^2 \alpha_s \,, \quad \quad
\alpha_a^{(2)} = (\alpha_l -1 )^2 \alpha_l \alpha_s^2 \,.
\end{equation}
\item
Effective range:
\begin{eqnarray}
\label{LET2}
\alpha_r^{(0)} &=& \frac{3 \alpha_l - 4}{\alpha_l} \,, \quad \quad
\alpha_r^{(1)} = \frac{2 \left(\alpha _l-1\right) \left(3 \alpha
_l-4\right) \alpha _s}{\alpha _l^2} \,,\nonumber \\
\alpha_r^{(2)} &=& \frac{\left(\alpha _l-1\right) \left(3 \alpha _l-4\right)
\left(5 \alpha _l-3\right) \alpha _s^2+\left(2-\alpha _l\right) \alpha
_l^2}{\alpha _l^3} \,.
\end{eqnarray}
\item
First shape parameter:
\begin{eqnarray}
\label{LET3}
\alpha_{v_2}^{(0)} &=& \frac{\alpha_l - 2}{2 \alpha_l} \,,\quad \quad
\alpha_{v_2}^{(1)} = \frac{\left[\alpha _l \left(13 \alpha
_l-36\right)+24\right] \alpha _s}{4 \alpha _l^2} \,, \nonumber \\
\alpha_{v_2}^{(2)} &=&\frac{\left\{\alpha _l \left[\alpha _l \left(46 \alpha
_l-159\right)+174\right]-60\right\} \alpha _s^2-4 \left(\alpha
_l-2\right) \alpha _l^2}{4
\alpha _l^3}\,.
\end{eqnarray}
\item
Second shape parameter:
\begin{eqnarray}
\label{LET4}
\alpha_{v_3}^{(0)} &=& 0\,, \quad \quad
\alpha_{v_3}^{(1)} = \frac{\left(\alpha _l-2\right) \left(3 \alpha _l-4\right)
\alpha _s}{2 \alpha _l^2} \,, \nonumber \\
\alpha_{v_3}^{(2)} &=& \frac{\left(3 \alpha _l-4\right) \left[\alpha _l \left(25
\alpha _l-68\right)+40\right] \alpha _s^2-4 \left(\alpha _l-2\right)
\alpha _l^2}{8 \alpha _l^3} \,.
\end{eqnarray}
\end{itemize}
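The coefficients in Eq.~(\ref{LET1}) follow from expanding the exact scattering length of the model, quoted as $a_{\rm underlying}$ in section \ref{WeinbergCutoff}, in powers of $m_l/m_s$; this is a quick computer-algebra exercise:

```python
import sympy as sp

m_l, m_s = sp.symbols('m_l m_s', positive=True)
al, als = sp.symbols('alpha_l alpha_s', positive=True)

# Exact scattering length of the two-range model (a_underlying).
a = (m_l * (2 * al - 1) * als - al * m_s) / (m_l * (m_l * al * als - m_s))

# Expand m_l * a in powers of x = m_l/m_s.
x = sp.symbols('x', positive=True)
expr = sp.cancel((a * m_l).subs(m_l, x * m_s))
expansion = sp.series(expr, x, 0, 3).removeO()
coeffs = [sp.simplify(sp.expand(expansion).coeff(x, n)) for n in range(3)]
```

The three coefficients reproduce $\alpha_a^{(0)}=\alpha_l$, $\alpha_a^{(1)}=(\alpha_l-1)^2\alpha_s$ and $\alpha_a^{(2)}=(\alpha_l-1)^2\alpha_l\alpha_s^2$ of Eq.~(\ref{LET1}).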
Notice that the ERE is not applicable in the case $\alpha_l \to 0$ as follows
immediately from the considerations based on the Born approximation. We further
stress that in our model the leading terms in the $m_l/m_s$-expansion of the ERE
coefficients are completely fixed by the long-range interaction. The scenario
realized corresponds to a strong (at momenta $k \sim
m_l$) long-range interaction which needs to be treated non-perturbatively and
a weak short-range interaction which can be taken into account
perturbatively. We, however, emphasize that this particular hierarchy is not
important for our purposes.
\subsection{KSW-like approach and the low-energy theorems}
\label{LET}
Various coefficients in the ERE are \emph{correlated} with each other as a
consequence of the long-range interaction. In the context of effective (field)
theory, such correlations are to be regarded as low-energy theorems.
They have been discussed for the realistic case of nucleon-nucleon
interaction within the KSW scheme in
Refs.~\cite{Cohen:1998jr,Cohen:1999iaa,Cohen:1999ds} and were
shown to fail badly in the $^1S_0$ and $^3S_1$--$^3D_1$ channels. This failure
is a clear signal towards the non-perturbative nature of the one-pion exchange
in these channels. At the qualitative level, the low-energy theorems in the
pionful EFT simply reflect the hierarchy $M_\pi \ll \Lambda_{\rm hard}$
between the soft and hard scales in the problem, which set the upper bounds for
the convergence radii of the
ERE and chiral expansion, respectively. We will specify the precise meaning of the
low-energy theorems for the case at hand in the following.
We now develop EFT for the model specified above by keeping the long-range
interaction and replacing the short-range potential by a series of contact
zero-range interactions:
\begin{equation}
V_{\rm short} (p, \, p' ) = C_0 + C_2 (p^2 + {p'} ^2) + \ldots \,,
\end{equation}
where $C_{2n}$ are low-energy constants.
We begin with the most convenient and elegant
formulation which respects the standard dimensional power counting. To achieve
that we use subtractive renormalization for all divergent integrals and choose
the subtraction constant $\mu \sim m_l$. We expand the long-range
interaction in powers of $p/m_s$ in order to prevent the appearance of
positive powers of the large scale in the expressions for renormalized loop
diagrams which
would spoil the power counting. We also expand the strength of the long-range
interaction $v_l$ in Eq.~(\ref{strengths}) in powers of $m_l/m_s$ although
this is not necessary to maintain the power
counting. Here and in the following, we refer to
this approach as KSW-like. To be specific, we compute
the first few terms in the $Q/\lambda$-expansion of the T-matrix with $Q=\{k,
m_l , \mu \}$ and $\lambda = \{ m_s, m \}$. Notice that the natural size of
the short-range effects in our model suggests the scaling of the short-range
interactions in agreement with the naive dimensional analysis, i.e.~$C_{2n}
\sim Q^0$. The leading contribution to the T-matrix at order $Q^{-1}$ is
generated by the leading term in the $Q/\lambda$-expansion of the long-range
interaction
\begin{eqnarray}
\label{long}
V_{\rm long} (p, \, p' ) &=& v_{l} \, F_{l}(p)\, F_{l}(p') \\
&\simeq&-
\frac{8 \pi m_l^3 \alpha _l}{m \left(\alpha _l-2\right) ( p^2 + m_l^2)( {p
'}^2 + m_l^2)}
\left[ 1 -\frac{\alpha _l m_l^2 }{\left(\alpha _l-2\right) m_s^2}
+ \frac{p^2}{2 m_s^2} + \frac{{p '}^2}{2 m_s^2} + \mathcal{O} \left(
\frac{Q^4}{\lambda^4}
\right)\right] \nonumber
\end{eqnarray}
which scales as $Q^{-1}$ and, therefore,
needs to be summed up to infinite order, see Fig.~\ref{fig1}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=\textwidth,keepaspectratio,angle=0,clip]{perturbative.eps}
\vskip 0.5 true cm
\caption{Leading, next-to-leading and next-to-next-to-leading order
contributions to the scattering amplitude in the KSW-like approach. The
solid lines denote nucleons while the dashed ones represent an insertion
of the lowest-order long-range interaction. Solid dots (dotted lines)
denote an insertion of the lowest-order contact interaction $\propto
C_0$ (subleading
contribution to the long-range interaction).
\label{fig1}
}
\end{center}
\end{figure}
This leads to the following expression for the on-the-energy-shell T-matrix:
\begin{equation}
T^{(-1)}=-\frac{8 \pi m_l^3 \alpha _l}{m \left(k-i m_l\right){}^2 \left[k^2
\left(\alpha _l-2\right)+2 i k m_l \left(\alpha _l-2\right)+2
m_l^2\right]} \,,
\end{equation}
from which one deduces
\begin{eqnarray}
k \cot \delta &=& - \frac{4 \pi}{m} \frac{1}{T^{(-1)}} + i k \nonumber \\
&=& {} -\frac{m_l}{\alpha _l} + \frac{ \left(3
\alpha _l-4\right)}{2 m_l \alpha _l} k^2 +
\frac{\left(\alpha _l-2\right)}{2 m_l^3 \alpha _l} k^4 \,.
\end{eqnarray}
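The step from $T^{(-1)}$ to this polynomial can be verified by computer algebra:

```python
import sympy as sp

k, m, m_l, al = sp.symbols('k m m_l alpha_l', positive=True)

# Leading-order amplitude T^(-1) quoted above.
T = -8 * sp.pi * m_l**3 * al / (
    m * (k - sp.I * m_l)**2
    * (k**2 * (al - 2) + 2 * sp.I * k * m_l * (al - 2) + 2 * m_l**2))

kcot = sp.simplify(sp.expand(-4 * sp.pi / (m * T) + sp.I * k))

# Expected effective range function at leading order.
expected = (-m_l / al + (3 * al - 4) / (2 * m_l * al) * k**2
            + (al - 2) / (2 * m_l**3 * al) * k**4)
difference = sp.simplify(sp.expand(kcot - expected))
```

Note that the imaginary parts cancel exactly, so $k\cot\delta$ comes out as a real, even polynomial in $k$.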
Not surprisingly, one observes that the leading terms in the expansion of the ERE
coefficients in Eq.~(\ref{EREexpanded}) are correctly reproduced.
The first correction to the scattering amplitude at order $Q^0$ is given by
the leading-order contact interaction dressed with the iterated leading
long-range interaction as visualized in
Fig.~\ref{fig1}. One finds
\begin{equation}
T^{(0)} = \frac{C_0 \left(k+i m_l\right){}^2 \left[k^2 \left(\alpha
_l-2\right)+2 m_l^2 \left(\alpha _l-1\right)\right]{}^2}{\left(k-i
m_l\right){}^2 \left[k^2 \left(\alpha
_l-2\right)+2 i k m_l \left(\alpha _l-2\right)+2 m_l^2\right]{}^2}.
\end{equation}
Notice that all integrals entering $T^{(-1)}$ and $T^{(0)}$ are finite.
The effective range function $k \cot \delta$ at NLO
can be computed via
\begin{equation}
k \cot \delta = - \frac{4 \pi}{m} \frac{1}{T^{(-1)}} \bigg( 1 -
\frac{T^{(0)}}{T^{(-1)}} \bigg) + i k \,.
\end{equation}
The ``chiral'' expansion of the coefficients in the ERE results from
expanding the right-hand side of this
equation in powers of $k^2$ and, subsequently, in powers of $m_l$. The
LEC $C_0$ can be determined from matching to $\alpha_a^{(1)}$ in
Eq.~(\ref{LET1}) which yields
\begin{equation}
\label{C0_LO}
C_0 = \frac{4 \pi \alpha_s}{m m_s}\,.
\end{equation}
This leads to the following predictions for $r$, $v_2$ and $v_3$:
\begin{eqnarray}
r^{\rm NLO}&=&\frac{1}{m_l} \Bigg[\frac{3 \alpha_l-4}{\alpha _l}
+\frac{2 \left( \alpha_l-1\right) \left(3 \alpha _l-4\right)
\alpha _s}{\alpha _l^2 m_s} m_l \Bigg]\,,\nonumber \\
v_2^{\rm NLO}&=&\frac{1}{m_l^3} \Bigg[ \frac{\left(\alpha _l-2\right)}{2
\alpha _l}+
\frac{\left(\alpha _l \left(13 \alpha _l-36\right)+24\right) \alpha _s}
{4\alpha _l^2
m_s} m_l \Bigg] \,, \nonumber \\
v_3^{\rm NLO}&=&\frac{1}{m_l^4} \, \frac{\left(\alpha _l-2\right) \left(3
\alpha _l-4\right) \alpha _s}{2 \alpha _l^2 m_s} \,.
\end{eqnarray}
One observes that $\alpha_r^{(1)}$, $\alpha_{v_2}^{(1)}$ and
$\alpha_{v_3}^{(1)}$ are correctly
reproduced at NLO. Using dimensional analysis it is easy to verify
that, in fact, $\alpha_{v_i}^{(1)}$ for
all $i$ \emph{must} be reproduced correctly at this order.
Finally, at next-to-next-to-leading order (NNLO) one has to
take into account the leading corrections to the long-range potential in
Eq.~(\ref{long}) and the contribution due to once
iterated leading-order contact term. Clearly, these contributions have to be
dressed
by the iterated leading long-range interaction, see Fig.~\ref{fig1}.
The contribution $\propto C_0^2$ involves a linearly divergent integral which
we regularize with a cutoff $\Lambda \gg m_l$:
\begin{equation}
\label{defenitionI1}
I_1^{\rm reg} \equiv 4 \pi m \int_0^\Lambda \frac{l^2 dl}{(2 \pi)^3}
\frac{1}{k^2 - l^2 + i \epsilon} = - \frac{m \Lambda}{2 \pi^2} - i \frac{m k
}{4 \pi} + \mathcal{O} (\Lambda^{-1}) \,.
\end{equation}
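Both the linear growth of $I_1^{\rm reg}$ with the cutoff and the finiteness of the subtracted integral introduced below can be checked numerically; in the sketch (illustrative values of $m$, $k$ and of the subtraction point $\mu \sim m_l$) the principal value of the radial integral is taken in closed form:

```python
import math

# Illustrative values; mu plays the role of the subtraction point ~ m_l.
m, k, mu = 1.0, 0.1, 0.15

def I1_reg(Lam):
    """Eq. (defenitionI1): PV Int_0^Lam l^2 dl/(k^2 - l^2) in closed form,
    -Lam + (k/2) log((Lam+k)/(Lam-k)), plus the exact imaginary part."""
    pv = -Lam + 0.5 * k * math.log((Lam + k) / (Lam - k))
    return m * pv / (2 * math.pi**2) - 1j * m * k / (4 * math.pi)

def I1_subtr(Lam):
    """Add back m/(2 pi^2) * (Lam - mu), i.e. remove the divergent part."""
    return I1_reg(Lam) + m * (Lam - mu) / (2 * math.pi**2)

# As Lam -> infinity the subtracted integral approaches a finite limit:
limit = -m * mu / (2 * math.pi**2) - 1j * m * k / (4 * math.pi)
```

The subtracted integral is cutoff independent up to $\mathcal{O}(\Lambda^{-1})$ terms, while $I_1^{\rm reg}$ itself diverges linearly.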
We carry out renormalization by subtracting the divergent
part of the integral $- m/(2 \pi^2) \int_\mu^\Lambda dl$, taking the limit $\Lambda
\to \infty$,
\begin{equation}
I_1^{\rm reg} \to I_1^{\rm subtr} = - \frac{m \mu}{2 \pi^2} - i \frac{m k
}{4 \pi} \,,
\end{equation}
and replacing the bare $C_0$ by the renormalized one $C_0 (\mu)$. As already
pointed out before, we choose $\mu \sim m_l$ in order to be consistent with
the standard power counting based on the dimensional analysis. Clearly, the
above procedure is exactly
equivalent to the power divergence subtraction prescription utilized in the
KSW framework. A simple calculation yields the following result for the
sub-subleading contribution to the amplitude:
\begin{eqnarray}
T^{(1)} &=& \frac{\left(k+i m_l\right){}^2}{4 \pi ^2 m m_s^2 \left(k-i
m_l\right){}^2 \left[k \alpha _l \left(k+2 i m_l\right)-2 \left(k+i
m_l\right){}^2\right]{}^2} \Bigg[
-32 \pi ^3 k^2 m_l^3 \left(\alpha _l-2\right) \alpha _l \nonumber \\
&& {}+ \left( C_0 (\mu)\right) ^2 m^2 m_s^2 \left[k^2 \left(\alpha
_l-2\right)+2 m_l^2
\left(\alpha _l-1\right)\right]{}^2 \\
&& {} \times \frac{
\alpha _l \left[k^2 (-2 \mu
-i
\pi k)+2 k (\pi k-2 i \mu ) m_l+2 \pi m_l^3\right]+2 (2 \mu +i
\pi k) \left(k+i m_l\right){}^2}{k \alpha _l \left(k+2 i
m_l\right)-2 \left(k+i m_l\right){}^2} \Bigg].
\nonumber
\end{eqnarray}
The LEC $C_0( \mu )$ can be written in terms of the perturbative expansion as
follows
\begin{equation}
C_0( \mu ) = C_0^{(0)} + C_0^{(1)}( \mu ) + \ldots \,,
\end{equation}
where the superscript refers to the power of the soft scale $Q$. The first term
does not depend on $\mu$ and equals $C_0$ in Eq.~(\ref{C0_LO}).
The $\mu$-dependence of $C_0^{(1)} (\mu )$ can be determined by solving the
renormalization group equation
\begin{equation}
\frac{d}{d \mu} \bigg[ T^{(-1)} + T^{(0)} + T^{(1)} \bigg] = 0\,.
\end{equation}
One additional input parameter, such as $\alpha_a^{(2)}$, is needed
in order to fix the
integration constant. This leads to
\begin{equation}
\label{C0_NLO}
C_0^{(1)} = \frac{8 \mu \alpha _s^2}{m m_s^2}\,.
\end{equation}
It is then easy to verify that the scattering amplitude $T^{(-1)} + T^{(0)} +
T^{(1)}$ is $\mu$-independent up to terms of order $Q^2$. Further, the
effective range function is given at this order by
\begin{equation}
k \cot \delta = - \frac{4 \pi}{m} \frac{1}{T^{(-1)}} \Bigg[ 1 -
\frac{T^{(0)}}{T^{(-1)}} +
\left( \frac{T^{(0)}}{T^{(-1)}} \right)^2 -
\frac{T^{(1)}}{T^{(-1)}}
\Bigg] + i k \,.
\end{equation}
One then obtains the following predictions for the ERE coefficients:
\begin{eqnarray}
r^{\rm NNLO}&=&\frac{1}{m_l} \Bigg[\frac{3\alpha _l -4}{\alpha _l}
+ \frac{2 \left(\alpha _l-1\right) \left(3 \alpha _l-4\right)
\alpha _s}{\alpha _l^2 m_s} m_l \nonumber \\
&& {} +\frac{\left(\alpha _l-1\right) \left(3
\alpha _l-4\right) \left(5 \alpha _l-3\right) \alpha _s^2+\left(2-\alpha
_l\right) \alpha _l^2}{\alpha _l^3 m_s^2} m_l^2 \nonumber \\
&& {} -\frac{4 \mu m_l \left(\alpha _l-1\right) \left(3 \alpha _l-4\right)
\alpha _s^3 \left(\pi m_l \left(3-5 \alpha _l\right)+4 \mu \alpha
_l\right)}{\pi ^2 \alpha _l^3 m_s^3}
+ \mathcal{O} \left( Q^4 \right)\Bigg] \,, \nonumber \\
v_2^{\rm NNLO}&=&\frac{1}{m_l^3} \Bigg[ \frac{\alpha _l-2}{2 \alpha _l}
+
\frac{\left(\alpha _l \left(13 \alpha _l-36\right)+24\right) \alpha _s}{4
\alpha _l^2 m_s} m_l \nonumber \\
&& {}
+\frac{\left(\alpha _l \left(\alpha _l \left(46 \alpha
_l-159\right)+174\right)-60\right) \alpha _s^2-4
\left(\alpha _l-2\right) \alpha _l^2}{4 \alpha _l^3 m_s^2} m_l^2 \nonumber \\
&& {} + \frac{\mu m_l \alpha _s^3 \left(\pi m_l \left(\alpha _l \left(\alpha _l
\left(46 \alpha _l-159\right)+174\right)-60\right)-2 \mu \alpha _l
\left(\alpha _l \left(13 \alpha _l-36\right)+24\right)\right)}{\pi ^2
\alpha _l^3 m_s^3} \nonumber \\
&& {}
+ \mathcal{O} \left( Q^4 \right)\Bigg] \,,\nonumber \\
v_3^{\rm NNLO}&=&\frac{1}{m_l^5} \Bigg[
\frac{\left(\alpha _l-2\right) \left(3 \alpha _l-4\right) \alpha _s}{2
\alpha _l^2 m_s} m_l + \frac{\left(3 \alpha _l-4\right) \left(\alpha _l
\left(25 \alpha _l-68\right)+40\right) \alpha _s^2-4 \left(\alpha
_l-2\right) \alpha _l^2}{8 \alpha _l^3 m_s^2} m_l^2 \nonumber \\
&& {} + \frac{\mu m_l \left(3 \alpha _l-4\right) \alpha _s^3 \left(\pi m_l
\left(\alpha _l \left(25 \alpha _l-68\right)+40\right)-8 \mu \left(\alpha
_l-2\right) \alpha _l\right)}{2 \pi ^2 \alpha _l^3 m_s^3}
+ \mathcal{O} \left( Q^4 \right)\Bigg] \,,
\end{eqnarray}
where $Q = \{ m_l , \; \mu \}$.
As expected, the first three terms in the ``chiral'' expansion of \emph{all}
ERE coefficients are reproduced correctly at NNLO. Notice further that the
contributions beyond the order of accuracy of the calculation are explicitly
renormalization-scale dependent, see section \ref{pionless} for a general
discussion.
The above results reveal
the meaning of the LETs in the present context. All $i$-th terms
$\alpha_x^{(i)}$ in the ``chiral'' expansion of the coefficients in the ERE, $x =
\{ a, \, r, \, v_2, \, \ldots \}$ are correlated with each other due to the
long-range interaction and its interplay with the short-range interaction in
the underlying model. The knowledge of $\alpha_{x_j}^{(i)}$ for one particular
$x_j$ is sufficient to predict $\alpha_{x_k}^{(i)}$ for all $k \neq j$. In an
EFT, short-range physics is incorporated in a
systematic way by taking into account contact interactions with an increasing
number of derivatives. Matching the strengths of the corresponding LECs to the
first $n$ terms in the ``chiral'' expansion of some of the ERE coefficients
allows one to correctly describe the ``chiral'' expansion of
\emph{all} ERE coefficients up to order $m_l^n/m_s^n$. It should be emphasized
that at low energies and in the absence of external sources, the
appearance of the above mentioned correlations is \emph{the only} signature of the
long-range interaction in the 2N system.
\subsection{Weinberg-like approach with a finite cutoff}
\label{WeinbergCutoff}
An EFT formulation like the one described above which respects the
manifest power counting at every stage of the calculation is not
available in the general case of a long-range interaction which
is strong enough to require a non-perturbative treatment, such as the
one-pion exchange potential.
Here, one lacks a regularization prescription for \emph{all} divergent integrals
resulting from iterations of the potential in the LS equation
which would keep regularization artefacts small without, at the same time,
introducing a new hard
scale in the problem.
In the context of pionful EFT for few-nucleon systems,
the divergent
integrals are usually dealt with by introducing an UV cutoff $\Lambda$. In
order to keep regularization artefacts small, the cutoff, ideally, needs to be
taken of the order $\Lambda \sim m_s$ or higher. Clearly, this spoils the manifest
power counting for regularized loop contributions.\footnote{This, however,
does not
mean a breakdown of EFT since power counting is only required for the
\emph{renormalized} scattering amplitude.} We now consider the
Weinberg-like formulation in which the effective potential, given by the
long-range interaction and a series of contact terms, is iterated in
the LS equation to all orders, see the work by Lepage \cite{Lepage:1997} for a
related discussion. This is visualized in Fig.~\ref{fig2}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=14cm,keepaspectratio,angle=0,clip]{nonperturbative.eps}
\vskip 0.5 true cm
\caption{Effective potential and scattering amplitude in the Weinberg-like
approach. The dashed-dotted line refers to the full long-range
interaction. Solid dot and filled rectangle refer to the leading and
subleading contact interactions, respectively. For remaining notation
see Fig.~\ref{fig1}.
\label{fig2}
}
\end{center}
\end{figure}
We carry out renormalization by literally following
the steps outlined in Ref.~\cite{Lepage:1997} and summarized
in Ref.~\cite{Gasser:1982ap} in the following way: ``The
theory is fully specified by the values of the bare constants ...
once a suitable regularization procedure is chosen. In principle,
the renormalization program is straightforward: one calculates
quantities of physical interest in terms of the bare parameters at
given, large value of (ultraviolet cutoff) $\Lambda$. Once a
sufficient number of physical quantities have been determined as
functions of the bare parameters one inverts the result and
expresses the bare parameters in terms of physical quantities,
always working at some given, large value of $\Lambda$. Finally,
one uses these expressions to eliminate the bare parameters in all
other quantities of physical interest. Renormalizability
guarantees that this operation at the same time also eliminates
the cutoff.'' Notice that by iterating the truncated expansion for the
effective potential in the LS equation one unavoidably generates
higher-order contributions without being able to absorb all arising
divergences into a redefinition of the LECs present in the considered truncated
potential. Thus, for the case at hand,
cutoff dependence in observables is expected to be eliminated only up to the
considered order
in the EFT expansion. We further emphasize that expressing the bare parameters
(i.~e.~LECs $C_i$) in terms of physical quantities is a non-trivial step due
to a nonlinear dependence of the scattering amplitude on $C_i$. The resulting
nonlinear equations may have no real solutions, especially when $\Lambda$ is
chosen to be considerably larger than $m_s$.
As pointed out in Ref.~\cite{Lepage:1997}, ``In fact, as nonlinearities
develop for small $a$'s,\footnote{In this work, $a$ refers to a
coordinate-space cutoff, $a \sim \Lambda^{-1}$.} results often degrade, or,
in more extreme cases, the theory may become unstable or untunable''. The Wigner
bound in pionless EFT \cite{Phillips:1997xu} and the repulsive NN channels in the
pionful case \cite{PavonValderrama:2005wv,PavonValderrama:2005uj} may serve as examples for such an
untunable theory, see Ref.~\cite{Epelbaum:2006pt} for another example.
In the following, we consider the low-energy theorems for our model and
demonstrate explicitly that removing the
cutoff from the scattering amplitude by taking the limit $\Lambda \to \infty$
is not compatible with the EFT framework even if such a
limit exists and the theory does not become untunable.
To be specific, we consider the effective potential of the form
\begin{equation}
V_{\rm eff}^{(1)} (p ,\, p') = v_{l} \, F_{l}(p)\, F_{l}(p')
+ C_0 \,,
\end{equation}
where the superscript of $V_{\rm eff}$ refers to the number of contact terms
included.
The off-shell T-matrix $T^{(1)} (p,\, p'; \, k)$ can be easily calculated by
solving the $2 \times 2$ matrix equation
\begin{equation}
\label{LSmatrix}
t (k) = v_{\rm eff} + v_{\rm eff} \, \mathcal{G}(k) \, t(k)\,,
\end{equation}
where we have defined
\begin{equation}
V_{\rm eff}^{(1)} (p ,\, p') =\gamma^T (p) \, v_{\rm eff} \, \gamma (p '), \quad
T^{(1)} (p ,\, p', \, k) =\gamma^T (p) \, t (k) \, \gamma (p ')\,,
\end{equation}
with
\begin{equation}
v_{\rm eff} \equiv \left( \begin{array}{cc} v_l & 0 \\ 0 & C_0 \end{array}
\right)\,, \quad
\gamma ( p) \equiv \left( \begin{array}{c} F_l (p) \\ 1 \end{array}
\right)\,, \quad
\mathcal{G}(k) \equiv \left( \begin{array}{cc} I_l(k) & I_{l1}^{\rm reg}(k) \\
I_{l1}^{\rm reg} (k)
& I_1^{\rm reg} (k) \end{array}
\right)\,.
\end{equation}
The integrals entering $\mathcal{G}(k)$ are given by
\begin{eqnarray}
\label{otherintegrals}
I_l (k) &=& 4 \pi m \int_0^\infty \frac{l^2 \, dl}{(2 \pi)^3}
\frac{l^2+ m_s^2}{[k^2 - l^2 + i \epsilon][l^2 + m_l^2]^2} \nonumber \\
&=& \frac{m \left(-2 i k m_l+m_l^2+m_s^2\right)}{8 \pi m_l \left(k+i
m_l\right){}^2} \,, \nonumber \\
I_{l1}^{\rm reg} (k) &=& 4 \pi m \int_0^\Lambda \frac{l^2 \, dl}{(2 \pi)^3}
\frac{\sqrt{l^2+ m_s^2}}{[k^2 - l^2 + i \epsilon][l^2 + m_l^2]} \nonumber \\
&=& \frac{m}{2 \pi^2}
\Bigg( k \frac{\sqrt{k^2 + m_s^2}}{k^2 + m_l^2} \ln \Bigg(\frac{k + \sqrt{k^2
+ m_s^2}}{m_s} \Bigg) -
\frac{m_l \sqrt{m_s^2 - m_l^2}}{k^2 + m_l^2} \,
\text{arccot} \, \Bigg(
\frac{m_l}{\sqrt{m_s^2 - m_l^2}} \Bigg) \nonumber \\
&& {} + \ln \left(\frac{m_s}{2 \Lambda }\right) - \frac{i \pi k
\sqrt{k^2+m_s^2}}{2 \left(k^2+m_l^2\right)} \Bigg) + \mathcal{O}
(\Lambda^{-1})\,,
\end{eqnarray}
and the integral $I_1^{\rm reg} (k)$ is defined in Eq.~(\ref{defenitionI1}).
Here and in what follows, we keep the cutoff $\Lambda$ at least of the
order $\Lambda \sim m_s$. We will, therefore, omit the finite
cutoff artefacts in order to keep the presentation
simple, i.~e.~we neglect the $\mathcal{O} (\Lambda^{-1})$-terms in
Eqs.~(\ref{defenitionI1}) and (\ref{otherintegrals}). The reader can easily
verify that taking into account finite cutoff artefacts
(i.e. terms with negative powers of $\Lambda$) in the expressions for
regularized loop integrals does not alter the conclusions of this work.
With the above definitions, the LS equation (\ref{LSmatrix}) can be easily
solved, leading to a somewhat lengthy expression for the on-shell T-matrix
$T^{(1)}(k, \, k; \, k)$ which can be used to extract the coefficients in the
ERE.
One obtains for the scattering length
\begin{equation}
\label{aWeinb1}
a^{(1)} = \frac{\pi m_s \left\{C_0 m \left[2 \alpha _l \left(m_s \left(\Lambda
-\text{s} m_l\right)+2 m_l^2 \ln (m_s/2 \Lambda )
\right)+\pi m_l m_s\right]+4 \pi ^2 \alpha _l m_s\right\}}{m_l
\left\{2 \pi m_s^2 \left(C_0 m \Lambda +2 \pi ^2\right)-C_0 m m_l
\alpha _l \left[\text{s} m_s-2 m_l \ln (m_s/2 \Lambda)
\right]^2\right\}} \,,
\end{equation}
where we have introduced
\begin{equation}
s \equiv 2 \frac{\sqrt{m_s^2-m_l^2}}{m_s}
\text{arccot}\left(\frac{m_l}{\sqrt{m_s^2-m_l^2}}\right)\,.
\end{equation}
One can, in principle, determine the LEC $C_0$ by expanding $a^{(1)}$ in
powers of $m_l$
and matching the second term in this expansion to $\alpha_a^{(2)}$ as we did
in the case of the KSW approach. However, in practice, the ``chiral'' expansion
of the coefficients in the ERE is not available. We, therefore, determine $C_0$ for
a given value of the cutoff $\Lambda$, i.e.~$C_0(\Lambda)$, by
matching $a^{(1)}$ to the full expression of the scattering length
resulting in our model
\begin{equation}
a_{\rm underlying} = \frac{m_l \left(2 \alpha _l-1\right) \alpha _s-\alpha _l
m_s}{m_l \left(m_l \alpha _l \alpha _s-m_s\right)}\,,
\end{equation}
which we regard as synthetic data.
This leads to the following result for $C_0 (\Lambda)$:
\begin{eqnarray}
\label{C0Weinberg}
C_0 (\Lambda) &=& 4 \pi ^3 \left(\alpha _l-1\right){}^2 m_s^2 \alpha _s \Bigg\{
m m_s \left[m_s \left(\pi -\text{s} \alpha _l\right)+2 m_l \alpha _l \ln
\frac{m_s}{2 \Lambda }\right]{}^2 \nonumber \\
&&
-m \alpha _s \Bigg[m_s^2 \left(\alpha _l \left((\pi -\text{s}) m_l \left(-2
\text{s} \alpha
_l+\text{s}+\pi \right)
+2 \pi \Lambda \left(\alpha _l-2\right)\right)+2 \pi \Lambda \right)
\nonumber \\
&& {} +4
m_l^2 \alpha _l m_s \left((\pi -2
\text{s}) \alpha _l+\text{s}\right) \ln \frac{m_s}{2 \Lambda
}+4 m_l^3 \alpha _l \left(2 \alpha _l-1\right) \bigg( \ln
\frac{m_s}{2 \Lambda } \bigg)^2\Bigg] \Bigg\}^{-1}\,.
\end{eqnarray}
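This matching procedure is straightforward to carry through numerically. The sketch below (illustrative parameter values, not the authors' code; we set $\Lambda = m_s$ and drop the $\mathcal{O}(\Lambda^{-1})$ artefacts as above) tunes $C_0$ to reproduce $a_{\rm underlying}$, which at $k=0$ reduces to a condition linear in $1/C_0 - I_1(0)$, and then extracts the effective range at small $k$; in accordance with the low-energy theorems one finds $r\,m_l \approx \alpha_r^{(0)}$ up to corrections of order $m_l/m_s$:

```python
import math

# Illustrative parameters; the cutoff is kept at the hard scale, Lam = m_s.
m, m_l, m_s = 1.0, 0.1, 1.0
alpha_l, alpha_s = 1.5, 0.7
Lam = m_s

# Long-range strength from Eq. (strengths).
v_l = -8 * math.pi * m_l**3 * alpha_l / (
    m * (alpha_l * m_s**2 + m_l**2 * alpha_l - 2 * m_s**2))

def F_l(p):
    return math.sqrt(p**2 + m_s**2) / (p**2 + m_l**2)

# Regularized loop integrals, O(1/Lam) artefacts dropped as in the text.
def I_l(k):
    return m * (-2j * k * m_l + m_l**2 + m_s**2) / (
        8 * math.pi * m_l * (k + 1j * m_l)**2)

def I_l1(k):
    w = math.sqrt(k**2 + m_s**2)
    return m / (2 * math.pi**2) * (
        k * w / (k**2 + m_l**2) * math.log((k + w) / m_s)
        - m_l * math.sqrt(m_s**2 - m_l**2) / (k**2 + m_l**2)
        * math.atan(math.sqrt(m_s**2 - m_l**2) / m_l)
        + math.log(m_s / (2 * Lam))
        - 1j * math.pi * k * w / (2 * (k**2 + m_l**2)))

def I_1(k):
    return -m * Lam / (2 * math.pi**2) - 1j * m * k / (4 * math.pi)

def T_onshell(k, C0):
    """On-shell T from the 2x2 system t = (v_eff^{-1} - G)^{-1}."""
    a11 = 1.0 / v_l - I_l(k)
    a22 = 1.0 / C0 - I_1(k)
    a12 = -I_l1(k)
    det = a11 * a22 - a12**2
    f = F_l(k)
    return (f**2 * a22 - 2 * f * a12 + a11) / det

# Synthetic datum: the exact scattering length of the underlying model.
a_target = (m_l * (2 * alpha_l - 1) * alpha_s - alpha_l * m_s) / (
    m_l * (m_l * alpha_l * alpha_s - m_s))

# Matching at k = 0 is linear in a22 = 1/C0 - I_1(0): no root finding needed.
T_t = 4 * math.pi * a_target / m
f0 = F_l(0.0)
a11 = 1.0 / v_l - I_l(0.0).real
a12 = -I_l1(0.0).real
a22 = (2 * f0 * a12 - a11 - T_t * a12**2) / (f0**2 - T_t * a11)
C0 = 1.0 / (a22 + I_1(0.0).real)

a_model = m * T_onshell(0.0, C0).real / (4 * math.pi)

# Effective range from k cot(delta) at small k.
kcot = lambda k: (-4 * math.pi / (m * T_onshell(k, C0)) + 1j * k).real
k1 = 1e-3
r_eff = 2 * (kcot(k1) + 1.0 / a_target) / k1**2
```

With these values the tuned amplitude reproduces the input scattering length exactly, and the extracted effective range agrees with $\alpha_r^{(0)}/m_l$ at the expected $\mathcal{O}(m_l/m_s)$ level.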
Having determined the LEC $C_0 (\Lambda )$, we are now in a position to
verify the low-energy theorems by making predictions for the effective range
and shape coefficients. A straightforward calculation yields the following
\emph{renormalized} expressions for
$r^{(1)}$, $v_2^{(1)}$ and $v_3^{(1)}$:
\begin{eqnarray}
\label{LETWeinberg}
r^{(1)} &=& \frac{1}{m_l} \Bigg[\frac{3 \alpha_l - 4}{\alpha _l}
+\frac{2 \left(\alpha _l-1\right) \left(3 \alpha _l-4\right) \alpha _s}{\alpha
_l^2 m_s} m_l + \Bigg( \frac{4 \left(\alpha _l-2\right) \alpha _s }{\pi
\alpha _l m_s^2} \left(\ln \frac{m_s}{2 \Lambda }+1\right) \nonumber \\
&& {} +
\frac{\left(\alpha _l-1\right) \left(3 \alpha _l-4\right) \left(5 \alpha
_l-3\right) \alpha _s^2+\left(2-\alpha _l\right) \alpha
_l^2}{\alpha _l^3 m_s^2} \Bigg) m_l^2 + \mathcal{O} \left( m_l^3
\right)\Bigg]\,,\nonumber \\
v_2^{(1)}&=&\frac{1}{m_l^3} \Bigg[ \frac{\alpha_l - 2}{2 \alpha _l}
+\frac{\left(\alpha _l \left(13
\alpha _l-36\right)+24\right) \alpha _s}{4 \alpha _l^2
m_s} m_l
+ \Bigg(\frac{ \left(\alpha _l-2\right) \left(5 \alpha _l-6\right) \alpha
_l^2 \alpha
_s }{ \pi \left(\alpha
_l-1\right) \alpha _l^3 m_s^2} \left(\ln \frac{m_s}{2 \Lambda }+1\right)\nonumber \\
&& {} +
\frac{\left(\alpha _l \left(\alpha _l \left(46 \alpha
_l-159\right)+174\right)-60\right) \alpha _s^2-4 \left(\alpha
_l-2\right) \alpha _l^2}{4 \alpha _l^3 m_s^2} \Bigg) m_l^2
+ \mathcal{O} \left( m_l^3 \right)\Bigg]\,, \nonumber \\
v_3^{(1)}&=&\frac{1}{m_l^5} \Bigg[ \frac{\left(\alpha _l-2\right)
\left(3 \alpha _l-4\right) \alpha _s}{2 \alpha _l^2 m_s} m_l
+ \Bigg(
\frac{2 \left(\alpha _l-2\right) \left(2 \alpha _l-3\right) \alpha _s
}{\pi
\left(\alpha _l-1\right) \alpha _l m_s^2}
\left(\ln \frac{m_s}{2 \Lambda } + 1 \right) \nonumber \\
&& {} +
\frac{\left(3 \alpha _l-4\right) \left(\alpha _l \left(25 \alpha
_l-68\right)+40\right) \alpha _s^2-4 \left(\alpha _l-2\right) \alpha
_l^2}{8
\alpha _l^3 m_s^2}
\Bigg) m_l^2 + \mathcal{O} \left( m_l^3 \right)\Bigg]\,.
\end{eqnarray}
Again, not surprisingly, one observes that the subleading terms in the
``chiral'' expansion of the ERE coefficients are correctly reproduced. In fact,
any quantum-mechanically well-defined short-range interaction accompanied by
the underlying long-range force would do an equally good job in describing the
correlations between the $\alpha_{x}^{(1)}$. While all coefficients
$\alpha_{x}^{(1)}$ have to be reproduced correctly at this order once the
short-range parameter entering the effective potential is appropriately tuned
(as guaranteed by the analytic structure of the scattering amplitude), there
is no restriction regarding higher-order terms in the ``chiral''
expansion.\footnote{A careful reader may realize that the sub-subleading
coefficients in the ``chiral'' expansion of $r$, $v_2$ and $v_3$ are also
correctly reproduced once the cutoff is tuned to the value $\Lambda = e
m_s/2$. This, in fact, also holds true for higher coefficients in the ERE
and can be traced back to the fact that in the case at hand, the cutoff
$\Lambda$ itself may
be considered as an additional short-range ``counterterm'' provided one
allows for a fine tuning of $\Lambda$. The resulting expressions are completely
equivalent to the next-higher-order calculation in the Weinberg-like
approach and provide explicit evidence for the validity of low-energy
theorems in that case.}
Indeed, one observes that the coefficients $\alpha_{r, \, v_i}^{(2)}$ in
Eq.~(\ref{LETWeinberg}) deviate from their correct values given in
Eqs.~(\ref{LET2})-(\ref{LET4}).
Moreover, since the included LEC is insufficient to absorb all divergences
arising from iterations of the LS equation, nothing prevents the appearance of
positive powers or logarithms of the cutoff $\Lambda$ in the expressions for
$\alpha_{r, v_i}^{(n)}$ with $n \geq 2$.\footnote{The appearance of
only logarithmic dependence on $\Lambda$ in Eq.~(\ref{LETWeinberg}) is
specific to the form of the long-range interaction and the order in the EFT
expansion. We have verified that positive powers of $\Lambda$ occur in
the expressions for $\alpha_{x}^{(3)}$ when one includes the
subleading contact interaction in the effective potential.} The results
in Eq.~(\ref{LETWeinberg}) show that this is indeed the case. The
dependence on $\Lambda$ occurs, however, only in contributions beyond the
accuracy of calculation and, obviously, does not affect the predictive power
of the EFT provided the cutoff is chosen to be of the order of the
characteristic hard scale in the problem, $\Lambda \sim m_s$. Taking values
$\Lambda \gg m_s$ artificially enhances certain higher-order contributions in the
``chiral'' expansion of the ERE coefficients spoiling the predictive power of
the theory.
The appearance of positive powers of $\Lambda$ and/or logarithmic terms in the
predicted ``chiral'' expansion of the effective range
and the shape parameters in Eq.~(\ref{LETWeinberg}) may give the wrong
impression that no finite limit exists for $r^{(1)} (\Lambda )$ and $v_i^{(1)}
( \Lambda )$ as $\Lambda \to \infty$. In fact, taking
the limit $\Lambda \to \infty$ does not commute with the Taylor expansion of
the ERE coefficients in
powers of $m_l$. It is easy to see that all coefficients in the ERE as well
as the on-shell T-matrix approach a finite limit as $\Lambda \to \infty$.
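The non-commutativity of the limit $\Lambda \to \infty$ with the expansion in $m_l$ can be illustrated by a simple toy function. The following Python sketch is our own illustrative example (the function $f$ below is not the model of this section): expanding $f(m_l,\Lambda)=m_l\Lambda/(\Lambda+1/m_l)$ in $m_l$ at fixed $\Lambda$ gives $\Lambda m_l^2-\Lambda^2 m_l^3+\ldots$, with no term linear in $m_l$, whereas taking $\Lambda\to\infty$ first gives $f\to m_l$.

```python
# Toy function (illustrative only, not the paper's separable model):
#   f(m_l, L) = m_l * L / (L + 1/m_l)
# L -> infinity first gives f -> m_l, while expanding in m_l at fixed L
# gives f = L*m_l^2 - L^2*m_l^3 + ... (no O(m_l) term): the operations
# do not commute.
def f(ml, lam):
    return ml * lam / (lam + 1.0 / ml)

ml = 1e-3
# limit first: f approaches m_l
assert abs(f(ml, 1e12) - ml) < 1e-9

# expansion first: the O(m_l) coefficient at fixed lam vanishes
lam, eps = 10.0, 1e-7
linear_coeff = (f(eps, lam) - f(-eps, lam)) / (2 * eps)
assert abs(linear_coeff) < 1e-4
```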
Substituting the value for $C_0 (\Lambda )$ from Eq.~(\ref{C0Weinberg}) into
the solution of the LS equation (\ref{LSmatrix}) and taking the limit $\Lambda
\to \infty$ one obtains the following cutoff-independent result for the
inverse amplitude:
\begin{eqnarray}
\label{Tperatized}
(T^{(1)}_{\rm peratized})^{-1} &=& i \frac{k m}{4 \pi} - \frac{m}{8 \pi m_l^3
\left(k^2+m_s^2\right)
\left(\alpha _l m_s+m_l \left(1-2 \alpha _l\right) \alpha _s\right)} \Big(
2 m_l^4 m_s^2 \left(m_s-m_l \alpha _l \alpha _s\right) \nonumber \\
&& {} + k^2 m_l^2 \left(\left(4-3 \alpha _l\right) m_s^3+m_l^2 \alpha _l
m_s+m_l \alpha _s \left(\left(2 \alpha _l-3\right) m_s^2+m_l^2 \left(1-2
\alpha
_l\right)\right)\right) \nonumber \\
&& {} + k^4 \left(-\alpha _l m_s \left(m_l^2+m_s^2\right)-m_l \alpha _s
\left(m_l^2 \left(1-2 \alpha _l\right)+m_s^2\right)+2 m_s^3\right) \Big)\,.
\end{eqnarray}
The above procedure is very much in the spirit of the so-called
peratization, a technique introduced by Feinberg and Pais
\cite{Feinberg:1963zz,Feinberg:1964zz}, see also \cite{Guttinger:1965zz},
to evaluate higher-order corrections to the S-matrix in
non-renormalizable field theories. The essential idea of this method consists
in the resummation of the most divergent contributions to the Born series.
In the late sixties of the last century, this technique was widely used in
potential scattering as an attempt to generate approximations to the
scattering length for different classes of singular potentials. In some cases
where the exact solution to the Schr\"odinger equation with a given singular
potential is known, peratization was indeed shown to provide reasonable
approximations to the scattering length while in other cases this approach
fails completely, see \cite{Frank:1971xx} for a comprehensive review
article.
The
cutoff-removed results for the ERE coefficients can be read off from
Eq.~(\ref{Tperatized}):
\begin{eqnarray}
r_{\rm peratized}^{(1)} &=& \frac{m_l^3 \alpha _s+m_l^2 \left(\alpha
_l-2\right) m_s+m_l \left(2 \alpha _l-3\right) m_s^2 \alpha _s+\left(4-3
\alpha _l\right) m_s^3}{m_l
m_s^2 \left(m_l \left(2 \alpha _l-1\right) \alpha _s-\alpha _l m_s\right)}
\nonumber \\
&=& \frac{1}{m_l} \Bigg[\frac{3 \alpha_l - 4}{\alpha _l}
+\frac{4 \left(\alpha _l-1\right){}^2 \alpha _s}{\alpha _l^2 m_s} m_l
\nonumber \\
&& {} +
\frac{\alpha _l^3 \left(8 \alpha _s^2-1\right)+\alpha _l^2
\left(2-20 \alpha _s^2\right)+16 \alpha _l \alpha _s^2-4 \alpha
_s^2}{\alpha _l^3 m_s^2} m_l^2
+ \mathcal{O} \left( m_l^3 \right)\Bigg]\,,\nonumber \\
(v_2^{(1)})_{\rm peratized} &=&-\frac{\left(m_l^2-m_s^2\right){}^2
\left(\left(\alpha _l-2\right) m_s+m_l \alpha _s\right)}{2 m_l^3 m_s^4
\left(m_l \left(2 \alpha
_l-1\right) \alpha _s-\alpha _l m_s\right)} \nonumber \\
&=& \frac{1}{m_l^3} \Bigg[ \frac{\alpha_l - 2}{2 \alpha _l} +
\frac{\left(\alpha _l-1\right){}^2 \alpha _s}{\alpha _l^2 m_s} m_l
+\frac{\alpha _l^3 \left(2 \alpha _s^2-1\right)+\alpha _l^2 \left(2-5
\alpha _s^2\right)+4 \alpha _l \alpha _s^2-\alpha _s^2}{\alpha _l^3 m_s^2}
m_l^2
\nonumber \\
&& {}
+ \mathcal{O} \left( m_l^3 \right)\Bigg]\,, \nonumber \\
(v_3^{(1)})_{\rm peratized} &=& \frac{\left(m_l^2-m_s^2\right){}^2
\left(\left(\alpha _l-2\right) m_s+m_l \alpha _s\right)}{2 m_l^3 m_s^6
\left(m_l \left(2 \alpha _l-1\right)
\alpha _s-\alpha _l m_s\right)} \nonumber \\
&=& \frac{1}{m_l^5} \Bigg[ -\frac{\alpha _l-2}{2 \alpha _l m_s^2} m_l^2 +
\mathcal{O} \left( m_l^3 \right)\Bigg]\,.
\end{eqnarray}
One observes that the results after removing the cutoff fail to reproduce the
low-energy theorems by yielding wrong values for $\alpha_{r}^{(1)}$ and
$\alpha_{v_i}^{(1)}$ (notice that, by construction, the scattering length
corresponding to $T^{(1)}_{\rm peratized}$ exactly matches $a_{\rm underlying}$).
Note that we could allow for a stronger fine-tuning in our model to
make both the long- and short-range interactions nonperturbative at $k \sim
m_l$ (as presumably happens in the realistic case of NN scattering). The
breakdown of LETs in the ``peratized'' expressions would then imply that the
coefficients in the ERE are completely uncorrelated with each other, that is,
the predictive power of such an approach is the same as in the theory with
only short-range interactions (i.e.~``pionless'' theory).
The breakdown of LETs in the ``peratized'' approach can be traced back to
spurious $\Lambda$-dependent contributions in the T-matrix which are
irrelevant (at the order of calculations) in the regime $\Lambda
\sim m_s$ but become numerically dominant if $\Lambda \gg m_s$. In general,
such spurious terms involve positive powers of $\Lambda$ which, as $\Lambda$
gets increased beyond the hard scale $m_s$, become,
at some point, comparable in size with the lower-order terms.
For example, as already mentioned before, terms linear in $\Lambda$ will show up in
the renormalized expressions for $\alpha_x^{(3)}$ at next-higher order. Low-energy
theorems will then break down as the cutoff approaches the scale $\Lambda
\sim m_s^2/m_l$. The unavoidable appearance of ever higher power-law
divergences when going to higher orders in the EFT expansion implies that the
cutoff should not be increased beyond the hard scale in the problem, which
leads to the optimal
choice $\Lambda \sim m_s$.
\section{Summary and conclusions}
\label{summary}
We discussed some conceptual aspects of renormalization in the
context of effective field theories for the two-nucleon system. First,
we considered the renormalization scheme dependence of the scattering
amplitude in the KSW and Weinberg approaches. Renormalization scale dependence
is present explicitly in the loop contributions and implicitly due to the running of the
coupling constants. These two types of dependence cancel exactly
in the full amplitude. Contrary to widespread belief, we showed that
renormalization scheme independence of the amplitude in pionless EFT based on
the KSW framework is only achievable up to the order to which the calculations are
performed. The residual renormalization-scheme dependence arises from the
running coupling constants. On the other hand, in Weinberg's approach the
residual renormalization scale dependence is generated by loop
contributions. From this point of view, the KSW framework does not offer any
conceptual advantage over Weinberg's approach. If one approach is
conceptually inconsistent, then so is the other. In fact, both are conceptually
as consistent as perturbative QCD. Clearly, the crucial point is to choose the
appropriate renormalization condition.
Secondly, we considered the cutoff version
of pionless theory for NN scattering in the $^1S_0$ partial wave
up to next-to-leading order. We expressed the scattering amplitude in terms
of renormalized coupling constants and explored the consequences of
taking the cutoff $\Lambda$ very large, i.e.~much larger than the hard scale
in the problem. Making use of the loop expansion for the scattering amplitude,
we observed that the contributions which diverge in the
limit $\Lambda \to \infty$, instead of being
absorbed into a redefinition of higher-order coupling constants (or,
equivalently, being subtracted), start playing a dominant role as $\Lambda$ is
increased significantly beyond the pertinent hard scale.
One, therefore, completely loses the power
counting (at the level of the amplitude) on which the EFT is based.
On the other hand, if $\Lambda$ is chosen of the order of the hard scale,
violation of the power counting by terms proportional to $\Lambda$
appears to be beyond the accuracy of the calculation.
To further explore the role of the cutoff we constructed
a toy-model for pionful EFT. Specifically, we developed an effective theory
for an exactly
solvable quantum mechanical problem with long- and short-range interactions of
a separable type. We revealed the meaning of low-energy theorems in this model
using the KSW-like framework with subtractive renormalization and demonstrated
their validity in the Weinberg-like approach with a finite cutoff $\Lambda$ as
long as it is chosen of the order of short-range scale. Taking the limit
$\Lambda \to \infty$ while keeping the scattering length at its correct value
yields a finite result for the amplitude but violates the
low-energy theorems. This procedure is, therefore, not compatible with the EFT
framework. It is much more in spirit of peratization
\cite{Feinberg:1963zz,Feinberg:1964zz,Guttinger:1965zz,Frank:1971xx} than
renormalization as it is understood in the context of EFT.
Contrary to popular opinion, the example considered demonstrates
that the existence of a finite limit of the amplitude as $\Lambda \to \infty$,
under the requirement that certain low-energy observables, such as the
scattering length, are kept at their physical values, is not sufficient
for a proper renormalization in the context of
chiral EFT (neither is it necessary, see e.g.~\cite{Lepage:1997,Lepage:2000}).
We argue that $\Lambda$ should not be
increased (considerably) beyond the short-range scale in the problem in EFT
calculations of that kind.
\acknowledgments We would like to thank Dalibor Djukanovic,
Ulf-G.~Mei{\ss}ner, Daniel Phillips and Manuel Pav\'on Valderrama
for useful comments on the manuscript. The work of E.E.~was
supported by funds provided by the Helmholtz Association to the
young investigator group ``Few-Nucleon Systems in Chiral Effective
Field Theory'' (grant VH-NG-222) and to the virtual institute ``Spin
and strong QCD'' (VH-VI-231), by the DFG (SFB/TR 16 ``Subnuclear
Structure of Matter'') and by the EU HadronPhysics2 project ``Study
of strongly interacting matter''. J.G.~acknowledges the support of
the Deutsche Forschungsgemeinschaft (SFB 443) and Georgian National
Foundation grant GNSF/ST08/4-400.
\section{Introduction and statement of results}
The main goal of this paper is to study zeros of hypergeometric series in the $p$-adic setting introduced by D. McCarthy \cite{mccarthy-ijnt-2, mccarthy-pacific}. We also establish analogues of classical hypergeometric series transformations, particularly very special cases of Kummer's and Pfaff's linear transformations, for hypergeometric series in the $p$-adic setting. Questions of this type were posed by D. McCarthy \cite{mccarthy-pacific}. We begin with the definition of classical hypergeometric series.
For a complex number $a$ and a non-negative integer $k$, the rising factorial, denoted by $(a)_k$, is defined by $(a)_k:=a(a+1)(a+2)\cdots(a+k-1)$ for $k>0$ and $(a)_0:=1.$ Then for $a_i,b_i,\lambda\in\mathbb{C}$ with $b_i\not\in\{\ldots,-3,-2,-1,0\},$ the classical hypergeometric series ${_{r+1}F_r}$ is defined by
\begin{align*}
{_{r+1}}F_{r}\left(\begin{array}{cccc}
a_1, & a_2, & \ldots, & a_{r+1} \\
& b_1, & \ldots, & b_r
\end{array}\mid \lambda
\right):=\sum_{k=0}^{\infty}\frac{(a_1)_k\cdots(a_{r+1})_k}
{(b_1)_k\cdots(b_r)_k}\cdot\frac{\lambda^k}{k!}.
\end{align*}
This series converges for $|\lambda|<1.$ Classical hypergeometric series play an important role in many areas of mathematics. For example, they have significant applications in modular forms, elliptic curves, representation theory, differential equations, etc. \cite{mccarthy-proc, mccarthy-ijnt, mortenson}. J. Greene \cite{greene} introduced the notion of hypergeometric series over finite fields, which are finite field analogues of classical hypergeometric series.
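For $|\lambda|<1$ the series can be summed term by term. The following Python sketch (our own illustrative check; the helper \texttt{hyp2f1} and the truncation depth are our choices) verifies the elementary case ${_2F_1}(1,1;2;\lambda)=-\log(1-\lambda)/\lambda$:

```python
import math

def hyp2f1(a, b, c, z, terms=200):
    # truncated 2F1 series via the term recursion
    # t_{k+1} = t_k * (a+k)(b+k) z / ((c+k)(k+1))
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) * (b + k) * z / ((c + k) * (k + 1))
    return total

# sanity check against the elementary identity 2F1(1,1;2;z) = -log(1-z)/z
z = 0.5
assert abs(hyp2f1(1.0, 1.0, 2.0, z) - (-math.log(1 - z) / z)) < 1e-12
# and against 2F1(a,b;b;z) = (1-z)^(-a)
assert abs(hyp2f1(0.5, 1.0, 1.0, z) - (1 - z) ** -0.5) < 1e-10
```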
Let $p$ be an odd prime and $\mathbb{F}_p$ denote the finite field with $p$ elements. Let $\widehat{\mathbb{F}_p^\times}$ denote the group of all multiplicative characters of $\mathbb{F}_p^{\times}$ and $\overline{\chi}$ denote the inverse of a multiplicative character $\chi$. We extend the domain of each $\chi\in\widehat{\mathbb{F}_p^\times}$ to $\mathbb{F}_p$ by setting $\chi(0):=0$, including for the trivial character $\varepsilon.$
For multiplicative characters $\chi$ and $\psi$ of $\mathbb{F}_p$ the Jacobi sum is defined by
\begin{align}\label{jacobi}
J(\chi,\psi):=\sum_{y\in\mathbb{F}_p}\chi(y)\psi(1-y),
\end{align}
and the normalized Jacobi sum known as binomial is defined by
\begin{align}\label{binomial}
{\chi\choose \psi}:=\frac{\psi(-1)}{p}J(\chi,\overline{\psi}).
\end{align}
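Jacobi sums can be computed directly over a small prime. The following Python sketch (an illustrative check of our own: the prime $p=7$, the primitive root $3$, and the helpers \texttt{chi}, \texttt{jacobi} are our choices) verifies the standard fact that $J(\chi,\overline{\chi})=-\chi(-1)$ for every nontrivial $\chi$:

```python
import cmath

p, g = 7, 3                       # small prime and a primitive root (illustrative)
dlog = {pow(g, k, p): k for k in range(p - 1)}   # discrete log table

def chi(j, x):
    # chi_j(x) = exp(2*pi*i*j*dlog(x)/(p-1)), with chi_j(0) = 0
    return 0 if x % p == 0 else cmath.exp(2j * cmath.pi * j * dlog[x % p] / (p - 1))

def jacobi(j1, j2):
    # J(chi_{j1}, chi_{j2}) = sum_y chi_{j1}(y) chi_{j2}(1 - y)
    return sum(chi(j1, y) * chi(j2, 1 - y) for y in range(p))

# standard fact: J(chi, chi^{-1}) = -chi(-1) for nontrivial chi
for j in range(1, p - 1):
    assert abs(jacobi(j, -j) + chi(j, -1)) < 1e-9
```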
Let $n$ be a non-negative integer. For multiplicative characters $A_1,A_2,\ldots,A_{n+1},$ and $B_1,B_2,\ldots,B_n$ of
$\mathbb{F}_p$ and $t\in\mathbb{F}_p,$ J. Greene \cite{greene} defined the ${_{n+1}F_n}(\cdots)$ hypergeometric function over the finite field
$\mathbb{F}_p$ by
\begin{align*}
{_{n+1}}F_{n}\left(\begin{array}{cccc}
A_1, & A_2, & \ldots, & A_{n+1} \\
& B_1, & \ldots, & B_n
\end{array}\mid t
\right):=\frac{p}{p-1}\sum_{\chi\in\widehat{\mathbb{F}_p^\times}}
{A_1\chi\choose\chi}\cdots{A_{n+1}\chi\choose B_n\chi}\chi(t).
\end{align*}
This function is also known as the Gaussian hypergeometric function.
These functions were developed to allow a study of character sums parallel to that of special functions.
Gaussian hypergeometric functions satisfy many identities which are often analogues of classical hypergeometric series identities; for more details, see \cite{greene}. Since the entries of the Gaussian hypergeometric function are multiplicative characters, results involving Gaussian hypergeometric functions are often restricted to primes in certain congruence classes so that characters of specific orders exist; see, for example, \cite{EG, fuselier2, lennon-1, lennon-2}. To overcome these limitations, D. McCarthy \cite{mccarthy-ijnt-2, mccarthy-pacific} defined a function $_nG_n[\cdots]$ in terms of quotients of $p$-adic gamma functions, which can best be described as an analogue of classical hypergeometric series in the $p$-adic setting. Let $\mathbb{Z}_p$ and $\mathbb{Q}_p$ denote the ring of $p$-adic integers and the field of $p$-adic numbers, respectively.
Let $\Gamma_p(\cdot)$ denote the Morita's $p$-adic gamma function. Let $\omega$ denote the Teichm\"{u}ller character of $\mathbb{F}_p,$ satisfying $\omega(a)\equiv a\pmod{p},$ and $\overline{\omega}$ denote the character inverse of $\omega$. For $x\in\mathbb{Q}$ let $\lfloor x\rfloor$ denote the greatest integer less than or equal to $x$ and $\langle x\rangle$ denote the fractional part of $x$, satisfying $0\leq\langle x\rangle<1$.
We now recall McCarthy's hypergeometric function $_{n}G_{n}[\cdots]$ in the $p$-adic setting.
\begin{definition}\cite[Definition 5.1]{mccarthy-pacific} \label{defin1}
Let $p$ be an odd prime and $t \in \mathbb{F}_p$.
For positive integer $n$ and $1\leq k\leq n$, let $a_k$, $b_k$ $\in \mathbb{Q}\cap \mathbb{Z}_p$.
Then the function $_{n}G_{n}[\cdots]$ is defined as
\begin{align}
&{_nG_n}\left[\begin{array}{cccc}
a_1, & a_2, & \ldots, & a_n \\
b_1, & b_2, & \ldots, & b_n
\end{array}|t
\right]:=\frac{-1}{p-1}\sum_{a=0}^{p-2}(-1)^{an}~~\overline{\omega}^a(t)\notag\\
&\times \prod\limits_{k=1}^n(-p)^{-\lfloor \langle a_k \rangle-\frac{a}{p-1} \rfloor -\lfloor\langle -b_k \rangle +\frac{a}{p-1}\rfloor}
\frac{\Gamma_p(\langle a_k-\frac{a}{p-1}\rangle)}{\Gamma_p(\langle a_k \rangle)}
\frac{\Gamma_p(\langle -b_k+\frac{a}{p-1} \rangle)}{\Gamma_p(\langle -b_k \rangle)}.\notag
\end{align}
\end{definition}
This function is also known as the $p$-adic hypergeometric function. It is clear from Definition \ref{defin1} that the value of the $_nG_n[\cdots]$ function depends only on the fractional parts of the parameters $a_k$ and $b_k$. Therefore, we may assume that $0\leq a_k,b_k<1.$ Gaussian hypergeometric functions satisfy many powerful transformation formulas that often mirror their classical counterparts; for details see \cite{greene}. Note that these results can be converted into identities involving $_{n}G_{n}[\cdots]$ via the transformations
\cite[Lemma 3.3]{mccarthy-pacific} and \cite[Proposition 2.5]{mccarthy-ffa} between finite field hypergeometric functions and $p$-adic hypergeometric series. However, the resulting identities involving $_{n}G_{n}[\cdots]$ are valid only for the primes $p$ for which the original characters exist over $\mathbb{F}_p.$
Therefore, it is interesting to extend such results to almost all primes. In \cite{fm}, Fuselier-McCarthy established certain transformation identities for $p$-adic hypergeometric series in full generality. In particular, they proved a transformation result analogous to a result of Whipple for ${_3F_2}$-classical hypergeometric series. These transformations eventually led to the resolution of a supercongruence conjecture of Rodriguez-Villegas relating a truncated
${_4F_3}$-classical hypergeometric series to the Fourier coefficients of a certain weight four modular form. This is one of our motivations for studying transformation formulas, with the expectation that they will lead to new identities.
Let $\chi_4$ be a multiplicative character of $\mathbb{F}_p$
of order 4. Also, let $\varphi$ be the quadratic character of $\mathbb{F}_p.$ Consider the classical hypergeometric series
${_2F_1}\left(\begin{array}{cc}
\frac{1}{4} , & \frac{3}{4}\vspace{1mm} \\
& \frac{1}{2}
\end{array}\mid t\right)$. Then the finite field analogue of this series can be considered as
${_2F_1}\left(\begin{array}{cc}
\chi_4, & \chi_4^3 \\
& \varphi
\end{array}\mid t\right)$.
Using the transformations \cite[Lemma 3.3]{mccarthy-pacific} and \cite[Proposition 2.5]{mccarthy-ffa}, the function ${_2G_2}\left[\begin{array}{cc}
\frac{1}{4} , & \frac{3}{4}\vspace{1mm} \\
0, & \frac{1}{2}
\end{array}\mid \frac{1}{t}\right]$ can be described as a $p$-adic analogue of the classical hypergeometric series
${_2F_1}\left(\begin{array}{cc}
\frac{1}{4} , & \frac{3}{4}\vspace{1mm} \\
& \frac{1}{2}
\end{array}\mid t\right)$.
We know that classical hypergeometric series satisfy many powerful identities.
For example, Gauss \cite{gauss}, Kummer \cite{kummer}, Whipple \cite[p. 54]{lidl}, Saalch\"{u}tz \cite[p. 49]{slater}, Dixon \cite[p. 51]{slater}, and Watson \cite[p. 54]{slater}
studied special values of classical hypergeometric series. For instance, the following evaluation of a classical hypergeometric series in terms of quotients of the classical gamma function is due to Gauss \cite{gauss}. If $\mathrm{Re}(c-a-b)>0$, then
\begin{align}\label{gauss}
{_2F_1}\left(\begin{array}{cc}
a, & b \\
& c
\end{array}\mid1\right)=\frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}.
\end{align}
If we put $a=\frac{1}{4},$ $b=\frac{3}{4}$ and $c=1+\frac{1}{2}$ in \eqref{gauss} then we have
\begin{align}\label{gauss-value}
{_2F}_{1}\left(\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4} \\
& 1+\frac{1}{2}
\end{array}\mid1\right)=
\frac{\Gamma(1+\frac{1}{2})\Gamma(\frac{1}{2})}{\Gamma(1+\frac{1}{4})\Gamma(\frac{3}{4})}.
\end{align}
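Gauss's summation theorem \eqref{gauss} is easy to verify numerically. The following Python sketch is our own illustrative check; the parameters $a=\frac14$, $b=\frac34$, $c=3$ are chosen (rather than $c=\frac32$ as in \eqref{gauss-value}) only so that $c-a-b$ is large and the series at $1$ converges quickly:

```python
import math

def hyp2f1_at_1(a, b, c, terms):
    # partial sum of 2F1(a,b;c;1); converges when c - a - b > 0
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) * (b + k) / ((c + k) * (k + 1))
    return total

# Gauss: 2F1(a,b;c;1) = Gamma(c)Gamma(c-a-b) / (Gamma(c-a)Gamma(c-b))
a, b, c = 0.25, 0.75, 3.0     # illustrative parameters with fast convergence
lhs = hyp2f1_at_1(a, b, c, 4000)
rhs = math.gamma(c) * math.gamma(c - a - b) / (math.gamma(c - a) * math.gamma(c - b))
assert abs(lhs - rhs) < 1e-6
```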
Also, consider Kummer's theorem \cite{kummer}
\begin{align}\label{kummer}
{_2F_1}\left(\begin{array}{cc}
a, & b \\
& 1+b-a
\end{array}\mid-1\right)=\frac{\Gamma(1+b-a)\Gamma(1+\frac{b}{2})}{\Gamma(1+b)\Gamma(1+\frac{b}{2}-a)}.
\end{align}
Putting $a=\frac{1}{4}$ and $b=\frac{3}{4}$ into \eqref{kummer} we have
\begin{align}\label{kummer-value}
{_2F}_{1}\left(\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4} \\
& 1+\frac{1}{2}
\end{array}\mid-1\right)=\frac{\Gamma(1+\frac{1}{2})\Gamma(1+\frac{3}{8})}{\Gamma(1+\frac{3}{4})
\Gamma(1+\frac{1}{8})}.
\end{align}
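The evaluation \eqref{kummer-value} can also be checked numerically; at $z=-1$ the series is alternating and converges, with error bounded by the first omitted term. The following Python sketch is our own illustrative check (the truncation depth is our choice):

```python
import math

def hyp2f1(a, b, c, z, terms):
    # truncated 2F1 series via term recursion; at z = -1 the series is
    # alternating, so the truncation error is below the first omitted term
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) * (b + k) * z / ((c + k) * (k + 1))
    return total

# Kummer's evaluation (kummer-value): a = 1/4, b = 3/4, c = 1 + b - a = 3/2
lhs = hyp2f1(0.25, 0.75, 1.5, -1.0, 50000)
rhs = (math.gamma(1.5) * math.gamma(1.375)) / (math.gamma(1.75) * math.gamma(1.125))
assert abs(lhs - rhs) < 1e-5
```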
Classical hypergeometric series with a dihedral monodromy group can be expressed in terms of elementary functions, since their hypergeometric equations can be reformulated as Fuchsian equations with cyclic monodromy groups. For example, two interesting cases that can be expressed as powers of expressions involving square roots are:
\begin{align}\label{dg-1}
{_2F_1}\left(\begin{array}{cc}
\frac{a}{2}, & \frac{a+1}{2}\\
~& a+1
\end{array}\mid z\right)=\left(\frac{1+\sqrt{1-z}}{2}\right)^{-a},
\end{align}
and
\begin{align}\label{dg-2}
{_2F_1}\left(\begin{array}{cc}
\frac{a}{2}, & \frac{a+1}{2}\vspace{1mm}\\
~& \frac{1}{2}
\end{array}\mid z\right)=\frac{(1-\sqrt{z})^{-a}+(1+\sqrt{z})^{-a}}{2}.
\end{align}
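Both \eqref{dg-1} and \eqref{dg-2} can be verified numerically in the case $a=\frac12$ relevant below. The following Python sketch is our own illustrative check (the argument $z=0.3$ is an arbitrary choice with $|z|<1$):

```python
import math

def hyp2f1(a, b, c, z, terms=300):
    # truncated 2F1 series via term recursion, valid for |z| < 1
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) * (b + k) * z / ((c + k) * (k + 1))
    return total

z = 0.3   # illustrative argument
# eq. (dg-1) with a = 1/2:  2F1(1/4, 3/4; 3/2; z) = ((1 + sqrt(1-z))/2)^(-1/2)
assert abs(hyp2f1(0.25, 0.75, 1.5, z) - ((1 + math.sqrt(1 - z)) / 2) ** -0.5) < 1e-10
# eq. (dg-2) with a = 1/2:
# 2F1(1/4, 3/4; 1/2; z) = ((1 - sqrt z)^(-1/2) + (1 + sqrt z)^(-1/2)) / 2
s = math.sqrt(z)
assert abs(hyp2f1(0.25, 0.75, 0.5, z) - ((1 - s) ** -0.5 + (1 + s) ** -0.5) / 2) < 1e-10
```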
All these evaluations of Gauss, Kummer, etc., motivate us to study special values of the $p$-adic hypergeometric series ${_2G_2}[\cdots].$ Indeed, we completely determine all the possible zero and nonzero values of a certain family of ${_2G_2}[\cdots].$ We first state a theorem that classifies all the zero and nonzero values of the function $_2G_2[\cdots].$
For brevity, we write $a\neq\square$ if $a$ is not a square in $\mathbb{F}_p.$
\begin{theorem}\label{SV-1}
Let $p\geq3$ be a prime and $t\in\mathbb{F}_p^\times.$ Then we have the following values.
\begin{enumerate}
\item \begin{align}\label{value-1}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid 1\right]=\frac{-\varphi(-1)}{\Gamma_p(\frac{1}{4})\Gamma_p(\frac{3}{4})}.
\end{align}
\item Let $t\neq1$ and $\frac{t-1}{t}$ be a square in $\mathbb{F}_p^\times$ such that
$\frac{t-1}{t}=a^2$ for some $a\in\mathbb{F}_p^\times.$ Then we have
\begin{align}\label{value-2}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid t\right]=\frac{-\varphi(-1)}{\Gamma_p(\frac{1}{4})\Gamma_p(\frac{3}{4})}
(\varphi(1+a)+\varphi(1-a)).
\end{align}
\item Also, if $\frac{t-1}{t}\neq\square$ in $\mathbb{F}_p$ then
\begin{align}\label{value-3}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid t\right]=0.
\end{align}
\end{enumerate}
\end{theorem}
\begin{remark}
Note that \eqref{value-1} can be described as a $p$-adic analogue of \eqref{gauss-value}. Theorem \ref{SV-1} also provides a $p$-adic analogue of \eqref{kummer-value}. The value of the function ${_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\\
0, & \frac{1}{2}
\end{array}\mid -1\right]$ depends on the prime: if $p\equiv\pm3\pmod{8}$, then it equals zero. In contrast, \eqref{kummer-value} can never equal zero.
Theorem \ref{SV-1} can also be described as a $p$-adic analogue of \eqref{dg-1} and \eqref{dg-2} for $a=\frac{1}{2}.$
\end{remark}
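The congruence condition in the remark can be checked directly: for $t=-1$ one has $\frac{t-1}{t}=2$, and $2$ is a square modulo an odd prime $p$ exactly when $p\equiv\pm1\pmod 8$. The following Python sketch is our own illustrative verification via Euler's criterion:

```python
# 2 is a quadratic residue mod an odd prime p iff p = +-1 (mod 8), so for
# t = -1 the quantity (t-1)/t = 2 is a non-square exactly when p = +-3 (mod 8).
def is_square_mod(a, p):
    # Euler's criterion: a is a nonzero square mod p iff a^((p-1)/2) = 1 (mod p)
    return pow(a % p, (p - 1) // 2, p) == 1

primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
for p in primes:
    assert is_square_mod(2, p) == (p % 8 in (1, 7))
```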
Another purpose of this paper is to establish $p$-adic analogues of Kummer's linear transformation
\cite[p. 4 eq. (1)]{bailey} given below.
\begin{align}\label{kummer-transformation}
&{_2F_1}\left(\begin{array}{cc}
a, & b \\
& c
\end{array}\mid z\right)=\frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}\cdot
{_2F_1}\left(\begin{array}{cc}
a, & b \\
& 1+a+b-c
\end{array}\mid1-z\right)\\
&+\frac{\Gamma(c)\Gamma(a+b-c)}{\Gamma(a)\Gamma(b)}(1-z)^{c-a-b}\cdot
{_2F_1}\left(\begin{array}{cc}
c-a, & c-b \\
& 1+c-a-b
\end{array}\mid1-z\right).\notag
\end{align}
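The connection formula \eqref{kummer-transformation} can be checked numerically. The following Python sketch is our own illustrative check with generic parameters (the values $a=0.25$, $b=0.5$, $c=1.1$, $z=0.3$ are arbitrary choices avoiding poles of the gamma factors):

```python
import math

def hyp2f1(a, b, c, z, terms=400):
    # truncated 2F1 series via term recursion, valid for |z| < 1
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) * (b + k) * z / ((c + k) * (k + 1))
    return total

G = math.gamma
a, b, c, z = 0.25, 0.5, 1.1, 0.3   # illustrative generic parameters
lhs = hyp2f1(a, b, c, z)
rhs = (G(c) * G(c - a - b) / (G(c - a) * G(c - b)) * hyp2f1(a, b, 1 + a + b - c, 1 - z)
       + G(c) * G(a + b - c) / (G(a) * G(b)) * (1 - z) ** (c - a - b)
         * hyp2f1(c - a, c - b, 1 + c - a - b, 1 - z))
assert abs(lhs - rhs) < 1e-8
```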
The next theorem provides transformations of $p$-adic hypergeometric series which can be described as $p$-adic analogues of a particular case of Kummer's linear transformation \eqref{kummer-transformation}. This theorem is obtained as a consequence of Theorem \ref{SV-1}.
\begin{theorem}\label{kummer-1}
Let $p\geq3$ be a prime and $x\in\mathbb{F}_p$ be such that $x\neq0,1$. Then we have the following:
\begin{enumerate}
\item If $x$ and $1-x$ are not squares in $\mathbb{F}_p$ then
\begin{align}\label{trans-1}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{1}{x}\right]={_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{1}{1-x}\right].
\end{align}
\item If $x=b^2$ for some $b\in\mathbb{F}_p$ and $1-x$ is not a square in $\mathbb{F}_p$ then
\begin{align}\label{trans-2}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{1}{x}\right]&=\Gamma_p\left(\frac{1}{4}\right)
\Gamma_p\left(\frac{3}{4}\right)
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{1}{1-x}\right]\\
&+\varphi(-1)(\varphi(1+b)+\varphi(1-b))\notag.
\end{align}
\item If $x,$ and $1-x$ are both squares such that $1-x=a^2$ and $x=b^2$ for some $a,b\in\mathbb{F}_p$ then
\begin{align}\label{trans-3}
&(\varphi(1+b)+\varphi(1-b)){_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{1}{x}\right]\notag\\
&=(\varphi(1+a)+\varphi(1-a)){_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{1}{1-x}\right].
\end{align}
\item If $1-x=a^2$ for some $a\in\mathbb{F}_p$ and $x$ is not a square in $\mathbb{F}_p$ then
\begin{align}\label{trans-4}
&\Gamma_p\left(\frac{1}{4}\right)
\Gamma_p\left(\frac{3}{4}\right){_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{1}{x}\right]\notag\\
&={_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{1}{1-x}\right]
-\varphi(-1)(\varphi(1+a)+\varphi(1-a)).
\end{align}
\end{enumerate}
\end{theorem}
Note that the finite field analogue of Theorem \ref{kummer-1} involving characters of order 4 follows from
Greene's evaluation \cite[Theorem 4.4 (i)]{greene}. If we use this result of Greene along with the relations \cite[Lemma 3.3]{mccarthy-pacific} and \cite[Proposition 2.5]{mccarthy-ffa}, then we also obtain a similar transformation for the $p$-adic hypergeometric series ${_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid t\right]$ under the condition that $p\equiv1\pmod{4}.$ However, Theorem \ref{kummer-1} imposes no congruence condition on the prime.
\par Fuselier-McCarthy \cite{fm} established certain summation identities for $p$-adic hypergeometric series. This motivates us to study summation identities for $p$-adic hypergeometric series.
\begin{theorem}\label{sum-1}
Let $p\geq3$ be a prime. Let $x\in\mathbb{F}_p^{\times}.$ Then we have the following:
\begin{enumerate}
\item \begin{align}\label{formula-1}
\sum_{t\in\mathbb{F}_p^\times}\varphi(t(t-1))~{_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & 0
\end{array}\mid t\right]=-1+\frac{p\varphi(-1)}{\Gamma_p(\frac{1}{4})\Gamma_p(\frac{3}{4})}.
\end{align}
\item If $x\neq1$ and $1-x$ is not a square in $\mathbb{F}_p$ then we have
\begin{align}\label{formula-2}
\sum_{t\in\mathbb{F}_p^\times}\varphi(t(t-1))~{_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & 0
\end{array}\mid\frac{t}{x}\right]=-1.
\end{align}
\item If $x\neq1$ and $1-x=a^2$ for some $a\in\mathbb{F}_p$ then
\begin{align}\label{formula-3}
\sum_{t\in\mathbb{F}_p^\times}\varphi(t(t-1))~{_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & 0
\end{array}\mid\frac{t}{x}\right]=-1+\frac{p\varphi(-1)(\varphi(1+a)+\varphi(1-a))}
{\Gamma_p(\frac{1}{4})\Gamma_p(\frac{3}{4})}.
\end{align}
\end{enumerate}
\end{theorem}
The following theorem gives summation identities for Gaussian hypergeometric functions involving characters of order 4.
\begin{theorem}\label{sum-3}
Let $p\geq3$ be a prime such that $p\equiv1\pmod{4}$. Let $x\in\mathbb{F}_p^{\times}$ and $\chi_4$ be a multiplicative character of $\mathbb{F}_p$ of order $4.$ Then we have the following.
\begin{enumerate}
\item \begin{align}\label{formula-4}
\sum_{t\in\mathbb{F}_p^\times}\varphi(1-t)~{_2F_1}\left(\begin{array}{cc}
\chi_4, & \chi_4^3\\
& \varepsilon
\end{array}\mid t\right)=\frac{1}{p}+\chi_4(-1).
\end{align}
\item If $x\neq1$ and $1-x$ is not a square in $\mathbb{F}_p$ then we have
\begin{align}\label{formula-5}
\sum_{t\in\mathbb{F}_p^\times}\varphi(x-t)~{_2F_1}\left(\begin{array}{cc}
\chi_4, & \chi_4^3\\
& \varepsilon
\end{array}\mid t\right)=\frac{\varphi(x)}{p}.
\end{align}
\item If $x\neq1$ and $1-x=a^2$ for some $a\in\mathbb{F}_p^{\times}$ then
\begin{align}\label{formula-6}
\sum_{t\in\mathbb{F}_p^\times}\varphi(x-t)~{_2F_1}\left(\begin{array}{cc}
\chi_4, & \chi_4^3\\
& \varepsilon
\end{array}\mid t\right)=\frac{\varphi(x)}{p}+\varphi(x)\chi_4(-1)(\varphi(1+a)+\varphi(1-a)).
\end{align}
\end{enumerate}
\end{theorem}
Apart from Kummer's transformation, other interesting transformation formulas exist in the literature. For example, Euler \cite[p. 10]{slater}, Whipple \cite{whipple}, and Dixon \cite{dixon} studied transformation properties of classical hypergeometric series. Here, however, we are interested in Pfaff's transformation \cite[p. 31]{slater}
\begin{align}
{_2F_1}\left(\begin{array}{cc}
a, & b\\
~ & c
\end{array}\mid z\right)=(1-z)^{-a}{_2F_1}\left(\begin{array}{cc}
a, & c-b\\
~ & c
\end{array}\mid \frac{z}{z-1}\right).\notag
\end{align}
In particular, if $a=\frac{1}{4}$, $b=\frac{3}{4}$, and $c=\frac{1}{2}$ then the above result gives
\begin{align}\label{pfaff}
{_2F_1}\left(\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
~ & \frac{1}{2}
\end{array}\mid z\right)=(1-z)^{-1/4}{_2F_1}\left(\begin{array}{cc}
\frac{1}{4}, & \frac{-1}{4}\\
~ & \frac{1}{2}
\end{array}\mid \frac{z}{z-1}\right).
\end{align}
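At the classical level, \eqref{pfaff} is easy to verify numerically. The following Python sketch is our own illustrative check (the argument $z=0.3$ is an arbitrary choice, for which $z/(z-1)=-3/7$ lies inside the unit disc):

```python
import math

def hyp2f1(a, b, c, z, terms=300):
    # truncated 2F1 series via term recursion, valid for |z| < 1
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) * (b + k) * z / ((c + k) * (k + 1))
    return total

z = 0.3                       # illustrative argument; then z/(z-1) = -3/7
lhs = hyp2f1(0.25, 0.75, 0.5, z)
rhs = (1 - z) ** -0.25 * hyp2f1(0.25, -0.25, 0.5, z / (z - 1))
assert abs(lhs - rhs) < 1e-10
```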
We know that a $p$-adic analogue of ${_2F_1}\left(\begin{array}{cc}
\frac{1}{4}, & \frac{-1}{4}\vspace{1mm}\\
~ & \frac{1}{2}
\end{array}\mid z\right)$ can be described as the function ${_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{-1}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{1}{z}\right]={_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & 1-\frac{1}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{1}{z}\right].$ Then a $p$-adic analogue of Pfaff's transformation \eqref{pfaff} is described in the next theorem.
\begin{theorem}\label{paf}
Let $p\geq3$ be a prime and $1\neq x\in\mathbb{F}_p^\times$. Then we have the following.
\begin{enumerate}
\item If $1-x\neq\square$ then
\begin{align}\label{paf-1}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{1}{x}\right]={_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{x-1}{x}\right].
\end{align}
\item If $1-x=a^2$ for some $a\in\mathbb{F}_p^\times$ then
\begin{align}\label{paf-2}
&\varphi(a)(\varphi(a+1)+\varphi(a-1)){_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{1}{x}\right]\notag\\
&=(\varphi(1+a)+\varphi(1-a)){_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{x-1}{x}\right].
\end{align}
\end{enumerate}
\end{theorem}
The rest of this paper is organized as follows. In Section 2 we introduce some basic definitions, including Gauss sums and the $p$-adic gamma function, and state some preliminary results, including the Hasse-Davenport relation and the Gross-Koblitz formula. We give the proofs of the main theorems in Section 3.
\section{Notation and Preliminary results}
Let $\overline{\mathbb{Q}_p}$ be the algebraic closure of $\mathbb{Q}_p$ and $\mathbb{C}_p$ be the completion of $\overline{\mathbb{Q}_p}.$ Since each $\chi\in\widehat{\mathbb{F}_p^\times}$ takes values in $\mu_{p-1},$ the group of $(p-1)$-th roots of unity in $\mathbb{C}^\times,$ and $\mathbb{Z}_p^\times$ contains $\mu_{p-1}$, we may assume that the multiplicative characters of $\mathbb{F}_p^\times$ take values in $\mathbb{Z}_p^\times$, i.e.,
$\chi:\mathbb{F}_p^{\times}\to\mathbb{Z}_p^\times.$
Recall that $\omega:\mathbb{F}_p^\times\to\mathbb{Z}_p^\times$ is the Teichm\"{u}ller character. Also, $\widehat{\mathbb{F}_p^\times}=\{\omega^j:0\leq j\leq p-2\}$ and $\overline{\omega}$ denotes the inverse of $\omega.$
\subsection{Preliminary results on Multiplicative characters and Gauss sums:}
The following result gives the orthogonality relation of multiplicative characters.
\begin{lemma}\cite[Chapter 8]{ireland}
Let $p$ be an odd prime. Then we have
\begin{align}\label{orthogonal-1}
\sum_{\chi\in\widehat{\mathbb{F}_p^\times}}\chi(x)=\left\{
\begin{array}{ll}
p-1 , & \hbox{if $x=1$;} \\
0, & \hbox{if $x\neq1$.}
\end{array}
\right.
\end{align}
\end{lemma}
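As a quick numerical sanity check (not part of the paper), the orthogonality relation \eqref{orthogonal-1} can be verified over $\mathbb{C}$ for a small prime. The choices $p=7$ with primitive root $3$ are assumptions made only for this sketch; the characters are realized as $\chi_a(r^k)=e^{2\pi i a k/(p-1)}$.

```python
import cmath

# Demo assumptions: p = 7 with primitive root r = 3.
p, r = 7, 3
n = p - 1

def chi(a, x):
    # chi_a(r^k) = exp(2*pi*i*a*k/(p-1)), realized over C
    k = next(k for k in range(n) if pow(r, k, p) == x % p)
    return cmath.exp(2 * cmath.pi * 1j * a * k / n)

# column sums: sum of chi(x) over all p-1 characters chi of F_p^*
col = {x: sum(chi(a, x) for a in range(n)) for x in range(1, p)}
```

The sums come out as $p-1$ at $x=1$ and (numerically) zero elsewhere, as the lemma states.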
\par Let $\zeta_p$ be a fixed primitive $p$-th root of unity in $\overline{\mathbb{Q}_p}.$
For $\chi \in \widehat{\mathbb{F}_p^\times}$, the Gauss sum is defined by
\begin{align}
g(\chi):=\sum\limits_{x\in \mathbb{F}_p}\chi(x)~\zeta_p^x.\notag
\end{align}
From the definition we can say that $g(\varepsilon)=-1$. For more details on Gauss sums, see \cite{berndt}. We now introduce some properties of Gauss sums. Let
$\delta: \widehat{\mathbb{F}_p^\times}
\rightarrow\{0,1\}$ be defined by
\begin{align}
\delta(\chi)=\left\{
\begin{array}{ll}
1 , & \hbox{if $\chi=\varepsilon$;} \\
0, & \hbox{if $\chi\neq\varepsilon$.}
\end{array}
\right.
\end{align}
We start with a result that provides a formula for the multiplicative inverse of a Gauss sum.
\begin{lemma}\cite[eq. 1.12]{greene} Let $\chi\in \widehat{\mathbb{F}_p^\times}.$ Then
\begin{align}\label{inverse}
g(\chi)g(\overline{\chi})=p\chi(-1)-(p-1)\delta(\chi).
\end{align}
\end{lemma}
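Since the identity \eqref{inverse} only uses character algebra, it can be checked numerically with complex-valued characters (the embedding of the character values plays no role). The prime $p=7$ and primitive root $3$ below are demo assumptions.

```python
import cmath

# Demo assumptions: p = 7, primitive root r = 3, characters realized over C.
p, r = 7, 3
n = p - 1
zeta = cmath.exp(2 * cmath.pi * 1j / p)          # fixed primitive p-th root of unity
dlog = {pow(r, k, p): k for k in range(n)}       # discrete logarithm table

def chi(a, x):
    return cmath.exp(2 * cmath.pi * 1j * a * dlog[x % p] / n)

def gauss(a):
    # g(chi_a) = sum over x in F_p^* of chi_a(x) zeta^x
    return sum(chi(a, x) * zeta ** x for x in range(1, p))

def delta(a):
    # delta(chi_a) = 1 iff chi_a is the trivial character
    return 1 if a % n == 0 else 0

# check g(chi) g(chi^{-1}) = p chi(-1) - (p-1) delta(chi) for every character
errs = [abs(gauss(a) * gauss(-a) - (p * chi(a, -1) - (p - 1) * delta(a)))
        for a in range(n)]
```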
Another important product formula for Gauss sums is the Hasse-Davenport formula.
\begin{theorem}\cite[Hasse-Davenport relation, Theorem 11.3.5]{berndt}
Let $\chi\in\widehat{\mathbb{F}_p^\times}$ be a character of order $m$ for some positive integer $m.$ Then for any multiplicative character $\psi\in\widehat{\mathbb{F}_p^\times}$ we have
\begin{align}\label{dh}
\prod_{i=0}^{m-1}g(\psi\chi^i)=g(\psi^m)\psi^{-m}(m)\prod_{i=1}^{m-1}g(\chi^i).
\end{align}
\end{theorem}
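The special case $m=2$ of \eqref{dh}, with $\chi$ the quadratic character $\varphi$, is the classical duplication relation $g(\psi)g(\psi\varphi)=g(\psi^2)\,\psi^{-2}(2)\,g(\varphi)$, and it can be checked numerically over $\mathbb{C}$. The prime $p=7$ and primitive root $3$ are demo assumptions.

```python
import cmath

# Demo assumptions: p = 7, primitive root r = 3, characters realized over C.
p, r = 7, 3
n = p - 1
zeta = cmath.exp(2 * cmath.pi * 1j / p)
dlog = {pow(r, k, p): k for k in range(n)}

def chi(a, x):
    return cmath.exp(2 * cmath.pi * 1j * a * dlog[x % p] / n)

def gauss(a):
    return sum(chi(a, x) * zeta ** x for x in range(1, p))

# Hasse-Davenport with m = 2, chi = quadratic character phi = chi_{(p-1)/2}:
#   g(psi) g(psi*phi) = g(psi^2) * psi^{-2}(2) * g(phi)
phi = n // 2
errs = [abs(gauss(a) * gauss(a + phi)
            - gauss(2 * a) * chi(-2 * a, 2) * gauss(phi))
        for a in range(n)]
```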
The following lemma relates Gauss and Jacobi sums.
\begin{lemma}\cite[eq. 1.14]{greene}
Let $\chi_1,\chi_2\in \widehat{\mathbb{F}_p^\times}.$ Then
\begin{align}\label{gauss-jacobi}
J(\chi_1,\chi_2)=\frac{g(\chi_1)g(\chi_2)}{g(\chi_1\chi_2)}+(p-1)\chi_2(-1)\delta(\chi_1\chi_2).
\end{align}
\end{lemma}
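The Gauss--Jacobi relation \eqref{gauss-jacobi} (with Greene's convention $J(\chi_1,\chi_2)=\sum_y\chi_1(y)\chi_2(1-y)$ and $\chi(0)=0$) can likewise be verified numerically over $\mathbb{C}$ for all pairs of characters of a small field; $p=7$ and primitive root $3$ are demo assumptions.

```python
import cmath

# Demo assumptions: p = 7, primitive root r = 3, convention chi(0) = 0.
p, r = 7, 3
n = p - 1
zeta = cmath.exp(2 * cmath.pi * 1j / p)
dlog = {pow(r, k, p): k for k in range(n)}

def chi(a, x):
    x %= p
    return 0 if x == 0 else cmath.exp(2 * cmath.pi * 1j * a * dlog[x] / n)

def gauss(a):
    return sum(chi(a, x) * zeta ** x for x in range(1, p))

def jacobi(a, b):
    # J(chi_a, chi_b) = sum over y in F_p of chi_a(y) chi_b(1 - y)
    return sum(chi(a, y) * chi(b, 1 - y) for y in range(p))

def delta(a):
    return 1 if a % n == 0 else 0

# J(chi1, chi2) = g(chi1) g(chi2) / g(chi1 chi2) + (p-1) chi2(-1) delta(chi1 chi2)
errs = [abs(jacobi(a, b) - (gauss(a) * gauss(b) / gauss(a + b)
                            + (p - 1) * chi(b, -1) * delta(a + b)))
        for a in range(n) for b in range(n)]
```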
Let $\chi,\psi$ be multiplicative characters of $\mathbb{F}_p.$ The following special values of binomial coefficients are very useful in proving our main results; for more details we refer to \cite[eq. 2.12, eq. 2.7]{greene}.
\begin{align}\label{rel-1}
{\chi\choose\varepsilon}={\chi\choose\chi}=-\frac{1}{p}+\frac{p-1}{p}\delta(\chi).
\end{align}
\begin{align}\label{rel-2}
{\chi\choose\psi}={\psi\overline{\chi}\choose\psi}\psi(-1).
\end{align}
\subsection{$p$-adic Preliminaries:}
We recall the $p$-adic gamma function; for further details see \cite{kob}.
For a positive integer $n$,
the $p$-adic gamma function $\Gamma_p(n)$ is defined as
\begin{align}
\Gamma_p(n):=(-1)^n\prod\limits_{0<j<n,p\nmid j}j\notag
\end{align}
and one can extend it to all $x\in\mathbb{Z}_p$ by setting $\Gamma_p(0):=1$ and
\begin{align}
\Gamma_p(x):=\lim_{x_n\rightarrow x}\Gamma_p(x_n)\notag
\end{align}
for $x\neq0$, where $x_n$ runs through any sequence of positive integers $p$-adically approaching $x$.
Two important product formulas for the $p$-adic gamma function from \cite{gross} are as follows.
If $x\in\mathbb{Z}_p$ then
\begin{align}\label{prod-3}
\Gamma_p(x)\Gamma_p(1-x)=(-1)^{a_0(x)},
\end{align}
where $a_0(x)\equiv x\pmod{p}$ such that $a_0(x)\in\{1,2,\ldots,p\}.$
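The reflection formula \eqref{prod-3} can be checked at integer arguments by computing $\Gamma_p$ modulo $p^N$. For $1-x\leq0$ we use the (standard) continuity of $\Gamma_p$: if $m\equiv n\pmod{p^N}$ then $\Gamma_p(m)\equiv\Gamma_p(n)\pmod{p^N}$, so $\Gamma_p(1-x)$ may be replaced by $\Gamma_p(1-x+p^N)$. The values $p=7$, $N=4$ are demo assumptions.

```python
# Demo assumptions: p = 7, N = 4; all arithmetic is done modulo M = p^N.
p, N = 7, 4
M = p ** N

def gamma_p(m):
    # Gamma_p(m) = (-1)^m * prod_{0<j<m, p !| j} j, reduced mod p^N,
    # for a positive integer m
    out = 1
    for t in range(1, m):
        if t % p:
            out = out * t % M
    return (-1) ** m * out % M

def reflection_holds(x):
    # Gamma_p(x) Gamma_p(1-x) = (-1)^{a0(x)}, with a0(x) in {1,...,p};
    # Gamma_p(1-x) is computed as Gamma_p(1-x+p^N) by continuity
    a0 = ((x - 1) % p) + 1
    lhs = gamma_p(x) * gamma_p(1 - x + M) % M
    return lhs == (-1) ** a0 % M

ok = all(reflection_holds(x) for x in range(1, 20))
```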
If $m\in\mathbb{Z}^+,$
$p\nmid m$ and $x=\frac{r}{p-1}$ with $0\leq r\leq p-1$ then
\begin{align}\label{prod-1}
\prod_{h=0}^{m-1}\Gamma_p\left(\frac{x+h}{m}\right)=\omega(m^{(1-x)(1-p)})~\Gamma_p(x)\prod_{h=1}^{m-1}\Gamma_p\left(\frac{h}{m}\right).
\end{align}
Another interesting product formula for the $p$-adic gamma function, given in \cite{mccarthy-pacific} as a consequence of \eqref{prod-1}, is described as follows. Let $t\in\mathbb{Z}^{+}$ and $p\nmid t$. Then for $0\leq j\leq p-2$ we have
\begin{align}\label{prod-2}
\omega(t^{-tj})\Gamma_p\left(\left\langle\frac{-tj}{p-1}\right\rangle\right)\prod_{h=1}^{t-1}\Gamma_p\left(\frac{h}{t}\right)
=\prod_{h=1}^{t}\Gamma_p\left(\left\langle\frac{h}{t}-\frac{j}{p-1}\right\rangle\right).
\end{align}
Let $\pi \in \mathbb{C}_p$ be the fixed root of the polynomial $x^{p-1} + p$, which satisfies the congruence condition
$\pi \equiv \zeta_p-1 \pmod{(\zeta_p-1)^2}$. The Gross-Koblitz formula relates the Gauss sum and $p$-adic gamma function as follows.
\begin{theorem}\cite[Gross-Koblitz formula]{gross}\label{gross-koblitz} For $j\in \mathbb{Z}$,
\begin{align}
g(\overline{\omega}^j)=-\pi^{(p-1)\langle\frac{j}{p-1} \rangle}\Gamma_p\left(\left\langle \frac{j}{p-1} \right\rangle\right).\notag
\end{align}
\end{theorem}
The next two lemmas are helpful in the proofs of our main results; both are direct applications of the Gross-Koblitz formula.
\begin{lemma}
For $1\leq j\leq p-2$ we have
\begin{align}\label{lemma-1}
\Gamma_p\left(\left\langle1-\frac{j}{p-1}\right\rangle\right)\Gamma_p\left(\left\langle\frac{j}{p-1}\right\rangle\right)
=-\omega^j(-1).
\end{align}
\end{lemma}
\begin{proof}
Applying the Gross-Koblitz formula (Theorem \ref{gross-koblitz}) to the left-hand side of \eqref{lemma-1} and then using \eqref{inverse}, it is straightforward to verify the lemma.
\end{proof}
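Identity \eqref{lemma-1} can also be tested numerically modulo $p^N$. For $1\leq j\leq p-2$ one has $\langle\frac{j}{p-1}\rangle=\frac{j}{p-1}$ and $\langle1-\frac{j}{p-1}\rangle=\frac{p-1-j}{p-1}$, and $\omega^j(-1)=(-1)^j$; a fraction $\frac{r}{p-1}$ is a $p$-adic integer, so by continuity $\Gamma_p$ at $\frac{r}{p-1}$ agrees modulo $p^N$ with $\Gamma_p$ at any positive integer congruent to it modulo $p^N$. The values $p=7$, $N=4$ are demo assumptions.

```python
# Demo assumptions: p = 7, N = 4; all arithmetic is done modulo M = p^N.
p, N = 7, 4
M = p ** N

def gamma_p(m):
    # Gamma_p(m) mod p^N for a positive integer m
    out = 1
    for t in range(1, m):
        if t % p:
            out = out * t % M
    return (-1) ** m * out % M

def gamma_p_frac(r):
    # Gamma_p(r/(p-1)) mod p^N for 0 < r < p-1, via a congruent integer
    m = r * pow(p - 1, -1, M) % M
    return gamma_p(m)

# Gamma_p(<1 - j/(p-1)>) * Gamma_p(<j/(p-1)>) = -omega^j(-1) = -(-1)^j
lemma_ok = all(gamma_p_frac(p - 1 - j) * gamma_p_frac(j) % M == (-(-1) ** j) % M
               for j in range(1, p - 1))
```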
\begin{lemma}
For $1\leq j\leq p-2$ we have
\begin{align}\label{lemma-2}
&\frac{(-p)^{-\lfloor\frac{1}{2}+\frac{j}{p-1}\rfloor}}{\Gamma_p(\frac{1}{2})}~
\Gamma_p\left(\left\langle\frac{1}{2}+\frac{j}{p-1}\right\rangle\right)
\Gamma_p\left(\left\langle1-\frac{j}{p-1}\right\rangle\right)\notag\\
&=\frac{1}{p}\sum_{t\in\mathbb{F}_p^\times}
\overline{\omega}^j(-t)\varphi(t(t-1)).
\end{align}
\end{lemma}
\begin{proof}
Let $U=\frac{(-p)^{-\lfloor\frac{1}{2}+\frac{j}{p-1}\rfloor}}{\Gamma_p(\frac{1}{2})}~
\Gamma_p\left(\left\langle\frac{1}{2}+\frac{j}{p-1}\right\rangle\right)
\Gamma_p\left(\left\langle1-\frac{j}{p-1}\right\rangle\right).$ Using the Gross-Koblitz formula (Theorem \ref{gross-koblitz}), \eqref{inverse}, and \eqref{gauss-jacobi} we obtain
\begin{align*}
U&=\frac{\varphi\omega^j(-1)}{p}J(\varphi\overline{\omega}^j,\varphi)
=\frac{1}{p}\sum_{t\in\mathbb{F}_p^\times}\overline{\omega}^j(-t)\varphi(t(t-1)).
\end{align*}
This completes the proof of the lemma.
\end{proof}
\section{Proof of the theorems}
We begin with a proposition which explicitly determines the value of a character sum. We use this proposition to prove Theorem \ref{SV-1} and Theorem \ref{sum-1}.
\begin{proposition}\label{prop-1}
For $x\in\mathbb{F}_p^{\times}$ we have
\begin{align*}
&\sum_{\chi\in\widehat{\mathbb{F}_p^\times}}g(\varphi\chi^2)g(\varphi\overline{\chi})g(\overline{\chi})
\chi\left(\frac{x}{4}\right)\\
&=\left\{
\begin{array}{ll}
0 , & \hbox{if $1-x\neq\square$;} \\
p(p-1)\varphi(-2), & \hbox{if $x=1$;}\\
p(p-1)\varphi(-2)(\varphi(1+a)+\varphi(1-a)), & \hbox{if $x\neq1$ and $1-x=a^2$.}
\end{array}
\right.
\end{align*}
\end{proposition}
\begin{proof}
Let $A=\displaystyle\sum_{\chi\in\widehat{\mathbb{F}_p^\times}}g(\varphi\chi^2)g(\varphi\overline{\chi})g(\overline{\chi})
\chi\left(\frac{x}{4}\right).$ Multiplying and dividing by $g(\varphi\chi)$ we can write
\begin{align}\label{eqn-1}
A=\sum_{\chi\in\widehat{\mathbb{F}_p^\times}}\frac{g(\varphi\chi^2)g(\overline{\chi})}{g(\varphi\chi)}
~g(\varphi\chi)g(\varphi\overline{\chi})~\chi\left(\frac{x}{4}\right).
\end{align}
Applying \eqref{gauss-jacobi} we have
\begin{align}\label{eqn-2}
\frac{g(\varphi\chi^2)g(\overline{\chi})}{g(\varphi\chi)}=J(\varphi\chi^2,\overline{\chi})-(p-1)\chi(-1)\delta(\varphi\chi).
\end{align}
Also, applying \eqref{inverse} we have
\begin{align}\label{eqn-3}
g(\varphi\chi)g(\varphi\overline{\chi})=p\varphi\chi(-1)-(p-1)\delta(\varphi\chi).
\end{align}
Substituting \eqref{eqn-2} and \eqref{eqn-3} into \eqref{eqn-1} we obtain
\begin{align}\label{eqn-4}
A=A_1+A_2+A_3+A_4,
\end{align}
where \begin{align}\label{eqn-5}
A_1=p\varphi(-1)\sum_{\chi\in\widehat{\mathbb{F}_p^\times}}J(\varphi\chi^2,\overline{\chi})\chi\left(\frac{-x}{4}\right),
\end{align}
\begin{align}\label{eqn-6}
A_2&=-(p-1)\sum_{\chi\in\widehat{\mathbb{F}_p^\times}}\delta(\varphi\chi)J(\varphi\chi^2,\overline{\chi})
~\chi\left(\frac{x}{4}\right)\notag\\
&=-(p-1)\varphi(x)J(\varphi,\varphi),
\end{align}
\begin{align}\label{eqn-7}
A_3&=-p(p-1)\varphi(-1)\sum_{\chi\in\widehat{\mathbb{F}_p^\times}}\delta(\varphi\chi)~\chi\left(\frac{x}{4}\right)\notag\\
&=-p(p-1)\varphi(-x),
\end{align}
and
\begin{align}\label{eqn-8}
A_4&=(p-1)^2\sum_{\chi\in\widehat{\mathbb{F}_p^\times}}\delta(\varphi\chi)~\chi\left(\frac{-x}{4}\right)\notag\\
&=(p-1)^2\varphi(-x).
\end{align}
Using \eqref{binomial}, and \eqref{rel-1} in \eqref{eqn-6} we have
\begin{align}\label{eqn-9}
A_2&=-p(p-1)\varphi(-x){\varphi\choose\varphi}\notag\\
&=(p-1)\varphi(-x).
\end{align}
Adding \eqref{eqn-7}, \eqref{eqn-8}, and \eqref{eqn-9} we obtain
\begin{align}\label{eqn-10}
A_2+A_3+A_4=0.
\end{align}
Substituting \eqref{eqn-10} into \eqref{eqn-4} we have
\begin{align}
A=A_1=p\varphi(-1)\sum_{\chi\in\widehat{\mathbb{F}_p^\times}}J(\varphi\chi^2,\overline{\chi})~\chi\left(\frac{-x}{4}\right).\notag
\end{align}
Now, \eqref{binomial} gives
\begin{align}
A=p^2\varphi(-1)\sum_{\chi\in\widehat{\mathbb{F}_p^\times}}{\varphi\chi^2\choose\chi}~
\chi\left(\frac{x}{4}\right).\notag
\end{align}
If we use \eqref{rel-2} then we have ${\varphi\chi^2\choose\chi}=\chi(-1){\varphi\overline{\chi}\choose\chi}.$ This yields
\begin{align}\label{eqn-11}
A&=p^2\varphi(-1)\sum_{\chi\in\widehat{\mathbb{F}_p^\times}}{\varphi\overline{\chi}\choose\chi}~\chi\left(\frac{-x}{4}\right)\notag\\
&=p\varphi(-1)\sum_{\substack{\chi\in\widehat{\mathbb{F}_p^\times}\\y\in\mathbb{F}_p}}\varphi(y)\overline{\chi}(y)
\overline{\chi}(1-y)\chi\left(\frac{x}{4}\right).
\end{align}
Replacing $\chi$ by $\overline{\chi}$ in \eqref{eqn-11} we obtain
\begin{align}\label{eqn-13}
A=p\varphi(-1)\sum_{y\in\mathbb{F}_p}\varphi(y)\sum_{\chi\in\widehat{\mathbb{F}_p^\times}}
\chi\left(\frac{4y(1-y)}{x}\right).
\end{align}
Using the orthogonality relation \eqref{orthogonal-1}, we see that the second sum in \eqref{eqn-13} is nonzero if and only if the equation $4y^2-4y+x=0$ admits a solution. We know that $4y^2-4y+x=0$ is solvable if and only if $1-x$ is a square in $\mathbb{F}_p.$ Let $1-x=a^2$ for some $a\in\mathbb{F}_p.$ Then $\frac{1}{2}(1\pm a)$ are the solutions of
$4y^2-4y+x=0.$ Hence, we obtain
\begin{align*}
A=\left\{
\begin{array}{ll}
p(p-1)\varphi(-2), & \hbox{if $x=1$;} \\
p(p-1)\varphi(-2)(\varphi(1+a)+\varphi(1-a)) , & \hbox{if $x\neq1$ and $1-x=a^2$;} \\
0, & \hbox{if $1-x\neq\square$.}
\end{array}
\right.
\end{align*}
This completes the proof.
\end{proof}
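As an independent numerical check (not part of the paper), the case analysis of Proposition \ref{prop-1} can be verified over $\mathbb{C}$, since the character-sum manipulations in the proof do not depend on where the character values are embedded. The prime $p=7$ and primitive root $3$ are demo assumptions.

```python
import cmath

# Demo assumptions: p = 7, primitive root r = 3, characters realized over C
# with the convention chi(0) = 0.
p, r = 7, 3
n = p - 1
zeta = cmath.exp(2 * cmath.pi * 1j / p)
dlog = {pow(r, k, p): k for k in range(n)}

def chi(a, x):
    x %= p
    return 0 if x == 0 else cmath.exp(2 * cmath.pi * 1j * a * dlog[x] / n)

def gauss(a):
    return sum(chi(a, x) * zeta ** x for x in range(1, p))

phi = n // 2                       # index of the quadratic character varphi
inv4 = pow(4, -1, p)               # 4^{-1} mod p

def A(x):
    # sum over chi of g(phi chi^2) g(phi chi^{-1}) g(chi^{-1}) chi(x/4)
    return sum(gauss(phi + 2 * a) * gauss(phi - a) * gauss(-a)
               * chi(a, x * inv4) for a in range(n))

def predicted(x):
    # case values given by the proposition
    squares = {y * y % p for y in range(1, p)}
    c = p * (p - 1) * chi(phi, -2).real          # p(p-1) varphi(-2)
    if x % p == 1:
        return c
    if (1 - x) % p not in squares:
        return 0
    a = next(y for y in range(1, p) if y * y % p == (1 - x) % p)
    return c * (chi(phi, 1 + a).real + chi(phi, 1 - a).real)

errs = [abs(A(x) - predicted(x)) for x in range(1, p)]
```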
In the next proposition, we consider the same character sum as in Proposition \ref{prop-1} and express it as a special value of a $p$-adic hypergeometric series. We use this proposition to prove Theorem \ref{SV-1}.
\begin{proposition}\label{prop-2}
For $x\in\mathbb{F}_p^{\times}$ we have
\begin{align*}
&\sum_{\chi\in\widehat{\mathbb{F}_p^\times}}g(\varphi\chi^2)g(\varphi\overline{\chi})g(\overline{\chi})
\chi\left(\frac{x}{4}\right)=p(1-p)\varphi(2)\Gamma_p\left(\frac{1}{4}\right)\Gamma_p\left(\frac{3}{4}\right)~{_2G_2}\left[\begin{array}{cc}
\frac{1}{4},& \frac{3}{4}\vspace{1mm}\\
0,& \frac{1}{2}
\end{array}\mid\frac{1}{x}\right].
\end{align*}
\end{proposition}
\begin{proof}
Let $A=\displaystyle\sum_{\chi\in\widehat{\mathbb{F}_p^\times}}g(\varphi\chi^2)g(\varphi\overline{\chi})g(\overline{\chi})
\chi\left(\frac{x}{4}\right).$ Since $\widehat{\mathbb{F}_p^\times}=\{\omega^j:0\leq j\leq p-2\},$ replacing $\chi$ by
$\omega^j$ and applying Gross-Koblitz formula we obtain
\begin{align}\label{eqn-14}
A&=-\sum_{j=0}^{p-2}\omega^j\left(\frac{x}{4}\right)\pi^{(p-1)\ell_j}
~\Gamma_p\left(\left\langle\frac{1}{2}-\frac{2j}{p-1}\right\rangle\right)\Gamma_p\left(\left\langle\frac{1}{2}+\frac{j}{p-1}\right\rangle\right)\\
&\times\Gamma_p\left(\frac{j}{p-1}\right)\notag,
\end{align}
where $\ell_j=\langle\frac{1}{2}-\frac{2j}{p-1}\rangle+\langle\frac{1}{2}+\frac{j}{p-1}\rangle+\left(\frac{j}{p-1}\right).$
Applying \eqref{prod-1} with $x=\left\langle\frac{1}{2}-\frac{2j}{p-1}\right\rangle$ and $m=2$ we obtain
\begin{align*}
\Gamma_p\left(\left\langle\frac{1}{2}-\frac{2j}{p-1}\right\rangle\right)
=\frac{\Gamma_p\left(\frac{1}{2}\left\langle\frac{1}{2}-\frac{2j}{p-1}\right\rangle\right)
\Gamma_p\left(\frac{1}{2}\left\langle\frac{1}{2}-\frac{2j}{p-1}\right\rangle+\frac{1}{2}\right)}
{\Gamma_p(\frac{1}{2})~\omega(2^{(1-p)(1-\langle\frac{1}{2}-\frac{2j}{p-1}\rangle)})}.
\end{align*}
Taking $j$ in the intervals $[0,\lfloor\frac{p-1}{4}\rfloor], (\lfloor\frac{p-1}{4}\rfloor, \lfloor\frac{3(p-1)}{4}\rfloor]$ and
$(\lfloor\frac{3(p-1)}{4}\rfloor,p-2]$ we verify that
\begin{align*}
&\Gamma_p\left(\frac{1}{2}\left\langle\frac{1}{2}-\frac{2j}{p-1}\right\rangle\right)
\Gamma_p\left(\frac{1}{2}\left\langle\frac{1}{2}-\frac{2j}{p-1}\right\rangle+\frac{1}{2}\right)\\
&=\Gamma_p\left(\left\langle\frac{1}{4}-\frac{j}{p-1}\right\rangle\right)\Gamma_p\left(\left\langle\frac{3}{4}-\frac{j}{p-1}\right\rangle\right)
\end{align*}
and $\omega(2^{(1-p)(1-\langle\frac{1}{2}-\frac{2j}{p-1}\rangle)})
=\varphi(2)~\overline{\omega}^j(4).$ Therefore, we can write
\begin{align}\label{eqn-15}
\Gamma_p\left(\left\langle\frac{1}{2}-\frac{2j}{p-1}\right\rangle\right)
=\frac{\Gamma_p\left(\left\langle\frac{1}{4}-\frac{j}{p-1}\right\rangle\right)\Gamma_p\left(\left\langle\frac{3}{4}-\frac{j}{p-1}\right\rangle\right)}
{\Gamma_p(\frac{1}{2})~\varphi(2)~\overline{\omega}^j(4)}.
\end{align}
Substituting \eqref{eqn-15} into \eqref{eqn-14} we obtain
\begin{align}\label{eqn-16}
A&=-\frac{\varphi(2)}{\Gamma_p(\frac{1}{2})}\sum_{j=0}^{p-2}\omega^j(x)~\pi^{(p-1)\ell_j}~\Gamma_p\left(\left\langle\frac{1}{4}-\frac{j}{p-1}\right\rangle\right)\Gamma_p\left(\left\langle\frac{3}{4}-\frac{j}{p-1}\right\rangle\right)\\
&\times\Gamma_p\left(\left\langle\frac{1}{2}+\frac{j}{p-1}\right\rangle\right)
\Gamma_p\left(\frac{j}{p-1}\right)\notag.
\end{align}
Now, \begin{align}
\ell_j&=\left\langle\frac{1}{2}-\frac{2j}{p-1}\right\rangle+\left\langle\frac{1}{2}+\frac{j}{p-1}\right\rangle+\left(\frac{j}{p-1}\right)\notag\\
&=1-\left\lfloor\frac{1}{2}-\frac{2j}{p-1}\right\rfloor-\left\lfloor\frac{1}{2}+\frac{j}{p-1}\right\rfloor.\notag
\end{align}
By considering $\left\lfloor\frac{1}{2}-\frac{2j}{p-1}\right\rfloor=2k+s$ for some $k\in\mathbb{Z}$ and $s=0,1$ it is straightforward to verify that
$\left\lfloor\frac{1}{2}-\frac{2j}{p-1}\right\rfloor=\left\lfloor\frac{1}{4}-\frac{j}{p-1}\right\rfloor
+\left\lfloor\frac{3}{4}-\frac{j}{p-1}\right\rfloor.$ This gives
\begin{align}\label{eqn-17}
\ell_j=1-\left\lfloor\frac{1}{4}-\frac{j}{p-1}\right\rfloor-\left\lfloor\frac{3}{4}-\frac{j}{p-1}\right\rfloor
-\left\lfloor\frac{1}{2}+\frac{j}{p-1}\right\rfloor.
\end{align}
Substituting \eqref{eqn-17} into \eqref{eqn-16} we obtain
\begin{align*}
A=p(1-p)\varphi(2)\Gamma_p\left(\frac{1}{4}\right)\Gamma_p\left(\frac{3}{4}\right)~{_2G_2}\left[\begin{array}{cc}
\frac{1}{4},& \frac{3}{4}\vspace{1mm}\\
0,& \frac{1}{2}
\end{array}\mid\frac{1}{x}\right].
\end{align*}
This completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{SV-1}]
Comparing Proposition \ref{prop-1} and Proposition \ref{prop-2} for $x=1$ we prove \eqref{value-1}. Now, let $x\neq0,1$ and $1-x=a^2$ for some $a\in\mathbb{F}_p^\times.$ Then comparing Proposition \ref{prop-1} and Proposition \ref{prop-2}
we obtain
\begin{align}\label{equation-1}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4},& \frac{3}{4}\vspace{1mm}\\
0,& \frac{1}{2}
\end{array}\mid\frac{1}{x}\right]=\frac{-\varphi(-1)}{\Gamma_p(\frac{1}{4})\Gamma_p(\frac{3}{4})}(\varphi(1+a)+\varphi(1-a)).
\end{align}
Then replacing $x$ by $\frac{1}{t}$ in \eqref{equation-1} we obtain \eqref{value-2}.
Finally, if $1-x$ is not a square in $\mathbb{F}_p$ then again, comparing Proposition \ref{prop-1} and Proposition \ref{prop-2}
we obtain
\begin{align}\label{equation-2}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4},& \frac{3}{4}\vspace{1mm}\\
0,& \frac{1}{2}
\end{array}\mid\frac{1}{x}\right]=0.
\end{align} Replacing $x$ by $\frac{1}{t}$ in \eqref{equation-2} we derive \eqref{value-3}.
This completes the proof of the theorem.
\end{proof}
\begin{proof}[Proof of Theorem \ref{kummer-1}]
If $t\neq0,1$ and $1-\frac{1}{t}=\frac{t-1}{t}\neq\square$ then from \eqref{value-3} we have
\begin{align}\label{equation-3}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4},& \frac{3}{4}\vspace{1mm}\\
0,& \frac{1}{2}
\end{array}\mid t\right]=0.
\end{align}
Replacing $t$ by $\frac{1}{x}$ in \eqref{equation-3} we obtain that if $1-x\neq\square$ then
\begin{align}\label{equation-4}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4},& \frac{3}{4}\vspace{1mm}\\
0,& \frac{1}{2}
\end{array}\mid \frac{1}{x}\right]=0.
\end{align}
Similarly, if $t\neq0,1$ and $1-\frac{1}{1-t}=\frac{t}{t-1}\neq\square$ then \eqref{value-3} gives
\begin{align}\label{equation-5}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4},& \frac{3}{4}\vspace{1mm}\\
0,& \frac{1}{2}
\end{array}\mid1- t\right]=0.
\end{align}
Replacing $1-t$ by $\frac{1}{1-x}$ in \eqref{equation-5} we can write that if $x\neq\square$ then
\begin{align}\label{equation-6}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4},& \frac{3}{4}\vspace{1mm}\\
0,& \frac{1}{2}
\end{array}\mid\frac{1}{1-x}\right]=0.
\end{align}
Combining \eqref{equation-4} and \eqref{equation-6} we obtain \eqref{trans-1}.
Now, let $x=b^2.$ Putting $1-x=\frac{1}{t}$ we have $1-\frac{1}{t}=b^2.$ Applying \eqref{value-2} we have
\begin{align}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4},& \frac{3}{4}\vspace{1mm}\\
0,& \frac{1}{2}
\end{array}\mid t\right]=\frac{-\varphi(-1)}{\Gamma_p(\frac{1}{4})\Gamma_p(\frac{3}{4})}(\varphi(1+b)+\varphi(1-b)).\notag
\end{align}
Therefore, if $x=b^2$ then
\begin{align}\label{equation-7}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4},& \frac{3}{4}\vspace{1mm}\\
0,& \frac{1}{2}
\end{array}\mid \frac{1}{1-x}\right]=\frac{-\varphi(-1)}{\Gamma_p(\frac{1}{4})\Gamma_p(\frac{3}{4})}(\varphi(1+b)+\varphi(1-b)).
\end{align}
Combining \eqref{equation-4} and \eqref{equation-7} we deduce \eqref{trans-2}.
Let $1-x=a^2.$ Also, let $x=\frac{1}{t}$. Then $1-\frac{1}{t}=a^2.$ Using \eqref{value-2} we obtain that
\begin{align}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4},& \frac{3}{4}\vspace{1mm}\\
0,& \frac{1}{2}
\end{array}\mid t\right]=\frac{-\varphi(-1)}{\Gamma_p(\frac{1}{4})\Gamma_p(\frac{3}{4})}(\varphi(1+a)+\varphi(1-a)).\notag
\end{align}
Therefore, if $1-x=a^2$ then
\begin{align}\label{equation-8}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4},& \frac{3}{4}\vspace{1mm}\\
0,& \frac{1}{2}
\end{array}\mid \frac{1}{x}\right]=\frac{-\varphi(-1)}{\Gamma_p(\frac{1}{4})\Gamma_p(\frac{3}{4})}(\varphi(1+a)+\varphi(1-a)).
\end{align}
Combining \eqref{equation-7} and \eqref{equation-8} we derive \eqref{trans-3}.
Finally, comparing \eqref{equation-6} and \eqref{equation-8} we obtain \eqref{trans-4}. This completes the proof of the theorem.
\end{proof}
\begin{proof}[Proof of Theorem \ref{sum-1}]
Again, consider the sum $A=\displaystyle\sum_{\chi\in\widehat{\mathbb{F}_p^\times}}g(\varphi\chi^2)g(\varphi\overline{\chi})g(\overline{\chi})
\chi\left(\frac{x}{4}\right).$ Then from \eqref{eqn-16}, and \eqref{eqn-17} we have
\begin{align}\label{eqn-101}
A&=-\frac{\varphi(2)}{\Gamma_p(\frac{1}{2})}\sum_{j=0}^{p-2}\omega^j(x)~\pi^{(p-1)\ell_j}~\Gamma_p\left(\left\langle\frac{1}{4}-\frac{j}{p-1}\right\rangle\right)\Gamma_p\left(\left\langle\frac{3}{4}-\frac{j}{p-1}\right\rangle\right)\\
&\times\Gamma_p\left(\left\langle\frac{1}{2}+\frac{j}{p-1}\right\rangle\right)
\Gamma_p\left(\frac{j}{p-1}\right)\notag,
\end{align}
where $\ell_j=1-\left\lfloor\frac{1}{4}-\frac{j}{p-1}\right\rfloor-\left\lfloor\frac{3}{4}-\frac{j}{p-1}\right\rfloor
-\left\lfloor\frac{1}{2}+\frac{j}{p-1}\right\rfloor.$
Now, the $j=0$ term in \eqref{eqn-101} is equal to $p\varphi(2)\Gamma_p\left(\frac{1}{4}\right)
\Gamma_p\left(\frac{3}{4}\right).$ Therefore, we have
\begin{align}\label{eqn-102}
A&=-\frac{\varphi(2)}{\Gamma_p(\frac{1}{2})}\sum_{j=1}^{p-2}\omega^j(x)~\pi^{(p-1)\ell_j}~\Gamma_p\left(\left\langle\frac{1}{4}-\frac{j}{p-1}\right\rangle\right)\Gamma_p\left(\left\langle\frac{3}{4}-\frac{j}{p-1}\right\rangle\right)
\\&\times
\Gamma_p\left(\left\langle\frac{1}{2}+\frac{j}{p-1}\right\rangle\right)
\Gamma_p\left(\frac{j}{p-1}\right)+p\varphi(2)\Gamma_p\left(\frac{1}{4}\right)
\Gamma_p\left(\frac{3}{4}\right)\notag.
\end{align}
Using \eqref{lemma-1} we can write
\begin{align}
A&=p\varphi(2)\Gamma_p\left(\frac{1}{4}\right)
\Gamma_p\left(\frac{3}{4}\right)+\frac{\varphi(2)}{\Gamma_p(\frac{1}{2})}\sum_{j=1}^{p-2}\omega^j(-x)~\pi^{(p-1)\ell_j}~\Gamma_p\left(\left\langle\frac{1}{4}-\frac{j}{p-1}\right\rangle\right)\notag
\\&\times\Gamma_p\left(\left\langle\frac{3}{4}-\frac{j}{p-1}\right\rangle\right)
\Gamma_p\left(\left\langle\frac{1}{2}+\frac{j}{p-1}\right\rangle\right)
\Gamma_p\left(\left\langle 1-\frac{j}{p-1}\right\rangle\right)
\Gamma_p\left(\frac{j}{p-1}\right)^2\notag\\
&=-p\varphi(2)\sum_{j=1}^{p-2}\omega^j(-x)
\frac{(-p)^{(-\left\lfloor\frac{1}{2}+\frac{j}{p-1}\right\rfloor)}
\Gamma_p\left(\left\langle\frac{1}{2}+\frac{j}{p-1}\right\rangle\right)}{\Gamma_p(\frac{1}{2})}
\Gamma_p\left(\left\langle 1-\frac{j}{p-1}\right\rangle\right)
\notag\\
&\times(-p)^{-\left\lfloor\frac{1}{4}-\frac{j}{p-1}\right\rfloor-\left\lfloor\frac{3}{4}-\frac{j}{p-1}\right\rfloor}
\Gamma_p\left(\frac{j}{p-1}\right)^2\Gamma_p\left(\left\langle\frac{1}{4}-\frac{j}{p-1}\right\rangle\right)
\Gamma_p\left(\left\langle\frac{3}{4}-\frac{j}{p-1}\right\rangle\right)\notag\\
&+p\varphi(2)\Gamma_p\left(\frac{1}{4}\right)
\Gamma_p\left(\frac{3}{4}\right).\notag
\end{align}
Also, applying \eqref{lemma-2} we obtain
\begin{align}\label{eqn-103}
A&=-\varphi(2)\sum_{t\in\mathbb{F}_p^{\times}}\varphi(t(t-1))\sum_{j=1}^{p-2}\omega^j\left(\frac{x}{t}\right)
(-p)^{-\left\lfloor\frac{1}{4}-\frac{j}{p-1}\right\rfloor-\left\lfloor\frac{3}{4}-\frac{j}{p-1}\right\rfloor}\\
&\times\Gamma_p\left(\left\langle\frac{1}{4}-\frac{j}{p-1}\right\rangle\right)
\Gamma_p\left(\left\langle\frac{3}{4}-\frac{j}{p-1}\right\rangle\right)\Gamma_p\left(\frac{j}{p-1}\right)^2\notag\\
&~~+p\varphi(2)\Gamma_p\left(\frac{1}{4}\right)
\Gamma_p\left(\frac{3}{4}\right).\notag
\end{align}
The term corresponding to $j=0$ in the summation over $j$ is equal to
\begin{align*}
&-\varphi(2)\displaystyle\sum_{t\in\mathbb{F}_p^{\times}}\varphi(t(t-1))\Gamma_p\left(\frac{1}{4}\right)
\Gamma_p\left(\frac{3}{4}\right)=-\varphi(-2)\Gamma_p\left(\frac{1}{4}\right)\Gamma_p\left(\frac{3}{4}\right)J(\varphi,\varphi)\\&=-p\varphi(2)\Gamma_p\left(\frac{1}{4}\right)\Gamma_p\left(\frac{3}{4}\right){\varphi\choose\varphi}=\varphi(2)
\Gamma_p\left(\frac{1}{4}\right)\Gamma_p\left(\frac{3}{4}\right).
\end{align*}
Note that the last equality is obtained by applying \eqref{rel-1}.
Using this value in \eqref{eqn-103} we obtain
\begin{align}\label{eq-1}
A=(p-1)\varphi(2)\Gamma_p\left(\frac{1}{4}\right)\Gamma_p\left(\frac{3}{4}\right)\left(1+\sum_{t\in\mathbb{F}_p^{\times}}\varphi(t(t-1))
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\\
0,& 0
\end{array}\mid\frac{t}{x}\right]\right).
\end{align}
Now, comparing with the values of $A$ obtained in Proposition \ref{prop-1} we deduce \eqref{formula-1}, \eqref{formula-2}, and
\eqref{formula-3}. This completes the proof of the theorem.
\end{proof}
\begin{proof}[Proof of Theorem \ref{sum-3}]
Applying the transformations \cite[Lemma 3.3]{mccarthy-pacific} and \cite[Proposition 2.5]{mccarthy-ffa}
for $x,t\neq0$ we obtain
\begin{align}\label{eq-2}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\\
0,& 0
\end{array}\mid \frac{t}{x}\right]&={\chi_4^3\choose\varepsilon}^{-1}{_2F_1}\left(\begin{array}{cc}
\chi_4, & \chi_4^3\\
& \varepsilon
\end{array}\mid\frac{x}{t}\right)\notag\\&=-p\cdot{_2F_1}\left(\begin{array}{cc}
\chi_4, & \chi_4^3\\
& \varepsilon
\end{array}\mid\frac{x}{t}\right).
\end{align}
Note that the last equality is obtained by using \eqref{rel-1}.
Let $x=1.$ Then substituting \eqref{eq-2} into \eqref{formula-1} we have
\begin{align}\label{eq-3}
-p\sum_{t\in\mathbb{F}_p^{\times}}\varphi(t(t-1)){_2F_1}\left(\begin{array}{cc}
\chi_4, & \chi_4^3\\
& \varepsilon
\end{array}\mid\frac{1}{t}\right)=-1+\frac{p\varphi(-1)}{\Gamma_p(\frac{1}{4})\Gamma_p(\frac{3}{4})}.
\end{align}
Since $p\equiv1\pmod{4}$, we have $\varphi(-1)=1$, and \eqref{prod-3} gives $\Gamma_p(\frac{1}{4})\Gamma_p(\frac{3}{4})=-\chi_4(-1).$ Substituting these two values into \eqref{eq-3} and replacing $t$ by $1/t$ we derive \eqref{formula-4}. Similarly, if $x\neq1$ and $1-x$ is not a square then substituting \eqref{eq-2} into \eqref{formula-2} and replacing $t$ by $x/t$ we obtain \eqref{formula-5}. Finally, if $x\neq1$ and $1-x=a^2$ then substituting \eqref{eq-2} into \eqref{formula-3} and replacing $t$ by $x/t$ we deduce \eqref{formula-6}. This completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{paf}]
If $1-x\neq\square$ then applying Proposition \ref{prop-1} and Proposition \ref{prop-2} we have
\begin{align*}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{1}{x}
\right]={_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{x-1}{x}
\right]=0.
\end{align*}
This proves \eqref{paf-1}. Now, let $1-x=a^2$ for some $a\in\mathbb{F}_p^\times.$ Let $a^{-1}$ denote the inverse of $a$ in $\mathbb{F}_p^{\times}.$ Then again applying Proposition \ref{prop-1} and Proposition \ref{prop-2} we have
\begin{align}\label{one}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{1}{x}
\right]=-\frac{\varphi(-1)}{\Gamma_p(\frac{1}{4})\Gamma_p(\frac{3}{4})}(\varphi(1+a)+\varphi(1-a)),
\end{align}
and
\begin{align}\label{two}
{_2G_2}\left[\begin{array}{cc}
\frac{1}{4}, & \frac{3}{4}\vspace{1mm}\\
0, & \frac{1}{2}
\end{array}\mid\frac{x-1}{x}
\right]=-\frac{\varphi(-1)}{\Gamma_p(\frac{1}{4})\Gamma_p(\frac{3}{4})}(\varphi(1+a^{-1})+\varphi(1-a^{-1})).
\end{align}
Comparing \eqref{one} and \eqref{two} we prove \eqref{paf-2}. This completes the proof.
\end{proof}
\section{Concluding remarks}
\begin{remark}
Let $\mathbb{F}_q$ be a finite field with $q$ elements, where $q=p^r$.
We note that all the transformations and special values of $p$-adic hypergeometric series proved in this paper can also be extended to the $q$-version of the $p$-adic hypergeometric series ${_nG_n[\cdots\mid t]_q}$ with $t\in\mathbb{F}_q$ using the definition \cite[Definition 5.1]{mccarthy-pacific}. We omit this case here for brevity. The same comment applies to Gaussian hypergeometric functions over $\mathbb{F}_q$. We believe that this method can be used to settle many other transformation formulas for $p$-adic hypergeometric series analogous to classical hypergeometric series transformations; this will be the subject of forthcoming work.
\end{remark}
\section*{Acknowledgements} The authors acknowledge useful discussions
with the participants of the CETUP2012 workshop. This work was
supported by U.S. DOE under contract DE-FG02-12ER41808.
\section{Introduction}
Interactions between water waves and sea ice are a key mechanism for the polar regions, as they participate in shaping the extent and quality of sea ice covers. Better quantitative understanding of these interactions could lead to improved sea state and climate models and better predictions of sea ice hazards for human activities \citep{Pfirman1995129, KaiReport, Wadhams200998}. In recent years, there has been an increase in research into wave-ice interaction, motivated by both increased human activity in the polar regions and new insights into some of the feedback mechanisms that may be involved in sea ice decline. In particular, it has been shown from satellite data that reduced sea-ice extent, by increasing the fetch available for wave development, has led to more energetic sea states in the Arctic basin, which in turn increases ice breakup and melting \citep{GRL:GRL51656}.
In nature, several types of sea ice are observed with increased distance from the free water limit as one progresses into the sea ice. Going from the open water into the ice-covered region, generally one first encounters ice in the form of grease ice slicks, pancake ice and broken ice floes of progressively increasing size. This region, the Marginal Ice Zone (MIZ), substantially attenuates the energy of the high frequency waves that would otherwise separate the inner continuous ice pack \citep{WeberArticle,TwoLayersModel,GRL:GRL52708}. Therefore, obtaining detailed understanding and quantification of the dominant mechanisms affecting wave propagation in the MIZ is critical to the description of polar regions. In the following, we will focus mostly on the interaction between waves and grease ice. Grease ice is composed of frazil ice crystals, typically disks of size 1 to 4~mm in diameter and 1 to 100~$\mu$m in thickness \citep{NewyearLab1}. Grease ice accumulates and forms slicks of typical thickness 10 to 20~cm in the Arctic Ocean \citep{Smedsrud2006171, 2011AnGla5277S}, which effectively damp high frequency waves and therefore appear visually similar to an oil slick \citep{NewyearLab1}.
Studying physical processes in the MIZ is a challenging task, since it is a region where sea ice comes in several different forms, which makes it difficult to describe theoretically and to model \citep{squire2016evolution,Collins2017dispersioninice}. The strong inhomogeneity of the ice in the MIZ makes it especially difficult to derive theoretical or numerical models with a basis in simple physical principles. In addition, field measurements are made challenging by the existence of a number of external factors, such as wind input, which can introduce artefacts in the data \citep{li2017rollover}. As a consequence, small scale experiments and simple analytical models have been used to investigate the effect of grease ice on wave propagation in the laboratory. Focusing on slush ice and pancake ice, \citet{Martin1981} were the first to perform laboratory experiments, allowing for a detailed investigation of the phenomena. Important conclusions reached from laboratory experiments included the fact that the mean thickness of the grease ice increases along the direction of the propagation of the waves, caused by the packing effect of the waves on the ice. Complex grease-ice dynamics that included global circulation of the ice under the influence of the waves were observed by the authors.
To our knowledge the first theoretical model describing wave attenuation by grease ice was presented by \citet{WeberArticle}. In this model, the ice is considered to be so viscous that it undergoes a `creeping motion', corresponding to a balance between pressure gradient and viscous stress. This should not be confused with ice creep \citep{wadhams1973attenuation}. This imposes a no-slip boundary condition under the ice, and all of the wave energy dissipation occurs in the underlying water. This is similar to the solution found by \citet{lamb1932hydrodynamics} (Equation 351.8) for waves propagating under an inextensible surface film, though the result of \citet{lamb1932hydrodynamics} should be converted to a spatial attenuation rate following the Gaster relation \citep{gaster_1962}. An effective eddy viscosity much higher than the molecular viscosity of water is required in the water layer for the model to be consistent with observations and laboratory experiments. While a crude simplification of reality, this simple model provides good agreement with both laboratory and field data when an empirically fit value for the effective viscosity of the water is used \citep{NewyearLab1, RabaultSutherlandGlaciology}. The use of a high effective viscosity is usually justified by the need to describe the existence of a large range of eddies under the ice that enhance mixing \citep{DeCarolis2002399, GRL:GRL53001}.
Next, \citet{NewyearLab1} introduced another one-layer model to compare with experimental results in a small wave tank facility. By contrast with the model of \citet{WeberArticle}, their model assumes that, since most of the wave motion is usually concentrated near the free surface, attenuation can be approximated using an infinitely deep ice layer. This solution is similar to the calculation of \citet{lamb1932hydrodynamics} for waves propagating in a viscous fluid (Equation 349.21) with a no-stress boundary condition at the free surface. Here too, the Gaster relation can be used to convert between spatial and temporal attenuation rates. In the model of \citet{NewyearLab1}, the value of the viscosity to use in the ice layer is not known a priori, and is usually obtained from a fit to experimental data.
This model of \citet{NewyearLab1} was extended to a two-layer model by \citet{TwoLayersModel}, who considered the case of an ice layer of finite thickness with inviscid water underneath. This opened the way for the development of a variety of such two-layer models. \citet{DeCarolis2002399} formulated a model in which viscosity is also added to the water under the ice, while \citet{JGRC:JGRC11467} considered a model in which the water layer is inviscid but the ice layer is viscoelastic (a Voigt model). Such models are a better description of reality, since a clear separation between the grease ice and the water is observed in the experiments and these two phases have very different mechanical properties. Better agreement is also observed between those models and laboratory data than with the previously mentioned one-layer models \citep{NewyearLab2}, even though the estimation of model quality has often been based on curve fitting and visual impression rather than quantitative metrics (see \citet{RabaultSutherlandGlaciology} for a discussion of this issue). These models are usually applied indiscriminately to grease ice, pancake ice, and continuous ice, as the parametrization they rely on can be tuned to accommodate different ice rheologies.
However, two-layer models suffer from at least two issues. First, although they are a better description of reality, the parameters they rely on are obtained empirically from experimental data rather than from an analysis of the underlying properties of the ice; in this respect they differ little from the arguably simplistic single-layer models. Therefore, it is difficult to know whether the better agreement observed with experimental data is due to a better description of the physics or is a mathematical artefact of the larger number of fitting parameters available. In addition, the model of \citet{Wang201090} has several issues that were raised by \citet{JGRC:JGRC21350}. In particular, the large number of roots in the dispersion relation makes it challenging to use such models for real-world applications, as selecting the physically relevant propagation mode is more difficult than it is for simpler models.
By contrast, a recently proposed alternative to these models \citep{SutherlandDissipation} assumes that, if the ice is thick and viscous enough, a portion of the ice undergoes ``creeping motion'', and that the no-slip condition should be applied within the ice layer. If the ice is thinner or less viscous, this creeping motion does not exist and a solution similar to \citet{lamb1932hydrodynamics, NewyearLab1} can be used. This model has proven successful at describing a wide range of data, both from laboratory experiments and fieldwork studies \citep{SutherlandDissipation}. Moreover, it is mathematically straightforward to implement. Therefore, if further validated, it would be an attractive alternative for computing wave attenuation in the marginal ice zone.
Faced with the wide range of models presented in the literature, the natural reaction is to test one or several of the theories against laboratory or field experiments. However, by contrast with the variety and complexity of the models, experimental data have remained scarce and limited to quite simple measurements, such as single optical images \citep{Martin1981} and a few single-point measurements of wave elevation or pressure along one wave tank \citep{Martin1981, NewyearLab1, Wang201090, Zhao201571}. Of these publications, it should be noted that both \citet{Wang201090} and \citet{Zhao201571} investigate thin continuous ice covers rather than grease ice. The scarcity of data is probably at least partly due to the fact that measurements of waves in grease ice are challenging to perform: access to a carefully temperature-controlled wave tank is needed, and growing realistic grease ice before each experiment can be time consuming in itself. Performing measurements therefore requires specific infrastructure as well as considerable time and work beyond the data acquisition itself. In addition, owing to the constraints associated with growing ice at the surface of the wave tank, optical access can be challenging, making it difficult to use modern measurement techniques such as Particle Image Velocimetry (PIV) and Particle Tracking Velocimetry (PTV). This is probably the reason why no study of wave propagation in grease ice published to date has attempted PIV or PTV. While laboratory measurements are challenging, the situation is even worse for field data. A combination of factors, such as the ice conditions (which cannot be controlled), the difficulties of working in polar environments, and external sources of disturbance (e.g. currents, wind, ship wakes), makes it difficult to obtain more than single-point wave elevation measurements in the field. While field data have proven valuable \citep{WeberArticle, JGRC:JGRC4212, SquireOOWASI, JGRC:JGRC21649, RabaultSutherlandGlaciology}, they may not be sufficient to elucidate the mechanisms at play, which emphasizes the need for more laboratory data. In addition, it should be underlined that \citet{RabaultSutherlandGlaciology} studies grease ice specifically, while \citet{WeberArticle}, \citet{JGRC:JGRC4212} and \citet{SquireOOWASI} deal with the MIZ, which contains a significant amount of grease ice, but not exclusively.
In this article, we present experimental data on wave propagation in grease ice: both traditional wave elevation measurements using an array of ultrasonic gauges and, for the first time, direct measurements of the water dynamics using PIV. In addition, we perform our wave elevation measurements over a wider frequency range than what is usually reported in the literature. The organization of the paper is as follows. First, we present the methodology used for analyzing the data obtained in the present study and for comparing them with data obtained in previous studies. Both the processing of the wave elevation and of the PIV data are detailed there. Second, we present the results obtained from processing the data. Finally, we discuss the results with regard to real-world applicability.
\section{Laboratory measurements: methodology}
Two sources of laboratory data are used in the following: the data obtained in the present study and the data reported by \citet{NewyearLab1}, which are available from Tables 1 and 2 of their article and comprise two experiments performed in the same facility. Experimental details are reported in the corresponding article, and the data were used in one later study \citep{NewyearLab2}. \citet{NewyearLab1} used a flap-type paddle (i.e., a plate rotating around its lower edge) to generate waves in a wave tank $3.5$~m long, $1$~m wide and $1$~m deep, which was filled with a depth $H = 50$~cm of water. The wave tank was located in a cold room, which allowed the authors to create slush ice in a controlled environment. The slush ice thickness reported by the authors was $11.3$ and $14.6$~cm for experiments 1 and 2, respectively. The frequency range investigated by \citet{NewyearLab1} is relatively small, between $1.0$ and $1.6$~Hz, and the lower end of this range does not strictly fulfill the deep-water condition, which adds complexity to the situation in addition to the effect of the grease ice.
The data collected in the present study were obtained in a cold room facility located at the University Center in Svalbard (UNIS). The actuation and logging system is released as open-source material; see Appendix A for a link to the description of the system and the code. A custom-designed flat-blade paddle (i.e., the whole plate moves back and forth while staying vertical) was used to generate waves over a frequency range between $1.5$ and $2.7$~Hz, with an increment of $0.1$~Hz. This is a wider frequency range than that investigated previously by \citet{NewyearLab1}. Measurements were performed for 6 different wave-paddle displacement amplitudes at each of those frequencies, for a total of 78 measurements. The wave tank is built in transparent acrylic, with dimensions $3.5$~m long, $0.3$~m wide and $0.5$~m high, and the water depth is $25$~cm. A damping beach is located at the end of the wave tank opposite to the paddle.
During the experiments, the grease ice alone absorbs most of the wave energy. As a consequence, very little water motion remains at the beginning of the damping beach, and the water at the end of the beach is essentially still, so reflections from the end of the wave tank can be neglected. Removable insulation foam is placed under and on the sides of the wave tank in order to avoid ice formation on the tank walls during grease-ice growth, and is taken away to gain optical access when PIV measurements are performed. The slush ice is generated once, at the beginning of the series of measurements, by setting the temperature of the cold room to $-6^{\circ}$C and turning the paddle on so that waves of frequency $2$~Hz are generated with a large paddle motion amplitude. The resulting breaking waves introduce turbulence in the whole wave tank, creating the right conditions for grease-ice formation. In addition, small ice crystals generated in a slush-ice machine are added to the water to speed up the process. The grease ice layer obtained is about $4$~cm thick when the water is at rest, and the ice spreads evenly in the whole wave tank. The methodology used for generating the grease ice is similar to the one reported by \citet{NewyearLab1}. Following ice formation, the room temperature was set to $0^{\circ}$C so that no significant ice melting or freezing took place.
A total of 6 ultrasonic gauges (model U-gauge S18U, from Banner Engineering) are used to measure wave elevation. The gauges are located $50$, $66.6$, $88$, $110.5$, $133.9$, and $179.9$~cm away from the wave-maker. The distance between consecutive gauges is deliberately varied, as this helps detect possible aliasing effects when performing cross-correlation analysis of the wave elevation signal. The gauges have a resolution of $0.5$~mm, and perform measurements at a frequency of $100$~Hz. The noise level of gauges 4 and 6 is much higher than for the other gauges, possibly because of ice formation on the gauges, and therefore we only use the signals from gauges 1, 2, 3, and 5.
The data from the ultrasonic gauges are used in two ways. For each gauge, the wave amplitude is computed by integrating the Fourier spectrum of wave elevation around the peak frequency. The Fourier spectrum is computed from time series of duration 300 seconds using a $50\%$ overlap on $40$-second time windows, and the width of the integration domain around the peak frequency is $1.0$~Hz. For each run, the wave amplitudes obtained from the different gauges are then fitted to a decreasing exponential in order to extract the coefficient of wave damping, using:
\begin{equation}
\frac{\partial a}{\partial x} = - \alpha a,
\end{equation}
\noindent where $a$ is the wave amplitude in the convention that $\eta(t) = a \cos(\omega t)$ is the wave elevation at a fixed point in space, with $\omega = 2 \pi f$ the wave angular frequency in $\mathrm{rad\,s^{-1}}$, $f$ the wave frequency, $x$ the distance to the wave-maker, and $\alpha$ the spatial decay coefficient that describes wave attenuation. This processing method is similar to what was presented by \citet{Sutherland201788}.
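As an illustration, the exponential fit used to extract $\alpha$ can be sketched as follows; since $a(x) = a_0 e^{-\alpha x}$, fitting $\log a$ against the gauge positions reduces to a linear least-squares problem. The amplitudes below are synthetic, for illustration only.

```python
import numpy as np

def decay_coefficient(x, a):
    """Fit a(x) = a0 * exp(-alpha * x) to amplitudes a measured at the
    gauge positions x; fitting log(a) against x makes this a linear
    least-squares problem."""
    slope, _ = np.polyfit(x, np.log(a), 1)  # slope of log(a) is -alpha
    return -slope

# Synthetic check with a known decay rate alpha = 1.2 m^-1,
# at the positions of gauges 1, 2, 3 and 5 (m)
x = np.array([0.50, 0.666, 0.88, 1.339])
a = 0.01 * np.exp(-1.2 * x)
alpha = decay_coefficient(x, a)
```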
The spatial damping coefficient obtained experimentally, $\alpha$, is compared with the result obtained analytically in Eqn. (4.15) of \citet{WeberArticle}, the no-stress boundary condition of \citet{lamb1932hydrodynamics} and \citet{NewyearLab1}, and both Eqn. (10) and Eqn. (13) of \citet{SutherlandDissipation}. More explicitly, Eqn. (4.15) of \citet{WeberArticle} corresponds to the wave dissipation due to the presence of an inextensible surface cover \citep{lamb1932hydrodynamics} and can be written as:
\begin{equation}
\alpha_{in} = \frac{1}{2} \nu \gamma k / c_g,
\label{weber_equation}
\end{equation}
\noindent where $\nu$ is the kinematic eddy viscosity, which can be taken to be higher than the molecular viscosity of water to account for eddies and, to some extent, grease ice properties, $\gamma = \sqrt{\omega / 2 \nu}$ is the inverse thickness of the surface boundary layer, $k$ is the wavenumber, and $c_g$ is the group velocity.
The damping obtained in the case of a no-stress boundary condition \citep{lamb1932hydrodynamics,NewyearLab1} can be written as:
\begin{equation}
\alpha_{ns} = 2 \nu k^2 / c_g,
\label{damping_NM97}
\end{equation}
\noindent where $\nu$ is again a fitting parameter used to model grease ice properties.
Finally, Eqn. (10) of \citet{SutherlandDissipation} can be written as:
\begin{equation}
\alpha_{\epsilon} = \frac{1}{2} \epsilon h k^2,
\label{sutherland_damping_v1}
\end{equation}
\noindent where $0 < \epsilon < 1$ is the fractional ice thickness, and $h$ is the total ice thickness. A fractional ice thickness $\epsilon = 0.7$ was previously found to model well a wide range of experimental datasets \citep{SutherlandDissipation}. Eqn. (13) of \citet{SutherlandDissipation} can be written as:
\begin{equation}
\alpha_t = 2 h^2 k^3.
\label{sutherland_damping_v2}
\end{equation}
The difference between Eqn. (\ref{sutherland_damping_v1}) and Eqn. (\ref{sutherland_damping_v2}), which correspond to equations (10) and (13) of the model by \citet{SutherlandDissipation}, respectively, is that in the former the grease ice is considered thick enough that the top of the ice layer is still, while in the latter the ice layer is thin enough that it moves with the water surface.
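For side-by-side comparison, the four attenuation laws above can be evaluated with a short sketch such as the one below; for simplicity it assumes the deep-water relations $k = \omega^2 / g$ and $c_g = g / (2 \omega)$, which are only approximations at the lower end of the frequency range studied here.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def damping_models(f, nu, eps, h):
    """Evaluate the four one-layer attenuation laws, assuming the
    deep-water relations k = omega^2/g and c_g = g/(2*omega)."""
    omega = 2.0 * np.pi * f
    k = omega**2 / g
    c_g = g / (2.0 * omega)
    gamma = np.sqrt(omega / (2.0 * nu))
    return {
        "inextensible": 0.5 * nu * gamma * k / c_g,  # Weber / inextensible cover
        "stress_free": 2.0 * nu * k**2 / c_g,        # no-stress free surface
        "thick_ice": 0.5 * eps * h * k**2,           # Sutherland, Eqn (10)
        "thin_ice": 2.0 * h**2 * k**3,               # Sutherland, Eqn (13)
    }
```

Illustrative parameter values (e.g. $\nu = 10^{-2}$~m$^2$/s, $\epsilon = 0.7$, $h = 4$~cm) can then be compared over the measured frequency band.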
Since the wave-tank is of finite width and depth, damping is introduced by boundary layers developing on the side and bottom walls. Those sources of damping must be assessed to confirm that the main damping effect observed arises from the grease ice. The damping introduced by the laminar boundary layers created by the wave motion can be written as \citep{van1966boundary, Sutherland201788}:
\begin{equation}
\alpha_{bs} = \nu_{0} \gamma_0 k \left( \frac{1}{\sinh(2kH)} + \frac{1}{kB}\right) / c_g,
\label{DampingBL}
\end{equation}
\noindent where $\nu_{0}$ is the kinematic viscosity of water, $\gamma_0 = \sqrt{\omega / 2 \nu_{0}}$ is the inverse boundary-layer thickness, and $B$ is the width of the wave tank. The damping coefficient $\alpha_{bs}$ is found to be two orders of magnitude smaller than the damping measured experimentally, and therefore is neglected in all of the following.
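A minimal sketch of this estimate follows; the viscosity value $\nu_0 \approx 1.8 \times 10^{-6}$~m$^2$/s for fresh water near $0^{\circ}$C is an assumed value, and the open-water wavenumber is obtained by fixed-point iteration of the finite-depth dispersion relation.

```python
import numpy as np

g = 9.81  # m/s^2

def alpha_boundary_layers(f, H=0.25, B=0.30, nu0=1.8e-6):
    """Laminar boundary-layer damping on the tank bottom and side walls,
    Eqn (DampingBL). nu0 ~ 1.8e-6 m^2/s is an assumed viscosity for
    fresh water near 0 C; k solves the open-water finite-depth
    dispersion relation via fixed-point iteration."""
    omega = 2.0 * np.pi * f
    k = omega**2 / g
    for _ in range(100):  # iterate k = omega^2 / (g * tanh(k * H))
        k = omega**2 / (g * np.tanh(k * H))
    gamma0 = np.sqrt(omega / (2.0 * nu0))
    c_g = 0.5 * (omega / k) * (1.0 + 2.0 * k * H / np.sinh(2.0 * k * H))
    return nu0 * gamma0 * k * (1.0 / np.sinh(2.0 * k * H) + 1.0 / (k * B)) / c_g
```

With the tank dimensions used here, this yields values of order $10^{-2}$~m$^{-1}$ at $2$~Hz, well below the measured damping.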
The wave number is obtained from performing a cross-correlation analysis of adjacent ultrasonic gauges. For this, the phase difference between waves recorded by the gauges $m$ and $n$, $\phi_{mn}$, is obtained from the co-spectral density between adjacent sensors, $S_{mn}$, as:
\begin{equation}
\phi_{mn} = \tan^{-1} \left( \frac{\Im[S_{mn}(f)]}{\Re[S_{mn}(f)]} \right),
\label{get_wave_phase_shift}
\end{equation}
\noindent where $\Im$ and $\Re$ indicate the imaginary and real part, respectively. The co-spectral density is computed using windows of length 41~s, with a 50\% overlap. The equation describing the propagation of the wave phase can be written as:
\begin{equation}
\phi_{mn} = \bm{k} \cdot \bm{x}_{mn},
\label{get_wave_vector}
\end{equation}
\noindent where $\bm{k}$ is the wave vector and $\bm{x}_{mn}$ the distance between gauges $m$ and $n$. The wave vector corresponding to the frequency generated by the paddle can therefore be obtained from Eqn. (\ref{get_wave_vector}) since the distance between the gauges is known and the phase shift between adjacent gauges can be computed using Eqn. (\ref{get_wave_phase_shift}). This methodology is similar to what was presented by \citet{JGRC:JGRC21649} and by \citet{marchenko2017field} in the case of field data. Results are compared with both the finite depth, open water dispersion relation and the mass loading effect. The open water dispersion relation is written as:
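This procedure can be sketched as follows, using the cross-spectral density routine of SciPy; the sign of the phase depends on the gauge ordering and on library conventions, so only its magnitude is used in this sketch, and aliasing must still be checked separately against the gauge spacing.

```python
import numpy as np
from scipy.signal import csd

def wavenumber_from_pair(eta_m, eta_n, dx, f_wave, fs=100.0):
    """Wavenumber at f_wave from the phase of the cross-spectral density
    between two gauges a distance dx apart (41 s windows with 50%
    overlap, as quoted in the text)."""
    f, S_mn = csd(eta_m, eta_n, fs=fs, nperseg=int(41 * fs))
    i = np.argmin(np.abs(f - f_wave))       # bin closest to the wave frequency
    return abs(np.angle(S_mn[i])) / dx      # |phi_mn| / |x_mn|

# Synthetic check: a 2 Hz wave with wavenumber 4 rad/m over dx = 0.2 m
t = np.arange(0.0, 300.0, 1.0 / 100.0)
dx, k_true = 0.2, 4.0
eta_m = np.cos(2.0 * np.pi * 2.0 * t)
eta_n = np.cos(2.0 * np.pi * 2.0 * t - k_true * dx)
```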
\begin{equation}
\omega^2 = g k \tanh \left( kH \right),
\label{free_water_dispersion_relation}
\end{equation}
\noindent with $H$ the water depth. The dispersion relation including mass loading is taken from \citet{NewyearLab1}:
\begin{equation}
\omega^2 = \frac{gk \rho_{water} \tanh \left( kH \right)}{\rho_{water} + c \rho_{ice} h k \tanh \left( kH \right)}, \label{mass_loading_dispersion_relation}
\end{equation}
\noindent with $c \approx 0.5$ the volumetric fraction of ice.
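The mass-loading dispersion relation is implicit in $k$ and can be solved numerically; a minimal sketch follows, where the density ratio $\rho_{ice} / \rho_{water} \approx 0.9$ is an assumed value. Note that for $\omega^2 > g \rho_{water} / (c \rho_{ice} h)$ the relation admits no real root, consistent with its breakdown at high wavenumbers.

```python
import numpy as np
from scipy.optimize import brentq

g = 9.81  # m/s^2

def k_mass_loading(f, ch, H=0.25, r=0.9):
    """Solve the mass-loading dispersion relation for k at frequency f,
    with ch = c * h the effective ice thickness and r an assumed
    ice/water density ratio (~0.9)."""
    omega = 2.0 * np.pi * f

    def residual(k):
        t = np.tanh(k * H)
        # omega^2 * (1 + r * ch * k * tanh(kH)) - g * k * tanh(kH) = 0
        return omega**2 * (1.0 + r * ch * k * t) - g * k * t

    return brentq(residual, 1e-6, 1e3)
```

For small effective ice thickness the root lies slightly above the open-water wavenumber, as expected from the mass-loading curves.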
When analyzing laboratory data, we consider that dissipation is produced by the viscous effects introduced in water and / or the ice, as calculated by \citet{WeberArticle}, \citet{lamb1932hydrodynamics}, or \citet{SutherlandDissipation}, while modification of the wave number is mostly due to the mass loading of the grease ice layer, and that as a first approximation both effects can be accounted for separately.
When performing PIV, a linear array of white LEDs is used for illumination. Spherical Polyamid Seeding Particles (PSP) of diameter $50$~$\mu$m are used for seeding. A single Falcon2 4M camera is used for taking pictures at a frame rate of $75$~Hz. This PIV setup is similar to that used previously by authors of this paper \citep{rabault2016ptv}. The measurement plane is located in the middle of the wave tank, along the main direction of the tank and of wave propagation, and a mirror inclined at $45^{\circ}$ is used for finely adjusting the position of the light sheet. Filtering is used to detect both the free water surface and the area that corresponds to grease ice, as shown in Fig. \ref{filtering}. Consecutive frames are used for performing PIV using the Digiflow software \citep{Digiflow}.
\begin{figure}
\begin{center}
\includegraphics[width=.45\textwidth]{Figures/Picture700}
\includegraphics[width=.45\textwidth]{Figures/PictureMasked700}
\caption{\label{filtering} Picture extracted from case 2, which corresponds to an initially discontinuous ice cover. Left: raw picture before filtering. Both the water surface and the region occupied by grease ice are clearly visible. Right: picture after filtering.}
\end{center}
\end{figure}
The velocity fields obtained from PIV are processed using the Proper Orthogonal Decomposition (POD). POD is based on the singular value decomposition (or, equivalently, an eigenvalue analysis of the positive semi-definite matrix $X^{T}X$, where $X$ is the snapshot matrix, see below), and is known under different names depending on the field of study, such as `Principal Component Analysis' (in statistics) and `Empirical Orthogonal Functions' (in meteorology). In the present case, the POD is computed in Matlab from the Singular Value Decomposition (SVD) of the snapshot matrix (for detailed explanations about POD and the snapshot method, see for example \citet{berkooz1993proper, Kerschen2005}). The snapshot matrix, $X$, is constructed as:
\begin{equation}
X =
\begin{bmatrix}
u_1^1 & ... & u_n^1 \\
\vdots & ... & \vdots \\
u_1^k & ... & u_n^k \\
\end{bmatrix},
\end{equation}
\noindent where $u_i^j = u(\bm{x}_j, t_i)$, with $\bm{x}_j$ the position of the point considered, and $t_i$ the time of the snapshot. Both the $X$ and $Y$ components of velocity are stored in $u$, by letting the $u_i^{1..k/2}$ represent the $X$ component and the $u_i^{k/2+1..k}$ represent the $Y$ component, respectively. As a consequence, each column of $X$ contains the 2D, two component velocity field at a given time reshaped into a 1D vector. The SVD decomposition is then computed as:
\begin{equation}
X = U S V^*,
\end{equation}
\noindent where $U$ and $V$ are unitary matrices. The diagonal coefficients of $S^2$ give the energy of each mode, while $U$ and $V$ contain the POD modes and POD mode coefficients, respectively. The POD modes together with the POD mode coefficients contain the full information about the velocity field at all times. In particular, the description of the velocity field obtained with POD is interesting as it optimizes the energy content in the modes of lower index, therefore effectively extracting an ordered list of the most energetic coherent structures of the flow. The POD can thus be applied to produce a condensed overview of the main structures present in a dataset too large to be analyzed by simple visualization techniques.
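A minimal sketch of this computation (here in Python rather than Matlab) follows; each column of the snapshot matrix is one reshaped velocity field, as described above.

```python
import numpy as np

def pod(X):
    """POD of a snapshot matrix X (each column is one reshaped velocity
    field) via the thin SVD. Returns the spatial modes (columns of U),
    the modal energies (squared singular values, in decreasing order),
    and the temporal coefficients of each mode."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U, s**2, np.diag(s) @ Vt

# the snapshot matrix is recovered exactly as U @ coeffs,
# and truncating to the first few modes gives the dominant structures
```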
\section{Results}
\subsection{Ultrasonic gauges and comparison with models for damping}
The wavenumbers reported by \citet{NewyearLab1}, together with the results obtained in our experiments using the cross-spectrum analysis and Eqn. (\ref{get_wave_vector}), are presented in Fig. \ref{wavelength}. Data obtained from experiments 1 and 2 of \citet{NewyearLab1} are indicated by red dots, while our data are indicated by dots whose color depends on the pair of gauges used in the cross-correlation analysis. In the case of the data from our experiments, since wave amplitude is found to have no significant effect on the values obtained, the mean values and 95\% confidence intervals are computed from the results obtained at the different wave amplitudes. The open water dispersion relation curve is computed from Eqn. (\ref{free_water_dispersion_relation}), using the water depth corresponding to the experiments performed by the present authors. Owing to the scaling of the figure, the difference with the open water dispersion relation corresponding to the water depth reported by \citet{NewyearLab1} (not shown) is barely visible. The mass-loading dispersion relations, corresponding to different grease ice thicknesses, are computed from Eqn. (\ref{mass_loading_dispersion_relation}), also using the water depth corresponding to the experiments performed by the present authors.
Several observations can be made from Fig. \ref{wavelength}. Firstly, while the data from \citet{NewyearLab1} could indicate a limited deviation from the open water dispersion relation in the opposite direction to the mass loading effect, our experiments performed over a wider frequency range show the opposite effect. There are several possible sources of this discrepancy. A first explanation could be that the properties of the grease ice generated differ between the two sets of experiments. However, this does not seem likely, as the methodology used for generating grease ice was similar in both cases. Another explanation could be that the small deviation from the open water dispersion relation reported by \citet{NewyearLab1} arises from sensor noise, random experimental error, or another technical cause. Finally, the different grease ice thicknesses used by \citet{NewyearLab1} and the present authors could also play a role in this discrepancy. Whatever the explanation, Fig. \ref{wavelength} underlines the importance of performing laboratory measurements of wave propagation in ice on a frequency range as wide as possible, in order to observe large deviations from the open water baseline.
Secondly, the dispersion relation obtained in our experiments differs depending on the pair of gauges used for computing the wave number. This difference originates from the varying mean thickness of the grease ice layer under the influence of the wave-induced stress gradient, as was already reported by \citet{Martin1981}. In addition, changes in the concentration $c$ of the grease ice (which, unfortunately, was not measured in this experimental setup) may participate in the phenomenon, as $ch$ is the relevant parameter determining the shape of the dispersion relation predicted by the mass loading model. Between gauges 1 and 2, which are closest to the wave paddle, the grease ice layer is very thin and the dispersion relation corresponds well to the mass loading effect obtained with an effective grease-ice layer thickness of $0.5$~cm. Between gauges 2 and 3, which are further away from the wave paddle, an effective grease ice thickness $ch = 4.5$~cm gives the best agreement with the experimental data, corresponding to a true physical grease ice thickness of $9$~cm. This is in good agreement with the initial measurement of the grease ice thickness, which was around $4$~cm when the water was at rest, before any piling of the ice took place. Using signals from gauges 1 and 3 for computing the wave number leads to an intermediate result, which is a weighted average of the two values.
Thirdly, the mass-loading dispersion relation accurately describes the wave number obtained from experiments up to the point where $k = (ch)^{-1}$, which is indicated by a black square in Fig. \ref{wavelength} for each ice thickness. Above this frequency, the wave number obtained experimentally deviates from the value predicted by the mass-loading model, and reaches a plateau or even decreases slightly. This effect does not have its origin in aliasing of the phase shift between the gauges, as the corresponding aliasing is expected at higher frequencies than those occurring at the start of this deviation, and is accounted for in the processing. As a consequence, the deviation from the mass loading model is most likely due to some of the assumptions underlying this model not being verified at higher frequencies. One such assumption, which fails above $k = (ch)^{-1}$, is that the ice acts as a thin, heavy layer on top of the water column.
\begin{figure}
\begin{center}
\includegraphics{Figures/DispersionRelation}
\caption{\label{wavelength} Wavenumber reported by \citet{NewyearLab1} (noted NM97 in the legend), and obtained in the present study using Eqn. (\ref{get_wave_vector}) for three different pairs of gauges (1 and 2, 1 and 3, and 2 and 3, denoted $k_{12}$, $k_{13}$ and $k_{23}$, respectively). The black line indicates the open water dispersion relation. Dotted lines indicate the mass loading dispersion relation for different ice thicknesses. Black squares ($\blacksquare$) denote $k = (ch)^{-1}$ for each ice thickness. The mean values and 95\% confidence intervals are computed for each frequency based on the experiments performed at different wave amplitudes.}
\end{center}
\end{figure}
Results for wave damping are presented in Fig. \ref{fig_damping}. The theoretical curves for the spatial damping coefficient are indicated in different colors depending on the model used. Blue curves indicate the results obtained from the best non-linear fit with Eqn. (\ref{weber_equation}), magenta curves indicate the results obtained from the best non-linear fit with Eqn. (\ref{damping_NM97}), green curves indicate the results obtained from Eqn. (\ref{sutherland_damping_v1}), and orange curves those from Eqn. (\ref{sutherland_damping_v2}). Results are presented both for the data of \citet{NewyearLab1} and for the data obtained by the present authors. Due to the limited frequency range investigated, little difference is observed in the quality of the fit of the different models to the data from \citet{NewyearLab1}.
In contrast, large differences in the quality of the fit of each model are observed in the case of the newly collected data. The value of the damping coefficient at each frequency is found to be essentially independent of wave amplitude, and, therefore, all results presented are averaged over all amplitudes for a given frequency. This is clearly visible from the size of the error bars in Fig. \ref{fig_damping}, and from the raw data (made available in Appendix B, Table \ref{all_data_table}). We also provide an additional illustration of this fact in Appendix C, Fig. \ref{non_dependence_alpha}. It is clear that the use of Eqn. (\ref{sutherland_damping_v2}) gives the best agreement with the experimental data, while Eqn. (\ref{weber_equation}) performs the worst. The statistics characterizing the quality of fit of each model to the data collected in this paper are summarized in Table \ref{table_quality_fit}, and confirm the visual impression of Fig. \ref{fig_damping}. In particular, the coefficient of determination $R^2$ \citep{rao73} increases from $0.83$ to $0.98$ when using Eqn. (\ref{sutherland_damping_v2}) compared with Eqn. (\ref{weber_equation}). In addition, both the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) obtained with Eqn. (\ref{sutherland_damping_v2}) are smaller by a factor of about $3$ than with Eqn. (\ref{weber_equation}), and by a factor of about $1.5$ than with Eqn. (\ref{damping_NM97}). As expected, Eqn. (\ref{sutherland_damping_v1}) performs better than the model of \citet{WeberArticle}, but is less applicable to ice of such low thickness than Eqn. (\ref{sutherland_damping_v2}). The effective ice thickness corresponding to the best fit of both versions of the model of \citet{SutherlandDissipation} should be expected to reflect a weighted average of the grease ice thickness along the region where damping takes place, which is confirmed by comparing the results in Figs. \ref{wavelength} and \ref{fig_damping}.
Similarly to what was observed in the analysis of the wavenumber data, we find that wave damping should be analyzed on a frequency range as wide as possible in order to clearly identify the main trends in the wave attenuation. In particular, the transition in the damping coefficient relative to Eqn. (\ref{weber_equation}), at around $2.2$~Hz, is made clearly visible by the data collected in the present study, while it was not visible from the data collected by \citet{NewyearLab1}.
\begin{table}
\begin{center}
\begin{tabular}{lccc}
\hline
Model & $R^2$ & RMSE & MAE \\
\hline\hline
One layer inextensible surface \citep{WeberArticle} & 0.83 & 1.12 & 0.98 \\
\hline
One layer stress-free surface \citep{NewyearLab1} & 0.95 & 0.60 & 0.55 \\
\hline
Model 1, Eqn. (10) of \citet{SutherlandDissipation} & 0.89 & 0.90 & 0.80 \\
\hline
Model 2, Eqn. (13) of \citet{SutherlandDissipation} & 0.98 & 0.39 & 0.34 \\
\hline
\end{tabular}
\caption{Coefficient of determination ($R^2$), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) of the best fit between the data collected by the present authors and both the one layer models of \citet{WeberArticle}, \citet{NewyearLab1}, and both versions of the model of \citet{SutherlandDissipation}.}
\label{table_quality_fit}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\includegraphics{Figures/WaveAttenuation_fit_both}
\caption{\label{fig_damping} Spatial damping coefficient $\alpha$ as a function of wave frequency $f$. Blue curves indicate the results obtained with the one layer model of \citet{WeberArticle} from Eqn. (\ref{weber_equation}), magenta curves indicate the results obtained with the one layer model of \citet{lamb1932hydrodynamics} from Eqn. (\ref{damping_NM97}), green curves indicate the results obtained with the thick ice model of \citet{SutherlandDissipation} from Eqn. (\ref{sutherland_damping_v1}), and orange curves with the thin ice model of \citet{SutherlandDissipation} from Eqn. (\ref{sutherland_damping_v2}). The shaded region denotes 1 standard deviation in the best fit parameters. The attenuation measured at each frequency is computed as the average of all experiments performed for different wave amplitudes, and the error bars correspond to the 95\% confidence intervals. Left: results with the data reported by \citet{NewyearLab1}. Right: results with the data obtained by the authors.}
\end{center}
\end{figure}
In a recent paper, \citet{meylan2018dispersion} suggested that power laws may provide insight into the dissipation mechanisms at play for waves in ice. There are several issues with a power-law approach, in particular the fact that the frequency ranges considered often cover less than a couple of decades, so that considerable uncertainties may be introduced by experimental errors. In addition, a number of other mechanisms, such as wind input \citep{li2017rollover}, may influence the fitting of power laws on experimental data. When applying a power-law least-square fit to our data, we obtain an exponent of $6.5 \pm 0.5$ (considering a 3-$\sigma$ confidence interval), in good agreement with the prediction of Eqn. (\ref{sutherland_damping_v2}) considering the sources of uncertainty previously mentioned (as this is peripheral to the main point of the paper, the fit is presented in Appendix C, Fig. \ref{alpha_power}). This is in contrast with the results of \citet{meylan2018dispersion}, and may indicate that different regimes are being considered. In particular, the power law of \citet{meylan2018dispersion} corresponds to weakly attenuated waves propagating over long distances in the MIZ and the packed ice, while our experiments rather describe the sharp attenuation of high frequency waves by a highly viscous grease ice layer. Also, the data from \citet{meylan2018dispersion} may include non-linear effects, and involve a combination of different kinds of ice, which is not the case in our experiment.
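The power-law fit described above can be sketched as a least-squares fit in log-log space; the frequencies and attenuation values below are synthetic, chosen only to exercise the function.

```python
import numpy as np

def power_law_exponent(f, alpha):
    """Least-squares fit of alpha = C * f**n in log-log space.
    Returns the exponent n and its 1-sigma uncertainty."""
    # polyfit with cov=True also returns the covariance of the coefficients
    coeffs, cov = np.polyfit(np.log(f), np.log(alpha), 1, cov=True)
    return coeffs[0], float(np.sqrt(cov[0, 0]))
```

A 3-$\sigma$ confidence interval, as quoted in the text, is then simply three times the returned uncertainty.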
The processed data collected for this study are summarized in Table \ref{all_data_table} (see Appendix B).
\subsection{Measurements of kinematics}
Owing to the large amount of data generated when performing PIV, image processing was restricted to two cases. Both cases correspond to a wave frequency of $2.5$~Hz, and the same amplitude for the motion of the paddle. The corresponding wave steepness in the open water is $\epsilon = k a \approx 0.12$. In the first case, a continuous layer of slush ice was present from the beginning of the experiment and was piled up by the wave-induced stress, but no large slush-ice motion was present. In this case, the field of view of the camera covers the water motion under the piled grease ice. In the second case, an initially discontinuous layer of slush ice was present, and the initially separate slush-ice packs were accelerated by the waves and collided in the field of view of the camera before piling up. Therefore, the first situation is ideal for testing theoretical models, while the second situation is dynamic and may be more representative of turbulent conditions encountered in the ocean.
In both cases, the POD snapshot method presented in the Methodology section is used. This allows the information from all the PIV frames to be condensed into a reduced number of high energy modes, and therefore makes the analysis of the data easier and less sensitive to noise. The decay of the energy content of the POD modes is very sharp, with cases 1 and 2 containing around 87 and 80 percent of the total energy in the first two modes, respectively. This confirms that the POD is able to extract significant flow features.
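The snapshot POD can be implemented compactly through a singular value decomposition of the snapshot matrix; a minimal sketch follows, with a synthetic snapshot matrix. No temporal mean is subtracted here, so a steady current would appear as a mode with a nearly constant time coefficient (as for mode 3 of case 1 below).

```python
import numpy as np

def snapshot_pod(snapshots):
    """Snapshot POD of a (n_points, n_snapshots) matrix whose columns are
    vectorized PIV velocity fields. Returns the spatial modes (columns of
    U), the time coefficients and the energy fraction of each mode."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    energy = s**2 / np.sum(s**2)   # fraction of total energy per mode
    coeffs = s[:, None] * Vt       # time coefficient of mode k is coeffs[k]
    return U, coeffs, energy
```

For two orthogonal spatial patterns modulated by $\cos$ and $\sin$ in time, as for the orbital wave motion, the first two modes carry essentially all the energy.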
We first focus on the analysis of the POD modes obtained in case 1. The POD modes 1 and 2 obtained from case 1 are presented in Fig. \ref{case1_PIV_continuous_mode12}. The POD modes 3 and 4, also for case 1, are presented in Fig. \ref{case1_PIV_continuous_mode34}. The time coefficients of modes 1, 2, 3 and 4 in case 1 are presented in Fig. \ref{case1_PODcoefficients}. As visible in Fig. \ref{case1_PIV_continuous_mode12}, modes 1 and 2 are very similar. The plots showing the maps of the $X$ and $Y$ velocity components of mode 1 clearly demonstrate that these modes capture the orbital wave motion. Modes 1 and 2 have nearly identical velocity magnitude fields, while the phase of the orbital motion they feature is shifted spatially by a quarter of a wavelength. This is confirmed by Fig. \ref{case1_PODcoefficients}, in which the time shift between the time coefficients for modes 1 and 2 is a quarter of a period.
This time-shift provides strong evidence that modes 1 and 2 represent the orthogonal basis for the wave orbital motion. This corresponds well with the fact that, in case 1, $87$ percent of the total energy is contained in modes 1 and 2. The time coefficients for modes 3 and 4, visible in Fig. \ref{case1_PODcoefficients}, are quite different from those for modes 1 and 2. The time coefficient for mode 3 does present some fluctuations at the 2.5~Hz wave frequency, but has a predominantly constant offset. Comparing this constant trend with the results presented in Fig. \ref{case1_PIV_continuous_mode34}, it appears that mode 3 describes a counter-flow under the grease ice layer. This is consistent with Fig. 19 of \citet{Martin1981}, who observed a counter-flowing grease ice stream that could be expected to trigger mean water currents under the ice. This can also be understood as a consequence of the packing of the ice by the wave-induced stress, as the conservation of mass then requires a back-flow of water in the opposite direction. Mode 4 (Fig. \ref{case1_PIV_continuous_mode34}), in contrast, features no mean motion in the water domain, but a clear oscillatory motion at the interface with the grease ice. The time coefficient associated with this mode (Fig. \ref{case1_PODcoefficients}), though noisy, has a spectral peak at the wave frequency.
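Identifying the spectral peak of a POD time coefficient, as done above for mode 4, can be sketched with a discrete Fourier transform; the sampling rate and signal below are illustrative, not the experimental ones.

```python
import numpy as np

def dominant_frequency(coeff, dt):
    """Frequency of the largest spectral peak of a (possibly offset)
    POD time coefficient, sampled every dt seconds."""
    coeff = np.asarray(coeff, dtype=float)
    coeff = coeff - coeff.mean()            # discard any constant offset
    spectrum = np.abs(np.fft.rfft(coeff))
    freqs = np.fft.rfftfreq(coeff.size, dt)
    return freqs[np.argmax(spectrum)]
```

Applied to a noisy coefficient with a constant offset, as for mode 3, the mean removal prevents the zero-frequency bin from dominating.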
\begin{figure}
\begin{center}
\includegraphics[width=.49\textwidth]{Figures/case1_continuous_1Mag_TwoColumn}
\includegraphics[width=.49\textwidth]{Figures/case1_continuous_1X_TwoColumn}
\includegraphics[width=.49\textwidth]{Figures/case1_continuous_2Mag_TwoColumn}
\includegraphics[width=.49\textwidth]{Figures/case1_continuous_1Z_TwoColumn}
\caption{\label{case1_PIV_continuous_mode12} Summary of the POD modes 1 and 2, obtained in case 1. Left: flow direction (arrows, scaled by flow magnitude) and magnitude (colormap) for modes 1 (top) and 2 (bottom). Right: flow direction (arrows, scaled by flow magnitude) together with the $X$ (top) and $Y$ (bottom) velocity fields (colormap), for mode 1.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=.49\textwidth]{Figures/case1_continuous_3X_TwoColumn}
\includegraphics[width=.49\textwidth]{Figures/case1_continuous_4X_TwoColumn}
\caption{\label{case1_PIV_continuous_mode34} Summary of the POD modes 3 and 4, obtained in case 1. Left: POD mode 3. Right: POD mode 4. Arrows indicate flow direction, and are scaled by flow magnitude. Colormaps indicate the $X$ velocity component.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=.49\textwidth]{Figures/case1_Mode_1_2_continuous_TwoColumn}
\includegraphics[width=.49\textwidth]{Figures/case1_Mode_3_4_continuous_TwoColumn}
\caption{\label{case1_PODcoefficients} POD time coefficients in case 1, associated with mode 1 and 2 (left), and 3 and 4 (right).}
\end{center}
\end{figure}
POD modes are also computed in case 2. Modes 1 and 2 and the associated time coefficients are very similar to the results obtained in case 1, and are therefore not reproduced. Modes 3, 4 and 5, along with the corresponding time coefficients, are presented in Fig. \ref{case2_POD}. As visible in the POD time coefficients, a transient is observed at the beginning of the wave activity for about $7.5$ seconds, before a steady state is reached. Modes 3 and 5 are active at the beginning of the time series, and are essentially zero after $7.5$ seconds. By contrast, mode 4 becomes active at around $7$ seconds, and slowly decays afterwards.
As indicated previously, case 2 was chosen so as to investigate the transient effect induced by an initially inhomogeneous grease ice layer. This temporal evolution is visible in the higher order POD modes presented in Fig. \ref{case2_POD}. The initial position of the grease ice is visible through the irregular shape of the interface at the top of the water. Initially, two packs of grease ice are present, with free water in between. As the waves develop, the grease ice pack on the left accelerates to the right under the influence of the waves, and the free water gap gets filled with grease ice. This leads to a collision between the two packs of grease ice, creating a large vortex in the water. POD mode 3 corresponds to the fluid motion induced by the displacement of the left pack of grease ice. It is similar to the dipole-like fluid motion that would be obtained, in potential flow theory, from the displacement of a solid rectangle. POD mode 4 is the vortex generated by the collision between the two grease ice packs. POD mode 5 is related to the jets generated before the free water gap closes, due to the variations of the gap width created by the wave motion. The interpretation of these three modes is consistent with the temporal evolution of the respective POD time coefficients.
\begin{figure}
\begin{center}
\includegraphics[width=.49\textwidth]{Figures/case2_non_uniform_3Mag_TwoColumn}
\includegraphics[width=.49\textwidth]{Figures/case2_non_uniform_4Mag_TwoColumn}
\includegraphics[width=.49\textwidth]{Figures/case2_non_uniform_5Mag_TwoColumn}
\includegraphics[width=.49\textwidth]{Figures/case2_Mode_3_4_5_non_uniform_TwoColumn}
\caption{\label{case2_POD} POD modes 3, 4, 5 (first three plots, left to right, top to bottom) obtained in the case 2. Arrows indicate flow direction and are scaled by the velocity magnitude, the color map indicates velocity magnitude. Corresponding POD time coefficients (bottom right).}
\end{center}
\end{figure}
\section{Discussion of effective viscosity and scaling of experiments}
The use of an effective water viscosity much higher than the kinematic viscosity of water in order to reproduce observations of wave damping is often discussed in the literature. While most authors justify it from considerations about water turbulence and the presence of eddies \citep{DeCarolis2002399}, the origin of those eddy structures, especially under grease-pancake ice where breaking waves are quickly attenuated, is problematic. The eddy structures could arise from wave-induced turbulence, but this topic is, after many years, still the object of much debate \citep{doi:10.1175/2009JPO4202.1, GRL:GRL22071, Beya2012}. At least two other mechanisms could participate in the generation of eddy structures, influencing wave damping by ice. Ice drifting relative to the underlying water, for example under the influence of wind or waves, could create shear on large temporal and spatial scales and be at the origin of eddy structures. Collisions between pancakes, small broken ice floes or small packs of grease ice could also inject energy into the superficial water layer. We document for the first time the existence of such eddy structures in a laboratory experiment, when the waves are started from a situation where packs of grease ice are separated by open water regions. Unsurprisingly, the collision between packs of grease ice leads to the creation of a jet when water is forced out of the gap, and a strong vortex is created which decays slowly with time. Such collision mechanisms could be important in the field for injecting energy into the water under the ice, and therefore increasing the level of eddy viscosity and viscous dissipation.
However, more work is necessary to confirm this mechanism. Indeed, scaling between real sea conditions and wave tank experiments is problematic, and the generality of the experiment presented here would need to be further validated in experiments of larger scale.
More specifically, the very first imperative of laboratory experiments on waves is to be in the right water depth regime. Due to the very limited depth of most wave tanks, this puts a sharp requirement on the minimum frequency that can be used for tests in the laboratory. As a first approximation, we consider in all the discussion that follows that the deep water dispersion relation holds, so that $\omega^2 = gk$, and that deep-water waves are the most common situation in the ocean, which implies that $kH > 1$, i.e. $\omega > \omega_{min} = \sqrt{g/H}$. In addition, the steepness of the waves must be kept moderate to limit nonlinear effects, typically $\epsilon = k a \approx 0.1$, i.e. $a < 0.1 g / \omega^2$, where $k$ is the wavenumber and $a$ the wave amplitude. If the amplitude-based Reynolds number defined in \cite{GRL:GRL22071}, $Re_a = a^2 \omega / \nu_{water}$, is the right non-dimensional parameter for describing turbulence under waves and large eddy structures under the ice, then in a laboratory experiment, with a wave-tank of depth $H$:
\begin{equation}
Re_a = \frac{a^2 \omega}{\nu_{water}} < \frac{10^{-2} g^2}{\omega^3 \nu_{water}} < \frac{10^{-2} g^{1/2} H^{3/2}}{\nu_{water}}.
\end{equation}
This implies that, even in the case of a wave-tank several meters deep, the maximum Reynolds number that can be achieved in the laboratory for deep water waves of reasonable steepness is much lower than for any typical swell on deep water. More specifically, the maximum Reynolds number obtained in a wave tank scales as the water depth to the power $3/2$. As a consequence, studying phenomena related to water wave turbulence and wave-induced eddy viscosity is challenging in the laboratory, as it is impossible to investigate whether a higher Reynolds number could lead to different physics. However, while this scaling issue is certainly problematic in the study of, for example, wave turbulence, one could expect it to be less of a problem in the present case. Indeed, the dynamics presented here do not rely on the existence of a general instability in the flow, but rather on largely inviscid dynamics, with the effect of viscosity being limited to thin boundary layers that are difficult to observe, as they are predicted to be on the order of $\delta = \sqrt{2 \nu / \omega} \approx 5 \times 10^{-4}$~m, using an approximate value for the water viscosity $\nu = 1.7 \times 10^{-6}$~m$^2$/s and a frequency $f = \omega / (2 \pi) = 2.0$~Hz. For example, the eddy we observed is due to the existence of a water jet created by the reduction of the free volume between the grease ice packs, rather than to (direct) viscous effects. In addition, the dynamics analyzed presently are relevant for the grease ice part of the Marginal Ice Zone, where low frequency waves propagate mostly unaffected and high frequency waves, such as those investigated here, are the ones interacting most with the ice.
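The bound on the amplitude Reynolds number derived above can be evaluated numerically; the following sketch uses the steepness limit $ka \approx 0.1$ from the text, with illustrative depth values.

```python
import numpy as np

G = 9.81            # gravitational acceleration, m/s^2
NU_WATER = 1.7e-6   # kinematic viscosity of cold water, m^2/s

def max_reynolds(H, steepness=0.1):
    """Upper bound on Re_a = a^2*omega/nu for deep-water waves
    (omega > sqrt(g/H)) of steepness ka <= steepness, in a tank of
    depth H; the bound is attained at the lowest admissible omega."""
    omega_min = np.sqrt(G / H)
    # Re_a < steepness^2 * g^2 / (omega^3 * nu), maximized at omega_min
    return steepness**2 * G**2 / (omega_min**3 * NU_WATER)
```

Doubling the tank depth only multiplies the bound by $2^{3/2} \approx 2.8$, which illustrates the $H^{3/2}$ scaling discussed above.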
\section{Conclusions}
A set of experiments about wave propagation in grease ice is presented. For the first time, the data reported contain both traditional single point measurements obtained with ultrasonic gauges, and measurements of the water kinematics using Particle Image Velocimetry. In addition, the frequency range investigated is much larger than what was reported in previous experiments. Collecting such data is necessary to help progress towards the development of models describing propagation of waves in the Marginal Ice Zone.
In good agreement with \citet{Martin1981}, we observe that the grease-ice layer gets thicker in the direction of wave propagation, which can be understood from the effect of the gradient in the wave-induced stress. In addition, we are able to measure with PIV the recirculation flow under the grease ice, imposed by mass conservation as a result of the grease ice layer thickening along the direction of wave propagation, and also visually observed by \citet{Martin1981}. Unfortunately, the distribution of grease ice thickness during the experiments, and its variation with time and position, was not measured directly. Therefore no detailed comparison can be drawn between the grease ice thickness predictions of the mass loading and damping models and reality. This indicates the necessity of performing detailed time measurements of grease ice thickness in subsequent experiments. When optical access is granted, this could be performed by recording calibrated side images of the grease ice layer.
In contrast with the findings of \citet{NewyearLab1}, we observe an increase in the wavenumber (reduction in the wavelength), in a way similar to what is described by the mass-loading model for intermediate frequencies. This phenomenon is more visible in the higher frequency range we investigate than in the lower frequency range used by \citet{NewyearLab1}, which may explain why this observation was not reported previously. In addition, the wavenumbers observed depend upon the position along the wave-tank, as a consequence of the grease ice layer thickness gradient previously described. For the highest part of the frequency range considered, when the wavelength is no longer much larger than the grease ice layer thickness, the wavenumber reaches a plateau. This corresponds to the fact that the grease ice can no longer be considered as a thin layer on top of the water, and therefore the hypothesis used in the mass-loading model is no longer valid.
Wave attenuation is perhaps the most important parameter to predict for real-world applications. PIV and ultrasonic gauge measurements confirm the existence of an exponential wave attenuation when waves propagate in the ice. Interestingly, we observe a transition in the curve for the attenuation coefficient at around 2.2~Hz, relative to the simple model by \citet{WeberArticle}. Such a feature had not been reported as clearly in the literature before, and we can observe it as a consequence of the wide frequency range investigated. This transition is successfully captured by the model of \citet{SutherlandDissipation}, adding credibility to the physical explanation it relies on.
Finally, PIV measurements reveal the existence of an active eddy under the ice when discontinuous patches of grease ice collide with each other under the influence of waves. Although a simple and unsurprising phenomenon, this is the first time, to the authors' knowledge, that an eddy generation mechanism under sea ice has been documented in a laboratory experiment. The underlying collision mechanism may be important in real-world conditions in the case of tightly packed ice.
\section{Acknowledgements}
We want to thank laboratory engineer Olav Gundersen for his help and valuable advice when designing the wave maker. We are grateful to UNIS faculty and staff for their help and hospitality during the experimental campaign in Svalbard. The help of Jostein Kolaas, whose Matlab scripts are often an inspiration for other members of the laboratory when processing Digiflow data, is acknowledged. This study was funded by the Norwegian Research Council under the PETROMAKS2 scheme [project WOICE, Grant Number 233901, and project DOFI, Grant number 280625].
\section{A theorem of existence and uniqueness}
Let $\Omega$ be (the interior of) a triangular domain in $\mathbb{R}^2$, the cartesian $\{x,y\}$ plane, with vertices $O=(0,0)$, $A=(2a,0)$, $B=(a,a)$, where $a$ is a positive real number. Note that the triangle $OAB$ has a right angle at $B$ and $\overline{OB}=\overline{BA}$. Let $f(x,y) \in C^0(\overline{\Omega},\mathbb{R})$. We want to solve the differential problem
\begin{equation}\label{diffProblem}
- \partial^2_{xx}\phi + \partial^2_{yy}\phi = f \hspace{0.3cm} \textnormal{in} \hspace{0.1cm} \Omega, \hspace{0.3cm} \phi = 0 \hspace{0.3cm} \textnormal{on} \hspace{0.1cm} \partial\Omega
\end{equation}
\noindent for a function $\phi(x,y)$, $\phi \in C^2(\overline{\Omega},\mathbb{R})$. Note that the partial differential equation $- \partial^2_{xx}\phi + \partial^2_{yy}\phi = f$ admits a general solution of the form (see \cite{jeffrey} or \cite{sneddon}) $\phi(x,y)=g(-x+y)+h(x+y)+\phi_0(x,y)$, where $g$ and $h$ are arbitrary real functions and $\phi_0$ is a particular solution of the equation. But this general expression is not very useful for applying the boundary condition $\phi = 0$ on $\partial\Omega$. The following direct method is more interesting and instructive.\\
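The kernel structure quoted above can be verified symbolically; a minimal sympy sketch follows, where $g$ and $h$ are left as arbitrary (undefined) functions.

```python
import sympy as sp

x, y = sp.symbols('x y')
g, h = sp.Function('g'), sp.Function('h')

# phi = g(-x + y) + h(x + y) solves the homogeneous equation
# -phi_xx + phi_yy = 0 for arbitrary g and h
phi = g(-x + y) + h(x + y)
residual = -sp.diff(phi, x, 2) + sp.diff(phi, y, 2)
```

The residual simplifies to zero because each second derivative in $x$ reproduces, up to sign conventions that cancel, the corresponding second derivative in $y$.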
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=7cm]{Trianglexy.eps}
\caption{{\it The domain in the $xy$-plane}.}
\label{trianglexy}
\end{center}
\end{figure}
Consider the differential operator $- \partial^2_{xx} + \partial^2_{yy}$ written as $(\partial_x+\partial_y)(-\partial_x+\partial_y)$, and consider a linear transformation rule for cartesian coordinates $X=ax+by$, $Y=cx+dy$. If we want to have $2\partial_X = \partial_x+\partial_y$ and $2\partial_Y = -\partial_x+\partial_y$, using the chain rule it must be $a=b=d=1$ and $c=-1$, that is
\begin{equation}\label{transformationRule}
X=x+y, \hspace{0.2cm} Y=-x+y
\end{equation}
\noindent The transformation is invertible:
\begin{equation}\label{inverseTransformationRule}
2x=X-Y, \hspace{0.2cm} 2y=X+Y
\end{equation}
\noindent With the notation $\Phi(X,Y)=\phi(x(X,Y),y(X,Y))$ and analogous for $f$, the differential equation $- \partial^2_{xx}\phi + \partial^2_{yy}\phi = f$ becomes
\begin{equation}\label{diffEqnXY}
4\partial^2_{XY}\Phi(X,Y) = F(X,Y)
\end{equation}
\noindent Note that the transformation (\ref{transformationRule}) is a $45^{\circ}$-rotation combined with a $\sqrt{2}$-dilation of the plane $\{x,y\}$. Also, the boundary condition does not change: $\Phi=0$ on $\partial\Omega$ (for simplicity we denote the domain in the plane $\{X,Y\}$ by the same symbol $\Omega$ used in the plane $\{x,y\}$). For example, $\Phi(X,-X)=\phi(x,0)=0$. Therefore, the differential problem (\ref{diffProblem}) becomes
\begin{equation}\label{diffProblemXY}
4\partial^2_{XY}\Phi = F \hspace{0.3cm} \textnormal{in} \hspace{0.1cm} \Omega, \hspace{0.3cm} \Phi = 0 \hspace{0.3cm} \textnormal{on} \hspace{0.1cm} \partial\Omega
\end{equation}
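The reduction to canonical form can be checked on a concrete test function; the following sympy sketch uses an arbitrary choice of $\Phi(X,Y)$, picked only for the check.

```python
import sympy as sp

x, y, X, Y = sp.symbols('x y X Y')

# arbitrary test function Phi(X, Y), then substitute X = x + y, Y = -x + y
Phi = sp.sin(X) * sp.exp(Y) + X**3 * Y**2
phi = Phi.subs({X: x + y, Y: -x + y}, simultaneous=True)

# left-hand side: the original operator applied in the (x, y) variables
lhs = -sp.diff(phi, x, 2) + sp.diff(phi, y, 2)
# right-hand side: 4 * d^2 Phi / dX dY, expressed back in (x, y)
rhs = (4 * sp.diff(Phi, X, Y)).subs({X: x + y, Y: -x + y}, simultaneous=True)
```

By the chain rule, $\partial_{xx} = \Phi_{XX} - 2\Phi_{XY} + \Phi_{YY}$ and $\partial_{yy} = \Phi_{XX} + 2\Phi_{XY} + \Phi_{YY}$, so the difference is exactly $4\Phi_{XY}$.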
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=7cm]{TriangleXYnew.eps}
\caption{{\it The domain in the $XY$-plane}.}
\label{triangleXYnew}
\end{center}
\end{figure}
\noindent We remark that the operator $4\partial^2_{XY}$ is the \textit{canonical form} of the differential operator $- \partial^2_{xx} + \partial^2_{yy}$, which has the lines $y=x$ and $y=-x$ as \textit{characteristic curves} (\cite{jeffrey} or \cite{sneddon}).\\
Now we want to discuss the resolution of the differential problem (\ref{diffProblemXY}).
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=6cm]{domainIntegration.eps}
\caption{{\it Domain of integration}.}
\label{domainIntXY}
\end{center}
\end{figure}
\noindent Let $P=(X,Y)$ be a point in the interior of the domain $\Omega$. Then we can construct the polygon $\Sigma$ using segments parallel to the $X$ and $Y$ axes (see Fig.\ref{domainIntXY}). Note that $M=(-Y,Y)$, $R=(-Y,0)$, $Q=(2a,0)$, $S=(2a,-X)$, $N=(X,-X)$. Using the identity $2\partial^2_{XY}=\partial_X\partial_Y + \partial_Y\partial_X$, from the differential equation it follows that
\begin{equation}\label{eqnDiffIntegrated}
2\int_{\Sigma} \left[ \partial_X\partial_Y\Phi(X,Y) + \partial_Y\partial_X\Phi(X,Y) \right] dXdY = \int_{\Sigma} F(X,Y) dXdY
\end{equation}
\noindent Now apply the Green theorem (\cite{matthews} or \cite{stewart}) to the first integral:
\begin{equation}
\int_{\Sigma} \left(\partial_X\partial_Y\Phi + \partial_Y\partial_X\Phi\right) dXdY = \int_{\partial\Sigma}\left(\partial_Y\Phi \hspace{0.1cm} dY - \partial_X\Phi \hspace{0.1cm} dX\right)
\end{equation}
\noindent It is now simple to calculate the line integral along the edges of the polygon $\Sigma$ (note that the boundary must be traversed in the counter-clockwise sense):
\begin{eqnarray}\label{lineIntegral}
\int_{\partial\Sigma}\left(\partial_Y\Phi \hspace{0.1cm} dY - \partial_X\Phi \hspace{0.1cm} dX\right) = \nonumber \\
= -2\Phi(P)+2\Phi(N)-2\Phi(S)+2\Phi(Q)-2\Phi(R)+2\Phi(M)
\end{eqnarray}
\noindent So we have
\begin{eqnarray}
2\left[-2\Phi(P)+2\Phi(N)-2\Phi(S)+2\Phi(Q)-2\Phi(R)+2\Phi(M)\right] = \nonumber \\
= \int_{\Sigma} F dXdY
\end{eqnarray}
\noindent Now apply the boundary condition $\Phi_{|\partial\Omega} = 0$: it follows that
\begin{equation}\label{solution1}
\Phi(P) = \Phi(X,Y) = -\frac{1}{4}\int_{\Sigma(X,Y)} F(t,s) \hspace{0.1cm} dt \hspace{0.1cm} ds
\end{equation}
\noindent with the consequence that, if the point $P(X,Y)$ lies on the boundary of $\Omega$, that is if $P=M=N$ or $P=S$ or $P=R$, the function $F$ must satisfy the necessary condition
\begin{equation}
0 = \int_{\Sigma(X,Y)} F(t,s) \hspace{0.1cm} dt \hspace{0.1cm} ds \hspace{0.3cm} \forall (X,Y)\in\partial\Omega
\end{equation}
\noindent It is easy to see that the previous condition can be written in a more explicit fashion:
\begin{equation}\label{necConditionF}
\int_{X}^{2a}\int_{-X}^{0} F(t,s) \hspace{0.1cm} ds \hspace{0.1cm} dt = 0 \hspace{0.3cm} \forall X\in[0,2a]
\end{equation}
\noindent Therefore we have shown that a solution to the differential problem (\ref{diffProblemXY}), and hence to (\ref{diffProblem}), exists if and only if $F$ satisfies condition (\ref{necConditionF}). Moreover, formula (\ref{solution1}) is an analytical expression for a solution. Note that, denoting by $T$ the point $(X,0)$, the integral can be divided into two integrals defined on the two simple rectangles $PTRM$ and $NSQT$.\\
Now we discuss the uniqueness of the solution. Suppose we have two solutions $\Phi_1$ and $\Phi_2$ for the problem (\ref{diffProblemXY}). Then $\Phi=\Phi_1-\Phi_2$ is a function such that $\partial^2_{XY}\Phi=0$ $\forall (X,Y)\in\Omega$ and $\Phi_{|\partial\Omega}=0$. Note that we can write
\begin{equation}
\partial_Y\left[\partial_X\Phi\right]^2 = 2 \hspace{0.1cm} \partial_X\Phi \hspace{0.1cm} \partial_{XY}^2\Phi=0
\end{equation}
\noindent Applying the Green theorem to domain $\Sigma$ for the expression $\partial_Y\left[\partial_X\Phi\right]^2$, we have
\begin{eqnarray}\label{boundaryIntegrals}
0 = \int_{\partial\Sigma}\left[\partial_X\Phi\right]^2dX = \nonumber \\
= \int_{N}^{S}\left[\partial_X\Phi\right]^2dX + \int_{Q}^{R}\left[\partial_X\Phi\right]^2dX + \int_{M}^{P}\left[\partial_X\Phi\right]^2dX
\end{eqnarray}
\noindent Using integration by parts, the following identity holds:
\begin{equation}
\int\left[\partial_X\Phi\right]^2dX = \Phi \hspace{0.1cm} \partial_X\Phi - \int\left[\Phi \hspace{0.1cm} \partial_{XX}^2\Phi\right]dX
\end{equation}
\noindent Hence, since $\Phi_{|\partial\Omega}=0$, the second integral in (\ref{boundaryIntegrals}) is null, therefore
\begin{equation}
\int_{N}^{S}\left[\partial_X\Phi\right]^2dX + \int_{M}^{P}\left[\partial_X\Phi\right]^2dX = 0
\end{equation}
\noindent The two integrals are evaluated in the same direction along the integration path, so that $\partial_X\Phi=0$ along the segments $NS$ and $MP$; therefore $\Phi(P)=\Phi(M)$. But $\Phi(M)=0$, since $M\in\partial\Omega$, so for a generic point $P=(X,Y)$ we have $\Phi(P)=0$. The solutions $\Phi_1$ and $\Phi_2$ are identical.\\
We have shown the following result (remember that $X=x+y$, $Y=-x+y$):
\begin{theo}\label{theoremExistUniq}
Let $f$ be a real function of $C^0(\overline{\Omega},\mathbb{R})$ such that
\begin{equation}\label{theoCondition}
\int_{X}^{2a}\int_{-X}^{0} f\left(\frac{t-s}{2},\frac{t+s}{2}\right) \hspace{0.1cm} ds \hspace{0.1cm} dt = 0 \hspace{0.3cm} \forall X\in[0,2a]
\end{equation}
Then the differential problem
\begin{equation}
- \partial^2_{xx}\phi + \partial^2_{yy}\phi = f \hspace{0.3cm} in \hspace{0.1cm} \Omega, \hspace{0.3cm} \phi = 0 \hspace{0.3cm} on \hspace{0.1cm} \partial\Omega
\end{equation}
has one and only one solution in the space $C^2(\overline{\Omega},\mathbb{R})$. The solution is given by the formula
\begin{eqnarray}\label{formulaSolution}
\phi(x,y) = &-&\frac{1}{4}\int_{x-y}^{x+y}\int_{-x+y}^0 f\left(\frac{t-s}{2},\frac{t+s}{2}\right) \hspace{0.1cm}ds \hspace{0.1cm} dt - \nonumber \\
&-&\frac{1}{4}\int_{x+y}^{2a}\int_{-x-y}^0 f\left(\frac{t-s}{2},\frac{t+s}{2}\right) \hspace{0.1cm}ds \hspace{0.1cm} dt
\end{eqnarray}
\end{theo}
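As an illustration, the compatibility condition (\ref{theoCondition}) can be checked numerically for a sample right-hand side. The function $f(x,y)=2y-a$ used below is an example of our choosing, picked so that the condition holds; the value $a=1$ is likewise arbitrary.

```python
from scipy.integrate import dblquad

a = 1.0

def f(x, y):
    """Sample right-hand side satisfying the theorem hypothesis."""
    return 2.0 * y - a

def F(t, s):
    """F(t, s) = f((t - s)/2, (t + s)/2), the right-hand side in (X, Y)."""
    return f((t - s) / 2.0, (t + s) / 2.0)

def condition(X):
    """Integral of F over t in [X, 2a], s in [-X, 0]; must vanish for all X."""
    val, _ = dblquad(lambda s, t: F(t, s),
                     X, 2.0 * a,               # outer variable t
                     lambda t: -X, lambda t: 0.0)  # inner variable s
    return val
```

The integral evaluates to zero, up to quadrature error, for every $X$ in $[0,2a]$, as the theorem requires.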
\section{An application: cavity driven flows}
In this section we discuss an application of the previous theorem to a problem of two-dimensional cavity driven flow, that is, a plane flow confined in a cavity and induced by the stress due to a primary flow external to the cavity (see \cite{shankar}). This phenomenon has great importance in scientific research (see e.g. \cite{madaniCom}) and in technological applications. Assume that the cavity has the shape of the triangle $OAB$ of Fig.\ref{trianglexy} in the $xy$-plane. The stress due to the primary flow acts on the horizontal edge $OA$. We suppose that the fluid is newtonian and incompressible, that is, the plane stress $\mathbb{T}$ and plane strain-rate $\mathbb{D}$ tensors are linked by the formula (see \cite{gurtin})
\begin{equation}
\mathbb{T} = 2\mu \mathbb{D}
\end{equation}
\noindent where $\mu$ is the dynamic viscosity and $2\mathbb{D}_{ij}=\left(\partial_j v_i + \partial_i v_j \right)$ (see \cite{madani}), where $\bf{v}$=$(v_1,v_2)$=$(v_x,v_y)$ is the flow velocity field. Plane incompressible flows admit a stream function (\cite{madani}), that is a function $\Psi(x,y)$ such that
\begin{equation}
u = v_x = \partial_y \Psi, \hspace{0.3cm} v = v_y = - \partial_x \Psi
\end{equation}
\noindent Therefore, a plane newtonian incompressible flow is described by the partial differential equation
\begin{equation}\label{partialDiffPsi}
- \partial_{xx}^2 \Psi + \partial_{yy}^2 \Psi = \frac{1}{\mu}\mathbb{T}_{xy}
\end{equation}
In the remainder of the paper we suppose that the analytical expression of $\mathbb{T}_{xy}$ is known, and we try to find a solution of (\ref{partialDiffPsi}) for a stream function $\Psi$ such that $\Psi_{|\partial \Omega} = 0$. This boundary condition is usual for plane incompressible flows (see \cite{madani} and \cite{erturk}), but in the case of a cavity driven flow it (or the analogous condition $\Psi_{|\partial \Omega}$ = const) has an important physical meaning. In fact, if $\Psi_{|\partial \Omega} = 0$, then $\partial\Omega$ is a level curve of $\Psi$; therefore at each point of the boundary $\nabla\Psi$ is orthogonal to the tangent of the boundary itself (\cite{stewart}). But $\nabla\Psi = (-v,u)$, which is orthogonal to the flow velocity field $(u,v)$. Therefore, at each point of $\partial\Omega$, the geometrical tangent and the velocity field are parallel, that is, the flow is confined into the cavity $\Omega$.\\
\noindent Applying theorem (\ref{theoremExistUniq}), it can be stated that if $\mathbb{T}_{xy} \in C^0(\overline{\Omega},\mathbb{R})$, then there is a unique stream function $\Psi \in C^2(\overline{\Omega},\mathbb{R})$ solving the linear equation (\ref{partialDiffPsi}) with boundary condition $\Psi|_{\partial\Omega}=0$. Note that \cite{albensoeder} consider a nonlinear problem about cavity driven flow where uniqueness can fail.
We first consider the simplest analytical form for a possible stress:
\begin{equation}
\mathbb{T}_{xy} = \mu(c_1y+c_2)
\end{equation}
\noindent In this case, along the horizontal edge $OA$ ($y=0$) of the cavity the stress is constant. From theorem (\ref{theoremExistUniq}), a solution to our differential problem exists if the function $c_1y+c_2$ satisfies the condition (\ref{theoCondition}). It is easy to show that the condition is satisfied for all $X\in[0,2a]$ if and only if $2 c_2 = - a c_1$. Note that in this case the stress has expression
\begin{equation}
\mathbb{T}_{xy} = \mu c_2 \left(-\frac{2}{a}y+1\right)
\end{equation}
\noindent and for $y=\frac{a}{2}$ it changes its sign. So flow can recirculate. We make the choice $c_2=-8a$, so that $c_1=16$. The differential problem to solve is
\begin{equation}\label{diffProblemTest}
- \partial^2_{xx}\phi + \partial^2_{yy}\phi = 16y-8a \hspace{0.3cm} \textnormal{in} \hspace{0.1cm} \Omega, \hspace{0.3cm} \phi = 0 \hspace{0.3cm} \textnormal{on} \hspace{0.1cm} \partial\Omega
\end{equation}
\noindent From formula (\ref{formulaSolution}), using the transformation rule $X=x+y$, $Y=-x+y$, the solution to (\ref{diffProblemTest}) can be computed as
\begin{eqnarray}\label{formulaSolutionTest}
\Psi(x,y) = &-&2\int_{x-y}^{x+y}\int_{-x+y}^0 (t+s-a) \hspace{0.1cm}ds \hspace{0.1cm} dt - \nonumber \\
&-&2\int_{x+y}^{2a}\int_{-x-y}^0 (t+s-a) \hspace{0.1cm}ds \hspace{0.1cm} dt
\end{eqnarray}
\noindent which gives the expression
\begin{equation}
\Psi(x,y)=2y^3-2x^2y-4ay^2+4axy
\end{equation}
\noindent for the stream function of the flow. The velocity field is $(\partial_y\Psi,-\partial_x\Psi)=(-2x^2+6y^2+4ax-8ay,\,4xy-4ay)$. It is interesting to find the points where the velocity vanishes. Solving the algebraic system $(-2x^2+6y^2+4ax-8ay,\,4xy-4ay)=(0,0)$, we find, as expected, the three vertices $(0,0)$, $(2a,0)$ and $(a,a)$, and also the interior point $\left(a,\frac{a}{3}\right)$, which is the center of the recirculation gyre (see Fig.~\ref{pathLines}).
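These claims are easy to verify numerically. The following plain-Python sketch (assuming $a=1$, as in the figures) checks that $\Psi$ satisfies the differential problem, vanishes on the three edges of the triangle, and that the velocity vanishes at the stagnation points:

```python
import itertools

a = 1.0  # cavity size (a = 1 assumed, as in the figures)

def Psi(x, y):
    return 2*y**3 - 2*x**2*y - 4*a*y**2 + 4*a*x*y

def u(x, y):  # u = dPsi/dy
    return -2*x**2 + 6*y**2 + 4*a*x - 8*a*y

def v(x, y):  # v = -dPsi/dx
    return 4*x*y - 4*a*y

# PDE check: -Psi_xx + Psi_yy = 16y - 8a. Central differences are exact
# here (up to rounding) because Psi is a polynomial of low degree.
h = 1e-3
for x, y in itertools.product([0.3, 0.9, 1.4], [0.1, 0.4, 0.7]):
    pxx = (Psi(x + h, y) - 2*Psi(x, y) + Psi(x - h, y)) / h**2
    pyy = (Psi(x, y + h) - 2*Psi(x, y) + Psi(x, y - h)) / h**2
    assert abs(-pxx + pyy - (16*y - 8*a)) < 1e-6

# Boundary check: Psi = 0 on the three edges of the triangular cavity.
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert abs(Psi(2*a*t, 0.0)) < 1e-12        # bottom edge, y = 0
    assert abs(Psi(a*t, a*t)) < 1e-12          # left edge, y = x
    assert abs(Psi(a + a*t, a - a*t)) < 1e-12  # right edge, y = 2a - x

# Stagnation points: the three vertices plus the interior gyre center.
for x, y in [(0.0, 0.0), (2*a, 0.0), (a, a), (a, a/3)]:
    assert abs(u(x, y)) < 1e-12 and abs(v(x, y)) < 1e-12
```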
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=6cm]{pathLines.eps}
\caption{{\it Flow path-lines in the case $a=1$}.}
\label{pathLines}
\end{center}
\end{figure}
Now we consider a more interesting case. Let the stress be described by a sinusoidal expression of the form
\begin{equation}
\mathbb{T}_{xy} = A \mu \cos(ky)
\end{equation}
\noindent with $A$ and $k$ real numbers. Using (\ref{theoCondition}), it is easy to show that if we suppose $q=0$, then
\begin{equation}
k = m\frac{\pi}{a}, \hspace{0.3cm} m=2n+1, \hspace{0.3cm} n \in \mathbb{N}
\end{equation}
\noindent is the condition for existence and uniqueness of a flow in the triangular cavity. Consider $m=3$. Integrating (\ref{formulaSolution}), the analytic form of the stream function solving the differential problem is
\begin{equation}
\Psi=-\frac{Aa^2}{9\pi^2}\left[\cos\left(\frac{3\pi}{a}y\right)+\cos\left(\frac{3\pi}{2a}(x-y)\right)-2\cos^2\left(\frac{3\pi}{4a}(x+y)\right)\right]
\end{equation}
\noindent and Fig.~\ref{sinusoidalPathLines} shows some path-lines, with one primary central eddy and three secondary eddies. It is also interesting to draw the graph of $u=\partial_y\Psi$ for $x=a$ and $y$ ranging over $[0,a]$: there are two values of $y$, other than $y=a$, for which $u=0$. One of them coincides with the value at which $u=0$ in the previous case of linear stress (see Fig.~\ref{graphsSin}).\\
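The boundary condition can again be checked numerically. Since $\Psi$ is proportional to the bracketed combination of cosines, it suffices to verify that this factor vanishes on the three edges; a plain-Python sketch (with $a=1$):

```python
import math

a = 1.0  # cavity size (a = 1 assumed)

def F(x, y):
    """Bracketed factor of the stream function; Psi is proportional to F."""
    return (math.cos(3*math.pi/a * y)
            + math.cos(3*math.pi/(2*a) * (x - y))
            - 2*math.cos(3*math.pi/(4*a) * (x + y))**2)

# F (and hence Psi) vanishes on all three edges of the triangular cavity,
# as follows from the identity 2*cos(t)^2 = 1 + cos(2t).
for t in [0.0, 0.2, 0.5, 0.8, 1.0]:
    assert abs(F(2*a*t, 0.0)) < 1e-12        # bottom edge, y = 0
    assert abs(F(a*t, a*t)) < 1e-12          # left edge, y = x
    assert abs(F(a + a*t, a - a*t)) < 1e-12  # right edge, y = 2a - x
```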
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=6cm]{sinusoidalStressPathLines.eps}
\caption{{\it Flow path-lines in the case of the sinusoidal stress, with $a=1$, $A=5$}.}
\label{sinusoidalPathLines}
\end{center}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=6cm]{graphsSin.eps}
\caption{{\it Comparison of $u=\partial_y\Psi$ in the linear (thin line) and sinusoidal (thick line) case}.}
\label{graphsSin}
\end{center}
\end{figure}
A more realistic example is based on a stress with analytical expression of the form
\begin{equation}
\mathbb{T}_{xy}=\mu \sum_{m,n=0}^4a_{m,n} \hspace{0.1cm} x^my^n
\end{equation}
\noindent Imposing condition (\ref{theoCondition}) on the coefficients $a_{m,n}$, one obtains the possible stream function
\begin{equation}
\Psi(x,y)=(2y^3-2x^2y-4ay^2+4axy)(y-100x^2-a)\left(y+\frac{1}{4}x-\frac{5}{6}a\right)
\end{equation}
\noindent The horizontal component $u$ of the velocity field, along the $x$-axis, is a fifth-order polynomial in $x$, whose graph is shown in Fig.~\ref{ugraphRealistic}. The stress acts on the horizontal segment of the triangular domain as a variable shear of positive sign.
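A short plain-Python check (again with $a=1$) confirms both the closed form of $u(x,0)$ and its positive sign along the stressed edge; the factorisation below uses the fact that the first factor of $\Psi$ vanishes at $y=0$, so only one product-rule term survives:

```python
a = 1.0  # cavity size (a = 1 assumed, as in the figures)

def Psi(x, y):
    # Stream function for the variable-shear example.
    P = 2*y**3 - 2*x**2*y - 4*a*y**2 + 4*a*x*y
    Q = y - 100*x**2 - a
    R = y + x/4 - 5*a/6
    return P * Q * R

# Since P(x, 0) = 0, the product rule leaves a single term at y = 0:
#   u(x, 0) = P_y(x, 0) * Q(x, 0) * R(x, 0),
# which is a fifth-order polynomial in x.
def u_bottom(x):
    return (4*a*x - 2*x**2) * (-100*x**2 - a) * (x/4 - 5*a/6)

h = 1e-6
for x in [0.2, 0.5, 1.0, 1.5, 1.8]:
    fd = (Psi(x, h) - Psi(x, -h)) / (2*h)  # centered difference of Psi in y
    assert abs(fd - u_bottom(x)) < 1e-4    # matches the closed form
    assert u_bottom(x) > 0                 # positive shear along y = 0
```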
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=6cm]{ugraphRealistic.eps}
\caption{{\it Profile of $u(x,0)=\partial_y\Psi(x,0)$ in the case of a variable shear stress}.}
\label{ugraphRealistic}
\end{center}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=6cm]{realisticStressPathLines.eps}
\caption{{\it Flow path-lines in the case of a variable shear stress, with $a=1$}.}
\label{realisticPathLines}
\end{center}
\end{figure}
\noindent The resulting flow path-lines (see Fig.~\ref{realisticPathLines}) show the presence of a primary gyre and of a secondary gyre near the vertex opposite to the edge subjected to the external stress. This picture is similar to the corresponding one (fig.~2(b)) in \cite{erturk}, where the stream function is computed numerically.